Medical Imaging Toolbox™ User's Guide
R2023b
How to Contact MathWorks

Latest news: www.mathworks.com
Sales and services: www.mathworks.com/sales_and_services
User community: www.mathworks.com/matlabcentral
Technical support: www.mathworks.com/support/contact_us
Phone: 508-647-7000
The MathWorks, Inc.
1 Apple Hill Drive
Natick, MA 01760-2098

Medical Imaging Toolbox™ User's Guide
© COPYRIGHT 2022–2023 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.
Trademarks

MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.

Patents

MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for more information.

Revision History

September 2022: Online only. New for Version 1.0 (Release 2022b)
March 2023: Online only. Revised for Version 1.1 (Release 2023a)
September 2023: Online only. Revised for Version 23.2 (R2023b)
Contents

1  Getting Started
    Medical Imaging Toolbox Product Description ..... 1-2
    Read, Process, and Write 3-D Medical Images ..... 1-3
    Get Started with Medical Image Labeler ..... 1-10
        Open Medical Image Labeler App ..... 1-11
        Create or Open Labeling Session ..... 1-11
        Load Image Data ..... 1-11
        Visually Explore Data ..... 1-12
        Create Label Definitions ..... 1-12
        Label Image ..... 1-13
        Export Labeling Results ..... 1-15
        How Medical Image Labeler Manages Ground Truth Labels ..... 1-16
    Medical Image Coordinate Systems ..... 1-17
        Patient Coordinate System ..... 1-17
        Intrinsic Coordinate System ..... 1-18
    Introduction to Medical Imaging ..... 1-20
        Common Medical Imaging Modalities ..... 1-20
        Typical Workflow for Medical Image Analysis ..... 1-22

2  Import, Export, and Spatial Referencing
    Read, Process, and View Ultrasound Data ..... 2-2

3  Display and Volume Rendering
    Choose Approach for Medical Image Visualization ..... 3-2
        Display 2-D Medical Image Data ..... 3-2
        Display 3-D Medical Image Data ..... 3-6
    Visualize 3-D Medical Image Data Using Medical Image Labeler ..... 3-11
    Display Medical Image Volume in Patient Coordinate System ..... 3-21
    Display Labeled Medical Image Volume in Patient Coordinate System ..... 3-24
    Create STL Surface Model of Femur Bone for 3-D Printing ..... 3-33
    Medical Image-Based Finite Element Analysis of Spine ..... 3-41

4  Image Preprocessing and Augmentation
    Medical Image Preprocessing ..... 4-2
        Background Removal ..... 4-2
        Denoising ..... 4-3
        Resampling ..... 4-3
        Registration ..... 4-4
        Intensity Normalization ..... 4-4
        Preprocessing in Advanced Workflows ..... 4-5
    Medical Image Registration ..... 4-6
        Scenarios for Medical Image Registration ..... 4-6
        Functions for Medical Image Registration ..... 4-8
    Register Multimodal Medical Image Volumes with Spatial Referencing ..... 4-11

5  Medical Image Labeling
    Label 2-D Ultrasound Series Using Medical Image Labeler ..... 5-2
    Label 3-D Medical Image Using Medical Image Labeler ..... 5-10
    Automate Labeling in Medical Image Labeler ..... 5-21
    Collaborate on Multi-Labeler Medical Image Labeling Projects ..... 5-29
        Create Label Definitions and Assign Data to Labelers (Project Manager) ..... 5-30
        Label Data and Publish Labels for Review (Labeler) ..... 5-31
        Inspect Labeled Images (Reviewer) ..... 5-33
        Export Ground Truth Data and Send to Project Manager (Labeler) ..... 5-33
        Collect, Merge, and Create Training Data (Project Manager) ..... 5-34

6  Medical Image Segmentation
    Create Datastores for Medical Image Semantic Segmentation ..... 6-2
        Medical Image Ground Truth Data ..... 6-2
        Datastores for Semantic Segmentation ..... 6-3
    Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network ..... 6-5
    Create Training Data for 3-D Medical Image Semantic Segmentation ..... 6-9
    Segment Lungs from CT Scan Using Pretrained Neural Network ..... 6-14
    Brain MRI Segmentation Using Pretrained 3-D U-Net Network ..... 6-24
    Breast Tumor Segmentation from Ultrasound Using Deep Learning ..... 6-32
    Cardiac Left Ventricle Segmentation from Cine-MRI Images Using U-Net Network ..... 6-40
    IBSI Standard and Radiomics Function Feature Correspondences ..... 6-53
        Shape Features ..... 6-53
        Intensity Features ..... 6-54
        Texture Features ..... 6-59
    Get Started with Radiomics ..... 6-81
        Typical Workflow of Radiomics Application ..... 6-81
    Classify Breast Tumors from Ultrasound Images Using Radiomics Features ..... 6-83

7  Microscopy Segmentation Using Cellpose
    Getting Started with Cellpose ..... 7-2
        Install Support Package ..... 7-2
        Apply Pretrained Cellpose Model ..... 7-2
        Refine Pretrained Cellpose Model ..... 7-3
        Train Cellpose Model Using Transfer Learning ..... 7-4
    Choose Pretrained Cellpose Model for Cell Segmentation ..... 7-5
    Refine Cellpose Segmentation by Tuning Model Parameters ..... 7-12
    Train Custom Cellpose Model ..... 7-22
    Detect Nuclei in Large Whole Slide Images Using Cellpose ..... 7-28
1 Getting Started

• “Medical Imaging Toolbox Product Description” on page 1-2
• “Read, Process, and Write 3-D Medical Images” on page 1-3
• “Get Started with Medical Image Labeler” on page 1-10
• “Medical Image Coordinate Systems” on page 1-17
• “Introduction to Medical Imaging” on page 1-20
Medical Imaging Toolbox Product Description

Visualize, register, segment, and label 2D and 3D medical images

Medical Imaging Toolbox provides apps, functions, and workflows for designing and testing diagnostic imaging applications. You can perform 3D rendering and visualization, multimodal registration, and segmentation and labeling of radiology images. The toolbox also lets you train predefined deep learning networks (with Deep Learning Toolbox™).

You can import, preprocess, and analyze radiology images from various imaging modalities, including projected X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine (PET, SPECT). The Medical Image Labeler app lets you semi-automate 2D and 3D labeling for use in AI workflows. You can perform multimodal registration of medical images, including 2D images, 3D surfaces, and 3D volumes. The toolbox provides an integrated environment for end-to-end computer-aided diagnosis and medical image analysis.
Read, Process, and Write 3-D Medical Images

This example shows how to import and display volumetric medical image data, apply a filter to the image, and write the processed image to a new file. You can use the medicalVolume object to import image data and spatial information about a 3-D medical image in one object.

Download Image Volume Data

This example uses one chest CT volume saved as a directory of DICOM files. The volume is part of a data set containing three CT scans. The size of the entire data set is approximately 81 MB. Run this code to download the data set from the MathWorks® website and unzip the folder.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeDICOMData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)

The dataFolder folder contains the downloaded and unzipped data.

dataFolder = fullfile(filepath,"MedicalVolumeDICOMData","LungCT01");
Read Image File

The medicalVolume object imports data from the DICOM, NIfTI, and NRRD medical image file formats. DICOM volumes can be stored as a single file or as a directory containing individual files for each 2-D slice. The medicalVolume object automatically detects the file format and extracts the image data, spatial information, and modality from the file metadata. For this example, specify the data source as the download directory of the chest CT scan.

medVol = medicalVolume(dataFolder)

medVol = 
  medicalVolume with properties:

                 Voxels: [512×512×88 int16]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "mm"
            Orientation: "transverse"
           VoxelSpacing: [0.7285 0.7285 2.5000]
           NormalVector: [0 0 1]
       NumCoronalSlices: 512
      NumSagittalSlices: 512
    NumTransverseSlices: 88
           PlaneMapping: ["sagittal"    "coronal"    "transverse"]
               Modality: "CT"
          WindowCenters: [88×1 double]
           WindowWidths: [88×1 double]
The Voxels property contains the image intensity values. If the file metadata specifies the rescale intercept and slope, medicalVolume automatically rescales the voxel intensities to the specified units. In this example, the CT intensity values are rescaled to Hounsfield units.

intensities = medVol.Voxels;
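As a quick sanity check of the rescaling, you can inspect the intensity range. This is a minimal sketch; the exact values depend on the scan, but in Hounsfield units air is near -1000 HU and bone is strongly positive.

% Sketch: confirm that the rescaled CT intensities span a plausible Hounsfield range.
huRange = [min(intensities(:)) max(intensities(:))]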
The VolumeGeometry property contains a medicalref3d object that defines the spatial referencing for the image volume, including the mapping between the intrinsic and patient coordinate systems.
The intrinsic coordinate system is defined by the rows, columns, and slices of the Voxels array, with coordinates in voxel units. The patient coordinate system is defined relative to the anatomical axes of the patient, with real-world units such as millimeters.

R = medVol.VolumeGeometry

R = 
  medicalref3d with properties:

                 VolumeSize: [512 512 88]
                   Position: [88×3 double]
             VoxelDistances: {[88×3 double]  [88×3 double]  [88×3 double]}
    PatientCoordinateSystem: "LPS+"
               PixelSpacing: [88×2 double]
                   IsAffine: 1
              IsAxesAligned: 1
                    IsMixed: 0
Display Volume as Slices

Medical Imaging Toolbox™ provides several options for visualizing medical image data. For details, see “Choose Approach for Medical Image Visualization” on page 3-2. For this example, display the transverse slices of the CT volume by creating a sliceViewer object. By default, the viewer opens to display the center slice. Use the slider to navigate through the volume.

sliceViewer(medVol,Parent=figure)
title("CT Volume, Transverse Slices")
Display the Volume in 3-D

Display the CT volume as a 3-D object by using the volshow function. The volshow function uses the spatial details in medVol to set the Transformation property of the output Volume object and display the volume in the patient coordinate system. You can customize the volume display by setting properties of the Volume object. Specify a custom transparency map and colormap that highlight the rib cage. The alpha and color values are based on the CT-bone rendering style from the Medical Image Labeler app. The intensity values have been tuned for this volume using trial and error.

alpha = [0 0 0.72 1.0];
color = [0 0 0; 186 65 77; 231 208 141; 255 255 255] ./ 255;
intensity = [-3024 100 400 1499];
queryPoints = linspace(min(intensity),max(intensity),256);
alphamap = interp1(intensity,alpha,queryPoints)';
colormap = interp1(intensity,color,queryPoints);
Display the volume with the custom display settings. Specify the cinematic rendering style, which displays the volume with realistic lighting and shadows.

vol = volshow(medVol,Colormap=colormap,Alphamap=alphamap,RenderingStyle="CinematicRendering");
Pause to apply all of the cinematic rendering iterations before updating the display in Live Editor.

pause(3.5)
drawnow
Optionally, you can clean up the viewer window by using the 3-D Scissors tool to remove the patient bed. For a detailed example, see “Remove Objects from Volume Display Using 3-D Scissors”. This image shows the final volume display after removing the patient bed and rotating to view the anterior plane.
Smooth Voxel Intensity Data

Smooth the image with a 3-D Gaussian filter. Applying a Gaussian filter is one approach for reducing noise in medical images.

sigma = 2;
intensitiesSmooth = imgaussfilt3(intensities,sigma);
Create a new medicalVolume object that contains the smoothed voxel intensities and preserves the spatial referencing of the original file. Create a copy of the original object medVol and set the Voxels property of the new object, medVolSmooth, to the smoothed image data.

medVolSmooth = medVol;
medVolSmooth.Voxels = intensitiesSmooth;
Display the smoothed voxel data. Because Parent=figure already creates a new figure, no separate figure call is needed.

sliceViewer(medVolSmooth,Parent=figure)
title("Smoothed CT Volume, Transverse Slices")
Write Processed Data to New NIfTI File

Write the smoothed image data to a new NIfTI file by using the write object function. The function supports writing medical volume data only in the NIfTI file format.

niftiFilename = "LungCT01_smoothed.nii";
write(medVolSmooth,niftiFilename)
Read the new file using medicalVolume. Because the NIfTI file format does not contain metadata related to modality or display windows, the Modality property value is "unknown" and the WindowCenters and WindowWidths properties are empty.

medVolNIfTI = medicalVolume(niftiFilename);
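As a quick check, you can confirm that the spatial referencing survives the round trip even though the modality metadata does not. This is a minimal sketch using the objects created above; the comments state the expected results rather than captured output.

disp(medVolNIfTI.Modality)   % expect "unknown" — modality is not stored in NIfTI metadata
isequal(medVolNIfTI.VolumeGeometry.VolumeSize,medVol.VolumeGeometry.VolumeSize)   % expect logical 1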
See Also
medicalVolume | medicalref3d | extractSlice | sliceLimits

More About
• “Medical Image Coordinate Systems” on page 1-17
Get Started with Medical Image Labeler

The Medical Image Labeler app enables you to explore and interactively label pixels in 2-D and 3-D medical images. You can export labeled data as a groundTruthMedical object to train semantic segmentation algorithms. You can publish snapshot images and animations with or without labels.
This tutorial provides an overview of the capabilities of the Medical Image Labeler and compares 2-D and 3-D image labeling. The typical app workflow includes these steps:
1. “Open Medical Image Labeler App” on page 1-11
2. “Create or Open Labeling Session” on page 1-11
3. “Load Image Data” on page 1-11
4. “Visually Explore Data” on page 1-12
5. “Create Label Definitions” on page 1-12
6. “Label Image” on page 1-13
7. “Export Labeling Results” on page 1-15
Open Medical Image Labeler App

Open the Medical Image Labeler app from the Apps tab on the MATLAB® toolstrip, under Image Processing and Computer Vision. You can also load the app by using the medicalImageLabeler command.
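For example, you can open the app preloaded with data from the command line. This is a sketch that assumes the "Volume" and "Image" data-source syntaxes of medicalImageLabeler; the file names are hypothetical placeholders, so verify the syntax against the medicalImageLabeler reference page for your release.

% Sketch — hypothetical file names; syntax assumed from the medicalImageLabeler reference.
medicalImageLabeler("Volume","chestCT.nii")     % start a volume (3-D) session with an image loaded
medicalImageLabeler("Image","echoSeries.dcm")   % start an image (2-D) session with a series loaded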
Create or Open Labeling Session

Manage labeling in the Medical Image Labeler using app sessions. Within one app session, you can import and label multiple image files. These might be repeat scans from one patient or scans from multiple patients with the same set of tissues, organs, or other regions of interest to label. The app enables you to create either a volume session or an image session. Use a volume session to label 3-D medical image data. Use an image session to label 2-D images or an image series, such as an ultrasound video. You can either create a new labeling session or reopen a previous session:

• New session — On the app toolstrip, click New Session and select New Volume session (3-D) or New Image session (2-D). In the dialog box that opens, specify a session folder. As you draw the label images, the app automatically saves them to the session folder. Therefore, the session folder must have access to enough memory to save all label images for the session. For more details, see the “How Medical Image Labeler Manages Ground Truth Labels” on page 1-16 section.
• Open session — On the app toolstrip, click Open Session and select one of the listed recent sessions, or click Open Session and navigate to a previous session folder.
Load Image Data

Load images to label from a file or from a groundTruthMedical object. Load image files when starting a new labeling project. Use groundTruthMedical objects when working with labeled or partially labeled data from outside of MATLAB or that have been shared from a different workstation. For more details about working on a multi-person labeling team, see “Collaborate on Multi-Labeler Medical Image Labeling Projects” on page 5-29.

To load an image from a file, click Import and, under Data, select From File. The app supports loading these file formats:

• Volume session — Single NIfTI, NRRD, or DICOM file, or a directory containing multiple DICOM files corresponding to one image volume.
• Image session — Single NIfTI or DICOM file.

To load a groundTruthMedical object, click Import, and under Ground Truth, select From File to load the object from a MAT file or From Workspace to load the object from the MATLAB workspace. Importing a groundTruthMedical object loads the image data, label data, and label definitions stored in the object into the app.

The Data Browser pane lists all of the image files currently loaded in the app. Click a filename in the Data Browser to change the file to display and label.
Visually Explore Data

The app displays 3-D image data using individual panes for the Transverse, Sagittal, and Coronal slice planes and a 3-D Volume pane. For an example of how to customize the display of 3-D images, and publish images and animations, see “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11. The app displays 2-D image data in the Slice pane. For details about navigating frames or adjusting the brightness and contrast of a 2-D image series, see “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2.
Create Label Definitions

A label definition specifies the name, color, and order of each label assigned to the image. You must use the same label definitions across all images within an app session. The Medical Image Labeler app supports pixel labeling. You can create a label definition interactively in the app by clicking Create Label Definition in the Label Definitions pane. Optionally, you can assign a name for the label by clicking it, or change the color of the label by clicking the color square next to the label name.
Alternatively, import label definitions from a label definitions file or as part of a groundTruthMedical object. You can create a label definitions file programmatically, or load one exported from a previous app session. For more details about creating a label definitions file programmatically, see the LabelDefinitions property of the groundTruthMedical object.
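For orientation, a label definitions file contains an ordinary MATLAB table. The sketch below builds one with two pixel labels; the variable names are assumptions based on the groundTruthMedical convention, so confirm them against the LabelDefinitions property reference before relying on them.

% Sketch — variable names (Name, PixelLabelID) and the file name are assumptions for illustration.
labelDefs = table(["lung";"tumor"],[1;2], ...
    VariableNames=["Name" "PixelLabelID"]);
save("labelDefinitions.mat","labelDefs")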
Label Image

Label pixels using the tools in the Draw and Automate tabs of the app toolstrip. The app provides manual, semi-automated, and automated labeling tools.

Manual Labeling

Manually draw labels using the Freehand, Assisted Freehand, Polygon, and Paintbrush tools.

Semi-Automated Labeling

Add labels using semi-automated tools including Fill Region, Paint by Superpixels, and Trace Boundary. You can interpolate labeled regions between image frames or volume slices using the Auto Interpolate and Manually Interpolate tools. The table below describes these tools.
Fill Region: Flood fill label image or fill holes in label region.

Paint by Superpixels: Manually paint within an adjustable-sized grid of pixels. To use this tool, first select Paint Brush and then click Paint by Superpixels. Each superpixel contains a cluster of similar intensity values. Adjust the Superpixels Size to change the size of the superpixel grid.

Trace Boundary: Label connected regions that have similar intensity values. Select Trace Boundary in the app toolstrip, and then pause on a seed pixel or voxel in the region you want to label. The tool predicts the boundary of the region by including pixels or voxels with intensities that are similar to the current seed. Use the Threshold slider to adjust the similarity threshold used to predict the region boundary. Increasing the threshold includes a wider range of intensities above and below the seed value in the predicted region. Move your cursor to change the seed.
Automated Labeling

The Automate tab contains built-in automation algorithms to refine existing labels or fully automate labeling. The app provides slice-based algorithms including Active Contours, Adaptive Threshold, Dilate, and Erode. In a volume session, the app additionally provides the Filter and Threshold, Smooth Edges, and Otsu's Threshold algorithms, which you can apply to all slices or to a specified slice range.

You can add a custom automation algorithm to use in the app. On the Automate tab, click Add Algorithm. Import an existing algorithm by selecting From File, or create a new algorithm using the provided function or class template. See “Automate Labeling in Medical Image Labeler” on page 5-21 for an example that applies a custom automation algorithm for 2-D labeling.

Labeling Examples

• For an example of labeling 2-D image data, see “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2.
• For an example of labeling 3-D image data, see “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10.
Export Labeling Results

You can export the groundTruthMedical object and label definitions as files to share with a colleague.

• To export the groundTruthMedical object as a MAT file, on the Labeler tab, click Export and, under Ground Truth, select To File.
• To export the label definitions as a MAT file, on the Labeler tab, click Export and, under Label Definitions, select To File.

Note: The app stores the actual pixel label data in the LabelData subfolder of the session folder. See “How Medical Image Labeler Manages Ground Truth Labels” on page 1-16 for details.
How Medical Image Labeler Manages Ground Truth Labels

As you label images in the Medical Image Labeler app, the app automatically saves three sets of data in the session folder associated with the current app session.

• A groundTruthMedical object stored as a MAT file. The groundTruthMedical object specifies the file locations of the unlabeled images and corresponding label images, as well as the name and color associated with each label.
• A subfolder named LabelData, which contains the label images. The app saves pixel label images created in an image session in the LabelData folder as MAT files. A MAT file stores the pixel labels as a uint8 array. You can read the label image into the MATLAB workspace by using the load function. The app saves voxel label images created in a volume session in the LabelData folder as NIfTI files. A NIfTI file stores the voxel labels as a uint8 array. You can read the label images into the MATLAB workspace by using the niftiread function.
• A subfolder named AppData, which contains data about the app session stored as a MAT file.
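The list above names the functions you need to read saved labels back into the workspace. This sketch uses hypothetical folder and file names; the app generates its own names inside the session folder.

% Sketch — sessionFolder and the label file names are hypothetical placeholders.
sessionFolder = "MyLabelingSession";
s = load(fullfile(sessionFolder,"LabelData","frameLabels.mat"));           % 2-D image session labels (uint8)
labels3D = niftiread(fullfile(sessionFolder,"LabelData","volLabels.nii")); % 3-D volume session labels (uint8)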
See Also
Medical Image Labeler | groundTruthMedical
Related Examples
• “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11
• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Automate Labeling in Medical Image Labeler” on page 5-21
• “Collaborate on Multi-Labeler Medical Image Labeling Projects” on page 5-29
Medical Image Coordinate Systems

In medical imaging, there are two distinct coordinate systems: the intrinsic coordinate system and the world, or patient, coordinate system. You can access locations in medical images using the intrinsic coordinate system and the patient coordinate system. The intrinsic coordinate system defines the voxel space, and the patient coordinate system defines the anatomical space. To understand the position and orientation of intrinsic coordinates with respect to the patient, you must transform the data into the patient coordinate system. You can use the intrinsicToWorldMapping function to obtain the transformation between the intrinsic coordinates and patient coordinates of a 3-D medical image volume stored as a medicalVolume object.
Patient Coordinate System

The patient coordinate system is made up of three orthogonal axes:

• Left (L)/Right (R) — x-axis
• Anterior (A)/Posterior (P) — y-axis
• Inferior (I)/Superior (S) — z-axis

The patient xyz-axes define the coronal, sagittal, and transverse anatomical planes. This table shows the relationship between the anatomical planes and patient axes.
Anatomical Planes and Patient Axes

Sagittal Plane: Defined by the y- and z-axes. Divides the body into right and left segments.
Coronal Plane: Defined by the x- and z-axes. Divides the body into anterior and posterior segments.
Transverse Plane: Defined by the x- and y-axes. Divides the body into inferior and superior segments.
Note: The patient coordinate system rotates together with the physical orientation of the patient.

Mapping Patient Coordinate Axes to Anatomical Planes

Medical image files store a transformation matrix to map intrinsic coordinates (i, j, k) to patient coordinates (x, y, z), and each file format has a different convention that defines the positive direction of each axis. For example, a DICOM file uses an LPS+ coordinate system and a NIfTI file uses an RAS+ coordinate system.
Patient Axes by File Format

x-axis: DICOM (LPS+) values increase from right to left; NIfTI (RAS+) values increase from left to right.
y-axis: DICOM (LPS+) values increase from anterior to posterior; NIfTI (RAS+) values increase from posterior to anterior.
z-axis: DICOM (LPS+) values increase from inferior to superior; NIfTI (RAS+) values increase from inferior to superior.
Intrinsic Coordinate System

The intrinsic coordinate system describes the spatial dimensions of the patient coordinate system. Intrinsic coordinates are in units of voxels, while patient coordinates have real-world dimensions and are usually in units of millimeters. You can obtain pixel dimensions from the PixelSpacing property of a medicalImage object for 2-D data, and from the VoxelSpacing property of a medicalVolume object for 3-D data.

The origin of the intrinsic coordinate system is located at the center of the first pixel (2-D image) or voxel (3-D volume). The i-axis corresponds to the first dimension (rows), the j-axis corresponds to the second dimension (columns), and the k-axis corresponds to the third dimension of a medical image or volume. One image grid might store voxels with the i-axis corresponding to the anatomical z-axis and the j-axis corresponding to the anatomical x-axis, whereas another image grid might store them with the i-axis corresponding to the anatomical x-axis and the j-axis corresponding to the anatomical z-axis.

Both image grids represent a valid way a medical device might store voxels in a file. When you create a medicalVolume object for an image volume, the spatial dimensions of the patient coordinate system correspond to the values in the Voxels property of the object.
Tip: Use the intrinsicToWorldMapping function to compute the geometric transformation between the intrinsic and patient coordinate systems for a medical image volume.
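As a minimal sketch, assuming a medicalVolume object named medVol (such as the chest CT from “Read, Process, and Write 3-D Medical Images” on page 1-3), you can map a voxel location to patient coordinates. The intrinsic point ordering passed to transformPointsForward is an assumption here, so check the intrinsicToWorldMapping reference page for your release.

% Compute the intrinsic-to-patient transformation from the spatial referencing object.
tform = intrinsicToWorldMapping(medVol.VolumeGeometry);
% Map one voxel location to patient (x,y,z) coordinates in millimeters.
[x,y,z] = transformPointsForward(tform,256,256,44);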
See Also
Objects: medicalref3d | medicalVolume
Functions: intrinsicToWorldMapping

Related Examples
• “Read, Process, and Write 3-D Medical Images” on page 1-3
• “Display Medical Image Volume in Patient Coordinate System” on page 3-21
Introduction to Medical Imaging

Medical imaging is the acquisition and processing of images of the human body for clinical applications. You can use medical image processing to improve the quality of medical images, for diagnosis of medical conditions, for surgical planning, or for research. Medical imaging enables the detailed, yet non-invasive, study of human anatomy. The success of medical imaging requires collaboration between medical professionals such as radiologists, pathologists, or clinicians, and technology professionals skilled in image processing.

The major types of medical images you can process for clinical applications are radiology images, such as MRI scans, CT scans, X-ray scans, ultrasound scans, or PET/SPECT scans, or pathological microscopy images, such as biopsies and blood smears. You can analyze radiology images for a variety of applications.

• Diagnostic Systems: For example, to detect tumors from brain MRI scans, to detect COVID-19 from CT scans, to detect pneumonia from chest X-ray scans, or to detect tumors from breast ultrasound.
• Biomedical Engineering: For example, to model bones or to design prostheses.
• Functional Analysis: For example, to analyze brain function from functional MRI.
• Pharmaceutical Research: For example, to measure drug efficacy and clearance time.
• Device Design: For example, to build new MRI, CT, and ultrasound devices.

You can use Medical Imaging Toolbox to analyze radiology images for such applications. Although the functions in Medical Imaging Toolbox are modality-agnostic, the major medical imaging modalities that you can use the toolbox for include MRI, CT, X-ray, ultrasound, and PET/SPECT.
Common Medical Imaging Modalities
Magnetic Resonance Imaging (MRI)

The human body consists mostly of water, and therefore contains many hydrogen nuclei. MRI scans acquire an image by disturbing the magnetic equilibrium of the hydrogen nuclei in the body. The scanner then measures the time taken by the hydrogen nuclei to regain equilibrium, which varies based on the composition of the organ being imaged. MRI scans are particularly useful for imaging soft tissues such as the brain, spinal cord, nerves, muscles, ligaments, and tendons, as soft tissues have more water content than bone. Unlike other radiology modalities, MRI scans do not use any ionizing radiation. However, medical professionals must ensure that the patient undergoing the scan does not have any metal on or in their body that might be attracted to the magnetic field. There are several forms of MRI, based on the nature of particles and the type of magnetization property measured, including T1-weighted MRI, T2-weighted MRI, diffusion MRI, and functional MRI. The different types of MRI provide different insights into the human body.

Mathematically, an MRI scanner generates an image in the Fourier domain, also known as k-space. Each scan typically consists of a collection of 2-D slices imaged in k-space. The scanner transforms the k-space image to the spatial domain, enabling you to observe the imaged anatomy. The final output of the MRI scanner is a 3-D volume in the spatial domain with spatial localization details. You can use a medicalVolume object to store the voxel data and spatial referencing information for the MRI volume. MRI images are prone to degradations in the form of acquisition noise, undersampling artifacts, and patient motion artifacts.

Computed Tomography (CT)

CT scans use X-ray radiation to image human anatomy. The magnitude of attenuation of the radiation depends on the organ being imaged. Because bones effectively block X-rays, CT scans image them particularly well. You can image tissues in the human body using contrast agents, which help attenuate the X-rays. As a result, CT scans are versatile and can be used for imaging of the head and neck, as well as organs such as the heart, lungs, abdomen, and pelvis. Additionally, CT scans are fast and cost-effective for patients.

Mathematically, a CT scanner reconstructs the image from a series of projections obtained using the Radon transform, typically represented as a sinogram. The Radon transform produces a collection of projections of radiation through the body along different angles. There are a variety of techniques to reconstruct an image from the projection data, including the inverse Radon transformation and other iterative methods, enabling you to observe the imaged anatomy. The final output of the CT scanner is a 3-D volume in the spatial domain with spatial localization details. You can use a medicalVolume object to store the voxel data and spatial referencing information for the CT volume. CT images are prone to degradations in the form of low contrast and artifacts due to miscalibration of the X-ray detectors.

X-Ray Imaging

X-ray imaging is a direct digitized recording of the attenuation of the X-ray radiation on a 2-D sensor array or a radiographic film. It is fast and cost-effective compared to other scans. Thus, it is a good option for preliminary diagnosis. Mathematically, an X-ray image is a simple 2-D image captured by an X-ray detector. You can use a medicalImage object to store the pixel data and metadata for an X-ray image.

Ultrasound (US)

Ultrasound imaging involves emitting ultrasound waves and measuring the strength of the reflected echo waves. The strength of the reflected echo waves depends on the organ being imaged. Because air can block ultrasound waves, ultrasound is not suitable for imaging bones, or tissues that contain air, such as lungs. You can use ultrasound imaging to, for example, monitor the development of a fetus during pregnancy, or for imaging the heart, breast, and abdomen. You can use Doppler ultrasound for functional imaging of the blood flow in the blood vessels. Mathematically, an ultrasound image is derived from the reflected ultrasound waves. The final output of the ultrasound scan is a sequence of 2-D images in the spatial domain. You can use a medicalImage object to store the pixel data and metadata for an ultrasound image sequence. Note that the emitted and reflected ultrasound waves can cause interference, which can show up as speckle in the ultrasound image.
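Both the Fourier and Radon relationships described above are straightforward to demonstrate on synthetic data. This sketch uses the phantom test image from Image Processing Toolbox rather than real scanner data:

% MRI-style reconstruction: the image and its k-space are a Fourier transform pair.
P = phantom(256);                         % synthetic head phantom, not real scanner data
kspace = fftshift(fft2(P));               % simulate a k-space acquisition
mriRecon = abs(ifft2(ifftshift(kspace))); % transform back to the spatial domain

% CT-style reconstruction: forward projection to a sinogram, then filtered back projection.
theta = 0:179;                            % projection angles in degrees
sinogram = radon(P,theta);                % Radon transform of the phantom
ctRecon = iradon(sinogram,theta);         % inverse Radon transform (filtered back projection)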
Nuclear Medicine Imaging

Nuclear medicine imaging involves introducing radioactive tracers, also known as radiotracers or radiopharmaceuticals, that contain radioactive isotopes into the body of the patient. The movement of the radiotracers in the body of the patient provides insights about the organs being imaged. Different types of nuclear medicine imaging employ different radiotracers and are used for different purposes. Mathematically, nuclear medicine imaging is performed as a tomography. The decay of the radiotracers emits radiation, and the scanner measures the attenuation of the radiation in the tomography. The final output of the scanner is a 3-D volume in the spatial domain with spatial localization details. You can use a medicalVolume object to store the voxel data and spatial referencing information for a 3-D volume. Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) use different types of radiotracers for imaging. The decay of the radiotracers used in PET emits positrons. PET is used primarily for diagnosis and tracking of cancer. The decay of the radiotracers used in SPECT emits gamma rays. SPECT is used primarily for diagnosis and tracking of heart disease.
Typical Workflow for Medical Image Analysis
Import and Spatial Referencing

A typical medical imaging workflow begins with importing the medical images into the workspace. Medical images are available in file formats such as NIfTI, DICOM, NRRD, Analyze7.5, and Interfile. Medical Imaging Toolbox provides several functions you can use to import medical images into the workspace and export them back to medical image formats after processing. For more information, see “Read, Process, and Write 3-D Medical Images” on page 1-3.

Display, Volume Rendering, and Surfaces

Once you have imported a medical image into the workspace, you can view and inspect the image to plan your workflow. Medical Imaging Toolbox provides tools for detailed viewing of the medical images in different orientations, for volume rendering to visualize intensity volumes in 3-D, and for generating surfaces for efficient display or 3-D printing or modeling applications. For more information, see “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11.

Preprocessing and Augmentation

An imported medical image can contain noise that you must reduce. Multiple medical images that you must process together can be misaligned and require registration. Also, if the medical imaging data set is too small for your intended application, you might need to augment it. Medical Imaging Toolbox provides functions for preprocessing and augmentation. For more information, see “Medical Image Preprocessing” on page 4-2.

Ground Truth Labeling and Segmentation

For object detection in deep learning applications, you might need to perform segmentation and labeling of the preprocessed medical image training data set. Medical Imaging Toolbox provides the Medical Image Labeler app for segmentation and ground truth labeling. For more information, see “Get Started with Medical Image Labeler” on page 1-10.
2 Import, Export, and Spatial Referencing
Read, Process, and View Ultrasound Data

This example shows how to import and display a 2-D multiframe ultrasound series, and how to apply a denoising filter to each frame of the series. You can use the medicalImage object to import image data and metadata from 2-D medical images and series of images related by time. In this example, you use the properties and the extractFrame object function of a medicalImage object to work with a multiframe echocardiogram ultrasound series.

Read Ultrasound Image Series

Specify the name of an echocardiogram ultrasound series contained in a DICOM file.

filename = "heartUltrasoundSequenceVideo.dcm";
Read the metadata and image data from the file by creating a medicalImage object. The image data is stored in the Pixels property. Each frame of the image series is a 600-by-800-by-3 pixel RGB image. The FrameTime property indicates that each frame has a duration of 33.333 milliseconds. The NumFrames property indicates that the series has a total of 116 image frames.

medImg = medicalImage(filename)

medImg = 
  medicalImage with properties:

          Pixels: [600×800×116×3 uint8]
        Colormap: []
    SpatialUnits: "unknown"
       FrameTime: 33.3330
       NumFrames: 116
    PixelSpacing: [1 1]
        Modality: 'US'
    WindowCenter: []
     WindowWidth: []
Display Ultrasound Image Frames as Montage

Display the frames in the Pixels property of medImg as a montage.

montage(medImg)
Display Ultrasound Series as Video

Open the Video Viewer app to view the ultrasound series as a video by using the implay function. The implay function automatically sets the frame rate using the FrameTime property of medImg.

implay(medImg)
Reduce Speckle Noise

Reduce the noise in the ultrasound image series by applying a speckle-reducing anisotropic diffusion filter to each frame. Extract each frame from the medicalImage object by using the extractFrame object function, and convert the frame from RGB to grayscale. Apply the filter by using the specklefilt function. Specify the DegreeOfSmoothing and NumIterations name-value arguments to control the smoothing parameters.

[h,w,d,c] = size(medImg.Pixels);
pixelsSmoothed = zeros(h,w,d);
for i = 1:d
    Ii = extractFrame(medImg,i);
    Ii = im2double(im2gray(Ii));
    pixelsSmoothed(:,:,i) = specklefilt(Ii,DegreeOfSmoothing=0.6,NumIterations=50);
end
Store Smoothed Data as Medical Image Object

Create a new medicalImage object that contains the smoothed image pixels. Use the property values from the original file to maintain the correct frame information.

medImgSmoothed = medImg;
medImgSmoothed.Pixels = pixelsSmoothed;
medImgSmoothed

medImgSmoothed = 
  medicalImage with properties:

          Pixels: [600×800×116 double]
        Colormap: []
    SpatialUnits: "unknown"
       FrameTime: 33.3330
       NumFrames: 116
    PixelSpacing: [1 1]
        Modality: 'US'
    WindowCenter: []
     WindowWidth: []
View the first frame of the smoothed image series by using the montage function with the Indices name-value argument set to 1.

figure
montage(medImgSmoothed,Indices=1)
Display the smoothed image data as a video.

implay(pixelsSmoothed)
See Also
medicalImage | extractFrame | specklefilt | montage | Video Viewer

Related Examples
• “Read, Process, and Write 3-D Medical Images” on page 1-3
3 Display and Volume Rendering

• “Choose Approach for Medical Image Visualization” on page 3-2
• “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11
• “Display Medical Image Volume in Patient Coordinate System” on page 3-21
• “Display Labeled Medical Image Volume in Patient Coordinate System” on page 3-24
• “Create STL Surface Model of Femur Bone for 3-D Printing” on page 3-33
• “Medical Image-Based Finite Element Analysis of Spine” on page 3-41
Choose Approach for Medical Image Visualization

Medical Imaging Toolbox provides tools to display, explore, and publish 2-D and 3-D medical image data. Visualization is important for clinical diagnosis, treatment planning, and image analysis. Sharing snapshots and animations helps convey useful clinical information to patients, colleagues, and in publications.
Display 2-D Medical Image Data

2-D medical image data includes single images such as X-rays as well as multiframe image series such as ultrasound videos. The Medical Image Labeler app is useful for interactive display, especially when viewing multiple image files or labeled data. To view 2-D images without needing to launch the app, use the montage and implay object functions, which accept a medicalImage object as input. This table describes the options for displaying 2-D medical image data using Medical Imaging Toolbox. If you do not have Medical Imaging Toolbox installed, see “Display and Exploration” (Image Processing Toolbox™).
Goal: Interactively explore images — view multiple related image files in one app session; toggle visibility of labels and adjust label opacity; adjust display contrast by modifying the intensity window and level.
Approach: Medical Image Labeler app. Open the app from the MATLAB Toolstrip, on the Apps tab, under Image Processing and Computer Vision, or from the MATLAB command prompt using medicalImageLabeler.
See Also: “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2

Goal: Publish snapshots and animations to view outside of MATLAB.
Approach: Medical Image Labeler app. Open the app from the MATLAB Toolstrip, on the Apps tab, under Image Processing and Computer Vision, or from the MATLAB command prompt using medicalImageLabeler.
See Also: Export Snapshot Image from App and Export Animations from App in “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11

Goal: Programmatically display all frames in an image series.
Approach: Display the data in the medicalImage object medImage by using the command montage(medImage).
See Also: “Read, Process, and View Ultrasound Data” on page 2-2

Goal: Programmatically play image series as a video.
Approach: Display the data in the medicalImage object medImage by using the command implay(medImage).
See Also: “Read, Process, and View Ultrasound Data” on page 2-2
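For instance, reusing the echocardiogram file from “Read, Process, and View Ultrasound Data” on page 2-2, the two programmatic options in the table amount to:

medImg = medicalImage("heartUltrasoundSequenceVideo.dcm");   % 2-D multiframe ultrasound series
montage(medImg)   % tile every frame in one figure
implay(medImg)    % play the series as a video at the frame rate from the file metadata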
Display 3-D Medical Image Data

3-D medical image data includes volumes from modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The Medical Image Labeler app is useful for interactive display, especially when viewing multiple image files or labeled data. To view 3-D images without needing to launch the app, use the volshow and montage object functions or create a sliceViewer object, which all accept a medicalVolume object as input. This table describes the options for displaying 3-D medical image data using Medical Imaging Toolbox. If you do not have Medical Imaging Toolbox installed, see “Display and Exploration” (Image Processing Toolbox).
Goal: Interactively explore image volumes — view multiple related image files in one app session; toggle visibility of labels and adjust label opacity; adjust display contrast by modifying the intensity window and level; publish snapshots and animations to view outside of MATLAB.
Approach: Medical Image Labeler app. Open the app from the MATLAB Toolstrip, on the Apps tab, under Image Processing and Computer Vision, or from the MATLAB command prompt using medicalImageLabeler.
See Also: “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11

Goal: Publish snapshots and animations to view outside of MATLAB.
Approach: Medical Image Labeler app. Open the app from the MATLAB Toolstrip, on the Apps tab, under Image Processing and Computer Vision, or from the MATLAB command prompt using medicalImageLabeler.
See Also: Export Snapshot Image from App and Export Animations from App in “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11

Goal: Programmatically display volume in a figure window, with optional label overlays.
Approach: Display the data in the medicalVolume object medVol by using the command volshow(medVol).
See Also: “Display Medical Image Volume in Patient Coordinate System” on page 3-21; “Display Labeled Medical Image Volume in Patient Coordinate System” on page 3-24

Goal: Programmatically display slices along one dimension.
Approach: Display the data in the medicalVolume object medVol by using the command montage(medVol).
See Also: “Read, Process, and Write 3-D Medical Images” on page 1-3

Goal: Programmatically display slices along one dimension in a scrollable figure window.
Approach: Display the data in the medicalVolume object medVol by using the command sliceViewer(medVol).
See Also: “Display Medical Image Volume in Slice Viewer”
See Also
Medical Image Labeler | implay | montage | volshow | sliceViewer

Related Examples
• “Choose Approach to Display 2-D and 3-D Images”
• “Visualize 3-D Medical Image Data Using Medical Image Labeler” on page 3-11
Visualize 3-D Medical Image Data Using Medical Image Labeler

This example shows how to interactively explore medical image volumes and export snapshots and animations from the Medical Image Labeler app.

Download Data

This example labels chest CT data from a subset of the Medical Segmentation Decathlon data set [1 on page 3-20]. Download the MedicalVolumeNIfTIData.zip file from the MathWorks® website, then unzip the file. The size of the subset of data is approximately 76 MB.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeNIfTIData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
dataFolder = fullfile(filepath,"MedicalVolumeNIfTIData");
Open Medical Image Labeler

Open the Medical Image Labeler app from the Apps tab on the MATLAB® toolstrip, under Image Processing and Computer Vision. You can also open the app by using the medicalImageLabeler command.
Create New Volume Labeling Session

To start a new 3-D session, on the app toolstrip, click New Session and select New Volume session (3-D). In the Create a new session folder dialog box, specify a location in which to save the new session folder by entering a path or selecting Browse and navigating to your desired location. In the New Session Folder box, specify a name for the folder for this app session. Select Create Session.

Load Image Data into Medical Image Labeler

To load an image into the Medical Image Labeler app, on the app toolstrip, click Import. Then, under Data, select From File. Browse to the location containing the downloaded data, specified by the dataFolder workspace variable, and select the file lung_027.nii.gz. For a Volume Session, the imported data file can be a single DICOM or NIfTI file containing a 3-D image volume, or a directory containing multiple DICOM files corresponding to a single image volume.
View Data as Anatomical Slice Planes

The app displays the data as anatomical slice planes in the Transverse, Sagittal, and Coronal panes. By default, the app displays the center slice in each slice plane. You can change the displayed slice by using the scroll bar at the bottom of a slice pane, or you can click the pane and then press the left and right arrow keys. The app displays the current slice number out of the total number of slices, such as 132/264, for each slice pane. The app also displays anatomical display markers indicating the anterior (A), posterior (P), left (L), right (R), superior (S), and inferior (I) directions. To toggle the visibility of the display markers, on the app toolstrip, click Display Markers and, under 2-D Slices, select or clear 2D Orientation Markers. You can zoom in on the current slice pane using the mouse scroll wheel or the zoom controls that appear when you pause on the slice pane.

By default, the app displays the scan using the Radiological display convention, with the left side of the patient on the right side of the image. To display the left side of the patient on the left side of the image, on the app toolstrip, click Preferences, and in the dialog box that opens, change the Display Convention to Neurological.

To change the brightness or contrast of the display, adjust the display window. Changing the display window does not modify the underlying image data. To update the display window, you can directly edit the values in the Window Level and Window Width boxes in the app toolstrip.
Alternatively, adjust the window using the Window Level tool. First, select the Window Level tool in the app toolstrip. Then, click and hold in any of the slice panes, and drag up and down to increase and decrease the brightness, respectively, or left and right to increase and decrease the contrast. The Window Level and Window Width values in the toolstrip automatically update as you drag. Clear the tool selection to deactivate it. To reset the brightness and contrast, click Window Level and select Reset. You can swap the mouse actions, such that dragging vertically changes the contrast and dragging horizontally changes the brightness, in the Preferences dialog box.
View Data as 3-D Volume

The app displays the data as a 3-D volume in the 3-D Volume pane. You can toggle the visibility of the volume display using the Show Volume button on the Labeler tab of the app toolstrip. In the 3-D Volume pane, click and drag to rotate the volume or use the scroll wheel to zoom.

You can adjust the volume display using the tools in the Volume Rendering tab of the app toolstrip. To change the background color, click Background Color. Toggle the use of a background gradient by clicking Use Gradient. To change the gradient color, with Use Gradient selected, click Gradient Color. To restore the default background settings, click Restore Background.

You can also view orientation axes and scale bar display markers in the 3-D Volume pane. To toggle the visibility of each display marker, on the Labeler tab of the app toolstrip, click Display Markers, and under 3-D Volume, select or clear 3D Orientation Axes and Scale Bars.
You can change how the app displays voxel intensity data by selecting one of several built-in rendering styles. On the Volume Rendering tab, select CT - Bone from the Rendering Presets gallery to use an alphamap and colormap recommended for highlighting bone tissue in computed tomography (CT) scans. To refine the built-in displays for your data set, adjust the display parameters by using the Rendering Editor. To open the editor, on the Volume Rendering tab of the app toolstrip, click Rendering Editor.
Using the Rendering Editor, you can manually adjust the rendering technique, the transparency alphamap, and the colormap of the 3-D volume rendering. The app supports these rendering techniques:

• Volume Rendering — View the volume based on the specified color and transparency for each voxel.
• Maximum Intensity Projection — View the voxel with the highest intensity value for each ray projected through the data.
• Gradient Opacity — View the volume based on the specified color and transparency, with an additional transparency applied if the voxel is similar in intensity to the previous voxel along the viewing ray. When you render a volume with uniform intensity using Gradient Opacity, the internal portion of the volume appears more transparent than in the Volume Rendering rendering style, enabling better visualization of label data, if present, and intensity gradients in the volume.

For this example, select Gradient Opacity.
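Outside the app, the volshow function exposes comparable styles through its RenderingStyle argument. This is a sketch assuming a medicalVolume object named medVol; the volshow style names parallel, but are not identical to, the technique names in the Rendering Editor.

volshow(medVol,RenderingStyle="VolumeRendering");            % color and transparency per voxel
volshow(medVol,RenderingStyle="MaximumIntensityProjection"); % brightest voxel along each viewing ray
volshow(medVol,RenderingStyle="GradientOpacity");            % added transparency in uniform regions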
You can refine the alphamap and colormap by manipulating the plots and colorbar in the Rendering Editor directly. To save your customized settings, in the Volume Rendering tab of the app toolstrip, click Save Rendering. The app adds an icon for your custom settings to the gallery in the app toolstrip, under User-Defined. If you have multiple files loaded in the app session, you can apply the same built-in or custom rendering settings to all volumes by clicking Apply To All Volumes in the app toolstrip.

Export Snapshot Image from App

You can export images from the Medical Image Labeler app to use for figures. In the Labeler tab of the app toolstrip, click Save Snapshot. In the Save Snapshot dialog box, select the views you want to export and click Save. By default, the app saves a snapshot for the transverse, coronal, and sagittal slice planes, as well as the 3-D volume window. In the second dialog box, specify the location where you want to save the images, and click Save again. Each snapshot is saved as a separate PNG file. The 2-D snapshots match the current brightness and contrast settings of the app, but show the full slice without any zooming or anatomical display markers. The 3-D snapshot matches the current rotation and zoom values in the 3-D Volume pane, and contains any enabled display markers.

Export Animations from App

You can generate 2-D animations that scroll through transverse, sagittal, or coronal slices, or 3-D animations of the volume rotating about a specified axis. Access the animation generator tool in the Labeler tab of the app toolstrip by clicking Generate Animations.

To export a 2-D animation, in the Animation editor, select 2-D slices. In the Direction section, specify the Slice Direction along which you want to view slices. Use the settings in the Animation section to customize the Slice Range and Step Size. Select Append animation in reverse to make the animation scroll forward and then backward through the slices, rather than forward only. In the Export section, specify the Loop Count and Animation Length. Specify the loop count as a positive integer, for a fixed number of loops, or as inf to create a continuously looping animation. Specify the animation length in seconds. The editor displays the speed in frames per second based on your specified slice range and animation length.

The settings in this image create an animation that captures every eighth slice along the transverse direction. Each loop animates forward and then in reverse through the slice range. The animation loops infinitely, with a length of 10 seconds per loop, and a frame rate of 6.475 FPS.
Click Preview to preview one loop of the animation with the specified Direction and Animation settings. The preview does not reflect the loop count or animation length of the exported GIF. The preview plays in the slice view of the selected slice direction. If you are satisfied with the preview, click Export GIF, and, in the dialog box, select the location to save the animation GIF file. You can open and view the GIF file from the saved location and share the animation outside of MATLAB.
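If you need a similar scroll-through animation outside the app, you can assemble a GIF programmatically. This is a minimal sketch, assuming V is a hypothetical 3-D grayscale volume array and sliceScroll.gif is a hypothetical output filename; the step size and frame delay are illustrative values, not recommendations.

% Minimal sketch: write a slice-scroll GIF from a volume array,
% assuming V is a hypothetical 3-D grayscale volume.
V = rescale(double(V));        % normalize intensities to [0, 1]
filename = "sliceScroll.gif";  % hypothetical output file
for k = 1:8:size(V,3)          % capture every eighth slice
    [ind,cmap] = gray2ind(V(:,:,k),256);
    if k == 1
        imwrite(ind,cmap,filename,"gif",LoopCount=Inf,DelayTime=0.15);
    else
        imwrite(ind,cmap,filename,"gif",WriteMode="append",DelayTime=0.15);
    end
end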
To export a 3-D animation, in the Animation editor, select 3-D Volume. Under Camera, select the Rotation Axis to rotate around as SI (superior-inferior), LR (left-right), AP (anterior-posterior), or the current up vector in the 3-D Volume pane. Select Snap Camera to Axis to set the camera location to view the volume head-on versus maintaining the rotation in the 3-D Volume pane. In the Animation section, use Limits to specify the orientations, in degrees, between which to rotate the volume. Specify the Step Size in degrees. The settings in this image create an animation of the volume rotating about the superior-inferior axis, with frames acquired every ten degrees of rotation. The animation loops infinitely, with a length of 3 seconds per loop.
Click Preview to preview one loop of the animation with the specified Camera and Animation settings. The preview does not reflect the loop count or animation length of the exported GIF. The preview plays in the 3-D Volume pane. If you are satisfied with the preview, click Export GIF, and, in the dialog box, select the location to save the animation GIF file. You can open and view the GIF file from the saved location and share the animation outside of MATLAB.
References
[1] Medical Segmentation Decathlon. "Lung." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com/.

The Lung data set is provided by the Medical Segmentation Decathlon under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details. This example uses a subset of the original data set consisting of two CT volumes.
See Also

Medical Image Labeler | sliceViewer | volshow | montage
Related Examples
• “Choose Approach for Medical Image Visualization” on page 3-2
• “Display Medical Image Volume in Patient Coordinate System” on page 3-21
• “Display Labeled Medical Image Volume in Patient Coordinate System” on page 3-24
Display Medical Image Volume in Patient Coordinate System

This example shows how to display 3-D CT data in the patient coordinate system using volshow. The volshow function uses the spatial referencing information from a medicalVolume object to transform intrinsic image coordinates, in voxel units, to patient coordinates in real-world units such as millimeters. This is particularly useful for visualizing anisotropic image voxels, which have unequal spatial dimensions. Viewing images in the patient coordinate system accurately represents the aspect ratio of anisotropic voxels, which avoids distortions in the image. If you do not have Medical Imaging Toolbox™ installed, see volshow (Image Processing Toolbox™).

Download Image Volume Data

This example uses a chest CT volume saved as a directory of DICOM files. The volume is part of a data set containing three CT volumes. The size of the entire data set is approximately 81 MB. Download the data set from the MathWorks® website, then unzip the folder.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeDICOMData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
dataFolder = fullfile(filepath,"MedicalVolumeDICOMData","LungCT01");
Import Image Volume

Create a medicalVolume object that contains the image data and spatial referencing information for the CT volume. The Voxels property contains a numeric array of the voxel intensities. The VoxelSpacing property indicates that the voxels are anisotropic, with a size of 0.7285-by-0.7285-by-2.5 mm.

medVol = medicalVolume(dataFolder)

medVol = 
  medicalVolume with properties:

                 Voxels: [512×512×88 int16]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "mm"
            Orientation: "transverse"
           VoxelSpacing: [0.7285 0.7285 2.5000]
           NormalVector: [0 0 1]
       NumCoronalSlices: 512
      NumSagittalSlices: 512
    NumTransverseSlices: 88
           PlaneMapping: ["sagittal" "coronal" "transverse"]
               Modality: "CT"
          WindowCenters: [88×1 double]
           WindowWidths: [88×1 double]
Display Image Volume

Create a colormap and transparency map to display the rib cage. The values have been determined using trial and error.

alpha = [0 0 0.72 1.0];
color = [0 0 0; 186 65 77; 231 208 141; 255 255 255] ./ 255;
intensity = [-3024 100 400 1499];
queryPoints = linspace(min(intensity),max(intensity),256);
alphamap = interp1(intensity,alpha,queryPoints)';
colormap = interp1(intensity,color,queryPoints);
To display the volume in patient coordinates, pass the medicalVolume object as input to volshow. Specify the custom colormap and transparency map, as well as the cinematic rendering style, which displays the volume with realistic lighting and shadows. The axes display indicators label the inferior/superior (S), left/right (L), and anterior/posterior (P) anatomical axes.
volPatient = volshow(medVol,Colormap=colormap,Alphamap=alphamap,RenderingStyle="CinematicRendering");
Pause to apply all of the cinematic rendering iterations before updating the display in Live Editor.

pause(3.5)
drawnow
The volshow function uses the spatial details in medVol to set the Transformation property of the output Volume object, volPatient. Display the transformation used to display the anisotropic voxels for this volume.

volPatient.Transformation.A

ans = 4×4

         0    0.7285         0  -187.2285
    0.7285         0         0  -187.2285
         0         0    2.5000  -283.7500
         0         0         0    1.0000
Optionally, you can clean up the viewer window by using the 3-D Scissors tool to remove the patient bed. For a detailed example, see “Remove Objects from Volume Display Using 3-D Scissors”. This image shows the final volume display after removing the patient bed and rotating to view the anterior side of the patient.
See Also

medicalVolume | medicalref3d | intrinsicToWorldMapping | volshow
Related Examples

• “Choose Approach for Medical Image Visualization” on page 3-2
• “Display Labeled Medical Image Volume in Patient Coordinate System” on page 3-24
• “Display Volume Using Cinematic Rendering”
• “Remove Objects from Volume Display Using 3-D Scissors”
Display Labeled Medical Image Volume in Patient Coordinate System

This example shows how to display labeled 3-D medical image volumes by using volshow. The volshow function uses the spatial referencing information from a medicalVolume object to transform intrinsic image coordinates, in voxel units, to patient coordinates in real-world units such as millimeters. You can visualize labels as an overlay by using the OverlayData property of the Volume object created by volshow. If you do not have Medical Imaging Toolbox™ installed, see volshow (Image Processing Toolbox™).

Download Image Volume Data

This example uses a subset of the Medical Segmentation Decathlon data set [1]. The subset of data includes two CT chest volumes and corresponding label images, stored in the NIfTI file format. Run this code to download the MedicalVolumeNIfTIData.zip file from the MathWorks® website, then unzip the file. The size of the data file is approximately 76 MB.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeNIfTIData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
The folder dataFolder contains the downloaded and unzipped data.

dataFolder = fullfile(filepath,"MedicalVolumeNIfTIData");
Specify the filenames of the CT volume and label image used in this example.

dataFile = fullfile(dataFolder,"lung_043.nii.gz");
labelDataFile = fullfile(dataFolder,"LabelData","lung_043.nii.gz");
Import Image Volume

Create a medical volume object that contains the image data and spatial referencing information for the CT volume. The Voxels property contains a numeric array of the voxel intensities. The VoxelSpacing property indicates that the voxels are anisotropic, with a size of 0.7695-by-0.7695-by-2.5 mm.

medVolData = medicalVolume(dataFile)

medVolData = 
  medicalVolume with properties:

                 Voxels: [512×512×129 single]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "mm"
            Orientation: "transverse"
           VoxelSpacing: [0.7695 0.7695 2.5000]
           NormalVector: [0 0 -1]
       NumCoronalSlices: 512
      NumSagittalSlices: 512
    NumTransverseSlices: 129
           PlaneMapping: ["sagittal" "coronal" "transverse"]
               Modality: "unknown"
          WindowCenters: 0
           WindowWidths: 0
Import Label Data

Create a medical volume object that contains the label image data. The label image has the same spatial information as the intensity CT volume.

medvolLabels = medicalVolume(labelDataFile)

medvolLabels = 
  medicalVolume with properties:

                 Voxels: [512×512×129 uint8]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "mm"
            Orientation: "transverse"
           VoxelSpacing: [0.7695 0.7695 2.5000]
           NormalVector: [0 0 -1]
       NumCoronalSlices: 512
      NumSagittalSlices: 512
    NumTransverseSlices: 129
           PlaneMapping: ["sagittal" "coronal" "transverse"]
               Modality: "unknown"
          WindowCenters: 0
           WindowWidths: 0
Display CT Volume with Tumor Overlay

Create a colormap and transparency map to display the rib cage. The alpha and color values are based on the CT-bone rendering style from the Medical Image Labeler app. The intensity values have been tuned for this volume using trial and error.

alpha = [0 0 0.72 1.0];
color = ([0 0 0; 186 65 77; 231 208 141; 255 255 255]) ./ 255;
intensity = [-3024 -700 -400 3071];
queryPoints = linspace(min(intensity),max(intensity),256);
alphamap = interp1(intensity,alpha,queryPoints)';
colormap = interp1(intensity,color,queryPoints);
To display the volume in the patient coordinate system, pass the medicalVolume object as input to volshow. Use the CinematicRendering rendering style to view the volume with realistic lighting and shadows. Specify the custom colormap and transparency map. The volshow function uses the spatial details in medVolData to set the Transformation property of the output Volume object, vol. The voxels are scaled to the correct anisotropic dimensions. The axes display indicators label the inferior/superior (S), left/right (L), and anterior/posterior (P) anatomical axes.

vol = volshow(medVolData, ...
    RenderingStyle="CinematicRendering", ...
    Colormap=colormap, ...
    Alphamap=alphamap);
Pause to apply all of the cinematic rendering iterations before updating the display in Live Editor.

pause(4)
drawnow
View the tumor label image as an overlay on the CT volume. You can set the OverlayData and OverlayAlphamap properties of an existing Volume object, or specify them during creation using volshow. Note that you must set the OverlayData property to the numeric array in the Voxels property of medvolLabels, rather than the medicalVolume object itself.

vol.OverlayData = medvolLabels.Voxels;
vol.OverlayAlphamap = 1;
pause(4)
drawnow
Optionally, you can clean up the viewer window by using the 3-D Scissors tool to remove the patient bed. For a detailed example, see “Remove Objects from Volume Display Using 3-D Scissors”.
This image shows the final volume after bed removal.
Display CT Volume as Slice Planes

Visualize the CT volume and label overlay as slice planes. Use the RenderingStyle name-value argument to specify the rendering style as "SlicePlanes". Specify the tumor label overlay using the OverlayData name-value argument. Note that you must set the OverlayData property to the numeric array in the Voxels property of medvolLabels, rather than the medicalVolume object itself.

volSlice = volshow(medVolData, ...
    OverlayData=medvolLabels.Voxels, ...
    RenderingStyle="SlicePlanes", ...
    Alphamap=linspace(0.01,0.2,256), ...
    OverlayAlphamap=0.75);
To scroll through the transverse slices, pause the cursor on the transverse slice until it highlights in blue, then drag the cursor along the inferior/superior axis.
Drag the cursor to rotate the volume. The tumor overlay is visible in the slices for which the overlay is defined.
References

[1] Medical Segmentation Decathlon. "Lung." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com/.

The Medical Segmentation Decathlon data set is provided under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details.
See Also

volshow | Volume Properties | medicalref3d | medicalVolume | intrinsicToWorldMapping
Related Examples
• “Choose Approach for Medical Image Visualization” on page 3-2
• “Display Medical Image Volume in Patient Coordinate System” on page 3-21
• “Display Volume Using Cinematic Rendering”
• “Remove Objects from Volume Display Using 3-D Scissors”
Create STL Surface Model of Femur Bone for 3-D Printing

This example shows how to convert a segmentation mask from a CT image into an STL surface model suitable for 3-D printing. There are several clinical applications and research areas involving 3-D printing of medical image data:

• Surgical treatment planning using patient-specific anatomical models.
• Fabrication of custom prosthetics, implants, or surgical tools.
• Bioprinting of tissues and organs.

3-D printing creates physical models of computer-generated surface models. The typical workflow for 3-D printing anatomical structures from medical image volumes includes these steps:

1. Segment the region of interest, such as a bone, organ, or implant.
2. Convert the segmentation mask to a triangulated surface, defined by faces and vertices.
3. Write the triangulation to an STL file, which most commercial 3-D printers accept.
4. Import the STL file into the 3-D printing software, and print the model.
This example converts a mask of a femur bone into a triangulated surface, and writes an STL file suitable for 3-D printing.

Download Image Volume Data

This example uses a binary segmentation mask of a femur. The mask was created by segmenting a CT scan from the Medical Segmentation Decathlon liver data set [1] using the Medical Image Labeler app. For an example of how to segment medical image volumes, see “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10.
When you label an image in the Medical Image Labeler, the app saves the segmentation mask as a NIfTI file. You can find the file path of the mask generated in an app session by checking the LabelData property of the groundTruthMedical object exported from that session (see the sketch after the download code below). This example downloads the femur mask as part of a data set containing the original abdominal CT volume as well as masks of the femur and liver. Download the data set from the MathWorks® website, then unzip the folder.

zipFile = matlab.internal.examples.downloadSupportFile("medical", ...
    "MedicalVolumeLiverNIfTIData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
dataFolder = fullfile(filepath,"MedicalVolumeLiverNIfTIData");
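If you exported a groundTruthMedical object from your own app session, you can query its mask locations directly. This is a minimal sketch, assuming gTruthMed is a hypothetical groundTruthMedical variable exported from the Medical Image Labeler; it is not needed for the downloaded data used in this example.

% Minimal sketch: list label image files from a labeling session,
% assuming gTruthMed is a hypothetical exported groundTruthMedical object.
maskFiles = gTruthMed.LabelData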
Specify the filename of the femur mask.

filename = fullfile(dataFolder,"Femur.nii");
Load CT Segmentation Mask

Import the femur mask by creating a medicalVolume object for the NIfTI file. The image data is stored in the Voxels property. The VoxelSpacing property indicates that the voxels are anisotropic, with a size of 0.7-by-0.7-by-5.0 mm.

medVol = medicalVolume(filename)

medVol = 
  medicalVolume with properties:

                 Voxels: [512×512×75 uint8]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "mm"
            Orientation: "transverse"
           VoxelSpacing: [0.7031 0.7031 5]
           NormalVector: [0 0 1]
       NumCoronalSlices: 512
      NumSagittalSlices: 512
    NumTransverseSlices: 75
           PlaneMapping: ["sagittal" "coronal" "transverse"]
               Modality: "unknown"
          WindowCenters: 0
           WindowWidths: 0
The VolumeGeometry property of a medical volume object contains a medicalref3d object that specifies the spatial referencing information. Extract the medicalref3d object for the CT scan into a new variable, R.

R = medVol.VolumeGeometry;

Extract Isosurface of Femur

Specify the isovalue for isosurface extraction.

isovalue = 0.05;

Extract the vertices and faces that define an isosurface for the image volume at the specified isovalue.
[faces,vertices] = extractIsosurface(medVol.Voxels,isovalue);
The vertices output provides the intrinsic ijk-coordinates of the surface points, in voxels. Create a triangulation of the vertices in intrinsic coordinates.

triIntrinsic = triangulation(double(faces),double(vertices));
If you display the surface defined by vertices in intrinsic coordinates, the femur surface appears squished. Intrinsic coordinates do not account for the real-world voxel dimensions, and the voxels in this example are larger in the third dimension than in the first two dimensions.

viewerIntrinsic = viewer3d;
obj = images.ui.graphics3d.Surface(viewerIntrinsic, ...
    Data=triIntrinsic, ...
    Color=[0.88 0.84 0.71], ...
    Alpha=0.9);
Transform Vertices into Patient Coordinates

To accurately represent the femur surface points, transform the ijk-coordinates in vertices to the patient coordinate system defined by the medicalref3d object, R. The X, Y, and Z outputs define the xyz-coordinates of the surface points, in millimeters.

I = vertices(:,1);
J = vertices(:,2);
K = vertices(:,3);
[X,Y,Z] = intrinsicToWorld(R,I,J,K);
verticesPatientCoords = [X Y Z];
Create a triangulation of the transformed vertices, verticesPatientCoords. Maintain the original connectivity defined by faces.

triPatient = triangulation(double(faces),double(verticesPatientCoords));
Display the surface defined by the transformed patient coordinates. The femur surface no longer appears squished.

viewerPatient = viewer3d;
obj = images.ui.graphics3d.Surface(viewerPatient, ...
    Data=triPatient, ...
    Color=[0.88 0.84 0.71], ...
    Alpha=0.9);
Create STL File

Check the number of vertices in the surface, which corresponds to the length of the Points property.

triPatient

triPatient = 
  triangulation with properties:

              Points: [21950×3 double]
    ConnectivityList: [43896×3 double]
You can reduce the number of triangles required to represent a surface, while preserving the shape of the associated object, by using the reducepatch function. Reduce the number of triangles in the femur surface by 50%. The reducepatch function returns a structure with fields faces and vertices.

pReduced = reducepatch(faces,verticesPatientCoords,0.5);
Create a triangulation of the reduced surface. The number of vertices, specified by the Points property, is approximately one-half of the number of vertices in triPatient.

triReduced = triangulation(double(pReduced.faces),double(pReduced.vertices))

triReduced = 
  triangulation with properties:

              Points: [10976×3 double]
    ConnectivityList: [21948×3 double]
Display the reduced surface to verify that the shape of the femur is preserved.

viewerPatient = viewer3d;
obj = images.ui.graphics3d.Surface(viewerPatient, ...
    Data=triReduced, ...
    Color=[0.88 0.84 0.71], ...
    Alpha=0.9);
Write the surface data to an STL format file by using the stlwrite function.
Name = "Femur.stl";
stlwrite(triReduced,Name)
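To confirm that the file was written as expected, you can read it back into MATLAB. This is a minimal sketch, assuming Femur.stl exists in the current folder after the stlwrite call above.

% Minimal sketch: read the exported STL file back and inspect its size.
triCheck = stlread("Femur.stl");
size(triCheck.Points)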
You can use the STL model file as input to most commercial 3-D printers to generate a physical 3-D model of the femur. This image shows a 3-D print of the STL file.
References

[1] Medical Segmentation Decathlon. "Liver." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com/.

The Medical Segmentation Decathlon data set is provided under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details.
See Also

Apps
Medical Image Labeler

Functions
extractIsosurface | stlwrite | triangulation | patch | reducepatch

Objects
medicalref3d | medicalVolume
Related Examples

• “Display Medical Image Volume in Patient Coordinate System” on page 3-21
Medical Image-Based Finite Element Analysis of Spine

This example shows how to estimate bone stress and strain in a vertebra bone under axial compression using finite element (FE) analysis.

Bones experience mechanical loading due to gravity, motion, and muscle contraction. Estimating the stresses and strains within bone tissue can help predict bone strength and fracture risk. This example uses image-based FE analysis to predict bone stress and strain within a vertebra under axial compression. Image-based FE uses medical images, such as computed tomography (CT) scans, to generate the geometry and material properties of the FE model. In this example, you create and analyze a 3-D model of a single vertebra under axial loading using this workflow:

1. Segment vertebra from CT scan on page 3-41.
2. Extract vertebra geometry and generate FE mesh on page 3-42.
3. Assign material properties on page 3-45.
4. Apply loading and solve model on page 3-46.
5. Analyze results on page 3-47.
Download Data

This example uses a CT scan saved as a directory of DICOM files. The scan is part of a data set that contains three CT volumes. Run this code to download the data set from the MathWorks website as a zip file and unzip the file. The total size of the data set is approximately 81 MB.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeDICOMData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
After you download and unzip the data set, the scan used in this example is available in the dataFolder directory.

dataFolder = fullfile(filepath,"MedicalVolumeDICOMData","LungCT01");
Segment Vertebra from CT Scan

This example uses a binary segmentation mask of one vertebra in the spine. The mask was created by segmenting the spine from a chest CT scan using the Medical Image Labeler app. For an example of how to segment medical image volumes in the app, see “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10.
The segmentation mask is attached to this example as a supporting file. Load the mask as a medicalVolume object.

labelVol = medicalVolume("LungCT01.nii.gz");
The VolumeGeometry property of a medical volume object contains a medicalref3d object that defines the patient coordinate system for the scan. Extract the medicalref3d object for the mask into a new variable, R.

R = labelVol.VolumeGeometry;
Extract Vertebra Geometry and Generate FE Mesh

Extract Isosurface of Vertebra Mask

Calculate the isosurface faces and vertices for the vertebra mask by using the extractIsosurface function. The vertices coordinates are in the intrinsic coordinate system defined by rows, columns, and slices.

isovalue = 1;
[faces,vertices] = extractIsosurface(labelVol.Voxels,isovalue);
Transform vertices to the patient coordinate system defined by R by using the intrinsicToWorld object function. The X, Y, and Z outputs define the xyz-coordinates of the surface points, in millimeters.

I = vertices(:,1);
J = vertices(:,2);
K = vertices(:,3);
[X,Y,Z] = intrinsicToWorld(R,I,J,K);
verticesPatient = [X Y Z];
Convert the vertex coordinates to meters.

verticesPatientMeters = verticesPatient.*10^-3;
Display the vertebral isosurface as a triangulated Surface object.

triangul = triangulation(double(faces),double(verticesPatientMeters));
viewer = viewer3d;
surface = images.ui.graphics3d.Surface(viewer,Data=triangul,Color=[1 0 0],Alpha=1);
Create PDE Model Geometry

Create a general PDEModel model container for a system of three equations. A general model allows you to assign spatially varying material properties based on bone density.

model = createpde(3);
Specify the geometry for the model by using the geometryFromMesh (Partial Differential Equation Toolbox) object function, with the isosurface in patient coordinates as input.

geometryFromMesh(model,verticesPatientMeters',faces');
pdegplot(model,FaceLabels="on")
Generate Mesh from Model Geometry

Specify the maximum edge length for the FE mesh elements. This example uses a maximum edge length of 1.6 mm, which was determined using a mesh convergence analysis.

hMax = 0.0016;
Generate the mesh from the model geometry. Specify the maximum edge length, and specify that the minimum edge length is half of the maximum edge length. The elements are quadratic tetrahedra.

msh = generateMesh(model,Hmax=hMax,Hmin=hMax/2);
pdemesh(model);
Assign Material Properties

Bone material properties vary based on spatial location and direction:

• Spatial location — Local regions with greater bone density are stiffer than low density regions.
• Direction — Bone stiffness depends on the loading direction. This example models bone using transverse isotropy. A transversely isotropic material is stiffer in the axial direction versus in the transverse plane, and is uniform within the transverse plane.

In this example, you assign spatially varying, transversely isotropic linear elastic material properties.

Extract HU Values Within Vertebra Mask

To assign spatially varying material properties, you need to map CT intensity values to FE mesh coordinates. Load the original, unsegmented CT scan using a medicalVolume object. The medicalVolume object automatically converts the intensity to Hounsfield units (HU). The HU value is a standard for measuring radiodensity.

DCMData = medicalVolume(dataFolder);
Extract the indices and intensities of CT voxels inside the vertebra mask.

trueIdx = find(labelVol.Voxels==1);
HUVertebra = double(DCMData.Voxels(trueIdx));
Convert the linear indices in trueIdx into subscript indices, in units of voxels.

[row,col,slice] = ind2sub(size(labelVol.Voxels),trueIdx);
Transform the subscript indices into patient coordinates and convert the coordinates from millimeters to meters.

[X2,Y2,Z2] = intrinsicToWorld(R,col,row,slice);
HUVertebraMeters = [X2 Y2 Z2].*10^-3;
Map HU Values from CT Voxels to Mesh Nodes

The FE node coordinates are scattered and not aligned with the CT voxel grid. Create a scatteredInterpolant object to define the 3-D interpolation between the voxel grid and FE nodes.

F = scatteredInterpolant(HUVertebraMeters,HUVertebra);
Specify Coefficients

Specify the PDE coefficients for the model. In this structural analysis, the c coefficient defines the material stiffness of the model, which is inversely related to compliance. To define a spatially varying c coefficient in Partial Differential Equation Toolbox™, represent c in function form. In this example, the HU2TransverseIsotropy on page 3-51 helper function defines transversely isotropic material properties based on bone density. Bone density is calculated for a given location using the scattered interpolant F. The helper function is wrapped in an anonymous function, ccoeffunc, which passes the location and state structures to HU2TransverseIsotropy. The FE solver automatically computes location and state structure arrays and passes them to your function during the simulation.

ccoeffunc = @(location,state) HU2TransverseIsotropy(location,state,F);
specifyCoefficients(model,'m',0, ...
    'd',0, ...
    'c',ccoeffunc, ...
    'a',0, ...
    'f',[0;0;0]);
Apply Loading and Solve Model

To simulate axial loading of the vertebra, fix the bottom surface and apply a downward load to the top surface. To simulate distributed loading, the load is applied as a pressure.

Identify the faces in the model geometry to apply the boundary conditions. In this example, the faces were identified using visual inspection of the plot created above using pdegplot.

bottomSurfaceFace = 1;
topSurfaceFace = 250;
Specify the total force to apply, in newtons.

forceInput = -3000;
Estimate the area of the top surface.

nf2 = findNodes(model.Mesh,"region",Face=topSurfaceFace);
positions = model.Mesh.Nodes(:,nf2)';
surfaceShape = alphaShape(positions(:,1:2));
faceArea = area(surfaceShape);
Calculate the pressure magnitude as a function of force and area, in pascals.

inputPressure_Pa = forceInput/faceArea;
Apply the boundary conditions.

applyBoundaryCondition(model,"dirichlet",Face=bottomSurfaceFace,u=[0,0,0]);
applyBoundaryCondition(model,"neumann",Face=topSurfaceFace,g=[0;0;inputPressure_Pa]);
Solve the model. The output is a StationaryResults (Partial Differential Equation Toolbox) object that contains nodal displacements and their gradients.

Rs = solvepde(model);
Analyze Results

View the results of the simulation by plotting the axial displacement, stress, and strain within the model.

Displacement

Plot axial displacement, in millimeters, for the full model by using the pdeplot3D function.

Uz = Rs.NodalSolution(:,3)*10^3;
pdeplot3D(model,"ColorMapData",Uz)
clim([-0.15 0])
title("Axial Displacement (mm)")
Plot displacement in a transverse slice at the midpoint of vertebral height. Create a rectangular grid that covers the geometry of the transverse (xy) slice with spacing of 1 mm in each direction. The x and y vectors define the spatial limits and spacing in the transverse plane, and z is a scalar that provides the midpoint of vertebral height. The Xg, Yg, and Zg variables define the grid coordinates.

x = min(msh.Nodes(1,:)):0.001:max(msh.Nodes(1,:));
y = min(msh.Nodes(2,:)):0.001:max(msh.Nodes(2,:));
z = min(msh.Nodes(3,:))+0.5*(max(msh.Nodes(3,:))-min(msh.Nodes(3,:)));
[Xg,Yg] = meshgrid(x,y);
Zg = z*ones(size(Xg));
Interpolate normal axial displacement onto the grid coordinates by using the interpolateSolution (Partial Differential Equation Toolbox) object function. Convert displacement from meters to millimeters, and reshape the displacement vector to the grid size.

U = interpolateSolution(Rs,Xg,Yg,Zg,3);
U = U*10^3;
Ug = reshape(U,size(Xg));
Plot axial displacement in the transverse slice.

surf(Xg,Yg,Ug,LineStyle="none")
axis equal
grid off
view(0,90)
colormap("turbo")
colorbar
clim([-0.15 0])
title("Axial Displacement (mm)")
Stress

In a structural analysis, stress is the product of the c coefficient and the gradients of displacement. Calculate normal stresses at the transverse slice grid coordinates by using the evaluateCGradient (Partial Differential Equation Toolbox) object function.

[cgradx,cgrady,cgradz] = evaluateCGradient(Rs,Xg,Yg,Zg,3);
Convert normal axial stress to megapascals, and reshape the stress vector to the grid size.

cgradz = cgradz*10^-6;
cgradzg = reshape(cgradz,size(Xg));
Plot the normal axial stress in the transverse slice.

surf(Xg,Yg,cgradzg,LineStyle="none");
axis equal
grid off
view(0,90)
colormap("turbo")
colorbar
clim([-10 1])
title("Axial Stress (MPa)")
Strain

In a structural analysis, strain is the gradient of the displacement result. Extract the normal strain at the transverse slice grid coordinates by using the evaluateGradient (Partial Differential Equation Toolbox) object function.

[gradx,grady,gradz] = evaluateGradient(Rs,Xg,Yg,Zg,3);
Reshape the axial strain vector to the grid size.

gradzg = reshape(gradz,size(Xg));
Plot the axial strain in the transverse slice.

surf(Xg,Yg,gradzg,LineStyle="none");
axis equal
grid off
view(0,90)
colormap("turbo")
colorbar
clim([-0.008 0])
title("Axial Strain")
Helper Functions

The HU2TransverseIsotropy helper function specifies transverse isotropic material properties using these steps:

• Map voxel intensity to Hounsfield units (HU) at nodal locations using the scatteredInterpolant object, F.
• Convert HU to CT density using a linear calibration equation from [1 on page 3-53].
• Convert CT density to elastic modulus in the axial direction, E3, and in the transverse plane, E12, as well as the bulk modulus, G, using density-elasticity relationships from [2 on page 3-53]. The Poisson's ratio in the axial direction, v3, and transverse plane, v12, are also assigned based on [2].
• Define the 6-by-6 compliance matrix for transverse isotropy by using the elasticityTransOrthoC3D on page 3-52 helper function.
• Convert the 6-by-6 compliance, or c-matrix, to the vector form required by Partial Differential Equation Toolbox by using the SixMat2NineMat on page 3-52 and SquareMat2CCoeffVec on page 3-53 helper functions.
function ccoeff = HU2TransverseIsotropy(location,state,F)
HU = F(location.x,location.y,location.z);
rho = 5.2+0.8*HU;
E3 = -34.7 + 3.230.*rho;
E12 = 0.333*E3;
v12 = 0.104;
v3 = 0.381;
G = 0.121*E3;
ccoeff = zeros(45,length(location.x));
for i = 1:length(location.x)
    cMatrix = elasticityTransOrthoC3D(E12(i),E3(i),v12,v3,G(i));
    nineMat = SixMat2NineMat(cMatrix);
    ccoeff(:,i) = SquareMat2CCoeffVec(nineMat);
end
end
The elasticityTransOrthoC3D helper function defines the 6-by-6 compliance matrix for a transversely isotropic linear elastic material based on the elastic moduli, E12 and E3, bulk modulus, G, and Poisson's ratios, v12 and v3. The helper function converts all modulus values from megapascals to pascals before computing the compliance matrix.

function C = elasticityTransOrthoC3D(E12,E3,v12,v3,G)
E12 = E12*10^6;
E3 = E3*10^6;
G = G*10^6;
v_zp = (E3*v3)/E12;
C = zeros(6,6);
C(1,1) = 1/E12;
C(1,2) = -v12/E12;
C(1,3) = -v_zp/E3;
C(2,1) = C(1,2);
C(2,2) = 1/E12;
C(2,3) = -v_zp/E3;
C(3,1) = -v3/E12;
C(3,2) = C(2,3);
C(3,3) = 1/E3;
C(4,4) = 1/(2*G);
C(5,5) = 1/(2*G);
C(6,6) = (1+v12)/E12;
C = inv(C);
end
The SixMat2NineMat helper function converts a 6-by-6 c coefficient matrix in Voigt notation to a 9-by-9 matrix corresponding to expanded form.

function nineMat = SixMat2NineMat(sixMat)
for i = 1:6
    nineVecs(i,:) = sixMat(i,[1 6 5 6 2 4 5 4 3]);
end
nineMat = [nineVecs(1,:); ...
    nineVecs(6,:); ...
    nineVecs(5,:); ...
    nineVecs(6,:); ...
    nineVecs(2,:); ...
    nineVecs(4,:); ...
    nineVecs(5,:); ...
    nineVecs(4,:); ...
    nineVecs(3,:)];
end
The SquareMat2CCoeffVec helper function converts a 9-by-9 c coefficient matrix into vector form as required by Partial Differential Equation Toolbox. This helper function creates a 45-element vector, corresponding to the "3N(3N+1)/2-Element Column Vector c, 3-D Systems" case in “c Coefficient for specifyCoefficients” (Partial Differential Equation Toolbox).

function c45Vec = SquareMat2CCoeffVec(nineMat)
C11 = [nineMat(1,1) nineMat(1,2) nineMat(2,2) nineMat(1,3) nineMat(2,3) nineMat(3,3)];
C12 = [nineMat(1,4) nineMat(2,4) nineMat(3,4) nineMat(1,5) nineMat(2,5) nineMat(3,5) ...
    nineMat(1,6) nineMat(2,6) nineMat(3,6)];
C13 = [nineMat(1,7) nineMat(2,7) nineMat(3,7) nineMat(1,8) nineMat(2,8) nineMat(3,8) ...
    nineMat(1,9) nineMat(2,9) nineMat(3,9)];
C22 = [nineMat(4,4) nineMat(4,5) nineMat(5,5) nineMat(4,6) nineMat(5,6) nineMat(6,6)];
C23 = [nineMat(4,7) nineMat(5,7) nineMat(6,7) nineMat(4,8) nineMat(5,8) nineMat(6,8) ...
    nineMat(4,9) nineMat(5,9) nineMat(6,9)];
C33 = [nineMat(7,7) nineMat(7,8) nineMat(8,8) nineMat(7,9) nineMat(8,9) nineMat(9,9)];
c45Vec = [C11 C12 C22 C13 C23 C33]';
end
References
[1] Turbucz, Mate, Agoston Jakab Pokorni, György Szőke, Zoltan Hoffer, Rita Maria Kiss, Aron Lazary, and Peter Endre Eltes. “Development and Validation of Two Intact Lumbar Spine Finite Element Models for In Silico Investigations: Comparison of the Bone Modelling Approaches.” Applied Sciences 12, no. 20 (January 2022): 10256. https://doi.org/10.3390/app122010256.

[2] Unnikrishnan, Ginu U., and Elise F. Morgan. “A New Material Mapping Procedure for Quantitative Computed Tomography-Based, Continuum Finite Element Analyses of the Vertebra.” Journal of Biomechanical Engineering 133, no. 7 (July 1, 2011): 071001. https://doi.org/10.1115/1.4004190.
See Also

Medical Image Labeler | medicalVolume | intrinsicToWorld | scatteredInterpolant | createpde | geometryFromMesh | solvepde | StationaryResults
Related Examples

• “Medical Image Coordinate Systems” on page 1-17
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Create STL Surface Model of Femur Bone for 3-D Printing” on page 3-33
• “c Coefficient for specifyCoefficients” (Partial Differential Equation Toolbox)
4 Image Preprocessing and Augmentation

• “Medical Image Preprocessing” on page 4-2
• “Medical Image Registration” on page 4-6
• “Register Multimodal Medical Image Volumes with Spatial Referencing” on page 4-11
Medical Image Preprocessing

In this section...
“Background Removal” on page 4-2
“Denoising” on page 4-3
“Resampling” on page 4-3
“Registration” on page 4-4
“Intensity Normalization” on page 4-4
“Preprocessing in Advanced Workflows” on page 4-5

Image preprocessing prepares data for a target workflow. The main goals of medical image preprocessing are to reduce image acquisition artifacts and to standardize images across a data set. Your exact preprocessing requirements depend on the modality and procedure used to acquire data, as well as your target workflow. Some common preprocessing steps include background removal, denoising, resampling, registration, and intensity normalization.
Background Removal

Background removal involves segmenting the region of interest from the image background. By limiting the image to the region of interest, you can improve the efficiency and accuracy of your target workflow. One example of background removal is skull stripping, which removes the skull and other background regions from MRI images of the brain. Background removal typically consists of applying a mask of the region of interest that you create using morphological operations or other segmentation techniques. For more information about morphological operations, see “Types of Morphological Operations”.
To perform background removal, multiply the mask image and the original image. For example, consider a grayscale image, im, and a mask image, mask, that is the same size as im and has a value of 1 for every element in the region of interest and 0 for each element of the background. This code returns a new image, imROI, in which the elements in the region of interest have the same values as in im, and the background elements all have values of 0.

imROI = im.*mask;
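One common way to build such a mask is with thresholding followed by morphological cleanup. The following is a minimal sketch, assuming im is a hypothetical 2-D grayscale image; the threshold method and structuring-element size are illustrative choices, not recommendations.

% Minimal sketch: build a foreground mask with thresholding and morphology,
% assuming im is a hypothetical 2-D grayscale image.
bw = imbinarize(rescale(im));     % rough foreground threshold
bw = imfill(bw,"holes");          % fill interior holes
bw = imopen(bw,strel("disk",5));  % remove small background specks
mask = bwareafilt(bw,1);          % keep the largest connected component
imROI = im.*double(mask);         % apply the mask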
Denoising

Medical imaging modalities are susceptible to noise, which introduces random intensity fluctuations in an image. To reduce noise, you can filter images in the spatial and frequency domains. Medical Imaging Toolbox provides the specklefilt function, which reduces the speckle noise common in ultrasound images. For additional image filtering tools, see “Image Filtering” in Image Processing Toolbox. You can also denoise medical image data using deep learning. For details, see “Train and Apply Denoising Neural Networks”.
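As a minimal sketch of a specklefilt call, assuming im is a hypothetical 2-D ultrasound image rescaled to the range [0, 1]; the smoothing parameters are illustrative values.

% Minimal sketch: reduce speckle noise in an ultrasound image,
% assuming im is a hypothetical 2-D grayscale image in [0, 1].
imSmoothed = specklefilt(im,DegreeOfSmoothing=0.6,NumIterations=50);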
Resampling

Use resampling to change the pixel or voxel size of an image without changing its spatial limits in the patient coordinate system. Resampling is useful for standardizing image resolution across a data set that contains images from multiple scanners.
To resample 3-D image data in a medicalVolume object, use the resample object function. Using the resample object function maintains the spatial referencing of the volume. To resample a 2-D medical image, you can use the imresize function with a target number of columns and rows. For example, this code shows how to resample the pixel data in the medicalImage object medImg to a target pixel size, specified by targetPixelSpacing.

ratios = medImg.PixelSpacing./targetPixelSpacing;
targetSize = ceil(ratios.*size(medImg.Pixels)); % in pixels
resamplePixels = imresize(medImg.Pixels,targetSize);
Define the spatial referencing of the original and resampled images by using the imref2d object.
originalR = imref2d(size(medImg.Pixels),medImg.PixelSpacing(1),medImg.PixelSpacing(2));
resampledR = imref2d(size(resamplePixels),targetPixelSpacing(1),targetPixelSpacing(2));
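For a 3-D analog of the 2-D pattern above, you can resize the voxel array directly with imresize3. This is a minimal sketch, assuming medVol is a hypothetical medicalVolume object and targetSpacing is a hypothetical 1-by-3 voxel size in millimeters; unlike the resample object function, this updates only the voxel array, not the medicalref3d spatial referencing.

% Minimal sketch: resample a 3-D voxel array to a target voxel spacing,
% assuming medVol and targetSpacing are hypothetical variables.
targetSpacing = [1 1 1]; % isotropic 1 mm voxels
ratios = medVol.VoxelSpacing./targetSpacing;
targetSize = ceil(ratios.*size(medVol.Voxels)); % in voxels
resampledVoxels = imresize3(medVol.Voxels,targetSize);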
Registration

You can use image registration to standardize the spatial alignment of 2-D or 3-D medical images in a data set. Registration can be useful for aligning images of different patients, or images of the same patient acquired at different times, on different scanners, or using different imaging modalities. For more details, see “Medical Image Registration” on page 4-6.
Intensity Normalization

Intensity normalization standardizes the range of image intensity values across a data set. Typically, you perform this process in two steps. First, clip intensities to a smaller range. Second, normalize the clipped intensity range to the range of the image data type, such as [0, 1] for double or [0, 255] for uint8. Whereas visualizing image data using a display window changes how you view the data, intensity normalization actually updates the image values.

One normalization strategy is to rescale the intensity values within each image relative to the minimum and maximum values in the image. For example, this code uses the rescale function to limit the intensities in the image im to the 0.5 and 99.5 percentile values, and to rescale the values to the range [0, 1]:

imMax = prctile(im(:),99.5);
imMin = prctile(im(:),0.5);
imNormalized = rescale(im,0,1,InputMin=imMin,InputMax=imMax);
Another strategy is to rescale values to the same intensity range across images. In CT imaging, intensity windowing limits intensity values, in Hounsfield units (HU), to a suitable range for the tissue of interest. HU is a standard CT density scale defined by the density of air (–1000 HU) and water (0 HU). The medicalVolume object automatically rescales the data returned in the Voxels property for DICOM files that define the RescaleIntercept and RescaleSlope metadata attributes. You can find suggested intensity windows for your application in clinical guidelines or from your scanner manufacturer. For example, to segment lungs from a chest CT, you might use the range [–1200, 600] HU, and to segment bone, which is more dense, you might use the range [–1400, 2400]. This figure shows a transverse slice of a chest CT with no intensity windowing, a "lung" intensity window, and a "bone" intensity window.
This code applies intensity windowing to the image stored in the DICOM file dicomFile, assuming the values of medVol.Voxels are in units of HU:

medVol = medicalVolume(dicomFile);
lungWindow = [-1200 600];
imWindowed = rescale(medVol.Voxels,0,1,InputMin=lungWindow(1),InputMax=lungWindow(2));
Preprocessing in Advanced Workflows

For an example that preprocesses image data as part of a deep learning workflow, see “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24.
See Also

specklefilt | resample | rescale | medicalVolume
Related Examples

• “Register Multimodal Medical Image Volumes with Spatial Referencing” on page 4-11
• “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24
• “Segment Lungs from CT Scan Using Pretrained Neural Network” on page 6-14
More About

• “Types of Morphological Operations”
• “Train and Apply Denoising Neural Networks”
• “Medical Image Registration” on page 4-6
Medical Image Registration

Medical image registration is the process of aligning multiple medical images, volumes, or surfaces to a common coordinate system. In medical imaging, you may need to compare scans of multiple patients or scans of the same patient taken in different sessions under different conditions. Use medical image registration as a preprocessing step to align the medical images to a common coordinate system before you analyze them.
Scenarios for Medical Image Registration

Classification by Type of Misalignment

• Translation registration - Required when the two images, volumes, or surfaces differ by a global shift common to all pixels, voxels, or points.
• Rigid registration - Required when the two images, volumes, or surfaces differ by a global shift and a global rotation common to all pixels, voxels, or points.
• Similarity registration - Required when the two images, volumes, or surfaces differ by a global shift, a global rotation, and a global scale factor common to all pixels, voxels, or points.
• Affine registration - Required when the two images, volumes, or surfaces differ by a global shift, a global rotation, a global scale factor, and a global shear factor common to all pixels, voxels, or points.
• Deformable registration (also known as non-rigid registration) - Required when the images, volumes, or surfaces differ by local transformations specific to certain pixels, voxels, or points.
Classification by Type of Input

• Image registration - Aligns 2-D grayscale images.
• Volume registration - Aligns 3-D intensity volumes.
• Surface registration - Aligns surfaces extracted from 3-D intensity volumes.
• Groupwise registration - Aligns slices in a series of 2-D medical images, such as a time series, to reduce sliding motion between the slices.
Functions for Medical Image Registration

Medical Imaging Toolbox provides various functions for medical image registration.

imregister
  Type of misalignment: Translation, Rigid, Similarity, Affine
  Type of input: 2-D grayscale images; 3-D intensity volumes
  Function details: Specify configuration using imregconfig. Transform returned by imregtform.
  Method: Optimization-based technique. Regular step gradient descent optimizer with mean squares metric for monomodal configuration. One-plus-one evolutionary optimizer with Mattes mutual information metric for multimodal configuration.

imregicp
  Type of misalignment: Translation, Rigid
  Type of input: Surfaces
  Function details: Transform returned by function itself.
  Method: Optimization-based technique. Uses the iterative closest point (ICP) algorithm.

imregmtb
  Type of misalignment: Translation
  Type of input: 2-D grayscale and RGB images
  Function details: Transform returned by function itself.
  Method: Fast registration technique. Uses median threshold bitmap (MTB) method. Suitable for monomodal and multimodal images.

imregmoment
  Type of misalignment: Translation, Rigid, Similarity
  Type of input: 2-D grayscale images; 3-D intensity volumes
  Function details: Transform returned by function itself.
  Method: Fast registration technique. Uses moment of mass method, with the option to also use the median threshold bitmap (MTB) method. Suitable for monomodal and multimodal images and volumes.

imregdemons
  Type of misalignment: Deformable
  Type of input: 2-D grayscale images; 3-D intensity volumes
  Function details: Displacement field returned by function itself.
  Method: Deformable registration technique. Uses a diffusion-based Demons algorithm.

imregdeform
  Type of misalignment: Deformable
  Type of input: 2-D grayscale images; 3-D intensity volumes
  Function details: Displacement field returned by function itself.
  Method: Deformable registration technique. Uses the isotropic total variation regularization method.

imreggroupwise
  Type of misalignment: Deformable
  Type of input: 3-D image series consisting of 2-D slices
  Function details: Does not require a reference image. Displacement field returned by function itself.
  Method: Groupwise registration technique. Uses the isotropic total variation regularization method.

imregcorr
  Type of misalignment: Translation, Rigid, Similarity
  Type of input: 2-D grayscale, binary, and RGB images
  Function details: Function returns only the transform. Use imwarp to apply the transformation and get the registered image.
  Method: FFT-based technique. Uses the phase correlation method.
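To illustrate the transform-then-warp pattern noted for imregcorr above, here is a minimal sketch, assuming moving and fixed are hypothetical 2-D grayscale images of the same scene.

% Minimal sketch: estimate a transform with imregcorr and apply it with
% imwarp, assuming moving and fixed are hypothetical 2-D grayscale images.
tform = imregcorr(moving,fixed);
Rfixed = imref2d(size(fixed));
movingReg = imwarp(moving,tform,OutputView=Rfixed);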
See Also

Functions
imregister | imregtform | imregconfig | imregicp | imregmtb | imregmoment | imregdemons | imregdeform | imreggroupwise | imregcorr | imwarp
Related Examples

• “Register Multimodal Medical Image Volumes with Spatial Referencing” on page 4-11
Register Multimodal Medical Image Volumes with Spatial Referencing

This example shows how to align two medical image volumes using moment-of-mass-based registration.

Multimodal image registration aligns images acquired using different imaging modalities, such as MRI and CT. Even when acquired for the same patient and region of interest, multimodal images can be misaligned due to differences in patient positioning (translation or rotation) and voxel size (scaling). Different imaging modalities often have different voxel sizes due to scanner variability and concerns about scan time and radiation dose. The imregmoment function enables fast registration of 3-D image volumes, including multimodal images.

Medical images have an intrinsic coordinate system defined by rows, columns, and slices and a patient coordinate system that describes the real-world position and voxel spacing. You can use the medicalVolume object to import and manage the spatial referencing of a medical image volume. To register images with different voxel sizes using imregmoment, you must use spatial referencing to register the images in the patient coordinate system. During registration, imregmoment aligns a moving image with a fixed image, and then resamples the aligned image in the intrinsic coordinates of the fixed image. The registered volumes share one patient coordinate system and have the same number of rows, columns, and slices. In this example, you use medicalVolume and imregmoment to register multimodal MRI and CT images of the head.

Load Images

The data used in this example is a modified version of the 3-D CT and MRI data sets from The Retrospective Image Registration Evaluation (RIRE) Dataset, provided by Dr. Michael Fitzpatrick. For more information, see the RIRE Project homepage. The modified data set contains one CT scan and one MRI scan stored in the NRRD file format. The size of the entire data set is approximately 35 MB. Download the data set from the MathWorks® website, then unzip the folder.

zipFile = matlab.internal.examples.downloadSupportFile("medical", ...
    "MedicalRegistrationNRRDdata.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
In image registration, consider one image to be the fixed image and the other image to be the moving image. The goal of registration is to align the moving image with the fixed image. In this example, the fixed image is a T1 weighted MRI image. The moving image is a CT image from the same patient. The images are stored in the NRRD file format.

Use medicalVolume to read the MRI image. By looking at the Voxels and VoxelSpacing properties, you can determine that the MRI volume is a 256-by-256-by-26 voxel array, where each voxel is 1.25-by-1.25-by-4.0 mm.

filenameMRI = fullfile(filepath,"supportfilesNRRD","Patient007MRT1.nrrd");
fixedMRIVolume = medicalVolume(filenameMRI)

fixedMRIVolume = 
  medicalVolume with properties:

                 Voxels: [256×256×26 single]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "unknown"
            Orientation: "unknown"
           VoxelSpacing: [1.2500 1.2500 4]
           NormalVector: [0 0 1]
       NumCoronalSlices: []
      NumSagittalSlices: []
    NumTransverseSlices: []
           PlaneMapping: ["unknown" "unknown" "unknown"]
               Modality: "unknown"
          WindowCenters: []
           WindowWidths: []
Import the CT image. The size of each voxel in the CT image is 0.65-by-0.65-by-4 mm, which is smaller than in the MRI image.

filenameCT = fullfile(filepath,"supportfilesNRRD","Patient007CT.nrrd");
movingCTVolume = medicalVolume(filenameCT)

movingCTVolume = 
  medicalVolume with properties:

                 Voxels: [512×512×28 single]
         VolumeGeometry: [1×1 medicalref3d]
           SpatialUnits: "unknown"
            Orientation: "unknown"
           VoxelSpacing: [0.6536 0.6536 4]
           NormalVector: [0 0 1]
       NumCoronalSlices: []
      NumSagittalSlices: []
    NumTransverseSlices: []
           PlaneMapping: ["unknown" "unknown" "unknown"]
               Modality: "unknown"
          WindowCenters: []
           WindowWidths: []
Extract the image voxel data from the Voxels property of the medicalVolume objects.

fixedMRIVoxels = fixedMRIVolume.Voxels;
movingCTVoxels = movingCTVolume.Voxels;
Display Unregistered Images

Use the imshowpair function to judge image alignment. First, calculate the location of the middle slice of each volume. The VolumeGeometry property of the medical volume object contains a medicalref3d object, which specifies spatial details about the volume. Use the VolumeSize property of each medicalref3d object to calculate the location of the middle slice of the corresponding volume, in voxels.

fixedVolumeSize = fixedMRIVolume.VolumeGeometry.VolumeSize;
movingVolumeSize = movingCTVolume.VolumeGeometry.VolumeSize;
centerFixed = fixedVolumeSize/2;
centerMoving = movingVolumeSize/2;
Define spatial referencing for the 2-D slices using imref2d. Specify the size, in voxels, and voxel spacing, in millimeters, of the transverse slices.

fixedVoxelSpacing = fixedMRIVolume.VoxelSpacing;
movingVoxelSpacing = movingCTVolume.VoxelSpacing;
Rfixed2D = imref2d(fixedVolumeSize(1:2),fixedVoxelSpacing(2),fixedVoxelSpacing(1));
Rmoving2D = imref2d(movingVolumeSize(1:2),movingVoxelSpacing(2),movingVoxelSpacing(1));
Display the image slices in the patient coordinate system using imshowpair. Plot the slice in the third spatial dimension, which corresponds to the transverse anatomical plane. The MRI slice is magenta, and the CT slice is green.

figure
imshowpair(movingCTVoxels(:,:,centerMoving(3)), ...
    Rmoving2D, ...
    fixedMRIVoxels(:,:,centerFixed(3)), ...
    Rfixed2D)
title("Unregistered Transverse Slice")
You can also display the volumes as 3-D objects. Create a viewer3d object, in which you can display multiple volumes.

viewerUnregistered = viewer3d(BackgroundColor="black",BackgroundGradient="off");
Display the medicalVolume objects as 3-D isosurfaces using volshow. The volshow function uses medicalVolume properties to display each volume in its respective patient coordinate system.

volshow(fixedMRIVolume, ...
    Parent=viewerUnregistered, ...
    RenderingStyle="Isosurface", ...
    IsosurfaceValue=0.05, ...
    Colormap=[1 0 1], ...
    Alphamap=1);

Warning: The volume's VolumeGeometry.PatientCoordinateSystem property is not set. Assuming "LPS+"

volshow(movingCTVolume, ...
    Parent=viewerUnregistered, ...
    RenderingStyle="Isosurface", ...
    IsosurfaceValue=0.05, ...
    Colormap=[0 1 0], ...
    Alphamap=1);

Warning: The volume's VolumeGeometry.PatientCoordinateSystem property is not set. Assuming "LPS+"
Register Images

To accurately register volumes with different voxel dimensions, specify the spatial referencing for each volume using imref3d objects. Create each imref3d object using the volume size in voxels and the voxel dimensions in millimeters.

Rfixed3d = imref3d(fixedVolumeSize,fixedVoxelSpacing(2), ...
    fixedVoxelSpacing(1),fixedVoxelSpacing(3));
Rmoving3d = imref3d(movingVolumeSize,movingVoxelSpacing(2), ...
    movingVoxelSpacing(1),movingVoxelSpacing(3));
The imregmoment function sets fill pixels added to the registered volume to 0. To improve the display of registration results, scale the CT intensities to the range [0, 1], so that the fill value is equal to the minimum of the image data range.

rescaledMovingCTVoxels = rescale(movingCTVoxels);
Register the volumes using imregmoment, including the imref3d objects as inputs. Specify the MedianThresholdBitmap name-value argument as true, which is appropriate for multimodal images.

[geomtform,movingCTRegisteredVoxels] = imregmoment(rescaledMovingCTVoxels,Rmoving3d, ...
    fixedMRIVoxels,Rfixed3d,MedianThresholdBitmap=true);
The geomtform output is an affinetform3d geometric transformation object. The A property of geomtform contains the 3-D affine transformation matrix that maps the moving CT volume in its patient coordinate system to the fixed MRI volume in its patient coordinate system.

geomtform.A

ans = 4×4

    1.0000    0.0039   -0.0014   -4.8094
   -0.0039    1.0000   -0.0063  -16.0063
    0.0013    0.0063    1.0000   -1.3481
         0         0         0    1.0000
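To interpret the transformation, you can map a sample point from the CT patient coordinate system into the MRI patient coordinate system. This sketch is not part of the original example, and the point coordinates are arbitrary illustrative values.

% Map a point (x,y,z), in mm, from the moving CT patient coordinate
% system to the fixed MRI patient coordinate system.
[xFixed,yFixed,zFixed] = transformPointsForward(geomtform,10,-20,30)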
The movingCTRegisteredVoxels output contains the registered CT volume. The imregmoment function resamples the aligned CT volume in the MRI intrinsic coordinates, so the registered volumes have the same number of rows, columns, and slices.

whos movingCTRegisteredVoxels fixedMRIVoxels

  Name                        Size          Bytes      Class     Attributes

  fixedMRIVoxels              256x256x26    6815744    single
  movingCTRegisteredVoxels    256x256x26    6815744    single
Create medicalVolume Object for Registered Image

Create a new medicalVolume object that contains the registered voxel data and its spatial referencing information. You can create a medicalVolume object by specifying a voxel array and a medicalref3d object. The registered CT volume has the same spatial referencing as the original MRI volume, so use the medicalref3d object stored in the VolumeGeometry property of fixedMRIVolume.

R = fixedMRIVolume.VolumeGeometry;
movingRegisteredVolume = medicalVolume(movingCTRegisteredVoxels,R)

movingRegisteredVolume = 
  medicalVolume with properties:

                   Voxels: [256×256×26 single]
           VolumeGeometry: [1×1 medicalref3d]
             SpatialUnits: "unknown"
              Orientation: "unknown"
             VoxelSpacing: [1.2500 1.2500 4]
             NormalVector: [0 0 1]
         NumCoronalSlices: []
        NumSagittalSlices: []
      NumTransverseSlices: []
             PlaneMapping: ["unknown" "unknown" "unknown"]
                 Modality: "unknown"
            WindowCenters: []
             WindowWidths: []
Display Registered Images

To check the registration results, use imshowpair to view the middle transverse slices of the fixed and registered volumes. Use the spatial referencing object for the fixed volume.

figure
imshowpair(movingRegisteredVolume.Voxels(:,:,centerFixed(3)), ...
    Rfixed2D,fixedMRIVoxels(:,:,centerFixed(3)),Rfixed2D)
title("Registered Transverse Slice")
Display the registered volumes as 3-D isosurfaces using volshow. You can zoom and rotate the display to assess the registration results.

viewerRegistered = viewer3d(BackgroundColor="black",BackgroundGradient="off");
volshow(fixedMRIVolume, ...
    Parent=viewerRegistered, ...
    RenderingStyle="Isosurface", ...
    IsosurfaceValue=0.05, ...
    Colormap=[1 0 1], ...
    Alphamap=1);

Warning: The volume's VolumeGeometry.PatientCoordinateSystem property is not set. Assuming "LPS+"

volshow(movingRegisteredVolume, ...
    Parent=viewerRegistered, ...
    RenderingStyle="Isosurface", ...
    IsosurfaceValue=0.05, ...
    Colormap=[0 1 0], ...
    Alphamap=1);

Warning: The volume's VolumeGeometry.PatientCoordinateSystem property is not set. Assuming "LPS+"
See Also
imregmoment | medicalref3d | medicalVolume

Related Examples

• “Medical Image Registration” on page 4-6
5

Medical Image Labeling

• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Automate Labeling in Medical Image Labeler” on page 5-21
• “Collaborate on Multi-Labeler Medical Image Labeling Projects” on page 5-29
Label 2-D Ultrasound Series Using Medical Image Labeler

This example shows how to label 2-D image data using the Medical Image Labeler app. The Medical Image Labeler app provides manual, semi-automated, and automated tools for labeling 2-D medical image data. This example labels the left ventricle in an echocardiogram ultrasound image series.

Open the Medical Image Labeler

Open the Medical Image Labeler app from the Apps tab on the MATLAB® toolstrip, under Image Processing and Computer Vision. You can also load the app by using the medicalImageLabeler command.
Create New Image Labeling Session

To start a new 2-D labeling session, on the app toolstrip, click New Session and select New Image session (2-D). In the Create a new session folder dialog box, specify a location in which to save the new session folder by entering a path or selecting Browse and navigating to your desired location. In the New Session Folder dialog box, specify a name for the folder for this labeling session. Then, select Create Session.
Load Image Data into Medical Image Labeler

To load an image into the Medical Image Labeler app, on the app toolstrip, click Import Data. Then, under Data, select From File. Browse to the location of heartUltrasoundSequenceVideo.dcm in the same directory as this example file. For an image session, the imported data file must be a single DICOM or NIfTI file containing a 2-D image or a series of 2-D images related by time. The name of the imported image is visible in the Data Browser pane. The no labels symbol next to the file name indicates that the image file does not contain any pixel labels.

You can import multiple 2-D image files into an image session. All files imported into a single app session must label the same regions of interest, such as tumor or ventricle, and are exported together as one groundTruthMedical object.
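Optionally, you can confirm the contents of a file programmatically before importing it. This sketch is not part of the original example; it assumes heartUltrasoundSequenceVideo.dcm is on the MATLAB path.

% Read the ultrasound series and check that it is a multiframe image.
medImg = medicalImage("heartUltrasoundSequenceVideo.dcm");
medImg.NumFrames              % number of 2-D frames in the series
size(extractFrame(medImg,1))  % dimensions of a single frame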
Explore the Image Series

The Medical Image Labeler app displays the imported image in the Slice pane. By default, the app displays the middle frame in the ultrasound series. You can change the displayed frame by using the scroll bar at the bottom of the Slice pane, or you can click the pane and then press the left and right arrow keys. The app displays the current frame number out of the total number of frames, such as 58/116. You can zoom in on the current frame using the mouse scroll wheel or the zoom controls that appear when you pause on the Slice pane.
Explore the echocardiogram image series to identify the left ventricle. The annotated image shows the approximate outline of the ventricle to label. Note that all labels in this example have been created for illustrative purposes only and have not been verified by a clinical professional.
Create Label Definitions

A label definition specifies the name, color, and numeric index of a label. In the Label Definitions pane, select Create Label Definition to create a label with the default name Label1. To change the name of the label, click the label and enter a new name. The label name must be a valid MATLAB variable name with no spaces. For this example, specify the name of the label as LeftVentricle. To change the default color associated with the label, click the colored square in the label identifier and select a color from the Color dialog box.
Use Drawing Tools to Label Regions in Image

To assign pixels to the LeftVentricle label, click the LeftVentricle label in the Label Definitions pane. You can use the tools on the Draw tab in the app toolstrip to define the region. You can choose from the Freehand, Assisted Freehand, Polygon, Paint Brush, and Trace Boundary tools. Label frame 1. When you add the label, the app adds a bar above the slider, using the color associated with the label, to indicate which frames you have labeled.
In addition to the drawing tools, you can add or refine label regions using the tools in the Automate tab of the app toolstrip. The app provides automation algorithms including Active Contours, Adaptive Threshold, Dilate, and Erode. To apply an algorithm, click Algorithm Parameters to adjust settings, if applicable, and click Run. You can also specify a custom range of frames to process by specifying a Start frame and an End frame.

Use Interpolation to Speed Up Labeling

You can move through the image series and draw labels frame-by-frame, but the Medical Image Labeler app also provides interpolation tools in the Draw tab that can help you label an object between frames. Interpolation is most suitable between frames where the region of interest has a similar shape and size. In an echocardiogram, the ventricle experiences cycles of contraction, during which the ventricle rapidly changes in shape, and relaxation, during which the ventricle is relatively still. Therefore, you can most effectively use interpolation between the start and end of a relaxation period, such as between frames 20 and 38.

To use interpolation, you must first manually label a region in two frames. Label the ventricle in frame 20 and frame 38. If the Auto Interpolate button is not active, make sure the labeled region is selected by clicking Select Drawn Region in the app toolstrip and selecting the labeled region.
Click Auto Interpolate. The app automatically labels the ventricle in the intermediate frames. The app adds bars above the slider to indicate all the frames that have labeled pixels, which now appear as a solid bar from frame 20 to frame 38.

Alternatively, after labeling an ROI on two frames, you can click Manually Interpolate. With this option, the app opens the Manually Interpolate dialog box. Select the two regions between which you want to interpolate, Region One and Region Two. To select the first region, use the slider at the bottom of the dialog box to navigate to frame 20, and then click inside the labeled ventricle. To select the second region, click Region Two, navigate to frame 38, and click inside the labeled ventricle. After selecting both regions, click Run to interpolate the label in the intermediate frames.
After using interpolation, check the individual frames to see if the interpolation created satisfactory labels. You can manually correct labels using the Paint Brush and Eraser tools. Alternatively, you can refine the labels using one of the tools in the Automate tab. For example, you can use Active Contours to grow an ROI in a frame where it does not fill the full area of the ventricle. You can also undo the interpolation and try interpolating across fewer frames to improve results.

Modify Labels

To refine drawn labels, you can remove label data from individual pixels, from individual frames within an image series, from an entire image, or from the whole labeling session.

• Remove labels from individual pixels — In the Draw tab, use the Eraser tool.
• Remove labels from one connected region in a slice — Click Select Drawn Region and, in the slice pane, select the region from which you want to remove labels. You can also select the label to remove by right-clicking on the label within the slice plane and selecting Select Drawn Region. Press Delete, or right-click and select Delete, to remove labels from the region.
• Remove all labels from an image series — Right-click the file name of the image series in the Data Browser and select Remove Labels. Removing labels from an image removes all pixel labels for all label definitions within the file.
• Remove a label from all images within the app session — In the Label Definitions pane, right-click the label name and select Delete. This deletes the label from the groundTruthMedical object in the session folder, and removes all pixel labels for the deleted label name in all images in the session.

Apply Custom Automation Algorithm

You can add a custom automation algorithm to use in the app. On the Automate tab, click Add Algorithm. Import an existing algorithm by selecting From File, or create a new algorithm using the provided function or class template. See “Automate Labeling in Medical Image Labeler” on page 5-21 for an example that applies a custom automation algorithm for 2-D labeling.

Under Slice-Based, select the New option and click Function Template to create a new function that operates on each 2-D image frame. The app opens the template in the MATLAB editor. Replace the sample code in the template with the code for your algorithm. Your function must accept two arguments: each frame as a separate image, and a mask. Your function must also return a mask image. When you are done editing the template, save the file. The Medical Image Labeler app automatically creates a button in the Automate tab of the toolstrip for your function. To test your function, select it and click Run. By default, the app applies the function to only the current frame.
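As a reference for the expected signature, this is a minimal sketch of such a function; it is a hypothetical illustration using a simple threshold, not one of the shipped algorithms.

function MASK = mythreshold(I,MASK)
% Minimal slice-based automation function sketch (hypothetical).
% I    - image data for the current frame
% MASK - initial label mask for the current frame
    if size(I,3) == 3
        I = rgb2gray(I);   % work on intensity values
    end
    % Label pixels brighter than an automatically chosen threshold.
    MASK = imbinarize(I);
end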
After testing your function on a single frame, you can run it on all of the frames, or a subset of the frames. You can run it from the current frame to the end (the highest numbered frame) or from the current frame back to the beginning (frame 1). You can also specify a range of frames by specifying the starting frame and the ending frame.

Export Ground Truth Data

For this example, the labels for heartUltrasoundSequenceVideo.dcm are complete when you have labeled the ventricle in each frame. The app automatically assigns a value of 0 to unlabeled pixels in the label images saved to the session folder, so you do not need to label the background manually.

The Medical Image Labeler app automatically saves a groundTruthMedical object as a MAT file in the session folder. You can also export a groundTruthMedical object, saved as a MAT file, to an alternate file location from the app. On the Labeler tab, click Export and, under Ground Truth, select To File.

You can load the exported MAT file into the MATLAB workspace using the load function. The properties of the groundTruthMedical object, gTruthMed, contain information about the image data source, label definitions, and location of the saved label images. Display information about the object and each of its properties using these commands.
• gTruthMed — Display the properties of the groundTruthMedical object.
• gTruthMed.DataSource — Location of the source of the unlabeled medical images and image series.
• gTruthMed.LabelDefinitions — Table of information about label definitions.
• gTruthMed.LabelData — Locations of the saved, labeled images.
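For example, this sketch loads an exported file and displays each property; the filename gTruthMed.mat stands in for whatever name you chose during export.

matFile = load("gTruthMed.mat");  % name chosen during export
gTruthMed = matFile.gTruthMed;
gTruthMed                         % display all object properties
gTruthMed.DataSource              % source of the unlabeled images
gTruthMed.LabelDefinitions        % table of label definitions
gTruthMed.LabelData               % locations of the saved label images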
See Also
Medical Image Labeler | groundTruthMedical

Related Examples

• “Get Started with Medical Image Labeler” on page 1-10
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Automate Labeling in Medical Image Labeler” on page 5-21
Label 3-D Medical Image Using Medical Image Labeler

This example shows how to label 3-D image data using the Medical Image Labeler app. The Medical Image Labeler app provides manual, semi-automated, and automated tools for labeling 3-D medical image data. This example segments a chest CT volume to label a lung tumor region.

Download Data to Label

This example labels chest CT data from a subset of the Medical Segmentation Decathlon data set [1 on page 5-20]. The size of the subset of data is approximately 76 MB. Download the MedicalVolumeNIfTIData.zip file from the MathWorks® website, then unzip the file.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeNIfTIData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
dataFolder = fullfile(filepath,"MedicalVolumeNIfTIData");
Open the Medical Image Labeler

Open the Medical Image Labeler app from the Apps tab on the MATLAB® toolstrip, under Image Processing and Computer Vision. You can also load the app by using the medicalImageLabeler command.
Create New Volume Labeling Session

To start a new 3-D labeling session, on the app toolstrip, click New Session and select New Volume session (3-D). In the Create a new session folder dialog box, specify a location in which to save the new session folder by entering a path or selecting Browse and navigating to your desired location. In the New Session Folder dialog box, specify a name for the folder for this labeling session. Then, select Create Session.

Load Image Data into Medical Image Labeler

To load an image into the Medical Image Labeler app, on the app toolstrip, click Import Data. Then, under Data, select From File. Browse to the location where you downloaded the data, specified by the dataFolder workspace variable, and select the file lung_027.nii.gz. For a Volume Session, the imported data file can be a single DICOM or NIfTI file containing a 3-D image volume, or a directory containing multiple DICOM files corresponding to a single image volume. The name of the imported image is visible in the Data Browser pane. The no labels symbol next to the file name indicates that the volume does not contain any pixel labels.

You can import multiple 3-D image files into a Volume Session. All files imported into a single app session must label the same regions of interest, such as tumor or lung, and are exported together as one groundTruthMedical object. Import the other file in the download location, lung_043.nii.gz.
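You can also inspect a volume programmatically before importing it into the app. This sketch is not part of the original example; it assumes you have already run the download code above so that dataFolder exists.

% Read the CT volume and check its size and voxel spacing.
medVol = medicalVolume(fullfile(dataFolder,"lung_027.nii.gz"));
size(medVol.Voxels)     % volume dimensions, in voxels
medVol.VoxelSpacing     % voxel dimensions, in the volume's spatial units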
Explore the Image Volume

The Medical Image Labeler app displays a 3-D rendering of the scan in the 3-D Volume pane, and displays anatomical slice planes in the Transverse, Sagittal, and Coronal panes. You can toggle the visibility of the volume display using the Show Volume button on the Labeler tab of the app toolstrip. By default, the app displays the center slice in each slice plane. You can change which slice is displayed by using the scroll bar at the bottom of a slice pane, or you can click the pane and then press the left and right arrow keys. The app displays the current slice number out of the total number of slices, such as 132/264, for each slice pane. The app also displays anatomical display markers indicating the anterior (A), posterior (P), left (L), right (R), superior (S), and inferior (I) directions. You can zoom in on the current slice pane using the mouse scroll wheel or the zoom controls that appear when you pause on the slice pane.

You can also navigate between slices in the three planes using crosshair navigation. To turn on the crosshair indicators, in the Labeler tab of the app toolstrip, select Crosshair Navigation. The indicators show the relative positions of the slices in the other 2-D views. To navigate slices, pause on one of the crosshair axes until the cursor changes to the fleur shape, and then click and drag to a new position. The other slice views update automatically. To hide the crosshairs, in the app toolstrip, clear Crosshair Navigation.
By default, the app displays the scan using the Radiological display convention, with the left side of the patient on the right side of the image. To display the left side of the patient on the left side of the image, on the app toolstrip, click Preferences, and in the dialog box that opens, change the Display Convention to Neurological.

To change the brightness or contrast of the display, adjust the display window. Changing the display window does not modify the underlying image data. To update the display window, you can directly edit the values in the Window Level and Window Width boxes in the app toolstrip.
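For reference, the display window maps voxel intensities linearly to the display range: values from WindowLevel - WindowWidth/2 through WindowLevel + WindowWidth/2 span black to white. This sketch, which is not part of the original example, reproduces that mapping for one transverse slice outside the app; the window values and slice index are hypothetical.

% Display one transverse slice with a hypothetical display window.
V = niftiread(fullfile(dataFolder,"lung_027.nii.gz"));
windowLevel = -500;                        % example window center
windowWidth = 1400;                        % example window width
lowerLimit = windowLevel - windowWidth/2;  % maps to black
upperLimit = windowLevel + windowWidth/2;  % maps to white
imshow(V(:,:,90),[lowerLimit upperLimit])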
Alternatively, adjust the window using the Window Level tool. First, select the Window Level tool in the app toolstrip. Then, click and hold in any of the slice panes, and drag up and down to increase and decrease the brightness, respectively, or left and right to increase and decrease the contrast. The Window Level and Window Width values in the toolstrip automatically update as you drag. Clear the tool selection to deactivate it. To reset the brightness and contrast, click Window Level and select Reset. You can swap the mouse actions, such that dragging vertically changes the contrast and dragging horizontally changes the brightness, in the Preferences dialog box.

With lung_027.nii.gz selected in the Data Browser pane, explore the chest volume to identify the tumor region you want to label. The tumor is visible between slices 79 and 105 in the Transverse slice pane.
Create Label Definitions

A label definition specifies the name, color, and numeric index of a label. In the Label Definitions pane, select Create Label Definition to create a label with the default name Label1. To change the name of the label, double-click on the label and type in a new name. The label name must be a valid MATLAB® variable name with no spaces. For this example, specify the name of the label as tumor. To change the default color associated with the label, double-click on the colored square in the label identifier and select a color from the Color dialog box.
Use Drawing Tools to Label Regions in Image

Click on the tumor label in the Label Definitions pane to assign pixels to the tumor label. You can use the tools on the Draw tab in the app toolstrip to define the region. You can choose from the Freehand, Assisted Freehand, Polygon, Paint Brush, and Trace Boundary tools. This example uses the Paint Brush tool, with Paint by Superpixels enabled, to label the tumor, but you can use any of the drawing tools. You can draw labels in the Transverse, Sagittal, or Coronal slice panes. Label slice 79. When you add the label, the app adds a bar above the slider, using the color associated with the label, to indicate which slices you have labeled.
In addition to the drawing tools, you can add or refine label regions using the tools in the Automate tab of the app toolstrip. The app provides slice-based automation algorithms including Active Contours, Adaptive Threshold, Dilate, and Erode, as well as volume-based algorithms including Filter and Threshold, Smooth Edges, and Otsu's Threshold. To apply a slice-based algorithm, click Algorithm Parameters to adjust settings if applicable, select an option from the Slice Direction list, and click Run. You can also specify a custom range of slices to process by specifying a Start slice and an End slice.

Use Interpolation to Speed Up Labeling

You can move through the image volume and draw labels slice-by-slice. The Medical Image Labeler app also provides interpolation tools in the Draw tab that can help with labeling an object between slices. To use interpolation, you must first manually label a region on two slices. You have already labeled the tumor in slice 79. Use the same process to label the tumor in slice 83. If the Auto Interpolate button is not active, make sure the labeled region is selected by clicking Select Drawn Region in the app toolstrip and selecting the labeled region.
Click Auto Interpolate. The app automatically labels the tumor in the intermediate slices. The app adds bars above the slider to indicate all the slices that have labeled pixels, which now appear as a solid bar from slice 79 to slice 83.

Alternatively, after labeling an ROI on two slices, you can click Manually Interpolate. With this option, the app opens the Manually Interpolate dialog box. Select the two regions between which you want to interpolate, Region One and Region Two. To select the first region, use the slider at the bottom of the dialog box to navigate to slice 79, and then click inside the labeled tumor. To select the second region, click Region Two, navigate to slice 83, and click inside the labeled tumor. After selecting both regions, click Run to interpolate the label in the intermediate slices.
After using interpolation, check the individual slices to see if the interpolation created satisfactory labels. You can manually correct labels using the Paint Brush and Eraser tools. Alternatively, you can refine the labels using one of the tools in the Automate tab. For example, you can use Active Contours to grow an ROI in a slice where it does not fill the full area of the tumor. You can also undo the interpolation and try interpolating across fewer slices to improve results.

Visualize and Modify Labels

As you draw labels, you can view them as an overlay in the 3-D Volume pane. In the warning message at the top of the 3-D Volume pane, click Update. Note that Update is only available after you update the label data in a slice pane. To adjust the opacity of all labels in the session, in the Labeler tab of the app toolstrip, adjust the Label Opacity slider. To toggle the visibility of an individual label, in the Label Definitions pane, click the eye symbol next to the label name.
To refine drawn labels, you can remove label data from individual pixels, from individual slices within an image, from an entire image, or from the whole labeling session.

• Remove labels from individual pixels — In the Draw tab, use the Eraser tool.
• Remove labels from one connected region in a slice — Click Select Drawn Region and, in the slice pane, select the region from which you want to remove labels. You can also select the label to remove by right-clicking on the label within the slice plane and selecting Select Drawn Region. Press Delete, or right-click and select Delete, to remove labels from the region.
• Remove all labels from a slice — Right-click anywhere on the slice and select Select All. Press Delete, or right-click and select Delete, to remove all labels from the slice.
• Remove all labels from an image volume — Right-click the file name of the image volume in the Data Browser and select Remove Labels. Removing labels from an image volume removes all pixel labels for all label definitions within the file.
• Remove a label from all images within the app session — In the Label Definitions pane, right-click the label name and select Delete. This deletes the label from the groundTruthMedical object in the session folder, and removes all pixel labels for the deleted label name in all images in the session.

Apply Custom Automation Algorithm

You can add a custom automation algorithm to use in the app. On the Automate tab, click Add Algorithm. Import an existing algorithm by selecting From File, or create a new algorithm using the provided function or class template. See “Automate Labeling in Medical Image Labeler” on page 5-21 for an example that applies a custom automation algorithm for 2-D labeling.

For this example, under Slice-Based, select the New option and click Function Template to create a new function that operates on each 2-D image slice. The app opens the template in the MATLAB editor. Replace the sample code in the template with code that you want to use. Your function must accept two arguments: each slice as a separate image, and a mask. Your function must also return a mask image. When you are done editing the template, save the file. The Medical Image Labeler app automatically creates a button in the Automate tab toolstrip for your function. To test your function, select an option from the Slice Direction list and click Run. By default, the app applies the function to only the current slice.
After testing your function on a single slice, you can run it on all of the slices, or a subset of the slices, in the selected direction. You can run it from the current slice to the end (the highest numbered slice) or from the current slice back to the beginning (slice 1). You can also specify a range of slices by specifying the starting slice and the ending slice.

Label Next Image Volume

For this example, the labels for lung_027.nii.gz are complete when you have labeled each slice of the tumor. Medical Image Labeler automatically assigns a value of 0 to unlabeled pixels in the label images saved to the session folder, so you do not need to label the background manually. To move to the next volume, select lung_043.nii.gz in the Data Browser pane. Repeat the labeling process to draw labels on the tumor region in this volume.
Export Ground Truth Data

The Medical Image Labeler app automatically saves a groundTruthMedical object as a MAT file in the session folder. You can also export a groundTruthMedical object, saved as a MAT file, to an alternate file location from the app. On the Labeler tab, click Export and, under Ground Truth, select To File.

You can load the exported MAT file into the MATLAB workspace using the load function. The properties of the groundTruthMedical object, gTruthMed, contain information about the image data source, label definitions, and location of the saved label images. Display information about the object and each of its properties using these commands.

• gTruthMed — Display the properties of the groundTruthMedical object.
• gTruthMed.DataSource — Location of the source of the unlabeled medical volumes.
• gTruthMed.LabelDefinitions — Table of information about label definitions.
• gTruthMed.LabelData — Locations of the saved, labeled images.

References

[1] Medical Segmentation Decathlon. "Lung." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com/.

The Lung data set is provided by the Medical Segmentation Decathlon under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details. This example uses a subset of the original data set consisting of two CT volumes. The labels shown in this example were created for illustration purposes and have not been verified by a clinician.
See Also
Medical Image Labeler | groundTruthMedical

Related Examples

• “Get Started with Medical Image Labeler” on page 1-10
• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Automate Labeling in Medical Image Labeler” on page 5-21
Automate Labeling in Medical Image Labeler

This example shows how to implement a deep learning automation algorithm for labeling tumors in breast ultrasound images by using the Medical Image Labeler app. Semantic segmentation assigns a class label to every pixel in an image. The Medical Image Labeler app provides manual, semi-automated, and automated tools to label medical images for semantic segmentation. Using automation, you can create and apply custom segmentation functions that use image processing or deep learning. For an example that shows how to train the network used in this example, see “Breast Tumor Segmentation from Ultrasound Using Deep Learning” on page 6-32.
Download Pretrained Network

Create a folder to store the pretrained network and image data set.

dataDir = fullfile(tempdir,"BreastSegmentation");
if ~exist(dataDir,"dir")
    mkdir(dataDir)
end
Download the pretrained DeepLab v3+ network and test image by using the downloadTrainedNetwork helper function. The helper function is attached to this example as a supporting file.

pretrainedNetwork_url = "https://www.mathworks.com/supportfiles/"+ ...
    "image/data/breastTumorDeepLabV3.tar.gz";
downloadTrainedNetwork(pretrainedNetwork_url,dataDir);
Completely extract the contents of the TAR GZ file.

gunzip(fullfile(dataDir,"breastTumorDeepLabV3.tar.gz"),dataDir);
untar(fullfile(dataDir,"breastTumorDeepLabV3.tar"),dataDir);
exampleDir = fullfile(dataDir,"breastTumorDeepLabV3");
Download Image Data

This example uses a subset of the Breast Ultrasound Images (BUSI) data set [1 on page 5-28]. The BUSI data set contains 2-D ultrasound images stored in the PNG file format. The total size of the data set is 197 MB. The data set contains 133 normal scans, 487 scans with benign tumors, and 210 scans with malignant tumors. This example uses images from the tumor groups only. Each ultrasound image has a corresponding tumor mask image. The tumor mask labels have been reviewed by clinical radiologists [1].
Run this code to download the data set from the MathWorks® website and unzip the downloaded folder.

zipFile = matlab.internal.examples.downloadSupportFile("image", ...
    "data/Dataset_BUSI.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
The imageDir folder contains the downloaded and unzipped data set.

imageDir = fullfile(filepath,"Dataset_BUSI_with_GT");
Extract a subset of 20 benign tumor images to a new folder, dicomDir. Write the copied images in the DICOM file format, which the Medical Image Labeler supports.

% The "*).png" pattern matches only the source images, whose filenames
% end in a closing parenthesis, and excludes the "_mask.png" label files.
imds = imageDatastore(fullfile(imageDir,"benign","*).png"));
dicomDir = fullfile(exampleDir,"images");
if ~exist(dicomDir,"dir")
    mkdir(dicomDir)
    for i = 1:20
        I = imread(imds.Files{i});
        [~,filename,~] = fileparts(imds.Files{i});
        dicomwrite(I,fullfile(exampleDir,"images",filename+".dcm"));
    end
end
Load Data Source Images into Medical Image Labeler

Open the Medical Image Labeler app and create a new 2-D image session.

medicalImageLabeler("Image")
The app launches and opens the Create a new session folder dialog box. In the dialog box, specify a location to save the new session folder by entering a path or selecting Browse and navigating to your desired location. Then, select Create Session.

To load the ultrasound images into the app, on the app toolstrip, click Import. Then, under Data, select From File. Browse to the location of dicomDir. Select all of the files by clicking the first filename, pressing Shift, and clicking the last filename. Click Open. The app loads the files and lists the names in the Data Browser pane.
Create Label Definition

A label definition specifies the name, color, and numeric index of a label. In the Label Definitions pane, select Create Label Definition to create a label with the default name Label1. To change the name of the label, click the label and enter a new name. The label name must be a valid MATLAB® variable name with no spaces. For this example, specify the name of the label as tumor.
Create Automation Algorithm

In the Automate tab of the app toolstrip, click Add Algorithm and select New > Function Template. In the function template, enter code for the automation algorithm and click Save. This example uses the default name for the algorithm, myalgorithm, but you can specify any name.

The automation function must have two inputs, I and MASK.
• I is the input data source image, which in this example is the ultrasound image. When you run the algorithm, the app automatically passes the data source image selected in the Data Browser to the algorithm as I.
• MASK is the initial label image. When you run the algorithm, the app automatically passes the currently selected label in the Label Definitions pane to the algorithm as MASK. The label image can be empty or contain some labeled pixels.

The automation function must return one output, MASK, which is the final label image. The MASK image must be a logical array of the same size as I, with pixels valued as true where the label is present.

The algorithm function used in this example, myalgorithm, performs these steps:

• Resize the input image I to the input size of the network.
• Load the pretrained network.
• Apply the network to the input image I by using the semanticseg function. The network returns the segmented image, segmentedImg, as a categorical array with labels "tumor" and "background".
• Convert the categorical label matrix to a binary image that is true within the tumor label region.
• Resize the logical label mask, MASK, back to the original size of the input image.

function MASK = myalgorithm(I,MASK)
%Medical Image Processing Function
%
% I    - RGB or grayscale image I that corresponds to the image data
%        of the slice during automation.
%
% MASK - A logical array where the first two dimensions match the
%        first two dimensions of input image I. If the user has
%        already created a labeled region, MASK may have pixels
%        labeled as true when passed to this function.
%
%--------------------------------------------------------------------------
% Auto-generated by the Medical Image Labeler App.
%
% When used by the App, this function will be called for every slice of the
% volume as specified by the user.
%
%--------------------------------------------------------------------------

% Replace the sample below with your code----------------------------------
imsize = size(I);
networkSize = [256 256];
I = imresize(I,networkSize);

persistent trainedNet
if isempty(trainedNet)
    pretrainedFolder = fullfile(tempdir,"BreastSegmentation","breastTumorDeepLabV3");
    savedData = load(fullfile(pretrainedFolder,"breast_seg_deepLabV3.mat"));
    trainedNet = savedData.trainedNet;
end

segmentedImg = semanticseg(I,trainedNet);

MASK = segmentedImg=="tumor";
MASK = imresize(MASK,imsize(1:2));
%--------------------------------------------------------------------------
end
Close the function template. In the Automate tab of the app toolstrip, your new algorithm appears in the Algorithm gallery.
Run Automation Algorithm

Select the image to run the algorithm on in the Data Browser pane. Make sure that the tumor label definition is selected in the Label Definitions pane. In the Automate tab of the app toolstrip, in the Algorithm gallery, select myalgorithm and select Run. If your data is a multiframe image series, you can adjust the Direction settings to specify whether the algorithm is applied to the current frame, from the current frame back to the first frame, or from the current frame to the last frame. Alternatively, you can directly enter a custom frame range using the Start and End text boxes. When the algorithm finishes running, the label image appears in the Slice panel. View the label image to visually check the network performance.
If you need to update the algorithm, reopen the function template in the location you specified when you saved the function template. Before rerunning an updated algorithm, you can remove labels from the first iteration by right-clicking the filename in the Data Browser and selecting Remove Labels. Alternatively, you can create new label definitions for each new iteration to visually compare results in the Slice pane.

Apply Algorithm to Batch of Images

Once you are satisfied with your automation algorithm, you can apply it to each image loaded in the app one by one by selecting the next file in the Data Browser and selecting Run. Alternatively, if you have many images to label, applying the algorithm outside the app can be faster.

To apply the algorithm outside the app, first export the app data as a groundTruthMedical object. On the Labeler tab of the app toolstrip, select Export. Under Ground Truth, select To File. In the Export Ground Truth dialog box, navigate to the folder specified by exampleDir and click Save. Then, load the groundTruthMedical object into the MATLAB workspace.

load(fullfile(exampleDir,"groundTruthMed.mat"))
Extract the list of label images in the groundTruthMedical object.

gTruthLabelData = gTruthMed.LabelData;
Find the data source images that contain no labels.

idxPixel = gTruthLabelData == "";
imagesNotProcessedIdx = find(idxPixel);
imageProcessedIdx = find(~idxPixel,1);
Find the name of the directory that contains the label images.

labelDataLocation = fileparts(gTruthLabelData(imageProcessedIdx));
Find the numeric pixel label ID for the tumor label definition.

idxPixel = strfind(gTruthMed.LabelDefinitions.Name,"tumor");
pixID = gTruthMed.LabelDefinitions.PixelLabelID(idxPixel);
Loop through the data source files with no labels, applying the myalgorithm function to each image. Write each output label image as a MAT file to the directory specified by labelDataLocation. Add the filenames of the new label images to the list gTruthLabelData.

for i = imagesNotProcessedIdx'
    imageFile = gTruthMed.DataSource.Source{i};
    medImage = medicalImage(imageFile);
    labels = zeros(size(medImage.Pixels,1:3),"uint8");
    for j = 1:medImage.NumFrames
        tumorMask = myalgorithm(extractFrame(medImage,j),[]);
        temp = labels(:,:,j);
        temp(tumorMask) = pixID;
        labels(:,:,j) = temp;
    end
    [~,filename] = fileparts(imageFile);
    filename = strcat(filename,".mat");
    labelFile = fullfile(labelDataLocation,filename);
    save(labelFile,"labels")
    gTruthLabelData(i) = string(labelFile);
end
Create a new groundTruthMedical object that contains the original ultrasound images and label definitions plus the new label image names.

newGroundTruthMed = groundTruthMedical(gTruthMed.DataSource, ...
    gTruthMed.LabelDefinitions,gTruthLabelData);
View the fully labeled data set by loading the new groundTruthMedical object in the Medical Image Labeler app. Upon launching, the app opens the Create a new session folder dialog box. You must use the dialog box to create a new app session to view the updated groundTruthMedical object data.

medicalImageLabeler(newGroundTruthMed);
View different images by clicking filenames in the Data Browser. Because you have run the algorithm on the full batch of images, all of the images now have tumor labels.
References

[1] Al-Dhabyani, Walid, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. “Dataset of Breast Ultrasound Images.” Data in Brief 28 (February 2020): 104863. https://doi.org/10.1016/j.dib.2019.104863.
See Also
Medical Image Labeler | groundTruthMedical

Related Examples

• “Get Started with Medical Image Labeler” on page 1-10
• “Breast Tumor Segmentation from Ultrasound Using Deep Learning” on page 6-32
• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
Collaborate on Multi-Labeler Medical Image Labeling Projects

This page shows how to work with a multi-person team to label large medical image data sets using the Medical Image Labeler app. Use this workflow to label a data set that consists of all 2-D images or all 3-D images and has the same set of target labels (for example, tumor, lung, and chest). The original intensity images to be labeled are data source images.
A labeling team consists of a project manager, individual labelers, and one or more reviewers. The project manager creates the label definitions, assigns the images to be labeled by each labeler, and collects and compiles the labeled data. The labelers label the data source images using the label definitions provided by the project manager, request feedback from the reviewer, and send approved labels to the project manager. The reviewer, who can also be the project manager, checks labeled images and provides feedback to the labelers. This figure illustrates the overall workflow and responsibilities of each role.
The multi-person labeling process consists of these steps:

1. “Create Label Definitions and Assign Data to Labelers (Project Manager)” on page 5-30
2. “Label Data and Publish Labels for Review (Labeler)” on page 5-31
3. “Export Ground Truth Data and Send to Project Manager (Labeler)” on page 5-33
4. “Inspect Labeled Images (Reviewer)” on page 5-33
5. “Collect, Merge, and Create Training Data (Project Manager)” on page 5-34
Create Label Definitions and Assign Data to Labelers (Project Manager)

Create Medical Image Labeler Session

1. Open the Medical Image Labeler app from the Apps tab on the MATLAB toolstrip, or by using the medicalImageLabeler command.
2. In the app toolstrip, click New Session. If the data set is 2-D, select New Image session (2-D). If the data set is 3-D, select New Volume session (3-D).

Create Label Definitions

1. Click Create Label Definition in the Label Definitions pane.
2. Click on the label to change the default name. The label name must be a valid MATLAB® variable name with no spaces. For details about valid variable names, see “Variable Names”.
3. To change the color associated with the label, click the colored square in the label identifier and select a color from the Color dialog box.
4. Click Create Label Definition to create additional labels. Create all of the labels required for the labeling project. When labels are nested within one another, such as tumors within an organ within the chest cavity, create the labels in order from outermost to innermost (chest, then lung, then tumor).
5. Click Export and, under Label Definitions, select To File to export the label definitions to a MAT file.
Distribute Labeling Assignments

Assign each labeler a subset of images in the data set to label. For example, for a data set of 300 chest CT scans, labeler 1 might label the tumor, lung, and chest cavity in scans 1–100, and labeler 2 might label the tumor, lung, and chest cavity in scans 101–200. To ensure you can merge the ground truth data after labeling, you must assign each labeler a unique set of data source images to label, and use the same label definitions for each set.

Send this information to each labeler:

• Label definitions, saved as a MAT file.
• List of data source files to label. Each labeler must have access to their assigned images, either in a shared network location or on their local machine.
Label Data and Publish Labels for Review (Labeler)

Create Medical Image Labeler Session

1. Open the Medical Image Labeler app from the Apps tab on the MATLAB toolstrip, or by using the medicalImageLabeler command.
2. In the app toolstrip, click New Session. If the data set is 2-D, select New Image session (2-D). If the data set is 3-D, select New Volume session (3-D).

Import Label Definitions File

1. On the app toolstrip, click Import.
2. Under Import Label Definitions, select From File.
3. Select the label definitions MAT file provided by the project manager. The imported labels appear in the Label Definitions pane.
Import Data to Label

Load the data source images assigned to you by the project manager into the app. To load an image, click Import on the app toolstrip and select an option under Data. Select one or more files to import in the Import Image dialog box.

• For a volume session, you can import an image from a file or from a directory of DICOM files corresponding to one volume.
• For an image session, you can import an image or image series from a file.

You can check that the app successfully imported the files by reading the list of filenames in the Data Browser pane.

Label Images

Select an image in the Data Browser to begin labeling. The no labels symbol next to file names in the Data Browser indicates which image files have not been labeled.
Draw pixel labels using the labels in the Label Definitions pane. For an example showing how to label 3-D medical images, see “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10. For an example showing how to label a 2-D image series, see “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2. The labels are saved as label images, which contain a mask of the drawn labels for a data source image.

Publish Label Overlay Images for Review

To get feedback from the reviewer, you can publish and share label overlay images, which show the data source image with its corresponding label image as an overlay. You can publish label overlay images as individual PNG files or as a single PDF, which the reviewer can inspect without opening MATLAB. To publish label overlay images, follow these steps:
1. The published images match the current settings in the app. Configure the app display settings as desired by using these steps:

   a. You can change the visibility of an individual label by clicking the eye icon next to the label name in the Label Definitions pane.
   b. On the Labeler tab of the app toolstrip, use the Label Opacity slider to set the transparency of the labels.
   c. On the Labeler tab of the app toolstrip, use the Window Level tool to set the contrast of 2-D slice images.
   d. To include a 3-D snapshot in the published images, on the Labeler tab of the app toolstrip, select Show Volume. The 3-D snapshot matches the current rotation, zoom, and display markers in the 3-D Volume pane. On the Labeler tab, click Display Markers to change the visibility of the scale bar and orientation axes display markers.

2. In the Data Browser, select the data source image file for which you want to publish label overlay images. You can publish label overlay images for only one data source at a time.
3. On the app toolstrip, click Publish to open the Publish pane.
4. Under Publish Format, select either Images or PDF. Selecting Images publishes individual PNG files for each frame of an image series, or slice of a volume. Selecting PDF publishes one PDF file for the entire image file.
5. Specify the Slices information. For an image session, you can specify a range of slices for a multi-frame image sequence. For a volume session, select the Slice Direction (coronal, sagittal, or transverse) along which to publish images and select either All Slices or Range. If you select Range, specify a range of slices. To include a 3-D snapshot, select Include 3-D volume Snapshot.
6. Click Publish and specify the export location.
Inspect Labeled Images (Reviewer)

Collect published label overlay images from each labeler. You can open the published PDF or PNG files using a PDF or image viewer without using MATLAB. Inspect the labeled images, and send feedback to the labelers if necessary. The labelers can update the labels and publish a new set of images for additional review. When the labels are satisfactory, the labeler exports the ground truth data and sends it to the project manager.
Export Ground Truth Data and Send to Project Manager (Labeler)

Export Ground Truth Object

When you receive approval from the reviewer, export the labeled data as a groundTruthMedical object to share with the project manager. On the Labeler tab, click Export and, under Ground Truth, select To File.
Send Ground Truth Data to Project Manager

Send these files to the project manager:

• MAT file containing the groundTruthMedical object.
• Label images containing the label masks for the data source images specified in the ground truth object. You can access the complete path to the label images in the LabelData property of the exported groundTruthMedical object. Access the LabelData property by loading the ground truth MAT file into the MATLAB workspace by using the load function. Share a copy of the label images with the project manager by sending them directly or by saving a copy in a shared network location.
Collect, Merge, and Create Training Data (Project Manager)

Collect Labeled Ground Truth Data

Collect these files from each labeler:

• MAT file containing a groundTruthMedical object.
• Label images containing the label masks for the data source images specified in the groundTruthMedical object.

You can load each object into the workspace by using the load function. Because the Medical Image Labeler always saves the ground truth object as gTruthMed in the exported MAT file, you must specify unique variable names when loading each file to avoid overwriting data. For example, this code loads ground truth objects from two exported MAT files, gTruth1.mat and gTruth2.mat, that each contain a groundTruthMedical object, gTruthMed.

matFile1 = load("gTruth1.mat");
gTruthMed1 = matFile1.gTruthMed;
matFile2 = load("gTruth2.mat");
gTruthMed2 = matFile2.gTruthMed;
The groundTruthMedical object contains file paths in the DataSource and LabelData properties that point to the location of the data source images and the label images, respectively, on the local machine of the associated labeler.

Update File Paths

To merge the ground truth objects and create training data, you must update each ground truth object to point to the data source and label image file locations on your machine. Even if the files are saved in a shared network location, if a labeler maps a different drive letter to the shared network folder, the file path can be incorrect.

To update these paths, use the changeFilePaths object function. Specify the ground truth object as an input argument to this function. If the directory paths have changed, but the filenames have not, specify a string vector containing the folder names for the old and new paths. The function updates all file paths in the groundTruthMedical object at the specified original path. The function returns any paths that it is unable to resolve. For example, this code shows how to change the drive letter for a directory.

alternativePaths = ["C:\Shared\ImgFolder","D:\Shared\ImgFolder"];
unresolvedPaths = changeFilePaths(gTruthMed1,alternativePaths);
If the filenames also changed, specify a string array of old and new file paths. For example, this code shows how to change the drive letter for individual files, and how to append a suffix to each filename.

alternativePaths = ...
    {["C:\Shared\ImgFolder\Img1.png","D:\Shared\ImgFolder\Img1_new.png"], ...
     ["C:\Shared\ImgFolder\Img2.png","D:\Shared\ImgFolder\Img2_new.png"], ...
     ...
     ["C:\Shared\ImgFolder\ImgN.png","D:\Shared\ImgFolder\ImgN_new.png"]};
unresolvedPaths = changeFilePaths(gTruthMed1,alternativePaths);
By default, changeFilePaths updates both the data source and the label image file paths. To update the paths stored in the DataSource and LabelData properties separately, use the propertyName argument.

Merge Ground Truth Data

Merge the updated groundTruthMedical objects into one object by using the merge object function. For example, this code merges two objects, gTruthMed1 and gTruthMed2.

gTruthMerged = merge(gTruthMed1,gTruthMed2);

Create Training Data

You can use the merged labeled ground truth data to train a semantic segmentation network. To create training data, load the data source images into an imageDatastore. Load the label images into a pixelLabelDatastore. For more details about creating training data for semantic segmentation from a groundTruthMedical object, see “Create Datastores for Medical Image Semantic Segmentation” on page 6-2.
See Also
groundTruthMedical | changeFilePaths | merge

Related Examples

• “Get Started with Medical Image Labeler” on page 1-10
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Automate Labeling in Medical Image Labeler” on page 5-21
6

Medical Image Segmentation

• “Create Datastores for Medical Image Semantic Segmentation” on page 6-2
• “Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network” on page 6-5
• “Create Training Data for 3-D Medical Image Semantic Segmentation” on page 6-9
• “Segment Lungs from CT Scan Using Pretrained Neural Network” on page 6-14
• “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24
• “Breast Tumor Segmentation from Ultrasound Using Deep Learning” on page 6-32
• “Cardiac Left Ventricle Segmentation from Cine-MRI Images Using U-Net Network” on page 6-40
• “IBSI Standard and Radiomics Function Feature Correspondences” on page 6-53
• “Get Started with Radiomics” on page 6-81
• “Classify Breast Tumors from Ultrasound Images Using Radiomics Features” on page 6-83
Create Datastores for Medical Image Semantic Segmentation

Semantic segmentation deep learning networks segment a medical image by assigning a class label, such as tumor or lung, to every pixel in the image. To train a semantic segmentation network, you must have a collection of images, or data sources, and a collection of label images that contain labels for the pixels in the data source images. Manage training data for semantic segmentation by using datastores:

• Load data source images by using an imageDatastore object.
• Load pixel label images by using a pixelLabelDatastore object.
• Pair data source images and pixel label images by using a CombinedDatastore or a randomPatchExtractionDatastore object, as in the sketch after this list.
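For example, this minimal sketch pairs existing imds and pxds datastores, first image by image and then as random patch pairs. The datastore variables and the 64-by-64 patch size are assumptions for illustration.

dsPairs = combine(imds,pxds);                   % whole image-label pairs
patchds = randomPatchExtractionDatastore(imds,pxds,[64 64], ...
    PatchesPerImage=8);                         % random 64-by-64 patch pairs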
Medical Image Ground Truth Data

You can use the Medical Image Labeler app to label 2-D or 3-D medical images to generate training data for semantic segmentation networks. The app stores labeling results in a groundTruthMedical object, which specifies the filenames of data source and pixel label images in its DataSource and LabelData properties, respectively. This table shows how a groundTruthMedical object formats the data source and label image information for 2-D versus 3-D data.

Type of Data: 2-D medical images or multiframe 2-D image series
Data Source Format: The DataSource property contains an ImageSource object that specifies 2-D images or image series in one of these formats:
• Single DICOM file.
• Single NIfTI file.
Note A groundTruthMedical object can specify a combination of 2-D DICOM and NIfTI data sources.
Label Data Format: The LabelData property contains a string array. Each element specifies the filename of the label image for the corresponding data source.
• 2-D label images are MAT files, regardless of the data source file format.
• If a data source has no labels, the corresponding element of LabelData is an empty string, "".

Type of Data: 3-D medical image volumes
Data Source Format: The DataSource property contains a VolumeSource object that specifies 3-D image volumes in one of these formats:
• Directory of DICOM files corresponding to one volume.
• Single DICOM file.
• Single NIfTI file.
• Single NRRD file.
Note A groundTruthMedical object can specify a combination of 3-D DICOM, NIfTI, and NRRD data sources.
Label Data Format: The LabelData property contains a string array. Each element specifies the filename of the label image for the corresponding data source.
• 3-D label images are NIfTI files, regardless of the data source file format.
• If a data source has no labels, the corresponding element of LabelData is an empty string, "".
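Because unlabeled data sources are marked by empty strings in LabelData, you can find them with a logical comparison. This is a minimal sketch with an assumed groundTruthMedical variable gTruthMed.

% Find data sources that do not yet have label images
unlabeledIdx = gTruthMed.LabelData == "";
unlabeledSources = gTruthMed.DataSource.Source(unlabeledIdx);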
Datastores for Semantic Segmentation

You can perform medical image semantic segmentation using 2-D or 3-D deep learning networks. A 2-D network accepts 2-D input images and predicts segmentation labels using 2-D convolution kernels. The input images can come from one of these sources:

• Images from 2-D modalities, such as X-ray.
• Individual frames extracted from a multiframe 2-D image series, such as an ultrasound video.
• Individual slices extracted from a 3-D image volume, such as a CT or MRI scan.

A 3-D network accepts 3-D input images and predicts segmentation labels using 3-D convolution kernels. The input images are 3-D medical volumes, such as entire CT or MRI volumes. The benefits of 2-D networks include faster prediction speeds and lower memory requirements. Additionally, you can generate many 2-D training images from one image volume or series, so fewer scans are required to train a 2-D network that segments a volume slice-by-slice than to train a fully 3-D network. The major benefit of 3-D networks is that they use information from adjacent slices or frames to predict segmentation labels, which can produce more accurate results.

• For an example that shows how to create datastores that contain 2-D ultrasound frames, see “Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network” on page 6-5.
• For an example that shows how to create, preprocess, and augment 3-D datastores for segmentation, see “Create Training Data for 3-D Medical Image Semantic Segmentation” on page 6-9.
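As noted above, you can generate many 2-D training images by extracting slices from a single volume. This minimal sketch assumes a numeric volume array V and a hypothetical output folder; the linked examples show the complete workflow.

% Write each axial slice of volume V to a PNG file for 2-D training
outDir = "slices2d";                        % hypothetical output folder
if ~isfolder(outDir), mkdir(outDir), end
for k = 1:size(V,3)
    imwrite(rescale(V(:,:,k)),fullfile(outDir,sprintf("slice_%03d.png",k)));
end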
See Also groundTruthMedical | ImageSource | VolumeSource | imageDatastore | pixelLabelDatastore | CombinedDatastore | randomPatchExtractionDatastore
Related Examples
• “Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network” on page 6-5
• “Create Training Data for 3-D Medical Image Semantic Segmentation” on page 6-9
• “Collaborate on Multi-Labeler Medical Image Labeling Projects” on page 5-29
Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network

This example shows how to create training data for a 2-D semantic segmentation network using a groundTruthMedical object that contains multiframe ultrasound image series. To train a semantic segmentation network, you need pairs of data source images and label images stored in an imageDatastore and a pixelLabelDatastore, respectively. To train a 2-D network using data from a multiframe image series, you must convert the series into individual frames stored in separate files.

Create Ground Truth Object

You can export a groundTruthMedical object from the Medical Image Labeler app or create one programmatically. This example creates a groundTruthMedical object by using the createGroundTruthMed2D helper function. The helper function is attached to this example as a supporting file. The groundTruthMedical object references a multiframe echocardiogram data source and its corresponding label image series. The data source and label image are stored as a single DICOM file and MAT file, respectively.

gTruthMed = createGroundTruthMed2D;
Extract Data from Ground Truth Object

Extract the data source and label image file names from the groundTruthMedical object.

dataSource = gTruthMed.DataSource.Source;
labelData = gTruthMed.LabelData;
Remove any data sources that are missing label images.

noLabelsIdx = labelData ~= "";
dataSource = dataSource(noLabelsIdx);
labelData = labelData(noLabelsIdx);
Extract the label definitions from the groundTruthMedical object. Add an additional label definition for the background region, corresponding to a pixel value of 0.

labelDefs = gTruthMed.LabelDefinitions;
labelDefs(2,:) = {"background",[0 1 0],0};
Convert Multiframe Series into 2-D Frames

The data source is a multiframe ultrasound image series stored in a single DICOM file. Convert the DICOM file into individual 2-D images, stored as MAT files, by using the convertImageSeriesToFrames supporting function. The supporting function is defined at the end of this example.

newDataSource = convertImageSeriesToFrames(dataSource);
newDataSource = string(newDataSource);
The label data is a multiframe image series stored in a single MAT file. Convert the single MAT file into individual MAT files for each frame by using the convertLabelSeriesToFrames supporting function. The supporting function is defined at the end of this example.
newLabelData = convertLabelSeriesToFrames(labelData);
Create Datastores

Load the individual MAT file data sources into an imageDatastore.

imds = imageDatastore(newDataSource,...
    ReadFcn=@readFramesLabels,...
    FileExtensions=[".mat",".dcm"]);
Load the individual MAT file label images into a pixelLabelDatastore (Computer Vision Toolbox). Use the label definitions from the groundTruthMedical object to map pixel values to categorical labels in the datastore.

pxds = pixelLabelDatastore(cellstr(newLabelData),labelDefs.Name,labelDefs.PixelLabelID,...
    ReadFcn=@readFramesLabels,...
    FileExtensions=".mat");
Preview one image and label. Display the labeled image by using the labeloverlay function.

im = preview(imds);
label = preview(pxds);
imOverlay = labeloverlay(im,label);
imshow(imOverlay)
Create a CombinedDatastore that pairs each data source image with its corresponding label image.

trainingData = combine(imds,pxds);
Supporting Functions

The convertImageSeriesToFrames function converts a multiframe DICOM image series, such as ultrasound data, into individual 2-D frames stored in MAT files. The function returns a cell array of the new MAT filenames.

function newDataSource = convertImageSeriesToFrames(labelDataSource)
% Create a data directory to store MAT files
dataFileDir = fullfile(pwd,"GroundTruthData");
if ~isfolder(dataFileDir)
    mkdir(dataFileDir)
end
image = medicalImage(labelDataSource);
data3d = image.Pixels;
% Assumption that time is the third dimension
numFrames = size(data3d,3);
newDataSource = cell(numFrames,1);
for frame = 1:numFrames
    data = squeeze(data3d(:,:,frame,:));
    [~,name,~] = fileparts(labelDataSource);
    matFileName = strcat(fullfile(dataFileDir,name),"_",num2str(frame),".mat");
    save(matFileName,"data");
    newDataSource{frame} = string(matFileName);
end
end
The convertLabelSeriesToFrames function converts a multiframe image series stored in a single file into individual frames stored as separate MAT files. The function returns a cell array of the new MAT filenames.

function newlabelData = convertLabelSeriesToFrames(labelSource)
% Create a label directory to store MAT files
labelFileDir = fullfile(pwd,"GroundTruthLabel");
if ~isfolder(labelFileDir)
    mkdir(labelFileDir)
end
labelData = load(labelSource);
% Assumption that time is the third dimension
numFrames = size(labelData.labels,3);
newlabelData = cell(numFrames,1);
for frame = 1:numFrames
    data = squeeze(labelData.labels(:,:,frame,:));
    [~,name,~] = fileparts(labelSource);
    matFileName = strcat(fullfile(labelFileDir,name),"_",num2str(frame),".mat");
    save(matFileName,"data");
    newlabelData{frame} = string(matFileName);
end
end
The readFramesLabels function reads data stored in a variable named data from a MAT file.

function data = readFramesLabels(filename)
d = load(filename);
data = d.data;
end
See Also groundTruthMedical | imageDatastore | pixelLabelDatastore | transform | combine
Related Examples
• “Label 2-D Ultrasound Series Using Medical Image Labeler” on page 5-2
• “Create Training Data for 3-D Medical Image Semantic Segmentation” on page 6-9
Create Training Data for 3-D Medical Image Semantic Segmentation

This example shows how to create training data for a 3-D semantic segmentation network using a groundTruthMedical object that contains 3-D medical image data. This example also shows how to transform datastores to preprocess and augment image and pixel label data.

Create Ground Truth Object

You can export a groundTruthMedical object from the Medical Image Labeler app or create one programmatically. This example creates a groundTruthMedical object by using the createGroundTruthMed3D helper function. The helper function is attached to this example as a supporting file. The ground truth object references three data sources, including one chest CT volume stored as a directory of DICOM files and two chest CT volumes stored as NIfTI files. The directory of DICOM files and its label image are attached to this example as supporting files. The createGroundTruthMed3D function downloads the NIfTI files and their label images from the MathWorks® website. The NIfTI files are a subset of the Medical Segmentation Decathlon data set [1 on page 6-13], and have a total file size of approximately 76 MB.

gTruthMed = createGroundTruthMed3D;
Extract Data from Ground Truth Object

Extract the data source and label image file names from the groundTruthMedical object.

dataSource = gTruthMed.DataSource.Source;
labelData = gTruthMed.LabelData;
Remove data sources that are missing label images.

noLabelsIdx = labelData ~= "";
dataSource = dataSource(noLabelsIdx);
labelData = labelData(noLabelsIdx);
Extract the label definitions from the groundTruthMedical object. Add an additional label definition for the background region, corresponding to a pixel value of 0.

labelDefs = gTruthMed.LabelDefinitions;
labelDefs(2,:) = {"background",[0 1 0],0};
Load Data Source Images into Image Datastore

A groundTruthMedical object can contain data sources stored as a single DICOM, NIfTI, or NRRD file, or as a directory of DICOM files. The imageDatastore object does not support reading volumetric images from a directory of DICOM files. Use the convertMultifileDICOMs supporting function to search dataSource for data sources stored as a directory of DICOM files, and convert any DICOM directories into single MAT files.

dataSource = convertMultifileDICOMs(dataSource);
dataSource = string(dataSource);
Load the updated data sources into an imageDatastore. Specify a custom read function, readMedicalVolumes, which is defined at the end of this example. The readMedicalVolumes function reads data from medical volumes stored as a single MAT file, DICOM file, or NIfTI file. The .gz file extension corresponds to compressed NIfTI files.

imds = imageDatastore(dataSource,...
    ReadFcn=@readMedicalVolumes,...
    FileExtensions=[".mat",".dcm",".nii",".gz"]);
Load Label Images into Pixel Label Datastore

Load the label images into a pixelLabelDatastore (Computer Vision Toolbox) using niftiread as the read function.

pxds = pixelLabelDatastore(cellstr(labelData),labelDefs.Name,labelDefs.PixelLabelID,...
    ReadFcn=@niftiread,...
    FileExtensions=[".nii",".gz"]);
Preview one image volume and its label image.

vol = preview(imds);
label = preview(pxds);
Display one slice of the previewed volume by using the labeloverlay function. To better view the gray values of the CT slice with labeloverlay, first rescale the image intensities to the range [0, 1].

volslice = vol(:,:,100);
volslice = rescale(volslice);
labelslice = label(:,:,100);
imOverlay = labeloverlay(volslice,labelslice);
imshow(imOverlay)
Pair Image and Label Data

Create a CombinedDatastore that pairs each data source image with its corresponding label image.

trainingData = combine(imds,pxds);
Augment and Preprocess Training Data

Specify the input size of the target deep learning network.

targetSize = [300 300 60];
Augment the training data by using the transform function with custom operations specified by the jitterImageIntensityAndWarp supporting function, defined at the end of this example.

augmentedTrainingData = transform(trainingData,@jitterImageIntensityAndWarp);
Preprocess the training data by using the transform function with the custom preprocessing operation specified by the centerCropImageAndLabel supporting function, defined at the end of this example.

preprocessedTrainingData = transform(augmentedTrainingData,...
    @(data)centerCropImageAndLabel(data,targetSize));
Supporting Functions

The convertMultifileDICOMs function reads a cell array of data source file names, and converts any data sources stored as a directory of DICOM files into a single MAT file.

function dataSource = convertMultifileDICOMs(dataSource)
numEntries = length(dataSource);
dataFileDir = fullfile(pwd,"GroundTruthData");
if ~isfolder(dataFileDir)
    mkdir(dataFileDir)
end
for idx = 1:numEntries
    currEntry = dataSource{idx};
    % Multi-file DICOMs
    if length(currEntry) > 1
        matFileName = fileparts(currEntry(1));
        matFileName = split(matFileName,filesep);
        matFileName = replace(strtrim(matFileName(end))," ","_");
        matFileName = strcat(fullfile(dataFileDir,matFileName),".mat");
        vol = medicalVolume(currEntry);
        data = vol.Voxels;
        save(matFileName,"data");
        dataSource{idx} = string(matFileName);
    end
end
end
The readMedicalVolumes function loads medical image volume data from a single MAT file, DICOM file, or NIfTI file.

function data = readMedicalVolumes(filename)
[~,~,ext] = fileparts(filename);
if ext == ".mat"
    d = load(filename);
    data = d.data;
else
    vol = medicalVolume(filename);
    data = vol.Voxels;
end
end
The jitterImageIntensityAndWarp function randomly adjusts the brightness, contrast, and gamma correction of the data source image values, and applies random geometric transformations, including scaling, reflection, and rotation, to the data source and pixel label images.

function out = jitterImageIntensityAndWarp(data)
% Unpack original data.
I = data{1};
C = data{2};
% Apply random intensity jitter.
I = jitterIntensity(I,Brightness=0.3,Contrast=0.4,Gamma=0.2);
% Define random affine transform.
tform = randomAffine3d(Scale=[0.8 1.5],XReflection=true,Rotation=[-30 30]);
rout = affineOutputView(size(I),tform);
% Transform image and pixel labels.
augmentedImage = imwarp(I,tform,"OutputView",rout);
augmentedLabel = imwarp(C,tform,"OutputView",rout);
% Return augmented data.
out = {augmentedImage,augmentedLabel};
end
The centerCropImageAndLabel function crops the data source images and pixel labels to the input size of the target deep learning network.

function out = centerCropImageAndLabel(data,targetSize)
win = centerCropWindow3d(size(data{1}),targetSize);
out{1} = imcrop3(data{1},win);
out{2} = imcrop3(data{2},win);
end
References
[1] Medical Segmentation Decathlon. "Brain Tumours." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com/. The Medical Segmentation Decathlon data set is provided under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details.
See Also groundTruthMedical | imageDatastore | pixelLabelDatastore | transform | combine
Related Examples
• “Label 3-D Medical Image Using Medical Image Labeler” on page 5-10
• “Convert Ultrasound Image Series into Training Data for 2-D Semantic Segmentation Network” on page 6-5
Segment Lungs from CT Scan Using Pretrained Neural Network

This example shows how to import a pretrained ONNX™ (Open Neural Network Exchange) 3-D U-Net [1 on page 6-22] and use it to perform semantic segmentation of the left and right lungs from a 3-D chest CT scan. Semantic segmentation associates each voxel in a 3-D image with a class label. In this example, you classify each voxel in a test data set as belonging to the left lung or right lung. For more information about semantic segmentation, see “Semantic Segmentation” (Computer Vision Toolbox).

A challenge of applying pretrained networks is the possibility of differences between the intensity and spatial details of a new data set and the data set used to train the network. Preprocessing is typically required to format the data to match the expected network input and achieve accurate segmentation results. In this example, you standardize the spatial orientation and normalize the intensity range of a test data set before applying the pretrained network.
Download Pretrained Network

Specify the desired location of the pretrained network.

dataDir = fullfile(tempdir,"lungmask");
if ~exist(dataDir,"dir")
    mkdir(dataDir);
end
Download the pretrained network from the MathWorks® website by using the downloadTrainedNetwork helper function. The helper function is attached to this example as a supporting file. The network on the MathWorks website is equivalent to the R231 model, available in the LungMask GitHub repository [2 on page 6-22], converted to the ONNX format. The size of the pretrained network is approximately 11 MB.

lungmask_url = "https://www.mathworks.com/supportfiles/medical/pretrainedLungmaskR231Net.onnx";
downloadTrainedNetwork(lungmask_url,dataDir);
Import Pretrained Network

Import the ONNX network as a function by using the importONNXFunction (Deep Learning Toolbox) function. You can use this function to import a network with layers that the importONNXNetwork (Deep Learning Toolbox) function does not support. The importONNXFunction function requires the Deep Learning Toolbox™ Converter for ONNX Model Format support package. If this support package is not installed, then importONNXFunction provides a download link. The importONNXFunction function imports the network and returns an ONNXParameters object that contains the network parameters. When you import the pretrained lung segmentation network, the function displays a warning that the LogSoftmax operator is not supported.

modelfileONNX = fullfile(dataDir,"pretrainedLungmaskR231Net.onnx");
modelfileM = "importedLungmaskFcn_R231.m";
params = importONNXFunction(modelfileONNX,modelfileM);
Function containing the imported ONNX network architecture was saved to the file importedLungmaskFcn_R231.m.
To learn how to use this function, type: help importedLungmaskFcn_R231.

Warning: Unable to import some ONNX operators or attributes. They may have been replaced by 'PLACEHOLDER' functions.
1 operator(s): Operator 'LogSoftmax' is not supported with its current settings or in this context.
Open the generated function, saved as an M file in the current directory. The function contains these lines of code that indicate that the unsupported LogSoftmax operator is replaced with a placeholder:

% PLACEHOLDER FUNCTION FOR UNSUPPORTED OPERATOR (LogSoftmax):
[Vars.x460, NumDims.x460] = PLACEHOLDER(Vars.x459);
In the function definition, replace the placeholder code with this code. Save the updated function as lungmaskFcn_R231. A copy of lungmaskFcn_R231 with the correct code is also attached to this example as a supporting file.

% Replacement for PLACEHOLDER FUNCTION FOR UNSUPPORTED OPERATOR (LogSoftmax):
Vars.x460 = log(softmax(Vars.x459,'DataFormat','CSSB'));
NumDims.x460 = NumDims.x459;
The network parameters are stored in the ONNXParameters object params. Save the parameters to a new MAT file.

save("lungmaskParams_R231","params");
Load Data

Test the pretrained lung segmentation network on a test data set. The test data is a CT chest volume from the Medical Segmentation Decathlon data set [3 on page 6-22]. Download the MedicalVolumeNIfTIData ZIP archive from the MathWorks website, then unzip the file. The ZIP file contains two CT chest volumes and corresponding label images, stored in the NIfTI file format. The total size of the data set is approximately 76 MB.

zipFile = matlab.internal.examples.downloadSupportFile("medical","MedicalVolumeNIfTIData.zip");
filePath = fileparts(zipFile);
unzip(zipFile,filePath)
dataFolder = fullfile(filePath,"MedicalVolumeNIfTIData");
Specify the file name of the first CT volume.

fileName = fullfile(dataFolder,"lung_027.nii.gz");
Create a medicalVolume object for the CT volume.

medVol = medicalVolume(fileName);
Extract and display the voxel data from the medicalVolume object.

V = medVol.Voxels;
volshow(V,RenderingStyle="GradientOpacity");
Preprocess Test Data

Preprocess the test data to match the expected orientation and intensity range of the pretrained network. Rotate the test image volume in the transverse plane to match the expected input orientation for the pretrained network. The network was trained using data oriented with the patient bed at the bottom of the image, so the test data must be oriented in the same direction. If you change the test data, you need to apply an appropriate spatial transformation to match the expected orientation for the network.

rotationAxis = [0 0 1];
volAligned = imrotate3(V,90,rotationAxis);
Display a slice of the rotated volume to check the updated orientation.

imshow(volAligned(:,:,150),[])
Use intensity normalization to rescale the range of voxel intensities in the region of interest to the range [0, 1], which is the range that the pretrained network expects. The first step in intensity normalization is to determine the range of intensity values within the region of interest. The values are in Hounsfield units. To determine the thresholds for the intensity range, plot a histogram of the voxel intensity values. Set the x- and y-limits of the histogram plot based on the minimum and maximum values. The histogram has two large peaks. The first peak corresponds to background pixels outside the body of the patient and air in the lungs. The second peak corresponds to soft tissue such as the heart and stomach.

figure
histogram(V)
xlim([min(V,[],"all") max(V,[],"all")])
ylim([0 2e6])
xlabel("Intensity [Hounsfield Units]")
ylabel("Number of Voxels")
xline([-1024 500],"red",LineWidth=1)
To limit the intensities to the region containing the majority of the tissue in the region of interest, select the thresholds for the intensity range as –1024 and 500.

th = [-1024 500];
Apply the preprocessLungCT helper function to further preprocess the test image volume. The helper function is attached to this example as a supporting file. The preprocessLungCT function performs these steps:

1. Resize each 2-D slice along the transverse dimension to the target size, imSize. Decreasing the number of voxels can improve prediction speed. Set the target size to 256-by-256 voxels.
2. Crop the voxel intensities to the range specified by the thresholds in th.
3. Normalize the updated voxel intensities to the range [0, 1].

imSize = [256 256];
volInp = preprocessLungCT(volAligned,imSize,th);
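The helper function ships with the example; this minimal sketch, which is not the actual preprocessLungCT implementation, illustrates the three steps using standard toolbox functions.

% Sketch of the three preprocessing steps (assumption: this mirrors, but is
% not, the shipped preprocessLungCT helper)
volSk = imresize3(single(volAligned),[imSize(1) imSize(2) size(volAligned,3)]); % 1: resize slices
volSk = min(max(volSk,th(1)),th(2));                                            % 2: crop intensities
volSk = (volSk - th(1))./(th(2) - th(1));                                       % 3: normalize to [0,1]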
Segment Test Data and Postprocess Predicted Labels

Segment the test CT volume by using the lungSeg helper function. The helper function is attached to this example as a supporting file. The lungSeg function predicts the segmentation mask by performing inference on the pretrained network and postprocesses the network output to obtain the segmentation mask. To decrease the required computational time, the lungSeg function performs inference on the slices of a volume in batches. Specify the batch size as eight slices using the batchSize name-value argument of lungSeg. Increasing the batch size increases the speed of inference, but requires more memory. If you run out of memory, try decreasing the batch size. During postprocessing, the lungSeg helper function applies a mode filter to the network output to smooth the segmentation labels using the modefilt function. You can set the size of the mode filter by using the modeFilt name-value argument of lungSeg. The default filter size is [9 9 9].

labelOut = lungSeg(volInp,batchSize=8);
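For reference, this is how the smoothing step could be applied on its own; labelArray here is a hypothetical numeric label volume, not a variable from this example.

% Standalone mode filtering of a 3-D label volume with a 9-by-9-by-9 window
labelSmoothed = modefilt(labelArray,[9 9 9]);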
Display Predicted Segmentation Labels

Display the segmentation results by using the volshow function. Use the OverlayData argument to plot the predicted segmentation labels. To focus on the label data, use the Alphamap argument to set the opacity of the image volume to 0 and the OverlayAlphamap argument to set the opacity of the labels to 0.9.

volshow(volInp,OverlayData=labelOut,...
    Alphamap=0,...
    OverlayAlphamap=0.9,...
    RenderingStyle="GradientOpacity");
You can also display the preprocessed test volume as slice planes with the predicted segmentation labels as an overlay by setting the RenderingStyle name-value argument to "SlicePlanes". Specify the lung segmentation label using the OverlayData name-value argument.

volshow(volInp,OverlayData=labelOut,...
    OverlayAlphamap=0.9,...
    RenderingStyle="SlicePlanes");
Click and drag the mouse to rotate the volume. To scroll in a plane, pause on the slice you want to investigate until it becomes highlighted, then click and drag. The left and right lung segmentation masks are visible in the slices for which they are defined.
References
[1] Hofmanninger, Johannes, Florian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch, and Georg Langs. “Automatic Lung Segmentation in Routine Imaging Is Primarily a Data Diversity Problem, Not a Methodology Problem.” European Radiology Experimental 4, no. 1 (December 2020): 50. https://doi.org/10.1186/s41747-020-00173-2.

[2] GitHub. “Automated Lung Segmentation in CT under Presence of Severe Pathologies.” Accessed July 21, 2022. https://github.com/JoHof/lungmask.
[3] Medical Segmentation Decathlon. "Lung." Tasks. Accessed May 10, 2018. http://medicaldecathlon.com. The Medical Segmentation Decathlon data set is provided under the CC-BY-SA 4.0 license. All warranties and representations are disclaimed. See the license for details.
See Also importONNXFunction | modefilt | volshow
Related Examples
• “Segment Lungs from 3-D Chest Scan”
• “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24
Brain MRI Segmentation Using Pretrained 3-D U-Net Network

This example shows how to segment a brain MRI using a deep neural network. Segmentation of brain scans enables the visualization of individual brain structures. Brain segmentation is also commonly used for quantitative volumetric and shape analyses to characterize healthy and diseased populations. Manual segmentation by clinical experts is considered the highest standard in segmentation. However, the process is extremely time-consuming and not practical for labeling large data sets. Additionally, labeling requires expertise in neuroanatomy and is prone to errors and limitations in interrater and intrarater reproducibility. Trained segmentation algorithms, such as convolutional neural networks, have the potential to automate the labeling of large clinical data sets.

In this example, you use the pretrained SynthSeg neural network [1 on page 6-30], a 3-D U-Net for brain MRI segmentation. SynthSeg can be used to segment brain scans of any contrast and resolution without retraining or fine-tuning. SynthSeg is also robust to a wide array of subject populations, from young and healthy to aging and diseased subjects, and a wide array of scan conditions, such as white matter lesions, with or without preprocessing, including bias field corruption, skull stripping, intensity normalization, and template registration.
Download Brain MRI and Label Data

This example uses a subset of the CANDI data set [2 on page 6-30] [3 on page 6-30]. The subset consists of a brain MRI volume and the corresponding ground truth label volume for one patient. Both files are in the NIfTI file format. The total size of the data files is ~2.5 MB. Run this code to download the data set from the MathWorks® website and unzip the downloaded folder.

zipFile = matlab.internal.examples.downloadSupportFile("image","data/brainSegData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
The dataDir folder contains the downloaded and unzipped data set.

dataDir = fullfile(filepath,"brainSegData");
Load Pretrained Network

This example uses a pretrained TensorFlow-Keras convolutional neural network. Download the pretrained network from the MathWorks® website by using the helper function downloadTrainedNetwork. The helper function is attached to this example as a supporting file. The size of the pretrained network is approximately 51 MB.

trainedBrainCANDINetwork_url = "https://www.mathworks.com/supportfiles/"+ ...
    "image/data/trainedBrainSynthSegNetwork.h5";
downloadTrainedNetwork(trainedBrainCANDINetwork_url,dataDir);
Load Test Data

Read the metadata from the brain MRI volume by using the niftiinfo function. Read the brain MRI volume by using the niftiread function.

imFile = fullfile(dataDir,"anat.nii.gz");
metaData = niftiinfo(imFile);
vol = niftiread(metaData);
In this example, you segment the brain into 32 classes corresponding to anatomical structures. Read the names and numeric identifiers for each class label by using the getBrainCANDISegmentationLabels helper function. The helper function is attached to this example as a supporting file.

labelDirs = fullfile(dataDir,"groundTruth");
[classNames,labelIDs] = getBrainCANDISegmentationLabels;
Preprocess Test Data

Preprocess the MRI volume by using the preProcessBrainCANDIData helper function. The helper function is attached to this example as a supporting file. The helper function performs these steps:

• Resampling — If resample is true, resample the data to the isotropic voxel size 1-by-1-by-1 mm. By default, resample is false and the function does not perform resampling. To test the pretrained network on images with a different voxel size, set resample to true if the input is not isotropic.
• Alignment — Rotate the volume to a standardized RAS orientation.
• Cropping — Crop the volume to a maximum size of 192 voxels in each dimension.
• Normalization — Normalize the intensity values of the volume to values in the range [0, 1], which improves the contrast.

resample = false;
cropSize = 192;
[volProc,cropIdx,imSize] = preProcessBrainCANDIData(vol,metaData,cropSize,resample);
inputSize = size(volProc);
Convert the preprocessed MRI volume into a formatted deep learning array with the SSSCB (spatial, spatial, spatial, channel, batch) format by using dlarray (Deep Learning Toolbox).

volDL = dlarray(volProc,"SSSCB");
Define Network Architecture

Import the network layers from the downloaded model file of the pretrained network using the importKerasLayers (Deep Learning Toolbox) function. The importKerasLayers function requires the Deep Learning Toolbox™ Converter for TensorFlow Models support package. If this support package is not installed, then importKerasLayers provides a download link. Specify ImportWeights as true to import the layers using the weights from the same HDF5 file. The function returns a layerGraph (Deep Learning Toolbox) object. The Keras network contains some layers that the Deep Learning Toolbox™ does not support. The importKerasLayers function displays a warning and replaces the unsupported layers with placeholder layers.

modelFile = fullfile(dataDir,"trainedBrainSynthSegNetwork.h5");
lgraph = importKerasLayers(modelFile,ImportWeights=true,ImageInputSize=inputSize);
Warning: Imported layers have no output layer because the model does not specify a loss function.
Warning: Unable to import some Keras layers, because they are not supported by the Deep Learning Toolbox.
To replace the placeholder layers in the imported network, first identify the names of the layers to replace. Find the placeholder layers using findPlaceholderLayers (Deep Learning Toolbox).

placeholderLayers = findPlaceholderLayers(lgraph)

placeholderLayers =
  PlaceholderLayer with properties:

                  Name: 'unet_prediction'
    KerasConfiguration: [1×1 struct]
               Weights: []
           InputLabels: {''}
          OutputLabels: {''}

   Learnable Parameters
   No properties.

   State Parameters
   No properties.

Show all properties
Define existing layers with the same configurations as the imported Keras layers.

sf = softmaxLayer;
Replace the placeholder layers with existing layers using replaceLayer (Deep Learning Toolbox).

lgraph = replaceLayer(lgraph,"unet_prediction",sf);
Convert the network to a dlnetwork (Deep Learning Toolbox) object.

net = dlnetwork(lgraph);
Display the updated layer graph information.

layerGraph(net)

ans =
  LayerGraph with properties:

     InputNames: {'unet_input'}
    OutputNames: {1×0 cell}
         Layers: [60×1 nnet.cnn.layer.Layer]
    Connections: [63×2 table]
Predict Using Test Data

Predict Network Output

Predict the segmentation output for the preprocessed MRI volume. The segmentation output predictIm contains 32 channels corresponding to the segmentation label classes, such as "background", "leftCerebralCortex", "rightThalamus". The predictIm output assigns confidence scores to each voxel for every class. The confidence scores reflect the likelihood of the voxel being part of the corresponding class. This prediction is different from the final semantic segmentation output, which assigns each voxel to exactly one class.

predictIm = predict(net,volDL);
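If you want to inspect the raw scores directly, this minimal sketch reduces them to one class index per voxel; in this example, that step happens inside the postprocessing helper.

% Sketch: convert per-class confidence scores to a single label index per voxel
scores = extractdata(predictIm);   % spatial x spatial x spatial x class (x batch)
[~,classIdx] = max(scores,[],4);   % index of the highest-scoring class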
Test Time Augmentation

This example uses test time augmentation to improve segmentation accuracy. In general, augmentation applies random transformations to an image to increase the variability of a data set. You can use augmentation before network training to increase the size of the training data set. Test time augmentation applies random transformations to test images to create multiple versions of the test image. You can then pass each version of the test image to the network for prediction. The network calculates the overall segmentation result as the average prediction for all versions of the test image. Test time augmentation improves segmentation accuracy by averaging out random errors in the individual network predictions. By default, this example flips the MRI volume in the left-right direction, resulting in a flipped volume flippedData. The network output for the flipped volume is flipPredictIm. Set flipVal to false to skip the test time augmentation and speed up prediction.

flipVal = true; % in the published example this is a live control; true assumed as the default
if flipVal
    flippedData = fliplr(volProc);
    flippedData = flip(flippedData,2);
    flippedData = flip(flippedData,1);
    flippedData = dlarray(flippedData,"SSSCB");
    flipPredictIm = predict(net,flippedData);
else
    flipPredictIm = [];
end
6
Medical Image Segmentation
• Segmentation — Assign each voxel to the label class with the greatest confidence score for that voxel. • Resizing — Resize the segmentation map to the original input volume size. Resizing the label image allows you to visualize the labels as an overlay on the grayscale MRI volume. • Alignment — Rotate the segmentation map back to the orientation of the original input MRI volume. The final segmentation result, predictedSegMaps, is a 3-D categorical array the same size as the original input volume. Each element corresponds to one voxel and has one categorical label. predictedSegMaps = postProcessBrainCANDIData(predictIm,flipPredictIm,imSize, ... cropIdx,metaData,classNames,labelIDs);
Overlay a slice from the predicted segmentation map on a corresponding slice from the input volume using the labeloverlay function. Include all the brain structure labels except the background label. sliceIdx = 80; testSlice = rescale(vol(:,:,sliceIdx)); predSegMap = predictedSegMaps(:,:,sliceIdx); B = labeloverlay(testSlice,predSegMap,"IncludedLabels",2:32); figure montage({testSlice,B})
Quantify Segmentation Accuracy Measure the segmentation accuracy by comparing the predicted segmentation labels with the ground truth labels drawn by clinical experts. Create a pixelLabelDatastore (Computer Vision Toolbox) to store the labels. Because the NIfTI file format is a nonstandard image format, you must use a NIfTI file reader to read the pixel label data. You can use the helper NIfTI file reader, niftiReader, defined at the bottom of this example. 6-28
pxds = pixelLabelDatastore(labelDirs,classNames,labelIDs,FileExtensions=".gz",...
    ReadFcn=@niftiReader);
Read the ground truth labels from the pixel label datastore.

groundTruthLabel = read(pxds);
groundTruthLabel = groundTruthLabel{1};
Measure the segmentation accuracy using the dice function. This function computes the Dice index between the predicted and ground truth segmentations.

diceResult = zeros(length(classNames),1);
for j = 1:length(classNames)
    diceResult(j) = dice(groundTruthLabel==classNames(j),...
        predictedSegMaps==classNames(j));
end
Calculate the average Dice index across all labels for the MRI volume.

meanDiceScore = mean(diceResult);
disp("Average Dice score across all labels = " + num2str(meanDiceScore))

Average Dice score across all labels = 0.80793
Visualize statistics about the Dice indices across all the label classes as a box chart. The middle blue line in the plot shows the median Dice index. The upper and lower bounds of the blue box indicate the 25th and 75th percentiles, respectively. Black whiskers extend to the most extreme data points that are not outliers.

figure
boxchart(diceResult)
title("Dice Accuracy")
xticklabels("All Label Classes")
ylabel("Dice Coefficient")
Supporting Functions

The niftiReader helper function reads a NIfTI file in a datastore.

function data = niftiReader(filename)
data = niftiread(filename);
data = uint8(data);
end
References
[1] Billot, Benjamin, Douglas N. Greve, Oula Puonti, Axel Thielscher, Koen Van Leemput, Bruce Fischl, Adrian V. Dalca, and Juan Eugenio Iglesias. “SynthSeg: Domain Randomisation for Segmentation of Brain Scans of Any Contrast and Resolution.” ArXiv:2107.09559 [Cs, Eess], December 21, 2021. http://arxiv.org/abs/2107.09559.

[2] “NITRC: CANDI Share: Schizophrenia Bulletin 2008: Tool/Resource Info.” Accessed October 17, 2022. https://www.nitrc.org/projects/cs_schizbull08/.
[3] Frazier, J. A., S. M. Hodge, J. L. Breeze, A. J. Giuliano, J. E. Terry, C. M. Moore, D. N. Kennedy, et al. “Diagnostic and Sex Effects on Limbic Volumes in Early-Onset Bipolar Disorder and Schizophrenia.” Schizophrenia Bulletin 34, no. 1 (October 27, 2007): 37–46. https://doi.org/10.1093/schbul/sbm120.
See Also niftiread | importKerasLayers | findPlaceholderLayers | pixelLabelDatastore | dice | boxchart
Related Examples
• “Segment Lungs from CT Scan Using Pretrained Neural Network” on page 6-14
• “Breast Tumor Segmentation from Ultrasound Using Deep Learning” on page 6-32
• “3-D Brain Tumor Segmentation Using Deep Learning”
Breast Tumor Segmentation from Ultrasound Using Deep Learning

This example shows how to perform semantic segmentation of breast tumors from 2-D ultrasound images using a deep neural network. Semantic segmentation involves assigning a class to each pixel in a 2-D image. In this example, you perform breast tumor segmentation using the DeepLab v3+ architecture. A common challenge of medical image segmentation is class imbalance. In segmentation, class imbalance means the size of the region of interest, such as a tumor, is small relative to the image background, resulting in many more pixels in the background class. This example addresses class imbalance by using a custom Tversky loss [1 on page 6-39]. The Tversky loss is an asymmetric similarity measure that is a generalization of the Dice index and the Jaccard index.

Load Pretrained Network

Create a folder in which to store the pretrained network and image data set. This example uses a folder named BreastSegmentation, created within the tempdir directory, as dataDir. Download the pretrained DeepLab v3+ network and test image by using the downloadTrainedNetwork helper function. The helper function is attached to this example as a supporting file. You can use the pretrained network to run the example without waiting for training to complete.

dataDir = fullfile(tempdir,"BreastSegmentation");
if ~exist(dataDir,"dir")
    mkdir(dataDir)
end
pretrainedNetwork_url = "https://www.mathworks.com/supportfiles/"+ ...
    "image/data/breastTumorDeepLabV3.tar.gz";
downloadTrainedNetwork(pretrainedNetwork_url,dataDir);
Unzip the TAR GZ file completely. Load the pretrained network into a variable called trainedNet.

gunzip(fullfile(dataDir,"breastTumorDeepLabV3.tar.gz"),dataDir);
untar(fullfile(dataDir,"breastTumorDeepLabV3.tar"),dataDir);
exampleDir = fullfile(dataDir,"breastTumorDeepLabV3");
load(fullfile(exampleDir,"breast_seg_deepLabV3.mat"));
Read the test ultrasound image and resize the image to the input size of the pretrained network.

imTest = imread(fullfile(exampleDir,"breastUltrasoundImg.png"));
imSize = [256 256];
imTest = imresize(imTest,imSize);
Predict the tumor segmentation mask for the test image.

segmentedImg = semanticseg(imTest,trainedNet);
Display the test image and the test image with the predicted tumor label overlay as a montage.

overlayImg = labeloverlay(imTest,segmentedImg,Transparency=0.7,IncludedLabels="tumor", ...
    Colormap="hsv");
montage({imTest,overlayImg});
Download Data Set

This example uses the Breast Ultrasound Images (BUSI) data set [2 on page 6-39]. The BUSI data set contains 2-D ultrasound images stored in the PNG file format. The total size of the data set is 197 MB. The data set contains 133 normal scans, 487 scans with benign tumors, and 210 scans with malignant tumors. This example uses images from the tumor groups only. Each ultrasound image has a corresponding tumor mask image. The tumor mask labels have been reviewed by clinical radiologists [2 on page 6-39]. Run this code to download the data set from the MathWorks® website and unzip the downloaded folder.

zipFile = matlab.internal.examples.downloadSupportFile("image","data/Dataset_BUSI.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
The imageDir folder contains the downloaded and unzipped data set.

imageDir = fullfile(filepath,"Dataset_BUSI_with_GT");
Load Data

Create an imageDatastore object to read and manage the ultrasound image data. Label each image as normal, benign, or malignant according to the name of its folder.

imds = imageDatastore(imageDir,IncludeSubfolders=true,LabelSource="foldernames");
Remove files whose names contain "mask" to remove label images from the datastore. The image datastore now contains only the grayscale ultrasound images.

imds = subset(imds,find(~contains(imds.Files,"mask")));
Create a pixelLabelDatastore (Computer Vision Toolbox) object to store the labels.

classNames = ["tumor","background"];
labelIDs = [1 0];
numClasses = numel(classNames);
pxds = pixelLabelDatastore(imageDir,classNames,labelIDs,IncludeSubfolders=true);
Include only the subset of files whose names contain "_mask.png" in the datastore. The pixel label datastore now contains only the tumor mask images.

pxds = subset(pxds,contains(pxds.Files,"_mask.png"));
Preview one image with a tumor mask overlay.

testImage = preview(imds);
mask = preview(pxds);
B = labeloverlay(testImage,mask,Transparency=0.7,IncludedLabels="tumor", ...
    Colormap="hsv");
imshow(B)
title("Labeled Test Ultrasound Image")
Combine the image datastore and the pixel label datastore to create a CombinedDatastore object.

dsCombined = combine(imds,pxds);
Prepare Data for Training

Partition Data into Training, Validation, and Test Sets

Split the combined datastore into data sets for training, validation, and testing. Allocate 80% of the data for training, 10% for validation, and the remaining 10% for testing. Determine the indices to include in each set by using the splitlabels (Computer Vision Toolbox) function. To exclude images in the normal class, which have no tumor masks, use the image datastore labels as input and set the Exclude name-value argument to "normal".

idxSet = splitlabels(imds.Labels,[0.8,0.1],"randomized",Exclude="normal");
dsTrain = subset(dsCombined,idxSet{1});
dsVal = subset(dsCombined,idxSet{2});
dsTest = subset(dsCombined,idxSet{3});
Augment Training and Validation Data

Augment the training and validation data by using the transform function with custom preprocessing operations specified by the transformBreastTumorImageAndLabels helper function. The helper function is attached to the example as a supporting file. The transformBreastTumorImageAndLabels function performs these operations:

1. Convert the ultrasound images from RGB to grayscale.
2. Augment the intensity of the grayscale images by using the jitterIntensity function.
3. Resize the images to 256-by-256 pixels.

tdsTrain = transform(dsTrain,@transformBreastTumorImageAndLabels,IncludeInfo=true);
tdsVal = transform(dsVal,@transformBreastTumorImageAndLabels,IncludeInfo=true);
Define Network Architecture

This example uses the DeepLab v3+ network. DeepLab v3+ consists of a series of convolution layers with a skip connection, one maxpool layer, and one averagepool layer. The network also has a batch normalization layer before each ReLU layer. Create a DeepLab v3+ network based on ResNet-50 by using the deeplabv3plusLayers (Computer Vision Toolbox) function. Setting the base network as ResNet-50 requires the Deep Learning Toolbox™ Model for ResNet-50 Network support package. If this support package is not installed, then the function provides a download link. Define the input size of the network as 256-by-256-by-3. Specify the number of classes as two for background and tumor.

imageSize = [256 256 3];
lgraph = deeplabv3plusLayers(imageSize,numClasses,"resnet50");
Because the preprocessed ultrasound images are grayscale, replace the original input layer with a 256-by-256 input layer.

newInputLayer = imageInputLayer(imageSize(1:2),Name="newInputLayer");
lgraph = replaceLayer(lgraph,lgraph.Layers(1).Name,newInputLayer);
Replace the first 2-D convolution layer with a new 2-D convolution layer to match the size of the new input layer.

newConvLayer = convolution2dLayer([7 7],64,Stride=2,Padding=[3 3 3 3],Name="newConv1");
lgraph = replaceLayer(lgraph,lgraph.Layers(2).Name,newConvLayer);
To better segment smaller tumor regions and reduce the influence of larger background regions, use a custom Tversky pixel classification layer. For more details about using a custom Tversky layer, see “Define Custom Pixel Classification Layer with Tversky Loss” (Deep Learning Toolbox). Replace the pixel classification layer with the Tversky pixel classification layer. The alpha and beta weighting factors control the contribution of false positives and false negatives, respectively, to the loss function. The alpha and beta values used in this example were selected using trial and error for the target data set. Generally, specifying the beta value greater than the alpha value is useful for training images with small objects and large background regions.

alpha = 0.01;
beta = 0.99;
pxLayer = tverskyPixelClassificationLayer("tverskyLoss",alpha,beta);
lgraph = replaceLayer(lgraph,"classification",pxLayer);
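To make the role of alpha and beta concrete, this is a minimal per-image sketch of the Tversky loss; it is not the implementation inside tverskyPixelClassificationLayer, which operates on mini-batches of per-class probabilities.

function loss = tverskyLossSketch(Y,T,alpha,beta)
% Y: predicted probability map for the tumor class, T: binary ground truth.
TP = sum(Y.*T,"all");          % true positives
FP = sum(Y.*(1-T),"all");      % false positives, weighted by alpha
FN = sum((1-Y).*T,"all");      % false negatives, weighted by beta
loss = 1 - TP/(TP + alpha*FP + beta*FN);
end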
Alternatively, you can modify the DeepLab v3+ network by using the Deep Network Designer (Deep Learning Toolbox) from Deep Learning Toolbox. Use the Analyze tool in the Deep Network Designer (Deep Learning Toolbox) to analyze the DeepLab v3+ network.

deepNetworkDesigner(lgraph)
Specify Training Options

Train the network using the adam optimization solver. Specify the hyperparameter settings using the trainingOptions (Deep Learning Toolbox) function. Set the learning rate to 1e-3 over the span of training. You can experiment with the mini-batch size based on your GPU memory. Batch normalization layers are less effective for smaller values of the mini-batch size. Tune the initial learning rate based on the mini-batch size.

options = trainingOptions("adam", ...
    ExecutionEnvironment="gpu", ...
    InitialLearnRate=1e-3, ...
    ValidationData=tdsVal, ...
    MaxEpochs=300, ...
    MiniBatchSize=16, ...
    VerboseFrequency=20, ...
    Plots="training-progress");
Train Network

To train the network, set the doTraining variable to true. Train the model using the trainNetwork (Deep Learning Toolbox) function. Train on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU. For more information, see “GPU Computing Requirements” (Parallel Computing Toolbox). Training takes about four hours on a single-GPU system with an NVIDIA™ Titan Xp GPU and can take longer depending on your GPU hardware.

doTraining = false; % in the published example this is a live control; false assumed so the pretrained network is used
if doTraining
    [trainedNet,info] = trainNetwork(tdsTrain,lgraph,options);
    modelDateTime = string(datetime("now",Format="yyyy-MM-dd-HH-mm-ss"));
    save("breastTumorDeepLabv3-"+modelDateTime+".mat","trainedNet");
end
Predict Using New Data

Preprocess Test Data

Prepare the test data by using the transform function with custom preprocessing operations specified by the transformBreastTumorImageResize helper function. This helper function is attached to the example as a supporting file. The transformBreastTumorImageResize function converts images from RGB to grayscale and resizes the images to 256-by-256 pixels.

dsTest = transform(dsTest,@transformBreastTumorImageResize,IncludeInfo=true);
Segment Test Data

Use the trained network for semantic segmentation of the test data set.

pxdsResults = semanticseg(dsTest,trainedNet,Verbose=true);

Running semantic segmentation network
-------------------------------------
* Processed 65 images.
Evaluate Segmentation Accuracy

Evaluate the network-predicted segmentation results against the ground truth pixel label tumor masks.

metrics = evaluateSemanticSegmentation(pxdsResults,dsTest,Verbose=true);

Evaluating semantic segmentation results
----------------------------------------
* Selected metrics: global accuracy, class accuracy, IoU, weighted IoU, BF score.
* Processed 65 images.
* Finalizing... Done.
* Data set metrics:

    GlobalAccuracy    MeanAccuracy    MeanIoU    WeightedIoU    MeanBFScore
    ______________    ____________    _______    ___________    ___________
       0.97588          0.96236       0.86931      0.9565         0.57667
Measure the segmentation accuracy using the evaluateBreastTumorDiceAccuracy helper function. This helper function computes the Dice index between the predicted and ground truth segmentations using the dice function. The helper function is attached to the example as a supporting file.

[diceTumor,diceBackground,numTestImgs] = evaluateBreastTumorDiceAccuracy(pxdsResults,dsTest);
Calculate the average Dice index across the set of test images.

disp("Average Dice score of background across "+num2str(numTestImgs)+ ...
    " test images = "+num2str(mean(diceBackground)))

Average Dice score of background across 65 test images = 0.98581
disp("Average Dice score of tumor across "+num2str(numTestImgs)+ ... " test images = "+num2str(mean(diceTumor))) Average Dice score of tumor across 65 test images = 0.78588 disp("Median Dice score of tumor across "+num2str(numTestImgs)+ ... " test images = "+num2str(median(diceTumor))) Median Dice score of tumor across 65 test images = 0.85888
Visualize statistics about the Dice scores as a box chart. The middle blue line in the plot shows the median Dice index. The upper and lower bounds of the blue box indicate the 25th and 75th percentiles, respectively. Black whiskers extend to the most extreme data points that are not outliers.

figure
boxchart([diceTumor diceBackground])
title("Test Set Dice Accuracy")
xticklabels(classNames)
ylabel("Dice Coefficient")
References
[1] Salehi, Seyed Sadegh Mohseni, Deniz Erdogmus, and Ali Gholipour. “Tversky Loss Function for Image Segmentation Using 3D Fully Convolutional Deep Networks.” In Machine Learning in Medical Imaging, edited by Qian Wang, Yinghuan Shi, Heung-Il Suk, and Kenji Suzuki, 10541:379–87. Cham: Springer International Publishing, 2017. https://doi.org/10.1007/978-3-319-67389-9_44.

[2] Al-Dhabyani, Walid, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. “Dataset of Breast Ultrasound Images.” Data in Brief 28 (February 2020): 104863. https://doi.org/10.1016/j.dib.2019.104863.
See Also imageDatastore | pixelLabelDatastore | subset | combine | transform | deeplabv3plusLayers | semanticseg | evaluateSemanticSegmentation
Related Examples
• “Segment Lungs from CT Scan Using Pretrained Neural Network” on page 6-14
• “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24
• “3-D Brain Tumor Segmentation Using Deep Learning”
More About
• “Datastores for Deep Learning” (Deep Learning Toolbox)
• “Define Custom Pixel Classification Layer with Tversky Loss” (Computer Vision Toolbox)
• “GPU Computing Requirements” (Parallel Computing Toolbox)
Cardiac Left Ventricle Segmentation from Cine-MRI Images Using U-Net Network

This example shows how to perform semantic segmentation of the left ventricle from 2-D cardiac MRI images using U-Net. Semantic segmentation associates each pixel in an image with a class label. Segmentation of cardiac MRI images is useful for detecting abnormalities in heart structure and function. A common challenge of medical image segmentation is class imbalance, meaning the region of interest is small relative to the image background. Therefore, the training images contain many more background pixels than labeled pixels, which can limit classification accuracy. In this example, you address class imbalance by using a generalized Dice loss function [1 on page 6-51]. You also use the gradient-weighted class activation mapping (Grad-CAM) deep learning explainability technique to determine which regions of an image are important for the pixel classification decision. This figure shows an example of a cine-MRI image before segmentation, the network-predicted segmentation map, and the corresponding Grad-CAM map.
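For intuition about how the generalized Dice loss counteracts class imbalance, this is a minimal sketch of one common variant, in which each class is weighted by the inverse of its squared volume; it is hypothetical illustration code, not the loss used to train the shipped network.

function loss = generalizedDiceSketch(Y,T)
% Y: H-by-W-by-C predicted class probabilities, T: one-hot ground truth.
w = 1./(squeeze(sum(T,[1 2])).^2 + eps);   % small classes get large weights
intersection = squeeze(sum(Y.*T,[1 2]));
union = squeeze(sum(Y + T,[1 2]));
loss = 1 - 2*sum(w.*intersection)/sum(w.*union);
end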
Load Pretrained Network

Download the pretrained U-Net network by using the downloadTrainedNetwork helper function. The helper function is attached to this example as a supporting file. You can use this pretrained network to run the example without training the network.

exampleDir = fullfile(tempdir,"cardiacMR");
if ~isfolder(exampleDir)
    mkdir(exampleDir);
end
trainedNetworkURL = "https://ssd.mathworks.com/supportfiles" + ...
    "/medical/pretrainedLeftVentricleSegmentation.zip";
downloadTrainedNetwork(trainedNetworkURL,exampleDir);

Load the network.

data = load(fullfile(exampleDir,"pretrainedLeftVentricleSegmentationModel.mat"));
trainedNet = data.trainedNet;
Perform Semantic Segmentation

Use the pretrained network to predict the left ventricle segmentation mask for a test image.

Download Data Set

This example uses a subset of the Sunnybrook Cardiac Data data set [2 on page 6-51, 3 on page 6-52]. The subset consists of 45 cine-MRI image series and their corresponding ground truth label images. The MRI images were acquired from multiple patients with various cardiac pathologies. The ground truth label images were manually drawn by experts [2]. The MRI images are in the DICOM file format and the label images are in the PNG file format. The total size of the subset of data is ~105 MB.

Download the data set from the MathWorks® website and unzip the downloaded folder.

zipFile = matlab.internal.examples.downloadSupportFile("medical","CardiacMRI.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)

The imageDir folder contains the downloaded and unzipped data set.

imageDir = fullfile(filepath,"Cardiac MRI");

Predict Left Ventricle Mask

Read an image from the data set and preprocess the image by using the preprocessImage on page 6-50 helper function, which is defined at the end of this example. The helper function resizes MRI images to the input size of the network and converts them from grayscale to three-channel images.

testImg = dicomread(fullfile(imageDir,"images","SC-HF-I-01","SC-HF-I-01_rawdcm_099.dcm"));
trainingSize = [256 256 3];
data = preprocessImage(testImg,trainingSize);
testImg = data{1};

Predict the left ventricle segmentation mask for the test image using the pretrained network by using the semanticseg (Computer Vision Toolbox) function.

segmentedImg = semanticseg(testImg,trainedNet);

Display the test image and an overlay with the predicted mask as a montage.

overlayImg = labeloverlay(mat2gray(testImg),segmentedImg, ...
    Transparency=0.7, ...
    IncludedLabels="LeftVentricle");
imshowpair(mat2gray(testImg),overlayImg,"montage");
Prepare Data for Training

Create an imageDatastore object to read and manage the MRI images.

dataFolder = fullfile(imageDir,"images");
imds = imageDatastore(dataFolder, ...
    IncludeSubfolders=true, ...
    FileExtensions=".dcm", ...
    ReadFcn=@dicomread);

Create a pixelLabelDatastore (Computer Vision Toolbox) object to read and manage the label images.

labelFolder = fullfile(imageDir,"labels");
classNames = ["Background","LeftVentricle"];
pixIDs = [0,1];
pxds = pixelLabelDatastore(labelFolder,classNames,pixIDs, ...
    IncludeSubfolders=true, ...
    FileExtensions=".png");

Preprocess the data by using the transform function with custom operations specified by the preprocessImage on page 6-50 helper function, which is defined at the end of this example. The helper function resizes the MRI images to the input size of the network and converts them from grayscale to three-channel images.

timds = transform(imds,@(img) preprocessImage(img,trainingSize));

Preprocess the label images by using the transform function with custom operations specified by the preprocessLabels on page 6-51 helper function, which is defined at the end of this example. The helper function resizes the label images to the input size of the network.

tpxds = transform(pxds,@(img) preprocessLabels(img,trainingSize));
Combine the transformed image and pixel label datastores to create a CombinedDatastore object.

combinedDS = combine(timds,tpxds);

Partition Data for Training, Validation, and Testing

Split the combined datastore into data sets for training, validation, and testing. Allocate 75% of the data for training, 5% for validation, and the remaining 20% for testing.

numImages = numel(imds.Files);
numTrain = round(0.75*numImages);
numVal = round(0.05*numImages);
numTest = round(0.2*numImages);
shuffledIndices = randperm(numImages);
dsTrain = subset(combinedDS,shuffledIndices(1:numTrain));
dsVal = subset(combinedDS,shuffledIndices(numTrain+1:numTrain+numVal));
dsTest = subset(combinedDS,shuffledIndices(numTrain+numVal+1:end));

Visualize the number of images in the training, validation, and testing subsets.

figure
bar([numTrain,numVal,numTest])
title("Partitioned Data Set")
xticklabels({"Training Set","Validation Set","Testing Set"})
ylabel("Number of Images")
Augment Training Data
Augment the training data by using the transform function with custom operations specified by the augmentDataForLVSegmentation on page 6-51 helper function, which is defined at the end of this example. The helper function applies random rotations and translations to the MRI images and corresponding ground truth labels.

dsTrain = transform(dsTrain,@(data) augmentDataForLVSegmentation(data));

Measure Label Imbalance

To measure the distribution of class labels in the data set, use the countEachLabel function to count the background pixels and the labeled ventricle pixels.

pixelLabelCount = countEachLabel(pxds)

pixelLabelCount=2×3 table
           Name          PixelCount    ImagePixelCount
    _________________    __________    _______________
    {'Background'   }    5.1901e+07      5.2756e+07
    {'LeftVentricle'}    8.5594e+05      5.2756e+07

Visualize the labels by class. The image contains many more background pixels than labeled ventricle pixels. The label imbalance can bias the training of the network. You address this imbalance when you design the network.

figure
bar(categorical(pixelLabelCount.Name),pixelLabelCount.PixelCount)
ylabel("Frequency")
Define Network Architecture

This example uses a U-Net network for semantic segmentation. Create a U-Net network with an input size of 256-by-256-by-3 that classifies pixels into two categories corresponding to the background and left ventricle.

numClasses = length(classNames);
net = unetLayers(trainingSize,numClasses);

Replace the input network layer with an imageInputLayer (Deep Learning Toolbox) object that normalizes image values between 0 and 1000 to the range [0, 1]. Values less than 0 are set to 0 and values greater than 1000 are set to 1000.

inputlayer = imageInputLayer(trainingSize, ...
    Normalization="rescale-zero-one", ...
    Min=0, ...
    Max=1000, ...
    Name="input");
net = replaceLayer(net,net.Layers(1).Name,inputlayer);
To address the class imbalance between the smaller ventricle regions and larger background, this example uses a dicePixelClassificationLayer (Computer Vision Toolbox) object. Replace the pixel classification layer with the Dice pixel classification layer.

pxLayer = dicePixelClassificationLayer(Name="labels");
net = replaceLayer(net,net.Layers(end).Name,pxLayer);
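The Dice pixel classification layer computes a generalized Dice loss [1 on page 6-51], which weights each class by the inverse square of its region size so that the small ventricle region contributes to the loss as strongly as the large background. As a rough illustration of the published formulation (a sketch only, not the internal implementation of dicePixelClassificationLayer), the loss for one observation can be written as:

function loss = generalizedDiceLossSketch(Y,T)
% Sketch of the generalized Dice loss for one observation.
% Y is the predicted class probabilities (H-by-W-by-numClasses) and
% T is the one-hot encoded ground truth of the same size.
w = 1./(sum(T,[1 2]).^2 + eps);              % inverse squared class-volume weights
intersection = sum(w.*sum(Y.*T,[1 2]),"all");
union = sum(w.*sum(Y + T,[1 2]),"all");
loss = 1 - 2*intersection/(union + eps);     % 1 minus generalized Dice similarity
end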
Specify Training Options

Specify the training options by using the trainingOptions (Deep Learning Toolbox) function. Train the network using the adam optimization solver with an initial learning rate of 0.0002. You can experiment with the mini-batch size based on your GPU memory. Batch normalization layers are less effective for smaller values of the mini-batch size. Tune the initial learning rate based on the mini-batch size.

options = trainingOptions("adam", ...
    InitialLearnRate=0.0002, ...
    GradientDecayFactor=0.999, ...
    L2Regularization=0.0005, ...
    MaxEpochs=100, ...
    MiniBatchSize=32, ...
    Shuffle="every-epoch", ...
    Verbose=false, ...
    VerboseFrequency=100, ...
    ValidationData=dsVal, ...
    Plots="training-progress", ...
    ExecutionEnvironment="auto", ...
    ResetInputNormalization=false);
Train Network

To train the network, set the doTraining variable to true. Train the network by using the trainNetwork (Deep Learning Toolbox) function. Train on a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a CUDA®-enabled NVIDIA® GPU.

doTraining = false;
if doTraining
    trainedNet = trainNetwork(dsTrain,net,options);
    modelDateTime = string(datetime("now",Format="yyyy-MM-dd-HH-mm-ss"));
    save(fullfile(exampleDir,"trainedLeftVentricleSegmentation-" ...
        +modelDateTime+".mat"),"trainedNet");
end
Test Network

Segment each image in the test data set by using the trained network.

resultsDir = fullfile(exampleDir,"Results");
if ~isfolder(resultsDir)
    mkdir(resultsDir)
end
pxdsResults = semanticseg(dsTest,trainedNet, ...
    WriteLocation=resultsDir, ...
    Verbose=true, ...
    MiniBatchSize=1);

Running semantic segmentation network
-------------------------------------
* Processed 161 images.
Evaluate Segmentation Metrics

Evaluate the network by calculating performance metrics using the evaluateSemanticSegmentation (Computer Vision Toolbox) function. The function computes metrics that compare the labels that the network predicts in pxdsResults to the ground truth labels in pxdsTest.

pxdsTest = dsTest.UnderlyingDatastores{2};
metrics = evaluateSemanticSegmentation(pxdsResults,pxdsTest);

Evaluating semantic segmentation results
----------------------------------------
* Selected metrics: global accuracy, class accuracy, IoU, weighted IoU, BF score.
* Processed 161 images.
* Finalizing... Done.
* Data set metrics:

    GlobalAccuracy    MeanAccuracy    MeanIoU    WeightedIoU    MeanBFScore
    ______________    ____________    _______    ___________    ___________
       0.99829          0.97752       0.9539       0.99667        0.96747
View the metrics by class by querying the ClassMetrics property of metrics.

metrics.ClassMetrics

ans=2×3 table
                     Accuracy      IoU      MeanBFScore
                     ________    _______    ___________
    Background        0.99907    0.99826        0.995
    LeftVentricle     0.95598    0.90953      0.93994
Evaluate Dice Score

Evaluate the segmentation accuracy by calculating the Dice score between the predicted and ground truth label images. For each test image, calculate the Dice score for the background label and the ventricle label by using the dice function.

reset(pxdsTest);
reset(pxdsResults);
diceScore = zeros(numTest,numClasses);
for idx = 1:numTest
    prediction = read(pxdsResults);
    groundTruth = read(pxdsTest);
    diceScore(idx,1) = dice(prediction{1}==classNames(1),groundTruth{1}==classNames(1));
    diceScore(idx,2) = dice(prediction{1}==classNames(2),groundTruth{1}==classNames(2));
end

Calculate the mean Dice score over all test images and report the mean values in a table.

meanDiceScore = mean(diceScore);
diceTable = array2table(meanDiceScore', ...
    VariableNames="Mean Dice Score", ...
    RowNames=classNames)
diceTable=2×1 table
                     Mean Dice Score
                     _______________
    Background           0.99913
    LeftVentricle         0.9282

Visualize the Dice scores for each class as a box chart. The middle blue line in the plot shows the median Dice score. The upper and lower bounds of the blue box indicate the 25th and 75th percentiles, respectively. Black whiskers extend to the most extreme data points that are not outliers.

figure
boxchart(diceScore)
title("Test Set Dice Accuracy")
xticklabels(classNames)
ylabel("Dice Coefficient")
Explainability

By using explainability methods like Grad-CAM, you can see which areas of an input image the network uses to make its pixel classifications. Use Grad-CAM to show which areas of a test MRI image the network uses to segment the left ventricle.

Load an image from the test data set and preprocess it using the same operations you use to preprocess the training data. The preprocessImage on page 6-50 helper function is defined at the end of this example.
testImg = dicomread(fullfile(imageDir,"images","SC-HF-I-01","SC-HF-I-01_rawdcm_099.dcm"));
data = preprocessImage(testImg,trainingSize);
testImg = data{1};

Load the corresponding ground truth label image and preprocess it using the same operations you use to preprocess the training data. The preprocessLabels on page 6-51 function is defined at the end of this example.

testGroundTruth = imread(fullfile(imageDir,"labels","SC-HF-I-01","SC-HF-I-01gtmask0099.png"));
data = preprocessLabels({testGroundTruth},trainingSize);
testGroundTruth = data{1};

Segment the test image using the trained network.

prediction = semanticseg(testImg,trainedNet);

To use Grad-CAM, you must select a feature layer from which to extract the feature map and a reduction layer from which to extract the output activations. Use analyzeNetwork (Deep Learning Toolbox) to find the layers to use with Grad-CAM. In this example, you use the final ReLU layer as the feature layer and the softmax layer as the reduction layer.

analyzeNetwork(trainedNet)
featureLayer = "Decoder-Stage-4-Conv-2";
reductionLayer = "Softmax-Layer";

Compute the Grad-CAM map for the test image by using the gradCAM (Deep Learning Toolbox) function.

gradCAMMap = gradCAM(trainedNet,testImg,classNames, ...
    ReductionLayer=reductionLayer, ...
    FeatureLayer=featureLayer);

Visualize the test image, the ground truth labels, the network-predicted labels, and the Grad-CAM map for the ventricle. As expected, the area within the ground truth ventricle mask contributes most strongly to the network prediction of the ventricle label.

figure
tiledlayout(2,2)
nexttile
imshow(mat2gray(testImg))
title("Test Image")
nexttile
imshow(labeloverlay(mat2gray(testImg),testGroundTruth))
title("Ground Truth Label")
nexttile
imshow(labeloverlay(mat2gray(testImg),prediction,IncludedLabels="LeftVentricle"))
title("Network-Predicted Label")
nexttile
imshow(mat2gray(testImg))
hold on
imagesc(gradCAMMap(:,:,2),AlphaData=0.5)
title("GRAD-CAM Map")
colormap jet
Supporting Functions
The preprocessImage helper function preprocesses the MRI images using these steps:
1. Resize the input image to the target size of the network.
2. Convert grayscale images to three-channel images.
3. Return the preprocessed image in a cell array.

function out = preprocessImage(img,targetSize)
% Copyright 2023 The MathWorks, Inc.
targetSize = targetSize(1:2);
img = imresize(img,targetSize);
if size(img,3) == 1
    img = repmat(img,[1 1 3]);
end
out = {img};
end
The preprocessLabels helper function preprocesses label images using these steps:
1. Resize the input label image to the target size of the network. The function uses nearest-neighbor interpolation so that the output is a binary image without fractional label values.
2. Return the preprocessed image in a cell array.

function out = preprocessLabels(labels,targetSize)
% Copyright 2023 The MathWorks, Inc.
targetSize = targetSize(1:2);
labels = imresize(labels{1},targetSize,"nearest");
out = {labels};
end
The augmentDataForLVSegmentation helper function randomly applies these augmentations to each input image and its corresponding label image, and returns the output data in a cell array:
• Random rotation between -5 and 5 degrees.
• Random translation along the x- and y-axes of -10 to 10 pixels.
function out = augmentDataForLVSegmentation(data)
% Copyright 2023 The MathWorks, Inc.
img = data{1};
labels = data{2};
inputSize = size(img,[1 2]);
tform = randomAffine2d( ...
    Rotation=[-5 5], ...
    XTranslation=[-10 10], ...
    YTranslation=[-10 10]);
sameAsInput = affineOutputView(inputSize,tform,BoundsStyle="sameAsInput");
img = imwarp(img,tform,"linear",OutputView=sameAsInput);
labels = imwarp(labels,tform,"nearest",OutputView=sameAsInput);
out = {img,labels};
end
References
[1] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.” In 2016 Fourth International Conference on 3D Vision (3DV), 565–71. Stanford, CA, USA: IEEE, 2016. https://doi.org/10.1109/3DV.2016.79.
[2] Radau, Perry, Yingli Lu, Kim Connelly, Gideon Paul, Alexander J Dick, and Graham A Wright. “Evaluation Framework for Algorithms Segmenting Short Axis Cardiac MRI.” The MIDAS Journal, July 9, 2009. https://doi.org/10.54294/g80ruo.

[3] “Sunnybrook Cardiac Data – Cardiac Atlas Project.” Accessed January 10, 2023. http://www.cardiacatlas.org/studies/sunnybrook-cardiac-data/.
See Also
imageDatastore | pixelLabelDatastore | subset | combine | transform | unetLayers | semanticseg | evaluateSemanticSegmentation | gradCAM

Related Examples
• “Segment Lungs from CT Scan Using Pretrained Neural Network” on page 6-14
• “Brain MRI Segmentation Using Pretrained 3-D U-Net Network” on page 6-24
• “Breast Tumor Segmentation from Ultrasound Using Deep Learning” on page 6-32
• “3-D Brain Tumor Segmentation Using Deep Learning”

More About
• “Datastores for Deep Learning” (Deep Learning Toolbox)
IBSI Standard and Radiomics Function Feature Correspondences

Radiomics is a technique for extracting a large number of quantitative features from medical images. Radiomics features relate to the shape, intensity, and texture of a specified region of interest (ROI) in a medical image. They contain characteristic information about the medical image that can help in clinical diagnosis. Standardization of radiomics features ensures reproducibility and validation of radiomics studies. The image biomarker standardisation initiative (IBSI) provides standardized nomenclature and definitions for radiomics features, a standard procedure for medical image preprocessing, and reporting guidelines, among other standardization tools.

These tables list the radiomics features related to shape, intensity, and texture as defined by IBSI standards, as well as the corresponding feature names in the shapeFeatures, intensityFeatures, and textureFeatures functions of the radiomics object. The tables also specify the relevant Type and SubType of each feature. The Type applies only to intensity and texture features, and indicates the category of the feature within those groups. The SubType indicates the aggregation and resampling used for feature computation. For shapeFeatures and intensityFeatures, the SubType indicates whether the features are computed from the 2D resampled data, the 3D resampled data, or both. For textureFeatures, the SubType indicates whether the features are computed for each 2D slice in the 2D resampled data and averaged (2D), after merging all 2D slices in the 2D resampled data (2.5D), for the entire 3D volume in the 3D resampled data (3D), or for all three options.
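For example, a minimal call sequence might look like this sketch, where vol and roi are assumed to be medicalVolume objects containing the image and the ROI mask (preprocessing name-value options are omitted for brevity):

R = radiomics(vol,roi);                                    % create the radiomics object
shapeT = shapeFeatures(R,SubType="2D");                    % 2D shape features only
intensityT = intensityFeatures(R,Type="IntensityHistogram",SubType="all");
textureT = textureFeatures(R,Type="GLCM",SubType="2.5D");  % GLCM features, 2.5D aggregation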
Shape Features

Compute these shape features using the shapeFeatures function. You can compute only the 2D or 3D shape features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. In each row, the first feature has SubType "2D" and the second has SubType "3D".

IBSI Feature                                        Corresponding Features in shapeFeatures Output
Volume (mesh)                                       VolumeMesh2D, VolumeMesh3D
Volume (voxel counting)                             VolumeVoxelCount2D, VolumeVoxelCount3D
Surface area (mesh)                                 SurfaceAreaMesh2D, SurfaceAreaMesh3D
Surface to volume ratio                             SurfaceVolumeRatio2D, SurfaceVolumeRatio3D
Compactness 1                                       Compactness1_2D, Compactness1_3D
Compactness 2                                       Compactness2_2D, Compactness2_3D
Spherical disproportion                             SphericalDisproportion2D, SphericalDisproportion3D
Sphericity                                          Sphericity2D, Sphericity3D
Asphericity                                         Asphericity2D, Asphericity3D
Centre of mass shift                                CentreOfMassShift2D, CentreOfMassShift3D
Major axis length                                   MajorAxisLength2D, MajorAxisLength3D
Minor axis length                                   MinorAxisLength2D, MinorAxisLength3D
Least axis length                                   LeastAxisLength2D, LeastAxisLength3D
Elongation                                          Elongation2D, Elongation3D
Flatness                                            Flatness2D, Flatness3D
Volume density (axis-aligned bounding box)          VolumeDensityAABB_2D, VolumeDensityAABB_3D
Area density (axis-aligned bounding box)            AreaDensityAABB_2D, AreaDensityAABB_3D
Volume density (approximate enclosing ellipsoid)    VolumeDensityAEE_2D, VolumeDensityAEE_3D
Volume density (convex hull)                        VolumeDensityConvexHull2D, VolumeDensityConvexHull3D
Area density (convex hull)                          AreaDensityConvexHull2D, AreaDensityConvexHull3D
Integrated intensity                                IntegratedIntensity2D, IntegratedIntensity3D
Intensity Features

Compute intensity features using the intensityFeatures function. To compute all categories of intensity features listed in this section, specify Type as "all".
Local Intensity Features

To compute only these intensity features, when using the intensityFeatures function, specify Type as "LocalIntensity". You can compute only the 2D or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each feature has a 2D variant (SubType "2D") and a 3D variant (SubType "3D"), formed by appending 2D or 3D to the base feature name.

IBSI Feature             Base Feature Name in intensityFeatures Output
Local intensity peak     LocalIntensityPeak
Global intensity peak    GlobalIntensityPeak
Intensity Based Statistical Features

To compute only these intensity features, when using the intensityFeatures function, specify Type as "IntensityBasedStatistics". You can compute only the 2D or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each feature has a 2D variant (SubType "2D") and a 3D variant (SubType "3D"), formed by appending 2D or 3D to the base feature name.

IBSI Feature                                         Base Feature Name in intensityFeatures Output
Mean intensity                                       MeanIntensity
Intensity variance                                   IntensityVariance
Intensity skewness                                   IntensitySkewness
(Excess) intensity kurtosis                          IntensityKurtosis
Median intensity                                     MedianIntensity
Minimum intensity                                    MinimumIntensity
10th intensity percentile                            TenthIntensityPercentile
90th intensity percentile                            NinetiethIntensityPercentile
Maximum intensity                                    MaximumIntensity
Intensity interquartile range                        IntensityInterquartileRange
Intensity range                                      IntensityRange
Intensity-based mean absolute deviation              MeanAbsoluteDeviation
Intensity-based robust mean absolute deviation       RobustMeanAbsoluteDeviation
Intensity-based median absolute deviation            MedianAbsoluteDeviation
Intensity-based coefficient of variation             CoefficientOfVariation
Intensity-based quartile coefficient of dispersion   QuartileCoefficientOfDispersion
Intensity-based energy                               IntensityEnergy
Root mean square intensity                           RootMeanSquare
Intensity Histogram Features

To compute only these intensity features, when using the intensityFeatures function, specify Type as "IntensityHistogram". You can compute only the 2D or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each feature has a 2D variant (SubType "2D") and a 3D variant (SubType "3D"), formed by appending 2D or 3D to the base feature name.

IBSI Feature                                             Base Feature Name in intensityFeatures Output
Mean discretised intensity                               MeanDiscretisedIntensity
Discretised intensity variance                           DiscretisedIntensityVariance
Discretised intensity skewness                           DiscretisedIntensitySkewness
(Excess) discretised intensity kurtosis                  DiscretisedIntensityKurtosis
Median discretised intensity                             MedianDiscretisedIntensity
Minimum discretised intensity                            MinimumDiscretisedIntensity
10th discretised intensity percentile                    TenthDiscretisedIntensityPercentile
90th discretised intensity percentile                    NinetiethDiscretisedIntensityPercentile
Maximum discretised intensity                            MaximumDiscretisedIntensity
Intensity histogram mode                                 IntensityHistogramMode
Discretised intensity interquartile range                DiscretisedIntensityInterquartileRange
Discretised intensity range                              DiscretisedIntensityRange
Intensity histogram mean absolute deviation              IntensityHistogramMeanAbsoluteDeviation
Intensity histogram robust mean absolute deviation       IntensityHistogramRobustMeanAbsoluteDeviation
Intensity histogram median absolute deviation            IntensityHistogramMedianAbsoluteDeviation
Intensity histogram coefficient of variation             IntensityHistogramCoefficientOfVariation
Intensity histogram quartile coefficient of dispersion   IntensityHistogramQuartileCoefficientOfDispersion
Discretised intensity entropy                            DiscretisedIntensityEntropy
Discretised intensity uniformity                         DiscretisedIntensityUniformity
Maximum histogram gradient                               MaximumHistogramGradient
Maximum histogram gradient intensity                     MaximumHistogramGradientIntensity
Minimum histogram gradient                               MinimumHistogramGradient
Minimum histogram gradient intensity                     MinimumHistogramGradientIntensity
Intensity Volume Histogram Features

To compute only these intensity features, when using the intensityFeatures function, specify Type as "IntensityVolumeHistogram". You can compute only the 2D or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each feature has a 2D variant (SubType "2D") and a 3D variant (SubType "3D"), formed by appending 2D or 3D to the base feature name.

IBSI Feature                                             Base Feature Names in intensityFeatures Output
Volume at intensity fraction                             TenPercentVolumeFraction, NinetyPercentVolumeFraction
Intensity at volume fraction                             TenPercentIntensityFraction, NinetyPercentIntensityFraction
Volume fraction difference between intensity fractions   VolumeFractionDifference
Intensity fraction difference between volume fractions   IntensityFractionDifference
Texture Features

Compute texture features using the textureFeatures function. To compute all categories of texture features listed in this section, specify Type as "all".

Grey Level Co-occurrence Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "GLCM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features.
Each GLCM feature in the textureFeatures output has six variants, formed by appending an aggregation suffix to the base feature name. The suffix indicates the aggregation method and the SubType:

Suffix                Aggregation         SubType
Averaged2D            Averaged            "2D"
SliceMerged2D         Slice-merged        "2D"
DirectionMerged25D    Direction-merged    "2.5D"
Merged25D             Merged              "2.5D"
Averaged3D            Averaged            "3D"
Merged3D              Merged              "3D"

This table lists the base feature name for each IBSI feature. For example, the IBSI joint maximum feature corresponds to the JointMaximumAveraged2D, JointMaximumSliceMerged2D, JointMaximumDirectionMerged25D, JointMaximumMerged25D, JointMaximumAveraged3D, and JointMaximumMerged3D features.

IBSI Feature                            Base Feature Name in textureFeatures Output
Joint maximum                           JointMaximum
Joint average                           JointAverage
Joint variance                          JointVariance
Joint entropy                           JointEntropy
Difference average                      DifferenceAverage
Difference variance                     DifferenceVariance
Difference entropy                      DifferenceEntropy
Sum average                             SumAverage
Sum variance                            SumVariance
Sum entropy                             SumEntropy
Angular second moment                   AngularSecondMoment
Contrast                                Contrast
Dissimilarity                           Dissimilarity
Inverse difference                      InverseDifference
Normalised inverse difference           NormalisedInverseDifference
Inverse difference moment               InverseDifferenceMoment
Normalised inverse difference moment    NormalisedInverseDifferenceMoment
Inverse variance                        InverseVariance
Correlation                             Correlation
Autocorrelation                         AutoCorrelation
Cluster tendency                        ClusterTendency
Cluster shade                           ClusterShade
Cluster prominence                      ClusterProminence
Information correlation 1               InformationCorrelation1
Information correlation 2               InformationCorrelation2
Grey Level Run Length Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "GLRLM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each GLRLM feature has the same six aggregation variants as the GLCM features, formed by appending the Averaged2D, SliceMerged2D, DirectionMerged25D, Merged25D, Averaged3D, or Merged3D suffix to the base feature name.

IBSI Feature                            Base Feature Name in textureFeatures Output
Short runs emphasis                     ShortRunsEmphasis
Long runs emphasis                      LongRunsEmphasis
Low grey level run emphasis             LowGrayLevelRunEmphasis
High grey level run emphasis            HighGrayLevelRunEmphasis
Short run low grey level emphasis       ShortRunLowGrayLevelEmphasis
Short run high grey level emphasis      ShortRunHighGrayLevelEmphasis
Long run low grey level emphasis        LongRunLowGrayLevelEmphasis
Long run high grey level emphasis       LongRunHighGrayLevelEmphasis
Grey level non-uniformity               GrayLevelNonUniformity
Normalised grey level non-uniformity    NormalisedGrayLevelNonUniformity
Run length non-uniformity               RunLengthNonUniformity
Normalised run length non-uniformity    NormalisedRunLengthNonUniformity
Run percentage                          RunPercentage
Grey level variance                     GrayLevelVariance
Run length variance                     RunLengthVariance
Run entropy                             RunEntropy
Grey Level Size Zone Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "GLSZM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each GLSZM feature has three variants, formed by appending the 2D (SubType "2D"), 25D (SubType "2.5D"), or 3D (SubType "3D") suffix to the base feature name.

IBSI Feature                            Base Feature Name in textureFeatures Output
Small zone emphasis                     SmallZoneEmphasis
Large zone emphasis                     LargeZoneEmphasis
Low grey level zone emphasis            LowGrayLevelZoneEmphasis
High grey level zone emphasis           HighGrayLevelZoneEmphasis
Small zone low grey level emphasis      SmallZoneLowGrayLevelEmphasis
Small zone high grey level emphasis     SmallZoneHighGrayLevelEmphasis
Large zone low grey level emphasis      LargeZoneLowGrayLevelEmphasis
Large zone high grey level emphasis     LargeZoneHighGrayLevelEmphasis
Grey level non-uniformity               GrayLevelNonUniformity
Normalised grey level non-uniformity    NormalisedGrayLevelNonUniformity
Zone size non-uniformity                ZoneSizeNonUniformity
Normalised zone size non-uniformity     NormalisedZoneSizeNonUniformity
Zone percentage                         ZonePercentage
Grey level variance                     GrayLevelVariance
Zone size variance                      ZoneSizeVariance
Zone size entropy                       ZoneSizeEntropy
Grey Level Distance Zone Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "GLDZM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each GLDZM feature has three variants, formed by appending the 2D (SubType "2D"), 25D (SubType "2.5D"), or 3D (SubType "3D") suffix to the base feature name.

IBSI Feature                               Base Feature Name in textureFeatures Output
Small distance emphasis                    SmallDistanceEmphasis
Large distance emphasis                    LargeDistanceEmphasis
Low grey level zone emphasis               LowGrayLevelDistanceZoneEmphasis
High grey level zone emphasis              HighGrayLevelDistanceZoneEmphasis
Small distance low grey level emphasis     SmallDistanceLowGrayLevelEmphasis
Small distance high grey level emphasis    SmallDistanceHighGrayLevelEmphasis
Large distance low grey level emphasis     LargeDistanceLowGrayLevelEmphasis
Large distance high grey level emphasis    LargeDistanceHighGrayLevelEmphasis
Grey level non-uniformity                  GrayLevelDistanceNonUniformity
Normalised grey level non-uniformity       NormalisedGrayLevelDistanceNonUniformity
Zone distance non-uniformity               ZoneDistanceNonUniformity
Normalised zone distance non-uniformity    NormalisedZoneDistanceNonUniformity
Zone percentage                            ZoneDistancePercentage
Grey level variance                        GrayLevelDistanceVariance
Zone distance variance                     ZoneDistanceVariance
Zone distance entropy                      ZoneDistanceEntropy
Neighbourhood Grey Tone Difference Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "NGTDM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each NGTDM feature has three variants, formed by appending the 2D (SubType "2D"), 25D (SubType "2.5D"), or 3D (SubType "3D") suffix to the base feature name.

IBSI Feature    Base Feature Name in textureFeatures Output
Coarseness      Coarseness
Contrast        Contrast
Busyness        Busyness
Complexity      Complexity
Strength        Strength
Neighbouring Grey Level Dependence Based Features

To compute only these texture features, when using the textureFeatures function, specify Type as "NGLDM". You can compute only the 2D, 2.5D, or 3D features by specifying the corresponding SubType, or specify SubType as "all" to compute all listed features. Each NGLDM feature has three variants, formed by appending the 2D (SubType "2D"), 25D (SubType "2.5D"), or 3D (SubType "3D") suffix to the base feature name.

IBSI Feature                                  Base Feature Name in textureFeatures Output
Low dependence emphasis                       LowDependenceEmphasis
High dependence emphasis                      HighDependenceEmphasis
Low grey level count emphasis                 LowGrayLevelCountEmphasis
High grey level count emphasis                HighGrayLevelCountEmphasis
Low dependence low grey level emphasis        LowDependenceLowGrayLevelEmphasis
Low dependence high grey level emphasis       LowDependenceHighGrayLevelEmphasis
High dependence low grey level emphasis       HighDependenceLowGrayLevelEmphasis
High dependence high grey level emphasis      HighDependenceHighGrayLevelEmphasis
Grey level non-uniformity                     GrayLevelDependenceNonUniformity
Normalised grey level non-uniformity          NormalisedGrayLevelDependenceNonUniformity
Dependence count non-uniformity               DependenceCountNonUniformity
Normalised dependence count non-uniformity    NormalisedDependenceCountNonUniformity
Dependence count percentage                   DependenceCountPercentage
Grey level variance                           GrayLevelDependenceVariance
Dependence count variance                     DependenceCountVariance
Dependence count entropy                      DependenceCountEntropy
Dependence count energy                       DependenceCountEnergy
See Also
radiomics | shapeFeatures | intensityFeatures | textureFeatures

Related Examples
• “Classify Breast Tumors from Ultrasound Images Using Radiomics Features” on page 6-83

More About
• “Get Started with Radiomics” on page 6-81

External Websites
• Image Biomarker Standardisation Initiative (IBSI)
Get Started with Radiomics

Radiomics is a technique in medical imaging that computes a large number of quantitative features from medical images. These features quantify characteristics related to the shape, intensity, and texture of a region of interest in a medical image, reducing dependence on subjective interpretation of medical images for clinical workflows. Further, radiomics can capture image characteristics that are not visible to the human eye. You can use the same set of radiomics features for any medical imaging modality and for multiple applications, such as studying associations between medical imaging features and patient biology and predicting clinical outcomes from medical images, which makes radiomics a versatile technique in medical imaging. Standardization of radiomics features ensures reproducibility and validation of radiomics studies. The image biomarker standardisation initiative (IBSI) provides standardized nomenclature and definitions for radiomics features, a standard procedure for medical image preprocessing, and reporting guidelines, among other standardization tools.

Typical Workflow of a Radiomics Application

The typical workflow of a radiomics application involves these steps.

Import Medical Image into Workspace

Import the medical image into the workspace as a medicalVolume object. You can compute radiomics features from medical images of any modality, such as MRI, CT, or ultrasound. For more information, see “Common Medical Imaging Modalities” on page 1-20.

Preprocess Medical Image

Clean the acquired medical image using preprocessing techniques such as background removal, denoising, registration, augmentation, and intensity normalization. For more information, see “Medical Image Preprocessing” on page 4-2 and “Medical Image Registration” on page 4-6.

Identify Region of Interest (ROI)

If you have already identified the ROI, import the mask of the ROI as a medicalVolume object. If you have not identified the ROI, segment the medical image to identify the region of interest. Medical Imaging Toolbox provides the Medical Image Labeler app and various functions for medical image segmentation. For more information, see “Segmentation and Analysis”.
Preprocess Medical Image for Radiomics

Use the radiomics object to preprocess the medical image as required by IBSI standards. Preprocessing for radiomics involves resampling, resegmentation, and discretization.

Radiomics Feature Computation

Use the shapeFeatures, intensityFeatures, and textureFeatures functions of the radiomics object to compute radiomics features related to the shape, intensity, and texture of the region of interest, respectively.

Postprocessing

You can apply statistical methods to the computed radiomics features to identify associations between medical imaging features and patient biology, or apply machine learning or deep learning models to predict clinical outcomes. For an example of clinical prediction using radiomics, see “Classify Breast Tumors from Ultrasound Images Using Radiomics Features” on page 6-83.
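The following sketch strings these steps together for a single volume. The file names are placeholders, and the calls use default options; the sketch illustrates the shape of the workflow rather than a complete application.

% Import the image and a previously created ROI mask as medical volumes.
vol = medicalVolume("imageVolume.nii");   % placeholder file name
roi = medicalVolume("roiMask.nii");       % placeholder file name

% Create the radiomics object, which preprocesses the data as required
% by IBSI standards (resampling, resegmentation, and discretization).
R = radiomics(vol,roi);

% Compute shape, intensity, and texture features of the ROI as tables.
shapeT = shapeFeatures(R);
intensityT = intensityFeatures(R);
textureT = textureFeatures(R);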
See Also
radiomics | shapeFeatures | intensityFeatures | textureFeatures

Related Examples
• “Classify Breast Tumors from Ultrasound Images Using Radiomics Features” on page 6-83

More About
• “IBSI Standard and Radiomics Function Feature Correspondences” on page 6-53

External Websites
• Image Biomarker Standardisation Initiative (IBSI)
Classify Breast Tumors from Ultrasound Images Using Radiomics Features

This example shows how to use radiomics features to classify breast tumors as benign or malignant from breast ultrasound images. Radiomics extracts a large number of features from medical images related to the shape, intensity distribution, and texture of the specified region of interest (ROI). The extracted radiomics features contain useful information about the medical image that you can use in a classification application for automated diagnosis. This example extracts radiomics features from the tumor regions of breast ultrasound images and uses these features in a deep learning network to classify whether the tumor is benign or malignant.

About the Data Set

This example uses the Breast Ultrasound Images (BUSI) data set [1 on page 6-91]. The BUSI data set contains 2-D ultrasound images stored in the PNG file format. The total size of the data set is 196 MB. The data set contains 133 normal scans that have no tumors, 437 scans with benign tumors, and 210 scans with malignant tumors. Each ultrasound image has a corresponding tumor mask image and a label of normal, benign, or malignant. The tumor mask labels have been reviewed by clinical radiologists [1 on page 6-91]. The normal scans do not have any tumor regions in their corresponding tumor mask images. Thus, this example uses images from only the tumor groups.

Download Data Set

Download and unzip the data set.

zipFile = matlab.internal.examples.downloadSupportFile("image","data/Dataset_BUSI.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)
Load and Prepare Data

Prepare the data for training by performing these tasks.
1. Label each image as normal, benign, or malignant, based on the name of its folder.
2. Separate the ultrasound and mask images.
3. Retain images from only the tumor groups.
4. Create a variable to store labels.

Create a variable imageDir that points to the folder containing the downloaded and unzipped data set. Create an image datastore to read and manage the ultrasound image data. Label each image as normal, benign, or malignant, based on the name of its folder.

imageDir = fullfile(filepath,"Dataset_BUSI_with_GT");
ds = imageDatastore(imageDir,IncludeSubfolders=true,LabelSource="foldernames");

Create a subset of the image datastore containing only the ultrasound images by selecting only files whose names do not contain "mask".

imgds = subset(ds,find(~contains(ds.Files,"mask")));
Create a subset of the image datastore containing only the tumor mask images corresponding to the ultrasound images by selecting all files whose names contain "mask.png". maskds = subset(ds,find(contains(ds.Files,"mask.png")));
Remove the ultrasound images and the tumor mask images for the images labeled normal, as they do not contain any tumor regions. imgds = subset(imgds,imgds.Labels~="normal"); maskds = subset(maskds,maskds.Labels~="normal");
Create a vector named labels from the labels of the ultrasound image datastore. The removecats function removes unused categories from a categorical array. Remove the unused category "normal" from labels. labels = imgds.Labels; labels = removecats(labels);
View Sample Data

View an ultrasound image that contains a benign tumor, with the tumor mask overlaid on the image.

benignIdx = find(imgds.Labels == "benign");
benignImage = readimage(imgds,benignIdx(146));
benignMask = readimage(maskds,benignIdx(146));
B = labeloverlay(benignImage,benignMask,Transparency=0.7,Colormap="hsv");
figure
imshow(B)
title("Ultrasound Image with Benign Tumor")
View an ultrasound image that contains a malignant tumor, with the tumor mask overlaid on the image.

malignantIdx = find(imgds.Labels == "malignant");
malignantImage = readimage(imgds,malignantIdx(200));
malignantMask = readimage(maskds,malignantIdx(200));
B = labeloverlay(malignantImage,malignantMask,Transparency=0.7,Colormap="hsv");
figure
imshow(B)
title("Ultrasound Image with Malignant Tumor")
Export Ultrasound Images and Tumor Masks to DICOM Volumes

The radiomics object supports medicalVolume objects as input for the data and roi arguments. Export the ultrasound images and tumor masks as DICOM volumes, which you can import as medicalVolume objects.

Compute the number of ultrasound images in the image datastore.

n = length(imgds.Files)

n = 647
For this example, turn off MATLAB® warning messages.

warning off
To create a volume from each 2-D ultrasound image, pad the ultrasound image with two layers of zeros in the third dimension. Then, write the padded ultrasound image to a DICOM file using the dicomwrite function. To create a volume from each 2-D tumor mask image, convert the tumor mask image to a logical matrix, and pad it with two layers of logical zeros in the third dimension. Then, write the padded tumor mask image to a DICOM file using the dicomwrite function.

idx = 1:n;
for i = idx
    img = readimage(imgds,i);
    img = im2gray(img);
    img = cat(3,img,zeros([size(img) 2],class(img)));
    dicomwrite(img,"img" + i + ".dcm");

    maskimg = readimage(maskds,i);
    maskimg = logical(im2gray(uint8(maskimg)));
    maskimg = cat(3,maskimg,false([size(maskimg) 2]));
    dicomwrite(maskimg,"maskimg" + i + ".dcm");
end
Create Training and Test Data Sets

Split the data into 70 percent training data and 30 percent test data using holdout validation.

rng("default")
p = 0.3;
datapartition = cvpartition(n,"Holdout",p);
idxTrain = training(datapartition);
idxTest = test(datapartition);
Create separate vectors for the labels of the training and test data.

labelsTrain = labels(idxTrain);
labelsTest = labels(idxTest);
Compute Radiomics Features for Training Data

The shape of a breast tumor in a medical image is highly informative for classifying the tumor as benign or malignant. Thus, in this example, you compute the shape-related radiomics features from the tumor regions of the breast ultrasound images.

Create an empty table named radiomicsFeaturesTrain.

radiomicsFeaturesTrain = table;
For each ultrasound image in the training data, import the ultrasound image DICOM file and the corresponding tumor mask DICOM file as medicalVolume objects. Create a radiomics object using the ultrasound medical volume as the data and the tumor mask medical volume as the ROI labels. Compute the shape features of the radiomics object, remove the variable LabelID, and append the features to radiomicsFeaturesTrain. The feature computation may take a long time depending on your system configuration.

t = 1;
for i = idx(idxTrain)
    data = medicalVolume("img" + i + ".dcm");
    data.Voxels = squeeze(data.Voxels);
    roiData = medicalVolume("maskimg" + i + ".dcm");
    roiData.Voxels = squeeze(roiData.Voxels);
    R = radiomics(data,roiData);
    S = shapeFeatures(R,SubType="2D");
    S = removevars(S,"LabelID");
    radiomicsFeaturesTrain(t,:) = S;
    t = t + 1;
end
Convert the table radiomicsFeaturesTrain to an array. Save the feature names in the variable featureNames.

radiomicsFeaturesTrain = table2array(radiomicsFeaturesTrain);
featureNames = S.Properties.VariableNames;
Remove Redundant Radiomics Features

Compute the number of features.

f = size(radiomicsFeaturesTrain,2)

f = 21
Calculate the correlation coefficients between each pair of radiomics features in the training data.

featureCorrTrain = corrcoef(radiomicsFeaturesTrain);
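Before pruning, you can optionally inspect the correlation structure by displaying the matrix as an image. This visualization is not part of the original example.

figure
imagesc(abs(featureCorrTrain))   % magnitude of pairwise feature correlations
colorbar
title("Radiomics Feature Correlation (Magnitude)")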
If the magnitude of the correlation coefficient between two radiomics features is greater than or equal to 0.95, the features are redundant with one another. A feature whose diagonal entry in the correlation matrix is NaN has zero variance across the training data, so remove it as well. Remove each redundant feature from the training and test data, as well as from the list of feature names.

selectedFeatures = true(1,f);
for i = 1:f-1
    if isnan(featureCorrTrain(i,i))
        selectedFeatures(i) = false;
    end
    if selectedFeatures(i)
        for j = i+1:f
            if abs(featureCorrTrain(i,j)) >= 0.95
                selectedFeatures(j) = false;
            end
        end
    end
end
radiomicsFeaturesTrain = radiomicsFeaturesTrain(:,selectedFeatures);
featureNames = featureNames(selectedFeatures);
Visualize Radiomics Features

Compute the number of features after feature selection.

f = size(radiomicsFeaturesTrain,2)

f = 11
Create a logical vector that corresponds to the labels of the training data, representing benign labels with the value false and malignant labels with the value true.

labelsTrainLogical = labelsTrain == "malignant";
Visualize each selected feature for the training data alongside their labels. The first bar in the visualization shows the ground truth classification of the training data, with benign tumors indicated in blue and malignant tumors indicated in yellow. The rest of the bars show each of the selected features. Note that, although the colormap of each feature is different, the trend of each feature changes where the malignancy changes. For example, for the VolumeDensityConvexHull2D feature, the bottom portion of the bar corresponding to malignant tumors is more blue than the upper portion corresponding to benign tumors.

figure(Position=[0 0 1500 500])
tiledlayout(1,f + 1)
nexttile
imagesc(labelsTrainLogical)
xticklabels({})
yticklabels({})
ylabel("Malignancy",Interpreter="none")
for i = 1:f
    nexttile
    imagesc(radiomicsFeaturesTrain(:,i))
    xticklabels({})
    yticklabels({})
    ylabel(featureNames{i},Interpreter="none")
end
Train Deep Learning Network

Use the training data and the corresponding labels to train a deep learning classification neural network using the fitcnet (Statistics and Machine Learning Toolbox) function with tanh activation and three fully connected layers, which have 11, 33, and 267 outputs, respectively. Standardize each numeric predictor variable, and set the regularization term strength Lambda to 6.4977e-6.

model = fitcnet(radiomicsFeaturesTrain,labelsTrain, ...
    Activations="tanh",Standardize=true, ...
    Lambda=6.4977e-06,LayerSizes=[11 33 267]);
This example uses the deep learning classification neural network model and hyperparameters determined using the fitcauto (Statistics and Machine Learning Toolbox) function.

Compute Radiomics Features for Test Data

Compute the shape features for the test data using a similar process as for the training data. The feature computation may take a long time depending on your system configuration. Remove the same redundant features as from the training data.
radiomicsFeaturesTest = table;
t = 1;
for i = idx(idxTest)
    data = medicalVolume("img" + i + ".dcm");
    data.Voxels = squeeze(data.Voxels);
    roiData = medicalVolume("maskimg" + i + ".dcm");
    roiData.Voxels = squeeze(roiData.Voxels);
    R = radiomics(data,roiData);
    S = shapeFeatures(R,SubType="2D");
    S = removevars(S,"LabelID");
    radiomicsFeaturesTest(t,:) = S;
    t = t + 1;
end
radiomicsFeaturesTest = table2array(radiomicsFeaturesTest);
radiomicsFeaturesTest = radiomicsFeaturesTest(:,selectedFeatures);
Classify Test Data Using Trained Deep Learning Network

Use the trained model and the predict (Statistics and Machine Learning Toolbox) function to predict whether each test data image contains a benign or malignant tumor.

predictedLabelsTest = predict(model,radiomicsFeaturesTest);
Evaluate Classification Accuracy

Compute the accuracy of the trained deep network model using the minimal expected misclassification cost loss function.

accuracy = 1 - loss(model,radiomicsFeaturesTest,labelsTest)

accuracy = single
    0.9749
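As a related sanity check, you can also compute the raw fraction of correctly classified test images. This value differs slightly from the minimal expected misclassification cost and is not part of the original example.

accuracySimple = mean(predictedLabelsTest == labelsTest)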
Create a confusion matrix chart from the true labels labelsTest and the predicted labels predictedLabelsTest. Note that the percentage of false positives and the percentage of false negatives are both low, indicating that the prediction from this classification model has high accuracy and is balanced.

figure
confusionchart(labelsTest,predictedLabelsTest,Normalization="total-normalized")
References
[1] Al-Dhabyani, Walid, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. “Dataset of Breast Ultrasound Images.” Data in Brief 28 (February 2020): 104863. https://doi.org/10.1016/j.dib.2019.104863.
See Also
radiomics | shapeFeatures | intensityFeatures | textureFeatures
More About
• “Get Started with Radiomics” on page 6-81
• “IBSI Standard and Radiomics Function Feature Correspondences” on page 6-53
External Websites
• Image Biomarker Standardisation Initiative (IBSI)
7 Microscopy Segmentation Using Cellpose

• “Getting Started with Cellpose” on page 7-2
• “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5
• “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12
• “Train Custom Cellpose Model” on page 7-22
• “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28
Getting Started with Cellpose

In this section...
“Install Support Package” on page 7-2
“Apply Pretrained Cellpose Model” on page 7-2
“Refine Pretrained Cellpose Model” on page 7-3
“Train Cellpose Model Using Transfer Learning” on page 7-4

Segment microscopy images using the Medical Imaging Toolbox Interface for Cellpose Library support package. Using the support package, you can apply pretrained models from the Cellpose Library, or retrain a model with your own data. To learn more about the pretrained models and their training data, see the Cellpose Library Documentation.

Cellpose models can segment different cell types across microscopy modalities, or segment other types of objects. Cellpose is a single-class instance segmentation algorithm, meaning the models label many objects of one type. For example, Cellpose can detect hundreds of blood cells in an image, but it cannot classify red blood cells versus white blood cells. The algorithm labels each instance separately, as indicated by the different colored masks in this figure.
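For example, because each instance receives its own positive integer label, you can count the detected objects directly from a label matrix. The labels variable here is a hypothetical output of the segmentation functions.

% labels is a hypothetical label matrix from segmentCells2D;
% 0 marks the background and each positive integer marks one instance
numCells = numel(unique(labels(:))) - 1;  % subtract 1 for the background label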
Install Support Package

You can install the Medical Imaging Toolbox Interface for Cellpose Library from Add-On Explorer. For more information about installing add-ons, see “Get and Manage Add-Ons”. The support package also requires Deep Learning Toolbox and Computer Vision Toolbox™. Processing images on a GPU requires a supported GPU device and Parallel Computing Toolbox™.

The Cellpose Library uses Python®. The Medical Imaging Toolbox Interface for Cellpose Library support package internally uses the pyenv function to interface with the Cellpose Library and automatically load a compatible Python environment. You do not need to configure the Python environment.
Apply Pretrained Cellpose Model

For many microscopy images, a pretrained model with default options can give accurate results. When working with a new data set, try a pretrained model using these steps.

1. Load the image to segment. Cellpose models expect a 2-D intensity image as input. If you have an RGB or multichannel image, you can convert it to a grayscale image by using the im2gray function, or extract a specific color channel by using the imsplit function.

img = imread("AT3_1m4_01.tif");

2. Create a cellpose object to configure a pretrained model from the Cellpose Library. This code configures an instance of the cyto2 model.

cp = cellpose(Model="cyto2");

3. Segment the image, specifying the approximate cell diameter in pixels. If you do not know the cell diameter for your image, you can measure it by using the Image Viewer app.

labels = segmentCells2D(cp,img,ImageCellDiameter=56);

4. Display the results.

B = labeloverlay(img,labels);
imshow(B)
For a detailed example that compares pretrained models, see “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5.
Refine Pretrained Cellpose Model

To refine the results of a pretrained model, try tuning the model parameters. Tuning affects the preprocessing and postprocessing performed by the model, and does not modify the underlying deep learning network. For a detailed example, see “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12. This image shows labels predicted using Cellpose with default parameter settings and after refinement. The zoomed in regions show areas where the refined model labels cell membrane protrusions that the default model misses.
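As a minimal sketch, tuning usually means passing different threshold values to segmentCells2D. The values below are the ones selected in the detailed tuning example referenced above and may not suit your data.

labelsRefined = segmentCells2D(cp,img, ...
    ImageCellDiameter=56, ...
    CellThreshold=-2, ...         % values taken from the tuning example
    FlowErrorThreshold=0.4);      % in this chapter; tune for your data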
Train Cellpose Model Using Transfer Learning

If none of the pretrained models work well, try retraining a model using transfer learning. For an example, see “Train Custom Cellpose Model” on page 7-22.
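A minimal training call looks like this sketch, which mirrors the training example later in this chapter; the folder name, model name, and epoch count are placeholders to adapt to your data.

trainCellpose("train","myCustomModel", ...  % training folder and new model name (placeholders)
    PreTrainedModel="cyto2", ...
    MaxEpoch=2, ...                         % increase for real training runs
    ImageSuffix="_im", ...                  % filename suffixes for images and masks
    LabelSuffix="_mask");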
References

[1] Stringer, Carsen, Tim Wang, Michalis Michaelos, and Marius Pachitariu. “Cellpose: A Generalist Algorithm for Cellular Segmentation.” Nature Methods 18, no. 1 (January 2021): 100–106. https://doi.org/10.1038/s41592-020-01018-x.

[2] Pachitariu, Marius, and Carsen Stringer. “Cellpose 2.0: How to Train Your Own Model.” Nature Methods 19, no. 12 (December 2022): 1634–41. https://doi.org/10.1038/s41592-022-01663-4.
See Also
cellpose | segmentCells2D | segmentCells3D | trainCellpose | downloadCellposeModels
Related Examples
• “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5
• “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12
• “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28
• “Train Custom Cellpose Model” on page 7-22
External Websites
• https://www.cellpose.org/
• Cellpose Library Documentation
• Cellpose GitHub® Repository
Choose Pretrained Cellpose Model for Cell Segmentation

This example shows how to segment cells from microscopy images using a pretrained Cellpose model.

The Cellpose Library [1],[2] provides several pretrained models for cell segmentation in microscopy images. To learn more about each model and its training data, see the Models page of the Cellpose Library Documentation. In this example, you segment an image using three pretrained Cellpose models and visually compare the results. You can use this approach to find the pretrained model that is most suitable for your own images. If none of the pretrained models work well for an image data set, you can train a custom Cellpose model using transfer learning. For more information on training, see the “Train Custom Cellpose Model” on page 7-22 example.

This example requires the Medical Imaging Toolbox™ Interface for Cellpose Library. You can install the Medical Imaging Toolbox Interface for Cellpose Library from Add-On Explorer. For more information about installing add-ons, see “Get and Manage Add-Ons”. The Medical Imaging Toolbox Interface for Cellpose Library requires Deep Learning Toolbox™ and Computer Vision Toolbox™.

Load Image

Read and display an image. This example uses a single-channel grayscale image that is small enough to fit into memory. When working with your own images, consider these tips, illustrated in the sketch after the image display code:

• Cellpose models expect a 2-D intensity image as input. If you have an RGB or multichannel image, you can convert to grayscale by using the im2gray function or extract a specific color channel by using the imsplit function. In general, pick the channel with the strongest edges for the cell boundary. If your image contains a separate channel for nuclear staining, consider specifying the nuclear image during segmentation using the AuxiliaryChannel name-value argument of the segmentCells2D function.

• If you have many images to segment, pick a representative image or subset of images. Use an image that contains cells of the type and shape you want to label in the full data set.

• If you are working with large images, such as whole-slide microscopy images, extract a representative region of interest. Pick a region containing the cell types you want to label in the full image. For an example that uses the blockedImage object and getRegion object function to extract a region from a large microscopy image, see “Process Blocked Images Efficiently Using Partial Images or Lower Resolutions”.

img = imread("AT3_1m4_01.tif");
imshow(img)
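This sketch shows the conversions mentioned in the tips for a hypothetical multichannel image; the filename, channel choice, and diameter are illustrative, not part of the original example.

% Hypothetical RGB image; pick the channel with the strongest cell boundaries
imRGB = imread("myCells.png");            % illustrative filename
[r,g,b] = imsplit(imRGB);
grayImg = im2gray(imRGB);                 % alternative: convert the full image to grayscale
cpSketch = cellpose(Model="cyto");
% If a separate nuclear channel exists, pass it as the auxiliary channel
labelsSketch = segmentCells2D(cpSketch,g, ...
    AuxiliaryChannel=b,ImageCellDiameter=56);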
Measure Average Cell Diameter

To ensure accurate results, specify the approximate diameter of the cells, in pixels. This example assumes a diameter of 56 pixels.

averageCellDiameter = 56;
Alternatively, you can measure the approximate diameter in the Image Viewer app. Open the app by entering this command:

imageViewer(img)
In the Viewer tab of the app toolstrip, click Measure Distance. In the main display pane, click one point on the boundary of the cell you want to measure, and drag to draw a line across the cell. When you release the button, the app labels the line with a distance measurement, in pixels. You can reposition the endpoints or move the whole line by dragging the ends or middle portion of the line, respectively. This image shows an example of several distance measurements in the app.
Segment Image Using Cellpose Models

In this section, you compare the results from three of the pretrained Cellpose Library models: cyto, cyto2, and nuclei.

Configure Cellpose Models

Create a cellpose object for each model, specifying the Model property for each object. The first time you create an object for a given model, MATLAB® downloads the model from the Cellpose Library and saves a local copy. By default, MATLAB saves the models in a folder named cellposeModels within the folder returned by the userpath function. You can change the save location by specifying the ModelFolder property at object creation, as in the sketch after this code.

cpCyto = cellpose(Model="cyto");
cpCyto2 = cellpose(Model="cyto2");
cpNuclei = cellpose(Model="nuclei");
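For example, to save downloaded models to a custom location, set the ModelFolder property at creation; the folder name here is an assumption.

cpLocal = cellpose(Model="cyto",ModelFolder="cellposeModelCache");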
Predict Cell Segmentation Labels

Segment the image using each of the three models by using the segmentCells2D object function. First, specify values for the cell threshold and flow error threshold. The cell threshold typically ranges from –6 to 6, and the flow error threshold typically ranges from 0.1 to 3. For this example, set these values so as to favor a greater number of detections.

cellThreshold = -6;
flowThreshold = 3;
Predict labels for each model. The segmentCells2D object function returns a 2-D label image, where each cell detected by the model has a different numeric pixel value.

labels1 = segmentCells2D(cpCyto,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=cellThreshold, ...
    FlowErrorThreshold=flowThreshold);
labels2 = segmentCells2D(cpCyto2,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=cellThreshold, ...
    FlowErrorThreshold=flowThreshold);
labels3 = segmentCells2D(cpNuclei,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=cellThreshold, ...
    FlowErrorThreshold=flowThreshold);
Compare Results

Visualize the input image and the results from each model. The title for each image includes the model name and the number of cells detected.

figure
tiledlayout(2,2,TileSpacing="tight",Padding="tight")
nexttile
h1 = imshow(img);
title("Input image")
nexttile
h2 = imshow(labeloverlay(img,labels1));
nl = numel(unique(labels1(:))) - 1;
title(cpCyto.Model + " Detected: " + nl + " Cells")
nexttile
h3 = imshow(labeloverlay(img,labels2));
nl = numel(unique(labels2(:))) - 1;
title(cpCyto2.Model + " Detected: " + nl + " Cells")
nexttile
h4 = imshow(labeloverlay(img,labels3));
nl = numel(unique(labels3(:))) - 1;
title(cpNuclei.Model + " Detected: " + nl + " Cells")
linkaxes([h1.Parent h2.Parent h3.Parent h4.Parent])
Zoom in on each image to compare the results from each model for a single cell. In general, you want to select a model that accurately detects the correct number of cells, such that it detects one real cell as one cell in the label image. Based on the zoomed in image, the cyto2 model is most suitable for this family of images.

xlim([200 370])
ylim([190 320])
Once you have selected a model, you can refine the accuracy of the boundaries by tuning parameters such as the cell threshold and flow error threshold. For more information, see the “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12 example.
References

[1] Stringer, Carsen, Tim Wang, Michalis Michaelos, and Marius Pachitariu. “Cellpose: A Generalist Algorithm for Cellular Segmentation.” Nature Methods 18, no. 1 (January 2021): 100–106. https://doi.org/10.1038/s41592-020-01018-x.

[2] Pachitariu, Marius, and Carsen Stringer. “Cellpose 2.0: How to Train Your Own Model.” Nature Methods 19, no. 12 (December 2022): 1634–41. https://doi.org/10.1038/s41592-022-01663-4.
See Also
cellpose | segmentCells2D | trainCellpose | segmentCells3D | downloadCellposeModels
Related Examples
• “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12
• “Train Custom Cellpose Model” on page 7-22
• “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28
External Websites
• https://www.cellpose.org/
• Cellpose Library Documentation
• Cellpose GitHub® Repository
Refine Cellpose Segmentation by Tuning Model Parameters

This example shows how to use options in the segmentCells2D function to improve the segmentation results from a Cellpose model. To learn how to select a suitable model, see “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5.

This example requires the Medical Imaging Toolbox™ Interface for Cellpose Library. You can install the Medical Imaging Toolbox Interface for Cellpose Library from Add-On Explorer. For more information about installing add-ons, see “Get and Manage Add-Ons”. The Medical Imaging Toolbox Interface for Cellpose Library requires Deep Learning Toolbox™ and Computer Vision Toolbox™.

Load Image

Read and display an image. This example uses an image small enough to fit into memory. If you are working with very large images, such as whole slide images, see “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28.

img = imread("AT3_1m4_01.tif");
imshow(img)
Configure Cellpose Model

Create a cellpose object that uses the cyto2 pretrained model from the Cellpose Library. Specify the average cell diameter in pixels. For an example showing how to measure the approximate diameter in the Image Viewer app, see “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5.

cp = cellpose(Model="cyto2");
averageCellDiameter = 56;
Segment Image Using Default Settings

Segment the image using the default model settings, specifying only the model name and the average cell diameter to configure the model. The default model generally does a good job labeling each cell, but the labels exclude the cell membrane protrusions. The remaining sections of this example show you how to tune additional parameters to refine the labels.

labelsDefault = segmentCells2D(cp,img,ImageCellDiameter=averageCellDiameter);
loverlayDefault = labeloverlay(img,labelsDefault);
imshow(loverlayDefault)
Visualize Impact of Cell Probability Threshold

Visualize the effect of adjusting the CellThreshold name-value argument of the segmentCells2D function. The model applies this threshold value to the network output, and includes only pixels above the threshold in the label predictions. For most images, a suitable value ranges from –6 to 6. Specify the range of CellThreshold values to investigate as –5 to 6, in increments of 1.

ctRange = -5:1:6;
For each value to investigate, segment the image using that CellThreshold value and display the result for one representative cell. View the results for all values as a tiled image display.

numResults = numel(ctRange);
numRows = round(sqrt(numResults));
figure
tiledlayout(numRows,ceil(numResults/numRows),TileSpacing="none",Padding="tight")
for ind = 1:numResults
    labels = segmentCells2D(cp,img, ...
        ImageCellDiameter=averageCellDiameter, ...
        CellThreshold=ctRange(ind), ...
        FlowErrorThreshold=10);
    loverlay = labeloverlay(img,labels);
    nexttile
    imshow(loverlay)
    title("Cell Threshold: " + num2str(ctRange(ind)))
end
linkaxes(findobj(gcf,Type="axes"))
In general, decreasing the CellThreshold value results in more cells being detected, but can generate less accurate boundaries between cells. Increasing this value generates clean boundaries, but can miss some cells or regions within a cell. The optimal value depends on your application and what regions of the cell you want to include. The rest of the example uses a cell threshold value of –2 to balance including cell membrane protrusions with tightly following the cell boundary.

xlim([235 320])
ylim([226 292])
Visualize Impact of Flow Error Threshold

Visualize the effect of adjusting the FlowErrorThreshold name-value argument of the segmentCells2D function. The function applies the flow error threshold to flow fields, which are an intermediate network output used to compute the final label masks. For most images, a suitable value ranges from 0.1 to 3. Specify the FlowErrorThreshold values to investigate. Based on the results from the previous section, set the CellThreshold to –2.

ct = -2;
ftRange = [3 1 0.5 0.4 0.3 0.2];
For each value to investigate, segment the image using that FlowErrorThreshold value and display the result for one representative cell. View the results for all values as a tiled image display.

numResults = numel(ftRange);
numRows = round(sqrt(numResults));
figure
tiledlayout(numRows,ceil(numResults/numRows),TileSpacing="none",Padding="tight")
for ind = 1:numResults
    labels = segmentCells2D(cp,img, ...
        ImageCellDiameter=averageCellDiameter, ...
        CellThreshold=ct, ...
        FlowErrorThreshold=ftRange(ind));
    loverlay = labeloverlay(img,labels);
    nexttile
    imshow(loverlay)
    title(num2str(ftRange(ind)))
end
linkaxes(findobj(gcf,Type="axes"))
In general, decreasing the FlowErrorThreshold value generates cleaner boundaries, but can miss some cells or regions of a cell. Increasing this value results in more cells being detected, but can generate less accurate boundaries between cells. The optimal value depends on your application and what regions of the cell you want to include. The rest of the example uses a flow error threshold value of 0.4.

xlim([235 320])
ylim([226 292])
Visualize Impact of Tiling and Augmentation

Cellpose models can generally detect and label cells with varied shapes, positions, and rotations. For some data sets, applying tiling and augmentation during segmentation can improve accuracy, but at the cost of longer segmentation times. When you use these techniques, the segmentCells2D function makes multiple predictions for some pixels, and averages the predictions to generate the final output. Averaging can improve accuracy by reducing random errors that occur in individual predictions.

Apply tiling by specifying the Tile name-value argument as true. The segmentCells2D function divides the input image into overlapping 224-by-224 pixel tiles. The function predicts labels for each tile individually and averages the results for the overlapping regions. Control the amount of overlap by specifying the TileOverlap name-value argument. For most images, a suitable value ranges from 0.1 (10% overlap) to 0.5 (50% overlap). For this example, specify the TileOverlap value as 0.5. Based on the results from the previous section, set the CellThreshold to –2 and the FlowErrorThreshold to 0.4.

ft = 0.4;
tileLabels = segmentCells2D(cp,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=ct, ...
    FlowErrorThreshold=ft, ...
    Tile=true, ...
    TileOverlap=0.5);
Apply augmentation by specifying the TileAndAugment name-value argument as true. The segmentCells2D function automatically applies tiling with a 50% overlap and applies random flips and rotations to the tiles. The function predicts labels for each tile individually and averages the results for the overlapping regions.

augLabels = segmentCells2D(cp,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=ct, ...
    FlowErrorThreshold=ft, ...
    TileAndAugment=true);
Visually compare the labels computed without tiling or augmentation, those computed with tiling only, and those computed with tiling and augmentation. Compute labels without tiling or augmentation, then display the results from each computation method for one representative cell as a tiled image display.

labels = segmentCells2D(cp,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=ct, ...
    FlowErrorThreshold=ft);
figure
tiledlayout(1,3,TileSpacing="none",Padding="tight")
nexttile
imshow(labeloverlay(img,labels))
title("Normal")
nexttile
imshow(labeloverlay(img,tileLabels))
title("Tiled")
nexttile
imshow(labeloverlay(img,augLabels))
title("Tiled and Augmented")
linkaxes(findobj(gcf,Type="axes"))
xlim([540 620])
ylim([240 300])
Visualize Impact of Model Ensembling

For some pretrained models, including cyto, cyto2, and nuclei, the Cellpose Library offers multiple versions of the model that have been trained using different initial parameter values. Set the UseEnsemble name-value argument when creating a new cellpose object to configure the model to use ensembling. When segmenting images, the segmentCells2D function predicts labels for each version of the model, and averages the results across the ensemble. Using a model ensemble can increase accuracy, but takes longer to segment images.

Create a cellpose object that uses the cyto2 model ensemble.

cpEnsemble = cellpose(Model="cyto2",UseEnsemble=true);
Segment the image using the model ensemble.

enLabels = segmentCells2D(cpEnsemble,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=ct, ...
    FlowErrorThreshold=ft);
Visually compare the labels computed using the normal and ensembled cyto2 models. Display the results from each model for one representative cell as a tiled image display.

figure
tiledlayout(1,2,TileSpacing="none",Padding="tight")
nexttile
imshow(labeloverlay(img,labels))
title("Normal")
nexttile
imshow(labeloverlay(img,enLabels))
title("Ensemble")
linkaxes(findobj(gcf,Type="axes"))
xlim([530 694])
ylim([120 225])
Interactively Explore Parameters

Try segmenting the image using different combinations of argument values. When creating the cellpose object, select a different model or enable ensembling to see how they affect the result. When performing segmentation, you can adjust the cell diameter (in pixels), the cell threshold, and the flow error threshold, as well as enable tiling or tiling with augmentation. Note that if you specify TileAndAugment as true, the segmentCells2D function automatically tiles the image, regardless of the value of Tile. Each time you adjust a value, the display updates to show the label predictions. To restore the default value for an argument, right-click the argument value and select Restore Default Value. In the live script version of this example, each argument value is an interactive control; the values shown here are one possible starting combination, using the thresholds selected earlier in this example.

cpl = cellpose(Model="cyto2",UseEnsemble=false);

labelsl = segmentCells2D(cpl,img, ...
    ImageCellDiameter=averageCellDiameter, ...
    CellThreshold=ct, ...
    FlowErrorThreshold=ft, ...
    Tile=false, ...
    TileOverlap=0.5, ...
    TileAndAugment=false);

figure
imshow(labeloverlay(img,labelsl))
title("Segmented Image")
References

[1] Stringer, Carsen, Tim Wang, Michalis Michaelos, and Marius Pachitariu. “Cellpose: A Generalist Algorithm for Cellular Segmentation.” Nature Methods 18, no. 1 (January 2021): 100–106. https://doi.org/10.1038/s41592-020-01018-x.

[2] Pachitariu, Marius, and Carsen Stringer. “Cellpose 2.0: How to Train Your Own Model.” Nature Methods 19, no. 12 (December 2022): 1634–41. https://doi.org/10.1038/s41592-022-01663-4.
See Also
cellpose | segmentCells2D | segmentCells3D | trainCellpose | downloadCellposeModels
Related Examples
• “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5
• “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28
• “Train Custom Cellpose Model” on page 7-22
External Websites
• https://www.cellpose.org/
• Cellpose Library Documentation
• Cellpose GitHub® Repository
Train Custom Cellpose Model

This example shows how to train a custom Cellpose model, using new training data, to detect noncircular shapes.

The Cellpose Library provides several pretrained models that have been trained to detect cells in microscopy images. The pretrained models can often detect other circular objects. For example, you can use the pretrained cyto2 model to label pears or coins by calling the segmentCells2D function with an ImageCellDiameter value that reflects the approximate object diameter, in pixels. To detect other shapes, you can train a custom Cellpose model using training images with the target shape. In this example, you train a custom model that detects diamond shapes.
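For instance, this sketch applies the pretrained cyto2 model to the coins.png image shipped with Image Processing Toolbox; the diameter value is an estimate, not taken from the original example.

imCoins = imread("coins.png");     % grayscale image of coins
cpCoins = cellpose(Model="cyto2");
coinLabels = segmentCells2D(cpCoins,imCoins, ...
    ImageCellDiameter=60);         % approximate coin diameter, in pixels (estimate)
imshow(labeloverlay(imCoins,coinLabels))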
This example requires the Medical Imaging Toolbox™ Interface for Cellpose Library. You can install the Medical Imaging Toolbox Interface for Cellpose Library from Add-On Explorer. For more information about installing add-ons, see “Get and Manage Add-Ons”. The Medical Imaging Toolbox Interface for Cellpose Library requires Deep Learning Toolbox™ and Computer Vision Toolbox™.

Create Training and Testing Images

Create a new image data set to train a custom model. For this example, create a data set to train a model that detects diamond-shaped objects. First, create a structuring element corresponding to the object shape and size you want to detect. For this example, create a diamond in which the center and corner points are 35 pixels apart.

radius = 35;
sd = strel("diamond",radius);
Specify the target size of the images, in pixels, and the number of diamonds to include in each image.

imageSize = [128 128];
numObjectsPerImage = 1;
Create training images by using the makeRepeatedShapeData helper function, which is defined at the end of this example. Save the training images to a subfolder named train within the current directory.

numTrainingImages = 15;
trainingFolderName = "train";
makeRepeatedShapeData(trainingFolderName,sd, ...
    numTrainingImages,imageSize,numObjectsPerImage)
Create one test image using the makeRepeatedShapeData helper function. Save the test image to a subfolder named test within the current directory.

numTestingImages = 1;
testFolderName = "test";
makeRepeatedShapeData(testFolderName,sd, ...
    numTestingImages,imageSize,numObjectsPerImage)
You can visually check the training and test images by using the Image Browser app.

imageBrowser(trainingFolderName)
Test Pretrained Model

Test whether the pretrained cyto2 model can detect the diamonds in the test image.

cp = cellpose(Model="cyto2");
imTest = imread(testFolderName + filesep + "1_im.png");
labelPretrained = segmentCells2D(cp,imTest,ImageCellDiameter=2*radius);
Display the test image and the labels predicted by the pretrained cyto2 model. The label mask is empty, indicating that the model does not detect the diamond.
figure
tiledlayout(1,2,TileSpacing="tight")
nexttile
imshow(imTest)
title("Test Image")
nexttile
imshow(labeloverlay(imTest,labelPretrained))
title("Label from Pretrained cyto2")
Train Custom Model

Train a custom Cellpose model by using the trainCellpose function. Specify the training data location and the name for the new model as the first two input arguments. Use name-value arguments to specify additional training details. The PreTrainedModel name-value argument specifies whether to start training from one of the pretrained models in the Cellpose Library. This example retrains a copy of the pretrained cyto2 model. This is the default choice, and is generally suitable unless your training data is similar to that of another pretrained model. To learn more about each model and its training data, see the Models page of the Cellpose Library Documentation. Specify the ImageSuffix and LabelSuffix arguments to match the filenames of the images and masks in the training data set, respectively. The function saves the new model in the current directory.

By default, the function trains on a GPU if one is available, and otherwise trains on the CPU. Training on a GPU requires Parallel Computing Toolbox™. For information on supported devices, see “GPU Computing Requirements” (Parallel Computing Toolbox).

OutputModelFile = "diamondDetectionModel";
trainCellpose(trainingFolderName,OutputModelFile, ...
    PreTrainedModel="cyto2", ...
    MaxEpoch=2, ...
    ImageSuffix="_im", ...
    LabelSuffix="_mask");
INFO:cellpose.io:30 / 30 images in C:\train folder have labels
INFO:cellpose.models:>> cyto2 >>> model diam_mean = 30.000 (ROIs rescaled to this size during training)
INFO:cellpose.dynamics:flows precomputed
INFO:cellpose.core:>>>> median diameter set to = 30
INFO:cellpose.core:>>>> mean of training label mask diameters (saved to model) 52.052
INFO:cellpose.core:>>>> training network with 2 channel input > LR: 0.20000, batch_size: 8, weight_decay: 0.00001
INFO:cellpose.core:>>>> ntrain = 30
INFO:cellpose.core:>>>> nimg_per_epoch = 30
INFO:cellpose.core:Epoch 0, Time 0.9s, Loss 0.3013, LR 0.0000
INFO:cellpose.core:saving network parameters to C:\models/cellpose_residual_on_style_on_concatena
Test Custom Model

Examine the custom model by using it to create a new cellpose object. The TrainingCellDiameter property value now reflects the size of the diamonds in the training image masks.

cpt = cellpose(Model=OutputModelFile);
cpt.TrainingCellDiameter

ans = 52.0520
Test the custom model by segmenting the test image.

labelCustom = segmentCells2D(cpt,imTest);
Display the result. The custom model correctly labels the diamond shape.

figure
tiledlayout(1,2,TileSpacing="tight")
nexttile
imshow(imTest)
title("Test Image")
nexttile
imshow(labeloverlay(imTest,labelCustom))
title("Label from Custom Model")
Helper Functions

The makeRepeatedShapeData helper function creates images and label masks by performing these steps:

• Creates images of the size specified by imageSize that contain the shape described by a structuring element, sd, plus salt and pepper noise.
• Creates a binary ground truth label mask showing the location of the shape.
• Saves the images and ground truth masks as PNG files to the specified subfolder within the current directory. If the subfolder does not exist, the function creates it before saving the images.

function makeRepeatedShapeData(folderName,sd,numImages,imageSize,numObjectsPerImage)
    % Set the random number seed to generate consistent images across runs
    rng(0)
    % Create the specified folder if it does not exist
    if ~exist(folderName,"dir")
        mkdir(folderName);
    end
    % Convert the structuring element to a numeric matrix
    shape = double(sd.Neighborhood)*127;
    % Create and save the images
    for ind = 1:numImages
        img = zeros(imageSize,"uint8");
        % Seed the shape centers
        objCenters = randi(numel(img),numObjectsPerImage,1);
        img(objCenters) = 1;
        % "Stamp" the shapes into the image
        img = imfilter(img,shape);
        % Create the mask
        mask = img > 0;
        % Add noise to the image
        img = imnoise(img);
        img = imnoise(img,"salt & pepper");
        % Save the image and mask
        baseFileName = folderName + filesep + string(ind);
        imwrite(img,baseFileName + "_im.png");
        imwrite(mask,baseFileName + "_mask.png");
    end
end
References

[1] Stringer, Carsen, Tim Wang, Michalis Michaelos, and Marius Pachitariu. “Cellpose: A Generalist Algorithm for Cellular Segmentation.” Nature Methods 18, no. 1 (January 2021): 100–106. https://doi.org/10.1038/s41592-020-01018-x.

[2] Pachitariu, Marius, and Carsen Stringer. “Cellpose 2.0: How to Train Your Own Model.” Nature Methods 19, no. 12 (December 2022): 1634–41. https://doi.org/10.1038/s41592-022-01663-4.
See Also
cellpose | segmentCells2D | segmentCells3D | trainCellpose | downloadCellposeModels
Related Examples
• “Choose Pretrained Cellpose Model for Cell Segmentation” on page 7-5
• “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12
• “Detect Nuclei in Large Whole Slide Images Using Cellpose” on page 7-28
External Websites
• https://www.cellpose.org/
• Cellpose Library Documentation
• Cellpose GitHub® Repository
Detect Nuclei in Large Whole Slide Images Using Cellpose

This example shows how to detect cell nuclei in whole slide images (WSIs) of tissue stained using hematoxylin and eosin (H&E) by using Cellpose.

Cell segmentation is an important step in most digital pathology workflows. The number and distribution of cells in a tissue sample can be a biomarker of cancer or other diseases. Digital pathology uses digital images of microscopy slides, or whole slide images (WSIs), for clinical diagnosis or research. To capture tissue- and cellular-level detail, WSIs have high resolutions, and can have sizes on the order of 200,000-by-100,000 pixels. To facilitate efficient display, navigation, and processing of WSIs, the best practice is to store them in a multiresolution format.

This example addresses two challenges to segmenting nuclei in a WSI:

• Accurately labeling cell nuclei — This example uses a pretrained model from the Cellpose Library [1],[2], which provides deep learning models for cell segmentation. You can configure Cellpose models in MATLAB® by using the cellpose object and its object functions.

• Managing large images — This example uses a blocked image approach to process the WSI without loading the full image into system memory. You can manage and process images block-by-block by using the blockedImage object and its object functions.

This example requires the Medical Imaging Toolbox™ Interface for Cellpose Library. You can install the Medical Imaging Toolbox Interface for Cellpose Library from Add-On Explorer. For more information about installing add-ons, see “Get and Manage Add-Ons”. The Medical Imaging Toolbox Interface for Cellpose Library requires Deep Learning Toolbox™ and Computer Vision Toolbox™. For a similar example that uses only Image Processing Toolbox, see “Detect and Count Cell Nuclei in Whole Slide Images”.

Download Sample Image from Camelyon17 Data Set

This example uses one WSI from the Camelyon17 challenge. You can download the file for the sample, tumor_065.tif, from the GigaDB page for the data set. On the GigaDB page, scroll down to the table and select the Files tab. Navigate the table to find the entry for tumor_065.tif, and click the filename to download the image. After you download the image, update the fileName variable in this line of code to point to the full path of the downloaded image.

fileName = "tumor_065.tif";
Create Blocked Image

Create a blockedImage object from the sample image. This image has 11 resolution levels. The finest resolution level, which is the first level, has a size of 220672-by-97792 pixels. The coarsest resolution level, which is the last level, has a size of 512-by-512 pixels and easily fits in memory.

bim = blockedImage(fileName);
disp(bim.NumLevels)

11

disp(bim.Size)

   220672    97792        3
   110592    49152        3
    55296    24576        3
    27648    12288        3
    13824     6144        3
     7168     3072        3
     1577     3629        3
     3584     1536        3
     2048     1024        3
     1024      512        3
      512      512        3
Set the spatial referencing for the blocked image by using the setSpatialReferencingForCamelyon16 helper function, which is attached to this example as a supporting file. The helper function sets the WorldStart and WorldEnd properties of the blockedImage object using the spatial referencing information from the TIF file metadata.

bim = setSpatialReferencingForCamelyon16(bim);
Display the blocked image using the bigimageshow function.

bigimageshow(bim);
Test Cellpose Model in Representative Subregion

In this example, you use a representative region of interest to test a Cellpose model and tune its parameter values. You then apply the final model to all regions of the image that contain tissue.

Identify and display a representative region in the image.
xRegion = [52700 53000];
yRegion = [98140 98400];
xlim(xRegion)
ylim(yRegion)
title("Region of Interest")
Extract the region of interest at the finest resolution level by using the getRegion function.

imRegion = getRegion(bim,[98140 52700],[98400 53000]);
Measure Average Nuclei Diameter

To ensure accurate results, specify the approximate average diameter of the nuclei, in pixels. For this example, assume a diameter of 20 pixels.

averageCellDiameter = 20;
Alternatively, you can measure the approximate nucleus diameter in the Image Viewer app. Open the app by entering this command.

imageViewer(imRegion)
In the app, measure the nucleus of several cells to estimate the average diameter in the whole image. In the Viewer tab of the app toolstrip, click Measure Distance. In the main display pane, click on the boundary of the nucleus you want to measure and drag to draw a line across the nucleus. When you release the button, the app labels the line with a distance measurement, in pixels. You can reposition the endpoints or move the whole line by dragging the ends or middle portion of the line, respectively. This image shows an example of several distance measurements in the app.
Configure Nuclei Cellpose Model

Segment the image region by using the pretrained nuclei model from the Cellpose Library. This model is suitable for detecting nuclei. To learn more about other models and their training data, see the Cellpose Library documentation.

The nuclei model expects input images that have one channel in which the nuclei are bright relative to other areas. To use this model, convert the RGB input image to an inverted grayscale image.

img = im2gray(imRegion);
imgc = imcomplement(img);
Create a cellpose object for the nuclei pretrained model.

cp = cellpose(Model="nuclei");
Segment the input image using the Cellpose model. Specify the average diameter of the nuclei to the segmentCells2D function by using the ImageCellDiameter name-value argument. The function returns a 2-D label image that labels pixels of the background as 0 and the pixels of different nuclei as increasing values, starting at 1.
labels = segmentCells2D(cp,imgc,ImageCellDiameter=averageCellDiameter);
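Because the background is labeled 0 and the nuclei are numbered consecutively starting at 1, the maximum label value gives the nucleus count in the region. This quick check is not part of the original example.

numNuclei = max(labels,[],"all")   % number of nuclei detected in the region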
Display the original RGB image, grayscale image, inverted grayscale image, and RGB image overlaid with the predicted labels as a tiled montage. The predicted labels image shows a different color for each detected nucleus, corresponding to the value of the associated pixels in labels.

figure
tiledlayout(1,4,TileSpacing="none",Padding="tight")
nexttile
imshow(imRegion)
title("RGB Image")
nexttile
imshow(img)
title("Grayscale Image")
nexttile
imshow(imgc)
title("Inverted Image")
nexttile
imshow(labeloverlay(imRegion,labels))
title("Predicted Labels")
linkaxes(findobj(gcf,Type="axes"));
Interactively Explore Labels Across WSI

In the previous section, you configured a Cellpose model based on one representative region. Before you apply the model to the full WSI, which takes several minutes, verify that the model works well for other representative regions in the slide.
Copy, paste, and run this code in the Command Window to call the exploreCamelyonBlockedApply helper function, which is attached to this example as a supporting file. A figure window opens, showing the whole slide with a rectangular ROI on the left and zoomed in views of the ROI on the right. The function applies the model to only the current ROI inside the rectangle, which is faster than processing the full image.

exploreCamelyonBlockedApply(bim, ...
    @(bs)labeloverlay(bs.Data, ...
    segmentCells2D(cp,imcomplement(im2gray(bs.Data)), ...
    ImageCellDiameter=averageCellDiameter)), ...
    BlockSize=[256 256],InitialXY=[64511 95287])
In the figure window, use the scroll wheel to zoom in on the slide until the rectangular ROI is clearly visible. Drag the rectangle to move the ROI. When you release the button, the zoomed in views automatically update. For this example, you can see that the model works well because it has labeled the dark purple nuclei in the plane of the microscope, and the majority of touching nuclei have been correctly labeled as separate instances. If the model does not produce accurate results for your data, try tuning the model parameters. For more information, see the “Refine Cellpose Segmentation by Tuning Model Parameters” on page 7-12 example.
Apply Cellpose Segmentation to Full WSI

Once you are satisfied with the Cellpose model in some representative regions, apply the model to the full WSI. For efficiency, apply the Cellpose model only to blocks that contain tissue, and ignore blocks without tissue. You can identify blocks with tissue using a coarse tissue mask.

Create Tissue Mask

In this example, you create a tissue mask by using the createBinaryTissueMask helper function, which is attached to this example as a supporting file. The code for the function has been generated by using the Color Thresholder app. This image shows the app window with the selected RGB color thresholds.
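The createBinaryTissueMask supporting file is not listed in this guide. This minimal sketch shows the kind of function the Color Thresholder app generates; the threshold values are placeholders that you must replace with the values you export from the app.

function BW = createBinaryTissueMask(RGB)
    % Placeholder RGB thresholds; replace with values exported from
    % the Color Thresholder app for your slide
    rMin = 120; rMax = 255;
    gMin = 0;   gMax = 210;
    bMin = 120; bMax = 255;
    BW = RGB(:,:,1) >= rMin & RGB(:,:,1) <= rMax & ...
         RGB(:,:,2) >= gMin & RGB(:,:,2) <= gMax & ...
         RGB(:,:,3) >= bMin & RGB(:,:,3) <= bMax;
end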
Optionally, you can segment the tissue using a different color space, or select different thresholds by creating your own createBinaryTissueMask function. For example, to select different thresholds, open a low-resolution version of the image in the Color Thresholder app by entering this code in the Command Window. After you select new thresholds, select Export > Export Function from the app toolstrip. Save the generated function as createBinaryTissueMask.m.

iml9 = gather(bim,Level=9);
colorThresholder(iml9)
Use the apply function to run the createBinaryTissueMask function on each block and assemble a single blockedImage. This example creates the mask at the ninth resolution level, which fits in memory and follows the contours of the tissue better than a mask at the coarsest resolution level.

maskLevel = 9;
tissueMask = apply(bim,@(bs)createBinaryTissueMask(bs.Data),Level=maskLevel);
Display the mask overlaid on the original WSI.

figure
h = bigimageshow(bim);
showlabels(h,tissueMask,Colormap=[0 0 0; 0 1 0])
title("Tissue Mask")
Select Blocks with Tissue

Select the set of blocks inside the tissue mask by using the selectBlockLocations function. Include only blocks that have at least one pixel inside the mask by specifying the InclusionThreshold value as 0.

bls = selectBlockLocations(bim,BlockSize=[512 512], ...
    Masks=tissueMask,InclusionThreshold=0);
Configure Cellpose Model

Create a new cellpose object to process the full WSI using the nuclei pretrained model. Use the canUseGPU function to check whether a supported GPU and Parallel Computing Toolbox™ are available. By default, a cellpose object processes images on a GPU if one is available. If a GPU is not available, enable OpenVINO acceleration for best performance on the CPU.

if canUseGPU
    cp = cellpose(Model="nuclei");
else
    cp = cellpose(Model="nuclei",Acceleration="openvino");
end
Apply Cellpose Model

Use the apply function to run the labelNuclei on page 7-42 helper function on all blocks in the tissue mask. The helper function, defined at the end of this example, segments each block using the Cellpose model and returns metrics about the segmentation labels. The helper manually normalizes all blocks to the same range before segmenting them. Specify the limits, gmin and gmax, as the minimum and maximum intensities from the original representative region img, respectively. The helper function uses an overlapping border between blocks to ensure it counts nuclei on the edges of a block only once. Specify a border size equal to 1.2 times the average nucleus diameter.

gmin = min(img,[],"all");
gmax = max(img,[],"all");
borderSize = round(1.2*[averageCellDiameter averageCellDiameter]);
if(exist("labelData","dir"))
    rmdir("labelData","s")
end
[bstats,bcount] = apply(bim, ...
    @(bs)labelNuclei(bs,cp,gmin,gmax,averageCellDiameter), ...
    Level=1, ...
    BorderSize=borderSize, ...
    BlockLocationSet=bls, ...
    PadPartial=true, ...
    OutputLocation="labelData", ...
    Adapter=images.blocked.MATBlocks, ...
    UseParallel=true);

WARNING: no mask pixels found
Explore Cellpose Results for WSI

Explore the results by creating a heatmap and histogram of nuclei statistics, and by verifying the results.

Display Heatmap of Detected Nuclei

View the results as a heat map of the nuclei count in each block. For more efficient display, downsample the count data before displaying it.

bcount.BlockSize = [50 50];
hb = bigimageshow(bim);
showlabels(hb,bcount,Colormap=flipud(hot(550)))
title("Heatmap of Nuclei Count Per Block")
Zoom in to examine a local region. The darker heatmap regions indicate higher nuclei density, that is, more cells per unit of area.

xlim([43000 47000])
ylim([100900 104000])
Display Histogram of Nucleus Area

Load the nuclei statistics into memory.

stats = gather(bstats);
Remove empty blocks that are part of the tissue mask, but do not contain cell nuclei.

emptyInd = arrayfun(@(x)isempty(x.Area),stats);
stats = stats(~emptyInd);
Concatenate the results from all blocks into vectors that contain the area, centroid coordinates, and bounding box for each nucleus.

areaVec = horzcat(stats.Area);
centroidsXY = vertcat(stats.CentroidXY);
boundingBoxedXYWH = vertcat(stats.BoundingBoxXYWH);
Create a histogram of nucleus area values, in square pixels.

figure
histogram(areaVec);
title("Histogram of Nucleus Area, " + "Mean: " + mean(areaVec) + " square pixels")
View Centroids and Bounding Boxes

Verify that nuclei within the overlapping border regions have been counted only one time. First, display a subregion of the WSI at the intersection of multiple blocks.

hb = bigimageshow(bim);
xlim([44970 45140])
ylim([98210 98380])
hold on
Display the grid lines between blocks.

hb.GridVisible = "on";
hb.GridAlpha = 0.5;
hb.GridColor = "g";
hb.GridLineWidth = 5;
Plot a marker at the centroid of each nucleus detected by the model. Make the size of each marker proportional to the area of the corresponding nucleus. The display shows that the Cellpose model successfully detects nuclei near block boundaries and does not double count them. If you observe double detections for your own image data, try increasing the value of the borderSize argument when applying the model using the apply function.

scatter(centroidsXY(:,1),centroidsXY(:,2),areaVec/10,MarkerFaceColor="w")
Add bounding boxes to validate that each Cellpose label correctly spans the full, corresponding nucleus.

xs = xlim;
ys = ylim;
inRegion = centroidsXY(:,1) > xs(1) ...
    & centroidsXY(:,1) < xs(2) ...
    & centroidsXY(:,2) > ys(1) ...
    & centroidsXY(:,2) < ys(2);
bboxesInRegion = boundingBoxedXYWH(inRegion,:);
for ind = 1:size(bboxesInRegion,1)
    drawrectangle(Position=bboxesInRegion(ind,:));
end
Helper Functions
The labelNuclei function performs these operations to segment nuclei within a block of blocked image data:

• Converts the input image to grayscale using im2gray, normalizes the intensities to the range [gmin gmax] using the rescale function, and inverts the image using imcomplement.
• Segments cells using the segmentCells2D function with the cellpose object cp. The helper function disables automatic intensity normalization to avoid applying different normalization limits across blocks.
• Computes statistics for the nuclei segmentation masks, including area, centroid, and bounding boxes, by using the regionprops function.
• Removes data for nuclei in the border region by using the clearBorderData helper function, which is defined in this section.

function [stats,count] = labelNuclei(bs,cp,gmin,gmax,ndia)
    % Convert to grayscale
    img = im2gray(bs.Data);
    % Normalize
    img = rescale(img,InputMin=gmin,InputMax=gmax);
    % Invert
    imgc = imcomplement(img);
    % Detect
    labels = segmentCells2D(cp,imgc,ImageCellDiameter=ndia,NormalizeInput=false);
    % Compute statistics. Add additional properties if required.
    % If you require pixel-level masks for each cell, add "Image" to the
    % vector in the second input argument. Note that this increases the
    % output directory size.
    rstats = regionprops(labels,["Centroid","Area","BoundingBox"]);
    % Only retain stats and labels whose centroids are within this block
    rstats = clearBorderData(rstats,bs);
    % Flatten out the data into a scalar structure
    stats.Area = [rstats.Area];
    stats.CentroidXY = vertcat(rstats.Centroid);
    stats.BoundingBoxXYWH = vertcat(rstats.BoundingBox);
    if ~isempty(rstats)
        % Move to image coordinates (Start is in YX format, so flip it)
        offset = fliplr(bs.Start(1:2) - 1);
        stats.CentroidXY = stats.CentroidXY + offset;
        stats.BoundingBoxXYWH(:,1:2) = stats.BoundingBoxXYWH(:,1:2) + offset;
    end
    % Density count
    count = numel(rstats);
end
The clearBorderData helper function removes statistics for nuclei in the overlapping border between blocks to prevent double-counting. The final two lines of this function, which select the in-block statistics and close the function, have been reconstructed here because the source text is truncated.

function rstats = clearBorderData(rstats,bs)
    % Clear labels whose centroids are in the border area
    fullBlockSize = size(bs.Data); % includes border pixels
    % Go from (x,y) coordinates to (row,column) indices within the block
    lowEdgeXY = fliplr(bs.BorderSize(1:2)+1);
    highEdgeXY = fliplr(fullBlockSize(1:2)-bs.BorderSize(1:2));
    % Find indices of centroids inside the current block (excludes the
    % border region)
    centroidsXY = reshape([rstats.Centroid],2,[]);
    centroidsXY = round(centroidsXY);
    inBlock = centroidsXY(1,:) >= lowEdgeXY(1) ...
        & centroidsXY(1,:) <= highEdgeXY(1) ...
        & centroidsXY(2,:) >= lowEdgeXY(2) ...
        & centroidsXY(2,:) <= highEdgeXY(2);
    rstats = rstats(inBlock);
end