Demystifying Medical Image Processing Concepts for Design, Implementation and Management with Real Time Case Studies



Table of contents :
Contents
Preface
Chapter 1
Introduction to Medical Imaging Modalities
1.1. Introduction
1.2. X-RAY Imaging
1.3. Computed Tomography Imaging
1.4. Magnetic Resonance Imaging
1.5. Ultrasound Imaging
Conclusion
Chapter 2
Basics of Medical Image Processing Using MATLAB
2.1. Introduction
2.2. Source of Medical Data
2.3. Usage of Imread and Imshow Function
2.4. Usage of Subplot Function
2.5. Usage of Imtool Function
2.6. Usage of Imresize Function
2.7. Usage of Flipud and Fliplr Function
2.8. Usage of Imhist Function
2.9. Usage of Imadd Function
2.10. Program to Add Two Images
2.11. Use of Imsubtract Function
2.12. Program to Subtract Two Images
2.13. Use of Immultiply Function
2.14. Program to Multiply Two Images
2.15. Use of Imdivide Function
2.16. Program to Divide Images
2.17. Extraction of Red, Green and Blue Component from a Color Image
2.18. Use of Im2bw Function
2.19. Use of Imresize and Imrotate Functions
2.20. Display the Bit Planes of a Grayscale Image
Conclusion
Chapter 3
Spatial Domain Filtering and Enhancement of Medical Images
3.1. Introduction
3.2. MATLAB Code for Average Filter
3.3. MATLAB Code for High Pass Filter
3.4. MATLAB Code for Weighted Average Filter
3.5. MATLAB Code for Median Filter
3.6. MATLAB Code for Image Intensity Slicing
3.7. MATLAB Code for Gaussian Filter
3.8. MATLAB Code for Nonlinear Filters
3.9. MATLAB Code for Wiener Filter
Conclusion
Chapter 4
Frequency Domain Filtering of Medical Images
4.1. Introduction
4.2. MATLAB Code for Fast Fourier Transform of an Image
4.3. MATLAB Code for Gaussian Low Pass Frequency Domain Filter
4.4. MATLAB Code for Butterworth Low Pass Frequency Domain Filter
4.5. MATLAB Code for Gaussian High Pass Frequency Domain Filter
4.6. MATLAB Code for Butterworth High Pass Frequency Domain Filter
4.7. MATLAB Code for Gaussian High Boost Frequency Domain Filter
4.8. MATLAB Code for Butterworth High Boost Frequency Domain Filter
Conclusion
Chapter 5
Medical Image Segmentation
5.1. Introduction
5.2. MATLAB Code for Adaptive Thresholding
5.3. MATLAB Code for Region Growing
5.4. MATLAB Code for Edge Detection
5.5. MATLAB Code for Watershed Segmentation
5.6. MATLAB Code for K-Means Clustering
5.7. MATLAB Code for Gauss Gradient Approach for Edge Detection
Conclusion
Chapter 6
Wavelet Transforms in Medical Image Processing
6.1. Introduction
6.2. Role of Wavelet Transforms in Medical Image Processing
6.3. MATLAB Code for Wavelet Decomposition of Medical Image
6.4. MATLAB Code for Watermarking of Medical Image
6.5. MATLAB Code for Image Compression Using Haar Wavelet Transform
Conclusion
Chapter 7
Medical Image Compression
7.1. Introduction
7.2. Block Truncation Coding
7.3. Wavelet Transform Based Compression
7.4. Medical Image Compression Using Principal Component Analysis
7.5. Image Compression Using Singular Value Decomposition
Conclusion
Chapter 8
Evaluation of Preprocessing, Segmentation, and Compression Algorithms
8.1. Introduction
8.2. Validation of Preprocessing Algorithms
8.3. Validation of Segmentation Algorithms with Ground Truth Images
8.4. Validation of Clustering Segmentation Algorithms
8.5. Validation of Compression Algorithms
Conclusion
Chapter 9
Medical Image Encryption and Fusion
9.1. Introduction
9.2. Telemedicine
9.3. Medical Image Fusion
9.4. MATLAB Code for Medical Image Fusion Using Averaging Rule
9.5. MATLAB Code for Medical Image Fusion Using Wavelet Transform
9.6. MATLAB Code for Classical Medical Image Encryption
9.7. MATLAB Code for Medical Image Encryption Using BitXor Operator
Conclusion
Chapter 10
Case Studies in Medical Image Processing
10.1. Introduction
10.2. Retinal Blood Vessel Detection Using Kirsch Algorithm
10.3. Medical Image Edge Detection Using Homogeneity Operation
10.4. MATLAB Code for Homogeneous Mask Area Filter
Conclusion
References
Index
About the Authors


Biomedical Devices and Their Application

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

Biomedical Devices and Their Application Demystifying Medical Image Processing Concepts for Design, Implementation and Management with Real Time Case Studies S. N. Kumar, PhD and S. Suresh, PhD 2023. ISBN: 979-8-88697-737-0 (Softcover) 2023. ISBN: 979-8-88697-796-7 (eBook)

Mechanical Ventilators for Non-Invasive Ventilation: Principles of Technology and Science Antonio M. Esquinas, MD, PhD (Editor) 2020. ISBN: 978-1-53617-435-9 (Hardcover) 2020. ISBN: 978-1-53617-436-6 (eBook)

Nanoparticles and their Conjugates for Biomedical Applications: An Advanced Material for Diagnosis and Therapeutic Treatment Aneeya Kumar Samantara and Satyajit Ratha, PhD (Editors) 2019. ISBN: 978-1-53616-596-8 (Hardcover) 2019. ISBN: 978-1-53616-597-5 (eBook)

More information about this series can be found at https://novapublishers.com/product-category/series/biomedical-devices-and-their-applications/

S. N. Kumar, PhD and S. Suresh, PhD

Demystifying Medical Image Processing Concepts for Design, Implementation and Management with Real Time Case Studies

Copyright © 2023 by Nova Science Publishers, Inc. DOI: https://doi.org/10.52305/CRFG5486

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Please visit copyright.com and search by Title, ISBN, or ISSN. For further questions about using the service on copyright.com, please contact:

Phone: +1-(978) 750-8400

Copyright Clearance Center Fax: +1-(978) 750-4470

E-mail: [email protected]

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regards to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Library of Congress Cataloging-in-Publication Data Names: Kumar, S. N., 1985- author. | Suresh, S. (Professor of computer science), author. Title: Demystifying medical image processing concepts for design, implementation and management with real time case studies / S. N. Kumar, S. Suresh. Description: New York : Nova Science Publishers, [2023] | Series: Biomedical devices and their applications | Includes bibliographical references and index. | Identifiers: LCCN 2023016481 (print) | LCCN 2023016482 (ebook) | ISBN 9798886977370 (paperback) | ISBN 9798886977967 (adobe pdf) Subjects: LCSH: Diagnostic imaging--Data processing. | Diagnostic imaging--Data processing--Case studies. | MATLAB. Classification: LCC RC78.7.D53 K855 2023 (print) | LCC RC78.7.D53 (ebook) | DDC 616.07/540285--dc23/eng/20230505 LC record available at https://lccn.loc.gov/2023016481 LC ebook record available at https://lccn.loc.gov/2023016482

Published by Nova Science Publishers, Inc. † New York

Contents

Preface
Chapter 1. Introduction to Medical Imaging Modalities
Chapter 2. Basics of Medical Image Processing Using MATLAB
Chapter 3. Spatial Domain Filtering and Enhancement of Medical Images
Chapter 4. Frequency Domain Filtering of Medical Images
Chapter 5. Medical Image Segmentation
Chapter 6. Wavelet Transforms in Medical Image Processing
Chapter 7. Medical Image Compression
Chapter 8. Evaluation of Preprocessing, Segmentation, and Compression Algorithms
Chapter 9. Medical Image Encryption and Fusion
Chapter 10. Case Studies in Medical Image Processing
References
Index
About the Authors

Preface

Digital image processing deals with the use of computer-aided algorithms for extracting useful information from an image. Biosignals and medical images play a pivotal role in the health care sector for disease diagnosis and therapeutic planning, and simulation is an indispensable tool in science and technology. This book focuses on the processing of medical images using computer-aided algorithms developed in MATLAB. MATLAB is user-friendly software that has gained prominence across science and engineering disciplines. MATLAB codes are included in each chapter, which gives beginners doing research in medical image processing considerable insight. Simple, step-by-step practical illustrations combined with conceptual ideas make the book well suited to beginners and intermediate readers compared with other books currently on the market. This book will also be beneficial for undergraduate and postgraduate students doing their academic projects and research work in image processing.

The book comprises 10 chapters. Chapter 1 gives an outline of medical imaging techniques used in the health care sector. The working principles and features of X-ray imaging, Computed Tomography imaging, Magnetic Resonance Imaging, and Ultrasound imaging are highlighted in this chapter. Chapter 2 gives an overview of basic MATLAB codes for processing an image. The basic MATLAB functions to process images are explained in this chapter. Chapter 3 emphasizes the spatial domain filtering and enhancement of images. The MATLAB codes of widely used linear and nonlinear filters are highlighted in this chapter. Chapter 4 focuses on the frequency domain filtering and enhancement of images. The MATLAB codes of different types of Gaussian and Butterworth filters are explored in this chapter. Chapter 5 describes the segmentation algorithms with their features. The MATLAB codes of widely used segmentation algorithms like thresholding, region growing, edge detection, and watershed are emphasized in this chapter.


Chapter 6 highlights the role of the wavelet transform in image processing. The MATLAB codes for wavelet decomposition of an image, watermarking of an image, and compression are highlighted in this chapter. Chapter 7 points out the role of image compression in the medical field. The MATLAB codes for some of the widely used compression techniques are also narrated in this chapter. Chapter 8 explores the importance of performance metrics for the validation of preprocessing, segmentation, and compression algorithms. The MATLAB codes for the widely used metrics are also discussed in this chapter. Chapter 9 describes the importance of medical image encryption and fusion. The MATLAB codes for some of the typical fusion and encryption algorithms are given in this chapter. Chapter 10 emphasizes case studies in medical image processing. The MATLAB codes for some of the typical case studies in medical image processing are illustrated in this chapter. The outcome of this book paves the way for researchers and students working in medical image processing.

Dr. S. N. Kumar Dr. S. Suresh

Chapter 1

Introduction to Medical Imaging Modalities

1.1. Introduction

Medical imaging modalities are utilized to obtain pictures of anatomical organs for disease diagnosis and treatment planning [1, 2]. X-ray imaging is the classical medical imaging modality, and Computed Tomography (CT) also utilizes the principle of the X-ray machine; however, its efficiency is higher than that of plain X-ray imaging, and CT generates 360-degree images. Magnetic resonance imaging (MRI) utilizes powerful magnets and radio waves. Positron emission tomography (PET) helps to analyze the activity of anatomical organs along with the structural information.

1.2. X-Ray Imaging

X-rays are the most widely used and widely available diagnostic imaging test [1, 2]. X-rays are electromagnetic radiation, just like visible light; they are, however, more energetic than light and can penetrate most objects, including the human body. Internal organs and tissues are visualized via medical X-rays. X-rays are electromagnetic waves with wavelengths ranging from 0.01 to 10 nanometers. X-ray imaging can be used to diagnose conditions such as bone deterioration, dislocations, fractures, tumours, and infections. X-ray radiation is blocked by bone and other dense substances as it passes through the body, making those regions of the X-ray film appear white. Tissues that are less dense are more difficult to discern and appear greyish. Radiation from X-rays is not dangerous in small doses. Soft and hard X-rays are the two primary categories of X-rays [3, 4, 5].

• Soft X-rays have wavelengths of about 10 nanometers, which is quite short (a nanometer is one-billionth of a meter). As a result, they fall between gamma rays and ultraviolet (UV) light in the electromagnetic (EM) spectrum.
• Hard X-rays have wavelengths of around 100 picometers (a picometer equals one-trillionth of a metre). On the electromagnetic spectrum, they occupy the same region as gamma rays.

The common type of X-ray detector is photographic film, but several other detectors are used to make digital images. The X-ray images produced by this technique are known as radiographs. X-ray photons interact with tissue: part of the beam is attenuated by the tissue and the remainder passes through to expose the radiographic film. X-rays are absorbed more by thicker tissue/objects than by thinner tissue of comparable composition. The more tissue absorption occurs, the fewer X-ray photons reach the film, resulting in a whiter image. The radiograph will show a range of densities, from white through various shades of grey to black. Tissues and objects that are radiolucent appear blacker, whereas those that are radiopaque appear whiter. The ensuing pattern of opacities generates a recognizable and interpretable image on the radiograph. For a radiograph, a patient is positioned so that the area of the body being scanned is directly between an X-ray emitter and an X-ray detector. When the equipment is turned on, X-rays travel through the body and are absorbed in different amounts by different tissues based on their radiological density. Radiological density is determined by the density and atomic number (the number of protons in an atom's nucleus) of the materials being scanned. Bones absorb X-rays readily and therefore create a lot of contrast on X-rays. As a result, bone structures look whiter on a radiograph than other tissues against a dark backdrop. X-rays, on the other hand, penetrate more easily through less dense tissues such as fat and muscle, as well as air-filled cavities such as the lungs. These patterns appear in shades of grey on radiographs. The X-ray imaging flow diagram is depicted in Figure 1.1.

Figure 1.1. X-ray imaging flow diagram.
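
A minimal MATLAB sketch of the attenuation behaviour described above is given below. It applies the standard Beer-Lambert relation I = I0*exp(-mu*x); the attenuation coefficients are rough illustrative assumptions, not calibrated clinical values, and are chosen only to show the qualitative contrast trend between bone, soft tissue and air.

% Illustrative sketch: relative X-ray transmission through different materials
% using the Beer-Lambert law I = I0 * exp(-mu * x).
% The attenuation coefficients below are assumed illustrative values.
I0 = 1;                            % incident beam intensity (normalized)
mu = [0.5, 0.2, 0.02];             % assumed coefficients (1/cm): bone, soft tissue, air
labels = {'bone', 'soft tissue', 'air'};
x = 0:0.1:10;                      % thickness in cm
figure; hold on;
for k = 1:numel(mu)
    I = I0 * exp(-mu(k) * x);      % transmitted fraction for this material
    plot(x, I, 'DisplayName', labels{k});
end
xlabel('Thickness (cm)');
ylabel('Relative transmitted intensity');
title('Beer-Lambert attenuation (illustrative values)');
legend show;
hold off;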


The diagnostic potential of X-ray examinations much exceeds the hazards when they are used properly. X-rays, on the other hand, produce ionizing radiation, which can kill live tissue. This is a concern that increases with the number of exposures a person has amassed during their lifetime. Radiation, on the other hand, has a low cancer risk. An X-ray in a pregnant woman poses no harm to the fetus if the part of the body being examined is not the abdomen or pelvis. For imaging the abdomen and pelvis, doctors use non-radiation exams like MRI or ultrasound. An X-ray may be a feasible alternative imaging option if neither of these modalities can deliver the information required, or if an emergency or other time limitation develops. Children are more susceptible to ionizing radiation and have a longer life expectancy than adults, putting them at a higher cancer risk. The goal of ongoing X-ray technology development is to reduce dose of radiation, improve resolution of the image, and improve contrast materials and techniques. The computer aided algorithms are utilized in [6] for the detection of COVID 19 from chest X-ray images.

1.3. Computed Tomography Imaging

A Computed Tomography (CT) scan of the body gives detailed, high-quality images. It is a more advanced form of X-ray imaging that shows the spine, vertebrae, and internal organs in 360-degree views [3, 4, 5]. A contrast dye may be injected into the bloodstream to help the doctor view the body structures more clearly on the CT scan. A CT scan provides doctors with precise, high-quality pictures of bones, blood vessels, soft tissue, and organs, which can help them diagnose conditions including appendicitis, cancer, trauma, heart disease, musculoskeletal problems, infectious diseases, and more. A CT scanner resembles a large box with a tunnel running through it. A concentrated beam of X-rays is directed at the patient and quickly spun around the body, creating signals that the machine's computer analyses to generate cross-sectional pictures, or slices, of the body. These cross-sectional pictures are called tomographic images, and they contain more information than conventional X-rays. After the machine acquires a number of slices, they may be digitally "layered" together to create a three-dimensional picture of the patient, making it simpler to identify and locate structural elements as well as suspected tumors or abnormalities. The computed tomography imaging flow diagram is depicted in Figure 1.2.


Figure 1.2. Computed Tomography imaging flow diagram.
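
To make the reconstruction idea concrete, the following toy sketch simulates projections of a synthetic phantom with the Image Processing Toolbox function radon and reconstructs a slice by filtered back projection with iradon. Clinical scanners use more elaborate reconstruction algorithms, so this is only an illustration of the principle.

% Toy illustration of tomographic reconstruction from projections.
% Requires the Image Processing Toolbox (phantom, radon, iradon).
P = phantom('Modified Shepp-Logan', 256);    % synthetic head phantom
theta = 0:1:179;                             % projection angles in degrees
R = radon(P, theta);                         % sinogram: one column per angle
I = iradon(R, theta, 'linear', 'Ram-Lak');   % filtered back projection
figure;
subplot(1,3,1), imshow(P, []), title('Phantom');
subplot(1,3,2), imshow(R, []), title('Sinogram (projections)');
subplot(1,3,3), imshow(I, []), title('Reconstructed slice');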

Unlike a standard X-ray unit, a CT scanner employs a motorized X-ray source that spins around the circular opening of a gantry-shaped frame. An X-ray tube rotates around the patient, generating narrow X-ray beams through the body, while the patient lies on a bed that travels slowly through the gantry. Instead of film, CT scanners employ digital X-ray detectors situated directly opposite the X-ray source. As the X-rays leave the patient, the detectors pick them up and send the signals to the computer. The CT computer uses mathematical reconstruction algorithms to generate a 2D image slice of the patient every time the X-ray source completes one full revolution. The thickness of the tissue shown in each imaging slice varies depending on the CT equipment used, although it normally ranges from 1 to 10 mm. Once a full slice has been generated, the picture is stored and the motorized bed is slowly moved forward into the gantry. Repeating the X-ray scanning method yields a new picture slice, and this step is continued until all of the needed slices have been acquired. The computer may display the bones, organs, and tissues, as well as any abnormalities the clinician is searching for, separately or stacked together to make a 3D image of the patient. This technology offers various benefits, including the ability to rotate the 3D image in space or display slices in sequence, making it easier to locate the exact position of a problem. CT scans can detect potentially fatal illnesses like hemorrhage, blood clots, and malignancy, and a timely diagnosis of these illnesses could save a person's life. CT scans, however, use X-rays and therefore emit ionizing radiation, which in living tissue has the ability to produce biological consequences. Certain persons may get allergic responses or, in rare situations, temporary renal failure as a result of contrast agents. Contrast agents should not be given to patients with impaired renal function, since they can lead to severe kidney damage, which is potentially fatal. Positron Emission Tomography (PET) and CT urography are two variants that are useful to physicians for accurate disease diagnosis [3, 4, 5].

• A PET scan allows the doctor to see the level of activity as well as the structure of specific organs and tissues in the body. The patient is given a "tracer" before the test, which comprises glucose and a small quantity of radioactive material. This tracer passes through the body and functions as a dye that the imaging scan detects. More tracer is taken up where there is significant chemical activity, resulting in bright spots on the image and alerting the doctor to a possible ailment. For most people, the tracer's radiation dose is safe and modest. Depending on the body part being investigated, the tracer is swallowed, inhaled, or injected. PET scans are frequently used by doctors to detect health complications, malignancies, and brain disorders.
• CT urography is a specialist radiographic test that evaluates the urinary tract, including the ureters, kidneys, and bladder. It is a cutting-edge technique that employs computed tomography to create cross-sectional scans of the entire body. Internal organ scans are extremely detailed, allowing doctors to make the most accurate treatment recommendations possible. This test is most typically used to detect kidney stones and check for blood in the urine.

The fundamental idea behind sparse CT is to retain all of the important imaging data while preventing the vast majority of X-rays from reaching the patient during a CT scan. Combining a novel X-ray blocking device with compressed sensing mathematics, the system reconstructs pictures from fewer measurements. Compressed sensing is analogous to capturing video with a high-speed, low-pixel camera and then applying arithmetic operations to transform the picture to high-definition quality. Chest X-ray radiographs and CT scans are utilized for the detection of COVID-19 in addition to RT-PCR test results [7].


1.4. Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) is a non-invasive screening method that creates three-dimensional images of the complete body [3, 4, 5]. It is commonly used to diagnose, track, and characterize diseases. It works by exciting the protons in the water of biological tissue and detecting the changes in their rotational axis using cutting-edge technology. The MRI flow diagram is depicted in Figure 1.3.

Figure 1.3. MRI flow diagram.

A strong magnetic field is created by powerful magnets in an MRI scanner, which causes the protons in the body to align with it. When a radiofrequency current is pulsed through the body, straining against the magnetic field, the protons are excited and spin out of equilibrium. After the radiofrequency field is turned off, the energy released as the protons realign with the magnetic field is detected by the MRI sensors. The quantity of energy released and the time it takes for the protons to realign with the magnetic field differ depending on the environment and the chemical makeup of the molecules. On the basis of these magnetic properties, doctors can discriminate between various types of tissue. In order to avoid blurring the image during an MRI scan, the patient is placed within a huge magnet and must remain still during the procedure. To speed up the rate at which protons realign with the magnetic field, contrast agents (typically containing the element gadolinium) may be given intravenously to a patient before or during the MRI. The faster the protons realign, the brighter the image. The following are types of MRI scans:

1. High-Field Open MRI Scans (1.5T): An open MRI with a high field (1.5T) produces excellent image quality.
2. Open MRI: This has to do with the setup of the equipment. Because it is open on three sides, the open MRI creates beautiful images and provides a relatively wide experience. It features a contemporary design that enables rapid and comfortable examinations, which is especially beneficial for persons who are claustrophobic.
3. High-Field (1.5T): The visual quality is described as "high-field" (1.5T). In some imaging applications, the 1.5T offers a wider range of coil possibilities than the 3T, allowing for higher picture quality.
4. High-Field MRI Short Bore Scans (1.5T): The entrance of the MRI imaging machine is referred to as the bore. Short bore MRI scanners are 50% shorter and 5% broader than traditional MRI scanners. The proportions provide the patient with a light and airy imaging experience.
5. MRI Open Bore Scans - High-Field (1.5T): An open bore MRI has a much bigger opening, making the scan much more comfortable. A regular bore's entrance is only slightly larger than the patient, resulting in an uncomfortable and constrained situation.
6. MRI Scans - High-Field (1.5T): The most cutting-edge imaging equipment is installed in this high-field (1.5T) MRI scanner. Clinicians benefit from scanning all body areas, and it is considered standard practice.
7. 3T MRI Scans: The 3 Tesla MRI scan, often known as a 3T MRI scan, is a fast and powerful imaging scan that may replace a 1.5T conventional scan. 3T scanners were originally found only in medical research institutions, but they are now frequently employed in clinical settings. Strong, powerful magnets are used in 3T scans, resulting in a magnetic field that is substantially stronger than in 1.5T scans. This enables the MRI to produce more detailed images in a shorter amount of time.
8. MRI Spectroscopy: The biochemistry of infarcts, tumors, and other diseases can be determined by MRI spectroscopy, which is a non-invasive approach. It is frequently utilized to identify certain metabolic issues, such as brain illnesses. It aids clinicians in determining tumor characteristics such as metabolism and toughness.
9. MRCP Scans: The MRCP scan examines the pancreatic and hepatobiliary systems using magnetic resonance cholangiopancreatography.
10. Open MRI Scans - High-Field Performance: Magnetic resonance imaging scanners with increased capabilities are known as open MRI scanners. They come with a variety of advantages and characteristics. They offer 360-degree panoramic views from all four sides. They also provide a relaxing and spacious environment.

MRI, unlike X-rays and CT scans, does not use ionizing radiation, although it does require a large magnetic field. The device's magnetic field extends beyond the scanner, exerting massive forces on iron, steel, and other magnetizable materials; it is powerful enough to drive a wheelchair across the room. Prior to an MR scan, patients should inform their doctors if they have any type of medical condition or implant. The following factors should be considered before getting an MRI scan:



• Patients with certain implants should not be subjected to an MRI scan.
• Some MR scanners may require hearing protection, since sound levels up to 120 decibels are generated.
• MRI scans should be avoided during pregnancy, particularly in the first trimester when the baby's organs are still developing and contrast agents, if used, might enter the fetal bloodstream.
• Long scan periods within the machine may be problematic for people who have even moderate claustrophobia. Patients should be given techniques to cope with the discomfort, such as visualization techniques, sedatives, and anesthesia, as well as becoming familiar with the equipment and process. Other coping methods include listening to music or viewing a film, covering or shielding one's eyes, and activating a panic button. Because the open MRI machine has open sides instead of a tube with one end closed, the patient is not completely enclosed. It was created to meet the needs of patients who are uncomfortable with the limited space and loud noises of regular MRI scanners, as well as those whose size or weight makes a regular scanner unfeasible.


Unlike PET or SPECT, traditional MRI is unable to measure metabolic rates. Researchers funded by the National Institutes of Health have identified a means to evaluate a tumor's metabolic rate by injecting particular compounds (hyperpolarized carbon-13) into prostate cancer patients. This information can assist physicians in rapidly and precisely determining the tumor's aggressiveness. The variants of MRI systems, such as diffusion, perfusion and functional MRI, are highlighted in [8].
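
The tissue contrast mechanism described above can be sketched numerically with the commonly used spin-echo signal model S = M0*(1 - exp(-TR/T1))*exp(-TE/T2); the repetition time, echo time and relaxation values below are rough illustrative assumptions rather than measured data.

% Illustrative spin-echo signal model: how T1/T2 differences become contrast.
TR = 500;                  % repetition time (ms), assumed
TE = 20;                   % echo time (ms), assumed
tissues = {'white matter', 'gray matter', 'CSF'};
T1 = [ 800, 1100, 3500];   % illustrative T1 values (ms)
T2 = [  80,  100, 1800];   % illustrative T2 values (ms)
S = (1 - exp(-TR ./ T1)) .* exp(-TE ./ T2);   % relative signal per tissue
for k = 1:numel(tissues)
    fprintf('%-12s relative signal: %.3f\n', tissues{k}, S(k));
end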

1.5. Ultrasound Imaging

Medical ultrasonography is divided into two categories: diagnostic and therapeutic [5]. Ultrasound is a non-invasive diagnostic procedure that uses sound waves to create images of the inside of the human body. Transducers, also known as ultrasound probes, produce sound waves at frequencies above the human hearing threshold (above 20 kHz); most transducers in use today operate at much higher frequencies, in the megahertz (MHz) range. The vast majority of diagnostic ultrasonography probes are applied to the skin. Probes can also be inserted into the body through the gastrointestinal tract, vaginal canal, or blood vessels to improve picture quality. Ultrasonography is also sometimes utilized during surgery, with a sterile probe inserted into the operative site. Anatomical and functional ultrasonography are two subtypes of diagnostic ultrasound. Internal organs and other structures are imaged with anatomical ultrasonography. To construct "information maps," functional ultrasonography integrates information such as tissue or blood movement and velocity, tissue softness or hardness, and other physical attributes with anatomical images. These maps can help clinicians visualise changes in function inside a structure or organ. Therapeutic ultrasonography, like diagnostic ultrasound, employs sound waves above the range of human hearing but does not generate images. Its goal is to modify or destroy biological tissues by interacting with them. Very high-intensity focused ultrasound beams that can destroy damaged or irregular tissues such as tumours enable these destructive, or ablative, effects. The ultrasonic imaging setup is depicted in Figure 1.4. In most circumstances, ultrasound therapy has the advantage of being non-invasive: there are no wounds or scars because there are no incisions or cuts in the skin. A transducer, which can both emit ultrasonic waves and detect the echoes reflected back, generates the ultrasound.


Figure 1.4. Ultrasonic imaging.

In most ultrasound transducers, piezoelectric materials, which are special ceramic crystals, are used as the active elements. These materials produce sound waves when subjected to an electric field and, conversely, produce an electric field when a sound wave hits them. When used in an ultrasonic scanner, the transducer emits a beam of sound waves into the body. Sound waves are reflected back to the transducer by tissue boundaries in the path of the beam. When the echoes reach the transducer, they generate electrical impulses that are sent to the ultrasonic scanner. Using the speed of sound and the time it takes for each echo to return, the scanner calculates the distance between the transducer and the tissue boundary. These distances are then utilised to construct two-dimensional images of tissues and organs. During an ultrasonic examination, a gel is applied to the skin. This avoids the formation of air pockets between the transducer and the skin, which would block the transmission of ultrasonic waves into the body. Because ultrasonography does not emit ionising radiation like X-rays, it is widely regarded as safe. Nonetheless, ultrasonography has the ability to cause biological effects in the body in certain contexts and conditions. As a result, the FDA mandates that diagnostic ultrasound equipment operate within certain parameters. The FDA, as well as several professional organizations, discourages the use of ultrasound for non-medical purposes (such as making memento recordings) and recommends that it be used only when necessary. The ability of a high-intensity ultrasound technique termed histotripsy to disintegrate clots for the non-invasive treatment of deep-vein thrombosis (DVT) is being tested by researchers at the University of Michigan. Hybrid imaging is the technique of fusing more than one medical imaging modality [9]. The features of hybrid imaging comprise better localization and accurate disease diagnosis for treatment planning [10].
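
The echo-ranging calculation mentioned above reduces to d = c*t/2, where c is the speed of sound and t is the round-trip echo time. A minimal sketch is given below, assuming the commonly quoted average speed of sound in soft tissue of about 1540 m/s and an arbitrary illustrative echo time.

% Echo ranging: depth of a reflector from the round-trip echo time.
c = 1540;                  % assumed average speed of sound in soft tissue (m/s)
t_echo = 65e-6;            % illustrative round-trip echo time (s)
depth = c * t_echo / 2;    % divide by 2: the pulse travels out and back
fprintf('Estimated reflector depth: %.1f mm\n', depth * 1000);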


Conclusion

This chapter focused on the medical imaging modalities used for health care applications. The principles of operation of X-ray, Computed Tomography, Magnetic Resonance Imaging and Ultrasound imaging were discussed, along with the applications of these medical imaging modalities. Medical image fusion is gaining prominence in the current scenario for precise disease diagnosis and surgical preplanning.

Chapter 2

Basics of Medical Image Processing Using MATLAB

2.1. Introduction

Medical image processing refers to the usage of computer-aided algorithms for the detection of anomalies in medical images. The anomalies may be a tumor, a cyst or the malfunctioning of organs. This chapter focuses on the basic MATLAB codes utilized for the processing of medical images. The MATLAB software has many built-in functions, and the widely used functions are discussed in this chapter.

2.2. Source of Medical Data

In the real-time scenario, medical images from hospitals and scan centers are used for analysis. Here, medical images from public databases are used [11, 12, 13, 14]. The brain tumor progression data set comprises MR images of 20 cases [11, 12]; sample images from this data set are used for the analysis of algorithms. The Cancer Genome Atlas Liver Hepatocellular Carcinoma (TCGA-LIHC) data collection focuses on connecting cancer phenotypes to genotypes; sample images from this data set are also used for the analysis of algorithms [13, 14].

2.3. Usage of the Imread and Imshow Functions

The imread function reads an image from a graphics file and returns it as an array A (rows by columns, with a third dimension of size 3 for RGB images). The imshow function displays the image stored in A. Images are stored in various formats such as .png, .jpg or .bmp; medical images are usually represented in the DICOM (.dcm) format. The simulation result is depicted below in Figure 2.1.


Figure 2.1. Output of imshow function.

Syntax
A = imread(filename)
imshow(A)

Program
% Program to read and display an image
clc;
close all;
A = imread('liver1.png');
figure, imshow(A);
title('Input Liver Image');
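
Since medical images are commonly stored in the DICOM format noted above, a brief sketch using the Image Processing Toolbox functions dicominfo and dicomread is given below; the file name ct_slice.dcm is a placeholder and not a file supplied with this book.

% Sketch: reading and displaying a DICOM slice (file name is a placeholder).
info = dicominfo('ct_slice.dcm');    % header metadata (modality, spacing, ...)
X = dicomread(info);                 % pixel data, often int16 for CT
figure, imshow(X, []);               % [] scales the display to the data range
title(['Modality: ' info.Modality]);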

2.4. Usage of the Subplot Function

The subplot function divides the figure window into an m-by-n grid and places the current axes at position p within that grid. The simulation result is depicted in Figure 2.2.

Syntax
subplot(m,n,p)

Program
% Program to display two images side by side
clc;
close all;
A = imread('liver1.png');
B = imread('brain1.png');
subplot(1,2,1), imshow(A); title('Liver Image');
subplot(1,2,2), imshow(B); title('Brain Image');

The results are depicted in Figure 2.2.

Figure 2.2. Simulation result depicting the usage of subplot function.

2.5. Usage of the Imtool Function

The imtool function opens the Image Viewer app. The File menu of the viewer can be used to import an image from the workspace and display it. The simulation result is depicted in Figure 2.3.

Syntax
imtool
imtool(I)

Program
clc;
close all;
i = imread('brain.png');
imtool(i);


Figure 2.3. Output of imtool function.

2.6. Usage of the Imresize Function

B = imresize(A,scale) returns image B that is scale times the size of A. The input image A can be a grayscale, RGB, or binary image. If A has more than two dimensions, imresize only resizes the first two dimensions. If scale is in the range [0, 1], B is smaller than A. If scale is greater than 1, B is larger than A. By default, imresize uses bicubic interpolation. The simulation result is depicted in Figure 2.4.

Syntax
B = imresize(A,scale)
B = imresize(A,[numrows numcols])

Program
clc;
close all;
i = imread('brain.png');
i_small = imresize(i, 0.4);
figure, imshow(i); title('Original image');
figure, imshow(i_small); title('Reduced image through bicubic interpolation');


Figure 2.4. (a) Input medical image, (b) resultant output after image resizing with a value of 0.4.

2.7. Usage of the Flipud and Fliplr Functions

The flipud function flips the image array in the up-down direction. The fliplr function flips the image array in the left-right direction. The simulation result is depicted in Figure 2.5.

Syntax
B = flipud(A)
B = fliplr(A)

Program
clc;
clear all;
i = imread('brain1.png');
j = flipud(i);
k = fliplr(i);
subplot(1,3,1), imshow(i), title('original image');
subplot(1,3,2), imshow(j), title('flipped upside-down');
subplot(1,3,3), imshow(k), title('flipped left-right');



Figure 2.5. (a) Input image, (b) flip upside-down function output, (c) flip left-right function output.

2.8. Usage of the Imhist Function

The imhist function displays the histogram of a grayscale image over 256 intensity bins, after the grayscale image is loaded into the workspace. A histogram is a plot of the number of pixels at each gray value in an image. The simulation result is depicted below in Figure 2.6.

Syntax
imhist(I)

% Program to determine histogram of an image
clc;
close all;
i = imread('brain1.png');
figure, imshow(i); title('Brain Input');
figure, imhist(i); title('Histogram of Input Image');


Figure 2.6. (a) Input image, (b) histogram of input image.

2.9. Usage of the Imadd Function

The imadd function adds each element in the array X to the corresponding element in the array Y and returns the sum in the corresponding element of the output array Z. The addition of a numerical value to an image increases the gray values of its pixels. The simulation result is depicted below in Figure 2.7. The imadd function performs a brightness modification such that the overall brightness of the image is increased.


Figure 2.7. (a) Input image, (b) resultant output by adding constant value to an image.

Let f(x,y) be the input image and g(x,y) be the resultant image:

g(x,y) = f(x,y) + K

where K is a constant value.

Syntax
Z = imadd(X, K)

% Program to add constant value to an image
clc;
close all;
I = imread('brain.png');
figure, imshow(I); title('Input Medical Image');
Y = imadd(I, 10);
figure, imshow(Y); title('Adding constant value to an image');

2.10. Program to Add Two Images

Image addition is utilized in image averaging to minimize noise. The addition operation performed on images enhances the output. When multiple images of the same object, acquired at approximately the same time, are available, image averaging is carried out to minimize the noise in the resultant image.


Figure 2.8. (a) Input image 1, (b) input image 2, (c) resultant output by adding two images.

% Program to add two images
clc;
close all;
I = imread('liver.png');
I = imresize(I, [256 256]);
figure, imshow(I);
J = imread('brain.png');
J = imresize(J, [256 256]);
figure, imshow(J);
K = imadd(I, J);
figure, imshow(K);
title('Resultant Image after addition');
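
A minimal sketch of the noise-averaging idea described in Section 2.10 is given below; it simulates several noisy acquisitions of the same brain.png sample image with imnoise (the noise level is an arbitrary illustrative choice) and averages them to suppress the zero-mean noise.

% Sketch: noise reduction by averaging N noisy acquisitions of one image.
I = im2double(imread('brain.png'));
N = 8;                                   % number of simulated acquisitions
acc = zeros(size(I));
for k = 1:N
    acc = acc + imnoise(I, 'gaussian', 0, 0.01);   % independent noisy copy
end
avgImg = acc / N;                        % averaging suppresses zero-mean noise
figure;
subplot(1,2,1), imshow(imnoise(I, 'gaussian', 0, 0.01)), title('Single noisy image');
subplot(1,2,2), imshow(avgImg), title(sprintf('Average of %d noisy images', N));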


2.11. Use of the Imsubtract Function

The imsubtract function subtracts each element in the array Y from the corresponding element in the array X and returns the difference in the corresponding element of the output array Z. The subtraction of a numerical value from an image decreases the gray values of its pixels. The imsubtract function performs a brightness modification such that the overall brightness of the image is decreased.


Figure 2.9. (a) Input image, (b) resultant output by subtracting constant value from an image.

Let f(x,y) be the input image and g(x,y) be the resultant image:

g(x,y) = f(x,y) - K

where K is a constant value.

Syntax
Z = imsubtract(X,Y)

% Program to subtract constant value from an image
clc;
close all;
I = imread('brain.png');
figure, imshow(I); title('Input Medical Image');
Y = imsubtract(I, 50);
figure, imshow(Y); title('Subtraction of constant value from an image');

2.12. Program to Subtract Two Images Image subtraction is normally used for change detection. It is used to detect the changes between two images of same object or scene. Image subtraction is also used to eliminate certain objects in an image. The common example of image subtraction is digital subtraction angiography in medical field. It is also used in industry to detect the missing components in product assembly.


Figure 2.10. (a) Input image 1, (b) input image 2, (c) resultant output by subtraction of two images.

% Program to subtract two images
clc;
close all;
I = imread('liver.png');
I = imresize(I, [256 256]);
figure, imshow(I);
J = imread('brain.png');
J = imresize(J, [256 256]);
figure, imshow(J);
K = imsubtract(I, J);
figure, imshow(K);
title('Resultant Image after subtraction');

2.13. Use of the Immultiply Function

Z = immultiply(X, Y) multiplies each element in array X by the corresponding element in array Y and returns the product in the corresponding element of the output array Z. Multiplying the pixel gray levels by a numerical value increases or decreases them depending on the value chosen: when the gray values are multiplied by 2, the net gray value increases, while it decreases when they are multiplied by 0.2. A contrast adjustment is done here; all the pixels are scaled by a constant value:

g(x,y) = f(x,y) * K

where K is a constant value.

Syntax
Z = immultiply(X, Y)

Image multiplication has its application in masking (a masking sketch is given after Figure 2.11 below).

% Program to multiply image with a constant value
clc;
close all;
I = imread('brain.png');
figure, imshow(I); title('Input Medical Image');
Y = immultiply(I, 2);
figure, imshow(Y); title('Multiply constant value to an image');


Figure 2.11. (a) Input image 1, (b) resultant output by multiplication of image by a constant value.
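
The masking sketch referred to in Section 2.13 is given below; it assumes the same brain.png sample image and borrows graythresh and im2bw (covered later in Section 2.18) to build a simple binary mask, so it is only an illustration of the idea.

% Sketch: masking an image by multiplying it with a binary mask.
I = imread('brain.png');
if size(I,3) == 3
    I = rgb2gray(I);                       % work on a single channel
end
mask = im2bw(I, graythresh(I));            % binary mask from Otsu threshold
masked = immultiply(I, uint8(mask));       % keep pixels where mask == 1
figure;
subplot(1,2,1), imshow(I), title('Input image');
subplot(1,2,2), imshow(masked), title('Masked image');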

2.14. Program to Multiply Two Images

This program multiplies two images; the imresize function is used to resize the images to the same size [256 256]. Pixel-by-pixel multiplication of two natural images is of limited use on its own; in real-time scenarios it is mainly employed for masking an image.

clc;
close all;
I = imread('liver.png');
I = imresize(I, [256 256]);
figure, imshow(I);
J = imread('brain.png');
J = imresize(J, [256 256]);
figure, imshow(J);
K = immultiply(I, J);
figure, imshow(K);
title('Resultant Image after multiplication');


Figure 2.12. (a) Input image 1, (b) input image 2, (c) resultant output by multiplication of two images.

2.15. Use of the Imdivide Function

The imdivide function divides one image by another image, or divides an image by a constant. When the image is divided by a constant value, the net gray value decreases.

Syntax
Z = imdivide(X, Y)

% Program to divide an image by a constant value
clc;
close all;
I = imread('brain.png');
figure, imshow(I); title('Input Medical Image');
Y = imdivide(I, 2);
figure, imshow(Y); title('Divide constant value to an image');


Figure 2.13. (a) Input image, (b) resultant output by dividing constant value from an image.

2.16. Program to Divide Images

Image division can be considered as multiplication of one image by the reciprocal of the other image. Image multiplication and division are used to alter the gray-level shading in an image.

% Program to divide two images
clc;
close all;
I = imread('liver.png');
I = imresize(I, [256 256]);
figure, imshow(I);
J = imread('brain.png');
J = imresize(J, [256 256]);
figure, imshow(J);
K = imdivide(I, J);
figure, imshow(K);
title('Resultant Image after division');


Figure 2.14. (a) Input image 1, (b) input image 2, (c) resultant image after division.

2.17. Extraction of Red, Green and Blue Components from a Color Image

A color image can be represented as a three-dimensional array. The first dimension corresponds to the rows, the second to the columns, and the third specifies the color of the corresponding pixel. In the RGB color format, the third dimension takes three values, for Red, Green and Blue respectively. The number of rows and columns depends on the size of the image. This has application in retinal image processing: the green component is extracted for processing in fundus images, because the green component gives the best contrast for the blood vessels. The retinal images from the STARE database are used here [15, 16].


Figure 2.15. (a) Input retinal image, (b) red component, (c) green component, (d) blue component.


% Extraction of R, G and B components
clc;
close all;
img = imread('13_left.jpeg');
figure, imshow(img);
red = img(:,:,1);
green = img(:,:,2);
blue = img(:,:,3);
subplot(1,3,1), imshow(red);
subplot(1,3,2), imshow(green);
subplot(1,3,3), imshow(blue);
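
A small check supporting the observation that the green channel offers the best vessel contrast is sketched below; it uses the standard deviation of each channel as a crude contrast measure, which is an illustrative choice rather than a formal contrast metric.

% Sketch: crude per-channel contrast comparison for a fundus image.
img = imread('13_left.jpeg');          % same STARE sample image as above
names = {'Red', 'Green', 'Blue'};
for k = 1:3
    ch = double(img(:,:,k));
    fprintf('%-5s channel std (crude contrast measure): %.2f\n', names{k}, std(ch(:)));
end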

2.18. Use of the Im2bw Function

The im2bw function is used to convert an image into a binary image based on a threshold. Thresholding is widely used in medical image processing for ROI extraction.

Syntax
BW = im2bw(I, level)

BW = im2bw(I, level) converts the grayscale image I to the binary image BW by replacing all pixels in the input image with luminance greater than level with the value 1 (white) and replacing all other pixels with the value 0 (black).

% Program to perform binarization of image
clc;
close all;
I = imread('brain.png');
figure, imshow(I); title('Input Medical Image');
level = graythresh(I);
bw = im2bw(I, level);
figure, imshow(bw); title('Binarization Output');


Figure 2.16. (a) Input image, (b) resultant image after binarization.

2.19. Use of the Imresize and Imrotate Functions

The imresize function is used to resize (scale) an image based on a given scaling factor. The imrotate function is used to rotate an image by a given angle in degrees, counterclockwise around its center point.

Syntax
J = imresize(I, scaling_factor)
J = imrotate(I, angle)

% Program that uses imresize and imrotate functions
clc;
close all;
% Scaling (Resize)
I = imread('cameraman.tif');
subplot(2,2,1); imshow(I); title('Original Image');
s = input('Enter Scaling Factor ');
j = imresize(I, s);
subplot(2,2,2); imshow(j); title('Scaled Image');
% Rotation
K = imrotate(j, 60);
subplot(2,2,3); imshow(K); title('Rotated Image 60deg');
R = imrotate(j, 45);
subplot(2,2,4); imshow(R); title('Rotated Image 45deg');

A scaling factor value of 0.5 is given during execution.

Figure 2.17. First row depicts the input and scaled version of input image, second row depicts the resultant image after rotation through an angle 60 degree and 45 degree.

2.20. Display the Bit Planes of a Grayscale Image

Bit plane slicing helps to understand the importance of each bit in an image. If each pixel in an image comprises 8 bits, there are 8 bit planes. The bit plane slicing is depicted below: bit plane 0 carries the least important data and bit plane 7 carries the vital information. Bit 0 is called the least significant bit and bit 7 is called the most significant bit. Bit plane slicing finds its role in image compression, where the image is represented with a smaller number of bits.


Figure 2.18. (a-c) Bit plane 0, 1, and 2 outputs, (d-f) bit plane 3, 4 and 5 outputs, (g-h) bit plane 6 and 7 outputs.


% Program to display bit planes of an image
clc;
clear all;
i = imread('brain.png');
b0 = double(bitget(i,1));
b1 = double(bitget(i,2));
b2 = double(bitget(i,3));
b3 = double(bitget(i,4));
b4 = double(bitget(i,5));
b5 = double(bitget(i,6));
b6 = double(bitget(i,7));
b7 = double(bitget(i,8));
subplot(3,3,1); imshow(i); title('Original Image');
subplot(3,3,2); subimage(b0); title('BIT PLANE 0');
subplot(3,3,3); subimage(b1); title('BIT PLANE 1');
subplot(3,3,4); subimage(b2); title('BIT PLANE 2');
subplot(3,3,5); subimage(b3); title('BIT PLANE 3');
subplot(3,3,6); subimage(b4); title('BIT PLANE 4');
subplot(3,3,7); subimage(b5); title('BIT PLANE 5');
subplot(3,3,8); subimage(b6); title('BIT PLANE 6');
subplot(3,3,9); subimage(b7); title('BIT PLANE 7');
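
To illustrate the compression remark in Section 2.20, the following sketch reconstructs an approximation of the image from only the four most significant bit planes; it assumes the same brain.png sample image used above.

% Sketch: approximate reconstruction from the four most significant bit planes.
i = imread('brain.png');
recon = zeros(size(i));
for b = 5:8                                % bitget index 5..8 = bit planes 4..7
    recon = recon + double(bitget(i, b)) * 2^(b-1);
end
figure;
subplot(1,2,1), imshow(i), title('Original (8 bit planes)');
subplot(1,2,2), imshow(uint8(recon)), title('Bit planes 4-7 only');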

Conclusion

The basics of medical image processing using the MATLAB software were highlighted in this chapter. The basic functions in MATLAB to read and display an image were discussed first, followed by arithmetic operations on images. The scaling and rotation operations on images, which gain importance in image registration, were also discussed. The binary thresholding operation, the extraction of the R, G and B components of a color image, and the extraction of bit planes from a grayscale image were discussed as well. The outcome of this chapter paves the way for researchers towards understanding the advanced concepts of image processing using MATLAB.

Chapter 3

Spatial Domain Filtering and Enhancement of Medical Images

3.1. Introduction

Filtering is an operation that changes the observable quality of an image in terms of resolution, contrast and noise. Typically, filtering involves applying the same or a similar mathematical operation at every pixel in an image; it is one of the most elementary image processing operations. Spatial filtering modifies the intensity of each pixel in an image using some function of the neighboring pixels. When the function employed in spatial filtering is linear, the filter is called a linear filter. A spatial filter thus comprises a mask or kernel and a function.
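
As a minimal illustration of the kernel-plus-function idea, the sketch below applies a 3x3 averaging kernel at every pixel with the Image Processing Toolbox function imfilter; the kernel is an illustrative choice, and the specific filters used in this chapter follow in the next sections.

% Generic spatial filtering: slide a kernel over the image and combine
% each neighborhood of pixels using the kernel weights.
I = imread('cameraman.tif');               % sample image shipped with MATLAB
h = ones(3,3) / 9;                         % illustrative 3x3 averaging kernel
J = imfilter(I, h, 'replicate');           % 'replicate' pads the image borders
figure, imshowpair(I, J, 'montage');
title('Original (left) and filtered (right)');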

3.2. MATLAB Code for Average Filter

The average filter replaces each pixel in an image by the mean value of the pixels in the kernel or mask surrounding that pixel. When the kernel size increases, the noise is removed more efficiently; however, the image gets blurred. The 3×3 average filter is a low pass filter. A low pass filter attenuates the high frequency components and passes the low frequency components. The low pass filter blurs the edges, and hence it is not widely used for enhancement. A comparative study of spatial domain filters for medical image enhancement is highlighted in [17]. Selective smoothing in the region of interest of ultrasound images is carried out by the average filter to prevent the loss of structural information [17, 18].

% Average Filter
clc;
clear all;
close all;


Figure 3.1. Average filter output corresponding to the input image corrupted by salt and pepper noise.

Figure 3.2. Average filter output corresponding to the input image corrupted by Gaussian noise.


% Getting input image
inputimage = imread('cameraman.tif');
% Addition of noise to the input image
b = imnoise(inputimage, 'salt & pepper');
c = imnoise(inputimage, 'gaussian');
d = imnoise(inputimage, 'speckle');
% Defining 3x3 and 5x5 kernels
kernel1 = 1/9 * ones(3,3);
kernel2 = 1/25 * ones(5,5);
% Filtering of the noisy images
b1 = conv2(double(b), kernel1, 'same');
b2 = conv2(double(b), kernel2, 'same');
c1 = conv2(double(c), kernel1, 'same');
c2 = conv2(double(c), kernel2, 'same');
d1 = conv2(double(d), kernel1, 'same');
d2 = conv2(double(d), kernel2, 'same');
% Displaying the result of averaging filter for salt & pepper noise
figure,
subplot(2,2,1), imshow(inputimage), title('Original Input Image'),
subplot(2,2,2), imshow(b), title('Salt & Pepper noise'),
subplot(2,2,3), imshow(uint8(b1)), title('3 x 3 Averaging filter'),
subplot(2,2,4), imshow(uint8(b2)), title('5 x 5 Averaging filter')
% Displaying the result of averaging filter for Gaussian noise
figure,
subplot(2,2,1), imshow(inputimage), title('Original Input Image'),
subplot(2,2,2), imshow(c), title('Gaussian noise'),
subplot(2,2,3), imshow(uint8(c1)), title('3 x 3 Averaging filter'),
subplot(2,2,4), imshow(uint8(c2)), title('5 x 5 Averaging filter')
% Displaying the result of averaging filter for speckle noise
figure,
subplot(2,2,1), imshow(inputimage), title('Original Input Image'),
subplot(2,2,2), imshow(d), title('Speckle noise'),
subplot(2,2,3), imshow(uint8(d1)), title('3 x 3 Averaging filter'),
subplot(2,2,4), imshow(uint8(d2)), title('5 x 5 Averaging filter')

Figure 3.3. Average filter output corresponding to the input image corrupted by speckle noise

Here in this program, the cameraman.tif image is given as input and the outputs are simulated. Noise of various types is added in this program. In the real-time scenario, the filtering of medical images is done directly; the addition of noise and subsequent validation is used for the design and verification of new filtering techniques. The average filter output corresponding to a real-time medical image is obtained with the program re-written as follows:

% Average filter for medical images
clc;
clear all;


close all;
% Getting input image
inputimage = imread('brain.png');
inputimage = rgb2gray(inputimage);
% Defining 3x3, 5x5 and 7x7 kernels
kernel1 = 1/9 * ones(3,3);
kernel2 = 1/25 * ones(5,5);
kernel3 = 1/49 * ones(7,7);
% Apply average filters of different kernel sizes
b1 = conv2(double(inputimage), kernel1, 'same');
b2 = conv2(double(inputimage), kernel2, 'same');
b3 = conv2(double(inputimage), kernel3, 'same');
% Displaying the result of averaging filter
figure,
subplot(2,2,1), imshow(inputimage), title('Original Image'),
subplot(2,2,2), imshow(uint8(b1)), title('3 x 3 Averaging filter'),
subplot(2,2,3), imshow(uint8(b2)), title('5 x 5 Averaging filter'),
subplot(2,2,4), imshow(uint8(b3)), title('7 x 7 Averaging filter')

The kernel size plays a vital role: as the size of the kernel increases, the resultant output becomes more blurred. This is the classical way of writing the code; a simplified way of writing it is as follows:

% Simplified code for average filter
clc;
close all;
% Getting the input image
inputimage = imread('cameraman.tif');
% fspecial creates a predefined 2D filter and filter2 generates the average filter output
f1 = fspecial('average', [3 3]);
cf1 = filter2(f1, double(inputimage));
% Display of output image
figure, imshow(uint8(cf1));

Figure 3.4. Average filtering of a medical image with respect to various kernel sizes.

3.3. MATLAB Code for High Pass Filter
The high pass filter passes the high frequency components and attenuates the low frequency components. It gains prominence in edge detection and edge enhancement. The high pass filter mask [1 -2 1; -2 4 -2; 1 -2 1] is used here. The quasi high pass filter is used for edge


detection in medical imaging modalities, since it efficiently reduces the noise and extracts the edges [19]. For the enhancement of fused brain images, the Butterworth high pass filter is used for sharpening the edges and the cross bilateral filter for smoothing the images [20].
% High pass filter
clc;
close all;
% Getting input image
inputimage=imread('cameraman.tif');
% High pass filter mask
f2=[1 -2 1;-2 4 -2; 1 -2 1];
cf2=filter2(f2,inputimage);
figure,imshow(uint8(cf2));

Figure 3.5. High pass filter output corresponding to the cameraman.tif input.

When the cameraman.tif input image is used, there is no need for the command rgb2gray(inputimage). The simulation results corresponding to the medical input image are as follows.


Figure 3.6. High pass filter output corresponding to the medical input image.

3.4. MATLAB Code for Weighted Average Filter
In the weighted average filter, pixels are multiplied by different coefficients. The pixel at the center of the mask is multiplied by a higher value than any other, giving it more importance in the calculation of the average. The other pixels are weighted inversely as a function of their distance from the center of the mask:

\frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}

The general implementation of weighted average filtering for an M x N image is

g(m,n) = \frac{\sum_{s=-i}^{i}\sum_{t=-j}^{j} w(s,t)\, f(m+s, n+t)}{\sum_{s=-i}^{i}\sum_{t=-j}^{j} w(s,t)}

The modified weighted average filter is applied to medical images after estimating the noise weight, and an optimal mask is selected for performing the filtering operation [21]. For the removal of speckle noise from ultrasound thyroid images, an adaptive weighted average filter is used, which


outperforms other filters in metrics like SNR, SSIM, and the Ultrasound Despeckling Assessment Index (USDAI) [22].
% Weighted average filter
clc
close all;
clear all;
% Getting input image
inputimage=imread('cameraman.tif');
figure, imshow(inputimage);
title('Original Image');
% Adding Gaussian noise to the image
noisyinput=imnoise(inputimage,'gaussian');
figure, imshow(noisyinput);
title('Noisy Image (gaussian)');
% Define the box filter mask
w=(1/16)*[1 2 1;2 4 2;1 2 1];
% Get array sizes
[ma, na] = size(noisyinput);
[mb, nb] = size(w);
% Perform the convolution
c = zeros(ma+mb-1, na+nb-1);
for i = 1:mb
    for j = 1:nb
        r1 = i;
        r2 = r1 + ma - 1;
        c1 = j;
        c2 = c1 + na - 1;
        c(r1:r2,c1:c2) = c(r1:r2,c1:c2) + w(i,j) * double(noisyinput);


    end
end
% Extract the region of size(a) from c
r1 = floor(mb/2) + 1;
r2 = r1 + ma - 1;
c1 = floor(nb/2) + 1;
c2 = c1 + na - 1;
c = c(r1:r2, c1:c2);
figure, imshow(uint8(c))
title('Filtered Image using Weighted Average Operation of Box Filter')

Figure 3.7. Weighted average filter output corresponding to the cameraman.tif input image.

The program can be modified for a medical image by removing the noise addition section and directly applying the input to the weighted average filter, as was done for the average filter earlier; the result obtained is as follows.
% Average filter using box filter
clc
close all;
clear all;
% Getting input image
inputimage=imread('brain.png');


inputimage=rgb2gray(inputimage);
figure, imshow(inputimage);
title('Original Image');
% Defining the box filter mask
w=(1/16)*[1 2 1;2 4 2;1 2 1];
% Get array sizes
[ma, na] = size(inputimage);
[mb, nb] = size(w);
% Perform the convolution
c = zeros(ma+mb-1, na+nb-1);
for i = 1:mb
    for j = 1:nb
        r1 = i;
        r2 = r1 + ma - 1;
        c1 = j;
        c2 = c1 + na - 1;
        c(r1:r2,c1:c2) = c(r1:r2,c1:c2) + w(i,j) * double(inputimage);
    end
end
% Extract the region of size(a) from c
r1 = floor(mb/2) + 1;
r2 = r1 + ma - 1;
c1 = floor(nb/2) + 1;
c2 = c1 + na - 1;
c = c(r1:r2, c1:c2);
figure, imshow(uint8(c))
title('Filtered Image using Weighted Average Operation of Box Filter')


Figure 3.8. Weighted average filter output corresponding to the medical image.

3.5. MATLAB Code for Median Filter
The median filter is a non-linear digital filter and is widely used for the denoising of computed tomography images. It was found to be proficient for the filtering of salt and pepper noise in medical images. The key idea behind the median filter is the replacement of each pixel by the median of its neighbourhood pixels. The size of the kernel plays a vital role in the median filtering operation; as the size of the kernel increases, the output image becomes blurred. The adaptive median filter was proposed in [23] for the filtering of impulse noise in X-ray and CT images and speckle noise in ultrasound images. The median filter can be enhanced to preserve the edges and important edge features without much PSNR loss [24].
% Median filter
clc;
close all;
% Getting input images
inputimage=imread('brain.png');
inputimage=rgb2gray(inputimage);


% Median filter operation
output1=medfilt2(inputimage,[3,3]);
output2=medfilt2(inputimage,[5,5]);
output3=medfilt2(inputimage,[7,7]);
% Displaying the outputs
subplot(2,2,1); imshow(inputimage); title('Original Image');
subplot(2,2,2); imshow(output1); title('3x3 Image');
subplot(2,2,3); imshow(output2); title('5x5 Image');
subplot(2,2,4); imshow(output3); title('7x7 Image');
The simulation results are depicted in Figure 3.9. As the kernel size increases, the output image becomes blurred.

Figure 3.9. Median filter output corresponding to various kernel sizes.


3.6. MATLAB Code for Image Intensity Slicing
Intensity level or gray level slicing highlights a certain range of gray levels in the original image [25]. It is a piecewise linear transformation which permits segmentation of certain gray level regions from the rest of the image [26]. This technique is useful when different features of an image are contained in different gray levels.


Figure 3.10. Outputs of gray level slicing technique.

% Gray Level Slicing
clc;
close all;
% Getting the input image



inputimage=imread('brain.png');
subplot(3,2,1); imshow(inputimage); title('Original Image');
l=im2double(inputimage);
% Perform gray level thresholding and intensity slicing
level=graythresh(l);
BW=im2bw(l,level);
subplot(3,2,2); imshow(BW); title('Image graythresh');
level1=0.2*BW;
subplot(3,2,3); imshow(level1); title('0.2 Slice');
level2=0.4*BW;
subplot(3,2,4); imshow(level2); title('0.4 Slice');
level3=0.6*BW;
subplot(3,2,5); imshow(level3); title('0.6 Slice');
level4=0.8*BW;
subplot(3,2,6); imshow(level4); title('0.8 Slice');

3.7. MATLAB Code for Gaussian Filter
The Gaussian filter is a spatial low pass filter based on the Gaussian probability function. The 1D Gaussian function is expressed as follows:

g(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-x^2/(2\sigma^2)}

The 2D Gaussian function is expressed as follows:

g(x,y) = \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/(2\sigma^2)}

The command fspecial('gaussian') creates a Gaussian filter. The Gaussian filter removes noise effectively by taking a weighted average within a small fixed window rather than over the whole image, which suppresses image noise while maintaining texture structure [27].


% Gaussian filter
clc;
close all;
% Getting the input image
inputimage=imread('cameraman.tif');
% Parameters of the Gaussian filter
a=9; s=3;
% Gaussian filter
g=fspecial('gaussian',[a a],s);
figure, surf(1:a,1:a,g);
figure, imshow(filter2(g,inputimage)/256);
The simulation results corresponding to the cameraman.tif input image are as follows.

Figure 3.11. (a) Gaussian probability function, (b) Gaussian filter output.

The Gaussian filter output corresponding to the medical image is as follows.


Figure 3.12. (a) Gaussian probability function, (b) Gaussian filter output.

If not specified, the standard deviation 's' defaults to 0.5. The surf function plots the Gaussian probability function.

3.8. MATLAB Code for Nonlinear Filters
The nonlinear filter employs a nonlinear function for the spatial domain operation. Nonlinear filters that use the max and min functions are employed here: the max function takes the maximum of the values under the mask and the min function takes the minimum of the values under the mask. The nonlinear filter is used to enhance noisy images; the results were comparatively better than the MDB, WMF, AMFM, MMFM, and CWM filters on the basis of PSNR [28]. The nonlinear adaptive median filter effectively reduces the noise under high noise conditions and was found to be superior [29].
% Non-linear maximum filter
clc;
close all;
% Getting the input image
inputimage=imread('brain.png');
inputimage=rgb2gray(inputimage);
cmax=nlfilter(inputimage,[3,3],@(x) max(x(:)));


cmax1=nlfilter(inputimage,[3,3],@(x) min(x(:)));
figure,imshow(cmax); title('Non-linear Maximum Filter');
figure,imshow(cmax1); title('Non-linear Minimum Filter');

Figure 3.13. (a) Non-linear filter with max function, (b) non-linear filter with min function.

3.9. MATLAB Code for Wiener Filter
The Wiener filter used here is an adaptive spatial domain filter and it operates on the principle of least squares. Though it is a low pass filter, it generates better results than the classical low pass filter. The Wiener filter performance was compared with the median filter for image denoising; the Wiener filter performs better for speckle noise and Gaussian noise [30]. The enhanced Wiener filter tunes its kernel to preserve joint edges and information with good noise reduction [31, 32].
% Wiener Filter
clc;
close all;


% Getting the input image
inputimage=imread('brain.png');
inputimage=rgb2gray(inputimage);
% Applying the Wiener filter
wieneroutput1=wiener2(inputimage,[3 3]);
wieneroutput2=wiener2(inputimage,[5 5]);
% Displaying output
figure,imshow(wieneroutput1);
figure,imshow(wieneroutput2);

Figure 3.14. (a) Input image, (b) Wiener filter output.

Conclusion
This chapter highlighted the filtering and enhancement techniques for medical images. The MATLAB codes for the average filter, high pass filter, weighted average filter, and median filter were discussed first. The Gaussian filter MATLAB code, which gains prominence in the filtering of CT images corrupted by Gaussian noise, was also discussed. The median filter was


found to be proficient for the filtering of medical images corrupted by salt and pepper noise. The nonlinear filter types and their MATLAB codes were highlighted, along with the gray level slicing and Wiener filter MATLAB codes. The choice of filtering algorithm relies on the nature of the medical images and the noise distribution; there is no universal filter for all medical imaging modalities.

Chapter 4

Frequency Domain Filtering of Medical Images
4.1. Introduction
Noise is an unavoidable factor in an acquired signal or image; the electronic sensors in the acquisition equipment contribute noise. Filtering algorithms are used to minimize the effect of noise and to improve the visual appearance for analysis. Spatial domain filtering relies on the convolution operation, while multiplication is used in frequency domain filtering: the input image spectrum is multiplied by the filter frequency response, and the inverse Fourier transform is then utilized to generate the output image. The Fourier transform is the classical transform that has immense applications in signal, image and audio processing.

Figure 4.1. Frequency domain filter.

The frequency domain filters are generally classified into three types: ideal filters, Gaussian filters and Butterworth filters. The ideal filters are easy to implement; however, the results are not satisfactory, since they exhibit ringing effects in the output that affect the edges or boundaries of objects in an image. The Gaussian filter is based on the Gaussian function (bell shaped curve), i.e., a Gaussian probability distribution function. Ringing effects are not observed in Gaussian filter outputs; the cut-off frequency of the


Gaussian filter is determined by its cut-off parameter. The Butterworth filter transfer function can be tuned and the order of the filter can be changed to obtain the desired response; the cut-off frequency is a parameter of the transfer function and can be tuned efficiently for a specific application. The parameters of a frequency domain filter are the distance function, cut-off frequency and bandwidth. The steps for filtering in the frequency domain are as follows, and a compact sketch of these steps is given after the list.
1. Let I(x,y) be the input digital image of size P x Q.
2. Zero padding is carried out; Ip(x,y) is the padded image of size M x N, where M = 2P and N = 2Q.
3. The padded image is multiplied by (-1)^(x+y) to center the transform.
4. The 2D discrete Fourier transform I(U,V) of the centered image from step 3 is calculated.
5. The filter transfer function H(U,V) is defined and multiplied with I(U,V).
6. The inverse discrete Fourier transform is computed and the output processed image G(x,y) is obtained.
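The six steps above can be written compactly with fft2/ifft2 instead of the explicit loops used in the listings later in this chapter. The following is only an illustrative sketch, not one of the chapter's listings; the Gaussian low pass transfer function, the cut-off value D0 = 30 and the RGB input file brain.png are assumptions of this example.
% Minimal frequency-domain filtering sketch following steps 1-6 above
I  = im2double(rgb2gray(imread('brain.png')));   % step 1: input image of size P x Q
[P, Q] = size(I);
M = 2*P; N = 2*Q;                                % step 2: padded size
Ip = padarray(I, [P Q], 0, 'post');              % zero padding
[x, y] = meshgrid(0:N-1, 0:M-1);
Ic = Ip .* (-1).^(x + y);                        % step 3: center the transform
F  = fft2(Ic);                                   % step 4: 2D DFT
[u, v] = meshgrid(0:N-1, 0:M-1);
D  = sqrt((u - N/2).^2 + (v - M/2).^2);          % distance from the centre of the spectrum
D0 = 30;                                         % cut-off frequency (assumed)
H  = exp(-(D.^2) ./ (2*D0^2));                   % step 5: Gaussian low pass transfer function
G  = real(ifft2(H .* F));                        % step 6: inverse DFT
G  = G .* (-1).^(x + y);                         % undo the centering
G  = G(1:P, 1:Q);                                % remove the padding
figure, imshow(mat2gray(G)), title('Frequency domain filtered image');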

4.2. MATLAB Code for Fast Fourier Transform of an Image
The Fourier transform has many applications in image processing; in [33] the Fourier transform was employed for skin lesion classification and in [34] for medical image registration. The Stockwell transform, which combines the features of the Fourier, Gabor and wavelet transforms, also finds a role in the analysis of medical images [35]. The Fourier transform converts the image data into frequency data, and the Fourier spectrum comprises low frequency and high frequency components.
% Fourier transform of an image
clc;
close all;
inputimage=im2double(imread('Brain.png'));
figure, imshow(inputimage);
f1=fft2(inputimage);
f2=fftshift(f1);
subplot(2,2,1); imshow(abs(f1)); title('Frequency Spectrum');


subplot(2,2,2); imshow(abs(f2)); title('Centered Spectrum');
f3=log(1+abs(f2));
subplot(2,2,3); imshow(f3); title('log(1+abs(f2))');
l=ifft2(f1);
l1=real(l);
subplot(2,2,4); imshow(l1); title('Reconstructed image (inverse 2-D FFT)');

Figure 4.2. (a) Input image, (b) Fourier transform outputs of a medical image.


4.3. MATLAB Code for Gaussian Low Pass Frequency Domain Filter
The frequency domain filtering operation is done by multiplying the fast Fourier transform of the image with the filter transfer function. Sharp transitions such as edges are the high frequency components of an image, while the overall shape of an object corresponds to the low frequency components. The low pass filter smooths the image and sometimes generates a blurry effect; the high pass filter enhances the boundaries, but in many cases it enhances noise too. There are three variants of low pass filters: the ideal low pass filter, the Gaussian low pass filter and the Butterworth low pass filter. The MATLAB code for various frequency domain filters is highlighted in [36], and some of the codes are given below. The Gaussian low pass filter transfer function is

H(u,v) = e^{-D^2(u,v)/(2 D_0^2)}    (1)

where D(u,v) and D0 are the same as defined in [37, 38].
% Gaussian low pass frequency domain filter
clear all;
close all;
% Read input image
inputimage=imread('1-14.png');
cim=rgb2gray(inputimage);
cim=double(cim);
[p,q]=size(cim);
p1=2*p;
q1=2*q;
pim=zeros(p1,q1);
kim=zeros(p1,q1);
% Perform padding



for i=1:p
    for j=1:q
        pim(i,j)=cim(i,j);
    end
end
% Center the Fourier transform
for i=1:p
    for j=1:q
        kim(i,j)=pim(i,j)*((-1)^(i+j));
    end
end
% 2D fast Fourier transform
fim=fft2(kim);
% Cutoff radius in the frequency domain for the filter, user defined value
thresh=30;
% Order for the Butterworth filter, user defined value
n=1;
% Sub function call for the Gaussian low pass filter; uncomment the respective line to execute a different frequency domain filter
him=glp(fim,thresh);
% Sub function call for the Butterworth low pass filter
% him=blpf(fim,thresh,n);
% Sub function call for the Gaussian high pass filter
% him=ghp(fim,thresh);
% Sub function call for the Butterworth high pass filter
% him=bhp(fim,thresh,n);


% Function call for the high boost Gaussian filter
% him=hbg(fim,thresh);
% Function call for the high boost Butterworth filter
% him=hbb(fim,thresh,n);
% Inverse 2D fast Fourier transform
ifim=ifft2(him);
for i=1:p1
    for j=1:q1
        ifim(i,j)=ifim(i,j)*((-1)^(i+j));
    end
end
% Eliminate the padding
for i=1:p
    for j=1:q
        rim(i,j)=ifim(i,j);
    end
end
% Retain the real part of the matrix
outputimage=real(rim);
rim=uint8(outputimage);
figure, imshow(rim);
% Display the input and output images
figure;
subplot(2,3,1); imshow(inputimage); title('Original image');
subplot(2,3,2); imshow(uint8(kim)); title('Padding');
subplot(2,3,3); imshow(uint8(abs(fim))); title('Transform centering');
subplot(2,3,4); imshow(uint8(abs(him))); title('Fourier Transform');
subplot(2,3,5); imshow(uint8(real(ifim))); title('Inverse fourier transform');


subplot(2,3,6); imshow(uint8(outputimage)); title('Cropped image');
% Sub function definition of the Gaussian low pass filter; the function call is done in the main file
function res = glp(im,thresh)
% Inputs:
%   im is the Fourier transform of the image
%   thresh is the cutoff circle radius
% Output:
%   res is the filtered image
[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
for i=1:p
    for j=1:q
        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
for i=1:p
    for j=1:q
        h(i,j)=exp(-((d(i,j)^2)/(2*(d0^2))));
    end
end
for i=1:p
    for j=1:q
        res(i,j)=(h(i,j))*im(i,j);
    end
end


Figure 4.3. Gaussian low pass frequency domain filter output.

4.4. MATLAB Code for Butterworth Low Pass Frequency Domain Filter
The Butterworth low pass filter generates efficient filtering results for SPECT images when compared with the Hamming filter [39]. The low pass filter smooths the image; however, the output image becomes blurred [40]. The Butterworth filter has a varying performance depending on its order: for higher orders it approaches the ideal filter, while for low order values it behaves more like a Gaussian filter. Its transfer function is given as follows:

H(u,v) = \frac{1}{1 + [D(u,v)/D_0]^{2n}}    (2)

where n is the filter order, and D(u,v) and D0 are the same as defined in [37, 38].
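To visualize the dependence on the order n described above, the short sketch below (an illustrative addition, not one of the chapter's sub functions) plots the one-dimensional profile of the Butterworth transfer function for a few orders alongside the Gaussian and ideal low pass responses; the cut-off value D0 = 30 is an assumption of this example.
% Profile of the Butterworth low pass transfer function for several orders,
% compared with the Gaussian and ideal low pass responses (D0 = 30 assumed)
D  = 0:0.5:100;                 % distance from the centre of the spectrum
D0 = 30;                        % cut-off frequency
figure; hold on;
for n = [1 2 5 20]
    plot(D, 1 ./ (1 + (D./D0).^(2*n)), 'DisplayName', sprintf('Butterworth, n = %d', n));
end
plot(D, exp(-(D.^2)./(2*D0^2)), '--', 'DisplayName', 'Gaussian');
plot(D, double(D <= D0), ':', 'DisplayName', 'Ideal');
xlabel('D(u,v)'); ylabel('H(u,v)'); legend show; hold off;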


Figure 4.4. Butterworth low pass frequency domain filter output.

% MATLAB sub function code for the Butterworth low pass filter
function res = blpf(im,thresh,n)
[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
for i=1:p
    for j=1:q
        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
for i=1:p
    for j=1:q
        h(i,j)=1/(1+(d(i,j)/d0)^(2*n));


    end
end
for i=1:p
    for j=1:q
        res(i,j)=(h(i,j))*im(i,j);
    end
end

4.5. MATLAB Code for Gaussian High Pass Frequency Domain Filter
There are three variants of high pass filters: the ideal high pass filter, the Butterworth high pass filter and the Gaussian high pass filter. The high pass filter enhances the sharpness of an image [41, 42]. An enhancement technique comprising a low pass filter followed by a high pass filter was employed in [42] for CT images of the lungs. The Gaussian high pass filter transfer function is

H(u,v) = 1 - e^{-D^2(u,v)/(2 D_0^2)}    (3)

where D(u,v) and D0 are the same as defined in [37, 38].
% MATLAB sub function code for the Gaussian high pass filter
function res = ghp(im,thresh)
% Inputs:
%   im is the Fourier transform of the image
%   thresh is the cutoff circle radius
% Output:
%   res is the filtered image



Figure 4.5. Gaussian high pass frequency domain filter output.

[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
for i=1:p
    for j=1:q
        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
for i=1:p
    for j=1:q
        h(i,j)=1-exp(-((d(i,j)^2)/(2*(d0^2))));
    end
end
for i=1:p
    for j=1:q


        res(i,j)=(h(i,j))*im(i,j);
    end
end

4.6. MATLAB Code for Butterworth High Pass Frequency Domain Filter
The Butterworth high pass filter is utilized for image sharpening, which refers to the technique of enhancing fine details while preserving edges; it blocks the low frequency components. The Butterworth high pass filter along with the cross bilateral filter was found to be beneficial for multimodal image fusion [43]. The Butterworth high pass filter function is defined as follows:

H(u,v) = \frac{1}{1 + [D_0/D(u,v)]^{2n}}    (4)

where n is the filter order, and D(u,v) and D0 are the same as defined in [37, 38].
% MATLAB sub function code for the Butterworth high pass filter
function res = bhp(im,thresh,n)
[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
for i=1:p
    for j=1:q
        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
for i=1:p
    for j=1:q
        h(i,j)=1/(1+(d0/d(i,j))^(2*n));


    end
end
for i=1:p
    for j=1:q
        res(i,j)=(h(i,j))*im(i,j);
    end
end

Figure 4.6. Butterworth high pass frequency domain filter output.

4.7. MATLAB Code for Gaussian High Boost Frequency Domain Filter
The high boost filter emphasizes the high frequency components without discarding the low frequency components. An adaptive high boost filter was proposed in [44] for grayscale and color images to improve sharpness.


The high boost filter was found to be robust for magnetic resonance image enhancement [45]. High boost filtering is a generalization of unsharp masking. Unsharp masking consists simply of generating a sharp image by subtracting from an image a blurred version of itself.

f_{hb}(x,y) = A f(x,y) - f_{lp}(x,y)    (5)

f_{hb}(x,y) = (A-1) f(x,y) + f_{hp}(x,y)    (6)

H_{hb}(u,v) = (A-1) + H_{hp}(u,v)    (7)
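Equations (5)-(7) can also be illustrated directly in the spatial domain. The following minimal sketch is an illustrative addition rather than one of the chapter's listings; the RGB file brain.png, the 5 x 5 averaging mask used as the blurring filter and the boost coefficient A = 1.75 are assumptions of this example.
% Spatial-domain illustration of Eq. (5): high boost = A*f - lowpass(f)
f   = im2double(rgb2gray(imread('brain.png')));
flp = imfilter(f, fspecial('average', [5 5]), 'replicate');  % blurred (low pass) version
A   = 1.75;                                                  % boost coefficient
fhb = A*f - flp;                                             % Eq. (5)
figure, subplot(1,2,1), imshow(f), title('Original');
subplot(1,2,2), imshow(fhb, []), title('High boost (A = 1.75)');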

% MATLAB sub function code for the Gaussian high boost filter
function res = hbg(im,thresh)
% Inputs:
%   im is the Fourier transform of the image
%   thresh is the cutoff circle radius
% Output:
%   res is the boosted image
[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
for i=1:p
    for j=1:q
        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
A=1.75; % boost factor or coefficient
for i=1:p
    for j=1:q
        h(i,j)=1-exp(-((d(i,j)^2)/(2*(d0^2))));
        h(i,j)=(A-1)+h(i,j);


    end
end
for i=1:p
    for j=1:q
        res(i,j)=(h(i,j))*im(i,j);
    end
end

Figure 4.7. Gaussian high boost frequency domain filter output.

4.8. MATLAB Code for Butterworth High Boost Frequency Domain Filter
The high boost filter also gains importance in remote sensing and satellite image processing. Different versions of frequency domain filters were implemented in OpenCL, a parallel computing platform, in [46]. A comparative analysis of Butterworth and Gaussian filters is highlighted in [47].


Figure 4.8. Butterworth high boost frequency domain filter output.

% MATLAB sub function code for the Butterworth high boost filter
function res = hbb(im,thresh,n)
% Inputs:
%   im is the Fourier transform of the image
%   thresh is the cutoff circle radius
% Output:
%   res is the filtered image
[p,q]=size(im);
d0=thresh;
d=zeros(p,q);
h=zeros(p,q);
A=1.75; % boost factor or coefficient
for i=1:p
    for j=1:q


        d(i,j)=sqrt((i-(p/2))^2 + (j-(q/2))^2);
    end
end
for i=1:p
    for j=1:q
        h(i,j)=1/(1+(d0/d(i,j))^(2*n));
        h(i,j)=(A-1)+h(i,j);
    end
end
for i=1:p
    for j=1:q
        res(i,j)=(h(i,j))*im(i,j);
    end
end

Conclusion
This chapter focused on the frequency domain filtering of medical images. The MATLAB codes of the following frequency domain filters were discussed: the Gaussian low pass filter, Gaussian high pass filter, Butterworth low pass filter, Butterworth high pass filter, Gaussian high boost filter, and Butterworth high boost filter. Frequency domain filtering gains prominence in biosignal and medical image processing applications.

Chapter 5

Medical Image Segmentation
5.1. Introduction
Segmentation is the process of extracting the desired region of interest in an image. In a medical image, the region of interest is typically an anatomical organ or an anomaly such as a tumor or cyst. In general, segmentation algorithms are classified into three types: manual, semi-automatic and automatic. A detailed classification of segmentation algorithms for medical images is highlighted in [48, 49], and the classification based on the hierarchy of evolution is depicted in Figure 5.1. The features of the segmentation models are summarized in Table 5.1, where the algorithms are compared based on five features: reproducibility, robustness, execution time, interactivity, and complexity.

First generation: thresholding, region growing, edge-based methods. Second generation: deformable models, clustering, the watershed method. Third generation: classifiers, graph-guided, atlas-guided and hybrid approaches.

Figure 5.1. Classification of segmentation algorithms.

The reproducibility of the results of manual segmentation and user-defined semi-automatic algorithms is low, while the reproducibility of user-assisted methods and fully automated techniques is high. The robustness is


high for the manual segmentation model, while it is low for the fully automated model. The computational complexity is low for the manual model and high for the fully automated model. The execution time is low for the fully automated model and high for the manual segmentation model. Hybrid segmentation approaches are gaining prominence in the current scenario, and a detailed review highlighting hybrid segmentation approaches is given in [50].

Table 5.1. Features of segmentation model

The table compares the 2D manual approach, the 2D manual approach with assisted contouring, the 2D/3D semi-automated model and the 2D/3D fully automated model in terms of reproducibility, robustness, execution time, interactivity and complexity.

5.2. MATLAB Code for Adaptive Thresholding
Thresholding uses a threshold value and, based on it, groups the pixels into two classes: pixels with gray values less than the threshold belong to one class, while pixels with gray values greater than the threshold belong to the other class. The classical thresholding approach is a binary thresholding technique and finds less prominence in medical image processing; the current methods in medical image segmentation are highlighted in [51]. In the case of medical images, multilevel thresholding gains importance, since automatic tuning of the threshold values can be done by optimization techniques. Classical thresholding is sensitive to noise, hence an appropriate preprocessing technique is required prior to segmentation; thresholding is also important within hybrid segmentation approaches [52]. Multilevel thresholding coupled with the crow search optimization


algorithm yields proficient results when compared with the electromagnetism-like optimization and harmony search optimization techniques [53]. Local adaptive thresholding relies on a local kernel size and incorporates a filtering operation, performed with a mean filter or a median filter. The global thresholding technique uses a single threshold value, while the local thresholding technique uses a local threshold value in each sub-region [54]. The MATLAB code for adaptive thresholding [55] is as follows.
% MATLAB code for adaptive thresholding
clear;
close all;
inputimage=imread('liver.png');
inputimage=rgb2gray(inputimage);
inputimage=imresize(inputimage,[256 256]);
bwim1=adaptivethreshold(inputimage,30,0);
figure,imshow(inputimage);
figure,imshow(bwim1);
imwrite(bwim1,'at2.jpg');
% Sub function for adaptive thresholding
function bw=adaptivethreshold(image,ws,tm)
image=mat2gray(image);
if tm==0
    mIM=imfilter(image,fspecial('average',ws),'replicate');
else
    mIM=medfilt2(image,[ws ws]);
end
sIM=mIM-image;
bw=im2bw(sIM,0);
bw=imcomplement(bw);



The simulation results are as follows; the adaptive thresholding algorithm was tested on abdomen CT images.

Figure 5.2. (a) Input image, (b) adaptive thresholding output with local window size 15 (average filter), (c) adaptive thresholding output with local window size 45 (average filter).

The filtering option (the third parameter) was initially set to 0, hence the average filter was employed; the results are depicted in Figure 5.2 (b) and (c). By setting this parameter to 1, median filter based thresholding is performed; the results are depicted in Figure 5.3.

Figure 5.3. (a) Adaptive thresholding output with local window size 15 (median filter), (b) adaptive thresholding output with local window size 45 (median filter).

The window size plays a vital role in the adaptive thresholding technique: for small window sizes the output was not found to be satisfactory, while for larger window sizes more emphasis is placed on the ROI. The parameters are tuned manually and vary with respect to the input images. The


adaptive thresholding outputs with variable window size are depicted in Figure 5.3 (a) and (b). The fractional order Darwinian particle swarm optimization is an improved version of PSO and is coupled with the multilevel thresholding approach for the segmentation of medical images [56].
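The optimization-based threshold tuning cited above is beyond a short listing, but a minimal multilevel thresholding example can be written with MATLAB's Otsu-based multithresh and imquantize functions; the RGB file brain.png and the choice of three threshold levels are assumptions of this sketch, which is an illustrative addition rather than one of the chapter's listings.
% Minimal multilevel thresholding sketch using Otsu-based thresholds
I      = rgb2gray(imread('brain.png'));
levels = multithresh(I, 3);            % three Otsu thresholds -> four classes
seg    = imquantize(I, levels);        % label image with values 1..4
figure, subplot(1,2,1), imshow(I), title('Input image');
subplot(1,2,2), imshow(label2rgb(seg)), title('Multilevel thresholding (4 classes)');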

5.3. MATLAB Code for Region Growing
Region growing is also a classical segmentation algorithm in which neighboring pixels are merged based on a seed point. The seed points are defined by the user, although automatic tuning of seed points is also possible. The difference between a pixel's gray value and the region mean value is used as the similarity measure, and this distance serves as the homogeneity criterion: the neighboring pixel with the smallest intensity difference from the region is merged. In region growing, four-neighborhood or eight-neighborhood connectivity is considered. Based on the similarity criterion, pixels are grouped with the seed point; when the region can no longer grow, a new seed point is selected and the process continues until no more pixels are to be examined. An automatic seeded region growing algorithm was proposed in [57], where the seed points are specified by particle swarm optimization for the segmentation of tumors from MR breast images. A 3D automatic region growing algorithm was highlighted in [58], where a genetic algorithm was employed for the tuning of seed points. A modified automatic seeded region growing algorithm was proposed in [59] for the segmentation of tumors in MR breast images.
% MATLAB code for region growing segmentation
clc;
close all;
clear all;
inputimage=imread('brain.png');
inputimage=imresize(inputimage,[256 256]);
figure,imshow(inputimage);
inputimage=rgb2gray(inputimage);
% Seed point initialization and distance measure initialization
x=110;
y=122;


distance=0.15;
inputimage=im2double(inputimage);
% Function call to the region growing algorithm
outputimage=regiongrowing(inputimage,x,y,distance);
figure,imshow(outputimage);
% Sub function definition
function J=regiongrowing(I,x,y,distance)
% I is the input image, J is the output (segmented) image
% (x,y) is the seed point; the distance value defaults to 0.2
J = zeros(size(I));
Isizes = size(I);
% Mean of the region is initialized with the seed pixel
reg_mean = I(x,y);
reg_size = 1;
% Free memory initialization to store neighbours of the (segmented) region
neighbourhood_freespace = 10000;
neighbourhood_pos=0;
neg_list = zeros(neighbourhood_freespace,3);
% Distance of the neighbouring pixel to the region mean
pixdist=0;
% Neighbourhood pixel coordinates (4-connectivity)
neigb=[-1 0; 1 0; 0 -1; 0 1];
% Grow the region until the closest candidate pixel differs from the region mean by more than the threshold
while(pixdist<distance && reg_size<numel(I))
    % Add the neighbours of the current pixel to the candidate list
    for j=1:4
        xn = x + neigb(j,1);
        yn = y + neigb(j,2);
        % Check whether the neighbour is inside the image and not yet visited
        ins = (xn>=1)&&(yn>=1)&&(xn<=Isizes(1))&&(yn<=Isizes(2));
        if(ins && (J(xn,yn)==0))
            neighbourhood_pos = neighbourhood_pos+1;
            neg_list(neighbourhood_pos,:) = [xn yn I(xn,yn)];
            J(xn,yn)=1;
        end
    end
    % Add a new block of free memory if the candidate list is nearly full
    if(neighbourhood_pos+10>neighbourhood_freespace)
        neighbourhood_freespace=neighbourhood_freespace+10000;
        neg_list((neighbourhood_pos+1):neighbourhood_freespace,:)=0;
    end
    % Add the candidate pixel whose intensity is nearest to the region mean
    dist = abs(neg_list(1:neighbourhood_pos,3)-reg_mean);
    [pixdist, index] = min(dist);
    J(x,y)=2;
    reg_size=reg_size+1;
    % Update the mean of the region
    reg_mean = (reg_mean*reg_size + neg_list(index,3))/(reg_size+1);
    % Move to the added pixel and remove it from the candidate list
    x = neg_list(index,1);
    y = neg_list(index,2);
    neg_list(index,:)=neg_list(neighbourhood_pos,:);
    neighbourhood_pos=neighbourhood_pos-1;
end
% Return the segmented region as a logical matrix
J=J>1;
The simulation results are depicted in Figure 5.4. In the code, the seed point is (110,122) and the distance value is set to 0.15.

Figure 5.4. (a) Input image, (b) region growing segmentation result.

5.4. MATLAB Code for Edge Detection
Edge detection is a classical segmentation technique used to trace the boundaries of objects in an image; the gradient operator is used to detect edges. Among the classical edge detectors, the Canny edge detector produces proficient results. Edges represent the discontinuities in an image that are used to detect the region of interest. The role of edge detection in medical images is highlighted in [60]. Variants of classical edge detection exist: in [61] a genetic algorithm was employed to trace edges in medical images, and an artificial neural network was deployed in [52] for the detection of edges.
% Program to detect the edges in an image
clc;
close all;
I = imread('1-130.png');


I=rgb2gray(I);
figure,imshow(I);
title('Input Medical Image');

Figure 5.5. Edge detection algorithm outputs.

% Sobel edge detection
BW1 = edge(I,'sobel');
figure,imshow(BW1);
title('Sobel Output');



% Canny edge detection
BW2 = edge(I,'canny');
figure,imshow(BW2);
title('Canny Output');
% Prewitt edge detection
BW3 = edge(I,'prewitt');
figure,imshow(BW3);
title('Prewitt Output');
% Roberts edge detection
BW4 = edge(I,'roberts');
figure,imshow(BW4);
title('Roberts Output');

5.5. MATLAB Code for Watershed Segmentation
Watershed segmentation is a region-based segmentation technique that evolved from mathematical morphology. In the watershed algorithm, an image is considered as a topographical landscape with ridges and valleys, and the image is decomposed such that each pixel is assigned to a region or watershed. The classical watershed algorithm is sensitive to noise and suffers from the common problems of under-segmentation and over-segmentation. The marker-controlled watershed algorithm generates proficient results; here the seed points or markers are defined initially for the foreground and background regions. The implementation of watershed algorithms using open-source libraries is highlighted in [62]. A hybrid segmentation approach consisting of region growing with the watershed algorithm was put forward in [63] for medical images. The watershed segmentation algorithm with a nonlinear tensor diffusion filter was found to be efficient for the segmentation of CT/MR medical images of the abdomen and knees [64].
% Watershed transform of an image
clc;
clear all;


% Loading of input image
I=imread('liver.png');
I=rgb2gray(I);
figure, imshow(I), title('Input image');
% Sobel gradient magnitude
hy = fspecial('sobel');
hx = hy';
Iy = imfilter(double(I), hy, 'replicate');
Ix = imfilter(double(I), hx, 'replicate');
gradmag = sqrt(Ix.^2 + Iy.^2);
figure, imshow(gradmag,[]), title('Gradient magnitude (gradmag)')
% Calling the watershed function
L = watershed(gradmag);
Lrgb = label2rgb(L);
figure, imshow(Lrgb), title('Watershed transform of gradient magnitude (Lrgb)');
% Morphological operations
Structuralelement = strel('disk',5);
Ioutput = imopen(I, Structuralelement);
figure, imshow(Ioutput), title('Opening (Io)')
Ioc = imclose(Ioutput, Structuralelement);
figure, imshow(Ioc), title('Opening-closing (Ioc)')
Ieoutput = imerode(I, Structuralelement);
Iobroutput = imreconstruct(Ieoutput, I);
figure, imshow(Iobroutput), title('Opening by reconstruction (Iobroutput)')
Iobrd = imdilate(Iobroutput, Structuralelement);
Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobroutput));
Iobrcbr = imcomplement(Iobrcbr);


figure, imshow(Iobrcbr), title('Opening-closing by reconstruction (Iobrcbr)')

Figure 5.6. Watershed segmentation algorithm phase 1 results.

% Estimation of regional maxima and superimposition on the input image
fgm = imregionalmax(Iobrcbr);
figure, imshow(fgm), title('Regional maxima of opening-closing by reconstruction (fgm)')
I2 = I;
I2(fgm) = 255;
figure, imshow(I2), title('Regional maxima superimposed on original image (I2)')
se2 = strel(ones(5,5));
fgm2 = imclose(fgm, se2);
fgm3 = imerode(fgm2, se2);
fgm4 = bwareaopen(fgm3, 20);


I3 = I;
I3(fgm4) = 255;
figure, imshow(I3)
title('Modified regional maxima superimposed on original image (fgm4)')
bw = im2bw(Iobrcbr, graythresh(Iobrcbr));
figure, imshow(bw), title('Thresholded opening-closing by reconstruction (bw)')
% Watershed ridge lines, superimposition of markers and object boundaries on the input image
D = bwdist(bw);
DL = watershed(D);
bgm = DL == 0;
figure, imshow(bgm), title('Watershed ridge lines (bgm)')
gradmag2 = imimposemin(gradmag, bgm | fgm4);
L = watershed(gradmag2);
I4 = I;
I4(imdilate(L == 0, ones(3, 3)) | bgm | fgm4) = 255;
figure, imshow(I4)
title('Markers and object boundaries superimposed on original image (I4)')
L = watershed(gradmag2);
Lrgb = label2rgb(L, 'jet', 'w', 'shuffle');
figure, imshow(Lrgb)
title('Colored watershed label matrix (Lrgb)');
figure, imshow(I), hold on
himage = imshow(Lrgb);
set(himage, 'AlphaData', 0.3);
title('Watershed output superimposed transparently on original image');


Figure 5.7. Watershed segmentation algorithm phase 2 results.

Figure 5.8. Watershed segmentation algorithm phase 3 results.

5.6. MATLAB Code for K-Means Clustering
Clustering algorithms gain much prominence in medical image processing for the delineation of specific regions of interest. K-means clustering is a hard clustering technique, while fuzzy c-means clustering is a soft clustering


technique. Clustering is an unsupervised segmentation approach and relies on a distance metric; the Euclidean distance metric is widely used in clustering approaches. K-means clustering, when coupled with the watershed algorithm, generates efficient results [65]. The random initialization of centroids in the classical clustering technique can be replaced by an optimization algorithm, and FCM coupled with crow search optimization was found to generate efficient results for abdomen CT images [66]. Binary tree quantization is a clustering algorithm that yields efficient results for CT/PET color medical images [67]. The fast fuzzy c-means clustering proposed in [68] was found to exhibit effective ROI extraction in COVID-19 CT images, and a novel clustering approach based on fuzzy concepts was proposed in [69] for CT/MR medical images.
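The listing below implements hard K-means clustering. As a contrast with the soft clustering mentioned above, the following minimal fuzzy c-means sketch assigns each pixel to its dominant cluster after the fuzzy partition is computed; the Fuzzy Logic Toolbox function fcm, the RGB file brain.png and the choice of four clusters are assumptions of this example.
% Minimal fuzzy c-means sketch on pixel intensities (Fuzzy Logic Toolbox assumed)
I    = im2double(rgb2gray(imread('brain.png')));
data = I(:);                               % one feature per pixel: intensity
nc   = 4;                                  % number of clusters (assumed)
[centers, U] = fcm(data, nc);              % fuzzy partition matrix U (nc x number of pixels)
[~, labels]  = max(U, [], 1);              % defuzzify: assign each pixel to its dominant cluster
seg = reshape(labels, size(I));
figure, imshow(label2rgb(seg)), title('Fuzzy c-means segmentation');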

Figure 5.9. K-means segmentation algorithm results.


% K means clustering segmentation algorithm
clc
clear all
close all
% Loading the input image
Inputimage = im2double(imread('Brain.png'));
F = reshape(Inputimage,size(Inputimage,1)*size(Inputimage,2),3);
% K-means clustering algorithm
% Number of clusters
K = 4;
% Initial cluster centers chosen randomly from the data
CENTROID = F(ceil(rand(K,1)*size(F,1)),:);
% Distances and labels
DL = zeros(size(F,1),K+2);
Iteration = 10;
for n = 1:Iteration
    for i = 1:size(F,1)
        for j = 1:K
            DL(i,j) = norm(F(i,:) - CENTROID(j,:));
        end
        [Distance, CN] = min(DL(i,1:K));
        DL(i,K+1) = CN;
        DL(i,K+2) = Distance;
    end
    for i = 1:K
        A = (DL(:,K+1) == i);
        CENTROID(i,:) = mean(F(A,:));
        if sum(isnan(CENTROID(:))) ~= 0
            NC = find(isnan(CENTROID(:,1)) == 1);
            for Ind = 1:size(NC,1)
                CENTROID(NC(Ind),:) = F(randi(size(F,1)),:);
            end
        end


    end
end
X = zeros(size(F));
for i = 1:K
    idx = find(DL(:,K+1) == i);
    X(idx,:) = repmat(CENTROID(i,:),size(idx,1),1);
end
T = reshape(X,size(Inputimage,1),size(Inputimage,2),3);
% Displaying the results
figure()
subplot(121); imshow(Inputimage); title('Original Image')
subplot(122); imshow(T); title('Segmented Image')
disp('number of segments =');
disp(K)

5.7. MATLAB Code for Gauss Gradient Approach for Edge Detection
The Gauss gradient approach for edge detection generates better results than the classical edge detectors [70]. A 2D Gaussian kernel is utilized here and the kernels are generated along the x and y directions. The standard deviation of the kernel is a user-defined parameter; results for several values of the standard deviation are depicted in Figure 5.10.
% Gauss gradient edge detection algorithm
clear;
close all;
% Reading the input image
inputimage=imread('brain.png');
fim=mat2gray(inputimage);
% Calling the Gauss gradient function
figure('name','Magnitude of Gradient');
[imx,imy]=gaussgradient(fim,0.5);
subplot(2,2,1);
imshow(abs(imx)+abs(imy));
title('sigma=0.5');


[imx,imy]=gaussgradient(fim,1.0);
subplot(2,2,2);
imshow(abs(imx)+abs(imy));
title('sigma=1.0');
[imx,imy]=gaussgradient(fim,1.5);
subplot(2,2,3);
imshow(abs(imx)+abs(imy));
title('sigma=1.5');
[imx,imy]=gaussgradient(fim,2.0);
subplot(2,2,4);
imshow(abs(imx)+abs(imy));
title('sigma=2.0');
% Sub function
function [gradientx,gradienty]=gaussgradient(input,sigma)
% Generate a 2-D Gaussian derivative kernel along the x direction
epsilon=1e-2;
halfsize=ceil(sigma*sqrt(-2*log(sqrt(2*pi)*sigma*epsilon)));
ksize=2*halfsize+1;
for i=1:ksize
    for j=1:ksize
        u=[i-halfsize-1 j-halfsize-1];
        Hx(i,j)=gauss(u(1),sigma)*dgauss(u(2),sigma);
    end
end
Hx=Hx/sqrt(sum(sum(abs(Hx).*abs(Hx))));
% Generate a 2-D Gaussian derivative kernel along the y direction
Hy=Hx';
% 2-D filtering
gradientx=imfilter(input,Hx,'replicate','conv');
gradienty=imfilter(input,Hy,'replicate','conv');
function y = gauss(x,sigma)
% Gaussian function
y = exp(-x^2/(2*sigma^2)) / (sigma*sqrt(2*pi));


function y = dgauss(x,sigma)
% First order derivative of the Gaussian
y = -x * gauss(x,sigma) / sigma^2;

Figure 5.10. Gauss gradient segmentation results.

Conclusion
Segmentation plays a vital role in the extraction of the region of interest in medical image processing. Segmentation algorithms are classified into three categories, and the MATLAB codes of the following algorithms were discussed in this chapter: thresholding, region growing, watershed segmentation, edge detection and K-means clustering. The MATLAB code for Gauss gradient-based edge detection was also put forward in the chapter.

Chapter 6

Wavelet Transforms in Medical Image Processing
6.1. Introduction
The wavelet transform is an important transformation tool in image processing for filtering, enhancement, fusion and compression [71, 72, 73]. Eradicating noise from an image without affecting the image information is a challenging task during image denoising [74]; an effective technique for image denoising is to utilize the discrete wavelet transform (DWT) [75]. Transform coefficients are generated while applying the wavelet transform. In the wavelet transform, the original image is subdivided into four sub-images: the approximation, horizontal detail, vertical detail and diagonal detail [76]. The wavelet decomposition of an image is represented in Figure 6.1.

Figure 6.1. Wavelet decomposition of an image.

6.2. Role of Wavelet Transforms in Medical Image Processing
The role of the wavelet transform is inevitable in biosignal processing and in medical image analysis for disease diagnosis. In medical image processing, wavelets are used in filtering, compression, feature extraction, and registration [77, 78]. A hybrid compression scheme comprising the Haar wavelet transform and the discrete cosine transform along with run-length coding was employed for image compression [79]. A modified Haar wavelet transform was utilized in [80], where the particle swarm optimization algorithm was deployed for the optimum


selection of threshold values. The discrete wavelet transform finds its role in the multimodal fusion of PET and MRI images of the brain [81]. The choice of wavelet for lossy and lossless compression schemes is highlighted in [82]. The guided filter along with the DWT was utilized for the fusion of multisensor images [83, 81], and the MATLAB code for image fusion using the wavelet transform with a guided filter is highlighted in [84]. The wavelet transform gains prominence in the filtering of medical images; the MATLAB code for image denoising with different types of thresholds is put forward in [85]. There is no down-sampling of the inputs in the stationary wavelet transform, which solves the lack of translation invariance of the discrete wavelet transform; the MATLAB code for filtering of images using the classical discrete wavelet transform and the stationary wavelet transform is put forward in [86].
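As an illustration of the denoising role described above, the following minimal sketch uses the Wavelet Toolbox default global threshold selection (ddencmp/wdencmp); the artificially added Gaussian noise, the db4 wavelet and the three-level decomposition are assumptions of this example, which is not one of the chapter's listings.
% Minimal wavelet denoising sketch using default global threshold selection
I  = im2double(rgb2gray(imread('brain.png')));
In = imnoise(I, 'gaussian', 0, 0.01);                  % add Gaussian noise for the test
[thr, sorh, keepapp] = ddencmp('den', 'wv', In);       % default threshold and soft/hard flag
Id = wdencmp('gbl', In, 'db4', 3, thr, sorh, keepapp); % 3-level db4 denoising
figure, subplot(1,3,1), imshow(I),  title('Original');
subplot(1,3,2), imshow(In), title('Noisy');
subplot(1,3,3), imshow(Id), title('Wavelet denoised');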

6.3. MATLAB Code for Wavelet Decomposition of Medical Image
The MATLAB code for the wavelet decomposition of an image is described below. A two-stage decomposition is put forward here; however, multistage decomposition is also possible.
clc
clear all
close all
% Reading the input medical image
i=imread('brain.png');
i=rgb2gray(i);
% First stage wavelet decomposition
[A, H, V, D] = dwt2(i,'db1');
e1=[uint8(A) H; V D];
figure, imshow(e1);
title('First stage decomposition');
% Second stage wavelet decomposition
[A1, H1, V1, D1] = dwt2(A,'db1');
b1=[uint8(A1) H1; V1 D1];
figure, imshow(b1);
title('Second stage decomposition');


Figure 6.2. First stage wavelet decomposition.

Figure 6.3. Second stage wavelet decomposition.


6.4. MATLAB Code for Watermarking of Medical Image
The MATLAB code for watermarking a medical image using the Daubechies wavelet is highlighted in this section. The embedding strength parameter in the code is set to 0.001 and is user-defined. A single-stage decomposition is performed here; however, multistage decomposition can also be done before watermarking. The MATLAB code for image watermarking using the Haar wavelet transform and singular value decomposition is highlighted in [87].
clc;
close all;
% Reading the input medical image
img = imread('brain.png');
img=rgb2gray(img);
img=imresize(img,[300 300]);
img = double(img);
% Initialize the weight of the watermark
c = 0.001;
figure, imshow(uint8(img)), title('Original Image');
[p, q] = size(img);
p1=p;
q1=q;
% Generate the key
n = rgb2gray(imread('key.jpg'));
key = imresize(double(n), [p q]);
figure, imshow(uint8(key));
title('Key');
% Compute the 2D wavelet transform
[ca, ch, cv, cd] = dwt2(img,'db1');
% Perform the watermarking
y = [ca ch; cv cd];
Y = y + c*key;


p=p/2;
q=q/2;
for i=1:p
    for j=1:q
        nca(i,j) = Y(i,j);
        ncv(i,j) = Y(i+p,j);
        nch(i,j) = Y(i,j+q);
        ncd(i,j) = Y(i+p,j+q);
    end
end

Figure 6.4. (a, b) Original image and the key, (c, d) watermarked image and the key extraction.

% Display the watermarked image
wimg = idwt2(nca, nch, ncv, ncd,'db1');
figure, imshow(uint8(wimg)), title('Watermarked Image');
% Extraction of the key from the watermarked image
% Compute the 2D wavelet transform


[rca, rch, rcv, rcd] = dwt2(wimg,'db1');
n1= [rca, rch; rcv, rcd];
N1=n1-y;
figure, imshow(double(N1*4)), title('Extracted key from the watermarked image')

6.5. MATLAB Code for Image Compression Using Haar Wavelet Transform
The Haar wavelet transform is widely used in image compression owing to its low computational complexity. The MATLAB code for grayscale and color image compression using the Haar wavelet is highlighted in [88, 89]. A hybrid combination of the DWT and the Walsh transform was found to be efficient for the compression of grayscale and color images; the MATLAB code is highlighted in [90]. The MATLAB code for color image compression using a hybrid combination of the wavelet and cosine transforms is highlighted in [91].
clc;
close all;
image_name=imread('brain.png');
image_name=rgb2gray(image_name);
delta=0.02;
% H1, H2, H3 are the transformation matrices for the Haar wavelet transform
H1=[0.5 0 0 0 0.5 0 0 0; 0.5 0 0 0 -0.5 0 0 0; 0 0.5 0 0 0 0.5 0 0; 0 0.5 0 0 0 -0.5 0 0; 0 0 0.5 0 0 0 0.5 0; 0 0 0.5 0 0 0 -0.5 0; 0 0 0 0.5 0 0 0 0.5; 0 0 0 0.5 0 0 0 -0.5];
H2=[0.5 0 0.5 0 0 0 0 0; 0.5 0 -0.5 0 0 0 0 0; 0 0.5 0 0.5 0 0 0 0; 0 0.5 0 -0.5 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 1];
H3=[0.5 0.5 0 0 0 0 0 0; 0.5 -0.5 0 0 0 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 1];
% Normalize each column of H1, H2, H3 to length 1 (this results in orthonormal columns of each matrix)
H1=normc(H1);
H2=normc(H2);
H3=normc(H3);
H=H1*H2*H3; % Resultant transformation matrix
x=double(image_name);
len=length(size(x));
y=zeros(size(x));
[r,c]=size(x);
% The 8x8 transformation matrix H is applied to each 8x8 block of the image
for i=0:8:r-8
    for j=0:8:c-8
        p=i+1;
        q=j+1;
        y(p:p+7,q:q+7)=(H')*x(p:p+7,q:q+7)*H;
    end
end
figure; imshow(x/255);
% The compression ratio depends on the selected delta value;
% the larger the value of delta, the larger the compression ratio
n1=nnz(y); % Number of non-zero elements in y
z=y;
m=max(max(y));
y=y/m;
y(abs(y)<delta)=0;


H3=normc(H3); H=H1*H2*H3; %Resultant transformation matrix x=double(image_name); len=length(size(x)); y=zeros(size(x)); [r,c]=size(x); %Above 8x8 transformation matrix(H) is multiplied by each 8x8 block in the image for i=0:8:r-8 for j=0:8:c-8 p=i+1; q=j+1; y(p:p+7,q:q+7)=(H’)*x(p:p+7,q:q+7)*H; end end figure; imshow(x/255); %compression ratio depends on the delta value you select %Larger the value ‘delta’, the compression ratio will be larger %delta=0.01; n1=nnz(y); % Number of non-zero elements in ‘y’ z=y; m=max(max(y)); y=y/m; y(abs(y)mult; q=colfilt(q,[blocksize,blocksize],’distinct’,@(x) ones(blocksize^2,1)*sum(x)); m=blocksize^2;

%length*width of block

A=mult-sigma.*(sqrt(q./m-q)); B=mult+sigma.*(sqrt(m-q./q));

%Low mean value %High mean value

%Components of Bitmap image H=Inputimage>=mult; Y(H)=A(H); Y(~H)=B(~H);


% Display of output image
Outputimage=uint8(Y);
figure,imshow(Outputimage);
title('Output image');

Figure 7.2. (a) Input image, (b) compressed image.

Figure 7.3. (a) Input image, (b) compressed image.


7.3. Wavelet Transform Based Compression
The wavelet transform plays a vital role in image and signal processing. The MATLAB code for image compression using the wavelet transform is represented below; a biorthogonal wavelet is used here.
clc;
clear all;
close all;
% Reading an image file
Inputimage=imread('1-14.png');
% Decomposition level and wavelet type
n=3;
wname='bior3.7';
% Conversion of RGB to grayscale
Inputimage1=double(rgb2gray(Inputimage));
% Wavelet decomposition of the image
[c,s] = wavedec2(Inputimage1,n,wname);
% wdcbm2 for selecting level dependent thresholds
alpha = 1.5;
m = 2.7*prod(s(1,:));
[thr,nkeep] = wdcbm2(c,s,alpha,m);
% Compression phase and calculation of the compression score
[xd,cxd,sxd,a,b] = wdencmp('lvd',c,s,wname,n,thr,'h');
disp('Compression Ratio');
disp(a);


% Decompression phase
R = waverec2(c,s,wname);
rc = uint8(R);
% Display the results
figure,imshow(uint8(Inputimage1)); title('Original Image');
figure,imshow(uint8(xd)); title('Compressed Image');
figure,imshow(rc); title('Decompressed Image');

Figure 7.4. (a) Input image, (b) compressed image, (c) reconstructed image.

Figure 7.5. (a) Input image, (b) compressed image, (c) reconstructed image.


7.4. Medical Image Compression Using Principal Component Analysis
Principal component analysis (PCA) is a method for computing the mutually orthogonal directions of greatest variance in a set of d-dimensional data; these d directions constitute an orthogonal basis of the data space. PCA has a number of applications in the field of medical image processing, and image compression is essential in medical applications. The fundamental goal of image compression is to reduce redundancy in images so that they can be reconstructed faithfully at the receiving end. Lossy and lossless image compression methods are the most common: lossy compression methods are used to compress digital pictures or video, whereas lossless compression methods are used in medical applications such as X-ray and EGS images, because each piece of information is valuable. However, CT and MRI imaging modalities generate vast volumes of data, necessitating image compression. The PCA technique for data transformation and compression is summarized as follows. Represent the image data using a collection of M vectors, A = [a1, a2, ..., ai, ..., aM], where each ai contains n features, so that the data set A is a collection of feature column vectors. The covariance matrix of the input data A is computed and its eigenvalues and eigenvectors are extracted; all of the feature column vectors are collected in a matrix C. The small eigenvalues are treated as zeros to reduce the dimensionality of the input feature vector, and the accuracy of the result is determined by the chosen eigenvalue threshold. A PCA-based approach for compressing medical images first identifies the region of interest, and the image background is then coded with a simple model; this results in a high compression ratio. To achieve a high compression ratio while preserving critical information, a region of interest (ROI) based method is required. Duplicated and unnecessary data are also removed using PCA.
Algorithm of PCA for image compression:
Step 1: From the image data, obtain the feature column vector matrix.
Step 2: Compute the covariance matrix A.
Step 3: Obtain the eigenvalues from the characteristic equation det(λI - A) = 0.


The matrix Ey is made up of these eigenvalues.
Step 4: Calculate the eigenvector matrix using the eigenvalues.
Step 5: Using the eigenvectors as columns, form the transformation matrix WT.
Step 6: Using Cv = CA WT, obtain the feature vector matrix.
Step 7: To compress the image, reduce the dimensionality of the resulting feature vector by setting the small eigenvalues to 0.
clc
clear all;
input=imread('brain.png');
% Conversion of the image from RGB to grayscale
input=rgb2gray(input);
% Getting the rows and columns of the input image
[r,c]=size(input);
% Estimate the mean of each row of the input image
m=mean(input')';
% Subtract the mean from each column (centering the data)
d=double(input)-repmat(m,1,c);
% Estimate the covariance matrix
co=1/(c-1)*d*d';
% Estimate the eigenvalues and eigenvectors of the covariance matrix
[eigvec,eigvl]=eig(co);
% Sort the eigenvectors according to the eigenvalues (descending order)


eigvalue = diag(eigvl);
[junk, index] = sort(-eigvalue);
eigvalue = eigvalue(index);
eigvec = eigvec(:, index);
for i=10:10:100
    % Project the original data of the input image onto the PCA space
    Compressed_Image=eigvec(:,1:round((i/100)*size(eigvec,2)))'*double(d);
    % Reconstruct the image
    ReConstructed_Image=(eigvec(:,1:round((i/100)*size(eigvec,2))))*Compressed_Image;
    ReConstructed_Image=ReConstructed_Image+repmat(m,1,c);
    % Display the reconstructed image
    figure,imshow(uint8(ReConstructed_Image))
    imwrite(uint8(ReConstructed_Image),'Reconstructed_image.png');
end
The simulation results are as follows.

Figure 7.6. (a) Input image, (b) compressed image, (c) reconstructed image (brain.png).


Figure 7.7. (a) Input image, (b) compressed image, (c) reconstructed image (liver.png).

7.5. Image Compression Using Singular Value Decomposition The singular value decompsition along with the wavelet transform was found to be proficient in the compression of images [102], efficient results were produced, when compared with the JPEG 2000 scheme. In [103], singular value decompsition along with the huffman coding was deployed for compression of images. Low rank matrix completion algorithm that releis on SVD was found to be proficeint in the compression of images [104]. If σ is a nonnegative scalar and u and v are nonzero m- and n-vectors, respectively, given a complex matrix A with m rows and n columns, such that

$$Av = \sigma u, \qquad A^{*}u = \sigma v,$$

then σ is a singular value of A, and u and v are the corresponding left and right singular vectors. If there are exactly t linearly independent right singular vectors and t linearly independent left singular vectors corresponding to a given positive singular value, the singular value has multiplicity t, and the space spanned by the right (left) singular vectors is the corresponding right (left) singular space. The matrix product UΣV* is a singular value decomposition of a complex matrix A with m rows and n columns if and only if U and V have orthonormal columns, Σ has nonnegative entries on its main diagonal and zeros everywhere else, and

$$A = U\Sigma V^{*}.$$
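These defining relations can be verified numerically with MATLAB's built-in svd function; the following is a minimal sketch on an arbitrary small matrix (the matrix itself is only an illustrative assumption).

% Quick numerical check of the singular value relations
A = [2 0 1; -1 3 2];          % arbitrary 2 x 3 example matrix
[U, S, V] = svd(A);           % A = U*S*V'
sigma1 = S(1,1);              % largest singular value
u1 = U(:,1);                  % corresponding left singular vector
v1 = V(:,1);                  % corresponding right singular vector
norm(A*v1 - sigma1*u1)        % close to zero: A*v = sigma*u
norm(A'*u1 - sigma1*v1)       % close to zero: A'*u = sigma*v
norm(A - U*S*V')              % close to zero: A = U*Sigma*V'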


Let p and q be the number of rows and columns in Σ, respectively. U is m × p with p ≤ m, while V is n × q with q ≤ n. The SVD is available in three different versions. In all of them the ith diagonal value of Σ is designated σi, and the values are arranged in the order σ1 ≥ σ2 ≥ … ≥ σk, where r is the index such that σr > 0 and either k = r or σr+1 = 0.
1. p = m and q = n. The matrix Σ has the same dimensions as A.
2. p = q = min{m, n}. The matrix Σ is a square matrix.
3. p = q = r. The matrix Σ is square; this is known as a reduced SVD and is indicated by ÛΣ̂V̂*.
When m < n, the first form of the singular value decomposition is given by

$$A = U\Sigma V^{*}$$

The MATLAB code for image compression using singular value decomposition is represented below.

clear all;
close all;
% Number of singular values to retain
N = 100;
% Getting input image
inputimage = imread('1-14.png');
figure, imshow(inputimage);
% Conversion to grayscale
grayscaleimage = rgb2gray(inputimage);
imwrite(grayscaleimage, 'input.png', 'png');
A = im2double(grayscaleimage);
[rows, cols] = size(A);


TotalSingularValues = min(size(A));
if N > TotalSingularValues
    error('N must be at most min(rows, cols)');
end
[U, S, V] = svd(A);
SN = zeros(N, N);
for i = 1:N
    SN(i, i) = S(i, i);
end
Usmaller = U(:, 1:N);
Vsmaller = V(:, 1:N);
OP = Usmaller * SN * (Vsmaller');
% Display of output image
outputimage = im2uint8(OP);
imwrite(outputimage, 'outputimage.png', 'png');
subplot(1,2,1), imshow(grayscaleimage); title('Input Image');
subplot(1,2,2), imshow(outputimage); title('Output Compressed Image');
% Estimation of compression ratio
CompressionRatio = (rows*N + cols*N + N) / (rows*cols)
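The last line of the script counts the numbers that must be stored for the truncated factors (N columns of U, N columns of V and N singular values), so the ratio can be checked by hand:

$$CR = \frac{\text{rows}\cdot N + \text{cols}\cdot N + N}{\text{rows}\cdot\text{cols}}$$

For an assumed 512 × 512 input and N = 100 retained singular values this gives (512·100 + 512·100 + 100)/(512·512) ≈ 0.391, which is of the same order as the ratios reported for Figures 7.8 and 7.9 below.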


Figure 7.8. (a) Input image, (b) compressed image.

The compression ratio in this case is 0.6983.

Figure 7.9. (a) Input image, (b) compressed image.

The compression ratio in this case is 0.3910.


Conclusion

Compression of medical images plays a vital role in data storage and transfer. Lossless compression algorithms are widely used for medical images; however, lossy algorithms are also used when the reconstructed image quality is good. The health care sector generates huge volumes of medical data, so compression algorithms gain importance. This chapter put forward the MATLAB codes for the following: block truncation coding, wavelet-based compression of images, compression using principal component analysis, and compression using singular value decomposition.

Chapter 8

Evaluation of Preprocessing, Segmentation, and Compression Algorithms

8.1. Introduction

The evaluation of an algorithm is vital to determine its efficiency. Performance metrics are used for the validation of algorithms [105, 106]. Different types of performance metrics exist for preprocessing, segmentation, and compression algorithms; based on the application, specific metrics are used. This chapter focuses on the different types of metrics used for the validation of preprocessing, segmentation, and compression algorithms.

8.2. Validation of Preprocessing Algorithms

Filtering algorithms are used as an initial step in all image processing pipelines as a preprocessing phase for the removal of noise. The nature of the noise depends on the medical imaging modality used, and the choice of filtering algorithm also varies with the application. It is difficult to judge how much noise is present in real-time medical images obtained from hospitals. For the validation of filtering algorithms, phantom images are widely used, e.g., Shepp-Logan phantom images. The phantom image is corrupted by adding noise with varying variance, and the filtering algorithms are then applied for validation. The expressions for PSNR and MAE for an M × N image are as follows:

$$PSNR = 10\log_{10}\!\left(\frac{255^{2}\,MN}{\lVert x_{0}-x_{1}\rVert^{2}}\right)\ \text{dB}$$

$$MAE = \frac{\lVert x_{0}-x_{1}\rVert_{1}}{MN}$$

where $x_0$ and $x_1$ represent the original input image and the filtered image. The PSNR value will be high and the MSE and MAE values will be low for an efficient filtering algorithm.
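A minimal MATLAB sketch of these two expressions is given below; the phantom file names are assumptions and should be replaced with the reference and filtered images at hand.

% Sketch: PSNR and MAE between a reference image x0 and a filtered image x1
x0 = double(imread('phantom_reference.png'));   % assumed file name
x1 = double(imread('phantom_filtered.png'));    % assumed file name
mse = mean((x0(:) - x1(:)).^2);                 % mean squared error
psnr_value = 10 * log10(255^2 / mse)            % PSNR in dB
mae_value = mean(abs(x0(:) - x1(:)))            % mean absolute error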


The edges play a vital role in images, and for the validation of edges the Edge Preservation Index (EPI) is used [107]. The value of EPI is 1 for a good restoration algorithm, and a lower value indicates a discrepancy of the filtering approach.

$$EPI = \frac{\Gamma\!\left(\Delta I_{1}-\overline{\Delta I_{1}},\ \Delta I_{2}-\overline{\Delta I_{2}}\right)}{\sqrt{\Gamma\!\left(\Delta I_{1}-\overline{\Delta I_{1}},\ \Delta I_{1}-\overline{\Delta I_{1}}\right)\,\Gamma\!\left(\Delta I_{2}-\overline{\Delta I_{2}},\ \Delta I_{2}-\overline{\Delta I_{2}}\right)}}$$

where $I_1$ is the reference data, $I_2$ is the measured data, N is the number of matrix elements, and $\Delta I$ denotes Laplacian filtering in the region of interest (ROI). The Multi-Scale Structural Similarity Index (MSSIM) depicts the similarity between the input and the filtered image; a value of MSSIM closer to 1 represents an efficient restoration algorithm. The MSSIM index is represented as follows:

$$MSSIM = \prod_{i=1}^{M}\left(SSIM_{i}\right)^{\beta_{i}}$$

where the value of $\beta_i$ is estimated through psychophysical measurement, and

$$SSIM_{i} = \begin{cases} \dfrac{1}{N_{i}}\sum_{j} C(t_{i,j},s_{i,j})\,S(t_{i,j},s_{i,j}), & i = 1,2,\ldots,M-1 \\[2ex] \dfrac{1}{N_{i}}\sum_{j} L(t_{i,j},s_{i,j})\,C(t_{i,j},s_{i,j})\,S(t_{i,j},s_{i,j}), & i = M \end{cases}$$

Here $t_{i,j}$ and $s_{i,j}$ represent the $j$th local image patches at the $i$th scale, and $N_i$ is the number of evaluation windows at that scale. $L(t_{i,j},s_{i,j})$, $C(t_{i,j},s_{i,j})$ and $S(t_{i,j},s_{i,j})$ represent the luminance, contrast and structural similarities, respectively. The filtering performance of algorithms on real-time images can also be validated by an entropy measure; a low value of entropy indicates the efficiency of the restoration algorithm. The MATLAB code for the edge preservation index is described below.


% MATLAB code for Edge Preservation Index
clc;
clear all;
close all;
% Get the original image of your choice
s = imread('cameraman.tif');
% Add the chosen type of noise along with its density
ns = imnoise(s, 'salt & pepper', 0.2);
% A high pass Laplacian filter is created
H = fspecial('laplacian', 0.5);
% Noisy image is restored with a median filter of size 3 x 3
y = medfilt2(ns);
% Preparing the components of EPI
% Input image is highpass filtered with the Laplacian filter
deltas = imfilter(double(s), H);
meandeltas = mean2(deltas);
% Restored image is highpass filtered with the Laplacian filter
deltascap = imfilter(double(y), H);
meandeltascap = mean2(deltascap);
% Computation of EPI
p1 = deltas - meandeltas;
p2 = deltascap - meandeltascap;
num = sum(sum(p1.*p2));
den = (sum(sum(p1.^2))) * (sum(sum(p2.^2)));
epi = num / sqrt(den)

In the above code, salt and pepper noise is added to the benchmark image and the noisy image is filtered by the median filter. In a real-time scenario, the noisy image is given and the choice of filter is a user-defined criterion. The MATLAB code for image quality measures is highlighted in [108].
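As a cross-check, the Image Processing Toolbox also provides ready-made quality measures; a minimal sketch, assuming the variables s (reference) and y (restored) from the script above:

% Built-in quality measures (Image Processing Toolbox)
psnr_value = psnr(y, s)     % peak signal-to-noise ratio
mse_value = immse(y, s)     % mean squared error
ssim_value = ssim(y, s)     % single-scale structural similarity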


8.3. Validation of Segmentation Algorithms with Ground Truth Images

A wide range of segmentation algorithms exists in image processing, and there is a need to validate the performance of a segmentation technique for an application. For validation, a ground truth or gold standard image is widely used. The gold standard image is a reference image generated by the physician for comparison with the computer-generated result. The red curve represents the gold standard result (G) and the black curve represents the machine-generated result (M). The success and error rates [109, 110] for the evaluation of segmentation algorithms are highlighted here.

Figure 8.1. True positive, true negative, false positive, and false negative values.

For a good segmentation algorithm, the false positive and false negative error rates should be minimal and the true positive success rate should be maximal. The Dice coefficient (DC) and the Jaccard coefficient (JC) are based on the amount of spatial overlap between M and G [111]. The false positive Dice coefficient (FPDC) quantifies oversegmentation and the false negative Dice coefficient (FNDC) quantifies undersegmentation; they are expressed in the following equations.


$$FPDC = \frac{2\,\lvert M \cap \bar{G}\rvert}{\lvert M\rvert+\lvert G\rvert} \times 100$$

$$FNDC = \frac{2\,\lvert \bar{M} \cap G\rvert}{\lvert M\rvert+\lvert G\rvert} \times 100$$

The Rand index (RI) quantifies the consistency of gray values in the ground truth and machine-generated result [112]; the closer the value of RI is to 1, the more efficient the segmentation algorithm. The Williams index is used when multiple segmentation techniques have to be compared with a proposed technique. Table 8.1 gives the expressions of the success and error rates for the evaluation of segmentation algorithms, and Table 8.2 gives the expressions of the similarity and distance measures. The VOI metric relies on the entropy values of the ground truth and machine-generated results; I(S,M) represents the mutual information between S and M. The Hausdorff distance, Hamming distance, region-based Hamming distance, mean absolute difference, and maximum difference are distance-based metrics for the evaluation of segmentation results [113, 108]. The local consistency error rate (LCER) and global consistency error rate (GCER) should be low for a good segmentation algorithm [114]. The local consistency error rate is expressed as follows:

$$LCER(M,G,X_{i}) = \frac{\left|C(M,X_{i}) \setminus C(G,X_{i})\right|}{\left|C(M,X_{i})\right|}$$

$$LCER(G,M,X_{i}) = \frac{\left|C(G,X_{i}) \setminus C(M,X_{i})\right|}{\left|C(G,X_{i})\right|}$$

The GCER is expressed in terms of the LCER as:

$$GCER(M,G) = \frac{1}{N}\min\!\left\{\sum_{i} LCER(M,G,x_{i}),\ \sum_{i} LCER(G,M,x_{i})\right\}$$
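A compact way to compute the GCER is to accumulate the joint label histogram of the two segmentations; the sketch below assumes M and G are label images of equal size with nonnegative integer labels (the function name is an assumption).

% Sketch: global consistency error rate between two label images M and G
function gcer = gcerror(M, G)
N = numel(M);
% Joint label histogram (rows: labels of M, columns: labels of G)
cm = accumarray([double(M(:))+1, double(G(:))+1], 1);
rowsum = sum(cm, 2);
colsum = sum(cm, 1);
% Sum of local consistency errors in each direction
E_MG = sum(sum(cm .* (rowsum - cm) ./ max(rowsum, 1)));
E_GM = sum(sum(cm .* (colsum - cm) ./ max(colsum, 1)));
gcer = min(E_MG, E_GM) / N;
end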


Table 8.1. Success and error rates for the evaluation of segmentation algorithms

Sensitivity: |TP| / (|TP| + |FN|)
Specificity: |TN| / (|TN| + |FP|)
Precision: |TP| / (|TP| + |FP|)
TP rate: |TP| / (|TP| + |FP|)
TN rate: |TN| / (|FN| + |TN|)
FP rate: |FP| / (|TN| + |FP|)
FN rate: |FN| / (|TP| + |FN|)
Classification error rate: (|FP| + |FN|) / (|FP| + |TP| + |FN| + |TN|)
Likelihood ratio positive: sensitivity / (1 - specificity)
Likelihood ratio negative: (1 - sensitivity) / specificity
Accuracy: (|TP| + |TN|) / (|TP| + |FN| + |TN| + |FP|)

Table 8.2. Similarity and distance measures for the evaluation of segmentation algorithms

Dice coefficient (DC): 2|TP| / (2|TP| + |FN| + |FP|)
Jaccard coefficient (JC): |TP| / (|TP| + |FN| + |FP|)
Overlap ratio (OR): DC / (2 - DC)
Volume similarity (VS): ((Vs - Vg) / Vg) x 100
Variation of information (VI): VI(S,G) = H(S) + H(G) - 2 I(S,G)
Williams index: Wg = ((n - 1) Σ_i D_pg) / (2 Σ_{i=2}^{n} Σ_{j=1}^{i-1} D_pq)
Hausdorff distance: H = max(H_SG, H_GS)
Rand index (RI): (|TP| + |TN|) / (|TP| + |FN| + |TN| + |FP|)
Hamming distance: D_H(S ⇒ R) = Σ_{r_i ∈ R} Σ_{S_k ∩ r_i ≠ ∅, S_k ≠ S_j} |r_i ∩ S_k|
Region-based Hamming distance: p = 1 - (D_H(S ⇒ R) + D_H(R ⇒ S)) / (2|S|)
Mean absolute difference: MAD_j = (1/K) Σ_{i=1}^{K} d(b_i, T)
Maximum difference: MAXD_j = max_{i ∈ [1,K]} d(b_i, T)


In some applications, a ground truth image is difficult to generate; in those cases, other metrics like the partition coefficient (PC), partition entropy (PE) and layout entropy (E) are used [115]. The MATLAB code for the segmentation evaluation metrics is represented below.

clc;
close all;
% Getting ground truth image
TargetImage = imread('a1.png');
TargetImage = imresize(TargetImage, [255 255]);
TargetImage = rgb2gray(TargetImage);
figure, imshow(TargetImage); title('Ground Truth Image');
% Getting segmented image
TestImage = imread('a2.png');
TestImage = imresize(TestImage, [255 255]);
TestImage = rgb2gray(TestImage);
figure, imshow(TestImage); title('Segmented Image');
[u, v] = size(TargetImage);
TPrate = zeros(1, u, 2);
FPrate = zeros(1, u, 2);
% Binarize the ground truth and the segmented output
TargetMask = TargetImage > 0;
NotTargetMask = logical(1 - TargetMask);
TestMask = TestImage > 0;
% Estimation of false positive, true positive, false negative and true negative values
FPimage = TestMask .* NotTargetMask;
FP = sum(FPimage(:))
TPimage = TestMask .* TargetMask;


TP = sum(TPimage(:))
FNimage = TargetMask - TPimage;
FN = sum(FNimage(:))
TNimage = NotTargetMask .* (1 - TestMask);
TN = sum(TNimage(:))
% Estimation of true positive and false positive rate
TPrate(1,:,1) = TP / (TP + FN);
FPrate(1,:,1) = FP / (FP + TN);
% Estimation of sensitivity, specificity, and accuracy
Sensitivity(1) = TP / (TP + FN) * 100
Specificity(1) = TN / (TN + FP) * 100
Accuracy(1) = (TP + TN) / (TP + TN + FN + FP) * 100

The sub-function for the estimation of the Jaccard coefficient and Dice coefficient is as follows. This sub-function can be called from the main program for the evaluation of the coefficients.

% Estimation of Jaccard and Dice coefficients
function [Jaccard, Dice] = sevaluate(targetimage, testimage)
targetimage = targetimage(:);
testimage = testimage(:);
common = sum(targetimage & testimage);
union = sum(targetimage | testimage);
cm = sum(targetimage);  % the number of voxels in the ground truth
co = sum(testimage);    % the number of voxels in the segmented output
Jaccard = common / union;
Dice = (2 * common) / (cm + co);
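The sub-function above can be extended in the same style to the false positive and false negative Dice coefficients defined earlier; the sketch below assumes binary (or binarizable) masks and is only an illustration.

% Sketch: false positive and false negative Dice coefficients
function [FPDC, FNDC] = fpfn_dice(targetimage, testimage)
targetimage = targetimage(:) > 0;     % ground truth mask
testimage = testimage(:) > 0;         % machine-generated mask
FP = sum(testimage & ~targetimage);   % over-segmented pixels
FN = sum(~testimage & targetimage);   % under-segmented pixels
total = sum(testimage) + sum(targetimage);
FPDC = 2 * FP / total * 100;
FNDC = 2 * FN / total * 100;
end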


8.4. Validation of Clustering Segmentation Algorithms

Clustering algorithms gain prominence in medical image segmentation. A wide number of clustering algorithms exist; fuzzy C-means clustering is the classical one. Cluster Validity Indexes (CVIs) play a vital role in the evaluation of centroid-based clustering segmentation algorithms. The terminology used for describing the CVIs is as follows:

N: number of data objects for clustering
m: the fuzzifier, which controls the level of cluster fuzziness
y_i: the ith data object, 1 ≤ i ≤ N
K: the number of clusters
C_k: the kth cluster, 1 ≤ k ≤ K
|C_k|: the number of data objects in the kth cluster
V_k: the centroid of the kth cluster
V̂: the centroid of all data objects,

$$\hat{V} = \frac{1}{N}\sum_{i=1}^{N} y_{i}$$

‖x − y‖: the distance between a pair of data objects
F_ik: the membership degree of y_i belonging to C_k

The partition coefficient (PC) and partition entropy (PE) are widely used indexes for the evaluation of clustering validity [116, 117]. The PC determines the mean strength of belongingness of the data, and PE is an entropy measurement based on the logarithm of the memberships. Both PC and PE are computed from the membership values F_ik of a fuzzy partition of the dataset [116]. The best and optimal partition is achieved by maximizing PC (or minimizing PE). The PC and PE focus only on the compactness and variation within clusters, and they are expressed as follows.

$$PC(K) = \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}^{2}$$

$$PE(K) = -\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}\,\log_{2}\!\left(F_{ik}\right)$$

The Calinski-Harabasz Index (CHI) [118] represents the spatial relationship on pixels through the agglomerative and divisive method; the maximum value indicates the best clustering result.

$$CHI(K) = \frac{W_{1K}/(K-1)}{W_{2K}/(N-K)}$$

where

$$W_{1K} = \sum_{k=1}^{K}\left|C_{k}\right|\,\left\lVert v_{k}-\hat{V}\right\rVert^{2}, \qquad W_{2K} = \sum_{k=1}^{K}\sum_{y_{i}\in C_{k}}\left\lVert y_{i}-v_{k}\right\rVert^{2}$$

The Davies Bouldin Index (DBI) [119] is a function of the ratio of the sum of within-cluster scatter to between cluster separations; the minimum value indicates the best clustering result.

$$DBI(K) = \frac{1}{K}\sum_{k=1}^{K}\max_{j\neq k}\left\{\frac{S_{j}+S_{k}}{\left\lVert v_{j}-v_{k}\right\rVert^{2}}\right\}$$

where $S_j$ is the average Euclidean distance of the vectors in class j to the centroid of class j. The Xie-Beni index (XBI) is defined as the ratio of the fuzzy within-cluster sum of squared distances to the product of the number of elements and the minimum between-cluster separation. The minimum value of XBI indicates the best clustering result [120].

$$XBI(K) = \frac{\sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}^{2}\,\left\lVert y_{i}-v_{k}\right\rVert^{2}}{N\cdot\min_{i\neq j}\left\lVert v_{i}-v_{j}\right\rVert^{2}}$$

The Fukuyama-Sugeno Index (FSI) is based on compactness and separation. The first term in the equation represents the compactness measure and the second term represents the degree of separation. The minimum value of FSI indicates the best clustering result [120].

$$FSI(K) = \sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}^{m}\,\left\lVert y_{i}-v_{k}\right\rVert^{2} \;-\; \sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}^{m}\,\left\lVert v_{k}-\hat{V}\right\rVert^{2}$$

The SC Index (SCI) determines the compactness ratio by combining two functions, $SF_1(K)$ and $SF_2(K)$; the maximum value indicates the best clustering result [121].

$$SCI(K) = SF_{1}(K) - SF_{2}(K)$$

$$SF_{1}(K) = \frac{\dfrac{1}{K}\sum_{k=1}^{K}\left\lVert v_{k}-\hat{V}\right\rVert^{2}}{\sum_{k=1}^{K}\left(\dfrac{\sum_{i=1}^{N} F_{ik}^{m}\,\left\lVert y_{i}-v_{k}\right\rVert^{2}}{\sum_{i=1}^{N} F_{ik}}\right)}$$

$$SF_{2}(K) = \frac{\sum_{k=1}^{K-1}\sum_{j=k+1}^{K}\left(\dfrac{\sum_{i=1}^{N}\min\left(F_{ik},F_{ij}\right)^{2}}{n_{ij}}\right)}{\dfrac{\sum_{i=1}^{N}\max_{1\leq k\leq K} F_{ik}^{2}}{\sum_{i=1}^{N}\max_{1\leq k\leq K} F_{ik}}}$$

where

$$n_{ij} = \sum_{i=1}^{N}\min\left(F_{ik},F_{ij}\right)$$


The CS Index (CSI) estimates the ratio of the compactness of the data objects to the separation of the centroids; the minimum value indicates the best clustering result [121].

$$CSI(K) = \frac{\sum_{k=1}^{K}\left(\dfrac{1}{\left|C_{k}\right|}\sum_{y_{j}\in C_{k}}\max_{y_{i}\in C_{k}}\left\lVert y_{j}-y_{i}\right\rVert\right)}{\sum_{j=1}^{K}\min_{i\neq j}\left\lVert v_{i}-v_{j}\right\rVert}$$

The Partition Coefficient and Exponential Separation (PCAES) Index [121] explores the normalized partition coefficient and the exponential separation for each cluster; the maximum value indicates the best clustering result.

$$PCAES(K) = \sum_{k=1}^{K}\sum_{i=1}^{N}\frac{F_{ik}^{2}}{F_{M}} \;-\; \sum_{k=1}^{K}\exp\!\left(-\frac{\min_{h\neq k}\left\lVert v_{k}-v_{h}\right\rVert^{2}}{\beta_{T}}\right)$$

where

$$F_{M} = \min_{1\leq k\leq K}\sum_{i=1}^{N} F_{ik}^{2}, \qquad \beta_{T} = \frac{1}{K}\sum_{k=1}^{K}\left\lVert v_{k}-\hat{V}\right\rVert^{2}$$

The Pakhira-Bandyopadhyay-Maulik (PBMF) Index is derived from the fuzzy variant of PBM Index. It is based on the compactness of clusters and between clusters. On varying the number of clusters, the maximum value of PBMF indicates the best cluster value [119].


$$PBMF(K) = \frac{\max_{j,k}\left\lVert v_{j}-v_{k}\right\rVert\;\sum_{i=1}^{N} F_{i1}\left\lVert y_{i}-v_{1}\right\rVert}{\sum_{k=1}^{K}\sum_{i=1}^{N} F_{ik}^{m}\left\lVert y_{i}-v_{k}\right\rVert}$$

The WL Index (WLI) estimates the compactness of the clusters by taking into account the fuzzy weighted distances and the fuzzy cardinality of the clusters. The minimum value of WLI indicates the best clustering result [121].

$$WLI = \sum_{k=1}^{K}\frac{\sum_{i=1}^{N} F_{ik}^{2}\left\lVert y_{i}-v_{k}\right\rVert^{2}}{\sum_{i=1}^{N} F_{ik}}$$

The MATLAB code for the cluster validation indexes is given below. Here X represents the input image; in a real-time scenario, normalization and resizing are done first. The cluster validity metrics can be called after the main clustering segmentation code.

function fvi = validity(X, U, V, m)
% U  membership function, K x N
% V  cluster centroids, K x 1
% X  dataset, 1 x N
% N  number of instances
% K  number of clusters
fvi = zeros(12, 1);
N = size(X, 2);
K = size(V, 1);
%m = 2.0;
%T = 100;
%OPTIONS(1) = m;
%OPTIONS(2) = T;


%X = rand(N,1);
%[V,U,~] = fcm(X,K,OPTIONS);
% Partition Coefficient (PC) and Partition Entropy (PE)
pc = sum(sum(U.^2)) / N;              % partition coefficient
pe = (-1/N) * sum(sum(U .* log2(U))); % partition entropy
disp(sprintf('PC = %.4f', pc));
disp(sprintf('PE = %.4f', pe));
fvi(1) = pc;
fvi(2) = pe;
% Dunn index
[~, mi] = max(U);
dia = zeros(1, K);
for k = 1:K
    I = (mi == k);
    ds = pdist2(X(I)', X(I)', 'euclidean');
    dia(k) = max(ds(:));
end
dis = zeros(K, K);
for s = 1:K
    si = (mi == s);
    for t = 1:K
        ti = (mi == t);
        ds = pdist2(X(si)', X(ti)', 'euclidean');
        dis(s, t) = min(ds(:));
    end
end
I = (dis ~= 0);
dis = dis(I) ./ max(dia);
dui = min(min(dis));
disp(sprintf('DUI = %.4f', dui));
fvi(3) = dui;


% Calinski-Harabasz index
Bk = 0;
Wk = 0;
Vg = mean(X);
for k = 1:K
    I = (mi == k);
    Ck = sum(I);
    Bk = Bk + Ck * sqrt((V(k) - Vg)^2);
    Wk = Wk + sum(sqrt((X(I) - V(k)).^2));
end
chi = (Bk / (K-1)) / (Wk / (N-K));
disp(sprintf('CHI = %.4f', chi));
fvi(4) = chi;
% Davies-Bouldin index
tmp4 = 0;
S = zeros(1, K);
for k = 1:K
    I = (mi == k);
    Ck = sum(I);
    S(k) = sum((X(I) - V(k)).^2) / Ck;
end
for j = 1:K
    val = zeros(1, K);
    for k = 1:K
        if (k ~= j)
            val(k) = (S(j) + S(k)) / (V(j) - V(k))^2;
        end
    end
    tmp4 = tmp4 + max(val);
end
dbi = tmp4 / K;
disp(sprintf('DBI = %.4f', dbi));
fvi(5) = dbi;


% Xie-Beni index
tmp5 = 0;
for k = 1:K
    tmp5 = tmp5 + (U(k,:).^2) * ((X - V(k)).^2)';
end
ds = pdist2(V, V);
I = (ds ~= 0);
mds = min(ds(I));
xbi = tmp5 / (N * mds);
disp(sprintf('XBI = %.4f', xbi));
fvi(6) = xbi;
% Fukuyama-Sugeno index
tmp61 = 0;
tmp62 = 0;
for k = 1:K
    tmp61 = tmp61 + (U(k,:).^m) * ((X - V(k)).^2)';
    tmp62 = tmp62 + sum((U(k,:).^m) * ((V(k) - Vg).^2));
end
fsi = tmp61 - tmp62;
disp(sprintf('FSI = %.4f', fsi));
fvi(7) = fsi;
% SC index
tmp71 = mean((V - Vg).^2);
tmp72 = 0;
for k = 1:K
    tmp72 = tmp72 + ((U(k,:).^m) * ((X - V(k)).^2)') / sum(U(k,:));
end
sc1 = tmp71 / tmp72;
tmp73 = sum(max((U.^2)));
tmp74 = sum(max(U));
tmp75 = 0;
for k = 1:K-1
    for j = k+1:K
        nij = sum(min([U(k,:); U(j,:)]));


        tmp = sum(min([U(k,:); U(j,:)]).^2);
        tmp75 = tmp75 + (tmp / nij);
    end
end
sc2 = tmp75 / (tmp73 / tmp74);
sci = sc1 - sc2;
disp(sprintf('SCI = %.4f', sci));
fvi(8) = sci;
% CS index
tmp8 = 0;
for j = 1:K
    kj = mi == j;
    Ck = sum(kj);
    for i = 1:K
        ki = mi == i;
        ds = pdist2(X(kj)', X(ki)', 'euclidean');
        tmp8 = tmp8 + sum(max(ds)) / Ck;
    end
end
tmp82 = 0;
for j = 1:K
    ds = pdist2(V, V(j), 'euclidean');
    ds = sort(ds);
    tmp82 = tmp82 + ds(2);
end
csi = tmp8 / tmp82;
disp(sprintf('CSI = %.4f', csi));
fvi(9) = csi;
% PCAES index
pca = 0;
uM = min(sum(U.^2));
bT = mean((V - mean(V)).^2);
for k = 1:K
    ds = (V(k) - V).^2;
    I = (ds ~= 0);
    pca = pca + sum((U(k,:).^2) ./ uM) - exp(-min(ds(I)) / bT);


end
disp(sprintf('PCAES = %.4f', pca));
fvi(10) = pca;

% PBMF index
tmp10 = 0;
for k = 1:K
    tmp10 = tmp10 + (U(k,:).^m) * sqrt((X - V(k)).^2)';
end
ds = pdist2(V, V);
tmp101 = max(ds(:)) * (U(1,:) * sqrt((X - V(1)).^2)');
pbmf = tmp101 / tmp10;
disp(sprintf('PBMF = %.4f', pbmf));
fvi(11) = pbmf;
% WL index
wli = 0;
for k = 1:K
    wli = wli + ((U(k,:).^2) * ((X - V(k)).^2)') / sum(U(k,:));
end
disp(sprintf('WLI = %.4f', wli));
fvi(12) = wli;
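For the crisp indices (CHI and DBI), the Statistics and Machine Learning Toolbox offers built-in cluster evaluation that can serve as a cross-check of the function above; a minimal sketch, assuming a grayscale image whose pixel intensities are clustered with k-means:

% Cross-check of CHI and DBI with built-in cluster evaluation
I = imread('brain.png');                 % assumed input file
if size(I, 3) == 3
    I = rgb2gray(I);
end
X = double(I(:));                        % one intensity feature per pixel
chEval = evalclusters(X, 'kmeans', 'CalinskiHarabasz', 'KList', 2:6);
dbEval = evalclusters(X, 'kmeans', 'DaviesBouldin', 'KList', 2:6);
disp(chEval.OptimalK)                    % K that maximises the CH index
disp(dbEval.OptimalK)                    % K that minimises the DB index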

8.5. Validation of Compression Algorithms

The validation of compression algorithms plays a vital role in judging their performance. Both lossy and lossless algorithms exist; for medical image processing, lossless algorithms are widely used, although lossy schemes can also be employed if the quality of the reconstructed image is good. The performance metrics involve two inputs: the input image and the reconstructed image. The reconstructed image is also called the decoded or decompressed image. The performance analysis of compression algorithms can be done qualitatively or quantitatively; qualitative analysis by human perception of the reconstructed image quality may not be widely acceptable, and hence quantitative analysis is needed. The compression ratio


(CR), mean square error (MSE), and peak signal-to-noise ratio (PSNR) are the generally used metrics for the validation of compression algorithms. The quality of the reconstructed image is also investigated with metrics such as normalized cross-correlation (NCC), normalized absolute error (NAE), average difference (AD), structural content (SC) and Laplacian mean square error (LMSE) [113]. Let P(i,j) and Q(i,j) represent the pixel values of the original input image and the reconstructed image after decompression; the metrics are defined as follows. The compression ratio is expressed as the ratio of the input and output image sizes in bits:

$$\mathrm{Compression\ ratio} = \frac{\text{Size of the input image}}{\text{Size of the resultant compressed image}}$$

The PSNR estimates the quality of the compression algorithm; the efficiency of a compression algorithm is good for high values of PSNR:

$$PSNR = 10\log_{10}\!\left(\frac{255^{2}}{MSE}\right)$$

Table 8.3 below depicts the ideal values of the performance metrics for evaluating the efficiency of a compression algorithm.

Table 8.3. Compression algorithm performance metrics and their ideal values

Metric:       MSE   PSNR   NCC   AD   SC   NAE   LMSE
Ideal value:  0     Inf    1     0    1    0     0

The NCC represents the similarity between the input image and the reconstructed image; the efficiency of the compression algorithm is good when the NCC value is close to 1:

$$NCC = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} P(i,j)\,Q(i,j)}{\sum_{i=1}^{m}\sum_{j=1}^{n} P(i,j)^{2}}$$


The compression algorithm is said to be efficient when the NAE value is low, close to 0:

$$NAE = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left|P(i,j)-Q(i,j)\right|}{\sum_{i=1}^{m}\sum_{j=1}^{n} P(i,j)}$$

The AD represents the average difference between the input image and the reconstructed image. For a good compression algorithm, the value of AD should be low, close to 0:

$$AD = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[P(i,j)-Q(i,j)\right]$$

The value of SC should be close to 1 for an efficient compression algorithm:

$$SC = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} P(i,j)^{2}}{\sum_{i=1}^{m}\sum_{j=1}^{n} Q(i,j)^{2}}$$

Edge preservation is quantified by the LMSE; a lower value of LMSE indicates the efficiency of the compression algorithm:

$$LMSE = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left[L\!\left(P(i,j)\right)-L\!\left(Q(i,j)\right)\right]^{2}}{\sum_{i=1}^{m}\sum_{j=1}^{n}\left[L\!\left(P(i,j)\right)\right]^{2}}$$

where L(·) denotes the Laplacian operator.

The below-mentioned MATLAB function can be called in a program. In the code, P represents the input image and Q represents the reconstructed image.

% MATLAB code for validation of compression algorithms
function [MSE, PSNR, AD, SC, NK, MD, LMSE, NAE] = compression_measures(P, Q)
P = double(P);
Q = double(Q);
R = P - Q;


% Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)
MSE = sum(sum(R.^2)) / (size(P,1) * size(P,2));
if MSE > 0
    PSNR = 10 * log10(255^2 / MSE);
else
    PSNR = Inf;
end
% Average Difference (AD)
AD = sum(sum(R)) / (size(P,1) * size(P,2));
% Normalized Cross Correlation (NK)
NK = sum(sum(P .* Q)) / sum(sum(P .* P));
% Structural Content (SC)
SC = sum(sum(P.^2)) / sum(sum(Q.^2));
% Laplacian Mean Square Error (LMSE)
X = 4 * del2(P);
LMSE = sum(sum((X - 4*del2(Q)).^2)) / sum(sum(X.^2));
% Normalized Absolute Error (NAE)
NAE = sum(sum(abs(R))) / sum(sum(abs(P)));


% Maximum Difference (MD)
MD = max(max(R));
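A hypothetical usage of the function is sketched below; the file names are assumptions and stand for an original image and its decompressed counterpart.

% Sketch: calling the compression measures on an image pair
P = double(rgb2gray(imread('original.png')));        % assumed file name
Q = double(rgb2gray(imread('reconstructed.png')));   % assumed file name
[MSE, PSNR, AD, SC, NK, MD, LMSE, NAE] = compression_measures(P, Q);
fprintf('PSNR = %.2f dB, MSE = %.2f, NCC = %.3f\n', PSNR, MSE, NK);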

Conclusion

Performance metrics play a vital role in the validation of algorithms. Validation metrics for preprocessing, segmentation and compression are discussed in this chapter. A ground truth or gold standard image is required for the estimation of most performance metrics; however, for some segmentation algorithms like clustering, performance metrics can also be estimated without ground truth images. The ideal values of the performance metrics and the MATLAB codes for the performance metrics of preprocessing, segmentation and compression algorithms are also discussed in this chapter.

Chapter 9

Medical Image Encryption and Fusion

9.1. Introduction

The process of encoding a message so that only authorized parties can access it is called encryption; encrypting an image is called image encryption. Encryption can be characterized as a strategy by which data is converted into a secret code that conceals its actual meaning [122, 123]. In computing, unencrypted information is called plaintext and encrypted information is called ciphertext. Image encryption is the process of encoding a secret image with an encryption algorithm so that unauthorized users cannot access it. Image and video encryption have applications in different fields, including internet communication, multimedia systems, medical imaging, telemedicine, and military communication [124, 125]. Medical image encryption gains prominence in the transfer of medical data, and big data handling is a crucial task in today's digital environment. Image fusion also gains prominence in disease diagnosis by combining medical images of different modalities.

9.2. Telemedicine

Telemedicine is a health-related service that uses telecommunications and electronic information technologies to offer care. It encompasses the entire range of deliverables aimed at assisting patients and their doctors or healthcare providers [126]. Online patient consultations, remote monitoring, telehealth nursing, and remote physical and psychiatric rehabilitation are just a few of the applications. Telemedicine streamlines clinical procedures, lowers hospital travel expenses, enhances health-care alternatives, improves emergency-service quality and performance, reduces diagnosis time, and saves money for both doctors and patients. Telemedicine's main purpose is to improve a patient's clinical health status. Telemedicine involves audio and visual components, and can be delivered in real time as live, two-way audio-visual exchanges between patients and clinicians (synchronous telemedicine), or


asynchronously by storing and sending data and images for later use (asynchronous telemedicine). Telemedicine is one of the most useful technologies for people to receive preventive care and improve their long-term health. It is especially beneficial to people who are unable to receive appropriate therapy due to financial or geographical constraints. Telehealth has the potential to improve the efficiency, organization, and accessibility of health care. Telemedicine is also a good option for dealing with mental health difficulties, since it removes some of the barriers that hinder patients from accessing life-saving care. Telemedicine allows patients to obtain medical care at a time that is convenient for both the doctor and the patient, while also keeping them safe; a person can see the doctor without having to arrange childcare or take time off from work. Going to the clinic and sitting in close contact with the doctor can lead to infection, which is especially dangerous for persons with chronic medical problems or a weak immune system; telemedicine eliminates the risk of infection spreading at the clinic. There are three forms of telemedicine [127] that are commonly used:
1. Real-time interactive medicine, which allows patients and doctors to communicate in real time while staying compliant with HIPAA.
2. Consulting medicine, which allows clinicians to discuss patient information with a practitioner who is located in a different area.
3. Remote monitoring, which allows caregivers to keep an eye on patients who are at home by collecting data with mobile medical equipment (e.g., blood sugar or blood pressure).
Telemedicine assessment is not appropriate for all patients or clinical situations; examples include patients who are unable to have a private conversation or who cannot make decisions. Some patients, especially the elderly, may be technologically illiterate, making telemedicine consultations difficult to schedule. Telemedicine visits may be difficult for older people with hearing or vision impairments, and some people with disabilities may require adaptive equipment to have a satisfactory telemedicine experience; a sign-language interpreter, for example, could be included to aid people who are deaf or hard of hearing. Patients in rural areas can benefit from the expansion of telemedicine services in primary care and specialty consultation care. In primary care, telemedicine encounters can be used for a variety of visits. The video component may provide vital clinical information that a phone call or


electronic mail alone cannot provide. Telemedicine sessions can be used for medication reconciliation, substance-abuse issue counseling, and form completion (e.g., return to work or school paperwork). Data from remote patient monitoring equipment (e.g., glucometers, blood pressure monitors, scales, oximeters, non-invasive ventilation equipment for sleeping) can also be uploaded and transmitted to a provider, and in some cases, providers can communicate with the patient’s electronic medical record automatically. The physician can utilize this information to track and adapt therapy, such as medication changes and behavioral change suggestions. Telemedicine, like primary care, is beneficial for specialty-care management. Telemedicine is used in cardiology, endocrinology, hepatology, nephrology, neurology, pediatrics, and surgical perioperative care management. In traditional specialized care, the purpose of telemedicine has been to empower patients to improve their health and prevent disease exacerbations. Figure 9.1 depicts the role of telemedicine during the COVID 19 scenario.

Figure 9.1. Role of telemedicine during the COVID 19 scenario.

Tele mental health services have become increasingly popular, particularly in areas where in-person mental health care is limited. Telemedicine visits can be used for capacity assessments and treatment of mood disorders and psychoses, and people with mental illnesses can usually participate fully in telemedicine sessions. Importantly, telemedicine encounters can substitute a variety of in-person sessions when in-person visits must be minimized, increasing the scope of virtual care applications.


Telemedicine, for example, is being utilized to determine whether or not someone has COVID-19. Remote patient management can help to reduce the number of needless in-person medical visits, such as to primary care doctors, urgent care centers, and emergency rooms, easing the stress on an already overburdened and overworked healthcare system (including the use of limited resources, such as personal protective equipment). This can help in the treatment of a variety of infectious diseases by lowering the risk of infection. Telemedicine has proven particularly useful for chronic disease management during the COVID-19 pandemic, allowing for continuity of care for high-risk patients while allowing for social distancing and lowering the chance of infection [128, 129]. The role of image encryption is inevitable in the data transfer involved in telemedicine. The general framework of medical image encryption is depicted in Figure 9.2. The medical images in DICOM format are processed by encryption algorithms and cipher images are produced. The generated cipher images are transmitted through the cloud network. On the receiver side, decryption is carried out and the original input image is reconstructed.

Figure 9.2. General framework of medical image encryption.

9.3. Medical Image Fusion Medical image fusion is the process of coalescing multiple images from multiple imaging modalities [130]. In the context of medical imaging, the combined analysis of numerous medical modalities has made image fusion one of the readily employed medical diagnostic tools and techniques.


Based on the type of source data, image fusion can be categorized into the following aspects:
Multi-view fusion: images of the same modality are taken at the same time but under different conditions and from different angles; it provides information from supplementary views.
Multi-modal fusion: images captured by different sensors are joined together to integrate the information present in complementary images.
Multi-temporal fusion: images of the same view or modality are taken at different times, in order to detect significant changes with respect to time.
Multi-focus fusion: images of the same scene are taken at the same time with different areas or objects in focus and fused to get all the information in a single image.
Image fusion has been carried out in different manners [131]: at the pixel level, the feature level, and the decision level.
Pixel level fusion: in image fusion based on the pixel level, the fused image acquires a value based on the pixel values of the inputs. It is the basic type of image fusion, performed at the signal level. At the spatial level, this may be performed by linearly or non-linearly merging the pixel intensity values of the respective images.

Figure 9.3. Schematic representation of pixel-level image fusion.

Decision level fusion: this means merging the information at a higher level of generalization, combining the results from a set of algorithms to obtain a final fused decision. Input images are processed one by one to extract content, as shown in Figure 9.4, which is then combined by applying decision rules to reinforce the common interpretation.


as shown in Figure 2, which is then combined by applying decision rules to reinforce the common interpretation.

Figure 9.4. Schematic representation for decision level image fusion.

Figure 9.5. Schematic representation for feature level image fusion.


Feature level fusion: this is performed at the feature level and involves the extraction of detailed information from the images. Feature level fusion is a higher-level form of processing; it extracts information from the images, which is then integrated using more complex techniques.

9.4. MATLAB Code for Medical Image Fusion Using Averaging Rule

Multimodal medical image fusion plays a vital role in disease diagnosis and therapeutic planning. The inputs are multimodality images of the same patient, for example CT/MR or PET/MR images. Averaging is a classical method of image fusion. The average-maximum and average-contrast rules are put forward in [132] for the fusion of CT/PET images. Variational mode decomposition was utilized in [133] for the fusion of medical images; the generated intrinsic mode functional components of the images are fused by the local energy maxima. A detailed review focusing on medical image fusion is given in [134], where spatial domain, transform domain and deep learning approaches are discussed. The MATLAB code for medical image fusion using an averaging rule is depicted below. The CT and MR images of the brain are taken from the RIRE data set (https://rire.insight-journal.org).

clc;
clear all;
close all;
M1 = imread('CT.png');
M2 = imread('MR.png');
% Average the two modalities pixel by pixel
FusedImage = (double(M1) + double(M2)) / 2;
subplot(1,3,1); imshow(M1, []);
subplot(1,3,2); imshow(M2, []); title('Image Fusion using Averaging');
subplot(1,3,3); imshow(FusedImage, []);



Figure 9.6. (a) CT input image, (b) MR input image, (c) fused output.

9.5. MATLAB Code for Medical Image Fusion Using Wavelet Transform

Medical image fusion using the DWT along with principal component analysis was highlighted in [135], and the MATLAB code for the DWT-PCA fusion is put forward in [136]. A rotated wavelet transform was utilized in [137] for the fusion of medical images. The quaternion wavelet transform resolves the issue of the classical wavelet transform, which does not consider the dependence between the coefficients, and was employed for medical image fusion in [138].


Figure 9.7. (a) CT input image, (b) MR input image, (c) fused output.


clc;
close all;
clear all;
% Read images
a = imread('CT.png');
b = imread('MR.png');
% Wavelet transform of both inputs
[a1, b1, c1, d1] = dwt2(a, 'db2');
[a2, b2, c2, d2] = dwt2(b, 'db2');
[p, q] = size(a1);
% Fusion rules
% Average rule for the approximation coefficients
for i = 1:p
    for j = 1:q
        a3(i,j) = (a1(i,j) + a2(i,j)) / 2;
    end
end
% Max rule for the detail coefficients
for i = 1:p
    for j = 1:q
        b3(i,j) = max(b1(i,j), b2(i,j));
        c3(i,j) = max(c1(i,j), c2(i,j));
        d3(i,j) = max(d1(i,j), d2(i,j));
    end
end
% Inverse wavelet transform
c = idwt2(a3, b3, c3, d3, 'db2');
imshow(a)
title('First Input Image')
figure, imshow(b)
title('Second Input Image')
figure, imshow(c, [])
title('Fused Output Image')


9.6. MATLAB Code for Classical Medical Image Encryption

A survey of medical image encryption algorithms is given in [139]. A lightweight encryption scheme comprising a dual permutation was proposed in [140] for the encryption of medical images. Chaos plays a pivotal role in medical image encryption, and in [141, 142, 143] a hybrid chaotic approach was followed for the encryption of medical images. The MATLAB code for classical medical image encryption is depicted below. The red, green, and blue components of the image are extracted and a cipher image is created; in the decryption stage, the inverse of the encryption is carried out.

% Encryption of the input image
a = imread('1-14.png');
figure; imshow(a); title('Input image');
Red = a(:,:,1);
Green = a(:,:,2);
Blue = a(:,:,3);
[r, c] = size(Red);
for i = 1:r
    for j = 1:c
        Red1(i,j)   = double(mod((double(Red(i,j))   + 126), 256));
        Green1(i,j) = double(mod((double(Green(i,j)) + 122), 256));
        Blue1(i,j)  = double(mod((double(Blue(i,j))  + 120), 256));
    end
end
Red1 = uint8(Red1);
Green1 = uint8(Green1);
Blue1 = uint8(Blue1);
cipher = cat(3, Red1, Green1, Blue1);
figure; imshow(cipher); title('Encrypted image');
% Decryption
for i = 1:r
    for j = 1:c
        Red(i,j) = double(mod((double(Red1(i,j)) - 126), 256));


        Green(i,j) = double(mod((double(Green1(i,j)) - 122), 256));
        Blue(i,j)  = double(mod((double(Blue1(i,j))  - 120), 256));
    end
end
% Reassemble the decrypted channels and display the result
decrypted = cat(3, uint8(Red), uint8(Green), uint8(Blue));
figure; imshow(decrypted); title('Decrypted image');


Figure 9.8. (a) Input image, (b) encrypted image, (c) decrypted image.

9.7. MATLAB Code for Medical Image Encryption Using the BitXor Operator

Medical image encryption using the bitxor operator is another classical approach. The XOR operation finds its role in hybrid encryption techniques, yielding proficient results [142, 143].

% Read the input image
a = imread('brain.png');
a = imresize(a, [256 256]);
figure, imshow(a); title('Input image');
% Encryption phase
b = encryption(a);
figure; imshow(b); title('Encrypted image');
% Decryption phase
c = decryption(b);


figure; imshow(c); title('Decrypted image');

% Encryption sub-function
function out = encryption(a)
Red = a(:,:,1);
Green = a(:,:,2);
Blue = a(:,:,3);
Red = reshape(Red, [1, 65536]);
Green = reshape(Green, [1, 65536]);
Blue = reshape(Blue, [1, 65536]);
red = double(Red);
green = double(Green);
blue = double(Blue);
for i = 1:65536
    red(i)   = mod((double(bitxor(red(i), i*23))), 256);
    green(i) = mod((double(bitxor(green(i), i.^2*red(i)))), 256);
    blue(i)  = mod((double(bitxor(blue(i), i.^2*green(i)))), 256);
end
P = uint8(red);
Q = uint8(green);
R = uint8(blue);

P = reshape(P, [256, 256]);
Q = reshape(Q, [256, 256]);
R = reshape(R, [256, 256]);
% Assemble the encrypted output
k = cat(3, P, Q, R);
out = k;
end


% Decryption sub-function
function out = decryption(a)
% Decomposing into R, G, B channels
Red = a(:,:,1);
Green = a(:,:,2);
Blue = a(:,:,3);
Red = reshape(Red, [1, 65536]);
Green = reshape(Green, [1, 65536]);
Blue = reshape(Blue, [1, 65536]);
red = double(Red);
green = double(Green);
blue = double(Blue);
for i = 1:65536
    blue(i)  = mod((double(bitxor(blue(i), i.^2*green(i)))), 256);
    green(i) = mod((double(bitxor(green(i), i.^2*red(i)))), 256);
    red(i)   = mod((double(bitxor(red(i), i*23))), 256);
end
P = uint8(red);
Q = uint8(green);
R = uint8(blue);
P = reshape(P, [256, 256]);
Q = reshape(Q, [256, 256]);
R = reshape(R, [256, 256]);
% Output
k = cat(3, P, Q, R);
out = k;
end


Figure 9.9. (a) Input image, (b) encrypted image, (c) decrypted image.

Conclusion

This chapter put forward the importance of encryption and fusion for medical images. The characteristics and importance of telemedicine are discussed first, followed by the types of medical image fusion. The MATLAB codes for medical image fusion using the averaging rule and the wavelet transform are presented, followed by the classical medical image encryption and the BitXor-based encryption.

Chapter 10

Case Studies in Medical Image Processing

10.1. Introduction

Medical image processing refers to the utilization of computer-aided algorithms for the analysis of medical images. This chapter focuses on some case studies in medical image processing.

10.2. Retinal Blood Vessel Detection Using the Kirsch Algorithm

Retinal imaging developed during the 1840s and is now a backbone of the clinical care and management of patients with retinal as well as systemic diseases. Fundus photography is extensively used for the detection of diabetic retinopathy, glaucoma, and age-related macular degeneration. Optical coherence tomography (OCT) and fluorescein angiography are widely used in the diagnosis and management of patients with diabetic retinopathy, macular degeneration, and inflammatory retinal diseases [144]. Diabetic retinopathy (DR) is the most common cause of blindness and vision defects [145]. Due to its common occurrence and clinical implications, the research community has endeavored to advance its diagnosis and treatment by developing algorithms for retinal image analysis, fundus image enhancement [146, 147, 148], and monitoring. Blood vessel extraction from fundus images has been carried out with many segmentation algorithms; here, the MATLAB code for blood vessel detection [149] using the Kirsch algorithm is highlighted. The Kirsch algorithm uses eight unique masks (h1-h8), and convolution is performed between the input image and each mask. The images used here were downloaded from the public database (https://www5.cs.fau.de/research/data/fundus-images) [150].


clc;
close all;
clear all;
% Reading input image
im2 = imread('01_dr.jpg');
% Pre-processing of input image
im2 = imresize(im2, [512 512]);
figure, imshow(im2)
title('Input Retinal Image');
im2 = rgb2gray(im2);
figure, imshow(im2);
imwrite(im2, 'Resize image.tif');
title('Input Image');
figure, imhist(im2);
title('Histogram of Input Image');
figure; imshow(histeq(im2)); title('Histogram Equalized Image');
figure; imhist(histeq(im2)); title('Histogram of Histogram-Equalized Image');
% Define masks for the Kirsch algorithm
h1 = [5 -3 -3; 5 0 -3; 5 -3 -3] / 15;
h2 = [-3 -3 5; -3 0 5; -3 -3 5] / 15;
h3 = [-3 -3 -3; 5 0 -3; 5 5 -3] / 15;
h4 = [-3 5 5; -3 0 5; -3 -3 -3] / 15;
h5 = [-3 -3 -3; -3 0 -3; 5 5 5] / 15;
h6 = [5 5 5; -3 0 -3; -3 -3 -3] / 15;
h7 = [-3 -3 -3; -3 0 5; -3 5 5] / 15;
h8 = [5 5 -3; 5 0 -3; -3 -3 -3] / 15;
% Perform the filtering operation with each mask
t1 = filter2(h1, im2);
t2 = filter2(h2, im2);
t3 = filter2(h3, im2);
t4 = filter2(h4, im2);
t5 = filter2(h5, im2);
t6 = filter2(h6, im2);
t7 = filter2(h7, im2);
t8 = filter2(h8, im2);
s = size(im2);
out = zeros(s(1), s(2));
a = zeros(1, 8);


% Generation of edge detected output
for i = 1:s(1)
    for j = 1:s(2)
        a(1) = t1(i,j); a(2) = t2(i,j); a(3) = t3(i,j); a(4) = t4(i,j);
        a(5) = t5(i,j); a(6) = t6(i,j); a(7) = t7(i,j); a(8) = t8(i,j);
        if (max(a) > 7.5)
            out(i,j) = max(a);
        end
    end
end
figure; imshow(out); title('Detected Vessels');
imwrite(out, 'outputimage.jpg');
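If a manually annotated vessel mask is available (the file name below is purely an assumption), the detected vessel map can also be scored with the overlap metrics of Chapter 8; a minimal sketch:

% Sketch: overlap metrics against an assumed ground-truth vessel mask
gt = imread('vessel_groundtruth.png') > 0;      % assumed 512 x 512 binary mask
seg = out > 0;                                  % binarised detected vessel map
common = sum(gt(:) & seg(:));
dice = 2 * common / (sum(gt(:)) + sum(seg(:)))
jaccard = common / sum(gt(:) | seg(:))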


Figure 10.1. (a) Input image (glaucoma) (D1), (b) grayscale output.

In Figure 10.1, (a) depicts the input image and (b) depicts the grayscale output. In Figure 10.2, (a) depicts the histogram of the input image and (b) depicts the histogram of the equalized image. The histogram equalized output is depicted in Figure 10.3(a) and the extracted blood vessels are depicted in Figure 10.3(b).



Figure 10.2. (a), Histogram of input image (b) histogram of equalized image.


Figure 10.3. (a) Histogram equallized output, (b) blood vessel extraction output.

Similarly, the outputs corresponding to the second input are depicted in Figures 10.4 to 10.6.


Figure 10.4. (a) Input image (diabetic retinopathy) (D2), (b) grayscale output.


Figure 10.5. (a) Histogram of input image, (b) histogram equalized output.


Figure 10.6. (a) Histogram equalized output, (b) edge detected output.


10.3. Medical Image Edge Detection Using the Homogeneity Operation

Edge detection is extensively used in the areas of feature detection and feature extraction; it is the process of detecting points at which the image brightness changes sharply or, more formally, has discontinuities [151, 152]. The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the image, such as discontinuities in depth, discontinuities in surface orientation, changes in material properties, and variations in scene illumination. The edge detector identifies and connects the curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation [153]. The MATLAB code for medical image edge detection using the homogeneity operator is represented below. In this approach, the higher the homogeneity factor, the better the edge detection.

clc;
close all;
clear all;
inputimage = imread('1-14.png');
inputimage = rgb2gray(inputimage);
figure; imshow(inputimage); title('Original Image');
[m, n] = size(inputimage);
% Work in double precision so that negative differences are not clipped
I = double(inputimage);
newimg = zeros(m, n);
for i = 2:m-1
    for j = 2:n-1
        newimg(i,j) = max([abs(I(i,j) - I(i-1,j-1)), ...
                           abs(I(i,j) - I(i,j-1)), ...
                           abs(I(i,j) - I(i-1,j)), ...
                           abs(I(i,j) - I(i+1,j+1)), ...
                           abs(I(i,j) - I(i+1,j)), ...
                           abs(I(i,j) - I(i,j+1)), ...
                           abs(I(i,j) - I(i+1,j-1)), ...
                           abs(I(i,j) - I(i-1,j+1))]);
    end


end
% Threshold calculation by the Otsu method
th = graythresh(inputimage) * double(max(inputimage(:)));
figure; imshow(newimg > th/8); title('Homogeneity Operator Output');


Figure 10.7. (a) Input image, (b) output corresponding to a homogeneity factor of th/4, (c) output corresponding to a homogeneity factor of th/8.


Figure 10.8. (a) Input image, (b) output corresponding to a homogeneity factor of th/4, (c) output corresponding to a homogeneity factor of th/8.

10.4. MATLAB Code for the Homogeneous Mask Area Filter

Homomorphic filtering involves a nonlinear mapping of the image to a different domain in which linear filtering techniques are applied, followed by an inverse mapping back to the original domain. A homomorphic filter function H(.) is applied to the FFT of the logarithmically compressed image, followed by the inverse FFT and decompression to obtain the filtered image [154]; a sketch of this pipeline is given after the code below. The MATLAB code for the homogeneous mask area filter [155] is represented here. This filtering approach utilizes the local mean and variance of the image.


clc;
close all;
clear all;
inputImage = imread('brain.png');
inputImage = rgb2gray(inputImage);
figure, imshow(inputImage);
title('Input Image');
inputImage = double(inputImage);
[nRows, nCols] = size(inputImage);
localMean = zeros([1 9]);
localVar = zeros([1 9]);
for i = 3:nRows-2
    for j = 3:nCols-2
        % 5 x 5 neighbourhood around the current pixel
        localWindow = inputImage(i-2:i+2, j-2:j+2);
        % Mean and variance of the nine overlapping 3 x 3 sub-windows
        k = 1;
        for p = 1:3
            for q = 1:3
                subWindow = localWindow(p:p+2, q:q+2);
                localMean(k) = mean2(subWindow);
                localVar(k) = var(subWindow(:));
                k = k + 1;
            end
        end
        % Select the most homogeneous sub-window (smallest variance-to-mean ratio)
        C_k = localVar ./ (localMean + eps);
        [~, index] = min(C_k);
        outputImage(i,j) = localMean(index);
    end
end
outputImage = outputImage(3:end-2, 3:end-2);
figure, imshow(uint8(outputImage));
title('Output Image');
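For completeness, a minimal sketch of the homomorphic filtering pipeline described at the start of this section is given below; the cut-off and gain values are assumptions chosen only for illustration.

% Sketch: homomorphic filtering (log -> FFT -> filter -> inverse FFT -> exp)
I = imread('brain.png');                          % assumed input file
if size(I, 3) == 3
    I = rgb2gray(I);
end
I = im2double(I);
logI = log(1 + I);                                % logarithmic compression
F = fftshift(fft2(logI));                         % centred spectrum
[r, c] = size(I);
[X, Y] = meshgrid(1:c, 1:r);
D2 = (X - c/2).^2 + (Y - r/2).^2;                 % squared distance from centre
D0 = 30; gL = 0.5; gH = 2.0;                      % assumed filter parameters
H = (gH - gL) * (1 - exp(-D2 / (2 * D0^2))) + gL; % high-frequency emphasis filter
filtered = real(ifft2(ifftshift(H .* F)));        % back to the spatial domain
out = exp(filtered) - 1;                          % exponential decompression
figure, imshow(mat2gray(out)); title('Homomorphic Filtered Output');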


Figure 10.9. (a) Input image, (b) filtered output.


Figure 10.10. (a) Input image, (b) filtered output.

Conclusion

This chapter discussed some case studies in medical image processing. Retinal blood vessel extraction using the Kirsch detection algorithm on fundus images was presented first. MR and CT image edge detection based on the homogeneity operator was then highlighted, and the homogeneous mask area filter was presented for the filtering of CT/MR images.

References

[1] Hendee WR, Ritenour ER. Medical imaging physics. John Wiley & Sons; 2003 Apr 14.
[2] Suetens P. Fundamentals of medical imaging. Cambridge University Press; 2017 May 11.
[3] https://www.envrad.com/difference-between-x-ray-ct-scan-and-mri/
[4] https://openmedscience.com/medical-imaging/
[5] https://www.physio-pedia.com/Medical_Imaging
[6] Punn NS, Agarwal S. Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray images using fine-tuned deep neural networks. Applied Intelligence. 2021 May;51(5):2689-702.
[7] Maghdid HS, Asaad AT, Ghafoor KZ, Sadiq AS, Mirjalili S, Khan MK. Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. In Multimodal Image Exploitation and Learning 2021, 2021 Apr 12 (Vol. 11734, p. 117340E). International Society for Optics and Photonics.
[8] Le Bihan D. Diffusion, confusion and functional MRI. Neuroimage. 2012 Aug 15;62(2):1131-6.
[9] Hricak H, Choi BI, Scott AM, Sugimura K, Muellner A, von Schulthess GK, Reiser MF, Graham MM, Dunnick NR, Larson SM. Global trends in hybrid imaging. Radiology. 2010 Nov;257(2):498-506.
[10] Beyer T, Hicks R, Brun C, Antoch G, Freudenberg LS. An international survey on hybrid imaging: do technology advances preempt our training and education efforts? Cancer Imaging. 2018 Dec;18(1):1-8.
[11] Schmainda KM, Prah M (2018). Data from Brain-Tumor-Progression. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2018.15quzvnb.
[12] Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. Journal of Digital Imaging. 2013 Dec;26(6):1045-57.
[13] Erickson BJ, Kirk S, Lee Y, Bathe O, Kearns M, Gerdes C, Lemmerman J (2016). Radiology data from The Cancer Genome Atlas Liver Hepatocellular Carcinoma [TCGA-LIHC] collection. The Cancer Imaging Archive. http://doi.org/10.7937/K9/TCIA.2016.IMMQW8UQ.
[14] Clark KW, Gierada DS, Marquez G, Moore SM, Maffitt DR, Moulton JD, Wolfsberger MA, Koppel P, Phillips SR, Prior FW. Collecting 48,000 CT exams for the lung screening study of the National Lung Screening Trial. Journal of Digital Imaging. 2009 Dec;22:667-80.
[15] https://cecas.clemson.edu/~ahoover/stare
[16] Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging. 2000 Mar;19(3):203-10.
[17] Narasimha C, Rao AN. A comparative study: spatial domain filter for medical image enhancement. In 2015 International Conference on Signal Processing and Communication Engineering Systems, 2015 Jan 2 (pp. 291-295). IEEE.
[18] Zhang M, Gunturk B. A new image denoising method based on the bilateral filter. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, 2008 Mar 31 (pp. 929-932). IEEE.
[19] De Araujo AF, Constantinou CE, Tavares JM. Smoothing of ultrasound images using a new selective average filter. Expert Systems with Applications. 2016 Oct 30;60:96-106.
[20] Lin WC, Wang JW. Edge detection in medical images with quasi high-pass filter based on local statistics. Biomedical Signal Processing and Control. 2018 Jan 1;39:294-302.
[21] Lalotra B, Vig R, Budhiraja S. Multimodal medical image fusion using Butterworth high pass filter and cross bilateral filter. MATEC Web of Conferences 2016 (Vol. 57, p. 01021). EDP Sciences.
[22] Gupta S, Sunkaria RK. Real-time salt and pepper noise removal from medical images using a modified weighted average filtering. In 2017 Fourth International Conference on Image Information Processing (ICIIP), 2017 Dec 21 (pp. 1-6). IEEE.
[23] Sawant AR, Zeman HD, Muratore DM, Samant SS, DiBianca FA. Adaptive median filter algorithm to remove impulse noise in x-ray and CT images and speckle in ultrasound images. In Medical Imaging 1999: Image Processing, 1999 May 21 (Vol. 3661, pp. 1263-1274). International Society for Optics and Photonics.
[24] Ning CY, Liu SF, Qu M. Research on removing noise in medical image based on median filter method. In 2009 IEEE International Symposium on IT in Medicine & Education, 2009 Aug 14 (Vol. 1, pp. 384-388). IEEE.
[25] Verma B, Kumar Singh B, Thoke AS. An enhancement in adaptive median filter for edge preservation. Procedia Comput. Sci. 2015;48:29-36. doi: 10.1016/j.procs.2015.04.106.
[26] Singh, Mittal A. Various image enhancement techniques - a critical review. Int. J. Innov. Sci. Res. 2014;10(2):267-274.
[27] Kovaleski RP, Oliveira MM. High-quality brightness enhancement functions for real-time reverse tone mapping. The Visual Computer. 2009 May;25:539-47.
[28] Wang M, Zheng S, Li X, Qin X. A new image denoising method based on Gaussian filter. In 2014 International Conference on Information Science, Electronics and Electrical Engineering, 2014 Apr 26 (Vol. 1, pp. 163-167). IEEE.
[29] Khare C, Nagwanshi KK. Image restoration technique with non linear filter. International Journal of Advanced Science and Technology. 2012 Feb;39:67-74.
[30] Thivakaran TK, Chandrasekaran RM. Nonlinear filter based image denoising using AMF approach. arXiv preprint arXiv:1003.1803. 2010 Mar 9.
[31] Kumar S, Kumar P, Gupta M, Nagawat AK. Performance comparison of median and Wiener filter in image de-noising. International Journal of Computer Applications. 2010 Dec;12(4):27-31.
[32] Baselice F, Ferraioli G, Pascazio V, Schirinzi G. Enhanced Wiener filter for ultrasound image denoising. In EMBEC & NBC 2017: Joint Conference of the European Medical and Biological Engineering Conference (EMBEC) and the Nordic-Baltic Conference on Biomedical Engineering and Medical Physics (NBC), Tampere, Finland, June 2017, 2018 (pp. 65-68). Springer Singapore.
[33] Sokołowski A, Pardela T. Application of Fourier transforms in classification of medical images. In Human-Computer Systems Interaction: Backgrounds and Applications 3, 2014 (pp. 193-200). Springer, Cham.
[34] Luce J, Gray J, Hoggarth MA, Lin J, Loo E, Campana MI, Roeske JC. Medical image registration using the Fourier transform. International Journal of Medical Physics, Clinical Engineering and Radiation Oncology. 2014 Jan 27;2014.
[35] Zhu H, Goodyear BG, Lauzon ML, Brown RA, Mayer GS, Law AG, Mansinha L, Mitchell JR. A new local multiscale Fourier analysis for medical imaging. Medical Physics. 2003 Jun;30(6):1134-41.
[36] Mohamed Athiq (2021). Frequency domain filtering for grayscale images (https://www.mathworks.com/matlabcentral/fileexchange/40579-frequencydomain-filtering-for-grayscale-images), MATLAB Central File Exchange. Retrieved November 1, 2021.
[37] Gonzales RC, Woods RE. Digital Image Processing. Reading, MA: Addison-Wesley.
[38] Gonzales R, Woods R, Eddins S. Digital Image Processing Using MATLAB. 2nd Edition, Prentice Hall, USA, 2002.
[39] Sayed IS, Ismail SS. Comparison of low-pass filters for SPECT imaging. International Journal of Biomedical Imaging. 2020 Apr 1;2020.
[40] Seeram E, Seeram D. Image postprocessing in digital radiology - a primer for technologists. Journal of Medical Imaging and Radiation Sciences. 2008 Mar 1;39(1):23-41.
[41] Dogra A, Bhalla P. Image sharpening by Gaussian and Butterworth high pass filter. Biomedical and Pharmacology Journal. 2014 Nov;7(2):707-13.
[42] Rajab MI, Eskandar AA. Enhancement of radiographic images in patients with lung nodules. Thoracic Cancer. 2011 Aug;2(3):109-15.
[43] Lalotra B, Vig R, Budhiraja S. Multimodal medical image fusion using Butterworth high pass filter and cross bilateral filter. MATEC Web of Conferences 2016 (Vol. 57, p. 01021). EDP Sciences.
[44] Raphiphan Y, Wattanakaroon S, Khetkeeree S. Adaptive high boost filtering for increasing grayscale and color image details. In 2020 International Conference on Power, Energy and Innovations (ICPEI), 2020 Oct 14 (pp. 69-72). IEEE.
[45] Oszust M, Piórkowski A, Obuchowicz R. No-reference image quality assessment of magnetic resonance images with high-boost filtering and local features. Magnetic Resonance in Medicine. 2020 Sep;84(3):1648-60.
[46] Satapathy A, Livingston LJ. Optimized OpenCL kernels for frequency domain image high-boost filters using image vectorization technique. SN Applied Sciences. 2019 Nov;1(11):1-20.
[47] Zawaideh FH, Yousef QM, Zawaideh FH. Comparison between Butterworth and Gaussian high-pass filters using an enhanced method. International Journal of Computer Science and Network Security. 2017 Jul 30;17(7):113-7.
[48] Kumar SN, Muthukumar S, Kumar H, Varghese P. A voyage on medical image segmentation algorithms. Biomedical Research (0970-938X). 2018 Nov 18.
[49] Kumar SN, Fred AL, Varghese PS. An overview of segmentation algorithms for the analysis of anomalies on medical images. Journal of Intelligent Systems. 2020 Jan 1;29(1):612-25.
[50] Bourouis S, Alroobaea R, Rubaiee S, Ahmed A. Toward effective medical image analysis using hybrid approaches - review, challenges and applications. Information. 2020 Mar;11(3):155.
[51] Pham DL, Xu C, Prince JL. Current methods in medical image segmentation. Annual Review of Biomedical Engineering. 2000 Aug;2(1):315-37.
[52] Gunawan TS, Yaacob IZ, Kartiwi M, Ismail N, Za'bah NF, Mansor H. Artificial neural network based fast edge detection algorithm for MRI medical images. Indonesian Journal of Electrical Engineering and Computer Science. 2017 Jul;7(1):123-30.
[53] Kumar SN, Fred AL, Kumar AH, Padmanabhan P, Gulyas B. Multilevel thresholding using crow search optimization for medical images. In Computational Intelligence, 2020 Aug 10 (pp. 231-258). De Gruyter.
[54] Singh TR, Roy S, Singh OI, Sinam T, Singh K. A new local adaptive thresholding technique in binarization. arXiv preprint arXiv:1201.5227. 2012 Jan 25.
[55] Guanglei Xiong. Local Adaptive Thresholding (https://www.mathworks.com/matlabcentral/fileexchange/8647-local-adaptive-thresholding), MATLAB Central File Exchange. Retrieved November 1, 2021.
[56] Ahilan A, Manogaran G, Raja C, Kadry S, Kumar SN, Kumar CA, Jarin T, Krishnamoorthy S, Kumar PM, Babu GC, Murugan NS. Segmentation by fractional order Darwinian particle swarm optimization based multilevel thresholding and improved lossless prediction based compression algorithm for medical images. IEEE Access. 2019 Jul 15;7:89570-80.
[57] Al-Faris AQ, Ngah UK, Isa NA, Shuaib IL. Breast MRI tumour segmentation using modified automatic seeded region growing based on particle swarm optimization image clustering. In Soft Computing in Industrial Applications, 2014 (pp. 49-60). Springer, Cham.
[58] Law TY, Heng P. Automated extraction of bronchus from 3D CT images of lung based on genetic algorithm and 3D region growing. In Medical Imaging 2000: Image Processing, 2000 Jun 6 (Vol. 3979, pp. 906-916). International Society for Optics and Photonics.
[59] Al-Faris AQ, Ngah UK, Isa NA, Shuaib IL. Computer-aided segmentation system for breast MRI tumour using modified automatic seeded region growing (BMRI-MASRG). Journal of Digital Imaging. 2014 Feb;27:133-44.
[60] Doronicheva AV, Socolov AA, Savin SZ. Using Sobel operator for automatic edge detection in medical images. Journal of Mathematics and System Science. 2014 Apr 1;4(4).

References [61]

[62] [63]

[64]

[65] [66]

[67]

[68]

[69] [70]

[71] [72] [73] [74]

[75]

167

Gudmundsson M, El-Kwae EA, Kabuka MR. Edge detection in medical images using a genetic algorithm. IEEE transactions on medical imaging. 1998 Jun; 17(3):469-74. Kornilov AS, Safonov IV. An overview of watershed algorithm implementations in open source libraries. Journal of Imaging. 2018 Oct;4(10):123. Ng HP, Huang S, Ong SH, Foong KW, Goh PS, Nowinski WL. Medical image segmentation using watershed segmentation with texture-based region merging. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2008 Aug 20 (pp. 4039-4042). IEEE. Kumar SN, Fred AL, Kumar HA, Varghese PS. Nonlinear tensor diffusion filter based marker-controlled watershed segmentation for CT/MR images. Proceedings of International Conference on Computational Intelligence and Data Engineering 2018 (pp. 317-331). Springer, Singapore. Christ MJ, Parvathi RM. Segmentation of medical image using clustering and watershed algorithms. American Journal of Applied Sciences. 2011; 8(12):1349. Fred AL, Kumar S, Padmanaban P, Gulyas B, Kumar HA. Fuzzy-crow search optimization for medical image segmentation. Applications of Hybrid Metaheuristic Algorithms for Image Processing. 2020 Mar 27:413-39. Sekhar VG, Kumar SN, Fred L, Varghese S. An improved color segmentation algorithm for the analysis of liver anomalies in CT/PET images. In 2016 IEEE International Conference on Engineering and Technology (ICETECH) 2016 Mar 17 (pp. 1151-1154). IEEE. Kumar SN, Ahilan A, Fred AL, Kumar HA. ROI extraction in CT lung images of COVID-19 using Fast Fuzzy C means clustering. In Biomedical Engineering Tools for Management for Patients with COVID-19 2021 Jan 1 (pp. 103-119). Academic Press. Kaur P, Chaira T. A novel fuzzy approach for segmenting medical images. Soft Computing. 2021 Mar;25(5):3565-75. Kumar SN, Fred AL, Kumar AH, Varghese S. Medical image edge detection using Gauss Gradient operator. Journal of Pharmaceutical Sciences and Research. 2017 May 1;9(5):695. Wavelet Analysis. Wolfram Documentation Center. 2016. Web. 1 Apr. 2016. https://reference.wolfram.com/language/guide/Wavelets.html. Salomon, David. A Guide to Data Compression Methods. Springer Professional Computing. New York: Springer-Verlag New York, 1 Feb. 2002. Print. Bhavana V, Krishnappa HK. Multi-modality medical image fusion using discrete wavelet transform. Procedia Computer Science. 2015 Jan 1; 70:625-31. Remigius, Onyshczak and Abdou Youssef. “Fingerprint Image Compression and the Wavelet Scalar Quantization Specification,” a book chapter in Fingerprint Imaging Technologies. Springer- Verlag, 2004, pp. 385- 413. https://www.seas. gwu.edu/ayoussef/papers/FingerPrintWSQ- chapter.pdf. Machado, DP, Leonard, A, Starck JL, Abdalla FB, & Jouvel, S. Darth Fader: Using Wavelets to Obtain Accurate Redshifts of Spectra at Very Low Signal-to-Noise. Astronomy and Astrophysics 560. (16 Dec. 2013): 20. Web. 27 Feb. 2016. http://www.aanda.org/articles/aa/pdf/2013/12/aa19857-12.pdf.

168 [76] [77]

[78]

[79]

[80]

[81]

[82]

[83] [84]

[85]

[86]

[87]

[88]

[89] [90]

References Tamboli SS, and VR. Udupi. Image Compression Using Haar Wavelet Transform. Tian DZ, Ha MH. Applications of wavelet transform in medical image processing. In Proceedings of 2004 international conference on machine learning and cybernetics (IEEE Cat. No. 04EX826) 2004 Aug 26 (Vol. 3, pp. 1816-1821). IEEE. Jin Y, Angelini E, Laine A. Wavelets in medical image processing: denoising, segmentation, and registration. Handbook of biomedical image analysis 2005 (pp. 305-358). Springer, Boston, MA. Khan S, Nazir S, Hussain A, Ali A, Ullah A. An efficient JPEG image compression based on Haar wavelet transform, discrete cosine transform, and run-length encoding techniques for advanced manufacturing processes. Measurement and Control. 2019 Nov;52(9-10):1532-44. Alkinani MH, Zanaty EA, Ibrahim SM. Medical Image Compression Based on Wavelets with Particle Swarm Optimization. CMC-Computers Materials & Continua. 2021 Jan 1; 67(2):1577-93. Hui Li, Manjunath BS and Mitra SK. “Multi-sensor image fusion using the wavelet transform,” Proceedings of 1st International Conference on Image Processing, Austin, TX, USA, 1994, pp. 51-55 vol.1, doi: 10.1109/ICIP.1994.413273. Saffor A, Bin Ramli AR, Ng KH. Wavelet-based compression of medical images: filter-bank selection and evaluation. Australasian Physics & Engineering Sciences in Medicine. 2003 Jun;26(2):38-43. Li S, Kang X, Hu J. Image fusion with guided filtering. IEEE Transactions on Image processing. 2013 Jan 30;22(7):2864-75. Nan Lin (2021). Wavelet Transform & Guided Filtering Based Image Fusion (https://github.com/sentient-codebot/WaveletImgFusion/releases/tag/v1.1.0), GitHub. Retrieved November 9, 2021. Abbas Hussien Miry (2021), Image Denoising based wavelet transform, https://www.mathworks.com/matlabcentral/fileexchange/57493-image-denoisingbased-wavelet-transform), MATLAB Central File Exchange. Retrieved November 9, 2021. Serwan Bamerni (2021). Image Denoising Based on Stationary Wavelet Transform (https://www.mathworks.com/matlabcentral/fileexchange/55696-imagedenoising-based-on-stationary-wavelet-transform), MATLAB Central File Exchange. Retrieved November 9, 2021. Prince garg, Image Watermarking_using singular value decomposition and discrete wavelet transform, https://www.mathworks.com/matlabcentral/fileex change/62728-image_watermarking_using-singular-value-decomposition-anddiscrete-wavelet-transform ), MATLAB Central File Exchange.), https://www.mathworks.com/matlabcentral/fileexchange/57392-haar-waveletimage-compression), MATLAB Central File Exchange. Chathura (2021). Retrieved November 9, 2021. https://people.math.osu.edu/husen.1/teaching/572/image_comp.pdf. Mohammed Siddeq (2021). Walsh and Wavelet Transform for Color/Gray Image Compression (https://www.mathworks.com/matlabcentral/fileexchange/36335walsh-and-wavelet-transform-for-color-gray-image-compression), MATLAB Central File Exchange. Retrieved November 9, 2021.

References [91]

[92]

[93] [94]

[95]

[96]

[97]

[98]

[99]

[100]

[101]

[102]

[103]

[104]

169

Color Image Compression / Decompression by using Hybrid Wavelet Transform and Cosine transform (https://www.mathworks.com/matlabcentral/fileexchange/ 34026-color-image-compression-decompression-by-using-hybrid-wavelet-trans form-and-cosine-transform), MATLAB Central File Exchange. Mohammed Siddeq (2021). Manimekalai MA, Vasanthi NA. Enhanced Lempel-Ziv-Welch Based Medical Image Compression Using Optimization Methods. Journal of Medical Imaging and Health Informatics. 2019 Jan 1;9(1):32-41. Venugopal D, Mohan S, Raja S. An efficient block based lossless compression of medical images. Optik. 2016 Jan 1;127(2):754-8. Weinlich A, Rehm J, Amon P, Hutter A, Kaup A. Massively parallel lossless compression of medical images using least-squares prediction and arithmetic coding. In 2013 IEEE International Conference on Image Processing 2013 Sep 15 (pp. 1680-1684). IEEE. Erickson BJ, Manduca A, Palisson P, Persons KR, Earnest 4th F, Savcenko V, Hangiandreou NJ. Wavelet compression of medical images. Radiology. 1998 Mar; 206(3):599-607. Anitha J, Sophia PE, de Albuquerque VH. Performance enhanced ripplet transform based compression method for medical images. Measurement. 2019 Oct 1;144:203-13. Cyriac M, Chellamuthu C. An object-based lossless compression approach for medical images using DPCM. International Journal of Bioinformatics Research and Applications. 2016;12(1):59-71. Hosseini SM, Naghsh-Nilchi AR. Medical ultrasound image compression using contextual vector quantization. Computers in biology and medicine. 2012 Jul 1;42(7):743-50. Kumar SN, Fred AL, Varghese PS. Compression of CT images using contextual vector quantization with simulated annealing for telemedicine application. Journal of medical systems. 2018 Nov;42(11):1-3. Pourasad Y, Cavallaro F. A Novel Image Processing Approach to Enhancement and Compression of X-ray Images. International Journal of Environmental Research and Public Health. 2021 Jan; 18(13):6724. Kasban H, Salama DH. A robust medical image retrieval system based on wavelet optimization and adaptive block truncation coding. Multimedia Tools and Applications. 2019 Dec; 78(24):35211-36. Rufai AM, Anbarjafari G, Demirel H. Lossy image compression using singular value decomposition and wavelet difference reduction. Digital Signal Processing. 2014 Jan 1;24:117-23. Rufai AM, Anbarjafari G, Demirel H. Lossy medical image compression using Huffman coding and singular value decomposition. In 2013 21st Signal Processing and Communications Applications Conference (SIU) 2013 Apr 24 (pp. 1-4). IEEE. Kumar R, Patbhaje U, Kumar A. An efficient technique for image compression and quality retrieval using matrix completion. Journal of King Saud UniversityComputer and Information Sciences. 2019 Aug 9.

170 [105]

[106] [107]

[108] [109]

[110]

[111]

[112]

[113]

[114]

[115]

[116]

[117]

[118]

[119]

References Athi, Image Quality Measures (https://www.mathworks.com/matlabcentral/file exchange/25005-image-quality-measures), MATLAB Central File Exchange. Retrieved January 28, 2023 Chen, B. Ma, and Zhang K. “On the similarity metric and the distance metric,” Theoretical Computer Science, vol. 410, no. 24–25, 2009, pp. 2365–2376. Chumning, G. Huadong, and W. Changlin, “Edge preservation evaluation of digital speckle filters,” in IEEE International Geoscience and Remote Sensing Symposium, 2002, vol. 4, pp. 2471–2473. Zhang Y, Huang D, Ji M, Xie F. Image segmentation using PSO and PCM with Mahalanobis distance. Expert systems with applications. 2011 Jul 1;38(7):9036-40. Sinthanayothin C, Boyce JF, Williamson TH, Cook HL, Mensah E, Lal S, Usher D. Automated detection of diabetic retinopathy on digital fundus images. Diabetic medicine. 2002 Feb;19(2):105-12. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing. 2004 Apr 13;13(4):600-12. Hamamci, N. Kucuk, K. Karaman, K. Engin, and G. Unal, “Tumor-cut: segmentation of brain tumors on contrast enhanced MR images for radiosurgery applications,” IEEE transactions on medical imaging, vol. 31, no. 3, pp. 790–804, 2011. Coelho LP, Shariff A, Murphy RF. Nuclear segmentation in microscope cell images: a hand-segmented dataset and comparison of algorithms. In 2009 IEEE international symposium on biomedical imaging: from nano to macro 2009 Jun 28 (pp. 518-521). IEEE. Mrak S Grgic, and Grgic M. “Picture quality measures in image compression systems,” in The IEEE Region 8 EUROCON 2003. Computer as a Tool., 2003, vol. 1, pp. 233–236. Xess M, Agnes SA. Analysis of image segmentation methods based on performance evaluation parameters. Int. J. Comput. Eng. Res. 2014 Mar;4(3):6875. Zhang H, Fritts JE, Goldman SA. Entropy-based objective evaluation method for image segmentation. In Storage and Retrieval Methods and Applications for Multimedia 2004 2003 Dec 18 (Vol. 5307, pp. 38-49). SPIE. Lee, RT Smith, and Laine AF. “Interactive segmentation for geographic atrophy in retinal fundus images,” in 2008 42nd Asilomar Conference on Signals, Systems and Computers, 2008, pp. 655–658. Karch P, Zolotova I. An experimental comparison of modern methods of segmentation. In 2010 IEEE 8th International Symposium on Applied Machine Intelligence and Informatics (SAMI) 2010 Jan 28 (pp. 247-252). IEEE. Olivier C Mocquillon, Rousselle JJ, Boné R, and Cardot H. “A supervised texturebased active contour model with linear programming,” in 2008 15th IEEE International Conference on Image Processing, 2008, pp. 1104–1107. Thakur A, Anand RS. A local statistics based region growing segmentation method for ultrasound medical images. International Journal of Medical and Health Sciences. 2007 Oct 25;1(10):564-9.

References [120]

[121] [122]

[123]

[124] [125]

[126] [127]

[128]

[129]

[130] [131] [132] [133]

[134]

[135]

171

Malladi JA. Sethian, and Vemuri BC. “Shape modeling with front propagation: A level set approach,” IEEE transactions on pattern analysis and machine intelligence, vol. 17, no. 2, pp. 158–175, 1995. Belaid LJ, Mourou W. Image segmentation: a watershed transformation algorithm. Image Analysis & Stereology. 2009;28(2):93-102. Sankpal PR, Vijaya PA. Image encryption using chaotic maps: a survey. In 2014 fifth international conference on signal and image processing 2014 Jan 8 (pp. 102107). IEEE. Ratna AA, Surya FT, Husna D, Purnama IK, Nurtanio I, Hidayati AN, Purnomo MH, Nugroho SM, Rachmadi RF. Chaos-based image encryption using Arnold’s cat map confusion and Henon map diffusion. Adv. Sci. Technol. Eng. Syst. 2021;6(1):316-26. Suresh GB, Mathivanan V. Chaos Based Image Encryption. Indonesian Journal of Electrical Engineering and Computer Science. 2018 Jan;9(1):97-100.. Mekki N, Hamdi M, Aguili T, Kim TH. A real-time chaotic encryption for multimedia data and application to secure surveillance framework for IoT system. In 2018 International Conference on Advanced Communication Technologies and Networking (CommNet) 2018 Apr 2 (pp. 1-10). IEEE. Craig J, Petterson V. Introduction to the practice of telemedicine. Journal of telemedicine and telecare. 2005 Jan;11(1):3-9. Hersh WR, Hickam DH, Severance SM, Dana TL, Krages KP, Helfand M. Diagnosis, access and outcomes: Update of a systematic review of telemedicine services. Journal of telemedicine and telecare. 2006 Sep;12(2_suppl):3-1. Ohannessian R, Duong TA, Odone A. Global telemedicine implementation and integration within health systems to fight the COVID-19 pandemic: a call to action. JMIR public health and surveillance. 2020 Apr 2;6(2):e18810. Mann DM, Chen J, Chunara R, Testa PA, Nov O. COVID-19 transforms health care through telemedicine: evidence from the field. Journal of the American Medical Informatics Association. 2020 Jul;27(7):1132-5. Du J, Li W, Lu K, Xiao B. An overview of multi-modal medical image fusion. Neurocomputing. 2016 Nov 26; 215:3-20. Dogra, B Goyal, and Agrawal S. “Medical Image Fusion: A Brief Introduction,” vol. 11, no. September, pp. 1209–1214, 2018. Indira KP. Image fusion for pet CT images using average maximum and average contrast rules. vol. 2015;10:673-80. Polinati S, Bavirisetti DP, Rajesh KN, Naik GR, Dhuli R. The Fusion of MRI and CT Medical Images Using Variational Mode Decomposition. Applied Sciences. 2021 Jan;11(22):10975. Huang B, Yang F, Yin M, Mo X, Zhong C. A review of multimodal medical image fusion techniques. Computational and mathematical methods in medicine. 2020 Apr 23;2020. Vijayarajan R, Muttan S. Discrete wavelet transform based principal component averaging fusion for medical images. AEU-International Journal of Electronics and Communications. 2015 Jun 1;69(6):896-902.

172 [136]

[137]

[138]

[139]

[140]

[141] [142]

[143]

[144]

[145] [146]

[147]

[148] [149] [150]

[151]

References Vijayarajan R. (2021). DWT based Principal Component Averaging Fusion (DWTPCAv) (https://www.mathworks.com/matlabcentral/fileexchange/60774dwt-based-principal-component-averaging-fusion-dwtpcav), MATLAB Central File Exchange. Retrieved November 9, 2021. Chavan S, Pawar A, Talbar S. Multimodality medical image fusion using rotated wavelet transform. Advances in Intelligent Systems Research. 2017 Jan; 137:62735. Zhancheng Z, Xiaoqing L, Mengyu X, Zhiwen W, Kai L. Medical image fusion based on quaternion wavelet transform. Journal of Algorithms & Computational Technology. 2020 Jun;14:1748302620931297. Pavithra V, Jeyamala C. A survey on the techniques of medical image encryption. In 2018 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC) 2018 Dec 13 (pp. 1-8). IEEE. Hasan MK, Islam S, Sulaiman R, Khan S, Hashim AH, Habib S, Islam M, Alyahya S, Ahmed MM, Kamil S, Hassan MA. Lightweight encryption technique to enhance medical image security on internet of medical things applications. IEEE Access. 2021 Feb 24;9:47731-42. Li S, Zhao L, Yang N. Medical Image Encryption Based on 2D Zigzag Confusion and Dynamic Diffusion. Security and Communication Networks. 2021 May 8;2021. Ahmad J, Khan MA, Ahmed F, Khan JS. A novel image encryption scheme based on orthogonal matrix, skew tent map, and XOR operation. Neural Computing and Applications. 2018 Dec;30:3847-57. Xu SJ, Chen XB, Zhang R, Yang YX, Guo YC. An improved chaotic cryptosystem based on circular bit shift and XOR operations. Physics Letters A. 2012 Feb 20;376(10-11):1003-10. Sheet D, Pal S, Chakraborty A, Chatterjee J, Ray AK. Visual importance pooling for image quality assessment of despeckle filters in optical coherence tomography. In 2010 International Conference on Systems in Medicine and Biology 2010 Dec 16 (pp. 102-107). IEEE. Singer DE, Nathan DM, Fogel HA, Schachat AP. Screening for diabetic retinopathy. Annals of Internal Medicine 1992;116(8):660–71. Cree MJ, Olson JA, McHardy KC, Sharp PF, Forrester JV. The preprocessing of retinal images for the detection of fluorescein leakage. Physics in Medicine and Biology 1999;44(1):293–308. Cree MJ, Gamble E, Cornforth D. Colour normalisation to reduce inter-patient and intrapatient variability in microaneurysm detection in colour retinal images. Proceedings of the APRS Workshop on Digital Image Computing 2005;163:8. Foracchia M, Grisan M, Ruggeri A. Luminosity and contrast normalization in retinal images. Medical Image Analysis 2003; 9(3):179-90. https://www.mathworks.com/matlabcentral/fileexchange/24990-retinal-bloodvessel -Matlab Central File Exchange. Budai Attila, Bock Rüdiger, Maier Andreas, Hornegger Joachim, Michelson Georg. Robust Vessel Segmentation in Fundus Images. International Journal of Biomedical Imaging, vol. 2013, 2013. Bernd Jahne, Digital Image Processing, 6th ed., Springer, 2005.

References [152] [153]

[154]

[155]

173

Nagao M, Matsuyama T. Edge preserving smoothing: Computer Graphics and Image Processing, 9, 394–407, doi: 10.1016/0146-664X (79) 90102-3. Ziou, D and Tabbone, S. “Edge detection techniques an overview,” International Journal of Pattern Recognition and Image Analysis, Vol 8, No 4, 1998, pp 537559. Sheet S Pal, Chakraborty A, Chatterjee J, Ray AK. “Image quality assessment for performance evaluation of despeckle filters in Optical Coherence Tomography of human skin,” 2010 IEEE EMBS Conf. Biomedical Engineering and Sciences (IECBES), pp. 499-504, Nov. 30 2010 - Dec. 2 2010. [http://dx.doi.org/10.1109/ IECBES.2010.5742289] Debdoot Sheet (2022). Homogeneous Mask Area Filter (https://www.mathworks. com/matlabcentral/fileexchange/34219-homogeneous-mask-area-filter), MATLAB Central File Exchange. Retrieved January 21, 2022.

Index

A

algorithm(s), v, vii, viii, 3, 4, 13, 54, 55, 73, 74, 75, 76, 77, 78, 80, 81, 82, 84, 86, 87, 88, 89, 91, 93, 101, 104, 109, 112, 116, 117, 118, 120, 121, 122, 123, 125, 134, 135, 136, 138, 142, 143, 148, 153, 154, 162, 163, 164, 166, 167, 170, 171, 172
average, 35, 36, 38, 39, 40, 42, 43, 44, 45, 46, 49, 53, 75, 76, 126, 134, 137, 145, 147, 164, 171

B

bit plane(s), 31, 32, 33
blood vessel, 3, 9, 28, 153, 155, 156, 162, 164
boost, 60, 67, 68, 69, 70, 71, 165
butterworth, vii, 41, 56, 58, 59, 60, 62, 63, 64, 66, 67, 69, 70, 71, 164, 165, 166

C

clustering, 86, 88, 91, 125, 126, 127, 128, 129, 138, 166, 167
component, 28, 58, 109, 116, 140, 146, 171, 172
compression, v, viii, 5, 32, 93, 98, 99, 100, 101, 103, 104, 105, 107, 109, 112, 113, 114, 115, 116, 117, 134, 135, 136, 138, 166, 167, 168, 169, 170
computed tomography (CT), vii, 1, 3, 4, 5, 8, 11, 46, 53, 64, 76, 82, 87, 104, 109, 145, 146, 147, 162, 163, 164, 166, 167, 169, 171

D

decomposition, viii, 93, 94, 95, 96, 101, 107, 112, 113, 116, 145, 168, 169, 171
detection, vii, 3, 5, 13, 23, 40, 80, 81, 82, 83, 89, 91, 153, 158, 162, 164, 166, 167, 170, 172, 173
display, 4, 13, 15, 18, 31, 33, 40, 60, 97, 106, 108, 111, 114, 150
domain, v, vii, 35, 51, 52, 55, 56, 58, 59, 62, 63, 64, 65, 66, 67, 69, 70, 71, 104, 145, 160, 164, 165

E

encryption, v, viii, 139, 142, 148, 149, 150, 152, 171, 172
enhancement, v, vii, 35, 40, 53, 64, 68, 93, 153, 164, 165, 169
evaluation, v, 117, 118, 120, 121, 122, 123, 124, 125, 168, 170, 173
extraction, 28, 29, 33, 73, 87, 91, 93, 97, 153, 156, 158, 162, 166, 167

F

filter(s), vii, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 55, 56, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 75, 76, 82, 94, 117, 118, 119, 160, 162, 164, 165, 166, 167, 168, 170, 172, 173
filtering, v, vii, 35, 38, 40, 42, 46, 53, 55, 56, 58, 62, 68, 71, 75, 76, 90, 93, 94, 117, 118, 154, 160, 162, 164, 165, 168
Fourier, 55, 56, 57, 60, 165
frequency, v, vii, 35, 40, 55, 56, 58, 59, 62, 63, 64, 65, 66, 67, 69, 70, 71, 165
function(s), 5, 9, 10, 13, 14, 15, 16, 17, 18, 19, 22, 24, 25, 26, 29, 30, 35, 42, 49, 50, 51, 52, 55, 56, 58, 59, 60, 61, 62, 63, 64, 66, 68, 70, 75, 78, 83, 89, 90, 91, 124, 126, 129, 131, 136, 150, 151, 160
fusion, v, viii, 11, 66, 93, 94, 139, 142, 143, 144, 145, 146, 147, 152, 164, 165, 167, 168, 171, 172

G

Gaussian, vii, 36, 37, 49, 50, 51, 52, 53, 55, 58, 60, 61, 62, 64, 65, 67, 68, 69, 71, 89, 90, 91, 164
grayscale, 2, 13, 16, 18, 29, 31, 98, 107, 113, 155, 157, 165
growing, vii, 77, 78, 80, 82, 91, 166, 170

H

Haar, 93, 96, 98, 101, 168
high pass, 40, 41, 42, 53, 58, 59, 64, 65, 66, 67, 71, 119, 164, 165
homogeneous, 160, 162, 173

I

im2bw, 29, 49, 75, 85
imaging, v, vii, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 41, 54, 109, 117, 139, 142, 145, 153, 163, 164, 165, 166, 167, 169, 170, 172
imdivide, 26, 27, 28
imread, 13, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 26, 27, 29, 30, 33, 37, 39, 41, 43, 44, 46, 49, 50, 51, 53, 56, 58, 75, 77, 80, 83, 88, 89, 94, 96, 98, 104, 107, 110, 113, 119, 123, 145, 147, 148, 149, 154, 158, 161
imresize, 16, 21, 23, 25, 27, 30, 75, 77, 96, 123, 149, 154
imrotate, 30, 31
imscale, 30
imshow, 13, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 38, 39, 40, 41, 43, 44, 45, 47, 49, 50, 52, 53, 56, 57, 60, 61, 75, 77, 78, 81, 82, 83, 84, 85, 89, 90, 94, 96, 97, 98, 99, 100, 105, 106, 108, 111, 113, 114, 123, 145, 147, 148, 149, 150, 154, 155, 158, 159, 161
imsubtract, 22, 23, 24
imtool, 15, 16
imultiply, 24
intensity, 9, 10, 18, 35, 48, 49, 77, 79, 143

K

Kirsch, 153, 154, 162
k-means, 86, 88

L

low pass, 35, 49, 52, 58, 59, 61, 62, 63, 64, 71

M

magnetic resonance imaging (MRI), vii, 1, 3, 6, 7, 8, 9, 11, 94, 109, 163, 166, 171
MATLAB, v, vii, viii, 13, 33, 35, 40, 42, 46, 48, 49, 51, 52, 53, 56, 58, 62, 63, 64, 66, 67, 68, 69, 70, 71, 74, 75, 77, 80, 82, 86, 89, 91, 94, 96, 98, 101, 104, 107, 113, 116, 118, 119, 123, 129, 136, 137, 138, 145, 146, 148, 149, 152, 153, 158, 160, 165, 166, 168, 169, 170, 172, 173
median, 46, 47, 51, 52, 53, 75, 76, 119, 164
medical, v, vii, viii, 1, 7, 8, 9, 10, 11, 13, 17, 20, 23, 24, 27, 29, 33, 35, 38, 40, 41, 42, 44, 46, 50, 53, 55, 56, 57, 71, 73, 74, 77, 80, 81, 82, 86, 91, 93, 94, 96, 101, 103, 104, 105, 109, 116, 117, 125, 134, 139, 140, 141, 142, 145, 146, 148, 149, 152, 153, 158, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 179

N

nonlinear, vii, 51, 54, 82, 160, 164, 167

O

operation, 6, 11, 21, 33, 35, 42, 44, 45, 46, 47, 51, 55, 58, 118, 149, 154, 158, 172

P

preprocessing, v, viii, 74, 117, 138, 172
processing, v, vii, viii, 13, 28, 29, 33, 35, 55, 56, 69, 71, 74, 86, 91, 93, 101, 107, 109, 117, 120, 134, 139, 145, 153, 154, 158, 162, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 179

R

region, vii, 2, 35, 44, 45, 73, 75, 77, 78, 79, 80, 82, 91, 109, 118, 121, 122, 166, 167, 170
retinal, 28, 153, 154, 162, 164, 170, 172

S

segmentation, v, vii, viii, 48, 73, 74, 77, 80, 82, 84, 86, 87, 88, 91, 117, 120, 121, 122, 123, 125, 129, 138, 153, 166, 167, 168, 170, 171, 172
singular, 112, 113, 116, 168, 169
subplot, 14, 15, 17, 29, 30, 31, 33, 37, 38, 39, 47, 49, 56, 57, 60, 61, 89, 90, 114, 145

T

telemedicine, 139, 140, 141, 142, 152, 169, 171, 179
thresholding, vii, 29, 33, 74, 75, 76, 91, 166
transform, 5, 55, 56, 57, 58, 59, 60, 61, 64, 68, 70, 82, 83, 93, 94, 98, 101, 103, 104, 107, 145, 146, 165, 168, 169

U

ultrasound, vii, 3, 9, 10, 11, 35, 42, 46, 104, 164, 165, 169, 170
usage, vii, 13, 14, 15, 16, 17, 18, 19, 40

V

validation, viii, 38, 117, 118, 120, 125, 129, 134, 136, 138

W

watermarking, viii, 96, 101, 168
watershed, vii, 82, 83, 84, 85, 86, 87, 91, 167, 171
wavelet transform, v, viii, 56, 93, 94, 96, 97, 98, 101, 104, 107, 112, 146, 147, 152, 167, 168, 169, 171, 172
weighted, 42, 43, 44, 45, 46, 49, 53, 129, 164
Wiener, 52, 53, 54, 165

X

x-ray, vii, 1, 2, 3, 4, 5, 8, 10, 11, 46, 104, 109, 163, 164, 169

About the Authors

Dr. S. N. Kumar
Associate Professor, Department of Electrical and Electronics Engineering
Amal Jyothi College of Engineering, Kanjirappally, Kerala, India

Dr. S. N. Kumar is currently working as an Associate Professor in the Department of Electrical and Electronics Engineering, Amal Jyothi College of Engineering, Kanjirappally, Kerala, India. He obtained his Bachelor's degree in Electrical and Electronics Engineering from Sun College of Engineering and Technology, Nagercoil, Tamil Nadu, and his Master's degree in Applied Electronics from Anna University of Technology, Tirunelveli, Tamil Nadu. He was awarded a PhD by Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, in the Department of Electronics and Communication Engineering; his research formed part of a DST-funded project under the Instrument Development Programme. With 13 years of teaching experience, his areas of interest include Medical Image Processing and Embedded System Applications in Telemedicine. He was the Co-Principal Investigator of the DST IDP-funded project and a research fellow under the RCA scheme at NTU Singapore. He has authored 18 peer-reviewed journal publications and 7 textbooks in engineering disciplines, and has 4 patent publications, 30 book chapters with renowned publishers, and 44 conference publications to his credit.


Dr. S. Suresh
Professor, Department of Computer Science and Engineering
KPR Institute of Engineering and Technology, Coimbatore (Dist.), Tamil Nadu, India
P A College of Engineering and Technology, Coimbatore (Dist.), Tamil Nadu, India

Dr. S. Suresh worked as a Professor in the Department of Computer Science and Engineering, KPR Institute of Engineering and Technology, Avinashi, Coimbatore (Dist.), Tamil Nadu, India. He previously worked at P A College of Engineering and Technology, Coimbatore, and Adhiyamaan College of Engineering, Hosur. He obtained his Bachelor's degree in Computer Science and Engineering from PSG College of Technology, Coimbatore, Tamil Nadu, his Master's degree in Computer Science and Engineering from SKP Engineering College, Thiruvanamalai, and his PhD in Information and Communication Engineering from Anna University, Chennai, India. He has authored more than 25 peer-reviewed journal and national/international conference papers in the fields of IoT and Big Data Analytics, AI & Machine Learning, Server Virtualization & Cloud Computing, and System Modelling and Simulation. He has more than 16 years of teaching experience, has authored the book "C#.Net Programming," and has published three patents. He has worked on several funded projects sanctioned by the central government of India. His areas of interest include IoT and Big Data Analytics, AI & Machine Learning, Server Virtualization & Cloud Computing, and System Modelling and Simulation.