English, 311 pages, 2022
Progress in Optical Science and Photonics
Cheng Liu Shouyu Wang Suhas P. Veetil
Computational Optical Phase Imaging
Progress in Optical Science and Photonics Volume 21
Series Editors Javid Atai, Sydney, NSW, Australia Rongguang Liang, College of Optical Sciences, University of Arizona, Tucson, AZ, USA U. S. Dinish, Institute of Bioengineering and Bioimaging, A*STAR, Singapore, Singapore
The purpose of the series Progress in Optical Science and Photonics is to provide a forum to disseminate the latest research findings in various areas of Optics and its applications. The intended audience are physicists, electrical and electronic engineers, applied mathematicians, biomedical engineers, and advanced graduate students.
More information about this series at https://link.springer.com/bookseries/10091
Cheng Liu Jiangnan University Wuxi, China
Shouyu Wang Jiangnan University Wuxi, China
Suhas P. Veetil Higher Colleges of Technology Dubai, United Arab Emirates
ISSN 2363-5096 ISSN 2363-510X (electronic)
Progress in Optical Science and Photonics
ISBN 978-981-19-1640-3 ISBN 978-981-19-1641-0 (eBook)
https://doi.org/10.1007/978-981-19-1641-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
Over the last several decades, contrast enhancement techniques based on spatial phase distribution have provided a new paradigm for advanced visualization in scientific and clinical applications involving samples that are transparent or have uniform transmittance amplitudes. Recent advances in phase retrieval algorithms and numerical techniques have led to new visualization modes that have evolved from being a stunning visual tool to providing quantitative interpretation of physical parameters based on phase imaging. With computational phase imaging, many physical limitations of conventional imaging can be overcome, resulting in advances in image magnification, spatial resolution, and contrast. By combining the advantages of microscopy, interferometry, holography, and iterative numerical computation, several techniques have been developed to record and restore phase information, including the ptychographic iterative engine, Fourier ptychographic microscopy, and coherent modulation imaging. Artificial intelligence (AI), usually in the form of deep learning, has also been successfully applied to phase imaging, achieving accuracy comparable to that of classical interferometry. Many new applications are now possible in the field of optical phase imaging with the help of advanced numerical algorithms. The purpose of this book is to provide a comprehensive and self-contained introduction to computational phase imaging. The book is divided into six chapters. In the first chapter, we provide a theoretical understanding of optical imaging, spatial resolution, magnification, and optical phase imaging. In the second chapter, we discuss classical qualitative phase contrast techniques, including phase contrast and differential interference contrast microscopy. The third chapter discusses the most popular interferometric techniques.
The fourth chapter gives a detailed discussion of non-interferometric methods, including Shack-Hartmann sensing, the transport of intensity equation, and phase retrieval techniques that iteratively calculate the light phase via gradient-descent-type mathematics, including the G-S algorithm, the Fienup algorithms, and coherent modulation imaging. Using MATLAB® as a simulation tool, this book shows the reader how phase estimation works for the different computational phase imaging modalities, enabling them to gain a deeper understanding of the physical and mathematical principles behind phase estimation. The fifth chapter
presents some typical applications of the above techniques, including the measurement of deformations, observation of biological samples, and diagnosis of laser beam quality. The last chapter discusses some recent developments in computational phase imaging methods. The book provides researchers and professionals with a comprehensive and rigorous overview of the theoretical and applied aspects of computational phase imaging. In recent decades, both computation and instrumentation have substantially improved in the field of computational phase imaging. It was challenging to cover so many developments in six chapters; nevertheless, the authors have done their best to cover key concepts, methods, applications, and recent trends in almost 300,000 words. The book developed largely from papers published by the authors and from lectures and computer programs used in photo-electronic information science and engineering graduate courses at Jiangnan University, Wuxi, China. Students who have completed a course in modern optics or a related subject, such as optoelectronics, optical engineering, photonics, biophotonics, or applied physics, may find this book helpful. By publishing the book, we hope to encourage researchers interested in optical imaging to explore new possibilities.

Wuxi, China
Wuxi, China
Dubai, United Arab Emirates
January 2022
Cheng Liu Shouyu Wang Suhas P. Veetil
Contents

1 Introduction to Computational Phase Imaging
   1.1 Fundamentals of Optical Imaging
   1.2 Limitations of Common Intensity Imaging
   1.3 Intensity Imaging to Phase Imaging
   1.4 Basic Principles of Computational Phase Imaging
   References

2 Qualitative Phase Imaging
   2.1 Phase Contrast Microscopy
   2.2 Differential Interference Contrast (DIC) Microscopy
   2.3 Spectrum Modulation Contrast Imaging
   2.4 Hoffman Modulation Contrast Microscopy
   2.5 Schlieren Photography
   2.6 Comparison of Qualitative Phase Imaging Techniques
   References

3 Interference-Based Quantitative Optical Phase Imaging
   3.1 Description of Holography and Interferometry
   3.2 Classification of Holography and Interferometry
      3.2.1 On-Axis and Off-Axis
      3.2.2 Fresnel and Fourier
      3.2.3 Shearing and Non-shearing
   3.3 Numerical Simulations on Holography and Interferometry
      3.3.1 Numerical Simulation on Gabor Digital Holography
      3.3.2 Numerical Simulation on On-Axis Digital Holography
      3.3.3 Numerical Simulation on Off-Axis Digital Holography
      3.3.4 Numerical Simulation on Interferometry
      3.3.5 Numerical Simulation on Fresnel Digital Holography
      3.3.6 Numerical Simulation on Fourier Digital Holography
      3.3.7 Numerical Simulation on Lateral Shearing Interferometry
      3.3.8 Numerical Simulation on Phase Unwrapping
   3.4 Improvements in Holography and Interferometry
      3.4.1 Improvements in Phase Retrieval Methods
   3.5 Extensions on Holography and Interferometry
      3.5.1 Spatial Light Interference Microscopy (SLIM)
      3.5.2 Quantitative Differential Interference Contrast (DIC) Microscopy
      3.5.3 Quadriwave Lateral Shearing Interferometry
      3.5.4 Optical Scanning Holography (OSH)
      3.5.5 Fresnel Incoherent Correlation Holography (FINCH)
      3.5.6 Coded Aperture Correlation Holography (COACH)
      3.5.7 Computer-Generated Holography (CGH)
   3.6 Summary
   References

4 Non-interferometric Quantitative Optical Phase Imaging
   4.1 Coherent Diffraction Imaging
      4.1.1 G-S Algorithm
      4.1.2 ER and HIO Algorithms of Fienup
      4.1.3 Existence of Convergence in Iterative CDI Algorithms
      4.1.4 Equivalence of ER Algorithm and Steepest-Descent Method
      4.1.5 Ptychographic Iterative Engine
      4.1.6 Fourier Ptychographic Microscopy
      4.1.7 Coherent Modulation Imaging
   4.2 Transport of Intensity Equation Method
      4.2.1 Theory of Transport of Intensity Equation and Its Classical Solution
      4.2.2 Numerical Simulations on TIE Method
      4.2.3 Important Improvements on Phase Retrieval
      4.2.4 Important Improvement on Multi-focal Imaging
      4.2.5 Discussion
   4.3 Shack-Hartmann Wavefront Sensor
      4.3.1 Theory of Hartmann and Shack-Hartmann Wavefront Sensors
      4.3.2 Numerical Simulations on Shack-Hartmann Wavefront Sensor
      4.3.3 Discussion
   4.4 Other Quantitative Computational Optical Phase Imaging Techniques
      4.4.1 Differential Phase Contrast Microscopy
      4.4.2 Pyramid Wavefront Sensing
      4.4.3 Moiré Deflectometry
      4.4.4 Coded Aperture Phase Imaging
      4.4.5 Phase Diversity
   References

5 Typical Applications of Computational Phase Imaging
   5.1 Measurement of Inner Stress and Deformation
      5.1.1 Measurement of 3D Stress Around Laser Induced Damage
      5.1.2 Measurement of Deformation with Digital Holography
   5.2 Applications of CDI in Optical Engineering
      5.2.1 Diagnosing the High Power Laser Beam Online with Coherent Modulation Imaging (CMI)
      5.2.2 Inspection on the Quality of Optical Element with PIE
   5.3 Computational Optical Phase Imaging in Biomedical Imaging
      5.3.1 Computational Optical Phase Microscopy in Static Specimen Observation
      5.3.2 Computational Optical Phase Imaging in Dynamic Specimen Observation
      5.3.3 Computational Optical Phase Imaging in Hybrid Imaging
      5.3.4 Computational Optical Phase Imaging in Extended Applications
      5.3.5 Summary
   5.4 Computational Optical Phase Imaging for Adaptive Optics
      5.4.1 Computational Optical Phase Imaging in Optical Aberration Detection
      5.4.2 Computational Optical Phase Imaging in Deep Imaging Within Complex Scattering Media
      5.4.3 Summary
   5.5 Refocusing and Tracking
      5.5.1 Background
      5.5.2 Refocusing in Optical Phase Imaging for In-Focus Image Reconstruction
      5.5.3 Refocusing in Optical Phase Imaging for Depth of View Extension
      5.5.4 Refocusing in Optical Phase Imaging for Three-Dimensional Particle Tracking
      5.5.5 Summary
   5.6 Three-Dimensional (3D) Computational Phase Imaging
      5.6.1 Classical Optical Diffraction Tomography
      5.6.2 3D Imaging with Curved Illumination
      5.6.3 3D Imaging with K-Domain Transform
      5.6.4 3D Imaging with Ptychography
   5.7 Summary
   References

6 Recent Trends in Computational Optical Phase Imaging
   6.1 Deep Learning in Computational Optical Phase Imaging
      6.1.1 Deep Learning Used in Phase Retrieval
      6.1.2 Deep Learning Used in Computational Optical Phase Imaging Applications
      6.1.3 Discussions
   6.2 Point-of-Care Computational Optical Phase Imaging
      6.2.1 Point-of-Care Digital Holographic Microscopy
      6.2.2 Point-of-Care Ptychographic Microscopy
      6.2.3 Point-of-Care Transport of Intensity Phase Microscopy
      6.2.4 Point-of-Care Differential Phase Contrast Microscopy
      6.2.5 Discussions
   References
Chapter 1
Introduction to Computational Phase Imaging
Smooth surfaces of metal, glass, and water can all create a virtual image by changing the direction of light according to the principles of geometric optics [1]. Mirrors thus became the oldest imaging devices and are widely used in daily life, although the image produced cannot be recorded directly and can only be observed by the naked eye. A tiny pinhole is the simplest device that produces a recordable image, which is the geometric projection of the object being observed; its spatial resolution is inversely proportional to the size of the pinhole [2]. To obtain images with sufficiently high resolution, the diameter of the pinhole must be very small, which greatly reduces the energy of the received light. Pinhole methods are therefore only suitable for imaging extremely bright objects such as candle flames and electric sparks, and have limited applications in both scientific research and industry. Our understanding of how we see things around us, together with our ability to make good lenses, led to the development of optical imaging technology [3]. The human eye is a sophisticated optical imaging system, as shown in Fig. 1.1. When we look at something, the eye's lens projects the intensity distribution of the light reaching it onto the retina, which generates a bioelectronic signal proportional to the intensity and transmits it to the brain to create the sensation of "seeing" [4]. Practical optical imaging techniques essentially mimic this biological process. Glass and other transparent materials are used to create lenses that capture light and produce accurate reproductions of the intensity distributions of objects [5]. A light-sensitive film or electronic device is used to record the intensity distribution [6]. All imaging systems, such as giant astronomical telescopes, optical microscopes, transmission electron microscopes, and household cameras, operate on the same principle.
The only difference is the spatial resolution, which is determined by the wavelength and the numerical aperture of the optical systems [7].
Fig. 1.1 Imaging principle of human eye
1.1 Fundamentals of Optical Imaging

Image formation by a convex lens of focal length $f$ is shown in Fig. 1.2. The object O is at a distance $S_0$ from the lens and has a height $h_0$. The axial and vertical positions of the image I are determined by the well-known Gaussian formulas as [8]

$$\frac{1}{S_I} + \frac{1}{S_0} = \frac{1}{f}, \qquad \frac{h_I}{h_0} = \frac{S_I}{S_0} \tag{1.1}$$

Fig. 1.2 Basic principle of imaging with a lens

These positions are determined by the principles of geometric optics. The transverse magnification of such an imaging system can be defined as $M = S_I/S_0$, which gives an axial magnification of $M^2$ [8]. Thus for $M > 1$, the image obtained will be "fat" in the axial direction, as shown in Fig. 1.2. Since light is an electromagnetic wave, the point object O radiates divergent spherical wavefronts, as shown in Fig. 1.2. The function of a lens is to transform the divergent spherical wave into a converging one. The divergent light wave reaching the lens plane can be written as

$$U(y) = \frac{1}{\sqrt{S_0^2 + (y - h_0)^2}}\, e^{i 2\pi k \sqrt{S_0^2 + (y - h_0)^2}} \tag{1.2}$$
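The thin-lens relations in Eq. (1.1) are easy to check with a few lines of code. A minimal sketch follows (in Python/NumPy style here for brevity, though the book's own simulations use MATLAB; all numerical values are illustrative only):

```python
# Thin-lens imaging per Eq. (1.1): 1/S_I + 1/S_0 = 1/f and h_I/h_0 = S_I/S_0.
f = 50.0    # focal length (mm), illustrative value
S0 = 75.0   # object distance (mm)
h0 = 2.0    # object height (mm)

SI = 1.0 / (1.0 / f - 1.0 / S0)   # image distance from the Gaussian formula
M = SI / S0                       # transverse magnification M = S_I/S_0
hI = M * h0                       # image height
M_axial = M ** 2                  # axial magnification is M^2

# The image forms at 150 mm with M = 2; since M^2 = 4 > M, the image is
# stretched ("fat") in the axial direction, as the text describes.
print(round(SI, 6), round(M, 6), round(hI, 6), round(M_axial, 6))
```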
Under the paraxial approximation, $U(y)$ can be written as

$$U(y) = \frac{1}{S_0}\, e^{i 2\pi k \left[ S_0 + \frac{(y - h_0)^2}{2 S_0} \right]} \tag{1.3}$$
By neglecting the third- and higher-order terms of the Taylor series of $\sqrt{S_0^2 + (y - h_0)^2}$, the convergent wave leaving the lens plane can be simplified as

$$U'(y) = \frac{1}{S_0}\, e^{-i 2\pi k \left[ S_I + \frac{(y + h_I)^2}{2 S_I} \right]} \tag{1.4}$$

The transmission function of the lens is calculated as

$$t(y) = \frac{U'(y)}{U(y)} = e^{-i 2\pi k \left[ S_I + \frac{(y + h_I)^2}{2 S_I} + S_0 + \frac{(y - h_0)^2}{2 S_0} \right]} \tag{1.5}$$

By neglecting the constant phase terms and using $\frac{1}{S_I} + \frac{1}{S_0} = \frac{1}{f}$ and $\frac{h_I}{S_I} = \frac{h_0}{S_0}$,

$$t(y) = e^{-i 2\pi k \frac{y^2}{2} \left( \frac{1}{S_I} + \frac{1}{S_0} \right)} = e^{-i \pi k \frac{y^2}{f}} \tag{1.6}$$
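The collapse of the two quadratic phase factors in Eq. (1.5) into the single lens phase of Eq. (1.6) can be verified numerically: the linear terms in $y$ cancel through $h_I/S_I = h_0/S_0$. A sketch in Python (the book uses MATLAB; all values below are illustrative assumptions):

```python
import numpy as np

lam = 0.5e-3                      # wavelength (mm), illustrative
k = 1.0 / lam                     # wavenumber in cycles/mm, matching the exp(i*2*pi*k*...) convention
f, S0, h0 = 50.0, 75.0, 2.0       # focal length, object distance, object height (mm)
SI = 1.0 / (1.0 / f - 1.0 / S0)   # image distance from Eq. (1.1)
hI = (SI / S0) * h0               # image height from Eq. (1.1): h_I/S_I = h_0/S_0

y = np.linspace(-1.0, 1.0, 201)   # coordinate across the lens plane (mm)

# y-dependent part of the Eq. (1.5) exponent (constant terms collected separately):
phase_15 = -2 * np.pi * k * ((y + hI) ** 2 / (2 * SI) + (y - h0) ** 2 / (2 * S0))
const = -2 * np.pi * k * (hI ** 2 / (2 * SI) + h0 ** 2 / (2 * S0))

# Eq. (1.6): what remains is just the quadratic phase of a lens of focal length f.
phase_16 = -np.pi * k * y ** 2 / f

print(np.allclose(phase_15 - const, phase_16))   # True: the linear terms cancel exactly
```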
The light leaving a point object O radiates into the full solid angle of $4\pi$, and only a small amount of light falling within a cone of angle $\theta_0$ is collected by the lens. It is then transformed into convergent light that falls within a cone of angle $\frac{S_0}{S_I}\theta_0$. For simplicity, and without losing generality, for a point object on the optical axis the complex amplitude of light on the image plane can be expressed as

$$h_c(y) = \int_{-k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)}^{k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)} e^{i 2\pi y k \sin\alpha}\, \mathrm{d}(k \sin\alpha) \tag{1.7}$$

Equation (1.7) is essentially the Fourier transform of the rectangular function $\mathrm{rect}\!\left[\frac{k_y}{2 k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)}\right]$, and thus

$$h_c(y) = 2 k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)\, \mathrm{sinc}\!\left[2 k \sin\left(\frac{S_0}{2 S_I}\theta_0\right) y\right] \tag{1.8}$$
$h_c(y)$ is plotted as a solid curve in Fig. 1.3a, whose first zero emerges at $y = \left[2 k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$. Thus the transverse size of the image of a point object becomes $\left[k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$. Hence any two point objects that are separated on the object plane by a distance smaller than $\left[2 M k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$ cannot be distinguished.
Fig. 1.3 Spatial resolutions in a transverse and b axial directions
$\left[2 M k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$ essentially corresponds to the so-called spatial resolution of the optical imaging system. An increase in spatial resolution requires a higher value of $2 k \sin\left(\frac{S_0}{2 S_I}\theta_0\right)$, which can be achieved by using a shorter wavelength to obtain a larger $k$ and by using a bigger lens to increase $\theta_0$. The maximum value of $\sin\left(\frac{S_0}{2 S_I}\theta_0\right)$ is 1.0; this limiting case is shown as a dotted curve in Fig. 1.3a, where the first zero emerges at $y = 0.5\lambda$. This explains why the highest transverse resolution of an optical imaging system is half of the wavelength used. $h_c(y)$ in Eq. (1.8) is the coherent point spread function of the imaging system. For an extended object $O(y)$, each point on the object at a vertical distance $y_0$ from the optical axis will generate an image with amplitude $h_c(y - M y_0)$, and the overall amplitude on the image plane becomes

$$U(y) = \int O(y_0)\, h_c(y - M y_0)\, \mathrm{d}y_0 = \frac{1}{M}\, O\!\left(\frac{y}{M}\right) \otimes h_c(y) \tag{1.9}$$
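The convolution picture of Eqs. (1.8)–(1.9) can be illustrated directly: imaging two point objects through the sinc-shaped coherent PSF shows the two-point resolution limit. A sketch in Python (the book's simulations use MATLAB; unit magnification and the aperture value below are assumptions for illustration):

```python
import numpy as np

lam = 0.5                        # wavelength (um)
k = 1.0 / lam                    # wavenumber in cycles/um
s = 0.25                         # stands for sin(S0*theta0/(2*SI)), an assumed aperture value
y = np.linspace(-5, 5, 2001)     # image-plane coordinate (um), M = 1 for simplicity
dy = y[1] - y[0]

# Coherent PSF from Eq. (1.8); np.sinc(x) = sin(pi*x)/(pi*x), first zero at y = 1/(2*k*s) = 1 um
hc = 2 * k * s * np.sinc(2 * k * s * y)

def two_point_image(sep):
    """Image intensity of two point objects separated by sep, via the convolution of Eq. (1.9)."""
    O = np.zeros_like(y)
    O[np.argmin(np.abs(y + sep / 2))] = 1.0
    O[np.argmin(np.abs(y - sep / 2))] = 1.0
    U = np.convolve(O, hc, mode="same") * dy
    return np.abs(U) ** 2

I_far = two_point_image(3.0)     # separation well above 1/(2*k*s): resolved
I_near = two_point_image(0.2)    # separation well below the limit: a single merged spot
mid = I_far[len(y) // 2]
print(mid < 0.5 * I_far.max())             # True: clear dip between the two peaks
print(np.argmax(I_near) == len(y) // 2)    # True: the unresolved pair peaks at its center
```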
Equation (1.9) means that the obtained image amplitude is the convolution of the ideal geometrical image with the point spread function of the optical system. With the same method, the light field along the optical axis can be written as

$$U_I(z) = 2 \int_{k \cos\left(\frac{S_0}{2 S_I}\theta_0\right)}^{k} e^{i 2\pi z k \cos\alpha}\, \mathrm{d}(k \cos\alpha) \tag{1.10}$$

Equation (1.10) is the Fourier transform of $\mathrm{rect}\!\left[\frac{k_z - 0.5 k \left(1 + \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right)}{k \left(1 - \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right)}\right]$, so $U_I(z)$ can be rewritten as $2 e^{i \pi k z \left(1 + \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right)}\, \mathrm{sinc}\!\left[k z \left(1 - \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right)\right]$. The amplitude of $U_I(z)$ is shown in Fig. 1.3b as a solid curve, whose first zero emerges at $z = \lambda \left[1 - \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$; $\lambda \left[1 - \cos\left(\frac{S_0}{2 S_I}\theta_0\right)\right]^{-1}$ is therefore the axial resolution of the imaging system. Since light should propagate along the optical axis, $\cos\left(\frac{S_0}{2 S_I}\theta_0\right)$ cannot take negative values. When $\cos\left(\frac{S_0}{2 S_I}\theta_0\right)$ is equal to zero, the first zero of $U_I(z)$ emerges at $z = \lambda$, as shown in Fig. 1.3b with a dotted line, and thus the highest axial resolution reachable for any imaging system is no better than the wavelength used. From the above, it is clear that the position of the image of a point object is determined by the geometric parameters of the optical alignment used, while the spatial resolution of the optical system depends on the wavelength and the acceptance angle of the lens. The ultimate goal of most imaging techniques is to achieve higher spatial resolution and magnification.
1.2 Limitations of Common Intensity Imaging

Textbooks often treat optical imaging as if the sample were two-dimensional; in reality, all practical samples have finite thickness. In other words, all samples are 3D in practice. The imaging of a 3D object with a convex lens is shown schematically in Fig. 1.4. The outermost layers of a 3D object, a giraffe and a flower, are represented as two discrete planes with a separation $d_0$ between them and respective transmission functions $t_1(x, y)$ and $t_2(x, y)$. With parallel illumination, the light transmitted through the giraffe layer $t_1(x, y)$ forms an illumination $U_{illu}(x, y)$ on the flower layer, which is the Fresnel propagation of $t_1(x, y)$ over a distance $d_0$, that is, $U_{illu}(x, y) = \mathrm{Fres}[t_1(x, y), d_0]$. The transmitted light leaving this layer is $U_{trans}(x, y) = \mathrm{Fres}[t_1(x, y), d_0]\, t_2(x, y)$. When $d_0$ is much smaller than $S_0$, the magnification of both layers can be approximated to be the same and assumed to be 1.0. If a detector is placed at the image plane of the flower, it records the intensity $|U_{trans}(x, y)|^2 = |\mathrm{Fres}[t_1(x, y), d_0]\, t_2(x, y)|^2$, which is the intensity image of the flower layer $t_2(x, y)$ multiplied by the diffraction pattern of the giraffe, $\mathrm{Fres}[t_1(x, y), d_0]$. If the detector is moved further by $d$ from the image plane of the flower, the recorded intensity becomes $|\mathrm{Fres}[U_{trans}(x, y), d]|^2$. So the recorded intensity is always a mixture of the two layers, which makes it difficult to identify the structural information contained in each layer separately. A 3D sample consists of many such layers, and the structural information of the whole sample becomes inseparable along the axial direction. This is the reason why thin, layered samples are needed in biological research and disease diagnosis.
Fig. 1.4 Optical imaging of thick sample
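The layer-mixing argument can be reproduced with a one-dimensional Fresnel (angular-spectrum) propagation. A sketch in Python (the book's simulations use MATLAB; the wavelength, separation, and patch sizes below are arbitrary stand-ins for the giraffe and flower layers):

```python
import numpy as np

lam = 0.5e-6                     # wavelength (m)
d0 = 50e-6                       # separation between the two layers (m)
N, dx = 512, 0.5e-6              # grid samples and pitch (m)
x = (np.arange(N) - N // 2) * dx

def fresnel(u, d):
    """Paraxial angular-spectrum propagation of a 1-D field u over distance d."""
    fx = np.fft.fftfreq(N, dx)
    H = np.exp(-1j * np.pi * lam * d * fx ** 2)  # Fresnel transfer function (constant phase dropped)
    return np.fft.ifft(np.fft.fft(u) * H)

t1 = np.where(np.abs(x) < 20e-6, 0.5, 1.0)           # "giraffe" layer: an absorbing patch
t2 = np.where(np.abs(x - 40e-6) < 10e-6, 0.3, 1.0)   # "flower" layer: a second patch

U_illu = fresnel(t1.astype(complex), d0)   # U_illu = Fres[t1, d0]
U_trans = U_illu * t2                      # U_trans = Fres[t1, d0] * t2
I = np.abs(U_trans) ** 2                   # intensity at the image plane of the flower (M = 1)

# Without the giraffe layer the flower alone would image as |t2|^2; the diffraction
# of t1 perturbs the recording everywhere, so the two layers cannot be separated:
print(np.max(np.abs(I - np.abs(t2) ** 2)) > 0.05)   # True
```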
Diffraction within such a thin sample is negligible, and the light beam passes directly through the sample without changing its direction. The transmission function of the sample under uniform illumination $U_0$ can be approximated as follows [9]

$$t(x, y) = U_0\, e^{-u(x, y) d} \tag{1.11}$$

Generally, $u(x, y)$ is complex-valued and thus can be written as $u(x, y) = u_r(x, y) + i u_i(x, y)$, giving

$$t(x, y) = U_0\, e^{-u_r(x, y) d - i u_i(x, y) d} \tag{1.12}$$
Here $d$ is the thickness of the sample and $u_r(x, y)$ is the 2D absorption coefficient, whose value varies between 10 mm$^{-1}$ and 100 mm$^{-1}$ depending on the light wavelength and the nature of the sample. If $d$ is only a few micrometers, the range of $u_r(x, y) d$ is much smaller than 0.1, so the sample is nearly transparent; the transmitted light intensity $|t(x, y)|^2$ then has poor contrast, suppressing relevant information about the sample. Dyeing methods were developed as an alternative to enhance the contrast, and a huge number of dyeing chemicals were invented to enhance the variation of $u_r(x, y)$. However, dyeing a sample changes its characteristics or functionality. Theoretically, optical imaging systems cannot provide useful information about the structure of purely transparent samples because their absorption coefficients $u_r(x, y)$ are uniform. Optical imaging systems do, however, exhibit the diffraction effect, and the intensity drops at points where the term $u_i(x, y) d$ changes abruptly. For some samples, such as binary phase plates and cultured cells, their shapes or edges therefore appear in the intensity and phase images, as in Fig. 1.5. The acquired image clearly shows the boundaries between regions with zero phase and $\pi$ phase. The phenomenon in Fig. 1.5 is explained by the convolution effect of imaging with an optical lens: the formed image is essentially the convolution between the real object $U_o(y)$ and the point spread function $h_c(y)$, that is, $U_I(y') = \int_{-\infty}^{+\infty} h_c(y' - y)\, U_o(y)\, \mathrm{d}y$ [14]. If the object is a pure phase grating with phase $\varphi(y) = \pi\, \mathrm{comb}(y - 2n) \otimes \mathrm{rect}(2y)$, as shown in Fig. 1.6a, the complex amplitude of the image formed will be $U_I(y') = \int_{-\infty}^{+\infty} h_c(y' - y)\, e^{i \varphi(y)}\, \mathrm{d}y$. For an ideal diffraction-limited imaging system with a pupil of diameter $a$, $h_c(y)$ is equal to $\mathrm{sinc}\!\left(\frac{a y}{\lambda d_i}\right)$,
Fig. 1.5 Wide field intensity and phase images of a binary pure phase plate and b cultured oral cell
Fig. 1.6 a Convolution effect in imaging of a pure phase grating and b corresponding intensity image
where $\lambda$ is the wavelength used and $d_i$ is the distance from the lens to the image plane. Then $U_I(y')$ is $\int_{-\infty}^{+\infty} \mathrm{sinc}\!\left[\frac{a (y' - y)}{\lambda d_i}\right] e^{i \varphi(y)}\, \mathrm{d}y$. Since the value of $\mathrm{sinc}\!\left[\frac{a (y' - y)}{\lambda d_i}\right]$ decreases quickly with increasing $|y' - y|$, the value of $U_I(y')$ is dominated by the integration over the main lobe of $\mathrm{sinc}\!\left[\frac{a (y' - y)}{\lambda d_i}\right]$; that is, $U_I(y') \approx \int_{y' - \frac{\lambda d_i}{a}}^{y' + \frac{\lambda d_i}{a}} \mathrm{sinc}\!\left[\frac{a (y' - y)}{\lambda d_i}\right] e^{i \varphi(y)}\, \mathrm{d}y$. For $y' = 0$, the main lobe of $\mathrm{sinc}\!\left(\frac{a y}{\lambda d_i}\right)$ is centered at the origin of the $y$-axis, so $U_I(0) = \int_{-\frac{\lambda d_i}{a}}^{\frac{\lambda d_i}{a}} \mathrm{sinc}\!\left(\frac{a y}{\lambda d_i}\right) e^{i \pi}\, \mathrm{d}y$; since $\mathrm{sinc}\!\left(\frac{a y}{\lambda d_i}\right)$ is positive in the range $\left(-\frac{\lambda d_i}{a}, \frac{\lambda d_i}{a}\right)$, $|U_I(0)|^2 = \left|\int_{-\frac{\lambda d_i}{a}}^{\frac{\lambda d_i}{a}} \mathrm{sinc}\!\left(\frac{a y}{\lambda d_i}\right) \mathrm{d}y\right|^2$ is definitely larger than 0. For $y' = \frac{5}{2}$, the main lobe of $\mathrm{sinc}\!\left[\frac{a \left(\frac{5}{2} - y\right)}{\lambda d_i}\right]$ sits at the edge between the $\pi$-phase and zero-phase regions, and $U_I\!\left(\tfrac{5}{2}\right) = \int_{\frac{5}{2}}^{\frac{5}{2} + \frac{\lambda d_i}{a}} \mathrm{sinc}\!\left[\frac{a \left(\frac{5}{2} - y\right)}{\lambda d_i}\right] \mathrm{d}y - \int_{\frac{5}{2} - \frac{\lambda d_i}{a}}^{\frac{5}{2}} \mathrm{sinc}\!\left[\frac{a \left(\frac{5}{2} - y\right)}{\lambda d_i}\right] \mathrm{d}y \approx 0$. The intensity image formed is shown in Fig. 1.6b: it has zero intensity at $y' = n + \frac{1}{2}$ and outlines the edges between the regions of zero phase and $\pi$ phase. This is the reason why many snaky segments appear in Fig. 1.5a. In natural transparent samples there is no remarkable phase jump in most cases, and since $U_I(y')$ is then never zero, the contrast of images taken with a classical bright-field microscope is quite poor and insufficient to reveal many details of the sample, as shown in Fig. 1.5b.
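The edge-only contrast of a pure π-phase object can be reproduced numerically by convolving $e^{i\varphi(y)}$ with the sinc point spread function. A sketch in Python (the book uses MATLAB; the pupil and distance values are assumptions chosen so the PSF main lobe is much narrower than the grating period, and the grating is taken with equal-width phase bands so its edges fall at the half-integer positions quoted in the text):

```python
import numpy as np

lam = 0.5                          # wavelength (um)
di = 1.0e4                         # lens-to-image distance (um), illustrative
a = 2.0e4                          # pupil diameter (um), illustrative; main-lobe half-width lam*di/a = 0.25 um
y = np.linspace(-4, 4, 4001)
dy = y[1] - y[0]

# Binary pi-phase grating of period 2: phase pi for |y - 2n| < 1/2, zero elsewhere
phi = np.where(np.abs(((y + 1) % 2) - 1) < 0.5, np.pi, 0.0)
obj = np.exp(1j * phi)             # pure phase object: |obj| = 1 everywhere

hc = (a / (lam * di)) * np.sinc(a * y / (lam * di))   # normalized diffraction-limited PSF
U = np.convolve(obj, hc, mode="same") * dy
I = np.abs(U) ** 2

idx_edge = np.argmin(np.abs(y - 0.5))   # a phase edge (y = n + 1/2)
idx_center = len(y) // 2                # the middle of a pi-phase region
# Dark lines appear only at the phase edges, exactly as in Fig. 1.6b:
print(I[idx_edge] < 0.05, I[idx_center] > 0.5)   # True True
```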
1.3 Intensity Imaging to Phase Imaging

When light passes through a sample, it experiences a phase delay of $\varphi(x, y) = \frac{2\pi d}{\lambda} n(x, y)$, where $n(x, y)$ is the refractive index, whose value depends on the physical and chemical structure of the sample under study. For most biological samples $n(x, y)$ varies around 1.3, and $\lambda$ is around 0.5 μm for visible light. When $d$ is only 10 microns, the phase delay changes by $2\pi \times 10 \times 0.01 / 0.5 \approx 1.2$ rad even for a variation of only 0.01 in $n(x, y)$. This shows that phase measurement can be an alternative solution for imaging transparent samples. Since light is an electromagnetic wave, it also has a temporal phase component $2\pi \nu t$ in addition to the spatial phase $\frac{2\pi d}{\lambda} n(x, y)$. Therefore, the phase of the transmitted light can be written
in the form $\varphi(x, y, t) = 2\pi\left[\frac{d}{\lambda} n(x, y) + \nu t\right]$. The electric field of the light in the $x$–$y$ plane can then be expressed as $E(x, y, t) = |E(x, y)| \cos[\varphi(x, y) + \omega t]$, where $|E(x, y)|$ is the amplitude of the light, whose square $|E(x, y)|^2$ can be measured by a CCD or another electronic device, and $\varphi(x, y) + \omega t$ is its phase at the coordinate $(x, y)$ at time $t$ as the field oscillates at the angular frequency $\omega$. If a detector had a response rate faster than the angular frequency $\omega$, the recorded field would change periodically between $-|E(x, y)|$ and $+|E(x, y)|$ with frequency $\omega$, where the negative sign "$-$" means the electric field changes direction, and the spatial phase could be determined as $\varphi(x, y) = \cos^{-1}\frac{E(x, y, t)}{|E(x, y, t)|_{max}} - \omega t$. However, detectors can only record the time average of the square of the electric field according to Eq. (1.13), because the frequency of visible light is of the order of $10^{14}$–$10^{15}$ Hz, much higher than the response rate of any available electronic device. The spatial phase information of the light, $\frac{2\pi d}{\lambda} n(x, y)$, which is able to highlight the structural differences, is lost during the recording, and the detector records only the intensity

$$I(x, y) = \left\langle |E(x, y, t)|^2 \right\rangle = |E(x, y)|^2 \left\langle \cos^2[\varphi(x, y) + \omega t] \right\rangle = 0.5\, |E(x, y)|^2 \tag{1.13}$$
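Equation (1.13) can be checked numerically: averaging the squared field over many optical periods returns $0.5|E|^2$ regardless of $\varphi(x, y)$, which is exactly how the spatial phase is lost. A sketch in Python (MATLAB in the book; the amplitude and phase values are arbitrary):

```python
import numpy as np

E, phi = 2.0, 1.0                        # field amplitude and spatial phase at one point (arbitrary)
omega = 2 * np.pi * 5e14                 # angular frequency of visible light (~500 THz)
T = 2 * np.pi / omega                    # one optical period
t = np.linspace(0.0, 100 * T, 100001)    # sample 100 periods, far beyond any detector's reach

I_avg = np.mean((E * np.cos(phi + omega * t)) ** 2)
print(np.isclose(I_avg, 0.5 * E ** 2, rtol=1e-3))   # True for any phi: the phase never shows up
```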
1.4 Basic Principles of Computational Phase Imaging

Since physical detectors cannot capture the optical phase directly, only indirect decoding methods can recover the lost phase from the recordable intensity. Phase contrast microscopy and differential interference contrast microscopy are two classical phase imaging methods based on the fact that the generated intensity I(x, y) is proportional to the phase term ϕ(x, y) or its spatial derivatives [10, 11]; therefore, I(x, y) itself is a direct indication of the phase distribution ϕ(x, y), obtained without additional calculation. However, this is not the case for thick samples with large values of ϕ(x, y), for which the intensity I(x, y) is no longer strictly proportional to ϕ(x, y) or its derivatives. Consequently, these two methods have mainly been used for qualitative observation of biological samples and cannot be used for quantitative measurement. In other phase imaging techniques, such as interferometry, digital holography, the transport of intensity equation (TIE), and coherent diffraction imaging (CDI), the distribution of the recorded I(x, y) differs significantly from that of ϕ(x, y), and various algorithms have been developed to compute ϕ(x, y) from I(x, y).

The main difficulty in measuring the phase term ϕ(x, y) of light is that E(x, y, t) changes very rapidly, and no electronic detector can operate on the frequency scale of light. Therefore, to realize phase measurement, we should generate a slowly changing signal to replace E(x, y, t) and use it to calculate the phase ϕ(x, y). If two light fields with frequencies ω1 and ω2 reach the same coordinate (x, y), a beat signal with frequency ω1 − ω2 is generated, which is resolvable by a common detector. Combining two such light fields E1(x, y, t) = |E1(x, y)|e^{i[ϕ1(x,y)+ω1t]} and
E2(x, y, t) = |E2(x, y)|e^{i[ϕ2(x,y)+ω2t]}, we obtain the resultant intensity as

I(x, y, t) = |E1(x, y)|² + |E2(x, y)|² + 2|E1(x, y)||E2(x, y)| cos[ϕ1(x, y) − ϕ2(x, y) + (ω1 − ω2)t]   (1.14)
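The beat-note idea behind Eq. (1.14) can be checked numerically. In the following Python sketch (an illustration with arbitrary, scaled-down frequencies), two fields whose frequencies differ by 1 kHz are superposed, and the resulting intensity matches Eq. (1.14) term by term:

```python
import numpy as np

f1, f2 = 5.0e5, 5.0e5 - 1.0e3   # two carrier frequencies differing by 1 kHz (scaled down for the demo)
t = np.linspace(0, 5e-3, 4000, endpoint=False)

E1 = 1.0 * np.exp(1j * (0.7 + 2 * np.pi * f1 * t))   # |E1| = 1,   phi1 = 0.7
E2 = 0.5 * np.exp(1j * (0.0 + 2 * np.pi * f2 * t))   # |E2| = 0.5, phi2 = 0
I = np.abs(E1 + E2) ** 2                             # intensity of the superposition

# Eq. (1.14): I = |E1|^2 + |E2|^2 + 2|E1||E2| cos[(phi1 - phi2) + (w1 - w2) t]
I_eq = 1.0 + 0.25 + 2 * 1.0 * 0.5 * np.cos(0.7 + 2 * np.pi * (f1 - f2) * t)
print(np.max(np.abs(I - I_eq)))   # ~0: the intensity oscillates at the 1 kHz beat only
```

A detector too slow for f1 and f2 individually can still follow the slow (f1 − f2) envelope, which is the essence of heterodyne phase measurement.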
If E2(x, y, t) is a uniform parallel light beam, which can be written as E2(x, y) = E0 and ϕ2(x, y) = 0, I(x, y, t) becomes

I(x, y, t) = |E1(x, y)|² + |E0|² + 2|E1(x, y)||E0| cos[ϕ1(x, y) + (ω1 − ω2)t]   (1.15)

If (ω1 − ω2) is much lower than the response rate of the detector, a time-sequential cosine signal can be recorded at each coordinate (x, y), and ϕ1(x, y) can be calculated as ϕ1(x, y) = cos⁻¹{[I(x, y, t) − |E1(x, y)|² − |E0|²]/[2|E1(x, y)||E0|]} − (ω1 − ω2)t, realizing phase imaging. If E1(x, y, t) and E2(x, y, t) in Eq. (1.14) come from the same laser source, their frequencies are equal, ω1 = ω2 = ω, and

I(x, y) = |E1(x, y)|² + |E0|² + 2|E1(x, y)||E0| cos[ϕ1(x, y)]   (1.16)
The intensity I(x, y) in Eq. (1.16) is static and therefore recordable by common detectors, and the phase ϕ1(x, y) can be calculated as ϕ1(x, y) = cos⁻¹{[I(x, y) − |E1(x, y)|² − |E0|²]/[2|E1(x, y)||E0|]}. Since the inverse cosine cannot distinguish ϕ1(x, y) from −ϕ1(x, y), additional techniques such as phase shifting and off-axis reference illumination were developed and are widely applied to determine the sign of ϕ1(x, y) [12, 13]. The principle of indirect phase measurement expressed by Eq. (1.16) is called interferometry. Various techniques and instruments have been developed based on this idea, such as the Twyman–Green interferometer, the Fizeau interferometer, and ordinary off-axis holography, in which a separate parallel or spherical beam E2(x, y, t) is used as a reference to obtain a static I(x, y). Interferometry can also be realized through an approach called self-interference. Shearing interferometers, for example, let the light E(x, y, t) interfere with its laterally shifted replica E(x + Δx, y, t); the phase difference Δϕ(x, y) can be deduced from the static I(x, y), and ϕ(x, y) is finally obtained by integrating Δϕ(x, y) along the x-axis.

Phase imaging can also be realized by computationally retrieving the phase from a recorded intensity pattern. According to the principle of Fresnel diffraction, when a light field |E(x0, y0)|e^{iϕ(x0,y0)} propagates a distance z from the x0–y0 plane to the x–y plane, its complex amplitude E(x, y) on the x–y plane can be written as [14]
Fig. 1.7 Imaging methods of a coherent diffraction imaging, b transport of intensity equation, and c Shack–Hartmann sensor
E(x, y) = [1/(jλz)] e^{j(k/2z)(x² + y²)} ∬ |E(x0, y0)|e^{iϕ(x0,y0)} e^{j(k/2z)(x0² + y0²)} e^{−j(2π/λz)(x0x + y0y)} dx0 dy0   (1.17)

Though the phase distribution ϕ(x0, y0) is encoded into the spatial distribution of |E(x, y)|², it is lost during data acquisition and cannot be calculated analytically in most cases. However, by imposing additional constraints on |E(x0, y0)|e^{iϕ(x0,y0)}, its phase ϕ(x0, y0) can be retrieved iteratively using various computational algorithms. Figure 1.7a shows such an example, where |E(x0, y0)|e^{iϕ(x0,y0)} is confined by a tiny hole and a detector some distance away records a diffraction pattern |E(x, y)|². Then |E(x0, y0)| and ϕ(x0, y0) can be retrieved iteratively using the Error-Reduction (ER) or Hybrid Input–Output (HIO) algorithm developed by Fienup [15]. These imaging modalities [16, 17] have evolved into a sub-branch of phase imaging, coherent diffraction imaging (CDI).

In contrast to such iterative methods, the transport of intensity equation (TIE) is a category of phase extraction that uses propagation to recover the phase directly rather than iteratively. The principle is shown in Fig. 1.7b. An intensity image |E(x, y)|² is recorded in the x–y plane, a defocused image |E(x1, y1)|² is recorded at a slightly defocused x1–y1 plane, and the phase ϕ(x, y) in the x–y plane is calculated by comparing the two intensities. The dotted curve in Fig. 1.7b indicates the curvature of the wavefront ϕ(x, y) of a beam with uniform intensity |E0|² in the x–y plane. Because different parts of the wavefront have different curvatures, the transmitted light converges or diverges locally, redistributing the intensity. The upper part of ϕ(x, y) has a convex wavefront, so the rays diverge and the intensity |E(x1, y1)|² formed by these rays is smaller than |E0|². The middle part of ϕ(x, y) is planar; there the rays travel parallel to the optical axis and leave the intensity |E(x1, y1)|² roughly equal to |E0|².
The lower part of ϕ(x, y) has a concave wavefront, causing the rays to converge, so that the intensity |E(x1, y1)|² becomes stronger than |E0|². The phase ϕ(x, y) can thus be computed by comparing the intensities in the two planes; this is the basic idea behind the transport of intensity equation (TIE) [18–20], which has been widely studied and applied to biological sample observation. The Shack–Hartmann wavefront sensor is another phase measurement technique that works
on the principle of geometric optics. If the field |E(x0, y0)|e^{iϕ(x0,y0)} in Eq. (1.17) is tilted by a small angle θ with respect to the optical axis, E(x, y) shifts transversely, and the phase ramp of |E(x0, y0)|e^{iϕ(x0,y0)} can be computed from the shift of E(x, y). As shown in Fig. 1.7c, by splitting |E(x0, y0)|e^{iϕ(x0,y0)} into many sub-apertures with a micro-lens array and focusing each of them, the phase ramp of the light in each sub-aperture can be computed from the transverse shift of each focus from its central position. This is the basic principle of the Shack–Hartmann sensor, which has found many applications in astronomical observation.

Technologies for imaging and measuring phase can be broadly divided into two groups according to their ability to recover phase: qualitative methods for phase imaging and quantitative methods for phase measurement. The first group typically includes phase contrast microscopy, differential interference contrast microscopy, and the schlieren method. The second group includes interferometry, holography, coherent diffraction imaging, Shack–Hartmann sensing, the transport of intensity equation, and artificial-intelligence-based methods. The classification of these methods is shown in Fig. 1.8.

Fig. 1.8 Classification of computational phase imaging methods

We try to present all these techniques and their applications in this book systematically according to their characteristics and history. The second chapter discusses qualitative phase representation and imaging. Typical interferometric methods of quantitative phase imaging are discussed in Chap. 3, and methods of phase retrieval without interferometry are discussed in Chap. 4. Several typical applications of these methods are discussed in Chap. 5, and the trends and future of phase imaging are briefly discussed in Chap. 6.
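The iterative retrieval idea of Fig. 1.7a can be made concrete with a short simulation. The following Python/NumPy sketch (illustrative code, not from the book; it uses a plain far-field Fourier model and a made-up 16 × 16 support) implements the two core projections of Fienup's Error-Reduction algorithm [15]: enforce the measured diffraction modulus in the Fourier plane, then the aperture support in the object plane.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Ground-truth complex object confined to a small "pinhole" support
support = np.zeros((N, N), dtype=bool)
support[24:40, 24:40] = True
obj = np.zeros((N, N), dtype=complex)
obj[support] = rng.random(support.sum()) * np.exp(1j * rng.random(support.sum()))

measured = np.abs(np.fft.fft2(obj))   # only the diffraction modulus is recorded

# Error-Reduction: start from a random phase guess, then alternate projections
est = np.fft.ifft2(measured * np.exp(2j * np.pi * rng.random((N, N))))
for _ in range(500):
    F = np.fft.fft2(est)
    F = measured * np.exp(1j * np.angle(F))   # Fourier-modulus constraint
    est = np.fft.ifft2(F)
    est[~support] = 0                         # object-plane support constraint

err = np.linalg.norm(np.abs(np.fft.fft2(est)) - measured) / np.linalg.norm(measured)
print(err)   # the residual modulus error shrinks as the iterations proceed
```

In practice HIO replaces the hard support projection with a feedback step to avoid stagnation, and the propagator would be the Fresnel integral of Eq. (1.17) rather than a plain Fourier transform.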
References

1. Romano, A., Cavaliere, R.: Geometric Optics. Springer, Berlin (2016)
2. Renner, E.: Pinhole Photography, 4th edn. Focal Press, Oxford (2008)
3. Sun, H.: Lens Design: A Practical Guide (Optical Sciences and Applications of Light), 1st edn. CRC Press, Boca Raton (2016)
4. Atchison, D., Smith, G.: Optics of the Human Eye. Butterworth-Heinemann, Oxford (2000)
5. Malacara-Hernández, D., Malacara-Hernández, Z.: Handbook of Optical Design, 3rd edn. CRC Press, Boca Raton (2013)
6. Schwartz, K.: The Physics of Optical Recording, 1st edn. Springer-Verlag, Berlin (1993)
7. Sakai, K., Hirayama, N., Tamura, R. (eds.): Novel Optical Resolution Technologies. Springer-Verlag, Berlin (2007)
8. Rowlands, A.: Physics of Digital Photography. Institute of Physics Publishing, Philadelphia (2017)
9. Mazda, F. (ed.): Telecommunications Engineer's Reference Book. Butterworth-Heinemann, Oxford (1993)
10. Benford, J.R., Richard, L.S.: Phase contrast microscopy for opaque specimens. J. Opt. Soc. Am. 40, 314–316 (1950)
11. Holmes, T.J.: Signal-processing characteristics of differential-interference-contrast microscopy 2: noise considerations in signal recovery. Appl. Opt. 27, 1302–1309 (1988)
12. Ishiguro, K.: The phase-shift measurement of thin films and its amplification. J. Opt. Soc. Am. 40, 789–790 (1950)
13. Hernandez, G.: Analytical description of a Fabry–Perot spectrometer 3: off-axis behavior and interference filters. Appl. Opt. 13, 2654–2661 (1974)
14. Steward, E.G.: Fourier Optics: An Introduction, 2nd edn. Dover Publications, New York (2011)
15. Fienup, J.R.: Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982)
16. Faulkner, H.M.L., Rodenburg, J.M.: Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm. Phys. Rev. Lett. 93, 023903 (2004)
17. Zhang, F., Rodenburg, J.M.: Phase retrieval based on wave-front relay and modulation. Phys. Rev. B 82, 121104 (2010)
18. Teague, M.R.: Deterministic phase retrieval: a Green's function solution. J. Opt. Soc. Am. 73, 1434–1441 (1983)
19. Roddier, F.: Curvature sensing and compensation: a new concept in adaptive optics. Appl. Opt. 27, 1223–1225 (1988)
20. Roddier, F.: Wavefront sensing and the irradiance transport equation. Appl. Opt. 29, 1402–1403 (1990)
Chapter 2
Qualitative Phase Imaging
The traditional imaging methods depend on the specimen acting on the incident illumination to create an observable image. Light absorbed by different parts of the object produces a brightness contrast, or amplitude contrast, that gives a visual impression of the object. However, many biological and industrial specimens differ more in refractive index than in light absorption, so intensity contrast does not reveal all of their details, and such methods do not produce satisfactory contrast when observing transparent or thick samples. Staining and labeling are widely used to improve image contrast, but they can damage biological specimens. Besides biomedical applications, some plasma-assisted applications also require optical diagnosis of phenomena occurring in transparent media. Consequently, label-free solutions are preferred for obtaining high-contrast images.

If a microscopic specimen has structural details that differ in optical path length, it alters the phase of the light passing through it, so every part of the wavefront carries its own phase. If a phase change can be rendered as a change in brightness, the microscopic features causing that change become visible. This can be achieved effectively with phase contrast imaging, which provides an alternative to intensity contrast imaging. Several label-free high-contrast imaging techniques for phase objects have been developed since Zernike introduced phase contrast in the 1930s. Hoffman modulation contrast microscopy and differential interference contrast (DIC) microscopy are excellent tools for studying cells and tissues. In plasma applications, schlieren photography is another technique used to observe air turbulence. The basic principle behind all of these methods is the transformation of phase into intensity, thereby improving the contrast of the image. None of them, however, provides a quantitative measurement of phase.
Since the modulated intensity and the specimen phase are mixed in the images, the phase cannot be reconstructed directly; thus, these methods are considered qualitative phase imaging methods. A mathematical description of qualitative phase imaging techniques is presented in this chapter, along with numerical simulations using MATLAB®. Furthermore, the chapter discusses recent developments in phase contrast imaging and the capabilities and limitations of each technique.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
C. Liu et al., Computational Optical Phase Imaging, Progress in Optical Science and Photonics 21, https://doi.org/10.1007/978-981-19-1641-0_2
2.1 Phase Contrast Microscopy

Described first by Zernike [1–3], phase contrast microscopy enhances the contrast of the image of a transparent sample by using the phase difference between the direct and scattered light from the sample. When these beams recombine, they create contrast by strengthening or cancelling each other, depending on their phase difference. We present here both theoretical and numerical models of phase contrast microscopy. Figure 2.1a represents the principle of phase contrast imaging. For a transparent specimen in the object plane, its transmission function t can be described as

t = e^{iϕ(x,y)}   (2.1)
where ϕ(x, y) indicates the specimen phase distribution. Denoting the spectrum by T(ξ, η), the complex wavefront distribution at the back focal plane of the first lens (the front focal plane of the second lens) can be computed through the Fourier transform F as

T(ξ, η) = F(t)   (2.2)
Fig. 2.1 Phase contrast microscopy. a Phase contrast imaging principle; (A1) positive phase contrast mask; (A2) negative phase contrast mask; b phase contrast imaging in a commercial microscope; (B1) illumination ring; (B2) positive phase contrast mask; (B3) negative phase contrast mask
When the specimen phase is small, Eq. (2.1) can be simplified to Eq. (2.3):

t = e^{iϕ(x,y)} ≈ 1 + iϕ(x, y)   (2.3)
Substituting Eq. (2.3) into Eq. (2.2), the complex wavefront distribution becomes

T(ξ, η) = δ(ξ, η) + iF[ϕ(x, y)]   (2.4)
The first term in Eq. (2.4) is the zeroth-order specimen spectrum, and the second term represents the non-zeroth-order specimen spectrum. A phase contrast mask, as shown in Fig. 2.1(A1), is introduced at the back focal plane of the first lens; its function is to delay the phase of the zeroth-order spectrum by π/2. The modified complex wavefront distribution is

T'(ξ, η) = iδ(ξ, η) + iF[ϕ(x, y)]   (2.5)
The complex wavefront distribution at the back focal plane of the second lens is

U(x, y) = F[T'(ξ, η)] = i + iϕ(x, y)   (2.6)
The image intensity I can be computed as

I = |U(x, y)|² = |i + iϕ(x, y)|² ≈ 1 + 2ϕ(x, y)   (2.7)
where the squared term of ϕ(x, y) is ignored since the specimen phase is small. Alternatively, a phase contrast mask as shown in Fig. 2.1(A2) delays the phase of the zeroth-order spectrum by 3π/2, which is equivalent to advancing it by π/2. Rewriting Eqs. (2.5)–(2.7):

T'(ξ, η) = −iδ(ξ, η) + iF[ϕ(x, y)]   (2.8)
U(x, y) = F[T'(ξ, η)] = −i + iϕ(x, y)   (2.9)
I = |U(x, y)|² = |−i + iϕ(x, y)|² ≈ 1 − 2ϕ(x, y)   (2.10)
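Equations (2.5)–(2.10) are easy to verify numerically. The book's simulation below uses MATLAB; here is an equivalent NumPy sketch (illustrative code; the weak Gaussian phase object is made up) that delays only the zeroth Fourier order and recovers I ≈ 1 ± 2ϕ(x, y):

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
phi = 0.05 * np.exp(-(X**2 + Y**2) / 0.1)   # weak, smooth phase object
phi -= phi.mean()    # remove the mean so the zeroth order holds only the background

t = np.exp(1j * phi)          # Eq. (2.1): pure phase transmission
T = np.fft.fft2(t)            # spectrum at the Fourier (mask) plane; DC at [0, 0]

# Positive phase contrast: delay the zeroth order by pi/2, as in Eq. (2.5)
T_pos = T.copy()
T_pos[0, 0] *= np.exp(1j * np.pi / 2)
I_pos = np.abs(np.fft.ifft2(T_pos)) ** 2    # approx 1 + 2*phi, Eq. (2.7)

# Negative phase contrast: delay the zeroth order by 3*pi/2, as in Eq. (2.8)
T_neg = T.copy()
T_neg[0, 0] *= np.exp(1j * 3 * np.pi / 2)
I_neg = np.abs(np.fft.ifft2(T_neg)) ** 2    # approx 1 - 2*phi, Eq. (2.10)

print(np.max(np.abs(I_pos - (1 + 2 * phi))))   # small residual, O(phi^2)
print(np.max(np.abs(I_neg - (1 - 2 * phi))))
```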
Equations (2.7) and (2.10) describe the two modes of phase contrast: positive (bright) phase contrast microscopy and negative (dark) phase contrast microscopy. Both approaches improve the imaging contrast by transforming phase into intensity. Commercial microscopes often implement phase contrast microscopy in a configuration similar to Fig. 2.1b, which closely resembles dark-field microscopy. The light
passing through the illumination ring shown in Fig. 2.1(B1) is modulated by the positive or negative phase contrast mask shown in Fig. 2.1(B2) and 2.1(B3). The zeroth-order light lies within the ring region, which differs from Fig. 2.1a, but the principle of phase contrast imaging remains the same: the phase contrast image arises from the interference between the directly transmitted light within the ring region of the phase contrast mask and the phase-shifted light containing the specimen information from the other portions of the mask. The phase contrast mask and lens are often integrated into a phase contrast microscope objective.

Phase contrast microscopy as described in Fig. 2.1a is numerically implemented in the MATLAB code below. Figure 2.2 shows the results of a numerical simulation using red blood cells as samples. The parameters used were obtained statistically from extensive measurements [4]. The phase and amplitude of the red blood cells are shown in Fig. 2.2a; the simulation disregards absorption and scattering effects. The bright-field image in Fig. 2.2b does not reveal the specimens because of the uniform amplitude distribution shown in Fig. 2.2a. The images in Fig. 2.2c, d are obtained with the positive and negative phase contrast masks, respectively, and show a significant improvement in contrast. Readers may find more images of various samples in phase contrast microscopy image galleries [5, 6].
Fig. 2.2 Numerical simulation of phase contrast microscopy. a Amplitude and phase of red blood cells; b bright field image; c positive phase contrast image; d negative phase contrast image
MATLAB Code for Phase Contrast Imaging
% Computational Optical Phase Imaging
% Phase Contrast Microscopy
%% CCC
clear all; close all; clc;
%% System Parameters
PixelNumber = 2^8; % Image recorder size
PixelSize = 6*10^(-6); % Image recorder pixel size, unit: m
MagnificationRatio = 40; % Magnification ratio of optical system, 40x
RIwater = 1.33; % Refractive index of water
RIcell = 1.40; % Refractive index of red blood cell
lambda = 532*10^(-9); % Wavelength
%% Red Blood Cell Model [4] in Chapter 2
Radius = 7.65*10^(-6)/2;
Size = floor(2*Radius/PixelSize*MagnificationRatio);
Xaxis = linspace(-Radius, Radius, Size);
Yaxis = linspace(-Radius, Radius, Size);
[Xmatrix, Ymatrix] = meshgrid(Xaxis, Yaxis);
Rho = (Xmatrix.^2+Ymatrix.^2).^0.5;
RhoNormalized = Rho/Radius;
Mask = double(RhoNormalized