Photonics Rules of Thumb
Optics, Electro-Optics, Fiber Optics, and Lasers
Second Edition
Ed Friedman
John Lester Miller
McGraw-Hill
New York  Chicago  San Francisco  Lisbon  London  Madrid  Mexico City  Milan  New Delhi  San Juan  Seoul  Singapore  Sydney  Toronto
Copyright © 2004 by The McGraw-Hill Companies, Inc. All rights reserved. Manufactured in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

978-0-07-143345-7

The material in this eBook also appears in the print version of this title: 0-07-138519-3.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. For more information, please contact George Hoare, Special Sales, at [email protected] or (212) 904-4069.

TERMS OF USE

This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.

THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.

DOI: 10.1036/0071385193
Contents
Acknowledgments . . . xi
Introduction . . . xiii
Chapter 1. Acquisition, Tracking, and Pointing/Detection, Recognition, and Identification . . . 1
    SNR Requirements . . . 4
    The Johnson Criteria . . . 4
    Probability of Detection Estimation . . . 8
    Correcting for Probability of Chance . . . 9
    Detection Criteria . . . 10
    Estimating Probability Criteria from N50 . . . 11
    Gimbal to Slewed Weight . . . 14
    Identification and Recognition Improvement for Interpolation . . . 14
    Resolution Requirement . . . 15
    MTF Squeeze . . . 16
    Psychometric Function . . . 18
    Rayleigh Criterion . . . 20
    Resolution Required to Read a Letter . . . 22
    Subpixel Accuracy . . . 24
    National Image Interpretability Rating Scale Criteria . . . 27
Chapter 2. Astronomy . . . 31
    Atmospheric “Seeing” . . . 33
    Blackbody Temperature of the Sun . . . 33
    Direct Lunar Radiance . . . 34
    Number of Actuators in an Adaptive Optic . . . 35
    Number of Infrared Sources per Square Degree . . . 37
    Number of Stars as a Function of Wavelength . . . 39
    Number of Stars above a Given Irradiance . . . 40
    Photon Rate at a Focal Plane . . . 40
    Reduction of Magnitude by Airmass . . . 41
    A Simple Model of Stellar Populations . . . 42
Chapter 3. Atmospherics . . . 45
    Atmospheric Attenuation or Beer’s Law . . . 47
    Impact of Weather on Visibility . . . 48
    Atmospheric Transmission as a Function of Visibility . . . 50
    Bandwidth Requirement for Adaptive Optics . . . 51
    Cn² Estimates . . . 52
    Cn² as a Function of Weather . . . 54
    Free-Space Link Margins . . . 56
    Fried Parameter . . . 57
    Index of Refraction of Air . . . 59
    The Partial Pressure of Water Vapor . . . 62
    Phase Error Estimation . . . 62
    Shack-Hartmann Noise . . . 63
    Vertical Profiles of Atmospheric Parameters . . . 66
    Visibility Distance for Rayleigh and Mie Scattering . . . 66
Chapter 4. Backgrounds . . . 69
    Clutter and Signal-to-Clutter Ratio . . . 72
    Clutter PSD Form . . . 73
    Earth’s Emission and Reflection . . . 74
    Effective Sky Temperature . . . 75
    Emissivity Approximations . . . 77
    Frame Differencing Gain . . . 78
    General Infrared Clutter Behavior . . . 79
    Illuminance Changes during Twilight . . . 80
    Reflectivity of a Wet Surface . . . 81
    Sky Irradiance . . . 82
    Spencer’s Signal-to-Clutter Ratio as a Function of Resolution . . . 83
Chapter 5. Cryogenics . . . 85
    Bottle Failure . . . 88
    Cold Shield Coatings . . . 89
    Cooler Capacity Equation . . . 90
    Cooling with Solid Cryogen . . . 91
    Failure Probabilities for Cryocoolers . . . 92
    Joule–Thomson Clogging . . . 93
    Joule–Thomson Gas Bottle Weights . . . 94
    Sine Rule of Improved Performance from Cold Shields . . . 95
    Stirling Cooler Efficiency . . . 97
    Temperature Limits on Detector/Dewar . . . 98
    Thermal Conductivity of Multilayer Insulation . . . 98
    Cryocooler Sizing Rule . . . 99
    Radiant Input from Dewars . . . 100
Chapter 6. Detectors . . . 101
    APD Performance . . . 105
    Responsivity of Avalanche Photodiodes . . . 105
    Defining Background-Limited Performance for Detectors . . . 107
    Digitizer Sizing . . . 108
    HgCdTe “x” Concentration . . . 109
    Martin’s Detector DC Pedestal . . . 110
    Noise Bandwidth of Detectors . . . 112
    Nonuniformity Effects on SNR . . . 113
    Peak versus Cutoff . . . 114
    Performance Dependence on RoA . . . 115
    Responsivity and Quantum Efficiency . . . 116
    Shot Noise Rule . . . 118
    Specifying 1/f Noise . . . 118
    Well Capacity . . . 121
    IR Detector Sensitivity to Temperature . . . 121
Chapter 7. Displays . . . 123
    Analog Square Pixel Aspect Ratios . . . 125
    Comfort in Viewing Displays . . . 125
    Common Sense for Displays . . . 126
    Contrast . . . 126
    Gamma . . . 128
    Gray Levels for Human Observers . . . 129
    Horizontal Sweep . . . 130
    Kell Factor . . . 131
    NTSC Display Analog Video Format . . . 132
    The Rose Threshold . . . 133
    Wald and Ricco’s Law for Display Detection . . . 134
    Display Lines to Spatial Resolution . . . 135
Chapter 8. The Human Eye . . . 137
    Cone Density of the Human Eye . . . 141
    Data Latency for Human Perception . . . 142
    Dyschromatopic Vision . . . 143
    Energy Flow into the Eye . . . 145
    Eye Motion during the Formation of an Image . . . 146
    Frequency at which Sequences of Images Appear as a Smooth Flow . . . 147
    Eye Resolution . . . 149
    Little Bits of Eye Stuff . . . 149
    Old-Age Rules . . . 151
    Optical Fields of View . . . 153
    Pupil Size . . . 154
    The Quantum Efficiency of Cones . . . 155
    Retinal Illumination . . . 156
    Rod Density Peaks around an Eccentricity of 30° . . . 158
    Simplified Optics Transfer Functions for the Components of the Eye . . . 160
    Stereograph Distance . . . 161
    Superposition of Colors . . . 161
    Vision Creating a Field of View . . . 163
Chapter 9. Lasers . . . 165
    Aperture Size for Laser Beams . . . 167
    Atmospheric Absorption of a 10.6-µm Laser . . . 167
    Cross Section of a Retro-reflector . . . 168
    Gaussian Beam Radius Relationships . . . 170
    Increased Requirement for Rangefinder SNR to Overcome Atmospheric Effects . . . 172
    Laser Beam Divergence . . . 173
    Laser Beam Quality . . . 174
    Laser Beam Scintillation . . . 175
    Laser Beam Spread . . . 177
    Laser Beam Spread Compared with Diffraction . . . 179
    Laser Beam Wander Variance . . . 180
    Laser Brightness . . . 180
    LED vs. Laser Reliability . . . 182
    LIDAR Performance . . . 185
    On-Axis Intensity of a Beam . . . 186
    Peak Intensity of a Beam with Intervening Atmosphere . . . 187
    Pointing of a Beam of Light . . . 189
    Pulse Stretching in Scattering Environments . . . 190
    Thermal Focusing in Rod Lasers . . . 191
Chapter 10. Material Properties . . . 195
    Cauchy Equation . . . 198
    Diameter-to-Thickness (Aspect) Ratio for Mirrors . . . 199
    Dip Coating . . . 201
    Dome Collapse Pressure . . . 201
    Figure Change of Metal Mirrors . . . 203
    Mass Is Proportional to Element Size Cubed . . . 204
    Mechanical Stability Rules . . . 205
    Mirror Support Criteria . . . 206
    Natural Frequency of a Deformable Mirror . . . 207
    Pressure on a Plane Window . . . 208
    Properties of Fused Silica . . . 211
    Spin-Cast Mirrors . . . 212
Chapter 11. Miscellaneous . . . 215
    Amdahl’s and Gustafson’s Laws for Processing Speedup . . . 216
    Arrhenius Equation . . . 217
    Cost of a Photon . . . 218
    Crickets as Thermometers . . . 219
    Distance to Horizon . . . 220
    Learning Curves . . . 220
    Moore’s Law . . . 222
    Murphy’s Law . . . 223
    Noise Resulting from Quantization Error . . . 224
    Noise Root Sum of Squares . . . 225
    Photolithography Yield . . . 226
    Solid Angles . . . 227
    Speed of Light . . . 228
Chapter 12. Ocean Optics . . . 229
    Absorption Coefficient . . . 231
    Absorption Caused by Chlorophyll . . . 232
    Absorption of Ice at 532 nm . . . 233
    Bathymetry . . . 234
    f-Stop under Water . . . 235
    Index of Refraction of Seawater . . . 236
    Ocean Reflectance . . . 237
    Underwater Detection . . . 238
    Underwater Glow . . . 239
    Wave Slope . . . 240
Chapter 13. Optics . . . 241
    Aberration Degrading the Blur Spot . . . 243
    Aberration Scaling . . . 243
    Acousto-optic Tunable Filter Bandpass . . . 244
    Blur vs. Field-Dependent Aberrations . . . 245
    Circular Variable Filters . . . 246
    Defocus for a Telescope Focused at Infinity . . . 247
    Diffraction Is Proportional to Perimeter . . . 248
    Diffraction Principles Derived from the Uncertainty Principle . . . 248
    f/# for Circular Obscured Apertures . . . 250
    Fabry–Perot Etalons . . . 251
    Focal Length and Field of View . . . 253
    Grating Blockers . . . 253
    Grating Efficiency as a Function of Wavelength . . . 254
    Hollow Waveguides . . . 254
    Hyperfocal Distance . . . 256
    The Law of Reflectance . . . 257
    Limit on FOV for Reflective Telescopes . . . 258
    Linear Approximation for Optical Modulation Transfer Function . . . 258
    Antireflection Coating Index . . . 259
    Maximum Useful Pupil Diameter . . . 260
    Minimum f/# . . . 261
    Optical Cost . . . 262
    Optical Performance of a Telescope . . . 264
    Peak-to-Valley Approximates Four Times the Root-Mean-Square . . . 265
    Pulse Broadening in a Fabry–Perot Etalon . . . 266
    Root-Sum-Squared Blur . . . 267
    Scatter Depends on Surface Roughness and Wavelength . . . 268
    Shape of Mirrors . . . 269
    Spherical Aberration and f/# . . . 270
    Stop Down Two Stops . . . 271
Chapter 14. Radiometry . . . 273
    Absolute Calibration Accuracy . . . 277
    Bandpass Optimization . . . 277
    Blackbody or Planck Function . . . 280
    Brightness of Common Sources . . . 282
    Calibrate under Use Conditions . . . 282
    Effective Cavity Emissivity . . . 283
    The MRT/NE∆T Relationship . . . 285
    The Etendue or Optical Invariant Rule . . . 286
    Ideal NETD Simplification . . . 288
    Laboratory Blackbody Accuracy . . . 289
    Lambert’s Law . . . 290
    Logarithmic Blackbody Function . . . 291
    Narrowband Approximation to Planck’s Law . . . 293
    The Peak Wavelength or Wien Displacement Law . . . 294
    Photons-to-Watts Conversion . . . 294
    Quick Test of NE∆T . . . 295
    The Rule of 4f/# . . . 295
Chapter 15. Shop Optics . . . 297
    Accuracy of Figures . . . 300
    Approximations for Foucault Knife-Edge Tests . . . 301
    Cleaning Optics Caution . . . 302
    Collimator Margin . . . 303
    Detection of Flatness by the Eye . . . 303
    Diamond Turning Crossfeed Speed . . . 304
    Effect of Surface Irregularity on the Wavefront . . . 305
    Fringe Movement . . . 305
    Material Removal Rate . . . 306
    Oversizing an Optical Element for Producibility . . . 307
    Pitch Hardness . . . 308
    Sticky Notes to Replace Computer Punch Cards for Alignment . . . 308
    Preston’s Law . . . 309
    Properties of Visible Glass . . . 310
    Scratch and Dig . . . 311
    Surface Tilt Is Typically the Worst Error . . . 312
Chapter 16. Systems . . . 313
    Baffle Attenuation . . . 315
    Expected Modulation Transfer Function . . . 315
    BLIP Limiting Rule . . . 316
    Dawes Limit of Telescope Resolution . . . 316
    Divide by the Number of Visits . . . 317
    General Image Quality Equation . . . 317
    Good Fringe Visibility . . . 318
    LWIR Diffraction Limit . . . 319
    Overlap Requirements . . . 319
    Packaging Apertures in Gimbals . . . 320
    Pick Any Two . . . 320
    Procedures to Reduce Narcissus Effects . . . 321
    Relationship between Focal Length and Resolution . . . 322
    Simplified Range Equation . . . 322
    System Off-Axis Rejection . . . 324
    Temperature Equilibrium . . . 325
    Typical Values of EO System Parameters . . . 325
    Wind Loading on a Structure . . . 326
    Largest Optical Element Drives the Mass of the Telescope . . . 327
Chapter 17. Target Phenomenology . . . 329
    Bidirectional Reflectance Distribution Function . . . 331
    Causes of White Pigment’s Color . . . 333
    Chlorophyll Absorptance . . . 334
    Emissivity Approximations . . . 335
    The Hagan–Rubens Relationship for the Reflectivity of Metals . . . 337
    Human Body Signature . . . 338
    IR Skin Characteristics . . . 339
    Jet Plume Phenomenology Rules . . . 340
    Lambertian vs. Specular . . . 341
    Laser Cross Section . . . 342
    More Plume Rules . . . 343
    Plume Thrust Scaling . . . 343
    Rocket Plume Rules . . . 344
    Solar Reflection Always Adds to Signature . . . 345
    Temperature as a Function of Aerodynamic Heating . . . 347
Chapter 18. Visible and Television Sensors . . . 349
    Airy Disk Diameter Approximates f/# (for Visible Systems) . . . 355
    CCD Size . . . 355
    Charge Transfer Efficiency Rules . . . 356
    CMOS Depletion Scaling . . . 356
    Correlated Double Sampling . . . 358
    Domination of Spurious Charge for CCDs . . . 359
    Equivalent ISO Speed of a Sensor . . . 360
    Hobbs’ CCD Noises . . . 360
    Image Intensifier Resolution . . . 361
    Increase in Intensifier Photocathode EBI with Temperature . . . 362
    Low-Background NE∆Q Approximation . . . 363
    Microchannel Plate Noise Figure and Noise Factor . . . 363
    Noise as a Function of Temperature . . . 365
    Noise Equations for CMOS APSs and CCDs . . . 365
    Photomultiplier Tube Power Supply Noise . . . 367
    P-Well CCDs are Harder than N-Type . . . 368
    Richardson’s Equation for Photocathode Thermionic Current . . . 369
    Silicon Quantum Efficiency . . . 369
    Williams’ Lines of Resolution per Megahertz . . . 371
Appendix A . . . 373
Glossary . . . 389
Index . . . 409
About the Authors . . . 417
Acknowledgments
The authors owe a debt to many; virtually every rule contained in this book is the result of a specific contribution or the inspiration of someone in the field of electro-optics. Without their efforts to develop the technology and its many applications, this book would neither exist nor have any value. We especially thank those who have contributed to this edition:

Bjorn Andressen
Joel Anspach
Cynthia Archer
Bill Bloomquist
Gary Emerson
Jim Gates
Paul Graf
Jim Haidt
Joel Johnson
Noel Jolivet
Brian McComas
Steve Ridgway
Mike Soel
Richard Vollmerhausen
Scott Way
George Williams
John Wiltse

Separately, we also thank all of those who suggested rules and provided permissions and all of those who helped review and improve the first edition.

Finally, the authors recognize the role of our families (especially our beloved wives, Judith Friedman and Corinne Foster) for tolerating the long periods of loneliness. The reader will take note that this is the third edition, and the creation of all three has taken a decade. Our families (wives, parents, children, and five grandchildren born during this period), while supportive of this effort, must be relieved that the saga is over.
Introduction
“Few formulas are so absolute as not to bend before the blast of extraordinary circumstances.”
Benjamin Nathan Cardozo

The evolution of the electro-optical (EO) sciences parallels, and feeds from, developments in a number of somewhat unrelated fields, including astronomy, satellite and remote sensing technology, materials science, electronics, optical communications, military research, and many others. The common thread of all of this effort, which really came into focus in the 1950s, is that scientists and engineers have been able to combine highly successful electronic technologies with the more ancient concepts and methods of optics and electromagnetic wave propagation. The merging of these fields has provided an unprecedented capability for instruments to “see” targets and communicate with them in a wide range of wavelengths for the benefit of security systems, science, defense, and (more recently) consumers. Major departments at universities are now devoted to producing new graduates with specialties in this field. There is no end in sight for the advancement of these technologies, especially with the continued development of electronics and computing as increasingly integral parts of EO instrumentation.

One of the disturbing trends in this technology is the constant narrowing of the role of engineers. As the technology matures, it becomes more difficult for anyone working in an area of the EO discipline to understand all that is being done in the related sciences and engineering. This book has been assembled as a first, small step toward exposing anyone working in EO to a wide range of critical topics through simple calculations and explanations. There is no intent to compete with stalwart texts or the many journals or conferences devoted to the EO field, all of which provide considerable detail in every area. Rather, this book is intended to allow any EO engineer, regardless of specialty, to make first guesses at solutions in a wide range of topics that might be encountered in system design, modeling, or fabrication, as well as to provide a guide for choosing which details to consider more diligently. Another distinguishing feature of this book is that it has few of the detailed derivations found in typical academic books. We are not trying to replace those books but to augment what they provide.
This book will help any EO team make quick assessments, generally requiring no more than a calculator, so that they quickly find the right solution for a design problem. The book is also useful for managers, marketeers, and other semitechnical folks who are new to the electro-optical industry (or are on its periphery) and want to develop a feel for the difference between the chimerical and the real. Students may find the same type of quick-calculation approach valuable, particularly in the case of oral exams in which the professor is pressuring the student to do a complex problem quickly. Using these assembled rules, you can keep your wits about you and provide an immediate and nearly correct answer, which usually will save the day. But after the day is saved, you should go back to the question and perform a rigorous analysis.

These rules are useful for quick sanity checks and basic relationships. Being familiar with the rules allows one to rapidly pinpoint trouble areas or ask probing questions in meetings. They aid in thinking on your feet and in developing a sense of what will work and what won’t. But they are not, and never will be, the last word. It is fully recognized that errors may still be present, and for that we apologize in advance to readers and to those from whom the material was derived.

As with the previous two editions of this book, the motivation for this edition was to provide a vehicle for engineers working in the electro-optical fields to create and check ideas that allow them to quickly assess whether a design idea would work, what it might cost, and how risky it is. All of us do this, but we usually don’t organize our own set of rules and make them public. To assist us in this endeavor, we have solicited the cooperation of as many experts as would agree to help. Their input gives us a wide variety of experience from many different technical points of view. Alas, technology advances, and all of us wonder how we can possibly keep up. Hopefully, this book will not only provide some specific ideas related to electro-optic technology, it will also suggest some ways of thinking about things that will lead to a whole new generation of such rules and ideas.

As we discovered with the previous editions of this book, not everyone has the same thing in mind when considering “a rule of thumb.” To qualify for our definition of a rule of thumb, a rule should be useful to a practitioner and possess at least most of the following attributes:
■ It should be easy to implement.
■ It should provide roughly the correct answer.
■ The main points should be easy to remember.
■ It should be simple to express.
■ It should highlight the important variables while diminishing the role of generally unimportant variables.
■ It should provide useful insight into the workings of the subject matter.

In the first edition of the book, we found it valuable to create a detailed standard form and stick to it as closely as possible. We did so with the second edition and this edition as well. However, as with the second edition, which concentrated on optical applications in telecommunications, we have simplified the format to eliminate duplication of material, grouping all additional information into the “Discussion” section of each rule, because a reader often will want more detail than is provided in the rule itself. References are provided whenever possible.
In addition, reference material is listed that can be considered recommended reading for the reader who wants more detail than could be presented in the “rule” and “discussion.” Not every entry in the references was used to create the rule. The reader should note that each rule “stands on its own,” so the abbreviations and terminology may not be entirely consistent throughout.

The rules in this book are intended to be correct in principle and to give approximately the right answer. Some are more accurate than others.
Some are laws of physics, and some represent existing technology trends. Many derive from observations made by researchers in the field, augmented by curve fitting that results in polynomial approximations. These can be quite good for exploring the proper operating point of a system, resolving trade studies, and other applications. Readers who want a more precise answer can consult the references, which usually contain more detailed analyses that can lead to an even better answer.

Rules based on the current state of the art will be useful in the future only to demonstrate how hard an early twenty-first-century electro-optical engineer had to work in these current Dark Ages. Many of the rules will become less useful or inappropriate as time marches on, but they are valid now and for the near future. Others derive directly from laws of physics and will, we expect, endure forever. However, even today, there may arise odd situations in which a particular rule is invalid. When this happens, a detailed understanding must exist between management and technician as to why the state of the art is being beaten. It isn’t impossible to beat the state of the art—only unlikely (unless you are trying).

The authors arrived at the same place by very different paths. John spent some of his career in astronomy before joining the aerospace industry to work on infrared sensors for space surveillance. He later worked on search-and-rescue and enhanced vision systems. Ed spent much of his career working on remote sensing technologies applied to Earth, its atmosphere and oceans, and, more recently, astronomical instruments and advanced pointing systems. We met in Denver in 1985, both working for a major government contractor on exotic electro-optical systems. Those were halcyon days, with money flowing like water and contractors winning billions of dollars for some concepts that were overly optimistic or barely possible at best. In the center of the whole fray were bureaucrats, politicians, and managers who were demanding that we design systems that would be capable of the impossible. We saw many requirements and goals being levied on our systems that were far from realistic, often resulting from confusing (and poorly understood) interpretations of the capabilities of optical and electro-optical systems and the properties of targets or backgrounds.

We found a common ground when managers discovered that many co-workers, in an attempt to outdo the competition, were promising to perform sensor demonstrations that violated many rules of engineering, if not physics. On one multibillion-dollar program, after some consistent exposure to neophytes proposing all sorts of undoable things, we decided to try to educate everyone by creating a half-serious, half-humorous posting for the local bulletin board (this was before web sites were ubiquitous) called “Dr. Photon’s Rules of Thumb.” Its content was a list of basic rules that apply when optics or electro-optics are being used. That first list consisted of simple scientific and engineering truths, inspired by the worst of the erroneous ideas that nontechnical people had proposed. Our early ideas weren’t that far from an EO Dilbert©, but we failed to put them into comic form. The goal was to eliminate many of the bad ideas and try to instill at least a little scientific foundation into the efforts of the team. Although the list of simple rules embarrassed a few of the misinformed, it was generally met with enthusiasm.
We found copies of it not only on this project but all across the company, and even among competitors.

When we decided to publish the material in a book, we needed many more rules than were contained in the original posting. The quest for rules led to key papers, hallmark books, colleagues, private submissions from experts, out-of-print books, foreign books, technical papers, and into the heart of darkness of past experience. What an education it was! Each of us was surprised and perplexed by at least a few of the things we discovered along the way. Some of these rules are common folklore in the industry. We developed a number ourselves. The original list included nearly 500 such rules, now winnowed down to the 300 or so that survive in this edition.
Rule selection was based on our perceptions of each rule’s practical usefulness to a wide range of users, designers, and managers of electro-optical systems in the early twenty-first century. The down-selection was accomplished by examining every rule for truthfulness, practicality, range of applicability, ease of understanding, and, frankly, how “cool” it is. As such, this is an eclectic assortment that will be more useful to some than to others, and more useful on some days than on others. Some rules include several nearly identical equations or formulations describing the same concept.

The bulk of this book consists of more than 300 rules, divided into 18 chapters. Each chapter begins with a short background and history of the general subject matter to set the stage and provide a foundation. The rules follow. Because many rules apply to more than one chapter, a comprehensive index and a detailed table of contents are included. We apologize for any difficulty you may have in finding a given rule, but each had to be put somewhere and, quite honestly, the choice among two or more candidate chapters was often arbitrary. Students and those new to the field will find the glossary useful. Here you will find definitions of jargon, common acronyms, abbreviations, and a lexicon intended to resolve confusing and ambiguous terms.

To summarize, this collection of rules and concepts represents an incomplete, idiosyncratic, and eclectic toolbox. The rules, like tools, are neither good nor bad; they can be used to facilitate the transition of whimsical concepts to mature hardware or to immediately identify a technological path worth pursuing. Conversely, misused, they can obfuscate the truth and, if improperly applied, produce incorrect answers. Our job was to refine complex ideas into simplified concepts and present these rules to you with appropriate cautions. However, it is your responsibility to use them correctly. Remember, it is a poor workman who blames his tools, and we hope you will find these tools useful.

Dr. Edward Friedman
John Lester Miller
Chapter 1
Acquisition, Tracking, and Pointing/Detection, Recognition, and Identification
Acquisition, tracking, and pointing (ATP) naturally decomposes into detection, recognition, and identification (DRI); all are critical functions in a number of scientific, military, and commercial security systems. The ATP function is often used to refer to the servo system, including the gimbals, stabilization, and slewing functions, whereas DRI often refers to the ability of the complete system to present information to a user (human or machine) tasked with performing an intelligent detection, recognition, or identification function. Several recent developments have vastly increased this capability, including multispectral and hyperspectral imagery, image fusion, image enhancement, and automatic target detection algorithms. Generally, the tasks of acquisition, tracking, and pointing occur before the target is detected, or in the first phases of detection, and traditionally have been analog in nature, although modern systems perform this all digitally. The detection, recognition, and identification process occurs after the ATP and generally involves a human, machine vision system, or automatic target recognizer. One can imagine early hunters going through the same activities as today’s heat-seeking missiles when the concepts of ATP/DRI are applied. For instance, the hunter looks for a bird to kill for dinner and, like the fighter pilot, eventually finds prey. He then recognizes it as a bird, identifies it as to the type of bird, and verifies that it is a tasty species of bird. Acquisition takes place as all other distractions are eliminated and attention is turned to this particular target. Next, the brain of the hunter and the computer in the aircraft begin to observe and follow the flight of the victim. Assuming that random motions are not employed (a protection tactic for both types of prey once they know they are being considered as a target), the tracking function begins to include anticipation of the trajectory of the bird/target. This anticipation is critical for allowing appropriate lead-ahead aiming, because the sensor-weapon cannot be pointed at the current position but rather must be pointed at the intended collision point. Finally, the hunter must aim (or point) his weapon properly, taking into account the lead-ahead effect as well as the size of the target. Today’s
technology solves the size problem by constantly tracking and correcting the trajectory of the weapon so that the closer it gets, the more accurate the hit. The same type of function occurs at much lower rates in a variety of scientific applications of electro-optics (EO). For example, astronomical telescopes must acquire and track the stars, comets, and asteroids that are chosen for study. The rate at which the star appears to cross the sky is much lower than that of a military or culinary target, but the pointing must be much more accurate if quality images are to be obtained. Very few EO systems do not have to perform ATP/DRI functions. Of course, the great leaps forward in ATP/DRI over the last century have derived from a combination of optical technologies and electronics. Some advances derive, in theory and practice, from the development of radar during WWII. Indeed, some of the rules in this chapter relate to the computation of the probability of detection of targets. These computations have direct analogs in both the EO and radar worlds. A common feature of both radar and EO tracking systems is the concern about the signal-to-noise ratio (SNR) that results from different types of targets, since the quality of the ATP/DRI functions depends in a quantitative way on the SNR. Targets that are not well distinguished from background and sensor noise sources will be poorly tracked, and the success of the system eventually will be compromised, even if the pointing system is very capable. The U.S. Army’s Night Vision and Electronic Sensors Directorate [part of U.S. Army Communications–Electronics Command (CECOM)] in Fort Belvoir has long been a leader in the scientific and engineering investigation of DRI for EO systems. The development of ATP systems rests on several basic parameters: the shape and size of the target, the range between target and sensor, the contrast between the target surface characteristics and those of the surrounding scene, the atmosphere, the intensity of the target signature, the ability of the system operator to point the weapon, and the speed at which it can accommodate the trajectory (crossing rate) of the target. Some modern implementations of these systems rely on advanced mathematical algorithms for estimating the trajectory, based on the historical motions of the target. They also rely heavily on the control systems that will point both the tracking system and the weapon. It is clear from recent successes in both military and scientific systems that ATP is a mature technology that is becoming limited only by the environment through which the system must view the target. Reference 1 details several of the hardware challenges a gimbaled system must overcome to provide this capability. This chapter covers a number of rules related to EO sensing and detection, and it also addresses some empirical observations related to how humans perform detection and tracking functions. It includes some mechanical and gimbal rules. These are frequently included in ATP discussions, and, frankly, we didn’t have enough good rules to form an independent chapter. Our ancestors did not know it, but they were using the same type of rules that appear in “Target Resolution vs. Line Pair Rules” in that quantitative expression is given for the resolution needed to perform the ATP functions. Throughout this chapter, we refer to the functions of detection, recognition, and identification. 
Richard Vollmerhausen2 offers the following definitions and cautions for a human searching an image for targets: ■ Target detection means that an observer (or machine) has found something that is potentially a target of interest. A detection does not mean that the observer knows that a target is present. Sometimes an object is detected because it is in a likely place; sometimes it is because a Sun glint or hot spot attracted the observer’s attention. Sometimes a target is detected because it looks like a target (target recognition). Generally, detection means that some further action must be taken to view a location or object. With a thermal infrared imager, if a hot spot is viewed and the observer switches to a narrow field of view to see what the hot spot contains, the detection occurred when the hot spot caused interest by the observer. The result of a detection is further interest by the observer; that is, the
observer might switch to a narrower field of view, select another sensor, call a lookout position, and so on. Quite often, detection, recognition, and even identification occur simultaneously. ■ Recognition involves discriminating which class of object describes the target. For military vehicles, the observer is asked to say whether the vehicle is a tank, a truck, or an armored personnel carrier (APC). If the observer identifies the class correctly (tank, truck, or APC), the task is scored as correct. That is, the observer might mistake a T72 Russian tank for an old American Sheridan tank. But he has correctly “recognized” that the target is a tank. It does not matter that he incorrectly identified the vehicle. ■ Target identification requires the observer to make the correct vehicle determination. In this case, the observer must correctly identify the target, not just the class. He must call a T72 a T72 and a Sheridan a Sheridan. Calling a T72 tank a Sheridan tank is scored as an incorrect choice. The reader should realize an important fact about recognition and identification. The difficulty of recognizing or identifying a vehicle depends both on the vehicle itself and on the alternatives or confusers. Task difficulty is established by the set of possible choices, not just the individual target that happens to be within the sensor field of view (FOV) at some point in time, and it may require more than the six or eight line pairs that are often stated as necessary for identification. For example, consider the following scenario. The U.S. is engaged with an enemy who has T72 Russian tanks. The U.S. has two allies, one using T62 Russian tanks (which look like T72 tanks) and the other using old American Sheridan tanks (which look different from Russian tanks). A “friend versus foe” decision by U.S. forces is much easier for the ally who uses the Sheridan than the ally who uses the Russian T62.
References 1. L. West and T. Segerstorm, “Commercial Applications in Aerial Thermography: Powerline Inspection, Research and Environmental Studies,” Proc. SPIE, Vol. 4020, 2000, pp. 382–386. 2. Private communications with Rich Vollmerhausen, 2003.
SNR REQUIREMENTS
A signal-to-noise ratio of 6 is adequate to perform most sensing and tracking functions. Any more SNR is not needed. Any less will not work well. Targets can be sensed with certainty to a range defined by the signal-to-noise ratio (SNR) and allowable false alarm rate. Beyond that range, they are usually not detectable.
Discussion
This rule derives directly from standard results in a variety of texts that deal with target detection in noise. See the rule, "Pd Estimation," in this chapter for additional details on how to compute the probability of detection for a variety of conditions. Clearly, there are some cases in which the probability of false alarm (Pfa) can be raised, which allows a small drop in the SNR for a required probability of detection (Pd). For example, if one is willing to tolerate a Pfa of 1 in 100, then the requirement on SNR drops from 6 to about 4 for 90 percent Pd. Conversely, there are some applications in which a much higher SNR is required (e.g., optical telecommunication receivers where a high SNR is required to achieve the required very low bit error rate).1 This rule assumes "white noise," in which noise is present in all frequencies with the same probability, and no "clutter." The situation is different with noise concentrated at a particular frequency or with certain characteristics. In general, if you have a priori knowledge of the characteristics, you can design a filter to improve performance. However, if you do not know the exact characteristics (which is usually the case with clutter), then your performance will be worse than expected by looking at tables tabulated for white noise. The more complex case in which the noise has some "color" cannot be dealt with so easily, because all sorts of possible characterizations can occur. From the graphic, one can see the high levels of probability of detection for an SNR of 6. Also, note that the probability of detection increases rapidly as SNR increases (at a false alarm of 1 × 10⁻⁴, doubling the SNR from 3 to 6 results in the probability of detection increasing from about 0.1 to well above 0.95). For example, the Burle Electro-Optics Handbook2 shows that a probability of detection in excess of 90 percent can be achieved only with a probability of false alarm of around 1 in 1 million if the SNR is about 6. In most systems, a Pfa of at most 1 in 1 million is about right. It must also be noted that the data shown in the reference are per-pixel rates so, in a large focal plane, there may be around 1 million pixels. Therefore, the requirement of Pfa of around 1 in 1 million limits the system to 1 false alarm per frame.
References 1. J. Miller and E. Friedman, Optical Communications Rules of Thumb, McGraw-Hill, New York, p. 43, 2003. 2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 112, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 3. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, p. 61.
THE JOHNSON CRITERIA
1. Detection (an object is present): 0.5 to 1 line pair or less
2. Orientation (the potential direction of motion of the object is known): 2 to 3 line pairs
3. Reading an English alphanumeric: 2.5 to 3 line pairs*
4. Recognition (the type of object is known): 3 to 4 line pairs
5. Identification (type of vehicle is known): 6+ line pairs
Discussion
The functions of detection, recognition, and identification of a target by a human observer (typically called observation tasks) depend on the signal-to-noise ratio and the precision with which the target is resolved. This is founded on empirical observations of actual likely users viewing tactical military targets. The Johnson criteria use the highest spatial frequency visible (fJ) at the apparent target-to-background contrast to quantify target acquisition range. Range is proportional to fJ, with the proportionality depending on task difficulty. It should be noted that, to use the criteria correctly, target area and contrast refer to averages over the group of targets involved in the scenario.
The Johnson criteria are used to quantify sensor image quality. First, one takes the RSS of the target-to-background contrast and determines the highest spatial frequency visible (to a human) through the entire system (telescope, focal plane, electronics, and display) with all of its noise sources. Then, range is given for a particular observation task based on the Johnson criteria. The actual range depends on task difficulty, which is determined by experiment and/or experience. Although the Johnson criteria have some difficulty when noise is present (as there is spread in the data), noise nevertheless is included in the criteria.
The most important caution about this rule is that it assumes that the observer is relying on a single spectral bandwidth for the observation task. A multispectral instrument need not be an imager to provide sufficient target detection information with less resolution. Also, the DRI processes are part and parcel of human perception and do not apply to machine vision or advanced image processing algorithms. These results are sensitive to test setup, human proficiency, and target class. The cycles presented here do not include the deleterious background effects of clutter; generally, more cycles are needed to perform a given observation task as clutter increases. Resolution as described above will result in proper performance by 50 percent of the observers asked to perform the observational task under nominal conditions.
More detail on the expected number of line pairs on a target is contained in Table 1.1, from Ref. 1. If you are unsure about the number of cycles required and need to do a calculation, use the nominal value in Table 1.2 as a guide.

TABLE 1.1 Selected Specific Number of Cycles of Resolution to Accomplish a Given Observation Task for Specified Targets

Target           Detection  Orientation  Recognition  Identification
Truck            0.9        1.25         4.5          8.0
M48 tank         0.75       1.2          3.5          7.0
Stalin tank      0.75       1.2          3.3          6.0
Centurion tank   0.75       1.2          3.5          6.0
Halftrack        1.0        1.5          4.0          5.0
Jeep             1.2        1.5          4.5          5.5
Command car      1.2        1.5          4.3          5.5
Soldier          1.5        1.8          3.8          8.0
105 Howitzer     1.0        1.5          4.8          6.0

*Johnson never addressed reading alphanumerics; however, this has become increasingly important with security and surveillance systems, and a body of work is developing that indicates that the required number of cycles is about 2.5 for a 50 percent probability. See Refs. 3 and 4 and associated rules in this chapter.
TABLE 1.2 Typical Cycles (Line Pairs) Required for a Given Observation Function (Assuming Two-Dimensional Sampling)

Observation function    Typical minimum required    Nominal value (applicable when more detail is unknown)    Typical maximum required
Detection               n/a                         0.75                                                      1.5
Classification          1                           2                                                         3
Reading alphanumeric    2.5                         2.8                                                       4
Recognition             2.5                         3                                                         4
Identification          5                           6                                                         n/a
A line pair is a particular way to define spatial resolution. It is equal to a dark bar and a white space, often called one cycle, across the critical dimension. Neophytes sometimes confuse this with the number of pixels, but it is not the same. The required number of pixels can be crudely assumed to be twice the number of line pairs, assuming perfect phasing across the target, focal plane, and display (see the Kell factor rule in Chapter 7, "Displays"). So, when 4.0 line pairs are quoted, it is equal to identifying a pattern with 4 bars and equal-width spaces between them. This requires a minimum of 8 pixel footprints across the target and often as many as 11 (8/0.7).
The above criteria tend to fall apart somewhat at the extremes of detection and identification. Certainly, subpixel detection can easily occur if the SNR is high. If the intensity is high enough, a target need not subtend a full pixel to be detected, or we would never be able to see a single star at night. (Actually, the optics of the eye provide a diffraction blur larger than a single rod or cone and, in fact, this technique of making the blur larger than the detector is often used in star trackers.) Conversely, it is often impossible to correctly distinguish between similar targets with many more than the generally accepted 6 or 8 line pairs across them (e.g., the difference between a Camaro and Firebird, or two similar Kanji symbols). In this rule, the mentioned line pairs are across the target's "critical" dimension, which can be assumed to be the minimal dimension for a worst-case scenario. However, the critical dimension is generally calculated as the square root of the target area (height × width) for two-dimensional sampling.
The Johnson criteria are inherent to the classic FLIR performance codes of ACQUIRE and NVTHERM and have become part of the basic lexicon in the electronic imaging community. Therefore, it must be noted that the current Night Vision and Electronic Sensors Directorate [a part of U.S. Army Communication–Electronics Command (CECOM)] has a rich history in this field, and its staff were pioneers of the basic concept of using the number of resolution elements across a target to detect, recognize, or identify it. Historically, the subject appeared in 1940s literature, authored by Albert Rose, J. Coltman, and Otto Schade, involving research into the human eye and perception. In the late 1950s, John Johnson (of the U.S. Army) experimented with the effect of resolution on one's ability to perform target detection, orientation, recognition, and identification functions using image intensifiers. This was followed by much additional research by Johnson, Ratches, Lawson, and others from the 1950s through 1970s. The effects of signal-to-noise were added in the 1970s by Rosell, Wilson, and Gerhart, and Vollmerhausen added to the models in the 1990s. Driggers, Vollmerhausen, and others are continuing to refine this concept in the early twenty-first century.
The U.S. Army is developing a new metric based on the Johnson criteria, but able to accommodate digital imagery. The Johnson criteria use the highest spatial frequency seen through the sensor and display at the apparent target-to-background contrast to quantify image "quality." The Johnson criteria relate to average contrast at one frequency, so there are problems (e.g., with sampled imagers, image boost, and digital filters)
that make the Johnson criteria conservative for 2-D imaging sensors. The new metric will attempt to accommodate these situations. Called the target task performance metric (TTP), it is equal to2

TTP = ∫ [ C_tgt · MTF(ξ) / CTF(ξ) ]^(1/2) dξ

where
C_tgt = contrast of the target
ξ = spatial frequency
MTF(ξ) = modulation transfer function as a function of ξ
CTF(ξ) = contrast transfer function as a function of ξ

Then, the range is calculated from

Range = TTP · √(A_t) / N_R

where
A_t = area of the target
N_R = value for the "N" required

Lastly, there has been much discussion and controversy about the actual number of line pairs needed to do the functions, depending on clutter, spectral region, quality of images, test control, display brightness, and a priori knowledge. However, rarely have there been suggestions that the above numbers are incorrect by more than a factor of 3.
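To make the arithmetic of the TTP metric concrete, the following sketch numerically integrates the expression above for user-supplied MTF and CTF curves and converts the result to a range estimate. It is only an illustration: the Gaussian MTF, the exponential-form CTF, the integration limits, and all numerical values in the example are assumptions of ours, not data from this chapter or from the TTP literature.

```python
import math

def ttp(c_tgt, mtf, ctf, xi_lo, xi_hi, steps=2000):
    """Numerically integrate TTP = integral of sqrt(c_tgt * MTF(xi) / CTF(xi)) d(xi), trapezoidal rule."""
    d_xi = (xi_hi - xi_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        xi = xi_lo + i * d_xi
        val = math.sqrt(c_tgt * mtf(xi) / ctf(xi))
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * val
    return total * d_xi

def range_from_ttp(ttp_value, target_area_m2, n_required):
    """Range (km) = TTP (cycles/mrad) * sqrt(target area, m^2) / N_required (cycles)."""
    return ttp_value * math.sqrt(target_area_m2) / n_required

# Assumed, illustrative sensor and observer curves (not from the text):
mtf = lambda xi: math.exp(-(xi / 8.0) ** 2)   # Gaussian system MTF, xi in cycles/mrad
ctf = lambda xi: 0.02 * math.exp(xi / 10.0)   # simple rising contrast threshold function

value = ttp(c_tgt=0.3, mtf=mtf, ctf=ctf, xi_lo=0.1, xi_hi=12.0)
print(round(value, 1), "TTP value")
print(round(range_from_ttp(value, target_area_m2=9.0, n_required=20), 1), "km (illustrative)")
```

The units mirror the convention used elsewhere in this chapter (target dimension in meters, spatial frequency in cycles per milliradian, range in kilometers).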
References 1. J. Howe, “Electro-Optical Imaging System Performance Prediction,” in Vol. 4, ElectroOptical Systems Design, Analysis and Testing, M. Dudzik, Ed., The Infrared and ElectroOptical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 92, 99, 1993. 2. R. Vollmerhausen and E. L. Jacobs, “New Metric for Predicting Target Acquisition Performance,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 3. J. Miller and J. Wiltse, “Resolution Requirements for Reading Alphanumerics,” Optical Engineering, 42(3), pp. 846–852, March 2003. 4. J. Wiltse, J. Miller, and C. Archer, “Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 5. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 121, 1974, found at http://www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 6. NVTherm Users Manual, U.S. Army or ONTAR, 2000. 7. L. Biberman, “Introduction: A Brief History of Imaging Devices for Night Vision,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 1-11 through 1-16, 2000. 8. G. Holst, CCD Arrays Cameras and Displays, JCD Publishers, Winter Park, FL, pp. 362–364, 1998. 9. J. Ratches et al., “Night Vision Laboratory Static Performance Model for Thermal Viewing Systems,” ECOM Report, ECOM-7043, 1975. 10. G. Gerhart et al., “The Evaluation of Delta T Using Statistical Characteristics of the Target and Background,” Proc. SPIE, Vol. 1969, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, IV, pp. 11–20, 1993.
PROBABILITY OF DETECTION ESTIMATION
Some simple expressions provide very good approximations to the probability of detection. For example,1

P_d ≈ (1/2) { 1 + erf[ (I_s − I_t) / (√2 · I_n) ] }

where I_s, I_t, and I_n are the detector currents associated with s (signal), t (threshold), and n (noise), respectively.
In addition, Kamerman2 has a different approximation, as follows:

P_d = (1/2) { 1 + erf[ √(SNR + 1/2) − √( ln(1/P_fa) ) ] }

for SNR > 2 and P_fa (the probability of false alarm) between 10⁻¹² and 10⁻³.
Discussion
The detection of targets is described in mathematical terms in a number of contexts. However, there is commonality between the formulations used for radar, optical tracking, and other applications. The above equations are based on theoretical analysis of detection of point sources in white noise and backed up by empirical observations.
The calculation of the probability of detection, Pd, of a signal in Gaussian noise is unfortunately quite complex. The exact expression of this important parameter is2

P_d = (1/π) ∫ from V_T to ∞ of  x exp[ −(x² + A²)/2 ] ( ∫ from 0 to π of exp( x A cos y ) dy ) dx

where
x, y = integration variables
A = √(2 SNR)
V_T = √( −2 ln(P_fa) )
P_fa = probability of false alarm
SNR = signal-to-noise ratio

As in any other approximation, the limits of application of the simplified version must be considered before using the rule. However, the rules shown above are quite broad in their range of application. These rules assume "white noise," which means that the noise that is present in the system has equal amplitude at all frequencies. This approximation is required to develop any general results, because the spectral characteristics of noise in real systems are more complex and can, of course, take on an infinite number of characteristics. This assumes point source detection, without consideration to resolution or "lines across a target."
In all cases, erf refers to the error function, defined as

erf(x) = (2/√π) ∫ from 0 to x of exp(−u²) du
The error function is widely available in reference tables and is included in most modern mathematical software. For example, it can be found in virtually any book on statistics or mathematical physics.
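A short sketch of the two approximations above, using Python's built-in error function. The threshold current is derived from the false-alarm probability through the standard Gaussian single-sample relation P_fa = (1/2) erfc[I_t/(√2 I_n)]; that relation, the bisection solver, and the numerical values are our additions, not equations from this rule.

```python
import math

def pd_burle(i_s, i_t, i_n):
    """Pd ~= 0.5 * (1 + erf((Is - It) / (sqrt(2) * In))) -- first approximation above."""
    return 0.5 * (1.0 + math.erf((i_s - i_t) / (math.sqrt(2.0) * i_n)))

def threshold_from_pfa(pfa, i_n):
    """Threshold giving the requested single-sample Pfa for Gaussian noise (assumed relation)."""
    # Solve 0.5 * erfc(It / (sqrt(2) * In)) = Pfa for It by bisection.
    lo, hi = 0.0, 20.0 * i_n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / (math.sqrt(2.0) * i_n)) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pd_kamerman(snr, pfa):
    """Pd = 0.5 * (1 + erf(sqrt(SNR + 0.5) - sqrt(ln(1/Pfa)))); stated for SNR > 2, 1e-12 < Pfa < 1e-3."""
    return 0.5 * (1.0 + math.erf(math.sqrt(snr + 0.5) - math.sqrt(math.log(1.0 / pfa))))

i_n = 1.0                                  # noise current (arbitrary units)
i_t = threshold_from_pfa(1e-6, i_n)        # threshold set for Pfa = 1e-6
print(round(pd_burle(6.0 * i_n, i_t, i_n), 2))   # SNR of about 6 gives Pd of roughly 0.9
print(round(pd_kamerman(10.0, 1e-4), 2))         # illustrative use of the second form
```

The first printed value is consistent with the "SNR Requirements" rule earlier in this chapter, which quotes better than 90 percent detection at a false alarm probability of about one in a million for an SNR of about 6.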
References 1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 111, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 2. G. Kamerman, “Laser Radar,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, p. 45, 1993. 3. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, Norwood, MA, pp. 147–157, 1993.
CORRECTING FOR PROBABILITY OF CHANCE
Vollmerhausen1 states that the probability of chance must be accounted for before using experimental data to calibrate DRI models.

P_model = ( P_measured − P_chance ) / ( 1 − P_chance )

where
P_model = the probability to use in the model
P_measured = the measured probability
P_chance = the probability of correctly identifying (or recognizing) the target or target class just by chance
Discussion Even a blind squirrel will occasionally get a nut. That is, by chance, a number of correct answers will result. Models for detection, recognition, and identification (DRI) must have this “chance” probability of guessing correctly removed using the above equation. Guessing doesn’t inherently reduce the chance of doing things right. This can be determined only if you know the truth with certainty. The point is that developing models requires that the amount of guessing be known and appropriately considered. If 4 targets (target classes) are used in the experiment, then Pchance is 0.25, and if 12 targets (target classes) are used, then Pchance is 0.08. To compare model predictions with field data, the above formula is inverted as shown below: Pmeasured = Pmodel ( 1 – Pchance ) + Pchance
Example
Assume that the Johnson criteria are used to find the probability of classifying tracked versus wheeled vehicles. The critical dimension of the vehicles (square root of area) is 4 m. From the Johnson criteria rule, explained elsewhere in this chapter, the N50 (the number of cycles across the critical dimension for 50 percent of respondents to correctly perform the observation task) for this task is 2. In general, if there are only two categories, the observer will be correct half the time with his eyes closed! The model probabilities must be corrected for chance as follows:

Predicted measured probability = Model probability × (1 − P_chance) + P_chance
Predicted measured probability = 0.5 × (1 − 0.5) + 0.5
Predicted measured probability = 0.75

Now consider an example. A scanning forward-looking infrared (FLIR) system with an instantaneous field of view of 0.1 milliradians (mrad) (and a cutoff frequency, F_cutoff, of 1/0.1 cycles/mrad) is used to detect a target with a critical dimension (w) of 4 m. Therefore, from the resolution requirement rule,

R ≈ w F_cutoff / (1.5 N_50)
R ≈ (4)(1/0.1) / [(1.5)(2)]
R ≈ 13.3 kilometers

Thus, at a range of 13.3 km, the task is performed correctly 75 percent of the time.
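The chance correction and its inverse are easy to wrap in a pair of helper functions. The sketch below simply reproduces the tracked-versus-wheeled numbers from the example above; the function names are ours, not part of any standard model.

```python
def p_model_from_measured(p_measured, p_chance):
    """Remove the guessing contribution from a measured probability."""
    return (p_measured - p_chance) / (1.0 - p_chance)

def p_measured_from_model(p_model, p_chance):
    """Inverse relation: predict what a field test should measure."""
    return p_model * (1.0 - p_chance) + p_chance

# Two classes (tracked vs. wheeled) give Pchance = 0.5; model prediction at range is 0.5
print(p_measured_from_model(0.5, 0.5))   # 0.75, as in the example above
print(p_model_from_measured(0.75, 0.5))  # 0.5, recovering the model probability
```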
References 1. Private communications with Rich Vollmerhausen, 2003. 2. R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, New York, 2001, pp. 20–83.
DETECTION CRITERIA
Targets can be easily detected if half (or more) of the photons in the pixel are from that target and half are from noise sources.
Discussion
The signal-to-noise ratio of a detector is

SNR = e_s⁻ / √( e_s⁻ + e_n⁻ )

In this equation, e_s⁻ represents the number of electrons generated in the detector by signal photons [and for point sources, the background photons within the instantaneous field of view (IFOV)]. It is equal to the rate of arrival at the focal plane of signal photons times the quantum efficiency of the detector and the integration time. e_n⁻ is the number generated by all of the noise sources. e_n⁻ is made up of contributions from photon noise in the background along with the contribution from leaked background clutter along with internal detector noise. If e_s⁻ = e_n⁻, the signal-to-noise ratio is

SNR = e_s⁻ / √( 2 e_s⁻ )   or   0.71 √( e_s⁻ )

Because the generation of electrons from the signal is proportional to the arrival of photons from the target, and this number is generally quite large, the SNR can be quite high. This is rarely the result expected by the uninitiated.
Signal-to-noise ratios must still be greater than about 5 for good target detection prospects; therefore, more than 70 photons per integration time should fall on the detector (assuming a 70 percent quantum efficiency). If no noise sources are present, then this equation is reduced to

SNR = √( e_s⁻ )
and only about 25 signal electrons are needed for detection. Photons from noise sources in the scene generate electrons in direct proportion to the quantum efficiency at their wavelength. This is because both the signal and background photon fluxes follow Poisson statistics. For laser detection, avalanche photodiode, UV, and visible systems, the noise is frequently dominated by photon noise from the target and background. Therefore, this rule holds true if the total of the target and background photogenerated electrons is higher (on the order of 50 electrons or more), because this assures a current signal-to-noise ratio in excess of 6 or 7. This also holds true for IR systems, as both their targets and backgrounds generate many more electrons. This rule does not apply for detection in high clutter or background conditions or where noise sources other than that of the detector dominate. For example, in a Sony charge-coupled device (CCD), the noise per pixel per field from the video amplifier is typically 50 or 60 electrons, corresponding to a 2500 to 3600 electron well charge. Thus, this rule would not apply to low light level applications of a commercial CCD.
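A minimal sketch of the electron-count arithmetic above. The photon count, the quantum efficiency, and the assumption that the noise electrons equal the signal electrons follow the discussion; everything is expressed in photo-generated electrons per integration time.

```python
import math

def snr_from_electrons(signal_e, noise_e):
    """SNR = e_s / sqrt(e_s + e_n) for photon-noise-limited detection."""
    return signal_e / math.sqrt(signal_e + noise_e)

photons = 70          # photons per integration time on the detector
qe = 0.7              # quantum efficiency
e_signal = photons * qe

# Half the electrons from the target, half from noise sources (the rule's condition)
print(round(snr_from_electrons(e_signal, e_signal), 1))   # about 5

# No noise sources: SNR = sqrt(e_s), so 25 signal electrons give SNR = 5
print(round(snr_from_electrons(25, 0), 1))
```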
ESTIMATING PROBABILITY CRITERIA FROM N50
The Johnson criteria, mentioned elsewhere in this chapter, provide for a 50 percent probability for target acquisition tasks (N50). To estimate higher or lower probabilities, one can use this empirical curve fit:

P(N) = (N/N_50)^E / [ 1 + (N/N_50)^E ]

where
N_50 = the number of cycles needed to be resolved across the target dimension for 50 percent of the observers to get the target choice correct (with the probability of chance subtracted); target dimension is typically taken as the square root of target area
N = the number of cycles actually resolved across the target
E = an empirical scaling factor, equal to

E = 1.7 + 0.5 (N/N_50)
Discussion The Johnson criteria (see the “Johnson Criteria” rule, p. 4) were originally developed to yield the probability of detection, recognition, and identification (DRI) of military targets with a 50 percent correct probability. Much work has been done to extend the 50 percent
to higher numbers (e.g., for weapon targeting systems) and lower probabilities (e.g., for general surveillance systems). Frequently, the engineer will be asked for various levels of probability. The above equations are based on curve fits to empirical data. A similar equation is provided by Vollmerhausen in Ref. 1. The difference is that N/N_50 is replaced by MTF_cutoff/K, where K is fitted by experimental data and MTF stands for the modulation transfer function, a measure of the optical performance of the system. Table 1.3 can be used for some specific probabilities and represents modernized data from Ref. 2.

TABLE 1.3 Suggested Multiplier from N50 to Nx*

Probability (Nx)    Multiplier to go from N50 to the probability on the left
0.98                >2 and ≤3
0.95                2
0.8                 1.5
0.7                 1.2
0.3                 0.75–0.8
0.1                 0.5

*For example, to go from N50 to N95, multiply N50 by 2.
The equation assumes that the observer has plenty of time for the observation task. Also, the term probability has a unique meaning in this case. Given that a group of observers try to detect, recognize, or identify each vehicle in a group of targets, then the probability is the fraction of correct answers out of the total number of tries. Readers should realize that the empirical equation for "E" has varied over the years, and they may find related but different versions in the references and older models, such as E = 3.8 + 0.7(N/N50). The version used in this rule was the most recent and acceptable as of the publication of this book and should be used. However, authorities are refining this, so the equation might experience slight changes in the future. Figure 1.1 shows the performance described by the above equation as a function of N for each of the tasks. On the horizontal axis is the number of resolution elements across the critical dimension, and the vertical axis shows the probability of successfully completing the task of detection, classification, recognition, or identification. This is based on the number of resolution elements (not line pairs) for N50 of 1.5 for detection, 3.0 for classification, 6.0 for recognition, and 12.0 for identification. Thus, when there are 3 cycles across the critical dimension of a target, there are 6 pixels, and the plot shows a 50 percent probability of recognition. When there are 10 pixels, the probability of correct recognition jumps to almost 90 percent. Much of the work in this field has been done by the Night Vision and Electronic Sensors Directorate, part of U.S. Army Communication–Electronics Command (CECOM).
FIGURE 1.1 Probability of detection, classification, recognition, and identification as a function of the number of pixels across the critical dimension.
Interested readers should refer to the several other rules in this chapter relating to similar topics, particularly the one on the Johnson criteria.
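The curve fit above is short enough to evaluate directly. The sketch below implements the equation exactly as stated (including the empirical exponent E); the N50 value and the list of sample points are ours and are meant only to show the shape of the curve.

```python
def p_task(n, n50):
    """Probability of completing the observation task when n cycles are resolved (N50 curve fit)."""
    ratio = n / n50
    e = 1.7 + 0.5 * ratio          # empirical scaling factor from the rule
    return ratio ** e / (1.0 + ratio ** e)

n50_recognition = 3.0              # cycles for 50 percent correct recognition (typical value)
for n in (1.5, 3.0, 4.5, 6.0):
    print(n, round(p_task(n, n50_recognition), 2))
```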
References 1. R. Vollmerhausen et al., “Influence of Sampling on Target Recognition and Identification,” Optical Engineering 38(5): p. 763, 1999. 2. G. Holst, CCD Arrays Cameras and Displays, JCD Publishing, Winter Park, FL, 1996, pp. 364–365. 3. J. Howe, “Electro-Optical Imaging System Performance Prediction,” in Vol. 4, ElectroOptical Systems Design, Analysis and Testing, M. Dudzik, Ed., The Infrared and ElectroOptical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 92, 1993. 4. NVTHERM Users Manual, ONTAR, Andover, MD, 1999. 5. FLIR92 Users Manual, U.S. Army. 6. Software Users Manual for TV Performance Modeling, p. A36, Sept. 1991. 7. R. Harney, “Information-Based Approach to Performance Estimation and Requirements Allocation in Multisensor Fusion for Target Recognition,” Optical Engineering, 36(3): March 1997. 8. J. Howe, “Thermal Imaging Systems Modeling—Present Status and Future Challenges,” in Infrared Technology XX, B. F. Andresen, Ed., Proc. SPIE 2269, pp. 538–550, 1994. 9. R. Driggers et al., “Targeting and Intelligence Electro-Optical Recognition Modeling: A Juxtaposition of the Probabilities of Discrimination and the General Image Quality Equation,” Optical Engineering, 37(3): pp. 789–797, March 1998. 10. J. Ratches et al., “Night Vision Laboratory Static Performance Model for Thermal Viewing Systems,” ECOM Report ECOM-7043, 1975. 11. Private communications with R. Vollmerhausen, 2003.
GIMBAL TO SLEWED WEIGHT
The mass of a gimbal assembly is directly proportional (or proportional to a slight power, such as 1.2) to the slewed mass (the payload).
Discussion
This is based on typical industry experience for state-of-the-art systems. To use this scaling, accelerations, velocities, base rejection, and stability must be identical; slewed masses should be within a factor of 3 of each other; and the gimbals must be of the same type, material, and number of axes. Lastly, environmental, stability, stiffness, and slewing specifications must be similar.
A given gimbal's mass depends strongly on its design, material composition, size, required accelerations, base motion rejection, stability, the lightweighting techniques applied, and the mass and momentum of the object to be pointed. The mass of an unknown gimbal assembly can be estimated by scaling from the known mass of a similar gimbal. Past designs and hardware indicate that the gimbal mass scales approximately linearly with the mass to be slewed. See Fig. 1.2.
FIGURE 1.2 FLIR Systems’ SAFIRE, an example of a gimbaled electro-optical system. (Courtesy of FLIR Systems Inc.)
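Because the rule is a simple proportionality, a scaling estimate takes one line of arithmetic. The sketch below assumes a known, similar gimbal as the anchor point; the exponent range of 1.0 to 1.2 comes from the rule, while the example masses are placeholders of ours.

```python
def scaled_gimbal_mass(known_gimbal_kg, known_payload_kg, new_payload_kg, exponent=1.0):
    """Estimate a new gimbal's mass by scaling a similar design (exponent of 1.0 to 1.2)."""
    return known_gimbal_kg * (new_payload_kg / known_payload_kg) ** exponent

# Known design: a 40 kg gimbal slews a 25 kg payload; the new payload is 50 kg (illustrative numbers)
print(round(scaled_gimbal_mass(40.0, 25.0, 50.0), 1))                # linear scaling
print(round(scaled_gimbal_mass(40.0, 25.0, 50.0, exponent=1.2), 1))  # mildly superlinear scaling
```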
IDENTIFICATION AND RECOGNITION IMPROVEMENT FOR INTERPOLATION
Performance gains depend on the conditions and the interpolation technique, and results vary widely. Generally, identification range improves by 20 to 65 percent with simple interpolation.
Discussion
The authors of this book have long heard anecdotal evidence of increased detection, recognition, and identification (DRI) ranges as a result of simple interpolation between pixels to pseudo-enhance the display resolution. The above rule is based on one of the few actual tests of this concept.1
Pixel interpolation is very useful when combined with electronic zoom (e-zoom). E-zoom makes the picture bigger and overcomes the modulation transfer function (MTF) of the display and eye. However, e-zoom is often the trivial duplication of pixels. When the duplicated group of pixels becomes easily visible to the eye, the pixel structure hides (disrupts) the underlying image. When pixel structure is visible, the eye cannot spatially integrate the underlying image. To avoid this situation, interpolation is used. This can make a substantial difference in the operation of real-world sensors. For example, imagine the electronic zoom being so large that the pixelation is readily apparent. In this case, the loss of resolution (MTF loss) from the display or video recorders has little impact on the final system resolution, ignoring spurious response. This becomes especially important when dealing with surveillance and security legal criteria and post-surveillance processing issues, as these situations involve factors other than detection and are frequently recorded or digitally transmitted.
Many algorithms exist for obtaining superresolution. These superresolution algorithms take pixel interpolation a step further. Rather than a simple interpolation, they actually increase the signal frame spatial content. This is typically done by sampling a scene faster than the displayed rate and imposing a slight pixel offset (sometimes using the natural optical flow, a deliberately imposed "dither," target motion, or jitter to accomplish the offset). For point sources (or sharp edges), the Gaussian blur across several pixels can be used to calculate a position much more accurately than the IFOV and to provide superresolution accuracy. The above discussion assumes that resolution is the limiter, not noise. Superresolution and pixel interpolation do not help to reduce noise.
References 1. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, pp. 105–108, 2000. 2. J. Schuler and D. Scribner, “Dynamic Sampling, Resolution Enhancement, and Super Resolution,” in Analysis of Sampled Imaging Systems, R. Vollmerhausen and R. Driggers, Bellingham, SPIE Press, Bellingham, WA, pp. 125–138, 2000. 3. http://www.geocities.com/CapeCanaveral/5409/planet_index.html, 2003. 4. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 60, 61, 292, 1994. 5. N. Nguyn, P. Milanfar, “A Computationally Efficient Superresolution Image Reconstruction Algorithm,” IEEE Transactions on Image Processing, 10(4), April 2001. 6. Private communications with Rich Vollmerhausen, 2003.
RESOLUTION REQUIREMENT
1. For scanning sensors, the required system cutoff spatial frequency in cycles per milliradian (generally determined by the detector cutoff spatial frequency) can be estimated from

F_cutoff ≈ 1.5 N R / w

2. For staring sensors, the required half-sample frequency in cycles per milliradian can be estimated from
F_half-sample ≈ 0.85 N R / w

where
F_cutoff = required system resolution in units of cycles per milliradian = 1/(detector active area subtense)
F_half-sample = half-sample rate of the sensor = 0.5/(detector pitch)
N = required number of cycles or line pairs across the target (typically 1.5 for detection, 3 for recognition, and 6 for identification)
w = square root of the target projected area in meters
R = slant range in kilometers
Discussion
This rule assumes that the resolvable spatial frequency is 65 percent of the system's cutoff frequency for scanning systems. In some cases, the resolvable frequency can be nearer the cutoff frequency. The factor of 1.5 can therefore vary from 1.2 to 1.5. For staring sensors, the factor can vary from 0.7 to 1. That is, the resolved frequency normally is beyond the half-sample rate (sometimes called the Nyquist frequency).
The required resolution of an electro-optical system supporting a human operator depends on the target recognition (or detection) range and the number of pixel pairs or line pairs required for a human to perform the function at a given level. In general, humans can resolve frequencies in the range of 60 to 80 percent of the system's resolution. Also, see the rule in this chapter that describes the number of line pairs needed to resolve various types of targets.
As mentioned in a related rule in this chapter, a system that can resolve about five line pairs across a target can provide very effective target recognition. Using this rule, we obtain

F_r ≈ 1.5 N R / w = (1.5)(5)(1) / 3 = 2.5

Therefore, for a 3-m target at a range of 1 km, the required number of cycles per milliradian for recognition is about 2.5.
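The two estimates invert easily between required spatial frequency and range, so a small helper covers both directions. The sketch below reproduces the recognition example above and the 13.3-km example from the probability-of-chance rule; units follow the rule (w in meters, R in kilometers, frequencies in cycles per milliradian).

```python
def required_cutoff(n_cycles, range_km, w_m, scanning=True):
    """Required spatial frequency (cycles/mrad): 1.5*N*R/w for scanning, 0.85*N*R/w for staring."""
    factor = 1.5 if scanning else 0.85
    return factor * n_cycles * range_km / w_m

def range_for_cutoff(f_cutoff, n_cycles, w_m, scanning=True):
    """Invert the same relation to get the range in kilometers."""
    factor = 1.5 if scanning else 0.85
    return f_cutoff * w_m / (factor * n_cycles)

print(required_cutoff(5, 1.0, 3.0))              # 2.5 cycles/mrad for recognition at 1 km
print(round(range_for_cutoff(10.0, 2, 4.0), 1))  # 13.3 km, the scanning FLIR example
```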
References 1. B. Tsou, “System Design Considerations for a Visually Coupled System,” in Vol. 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, pp. 520–521, 1993. 2. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham WA, pp. 92–95, 2000. 3. NVTHERM Users Manual, ONTAR (www.ontar.com), 2000.
MTF SQUEEZE
1. The modulation transfer function (MTF) squeeze factor (SQ) for a target identification or recognition task is

SQ = 1 − 0.5 SRR

so that the MTF of a sampled imager is "squeezed" to become MTF_SQ.
2. Equivalently, SQ can be used to adjust N50 or even the range itself. However, SQ should be applied only once (to MTF, N50, or the range predicted for a nonsampled imager).

N_50-corrected = N_50 / SQ

or

3. Range_SA = Range_NS × SQ, where Range_SA is the prediction for the sampled imager, and Range_NS is the range predicted for a nonsampled imager. SRR is the total spurious response ratio, SR(f) = aliased content at spatial frequency f, and TR(f) = the system transfer response.

SRR = ∫ SR(f) sin(θ) df / ∫ TR(f) df

where

θ = tan⁻¹[ SR(f) / TR(f) ]
Discussion The range performance of sampled imagers can be affected by sampling artifacts. For nonsampled imagers, the Johnson criteria are generally used to predict range. For sampled imagers, the Johnson criteria are again used, but the range is “squeezed” to account for the degrading effect of sampling. The “squeeze” can be applied in three ways, each essentially equivalent to the others. 1. The MTF can be squeezed as described below. 2. The N50 used for range prediction can be divided by the squeeze factor. 3. The range prediction itself can be squeezed by multiplying by the squeeze factor. Although staring arrays have significant signal-to-noise advantages over scanning systems, in the 1990s researchers in several organizations noted that some nonscanning (staring) systems were not performing as well as might be expected, based on scanning performance models. They traced the cause to sampling artifacts associated with staring focal plane arrays. A small presample blur results in aliased frequency content in the displayed image. A small presample blur is associated with either fast optics (so that the diffraction blur is small) or with a small detector fill factor. A small display blur, which results in raster or visible pixel edges, also causes sampling artifacts. This occurs with displayed images, regardless of wavelength (visible, infrared, MMW, and so on). Vollmerhausen and Driggers quantified this effect and included this in the NVTHERM performance model.1 As Vollmerhausen2 astutely explains, “MTF squeeze factor adjusts the predicted detection, recognition, or identification range for sampled imagers to account for sampling artifacts.” Figure 1.3 illustrates the transfer and spurious response for a sampled imager. The presample blur MTF is sampled, resulting in replicas of the presample MTF at all multiples of the sample frequency. The display MTF multiplies the presample MTF, resulting in the transfer response of the system. The display MTF multiplies the presample MTF replicas, resulting in the aliased content—also called the spurious response. The transfer response and spurious response are used to calculate the SRR and the resulting squeeze factor SQ.
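The SRR integral and the squeeze factor can be evaluated numerically once the transfer response and spurious response are tabulated on a common frequency grid. The sketch below does exactly that with a trapezoidal rule; the triangular SR and TR arrays at the bottom are made-up placeholders, not real sensor data, and the corrected N50 line simply applies the rule to an assumed recognition N50 of 3.

```python
import math

def trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def srr(sr, tr, f):
    """SRR = integral of SR(f)*sin(theta) df / integral of TR(f) df, with theta = atan(SR/TR)."""
    weighted = [s * math.sin(math.atan2(s, t)) for s, t in zip(sr, tr)]
    return trapz(weighted, f) / trapz(tr, f)

def mtf_squeeze(sr, tr, f):
    """Squeeze factor SQ = 1 - 0.5 * SRR."""
    return 1.0 - 0.5 * srr(sr, tr, f)

# Placeholder curves on a normalized spatial-frequency grid (illustrative only)
f = [0.05 * i for i in range(41)]
tr = [max(0.0, 1.0 - x) for x in f]                    # transfer response
sr = [0.3 * max(0.0, 1.0 - abs(x - 1.0)) for x in f]   # aliased (spurious) response replica

sq = mtf_squeeze(sr, tr, f)
print(round(sq, 3), "squeeze factor; corrected N50 =", round(3.0 / sq, 2))
```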
FIGURE 1.3 Sampling results in replicas of presample MTF at multiples of sample frequency. The display MTF multiplies the presample MTF to form the transfer response. The display MTF multiplies the replicas to form the spurious response (aliasing). (From Ref. 3.)
References 1. NVTHERM Users Manual, ONTAR, 2000. 2. Private communications with Richard Vollmerhausen, 2003. 3. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, p. 92–95, 2000.
PSYCHOMETRIC FUNCTION
The psychometric function is well matched by a Weibull function as follows:1

P(x) = 1 − (1 − γ) · 2^[ −(x/α)^β ]

where
P = the fraction of correct responses
x = the stimulus strength
β = a parameter that determines the steepness of the curve
γ = guess rate (0.50)
α = stimulus strength at which 75 percent of the responses are correct
Discussion Reference 1 points out that “the probability of a correct response (in a detection task) increases with stimulus strength (e.g., contrast). If the task is impossible because the contrast is too low, the probability of a correct response is 50 percent (guess rate), and if the
task is easy, the observer score will be 100 percent correct. The relationship between stimulus strength and probability of a correct response is called the psychometric function." An example is plotted in Fig. 1.4. Threshold is defined as the stimulus strength at which the observer scores a predefined correct level (e.g., 75 percent). This figure plots the function for various values of β. These plots assume a γ value of 0.5 and an α value of 3 (hence, they all converge at that stimulus point). Similarly, Refs. 2 and 3 give a related value for a probability of detection (Pd) as a function of contrast as

P_d ≅ 1/2 ± 1/2 { 1 − exp[ −4.2 (C/C_t − 1)² ] }
where
C = the contrast of interest, other than C_t
C_t = the threshold contrast of 50 percent correct detection

A minus sign is used when C < C_t where the ± is shown.
The probability (P_1) that a human will search a field that is known to contain one target and lock onto the target with his foveal (see Chap. 8, "The Human Eye," for a discussion of the fovea) vision for a sufficient time (say, 1/4 of a second) is difficult to estimate, but Refs. 2 and 3 suggest a relationship of

P_1 = 1 − exp[ −(700/G)(a_t/a_s) t ]

where
a_t = area of the target
a_s = area to be searched
t = time
G = a congestion factor, usually between 1 and 10

The total probability of detection is

P_1 · P_d · η

where η = an overall degradation factor arising from noise.

FIGURE 1.4 Example of the psychometric function. The fraction of correct responses gradually increases with stimulus strength (e.g., contrast) from 50 percent (representing chance) to 100 percent. The threshold is defined as the stimulus strength at which the observer scores 75 percent correct. The threshold is independent of the decision criterion of the observer. The various plotted lines in the figure are for different betas from 2 to 8.
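Both the Weibull psychometric function and the search-probability expression above are one-liners, evaluated in the sketch below. The parameter values mirror those quoted in the discussion (γ = 0.5, α = 3); the β value, the search areas, the time, and the congestion factor are arbitrary example numbers of ours.

```python
import math

def psychometric(x, alpha, beta, gamma=0.5):
    """Weibull form: P(x) = 1 - (1 - gamma) * 2 ** (-(x / alpha) ** beta)."""
    return 1.0 - (1.0 - gamma) * 2.0 ** (-((x / alpha) ** beta))

def p_search(a_target, a_search, t_seconds, congestion=5.0):
    """P1 = 1 - exp(-(700 / G) * (at / as) * t), the probability of fixating the target during search."""
    return 1.0 - math.exp(-(700.0 / congestion) * (a_target / a_search) * t_seconds)

print(round(psychometric(3.0, alpha=3.0, beta=3.5), 2))   # 0.75 at threshold, by the definition of alpha
print(round(psychometric(6.0, alpha=3.0, beta=3.5), 3))   # well above threshold
print(round(p_search(1.0, 5000.0, 10.0), 2))              # target area 1, search area 5000, 10 s
```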
References 1. P. Bijl and J. Valeton, “Bias-Free Procedure for the Measurement of the Minimum Resolvable Temperature Difference and Minimum Resolvable Contrast,” Optical Engineering 38(10): 1735–1742, October 1999. 2. Electro-Optics Handbook, Burle Inc., Lancaster, PA, pp. 120–124, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 3. H. Bailey, “Target Detection Through Visual Recognition: A Quantitative Model,” Rand Corporation, Santa Monica, CA, February 1970. 4. P. Bijl, A. Toet, and J. Valeton, “Psychophysics and Psychophysical Measurement Procedures—Introduction,” in The Encyclopedia of Optical Engineering, R. Driggers, Ed., Marcel Dekker, New York, 2003. 5. K. Brunnstrom, B. Schenkman, and B. Jacobson, “Object Detection in Cluttered Infrared Images,” Optical Engineering 42(2): 2003, pp. 388–399.
RAYLEIGH CRITERION
The Rayleigh criterion states that the minimum angular separation at which two point sources can be resolved occurs when the peak of one of the Airy disks falls at the position of the first minimum of the other.
Discussion
Lord Rayleigh, in an early attempt at quantifying resolution, developed this criterion for prism and grating spectroscopes. It says that one can determine two distinct, equal intensity point sources if the peak of one of the Airy disks (as produced by an optical instrument) falls at the first dark ring of the other. The distribution from the second point source peaks at this dark band of the distribution from the first point source (Fig. 1.5). When the energy is added, as is the case for most sensors, a distinct "saddle" results with a dip between the two peaks (Fig. 1.6). The energy distribution of a diffraction-limited Airy disk follows a Bessel function (see the associated rule about the Airy disk in Chap. 13, "Optics"), the general function being

4 J₁²(x) / x²

The addition of the intensity peaks from intersecting Airy disks results in a simple saddle with one minimum. For grating systems of unobscured, diffraction-limited, monochromatic light with each point source having the same intensity, the minimum (in the middle of the saddle) has an intensity of 8/π², or 0.811 of the peak intensity. For circular apertures (which are more common) it turns out that the saddle is somewhat less. As shown in Fig. 1.6, for circular apertures, it is much closer to 0.73.
FIGURE 1.5 Intersection of two monochromatic Airy disks.
FIGURE 1.6 When the Rayleigh criterion is met, the addition of the intensity of the two Airy disks produces two peaks with a minimum between them. This figure illustrates the Rayleigh criterion for two point sources viewed through a circular aperture.
For unobscured, circular diffraction-limited optics, this works out to some simple equations that can be used to provide quick “on-your-feet” estimations. For example, angular diffraction blur spot radius is equal to 1.22( λ/d ) where λ = wavelength d = aperture diameter
In terms of distance on the focal plane, 1.22λ( f /# ) and in terms of numerical aperture (NA), this reduces to 0.61λ/NA Another related criterion is the Sparrow limit. This limit indicates that resolution can be achieved with a closer spacing of the Airy disks. The Sparrow limit is defined as 0.47λ/NA
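These diffraction expressions reduce to a few lines of arithmetic. The sketch below evaluates the angular Rayleigh criterion, the focal-plane separation, and the Sparrow limit; the 10-cm aperture, the f/4 speed, and the 0.55-µm wavelength are arbitrary example values, and the NA is taken from the paraxial relation NA ≈ 1/(2 f/#).

```python
def rayleigh_angle_rad(wavelength_m, aperture_m):
    """Angular Rayleigh criterion: 1.22 * lambda / d (radians)."""
    return 1.22 * wavelength_m / aperture_m

def rayleigh_focal_plane_m(wavelength_m, f_number):
    """Rayleigh separation on the focal plane: 1.22 * lambda * (f/#)."""
    return 1.22 * wavelength_m * f_number

def sparrow_focal_plane_m(wavelength_m, numerical_aperture):
    """Sparrow limit on the focal plane: 0.47 * lambda / NA."""
    return 0.47 * wavelength_m / numerical_aperture

wavelength = 0.55e-6          # visible light, meters
aperture = 0.10               # 10 cm aperture, meters
f_number = 4.0
na = 1.0 / (2.0 * f_number)   # paraxial approximation for the numerical aperture

print(round(rayleigh_angle_rad(wavelength, aperture) * 1e6, 2), "microradians")
print(round(rayleigh_focal_plane_m(wavelength, f_number) * 1e6, 2), "micrometers (Rayleigh)")
print(round(sparrow_focal_plane_m(wavelength, na) * 1e6, 2), "micrometers (Sparrow)")
```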
RESOLUTION REQUIRED TO READ A LETTER
To reliably read an English alphanumeric character (letter), the system should clearly display at least 2.8 cycles along the letter's height.
Discussion By definition, one’s ability to read an alphanumeric depends on resolution. Generally, letters have a high probability of being correctly read when between 2.5 and 3 cycles across the height of the letter can be resolved by the human viewing the letter (whether printed or on a display). The curve of identifying the letter is very steep between 2 and 3 cycles; with 2 or less, it is practically indistinguishable; and it is almost always readable when 3 or more cycles are displayed, as illustrated in Figs. 1.7 and 1.8. One of the authors (Miller) has conducted several field experiments leading to the results presented in the figures. You can prove this for yourself by observing Fig. 1.7. It is almost impossible to read the “letters” on the low-resolution image, yet it is easy to read those of the high-resolution image. Of course, the ability to resolve two or three cycles across the letter depends on its distance from the observer and the quality of the imaging system (eyes). This rule is based on English block letters only; this does not apply to highly detailed language characters such as Kanji, Hindi, and Arabic symbols. A cycle defines the resolution required to separate high-contrast white and black lines, generally 2 pixels (ignoring phase). Assuming a Kell factor of approximately 0.7, digital imaging requires about 4 cycles or 8 pixels across the height of the letter (see the associated “Kell Factor Rule” in Chap. 7, “Displays”).
FIGURE 1.7 Images of letters at various resolutions. When these were originally displayed, they were of 2, 2.5, 3, and 4 cycles, respectively; however, they have suffered an additional resolution loss through the frame grabbing and book printing process.
FIGURE 1.8 The numbers of correct readings of an alphanumeric as a function of cycles. The plot represents data from 50 individuals viewing three different letters (C = diamonds, F = triangles, and B = squares).
In addition to the empirical evidence of Figs. 1.7 and 1.8, theoretical analysis supports these assertions. Figure 1.9 clearly indicates that the power spectral content of English alphanumerics is concentrated below four cycles per letter height. The requirement for a human to successfully read these alphanumerics is somewhat lower and is demonstrated by empirical results to be about 2.5 cycles for about a 50 percent success rate and 3 cycles for a much higher success rate. Figure 1.10 is a cumulative plot of the standard deviation of
FIGURE 1.9 Contour plot of the Fourier transform of the entire “Helvetica” alphanumeric set. This is based on a 32 × 32 pixel “frame” where the letter height occupied all of the vertical 32 pixels.
FIGURE 1.10 Cumulative distribution of the standard deviation of the 36 alphanumeric characters in the “Helvetica” font.
the power in Fourier space for the entire alphabet and numerals in the Helvetica font. This represents the power spectral contents of the difference between the characters. About half of this distribution falls below 2.5 cycles, indicating that most letters can be distinguished with such resolution.
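A quick sizing calculation follows from the 2.8-cycle criterion. The sketch below converts a letter height and viewing range into the required sensor resolution; the 0.7 Kell factor follows the discussion above, while the 10-cm letter and 100-m range are arbitrary example values.

```python
def required_cycles_per_mrad(letter_height_m, range_m, cycles_per_letter=2.8):
    """Spatial frequency the system must resolve to read the letter."""
    letter_subtense_mrad = 1000.0 * letter_height_m / range_m
    return cycles_per_letter / letter_subtense_mrad

def required_pixels_per_letter(cycles_per_letter=2.8, kell_factor=0.7):
    """Pixels across the letter height: two pixels per cycle, divided by the Kell factor."""
    return 2.0 * cycles_per_letter / kell_factor

# A 10-cm-tall letter viewed from 100 m (illustrative numbers)
print(round(required_cycles_per_mrad(0.10, 100.0), 1), "cycles/mrad")
print(round(required_pixels_per_letter(), 1), "pixels across the letter height")
```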
References 1. J. Miller and J. Wiltse, “Resolution Requirements for Alphanumeric Readability,” Optical Engineering, 42(3), pp. 846–852, March 2003. 2. W. A. Smith, Modern Optical Engineering, McGraw-Hill, New York, p. 355, 1990. 3. J. Wiltse, J. Miller, and C. Archer, “Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 4. Private communication with Dr. John Wiltse and Dr. Cynthia Archer, 2003.
SUBPIXEL ACCURACY
One can determine the location of a blur spot on the focal plane to an accuracy that equals the resolution divided by approximately the signal-to-noise ratio or,1

A_LOS ≈ Constant × (Angular Limit / SNR)

where
A_LOS = line-of-sight noise (or tracking accuracy), sometimes called the noise equivalent angle
Angular Limit = larger of the diffraction limit, blur spot, and pixel footprint
SNR = signal-to-noise ratio
Constant = a constant, generally 0.5 to 1
Discussion A system using centroiding or other higher-order processing techniques can locate a target to an angular position of roughly the optical resolution of the system divided by the signal-to-noise ratio. This is valid for SNRs up to about 100. Beyond 100 or 200, a variety of other effects limit the noise equivalent angle. The limit in performance of such systems is about 1/100 of the pixel IFOV, although higher performance has been demonstrated in some specialized scientific systems with much calibration. Of course, the minimum pixel SNR must exceed about 5, or the target will not be reliably tracked, and this subpixel accuracy generally does not work below that level. This rule is based on empirical performance of existing systems. In a staring system, the target rarely falls centered on a given pixel. Usually, it is split between two or four pixels. The signal on the adjacent pixels can be used to better define its location. A number of people have calculated the theoretical limit of subpixel tracking using the method of centroids. In this analysis, it is assumed that the light from the target is projected onto a number of pixels in the focal plane, with the minimum being 4 for a staring system (scanning systems can use fewer pixels, because the detector can move over the image). This means that the focal plane array (FPA) tends to perform like the quad cells used in star trackers and other nonimaging tracking systems. By measuring the light falling in each of the four cells, the centroid of the blur can be computed. This superresolution requires that the signal from each pixel in the focal plane be made available to a processor that can compute the centroid of the blur. In advanced systems, either electronic or optical line-of-sight stabilization is employed to ensure that each measurement is made with the blur on the same part of the focal plane. This reduces the effects of nonuniformity in pixel responsivity and noise. Results do not include transfer errors that build up between coordinates at the focal plane and the final desired frame of reference. We find the following variants of the above rule, first from Ref. 2:

Subpixel resolution ≈ 1.22πλ/(8 D SNR)

where
λ = wavelength
D = aperture diameter

This means that the constant referred to at the beginning of this rule is equal to about 0.39. Also, Ref. 3 gives

3πλ/(16 D SNR)

and Ref. 4 has included position tracking error and defined it as

θLSB = Pixel IFOV/SNR

where
θLSB = least significant bit of greatest resolution
Pixel IFOV = instantaneous field of view of a pixel
In a scanning system, the blur circle produced by the target at the focal plane is scanned over the FPA element. The FPA element can be sampled faster than the time it takes the
blur to move over the FPA. This provides a rise-and-fall profile with which the location can be calculated to an accuracy greater than the pixel footprint or blur extent. The higher the SNR, the faster the samples can be and the more accurate the amplitude level will be, both increasing the accuracy of the rise-and-fall profile. Lloyd⁵ points out that the accuracy for a cross scan, or when the only knowledge is that the target location falls within a detector angular subtense (DAS), is

DAS/√12

Additionally, the angular resolution is

0.31λ/(D SNR)

Also, McComas⁶ provides this form:

σo = (0.61λ/d)(1/SNRave)

Lastly, the measurement error in the angular separation of two objects is just like the rule above except that, if the SNRs are approximately the same, then

SNR = SNR(1 or 2)/√2

and the angular resolution is

√2 × pixel field of view/SNR
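A minimal numerical sketch of the rule follows. It simply evaluates the noise equivalent angle from the angular limit and SNR and, as an assumed illustration (not the specific processing used by any system described here), shows a centroid computed from a 2 × 2 group of pixel signals.

```python
def noise_equivalent_angle(angular_limit_rad, snr, constant=0.5):
    """Line-of-sight noise ~ constant * (angular limit) / SNR, valid to SNR ~ 100."""
    return constant * angular_limit_rad / snr

def quad_cell_centroid(signals, pixel_pitch=1.0):
    """Centroid of a 2x2 pixel neighborhood; signals = [[a, b], [c, d]].

    Returns (x, y) offsets from the center of the 2x2 group, in pixel units.
    """
    (a, b), (c, d) = signals
    total = a + b + c + d
    x = 0.5 * ((b + d) - (a + c)) / total * pixel_pitch
    y = 0.5 * ((c + d) - (a + b)) / total * pixel_pitch
    return x, y

# A 20-microradian blur tracked at SNR = 40
print(noise_equivalent_angle(20e-6, 40))            # ~2.5e-7 rad, ~1/80 of the blur
print(quad_cell_centroid([[10.0, 12.0], [9.0, 11.0]]))
```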
References
1. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 60–61, 1994.
2. K. J. Held and J. D. Barry, “Precision Optical Pointing and Tracking from Spacecraft with Vibrational Noise,” Proc. SPIE, Vol. 616, Optical Technologies for Communication Satellite Applications, 1986.
3. M. Shao and M. Colavita, “Long-Baseline Optical and Infrared Stellar Interferometry,” Annual Review of Astronomy and Astrophysics, Vol. 30, pp. 457–498, 1992.
4. C. Stanton et al., “Optical Tracking Using Charge Coupled Devices,” Optical Engineering, Vol. 26, pp. 930–938, September 1987.
5. M. Lloyd, “Fundamentals of Electro-Optical Imaging Systems Analysis,” in Vol. 4, Electro-Optical Systems Design, Analysis, and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 42–44, 1993.
6. B. K. McComas and E. Friedman, “Wavefront Sensing for Deformable Space-Based Optics Exploiting Natural and Synthetic Guide Stars,” Optical Engineering, 41(8), August 2002.
NATIONAL IMAGE INTERPRETABILITY RATING SCALE CRITERIA

TABLE 1.4 National Image Interpretability Rating Scale Criteria (examples of exploitation tasks by band)

NIIRS 0
Visible: Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.
Multispectral: Interpretability of the imagery is precluded by obscuration, noise, poor registration, degradation, or very poor resolution.
Infrared: Interpretability of the imagery is precluded by obscuration, noise, degradation, or very poor resolution.

NIIRS 1 (over 9 m GSD*)
Visible: Distinguish between major land use classes (urban, forest, water, etc.); detect a medium-size port facility; distinguish between taxiways and runways.
Multispectral: Distinguish between urban and rural areas; identify a large wetland (greater than 100 acres); delineate coastal shoreline.
Infrared: Distinguish between runways and traffic ways; detect large areas (greater than 1 km²) of marsh or swamp.

NIIRS 2 (4.5 to 9 m GSD)
Visible: Detect large buildings (hospitals, factories); detect military training areas.
Multispectral: Detect multilane highways; detect strip mining; delineate extent of cultivated land.
Infrared: Detect large aircraft and large buildings; distinguish between naval and commercial port facilities.

NIIRS 3 (2.5 to 4.5 m GSD)
Visible: Detect individual houses in residential neighborhoods; detect trains on tracks, but not individual cars; detect a helipad; identify a large surface ship in port by type.
Multispectral: Detect vegetation/soil moisture differences along a linear feature; identify golf courses; detect reservoir depletion.
Infrared: Distinguish between large (707, A300) and small (A-4, L-39) aircraft; distinguish between freighters and tankers of 200 m or more in length; identify individual thermally active flues running between the boiler hall and smoke stacks at a thermal power plant.

NIIRS 4 (1.2 to 2.5 m GSD)
Visible: Identify farm buildings as barns, silos, or residences; identify, by general type, tracked vehicles and field artillery; identify large fighters by type.
Multispectral: Distinguish between two-lane improved and unimproved roads; detect small boats (3 to 5 m) in open water.
Infrared: Identify the wing configurations of small fighter aircraft; detect a 50-m² electrical transformer yard in an urban area.

NIIRS 5 (0.75 to 1.2 m GSD)
Visible: Detect large animals in grasslands; identify a radar as vehicle mounted or trailer mounted.
Multispectral: Detect an automobile in a parking lot; detect disruptive or deceptive use of paints or coatings on buildings at a ground force installation.
Infrared: Distinguish between single-tail and twin-tail fighters; identify outdoor tennis courts.

NIIRS 6 (0.4 to 0.75 m GSD)
Visible: Identify individual telephone/electric poles in residential neighborhoods; identify the spare tire on a medium-size truck.
Multispectral: Detect a foot trail through tall grass; detect recently installed minefields in ground forces deployment areas; detect navigational channel markers and mooring buoys in water.
Infrared: Distinguish between thermally active tanks and APCs; distinguish between a two-rail and a four-rail launcher; identify thermally active engine vents atop diesel locomotives.

NIIRS 7 (0.2 to 0.4 m GSD)
Visible: Identify individual railroad ties; identify fitments and fairings on a fighter-size aircraft.
Multispectral: Detect small marine mammals on sand or gravel beaches; distinguish crops in large trucks; detect underwater pier footings.
Infrared: Identify automobiles as sedans or station wagons; identify antenna dishes on a radio relay tower.

NIIRS 8 (0.1 to 0.2 m GSD)
Visible: Identify windshield wipers on a vehicle; identify rivet lines on a bomber aircraft.
Multispectral: Recognize the class of chemical species on small surfaces such as human limbs or small containers.
Infrared: Identify limbs on a person; detect closed hatches on a tank turret.

NIIRS 9 (less than 0.1 m GSD)
Visible: Identify individual barbs on a barbed wire fence; detect individual spikes in railroad ties; identify vehicle registration numbers.
Multispectral: Identify chemical species on limbs of a person or on the surface of small containers.
Infrared: Identify individual rungs on bulkhead-mounted ladders; identify turret hatch hinges on armored vehicles.

*GSD refers to ground sample distance, a measure of resolution.
Discussion The National Image Interpretability Rating Scale (NIIRS, see Table 1.4) was developed by the reconnaissance and remote sensing community for evaluating and grading “perception-based” image quality from airborne and space platforms, usually viewing at a near-nadir angle. Introduced in 1974, its application is to quickly convey the “quality and usefulness” of an image to analysts, without going through detailed MTF analysis and without ambiguities tied to “resolution” and photographic scale. It provides a standardized method for a number of photointerpreters (PIs) to agree on the information content in an image. Presumably, a number of them, all shown the same image, would give the picture
about the same interpretability score. In the late 1990s and early 2000s, NIIRS has been appearing more and more in the discussion of the performance of tactical airborne imagers as well as other nontraditional nonintelligence systems. NIIRS is defined and developed under the auspices of the U.S. government’s Imagery Resolution Assessments and Reporting Standards (IRARS) committee. It is largely resolution based, assuming there is sufficient contrast. The NIIRS image rating system is a scale from 0 to 9, with 9 having the most image detail and content. Generally, it is defined only as whole integers, but sometimes one will see a fractional value. The fractional value is sometimes referred to as ∆NIIRS. Fiete1 states, “A ∆NIIRS that is less than 0.1 is usually not perceptible and does not impact the interpretability of the image, whereas a ∆NIIRS above 0.2 NIIRS is easily perceptible.” The scale is more qualitative than the Johnson criteria or detailed MTF analysis. It is less easy to model, quantify, or argue in a meeting, although we include a rule elsewhere in this chapter that explains how to estimate NIIRS from optical properties of the sensor. The qualitative nature does allow for flexibility, and sometimes a custom NIIRS scale will be established for a given mission or objective. However, NIIRS easily and quickly conveys the level of image content to nontechnical people.
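Because the table is keyed to GSD bands, a crude first-cut NIIRS estimate can be made from GSD alone (contrast and MTF effects are ignored here). The band edges below are simply those of Table 1.4; the function is an illustrative convenience, not an official rating procedure.

```python
def rough_niirs_from_gsd(gsd_m):
    """Crude NIIRS estimate from ground sample distance (m), per Table 1.4 bands."""
    edges = [(0.1, 9), (0.2, 8), (0.4, 7), (0.75, 6), (1.2, 5),
             (2.5, 4), (4.5, 3), (9.0, 2)]
    for upper, level in edges:
        if gsd_m < upper:
            return level
    return 1          # over 9 m GSD

print(rough_niirs_from_gsd(0.3))   # 7
print(rough_niirs_from_gsd(2.0))   # 4
print(rough_niirs_from_gsd(12.0))  # 1
```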
References
1. R. Fiete, “Image Quality and λFN/p for Remote Sensing,” Optical Engineering, Vol. 38, pp. 1229–1240, July 1999.
2. http://www.physics.nps.navy.mil/appendices.pdf, 2003.
3. http://www.fas.org/irp/imint/niirs.htm, 2003.
4. J. Lubin et al., Vision Model-Based Assessment of Distortion Magnitudes in Digital Video, 2002, http://www.mpeg.org/MPEG/JND, 2003.
Chapter 2
Astronomy
This chapter contains a selection of rules specifically involving the intersection of the disciplines of astronomy and electro-optics (EO). Sensors frequently look upward, so astronomical objects often define the background for many systems. Moreover, many sensors are specifically designed to detect heavenly bodies, so astronomical relationships define the targets for many sensors. Over the past few hundred years, astronomy has driven photonics and optics. Likewise, photonics and optics have enabled modern astronomy. The disciplines have been as interwoven as DNA strands. Frequently, key discoveries in astronomy are impossible until photonic technology develops to a level that permits them. Conversely, photonic development often has been funded and refined by the astronomical sciences as well as the military. Military interests have been an important source of new technology that has furthered the application of electro-optics in astronomy. The authors contend that one of the most important contributions of the Strategic Defense Initiative (SDI) was the advancement of certain photonic technologies that are currently benefiting astronomers. Some of these include adaptive optics, synthetic guide stars, large and sensitive focal planes, advanced materials for astronomical telescopes, new methods of image stabilization, and advanced computers and algorithms for interpreting images distorted by atmospheric effects. The new millennium will include a host of new-technology telescopes that may surpass space-based observation capabilities (except in the spectral regions where the atmosphere strongly absorbs or scatters). The two Keck 10-m telescopes represent an amazing electrooptical engineering achievement. By employing segmented lightweight mirrors and lightweight structure, and by adjusting the mirrors in real time, many of the past notions and operating paradigms of both ground-based and space-based telescopes have been discarded. Soon, the Kecks will be eclipsed by larger and more powerful phased arrays of optical telescopes, all using new photonic technology that was not available 20 years ago. This new emphasis on novel technology applied to Earth-based telescopes represents a major addition to the astronomical community’s toolbox and a shift in the electro-optical and astronomical communities’ perceptions. In the near future, these high-technology telescopes, coupled with advanced precision instruments, will provide astronomers with new tools to make new and wondrous discoveries. Of course, there is no inherent reason why the technologies used in ground telescopes cannot be used in space. In fact, the next generation of science telescopes will
feature these advances. For example, at this writing, the James Webb Space Telescope (formerly the Next Generation Space Telescope) will exploit a segmented, actuated primary mirror. In honor of the important role that adaptive optics now plays in ground-based astronomy and may soon play in space astronomy, we have included a number of rules on that topic. For the reader interested in more details, there are myriad observational astronomy books, but few deal directly with observational astronomy using electro-optics. For specific EO discussions, an uncoordinated smattering can be found throughout SPIE’s Infrared and Electro-Optical Systems Handbook, Janesick’s Scientific Charge Coupled Devices, and Miller’s Principles of Infrared Technology. Additionally, Schroeder’s Astronomical Optics addresses many principles in detail. A recent addition to the library is Bely’s The Design and Construction of Large Optical Telescopes. Do not forget to check the journals, as they seem to have more material relating to EO astronomy than do books. SPIE regularly has conference sessions on astronomical optics, instruments, and large telescopes, and the American Astronomical Society has regular conferences that feature many EO-related papers. Additionally, there are many good articles and papers appearing in Sky and Telescope, Infrared Physics, and the venerable Astrophysical Journal. Finally, do not overlook World Wide Web sites on the Internet; much good information is made available by several observatories.
ATMOSPHERIC “SEEING” Good “seeing” from the ground is seldom much better than about 1 arc second (arcsec) (or about 5 µrad).
Discussion The inhomogeneous and time-varying refractive index of the atmosphere degrades images of distant objects. The varying atmosphere induces wavefront tilt (apparent displacement of the target), scintillation (fluctuating apparent brightness of the target), and wavefront aberrations (blurring). The combination of these effects is called “seeing.” Typical seeing obtainable on good nights at high-altitude observatories is approximately 1 arcsec (about 5 µrad). This empirical limit is imposed by the atmosphere; it is not strongly related to the aperture of the optics. Common amateur telescopes’ apertures of 10 cm or less are well matched to the atmosphere in the sense that larger apertures do not permit finer resolution. A small-aperture telescope is sensitive to wavefront tilts, which are manifest as images that seem to move around over time intervals of one-tenth of a second or so. Large-aperture telescopes such as used by professional astronomers are sensitive to additional aberrations caused by the atmosphere, which are manifest as fuzzy images that appear to boil. Over the long exposures typically employed, the boiling is averaged out, and the fuzzy images have typical angular extents of an arcsec. Large apertures do, of course, collect more light, some of which can be used to control active optical elements that can undo much of the effect of bad atmospheric seeing. Large telescopes also tend to be deliberately sited where the seeing has been measured to be good—mountaintops, for example. Seeing is better at high altitudes and at longer wavelengths. Bad sites and bad weather (lots of convection) make seeing worse. Seeing tends to improve with wavelength (to something like the 6/5th power); that is, the seeing angle gets smaller (better) as the wavelength increases. Also see the various rules in the Chap. 3, “Atmospherics,” particularly the Fried parameter rule of resolution. Although very rare, seeing may approach 0.1 to 0.2 arcsec (or around 0.5 to 1 µrad) with ideal conditions.
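To see whether a given telescope is seeing limited or diffraction limited, compare the roughly 1-arcsec seeing disk with the aperture's diffraction limit. The short sketch below makes that comparison; the 1.22λ/D criterion and the 1-arcsec default are the values quoted above, and the example apertures are arbitrary.

```python
import math

ARCSEC_PER_RAD = 206265.0

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, 1.22*lambda/D, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def limiting_resolution_arcsec(wavelength_m, aperture_m, seeing_arcsec=1.0):
    """The atmosphere wins once the diffraction limit drops below the seeing."""
    return max(diffraction_limit_arcsec(wavelength_m, aperture_m), seeing_arcsec)

# A 10-cm amateur telescope vs. a 4-m observatory telescope at 0.55 um
print(round(limiting_resolution_arcsec(0.55e-6, 0.10), 2))  # ~1.38" (diffraction limited)
print(round(limiting_resolution_arcsec(0.55e-6, 4.0), 2))   # 1.0"  (seeing limited)
```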
BLACKBODY TEMPERATURE OF THE SUN
Consider the Sun to be a 6000 kelvin (K) blackbody.
Discussion The Sun is a complex system. It is a main-sequence star (G2) of middle age. Almost all of the light we see is emitted from a thin layer at the surface. Temperatures can reach over 10 million degrees in the center. However, at the surface, the temperature is usually something under 6,000 K, with the best fit to a blackbody curve usually at 5770 K. The Sun appears as a blackbody with a temperature anywhere from about 5750 to 6100 K, depending on the wavelength of observation chosen for matching the various candidate blackbody curves. At wavelengths shorter than the visible, the Sun appears somewhat brighter than the temperatures shown above would suggest. The Sun (and all other stars) do have some absorption and emission lines, so using a blackbody approximation is valid only for broad bandpasses. The absorption lines are well documented and can be found in collections of Fraunhofer line shapes and depths. The lines result from various metals in the outer atmosphere of the Sun, with iron, calcium, and
sodium causing some of the most prominent lines. Earth’s atmosphere strongly absorbs some wavelengths, so solar radiation reaching the surface may not resemble a blackbody spectrum for those wavelength bands. The above is from a general curve fit for wide bandpasses, disregarding atmospheric absorption. A star’s blackbody temperature based on spectral type is (to first order) approximately
B: 27,000 K
A: 9900 K
F: 7000 K
G: 5900 K
K: 5200 K
M: 3800 K
It is likely an accident that the peak of the Sun’s radiation is well matched to a major transmission window of the atmosphere. On the other hand, it is no accident that the peak performance of the human vision system is well matched to the solar radiation that reaches the ground. Evolution of visual systems has assured that performance is best around 555 nm. Of course, due to the absorption properties of the atmosphere, the Sun deviates significantly from blackbody properties when seen from the ground.
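For broad bandpasses, the 5770 K blackbody approximation can be exercised directly with the Planck function. The snippet below evaluates spectral exitance at a few wavelengths; it is a generic Planck evaluation, not a solar spectral model, and it ignores the Fraunhofer lines and atmospheric absorption discussed above.

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_exitance(wavelength_m, temperature_k):
    """Blackbody spectral exitance, W/m^2/m (multiply by 1e-6 for W/m^2/um)."""
    a = 2.0 * math.pi * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temperature_k)) - 1.0
    return a / b

for wl_um in (0.4, 0.55, 1.0, 2.0):
    m = planck_exitance(wl_um * 1e-6, 5770.0) * 1e-6   # W/m^2/um
    print(f"{wl_um} um: {m:.3e} W/m^2/um")
```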
DIRECT LUNAR RADIANCE
In the visible wavelength range, the dominant signal from the Moon is the reflectance of sunlight. This is expressed in the form of its radiance, L,

Lreflected(λ) = L^bb(λ, 5900 K) Ω Rm(λ)/π

where the equation specifically notes the approximate blackbody temperature of the Sun (5900 K). Rm is the reflectivity of the Moon, which has typical values of 0.1 in the visible wavelengths, 0.3 for 3 to 6 µm, and 0.02 for 7 to 15 µm. Ω is the solid angle formed by the Moon when viewed from Earth. For the infrared, the Moon’s thermal emission must also be considered, as it can be the dominant source of photons in some bands.
Discussion The Moon is an important (and sometimes dominant) source of radiation in the night sky. Its signature includes radiation ranging from the visible to the infrared, so all types of sensors must be designed to tolerate its presence. Many sensors (such as image intensifiers and low-light-level cameras) exploit this light. The total radiance seen when viewing the Moon is the superposition of emitted radiation, reflection of solar radiation, and emission from the atmosphere. Lmoon ( λ ) = τatm ( λ )[ Lreflected ( λ ) + Lemitted ( λ ) ] + Latm ( λ ) where
τ = transmission of the atmosphere
Lemitted = radiance of the Moon
Latm = radiance of the atmosphere
The infrared signature from the full Moon is defined by its apparent blackbody temperature of 390 K. Anyone using the following equation should take note that the actual temperature of the Moon depends on the time elapsed since the location being imaged was last
illuminated by the Sun. This can result in a substantial difference from the following equation, but it is good enough if you don’t know the details of the lunar ephemeris.

Lemitted = ε(λ) L^bb(λ, 390 K)

The spectral emissivity, ε(λ), in the equation above can be estimated by using the reflectivity numbers quoted previously, remembering that 1 − R = ε. As a result of changes in the distance from Earth to the Moon, the solid angle of the Moon seen from Earth is

Ω = 6.8 × 10^(–5) sr (with variation from 5.7 × 10^(–5) to 7.5 × 10^(–5))
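The pieces above can be combined into a rough band-level estimate of lunar radiance. The sketch below sums the reflected-sunlight and 390 K thermal terms for a full Moon, taking the emissivity as 1 − Rm and neglecting the atmospheric terms; those simplifications, and the single reflectivity value per band, are assumptions for illustration only.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
OMEGA_MOON = 6.8e-5          # solid angle of the Moon from Earth, sr

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W/m^2/sr/m."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temperature_k)) - 1.0
    return a / b

def lunar_radiance(wavelength_m, reflectivity):
    """Reflected sunlight plus 390 K thermal emission (atmosphere ignored)."""
    reflected = planck_radiance(wavelength_m, 5900.0) * OMEGA_MOON * reflectivity / math.pi
    emitted = (1.0 - reflectivity) * planck_radiance(wavelength_m, 390.0)
    return reflected + emitted

print(f"{lunar_radiance(0.55e-6, 0.1):.3e} W/m^2/sr/m")   # visible: reflection dominates
print(f"{lunar_radiance(10e-6, 0.02):.3e} W/m^2/sr/m")    # LWIR: emission dominates
```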
Reference 1. J. Shaw, “Modeling infrared lunar radiance,” Optical Engineering, 38(10), October 1999, pp. 1763–1764.
NUMBER OF ACTUATORS IN AN ADAPTIVE OPTIC
To correct for atmospheric turbulence effects, an adaptive optic system needs a minimum number of actuators. The number of actuators that is required if evenly spaced over the adaptive optic is

≈ (Telescope Aperture/r0)²

where r0 = the form of Fried’s parameter used for spherical waves,

r0 = 3.024(k² Cn² L)^(–3/5)

For plane waves, such as are received from star light,

r0 = 1.68(k² Cn² L)^(–3/5)

In addition,
L = distance the light propagates through the disturbing atmosphere
k = 2π/λ
Cn² = atmospheric structure coefficient, which, in its simplest form, is equal to about 10^(–14) m^(–2/3)
λ = wavelength
Discussion These results derive directly from the turbulence theory of the atmosphere, which has been described elsewhere in this book. Considerable effort has gone into confirming the accuracy of the theory, as described in the introduction to this chapter and several of the rules. The results shown here are for the simplifying situation in which the properties of the atmosphere are assumed to be constant over the path through which the light propagates. It also assumes that r0 is smaller than the aperture that is equipped with the adaptive optics technology. Otherwise, adaptive optics are neither necessary nor helpful. For example, adaptive optics provide no improvement for the relatively small apertures used by most
amateur astronomers, because the aperture is about the size of Fried’s parameter, meaning that only tilt occurs in such systems. Tilt can be removed with a steering mirror. We also note that the typical astronomical case involves correcting for the turbulence in a nearly vertical path through the atmosphere. The descriptions above for Fried’s parameter apply only for constant atmospheric conditions. The complexity of computing r0 for the nearly vertical case can be avoided by assuming that r0 is about 15 cm. This also shows the wavelength dependence of the performance of an adaptive optics system. Some algebra reveals that the wavelength dependence of the number of actuators goes as λ^(–12/5), so longer wavelengths require fewer adaptive elements, as expected. The number of actuators depends on the properties of the atmosphere, the length of the path the light travels, the wavelength of the light, and the application of the adaptive optics. The latter point derives from whether the light is a plane wave, such as pertains to star light, or spherical waves, such as characterize light beams in the atmosphere. To properly compensate for atmospheric turbulence, the number of actuators depends on the above form of the Fried parameter and the size of the optic. The optical surface must be divided into more movable pieces than the maximum number of turbulent cells that can fit within the same area. If fewer actuators are used, then the atmosphere will cause a wavefront error that cannot be compensated. Tyson2 shows that a more accurate representation of the number of actuators is

≈ [0.05 k² L Cn² D^(5/3)/ln(1/S)]^(6/5)
where S = the desired Strehl ratio. The Strehl ratio is a commonly used performance measure for telescope optics and essentially defines how closely an optical system comes to performing in a diffraction-limited way. A little algebra shows that the two results are equal if one desires a Strehl ratio of 0.88. Diffraction-limited imaging is usually assumed to require a Strehl ratio of 0.8. Another way to look at this issue is to investigate the fitting error for a continuous facesheet. The following equation shows the variance in the fitting error in radians squared as a function of Fried’s parameter (r0) and d, the actuator spacing.3

wavefront variance = 0.28(d/r0)^(5/3) (rad²)

Thus, we see that, using the simple form of the rule, a 1-m aperture operating at a location that typically experiences a Fried parameter of 5 cm will need 400 actuators. The reader should keep in mind that there are at least two ways to implement the corrections in wavefront. The first approach is to actually change the shape of the primary mirror, which is rarely done at high bandwidth. The more common approach is to correct a smaller optic located at a pupil. Because of the magnification of the telescope, the pupil will necessarily be smaller than the primary mirror, meaning that the number of actuators computed above must fit into this smaller area.
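A small calculation following the simple (D/r0)² form and the continuous-facesheet fitting error is sketched below; the example values (1-m aperture, 5-cm Fried parameter) are the ones used in the discussion above, and the function names are illustrative.

```python
import math

def actuators_simple(aperture_m, r0_m):
    """Minimum actuator count ~ (D / r0)^2."""
    return (aperture_m / r0_m) ** 2

def actuators_for_strehl(aperture_m, r0_m, strehl):
    """Actuator count from the continuous-facesheet fitting error.

    Uses 0.28 * (d / r0)^(5/3) = ln(1/S) to find the allowed actuator
    spacing d, then counts N = (D / d)^2 actuators over the aperture.
    """
    d = r0_m * (math.log(1.0 / strehl) / 0.28) ** 0.6
    return (aperture_m / d) ** 2

print(round(actuators_simple(1.0, 0.05)))             # 400, as in the text
print(round(actuators_for_strehl(1.0, 0.05, 0.80)))   # ~525 for S = 0.8
```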
References 1. H. Weichel, Laser System Design, SPIE Course Notes, SPIE Press, Bellingham, WA, p. 144, 1988. 2. R. Tyson, Principles of Adaptive Optics, Academic Press, Orlando, FL, p. 259, 1991. 3. R. Dekany et al., “1600 Actuator Tweeter Mirror Upgrade for the Palomar Adaptive Optics System (PALAO),” Proc. SPIE 4007, Astronomical Telescopes and Instrumentation 2000, March 29–31, 2000.
NUMBER OF INFRARED SOURCES PER SQUARE DEGREE
The number of infrared sources (Ns) brighter than the irradiance at wavelength λ per square degree is

log Ns[s(b)] ≈ log[A(b,l)] + B(b,l) log[E12{λ, s(b)}]

where
E12{λ, s(b)} = equivalent spectral irradiance at 12 µm producing Ns sources per square degree, in janskys
s(b) = spectral index, defined as the ratio of the 12-µm spectral irradiance to the 25-µm spectral irradiance that produced the same source count N; as a function of galactic latitude, the spectral index is

s(b) = –0.22 – 1.38[1.0 – exp(–b/15)]

b = galactic latitude in degrees, 0° ≤ b ≤ 90°
l = galactic longitude in degrees, 0° ≤ l ≤ 180°

log[A(b,l)] = 0.000488l – 0.78 + (0.000061l² – 0.02082l + 3.0214)/[1 + (b/12)^1.4]

B(b,l) = (–0.00978l + 0.88){1.0 – exp[–b/(8.0 – 0.05l)]} + (0.00978l – 1.8) for 0° ≤ l ≤ 90°

For l > 90°, B = 0.92.
Discussion The IR sky is rich with astronomical sources. This complicated rule provides an excellent match to the distribution of sources found in the archive developed by the IRAS spacecraft. Note that it is the function of the spectral index portion of the equation to extend the model to other wavelengths. To do so, the spectral energy distribution of the mean ensemble of sources must be known. This rule works well from wavelengths of about 2 to 40 µm. The largest uncertainty exists in the approximate longitude range of 0 to 90° and 270 to 360° for galactic latitudes within ±3° of the galactic equator. The jansky unit deserves some attention. This term has its genesis in radio astronomy but is finding wide use in infrared astronomy. A jansky is defined as 10^(–26) watts per square meter of receiving area per hertz of frequency band (W/m²/Hz) and is named for the pioneer radio astronomer Karl Jansky. The following discussion shows how the conversion from typical radiant intensity to janskys is performed. We start by noting that there is an equivalence between the energy E expressed in either frequency or wavelength as follows:

Eλ dλ = Eν dν

Eλ = Eν (dν/dλ)

We also note that ν = c/λ, so that dν/dλ = –c/λ². This leads to
Eλ = Eν (c/λ²) × 10^(–6)

where the numerical factor converts from W/m²/Hz on the right side of the equation to W/m²/µm on the left side. Both c and λ are expressed in meters. From this equation, we find that a jansky at 20 µm is equal to about 7.5 × 10^(–15) W/m²/µm. At 0.56 µm, a jansky is equal to 9.6 × 10^(–12) W/m²/µm. Visible-wavelength stars can also be a source of confusion and a source of wavefront control signals. Figures 2.1 and 2.2 illustrate the variation in the density of stars of a particular magnitude as a function of galactic latitude. The data can be matched to about a factor of 2 to 5 with the following expression:

N(mv, latitude) = 6.55 × 10^(–4) e^(–latitude/30) e^(mv)
where N = number of stars per square degree with magnitude greater than mv, and the latitude is expressed in degrees
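The star-density curve fit above is straightforward to script. The snippet below is a direct transcription of that expression, good only to the stated factor of 2 to 5; the function name and example magnitudes are illustrative.

```python
import math

def stars_per_square_degree(visual_magnitude, galactic_latitude_deg):
    """Approximate density of stars brighter than m_v, per square degree."""
    return 6.55e-4 * math.exp(-abs(galactic_latitude_deg) / 30.0) \
                   * math.exp(visual_magnitude)

# Density of stars brighter than 12th magnitude, in and out of the galactic plane
print(round(stars_per_square_degree(12, 0), 1))    # ~107 per square degree
print(round(stars_per_square_degree(12, 60), 1))   # ~14 per square degree
```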
References 1. D. D. Kryskowski and G. Suits, “Natural Sources,” in Vol. 1, Sources of Radiation, G. Zissis, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 179–180, 1993. 2. Data derived from W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 3–22, 1978.
FIGURE 2.1 The number of stars per square degree is approximately exponential up to magnitudes of about 18. The legend shows galactic latitude.
FIGURE 2.2 The presence of the high density of stars near the galactic plane is easy to see in this graph. The legend is visual magnitude from the Earth.
NUMBER OF STARS AS A FUNCTION OF WAVELENGTH
At visible wavelengths and beyond, for a given sensitivity, the longer the wavelength, the fewer stars you can sense. The falloff in the number of stars approximates the following:

#Sλ2 ≈ #Sλ1 × 10^(–0.4R)

where
#Sλ2 = number of stars at wavelength λ2 (λ2 larger than λ1) at a given irradiance
R = ratio of one wavelength to another (λ2/λ1)
#Sλ1 = number of stars at wavelength λ1
Discussion This rule is based on curve-fitting empirical data. Curves supporting this can be found in Seyrafi’s Electro-Optical Systems Analysis and Hudson’s Infrared Systems Engineering. This is useful for separate narrow bands, from about 0.7 to 15 µm, and irradiance levels on the order of 10^(–13) W/cm²/µm. Generally, this provides accuracy to within a factor of 2. The authors curve-fitted data to derive the above relationship. Most stars are like our Sun and radiate most of their energy in what we call the visible part of the spectrum. As wavelength increases, there are fewer stars, because the Planck function is dropping for the stars that peak in the visible, and fewer stars have peak radiant output at longer wavelengths.
NUMBER OF STARS ABOVE A GIVEN IRRADIANCE
1. The total number of stars at or above an irradiance of 10^(–13) W/cm²/µm is approximately 1400 × 10^(–0.626λ), where λ is the wavelength in micrometers.
2. The number of stars at or above 10^(–14) W/cm²/µm is approximately 4300 × 10^(–0.415λ), where λ is the wavelength in micrometers.
3. The number of stars at or above an irradiance of 10^(–15) W/cm²/µm is approximately 21,000 × 10^(–0.266λ), where λ is the wavelength in micrometers.
Discussion As one observes at longer and longer wavelengths, there are fewer stars to observe at a given brightness. This phenomenon stems from stellar evolution and populations as well as Planck’s theory. Most stars fall into what astronomers call “the main sequence” and have their peak output between 0.4 and 0.8 µm. From Planck’s equations, we also note that the longer the wavelength, the less output (brightness) a star has for typical stellar temperatures, because the infrared emission is on the tail of the Planck function. These simple equations seem to track the real data within a factor of two (or three) within the wavelength bounds. The curve for 10^(–15) W/cm²/µm tends to underpredict below 4 µm. The 10^(–13) W/cm²/µm curve was based on data from 1 to 4 µm, the 10^(–14) equation on wavelengths of 2 to 8 µm, and the 10^(–15) on wavelengths from 2 to 10 µm. The authors curve-fitted some reasonably crude data to derive the above relationships. The above rule highlights two phenomena. First, the longer the wavelength (beyond visual) you observe, the fewer stars you will detect at a given magnitude or irradiance. Second, increased instrument sensitivity provides an increased number of stars detected.
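The three curve fits are trivial to evaluate; the function below simply switches between them based on the irradiance threshold. It is a restatement of the fits above, valid only over their stated wavelength ranges.

```python
def stars_above_irradiance(wavelength_um, threshold_w_cm2_um):
    """Approximate count of stars at or above an irradiance threshold."""
    if threshold_w_cm2_um >= 1e-13:
        return 1400 * 10 ** (-0.626 * wavelength_um)    # fit based on 1-4 um data
    if threshold_w_cm2_um >= 1e-14:
        return 4300 * 10 ** (-0.415 * wavelength_um)    # fit based on 2-8 um data
    return 21000 * 10 ** (-0.266 * wavelength_um)       # fit based on 2-10 um data

for wl in (2.0, 5.0, 10.0):
    print(wl, round(stars_above_irradiance(wl, 1e-15)))
```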
PHOTON RATE AT A FOCAL PLANE
The photon rate at a focal plane from a star of magnitude m is1

S = N T (π/4)(1 – ε²) D² ∆λ 10^(–0.4m)

where
S = photon flux in photons/second
N = irradiance of a magnitude-zero star (≈10^7 photons/cm²/sec/µm for a star of magnitude 0 in the band centered on 0.55 µm; see other rules in this chapter for details)
D = diameter of the telescope (cm)
T = unitless transmittance of the atmosphere and optics
∆λ = bandpass of interest (µm)
m = visual magnitude of the star
ε = obscuration ratio (This number represents the ratio of the size of the secondary mirror to the size of the primary mirror. The additional obscuration of the struts that hold the secondary mirror is included as well. The latter effect will not occur if the telescope is an off-axis design.)
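A direct numerical reading of the rule is shown below for a modest telescope; the default transmittance and obscuration values are illustrative assumptions, not recommendations.

```python
import math

N0 = 1e7   # photons/cm^2/sec/um from a magnitude-zero star near 0.55 um

def photon_rate(magnitude, aperture_cm, bandpass_um,
                transmittance=0.5, obscuration=0.3):
    """Photon flux (photons/s) at the focal plane from a star of given magnitude."""
    collecting = (math.pi / 4.0) * (1.0 - obscuration ** 2) * aperture_cm ** 2
    return N0 * transmittance * collecting * bandpass_um * 10 ** (-0.4 * magnitude)

# A 6th-magnitude star, 20-cm aperture, 0.1-um bandpass
print(f"{photon_rate(6.0, 20.0, 0.1):.3e} photons/s")
```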
Discussion The above rule allows the approximate calculation of the number of photons per second at the instrument focal plane. Additionally, Ref. 2 gives us the following handy approximations:
■ A difference of 1 magnitude results in a difference of about 2.5 in spectral irradiance.
■ A difference of 5 magnitudes is a factor of 100 difference in spectral irradiance.
■ A small magnitude difference is equivalent to an equal percentage difference in brightness (10.01 magnitudes is ≈1 percent dimmer than 10.00 magnitudes).
This rule was developed for A class stars (the hottest subclass of white stars with surface temperature about 9000 K and prominent hydrogen lines). It is valid for narrow visible bandpasses. (Use with caution elsewhere.) Most on-axis reflecting telescopes have circular obscurations in the center of the aperture. Therefore, Schroeder suggests that
π(1 – ε²)/4 = 0.7 for a Cassegrain telescope, so the equation simplifies to S = 0.7 N T D² ∆λ 10^(–0.4m). Finally, we note that a star of magnitude m and temperature T will produce the following number of watts/m²/µm:3
3.12 × 10 1 --------------------------- ------------------------------------------------------------------m 0.0144 5 2.5 0.2444 λ ⎛ exp ---------------- – 1⎞ ⎝ ⎠ λT
References 1. D. Schroeder, Astronomical Optics, Academic Press, Orlando, FL, p. 319, 1987. 2. Private communications with Dr. Walt Kailey, 1995. 3. D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight LowOrder Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Optical Engineering, 36(7), pp. 1910–1917, July 1997.
REDUCTION OF MAGNITUDE BY AIRMASS The atmosphere absorbs about 0.2 magnitudes per airmass.
Discussion Ground-based astronomers frequently represent atmospheric path length as airmass normalized to the zenith. Looking straight up, one has an “airmass of 1.” As the telescope’s line of sight is reduced in elevation, the amount of air through which it must view is increased and reaches a maximum at the horizon. The total airmass for viewing an object at the horizon from sea level is about 10 times the vertical view. This is because the densest part of the atmosphere is near the ground. In this rule, we will show the general calculation of the path length as a function of the elevation angle of the telescope. To start the calculation, let us first make the “flat-Earth” assumption. That is, let us take the case where the zenith angle is small (less than about 45°). This allows a simple computation of the total concentration of airmass between the ground telescope and the vacuum of space. In performing this calculation, we assume that the density of the atmosphere decreases as an exponential of the altitude, ρ(h) = ρoe–Lh where L is the reciprocal of the scale height of the atmosphere, h is the altitude, and ρ is the density of air molecules. A typical value for the scale height is 7 km, meaning that, at an altitude of 7 km, the pressure and density are 1/e (37 percent) of their surface values. This ideal-model atmosphere is
easy to derive from the common model of an exponential pressure profile from the ground to space. The total column of air along a path from the ground to space is found by the following integration, where ρ is the density of air as a function of position along the integration path:

∫₀^∞ ρ(s) ds
Written in terms of the path over which we are viewing (s), the integral is

∫₀^∞ ρo e^(–Ls cos Z) ds = ρo/(L cos Z)
where Z = the zenith angle. For this simple model, the elevation angle at which the airmass is 10× is 5.7°. Now consider the more complex case of a curved Earth of radius Re. Here, we find that h and s are related by

h(s) = –Re + √(Re² + s² + 2sRe cos Z)

where Z = the zenith angle. The path integral is now

∫₀^∞ ρo e^(–Lh(s)) ds
Although this looks impossibly complex, a little numerical analysis shows that (except for angles greater than about 69°) h and s are still related by the cosine of the zenith angle. This means that, for a wide range of angles, the flat-Earth result works just fine. A detailed analysis shows that, when the elevation angle is about 2.8°, the total molecular path is about 10 times the case for the shortest (vertical) path through the atmosphere. In any case, the horizontal view assures a very long atmospheric path. We are all familiar with the effects of such a path, as we have seen the intense red of the setting Sun. In this view, the path is so long that the blue component of the Sun’s light is scattered away, leaving only the long-wavelength component. Although the term airmass to represent the pressure-weighted atmospheric path length was developed by astronomers and previously almost exclusively used by observatories, it is finding its way into the lexicon of the general electro-optical and security sensor practitioner.
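The curved-Earth path integral above is easy to evaluate numerically. The short sketch below integrates the exponential density profile along the slant path and reports the result as airmass (normalized to the zenith column); the 7-km scale height and 6371-km Earth radius are the illustrative values from the discussion.

```python
import math

SCALE_HEIGHT_KM = 7.0
EARTH_RADIUS_KM = 6371.0

def airmass(zenith_angle_deg, step_km=0.05, max_path_km=1000.0):
    """Relative airmass along a slant path through an exponential atmosphere."""
    z = math.radians(zenith_angle_deg)
    column = 0.0
    s = 0.0
    while s < max_path_km:
        h = math.sqrt(EARTH_RADIUS_KM**2 + s**2
                      + 2.0 * s * EARTH_RADIUS_KM * math.cos(z)) - EARTH_RADIUS_KM
        column += math.exp(-h / SCALE_HEIGHT_KM) * step_km
        s += step_km
    return column / SCALE_HEIGHT_KM   # zenith column equals rho0 * scale height

for angle in (0, 60, 87.2):
    print(angle, round(airmass(angle), 2))   # ~1.0, ~2.0, ~10, as in the discussion
```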
A SIMPLE MODEL OF STELLAR POPULATIONS
The number of stars above a given visual magnitude mv can be estimated from

#S = 11.84 × 10^(0.4204 mv)

where #S = approximate number of stars
Discussion This simple rule is accurate to within a factor of 3 between magnitudes 0 and 20. The simple equation provides a good match—no worse than a factor of 5 for most magnitudes. It tends to underpredict the number of stars between magnitudes 13 and 15 and overpredict the number of stars between magnitudes 16 and 20. The issue of magnitudes is widely discussed. A reminder, however, is appropriate about the difference between visual and absolute magnitudes. The definition of the relationship between the two is quite simple. The absolute magnitude is the magnitude that the star would exhibit if at a distance of 10 parsecs (about 33 light years). We already know that two stars can be compared in apparent magnitude by the rule, 2
m1 – m2 = 2.5 log(d1²/d2²)

where d1 and d2 are the distances of stars 1 and 2. Therefore, using 10 parsecs for d2, the formula becomes

m1 – m2 = 2.5 log(d1²/d2²) = m – M = 5 log d1 – 5

where M indicates a measure of absolute magnitude. Of course, both d1 and d2 are measured in parsecs. The one value to remember is that, in the V band, a magnitude 0 star has a photon flux of very close to 1 × 10^7 photons/cm²/sec/micron. This can be easily derived from the number in Fig. 2.3 for the V band using the fact that the energy in a photon is hc/λ. The properties of other bands can be found in the appendix.
FIGURE 2.3 The number of stars brighter than a particular visual magnitude is an exponential function.
Chapter 3
Atmospherics
It is hard to imagine a subject more complex, and yet more useful, than the study of the propagation of light in the atmosphere. Because of its importance in a wide variety of human enterprises, considerable attention has been paid to this topic for several centuries. Initially, the effort was dedicated to learning how the apparent size and shape of distant objects depend on the properties of the atmosphere. Maturation of the field of spectroscopy led to a formal understanding of the absorption spectra of significant atmospheric species and their variation with altitude. Computer models that include virtually all that is known about the absorption and scattering properties of atmospheric constituents have been assembled and can provide very complete descriptions of transmission as a function of wavelength with a spectral resolution of about 1 cm–1. This is equivalent to a wavelength resolution of 0.1 nm at a wavelength of 1 µm. In addition to gradually refining our understanding of atmospheric absorption by considering the combined effect of the constituents, we also have developed a rather complete and elaborate theory of scattering in the atmosphere. The modern model of the scattering of the atmosphere owes its roots to the efforts of Mie and Rayleigh. Their results have been extended greatly by the use of computer modeling, particularly in the field of multiple scattering and Monte Carlo methods. For suspended particulates of known optical properties, reliable estimates of scattering properties for both plane and spherical waves can be obtained for conditions in which the optical thickness is not too large. Gustav Mie (1868–1957) was particularly influential, as he was the first to use Maxwell’s equations to compute the scattering properties of small spheres suspended in a medium of another index of refraction. A number of references1 suggest that Mie was not the first to solve the problem but was the first to publish the results. His work, along with work by Debye, is now generally called “Mie theory.” Rayleigh had already shown that scattering should vary as the fourth power of the wavelength using dimensional analysis arguments. Mie theory is often compared with the earlier approach of Airy. The interested reader will find technical and historical details in Ref. 2. Two technologies have been in the background in all of these theoretical developments: spectroscopy and electro-optical technology. Spectroscopes, which are essential instruments in the measurement of the spectral transmission of the atmosphere, are EO systems relying on improvements in detectors, optics, control mechanisms, and many of the other topics addressed in this book. As these technologies have matured, continuous improve-
ments have been seen in the ability to measure properties of the atmosphere and to turn those results into useful applications. The applications range from measuring the expected optical properties being considered for astronomical telescope location to determining the amount of sunlight that enters the atmosphere and is subsequently scattered back into space. Laser technology has also been a key factor in new measurements of atmospheric behavior, in both scattering and absorption phenomena. Tunable lasers with very narrow line widths have been employed to verify atmospheric models and have provided the ability to characterize not only the “clean” atmosphere but also the properties of atmospheres containing aerosols and industrial pollution. Laser radar is regularly used to characterize the vertical distribution of scattering material, cloud density, and other features of the environment, weather, and climate. New advances in EO technologies have also allowed new insight into radiation transfer into, out of, and within the atmosphere. Satellite-based remote sensors have been measuring the radiation budget of the Earth in an attempt to define its impact on climatic trends. There are many other examples of space-based EO sensors that concentrate on measuring properties of the atmosphere, including the concentration of trace constituents in the stratosphere, ozone concentrations over the poles, and so on. Recently, at the urging of the military, measurements and improved theory have led to the development of methods for estimating and removing clear-air turbulence effects with important improvements for astronomers, imaging, and optical communications. New advancements in measuring the wavefront errors resulting from turbulence are included in adaptive optics. This technology is able to remove, up to a point, adverse atmospheric effects, which leads to telescope images that parallel those that would occur in transmission through a vacuum. We see, then, that atmospherics and astronomy have an intertwined history. The most recent advances in astronomical technology have come in two areas: space telescopes and overcoming the adverse impacts of the atmosphere. As an example of this connection, consider that even the casual observer of the night sky has noticed that planets do not twinkle but may not know why. This is because the angular size of planets is sometimes larger than the isoplanatic angle of the atmosphere. Similarly, for the same reason, a passenger in a high-flying jet, viewing a city at night, will see no twinkling. We can expect continual improvement in our understanding of the atmosphere and the way that it interacts with light propagating within it. All of these improvements in theory, supported by advancements in instrumentation quality, will result in even more capable EO systems and allow them to reduce the perturbing effects of their operating environments. The interested reader can find technical articles in Applied Optics and similar technical journals. At the same time, magazines such as Sky and Telescope occasionally include information on the way astronomers are using new technologies to cope with the effects of the atmosphere. A few new books have come out that deal specifically with imaging through the atmosphere. The International Society for Optical Engineering (SPIE) is a good source for these texts.
References 1. Scienceworld.wolfram.com/physics/MieScattering.html, 2003. 2. R. L. Lee, Jr., “Mie theory, Airy theory, and the Natural Rainbow,” Applied Optics, 37(9), March 20, 1998, p. 1506. This paper is also available at http://www.usna.edu/Users/oceano/ raylee/papers/RLee_MieAiry_paper.pdf, 2003.
ATMOSPHERIC ATTENUATION OR BEER’S LAW
The attenuation of light traversing an attenuating medium can often be estimated by the simple form,

Transmission = e^(–αz)

where
α = attenuation coefficient in units of distance^(–1)
z = path length in same units as the attenuation coefficient
Discussion This common form is called Beer’s law and is useful in describing the attenuation of light in atmospheric and water environments and in optical materials. Because both absorption and single scattering will remove energy from the beam, α is usually expressed as α = a+γ where a = absorption per unit length γ = scattering per the same unit length The rule is derived from the fact that the fractional amount of radiation removed from the beam is independent of the intensity but is dependent on path length. The idea is that scattering and absorption remove light at a rate proportional to the length of the path and the amount of scattering or absorbing material that is present. This leads to a differential equation of the form dz ----- = constant z The solution of this simple equation is of exponential form. The numerical values in the equations are derived from field measurements. For example, downwelling light in the atmosphere or ocean from the Sun is described by a different attenuation coefficient that must take into account the fact that the scattered light is not removed from the system but can still contribute to the overall radiation. Deviation from exact adherence to Beer’s law can result if the medium is high in multiple scattering and if the sensor does not have a small field of view. In those conditions, multiply scattered light can be detected, dramatically altering the equation in the rule. This is widely observed in fog conditions when images cannot be formed but the total light level is not necessarily low. In a beam case, scattering removes light from the beam in an explicit way. Consider this example. When observing a person at distance, light from the target is emitted into a hemisphere, but you see only the very narrow-angle beam that happens to encounter your eye. If you are using a telescope or other instrument that restricts your field of view (FOV) to a small angle, Beer’s law applies. If you use a wide-FOV lens, multiply scattered light may be detected, affecting the intensity and clarity of the target. In this case, Beer’s law does not apply. Multiple scattering in turbid media results in a violation of the simple equation at the start of this rule. Beer’s law works only for conditions that do not allow multiple scattering, either because there is little scattering present or because the instruments involved do not allow multiply scattered light to be detected. In most applications that involve imaging, Beer’s law is the one to use. In situations where only the intensity of the light field is needed, an adequate estimation is to include only the absorption term.
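A minimal transmission calculation with Beer's law follows; splitting the attenuation into absorption and scattering terms mirrors the α = a + γ decomposition above, and the example coefficients are arbitrary.

```python
import math

def beer_transmission(path_km, absorption_per_km=0.0, scattering_per_km=0.0):
    """Fraction of a collimated beam surviving a path, T = exp(-(a + gamma) z)."""
    return math.exp(-(absorption_per_km + scattering_per_km) * path_km)

# 5-km horizontal path with 0.05/km absorption and 0.20/km scattering
print(round(beer_transmission(5.0, 0.05, 0.20), 3))   # ~0.287
```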
The presence of an exponential attenuation term in transmission of the atmosphere is no surprise, as this mathematical form appears for a wide variety of media (including the bulk absorption of optical materials such as glass).
References 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 161–165, 1969. 2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA., p. 87, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003.
IMPACT OF WEATHER ON VISIBILITY In Beer’s law, the scattering coefficient, γ, associated with rain can be estimated in the following way: γ = 3.92/V
(1)
where V = visual range (usually defined as contrast between target and object of 2 percent)
Discussion A similar rule is Koschmeider’s,2 in which visibility = 3/α
(2)
for black targets against a bright sky. In this case, α is the total attenuation, including both scattering and absorption. See the rule on Beer’s law (p. 47) for more details on this parameter. Allard’s law2 applies to visibility of lights at night and is

ET = I e^(–αV)/V²     (3)
where ET = illumination detection threshold of the human eye I = power of the light source in watts V = visibility in kilometers Choose units that result in a meaningful value of ET . For example, we expect α to be in km–1, so V must be in kilometers. Other units can be used as long as the product of the distance and attenuation coefficient is unitless. For ET to be in watts/m2, the units of V in the denominator must be meters. It is obvious that the equation above derives from a 1/R2 type beam spread model, coupled with an attenuation term. Reference 2 also describes a measure of pilots viewing down a runway as log ( E T ) = 0.64 log ( B ) – 5.7
(4)
where B = background luminance level Some authors suggest that Eqs. (3) and (4) apply under different conditions; pilots should use the larger of the two during the day but use only Eq. (4) at night.
Reference 3 also shows that the effect of rain on visual range and scattering coefficient can be estimated from

γ = 1.25 × 10^(–6) (R/r³)
where
R = rainfall rate in centimeters per second
r = radius of the drop in centimeters

Alternatively, Ref. 1 gives the scattering coefficient of rainfall as

γ = 0.248 f^0.67
where f = rainfall rate in millimeters per hour

Reference 4 provides some insight into the effect of aerosols on scattering in the atmosphere. The authors of Ref. 4 point out that, for a uniform distribution of particles of concentration D and radius a, the scattering coefficient is

βsc = D π a² Qsc

where Qsc is the Mie scattering coefficient, which is a strong function of the ratio

α = 2πa/λ

As a increases, either by considering scattering at shorter wavelengths or by increasing the aerosol size, Qsc becomes 2. The result is that, for a large particle size or short wavelength, the particles have a scattering cross section twice their geometric size. The combined atmospheric extinction for the MWIR tends to be between 0.2 and 0.3, as illustrated in Fig. 3.1. This is based on the U.S. Navy’s R384 database,5 which includes 384 detailed observations of atmospheric conditions in a multitude of maritime locations. All of the path lengths were horizontal. The graph included here stops at the 95th percentile, as the final 5 percent had very high extinction coefficients (over 1; the highest recorded in this database was 7.66 per km).
References 1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 87, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 2. U.S. Department of Transportation, Federal Aviation Administration, United States Experience Using Forward Scatterometers for Runway Visual Range, March 1997, DOT-VNTSCFAA-97-1. 3. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 161–165, 1969. 4. J. Oakley and B. Satherley, “Improving Image Quality in Poor Visibility Conditions Using a Physical Model for Contrast Degradation,” IEEE Transactions on Image Processing 7(2), p. 167, February 1998. 5. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Chap. 29 in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 29-33 to 29-37, 2001.
FIGURE 3.1 MWIR extinction. The figure comes from R384. It shows the cumulative extinction coefficient for MWIR wavelengths.
ATMOSPHERIC TRANSMISSION AS A FUNCTION OF VISIBILITY
Atmospheric transmission can be estimated via the range, visibility, and wavelength by

τ = exp[(–3.91/V)(λ/0.55)^(–q) R]

where
V = visibility in the visual band in kilometers
λ = wavelength in micrometers
q = a size distribution for scattering particles; typical values are 1.6 for high visibility, 1.3 for average, and 0.585 V^(1/3) for low visibility
R = range in kilometers

Transmission is the ratio of the intensity of the light received to the light transmitted.
Discussion All sorts of human enterprise involves looking through the atmosphere. Simple rules for establishing how far one can see have always been of interest. This little rule mixes a little physics, in the form of the size distribution effects, into the empirical transmission. The longer the wavelength, the less the scatter, so wavelength is in the numerator. As in another rule, the absorption can be estimated from the visibility by 4/V. In another rule, we also note that the total attenuation is approximated by 3/V for black targets against a bright sky.
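The visibility-based estimate is equally simple to code. The function below picks q from the visibility and returns band transmission; the visibility breakpoints used to choose q are illustrative assumptions (the rule itself only labels the regimes high, average, and low), so treat the result as the same rough estimate as the rule.

```python
import math

def transmission_from_visibility(range_km, visibility_km, wavelength_um):
    """Estimate atmospheric transmission from visual range and wavelength."""
    if visibility_km > 50.0:
        q = 1.6                                     # very clear
    elif visibility_km > 6.0:
        q = 1.3                                     # average
    else:
        q = 0.585 * visibility_km ** (1.0 / 3.0)    # haze / low visibility
    sigma = (3.91 / visibility_km) * (wavelength_um / 0.55) ** (-q)
    return math.exp(-sigma * range_km)

print(round(transmission_from_visibility(5.0, 23.0, 0.55), 3))   # visible, clear day
print(round(transmission_from_visibility(5.0, 23.0, 4.0), 3))    # MWIR scatters less
```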
As can be seen, the choice of q depends on the visibility for each particular situation. Of course, this requires that the visibility be known. Furthermore, the rule assumes that the visibility is constant over the viewing path, which never happens in practice. Nonetheless, this is a useful and easy-to-compute rule that can be used with considerable confidence. Visibility is easily obtained from the FAA or the National Weather Service and is readily available at most electro-optical test sites.

Modern visibility measurements are usually made with scatterometers using near-IR lasers and a 1-m path length. The measurement is then extrapolated for range, conditions, and the responsivity curve of the eye. An extensive database has been acquired and the algorithms refined over the past few decades, so this technique works well for the visible bandpass. However, because these data do not typically come from transmissometers, scaling them to other wavelengths is extremely questionable.

This rule provides a quick and easy approach for estimating the transmission of the atmosphere as a function of wavelength if some simple characteristics are known. Field work related to lasers, observability of distant objects, or the applicability of telescopes can make use of this rule if visibility is known. Of course, one can attempt to estimate the visibility if transmission can be measured.

In general, aerosols have particle radii of 0.01 to 1 µm with a concentration of 10 to 1000 particles per cc, fog particles have a radius of 10 to 50 µm with a concentration of 10 to 100 per cc, clouds have particle radii of 1 to 10 µm with a concentration of 10 to 300 per cc, and rain has particle radii of 100 to 10,000 µm with a concentration of 0.01 to 10⁻⁵ per cc.1 This rule is an extension of Beer's law, and the interested reader should review that rule as well.
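A minimal Python sketch of the rule follows. The visibility breakpoints used to select q (greater than 50 km for "high," 6 to 50 km for "average," below 6 km for "low") are assumptions chosen for illustration; the rule itself only gives the three typical q values.

```python
import math

def transmission(vis_km, wavelength_um, range_km):
    """Atmospheric transmission from visibility, per the rule above.

    The 50 km and 6 km breakpoints for choosing q are illustrative assumptions.
    """
    if vis_km > 50.0:
        q = 1.6
    elif vis_km > 6.0:
        q = 1.3
    else:
        q = 0.585 * vis_km ** (1.0 / 3.0)
    return math.exp(-3.91 / vis_km * (wavelength_um / 0.55) ** (-q) * range_km)

# Example: 10 km visibility, 1.06 um laser, 5 km path
print(transmission(10.0, 1.06, 5.0))
```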
References 1. M. Thomas and D. Duncan, “Atmospheric Transmission,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 12, 1993. 2. P. Kruse, L. McGlauchlin, and R. McQuistan, Elements of Infrared Technology, John Wiley & Sons, New York, pp. 189–192, 1962. 3. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 31, 1993.
BANDWIDTH REQUIREMENT FOR ADAPTIVE OPTICS
To correct phase fluctuations induced by the atmosphere, an adaptive optics servo system should have a bandwidth of

0.4 v_w / √(λL)

where v_w = wind velocity
λ = wavelength
L = path length inducing the phase fluctuations
Discussion
This handy relationship indicates that the shorter the wavelength and the higher the wind velocity, the faster the servo system must respond. The bandwidth is lowered as
the path length increases, because of the effect of averaging over the path. The bandwidth defined by this formula is often referred to as the Greenwood frequency. A more complete expression for the Greenwood frequency is

f_G = [ 0.102 k² sec θ ∫₀^∞ C_n²(z) V(z)^(5/3) dz ]^(3/5)

where θ = angle from the line of sight to the zenith
V(z) = vertical profile of the wind
C_n² = atmospheric structure function
k = 2π/λ

With a little work, it can be shown that the Greenwood frequency goes as the –6/5 power of wavelength. An even simpler form of the rule is2

0.43 v_w / r₀

where r₀ = Fried parameter, defined elsewhere in this chapter.

Finally, it can be shown that there is a relationship between Greenwood frequency and Strehl ratio (S).3 A number of rules about Strehl ratio, an important metric for the performance of laser and optical systems, are found in Chap. 9, "Lasers." It is shown that

S = exp[ −0.95 (f_G/f_B)^(5/3) ]

where f_G, f_B = the Greenwood and system bandwidth, respectively
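The following sketch evaluates the simple forms of this rule in Python. It assumes the square-root form of the basic bandwidth expression as reconstructed above; the wind speed, wavelength, path, r₀, and servo bandwidth are illustrative values, not from the text.

```python
import math

def ao_bandwidth_simple(wind_m_s, wavelength_m, path_m):
    # Simple form of the rule: 0.4 * v_w / sqrt(lambda * L)
    return 0.4 * wind_m_s / math.sqrt(wavelength_m * path_m)

def greenwood_from_r0(wind_m_s, r0_m):
    # Even simpler form quoted from Ref. 2: 0.43 * v_w / r0
    return 0.43 * wind_m_s / r0_m

def strehl_from_bandwidths(f_greenwood, f_servo):
    # S = exp[-0.95 * (fG/fB)^(5/3)] from Ref. 3
    return math.exp(-0.95 * (f_greenwood / f_servo) ** (5.0 / 3.0))

# Illustrative: 10 m/s wind, 0.5 um, 5 km path; r0 = 0.1 m; 100 Hz servo
print(ao_bandwidth_simple(10.0, 0.5e-6, 5e3))   # ~80 Hz
print(greenwood_from_r0(10.0, 0.10))            # 43 Hz
print(strehl_from_bandwidths(43.0, 100.0))      # Strehl with a 100 Hz servo
```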
References 1. R. Tyson, Principles of Adaptive Optics, Academic Press, Orlando, FL, p. 36. 1991. 2. J. Mansell, Ph.D. dissertation Stanford University, Micromachined Deformable Mirrors for Laser Wavefront Control, Chap. 2, p. 3, 2002. Available at www.intellite.com, 2003. 3. P. Berger et al., “AEOS adaptive-optics system and visible imager,” Proc. 1999 AEOS Technical Conference, U.S. Air Force, Maui, HA.
Cn2 ESTIMATES
The index of refraction structure constant C_n² is a measure of the index of refraction variation induced by small-scale variations in the temperature of the atmosphere. The effect results from the fact that the index of refraction of air changes with temperature. There are several quick ways to estimate C_n². The easiest1 is that it varies with altitude in the following way:
C_n² = 1.5 × 10⁻¹³ / h   for h < 20 km (h in meters)

and

C_n² = 0   for altitudes above 20 km

Note that C_n²(h) has the rather odd units of m^(−2/3).
Discussion
No other parameter is so common as C_n² in defining the impact of the atmosphere on the propagation of light. C_n²(h), in which we have explicitly shown that the function depends on altitude, is critical in determining a number of key propagation issues such as laser beam expansion, visibility, and adaptive optics system performance, as well as in defining the performance impact that the atmosphere has on ground astronomical telescopes, surveillance sensors, and high-resolution FLIRs.

Each of the estimates shown below is just that: an estimate. However, for the modeling of systems or the sizing of the optical components to be used in a communications or illumination instrument, these approximations are quite adequate. This field of study is particularly rich, having benefited from work by both the scientific and military communities. Any attempt to provide really accurate approximations to the real behavior of the atmosphere is beyond the scope of this type of book but is covered frequently in the astronomical literature, particularly in conferences and papers that deal with the design of ground-based telescopes.

Propagation estimates rely on knowledge of C_n², and many of those estimation methods appear in this chapter. Although a number of estimates of C_n² are widely used, most will provide adequate results for system engineers trying to determine the impact of turbulence on the intensity of propagating light as well as other features of beam propagation.

The most widely used analytic expression for C_n²(h) is the so-called Hufnagel-Valley (HV) 5/7 model. It is "so-called" because the profile of C_n² results in a Fried parameter (see the rule, "Fried Parameter") of 5 cm and an isoplanatic angle of 7 µrad for a wavelength of 0.5 µm. Beland2 expresses the Hufnagel-Valley (HV) 5/7 model as

C_n²(h) = 8.2 × 10⁻²⁶ W² (h/1000)¹⁰ e^(−h/1000) + 2.7 × 10⁻¹⁶ e^(−h/1500) + 1.7 × 10⁻¹⁴ e^(−h/100)

where h = height in meters
W = wind correlating factor, which is selected as 21 for the HV 5/7 model

Note that the second reference has an error in the multiplier in the last term. That error has been corrected in what is presented above. In many cases, the C_n² value can be crudely approximated as simply 1 × 10⁻¹⁴ during the night and 2 × 10⁻¹⁴ during the day. The R3843 database has a minimum value of 7.11 × 10⁻¹⁹ and a maximum value of 1.7 × 10⁻¹³ for ground-based, horizontal measurements. The R384 average is almost exactly 1 × 10⁻¹⁴.

C_n² is strictly a property of the turbulence induced in the atmosphere by tiny fluctuations in temperature. These temperature fluctuations induce very small variations in the index of refraction of the air. The temperature fluctuations are usually described in terms of the temperature structure parameter, C_T². While the index of refraction of air varies slightly with wavelength, the effect is minor and, as will be seen in other rules, has a minor impact on imaging performance.
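A minimal sketch of the HV 5/7 profile in Python follows; the evaluation altitudes are arbitrary illustrative values.

```python
import math

def cn2_hv57(h_m, W=21.0):
    """Hufnagel-Valley 5/7 profile of Cn^2(h) in m^(-2/3); h in meters, W = 21."""
    return (8.2e-26 * W**2 * (h_m / 1000.0)**10 * math.exp(-h_m / 1000.0)
            + 2.7e-16 * math.exp(-h_m / 1500.0)
            + 1.7e-14 * math.exp(-h_m / 100.0))

# Example: profile values at a few altitudes
for h in (0.0, 100.0, 1000.0, 10000.0):
    print(h, cn2_hv57(h))
```

Near the ground this returns roughly 1.7 × 10⁻¹⁴, in line with the crude daytime value quoted above.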
References 1. J. Accetta, “Infrared Search and Track Systems,” in Vol. 5, Passive Electro-Optical Systems, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 287, 1993. 2. R. Beland, “Propagation through Atmospheric Optical Turbulence,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 221, 1993. 3. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Ch. 29, in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 29-33 to 29-37, 2001. 4. M. Friedman, “A Collection of C 2n Models,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, VA, May 1, 2002. Available in NVTHERM at www.ontar.com, 2003. 5. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, VA, May 24, 2002. Available in NVTHERM at www.ontar.com, 2003.
Cn2 AS A FUNCTION OF WEATHER
For altitudes above 15 m and up to an altitude of a few hundred meters, the following approximation can be used to estimate C_n² for various wind speeds and humidities:

C_n² = 3.8 × 10⁻¹⁴ W + 2 × 10⁻¹⁵ T − 2.8 × 10⁻¹⁵ RH + 2.9 × 10⁻¹⁷ RH² − 1.1 × 10⁻¹⁹ RH³ − 2.5 × 10⁻¹⁵ WS + 1.2 × 10⁻¹⁵ WS² − 8.5 × 10⁻¹⁷ WS³ − 5.3 × 10⁻¹³

where W = temporal hour weight (described below)
T = air temperature in kelvins
RH = relative humidity (percent)
WS = wind speed (m/s)
C_n² = defined elsewhere in this chapter
Discussion
A variety of C_n² models abound in the literature, but few attempt to capture its relationship to environmental conditions. Once the value of C_n² is found, it can be scaled with altitude using an appropriate model, as defined elsewhere in this chapter. This rule provides an algorithm for the weather-related effects in the important altitudes near the ground. The authors of Ref. 1 point out that an even more complete model is created when one includes the effects of aerosols. They do this by estimating the total cross-sectional area (TCSA), as below, and modifying the estimate of C_n². Note that the units of TCSA are cm²/m³.
TCSA = 9.69 × 10⁻⁴ RH − 2.75 × 10⁻⁵ RH² + 4.86 × 10⁻⁷ RH³ − 4.48 × 10⁻⁹ RH⁴ + 1.66 × 10⁻¹¹ RH⁵ − 6.26 × 10⁻³ ln RH − 1.34 × 10⁻⁵ SF⁴ + 7.30 × 10⁻³

and

C_n² = 5.9 × 10⁻¹⁵ W + 1.6 × 10⁻¹⁵ T − 3.7 × 10⁻¹⁵ RH + 6.7 × 10⁻¹⁷ RH² − 3.9 × 10⁻¹⁹ RH³ − 3.7 × 10⁻¹⁵ WS + 1.3 × 10⁻¹⁵ WS² − 8.2 × 10⁻¹⁷ WS³ + 2.8 × 10⁻¹⁴ SF − 1.8 × 10⁻¹⁴ TCSA + 1.4 × 10⁻¹⁴ TCSA² − 3.9 × 10⁻¹³

where SF = solar flux in units of kW·m⁻²

We also note the introduction of the concept of the temporal hour in the equation and a weighting function (W) associated with it. The temporal hour is defined as one-twelfth of the time between sunrise and sunset.2 In winter, a temporal hour is less than 60 min, for example. Table 3.1 shows the values of W that should be used to change the estimate of C_n² during the day. Figure 3.2 is the cumulative probability of C_n² from the U.S. Navy's R384 database, a database of 384 detailed atmospheric measurements at multiple maritime locations across the globe, all with a horizontal path.
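The basic (non-aerosol) form of the model is easy to evaluate; here is a minimal Python sketch. The mid-day weight, temperature, humidity, and wind speed below are assumed example inputs, with W taken from Table 3.1.

```python
def cn2_macroscale(W, T_K, RH_pct, WS_m_s):
    """Near-ground Cn^2 from the weather-only model above (no aerosol term)."""
    return (3.8e-14 * W + 2e-15 * T_K
            - 2.8e-15 * RH_pct + 2.9e-17 * RH_pct**2 - 1.1e-19 * RH_pct**3
            - 2.5e-15 * WS_m_s + 1.2e-15 * WS_m_s**2 - 8.5e-17 * WS_m_s**3
            - 5.3e-13)

# Illustrative mid-day case: W = 1 (see Table 3.1), 293 K, 50% RH, 3 m/s wind
print(cn2_macroscale(1.0, 293.0, 50.0, 3.0))
```

These inputs return a value of order 10⁻¹⁴, consistent with the typical near-ground daytime values quoted earlier in this chapter.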
References 1. Y. Yitzhaky, I. Dror, and N. Kopeika, “Restoration of Atmospherically Blurred Images According to Weather-Predicted Atmospheric Modulation Transfer Functions,” Optical Engineering, 36(11), November 1997, pp. 3064–3072. 2. D. Sadot and N. S. Kopeika, “Forecasting Optical Turbulence Strength on the Basis of Macroscale Meteorology and Aerosols: Models and Validation,” Optical Engineering, 31(2), February 1992, pp. 200–212. 3. M. Friedman, “A Collection of C2n Models,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia, May 1, 2002. Available in NVTHERM at www.ontar.com, 2003. 4. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia, May 24, 2002. Available in NVTHERM at www.ontar.com, 2003.
TABLE 3.1 Values of W

Temporal hour interval    Relative weight (W)
Until –4                  0.11
–4 to –3                  0.11
–3 to –2                  0.07
–2 to –1                  0.08
–1 to 0                   0.06
Sunrise
0 to 1                    0.05
1 to 2                    0.1
2 to 3                    0.51
3 to 4                    0.75
4 to 5                    0.95
5 to 6                    1
6 to 7                    0.9
7 to 8                    0.8
8 to 9                    0.59
9 to 10                   0.32
10 to 11                  0.22
Sunset
11 to 12                  0.10
12 to 13                  0.08
Over 13                   0.13
FIGURE 3.2 Cumulative probability of maritime C_n². Data represent the results of experiments with a horizontal view near the ground.
FREE-SPACE LINK MARGINS
The atmosphere has a distinct impact on the performance of terrestrial laser communications. The following data indicate the relative impact of different conditions.
Discussion
Atmospheric absorption, scatter, and scintillation all decrease the SNR and, if bad enough, eliminate the ability of an electro-optical system to detect a target or send information to another location. Table 3.2 gives some guidelines for the link margins suitable for various weather types. These values are highly subjective, as all of these weather conditions, in the real world, can be "clumpy," both spatially and temporally, and such definitions are often in the eye of the beholder, which does not see in the 1550-nm bandpass. The user should take these values with a large grain of salt, but they provide a good starting point.

TABLE 3.2 Suitable Link Margins

Weather condition                              Required link margin, dB/km
Urban haze                                     0.5
Typical rainfall                               3
Heavy rainfall                                 6
Typical snow, heavy downpour, or light fog     10
Whiteout snowfall or fog                       20
Heavy to severe fog                            30–120
The practitioner is encouraged to get the local weather statistics for his link to determine the link margin needed for a given locale. Obviously, Adelaide and Tucson will need a lower margin for a given reliability than Seattle or Halifax. The above link margins are for wavelengths of 1550 nm. Visible wavelengths perform slightly worse, and the long-wave infrared (LWIR) slightly better. Reference 1 gives the first five entries. The last entry is derived from the author's experience.
Reference 1. R. Carlson, “Reliability and Availability in Free Space Optical Systems,” Optics in Information Systems, SPIE Press 12(2), October 2001.
FRIED PARAMETER
The Fried parameter is computed as follows:

Fried parameter = r₀ = [ 0.423 k² sec β ∫₀^L C_n²(z) dz ]^(−3/5)

where k = propagation constant of the light being collected, k = 2π/λ
β = zenith angle of the descending light waves
L = path length through which the light is collected
C_n² = atmospheric refractive structure function, discussed elsewhere in this chapter
λ = wavelength
sec = secant function (the reciprocal of the cosine function)
z = dummy variable that represents the path over which the light propagates
Discussion
Fried has developed a useful characterization of the atmosphere. He has computed a characteristic scale of an atmospheric path, commonly referred to as r₀ (and now known as Fried's parameter). It is pronounced "r zero." It has the property that a diffraction-limited aperture of diameter r₀ will have the same angular resolution as that imposed by atmospheric turbulence. Clearly, for any path over which C_n² is constant we get

r₀ = ( 0.42 k² sec β C_n² L )^(−3/5)
For a vertical path that includes the typical profile of C_n², the value of r₀ is about 15 cm. Fried derived this expression using the Kolmogorov spectrum for turbulence. Continued development of the theory has led to new approximations and more accurate characterizations of the impact of the atmosphere on light propagation. This is particularly true for performance evaluation of space and aircraft remote sensors. Astronomical telescopes and laser applications such as optical communication have also benefitted.

Proper characterization of C_n² is necessary to get a good estimate of Fried's parameter. Note also that there is a wavelength dependence in the results, hidden in the parameter k, which is equal to 2π/λ. Unfortunately, characterization of C_n² is an imprecise empirical exercise. Astronomical sites measure it in a limited way, but it varies with location, season, and weather conditions, so it is usually only approximated. Of course, attention must be paid to using the correct units. Because C_n² is always expressed as m^(−2/3), meters are the preferred units.

This rule provides a convenient characterization of the atmosphere that is widely used in atmospheric physics, including communications, astronomy, and so on. Properly applied, it provides a characterization of the maximum telescope size for which the atmosphere does not provide an impediment that blurs the spot beyond the diffraction limit. That is, a small enough telescope will perform at its design limit even if the presence of the atmosphere is taken into account. The Fried parameter is often used in adaptive optics to determine the required number of active cells and the number of laser-generated guide stars necessary for some level of performance.

Fried's parameter continues to find other uses. For example, the resolved angle of a telescope can be expressed as approximately λ/r₀, or about 3.3 µrad for an r₀ of 15 cm and a wavelength of 0.5 µm. Note that this result is consistent with those in the rule, "Atmospheric Seeing," in Chap. 2, "Astronomy." Convection, turbulence, and the varying index of refraction of the atmosphere distort and blur an image, limiting its resolution. The best "seeing" astronomers can obtain (on good nights, at premier high-altitude observatories such as Mauna Kea) is on the order of 0.5 to 1.5 µrad. This limit holds regardless of aperture size; it is not a question of the diffraction limit but of being "seeing limited" by atmospheric effects. The seeing tends to improve with increasing altitude and wavelength.

The Fried parameter is the radius over which the incoming wavefront is approximately planar. In the visible, it ranges from about 3 to 30 cm. The Fried parameter is strongly spatially and temporally dependent on the very localized weather at a given location, and it varies with the airmass (or the telescope slant angle). It also can be affected by such localized effects as the telescope dome and air flow within the telescope. Moreover, the Fried parameter can vary across the aperture of a large telescope. Moderate-size telescopes (say, less than 5 m in aperture), operating at 10 µm or longer, tend to be diffraction limited. Stated another way, the Fried parameter in those cases exceeds, or at least equals, the telescope aperture.

The amateur astronomer's 5- to 10-inch aperture telescope is about as big as a telescope can be before atmospheric effects come into play, if the local environment (city lights and
so forth) is not a factor. The really large telescopes have the same angular resolution, because they too are affected by the atmosphere. Of course, the big telescopes are not in your backyard but are sited where atmospheric effects are as insignificant as possible. In addition, large telescopes collect light faster than smaller ones and thus allow dimmer objects to be seen in a reasonable length of time.

One method to correct for this atmospheric distortion is to employ a wavefront sensor to measure the spatial and temporal phase change on the incoming light, and to use a flexible mirror to remove the distortions that are detected, in essence removing the atmospheric effects in real time. The wavefront sensor can be a Shack-Hartmann sensor, which is a series of lenslets (or subapertures) that "sample" the incoming wavefront at the size of (or smaller than) the Fried parameter. The size of the wavefront sensor subaperture and the spacing of the actuators that are used to deform an adaptive mirror can be expected to be less than the Fried coherence cell size. The diameter of the telescope divided by the Fried parameter indicates the minimal number of subapertures needed. The optimal size of the wavefront spacing and correction actuators seems to be between 0.6 and 1.0 times the Fried cell size. More details on this topic are covered in other rules in this chapter.

Reference 2 notes that r₀ may be estimated by a number of practical methods, the simplest of which relies on measurement of image motion:

r₀ = 0.346 [ λ² / (σ² D^(1/3)) ]^(3/5)

where D = telescope aperture diameter
σ = rms angular image motion
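Both forms of the estimate are straightforward to compute; a minimal Python sketch follows. The wavelength, C_n², and path length are assumed example values.

```python
import math

def fried_r0_constant_cn2(wavelength_m, cn2, path_m, zenith_rad=0.0):
    """r0 for a path with constant Cn^2 (the simplified expression above)."""
    k = 2.0 * math.pi / wavelength_m
    sec_beta = 1.0 / math.cos(zenith_rad)
    return (0.42 * k**2 * sec_beta * cn2 * path_m) ** (-3.0 / 5.0)

def fried_r0_from_image_motion(wavelength_m, sigma_rad, D_m):
    """r0 estimated from rms image motion (Ref. 2)."""
    return 0.346 * (wavelength_m**2 / (sigma_rad**2 * D_m**(1.0 / 3.0))) ** (3.0 / 5.0)

# Illustrative: 0.5 um, Cn^2 = 1e-14 m^(-2/3), 5 km horizontal-equivalent path
print(fried_r0_constant_cn2(0.5e-6, 1e-14, 5000.0))   # a few millimeters to centimeters
```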
References 1. C. Aleksoff et al., “Unconventional Imaging Systems,” in Emerging Systems and Technologies, Vol. 8, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, SPIE Press, p. 132, 1993. 2. D. S. Acton, “Simultaneous Daytime Measurements of the Atmospheric Coherence Diameter r0 with Three Different Methods,” Applied Optics, 34(21), pp. 4526-4529, July 20, 1995.
INDEX OF REFRACTION OF AIR
The index of refraction of air can be approximated by

n = 1 + 77.6 (1 + 7.52 × 10⁻³ λ⁻²) (P/T) × 10⁻⁶

where P = pressure in millibars
λ = wavelength in microns
T = temperature in kelvins1
Discussion Algorithms for expressing the index of refraction of air can be very important for computing ray traces related to imaging through the atmosphere, particularly descriptions of color properties of the atmosphere (rainbows, glory, and so on). In addition, because the optical
effect of turbulence in the atmosphere derives from variations in the index of refraction, the above expression can be useful in scaling C_n², as described in Ref. 1. The reference points out that fluctuations in the index depend on wavelength and temperature according to

Δn = 77.6 (1 + 7.52 × 10⁻³ λ⁻²) (P/T²) × 10⁻⁶ ΔT

This result is obtained by simply taking the derivative of the equation in the rule. This type of result can be useful in dealing with observed scintillation and pointing jitter induced by local temperature fluctuations, as described in other rules in this chapter. In addition, one can estimate the change in index as pressure or temperature changes occur. An experimenter who is dealing with changing weather conditions while trying to perform long-path experiments will find application of this simple rule.

To make these estimates complete, we include a simple method of estimating the density (ρ) of air from Ref. 2:

ρ = 1.286 − 0.00405 T

Here, density is in kg/m³, and T is in degrees Celsius. Reference 3 suggests a small correction to the first equation, resulting in

n = 1 + 77.6 (1 + 7.52 × 10⁻³ λ⁻²) (P/T) [1 + 4810 e/(PT)] × 10⁻⁶
where e = water vapor pressure in millibars

Another approach4 is to define the index as a function of frequency rather than wavelength:

(n − 1) × 10⁶ = [ 237.2 + 526.3 v₁²/(v₁² − v²) + 11.69 v₂²/(v₂² − v²) ] (P_dry/T)

where N = the refractivity, (n − 1) × 10⁶
v = the wave number in cm⁻¹
v₁ = 114,000 cm⁻¹
v₂ = 62,400 cm⁻¹
P_dry = dry air pressure in kilopascals
T = temperature in kelvins
Finally, Ref. 5 compares a number of different strategies for estimating the index of refraction of dry air and water vapor. They use the concept of reduced refraction, A(λ), to simplify the equations. Reduced refraction uses the fact (seen above) that the index usually is of the form

n_λ − 1 = (P/T) A(λ)
In what follows, we will provide the formulae for A(λ). For dry air (using the subscript D), we can use either of the following formulae:
A_D1 = 2.84382 × 10⁻⁹ [ 8342.13 + 2406030/(130 − 1/λ²) + 15997/(38.9 − 1/λ²) ]

A_D2 = 2.69578 × 10⁻⁹ ( 28760.4 + 162.88/λ² + 1.36/λ⁴ )

These two algorithms match each other and the one provided in the rule almost exactly. Because the version in the rule is the simplest of the three, it is the one that most people will want to use. The reduced refractivity for water (subscript W) is provided by either of the two following formulae:

A_W1 = 2.84382 × 10⁻⁹ ( 24580.4 + 162.88/λ² + 1.36/λ⁴ )

A_W2 = 2.24756 × 10⁻⁷ ( 295.235 + 2.6422/λ² − 0.03238/λ⁴ + 0.004028/λ⁶ )

The reader should be alert to the fact that the two forms above differ by about 3 percent. This difference seems not to have been resolved by the community, as both versions are in use. Using the A(λ) formulation, we can compute the index of refraction as

n_λ = 1 + (1/T) [ A_D(λ) P_D + A_W(λ) P_W ]

where D and W = subscripts in the equations above
P_D and P_W = the partial pressure of dry air and water vapor, respectively
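The basic rule and the humidity-corrected form from Ref. 3 are simple to evaluate; here is a minimal Python sketch with assumed sea-level inputs.

```python
def n_air(P_mb, T_K, wavelength_um, e_mb=0.0):
    """Index of refraction of air per the rule, with the optional Ref. 3
    water-vapor correction (e = water vapor partial pressure in mb)."""
    dispersion = 1.0 + 7.52e-3 / wavelength_um**2
    n_minus_1 = 77.6 * dispersion * (P_mb / T_K) * 1e-6
    if e_mb:
        n_minus_1 *= 1.0 + 4810.0 * e_mb / (P_mb * T_K)
    return 1.0 + n_minus_1

# Sea-level example: 1013 mb, 288 K, 0.55 um
print(n_air(1013.0, 288.0, 0.55))   # about 1.00028
```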
References 1. W. Brown et al., “Measurement and Data-Processing Approach for Estimating the Spatial Statistics of Turbulence-Induced Index of Refraction Fluctuations in the Upper Atmosphere,” Applied Optics, 40(12), p. 1863, April 20, 2001. 2. D. L. Hutt, “Modeling and Measurements of Atmospheric Optical Turbulence over Land,” Optical Engineering, 38(8), pp. 1288–1295, August 1999. 3. M. Sarazin, Atmospheric Turbulence in Astronomy, 2001, available at www.eso.org/astclim/espas/iran/zanjan/sanjan01.ppt, 2003. 4. G. Kamerman, “Laser Radar,” in Vol. 2, Atmospheric Transmission, M. Thomas and D. Duncan, Eds., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 88, 1993. 5. S. van der Werf, “Ray Tracing and Refraction in the Modified U.S. 1976 Atmosphere,” Applied Optics, 42(3), p. 354–366, January 20, 2003.
THE PARTIAL PRESSURE OF WATER VAPOR The following equation shows the partial pressure of water vapor as a function of air temperature and relative humidity: P = 1.333 RH {[(C3 Tc + C2) Tc + C1] Tc + C0} where
P = partial pressure of water vapor in millibars RH = relative humidity C0 = 4.5678 C1 = 0.35545 C2 = 0.00705 C3 = 3.7911 × 10–4 Tc = air temperature in degrees Celsius
Discussion
The amount of water in the path length affects nearly all wavebands by reducing the transmission. Additionally, a number of rules related to upwelling and downwelling radiation in the atmosphere depend on knowing the partial pressure of water vapor. Upwelling and downwelling describe the flow direction of radiation. Scattered sunlight (as might be produced by a thick cloud cover) is downwelling, whereas reflections from the surface are upwelling.

The rule matches the observed distribution by fitting data with a least-squares curve. The partial pressure is useful in a number of applications. For example, the same reference shows that the downwelling radiance onto a horizontal plane is proportional to the square root of the partial pressure of water vapor. This rule gives immediate results for estimating the partial pressure of water vapor. It clearly shows that as the temperature rises, the partial pressure increases rapidly, and vice versa. In fact, review of the equation shows that the increase goes as Tc³ for one term and Tc² for another.
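A minimal Python sketch of the fit follows. The rule does not state whether RH is a fraction or a percentage; entering RH as a fraction reproduces the familiar saturation pressure of roughly 23 mb at 20°C, which suggests a fractional RH is intended, but treat that as an inference rather than a statement from the source.

```python
def pp_h2o_mb(T_celsius, RH_fraction):
    # P = 1.333 * RH * {[(C3*Tc + C2)*Tc + C1]*Tc + C0}
    C0, C1, C2, C3 = 4.5678, 0.35545, 0.00705, 3.7911e-4
    poly = ((C3 * T_celsius + C2) * T_celsius + C1) * T_celsius + C0
    return 1.333 * RH_fraction * poly

print(pp_h2o_mb(20.0, 1.0))   # ~23 mb, close to the saturation value at 20 C
print(pp_h2o_mb(20.0, 0.5))   # ~12 mb at 50% relative humidity
```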
Reference 1. D. Kryskowski and G. Suits, “Natural Sources,” in Vol. 1, Sources of Radiation, G. Zissis, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 147, 1993.
PHASE ERROR ESTIMATION
The maximum phase error induced by the atmosphere can be written as

maximum phase error = 0.57 k² z C_n² D^(5/3)

where z = distance through which the aberrated wave propagates
C_n² = atmospheric structure function
D = aperture diameter
k = wave propagation constant, 2π/λ
Discussion
The stroke of actuators in a deformable mirror system must be able to accommodate the phase errors induced by the atmosphere and aberrations in the telescope. The rule shown here compares with a discourse by Tyson2 in which he shows the phase effects of various optical aberration terms induced by the atmosphere. Some algebra shows that what is shown above (as well as the terms described by Tyson) can be put in a form that includes the ratio of the aperture diameter and Fried's parameter, r₀. Fried's parameter is described in another rule. We can compare the various results using the following equation:

σ² = n (D/r₀)^(5/3)
We will assume that r₀ is expressed as [0.423 k² C_n² z]^(−3/5), the value it takes on for constant C_n². Tyson shows the following values for n:

Variance in phase (σ²)                                         Value of n
Piston                                                         1.0299
One-dimensional tilt                                           0.582
Two-dimensional tilt                                           0.134
Focus                                                          0.111
The rule above, taking into account that it is described
as the maximum (which we might assume is 3σ)                   0.256
Thus, we see that there is not complete agreement between the rule and Ref. 2, but all values fall in the same range. Consider the case in which D = 1 m, C_n² is 2 × 10⁻¹⁴ m^(−2/3), L = 5000 m (approximately the distance in the atmosphere over which turbulence is a significant factor), and λ = 1 µm. From these numbers, we get about 35 radians (about 6 waves, or about 3 to 6 µm) for the maximum piston stroke needed to accommodate the effects of the atmosphere.
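The expression is easy to evaluate for arbitrary inputs; the following Python sketch uses illustrative values rather than the specific case worked in the text.

```python
import math

def max_phase_error_rad(wavelength_m, path_m, cn2, aperture_m):
    # maximum phase error = 0.57 * k^2 * z * Cn^2 * D^(5/3)
    k = 2.0 * math.pi / wavelength_m
    return 0.57 * k**2 * path_m * cn2 * aperture_m ** (5.0 / 3.0)

def stroke_um(phase_rad, wavelength_m):
    # convert a phase excursion in radians to an optical path length in micrometers
    return phase_rad / (2.0 * math.pi) * wavelength_m * 1e6

# Illustrative inputs (assumed): 0.5 m aperture, Cn^2 = 1e-15, 1 km path, 1.5 um
phi = max_phase_error_rad(1.5e-6, 1000.0, 1e-15, 0.5)
print(phi, stroke_um(phi, 1.5e-6))
```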
References 1. R. Mali et al., “Development of Microelectromechanical Deformable Mirrors for Phase Modulation of Light,” Optical Engineering, 36(2), pp. 542–548, February 1997. 2. R. Tyson, Principles of Adaptive Optics, Academic Press, New York, p. 79, 1991.
SHACK-HARTMANN NOISE
The variance in the wavefront resulting from nonperfect sensing in a Shack-Hartmann sensor can be expressed as1

var = (0.86π)² (L/r₀)² [ K_b + K_s + (K_N² N_d) ] / K_s²   rad²

where K_s = average number of detected signal photons in each Hartmann spot
K_b = number of background photons detected in each subarray
K_N = read noise in each pixel
N_d = number of detectors in each subarray
L = outer scale of the atmosphere
r₀ = Fried's parameter (defined elsewhere in this book)
Discussion
The Shack-Hartmann sensor is widely used to determine the wavefront error to be corrected by an active optical system. The sensor divides the telescope pupil into a number of small areas (or subapertures) and determines the wavefront tilt at each area. The ensemble of tilts measured in each subaperture is used to fit the overall wavefront shape and to command the correction system. Noise or other error corrupts the tilt measurements and leads to incomplete correction of wavefront error.

To implement such a system, groups of pixels in a typical focal plane are assigned to a particular location in the pupil. Each group of pixels (a subarray) represents a point in the array of samples of the wavefront sensor. Each point in the array collects light over an area called the subaperture. A wavefront entering such a subaperture is imaged in the part of the focal plane assigned to it, falling in the center of the subarray only if there is no tilt. A tilted wavefront will be brought to focus in a noncentered location. The center of light of the spot that is formed is determined by measuring the intensity of the light in each pixel and performing a center-of-mass calculation. The location of the center of the spot indicates the two-dimensional tilt for the particular subaperture.

The outer scale describes a feature of the turbulence of the atmosphere that indicates the source of the kinetic energy of the atmosphere. In most cases, the outer scale at low altitude is about one-half the altitude, becoming 100 m or more away from the ground where free flow occurs. Fried's parameter indicates the lateral spatial scale over which the phase of rays arriving at the sensor is about the same.

The formula below is a simpler version that explicitly shows the role of signal-to-noise ratio in determining the variance in the measurements. The noise equivalent angle of the individual tilt measurements is2

σ²_tilt = 0.35 λ² / [ d_s² (SNR_v)² ]

where σ_tilt = rms tilt error
λ = wavelength
d_s = subaperture diameter
SNR_v = voltage signal-to-noise ratio
The first figure, Fig. 3.3, is an image of the subaperture spots taken at a zenith angle of 33° on the 3.5-m Apache Point Observatory. Note that the spots vary in position over the pupil. That is, they do not form a uniform grid. The locations of the spots provide the information that indicates what type of tilt imperfections are present in the subaperture of the wavefront. The second figure, Fig. 3.4, shows the resulting wavefront error mapped across the pupil. In this figure, the wavefront error is shown as an optical path error (in nanometers) for each position on the pupil. This is called a zonal representation of the error. In some cases, it is useful to convert the zonal representation into its decomposition in terms of Zernike polynomial coefficients, which is called a modal representation. This type of data can be input to a computer to command positioners to deform a mirror (for phase conjugation) to reduce these wavefront errors. The result is a much cleaner image.
FIGURE 3.3 Image of subaperture spots.3 A close look reveals that the spots do not form a uniform grid. This results from the wavefront tilt that is being measured.
FIGURE 3.4 The resulting wavefront error mapped across the pupil as derived from the wavefront error present in the SH results shown above.3
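The simpler, SNR-based tilt-noise expression above is easy to exercise; here is a minimal Python sketch with an assumed subaperture size, wavelength, and signal-to-noise ratio.

```python
import math

def tilt_rms_rad(wavelength_m, subaperture_m, snr_v):
    # sigma_tilt = sqrt(0.35) * lambda / (d_s * SNR_v), from sigma^2 = 0.35*lambda^2/(d_s^2*SNR_v^2)
    return math.sqrt(0.35) * wavelength_m / (subaperture_m * snr_v)

# Illustrative: 0.7 um light, 10 cm subaperture, voltage SNR of 20
print(tilt_rms_rad(0.7e-6, 0.1, 20.0))   # about 0.2 microradian rms tilt error
```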
References 1. D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight LowOrder Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Optical Engineering, 36(7), pp. 1910–1917, July 1997. 2. R. Tyson and P. Ulrich, Adaptive Optics, Ch. 2 of Vol. 8 of the Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 215, 1993. 3. www.Astro.Washington.edu/morgan/APO/SH-measurements/initial_Trials/report.html, 2003.
VERTICAL PROFILES OF ATMOSPHERIC PARAMETERS
Using the following functional form, one can estimate vertical profiles of temperature, pressure, and humidity:

f(h) = f(0) exp(−ah − bh²)

where f(0) represents the surface value of each parameter and h is the height in kilometers. Note that the pressure is in millibars.

Atmospheric parameter    a (km⁻¹)    b (km⁻²)
Humidity (g/m³)          0.308       0.05
Temperature (K)          0.01464     0.00081
Pressure (mB)            0.11762     0.00109
Discussion The fact that there is a strong exponential component to these models will not be a surprise to anyone who has studied the thermodynamics of the atmosphere. Both pressure and temperature can be evaluated for an equilibrium atmosphere by investigating the impact that gravity has on the pressure profile. Further analysis of the entropy and molecular properties of the atmospheric constituents leads to the conclusion that both pressure and temperature exhibit an idealized exponential change with altitude. Modeling the vertical profile of water vapor is not so easily done, but it should be clear that the thermodynamics of water, particularly the exchange of energy that generates changes in phase from liquid to vapor to solid (ice), are entwined with the temperature profile. The typical exponential property of the atmosphere can be seen in the first term of the equation. The quadratic term assists in fitting data used in MODTRAN calculations. The approximation works well up to 4 km altitude in a marine environment.
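A minimal Python sketch of the profile function follows; the surface pressure and evaluation altitude are assumed example inputs.

```python
import math

# (a, b) coefficients from the table above; f(h) = f(0) * exp(-a*h - b*h^2), h in km
PROFILE_COEFFS = {
    "humidity_g_m3": (0.308, 0.05),
    "temperature_K": (0.01464, 0.00081),
    "pressure_mb": (0.11762, 0.00109),
}

def profile(surface_value, h_km, parameter):
    a, b = PROFILE_COEFFS[parameter]
    return surface_value * math.exp(-a * h_km - b * h_km**2)

# Example: sea-level pressure of 1013 mb evaluated at 2 km altitude
print(profile(1013.0, 2.0, "pressure_mb"))   # roughly 800 mb
```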
Reference 1. F. Hanson, “Coherent Laser Radar Performance in Littoral Environments—A Statistical Analysis Based on Weather Observations,” Optical Engineering, 39(11), pp. 3044–3052, November 2000.
VISIBILITY DISTANCE FOR RAYLEIGH AND MIE SCATTERING Rayleigh scattering is the only significant type of scattering that occurs when the visibility is greater than 10 to 12 km.
Discussion When the relative humidity is greater than roughly 75 percent, aerosols (haze) grow into the size range for Mie scattering. As a rule of thumb, Mie scattering is the type of scattering that reduces visibilities below the criterion for unrestricted visibility ( 2.
F_0–λT = (15/π⁴) Σ_(m=1,2,3,…) (e^(−mν)/m⁴) { [ (mν + 3)mν + 6 ] mν + 6 }     for (ν ≥ 2)

F_0–λT = 1 − (15/π⁴) ν³ ( 1/3 − ν/8 + ν²/60 − ν⁴/5040 + ν⁶/272,160 − ν⁸/13,305,600 )     for (ν < 2)

The number of terms included in the series of the first equation is selected to obtain the desired accuracy. This function calculates the fraction of the total blackbody radiation emitted up to the indicated value of λT. By doing two calculations, one can find the power in a particular band. The break point of "2" is equivalent to T = 7.2 × 10⁻³/λ, where wavelength is expressed in meters. For example, if we are dealing with a blackbody of temperature 3000 K, then the first formula applies for wavelengths shorter than 2.4 µm. For cooler bodies, say 300 K, the first formula applies shorter than 24 µm.

As an example, consider a blackbody with a temperature of 300 K. What fraction of its total emissive power (σT⁴) falls between the wavelengths of 50 and 51 µm? Using the second of the equations, we compute F_0–λT for both 50 and 51 µm. We find that the defined band represents 0.156983 percent of the total output of the blackbody. This result compares very well with the value obtained from an on-line blackbody calculator,2 which gives the result of 0.1565362 percent. While not as accurate as a computation using Planck's integral, the result is certainly adequate for a quick assessment.
References 1. R. Siegel and J. Howell, Thermal Radiation Heat Transfer, Appendix A, McGraw-Hill, New York, 1972. 2. http://thermal.sdsu.edu/testcenter/javaapplets/planckRadiation/blackbody.html, 2003.
BRIGHTNESS OF COMMON SOURCES
1. The Sun near the equator at noon has a brightness of 10⁵ lux and emits about 10²⁶ W.
2. A full Moon is 500,000 times dimmer (0.2 lux).
3. A super-pressure mercury lamp emits about 250 W/cm²·sr.
4. A 60-W bulb emits approximately 50 lux, which is 50 lm/m² at 1 m.
5. A 4-mW HeNe laser emits about 10⁶ W/cm²·sr.
6. The power produced by a quasar is about 10⁴⁰ W, assuming that the quasar emits uniformly in all directions.
Discussion
It is interesting to compare these brightnesses to the sensitivities of some common visible sensors. Note that a fully dark-adapted eye, under ideal conditions, can produce a visual sensation with about 3 × 10⁻⁹ lux (lm/m²). This can be converted into photons per second by noting that the dark-adapted eye has a peak sensitivity at 510 nm and that, at that wavelength, the conversion from lumens to watts is 1725 lm/W. Thus, the dark-adapted (scotopic) eye, having an entrance area of about 13 × 10⁻⁶ m², can sense about 22 × 10⁻¹⁸ W. Each photon at 510 nm carries 390 × 10⁻²¹ joules, so the eye is sensitive to about 58 photons/sec. This is confirmed by the Burle E-O Handbook,2 which quotes work showing the sensitivity of the dark-adapted eye at 58 to 145 photons/sec.

In contrast, photographic film grays if 2 × 10⁻³ lux is imposed for 1 sec. At a film resolution of 30 lines per millimeter, we could see the effect on 10⁻⁹ m² or, with 10⁻¹¹ lumens, about 3,000 photons at 510 nm.
References 1. M. Klein, Optics, John Wiley & Sons, New York, p. 129, 1970. 2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 121, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003.
CALIBRATE UNDER USE CONDITIONS The calibration of an instrument for a specific measurement should be conducted such that, to whatever extent possible, the results are independent of instrument artifacts. Moreover, the calibration should be conducted under conditions that reproduce (or closely approximate) the situations under which field measurements are to be made.
Discussion
Sometimes, calibrations of sufficient accuracy may be done in environments that differ slightly from their use environment, but this approach generally should be avoided. The radiometric and environmental conditions should simulate those that are likely to be experienced in use. This rule is useful when designing calibration facilities, determining calibration requirements, and understanding the usefulness of a previous calibration (see Fig. 14.2).

The purpose of a calibration is to establish a known relationship between the output of an electro-optical sensor and a known level of incident flux, which can be traced to primary standards (e.g., NIST in the U.S.A.).
FIGURE 14.2 Example of a multiple blackbody test facility. (Courtesy of FLIR Systems Inc.)
As stated above, when the flux is to be measured at different times and places or with different instruments, the results should be the same (or of a known function of each other). Sensors need to be fully characterized so that their contribution can be estimated, allowing for appropriate corrections. Electro-optical sensors have quirky attributes that make their output a complex, nonintuitive function of the total input, which includes spectral radiant flux from the background, the temperature of the sensor, the expected radiant flux on the aperture, similar target geometry, background, polarization, vibration, and so on. Also, measurements must be made a sufficient number of times to determine repeatability and an estimate of calibration uncertainty.
EFFECTIVE CAVITY EMISSIVITY
The effective emissivity of a cylindrical cavity is1,2

ε_eff = ε / [ ε(1 − A/S) + A/S ]

where ε = emissivity of the material
A = area of the exit port of the cavity
S = total surface area of the cylinder
Discussion
Just about any shape that has a large surface area compared with the surface of the exit aperture will create a high effective emissivity, regardless of the inherent emissivity of the materials. For example, a cone with an aspect ratio (ratio of length to aperture diameter) of 6 will have an effective emissivity of 0.995.1 Even a stack of shiny razor blades, when viewed looking at the sharp edges, is extremely black.

Reference 2 provides an equation that includes an additional level of detail and applies to general closed shapes. In this formulation,

ε_eff = ε(1 + k) / [ ε(1 − A/S) + A/S ]

where k = (1 − ε)(A/S − A/S_o)

and S_o is the surface area of a sphere whose diameter is equal to the depth of the cavity. That is, S_o is defined by the distance from the exit plane to the deepest point of the cavity. For example, consider a sphere of diameter 50 cm and an opening of 1 cm. Because S and S_o are nearly the same, k ≈ 0, yielding the equation in the rule. Using the rule and the dimensions of the sphere mentioned above, we can estimate the emissivity of the cavity as

ε′ = ε / [ ε(1 − 0.0004) + 0.0004 ]

If ε ≈ 1, then ε′ = 1, as expected. However, suppose that the material is very reflective and ε ≈ 0.1. Then,

ε_eff = 0.1 / [ 0.1(0.9996) + 0.0004 ] = 0.9964

This very high emissivity is consistent with our experience with blackbodies, which is that the surface material is unimportant if the shape is approximately a closed surface with a small hole.

Yet another approach, but with less adaptability, appears in Ref. 3. For a typical cylindrical cavity (one end open and the other closed) with a diameter-to-length ratio of about 1:2, the cavity's effective emissivity can be estimated from

ε_cavity = 0.8236 + 0.43ε − 0.367ε² + 0.113ε³

where ε = surface emissivity

This polynomial equation allows the computation of emissivity for a cylinder. In any other case, the effective emissivity depends on the cavity shape and surface properties. Note that an ε of 0.8 results in a cavity emissivity of about 0.99. These results apply in the wavelength range of 2.2 to 4.7 µm. Figure 14.3 illustrates the functional form of the polynomial equation.
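A minimal Python sketch of both cavity-emissivity expressions follows; the sphere example uses the A/S ratio of 0.0004 worked in the text, and the rest of the inputs are illustrative.

```python
def cavity_emissivity(eps, A_exit, S_total, S_sphere=None):
    """Effective cavity emissivity (Ref. 2 form; k = 0 recovers the basic rule)."""
    k = 0.0
    if S_sphere is not None:
        k = (1.0 - eps) * (A_exit / S_total - A_exit / S_sphere)
    ratio = A_exit / S_total
    return eps * (1.0 + k) / (eps * (1.0 - ratio) + ratio)

def cylinder_emissivity(eps):
    # Polynomial fit for a 1:2 cylindrical cavity, valid 2.2-4.7 um
    return 0.8236 + 0.43 * eps - 0.367 * eps**2 + 0.113 * eps**3

# Reproduce the sphere example: A/S = 0.0004 and surface emissivity 0.1
print(cavity_emissivity(0.1, 0.0004, 1.0))   # ~0.9964
print(cylinder_emissivity(0.8))              # ~0.99
```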
References 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 68–69, 1969. 2. W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 2–3. 3. R. Bouchard and J. Giroux, “Test and Qualification Results on the MOPITT Flight Calibration Source,” Optical Engineering, 36(11), p. 2992, November 1997.
FIGURE 14.3 Functional form of the polynomial equation.
THE MRT/NE∆T RELATIONSHIP
For IR systems, the minimum resolvable temperature (MRT) can be forecast from the noise equivalent delta (or differential) temperature (NE∆T) by

MRT ≈ k NE∆T / MTF_sys(f)

where MRT = minimum resolvable temperature
k = proportionality constant (Holst indicates that this should be 0.2 to 0.5)
NE∆T = noise equivalent delta temperature
MTF_sys(f) = system-level modulation transfer function at spatial frequency f
Discussion This is an approximation only, accurate for mid-spatial frequencies, and it applies to thermal imaging IR sensors only. It assumes that operator head movement is allowed in the testing. MTF and MRT must be at the same spatial frequency, and MTF should account for line-of-sight (LOS) stability and stabilization. This rule can be used to estimate a hard-to-calculate quantity (MRT) from two easily calculable (or determined) quantities (NE∆T and MTF). Conversely, it may be used to estimate the MRT when an MRT test (or its data) is not convenient. The sensitivity of thermal imaging sensor systems seems to fall off linearly with a decrease in the modulation transfer function. For IR systems, this can be related to the NE∆T. The MRT is usually somewhere between 1/2 and 1/5 times the NE∆T.
Reference 1. G. Holst, “Minimum Resolvable Temperature Predictions, Test Methodology and Data Analysis,” Infrared Technology XV, Vol. 1157, SPIE Press, Bellingham, WA, pp. 208-216, 1989.
THE ETENDUE OR OPTICAL INVARIANT RULE
In a lossless optical system, the etendue is constant at all planes crossed by the light rays. Etendue is the product of the area of the bundle of rays crossing a surface and the solid angle from which those rays come (or to which they go).

C = n²AΩ

where C = a numeric constant for a given detector pixel size and wavelength, equal to about λ²/2N, where N is the number of pixels (this value is called the etendue of the system)
A = area of the optics
n = index of refraction of the medium in which AΩ is measured
Ω = solid angle field of view
Discussion
The numerical value of the constant depends on the optical system. A and Ω could be the area of a pixel and the solid angle of the light falling on the pixel, or they could be the area of the optical pupil and the solid angle of the instantaneous field of view. The term n is the refractive index of the medium in which the AΩ product is measured. Often, but not always, n is unity. Detectors or microscope objectives immersed in a high-index medium, and light going from air into an optical fiber, are examples of the need to account for n. For the rest of this discussion, the above equation is true when the index of refraction of the medium in which the rays are traveling is the same throughout the system.

The etendue relationship is a basic property of radiometry and optics. The expression above, simplified to AΩ = λ², can be derived from the diffraction theory of optics. The etendue of a system is defined by the least optimized segment of the system. Once it is known, the limit of performance of the entire optical train is determined. Additionally (with reference to Fig. 14.4), we can write

A_d Ω_i = A_s Ω_o = A_o Ω′ ≈ A_o (IFOV)² ≈ C_d λ²

where A_d = area of the entire detector (or pixel in an array)
Ω_i = solid angle subtended by the detector in image space
A_s = area of interest in the scene
Ω_o = solid angle of the scene in object space
A_o = area of the optics
Ω′ = solid angle of the detector projected into object space
IFOV = instantaneous field of view of a detector (FPA) pixel
λ = wavelength (for a broad bandpass system, use the midpoint for this)
C_d = a constant determined by the pixel geometry's relationship to the blur diameter (see below); generally, for imaging systems, this is from 1.5 to 10, although it may be higher for systems in which it is advantageous to oversample the blur, such as star trackers

FIGURE 14.4 Etendue defines the efficiency with which light is conveyed from place to place in an optical system.
In a diffraction-limited system, the blur diameter is equal to the Airy disk, or 2.44(λ/D)f, where D is the aperture diameter and f is the focal length. If a square detector is matched to the blur, its area is the square of this, or 5.95(λ²/D²)f². The solid angle seen by the detector is A_o/f², so we have the product

A_d Ω_i ≈ 5.95 (λ²/D²) (f²) (A_o/f²) ≈ 6λ²
2
2 ⎛ πD ----------⎞ ≈ 4.7λ ⎝ 4 ⎠
The etendue works regardless of aperture and pixel shape (and is frequently used by spectroscopists working with slits). When properly applied, the rule allows one to estimate another system’s (e.g., a competitor’s) useful aperture, f/#, or detector size. It provides a determination of collection aperture for radiometric applications. It can be used for estimates when coupling light into fibers, because there is a small cone, defined by the equation, that allows acceptance of light into a fiber. This rule goes by many different names. Spectroscopists like etendue, ray tracers like Lagrange theorem, and radiometry buffs like to say optical invariant. The important relationship is that the useful aperture and the field of view are inversely related. The numerical value of their actual relationship depends on the optical design, the chosen paraxial ray, and the height of the object or detector. The numerical constant is not important (you can choose that based on your design or assumptions). What is important is the understanding that increasing one must result in a decrease in the other. Hence, large multiple-meter astronomical telescopes have small fields of view, and wide-angle warning systems have very small apertures. You can have one but not the other. Additionally, a zoom lens will have a larger effective aperture when viewing its narrow field (resulting in a brighter image) as compared to its wide field. As Longhurst puts it, In paraxial geometrical optical terms, the ability of an optical system to transmit energy is determined by a combination of the sizes of the field stop and the pupil in the same optical space; it is measured by the product of the area and the pupil in the same optical space; it is measured by the product of the area of one and the solid angle sub-
288
Chapter Fourteen
tended at its center by the other. This is the three-dimensional equivalent of the Helmholtz–Lagrange invariant or the Sine Relation.6
Given the size of a detector and the loose approximation that it can “sense” energy from one steradian of solid angle, the upper limits of the field of view and capture area in any associated optical system are immediately calculable. The energy captured (using a small angle approximation) is approximately Ωo or Ao/R2. For a “fast” system, this is on the order of unity, so the energy captured is approximately Ad/Ao. In any event, a small Ad implies small capture area for a given FOV (hence, low energy capture). A large IFOV implies a small capture Ao area for a given detector area (Ad). Hobbs8 provides a particularly nice discussion of the limits of use of the concept of etendue and points out that the etendue of a Gaussian beam is (π2/16)λ2. Any fully coherent beam has an etendue of exactly λ2/2. This follows very neatly from the reciprocity theorem or the conservation of energy.
Reference 1. Private communications with Dr. J. Richard Kerr, 1995. 2. Private communications with Dr. George Spencer, 1995. 3. R. Kingslake, Optical Systems Design, Harcourt Brace Jovanovich, Orlando, FL, pp. 36–38, 43–44, 1988. 4. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 672, 1986. 5. C. Wyatt, Radiometric System Design, Macmillan, New York, pp. 36, 52, 1987. 6. R. Longhurst, Geometrical and Physical Optics, Longman, New York, pp. 465–467, 1976. 7. I. Taubkin et al., “Minimum Temperature Difference Detected by the Thermal Radiation of Objects,” Infrared Physics and Technology, 35(5), p. 718, 1994. 8. P. Hobbs, Building Electro-Optical Systems: Making It All Work, Wiley Interscience, New York, p. 27, 2000.
IDEAL NETD SIMPLIFICATION One approximation for the noise equivalent temperature difference (NE∆T or NETD) is 2
NETD*ideal ( λco ) ≈ kT D*ideal@λco where
NETD*ideal = ideal noise equivalent temperature difference achievable from a 300 K nominal for a given spectral cutoff λco k = Stefan–Boltzmann constant (1.38 × 10–23 J/K) D*ideal@λco = specific detectivity of an ideal photoconductor with a cutoff wavelength λco T = temperature in kelvins
Discussion As photodetectors get better and better, background-limited performance (BLIP) conditions are more often achieved. The above rules give a simple approximation for the minimum (best) NETD achievable. Anyone claiming to have a system that can do better is uniformed, observing at a different temperature, or using multiple bands. Although the authors cannot think of a single system that achieves the incredibly small NETD calculated above, they are probably just beyond your current calendar.
Radiometry
289
These rules are based on definitions of NETD and basic radiometric principles applied to a condition in which the noise is dominated by the variation in the arrival of the photons (a BLIP condition). To a first approximation, the NETD*ideal can be approximately scaled by D*ideal ------------------D*actual However, as always when using D*, one must beware of the D* measurement parameters (does it include 1/f noise, readout noise, cold shield, and so on?) and be conscious of well size (detector readout capacity can limit integration time). The reference provides a simple expression of the minimum noise equivalent temperature difference achievable (BLIP conditions) from a 300 K target is 5.07 × 10
–8
300 --------T
(K cm s
1/2
)
This result is normalized for an integration time (t) of 1 sec, a photodetector area (A) of 1 cm2, and exposure to radiation coming from a hemispheric solid angle around the detector. This result is analogous to the standard definition for detector performance, D*. For conditions different from those just stated, divide the result above by At . If the exposure solid angle is not π, divide the above by Ω ⁄ π .
Reference 1. I. Taubkin et al., “Minimum Temperature Difference Detected by the Thermal Radiation Of Objects,” Infrared Physics and Technology, 35(5), p. 718, 1994.
LABORATORY BLACKBODY ACCURACY When used in a real-world setup, laboratory blackbodies are radiometrically accurate to a only few percent.
Discussion When blackbodies are incorporated into test facilities, several practical constraints limit the radiometric accuracy. First, there is a small temperature cycle caused by the control bandwidth. This can be on the order of one-tenth of a percent of the temperature. Second, a minor temperature uncertainty results from the separation of the emitting surface from the heating and cooling mechanisms and the temperature measurement devices, all contained within the blackbody (but not on the radiating surface). The resultant temperature gradients are small, but small changes in temperature can result in significant changes in radiant exitance. For example, a 1 K bias (at 300 K) alone causes a 3.7 percent radiometric error in the 3- to 5-µm band and a 1.7 percent error in the 8- to 12-µm band. Third, black coatings are not perfectly black; rarely is their emissivity greater than 0.95. In fact, after measuring 16 “black” surfaces, one of the authors (Miller) could not find a single instance of a reflection lower than 0.06 in the LWIR. There tends to be a slight emissivity variance across the aperture and a small reflectance. Fourth, the blackbody may have contaminants on it or be slightly (yet unknown to the user) damaged. This rule is based on empirical observations of the state of the art. Most commercial blackbodies have a few percentage points variation across their aperture because of (1) re-
290
Chapter Fourteen
flections, (2) emissivity that varies with wavelength and viewing angle, and (3) temperature inaccuracies. Rarely are common commercial blackbodies traceable to a National Radiometric Standard, which should be attempted in all cases. Blackbodies employing phase change materials and laboratories that exercise extreme care can claim better accuracy. Conversely, poor radiometric facilities and blackbodies used outside their intended temperature range, or aged ones, can be much worse. Radiometric accuracy tends to decrease as blackbody temperature decreases. The plot in Fig. 14.5 demonstrates the sensitivity of photon flux to minor changes in temperature, emissivity, and reflection. The plot was made by comparing the calculated photon flux from a perfect blackbody (emissivity of 1, exact temperature, and no reflection) to that with an emissivity of 0.9985, a 0.15 percent temperature reduction, and a reflection of 0.0015 of a 300 K background. MWIR refers to a 3- to 5-µm bandpass, and LWIR is an 8- to 12-µm bandpass. The slight bumpiness of the LWIR results near 300 K is a result of the reflection term, which slightly offsets the reduced radiance resulting from reducing the temperature and emissivity. Note that a blackbody with an emissivity that deviates from perfection by only 0.0015 is very good indeed.
FIGURE 14.5 Calibration error in percent for a temperature error of 0.15 percent and an emissivity error of 0.15 percent. This figure shows the performance of blackbodies for different temperatures of operation (see text for details).
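The comparison behind Fig. 14.5 is easy to reproduce numerically by integrating the photon form of Planck's law over each band. The sketch below follows the perturbations quoted in the text (emissivity 0.9985, a 0.15 percent temperature reduction, and a 0.0015 reflection of a 300 K background); the 350 K blackbody set point is an illustrative assumption.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # J s, m/s, J/K

def photon_exitance(wl_m, t_k):
    """Spectral photon exitance of a perfect blackbody, photons/(s m^2 m)."""
    return (2.0 * math.pi * C / wl_m**4) / math.expm1(H * C / (wl_m * KB * t_k))

def band_flux(t_k, lo_m, hi_m, emissivity=1.0, reflectance=0.0, t_bkg_k=300.0, n=2000):
    """In-band photon flux for a graybody plus a reflected 300 K background (midpoint rule)."""
    total, step = 0.0, (hi_m - lo_m) / n
    for i in range(n):
        wl = lo_m + (i + 0.5) * step
        total += (emissivity * photon_exitance(wl, t_k)
                  + reflectance * photon_exitance(wl, t_bkg_k)) * step
    return total

# Recreate the kind of comparison behind Fig. 14.5 (the 350 K set point is illustrative):
for name, lo, hi in (("MWIR", 3e-6, 5e-6), ("LWIR", 8e-6, 12e-6)):
    t = 350.0
    ideal = band_flux(t, lo, hi)
    real = band_flux(t * (1 - 0.0015), lo, hi, emissivity=0.9985, reflectance=0.0015)
    print(f"{name}: {100.0 * (ideal - real) / ideal:.2f}% calibration error")
```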
LAMBERT’S LAW Most surfaces can be considered to be diffuse (or “Lambertian”), and their emission and reflection obey the following law:

L = M/π

where
L = radiance in watts per square meter per steradian (W/m²/sr)
M = radiant exitance in watts per square meter (W/m²)
Discussion In terms of radiant intensity,

Iθ ∝ I cos θ

where
Iθ = radiant intensity in watts per steradian (W/sr) from a surface viewed at an angle θ from the normal to the surface
I = radiant intensity emitted along the normal, in watts per steradian (W/sr)
θ = the viewing angle between the normal to the emitting surface and the receiver

The second expression says that, as the angle from normal incidence increases, the radiant intensity projected toward a viewer goes down as cos θ. This rule assumes that the surface is not specular, which is a good assumption for the visible spectrum unless you know differently. In reality, most materials have a specular component to their reflectivity, along with a Lambertian component. For shiny surfaces, the specular component dominates.
This simple rule explains why the full Moon looks uniformly bright right up to its edges, even though the surface area per unit solid angle increases as 1/cos θ toward the edge. It is true that you are viewing dramatically more surface area per unit angle near the edge. But, at the same time, the Lambertian properties of the surface reduce the radiation from that area that is directed toward your eye. The increased area per angle is cancelled by the smaller radiation into that angle, and all locations on the disk seem to have the same intensity.
As the angle of incidence increases toward grazing, a surface is less likely to exhibit Lambertian properties; at grazing incidence, most surfaces exhibit a specular quality. Unless polished, most surfaces reflect and emit as diffuse surfaces in the visible wavelengths, except at high angles of incidence. This is because most surfaces are “rough” at the scale of the wavelength of light. Therefore, they reflect and emit their radiation following Lambert’s cosine law. Mirrors do not follow this law but, rather, the laws of reflection of geometrical optics.
Although simple, the first equation represents a powerful rule. It enables one to quickly change from a spectral exitance defined by the Planck function to a spectral radiance merely by dividing by π. This works for reflection as well. Note that the expression is M/π, not M/2π. The conversion factor is just one π: projecting the hemisphere (2π steradians) into which the object radiates onto a two-dimensional flat surface yields a disk (π).
As wavelength increases, a given surface is less likely to be Lambertian, because the ratio of surface roughness to the wavelength becomes smaller, and random scattering decreases. This effect goes as the inverse of the wavelength squared, as shown in a rule in Chap. 13, “Optics.” A diffuse surface in the visible is frequently specular in the IR, and most surfaces are specular in the millimeter-wave regime, which can sometimes make imaging challenging (although this does facilitate manufacture of high-quality millimeter-wave mirrors). Said another way, the metrics used for mirror quality in the visible and infrared wavelengths (say, λ/20) apply as well at much longer wavelengths; because the wavelength is longer, the tolerance for surface roughness increases.
LOGARITHMIC BLACKBODY FUNCTION When a blackbody’s output is plotted versus wavelength on a log-log graph, the following is true: 1. The shape of the blackbody radiation curve is exactly the same for any temperature. 2. A line connecting the peak radiation for each temperature is a straight line.
3. The shape of the curve can be shifted along the straight line connecting the peaks to obtain the curve at any temperature.
Discussion The spectral exitance of a blackbody can be determined quickly using the following method. First, trace the general shape of the Planck function in log-log units (examples appear in Fig. 14.6). Place it on the graph, matching the peak of your trace to the tilted peak line. The spectral exitance can be determined by moving the trace up and down that line, setting the peak to the desired temperature. This is illustrated in the figure, which shows the blackbody spectral exitance for a range of temperatures. The plots cover the spectral range from 100 to 2000 nm (2 µm) and temperatures from 5000 to 12,000 K. A quick look shows that the curves indeed have the same shape (as stated in item no. 1 above). The straight line illustrates item no. 2 of the rule. A little imagination will convince the reader that no. 3 is correct as well. Properly moved, each curve will lie on top of its neighbor; the curves have the same shape. It should be noted that the straight line on the curve is a representation of Wien’s displacement law. It shows that the product of the wavelength at which the curve has a peak and the temperature is a constant.
Reference 1. C. Wyatt, Radiometric Calibration, Theory and Methods, Academic Press, Orlando, FL, pp. 32–33, 1978.
FIGURE 14.6 Illustration of Wien’s law and the assertion that a single curve, duplicated and moved, can represent any blackbody function.
NARROWBAND APPROXIMATION TO PLANCK’S LAW Martin1 gives a narrowband approximation to the Planck radiation law as follows:

Φλ = (2c∆λ/λ^4) e^(–1.44/λT)

where
Φλ = flux in photons per square centimeter per second per steradian at the center wavelength
c = speed of light (3 × 10^10 cm/sec)
∆λ = difference in wavelength across the bandpass (in centimeters), (λh – λl), “h” meaning high wavelength and “l” meaning low wavelength
λ = median wavelength (in cm)
T = temperature of interest (e.g., background or scene temperature) in kelvins
Discussion Planck’s blackbody integral becomes algebraic when the difference between the upper and lower wavelengths is less than about 0.5 µm. This closed-form expression of the Planck function (with units of photons per second per square centimeter per steradian) varies exponentially as

Φλ(T) = Φλ e^(–X(λ)/T)

where
Φλ(T) = photon flux at a given wavelength for a given blackbody temperature
Φλ = 2c∆λ λ^–4
X(λ) = hc/kλ
T = temperature

This rule works well for hyperspectral and multispectral systems where each bandpass is less than 0.25 µm. The narrower the strip of the spectrum, the more accurate this approximation becomes. Typically, the accuracy is the ratio of the width of the strip to the wavelength. The rule assumes a narrow band; generally, it provides an accuracy of a few percent for bands less than 1 µm in width. This is good for a narrowband approximation to Planck’s radiation law when calculated in photons. It is very useful in normal IR practice when attempting to calculate the amount of background flux on a detector (which may limit integration time by filling up the wells). For wide bands, the photon flux can be found by adding up successive pieces of the bandpass using the above equation (e.g., numerical integration on a spreadsheet). This narrowband approximation is also important because its derivative at a specified temperature can be taken in closed form, allowing the contrast function to be found for a given temperature. Aficionados will realize that the 1.44 relates to the classic radiation constant derived from hc/k.
Consider the solar spectrum and assume the Sun to be a 5770 K blackbody source (see the rule, “Blackbody Temperature of the Sun,” p. 33), and let’s calculate the photon flux for a 1-µm wide bandpass at 0.6 µm. In this case,

Φλ = (2c∆λ/λ^4) e^(–1.44/λT) = 4.63 × 10^27 e^(–4.16) = 7.23 × 10^25 photons/sec cm² ∆λ(cm)
If we multiply 7.23 × 10^25 by a 1-µm bandpass (10^–4 cm), we get 7.2 × 10^21 photons/sec cm². Using standard theory, the flux is 7.33 × 10^21 photons/sec cm², so the rule provides reasonably accurate results.
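The worked example is easy to check numerically; the sketch below evaluates Martin's approximation and compares it against a brute-force integration of the exact photon form of Planck's law over the same band (all quantities in the cgs units used above).

```python
import math

C_CM = 3.0e10    # speed of light, cm/s
C2 = 1.4388      # second radiation constant hc/k, cm K

def planck_photon_radiance(wl_cm, t_k):
    """Exact spectral photon radiance, photons/(s cm^2 sr cm)."""
    return (2.0 * C_CM / wl_cm**4) / math.expm1(C2 / (wl_cm * t_k))

def narrowband_approx(center_cm, dlam_cm, t_k):
    """Martin's narrowband approximation, photons/(s cm^2 sr) in the band."""
    return (2.0 * C_CM * dlam_cm / center_cm**4) * math.exp(-1.44 / (center_cm * t_k))

# The text's example: the Sun as a 5770 K blackbody, a 1-um band centered at 0.6 um
t, center, width = 5770.0, 0.6e-4, 1.0e-4
approx = narrowband_approx(center, width, t)

# "Standard theory": numerically integrate the exact expression over the band
n, exact = 4000, 0.0
step = width / n
for i in range(n):
    wl = (center - width / 2.0) + (i + 0.5) * step
    exact += planck_photon_radiance(wl, t) * step

print(approx, exact, 100.0 * abs(approx - exact) / exact, "% difference")
```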
Reference 1. Private communications with Dr. Robert Martin, 1995.
THE PEAK WAVELENGTH OR WIEN DISPLACEMENT LAW The peak wavelength (in micrometers) of a blackbody expressed in watts is approximately 3000 divided by the temperature in kelvins.
Discussion According to Planck’s law, a blackbody will have an energy distribution with a unique peak in wavelength. For a blackbody, this peak is solely determined by the temperature and is equal to 2898/T. This assumes the emitter is indeed a blackbody and not a spectral emitter. Hudson1 points out that about 25 percent of the total energy lies at wavelengths shorter than the peak, and about 75 percent of the energy lies at wavelengths longer than the peak. Additionally, Hudson gives the following shortcuts: to calculate the wavelengths where the energy is half of the peak (half power, or the 3-dB points), divide 1780 by the temperature in kelvins for the lower end and 5270 for the higher end. You will then find:
■ Four percent of the energy lies at wavelengths shorter than the first half-power point.
■ Sixty-seven percent of the energy lies between the half-power points.
■ Twenty-nine percent of the energy lies at wavelengths longer than the longest half-power point.1
Of course, there are other ways to describe blackbodies, and each has its own version of the Wien law. For example, the maximum occurs for photon emission from a blackbody when the product of wavelength and temperature equals 3670 µmK.
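These shortcuts are just a few constants divided by the temperature, so they are trivial to capture in code; a minimal sketch:

```python
def wien_summary(t_k):
    """Peak and half-power wavelengths (in micrometers) per the rule and Hudson's shortcuts."""
    return {
        "peak_energy_um": 2898.0 / t_k,        # Wien displacement law (energy/watts)
        "peak_photons_um": 3670.0 / t_k,       # peak of the photon-emission curve
        "half_power_low_um": 1780.0 / t_k,     # ~4 percent of the energy lies below this
        "half_power_high_um": 5270.0 / t_k,    # ~29 percent of the energy lies above this
    }

print(wien_summary(300.0))   # a 300 K scene peaks near 9.7 um
print(wien_summary(5770.0))  # the Sun peaks near 0.5 um
```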
Reference 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 58–59, 1969.
PHOTONS-TO-WATTS CONVERSION To convert a radiometric signal from watts to photons per second, multiply the number of watts by the wavelength (in micrometers) and by 5 × 10^18.

Photons per second = λ (in micrometers) × watts × 5 × 10^18
Discussion The actual conversion can be stated as watts = (hc/λ) × photons/sec, so photons per second = (watts × λ)/(hc) = watts × λ (in meters) × 5 × 10^24 when all lengths are expressed in meters. Note that the term 5 × 10^24 derives from the inverse of the product of h and c. There are
one million micrometers in a meter, so the constant is 10^6 smaller if one uses micrometers. If you require more than two significant figures, the constant is 5.0345 × 10^18. Actual results apply only for a given wavelength or an infinitesimally small bandpass. Typically, using the center wavelength is accurate enough for lasers or bandwidths of less than 0.2 µm. However, if doing this conversion for wide bandpasses, set up a spreadsheet and do it in, for example, 1/20-µm increments. The rule is valid only if the wavelength is expressed in micrometers; the constant must be adjusted for wavelengths expressed in other units.
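A minimal implementation of the conversion (and its inverse), using the more precise 5.0345 × 10^18 constant and assuming the wavelength is given in micrometers:

```python
def watts_to_photons_per_sec(watts, wavelength_um):
    """Approximate conversion at a single wavelength (or a narrow band's center)."""
    return watts * wavelength_um * 5.0345e18

def photons_per_sec_to_watts(photons_per_sec, wavelength_um):
    """Inverse conversion, photons per second back to watts."""
    return photons_per_sec / (wavelength_um * 5.0345e18)

# 1 mW of 1.55-um laser light:
print(watts_to_photons_per_sec(1e-3, 1.55))   # ~7.8e15 photons/s
```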
QUICK TEST OF NE∆T If an infrared system can image the blood vessels in a person’s arm, then the system is achieving a NE∆T or MRT (whichever is proper) of 0.2 K or better.
Discussion Because they are transporting hot blood, vessels in a person’s arm or head tend to have a temperature difference of several degrees above the outside skin. However, the thermal and optical transmission through even thin skin is low, so it usually appears that veins have 0.1 to 0.3°C higher temperatures than the outside skin temperature. If your camera can image them, then it has a NE∆T of this amount or better (smaller). This phenomenon can be observed with any infrared camera that has this level of performance. Keep in mind that human skin has a temperature of about 93°F (not 98.6°, the temperature deep in the body). Using the Wien displacement law, we find that the peak of the blackbody spectrum of skin is about 9.3 µm. The emissivity of skin is widely reported to be around 0.97 or above in the infrared. A rule in Chap. 17, “Target Phenomenology,” provides more information on the signature of the human body. This rule provides crude approximations only. It does not account for adverse effects such as focusing, atmospheric transmission, and the like. It doesn’t tell you how good the system is for subpixel detection, just that it is better than a NE∆T of 0.2 K. It seems to be more difficult to image the veins in a woman’s arm than in a man’s. This is probably due to the extra layer of fat and less muscle (and therefore smaller veins), so successfully imaging the veins in a woman’s arm often indicates an even better NE∆T. This is useful for quick estimates of system performance. This is especially useful when the camera is not yours and is publicly set up at a conference, lab demo, or some other function.
THE RULE OF 4f/#² From an extended background, the scene power on the detector can be represented as

Ed ∝ Ms / [4(f/#)²]

where
Ed = irradiance at the detector
Ms = radiant exitance of the scene
f/# = effective f/# (effective focal length/effective aperture)
Discussion This rule is based on basic radiometry and is an outgrowth of associated rules in this book. It can be shown that, for an extended object (e.g., the background), the power on the detector depends only on the f/# and not the aperture size. Consider that the per-pixel power entering the aperture (ignoring the atmosphere and expressed in watts) is

[NπD²/(4R²)] (IFOV)² (R²)

where
N = source radiance in watts per square centimeter per steradian (W/cm²/sr)
D = aperture size
IFOV = angular size of a detector (the detector linear dimension divided by the focal length)
R = distance to the target

Rewriting, we get

(NπD²/4) (detector dimension/f)²

where f is the focal length of the optics, so the power on the pixel is

[M/(4(f/#)²)] (detector dimension)²

because M = πN and f/# = f/D. From this, we obtain the irradiance formula in the rule by dividing by the square of the detector dimension (the pixel area). This handy rule allows easy use and estimation of the relationship between the power from the scene and the expected power on the detector. It also provides for calculations of the effect of f/# on detector power. This can help answer questions such as, “Will the new f/# cause my detector pixel’s wells to overfill?” An extended source will result in a flux on the detector that is governed by the f/# of the optical system only. Again, the actual calculation will need to include optical transmission, any losses in the intervening medium, blur spot effects, and consideration for spectral filtering.
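A sketch of the rule in code; the scene exitance, f/#, and optical transmission in the example are illustrative values, and the transmission factor is an add-on that the rule itself omits.

```python
def detector_irradiance(scene_exitance, f_number, transmission=1.0):
    """Irradiance at the detector from an extended scene: E_d = tau * M_s / (4 (f/#)^2).

    scene_exitance is the in-band radiant exitance of the scene (e.g., W/cm^2);
    the result is returned in the same units per unit area.
    """
    return transmission * scene_exitance / (4.0 * f_number**2)

# Illustrative: 3e-3 W/cm^2 in-band scene exitance, f/2 optics, 80 percent optics transmission
print(detector_irradiance(3e-3, 2.0, transmission=0.8), "W/cm^2 at the focal plane")
```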
Chapter 15
Shop Optics
The industry is full of rules of thumb for the shop manufacture and test of optics. Included here are a select few that should be available to anyone involved in electro-optics. Shop optics began many centuries ago, when the first optical hardware was made. To early investigators, the world was full of rules of thumb and principles explained by thought process alone. Certainly, shop optics developed earlier than 425 B.C., when the first known written account of making lenses appeared (The Clouds, by Aristophanes). Glass was discovered about 3500 B.C., and crude lenses dating from 2000 B.C. were found in Crete and Asia Minor. Euclid, in his 300 B.C. publication Optics, may have been the first to write about curved mirrors. The Romans polished gemstones, and, by the eleventh century, glassblowers were making lenses for spectacles and research. Optical shop manufacture and test were flourishing shortly after A.D. 1600 with the invention of both the telescope and the microscope and the popularity of spectacles. Roger Bacon wrote of magnification and applied lenses to look at celestial objects. His reward, a gift from the religious fanatics of his time, was imprisonment. It was during the Renaissance that the likes of Galileo, Lippershey, and others applied optics to telescopes, microscopes, and other visual aids. An interesting note1 (which the reference cautions may be “partly or wholly fiction”) is that there is evidence that Lippershey applied for a patent for his telescope and sent one as a gift to the Prince of the Netherlands, Mauritz of Nassau. The prince showed it to many luminaries of the time. Lippershey was eventually denied a patent because, he was told, “too many people have knowledge of this invention.” This was indeed true, given that toy telescopes were already for sale in Paris, and other spectacle makers were also claiming to have invented the telescope. The publication of Isaac Newton’s Opticks in 1704 was a milestone in the science and engineering of optics. His work was a compendium of theories to explain the optical phenomena that he observed in his optics shop. Thus, spherical polishing has a rich 300-year history at a minimum. Spherical polishing entails using a tool, of about the same size as the optical element, that oscillates and rotates against the surface of the element while flooded with abrasives and water. This technique works very well for spherical optics but does not apply to aspheres. Today, aspheres are usually made by diamond turning or computer-controlled subaperture polishing, although magnetorheological finishing and ion polishing are viable alternatives.
Jean Foucault (1819–1868) gave mirror makers another powerful tool called the knife-edge test. A straight edge, often a razor blade, is used to block the light at the focus of an optic. The appearance of the mirror, when viewed from near the knife edge, is a powerful indication of the figure of the optic. However, you should be careful not to nick your nose while performing this test. The nineteenth century saw the development of slurry-based lapidary grinding and polishing techniques that are still employed today (with machines such as shown in Fig. 15.1). This is how almost all lenses and mirrors were made until the late 1980s. The 1980s and 1990s witnessed several key advancements, including near-net shaping, diamond turning (diamond turning machines such as shown in Fig. 15.2), and precision molding. Today, a new technique called deterministic microgrinding (DMG) is being explored and holds promise for additional commercial use, as does the more advanced technology of growing optics in deposition chambers. Key to modern shop optics are modern materials and testing techniques. Many very precise testing techniques use interferometers. Hence, some of the rules in this chapter address fringes, the light and dark patterns produced by interferometers. The interferometer is one of the most accurate and finest measurement devices known, with measurement accuracy of better than half a millionth of a centimeter (with a tenth of a wave readily achievable). Interferometers can easily display wavefront deformation much smaller than a wavelength, so they are often used to measure the shape of an optical surface. The Fizeau interferometer is one of the most common for optical metrology. Usually, the optical surface under test is compared with a “known” flat or spherical surface. The light waves reflected from the two surfaces produce interference fringes whose spacing indicates the difference between the two surfaces’ profiles. If the reference surface is truly flat, the fringe spacing directly indicates the figure (or shape) of the test surface. Dan Malacara’s Optical Shop Testing, T. Izumitani’s Optical Glass, and H. Karow’s Fabrication Methods for Precision Optics are stalwart texts on practical shop optics. Occasionally, papers can be found on optical manufacturing techniques in the journals and
FIGURE 15.1 Traditional lapidary grinding process. (Figure courtesy of FLIR Systems Inc.)
FIGURE 15.2 Modern diamond turning facility. (Figure courtesy of FLIR Systems Inc.)
trade magazines, but skills are usually still handed down from one master optician to the next. There are several good web sites and organizations, such as those sponsored by the University of Rochester.4 Also, the publications and web site of the American Precision Optics Manufacturers Association (APOMA)5 should be checked regularly by those with an interest in the subject.
References 1. “History of the Telescope,” 2003, from www.stormpages.com/swadwa/hofa/ht.html, 2002. 2. D. Golini, “Improved Technologies Make the Manufacture of Aspherical Optics Practical,” OE Magazine, August 2001. 3. W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, and the Office of Naval Research, Washington, DC, pp. 3-129 to 3-142, 1978. 4. www.APOMA.org, 2003. 5. www.opticsexellence.org, 2003.
ACCURACY OF FIGURES 1. The “Rayleigh Quarter-Wave” rule states that an optic whose output wavefront is accurate to one-quarter of a wavelength (λ/4) is sufficient for most applications. 2. A figure of λ/15 is required for interferometric, phase sensitive, and critical imaging applications. 3. λ/10 is often sufficient for imaging optics requiring low beam distortion (especially with multiple elements).
Discussion This rule is based on simplified diffraction theory and Strehl ratios and backed by empirical observations. The “Laser Beam Quality” rule (p. 174) shows that a Strehl of 80 percent is considered to be diffraction limited. This is equivalent to a wavefront of λ/4.5, hence the more general form in the rule. The rule provides a quick estimate of the figure required for a given optical element based on the specific application and a useful criterion for estimating the allowable aberrations in a typical image-forming system. The rule refers to the maximum-to-minimum or peak-to-valley (PV) wavefront error. This is valid for normal applications. Super-high-power lasers, extremely wide fields of view, and other exotic implementations will require more stringent wavefront control. The critical parameter here (λ) should be measured at the wavelength of interest (thus, this is a more difficult specification to meet for a UV system than an imaging millimeter wave system). An optical surface’s figure is the shape of the surface. Its quality is measured as the departure from an ideal desired surface measured as a fraction of the wavelength at which the optics will be used. It is usually quoted as a PV measurement. Sometimes it is quoted as a root-mean-square (RMS) wavefront error, which is smaller than a PV measurement by about a factor of 4. Anyway, when someone says “λ/x,” the larger the value of x, the smaller the departure from the ideal and the better the quality. The appropriate quality requirement depends on its final use. One can afford to be sloppier with the figure for plastic toy binoculars than for an interferometer or a space telescope.
In some advanced applications, the figure is defined in terms of the spatial frequencies over which specifications are written. For example, a model for the density of surface figure variance as a function of spatial frequency (k) is that the power spectral density varies as

PSD(k) = A / (1 + k/ko)^3

where
k = 1/Λ = (kx² + ky²)^1/2 cycles/cm
x, y = the dimensions of the mirror
Λ = spatial dimension across the surface
A = a constant that sets the amplitude of figure variance
ko, kx, ky = spatial frequencies

For an example of a very high-quality mirror,1 intended for use in a space-based coronagraph, the following parameters were used:
A = 2.4 × 10^5 Å² cm² (with a goal of 6 × 10^4 Å² cm² between Λ = 40 cm and Λ = 2 cm)
ko = 0.040 cycles/cm
Rayleigh found that when an optical element has spherical aberration to such an extent that the wavefront at the exit pupil departs from the best fit by a quarter of a wavelength, the intensity at the focus is diminished by 20 percent or less. He also found that this could be tolerated and was difficult to notice. Subsequent workers also found that when other common aberrations reduce the Gaussian focus intensity by about 20 percent or less, there was little overall effect on the quality of the image. This is the genesis of item 1 of the rule. When manufacturers invoke this rule to describe their product, they usually mean 1/4 of the HeNe laser line at 0.6328 µm. However, the wise engineer will find out exactly what test wavelengths were used and will also note if specifications are quoted as PV, peak-to-peak, RMS, or whatever. For transmissive optical elements, the surface figure error (SFE) equivalent to a given wavefront error (accounting for the index of refraction change at the surface) can be represented as follows:

SFE = WFE / ∆n

where
∆n = index change across the surface (such as from air to glass)
WFE = wavefront error
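A small numeric sketch of the budget arithmetic discussed above, using the SFE relation as written here and the rough factor-of-4 PV-to-RMS conversion quoted in the discussion; the BK7-like index of about 1.52 in the example is an assumed value.

```python
def sfe_from_wfe(wfe_waves, delta_n):
    """Allowable surface figure error for a given wavefront-error budget (SFE = WFE / delta_n).

    delta_n is the index change across the surface; roughly n - 1 for an
    air-to-glass surface, or 2 for a surface used in reflection.
    """
    return wfe_waves / delta_n

def rms_from_pv(pv_waves):
    """Rough PV-to-RMS conversion quoted in the discussion (about a factor of 4)."""
    return pv_waves / 4.0

# A lambda/4 transmitted-wavefront budget on a single surface of an n ~ 1.52 glass:
print(sfe_from_wfe(0.25, 0.52))   # ~0.48 wave of allowable surface error (about lambda/2)
print(rms_from_pv(0.25))          # ~lambda/16 RMS equivalent
```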
References 1. Jet Propulsion Laboratory Technology Announcement, Terrestrial Planet Finder Technology Demonstration Mirror, June 13, 2002. 2. Private communications with W. M. Bloomquist, 1995. 3. Private communications with Tom Roberts, 1994. 4. M. Born and E. Wolf, Principles of Optics, Pergamon Press, New York, pp. 408–409, 1980. 5. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 64, 1994.
APPROXIMATIONS FOR FOUCAULT KNIFE-EDGE TESTS A Foucault knife-edge test may indicate the following: 1. Spherical aberration is indicated if the shadow shows more than one dark region. 2. Coma is indicated if the shadow pattern consists of rectangularized hyperbolas or an ellipse. 3. Astigmatism is indicated if the shadow is a straight line with a slope and can be made to rotate by using different placements of the knife edge about the optic axis. 4. Additionally, the center of curvature or a defocus error can be tested by observing the pattern as the knife is placed first inside and then outside the focus. The shadow will change sides.
Discussion This set of rules is based on diffraction theory, plain old geometrical optics, and empirical observations of the patterns from a knife-edge test. Foucault’s knife-edge test consists of cutting the image of a point source off with a straight edge and observing the shadows on the mirror. This test is a quintessential optical shop test for aberrations. It is useful for measuring the radius of curvature and as a null test. It is easy to implement, and the experienced optician can derive a wealth of knowledge about the surface being tested. The accuracy of the Foucault knife-edge test can be impressive. It has been estimated that, with the eye alone (assuming a 2 percent contrast), wavefronts can be tested to λ/600. It is frequently employed as an in-process test to determine the status of the figure and the need for additional polishing.
When conducting the knife-edge test, the shadows (sometimes called a Foucaultgram) look like a very oblique illumination of the surface. If the source is to the right of the knife edge, the apparent illumination is from the right. A bump on the surface looks bright toward the light (the right, in this case) and dark on the other side. A divot looks dark on the right and light on the left. If the wavefront is perfectly spherical, the Foucaultgram appears as a uniform gray. Some common indications of imperfect optics include
■ “Lemon peel,” which indicates local roughness and poor polishing
■ A bright edge toward the light, which indicates a turned-down edge
■ A dark edge toward the light, which indicates a minor miracle (a turned-up edge)
The reader should note that the knife-edge test is sensitive to knife-edge placement and test setup. Astigmatism may escape detection if only one knife orientation is used.
References 1. D. Malacara, Optical Shop Testing, John Wiley & Sons, New York, pp. 231–253, 1978. 2. Private communication with W. M. Bloomquist, 1995.
CLEANING OPTICS CAUTION Dirty optics should be cleaned only after great deliberation and with great caution, because 1. Most surfaces, and all fingers, have very fine abrasive dirt on them that will scratch an optical surface. 2. A few big areas of dirtiness are less harmful (scatter less light) than the myriad long scratches left behind after removing the hunks. 3. Small particles can adhere very strongly (in proportion to their mass) and cannot be blown or washed off easily. 4. Washing mounted optics just moves the dirt into mounting crevices where it will stay out of reach, waiting for a chance to migrate back to where it is harmful.
Discussion Sometimes it is necessary to clean optics, especially when the contaminant is causing excessive scatter or when a laser would burn the contaminants into the coating. The longer the wavelength, the more valid the above rules become (e.g., UV systems need to be cleaner than IR systems). Additionally, optics near a focal plane need to be cleaner than optics located at the aperture. It is often surprising how well optical systems work when they are dirty. There are legends about tactical FLIR systems working fine on a mission. When inspected afterward, the crew is surprised to find the window splattered with mud and dead bugs. Additionally, many older telescopes (e.g., the Palomar’s 200-in primary) have numerous surface nicks, cracks, and gores, and yet the optics seem to work fine (once the flaws are painted black). At least one space optic has been observed to be marked with the fingerprint of one of the last technicians to work on it. Most observatories wait until a few years’ worth of dust accumulates on their primary mirrors before washing them. Again, optics near a focal plane (e.g., reticles, field stops, and field lenses) must be kept cleaner. This is because a particle or defect on a surface near the focal plane, projected back to the front aperture, could be a large portion of the collecting aperture. There are several reasons for this apparent contradiction. First is that the human eye is especially good at seeing imperfections (dirt, pits, and so forth) on smooth surfaces. Second is that the dirt usually does not amount to a large fraction of the surface area, so the
diffraction, MTF, and transmission losses are surprisingly small. Third is that these particles are far out of focus. Often, the most important effect of the dirt is scatter. Optics should be stored in containers in a laminar-flow bench. When in use, hanging upside-down helps. When you do clean, be very careful of the water and cloth that you use. Soaps often have some sandy grit in them to add friction for easier dirt removal. Additionally, alcohol-based perfumes are frequently added to cleaning products and may remove optical coatings.
References 1. Private communications with W. M. Bloomquist, 1995.
COLLIMATOR MARGIN It is good design practice to make the diameter of the collimating mirror in a test setup at least 10 to 20 percent greater than that of the optics to be tested and to make the focal length at least 5 to 10 times that of the element under test.
Discussion This rule should be adhered to whenever you consider using a collimator, or you will spend many hours hunched over an optical table wondering why it isn’t working right. It should be considered whenever performing test system design, determining collimator specifications, collecting test fixtures, and so on. Generally, it is wise to have the test apparatus larger and more accurate than the item to be tested. The collimator’s useful exit diameter should be significantly larger than the diameter of the element under test so that placement of the lens is not critical (10 to 20 percent minimum). This also allows for some slop in pointing, assures that the optics under test will be completely filled with collimator light, and reduces the deleterious contributions from off-axis sources. Moreover, it is good design practice to make the focal length of the collimating mirror ten times the focal length of the lens under test. Under some highly controlled conditions, accurate measurements can be taken with a collimator that is only slightly larger than the entrance optics. In a really tough situation, the collimator can be undersized. If so, much data must be taken, and the information can be stitched together. However, this requires consideration of schedule and budgets. Also, because off-axis collimators must turn the beam, they need to be even larger than those used in on-axis testing.
Reference 1. Private communications with Max Amon, 1995.
DETECTION OF FLATNESS BY THE EYE The naked eye can detect a lack of flatness having a radius of curvature up to about 10,000 times the length of the surface being viewed.
Discussion Johnson1 states, “The test of a flat surface by oblique reflection is so sensitive that even the naked eye will detect quite a low degree of sphericity, if near grazing incidence is employed such that light from the entire surface enters the eye.”
The surface must therefore span at least several times the diameter of the eye’s pupil (e.g., some multiple of about 5 mm, which is the typical size of the pupil in room light). If the surface is not flat, then an image of a distant object will appear fuzzy as a result of the astigmatism introduced by the surface.
Reference 1. B. K. Johnson, Optics and Optical Instruments, Dover Publications, New York, pp. 196–197, 1960.
DIAMOND TURNING CROSSFEED SPEED When diamond turning an element, the crossfeed rate should be varied as 1/r to maintain a constant removal rate, where r is the radius of the element.
Discussion Diamond turning leaves marks on the surface of the optic. For example, the Hubble telescope’s primary mirror has groove depths of about 20 nm. To reduce the groove depth and enhance the consistency of finish (including scratch and dig blemishes as discussed in another rule), the removal rate of bulk material should be constant. Unfortunately, when grinding, the tool will experience different speeds as it traverses the element—high speeds at the edge and lower speeds at the center. The velocity should be adjusted to compensate for this. As stated in Reference 1, When constant spindle speeds and crossfeed rates are used to contour grind a rotationally symmetric surface, the material removal rate changes dramatically as the tool moves from the edge to the center of the part, causing both surface form and finish variations across the part . . . . In contour grinding, if a constant tool crossfeed rate is maintained across the part surface, a decrease in volumetric removal per unit time occurs as the tool is moved from the edge to the center of the part. It is this decrease in volumetric removal that usually produces a distinct v-shaped removal error at the center of the workpiece. This is the result of the increased loads (and hence larger tool deflections) near the workpiece edges. Adjusting the crossfeed speed as a function of the radial position has been demonstrated to maintain a constant volumetric removal rate and mitigate the central v-shaped deformation. For constant crossfeed speed Vc and depth of cut dc, the volumetric removal rate dV/dt increases linearly with radial distance r from the center of the part as:
dV/dt = 2πr Vc dc
Therefore, to maintain a constant removal rate, the crossfeed must be varied as 1/r.1
The crossfeed speed can be either increased or reduced as the tool traverses the part. Increasing it toward the center would require an infinite speed at r = 0; thus, in practice, the speed increase is capped at a maximum, which still produces some unfortunate effects there. Starting the grinding at the center with the maximum velocity and reducing the speed toward the edge solves these problems but results in a longer processing time.
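The 1/r schedule with a practical speed cap can be expressed directly; a sketch (the edge speed, part radius, and cap below are illustrative numbers):

```python
def crossfeed_speed(r_mm, v_edge_mm_per_min, r_edge_mm, v_max_mm_per_min):
    """Crossfeed rate that keeps dV/dt = 2*pi*r*Vc*dc constant, with a practical cap near the center."""
    if r_mm <= 0.0:
        return v_max_mm_per_min
    return min(v_edge_mm_per_min * r_edge_mm / r_mm, v_max_mm_per_min)

# Illustrative: 50-mm radius part, 10 mm/min at the edge, capped at 100 mm/min
for r in (50.0, 25.0, 10.0, 5.0, 1.0, 0.0):
    print(r, "mm ->", crossfeed_speed(r, 10.0, 50.0, 100.0), "mm/min")
```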
References 1. S. Gracewski, P. Funkenbush, and D. Rollins, “Process Parameter Optimization during Contour Grinding,” Convergence, 10(5), pp. 1–3, September-October 2002. 2. P. Funkenbush, et al., “Use of a Non-dimensional Parameter to Reduce Errors in Grinding of Axisymmetric Aspheric Surfaces.” International Journal of the Japan Society for Precision Engineering, Vol. 33, pp. 337–339, 1999.
EFFECT OF SURFACE IRREGULARITY ON THE WAVEFRONT

OPD = 0.5(n′ – n)(number of fringes)

where
OPD = optical path difference
(n′ – n) = change in index of refraction across the surface
number of fringes = height of the irregularity or surface bump in fringes (there are two fringes to a wavelength, one dark and one light)
Discussion The number of fringes is a common term referring to the height of a bump or depression in the interferogram as expressed in the deviation across fringes. Smith states that “in most cases, irregularity takes the form of an astigmatic or toric surface, and a compromise focus usually reduces its effect by a factor of 2.”1 The above equation relates to surface irregularities. The difference in surface radius corresponding to N fringes of departure from a test plate is given by

∆R = Nλ(2R/d)²

where
R = nominal radius
λ = test wavelength
d = diameter over which the N fringes are observed
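Smith's radius-difference expression is a one-liner in code; a sketch with illustrative test-plate numbers:

```python
def radius_difference(n_fringes, test_wavelength_m, nominal_radius_m, test_diameter_m):
    """Difference in surface radius implied by N fringes against a test plate:
    delta_R = N * lambda * (2R / d)^2."""
    return n_fringes * test_wavelength_m * (2.0 * nominal_radius_m / test_diameter_m) ** 2

# Example: 3 fringes at 632.8 nm on a 100-mm radius surface tested over a 25-mm diameter
print(radius_difference(3, 632.8e-9, 0.100, 0.025))  # ~1.2e-4 m, i.e., about 0.12 mm
```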
References 1. W. Smith, Modern Lens Design, McGraw-Hill, New York, pp. 432–433, 1992.
FRINGE MOVEMENT When testing an optic on a reference flat, 1. If you gently press (e.g., with a pencil eraser) on the edge of the upper optic and the fringes move toward the point of pressure, then the upper surface is convex. Conversely, it is concave if the fringes move away from this point. This effect is the result of the differences between the two surfaces—the flat and the test element. 2. Press near the center of the fringe system on the top optic and, if the surface is convex, the center of the fringe will not change, but the diameter will increase. 3. If the source is white light and pressure is applied to a convex center, the first fringe will be dark and the first light fringe will be white. The next fringe will be tinged bluish on the inside and reddish on the outside. A concave surface will have a dark outer fringe, and the color tingeing will be reversed. 4. When fringes are viewed obliquely from a convex optic, the fringes appear to move away from the center as the eye is moved from normal to oblique. The reverse occurs for a concave surface.
Discussion These relationships assume that the test optic is referenced to a standard flat and that the measurements are made in air, with the air gap less than 6λ (the surface under test sits well
with the test plate). They are based on analysis of the optical path difference caused by a varying air thickness between the optic and test flat. For instance, if you apply pressure near the center of a concave optic, the air is forced out, leaving a smaller optical path difference.
References 1. D. Malacara, Optical Shop Testing, John Wiley & Sons, New York, pp. 8–11, 1978.
MATERIAL REMOVAL RATE References 1 and 2 state that, for loose abrasive grinding (lapping), the surface microroughness induced by the material removal process scales (varies) with 1/H^1/2, and the material removal rate increases with increasing

E/(KcH²)

where
E = Young’s modulus
H = hardness (in the same units as Young’s modulus)
Kc = fracture toughness
Discussion This rule illustrates that hardness is not just a theoretical issue with optical materials. We also make the distinction between slurry grinding (loose abrasive grinding) and deterministic microgrinding, wherein the abrasive is permanently attached to a tool that is fed into the material at a constant rate. Reference 1 also provides a microroughness estimate for deterministic microgrinding: surface microroughness increases with increasing ductility index (Kc/H)² (a complementary concept to the brittleness index sometimes encountered). Some work shows that the ductility index has the units of nanometers. The following additional important facts come from Refs. 1 through 4:
1. E/(KcH²) is an approximation for E^7/6/(KcH^23/12), a relationship developed empirically.
2. For both loose abrasive grinding (lapping) and deterministic microgrinding, the subsurface damage increases linearly with increasing surface microroughness.
3. When grinding typical optical glasses using the same abrasive sizes, deterministic microgrinding produces 3 to 10 times lower surface microroughness and subsurface damage than loose abrasive grinding (lapping).
Table 15.1 shows some typical values of E, H, and Kc. Typical uncertainty for the hardness value is ~5 percent and for fracture toughness is ~10 percent.

TABLE 15.1 Typical Values

Material        E (GPa)    H (GPa) (at 200-g force)    Kc (MPa m^1/2)
Fused silica    73         8.5                         0.75
BK7             81         7.2                         0.82
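With the caveat that the units in Table 15.1 are mixed (GPa and MPa·m^1/2), the removal-rate figure of merit can still be compared between materials, since only the ratio matters; a sketch:

```python
def lapping_removal_merit(e_gpa, h_gpa, kc_mpa_sqrt_m):
    """Relative loose-abrasive removal-rate figure of merit, E / (Kc * H^2).

    Units are mixed exactly as in Table 15.1, so only the ratio between
    materials is meaningful here, not the absolute value.
    """
    return e_gpa / (kc_mpa_sqrt_m * h_gpa**2)

fused_silica = lapping_removal_merit(73, 8.5, 0.75)
bk7 = lapping_removal_merit(81, 7.2, 0.82)
print("BK7 / fused silica removal-rate ratio:", bk7 / fused_silica)  # ~1.4
```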
It is also important to understand the hardness scales that are used in these types of calculations. Hardness is a well known physical parameter, but many different methods have
been derived for the measurement and classification of materials on a hardness scale. The values in the table are derived from the Knoop scale. Others that you might find in the literature include Mohs, Vickers, Rockwell, and Brinell. Knoop values are found by using a pyramidal diamond point that is pressed into the material in question with a known force. The indentation made by the point is then measured, and the Knoop number is calculated from this measurement. The test has been designed for use on a surface that has not been work-hardened in the lattice direction in which the hardness value is being measured. Even the Knoop number varies slightly with the indenter load as well as with the temperature. A material that is soft (e.g., potassium bromide) might have a Knoop number of 4, whereas a hard material such as sapphire has a Knoop number of 2000. The Knoop number for diamond is 7000. Note that, even within the optics community, there is not a consensus to use Knoop numbers. The values on the Mohs scale are arrived at by measuring the relative empirical hardness of selected common materials by observing which materials are able to scratch other materials. The Mohs scale, which is not linear, is limited by the softest material, talc (Mohs = 1), and the hardest material, diamond (Mohs = 10). This scale is frequently used by geologists and mineralogists. The Vickers scale is determined by pressing a pyramidal indenter into the material in question and dividing the indenter load (in kilograms) by the pyramidal area of the indenter (in square millimeters). Rockwell and Brinell hardness are not often quoted. The Rockwell figures for materials are relative to a specific measuring instrument, and the Brinell hardness is analogous to the Vickers scale except that a spherical indenter is used.
References 1. http://www.opticsexcellence.org/InfoAboutCom/InformationBrief/grindingbrief.htm, 2003. 2. J. Lambropoulos, S. Xu. and T. Fang, “Loose Abrasive Lapping Hardness of Optical Glasses and Its Interpretation,” Applied Optics, 36(7), pp. 1501–1516, March 1, 1997. 3. M. Buijs and K. Korpel-van Houten, “Three-Body Abrasion of Brittle Materials as Studied by Lapping,” Wear, Vol. 166, pp. 237–245, 1993. 4. M. Buijs and K. Korpel-Van Houten, “A Model for Lapping of Glass,” Journal of Material Science, Vol. 28, pp. 3014–3020, 1993. 5. http://www.crystran.co.uk/optics.htm, 2003.
OVERSIZING AN OPTICAL ELEMENT FOR PRODUCIBILITY The radius of a lens or mirror should be oversized by 1 to 2 mm to ease the grinding that will result in a good figure and coating in the usable area.
Discussion Like it or not, an optical element must be handled during manufacture. Often, the least expensive way to handle and mount the element is by the edges. This implies some small but finite region to clamp, support, and secure the element. Additionally, every optical element must eventually be mounted. Allowing a millimeter or two of radial oversize for glue or encroachment of mounting fixtures eases the construction of the entire optical assembly. There are even more reasons to slightly oversize the optics. A small chamfer is needed, or edge chipping is inevitable. This requires a little extra space. Additionally, rays close to the edge are also close to becoming “stray light” by scattering from the cylindrical edge of
the lens, so it is wise to mechanically block off the edges in any event. Most modern diamond turning machines and coating equipment can ensure proper specification to within a millimeter or two of the edges. However, the figure is usually poor in this region. This rule assumes that the optical piece can be mechanically supported by the extra 1 or 2 mm. Large heavy optics will require a larger mounting area. This rule is useful when determining a specification for optics that can really be made. It should also be considered when doing mechanical layouts (e.g., that cold filter in the dewar is actually a little larger than the useful diameter). It is also wise to include a chamfer (generally a cut of a 45° angle on the edge of each polished surface) to avoid chipping and cracking during handling, coating, and assembly.
PITCH HARDNESS If, when pressing hard, your thumbnail can just make an indentation in the pitch, the pitch probably has the correct hardness.
Discussion Even in these days of automatic diamond turning machines, pitch is frequently used to mount optical elements. In earlier days of optical design, the optical technician would judge the hardness of the pitch by forcing his thumbnail into it. Should an impression just barely be made, he would assume it to be the correct consistency. Pitch is also used in the still-common lapidary grinding process. The hardness of the pitch laps employed in the polishing process is a critical concern. If the pitch is too soft, then the lap will rapidly deform (go out of shape) because of the flow of the pitch. Conversely, if the pitch is too hard, the glass surface being polished will become scratched because any small particles falling on the pitch will not be absorbed in the bulk of the pitch before damage to the element occurs.
References 1. B. K. Johnson, Optics and Optical Instruments, Dover Publications, New York, p. 208, 1960.
STICKY NOTES TO REPLACE COMPUTER PUNCH CARDS FOR ALIGNMENT For adjusting optical alignment, Hobbs1 suggests using “sticky notes,” which seem to be close to 0.1 mm thick. Alternatively, shards from aluminum pop cans (the latter from 0.07 to 0.1 mm thick) may be used with less precision.
Discussion It seems like optical mounts and tables can never get the optical element to the right height or tilt required by the lab rat. There is an existing and growing need for a cheap, solid object 0.1 to 0.5 mm thick that can be inserted to adjust the height and tilt in such increments. In the olden days, we inserted IBM computer punch cards to raise, lower, or tilt optical components (the authors have no idea what was used before punch cards). Well, punch cards are not available anymore. One of the authors (Miller) remembers one optical engineer in the early 1990s, in a panic, patronizing garage sales, corporate going-out-of-
business sales, and government surplus sales in an effort to buy enough punch cards to complete his career. He proudly succeeded. As an interesting side note, punch cards were invented to quicken the census calculations, as it was feared that the 1890 census would take more than ten years to tabulate. “Hollerith invented and used a punched card device to help analyze the 1890 U.S. census data. Hollerith’s great breakthrough was his use of electricity to read, count, and sort punched cards whose holes represented data gathered by the census takers.”2 Hobbs1 suggests the above replacement materials. He notes that such sticky notes (especially Post-It® notes) are surprisingly consistent at 0.1 mm. The glue is negligible in thickness, several can be easily combined and then easily removed, and one can use a paper punch to make mounting holes. Hobbs states that there is “no other shim material that makes it so easy to adjust beam positions in 100 micron increments.” He also notes that aluminum pop can shards tend to buckle, so they are not very good for stacking and not as consistent in thickness. Incidentally, the glue used in sticky notes is interesting. It is strong enough to hold the note but leaves little residue when removed and can be reused many times. It gets these properties from its microscopic structure. According to the 3M web site,3 “It was an adhesive that formed itself into tiny spheres with a diameter of a paper fiber. The spheres would not dissolve, could not be melted, and were very sticky individually. But because they made only intermittent contact, they did not stick very strongly when coated onto tape backings.”
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, p. 381, 2000. 2. www.About.com, 2003. 3. www.3m.com, 2003.
PRESTON’S LAW The volume of material removed during loose abrasive microgrinding can be related to several significant parameters by Preston’s law.1

(1/A)(∆V/∆t) = ∆h/∆t = Cp p Vrel

where
A = nominal component (tool contact) area
∆V = volume of material removed
∆t = time in which ∆V was removed
∆h = corresponding height reduction
Vrel = relative speed of the optical component and the tool
Cp = Preston’s coefficient, in the units of volume removed per work unit (cubic meters per joule)
p = nominal pressure
Discussion In the 1920s, Preston related the removal of material to the tool pressure, time, and a coefficient that includes the optical material’s hardness, abrasive properties, and many process parameters. Reference 2 states, “The effects of any coolant used, abrasive size, backing plate, and all material properties are absorbed within Preston’s coefficient, which is not a
material property but a process parameter.” Preston’s coefficient (using an Al2O3 abrasive and typical pressures and grinding speeds) is generally between 5 × 10^–10 and 1 × 10^–11 m³/J for most optical glasses (e.g., for BK-7, it is 8.8 × 10^–11). Reference 3 points out that removing material during lapping by fracturing the optical surface is ten times more effective (in terms of specific energy) than removing material by means of ploughing and plastic deformation. Thus, polishing notwithstanding, for efficient and quick grinding, the lapping process should be set to levels and conditions that remove material by microfracturing (as opposed to conditions that induce ploughing and plastic deformation). This rule can be used to scale and estimate the grinding time (and thus cost) for various volumes of removal for a given material and process. It applies to lapping (grinding) but not to diamond turning or fine polishing of the optic.
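Preston's law makes the time estimate a one-line calculation once Cp, the pressure, and the relative speed are chosen. In the sketch below, the pressure and speed are assumed illustrative values, while the Cp is the BK-7 value quoted above.

```python
def preston_height_rate(pressure_pa, v_rel_m_per_s, cp_m3_per_j):
    """Height-removal rate dh/dt = Cp * p * Vrel (Preston's law, per unit contact area)."""
    return cp_m3_per_j * pressure_pa * v_rel_m_per_s   # m/s

# Illustrative (assumed, not from the reference): lap 50 um off BK-7
# (Cp ~ 8.8e-11 m^3/J from the discussion) at 10 kPa nominal pressure and 1 m/s.
rate = preston_height_rate(1e4, 1.0, 8.8e-11)
print("removal rate:", rate * 1e6, "um/s")
print("time to remove 50 um:", 50e-6 / rate, "s")
```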
References 1. F. Preston, “The Theory and Design of Plate Glass Polishing Machines,” Journal of the Society of Glass Technology, Vol. 11, pp. 214–256, 1927. 2. J. Lambropoulos, S. Xu, and T. Fang, “Loose Abrasive Lapping Hardness of Optical Glasses and Its Interpretation,” Applied Optics, 36(7), pp. 1501–1516, March 1, 1997. 3. M. Buijs and K. Korpel-van Houten, “Three-Body Abrasion of Brittle Materials as Studied by Lapping,” Wear, Vol. 166, pp. 237–245, 1993. 4. M. Buijs and K. Korpel-Van Houten, “A Model for Lapping of Glass,” Journal of Material Science, Vol. 28, pp. 3014–3020, 1993.
PROPERTIES OF VISIBLE GLASS The types of optical glasses available in the visible wavelengths are limited in their refractive index to between 1.4 and about 2, and dispersion values (v) between about 20 and 85.
Discussion The above is based on empirical analysis of available glass and applies to visible-wavelength glass optics only. For example, Ge has an index of 4 and is commonly used for infrared optics. Crown glasses (a type of alkali-lime silicate optical glass) tend to have low indices and low dispersions, whereas flint glasses (crystal glasses with dopants) have higher indices and dispersions. Dispersion is the change in the index of refraction as a function of wavelength and is an important consideration for large-spectral-bandwidth optical systems. For historical reasons, with visible glasses, the dispersion is often quoted as a unitless ratio called the Abbe number (v). The numerator is the index of refraction of the glass at the wavelength of the D line of sodium (at 0.589 µm), minus 1 to accommodate the index of refraction of the atmosphere. This is then divided by the change in the index from the Balmer alpha line of hydrogen (0.6563 µm) to that at the Balmer beta line (at 0.4861 µm). The following equation describes the Abbe number (v):

v = (nd – 1)/(nβ – nα)

where
nd = index of refraction at the sodium D line
nβ = index of refraction at the Balmer beta line of hydrogen
nα = index of refraction at the Balmer alpha line of hydrogen
This ratio was conceived by Ernst Abbe, who was a professor at the University of Jena and one of the early owners of the Carl Zeiss Optics Company. This rule is useful for quick and crude estimates of the available index of refraction and dispersion. In another chapter, we present a discussion of Cauchy’s equation, which deals with numerical modeling of dispersion.
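The Abbe number itself is a one-line calculation; the sketch below uses approximate catalog indices for a BK7-like crown glass (assumed values for illustration).

```python
def abbe_number(n_d, n_f, n_c):
    """Abbe number v = (n_d - 1) / (n_F - n_C), with n_F at 0.4861 um (Balmer beta)
    and n_C at 0.6563 um (Balmer alpha)."""
    return (n_d - 1.0) / (n_f - n_c)

# Approximate BK7-like catalog indices (assumed for illustration):
print(abbe_number(1.5168, 1.5224, 1.5143))   # ~64, a typical crown-glass value
```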
SCRATCH AND DIG Commercial-grade IR optics usually have a scratch-and-dig specification of about 80/50 to 80/60, which is pretty poor. The specification is usually 60/40 for better-quality visual optics; 40/20 for low-power active applications, and 10/5 for high-power lasers.
Discussion When manufacturing an optical surface, various sizes of grit are used. Unfortunately, it is unavoidable to have grit of one size contaminate the polishing with grit of a smaller size. This leads to “scratches” and “digs” on the optical surface. When these scratches are small enough and few enough, no noticeable degradation in spot size or MTF occurs. It is therefore wise (to keep costs down) to specify the scratch-and-dig specification at a level where the errors are slightly less than expected from diffraction and aberrations.
The first number of the above is the “scratch,” and the second is the “dig.” These refer to two graded sets of surface quality standards drawing on military standards MIL-O-13830, MIL-C-48497, 45208, 45662, and so on. The units of the scratch-and-dig specification are normally excluded for some reason. A scratch is a marking along the polished surface. It is defined in MIL-O-13830A1 as follows: “When a maximum size scratch is present, the sum of the products of the scratch numbers times the ratio of their length to the diameter of the element or appropriate zone shall not exceed one-half the maximum scratch number.” (A numeric reading of this criterion is sketched below.) Generally, the scratch is measured in units of ten thousandths (1/10,000) of a millimeter; thus, an “80” is a scratch 8 µm in width. However, the definition of scratch is very subjective, and the units tend to have little meaning when compared from one manufacturer to another and are usually rated using visual methods by subjective humans. Additionally, a scratch is any blemish on the optical surface. Scratch types are identified as the following:2
■ Block reek: chain-like scratch produced in polishing
■ Runner cut: curved scratch caused by grinding
■ Sleek: hairline scratch
■ Crush or rub: surface scratch or a series of small scratches generally caused by mishandling
For digs, the units are different. They are expressed in units of 1/100 mm. A dig is defined as a rough spot, pit, or hole in the surface, and the number represents the diameter. Thus, a rating of “60” means the dig is a whopping 0.6 mm in diameter, and a “10” means 0.1 mm. Irregularly shaped digs are calculated as (length × width)/2. Usually, digs of less than 2.5 µm are ignored.
This rule provides the current conventional wisdom on the level needed. It is based on empirical observations and simplification of scatter theory as outlined in the U.S. military specifications. Some high-resolution, low-scatter optics will require more stringent specifications. Conversely, some low-cost, high-volume production applications may require less stringent specifications. Additionally, surfaces near images (e.g., reticles) require a finer scratch-and-dig specification.
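A literal numeric reading of the quoted combined-scratch criterion can be sketched as follows; the part diameter and scratch list are illustrative, and the full specification contains additional conditions not captured here.

```python
def combined_scratch_ok(scratches, max_scratch_number, diameter_mm):
    """Check the combined-scratch criterion quoted from MIL-O-13830A above.

    scratches: list of (scratch_number, length_mm) tuples for the scratches present.
    The length-weighted sum of scratch numbers must not exceed half the maximum allowed number.
    """
    weighted_sum = sum(num * (length / diameter_mm) for num, length in scratches)
    return weighted_sum <= 0.5 * max_scratch_number

# Illustrative: an 80/50 part with a 50-mm clear aperture, carrying a 20-mm long #40
# scratch and a 10-mm long #20 scratch.
print(combined_scratch_ok([(40, 20.0), (20, 10.0)], 80, 50.0))  # True: weighted sum 20 <= 40
```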
The above is valuable for understanding the scratch-and-dig specs that are appropriate for various applications. It is also useful for specifying the scratch-and-dig spec and for getting a feel for what to expect from a vendor of a given quality.
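Because the scratch and dig numbers carry implied units (ten-thousandths and hundredths of a millimeter, respectively), a tiny conversion script can make a quoted spec concrete. This is only a sketch of the unit conventions described above; the function names are ours, not part of any standard.

```python
def scratch_width_um(scratch_number):
    # Scratch number is in units of 1/10,000 mm, so an "80" is 8 um wide.
    return scratch_number / 10_000 * 1_000  # mm -> um

def dig_diameter_mm(dig_number):
    # Dig number is in units of 1/100 mm, so a "50" is 0.5 mm in diameter.
    return dig_number / 100

for spec in ("80/50", "60/40", "40/20", "10/5"):
    s, d = (int(x) for x in spec.split("/"))
    print(f"{spec}: scratch ~{scratch_width_um(s):.0f} um wide, dig ~{dig_diameter_mm(d):.2f} mm across")
```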
References
1. U.S. Military Specification MIL-O-13830A, 1994.
2. www.davidsonoptronics.com, 2003.
3. Private communications with Tom Roberts, 1995.
4. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 64, 1994.
5. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 447–449, 2000.
6. www.Abrisa.com, 2003.
7. http://www.crystran.co.uk/optics.htm, 2003.
SURFACE TILT IS TYPICALLY THE WORST ERROR Surface tilt does more damage to an image than any other manufacturing error.
Discussion Surface tilt, manifesting itself as element wedge, causes image degradation more frequently than any other common manufacturing error. Wedge is normally removed during the centering operation of manufacturing. If the tolerance is not known from a detailed tolerancing program, it should be kept to a minimum; Kingslake suggests keeping it to less than 1 arcmin. Similarly, element decenter is often the most important tolerance in a lens assembly and is the result of tolerance buildup between the element diameter and the bore of the lens housing.
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, p. 381, 2000. 2. R. Kingslake, Lens Design Fundamentals, Academic Press, Orlando, FL, p. 192, 1978.
Chapter 16
Systems
The ultimate goal of most EO research and development is to create systems that generate quality information, often (but not always) in the form of images with high information content. Systems represent the integration of myriad optical, structural, electronic, and mechanical subsystems into a group of things that work together to execute some function. An amazing and fun challenge presented to managers and project engineers is to “get it all right” across a multitude of disciplines. As we see in the set of rules presented in this chapter, some key and underlying factors are reliable aids to the system design analysis and optimization process. Although optical systems have been developed since the sixteenth century, EO systems experienced little development before WWII. This is the result of immature component technologies and the fact that commercial and scientific instruments seemed to concentrate on film technology. Film technology was not well suited for broadcast television, and visible electro-optical systems were always pushed by the television industry. (Please review the introduction to Chap. 18, “Visible and Television Sensors,” for a history of television cameras and the introduction to Chap. 7, “Displays,” for a brief discussion of the standards.) In fact, it was not until the Vietnam War that electro-optic missile seekers and laser-guided bombs first proved their utility. Later, the Soviet-Afghan war again underscored the usefulness of night vision and low-cost electro-optic seekers on low-cost missiles. By the time of Desert Storm, warfare was being largely fought and won at night, placing the priority for electro-optics as high as more traditional technologies such as radar and communications. Even early-warning satellites were used to detect incoming SCUD missiles. Operation Enduring Freedom in Afghanistan tied disjoint platforms performing remote electronic sensing together for the first time in warfare. Smart munitions and cruise missiles relied on electro-optic input. In 2002, a remotely controlled unmanned combat aerial vehicle (UCAV) successfully fired missiles at vehicles containing al Qaeda murderers, using electro-optical sensors to relay images to the control center and electro-optical sensors on the missiles. These concepts were expanded, and network-centric sensing, multiplatform battle management, and remote targeting were successfully used in Iraqi Freedom in 2003. The worldwide concern for protecting domestic resources and people against terror is looking heavily toward electro-optical systems to deal with border intrusion; face recognition; nuclear, biological, and chemical identification; concealed weapon detection; physical security; perimeter security; and so forth. The military and security list goes on and on.
While all these wars were raging, interplanetary space probes, orbital spacecraft, and nuclear testing further augmented the development of EO systems. Today, thanks to television, wars, and space probes, the tables have turned on film, and digitized EO sensors are always the system of choice for the professional and now the consumer. The advent of high-resolution CCDs and CMOS imagers has now enabled EO systems to dominate still photography, with traditional film being relegated to art. The movie industry is rapidly moving from “filming” a movie to digitally taping and even composing the entire entertainment feature from computer graphics. Lastly, modern camcorders are digital EO systems. The system designer is challenged on many fronts. The designer’s role is to evaluate the needs of a customer and derive the engineering requirements that are a satisfactory balance of the following four key topics (which are related by a flippant rule):
1. Performance
2. Risk
3. Cost
4. Schedule
The design process usually starts with a detailed assessment of the performance requirements, conversion of them to concepts, and the eventual elimination of the weaker concepts. Throughout the process, consideration must be given to these four key criteria. Among other duties, the system engineer has the responsibility for explaining the concept in enough detail that the detailed design process can be undertaken. General design rules, often in the form of rules of thumb, are used to judge the effectiveness of a concept before the design process can begin. The system engineer needs the skills to communicate with all of the members of the design team and must be able to facilitate communication between the team members and know when communication is urgent. In particular fields, it can be very important to develop rules of thumb that all designers can understand. That way, the specialists in various fields can anticipate, with reasonable accuracy, the performance characteristics that will affect other design areas. For example, the controls designer should have an easy way to estimate the mass of the motors used to move some part of the system that is controlled by his or her part of the design. That way, even though that designer is not an expert on actuators and motors, it is possible to know whether the design meets the mass budget that has been allocated by the systems engineer. Clearly, this book does not address all of the rules of thumb that might come up in designs of electro-optical systems, because they are so widely varied. However, the types of rules presented here can act as a guide for the rule-development process and the eventual creation of general guidelines upon which all designers can rely. The interested reader can find any number of texts that provide details on how various EO systems function. Of course, the reader who is interested in a specific topic will have to resort to books and technical journals that are dedicated to systems that apply in those cases. Often, EO systems are described in the journals Optical Engineering and OE Reports, and in SPIE and MSS conference proceedings. Additionally, the journal Applied Optics presents a description of EO systems in each issue, at a fairly brief and sophisticated level. Anthony Smart1 wrote a short but great paper giving several common-sense checklists for consideration in the development of an optical system.
References 1. A. Smart, “Folk Wisdom in Optical Design,” Applied Optics (suppl.), December 1, 1994. 2. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 3–51, 1993. 3. L. West and T. Segerstorm, “Commercial Applications in Aerial Thermography: Powerline Inspection, Research and Environmental Studies,” Proc. SPIE, Vol. 4020, Thermosense XXI, pp. 382–386, 2000. 4. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 354-377, 2000.
BAFFLE ATTENUATION Sensors need baffles to work properly. A standard cylindrical sunshade can have an attenuation (or stray light rejection) factor of 10⁵. Two-stage baffles usually have an attenuation factor of 10⁸ or higher.
Discussion The attenuation factor is defined as the ratio of the irradiance from an “out of field” radiation noise source at the entrance aperture of the baffle to the irradiance produced by the scattered radiation at the exit aperture of the baffle. The actual attenuations from baffles and sunshades are complicated functions of the geometrical design, coatings, wavelengths, distance off-axis of the source, and so on. A scatter program or detailed ray trace considering the surface bidirectional reflectance distribution function (BRDF) should be conducted to assess performance. Several programs can calculate complicated scatter, including APART/PADE, ASAP, ZEMAX, and OPTICAD. The above assumes baffles with a length-to-diameter ratio of at least 1, with internal fins. Aspect ratio is a powerful factor in the performance of the baffle. For some specialized optics, thermal gradients across optical elements can be a concern, and baffles are also important to control the input radiation to these optical elements. See associated rules in other chapters on BRDF, Lambertian vs. specular, and emissivity. Unwanted radiation from scatter can also occur off the optics, especially if they are dirty. Thus, for low-scattering systems, it is important to keep the optics as clean as possible and within the minimum scratch/dig specification. Moreover, this requirement is even more demanding at shorter wavelengths. Ultraviolet optics, for example, have to be scrupulously clean to obtain their performance potential.
EXPECTED MODULATION TRANSFER FUNCTION For typical sensors and a good display, the diffraction and the detector’s modulation transfer function (MTF) will dominate the system’s MTF as follows:
■ Diffraction MTF is typically 0.7.
■ Detector MTF is typically 0.6 to 0.7.
■ Processor MTF is close to 1.
■ Display’s MTF is 0.6 to 0.7.
Thus, the entire sensor system MTF will be in the range of 0.20 to 0.4.
Discussion The rule is based on empirical observations and current technology. It is also important to note that these values are given at fo, which is defined as 1/(2IFOV), where IFOV is the instantaneous field of view. The total MTF of a system is the product of the MTFs of all of the subsystems.
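Since the system MTF at a given spatial frequency is simply the product of the subsystem MTFs, the rule's range is easy to verify. The snippet below just multiplies the typical values quoted above; it is an illustration, not a model of any particular sensor.

```python
from math import prod

# Typical subsystem MTFs at f_o = 1/(2 * IFOV), from the rule above
diffraction, detector, processor, display = 0.7, 0.65, 1.0, 0.65

system_mtf = prod([diffraction, detector, processor, display])
print(f"System MTF ~ {system_mtf:.2f}")  # ~0.30, within the 0.2 to 0.4 range
```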
Reference 1. G. Hopper, “Forward Looking Infrared Systems,” in Passive Electro-Optical Systems, Vol. 5, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 128–130, 1993.
BLIP LIMITING RULE There is little to no benefit in having a sensor more sensitive than the noise imposed by the background.
Discussion BLIP is common terminology for background limited in performance (or sometimes noted as background-limited infrared photodetector). When the noise from the background is much larger than other noise sources, the sensor is operating in the BLIP regime. A significant noise source for very low-light-level TVs, infrared sensors, and millimeter wave (MMW) imagers is the noise caused by the inconstant arrival rate of the photons from the background. This fluctuation in the arrival rate of the photons is an unavoidable feature of the radiation source. Such photon flux is characterized by Poisson statistics in which the mean is equal to the variance. For high backgrounds (large IFOVs, bandpasses, and integration times), this is frequently the driving noise source. When BLIP limited, expending money and effort to obtain more sensitive detectors will not increase overall system sensitivity.
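A toy photoelectron-count calculation illustrates the point. The counts below are invented for illustration only; the sketch assumes simple Poisson statistics, so the signal and background variances equal their means, and detector noise adds in quadrature.

```python
from math import sqrt

def snr(signal_e, background_e, detector_noise_e):
    """Shot-noise-limited SNR for Poisson photoelectron counts.

    signal_e, background_e : mean signal and background photoelectrons per integration
    detector_noise_e       : rms detector (dark plus read) noise in electrons
    """
    total_noise = sqrt(signal_e + background_e + detector_noise_e**2)
    return signal_e / total_noise

# Hypothetical BLIP case: a background of 1e6 electrons dominates everything else
print(snr(5_000, 1_000_000, 300))  # ~4.8
print(snr(5_000, 1_000_000, 30))   # ~5.0: a 10x quieter detector buys almost nothing
```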
DAWES LIMIT OF TELESCOPE RESOLUTION In the blue part of the visible wavelength spectrum, up to the limit imposed by the atmosphere, objects are resolvable if they are separated by 4.5 arcsec divided by the diameter of the telescope (in inches), or2

$$\theta_r > \frac{4.5\ \text{arcsec}}{D}$$

where
θr = angular separation of the objects
D = diameter of the telescope in inches
Discussion William R. Dawes developed this for use with astronomical telescopes in the visible portion of the spectrum. However, it can be approximately applied to other applications. The basic equation approximates the Rayleigh criterion, except the Dawes limit gives an answer of about 2.2 times better resolution than the Rayleigh criterion in the reddish visible part of the spectrum. Because the Dawes limit does not accommodate different wavelengths, it should be used with extreme caution outside the visible part of the spectrum. In fact, some claim that it is valid only at 0.463 µm.2 The rule assumes good quality optics and alignment, good weather, and good seeing. This rule does not account for special or exotic signal processing (such as microscanning), which can increase effective resolution. For example, let’s assume we have a 15.3-cm telescope (about 6 in). According to the Dawes limit, in the visible region, this telescope can separate 4.5/6 or 0.75 arcsec, or objects with a little less than about a 3.6 µrad separation. Using the Rayleigh criterion of 1.22(λ)/d, we also would get

$$\frac{1.22\,(5\times10^{-5}\ \text{cm})}{15.3\ \text{cm}} = 4\ \mu\text{rad}$$
which is close to the 3.6 µrad calculated by the rule. However, if we wished to use the same telescope at 10 µm, the Dawes limit would still predict about 4 µrad, but the Rayleigh criterion would yield a more reasonable 80 µrad.
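The worked example above can be reproduced with a few lines of code; this sketch simply encodes the Dawes and Rayleigh expressions with the unit conversions spelled out.

```python
ARCSEC_TO_URAD = 4.848  # 1 arcsec is about 4.848 microradians

def dawes_limit_arcsec(aperture_in):
    return 4.5 / aperture_in

def rayleigh_limit_urad(wavelength_um, aperture_cm):
    # 1.22 * lambda / D, with lambda converted from micrometers to centimeters
    return 1.22 * (wavelength_um * 1e-4) / aperture_cm * 1e6

d = dawes_limit_arcsec(6.0)  # a 6-in (15.3-cm) telescope
print(f"Dawes:    {d:.2f} arcsec = {d * ARCSEC_TO_URAD:.1f} urad (any wavelength)")
print(f"Rayleigh: {rayleigh_limit_urad(0.5, 15.3):.1f} urad at 0.5 um")
print(f"Rayleigh: {rayleigh_limit_urad(10.0, 15.3):.0f} urad at 10 um")
```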
References 1. http://www.stkate.edu/physics/phys104/curric/telescope.html, 2003. 2. http://www.palmbeachastro.org/planets/PlanetObserving/section3.htm, 2003.
DIVIDE BY THE NUMBER OF VISITS Specifications in a data sheet are accurate to the numbers advertised divided by the number of visits that you have had to the supplier.
Discussion Do not believe what you read in marketing sheets. If you are interested, contact the vendor yourself. If you are really interested, buy one of the products and test it yourself (otherwise, you’ll be disappointed). The truth is that commercial data sheets stretch the truth. When you are on the edge of technology in this hurried world, marketing data sheets and catalog descriptions frequently are released before the product is completely designed or the mass production of it is proven. Also, product improvements and changes are not included in old data sheets and product specifications. As a result, the specifications sometimes are downright wrong. Additionally, sometimes overzealous marketeers stretch the truth to an extent that would have Gumby screaming in pain. Of course, the specification can usually be met with an increase in cost and schedule. This rule applies to figures of merit that have the property that the higher they are, the better they are (e.g., D*, optical transmission). The inverse applies when the figure of merit improves when the number is lower (e.g., NEP, cost, weight).
GENERAL IMAGE QUALITY EQUATION The general image quality equation (GIQE) predicts, from instrument design parameters, the value of the National Image Interpretability Scale (NIIRS) when photointerpreters view a scene. A numerical estimate of such a response is1

$$NIIRS = 10.251 - a\log_{10}(GSD_{GM}) + b\log_{10}(RER_{GM}) - 0.656\,H_{GM} - 0.344\,\frac{G}{SNR}$$

where
GSD_GM = geometric mean ground sample distance
RER_GM = geometric mean of the normalized relative edge response (RER)
H_GM = geometric mean height overshoot caused by the edge sharpening
G = noise gain resulting from the edge sharpening
SNR = signal-to-noise ratio

The coefficient a equals 3.32 and b equals 1.559 if RER_GM > 0.9; a equals 3.16 and b equals 2.817 if RER_GM < 0.9 (Ref. 1).
Discussion The prediction of performance of imaging systems demands the evolution of predictive algorithms. Among the most important are those that algorithmically capture the likely response of expert image analysts viewing a particular scene. The attempt to devise an algorithm for a human response to an image is a decades-old issue, just like quantifying the performance of an electro-optical system (e.g., automated NETD and automated target recognizers). This rule relates human interpretation to the NIIRS system. A numerical scale for such prediction is the National Image Interpretability Scale (NIIRS). A rule in another chapter provides additional detail about this scale. The scale is summarized in Table 16.1 (from Ref. 1). We also note a slightly different form from Ref. 2,

$$NIIRS = 11.81 + 3.32\log_{10}\!\left(\frac{RER_{GM}}{GSD_{GM}}\right) - 1.48\,H_{GM} - \frac{G}{SNR}$$

Note that NIIRS changes by 1.0 if the GSD is halved without changing SNR (such as would occur if the distance from target to camera is halved, ignoring atmospheric effects). But, if GSD is reduced by changing the integration time, SNR will suffer, and NIIRS will not improve as much as in the former case. A simple way to relate GSD and NIIRS is offered by Ref. 3:

$$GSD = \frac{RER}{10^{(NIIRS - 11.80)/3.32}}$$

In this case, the reference suggests that an RER around 0.53 provides good matching with the standard NIIRS criteria, as shown in Chap. 1.
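A direct implementation of the GIQE as written above is a handy design-trade tool. The sketch below follows that form; the example inputs are invented, and GSD is assumed to be in the units used when the coefficients were fitted (commonly inches in published treatments of the GIQE).

```python
from math import log10

def giqe_niirs(gsd_gm, rer_gm, h_gm, g, snr):
    """General image quality equation (GIQE) in the form given above."""
    a, b = (3.32, 1.559) if rer_gm > 0.9 else (3.16, 2.817)
    return (10.251 - a * log10(gsd_gm) + b * log10(rer_gm)
            - 0.656 * h_gm - 0.344 * g / snr)

# Hypothetical design point, for illustration only
print(giqe_niirs(gsd_gm=12.0, rer_gm=0.9, h_gm=1.3, g=10.0, snr=50.0))  # ~5.8
```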
References 1. R. Fiete and T. Tantalo, “Image Quality of Increased Along-Scan Sampling for Remote Sensing Systems,” Optical Engineering, 38(5), pp. 815–820, May 1999. 2. R. Driggers et al., “Targeting and Intelligence Electro-optical Recognition Modeling: A Juxtaposition of the Probabilities of Discrimination and the General Image Quality Equation,” Optical Engineering, 37(3), pp. 789–797, March 1998. 3. R. Driggers, P. Cox, and M. Kelley, “National Imagery Interpretation Rating System and the Probabilities of Detection, Recognition, and Identification,” Optical Engineering, 36(7), pp. 1952–1959, July 1997.
GOOD FRINGE VISIBILITY Good fringe visibility from a disk source occurs when

$$\frac{\pi h \theta}{\lambda_o} = 1$$

where
h = distance between the two slits (or mirrors) forming the fringes
θ = angular separation between two point sources
λo = wavelength of a narrow bandwidth
Discussion For a narrowband source of wavelength λo and diameter D, projecting light at a distance of R, there is an area of coherence at the source, π(h/2)², over which pairs of slits will produce fringes. Viewed at the point at which fringes will form, the angular size of the disk is θ = D/R, and the transverse correlation distance is 0.32(Rλo)/D. A set of apertures separated by h (or closer) will produce fringes. This is useful for determining when a source will produce visible fringes such as required to measure the angular size of a disk of some object (e.g., a star). This is also useful for experiment design in a teaching setting and in the design of a Michelson stellar interferometer. Finally, take note that use of closely spaced slits might produce fringes, which may not be desirable. This rule is based on the Van Cittert–Zernike theorem and has been verified by any number of experiments. The fringe visibility is related to the degree of coherence of the optical field at each mirror of the interferometer and can be modeled as a Bessel function for circular apertures. This is valid for a narrow spectral band only, and it assumes a disk source of nearly constant intensity. Meeting the criteria above leads to a fringe visibility of 0.88.
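Solving the criterion πhθ/λo = 1 for h gives the largest aperture (or mirror) separation that still yields high-visibility fringes. The following sketch does just that; the star size and wavelength are made-up illustration values.

```python
from math import pi

def max_separation_m(wavelength_m, angular_size_rad):
    # From pi * h * theta / lambda = 1: separations up to roughly this h
    # still give good (visibility ~0.88) fringes from a uniform disk source.
    return wavelength_m / (pi * angular_size_rad)

theta = 0.02 / 206265.0  # a 0.02-arcsec disk, converted to radians
print(max_separation_m(550e-9, theta))  # ~1.8 m mirror separation at 550 nm
```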
References 1. E. Hecht, Optics, Addison-Wesley, Reading, MA, p. 532, 1990.
LWIR DIFFRACTION LIMIT The diffraction limit of an LWIR (8- to 12-µm) telescope in milliradians is approximately the inverse of the optic diameter in inches.
Discussion This is very useful to quickly estimate the angular diffraction limit of a CO2 laser or an 8- to 12-µm LWIR imager in some meeting. The angular diffraction limit is defined by

$$\frac{2.44\lambda}{D}$$

where
λ = wavelength
D = optic diameter

If D is expressed in centimeters and λ in micrometers, then the diffraction limit expressed in milliradians is equal to 0.244λ/D. Coincidentally, the conversion from centimeters to inches is 2.54, almost 2.44. Thus, if λ is 10.4 µm, then the rule holds to three decimal places. Obviously, this rule can be extrapolated to an MWIR system by including a factor of 1/2, and to a visible system by including a factor of about 1/20.
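A two-line function captures the rule and its exact form side by side; the 4-in aperture used below is an arbitrary example.

```python
def diffraction_limit_mrad(wavelength_um, aperture_cm):
    # 2.44 * lambda / D, in milliradians when D is in cm and lambda in um
    return 0.244 * wavelength_um / aperture_cm

aperture_in = 4.0
print(diffraction_limit_mrad(10.4, aperture_in * 2.54))  # ~0.25 mrad, i.e., ~1/aperture_in
```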
Reference 1. S. Linder, from S. Weiss, “Rules of Thumb, Shortcuts Slash Photonics Design and Development Time,” Photonics Spectra, pp. 136–139, October 1998.
OVERLAP REQUIREMENTS In a step-stare (or step-scanned) pattern, the overlap from one step to another should be 50 percent and never less than about 10 percent.
Discussion When a system employs a step-stare or scanning, it is advisable to overlap the scan or steps. The amount of this overlap is determined by the step-to-step correlation and processing. Conservatively, it is advisable to overlap each new step by 50 percent of the area of the previous step to ensure that the sensor motion does not conflict with the Nyquist criterion. This requirement provides for confident registration and adequate coverage, and it allows sufficient oversample for advanced image processing to be used. However, in an accurate system, requiring no more than stitching the scene, and with high-quality inertially registered data, this can be reduced to a few percentage points. In such a system, postprocessing will register the frames properly. Another approach is the inclusion of line-of-sight control using a fast-steering mirror pointing system that relies on inertial data for mirror control or uses a reference point in or near the target to stabilize the pointing system. Frequently, the image processor will require information from some pixels from prior scans to properly execute its algorithms. Imagine a 9 × 9 spatial filter with its values being set by the surrounding 12 × 12 box. To properly execute such an algorithm for the edge pixel of a scan or step-stare, the pixels of the previous scan must be known from either overlap or memory. This rule is based on empirical observations and the need for computer algorithms to register the images and find their positions. The requirement for overscan really depends on the accuracy of the scan-to-scan correlation.
PACKAGING APERTURES IN GIMBALS It is difficult to package a gimbaled aperture in a volume where the ratio of aperture diameter to package diameter exceeds about 0.65. It is difficult to package a nongimbaled aperture in a volume where the ratio of aperture diameter to package diameter is 0.80 or more.
Discussion A gimbal has motors, encoders, structure, and other nasty hardware stuff that simply must be accommodated by the design and must be external to the optical path. This leads to the gimbal being substantially larger than the clear aperture. In addition, most gimbaled EO users demand capability in several spectral regions or with a multitude of sensors (see Fig. 1.2 in Chap. 1). Each aperture for these multispectral sensors must share the same gimbal and result in even lower ratios. When attempting to package an electro-optical instrument, the ratio of the aperture to the total size tends to follow the above rule. Although there have been some systems that may violate this rule, they required enormous levels of engineering ingenuity and complexity, such as off-axis apertures feeding the optical train through the arm of a gimbal or fiber optic imaging. This, of course, translates into added cost for the system. We usually find that additional impacts occur such as the need to place nearby structures in precarious positions or limit the operational range of the sensor. This rule is a generalization, of course, because individual designs and requirements should be fully analyzed and traded. Optical systems may press the limits for these ratios, but the presence of cryogenic cooling lines, electrical cables, and other connections to the outside world unavoidably take up space.
PICK ANY TWO A developing system (e.g., at the preliminary design review) can be any two of the following: low cost, high reliability, high performance, and fast schedule (see Fig. 16.1).
FIGURE 16.1 Pick any two (well, maybe three if you are lucky): fast delivery, high performance, low cost, high reliability. This figure attempts to convey that full optimization of a system is impossible.
Discussion Often, the requirements (or attributes) of an electro-optical project compete with each other and may even be contradictory. For instance, it is usually difficult to increase performance and reliability while reducing cost. One requirement may act like nitro to another’s glycerin; just moving the combination can cause the whole thing to explode. Usually, an astute engineering/manufacturing organization can fuse at least two opposing requirements together and satisfy both simultaneously for a system of mature design. For a developmental system, they can usually promise three of the above dreams with straight faces. Managers and executives had better be smart enough to detect this situation (usually by the odor). This rule is founded on tongue-in-cheek empirical observations but illustrates an important truth in systems design: low cost, high reliability, high performance, and fast delivery are all relative. Several million dollars may be considered low in cost for a given system at a certain reliability, performance, and delivery for a space-based instrument. On the other hand, a few thousand dollars may seem costly for a physical security camera. Increase the reliability, and the cost will increase relative to its initial cost. The same is true for performance and schedule. This rule assumes comparison of the same type of cost, reliability, and so forth. For instance, an increase in reliability may cause a decrease in life cycle cost (but rarely development cost). The above does not account for disruptive changes in technology or production techniques. Said another way, it may take a revolution in technology to invalidate this rule. Finally, this assumes the design wasn’t seriously flawed such that a minor design change will affect several attributes favorably.
PROCEDURES TO REDUCE NARCISSUS EFFECTS Lloyd1 suggests the following five design procedures to reduce cold reflections (sometimes known as Narcissus effects) that cause a detector to see reflections of itself: 1. Reduce the focal plane effective radiating cold area by warm baffling. 2. Reduce lens surface reflections by using high-efficiency antireflection coatings (on both sides of the optical elements).
3. Defocus the potential cold return by designing the optical system so that no confocal surfaces are present. 4. Cant (or tilt) all flat windows. This means that rays traveling parallel to the line of sight of the detectors will be diverted out of the sensor line of sight. 5. Null out the cold reflections with advanced electronic image processing.
Discussion Often, with IR system design, reflections from cold surfaces onto the FPA cause an image flaw of low levels in those areas. Lloyd offers us the above five techniques to reduce this unwanted effect. Sometimes no. 1 cannot be done. For LWIR systems, no. 2 should always be done if budget and throughput requirements allow. No. 3 should be done whenever the optical requirements allow. No. 4 is an effective way of reducing this effect. It should be implemented whenever possible. This method is used in virtually all LWIR imagers.
References 1. J. Lloyd, Thermal Imaging Systems, Plenum Press, New York, p. 281, 1975.
RELATIONSHIP BETWEEN FOCAL LENGTH AND RESOLUTION The IFOV of a system can be estimated from

$$IFOV = \frac{1000\,p_m}{EFL}$$

where
IFOV = instantaneous field of view in microradians
p_m = pixel size in micrometers
EFL = effective focal length in millimeters
Discussion This is a version of a fundamental optical equation modified for focal plane pixels. The focal length of a lens (commonly given in millimeters) determines the “scale” of angular subtense to linear dimension at the focal plane. The focal length is equal to the detector size divided by the field of view. The above equation includes a factor of 1000 to convert units of micrometers for the pixel size to units of millimeters for the focal length, giving the pixel’s field of view in microradians. In most optical designs, the units of millimeters are commonly used for focal length, and the units of micrometers are commonly used for pixel pitch. The above equation can be modified to represent the total field of view by merely replacing the pixel size with the FPA size or by multiplying the pixel size by the number of pixels along the appropriate direction.
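The scaling is easy to wrap in a helper; the pixel pitch, focal length, and array size below are arbitrary example values, not a specific design.

```python
def ifov_urad(pixel_um, efl_mm):
    # IFOV (microradians) = 1000 * pixel pitch (um) / effective focal length (mm)
    return 1000.0 * pixel_um / efl_mm

def fov_mrad(pixel_um, n_pixels, efl_mm):
    # Small-angle estimate of the full field of view along one axis
    return ifov_urad(pixel_um, efl_mm) * n_pixels / 1000.0

print(ifov_urad(25.0, 250.0))       # 100 urad per pixel
print(fov_mrad(25.0, 640, 250.0))   # 64 mrad, about 3.7 degrees
```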
SIMPLIFIED RANGE EQUATION The range Rp at which a target can be detected with probability p is approximately
$$R_p = \frac{\sqrt{e/0.7}\;\ln\!\left(\frac{1}{k_p}\,\frac{\Delta C}{MRC}\right)}{\beta_{atm} + \frac{N_{xx}}{D_c}\,\beta_{sys}}$$

where
∆C = contrast differential, which is either the difference in temperature for a FLIR (∆T) or the difference in reflected contrast for a visible sensor (∆C)
MRC = minimum resolvable contrast [Minimum resolvable temperature (MRT) can be substituted for MRC if ∆T is used instead of ∆C.]
βatm = average bandpass-integrated atmospheric extinction coefficient for the path length (range) involved, in 1/km units
e = length-to-width ratio of the bar pattern used to determine the MRC or MRT; if you don’t know it, assume it to be 0.7 so that the radical becomes unity
kp = normalized signal-to-noise ratio for the probability of detection p; some examples are given below
βsys = slope of the regression line fitted to the values of the MRC (or MRT) in the specifications of the particular sensor (This relates to the sensor’s resolution. If unknown, just base it on the detector’s instantaneous field of view.)
Nxx = Johnson criterion for the given probability (xx) of correctly performing an observation task
Dc = target’s critical dimension
Discussion A global range equation for electro-optical systems does not exist in a form that humans can easily comprehend. To this end, there are several official computer models (such as NVTHERM) that attempt to give an approximate target range for a given myriad of inputs. Additionally, almost every company in this field has its own model calibrated to its own equipment. The best of these homegrown models are based on theory normalized to real-world performance; the next best are based on averages of MRTs or MRCs taken from real instruments. Obviously, this is just an approximate range based on the most rudimentary values, but it can be useful in a pinch or when comparing one system to another. The authors of this book refrained from placing such a range equation in the first edition. After several suggestions, we decided to include a few such models, as they illustrate critical physical relationships between the hardware and real-world performance. The reader is cautioned that, although the basic physics of this equation is sound, any atmospheric or target statistics are questionable, and production floor engineering inputs can be derived only from statistics. Reference 1 gives the following table of values relating kp to the probability of detection.

TABLE 16.1
Probability of detection, p     kp, normalized SNR for probability p
0.90                            1.5
0.5                             1.0
0.1                             0.5
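The sketch below implements the range equation in the form reconstructed above. The inputs are placeholders invented for illustration; in practice, β_sys, N_xx, and D_c must be supplied in consistent units so that the denominator comes out in 1/km, and all the caveats in the discussion apply.

```python
from math import log, sqrt

def detection_range_km(delta_c, mrc, k_p, beta_atm, beta_sys, n_xx, d_c, e=0.7):
    """Simplified range equation as reconstructed above (units per the definitions)."""
    numerator = sqrt(e / 0.7) * log(delta_c / (k_p * mrc))  # natural log
    denominator = beta_atm + (n_xx / d_c) * beta_sys
    return numerator / denominator

# Invented example inputs
print(detection_range_km(delta_c=4.0, mrc=0.1, k_p=1.0,
                         beta_atm=0.2, beta_sys=0.3, n_xx=1.0, d_c=3.0))  # ~12 km
```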
References 1. L. Biberman, “Alternate Modeling Concepts,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 11-8 to 11-13, 2000. 2. NVTHERM Users Manual, ONTAR (www.ontar.com), 2000. 3. FLIR 92 Users Manual, U.S. Army Center for Night Vision.
SYSTEM OFF-AXIS REJECTION It is difficult, but not impossible, to create sensors that can perform (and in some cases, not be damaged) when the Sun or Moon is within about 10° of their line of sight. In addition, it is difficult to make the sensor function as desired when sunlight is falling on any of the optical surfaces, even if it is not within the field of view. It is very difficult to design a system that will not be damaged when its focal plane is in direct continuous exposure to the Sun.
Discussion A review of the stray light rejection of various telescope designs shows that the point source transmittance of most telescopes is about 10⁻³ at a 10° angle from the optic axis.1 Although not true of all designs, it is still a typical value. Because the irradiance at a sensor from even a small part of the Sun or Moon is higher than that of most stars and all man-made objects (except nuclear weapons), rejections of 10⁻⁵ to 10⁻⁶ may be required. Such rejections are easily achieved only at off-axis angles of 30° or higher. Chaisson2 reports that highly sensitive astronomical telescopes, such as Hubble, cannot tolerate the Sun within 50° of the line of sight or the Moon within 15°. Unless otherwise designed, security system cameras, and even commercial camcorders, cannot be pointed at the Sun for long without damage. Even advanced cameras can have problems. On Apollo 12, the lunar surface camera was accidentally, and only momentarily, pointed at the Sun, which burned out the vidicon tube that was forming the images. Millions of disappointed Earth-bound viewers witnessed this event. A critical part of the stray-light performance of a telescope is the cleanliness of the optical surfaces. Even the smallest surface contamination will cause light from out of the field of view to be scattered in a way that today’s modern and very sensitive detectors will see. Therefore, while sensors are now more capable of sensing subtle features of the things they are pointed at, they also see undesirable sources of light. The exact optical design, baffle design, sunshade design, and radiometric system parameters will determine the true performance of the system. The guidelines in the rule provide a good first estimate of what can be expected. We do note, however, that there are some important exceptions. For example, laser warning, missile warning, and some forward-looking infrared sensors can function with the Sun in the field of view, albeit with careful design. Key to the tolerance of the Sun is the baffle and sunshade design along with its “black coatings.” Generally, these coatings should have a reflectance of less than 20 percent in the bandpass of interest and be Lambertian. However, these requirements depend on the details of the design. Some baffle and cold shield designs actually work better with specular coatings of low reflection. See the “Baffle Attenuation” rule in this chapter (p. 315) and, in other chapters, rules concerning BRDF, emissivity, and Lambertian versus specular reflectance.
References 1. W. Wolfe, “Imaging Systems,” in W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 19–24 and 19–25, 1978.
2. E. Chaisson, The Hubble Wars, HarperCollins, New York, p. 72, 1994. 3. www.ligo.caltech.edu/~ajw/40m_cdr/40m_AOS_DRD.ppt, 2003. 4. Y. Yakushenkov, Electro-Optical Devices, MIR Publishers, Moscow, pp. 125–127, 1983.
TEMPERATURE EQUILIBRIUM An optical system is in thermal equilibrium after a time equivalent to four to six times its thermal time constant.
Discussion When an optical system is exposed to a different temperature, thermal gradients will cause spacing and tilt misalignments and distort the image. After four thermal time constants, the remaining difference in temperature is less than two percent. Newton’s law of cooling shows that thermalization is an exponential process with the reduction following a 1/e falloff for each time constant. Obviously, 1/e⁴ to 1/e⁶ is a very small number. This originates from an equation that states that the change in temperature is proportional to 1 – e^(–t/τ), where τ is the thermal time constant, and t is the time. When the ratio t/τ is 4, the temperature deviates from equilibrium by 1.8 percent.
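The exponential settling is simple to tabulate; the loop below just evaluates the residual temperature difference after a given number of thermal time constants.

```python
from math import exp

def residual_fraction(n_time_constants):
    # Fraction of the initial temperature step remaining after n thermal time constants
    return exp(-n_time_constants)

for n in (1, 2, 4, 6):
    print(f"{n} time constants: {100 * residual_fraction(n):.2f}% of the step remains")
```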
TYPICAL VALUES OF EO SYSTEM PARAMETERS When the details of a particular electro-optic system are unknown, one can assume the following typical values for performance calculations:

Overall transmission through the optics                 0.65
Electrical efficiency                                   0.85
Scan efficiency                                         0.8
Frame time                                              1/60 to 1/30 sec for imaging systems
Atmospheric transmission                                0.88/km in a transparent wavelength region
Typical laser reflectivity of targets not intended
  to be “stealthy”                                      40 percent
Temperature of objects in space                         270 K
Detector D*                                             10¹¹ Jones (cm Hz^(1/2)/W)
Temperature of objects on the ground                    300 K
Optical MTF                                             Diffraction limited with a 1/4 wavelength defocus
Discussion Often, one needs to perform quick calculations of sensitivity or determine the range of some system attribute (e.g., how big does the aperture need to be?) when complete design information is, unfortunately, unknown. Although this can lead to dangerous design decisions, when necessary, the above can be substituted into the equations.
The above values are empirical approximations based on experience. Much of the above data were contributed by Seyrafi from his 1973 book, and they still generally apply today. These are typical values, and any given system may have drastically different values; use these only when no other information is available. Rarely will any of the above be off by more than a factor of ten. However, if more than one of these numbers is used, the actual result may be off considerably, and care must be taken to ensure that the end result is correct. On the other hand, if a large number of guesses are made, it is entirely possible that the errors will average out, and you’ll get about the right answer. This rule allows first-guess calculations when design details are unknown. This set of parameters also allows the designer to begin the design process and set up the design equations while awaiting revelation of the actual details. These guidelines usually add sufficient margin so that the hardware can actually be made to perform to expectations.
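When these defaults have to be dropped into quick link-budget or sensitivity estimates, it can help to keep them in one place. The dictionary below merely restates the table; the key names are ours, and a real analysis should replace every entry it can with measured values.

```python
TYPICAL_EO_DEFAULTS = {
    "optics_transmission": 0.65,
    "electrical_efficiency": 0.85,
    "scan_efficiency": 0.8,
    "frame_time_s": 1 / 30,                   # 1/60 to 1/30 s for imaging systems
    "atmospheric_transmission_per_km": 0.88,  # in a transparent wavelength region
    "target_laser_reflectivity": 0.40,
    "space_object_temperature_K": 270.0,
    "ground_object_temperature_K": 300.0,
    "detector_D_star_jones": 1e11,
}

# Example: net transmission over a 3-km horizontal path, through the optics
t = TYPICAL_EO_DEFAULTS
print(t["optics_transmission"] * t["atmospheric_transmission_per_km"] ** 3)  # ~0.44
```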
Reference 1. K. Seyrafi, Electro-Optical Systems Analysis, Electro-Optical Research Company, Los Angeles, CA, pp. 238 and 294, 1973.
WIND LOADING ON A STRUCTURE The wind force on a telescope (or other structure, such as a dome) can be estimated from

$$F = \frac{1}{2}\rho v^2 A C$$

where
ρ = density of air
v = wind velocity
A = area of the object projected in the wind direction
C = a factor derived from the wind direction and the specific surface features of the telescope; often referred to as the drag coefficient

Note that the dimensions of the elements of the equation need to be selected to give the force in the desired units.
Discussion It is always desirable to know the factors that can lead to pointing errors and other disturbances. For instance, wind loading on a sensor or telescope will create a number of effects, including a rocking of the structure and stimulation of bending modes. Mountain tops, where most telescopes are deployed, are places of high wind exposure. The reference suggests that an additional correction factor, Λ, should be used as well, thereby changing the equation to

$$F = \frac{1}{2}\rho v^2 A \Lambda C$$

where Λ = dimensionless factor derived from the aspect ratio of the object (a typical telescope will have a value of 0.6). In general, C will be around 0.5 (for rounded objects) to 1.0 for blunt objects. Streamlined objects can have a very low value of C. A flat plate perpendicular to the wind direction has a value of 2. From these estimates, we might guess that a cylindrical telescope dome will have a value of C no larger than about 0.5.
The authors of the reference suggest the following wind-direction-dependent values for C; for the force perpendicular (or normal) to a dome opening, use

$$C(\beta) = \begin{cases} 0.1\sin 2\beta & \beta < \pi/4 \\ 0.1 & \pi/4 \le \beta \le \pi/2 \end{cases}$$

where β = angle between the wind direction and the normal to the entrance of the dome. (For the force parallel to the dome opening normal, use 1.)
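The force estimate and the dome drag coefficient are easy to evaluate numerically. The sketch below uses SI units throughout; the wind speed, area, and coefficients are arbitrary example values.

```python
from math import pi, sin

def wind_force_N(rho_kg_m3, v_m_s, area_m2, drag_coeff, aspect_factor=1.0):
    # F = 0.5 * rho * v^2 * A * Lambda * C, in newtons with SI inputs
    return 0.5 * rho_kg_m3 * v_m_s**2 * area_m2 * aspect_factor * drag_coeff

def dome_normal_drag_coeff(beta_rad):
    # Wind-direction-dependent C for the force normal to a dome opening
    return 0.1 * sin(2 * beta_rad) if beta_rad < pi / 4 else 0.1

# Hypothetical case: 20-m/s wind on 4 m^2 of projected area, telescope aspect factor 0.6
print(wind_force_N(1.2, 20.0, 4.0, drag_coeff=0.5, aspect_factor=0.6))  # ~290 N
print(dome_normal_drag_coeff(pi / 6))  # ~0.087
```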
Reference 1. H. Jakobsson, “Modeling and Analysis of the Large Earth-Based Solar Telescope Dynamics,” Optical Engineering, 37(37), pp. 2432–2448, September 1998.
LARGEST OPTICAL ELEMENT DRIVES THE MASS OF THE TELESCOPE The mass of a telescope assembly is directly proportional to the mass of the largest optical element.
Discussion A telescope’s mass depends on its design, materials, size, required strength, and the lightweighting techniques applied. An estimation of the mass of an unknown telescope assembly can quickly be scaled based on the known mass of a similar element. Telescope assembly masses usually track the mass of the heaviest optical elements approximately linearly, as the secondary and tertiary mirrors are usually of much smaller size and mass. Usually, this is valid for telescopes from a few centimeters to a meter or two in aperture, but it does not include exotic systems. If used as a comparison, optical element masses should be within a factor of 3 of each other, and telescopes must be of the same type and material (e.g., two on-axis, reflective Cassegrains made of aluminum). In addition, telescopes should have the same number of elements and have similar environmental, stability, stiffness, and slewing specifications to apply this rule. Finally, the off-axis stray-light rejection specifications should be comparable. The rule is useful (with the other rules on optics and telescope mass) for system tradeoffs when comparing the mass impact of changing the optics size and estimating whether a given telescope requires technology advancement to meet the mass goals. One should be sure to use the heaviest optic element. Usually, this is the largest, but it may not be so in refractive designs with thick lenses. Usually, the largest and heaviest element is the first objective or the primary mirror. However, in off-axis reflective systems and some wide field of view designs, the largest element is usually not the primary or objective. If the telescope is a Schmidt, the primary is larger than the clear aperture or the correcting plate.
Chapter 17
Target Phenomenology
Generally, the properties of targets and their signatures, such as are summarized in this chapter, fall into the domain of the military designer. However, increasingly, many of these rules apply to nonmilitary segments of the EO market such as security cameras, paramilitary organizations, search and rescue, homeland defense systems, environmental monitoring, general surveillance, remote diagnostics, remote sensing, and industrial security. This chapter provides a brief look into the short-cut characterizations that were largely developed to assess the signatures of various potential targets for typical EO systems. Regardless of their heritage, several of these rules are applicable in the generic business of assessing what a sensor might be able to detect, recognize, or identify. Although most of these rules were developed for the infrared spectrum, they illustrate important principles that may be applied (albeit with caution) to other parts of the spectrum, including UV, visible, and millimeter wave. Often, targets of interest to the EO sensor designer consist of the metal body and frame containing some kind of engine (such as your car). Such a target can be detected by sensing the metal hardbody (e.g., the roof of your car), the hot engine compartment (the heat dissipated from under the hood), or the spectral engine emission (e.g., the hot CO2 coming out of your tailpipe). The emission of hot gases and particles is generally called a plume. Although all man-made engines produce significant plumes, those of jet engines and rockets draw the most attention. Rocket and jet plumes have long been of interest to EO designers, as these provide bright signature-to-background ratios. Much early work was done in the 1950s and 1960s in remote plume diagnostics by electro-optical instruments for jet and rocket engine development. At least one major contemporary sensor company evolved from a support group charged to develop sensors to characterize its company’s rocket engines. Much effort was expended in the 1960s on large and small rocket plume signatures, thanks to the space and arms races. The signatures of tactical air-to-air and surface-to-air missiles were investigated in the hope of providing effective countermeasures. Plume investigations of large rockets continued in support of early warning efforts. Maturation of this study and the perceived need were formalized during the United States’ Strategic Defense Initiative (SDI) era in which significant effort was expended in refining the plume and hard-body signatures of large missiles and warheads. This tradition is continuing as a result of the increased emphasis on homeland defense (by many nations worldwide) and the unfortunate
recent proliferation of missiles of all types, all able to carry a chemical or biological weapon of mass destruction. This requires defensive weapons to protect against such weapons, which in turn requires accurate target characterization. In the 1990s, significant effort was devoted to determine the signatures of smaller tactical missiles for platform missile warning systems and targeting systems. Ironically, one of the most challenging features of characterization of a threat signature is determining the reflectivity and transparency of the plume. The most difficult of these is the liquid-fueled rocket, as the emissions may consist primarily of water and other hot gases. These appear nearly transparent in many bands, especially in the visible wavelengths. The space shuttle, for example, produces a large opaque plume from its solid rocket motors, but the plumes from the hydrogen-oxygen engines are nearly transparent in the visible. In fact, using visible wavelengths, one can see directly into the engine housings (during launch) on the three engines built into the orbiter. Although much less intense than the infrared signature, the visible and UV signatures of plumes are of interest because of the availability of more mature hardware technologies and smaller diffraction, and because the plume is more confined in spatial extent and is located in a predictable place with respect to the rocket’s hardbody. Available computing power has become sufficient to spur various governments and companies toward developing computer codes that allow system engineers to estimate the magnitude of the signatures in a given spectral bandpass. Some of the most frequently used ones include the Joint Army, Navy, NASA, Air Force (JANNAF) plume radiance code (PLURAD), the Standard Plume Flow Field (SPF-2), and Standardized IR Radiation Model (SIRRIM). These codes were created to estimate signatures from tactical missiles based on complex chemistry and flow fields. The Composite High Altitude Radiance Model (CHARM) is a complex model intended to estimate large rocket plume signatures and shapes by assessing the chemistry of the rocket fuels used and their interaction with the rarefied atmosphere. The signature code craze is not limited to the plumes, as complicated codes have been developed to predict laser cross section, reflectivity, and hardbody signatures throughout the spectrum [e.g., the Spectral Infrared Thermal Signatures (SPIRITS) and the Optical Signature Code (OSC)]. As expected, there has been considerable interest in the reflectivity of target surfaces, because this determines the amount of laser tracker radiation that can be expected to come back to a receiver, determines the visible-band signature, and affects the IR signature. The emissivity can be estimated by subtracting the reflectivity from 1. For the reader interested in more details, the Infrared and Electro-Optical Handbook is a good starting point. It contains information on almost every topic related to targets and backgrounds. Other sources that should not be overlooked are the older versions (similar compilations) called The Infrared Handbook and the older Handbook of Military Infrared Technology. In fact, the older books cover a number of topics more completely than the newer eight-volume set. Look for the older versions in used bookstores and in the offices of older employees. Red, blue, and green cover versions were produced. Be wary, though, because the earlier versions had some errors. 
If possible, try to find a colleague who has a copy of the extensive errata sheet that has been developed over the years. Many specific signature handbooks are available from governments and corporations that are active in the field, and these can provide valuable insight. For up-to-date detailed measurements, explanations of phenomena, and code development, do not overlook the frequent publications of the Military Sensing Symposium, IEEE and SPIE conferences and proceedings, as well as the IRIA web site (http://www.iriacenter.org).
BIDIRECTIONAL REFLECTANCE DISTRIBUTION FUNCTION Nicodemus1 suggested the bidirectional reflectance distribution function (BRDF) as

$$BRDF = \frac{\text{differential radiance}}{\text{differential irradiance}} \approx \frac{dP_s/d\Omega_s}{P_i\cos\theta_s} \approx \frac{P_s/\Omega_s}{P_i\cos\theta_s}$$

where
Ps = power of the scattered light, generally at a given angle (Generally, the angle is fixed for each plot of a BRDF.)
Ωs = solid angle into which the light is scattered (or the instrument’s receiver solid angle)
Pi = power of the incident light
θs = angle (from normal to the sample surface) that the light is scattered (or the angle at which the receiver is positioned) (Generally the BRDF is plotted as a function of this angle and has units of 1/solid angle.)
Discussion In the 1950s, it became apparent that the total integrated reflection from a surface did not describe the correct property for critical applications such as baffle design, high-quality optics, illuminated targets, or targets sensed via reflected light (see Fig. 17.1). Nicodemus1 suggested the BRDF, which has been widely used and modified for more FIGURE 17.1 Illustration of surface scatter. exacting tasks. Basically, it defines the reflectance at any angle resulting from an input (From http://ciks.cbt.nist.gov/appearance/.) source at some other angle (see Fig. 17.2). BRDF provides more information than total integrated scatter or simple reflectance or emissivity, as it defines the reflectivity of the surface at all possible combinations of incidence and reflected angles. It can be integrated to achieve integrated scatter and reflectance and, after subtracting from unity, the emissivity according to Kirchoff’s law for opaque surfaces. BRDF is becoming increasingly important in target phenomenology and frequently is used for satellite remote sensing applications (e.g., the BRDF of a mangrove swamp as opposed to urban development or open sea). As active sensing increases in popularity, the BRDF will see more importance as a target parameter. A perfectly specular source (e.g., a perfectly flat, perfectly reflective mirror) would have a function that is zero at all points, except its Snell reflection angle, at which point it would be a delta function. Figure 17.3 shows the BRDF of a (low-quality) gold mirror at 10.3 µm. Note that the BRDF varies by a factor of 1 million from 15 to 60° when the incident beam is at 60°. A perfect Lambertian source has a BRDF proportional to ρ/π per Ω, where ρ is the reflectivity of the surface, and Ω is the solid angle of the receiver. Figure 17.4 shows a good Lambertian black surface at 10.3 µm. It is exposed to incident beams at –10° and –60°, and the BRDF varies less than about two orders of magnitude across the plotted angles. BRDFs are frequently used to select low-reflectivity coatings with given specular/Lambertian characteristics for cold shields and baffles. A material’s BRDF is almost always
FIGURE 17.2 Geometry for BRDF measurements. (From http://ciks.cbt.nist.gov/appearance/.)
FIGURE 17.3 Example of BRDF of a specular surface. This measurement used 10.3-µm infrared radiation, and the incident beam is plotted for 10° (diamonds) and 60° (triangles). (Courtesy of The Research Triangle Institute.)
measured at room temperature, yet it is applied to cryogenic surfaces. One of the authors (Miller) has taken numerous measurements over decades and often finds that the cryogenic reflectivity of the coating is higher than calculated from the room-temperature measurements. It is surmised that one cause of this is that the reflectivity (emissivity/scattered) of black surfaces is almost always measured and quoted at room temperatures—the lab environment is the only convenient and affordable venue to take such measurements. Yet many of the Lambertian and absorptive properties of a surface depend on the surface morphology, which can be a function of temperature. Generally, the surface should be rough and cone-like at a scale greater than the wavelength to trap the photons. When cooled, most surfaces contract. A given structure that is very “black” at room temperature because
FIGURE 17.4 Example of BRDF of a Lambertian surface, RTI/OS black paint. The BRDF with a 10° incidence is the lower line (diamonds), and the measured BRDF with a 60° incidence is the upper line (triangles). (Courtesy of The Research Triangle Institute.)
of its surface morphology will contract when cooled to cryogenic temperatures. This surface morphology change can result in the surface being more specular and/or reflective at the wavelength of interest. The user of BRDFs should be cautious that some measurements include cosine corrections while others do not. Additionally, the angles are often defined differently.
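The defining ratio converts a scatterometer reading into a BRDF value in one line, and the Lambertian limit gives a handy sanity check. The measurement numbers below are invented for illustration.

```python
from math import cos, pi, radians

def brdf(scattered_power_W, receiver_solid_angle_sr, incident_power_W, scatter_angle_deg):
    # BRDF ~ (Ps / Omega_s) / (Pi * cos(theta_s)), in 1/steradian
    return (scattered_power_W / receiver_solid_angle_sr) / (
        incident_power_W * cos(radians(scatter_angle_deg)))

def lambertian_brdf(reflectivity):
    # An ideal Lambertian surface has a flat BRDF of rho / pi
    return reflectivity / pi

# Hypothetical reading: 2 uW collected in a 0.01-sr receiver at 30 deg off normal,
# from a 1-mW incident beam
print(brdf(2e-6, 0.01, 1e-3, 30.0))  # ~0.23 per steradian
print(lambertian_brdf(0.05))         # ~0.016 per steradian for a 5-percent-reflective black
```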
References 1. F. Nicodemus, et al., Geometric Considerations and Nomenclature for Reflectance, NBS (now NIST) Monograph 160, 1977. 2. J. Stover, Optical Scattering, SPIE Press, Bellingham, WA, pp. 19–22, 1995. 3. http:www.iriacenter.org, 2003. 4. J. Conant and M. LeCompte, “Signature Prediction and Modelling,” in Vol. 4, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 318–321, 1993. 5. W. Wolfe, “Radiation Theory,” Chap. 1, The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 1-30 to 1-31, 1978. 6. W. Wolfe, “Optical Materials,” Chap. 7, The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 7-78 to 7-79, 1978.
CAUSES OF WHITE PIGMENT’S COLOR An appearance of white can be achieved with small pieces of transparent or translucent material as long as they are large enough (as compared with the wavelength of light) to allow for multiple refractions and reflections. This is the case with milk, sugar, snow, beaten egg whites, clouds, and so on.
Discussion It is shown in a number of texts that scatter from particles depends on the relation of the sizes of the particles and the wavelength of light. Rayleigh showed that, for particles about one-tenth of the wavelength of the light, scattering goes as 1/λ⁴ and, for larger particles, the
scattering goes as 1/λ². This is the root of the famous “blue sky” question that Rayleigh answered by showing that the scattering occurs in molecules that are small with respect to the wavelength of light. Hence, when looking straight up at blue sky, shorter wavelengths are scattered more effectively, so we see scattered sunlight predominantly in the blue part of the spectrum. Particles approaching or exceeding the wavelength of light scatter according to Mie theory, which is not easily captured in a simple equation. One of the authors (Friedman) has also had the experience of using powdered quartz as a means of inducing turbidity into very clear and clean waters. The quartz was added to provide a measurable and repeatable turbidity. Although this work was never published, all of the ocean optics researchers at that time employed this method to create a cheap and widely available source of calibration. Titanium oxide is a visually transparent material occurring in the form of small particles that provide a white pigment for some paints. One of the authors (Miller) knows of an instance of an optical engineer taking some titanium oxide particles from the U.S.A. into Canada (for a test). The Canadian customs demanded to examine the container, as it apparently looked like an illegal drug. While examining it, they spilled some on their dark uniforms and the table. When they tried to brush it off, it just stuck to clothing and tablecloth, causing permanent white streaks. It made a permanent mess of their uniforms and any other object laid on the inspection table for some time thereafter. Since the appearance of “whiteness” depends on matching the size of the scattering material to the wavelength of light, paints that include this type of material do not work over wide ranges of wavelength. Infrared reflectivity may be considerably different from that in the visible regime. In addition, one must be certain that the materials added into the clear matrix have a different refractive index; otherwise, the material will look uniformly clear. Small glass beads, organized into sheets and reflectively coated on the rear half, can increase the apparent reflection coefficient in the direction of viewing by factors from 100 to 1500 over that of white paint. They are actually acting as inefficient retroreflectors and are widely sold for bicycle and automobile applications. This rule offers an easy approach for creating high-reflectivity surfaces without resorting to exotic approaches such as retroreflectors.
Reference 1. E. Hecht, Optics, Addison-Wesley, Reading, MA, p. 114, 1990.
CHLOROPHYLL ABSORPTANCE Healthy plants tend to have strong absorptance near 0.68, 1.4, and 2.0 µm. Camouflage and distressed plants have less absorptance. In extreme cases, the absorptance can approach zero at these wavelengths.
Discussion This is based on the spectra of water and chlorophyll. Plants that rely on photosynthesis must have water and chlorophyll to be healthy. The strong visible absorption of chlorophyll ends at about 0.7 µm, and a high transmission band extends to about 1.3 µm. By making observations in these short-wave infrared (SWIR) bandpasses, the health of plants can be determined. Many diseases, and the onset of autumnal changes, can first be detected by observing the content of chlorophyll and water. The high spectral absorption coefficient of water within healthy tissue produces deep reflectance and transmittance minima near 1.4 and 2.0 µm. Since healthy tissue containing active chlorophyll must also contain some water to permit photosynthesis, the concurrent appearance of a chlorophyll absorption band near 0.68 µm and the water absorption bands near 1.4 and 2.0 µm is generally expected. Frequently the change of leaf spectra resulting from plant stress is first made manifest by disruption of the photosynthetic process. The disruption is caused by the destruction of chlorophyll before water has been completely lost by the leaf. Consequently, the water absorption bands may still be present in the leaf spectra after the leaf is dead.1
The reader should also be aware that healthy plants absorb deeply in the blue and produce fluorescence in the near IR that contributes to the signature that is detected. Hence, if you are attempting to measure the reflectance of leaves in situ, be sure to include the influence of fluorescence that will be detected by your spectrometer.
References 1. W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, and the Office of Naval Research, Washington, DC, pp. 3-129 to 3-142, 1978. 2. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, Norwood, MA, p. 31, 1993. 3. W. Porter and H. Enmark, A System Overview of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), http://www.iriacenter.org, 2003. 4. See the many vegetation spectra found at http://www.iriacenter.org, 2003.
EMISSIVITY APPROXIMATIONS If you don’t know the emissivity of an object, assume 0.2 for metals and 0.8 for anything else. Assume 0.9 for anything that is claimed to be “black.”
Discussion It is important to know an object’s emissivity when doing calculations related to target signatures. The emissivity can cause an order of magnitude change in radiant emittance. Generally, metals are of low emissivity, with optically smooth polished surfaces having an emissivity of 0.01 or less. For most metal objects, the emissivity is closer to 0.2 or 0.3. Emissivity tends to be 0.8 (or at least between 0.6 and 0.9) for most nonmetallic objects. In addition, if the sensor is looking into a cavity, such as a jet or rocket engine or a person’s mouth, the emissivity will tend to approach unity no matter what material is being viewed. A rule in Chap. 14, “Radiometry,” describes the role of geometry in defining the effective emissivity of a closed space. Emissivities can vary by wavelength, and this can be a very useful property. Several “thermal” paints are “white” and of relatively low emissivity in the visible but of high emissivity in the infrared. When applied to a sunlit object (e.g., a satellite), these will reduce its temperature, as the solar wavelengths will be efficiently reflected, and the thermal emission in the IR will be high as well. The emissivity of an object is also a function of surface morphology and coating. At the approximate size of the wavelength, a rough surface with cavities will have higher emissivity than the same material with a smooth surface. Figure 17.5 is a scanning electron microscope (SEM) image of RTI/OS black, which is a modified paint. The surface has extreme roughness at a scale of a few microns, making it very black from the visible through LWIR. See also Fig. 17.6. The above is useful only for a quick estimate of an object’s emissivity when little is known about it. Appendix A has a table of emissivities for common materials. All of these tables are approximate infrared emissivities and should be used with caution, as emissivity varies with temperature, wavelength, and surface roughness. There are several emissivity libraries on the Internet, including excellent ones found at the following sites: ■ http://www.iriacenter.com/backgrnds.nsf/Emissivities?OpenPag, 2003. ■ http://www.x20.org/library/thermal/emissivity.htm, 2003. ■ http://www.electro-optical.com/bb_rad/emissivity/matlemisivty.htm, 2003.
FIGURE 17.5 A rough surface morphology on the scale of the wavelength can create a black surface as shown in this SEM of RTI/OS black paint. (Courtesy of The Research Triangle Institute.)
FIGURE 17.6 A discrete Fourier transform of the SEM of Figure 17.5. Strength of the darkness denotes an increase in power. This shows that the majority of the structure is concentrated at a scale of 10 µm or larger, which indicates that the surface should be black at those wavelengths (which it is). (Courtesy of The Research Triangle Institute.)
THE HAGEN–RUBENS RELATIONSHIP FOR THE REFLECTIVITY OF METALS The reflectivity of metals can be estimated by

R ≈ 100 – 3.7(ρ/λ)^0.5

where R = reflectivity (in percent), ρ = resistivity of the metal in microhm meters (e.g., about 0.02 µΩ·m for copper), and λ = wavelength (in µm)
Discussion The amount of light reflection of a given (solid) object depends on the exterior surface material, roughness, and surface coating. The reflectivity of a metal is related to the complex index of refraction of the surface coating. This can be estimated for a given wavelength by the metal’s absolute magnetic permeability and electrical conductivity. With some substitution and reasonable assumptions, the above rule can also be expressed as

R = 100 – 100(2/c)(4πν/µσ)^0.5

where R = reflectivity (in percent notation), ν = frequency of the light radiation, σ = electrical conductivity, c = speed of light, and µ = absolute magnetic permeability. Schwartz et al.2 give another variant of the relationship as

R(ω) = 1 – (2ω/πσdc)^(1/2)

where σdc = DC conductivity and ω = frequency of the light. This rule applies to wavelengths from the visible through IR. This rule can be used for a first-cut quick estimate of reflectivity. It also provides an estimate of the way the reflectivity of materials changes with wavelength. This can be useful in estimating the target signature from reflected sunlight or laser illumination if the reflectivity is known at one wavelength.
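As a minimal Python sketch of the first form of the rule, assuming ρ is given in µΩ·m and λ in µm as defined above (the function name and the copper value are illustrative only):

import math

def hagen_rubens_reflectivity_percent(resistivity_uohm_m, wavelength_um):
    # Rule of thumb: R ~ 100 - 3.7*(rho/lambda)^0.5
    return 100.0 - 3.7 * math.sqrt(resistivity_uohm_m / wavelength_um)

# Copper (rho ~ 0.02 uOhm*m) viewed at a 10-um (LWIR) wavelength
print(hagen_rubens_reflectivity_percent(0.02, 10.0))  # roughly 99.8 percent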
References 1. D. Fisher, Rules of Thumb for Scientists and Engineers, Gulf Publishing, Houston, TX, 1988. 2. A. Schwartz et al., “On-Chain Electrodynamics of Metallic (TMTSF)2X Salts: Observation of Tomonaga-Luttinger Liquid Response,” Physical Review B, 58(3), pp. 1261–1271, July 15, 1998. 3. H. Lee et al., “Optical Properties of a Nd0.7Sr0.3MnO3 Single Crystal,” Physical Review B, 60(8), pp. 5251–5157, August 15, 1999-II.
HUMAN BODY SIGNATURE 1. The surface of the human body emits about 500 W/m2 and has a radiative transfer to an ambient environment (of 23°C) of about 130 W. 2. In the 3- to 5-µm bandpass, it radiates into π sr approximately 7.2 W/m2. 3. In the 8- to 12-µm bandpass, it radiates into π sr approximately 131 W/m2. 4. The peak wavelength of a human’s radiant emittance is about 9.5 µm.
Discussion The human body emits heat by a variety of means, including infrared emission, evaporation of surface moisture (sweat), and breathing. Generally, the body has a very high emissivity in the thermal infrared (0.97 or more) and a surface area of around 2 m2. Skin is quite antireflective (black and very Lambertian) as discussed in an associated rule. The surface temperature of a human is less than the often-quoted internal body temperature of 37°C. The surface temperature is a complex function of metabolism, state of activity, and health, but one can assume it to be between 30 and 38°C for a person at rest, with 32 to 34°C as a nominal average. The large difference between the above heat transfer and total emittance is a result of radiative input from the ambient environment, which is generally relatively close to that of the body (e.g., a 23°C room is only 3.5 percent colder than the surface of the body). Many web sites and other references confirm the radiative transfer for a relatively hairless, naked human in a 23°C environment. They show that the human body tends to transfer about as much heat as a light bulb (net), although we radiate as much as a hair dryer (total). An adult male’s basal metabolism generates about 90 W at rest, and more when active. When more than 90 W is transferred from the body, we feel cold; when less, we feel hot. This is why it feels colder on the ski lifts than on the slopes. When skiing, the muscles and brain are working hard, and more heat is generated as opposed to when sitting on a lift. The brain consumes about 20 W, which heats it up, which is why we lose so much heat through our heads and need hats in the winter. If the exterior temperature is the same as that of the body, there is no net heat transfer, yet we still radiate. If the exterior temperature is colder, then the body loses net heat through radiation according to
εAσ(TB⁴ – TA⁴)

where ε = emissivity, A = area, σ = Stefan–Boltzmann constant, TB = body’s surface temperature (in kelvins), and TA = ambient temperature (also in kelvins)
Body heat is regulated by the hypothalamus. When we radiate much more heat than we produce, the hypothalamus triggers mechanisms to produce more heat. These include shivering to activate muscles; vasoconstriction to decrease the flow of blood (and thus heat) to the skin; and secretion of norepinephrine, epinephrine, and thyroxine to increase heat production. If the exterior temperature is hotter, mechanisms in addition to radiation help cool the body. When the body’s surface temperature reaches about 37°C, perspiration results. The body’s heat production remains about constant, so the only way we can transfer heat is through evaporation of perspiration. In dry air, this evaporation occurs much more quickly than in humid air, so more heat is removed per unit of time. This explains the phenomenon of “dry heat”: 38°C in Tucson is more pleasant than 26°C in Orlando. Note that these results assume relatively hairless, naked humans. Clothing and thick hair (such as on many people’s heads, the author’s excluded) reduce the radiation. Also, many animals have different body and skin temperatures, some have more hair, and some don’t sweat.
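A short Python sanity check of the roughly 130-W radiative transfer quoted above, using the Stefan–Boltzmann expression and the nominal skin temperature, area, and emissivity from this discussion (the exact values chosen here are illustrative):

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.97    # thermal-IR emissivity of skin
area_m2 = 2.0        # approximate body surface area
t_body_k = 307.0     # ~34 deg C skin temperature
t_ambient_k = 296.0  # ~23 deg C surroundings

net_w = emissivity * area_m2 * SIGMA * (t_body_k**4 - t_ambient_k**4)
print(f"Net radiative transfer: {net_w:.0f} W")  # on the order of 130 W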
References 1. 2. 3. 4. 5. 6.
http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/bodrad.html, 2003. http://www.space.ualberta.ca/~igor/phys_224/outline_2.pdf, 2003. http://web.media.mit.edu/~testarne/TR328/node2.html, 2003. http://www.shef.ac.uk/~phys/teaching/phy001/unit7.html, 2003. http://www.tiscali.co.uk/reference/encyclopaedia/hutchinson/m0006045.html, 2003. R. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, p. 103, 1969.
IR SKIN CHARACTERISTICS Beyond about 1.4 µm, human skin has high emissivity and is very Lambertian.
Discussion Regardless of the visible characteristics of skin, extreme absorption of light occurs at wavelengths beyond about 1.3 µm, implying a high emissivity and, hence, a bright thermal emission. Moreover, as a result of the surface morphology (many little pits, hairs, and imperfections), skin is extremely Lambertian. In fact, living skin tends to be “blacker” than all but the very best coatings and surface treatments in the MWIR and LWIR. The data presented here (from Miller) do indicate a slight increase in backscatter reflectivity when observing at angles of low incidence, but this is still tiny (being less than a factor of 4) as compared to angles near normal. To illustrate this result, Fig. 17.7 shows the bidirectional reflectance distribution function (BRDF, see the associated rule for a definition) for live human skin taken in the MWIR (4.46 µm) and the LWIR (10.3 µm). The figure shows the forward scatter BRDF in MWIR and LWIR at an incident beam of minus 20°. That is, the beam is incident at –20° from the normal, so the specular in-plane forward scatter peak would appear at 20° on this plot. This reflection peak (following Snell’s law) is present in the LWIR data but is extremely small. For example, bare aluminum has a specular peak about five orders of magnitude above the low points, and a mirror has even larger peaks. The MWIR data are extraordinarily Lambertian. The integrated BRDF data indicate an LWIR emissivity of about 0.98 and a MWIR emissivity of over 0.99. Additional data from multiple human subjects (of both sexes) and relatively flat, hairless portions of the body support these assertions. All of these data have good signal-to-noise ratios (e.g., 8 to 100) and were repeatable to within about a factor of 2.
FIGURE 17.7 Forward scatter BRDF from human skin, data averaged from two adult males and two females.
The inflections and slight curves should not be taken too seriously, as it is impossible to find perfectly flat skin, so there is some error induced by the angularity of skin and potential minor movements by the person during the measurement. The point is that skin has very high emissivity and is very Lambertian in the infrared bands; thus it is very low in reflectivity. Do you need a quick blackbody? Then tape a temperature sensor on someone’s hand and view the hand. Recent work in human evolution indicates that about 1.6 million years ago, Homo ergaster exhibited an increased number of sweat glands that kept his brain from overheating.1 This could also explain why humans have such high emissivity in the thermal IR. Having high emissivity in the infrared improves the natural ability of the body to radiate heat, which we do quite well. When really hot, we also sweat to cool by means of evaporating water.
Reference 1. N. Jablonski and G. Chaplin, “Skin Deep,” Scientific American, pp. 74–81, October 2002.
JET PLUME PHENOMENOLOGY RULES 1. The radiance (W/m2/sr) of a plume from a jet aircraft engine at 35,000 ft is about one-half of what it is at sea level. 2. It is better to observe an airplane’s plume in the region of the CO2 emission band at 4.3 µm than in the water band at 2.7 µm. 3. The extent of a plume from an airplane is roughly equal to the length of the airplane. 4. A turbojet engine can be considered a graybody with an emissivity of 0.9, a temperature equal to the exit gas temperature, and an area equal to that of the exhaust nozzle. 5. For a subsonic aircraft, the exhaust temperature after expansion is approximately equal to 0.85 multiplied by the exhaust gas temperature (in the tailpipe) in kelvins. 6. For constant engine settings, plumes are larger at higher altitudes, where the static atmospheric pressure is lower.
Discussion A jet engine burns fuel with the atmosphere’s oxygen and produces thrust with strong emissions in the water and CO2 bands. Pressure and temperature broadening of the plume emission bands will cause emissions just short and long of the atmospheric absorption band. Generally, the radiance from a turbofan is less than that from a turbojet. Also, in the LWIR, the heat of the jet engine cavity usually produces a much larger signature than the plume. These rules are based on empirical observations for a number of types of aircraft engines and assume ■ The aircraft is in normal operation; that is, the engine is not set for “afterburning.” ■ The aircraft uses a classical turbojet engine design. Additionally, these assertions are very bandpass dependent.
References 1. R. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, pp. 86–90, 1969. 2. R. Barger and N. Melson, “Comparison of Jet Plume Shape Predictions and Plume Influence on Sonic Boom Signature,” NASA Technical Paper 3172, March 1992.
LAMBERTIAN VS. SPECULAR No surface is perfectly Lambertian or specular. Generally, a surface is considered Lambertian if its bidirectional reflectance distribution function (BRDF) peak reflection at angles near normal incidence is less than one order of magnitude above the average. Conversely, a surface is generally considered specular if its peak is four orders (or more) of magnitude above its average.
Discussion A perfectly Lambertian surface emits equally over 2π steradians. A perfectly specular surface emits in an infinitesimally small angle determined by Snell’s law. Nothing is perfect in nature, so surfaces are a combination of the two. Note that these definitions do not have anything to do with total reflectance. Surfaces exist that are very low in reflectance yet still very specular, and vice versa. For example, gloss black paint has an overall reflectance that is quite low, yet it is specular. An automobile with a high-quality black finish will provide a very nice reflective image. Paper is the opposite; it is designed to be of high reflectivity but is also intended to be Lambertian (diffuse). Most readers find that the glossy paper used in some books and magazines is annoying, because one might be distracted by the reflection of the light source. The characteristics that you desire in a target, background, or hardware surface treatment must be carefully analyzed and should be determined by statistical ray traces. Figure 17.8 illustrates the difference between a notional specular (or mirror-like) surface and a notional Lambertian surface. If an incident beam encounters the surfaces at a 45° angle from normal, the Lambertian surface will have about the same level of reflectance at all observing angles. The specular surface will generally have a lower reflectance at all angles except near the Snell reflection angle, where it will be many orders of magnitude greater. This effect had considerable relevance in World War II. The British air forces discovered that their aircraft were less likely to be seen by ground spotters using searchlights when they painted the bottom of their aircraft with highly specular black absorptive paint as opposed to the more intuitive choice, diffuse absorptive paint.
FIGURE 17.8 A specular surface has a strong reflection at the Snell angle from an incident beam, whereas a Lambertian one does not. For the Lambertian, the same amount of energy may be reflected, but at many more angles.
In retrospect, the reason is clear. With specular paint, only spotters that happened to be at exactly the Snell angle would catch a glimpse of the light reflected from the bottom of the aircraft. In the diffuse, or Lambertian, case, the light from any illuminating source was scattered into a hemisphere but dimly. The brighter the illuminating source, the more likely that the diffuse reflection will be detected.
LASER CROSS SECTION The laser cross section of a target is about 10 percent of the target area projected toward the laser.
Discussion The laser cross section of objects with complex surface shapes is an important field of study for many types of applications. The classic concern is the detectability of military vehicles, as some targeting technologies use lasers to determine range and generate signals for pointing control. The effective laser cross section of a target is generally much less than the projected area. For example, consider a spherical target (with radius r) with a diffuse (Lambertian) surface. The projected area of the sphere is πr² but the radiant intensity is the incident irradiance times the reflectivity and divided by π. Because the reflectivity of most targets is less than 80 percent and greater than 20 percent, the effective cross section is between 0.2/π and 0.8/π times the physical cross section. The first value is 0.06, and the latter value is 0.25. On average, the effective cross section from a radiometric perspective is on the order of 10 percent, and more for high-cross-section targets and less for stealth targets. Moreover, in military applications, it is quite clear that the enemy is going to use materials to suppress the reflectivity of its vehicles.
The U.S. Air Force maintains a dedicated test range to measure the properties of targets, paints, and other factors that determine the size of the cross section that will be encountered in the battlefield. The above details assume that the surface is a Lambertian reflector of average reflectivity. Try a rigorous calculation of laser cross section (or a model designed for such purposes) before designing a system. This rule is handy for quick thought problems and what-ifs when other information is lacking.
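A minimal Python sketch of the estimate described above, treating the target as a Lambertian reflector (the projected area and reflectivity values are merely illustrative):

import math

def effective_laser_cross_section(projected_area_m2, reflectivity):
    # Lambertian target: radiant intensity = irradiance * reflectivity / pi,
    # so the effective cross section is (reflectivity / pi) * projected area.
    return (reflectivity / math.pi) * projected_area_m2

area = 4.0  # m^2 projected toward the laser
for rho in (0.2, 0.3, 0.8):
    sigma = effective_laser_cross_section(area, rho)
    print(f"reflectivity {rho:.1f}: {sigma:.2f} m^2 ({100.0 * sigma / area:.0f}% of projected area)")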
MORE PLUME RULES 1. A plume’s brightness in just about any band varies approximately linearly with the thrust. 2. The diameter of the plume is approximately

D ≈ F/P

where D = diameter of the plume, F = thrust in units that agree with P, and P = ambient pressure in units that agree with F
Discussion The plume from a rocket or jet usually (depending on the bandpass and aspect viewing angle) contains most of the usable signature that will be seen by a sensor. Within a band, for a given resolution at the same altitude, the signature varies (can be scaled) as the thrust is increased as described in another rule in this chapter. The signature varies in a complex fashion but, when all is said and done, it is usually pretty close to linear. The diameter of the plume can be estimated based on momentum balancing the plume pressure and the effective atmospheric pressure. Most of the signature will be contained within this diameter. This is based on approximation of flow-field calculations and inspired observations. This was devised for rockets. It may be applied to jets, small missiles, and other vehicles with caution. It assumes that the exhaust velocity greatly exceeds vehicle velocity, which is not the case when the rocket has been thrusting for some time and is moving at great speed.
PLUME THRUST SCALING One can scale the signature of a jet or missile’s plume to that of another by the size of the thrust, or

I1 = I2(N1/N2)^x
where I1 = in-band radiant intensity of plume 1, I2 = in-band radiant intensity of plume 2, x = a constant depending on spectral bandpass [The constant, x, is usually between 0.7 and 2. Assume 1.0 (linear) if you don’t have specific data for the engine type and fuel.], and N1, N2 = thrust in newtons for the engines producing the plumes
Discussion Signatures from small tactical missiles are typically from 100 to 10,000 W/sr/µm, depending on bandpass and viewing aspect angle. ICBM and payload-orbiting rockets1 typically range from 10⁵ to 10⁷ W/sr/µm. The in-band radiant intensity of a missile is proportional to the rate of fuel combustion, and that is proportional to the thrust of the motor. Therefore, the signature (within a defined band) tends to scale nearly linearly with thrust. Do not use this rule to scale across different spectral bands, as missile plumes are strong spectrally selective emitters. A slight change in bandpass prevents accurate scaling. Also, scale only similar fuels, motors, and motor geometries, and only for the same altitudes. Additionally, Wilmot1 gives a scaling law for the viewing angle variations in observed signatures:

Iθ = I90 sin(θ + φ)

where Iθ = radiant intensity of the missile when observed at angle θ, I90 = intensity at the beam viewing angle (sideways, 90° from its velocity vector), θ = angle between the velocity vector (frequently the same as axis of the plume) and the observer, and φ = offset angle, a small correction whose value depends on the geometry of the missile and plume [This may compensate for the difference between the velocity vector and the plume axis (if not aligned). This is an apparent effect depending on the viewing geometry.]
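A small Python sketch of the two scaling relationships above; the exponent, offset angle, and thrust values below are placeholders, and engine-specific data should be used when available:

import math

def scale_plume_intensity(i_ref_w_sr, thrust_new_n, thrust_ref_n, x=1.0):
    # In-band radiant intensity scales as the thrust ratio raised to x (0.7 to 2).
    return i_ref_w_sr * (thrust_new_n / thrust_ref_n) ** x

def aspect_intensity(i_90_w_sr, theta_deg, phi_deg=0.0):
    # Wilmot's viewing-angle variation: I(theta) = I90 * sin(theta + phi)
    return i_90_w_sr * math.sin(math.radians(theta_deg + phi_deg))

i_new = scale_plume_intensity(1000.0, thrust_new_n=40e3, thrust_ref_n=20e3, x=1.0)
print(i_new)                          # ~2000 W/sr for doubled thrust, linear scaling
print(aspect_intensity(i_new, 45.0))  # intensity seen 45 deg from the velocity vector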
References 1. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 19–21, 1993. 2. R. Peters and J. Nichols, “Rocket Plume Image Sequence Enhancement Using 3D Operators,” IEEE Transactions on Aerospace and Electronic Systems, 33(2), April 1997. 3. E. Beiting and R. Klingberg, “K2 Titan IV Stratospheric Plume Dispersion,” Aerospace Report TR-97 (1306)-1, SMC-TR-97-01, January 10, 1997.
ROCKET PLUME RULES 1. The plume increases in diameter with altitude, and the intrinsic shock structures expand greatly with altitude. For large rockets, the plume can eventually envelop the entire rocket at the highest altitudes, just before burnout. 2. A minimum in infrared intensity is observed not far from the time that the missile velocity and exhaust velocity are the same in magnitude. This minimum is generally observed at missile altitudes from 70 to 90 km, in many cases coincidentally close to the time of staging. 3. A solid rocket plume has a temperature of about 2000°C but may have a low emissivity, depending on its density. An example can be found by looking at the plumes of the Space Shuttle’s solid rocket boosters. The plumes contain aluminum oxide and are very white, indicating a low emissivity in the visible bandpass.
Discussion These rules result from the diminishing effects of the atmosphere as the rocket ascends combined with empirical observations of current rocketry. Although data suggest these rules, they should be cautiously applied. A number of phenomena cause these rules to be approximations only. For example, the “high-altitude trough” causes the signature of a rocket to get brighter with altitude but then diminish for a range of higher altitudes. This is the result of reduced afterburning of fuels that are not consumed in the engine; such afterburning tends to occur at lower altitudes. The reduced afterburning results from reduced oxygen content in the atmosphere. As the rocket continues to accelerate, brightness returns to the plume, because the unburned fuel, although in a low-oxygen environment, then encounters the available oxygen with enough speed to stimulate burning. These rules are useful in predicting the signatures that will be available for tracking during rocket flight. They allow estimation of the performance of sensors by providing the necessary target spectral characteristics.
References 1. I. Spiro and M. Schlessinger, Infrared Technology Fundamentals, Marcel Dekker, New York, pp. 60–62, 1989. 2. http://code8200.nrl.navy.mil/lace.html, 2002. 3. R. Peters and J. Nichols, “Rocket Plume Image Sequence Enhancement Using 3D Operators,” IEEE Transactions on Aerospace and Electronic Systems, 33(2), April 1997. 4. E. Beiting and R. Klingberg, “K2 Titan IV Stratospheric Plume Dispersion,” Aerospace Report TR-97 (1306-1), SMC-TR-97-01, January 10, 1997. 5. E. Beiting, “Stratospheric Plume Dispersion: Measurements from STS and Titan Solid Rocket Motor Exhaust,” Aerospace Report TR-99(1306-1), SMC-TR-99-24, April 20, 1999.
SOLAR REFLECTION ALWAYS ADDS TO SIGNATURE The Sun is bright. When present, solar reflection always adds signature to a target, regardless of the bandpass. Specific gains depend on the conditions, sensor, and bandpass.
Discussion Imaging systems detect either reflected light, emitted light, or both. Increases in signature from solar reflection are usually great for wavelengths less than ≈3 µm and inconsequential beyond ≈5 µm (see associated rule in Chap. 4, “Backgrounds”). However, beyond 5 µm, the target may absorb solar irradiation, causing an increase in temperature. Solar reflection is usually measurable but not significant between these two wavebands. Total solar irradiance at the ground (insolation) (CS) in W/m² can be expressed as

CS = 1353A[1 + 0.0338 cos(2π(n – 3)/365)]

where A = fraction of solar radiation transmitted by the atmosphere (Outside the atmosphere, A is 1. At zero altitude, A varies from essentially 0 in bad weather to about 0.81 for very clear conditions. Reference 1 provides some empirical relationships between solar radiation and solar angle for different cloud types that define A.)
n = Julian day
The amount of solar irradiation on a horizontal plane is the sum of direct solar illumination from the disk of the Sun (Sn) and the illumination from diffuse sky caused by scattering (D). For clear sky conditions, the relation between the total irradiance and the direct illumination is given by1

CS = Sn sin(ϑs) + D

and

Sn = [3CS – CS sin(ϑs)]/[2 sin(ϑs)]

where ϑs = elevation angle of the Sun. Atmospheric effects must be considered if operating within the atmosphere. In the visible and UV spectral regions, objects are typically viewed by solar reflection only. In the infrared, the object’s own thermal emission is normally used to provide the signal to detect the target. However, in the shortwave and midwave infrared, reflection of radiation emitted by the Sun (and sometimes the Earth or Moon) can contribute significantly to a cold object’s signature. The contribution may be enough to allow smaller optics or a less-sensitive (and cheaper) focal plane. For example, consider a 1-m² satellite at a temperature of 300 K, a reflectivity of 0.3 (Lambertian), and an emissivity of 0.7. It is desired to observe this target with an 8- to 12-µm, a 3- to 5-µm, and a visible bandpass. Assume that the observation is done by another satellite in Earth orbit with no background. By using the Planck equation and solar emission tables from the IR Handbook, Table 17.1 can be generated. This does not consider background effects and the potential heating of the target (the latter also contributing to its emitted signature).
TABLE 17.1 Radiant Intensity as a Function of Bandpass (Does Not Consider Heating by the Sun)

                                               0.4- to 0.6-µm band   3- to 5-µm band   8- to 12-µm band
Thermal radiant intensity (W/sr)               Essentially 0         1.5               28
Solar reflection (expressed in W/sr)           35                    2.2               0.7
Total (W/sr)                                   35                    3.7               29
Percent of signature contribution by the Sun   100                   60                2
One can see that the solar contribution is dominant in the visible. In the IR bands, it contributes significantly to the signature of the MWIR band while being only a minor contributor to the LWIR band. Nevertheless, it does contribute something to every band.
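A minimal Python sketch of the insolation relationship given earlier in this rule; the Julian day and atmospheric transmission used below are illustrative only:

import math

def total_insolation_w_m2(julian_day, atm_transmission):
    # C_S = 1353 * A * [1 + 0.0338 * cos(2*pi*(n - 3)/365)]
    return 1353.0 * atm_transmission * (
        1.0 + 0.0338 * math.cos(2.0 * math.pi * (julian_day - 3) / 365.0))

print(total_insolation_w_m2(julian_day=172, atm_transmission=0.7))  # ~920 W/m^2 near the June solstice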
Reference 1. P. Jacobs, Thermal Infrared Characterization of Ground Targets and Backgrounds, SPIE Press, Bellingham, WA, pp. 34–37, 1996.
TEMPERATURE AS A FUNCTION OF AERODYNAMIC HEATING The stagnation temperature caused by aerodynamic heating can be estimated as1

T = Tamb[1 + r((γ – 1)/2)M²]

where
T = stagnation temperature in kelvins, Tamb = ambient temperature of the air, r = recovery factor (usually between 0.8 and 0.9); for laminar flow, use r = 0.85, and for turbulent flow, use1 r = 0.89, γ = ratio of the specific heats of air at constant pressure to that at constant volume (usually 1.4), and M = Mach number
With some assumptions, the first equation can be further simplified for high-altitude flight as

T = 217(1 + 0.164M²)
Discussion This is a dual-use rule in that it can be used (a) to estimate the temperature of a high-speed target in the atmosphere and (b) to estimate the temperatures that will be encountered in designing sensor components, such as windows, that will be used in various types of airborne sensors. This provides a basic piece of information about the design of such sensors, given that elevated window (or dome) temperatures may result in emissions in the detection band of the sensor via photon noise, and these may need to be considered. The temperature and the emissivity of the window material determine how much background radiation flux (and therefore photon noise) the window adds to the sensor. The equation gives the stagnation temperature of the air at the surface of the object when moving directly against the air. The actual temperature of the object will be somewhat lower. For instance, the temperature of a dome will fall off rapidly as the position of interest moves away from the center of the dome. This is accounted for by r, the recovery factor. Hudson suggests using 0.82 for laminar flow and 0.87 for turbulent flow. The first equation applies for Mach numbers less than about 6. For higher Mach numbers, particularly above 8, Gilbert2 suggests that the Tamb term should be divided by approximately 2.
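A brief Python sketch of the stagnation-temperature estimate; the recovery factor, ambient temperature, and Mach number below are example values only:

def stagnation_temperature_k(t_ambient_k, mach, recovery=0.85, gamma=1.4):
    # T = T_amb * [1 + r*((gamma - 1)/2)*M^2]; intended for Mach numbers below ~6
    return t_ambient_k * (1.0 + recovery * (gamma - 1.0) / 2.0 * mach**2)

print(stagnation_temperature_k(217.0, 3.0))  # ~549 K with r = 0.85
print(217.0 * (1.0 + 0.164 * 3.0**2))        # ~537 K from the simplified high-altitude form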
References 1. J. Accetta, “Infrared Search and Track Systems,” in Vol. 5, Passive Electro-Optical Systems, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 223, 1993. 2. K. Gilbert et al., “Aerodynamic Effects,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 241, 1993. 3. R. D. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, p. 101, 1969.
4. S. Maitra, “Aerodynamic Heating of Ballistic Missile Including the Effects of Gravity,” Sadhana, Vol. 25, pp. 463–473, October 2000. 5. R. Quinn and L. Gong, “Real Time Aerodynamic Heating and Surface Temperature Calculations,” NASA Technical Memorandum 4222, August 1990.
Chapter 18
Visible and Television Sensors
This chapter contains rules relating to sensor systems and detectors operating in the visible portion of the electromagnetic spectrum. This spectral slice has proven to be the technically easiest to implement (thanks to nature’s gifts of phosphors for displays and silicon for detectors), has provided the easiest images to interpret (thanks to our Sun peaking in the visible wavelengths and humans evolving their only imaging sense in this spectrum), and has been able to address the largest market (again, thanks to human familiarity with imaging in this spectrum). After still photography was developed and popularized in the nineteenth century, the idea of moving images was ignited in the minds of many scientists and engineers. While many worked on technology to produce moving photographic images (surprisingly still present in the modern cinema), some early pioneers realized that these moving images would reach a wider audience if electronically acquired and disseminated. Paul Nipkow developed a rotating-disk mechanically scanned television system as early as 1884, and the first CRT appeared as early as 1897. Ever since the dawn of the first moving pictures, the desire to include sound with electro-optical images was paramount. By the time moving film pictures gained popularity, ex-Idaho potato farmer Philo Farnsworth was hot on the trail of all-electrical television while others were still trying to perfect mechanical television—both with no market success. The world’s first public demonstration of a television system (it was of a mechanical architecture) was presented on January 23, 1926, by John Logie Baird of England. Baird later went on to license the Farnsworth electronic television and receive a British Broadcasting Corporation (BBC) contract. However, technical difficulties and an inopportune fire resulted in his loss of the follow-on BBC contract to EMI. Zworykin, at RCA, filed several patents in the 1920s, 1930s, and 1940s relating to electronic televisions. Within a few years of Baird’s 1926 demonstration, televisions were available to consumers in the U.S. and England. However, they didn’t catch on, because there was little to watch, and they were very expensive. (Not much has changed, as these are the very barriers of HDTV at the time of this writing.) In 1935, a U.S. court affirmed that Farnsworth had the controlling patents on television. This led to RCA licensing Farnsworth’s patents in 1939 (the first time RCA paid royalties rather than collecting them). Farnsworth also envisioned cable television as early as 1937 in an agreement between his company and AT&T.1
In these early days, AT&T was acutely interested in visible cameras and displays for a picture-phone that, like mechanical television, also never received widespread consumer use. Color television was promoted by CBS as early as 1940, using 343 lines and 120 fields per second.2 Heated wars on standards erupted, delaying the acceptance of color for almost three decades. After the Supreme Court intervened in 1951, CBS actually broadcast color for a few months and then abandoned it, because no one could watch it on a black-and-white TV. Color displays reached maturity in the late 1960s, when a broadcast standard that allowed black-and-white receivers to display the images was developed. Back in the 1940s, mechanical television gave way to the electronic television pioneered by Farnsworth. The work of the National Television System Committee (NTSC) laid the necessary foundations that made monochrome television practical in the U.S., and its 1941 standards (subsequently adopted by the FCC) are still used today. The reader is referred to the introduction of Chap. 7, “Displays,” for a related discussion of the broadcast and display standards of NTSC and phase alternate line (PAL). In the 1950s, the Academy of Television Arts and Sciences decided to give out an award akin to the Oscar. They named it the “IMMY” after RCA’s Image Orthicon Camera (frequently called by that name), based on the Farnsworth patents that they previously licensed. This was later changed to “Emmy” to reflect the female nature of the statue.5 Perhaps it is time to rename it the “Dee” (for the CCD). The era before 1935 can be called the “unsuccessful age of mechanical television.” One can assume that the era from about 1935 until 200X(?) will be considered the successful age of analog television. We are on the precipice of a fundamental change in television to digital architecture and higher resolution. We can’t say how successful or long this age will last, but it is likely to be replaced by holographic, super-interactive, or some other form of visible home electronic imaging that is now hard to imagine. A little history on recording is noteworthy. Ever since 1928, when mechanical television broadcast President Hoover’s acceptance speech, the recording of video became paramount in many video engineers’ minds. Kinescope was used to record video on film, and that became the mainstay of the television recording industry until about 1960. Charles Ginsberg led the research team at Ampex in developing the first practical videotape recorder in the early 1950s. The first ones sold in 1956 for $50,000, and they achieved a modicum of success in the 1960s with studios and industry. Sony introduced the videocassette recorder in 1971 and subsequently popularized the Betamax in the mid-1970s. Also in the late 1970s, Matsushita (parent company of JVC) developed the VHS format, which had slightly reduced quality compared to Betamax but could hold two (and later more) hours of video on a single cassette, allowing a typical Hollywood film to be recorded on a single tape. The VHS standard proliferated in the home entertainment and security markets. However, currently, VHS suffers from the fact that it is an all-analog architecture. Digital is all important, and many relevant standards evolved in the 1990s. Currently, MPEG2 is a generic standard allowing for up to 1.5 megabits per second as well as SMPTE292 (which defines several standards including a 60-Hz interlaced 1920 × 1080 digital video).
The camera recording the early television images was the vacuum tube-based vidicon. These devices employed coatings of phosphorous materials and produced analog signals. These cameras are bulky and fragile, require high voltages, and are difficult to calibrate. However, they were producible, manufacturable, stable, and provided adequate resolution with low-cost 1940s technology. The charge-coupling principle was devised on October 19, 1969, by Willard Boyle and George Smith and was first described in a 1970 publication by them.3,4 The charge-coupled device (CCD) enabled solid-state video imaging. Strictly speaking, the CCD is not a detector but a readout architecture (Fig. 18.1). By a happy coincidence, both the visible wavelength detector and the readout electronics can be made from silicon devices, in a single chip.
FIGURE 18.1 Typical front-side charge-coupled device architecture. (From www.Chem.vt.edu/chem-ed/optics.)
Photons are converted to electrons and holes in the depletion region of the bulk silicon. The charge migrates to a potential well in the CCD structure. The CCD moves this photogenerated charge in a “bucket brigade” from one well in a unit cell to the next by clocking the well potentials, then to wells in the adjacent unit cells, then to a shift register of CCDs that continue the transfer in the other dimension to the output lead pin. Classic CCDs use a high-resistivity n-type silicon substrate with a three-phase, triple-polysilicon gate and buried channels. An alternative technology soon developed to avoid this bucket brigade—the charge injection device or CID. These devices had some market appeal in the 1980s and early 1990s for spectrometers, trackers, and security systems. However, like Betamax, they failed in the marketplace. The CID’s market presence was largely defeated by the CCD’s lower cost resulting from larger, more universal markets. Figure 18.2 plots the acceptance of the CCD for the European Southern Observatory (an organization managing several observatories, see www.ESO.org). In the 1980s, the CCD became accepted as the visible focal plane of choice for most scientific imaging, and subsequently military imaging. It then became ubiquitous in professional television cameras and, eventually, consumer camcorders in the 1990s.
FIGURE 18.2 The demography of optical sensitive area devoted to observation at the European Southern Observatory, 1973–2000. Data were collected and plotted by G. Monnet (ESO). (From D. Groom, Recent Progress on CCDs for Astronomical Imaging, Proc. SPIE, Vol. 4008, March 2000.)
TABLE 18.1 Examples of Advanced CCDs*

First light | Camera | Format | Pixel size | Packing fraction | Format | Manufacturer or part number | Telescope
1998 | CFH12K | 12k × 8k | 15 µm | 98% | 12 × (2k × 4k) | MIT/LL CCID20 | CFHT
1999 | Suprime-Cam | 10k × 8k | 15 µm | 96.5% | 10 × (2k × 4k) | SITe + MIT/LL† | SUBARU
1999 | SDSS | 12k × 10k | 24 µm | ≈43% | 30 × (2k × 2k) | SITe | Apache Pt
1999 | NOAO | 8k × 8k | 15 µm | 98% | 8 × (2k × 4k) | SITe | CTIO 4-m
2000 | DEIMOS | 8k × 8k | 15 µm | 97% | 8 × (2k × 4k) | MIT/LL CCID20 | Keck
2000 | MAGNUM | 4k × 8k | 15 µm | 96% | 4 × (2k × 4k) | Hamamatsu | 2 m, Haleakala
2000 | WFI | 8k × 8k | 15 µm | 95.6% | 8 × (2k × 4k) | | MPG/ESO
2001 | UW | | | | 20 × (2k × 4k) | | ARC 3.5m
2002 | OmegaCAM | 16k × 16k | | | 36 × (2k × 4k) | | VST
2002 | MegaPrime | >16k × 18k | 13–15 µm | >90% | ≥36 × (2k × 4k) | | CFHT
2002 | Megacam | 18k × 18k | 13.5 µm | >90% | 36 × (2k × 4.5k) | EEV CCD42-90 | SAO/MMT
2004‡ | DMT | Annulus | 13 µm | ≥80% | 1300 × (1k × 1k) | | DMT 8-m
2004‡ | WFHRI_1§ | 36k × 36k | 5 µm | | 4 × (30 × 30) × (600 × 600) | MIT/LL | ≈25 × 2.5 m
2006‡ | SNAPsat | ≈10⁹ pix | 15 µm | 83% | ≈250 × (2k × 2k) | LBNL** | Satellite
2010‡ | GAIA | ≈10⁹ pix | 9 µm × 27 µm | 86% | ≈240 CCDs | | ESA satellite

*Source: D. Groom, “Recent Progress on CCDs for Astronomical Imaging,” Proc. SPIE, Vol. 4008, Optical and IR Telescope Instrumentation and Detectors, March 2000.
†Presently 4 SITe ST-002A and 4 MIT/LL CCID-20. Will add two more MIT/LL to make a full array.
‡Proposed.
§This is for the focal plane in one of ≈25 telescopes in the WFHRI array. Each array consists of four chips, each a 30 × 30 array of 600 × 600 OTCCDs.
**Commercial foundry licensed by LBNL.
In the 1990s, the stalwart CCD saw developments through back-illumination and the incorporation of gain for high sensitivity and other simple signal processing. The market pull of digital video for the Internet, HDTV, and e-cinema spurred advancements regarding on-chip digitization, smaller pixels, and larger formats. Table 18.1 lists some advanced technology CCDs planned for astronomical applications. At the time of this writing, the CCD has become the ubiquitous visible sensor for almost every application, but it has a new-technology competitor. Newer CMOS active-pixel sensors (APSs) and focal planes have significant advantages over CCDs for high-definition imaging and in being able to incorporate complex image processing into the focal plane, plus they have increased radiation hardness. Their origins go back to 1968,6 and they are becoming widely used for scientific, surveillance, and military applications as well as HDTV cameras (Fig. 18.3). CMOS APS devices have a different architecture, allowing the device to have parallel access (like the CID), which allows one detector or any area on the chip to be selected for readout. The signal from the pixel is the difference between the potential on the photodiode before and after the photodiode is reset. These two potentials are stored at the bottom of the column capacitors. The voltages on the capacitors are differentially read out to produce a voltage proportional to the photocharge.7
The only limit in the programmability of the device is that the bandwidth of readout (pixels per second) cannot be exceeded. If just a few pixels are chosen, they can be read at a higher rate than if the entire chip is read out. CMOS APSs tend to exhibit relatively large fixed pattern noise (FPN) as compared to CCDs; this is the result of threshold voltage and capacitance variations in the pixels. However, the CMOS APSs can outperform the CCD for large formats, as CCDs exhibit other difficulties. At present, security cameras use either CCDs or CMOS APSs, with the CCD having the largest market but APSs rapidly growing due to their on-focal-plane image-processing flexibility and better performance for large formats. These will likely replace the CCD for almost all applications and avoid suffering the fate of the CID. APSs are also attractive because they can be manufactured in the same foundry that makes memory chips or other common silicon devices, as they don’t require the overlapping polysilicon layers and nonstandard processing modules inherent to the CCD. Thus, the old CCD demands more “touch time” and can’t enjoy the production efficiency gains that have been shown by CMOS APS visible focal plane arrays.
FIGURE 18.3 Active pixel CMOS HDTV chip. (Courtesy of Rockwell Scientific.)
The reader is cautioned that many commercial CCDs and APSs perform an “on-chip” “pseudo-resolution” whereby a row of pixels (row 1) is added to the row below (row 2) to produce a TV line. Then the next line is composed of row 2 and the one below it (row 3), and so on. This makes determining resolution and sampling frequency difficult. Williams8 cautions that lines of resolution is a rather confusing term in the video and television world. This type of metric survives from the early days of analog television. It is poorly understood, and it is inconsistently measured and reported by manufacturers. But we’re stuck with it until all video is digital, at which time we might just possibly change the convention and start reporting resolution in terms of straight pixel counts (as is done in the infrared community). There are some common misconceptions. Lines of resolution is not the same as the number of pixels (either horizontal or vertical) found on a camera’s CCD, or on a digital monitor or other display such as a video projector or similar device, and it is not the same as the number of scanning lines used in an analog camera or television system such as PAL, NTSC, or SECAM, and so forth. Additionally, it is important to note that both NTSC and PAL systems are fundamentally analog in nature. Even if you digitize a PAL or NTSC signal from a digital CCD, you are digitizing an analog data stream with all its inherent limitations and noises, and you are typically limited to about 6 or 7 bits of actual useful data. Images may originate with a modern CCD, which can produce 12 to 14 bits of real data, but you’ll never truly recover all the lost and compressed information from the sloppy analog signal.
References
1. E. Schwartz, The Last Lone Inventor, HarperCollins, New York, p. 242, 2002.
2. www.Tvhistory.tv.hmtl, 2002.
3. J. Janesick, Scientific Charge Coupled Devices, SPIE Press, Bellingham, WA, p. 3, 2001.
4. W. Boyle and G. Smith, “Charge Coupled Semiconductor Devices,” Bell Systems Technical Journal, 49(587), 1970.
5. E. Schwartz, The Last Lone Inventor, HarperCollins, New York, p. 291, 2002.
6. P. Noble, “Self-Scanned Silicon Image Detector Arrays,” IEEE Transactions on Electron Devices, Vol. 15, pp. 202–209, December 1968.
7. S. Miyatake et al., “Transversal-Readout Architecture for CMOS Active Pixel Image Sensors,” IEEE Transactions on Electron Devices, 50(1), pp. 121–128, January 2003.
8. Private communication with George Williams, 2003.
9. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 407–426, 1981.
10. WWW.Williamson-labs.com/ntsc-fink.htm, 2002.
11. http://inventors.about.com/library/inventors/blvideo.htm, 2002.
AIRY DISK DIAMETER APPROXIMATES f/# (FOR VISIBLE SYSTEMS) The diameter of the Airy disk (in micrometers) in the visible wavelengths is approximately equal to the f/# of the lens.
Discussion This is based on diffraction theory and plugging in the appropriate values. However, as detailed below, this is valid only near visible wavelengths of 0.5 µm. The linear size of the first Airy dark ring is, in radius,

R = 1.22 fλ/D

where f = focal length of the optical system and D = aperture diameter. For the visible spectrum, λ is about 0.4 to 0.7 µm, and f/D is a small-angle approximation for the f/# of the telescope. Thus, the size of the first Airy ring is, in diameter, (2 × 1.22 × 0.5 × f/#) µm, which is nearly equal to the numerical value of the f/#, as the multiplication of all the numerals approximately cancels.
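A one-line numerical check of this rule in Python (the f/# chosen is illustrative):

wavelength_um = 0.5  # mid-visible
f_number = 4.0
airy_diameter_um = 2.0 * 1.22 * wavelength_um * f_number
print(airy_diameter_um)  # 4.88 um, roughly the f/# expressed in micrometers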
CCD SIZE Like plumbing pipe, the actual active size of a visible sensor array is only roughly approximate to its nominal (literally, “named”) size. Although the formats vary with time and from one supplier to another, the user can assume the following crude approximations:
Format      Horizontal (mm)   Vertical (mm)   Diagonal (mm)
1/6 inch    2.5               1.8             3.1
1/4 inch    3.6               2.7             4.5
1/3 inch    4.8               3.6             6
1/2 inch    6.4               4.8             8
2/3 inch    8.9               6.6             11
Discussion Unfortunately, the “format size” of a visible sensor (be it a CCD, CID, or CMOS active pixel sensor) has little relationship to its actual size. Historically, formats were defined based on ancient vidicons and are meant to be exchangeable with such video tubes (although the authors can’t imagine anyone doing that today). The size roughly approximates the diagonal dimensions of the active area of the chip. Moreover, the actual chip’s imaging area may be substantially smaller than the chip, as there frequently is nonactive area. Some manufacturers include a microlens for every pixel, resulting in a fill factor approaching 1. The above table gives the reader general sizes of the imaging area of the chip for popular formats. Unfortunately, these will vary slightly from manufacturer to manufacturer. There is no universal standard, but the above guidelines give the user a good place to start.
CHARGE TRANSFER EFFICIENCY RULES 1. Charge transfer efficiency (CTE) usually improves as the temperature is lowered. 2. CTE decreases as accumulated total dose (in rads) increases. 3. CTE is generally around 0.997 for commercial devices and as high as 0.999999 for scientific devices.
Discussion A charge-coupled device (CCD) operates by transferring the charge in one pixel across a row through the other pixels and eventually to a shift register. A large CCD may transfer some of the charge several thousand times before it reaches an amplifier. High efficiency of the transfer is critical to prevent shadowing effects and reduced SNR. Several things can happen to the lost electrons. They can be recombined with a hole, reducing signal, or they can be left behind and end up as signal in the neighboring pixel. As a CCD is cooled, its transfer efficiency usually increases, and its inherent noise decreases. This is why most very large and high-sensitivity CCDs operate at reduced temperatures. As CCDs are exposed to nuclear radiation, their performance decreases. There is a total dose deleterious effect. Insulating oxides break down, and shorting may occur. Additionally, the wells tend to fill up with noise-generated electrons, and the charge transfer efficiency is reduced. CMOS structures do not have this problem but are otherwise adversely affected when exposed to radiation. The effects are nonlinear and vary along the shift register, so care should be exercised in using these rules. Generally, there is a decrease in MTF that varies along the array.
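A small Python sketch of why CTE matters for large arrays, assuming the common approximation that the fraction of charge surviving N transfers is CTE raised to the power N (the transfer count is illustrative):

n_transfers = 2000  # e.g., a pixel far from the output amplifier of a large CCD
for cte in (0.997, 0.99999, 0.999999):
    remaining = cte ** n_transfers
    print(f"CTE = {cte}: {100.0 * remaining:.1f}% of the charge reaches the amplifier")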
CMOS DEPLETION SCALING According to Williams,1 CMOS manufacturing processes are being driven below submicron scale, and the depletion region in CMOS imagers scales as the square root of the process voltage and the doping concentration. This has the effect of reducing the (long wavelength) red response of these imagers.
Discussion Reference 2 states that the absorption of photons in silicon can be modeled by

I(λ, x) = I0(λ)e^(–α(λ)x)

where I(λ, x) = photon flux of wavelength λ at depth x in the silicon and α = wavelength-dependent absorption coefficient. The reader will note that this is a form of Beer’s law (described elsewhere in this book), and the absorption coefficient (α) decreases as wavelength increases. Thus, on average, shorter wavelengths generate electron-hole pairs closer to the surface of the silicon than do red wavelengths. Thus, the depth of the p-n junction influences the spectral response. With the above in mind, as CMOS processes shrink in feature size according to Moore’s law, and process voltages decrease, the CMOS active pixel sensor (APS) imager red response is reduced. The resultant reduction of oxide thickness lowers the threshold voltage, which must be compensated for by increasing the diffusion doping in the channel, drain, and source. As the rule indicates, this is difficult, as it increases only as the square root of the process voltage and the doping concentration (not a very strong function).
Visible and Television Sensors
357
the process voltage and the doping concentration (not a very strong function). Migration from about 1- to 0.25-µm photolithography results in a need to increase the CMOS doping concentration by an order of magnitude to maintain the quantum efficiency. However, as the photons must be collected within a depletion region, and as red photons are absorbed between 10 and 100 µm deep in the silicon, the red response is reduced. Typically, the CMOS process limits this depletion region to 1 to 3 µm, limiting spectral coverage to less than 0.7 µm. Full depletion of a standard low-resistivity silicon substrate is not technically feasible. Therefore, the technical developments for expanding the wavelength sensitivity of scientific silicon detector arrays have focused on high-resistivity substrates. Some MOS developments are of the deep-depletion type. In these devices, partial depletion of the substrate is achieved to depths of typically 40 to 80 µm. Such devices must still be thinned to 40 to 50 µm to eliminate the free region between the depletion layer and the backside. Thinning unfortunately undermines the long wavelength sensitivity. Figure 18.4 illustrates the great change in absorption layers across the spectrum of silicon. Below about 400 nm, Beer’s law breaks down as surface effects dominate. Above about 900 nm, transparency dominates. Don Groom (the author of Ref. 3 and originator of the figure) likes to point out that this is the most important figure in his CCD/CMOS talks. He adds that the dashed curves approximate the theory, and the solid are experimental
FIGURE 18.4 Absorption length in the depletion region increases for longer (red) wavelengths. The dashed curves are calculated from the phenomenological fits of Rajkanan et al.; the solid curve is the experimental absorption length of light in silicon. Except at wavelengths approaching the bandgap cutoff at 1100 nm, essentially all absorbed photons produce electron-hole pairs. The sensitive region of a conventional silicon detector is a 20-µm-thick epitaxial layer, whereas in high-resistivity silicon the fully depleted 300-µm substrate may be active. (From Ref. 3.)
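As a rough numerical illustration of the Beer's law behavior discussed above, the sketch below estimates the fraction of photons absorbed within an assumed depletion depth. The absorption coefficients used are approximate, illustrative values for room-temperature silicon (they are assumptions, not data from this book), so treat the output as indicative only.

```python
import math

def fraction_absorbed(alpha_per_cm, depth_um):
    """Fraction of incident photons absorbed within depth_um of silicon,
    per Beer's law: I(x) = I0 * exp(-alpha * x)."""
    return 1.0 - math.exp(-alpha_per_cm * depth_um * 1e-4)

# Rough, illustrative absorption coefficients for silicon (1/cm); assumed values only.
alpha_by_band = {
    "blue  ~450 nm": 2.5e4,
    "green ~550 nm": 7.0e3,
    "red   ~650 nm": 2.5e3,
    "NIR   ~900 nm": 3.0e2,
}

for band, alpha in alpha_by_band.items():
    pct = 100.0 * fraction_absorbed(alpha, depth_um=3.0)
    print(f"{band}: {pct:5.1f}% absorbed within a 3-um depletion region")
```

With these assumed coefficients, nearly all blue photons are captured in a 3-µm depletion region while only a few percent of the near-infrared photons are, which is the red-response loss the rule describes.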
References 1. Private communications with George Williams, 2003.
2. K. Findlater et al., “A CMOS Image Sensor with a Double-Junction Active Pixel,” IEEE Transactions On Electron Devices, 50(1), pp. 32–42, January 2003. 3. D. Groom, “Recent Progress on CCDs for Astronomical Imaging,” Proc. SPIE, Vol. 4008, Optical and IR Telescope Instrumentation and Detectors, pp. 48–70, March 2000. 4. K. Rajkanan, R. Singh, and J. Shewchun, “Absorption Coefficient of Silicon for Solar Cell Calculations,” Solid-State Electronics, Vol. 22, pp. 793–795, 1979. 5. S. Holland et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions on Electron Devices, 50(1), pp. 225–238, January 2003.
CORRELATED DOUBLE SAMPLING Correlated double sampling (CDS) is a method employed to improve the signal-to-noise ratio of integrating image sensors. By subtracting a pixel’s dark or reference output level from the actual light-induced signal, static fixed pattern noise and several types of temporal noise are effectively removed from the sensor’s output.
Discussion In an optical sensor, the photo charge is generally collected in a capacitor. The signal amplitude is read as the voltage on that capacitor (V = Q/C). With the CDS procedure, the signal voltage Vs = Qs/C is compared with the “dark,” “empty,” or “reset” level voltage, Vr = Qr/C, that is obtained when all charges of C have been channeled off to a fixed potential. Thus, for each pixel, the final output is V = Vs – Vr = (Qs – Qr)/C. Spatial and temporal noises that are common to Vr and Vs disappear from the result. Thus, the following noises almost disappear:
■ kTC noise (or reset noise) of the photodiode's capacitance, on the condition that this capacitance is not reset in between measuring Vs and Vr. If the capacitor is reset in between the two sampling instants, their noises are uncorrelated, and kTC noise persists. (Sometimes this method of readout is called double sampling, DS, in contrast to CDS. Removal of kTC noise is the main reason most people employ CDS.)
■ 1/f noise
■ Dark level drifts
But the following noise sources are not mitigated, and may even be increased, by CDS:
■ Second-order effects resulting from pixel gain nonuniformity or nonlinearity are not compensated.
■ Uncorrelated temporal white noise originating from before the differencing operation, such as broadband amplifier noise, is multiplied by a factor of about 1.4 (√2) by the differencing operation.
■ All of the downstream noise sources, such as electromagnetic interference (EMI), digitization, system noise, discretization noise, and so on, are not affected.
■ Low-frequency MOSFET noise (1/f noise, flicker noise) is reduced only by a factor that is the logarithm of the associated reduction in bandwidth, typically a factor of not more than 1 to 3. In the literature, the reduction of 1/f noise is typically overestimated or not recognized as such, as the 1/f noise after CDS or DS appears to be “white,” which is the result of aliasing effects.
■ Signal noise, such as optical shot noise, is in principle not affected by CDS.
CDS was developed by McCann and White of Westinghouse in the 1970s to reduce the reset noise (kTC).1 When the reset switch is operated, there is a residual charge uncertainty that is inherent in the CCD architecture. This rms noise charge is (kTC)^1/2, where k is Boltzmann's constant, T is the temperature, and C is the capacitance.
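The magnitude of the reset noise that CDS removes is easy to estimate from (kTC)^1/2. The following is a minimal sketch; the 50-fF sense-node capacitance is an assumed, representative value, not one specified by this rule.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # electronic charge, C

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    """RMS reset (kTC) noise charge expressed in electrons: sqrt(kTC)/q."""
    return math.sqrt(K_B * temperature_k * capacitance_f) / Q_E

# Assumed 50-fF sense node at room temperature: roughly 90 electrons rms,
# which is why removing reset noise is the main reason to use CDS.
print(f"kTC noise ~ {ktc_noise_electrons(50e-15):.0f} electrons rms")
```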
References 1. J. Hall, “Arrays and Charged Coupled Devices,” Applied Optics and Optical Engineering, R. Shannon and J. Wyant, Eds., Academic Press, New York, pp. 373–375, 1980. 2. Private communications with George Williams, 2003. 3. WWW.CCD.com, 2003.
DOMINATION OF SPURIOUS CHARGE FOR CCDS As CCDs are clocked faster, spurious charge noise dominates over charge transfer inefficiencies.
Discussion This noise source is becoming more dominant, especially with HDTV, because of the large numbers of pixels requiring rapid clocking rates and the fact that spurious charge increases linearly with the number of transfers. Although an academic curiosity in the past, it is becoming a serious issue for imagers as pixel counts increase. Spurious charge (mechanism explained below) increases exponentially with the speed of the leading edge of the clock rise and with the clock swing. The faster the clock rise, the more spurious charge results. Assuming the gate voltages are fixed, “wave shaping” the gate clocks and allowing approximately five time constants on the clock overlaps between phases of the CCD readout process will reduce spurious charge (as it allows the holes to return to the channel stops under a lower potential). When CCDs are clocked into inversion, minority carriers (holes) migrate from the channel stops and collect beneath the gate. This results in “pinning” the surface to the substrate potential (they are both at the same potential). This process occurs very quickly in CCDs (on the order of a few tens of nanoseconds). Some of the holes become trapped in the Si-SiO2 interface. When the clock is switched to the noninverting state to transfer charge, the trapped holes are accelerated out of the Si-SiO2 interface. To make matters worse, some holes are released with sufficient energy to create additional electron-hole pairs by colliding with silicon atoms (called impact ionization). All of these contribute to spurious noise. Fast-moving, high-amplitude clocks increase the amount of impact ionization caused by the electric fields involved. It is important to note that spurious charge is generated only on the leading edge of the drive clock transition, when the phase assumes a noninverting state. Experiments have shown that the falling edge has no effect on spurious charge. Furthermore, and unfortunately, impact ionization has been shown to increase at low temperatures. In short, spurious charge increases exponentially as the clock rise time shortens and as the voltage swing grows, because the holes are driven back to the channel stops with greater energy.
References 1. Private communication with George Williams, 2003. 2. J. Janesick, Scientific Charge-Coupled Devices, SPIE Press, Bellingham, WA, pp. 649–654, 2001.
EQUIVALENT ISO SPEED OF A SENSOR

ISO ≈ 0.8/Em

Assume Em to be the noise floor (see below) of the sensor that you are using.
Discussion Williams1 points out that, for film, Em is officially defined as the intersection of the density/exposure curve and the base fog and is given in lux. Assuming that the “base fog” is roughly equivalent to the noise floor of a visible sensor, one can substitute accordingly. To paraphrase Ref. 2, assuming that a signal-to-noise ratio of 3 is required for raw detection, then a signal-to-noise ratio of about 10 dB (an SNR of 3) is the threshold sensitivity of a visible sensor. This roughly corresponds to a flux of 3 × 10–4 lux (assuming a 1/30-sec exposure and appropriate spectral weighting). A readout noise of five electrons rms is assumed, and it is assumed that the detectors exhibit no fixed pattern noise or charge transfer inefficiency. Converting units, we find that an equivalent base fog of 1 × 10–5 lux-seconds (which describes the conditions above) is determined for a back-illuminated sensor. An equivalent ISO speed of between 49,000 and 107,000 is determined for high-end CCDs. The reader can compare this with typical high-speed film, which is about 400. Of course, this applies only to a cooled CCD with CDS. Incidentally, the ISO numbers of film (e.g., 200, 400, and 1000) indicate how efficiently the film reacts to light. The higher the number, the quicker the film will form an image at a given light level. The effect is roughly linear between exposure time and ISO rating, as a film with an ISO 400 rating reacts twice as quickly to the same light as ISO 200 film, and so on.3
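A quick check of the rule against the noise-floor figure quoted above (an equivalent base fog of 1 × 10–5 lux-seconds); the sketch below simply restates the arithmetic.

```python
def equivalent_iso(noise_floor_lux_seconds):
    """Equivalent ISO speed from the rule: ISO ~= 0.8 / Em."""
    return 0.8 / noise_floor_lux_seconds

# Back-illuminated sensor example from the discussion (Em ~ 1e-5 lux-seconds):
print(f"Equivalent ISO ~ {equivalent_iso(1e-5):,.0f}")  # ~80,000, inside the 49,000 to 107,000 range
```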
References 1. Private Communications with George Williams, 2003. 2. G. Williams, H. Marsh, and M. Hinds, “Back-Illuminated CCD Imagers for High Information Content Digital Photography,” Proc. SPIE, 1998. 3. http://photographytips.com/page.cfm/268.
HOBBS’ CCD NOISES 1. A commercial-grade CCD operating at room temperature has a dark current of around 100 electrons per pixel in 1/30 sec integration time. 2. A good uncooled CCD camera has a readout noise of about 30 electrons per readout. 3. A cooled CCD can have noise as low as five electrons per pixel with correlated double sampling. 4. A higher-grade cooled CCD has a dark current of around one electron per second per pixel and an rms readout noise of about five electrons. 5. A scientific-grade, astronomical, cooled multiphased pinned CCD can have dark current below one electron per pixel per second, but the noise is bandwidth dependent, and the bandwidth is smaller for most typical astronomical designs.
Discussion The basic CCD is a linear array of MOS (metal-oxide semiconductor) diodes, which acts as an analog memory and shift register. Electrons are moved across a line of potential wells
by synchronously varying the potential in the wells associated with each detector location. For example, as one well goes low and its neighbor goes positive, the electrons migrate to the more positive well. This occurs for the next well, and next, and so on. This moving of charge was frequently called a bucket brigade, as it is analogous to the volunteer fire-fighting technique of moving water in a straight line. As in the bucket brigade, charge sometimes slops out and is lost, leading to a charge transfer efficiency of less than 1. Two-dimensional arrays are composed of a series of these linear CCD shift registers reading out the image in one dimension (e.g., just rows or columns) to another (nonphotoactive) linear CCD, which then acts like a shift register in the other dimension. The above rules are based on the state of the art for low-noise MOS CCD imagers. Much of the noise in a CCD is thermally generated, so cooling reduces the noise (see associated rules in this chapter). These rules should approximately apply to the dark current and readout noise of CMOS APS imagers as well. On a per-pixel basis, CMOS imagers tend to be noisier but, with special designs and extra cooling, they approach the performance of CCDs, and their performance may even be better for large pixel counts. The CMOS products are also becoming popular, as they can be manufactured on a standard semiconductor device or memory production line. Correlated double sampling eliminates some of the thermal fluctuations of the reset voltages. This is done simply by sampling the output before and after each readout and subtracting one from the other (see associated rule in this chapter). The multiphase pinning mentioned above acts to reduce or eliminate surface states, resulting in exquisitely low dark current. Surface states cause charge to collect at the surface or metal-semiconductor interfaces. The surface energy level associated with the surface state specifies the level below which all surface states must be filled for charge neutrality at the surface.4 High surface states result in higher noise and higher potentials.
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 39 and 109, 2000. 2. www-inst.eecs.Berkeley.edu, 2003. 3. www.ccd.com, 2003. 4. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 407–426, 1981.
IMAGE INTENSIFIER RESOLUTION 1. Modern image intensifiers (I2) can produce images with resolution of about 60 line pairs per millimeter (lp/mm).1
2. Also, by approximate generations:2

Generation   Approximate years of production   Microchannel plate pitch   Nyquist limit
    1        Mid 1970s to early 1980s          14 to 15 µm                33 to 36 lp/mm
    2        Mid 1980s to early 1990s          10 to 12 µm                42 to 50 lp/mm
    3        Mid 1990s to mid 2000s            6 µm                       83 lp/mm
3. Bender2 also states that, throughout these periods, the overall resolution has consistently tracked the MCP’s Nyquist limit at about 80 percent of the Nyquist.
Discussion Image intensifiers come in several architectures, generally called generations, or Gen 1, Gen 2, Gen 3, and so on. Gen 1, as described in Reference 3, “refers to image intensifiers that do not use a microchannel plate (MCP) and where the gain is usually no greater than 100 times.” Gen 2 devices employ MCPs for electron multiplication. “Types using a single-stage MCP have a gain of about 10,000, while types using a 3-stage MCPs offer a much higher gain of more than 10 million.”3 Third generation refers to the use of semiconductor materials (e.g., GaAs) as the photocathode; these tubes come in filmed and unfilmed types. The “film” is an ion barrier that stops ions from flying back into the photocathode. The unfilmed types are more resistant to some types of damage and are more sensitive. An 18-mm image intensifier assembly can produce 2160 resolution elements, or 1080 cycles across its diameter, and an 11-mm unit can produce 1320 resolution elements, or 660 cycles. Generally, image intensifiers are wide-angle devices (10 to 60°), and they require an optic with an f/# of less than 4 or 5 to provide sufficient signal-to-noise ratio to practically accomplish this level of resolution. This rule assumes that the intensifier is the limiting resolution factor. Obviously, if the system in which it is employed has jitter or optical resolution worse than the above, the potential resolution will never be achieved. Early image intensifiers also had significantly less resolution, perhaps as low as 15 to 20 lp/mm. They also suffered from much higher levels of blooming and lower scene dynamic range. Although these problems are still fundamental to image intensifiers, they are greatly improved in newer-generation devices.
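The numbers in this rule follow from simple arithmetic on the MCP pitch and tube diameter, as the sketch below shows; the 80 percent factor is Bender's observation quoted in item 3 above.

```python
def mcp_nyquist_lp_per_mm(pitch_um):
    """Nyquist limit set by the microchannel plate pitch."""
    return 1000.0 / (2.0 * pitch_um)

def delivered_resolution_lp_per_mm(pitch_um, fraction_of_nyquist=0.8):
    """Bender's observation: delivered resolution tracks ~80 percent of the MCP Nyquist limit."""
    return fraction_of_nyquist * mcp_nyquist_lp_per_mm(pitch_um)

def resolution_elements_across_tube(diameter_mm, lp_per_mm):
    """Resolution elements across the tube diameter (two elements per line pair)."""
    return 2.0 * lp_per_mm * diameter_mm

print(f"6-um pitch Nyquist limit : {mcp_nyquist_lp_per_mm(6):.0f} lp/mm")            # ~83
print(f"Expected tube resolution : {delivered_resolution_lp_per_mm(6):.0f} lp/mm")   # ~67
print(f"18-mm tube at 60 lp/mm   : {resolution_elements_across_tube(18, 60):.0f} elements")  # 2160
```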
References 1. J. Hall, “Characterization and Calibration of Signal-Generating Image Sensors,” ElectroOptical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, p. 7-4, 2000. 2. E. Bender, “Present Image-Intensifier Tube Structures,” Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, 2000, p. 5-33. 3. Hamamatsu Corp., “Image Intensifiers,” available at www.hamamatsu.com, 2003. 4. Private communications with George Williams, 2003. 5. R. Jung, “Image sensor technology for beam instrumentation,” available at www.slac. stanford.edu/pubs/confproc/biw98/jung.pdf, 2003. 6. http://usa.hamamatsu.com, 2003. 7. www.ITTnv.com, 2003.
INCREASE IN INTENSIFIER PHOTOCATHODE EBI WITH TEMPERATURE The equivalent background input (EBI) of a Gen 3 image intensifier (I2) generally doubles for every 2.5 to 5°C rise in photocathode temperature.
Discussion EBI rises with temperature and can become the dominant noise source, ultimately limiting I2 sensor low-light sensitivity for high f/# systems. When used in military applications, image intensified sensors must be capable of operating across a wide ambient temperature range, typically from –40 to +55°C. I2 CCD sensors are sometimes embedded in larger systems, often surrounded by other electronics and mechanical housings, further exacerbating the temperature increase of the photocathode. Photocathode temperature rises of +15°C above ambient are not uncommon and must be accounted for during initial system
design, modeling, and operation. As the temperature rises, the thermal “dark current” noise associated with the photocathode microchannel plate may become the dominant noise level that limits sensor performance in terms of minimum detectable signal level and measurable scene dynamic range. The detrimental impact on system performance generally occurs in very low-light conditions, as that is where the EBI becomes dominant. Furthermore, high-f/# systems are more susceptible to EBI levels, as the transmitted scene illumination (measured at the photocathode image surface) is lower than with faster (low-f/#) systems. High-performance I2 CCD sensors can employ thermoelectric cooling (TEC) devices to keep the temperature at a desired level.
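A minimal sketch of the doubling behavior described by this rule; the exact doubling interval (anywhere from 2.5 to 5°C) and the baseline EBI must come from the tube's own data.

```python
def ebi_multiplier(delta_t_c, doubling_interval_c=3.5):
    """EBI growth factor for a photocathode temperature rise of delta_t_c.
    The doubling interval may be anywhere from 2.5 to 5 C per the rule."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

# A +15 C photocathode rise above ambient (common inside a crowded enclosure):
print(f"EBI multiplier ~ {ebi_multiplier(15.0):.0f}x")  # ~20x for an assumed 3.5 C doubling interval
```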
LOW-BACKGROUND NE∆Q APPROXIMATION A high-quality CCD and good-quality telescope will result in a noise equivalent photon flux density (NE∆Q) of

NE∆Q ≈ (3/Ω) [ (Nd + FB Ω)/ti + Nr²/ti² ]^0.5
where Ω = solid angle of the detector pixel (in steradians) Nd = dark current noise in electrons per second FB = background flux in photons per second per steradian (assume roughly 3.3 × 1011 for low backgrounds) ti = integration time Nr = readout noise in electrons
Discussion The above equation assumes that the combined in-band quantum efficiency of the optics and detector is 0.5, which is achievable but quite good. A high-quality astronomical visible detector has a dark current of about one electron per second per pixel and a readout noise of less than five electrons per readout. CMOS devices tend to have a higher noise value but don't suffer from transfer inefficiency, which may be an issue for large HDTV CCD arrays.
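The sketch below evaluates the approximation as reconstructed above, using the dark current, read noise, and low-background flux quoted in this rule; the pixel solid angle and integration time are assumed example values.

```python
import math

def ne_delta_q(omega_sr, dark_e_per_s, background_ph_per_s_sr, t_int_s, read_noise_e):
    """Noise equivalent photon flux density (photons/s/sr), per the approximation above."""
    variance_rate = ((dark_e_per_s + background_ph_per_s_sr * omega_sr) / t_int_s
                     + (read_noise_e / t_int_s) ** 2)
    return (3.0 / omega_sr) * math.sqrt(variance_rate)

# Assumed example: ~1e-10 sr pixel, 10-s integration, 1 e-/s dark current,
# 5 e- read noise, and the low-background flux of 3.3e11 photons/s/sr quoted above.
print(f"NE dQ ~ {ne_delta_q(1e-10, 1.0, 3.3e11, 10.0, 5.0):.2e} photons/s/sr")
```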
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 38–39, 2000.
MICROCHANNEL PLATE NOISE FIGURE AND NOISE FACTOR 1. A microchannel plate (MCP) noise figure (Nf) can be defined as follows:1

Nf = 1.03 Sp^0.5/SNR

where Nf = noise figure (not factor)
Sp = sensitivity of the photocathode (generally in terms of microamperes per lumen)
SNR = tube signal-to-noise ratio

2. The noise figure for a channel electron multiplier is1,2

Nf = [(1/η)(2 + 1/d)]^0.5

where η = effective quantum efficiency (photoelectron detection efficiency)
d = first strike yield in electrons; if not known, assume a number between 3 and 4

3. The noise figure (Nf) in decibels is related to the noise factor (Fn) by3

Nf = 20 log10(Fn) = 20 log10(1/Pf^0.5) = 10 log10(1/Pf)

where Pf = fill factor, or the fraction of the microchannel plate surface area that collects photoelectrons
Fn = noise factor, or the factor by which the noise appears to increase as a result of the fill factor being less than unity
Discussion A microchannel plate is an array of curved, hollow, tube-like channels coated with a material that provides electron amplification (typically, an amplification of two or three electrons per bounce). A microchannel plate can provide a two-dimensional intensified image. Microchannel plates need to be used with a photoemissive surface to provide the initial photoelectron that enters the curved tube and becomes amplified. The major contribution to noise from the microchannel plate is a result of its amplification process. A noise figure is generally defined as the input SNR divided by the output SNR. In optical and electronics applications, it is common to use the power SNR. An exception is in the astronomical community, which usually uses electrical current SNR, which is the square root of the power SNR. That choice seems to apply here as well. MCPs can yield a per-pixel SNR dominated by the background and quantum efficiency such that SNR ≤ (ηN)^0.5, where η = quantum efficiency of the photocathode and N = average number of incident photons (Ref. 4). For imaging applications, the nonuniformity can also limit the SNR, and the reader is urged to consider the scene SNR as described in a rule in Chap. 16, “Systems.”
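As a numerical aside, item 3 above reduces to a one-line calculation relating fill factor to noise figure; the fill factor used here is an assumed example value.

```python
import math

def mcp_noise_factor(fill_factor):
    """Noise factor from fill factor alone: Fn = 1 / sqrt(Pf)."""
    return 1.0 / math.sqrt(fill_factor)

def mcp_noise_figure_db(fill_factor):
    """Noise figure in dB: 20*log10(Fn) = 10*log10(1/Pf)."""
    return 10.0 * math.log10(1.0 / fill_factor)

pf = 0.6  # assumed open-area (fill) fraction of the MCP
print(f"Fn = {mcp_noise_factor(pf):.2f}, Nf = {mcp_noise_figure_db(pf):.2f} dB")
```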
References 1. E. Bender, “Present Image-Intensifier Tube Structures,” Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 5–32. 2. P. Csorba, Image Tubes, Sams, Indianapolis, IN, 1985. 3. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York, pp. 124–126, 1984.
4. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 98–101, 2000. 5. http://www.laidback.org/~daveg/academic/labreports/rep3/PMT.html, 2003.
NOISE AS A FUNCTION OF TEMPERATURE The dark current in visible silicon detector arrays doubles for every 8 or 9°C increase in temperature.
Discussion This is why scientific and high-performance visible imagers will typically have a thermoelectric cooler to reduce the operating temperature by 20 to 40°C. As the detector's temperature is increased, the dark noise increases. This rule is dependent on the state of the art (which changes) and assumes normal silicon focal planes. Additionally, scaling should be limited to ±40°C about the normal operating temperature. Clearly, a temperature is reached for any material at which additional cooling provides no additional system-level sensitivity. When a CCD (or an avalanche photodiode) is cooled, the noise decreases, causing its overall sensitivity to improve. Reducing dark noise by a factor of 2 will lead to an increase in sensitivity of √2. Other benefits from additional cooling may be increased uniformity, longer wavelength response, and the ability to integrate longer. However, eventually, a temperature will be reached at which further cooling provides minimal gains, as the total noise becomes dominated by background noise, spurious noise, or other sources. Additionally, multiplexer and bias circuitry may fail if operated at temperatures colder than their design limit, and carrier freeze-out will occur at liquid nitrogen temperatures, reducing SNR. Some noise sources (e.g., spurious noise) actually increase with reduced temperature. Finally, sensitivity versus temperature is not linear. It is a curve, so don't overuse this rule.
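A sketch of the scaling described by this rule; the doubling interval (8 or 9°C) and the temperature change are the only inputs, and the caution about staying within roughly ±40°C of the normal operating point still applies.

```python
def dark_current_multiplier(delta_t_c, doubling_interval_c=8.5):
    """Silicon dark-current scale factor for a temperature change of delta_t_c (degrees C)."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

# A 30 C reduction from a thermoelectric cooler:
factor = dark_current_multiplier(-30.0)
print(f"Dark current falls to {factor:.2f} of its original value")  # ~0.09
```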
NOISE EQUATIONS FOR CMOS APSS AND CCDS 1. The noise from a CMOS APS focal plane array (in terms of electrons) can be calculated as follows:

Qn = [ HRt/q + Il t/q + (En √∆fn Cs/q)² + (In √∆fn Cs/(q gm))² + (FPN)² ]^0.5     (1)

where Qn = rms noise charge in number of electrons per sample at the output
H = irradiance on the FPA in watts per square meter
R = responsivity in amperes per watt from each pixel element
t = frame time
q = electronic charge in coulombs
Il = leakage current in amperes from each element
En = noise voltage density of the on-chip source follower transistor in volts per root hertz, referred to the diode node
∆fn = effective noise bandwidth of the source follower and following op-amp together, in hertz
Cs = capacitance of the diode node to ground in farads
In = noise current density of the off-chip op-amp in amperes per root hertz
gm = transconductance of the source follower transistor in amperes per volt
FPN = residual fixed pattern noise (in electrons)

2. The noise from a CCD is a modification of the above, as follows:

Qn = [ HRt/q + Il t/q + (En √∆fn Cs/q)² + (In √∆fn Cs/(q gm))² + (Nt)² ]^0.5     (2)

where Nt = transfer noise (especially important for surface-channel devices, and negligible for the more common buried-channel devices)
Discussion These equations represent a root sum of squares (RSS) of the major noise sources for CCD and APS visible focal plane arrays. Equation (1) assumes either that correlated double sampling (CDS) is not employed or that CDS is employed but is not perfect; the FPN term allows for the residual leakage. FPN here is the leakage of total spatial FPN after the application of whatever algorithm is employed to mitigate FPN. Equation (2) assumes correlated double sampling to remove the thermal fluctuation of the reset voltage (reset noise, sometimes called kTC noise) and any fixed pattern noise. In a surface-channel CCD, the signal is moved along the surface and is limited by the effects of interface traps that add a transfer noise and reduce the transfer efficiency. In a buried-channel CCD, the charges are confined to a channel below the surface, increasing transfer efficiency and eliminating interface trapping. For buried-channel CCDs, transfer noise (Nt) is not an issue, and that term disappears. Likewise, that term is not an issue for CMOS APS, as they require only one charge transfer to read out the signal. Typical commercial CCDs have a few hundred noise electrons per sample, whereas high-grade thermoelectrically cooled devices can have noise below a couple of tens of electrons per second. Advanced and expensive scientific visible detector arrays exhibit noise of a few electrons per second. Reference 1 points out that settling time can become a concern for large-format HDTV CCDs, and correlated double sampling cannot occur and still meet the required data rates. Fixed pattern noise is a critical noise source for CMOS APS FPAs. Although significant improvements in this noise source have been made in the 2000s, it can be a dominant noise source for some applications. Reference 2 states,
Additionally, Ref. 3 cautions that, although correlated double sampling can reduce FPN noise in CMOS, There always remains some residue to this FPN, which can be important if no care is taken. To lower it, it is important to model and quantify FPN as a function of the design parameters, which includes the layout level and technology matching parameters given by the foundry.
The interested reader is also referred to the other related rules in this book relating to correlated double sampling, RSS of noise sources, and the noise bandwidth.
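The sketch below implements Eq. (1) as reconstructed above, with all terms combined as an RSS in electrons. Every numerical input is an assumed, representative value (not data from this book), and the H·R product is folded into a single per-pixel photocurrent to keep the units unambiguous.

```python
import math

Q = 1.602176634e-19  # electronic charge, C

def fpa_noise_electrons(i_photo, i_leak, t, e_n, i_n, delta_f, c_s, g_m, fpn_or_nt):
    """RMS noise charge in electrons, per Eq. (1)/(2): photocurrent shot noise, leakage
    (dark) shot noise, source-follower voltage noise and off-chip op-amp current noise
    referred to the diode node, plus residual FPN (APS) or transfer noise (CCD)."""
    shot_var = i_photo * t / Q                                   # signal shot-noise variance, e-^2
    dark_var = i_leak * t / Q                                    # dark shot-noise variance, e-^2
    en_var = (e_n * math.sqrt(delta_f) * c_s / Q) ** 2           # on-chip amplifier voltage noise, e-^2
    in_var = (i_n * math.sqrt(delta_f) * c_s / (Q * g_m)) ** 2   # off-chip amplifier current noise, e-^2
    return math.sqrt(shot_var + dark_var + en_var + in_var + fpn_or_nt ** 2)

# Assumed, representative values (illustrative only):
qn = fpa_noise_electrons(i_photo=5e-14, i_leak=1e-15, t=1 / 30,
                         e_n=20e-9, i_n=1e-12, delta_f=1e6,
                         c_s=20e-15, g_m=1e-4, fpn_or_nt=20.0)
print(f"Qn ~ {qn:.0f} electrons rms")
```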
References 1. J. Hall, “Characterization and Calibration of Signal-Generating Image Sensors,” ElectroOptical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 7-55 to 7-57, 2000. 2. S. Miyatake, et al., “Transversal-Readout Architecture for CMOS Active Pixel Image Sensors,” IEEE Transactions on Electron Devices, 50(1), pp. 121–129, January 2003. 3. A. Afzalian and D. Flandre, “Modelling of the Bulk versus SOI CMOS Performances for the Optimal Design of APS Circuits in Low-Power Low-Voltage,” IEEE Transactions on Electron Devices, 50(1), pp. 106–110, January 2003. 4. J. Hall, “Arrays and Charge-Coupled Devices,” Applied Optics And Optical Engineering, Vol. 8, R. Shannon and J. Wyant, Eds., Academic Press, New York, pp. 377–379, 1980. 5. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 420–421, 1981.
PHOTOMULTIPLIER TUBE POWER SUPPLY NOISE To keep the gain of the tube stable to 1 percent, control the power supply voltage to 0.03 percent at 1000 V.
Discussion Photomultiplier tubes (see Fig. 18.5) are fast and sensitive. Stability is sometimes an issue, as the dynode sensitivity is a function of the applied voltage. The absolute voltages are typically high (e.g., –500 to 2000 V). As Hobbs points out, this is usually “provided by a powerful high-voltage supply and a multi-tap voltage divider made of high-value resistors.”2 A 0.03 percent change in voltage at 1000 V is a 0.3-V change. This voltage change results in a subsequent change in gain, resulting in noise.
FIGURE 18.5 Typical PMT architecture. (From www.chem.vt.edu/chem-ed/optics.)
References 1. http://www.chem.vt.edu/chem-ed/optics/detector/pmt.html, 2003.
2. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 98–101, 2000. 3. http://www.laidback.org/~daveg/academic/labreports/rep3/PMT.html, 2003. 4. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York pp. 116–121, 1984. 5. C. Soodak, Application Manual for Photomultiplier Tubes, American Instrument Co., 1972.
P-WELL CCDS ARE HARDER THAN N-TYPE P-well CCDs have better radiation performance than do n-type devices, and CMOS can be quite hard.
Discussion According to Williams,1 conventional n-channel CCDs have phosphorus-doped buried channels and suffer from the generation of phosphorus-vacancy (P-V) electron traps that degrade charge transfer efficiency. The dominant hole trap expected after proton irradiation of a p-channel CCD is the divacancy. Divacancy formation is considered to be less favorable in a p-channel CCD as compared to P-V formation in an n-channel CCD. In addition, the energy level of the divacancy, 0.21 eV above the valence band, is not likely to yield efficient dark current generation sites as compared to P-V sites, located closer to the middle of the bandgap (0.42 to 0.46 eV below the conduction band edge). CCDs have been shown to perform without significant degradation when exposed to about 4 krad of ionizing radiation and approximately 50,000 energetic neutrons (typically 0.5 to 5 MeV).2 If properly designed, CMOS APS sensors can withstand quite high doses [e.g., 63 MeV proton radiation to an equivalent total dose of 1.3 Mrad (Si)].3 Moreover, Ref. 3 indicates that typically the front gate threshold shifts are very small for large dosages, and the “weak link” with respect to radiation hardness is the back gate. However, some common sensor system electronic components can fail before the CCD or CMOS focal planes, especially when tested at relatively high dose rates. For example, Ref. 4 reports that the DSP56001 chip fails below 3 krad (Si) when tested at a dose rate of 100 rad/sec, whereas it operates successfully to 15 to 20 krad (Si) when tested at a lower dose rate; in all cases, recovery (annealing) occurred, and no permanent damage was observed.
References 1. Private communications with George Williams, 2003. 2. K. Klaasen et al., “Operations and Calibration of the Solid-State Imaging System during the Galileo Extended Mission at Jupiter,” Optical Engineering, 42(2), pp. 494–509, February 2003. 3. Y. Li et al., “The Operation of 0.35 µm Partially Depleted SOI CMOS Technology in Extreme Environments,” Solid State Electronics, Vol. 47, pp. 1111–1115, 2003. 4. G. Eppeldauer, “Temperature Monitored/Controlled Silicon Photodiodes for Standardization,” Proc. SPIE, Vol. 1479, Surveillance Technologies, 1991. 5. S. Holland, et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions On Electron Devices, 50(1), pp. 225–238, January 2003. 6. J. Bogaerts et al., “Total Dose and Displacement Damage Effects in a Radiation-Hardened CMOS APS,” IEEE Transactions On Electron Devices, 50(1), pp. 84–90, January 2003. 7. Y. Li et al., “Proton Radiation Effects in 0.35 µm Partially Depleted SOI MOSFETs Fabrication on UNIBOND,” IEEE Transactions Nuclear Science, 49(6), pp. 2930–2936, 2002.
RICHARDSON'S EQUATION FOR PHOTOCATHODE THERMIONIC CURRENT Richardson's equation gives the photocathode thermionic current as

it = Ad S T² e^(–Φ0/kT)

where Ad = photocathode area
T = temperature
Φ0 = photocathode work function
k = Boltzmann's constant
S = a constant equal to

S = 4πmqk²/h³

where m = mass of the electron
q = charge of the electron
h = Planck's constant
Discussion Thermionic emission is the spontaneous emission of an electron as a result of random thermal energy. The higher the temperature, the more emission occurs, because the random thermal motion of electrons more often exceeds the energy needed to eject one. This can be the dominant contributor to dark noise in a photomultiplier tube or microchannel plate. Reference 1 states that the T² term indicates that “cooling the PMT will reduce dark current and therefore increase the linear dynamic range at the small-signal end. Cooling a PMT to about –40°C (233 K) will often reduce the thermionic contribution below the other sources of dark current.” Although developed for photomultiplier tubes, this is useful for any photocathode component. The equation predicts that the current will increase by about a factor of 2 for every 4 to 6°C increase in temperature (at room temperatures); this is a contributor to the “Increase in Intensifier Photocathode EBI with Temperature” rule (p. 362).
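A sketch of Richardson's equation as reconstructed above. The photocathode area and the 1.2-eV effective work function are assumed illustrative values (a real tube's effective work function must come from its data sheet); with them, the computed current roughly doubles for a few degrees of temperature rise, consistent with the statement above.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # electron charge, C
M_E = 9.1093837015e-31  # electron mass, kg
H_P = 6.62607015e-34    # Planck constant, J*s

S_CONST = 4.0 * math.pi * M_E * Q_E * K_B ** 2 / H_P ** 3  # ~1.2e6 A/(m^2 K^2)

def thermionic_current(area_m2, temp_k, work_function_ev):
    """Photocathode thermionic dark current from Richardson's equation."""
    return area_m2 * S_CONST * temp_k ** 2 * math.exp(-work_function_ev * Q_E / (K_B * temp_k))

# Assumed: 1-cm^2 photocathode with a 1.2-eV effective work function.
ratio = thermionic_current(1e-4, 297.0, 1.2) / thermionic_current(1e-4, 293.0, 1.2)
print(f"Current ratio for a 4 C rise: {ratio:.2f}")  # roughly 2, matching the 4 to 6 C doubling noted above
```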
References 1. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York, pp. 118–119, 1984. 2. http://dept.physics.upenn.edu/balloon/phototube.html, 2003. 3. http://bilbo.bio.purdue.edu/~baker/courses/595R/equat_em.pdf, 2003.
SILICON QUANTUM EFFICIENCY In-band quantum efficiency is typically 30 percent for conventional front-illuminated devices and up to 80 percent for thinned back-illuminated devices (Fig. 18.6).
FIGURE 18.6 CCD quantum efficiency. (From Ref. 1.)
Discussion As a material, silicon tends to have a very high quantum efficiency for visible photons, although many can be lost as a result of reflection if a suitable antireflective coating isn’t applied. At room temperature, intrinsic silicon is a semiconductor with a bandgap of 1.12 eV. A 1.12-eV bandgap makes the production of electron-hole pairs possible for absorbed photons with wavelengths less than about 1.1 µm. Intrinsic photodetectors do not require doping for detection and produce carriers (electrons and/or holes) when a photon is absorbed by band-to-band transitions. Typical visible detector arrays do not have doping to alter the bandgap and operate as intrinsic devices. Silicon can be doped to reduce the bandgap (allowing detection far into the longwave infrared). This heavily doped silicon is an extrinsic material and produces carriers by transitions involving forbidden gap energy levels. Quantum efficiency is affected by many factors. First, the photons need to be absorbed into the material and not reflected or transmitted through the active region. Detectors usually have antireflective coatings applied to them to maximize the absorptance. Then, the photons need to survive to the depletion region to generate useful electron-hole pairs. When the photon energy generates a carrier, the carrier must migrate to the collection well and be captured and held until it is read out. All of these effects occur with some inefficiencies that reduce the total effective quantum efficiency. Typical visible silicon detectors (whether CCD, CID, or APS) tend to have quantum efficiencies of around 30 to 40 percent. Back-illumination architectures and thin material can boost this number to over 80 percent. Often, with CCDs and APS devices, the product of the fill factor and quantum efficiency is quoted; unfortunately, this is not always clear in data sheets or web sites. The addition of on-focal-plane electronics, such as antiblooming circuits, reduces the active area and thus the product of the fill factor and the quantum efficiency. However, antiblooming does not reduce quantum efficiency (which is a function of material, antireflection coatings, and
capture efficiency). Antiblooming can reduce fill factors to 70 percent of that of a non-antiblooming CCD. The CCD with antiblooming will then need to integrate twice as long for the same sensitivity. For the reader’s convenience, we provide the following references that include additional plots and information on this topic.
References 1. G. Williams, H. Marsh, and M. Hind,. “Back-Illuminated CCD Imagers for High Information Content Digital Photography,” Proc. SPIE, Vol. 3302, Digital Solid State Cameras: Design and Applications, 1998. 2. S. Holland et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions on Electron Devices, 50(1), pp. 225–238, January 2003. 3. J. Tower et al., “Large Format Backside Illuminated CCD Imager for Space Surveillance,” IEEE Transactions On Electron Devices, 50(1), pp. 218–224, January 2003.
WILLIAMS' LINES OF RESOLUTION PER MEGAHERTZ From Ref. 1,

Lines of resolution per megahertz = 2Tl/A

where A = frame aspect ratio in decimal format (a 4:3 ratio = 1.33)
Tl = active CCD line time in microseconds
Discussion This is based on simple math, right out of the NTSC/ITU standards, and allows one to quickly estimate the resolution as a function of electronic speed. It can have significant system effects. Lines of resolution is a technical parameter that has been in use since the introduction of analog television. The measurement of lines of resolution attempts to give a comparative value to enable the evaluation of one television or video system against another in terms of overall resolution. Note that the reference here to system and overall indicates that this measurement refers to a complete video or television system. This includes everything employed to display the image, including the lens, camera, video tape (if used), and all the electronics that make the entire system work. This number (horizontal or vertical) indicates the overall resolution of a complete television or video system. There are two types of this measurement: (1) lines of horizontal resolution, also known as LoHR, and (2) lines of vertical resolution, or LoVR. However, it is much more common to see the term TVL (for TV lines). Note that this is different from the simple display lines (e.g., HTVL) referred to in Chap. 7, and the reader will find similar rules relating to analog displays in that chapter. There are some common misconception pitfalls. Lines of resolution is not the same as the number of pixels (either horizontal or vertical) found on a camera’s CCD, on a digital monitor, or on other displays such as a video projector. It is also not the same as the number of scanning lines used in an analog camera or television system such as PAL, NTSC, SECAM, and so on. Lines of resolution refers to the limit of visually resolvable lines per picture height (e.g., TVL/ph = TV lines per picture height). In other words, it is measured by counting the number of horizontal or vertical black and white lines that can be distinguished on an area
that is as wide as the picture is high. The idea is to make this measurement independent of the aspect ratio. If the system has a horizontal resolution of, for example, 750 lines, then the whole system (lens + camera + tape + electronics) can provide 375 perceptible black lines and 375 white perceptible spaces in between (375 + 375 = 750 lines). In either case, if you add any more lines per picture height, then you can’t reliably resolve the lines and spaces in a distinguishable manner, and the system has reached its limit of resolving detail. Lines of horizontal resolution applies to not only cameras but also to television displays, to signal formats such as those produced by a DVD player, and so forth. Therefore, when people talk about lines of resolution but don’t specify if they are horizontal or vertical lines, you need to be cautious. If a manufacturer doesn’t make the reference clear, then you can assume them to be horizontal numbers, because these are always larger numbers, so they sound more impressive. The reader should also see the related rule on number of display lines in Chap. 7, “Displays.”
Example For a 4:3 format and an active line time of 53.3 µsec, there are 79.96 lines per MHz. For a 16:9 format, there are 59.95 lines per MHz.
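The example above is just the formula evaluated twice; a sketch:

```python
def lines_per_mhz(active_line_time_us, aspect_ratio):
    """Lines of resolution per MHz of bandwidth: 2 * Tl / A."""
    return 2.0 * active_line_time_us / aspect_ratio

print(f" 4:3 format: {lines_per_mhz(53.3, 4 / 3):.1f} lines per MHz")   # ~80
print(f"16:9 format: {lines_per_mhz(53.3, 16 / 9):.1f} lines per MHz")  # ~60
```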
References 1. Private communications with George Williams, 2003.
Appendix A
Tables of Useful Values and Conversions

The following idiosyncratic collection of tables and values represents frequently needed approximate information and conversion factors for the EO practitioner. This eclectic collection consists of constants, conversions, and definitions categorized by the authors' whims. Although sometimes included, the authors are not encouraging the use of deprecated and archaic units. Such units are included here only as a help with their translation to more widely accepted SI units.

TABLE A.1 Angular Measurement

Arcsecond    4.848 microradians; 2.77 × 10^–4 degrees
Degree       0.01745 radians; 17.5 milliradians; 60 arcminutes; 3600 arcseconds; 17,452 microradians
Arcminute    0.0167 degrees; 2.909 × 10^–4 radians
Radian       0.159 of the circumference; 57.296 degrees; 3438 arcminutes; 2.06 × 10^5 arcseconds
Steradian    0.08 of the total solid angle
Rads/sec     0.159 revolutions per second; 9.55 revolutions per minute; 57.3 degrees per second
RPM          6 degrees per second; 0.0167 revolutions per second; 0.105 radians per second
RPS          21,600 degrees per minute
TABLE A.2 Area Measurements

Square centimeter   1.076 × 10^–3 square foot; 0.155 square inch; 1 × 10^–4 square meter
Square mil          645 square micrometers
Square inch         6.452 square centimeters
Square foot         929 square centimeters; 0.092 square meters
TABLE A.3 Astronomy

Astronomical unit (mean Earth–Sun distance)   1.496 × 10^8 kilometers; 93 million miles
Light year                                    9.46 × 10^15 meters; 9.46 × 10^12 kilometers; 5.88 × 10^12 miles
Parsec                                        3.26 light years; 3.09 × 10^13 kilometers
Effective solar temperature                   5900 K
Solar constant                                1350 to 1390 W/m^2 (mean above atmosphere)
Irradiance of a zeroth magnitude star         3.1 × 10^–13 W/cm^2
Astronomical visual absorption                ≈0.2 mag per airmass

TABLE A.4 Astronomical Bands (Given Flux Is Associated with a Star of Visual Magnitude Zero)

Band          Center wavelength   Bandwidth   Flux                    Flux            Jansky
abbreviation  (µm)                (µm)        (photons/m^2·µm·sec)    (W/cm^2/µm)
U             0.365               0.068       7.90 × 10^10            4.30 × 10^–12   1910.6
B             0.44                0.098       1.60 × 10^11            7.23 × 10^–12   4664.7
V             0.55                0.089       9.6 × 10^10             3.47 × 10^–12   3498.5
R             0.7                 0.22        6.20 × 10^10            1.76 × 10^–12   2875.7
I             0.88                0.24        4.90 × 10^10            1.11 × 10^–12   2857.1
J             1.25                0.26        2.02 × 10^10            3.21 × 10^–13   1673.1
H             1.65                0.29        9.56 × 10^9             1.15 × 10^–13   1045.2
K             2.2                 0.41        4.53 × 10^9             4.09 × 10^–14   660.3
L             3.4                 0.57        1.17 × 10^9             6.84 × 10^–15   263.6
M             5                   0.45        5.06 × 10^8             2.01 × 10^–15   167.6
N             10.4                5.19        5.07 × 10^7             9.69 × 10^–17   34.9
Q             20.1                7.8         7.26 × 10^6             7.18 × 10^–18   9.7
TABLE A.5 Atmospherics

Absorption of CH4                Bands centered at 3.31, 6.5, and 7.6 µm
Absorption of CO2                1.35 to 1.5 µm; 1.8 to 2.0 µm; ≈4.2 to 4.6 µm; ≈14 to 16 µm
Absorption of H2O                1.35 to 1.5 µm; 1.8 to 2.0 µm; ≈2.7 to 3.0 µm; 5.6 to 7.1 µm (with the main absorption in ≈6.1 to 6.5 µm); and some minor narrow bands centered at 0.94, 1.1, 1.38, and 3.2 µm
Absorption of NO2                3.9 µm; 4.5 µm; 7.7 µm; 17.1 µm; and various bands in the UV
Absorption of ozone              ≈0.15 to 0.3 µm (peak at ≈0.26 µm)
Atmospheric pressure             101,325 N/m^2; 101 kPa; 760 mm of Hg at sea level
Density of air @ STP             1.29 × 10^–3 g/cc; 1.29 kg/m^3
Troposphere altitude (nominal)   0 to ≈11 km (depends on season and latitude)
Stratosphere (nominal)           11 to 24 km (Some misguided folks define the stratosphere to include the mesosphere.)
Mesosphere (nominal)             24 to 80 km
Thermosphere (nominal)           80 to ≈7000 km
Pressure of std. atmosphere      1.01 × 10^5 nt/m^2; 14.7 psi

TABLE A.6 CCD Size*

Format     Approximate        Approximate unit cell size for     Approximate well
           dimensions (mm)†   nominal 768 × 480 array (µm)       size (electrons)
           19 × 14            25 × 30                            >500,000
1-inch     12.8 × 9.6         16.7 × 20                          330,000
2/3-inch   8.8 × 6.6          11.4 × 13.8                        160,000
1/2-inch   6.4 × 4.8          8.3 × 10                           80,000
1/3-inch   4.8 × 3.6          6.3 × 7.5                          40,000
1/4-inch   3.65 × 2.74        4.8 × 5.5