
Spectral Sensing Research for Surface and Air Monitoring in Chemical, Biological and Radiological Defense and Security Applications

SELECTED TOPICS IN ELECTRONICS AND SYSTEMS Editor-in-Chief: M. S. Shur

Published

Vol. 34: Radiation Effects and Soft Errors in Integrated Circuits and Electronic Devices, eds. R. D. Schrimpf and D. M. Fleetwood
Vol. 35: Proceedings of the 2004 IEEE Lester Eastman Conference on High Performance Devices, ed. Robert E. Leoni III
Vol. 36: Breakdown Phenomena in Semiconductors and Semiconductor Devices, M. Levinshtein, J. Kostamovaara and S. Vainshtein
Vol. 37: Radiation Defect Engineering, Kozlovski V. and Abrosimova V.
Vol. 38: Design of High-Speed Communication Circuits, ed. R. Harjani
Vol. 39: High-Speed Optical Transceivers, eds. Y. Liu and H. Yang
Vol. 40: SiC Materials and Devices — Vol. 1, eds. M. S. Shur, S. Rumyantsev and M. Levinshtein
Vol. 41: Frontiers in Electronics. Proceedings of the WOFE-04, eds. H. Iwai, Y. Nishi, M. S. Shur and H. Wong
Vol. 42: Transformational Science and Technology for the Current and Future Force, eds. J. A. Parmentola, A. M. Rajendran, W. Bryzik, B. J. Walker, J. W. McCauley, J. Reifman and N. M. Nasrabadi
Vol. 43: SiC Materials and Devices — Vol. 2, eds. M. S. Shur, S. Rumyantsev and M. Levinshtein
Vol. 44: Nanotubes and Nanowires, ed. Peter J. Burke
Vol. 45: Proceedings of the 2006 IEEE Lester Eastman Conference on Advanced Semiconductor Devices, eds. Michael S. Shur, P. Maki and J. Kolodzey
Vol. 46: Terahertz Science and Technology for Military and Security Applications, eds. Dwight L. Woolard, James O. Jensen, R. Jennifer Hwu and Michael S. Shur
Vol. 47: Physics and Modeling of Tera- and Nano-Devices, eds. M. Ryzhii and V. Ryzhii
Vol. 48: Spectral Sensing Research for Water Monitoring Applications and Frontier Science and Technology for Chemical, Biological and Radiological Defense, eds. D. Woolard and J. Jensen


Selected Topics in Electronics and Systems – Vol. 49

Spectral Sensing Research for Surface and Air Monitoring in Chemical, Biological and Radiological Defense and Security Applications

Editors

Jean-Marc Thériault Defence Research & Development, Canada

James O. Jensen US Army Edgewood Chemical Biological Center, USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI


Published by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224 USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

Selected Topics in Electronics and Systems — Vol. 49 SPECTRAL SENSING RESEARCH FOR SURFACE AND AIR MONITORING IN CHEMICAL, BIOLOGICAL AND RADIOLOGICAL DEFENSE AND SECURITY APPLICATIONS Copyright © 2009 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN-13 978-981-283-591-8 ISBN-10 981-283-591-1

Editor: Tjan Kwang Wei

Printed in Singapore.


International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) iii–vi © World Scientific Publishing Company

FOREWORD

This special journal issue contains select papers from the 2006 International Symposium on Spectral Sensing Research (2006 ISSSR) that fall into the subject matter areas of Multispectral/Hyperspectral Techniques for Surface & Air Monitoring in Chemical, Biological and Radiological (CB&R) Defense Applications. The specific focus of the 2006 ISSSR was on the creation of new technology-program oriented networks that will serve as a research and development foundation for the advancement of the state-of-the-art in spectroscopic-based early-warning sensor capabilities. In recent years, spectral sensing has experienced rapid technical advancement that has led to practical field sensors. Spectral-based techniques therefore exhibit clear potential for providing more effective, economical and supportable (i.e., reagentless) solutions to military and homeland defense early-warning monitoring requirements for water, surface and air related sensing applications.

An increased emphasis on reagentless spectroscopy is motivated primarily by performance issues associated with traditional chemical and biological (CB) point and standoff techniques. In particular, sensors have previously been developed and fielded that rely heavily on reagents and/or burdensome support structures that are expensive and difficult to maintain and that have serious false alarm issues. Previously implemented technologies include biological assays, mass spectrometry and ion mobility. Other explored methodologies include novel materials (MIPs, smart ligands, amino acid sequences, aptamers, sol-gel, aerogel, electro-conducting polymers, etc.) or bulk property interactions (electrochemistry, surface acoustic wave, surface plasmon resonance, thermal capacity) and combinations of the two.

At this time, extensive expertise exists in the multispectral/hyperspectral community for applications such as airborne and space-based sensing and imaging, which has proved effective in monitoring weather, resource management (agriculture, forestry), oil/mineral deposits and CB detection in air releases. Hence, spectral-based techniques clearly have potential for providing near- to mid-term solutions for many of the monitoring problems associated with CB&R contamination of water, surfaces and air. However, the ultimate realization of such spectroscopic techniques will probably require the fusion of many types of spectral-sensing techniques and modalities. Therefore, standoff and point interrogation sensors are now sought that can provide extremely high confidence in CB&R detection and monitoring scenarios, and the goal of the 2006 ISSSR and this companion special issue is to organize and focus the science and technology base towards these important challenges.


This special issue begins with a collection of research and development papers on the subject of spectroscopic and lidar techniques for “Surface Sensing and Monitoring” that have relevance to a general array of CB&R defense and security applications. The first section of papers focuses on novel scientific techniques and phenomenology applications that offer the potential for enhancing the effectiveness of existing hyperspectral/multispectral surface sensing methodologies. Examples of these research and development efforts include: Creation of reference data for vapor and liquid infrared spectral signatures suitable for quantitative analyses in environmental monitoring; Novel approach for passive standoff detection of surface contaminants by differential polarization FTIR spectrometry, which mitigates sky radiance drifts and favours unambiguous contaminant detections; Advances in principal components analysis for the detection and classification of organic and organophosphorus analytes on soil from reflection-absorption spectroscopy; Novel developments in the detection of invisible bacilli spores on surfaces using a portable surface-enhanced Raman spectroscopy (SERS) analyzer, which provides identification of chemicals based on their unique spectrum; Novel concept and simulation of a multiple-field-of-view (MFOV) lidar for the standoff determination of bioaerosol size based on double scattering measurements; Advances in infrared spectrometry and analysis for the detection and differentiation of spore and vegetative forms of Bacillus spp.; New application of passive standoff radiometry for the measurement of explosives, with a field result at a standoff distance of 60 m; Investigation of signature and signal generation for explosive detection using THz time-domain spectroscopy, which addresses the selectivity of THz spectra to distinguish the clutter from background spectra; Recent advances and results in differential passive Long Wave Infrared (LWIR) radiometry for the detection, identification and quantification of toxic chemical vapor clouds in an open-air environment; Comparative analyses of spectral background statistics in direct and differential Fourier Transform (LWIR) measurements, which serve to optimize spectral detection methods of atmospheric contaminants.

The second portion of the “Surface Sensing and Monitoring” section focuses on advanced sensing technology and algorithm developments that will impact spectroscopic-based sensing in the future. Some examples of these research and development efforts include: Overview of novel techniques under investigation by NVESD on active imaging of hard targets; Presentation of a high-resolution 2D imaging laser radar for occluded hard target viewing and identification, with verification of system performance under a variety of operating conditions; The emerging technology of 3D flash Ladar focal planes and time-dependent imaging, with evidence and applications of this technology; Presentation of the design for the advanced responsive tactically-effective military imaging spectrometer (ARTEMIS); Assessment of the effects of image segmentation on subspace-based and covariance-based detection of anomalous sub-pixel materials in hyperspectral visible/near-IR/SWIR imagery; Advanced design of a spectral processing method for laser-induced fluorescence from threatening biological aerosols, with simulations showing a good signal-to-background discrimination level;


Development of a processing method for improving the pixel purity index for endmember extraction in hyperspectral imagery; Novel methodology of signal processing for multicomponent Raman spectra of particulates for improving identification of chemical fingerprints; Development of a support vector classification method for land cover and benthic habitat from hyperspectral images; and Construction of a compact high-peak-power eye-safe optical parametric oscillator obtained by pumping with a master oscillator power amplifier.

The second section of this special issue contains a collection of research and development papers on the subject of spectroscopic techniques for “Air Sensing and Monitoring”. The goal of this portion of the special issue is to investigate technologies for early warning, detection, and identification of chemical and biological contaminants in the atmosphere. This part examines optically-based sensors that are capable of detecting and identifying contaminants in the air. This area is divided into two categories: (1) point technologies, where the sensor is in physical contact with the threat, and (2) standoff technologies, where a sensor is physically separated from the threat by some distance. Active standoff techniques require the propagation of a probe beam, such as in LIDAR systems. Passive standoff techniques rely on ambient electromagnetic radiation for detection and do not utilize probe beams.

New and novel spectroscopic techniques for detection, identification, and quantification of contaminants are examined. All regions of the electromagnetic spectrum, from radio waves to x-rays, are considered to be of interest. New spectral methods for the discrimination of contaminants from atmospheric interferents are of interest. Methods that increase detection sensitivity while reducing false alarms are also examined. Fluorescence, Raman, infrared, and terahertz spectroscopy are possible detection techniques for locating and quantifying airborne chemical and biological contaminants. Wide area detection and surveillance involves the simultaneous monitoring of the atmosphere over large surface areas for a possible contaminant. Spectral imaging techniques provide continuous, real-time monitoring of large areas for pollution or other contaminants. Hyperspectral and ultraspectral imaging systems allow passive spatial/spectral monitoring of large areas which contain a possible pollution source. Hyperspectral and ultraspectral imaging systems also allow point source detection of hazards and determination of the flux of pollutants at important temporal and spatial scales. Airborne and ground-based sensors are possible platforms for sensor deployment.

Remote sensing of atmospheric pollutants usually involves scenarios with low signal-to-noise. New and novel signal processing techniques are required in order to extract the pollutant's spectroscopic signatures. Radiometric models are very important in designing new sensors. Good models allow one to computationally prototype and test a sensor before actually building it. New excitation sources for optical detection are examined. New laser sources for CB detection are relevant. Better sources in other regions of the electromagnetic


spectrum, such as the far infrared and millimeter wave regions, are also being sought. Recent developments in IR source technologies have expanded the possibilities for creating chemical and biological sensors which are compact, highly selective, and extremely sensitive. Examples of these IR source technologies include quantum cascade lasers, optical parametric oscillators and difference frequency generation.

Finally, the organizing committee of the 2006 ISSSR and the Editors of this special issue would like to recognize the following Best Paper Presentations that lead off this collection of technical papers:

Infrared Spectral Signatures: Creation of Reference Data for Vapors and Liquids, by Steven Sharpe, Pacific Northwest National Laboratory

Passive Standoff Detection of Surface Contaminants: A Novel Approach by Differential Polarization FTIR Spectrometry, by Jean-Marc Thériault, DRDC-Valcartier, Canada

Background Contributions in Direct and Differential Fourier Transform LWIR Measurements: A Comparative Analysis, by Francois Bouffard, DRDC-Valcartier, Canada

Signal Processing of Multi-Component Raman Spectra of Particulate Matter, by Javier Fochesatto, University of Alaska, Fairbanks

Computed Tomographic Imaging Spectrometer (CTIS) and a Snapshot Hyperspectral Imager and Polarimeter, by John Hartke, United States Military Academy

Wide Area Spectrometric Bioaerosol Monitoring in Canada: From SINBAHD to Biosense, by Jean-Robert Simard, DRDC-Valcartier, Canada

Editors
Jean-Marc Thériault, Defence Research & Development, Canada
James Jensen, U.S. Army Edgewood Chemical Biological Center


CONTENTS

Foreword   v

Surface Sensing & Monitoring Sessions

Infrared Spectral Signatures: Creation of Reference Data for Vapors and Liquids
   S. Sharpe, T. Johnson, R. Sams, J. Hylden, J. Kleimeyer and B. Rowland   3

Passive Standoff Detection of Surface Contaminants: A Novel Approach by Differential Polarization FTIR Spectrometry
   J.-M. Thériault, H. Lavoie, E. Puckrin and F. Bouffard   23

Background Contributions in Direct and Differential Fourier Transform LWIR Measurements: A Comparative Analysis
   F. Bouffard and J.-M. Thériault   35

Signal Processing of Multicomponent Raman Spectra of Particulate Matter
   J. Fochesatto and J. Sloan   49

Signature and Signal Generation Aspects of Explosive Detection Using Terahertz Time-Domain Spectroscopy
   R. Osiander, M. J. Fitch, M. Leahy-Hoppa, Y. Dikmelik and J. B. Spicer   67

Novel Application of Passive Standoff Radiometry for the Measurement of Explosives
   E. Puckrin, J.-M. Thériault, H. Lavoie, D. Dubé and P. Brousseau   79

Detection and Classification of Organic and Organophosphorus Analytes on Soil from Reflection-Absorption Spectroscopy
   T. A. Blake, P. L. Gassman and N. B. Gallagher   91

Support Vector Classification of Land Cover and Benthic Habitat from Hyperspectral Images
   V. Manian and M. Velez-Reyes   109

Some Effects of Image Segmentation on Subspace-Based and Covariance-Based Detection of Anomalous Sub-Pixel Materials
   C. Gittins, D. Konno, M. Hoke and A. Ratkowski   121

Advanced Responsive Tactically-Effective Military Imaging Spectrometer (ARTEMIS) Design
   T. W. Cooley, R. B. Lockwood, T. M. Davis, R. M. Nadile, J. A. Gardner, P. S. Armstrong, Capt. A. M. Payton, Capt. S. D. Straight, Lt. W. C. Henry, T. G. Chrien, E. L. Gussin and D. Makowski   141

Eyesafe Active Imaging of Hard Targets: An Overview of Techniques Under Investigation by NVESD
   B. W. Schilling, S. R. Chinn, B. Thomas and T. J. Scholz   147

A High-Resolution 2D Imaging Laser Radar for Occluded Hard Target Viewing and Identification
   R. J. Grasso, J. C. Wikman, D. P. Drouin, G. F. Dippel and P. I. Egbert   165

Three Dimensional Flash Ladar Focal Planes and Time Dependent Imaging
   R. Stettner, H. Bailey and S. Silverman   173

Detection of Invisible Bacilli Spores on Surfaces Using a Portable SERS-Based Analyzer
   S. Farquharson and F. E. Inscore   179

Detection and Differentiation of Spore and Vegetative Forms of Bacillus spp. Using Infrared Spectroscopic Methods
   D. St. Amant, M. Campbell, A. Beck, L. Williams, J. Minter, P. Collett, C. Zhu and A. Samuels   189

Spectral Processing of Laser-Induced Fluorescence from Threatening Biological Aerosols
   P. Lahaie, J. R. Simard, J. McFee, S. Buteau, J. Ho, P. Mathieu, G. Roy and V. Larochelle   201

Standoff Determination of Bioaerosol Size Based on Double Scattering Measurement With MFOV Lidar; Concept and Numerical Simulation
   G. Roy and L. R. Bissonnette   217

Detection and Identification of Toxic Chemical Vapors in an Open-Air Environment by a Differential Passive LWIR Standoff Technique
   H. Lavoie, E. Puckrin and J.-M. Thériault   229

A Pyramid-Based Block of Skewers for Pixel Purity Index for Endmember Extraction in Hyperspectral Imagery
   C.-I. Chang, M. Hsueh, W. Liu, C.-C. Wu, F. Chaudhry, G. Solyar and A. Plaza   241

A Compact Eye-Safe OPO Pumped by a Nd:YAG Microchip MOPA
   J. Ding, B. W. Odom, A. R. Geiger and R. D. Richmond   255

Air Sensing & Monitoring Sessions

Wide Area Spectrometric Bioaerosol Monitoring in Canada: From SINBAHD to Biosense
   J.-R. Simard, S. Buteau, P. Lahaie, P. Mathieu, G. Roy, V. Larochelle, B. Dery, J. McFee and J. Ho   267

Computed Tomographic Imaging Spectrometer (CTIS) and a Snapshot Hyperspectral Imager and Polarimeter
   J. Hartke, N. Hagan, B. A. Kinder and E. L. Dereniak   279

Hyperspectral Imaging Using Chromotomography: A Fieldable Visible Instrument for Transient Events
   R. L. Bostick and G. P. Perram   293

Advanced Hyperspectral Algorithms for Tactical Target Detection and Discrimination
   A. Schaum   305

AIRIS — The Canadian Hyperspectral Imager; Current Status and Future Developments
   P. Fournier, T. Smithson and D. St-Germain   319

The Hypertemporal-Hyperspectral Analysis Test Station — HYHATS
   T. Old, R. Hendrick, D. Higham, N. Palmer and C. Manning   331

Wavelength Selective Bolometer Design
   S. Han, J.-Y. Jung and D. P. Neikirk   343

Multisensory Detection System for Damage Control and Situational Awareness
   C. P. Minor, D. A. Steinhurst, K. J. Johnson, S. L. Rose-Pehrsson, J. C. Owrutsky, S. C. Wales and D. T. Gottuk   349

Inexpensive Chemical Defense Network for a Fixed Site
   J. A. Seeley, M. Angel, R. L. Aggarwal, T. H. Jeys, A. Sanchez-Rubio, W. Dinatale and J. M. Richardson   367

Precision Measurement of Atmospheric Trace Constituents Using a Compact Fabry-Perot Radiometer
   W. S. Heaps, E. L. Wilson and E. M. Georgieva   375

Background Characterization with a Scanned Fourier Transform Spectrometer
   A. K. Lazarevich, D. A. Oursler and D. D. Duncan   387

Spectral Signatures of Acetone Vapor from Ultraviolet to Millimeter Wavelengths
   R. E. Peale, A. V. Muravjov, C. J. Fredricksen, G. D. Boreman, H. Saxena, G. Braunstein, V. L. Vaks, A. V. Maslovsky and S. D. Nikifirov   401

The Standoff Aerosol Active Signature Testbed (SAAST) at MIT Lincoln Laboratory
   J. M. Richardson and J. C. Aldridge   413

Discrimination Between Natural Dense Dust Clouds with IR Spectral Measurements
   E. Agassi, A. Ronen, N. Shiloah and E. Hirsch   421

Signal Processing Algorithms for Staring Single Pixel Hyperspectral Sensors
   D. Manolakis, M. Rossacci, E. O’Donnell and F. M. D’Amico   435

Performance Estimation Tools for: Decoupling by Filtering of Temperature and Emissivity (DEFILTE), An Algorithm for Thermal Hyperspectral Image Processing
   P. Lahaie   449

Estimating the Limit of Bio-Aerosol Detection with Passive Infrared Spectroscopy
   A. Ifarraguerri, A. Ben-David and R. G. Vanderbeek   475

Eye Safe Polarization Diversity LIDAR for Aerosol Studies: Concept Design and Preliminary Applications
   J. Fochesatto, R. L. Collins, K. Sassen, H. Quantz and K. Ganapuram   487

Aerosol Type-Identification using UV-NIR-IR LIDAR System
   S. Egert and D. Peri   501

Rare-Earth Doped Potassium Lead Bromide Mid-IR Laser Sources for Standoff Detection
   K. C. Mandal, S. H. Kang, M. Choi and R. D. Rauh   509

3D Deconvolution of Vibration Corrupted Hyperspectral Images
   A. H. Webster, M. R. Davenport and J.-P. Ardouin   521

International Journal of High Speed Electronics and Systems © World Scientific Publishing Company

ISSSR 2006

SURFACE SENSING AND MONITORING SESSION


International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 231–250 © World Scientific Publishing Company

INFRARED SPECTRAL SIGNATURES: CREATION OF REFERENCE DATA FOR VAPORS AND LIQUIDS

STEVEN SHARPE¹, TIMOTHY JOHNSON¹, ROBERT SAMS¹, JEFFREY HYLDEN¹, JAMES KLEIMEYER² AND BRAD ROWLAND²

¹Pacific Northwest National Laboratory, PO Box 999, Richland, Washington 99352, USA
[email protected]
²Dugway Proving Ground, USA


Two primary goals of infrared spectroscopic detection are chemical identification and quantification. In order to accomplish these goals, a comprehensive and quantitative spectral library suitable for digital manipulation is required. To a large degree, the contents of such a library depend on the application. Since the primary application of the PNNL/DOE spectral library is for environmental monitoring, we have focused our efforts on hazardous pollutants, as well as a large variety of natural and anthropogenic chemicals. As a spin-off project and in collaboration with Dugway Proving Ground, we also had the opportunity to analyze a limited set of chemical warfare agents (CWAs). An example of such data appears below.


Infrared optical properties of VX [O-ethyl S-(2-diisopropylaminoethyl) methylphosphonothioate] in the vapor and liquid phases. Refractive index data (top traces) are critical for modeling aerosol and reflection phenomena.


1. Introduction

Accurate and quantitative infrared spectra of liquid-phase chemical warfare agents (CWAs), including Tabun (GA), Sarin (GB), Soman (GD), Cyclosarin (GF), VX, Sulfur mustard (HD), Nitrogen mustard (HN3) and Lewisite-1 (L), are required for analysis of data from both active and passive optical sensor systems. Of particular importance is the need for both the real (dispersive) and imaginary (absorptive) components of the refractive index as a function of wavelength. This data is essential for predicting the optical properties of both the liquid and aerosolized CWAs.

2. Experiment

The experimental setup involved placing a Bruker Vector 33 Fourier transform infrared (FTIR) spectrometer into one of the surety hoods at the West Desert Chemical Facility, located within Dugway Proving Ground. The FTIR was fitted with a kinematic sample cell holder to receive the sample cells and may be seen in Figure 1. Spectrometer control was accomplished by a computer located outside of the Surety lab, connected via a long data-link cable. Liquid agent samples were transferred from their original vials to the sample cells via a Hamilton gas-tight syringe fitted with a Luer-lok connector. No more than 2.5 ml of each agent was required for the entire analysis.

Figure 1. Fourier transform infrared spectrometer similar to one used in the Dugway Proving Ground’s Surety facility.


2.1. Data acquisition parameters

Spectral resolution as defined by interferometer travel: 2 cm-1 (0.9/OPD)
Apodization: Boxcar
Zero-fill on interferogram: 2X (resulting in a data spacing of 0.96 cm-1)
Beamsplitter: KBr
Source: Silicon carbide
Phase correction: Mertz
Scan velocity: 10 kHz on He:Ne zero-crossings
Folding limits: 0 to 15,800 cm-1
Detector: DTGS

2.2. Sample cell construction

Due to the toxicity of the samples, it was decided that each sample cell would only be used once. A typical sample cell, from International Crystal Labs (ICL), is displayed in Figure 2 and consists of two rectangular potassium chloride (KCl) salt windows separated by a lead-amalgam spacer, sandwiched between two stainless steel plates. The top salt window has two holes extending through the window and allows the sample to enter the space created by the amalgam spacer and the two salt windows. The top steel plate is fitted with two Luer-lok connectors.

Figure 2. Sample cell used to analyze liquid CWA. Sample is injected into the bottom port until the liquid meniscus passes completely through the window area. Once filled, the two Luer-lok ports are snugly sealed with Teflon inserts.


Leakage occurred during preliminary testing of the sample cells, when triethylphosphate (TEP) was injected during practice runs. This leakage occurred from between the salt windows, between the top window and steel plate and from the base of the Luer-lok ports on the steel plates. As a precaution, all cells were sealed with copious amounts of a commercially available silicone sealant (GE Silicone-I, RTV). A potential concern was that some of the sealant might diffuse or leach into the cell causing interfering spectral features. In order to identify potential interferent features due to the sealant, a reference spectrum was acquired and appears in Figure 3. None of the agent spectra were found to contain spectral features due to silicone sealant.


Figure 3. Reference spectrum of silicone sealant used to seal sample cells. The peaks at 800 and 1260 cm-1 are due to Si-O-Si and the doublet centered at 1022 cm-1 is due to Si-O-C.

2.3. Sample cell characterization

During sample cell characterization it was found that the path length specified by the vendor and the actual path length were often not in agreement with one another. For instance, a cell specified as 15 µm was in fact 27 µm. In addition, the vendor could not supply us with short-path cells (~1 to 5 µm). As a consequence, we had to crush the salt-amalgam-salt cell in a hydraulic press. A pressure of 200-500 pounds per square inch was applied to a “25” µm cell for approximately 5 minutes in order to obtain a 1-5 µm path length. This crushing process was a black art with a mortality rate of 50%: the windows tended to delaminate and the cell was rendered useless. In order to rapidly pre-screen cells for adequate parallelism of the windows after crushing, the crushed cell was examined under a monochromatic light source (Hg lamp). A cell with parallel windows was indicated by a concentric pattern of light and dark circles, as may be seen in Figure 4.


Figure 4. A quick test for the quality of a sample cell consisted of illuminating the cell under a green Mercury lamp. A good cell is indicated by concentric Newton fringes.

The path length of each sample cell was rigorously determined using the computer program “RNJ22A” written by Prof. John Bertie (emeritus) of the Univ. of Alberta.1-3 This computer code is based on the original programs and procedures described by Young and Jones.4 Basically, each empty sample cell was placed in our Fourier transform infrared spectrometer and its spectrum acquired. An absorbance spectrum was then created by the decadic logarithmic ratio, A10 = -log(I/I0). Typically, these empty-cell absorbance spectra required further manipulation due to high-frequency noise, baseline wander and overlapping water vapor features. The absorbance data for the empty cell was then analyzed by computer program RNJ22A, which outputs a statistically derived value of the cell path length. An example of an empty-cell absorbance spectrum may be seen in Figure 5; the spectrum is periodic in wavenumber spacing.


Figure 5. Representative absorbance spectrum of an empty sample cell. Nominal path length is 100 micrometers.

For diagnostic purposes, it is possible to estimate the cell path length by measuring the spacing (cm-1) between any two fringes and applying Equation (1), where Δν is the spacing of any two fringes in cm-1 and X is the cell path length in centimeters:

X = \frac{1}{2\,\Delta\nu}    (1)

The advantage to using program RNJ22A is that the resultant value of cell path length is a statistical average based on many fringe spacings. See Table 1 for a complete listing of all sample cells used in this study. Decontamination of the sample cells consisted of placing the cells in bleach for 48 hours or until the salt window material dissolved completely.
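As a concrete illustration of Equation (1) and of the idea of averaging over many fringe spacings, the short Python sketch below estimates a cell path length from a set of fringe-maximum positions. It is not the RNJ22A code; the function name and the fringe positions are hypothetical.

```python
import numpy as np

def path_length_from_fringes(fringe_maxima_cm1):
    """Estimate the cell path length (cm) from the wavenumber positions of
    successive interference-fringe maxima, applying X = 1/(2*delta_nu) to
    each adjacent pair and averaging the individual estimates."""
    nu = np.sort(np.asarray(fringe_maxima_cm1, dtype=float))
    delta_nu = np.diff(nu)              # spacing between adjacent fringes (cm^-1)
    lengths = 1.0 / (2.0 * delta_nu)    # Eq. (1), one estimate per fringe pair
    return lengths.mean(), lengths.std()

# Example: fringes spaced ~50 cm^-1 apart correspond to a ~100 um cell.
maxima = np.arange(2000.0, 2801.0, 50.0)   # hypothetical fringe positions (cm^-1)
mean_cm, std_cm = path_length_from_fringes(maxima)
print(f"path length = {mean_cm * 1e4:.1f} +/- {std_cm * 1e4:.1f} um")
```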

Table 1. Sample cells used in this study.

CWA    Cell path length (µm)    Cell grade (A)
GA          4.71                A
GA          9.85                D
GA         18.9                 A
GA         32.11                A
GA        100.3                 A
GA        503.6                 A
GB          3.5                 A
GB          5.2                 B
GB         10.5                 A
GB         19.9                 A
GB         54.6                 A
GD          3.0                 B
GD          6.5                 D
GD         12.2                 A
GD         22.9                 A
GD         58.0                 A
GD        502.0                 B
GF          2.9                 A
GF          8.2                 B+
GF         15.1                 B
GF         25.2                 B
GF         55.9                 B
GF        105.2                 B+
GF        507.6                 B
HD         11.0                 D
HD         16.0                 D
HD         34.4                 A
HD         62.7                 A
HD        113.4                 C
HD       1033.1                 B
HN3         7.0                 B
HN3        11.5                 D
HN3        29.1                 B
HN3        61.9                 A
HN3       112.9                 B+
HN3      1049.0                 D
L1          4.0                 B
L1         11.9                 D
L1         30.5                 A
L1         61.7                 C
L1        109.2                 C
L1       1041.6                 B
VX          2.6                 C
VX          2.7                 A
VX         12.1                 C
VX         16.8                 B
VX         47.0                 B
VX        206.7                 A
VX        507.6                 A

(A) The cell grade is based on the quality of the fringes from observation of the empty cell spectrum and is a subjective scale. The fringe pattern appearing in Figure 5 is considered grade “A”.

3. Sample analysis

The analysis required to determine the full refractive index of a liquid involves several steps, as discussed in detail by Bertie et al.1-3 The process starts by acquiring quantitative spectra of the liquid sample using as many different path length cells as feasible. We required that at least two short path lengths (~1 to 5 µm) be used so that all major spectral features were within the dynamic range of our FTIR system (less than 2.5 absorbance units, base-10). We also required two moderate path lengths (~10-50 µm) and two long path lengths (~100-1000 µm). The experimental absorbance spectra, designated as AEX, are created by using the expression AEX = -log(I/I0), where I0 is the single-channel spectrum with no sample cell present and I is the single-channel spectrum with the filled sample cell present. Referring to Equation (3), d is the sample path length. Each of these experimental absorbance spectra is then analyzed using program “RNJ46A”.1-3 This program performs several important calculations, including correcting the sample spectra for reflective losses at the air-window-liquid-window-air interfaces. The refractive index (n) for our KCl windows appears in Figure 6.5


Figure 6. Measured (circles) and calculated (line) refractive index for potassium chloride windows. This data was used to correct for reflective losses at the windows.

In addition, RNJ46A converts the experimental absorbance spectra (originally in units of micrometer-1) to the unitless vector, k(ν).

A_{EX} = A_{10}(\nu) = -\log_{10}\!\left[\frac{I(\nu)}{I_{0}(\nu)}\right]    (2)

K(\nu) = \frac{A_{10}(\nu)}{d}    (3)

k(\nu) = \frac{\ln(10)\,K(\nu)}{4\pi\nu}    (4)

Note that the length-normalized experimental absorbance vector is designated by large K(ν), while the unitless absorbance vector is designated by small k(ν). The set of k(ν) vectors at various path lengths, for a specific sample, are then fitted using a weighted, linear, least-squares program.6 The weighting vector is created from the original experimental absorbance vectors (AEX) as per Equation (5):

W(\nu) = \left[T(\nu)\right]^{2} = \left[10^{-A_{EX}(\nu)}\right]^{2}    (5)

where T(ν) is the transmission of a specific wavenumber channel for a specific path length. The fitted or composite spectrum is then examined and corrected for water vapor, carbon dioxide and other interfering features by spectral subtraction. The refractive index of a material is a complex value, consisting of both real and imaginary components, as per Equation (6):
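The chain from measured single-channel spectra to a composite k(ν), Eqs. (2)-(5), can be sketched in a few lines of Python. This is only an illustration with synthetic data, not the Bertie RNJ46A code; the weighted slope-through-the-origin fit below is one simple way to realize the weighted linear least-squares combination described above, and all names and numbers are assumptions.

```python
import numpy as np

def k_from_cells(I_list, I0, d_list_cm, nu_cm1):
    """Combine measurements at several path lengths into one absorption
    index k(nu).  A10 = -log10(I/I0) (Eq. 2); K(nu) is a weighted linear
    least-squares slope of A10 = K*d with weights W = T^2 = [10**(-A10)]^2
    (Eqs. 3 and 5); k = ln(10)*K/(4*pi*nu) (Eq. 4)."""
    A = np.array([-np.log10(I / I0) for I in I_list])   # (n_cells, n_points)
    d = np.asarray(d_list_cm, dtype=float)[:, None]     # (n_cells, 1)
    W = (10.0 ** (-A)) ** 2                              # transmission squared
    K = (W * A * d).sum(axis=0) / (W * d ** 2).sum(axis=0)
    return np.log(10.0) * K / (4.0 * np.pi * nu_cm1)

# Synthetic example: one Lorentzian band measured in 10 um and 100 um cells.
nu = np.linspace(600.0, 3500.0, 2901)                   # wavenumber grid (cm^-1)
k_true = 0.2 / (1.0 + ((nu - 1000.0) / 15.0) ** 2)      # assumed "true" k(nu)
I0 = np.ones_like(nu)                                    # empty-beam spectrum
paths = [10e-4, 100e-4]                                  # 10 um and 100 um, in cm
I_list = [I0 * 10.0 ** (-(4.0 * np.pi * nu * k_true * d / np.log(10.0)))
          for d in paths]
k_est = k_from_cells(I_list, I0, paths, nu)
print("max |k_est - k_true| =", np.abs(k_est - k_true).max())
```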


\hat{n}(\nu) = n(\nu) + i\,k(\nu)    (6)

The real component of the refractive index, n(ν), is called the dispersion index and describes the phase delay of light passing through the sample at a specific wavenumber value (cm-1). The imaginary component in Equation (6) is the same k(ν) that appears on the left side of Equation (4) and describes the absorbance of light passing through a sample at a specific wavenumber value. The mathematical relationship between n(ν) and k(ν) is through the Kramers-Kronig transform and has been discussed in detail in numerous papers dating back to 1927.7-10

n(\nu_{0}) - n(\infty) = \frac{2}{\pi}\int_{0}^{\infty}\frac{\nu\,k(\nu)}{\nu^{2}-\nu_{0}^{2}}\,d\nu    (7)

k(\nu_{0}) = \frac{-2\nu_{0}}{\pi}\int_{0}^{\infty}\frac{n(\nu)}{\nu^{2}-\nu_{0}^{2}}\,d\nu    (8)

As one can see from the pair of Equations (7)-(8), converting between the real and imaginary refractive index components is readily done. The solution of Equations (7)-(8) cannot be performed in closed form due to the singularity when ν = ν0. Bertie et al. have written a program (“LZKKTB”) based on a numerical procedure to perform the KK transform.1-3 The term n(∞) is called the infinite or high refractive index value and represents the effect, or tail, of all high-lying electronic or optical states extending down to the infrared. This term, n(∞), can be derived several ways. The preferred way to determine n(∞) is to measure the dispersion index directly, in the laboratory, at the highest wavenumber (cm-1) value corresponding to the measured infrared spectrum. This must be caveated by adding that the wavenumber value chosen to measure n(∞) must lie sufficiently above any vibrational bands. Unfortunately, we were unable to measure n(∞) in the laboratory and must resort to an approximation. The refractive index taken at 16,969 cm-1 (sodium D line) exists for most pure chemicals and may be found in either the Knovel Critical Tables11 or in the literature. In fact, n(16,969 cm-1) was available for the 9 CWAs investigated here. Using the n(16,969 cm-1) value for the CWAs and assuming that most organophosphorus compounds behave similarly, it was possible to extrapolate to n(8000 cm-1). Referring to Figure 7, plots of the refractive index for 4 organophosphorus compounds at 4 experimental points obtained from the Knovel Critical Tables are fitted to a 3rd-order polynomial. The 4 highest wavenumber points correspond to the H-α, β and γ and sodium D emission lines. Table 2 contains the wavelengths and energy values for the four emission lines used to determine n(∞), as reported in the Critical Tables.
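As an illustration of how the transform in Eq. (7) can be evaluated numerically, the sketch below uses the alternate-point (Maclaurin) summation discussed by Ohta and Ishida (Ref. 9) to sidestep the ν = ν0 singularity. It is not the LZKKTB program; the grid, band shape and n(∞) value are assumptions made for the example.

```python
import numpy as np

def kk_n_from_k(nu, k, n_inf):
    """Numerical Kramers-Kronig transform (Eq. 7): real index n(nu) from the
    absorption index k(nu) on a uniform wavenumber grid, using Maclaurin's
    alternate-point summation so that the nu = nu0 point is never sampled."""
    nu = np.asarray(nu, dtype=float)
    k = np.asarray(k, dtype=float)
    h = nu[1] - nu[0]                       # grid spacing (assumed uniform)
    g = nu * k                              # numerator nu * k(nu)
    n = np.empty_like(k)
    idx = np.arange(nu.size)
    for i in range(nu.size):
        use = (idx % 2) != (i % 2)          # only points of opposite parity
        integrand = g[use] / (nu[use] ** 2 - nu[i] ** 2)
        n[i] = n_inf + (2.0 / np.pi) * 2.0 * h * integrand.sum()
    return n

# Example: a single Lorentzian band; the transform yields the familiar
# dispersion shape of n(nu) around the band center.
nu = np.linspace(400.0, 4000.0, 3601)       # 1 cm^-1 spacing
k = 0.15 / (1.0 + ((nu - 1030.0) / 12.0) ** 2)
n = kk_n_from_k(nu, k, n_inf=1.40)
print("n ranges from %.3f to %.3f" % (n.min(), n.max()))
```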


Figure 7. Dispersion index curves for 4 common organophosphorus compounds (diethylphosphonate, diethylphosphine, triethylphosphite and diethylphosphonic acid) based on data from the Knovel Critical Tables. The first point is the extrapolated value used in the approximation of n(∞) for GA, GD, GF and VX at 8000 cm-1.

Table 2. Four common wavelengths used to determine n, expressed in both nanometers and wavenumbers.

Line     Wavelength (nm)    Wavenumber (cm-1)
H-α      656.3              15236.9
D (Na)   589.3              16969.3
H-β      486.15             20569.8
H-γ      434.06             23038.3

The first point on each curve, at ~8000 cm-1, is the result of extrapolating our 3rd-order polynomials. In general, there is a 1.0% decrease in n(8000 cm-1) versus n(16,969 cm-1). Based on this 1% change in n, we can now estimate n(∞) for 5 of our CWAs. Since Lewisite, HN3 and HD are not organophosphorus compounds, this approximation scheme is not valid for them. For L, HN3 and HD, n(∞) = n(16,969 cm-1) was used, realizing that there is an offset of up to ~2% in n(ν) for these compounds.
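For completeness, here is a minimal sketch of the extrapolation step, assuming hypothetical dispersion-index values at the four emission lines of Table 2; the actual Knovel values used for Figure 7 are not reproduced here.

```python
import numpy as np

# Wavenumbers of the four emission lines in Table 2, in units of 10^3 cm^-1
# (H-alpha, Na D, H-beta, H-gamma).
nu_lines = np.array([15.2369, 16.9693, 20.5698, 23.0383])

# Hypothetical dispersion-index values at those lines for a generic
# organophosphorus liquid (illustrative only, not measured data).
n_lines = np.array([1.4045, 1.4068, 1.4123, 1.4168])

# Fit a 3rd-order polynomial through the four points and extrapolate to
# 8000 cm^-1 (8.0 on this scale), mirroring the procedure of Figure 7.
coeffs = np.polyfit(nu_lines, n_lines, deg=3)
n_8000 = np.polyval(coeffs, 8.0)
print(f"extrapolated n(8000 cm^-1) ~ {n_8000:.4f}")
```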

4. Results

For comparison purposes, and as a reality check, we have compared our results for VX with similar data appearing in the 1973 work of Dismukes.12 Unlike the study presented here, Dismukes measured the dispersion index directly, using an internal reflectance system described by Hansen.13 Figure 8 contains the plots for the data reported here and Dismukes' absorption index values.


Figure 8. Absorption index (k) for VX. Red trace is from this study and blue trace is from the work of Dismukes.

In general, there appears to be excellent agreement in the band positions and intensities between our study and that of Dismukes. As seen in Figure 9, the comparison of the dispersion index for VX is qualitatively similar, but Dismukes' data appears to have considerably more structure. As pointed out by Dismukes, their VX sample was quite old and suspected of having contamination problems. It appears that the dispersion index is more sensitive to contamination features than is the absorption index, probably due to the 1st-derivative nature of the KK transform. The agreement of n at higher wavenumber values is close and helps vindicate our n(∞) approximation, although the comparison of n appears to depart at longer wavelengths (lower wavenumbers).


Figure 9. Dispersion index (n) for VX. Red trace is from this study and blue trace is from the work of Dismukes.


As a final check of our results, a comparison is made between our previous vapor phase CWA data and the results for the liquid phase CWAs reported here. By normalizing both the vapor and liquid phase data to the same number-density-path length it is possible to make an approximate comparison. In general, good agreement is found for all of the CWAs. Figure 10 contains the comparison for VX vapor and liquid.


Figure 10. Comparison of absorption coefficients for liquid (blue) and vapor (red) phase VX.

5. Conclusion

Extensive data sets for 8 liquid-phase CWAs have been collected and analyzed for the purpose of determining their full refractive index values from ~600 to 6500 cm-1 (see Appendices A and B). The analysis technique is based on the detailed work of Bertie et al.1-3 A comparison for VX with the earlier work of Dismukes shows good agreement.12 Two major uncertainties have been identified in this study: (1) the approximate values used for n(∞), and (2) errors in characterizing the short-path sample cells. If future values for n(∞) become available, these will be used to re-analyze the data presented here.

6. Acknowledgements

We would like to thank the Department of Energy's Office of Nonproliferation Research and Development (NA-22) for financial support. PNNL is operated for the US Department of Energy by the Battelle Memorial Institute under contract DE-AC06-76RLO 1830.


7. References

1. J. E. Bertie, S. L. Zhang, H. H. Eysel, S. Baluja and M. K. Ahmed, Infrared Intensities of Liquids XI: Infrared Refractive Indices from 8000 to 2 cm-1, Absolute Integrated Intensities, and Dipole Moment Derivatives of Methanol at 25°C, Applied Spectroscopy 47, 1100-1114 (1993).
2. J. E. Bertie, S. L. Zhang and C. D. Keefe, Measurement and use of absolute absorption intensities of neat liquids, Vibrational Spectroscopy 8, 215-229 (1995).
3. Private communication. Computer codes available on the World Wide Web at http://www.ualberta.ca/~jbertie/JBDownload.HTM
4. R. P. Young and R. N. Jones, Computer Programs for Infrared Spectrophotometry, Spectrochim. Acta 32A, 75, 85, 99, 111 (1976).
5. W. L. Wolfe and G. J. Zissis (eds.), The Infrared Handbook, IRIA Series in Infrared & Electro-Optics, p. 7-71 (1993).
6. John R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2nd ed. (University Science Books, California, 1997), pp. 181-207.
7. H. A. Kramers, Estratto dagli Atti del Congresso Internazionale dei Fisici, Como, 2, 545 (1927).
8. R. de L. Kronig, On the theory of dispersion of X-rays, J. Opt. Soc. Am. 12, 547-557 (1926).
9. K. Ohta and H. Ishida, Comparison among several numerical integration methods for Kramers-Kronig transformation, Appl. Spectrosc. 42, 952-957 (1988).
10. J. E. Bertie and S. L. Zhang, Infrared intensities of liquids. IX. The Kramers-Kronig transform, and its approximation by the finite Hilbert transform via fast Fourier transforms, Can. J. Chem. 70, 520-531 (1992).
11. International Critical Tables of Numerical Data, Physics, Chemistry and Technology, prepared under the auspices of the International Research Council and the National Academy of Sciences by the National Research Council of the United States of America, 1926-1930, 7 volumes. Available on the World Wide Web at http://www.knovel.com/knovel2/default.jsp
12. E. B. Dismukes, Spectral measurements of chemical agents, ED-CR 73005, Edgewood Arsenal, Dept. of the Army (Sept. 1973).
13. W. N. Hansen, Variable angle reflection attachment for the ultraviolet, visible and infrared, Anal. Chem. 37, 1142-1145 (1965).


Appendix A. Physical properties of CWAs

GA: Tabun, EA1205 or Ethyl N,N-dimethylphosphoramidocyanidate
Chemical formula: C5H11N2O2P
CAS: 77-81-6
Formula weight = 162.13 g/mole
Freezing point = –50 °C
Boiling point = 248 °C
Density (20 °C) = 1.073 g/cm3
Refractive index = 1.425 (589 nm and 20 °C)
Extrapolated refractive index: n∞ = 1.4179 (8000 cm-1)

GB: Sarin, EA1208, Isopropyl methylphosphonofluoridate
Chemical formula: C4H10FO2P
CAS: 107-44-8
Formula weight = 140.09 g/mole
Freezing point = –56 °C
Boiling point = 158 °C
Density (20 °C) = 1.102 g/cm3
Refractive index = 1.3811 (589 nm and 20 °C)
Extrapolated refractive index: n∞ = 1.3742 (8000 cm-1)

GD: Soman, EA1210, Pinacolyl methylphosphonofluoridate
Chemical formula: C7H16FO2P
CAS: 96-64-0
Formula weight = 182.17 g/mole
Freezing point = –42 °C
Boiling point = 198 °C
Density (20 °C) = 1.0222 g/cm3
Refractive index = 1.4068 (589 nm)
Extrapolated refractive index: n∞ = 1.3998 (8000 cm-1)

GF: Cyclosarin, Cyclohexyl methylphosphonofluoridate
Chemical formula: C7H14FO2P
CAS: 329-99-7
Formula weight = 180.16 g/mole
Freezing point = –12 °C
Boiling point = 239 °C
Density (20 °C) = 1.1327 g/cm3
Refractive index = 1.433 (589 nm and 20 °C)
Extrapolated refractive index: n∞ = 1.4258 (8000 cm-1)
Supplier and stated purity: Edgewood lot# GF-S-6092-CTF-N-1, 96.6%

VX: EA1701, O-Ethyl S-2-diisopropylaminoethyl methylphosphonothiolate
Chemical formula: C11H26NO2PS
CAS: 50782-69-9
Formula weight = 267.37 g/mole
Freezing point = –50 °C
Boiling point = 298 °C
Density (20 °C) = 1.0083 g/cm3
Refractive index = 1.4802 (589 nm)
Extrapolated refractive index: n∞ = 1.4728 (8000 cm-1)

HD: sulfur mustard, EA1033, Bis(2-chloroethyl)sulfide
Chemical formula: C4H8Cl2S
CAS: 505-60-2
Formula weight = 159.08 g/mole
Freezing point = 14.45 °C
Boiling point = 217.5 °C
Density (20 °C) = 1.274 g/cm3
Refractive index = 1.5318 (589 nm and 20 °C)
Extrapolated refractive index: 589 nm value used

HN3: Nitrogen mustard, HN-3, 2-chloro-N,N-bis(2-chloroethyl)ethaneamine
Chemical formula: (ClC2H4)3N
CAS: 555-77-1
Formula weight = 204.527 g/mole
Freezing point = –37 °C
Boiling point = 256 °C
Density (20 °C) = 1.22 g/cm3
Refractive index = 1.485 (589 nm and 20 °C)
Extrapolated refractive index: 589 nm value used

L: Lewisite-1, L1, EA1034, 2-Chlorovinyl dichloroarsine
Chemical formula: C2H2AsCl3
CAS: 541-25-3
Formula weight = 207.32 g/mole
Freezing point = –18.2 °C
Boiling point = 190 °C
Density (20 °C) = 1.888 g/cm3
Refractive index = 1.5889 (589 nm and 20 °C)
Extrapolated refractive index: 589 nm value used


Appendix B. Real and imaginary refractive indices of CWAs

Figure A. Full refractive index plots for liquid Tabun (GA).

Figure B. Full refractive index plots for liquid Sarin (GB).


Figure C. Full refractive index plots for liquid Soman (GD).

Figure D. Full refractive index plots for liquid Cyclosarin (GF).


Figure E. Full refractive index plots for liquid VX.

Figure F. Full refractive index plots for liquid Sulfur Mustard (HD).


Figure G. Full refractive index plots for liquid Nitrogen Mustard (HN3).

Figure H. Full refractive index plots for liquid Lewisite-1 (L1).

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 251–262 © World Scientific Publishing Company

PASSIVE STANDOFF DETECTION OF SURFACE CONTAMINANTS: A NOVEL APPROACH BY DIFFERENTIAL POLARIZATION FTIR SPECTROMETRY

JEAN-MARC THÉRIAULT, HUGO LAVOIE, ELDON PUCKRIN AND FRANCOIS BOUFFARD

RDDC-DRDC Valcartier, 2459 Pie-XI North Blvd, Québec, Qc, G3J 1X5, CANADA
[email protected]

An approach for the passive standoff detection of surface contaminants by differential polarization FTIR spectrometry is proposed. The surface radiance modeling associated with the method is given. Unpolarized and polarized sensing measurements obtained with the CATSI sensor for the standoff detection of liquid agent VX deposited on high-reflectivity surfaces are presented. The analysis of results indicates that the differential polarization approach is well suited to mitigate sky radiance drifts, which favours unambiguous surface contaminant detections. An experimental and modeling study initiated to address the spectral polarization phenomenology is outlined. The design of an optimized FTIR sensor for differential polarization spectrometry measurements is discussed.

Keywords: Surface Contaminants; Standoff Detection; Polarization; FTIR Spectroscopy.

1. Introduction

Chemical warfare (CW) agents such as sulphur mustard (HD) and VX are recognized as very serious threats by the Defence community. These agents, which are characterized by moderate to low vapour pressures, can readily condense on surfaces creating very hazardous health conditions. Current reconnaissance systems normally employ point detectors that are based on techniques involving ion mobility spectrometry, flame photometry, dye solubility or enzymatic reactions to detect CW contamination on surfaces. There are some active systems based on the LIDAR (light detection and ranging) technique that are currently being developed to measure chemical warfare agents. Current research involving infrared reflection spectrometry also shows promising results for soil and surface contaminant detection1,2. Passive long-wave infrared (LWIR) spectrometric sensors, such as the M21 – RSCAAL (Remote Sensing Chemical Agent ALarm), have been used in the field for the passive standoff detection of chemical vapour clouds. Preliminary measurements and analyses3 suggest that passive LWIR spectrometric sensors have the potential to remotely detect surface contaminants by using the reflectance of the cold sky. More recent results4 indicate that liquid HD and VX


agents deposited on high-reflectivity surfaces can be detected, identified and possibly quantified with passive sensors. For low-reflectivity surfaces, the presence of contaminants can usually be detected; however, their identification based on simple correlations with the absorption spectrum of the pure contaminant is not clearly evident4. To be fully operational for surface contaminant detection, current passive spectrometric techniques would greatly benefit from a significant improvement in signal-to-clutter ratio. Infrared polarimetry is recognized to have strong potential for natural clutter suppression5, providing larger signal-to-clutter ratios in many scenarios such as mine, tank and truck detection. The novel spectral polarization sensing approach described in this paper is based on differential polarization measurements and is intended to improve the signal-to-clutter ratio in the infrared remote sensing of surface contamination. The main premise of this approach is that the difference between the two linearly polarized components (“s” and “p”) of the IR emission from a surface coated with a contaminant is significantly different from the polarization difference of the uncontaminated surface (background). The expected improvement of the proposed technique arises from the optical subtraction property of a double-beam FTIR (Fourier transform infrared) spectrometer6 used to obtain the differential polarization spectrum in a single measurement on a single detector. This paper presents the principles, the phenomenology and the approach for the passive standoff detection of surface contaminants by differential polarization FTIR spectrometry.

2. Surface Radiance Phenomenology

2.1. Modeling surface contaminant radiance: Unpolarized case

The radiative transfer intervening at a surface can be understood from simple physical arguments. Figure 1 shows a diagram and defines the parameters used to evaluate the radiance originating from clean and contaminated surfaces exposed to an outdoor environment. For a clean surface having a reflectance R0, the spectral radiance measured by the sensor contains two components, i.e., the emitted radiance from the surface, B(1 - R0), and the cold sky radiance reflected by the surface, R0 Lsky. The parameter Lsky represents the downwelling radiance from the sky and B is the Planck radiance given by

B = \frac{1.191 \times 10^{-12}\,\nu^{3}}{e^{\left(1.439\,\nu / T\right)} - 1}    (1)


Fig. 1. Schematic diagram and the parameters used to evaluate the radiance (unpolarized case) of a clean surface and a surface covered by a contaminant.

where ν is the wavenumber in cm-1, and B is in W/(cm2-sr-cm-1) evaluated at the temperature (T) of the surface. Adding these two radiance components, B(1- R0 ) and R0 Lsky , yields an expression for the radiance of the clean surface given by

L_{clean} = B - R_{0}\left(B - L_{sky}\right)    (2)

The radiance emanating from a contaminated surface (Fig. 1) can be derived using similar arguments, i.e.

L_{cont} = B - R_{cont}\left(B - L_{sky}\right)    (3)

A quantity of interest for studying the perturbation effects of a contaminant on a surface is the differential spectral radiance (∆L), i.e., the radiance change (Lcont - Lclean) obtained by subtracting Eq. 2 from Eq. 3,

\Delta L \equiv L_{cont} - L_{clean} = \left(R_{0} - R_{cont}\right)\left(B - L_{sky}\right)    (4)

Inspection of Eq. 4 reveals some simple facts concerning the sensitivity for detecting contaminants by passive spectral radiometry. First of all, the radiance change is proportional to the reflectance contrast (R0 –Rcont) indicating that a highly reflecting surface (R0), such as a metallic plate, offers more sensitivity for detection. Secondly, the radiance change is proportional to the radiative contrast between the Planck surface radiance and the downwelling sky radiance, (B–Lsky). Since the downwelling sky radiance increases with cloud cover, which in turn results in a decrease in the radiative contrast, the best detection possibilities are obtained for clear sky conditions.
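To make the model of Eqs. (1)-(4) concrete, the short Python sketch below evaluates the differential radiance for an assumed scenario. The temperatures, the reflectances and the crude blackbody approximation of the clear-sky downwelling radiance are illustrative assumptions only, not values from the paper.

```python
import numpy as np

def planck_radiance(nu_cm1, T_kelvin):
    """Planck spectral radiance B in W/(cm^2 sr cm^-1), Eq. (1)."""
    return 1.191e-12 * nu_cm1**3 / (np.exp(1.439 * nu_cm1 / T_kelvin) - 1.0)

# LWIR grid and assumed scenario (illustrative numbers only).
nu = np.linspace(700.0, 1300.0, 601)          # wavenumber grid (cm^-1)
B_surface = planck_radiance(nu, 290.0)        # plate near ambient temperature
L_sky = planck_radiance(nu, 230.0)            # clear cold sky, crudely as 230 K
R0 = np.full_like(nu, 0.90)                   # clean metallic plate (high R)
R_cont = R0 - 0.10 * np.exp(-((nu - 1040.0) / 30.0)**2)   # toy contaminant band

L_clean = B_surface - R0 * (B_surface - L_sky)          # Eq. (2)
L_cont = B_surface - R_cont * (B_surface - L_sky)       # Eq. (3)
dL = L_cont - L_clean                                   # Eq. (4)

print("peak |dL| = %.2e W/(cm^2 sr cm^-1)" % np.abs(dL).max())
```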


In the presence of a sky radiance drift between the two radiance measurements, Lcont and Lclean, the differential radiance would be given by

\Delta L_{cont} \equiv L_{cont} - L_{clean} = \left(R_{0} - R_{cont}\right)\left(B - L_{sky}\right) + \Delta L^{*}_{sky}    (5)

In this case, there is an additive spectral clutter component (ΔL*sky) that perturbs the surface spectral signature and complicates the extraction of spectral information.

2.2. Modeling surface contaminant radiance: Polarized case

For the polarized case, the radiative transfer intervening at a surface can be understood from similar physical arguments. Figure 2 shows a diagram where the same contaminated surface is probed by a sensor having either a polarizer oriented parallel (p-polarization) or perpendicular (s-polarization) to the plane of incidence. In this case, the spectral radiance measured by the sensor probing the p-polarization component is given by

L^{p}_{cont} = B - R^{p}_{cont}\left(B - L_{sky}\right)    (6)

and the spectral radiance measured by the sensor probing the s-polarization component is given by

L^{s}_{cont} = B - R^{s}_{cont}\left(B - L_{sky}\right)    (7)

The superscripts “p” and “s” associated with the radiances (L) and reflectances (R) refer to each polarization component. In Eqs. 6 and 7 we have used the well-accepted argument that in the thermal infrared the downwelling sky radiance (Lsky) is not polarized. This is consistent with the work of Shaw7 on polarization measurements of the spectral radiance from water. The quantity of interest is now the differential polarization radiance obtained by subtracting the s-polarization radiance (Eq. 7) from the p-polarization radiance (Eq. 6), yielding

\Delta L^{pol}_{cont} \equiv L^{s}_{cont} - L^{p}_{cont} = \left[R^{p}_{cont} - R^{s}_{cont}\right]\left(B - L_{sky}\right)    (8)

Equation 8 represents the main analytical result supporting the differential polarization FTIR method we propose for surface contaminant detection. This equation expresses the fact that the differential polarization radiance (the signal) is proportional to the difference between the two linearly polarized reflectances of the surface contaminant. This approach assumes that there is a significant difference between the two linearly polarized reflectances, which is supported by the experimental evidence shown below. An important attribute of the proposed method can be deduced by inspection of Eq. 8. If the two polarization measurements (L^{s}_{cont} and L^{p}_{cont}) can be performed simultaneously on the same surface, then the resulting differential radiance (Eq. 8) would be free from sky

radiance fluctuations (clutter). In this case the two polarisation components associated with the sky radiance fluctuations cancel out, leaving a radiative contrast proportional to the average sky radiance. To achieve this simultaneous polarization measurement we propose the use of a balanced double-beam FTIR spectrometer capable of optical subtraction between the two polarized radiance components. Such an instrument is described below.

Fig. 2. Schematic diagram and the parameters used to evaluate the radiance (polarized case) of a surface covered by a contaminant.
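A small numerical illustration of the clutter-rejection property implied by Eq. (8) follows. All reflectances, temperatures and the drift model are toy assumptions, not measured values; the point is simply that an unpolarized sky-radiance drift enters the simultaneous s/p difference only through the small factor (R^p - R^s), whereas in the temporal differential of Eq. (5) it appears scaled by the full surface reflectance.

```python
import numpy as np

def planck_radiance(nu_cm1, T_kelvin):
    """Planck spectral radiance B in W/(cm^2 sr cm^-1), Eq. (1)."""
    return 1.191e-12 * nu_cm1**3 / (np.exp(1.439 * nu_cm1 / T_kelvin) - 1.0)

nu = np.linspace(700.0, 1300.0, 601)                 # LWIR grid (cm^-1)
B = planck_radiance(nu, 290.0)                       # surface Planck radiance
L_sky = planck_radiance(nu, 230.0)                   # mean downwelling sky radiance
drift = 0.05 * L_sky * np.sin((nu - 700.0) / 60.0)   # assumed sky-radiance drift

# Assumed polarized reflectances of the contaminated plate (toy values).
band = np.exp(-((nu - 1040.0) / 30.0) ** 2)          # toy contaminant band shape
Rp = 0.80 - 0.08 * band
Rs = 0.90 - 0.03 * band

# Simultaneous polarization differential, Eq. (8): both channels see the same
# (drifted) sky, so the unpolarized drift survives only through (Rp - Rs).
dL_pol = (Rp - Rs) * (B - (L_sky + drift))

# In the temporal differential of Eq. (5) the same drift would enter
# additively, scaled by the full surface reflectance (~0.85 here).
print("drift clutter, temporal mode (approx): %.2e" % np.abs(0.85 * drift).max())
print("drift leakage, polarization mode     : %.2e" % np.abs((Rp - Rs) * drift).max())
print("contaminant signal, polarization mode: %.2e" % np.abs(dL_pol).max())
```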

3. Standoff Sensor and Field Experiment

3.1. CATSI instrument

The Compact ATmospheric Sounding Interferometer (CATSI) instrument is an FTIR spectrometer that takes advantage of the differential detection capability provided by a symmetrical dual-beam interferometer8,9. For this system, two beams of thermal radiation coming from different scenes can be combined onto a single detector and subtracted optically in real-time. The same type of system can also be used to probe and simultaneously subtract two cross-polarized radiation beams coming from the same scene. The CATSI instrument consists of two identical Newtonian telescopes with diameters of 10 cm that are optically coupled to the dual-beam interferometer. Figure 3 shows the instrument mounted on a tripod, together with a schematic diagram that summarises the instrument's optical design. The specifications of the CATSI system are: nominal scene field-of-view (FOV) of 8 mrad, spectral coverage of 7-13 µm, and a nominal spectral resolution of 8 cm-1. A flat-plate mirror placed in front of each telescope can be rotated to the selected scene. A double-pendulum scanning mechanism (Fig. 3, right) controls the periodic displacement of the two corner-cube (CC) reflectors that generate the interferogram. The beamsplitter (BS) consists of a thin air gap squeezed between two ZnSe substrates. Only one of the two output channels is currently used. This output

28

J. M. Thériault et al.

Input 1

Input 2

MCT

Collimating Optics Corner Cube Beam Splitter Double Pendulum

Fig. 3. Photograph of the CATSI sensor with the optical head mounted on a tripod, and (b) the associated optical diagram (see text).

module contains a parabolic and condensing mirror that focuses the radiation beam onto an MCT detector (1 mm) mounted on a micro-cooler. Two CCD cameras mounted on the top of the two telescope modules are used to view the scenes under consideration.

3.2. SURFCON field experiment

The CATSI system was employed at a trial for the passive standoff detection of real surface agent contaminants (SURFCON), which was held at DRDC Suffield, 22-27 September 2002. The main purpose of the SURFCON trial was to test the passive standoff detection method on CW agents, such as HD and VX, and to acquire infrared spectra of the agents on contaminated surfaces under laboratory and field conditions. A fully documented paper has been published on the trial experiment and results.4 For the SURFCON trial, the CATSI system was operated either in differential or in direct detection mode, the latter obtained by blocking one of the two input telescopes. The mode of operation was chosen according to the constraints of the observed scene. For instance, the field measurements (not polarized) on high-reflectivity surfaces were mostly performed in direct mode to avoid the use of the reference clean plate required for the differential mode. The relative orientation of the reference and target plates was found to be critical.

3.3. Example of results

Figure 4a shows an example of a field measurement obtained at the SURFCON trial. It corresponds to spectra measured at a standoff distance of 60 m for 1 g/m2 of VX deposited on a flat aluminium plate oriented at 45° to the line-of-sight of the instrument. For this measurement the CATSI sensor was operated in the direct mode.


Fig. 4. Differential radiance measured with the CATSI sensor at a standoff distance of 60 m for an Al-plate covered with VX contaminant: a) Differential unpolarized radiance results for a 1-g/m2 surface coverage of VX, and b) Differential polarized radiance results for a 3-g/m2 surface coverage of VX and the absorption coefficient of VX (right scale).

The differential spectrum (dotted line) in Fig. 4a was obtained by subtracting the spectrum of the contaminated plate from the spectrum previously recorded (approximately two minutes earlier) for the clean plate. There is a strong spectral residual in the resulting temporal differential spectrum. This is attributed to a variation in the downwelling sky radiance (partly cloudy and variable sky conditions) between the two consecutive measurements. The ozone signature near 1050 cm-1 (dotted line) is a clear indication of this. From this differential measurement an additional processing step was therefore required to extract a clutter-free spectrum of VX. This was achieved by matching and subtracting a temporal differential spectrum extracted from a sequence of spectra previously recorded for the clean plate. After this additional correction for the sky radiance drift, the VX spectrum was clearly seen, as shown in Fig. 4a (solid line). This example illustrates well the general results obtained at the SURFCON trial. For high-reflectivity surfaces it was possible to identify the surface contaminants by passive standoff sensing (unpolarized sensing), but the sky radiance drift adds a complication that needs to be mitigated by processing or hardware solutions. The hardware solution we propose to mitigate the sky radiance drift in passive standoff sensing is the exploitation of the polarization effects of surface contaminants. In order to investigate this approach, a few polarization measurements were taken with CATSI at the SURFCON trial. This was accomplished by the addition of two polarizers to the sensor: one was mounted in the first telescope with its polarization axis oriented vertically, and the other was mounted in the second telescope with its polarization axis oriented horizontally. With this configuration each telescope passed a beam of linearly polarized radiation to its arm of the interferometer. As a result of the subtractive attributes of the CATSI interferometer, the resulting spectrum corresponded to the difference between the vertically polarized and the horizontally polarized radiance spectra, which defines the differential polarization radiance spectrum of the scene. Figure 4b shows an example of the differential polarization radiance spectrum measured for a 3-g/m2 surface coverage of VX deposited on an aluminium plate (standoff distance of 60 m). For this measurement the two telescopes were aimed at exactly the same position on the target plate. From this figure, it is evident that all the spectral characteristics of VX can be detected by differential polarization without any additional processing for a sky radiance drift correction. It is also noted that the spectral features below 1100 cm-1 are inverted relative to those above 1100 cm-1. The cause of this band inversion is not completely understood. A possible explanation is that the two responsivities associated with the arms of the CATSI sensor are not fully symmetrical in the differential polarization measurement mode. A second explanation is that the band inversion is intrinsically related to the optical properties of the VX-aluminium interface. This experimental result shows that there is a strong potential for detecting surface contaminants by differential polarization sensing. It also stresses the need to address the polarization phenomenology of surfaces in order to obtain a quantitative understanding of the differential polarization sensing results.

4. Experimental and Modeling Study

The data taken at the SURFCON trial in the differential polarization mode have prompted an experimental and modeling study aimed at understanding the polarization phenomenology associated with surfaces. A reflectometer-transmissometer assembly has been set up in the laboratory to study the optical properties of surfaces in the longwave infrared (LWIR) region. Figure 5 shows a schematic diagram of the setup used to measure the polarized spectral reflectance and transmittance of various samples. The transmissometer configuration (Fig. 5a) uses an ambient blackbody (Bamb) and a hot blackbody (Bhot) as sources and the CATSI sensor (single-telescope mode) as a receiver.

Fig. 5. Schematic diagram of the experimental setup used to measure the polarized spectral reflectance and transmittance of various samples: a) the transmissometer configuration (incidence angles of 0-60°), which uses blackbody sources (Bhot and Bamb) and the CATSI sensor as a receiver, and b) the reflectometer configuration (incidence of 45°). The sample transmittance and reflectance are deduced from the measured radiances as T = (L_T - L_amb)/(L_hot - L_amb) and R = (L_R - L_amb)/(L_hot - L_amb).


A wire-grid polarizer is used to generate a linearly polarized beam. This configuration allows sample transmittance measurements with the angle of incidence varying from 0 to 60°. For the reflectometer configuration (Fig. 5b) the positions of the ambient and the hot blackbodies are simply interchanged, and the angle of incidence is limited to 45° owing to practical constraints. The transmittance (T) and reflectance (R) of the sample are deduced from the three radiance (L) measurements taken with and without the sample, as indicated in Figs. 5a and 5b. With these two configurations the error on T and R estimated from reproducibility measurements is on the order of one percent. Figure 6 summarizes a series of laboratory measurements on high-reflectivity and low-reflectivity surfaces. Figure 6a shows the s- and p-polarized reflectances (incidence of 45°) measured for a clean aluminum plate (R^S_Al and R^P_Al) and for the same plate covered with an optically thin layer of silicone spray (R^S_cont and R^P_cont). The silicone spray (trade mark: Krylon 41325) has a sharp spectral signature in the LWIR that makes it a good simulant3 for liquid CW contaminants. Figure 6b shows the resulting differential polarization reflectance (R^S_cont - R^P_cont), which clearly shows the signature of Krylon. Figures 6c and 6d show the polarized transmittances and reflectances (incidence of 45°) for a clean sheet of Glad Wrap (trade mark) of approximately 12.7 µm thickness and for the same sheet covered with an optically thin layer of silicone spray. The Glad Wrap sheet is used here to address the polarization phenomenology associated with low-reflectivity surfaces. Channel spectra are clearly seen in both the transmittance and reflectance spectra of the clean sheet (Fig. 6c), indicating a high level of transparency except for the spectral region near 720 cm-1, where a sharp absorption peak is observed. For the contaminated sheet (Fig. 6d) the spectral signature of Krylon is observed, with a more pronounced contrast for the transmittance (T^S and T^P) than for the reflectance spectra (R^S and R^P). The ultimate goal of our experimental and modeling study is the determination of the complex refractive index (n-ik) associated with liquid contaminants and surfaces. With this knowledge it would be possible to simulate the sensor performance for a wide variety of passive sensing detection scenarios. As an example of a polarization phenomenology investigation, Gurton and co-workers10 have performed an exhaustive experimental study of the linear degree of polarization (in the infrared) associated with rough surfaces and with surfaces subjected to dew formation and aerosol contamination. Another important research step on surface contaminant phenomenology has been achieved by Sharpe and co-workers11, who applied the method of Bertie12,13 to measure the complex refractive index of many liquid CW agents. In our experimental and modeling study we attempt to derive the LWIR optical properties of contaminants and surfaces from the polarized reflectance and transmittance measurements described above. Our initial effort is centered on the determination of the complex refractive index of Krylon. For that purpose, we are currently developing a method based on Fresnel coefficients and thin-film optics14 for the determination of n, k and d (layer thickness) from the linearly polarized transmittance spectra measured at several incidence angles (0 to 60°).
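To illustrate the kind of forward model involved in such a retrieval, the sketch below (a simplified assumption, not the authors' retrieval code) computes the polarized transmittance of a single free-standing absorbing layer in air using the standard Fresnel/thin-film characteristic-matrix formalism; fitting a model of this type to transmittance spectra measured at several angles is one way n, k and d could be estimated. The film index and thickness in the example are arbitrary, and a real sample such as a contaminant on a polymer sheet would require stacking a matrix per layer.

```python
import numpy as np

def film_transmittance(n_film, d_um, wavenumber_cm, theta_deg, pol="s"):
    """Transmittance of a single free-standing absorbing film in air, computed
    with the thin-film characteristic-matrix method (index convention N = n - ik)."""
    lam_um = 1.0e4 / np.asarray(wavenumber_cm, dtype=float)   # wavelength in microns
    theta0 = np.deg2rad(theta_deg)
    cos0 = np.cos(theta0)
    # Complex Snell's law inside the film
    sin_f = np.sin(theta0) / n_film
    cos_f = np.sqrt(1.0 - sin_f ** 2)
    # Tilted admittances (in free-space units) for the chosen polarization
    if pol == "s":
        eta0, eta_f = cos0, n_film * cos_f
    else:
        eta0, eta_f = 1.0 / cos0, n_film / cos_f
    delta = 2.0 * np.pi * n_film * (d_um / lam_um) * cos_f    # phase thickness
    # Characteristic matrix of the layer applied to the exit medium (air again)
    B = np.cos(delta) + 1j * np.sin(delta) / eta_f * eta0
    C = 1j * eta_f * np.sin(delta) + np.cos(delta) * eta0
    t = 2.0 * eta0 / (eta0 * B + C)                           # amplitude transmission
    return np.abs(t) ** 2                                     # exit medium = incident medium

# Example: s-polarized transmittance of a hypothetical 12.7-um film with
# N = 1.5 - 0.05j over 700-1300 cm^-1 at 45 degrees incidence.
wn = np.linspace(700.0, 1300.0, 601)
T_s = film_transmittance(1.5 - 0.05j, 12.7, wn, 45.0, pol="s")
```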
Figures 6e and 6f show an example of polarized transmittance spectra (s-polarization) recorded at different angles of incidence for a clean sheet of Glad Wrap and for the same sheet covered with a thin layer of Krylon. An algorithm for estimating n, k and d from the angular transmittance spectra is currently being developed, and more work is being performed on modeling the surface roughness and layer inhomogeneity for inclusion in the retrieval algorithm.

Fig. 6. Spectral transmittance and reflectance measurements (laboratory) for a variety of samples: a) s- and p-polarization reflectance spectra for a clean Al-plate and for the plate covered with Krylon, b) the resulting differential polarization reflectance spectrum for the Al-plate covered with Krylon, the polarized reflectance and transmittance spectra for c) the clean Glad Wrap sheet and d) the sheet covered with Krylon, and the polarized transmittance spectra measured at various angles of incidence for e) the clean Glad Wrap sheet and f) the sheet covered with Krylon.

Fig. 7. Optical diagram of a sensor optimized for the passive standoff detection of surface contaminants by differential polarization FTIR spectrometry.

5. Optimized Sensor Design for Differential FTIR Sensing

The actual CATSI sensor (Fig. 3) is designed to perform the optical subtraction between two unpolarized beams. The inclusion of two polarizers with different polarization orientations creates an imbalance between the responsivities associated with the two input ports of the instrument. As a consequence, the differential polarization measurements performed with the present CATSI-polarizer configuration are not fully balanced. The diagram of Fig. 7 summarizes the optical configuration of a system optimized for differential polarization FTIR sensing. In this hypothetical system the telescope would serve to split (without losses) the two polarization components (s and p) of the same scene, and each component would be directed to one input port of a double-beam FTIR interferometer. This interferometer would be designed such that the s-polarization and p-polarization channels have balanced responsivities. Such a system could also be designed to include an imaging capability that would provide a full scene image composed of differential polarization spectra.

6. Conclusion

Our analysis, based on analytical and experimental evidence, has shown that differential polarization FTIR sensing is a viable method for the passive standoff detection of surface contaminants. To further the development of this method there is a strong need to address the infrared polarization phenomenology associated with both surfaces and contaminants. The experimental and modeling study outlined in this paper is intended to address this issue. As a priority for the near future, the polarization properties of contaminants deposited on low-reflectivity and natural surfaces will be investigated, as well as the design and development of an optimized FTIR sensor.

References
1. T. A. Blake and P. L. Gassman, Detection of Soil Surface Contaminants by Infrared Reflection Spectrometry, SPIE Proceedings, 4577, 239-260 (2003).
2. A. Ben-David, P. Vujkovic Cvijin and A. Samuels, Standoff Detection of Surface Contamination with Diffuse Reflectance, Proceedings of the 6th Joint Conference on Standoff Detection for Chemical and Biological Defense, Williamsburg VA, 25-29 Oct 2004, Vol. 6.
3. J.-M. Thériault, J. O. Jensen, A. Samuels, A. Ben-David, C. Gittins and W. Marinelli, Detection of Non-volatile Liquids on Surfaces using Passive Infrared Spectroradiometers, Proceedings of the 5th Joint Conference on Standoff Detection for Chemical and Biological Defense, Williamsburg VA, 24-28 Sep 2001, Vol. 5.
4. J.-M. Thériault, E. Puckrin, J. Hancock, P. Lecavalier, C. J. Lepage and J. O. Jensen, Passive standoff detection of chemical warfare agents on surfaces, Appl. Opt., 43, 5870-5885 (2004).
5. T. J. Rogne, F. G. Smith and J. E. Rice, Passive Target Detection using Polarized Components of Infrared Signatures, SPIE Proceedings, 1317, 242-251 (1990).
6. J.-M. Thériault, Modeling the Responsivity and Self-emission of a Double-beam Fourier-transform Infrared Interferometer, Appl. Opt., 38, 505-515 (1999).
7. J. A. Shaw, Degree of Linear Polarization in Spectral Radiances from Water-Viewing Infrared Radiometers, Appl. Opt., 38, 3157-3165 (1999).
8. J.-M. Thériault, E. Puckrin, F. Bouffard and B. Déry, Passive remote monitoring of chemical vapours by differential FTIR radiometry: results at a range of 1.5 km, Appl. Opt., 43, 1425-1434 (2004).
9. J.-M. Thériault, Passive Standoff Detection of Chemical Vapors by Differential FTIR Radiometry, DREV Technical Report TR-2000-156, UNCLASSIFIED (2001).
10. K. P. Gurton, R. Dahmani and G. Videen, Measured Degree of Infrared Polarization for a Variety of Thermal Emitting Surfaces, Army Research Laboratory ARL-TR-3240 (2004).
11. S. W. Sharpe, T. Johnson, R. Sams, J. Hylden, J. Kleimeyer and B. Rowland, Infrared Spectral Signatures for Stand-Off Monitoring: Creation of a Quantitative Library, Its Utility and Limitations, International Journal of High Speed Electronics and Systems, Vol. XX, No. X (2009) 34-12 (THIS ISSUE).
12. J. E. Bertie, S. L. Zhang, H. H. Eysel, S. Baluja and M. K. Ahmed, Infrared Intensities of Liquids XI: Infrared Refractive Indices from 8000 to 2 cm-1, Absolute Integrated Intensities, and Dipole Moment Derivatives of Methanol at 25°C, Applied Spectroscopy, 47, 1100-1114 (1993).
13. J. E. Bertie, S. L. Zhang and C. D. Keefe, Measurement and Use of Absolute Absorption Intensities of Neat Liquids, Vibrational Spectroscopy, 8, 215-229 (1995).
14. O. S. Heavens, Optical Properties of Thin Solid Films, Dover Publications, Inc., New York (1965).

International Journal of High Speed Electronics and Systems, Vol. 18, No. 2 (2008) 263-275. © World Scientific Publishing Company

BACKGROUND CONTRIBUTIONS IN DIRECT AND DIFFERENTIAL FOURIER TRANSFORM LWIR MEASUREMENTS: A COMPARATIVE ANALYSIS

FRANÇOIS BOUFFARD
[email protected]

JEAN-MARC THÉRIAULT
[email protected]

Optronic Surveillance, Defence R&D Canada – Valcartier, 2459 Pie-XI Blvd North, Québec, Québec G3J 1X5, Canada

In order to assess the differences in background clutter observed with the CATSI instrument in direct (single-beam) and differential (double-beam) modes, a survey of background measurements was undertaken. The measurements include samples of sky, mountains, forest, buildings, roads and snow recorded during springtime in the long wave infrared using both single-beam and double-beam interferometry. It is found that the background distributions and statistics in these two modes are significantly different, with the differential mode presenting less variation than the direct mode. This may impact the ability to detect atmospheric contaminants. This analysis was performed in order to better understand the difference between operating a standard and a differential FTIR instrument.

Keywords: Fourier transform spectrometry; passive standoff detection; double-beam interferometry.

1. Introduction

The passive standoff detection of atmospheric contaminants in the long wave infrared (LWIR) is a powerful but complex technique that is often limited by the presence of background clutter. In most cases, the background accounts for a very large portion of the infrared radiation that reaches the detector, yielding small signal-to-clutter ratios. The background clutter is typically handled using processing methods such as clutter-matched filters and orthogonal subspace projection1,2, which usually require a good knowledge of the background statistics. The detection performance associated with these processing methods also depends on the background clutter statistics. Consequently, a priori knowledge of the background clutter statistics is required for predicting the detection performance of a particular instrument.


In order to address the variability and to build statistics of the background clutter, a variety of radiance spectra were recorded at DRDC Valcartier. The Compact Atmospheric Sounding Interferometer (CATSI) spectrometer3,4 was used to acquire spectra of the outdoor environment. The measurements were taken during five different days on a scene grid spanning 50° in azimuth and ranging from -10° to +25° in elevation. This provides a good sampling of natural (sky, trees, mountains, snow) and urban (asphalt, buildings, stacks) backgrounds during the spring season. Measurements at resolutions of 4, 8 and 16 cm-1 were taken for both the direct mode and the differential mode (for which adjacent fields of view are optically subtracted). In this paper, an analysis of the background clutter statistics in both modes is performed in order to better understand the differences between operating a standard and a differential FTIR instrument.

2. Experimental Setup

The CATSI instrument is an optically balanced double-beam FTIR spectrometer equipped with an MCT detector allowing spectral radiance measurements in the 7 to 14 µm spectral range (700 to 1300 cm-1). For more details concerning the optical layout and capabilities of the CATSI instrument, interested readers are referred to previous papers from our laboratory5,6. Two different optical head configurations can be fitted on the instrument: a single 10-inch Cassegrain telescope or, alternatively, two independent 4-inch telescopes. The latter setup was used in this experiment, producing two 8-mrad fields of view. The fields of view were adjusted to be on the same horizontal level, with a fixed angular separation of approximately 2°. When the instrument is operated in direct mode, a shutter is used to block one of the fields of view, producing a constant radiance in the corresponding input port. Calibration allows the evaluation of the spectral radiance entering the other port. When the instrument is operated in differential mode, both input ports probe the external radiance, and the recorded spectral radiance is the difference between the radiances entering the two input ports. In both cases, the instrument self-emission is effectively suppressed. The measurement routine was scripted using the Python programming language*. This allowed automatic control of the calibration mechanism, the spectra acquisition and processing, and the movement of the Quickset motorized pan and tilt. Measurements were taken on a grid divided in two sections. The bottom section ranged from approximately -9° to +5° in elevation, with a 2° separation between measurements (in both azimuth and elevation). The upper section ranged from approximately +5° to +25° in elevation, with a 5° separation between measurements (in both azimuth and elevation). In both sections, the measurements covered an azimuth range of 50°.

* See http://www.python.org.
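To give a flavor of the scripted measurement routine described above, a schematic scan loop is sketched below; the pan-tilt and spectrometer objects and their method names are hypothetical placeholders, not the actual instrument control API.

```python
import numpy as np

def scan_grid(pan_tilt, spectrometer, azimuths_deg, elevations_deg):
    """Step the pan & tilt over a grid of directions and record a spectrum at
    each position (all device calls are hypothetical placeholders)."""
    spectra = {}
    for el in elevations_deg:
        for az in azimuths_deg:
            pan_tilt.move_to(az, el)                     # hypothetical call
            spectra[(az, el)] = spectrometer.acquire()   # hypothetical call
    return spectra

# Bottom section of the grid described above: 2-degree steps in both axes.
elevations = np.arange(-9.0, 5.0 + 1e-9, 2.0)
azimuths = np.arange(0.0, 50.0 + 1e-9, 2.0)
```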


Measurements on this grid were performed several times at different resolutions from March 16 to March 22, 2005, with a total of approximately 1350 measurements for each mode. Each measurement is composed of 64 averaged interferograms for each interferometer sweep direction, thus producing two individually calibrated spectra. A complex calibration7 using both gain and offset corrections was performed on direct-mode measurements, while differential-mode measurements were calibrated to correct for the complex gain only. Fig. 1 shows a mosaic picture of the bottom section of the measurement grid. The sky fills the entire upper section of the grid, while the bottom section is more textured, as shown in the mosaic. The temperature during the measurement runs was mild, most often around 0 °C. Measurements were recorded under both clear and cloudy skies from early morning to late afternoon.
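For context on the calibration step mentioned above, a minimal sketch of a two-point complex (gain and offset) calibration in the spirit of Revercomb et al.7 is shown below; the variable names and exact processing details are assumptions, not the CATSI pipeline.

```python
import numpy as np

def complex_calibrate(S_scene, S_hot, S_amb, B_hot, B_amb):
    """Two-point complex calibration: S_* are complex raw spectra of the scene
    and of the hot/ambient reference blackbodies, B_* the corresponding Planck
    radiances. Gain and offset are complex; the calibrated radiance is real."""
    gain = (S_hot - S_amb) / (B_hot - B_amb)      # complex responsivity
    offset = S_amb - gain * B_amb                 # instrument self-emission term
    return np.real((S_scene - offset) / gain)
```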

Fig. 1: Mosaic picture showing the bottom section of the measurement area.


Fig. 2: 211 direct (left) and 211 differential (right) background measurements at a resolution of 4 cm–1.


3. Results

Fig. 2 shows the 211 measurements performed at a resolution of 4 cm-1 in each mode, on the same scale. The direct measurements (left) predominantly exhibit blackbody-like radiances, with a few measurements (corresponding to clear sky) showing atmospheric absorption features. The ozone band is visible in a few measurements in the 1000 to 1100 cm-1 region. In contrast, the differential measurements (right) are closely packed around zero, with only a few measurements showing large radiance differentials, both positive and negative, between the adjacent fields of view. A small number of them also contain atmospheric features. In order to compare the direct-mode and differential-mode measurement sets in a quantitative fashion, many techniques can be applied. Several of them are presented below.

3.1. Clustering

A first comparison involves classifying the two measurement sets using a clustering algorithm. The results in Fig. 3 are obtained using the well-known k-means algorithm (see for example Schowengerdt8) with 5 classes on 936 direct and 931 differential measurements at a resolution of 8 cm-1. Our objective here is not to claim that the k-means algorithm is the most appropriate algorithm to perform clustering on background samples. In addition, the number of classes (which is a required parameter for k-means) was chosen somewhat arbitrarily. However, the application of this procedure gives some useful insight for comparing the distribution of each measurement set.
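A minimal sketch of such a clustering step is shown below; the input array and file name are hypothetical placeholders, and scikit-learn's k-means is used simply as one convenient implementation (the text does not state which implementation was used).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per background spectrum, one column per
# spectral bin (e.g. 156 bins at 8 cm-1 resolution over 700-1300 cm-1).
spectra = np.load("background_spectra.npy")

# k-means with 5 classes, as in the comparison described above.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(spectra)
centroids = km.cluster_centers_            # class centroids (the curves of Fig. 3)
populations = np.bincount(km.labels_)      # class populations (numbers in parentheses)
```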


Fig. 3: Classification of direct (left) and differential (right) measurements at a resolution of 8 cm-1, using the k-means clustering algorithm with 5 classes. The numbers in parentheses are the class populations, and the associated spectra represent the class centroids.


The k-means algorithm assigns each background measurement to one of five classes, and also returns the five class centroids. In Fig. 3, the five spectra shown in each box are the class centroids, and the numbers in parentheses are the class populations. An approximate temperature (for direct measurements) or temperature differential (for differential measurements) is also given. Most direct measurements (579 of them) are closest to the class centroid having an average brightness temperature of 273 K, which is close to the average air temperature during the experiment. The second most populous direct measurement class, with 174 members, has a centroid with an average brightness temperature of 282.5 K. This class, together with the class having a 296.5-K centroid (54 members), represents measurements taken when the instrument was pointed at buildings, which were often heated by the sun. The two other classes have centroids typical of clear-sky measurements, showing obvious atmospheric features with brightness temperatures as low as 200 K. Most direct measurements are thus close to blackbody radiances around 273 K, with approximately 38% of the measurements being either significantly warmer or colder. Differential measurements are even more tightly grouped. The most populous class contains 808 measurements (87% of all samples) and its centroid is close to the usual radiance residual due to the imperfect balance of the instrument. Most of the other measurements (101 of them) belong to the two closest classes, whose centroids are close to what would be seen if the fields of view looked at two blackbodies with a temperature difference of approximately 4 K. Only very few measurements (22 out of 931) belong to classes associated with temperature differences of ±12 K, which would happen, for example, when one telescope is pointed at a sun-heated building surface and the other at snow. This comparative analysis of the clustering results seems to indicate that differential background measurements are significantly less varied than direct ones.

3.2. Norm and spectral length

Another avenue for comparing direct and differential background measurements is to study the distribution of a scalar parameter associated with each measurement. However, this parameter must be computable for both modes, thus excluding characteristics such as brightness temperature, which can only be computed for direct-mode measurements. We propose two scalar parameters that can be computed for either direct or differential measurements. The first is the norm, which can be expressed as

\[
\| x \| = \sqrt{ \sum_{k} x_k^2 }, \tag{1}
\]

where x is the measurement vector. For direct measurements, higher average radiances imply higher norms. For differential measurements, high norms indicate high radiance differentials (positive or negative) between the fields of view.


The second is the spectral length, which can be seen as the cumulative length of all the segments linking the measurement vector's elements. To simplify its definition, we simply use the norm of the first-order numerical derivative y of the measurement vector x, a closely related quantity that has the advantage of having no dependence on sample spacing:

\[
y_k = x_{k+1} - x_k, \qquad l = \| y \|. \tag{2}
\]

A smooth measurement such as a blackbody radiance curve results in a low spectral length, while atmospheric lines and rich spectral features result in high spectral lengths. In direct mode, clear-sky measurements have the highest spectral lengths. High spectral lengths are not frequent in differential mode because atmospheric features are generally reasonably well suppressed unless only one of the two fields of view contains sky, which can only happen at building, mountain or cloud edges. Fig. 4 shows histograms of the norm and spectral length for 936 direct and 931 differential measurements at a resolution of 8 cm-1. The histograms of the direct measurements show much richer distributions of the norm and spectral length than those of the differential measurements. In this case, the spectral length histogram distinctly separates clear-sky measurements from the others. In contrast, the differential measurements show tightly grouped (probably Rayleigh-distributed) samples for both the norm and the spectral length, the latter exhibiting the lowest variance.
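A compact sketch of how these two scalar descriptors can be computed for a measured spectrum is given below; the array layout is an assumption.

```python
import numpy as np

def norm_and_spectral_length(spectrum):
    """Scalar descriptors defined above: the Euclidean norm (Eq. 1) and the
    spectral length, taken as the norm of the first-order difference (Eq. 2)."""
    x = np.asarray(spectrum, dtype=float)
    norm = np.linalg.norm(x)                 # ||x|| = sqrt(sum x_k^2)
    y = np.diff(x)                           # y_k = x_{k+1} - x_k
    spectral_length = np.linalg.norm(y)      # l = ||y||
    return norm, spectral_length

# For a hypothetical (n_measurements, n_bins) array of spectra:
# norms, lengths = zip(*(norm_and_spectral_length(s) for s in spectra))
```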


Fig. 4: Histograms of the norm (left column) and spectral length (right column) for 936 direct (upper row) and 931 differential (lower row) measurements at a resolution of 8 cm–1.

Again, using the norm and spectral length criteria, direct measurements seem to be much more diverse than differential ones. The presence of two distinct groups in the distribution of the spectral length for the direct mode suggests that classification of direct measurements may be useful in detection algorithms.


3.3. Covariance

It is straightforward to compute the covariance matrices for both sets of measurements at a resolution of 8 cm-1. While the general structure of these matrices is quite similar, the magnitudes of their elements are not. Fig. 5 presents the square root of the diagonal of both covariance matrices (left graph), which gives the standard deviation of each spectral bin in radiance units. For all wavenumbers, the standard deviation is at least two times higher in direct mode than in differential mode. On the right graph of Fig. 5, the eigenvalues of both matrices are plotted. The last eigenvalues (corresponding mostly to measurement noise) follow the same pattern in both modes. The first eigenvalues, however, are larger in direct mode, the first one being almost two orders of magnitude larger than in differential mode. This is expected, since differential measurements typically have a much lower norm than direct ones.
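The quantities compared in Fig. 5 can be obtained directly from the measurement sets; a minimal sketch (assuming rows are measurements and columns are spectral bins) follows.

```python
import numpy as np

def covariance_summary(spectra):
    """Covariance matrix of a measurement set, the square root of its diagonal
    (per-bin standard deviation) and its eigenvalues in decreasing order."""
    cov = np.cov(spectra, rowvar=False)           # rows = measurements
    std_per_bin = np.sqrt(np.diag(cov))           # left graph of Fig. 5
    eigenvalues = np.linalg.eigvalsh(cov)[::-1]   # right graph (descending order)
    return cov, std_per_bin, eigenvalues
```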


Fig. 5: Square root of the diagonal of the covariance matrices (left) of direct and differential measurements at a resolution of 8 cm–1, and the eigenvalues of these matrices (right).

Differences in covariance matrices are of direct interest for detection algorithms based on unstructured background models such as the matched filter and the adaptive coherence estimator (ACE)1, whose performances depend on the inverse covariance matrix.


3.4. Distribution of synthesis coefficients

In detection algorithms based on structured background models, such as the generalized likelihood ratio test (GLRT) for these models1, each background b is a linear combination of a basis U such that b = Uc, where c is a vector of synthesis coefficients. The basis U is either deduced from an a priori model of the backgrounds, or can be extracted from a set of background measurements. Many procedures may produce a (generally orthonormal) basis from such a set; for example, QR and eigenvalue decompositions are often used. We have chosen the singular value decomposition (SVD) of the set to accomplish this task. As an example, the magnitudes of the synthesis coefficients for one particular direct and one particular differential measurement are presented in Fig. 6. In this figure, a different orthonormal basis was built for each set.
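A minimal sketch of building such an orthonormal basis by SVD and computing the synthesis coefficients is given below; whether the set is mean-removed or otherwise preprocessed is not specified in the text, so the raw set is used here.

```python
import numpy as np

def svd_basis(spectra, n_vectors):
    """Orthonormal spectral basis from the SVD of the measurement set
    (rows = background spectra); the returned columns span the n_vectors
    most significant singular modes."""
    _, _, vt = np.linalg.svd(spectra, full_matrices=False)
    return vt[:n_vectors].T

def synthesis_coefficients(U, b):
    """Coefficients c such that U c approximates the background b
    (c = U^T b for an orthonormal U)."""
    return U.T @ b
```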


Fig. 6: Magnitude of the synthesis coefficients for one arbitrary direct (left) and differential (right) background measurement at a resolution of 8 cm–1.

For each background measurement, a vector of synthesis coefficients c can thus be found. It is interesting to study the distribution of these coefficients. In Fig. 7, the histograms for the first five synthesis coefficients (associated with the five most significant singular values of both background sets) are presented for direct (left column) and differential (right column) measurements at a resolution of 8 cm–1. For all of the first five singular modes, it is obvious that the distribution of the synthesis coefficients is more tightly grouped in differential measurements than in direct ones. This again suggests that there is much less variety in differential background measurements than in direct ones.



Fig. 7: Histograms of the first five synthesis coefficients for 936 direct (left column) and 931 differential (right column) background measurements at a resolution of 8 cm-1.

For each synthesis coefficient, it is possible to compute meaningful statistics. The variance of each synthesis coefficient for direct and differential measurements is very close to the eigenvalues of the covariance matrices illustrated in Fig. 5. Higher-order statistics are also of interest. Fig. 8 shows the sample skewness and kurtosis of the distribution of the synthesis coefficients for both direct and differential measurements. The skewness (left) is a measure of the distribution asymmetry, and is not significantly different for the two modes. Synthesis coefficients corresponding to the last singular modes present a skewness close to zero. The kurtosis (right) is a measure of the "peakedness" of the distribution. A high kurtosis is indicative of a distribution with a strong central peak and long, low-valued wings. The kurtosis of a Gaussian distribution has a value of 3, which is the value taken by the kurtosis of the synthesis coefficients for the last singular modes. The near-zero skewness and kurtosis of 3 for synthesis coefficients above the 40th or so singular mode suggest that those modes correspond to Gaussian noise and that the background clutter in both direct and differential modes has a dimension lower than approximately 40. Synthesis coefficients for differential measurements have the highest kurtosis, especially in the first singular modes, which again suggests that differential clutter is composed of a core of highly similar backgrounds with very few outliers. The orthonormal basis obtained by SVD of the background measurement set has as many dimensions as the number of spectral bins in each measurement, which is enough to synthesize any vector of that dimension. If it is to be used in a detection algorithm to discriminate between background and signal, the basis must be truncated to keep only the most significant orthogonal vectors (i.e., those associated with the strongest singular values). The number of orthogonal column vectors kept in U is a parameter whose optimal value is not easy to find. The skewness and kurtosis plotted in Fig. 8 suggest an upper limit above which adding orthogonal vectors is useless.
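The per-mode statistics of Fig. 8 can be computed along the following lines; `coeffs` is a hypothetical array holding one row of synthesis coefficients per background measurement, and random data are used here only as a placeholder.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# coeffs: hypothetical (n_measurements, n_modes) array of synthesis coefficients,
# e.g. coeffs = spectra @ U for an orthonormal basis U.
coeffs = np.random.default_rng(0).normal(size=(931, 156))   # placeholder data

mode_skewness = skew(coeffs, axis=0)
mode_kurtosis = kurtosis(coeffs, axis=0, fisher=False)      # Pearson form: Gaussian -> 3
```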


Fig. 8: Skewness (left) and kurtosis (right) of the synthesis coefficients for the direct and differential background measurements at a resolution of 8 cm–1.

3.5. Background suppression

Detection algorithms based on structured background models typically involve a transformation matrix P (a projector) which projects measurements out of the space spanned by the background basis U. The projector out of the space spanned by the columns of U is P = I - U(U^T U)^{-1} U^T (which reduces to I - UU^T if U is orthonormal). This transformation is used to eliminate the background clutter and, hopefully, reveal any hidden contaminant signature. However, since the target signature may not be totally orthogonal to the basis U, it is also affected by the projector, albeit presumably less than the background clutter. Therefore, choosing the optimal number of orthogonal vectors for building P involves a trade-off between efficient background suppression (which requires as many orthogonal vectors as possible, up to the limit identified in Fig. 8) and minimal target suppression (which requires as few orthogonal vectors as possible). The previous sections provide hints that the distribution of differential background measurements may help detection algorithms in that regard. Since the distribution of differential backgrounds is less diverse than that of direct backgrounds, a smaller number of orthogonal vectors may be required to suppress the background clutter to a given level.


Conversely, the suppression may be better for a given number of orthogonal vectors used to build P. Even if both the direct and differential measurement sets seem to span the same number of significant dimensions (around 40 in the specific case illustrated in Fig. 8), it is still entirely possible that the clutter may be synthesized more accurately in differential mode than in direct mode using projectors of the same rank. Fig. 9 illustrates the dynamics of clutter suppression as projectors P built with an increasing number of orthogonal vectors are applied to the entire set of background measurements. The vertical axis is the mean, across all wavenumbers, of the standard deviation of the projection residual Px, where x is a background measurement. In this case, each measurement vector has 156 elements, and as the number of orthogonal vectors in U approaches 156, every measurement can be completely suppressed. This is not shown in the figure, which illustrates the effect of using 1 to 100 orthogonal vectors in the basis U.
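A sketch of the projector and of the residual metric plotted in Fig. 9 is given below; the array shapes are assumptions (U has one column per retained orthogonal vector, spectra one row per background measurement).

```python
import numpy as np

def background_projector(U):
    """Projector onto the orthogonal complement of the background subspace:
    P = I - U (U^T U)^{-1} U^T, which equals I - U U^T for an orthonormal U."""
    n = U.shape[0]
    return np.eye(n) - U @ np.linalg.solve(U.T @ U, U.T)

def mean_residual_std(P, spectra):
    """Mean, across wavenumbers, of the standard deviation of the projection
    residuals P x over a set of background spectra (the metric of Fig. 9)."""
    residuals = spectra @ P.T                 # each row is (P x) for one measurement
    return np.std(residuals, axis=0).mean()
```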


Fig. 9: Residuals after applying projectors built using a basis with a successively higher number of orthogonal vectors on direct and differential background measurements at a resolution of 8 cm–1.

The two solid curves present the background suppression behavior for the direct and differential measurements. Above approximately 40 orthogonal vectors, they behave as do random noise vectors, indicating that no more background clutter is left after the projection (with a high number of orthogonal vectors, noise is significantly affected by the projector, too). The interesting part of the graph thus lies below 40 orthogonal vectors, where the residual drops sharply after a few orthogonal vectors have been added to U . This indicates that the bulk of the background clutter has been suppressed before the curve’s elbow, at around 10 orthogonal vectors. The differential measurement curve starts lower since even without background suppression, the background clutter is much smaller. The residual for differential measurements is nonetheless always smaller than the residual for direct measurements, especially before the elbow.


The dotted lines present the same results, but with the measurement sets split into two categories according to their spectral length, each category being treated separately. The spectral length threshold for direct measurements was chosen to separate the sky measurements identified in Fig. 4 from the other measurements. It was chosen somewhat arbitrarily using the median spectral length for the differential measurements. As expected, such a classification has almost no effect for differential measurements, while it significantly helps direct measurements by producing two sets with more favorable distributions. However, the classified direct measurements still do not reach the lower residuals obtained with differential measurements.

4. Conclusion

While the results of Fig. 9 are interesting, they do not represent actual detection algorithm performance. These results alone are not sufficient to fully establish a better detection performance in differential mode. Previous measurement campaigns with the CATSI instrument have shown good detection performance in differential mode, but actual ROC (receiver operating characteristic) curves are notoriously difficult to obtain, especially using non-imaging sensors with a limited measurement rate. Detection simulations using synthetic data also pose some difficulties, since it is not an easy task to reproduce the statistics of the real measurements, especially at higher orders, even when using actual measurements as a starting point. Nonetheless, the distribution of differential background measurements seems at first sight to be more favorable to detection algorithms than the distribution of direct measurements. All metrics studied in this paper show that differential measurements are in general more similar and less spread out than direct measurements, and they systematically produce smaller residuals after the projection step that is used in many detection algorithms. Our analysis is based on a relatively high number of background spectra but for a limited set of springtime measurements. Our plan is to extend the background measurements to cover a wider field of regard, a larger range of temperatures and more diverse atmospheric conditions. It would also be interesting to study the influence of the angular separation of the fields of view on the distribution of differential measurements.

References
1. D. Manolakis, D. Marden and G. Shaw, Target detection algorithms for hyperspectral imaging application, Lincoln Laboratory Journal, 14, 79-116 (2003).
2. A. Hayden, E. Niple and B. Boyce, Determination of trace-gas amounts in plumes by the use of orthogonal digital filtering of thermal-emission spectra, Appl. Opt., 35, 2802-2809 (1996).
3. J.-M. Thériault, Modeling the responsivity and self-emission of a double-beam Fourier-transform infrared interferometer, Appl. Opt., 38, 505-515 (1999).
4. J.-M. Thériault, Fourier-transform spectrometer configuration optimized for self-emission suppression and simplified radiometric calibration, United States Patent No. US 6,233,054 B1, May 15 (2001).


5. J.-M. Thériault, E. Puckrin, F. Bouffard and B. Déry, Passive remote monitoring of chemical vapors by differential FTIR radiometry: results at a range of 1.5 km, Appl. Opt., 43, 1425-1434 (2004).
6. H. Lavoie, E. Puckrin, J.-M. Thériault and F. Bouffard, Passive standoff detection of SF6 at a distance of 5.7 km by differential FTIR radiometry, Appl. Spectr., 59, 1189-1193 (2005).
7. H. E. Revercomb, H. Buijs, H. B. Howell, D. D. Laporte, W. L. Smith and L. A. Sromovsky, Radiometric calibration of IR Fourier transform spectrometers: solution to a problem with the high-resolution interferometer sounder, Appl. Opt., 27, 3210-3218 (1988).
8. R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing (second edition), Academic Press, San Diego, 1997, p. 403.


International Journal of High Speed Electronics and Systems, Vol. 18, No. 2 (2008) 277-294. © World Scientific Publishing Company

SIGNAL PROCESSING OF MULTICOMPONENT RAMAN SPECTRA OF PARTICULATE MATTER

JAVIER FOCHESATTO
Geophysical Institute, University of Alaska Fairbanks, 903 Koyukuk Dr., Fairbanks, Alaska 99775, United States
[email protected]

JAMES SLOAN
Department of Chemistry, University of Waterloo, 200 University Av W., Waterloo, Ontario N2L 3G1, Canada
[email protected]

We report advances in the signal processing of multicomponent Raman spectra of particulate matter. We evaluate laboratory samples and ambient samples collected in field experiments in Canada (during the Pacific 2001 Experiment, Vancouver, BC, and at the ALERT station, Nunavut, 2002). We discuss methodologies for signal processing of the Raman spectra: de-noising and de-peaking, baseline reduction, and identification of chemical fingerprints. The ambient samples were collected near the surface under different environmental conditions during the field experiments. In this article we compare the methodologies and assess their performance and differences.

Keywords: Aerosol Chemical Speciation, Raman Spectroscopy.

1. Introduction

Chemical speciation of particulate matter plays a crucial role in atmospheric processes (e.g. aerosol nucleation, scattering/absorption of solar radiation), in air quality, and in identifying aerosol threats to health. Over recent decades urban air quality has become increasingly affected by anthropogenic activities; this may carry serious respiratory consequences (e.g. increased correlation with mortality rates1). This is often seen as mostly a problem of mega-cities2, but chemical signatures of urban aerosols3 can also be observed in isolated places at higher latitudes where local anthropogenic production is definitely not the main aerosol source4. Long-range transport (e.g. Asian dust aerosols) and boreal forest fires5 are the main sources of high-latitude aerosols. Thus the determination of aerosol chemical composition is an important piece of information required to accurately characterize aerosol-atmosphere interactions and their multiple interconnection and feedback processes. When the aerosol chemical composition is known, more accurate calculations of climate trends (radiation budget, albedo changes, etc.) can be performed, since aerosols are one of the most sensitive parameters for detecting climate changes and, more importantly, for quantifying the anthropogenic components of global warming.


Raman spectroscopy offers an advanced spectral sensing technique to investigate the chemical composition of particulate matter suspended in the atmosphere6,7. Chemical speciation of aerosol mixtures indicating the presence of organic components (e.g. polycyclic aromatic hydrocarbons [PAHs]), inorganic components embedded in the aerosol body (e.g. sulfates, nitrates, etc.) and signatures of carbonaceous aerosol (i.e. elemental carbon [EC] and organic carbon [OC]) can be identified by measuring the Raman spectrum of laser-irradiated solid aerosol samples deposited on aerosol sampling filters. In the analysis presented here the experiments were performed using a two-stage cascade impactor with cut points of 0.2 and 3 µm, loaded with silver membrane filters. A typical laboratory setup to measure the aerosol Raman spectrum has been described6 and improved upon4, and is briefly summarized in Section 2. Chemical information about aerosols is deduced from the measured Raman spectra after an adequate mathematical procedure of line and band identification, which is based on matching the signal with a spectral database library containing Raman signatures of substances that could be present in the atmosphere. Thermal noise, dark-current detector noise and cosmic rays introduced during the measuring process are signals that are superimposed on the Raman spectrum, and in certain cases their effect can mask the Raman information. In this paper we discuss methodologies to process detection noise and peaks using different digital signal processing techniques (i.e. digital filters, the wavelet transform (WT) and signal spectrum reconstruction). Spectrum baseline reduction is performed using traditional polynomial baseline fitting; the performance of the traditional method is compared with a more sophisticated multi-resolution non-orthogonal WT. Multicomponent chemical fingerprint detection is assessed by using a procedure that combines a gradient algorithm, for peak-fingerprint localization, with a sequential nested fitting algorithm applied over a multi-parametric nonlinear function with variable numbers of lines and shapes. The fitting process is organized as a nested sequence of searching parameters in a sequentially auto-loaded numerical implementation adjusting central line position, line width, and feature strength. The numerical procedure is based on the simplex search method8 and uses an unconstrained nonlinear optimization algorithm. The methodology is compared with more elaborate algorithms9, and their performance in chemical fingerprint retrieval is discussed. In this article we compare and assess the different methodologies with the aim of further developing an automatic implementation of the multicomponent Raman spectral treatment and processing to retrieve chemical fingerprints using an "on-line" Raman spectrometer instrument.

2. Experimental Setup

A Raman spectrometer is based on three elements: a single-longitudinal-mode continuous-wave (CW) laser with 100 mW to 2 W adjustable optical power and high pointing stability, coupled to a dispersive spectrometer having a multi-channel charge-coupled device (CCD) detector. Our experimental setup uses a Verdi 2 CW 532 nm laser,


Figure 1. Optical setup for Raman spectroscopy measurement of surface particulate matter. Chemistry Department, University of Waterloo, Ontario.

a 702 McPherson spectrometer and a cryogenically cooled LN2 CCD detector from Jobin Yvon (512 x 2048 pixels). The optical arrangement is shown in Figure 1, with the two optical planes folded into one. The CCD is oriented with the large (2048-pixel) side coincident with the spectrometer optical plane, in order to match the maximum wavelength expansion, and the smaller (512-pixel) side with the same orientation as the entrance slit axis of symmetry. This configuration permits a large spectral collection, 2048 pixel points in the spectrum, and a vertical averaging maximum of 512 spectra per acquisition. The spectrometric configuration gives a maximum resolution of 0.0084 nm per pixel (~0.3 cm-1) at each spectrometer grating position. The final spectral resolution in this case is determined by the entrance slit; in addition, sample quality plays an important role in the signal-to-noise (S/N) ratio of the spectral signature. The Raman spectrum over the frequency-shift interval 500-3500 cm-1 is acquired by framing the wavelength interval (i.e. successive measurements of 5 to 9 frames) after stepping the spectrometer to previously calibrated positions, which are selected after reproducing a laboratory blackbody emission curve over the spectral coverage range of the Raman signature, from 532 to 653.7 nm, corresponding to 3500 cm-1 of Raman shift. The focused laser beam is sent onto the sample holder via a folding mirror and through a focusing lens which permits the operator to point the laser at the desired location on the sample surface. To acquire the Raman spectrum it is necessary to maximize the laser intensity; therefore a biconvex lens with 200 mm focal length is used to focus the laser beam onto the sample. The theoretical diffraction-limited spot size under these experimental conditions is ~37 µm, considering the effective lens f-number. Optical simulation using Zemax software, including the laser spot characteristics, gives an r.m.s. spot size of 43 µm for a laser beam diameter of 2.25 mm ± 10% and a full divergence angle of 0.5 mrad. In this case the optical distance between the lens and the aerosol sample required to minimize the laser spot is 199.45 mm, adjustable with a micrometric translation stage. Variations in the focusing spot are important to consider because laser beam de-focusing can cause important variations in the laser energy deposited into the sample. Hence, the focusing lens is mounted on a three-axis platform controlled by micrometer screws, keeping the laser spot in place. Prior to hitting the sample, the laser beam is intercepted by a mechanical chopper which controls the laser exposure time. The exposure can be varied from 10 ms to 60 s for the purposes of these experiments. The sample holder is mounted on a tri-axial platform that holds a plate containing the filter sample. The platform has a predefined inclination angle of 21° referenced to the vertical of the optical bench plane, where an off-axis parabolic mirror is set to capture a certain amount (~3%) of the total angular scattering radiation and reflect it into the spectrometer. This sample-holder configuration prevents the main laser reflection from entering the spectrometer. The light collected by the off-axis parabolic mirror is sent to the spectrometer through an f/4.7 focusing lens matching the spectrometer f-number and the clear aperture diameter of the off-axis parabolic mirror (~50 mm). An interferential notch filter is used at the spectrometer entrance to suppress the radiation at the laser frequency by more than 20 dB in direct transmission at the laser wavelength.

3. Peak and Noise Signal Removal from Raman Spectra

3.1. Spike Removal in Raman Spectra

Peaks and noise signals are high-bandwidth signatures always present in the Raman spectrum. These perturbations appear during the process of detecting the very weak Raman scattering signal. Peak signals are very fast, intense spikes originating at the detector level, mostly due to the detection of cosmic rays. Because of their high strength (i.e. thousands of electrons in one CCD element, or over a very few adjacent elements), their consideration is important: they cannot be minimized by averaging spectra without consequent alteration of the statistical distribution of the original data. Their occurrence in the detection process is spectrally random. Detection of Raman scattering signals implies long integration times (i.e. 1 to 10 s), while signal intensity and spike density increase linearly with time. Co-adding spectra is an experimental technique to improve the S/N ratio; consequently, spectral distortions may be expected if this co-adding is done before a correction procedure for spike removal is applied. Spectral techniques for noise reduction based on shrinking the signal bandwidth can potentially enhance the spikes, which may eventually be mistaken for true Raman features. Spikes present in the Raman signal can be identified when a smoothed version of the spectrum is compared with the original data. Missing-point quadratic fitting can be used for correcting isolated single and double spikes11. An advanced version of spike correction based on a multiple-missing-point model was successfully implemented to correct one-dimensional signals prior to the co-adding process12. Nonlinear median filters with a logic threshold have also been used to remove impulsive noise from transmitted meteorological data13.


The de-peaking technique developed here is based on detecting the spike positions in the Raman spectrum and replacing the spikes with a polynomial interpolation weighted by a certain number of neighboring points with predefined statistical parameters. The interpolation affects only the spike, leaving the rest of the data in their original form. Spikes are detected by examining the spectrum samples y_i together with statistical indicators of the data frame. The mean value \bar{y} of the N-sample frame (N = 2048) and the standard deviation \sigma_y are calculated for the frame under scrutiny. Spike recognition assumes a local deviation from \bar{y} larger than 2.5 times the standard deviation \sigma_y of the considered frame. A least-squares fourth-order polynomial fit over a moving window is then executed in the neighborhood of each spike interval to estimate the new \hat{y}_i that replaces the contaminated y_i. The polynomial fitting is applied recursively until the y_i under analysis remains within one standard deviation \sigma_y of the non-contaminated data. In this work it was found that successive fourth-order polynomial fits with moving windows of 9, 15 and 20 non-contaminated points were necessary to correct most of the spikes encountered in the explored Raman spectra. Figure 2 illustrates a case of recursive spike identification and removal. Multiple spikes of up to 6 contiguous contaminated points in a 2048-point frame can be corrected.

Figure 2. Spike identification and correction sequence for a single CCD data frame acquired with a 10 s exposure and 100 mW laser power on a graphite sample. Panel A is one CCD data row showing 5-point spike contamination in channels 739 to 743. Panels C to F illustrate the sequential spike correction (black trace) together with the raw data (blue trace) for comparison. Panel B shows the corrected signal.
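As a rough illustration of this de-peaking logic, the Python sketch below flags samples deviating by more than 2.5σ from the frame mean and replaces each with a local fourth-order polynomial fit to clean neighbouring points. The function name, the single window length and the single-pass structure are simplifications of the authors' recursive multi-window (9-, 15- and 20-point) procedure.

```python
import numpy as np

def remove_spikes(frame, threshold=2.5, half_window=9, poly_order=4):
    """Replace spike-contaminated samples in a CCD data frame.

    Samples deviating more than `threshold` standard deviations from the
    frame mean are flagged and replaced by the value of a fourth-order
    polynomial fitted to the non-contaminated points within `half_window`
    channels on either side of the spike.
    """
    y = np.asarray(frame, dtype=float).copy()
    spikes = np.flatnonzero(np.abs(y - y.mean()) > threshold * y.std())
    spike_set = set(spikes.tolist())

    for idx in spikes:
        lo, hi = max(idx - half_window, 0), min(idx + half_window + 1, y.size)
        clean = [i for i in range(lo, hi) if i not in spike_set]
        if len(clean) <= poly_order:
            continue  # not enough clean neighbours to constrain the fit
        coeff = np.polyfit(clean, y[clean], poly_order)
        y[idx] = np.polyval(coeff, idx)
    return y

# Synthetic 2048-point frame with a 5-channel cosmic-ray spike (cf. Figure 2).
rng = np.random.default_rng(0)
frame = 100.0 + rng.normal(0.0, 5.0, 2048)
frame[739:744] += 4000.0
corrected = remove_spikes(frame)
```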

To assess the impact of peak removal, horizontal (over 2048 pixels) and vertical (over 512 pixels) statistical parameters for raw and corrected data were calculated. Table 1 shows standard deviation values calculated vertically (i.e., considering the same CCD pixel) before and after spike correction. In this exercise the horizontal variation of the mean and standard deviation over 2048 CCD pixels is not significant before or after correction.

Table 1. Statistical parameters comparison per CCD channel before and after spike correction

CCD channel number    σ(y)     σ(yc)
739                   6.18     5.94
740                   6.99     5.87
741                   9.05     5.54
742                   16.96    5.77
743                   6.08     5.66

3.2. Noise Removal in Raman Spectra

Noise in Raman spectra is caused by thermal processes in the detection system and by fluorescence in the sample material. Cryogenically or thermoelectrically cooled CCDs are desirable multi-channel detectors because they have very low electron noise per pixel, but the noise signature in atmospheric aerosols is more sample- than detector-dependent. The noise contribution to Raman spectra of atmospheric aerosols can be stationary or non-stationary. The time-dependent noise contribution can be kept low if the sources of noise remain the same during the experiment. Noise reduction is generally achieved by co-adding a certain number of spike-free spectra, since the noise process is immersed in a broadband spectral density distribution which has no correlation with the Raman signal. Variations in the noise intensity can also be expected due to the different quantum efficiencies of CCD or single-channel detectors within the spectral range of the measurements. Correct reduction of noise and spikes is therefore mandatory prior to any fingerprint interpretation, because of the poor S/N ratio of the Raman spectrum and the very small amount of aerosol sample material available for irradiation. De-noising methodologies based on digital filtering reduce to simple mathematical expressions that can be implemented in on-line digital signal processors executing very simple algorithms. These techniques are effective in reducing noise due to instrumental artifacts and other undesired signal sources. However, no matter which digital filter is chosen, the statistical moments of the data (at least up to second order) must be preserved to retain the original morphology of very weak Raman features. On the other hand, advanced signal processing techniques based on multi-resolution WT14 are very effective tools that can be used to overcome the noise-reduction problem. In this work we implemented de-noising techniques based on orthogonal WT; for the purpose of describing their potential and limitations, we also implemented basic digital filtering techniques. From a frequency point of view, the problem of de-noising a signal can be solved by reducing its high-frequency components (i.e. signal variability beyond the instrumental spectral resolution). The WT procedure allows the decomposition of the Raman signal in an energy-period plane and, by using a low-pass filter criterion, the Raman signal can be reconstructed by integrating a certain number of approximation reconstruction wavelet coefficients15,16. The Raman signal is decomposed with a wavelet filter bank implemented using different wavelet mother functions. In this application a group of orthogonal Daubechies wavelets was selected15.


The upper left panel of Figure 3 shows a Raman signal acquisition frame of the CCD over amorphous carbon samples, characterized by wide Raman bands. The signal is analyzed by an orthogonal wavelet decomposition based on the Daubechies wavelet family. The lower left panel displays the signal reconstruction at the different wavelet filter bank approximations. The right panel shows the detail coefficients corresponding to the high-frequency part of the total signal reconstruction.

Figure 3. First order Raman spectrum of amorphous carbon samples: 7-level Daubechies orthogonal wavelet decomposition. Upper left panel is the Raman Signal, lower left panel is the low pass filter reconstruction based on the “Approximate Coefficients” (Ap 1-7) and the right panel shows the high pass reconstruction corresponding to each decomposition level based on the “Detail Coefficients” (D 1-7).
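A minimal sketch of this kind of low-pass wavelet reconstruction, assuming the PyWavelets (pywt) package and a Daubechies-4 mother wavelet; the decomposition depth and the choice of discarding all detail levels are illustrative settings, not necessarily those used by the authors.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_lowpass(signal, wavelet="db4", levels=7):
    """Multilevel orthogonal wavelet decomposition followed by a low-pass
    reconstruction: all detail coefficients are zeroed and the signal is
    rebuilt from the deepest approximation branch (cf. Ap 1-7 in Figure 3)."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    lowpass = [coeffs[0]] + [np.zeros_like(d) for d in coeffs[1:]]
    return pywt.waverec(lowpass, wavelet)[: len(signal)]

# Example on a synthetic broad band buried in noise.
x = np.linspace(0.0, 1.0, 2048)
band = np.exp(-((x - 0.5) / 0.05) ** 2)
noisy = band + np.random.default_rng(1).normal(0.0, 0.05, x.size)
smoothed = wavelet_lowpass(noisy)
```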

The same signal was filtered by using two classical digital filters to establish a comparison framework. The first is a simple moving average (MA) filter, presented in Eq. (1), which can be seen as a time convolution of the signal with a rectangular centered digital pulse of variable width:

y_F[n] = \sum_{k=-N}^{N} \frac{y(n+k)}{2N+1}, \quad N = 1, 2, 3, \ldots    (1)

In this filter, (2·N+1) is the data frame length. The second filter is the Savitzky–Golay (SG) smoothing filter10, based on a moving window that fits a polynomial function at each step over a selected data frame length. The length of the frame at each step is selectable, as is the polynomial degree. To avoid further signal distortion the polynomial degree is set to fourth order; the c_i in Eq. (2) are the polynomial fitting coefficients10. This filter is commonly used in spectroscopy applications to de-noise signals because of its performance in preserving peak shapes. The filter is applied to a series of equally spaced data and is expressed in Eq. (2):

y_F[n] = \sum_{i=-n_L}^{n_R} c_i \, y(n+i)    (2)

where n_L is the number of points used to the left of data point n and n_R is the number of points to the right. Figure 4 shows the application of the two digital filters to the first-order Raman spectrum of amorphous carbon samples.

Figure 4. Digital filtering for de-noising the first-order Raman spectrum of amorphous carbon samples. The left panel shows the effect of the moving average (MA) filter (from top to bottom: raw data and filtered data with N = 3, 5, 7, 8 and 10 points; see Eq. (1)). The right panel shows the effect of Savitzky–Golay (SG) smoothing using a fourth-order polynomial with data frames of 7, 11, 15, 19 and 21 points [(nR+nL+1); nR = nL]. On top is the raw data and below are the filtered signals for the corresponding frame lengths.
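Both filters of Eqs. (1) and (2) are one-liners in practice; the sketch below uses a plain convolution for the moving average and SciPy's ready-made Savitzky-Golay routine, with the N = 10 and 21-point, fourth-order settings quoted in the text used purely as illustrative parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

def moving_average(y, N=10):
    """Eq. (1): convolution with a centered rectangular pulse of length 2N+1."""
    kernel = np.ones(2 * N + 1) / (2 * N + 1)
    return np.convolve(y, kernel, mode="same")

def sg_smooth(y, frame=21, order=4):
    """Eq. (2): Savitzky-Golay smoothing with frame = nR + nL + 1, nR = nL."""
    return savgol_filter(y, window_length=frame, polyorder=order)

# Usage: smoothed_ma = moving_average(spectrum); smoothed_sg = sg_smooth(spectrum)
```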

To assess the performance of the smoothing procedures, the S/N ratio, taken as the mean scattered signal divided by its fluctuation, is calculated using Eq. (3) for a spectral position where a Raman feature is present and for one where no Raman feature is present:

\frac{S}{N} = \frac{\bar{y}_J}{\sigma_{y_J}}    (3)

where J is the number of samples taken for the calculation. The S/N ratio was calculated at the positions 1580 cm-1 and 1800 cm-1 over a window 10 cm-1 wide. Table 2 shows the performance of each filtering technique in terms of the S/N ratio, comparing the Approximate Coefficient (Ap5) WT approximation with N = 10 MA filtering and a 21-point data frame in the SG algorithm.

Table 2. Signal-to-noise (S/N) calculation

Spectrum position            (S/N) Raw data   (S/N)-WT   (S/N)-MA   (S/N)-SG
1580 cm-1 (Raman feature)    30               364        163        105
1800 cm-1 (background)       12               329        97         31
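The S/N figure of Eq. (3) is just the windowed mean divided by the windowed standard deviation; a small helper, assuming a wavenumber axis is available for the frame:

```python
import numpy as np

def snr(wavenumber, spectrum, center, width=10.0):
    """Eq. (3): mean/std of the J samples inside a `width` cm^-1 window
    centered on `center` (e.g. 1580 or 1800 cm^-1)."""
    w = np.asarray(wavenumber, dtype=float)
    y = np.asarray(spectrum, dtype=float)
    window = y[np.abs(w - center) <= width / 2.0]
    return window.mean() / window.std()
```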


4. Baseline Reduction in Raman Spectra

The baseline signal in Raman spectra has a negative impact on the effective signal dynamic range. Raman spectra from atmospheric samples obtained using short laser wavelengths (i.e. visible or ultraviolet lasers) are subject to strong baseline distortions17,18. The baseline signal is associated with broadband fluorescence scattering that could initially be related to quantitative parameters of the aerosol sample (i.e. the total amount of material under analysis6). Reduction of the baseline effect, in particular in the spectral region where the Raman fingerprints are located, is necessary in order to quantify more accurately the areas under particular bands or lines. Methodologies to reduce the baseline effect are available in commercial software packages, but their performance is optimized for the specific applications for which they were designed. A mathematical procedure for baseline calculation is generally available in those packages in the form of polynomial or transcendental function fitting. This fitting procedure assumes that part of the spectrum where no Raman signal is expected is included, in order to constrain the convergence of the mathematical procedure to that particular region. In addition, the spectral behavior is assumed to match the mathematical form of the fitting function over the whole Raman spectrum. On the other hand, multiresolution wavelet analysis is a powerful technique successfully implemented in special cases such as pigment signature recognition19 and near-infrared Raman spectroscopy20. The Raman signal depends on the components of the aerosol sample; very sharp, well-defined spectral features are expected when highly ordered crystalline materials (e.g. pyrolytic graphite, inorganic salts or organic components) are the main components6,7, and broad features (i.e. bands) when the sample is composed of disordered amorphous materials such as those encountered in the atmosphere3,4. Therefore, a spectral decomposition of the Raman spectrum based on a time-frequency analysis technique will initially allow for better analysis of the Raman signal contribution in the time-frequency plane. In this case the WT time-frequency plane allows the Raman signal and baseline contributions to be separated. Then, based on an adequate reconstruction procedure (i.e. inverse WT), the baseline can be retrieved. A hard thresholding technique19 based on orthogonal WT can be applied to reconstruct the baseline signal when the high-frequency contribution is frequency-independent of the low-frequency contribution. Thresholding methodologies by blocks, Shannon information and maximum information gain20, based on the information level available in the time-frequency plane, allow for a more compact reconstruction procedure for the Raman signal. The problem pointed out here is that atmospheric carbonaceous aerosols display a Raman spectrum in which a simultaneous contribution at medium frequencies (~ 500 to 200 cm-1) is observed from both the pure Raman signal and the baseline signal. This contribution is due to the highly disordered material that can be sampled in the atmosphere from different sources (e.g. combustion processes, biomass burning, intercontinentally-transported aerosols). Therefore, in the time-frequency plane, the frequency variability of the disordered Raman features and of the baseline are embedded in the same frequency pattern.


In order to study this problem, a non-orthogonal wavelet decomposition allowing variable frequency resolution with a large number of frequency decomposition coefficients was implemented, followed by a time-variable reconstruction whose high-pass bandwidth is scaled by the temporal signal variability in order to subtract the baseline from the reconstructed spectrum. The time-varying thresholding for signal reconstruction is based on the variability of the significance levels of the wavelet coefficients calculated at each time position of the signal. The significance coefficient was calculated from the statistical variation of the Raman signal; the highest temporal variability indicates the Raman signal, while lower variability represents the baseline variation. The numerical procedure used to implement the WT is a modified version of the continuous WT (CWT) successfully applied to the analysis of long temporal series for climatology studies of Pacific Ocean sea surface temperatures22. The CWT implemented over a discrete signal sequence is equivalent to a convolution of the discrete sequence with a scaled, translated version of the mother wavelet. The Morlet wavelet, a complex plane wave modulated by a Gaussian function, is the mother wavelet in this implementation15,23. The expression for the wavelet coefficients is given in Eq. (4):

W_n(s) = \sum_{n'=0}^{N-1} y_{n'} \, \psi^{*}\!\left( \frac{(n'-n)\,\delta t}{s} \right)    (4)

where s is the wavelet scale (starting at 1 cm-1) and n is the localized time index. The WT is solved by applying the discrete Fourier transform. Thus, making use of the convolution theorem, the WT is calculated as the inverse Fourier transform of the product in the frequency domain:

W_n(s) = \sum_{k=0}^{N-1} \hat{y}_k \, \hat{\psi}^{*}(s\,\omega_k) \, e^{\,i\,\omega_k n\,\delta t}    (5)

where k = 0, ..., N−1 is the frequency index and \hat{y}_k is the Fourier transform of y_n. The wavelet power spectrum is calculated from the complex coefficients as |W_n(s)|^2. The selection of the scales in this non-orthogonal wavelet decomposition allows us to build up a more complete set of scales for the W_n coefficients, giving a more accurate representation of both signal and baseline, including all their embedded variability23. Following Torrence and Compo22, the scales are written as fractional powers of 2, Eq. (6), and the number of scales used for the reconstruction is given by Eq. (7):

s_j = s_0 \cdot 2^{\,j\,\delta j}, \quad j = 0, 1, \ldots, J    (6)

J = \frac{1}{\delta j} \log_2\!\left( \frac{N\,\delta t}{s_0} \right)    (7)


Figure 5. CWT of an aerosol Raman spectrum: wavelet coefficients in the wave period-wavenumber diagram. Contour levels (iso-periodic level coefficients) are indicated from 1 to 4096 cm-1 on a logarithmic scale; colors represent coefficient intensities.

where s_0 is the smallest resolvable scale and J determines the largest scale. The s_0 value is set to 2δt according to the Nyquist theorem. The δj value is the sub-scale decomposition (in powers of 2); the lower the δj, the finer the spectral decomposition that can be achieved. Here we implemented a 12-level decomposition with 4 sub-octaves per scale. Figure 5 shows a contour plot of the wavelet power coefficients in the time (wavenumber)-wave-period plane. The contours represent the iso-periodic level coefficients and the color levels the coefficient intensities. The mother wavelet is a Morlet with a sampling interval of 1 cm-1, and the decomposition is based on 12 levels, each with 4 sub-octaves. The first period begins at 2 cm-1. The original data are reconstructed as the discrete sum of the real part of the WT over the scales, as shown in Eq. (8):

x_n = \frac{\delta j \, \delta t^{1/2}}{C_\delta \, \psi_0(0)} \sum_{j=0}^{J} \frac{\Re\{ W_n(s_j) \}}{s_j^{1/2}}    (8)

In Eq. (8) the factor ψ_0(0) removes the energy scaling, while s_j^{1/2} converts the WT coefficients into an energy density, and \Re indicates the real part of the transformed wavelet coefficients. The factor C_δ comes from reconstructing a δ function from its WT using the function ψ_0(η); see details in Ref. 22. The upper panel of Figure 6 plots the original Raman signal together with the spectrum reconstructed from the entire set of wavelet coefficients shown in Figure 5 using the inverse CWT [ICWT, Eq. (8)]. The lower panel of Figure 6 shows the reconstruction spectral residue.


Figure 6. Signal reconstruction by inverse CWT (ICWT). Upper panel: full bandwidth reconstruction of the Raman signal superimposed on the original data. Lower panel: reconstruction spectral residue.

A statistical test was performed (using the original data, synthetic data and simulated background data) to verify that the wavelet coefficients for the ICWT fall within the 95% confidence interval; interested readers are referred to the published results22. The baseline reduction, Eq. (9), was obtained by subtracting from the original signal y_n the reconstructed data x_n of Eq. (8). The J components retained were determined using a variable-bandwidth reconstruction as a function of the temporal variability of the signal, following the statistical significance calculation given in reference 22.

y_B = y_n - \frac{\delta j \, \delta t^{1/2}}{C_\delta \, \psi_0(0)} \sum_{j=0}^{J} \frac{\Re\{ W_n(s_j) \}}{s_j^{1/2}}    (9)
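For concreteness, the sketch below implements Eqs. (4)-(9) along the lines of Torrence and Compo22: a Morlet (ω0 = 6) CWT evaluated through the FFT, the scale sum of Eq. (8) for reconstruction, and a baseline obtained by rebuilding only the large-scale part. The reconstruction constants Cδ = 0.776 and ψ0(0) = π^(-1/4) are the standard Morlet values, and the fixed scale cut-off is only a stand-in for the authors' time-varying significance threshold.

```python
import numpy as np

def morlet_cwt(y, dt=1.0, dj=0.25, s0=None, omega0=6.0):
    """FFT implementation of Eqs. (4)-(6): W_n(s) = IFFT[y_hat * psi_hat*(s w_k)]."""
    N = y.size
    s0 = 2.0 * dt if s0 is None else s0
    J = int(np.log2(N * dt / s0) / dj)                    # Eq. (7)
    scales = s0 * 2.0 ** (dj * np.arange(J + 1))          # Eq. (6)
    wk = 2.0 * np.pi * np.fft.fftfreq(N, d=dt)
    y_hat = np.fft.fft(y)
    W = np.empty((scales.size, N), dtype=complex)
    for j, s in enumerate(scales):
        # Normalized Morlet in the frequency domain (analytic: positive w only).
        psi_hat = (np.pi ** -0.25) * np.sqrt(2.0 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * wk - omega0) ** 2) * (wk > 0)
        W[j] = np.fft.ifft(y_hat * np.conj(psi_hat))      # Eq. (5)
    return W, scales

def icwt(W, scales, dt=1.0, dj=0.25, Cdelta=0.776, psi00=np.pi ** -0.25):
    """Eq. (8): x_n = (dj*sqrt(dt)/(Cdelta*psi0(0))) * sum_j Re{W_n(s_j)}/sqrt(s_j)."""
    return (dj * np.sqrt(dt) / (Cdelta * psi00)) * \
        (W.real / np.sqrt(scales)[:, None]).sum(axis=0)

def subtract_baseline(y, dt=1.0, dj=0.25, min_scale=200.0):
    """Eq. (9): rebuild only the slowly varying scales as the baseline and
    subtract it; the fixed `min_scale` cut-off is an illustrative placeholder."""
    W, scales = morlet_cwt(y, dt, dj)
    baseline = icwt(W * (scales >= min_scale)[:, None], scales, dt, dj)
    return y - baseline, baseline
```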

4.1. Baseline Reduction based on Polynomial Fitting

In theory the baseline spectral variation follows a fourth-power law and varies continuously across the spectrum. The baseline variation in the regions where the Raman bands are located is therefore assumed to be a continuation of the baseline variation in the regions where no Raman signature is expected. Based on these assumptions, a polynomial fitting procedure can be applied to the whole signal, with the fitting process made to converge where no Raman signal is expected6,7. Note that a higher-order polynomial fit may initially give a smaller fitting error, but high-order oscillations can then occur in regions where no fitting control is imposed (i.e. under the Raman spectral features). The convergence criterion is set initially to fit the polynomial parameters in three spectral intervals: 500–600, 1700–2500, and 3400–3500 cm-1. The numerical procedure is based on a fitting and refitting process that is repeated until a minimum error is reached and only small parameter oscillations in the neighborhood of the fitted regions remain.


The methodology for fitting convergence is based on the Simplex Algorithm for multiple-parameter fitting, adapted to each of the fitting formulas, with an unconstrained number of runs and convergence in the specific spectral regions defined above10. The convergence error is evaluated from the signed difference between the Raman signal and the baseline in the spectral regions of convergence. Figure 7 illustrates the baseline calculation based on the WT and on polynomial fitting.

Figure 7. Baseline retrieval using WT and polynomial fitting
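A simplified stand-in for the polynomial approach of Sec. 4.1: a fourth-order least-squares fit restricted to the three anchor intervals quoted above, evaluated over the whole axis as the baseline. The iterative Simplex refitting used by the authors is omitted here.

```python
import numpy as np

ANCHOR_REGIONS = [(500.0, 600.0), (1700.0, 2500.0), (3400.0, 3500.0)]  # cm^-1

def polynomial_baseline(wavenumber, spectrum, order=4, regions=ANCHOR_REGIONS):
    """Fit the polynomial only where no Raman signal is expected, then
    evaluate it everywhere as the baseline estimate."""
    w = np.asarray(wavenumber, dtype=float)
    y = np.asarray(spectrum, dtype=float)
    mask = np.zeros(w.size, dtype=bool)
    for lo, hi in regions:
        mask |= (w >= lo) & (w <= hi)
    coeff = np.polyfit(w[mask], y[mask], order)
    return np.polyval(coeff, w)

# Baseline-corrected spectrum:
# corrected = spectrum - polynomial_baseline(wavenumber, spectrum)
```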

5. Chemical Fingerprint Retrieval

Chemical fingerprint identification is the main goal of the Raman spectroscopy analysis. Detection of multicomponent species during chemical identification of airborne aerosol samples is based on a matching procedure between the actual Raman signature and the known fingerprints of basic components (i.e. carbon components, inorganic salts, and organic components). Line-matching recognition is highly desirable in algorithms performing rapid identification of chemical components in real-time surface aerosol samplers. The chemical fingerprint algorithm developed here was compared with a numerical procedure that was previously validated using laboratory samples6,7. The Peakfit program9 uses line position constraints to fix the relative areas of the line shapes, redundancy codes, and refitting to reach convergence; the program fits a series of line shapes together with the baseline function. The chemical fingerprint retrieval developed for multicomponent Raman spectra is based on a set of predefined chemical species fingerprints available to initialize the fitting procedure. A multidimensional unconstrained nonlinear minimization methodology based on the Simplex Method8,10 is then implemented to reproduce the Raman fingerprints.


Figure 8. Raman Spectrum Fitting algorithm diagram.

The numerical routine is constrained by the number of line-shape positions determined initially by edge detection. The edge-detection algorithm takes the first derivative of the Raman signal and searches for local maxima to identify peak positions. A threshold level for this identification is calculated using statistical parameters of the entire Raman spectrum. Once a peak position is detected, the algorithm searches the database for its chemical proxy to build an approximate line-shape function. The edge-detection process continues over the entire spectrum until a complete first-approximation spectrum is built. At this point the specific Raman fingerprint fitting begins; only a Lorentzian line-shape formulation is considered here. A series of numerical trials allowed a preferred order in the fitting procedure to be established: the first fitting loop is run to accommodate the line center (ν0), the second loop searches for the line width (∆ν), and the third loop seeks the line strength (I0). The outputs of each fitting step are linked automatically in order to build the complete fitting procedure. After a certain number of cycles the sequence is restarted automatically, reloading the final values reached in the previous sequence. The number of cycles and the maximum excursion of the variables are constrained so that the signal is fitted and re-fitted a certain number of times in the search for the minimum fitting error. Figure 8 illustrates the algorithm diagram.
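The sketch below compresses this scheme into a single Nelder-Mead (simplex) refinement of all Lorentzian parameters at once, with peak seeding reduced to a simple local-maximum search; the 2σ seeding threshold and 20 cm-1 starting width are illustrative choices, and the database lookup of chemical proxies is omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import argrelmax

def lorentzians(w, params):
    """Sum of Lorentzian lines; params is a flat array of (nu0, dnu, I0) triplets."""
    model = np.zeros_like(w, dtype=float)
    for nu0, dnu, I0 in np.reshape(params, (-1, 3)):
        model += I0 * (0.5 * dnu) ** 2 / ((w - nu0) ** 2 + (0.5 * dnu) ** 2)
    return model

def fit_fingerprint(w, y):
    """Seed line centers from local maxima above a statistical threshold, then
    refine center, width and strength of every line with the simplex method."""
    threshold = y.mean() + 2.0 * y.std()
    peaks = [i for i in argrelmax(y, order=5)[0] if y[i] > threshold]
    p0 = np.ravel([[w[i], 20.0, y[i]] for i in peaks])   # nu0, dnu, I0 per line

    def cost(p):
        return np.sum((y - lorentzians(w, p)) ** 2)

    res = minimize(cost, p0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-3, "fatol": 1e-6})
    return np.reshape(res.x, (-1, 3))                    # fitted (nu0, dnu, I0)
```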

5.1. Chemical composition of aerosols during Pacific 2001 and ALERT 2002 experiments

A number of surface aerosol spectra were measured to explore the algorithm's capabilities. Two samples were selected randomly to search for the chemical signatures


of aerosols in two different environments. Figure 9 shows the evaluation of sample S30:70 over a 2 hour sampling period during the Pacific-2001 experiment in Slocan Park, Vancouver (BC). Figure 10 displays the aerosol Raman spectrum collected at the ALERT station, Nunavut, Canada. The upper panel of both figures shows the measured Raman spectrum (de-peaked and de-noised); and the lower panel of both figures shows the series of fitted Lorentzian line shapes approximating spectrum + baseline with 3000.

The detection can be improved when looking at multiple peaks. Extending the spectral region from 3 to 6 THz more than doubles the number of available structures. An example is given below for RDX. Using an absorptivity value of α = 0.08 cm2 mg-1 for RDX at 0.8 THz, as determined experimentally16, we apply this calibration across the absorption spectrum of RDX in cm2/mg, as shown in Figure 2. We have studied a number of explosives and related compounds, including TNT, RDX, HMX, and PETN, as reported by Leahy-Hoppa et al.19

Figure 2. Terahertz absorption in RDX calibrated in absolute units of cm2 mg-1. Further details to be published.1


4. THz Time-Domain Reflection Spectroscopy

The transmission geometry is relevant for laboratory investigations, but for most applications a reflection geometry is required. There are a number of additional effects to consider in a reflection geometry, such as signal generation, scattering from the reflecting surface, and background interference, to mention just a few. The analysis for the reflection geometry is somewhat different from that encountered in the transmission scenario. A reference waveform is still required; however, in this case the best reference is a perfect mirror, and we typically use either a gold-coated glass optical mirror or a copper plate. In reflection geometry, the ratio of the Fourier transforms of the electric fields for the sample and the reference is:

\frac{E_{sam}(\omega)}{E_{ref}(\omega)} = |r(\omega)| \exp[i\varphi(\omega)] = \frac{\hat{n}(\omega) - 1}{\hat{n}(\omega) + 1} = \frac{(n(\omega) + i\kappa(\omega)) - 1}{(n(\omega) + i\kappa(\omega)) + 1}    (4)

where the explicit dependence on frequency is shown but hereafter suppressed. The absorptivity is related to the imaginary part of the complex index of refraction through α = 2ωκ / c . Equating real and imaginary parts yields two equations in two unknowns, yielding the absorptivity as a function of the magnitude ratio r and the phase difference φ between sample and reference:

\alpha = \frac{2\omega}{c} \, \frac{2|r| \sin\varphi}{1 + |r|^2 - 2|r| \cos\varphi}    (5)

This is described in further detail in recent references.20, 21 The determination of the phase angle is, especially on samples with random surfaces, problematic.22 The experimental setup used in reflection measurements is shown below in Figure 3. The THz beam, 0.1 to 2.5 THz, is collimated using 5 cm diameter optics, and is incident on the sample at an angle of 22 deg. For experiments on rough surfaces, a Au mirror is used as a reference.
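Equations (4) and (5) invert in closed form. The sketch below assumes the complex ratio Esam/Eref has already been formed from the two measured waveforms and glosses over the sign/phase conventions of the mirror reference:

```python
import numpy as np

C_CM = 2.99792458e10  # speed of light in cm/s

def optical_constants(ratio, freq_hz):
    """Invert Eq. (4) for n and kappa and evaluate Eq. (5) for the
    absorptivity alpha (cm^-1) from the complex ratio r = E_sam/E_ref."""
    r = np.abs(ratio)
    phi = np.angle(ratio)
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    denom = 1.0 + r ** 2 - 2.0 * r * np.cos(phi)
    n = (1.0 - r ** 2) / denom
    kappa = 2.0 * r * np.sin(phi) / denom
    alpha = 2.0 * omega * kappa / C_CM        # Eq. (5): alpha = 2*omega*kappa/c
    return n, kappa, alpha
```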

5. Wave scattering from granular materials

The primary goal of these experiments is the identification of spectral features of explosives in a granular material such as sand. To accomplish this, we are investigating the THz TDS signals from patterned surfaces and random23 (granular) surfaces, and how these structures influence or disguise spectral features from a trace material. As an introductory example, let us discuss a simple case of wave scattering from a stepped interface, as shown in Figure 4. Time-domain spectroscopy with ultrafast laser pulses is done by sampling the electric field at particular relative delays of the pump and probe laser pulses. By varying the relative delay, the electric field waveform is measured as a function of time, retaining amplitude and phase information. For a stepped interface, the wave reflecting from the surface is received with two different delays: A_1 E(t) + A_2 E(t + δ), with real coefficients A_1 and A_2.


Figure 3. Photograph of the THz time-domain spectroscopy system in reflection for granular samples. The laser paths are superimposed in red, and the THz path in blue. The photoconductive emitter is on the right side of the vertical breadboard.

Figure 4. Reflection from an idealized stepped interface (left), and an Al machined structure (right).

The Fourier spectrum is:

A_1 g(\omega) + A_2 \exp(i\omega\delta)\, g(\omega)    (5)

where g(ω) is the Fourier transform of E(t). The power is the magnitude squared:

|g(\omega)|^2 \left\{ A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\omega\delta) \right\}    (6)

The cosine term is a spectral interference term. These are known as “thickness fringes” which occur widely in many forms of spectroscopy, not only in TDS. To illustrate experimentally the appearance of these interferences, waveforms were measured in reflection from two structures machined in Al, with stepped interfaces created by milling channels around square mesas. A photograph of one structure is shown in Figure 4. The


two samples differ in the depth of the channels: a low-relief cut of 0.381 mm (0.015”) and a high-relief cut of 1.143 mm (0.045”). The experimental reflected power is shown in Figure 5. The periodicity of the spectral interference fringes depends on the depth of the mesas, as expected.

Figure 5. THz spectrum for machined Al mesas illustrating spectral interference fringes. Left: low-relief, 0.381 mm (0.015”); right: high-relief, 1.143 mm (0.045”).
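The fringes of Eq. (6) are easy to reproduce numerically; the amplitudes below are arbitrary, the step depth is the low-relief value from the text, and normal incidence is assumed:

```python
import numpy as np

c = 2.998e8                           # m/s
depth = 0.381e-3                      # m, low-relief channel depth
delta = 2.0 * depth / c               # extra round-trip delay from the lower level
freq = np.linspace(0.1e12, 2.5e12, 1000)
A1, A2 = 0.6, 0.4                     # arbitrary real reflection amplitudes

# Eq. (6): the cos term gives "thickness fringes" with period 1/delta in frequency.
power = A1 ** 2 + A2 ** 2 + 2.0 * A1 * A2 * np.cos(2.0 * np.pi * freq * delta)
```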

One way to extend this model to a random surface is to introduce the height deviation h(x,y) from a reference surface and to employ statistical properties of this function23. A standard assumption is that h is "normal", or Gaussian distributed, with average height ⟨h⟩ = 0 (by proper choice of the reference surface) and standard deviation σ_h. If a waveform E(t) is incident on a rough surface, the reflected waveform will now contain a distribution of delays δ = −2h/c corresponding to the different surface heights: ⟨E(t + δ)⟩, where the angle brackets indicate an average over the surface. The Fourier transform is then g(ω)⟨exp(iωδ)⟩, containing an average over the phases. Using a theorem from statistics, we have shown24 that:

g(\omega)\,\langle \exp(i\omega\delta) \rangle = g(\omega)\, \exp(i\omega\langle\delta\rangle)\, \exp(-\omega^2 \sigma_\delta^2 / 2)    (7)

where the standard deviation of the delays is σ_δ = 2σ_h/c. This result is exact if h is Gaussian, and approximate otherwise (the neglected terms involve higher-order moments of the statistical distribution). The first exponential is a phase factor that arises from the average delay (which can be chosen to be zero). The second exponential is a Gaussian roll-off of the spectral magnitude that depends strongly on the standard deviation of the height distribution.


In numerical investigations, one typically creates an instance of a rough surface by random draws with the desired parameter σ_h. It is then desirable to average the results over many instances of rough surfaces.23 We have experimentally supported this model with THz TDS reflection measurements from instances of rough surfaces created with metal powder.24 As an example, Figure 6 shows reflection spectra from a coarse-grained pure silica sand, a finer commercial grade of sand, and copper powder (-100 mesh). Grain sizes are largest for the silica sand and smallest for the copper powder, which fits qualitatively well with the expected slopes of the roll-off. The copper powder exhibits the highest signal because of the high reflectivity of copper. It should also be noted that reflections from copper powder come from surface grains, while for sand, deeper grains may also contribute to the reflected signal.

Figure 6. Reflection spectra for several granular materials showing the Gaussian frequency roll-off.
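A quick numerical check of Eq. (7), with an illustrative 35 µm rms roughness: draw Gaussian heights, average the phase factor over the instance, and compare with the analytic Gaussian envelope.

```python
import numpy as np

c = 2.998e8
sigma_h = 35e-6                                     # rms surface height (illustrative)
rng = np.random.default_rng(42)
delta = -2.0 * rng.normal(0.0, sigma_h, 5000) / c   # per-point reflection delays
sigma_delta = 2.0 * sigma_h / c

freq = np.linspace(0.1e12, 2.5e12, 300)
omega = 2.0 * np.pi * freq

# Surface-averaged phase factor <exp(i*omega*delta)> versus the predicted
# Gaussian roll-off exp(-omega^2 sigma_delta^2 / 2) of Eq. (7).
averaged = np.abs(np.exp(1j * np.outer(omega, delta)).mean(axis=1))
envelope = np.exp(-0.5 * (omega * sigma_delta) ** 2)
```

With only one finite-size instance, the averaged curve levels off at a statistical floor of roughly 1/sqrt(number of samples), which is why averaging over many surface instances is desirable.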

Given the Gaussian frequency roll-off from scattering, we expect that absorption features of trace chemicals in scattering media can still be recovered with sufficient experimental SNR. For these experiments, we use tartaric acid as a simulant for explosives, because explosives are expensive in high-purity grades and are associated with safety hazards and waste-disposal issues. Tartaric acid has a molecular weight of 150.1 g/mol, similar to some explosives, forms crystals at room temperature and is food-safe. It has an absorption band at 1.1 THz. The scattering sample was copper powder (-100 mesh) in a petri dish, to which 20 ml of a 0.69 M solution of tartaric acid in methanol was added; the solvent was then removed by evaporation. Copper powder was chosen as the scattering substrate because the reflectivity of Cu is higher than that of sand (see Figure 6), so that the returned signals would be higher. We estimate that 2 g of tartaric acid was distributed throughout the volume of copper powder. The frequency spectrum of copper powder, with and without tartaric acid, is shown in Figure 7.


The signal with tartaric acid shows an absorption anomaly at the 1.1 THz absorption band of tartaric acid, which does not appear in the bare copper powder. Unfortunately the signal also drops below the noise floor above 1.2 THz for this measurement, due to the roll-off from the scattering distributions, so the full shape of this feature cannot be observed. While we cannot yet claim spectral identification, the observation of a spectral absorption anomaly on a scattering substrate (copper powder) is an important step forward. We are working towards demonstrating unambiguous identification of a chemical on this and other scattering substrates, including sand.


Figure 7. Absorption anomaly of tartaric acid on copper powder.

6. Conclusions Potential applications of terahertz spectroscopy to explosives detection in the field will continue to drive research and development, especially for THz sources and detectors. A catalog of spectral features for explosives in this frequency range is now available.13 However the signal generation in reflection, a basis for application of the technology in the field, has been neglected. We have determined absolute absorptivity values for explosives (shown here for RDX, full analysis to be published1), which determines the sensitivity (SNR) required for detection with THz at a target trace explosive concentration. In addition, we have developed a model for the reflection from a granular surface, and have demonstrated experimentally that it is possible to observe spectral anomalies of chemicals on granular substrates (shown here, tartaric acid on Cu powder), and we are working towards unambiguous identification. Of particular interest would be the observation of spectral features of explosives in realistic scattering substrates such as sand.


7. Acknowledgments This work was supported by the Army Research Office under the MURI program contract/grant number DAAD190210255. We would also like to thank Ed Ott (JHUAPL) for preparation of the polyethylene pellets.

References
1. M. J. Fitch, M. R. Leahy-Hoppa, E. Ott, and R. Osiander, "Absolute absorptivity and cross-section measurements for the explosives TNT, RDX, HMX, and PETN," submitted to Chemical Physics Letters (2007).
2. J. B. Spicer, P. Dagdigian, R. Osiander, J. A. Miragliotta, X. C. Zhang, R. Kersting, D. R. Crosley, R. K. Hanson, and J. Jeffries, "Overview: MURI center on spectroscopic and time-domain detection of trace explosives in condensed and vapor phases," Proc. of SPIE 5089(2), 1088-1094 (2003).
3. F. C. DeLucia, "Spectroscopy in the terahertz spectral region," in Sensing with Terahertz Radiation, D. Mittleman, ed. (Springer-Verlag, New York, 2003).
4. D. G. Allis, D. A. Prokhorova, and T. M. Korter, "Solid-state modeling of the terahertz spectrum of the high explosive HMX," Journal of Physical Chemistry A 110(5), 1951-1959 (2006).
5. M. R. Leahy-Hoppa, "Terahertz spectroscopy and molecular modeling of nonlinear optical polymers," (University of Maryland, Baltimore County, 2006).
6. W. R. Tribe, D. A. Newnham, P. F. Taday, and M. C. Kemp, "Hidden object detection: security applications of terahertz technology," Proc. of SPIE 5354, 168-176 (2004).
7. C. Baker, W. R. Tribe, T. Lo, B. E. Cole, S. Chandler, and M. C. Kemp, "People screening using terahertz technology," Proc. of SPIE 5790, 1-10 (2005).
8. M. C. Kemp, P. F. Taday, B. E. Cole, J. A. Cluff, A. J. Fitzgerald, and W. R. Tribe, "Security applications of terahertz technology," Proc. of SPIE 5070, 44-52 (2003).
9. J. E. Bjarnason, T. L. J. Chan, A. W. M. Lee, M. A. Celis, and E. R. Brown, "Millimeter-wave, terahertz, and mid-infrared transmission through common clothing," Applied Physics Letters 85(4), 519-521 (2004).
10. D. C. Leggett, J. H. Cragin, T. F. Jenkins, and T. A. Ranney, "Release of explosive-related vapors from landmines," U.S. Army Engineer Research and Development Center - Cold Regions Research and Engineering Laboratory, ERDC-CRREL TR-01-6, Hanover, NH (2001).
11. A. Anaya and I. Padilla, "3D laboratory-scale SoilBed for assessment of fate and transport of explosive-related compounds in soils under variable environmental conditions," Proc. of SPIE 6217, 921738 (2006, in press).
12. M. J. Fitch, D. Schauki, C. Dodson, and R. Osiander, "THz spectroscopy of explosives and related compounds," Proc. of SPIE 5411, 84-91 (2004).
13. Y. Chen, H. Liu, Y. Deng, D. B. Veksler, M. S. Shur, X.-C. Zhang, D. Schauki, M. J. Fitch, R. Osiander, C. Dodson, and J. B. Spicer, "Spectroscopic characterization of explosives in the far-infrared region," Proc. of SPIE 5411, 1-8 (2004).
14. M. J. Fitch, C. Dodson, Y. Chen, H. Liu, X.-C. Zhang, and R. Osiander, "Terahertz reflection spectroscopy for explosives detection," Proc. of SPIE 5790, 281-288 (2005).
15. C. Dodson, M. J. Fitch, R. Osiander, and J. B. Spicer, "Terahertz imaging for anti-personnel mine detection," Proc. of SPIE 5790, 85-93 (2005).


16. R. Osiander, J. A. Miragliotta, Z. Jiang, J. Xu, and X.-C. Zhang, "Mine field detection and identification using terahertz spectroscopic imaging," Proc. of SPIE 5070, 1-6 (2003).
17. R. A. Cheville, M. T. Reiten, R. McGowan, and D. R. Grischkowsky, "Applications of optically generated terahertz pulses to time domain ranging and scattering," in Sensing with Terahertz Radiation, D. Mittleman, ed. (Springer-Verlag, New York, 2003).
18. L. Duvillaret, F. Garet, and J. L. Coutaz, "A reliable method for extraction of material parameters in terahertz time-domain spectroscopy," IEEE Journal of Selected Topics in Quantum Electronics 2(3), 739-746 (1996).
19. M. R. Leahy-Hoppa, M. J. Fitch, X. Zheng, L. M. Hayden, and R. Osiander, "Wideband terahertz spectroscopy of explosives," Chemical Physics Letters 434, 227-230 (2007).
20. P. U. Jepsen and B. M. Fischer, "Dynamic range in terahertz time-domain transmission and reflection spectroscopy," Optics Letters 30(1), 29-31 (2005).
21. E. M. Vartiainen, Y. Ino, R. Shimano, M. Kuwata-Gonokami, Y. P. Svirko, and K.-E. Peiponen, "Numerical phase correction method for terahertz time-domain reflection spectroscopy," Journal of Applied Physics 96(8), 4171-4175 (2004).
22. Y. Chen, H. Liu, M. J. Fitch, R. Osiander, J. B. Spicer, M. Shur, and X.-C. Zhang, "THz diffuse reflectance spectra of selected explosives and related compounds," Proc. of SPIE 5790, 19-24 (2005).
23. J. A. Ogilvy, Theory of Wave Scattering from Random Rough Surfaces (IOP Publishing Ltd., New York, 1991).
24. Y. Dikmelik, M. J. Fitch, R. Osiander, and J. B. Spicer, "The effects of rough surface reflection on terahertz time-domain spectroscopy," Optics Letters 31, 3653-3655 (2006).

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 307–318 © World Scientific Publishing Company

NOVEL APPLICATION OF PASSIVE STANDOFF RADIOMETRY FOR THE MEASUREMENT OF EXPLOSIVES

ELDON PUCKRIN, JEAN-MARC THÉRIAULT, HUGO LAVOIE, DENIS DUBÉ and PATRICK BROUSSEAU

Defence Research & Development Canada – Valcartier, 2459 Pie XI Blvd N, Quebec, Quebec, G3J 1X5, Canada
[email protected]

The objective of this paper is to show that explosives may potentially be detected by passive standoff FTIR radiometry. It is demonstrated that many explosives exhibit a signature (fingerprint) in the longwave infrared (LWIR) region (i.e., 8 – 14 µm). Simulations using the radiative transfer model MODTRAN4 clearly suggest that such materials can be identified when a thermal contrast exists between the material and its environment. The explosives considered in this study include octogen (HMX), trinitrotoluene (TNT), cyclonite (RDX), and the plastic explosives C-4 and Detasheet-C. In addition, passive FTIR measurements of HMX have been performed in the field at standoff distances up to 60 m. The development of a passive standoff detection capability based on FTIR radiometry would be a potentially useful addition to the arsenal of measurement techniques that currently exist for the detection and identification of explosive threats.

Keywords: FTIR spectroscopy, passive standoff detection, explosives

1. Introduction

As a result of recent terrorist activities involving explosive devices, government organizations and agencies have placed emphasis on developing techniques to detect and identify explosive devices. In particular, a concerted effort is being made to develop standoff techniques for detecting an explosive device from outside the zone of severe damage that could be inflicted by such a device1. Techniques for detecting explosives are usually classified according to whether the explosive is in bulk form or in trace quantities. For bulk detection, imaging techniques consist of methods based on X-ray technology, terahertz and microwave imaging, nuclear methods involving neutrons and gamma rays, and magnetic resonance methods1. These techniques typically distinguish the spatial features associated with the bulk explosive. The vapor pressure associated with most explosives is extremely small, being on the order of parts per billion or less. As a result, the standoff detection of trace quantities of explosives is extremely difficult. Most detection methods require that the residual explosive accumulate on surfaces, which are then swiped and analyzed by non-standoff detection approaches such as mass spectrometry, ion mobility spectrometry, electron capture, acoustic wave sensors or field-ion spectrometry.


Gas chromatography is often used in conjunction with many of these techniques to enhance the selectivity for the nitrogen-containing molecules found in the majority of explosives. Standoff detection techniques that are currently being investigated for application to explosive detection consist of LIDAR (Light Detection and Ranging), DIAL (Differential Absorption LIDAR), DIRL (Differential Reflectance LIDAR) and nonlinear optical techniques involving CARS (Coherent Anti-Stokes Raman Scattering) processes1.

Passive standoff Fourier-transform infrared (FTIR) radiometry is a well-known technique for the detection and identification of chemical vapor threats based on their unique infrared signature. It has been applied to the measurement of chemical vapours2-6, chemical warfare agent contamination on surfaces7, biological warfare agents8,9 and radiological compounds10. Its application to explosive detection and identification has not, to date, been widely considered. Therefore, the objective of this paper is to make a preliminary investigation of the passive standoff FTIR radiometric technique for detecting and identifying explosive materials where a temperature difference occurs between the target and background scene, such as in an outdoor environment. To accomplish this goal it was first necessary to determine whether explosives have a signature in the thermal infrared region of the spectrum where passive detection occurs. Hence, diffuse reflectance measurements of several explosive materials were obtained in the laboratory, including octogen (HMX), trinitrotoluene (TNT), cyclonite (RDX) and the plastic explosives C-4 (a mixture of RDX with polyisobutylene, motor oil and di-(2-ethylhexyl) sebacate) and Detasheet-C (a mixture of pentaerythritol tetranitrate (PETN) and nitrocellulose), to determine their spectral signatures in the 8–14 µm long-wave infrared (LWIR) region. Simulations of passive detection at standoff distances of up to 100 m were performed with the MODTRAN4 radiative transfer model11, which incorporated the measured laboratory reflectance signatures. Finally, a preliminary measurement was performed for HMX at a standoff distance of 60 m in a field trial at Canadian Forces Base Valcartier on 4 June 2004.

2. Detection Principles and Phenomenology

A passive long-wave infrared standoff sensor functions by exploiting the temperature difference (∆T) that exists between the target scene and the background scene. If the target is warmer than the background, then the spectrum of the target chemical will be measured as an emission feature in the spectrum recorded by the sensor. Conversely, if the target is colder than the background, then the target chemical spectrum will be measured as an absorption feature. When a target consisting of a solid is present on a surface, ∆T is zero since the surface and the solid target are in contact. However, if the radiation from a surrounding external hot or cold source is reflected from the surface, then it is possible to observe the spectrum of the solid target. In outdoor environments, the radiation from the cold sky provides a high surface-to-sky temperature difference that yields favourable detection possibilities.


The radiative transfer intervening at a surface can be understood from a simple physical argument. A diagram is presented in Fig. 1 that defines the parameters used to evaluate the radiance emanating from a clean surface and from one contaminated with a solid explosive. For a clean surface having a reflectance Ro, the spectral radiance measured at the sensor contains two components, i.e. the emitted radiance from the surface, B(1 − Ro), and the background radiance reflected by the surface, Ldown·Ro. The parameter Ldown represents the downwelling radiance from the background (sky or other type of background surface) and B is the Planck radiance evaluated at the temperature (T) of the surface, which is given by

B = \frac{1.191 \times 10^{-12}\, \nu^3}{e^{1.439\,\nu / T} - 1}    (1)

where ν is the wavenumber in cm-1 and B is in W/(cm2 sr cm-1). Adding these two radiance components, B(1 − Ro) and Ldown·Ro, yields an expression for the radiance of the uncontaminated surface:

L_{clean} = B - R_o (B - L_{down})    (2)

Similarly, for a surface contaminated with a solid target having reflectance Rcont, as shown in Fig. 1, the spectral radiance measured by the sensor is given by

L_{cont} = B - R_{cont} (B - L_{down})    (3)

An important quantity of interest for studying the radiative effects of the solid target on a surface is the differential spectral radiance (∆L), the radiance difference (Lcont − Lclean) obtained by subtracting Eq. (2) from Eq. (3):

\Delta L \equiv L_{cont} - L_{clean} = (R_o - R_{cont})(B - L_{down})    (4)

Inspection of Eq. (4) reveals some simple facts concerning the sensitivity for detecting solid contaminants by passive spectral radiometry. First, the radiance difference is proportional to the reflectance contrast (Ro − Rcont), indicating that the greater the reflectance contrast between the contaminant and the clean surface, the greater the sensitivity for detection. Generally, explosive compounds display variations on the order of 2% in their reflectance signatures. Second, the radiance difference is proportional to the radiative contrast between the Planck surface radiance and the downwelling radiance from the surroundings, (B − Ldown). In outdoor environments the best detection possibilities are obtained for clear-sky conditions, where Ldown is a minimum and the temperature difference between the surface contaminant and the surrounding background environment is about 60 K.
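To get a feel for the magnitudes involved, Eqs. (1) and (4) can be evaluated directly. In the sketch below the reflectance contrast and temperatures are illustrative stand-ins, and the downwelling radiance is crudely modelled as a blackbody at an effective sky temperature rather than with MODTRAN.

```python
import numpy as np

def planck(nu, T):
    """Eq. (1): blackbody radiance in W/(cm^2 sr cm^-1), nu in cm^-1, T in K."""
    return 1.191e-12 * nu ** 3 / (np.exp(1.439 * nu / T) - 1.0)

def delta_L(nu, R_clean, R_cont, T_surface, T_sky):
    """Eq. (4): differential radiance between contaminated and clean surface,
    with L_down approximated by a blackbody at an effective sky temperature."""
    return (R_clean - R_cont) * (planck(nu, T_surface) - planck(nu, T_sky))

nu = np.arange(700.0, 1301.0, 2.0)     # cm^-1, LWIR window
# 2% reflectance contrast, 293 K surface, ~60 K colder effective clear sky.
dL = delta_L(nu, R_clean=0.03, R_cont=0.05, T_surface=293.0, T_sky=233.0)
```

With these illustrative numbers the peak |∆L| comes out on the order of 10-7 W/(cm2 sr cm-1), consistent with the clear-sky case discussed in Sec. 4.3.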



Fig. 1: Diagram and parameters used to evaluate the radiance of a clean surface and a surface covered with a solid contaminant, such as an explosive.

3. Measurement Approach

3.1. Laboratory measurements

Diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) is a simple method for acquiring reflectance spectra from solid samples12. It was used in this work to measure the reflectance from pure samples of HMX, TNT, RDX, C-4 and Detasheet-C. The reflectance spectra were recorded with a Digilab Fourier-transform spectrometer (FTS 3000) equipped with a silicon carbide globar, a Peltier-cooled DTGS detector and a Ge-coated KBr beamsplitter. The spectrometer was capable of a maximum resolution of 0.1 cm-1; however, in this work the measurements were recorded at a resolution of 4 cm-1. Several spectra that were measured at higher resolution indicated that no additional spectral information was evident at that setting. The diffuse reflectance apparatus (Spectra-tech, Inc.) consisted of a biconical optical arrangement, as illustrated in the schematic diagram in Fig. 2. Infrared radiation from the ceramic source was directed from the interferometer towards the parabolic mirror, M3, and focused onto the sample cup that was filled with explosive. The reflected diffuse radiation from the sample was collected by another parabolic mirror, M5, and redirected towards the DTGS detector of the FTS system. The spot size illuminated by the infrared beam at the sample surface was less than 2 mm in diameter. A small piece of 600-grit silicon carbide paper with adhesive backing, coated with a 0.25-µm layer of pure gold by vapor deposition, constituted the reference sample. Its absolute reflectance has been measured to be close to unity throughout the thermal infrared region12. The procedure for performing the reflectance measurements was as follows. The gold reference sample was first placed in the DRIFTS receptacle.


Fig. 2: Schematic diagram of the optical arrangement used to measure reflectance spectra in the laboratory.

The system was purged for 15 min to reduce the amount of residual water vapor and carbon dioxide remaining in the sample compartment of the spectrometer, and a spectrum of the intensity of the radiation reflected from the gold surface was then measured. The gold sample was then removed, and the receptacle was filled with one of the explosive samples. After another 15 min purge, a spectrum of the radiation reflected from the powder was measured. The gold sample was replaced a second time and a measurement was obtained to ensure that the DRIFTS assembly gave reproducible results. The ratio of the reflected intensity from the explosive to the reflected intensity from the gold reference was used to determine the reflectance spectrum of the explosive.

3.2. Field measurements

A modified Bomem MB100 FTIR spectrometer was used to perform the passive standoff measurements of explosive materials in the field. The interferometer consisted of a double-pendulum scanning mechanism that controlled the periodic displacement of two corner-cube reflectors, which produced the interferogram. The KBr beamsplitter had a multilayer coating of ZnSe and KBr on its external faces, which resulted in a useful transmission range from 400 – 5000 cm-1. The output module contained parabolic and condensing mirrors that focused the beam onto an MCT detector (1 mm) mounted on a liquid-nitrogen Dewar. The spectrometer was equipped with a telescope with a diameter of 25 cm, which resulted in a scene field-of-view of 5 mrad. Radiance measurements were obtained at a spectral resolution of 8 cm-1 (FWHM) over the thermal infrared region (8 – 14 µm). The spectrometer was set up at a distance of 60 m from the explosive target, which is shown in Figs. 3(A, B). The aluminum tray in which the explosive was placed was well grounded in order to prevent accidental ignition of the explosive. An aluminum mirror positioned at an angle of 45 degrees, as shown in Fig. 3B, was used to direct the radiance from the target to the spectrometer. The radiometric calibration of the


system was performed using a two-point calibration procedure. The calibration was performed before each measurement of the explosive target. Each spectrum consisted of 80 co-added scans that resulted in a total measurement time of 200 s. This measurement time was chosen to ensure a minimal noise contribution in the measured signal; however, a shorter measurement period would also have sufficed in producing quality spectral measurements.

Fig. 3: Photographs showing (A) HMX sample in the aluminum tray. (B) The mirrors reflect the radiance from the explosive towards the spectrometer, which was set up at a distance of 60 m from the samples.

4. Results and Analysis

4.1. Laboratory measurements of diffuse reflectance

Reflectance spectra measured by the DRIFTS technique for HMX, TNT, RDX, C-4 and Detasheet-C are presented in Fig. 4. The materials exhibit a rich spectral structure throughout the 700 – 1300 cm-1 (8 – 14 µm) region. The absorption features are attributed to vibrational processes of the solid. Some partial rotation and slight translation can also occur in solids; however, these generate so-called lattice modes that typically occur beyond 20 µm13. The absorption features can be very sensitive to small changes in crystal structure or chemistry. In general, the features are much broader than those associated with gases due to the thermal vibrations in the lattice, which have the effect of smearing out the band shape14.


Fig. 4: Reflectance spectra measured by the DRIFTS technique for five explosive compounds (HMX, RDX, TNT, Detasheet-C and C-4) over the 700 – 1300 cm-1 thermal infrared region. The measurements were obtained at a resolution of 4 cm-1 and are relative to the reflectance of a gold surface.

4.2. Simulation of the passive standoff detection of solid explosive materials

In order to be passively detected by standoff FTIR radiometry, it is necessary that the energy reflected and emitted by the explosive material be transmitted through the atmosphere to the sensor located at some distance from the material. A transparent window exists in the 700 – 1300 cm-1 region to enable such detection. A simulation of the detection of HMX was performed, based on the MODTRAN4 transmission model incorporating the 1976 U.S. standard atmosphere15 and the reflectance spectra of HMX measured in the laboratory. It was assumed that the field of view of the sensor was completely filled with the explosive material. The simulation is shown for both the direct radiance mode and the differential radiance mode, the latter representing the difference in radiance between the clean surface and one contaminated with an explosive material. The differential radiance is minimally affected by the atmospheric absorption intervening between the ground and the sensor. In practice it can be readily measured in real time with a dual-beam instrument such as CATSI16, or subsequently through software subtraction. The simulated scenario corresponds to a horizontal standoff distance of 100 m between the sensor and the HMX target. The simulated radiance spectrum is shown by the black curve in Fig. 5A. The spectrum is predominantly blackbody-like with a few minor features in the 750 – 1250 cm-1 region. Also included for comparison is the simulated radiance spectrum resulting from an asphalt target located 100 m from the sensor, as shown by the red curve. The asphalt target is represented by a surface having a reflectance of 3%.


The corresponding differential radiance spectrum, obtained by subtracting the radiance spectra of the HMX and asphalt targets, is shown by the black curve in Fig. 5B. The simulated differential radiance spectrum is the result that a dual-beam interferometer, such as CATSI, measures in real time. The emissivity spectrum of HMX derived from the laboratory reflectance measurements is also shown by the blue curve in Fig. 5B. It is evident that most of the spectral features of the two spectra agree very well, thus enabling a positive identification to be made of the HMX target. The HMX features at 650 – 700 cm-1 and 1250 cm-1 in the emissivity spectrum are nearly non-existent in the simulated differential radiance spectrum due to the interference from overlapping bands of atmospheric carbon dioxide and water vapor. There is also some fine structure in the differential radiance spectrum between 1100 and 1250 cm-1, which results from an incomplete cancellation of the water vapor lines due to the difference between the reflectance values of asphalt and HMX in this region. These simulations suggest that HMX targets may have the potential to be detected with a passive standoff FTS technique.


Fig. 5: (A) Direct total radiance spectra simulated for an HMX target and an asphalt target located 100 m horizontally from the sensor. (B) Differential radiance spectrum simulated by subtracting the HMX radiance spectrum from the asphalt radiance spectrum. The laboratory emissivity spectrum of HMX is included for comparison of the spectral features.

4.3. Effect of temperature contrast on passive standoff detection

The expression (B-Ldown) that appears in Eq. (4) shows that the intensity of the differential radiance depends on the temperature contrast between the sky and the target on the ground. In order to further examine the effect of sky temperature, simulations of the radiance for HMX were performed for a number of atmospheric
scenarios: clear sky, cloud base at 5 km, cloud base at 1 km and cloud base at 250 m. An optically thick cloud behaves as a blackbody radiating at the temperature of the atmosphere at the altitude where it is located; i.e., clouds located at lower altitudes in the atmosphere are warmer, and thereby reduce the temperature contrast with the surface. The simulations of the differential radiance are summarized in Fig. 6. For the case of a clear sky, where the temperature contrast between the sky and ground is about 80 K in the 700 – 1300 cm-1 region, the differential radiance has a maximum absolute value of about 1.3×10-7 W/(cm2 sr cm-1). For an optically thick cloud at an altitude of 5 km the temperature contrast is reduced to 33 K and the differential radiance decreases by about a factor of two. When the temperature contrast is further reduced to 7 K by the presence of a cloud at an altitude of 1 km, the resulting maximum in the differential radiance is less than 2×10-8 W/(cm2 sr cm-1). Despite the weak intensity under these conditions, most passive sensors should still have sufficient sensitivity to detect the HMX at the surface. In the extreme situation where the cloud is positioned close to the ground at an altitude of 250 m, the contrast in temperature is only about 2 K, which results in a differential radiance maximum of less than 5×10-9 W/(cm2 sr cm-1).
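The roughly linear shrinkage of the differential signal with the sky-ground contrast can be illustrated with a simplified model, dR ~ tau * d_eps * [B(T_ground) - B(T_sky)]; this is only a sketch under assumed values of the transmittance tau, the emissivity contrast d_eps and the ground temperature, not the authors' Eq. (4), but it reproduces the order of magnitude of the maxima quoted above.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e10, 1.381e-23   # SI constants, with c in cm/s

def planck(sigma_cm, temp_k):
    """Blackbody radiance in W/(cm^2 sr cm^-1) at wavenumber sigma_cm (cm^-1)."""
    return 2.0 * H * C**2 * sigma_cm**3 / np.expm1(H * C * sigma_cm / (K * temp_k))

sigma0 = 1000.0               # cm^-1, middle of the transparent window
t_ground = 300.0              # K, assumed ground temperature
tau, d_eps = 0.9, 0.03        # illustrative transmittance and emissivity contrast

for label, delta_t in [("clear sky", 80), ("cloud at 5 km", 33),
                       ("cloud at 1 km", 7), ("cloud at 250 m", 2)]:
    delta_b = planck(sigma0, t_ground) - planck(sigma0, t_ground - delta_t)
    delta_r = tau * d_eps * delta_b
    print(f"{label:14s}  dT = {delta_t:2d} K  ->  dR ~ {delta_r:.1e} W/(cm^2 sr cm^-1)")
```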


Fig. 6: Simulated effect of sky temperature on the passive detection of HMX. The sky temperature was altered by changing the base altitude of an optically thick cloud from 5 km to 250 m. The differential radiance spectra were simulated by subtracting the HMX radiance spectrum from the asphalt radiance spectrum. The intensity of the HMX absorption features diminishes with increased sky temperature.

4.4. Passive standoff measurement of solid explosive materials

A field measurement was performed with 1 kg of HMX spread over an aluminum tray with an area of 4 m2, as shown in Fig. 4A. The spectrometer was set up at a distance of 60 m from the explosive target, and a radiance spectrum was measured at a resolution of
8 cm-1 under mostly clear skies and at a temperature of about 20 °C. The spectrum, shown by the black curve in Fig. 7A, closely resembles the radiance corresponding to a blackbody at a temperature of 28 °C. Also shown in Fig. 7A is the radiance from a piece of roofing paper at a distance of 60 m (red curve). The roofing paper radiance was multiplied by a factor of 1.12 in order to provide a better fit with the HMX spectrum. The subtraction of the roofing paper spectrum from the spectrum of HMX results in a differential radiance spectrum (black curve) that is nearly free of atmospheric interferents in the 800 – 1200 cm-1 region, as indicated in Fig. 7B. In order to demonstrate the presence of HMX features in the measurement of Fig. 7B, a differential radiance spectrum was simulated, as shown by the red curve in Fig. 7B, that incorporated the spectral reflectivity of HMX obtained in the laboratory. The simulated differential radiance spectrum was calculated without the presence of an atmosphere; hence, it represents only the features associated with HMX without the interfering contribution from atmospheric constituents. Comparing the two curves in Fig. 7B clearly indicates that there are HMX features present in the measured differential radiance spectrum between 800 and 1000 cm-1, as indicated by the red arrows.
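The paper reports that the roofing-paper background was scaled by a factor of 1.12 to give a better match before subtraction. The sketch below shows one plausible way to pick such a scale factor by least squares and to form the differential spectrum; the fitting approach and the synthetic arrays are assumptions made for illustration, since the authors do not specify how the factor was chosen.

```python
import numpy as np

# L_target: radiance measured over the HMX tray; L_bkgd: radiance of the roofing
# paper background (hypothetical placeholders on a common wavenumber grid).
rng = np.random.default_rng(0)
L_target = 1.0e-5 + 1.0e-7 * rng.standard_normal(500)
L_bkgd = 0.9e-5 + 1.0e-7 * rng.standard_normal(500)

# Least-squares scale factor that best matches the background to the target
# spectrum (the paper simply reports a factor of 1.12 chosen for a better fit).
scale = np.dot(L_bkgd, L_target) / np.dot(L_bkgd, L_bkgd)

# Differential radiance, to be compared against the laboratory HMX emissivity features.
delta_R = L_target - scale * L_bkgd
print(round(scale, 3), delta_R.std())
```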

Fig. 7: (A) Radiance spectrum of HMX measured at a standoff distance of 60 m compared with the radiance from a sheet of roofing paper, which was increased by a factor of 1.12 in order to provide a better match with the spectrum of HMX. (B) The measured differential radiance spectrum was calculated by subtracting the radiance of the roofing paper from the HMX target. The comparison with the simulated differential radiance (without atmosphere present) shows the presence of three features in the measured differential spectrum that correspond to HMX.


5. Conclusions

Over the past 18 months DRDC Valcartier has been investigating the possibility of using passive standoff FTIR radiometry to detect and identify solid explosive materials. The results from laboratory DRIFTS measurements indicate that a number of explosives, including HMX, TNT, RDX, C-4 and Detasheet, have detailed infrared signatures in the transparent window region of the LWIR spectrum. From simulations performed with the MODTRAN4 radiative transfer model, it has been shown that HMX has the potential to be detected and identified at standoff distances of up to 100 m. Preliminary measurements involving HMX in a field experiment have also shown that explosives may be passively detected and identified at a standoff distance of 60 m. This work provides evidence to warrant further study into the passive detection of explosive materials by FTIR radiometric techniques. Future work should include an investigation of mixtures of explosives with other materials. A greater range of background surfaces should also be considered in a future study, along with the dependence of measurement sensitivity on surface coverage.

6. Acknowledgments We would like to express our thanks to Hélène Gagnon (DRDC Valcartier, Emerging Materials) for her help in obtaining the measurements of diffuse reflectance. The explosive samples were kindly provided by WO Maurice Chartrand (DRDC Valcartier, Munitions Experimental Test Centre). Master WO Caron (DRDC Valcartier, Munitions Experimental Test Centre) provided expert advice for the measurement of explosives in the field.

7. References 1. Committee on the Review of Existing and Potential Standoff Explosives Detection Techniques, “Existing and Potential Standoff Detection Techniques”, The National Academies Press, 148 pp., Washington, D.C., 2004. 2. C.T. Chaffin, T.L. Marshall, N.C. Chaffin, “Passive FT-IR remote sensing of smokestack emissions”, Field Analytical Chemistry and Technology, 3, 111-115, 1999. 3. D.F. Flanigan, “Hazardous cloud imaging: a new way of using passive infrared”, Appl. Opt. 36, 7027-7036, 1997. 4. G. Laufer and A. Ben-David, “Optimized differential absorption radiometer for remote sensing of chemical effluents”, Appl. Opt. 41, 2263-2273, 2002. 5. J.-M. Thériault, E. Puckrin, F. Bouffard and B. Déry, “Passive remote monitoring of chemical vapors by differential FTIR radiometry: results at a range of 1.5 km”, Appl. Opt., 43, 14251434, 2004. 6. H. Lavoie, E. Puckrin, J.-M. Thériault and F. Bouffard, Passive standoff detection of SF6 at a distance of 5.7 km by differential FTIR radiometry, Applied Spectroscopy, 59, 1189-1193, 2005. 7. J.-M. Thériault, E. Puckrin, J. Hancock, P. Lecavalier, Carmela Jackson Lepage and J.O. Jensen, “Passive standoff detection of chemical warfare agents on surfaces”, Appl. Opt., 43, 5870-5885, 2004.


8. J.-M. Thériault and E. Puckrin, “Passive Standoff Detection of BG Aerosol by FTIR Radiometry”, Appl. Opt., 42, 6696-6705, 2003. 9. A. Ben-David, “Remote detection of biological aerosols at a distance of 3 km with a passive Fourier transform infrared (FTIR) sensor”, Opt. Express, 11, 418-429, 2003. 10. E. Puckrin and J.-M. Thériault, “Passive Standoff Detection of Radiological Products by FTIR Radiometry”, Opt. Lett., 29, 1375-1377, 2004. 11. G.P. Anderson, A. Berk, L.S. Bernstein, J.H. Chetwynd, P.K. Acharya, H. Dothe, M.W. Matthew, S.M. Adler-Golden, R.J. Ratkowski, G.W. Felde, J.A. Gardner, M.L. Hoke, S.C. Richtsmeier, B. Pukall, J. Mello, and L.S. Jeong, “MODTRAN4: Radiative transfer modeling for remote sensing and atmospheric correction”, Proc., EUROPTO Remote Sensing Congress, Florence, Italy, 1999. 12. D.B. Nash, “Mid-infrared reflectance spectra (2.3 – 22 µm) of sulfur, gold, KBr, MgO, and halon”, Appl. Opt., 25, 2427-2433, 1986. 13. R.N. Clark, Chapter 1: Spectroscopy of Rocks and Minerals, and Principles of Spectroscopy, in Manual of Remote Sensing, Volume 3, Remote Sensing for the Earth Sciences, (A.N. Rencz, ed.) John Wiley and Sons, New York, p 3- 58, 1999. 14. B. Hapke, Theory of reflectance and emittance spectroscopy, p. 42, Cambridge University Press, Cambridge, 1993. 15. G.P. Anderson, S.A. Clough, F.X. Kneizys, J.H. Chetwynd and E.P. Shettle, “AFGL atmospheric constituent profiles (0-120 km)”, AFGL-TR-86-0110, Air Force Geophysics Laboratory, Hanscom AFB, MA, 1986. 16. J.-M. Thériault, Passive Standoff Detection of Chemical Vapors by Differential FTIR Radiometry, DREV Technical Report TR-2000-156, January, 2001.

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 319–336 © World Scientific Publishing Company

DETECTION AND CLASSIFICATION OF ORGANIC AND ORGANOPHOSPHORUS ANALYTES ON SOIL FROM REFLECTION-ABSORPTION SPECTROSCOPY

THOMAS A. BLAKE
Pacific Northwest National Laboratory, P.O. Box 999, Mail Stop K8-88, Richland, WA 99352
[email protected]

PAUL L. GASSMAN
Pacific Northwest National Laboratory, P.O. Box 999, Mail Stop K8-96, Richland, WA 99352
[email protected]

NEAL B. GALLAGHER
Eigenvector Research, Inc., 160 Gobblers Knob Lane, Manson, WA 98831
[email protected]

Principal Components Analysis (PCA) is a common anomaly detection tool that was used in this work to detect organic and organophosphate analytes on soils using mid-infrared reflection-absorption spectroscopy. Detection is hindered by large variability in sample-to-sample soil reflectivity that is due to the random nature of the soil particle packing. Extended multiplicative scatter correction (EMSC) and Savitzky-Golay derivative preprocessing were examined as methods to reduce this variability and enhance detection capability. Second derivative preprocessing provided results that were at least as good as EMSC for detection, and the simplicity of the derivative methodology makes it an attractive preprocessing approach. Typically, PCA is applied to all spectral channels and results from detection events are interrogated to identify a potential cause. In this work, PCA models were developed for specific wavenumber ranges corresponding to functional group frequencies with the objective of providing some classification capability. It was found that detection of CH2, CH3 and P=O stretching bands was possible; however, results for a –CH2 scissors band were less encouraging and detection of O–H stretch, –C–C– skeletal stretch, and PO–C stretch modes was poor. Some limited classification capability may be possible, but it would be difficult to make a unique assignment of the analytes present using the strategies studied.

Keywords: Soil reflection-absorption spectroscopy; principal component analysis; extended multiplicative scatter correction; multivariate curve resolution; dibutyl phosphate; tributyl phosphate; dodecane; bis(2-ethylhexyl) hydrogen phosphate.


1. Introduction

Infrared reflection spectroscopy promises to be a powerful technique for rapid detection of low volatility compounds that have been spilled, dumped, or allowed to accrete onto soils. Such a tool would be useful for detecting and identifying chemicals associated with industrial processes, verifying treaty compliance at suspected weapons-of-mass-destruction production sites, and application monitoring of agri-chemicals such as fertilizers, pesticides and herbicides. Unfortunately, the use of reflection spectroscopy for detection and classification of organic compounds on soil surfaces is hampered by three major problems. The first is that the low reflectivity of soils makes detecting return signals problematic; however, this might be addressed by using a sufficiently bright incoherent light source with long integration times or a broadband laser source. The second problem is that the scene-to-scene (or sample-to-sample) variability of the soil return signal is large. The variability in soil reflectivity is due to the random nature in which soil particles are packed and the random distribution of particle sizes. The variability can swamp smaller signals from analytes of interest, but as discussed below, it can be filtered out to some extent, thereby increasing analyte signal relative to the soil reflection signal. The third problem is that spectral features shift in wavenumber, change intensity relative to other features, or disappear altogether relative to the corresponding neat liquid spectrum.1-3 Therefore, algorithms such as the matched-filter4 and step-wise regression that are based on libraries of neat liquid spectra are not expected to be generally successful for detecting and classifying compounds adsorbed on soils. In contrast, the principal component analysis-based detection algorithm used here is not based on the analyte spectrum, but instead on anomaly detection in windows of frequencies expected to have signal from specific functional groups. Knowledge of which windows have anomalous measurements allows some limited analyte classification capability. Two broad strategies for monitoring of soils are typical. The first allows for characterization of analyte-free soil prior to monitoring for changes attributable to organic or organophosphorus analyte. Such a scenario would first measure and characterize reflection spectra over a region of analyte-free soil, and then measurements would be made in test regions containing a similar soil that may or may not be coated with the target analyte. Fenceline monitoring would be an example of such a scenario. In a second strategy, the soil reflection spectra are not well characterized and may be rapidly changing (monitoring from a moving vehicle, for example). Each monitoring strategy allows different assumptions for detection and classification algorithms, and only the former case with well characterized, analyte-free soil is considered here. In this study, two soils were measured in the laboratory as a prelude to field studies. The objective was to identify data preprocessing techniques that minimize variability in the soil reflection signal while enhancing signal from analyte compounds on the soil. The effect of preprocessing was tested using an anomaly detection algorithm based on principal components analysis (PCA). Anomalies here corresponded to analyte being present on the soil.
The PCA-based methodology is outlined in the Theory Section and was applied to specific wavenumber ranges corresponding to functional group absorptions with the objective of allowing for some limited classification capability. Quantification was not considered.
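As a rough illustration of the detection step described above (and detailed in the Theory section), the following sketch builds a PCA model on analyte-free calibration spectra restricted to one functional-group window and flags test spectra whose Q (residual) or Hotelling's T2 statistics are unusual; the random arrays, the number of components and the empirical percentile limits are placeholders, whereas the paper uses parametric 99.9% confidence limits.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder calibration (analyte-free) and test spectra for one wavenumber
# window; shapes are (samples, spectral channels).
rng = np.random.default_rng(1)
X_cal = rng.normal(size=(100, 40))
X_test = rng.normal(size=(20, 40)) + 0.5     # shifted to mimic analyte signal

K = 3                                        # number of principal components kept
pca = PCA(n_components=K).fit(X_cal)         # mean-centering is handled internally

def q_and_t2(model, X):
    scores = model.transform(X)
    residual = X - model.inverse_transform(scores)
    q = np.sum(residual**2, axis=1)                              # Q statistic
    t2 = np.sum(scores**2 / model.explained_variance_, axis=1)   # Hotelling's T^2
    return q, t2

q_cal, t2_cal = q_and_t2(pca, X_cal)
q_test, t2_test = q_and_t2(pca, X_test)

# Empirical control limits from the calibration data (a stand-in for the
# parametric 99.9% confidence limits used in the paper).
q_lim, t2_lim = np.percentile(q_cal, 99.9), np.percentile(t2_cal, 99.9)
detected = (q_test > q_lim) | (t2_test > t2_lim)
print(int(detected.sum()), "of", len(X_test), "test spectra flagged as anomalous")
```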


The two soil types studied included Quincy Soil (a quartz-dominant soil) and League Soil (a montmorillonite-dominant soil). These soils are considerably different (sandy and clay-like, respectively) and represent two important soil types expected to be observed in many regions of the world. Reflection spectra of the soil/analyte samples were recorded using a benchtop FTIR and specular reflection accessory. Spectra were measured with and without an analyte compound applied to the soil surface, and with and without a dry nitrogen purge in the spectrometer's sample compartment. The latter sampling condition allowed for modeling atmospheric interferents. These interferences could then be accounted for, at least over a short optical pathlength. Four analyte compounds were studied: dibutyl phosphate (DBP), tributyl phosphate (TBP), bis(2-ethylhexyl) hydrogen phosphate (BIS), and dodecane (DOD). The first three compounds are alkyl phosphates that exhibit similar infrared spectra and the fourth was a simple straight-chain alkane. Data preprocessing methods included Savitzky-Golay derivatives5 and extended multiplicative scatter correction (EMSC). The random orientation, packing, and size distributions of soil particles lead to considerable variability in measured sample-to-sample reflectivity of soil surfaces, and scatter correction techniques have been proposed to correct for such variability. These methods include multiplicative scatter correction (MSC),6 piece-wise MSC,7,8 inverse scatter correction,9 and extended multiplicative scatter correction (EMSC).10-13 Scatter correction attempts to correct for, or filter out, variability and account for pathlength differences for light moving through the soil, thus providing good predictive models, as well as reasonably interpretable spectra. EMSC allows explicit accounting for signal variability and can be used to filter known interferences, for example, atmospheric spectra and basis functions that account for sample-to-sample variability. Previous work described the application of EMSC to reflection spectra of Quincy and League soils coated with varying concentrations of DBP.14 The same data set was used to test a multivariate curve resolution (MCR) algorithm that directly extracted DBP spectra from the soil reflection spectra.3 MCR results for DBP and tributyl phosphate (TBP) adsorbed on Quincy and League soils are briefly described below and were used to guide the detection and classification strategies.

2. Theory

2.1. Principal Components Analysis for Anomaly Detection

Principal components analysis (PCA) is a useful multivariate analysis technique for exploratory analysis and unsupervised pattern recognition,15 and is widely used for multivariate statistical process control (MSPC).16-20 MSPC also can be thought of as anomaly detection21 and is the basis of the soft independent modeling of class analogy (SIMCA) classification approach.22 In the present work, PCA is the basis of an anomaly detection methodology and is briefly reviewed. The nomenclature used here is consistent with Jackson and additional details can be found in that reference.23

2.1.1. PCA Model

A PCA model is a data-based model that requires a calibration data set represented by an MxN matrix Xcal where each of the m rows (m = 1, ..., M) corresponds to a measured


spectrum and each of the n columns (n = 1, ..., N) corresponds to a spectral channel. In the present study, Xcal corresponds to spectral measurements of analyte-free soil, and anomalies correspond to new measurements that are unusual with respect to normal variation observed for the analyte-free soil. It is possible that unusual measurements can be related to instrument measurement artifacts or other sources not related to the presence of analyte. However, for this work, it is assumed that the detection of an anomaly is due to the presence of analyte. Preprocessing is typically applied to measured data to remove unwanted or irrelevant variance prior to constructing PCA models. Several preprocessing methods (discussed below) were employed in this study, and in each case, mean-centering (often used in anomaly detection) was used after preprocessing. Preprocessing followed by mean centering transforms Xcal to an MxN matrix \tilde{X}_{cal}, which is decomposed as shown in Equation (1):

\tilde{X}_{cal} = T P^{T} + E    (1)

In Equation (1), T is an MxK matrix of scores, P is an NxK matrix of loadings, E is an MxN matrix of residuals, and K is the number of principal components (PCs) kept in the model. Typically, K is much smaller than min(M, N).

>99%, [CH3(CH2)3CH(C2H5)CH2O]2P(O)OH] were chosen because they exhibit a number of mid-infrared alkyl, −P=O, PO−C, and −C−C− stretching and skeletal vibrational modes. DBP and BIS also exhibit hydrogen bonded O−H stretch modes. DBP and BIS have very similar liquid spectra and are a test of the ability of the detection algorithm to distinguish between the two. The fourth analyte, dodecane [DOD, Aldrich, >99%, CH3(CH2)10CH3], was selected as an analyte exhibiting a subset of spectroscopic features – alkyl stretches and bends – common with the other compounds, but without a phosphate group.

3.4. Calibration and Test Sets

Calibration data sets consisted of spectra of analyte-free soils that were purged and not purged with dry-N2. Test data sets consisted of spectra of analyte-free soils and soils coated with analyte. Test data sets included samples that were purged and not purged with dry-N2. All test set data were obtained on a different day (in some cases several months apart) from the calibration set. Band groups and spectral ranges used for monitoring and detection are listed in Table 1. 3.4.1. Quincy Soil Data Sets

• The calibration set for Quincy Soil had 184 samples, 102 of which were purged with dry-N2.
• Test I. 115 samples with DBP at concentrations ranging from 0 to 600 mM (described in References 3, 14).
• Test II. 137 samples with TBP at concentrations ranging from 0 to 600.5 mM, analogous to the data in Test I.
• Test III. 60 samples, all purged with dry-N2. Samples 1-31 are analyte-free. Samples 32-60 contain varying concentrations of DBP, TBP, BIS, and DOD, singly or all four together. Concentrations of individual analytes varied from ~20 to ~100 mM. Total analyte concentration varied from ~92 to ~402 mM.


3.4.2. League Soil Data Sets

• The calibration set for the League Soil had 106 samples, 88 of which were purged with dry-N2.
• Test I. 115 samples with DBP at concentrations ranging from 0 to 600 mM (described in References 3, 14).
• Test II. 114 samples with TBP at concentrations ranging from 0 to 601 mM, analogous to the data in Test I.

4. Results

Figure 1 shows the neat liquid and estimated adsorbed spectra for the four analytes, with the regions used for monitoring and detection highlighted (see Table 1). One of the most notable observations is that the PO−C stretch centered at about 1029 cm-1 in the neat liquid is almost absent for DBP, TBP and BIS when adsorbed on soil. The CH2, CH3 C−H stretching bands in the 2968 to 2851 cm-1 region are present, as is the P=O stretching band at about 1241 cm-1.


Figure 1. Neat liquid (thin line) and estimated adsorbed (thick dashed line) spectra for the four analytes. Spectral regions used for detection PCA models are highlighted (see Table 1). All spectra are normalized using a 2-norm for comparison.

Classification of Organic and Organophosphorus Analytes on Soil

101

Table 1. Functional groups used for monitoring models. Ranges listed in parentheses were used for monitoring.

Functional Group                          Wavenumber Range (cm-1)
CH2, CH3 sym and antisym C−H stretch      2968 to 2851 (2968 to 2851)
O−H (1) stretch                           2748 to 2640
O−H (2) stretch                           2349 to 2100 (2500 to 2390)
CH2 scissors                              1497 to 1414 (1485 to 1414)
P=O stretch                               1300 to 1202 (1272 to 1210)
–C–C– skeletal stretch                    1165 to 1123 (1158 to 1143)
PO−C stretch                              1099 to 939 (1075 to 982)

Figure 2 shows example soil spectra from the calibration set with the regions used for monitoring highlighted (Table 1). Water (sharp features in the ~3900 to 3600 and 1800 to 1400 cm-1 ranges) and carbon dioxide (double peak in the 2390 to 2300 cm-1 range) are readily apparent in the spectra not purged with dry-N2. For a given soil, the spectra have similar shapes easily discernible by inspection, but the variability is quite large with respect to the task of the signal detection algorithm.


Figure 2. Example soil spectra from the calibration set: Quincy (top) and League (bottom).

102

T. A. Blake, P. L. Gassman & N. B. Gallagher

Detailed results for all band groups and preprocessing are listed in Table 2, and examples of the Q and T2 statistics for the CH2, CH3 C−H stretching bands using preprocessing (i) None and (iii) Second Derivative are shown in Figures 3 and 4, respectively. Preprocessing (i) None does remarkably well; however, the distance ratio is nearly two orders of magnitude higher when using (iii) Second Derivative. This means that the true positives for the (iii) Second Derivative preprocessing are further away from the calibration data than for (i) None, and that the confidence limit (set at 99.9% for this study) could be set higher without having a significant negative impact on the sensitivity while improving the specificity (i.e., reducing the false alarms).

Table 2a. Detection results for Quincy Soil in each test region: Sensitivity (Sens) and Specificity (Spec), CL = 99.9%. Column 1 lists the preprocessing method.

                        Test I (DBP)     Test II (TBP)    Test III         Total     Dist
                        Sens    Spec     Sens    Spec     Sens    Spec     Correct   Ratio
CH2,CH3 (Total)         76      39       95      42       29      31       312
  None                  1       0.92     1       1        1       1        0.99      1.4e3
  1st Derivative        1       0.97     1       1        1       1        1         3.0e4
  2nd Derivative        1       1        1       1        1       0.81     0.98      1.1e5
  EMSC                  1       0.97     1       1        1       1        1         3.3e4
  EMSC + 2nd Deriv.     1       0.85     1       0.95     1       0.77     0.95      9.6e4
O−H (2) (Total)         76      39       0       137      25      35       312
  None                  0.93    0.62     na      1        0.88    0.83     0.91      2.9e1
  1st Derivative        0.13    1        na      0.88     0.36    0.74     0.65      7.3e0
  2nd Derivative        0.30    0.97     na      0.65     0.40    0.97     0.62      2.1e1
  EMSC                  0.93    0.23     na      0.67     0.88    0.63     0.69      4.2e1
  EMSC + 2nd Deriv.     0.99    0.10     na      0.39     1       0.29     0.54      2.9e1
P=O (Total)             76      39       95      42       28      32       312
  None                  1       0.97     0.56    1        0.79    1        0.84      9.4e1
  1st Derivative        1       0.90     0.59    0.95     1       1        0.86      5.2e2
  2nd Derivative        1       1        1       1        1       1        1         7.4e2
  EMSC                  1       1        1       1        1       0.97     1         2.7e2
  EMSC + 2nd Deriv.     1       0.95     1       0.90     1       1        0.98      4.4e2
CH2 (Total)             76      39       95      42       29      31       312
  None                  0.88    1        0.88    1        0.97    1        0.93      9.5e1
  1st Derivative        0.72    1        0.67    0.95     0.86    1        0.81      3.7e1
  2nd Derivative        0.91    1        0.85    0.98     0.93    1        0.92      3.8e1
  EMSC                  0.95    0.92     0.89    0.93     0.97    0.94     0.93      4.4e1
  EMSC + 2nd Deriv.     1       0.85     1       0.98     1       0.71     0.95      8.3e1
–C–C– (Total)           76      39       95      42       28      32       312
  None                  0.01    1        0.00    1        0.00    1        0.37      8.9e0
  1st Derivative        0.20    1        0.00    0.98     0.00    1        0.41      1.0e1
  2nd Derivative        0.61    1        0.02    1        0.00    1        0.52      1.5e1
  EMSC                  0.93    1        0.14    0.90     0.71    1        0.68      7.0e1
  EMSC + 2nd Deriv.     0.96    1        0.37    0.95     0.79    1        0.77      6.5e1
PO−C (Total)            76      39       95      42       28      32       312
  None                  0.68    1        0.00    1        0.04    1        0.53      1.9e1
  1st Derivative        0.93    0.97     0.12    0.98     0.29    1        0.64      2.4e1
  2nd Derivative        0.59    1        0.72    0.90     0.57    1        0.76      2.4e1
  EMSC                  0.97    0.82     0.78    0.74     0.82    0.94     0.85      5.1e1
  EMSC + 2nd Deriv.     0.95    0.97     0.92    0.79     0.96    1        0.93      1.0e2


Table 2b. Detection results for League Soil in each test region: Sensitivity (Sens) and Specificity (Spec), CL = 99.9%. Column 1 lists the preprocessing method.

                        Test I (DBP)     Test II (TBP)    Total     Dist
                        Sens    Spec     Sens    Spec     Correct   Ratio
CH2,CH3 (Total)         76      39       76      38       229
  None                  0.97    0.95     1       0.89     0.97      2.9e2
  1st Derivative        1       0.51     1       0.74     0.87      2.2e4
  2nd Derivative        1       0.18     1       0.76     0.82      2.0e4
  EMSC                  1       0.18     1       0.00     0.69      5.6e3
  EMSC + 2nd Deriv.     1       0.49     1       0.47     0.83      2.3e4
O−H (2) (Total)         76      39       0       114      229
  None                  1       0.00     na      0.51     0.59      6.7e0
  1st Derivative        1       0.10     na      0.48     0.59      1.9e1
  2nd Derivative        0.96    0.77     na      0.43     0.66      2.6e1
  EMSC                  1       0.28     na      0.24     0.50      4.2e2
  EMSC + 2nd Deriv.     0.91    0.33     na      0.34     0.53      2.9e1
P=O (Total)             76      39       76      38       229
  None                  1       0.38     1       0.89     0.88      3.2e1
  1st Derivative        1       0.74     1       0.24     0.83      2.1e2
  2nd Derivative        0.93    0.59     1       1        0.91      4.7e2
  EMSC                  0.96    0.82     1       0.45     0.86      1.2e2
  EMSC + 2nd Deriv.     0.92    0.74     1       0.53     0.85      3.2e2
CH2 (Total)             76      39       76      38       229
  None                  1       0.95     1       0.29     0.87      6.0e2
  1st Derivative        0.99    0.97     1       0.89     0.97      7.1e2
  2nd Derivative        1       0.15     1       0.45     0.76      6.7e2
  EMSC                  1       0.10     1       0.11     0.70      4.3e2
  EMSC + 2nd Deriv.     1       0.23     1       0.34     0.76      2.2e3
–C–C– (Total)           76      39       76      38       229
  None                  1       0.00     1       0.03     0.67      1.3e1
  1st Derivative        0.38    0.90     0.66    0.95     0.66      1.5e1
  2nd Derivative        0.76    0.92     0.55    0.68     0.71      6.7e0
  EMSC                  0.92    0.90     0.82    0.66     0.84      7.2e1
  EMSC + 2nd Deriv.     1       0.87     0.95    0.89     0.94      4.5e1
PO−C (Total)            76      39       76      38       229
  None                  1       0.00     1       0.11     0.68      5.6e0
  1st Derivative        0.75    0.97     0.58    1        0.77      5.2e1
  2nd Derivative        0.99    0.44     0.95    0.87     0.86      3.3e1
  EMSC                  0.91    0.05     0.88    0.11     0.62      9.2e0
  EMSC + 2nd Deriv.     1       0.08     1       0.13     0.70      5.1e1

Figure 5 shows the total correctly classified for each functional group and preprocessing for both Quincy and League Soils. Although effort was made to provide good results for all preprocessing methods studied, it is expected that further optimization of the parameters for each method may result in better performance. However, some general comments can be made from this study. The CH2, CH3 C−H and P=O stretching bands showed fair to good detection for nearly all the preprocessing methods used while other bands were less successful. The League Soil test sets typically had large numbers of false alarms suggesting that the calibration measurements do not span the soil variability observed in the test sets (possibly due to differences in soil size distributions and/or


packing differences due to changes in sample preparation). This observation highlights an underlying assumption of the detection algorithm: the calibration measurements must span the space of the test soils.


Figure 3. Q (top) and T2 (bottom) statistics for CH2, CH3 C–H stretch group frequency using preprocessing (i) None. Open circles are for analyte-free samples and filled circles are when the functional group is present. Results are shown for (left to right) the Quincy Calibration Set, Quincy Test Sets I-III, League Calibration Set, and League Test Sets I and II.

Some of the observations for each functional group can be understood by examining Figures 1 and 2. The CH2, CH3 C−H stretching bands had good structured signal in a region where the soil spectra were relatively smooth and the detection results were good. The O–H stretching bands were in a region where both the analyte on soil and background soil have similar smooth spectra and the detection results were not good. The –CH2 scissors band had a smaller signal, but its spectral shape was different from the background soil for DBP and TBP. Unfortunately, the spectrum for BIS and DOD on the soil was less structured and this band was overlapped by water bands. Therefore, results for the –CH2 scissors were fair, but not good. The P=O stretch has a strong structured feature that is different from the background and the results were good for the last three preprocessing methods but only fair for the first two. This could be attributable to large variation of the soil spectra in this region and incomplete filtering. The –C–C– skeletal stretching feature is a relatively small signal that overlaps a similar feature in the soil


spectra and detection of this band is not good. The PO–C stretch band that has strong signal in the neat liquid for DBP, TBP and BIS is essentially absent for the adsorbed analytes and the result is poor detection for this band, as well. The PO–C stretch band is thought to be missing because of the manner in which the DBP, TBP and BIS molecules interact with the –OH groups on the surface of the soil particles; see Reference 3.


Figure 4. Q (top) and T2 (bottom) statistics for CH2, CH3 C–H stretch group frequency using preprocessing (iii) Second Derivative. Open circles are for analyte-free samples and filled circles are when the functional group is present. Results are shown for (left to right) the Quincy Calibration Set, Quincy Test Sets I-III, League Calibration Set, and League Test Sets I and II.

5. Conclusions

This study showed that detection of the CH2, CH3 C–H and P=O stretch bands is distinctly possible for a monitoring strategy where the background soil reflection spectrum is well characterized. Results for a –CH2 scissors band were fair, and detection of the O–H stretch, –C–C– skeletal stretch, and PO–C stretch was poor. Therefore, some limited classification capability is possible, but it would be difficult or impossible to make a unique assignment of organic or organophosphorus analytes present on the soil using the strategies studied. The observation that the PO–C stretch band is nearly missing for analytes adsorbed on soil is interesting given that this band (which has a strong feature in the neat liquids) has been proposed in the literature for detection strategies.


Figure 5. Fraction of test samples correctly classified for Quincy and League soils. Results are for each of the band groups and preprocessing studied: (i) None, (ii) 1st Derivative, (iii) 2nd Derivative, (iv) EMSC, and (v) EMSC followed by 2nd Derivative.

Second derivative preprocessing provided results that were at least as good as the more complicated extended multiplicative scatter correction for detection of CH2, CH3 C–H and P=O stretching bands. Although some improvement may be gained by optimizing the preprocessing strategies, the simplicity of the derivative methodology makes it an attractive approach.
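A minimal sketch of the second-derivative preprocessing step is given below, using the Savitzky-Golay filter along the wavenumber axis followed by mean-centering; the window length, polynomial order and random placeholder spectra are assumptions for illustration rather than the settings used in this study.

```python
import numpy as np
from scipy.signal import savgol_filter

# Placeholder reflection spectra, shape (samples, spectral channels).
spectra = np.random.default_rng(2).normal(size=(50, 600))

# Savitzky-Golay second derivative along the wavenumber axis.
d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

# Mean-center after preprocessing (as done before building the PCA models here).
d2_centered = d2 - d2.mean(axis=0)
```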

6. Acknowledgments

Funding for this research was provided by the United States Department of Energy's Office of Nonproliferation Research and Development (NA-22). The experimental portion of this work was performed at the W. R. Wiley Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at the Pacific Northwest National Laboratory. The Pacific Northwest National Laboratory is operated for the United States Department of Energy by Battelle under contract DE-AC05-76RLO 1830.


7. References

1. C. E. Miller; T. K. Yin, Near-infrared reflectance of poly(octadecyl methacrylate) absorbed on alumina, J. Mater. Sci. Lett. 8, 467-469 (1989).
2. H. Brunner, U. Mayer; H. Hoffmann, External Reflection Infrared Spectroscopy of Anisotropic Adsorbate Layers on Dielectric Substrates, Appl. Spectrosc. 51(2), 209-217 (1997).
3. N. B. Gallagher, T. A. Blake, P. L. Gassman, J. M. Shaver; W. Windig, Multivariate Curve Resolution Applied to Infrared Reflectance Measurements of Soil Contaminated with Organic Analyte, Appl. Spectrosc. 60(7), 713-722 (2005).
4. C. C. Funk, J. Theiler, D. A. Roberts; C. C. Borel, Clustering to Improve Matched Filter Detection of Weak Gas Plumes in Hyperspectral Thermal Imagery, IEEE T. Geosci. Remote 39(7), 1410-1420 (2001).
5. A. Savitzky; M. J. E. Golay, Smoothing and Differentiation of Data by Simplified Least Squares Procedures, Anal. Chem. 36(8), 1627-1639 (1964).
6. P. Geladi, D. MacDougall; H. Martens, Linearization and scatter-correction for near-infrared reflectance spectra of meat, Appl. Spectrosc. 39(3), 491-500 (1985).
7. T. Isaksson; B. Kowalski, Piece-wise multiplicative scatter correction applied to near-infrared diffuse transmittance data from meat products, Appl. Spectrosc. 47(7), 702-709 (1993).
8. T. B. Blank, S. T. Sum, S. D. Brown; S. L. Monfre, Transfer of near-infrared multivariate calibrations without standards, Anal. Chem. 68(17), 2987-2995 (1996).
9. I. S. Helland, T. Næs; T. Isaksson, Related versions of the multiplicative scatter correction method for preprocessing spectroscopic data, Chemometr. Intell. Lab. 29, 233-241 (1995).
10. H. Martens; E. Stark, Extended Multiplicative Signal Correction and Spectral Interference Subtraction: New Preprocessing Methods for Near Infrared Spectroscopy, J. Pharmaceut. Biomed. 9(8), 625-635 (1991).
11. H. Martens, J. P. Nielsen; S. B. Engelsen, Light Scattering and Light Absorbance Separated by Extended Multiplicative Signal Correction. Application to Near-Infrared Transmission Analysis of Powder Mixtures, Anal. Chem. 75(3), 394-404 (2003).
12. S. N. Thennadil; E. B. Martin, Empirical preprocessing methods and their impact on NIR Calibrations: a simulation study, J. Chemometr. 19(2), 77-89 (2005).
13. S. N. Thennadil, H. Martens; A. Kohler, Physics-based multiplicative scatter correction approaches for improving the performance of calibration models, Appl. Spectrosc. 60(3), 315-321 (2006).
14. N. B. Gallagher, T. A. Blake; P. L. Gassman, Application of Extended Multiplicative Scatter Correction to mid-Infrared Reflectance Spectroscopy of Soil, J. Chemometr. 19(5-7), 271-281 (2005).
15. I. A. Cowe; J. W. McNicol, The Use of Principal Components in the Analysis of Near-Infrared Spectra, Appl. Spectrosc. 39(2), 257-266 (1985).
16. J. E. Jackson; G. S. Mudholkar, Control Procedures for Residuals Associated with Principal Component Analysis, Technometrics 21(3), 341-349 (1979).
17. B. M. Wise, N. L. Ricker, D. F. Veltkamp; B. R. Kowalski, A Theoretical Basis for the Use of Principal Component Models for Monitoring Multivariate Processes, Process Contr. Qual. 1, 41-51 (1990).
18. P. Nomikos; J. F. MacGregor, Monitoring Batch Processes Using Multiway Principal Component Analysis, AIChE J. 40(8), 1361-1375 (1994).


19. W. Ku, R. H. Storer; C. Georgakis, Disturbance detection and isolation by dynamic principal component analysis, Chemometr. Intell. Lab. 30, 179-196 (1995). 20. B. M. Wise; N. B. Gallagher, The Process Chemometrics Approach to Chemical Process Monitoring and Fault Detection, J. Process Control 6(6), 329-348 (1996). 21. C. I. Chang; S. S. Chiang, Anomaly Detection and Classification for Hyperspectral Imagery, IEEE T. Geosci. Remote 40(6), 1314-1325 (2002). 22. I. E. Frank; S. Lanteri, Classification models: Discriminant analysis, SIMCA, CART, Chemometr. Intell. Lab. 5, 247-256 (1989). 23. J. E. Jackson, A User’s Guide to Principal Components. (John Wiley & Sons: New York, NY, 1991). 24. E. R. Malinowski, Factor Analysis in Chemistry. Second ed.; (John Wiley & Sons: New York, NY, 1991). 25. J. A. Westerhuis, S. P. Gurden A. K. Smilde, Standardized Q-statistic for improved sensitivity in the monitoring of residuals in MSPC, J. Chemometr. 14, 335-349 (2000). 26. H. Martens; T. Næs, Multivariate Calibration. (John Wiley & Sons, New York, NY: 1989). 27. R. Tauler, A. K. Smilde B. R. Kowalski, Selectivity, Local Rank, Three-Way Data Analysis and Ambiguity in Multivariate Curve Resolution, J. Chemometr. 9, 31-58 (1995). 28. P. J. Gemperline E. Cash, Advantages of Soft versus Hard Constraints in Self-Modeling Curve Resolution Problems. Alternating Least Squares with Penalty Functions, Anal. Chem. 75, 4236-4243 (2003). 29. W. Windig, B. Antalek, J. L. Lippert, Y. Batonneau C. Brémard, Combined Use of Conventional and Second-Derivative Data in the SIMPLISMA Self-Modeling Mixture Analysis Approach, Anal. Chem. 74, 1371-1379 (2002).

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 337–348 © World Scientific Publishing Company

SUPPORT VECTOR CLASSIFICATION OF LAND COVER AND BENTHIC HABITAT FROM HYPERSPECTRAL IMAGES VIDYA MANIAN Department of Electrical and Computer Engineering, University of Puerto Rico, Mayaguez, PR 00682-6242 [email protected] MIGUEL VELEZ-REYES Department of Electrical and Computer Engineering, University of Puerto Rico, Mayaguez, PR 00682-6242 [email protected]

This paper presents a novel wavelet and support vector machine (SVM) based method for hyperspectral image classification. A 1-D wavelet transform is applied to the pixel spectra, followed by feature extraction and SVM classification. Contrary to the traditional method of using pixel spectra with SVM classifier, our approach not only reduces the dimension of the input pixel feature vector but also improves the classification accuracy. Texture energy features computed in the spectral dimension are mapped using polynomial kernels and used for training the SVM classifier. Results with AVIRIS and other hyperspectral images for land cover and benthic habitat classification are presented. The accuracy of the method with limited training sets and computational burden is assessed. Keywords: Hyperspectral image classification; support vector method.

1. Introduction

Hyperspectral images are characterized by large amounts of data spread over several spectral bands. They are widely used for land and coastal ecological classification, target detection, crop health monitoring, in search and rescue operations, and also in the biomedical field for cancer diagnosis. Both supervised and unsupervised techniques have been developed for classifying these images. Traditionally these images are classified by spectral information alone.1 The high dimensionality of the feature space and data dependency results in poor performance of classification algorithms that otherwise perform well with gray scale images. In order to avoid these problems, band subset selection methods employing spectral binning, principal components, discriminant and independent component analysis have been developed. Principal component transformation decorrelates the original data and transforms it to an orthogonal feature space, discriminant analysis enhances inter-class separability, and independent component transformation results in a non-gaussian space. Recently there has been an interest in kernel based methods. SVMs with kernel mapping scale well to high-dimensional


problems and converge fast to the solution. They also have well-defined statistical properties and hence are useful for hyperspectral image (HSI) classification. There are several applications of SVMs for HSI classification.2,3 Independent components mixture models and support vector machines have been used in4 for classification. Feedforward neural networks have been used in5 for hyperspectral image classification. Ensembles of feedforward neural networks are trained with different random weight initialization methods. The training methods include bagging, boosting and adaptive boosting. A comparison of support vector machines, kernel Fisher discriminant, regularized radial basis function feedforward neural network and regularized adaptive boosting methods is presented in6 for classification. All of the above methods utilize only the spectral signature of each pixel as the original feature set. In this paper, we present a wavelet based SVM algorithm for hyperspectral image classification. In the literature, wavelets have been used widely for hyperspectral/multispectral data compression. They have been reported to be used for image dimensionality reduction and classification.7,8 In this paper, the novelty is the use of wavelets as a preprocessing and feature extraction step for computing texture energy features. In addition to results of land cover classification, benthic habitat classification is also presented. Section 2 presents the wavelet-SVM algorithm. Section 3 presents land cover classification results and Section 4 presents benthic habitat classification results. Sections 5 and 6 present discussion and conclusions, respectively.

2. Wavelet Feature Based Support Vector Algorithm

In this method, the spectral signature of each pixel is transformed using a 1-dimensional wavelet filter. The wavelets can be broadly classified into continuous and discrete wavelets. The discrete wavelets are further classified as non-orthogonal, biorthogonal or orthogonal wavelets. Non-orthogonal wavelets are linearly dependent and redundant frames. Orthogonal wavelets are linearly independent and complete in L2(R). A family of real orthonormal bases ψm,n(u), obtained through translation and dilation of a kernel function ψ(u) known as the mother wavelet, is used to decompose the signal or image.

\psi_{m,n}(u) = 2^{-m/2}\, \psi(2^{-m} u - n)    (1)

where m and n are integers. Due to the orthonormal property, the wavelet coefficients of a signal f(u) can be easily computed via

c_{m,n} = \int_{-\infty}^{+\infty} f(u)\, \psi_{m,n}(u)\, du    (2)

To construct the wavelet ψ(u), the scaling function φ(u) is first determined, which satisfies the two-scale difference equation

\phi(u) = 2 \sum_{k} h(k)\, \phi(2u - k)    (3)


Then, the wavelet kernel ψ(u) is related to the scaling function via

\psi(u) = 2 \sum_{k} g(k)\, \phi(2u - k)    (4)

where

g(k) = (-1)^{k} h(1 - k)    (5)

g and h are the wavelet and scaling filters. The discrete wavelet transform (DWT) is used in this study. Any function f(t) ∈ Rn can be written as9

f(t) = \sum_{k} c_{J_0,k}\, \phi_{J_0,k}(t) + \sum_{j=J_0}^{J_1-1} \sum_{k} d_{j,k}\, \psi_{j,k}(t)    (6)

where φ(t) and ψ(t) are the scaling and wavelet functions, c_{J_0,k} and d_{j,k} are the corresponding scaling and wavelet coefficients, and J_0 is the coarsest decomposition level. The wavelet energy feature, computed as the root-mean-square value of the scaling and wavelet coefficients, is used as the feature. Hence, the feature vector is F = [F_0, F_1, ..., F_{J_1}]^T, where J_1 is the maximum wavelet decomposition level and T denotes the transpose of a vector. The jth element of the feature vector is computed as

F_j = \sqrt{ \frac{1}{N_j} \sum_{k=1}^{N_j} d_{j,k}^{2} }, \quad j \in \{1, 2, ..., J_1\}    (7)

where N_j is the number of coefficients in level j. In this work, the Daubechies orthonormal wavelet filter of length 14 has been used.10 This filter has been chosen for consistency, due to its orthonormal property, and for its suitability for texture feature extraction. The reduced-dimension feature vector F is used to train the SVM classifier. SVM finds the optimal separation surface between classes by identifying the most representative training samples, also called support vectors. Generally, a kernel method is used to project the data non-linearly into a higher dimensional space where the classes are linearly separable. Typically, SVM is formulated for a two-class problem with N training samples represented by the set of pairs {(yi, xi), i = 1, 2, ..., N}, with yi a class label of value ±1 and xi ∈ Rn a feature vector with n components. The classifier is represented by the function f(x; α) → y, with α the parameters of the classifier. The SVM method consists in finding the optimum separating hyperplane such that samples with labels y = ±1 are located on each side of the hyperplane and the distance of the closest vectors to the hyperplane on each side is maximum. These closest vectors are called support vectors and the distance is the optimal margin. A complete mathematical formulation of SVM and its application to classification are described in Refs. 11 and 12.
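The feature extraction described by Eqs. (6) and (7) can be sketched with PyWavelets as below; 'db7' (a Daubechies filter with 14 taps) and the 8-level decomposition follow the description in the text, but the placeholder spectrum, the boundary handling and the exact normalization are assumptions, and PyWavelets will warn that 8 levels on a 200-band signal push the deeper coefficients into boundary effects.

```python
import numpy as np
import pywt

def wavelet_energy_features(pixel_spectrum, wavelet="db7", level=8):
    """Root-mean-square energies of the DWT coefficients of one pixel spectrum."""
    coeffs = pywt.wavedec(pixel_spectrum, wavelet, level=level)
    # coeffs = [approximation, detail_level, ..., detail_1]  ->  level + 1 features
    return np.array([np.sqrt(np.mean(c**2)) for c in coeffs])

spectrum = np.random.default_rng(3).normal(size=200)   # placeholder pixel spectrum
features = wavelet_energy_features(spectrum)
print(features.shape)   # (9,): 8 detail energies plus 1 approximation energy
```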


The maximization of the margin subject to the equations of the two support-vector hyperplanes is a constrained optimization problem given by

\min_{w,b} \; \tfrac{1}{2} \|w\|^{2} \quad \text{subject to} \quad y_i\,(w \cdot x_i + b) \ge 1, \; i = 1, ..., N    (8)

where w·x + b = 0 is the hyperplane and (w, b) are the parameters of the hyperplane. Fig. 1 shows the classification methodology of SVMs by tracing maximum-margin hyperplanes in the kernel space where the samples are mapped. For non-linearly separable training samples, a regularization parameter C and error variables εi are introduced in (8) to reduce the weighting of misclassified vectors. This optimization problem is solved using Lagrange multipliers as

\max_{\lambda} \; \sum_{i=1}^{N} \lambda_i - \tfrac{1}{2} \sum_{i,j=1}^{N} \lambda_i \lambda_j y_i y_j \, x_i \cdot x_j
\quad \text{subject to} \quad 0 \le \lambda_i \le C, \; i = 1, ..., N, \quad \text{and} \quad \sum_{i=1}^{N} \lambda_i y_i = 0    (9)

where the λi are the Lagrange multipliers and are non-zero only for the support vectors. Thus, the hyperplane parameters (w, b) and the classifier function f(x; w, b) can be computed by the optimization process. The free source code osu-svm toolbox13 has been adapted for hyperspectral imagery.


Fig. 1. (a) Optimal decision hyperplane (ODH) in a linearly separable problem, (b) Linear decision hyperplanes in nonlinearly separable data.

For computing nonlinear decision surfaces in Rn, the data is projected into a higher dimensional space where it is linearly separable. Usually kernels are used for the projection. The kernel function K embeds the data into a feature space where classes can be separated using hyperplanes. Fig. 2 depicts the mapping.



Fig. 2. Kernel Mapping

The kernel computes inner products in the feature space directly from the inputs. In these experiments the polynomial kernel given by

K(x, x_i) = (x \cdot x_i + 1)^{p}    (10)

is used, where p is the degree of the polynomial. SVMs are designed to solve two-class problems. For an M-class problem, a one-against-one strategy is used: M(M−1)/2 classifiers are applied, one to each pair of classes, and the most often predicted label is assigned to each unknown test feature vector.
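A sketch of this classification step using scikit-learn is shown below, assuming the 9 wavelet energy features per pixel described earlier; with gamma=1 and coef0=1 scikit-learn's polynomial kernel reduces to (x·xi + 1)^p as in Eq. (10), the degree and C values are those quoted for the Indian Pines experiment, and the random arrays are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training/testing feature matrices (9 wavelet energy features each).
rng = np.random.default_rng(4)
X_train = rng.normal(size=(630, 9))          # e.g. 90 samples x 7 classes
y_train = np.repeat(np.arange(7), 90)
X_test = rng.normal(size=(1400, 9))

# Polynomial kernel (x.xi + 1)^p with p = 14 and regularization C = 15000.
clf = SVC(kernel="poly", degree=14, gamma=1.0, coef0=1.0, C=15000,
          decision_function_shape="ovo")     # one-against-one multiclass handling
clf.fit(X_train, y_train)
labels = clf.predict(X_test)
```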

3. Land Habitat Classification

This section presents results of classification of agricultural fields and natural and man-made objects using two hyperspectral images.

3.1. Indian pines hyperspectral image

The Airborne Visible Infrared Imaging Spectrometer (AVIRIS) hyperspectral data acquired over the Indian Pine test site in Northwestern Indiana in 1992 is used. This image originally has 220 spectral bands; after excluding the water absorption and noise bands, 200 bands are selected. Fig. 3 shows the image of size 145x145 pixels with the 7 ground truth classes. 90 training samples from each of the 7 classes are used to first compute the wavelet features. The wavelet filtering is done up to the maximum number of decomposition levels, which is J1 = log2(200) = 7.6 ≈ 8. Hence, there are 9 features per sample (8 energy features from the 8 levels of detail coefficients and 1 feature from the approximation coefficients). The parameter d (degree of the polynomial kernel) is tuned between 1 and 15 and C (regularization parameter) between 100 and 10^6. In this experiment, the polynomial degree is d = 14. The SVM classifier is trained with the mapped features and regularization parameter C = 15,000. The number of testing


Fig. 3. Subset of AVIRIS hyperspectral image of Indian Pine with ground truth classes.

samples per class is 200. Table 1 shows the confusion matrix for the wavelet-SVM method, where each entry Xii (ith row and ith column) is the percentage of samples from class i classified correctly in class i, and Xij is the percentage of samples from class i classified incorrectly in class j. The producer's accuracy in the last column is 100% minus the percent omission error. The last row of the table gives the user's accuracy, which is equal to 100% minus the commission error. The overall accuracy (OA) is 94.29%. For comparison, the SVM classifier is also trained with the 200 spectral features per class; Table 2 gives the confusion matrix for this experiment. To further demonstrate the superior performance of the SVM classifier, classification results are also given for a minimum distance classifier in Table 3. For the SVM classifier of Table 2, the degree of the polynomial kernel is d = 3 and C = 1000. It can be seen that the wavelet-SVM method improves accuracy by about 2.5%.

Table 1. Confusion matrix for Indian Pines image using wavelet-SVM method.

Classes        Soy notill  Soy min  Wood  Hay  Corn notill  Corn min  Grass/Trees  Producer's [%]
Soy notill     99          1        0     0    0            0         0            99
Soy min        3           94       0     0    1            2         0            94
Wood           0           0        100   0    0            0         0            100
Hay            0           0        0     100  0            0         0            100
Corn notill    5           5        0     1    88           1         0            88
Corn min       0           14       0     0    5            80        1            80
Grass/Trees    0           1        0     0    0            0         99           99
User's [%]     92.5        81.7     100   99   93.6         96.4      99           OA = 94.29


Table 2. Confusion matrix for Indian Pines image using SVM method.

Classes        Soy notill  Soy min  Wood  Hay  Corn notill  Corn min  Grass/Trees  Producer's [%]
Soy notill     100         0        0     0    0            0         0            100
Soy min        4           94       0     0    0            2         0            94
Wood           0           0        92    0    0            0         8            92
Hay            0           0        0     100  0            0         0            100
Corn notill    1           3        0     0    96           0         0            96
Corn min       4           23       0     0    7            65        1            65
Grass/Trees    0           3        0     2    0            1         94           94
User's [%]     91.7        76.4     100   98   93.2         95.6      91.3         OA = 91.71

Table 3. Confusion matrix for Indian Pines image using minimum distance classifier.

Classes        Soy notill  Soy min  Wood  Hay  Corn notill  Corn min  Grass/Trees  Producer's [%]
Soy notill     34          10       0     0    41           15        0            34
Soy min        0           34       0     0    66           0         0            34
Wood           0           0        100   0    0            0         0            100
Hay            0           0        0     100  0            0         0            100
Corn notill    8           3        0     0    26           63        0            26
Corn min       8           14       0     0    13           65        0            65
Grass/Trees    0           0        2     0    0            0         98           98
User's [%]     68          55.7     98    100  45.2         45.5      100          OA = 65.3

3.2. Hydice roof image

This experiment uses a segment of a HYDICE frame. HYDICE is a hyperspectral sensor that collects data in 210 bands with an instantaneous field of view of 3.5 meters. This imagery has high spatial resolution. Fig. 4 shows the image of size 160 x 263 pixels, with 4 classes: roof, asphalt (road and parking lot), car and trees. The number of training samples from each class is 65 and the number of testing samples per class is 500. The confusion matrix for the wavelet-SVM algorithm is shown in Table 4. The feature vector is of length 9, similar to the Indian Pines experiment. The degree of the polynomial is d = 5 and the regularization parameter C = 1000. Table 5 shows a comparison with the SVM method applied directly to the spectral signatures. In this case, the feature vector length is 210, d = 3 and C = 1000. The overall accuracy of the wavelet-SVM method is 91.49% and that of the SVM applied directly to the pixel spectra is 84.71%. In this case, the wavelet-SVM method outperforms the SVM method by about 7%. These two experiments indicate that the wavelet-SVM method is better able to discriminate between different land cover types: agricultural, natural and man-made objects.

Fig. 4. HYDICE image with the ground truth classes (roof, asphalt, cars, trees).

Table 4. Confusion matrix for HYDICE image using wavelet-SVM method.

Classes      Roof    Asphalt  Car     Trees   Producer's [%]
Roof         96.42   2.69     0       0.9     96.42
Asphalt      0       97.01    2.09    0.9     97.01
Car          3.58    9.55     82.99   3.88    82.99
Trees        5.67    2.69     2.09    89.55   89.55
User's [%]   91.2    86.7     95.2    94      OA = 91.49

Table 5. Confusion matrix for HYDICE image using SVM method.

Classes      Roof    Asphalt  Car     Trees   Producer's [%]
Roof         100     0        0       0       100
Asphalt      0       99.77    0.23    0       99.77
Car          17.7    12.18    52.18   17.93   52.18
Trees        10.11   1.15     1.84    86.9    86.9
User's [%]   78.24   88.21    96.18   82.9    OA = 84.71

4. Benthic Habitat Classification

In this experiment a coastal AVIRIS image of the La Parguera area of Puerto Rico is used.


4.1. Aviris la parguera hyperspectral image

An AVIRIS image of the south coast of Puerto Rico was acquired in 2003. Fig. 5(a) shows a subset of this image of the Enrique reef coastal area of size 188 x 427 pixels. The spatial resolution of this image is 17 meters with 224 spectral bands. Fig. 5(b) shows an IKONOS image of the same area with the ground truth classes.


Fig. 5. (a) Subset of AVIRIS hyperspectral image of La Parguera (Enrique reef), (b) IKONOS image of Enrique reef with ground truth classes.

Table 6. La Parguera class information and samples.

Classes      Training samples  Testing samples
Seagrass     24                312
Sand         117               364
Coral reef   25                450
Mangrove     12                90
Water        500               1584
Total        678               2800

Table 7. Confusion matrix for AVIRIS La Parguera image using wavelet-SVM method.

Classes      Sand    Water   Coral   Grass   Mangrove  Producer's [%]
Sand         94.87   0       5.13    0       0         94.87
Water        0       100     0       0       0         100
Coral        3.85    1.92    94.23   0       0         94.23
Grass        8.00    0       8.00    84      0         84
Mangrove     0       0       0       0       100       100
User's [%]   88.9    98.12   87.77   100     100       OA = 94.62

Table 6 gives the class information along with the training and testing samples in each class. Table 7 gives the confusion matrix for the wavelet-SVM algorithm. The length of the feature vector in this case is also 9, the polynomial degree, d = 14 and regularization parameter C = 10000. The OA is 94.62% compared to the SVM method which gives


87.62%. There is an increase in the accuracy of about 9%. In the case of the SVM method, the feature vector length is 224, polynomial degree is d= 3 and C = 1000. Table 8 shows the results for the SVM method. On comparison, it is seen that the waveletSVM method performs better in discriminating coral from sand. Table 8. Confusion matrix for AVIRIS La Parguera Image using SVM method.

Classes          Sand     Water    Coral    Grass    Mangrove   User's [%]
Sand             82.05    0        2.96     4        0          92.18
Water            0        81.5     0        0        0          100
Coral            3.42     18.5     86.54    8        0          74.31
Grass            5.13     0        10.5     88       0          84.9
Mangrove         9.4      0        0        0        100        91.4
Producer's [%]   82.05    81.5     86.54    88       100        OA = 87.62

5. Discussion

Fig. 6 shows a plot of the number of training samples vs. overall accuracy for the wavelet-SVM and SVM methods when classifying the testing samples of the HYDICE image (Section 3.2). The SVM method applied directly to the pixel spectra reaches higher accuracy with fewer training samples, but does not improve further. The wavelet-SVM method requires a certain minimum number of training samples but gives higher accuracy for the same number of training samples as the SVM method.

Fig. 6. Plot of number of training samples and classification accuracy.


The wavelet-SVM method has much lower feature dimensionality than the SVM method, hence the SVM learning and classification are more efficient, as demonstrated by the results. Table 9 gives the timing results for classifying the HYDICE image by both methods. The wavelet decomposition and feature extraction take extra time; however, the wavelet-SVM learning and classification require much less time than the SVM method, which takes longer due to the high feature dimensionality.

Table 9. Timing results.

Method        Time (s): feature extraction + training/classification
SVM           6.36 (1.322 + 5.038)
Wavelet-SVM   77.952 (77.572 + 0.38)

6. Conclusions

This paper has presented a method based on wavelets and SVM for hyperspectral image classification. Contrary to traditional wavelet-SVM methods, texture energy features are computed from the wavelet decomposition and the kernel mapping is done on this reduced feature set. The method achieves higher classification accuracy with a lower feature dimension. A polynomial kernel transformation of the wavelet features requires higher-degree polynomials and results in efficient SVM learning of the mapped feature vectors. The wavelet feature extraction part requires additional computation, which can be sped up by efficient implementations. However, training and classification using wavelet features require less time, which is significant in large-scale applications. The limitations of this method are the number of training samples and the choice of kernel parameters. Currently, texture features are computed only along the spectral dimension. Future work will be the development and implementation of spatial kernels (such as space-frequency kernels14) for hyperspectral (vector-valued) images. Data ordering schemes will be used to manage the vector-valued spectral channels.15,16 This scheme will be advantageous as it integrates both spectral and spatial information.

7. Acknowledgment

This work was sponsored in part by DoD under contract W911NF-06-1-0008. The research performed here used facilities of the Center for Subsurface Sensing and Imaging Systems at the University of Puerto Rico-Mayaguez, sponsored by the Engineering Research Centers Program of the US National Science Foundation under grant EEC-9986821.


8. References
1. D. A. Landgrebe, Signal Theory Methods in Multispectral Remote Sensing, Wiley, NJ, 2003.
2. J. A. Gualtieri, “Hyperspectral analysis, the support vector machine, and land and benthic habitats,” Proceedings of IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, pp. 354-363, Oct 2003.
3. M. Lennon, G. Mercier and L. Hubert-Moy, “Classification of hyperspectral images with nonlinear filtering and SVMs,” Proceedings IEEE Conf. IGARSS’02, Vol. 3, pp. 1670-1672.
4. S. B. Serpico, M. D’Inca and G. Moser, “Design of spectral channels for hyperspectral image classification,” Proceedings IGARSS’04, Vol. 2, pp. 956-959, 2004.
5. M. F. Redondo, C. H. Espinosa and J. T. Sosperda, “Hyperspectral image classification by ensembles of multilayer feedforward networks,” Proceedings IJCNN, Vol. 2, pp. 1145-1149, 2004.
6. G. Camps-Valls and L. Bruzzone, “Kernel-based methods for hyperspectral image classification,” IEEE Trans. GRS, Vol. 43, No. 6, pp. 1351-1362, June 2005.
7. S. Kaewpijit, J. Le Moigne and T. El-Ghazawi, “Automatic reduction of hyperspectral imagery using wavelet spectral analysis,” IEEE Trans. GRS, 2003.
8. C. Liu, C. Zhao and W. Chen, “Hyperspectral image classification by second generation wavelet based on adaptive band selection,” Proceedings IEEE Intl. Conf. Mechatronics and Automation, 2005.
9. S. Burrus, R. A. Gopinath and H. Gao, Introduction to Wavelets and Wavelet Transforms — A Primer, Prentice Hall, 1998.
10. I. Daubechies, “The wavelet transform, time-frequency localization and signal analysis,” IEEE Trans. Information Theory, Vol. 36, No. 5, pp. 961-1005, 1990.
11. C. J. Burges, “A tutorial on SVMs for pattern recognition,” Data Mining and Knowledge Discovery, U. Fayyad, Ed., Kluwer Academic, pp. 1-42, 1998.
12. V. N. Vapnik, Statistical Learning Theory, John Wiley and Sons, 1998.
13. OSU SVM Classifier Matlab Toolbox (ver 3.00), http://www.ece.osu.edu/~maj/osu_svm.
14. M. Sabri and J. Alirezai, “Optimized space frequency kernel for texture classification,” Proc. IEEE Intl. Conf. Image Processing, Vol. 3, pp. 1421-1524, 2004.
15. A. Plaza, P. Martinez, J. Plaza and R. Perez, “Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations,” IEEE Trans. GRS, Vol. 43, No. 3, pp. 466-479, 2004.
16. C. Kotropoulos and I. Pitas, “Multichannel L filters based on marginal data ordering,” IEEE Trans. Signal Processing, Vol. 42, No. 10, pp. 2581-2595.

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 349–367 © World Scientific Publishing Company

SOME EFFECTS OF IMAGE SEGMENTATION ON SUBSPACE-BASED AND COVARIANCE-BASED DETECTION OF ANOMALOUS SUB-PIXEL MATERIALS

CHRISTOPHER GITTINS* and DAISEI KONNO
Physical Sciences Inc., 20 New England Business Center, Andover, MA 01810 USA
[email protected]

MICHAEL HOKE and ANTHONY RATKOWSKI
US Air Force Research Laboratory, AFRL/VSBYH, 29 Randolph Road, Hanscom AFB, MA 01731 USA

* Present address: MIT Lincoln Laboratory, 244 Wood St., Lexington, MA 02420 USA

In this paper we assess the effect that clustering pixels into spectrally-similar background types, for example, soil, vegetation, and water in hyperspectral visible/near-IR/SWIR imagery, prior to applying a detection methodology has on material detection statistics. Specifically, we examine the effects of data segmentation on two statistically-based detection metrics, the Subspace Generalized Likelihood Ratio Test (Subspace-GLRT) and the Adaptive Cosine Estimator (ACE), applied to a publicly-available AVIRIS datacube augmented with a synthetic material spectrum in selected pixels. The use of synthetic spectrum-augmented data enables quantitative comparison of Subspace-GLRT and ACE using Receiver Operating Characteristic (ROC) curves. For all cases investigated, ROC curves generated using ACE were as good as or superior to those generated using Subspace-GLRT. The favorability of ACE over Subspace-GLRT was more pronounced as the synthetic spectrum mixing fraction decreased. For probabilities of detection in the range of 50-80%, segmentation reduced the probability of false alarm by a factor of 3–5 when using ACE. In contrast, segmentation had no apparent effect on detection statistics using Subspace-GLRT in this example.

Keywords: hyperspectral; image segmentation; anomaly detection; sub-pixel; Generalized Likelihood Ratio

1. Introduction

Detection of anomalous materials in hyperspectral imagery is a challenging problem and one of ongoing interest to data analysts. In this paper we address spectral detection of anomalous materials present at the sub-pixel level when the material spectrum is known a priori. While a review of hyperspectral image processing literature reveals a wide array of analysis approaches, detection metrics which follow from the assumption of a linear



mixing model, that is, that the observed spectrum may be described as a linear combination of the spectrum of interest and a nominal background spectrum, are of particular interest because the assumption of linear mixing enables the implementation of computationally-efficient detection algorithms, e.g., spectral-matched-filter and matched-subspace detectors.1 Within the context of the linear mixing model, the question arises whether the background is better modeled as ‘structured’ or ‘unstructured’, i.e., modeled using a linear combination of basis spectra or as a random vector relative to the mean spectrum of the data. (In the unstructured background model, variation about the mean is described by the sample covariance matrix.)

In this paper we assess the effect that segmenting data into spectrally similar background types, e.g., soil, vegetation, and water in hyperspectral visible/near-IR/SWIR imagery, prior to applying a detection methodology has on the detection statistics that follow from the assumption of structured and unstructured backgrounds. Specifically, we examine the effects of data segmentation on the Subspace-Generalized Likelihood Ratio Test (Subspace-GLRT) and the Adaptive Cosine Estimator (ACE). The former detector follows from the structured background model; the latter from the unstructured background model. The results we present here expand on the work of Funk et al.2

Our analysis utilizes a publicly-available datacube augmented with a synthetic material spectrum in selected pixels. The datacube is from NASA’s Airborne Visible Infrared Imaging Spectrometer (AVIRIS). The use of synthetic spectrum-augmented data enables quantitative comparison of Subspace-GLRT and ACE using Receiver Operating Characteristic (ROC) curves. We utilize AVIRIS data as the basis for our analysis because it exhibits non-Gaussian spectral clutter and thereby poses a more challenging test for statistics-based detection approaches than does data exhibiting purely Gaussian variability. For all cases investigated, ROC curves generated using ACE were as good as or superior to those generated using Subspace-GLRT. The favorability of ACE over Subspace-GLRT was more pronounced as the synthetic spectrum mixing fraction decreased. For probabilities of detection in the range of 50–80%, segmentation reduced the probability of false alarm by a factor of 3–5 when using ACE. In contrast, segmentation had no apparent effect on detection statistics using Subspace-GLRT in this example.

2. Background and Technical Approach

2.1. Material detection algorithms: Subspace-GLRT and ACE

We are interested in anomaly detection when the spectrum of the anomaly is known. Subspace-GLRT-based material detection follows from the presumption of a structured background or “subspace” model for the measured spectrum. The structured background model for detection of a single anomalous material with known spectrum is:

x = αs + µ + Bβ + n    (1)


where x is a K-dimensional vector, s is the material spectrum, α is a proportionality coefficient, µ is the mean spectrum of the background, B is the background subspace matrix whose columns are the basis vectors which span the background subspace, β is a vector of proportionality coefficients and n is the residual between the observed and modeled spectrum (nominally noise plus spectral clutter). The values of the proportionality coefficients in Eq. (1) minimize the variance between the measured spectrum and the best-fit spectrum. We implemented a Subspace-GLRT detection metric on noise-whitened data:

D_{GLRT}(x) \equiv \frac{\| P_B^{\perp} D^{-1/2}(x - \mu) \|^2}{\| P_{SB}^{\perp} D^{-1/2}(x - \mu) \|^2} - 1    (2)

where D = diag{σ_1^2, ..., σ_K^2}, i.e., it is a diagonal matrix whose elements are the mean-squared noise/spectral clutter in each spectral band. The interference rejection operator, P_A^{\perp}, is defined with respect to a matrix A:

P_A^{\perp} = I - D^{-1/2} A \, (A^T D^{-1} A)^{-1} A^T D^{-1/2}    (3)

where A is the B matrix or composite of the S and B matrices, respectively. In practice, −1

the D matrix is set equal to [diag{Σ^{-1}}]^{-1}, where Σ is the sample covariance matrix. The columns of B are the leading eigenvectors of the sample covariance. (We note that there are many alternative methods for specifying B.) The value of D_GLRT ranges from zero to infinity, with higher values indicating higher probability that the material spectrum sought is present in the measured spectrum. The statistics of D_GLRT for data containing normally-distributed noise have been described in the literature.3

As we will show in the Results section of this paper, there is an optimum number of basis vectors for modeling the background in a given data set. Inclusion of too few basis spectra results in poor modeling of observed spectra and therefore poor detection statistics. Conversely, inclusion of too many basis spectra in the background model results in an inadvertent nulling of the material spectrum, s. This causes the numerator in Eq. (2) to tend to zero and results in poor detection statistics.

In contrast to Subspace-GLRT, ACE-based material detection follows from the assumption of an unstructured background. The unstructured background model for detection of a single anomalous material is:

x = αs + µ + ν    (4)

αs and µ are the same as in Eq. (1), and ν is a random vector with zero mean. The probability distribution function for ν follows from the sample covariance matrix. (For a multivariate normal distribution, p(ν) = (2π)^{-K/2} det(Γ)^{-1/2} exp(-\tfrac{1}{2} ν^T Γ^{-1} ν).) The metric for ACE-based detection is:

D_{ACE}(x) \equiv \frac{\| P_S \Gamma^{-1/2}(x - \mu) \|^2}{\| \Gamma^{-1/2}(x - \mu) \|^2}    (5)

where Γ is the covariance matrix for the background and \| y \|^2 is the magnitude-squared of y. The P_S operator is:

P_S \equiv \Gamma^{-1/2} s \, (s^T \Gamma^{-1} s)^{-1} s^T \Gamma^{-1/2}    (6)

The ACE detection metric may be regarded as the cosine-squared of the spectral angle between the noise-whitened measurement, Γ^{-1/2}(x − µ), and the noise-whitened material reference spectrum, Γ^{-1/2}s. This is illustrated conceptually in Fig. 1. ‘Material present’ is decided when the de-meaned and noise-whitened measurement vector falls within the cone defined about the noise-whitened reference vector. The statistics of D_ACE for a multivariate-normal distributed background have been described in the literature.3
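As a minimal illustration of how the two detection metrics in Eqs. (2) and (5) can be evaluated over an image, the sketch below implements both statistics with NumPy. It is a sketch of the standard formulas only, not the authors' code; the variable names are hypothetical, D is taken as the vector of per-band noise variances, and B holds the background basis vectors as columns.

```python
import numpy as np

def ace_statistic(X, s, mu, Sigma):
    """ACE metric of Eq. (5): cos^2 of the whitened angle between each
    de-meaned spectrum (rows of X) and the reference spectrum s."""
    Gi = np.linalg.inv(Sigma)
    d = X - mu                                   # (N, K) de-meaned spectra
    proj = d @ (Gi @ s)                          # s^T Gamma^-1 (x - mu)
    num = proj ** 2 / (s @ Gi @ s)
    den = np.einsum('ij,jk,ik->i', d, Gi, d)     # whitened magnitude squared
    return num / den

def subspace_glrt_statistic(X, s, mu, B, D):
    """Subspace-GLRT metric of Eq. (2) on noise-whitened data; D holds the
    per-band variances sigma_k^2 used for whitening."""
    w = 1.0 / np.sqrt(D)
    Xt = (X - mu) * w                            # whitened, de-meaned spectra
    Bt = B * w[:, None]                          # whitened background basis
    SBt = np.column_stack([s * w, Bt])           # composite [s B] basis
    def residual(A, Y):
        coef, *_ = np.linalg.lstsq(A, Y.T, rcond=None)
        return Y - (A @ coef).T                  # part of Y outside span(A)
    num = np.sum(residual(Bt, Xt) ** 2, axis=1)
    den = np.sum(residual(SBt, Xt) ** 2, axis=1)
    return num / den - 1.0
```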

Fig. 1. Conceptual illustration of ACE-based material detection in three dimensions. “Material present” is decided when the demeaned, noise-whitened test spectrum falls within the cone defined about reference vector for the material of interest.


The principal challenge in applying Eqs. (2) and (5) is accurate estimation of the background mean and either the sample covariance matrix or the basis spectra and noise-whitening matrix, D. This requires that bad data and pixels containing the material of interest be identified and removed from the sample. Our approach to removing these ‘outlier’ pixels and establishing robust estimates of the quantities of interest is: (1) calculate the mean and covariance of the full image, µ and Γ; (2) calculate the Mahalanobis distance of each pixel with respect to the sample mean; (3) identify anomalous pixels in the scene on the basis of their Mahalanobis distance, d^2, with anomalies typically defined as having d^2 ≥ 2K, where K is the dimensionality of the pixel spectrum; (4) re-calculate the mean and covariance of the sample, µ̂ and Γ̂, respectively, without the anomalous pixels; (5) if applying a structured background model, take the columns of B to be the leading n Principal Components of Γ̂. The anomaly identification step is crucial. If pixels containing the spectrum of the material of interest (MOI) are included in the Principal Components analysis, then that spectrum will contaminate the estimated background basis vectors and render the Subspace-GLRT ineffective. Fortunately, pixels which contain statistically-significant amounts of the MOI spectrum show up as anomalies and can be removed from the sample by screening based on Mahalanobis distance. We observe that the calculated robust mean and covariance, µ̂ and Γ̂, are insensitive to the specific outlier fraction for outlier fractions between ~5% and 15%. A discussion of robust estimation methods is beyond the scope of this paper but can be found elsewhere.4
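The five-step screening procedure above translates directly into a few lines of array code. The sketch below is one possible reading of it under the stated d² ≥ 2K anomaly rule; the function name and the choice of returning the leading-eigenvector basis are illustrative, not the authors' implementation.

```python
import numpy as np

def robust_background(X, n_basis):
    """Mahalanobis screening of anomalous pixels, then robust mean/covariance
    and a Principal-Component background basis (steps 1-5 above)."""
    N, K = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    d = X - mu
    d2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(Sigma), d)   # Mahalanobis^2
    keep = d2 < 2.0 * K                      # pixels with d^2 >= 2K flagged as anomalous
    mu_hat = X[keep].mean(axis=0)
    Sigma_hat = np.cov(X[keep], rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma_hat)                    # ascending eigenvalues
    B = evecs[:, ::-1][:, :n_basis]                             # leading n eigenvectors
    return mu_hat, Sigma_hat, B
```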

2.2. Source data and data segmentation

We tested the algorithms on datacubes derived from publicly-available data from NASA’s Airborne Visible Infrared Imaging Spectrometer (AVIRIS).5 The purpose of using AVIRIS data for algorithm testing is that it exhibits non-normal spectral clutter and, therefore, poses a more challenging test for statistically-based material detection approaches than does data exhibiting purely Gaussian variability. AVIRIS is a visible/near-IR/SWIR hyperspectral sensor which produces datacubes containing 224 bands covering the 400–2500 nm spectral region at ~10 nm spectral resolution. The datacubes are reflectance data products; the radiance data is calibrated to ±10% absolute accuracy. The datacubes analyzed here were derived from a publicly-available datacube recorded over Moffett Field on August 21, 1992 (see http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The Moffett Field data was recorded at 20 km altitude, which resulted in a ground sampling distance of 17 m. A gray-scale panchromatic representation of the Moffett Field scene is depicted in Fig. 2.


Fig. 2. Synthetic gray scale broadband image of AVIRIS Moffett Field datacube. White box indicates the main region used for algorithm testing.

The datacubes used for algorithm testing consisted of 36 bands down-sampled from the original 224 bands and augmented at selected pixels with a synthetic spectrum. Our motivation for spectral down-sampling was threefold: 1) the information in 224 bands is redundant (little information is lost by sparse sampling), 2) down-sampling reduces the computational load, and 3) the full band set is not required to demonstrate the functionality of image segmentation/clustering or spectral-based material detection algorithms. In order to assess detection algorithm performance, we embedded a synthetic MOI spectrum at selected pixels in the sub-cubes. The synthetic spectrum is shown in Fig. 3. The purpose of embedding the synthetic spectrum was to enable calculation of PD and PFA with absolute certainty and thereby enable calculation of ROC curves. The new pixel spectra were calculated using the equation:

x_new = (1 − α)x + αs    (7)

where x is the original pixel spectrum, s is the synthetic spectrum, and α is a mixing coefficient between 0 and 1. Figure 4 shows a representative pixel spectrum before and after the synthetic spectrum was added. We refer to sub-cubes with embedded spectra as ‘MOI-augmented’ AVIRIS data. Test datacubes consisted of 128 x 128 and 256 x 256 pixels, and the mixing fraction was 0.01 or 0.02 in the pixels containing the embedded spectrum. Figure 5 depicts a gray-scale image of a 128 x 128 pixel MOI-augmented sub-cube. The location of the 6 x 6 block of MOI-augmented pixels is indicated by the arrow and outlined by the white box.
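Equation (7) amounts to a one-line mixing operation per selected pixel. A minimal sketch, assuming the cube is stored as a (rows, cols, bands) array and the augmented region is a contiguous block (the 6 x 6 block of Fig. 5 is used only as an example):

```python
import numpy as np

def embed_moi(cube, s, row0, col0, size=6, alpha=0.01):
    """Mix the synthetic MOI spectrum s into a size x size pixel block
    of the datacube according to Eq. (7)."""
    out = cube.astype(float).copy()
    block = out[row0:row0 + size, col0:col0 + size, :]
    out[row0:row0 + size, col0:col0 + size, :] = (1.0 - alpha) * block + alpha * s
    return out
```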


Fig. 3. Synthetic signature embedded in AVIRIS sub-cubes analyzed in this work.

Fig. 4. Spectrum of AVIRIS pixel before and after embedding the synthetic spectrum with α=0.01.


Fig. 5. Synthetic panchromatic representation of AVIRIS data (128 x 128 pixel block). Black region corresponds to water; light gray to dried vegetation/soil; medium/dark gray to live vegetation. The 6 x 6 pixel block to which the synthetic spectrum was added is outlined in white.

2.3. Data segmentation

Data segmentation was performed using Expectation Maximization (EM)6 and Stochastic Expectation Maximization (SEM) algorithms.7 In summary, both algorithms attempt to maximize the likelihood function for the set of observed spectra, {x_i}:

L(\{x_i\}; \Psi) = \prod_{i=1}^{N} \sum_{j=1}^{K} \pi_j \, f_L(x_i; \mu_j, \Gamma_j)    (8)

where Ψ is the set of mixture model parameters (number of clusters, cluster abundances, cluster means and covariances), N is the number of pixels in the sample, π_j is the fractional abundance of cluster j, and f_L(x_i; µ_j, Γ_j) is the function which describes the probability of observing x_i given the cluster mean, µ_j, and covariance, Γ_j. The solution to the clustering problem, i.e., estimation of the optimum mixture model parameters, Ψ, is determined iteratively. We used the fractional change in L from one iteration to the next as the termination criterion:

\frac{\ln L(\{x_i\}; \Psi^{(q+1)})}{\ln L(\{x_i\}; \Psi^{(q)})} - 1 \le \varepsilon    (9)

where q is the iteration number. Our convergence criterion was typically ε =0.0001. We did not systematically investigate the effect of the convergence criterion on algorithm results. Our semi-quantitative observations suggest that there is little change in Ψ when ε


drops below 0.001, i.e., variations in the clustering solution depend more on initialization conditions than on the choice of convergence threshold when ε < 0.001. As expected, SEM produced essentially identical clustering results but in significantly less time, by virtue of the fact that it assigns pixels exclusive membership in a class during algorithm iterations. The most computationally efficient approach to clustering was to apply SEM to Principal Components-transformed data rather than to full-rank data. The Principal Components-transformed data is:

y = Q^T(x − µ)    (10)

where the columns of Q are the leading m eigenvectors of the full sample covariance matrix and µ is the mean of the full sample. Following the Principal Component transform, the quantities µ_j and Γ_j above are calculated in y-space and SEM proceeds as per Eqs. (8) and (9). For most AVIRIS sub-cubes analyzed, the leading two Principal Components accounted for >99% of the data variance. Using the >99% variance criterion for specifying the number of components to use in the PC transform therefore reduced the dimensionality of the data from 36 to 2. This dimensionality reduction was achieved with no apparent loss of classification accuracy and approximately an order of magnitude increase in algorithm execution speed. (Once the clustering solution was achieved, the Subspace-GLRT and ACE algorithms were applied to full-rank data separately for each cluster.)

Good algorithm initialization is essential for achieving rapid algorithm convergence and avoiding convergence on a local minimum. We used a K-means initialization. We tested a variety of approaches for picking the initial means and settled on an approach in which the first mean was chosen at random from pixels whose Mahalanobis distance fell between the 50th and 90th percentiles, and then subsequent means were chosen to minimize the cost function:

F(\mu_m) = \sum_{i=1}^{m-1} \frac{1}{\| \mu_m - \mu_i \|^2}    (11)

Successive means were chosen by evaluating the cost function for all pixels with respect to the previously chosen means. The pixel which yields the lowest value of the cost function is chosen as the m-th cluster mean, µ_m. For the sub-cube depicted in Fig. 5, the optimum clustering solution was three clusters, nominally soil/dry vegetation, live vegetation, and water. Figure 6 depicts the cluster means estimated using SEM, where the symbols and lines correspond to solutions from different algorithm runs. Figure 7 shows the classification map for the scene; white pixels indicate 100% probability of association with the class and black pixels indicate 0% probability of association.
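The initialization described above (a random mid-percentile seed followed by means chosen to minimize Eq. (11)) can be sketched as follows. This is one plausible reading of the procedure operating on the PC-transformed pixels Y, with illustrative function and parameter names, not the authors' IDL implementation.

```python
import numpy as np

def init_cluster_means(Y, n_clusters, seed=0):
    """Pick initial cluster means: the first from pixels in the 50th-90th
    Mahalanobis-distance percentile band, the rest by minimizing Eq. (11)."""
    rng = np.random.default_rng(seed)
    mu = Y.mean(axis=0)
    Si = np.linalg.inv(np.cov(Y, rowvar=False))
    d = Y - mu
    d2 = np.einsum('ij,jk,ik->i', d, Si, d)          # Mahalanobis distance squared
    lo, hi = np.percentile(d2, [50, 90])
    candidates = np.where((d2 >= lo) & (d2 <= hi))[0]
    means = [Y[rng.choice(candidates)]]
    for _ in range(1, n_clusters):
        # F(mu_m) = sum over previously chosen means of 1 / ||mu_m - mu_i||^2
        cost = np.zeros(len(Y))
        for m in means:
            cost += 1.0 / (np.sum((Y - m) ** 2, axis=1) + 1e-12)
        means.append(Y[np.argmin(cost)])
    return np.array(means)
```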


Fig. 6. Cluster means estimated using SEM. Symbols indicate solutions with the highest likelihood value. Lines indicate means from solutions which converged but with slightly lower likelihood values.

Fig. 7. Maps of the three classes found in the AVIRIS sub-cube (panels: gray-scale broadband, water, soil, vegetation). Classes nominally correspond to live vegetation, dry vegetation/soil, and water. White pixels indicate the most probable class assignment.


Both our EM and SEM algorithms were implemented using Interactive Data Language (IDL version 6.2: RSI, Boulder CO) and ran in real-time on a desktop PC (Pentium 4 with 2.8 GHz processor and 768 MB RAM running Windows XP) or laptop PC (Pentium M with 1.4 GHz processor and 768 MB RAM running Windows XP).

3. Results: Detection Algorithm Performance Comparison

We generated ROC curves for ACE and Subspace-GLRT detection by calculating probability of detection, PD, from the synthetic spectrum-augmented pixels and the probability of false alarm, PFA, from the rest of the pixels in the scene. Both probabilities are determined by setting a threshold for deciding “spectrum of interest is present” and determining what fraction of the pixels in the region of interest exceed that threshold value. The probability of detection is:

P_D(\eta) = \int_{\eta}^{\eta_{max}} p(D; \alpha > 0) \, dD    (12)

where η is the detection threshold, p(D;α) is the probability of observing either D_ACE or D_GLRT for mixing fraction α, and η_max is the maximum value of the detection parameter; η_max = 1 for ACE, η_max = ∞ for Subspace-GLRT. The integral for PD is evaluated over the pixels where the synthetic spectrum is embedded. Similarly, PFA is evaluated over the remaining pixels in the scene (where α = 0):

P_{FA}(\eta) = \int_{\eta}^{\eta_{max}} p(D; \alpha = 0) \, dD    (13)

ROC curves are parametric plots of PD(η) vs. PFA(η). Plotting the data in this manner removes the explicit dependence on the algorithm-dependent detection threshold and enables direct comparison of different detection algorithms.

Figure 8 shows ROC curves for Subspace-GLRT-based detection applied to unsegmented data with α=0.01, i.e., the synthetic spectrum-augmented sub-cube depicted in Fig. 5. Curves are plotted for different numbers of basis vectors used to model the background. Although curves were calculated for nb=1-15, only nb=3, 6, 10, and 14 are plotted in Fig. 8. Figure 8 indicates that there is an optimum number of basis functions for modeling the background. ROC curves improve up to nb~10 but degrade for higher values of nb. Our working hypothesis is that inclusion of additional basis vectors has the effect of nulling the spectrum of interest. In practice, the optimum value for nb is fairly soft: ROC curves do not change significantly in going from nb=8 to nb=11.

Figure 9 shows ROC curves for GLRT-based detection applied to segmented data with mixing fraction equal to 0.01. Similar to Fig. 8, the ROC curves were calculated using the leading nb Principal Components of each cluster to model the background subspace. (The ROC curves were calculated using the same number of basis functions for each cluster.) The optimum number of basis functions appears to be ~6 for this synthetic
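Since PD and PFA in Eqs. (12) and (13) are simply the fractions of augmented and non-augmented pixels whose detection statistic exceeds the threshold, an ROC curve can be generated by sweeping the threshold over the sorted scores. A minimal sketch (not the authors' code; the truth mask marks the synthetic spectrum-augmented pixels):

```python
import numpy as np

def roc_points(scores, truth_mask):
    """Return (PFA, PD) pairs obtained by sweeping the detection threshold
    from the highest detection statistic downwards."""
    s = np.ravel(scores)
    t = np.ravel(truth_mask).astype(float)
    order = np.argsort(s)[::-1]                  # descending detection statistic
    t = t[order]
    pd = np.cumsum(t) / t.sum()                  # fraction of MOI pixels detected
    pfa = np.cumsum(1.0 - t) / (t.size - t.sum())
    return pfa, pd
```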


Fig. 8. ROC curves for GLRT detection applied to un-segmented data; mixing fraction = 0.01, nb = number of basis vectors used to model the background subspace in each cluster.

Fig. 9. ROC curves for GLRT subspace model; nb = number of basis vectors used to model the background subspace in each cluster.


spectrum-augmented sub-cube. We note that, as with the unsegmented data, the observed optimum value was rather ‘soft’, i.e., there was relatively little variation from nb=6 to nb=8. The nb=3 curve reflects the fact that the structure of the background is not captured using only three basis functions. The nb=10 and nb=14 ROC curves reflect the fact that the spectrum of interest is not orthogonal to the spectra which span the background subspace. Inclusion of too many basis functions in the estimated background subspace causes the background subspace rejection operator, P_B⊥, to null the spectrum of interest as well as the background subspace, which leads to a corresponding reduction in (signal-of-interest)-to-background contrast. The critical conclusion from Figs. 8 and 9 is that there is an optimum number of basis functions for defining the background in a scene; using either too many or too few degrades contrast. In practice, we found that the most challenging aspect of applying the structured background model for spectral matching-based detection was determining the number of basis functions that optimize the ROC curve.

Fig. 10. Best ROC curves for subspace-GLRT detection: comparison of curves calculated for clustered and un-clustered data; rural detection environment depicted in Fig. 7.

The objective of this paper is to assess the effects of clustering on spectral detection algorithms. Figure 10 shows optimal ROC curves for GLRT-based detection using clustered and un-clustered data. The nearly overlapping ROC curves suggest that segmentation does little or nothing to improve the effectiveness of Subspace-GLRT-based detection. For comparison with Subspace-GLRT, we also calculated ROC curves for ACE applied to the same data. Figure 11 shows the ROC curves for ACE and GLRT applied to un-clustered data. The ROC curves indicate that ACE outperforms the GLRT for PD > ψ0, we find the well-known single-scattering lidar equation:

P_{ss}(z_c, \theta_{i+1}) = P_0 \, \frac{c\tau}{2} \frac{A}{z_c^2} \, (\alpha_{a,s}(z_c) + \alpha_R) \, p(\pi) \, e^{-2\int_0^{z_c} \alpha(z)\, dz}    (4)

as it should be. Practically speaking, for a laser beam with a divergence of 0.3 mrad, almost all the single scattering is contained in a FOV of 1 mrad.
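For orientation, the single-scattering return of Eq. (4) is straightforward to evaluate numerically for an assumed extinction profile. The sketch below does this for a uniform background extinction with an added cloud layer; all numerical values are illustrative assumptions, not the parameters used in this work.

```python
import numpy as np

def single_scatter_return(z, alpha_as, alpha_R, p_pi, P0=1.0, A=0.01, c_tau_half=1.5):
    """Evaluate Eq. (4) on a range grid z (m); alpha_as is the range-resolved
    aerosol extinction (1/m) and alpha_R the (constant) Rayleigh extinction."""
    dz = np.gradient(z)
    trans2 = np.exp(-2.0 * np.cumsum((alpha_as + alpha_R) * dz))   # two-way transmission
    return P0 * c_tau_half * A / z**2 * (alpha_as + alpha_R) * p_pi * trans2

# illustrative profile: weak background aerosol plus a cloud from 500 m to 530 m
z = np.linspace(1.0, 700.0, 700)
alpha = np.full_like(z, 1e-5)
alpha[(z >= 500.0) & (z <= 530.0)] += 1e-4
signal = single_scatter_return(z, alpha, alpha_R=1.2e-5, p_pi=0.05)
```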

2.4. Double scattering

The essence of the multiple-field-of-view lidar technique is the measurement of the scattered power S(θ) as a function of the field of view θ. In Figure 1 the angle θ corresponds to one half of the lidar total FOV. Changes in the particle size distribution lead to variations of S(θ). We assume in our model that the laser beam divergence is small and that the scattered power originating from single scattering events is concentrated inside the smallest field of view θ_min of the instrument. We also assume that the extinction and the particle size of the atmospheric aerosols are spatially homogeneous and that the time delay of the scattered photons is negligible. According to these assumptions, for angles θ > θ_min the scattered radiation must come from multiple


scattering events. In lidar measurements, the FOV range [θ_min, θ_max] is usually divided into several concentric intervals and the scattered power inside these intervals is integrated by a detector. Scattered power in the FOV interval Δθ_i = θ_{i+1} − θ_i can be calculated using4,5:

S(z_c, \Delta\theta_i) = S_0 \, e^{-2\int_0^{z_c} \alpha(z)\, dz} \, \frac{c\tau}{2} \frac{A}{z_c^2} \; 2 \int_{z_a}^{z_c} \int_0^{2\pi} \int_{\beta_j}^{\beta_{j+1}} [\alpha_s(z)\, p(r,\beta)]\,[\alpha_s(z_c)\, p(r,\beta_{back})] \sin\beta \, d\beta \, d\phi \, dz    (5)

The factor 2 in front of the integral comes from the reciprocity theorem2,6, α is the extinction coefficient, p(r, β) and p(r, β_back) are the values of the phase function for the forward and backward scattering angles, respectively β and β_back = π − β + θ, for a particle of radius r, z_a is the distance to the boundary of the probed bioaerosol layer, z_c is the measurement range, z is the range where the forward scattering takes place, the quantity [α_s(z) p(r, β)] represents the forward scattering coefficient while [α_s(z_c) p(r, β_back)] represents the backscattering coefficient, and φ is the azimuthal angle ranging from 0 to 2π. Following the usual notation convention for lidar, c, τ and A represent respectively the speed of light, the pulse width and the area of the collecting aperture. From Fig. 1, the scattering angle β can be easily related to the FOV θ via the relation tan β = z_c tan θ / (z_c − z). The backscattered signal is given by the sum P_ss(z_c, θ_{i+1} − θ_i) + S(z_c, θ_{i+1} − θ_i).

3. Results

Figure 4 illustrates the single-scattering lidar return as a function of distance for the background aerosol plus a bioaerosol cloud extending from 500 m to 530 m. We have also indicated on the graph the distance of 580 m at which we set the backscattering measurement at FOVs up to 100 mrad full angle using a gated ICCD camera. For clarification purposes we have also indicated the lidar return in the absence of Rayleigh scattering. In short, the Rayleigh scattering is important and clearly enhances the backscattering at the wavelength we are considering. The lidar signal from the small bioaerosol cloud extending from 500 m to 530 m is just above the background and Rayleigh scattering signal. We are interested in the lidar signal as a function of the FOV at the back of the bioaerosol cloud. Figure 5 shows the difference between the bioaerosol signal (S_bio+bg) and the background aerosol signal (S_bg) second-order lidar returns within FOVs θ_{i+1} and θ_i for 3 effective bioaerosol radii (r = 0.5, 2.5 and 10 µm). The difference (ΔS(z_c, θ_{i+1} − θ_i)) has been calculated as follows: ΔS(z_c, θ_{i+1} − θ_i) = S_bio+bg(z_c, θ_{i+1} − θ_i) − S_bg(z_c, θ_{i+1} − θ_i), using Eq. (5) with z_a = 500 m, z_b = 530 m and z_c = 580 m for the calculation of S(z_c, Δθ_i). The FOV increment between adjacent points is (θ_{i+1} − θ_i) and the FOVs are equidistantly distributed on a log scale.


Fig. 4. Single-scattering lidar return (W/J/m^2) as a function of distance (m) for the background aerosol + bioaerosols, the background aerosol + Rayleigh scattering, and all contributions combined; the 30-m bioaerosol cloud and the measurement plane at 580 m are indicated.


Fig. 5. Lidar return difference ΔS between the bioaerosol signal and the background aerosol signal within FOVs θ_{i+1} and θ_i for 3 effective bioaerosol radii (r = 0.5, 2.5 and 10 µm) and for z_a = 500 m, z_b = 530 m and z_c = 580 m. The FOV increment between adjacent points has been set to (θ_{i+1} − θ_i).


The peaks are attributed to diffraction and the rather flat signals at larger FOVs are attributed to geometrical scattering. It is clear that the information content on the bioaerosol size is present. From the result of ref. 7, we find, for small extinction and for a measurement geometry such that (z_c + z_a)/2 > z_b, that the effective diameter d_eff = 2⟨r^3⟩/⟨r^2⟩ is related to the angular scale θ_md and to the probing wavelength via the relation

d_{eff} = 0.46 \, \frac{\lambda}{\theta_{md}} \, \frac{z_c - z_a}{z_c} \left[ 1 - \frac{(z_b - z_a)^2}{(z_c - z_a)^2} \right]^{1/2}    (6)

The position of the maximum of ΔS defines the angular scale θ_md. In order to assess the quality of the effective diameter recovered through Eq. (6), the bioaerosols defined in Table 1 were studied for a 30-m cloud depth situated at distances z_a of 500 m, 1000 m and 2000 m, with z_b = 530 m, 1030 m and 2030 m and z_c = 580 m, 1080 m and 2080 m. Tables 2a and 2b provide, for two FOV ranges (0.1 mrad to 50 mrad and 0.5 mrad to 5.5 mrad), the scale θ_md, the recovered effective diameter d_eff and the error on the recovered values. Table 2b, for the smaller FOV range, clearly shows the necessity of covering a large FOV range to obtain information on small particles.

Table 2a. The scale θ_md (Theta), the recovered effective diameter d_eff and the error on the recovered values, for FOVs ranging from 0.1 mrad to 50 mrad.

         r=0.5 µm; D(eff)=1.13 µm     r=2.5 µm; D(eff)=5.7 µm      r=10 µm; D(eff)=22.8 µm
z (m)    Theta   D(eff)  % err        Theta   D(eff)  % err        Theta   D(eff)  % err
580      18.4    1.1     0.4          4.15    5.0     -11.7        1.11    18.8    -17.5
1080     10.1    1.1     -1.7         2.02    5.6     -2.6         0.61    18.4    -19.4
2080     5.51    1.1     -6.5         1.11    5.2     -8.0         0.27    21.6    -5.4

Table 2b. The scale θ_md (Theta), the recovered effective diameter d_eff and the error on the recovered values, for FOVs ranging from 0.5 mrad to 5.5 mrad.

         r=0.5 µm; D(eff)=1.13 µm     r=2.5 µm; D(eff)=5.7 µm      r=10 µm; D(eff)=22.8 µm
z (m)    Theta   D(eff)  % err        Theta   D(eff)  % err        Theta   D(eff)  % err
580      ind     ind     ind          4.15    5.0     -11.7        1.11    18.8    -17.5
1080     ind     ind     ind          2.02    5.6     -2.6         0.61    18.4    -19.4
2080     ind     ind     ind          1.11    5.2     -8.0         0.27    21.6    -5.4
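Equation (6) itself is a one-line computation once θ_md has been located. The sketch below evaluates it; the probing wavelength is not stated in this excerpt, so the 0.355 µm value in the example call is only an assumption (with that assumed wavelength the call reproduces the ~5.0 µm entry of Table 2a for z_c = 580 m).

```python
import numpy as np

def effective_diameter(theta_md, wavelength, z_a, z_b, z_c):
    """Recover d_eff from the angular scale theta_md via Eq. (6); lengths in
    metres, theta_md in radians, result in metres."""
    geometry = (z_c - z_a) / z_c
    depth = np.sqrt(1.0 - (z_b - z_a) ** 2 / (z_c - z_a) ** 2)
    return 0.46 * wavelength / theta_md * geometry * depth

# example call (assumed 0.355 um probing wavelength): ~5.0e-6 m
d = effective_diameter(4.15e-3, 0.355e-6, 500.0, 530.0, 580.0)
```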

So far we have considered an infinitesimal laser beam with zero divergence and simply assumed that we can easily extract the second-order scattering. Now we will have a look at the backscattered light as a function of field of view, taking into account the laser beam radial profile. Figure 7 shows the two lidar returns as functions of the FOV at a distance of 580 m in the presence and in the absence of bioaerosol; the laser beam has a divergence of 0.5 mrad full angle. For FOVs ranging from 0.0001 to 0.001 rad, Figure 7 shows the encircled energy of the laser beam. Beyond 0.001 rad there is a slight increase of the signal as a function of the FOV caused by multiple scattering. The difference between the two signals is very small. We need to subtract the two signals in order to see clearly the multiple scattering caused by the bioaerosols. The difference of energy between the


Fig. 7. Lidar returns as functions of the FOV at a distance of 580 m in the presence and in the absence of bioaerosol; the laser beam has a divergence of 0.5 mrad full angle.

bioaerosol signal P_bio(z_c, θ_i) and background aerosol signal P_bg(z_c, θ_i) in a ring within (θ_{i+1} and θ_i) is given by:

\Delta P(z_c, \theta_{i+1} - \theta_i) = (P_{bio}(z_c, \theta_{i+1}) - P_{bio}(z_c, \theta_i)) - (P_{bg}(z_c, \theta_{i+1}) - P_{bg}(z_c, \theta_i))    (7)

and the integrated energy is given by:

C(\theta_i) = \sum_{i=2}^{n} \Delta P(z_c, \theta_{i+1} - \theta_i)

If we apply Eq. (7) to the 2.5-µm radius bioaerosol particles for the 3 distances previously considered, we obtain the result displayed in Figure 8. Clearly there is still information on size, but the “signals” are not as clear as in Figure 5. In fact, because the doubly scattered light is mixed with the laser beam, the information at small FOVs is no longer available. In the absence of “information” at small FOVs, it is no longer possible in some cases to retrieve meaningful information on the particle size. Table 3 shows the scale θ_md, the recovered effective diameter d_eff and the error on the recovered values. As expected, the recovered values for the larger particles have significant errors, especially at longer distances.


Table 3. The scale θ_md (Theta), the recovered effective diameter d_eff and the error on the recovered values for FOVs ranging from 0.1 mrad to 50 mrad. Laser beam divergence has been taken into account.

         r=0.5 µm; D(eff)=1.13 µm     r=2.5 µm; D(eff)=5.7 µm      r=10 µm; D(eff)=22.8 µm
z (m)    Theta   D(eff)  % err        Theta   D(eff)  % err        Theta   D(eff)  % err
580      18.4    1.1     0.4          3.7     5.6     -1.0         1.11    18.8    -17.5
1080     10.1    1.1     -1.7         2.02    5.6     -2.6         0.91    12.3    -46.0
2080     5.51    1.1     -6.5         1.11    5.2     -8.0         ind     ind     ind


Fig. 8. Lidar return within FOVs θ_{i+1} and θ_i for z_c = 580 m, 1080 m and 2080 m for 2.5-µm radius bioaerosol clouds. The beam divergence has been set to 0.5 mrad.

4. Conclusion

We have shown that second-order scattering contains information on the size of bioaerosols and that it is possible, in principle, to recover the size of low-concentration bioaerosols using a background aerosol subtraction technique. The bioaerosol size could be retrieved with reasonable accuracy when the combination of FOV range and distances contains information on the size. However, the laser beam divergence and the quality of the optical components will play an important role in determining the minimum concentration required for particle size determination. Experimental trials are required to determine the physical and engineering problems associated with the proposed technique and so determine the minimal bioaerosol concentration required for size determination.


References
1. J.-R. Simard, Gilles Roy, Pierre Mathieu, Vincent Larochelle, John McFee, and Jim Ho, “Standoff Sensing of Bioaerosols Using Intensified Range-Gated Spectral Analysis of Laser-Induced Fluorescence,” IEEE Trans. on Geoscience and Remote Sensing 42, 865–874 (2004).
2. Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, Chapter 3: Lidar and Multiple Scattering, ed. Claus Weitkamp (Springer Series in Optical Sciences), 2005, 455 pp, ISBN 0-387-40075.
3. N. Roy, G. Roy, L. R. Bissonnette, and J.-R. Simard, “Measurement of the azimuthal dependence of cross-polarized lidar returns and its relation to optical depth,” Appl. Opt. 43, 2777–2785 (2004).
4. G. Roy, L. R. Bissonnette, C. Bastille, and G. Vallée, “Retrieval of droplet-size density distribution from multiple field-of-view cross-polarized lidar signals,” Appl. Opt. 38, 5202–5211 (1999).
5. Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, Chapter 3: Lidar and Multiple Scattering, ed. Claus Weitkamp (Springer Series in Optical Sciences), 2005, 455 pp, ISBN 0-387-40075.
6. L. Katsev, E. P. Zege, A. S. Prikhach, and I. N. Polonsky, “Efficient technique to determine backscattered light power for various atmospheric and oceanic sounding and imaging systems,” J. Opt. Soc. Am. A 14, 1338–1346 (1997).
7. L. R. Bissonnette, G. Roy and N. Roy, “Multiple-scattering-based lidar retrieval: method and results of cloud probings,” Appl. Opt. 44, 5565–5581 (2005).


International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 457–468 © World Scientific Publishing Company

DETECTION AND IDENTIFICATION OF TOXIC CHEMICAL VAPORS IN AN OPEN-AIR ENVIRONMENT BY A DIFFERENTIAL PASSIVE LWIR STANDOFF TECHNIQUE

HUGO LAVOIE*, ELDON PUCKRIN AND JEAN-MARC THÉRIAULT
Defence Research and Development Canada – Valcartier, 2459 Pie-XI North Blvd, Québec, Québec, G3J 1X5, Canada
[email protected]

In this paper, the passive standoff long wave infrared technology developed for atmospheric remote sensing was used to detect and identify chemical pollutants in the atmosphere. The measurement approach is based on the differential passive standoff detection method that has been developed by DRDC Valcartier during the past few years. The measurements were performed on real chemical warfare agents and toxic chemical vapors. The results clearly demonstrate the capability of the differential radiometry approach for the detection, identification and quantification of toxic chemical vapor clouds in an open-air environment.

Keywords: CATSI; Toxic chemical vapours; Passive standoff detection; FTIR; Chemical warfare agent.

1. Introduction

Passive standoff FTIR (Fourier Transform Infrared) spectrometers represent a class of sensors highly suited for the remote sensing of environments. Passive FTIR sensing can provide unique information on the presence, identification and quantification of toxic chemical vapors in the atmosphere.1-8 However, there are two main difficulties associated with common Fourier-transform infrared (FTIR) techniques for the passive remote monitoring of chemical vapors. First, there is the difficulty of suppressing the strong thermal background signal that is mixed with the desired target signature. Secondly, it is necessary to account for the infrared emission arising from the spectrometer itself. Defence Research and Development Canada (DRDC) – Valcartier has developed a method9,10 that mitigates the impact of these two difficulties by taking advantage of the differential detection capability provided by a dual-beam interferometer (CATSI – Compact ATmospheric Sounding Interferometer) sensor. This sensor provides the real-time optical subtraction of the background without extensive calculations and shows stable radiometric calibration over a long period of time. This technique has been

* Author to whom correspondence should be sent. E-mail: [email protected].


previously applied for the measurement of chemical vapor clouds at standoff distances of up to 5.7 km11-13 and for the detection of chemical warfare contaminants on surfaces.14 In this paper, the passive differential technique has been applied to the detection, identification and quantification of toxic industrial chemicals and chemical warfare agents. The analysis of results for the passive standoff detection of phosgene (CG) and sarin (GB) at a distance of up to 1.5 km is presented.

2. Detection method

Figure 1 shows a diagram of the 3-layer geometry model that describes the differential detection method using a double-beam interferometer, such as CATSI. Both detection modes (direct and differential) will subtract the background (Ffar) and the atmospheric (Nnear, τnear and τair) contributions from the target signal; however, the direct mode may introduce temporal fluctuations. Equation (1) is based on the parameters denoted in Figure 1, where Lgas and τgas are respectively the total path radiance and the transmittance associated with the target gas layer. The transmittance of the air is given by τair, and Nnear and τnear are respectively the near-field path radiance and the transmittance of the atmosphere. The total path radiance without target gas (Lclear) and the far-field radiance (Ffar) are terms which are often quite difficult to evaluate. These difficulties are reduced by the capability of CATSI to subtract optically, in real time, the background (atmosphere and clutter) from the total path radiance, Lgas. The result of the optical subtraction (δLcalc) is given by equation (2).

Figure 1: Diagram of the 3-layer geometry for the differential detection of a chemical cloud (sensor, atmosphere, cloud and background layers, with the quantities Lgas, Lclear, τgas, τnear, τair, Nnear, Ffar and the ranges z1, z2).


δL_calc ≡ L_gas − L_clear = (1 − τ_gas)[B_gas τ_near + N_near − L_clear]    (1)

Equation (1) describes an ideal scenario (Figure 1) where the background is constant over the acquisition for the sequential (direct) and the simultaneous (differential) detection. Finally, it can be verified that if a chemical cloud occupies a fraction, f , of the field of view (FOV) of the sensor, the differential radiance can be calculated by:

δL_calc ≡ L_gas − L_clear = ΔL_clear + f (1 − τ_gas)(B_gas τ_near + N_near − L_clear − ΔL_clear)    (2)
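For a feel for the magnitudes involved, Eq. (2) can be evaluated directly once the plume transmittance and the background terms are specified. The sketch below does this, taking B_gas to be the Planck radiance at the plume temperature (a standard interpretation, assumed here since this excerpt does not define B_gas explicitly); any numerical inputs in a call would be illustrative.

```python
import numpy as np

def planck_radiance(nu, T):
    """Planck spectral radiance in W/(cm^2 sr cm^-1) at wavenumber nu (cm^-1)."""
    c1 = 1.191042e-12      # 2*h*c^2 in W cm^2 sr^-1 (wavenumber convention)
    c2 = 1.438777          # h*c/k in cm K
    return c1 * nu**3 / (np.exp(c2 * nu / T) - 1.0)

def differential_radiance(nu, tau_gas, T_gas, tau_near, N_near, L_clear,
                          f=1.0, dL_clear=0.0):
    """Differential radiance of Eq. (2) for a plume filling a fraction f of the FOV."""
    B_gas = planck_radiance(nu, T_gas)
    return dL_clear + f * (1.0 - tau_gas) * (B_gas * tau_near + N_near
                                             - L_clear - dL_clear)
```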

For more details on the detection method and other scenarios, see references 11, 13, 15.

2.1. The CATSI sensor

In the passive standoff detection approach, the monitoring procedure takes advantage of the differential detection capability provided by an optimized dual-beam interferometer (CATSI) with adjacent fields-of-view.9,15 In this system, two beams of thermal radiation originating from different scenes can be optically combined onto a single detector and subtracted in real-time. Thus, if one beam entering the interferometer corresponds to the target-plus-background scene and the other corresponds to a consistent background scene, then the resulting differential spectrum corresponds primarily to the target scene, minimally perturbed by the background. The standard configuration for the CATSI sensor consists of two identical Newtonian telescopes (Figure 2), each with a diameter of 10 cm, which are optically coupled to the dual-beam interferometer (Bomem-type instrument). Each telescope can be individually aimed at a selected scene, i.e., one on the target cloud scene and the other on a cloud-free background scene. This system allows measurements of spectra to be made according to the following specifications: (1) scene FOV of 11 mrad or less, (2) spectral coverage from 7 to 14 µm, and (3) a maximum spectral resolution of 1 cm-1. Coarse and fine adjustments in the azimuth and elevation pointing directions are simply achieved by rotating the whole assembly, which is mounted on a pan-and-tilt over a tripod. Additional information on the CATSI instrument and its radiometric calibration has been reported elsewhere.9,15

2.2. Experimental conditions

The CATSI spectral specifications and GASEM parameters for the present work were set as follows: (1) spectral resolution of 8 cm-1, (2) co-addition of 20 scans, (3) noise-equivalent signal radiance of 2×10-9 W/(cm2 sr cm-1), and (4) the plume temperature set the same as the air temperature.


Figure 2: Picture of the CATSI sensor with the 4-inch telescope configuration used for this trial.

3. Experimental results

Passive standoff detection of toxic chemical vapors such as CG and GB was investigated in a low thermal-infrared-contrast environment at distances between 500 and 1500 m. Two scenarios are presented in this paper. The first is a military scenario where the CG gas was released instantaneously by explosion of its container, and the second consisted of the constrained release of GB.

3.1. Instantaneous phosgene release

Figure 3 shows the results obtained for an instantaneous release of 6 kg of phosgene at 870 m from the sensor. The overlay plot and the spectrogram of the differential radiance show all the spectra collected by CATSI during the release. The background for this measurement consisted mostly of grass, and the radiative temperature contrast was evaluated at 1.1 K.


Figure 3: Explosive phosgene experiment (release of 6 kg). Evolution of the measured differential radiance in different projections: overlay plot of the radiance acquired by CATSI (A), and spectrogram showing the radiance as a function of time (B).

During the measurement period, both positive and negative phosgene bands could be observed in the overlay plot (Figure 3a). The positive signal was observed just after the explosion and is attributed to a chemical vapor cloud with a temperature higher than the background. After about 100 s the signal became negative and this is associated with phosgene gas at a temperature lower than the background. The period between 90 and


100 s when no signal was observed could be attributed to the time when the chemical vapor was at the same temperature as the background, or when the gas was absent from the sensor FOV. The spectrogram in the 800-900 cm-1 region shows clearly the evolution of the phosgene band as a function of time with a sufficient signal-to-noise ratio (SNR) to permit detection and identification in the interval between 80 and 185 s. Figure 4 shows the best-fit between the phosgene spectrum computed with the GASEM model (solid curve) and the measured differential radiance (dashed curve) at 82 s (top) and 153 s (bottom) after the spectral collection began. The comparison of the fitted signatures and measured spectra results in a fit quality of 0.95 and 0.90, respectively. The upper graph (Figure 4) shows the presence of a continuum offset, which can be explained by the gas and dust cloud at a higher temperature acting as a blackbody radiator shortly after the explosion. Figure 5 shows an IR image (Flir camera, 8-12 µm) of the scene at 1 s after detonation occurred and the explosion can clearly be distinguished by its higher temperature (dashed circle).


Figure 4: Comparison of the best-fit spectrum of a phosgene simulation (plain curve) with the measured differential radiance spectrum (dotted curve) at 82 s (A) and 153 s (B) after the release began.


Figure 5: Infrared (8-12µm) image of the phosgene instantaneous release.

3.2. Chemical warfare agent (GB) release

Figure 6 shows spectral measurements for a small release of GB at a distance of 485 m from the sensor. A small amount of liquid GB was released inside a cloche and the resulting vapor cloud was extracted with a fan positioned at its front end. Figure 6 shows the best-fit between the GB spectrum computed with the GASEM model (solid curve) and the measured differential radiance spectrum (dashed curve). The comparison of the fitted signature and the measured spectrum gives a fit quality of 0.23. The SNR observed during this measurement was evaluated at approximately two, and this is adequate for the detection algorithms to detect and identify the GB. Figure 7 shows a plot of the goodness of fit (r2) and the column density (CL) of GB vapor as a function of time determined from the GASEM fitting procedure. This figure demonstrates that CATSI was capable of detecting and identifying GB vapor at a low concentration and at a distance of 485 m. The global fit quality is relatively low compared to the measurements with CG, but relatively high compared to the noise baseline (Figure 7); it was sufficient to determine whether or not GB was present in the measurement. The fitting procedure indicated that, for the GB release, the temperature of the vapor in the FOV of the CATSI sensor was roughly 2.5 K higher than the air temperature. This temperature difference could be explained by the method of dissemination, in which the CWA inside the cloche was warmed by the sunlight on the surrounding enclosure and was released at a temperature higher than the air.
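The fit-quality and column-density retrievals above come from the GASEM algorithm, which is not described in this excerpt. Purely as a generic illustration of the kind of processing involved, the sketch below performs a linear least-squares fit of a reference differential signature (plus a constant continuum offset) to a measured differential radiance spectrum and reports an r² value; it is not the GASEM procedure, and the inputs are assumed to be on a common wavenumber grid.

```python
import numpy as np

def fit_reference(dL_measured, dL_reference):
    """Least-squares scale + offset fit of a reference differential signature
    to a measured differential radiance spectrum; returns (scale, r_squared)."""
    A = np.column_stack([dL_reference, np.ones_like(dL_reference)])
    coef, *_ = np.linalg.lstsq(A, dL_measured, rcond=None)
    fit = A @ coef
    ss_res = np.sum((dL_measured - fit) ** 2)
    ss_tot = np.sum((dL_measured - dL_measured.mean()) ** 2)
    return coef[0], 1.0 - ss_res / ss_tot
```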


Figure 6: Result of the best-fit GB simulation (black curve) with the measured differential radiance spectrum (grey curve).


Figure 7: Plot of the goodness of fit (r2) and the column density (CL) of GB determined from the GASEM fitting procedure.


Figure 8 shows the result from a Gaussian plume model16,17 that simulates the chemical vapor dissemination as a function of flow rate and wind speed. The Gaussian plume simulation is presented to verify the CL calculated with the GASEM algorithm: an average value of the CL was calculated from the simulated plume over the CATSI FOV and compared with the column density obtained from GASEM. The figure also gives an idea of the FOV coverage by the chemical vapor release as a function of the release-point distance from the sensor. It presents a plume dispersion of GB with a wind speed of 4 m/s. At the distance of 485 m the FOV was approximately 4 m, and it can be seen from Figure 8 that the FOV was mostly covered by the release plume when the sensor was pointed near the release point. For this simulation, an average column density of 25 ppm m was calculated over the CATSI FOV, which is consistent with the measured CL presented in Figure 7 and with the column density obtained with the GASEM fitting procedure.

Figure 8: Gaussian plume simulation of a GB release. The figure presents the column density (color bar, CL in ppm m) as a function of distance. The circle describes the field-of-view (FOV) of the sensor.
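A Gaussian plume gives the line-of-sight column density in closed form once the dispersion coefficients are chosen, because the crosswind integral of the concentration can be done analytically. The sketch below illustrates this; the σ_z power law, source height and release rate are placeholders (the actual dispersion parameters and stability conditions of the trial are not given in this excerpt), and converting the mass column to ppm m would additionally require the gas molar volume.

```python
import numpy as np

def crosswind_column_density(x, z, Q, u, H=1.0):
    """Mass column density (e.g. g/m^2) through a Gaussian plume along a
    horizontal line of sight perpendicular to the wind, at downwind distance
    x (m) and height z (m); Q is the release rate (g/s), u the wind speed (m/s).
    Integrating the Gaussian crosswind term removes sigma_y analytically."""
    sigma_z = 0.06 * x ** 0.92        # illustrative neutral-stability power law
    vertical = (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))   # ground reflection
    return Q / (np.sqrt(2.0 * np.pi) * u * sigma_z) * vertical
```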

The Gaussian plume model presented in Figure 8 was also compared with ground-truth measurements obtained with point sensors distributed on the ground in front of the cloche; the results are of the same order of magnitude as the simulated plume model. Figure 9 shows simulations performed with the Line-by-Line Radiative Transfer Model (LBLRTM)18 incorporating the same atmospheric and concentration parameters as the experiment, at three different values of ∆T. The simulations show that the


differential radiance observed in Figure 6 is on the same order of magnitude as the signal modeled for a cloud with a column density of 25 ppm m and a ∆T between 1 and 2 K. Hence, the good agreement between the measured and simulated GB results suggests that the atmospheric conditions agree well with those of the field experiment.

Figure 9: LBLRTM simulation of GB spectrum performed at three different ∆T with a column density of 25 ppm m.
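To make the magnitude of this comparison concrete, the sketch below evaluates the differential radiance of an optically thin plume with the simple single-layer model ∆L ≈ [B(T_plume) − B(T_air)]·(1 − e^(−A)). It is only a rough order-of-magnitude check, not the LBLRTM calculation; the air temperature and the peak absorbance A of the cloud are assumed values chosen for illustration.

    import numpy as np

    C1 = 1.191042e-12   # first radiation constant for radiance in W/(cm^2 sr cm^-1), wavenumber in cm^-1
    C2 = 1.4387769      # second radiation constant (cm K)

    def planck(nu, T):
        """Blackbody spectral radiance at wavenumber nu (cm^-1) and temperature T (K)."""
        return C1 * nu ** 3 / np.expm1(C2 * nu / T)

    def thin_plume_differential_radiance(nu, T_air, dT, absorbance):
        """Single-layer, optically thin plume seen against a background at the air temperature."""
        return (planck(nu, T_air + dT) - planck(nu, T_air)) * -np.expm1(-absorbance)

    # Assumed values: 288 K air, a 2 K warm plume and a hypothetical peak absorbance of 0.05 near 1020 cm^-1
    dL = thin_plume_differential_radiance(1020.0, 288.0, 2.0, 0.05)
    # dL is on the order of 1e-8 W/(cm^2 sr cm^-1), i.e. the scale of the spectra in Figure 6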

4. Summary and conclusion

A passive standoff differential detection method that exploits the background suppression capability of a dual-beam interferometer system (CATSI) for chemical agent detection has been tested in the field. Perhaps the greatest advantage of this approach is that it provides, in the field, a spectrally clean signature of the remote chemical plume, which facilitates its processing in real time. The results obtained to date for on-line detection and quantification are promising. In the present paper, it has been shown that the differential method can successfully detect, identify and estimate the integrated path-concentrations (CL) of toxic chemical vapor plumes for horizontal standoff distances up to 1.5 km. These results clearly demonstrate the capability of the differential approach to detect and identify chemical vapor clouds located at large distances from the sensor. In general, the measurements performed in this open-air environment provided clear evidence that passive standoff LWIR sensors can detect and identify real CWA vapors released at low concentration and low radiative thermal contrast. These results provide



evidence that the environment does not significantly alter the signature of the toxic chemical vapors from their pure state. The explosive releases of toxic chemical vapors (phosgene) provided good detection results and demonstrated the capability of the CATSI sensor to detect and identify a toxic chemical cloud dispersed instantaneously. These results are of military importance since they approximate the real conditions that may be encountered on the battlefield or from a terrorist threat.

5. Acknowledgements

The authors would like to express their gratitude to the DSTL trial officer and trial personnel for the invitation, their support and their help during the trial. We would also like to thank Mr. Denis Dubé (DRDC Valcartier) for his technical assistance with the setup and his support during the trial.

6. References

1. Flanigan, D. S., Prediction of the Limits of Detection of Hazardous Vapors by Passive Infrared with the Use of MODTRAN. Appl. Opt. 1996, 35, 6090-6098.
2. Flanigan, D. S., Hazardous Cloud Imaging: A New Way of Using Passive Infrared. Appl. Opt. 1997, 36, 7027-7036.
3. Hayden, A.; Niple, E.; Boyce, B., Determination of Trace-gas Amounts in Plumes by the Use of Orthogonal Digital Filtering of Thermal-emission Spectra. Appl. Opt. 1996, 35, 2802.
4. Kroutil, R. T.; Combs, R. J.; Knapp, R. B.; Godfrey, J. P., Infrared Interferogram Analysis for Ammonia Detection with Passive FT-IR Spectrometry. In Conference on Electro-optical Technology for Remote Chemical Detection and Identification, SPIE, Orlando, FL, USA, 1996.
5. Polak, M. L.; Hall, J. L.; Herr, K. C., Passive Fourier-transform Infrared Spectroscopy of Chemical Plumes: An Algorithm for Quantitative Interpretation and Real-time Background Removal. Appl. Opt. 1995, 34, 5406.
6. Small, G. W.; Carpenter, S. E.; Kaltenbach, T. F.; Kroutil, T. F., Discriminant Analysis Techniques for the Identification of Atmospheric Pollutants from Passive Fourier Transform Infrared Interferograms. Analytica Chimica Acta 1991, 246, 85-102.
7. Gittins, C. M.; Hinds, M. F.; Lawrence, W. G.; Mulhall, P. A.; Marinelli, W. J., Remote Sensing and Selective Detection of Chemical Vapor Plumes by LWIR Imaging Fabry-Perot Spectrometry. In Proceedings of the International Symposium on Spectral Sensing Research (ISSSR) 2001, 2001; pp 294-302.
8. Schildkraut, E. R.; Connors, R.; Ben-David, A., Initial Test Results from Ultra-high Sensitivity Passive FTIR Instrumentation (HISPEC). In Proceedings of the International Symposium on Spectral Sensing Research (ISSSR) 2001, 2001; pp 365-374.
9. Thériault, J.-M., Modeling the Responsivity and Self-emission of a Double-beam Fourier-transform Infrared Interferometer. Appl. Opt. 1999, 38, 505-515.
10. Thériault, J.-M.; Bradette, C.; Villemaire, A.; Chamberland, M.; Giroux, J., Differential Detection with a Double-beam Interferometer. In Electro-optical Technology for Remote Chemical Detection and Identification II, SPIE, 1997; pp 65-75.



11. Lavoie, H.; Puckrin, E.; Thériault, J.-M.; Bouffard, F., Passive Standoff Detection of SF6 at a Distance of 5.7 km by Differential FTIR Radiometry. Appl. Spectrosc. 2005, 59(10), 1189-1193.
12. Thériault, J.-M.; Puckrin, E., Remote Sensing of Chemical Vapours by Differential FTIR Radiometry. Int. J. Remote Sens. 2005, 26, 981-995.
13. Thériault, J.-M.; Puckrin, E.; Bouffard, F.; Déry, B., Passive Remote Monitoring of Chemical Vapors by Differential FTIR Radiometry: Results at a Range of 1.5 km. Appl. Opt. 2004, 43, 1425-1434.
14. Thériault, J.-M.; Puckrin, E.; Hancock, J.; Lecavalier, P.; Lepage, C. J.; Jensen, J. O., Passive Standoff Detection of Chemical Warfare Agents on Surfaces. Appl. Opt. 2004, 43, 5870-5885.
15. Thériault, J.-M., Passive Standoff Detection of Chemical Vapors by Differential FTIR Radiometry; DREV Technical Report TR-2000-156; 2001.
16. Pasquill, F., The Estimation of the Dispersion of Windborne Material. Meteorol. Mag. 1961, 90, 33-49.
17. Seinfeld, J. H.; Pandis, S. N., Air Pollution, Physical and Chemical Fundamentals. McGraw-Hill: New York, 1998; Ch. 18.
18. Clough, S. A.; Iacono, M. J., Line-by-line Calculations of Atmospheric Fluxes and Cooling Rates 2: Application to Carbon Dioxide, Ozone, Methane, Nitrous Oxide and the Halocarbons. J. Geophys. Res. 1995, 100, 16519-16535.

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 469–482
© World Scientific Publishing Company

A PYRAMID-BASED BLOCK OF SKEWERS FOR PIXEL PURITY INDEX FOR ENDMEMBER EXTRACTION IN HYPERSPECTRAL IMAGERY

CHEIN-I CHANG, MINGKAI HSUEH, WEIMIN LIU, CHAO-CHENG WU, FARZEEN CHAUDHRY, GREGORY SOLYAR
Remote Sensing Signal and Image Processing Laboratory, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD 21250
[email protected]

ANTONIO PLAZA
Computer Science Department, University of Extremadura, Avda. de la Universidad s/n, 10.071 Caceres, SPAIN
[email protected]

Pixel Purity Index (PPI) has been widely used for endmember extraction. Recently, an approach using blocks of skewers, called the blocks-of-skewers (BOS) method, was proposed by Theiler et al. to improve the computation of the PPI. It uses a block of skewers to reduce the number of dot products that the PPI must compute between each skewer and all data sample vectors. Unfortunately, the BOS method suffers from the same drawbacks as the PPI in terms of several parameters that must be determined a priori. In addition, it has an extra parameter, the block size B, which must also be determined and for which no selection guideline is provided. In this paper, the BOS method is investigated. Most importantly, a new pyramid-based block design for the BOS method is introduced, as opposed to the cube-based block design used by Theiler et al.'s BOS. One major advantage of the proposed pyramid-based BOS over Theiler et al.'s cube-based BOS is its hardware design for Field Programmable Gate Array (FPGA) implementation.

Keywords: Blocks of skewers (BOS) method; Field Programmable Gate Arrays (FPGAs); Pixel purity index (PPI).

1. Introduction

Endmember extraction is one of the fundamental tasks in hyperspectral data exploitation. According to the definition given in [1], an endmember is an idealized, pure signature for a class. The importance of endmember extraction can be found in many applications in image classification, particularly spectral unmixing, where the signature matrix used for unmixing is made up of endmembers that are presumably resident in the image data. An endmember extraction algorithm (EEA) finds and locates the endmembers present in image data. Over the past years, many EEAs have been developed and reported in the literature, such as the pixel purity index (PPI) [2], the N-finder (N-FINDR) algorithm [3], iterative error analysis (IEA) [4], the automated morphological endmember extraction (AMEE) algorithm [5], the minimum volume transform [6], convex geometry [7], convex cone analysis [8], etc.,



and a comparative study among different EEAs was conducted recently in [9]. Among these EEAs, the PPI and N-FINDR algorithms have been widely used in the remote sensing community. In particular, the PPI is one of the most popular EEAs due to its availability in the Environment for Visualizing Images (ENVI) software. Its idea can be briefly described as follows. First, the PPI generates a large number of so-called "skewers", which are random vectors, and then computes the dot product of each skewer with each pixel, where each of these dot products can be carried out independently. Finally, a small number of pixels are selected manually as "pure" signatures by using an ENVI visualization tool. Since the PPI requires a huge number of random skewers to produce reasonable and acceptable results, its computational cost is extremely high. To address this issue, a PPI-based approach using blocks of skewers (BOS), referred to as BOS-PPI, was proposed by Theiler et al. [10]. By virtue of the BOS, the number of independent skewers used by the PPI can be drastically reduced, while linearly dependent skewers generated by linear combinations of the block of skewers can also be used to find endmembers. In this case, the skewers used as a base for the BOS can be considered as independent skewers. Since the derived dot products only involve scalar operations, they can be computed directly from the results of the original independent skewers used in the BOS. However, such a BOS-PPI method inherits the same drawbacks as the PPI. Most importantly, there is an additional parameter, the block size B, that is used to form the block of skewers and must be determined a priori, with no guideline provided for how to do so. In order to address this issue, a new pyramid-based block-of-skewers PPI algorithm was proposed in [11] and compared to the cube-based block design originally used by Theiler et al. [10]. In this paper, the FPGA implementations of the PPI and of two versions of the BOS-PPI, cube-based and pyramid-based, are also investigated. In particular, a new reconfigurable FPGA system implementing the proposed pyramid-based BOS-PPI is further developed. The remainder of this paper is organized as follows. Section 2 reviews the PPI [2] and delineates the PPI algorithm used in this paper. Section 3 describes the BOS method proposed in [10] and further develops the new pyramid-based and cube-based BOS-PPI algorithms. Section 4 presents experiments based on computer simulations and real HYDICE images. Section 5 demonstrates the FPGA design architecture of the BOS-PPI algorithm. Finally, some conclusions are drawn in Section 6.

2. Pixel Purity Index (PPI)

The PPI has been widely used for endmember extraction in remote sensing image processing applications. Although the PPI can be implemented through the ENVI software, its detailed implementation has not been reported. In this section, we make an effort to describe the steps by which we believe the PPI is carried out by the software (a short software sketch follows the steps).

PPI Algorithm
1. Apply the MNF transform to reduce the noise sensitivity and the dimensionality of the data.

2. Let {skewer_j}_{j=1}^k be a large set of k randomly generated L-dimensional vectors, called "skewers", where k is an arbitrary but predetermined positive integer. Also assume that t is a preset positive integer threshold.
3. For each skewer_j, all the data sample pixel vectors are projected onto skewer_j to find the sample pixel vectors at its extreme positions, which form an extreme set for skewer_j, denoted by S_extrema(skewer_j). Despite the fact that a different skewer_j generates a different extreme set S_extrema(skewer_j), it is very likely that some sample pixel vectors appear in more than one extreme set. Define the indicator function of a set S, I_S(r), by

   I_S(r) = 1 if r ∈ S, 0 if r ∉ S, and N_PPI(r) = Σ_j I_{S_extrema(skewer_j)}(r),     (1)

   where N_PPI(r) is defined as the PPI count of the sample vector r.
4. Find all the sample pixel vectors with N_PPI(r) ≥ t as defined by Eq. (1), denoted by {r_i}_{i=1}^p, where t is a threshold determining how many candidate pixels p will be selected for visualization. Load this set of {r_i}_{i=1}^p pixels into an L-dimensional visualization tool and select the pixel vectors that correspond to pure pixel vectors.
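The following short sketch illustrates the core of steps 2–4 in software. It is an illustrative implementation, not the ENVI code: it counts only the single minimum and maximum projection per skewer as the extreme set, and the MNF transform and the interactive visualization step are assumed to be handled separately.

    import numpy as np

    def ppi_counts(data, n_skewers=10000, seed=0):
        """data: (N, L) array of N sample pixel vectors in L (MNF-reduced) bands.
        Returns the PPI count N_PPI(r) of Eq. (1) for every pixel."""
        rng = np.random.default_rng(seed)
        n_pixels, n_bands = data.shape
        counts = np.zeros(n_pixels, dtype=np.int64)
        for _ in range(n_skewers):
            skewer = rng.standard_normal(n_bands)   # random L-dimensional skewer
            proj = data @ skewer                    # dot product with every pixel
            counts[proj.argmin()] += 1              # pixels at the extreme positions
            counts[proj.argmax()] += 1
        return counts

    # Candidate endmembers are the pixels whose PPI count reaches the threshold t, e.g.
    # counts = ppi_counts(mnf_data); candidates = np.flatnonzero(counts >= t)
    # where mnf_data and t are the user's (hypothetical) reduced data and threshold.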

3. Blocks-of-Skewers (BOS) Based PPI Method

In the blocks-of-skewers (BOS) method, the dot product of a block of B skewers is calculated with a single data sample pixel vector. If a new skewer is a linear combination of the original block of B skewers, the dot product of this new skewer with the same data point is also a linear combination of the dot products of the block of skewers.

3.1. Cube-based block-of-skewers (BOS)

In the original PPI algorithm, the number of skewers k must be set very large in order to ensure that none of the desired endmembers is missed by the PPI algorithm. However, this also requires a huge number of dot products to be performed between each of the k skewers and each data sample vector in the image. As a result, the computational complexity is usually very intensive. Recently, [10] looked into this issue and developed the so-called blocks-of-skewers (BOS) method to mitigate this dilemma. In the BOS method, a block size B of skewers is chosen and the skewers used for the PPI are linear combinations of the skewers that form the block. By virtue of the BOS, the number of dot products is significantly reduced, which is the key to making hardware implementation possible. A block diagram of the BOS method is depicted in Fig. 1.



Fig. 1. A block diagram of the BOS method: hyperspectral imagery → dimensionality reduction using the MNF transform → determine block size B → compute PPI counts using BOS → threshold PPI counts → final endmember set.

The idea of the BOS method can be explained using an example with block size 3, illustrated in Fig. 2 for an L-dimensional image cube with 9 pixels.

Fig. 2. L-dimensional image cube.

Assume that there are three skewers specified by {skewer_i}_{i=1}^3. The BOS method uses {skewer_i}_{i=1}^3 to form a basic block of size 3 and generates all remaining skewers as linear combinations of these three skewers via the following equation

   a1·skewer1 + a2·skewer2 + a3·skewer3     (2)

with three real-valued combination coefficients a1, a2, and a3. If the coefficients a1, a2, and a3 are constrained to either +1 or −1, that is, ai ∈ {1, −1}, there are 8 linear combinations that can be generated as additional skewers for the PPI implementation. In this case, a total of 11 skewers can be used for the PPI. Among these 11 skewers, the three {skewer_i}_{i=1}^3 are independent skewers and the remaining 8 skewers generated by (2) are dependent skewers. Unfortunately, drawbacks similar to those found in the PPI also occur in the BOS. In particular, the BOS-PPI does not provide a criterion for choosing an appropriate block size B. In addition, the selection of ai ∈ {1, −1}, which corresponds to the corners of the cube, is not necessarily optimal. This issue is addressed by the new pyramid-based BOS-PPI described in Section 3.2. In other words, the BOS method also suffers from additional drawbacks that do not occur in the PPI algorithm (a short software sketch of the derived-projection bookkeeping is given after the following list).
1. It does not provide any clue for choosing the block size B; this must be done by trial and error.

2. The selection of the linear combination coefficients a_ji ∈ {−1, 1} is obviously not optimal.
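The computational saving comes from the fact that the 2^B dependent-skewer projections are simply sign combinations of the B base projections, so only B dot products per pixel involve the full L-band vectors. A minimal sketch of this bookkeeping, using randomly drawn base skewers and toy data for illustration only, is given below.

    import itertools
    import numpy as np

    def cube_bos_projections(data, base_skewers):
        """data: (N, L) pixels; base_skewers: (B, L) independent random skewers.
        Returns the projections of every pixel onto all 2^B skewers of the cube-based block,
        derived from the B base projections by additions/subtractions only."""
        base_proj = data @ base_skewers.T            # (N, B): the only full-length dot products
        signs = np.array(list(itertools.product((1, -1), repeat=base_skewers.shape[0])))
        return base_proj @ signs.T                   # (N, 2^B) derived projections

    # Example: B = 3 base skewers yield 8 derived skewers (the cube corners of Fig. 3(a))
    rng = np.random.default_rng(0)
    pixels = rng.standard_normal((100, 169))         # toy 169-band data
    derived = cube_bos_projections(pixels, rng.standard_normal((3, 169)))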

3.2. Pyramid-based block-of-skewers (BOS)

The pyramid-based BOS-PPI was originally proposed in [11] and can be summarized as follows. Using the virtual dimensionality (VD) to determine the number of dimensions to be retained in the Minimum Noise Fraction (MNF) transformed image, the pyramid-based BOS-PPI can be described by the following steps (a compact software sketch follows the list).
1. Apply the MNF transform to perform dimensionality reduction.
2. Determine the block size B to be used.
3. Generate a set of B skewers randomly, denoted by {skewer_1, skewer_2, …, skewer_B}, where for 1 ≤ j ≤ B, skewer_j is an L-dimensional column vector given by skewer_j = (skewer_j1, …, skewer_jL)^T, and set n = 1.
4. Find a set of block skewers given by Bskewer_j = Σ_{i=1}^B a_ji skewer_i with a_ji ∈ {−1, 1}. As a result, there are 2^B block skewers, denoted by Bskewer_1, Bskewer_2, …, Bskewer_{2^B}, where for each 1 ≤ j ≤ 2^B, Bskewer_j is also an L-dimensional column vector given by Bskewer_j = (Bskewer_j1, …, Bskewer_jL)^T.
5. For each data sample pixel vector r, calculate the dot product of r with each of the 2^B block skewers {Bskewer_1, Bskewer_2, …, Bskewer_{2^B}} by <Bskewer_j, r> = Σ_{i=1}^B a_ji <skewer_i, r>, where <skewer_i, r> was already calculated in step 4.
6. For all the sample pixel vectors, find N_PPI(r) = Σ_j I_{S_extrema(Bskewer_j)}(r) as defined by Eq. (1), denoted by {r_j^(n)}_{j=1}^k.
7. Let n ← n + 1. If n < 3, go to step 4. Otherwise, continue.
8. Find E^(n) = ∩_{m=1}^{n} {r_j^(m)}_{j=1}^k and E^(n−1) = ∩_{m=1}^{n−1} {r_j^(m)}_{j=1}^k.
9. If E^(n) ≠ E^(n−1), go to step 4. Otherwise, the algorithm is terminated; in this case, the sample pixel vectors in E^(n) are the desired endmembers.
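The sketch below renders these steps in software for B = 3. It is a simplified illustration that takes only the single minimum and maximum projection per block skewer as the extreme set and omits the PPI count threshold of Eq. (1), so it mirrors the structure of the iteration rather than reproducing the exact implementation of [11].

    import numpy as np

    # the five pyramid vertices of Fig. 3(b), used as coefficient vectors a_j
    PYRAMID = np.array([[0, 0, 1], [1, 1, -1], [1, -1, -1], [-1, 1, -1], [-1, -1, -1]])

    def pyramid_bos_ppi(data, max_rounds=100, seed=0):
        """data: (N, L) MNF-reduced pixel vectors. Returns candidate endmember indices."""
        rng = np.random.default_rng(seed)
        n_pixels, n_bands = data.shape
        E_prev, E = None, None
        for n in range(1, max_rounds + 1):
            skewers = rng.standard_normal((3, n_bands))      # step 3: B = 3 independent skewers
            base_proj = data @ skewers.T                     # base dot products, shape (N, 3)
            block_proj = base_proj @ PYRAMID.T               # step 5: 5 derived projections (N, 5)
            extrema = set(block_proj.argmin(axis=0)) | set(block_proj.argmax(axis=0))
            E_prev, E = E, (extrema if E is None else E & extrema)   # step 8: running intersection
            if n >= 3 and E == E_prev:                       # step 9: stop when the set stabilizes
                break
        return sorted(E)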

Furthermore, based on the experimental study conducted in [11], a small block size generally performs better than a large block size by reducing the number of linear combinations as well as the redundancy. Empirically, it was concluded that a block size of 3 is a good choice. A block size of 2 also performs well, but is less effective than B = 3 and is almost the same as the original PPI. Consequently, in our design we use a block size of B = 3, and the coefficients ai are taken to be either −1 or 1. In this case, there are 8 linear combinations, which form the eight vertices of a cube centered at the origin (0,0,0), as shown in Fig. 3(a). This is the case in which the "cube-based block of skewers" is implemented. By contrast, in the pyramid-based design the number of linear coefficient combinations for the block of skewers is reduced to



Fig. 3. Cube-based (a) and pyramid-based (b) blocks of skewers.

(0, 0, 1), (1, 1, −1), (1, −1, −1), (−1, 1, −1) and (−1, −1, −1), as shown pictorially in Fig. 3(b), where only 5 derived skewers are used compared with the 8 used in the cube-based design. One comment on the selection of (0, 0, 1), (1, 1, −1), (1, −1, −1), (−1, 1, −1) and (−1, −1, −1) to form the pyramid in Fig. 3(b) is noteworthy. The idea behind the pyramid design is derived from the fact that a cube can be decomposed into four pyramids, as shown in Fig. 4, referred to as the east, west, north and south pyramids, which form the basic blocks of a cube.


Fig. 4. Decomposition of a cube into four pyramids

As a result, any one of these four pyramids can be used for the pyramid design and produces similar results. In this paper, the south pyramid is selected for our pyramid design with no particular preference except its better visual interpretation. Since the center point (0,0,0) in Fig. 3(a) cannot be used as the top vertex of a pyramid, it is linearly translated to the point (0,0,1). However, it should be noted that such a linear translation does not have any impact on the pyramid design since the BOS is performed by linear combinations.



4. Experimental Results

The HYDICE image is shown in Fig. 5(a); it has a size of 64 × 64 pixel vectors with 15 panels in the scene, arranged in rows (p11, p12, p13), (p21, p22, p23), (p31, p32, p33), (p41, p42, p43) and (p51, p52, p53).

Fig. 5. (a) A HYDICE panel scene containing 15 panels; (b) ground truth map of the spatial locations of the 15 panels; (c) spectral signatures of p1, p2, p3, p4 and p5.

The image has 210 spectral bands with a spectral coverage from 0.4 µm to 2.5 µm. The low-signal/high-noise bands (bands 1-3 and 202-210) and the water vapor absorption bands (bands 101-112 and 137-153) were removed, so a total of 169 bands were used. The spatial resolution is 1.56 m and the spectral resolution is 10 nm. Within the scene in Fig. 5(a) there is a large grass field background, a forest on the left edge and a barely visible road running along the right edge of the scene. The 15 panels are located in the center of the grass field and are arranged in a 5 × 3 matrix, as shown in the ground truth map of Fig. 5(b). Each element in this matrix is a square panel denoted by pij, with rows indexed by i = 1, 2, …, 5 and columns indexed by j = 1, 2, 3. For each row i = 1, 2, …, 5, the three panels pi1, pi2, pi3 were made of the same material but have three different sizes. For each column j = 1, 2, 3, the five panels



p1j, p2j, p3j, p4j, p5j have the same size but were made of five different materials. It should be noted that the panels in rows 2 and 3 are made of the same material with different paints, as are the panels in rows 4 and 5; nevertheless, they were still considered as different materials. The sizes of the panels in the first, second and third columns are 3 m × 3 m, 2 m × 2 m and 1 m × 1 m, respectively. So, the 15 panels comprise five different materials and three different sizes. The ground truth map provides the precise spatial locations of these 15 panels. As shown in Fig. 5(b), black pixels are the panel center pixels, and the pixels in the white masks are either panel boundary pixels mixed with background pixels or background pixels close to the panels. The 1.56 m spatial resolution of the image scene implies that all of these panels are only one pixel wide, except p21, p31, p41 and p51, which are two pixels wide. Fig. 5(c) plots the five panel spectral signatures of Fig. 5(b), where the i-th panel signature, denoted by pi, was obtained by averaging the black panel center pixels in row i.

4.1. Cube-based BOS design

The cube-based BOS was performed on the real HYDICE image with 1250 repetitions to generate 10,000 linear combinations of 3 independent skewers. A total of 363 pixels were extracted, with the PPI count image and the color PPI count image shown in Fig. 6(b)-(c). Among the extracted pixels, 10 panel pixels were extracted in Fig. 6(c), representing the five distinct panel signatures {pi}_{i=1}^5.

Fig. 6. Results of cube-based BOS for the real HYDICE image (repetitions = 1250): (a) original image; (b) PPI count image; (c) color PPI count image.

4.2. Pyramid-based BOS Design

Similar experiments were performed with the pyramid-based BOS-PPI. In this case, the pyramid-based block was also repeated 1250 times to generate 5000 linearly dependent skewers. A total of 432 pixels were extracted by the pyramid-based BOS, with the PPI count image and the color PPI count image shown in Fig. 7(b)-(c). Among the extracted pixels, 10 panel pixels were extracted in Fig. 7(c), representing the five distinct panel signatures {pi}_{i=1}^5.

Fig. 7. Results of pyramid-based BOS for the real HYDICE image (repetitions = 1250): (a) original image; (b) PPI count image; (c) color PPI count image.

Comparing the results in Fig. 7 with those in Fig. 6, the pyramid-based design and the cube-based design performed very similarly, with the pyramid-based design using fewer skewers than the cube-based design.

5. FPGA Implementation of the Pyramid-Based BOS-PPI Algorithm

In this section, we further discuss the hardware implementation of our proposed pyramid-based BOS-PPI. We first focus on the part of the algorithm that represents the most significant fraction of the execution time. Obviously, the projections of the image data cube onto the independent skewers that form the BOS dominate the computational cost. We therefore target this particular part for hardware acceleration, since its operations are highly parallelizable. Table 2 provides the pseudo-code for this part of the BOS-PPI. Due to the limitation of the hardware resources, a single FPGA chip might not be able to accommodate the entire hyperspectral image scene in one shot. As a result, the image is processed block-wise: a partition of the hyperspectral image cube is fed into the hardware and processed in parallel, and the entire image is processed after several iterations. Finally, the algorithm is repeated until a stopping rule is met. A MINMAX module is used to store the pixel location if its projection exceeds the currently selected local extreme.

Table 2. Pseudo-code for parallel implementation of the BOS-PPI algorithm

    do
        for-parallel (allowed image partition for hardware capacity)
            for-parallel (each pixel "r" in image partition)
                for-parallel (each skewer "s")
                    Product[j,band] = P.E.(r[ith pixel][band], s[jth skewer][band]);
                    temp-summation[j,band] = temp-summation[j,band-1] + Product[j,band];
                    pass "temp-summation[j,band]" to the next P.E.
                endfor
            endfor
        endfor
        MINMAX;   // log the local extrema and memorize the spatial coordinates
        // change set of skewers according to the results
    while (meet stopping rule?)



Based on the pseudo-code provided in Table 2, the architecture of the proposed pyramid-based BOS-PPI is shown in Fig. 8. Three independent skewers are used to generate four linearly dependent skewers. The modules that implement the pyramid-based BOS-PPI are described as follows; the block diagrams of the individual designs are shown in Figs. 9, 10 and 11.

Module 1: Dot-product module - a group of Processing Elements (P.E.) performing dot products in parallel; the P.E. is the basic computation component.
Module 2: Pyramid-based block-of-skewers generator - computes the dot-product results for the block skewers from the independent skewers.
Module 3: MINMAX module - compares the incoming dot-product result with the current maximum and minimum values in memory; a bank of MINMAX modules is duplicated and used to speed up the output.


Fig. 8. Pyramid-based BOS-PPI Architecture

Table 1 tabulates the computational complexity (CC) of the original PPI, the cube-based BOS-PPI and the pyramid-based BOS-PPI, where K is the number of independent skewers, B is the size of the block of skewers, L is the number of spectral bands, and N is the total number of pixel vectors [12]. As we can see from the table, the CC of the cube-based BOS-PPI is 3/8 of that of the original PPI, because it only needs 3/8 of the independent skewers required by the original PPI to achieve the same number of dot products. Although it requires extra time to perform the linear combinations of the dot products resulting from the independent skewers, this addition-based computational time is negligible compared with the large number of multiplications. Comparing the pyramid-based and cube-based designs, the former needs only half of the additions required by the latter (a numerical illustration follows Table 1).

Table 1. Computational complexity of different PPI implementations

                      Original PPI     Cube-based BOS-PPI      Pyramid-based BOS-PPI
    Multiplication    K × L × N        (3/8) × K × L × N       (3/8) × K × L × N
    Addition          0                K × N                   (1/2) × K × N
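As a quick numerical illustration of these expressions (assuming, for concreteness, the HYDICE scene parameters of Section 4, N = 64 × 64 pixels and L = 169 bands, together with K = 10,000 skewers), the operation counts can be evaluated as follows.

    def ppi_operation_counts(K, L, N, B=3):
        """Operation counts of Table 1 (K skewers, L bands, N pixels, block size B)."""
        reduced_mult = (B / 2 ** B) * K * L * N      # only B of every 2^B skewers need full dot products
        return {
            "original PPI":          {"multiplications": K * L * N,    "additions": 0},
            "cube-based BOS-PPI":    {"multiplications": reduced_mult, "additions": K * N},
            "pyramid-based BOS-PPI": {"multiplications": reduced_mult, "additions": K * N / 2},
        }

    counts = ppi_operation_counts(K=10_000, L=169, N=64 * 64)
    # original PPI: ~6.9e9 multiplications; BOS-PPI variants: ~2.6e9 multiplications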




Fig. 9 shows the design of the dot-product module. The dot products of every incoming pixel vector with the three original independent skewers are computed in parallel and fed into the pyramid-based block-of-skewers generator. The P.E. (Processing Element) is the building block of the dot-product module. Based on its functionality, we name it either P.E.1 or P.E.2; the difference between these two blocks is that P.E.1 only performs a multiplication, while P.E.2 performs an accumulation after the multiplication. As a result, P.E.2 is essentially equivalent to a MAC (Multiply-ACcumulate) computation unit. On the whole, each P.E. is responsible for computing the multiplication of the original skewers with the incoming pixel vector in a particular band. It then accumulates the result from the previous P.E. and passes it on to the next P.E. until it reaches the MINMAX module. The data are transmitted serially on the data bus and de-serialized before any computation is performed. The same rule also applies to the MINMAX module.


Fig. 9. Dot-Product modules

As discussed in the previous sections, the BOS method generates new skewers using the dot products calculated from the independent random skewers. The pyramid-based BOS-PPI generator produces the bottom four corners of the pyramid by linear combinations of



the three independent skewers. The design can be accomplished as shown in Fig. 10, where each square box "F/AS" acts as a three-input full adder/subtractor that sums the projection results to form the linearly dependent skewers.

Fig. 10. Pyramid-based block-of-skewers generator: from the three projections P1, P2 and P3, the F/AS units form P1 + P2 − P3, P1 − P2 − P3, −P1 + P2 − P3 and −P1 − P2 − P3.

Fig. 11 shows the architecture of a MINMAX module, which finds the minimum and the maximum of the projected scores of the pixels onto the original and pyramid-generated skewers. It updates two registers, MIN and MAX, by comparing the current result "C" with the values stored in the MIN and MAX registers. The MIN comparator compares "C" against the value stored in "MIN"; if C is smaller, the MIN register is updated with C.

30 kW peak intensity. The laser pulse width measurement is shown in Figure 2a and the spectrum



measurement is shown in Figure 2b. The relatively short laser cavity (~25 cm) also enables a rather narrow spectral linewidth. The laser has about a 40 dB signal-to-noise ratio (SNR), as shown in Figure 2b.


Fig. 2. (a) Pulse width measurement of the Nd:YAG microchip laser; (b) Spectrum of the Nd:YAG microchip laser output.

Besides its numerous advantages over active Q-switching, the passive Q-switching technique has an intrinsic drawback: passively Q-switched pulses usually exhibit larger jitter in the peak power and in the pulse repetition rate. Under CW pumping, the current Nd:YAG microchip laser exhibited typical passively Q-switched behavior, with a randomly changing repetition rate and more than 5% energy jitter. In order to reduce the pulse-to-pulse energy jitter and to stabilize the pulse repetition rate, a pump modulation technique was developed for the Nd:YAG microchip laser. (Details of the pump modulation technique will be discussed in a separate report.) The characteristics of the pump-modulated passively Q-switched Nd:YAG microchip laser are listed in Table 1, and a brief energy log of the output laser pulses is shown in Figure 3. The pump-modulated microchip laser produced linearly polarized 1064 nm laser pulses with 3 ns pulse width, 3 kHz repetition rate and 100 µJ pulse energy, with an energy jitter of less than 1%.

Table 1: Summary of the characteristics of the pump-modulated microchip laser.

    Pump modulation frequency             3 kHz
    Output laser pulse repetition rate    3 kHz
    Average pulse energy                  104 µJ
    Minimum pulse energy                  102 µJ
    Maximum pulse energy                  106 µJ
    Pulse energy standard deviation       2.7 µJ
    Pulse-to-pulse energy jitter          0.6%



Fig. 3. Energy log of the pump-modulated passively Q-switched microchip laser: laser pulse energy (µJ) versus time (s).

3. Large Mode Area Yb Doped Fiber Amplifier

Ytterbium-doped fibers are favorable candidates for lasers/amplifiers in the near-infrared regime due to the large absorption cross-section of Yb ions near 980 nm and the availability of low-cost commercial 980 nm laser diodes. For a Yb doped fiber amplifier, the maximum output pulse energy depends on the saturation energy, the bulk damage threshold, the surface damage threshold, the amount of amplified spontaneous emission, and the influence of Brillouin and Raman scattering. All of these limitations can be relieved by increasing the mode field area and the Yb doping concentration of the fiber. However, the increased mode field area also brings an adverse effect: more spatial modes are supported in the fiber for the same core numerical aperture (NA). Decreasing the core numerical aperture can reduce the number of supported modes, but the decrease of the core NA is limited by the refractive index, which depends on the rare-earth-ion doping concentration. Photonic crystal fibers can yield a lower core NA by using an appropriate photonic crystal structure. Mode filtering can also be used to force large mode area fibers into single-spatial-mode operation. There are two common mode filtering techniques: coiling the fiber so as to introduce mode-dependent bending loss, and spatial mode filtering using fiber tapers. A 4 meter Yb doped double-clad (DC) polarization-maintaining (PM) LMA fiber was used to boost the 1064 nm laser output from the pump-modulated passively Q-switched Nd:YAG microchip laser. The Yb fiber has a core diameter of 25 µm (compared with ~7 µm for single-mode Yb doped fibers), a core NA of 0.065 and an inner cladding diameter of 250 µm. With this larger core diameter (even with the smaller NA), the fiber supports more than one spatial mode if no external mode filtering is applied. In this study, the fiber was coiled on a 100 mm mandrel; the higher-order spatial modes encounter greater bending loss and are not amplified by the fiber. The spatial distribution of the fiber amplifier output is shown in Fig. 4a. A high-brightness near-infrared laser beam is crucial to pump the subsequent optical parametric process: a laser with higher brightness means the laser energy is more concentrated, which results in a higher efficiency in the nonlinear process



since the gain depends on the square of the pump laser intensity. The microchip laser output was coupled into the fiber amplifier by a 15 mm focal-length lens. The fiber amplifier was pumped in the counter-propagating configuration to reduce undesired nonlinear effects. A Faraday isolator was inserted between the microchip laser and the fiber amplifier to prevent back-emission from the fiber amplifier from influencing the microchip laser. The amplified linearly polarized output from the Yb doped LMA fiber amplifier has a 3 ns pulse width, as shown in Figure 4b.



Fig. 4. (a) Spatial distribution of the Yb doped LMA fiber amplifier output; (b) Pulse width measurement of the amplified pulses from the Yb doped LMA fiber amplifier.
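The few-moded character of this core can be checked with the standard step-index V-number estimate. The short sketch below is only an order-of-magnitude check based on the stated core diameter and NA, not a modal analysis of the actual LMA fiber.

    import math

    def v_number(core_diameter_um, na, wavelength_um):
        """Normalized frequency V = (pi * d / lambda) * NA of a step-index fiber."""
        return math.pi * core_diameter_um * na / wavelength_um

    V = v_number(25.0, 0.065, 1.064)     # ~4.8 for the 25 um, NA 0.065 core at 1064 nm
    is_single_mode = V < 2.405           # False: without mode filtering the fiber is few-moded
    approx_mode_count = V ** 2 / 2       # rough step-index estimate, roughly a dozen modes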

The amplification performance of the Yb doped LMA fiber amplifier seeded by the pump-modulated microchip laser is shown in Fig. 5. The Yb doped LMA fiber amplifier was able to amplify the 3 kHz, 3 ns, 100 µJ seed laser pulses to as much as 570 µJ. The amplified laser pulse energy grew linearly with the amplifier pump power once the pump power exceeded threshold. The fiber amplifier showed little sign of saturation, while the energy jitter of the amplified pulses became significant in the high-pump region. This increased energy jitter is attributed to the competition between the pulse amplification and various nonlinear effects, such as stimulated Brillouin scattering (SBS). In the high-pump region, the amplified pulses had a peak power of ~200 kW and a peak intensity of more than 30 TW/cm2. The SBS threshold is strongly dependent on the linewidth of the laser: a narrower laser linewidth means a considerably lower SBS threshold. Fiber bulk damage was observed when the amplified laser pulse energy reached 600 µJ, which corresponds to a bulk damage fluence of 100 J/cm2 and a damage intensity of ~400 TW/cm2 for the 3 ns pulse width and the 25 µm fiber core diameter. Both ends of the fiber amplifier were protected with coreless silica end-caps, so surface damage did not occur. The inset in Figure 5 shows the damaged section of the fiber (the whiter portion in the middle). Later examination showed that the damaged fiber section was in



fact pulverized and was encapsulated in the outer cladding polymer sheath. In conclusion, the linearly polarized 1064 nm, 3 ns laser pulses were amplified to the maximum allowable peak intensity of the 25/250 LMA PM Yb doped fiber.

Fig. 5. Amplification performance of the Yb doped LMA fiber amplifier with the 3 kHz, 3 ns, 1064 nm Nd:YAG microchip seed laser: maximum, minimum and average amplified pulse energy (µJ) versus amplifier pump power (W).
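For reference, the headline numbers above can be reproduced with the simple estimates below. They assume a rectangular temporal pulse and use the geometric core area rather than the actual mode-field area, so the values are order-of-magnitude only.

    import math

    pulse_width_s = 3e-9                 # 3 ns pulses
    core_diameter_cm = 25e-4             # 25 um core of the LMA fiber

    peak_power_W = 570e-6 / pulse_width_s                     # ~1.9e5 W (~200 kW) at 570 uJ
    core_area_cm2 = math.pi * (core_diameter_cm / 2) ** 2     # ~4.9e-6 cm^2
    damage_fluence_J_cm2 = 600e-6 / core_area_cm2             # ~1.2e2 J/cm^2 at the 600 uJ damage point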

4. Eye-safe Optical Parametric Oscillator

Quasi-phase matching (QPM) in periodically poled nonlinear optical materials has several significant advantages over birefringent phase matching for efficient optical parametric oscillators: a higher effective nonlinear coefficient and collinear propagation of the pump and the generated signal/idler waves. Among the various QPM materials, periodically poled LiNbO3 (PPLN) has attracted special attention due to its mature fabrication, ready availability, good transparency from 0.35 µm to more than 4 µm, high nonlinear coefficient, and good temperature tunability. A schematic of the PPLN-based OPO cavity is shown in Fig. 6. The linearly polarized laser output from the compact Nd:YAG microchip MOPA was collimated with an achromatic lens of 15 mm focal length. The polarization of the near-infrared pump laser was carefully oriented to be perpendicular to the PPLN slab. A 50 mm long PPLN slab (from Crystal Technology) was used as the OPO crystal. Both faces of the PPLN crystal normal to the beam propagation were polished flat and parallel. For compactness, a linear OPO cavity was constructed using two flat OPO mirrors, which were specially coated to make the OPO cavity oscillate in the eye-safe region. The input coupler of this singly-resonating OPO was anti-reflection coated for the near-infrared pump and the mid-infrared idler, and 99.9% reflecting at 1.4-1.7 µm. The OPO output coupler was anti-reflection coated for the pump and idler, and 20% reflecting at 1.4-1.7 µm. A 140 mm focal-length lens was chosen to focus the collimated near-infrared pump into the PPLN crystal.



Fig. 6. Schematic diagram of the singly-resonating OPO: the collimated 1 µm MOPA output is focused by a 140 mm focal-length lens into the 50 mm PPLN crystal placed between the OPO input coupler and the OPO output coupler.

The microchip-MOPA-pumped PPLN SROPO produced eye-safe laser emission with a 3 ns pulse width and as much as 140 µJ pulse energy, as shown in Fig. 7. The generated eye-safe signal energy increases linearly with the pump pulse energy, with ~25% conversion efficiency from the near-infrared pump laser to the eye-safe signal beam. Further optimization of the OPO cavity, such as using a confocal cavity with optimized spatial mode overlap between the pump and the signal beam, would increase the conversion efficiency into the eye-safe emission. The PPLN slab used has 8 individual grating periods ranging from 28.5 to 29.9 µm. By choosing a different grating period combined with appropriate temperature tuning, the eye-safe OPO can easily be tuned from 1.4 to 1.7 µm.

Fig. 7. Eye-safe laser emission from the Nd:YAG microchip MOPA pumped singly-resonating OPO: 1.5 µm OPO signal pulse energy (µJ) versus 1 µm pump pulse energy (µJ); linear fit y = 0.2575x − 13.746.
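The quoted performance can be cross-checked with a couple of elementary relations. The sketch below uses only photon-energy conservation and the stated pulse parameters; it is an illustrative back-of-the-envelope calculation rather than part of the reported analysis.

    def idler_wavelength_um(pump_um=1.064, signal_um=1.5):
        """Energy conservation in the OPO: 1/lambda_pump = 1/lambda_signal + 1/lambda_idler."""
        return 1.0 / (1.0 / pump_um - 1.0 / signal_um)

    idler_um = idler_wavelength_um(signal_um=1.5)        # ~3.7 um mid-infrared idler
    signal_peak_power_kW = 140e-6 / 3e-9 / 1e3           # ~47 kW for a 140 uJ, 3 ns signal pulse
    conversion_efficiency = 140e-6 / 570e-6              # ~25% of the maximum 1064 nm pump energy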



5. Conclusion

A compact high-peak-power eye-safe laser based on a Nd:YAG microchip MOPA pumped OPO was demonstrated. The Nd:YAG microchip MOPA consisted of a diode-pumped passively Q-switched Nd:YAG microchip laser and a high-power Yb doped LMA PM fiber amplifier with a 25 µm core diameter. The compact MOPA generated single-spatial-mode 1064 nm pulses with a peak power of as much as 200 kW. By pumping a 50 mm PPLN crystal with these high-power 1064 nm pulses, the eye-safe OPO delivered 3 kHz pulses with 3 ns pulse width and as much as 140 µJ pulse energy, corresponding to a peak power of ~47 kW. This compact eye-safe laser can serve as a very good light source for various portable ranging and remote sensing applications.

6. Acknowledgments

The authors are indebted to Dale Ritcher from ITT for his assistance in building the Nd:YAG microchip laser. The work was supported under USAF contract FA8650-04-C1714.




International Journal of High Speed Electronics and Systems
© World Scientific Publishing Company

ISSR 2006

AIR SENSING AND MONITORING SESSION


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2008) 493–504
© World Scientific Publishing Company

WIDE AREA SPECTROMETRIC BIOAEROSOL MONITORING IN CANADA: FROM SINBAHD TO BIOSENSE

SIMARD JEAN-ROBERT, BUTEAU SYLVIE, LAHAIE PIERRE, MATHIEU PIERRE, ROY GILLES, LAROCHELLE VINCENT, DERY BERNARD
Defence Research and Development Canada − Valcartier, 2459, boul. Pie-XI North, Val-Bélair, Qc, Canada G3J 1X5
Jean-Robert.Simard@drdc-rddc.gc.ca

MCFEE JOHN, HO JIM
Defence Research and Development Canada − Suffield, Box 4000, Medicine Hat, AB, Canada T1A 8K6

Threats associated with bioaerosol weapons have been present for several decades. However, with the recent political developments that have changed the image and dynamics of the international order and security, the visibility and importance of these bioaerosol threats have increased considerably. Over the last few years, Defence Research and Development Canada has investigated the spectrometric LIDAR-based standoff bioaerosol detection technique to address this menace. This technique has the advantages of rapidly monitoring the atmosphere over wide areas without physical intrusion and of reporting an approaching threat before it reaches sensitive sites. However, it has the disadvantage that the quality of the information it provides degrades as a function of range and bioaerosol concentration. In order to determine the importance of these disadvantages, Canada initiated in 1999 the SINBAHD (Standoff Integrated Bioaerosol Active Hyperspectral Detection) project, investigating the standoff detection and characterization of threatening biological clouds by Laser-Induced Fluorescence (LIF) and intensified range-gated spectrometric detection techniques. This article reports an overview of the lessons learned with this program. Finally, the BioSense project, a Technology Demonstration Program aiming at the next generation of wide-area standoff bioaerosol sensing, mapping, tracking and classifying systems, is introduced.

Keywords: bioaerosol; fluorescence; UV; laser; standoff; spectral; detection; classification.

1. Introduction

With the decline of the traditional worldwide balance of power between the major alliances since the early 1990s, and the concurrent rise of asymmetric threats, biological warfare has been identified as, and remains, one of the threats for which adequate defenses are the most challenging. Up to the end of the 1990s, the major efforts to address this threat were based on point biological detectors. The main drawback of this technological approach is that a point detector must be within a cloud of bioagents in order to detect it. This may not be a problem if the asset being monitored is a key narrow space such as the main ventilation room of a building. However, an approach based on (an array of) point detectors has significant limitations when a wide-open area needs to be



monitored for bioaerosol threats. These limitations result from the large number of detectors required to efficiently cover a multi-square-kilometer zone and from their (presently excessive) unit price*. To improve the monitoring of bioaerosols over wide areas, Defence Research and Development Canada (DRDC) has been investigating standoff biodetectors based on spectrometric inelastic Light Detection and Ranging (LIDAR) technologies since 1999 with the SINBAHD (Standoff Integrated Bioaerosol Active Hyperspectral Detection) project1,2. This project, after integrating a UV excimer laser, a LIDAR transmitter and a range-gated spectrometric intensified CCD camera on a mobile laboratory platform, aimed at investigating the capability of such a technological approach by participating in several national and international trials where bioaerosol simulants and atmospheric obscurants were released in the open air. From the analyses of the data acquired at these trials, several results support important properties associated with the phenomenology characterized by this technology for wide-area bioaerosol monitoring. Strengthened by these lessons learned, DRDC initiated in Spring 2005 a 5-year demonstration project, BioSense, that combines the SINBAHD technology with a geo-referenced Near-InfraRed (NIR) LIDAR cloud mapper. This paper first reviews important fundamentals associated with aerosol LIDAR technology. Then, experimental results obtained with the SINBAHD instrument are presented which have important impacts on the classification specificity, the detection sensitivity, the effects of obscurants/interferants, and the robustness of bioaerosol wide-area monitoring with spectrometric inelastic LIDAR. Finally, the BioSense project is introduced.

2. Inelastic LIDAR Modeling

At the fundamental level, the optical signal collected by all LIDARs can be defined by a differential equation relating the spectral element of light collected and sent to a detector, as a function of time, to the element of volume from which this light originated. Based on Measures3, this equation can be written as

   dPλ(λ0, r, t) = Jλ(λ0, r, t) pλ(r) dV ,     (1)

where dPλ is the spectral element of light power detected at a time t resulting from elastically or inelastically scattered laser light with excitation wavelength λ0† at a volume element dV located at position r, Jλ(λ0, r, t) is the spectral radiance induced by the incident laser

* Commercially available point bioaerosol detectors having sufficient low false alarm rates for such scenarios presently cost well over $100,000 each, and their price reduction to an amount compatible with efficient wide area monitoring is not foreseen in the near future. † Here, the laser light is considered to be monochromatic and centered at wavelength λ0. This definition could be generalized for multi-wavelength or white lasers. However, the increased level of complexity generated by this additional step of generalization is not necessary for the subject discussed in this paper.




irradiance at that volume dV responsible for the detected spectral element of light‡, and pλ(r), the geometric factor, is the fraction of this induced spectral radiance that reaches the LIDAR detector. Solving Eq. (1) in detail for LIDAR systems lies outside the scope of this paper. However, three issues concerning this equation deserve additional comments. First, the induced spectral radiance results from the intensity of the laser irradiation present in the volume element and from the properties of the matter interacting with it. In LIDAR physics, the quantification of the properties of the irradiated material is one of the two ultimate goals in wide-area bioaerosol monitoring (the second being the determination of the spatial distribution of the concentration of that material, or cloud mapping). The material properties are characterized by the volume backscattering coefficient§, which expresses the ratio of the induced spectral radiance to the laser irradiance incident on the volume element. In the case of bioaerosol autofluorescence detection, an important family of chemical compounds interacting efficiently with UV/VIS incident radiation are the aromatic residues that are part of amino acids like tryptophan, tyrosine and phenylalanine4. Another important compound producing induced fluorescence is reduced nicotinamide adenine dinucleotide (NADH). This molecule, because of its key role in the cell energy production cycle and its excitation band (340-360 nm), which includes a technologically efficient lasing wavelength (the 355 nm of the frequency-tripled YAG laser), has been targeted by the Canadian SINBAHD and BioSense projects. Finally, the interacting molecules may be preferentially located in the cell. As an example, NADH, because of its role in energy production, will accumulate near the mitochondria (the cell's power source) in vegetative cells, or near the cell membrane in bacteria5. Therefore, chemical links with the matter surrounding the center of fluorescence will affect the emission spectral properties, and the optical trajectories followed by the detected photons through this surrounding material may also influence the statistics of the detected spectra. The second characteristic of Eq. (1) that should be emphasized is the geometric factor pλ(r). This variable takes into account not only the fraction of the induced photons that will reach the detector, obtained by comparing the solid angle subtended by the aperture stop of the LIDAR transmitter with the full 4π steradians surrounding the center of fluorescence**, but also the probability that fluorescent photons are absorbed along the optical path between the fluorescent centre and the LIDAR detector. The 1/R² dependency of the signal detected with a LIDAR is derived from the solid angle analysis. On the other hand, the



A time delay exists between the moment where the spectral radiance is generated within the volume element dV and the moment it is detected by the detector. This delay corresponds to the time of flight between the volume element and the detector and must be inserted in the time function relating Jλ to dPλ. § In the case of monochromatic irradiation, this (inelastic) coefficient is a one-dimension spectral array. ** The fluorescence probability distribution as a function of the direction of emission is likely not uniform over the 4π steradian surrounding a (molecular) center of fluorescence and will also include a polarization structure. However, as it will be stated later in this paper, experimental measurements, at least up to now, do not result from a single centre of fluorescence but from the average obtained over a large number of these centers. Under these conditions, this structure is greatly reduced due to statistical averaging and a uniform 4π probability distribution has provided a satisfactory experimental correlation between the model and the experiment.



probability that a fluorescent photon is absorbed along the optical path leads to the definition of the transmission of the atmosphere, of the LIDAR optical system and, if the attenuation factor is significant, of the material that the photon has to traverse before exiting the aerosol. Finally, the signal obtained experimentally, aside from parasitic contributions originating from the electronics or from natural radiance, generally results from the integration of Eq. (1) over a large number of aerosol particles, by spatial integration (limited by the LIDAR range resolution and the laser diameter defining the sampled volume) or by time integration (binning multiple laser pulses). Even in the case of a bioaerosol point detector based on a fluorescence aerodynamic particle sizer, Eq. (1) is still valid; in the latter case, the detected signal may also combine inhomogeneous transmission effects from the different optical paths associated with the large numerical aperture of such devices. In all these cases, the extent of the spectral information gathered by the integration of Eq. (1) will depend on the spectral mean and, more importantly, the spectral covariance resulting from this statistical ensemble. We make the usual LIDAR approximations: a narrow diameter of the sampled volume in comparison with its range, an averaged concentration Ni of a given aerosol species of averaged spectral cross section σλ,i within the sampled volume, and a range interval ∆R of the sampled volume derived from the laser pulse length, the aerosol autofluorescence decay time and the time resolution of the detector††. Based on these assumptions, the spectral signal Eλ,i detected by the fluorescence LIDAR system and originating from the sampled volume can be schematically interpreted as

   Eλ,i = fλ(electronics, optics, atmosphere, R) × { (Ni σλ,i) ∆R } ,     (2)

where fλ is the function combining the effects of the electronics and the optics of the detection hardware, the transmission of the atmosphere and the range to the probed volume. A detailed description of the model represented by Eq. (2), and of how it is derived, can be found in the work reported by Measures3 and Simard et al.6, among many others. Based on Eq. (2), the detected signal, and likewise the sensitivity of LIDAR-based wide-area standoff biodetectors, depends not only on the aerosol concentration but also on the length (depth) of the sampled volume. This is usually referred to as the CL product. With such characteristics, a LIDAR will present the same sensitivity for a 10-m-deep cloud with an average concentration of 100 particles per liter (ppl) as for a 100-m-deep cloud with an average concentration of 10 ppl. The potential to classify bioaerosols from a standoff position using spectrometric laser-induced fluorescence (LIF) LIDAR resides in the statistical properties (spectral mean and covariance) of σλ,i in Eq. (2). Several multivariate methods can be used to

†† In the case of a LIDAR using an ICCD (as does SINBAHD), the time resolution is dictated by the gate duration of the intensifier, ∆tg. When the laser pulse duration and the fluorescence decay time are negligible in comparison with ∆tg, this gate duration is directly interpreted as a range gate defining the range resolution as c∆tg/2. For maximum sensitivity, it is advantageous to match the gate duration to the range depth of the probed cloud.


The potential to classify bioaerosols from a standoff position using spectrometric laser-induced fluorescence (LIF) LIDAR resides in the statistical properties (spectral mean and covariance) of σλ,i in Eq. (2). Several multivariate methods can be used to exploit this spectral information. A useful representation of the objectives pursued with these multivariate methods is shown schematically in Fig.1 (left graph). This figure represents a multidimensional space where each orthogonal axis reports the amplitude of a spectrally resolved element of the collected fluorescence. In this representation, a collected spectrum Eλ is a vector. The objective of the multivariate analysis is to find a linear combination Êλ of a set of normalized classification basis vectors sλ,i (collectively referred to as a basis set or design set) that can be associated with known sources of inelastic scattering. This combination can be expressed as

Êλ = Eb1 sλ,b1 + Eb2 sλ,b2 + ⋯ + EN2 sλ,N2 + EH2O sλ,H2O + ⋯ ,   (3)

where the indexes identify the different types of bioaerosols and all other inelastic scatterers, such as Raman signals originating from atmospheric molecules. One objective of the multivariate analysis is to optimize the amplitudes of these individual bases in order to minimize the vector difference γ between Eλ and Êλ. This minimized difference is also commonly represented by the spectral fit quality parameter, the 'chi square' |γ|². Each term on the right side of Eq. (3) is a representation of Eq. (2), where each basis sλ,i corresponds to the normalized product of σλ,i and the spectral transmission of the optics and the atmosphere. The coefficients of the bases in Eq. (3) combine the scaling parameters in Eq. (2) associated with the electronics, the optics, the atmosphere, the range, the range interval, the average quantum yield of the fluorescing aerosol species and, most importantly, the average concentration of this aerosol. Directly solving for the linear coefficients in Eq. (3) gives a measure of the aerosol concentration involved, once the electronic, optical and atmospheric effects are corrected for. An elegant way to reduce the difficulty of this task is to use the linear coefficient associated with inelastic Raman scattering from nitrogen (which has a well-known cross section), whose return is collected simultaneously with the fluorescence. Based on the detailed LIDAR equation, it is possible to show that the ratio between a linear coefficient associated with a bioaerosol and that associated with a Raman signal such as the one originating from nitrogen, once corrected for the spectral structure of the bioaerosol basis relative to the Raman wavelength, is directly proportional to the ratio of the respective products of their scattering cross sections multiplied by their concentrations. A detailed description of this procedure can be found in Simard et al.1 The capability of this spectral methodology to detect and classify fluorescing aerosols depends in large part on how well the halos of uncertainty‡‡, characteristics resulting from the mean and covariance of the individual aerosol species and their capacities to produce fluorescence, can be separated in the multidimensional spectral space. In the following section, important lessons learned regarding these issues with the SINBAHD platform over the last five years are reported.

‡‡ The halo of uncertainty resulting from the linear combination of the covariances of the individual inelastic scattering contributions in Eq. (3) is shown in Fig.1 as a grey ellipse centered at the tip of Êλ.


Fig.1 Schematic representation of the multidimensional spectral analysis space (left plot). In this space, a collected laser-induced spectrum Eλ is a vector. The multivariate analysis performed with the SINBAHD system finds the optimum linear combination Êλ of normalized spectral bases (spectral reference vectors) sλ,i that minimizes the vector difference γ between Eλ and Êλ. The right plot shows a series of normalized spectral bases from the pollens of different trees and from BG and EH, measured with the SINBAHD system (355 nm excitation) at ranges varying from 100 to 500 m.

3. Inelastic Spectrometric LIDAR Validation
In order to exploit the spectral properties of fluorescing aerosols, several properties concerning the mean spectral vector (hereafter called simply the spectral mean) and the spectral covariance of fluorescing aerosols were investigated using a spectrometric LIF LIDAR integrated on a mobile platform. This system, developed through the SINBAHD project, is based on a XeF/XeCl (351/308 nm) excimer laser firing pulses at 125 Hz, a 12-inch collecting transmitter, a spectrometer and an ICCD (a detailed description of the platform can be found in Simard et al.1,2). It has been deployed at different remote sites, including DRDC Suffield and Dugway Proving Ground (DPG), where it characterized several spectral properties of biological simulants and obscurants during open-air releases, such as their spectral specificity/robustness and their detection/classification sensitivity. All results presented were produced, unless stated otherwise, by binning the spectral fluorescence induced with 1000 pulses, each pulse having an energy between 120 and 200 mJ. The electronic and background radiance contributions were sampled between fired pulses, integrated and directly subtracted from the induced fluorescence spectra. Spectral signatures were derived from these corrected acquisitions, or from the combination of several corrected acquisitions, during the release of single-species fluorescing aerosol clouds. Basis vectors for the type of fluorescing aerosol released were constructed by normalizing the spectral integral of the corrected spectrum§§. Then, these bases were §§ The spectral design set or bases shown here and used to produce the presented results were not corrected for the spectral transmission of the SINBAHD instrument or of the atmosphere. The use of these uncorrected bases has no impact on the presented data since they were obtained with the SINBAHD instrument itself and minimal atmospheric effects were observed for most ranges of the measurements reported here.


introduced in Eq. (3), and the amplitudes Ei associated with these bases were derived in subsequently collected spectra with a multivariate analysis based on a singular value decomposition algorithm7. Simultaneously, the fit accuracy parameter |γ|² was also obtained from this algorithm for each processed spectrum.
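As an illustration of this processing step, the following sketch (Python/NumPy; the basis spectra and the measured spectrum are synthetic placeholders, and `numpy.linalg.lstsq`, which relies on an SVD, stands in for the actual SINBAHD algorithm) solves Eq. (3) for the basis amplitudes and evaluates the residual |γ|² used as the fit quality parameter.

```python
import numpy as np

# Synthetic example of the multivariate fit of Eq. (3): solve E ≈ S @ a for the
# amplitudes a of the normalized spectral bases, then compute the residual |gamma|^2.
rng = np.random.default_rng(0)
wavelengths = np.linspace(300.0, 600.0, 128)           # nm, illustrative grid

def gaussian_basis(center, width):
    s = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return s / s.sum()                                  # normalized spectral basis

# Columns of S are the classification bases (e.g. BG, EH, N2 Raman) -- placeholders.
S = np.column_stack([gaussian_basis(420.0, 30.0),
                     gaussian_basis(460.0, 40.0),
                     gaussian_basis(387.0, 2.0)])

true_amplitudes = np.array([500.0, 0.0, 1200.0])
E = S @ true_amplitudes + rng.normal(scale=0.5, size=wavelengths.size)  # measured spectrum

# Least-squares solution (lstsq uses an SVD internally).
a_hat, _, _, _ = np.linalg.lstsq(S, E, rcond=None)
gamma = E - S @ a_hat
chi_square = float(gamma @ gamma)

print("fitted amplitudes:", a_hat)
print("fit quality |gamma|^2:", chi_square)
```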

Fig.2 Plots of the spectral bases of BG (left plot) and EH (right plot) measured at ranges varying between 105 meters and 2.5 km, with 355 nm excitation, and released as a dry powder (BG only) or in wet solution at DRDC Suffield and DPG between 2001 and 2005. These plots, where the variability is mostly the result of the weak signal measured, show the level of robustness observed over the years with these two types of simulant. The numbers in parentheses indicate the number of binned acquisitions used to produce the resulting bases.

3.1. Spectral specificity of fluorescing aerosols
Through multiple trials performed at DRDC and DPG, the spectral signatures of twenty naturally occurring fluorescing aerosols, six simulants of bioaerosol agents and multiple obscurants were characterized. It was observed that more than 90% of these open-air-released fluorescing aerosols showed clear differences in their spectral means. Figure 1 (right plot) shows examples of these bases, acquired at night, for five tree pollens (maple, aspen, oak, elm and poplar) and two bioaerosol simulants (Bacillus globigii or BG, and Erwinia herbicola or EH). Spectral signatures of the bioaerosol simulants were obtained at DRDC Suffield during May and September 2001 (see Figure 2 and Section 3.2). In August 2004, the signatures of natural bioaerosols, obtained commercially from Greer Laboratories, were produced with single acquisitions during the release of 3 g of material each in a 20-m-long open chamber located at a range of 500 m from the sensor. Note that the spectral variances associated with these results are not much wider than the thickness of the graphical traces shown, and several of the spectral features observed are specific to the fluorescence collected from the given bioaerosols. Furthermore, measurements made with the same natural materials stored in a dry environment were repeated in September 2005 and showed red shifts from zero up to a few nm in comparison with the results of the 2004 campaign8.


3.2. Spectral robustness of fluorescing bioaerosols
From these results, several types of fluorescing aerosols show classifiable spectral signatures. However, in order to use this library of aerosol reference spectra as a viable basis for classification of aerosols in the atmosphere, the spectral information must be robust. This issue may require an extensive investigation considering the large number of natural and artificial fluorescing aerosols that may be present in the atmosphere. However, we have already obtained several results that show good stability of the spectral mean as a function of different parameters. First, we have observed little change of the spectral mean over several successive acquisitions made from the same cloud of fluorescing aerosols. Second, as mentioned in the previous section and detailed by Buteau8, only slight changes have been observed in the spectral means of the twenty natural bioaerosols tested after a storage period of more than one year. Furthermore, spectra of BG and EH obtained during open-air night releases of a dry powder (BG) or an aqueous solution (BG and EH) made at DPG and DRDC Suffield over a period of four years show similar mean spectral vectors (see Fig.2). Finally, these results show little dependence on the origin of the material (BG) or on the slight processing variations that are expected when biological material is fabricated by a different technical team (EH). It is also interesting to observe in Fig.2 that the spectral transmission of the atmosphere over the distances at which these acquisitions were made had little impact on these spectral means.

Fig.3 Detection and spectral classification made by SINBAHD (308 nm excitation) during a 15-minute open-air release of BG at a range of 6.5 km during the Technical Readiness Evaluation that was held at Dugway Proving Ground in 2002. The left plot shows the amplitude of the same detected fluorescence using first the spectral signature of BG (green trace) and secondly with the spectral signature of EH (orange trace). The two traces show equivalent amplitudes. However, by inspecting the fitting accuracy of the spectral signature (right plot), the fluorescence is quickly recognized as being associated with BG instead of EH. In fact, when the fluorescence signal is about six times the noise floor before cloud arrival (left graph), the fitting accuracy (simultaneously computed) has already increased by a factor of four with respect to the standard deviation before cloud arrival.


3.3. Classification sensitivity of fluorescing aerosols
Figures 1 (right plot) and 2, together with Sections 3.1 and 3.2, show graphically the spectral structure of representative spectral bases of interest and illustrate the level of spectral robustness that can be expected from this source of information. A successful exploitation of the spectral properties of fluorescing aerosols requires a minimum level of signal to obtain a positive spectral classification. The difference in spectral means shown in Fig.1 (right plot) between natural bioaerosols and simulants may, on the surface, appear limited. Nevertheless, evidence of efficient spectral classification has been observed on several occasions throughout our research to date. Figure 3 shows an example of experimental results where spectral classification between BG and EH would have been successfully achieved with a modest signal-to-noise ratio (S/N). This result was obtained during the Technical Readiness Evaluation 02-1 held at DPG during summer 2002. On that occasion, BG (dry powder) was released in the open air at a range of 6.5 km at night for a period of about fifteen minutes. Fluorescence spectra were collected with acquisitions resulting from the binning of 1000 laser pulses, with an intensified range-gate of 50 meters that completely included the released aerosol cloud. Then, for each acquisition, the multivariate analysis was applied first using the BG basis and then repeated independently with the EH basis. Figure 3 (left plot) shows the amplitude in CCD counts derived by the multivariate analysis for each selected basis. The plot of these amplitudes clearly shows the arrival of the aerosol cloud in the probed volume, but it also shows that the detected amplitude is largely independent of the chosen basis. On the other hand, the quantity derived from the spectral fitting parameters (see γ in Fig.1, left plot) for the two selected bases shows a different result (see Fig.3, right plot).



This graph plots the fit quality parameter, given by the chi-square of the EH fit minus that of the BG fit, normalized by that of the BG fit, for each acquisition as a function of time. Since the best fit is associated with the shortest γ (smallest chi-square), a positive value of this computed fit quality parameter indicates a BG classification while a negative value favors EH bioaerosols. From this model, observation of Fig.3 (right plot) would have quickly classified the released compound as BG instead of EH. Furthermore, a close examination of the evolution of the amplitude of the fluorescence and of the fit accuracy parameter as a function of time (see the magnified graph portions) shows that, by the time the S/N associated with the detected amplitude of fluorescence reaches six times the noise floor before cloud arrival, the fit quality parameter has already increased to four times the noise floor. This would result in excellent discrimination between the two bases.
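A hedged sketch of this decision rule is given below (Python; the chi-square series are made-up numbers standing in for the values produced by the two fits described above): it forms the normalized chi-square difference and reads its sign.

```python
import numpy as np

def bg_vs_eh_metric(chi2_eh, chi2_bg):
    """Normalized fit-quality difference: positive favors BG, negative favors EH."""
    chi2_eh = np.asarray(chi2_eh, dtype=float)
    chi2_bg = np.asarray(chi2_bg, dtype=float)
    return (chi2_eh - chi2_bg) / chi2_bg

# Illustrative values only: before cloud arrival the two fits are comparable;
# after arrival the BG basis fits markedly better (smaller chi-square).
chi2_bg = [1.02, 0.98, 1.01, 0.40, 0.35]
chi2_eh = [1.00, 1.01, 0.99, 0.95, 0.90]
print(bg_vs_eh_metric(chi2_eh, chi2_bg))   # last values >> 0  ->  classified as BG
```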

Fig.5 Plots of the amplitude of the signal measured with SINBAHD (355 nm excitation) associated with the basis of EH during a dissemination of EH made at DPG in June 2005 (left plot). A referee LIDAR, probing the disseminated cloud at the same location and reporting the level of signal in concentration-length (CL), was used to correlate the SINBAHD signal. From this correlation, the standard deviation of the amplitude of the EH basis reported by SINBAHD before the cloud arrival is equivalent to ~780 ppl·m. The cloud, released at 2.5 km and with a thickness of 50 meters, was characterized with a range-gate of 500 meters. The EH spectral signal is extracted from the background-induced fluorescence collected simultaneously, which contains about 20 times more signal than the noise floor associated with the EH spectral amplitude. The picture at the right shows an artist's representation of the scenario of operation targeted by the BioSense project. Monitoring weak variations in fluorescing aerosol concentration over a wide area (1) and inspecting suspect events such as unidentified vehicles or explosions (2) are the two main capabilities targeted by BioSense.

3.4. Detection sensitivity of fluorescing aerosols
To determine the detection sensitivity, trials with bioaerosol clouds of known concentration were performed with the SINBAHD platform. Figure 4 shows representative examples of the results. These plots of induced fluorescence versus time were obtained from induced fluorescence spectra acquired by binning 1000 laser pulses fired at a bioaerosol cloud of BG (~3 µm particle diameter) released wet in the open air. A range of 1388 meters was used for the night acquisitions (Fig.4, left plot) and 508 m was used in daylight (Fig.4, right plot). The induced fluorescence, normalized by the Raman signal generated simultaneously from atmospheric nitrogen, was obtained


with intensified range-gates of 5 m (night acquisitions) and 30 m*** (day acquisitions) centered at a distance of about 1 m above the referee sensor. The referee sensor was a slit sampler array capable of simultaneously measuring the concentration of viable BG in Agent Containing Particles per Liter of Air (ACPLA), the quantity reported (red traces) on the right y-axes in Fig.4. Good time correlation between the fluorescence spectra and the concentration of the cloud reported by the referee sensor has allowed Simard et al.1 to evaluate the standard deviation of the SINBAHD signal, before cloud arrival, as 4.4 ACPLA at night (Fig.4, left plot) and 6.7 ACPLA in daytime (Fig.4, right plot). The evaluation of the detection sensitivity was repeated at DPG in July 2005. Figure 5 (left plot) shows a representative example of a detection made on that occasion. For this result, EH bioaerosols (~2 µm particle diameter) were released in the open air for about fifteen minutes at a range of 2.5 km at night. To characterize the aerosol cloud detected by the SINBAHD instrument, a second referee LIDAR, calibrated to report the concentration-length (CL) of a cloud of EH in particles per liter multiplied by the cloud thickness in m (ppl·m), was aimed at the same target location as SINBAHD. Figure 5 (left plot) reports the quantity CL measured by the referee LIDAR (continuous line) and the amplitude in CCD counts of the EH basis computed from the SINBAHD acquisitions (dashed line) as a function of time. By correlating the two measurements, the standard deviation of the noise floor for this bioaerosol before cloud arrival is found to be ~780 ppl·m. Since the probed cloud had an approximate thickness of 50 m, this noise floor standard deviation measured in CL would correspond to a cloud concentration of ~16 ppl. Fig.5 (left plot) also shows the amplitude of the induced spectral signal originating from background fluorescing aerosols present in the probed volume (green triangles). This strong background signal, about twenty times the energy corresponding to the standard deviation associated with the amplitude of the EH basis, results from the 500-m-long range-gate†††. This parasitic signal was processed simultaneously by the multivariate analysis with the EH basis and a second basis defining the background signal, derived from the data collected before the arrival of the EH cloud.
4. Next Step: BioSense
As a result of the lessons learned with SINBAHD, DRDC initiated the Technology Demonstration Project BioSense in April 2005. This project aims to build a prototype system and to demonstrate that the spectrometric LIF LIDAR technique already demonstrated by SINBAHD, when combined with NIR LIDAR cloud mapping, can detect, map, track and classify bioaerosol threats from multi-kilometer distances with a positive detection capability and a false alarm rate compatible with Canadian Forces (CF) operational requirements (see the artist's representation in Fig.5, right image). The

*** Even though a range-gate of 30 meters was used during the day, the thickness of the released cloud measured by the SINBAHD platform was of the order of 10 meters. ††† For this measurement, a 500-meter range-gate was used to facilitate the spatial collection of the induced fluorescence from the released bioaerosol cloud, since SINBAHD has a very limited cloud mapping capability.


definition year for the project ended in March 2006 and the 4-year implementation phase has started. The implementation phase includes the construction of the prototype, beginning by the end of 2006, followed by an extensive test and evaluation phase of the delivered prototype by 2009.
5. Acknowledgments
The authors and DRDC would like to thank John Strawbridge from the Edgewood Chemical Biological Center (ECBC) and the DPG field trial team for having made possible the participation of the SINBAHD platform in the Technical Readiness Evaluation trials organized at Dugway in the summers of 2002 and 2005.
6. References
1. J.-R. Simard, G. Roy, P. Mathieu, V. Larochelle, J. McFee and J. Ho, Standoff Integrated Bioaerosol Active Hyperspectral Detection (SINBAHD): Final Report, DRDC Valcartier TR 2002-125, 113 pages (2002).
2. J.-R. Simard, G. Roy, P. Mathieu, V. Larochelle, J. E. McFee and J. Ho, Standoff sensing of bioaerosols using intensified range-gated spectral analysis of laser-induced fluorescence, IEEE Trans. on Geoscience and Remote Sensing, 42(4): 865-874 (2004).
3. R. M. Measures, Laser Remote Sensing: Fundamentals and Applications, John Wiley & Sons, Inc., Chap. 7 (1984).
4. J.-K. Li, E. C. Asali, A. E. Humphrey, J. J. Horvath, Monitoring cell concentration and activity by multiple excitation fluorometry, Biotechnology Progress, 7(1): 21-27 (1991).
5. H. Andersson, H. Baechi, T. Baechi, M. Hoechl, C. Richter, Autofluorescence of living cells, Journal of Microscopy, 191(1): 1-7 (1998).
6. J.-R. Simard, Short-Range Bioaerosol LIDAR Detection: Transmitter Design and Sensitivity Analysis, DRDC Valcartier TM 2005-303, Unclassified (2005).
7. W. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, Chap. 2.9 (1988).
8. S. Buteau, J.-R. Simard and G. Roy, Standoff detection of natural bioaerosols by range-gated laser-induced fluorescence spectroscopy, in Proceedings of the Optics East SPIE Conference on Chemical and Biological Standoff Detection III, #5995, Boston, USA, 11 pages (2005).

International Journal of High Speed Electronics and Systems Vol. 18, No. 2 (2008) 505–517 © World Scientific Publishing Company

COMPUTED TOMOGRAPHIC IMAGING SPECTROMETER (CTIS) AND A SNAPSHOT HYPERSPECTRAL IMAGER AND POLARIMETER

JOHN HARTKE
Photonics Research Center, United States Military Academy, West Point, NY 10996 USA
[email protected]

NATHAN HAGAN
College of Optical Sciences, University of Arizona, Tucson, AZ 85721 USA
[email protected]

BRIAN A. KINDER
Optical Systems Division, 3M Corporation, St Paul, MN 55144 USA
[email protected]

EUSTACE L. DERENIAK
College of Optical Sciences, University of Arizona, Tucson, AZ 85721 USA
[email protected]

A Computed Tomographic Imaging Spectrometer (CTIS) is an imaging spectrometer system that acquires all the information required to reconstruct the data cube in a single integration time. This is in contrast to conventional systems, such as whiskbroom, pushbroom, and filter wheel systems, that require scanning in one or more coordinate directions. CTIS systems have been designed and tested in several different single spectral bands, as well as in a dual-band configuration. In addition to hyperspectral imaging spectrometers, CTIS systems have been used as an imaging spectropolarimeter and as a ranging imaging spectrometer. An imaging spectropolarimeter not only reconstructs the spectral content at every point in the scene of interest, but also provides the Stokes parameters at every point. So instead of just one data cube, we get four data cubes, one for each element of the Stokes vector. The ranging CTIS incorporates a LADAR system with the CTIS to provide the range information to targets in the scene as well as the reconstructed data cube. The physical principles behind the CTIS system are presented, as well as some representative data from single-band systems, the dual-band proof of concept, the spectropolarimeter, and the ranging imaging spectrometer.


1. Background
1.1. Imaging Spectrometry
Imaging spectrometry involves the acquisition of the spatially registered spectral content of a scene of interest. A classical imaging system provides a scene's spatial radiance distribution. A spectrometer, on the other hand, provides the spectral radiance content of a scene or the spectral signature of an object. The objective of an imaging spectrometer is therefore to combine these tasks and provide the spatially registered spectral content of a scene's radiance distribution. The product of an imaging spectrometer is a data cube: a three-dimensional representation of the scene with two spatial coordinates and wavelength as the third coordinate. The value at each three-dimensional point within the data cube gives a radiometric measurement. From this data cube, we can then take slices to analyze different characteristics of the scene. Cuts along a constant wavelength show the scene content at that wavelength. A slice at a particular spatial location gives the spectral signature of the scene at that location. There are numerous uses of imaging spectrometry, including astronomy, resource mapping, and military applications.1 Astronomers can use imaging spectrometry to classify the spectral signatures and locations of distant stars, study the composition of planets, and characterize other celestial objects by their emitted and reflected spectra. Remote sensing applications include identifying natural resources and geological structures and thematic mapping. The military can use imaging spectrometry for buried mine detection2, missile defense, and target identification.3 In all of these applications the goal is the same: provide the spectral content of each spatial point in the scene.
1.2. Conventional Imaging Spectrometers
Conventional means of obtaining the data cube have been pushbroom systems, whiskbroom systems, and the use of numerous spectral filters. A whiskbroom spectrometer obtains the spectral signature of the scene at a single (x, y) coordinate in the data cube. The spectral signature is obtained by placing a line of detectors behind a spectrally dispersive element, giving the spectral content of a single point on the ground. The spectrometer is then swept back and forth as the detector platform moves forward, obtaining the spectral signature for the rest of the scene. A pushbroom spectrometer is similar but obtains the spectral signature along an entire line. The pushbroom system uses a two-dimensional array of detectors behind the dispersive element instead of a linear array of detectors as in a whiskbroom system. The spectrometer has only to scan in one direction to complete the data cube. A third type of imaging spectrometer is a filter system, which completes the data cube by obtaining information about the scene in one wavelength band at a time. A two-dimensional array is placed behind a color filter and the filter spectral band pass is changed after each integration time.
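The two kinds of slices described above are straightforward to express on an array representation of the data cube; the following sketch (Python/NumPy, with a random cube standing in for real data) is one way to do it.

```python
import numpy as np

# Toy data cube with axes (x, y, wavelength); values are synthetic radiometric samples.
nx, ny, nbands = 64, 64, 40
wavelengths = np.linspace(400.0, 700.0, nbands)        # nm, illustrative grid
cube = np.random.default_rng(1).random((nx, ny, nbands))

# Slice at constant wavelength: a monochromatic image of the scene.
band_index = int(np.argmin(np.abs(wavelengths - 550.0)))
image_550nm = cube[:, :, band_index]                    # shape (nx, ny)

# Slice at a fixed spatial location: the spectral signature of that point.
spectrum_at_pixel = cube[10, 20, :]                     # shape (nbands,)

print(image_550nm.shape, spectrum_at_pixel.shape)
```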


In all of these cases, the acquisition of the data cube takes numerous integration times to gather all the required information while the system scans either spatially, as in a pushbroom or whiskbroom system, or spectrally, as in a filter system. For dynamic scenes, the scene changes can occur much faster than the total data cube collection time. A Computed Tomographic Imaging Spectrometer (CTIS) uses diffractive optics and tomographic techniques to reconstruct a data cube from an image taken over a single integration time. For cases where we want to reconstruct data cubes over two separate spectral regions, a dual-band CTIS has been developed and tested. In addition to collecting the spectra at each location in the scene, CTIS systems have been adapted to obtain the Stokes vector components at each spatial location. A CTIS system has also been combined with a LADAR system to provide range information in addition to spectral information at each point in the scene. This paper will present the basics behind the CTIS system, as well as discuss the dual-band CTIS, the CTIS spectropolarimeter and the CTIS ranging spectrometer.
2. CTIS Optical System
The CTIS optical system consists of four main optical elements: objective optics, collimating optics, a disperser, and a re-imaging element. The objective takes the scene and images it to the field stop. The collimator takes the light from the field stop and collimates it to pass through the disperser. After passing through the disperser the light is re-imaged to a two-dimensional detector array. (Figure 1)

Figure 1: Schematic of CTIS

Key to the CTIS system is the dispersive element. The dispersive element is a computer generated holographic (CGH) etched phase grating. For visible systems the CGH is commonly made from poly-methyl methacrylate (PMMA) material etched with an electron beam. The dispersion of the light is achieved by changing the depth of the PMMA and thus changing the phase of the wave front at each point. The disperser is composed of numerous unit cells. Each cell consists of an integer number (usually 8 x 8, 10 x 10 or 16 x 16) of square phasels. Each phasel is etched to a specified depth. The term phasels is used to distinguish the cell etching of the CGH from the detector pixel on


the focal plane array and the voxel, which is the three-dimensional unit cell of the data cube. The etched depth leads to a phase delay of the transmitted wavefront. The CGH modifies the incident wave front to create the desired diffraction pattern on the focal plane. The diffraction pattern becomes the tomographic projections of the data cube. We design diffraction patterns that create a 3 x 3, 5 x 5, or even a 7 x 7 pattern of diffraction orders on the focal plane. When designing a CGH, we must consider the required spectral and spatial resolution, the focal plane characteristics, and the object's characteristics. The CTIS data cube reconstruction is based on computed tomographic techniques similar to those used in medical imaging. Computed tomography involves reconstructing a three-dimensional data cube from a series of two-dimensional projections of the object. In this system the two-dimensional projections are created by the CGH dispersive element in the collimated space of the system. The two-dimensional projections are the diffracted images of the object's image in the plane of the field stop and constitute a series of parallel projections of the three-dimensional object cube.4 The center, or zero-order, projection is a direct polychromatic image of the object. The first orders are projections through the data cube at the same angle measured from the wavelength axis. The reconstruction techniques used most often with the CTIS system are Expectation Maximization (EM)5 and the Multiplicative Algebraic Reconstruction Technique (MART)6. Both techniques are iterative processes where each iteration is, in general, a better estimate of the object than the previous one.
2.1. Order Overlap
One of the limitations of the CTIS and other imaging spectrometers that use a diffractive element is that the range of wavelengths is limited to about a single octave. This limitation occurs because the longer wavelengths of one diffraction order strike the focal plane at exactly the same location as the shorter wavelengths of the next higher order. Consider the diffraction equation

m λ = OPD ,   (1)

where m is the diffraction order number and OPD is the optical path difference between two rays; the diffraction maximum for m = 2 and λ = 1 µm occurs at the same location in the observation plane as that for m = 1 and λ = 2 µm. A possible solution to the order overlap issue is to use prisms and filters to split the incoming light onto two separate focal planes, each sensitive to a different spectral range of interest. The drawback of this approach is that the spatial co-registration of the two focal planes is difficult at best and would have to be rechecked during operation to ensure that there is no shift. If we are interested in determining the spectral content of a scene over two separate spectral regions, we must therefore overcome both the order overlap and the spatial co-registration issues.
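A small numerical check of this octave limit, based only on Eq. (1), is sketched below (Python; the 1–2 µm band edges are the example from the text).

```python
def overlap_wavelength(lambda_max_um, order=1):
    """Wavelength of order (order + 1) that lands where lambda_max_um of `order` does,
    from m * lambda = OPD (Eq. (1))."""
    return order * lambda_max_um / (order + 1)

# First/second order overlap: 2 um light in m=1 coincides with 1 um light in m=2,
# so an unfiltered band wider than one octave (lambda_max > 2 * lambda_min) is ambiguous.
lambda_min, lambda_max = 1.0, 2.0   # um, illustrative band edges
print(overlap_wavelength(lambda_max))      # 1.0 um
print(lambda_max <= 2 * lambda_min)        # True: exactly one octave, no overlap
```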


2.2. Dual-Band CTIS
It is possible to overcome the order overlap limitation by using a focal plane composed of a mix of detectors sensitive to different portions of the spectrum, organized in an interwoven pattern, instead of a focal plane made up of detectors with the same spectral bandpass. Many standard digital cameras' focal plane arrays already consist of the required interwoven mixed-bandpass detectors in the visible portion of the spectrum. These focal plane arrays consist predominantly of silicon detectors with filters that have three spectral bandpasses: red (600-700 nm), green (500-600 nm), and blue (400-500 nm). In such an interwoven pattern, the detector types alternate between red-, green-, and blue-sensitive pixels. If we group one set of red, green, and blue pixels into a single cluster, we can treat the cluster as a single spatial pixel providing the spectral content of all three bands at that focal plane location. We can then treat the set of all pixels sensitive to one band as one focal plane and the pixels sensitive to another spectral band as a second focal plane. These two focal planes are automatically spatially co-registered by the clustering scheme (a minimal sketch of this clustering is shown after Figure 3). At the points of order overlap on the focal plane, we can separate the input of the two bands with the proper imaging software and resolve the order overlap issue. The fact that the red pixels are slightly shifted from the blue pixels is accounted for in the calibration procedures.7

Figure 3: Schematic of the interwoven mixed focal plane under the clustering scheme and the separated color bands (panels show the interwoven FPA with red-, green-, and blue-sensitive pixels, the co-registered clusters, and the separated monocolor FPAs).
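The sketch below (Python/NumPy) illustrates the clustering idea on a hypothetical 2 x 2 Bayer-like mosaic; the actual mosaic layout and frame size of the camera used for the proof of concept are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 2x2 interleaved mosaic (Bayer-like): R G / G B within each cluster.
# The exact mosaic layout of the camera used in the paper is an assumption here.
raw = np.random.default_rng(2).integers(0, 4096, size=(480, 720)).astype(float)

red_plane  = raw[0::2, 0::2]   # one red-sensitive pixel per 2x2 cluster
blue_plane = raw[1::2, 1::2]   # one blue-sensitive pixel per 2x2 cluster
green_mean = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])  # two green pixels per cluster

# Each cluster becomes a single spatial pixel, so the red and blue "focal planes"
# share the same (row, column) grid and are co-registered by construction.
assert red_plane.shape == blue_plane.shape == green_mean.shape
print(red_plane.shape)   # (240, 360): half the resolution in each spatial dimension
```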

By integrating a CGH disperser into a commercially available professional digital camera, we have demonstrated a dual-band CTIS system as a proof of concept in the visible spectrum. The CGH is designed to operate from about 400-700 nm. As a part of the proof of concept, we treated the red and blue bands as the two bands to reconstruct, ignoring the green band.
2.3. Dual-Band Results
The dual-band visible CTIS system was tested by reconstructing alternating black and white bars and alternating red and blue bars. The two bands reconstructed were the blue (400-500 nm) and the red (600-700 nm). The alternating black and white bars and the alternating


red and blue bars demonstrate the CTIS's ability to reconstruct co-registered data cubes in two cases: one where the spectral content of both data cubes is at the same location, and one where the spectral content of the two data cubes does not overlap. Figure 4 shows the relative radiometric measure along a constant y-pixel, at a given wavelength, as a function of position in the x-direction.

Figure 4: Cut along y = 42 pixels of the black and white bars at λ = 495 nm in the blue data cube and λ = 625 nm in the red data cube.

Figure 5: Cut along y = 42 pixels of the red and blue bars at λ = 495 nm in the blue data cube and λ = 625 nm in the red data cube.

From Figures 4 and 5 we can see that the two data cubes are co-registered to within ±2 pixels, which is within the uncertainty of the experiment. The spatial co-registration conclusion is based on examining the locations of the peaks in the red and blue data cubes. In Figure 4, the third maximum from the right in the blue band occurs at pixel 22. In the red band that peak occurs at pixel 20. In the data cubes reconstructing the red and blue bars, we expect the peaks in the blue band to correspond to the valleys in the red band. In Figure 5, pixel 20 is a valley in the blue band and a peak in the red band. Combining these findings with the fact that the image of the white light point source in the zero diffraction order used during calibration


is at the same (x, y) pixel location on the focal plane in both bands strengthens the conclusion that the system produces co-registered data cubes in both bands.
3. Channeled Spectropolarimetry
Channeled spectropolarimetry8 is a technique which, through the simple addition of a pair of thick retarders and an analyzer to an optical system, allows the conversion of a spectrometer into a spectropolarimeter, and therefore also an imaging spectrometer into an imaging spectropolarimeter. Its great advantages include its compactness and its lack of moving parts. (Figure 6)

Figure 6: The layout of the CTICS system, with the pair of thick retarders and analyzer inserted in front of the disperser.

This technique uses the dispersion of birefringence in high-order retarders to interfere components of the input beam. For example, we can consider a spectrum which is passed through a horizontal polarizer, after which it is incident on a thick retarder oriented at 45°. As the light passes through the retarder, it is split into two equal components: the projections onto the +45° and −45° axes of the retarder. When the light emerges from the retarder, these two components (+45° and −45°) will be out of phase by an amount given by the retardance of the waveplate. For thick waveplates, the retardance δ varies linearly with wavenumber σ = 1/λ:

δ = 2 π d ∆n σ ,   (2)

where d is the physical thickness of the retarder and ∆n its birefringence. (The above relationship assumes a waveplate material with no dispersion of birefringence, but a similar linear relationship still holds for materials whose dispersion of birefringence is linear in wavenumber.) Thus, the blue end of the spectrum will show a large retardation between the +45° and −45° components, while the red end of the spectrum will show much less retardation between the same two components. If a horizontally-aligned analyzer is then placed into the beam, the two components are forced to interfere, and since there is a retardance between orthogonal components in the beam which is linear in wavenumber, the spectrum is modulated sinusoidally (at a frequency proportional to the retardance). To build a complete polarimeter with this technique, it is necessary to produce interference not only between the horizontal/vertical linear polarization components of the beam but also between the +45°/−45° linear components and the left/right circular components.
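As a rough numerical illustration of Eq. (2) and of the channel spacing it implies, the short sketch below (Python) uses the ~1% rule of thumb for quartz quoted later in this section; the birefringence value is an assumption, not a measured parameter.

```python
# Rough numbers for a thick quartz retarder, following the paper's ~1% rule of thumb
# (birefringent OPD ≈ 0.01 x physical thickness); delta_n here is an assumption.
d = 4.0e-3                 # m, physical thickness of the thinner retarder
delta_n = 0.01             # approximate quartz birefringence magnitude (assumed)
opd = d * delta_n          # ≈ 40 um of optical path difference between eigenpolarizations

# From Eq. (2) the phase retardance is linear in wavenumber, so the transmitted
# spectrum is modulated with a period of 1/OPD in wavenumber.
opd_cm = opd * 100.0               # convert m -> cm
period_wavenumber = 1.0 / opd_cm   # cm^-1
print(opd * 1e6, "um OPD;", period_wavenumber, "cm^-1 modulation period")
```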


Figure 6 illustrates the principle of operation: the input broadband spectrum passes through two high-order retarders; the first retarder is aligned with the analyzer's transmission axis and the second retarder is aligned at 45° to the first. Using Mueller calculus, and representing the spectrum in terms of wavenumber σ rather than wavelength λ, we can relate the measured spectrum I(σ) to the Stokes vector polarization components of the input beam:

2 I(σ) = s0 + s1 cos(δ2) + s2 sin(δ1) sin(δ2) − s3 cos(δ1) sin(δ2) ,   (3)

where δ1 and δ2 are the retardances of the respective retarders (and each is a linear function of wavenumber σ). Taking the Fourier transform of I(σ), we obtain a function in which the four individual Stokes components are separated into seven channels (from Eq. (3), one channel is centered at DC and two channels are present for each of the remaining sinusoidal terms). By proper choice of thicknesses for the two retarders,9 and therefore of the retardances δ1 and δ2, these channels do not overlap, so that by windowing each channel individually and taking the inverse Fourier transform of the result, we obtain the four spectrally-resolved Stokes components. Because the individual channels occupy only a portion of the total signal bandwidth, there is a reduction in overall spectral resolution (by a factor of seven for an optimal system, somewhat more than that for real systems) in the reconstructed Stokes spectra.
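The sketch below (Python/NumPy) generates a synthetic channeled spectrum from Eq. (3), transforms it to the Fourier domain where the seven channels appear, and recovers s0 by windowing the DC channel; the retarder OPDs and the flat Stokes spectrum are assumptions chosen only for illustration.

```python
import numpy as np

# Synthetic channeled spectrum following Eq. (3): the two retardances are linear in
# wavenumber, with OPDs of 40 um and 80 um (assumed values for the 4 mm / 8 mm pair).
sigma = np.linspace(1.0 / 700e-9, 1.0 / 400e-9, 4096)      # wavenumber (1/m)
opd1, opd2 = 40e-6, 80e-6                                   # m
delta1, delta2 = 2 * np.pi * opd1 * sigma, 2 * np.pi * opd2 * sigma

# Constant input Stokes vector across the band (illustrative only).
s0, s1, s2, s3 = 1.0, 0.3, 0.2, -0.1
I = 0.5 * (s0 + s1 * np.cos(delta2)
              + s2 * np.sin(delta1) * np.sin(delta2)
              - s3 * np.cos(delta1) * np.sin(delta2))

# Fourier transform along wavenumber: the Stokes information appears in 7 channels
# centered at OPD = 0, ±(opd2 - opd1), ±opd2 and ±(opd1 + opd2).
C = np.fft.fftshift(np.fft.fft(I))
opd_axis = np.fft.fftshift(np.fft.fftfreq(sigma.size, d=sigma[1] - sigma[0]))

# Recover s0 by windowing the DC channel and inverse transforming.
window = np.abs(opd_axis) < opd1 / 2
s0_est = np.fft.ifft(np.fft.ifftshift(C * window))
print(np.real(s0_est).mean() * 2)   # ≈ s0 = 1.0
```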

Figure 7: Channeled spectropolarimeter diagram (adapted from ref. [8]). The input spectrum passes through a thick retarder oriented at 0° to the horizontal, then another thick retarder oriented at 45°, followed by an analyzer at 0° and a spectrometer. An unpolarized input spectrum is unmodulated, while a polarized input is given high-frequency modulations. In the Fourier domain, the polarization components of the beam are (with a proper choice of retarder thicknesses) separated into 7 channels. The sj terms shown here are the Stokes vector components representing the polarization state.

If we pass a polarized spectrum into a CHSP instrument, we see that the spectrum is given high-frequency modulations (Figure 8). If we now calculate the Fourier Transform of this spectrum, C(OPD) = F{I(σ)}, we can see the resulting 7-channel distribution in the Fourier domain, shown in Figure 9, where the abscissa describes the optical path difference between the interfering components of the beam. The Stokes component terms are separated into independent channels, much in the way that communications systems using a single carrier frequency reserve bandwidth regions for each independent communication channel, referred to as sideband modulation.



Figure 8: If we image horizontally-polarized light through a narrow slit, we can see the channeled spectrum most easily in the diffraction order oriented perpendicularly to the slit.

As long as none of the Stokes component spectra sj(σ) exceeds the bandwidth of a given channel, each of the Stokes components can be extracted separately via masking and shifting in the Fourier domain. The spectral resolution of the system may then be characterized by the OPD-bandwidth provided to each channel. For quartz retarders, the birefringence at visible wavelengths gives an optical path difference of approximately 1% of the crystal's physical thickness. For the two visible-spectrum experiments discussed below, a pair of quartz retarders with thicknesses of 4.0 mm and 8.0 mm was used, giving a retardance of δ1 = 40.0 µm for the orthogonal polarizations passing through the thinner retarder. An additional 10% must be added to this to take into consideration the linear dispersion of birefringence, giving δ1 = 44 µm. This sets the width of each individual channel, as shown in Figure 9.

Figure 9: (Right) The measured channeled spectrum for a compound input polarization state. (Left) The magnitude of the channeled spectrum’s Fourier Transform, |C(OPD) |, for the input polarization state. Windows have been superimposed onto the plot to show the seven channels and the bandwidth region into which each polarization component of the input spectrum is distributed.


By combining the techniques of CTIS with CHSP, we can construct an instrument which maintains snapshot capability and obtains the data cubes for each of the four Stokes parameters. Fusing the two techniques involves inserting the pair of high-order retarders and the analyzer into the collimated path in front of the CTIS disperser, so that the polarization information in the measured scene is encoded in modulations of the spectrum. This additional capability comes at the cost of reducing the spectral resolution by a factor of at least 7, but snapshot capability is maintained.
4. Ranging-Imaging Spectrometer
Adapting the CTIS to function with a LADAR system adds the capability to measure a third spatial dimension, namely range. A LADAR developed by Sandia National Labs, called the Scannerless Range Imaging LADAR (SRI LADAR), uses a sequence of images to measure range. This particular LADAR technique uses phase encoding and time of flight (TOF) to measure the distance between the instrument and the scene for every image pixel. Since this SRI LADAR acquires entire images of the scene without spatial scanning, it is suitable for use with the CTIS. The SRI LADAR consists of a pulsed laser emitter and receiving optics. The receiving optics contains an objective lens and an image-intensified CCD camera with modulated gain. The resolution and range bin size of the SRI LADAR depend on the modulation frequency (or frequencies) of the intensified camera. Replacing the objective lens of the SRI LADAR with a CTIS operating around the laser wavelength generates a hybrid instrument called the Ranging-Imaging Spectrometer (RIS). The RIS simultaneously measures 3-D spatial and spectral information from a sequence of rapidly acquired images. The spectral information is collected and processed in the same manner as in the conventional CTIS. The range information collected by the RIS is only valid over the center order of the CTIS diffraction pattern, because it contains a panchromatic image of the scene. A RIS instrument (Figure 10) was built by modifying an existing SRI LADAR operating at 857 nm with a CTIS designed to operate from 600-900 nm. The operation and experimental results of this instrument are described below.

Figure 10: Diagram of the RIS.


As mentioned before, the SRI LADAR, and now the RIS, uses phase encoding and TOF to measure range. The phase encoding is created by heterodyning outgoing laser pulses at 1 kHz with a sinusoidally modulated gain at 10 MHz applied by the micro-channel plate (MCP) in the image intensifier. The laser pulses (λ = 857 nm) are reflected off the scene and imaged through the CTIS. The heterodyning of the two waveforms generates an image on the phosphor screen of the intensifier whose intensity is range dependent. This image is then transferred via the fiber optic taper to the FPA, and another image of the scene is acquired with the initial phase difference between the outgoing laser pulses and the image intensifier gain shifted by π/4 radians. Once eight images are acquired and transferred to the computer, the phase difference between the outgoing laser pulses and the image intensifier gain has been stepped through 2π radians. Using the eight images and a TOF algorithm, the range of the scene can be calculated on a per-pixel basis. Since the CTIS requires only one image to measure spectral information and the SRI LADAR requires eight images to measure range, the spectral content of the scene is sampled eight times faster than the range. Also, the programs that process and reconstruct the data are independent; consequently, depending on the application of the sensor or the needs of the user, either spectra, range, or both can be calculated for the scene. Because each pixel in the image contains both spectral and range measurements of the scene, the data gathered by the RIS can be represented as a four-dimensional hypercube with one spectral (λ) dimension and three spatial (x, y, and z) dimensions. This data set is referred to as a spatial-spectral hypercube. The RIS collects spectral samples in a 77 x 77 pixel format with an angular subtense of 12.5°. Laboratory and field tests have shown that the angular and range resolution of the RIS are 2.8 mr and 8.92 ± 1.23 cm, respectively. Spectral tests have shown that the RIS can resolve spectral lines separated by 19.08 nm. Figures 11 and 12 show examples of the spectral and ranging reconstructions of known test targets.
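The paper does not give the SRI LADAR's range estimator explicitly; the sketch below (Python/NumPy) uses the generic N-step phase-shifting estimator for eight frames stepped by π/4 and the usual AM-homodyne phase-to-range conversion, so it should be read as an assumed stand-in rather than the instrument's actual algorithm.

```python
import numpy as np

C = 2.998e8            # m/s, speed of light
F_MOD = 10e6           # Hz, intensifier gain modulation frequency
N_STEPS = 8            # eight frames stepped by pi/4 in initial phase

def range_from_frames(frames):
    """frames: array of shape (N_STEPS, ny, nx). Returns per-pixel range in meters."""
    k = np.arange(N_STEPS).reshape(-1, 1, 1)
    steps = 2 * np.pi * k / N_STEPS
    num = np.sum(frames * np.sin(steps), axis=0)
    den = np.sum(frames * np.cos(steps), axis=0)
    phase = np.mod(np.arctan2(num, den), 2 * np.pi)     # recovered modulation phase
    return C * phase / (4 * np.pi * F_MOD)              # unambiguous out to c/(2*F_MOD) = 15 m

# Simulate one pixel at 6.0 m: intensity varies as cos(step - phase_of_target).
true_range = 6.0
true_phase = 4 * np.pi * F_MOD * true_range / C
steps = 2 * np.pi * np.arange(N_STEPS) / N_STEPS
frames = (1.0 + 0.5 * np.cos(steps - true_phase)).reshape(N_STEPS, 1, 1)
print(range_from_frames(frames)[0, 0])    # ≈ 6.0
```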

Figure 11: (Right) Spectral reconstruction of a single pixel over the LEDs in the image on the left. The spectrum shows the two peaks of the LEDs and the laser emitter wavelength.


Figure 12: Styrofoam target with a 25 cm step, used for range testing of the RIS (left) and the reconstruction (right).

5. Conclusions and Future Work
The CTIS system has shown significant promise as a hyperspectral imager, an imaging spectropolarimeter, and a ranging imaging spectrometer. The CTIS's advantage over other imaging spectrometer systems is that it can obtain all the information necessary to reconstruct a data cube in a single integration time, whereas conventional imaging spectrometers must scan either spatially or spectrally to obtain all the required information. One drawback of the CTIS system is that, although the information is collected very quickly, it takes a long time (on the order of minutes) to reconstruct the data cube. Work is currently in progress at the University of Arizona to use wavelet transforms to speed up the reconstruction. Another drawback of the CTIS system is that only about 20% of the pixels on a two-dimensional focal plane have light incident on them from the diffraction pattern. This limits the spatial and spectral resolution possible for a given focal plane size. A volume CGH may allow us to wrap the diffraction pattern in a spiral around the center of the focal plane. If this works, we can use the entire focal plane and increase the spectral and spatial resolution.
6. References
1. W. R. Dyer and M. Z. Tidrow, "Applications of MCT and QWIP to ballistic missile defense," E. L. Dereniak, R. E. Sampson, eds., in Proceedings of SPIE, Vol. 3379, pp. 434-440 (1999).
2. A. C. Goldberg, et al., "Development of a dual-band LWIR/LWIR QWIP focal plane array for detection of buried mines," E. L. Dereniak, R. E. Sampson, eds., in Proceedings of SPIE, Vol. 4721, pp. 184-195 (2002).
3. W. R. Dyer and M. Z. Tidrow, "Applications of MCT and QWIP to ballistic missile defense," E. L. Dereniak, R. E. Sampson, eds., in Proceedings of SPIE, Vol. 3379, pp. 434-440 (April 1999).
4. G. T. Herman, Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, New York (1980).
5. L. A. Shepp, Y. Vardi, "Maximum likelihood reconstruction for emission tomography," IEEE Transactions on Medical Imaging, Vol. MI-1, No. 2, pp. 113-122 (Oct 1982).
6. A. Lent, "A convergence algorithm for maximum entropy image restoration," in Image Analysis and Evaluation, SPSE Conference Proceedings, Rodney Shaw, ed., pp. 249-257 (Jul 1976).


7. J. Hartke, E. Dereniak, "Hyperspectral-dual spectral region imaging spectrometer," in Proceedings of SPIE, E. L. Dereniak, R. E. Sampson, eds., Vol. 5563, pp. 156-166 (2004).
8. K. Oka and T. Kato, "Spectroscopic polarimetry with a channeled spectrum," Opt. Lett. 24(21), pp. 1475-1477 (1999).
9. K. Oka and T. Kato, "Static spectroscopic ellipsometer based on optical frequency-domain interferometry," in D. H. Goldstein et al., eds., Polarization Analysis, Measurement, and Remote Sensing IV, Proc. SPIE 4481, pp. 137-140 (2001).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 519–529 © World Scientific Publishing Company

HYPERSPECTRAL IMAGING USING CHROMOTOMOGRAPHY: A FIELDABLE VISIBLE INSTRUMENT FOR TRANSIENT EVENTS

RANDALL L. BOSTICK and GLEN P. PERRAM
Department of Engineering Physics, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson Air Force Base, Ohio 45433-7765 USA
[email protected]
[email protected]

A visible hyperspectral imager using chromotomography (CT) has been built, with the goal of extending the technology to spatially extended sources with quickly varying (> 10 Hz) features, such as bomb detonations and muzzle flashes. Even with a low-dispersion (~0.7 mrad/nm) direct vision prism with undeviated wavelength near λ = 548 nm, a spectral resolution of better than ∆λ < 10 nm across the λ = 400-600 nm band is demonstrated, with a spatial resolution of better than 0.5 mm. The primary objective of this paper is to show empirically that the spatial and spectral resolution of data obtained by a simple CT instrument is unchanged between projection space and reconstructed object space. Keywords: Hyperspectral Imagery, Chromotomography, Detonation Fireball

1. Introduction
Hyperspectral imagery (HSI) has largely been developed to monitor static or slowly changing events. However, there is considerable potential for applying HSI to fast transient events, such as bomb detonations, muzzle flashes, and other battlefield combustion events.1 Several instrumental approaches are emerging, including two key technologies: imaging Fourier transform spectroscopy (imaging FTIR) and chromotomography (CT). Chromotomography generally refers to a dispersive instrument which deconvolves spatial and spectral information using the same transforms employed in medical tomography.2 Chromotomography offers several advantages over FTIR approaches, including: (1) a simple design with less sensitivity to vibration, (2) easy integration with standard imaging sensors, and (3) the use of event phenomenology in the CT transform for increased temporal response. Two primary approaches for CT are being pursued: (1) a dispersive prism with central undeviated wavelength rotated about the optical axis,2-5 and (2) a non-rotating, low-dispersion grating which images multiple orders on a single detector array.6 The rotated prism approach offers the potential for greater spectral resolution. For applications such as the classification of detonation events, the higher spectral resolution is required.7 Source spectra must be disentangled from atmospheric transmission, and spectral


fingerprints from hot combustion products, or from incomplete fuel consumption, are key discriminating features.8 A few hyperspectral imagers based on chromotomography have been developed.2-4 However, their fast event detection is limited to point sources. Furthermore, the effects of instrumental design and performance parameters on temporal, spectral and spatial resolution are poorly documented. The goal of this research is to construct a prototype CT-HSI instrument and demonstrate the capability to collect, process, and exploit spectral imagery on a laboratory target whose size and spectral emission are changing quickly with time. The instrument will be used to perform a metrological study to determine the effects of jitter, optics misalignment, off-axis input rays, and other systematic errors on the resulting temporal, spatial, and spectral resolution in the reconstructed data. Innovative new data reconstruction algorithms are being developed which are not restricted by the singular system transfer function, are not sensitive to collection instabilities, and can process the data quickly enough to be useful for military defense and battlespace awareness. This paper outlines the basic design of the prototype and explores the spatial and spectral resolution of data collected on a simple test source.
2. Experimental
2.1. Data Acquisition
The visible chromotomographic hyperspectral imager is shown schematically in Fig. 1. The CT instrument employs a two-component direct vision prism (DVP), shown in Fig. 2, with undeviated wavelength near λ = 548 nm and angular dispersion as shown in Fig. 3.

Fig. 1. Schematic design of the visible CT instrument with D1 = 98.1 cm, D2 = 22.3 cm, D3 = 41.5 cm, F1 = 15.24 cm, and F2 = 5.1 cm.

The entry telescope is used to collimate the source at the prism and establish a well-defined incident angle. The dispersed light exiting the prism is imaged with a Canon XL2 video camera. The camera features three separate 480 x 720 pixel focal planes, one sensitive to each of the red, green, and blue wavelength bands, at 30 Hz framing rates. The pixel pitch in each focal plane is approximately 9 µm. It is thought that, eventually, the data collected by each focal plane can provide initial insight into the spectral content of


the source and aid in the reconstruction algorithms. The camera was aligned on the optic axis, and an acceptable image was created using an f/2.8 lens. The prism is rotated about the optical axis at a uniform rate of ½ Hz using a Newport RGV100 High-Speed Precision Rotation Stage with 1.7 µrad resolution. Due to the nature of the camera, the actual focal length of the camera lens will be determined by measurement.

Fig 2. Geometry of the direct vision prism (DVP).

The front face of the direct vision prism is normal to the incident parallel beams, and the exit face is at an angle to the front. The prism provides the usual dispersion angle as a function of wavelength; a negligible displacement with respect to the entrance point is also introduced. The front prism is made from Schott SF L6 glass, the second from Schott LaSF N30 glass.

Fig. 3. The angular deviation φ as a function of wavelength for the DVP. The wavelengths and corresponding angular deviation for the four primary emission lines from the Hg lamp are indicated.

The spectral source used as the object illumination was an Oriel 6035 mercury pen lamp. This source provides four dominant spectral lines in the visible spectrum at approximately 404, 435, 546, and 578 nm. Note that the green line in the Hg spectrum (546 nm) is located very near the undeviated wavelength of the prism (548 nm). The difference in angular deviation is a mere 0.0179°, which can be considered negligible in this case because the focal length and detector pitch result in a linear displacement of less than a pixel at this wavelength. A 300 µm or 1.5 mm aperture is placed in front of the pen lamp in order to spatially limit the object size. The 1.5 mm source is used to investigate the effect of a spatially extended source on spatial and spectral resolution, with results from the 300 µm aperture used in the reconstruction calculations.

2.2. Data Reconstruction

The algorithm chosen to perform the 3-D data reconstruction in this experiment is a simple back projection algorithm. While a number of different and more sophisticated algorithms are available for tomographic data reconstruction,9 the back projection is one of the simplest and allows for reconstruction as data are being collected. Other methods require a full set of projections to be collected before reconstruction can begin, which limits the speed of the process. The back projection can also be easily constrained or modified to make use of any a priori knowledge of target conditions. While simple, the back projection does not produce results of as high quality as more mathematically exact techniques. However, it indicates where the transform is deficient and highlights the areas that should be addressed by a subsequent algorithm. Finally, it is a simple algebraic method that is not as sensitive to noise as Fourier transform methods.

The basic principle of the back projection algorithm is to take each pixel intensity measurement in the projected data set and assign that value equally back to all possible sources of that intensity in the object space. The projection angle (or prism rotation angle) θ and the angular deviation φ of the prism at each wavelength are used to do this. The algorithm is constrained spatially by knowing the aperture size, and thus the field of view of the instrument, and the spectral bandwidth, so that only a limited 3-D space need be considered. To illustrate this, consider a single source at wavelength λi located in space such that it is seen by detector d1 at one prism angle θ, and by detector d2 at angle θ + π. Fig. 4 illustrates this situation.

Fig. 4. Dispersion from a point source at wavelength λi for prism rotation angles (a) θ and (b) θ + π.


If the prism disperses at a constant angular dispersion dφ/dλ for all wavelengths (and, furthermore, there is no undeviated wavelength in the reconstruction space), the signal on detectors d1 and d2 must originate from a point on the line back-projected through the object space defined by the deviation angle. Thus, all points on this line are assigned the magnitude of the signal measured by the detector. Fig. 5 shows the back projections from d1 and d2. The intersection is at the location of the original signal. As more and more back projections are included, the density of values increases in the area of the object source.
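The assignment rule just described translates directly into code. The following is a minimal sketch assuming a simple shift model in which wavelength bin k is displaced by ∆r = f3·φ(k) along the dispersion direction set by the prism angle (cf. Eq. (2) below); the array shapes, the dispersion table, and the function names are illustrative and are not the processing chain used for this work.

```python
import numpy as np

def back_project(frame, theta, dispersion, f3_pixels, cube):
    """Spread one detector frame, recorded at prism angle theta, back into
    the (row, col, wavelength) object cube.

    frame      : 2-D detector image (ny, nx)
    theta      : prism rotation angle in radians
    dispersion : dict {wavelength bin index: deviation angle phi in radians}
    f3_pixels  : imaging lens focal length expressed in pixels
    cube       : 3-D accumulator array (ny, nx, n_bins), modified in place
    """
    ny, nx = frame.shape
    rows, cols = np.indices((ny, nx))
    for k, phi in dispersion.items():
        # Wavelength bin k is displaced by dr = f3 * phi along the dispersion
        # direction; undo the shift to locate the candidate object-space
        # position, then deposit the measured intensity there.
        dr = f3_pixels * phi
        r = np.round(rows - dr * np.sin(theta)).astype(int)
        c = np.round(cols - dr * np.cos(theta)).astype(int)
        ok = (r >= 0) & (r < ny) & (c >= 0) & (c < nx)
        np.add.at(cube[:, :, k], (r[ok], c[ok]), frame[ok])

# Accumulating every projection yields the (smoothed) back projection estimate:
#   cube = np.zeros((ny, nx, n_bins))
#   for theta, frame in zip(prism_angles, frames):
#       back_project(frame, theta, dispersion, f3_pixels, cube)
```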

Fig. 5. Intersection of back projections.

In the case of the prism used in this study, the angles φj are given by the values shown in Fig. 3 and thus trace out curves in the object space. One can see even from the crude example in Fig. 5 that a smaller deviation angle will blur the back projected data, thus decreasing the spectral resolution. For a point source, this loss of spectral resolution is due to the finite size of the detector, as in Fig. 5. More likely, the loss of spectral resolution will be realized when observing a target with spatial extent. Only at an angle of φ = 90° could the image be exactly restored, as in medical computed tomography. Unfortunately, in chromotomography it is impossible to have the dispersion normal to the optical axis. This limited-angle problem in CT has been exhaustively addressed in the literature, but may be no worse than that observed in prism spectroscopy, where the resolution is also dependent on source size and prism dispersion. The width of a spectral line, ∆λ, for an input source of size a at a wavelength λ in a prism spectrograph with a focusing lens of focal length f3 and prism dispersion dφ/dλ, is given by:

∆λ = (a / f3) / (dφ/dλ)    (1)

3. Results

Line spectra collected using the 300 µm aperture illuminated by the Hg lamp at 26 projection angles are superimposed in Fig. 6a to illustrate multiple image frames from a 360° rotation of the direct vision prism. The center of rotation of each spectrum is offset from what is expected to be the undeviated wavelength, due to prism misalignment. Advantage is taken of the fact that the green line in the spectrum is located very close to the undeviated wavelength, and therefore should be at the center of rotation. The offset correction for each projection is made by linearly shifting each projection so that the green intensity lies on the center of rotation. The aligned projections are superimposed in Fig. 6b, with an arbitrary projection at angle θ shown to demonstrate how the prism rotation angle is defined. The same correction is done for the 1.5 mm source.

Fig. 6. Superimposed images from 26 projections of Hg lamp: (a) as directly observed, and (b) with offset removed. Pixel row and column numbers are indicated, with 9 µm spacing between pixels.

After the offset in rotation is removed, the angle θ of each projected line spectrum was measured by calculating the locations of the blue and violet spots in the x-y coordinate system. Using these coordinates, two values for θ were found and averaged to improve accuracy. The measured locations of each color in the x-y plane can also be used to calculate the effective focal length f3 of the lens on the XL2. The linear displacement ∆r of each color is given by:

∆r = f3 φ    (2)

where φ is the angular deviation of the prism at the given wavelength. Calculating ∆r for the blue and violet points at several projections yielded an average focal length of about 2900 pixels or, at the 9 µm pixel pitch, f3 ≈ 2.6 cm. Once the data have been collected and corrected, and each projection angle determined, the back projection algorithm is used to reconstruct the data in the object space. The spectral and spatial resolution of the data are then evaluated to determine the effects of the reconstruction.
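As a small illustration of this geometry, the sketch below recovers θ and f3 from blue and violet spot centroids via Eq. (2); the centroid coordinates and the deviation angles are placeholder numbers, not measured values from this experiment.

```python
import numpy as np

# Placeholder deviation angles for the 404 nm (violet) and 435 nm (blue)
# lines, in radians; the measured curve for the DVP is shown in Fig. 3.
PHI = {"violet": np.deg2rad(0.60), "blue": np.deg2rad(0.35)}

def angle_and_focal_length(spots, center):
    """Estimate the projection angle theta and focal length f3 (in pixels)
    from spot centroids measured on the focal plane.

    spots  : {color: (x, y)} centroid coordinates in pixels
    center : (x, y) center of rotation (the nearly undeviated green spot)
    """
    thetas, f3s = [], []
    for color, (x, y) in spots.items():
        dx, dy = x - center[0], y - center[1]
        thetas.append(np.arctan2(dy, dx))   # projection angle of this spot
        dr = np.hypot(dx, dy)               # linear displacement on the array
        f3s.append(dr / PHI[color])         # Eq. (2): dr = f3 * phi
    return float(np.mean(thetas)), float(np.mean(f3s))

theta, f3_px = angle_and_focal_length(
    {"violet": (412.0, 268.0), "blue": (395.0, 262.0)}, center=(360.0, 240.0))
print(np.degrees(theta), f3_px * 9e-6 * 100.0, "cm")   # 9 um pixel pitch
```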


3.1. Spatial Resolution

The spatial resolution achievable in the reconstructed space increases with the number of projections taken. However, even if an infinite number were used, the nature of the back projection algorithm is to smooth the data. Even so, in this type of arrangement the spatial resolution in the reconstructed space should only be limited by the resolution achievable by the 2-D focal plane, as documented in similar instruments used in the field of medical physics.10 Thus, the quality of the spatial resolution in the reconstructed object space will be judged by comparison to the spatial extent of the image source as measured on the camera focal plane. The size of the spectral images detected on the focal plane is measured by taking the full-width half-maximum (FWHM) diameter in pixels of the spots in a direction perpendicular to the dispersion axis of the prism. In the reconstructed object, the width is measured as the FWHM diameter in pixels (or bins) in the spatial plane at the appropriate value of λ. Fig. 7 illustrates the spatial distribution of the normalized intensity for the 300 µm source. The resulting spot sizes for both apertures are provided in Table 1. The system magnification is MT = 0.092, yielding the spot sizes in object space, also provided in Table 1. There is generally good agreement between the sizes of the observed (projected) and reconstructed images. The agreement is best for the brighter 435 nm and 546 nm lines, and in general for the 1.5 mm case. The results indicate that the transformation to object space does slightly reduce the spatial resolution relative to the data collected on the focal plane, but not by a significant amount for objects with sufficiently high signal.

Fig. 7. Projected (O) and reconstructed (∆) spatial distributions at the four Hg pen lamp wavelengths for the 300 µm source. Pixel numbers are displayed on the x-axis, and normalized intensity on the y-axis.
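For reference, the FWHM measurement used throughout the analysis can be sketched as a threshold crossing on a sampled intensity profile; this is a generic estimator, not the authors' analysis code.

```python
import numpy as np

def fwhm_pixels(profile):
    """Full width at half maximum, in pixels, of a 1-D intensity profile
    (e.g., a cut through a spot perpendicular to the dispersion axis)."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                        # remove a constant baseline
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]         # samples at or above half maximum
    left, right = above[0], above[-1]

    def crossing(i_out, i_in):
        # linear interpolation between the sample below (i_out) and the
        # sample above (i_in) the half-maximum level
        return i_out + (half - p[i_out]) / (p[i_in] - p[i_out]) * (i_in - i_out)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right + 1, right) if right < len(p) - 1 else float(right)
    return x_right - x_left

print(fwhm_pixels([0, 1, 4, 9, 10, 9, 4, 1, 0]))   # ~3.6 pixels
```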

Table 1. Spot diameters in pixel numbers and absolute size in object space.

                 300 µm source                        1.5 mm source
           Focal image      Reconstructed       Focal image      Reconstructed
λ (nm)     pixels    mm     pixels    mm        pixels    mm     pixels    mm
404         2.3     0.21     4.9     0.45        14.5    1.33     15.8    1.45
435         6.9     0.63     8.2     0.75        17.4    1.60     17.5    1.61
546         4.5     0.41     4.7     0.43        13.9    1.28     14.2    1.30
578         4.3     0.39     3.0     0.28        12.1    1.11     12.4    1.14

3.2. Spectral Resolution

Similar to the spatial resolution, the spectral resolution ∆λ can be measured as the widths (FWHM) of the peaks at each of the four wavelengths. Figs. 8-9 compare the peak widths for the reconstructed and projected data sets.

Fig. 8. Spectral resolution for the 300 µm source: (o) as imaged (projected) and ( ) as reconstructed. Each spectral bin corresponds to a single camera pixel.

Fig. 9. Spectral resolution for the 1.5 mm source: (o) as imaged (projected) and ( ) as reconstructed. Each spectral bin corresponds to a single camera pixel.


For the projected data, the width of the peaks in wavelength space can be found from the linear dispersion, dλ/dx, by measuring the FWHM diameter of the image in the direction of the prism dispersion. In the object space, each bin on the wavelength axis is 1 nm. For the reconstructed data, the resolution is measured as the FWHM diameter of the objects along their center in the spectral dimension. The numerical values of ∆λ in nm at each wavelength are shown in Figs. 10-11, with the theoretical best achievable ∆λ calculated using Eq. (1). There is no consistent loss of spectral resolution upon implementation of the reconstruction algorithm. The data for the point source again show that the weak signal of the 404 nm line is causing some error, although it is unclear why the 435 nm results do not match better. The general trend of the results for the point source is consistent between the projected and reconstructed data. The large improvement in resolution at the 404 nm and 578 nm lines in the projected data is most likely due again to the dim sources.

Fig. 10. Spectral resolution for the 300 µm source: (○) as projected on the focal plane, () as reconstructed, and (♦) theory.

Fig. 11. Spectral resolution for the 1.5 mm source: (○) as projected on the focal plane, () as reconstructed, and (♦) theory.

Overall, the 1.5 mm source provides a more stable, higher signal-to-noise basis for the measurement. This is evident in the results obtained for spectral resolution. Fig. 11 shows that both measured data sets have the same slope as the theoretical values, which means there is no observable contribution from diffraction or from factors other than object size and prism dispersion. This result translates from the projected measurements to the reconstructed image space, with no discernible effect on resolution except for some random error.

4. Conclusions

The purpose of this study was to investigate the effect on spatial and spectral resolution of a particular method of data reconstruction used in chromotomography. Spatial resolution is slightly degraded with respect to the focal plane data as a result of the simple back projection algorithm used. This is to be expected, as the algorithm has well known artifacts which lead to slight blurring of the images. This can be remedied by using a more complex reconstruction algorithm, or by filtering the data. The average physical size of the reconstructed 1.5 mm object matched the true size; however, the size of the reconstructed 300 µm source was typically greater than 300 µm, which indicates that the general image quality of the system was degraded by poor focus or possibly diffraction at the small aperture. The spectral resolution also did not seem to be affected by the reconstruction in any consistent way. There were some random deviations from the resolutions calculated in the projected data, but they were not severe in the cases where the initial projection data were of good quality. The trends of the data relative to the theoretical values for ∆λ suggest that the instrument is performing as expected. In the instances where there was not good agreement with the results measured from projected data, the low quality of the source, due to dim light coupled with the small aperture size, was likely the problem. This is consistent with the results from the spatial resolution analysis, suggesting instrument limitations in imaging the 300 µm source. Given these initial results, the spectral and spatial resolution of data collected by a CT instrument and reconstructed into the object space are similar to, and no better or worse than, those obtained using a simple prism spectrometer. By rotating the prism and collecting the data at projected angles, spectral imagery can be reconstructed from the data sets at presumably very high speeds.

References

1. J.A. Orson, W.F. Bagby, and G.P. Perram, Infrared Signatures from Bomb Detonations, Infrared Physics and Technology 44, 101-107 (2003).
2. A.K. Brodzik and J.M. Mooney, Convex Projections Algorithm for Restoration of Limited-Angle Chromotomographic Images, Journal of the Optical Society of America 16 (2), 246-257 (1999).
3. A.K. Brodzik, J.M. Mooney, and M. An, Image Restoration by Convex Projections: Application to Image Spectrometry, Proceedings of SPIE 2819, 231-240 (1996).


4. J.M. Mooney, Angularly Multiplexed Spectral Imager, Proceedings of SPIE 2480, 65-77 (1995).
5. M. Gould and S. Cain, Development of a Fast Chromotomographic Spectrometer, Optical Engineering 44, 1111-1113 (2005).
6. M. Descour and E. Dereniak, Computed Tomography Imaging Spectrometer: Experimental Calibration and Reconstruction Results, Applied Optics 34, 4817-4826 (1995).
7. A.N. Dills, R.F. Tuttle, and G.P. Perram, Detonation Discrimination and Feature Saliency Using a Near-Infrared Focal Plane Array and a Visible CCD Camera, Proceedings of SPIE 5881, 123 (2005).
8. K.C. Gross, G.P. Perram and R.F. Tuttle, Modeling Infrared Spectral Intensity Data from Bomb Detonations, Proceedings of SPIE 5881, 100 (2005).
9. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances, SPIE Press, Bellingham, WA (2003).
10. G.M. Stevens, Filtered Backprojection for Modifying the Impulse Response of Circular Tomosynthesis, Medical Physics 28 (3), 372-380 (2001).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 531–544 © World Scientific Publishing Company

ADVANCED HYPERSPECTRAL ALGORITHMS FOR TACTICAL TARGET DETECTION AND DISCRIMINATION

A. SCHAUM
Naval Research Laboratory, 4555 Overlook Ave. SW, Washington, DC 20375
[email protected]

Region-of-interest cueing by hyperspectral imaging systems for tactical reconnaissance has emphasized wide area coverage, low false alarm rates, and the search for manmade objects. Because they often appear embedded in complex environments and can exhibit large intrinsic spectral variability, these targets usually cannot be characterized by consistent signatures that might facilitate the detection process. Template matching techniques that focus on distinctive and persistent absorption features, such as those characterizing gases or liquids, prove ineffectual for most hardbody targets. High-performance autonomous detection requires instead the integration of limited and uncertain signature knowledge with a statistical approach. Effective techniques devised in this way using Gaussian models have transitioned to fielded systems. These first-generation algorithms are described here, along with heuristic modifications that have proven beneficial. Higher-performance Gaussian-based algorithms are also described, but sensitivity to parameter selection can prove problematical. Finally, a next-generation parameter-free non-Gaussian method is outlined whose performance compares favorably with the best Gaussian methods.

Keywords: Hyperspectral; target detection; subspace.

1. Introduction

Some of the earliest concepts for detecting targets with hyperspectral systems have not been realized in real time systems. Here we first review some of these approaches, most of which were based on simplistic models of hyperspectral clutter, especially the Linear Mixing Model (LMM). Next, we describe the algorithms employed by the first operational sensors. These derive from statistical detection theory, with heuristic modifications informed by the LMM. Variants of these algorithms that are likely to be incorporated in next generation systems are described. Lastly, we speculate on the prospects of some advanced detection methods.

2. Linear Mixing Models

2.1. End members

One of the earliest models of natural spectral backgrounds is the Linear Mixing Model.1 It was meant as a tool for segmenting scenes into distinct constituents called end members (EMs). The model treats any sensed spectrum as a linear superposition of EM spectra corrupted by additive white noise. Many approaches have been developed to address the problem of deriving, from a hyperspectral image, spectra to represent the end members. Most of these seek to accommodate the spectral geometric shape called a simplex, which is defined as the convex set of all physically allowable mixtures of EMs. The simplex for two EMs is the line between them. For three (cf. Figure 1) it is the triangle defined by the points of their vectors. Four-component models define tetrahedra, whose faces are the three-component models. Each hyperface of a model defines another lower-order model. Several general approaches for finding the spectra of EMs in a given scene have been devised. When available, prior knowledge of site geology combined with visual confirmation can be used to select pixels to represent EMs. This is especially effective in areas with outcroppings of pure minerals.2 Image-based techniques can be more automated, for example "shrink wrapping" a simplex around a spectral data cloud or growing one3 from the inside until a minimal enclosing volume is found. Other methods exploit some of the statistical properties4 expected to distinguish EMs from mixed pixels.

Figure 1. Three end member example of a linear mixing model.

2.2. Unmixing

"Unmixing" refers to any procedure for estimating the EM fractional constituents that best describe the spectrum of a test pixel. The EM spectra are assumed to be known. Unmixing is usually undertaken in several steps, each representing a more faithful attempt to conform to the linear mixing model. In one approach, the first step projects a test pixel's spectrum into the Ne-dimensional subspace spanned by the Ne (assumed linearly independent) EM spectra. If {si} are vector representations of the discretized EM spectra, the EM matrix S is defined as

S ≡ ( s_1 s_2 s_3 ⋯ s_{N_e} ).    (1)

The associated physical model treats background pixels x as noise-corrupted linear mixtures of the EMs:

x = S f + n_noise.    (2)

The components fi of the vector f are the fractions of the corresponding EM materials assumed to constitute the test pixel. The standard least mean squared error (LMSE) estimate f̂ of f can be computed by multiplying the test pixel by the pseudo-inverse of S,

f̂ = S^{#} x ≡ (S†S)^{−1} S†x    (3)

with † denoting matrix transposition. The EM-based estimate of the test pixel x is then

Σ_i s_i f̂_i = S f̂ = S S^{#} x = P_S x.    (4)

The linear projection operator PS extracts from any vector x its component in the subspace {⊕ si } spanned by the EMs. In the example of Figure 2, PS x is called the “unconstrained projection” and lies in the three-dimensional subspace of the three EM spectra. It represents the first of three projections, starting from the test pixel, moving progressively through several subspaces, and ending at the simplex. A second step in unmixing, called “sum-constrained unmixing,” generates a new fraction vector f whose components must satisfy

Σ_i f_i = 1,    (5)

to ensure that the total abundance of EMs in a pixel is one. Equation (5) forces the estimate Sf of the test pixel x to lie in the affine (i.e. not containing the origin) subspace containing the EM spectra, which has a dimensionality one lower than the EM subspace. In the example depicted in Figure 2, the sum-constrained projection must lie on the plane containing the triangular simplex. The analytic form for the sum-constrained fraction estimate f is easily computed12 from a normal to the simplex. The final projection, onto the simplex, is more difficult.5,12 This step represents the "fully constrained" projection (see Figure 2) and finds the point on the simplex surface that is nearest Sf. The result is a third estimate f of the fractions that satisfies not only a sum constraint, but also an inequality appropriate for physical fractions:

Σ_i f_i = 1,   0 ≤ f_i ≤ 1.    (6)

In target detection applications, the point of these calculations is to enable the computation of some distance metric of a test pixel from a "model space." For example, the space could be the linear space {⊕ si}, or the affine space of EMs, or the convex set of points defining the simplex. It appears that in all applications using actual data (as opposed to simulations based on an LMM), no consistent incremental gain accrues from using fully constrained unmixing. This reflects the extreme simplicity of the LMM; it seldom provides an accurate detailed description of natural backgrounds.

Figure 2. Three standard projection procedures estimate a test pixel with different linear combinations of end members.
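To make the first two estimates concrete, the following minimal numpy sketch implements the unconstrained estimate of Eq. (3) and a sum-constrained correction; the Lagrange-multiplier form used for the latter is a standard equality-constrained least squares solution, offered only as one possible realization of the step described above, and the synthetic end members are invented for the example.

```python
import numpy as np

def unmix(x, S):
    """Unconstrained and sum-constrained LMM fraction estimates (Eqs. (3)-(5)).

    x : (n_bands,) test pixel spectrum
    S : (n_bands, n_em) matrix whose columns are the end member spectra
    """
    StS_inv = np.linalg.inv(S.T @ S)
    f_hat = StS_inv @ S.T @ x            # Eq. (3): pseudo-inverse estimate
    x_hat = S @ f_hat                    # Eq. (4): projection P_S x

    # Sum-constrained estimate: closest fractions satisfying sum(f) = 1
    # (standard equality-constrained least squares correction).
    ones = np.ones(S.shape[1])
    f_sum = f_hat + StS_inv @ ones * (1.0 - ones @ f_hat) / (ones @ StS_inv @ ones)
    return f_hat, f_sum, x_hat

# Tiny example with three synthetic end members in a 10-band space:
rng = np.random.default_rng(0)
S = rng.random((10, 3))
x = S @ np.array([0.2, 0.5, 0.3]) + 0.01 * rng.standard_normal(10)
f_hat, f_sum, _ = unmix(x, S)
print(f_hat, f_sum.sum())   # f_sum sums to exactly 1
```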

Furthermore, statistical methods such as a Principal Component decomposition of the covariance matrix can more simply isolate the spectral dimensions associated with background phenomenology.

3. Target Detection

Linear mixing models are convenient for segmenting an image into classes of approximately homogeneous spectra. LMMs are also frequently used to construct algorithms for detecting rare targets in a scene. Here we review some of these approaches before proceeding to the techniques deployed in a real time airborne system. Next-generation methods are then described. Finally, the prospects for some advanced methods based on elliptically contoured distributions are discussed.


3.1. LMM-based target detection

The most straightforward use of linear mixing models for target detection involves projecting a test pixel into the space orthogonal to the simplex. Examples are "Orthogonal subspace projection" (OSP)6 and a rival technique called "Constrained subspace detection" (CSD).7 Both assume an LMM and known EM spectra. OSP first projects out from a test pixel x its component PS x and treats the residual as noise (hypothesis H0), or noise plus (projected) target (hypothesis H1). CSD does the same, but uses a projection into the affine space of the simplex. Both methods are suboptimal, because they ignore any differences between target and clutter in the simplex space. (See [12] for a more complete discussion.) Other LMM-based detection methods simply treat a rare target as one more end member, and use a "rare occurrence" of fractions in that EM as a criterion to judge the presence of a target. Several other detection methods also delete the projection of a test pixel into a "background" subspace and perform statistical detection on the residual. But these often avoid the complications associated with finding the end members of a linear mixing model. Instead, they merely treat the highest several principal components (PCs) as the background subspace. Matched Subspace Detection (MSD)13 and Joint Subspace Detection (JSD)14 represent two distinct approaches for exploiting subspace ideas for target detection. Both use the PC method to define the clutter space, but differ considerably in how they derive a model target probability density function (pdf). The mathematical forms of the algorithms can differ significantly. MSD often uses the principle of the generalized likelihood ratio test (GLRT) to estimate unknown noise, and this produces a ratio form of test statistic. JSD uses a constrained GLRT, in which the unknown noise is subsumed into the detection threshold. The decision surfaces generated by the two methods are, however, both N-dimensional generalizations of conic sections, owing to the use of multivariate Gaussian models of pdfs. The general reason why HSI target detection methods work well against natural backgrounds is that the latter tend to have fewer independent constituents than the number of HSI channels. Background spectral scatter plots therefore tend to be concentrated in some subspace. A new material introduced into a scene, whether manmade or natural, often has a large projection outside that subspace, and this component accounts by far for the bulk of its detectability. However, this effect can be captured by even the simplest of textbook methods, some of which have already found their way into operational systems.
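As an illustration of the subspace idea, the sketch below evaluates a generic orthogonal-subspace-projection statistic: background directions are projected out of the test pixel and the residual is correlated with the target signature. This is a textbook-style rendering under the stated assumptions, not the exact formulation of references 6 or 7, and the synthetic spectra are invented.

```python
import numpy as np

def osp_statistic(x, B, t):
    """Generic orthogonal subspace projection detector.

    x : (n_bands,) test pixel spectrum
    B : (n_bands, n_bg) matrix of background basis spectra
        (end members or leading principal components)
    t : (n_bands,) target signature
    """
    # P_perp removes the component of any vector lying in the background
    # subspace spanned by the columns of B.
    P_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return t @ P_perp @ x          # large values suggest a target is present

# Tiny synthetic example: two background directions plus a target direction.
rng = np.random.default_rng(1)
B = rng.random((10, 2))
t = rng.random(10)
clutter = B @ rng.random(2)
print(osp_statistic(clutter, B, t), osp_statistic(clutter + 0.5 * t, B, t))
```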


3.2. Real time algorithms

The US Civil Air Patrol (CAP) is deploying a fleet of VNIR (Visible/Near Infrared) airborne hyperspectral sensor systems (ARCHER8) for Search and Rescue missions. The hardware designs and signal processing architecture were derived from systems operated by the Naval Research Laboratory (NRL). The algorithms used for detection were developed jointly by NRL and Space Computer Corporation (Los Angeles, CA). In ARCHER, an onboard signal processor ingests blocks of hyperspectral data that are used to estimate radiance statistics of the physical background (usually located beneath an airborne collection platform). These statistics are required for constructing all the onboard real time detection algorithms. Estimates

µ = (1/N) Σ_{i=1}^{N} x_i,     C = (1/N) Σ_{i=1}^{N} (x_i − µ)(x_i − µ)†    (7)

of the mean and covariance matrix describing the background distribution of the data are constructed from a block of N background training vectors, with xi the sensor-derived estimate of spectral radiance at pixel i. (These are subsequently updated adaptively.8) The operational spectral dimensionality is typically chosen to be at least several dozen, although this value usually results from a local spectral integration of the several hundred bands collected by a hyperspectral sensor. The approximations in Equation (7) are the maximum likelihood estimates of the first- and second-order statistics of a Gaussian model of the background probability density function (pdf)

p_B(x) = (2π)^{−N_C/2} |C|^{−1/2} exp[ −(1/2) (x − µ)† C^{−1} (x − µ) ],    (8)

with |C| the determinant of C and NC the number of spectral bands. Using the statistical estimates in Equation (7), two algorithms are currently implemented, called "RX" and "LMF." These represent two general classes of algorithm: anomaly- and signature-based detection.

3.2.1. The RX anomaly detector

The standard (RX) anomaly detector is a test for rare pixels, as determined by the model pdf of Equation (8). Unusual pixel values, corresponding to a pdf value less than some threshold, are labeled targets. This decision is logically equivalent to a test of the form

(x − µ)† C^{−1} (x − µ) > k ≡ threshold    (9)

for deciding that a pixel with radiance vector x contains a target. The spectral region corresponding to a target declaration is the exterior of a hyperellipsoid (Figure 3) defined by Equation (9).

3.2.2. The linear matched filter

The standard signature-based detector is the linear matched filter (LMF), which assumes the availability of a mean target signature value t. Knowing an approximate target vector allows the construction of a more selective decision surface than that defined by Equation (9), to wit, the hyperplane of Figure 3.

Figure 3. Gaussian clutter model with decision surfaces for anomaly detection and for the linear matched filter.

For the LMF, a target is declared at any pixel x satisfying

(t − µ)† C^{−1} (x − µ) > k.    (10)

The corresponding decision boundary hyperplane is perpendicular to the vector C^{−1}(t − µ). To see why this choice of hyperplane is a good one, it is convenient to view the data in a transformed representation of radiance space.

3.2.3. Whitened formulation

The RX and LMF detectors admit simple geometrical interpretations. In spectral space, their decision surfaces account for the non-sphericity of the background distribution. The RX decision boundaries (Figure 3) are nothing but the surfaces of constant background pdf (see Equation (8)). And instead of using the hyperplane orthogonal to the line between background and target means, the LMF decision surface has a normal that is rotated (by the action of the matrix C^{−1}) to account for possible asymmetry in the shape of the background distribution relative to (t − µ), the vector connecting target and background means. An intuitive explanation of these algorithms is easier if the asymmetry is removed. This can be accomplished by a "whitening" transformation of the radiance variable x

w ≡ C^{−1/2} (x − µ).    (11)

The square root C^{−1/2} of the covariance matrix is a Cholesky decomposition and is defined by

C^{−1/2} ≡ Λ D^{−1} Λ†,    (12)

where

C ≡ Λ D² Λ†    (13)

is a singular value decomposition (SVD) of the sample background matrix C, Λ is an orthonormal matrix (of eigenvectors of C), Λ† is its transpose, and D is the nonnegative diagonal matrix of root eigenvalues of C. Viewed in the whitened variable w, instead of the radiance variable x, the collection of background pixels has mean value zero and, more importantly, unit variance in any direction. This latter characteristic makes the particular forms of the RX and LMF algorithms obvious choices, as discussed below. The RX criterion (Equation (9)) for choosing "target" as the true hypothesis is equivalent in the whitened space to

‖w‖² > k,    (14)

that is, the Euclidean length of the vector w is compared to a threshold. Similarly, in whitened variables the LMF statistic (Equation (10)) becomes a Euclidean inner product

w_t · w > k,    (15)

with the whitened target signature defined by

w_t ≡ C^{−1/2} (t − µ).    (16)

These algebraic simplifications arise because the zero-mean variable w has a covariance matrix equal to the identity:

I = (1/N) Σ_{i=1}^{N} (w_i)(w_i)†    (17)


When viewed in terms of the variable w , Figure 3 transforms into Figure 4, in which the background distribution appears more spherically symmetric. Anomalies are, according to Equation (14), simply any pixels situated a sufficient distance from the mean of the background distribution. Because W is a linear transformation, it maps the hyperplane of Figure 3 into another hyperplane in Figure 4. If the background distribution is truly Gaussian, then in the whitened space it is also spherically symmetric. Consequently, the target mean vector wt defines the only preferred direction, to which the hyperplane decision surface must therefore be orthogonal. Equation (15) is the algebraic expression of this geometrical interpretation of the LMF decision surface. It represents a (scaled) projection of the test vector onto the target direction, which defines a one-dimensional preferred (whitened) subspace. The greater the projected value, the more likely is the corresponding pixel to contain a target.

Figure 4. Simplified geometry in whitened variables. The coordinate origin is actually at the center of the sphere.
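To connect Equations (7) and (11)-(16) to practice, the following sketch estimates the background statistics, forms the symmetric square root from an eigendecomposition of C, and evaluates the whitened RX and LMF statistics of Equations (14) and (15). It is a minimal illustration on synthetic data, not the ARCHER processing chain, and all names are illustrative.

```python
import numpy as np

def whitening_operator(X):
    """Return the background mean and C^(-1/2) = Lambda D^-1 Lambda^T built
    from the sample covariance of the training pixels X (N x n_bands)."""
    mu = X.mean(axis=0)                               # Eq. (7), mean
    C = np.cov(X, rowvar=False, bias=True)            # Eq. (7), covariance
    evals, Lam = np.linalg.eigh(C)                    # C = Lam diag(evals) Lam^T
    C_inv_sqrt = Lam @ np.diag(1.0 / np.sqrt(evals)) @ Lam.T   # Eq. (12)
    return mu, C_inv_sqrt

def rx_and_lmf(x, t, mu, C_inv_sqrt):
    """Whitened RX (Eq. (14)) and LMF (Eq. (15)) statistics for a test pixel x
    and mean target signature t."""
    w = C_inv_sqrt @ (x - mu)                         # Eq. (11)
    w_t = C_inv_sqrt @ (t - mu)                       # Eq. (16)
    return w @ w, w_t @ w

# Synthetic background block and a pixel with an additive target component.
rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(5), np.diag([4, 2, 1, 0.5, 0.1]), size=2000)
mu, W = whitening_operator(X)
t = np.array([1.0, 0.5, 0.2, 0.1, 0.3])
print(rx_and_lmf(X[0] + t, t, mu, W))
```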

3.3. Advanced Gaussian methods

The operational detection algorithms being deployed in ARCHER are relatively simple heuristic modifications of the above theoretically ideal methods. Meanwhile, research at NRL and elsewhere has produced algorithms that are more sophisticated. These methods can be generated by incorporating into a hypothesis test generalizations of the simplistic target and background models9 that are associated with the implemented algorithms. The LMF arises as the optimal solution to a detection problem. However, the simplicity of its planar decision surface depends strongly on two assumptions: (1) the background and target distributions are Gaussian, and (2) their second-order statistics (covariance matrices) are identical. Only the means can differ.


3.3.1. Joint subspace detection

A family of techniques that produces more discriminating surfaces than a hyperplane has been studied for several years. Joint Subspace Detection (JSD) is a detection concept10 motivated by the frequently found experimental fact that subspace versions of RX or LMF often work better than the boilerplate versions described above. For example, the real time version of RX that is most commonly implemented operationally is SSRX (Subspace RX), which is RX applied to all but the first several higher-variance dimensions (as determined by an SVD analysis) of the spectral representation of the background. The clutter subspace defined by these dimensions can be thought of, in the context of linear mixing models, as the subspace defined by an EM simplex. This subspace modification of RX improves performance because target projections into the clutter subspace typically appear embedded deeply within these higher-variance dimensions of the background clutter. RX applied in only these dimensions would produce worse performance than a coin flip. (Anti-RX, obtained by adding a minus sign to the left-hand side of Equation (14), would by contrast produce a nontrivial detector.) Similar considerations of phenomenology prompted the development of "Confined Target Models" (CTMs). These can generate quadric decision surfaces (Figure 5) in place of hyperplanes, thereby greatly enhancing the selectivity of hyperspectral detection algorithms. The general form of the CTM detector statistic in whitened space is

D(x) = w†w − (w − w_t)† M_eff^{−1} (w − w_t),    (18)

with w given by Equation (11).
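A minimal rendering of the statistic in Equation (18) is sketched below. The structure of Meff is a modeling choice; the isotropic σ²I used in the example corresponds to the template-matching limit discussed in the text and is only a placeholder.

```python
import numpy as np

def ctm_statistic(w, w_t, M_eff):
    """Confined Target Model statistic of Eq. (18) in whitened coordinates:
    D(x) = w'w - (w - w_t)' M_eff^-1 (w - w_t)."""
    d = w - w_t
    return w @ w - d @ np.linalg.solve(M_eff, d)

# Example: the template-matching limit M_eff = sigma^2 I with small sigma.
w_t = np.array([2.0, 0.5, 0.1])
M_eff = 0.01 * np.eye(3)
print(ctm_statistic(w_t, w_t, M_eff))              # on-target: large, positive
print(ctm_statistic(np.zeros(3), w_t, M_eff))      # background: large, negative
```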

Figure 5. Advanced detection algorithms produce more selective, quadric decision surfaces. CTM surfaces can include ghost surfaces, corresponding to the mirror branch of a hyperboloid.


The first term is the standard RX anomaly detector statistic (see Equation (14)). The second depends on target model details through Meff. The negative sign represents the inclusion of an anti-RX term. Its presence reflects the general idea that high-variance background dimensions that, by themselves, are not competitive with low-variance ones in enabling target detection can nevertheless be exploited to enhance target detection. Many other algorithms for multivariate discrimination can exploit accurate prior target signature knowledge. These correspond to a limiting form of Meff that approaches σ²I, with σ small. In this case, expression (18) is large and negative for all sample values w, except in a small sphere around the target mean. This is the template-matching limit, a luxury often unavailable to hyperspectral remote sensing applications because of uncertainties in target spectra. Unlike LMM-based detectors and SSRX, CTM does not simply delete clutter-dominated dimensions. Generally, the only situation in which sensed dimensions should be ignored by a detection algorithm is if the distributions of target and background pixels in those dimensions are identical. Large differences can be exploited, even when clutter dominates, that is, when target projections into those dimensions are swamped by clutter projections. One drawback to the current incarnations of CTM is the lack of a consistent method for estimating the target parameters needed to compute the optimal Meff. This could indicate that it is much more important to use the kind of selective surfaces (such as hyperboloids) for detection that CTM produces than to divine the correct target distribution model used to generate them. One final note on CTM detectors: generally, when different variabilities are associated with background and target distributions, the decision surfaces can include "ghost surfaces" associated with the mirror branch of a hyperboloid. They are revealed in dimensions where background distributions are less variable than target distributions. For example, in the Gaussian case, for any given threshold setting, there can be pixels on the far side (opposite the target mean t) of the background distribution in such dimensions that are assigned the label target by a likelihood ratio test. These regions are correct theoretical predictions, so long as the Gaussian distribution models are strictly correct, but one is loath in practice to associate real targets with such remote regions of spectral space, lying on the side of the clutter opposite the target mean.

3.3.2. ECD detectors

If the radiance variable x were truly Gaussian distributed, then the whitened form w, being derived from x by a linear transformation, would also be Gaussian. Moreover, with identity covariance matrix, the stochastic variable w would also be distributed with spherical symmetry. The general form of the Gaussian distribution (Equation (8)) would then reduce to the product of NC one-dimensional Gaussians in the variables wi. It is easy to show that the distribution of the RX statistic, w†w, should then be chi-squared with NC degrees of freedom. However, the chi-squared distribution has been observed to give a poor fit for typical hyperspectral data sets.11 Non-Gaussianity is an important consideration in devising CFAR (constant false alarm rate) detection algorithms. Better modeling, especially of the tails of the measured distributions, is vital for creating a functional autonomous detection capability with a stable error rate. One method of improved modeling allows a generalization of the Gaussian form to a class called elliptically contoured distributions (ECDs). Here we exploit ECDs for the purpose of target detection, rather than CFAR modeling. The most general form of ECD has pdf

p(x) = f( (x − µ)† C^{−1} (x − µ) ),    (19)

with C some arbitrary invertible matrix. It can be shown that any such matrix appearing as in Equation (19) must be proportional to the covariance matrix (whose sample value is given in Equation (7)). Therefore, we assume without loss of generality that the C of Equation (19) is the covariance matrix of the background data. We may then take the distribution to be

p(x) = f( w†w ),    (20)

with w given by (11). The whitened Gaussian is an example of an ECD:

p_B(x) = (2π)^{−N/2} exp( −(1/2) w†w ).    (21)

This suggests a natural generalization:

p_B(x) = k_p exp( −(1/2) ‖w‖^p ),    (22)

with the choice of parameter p = 2 corresponding to the standard Gaussian. The normalization constant k_p is chosen to make the pdf integral unity. Assuming the standard "additive target" model, the pdfs for background and target are related:

p_T(x) = k_p exp( −(1/2) ‖w − w_t‖^p ).    (23)


When the target and background distributions are both known, so also is the optimal detector, which is the (log) likelihood ratio test:

ln[ p_T(x) ] − ln[ p_B(x) ] > k,    (24)

which condition dictates the declaration “target” at pixel x. The algorithms described in (22) through (24) comprise a one-parameter family of ECD detectors. For example, for p = 1, the detector is equivalent to the simple test:

‖w‖ − ‖w − w_t‖ > k.    (25)
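The p = 1 member of this family is simple to evaluate once the whitened quantities are in hand; the sketch below computes the statistic of Eq. (25) and is not tied to any particular operational system.

```python
import numpy as np

def ecd_p1_statistic(w, w_t):
    """ECD detector statistic for p = 1 (Eq. (25)): ||w|| - ||w - w_t||.
    Its level sets are hyperboloids of revolution with foci at 0 and w_t."""
    return np.linalg.norm(w) - np.linalg.norm(w - w_t)

w_t = np.array([3.0, 0.0, 0.0])
print(ecd_p1_statistic(w_t, w_t))              # on-target: +||w_t||
print(ecd_p1_statistic(np.zeros(3), w_t))      # at the background mean: -||w_t||
```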

The bounding surface between decision regions is defined by (25) (with equality replacing the inequality), which is a classic definition of a hyperbola in two dimensions, and of a hyperboloid of revolution in higher dimensions. The foci are located at w = 0 and w = wt. The detector therefore produces a decision surface similar to the type shown in Figure 5 for CTM detectors. There is, moreover, no ghost. Recent tests conducted by NRL have ascertained the comparable detection performance of ECD and CTM detectors. Furthermore, while ECD-based algorithms can produce the kind of target/background decision surface usually associated with CTM methods, the selectivity of that shape for ECDs is not dependent on any differences between the modeled target and background distributions. Such differences are described by at least one parameter in CTM models. It has been found that the simplifying assumption of equal target and background covariance matrices, which also underlies the generation of the classic RX and LMF algorithms, generally suffices for producing CTM-type performance in ECD detectors. This means that highly selective decision surfaces can be generated without the need to optimize any parameters. The combination of performance and robustness associated with ECD methods makes them attractive candidates for eventual operational implementation.

4. Summary

We have reviewed the linear mixing model, one of the earliest concepts for understanding hyperspectral imagery. Next, we described core detection algorithms implemented in the first operational hyperspectral detection systems. A family of next-generation methods currently being tested with data derived from experimental sensor system programs was also described, and its relationship to LMMs discussed. These methods include algorithms generated from Gaussian assumptions, as well as a family associated with non-Gaussian models. The fine selectivity of the decision surfaces associated with these latter algorithms, achieved without the need for parameter optimization, makes them attractive candidates for future real time HSI detection systems.


References

1. H.M. Horwitz, R.F. Nalepka, P.D. Hyde, J.P. Morgenstern, "Estimating the proportions of objects within a single resolution element of a multispectral scanner," in Proc. Seventh Int. Symp. Remote Sensing of Env., Ann Arbor, MI, 1971, pp. 1307, 1320.
2. R.N. Clark, G.A. Swayze, and A. Gallagher, Mapping Minerals with Imaging Spectroscopy, U.S. Geological Survey, Office of Mineral Resources Bulletin 2039, pp. 141-150, 1993.
3. M.E. Winter, "N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data," SPIE Conference on Imaging Spectrometry V, SPIE Vol. 3753, pp. 266-275, Denver, Colorado, July 1999.
4. J.W. Boardman, "Automating spectral unmixing of AVIRIS data using convex geometric concepts," Summaries of the Fourth Annual JPL Airborne Geosciences Workshop, R.Q. Green ed., 1994.
5. E.A. Ashton and A. Schaum, "Algorithms for the detection of sub-pixel targets in multispectral imagery," Photogramm. Eng. Remote Sens., vol. 64, no. 7, pp. 723-731, Jul. 1998.
6. J. Harsanyi and C. Chang, "Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach," IEEE Trans. Geosci. Remote Sensing, vol. 32, pp. 779-785, July 1994.
7. S. Johnson, The constrained signal detector, IEEE Trans. Geosci. Remote Sensing, Vol. 40, No. 6, June 2002.
8. B. Stevenson et al., Design and Performance of the Civil Air Patrol ARCHER Hyperspectral Processing System, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, Proc. of SPIE Vol. 5806 (SPIE, Bellingham, WA), 28 March - 1 April 2005, pp. 731-742.
9. A. Schaum, Joint Subspace Detection Applied to Hyperspectral Target Detection, Proc. 2004 IEEE Aerospace Conference, IEEE Catalog Number 04TH8720C, ISBN 0-7803-8156-4, 10 March 2004.
10. A. Schaum and A. Stocker, Joint Hyperspectral Subspace Detection Derived from a Bayesian Likelihood Ratio Test, Proceedings of SPIE Vol. 4725, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, pp. 225-233, April 2002.
11. D. Manolakis and M. Rossacci, Statistical Characterization of Natural Hyperspectral Backgrounds using t-Elliptically Contoured Distributions, Proceedings of SPIE Vol. 5806, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, pp. 56-65, March 2005.
12. A. Schaum, Hyperspectral Detection Algorithms: From Old Ideas to Operational Concepts to Next Generation, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, Proc. of SPIE, 17 April 2006.
13. G. Healey and D. Slater, "Models and Methods for Automated Material Identification in Hyperspectral Imagery Acquired under Unknown Illumination and Atmospheric Conditions," IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, No. 6, November 1999.
14. A. Schaum and A. Stocker, Joint Hyperspectral Subspace Detection Derived from a Bayesian Likelihood Ratio Test, Proceedings of SPIE Vol. 4725, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, pp. 225-233, April 2002.

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 545–556 © World Scientific Publishing Company

AIRIS — THE CANADIAN HYPERSPECTRAL IMAGER; CURRENT STATUS AND FUTURE DEVELOPMENTS

PIERRE FOURNIER, TRACY SMITHSON, DANIEL ST-GERMAIN
Optronic Surveillance, Defence R&D Canada Valcartier, 2459 blvd Pie-XI Nord, Québec City, Québec, G3J 1X5, Canada
[email protected]

The Defence Research and Development Canada Agency has successfully completed a Technology Demonstration Program to assess the military utility of airborne hyperspectral imagery. This required developing a sensor, the Airborne Infrared Imaging Spectrometer (AIRIS), and collecting in-flight imagery data. AIRIS was designed as a flexible instrument using a Fourier Transform spectrometer with a spectral resolution ranging from 1 to 16 cm-1, wide spectral coverage (2 to 12 microns), and different optical configurations. This paper provides a description of AIRIS and discusses examples of the spectral images collected during one air-trial. Emphasis is put on images of sub-pixel targets. Processing AIRIS data is labor intensive and can only be performed during post-trial analysis. Hardware and software modifications to AIRIS will implement a real-time processing capability over the next three years. These modifications will enable the instrument to output radiometrically calibrated digital spectrograms. These spectrograms will then be processed in real-time to output target detection and identification for selected target types.

Keywords: Fourier Transform spectroscopy, hyperspectral imagery, infrared, imaging spectroscopy, spectroscopy.

1. Introduction

The Defence Research and Development Canada (DRDC) Agency has successfully completed a Technology Demonstration Program (TDP) to assess the military utility of airborne hyperspectral imagery. This required developing a sensor, the Airborne Infrared Imaging Spectrometer (AIRIS), and collecting in-flight imagery data by flying AIRIS in the National Research Council's (NRC) Convair 580 aircraft (Figure 1). AIRIS was designed with the aim of producing a flexible instrument to study a wide range of applications. AIRIS simultaneously operates two detector arrays to cover the 2.0 to 5.3 µm region (InSb technology) and the 3.3 to 12 µm region (HgCdTe, or MCT technology). Three data collection flight tests were conducted in the summer of 2005. The first test collected phenomenological data over rural, suburban and urban areas in the Ottawa area. The second test was conducted in Suffield and used several different target types (a mix of hot and ambient temperature targets, as well as some industrial sites). The last flight test occurred in the Halifax area and collected phenomenological data over the Atlantic Ocean. Sub-pixel targets from the Suffield and Ottawa flight tests will be discussed in this article.

Fig. 1. NRC’s Convair 580 aircraft in which AIRIS was flown in 2005.

2. Instrument description

AIRIS was designed with the objective of constructing a flexible instrument to study a wide range of airborne infra-red hyperspectral imaging applications. Its data provides spectral, spatial and temporal information about the target being measured, as well as broadband video imagery. These characteristics make AIRIS an excellent instrument to define the parameters for an operational sensor for a specific set of missions.

2.1. Spectral and broadband imagery

One unique feature of AIRIS is its simultaneous operation of hyperspectral imagery and Wide Field of View (WFOV) broadband video. The video cameras cover the 0.5 µm to 12 µm region of the spectrum. The images captured by those cameras add to the data set collected during the trial by providing contextual spatial imagery to help the analysis of spectral imagery data. Those video images are recorded on a dedicated Hard Disk Drive (HDD), independently from the hyperspectral data. Figure 2 illustrates conceptually how broadband video, the tracking area for spectral imagery, and the 8x8 spectral image itself are related to each other.

Fig. 2. Broadband video, tracking area and spectral imagery.


2.2. Optical configurations

AIRIS is designed to support two optical configurations named after their magnification power: the 3X and the 9X configurations (Figure 3). The 3X configuration utilizes a 3X afocal relay telescope and a flat mirror pointing/scanning system. Two off-axis parabolic mirrors make up the 3X telescope. Two other flat mirrors fold the optical path to direct light into the 3X afocal relay. The 9X configuration is obtained by replacing the flat mirror pointing/scanning system with a Gregorian-type telescope that adds a power of 3X to the previous telescope. The Gregorian telescope is designed with an oversized primary mirror and a movable secondary mirror, allowing this system to point at and track targets. Only the lower part of the optical system needs to be changed to go from the 3X to the 9X optical configuration (below the dotted line in Figure 3). The key factors that influenced the design of the optical system are the 15-inch diameter of the down-looking window on the aircraft, the interferometer's throughput, and the detector array size.

Fig. 3. The 3X (left) and the 9X (right) optical configurations.

Both optical configurations also include an optical relay that directs an image of the scene to three broadband video cameras. Dichroic splitters are used to direct Long Wave Infrared (LWIR), Mid Wave Infrared (MWIR) and Visible (VIS) radiation into the proper video camera, whose characteristics are given in Table 1.

Table 1. Parameters of the broadband video cameras.

Spectral region    LWIR          MWIR          VIS
Detector type      MCT           InSb          CCD
Array size         320x240       640x480       684x484
Waveband           8 to 10 µm    3 to 5 µm     0.4 to 0.9 µm

Figure 4 shows an example of the imagery collected by one of the three broadband WFOV cameras (in this case the MWIR camera). The scene is a gas plant in the vicinity of Suffield. Figure 4 also shows a view of the three zones illustrated in Figure 2. The large grey-level image is the WFOV covered by the three broadband cameras. The large square delimited by the four yellow right angles shows the 64x64 pixel tracking area. The yellow square with the cross-hairs marker is the hyperspectral imagery Total Field of View (TFOV) of the imaging system for the 8x8 detector arrays. The purple diamond is the target location. The red diamond is just an artifact of the system.

Fig. 4. Example of imagery captured by one of the three broadband WFOV cameras.

The footprint of the instrument depends on the optical configuration and on aircraft altitude and airspeed. Table 2 shows the footprint of each individual pixel, of the 8 x 8 pixel array, and of the 64 x 64 pixel tracking zone as a function of the optical configuration (9X or 3X) and aircraft altitude. The resolution of a single pixel is 1 milliradian with the 9X configuration, and 3 milliradians with the 3X configuration. The number of scans in Table 2 is based on a 16 Hz frame rate. The Signal to Noise ratio (S/N) improvement is obtained by co-adding consecutive frames of a target.

Table 2. AIRIS footprint area, Time in tracking zone, and Number of scans.

Config  Altitude  HSI pixel        HSI FOV           Tracking area     Tracking time  Number of scans  S/N
                  footprint        (8x8 pixels)      (64x64 pixels)    @ 100 m/s      @ 4 cm-1         improvement
9X      1 km      1.2x1.2 m2       9.5x9.5 m2        76x76 m2          0.76 s         12.2             3.5
9X      2 km      2.4x2.4 m2       19x19 m2          152x152 m2        1.5 s          24               4.9
9X      3 km      3.6x3.6 m2       28.6x28.6 m2      228x228 m2        2.3 s          36.8             6.1
3X      1 km      3.6x3.6 m2       28.6x28.6 m2      228x228 m2        2.3 s          36.8             6.1
3X      2 km      7.1x7.1 m2       57x57 m2          457x457 m2        4.6 s          68.8             8.3
3X      3 km      10.7x10.7 m2     86x86 m2          685x685 m2        6.9 s          110.4            10.5
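The entries in Table 2 follow from the single-pixel footprint, the 16 Hz scan rate and the 100 m/s airspeed. The short sketch below reproduces them; the per-kilometre pixel footprints of 1.2 m (9X) and 3.6 m (3X) are read off the table itself, and small departures from the printed values are rounding.

```python
import math

FRAME_RATE_HZ = 16.0      # scan rate at 4 cm-1 resolution (Table 2)
AIRSPEED_MS = 100.0       # nominal Convair 580 airspeed

def footprint_rows(footprint_m_per_km, altitudes_km):
    """Reproduce the Table 2 quantities for one optical configuration.

    footprint_m_per_km is the single-pixel footprint per km of altitude;
    the tabulated values imply 1.2 m/km (9X) and 3.6 m/km (3X).
    """
    rows = []
    for h in altitudes_km:
        pixel = footprint_m_per_km * h           # pixel footprint, m
        fov = 8 * pixel                          # 8x8 HSI field of view, m
        track = 64 * pixel                       # 64x64 tracking area, m
        t = track / AIRSPEED_MS                  # time in tracking zone, s
        scans = t * FRAME_RATE_HZ                # co-added scans
        snr = math.sqrt(scans)                   # S/N improvement ~ sqrt(N)
        rows.append((h, pixel, fov, track, t, scans, snr))
    return rows

for row in footprint_rows(1.2, [1, 2, 3]):       # 9X configuration
    print("9X", row)
for row in footprint_rows(3.6, [1, 2, 3]):       # 3X configuration
    print("3X", row)
```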

Pixel size, flight altitude and flight speed set the time spent by the target in the tracking zone. The average Convair 580 airspeed during imagery data collection was 100 m/s (360 km/h = 195 knots). Table 2 shows the time spent in the tracking zone as a function of optical configuration and altitude based on a flight speed of 100 m/s. When pixel size is the smallest (1.2 x 1.2 m2), the target spends only 0.76 s in the tracking area. This is very demanding for the tracking system, especially in the presence of turbulence. When pixel size is the largest (10.7 x 10.7 m2), the target spends 6.9 s in the tracking area. Selection of flight altitude is a trade-off between pixel size, spatial resolution, intensity on the detector arrays and the demand on the tracking system. All the spectral imagery data from the flight trials of 2005 was collected with the 3X optical configuration, most often at an altitude of 3 km. At this altitude, only the explosions and the industrial sites covered more than one pixel. Some data was collected at an altitude of 1 km. However, low altitude flight is prone to higher turbulence, which is very demanding on the tracking system.

2.3. Interferometer module

AIRIS uses an ABB MR300 commercial Michelson interferometer module. The MR300's maximum spectral resolution is 1 cm-1. Spectral resolution can be changed on the fly to 1, 4, 8, or 16 cm-1. The modulator has two output ports, in front of which the detector arrays are located. Filters are mounted on a sliding plate in front of the detectors to allow 100%, 10% and 3% transmission (Figure 5). Using filters is sometimes necessary to avoid detector saturation when measuring hot targets like explosions or industrial plants. The filters are independent for each detector array and can be changed on the fly.

Fig. 5. Box containing the MR300 interferometer.

2.4. Detector arrays

AIRIS uses two 8x8 detector arrays to cover the 2.0 to 12 µm region. The InSb detector array covers the 2.0 to 5.3 µm (5000 cm-1 to 1900 cm-1) region. The MCT detector array covers the 3.3 to 12 µm (3500 cm-1 to 840 cm-1) region. Both detector arrays are 3 mm x 3 mm. Each individual detector is approximately 350 µm x 350 µm. The rather large size of the IR detectors in AIRIS was chosen to provide the highest spatial coverage possible. The challenge in fabricating IR detectors of this size is to obtain uniform doping over such a large area and to be free of defects. Actual detector size varies a little because of their connectors, and the calibration procedure includes a compensation for the different detector sizes. Note that AIRIS is capable of single and simultaneous dual detector operation.


Detector cooling is obtained with a liquid nitrogen Dewar for the InSb and with a Stirling cooler for the MCT. The MCT detector was temporarily mounted in a liquid nitrogen Dewar because the Stirling cooler was not ready for the air trials in 2005.

2.5. Operating system

Three computers are required to operate AIRIS. Two run the interferometer and data acquisition system operating software, and the other one runs the navigation and tracking software. Both software programs provide near real-time feedback to the operators. Figure 6 shows the Graphical User Interface (GUI) from which the interferometer and the acquisition system are operated. The interferometer and data acquisition system operating software runs on two computers set up in a master and slave configuration. The master computer controls the MCT detector array, and the slave computer controls the InSb detector array. All interferometer and data acquisition system parameters, except gain, are set through the master computer for both detectors. Detector gain is the only parameter that is set independently for each detector. The interferometer and data acquisition system GUI is divided into three main panels. The rightmost panel is for setting interferometer and data acquisition system parameters. Detector gain (1 to 128), sampling factor and speed, spectral resolution (1 to 16 cm-1) and scan parameters (number of sweeps, single scan or co-addition selection, file name, etc.) are set in that part of the interface. The data acquisition system can acquire data during either a fixed number of scans or an undetermined number of scans.


Fig. 6. Interferometer and data acquisition system GUI.

Feedback from the data acquisition system is displayed on the left and central panels of the GUI. The main feedback item is a color-coded 8x8 pixel matrix of total intensity that ranges from black (lowest intensity) to red (highest intensity). The upper part of the central panel shows raw data from up to 4 user-selected pixels. The lower right portion of


the central panel shows a matrix of relative pixel intensities calculated relative to the dynamic range of the detectors. All this feedback is provided in near real-time at an update rate of approximately 1 Hz. Figure 7 shows the Navigation and Tracking GUI. This software controls the opto-mechanical pointing device, using data from the AIRIS GPS/INS* unit to calculate the steering angle transmitted to the mirror. An Inertial Measurement Unit (IMU) is attached to the AIRIS platform, providing the exact attitude of the platform. The software accepts a list containing the latitude, longitude and altitude of the targets. Using the aircraft coordinates, it calculates the position of the target and shows it in the WFOV image. The steering mirror angular position and speed are updated with IMU data to track the target. The software also records a complete event log including full-frame video recording of the three WFOV cameras, scanning mirror angles, target events, and aircraft position and attitude logs.
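As a rough illustration of the pointing calculation described above, the sketch below converts aircraft and target coordinates into a line-of-sight azimuth and elevation in a local level frame using a flat-earth approximation; the platform attitude from the IMU would then be removed to obtain the mirror steering angles. The function name, the flat-earth simplification and the sample coordinates are assumptions made here for illustration, not the actual DRDC navigation software.

import math

def los_angles(ac_lat, ac_lon, ac_alt_m, tg_lat, tg_lon, tg_alt_m):
    """Line-of-sight azimuth/elevation from aircraft to target (flat-earth approximation)."""
    r_earth = 6371000.0
    # Local east/north offsets of the target relative to the aircraft (small-angle approximation).
    d_north = math.radians(tg_lat - ac_lat) * r_earth
    d_east = math.radians(tg_lon - ac_lon) * r_earth * math.cos(math.radians(ac_lat))
    d_up = tg_alt_m - ac_alt_m                         # negative when the target is below the aircraft
    azimuth = math.degrees(math.atan2(d_east, d_north)) % 360.0
    ground_range = math.hypot(d_east, d_north)
    elevation = math.degrees(math.atan2(d_up, ground_range))
    return azimuth, elevation

# Aircraft at 3 km altitude, target on the ground about 2 km to the east (illustrative numbers).
print(los_angles(46.90, -71.50, 3000.0, 46.90, -71.474, 0.0))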


Fig. 7. Navigation and tracking GUI.

2.6. Data storage

Hyperspectral imagery produces very high data rates. Limiting the hyperspectral image to an 8x8 array, which makes AIRIS an interrogating system rather than a surveillance system, is one way of limiting the data rate. Yet, as Table 3 shows, the data rates generated by AIRIS can be quite high.

* GPS/INS: Global Positioning System/Inertial Navigation System.

Table 3. Spectral resolutions, frame rates, frame sizes and data rates.

Spectral resolution   Frame rate   Frame size     Data rate
1 cm-1                4 Hz         8 MB/frame     32 MB/s
4 cm-1                16 Hz        2 MB/frame     32 MB/s
16 cm-1               50 Hz        0.5 MB/frame   25 MB/s

AIRIS uses two independent data acquisition systems, one for each detector array. Data comes out of the pre-amplifier boxes in 64 co-axial cables (one for each pixel). These cables are shielded against electromagnetic interference generated by the aircraft electrical system and are connected to Analog-to-Digital Converter boards that convert the raw analog interferograms into digital numbers. These are then grabbed by a compressor board and sent in packets to a 250 GB HDD, where the data is written in a "tape streamer" format. This format is simply a stream of digital numbers containing the raw interferograms and the contextual data on how they were generated (time stamp, spectral resolution, etc.); it is used to save the data as fast as possible. It is only when the data is transferred from the data acquisition system to an external computer that it is written in the BOMEM File System (BFS) format†. Data transfer from the acquisition system to an external computer is done only after the flight because it can take several hours, depending on how much data was collected.
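The figures in Table 3 translate directly into storage requirements for a sortie. The short calculation below uses the 4 cm-1 operating point (2 MB per frame at 16 Hz, i.e. 32 MB/s per acquisition system) from Table 3; it is only an illustration of the bookkeeping, since the actual volume depends on how much of a flight is spent recording and on whether one or both detector arrays are active.

def raw_volume_gb(frame_size_mb, frame_rate_hz, recording_minutes, n_acquisition_systems=2):
    """Raw interferogram volume written to the 'tape streamer' store, in gigabytes."""
    seconds = recording_minutes * 60.0
    total_mb = frame_size_mb * frame_rate_hz * seconds * n_acquisition_systems
    return total_mb / 1024.0

# 4 cm-1 resolution: 2 MB/frame at 16 Hz, i.e. 32 MB/s per acquisition system (Table 3).
for minutes in (10, 30, 60):
    print(f"{minutes:3d} min of recording -> {raw_volume_gb(2.0, 16.0, minutes):6.1f} GB")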

Fig. 8. Close-up view of the data acquisition system.

3. Data processing

Due to the very large amount of data generated by AIRIS and the lack of a real-time processing capability, data processing is very labor intensive, can only be done during post-trial analysis, requires specialized and scarce expertise, and cannot at present produce the information required by military intelligence in a timely manner.

† The BFS file format is ABB proprietary.


The processing and visualization tools currently used to calibrate the data and to find the frames and the pixels that contain the target were initially developed for ground-based IR HSI, a field in which DRDC Valcartier worked for several years before developing AIRIS. A real-time processing capability was not included in AIRIS because most of the effort of the TDP went into building a working instrument and collecting data with it to show the military utility of airborne IR HSI.

4. Examples

This section presents selected examples of targets that were measured during the air trials in 2005. All of these examples are sub-pixel targets. They were measured from an altitude of 2 to 3 km above the target with the 3X optical configuration; this translates into a hyperspectral pixel footprint of 7.1x7.1 m2 at 2 km and 10.7x10.7 m2 at 3 km.

4.1. Helicopter

A DRDC anomaly detection algorithm was used to search the raw hyperspectral data for a helicopter hovering above the ramp. Figure 9 shows this target as seen by the MWIR (left) and LWIR (right) broadband video cameras. Due to the lack of spatial resolution, it is not possible to identify the target (the bright spot in the MWIR image). All that can be said is that the target is a hot spot in the MWIR band and a cold spot in the LWIR band, and that it is surrounded by dark zones in both bands.

Fig. 9. MWIR (left) and LWIR (right) images of the helicopter target.

The anomaly detector found the target in pixel 18 of the first in a series of frames that contain the target. Figure 10 shows the raw target signature (red), the average background signature (black), and the target-only signature (green) for pixel 18. The target-only signature results from subtracting the average background from the raw target signature. The following observations are made: (i) all the information at short wavelengths (beyond 4000 cm-1) is sun glint; (ii) the target is not significantly more reflective than the background; (iii) there is evidence of hot CO2, which is indicative of a hot engine; and (iv) the relative intensities of the background and target-only signatures on both sides of the CO2 absorption indicate that the target is hotter than the background.
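The target-only signature described above is a simple background-subtraction step: the average spectrum of the surrounding background pixels is removed from the spectrum of the target pixel. A minimal numpy sketch of that operation is shown below; the array shapes, the synthetic data and the choice of which pixels count as background are illustrative assumptions, not the DRDC processing chain.

import numpy as np

def target_only_signature(cube, target_pixel, background_pixels):
    """Subtract the average background spectrum from the target pixel spectrum.

    cube: array of shape (n_pixels, n_wavenumbers) holding calibrated spectra
          for one frame of the 8x8 array (64 pixels flattened).
    """
    background = cube[background_pixels].mean(axis=0)   # average background signature
    raw_target = cube[target_pixel]                      # raw target signature
    return raw_target, background, raw_target - background

# Illustrative frame: 64 pixels x 1000 spectral points of synthetic data.
rng = np.random.default_rng(0)
frame = rng.normal(1.0, 0.05, size=(64, 1000))
frame[18] += 0.3                                         # pixel 18 contains the "target"
raw, bkg, target_only = target_only_signature(frame, 18, [i for i in range(64) if i != 18])
print(target_only.mean())                                # close to 0.3 for this synthetic example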


Fig. 10. The raw target, the average background, and the target-only signatures.

4.2. Propane BBQ

A propane BBQ was placed in a wooden tray and turned on to maintain a temperature of 340 °C (Figure 11). In this case, a DRDC visualization tool was used to find the target in the data.

Fig. 11. The propane BBQ in its 8 ft x 8 ft wooden tray on a gravel road.

The hot BBQ shows up as a very bright pixel in both bands, even in the uncalibrated data of the MCT detector array (Figure 12, left, bottom). Note that the images are rotated because of the interferometer's optics. In this example the road is hotter (although not by much) than its surroundings in the InSb band and colder than its surroundings in the MCT band. The higher intensity of the road in the InSb band is due to a reflected contribution from the sun (as evidenced by the contribution at short wavelengths). The lower intensity of the road in the MCT band is due to an unidentified absorption band. Figure 12 (right) shows the total frame of spectra for the two detectors from which the total pixel intensity images in Figure 12 (left) were generated. Total frame spectra provide a more detailed view of the target and its surroundings.


Fig. 12. Total pixel intensity (left) and total spectra (right) matrices for InSb and MCT.

Figure 12 (left) clearly shows how the BBQ stands out from its background in both bands, and how the road is very similar to its surroundings in the InSb band while it is colder than its surroundings in the MCT band.

4.3. Sand-blasted aluminum sheets

Sand-blasted aluminum sheets (Figure 13) provide an example of a target that has positive contrast in the MWIR band and negative contrast in the LWIR band.

Fig. 13. Sand blasted aluminum sheets.

Figure 14 shows the spectra of the pixel containing the aluminum sheets (red) and adjacent background pixels (black and green). A close examination of the spectra shows that, except for the higher intensity of the aluminum sheets in the MWIR band, their spectrum is very similar to that of the background pixels. The high intensity is due to a large amount of sun glint off the aluminum sheets, especially from 2400 to 3000 cm-1. Removal of this glint contribution suggests that the spectrum of the aluminum sheets is very similar to that of the background (the gravel road). In the LWIR band the aluminum sheets have a lower intensity than the background (especially between 800 and 1300 cm-1), indicating that they are not absorbing heat from the sun but rather reflect most of the incident solar energy, as the MWIR band indicates. Beyond 1300 cm-1, the aluminum sheets have an almost identical signature to their surroundings.


Fig. 14. Sand-blasted aluminum sheets spectra.

5. Conclusion

The HSI TDP demonstrated that IR HSI can enhance the Canadian Forces' Intelligence, Surveillance and Reconnaissance capabilities. A very flexible IR HSI sensor was designed, built, and successfully test-flown. Its hybrid hyperspectral/broadband video and simultaneous dual-band operation capabilities make AIRIS a unique sensor. The data collected during the summer of 2005 on ground and maritime targets are of both military and civilian interest. Analysis demonstrated the military utility of IR hyperspectral sensors and will continue over the next few years. The next development phase will implement a real-time processing capability in AIRIS, which will make its data more directly exploitable by military operators.

6. References

1. T. Smithson, D. St-Germain, and J. M. Garneau, "Canadian Airborne Hyperspectral Imager Development," NATO Symposium on Emerging EO Phenomenology, SET 094/RSY019, Berlin, Germany, 10-11 Oct 2005, not publicly available.
2. P. Fournier, T. Smithson, and D. St-Germain, "Technical Description of the AIRIS Sensor," DRDC Valcartier Technical Memorandum, unpublished.
3. P. Fournier, T. Smithson, and D. St-Germain, "AIRIS – The Canadian Hyperspectral Imager," IEEE 2006 Conference, IEEE Workshop, unpublished.

7. Contributors

The following industries contributed to the development of AIRIS:
(i) ABB – MR interferometer, ADCs and data acquisition system
(ii) Fermionics – LWIR detector array
(iii) LR Tech – Detector integration, pre-amps and coolers
(iv) LyrTech – Software
(v) Telops – Opto-mechanical design

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 557–567 © World Scientific Publishing Company

THE HYPERTEMPORAL-HYPERSPECTRAL ANALYSIS TEST STATION — HYHATS

TOM OLD, ROY HENDRICK, DAVE HIGHAM, NELSON PALMER
Alliant Techsystems Inc. (ATK Mission Research)
P.O. Drawer 719, 735 State Street, Santa Barbara, CA 93102-0719

CHRIS MANNING
Manning Applied Technology Inc.
419 S. Main Street, P.O. Box 265, Troy, ID 83871

ATK Mission Research describes a very high speed FT-IR spectrometer that collects 1000 interferograms per second at 4 cm-1 resolution in the 2-5 micrometer bandpass. The field of view of the instrument is about 1 degree. Photons are collected on a single cryo-cooled InSb detector, with the output data streamed to a RAID device for later processing. The system uses a lightweight, air-bearing rotating mirror to achieve rapid variation of the optical path length. Tilt compensation optics include a corner cube in a variation of the Michelson arrangement. The instrument is a one-man-portable, tripod-mounted unit with off-unit data collection electronics. The system is designed for field deployment where measurements of energetic events are desired. Applications are commercial and military.

Keywords: Hypertemporal hyperspectral; Field portable; Spectrometer; HyHATS.

1. Background Information

Alliant Techsystems Inc. (known as ATK) is funding the development of a high speed, high resolution, field deployable spectrometer for making laboratory and field measurements relevant to its core business areas. The program to develop the instrument is planned as a twenty-four-month effort, with this report giving the status of the instrument 12 months into the project.


ATK's core business areas involve advanced military hardware, rockets, explosives and special sensors. These weapons utilize a variety of rapid chemical reactions. Understanding the evolution of the reaction products requires very fast, fairly high resolution spectra collected remotely. The instrument we are building will measure individual interferograms every 1 millisecond with a spectral resolution of 4 cm-1. This time-resolution capability is sufficient to empirically address many rocket, bomb and high speed signature phenomena.

ATK manufactures a number of optical sensors that rely on narrow band spectral power measurements. One such instrument is the AN/AAR-47 missile attack warning system shown in Figure 2. This system uses multi-band optical radiation collected in short time frames. The system is undergoing a major upgrade at this time. ATK is searching for new missile signature phenomena that can be exploited to assure missile launch warning with very few false alarms. Missile warning for commercial aircraft requires false positive alert probabilities well below the current AAR-47 technology limits.

Figure 1. Ordnance manufactured by ATK that will be studied using the HyHATS sensor system. Real-time chemistry and dynamics measurements of cannon shots are needed by ATK ordnance developers.

The objective of the current HyHATS project is to develop an instrument capable of measuring calibrated high speed, high resolution spectra for standoff measurement of rapid transient events. The system is required to be rugged and field portable. The specification is at least 1000 frames per second at 4 cm-1 resolution. No commercial instrument can meet such specifications.
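The headline specification has straightforward sampling implications. Under the common approximation that about 4 cm-1 resolution requires an optical path difference sweep of roughly 0.25 cm, each 1 ms interferogram spans a few thousand fringes of a 635 nm reference laser, which a 100 MSPS digitizer samples with room to spare. The back-of-the-envelope check below is illustrative only and is not a design calculation from the HyHATS team.

OPD_SWEEP_CM = 1.0 / 4.0          # ~0.25 cm OPD for ~4 cm-1 resolution (approximation)
SCAN_TIME_S = 1.0e-3              # one interferogram per millisecond (1000 frames/s)
ADC_RATE_SPS = 100e6              # 100 MSPS digitizer
LASER_WAVELENGTH_CM = 635e-7      # 635 nm reference diode

fringes_per_scan = OPD_SWEEP_CM / LASER_WAVELENGTH_CM
samples_per_scan = ADC_RATE_SPS * SCAN_TIME_S
print(f"reference fringes per scan : {fringes_per_scan:,.0f}")
print(f"ADC samples per scan       : {samples_per_scan:,.0f}")
print(f"samples per laser fringe   : {samples_per_scan / fringes_per_scan:.1f}")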


Figure 2. The AN/AAR-47 missile attack warning receiver system manufactured by ATK.

A spectrometer that approaches the required specifications is being operated as a chemical spectrometer in the University of Idaho Chemistry Department. This sensor was designed, patented and fabricated by Manning Applied Technology. It is this fundamental design that is being updated by ATK to air-bearing support, with higher speed electronics to support the rate of 1000 interferograms per second. The existing hardware indicates that the system concept is very stable, with negligible movement-induced noise over the whole temporal bandwidth of operation, from a few to 300 interferograms per second, and over a spectral bandpass that extends to 25 microns. Under license from Manning Applied Technology (MAT), ATK fabricated a MAT rotating mirror interferometer design intended to achieve a specification of 1000 frames per second and 4 cm-1 resolution. Our design uses the fundamental patent that MAT developed, but modifies the layout for a much more compact footprint than the previous prototypes.

Table 1. "Build-to" requirements for the HyHATS instrument.

Index  Requirement         Value                       Comment
1      Spectral Range      2-25 µm                     2-5 µm as built; limited by refractive optics and detector response
2      Resolution          4 cm-1                      8 or even 16 cm-1 may be useful
3      Size                < 0.5 ft3                   Or smaller
4      Weight              25 lbs                      Can be decreased with further optimization
5      Vibration           Mobile platform spec        TBD
6      CONOPS              Field or lab measurements   Rugged, light, fully functional
7      Portability         One-Man Portable            Lighter is better; handheld is a goal
8      Power               100 watts                   Not to exceed
9      Temperature range   -20F to 110F                Maintain performance specifications across range
10     Field of View       20 mr                       Non-imaging or imaging via field widening
11     Aperture            > 25 mm                     25 mm
12     Frame Rate          1000 fps                    Mirror designed for minimum distortion
13     D* (cm √Hz/watt)    10^11 Jones                 Primary limit is detector
14     EMI, RFI            TBD                         Assessment in progress
15     ADC Resolution      14 bits                     ADC is COTS
16     Stabilization       14 bits of phase            Phase Stabilizer bits
17     Software            GUI interface               Requires on-board system processor available in next generation
18     Sample Rate         100 MSPS                    14 bit IR with laser channel trigger

2. Optical Design

Our initial effort has been to evaluate and characterize the cube corner reflector and the air-bearing rotating mirror technology, along with the laser reference beam technology, to determine their performance impacts.

Figure 3. On the left is a typical Michelson interferometer operating on a sample in transmission. The MAT design replaces the moving mirror with a slightly tilted rotating mirror and tilt compensation optics.


The optical design was performed using ATK internal vector modeling done in Mathematica 5.0 (Wolfram Research, Inc., Champaign, IL), and Zemax optical design software (Zemax Development Corporation, Bellevue, WA). The fundamental analysis for the design is simple. A component raytrace model follows the sequence of virtual images resulting from reflections from plane surfaces. Only four operations are required to analyze the system. These are summarized in the following discussion.

1. Reflection. Represent points by vectors from the origin. Thus, point A will be defined by the vector A. Given a mirror passing through the origin with a unit normal vector N, the displacement of the image of the point relative to the point is parallel to the mirror normal, with magnitude twice the projection of the vector A on the mirror normal vector. Thus, the virtual image of point A, call it B, will be

B = A − 2(A·N)N .    (1)

2. Corner cube. For a corner cube, the virtual image of light emerging is the location of the apex of the corner plus a vector equal to that from the initial image to the corner. For a corner with its apex defined by the vector C, the image of a point B will be D, where

D = 2C − B .    (2)

3. Distance to a surface. The minimum distance from a point F to a plane through the origin with a unit normal vector G is

l_f = F·G .    (3)

4. Location of the point on the mirror closest to F. This is the difference of the vector to F and the product of the distance to the surface and the surface normal, i.e.

J = F − l_f G .    (4)

Figure 4 shows a typical configuration. A beam splitter directs half the beam downward toward the wobble mirror (not shown) and lets half go forward into the reference beam. For the following computations, this downward beam will be called the input beam. The coordinate system to be used will have its origin at the mirror surface on the axis of rotation. The z axis will coincide with the axis of rotation. The reflecting surface will be tipped at an angle θ_t relative to the z-axis. Its surface normal will be

N = i cos(ωt) sin(θ_t) + j sin(ωt) sin(θ_t) + k cos(θ_t) .    (5)


Figure 4. Three dimensional FTIR Geometry.

The input beam will be defined by a unit direction vector G and an intersection with the xy plane denoted P1, where

P1 = i x1 + j y1 .    (6)

For the configuration of primary interest, the G vector lies in the xz plane:

G = i sin(θ_s) − k cos(θ_s) .    (7)

Although the location of the apex of the corner can in general be anywhere, of particular interest is a location in the y = 0 plane and along the reflected upward beam. Thus, if the apex is a slant distance l_c above the z = 0 plane, the vector position of the corner cube is

C = i (x1 + l_c cos(θ_s)) + k l_c sin(θ_s) .    (8)

The virtual image for radiation exiting the corner cube is then

V4 = 2C − B .    (9)

This point is imaged in the mirror again to form virtual image P#3. Applying the reflection algorithm again produces

V5 = V4 − 2(V4·N)N .    (10)

This is the image of the source as seen from the fixed reflector.


The total path length (within a fixed constant) is

l_f = V5·G ,    (11)

where G, the initial incidence vector, is the same as the mirror normal. After some algebra the one-way path length is given by

l_f = 2[x3 cos(θ_s − 2θ_t) + y3 sin(θ_s − 2θ_t)] − x1 cos(θ_s) + l_s ,    (11A)

where (x3, y3) are the Cartesian coordinates of the corner cube, (x1, y1) is the intersection of the ray with the mean mirror position, θ_s is the source angle in mean rotating-mirror coordinates, θ_t is the tilt angle of the rotating mirror, and l_s is the distance from the source to the mean rotator mirror position.

Figure 5. The unfolded layout of the virtual images built up in the model described by equations 1 to 12.


The location of the center of the beam on the fixed mirror is

J = V5 − l_f G .    (12)

All surfaces are flats, and the resulting mathematical model is simple, analytical and complete. It incorporates element geometry sensitivities, design errors, surface errors and other manufacturing errors to completely analyze and optimize the components for both spectral and imaging operations. The results of the model are put into the Zemax ray trace code for generating the optical prescription, including the FPA, the reference beam, and the fore-optics, for production fabrication drawings in SolidWorks (SolidWorks Corporation, Concord, MA). This optical model, coupled to other software tools, allows facile analysis and optimization of design modifications, manufacturing specification impacts, and other fabrication issues.
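Equations (1)-(10) are simple enough to prototype directly. The sketch below implements the elementary vector operations and the rotating-mirror normal of Eq. (5) in Python and traces how the one-way path length of Eq. (11) varies as the mirror rotates; the tilt angle, source angle and corner-cube position used here are arbitrary illustrative values, not HyHATS design parameters.

import numpy as np

def reflect(point, normal):
    """Virtual image of a point in a plane mirror through the origin, Eq. (1)."""
    return point - 2.0 * np.dot(point, normal) * normal

def corner_cube_image(point, apex):
    """Virtual image of a point seen through a corner cube with the given apex, Eq. (2)."""
    return 2.0 * apex - point

def plane_distance(point, normal):
    """Minimum distance from a point to a plane through the origin, Eq. (3)."""
    return np.dot(point, normal)

def mirror_normal(omega_t, theta_t):
    """Normal of the tilted rotating mirror at rotation phase omega*t, Eq. (5)."""
    return np.array([np.cos(omega_t) * np.sin(theta_t),
                     np.sin(omega_t) * np.sin(theta_t),
                     np.cos(theta_t)])

# Illustrative geometry (radians and arbitrary length units).
theta_t = np.radians(0.5)                 # small mirror tilt
theta_s = np.radians(30.0)                # source angle
source = np.array([10.0, 0.0, 40.0])      # virtual source point feeding the input beam
apex = np.array([12.0, 0.0, 8.0])         # corner cube apex
g_vec = np.array([np.sin(theta_s), 0.0, -np.cos(theta_s)])   # input beam direction, Eq. (7)

for omega_t in np.linspace(0.0, 2.0 * np.pi, 5):
    n = mirror_normal(omega_t, theta_t)
    b = reflect(source, n)                # image in the rotating mirror
    v4 = corner_cube_image(b, apex)       # image after the corner cube, Eq. (9)
    v5 = reflect(v4, n)                   # second reflection in the rotating mirror, Eq. (10)
    print(f"omega*t = {omega_t:5.2f} rad  path length = {plane_distance(v5, g_vec):8.4f}")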

Figure 6. The cube corner showing the paths of the beams and rotating mirror.

The layout of the HyHATS instrument incorporating the corner cube and rotating wobble mirror, shown in Figure 7, is primarily designed to evaluate the corner cube arrangement in Figure 6. The detector will ultimately be replaced with a closed-cycle cryo-cooler and compact optics. The reference laser is a temperature-controlled diode operating at 635 nm. The substantial size of the diode package is a result of the active temperature controller used to minimize frequency drift. The input beam makes eleven reflections in the variable arm plus the beam combiner. The mirror surfaces are high-reflectivity coated. The input beam is split, with half going through a hole into the cube corner and then to the rotating or "wobble" mirror. The beam reflects across the corner back to the wobble mirror and then to the flat in the corner cube which is perpendicular to the entry direction. The beam then traces its way back through


all reflections to the beam splitter exactly along the path it came in on. It is split again, with half going to the detector. The reference beam path is long and was not folded in the design. The optical path is approximately 138 mm. The rotating mirror changes the optical path by 2.5 mm during a single rotation, first advancing toward the beamsplitter and then away, such that two interferograms are measured during a single rotation. The wobble mirror is balanced, light-weighted, coated beryllium that has been figured to meet specifications for flatness at the full rotation speed. The mirror is rated to turn at 30,000 revolutions per minute (500 per second) to generate the 1000 interferograms per second. Under test runs with the laser diode, the mirror system was extremely stable, with negligible phase jitter from the mirror z movement, air movement or vibration. The drive motor for the system is shown as the left appendage in Figure 7.
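The numbers in this paragraph hang together: reading the 2.5 mm figure as the optical path sweep available to each interferogram, the usual approximation Δν ≈ 1/OPD gives roughly 4 cm-1 resolution, and two interferograms per rotation at 500 revolutions per second gives the 1000 interferograms per second target. The short check below simply restates that arithmetic; the interpretation of the 2.5 mm sweep is an assumption, and this is not part of the HyHATS software.

OPD_SWEEP_MM = 2.5                 # optical path sweep per interferogram (interpretation of the text)
ROTATION_RATE_HZ = 500.0           # 30,000 rpm
INTERFEROGRAMS_PER_ROTATION = 2    # path advances, then retreats, once per rotation

resolution_cm1 = 1.0 / (OPD_SWEEP_MM / 10.0)                     # Delta-nu ~ 1 / OPD_max
frame_rate = ROTATION_RATE_HZ * INTERFEROGRAMS_PER_ROTATION
print(f"approximate resolution : {resolution_cm1:.0f} cm-1")     # -> 4 cm-1
print(f"interferogram rate     : {frame_rate:.0f} per second")   # -> 1000 per second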

Figure 7. The current laboratory HyHATS instrument with CAD drawing. In the final version, the reference beam and the detector legs will be collapsed to the instrument body, greatly compacting the footprint.

The reference laser beam is directed down the center of the optical path through an IR-visible dichroic to the beam combiner. The beam combiner is coated CaF2 that has a nominal 50-50 transmission-reflection split in the 2 µm to 5 µm band and a 40-60 transmission-reflection split at the 635 nm wavelength. The corner cube path contains a phase compensator. Half of the visible light goes through the beam splitter to the reference, and reflects into the corner cube. After reflecting off the corner eleven times, the signal is returned to the dichroic and then through the visible BK7 beam splitter into the visible detector. Losses are significant, but the laser power is adequate to give a good signal-to-noise ratio.

3. Data Collection

The HyHATS data processing system is not designed to run in real time. The data from the laser phase reference channel and the IR signal channels are collected synchronously at 100 mega samples per second (MSPS) to a 4 megabyte FIFO. Multi-threading of the collections allows continuous transfer of data to the PC RAID for processing offline. Specifications of the data collection hardware are shown in Table 2.

Table 2. Data collection electronics specifications.

Electrical Format     IEEE1394a (a.k.a. Firewire, a.k.a. i.link)
Programming Format    DCAM 1.31 (IIDC v1.31) with support for format 7
Syncs                 Frame trigger
Interface             Standard PCI-33 bus on the host PC
Model                 Signatec PDA14
Channels              2 x 14-bit
Sample rate           100 MSPS/channel, synchronous
FIFO buffer           512 MB or 256 samples (128 samples/channel)
PCI Data Rate         PCI-X-33: 64-bit, 33 MHz = 266 MB/sec (one-way) at +/-5V (50Ω load)
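A quick throughput estimate explains why the raw channel data are buffered and streamed rather than processed live: two 14-bit channels sampled at 100 MSPS and stored as 16-bit words amount to roughly 400 MB/s, more than the 266 MB/s one-way PCI figure in Table 2, so sustained recording has to run at a reduced effective rate. The arithmetic below is illustrative only; the 16-bit storage assumption and the decimation strategy are not specified in the text.

N_CHANNELS = 2
SAMPLE_RATE_SPS = 100e6      # 100 MSPS per channel, synchronous
BYTES_PER_SAMPLE = 2         # 14-bit samples stored in 16-bit words (assumption)
PCI_LIMIT_MBPS = 266.0       # one-way PCI rate from Table 2

raw_rate_mbps = N_CHANNELS * SAMPLE_RATE_SPS * BYTES_PER_SAMPLE / 1e6
print(f"raw digitizer output : {raw_rate_mbps:.0f} MB/s")
print(f"PCI one-way limit    : {PCI_LIMIT_MBPS:.0f} MB/s")
print(f"bus-limited fraction : {PCI_LIMIT_MBPS / raw_rate_mbps:.2f}")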

3.1. Data Collection Examples

Data was collected at 50 mega samples per second to the buffer for the IR channel; bus transfer limits the rate to the hardware drive to 50 mega samples per second for each channel.


Figure 8. A “first light” single scan laser signal near second turn-around point, IR interferogram, & IR spectrum. The spectrum is plotted as a function of wavenumber.

Figure 9. The Cermax spectra taken every millisecond. Each spectrum is shifted up by one half division. The bottom plot was taken with the source turned off. The line structure occurs from known systemic noise in the capture electronics.

3.2. Summary

The HyHATS spectrometer is under development and is currently undergoing a methodical subsystem checkout. Spectra have been collected with high resolution at 1000 frames per second.


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 569–574 © World Scientific Publishing Company

WAVELENGTH SELECTIVE BOLOMETER DESIGN

SANGWOOK HAN
Department of Electrical and Computer Engineering, University of Texas at Austin,
1 University Station, Austin, Texas 78712, USA
[email protected]

JOO-YUN JUNG
Department of Electrical and Computer Engineering, University of Texas at Austin,
1 University Station, Austin, Texas 78712, USA
[email protected]

DEAN P. NEIKIRK
Department of Electrical and Computer Engineering, University of Texas at Austin,
1 University Station, Austin, Texas 78712, USA
[email protected]

Multi-color narrow-band Salisbury Screen and Jaumann Absorbers combined with optimized thick Si3N4 support layers are designed for wavelength selectivity in the 7-14 µm wavelength band. The Jaumann Absorbers are adopted as a vertically 'stacked' pixel structure to save space and enhance resolution compared with a 'tiled' structure (pixels lying in a common plane).

Keywords: Bolometer; Salisbury Screen; Jaumann Absorber; Wavelength Selectivity.

1. Introduction

Salisbury Screens and Jaumann Absorbers have been used as broad-band electromagnetic absorbers. The Salisbury Screen (Fig. 1 (a)) consists of a resistive sheet spaced a quarter wavelength from a conductive mirror, and was introduced as an electromagnetic absorber during World War II by W. W. Salisbury1. To achieve absorption over a broader band, multilayer designs were also developed for radar camouflage in Germany during World War II by J. Jaumann; these structures are known as Jaumann Absorbers (Fig. 1 (b-c))2-3. Here, we design multi-color narrow-band Salisbury Screen and Jaumann Absorbers combined with optimized thick Si3N4 support layers for use in wavelength selective microbolometers operating in the long wave infrared (LWIR) band, 7-14 µm.



Fig. 1. Configuration of wavelength selective pixels. (a) Salisbury Screen with Si3N4 support layers. (b) Two-pixel Jaumann Absorbers with support layers. (c) Three-pixel Jaumann Absorbers with support layers.

2. Wavelength Selective Salisbury Screens

First, we design 3-color pixels with absorption centered at 8, 10, and 12 µm wavelengths using the Salisbury Screen design shown in Fig. 1 (a). Each of the three pixels is designed individually, and would be used in a 'tiled' (lying in a common plane) arrangement to yield a color focal plane array. The generalized Salisbury Screen envisioned consists of a resistive layer supported by Si3N4 layers, all placed in front of a mirror (Fig. 1 (a)). Without the multiple interferences due to the support layers, a resistive sheet and a mirror cannot generate narrow band absorption within the LWIR. However, it may be possible to use interference to narrow the absorption bandwidth. To find the proper thicknesses of the Si3N4 support layers, we adopt the Genetic Algorithm (GA) as an optimization scheme. To locate narrow bandwidth array designs that couple strongly at a desired wavelength, we have used a cost definition for the GA that seeks high power absorption at a specified wavelength, while minimizing the power absorbed at other wavelengths. This is accomplished by minimizing the area under the absorption curve, as given by Eq. (1):

    if PAE(λ_desired) > threshold,  cost = ∫_{7 µm}^{14 µm} PAE(λ) dλ ;
    otherwise,  cost = ∞ ,    (1)

where PAE stands for power absorption efficiency. The electromagnetic responses in this paper are calculated using an ABCD matrix analysis4. The ABCD matrix analysis can find the fields at every boundary, which makes it easy to analyze the power absorbed in each layer. In addition, the calculation is very fast, which is critical for an optimization process with many design parameters. Figure 2 illustrates GA-optimized designs showing sufficient wavelength selectivity for use in a 3-color LWIR system. The optimized designs in Fig. 2 have also been verified with Ansoft HFSS5, a 3D full-wave finite element method (FEM) solver (circles and polygons in Fig. 2). Specific designs yielding the responses in Fig. 2 are given in Table 1.



Fig. 2. GA-optimized designs to absorb 3 colors centered at 8, 10, and 12 µm wavelengths. Dashed, solid, and dotted curves are from calculations using the ABCD matrix method. Polygons are from Ansoft HFSS.

Table 1. Specific designs yielding the responses given in Fig. 2. d1 and d2 are the thicknesses of the Si3N4 support layers, and d3 is the thickness of the air gap (refer to Fig. 1 (a)).

λ center   8 µm (dashed)   10 µm (solid)   12 µm (dotted)
Rs1 (Ω)    444             459             535
d1 (µm)    3.0             3.6             2.4
d2 (µm)    0.8             0.9             0.9
d3 (µm)    3.9             5.0             6.0
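The ABCD (transmission-line) calculation behind Fig. 2 and Table 1 is compact enough to sketch. The code below cascades a resistive sheet, two Si3N4 layers and an air gap in front of a perfect mirror and computes the power absorption efficiency at normal incidence, using the 10 µm design of Table 1 as input; it also evaluates the GA cost of Eq. (1). The layer ordering, a non-dispersive Si3N4 index of about 2.0, a lossless mirror and the 0.8 threshold are simplifying assumptions made here for illustration, so the result only approximates the published curves.

import numpy as np

ETA0 = 376.73  # impedance of free space (ohms)

def layer_matrix(n, d_um, lam_um):
    """ABCD matrix of a lossless dielectric slab at normal incidence."""
    beta_d = 2.0 * np.pi * n * d_um / lam_um
    eta = ETA0 / n
    return np.array([[np.cos(beta_d), 1j * eta * np.sin(beta_d)],
                     [1j * np.sin(beta_d) / eta, np.cos(beta_d)]])

def sheet_matrix(rs_ohm):
    """ABCD matrix of a thin resistive sheet (shunt conductance 1/Rs)."""
    return np.array([[1.0, 0.0], [1.0 / rs_ohm, 1.0]])

def pae(lam_um, rs, d1, d2, d3, n_si3n4=2.0):
    """Power absorption efficiency of sheet + Si3N4 (d1, d2) + air gap (d3) + mirror."""
    m = (sheet_matrix(rs)
         @ layer_matrix(n_si3n4, d1, lam_um)
         @ layer_matrix(n_si3n4, d2, lam_um)
         @ layer_matrix(1.0, d3, lam_um))
    zin = m[0, 1] / m[1, 1]              # input impedance with a short (the mirror) as the load
    gamma = (zin - ETA0) / (zin + ETA0)  # reflection coefficient seen from free space
    return 1.0 - abs(gamma) ** 2         # nothing is transmitted past the mirror

# 10 um design from Table 1: Rs1 = 459 ohm, d1 = 3.6 um, d2 = 0.9 um, d3 = 5.0 um.
lams = np.linspace(7.0, 14.0, 281)
curve = np.array([pae(l, 459.0, 3.6, 0.9, 5.0) for l in lams])
print(f"peak PAE {curve.max():.2f} near {lams[curve.argmax()]:.1f} um")

# GA cost of Eq. (1): integrate the curve, or return infinity if the design
# fails to absorb strongly enough at the desired wavelength.
def ga_cost(lams, curve, desired_um, threshold=0.8):
    if np.interp(desired_um, lams, curve) <= threshold:
        return np.inf
    return np.trapz(curve, lams)

print(f"GA cost for the 10 um target: {ga_cost(lams, curve, 10.0):.2f}")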

3. Wavelength Selective 2-Color Jaumann Absorbers

Using cooled devices, "2-color" infrared detectors have been constructed using stacked quantum well infrared photodetectors (QWIPs) for the 3 to 5 µm and 8 to 12 µm infrared bands6, although in these devices the signals produced by the two bands cannot actually be separated. Here we design stacked 2-color pixels (centered at 8.5 and 12.5 µm wavelengths) using Jaumann Absorbers (Fig. 1 (b)) suitable for use as room temperature microbolometers. 'Stacking' (pixels arranged vertically) can save space and enhance image resolution compared to the 'tiling' described in the previous section. Here our design goal is for each layer in the pixel to maintain strong absorption at a specified center wavelength while minimizing the absorption at that center wavelength by the other layers in the pixel; this leads to an eight-parameter design search over Rs1, d1, d2, d3, Rs2, d4, d5, and d6. As a cost definition, we set up ideal power absorption efficiencies for 2-color wavelength selectivity, and then try to minimize the difference between the ideal and actual curves. Figure 3 illustrates a GA-optimized design showing reasonable 2-color wavelength selectivity. Again the design in Fig. 3 has been verified with Ansoft HFSS in Fig. 4 (circles in Fig. 4). Figure 4 shows the sum of the power absorbed by the top and bottom layers in the pixel.



Fig. 3. GA-optimized vertically stacked pixel to absorb 2 colors centered at 8.5 and 12.5µm wavelengths. Rs1, d1, d2, d3, Rs2, d4, d5, and d6 are 668 Ω, 1.5µm, 1.7µm, 1.5µm, 823Ω, 0.0µm, 1.0µm, 0.5µm respectively. Solid: absorbed by top absorbing layer. Dashed: absorbed by bottom absorbing layer.


Fig. 4. Sum of power absorption by the design described in Fig. 3. Solid curve is calculated using the ABCD matrix analysis [4], and circles are simulated from Ansoft HFSS for verification.

4. Wavelength Selective 3-Color Jaumann Absorbers

Finally, we design 3-color stacked pixels (with each layer in the stacked design providing absorption centered at 8, 10 and 12 µm wavelengths) using Jaumann Absorbers (Fig. 1 (c)). Again the design goal is for each layer in the pixel to maintain strong absorption at a specified center wavelength while minimizing the absorption at that center wavelength by the other layers in the pixel; this leads to a twelve-parameter design search over Rs1, d1, d2, d3, Rs2, d4, d5, d6, Rs3, d7, d8, and d9. Figure 5 illustrates a GA-optimized design showing 3-color wavelength selectivity. The optimized design in Fig. 5 has been verified with Ansoft HFSS in Fig. 6 (circles in Fig. 6). Figure 6 shows the sum of the power absorbed by all layers in the pixel. By comparing Figs. 2, 3, and 5, it is clear that as the number of stacked layers goes up, the wavelength selectivity gets worse.



Fig. 5. GA-optimized vertically stacked pixels to absorb 3 colors centered at 8, 10, and 12µm wavelengths. Rs1, d1, d2, d3, Rs2, d4, d5, d6, Rs3, d7, d8, and d9 are 966Ω, 0.2µm, 1.4µm, 1.7µm, 379Ω, 1.3µm, 0.7µm, 1.4µm, 646Ω, 1.7µm, 0.8µm, 1.4µm respectively. Dotted: absorbed by top pixel. Solid: absorbed by middle pixel. Dashed: absorbed by bottom pixel.


Fig. 6. Sum of power absorption by the design described in Fig. 5. Solid curve is calculated using the ABCD matrix analysis [4], and circles are simulated from Ansoft HFSS for verification.

5. Conclusions

Multi-color narrow-band Salisbury Screen and Jaumann Absorbers combined with optimized thick Si3N4 support layers have been designed for use as wavelength selective microbolometers in the 7-14 µm wavelength band. Salisbury Screens can be used in tiled arrays, while Jaumann Absorbers can act as vertically stacked pixels that have shown clear 3-color wavelength selectivity with properly designed Si3N4 support layers.


6. Acknowledgements

This work was supported in part by the Office of Naval Research under the University Affiliated Research Center (UARC) Basic Research Program and the DoD DARPA Advanced Processing and Prototyping Center (AP2C) at The University of Texas at Austin. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research or DARPA.

7. References

1. W. W. Salisbury, "Absorbent body for electromagnetic waves," U.S. Patent 2,599,944, June 10, 1952.
2. W. H. Emerson, "Electromagnetic wave absorbers and anechoic chambers through the years," IEEE Transactions on Antennas and Propagation, vol. AP-21, no. 4, July 1973.
3. E. F. Knott and K. B. Langseth, "Performance degradation of Jaumann absorbers due to curvature," IEEE Transactions on Antennas and Propagation, vol. AP-28, no. 1, Jan. 1980, pp. 137-139.
4. D. M. Pozar, Microwave Engineering, Addison-Wesley, 1993.
5. www.ansoft.com
6. M. Z. Tidrow, J. C. Chiang, Sheng S. Li, and K. Bacher, "A high strain two-stack two-color quantum well infrared photodetector," Applied Physics Letters, vol. 70(7), pp. 859-861, Feb. 17, 1997.

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 575–592 © World Scientific Publishing Company

MULTISENSORY DETECTION SYSTEM FOR DAMAGE CONTROL AND SITUATIONAL AWARENESS

CHRISTIAN P. MINOR, DANIEL A. STEINHURST
Nova Research, Inc., Alexandria, VA

KEVIN J. JOHNSON, SUSAN L. ROSE-PEHRSSON, JEFFREY C. OWRUTSKY
Chemistry Division, US Naval Research Laboratory, Washington, DC

STEPHEN C. WALES
Acoustics Division, US Naval Research Laboratory, Washington, DC

DANIEL T. GOTTUK
Hughes Associates, Inc., Baltimore, MD

1. Abstract

A data fusion-based, multisensory detection system, called "Volume Sensor", was developed under the Advanced Damage Countermeasures (ADC) portion of the US Navy's Future Naval Capabilities program (FNC) to meet reduced manning goals. A diverse group of sensing modalities was chosen to provide an automated damage control monitoring capability that could be constructed at a relatively low cost and also easily integrated into existing ship infrastructure. Volume Sensor employs an efficient, scalable, and adaptable design framework that can serve as a template for heterogeneous sensor network integration for situational awareness. In the development of Volume Sensor, a number of challenges were addressed and met with solutions that are applicable to heterogeneous sensor networks of any type. These solutions include: 1) a uniform, but general format for encapsulating sensor data, 2) a communications protocol for the transfer of sensor data and command and control of networked sensor systems, 3) the development of event specific data fusion algorithms, and 4) the design and implementation of a modular and scalable system architecture. In full-scale testing in a shipboard environment, two prototype Volume Sensor systems demonstrated the capability to provide highly accurate and timely situational awareness regarding damage control events while simultaneously imparting a negligible footprint on the ship's 100 Mbps Ethernet network and maintaining smooth and reliable operation in a real-time fashion.

2. Introduction

Advances in communications and sensor technologies in recent years have made possible sophisticated implementations of multicriteria, multisensor networks for situational awareness. The need for automated monitoring and assessment of events of interest, such as chemical agent dispersal, toxic chemical spills, fire or flood detection, is the principal motivation for the development of spatially dispersed, heterogeneous sensor arrays. Multimodal, network-enabled sensing platforms can generate complementary datasets,


which can be mined with pattern recognition and feature selection techniques and then merged with event specific data fusion algorithms to provide faster and more accurate situational awareness than can be obtained with conventional sensor implementations. Multimodal sensing platforms offer potential advantages over more conventional point detection systems in terms of robustness, sensitivity, selectivity, and applicability. Specifically, coverage areas offer potential resilience to missing data and spurious sensor malfunctions not attainable with individual sensing units. Complementary information can effectively increase the signal to noise ratio of the system through an effect analogous to signal averaging, while also offering the potential of detecting a wider range of analytes or events. Finally, a spatially, or even geographically, dispersed array of networked sensors can provide the necessary platform flexibility to accommodate diverse configurations of fixed or mobile, standoff or point sensors to satisfy a wide range of sensing needs.

However, networked multicriteria, multisensor systems present their own unique set of development and implementation challenges.1 Care must be taken in selecting sensing modalities and sensors that provide complementary information appropriate to the sensing application being developed. A suitable network architecture and communications interface must be designed that is amenable to the differing data formats and interfaces typical of commercially developed sensors. To realize the benefits of a multimodal approach, sensor data must be combined and evaluated in a manner that enhances performance without increasing false positives.2 These challenges are in addition to those common to conventional sensor implementations: developing pattern recognition and feature extraction algorithms tailored to multiple event recognition and implementing a real-time data acquisition, analysis, and command and control framework for the sensing system.

Multisensor and multicriteria approaches to fire detection have demonstrated improved detection performance when compared to standard spot-type fire sensors and are rapidly becoming the industry state-of-the-art.3 Multisensor systems generally rely on some form of smart data fusion to achieve higher rates of detection and lower numbers of false alarms.4 Previous efforts at the U.S. Naval Research Laboratory (NRL) have demonstrated significant improvements in the accuracy, sensitivity and response times in fire and smoke detection using multicriteria approaches that utilize probabilistic neural network algorithms to combine data from various fire sensors.5,6 Other efforts at NRL have confirmed the viability of using a multisensor, multicriteria approach with data fusion for the detection of chemical agents7,8 and unexploded ordnance.9,10 Likewise, multisensor detection systems have shown a number of advantages over comparable single sensor systems for the detection of chemical weapons agents and toxic industrial chemicals (CWA/TIC), as evidenced by a number of successful, commercially available, multisensor-based detection systems for CWA/TIC applications. Example systems are the Gas Detector Array II (GDA-2) by Airsense Analytics and the HAZMATCAD Plus by Microsensor Systems, Inc., both of which are portable devices capable of point-detection of a wide variety of chemical agents and toxic compounds. The GDA-2 uses ion mobility spectrometry supplemented with photoionization detection,


electrochemical, and metal-oxide sensors. The HAZMATCAD Plus uses surface acoustic wave sensors supplemented with electrochemical sensors. In addition, "multiway" analytical instrumentation, such as hyperspectral imaging technology,11 can be considered a multicriteria approach applied to standoff detection of CWA/TIC in that it utilizes additional axes of measurement to provide the same types of advantages conferred by multiple sensors. The Adaptive InfraRed Imaging Spectrometer (AIRIS) by Physical Sciences, Inc. is an example of one such hyperspectral imaging system targeted for CWA/TIC detection applications.12

NRL has developed a real-time, remote detection system for damage control and situational awareness as part of the Advanced Volume Sensor Task, an important element of the U.S. Navy Office of Naval Research's Future Naval Capabilities Advanced Damage Countermeasures (ADC) program. The ADC program seeks to develop and demonstrate improved damage control capabilities for reduced manning aboard future naval vessels.13 The objective of the Advanced Volume Sensor Task was to develop an affordable detection system that could identify shipboard damage control conditions and provide real-time threat level information for events such as flaming and smoldering fires, explosions, pipe ruptures, flooding, and gas releases. The goal was to develop a robust, low cost system that eliminated the false alarms typical of fire detection systems in industrial environments. The Volume Sensor approach was to build a multisensor, multicriteria system from low cost commercial-off-the-shelf (COTS) hardware components integrated with intelligent software and smart data fusion algorithms developed at NRL. This effort took advantage of existing and emerging technology in the fields of optics, acoustics, image analysis and computer processing to add functionality to conventional surveillance camera installations planned for in new ship designs. The down selected sensing platforms are "suites" that incorporate video cameras, long wavelength (near infrared) filtered cameras, single element spectral sensors, and human audible microphones. The algorithms and protocols implemented in Volume Sensor are responsible for collecting and analyzing responses from individual sensors, formatting these responses into a consistent, logical data structure, and, most importantly, intelligently combining all available sensor data and information via data fusion techniques to provide the best possible situational awareness. When compared to commercial detection systems, the performance gains were achieved through the combination of multiple sensing modalities and data fusion algorithms.

This paper describes the development and evaluation of the full-scale, multicompartment prototype Volume Sensor. Two Volume Sensor Prototype (VSP) systems were built and evaluated in a shipboard test series onboard the Navy's full-scale fire test facility, the ex-USS Shadwell.14 The performance of the VSPs was compared to two commercial video image detection (VID) and three spot-type fire detection systems that were simultaneously evaluated during this test series. Analysis of test results shows that the two VSPs demonstrated fire detection performance comparable to or better than that of the commercial systems, with faster response times and higher nuisance source immunity. The VSPs also demonstrated capabilities beyond those of the commercial systems,


adding situational awareness for pipe ruptures and flooding scenarios, fire suppression system activations, and gas release events.15 As part of the development of the Volume Sensor system and prototypes, a number of challenges specific to multicriteria, multisensor systems have been successfully addressed and surmounted via the development of a general data format, a communications protocol, event-specific data fusion algorithms, and a modular and scalable system architecture. These solutions are transferable to CWA/TIC sensing applications and other sensing needs.

3. Background

To achieve the Navy's reduced manning goals, Volume Sensor employs intelligent machine vision algorithms in analyzing video images from cameras mounted in shipboard spaces for the detection of damage control events like fires. Video-based fire detection systems built around typical surveillance cameras are a recent technological advancement.16,17 The video image detection (VID) systems operate by analyzing video images produced by standard surveillance cameras (typically up to eight per unit) in order to detect smoke or fire in large spaces such as warehouses and transportation tunnels. Recent versions of these VID systems include detection algorithms that differentiate between flaming and smoldering fires. The systems differ mostly in the image analysis algorithms they employ, but all typically include some automatic calibration capabilities to reduce algorithm sensitivity to image and camera quality.

Additionally, while the Navy's principal motivation for funding the development of Volume Sensor was reduced manning, secondary motivating factors were the performance gains expected with a multicriteria, multisensor fire detection system based on standoff technology. Spot-type detection systems (such as smoke detectors) require effluent to reach the detector, which reduces early warning capability and correspondingly increases the response time of damage control teams. Standoff detection, such as the combination of video camera surveillance with machine vision, can detect effluents at the location of source initiation, and thus has the potential for faster detection and significantly lower response times. Cluttered shipboard spaces, however, were expected to pose a serious challenge for video-only standoff detection. To address this challenge, a multisensor system was proposed that included optical and acoustic sensing technologies.

Under the Volume Sensor program, three commercial video-based fire detection systems were evaluated in a shipboard environment on the ex-USS Shadwell and in a laboratory setting for their viability. The details of the testing procedure and evaluation have been described elsewhere.18 This study concluded that the alarm times for the VID systems were comparable to those from spot-type ionization detection systems for flaming fires, but were much faster than either ionization or photoelectric smoke detection systems for smoldering fires. Two VID system manufacturers worked with NRL to adapt and improve their systems for Navy applications. While considerable progress was made in improving the detection sensitivity, the false alarm rate remained undesirably high. One of the most significant challenges to shipboard early warning fire


detection with video-based systems is the discrimination of flaming fires from typical shipboard bright nuisance sources such as welding, torch cutting, and grinding. Alternative optical and acoustic detection methods were proposed for Volume Sensor to overcome this challenge.

Two distinct approaches to optical detection outside the visible were pursued.19,20 These were (1) near infrared (NIR), long wavelength video detection (LWVD), which provides some degree of both spatial and spectral resolution and discrimination, and (2) single or multiple element narrow spectral band detectors, which are spectrally but not spatially resolved and operate with a wide field of view at specific wavelengths ranging from the mid infrared (IR) to the ultraviolet (UV).

Image detection in the NIR has been utilized in background free environments, such as forest fire monitoring from ground installations21 and satellites,22 in tunnels,23 and for cargo surveillance in aircraft.24 Image analysis in conjunction with narrow band filtered (1140 nm) NIR images has been patented as a method for enhancing fire and hot object detection.25 The primary advantages of long wavelength imaging are higher contrast for hot objects and more effective detection of reflected flame emission compared to images obtained from cameras operating only in the visible region. These advantages allow for improved detection of flaming fires that are not in the field of view of the camera. The LWVD system developed for Volume Sensor exploits the long wavelength response of standard CCD arrays used in many cameras (e.g., camcorders and surveillance cameras). This region is slightly to the red (700-1000 nm) of the human ocular response (400-650 nm). A long pass filter transmits light with wavelengths longer than a cutoff, typically in the range 700-900 nm. This increases the image contrast in favor of fire, flame, and hot objects by suppressing the normal video images of the space, and thereby effectively provides a degree of thermal imaging. There is more emission from hot objects in this spectral region (> 600 nm) than in the visible. Testing has demonstrated detection of objects heated to 400 ºC or higher.20 Thus, this approach to long wavelength imaging is an effective compromise between expensive, spectrally discriminating cameras operating in the mid IR and inexpensive, thermally insensitive visible cameras. A luminosity-based algorithm was developed to analyze video images for the detection of NIR emission and used to evaluate camera/filter combinations for fire, smoke and nuisance event detection.26 For each incoming video image, the algorithm applied a simple non-linear threshold to the summed, normalized intensity difference of the current video image and a background image established at the start of each test. This approach is similar to one suggested by Wittkopp, et al.27 for fire and smoke event classification with visible spectrum cameras in aircraft cargo holds. The luminosity algorithm serves as the principal detection method for the LWVD system, for which a patent application has been filed.28

The second optical detection method investigated was a collection of narrow band, single element, spectral sensors. Approaches to detect reflected NIR emission from fire sources using narrow band detectors have been previously reported and patented by Lloyd, et al.29-31 Atomic emission of potassium at 766 nm has been reported in Vodacek, et al.32 for satellite based fire detection. In addition, a number of UV and IR based flame


detectors are commercially available.33,34 Narrow band sensors investigated for Volume Sensor included COTS UV/IR flame detectors, modified so that the individual outputs could be monitored independently, and other sensors operating in narrow spectral bands (10 nm) at visible (589 nm), NIR (766 and 1060 nm), and mid IR (2.7 and 4.3 µm) wavelengths. The spectral bands were chosen to match flame emission features identified in spectra measured for fires with different fuels. In a stand-alone configuration, combinations of the single channels were found to yield results for identifying fires in the sensor's field of view comparable to the COTS flame detectors, and superior performance for fires out of the field of view, several nuisance sources, and certain smoke events. The inclusion of one or more of the single element sensors in Volume Sensor was expected to reduce the false alarms of the integrated system without degrading the sensitivity. To achieve this, principal components analysis (PCA)35 was used to develop a set of algorithms for the spectral sensors to discriminate flaming fires in and out of sensor field of view, smoke from smoldering sources, and high UV emitting nuisance sources such as welding and torch cutting.36 The down selected spectral sensors and the PCA-based discrimination algorithms comprise the spectral-based Volume Sensor (SBVS) system.

Another key aspect of the Advanced Volume Sensor Task was the evaluation of acoustic signatures in the human-audible range for enhanced discrimination of damage control events, particularly flooding and pipe ruptures. Earlier efforts in acoustical leak detection emphasized using ultrasonic technologies for applications in nuclear reactor environments.37,38 For Volume Sensor, a representative set of fire and water acoustic event signatures and common shipboard background noises were collected and measured. Measurements were made during testing aboard the ex-USS Shadwell, in a full-scale laboratory test for fires, in a wet trainer for flooding and pipe ruptures, and on two in-service vessels, naval and research, for shipboard ambient noise.39 The event signatures and noise signals were compared in the time and time-frequency domains. Results indicated that clear differences in the signatures were present and led to the development of first generation algorithms to acoustically distinguish the various events. Flooding and pipe ruptures are typically loud events, and a simple broadband energy detector in the high frequency band (7-17 kHz), with an exponential average, has proven effective even in a noisy environment like an engine room. Further development and testing with linear discriminant models led to algorithms for acoustic-based detection of pipe ruptures, flooding scenarios, fire suppression system activations, gas releases, and nuisance sources such as welding, grinding, and people talking. Microphones and the acoustic detection algorithms make up the acoustic (ACST) sensor system.

Both the integration of multimodal, complementary sensor systems for fire detection and the performance gains from using data fusion technology are well established in the literature.40,41 The implementation of the Volume Sensor approach required the consolidation of sensor data from the VID, LWVD, SBVS, and ACST sensor systems with event specific data fusion. Volume Sensor achieved this by implementing a modular and extensible design that employed a tiered approach to data acquisition and analysis. The Volume Sensor architecture is depicted in Figure 1.
In the diagram, sensor data and
situational awareness flow from left-to-right, command and control from right-to-left. The labeled boxes in the figure, grouped as “field monitoring”, “sensor system computers”, and “fusion system computer”, represent the various hardware and software components of Volume Sensor. The box at the far right of the figure, labeled “Supervisory Control System”, represents the interface of Volume Sensor to higher level systems, possibly including a Damage Control Administrator (DCA). Briefly, the Volume Sensor design for an integrated multisensor system is as follows: Raw sensor data from sensors monitoring compartments are acquired and analyzed by software with algorithms tailored to the sensors, after which sensor data and algorithmic output are packaged and sent on to a fusion machine where they are combined with information from other sensor systems and processed with data fusion decision algorithms. The output of the fusion machine is real-time damage control information for each monitored space in the form of all clear, warning, or alarm signals for several event categories. The fusion machine also serves as the command and control center for the system as a whole. The Volume Sensor design is modular in that the sensor system components, communications, command and control, and data fusion algorithms are implemented in hardware and software separately. The components work together through specially developed data structures and communications interfaces, which are general enough to allow for the rapid addition of new sensor modalities or data fusion algorithms, or adaptation to new network or system topologies. The Volume Sensor design is also extensible in that the number of sensors being processed is limited by hardware and computer resources, and is not inherently fixed to the number or modality of the selected sensors or data fusion algorithms. By design, limits on computer resources can be accommodated by replicating the sensor system / fusion node architecture to meet increased monitoring requirements. Following sensor system selection, two proof-of-concept Volume Sensor prototypes (VSPs) were built and evaluated in the fourth phase of the program.42,43 Shipboard testing of these single-compartment VSPs was performed side by side with two VID-based systems and several spot-type and optical-based commercial systems. The results indicated that the performance of the VSPs was generally comparable to or faster than the commercial systems while providing additional situational awareness for pipe rupture, flooding scenarios, and gas release events and live video streaming of alarm sources.

4. Method

The first step towards meeting the challenges for developing a successful multicriteria, multisensor detection system was the distribution of the sensing elements themselves. In Volume Sensor, this was achieved when sensors from the four sensor modalities (VID, LWVD, SBVS, and ACST) were grouped together into a heterogeneous array referred to as a sensor suite. An individual sensor suite was composed of separate sensors (a video camera, a long wavelength filtered bullet camera, three photodiodes, an IR sensor, a UV sensor, and a microphone) that were installed in close proximity, as shown in Figure 2. Monitoring was achieved by deploying one or more sensor suites in spaces such as


shipboard compartments. Note that for other sensing applications, the modular and extensible design goals are preserved. A sensor suite is not limited to the aforementioned sensors. Rather, a sensor suite represents the smallest unit of sensor data for input to the fusion analysis in Volume Sensor, that is, a collection of one or more sensing elements collocated in a compartment for monitoring purposes. The distinction is important for the encapsulation and analysis of sensor data with data fusion algorithms, as discussed below. Data acquisition of signals in a sensor suite is performed by the sensor's respective system, VID, LWVD, SBVS, or ACST. The first tier of data analysis is also performed by algorithms implemented on the sensor systems. As a consequence, a sensor system can generate both sensor data and sensor algorithm information for data fusion. In Volume Sensor, the NRL-developed sensor systems (LWVD, SBVS, and ACST) pass sensor data and sensor algorithm information to the fusion machine. Experience with data fusion algorithm development has confirmed that including both raw sensor data and algorithmic information significantly increases the potential for performance gains with data fusion algorithms as more complementary sensing information is provided. The commercial VID systems, however, only pass limited sensor algorithm information, due to the proprietary nature of their software. A one-to-one relationship between sensor suites and sensor computers is intentionally not required and improves the overall system's flexibility and scalability for consolidated sensor configurations and alternative network topologies. Data from sensor components from several sensor suites can be processed by a single sensor machine (e.g., a PC), which in turn interfaces with a fusion machine. A tiered approach to integration for a multisensor system implemented on a network topology naturally leads to differing time scales for data analysis. Sensor systems are theoretically able to detect data features at sub-second time scales, such as the flicker pattern of flaming fires or the frequency components of a pipe rupture. The cycle of sensor data acquisition, transfer, data fusion analysis, and output is referred to as the data analysis cycle. The data fusion decision algorithms cannot detect data features at time scales any finer than the duration of the data analysis cycle, but have the advantageous viewpoint of seeing data snapshots from multiple sensors simultaneously. In Volume Sensor, the speed of signal data acquisition varied from milliseconds for the optical and acoustic sensors, to a few tenths of a second for the video systems. Data transfer from sensor machines to the fusion machine was performed in 1 second (1 Hz) increments, and thus set the time duration of the data analysis cycle. The next challenge was the efficient storage and transfer of sensor data and algorithm information from the sensor systems to the fusion machine. In Volume Sensor, the storage of data was accomplished with a tree-based data structure, referred to as the gestalt data structure. A single element of sensor data or sensor algorithm output was stored as a floating point value at the finest detail level of the tree, the data level or “leaf”. At the leaf level, a data value was stored together with an identifying string (e.g. flame algorithm) and a category label (e.g. flame) that indicated the data value was relevant to a particular type of event, or that the data value was raw sensor data, which is


often relevant to several types of events.(a) Together these three pieces of information formed a data block. Different pieces of sensor information (data blocks) associated with a sensor were grouped together at the next higher level of the tree, the channel level. For example, a VID system provided two pieces of information at the data block level, flame and smoke algorithm outputs,(b) for each camera at the channel level. Channels on the local sensor computer were grouped together in a channel block at the next higher level of the tree, the system level. For example, a sensor machine in the VID system processed video from eight cameras. System blocks from multiple sensor machines were grouped at the highest level of the tree, thus forming the gestalt. One gestalt data structure was filled during each data analysis cycle.

The gestalt data structure had an additional advantage pertinent for data transfer in that it was easily translated into the extensible markup language (XML).44 In Volume Sensor, data transfer was achieved with XML-based message packets sent on a standard internet protocol (IP) network (e.g., Ethernet) via user datagram protocol (UDP).(c) The networked message packets formed the link between the sensor system computers and the fusion machine shown in Figure 1. The structure of the XML-based message packets was specified in a communications protocol referred to as the Volume Sensor Communications Specification (VSCS).45 Message packets in the VSCS protocol consist of information encapsulated by XML tags. A simple system of paired command and response message packets was developed to provide the fusion machine with information on demand about the status of a sensor machine, start and stop data acquisition, reset alarm or background levels, and set certain sensor system parameters. A third type of message packet, referred to as a data packet, was used to transfer sensor data and algorithm information from a sensor machine to the fusion machine. Command packets were only issued by the fusion machine, response packets and data packets only by the sensor machines. During a data analysis cycle, each sensor machine filled and sent a data packet to the fusion machine. The data packet contained data blocks, channel, and system information encoded in an XML version of the gestalt data structure.

The fusion machine component of Volume Sensor was a PC-based unit that was responsible for aggregating sensor data, performing algorithmic data fusion analysis, and distributing situational awareness to supervisory control systems or other higher level information aggregators. The fusion machine also served as the command and control unit and external interface to Volume Sensor. The software components that implemented the fusion machine are shown in Figure 1. The principal component was the command and control (CnC) program, which encapsulated the XML communications libraries and the data fusion module (DFM). The XML libraries were used to encode and

(a) For example, a signal from the optical IR sensor with a level well above background may be observed from a flaming fire source, an overheating engine, or a person welding, corresponding respectively to flame, thermal, and nuisance events in Volume Sensor.
(b) Alternatively, a channel could be a sensor algorithm that generated values for multiple features or events, one per data block.
(c) UDP is similar to, but simpler than, the more common transmission control protocol (TCP).


decode message packets while the DFM software performed all data fusion-related tasks. The principal human interface was the supervisory control system (SCS). A graphical user interface (GUI) program was also developed for use in system testing and diagnostics. Internal and external communications in Volume Sensor are processed through the CnC. The CnC program receives data from and issues commands to the sensor systems, and in turn, receives commands from and sends situational awareness data to the GUI, and to one or more supervisory control systems, when present. All data and command transfers are conducted through a standard TCP/IP network interface using XML-based message packets. XML translation and encoding is performed by custom server-side libraries. Thus, the sensor system software, GUI, and SCS may be physically located anywhere that is network accessible to the fusion machine, or on the fusion machine itself. Data fusion is performed by algorithms in the DFM software. The DFM is implemented as an object internal to the CnC software. The gestalt data structure serves as the interface to the DFM object. Through it, sensor data and sensor algorithm information are passed to the DFM object and alarm and event information are returned to the CnC. The DFM object internally employs two other objects for data processing, referred to as sensor suite and data fusion objects. A sensor suite object encapsulates all sensor data and sensor algorithm information pertaining to a given sensor suite, and thus localizes the data in space (sensor suite location) as well as time (data analysis cycle number). A data fusion object encapsulates the data fusion decision algorithms and operates them on selected sensor suite objects. Both objects provide methods for functionality relevant to their purpose. Thus, a sensor suite object can parse the gestalt data structure, extract all sensor information pertaining to its assigned sensors, store the information in a linearized data format, and log this information. A data fusion object can run the data fusion decision algorithms on one or more sensor suite objects, keep track of time dependent features internally, generate real-time alarm and event information for output, and log this information. A sensor suite object typically encapsulates data from a single sensor suite, though other sensor groupings, such as sensors sensitive to flaming fires in magazine compartments, are possible. A data fusion object can process any grouping of sensor suite objects, for example, the sensor suite objects for all sensor suites in a given compartment, or sensor suites in all magazines. A data fusion object processes data from one or more sensor suite objects with data fusion algorithms and a decision tree, internally tracking events and alarm conditions. The data fusion objects use flags to keep track of events or alarm conditions observed in the current data analysis cycle, persistences to keep track of trends observed in event flags over multiple data analysis cycles, and latches to keep track of events or alarm conditions in steady states. Flags are cleared at the start of each data analysis cycle and then updated by the data fusion decision algorithms. Persistences are incremented or decremented to zero depending on the newly updated flags and current states of latches. New latch states are then set or cleared based on the values of both the flags and persistences. Threat level information, the output of the DFM, is generated from the


current states of latches at the end of the data analysis cycle. Levels of “all clear”, “warning” or pre-alarm, and “alarm” are indicated through prescribed ranges of real-valued intensities for each damage control event and for all monitored compartments individually. Pattern recognition, statistics, and heuristics may be used for flag, persistence, or latch level decisions.(d) Data from multiple sensor suite objects may be processed sequentially, one sensor suite at a time, or in parallel, for example by taking maximal values over all sensor suites or combining sensor data from several sensor suites. In this way, the data fusion decision algorithms are able to evaluate newly acquired sensor information, track its trends in time, and identify changing or steady states for situational awareness. Techniques investigated for data fusion and event detection included feature selection, data clustering, Bayesian classification, Fisher discriminant analysis, and neural networks.

Real-time situational awareness is accomplished in the data analysis cycle. Data gathered by the CnC from the sensor systems are compiled in a gestalt data structure and passed internally to the DFM for analysis. The DFM returns alarm information and event situational awareness as a completed system block of the gestalt data structure, suitable for inclusion in the current gestalt. The CnC then encodes the DFM output, together with the sensor system blocks of the current gestalt, into data packets that are forwarded to the GUI and SCS programs for display. Data packets from the CnC supply an SCS with current alarm and event information at the end of each data analysis cycle. This includes the current threat levels for damage control events in all monitored compartments, generated from the data fusion decision algorithms, the current alarm status from the individual sensor algorithms, and the current data values from sensor suites in all monitored compartments. Alarm and event information from the data fusion decision algorithms at the compartment level is considered the output of Volume Sensor. Information at the sensor level is provided for additional situational awareness. For example, when a compartment level alarm occurred in the VS5 test series, the SCS displayed a pop-up window containing a real-time video stream from the camera located nearest to the alarm source, as determined by the data fusion decision algorithms. The SCS also supplied detailed status and alarm information windows for all compartments and sensor suites on demand, as well as video streams from all visible spectrum cameras.

The fusion machine software components were developed in Microsoft's Visual Studio .NET (2003) environment for use with the Windows XP Professional operating system. The CnC program and the DFM software were written in the C++ language. The GUI program was written in Microsoft's C# language. The XML libraries that implement the VSCS protocol were written in the standardized C language for cross-platform compatibility and were developed by Fastcom to Volume Sensor specifications.

(d) Sensor algorithm information is typically binary valued (0 – all quiet, 1 – event/alarm), and for that reason is better suited to Boolean decision logic. Raw sensor data are typically real valued (0 ≤ x ≤ 1), and thus well suited for mathematical pattern recognition algorithms and statistical modeling.
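The flag/persistence/latch bookkeeping described above can be sketched for a single event category as follows. The class, the threshold, and the example flag sequence are illustrative only; the DFM tracks many event categories per sensor suite and sets its flags with its own decision algorithms, which are not reproduced here.

```python
class EventTracker:
    """Flag/persistence/latch bookkeeping for one event category (sketch only)."""

    def __init__(self, latch_on=3):
        self.persistence = 0      # trend of flags over multiple data analysis cycles
        self.latched = False      # steady-state event or alarm condition
        self.latch_on = latch_on  # illustrative threshold, not the DFM's value

    def update(self, flag: bool) -> str:
        # Flags are recomputed every cycle; persistence integrates them over time,
        # decrementing toward zero when the flag is clear.
        if flag:
            self.persistence += 1
        elif self.persistence > 0:
            self.persistence -= 1

        # New latch states are set or cleared from the flags and persistences.
        if self.persistence >= self.latch_on:
            self.latched = True
        elif self.persistence == 0 and not flag:
            self.latched = False

        if self.latched:
            return "alarm"
        return "warning" if self.persistence > 0 else "all clear"


# A transient nuisance raises only a warning; sustained activity latches an alarm.
tracker = EventTracker()
for f in [True, False, True, True, True, False, False, False, False]:
    print(tracker.update(f))
```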


A modular design was employed so that any software component (command and control, communications, data fusion, or human interface) could be replaced or upgraded without disruption to the other components of the system. To be effective, a modular design requires an immutable interface to link the components. Such an interface must also be general enough to accommodate advances in sensing and network technologies. The gestalt data structure served as this interface for Volume Sensor.

To summarize, Volume Sensor can monitor spaces in real time, provide pre-alarm and alarm conditions for unusual events, log and archive the data from each subsystem, and archive and index alarms for easy recall. The communications interface that is used to move information between components is based on an extensible data format with XML-based message packets for easy implementation on a wide variety of networks. Sensor data and algorithm information are transferred from the sensor subsystems to a central data fusion node for processing. Algorithms first process the raw data at the sensor subsystem level; the fusion node then combines and analyzes data across the sensor subsystems in a decision tree incorporating expert knowledge and pattern recognition for event pre-alarm conditions. A pre-alarm triggers a second level of more sophisticated algorithms incorporating decision rules, further pattern recognition, and Bayesian evaluation specific to the event condition. The output of this latter tier is then passed to an information network for accurate, real-time situational awareness.

5. Construction and Evaluation

During the fifth phase of the program, two full-scale Volume Sensor prototypes capable of monitoring multiple compartments simultaneously were constructed by expanding the successfully tested Volume Sensor design.15 The number of sensor suites was scaled up from three to eight and the data fusion algorithms were incrementally improved, but the sensor system / fusion node architecture remained the same as in the proof-of-concept fourth phase. Each of the VSPs received identical information from the LWVD, SBVS, and ACST sensor systems, but received sensor information from different VID system components, as described below. Volume Sensor prototype 1 (VSP1) received and processed sensor data from the LWVD, SBVS, ACST, and Fastcom VID systems. Volume Sensor prototype 2 (VSP2) received and processed sensor data from the LWVD, SBVS, ACST, and axonX VID systems. Both VSPs interfaced with an independently developed supervisory control system.46

The performance of the two Volume Sensor prototypes was evaluated at the end of the fifth year of the program in full-scale tests during the Volume Sensor 5 (VS5) Test Series aboard the ex-USS Shadwell.15 These tests included a variety of simulated fire, nuisance, pipe rupture, suppression, and gas release events typical of a shipboard environment. The test scenarios were designed to assess the developmental progress of the prototype systems and to expand the database for future improvements. The detection capabilities, false alarm rates, and times to alarm of the two prototypes were compared to stand-alone COTS fire detection systems that were evaluated side by side with the prototypes. For the VS5 test series, the eight VSP sensor suites were distributed over six compartments comprising two magazine spaces, an office space, a passageway, an


electronics space, and a four-deck mock-up of a vertical launch space (PVLS). The eight sensor suite video cameras were also analyzed by two commercially available VID systems, Fastcom's Smoke and Fire Alert (version 1.1.0.600) VID detection system47 and axonX's Signifire (version 2.2.0.1436) VID detection system,48 using proprietary algorithms independent of the VSPs. Additionally, five compartments, excluding the PVLS, were instrumented with seven clustered Edwards Systems Technologies (EST) spot-type detectors. These were the ionization (EST SIGA-IS), multicriteria (EST SIGA-IPHS), and photoelectric (EST SIGA-PS) fire detection systems.49 Further details about the sensor instrumentation, test scenarios, testing procedure, and evaluation criteria are available in Lynch et al.15

During the VS5 test series, the VSP systems demonstrated (1) the ability to successfully monitor multiple compartments of varying size, shape, and content, (2) the ability to detect and discriminate multiple simultaneous and consecutive damage control and nuisance events, and (3) the ability to convey timely situational awareness to a supervisory control system. Sensor data were accurately and consistently transmitted from the various sensor computers to the fusion machines at one-second intervals with virtually no footprint on the connecting 100 Mbps Ethernet network. During testing, the Pentium IV-class PC-based fusion machines used in the prototypes demonstrated adequate processing capabilities, running smoothly and remaining responsive in real-time operation. Alarm information and situational awareness were transmitted accurately and promptly to the supervisory control system. In terms of correct classifications of fire sources versus false positives due to nuisance sources, the VSP systems performed very well when compared to the commercial systems, as shown in Figure 3. Factoring in times to alarm, the VSP systems proved to be the most effective detection systems overall, with VSP1 performing slightly faster than VSP2 and VSP2 performing slightly better than VSP1. The VSPs also provided effective situational awareness of pipe ruptures, flooding scenarios, fire suppression system activations, and gas release events not possible with the commercial systems. The VSPs achieved better overall performance than the commercial systems by using smart data fusion to combine the higher detection rates and faster response times of camera-based sensors (visible and long-wavelength) with the nuisance rejection capabilities of the spectral and acoustic sensors.

6. Conclusions

The integration of multimodal sensors into a functioning full-scale prototype system for damage control and situational awareness was a success. Two prototype systems based on the modular and extensible Volume Sensor design were built and tested in a shipboard environment simultaneously with commercial VID detection systems and spot-type fire detection systems. Full-scale shipboard testing successfully demonstrated the viability of the sensor system / fusion node Volume Sensor architecture, showing the ability of the full-scale VSP systems to monitor multiple compartments and to discriminate between multiple types of simultaneous and consecutive damage control and nuisance events. The


use of complementary sensing modalities and data fusion technology provided additional situational awareness and enhanced detection rates for damage control events while reducing the incidence of false positives and negatives. The VSP systems demonstrated the ability to discriminate between source types by detecting flaming and smoldering fire sources, water releases, and gas releases while rejecting nuisance sources, and the ability to report threat level information in a timely manner to a supervisory control system. Compared to the commercial systems, the VSPs demonstrated significantly better performance for the detection of damage control events, reduced false alarms, and comparable or faster times to alarm. Current work efforts are directed towards the optimization of the detection methods and the development of more sophisticated multivariate data analysis and fusion methods to further enhance detection performance and reduce false positives.

During the development of the Volume Sensor system, NRL has successfully addressed and surmounted challenges specific to multicriteria, multisensor systems via the development of:

• A uniform, but general, format for encapsulating sensor data (a minimal sketch of such a format and its packet encoding follows this list).
• A communications protocol for the transfer of sensor data and for command and control of networked sensor systems.
• Event-specific data fusion algorithms.
• The design and implementation of a modular and scalable system architecture.
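As a rough illustration of the first two items, the sketch below arranges one cycle's worth of sensor data into a small system/channel/data-block tree and serializes it as an XML data packet. The element names, attributes, and values are invented for illustration and do not reproduce the actual VSCS schema or the real sensor identifiers.

```python
import xml.etree.ElementTree as ET

# One data analysis cycle of sensor data and algorithm output, arranged as
# system -> channel -> data block, mirroring the gestalt tree described earlier.
# All names and values here are illustrative placeholders.
gestalt = {
    "SBVS_PC1": {                                   # system level (one sensor computer)
        "suite3_photodiode_589nm": [                # channel level (one sensor or algorithm)
            ("raw_intensity", "raw", 0.42),         # data blocks: (identifier, category, value)
            ("flame_pca_score", "flame", 0.91),
        ],
    },
    "ACST_PC1": {
        "suite3_microphone": [
            ("band_energy_7_17kHz", "raw", 0.18),
            ("pipe_rupture_score", "flooding", 0.05),
        ],
    },
}

def gestalt_to_packet(gestalt, cycle):
    """Serialize one gestalt into an XML data packet for the fusion machine."""
    root = ET.Element("DataPacket", cycle=str(cycle))
    for system, channels in gestalt.items():
        sys_el = ET.SubElement(root, "System", name=system)
        for channel, blocks in channels.items():
            ch_el = ET.SubElement(sys_el, "Channel", name=channel)
            for identifier, category, value in blocks:
                ET.SubElement(ch_el, "Data", id=identifier,
                              category=category).text = f"{value:.3f}"
    return ET.tostring(root, encoding="unicode")

print(gestalt_to_packet(gestalt, cycle=1))
```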

These solutions are transferable to CWA/TIC detection as well as other sensing applications where the benefits of multicriteria, multisensor systems are desired.

7. Acknowledgments

This work was funded by the U.S. Navy's Office of Naval Research, Future Naval Capabilities, Advanced Damage Countermeasures program. The commercial VID manufacturers Fastcom Technology and axonX collaborated in this research. The authors thank Mr. John Farley and Dr. Frederick Williams for their valuable assistance in this program. The crew of the ex-USS Shadwell provided much assistance in acquiring data used in the development of the prototype detection systems.

Figure 1 – Volume Sensor system architecture and components. [Block diagram: field-monitoring elements (SBVS optical sensors, ACST microphones, LWVD NIR cameras, VID visible spectrum cameras) feed the sensor system computers (sensor acquisition and analysis, data converter, data packaging, client XML library); XML packets carry the data to the fusion computer (fusion server XML library, command and control, data fusion module, human interface/GUI), which connects to the supervisory control system.]

Figure 2 – Proof-of-concept sensor suite showing the various sensing elements: IR and UV sensors (SBVS), photodiode sensors (SBVS), video camera (VID), filtered video camera (LWVD), and microphone (ACST).

Figure 3 – Correct classifications of fire sources and rates of false positives due to nuisance sources (two series, detected fires and false positives, in percent) for VSP1, VSP2, VIDF, VIDA, ESTI, ESTP, and ESTM.

8. References

1. R.C. Luo, C.C. Yih, and K.L. Su, Multisensor Fusion and Integration: Approaches, Applications, and Future Research Directions, IEEE Sensors Journal 2(2), 107-119 (2002).
2. J.A. Lynch, et al., Volume Sensor Development Test Series 2 – Lighting Conditions, Camera Settings, and Spectral and Acoustic Signatures, US Naval Research Laboratory, NRL/MR/6180—04-8843 (2004).
3. G. Pfister, Multisensor/multicriteria fire detection: a new trend rapidly becomes state of the art, Fire Technol. 33(2), 115-139 (1997).
4. R.C. Luo and K.L. Su, A Review of High-level Multisensor Fusion: Approaches and Applications, Proc. of IEEE Intl. Conf. on Multisensor Fusion and Integration for Intelligent Systems, Taipei, Taiwan, R.O.C. (1999).
5. S.L. Rose-Pehrsson, et al., Multi-criteria Fire Detection Systems Using a Probabilistic Neural Network, Sensors and Actuators B 69(3), 325-335 (2000).
6. S.L. Rose-Pehrsson, et al., Early warning fire detection system using a probabilistic neural network, Fire Technol. 39(2), 147-171 (2003).
7. R.E. Shaffer and S.L. Rose-Pehrsson, Improved Probabilistic Neural Network Algorithm for Chemical Sensor Array Pattern Recognition, Anal. Chem. 71(19), 4263-4271 (1999).
8. S.J. Hart, et al., Real-Time Classification Performance and Failure Mode Analysis of a Physical/Chemical Sensor Array and a Probabilistic Neural Network, Field Anal. Chem. Tech. 5(5), 244-258 (2001).
9. S.J. Hart, R.E. Shaffer, and S.L. Rose-Pehrsson, Using Physics-Based Modeler Outputs to Train Probabilistic Neural Networks for Unexploded Ordnance (UXO) Classification in Magnetometry Surveys, IEEE Trans. Geosci. Rem. Sens. 39(4), 797-804 (2001).
10. L.M. Collins, et al., A Comparison of the Performance of Statistical and Fuzzy Algorithms for Unexploded Ordnance Detection, IEEE Trans. Fuzzy Sys. 9(1), 17-30 (2001).


11. C.D. Tran, Infrared multispectral imaging: principles and instrumentation, Appl. Spec. Rev. 38(2), 133-153 (2003).
12. C. Gittins, W. Marinelli, and J. Jensen, Remote sensing and selective detection of chemical vapor plumes by LWIR imaging Fabry-Perot spectrometry, Proc. SPIE 4574, 64-71 (2002).
13. F.W. Williams, et al., DC-ARM Final Demonstration Report, US Naval Research Laboratory, NRL/FR/6180—03-10,056 (2003).
14. H.W. Carhart, T.A. Toomey, and F.W. Williams, The ex-USS Shadwell Full-scale Fire Research and Test Ship, US Naval Research Laboratory, NRL Memorandum Report 6074 (1992).
15. J.A. Lynch, et al., Volume Sensor Development Test Series 5 – Multi-Compartment System, US Naval Research Laboratory, NRL/MR/6180—05-8931 (2005).
16. G. Privalov and D. Privalov, Early fire detection method and apparatus, United States Patent 6,184,792 (2001).
17. D. Rizzotti, N. Schibli, and W. Straumann, Process and device for detecting fires based on image analysis, United States Patent 6,937,742 (2005).
18. D.T. Gottuk, et al., Video Image Fire Detection for Shipboard Use, Fire Safety Journal 41(4), 321-326 (2006).
19. J.C. Owrutsky, et al., Spectral based volume sensor component, US Naval Research Laboratory, NRL/MR/6110–03-8694 (2003).
20. J.C. Owrutsky, et al., Long Wavelength Video Detection of Fire in Ship Compartments, Fire Safety Journal 41(4), 315-320 (2006).
21. P.J. Thomas, Near-infrared forest-fire detection concept, Appl. Opt. 32(27), 5348 (1993).
22. R. Lasaponara, et al., A self adaptive algorithm based on AVHRR multitemporal data analysis for small active fire detection, Int. J. Remote Sens. 24(8), 1723-1749 (2003).
23. D. Wieser and T. Brupbacher, Smoke detection in tunnels using video images, NIST SP 965 (2001).
24. T. Sentenac, Y. Le Maolt, and J.J. Orteu, Evaluation of a charge-coupled device-based video sensor for aircraft cargo surveillance, Opt. Eng. 41(4), 796-810 (2002).
25. W.S. Chan and J.W. Burge, Imaging flame detection system, United States Patent 5,937,077 (1999).
26. D.A. Steinhurst, et al., Long wavelength video-based event detection, preliminary results from the CVNX and VS1 test series, ex-USS SHADWELL, April 7–25, 2003, US Naval Research Laboratory, NRL/MR/6110–03-8733 (2003).
27. T. Wittkopp, C. Hecker, and D. Opitz, The cargo fire monitoring system (CFMS) for the visualization of fire events in aircraft cargo holds, Proc. of AUBE '01: 12th Intl. Conf. on Automatic Fire Detection (2001).
28. J.C. Owrutsky and D.A. Steinhurst, Fire Detection Method, United States Patent Application Publication US 2005/0012626 A1 (2005).
29. Y.J. Zhu, et al., Experimental and numerical evaluation of a near infrared fire detector, Proc. of Fire Research and Engineering, 2nd Intl. Conf., ICFRE2, 512 (1998).
30. A.C. Lloyd, et al., Fire detection using reflected near infrared radiation and source temperature discrimination, NIST GCR 98, 747 (1998).
31. Y. Sivathanu, et al., Flame and smoke detector, United States Patent 6,111,511 (2000).
32. A. Vdacek, et al., Remote optical detection of biomass burning using a potassium emission signature, Int. J. Remote Sens. 23(13), 2721-2726 (2002).
33. See, for example, Reliable Fire Equipment Company, 12845 South Cicero Ave, Alsip, IL 60803-3083.
34. See, for example, Vibro-Meter, Inc., 10 Ammon Drive, Manchester, NH 03103.


35. E.R. Malinowski, Factor Analysis in Chemistry, 2nd Ed. (John Wiley & Sons, New York, 1991).
36. D.A. Steinhurst, et al., Spectral-Based Volume Sensor Testbed Algorithm Development, Test Series VS2, US Naval Research Laboratory, NRL/MR/6110—05-8856 (2005).
37. D.S. Kupperman, T.N. Claytor, and R. Groenwald, Acoustic leak detection for reactor cooling systems, Nucl. Eng. Des. 86(1), 13-20 (1985).
38. K. Fischer and G. Preusser, Methods for leak detection for KWU pressurized and boiling water reactors, Nucl. Eng. Des. 128(1), 43-49 (1991).
39. S.C. Wales, et al., Acoustic event signatures for damage control: water events and shipboard ambient noise, US Naval Research Laboratory, NRL/MR/7120–04-8845 (2004).
40. R.C. Luo and M.G. Kay, Multisensor Integration and Fusion in Intelligent Systems, IEEE Trans. on Systems, Man, and Cybernetics 19(5), 901-931 (1989).
41. M. Nichols, A survey of multisensor data fusion systems, in Handbook of Multisensor Data Fusion, eds. D.L. Hall and J. Llinas (CRC Press, New York, 2001), pp. 22-1 – 22-7.
42. J.A. Lynch, et al., Volume Sensor Development Test Series 4 Results – Multi-Component Prototype Evaluation, US Naval Research Laboratory, NRL/MR/6180—06-8934 (2006).
43. S.L. Rose-Pehrsson, et al., Volume Sensor for Damage Assessment and Situational Awareness, Fire Safety Journal 41(4), 301-310 (2006).
44. T. Bray, et al., Extensible Markup Language (XML) 1.1, W3C Recommendation (2004).
45. C.P. Minor, et al., Volume Sensor Communication Specification (VSCS), US Naval Research Laboratory, NRL Letter Report 6110/054, Chemistry Division, Code 6180, Washington DC 20375 (2004).
46. Fairmount Automation, Inc., 4621 West Chester Pike, Newtown Square, PA 19073. Autonomic Fire Suppression System (AFSS).
47. Fastcom Technology SA, Boulevard de Grancy 19A, CH-1006 Lausanne, Switzerland.
48. axonX LLC, 2400 Boston Street, Suite 326, Baltimore, MD 21224.
49. Edwards Systems Technology Inc., 90 Fieldstone Court, Cheshire, CT 06410.

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 593–600 © World Scientific Publishing Company

INEXPENSIVE CHEMICAL DEFENSE NETWORK FOR A FIXED SITE

JULIETTE A. SEELEY*
Sensor Technology and System Applications Group, Massachusetts Institute of Technology Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02421, USA
[email protected]

MATTHEW ANGEL
Biodefense Systems Group, Massachusetts Institute of Technology Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02421, USA
[email protected]

ROSHAN L. AGGARWAL, THOMAS H. JEYS, ANTONIO SANCHEZ-RUBIO, WILLIAM DINATALE
Quantum Electronics Group, Massachusetts Institute of Technology Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02421, USA
[email protected], [email protected], [email protected], [email protected]

JONATHAN M. RICHARDSON
Sensor Technology and System Applications Group, Massachusetts Institute of Technology Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02421, USA
[email protected]

The Inexpensive Chemical Agent Detection System (ICADS) consists of a network of affordable line-of-sight sensors, each designed to detect chemical threats passing between two points with high sensitivity and a low false-alarm rate. Each leg of the ICADS system is composed of two devices: a broadband IR transmitter and a receiver containing a long-wave-IR spectrometer. The spectrometer continually measures the spectrum of the radiation emitted by the transmitter, which is separated from the receiver by up to several hundred meters, forming a line of protection. A chemical vapor or aerosol plume with sufficient long-wave-IR absorption causes a characteristic change in the spectrum of light collected by the receiver as the plume crosses the protected line, signaling a threat. Background measurements were conducted to determine background-limited performance. Additionally, a sensor composed of a long-wave-IR fixed-grating spectrometer and a hot-filament transmitter was designed and built. Measurements of the signal-to-noise ratio (SNR) and resolution agree with our analytical model and meet sensor requirements.

Keywords: early warning; fixed site; affordable; inexpensive; LWIR; grating spectrometer.

This work was sponsored by the Department of the Air Force under Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and not necessarily endorsed by the United States Government.

* To whom correspondence should be addressed.


1. Introduction

The threat of a chemical weapon in the hands of a terrorist or the agent of a hostile nation requires an analysis of the types of infrastructure for which an early-warning chemical-detection capability could save lives and mitigate the effects of contamination. In addition to military installations, potential targets of a chemical attack include airports, hospitals, centers of commerce, utilities, food-production and distribution infrastructures, sports arenas, special events, and any other location that the attacker believes is of strategic or political value. Many of these facilities do not have a budget for chemical defense, and many of those that do rely only on surveillance and preparedness training due to the high cost of chemical-agent detection technology. An affordable protection system is needed to reduce the effectiveness and probability of both conventional and unconventional chemical attacks against soft targets and critical infrastructure.

Current chemical-agent protection system architectures include networks of distributed point sensors,1 stand-off sensors,2,3 and line-of-sight sensors.4,5 The first two approaches offer a number of advantages: By taking advantage of spatial and temporal correlation of sensor data, a distributed network of sensitive point sensors can detect the presence of a chemical agent quickly and with a low false-alarm rate. However, point sensors that are capable of detecting and discriminating multiple agents are expensive. As the size of the protected area increases, the number of point sensors needed to provide adequate coverage increases, making such a system unaffordable for large areas in some cases. Stand-off systems, on the other hand, require considerably fewer sensors to protect a site, as the sensors scan the entire area of interest in a short time. However, current chemical standoff sensors can be prohibitively expensive for some applications. A viable option for a low-cost chemical-agent protection system is a network of line-of-sight sensors. Although not as sensitive as point sensors, and not capable of scanning a large area like stand-off sensors, line-of-sight sensors are able to sense chemicals from a distance, and a well-designed system composed of a network of line-of-sight sensors can offer an acceptable combination of performance and coverage.

We have developed a chemical-agent protection-system architecture based on a network of hot-filament sources and spectrometers that are sensitive to long-wave infrared (LWIR, 8–12 µm) radiation. The large amount of infrared radiation emitted by the hot-filament sources permits the use of low-cost detectors that would otherwise be too insensitive to detect chemical agents at concentrations of interest. This reduces the cost so substantially that each line-of-sight sensor is less expensive than a competing point sensor, and fewer of them are required to protect a given area, resulting in a much lower overall system cost.

2. Requirements

An effective chemical-agent protection system must detect a chemical attack in enough time to warn potential victims, with a high probability of detection and a low false-alarm rate. Time-to-detect, probability of detection, and false-alarm rate requirements are


scenario-specific. For an indoor attack, the time to detect must be short, to maximize the amount of time building inhabitants have to evacuate before being exposed. For an outdoor attack, the wind speed and the distance to the point of release will dictate the time it takes for the chemical to reach the building. A detailed characterization of the wind field around the site, fused with an analysis of likely attacks, places an upper limit on the system response time, which can then be used to define the relationship between range and sensor integration time. The probability of detection of a system should be high; we have set our detection requirement to 98%. The system false-alarm rate requirement is driven primarily by the cost of the action taken as the result of an alarm. A false-positive rate of several times per month or even per week could be tolerable if the response to an alarm is a low-regret action such as an automatic reconfiguration of the HVAC system. In addition, the system false-alarm rate is proportional to the number of sensors and also to the number of agents that the system is capable of detecting.

2.1. Chemical threat

Two of the major threat classes are nerve agents and blister agents. Both classes have distinctive spectral signatures in the 8–12 µm region of the spectrum. For this reason, and because of the transparency of the atmosphere in this part of the spectrum, LWIR spectroscopy is commonly used for chemical detection and identification.6 The LWIR absorption of nerve and blister agents facilitates detection, while the rich spectral signatures of these agents allow for discrimination between the agents and environmental interferents. The spectral resolution required to provide adequate discrimination depends on the magnitude, linewidth and distribution of the spectral features of the threat. Both nerve and blister agents have broad features on the order of 100–200 nm full width at half maximum.

2.2. Sensor requirements

Our target price for each line-of-sight sensor is $5000 or less, in production. We have determined that a signal-to-noise ratio (SNR) of 50 is necessary to achieve a 98% probability of detection and a 0.0004% probability of false positive, which corresponds to a sensor false-positive rate of approximately once per month for a ten-second integration time and 24-hour operation. This determination assumes Gaussian-noise-limited performance while detecting a 3.7 m long plume of agent at a concentration that would kill 50% of people exposed for two minutes, and the use of a matched-filter detection algorithm. The spectrometer must be able to resolve absorption features separated by as little as 200 nm to be able to resolve agents of interest. To determine the degree to which the absorption spectra of environmental interferents would cause a reduction in the performance of the system, we conducted a background-measurement campaign with a high-fidelity spectrometer.
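The false-alarm budget quoted above can be checked with simple arithmetic on the numbers given, assuming back-to-back ten-second tests and statistically independent outcomes:

```python
# Per-test false-positive probability of 0.0004% with continuous 10 s integrations,
# 24 hours a day; independence of successive tests is assumed.
p_fa = 0.0004 / 100                     # 4e-6 per test
t_int = 10.0                            # seconds per test
tests_per_month = 30 * 24 * 3600 / t_int
expected_false_alarms = tests_per_month * p_fa
print(f"{tests_per_month:.0f} tests/month -> "
      f"{expected_false_alarms:.2f} expected false alarms/month")
# 259200 tests/month -> 1.04 expected false alarms/month, i.e. roughly once per month
```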


3. Background Measurements

To isolate the effect of environmental clutter from sensor noise, we adapted a high-resolution FTIR spectrometer with a cooled detector for stand-off use. A telescope with an eight-inch receiver was used to collect the light from a hot-filament source at 1120 K mounted at the focus of an eight-inch parabolic reflector. The source and spectrometer were taken to several different locations, including a roadway, an indoor hallway, a college campus and an airfield, and background spectra were recorded for days at a time. The goal of this exercise was to determine the effect of environmental interferents on the performance of our chemical detection system. Due to the spectral nature of the backgrounds, their effect cannot be treated as a simple noise contribution. To determine the effect of the background, we embedded calculated spectra of several chemical agent plumes at different concentrations and path lengths in the background spectra. This approach takes into account the spectral nature of the backgrounds and how the outcome of the matched filter operation will be affected. Datasets with and without embedded plumes were compared to generate a set of sensitivity curves showing the relationship between sensitivity and false-alarm rate for a fixed probability of detection of 98% at each of the measurement locations (Fig. 1). This background-limited performance exceeds the sensor-limited performance of the inexpensive grating spectrometer. Therefore, in these environments, the background transients are not detrimental and grating spectrometer performance models may be based on sensor parameters alone.

Fig. 1. Sensitivity curves measured with a high-fidelity FTIR instrument with a cooled detector in various environments.
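The plume-embedding procedure described above can be sketched schematically: superimpose a scaled agent absorption template on measured background spectra, score both the clean and the embedded sets with a matched filter, set the threshold that yields the desired 98% detection probability, and read off the resulting false-alarm fraction. The synthetic background spectra, agent template, absorbance scaling, and concentration path lengths below are placeholders, not the campaign data or the actual processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_filter_scores(spectra, template):
    """Project spectra onto a unit-norm agent template (matched filter)."""
    t = template / np.linalg.norm(template)
    return spectra @ t

def false_alarm_fraction(backgrounds, template, cl, pd=0.98):
    """Embed a plume of strength cl (ppm*m) in every background spectrum, pick the
    alarm threshold that detects the embedded plumes with probability pd, and
    return the fraction of clean backgrounds that false-alarm at that threshold."""
    embedded = backgrounds - cl * template            # absorption lowers the signal
    clean = matched_filter_scores(backgrounds, template)
    plume = matched_filter_scores(embedded, template)
    threshold = np.quantile(plume, pd)                # alarm when score < threshold
    return float(np.mean(clean < threshold))

# Placeholder inputs: 64-channel backgrounds with a little correlated structure,
# and a smooth two-band agent absorbance template (per ppm*m).
channels = np.arange(64)
backgrounds = (1.0 + 0.02 * rng.standard_normal((5000, 64))
               + 0.01 * np.sin(channels / 9.0) * rng.standard_normal((5000, 1)))
template = (np.exp(-0.5 * ((channels - 20) / 3.0) ** 2)
            + 0.6 * np.exp(-0.5 * ((channels - 45) / 4.0) ** 2)) * 1e-4

for cl in (200.0, 500.0, 1000.0):
    print(f"{cl:6.0f} ppm*m -> false-alarm fraction "
          f"{false_alarm_fraction(backgrounds, template, cl):.3f}")
```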

4. Development of Grating Spectrometer

The second phase of our program was to design and fabricate a sensor that not only met the performance requirements stated in Section 2, but also had the potential to be manufactured at low cost. The cost requirement eliminated the possibility of using an interferometric spectrometer due to its inherent complexity. Of the several spectrometers available in the LWIR, we down-selected to a grating-based design because it offers the necessary spectral resolution at the lowest cost. The spectrum of a beam of light incident on a grating can be measured by recording the output of a detector placed in the path of the


diffracted beam while rotating the grating. A grating and detector in this configuration form a spectrometer capable of scanning a portion of the spectrum, placing light in a small range of wavelengths onto the detector at any one time. However, to maximize the sensitivity of a grating spectrometer, the grating can be held fixed, and the single detector can be replaced by a detector with multiple elements, allowing simultaneous measurement of all of the light in the spectral region of interest. The fixed-grating configuration eliminates the need for a precision rotary stage, and is used in many compact, low-cost, visible and short-wave-infrared spectrometers. Several technologies are available for the multi-element detection of long-wave-infrared light. Our detector sensitivity requirements are relaxed by the use of a cooperative source, rendering inexpensive, uncooled detectors a viable option. Of the detector technologies that do not require aggressive cooling, room-temperature pyroelectric detectors offer the best sensitivity, and pyroelectric-detector linear arrays, although much less common than single-element detectors, are available and relatively inexpensive. The 64-element array from IR Microsystems is the only pyroelectric-detector linear array available on a non-custom basis with integrated electronics, and for this reason was selected for the design of the fixed-grating spectrometer.

4.1. Construction

An enclosure for the fixed-grating spectrometer was designed to facilitate easy transportation and quick setup. A 2-inch germanium window with an 8–12 µm antireflection coating was mounted in the front wall to form the spectrometer aperture. The enclosure was built with a precision elevation adjustment knob to facilitate pointing and alignment of the spectrometer with the transmitter. An off-axis parabolic mirror was chosen as the focusing element because a parabolic mirror has no aberration at the central field point (on-axis, collimated light is focused to a diffraction-limited spot). An off-axis segment of the parent parabola was used to place the focus outside of the incoming beam, eliminating the need to obstruct a portion of the incoming light with the grating, the detector, or a flat beam-steering mirror. The mirror and grating act in concert to focus and disperse 7.5–12.5 µm light entering the spectrometer aperture across the detector array. The focal length of the mirror, the groove density of the grating, and the distance between the mirror and the grating were optimized, taking into account the optical parameters of commercially available mirrors and gratings, to match the monochromatic spot sizes to the size and aspect ratio of the individual detector elements, ensuring that the spectrometer is able to provide acceptable resolution across the spectral region of interest. To calibrate the wavelength scale of the spectrometer, a 1-inch, narrow-band interference filter is held in front of the spectrometer while the azimuth of the spectrometer is aligned to place the transmission maximum at the correct position on the wavelength scale of the real-time spectrometer output.
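A first-pass version of that layout optimization can be sketched with the grating equation, checking whether the dispersed 7.5–12.5 µm band spans a 64-element array. The groove density, incidence angle, focal length, and pixel pitch below are illustrative guesses, not the values selected for the instrument.

```python
import numpy as np

# First-pass layout check for a fixed-grating LWIR design: use the grating equation
# to see whether light dispersed over 7.5-12.5 um spans a 64-element linear array.
m = 1                        # diffraction order
grooves_per_mm = 24.0        # grating groove density (assumed)
d = 1e-3 / grooves_per_mm    # groove spacing, m
theta_i = np.radians(20.0)   # incidence angle on the grating (assumed)
f = 0.05                     # effective focal length of the off-axis parabola, m (assumed)
pitch = 100e-6               # detector element pitch, m (assumed)
n_pixels = 64

wavelengths = np.linspace(7.5e-6, 12.5e-6, n_pixels)
# Grating equation: m * lambda = d * (sin(theta_i) + sin(theta_d))
theta_d = np.arcsin(m * wavelengths / d - np.sin(theta_i))

# Focal-plane position of each wavelength relative to the band-center ray.
x = f * np.tan(theta_d - theta_d[n_pixels // 2])
span_mm = (x.max() - x.min()) * 1e3
array_mm = n_pixels * pitch * 1e3
print(f"dispersed band spans {span_mm:.1f} mm; array length is {array_mm:.1f} mm")
```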


Fig. 2. A photograph of the fixed-grating spectrometer is shown with the top cover removed. All of the major spectrometer components including the window, mirror, grating, chopper, detector assembly, and electronics are visible.

A four-inch transmitter was designed and built using a high-temperature lamp. The lamp is composed of a 2000°C cylindrical filament mounted in a sealed ceramic housing. The interior of the lamp is gold-coated, and the front of the lamp is sealed with a zinc-selenide window, which has a high transmission in the 8–12 µm region. In the four-inch transmitter, the filament inside the lamp is placed at the focus of a long-focal-length, gold-coated, four-inch, off-axis parabolic mirror. A mirror with a long focal length was chosen to maximize the efficiency with which the light radiating from the filament is collected.

4.2. Performance analysis

The low-cost system was analyzed to determine if it met the spectral resolution and SNR requirements defined in Section 2. The SNR of the fixed-grating spectrometer was calculated from spectra taken with the four-inch 2000°C transmitter at ranges of 26.5, 50, 75, and 100 meters. The resolution of the spectrometer was calculated from spectra taken with the NIST transmission standard placed inside the transmitter, directly in front of the high-temperature lamp. The calculated SNR and resolution are plotted in Figure 3. Spectral resolution of the sensor depends on the divergence angle of incoming light. This defines a minimum range requirement for a particular receiver and transmitter aperture diameter. On the other hand, the SNR is adversely affected by increasing range. Thus it is important that, for a given transmitter and receiver, the maximum range at which the SNR requirement is met exceed the minimum range required for adequate spectral resolution. To conduct spectral calibration, spectra were taken with the transmitter aperture uncovered and with a 1921a NIST Infrared Transmission Wavelength Standard, composed of a thin piece of polystyrene, in place. A spectrum taken with the spectrometer aperture blocked


by an opaque material was subtracted from the two spectra described above. The transmission of the film was calculated by dividing the background-subtracted spectrum of the film by the background-subtracted spectrum of the uncovered transmitter. At ranges of 75 m and 100 m, resolution of absorption features separated by as little as 200 nm was demonstrated, indicating 100 nm channel spacing and near-diffraction-limited performance. Thus the grating spectrometer meets our spectral resolution requirement. Additionally, the line-of-sight detector meets our SNR requirement of 50 at a range of 75 meters with a 15-second integration time. However, it should be noted that this SNR does not include the effect of drift, which is substantial in this non-temperature-stabilized incarnation of the grating spectrometer. We expect simple modifications to greatly decrease the drift. Therefore, the SNR and resolution requirements can, in fact, be simultaneously met. The operational range can be increased by increasing the integration time or increasing the source/receiver diameters, taking care not to exceed the divergence angle constraints.
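The range versus integration-time trade can be extrapolated from the stated performance point under two simplifying assumptions that are not established by the measurements above: that collected power falls off as 1/R² and that the noise is white, so the SNR grows as the square root of the integration time.

```python
import math

def scaled_snr(snr_ref, r_ref, t_ref, r, t):
    """Scale a measured SNR to another range and integration time, assuming
    collected power falls as 1/R^2 and white noise (SNR ~ sqrt(t)). Both
    scalings are assumptions, not results from the measurements reported here."""
    return snr_ref * (r_ref / r) ** 2 * math.sqrt(t / t_ref)

# Anchor on the stated performance point: SNR of about 50 at 75 m with 15 s.
for r, t in [(75, 15), (100, 15), (100, 60), (150, 60)]:
    print(f"{r:>4} m, {t:>3} s -> SNR ~ {scaled_snr(50.0, 75, 15, r, t):.0f}")
# Under these assumptions, 60 s of integration keeps the SNR above 50 out to
# roughly 100 m, consistent with the trend described for Fig. 3.
```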


Fig. 3. The peak SNR of the 2000°C-transmitter/fixed-grating-spectrometer system was calculated for integration times of 15 and 60 seconds. With a 15-second integration time (shown), the peak SNR requirement of 50 is met at ranges of 75 meters and below, while integrating for 60 seconds extends the range of the system to 100 meters. The 2000°C-transmitter/fixed-grating-spectrometer system meets the 100 nm resolution requirement at ranges of 75 meters and above.

5. Conclusion

A system of low-cost line-of-sight LWIR sensors has been proposed to provide chemical protection of a fixed site. Sensor requirements were defined, and the effect of environmental backgrounds was found to be negligible compared to sensor noise. A chemical-agent line sensor composed of a long-wave-IR fixed-grating spectrometer and a hot-filament transmitter was designed and built. In total, the components of the grating spectrometer cost approximately $5000. The performance of the system was characterized at various ranges and for several integration times, and the results support the conclusion that a spectrometer based on low-cost, fixed-grating technology can meet the SNR and spectral resolution requirements of a chemical-agent line-sensor system. A network of such line-of-sight chemical agent detectors is an affordable option for fixed-site protection from a chemical attack.


References

1. D.A. Shea, The BioWatch Program: Detection of Bioterrorism (Congressional Research Service, Library of Congress, 2003).
2. H. Lavoie, E. Puckrin, J.M. Theriault and F. Bouffard, Passive Standoff Detection of SF6 at a Distance of 5.7 km by Differential Fourier Transform Infrared Radiometry, Applied Spectroscopy 59(10), 1189-1193 (2005).
3. V.V. Vaicikauskas, V. Kabelka, Z. Kuprionis, V. Svedas, M. Kaucikas, and E.K. Maldutis, Infrared DIAL system for remote sensing of hazardous chemical agents, Proc. SPIE 5613, 2128 (2004).
4. J.N. Pawloski and D.G. Iverson, Use of Optical Remote Sensing Techniques to Monitor Facility Releases, Hydrocarbon Processing 77(9), 4 (1998).
5. W.G. Fately, R.M. Hammaker, M.D. Tucker, et al., Observing Industrial Atmospheric Environments by FTIR, Journal of Molecular Structure 347, 153-168 (1995).
6. D.A. Skoog, F.J. Holler, and T.A. Nieman, Principles of Instrumental Analysis (Saunders College Publishing, Philadelphia, 1998).

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 375–612 © World Scientific Publishing Company

PRECISION MEASUREMENT OF ATMOSPHERIC TRACE CONSTITUENTS USING A COMPACT FABRY-PEROT RADIOMETER

WILLIAM S. HEAPS
Laser and Electro-Optics Branch, National Aeronautics and Space Administration, Goddard Space Flight Center, Code 550, Building 19, Room S1, Greenbelt, Maryland 20771, United States of America
[email protected]

EMILY L. WILSON
Laser and Electro-Optics Branch, National Aeronautics and Space Administration, Goddard Space Flight Center, Code 554, Building 19, Room N1, Greenbelt, Maryland 20771, United States of America
[email protected]

ELENA M. GEORGIEVA
Goddard Earth Sciences and Technology Center, University of Maryland, 5523 Research Park Drive, Baltimore, Maryland 21228, United States of America
[email protected]

To address the problem of sources and sinks of atmospheric CO2, measurements are needed on a global scale. Satellite instruments show promise, but typically measure the total column. Since sources and sinks at the surface represent a small perturbation to the total column, a precision of better than 1% is required. No species has ever been measured from space at this level. Over the last three years, we have developed a small instrument based upon a Fabry-Perot interferometer that is highly sensitive to atmospheric CO2. We have tested this instrument in a ground-based configuration and from aircraft platforms simulating operation from a satellite. The instrument is characterized by a high signal-to-noise ratio, fast response and great specificity. We have performed simulations and instrument designs for systems to detect H2O, CO, 13CO2, CH4, CH2O, NH3, SO2, N2O, NO2, and O3. The high resolution and throughput, and small size of this instrument make it adaptable to many other atmospheric species. We present results and discuss ways this instrument can be used for ground-, aircraft- or space-based surveillance and the detection of pollutants, toxics and industrial effluents in a variety of scenarios including battlefields, industrial monitoring, or pollution transport.

Keywords: Instrumentation; measurement; metrology; remote sensing; atmospheric composition; optical instruments; absorption; interferometry; Fabry-Perot.

1. Introduction

Fabry-Perot interferometry is a powerful spectroscopic technique that can be used to measure most of the greenhouse gases as well as harmful atmospheric species such as NH3 and SO2. The precise detection of carbon dioxide in the atmosphere is of great
interest because of its role in trapping the long-wavelength radiation emitted from the Earth's surface.1 Global warming is driven largely by the increase of CO2 from human activity; studying the sinks and sources of this most important anthropogenic greenhouse gas will improve understanding of the global carbon budget and help project its future evolution.2 A change in the CO2 column can arise from a surface source or sink, from changing topography, or from weather-related changes in atmospheric pressure. To isolate the effects of sources and sinks, the CO2 column density must be normalized by the total column density of the atmosphere. The latter can be determined from a measurement of the oxygen column, since the O2 mixing ratio is constant below 86 km.

We present results from testing of a new instrument designed for high-accuracy atmospheric CO2 and O2 measurements. The instrument was developed at NASA's Goddard Space Flight Center, with sponsorship from NASA's Earth Science Technology Office. The measurement system is based on a tunable Fabry-Perot interferometer and consists of three channels: a CO2 channel centered at 1570 nm, an oxygen pressure-sensing channel in the oxygen A-band at 762 nm, and an oxygen temperature-sensing channel at 768 nm, where the oxygen absorption lines are more sensitive to temperature changes. The instrument offers a more compact, less complex, and cheaper alternative to the FT-IR techniques used so far, with equal or better sensitivity. It can also provide an effective, economical solution to military and homeland-defense monitoring requirements for air-related sensing applications. In addition, the technique presented here is easy to maintain and can be packaged to shoe-box size for field applications or for deployment on aircraft or satellite platforms.

Precise alignment of the transmission peaks of the solid Fabry-Perot etalon to the CO2 and O2 absorption lines is critical for successful operation of the interferometer. It is achieved by altering the refractive index of the fused-silica etalon through fine temperature tuning. A lock-in technique is used for data acquisition, and custom LabVIEW software controls the measurement system and analyzes the detected signals from all channels. We have conducted laboratory experiments as well as validation measurements of atmospheric CO2 and O2 from ground and airborne platforms, using direct sunlight and sunlight scattered from the Earth's surface. Our preliminary results indicate that the method can achieve the sensitivity and signal-to-noise ratio required to measure column CO2 and O2 at the target specifications.3,4,5,6 Measuring CO2 and O2 columns at this level is a complex problem; the emphasis of this paper is to show the advantages of the Fabry-Perot interferometer technique for detecting CO2 changes in the atmosphere with a precision better than 2 ppm.
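As a rough illustration of the normalization just described (this is not code from the paper, and the function name and example numbers are hypothetical), dividing the retrieved CO2 column by the O2 column and scaling by the standard O2 dry-air mole fraction of about 0.2095 gives a column-averaged dry-air CO2 mole fraction:

```python
# Minimal sketch of the column-normalization idea described above.
# Assumption: O2 dry-air mole fraction ~0.2095 (standard value); columns are
# in molecules/cm^2. All names and numbers are illustrative only.

O2_DRY_AIR_MOLE_FRACTION = 0.2095

def xco2_ppm(co2_column: float, o2_column: float) -> float:
    """Column-averaged dry-air CO2 mole fraction in ppm.

    Normalizing by the O2 column removes the dependence on surface pressure
    and topography, because O2 is well mixed below ~86 km.
    """
    dry_air_column = o2_column / O2_DRY_AIR_MOLE_FRACTION
    return 1e6 * co2_column / dry_air_column

# Example with made-up round numbers: a CO2 column of 8.0e21 cm^-2 over a
# dry-air column of ~2.1e25 cm^-2 gives roughly 380 ppm.
print(xco2_ppm(co2_column=8.0e21, o2_column=0.2095 * 2.1e25))
```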


2. Instrument Setup

Each channel of the Fabry-Perot measurement system consists of two sub-channels: one containing the Fabry-Perot etalon, hereafter the Fabry-Perot sub-channel, and one used as a reference, hereafter the reference sub-channel. The ratio of the signals detected by these sub-channels can be related directly to the concentration of the species in the atmosphere.

Fig. 1. Laboratory version of the FPICC instrument.

The instrument setup is shown in Fig. 1. The incoming light is collimated by two off-axis parabolic gold mirrors (OAPs). The field of view is limited by a 2 mm pinhole positioned at the focal point of the mirrors. A chopper was used to modulate the light at 380 Hz. The incoming light is then split among the three channels (CO2, oxygen pressure sensing, and oxygen temperature sensing) by a dichroic beam splitter. The three channels are identical except for the wavelengths and the types of solid etalons and detectors. The first channel is designed for CO2, the second for the pressure variations and the third for the temperature variations of the atmospheric oxygen column. Pre-filters were used in all channels to reject out-of-band light. They were mounted in temperature-controlled ovens to avoid wavelength fluctuations. The windows of the ovens were wedged to avoid internal interference. Light incident on the CO2 channel was pre-filtered at a central wavelength of 1570 nm.


The light is then split between the Fabry-Perot and reference sub-channels, with 90% of the light going to the Fabry-Perot sub-channel. In the Fabry-Perot sub-channel the light passes through the Fabry-Perot etalon, which is mounted in a temperature-controlled oven for fine FSR (free spectral range) tuning. The windows of this oven are also wedged to avoid internal interference. A set of OAP gold mirrors focuses the light onto InGaAs detectors for the CO2 sub-channels and Si detectors for the two oxygen channels. The sensitivity of the cooled detectors is 150 V W-1. The temperature stability of the ovens was set to 1/100 of a degree, which is important for the performance of the etalons; even a small temperature instability during the experiment can cause variations in the results. The FSR of the etalons was calculated to match the spacing between CO2 or O2 absorption lines. The FSR is given by7,8

FSR = λ² / (2 n d cos θ)                                                         (1)

where λ is the wavelength, n is the refractive index, d is the thickness of the etalon and θ is the angle of incidence, in our case θ = 0 deg. The flight-hardened version of the instrument was used in the Dryden and New Hampshire validation campaigns on NASA's DC-8 airplane. The flight unit integrates the three channels into a single instrument (Fig. 2); it has a slightly different design and employs more rugged optical mounts and integrated optical shielding.
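As a quick numerical check of Eq. (1), the sketch below evaluates the FSR for an assumed 1 mm thick fused-silica etalon with n taken as about 1.44 near 1570 nm; these etalon parameters are illustrative assumptions, not values quoted in the paper.

```python
import math

def free_spectral_range(wavelength_m: float, n: float, thickness_m: float,
                        theta_rad: float = 0.0) -> float:
    """Eq. (1): FSR = lambda^2 / (2 n d cos(theta)), returned in metres."""
    return wavelength_m ** 2 / (2.0 * n * thickness_m * math.cos(theta_rad))

# Illustrative values (assumptions, not from the paper): a 1 mm fused-silica
# etalon (n ~ 1.44) in the 1570 nm CO2 channel at normal incidence.
fsr = free_spectral_range(wavelength_m=1570e-9, n=1.44, thickness_m=1.0e-3)
print(f"FSR = {fsr * 1e9:.2f} nm")   # ~0.86 nm for these assumed values
```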

Fig. 2. The flight version of the FPICC instrument


For measurements from the aircraft platform, light passes through the atmosphere, reflects off the Earth, and then enters the instrument either through a downward-viewing mirror, which views the ground through a portal in the bottom of the DC-8 aircraft, or through a ground-glass lens mounted to the roof of the aircraft and coupled to a fiber optic cable. Both the downward-looking mirror and the fiber optic cable/collimator assembly are mounted in "snouts" which can be interchangeably connected to the front end of the instrument. In the DC-8 aircraft, the enclosed flight instrument was mounted to the aircraft seat track through vibration-isolated wire-rope mounts.

2.1. Temperature and pressure changes

We use temperature tuning to align the Fabry-Perot transmission lines to the CO2 and O2 absorption lines. As the temperature changes, the refractive index of the medium changes as

n = no + β (T − To)                                                         (2)

where no is the refractive index at temperature To, and β is the refractive-index change coefficient. The Fabry-Perot etalon is placed in an oven with tilted windows to avoid additional complications due to interference. As the temperature increases, the lines shift to longer wavelengths, typically by 0.02 nm per °C.9 We found the best alignment of the Fabry-Perot and CO2 absorption lines at a temperature of 53 °C. The tuning of the instrument is shown in Fig. 3, where the Fabry-Perot etalon lines (grey) are aligned with the CO2 absorption lines (black); the prefilter is shown with a black solid line.
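Using the ~0.02 nm per °C shift quoted above, one can estimate how large an oven temperature change is needed to walk a transmission peak onto a nearby absorption line. The sketch below does only that bookkeeping; the peak and line wavelengths in the example are placeholders, not measured values.

```python
# Sketch of the temperature fine-tuning step described in the text.
# Assumption: transmission peaks shift ~0.02 nm per deg C (rate from the text).

TUNING_RATE_NM_PER_C = 0.02

def temperature_change_for_alignment(peak_nm: float, target_line_nm: float,
                                     rate_nm_per_c: float = TUNING_RATE_NM_PER_C) -> float:
    """Oven temperature change (deg C) that shifts a Fabry-Perot peak onto a line."""
    return (target_line_nm - peak_nm) / rate_nm_per_c

# A peak sitting 0.10 nm short of a CO2 line needs roughly +5 deg C (placeholder numbers).
print(temperature_change_for_alignment(peak_nm=1570.00, target_line_nm=1570.10))
```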

Fig. 3. Simulated correspondence between Fabry-Perot fringes and carbon dioxide absorption lines.

Fig. 4 shows a laser scan using a gas cell filled with CO2. The Fabry-Perot transmission lines (thin black line) are aligned with the CO2 absorption lines (solid black line). The temperature of the FP etalon in the oven was 53 °C and the pressure in the gas cell was 500 Torr. The good agreement between the theoretical model and the experimental data is evident.


Fig. 4. Experimental tuning: a laser scan of an absorption cell (500 Torr CO2) using the instrument shows the Fabry-Perot transmission bands aligned with CO2 absorption lines.

The gas cell used in the laboratory measurements is about 1.5 meters long, so a pressure of 1000 Torr gives a column absorption roughly equal to ¾ that of the true atmosphere. The ratio of the Fabry-Perot to reference signals decreases as the pressure in the gas cell increases.


Fig. 5. At a temperature of 22 °C, the transmission band of the etalon is placed in the gap at 762 nm; in this way, only the reference channel is sensitive to pressure changes.


Fig. 5 shows a laser scan using the gas cell filled with oxygen and the etalon with FSR = 2.212 nm (Coronado Inc.) that we used for the oxygen pressure sensing channel of the instrument. Fig. 6 shows the ratio of the two signals (Fabry-Perot sub-channel intensity/reference sub-channel intensity) as a function of the oxygen pressure in the gas cell. The transmission peak of the etalon is placed in the gap between the oxygen absorption lines, and the ratio increases with increasing pressure in the cell.


Fig. 6. Ratio as a function of O2 pressure in the gas cell; the etalon is at 22 °C.

3. Flight and Ground Testing Results

The FP instrument currently has two potential modes of operation: one as a ground-based or field sensor, and one as an aircraft- or satellite-deployable instrument. In the first configuration, the CO2 column is measured through absorption of light by CO2 in the atmosphere directly between the sun and the ground instrument. We accomplished this by collecting light with a small telescope fixed to an equatorial mount, aligned to track the sun throughout the day. An optical fiber coupled to the rear of the collimator brings light into the instrument. Figs. 7 and 8 show data collected in January 2006 at NASA Goddard when the instrument was using direct sunlight. In Fig. 7 the ratios (FP sub-channel intensity/reference sub-channel intensity) for the CO2 and O2 pressure sensing channels are plotted as a function of local daytime. The experiment started at 11 AM and continued until 5 PM. The ratios clearly change during the day according to the Sun's position. The noisy parts of the data around 11 AM are caused by some very strong signals which overload the detectors. In Fig. 8 the two ratios, for the CO2 channel and for the O2 pressure sensing channel, are plotted as a function of the calculated airmass for the day. The airmass has high values early in the morning and late in the afternoon. At noon the Sun is highest in the sky and the airmass is smaller.
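For context on the airmass behaviour described above, a minimal plane-parallel estimate is sketched below; the paper does not state which airmass formula was used, and refraction and Earth-curvature corrections are ignored here.

```python
import math

def plane_parallel_airmass(solar_zenith_deg: float) -> float:
    """Relative airmass ~ sec(solar zenith angle); adequate away from the horizon."""
    return 1.0 / math.cos(math.radians(solar_zenith_deg))

# Larger zenith angles (morning/evening) mean more airmass, more CO2 absorption,
# and therefore a lower Fabry-Perot/reference ratio than around local noon.
for zenith_deg in (20.0, 40.0, 60.0, 75.0):
    print(zenith_deg, round(plane_parallel_airmass(zenith_deg), 2))
```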


Fig. 7. Data taken with the FPICC instrument and the sun tracker. The CO2 channel ratio and O2 pressure channel ratio are plotted as a function of local daytime. The Sun beam passes through less airmass at noon than in the afternoon. Less airmass means less absorption by CO2 molecules, which gives a higher ratio around noon in comparison with the ratio calculated late in the day. When the Sun is low in the sky the airmass has higher values than when the Sun is high in the sky.

Fig. 8. The two ratios – for the CO2 channel and for the O2 pressure sensing channel plotted as a function of the air mass for the day.


Fig. 9 is a photo of the roof and the sun tracker taken during the experiment.

Fig. 9. The sun tracker on the roof at NASA Goddard, January, 2006. The direct sun light is collected by a small telescope fixed on a mount programmed to follow the sun throughout the day. An optical fiber bundle coupled at the rear of the collimator brings light into the instrument.

The FP instrument can also use reflected light from the ground; in this mode, sunlight passes through the atmosphere and reflects off the Earth's surface before entering the instrument. Results from ground tests with this instrument using light reflected from a Spectralon plate have already been published in SPIE proceedings3,6. Research in April and May 2004 focused on testing the three-channel flight instrument in both preliminary ground measurements and flight tests at Dryden Flight Research Center aboard NASA's DC-8 Airborne Science Laboratory.3 Fig. 10 shows data taken during our second flight test, during PAVE (Polar Aura Validation Experiment) over New Hampshire in February 2005. The FPICC instrument was operated in reflected mode. The plot shows the correlation between the CO2 channel and the O2 pressure sensing channel from a flight test conducted on February 7, 2005. The second plot in Fig. 10 shows data taken by the airborne and space-based Differential Absorption Lidar (DIAL) system for atmospheric studies of ozone, water vapor, aerosols, and clouds (Dr. Browell, NASA Langley Research Center) on the same date and at the same time as the FPICC instrument. The two circled areas in Fig. 10 are an example of different types of clouds. The wavelengths for the carbon dioxide and oxygen channels are different, and the penetration of the clouds at those two wavelengths is not the same. Due to the multiple scattering and the uncertain light path length, the data for the CO2/O2 calibration contain
an error and do not follow a straight line. On a clear day without clouds, the CO2 and O2 ratios respond to the changing altitude of the plane, reflecting the airmass, and the CO2/O2 calibration is a straight line.3

Fig. 10. Flight test data taken with the FP instrument and with the Differential Absorption Lidar. The two marked areas in the second plot indicate cloudy regions.

One way to correct for the errors arising from scattering effects is to make measurements with a known path length, for example by using the glint from water surfaces. In this way the instrument will have very high precision regardless of the weather conditions. The OCO instrument (Orbiting Carbon Observatory) is likewise designed to measure reflected solar radiation by looking at the sun glint over oceans.10,11,12

4. Conclusions

We have implemented a prototype design for a Fabry-Perot based instrument to measure the CO2 and O2 total atmospheric columns using sunlight reflected from the Earth's surface. CO2 is the second most important greenhouse gas in the Earth's atmosphere after water vapor, and the ability to measure it from a satellite platform with an accurate and relatively cheap instrument can help to quantify its sources and sinks. We have demonstrated that the interferometer presented has significant capabilities to detect CO2 and O2 in the laboratory and have shown that it responds to actual reflected sunlight in the expected manner.


A rough estimate of the system performance in the field indicates that, with its current design, the instrument can detect changes in the CO2 column as small as 2.3 ppm with a one-second average, and better than 1 ppm with less than 10 seconds of averaging, and can detect changes in the O2 column as small as 0.1% with a time resolution of 1 second using direct sunlight. The airborne instrument using light reflected off the ground has a sensitivity of about 2%. The reduced sensitivity arises because atmospheric scattering makes the path length more variable and uncertain. The message from the science community is that instruments for CO2 measurements must work with a precision better than 1%, including the effects of atmospheric scattering. We attempt to solve this problem by designing an instrument able to work in a glint mode. We have also carried out a theoretical study and design of a similar instrument for CO2 measurements of the Mars atmosphere.

The advantages of the Fabry-Perot technique are its sensitivity and its ability to measure most of the greenhouse gases as well as toxic species in the atmosphere. It offers an effective and economical solution to military and homeland defense monitoring requirements for air-related sensing applications. The design is relatively simple in comparison with other instruments used for remote sensing; portability and real-time operation are additional benefits.

References
1. J. Sarmiento, N. Gruber, Sinks for Anthropogenic Carbon, Physics Today 55(8), 30-36 (2002).
2. D. O'Brien, P. Rayner, Global observations of the carbon budget: CO2 column from differential absorption of reflected sunlight in the 1.61 µm band of CO2, Journal of Geophysical Research 107(D18), 4354 (2002).
3. E. M. Georgieva, E. L. Wilson, M. Miodek, W. S. Heaps, Total Column Oxygen Detection Using Fabry-Perot Interferometer, Optical Engineering 45(11), 115001-1-115001-11 (2006).
4. E. M. Georgieva, E. L. Wilson, M. Miodek, W. S. Heaps, Atmospheric column CO2 and O2 absorption based on Fabry-Perot etalon for remote sensing, Proc. SPIE Earth Observing Systems X, 5882, 58820G-1-58820G-9 (2005).
5. E. M. Georgieva, E. L. Wilson, M. Miodek, W. S. Heaps, Experimental data on O2 absorption using Fabry-Perot based optical setup for remote sensing atmospheric observations, Proc. SPIE Earth Observing Systems IX, 5542, 195-206 (2004).
6. E. M. Georgieva, E. L. Wilson, M. Miodek, W. S. Heaps, Experimental data on CO2 detection using Fabry-Perot based optical setup for atmospheric observations, Proc. SPIE Optical Spectroscopic Techniques and Instrumentation for Atmospheric and Space Research V, 5157, 211-219 (2003).
7. J. Vaughan, The Fabry-Perot Interferometer: history, theory, practice, and applications (Bristol, England, 1989).
8. G. Hernandez, Fabry-Perot interferometers (Cambridge, New York, 1986).
9. G. Slyusarev, Aberration and Optical Design Theory (Adam Hilger Ltd, Bristol, 1984).
10. S. Houweling, F.-M. Breon, I. Aben, C. Rodenbeck, M. Gloor, M. Heimann, and P. Ciais, Inverse modeling of CO2 sources and sinks using satellite data: a synthetic inter-comparison of measurement techniques and their performance as a function of space and time, Atmos. Chem. Phys., 523-538 (2004).


11. A. M. Zavody, P. D. Watts, D. L. Smith, and C. T. Mutlow, A novel method for calibrating the ATSR-2 1.6-µm channel using simultaneous measurements made in the 3.7 µm channel in Sun glint, Journal of Atmospheric and Oceanic Technology 15, 1243-1252 (1998).
12. G. Luderer, J. Coakley and W. Tahnk, Using Sun glint to check the relative calibration of reflected spectral radiances, Journal of Atmospheric and Oceanic Technology 22, 1480-1493 (2005).

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 613–625 © World Scientific Publishing Company

BACKGROUND CHARACTERIZATION WITH A SCANNED FOURIER TRANSFORM SPECTROMETER

ALISON K. LAZAREVICH
Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099
[email protected]

DOUGLAS A. OURSLER
Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099
[email protected]

DONALD D. DUNCAN
Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099
[email protected]

Background scene characterization in the long wave infrared (LWIR) region of the electromagnetic spectrum is of particular interest for simulating the operational environment of passive standoff chemical detection systems. In conjunction with collections of local meteorological data and temperature and water vapor profiles, spectrally resolved LWIR imagery, acquired using a scanned Fourier Transform Spectrometer, and boresighted visible imagery were collected during Fall 2005. Resulting spectra in the 700–1450 cm-1 frequency range were analyzed to provide estimates of the foreground/background temperature differential that can then be used as a metric to assess the information content of the resulting IR imagery. Measurements were performed at two geographical sites and over 100 GB of raw data were accumulated. Through spatio-temporal analysis of the resulting temperature differential maps, we have derived strategies for optimizing scan patterns. These optimized strategies attempt to minimize the amount of redundant data, thus providing shorter inter-sample times for temporally varying portions of the scene. Considerations include the first and second order spatial and temporal statistics as well as information content as quantified by average pixel entropy. This research was financed by the U.S. Army Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance under contract number N00024-03-D-6606.

Keywords: Standoff Chemical Detection, dT Estimation, IR Background Characterization.

1. Introduction

Fielding and testing deployed sensors for the military can be an expensive and time-consuming task. Due to the limited number of field test locations, fully exercising all aspects of detection algorithms can be nearly impossible. To fill in the gaps from testing, and in some cases to eliminate certain test scenarios, many programs are turning to modeling and simulation for performance prediction. One of the major inputs to any
modeling effort is a well-characterized background environment with respect to the expected threat signature. In many cases both a first-principles/phenomenology investigation and a statistical understanding of the variations in background conditions are required in order to understand and improve system performance. To accomplish these goals with respect to standoff chemical agent detection, a system constructed by The Johns Hopkins University Applied Physics Laboratory (JHU/APL) was fielded to collect spatially and spectrally resolved background radiance spectra in the long wave infrared (LWIR) in different geographical regions of interest. In addition to the spectra, auxiliary data important to simulating the IR environment, such as local meteorological information, VIS/IR imagery and altitude, were recorded. The primary purpose of this data set is to provide modelers with the inputs required for injecting threat signatures into the background scenes and generating performance statistics for a variety of scenarios.

However, using a single-pixel scanning method to collect backgrounds for scene simulation has its limitations. For any given scene there will be some time evolution of the background that is assumed to be negligible with respect to sensor performance within the scene. If the record time of the scene is sufficiently long, that assumption is no longer valid and the resulting errors could degrade performance prediction. Therefore, in any scanning strategy, it is important to minimize the scan time of the scene while maximizing the amount of information captured about the scene. Several parameters must be considered during this optimization process, including spatial diversity, spectral diversity, noise figure and resolution. In the following pages, after a description of the hardware, a process is described that reduces the spectral dimension to a single number, termed ∆T, that reflects one of the important parameters for detection; we then look at how that value changes as a function of time and spatial location. Using this parameter to determine the information content in the image, one can optimize the scan pattern to collect only those pixels that add information about the scene rather than producing redundancy. Trade-offs in resolution and spatial binning vs. information content are also presented. Finally, the discourse concludes with a discussion of how information about ∆T can aid in a fundamental understanding of chemical agent detection.

2. Hardware Description

In support of the Joint Services Lightweight Standoff Chemical Agent Detector (JSLSCAD) Program's modeling and simulation effort, a radiometer is being fielded with a host of auxiliary equipment and sensors to collect background data from sites of interest. This spatially and spectrally resolved data set has multiple uses in the modeling and simulation (M&S) community, from end-to-end system performance evaluation to engineering development decision making and algorithm tuning. At the heart of the system is a Fourier Transform Infrared (FTIR) spectrometer, an MR-series radiometer manufactured by ABB Bomem that is capable of collecting both LWIR and MWIR data.


Interferograms, calibration information and auxiliary sensor data combine to characterize the ambient environment of the system as it relates to a passive chemical agent detection system. Auxiliary data come from a variety of sensors, including a mechanical pan/tilt system, a GPS unit, an electronic inclinometer, a weather station, an ozone point sensor, and a visible camera. All of these sensors are controlled by a central data acquisition computer with remote control capabilities. A picture of the integrated system is shown in Figure 1. A more complete description of the sensor system is available in the literature1 but is summarized here to give context to the analysis products detailed in later sections.

Figure 1: Picture of integrated background characterization suite.

2.1. Primary Radiometric Sensor

To collect background radiance in the LWIR spectral region, a Bomem MR304 is used. The instrument is a dual-channel spectroradiometer equipped with an MCT detector (700-1450 cm-1) and an InSb detector (1800-2650 cm-1). The detectors are Stirling cooled and the sensor exhibits a scan-to-scan standard deviation of 23.1 nW/(cm2 sr cm-1) at 1000 cm-1 at 1 cm-1 resolution. A plot of the single-scan variance for this instrument compared with other commercially available chemical detection systems is shown in Figure 2. These values are used as guidelines in data collection operations to define the number of interferograms needed at each pixel to better the noise figure of the systems under evaluation. The radiometer is fitted with a custom telescope that matches the full throughput of the system with an 8 mrad field of view (FOV), allowing for oversampling of the spatial extent of different commercially available systems. Data from the sensor consist of interferograms that are stored and then processed radiometrically by the end user, allowing for flexibility in processing procedures. Calibration of the radiometer was achieved using two CI-Systems SR-800 blackbodies as thermal references. Three temperature references consisting of several scans are taken while the radiometer stares at the blackbodies at user-defined intervals. Temperature determination has been demonstrated to be better than 0.1 K in most cases. To obtain spectrally resolved imagery in the LWIR, a scanner was employed in conjunction with the single-pixel FTIR.


Figure 2: Scan-to-scan standard deviation for the measurement instrument compared with commercial systems under test. Information about the commercial systems was provided by the individual vendors and has not been independently verified. MR304 figures were obtained in a lab environment using 15 °C, 25 °C and 35 °C blackbodies; the standard deviation was calculated over 200 scans.

The QPT pan/tilt is an RS-232 controlled device with a pointing reproducibility of 0.01° and a range of motion of ±90° in elevation and more than 360° in azimuth. In testing of the integrated unit, it was found that the pan/tilt system induces a 5.3% jitter when stepping in increments on the order of the radiometer FOV, resulting in a nominal pixel size of 8.4 mrad. Data collection uses a step-stare format in which the spectrometer dwells roughly 2 sec in each pixel before moving to the next pixel. Four scans of the radiometer are performed at each pixel location at a scan rate of 5 Hz for 1 cm-1 resolution, yielding an acquisition time of 0.8 sec; the remainder is overhead associated with slewing to the next pixel. A typical scan sequence is illustrated in Fig. 3, where the initial point is the upper left-hand corner and the final point is the lower right. The subsequent scan begins in the lower right-hand corner and retraces the illustrated pattern. For the illustrated scan, the lower right-hand pixel is sampled at the end of scan 1 and sampled again at the beginning of scan 2. Each measurement episode may consist of one or more scans over a specific elevation and azimuth range.
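From the timing quoted above (about 0.8 s of acquisition plus slew overhead, for roughly 2 s per pixel), the duration of a full serpentine frame scales directly with the number of pixels. The sketch below is simple bookkeeping with a hypothetical grid size, not the actual raster dimensions used in the collections.

```python
# Rough scan-time budget for the step-stare raster described above.
# Assumption: ~2 s total per pixel (0.8 s acquisition + slew overhead), per the text.

def frame_time_minutes(n_azimuth: int, n_elevation: int, dwell_s: float = 2.0) -> float:
    """Approximate time to complete one serpentine scan of the full grid."""
    return n_azimuth * n_elevation * dwell_s / 60.0

# Example with a hypothetical 70 x 9 pixel raster: ~21 minutes per frame,
# comparable to the full-scan duration mentioned later in the paper.
print(frame_time_minutes(n_azimuth=70, n_elevation=9))
```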


2.2. Auxiliary Sensors


In addition to each radiometric measurement, other data are collected to characterize the state of the environment and the instrument. All meteorological data, including air temperature, relative humidity (RH), wind speed, wind direction relative to magnetic North, and barometric pressure, are provided by a transportable automated meteorological station (TAMS) manufactured by All Weather, Inc. To supplement the meteorological data, an ozone sensor has been included in the instrumentation suite. The Thermo Electron 49C ozone analyzer calculates the ozone content of an air sample using a differential UV absorption technique. For documentation purposes, additional sensors have been added to give information about the content of the scene as well as the pointing and location of the sensor. A visible camera with sensitivity in the near infrared has been co-aligned with the telescope of the radiometer. The WATEC monochrome camera has a resolution of 640x480 and is coupled with a large-FOV aspheric lens to provide context documentation for each scan. Finally, a GPS sensor has been integrated to provide an accurate time standard for the system as well as the exact location of the sensor, as altitude is an important parameter in atmospheric radiance modeling.

Figure 3: Illustration of serpentine scan pattern.

3. Temperature Differential Calculations

The convention for calculating ∆T is illustrated in Fig. 4a. Foreground temperature is estimated by inspection of the spectrum between 1380 and 1450 cm-1, while background temperature is estimated from the upper or lower envelope of the spectrum between 800 and 1150 cm-1. Positive ∆T's correspond to the situation in which the background is cooler than the foreground, while negative ∆T's (see Fig. 4b) are observed when the background is warmer than the foreground. For positive (negative) ∆T's the background temperature is dictated by the lower (upper) envelope of the local line structure. Representative temperature maps are shown in Figs. 5.
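The paper does not spell out its envelope-fitting procedure, but the core of the ∆T calculation, converting LWIR spectral radiance to brightness temperature and differencing the two spectral windows named above, can be sketched as follows. The window limits come from the text; the use of a percentile to approximate the lower envelope (for the positive-∆T case) is an assumption made here for illustration.

```python
import numpy as np

H = 6.62607e-34    # Planck constant, J s
C = 2.99792458e8   # speed of light, m/s
KB = 1.380649e-23  # Boltzmann constant, J/K

def brightness_temperature(radiance_per_cm, wavenumber_cm):
    """Invert Planck's law for radiance given in W m^-2 sr^-1 (cm^-1)^-1."""
    sigma = 100.0 * np.asarray(wavenumber_cm, dtype=float)       # wavenumber in m^-1
    b_per_m = np.asarray(radiance_per_cm, dtype=float) / 100.0   # radiance per m^-1
    return (H * C * sigma / KB) / np.log1p(2.0 * H * C**2 * sigma**3 / b_per_m)

def delta_t(wavenumber_cm, radiance_per_cm, envelope_percentile=10):
    """Foreground minus background brightness temperature (positive-dT branch only).

    Foreground window: 1380-1450 cm^-1; background window: 800-1150 cm^-1,
    with the lower envelope approximated by a low percentile (an assumption).
    """
    wn = np.asarray(wavenumber_cm, dtype=float)
    tb = brightness_temperature(radiance_per_cm, wn)
    fg = (wn >= 1380.0) & (wn <= 1450.0)
    bg = (wn >= 800.0) & (wn <= 1150.0)
    return np.median(tb[fg]) - np.percentile(tb[bg], envelope_percentile)
```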



Figure 4a: Illustration of positive ∆T (= 14.3˚K) showing spectral regions over which temperatures are calculated


Figure 4b: Illustration of negative ∆T (= -7.35˚K) where foreground is cooler than background



Figure 5a: Differential temperature map in false color, scan 1. Display is in ˚K.


Figure 5b: Differential temperature map in false color, scan 2. Display is in ˚K.

4. Statistical Analysis

One issue that arose in the development of the scanning system was the possibility of undersampling the spatial grid and interpolating for the missing data. The objective of this scheme was to provide shorter scan times. Because of uncertainty in the nature of the data to be collected, it was decided that a more conservative approach was to do dense sampling. Now that a fair amount of data has been collected, we can revisit the possibility of designing an optimized scan strategy. Towards this end we explore various temporal and spatial correlations and the probability density functions of the ∆T maps.


Our first observation is of the scan-to-scan correlation for each pixel in the ∆T maps. Figure 6 displays a typical result. Up to ∆T ≈ 5, the scan-to-scan reproducibility is excellent. Beyond this point, the correlation is nil. One thing that complicates the interpretation of this result is that the time difference between the first and second sampling of a specific pixel is not constant. Because the serpentine scan pattern retraces its original path, the time differences range from 2 sec (the time between samples of successive pixels) to the time for an entire scan (roughly 21 min for this example). As a result, one cannot associate a temporal decorrelation time with an observed ∆T. Nevertheless, the qualitative assessment is that if ∆T ≤ 5 then the temporal sampling interval for that pixel can be longer. We will revisit this issue later after we have discussed the spatial correlations within a given scan.

Next we take up the topic of the probability density functions for ∆T. Because the number of data points is fairly small, we resort to the kernel method of estimating PDFs2 rather than the traditional histogramming technique. As exemplified by the results shown in Fig. 7, typical PDFs are multimodal. These particular results show three distinct modes. Temperature difference maps segmented on these three modes are shown in Fig. 8. This result clearly shows that the modes are identified with elevation strata.

These various results suggest the beginning of a strategy for more intelligent scanning: for small absolute values of ∆T the temporal revisit time can be longer and the spatial sample interval can be larger. This is primarily true in the horizontal direction, while the sampling in the elevation direction should remain contiguous. To draw any more specific conclusions, we need to inspect the second order statistics of each elevation stratum separately.
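A minimal version of the kernel density estimate referred to above is sketched below, using a Gaussian kernel with Silverman's rule-of-thumb bandwidth (Ref. 2); the bandwidth actually used in the paper is not stated, and the synthetic data in the example are only for demonstration.

```python
import numpy as np

def kde_pdf(samples, grid, bandwidth=None):
    """Gaussian-kernel estimate of the dT probability density on `grid`.

    If no bandwidth is supplied, Silverman's rule of thumb is used:
    h = 1.06 * std * n**(-1/5).
    """
    x = np.asarray(samples, dtype=float)
    g = np.asarray(grid, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * x.std(ddof=1) * x.size ** (-0.2)
    z = (g[:, None] - x[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (x.size * bandwidth * np.sqrt(2.0 * np.pi))

# Demonstration on synthetic trimodal dT values (not measured data).
rng = np.random.default_rng(0)
dt_samples = np.concatenate([rng.normal(-3, 2, 300), rng.normal(4, 2, 300),
                             rng.normal(12, 3, 200)])
pdf = kde_pdf(dt_samples, grid=np.linspace(-15, 25, 200))
```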

Figure 6: Scan-to-scan correlation (R = 0.928).


Figure 7: Probability density function for first scan

Figure 8: Segmentation of the temperature differential map based on modes of the PDF. The three strata correspond to −10 < ∆T < 2 (lowest elevations), 2 < ∆T < 7.5 (middle), and 7.5 < ∆T < 21 (highest elevations).

Inspection of the autocorrelation within each of the strata in our example ∆T map shows that the lowermost stratum is highly structured and thus it must be sampled contiguously. On the other hand, for this stratum, the temporal correlation is significant and thus revisit times can be long. The uppermost stratum has much less structure and thus can be spatially undersampled. However, this stratum with the largest temperature differentials is rapidly varying and thus the revisit times must be minimized. The middle stratum in this example is intermediate in terms of spatial and temporal autocorrelation.
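The stratum-by-stratum comparison above rests on second-order spatial statistics. One compact way to obtain the normalized spatial autocorrelation of a mean-removed ∆T stratum is through the FFT (Wiener-Khinchin relation); the sketch below is a generic implementation, not the authors' code, and its handling of missing pixels is a simplification.

```python
import numpy as np

def spatial_autocorrelation(dt_map):
    """Normalized, circular 2-D autocorrelation of a dT map via the FFT.

    NaN pixels (e.g. outside the stratum of interest) are replaced by the map
    mean so that they contribute nothing after mean removal (a simplification).
    """
    m = np.asarray(dt_map, dtype=float)
    m = np.where(np.isnan(m), np.nanmean(m), m)
    m = m - m.mean()
    power = np.abs(np.fft.fft2(m)) ** 2          # power spectrum
    acorr = np.real(np.fft.ifft2(power))         # Wiener-Khinchin relation
    acorr /= acorr.flat[0]                       # unit value at zero lag
    return np.fft.fftshift(acorr)                # move zero lag to the centre
```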


Figure 9: Suggested strategy for optimizing the scan pattern. For the 5 < ∆T stratum: sub-sampling in azimuth and elevation (more sparse in azimuth than in elevation) with short re-visit times. For 0 < ∆T ≤ 5: sub-sampling by 50% in azimuth and elevation with longer re-visit times. For ∆T ≤ 0: contiguous sampling in azimuth and elevation with the longest re-visit times.

These observations lead to the suggested scan optimization strategy illustrated in Fig. 9. As an example of this reduced spatial sampling strategy, consider Figs. 10. In Fig. 10a, we illustrate a sub-sampling within the middle stratum of 50% and a more sparse sub-sampling in the upper stratum (the total number of missing values is 17%). Interpolation of the missing values from the 4 nearest neighbors yields the results shown in Fig. 10b, with the absolute differences shown in Fig. 10c (a simple sketch of this sub-sample-and-interpolate check follows the list below). The mean difference is 0.01°, the RMS difference 0.25°, and the maximum difference 2.8°. Lest these differences be considered excessive, consider the fact that display of ∆T as an image is a bit misleading, as adjacent pixels really correspond to different times. For this example, the time between the first pixel and the last is approximately 21 min. As a result, pixel-to-pixel variations result from uncorrelated measurement noise as well as temporal trends. The desired overall strategy should be to mitigate both spatial and temporal aliasing effects. Towards this end the following changes are recommended:

• implement the spatial sub-sampling scheme illustrated in Fig. 9
• every scan should sample the upper-most stratum
• every other scan should sample the middle stratum
• every fourth scan should sample the lower-most stratum
• retain the serpentine scan but scan faster in azimuth than elevation, and begin at the same azimuth and elevation rather than retracing
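The sub-sample-and-interpolate check referred to before the list can be reproduced in a few lines; the sketch below drops pixels with a simple illustrative mask (not the exact Fig. 10a pattern) and fills them from the four nearest neighbours.

```python
import numpy as np

def interpolate_missing_4nn(dt_map, missing_mask):
    """Fill masked pixels with the mean of their four nearest (up/down/left/right)
    neighbours, ignoring neighbours that are themselves missing."""
    src = np.array(dt_map, dtype=float)
    src[missing_mask] = np.nan
    out = src.copy()
    n_rows, n_cols = src.shape
    for r, c in zip(*np.where(missing_mask)):
        neighbours = [src[rr, cc]
                      for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if 0 <= rr < n_rows and 0 <= cc < n_cols]
        neighbours = [v for v in neighbours if not np.isnan(v)]
        out[r, c] = np.mean(neighbours) if neighbours else np.nan
    return out

# Illustrative check on synthetic data: drop every other pixel in a band
# ("50% sub-sampling" of a middle stratum) and look at the worst-case error.
rng = np.random.default_rng(1)
truth = rng.normal(5.0, 1.0, size=(16, 30))
mask = np.zeros_like(truth, dtype=bool)
mask[4:10, ::2] = True
recon = interpolate_missing_4nn(truth, mask)
print(np.max(np.abs(recon - truth)[mask]))
```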


With these changes, contiguous sampling of spatially slowly varying regions is reduced, rapidly varying regions are revisited more often, and all pixels are sampled at fixed time intervals.


Figure 10a: Spatially sub-sampled scan pattern


Figure 10b: Missing values derived by interpolating from four nearest neighbors


Figure 10c: Absolute errors in interpolated values

5. Conclusions

Herein we have presented a system for the acquisition of background data and discussed an optimization strategy for its use. The suggested optimization strategy is based on minimizing the temporal sample interval for background components that are rapidly varying, at the expense of spatially undersampling the data known to be highly spatially correlated. This optimization strategy is developed within the context of foreground/background temperature differentials that are calculated over broad spectral bands at high resolution. In an actual detection algorithm the relevant ∆T would be over a band localized to the spectral structure of the target threat. As a result, the ∆T's calculated here represent an upper bound on those of interest. Moreover, as the resolution of the instrument decreases, so too will the ∆T; again, the ∆T's discussed here represent an upper bound.

In attempting to reduce the radiometric data to a single representative quantity, we have not only found a good metric to gauge scan coverage, but have also developed some insight into the backgrounds themselves. The examples shown here all exhibited a trimodal distribution in ∆T that is elevation dependent. Many data sets not presented here show the same dependence. Information such as this is important not only for judging detection ability within the recorded scene but also for giving a statistical basis for the creation of fully synthetic backgrounds. Information such as the distribution of ∆T, along with the statistical variation of the radiance signal as a function of wavenumber, is extremely important if the M&S community is to move to using fully synthesized backgrounds for evaluation purposes. Data collected using instrumentation such as that described above will provide not only the model basis set but also validation data for those models. With this architecture in place, and with this balance between measurements and
modeling, sensor systems can be fully exercised in a software test bed prior to full operational testing with confidence in the results.

6. Acknowledgements

The authors would like to thank the JSLSCAD Program office for sponsoring this work through the U.S. Army Joint Project Manager for Nuclear, Biological, Chemical Contamination Avoidance, under contract number N00024-03-D-6606.

7. References

1. A. K. Lazarevich, D. A. Oursler, K. C. Baldwin, "Background Data Collection Suite for Atmospheric Remote Sensing Applications," Proceedings of the Defense and Security Symposium, Chemical and Biological Sensing VII, Kissimmee, FL, April 2006.
2. B. W. Silverman, Density Estimation for Statistics and Data Analysis (Chapman & Hall/CRC, Boca Raton, 1998).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 627–637 © World Scientific Publishing Company

SPECTRAL SIGNATURES OF ACETONE VAPOR FROM ULTRAVIOLET TO MILLIMETER WAVELENGTHS

R. E. PEALE, A. V. MURAVJOV, C. J. FREDRICKSEN, G. D. BOREMAN
Department of Physics and College of Optics and Photonics, University of Central Florida, Orlando FL 32816 USA
[email protected]

H. SAXENA AND G. BRAUNSTEIN
Department of Physics, University of Central Florida, Orlando FL 32816 USA

V. L. VAKS, A. V. MASLOVSKY, S. D. NIKIFIROV
Institute for Physics of Microstructures, RAS, Nizhny Novgorod, GSP-105, 603950, Russia

This paper presents a comparative analysis of different wavelength ranges for the spectroscopic detection of acetone vapor. We collected and analyzed original absorption line spectra arising from electronic transitions in the ultraviolet, near-infrared vibrational overtones, mid-infrared fundamentals, THz torsional modes, and mm-wave rotational transitions. Peak absorption cross sections of prominent spectral features are determined. The relative merit of each spectral range for sensing is considered, taking into account the absorption strength, available technology, and possible interferences.

Keywords: acetone, vapor, millimeter waves, terahertz, infrared, overtone

1. Introduction

The development of sensitive spectroscopic techniques for acetone vapor detection has practical importance in the pharmaceutical industry, medicine, security, terrorism prevention, global ecology monitoring, astrophysics, etc. Acetone (C3H6O) is an ingredient in Triacetone Triperoxide, or TATP, an exceptionally volatile explosive preferred by suicide bombers. TATP was used in the London subway bombings and as a trigger in the shoe bomb of Richard Reid. TATP is almost undetectable by conventional bomb detection systems, but of its base ingredients (drain cleaner, bleach and acetone) the latter has an exceptionally high vapor pressure for a solvent. Thus, there is the possibility of detecting TATP and bomb manufacturing sites by the presence of acetone vapor. Detection of acetone vapor in human breath is of value as a screening diagnostic for diabetes1,2 and epilepsy3. Due to crude processing facilities, illicit cocaine usually contains trace amounts of solvents, including acetone, which might be a useful marker for drug enforcement efforts4. The pharmaceutical industry requires accurate determination
of the amount of solvent removed during the drying of drug substances and intermediates in paste, cake, or slurry forms5, and acetone is one of the major solvents used in drug manufacturing. Acetone exists in interstellar space because it is composed of only a few of the most abundant atoms; its detection requires accurate knowledge of the spectrum, and the existence of interstellar acetone has recently been confirmed6.

Acetone has infinite solubility in water, which means that any sample of acetone exposed to air will have some dissolved water in it. Acetone's vapor density is twice that of air, so the vapor tends to hug the ground. It has a vapor pressure of 181.7 Torr at 20 °C, which is high compared with other solvents. Its vaporization rate at 20 °C is 7.7 relative to the value of unity for butyl acetate, i.e., it evaporates quickly.

The purpose of this paper is to compare absorption features of acetone vapor over a broad spectral range in order to make some judgment about the relative merit of different spectral regions for acetone sensing. Absorption lines due to free rotations of the molecule are known at millimeter wavelengths7. At terahertz frequencies, slow torsional motions of the methyl groups are found8,9. The strong fundamental vibrations occur in the mid-IR10-14, with overtones that have been studied to some extent throughout the near-IR and visible15-19. A strong electronic band is found in the near UV19. It is hoped that the analysis and comparison in one paper of original spectroscopic data, collected with the application of sensing in mind, measured with high frequency accuracy, and with the determination of absorption cross sections, will enable new acetone detection methodologies.

2. Experimental methods

For millimeter wave spectroscopic studies of individual rotational lines of acetone, we used the method of non-stationary spectroscopy, namely spectroscopy with phase switching of the microwave radiation, developed at the Institute for Physics of Microstructures20. The spectrometer was based on a backward wave oscillator (BWO) with its radiation frequency stabilized by a phase lock-in circuit, which involves a harmonic mixer, a phase detector, and a reference synthesizer. The backward wave oscillator used in this measurement was tunable in the range 115-185 GHz. Switching of the radiation phase is realized by means of 10 ns voltage pulses fed to the slow-wave structure of the BWO. The phase-switched microwave radiation was fed through a 1 meter gas cell and detected by a Schottky-diode based detector. The typical phase-switching repetition rate was 250 kHz. The method uses the post-excitation coherent spontaneous response of excited gas molecules, which can provide spectroscopic detection of rotational lines with absorption levels down to 10-9 cm-1 with an acquisition time of 1 second for a gas cell length of 1 meter. This method can be used successfully for the detection of the acetone concentration in human breath with a limit of 10-6 molar % at a measurement acquisition time of not more than several seconds21. The typical spectral shape of a registered absorption line is a Voigt profile.


The optimal gas-mixture pressure for this method is 10-2-10-3 Torr, for which the linewidth is below several MHz but above the Doppler limit. The method can determine the line central frequency with an accuracy as low as 1 kHz, which provides unique identification of the gas being detected.

For wide-range measurements of the acetone vapor absorption spectra we used a vacuum-bench Bomem DA8 Fourier-transform spectrometer. The frequency range of the spectrometer extended from 20 cm-1 (0.5 mm wavelength or 0.6 THz frequency) through the visible, with a maximum resolution of 0.04 cm-1. For the range 20-55 cm-1, a 50 micron thick mylar pellicle beamsplitter, a Hg arc source of black body radiation, and a 4 K Si bolometer (IR Labs) were used. For the range 40-200 cm-1, these were changed to a 12 micron beamsplitter and a globar source. Spectra in the range 500-4000 cm-1 were collected using a Ge-coated KBr beamsplitter and an HgCdTe detector. From 4000 to 14000 cm-1, a quartz-halogen lamp, a quartz beamsplitter, and a liquid-nitrogen cooled InSb detector were used. From 8500 to 25000 cm-1, a Si detector was used. The spectrometer optics and sample compartments were evacuated to a pressure of ~100 mTorr. Transmittance spectra were calculated as the ratio of the sample and reference spectra. Depending on the expected strength of the acetone absorption lines in each spectral range, a 10 cm long stainless-steel cell with polyethylene or KBr windows, or a 10 m multi-pass White cell with polyethylene or quartz windows, was used to contain the vapor sample. Either cell can be contained completely within the evacuated spectrometer sample compartment. A reference spectrum was collected while the cell was continuously evacuated to a pressure of 70 mTorr. For identification of atmospheric absorption features due to water vapor, transmittance spectra of laboratory air and outdoor air were also collected.

The UV absorption spectrum of acetone vapor was measured with a Varian spectrometer using a 10 cm gas cell with KBr windows, which provided reasonable transparency at wavelengths longer than 200 nm. Transmittance was calculated as the ratio of the sample spectrum and the reference spectrum.

3. Results

3.1. Millimeter waves

Fig. 1 presents absorption spectra of acetone vapor near 132 GHz measured at a gas pressure of 20 mTorr. The line shape in the figure represents the second derivative of the natural absorption line. Due to the absence of other gases in these measurements, the spectra were collected during a single sweep of the frequency with sufficient signal-to-noise ratio. The signal-to-noise ratio can be improved remarkably by multi-scan data acquisition with averaging when detecting weak lines or in situations with low partial concentrations. The value of the absorption cross section for the line at 132.010 GHz is 3.8 x 10-20 cm2.
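The cross-section values quoted in this section follow from Beer-Lambert absorption together with the ideal-gas number density. The paper does not spell out the reduction, so the sketch below shows one conventional form of it, with placeholder numbers rather than the measured transmittance values.

```python
import math

KB_CGS = 1.3807e-16             # Boltzmann constant, erg/K
TORR_TO_DYNE_PER_CM2 = 1333.22  # 1 Torr expressed in dyne/cm^2

def number_density_cm3(pressure_torr: float, temperature_k: float = 296.0) -> float:
    """Ideal-gas number density in molecules/cm^3."""
    return pressure_torr * TORR_TO_DYNE_PER_CM2 / (KB_CGS * temperature_k)

def peak_cross_section_cm2(t_min: float, pressure_torr: float, path_cm: float,
                           temperature_k: float = 296.0) -> float:
    """Peak absorption cross section (cm^2) from a transmittance minimum.

    Beer-Lambert: T = exp(-sigma * N * L), so sigma = -ln(T) / (N * L).
    """
    alpha = -math.log(t_min) / path_cm   # absorption coefficient, cm^-1
    return alpha / number_density_cm3(pressure_torr, temperature_k)

# Placeholder example: a line dipping to 80% transmittance at 0.4 Torr over a
# 10 m (1000 cm) White-cell path corresponds to a cross section of ~1.7e-20 cm^2.
print(peak_cross_section_cm2(t_min=0.80, pressure_torr=0.4, path_cm=1000.0))
```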


Fig. 1. High-resolution mm-wave spectrum of acetone vapor at a pressure of ~20 mTorr.

3.2. Low THz range

Fig. 2 presents the low-terahertz spectrum collected using the Fourier spectrometer at a resolution of 0.04 cm-1 = 1.2 GHz. This resolution is too coarse to resolve individual absorption lines as in Fig. 1, so the spectrum appears as a wide band. For comparison, the spectrum of methanol, collected over the same spectral range with the same path length and resolution, showed strong characteristic structure, which can be expected since methanol has a smaller moment of inertia, giving rise to a larger separation of the main groups of rotational transitions22. We identify all sharp lines in Fig. 2 as being due to water vapor, which originates from the liquid acetone source itself.


Fig. 2. Low THz spectrum of acetone vapor at three different pressures. The optical path length was 10 m and the spectral resolution was 0.04 cm-1.


From the 0.4 Torr spectrum, a peak absorption cross section of 1.2 x 10-19 cm2 is estimated at 0.7 THz. This value is large compared to the cross sections of acetone lines in other spectral regions. Thus, apart from the absence of resolvable structure at atmospheric pressure, and due also to a comparative lack of interferences from water vapor or other ambient molecules, the spectral range below about 30 cm-1 (beyond about 0.33 mm wavelength) appears to be an attractive region for acetone sensing.

3.3. Mid THz range

In the mid-THz range the problem of intrinsic water-vapor absorption emerges and starts to dominate the spectrum above 50 cm-1. All water lines in our spectra arise from the liquid acetone source itself, since we find that the residual water absorption in the evacuated spectrometer bench divides out quite well in the calculated transmittance. The mid-THz spectrum is presented in Figs. 3 and 4. For these measurements, the acetone pressure was 23 Torr, the spectral resolution was 0.25 cm-1, and the optical path length was 10 m. The 40-200 cm-1 range of the collected data overlaps with the low-THz spectrum of Fig. 2. The broad increase of the absorption below 70 cm-1 in Fig. 3 is due to acetone and corresponds to the same absorption increase observed in Fig. 2 below 55 cm-1. The absorption band is stronger, wider, and extends to higher wavenumbers in Fig. 3 because of the higher acetone concentration in this experiment. All of the sharp lines in Fig. 3 are due to water vapor.


Fig. 3. Mid-THz spectrum of acetone vapor. The spectral resolution was 0.25 cm-1.

The spectrum in Fig. 4 extends the range of Fig. 3 up to 130 cm-1. The two marked lines in Fig. 4 are attributed to acetone by comparison with a water absorption spectrum; they have been identified previously as torsional modes8,9. In
Ref. [8], the acetone pressure was its saturated value 181.7 Torr and the spectral resolution was 0.1 cm-1. Compared with Ref. [8], the water lines in Figs. 3 and 4 are stronger relative to the acetone lines, but this better represents the likely situation one would encounter in field sensing.


Fig. 4. Mid-THz spectrum of acetone vapor with torsional modes indicated by symbols. The acetone vapor pressure was 23 Torr, the optical path length was 10 m, and the spectral resolution was 0.25 cm-1. All other lines are due to water vapor.

One of the marked absorption bands, a triplet, occurs at 124.6 cm-1 in the gap between water lines. Calculations9 have identified this band as the fundamental of the methyl-group clockwise-counterclockwise torsion mode ν = 0(G) → ν = 1(G). Four transitions are expected here within a 2.2 cm-1 wide range. In the previous study8, at the saturated vapor pressure of acetone, 181.7 Torr, none of these transitions were clearly resolved, even at a spectral resolution of 0.1 cm-1. The other acetone transition identified in Fig. 4, at 104.6 cm-1, is assigned as the first sequence transition ν = 1(G) → ν = 2(G), and is expected to be a 4-line multiplet spread over a range of 3.38 cm-1. None of this is clear in Fig. 4, because these transitions exactly coincide with a water band. The presence of an acetone band here is verified in our work only by an increase in strength of the 104.6 cm-1 line relative to the other water lines when acetone vapor is in the cell. The second sequence torsional transition ν = 2(G) → ν = 3(G) was predicted to occur at 139 cm-1, and from the discrepancy between theory and experiment for the two bands already discussed, one might expect a band to appear at 132 ± 4 cm-1. Such a band remains undiscovered: we find nothing attributable to acetone in the range 130-200 cm-1. We collected no data in the range 200-490 cm-1.


Both torsional bands in Fig. 4 are well within the tuning range of the p-Ge laser23, and within the range for which quantum cascade lasers can be made24. However, the absorption cross section of the band at 124.6 cm-1 is only about 6.3 x 10-22 cm2. Given the smallness of this cross section, the strong interferences from the overlapping bands of water vapor, the lack of resolved characteristic structure at atmospheric conditions, and the paucity of other characteristic lines in this spectral range, the mid-THz appears to be a particularly poor region for sensing of acetone vapor.

3.4. Mid-IR range

The mid-IR spectrum of acetone is well known. We performed our own measurements over the range 490 to 4000 cm-1, but we present only a sampling of it here in order to make a point about sensing and the competition with water vapor absorption. (Our results for the entire mid-IR spectral range are summarized at the end of the paper.) Fig. 5 presents the acetone vapor spectrum from 1165 to 2000 cm-1. Very strong lines are observed at 1215, 1363, and 1731 cm-1. The first (C-C asymmetric stretch) at 8.23 µm is a potential candidate for sensing due to the availability of commercial quantum cascade lasers [2] and because it falls in a region of relatively low water vapor absorption. The fine structure between the higher two lines is water vapor. These higher two lines have been considered poor candidates for open-path sensing because of competition with water vapor [14], but we note that the Fig. 5 sample has the same water content as in Fig. 4, while the water vapor interference in Fig. 4 is comparatively minor. Moreover, even the weakest, barely visible feature at ~1530 cm-1 in Fig. 5 has a cross section that is more than twenty times larger than that of any of the THz torsional transitions in Fig. 4.

Fig. 5. Acetone vapor spectrum in the range from 8.6 to 5 µm. The acetone pressure was 6 Torr, the optical path length was 10 cm, and the spectral resolution was 1 cm-1.


Fig. 6 presents the acetone vapor spectrum in the range 2750 – 3600 cm-1. The strong C-H mode at 2972 cm-1 (3.36 µm) is a potentially interesting signature for sensing, because it is in a region of low atmospheric absorption, and because a HeNe laser line falls within the band.

Fig. 6. Acetone spectrum in the wavelength range 3.6 to 2.8 µm. The acetone pressure was 51 Torr, the optical path length was 10 cm, and the spectral resolution was 1 cm-1.

3.5. Near-IR range

Near-IR overtone bands below 3 µm wavelength can be beneficial for spectroscopic sensing due to the availability of many near-IR solid-state laser lines and high atmospheric transparency. We were able to measure near-IR overtones throughout the wavenumber range 4000 to 15000 cm-1, but only a sample is presented here, with cross sections for all of them summarized at the end. Fig. 7 presents the transmittance spectrum in the range 5000-6250 cm-1. At least five transitions attributable to acetone are distinguished. A band of water vapor interference occurs in the range 5160-5540 cm-1. The peak frequencies of the observed lines are 5163*, 5635*, 5766*, 5915, 5980*, and 6063 cm-1. Asterisks indicate the existence of coincident solid-state laser lines25. Use of near-IR cavity ring down spectroscopy to detect the presence of acetone has recently been suggested19. Ref. [19] presented a spectrum of the absorption cross section for a single near-IR absorption line around 6000 cm-1 using a tunable external-cavity diode laser and cavity ringdown system. The spectral range of this laser was limited to 5952-6135 cm-1, and the spectrum produced is plotted as symbols in the Fig. 7 inset together with our data19. (The symbols are transmittance values calculated for our experimental conditions from the absorption cross section data presented in Ref. [19].) On the vertical scale, the two sets of data are in good agreement, so that we can simply cite Ref. [19] for the cross section of the deepest line as having the value 1.2 x 10-21 cm2.


On the other hand, there is a 20 cm-1 discrepancy in the frequency position of this line19. Since a Fourier spectrometer determines wavenumbers by counting zero-crossings of the interferogram of a stabilized single-mode HeNe laser during the scanning of a Michelson interferometer, we believe that the absolute frequency accuracy of our data is higher than that of Ref. [19], which used a tunable laser. The discrepancy serves to highlight one of the main challenges to sensing of even so common a vapor as acetone, namely, the low accuracy of frequencies for sharp characteristic absorption lines found in the literature.

Fig. 7. Transmittance spectrum of acetone vapor in the wavelength range 1.60 to 2.00 µm, which corresponds to the ∆νCH = 2 overtone[16]. The optical path length was 10 cm, the acetone pressure was 122 Torr, and the resolution was 1 cm-1. The inset shows an expanded region where previously published spectral data exist (symbols, Ref. [19]). Asterisks indicate the existence of solid state lasers at those wavenumber positions.

3.6. UV Range

The UV spectrum of acetone vapor is well known, so there is no need to present our spectrum here. The value19 of the cross section at its 265 nm peak is ~4.5 x 10-20 cm2. This value, while a factor ~10 lower than the strongest mid-IR line, is still sufficient to make the UV range useful for detection of concentrations as low as ~500 ppb19. Strong Doppler broadening, however, makes the UV line broad and featureless, such that it may be difficult to distinguish acetone from other UV absorbing vapors.

4. Discussion and Summary

From the transmittance spectra T we determined the absorption coefficient α = -(1/d) ln(T), according to Beer's law. Absorption cross sections σ are determined from α according to σ = α/n, where n = P/kT is the number density, P the pressure, k Boltzmann's constant, and T = 296 K the temperature of the vapor. Peak cross sections of the strongest absorption lines in each of the major spectral groups studied here are presented in Fig. 8.
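For readers who wish to reproduce this reduction from their own transmittance data, the short Python sketch below applies the two relations just stated. It is a minimal illustration of the Beer's-law conversion, not the authors' analysis code, and the example numbers are assumptions chosen only to mimic the conditions quoted for Fig. 4.

```python
# Minimal sketch: transmittance -> absorption coefficient -> cross section,
# using alpha = -(1/d) ln(T) and sigma = alpha / n with n = P / (k_B T_gas).
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
TORR_TO_PA = 133.322        # 1 Torr in Pa

def cross_section(transmittance, path_cm, pressure_torr, temp_k=296.0):
    """Return the absorption cross section (cm^2) for each spectral point."""
    t = np.clip(np.asarray(transmittance, dtype=float), 1e-12, 1.0)
    alpha = -np.log(t) / path_cm                        # absorption coefficient, cm^-1
    n_m3 = pressure_torr * TORR_TO_PA / (K_B * temp_k)  # number density, m^-3
    n_cm3 = n_m3 * 1e-6                                 # number density, cm^-3
    return alpha / n_cm3                                # cross section, cm^2

# Example (assumed values): a line with T = 0.5 at 23 Torr over a 10 m path
print(cross_section(0.5, path_cm=1000.0, pressure_torr=23.0))
```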


Fig. 8. Log-log plot of experimental peak absorption cross-sections for strong lines of acetone vapor in each spectral range studied vs. wavelength.

The mm-wave spectral region has the largest number of characteristic transitions upon which to base an acetone sensing methodology, but the vapor must be at a pressure on the order of tens of mTorr in order for these lines to be well resolved. The mid-IR spectral region has 4 very strong lines, two of which fall within atmospheric transparency regions and are thus good candidates for open path work. Although the near-IR overtones are weaker, they coincide with known solid state laser transitions where the atmosphere is also quite transparent. The UV absorption cross section is rather strong, but the absorption spectrum is very wide due to strong Doppler broadening at high frequencies, and thus may not be sufficiently characteristic of the molecule. By far the worst region for sensing is the mid-THz range, where the acetone lines are weak, while the water vapor spectrum is densely populated with strong lines.

5. References

1. A. Manolis, The diagnostic potential of breath analysis, Clin. Chem. 29(1), 5-15 (1983).
2. J. Donohue, K. O'Dwyer, B. D. MacCraith, C. Charlton, B. Mizaikoff, Quantum cascade laser-based sensor for monitoring physiological indicators in human breath, Europt(r)ode VII, 4-7 April 2004, Madrid, Spain.
3. K. Musa-Veloso, E. Rarama, F. Comeau, R. Curtis and S. Cunnane, Epilepsy and the Ketogenic Diet: Assessment of Ketosis in Children Using Breath Acetone, Pediatric Res. 52(3), 443-448 (2002).
4. W. H. Soine, Contamination of clandestinely prepared drugs with synthetic by-products, NIDA Res. Monogr. 95, 44-50 (1989).
5. S. C. Harris and D. S. Walker, Quantitative real-time monitoring of dryer effluent using fiber optic near-infrared spectroscopy, J. Pharm. Sci. 89(9), 1180-1186 (2000).
6. L. E. Snyder, F. J. Lovas, D. M. Mehringer, N. Y. Miao, Y-J. Kuan, J. M. Hollis, and P. R. Jewell, Confirmation of Interstellar Acetone, Astrophysical J. 578, part 1, 245-255 (2002).
7. J. M. Vacherand, B. P. Van Eijck, J. Burke, and J. Demaison, The rotational spectrum of acetone: Internal rotation and centrifugal distortion analysis, J. Molec. Spectros. 118(2), 355-362 (1986).


8. P. Groner, G. A. Guirgis, J. R. Durig, Analysis of torsional spectra of molecules with two internal C3v rotors. XXIV. High resolution far infrared spectra of acetone-d0, -d3, and -d6, J. Chem. Phys. 86(2), 565-568 (1987).
9. Y. G. Smeyers, M. L. Senent, V. Botella, D. C. Moule, An ab initio structural and spectroscopic study of acetone - An analysis of the far infrared torsional spectra of acetone -h6 and -d6, J. Chem. Phys. 98(4), 2754-2767 (1993).
10. P. M. Chu, F. R. Guenther, G. C. Rhoderick, and W. J. Lafferty, Quantitative Infrared Database, in NIST Chemistry WebBook, NIST Standard Reference Database Number 69, Eds. P.J. Linstrom and W.G. Mallard, June 2005, National Institute of Standards and Technology, Gaithersburg MD 20899 (http://webbook.nist.gov).
11. T. Shimanouchi, Molecular Vibrational Frequencies, in NIST Chemistry WebBook, NIST Standard Reference Database Number 69, Eds. P.J. Linstrom and W.G. Mallard, June 2005, National Institute of Standards and Technology, Gaithersburg MD 20899 (http://webbook.nist.gov).
12. D. J. Rogers, Infrared intensities of alcohols and ethers, PhD dissertation, University of Florida, Gainesville (1980).
13. J. Chao, K. R. Hall, K. N. Marsh, R. C. Wilhoit, Thermodynamic properties of key oxygen compounds in the carbon range C1 to C4. Part 2. Ideal gas properties, J. Phys. Chem. Ref. Data 15(4), 1369-1436 (1986).
14. P. L. Hanst and S. T. Hanst, Gas Analysis Manual for Analytical Chemists, Vol. I: Infrared Measurement of Gases and Vapors, Infrared Analysis Inc., Anaheim CA (1990).
15. H. G. Kjaergaard, B. R. Henry, and A. W. Tarr, Intensities in local mode overtone spectra of dimethyl ether and acetone, J. Chem. Phys. 94(9), 5844-5854 (1991).
16. I. Hanazaki, M. Baba, and U. Nagashima, Orientational site splitting of methyl C-H overtones in acetone and acetaldehyde, J. Phys. Chem. 89(26), 5637-5645 (1985).
17. H. L. Fang and R. L. Swofford, Photoacoustic spectroscopy of vibrational overtones in polyatomic molecules, Appl. Optics 21(1), 55-60 (1982).
18. M. Buback and H. P. Vögele, FT-NIR Atlas, VCH Verlagsgesellschaft mbH, Weinheim, Germany (1993).
19. C. Wang, S. T. Scherrer, and D. Hossain, Measurements of cavity ringdown spectroscopy of acetone in the ultraviolet and near-infrared spectral regions: potential for development of a breath analyzer, Appl. Spectr. 58(7), 784-791 (2004).
20. V. L. Vaks, A. B. Brailovsky, V. V. Khodos, Millimeter range spectrometer with phase switching - novel method for reaching of the top sensitivity, IR & mm waves 20(5), 883-896 (1999).
21. V. L. Vaks, N. V. Klyueva, Microwave spectroscopy of nitric oxide in exhaled air, Proc. 17th Intl. Conf. High Resolution Molecular Spectroscopy, Prague, Czech Republic, September 1-5 (2002) p. D22.
22. G. Moruzzi, B. P. Winnewisser, M. Winnewisser, I. Mukhopadhyay, F. Strumia, Microwave, infrared, and laser transitions of methanol, CRC Press, Boca Raton (1995).
23. A. V. Muravjov, S. H. Withers, H. Weidner, R. C. Strijbos, S. G. Pavlov, V. N. Shastin, and R. E. Peale, Single axial-mode selection in a far-infrared p-Ge laser, Appl. Phys. Lett. 76(15), 1996-1998 (2000).
24. R. Köhler, A. Tredicucci, F. Beltram, H. E. Beere, E. H. Linfield, A. G. Davies, D. A. Ritchie, R. Iotti, and F. Rossi, Terahertz heterostructure laser, Nature 417, 156-159 (9 May 2002).
25. A. A. Kaminskii, Laser crystals, their physics and properties, 2nd ed., Springer-Verlag, Berlin (1990).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 639–645 © World Scientific Publishing Company

THE STANDOFF AEROSOL ACTIVE SIGNATURE TESTBED (SAAST) AT MIT LINCOLN LABORATORY

JONATHAN M. RICHARDSON
MIT Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02420, USA
[email protected]

JOHN C. ALDRIDGE
MIT Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts 02420, USA
[email protected]

Standoff LIDAR detection of BW agents depends on accurate knowledge of the infrared and ultraviolet optical elastic scatter (ES) and ultraviolet fluorescence (UVF) signatures of bio-agents and interferents. MIT Lincoln Laboratory has developed the Standoff Aerosol Active Signature Testbed (SAAST) for measuring polarization-dependent ES cross sections from aerosol samples at all angles including 180° (direct backscatter) [1]. Measurements of interest include the dependence of the ES and UVF signatures on several spore production parameters including growth medium, sporulation protocol, washing protocol, fluidizing additives, and degree of aggregation. Using the SAAST, we have made measurements of the polarization-dependent ES signature of Bacillus globigii (atrophaeus, Bg) spores grown using different methods. We have also investigated one common interferent (Arizona Test Dust). Future samples will include pollen and diesel exhaust. This paper presents the details of the apparatus along with the results of recent measurements. Disclaimer: This work was sponsored by the US Air Force under Air Force Contract FA8721-05-C0002. Opinions, interpretations, conclusions and recommendations are those of the authors and not necessarily endorsed by the United States Government. Keywords: Standoff; lidar; polarization; aerosol; signature.

1. Introduction

LIDAR has been used for standoff detection and characterization of aerosols2, including bio-aerosols for standoff bio-threat detection3. Existing bio-LIDAR systems have utilized one or more wavelengths for aerosol ranging and sizing, ignoring polarization information. Recent work has demonstrated that polarization-sensitive multi-wavelength LIDAR can be useful for standoff bio discrimination4-6. The development of such systems presents a number of technical challenges that can be mitigated, in part, through the narrowing of the system requirements, e.g., required wavelengths and polarization states. Our group at MIT Lincoln Laboratory has developed the Standoff Aerosol Active Signature Testbed (SAAST) apparatus for measuring the polarization-dependent elastic scattering


signatures of a variety of controlled biological and inert aerosols at a range of wavelengths. Our measurement program was motivated in part by calculations performed under SPADE, a previous Lincoln Laboratory program demonstrating how NIR elastic scattering could discriminate between bio and background aerosols7. Recently, the SAAST has been extensively upgraded with a new UV-Vis-NIR OPO source, new detectors, automated angle selection, improved aerosol generation, and improved aerosol monitoring. The upgraded SAAST is located in a newly-renovated laboratory at MIT Lincoln Laboratory. The SAAST geometry is illustrated in Figure 1, where the scattering angle is measured from the forward direction (such that direct backscatter is when Θ = 180°).

Figure 1: Scattering geometry. Direct backscatter is equivalent to Θ = 180°.

2. Experimental Design

2.1. The measurement system

The SAAST design goal is to enable multispectral, multiple-angle, polarimetric investigation of the elastic scattering characteristics of aerosols, including 180° backscatter. A schematic of the optical layout is shown in Figure 2. Achieving complete measurement coverage of an aerosol's angular elastic scattering profile is complicated by the difficulty in sensing true 180° backscatter, as any receiver optic located at 180° necessarily interferes with the transmitted optical radiation. The receiver system has been designed with this in mind, and the primary receiver mirror has been designed to include a small source radiation entrance aperture. When the rotation stage supporting the receiver is positioned at 180°, slow-focused source radiation passes through a ~3mm hole in the primary receiver mirror, impinging on the target aerosol 500mm away with a spot size of ~1cm. Light scattered at 180 ± 1.8° is then collected by the primary mirror into the receiver system. An additional advantage of this design is that the use of reflective optics minimally affects the polarization state of the collected radiation.


Figure 2: Optical schematic of the SAAST. The receiver optics are placed on a 30 × 90cm optical breadboard that can be rotated to any scattering angle (shown in the 180° position for direct backscatter).

The SAAST transmitter is currently based on an Opotek Vibrant Arrow 355 Type II Optical Parametric Oscillator (OPO) tunable from 210-2400nm. In the UV and near-infrared, this system produces a ~10mJ, 5ns pulse, with repetition at 10Hz. The primary receiver mirror is uncoated gold on glass that reflects the scattered signal into the polarization analyzer consisting of a fixed linear polarizer and quarter-wave plate. The angle of the fast axis of the circular polarizer is re-oriented using a rotation stage. The receive optics focus the light onto a fast InGaAs detector that features a sensing bandwidth of 1GHz. In addition to the primary polarization-sensitive scatter sensor, there are two fixed scatter monitors that are mounted above the target region on opposite sides of the aerosol plume. The forward and back scatter monitors have two purposes: while each independently monitors the total aerosol density, the ratio of front to back is a powerful diagnostic of the sample size distribution. All signals are digitized by a digital oscilloscope and read into the computer using routines written in LabVIEW.
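To make the role of the rotating waveplate concrete, the sketch below models a generic rotating-waveplate analyzer (an ideal quarter-wave plate at angle θ followed by a fixed horizontal linear polarizer) with standard Mueller matrices. This is our own illustrative model under idealized assumptions (perfect optics, no instrument-specific calibration), not the SAAST analysis code; it shows how intensities recorded at several waveplate angles determine the incident Stokes vector by least squares.

```python
# Generic rotating-waveplate polarization-analyzer model (illustrative sketch).
import numpy as np

def qwp(theta):
    """Mueller matrix of an ideal quarter-wave plate, fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0,     0,     0],
                     [0, c * c, s * c, -s],
                     [0, s * c, s * s,  c],
                     [0, s,    -c,      0]])

# Ideal horizontal linear polarizer.
POLARIZER_H = 0.5 * np.array([[1, 1, 0, 0],
                              [1, 1, 0, 0],
                              [0, 0, 0, 0],
                              [0, 0, 0, 0]])

def detected_intensity(stokes_in, theta):
    """Intensity on the detector for an incident Stokes vector and waveplate angle."""
    return (POLARIZER_H @ qwp(theta) @ stokes_in)[0]

# Recover an unknown Stokes vector from intensities at several waveplate angles.
thetas = np.linspace(0, np.pi, 16, endpoint=False)
A = np.array([(POLARIZER_H @ qwp(t))[0] for t in thetas])   # measurement matrix
truth = np.array([1.0, 0.3, -0.2, 0.4])                     # example (assumed) input state
meas = A @ truth                                            # simulated detector readings
estimate, *_ = np.linalg.lstsq(A, meas, rcond=None)
print(np.round(estimate, 6))                                # recovers the input Stokes vector
```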

2.2. Aerosol generation

The design of the SAAST aerosol handling system is shown in Figure 3. To date, aerosol generation has utilized a BGI CN-25 Collison nebulizer8. The nebulizer creates droplets by spraying a liquid suspension of particles against an inside surface; these droplets may contain one or more particles. Under appropriate conditions, the droplet then evaporates, leaving a dry particulate. In our design, the aerosol initially enters a drying column, where larger particles settle out. The aerosol then travels through a copper pipe to the interaction region. Once passing through the laser beam, the aerosol plume is completely collected by a capture tube and deposited into a bag-in-bag-out HEPA filter for disposal. The BL1 safety enclosure is shown in Figures 4-6. It features an all-aluminum design with multiple access hatches into the various regions.


Figure 3: Aerosol generation and conditioning.

Figure 4: Schematic of the SAAST chamber (laser enclosure, measurement region, and beam dump, with access hatches; approximately 10 ft × 4 ft; top covers not shown).


Figure 5: The chamber with doors closed.

Figure 6: The chamber with doors open.


3. Demonstrations of the SAAST

Figures 6 and 7 display example scattering data acquired with the previous version of the system9. These data compare scattering from Arizona Test Dust (AZRD) with scattering from Bacillus globigii (B. subtilis var. niger) spores. Note that it was not possible to measure at 180° with the previous hardware. Each point on each graph represents a normalized Mueller matrix measurement derived from several measurements of the polarization-dependent scattering from the sample. The ratio a2/a1 (shown in Figure 6), for example, is closely related to the depolarization ratio as measured by several Lidar systems10. Although 180° data was not acquired in this early measurement, it can be inferred that there will be a measurable difference between the spore and AZRD samples at this angle. The ratio b1/a1 (shown in Figure 7) provides a measurable difference between the samples at 95° and thus can be exploited by point or bistatic sensors.


Figure 6 (left): Element a2 normalized to Element a1. The differing structure in the backward angles is indicative of the drastically different shape distributions of the two samples. This represents an exploitable feature for standoff bio-detection.
Figure 7 (right): Element b1 normalized to Element a1. The back angles are indicative of average shape and the forward angles are indicative of average size of the particles. This is an exploitable feature of the data for near-field or point detection systems.
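For readers connecting Figure 6 to lidar practice, a standard relation from the light-scattering literature (quoted here as background, not as a result of this paper) expresses the linear depolarization ratio of randomly oriented particles at exact backscatter in terms of the same normalized matrix element:

\delta_L = \frac{1 - a_2/a_1}{1 + a_2/a_1} ,

so a separation of the a2/a1 curves at backward angles translates into different depolarization ratios of the kind reported by the lidar systems of Ref. 10.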

4. Conclusions

We have established the SAAST at its new location at MIT Lincoln Laboratory. We are satisfied that our design will allow for improved measurements of polarization-sensitive scattering phenomena from biological and inert samples. We anticipate rapid strides in establishing this instrument as a premier testbed in support of polarized Lidar development programs. Future plans include an upgrade to extend the wavelength range to the MWIR (2.5-5 µm) in support of recent bio-Lidar development efforts.


References

1. J. M. Richardson and J. C. Aldridge, The standoff aerosol active signature testbed (SAAST) at MIT Lincoln Laboratory, Chemical and Biological Standoff Detection III (Proc. SPIE Vol. 5995), pp. 127-134, 2005.
2. R. M. Measures, Laser remote sensing: fundamentals and applications. New York: Wiley, 1984.
3. C. Swim, R. Vanderbeek, D. Emge, and A. Wong, Update on active chem-bio sensing, Chemical and Biological Sensing VI (Proc. SPIE Vol. 5795), pp. 79-85, 2005.
4. J. H. Marquardt, Measurement of Bio-Aerosols with a Polarization-Sensitive, Coherent Doppler Lidar, presented at the Fifth Joint Conference on Standoff Detection for Chemical and Biological Defense, Williamsburg, Virginia, September 24-28.
5. S. D. Mayor, S. M. Spuler, and B. M. Morley, Scanning eye-safe depolarization lidar at 1.54 microns and potential usefulness in bioaerosol plume detection, Lidar Remote Sensing for Environmental Monitoring VI (Proc. SPIE Vol. 5887), pp. 137-148, 2005.
6. E. Yee, P. R. Kosteniuk, G. Roy, and B. T. N. Evans, Remote biodetection performance of a pulsed monostatic lidar system, Applied Optics, vol. 31, pp. 2900-2913, 1992.
7. J. W. Snow, W. E. Bicknell, A. T. George, and H. K. Burke, Standoff Polarimetric Aerosol Detection (SPADE) for Biodefense (TR-1100), Lincoln Laboratory, Lexington, MA, No. ESC-TR-2004-083, February 2005.
8. K. R. May, The Collison Nebulizer: Description, Performance and Applications, Aerosol Science, vol. 4, pp. 235-243, 1973.
9. J. M. Richardson, J. C. Aldridge, A. N. Goyette, J. D. Pitts, and J. W. Snow, Multi-Angle, Multi-Spectral, Polarimetric, NIR Elastic Scattering for Aerosol Characterization, presented at the 6th Joint Conference on Standoff Detection for Chemical and Biological Defense, Williamsburg, VA, October 25-29, 2004.
10. S. R. Pal and A. I. Carswell, Polarization Properties of Lidar Backscattering from Clouds, Applied Optics, vol. 12, 1973.


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 647–660 © World Scientific Publishing Company

DISCRIMINATION BETWEEN NATURAL DENSE DUST CLOUDS WITH IR SPECTRAL MEASUREMENTS

EYAL AGASSI, AYALA RONEN, NIR SHILOAH
Israel Institute for Biology Research, P.O. Box 19, Nes Ziona, 74100 Israel
[email protected]

EITAN HIRSCH
Life Science Research Israel, P.O. Box 19, Nes Ziona, 74100 Israel

Heavy loads of aerosols in the air have considerable health effects in individuals who suffer from chronic breathing difficulties. This problem is more acute in the Middle East, where dust storms in winter and spring traverse from the neighboring deserts into densely populated areas. Discrimination between dust types and association with their source can assist in assessment of the expected health effects. A method is introduced to characterize the properties of dense dust clouds with passive IR spectral measurements. First, we introduce a model based on the solution of the appropriate radiative transfer equations. Model predictions are presented and discussed. Actual field measurements of silicone-oil aerosol clouds with an IR spectro-radiometer are analyzed and compared with the theoretical model predictions. Silicone-oil aerosol clouds have been used instead of dust in our research, since they are composed of one compound in the form of spherical droplets and their release is easily controlled and repetitive. Both the theoretical model and the experimental results clearly show that discrimination between different dust types using IR spectral measurements is feasible. The dependence of this technique on measurement conditions, its limitations, and the future work needed for its practical application are discussed. Keywords: Remote sensing; Dust; Infrared; Spectral.

1. Introduction

Heavy loads of aerosols in the air have a considerable health effect in individuals who suffer from chronic illnesses of the breathing airways and respiratory organs. This problem is more acute in the Middle East, where dust storms in winter and spring traverse from the neighboring deserts into densely populated areas. Discrimination between dust types and association with their source can assist in the assessment of the expected health effects. Breathing fine particles is associated with illnesses like cancer1 (especially when the dust is mixed or coated with industrial pollutants) and some other diseases that are induced by the penetration of respirable grains of silica into the lungs2. The large scale of these storms enables their remote sensing, usually by active systems like LIDAR3-4. Active systems offer high sensitivity and range information, but


since most of them operate in the visible/NIR, they are commonly employed to infer dust cloud density rather than its constituents. Furthermore, active systems are usually bulky and have a limited spectral coverage. Therefore, remote sensing of dust properties by passive systems should be considered as an alternative and advantageous measurement technique5. Many common dust minerals, such as SiO2 and CaCO3, have a wavelength-dependent albedo in the infrared wavelengths6-7, enabling a classification of dust type by passive measurements in the IR spectral band. Nevertheless, while the detection scheme of aerosol clouds by active standoff measurements is well established, a complete understanding of passive sensing and its parameterization does not exist. This gap degrades the utility of passive standoff detection of aerosol clouds for environmental applications. Furthermore, the lack of comprehensive and rigorous knowledge of the signal origins and its parameterization often leads to misinterpreted results. The aim of this article is to introduce a model of signal formation for passive aerosol detection, and to present experimental data that support the proposed theoretical approach and its implementation. The next section describes such a model and its computation. In section 3, the experimental setup and data are presented and the measured spectra are compared to model predictions. Discussion of the results and a summary are found in the last section.

2. Theoretical Model and its Computation

We introduce a model for solving the radiative transfer equation of an aerosol cloud, including the combined effect of both thermal emission and multiple scattering. The model described in this section follows Sutherland's approach8-9, which was originally established in order to estimate the obscuration efficiency of thermal smoke, as a part of the US Army COMBIC model10. Unlike Sutherland's model, which treats clouds with a Gaussian concentration profile, at this stage we treat uniform and homogeneous clouds.

2.1. Definitions

A spherically symmetric aerosol cloud with radius r0 and temperature Tcld is in thermal equilibrium with the surrounding ambient air at temperature Tamb. The background temperature is Tbgd. The total radiance I that reaches a sensor, at a wavelength λ, is given by Eq. (1):

I(\lambda, \tau_0, \omega_0) = B(T_{bgd})\, e^{-\tau_0} + \varepsilon_0 B(T_{cld}) + \varepsilon_1 B(T_{cld}) + \varepsilon_2 B(T_{amb})     (1)

The first right-hand term in Eq. (1) is the total attenuation of the emitted background radiance along the sensor's line of sight (LOS) due to the cloud. The next term represents the direct thermal self-emission of the cloud. The last two terms denote the internal scattering of self thermal emission and the scattering of ambient radiance, respectively. Eq. (1) neglects the influences of atmospheric attenuation and path radiance, and is an approximate expression for the solution of the radiative transfer equation in this scenario.


The function B(T) is the band weighted blackbody radiance evaluated at temperature T. The total optical thickness τ 0 is the path integral of concentration C, and total mass extinction α, at a known wavelength and along a line of sight passing through the cloud: r0



τ 0 (λ , r0 , µ ,φ ) = α c(r ' )dr ' .

(2)

0

Since the cloud concentration is homogeneous and outside the cloud C = 0, the total optical thickness is:

\tau_0(\lambda) = 2\, \alpha\, C\, r_0     (3)

The three emissivity components which appear in Eq. (1) are defined as follows:

\varepsilon_0 = (1 - \omega_0)(1 - e^{-\tau_0})     (4)

\varepsilon_1 = \omega_0 \int_0^{\tau_0} J_e(\tau')\, e^{-\tau'}\, d\tau'     (5)

\varepsilon_2 = \omega_0 \int_0^{\tau_0} J_s(\tau')\, e^{-\tau'}\, d\tau'     (6)

Je and Js are the source functions, which will be defined later. r' is the coordinate and τ' the optical thickness of a point inside the sphere. ω0 is the cloud single-scattering albedo. Its wavelength dependence is expressed via the mass absorption coefficient, αabs, and the mass scattering coefficient, αsct (α = αabs + αsct):

\omega_0 = \frac{\alpha_{sct}}{\alpha_{abs} + \alpha_{sct}}     (7)

2.2. Radiance and source functions calculation

The radiance received at the sensor is calculated by substituting the source functions and the given temperatures in Eq. (1). We define the useful signal as the difference between the clean background radiance and the measured radiance when the cloud is found in the field of view of the sensor, ∆I. Subtracting the terms written in Eq. (1) from the clean background reference yields the effective signal at the sensor:

\Delta I(\lambda, \tau_0, \omega_0) = [W_{bgd}(\lambda) - (1 - \omega_0) B(T_{cld}, \lambda)]\,[1 - e^{-\tau_0}] - \varepsilon_1 B(T_{cld}, \lambda) - \varepsilon_2 W_{amb}(\lambda)     (8)

while the clean background radiance is expressed as follows:

W_{bgd}(\lambda) = \varepsilon_{bgd} B(\lambda, T_{bgd}) + (1 - \varepsilon_{bgd}) W_{amb}(\lambda)     (9)


where the ambient radiance (Wamb) consists, for the aerosol cloud, of sky and ground (half a sphere each), and of half a sphere of sky for the terrain background. In order to estimate the model efficiency, we simplified it further by the following assumptions:

• The background radiance is considered to be a blackbody at temperature Tbgd, B(λ, Tbgd). This is justified since the emissivity of most natural ground materials is close to unity and characterized by weak spectral dependence11.
• The ambient radiance is a blackbody at temperature Tamb, B(λ, Tamb), without angular dependence.
• Isotropic scattering: the scattering efficiency does not depend on the scattering angle.

Assumptions 2 and 3 are the two assumptions which deviate significantly from actual conditions. In reality, the significant contribution to the radiance is obtained by weighting the scattering efficiency at different angles with the dependence of the sky temperature on the elevation angle. In the isotropic case, the source functions Je and Js are defined as9:

J_e(\tau, \mu, \phi) = \frac{1}{4\pi} \iint_{4\pi} I_e(\tau, \mu, \phi; \mu', \phi')\, d\mu'\, d\phi'     (10)

J_s(\tau, \mu, \phi) = \frac{1}{4\pi} \iint_{4\pi} I_s(\tau, \mu, \phi; \mu', \phi')\, d\mu'\, d\phi'     (11)

Ie and Is are the incoming total radiances in all directions of internal thermal and external (sky and background) origin, respectively. The integration is performed in all directions over the full 4π steradians of a sphere, through the angles φ and θ (where µ ≡ cos θ). Once the cloud temperature is set to be a constant, the source functions fulfill

J_e + J_s = 1     (12)

The calculated radiance can be converted into an equivalent temperature by using the inverse blackbody function.
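The short Python sketch below strings Eqs. (1)-(9) together numerically. It is only an illustration of the simplified model, not the authors' code, and it adds one assumption of our own on top of the paper's: the source functions J_e and J_s are treated as constants that sum to one (Eq. (12)), with an arbitrary split fraction f, so that ε1 and ε2 reduce to fractions of ω0(1 − e^{−τ0}). The background emissivity and the temperatures in the example are also assumptions.

```python
# Minimal numerical sketch of the simplified cloud-signal model, Eqs. (1)-(9).
import numpy as np

H, C, K_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, temp_k):
    """Spectral radiance B(lambda, T) in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * K_B * temp_k)) * 1e-6

def delta_radiance(wl_um, tau0, w0, t_cld, t_amb, t_bgd, eps_bgd=0.98, f=0.5):
    """Signal Delta-I of Eq. (8): clean background minus on-cloud radiance.

    Assumption beyond the paper: J_e = f and J_s = 1 - f are constants, so
    eps1 = w0*f*(1 - exp(-tau0)) and eps2 = w0*(1 - f)*(1 - exp(-tau0)).
    """
    b_cld, b_amb, b_bgd = (planck(wl_um, t) for t in (t_cld, t_amb, t_bgd))
    w_amb = b_amb                                    # ambient treated as a blackbody
    w_bgd = eps_bgd * b_bgd + (1 - eps_bgd) * w_amb  # Eq. (9)
    grow = 1.0 - np.exp(-tau0)
    eps1 = w0 * f * grow                             # scattered internal emission
    eps2 = w0 * (1 - f) * grow                       # scattered ambient radiance
    return (w_bgd - (1 - w0) * b_cld) * grow - eps1 * b_cld - eps2 * w_amb

wl = np.linspace(8.0, 13.0, 200)   # LWIR band, um
print(delta_radiance(wl, tau0=0.5, w0=0.6, t_cld=293.0, t_amb=263.0, t_bgd=323.0)[:3])
```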

2.3. Radiance calculation for silicone oil droplets

Performing the calculation requires combining the model with a specific aerosol cloud of known physical parameters. In particular, albedo calculations depend on the scattering and absorption coefficients (see Eq. (7)), which, in turn, depend on the aerosol refractive index. As mentioned in previous sections, we decided to study the feasibility of discrimination


between dust types using a silicone oil (polydimethylsiloxane) droplet aerosol cloud. The reasoning for this choice is:

• Common dust types are mixtures of several materials with different optical properties.
• It is difficult to control and ensure repeatability in dust composition and its particle size distribution.
• The particles' shapes are usually unknown.

Since our study focuses on developing a model for standoff detection of aerosol clouds and its validation, we prefer to use a well-known material. The refractive index of silicone oil can be measured easily; it can be dispersed efficiently and repetitively, and it is composed of spherical droplets. The refractive index of silicone oil was measured by ellipsometry in the facilities of Woolam Co. Inc. (Nebraska, USA), and the absorption coefficient was measured in a pass cell with FTIR spectrometry (Nicolet XB-20, UK). The next step was to assign several log-normal size distributions of silicone oil droplets, characterized by a number median diameter (NMD) and geometric standard deviation, σg. Coefficients of scattering and absorption for each wavelength were calculated using standard Mie theory12 and were integrated over the droplet diameter size distribution. Fig. 1 presents the radiance calculated for a silicone oil droplet cloud with size distributions parameterized by several NMD values. The mass extinction coefficients were averaged over 5 values of σg between 1.2 and 2.

Fig. 1. The calculated radiance emitted by a silicone oil cloud (after subtraction from the clean background) with size distributions parameterized by several NMD values (10-90 µm). Tcld=20 °C, Tamb=-10 °C, Tbgd=50 °C.


The temperature values used in the Fig. 1 calculation were chosen as a fair estimate of clear-day conditions at noon. The background (ground) temperature is higher than the cloud (air) temperature, while the sky temperature is significantly lower than the air temperature. It can be seen that for small NMD values, the signal intensity is high and so is the modulation depth. Distributions with large NMD have a weaker signal and smaller modulation depth. Additionally, because both sky and air temperatures are colder than the background temperature, the cloud is colder than the background at any wavelength. Fig. 2 shows the correlation matrix between the spectral signatures (in the wavelength range of 8-13µm) corresponding to the different NMD values that appeared in Fig. 1.

Fig. 2. The correlation matrix of the spectral signatures of a silicone oil droplet cloud at different NMD values. The numbering 1-9 refers to NMD values from 10 µm to 90 µm, as shown in the legend of Fig. 1. σg=1.2.

The figure shows that the size distribution has a crucial effect on the expected measured spectral signature of an aerosol cloud. Therefore, methods for classification between dust clouds must either take this variability into account or restrict the discrimination to specific size distributions.

2.4. The influence of ground and air temperatures

One of the factors varying during the day is the relation between the temperatures of background, cloud and sky. The combination of their values depends on the meteorological conditions and on the specific time within the diurnal cooling and warming cycle of the ground and the air. In typical night hours, the air temperature is higher than that of the ground, and vice versa at daytime hours. The sky temperature remains relatively low compared to ground and air temperatures throughout the day. Fig. 3 shows the dependence of the spectral signature on gradual modification of all these temperatures between representative values of day and night.


Fig. 3. The calculated radiance (after subtraction from the clean background) of a silicone oil droplet cloud with a size distribution characterized by NMD=10µm, constant τ0 and averaged values of σg. Temperatures change gradually in constant spacing between day (Tcld=20 °C, Tamb=-10 °C, Tbgd=50 °C) and night (Tcld=10 °C, Tamb=-15 °C, Ttgt=5 °C).

It can be noted that the spectral signature changes only slightly and remains constant up to shift and multiplication modifications. In day conditions, the signal is intense and positive ("cold") relative to the background. On the other hand, at night, the signal is weaker and can be either positive or negative ("hot") relative to the background, depending on the magnitude of the albedo at each wavelength. This phenomenon makes it possible to distinguish between gas and aerosol clouds when the air temperature is higher than the ground temperature. A gaseous cloud will always appear hotter than the background (because its presence in the field of view is an outcome of its emission), while an aerosol cloud will appear colder than the ground under the same conditions. The large difference between the detection schemes of gaseous and aerosol clouds is especially manifested by the fact that aerosol clouds can be detected even when ground and air temperatures are equal. The next section describes experimental data that support the predicted spectral signatures of the computational model and their parameterization. We intend to expand and upgrade the model by the incorporation of anisotropic phase functions, a more realistic sky radiance model, non-log-normal size distributions and various cloud densities.

3. Experimental Data and its Analysis

3.1. Measurement setup

In order to evaluate the theoretical model predictions, a small-scale field test was conducted. The test site was located in a wide valley in the Negev desert. The release


point was located at a distance of about 300 m from the imager's observation point. The weather conditions, such as the air and ground temperatures, wind speed and direction, were measured continuously every second. The weather throughout the measurement was fair and cloudless. The experimental setup included the following:

Silicone oil dispersion setup: An aerosol cloud of silicone oil (Dow Corning 200(R) Fluid, 350CST) droplets was generated by pumping the oil at a pressure of 3 atmospheres through three spray nozzles. This yielded a constant flow of 10 kg/min of silicone oil droplets. Each release lasted 1 minute. The size distribution of the generated plume had been measured in the laboratory prior to the field test with a Spraytec aerosol analyzer (Malvern, UK). A representative size distribution of the droplets that formed the cloud is shown in Fig. 4.

Fig. 4. The size distribution of the dispersed silicone oil droplets.

The figure shows that the majority of the droplet sizes occupy the range of 1-10µm. It should be noticed that the droplet size histogram does not follow a log-normal distribution, and cannot be expressed in such terms.

Radiometric measurement setup: The spectral signatures were measured by the SR5000 spectro-radiometer (CI Systems, Israel). The SR5000 is customized for acquiring spectra at a high rate and in an outdoor environment. Spectral filtering is performed by passing the collected radiance in the field of view through a rotating CVF (Circular Variable Filter). The nominal spectral resolution is ~1% (for fields of view smaller than 3 mrad) and it degrades to ~2% at the maximal allowed FOV (field of view) of 6 mrad. The maximum signal-to-noise ratio is obtained with


the maximal FOV and integration time. While there were no restrictions on choosing the FOV, we decided to acquire the spectra at a rate of 0.5 Hz. This value was estimated as a reasonable compromise between the conflicting demands of the long integration time needed for a better signal-to-noise ratio and the short integration time needed for measurement of a moving object. Fig. 5 presents the measured NET (Noise Equivalent Temperature) at each wavelength.


Fig. 5. Measured NET of SR5000 spectro-radiometer with FOV of 6mrad, sampling BW of 250Hz and chopping frequency of 800Hz.

3.2. Experimental results

3.2.1. Data preprocessing

The spectral signatures of the aerosol cloud were extracted using the following technique. First, in order to minimize the effects of low signal-to-noise ratio and aliasing, we truncated the spectral data to 86 wavelengths between 8.13-12.5µm. Second, we evaluated the signal strength as a function of the difference between air and ground temperatures. The magnitude of the signal at each wavelength was quantified by the expression written below:

S(t, \lambda) = \frac{I(t, \lambda) - I(0, \lambda)}{I(0, \lambda)} \equiv \frac{\Delta I(\lambda, t)}{I(\lambda, 0)}     (13)

where I(t, λ) denotes the measured spectrum at time t and wavelength λ in units of W/cm2·sr. The first measurement, at time t = 0, represents the clear background. Signal strengths according to Eq. (13) at various temperature differences are shown in Fig. 6.


Fig. 6. The measured spectral intensity of a silicone oil droplet cloud as a function of the temperature difference Tbgd − Tair.

We can conclude from the figure that:

• The measured signal is relatively high and reaches values of ∆I/I = 0.35. When this signal is expressed in terms of equivalent temperature differences, it reaches a strikingly high value of 26 °C! The practical implication of this high magnitude of signal strength is that an aerosol cloud can be easily detected, even with sensors that are characterized by a moderate signal-to-noise ratio.
• As predicted by the theoretical model, the signal strength depends on the air-to-background temperature difference. The maximal signal is measured at daytime, when the air temperature is much lower than the ground temperature, and the minimal signal is measured at nighttime, when the air temperature is higher than the terrain background. However, a significant signal is always measured, even when air and ground temperatures are equal. This behavior differs fundamentally from the detection scheme for gaseous clouds, which is not possible in that condition.

The estimation of signal strength was followed by a spectral analysis of the measured spectra. In order to extract the spectral signatures of the silicone oil clouds, a principal component transform was performed over a moving window of 10 measurements in the temporal domain (a time interval of 20 sec). Plotting the magnitude of the first eigenvalue as a function of time makes it possible to determine at which points significant changes in the measured radiance in the field of view of the spectro-radiometer occurred (Fig. 7).

Fig. 7. First eigenvalue obtained by principal component analysis of the measured spectra in a moving window of 10 scans (20 sec.) during a typical release. Note the large increase in its value when a silicone oil cloud is present in the field of view of the radiometer.
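A minimal Python sketch of this moving-window procedure is given below. It is an illustration of the idea, assuming `spectra` is an array of shape (n_scans, n_wavelengths), optionally pre-normalized according to Eq. (13); it is not the processing chain actually used for Fig. 7.

```python
# Moving-window principal-component step: track the leading eigenvalue of the
# covariance of the last `window` spectra; a jump flags the cloud entering the
# field of view, and the corresponding eigenvector approximates its signature.
import numpy as np

def leading_component(spectra, window=10):
    n_scans, n_wl = spectra.shape
    eigvals = np.full(n_scans, np.nan)
    eigvecs = np.full((n_scans, n_wl), np.nan)
    for i in range(window, n_scans + 1):
        block = spectra[i - window:i]
        cov = np.cov(block, rowvar=False)   # (n_wl, n_wl) covariance of the window
        w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
        eigvals[i - 1] = w[-1]              # leading eigenvalue at this scan
        eigvecs[i - 1] = v[:, -1]           # leading eigenvector (candidate signature)
    return eigvals, eigvecs

# Relative signals per Eq. (13) can be formed first, e.g.:
#   rel = (spectra - spectra[0]) / spectra[0]
#   eigvals, eigvecs = leading_component(rel, window=10)
```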

The first eigenvector indicates the spectrum of the source of that change, i.e. the spectrum of the aerosol cloud. Further smoothing of the spectrum can be achieved by integrating over several eigenvectors that are associated with the best measured signals in terms of signal-to-noise ratio. This procedure was repeated over all the measured data, and the obtained spectra of the aerosol clouds are shown in Fig. 8. The correlation matrix of the spectra is presented in Fig. 9.

Fig. 8. Spectral signature of a silicone oil droplet cloud as a function of the temperature difference between ground and air, Tground – Tair.


Fig. 9. The correlation matrix of the spectra shown in Fig. 8.

From the last figures we can state some important conclusions:

• The spectral signature of the silicone oil aerosol cloud contains noticeable spectral features. The existence of a complex spectral signature suggests that discrimination between two different aerosol clouds by passive remote sensing is feasible.
• Both the signal strength and the spectral signature of an aerosol cloud over terrain background are only slightly dependent on the difference between the air and ground temperatures. Thus, the signals produced by a specific aerosol cloud (constituents and size distribution) are consistent and might be used for its discrimination.
• The spectral signature of a specific aerosol cloud is invariant under translation and multiplication transforms. This differs essentially from the behavior of the spectral signature of gaseous clouds, which is invariant only under a multiplicative factor. This characteristic has major importance when automatic detection methods are considered. Some discrimination methods, such as spectral angle mapping (SAM), are sensitive to translations, and hence are inadequate for aerosol cloud screening (see the sketch after this list).

3.2.2. Comparison between model prediction and experimental data

The main purpose of the theoretical model is to predict the signature of dust clouds composed of various materials, without the need to conduct field tests. Therefore, the calculated spectral signatures were compared to the measured data. Fig. 10 shows the correlation coefficient between the experimental data and the predicted spectral signatures as a function of size distribution and of measurement time during the day. As written in section 2, the calculations assumed a log-normal size distribution characterized by an NMD (number median diameter) value, with σg values spanning from 1.2 to 2. For each size distribution with a given NMD, the reference spectrum was calculated by averaging over all the spectra with the different σg values. The extreme temperatures set for ground, air and sky were as noted in the caption of Fig. 3. Intermediate temperatures are equally spaced between the upper and lower limits. A reference spectrum of the measured silicone oil clouds was calculated by averaging over all the spectra extracted from the raw data.

Fig. 10. The correlation coefficients between the predicted spectra of silicone oil droplets at various size distributions and the measured spectrum. Size distributions are log-normal.

The figure shows that there is a high correlation coefficient between the measured spectrum and the predicted spectra of a silicone oil cloud formed by small droplets. This is consistent with the measured size distribution of the droplets, which indicated that the majority of the droplets have a diameter below 10µm. This good fit suggests that a specific aerosol cloud (in terms of constituents and size distribution) can be classified by matching its measured spectral signature to precalculated signatures of various cloud types. This removes the need for prior field tests aimed at collecting reference spectra for all dust types with many particle size distributions, and enables practical use of passive standoff detection of aerosol clouds.
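A sketch of this matching step is shown below: a measured signature is compared against a library of precalculated signatures by correlation coefficient and assigned to the best match. The function and the `library` structure are our own illustrative assumptions, not code from the study.

```python
# Classify a measured cloud signature against precalculated reference signatures
# (e.g. one per dust type and size distribution) by maximum correlation.
import numpy as np

def classify(measured, library):
    """library: dict mapping label -> reference signature on the same wavelength grid."""
    scores = {label: np.corrcoef(measured, ref)[0, 1] for label, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Example usage with synthetic signatures (assumed data):
rng = np.random.default_rng(1)
lib = {"type_A": rng.random(86), "type_B": rng.random(86)}
meas = lib["type_A"] + 0.02 * rng.standard_normal(86)
print(classify(meas, lib)[0])   # -> "type_A"
```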

4. Summary and Discussion

Both the theoretical model and the measured data indicate that infrared remote sensing and classification of dust types is feasible. Due to the high albedo and small particle size of dust clouds, high-magnitude signals are expected. This enables reliable detection, even with moderate-performance sensors. The signal strength depends only slightly on the air and ground temperatures when the cloud is measured over terrain background. This implies that detection is possible even when the thermal contrast between air and ground is null. However, since the detection of an aerosol cloud relies on cold-sky scattering, a clear sky is required for effective remote sensing of dust clouds. The spectral signature of a specific aerosol cloud is consistent and does not depend significantly on the measurement time during the day. Nevertheless, the size distribution of the cloud particles has a considerable effect on the spectrum of the scattered radiance. Therefore, the establishment of efficient discrimination methods between dust clouds is a challenging task. Such algorithms should detect a specific dust type taking into account the inherent spectral variance in its


signature, while maintaining a low misclassification rate. The good agreement between the predicted and the measured spectral signatures suggests that the development of such classification methods is possible even without conducting extensive field tests. Nevertheless, further intensive research, mainly on the issues cited below, is still required before passive standoff sensing of aerosol clouds can be used as a well-established technique:

• Upgrade of the model to include the scattering phase function, a non-homogeneous sky background, and different cloud densities.
• Validation of the model by expanding the measurements to different dust clouds with various size distributions.
• Development of classification methods based on both calculated and measured spectral data.

References

1. www.gulflink.osd.mil/particulate_final/particulate_final_s05.htm
2. www.cdc.gov/niosh/topics/silica
3. F. De Tomasi, A. Blanco, and M. R. Perrone, Raman lidar monitoring of extinction and backscattering of African dust layers and dust characterization, Applied Optics, 42, 1699-1709, (2003).
4. www.lidarb.dkrz.de/earlinet
5. M. Thomas, J. Michaelson, P. Ricchiazzi, and S. Yang, Physical inversion strategies for the remote detection of mineral dust distributions from space using AIRS, Proceedings of the ISSSR 2003, Santa Barbara, USA, (2003).
6. Q. Williams, Infrared, Raman, and optical spectroscopy of earth materials, in AGU Handbook of Physical Constants, Vol. 2, T. J. Ahrens, Ed., 291-302, American Geophysical Union, Washington DC, (1995).
7. C. Linke, O. Mohler, A. Veres, A. Mohacsi, Z. Bozoki, G. Szabo, M. Schnaiter, Optical properties and mineralogical composition of different Saharan mineral dust samples: a laboratory study, Atmospheric Chemistry and Physics Discussions, 6, 2897-2922, (2006).
8. R. A. Sutherland, J. C. Thompson, and J. D. Klett, Effects of multiple scattering and thermal emission on target-background signatures sensed through obscuring atmospheres, Proceedings of the SPIE, 4029, 300-309, (2000).
9. R. A. Sutherland, Determination and use of IR band emissivities in a multiple scattering and thermally emitting aerosol medium, ARL-TR-2688, July 2002.
10. http://www.ontar.com/govt_combic.htm
11. M. T. Eismann, J. N. Cederquist, C. R. Schwartz, Infrared multispectral target/background field measurements, Proceedings of the SPIE, 2235, 130-140, (1994).
12. H. C. van de Hulst, Light Scattering by Small Particles, (Dover Publications, N.Y., 1981).

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 661–673 © World Scientific Publishing Company

SIGNAL PROCESSING ALGORITHMS FOR STARING SINGLE PIXEL HYPERSPECTRAL SENSORS

DIMITRIS MANOLAKIS, MICHAEL ROSSACCI, ERIN O'DONNELL
MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, U.S.A.
[email protected]

FRANCIS M. D'AMICO
U.S. Army Edgewood Chemical and Biological Center, Aberdeen Proving Ground, MD 21010, U.S.A.

Remote sensing of chemical warfare agents (CWA) with stand-off hyperspectral sensors has a wide range of civilian and military applications. These sensors exploit the spectral changes in the ambient photon flux produced by thermal emission or absorption after passage through a region containing the CWA cloud. In this work we focus on (a) staring single-pixel sensors that sample their field of view at regular intervals of time to produce a time series of spectra and (b) scanning single or multiple pixel sensors that sample their FOV as they scan. The main objective of signal processing algorithms is to determine if and when a CWA enters the FOV of the sensor. We shall first develop and evaluate algorithms for staring sensors following two different approaches. First, we will assume that no threat information is available and we design an adaptive anomaly detection algorithm to detect a statistically significant change in the observed spectrum. The algorithm processes the observed spectra sequentially in time, estimates the background adaptively, and checks whether the next spectrum differs significantly from the background based on the Mahalanobis distance or the distance from the background subspace. In the second approach, we will assume that we know the spectral signature of the CWA and develop sequential-in-time adaptive matched filter detectors. In both cases, we assume that the sensor starts its operation before the release of the CWA; otherwise, staring at a nearby CWA-free area is required for background estimation. Experimental evaluation and comparison of the proposed algorithms is accomplished using data from a long-wave infrared (LWIR) Fourier transform spectrometer. Keywords: Hyperspectral imaging, chemical sensing, biological sensing, detection algorithms

1. Introduction

Standoff detection of chemical warfare agents (CWAs) is necessary when physical separation is required to put people and assets outside the zone of severe damage. An important class of standoff sensors for CWAs is based on the principles of passive infrared (IR) spectroscopy. Typical standoff CWA sensors1 utilize passive imaging spectroscopy in the LWIR atmospheric window (8µm-13µm). The LWIR region is well suited for gas-sensing applications because of the relative transparency of the atmosphere at these wavelengths and the presence of uniquely identifying features for a wide range of


chemicals. For passive IR detectors, the photons either from sunlight or from the thermal emission of the earth provide the illumination source. However, in the LWIR region, thermal emission from the ground is far stronger than reflected sunlight. Hyperspectral CWA sensors measure the distribution of radiation in the LWIR region using a large number of contiguous narrow spectral bands2 (typically more than 100 bands). Some sensors measure the spectrum of the entire scene in their field-of-view (FOV). Such “single-pixel” sensors can be used to collect multiple spectra by scanning a region to build an image or by staring at the same point to obtain a time series of spectra. In contrast, hyperspectral imaging sensors divide the FOV into a square grid of pixels and simultaneously measure the spectrum of each pixel. As a result, they produce a data cube with two spatial dimensions and one spectral dimension. Hyperspectral sensors can be deployed on satellites, UAVs, and ground-based platforms. However, the choice of a specific platform imposes crucial constraints on the design and performance of the sensor.

Hyperspectral sensors utilize an array of electro-optical (EO) detectors. An EO detector is a transducer which transforms electromagnetic radiation into a form which can be more easily detected, usually an electrical signal. The performance of an EO detector is limited by noise, that is, the random currents and voltages which compete with or obscure the signal or information content of the radiation. The two most prominent noise sources in LWIR detectors are the noise produced by fluctuations in the temperature of the surroundings and shot noise due to the discreteness of the electronic charge. Determining the power from all sources of noise is a difficult exercise3 in solid state physics. Therefore, it is customary to use the noise-equivalent power (NEP), defined as the value of incident rms signal power required to produce an rms signal-to-noise ratio (SNR) of one, to characterize the overall detector noise. The reciprocal of NEP is known as the detectivity D. However, we typically use the quantity D star (D*), which is a normalization of D to take into account the area and electrical-bandwidth dependence. Random fluctuations in the arrival rate of background photons incident on the detector produce random fluctuations in its output signal. EO detectors whose D* is limited only by this type of noise are called background-limited infrared performance (BLIP) detectors.

The objective of the hyperspectral sensor is to sense the presence of a CWA in its FOV and provide information for its identification and quantification. When there is no CWA plume in the sensor FOV, the observed signal consists of detector noise and the radiation from the surrounding background within the sensor FOV (background clutter or simply clutter). The presence of unwanted detector noise and background clutter complicates the sensing of CWAs. Noise, which can usually be reduced by better sensor design, sets the limit for performance. Background clutter can often be reduced by appropriate signal processing4 to reach, at best, noise-limited performance.
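For reference, the conventional normalization mentioned above (a standard textbook definition, not a result specific to this paper) is

D^* = \frac{\sqrt{A_d\,\Delta f}}{\mathrm{NEP}} ,

where A_d is the detector area and ∆f the electrical bandwidth, so that D* is usually quoted in units of cm·Hz^{1/2}·W^{-1}.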


Since the clutter varies with the type of the observed scene in a complex manner, sensing technologies are initially evaluated by establishing physics-based requirements to produce a certain SNR at a specified range. This is typically done using fundamental characteristics, such as photon count rates, source strengths, and stand-off ranges. These physics-based limits, which provide the best possible performance in the absence of clutter, help to assess performance in the presence of real-world background clutter variability.

2. Radiance Model for CWA Backgrounds
In order for a passive hyperspectral sensor to detect a threat CWA cloud against detector noise and background clutter, there must be a measurable difference in the radiance of the target cloud compared with the radiance of the surrounding background. This difference is sometimes known as the apparent radiance contrast. If we assume an optically thin plume∗ with a single CWA, the radiance reaching the sensor can be expressed as7

L_{on}(\lambda) \approx \left[ n_c\,\Delta T_p \left.\frac{\partial B(T)}{\partial T}\right|_{T=T_{cloud}} \right] \left[ \alpha(\lambda)\,\tau_a(\lambda) \right] + L_{off}(\lambda) = a\,s(\lambda) + L_{off}(\lambda)    (1)

where
L_on(λ) = on-cloud radiance as a function of wavelength
n_c = concentration-path length product
B(T) = Planck function at temperature T, assuming B(λ, T) ≈ B(T)
ΔT_p = T_cloud − T_background
α(λ) = spectral absorption cross-section of the molecule of interest
τ_a(λ) = atmospheric transmission between cloud and sensor
L_off(λ) = off-cloud radiance as a function of wavelength
This relationship provides the basis for the detection, identification, and quantification algorithms discussed in this paper. The quantity in the brackets is a scale factor, denoted by a, that depends on the gas column density and the cloud-background temperature difference. The spectral signature s(λ), which is the product of the gas absorbance spectrum and the atmospheric path transmissivity, provides all the information required by the spectral exploitation algorithms. In order for a radiating cloud to be observable, the apparent radiance contrast a s(λ) must be greater than the NEP of the EO detector. Suppose that we have a sensor that measures the radiance at K spectral channels centered at wavelengths λ_1, …, λ_K. Then, all quantities in equation (1) are modified by the sensor



∗ A cloud is optically thin if its transmissivity can be approximated by \tau_p(\lambda) = e^{-n_c \alpha(\lambda)} \approx 1 - n_c\,\alpha(\lambda).


response. If we use the tilde symbol to denote this modification, we have

\tilde{L}_{on}(\lambda_k) = a\,\tilde{s}(\lambda_k) + \tilde{L}_{off}(\lambda_k)

or in vector form as

x = a\,s + v    (2)

where
s_k = \tilde{s}(\lambda_k)   (spectral signature of gas)
v_k = \tilde{L}_{off}(\lambda_k)   (radiance spectrum of background)
x_k = \tilde{L}_{on}(\lambda_k)   if a ≠ 0
x_k = \tilde{L}_{off}(\lambda_k)   if a = 0.

If the cloud consists of M optically thin gases, we have

x = \sum_{m=1}^{M} a_m s_m + v = S a + v    (3)

where s_m is the spectral signature and a_m is the scale factor of the mth gas. For simplicity, we shall use the terms spectral signature, amount, and background for the quantities s_m, a_m, and v, respectively. Equation (3) is similar to the linear mixing model used for surface mixed pixels, that is, pixels containing different ground materials. In that case, the scalars should be non-negative and their sum should be equal to one. However, these constraints are not critical for the gas mixing model because the scalars can be larger than one or negative, depending on the gas amount (arbitrary non-negative scalar) and the temperature contrast (positive, zero, or negative).
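To make the notation concrete, the following minimal sketch generates a synthetic observation according to equation (3); the signatures, scale factors, and clutter statistics are arbitrary illustrative values, not data from any sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 128, 2                                # spectral channels, number of gases (illustrative)

# Hypothetical gas signatures s_m (absorbance times path transmissivity, resampled to K bands)
S = np.abs(rng.normal(size=(K, M)))          # columns are signatures s_1 ... s_M
a = np.array([0.8, -0.3])                    # scale factors (can be negative: temperature contrast)

# Background radiance spectrum v drawn from a multivariate normal clutter model
mu_b = np.full(K, 10.0)                      # mean background radiance (arbitrary units)
Sigma_b = 0.25 * np.eye(K)                   # background covariance (diagonal for simplicity)
v = rng.multivariate_normal(mu_b, Sigma_b)

# Observed spectrum under the gas-present hypothesis, equation (3): x = S a + v
x = S @ a + v
print(x.shape)                               # (K,)
```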

3. Anomaly and Matched Filter Detectors
A detector that maximizes the probability of detection for every probability of false alarm is said to be optimum according to the Neyman-Pearson criterion. The ROC curve of the optimum detector provides an upper bound on the performance of any other detector. It turns out that4,5 the optimum detector is given by the likelihood ratio, defined by

\Lambda(x) = \frac{f(x \mid T = \text{Yes})}{f(x \mid T = \text{No})}    (4)

where T stands for “CWA present”. We usually implement the optimum detector using a computationally easier monotonic function of Λ(x). To determine the exact form of the optimum detector, we need the conditional distribution of x under the two hypotheses. We make the following assumptions:


1. We consider optically thin clouds; hence the observed signal can be modeled using the linear model in equation (1).
2. The combined noise-plus-clutter term v has the same distribution under both hypotheses.
3. The random vector v follows a normal distribution with mean vector µ_b and covariance matrix Σ_b. This is denoted by

v ∼ N K ( µb , Σ b )

(5)

where K is the number of spectral channels. If we know S, a, µ_b and Σ_b, the optimum detector is the well-known matched filter, given by4,6

y = D(x) = \frac{(Sa)^T \Sigma_b^{-1} (x - \mu_b)}{(Sa)^T \Sigma_b^{-1} (Sa)} \triangleq h^T (x - \mu_b)    (6)

The matched filter (6) is directly obtained from the likelihood ratio7. Since the optimum matched filter h is a linear operator, the detection statistic y follows a univariate normal distribution with mean E(y) = h^T E(x) and variance Var(y) = h^T Cov(x) h. In many practical situations, a priori information on the statistics of the target is not known and only the mean and covariance of the target-absent hypothesis are known. In this case the detector is given by

y = D(x) = (x - \hat{\mu}_b)^T \Sigma_b^{-1} (x - \hat{\mu}_b)    (7)

which is also known as the Mahalanobis distance of the observation under test from the mean of the background class.
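As an illustration of equations (6) and (7), a sketch of both detectors applied to a single spectrum is given below, assuming the background mean and covariance are already available; the function names and the toy data are ours, not the authors' implementation.

```python
import numpy as np

def matched_filter(x, s, mu_b, Sigma_b):
    """Matched-filter statistic of equation (6) for a known target signature s."""
    Sigma_inv = np.linalg.inv(Sigma_b)
    h = Sigma_inv @ s / (s @ Sigma_inv @ s)       # filter vector h
    return h @ (x - mu_b)

def anomaly_detector(x, mu_b, Sigma_b):
    """Mahalanobis-distance (anomaly) statistic of equation (7)."""
    d = x - mu_b
    return d @ np.linalg.solve(Sigma_b, d)

# Toy example with synthetic data
rng = np.random.default_rng(1)
K = 64
mu_b = np.zeros(K)
Sigma_b = np.eye(K)
s = rng.normal(size=K)                            # assumed target signature (s = S a for one gas)
x = 0.5 * s + rng.multivariate_normal(mu_b, Sigma_b)

print(matched_filter(x, s, mu_b, Sigma_b), anomaly_detector(x, mu_b, Sigma_b))
```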

4. Temporal-Spectral Detection Algorithm
The detection algorithm used for detecting plumes from data collected by the Joint Services Lightweight Standoff Chemical Agent Detector (JSLSCAD) sensor is described as follows. Given an ensemble of spectral measurements over the duration of a collection, the approach taken was to process the data in a temporal-spectral adaptive manner. The data is acquired as an ensemble of N spectral observations with a sampling interval of ∆t seconds between observations. The algorithm processes the data by using a sliding-window-based approach. During this process, detection statistics at the current time are computed using second-order statistics from preceding observations to adaptively estimate the background. These statistics are then used to develop a thresholding criterion to decide on the absence or presence of the plume. In the following we describe in detail the algorithm design and its implementation.


The key mechanism in the temporal-spectral processing scheme is the movement of three sliding windows. The first, variable-length window is the background estimation window, whose initialization requires L∆t seconds to be long enough to reliably estimate the covariance matrix for data with dimensionality K. These second-order statistics are then used in designing detectors to detect the presence of chemical plumes. The detectors are applied in each test window position, where each high-dimensional spectrum is mapped to a scalar. The resulting detection statistics are used to decide if the plume is present every M∆t seconds. The test window is separated from the background window by a guard window which lasts NG∆t seconds and helps prevent plume pixels from contaminating the background estimation process. With the guard window in place, the first decision is made after (L+NG+M)∆t seconds.

[Figure 1 schematic: background (L), guard (NG), and test (M) windows slide along the single-band radiance time axis (∆t = time between spectral measurements); the background statistics feed the detector, whose thresholded detection statistics yield a decision. Initialization requires ∆t × L seconds of background data, decisions are made every ∆t × M seconds, the first decision is made after ∆t × (L + NG + M) seconds, and there is a trade-off between the “speed” and “quality” of adaptation.]

Figure 1. Temporal-spectral processing using block-adaptive detection algorithm.

Next, we describe in detail the main steps in the detection algorithm design, which is shown in Figure 1. After initializing the L, NG, and M parameters, the mean, covariance matrix, and detection statistics of the first background window are computed, and the mean and standard deviation of the detection statistics are stored. After waiting the required number of window shifts, the detection statistics for all observation vectors in the current window, i, are compared to the mean value of the detection statistics from the previous window using


z_d(i) = y_d(i) - \hat{y}_d(i-1)    (8)

where y_d(i) is the detector output at time i and ŷ_d(i−1) is the time-averaged detector output M∆t seconds earlier. The magnitude of the difference z_d(i) is then compared to the threshold determined by the standard deviation of the detection statistics of the initial background window. An ordered-statistics technique was implemented which imposes the requirement of at least 5 detections in the test window to declare that the plume is present. When this condition is satisfied, the background statistics are no longer updated and the last mean and covariance matrix before encountering the plume boundary are used while processing plume spectra. When the test window reaches the plume boundary, the probability of exceedance distribution of the detector output is shifted due to the presence of a chemical plume.
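A minimal sketch of this block-adaptive windowing logic is given below; it assumes a generic detector callable (for instance the anomaly or matched-filter statistics sketched earlier), and the threshold multiplier and other details are illustrative rather than the exact JSLSCAD implementation.

```python
import numpy as np

def block_adaptive_detect(data, detector, L=200, NG=50, M=100, k_sigma=5.0, min_hits=5):
    """data: (N, K) array of spectra in time order; detector(x, mu, Sigma) -> scalar statistic."""
    N = data.shape[0]
    decisions = []
    # Initial background statistics and threshold from the first L spectra
    bg = data[:L]
    mu, Sigma = bg.mean(axis=0), np.cov(bg, rowvar=False)
    y_bg = np.array([detector(x, mu, Sigma) for x in bg])
    y_prev, thr = y_bg.mean(), k_sigma * y_bg.std()
    plume_found = False
    t = L + NG                                   # first test window starts after the guard window
    while t + M <= N:
        y_test = np.array([detector(x, mu, Sigma) for x in data[t:t + M]])
        z = np.abs(y_test - y_prev)              # equation (8), against the stored mean
        hits = int(np.sum(z > thr))
        decisions.append((t, hits >= min_hits))
        if hits >= min_hits:
            plume_found = True                   # freeze background statistics once plume is declared
        if not plume_found:                      # otherwise keep adapting the background window
            bg = data[t - NG - L:t - NG]
            mu, Sigma = bg.mean(axis=0), np.cov(bg, rowvar=False)
        y_prev = y_test.mean()
        t += M                                   # decisions are made every M samples
    return decisions
```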

5. Experimental Results
In this section we present some of the key experimental results from processing spectral observations from the Joint Services Lightweight Standoff Chemical Agent Detector (JSLSCAD) sensor data. The detectors that were implemented in the temporal-spectral processing scheme were the anomaly detector and the matched filter tuned to Triethyl Phosphate (TEP) and Acetic Acid. In Figure 2 we show the output of the anomaly detector as a function of time in minutes after applying the block-adaptive detection algorithm to data set TEP18. The anomaly detection statistics are plotted in the top panel as the green line and the red line indicates the mean detector output across each test window. In the lower panel the blue line is the difference between the current detector output and the mean detector output M∆t seconds earlier, computed from equation (8). The detection threshold is plotted as the red dashed line and was determined from the standard deviation of the detector output from the first L∆t seconds. In Figure 3 we show the output of the anomaly detector as a function of time for the test window position in which the leading edge of the plume is detected. In this window, the ordered detector statistics constraint is satisfied and covariance updating is stopped. The window parameters were initialized to 200, 50, and 100 samples for the background, guard, and test windows, respectively. In Figure 4 we show in the top panel the output of the matched filter tuned to the TEP signature along with the mean detection output for each test window for the same set of window parameters. The difference computed using equation (8) is shown in the bottom panel along with the threshold in red. Similarly, in Figure 5 we show a close-up view of the matched filter detection statistics when the leading edge of the plume is detected. The matched filter was also tuned to Acetic Acid for the same window parameters and the corresponding results are shown in Figures 6 and 7. In this case the released CWA was TEP; however, the matched filter tuned to Acetic Acid still resulted in a strong detection statistic with significantly higher fluctuations. Additional insight into block-adaptive detection can be gained by plotting the probability of exceedance of the detector output for each test window. We observe in Figures 8 and 9 that the temporal block containing the detection of the leading edge of the plume shows a shift in the probability of exceedance distribution for the detector output.

Figure 2. Block-adaptive detection of TEP using Anomaly Detector (data set TEP18).
Figure 3. Leading edge of TEP plume using Anomaly Detector (data set TEP18).
Figure 4. Block-adaptive detection of TEP using matched filter (data set TEP18).
Figure 5. Leading edge of TEP plume detection using Matched Filter (data set TEP18).
Figure 6. Block-adaptive detection using matched filter tuned to Acetic Acid (data set TEP18).
Figure 7. Leading edge of plume detection using matched filter tuned to Acetic Acid (data set TEP18).
Figure 8. Shift in probability of exceedance distribution from background (blue) caused by leading edge of plume (red) using anomaly detector (data set TEP18).
Figure 9. Shift in probability of exceedance distribution from background (blue) caused by leading edge of plume (red) using matched filter tuned to Acetic Acid (data set TEP18).
Figure 10. Block-adaptive detection of Acetic Acid (data set AA10) using anomaly detector.
Figure 11. Matched filter detection of Acetic Acid (data set AA10) in state of absorption.

Finally, in Figures 10 and 11 we show the results from processing data set AA10 for the anomaly detector and matched filter detector, respectively. In Figure 11 we observe a case in which the output of the matched filter tuned to Acetic Acid gives a negative detector output when the plume is present, due to a sign change in the thermal difference between the CWA cloud and the surrounding background.

6. Conclusions
In this paper we provided an overview of the development and evaluation of algorithms for single-pixel LWIR Fourier transform sensors for the detection of chemical warfare agents. A linear mixing model was derived from the apparent radiance contrast between a target cloud and the surrounding background using the optically thin plume assumption. We designed and implemented sequential-in-time anomaly and matched filter detectors that process spectral observations and search for the presence of CWAs. These algorithms were used to process spectral measurements from the JSLSCAD sensor in order to detect the releases of Triethyl Phosphate and Acetic Acid as a function of time. The temporal-spectral processing scheme, which performs background estimation and mean-value tracking of detection statistics, was shown to provide a capability to adaptively detect the leading edges of the chemical plumes. Finally, we showed that the probability of exceedance of the detector output can be monitored as an additional metric, where a shift in the distribution signifies detection of the plume edge.

References
1. E. P. Przybylowicz (Chair), “Testing and evaluation of standoff chemical agent detectors,” tech. rep., The National Academies Press, Washington, DC, 2003.
2. R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, Academic Press, San Diego, 1997.
3. J. Schott, Remote Sensing: The Image Chain Approach, Oxford University Press, New York, 1997.
4. S. M. Kay, Fundamentals of Statistical Signal Processing, Prentice Hall, New Jersey, 1998.
5. R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, Inc., New York, 2nd ed., 2001.
6. D. Manolakis and G. Shaw, “Detection algorithms for hyperspectral imaging applications,” IEEE Signal Processing Magazine, pp. 29-43, January 2002.
7. D. Manolakis and F. M. D’Amico, “A taxonomy of algorithms for chemical vapor detection with hyperspectral imaging spectroscopy,” in Chemical and Biological Sensing VI, P. J. Gardner, ed., pp. 125-133, SPIE, May 2005.

Acknowledgements
This work was sponsored by the Department of Defense under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 675–699 © World Scientific Publishing Company

PERFORMANCE ESTIMATION TOOLS FOR: DECOUPLING BY FILTERING OF TEMPERATURE AND EMISSIVITY (DEFILTE), AN ALGORITHM FOR THERMAL HYPERSPECTRAL IMAGE PROCESSING

PIERRE LAHAIE
DRDC-Valcartier, 2459 Pie-XI Blvd. North, Quebec, Quebec, Canada G3J 1X5
[email protected]

The construction of very good hyperspectral sensors operating in the thermal infrared band from 8 to 12 microns arouses much interest in the development of data exploitation tools. Temperature-emissivity separation (TES) algorithms are very important components of a future toolbox, because they make it possible to extract these two fundamental target parameters. The emissivity depends on the nature of the target's surface materials, while the temperature gives information related to their use and relationship with the environment. The TES technique presented in this paper is based on the iteration-on-temperature principle, where a total square error criterion is used to estimate the temperature. The complete procedure is described in the paper. Its sensitivity to noise is studied and a mathematical behavior model is provided. The model is validated through a Monte-Carlo simulation of the technique's operation.
Keywords: Temperature; emissivity.

1. Introduction
In the long wavelength infrared (LWIR) band, extending from 8 to 12 microns, the temperature and the emissivity can be defined as the fundamental parameters of the imaged materials or targets. The emissivity provides information related to the target's nature while the temperature relates to its relationship with the environment or to its activity. Another very interesting feature of the LWIR band is that imagery can be acquired by day or night. The processing chain that links calibrated thermal infrared hyperspectral imagery to the extraction of the imaged materials' fundamental properties consists of atmospheric compensation followed by temperature-emissivity separation. The atmospheric compensation is the process by which the atmospheric transmittance and path radiance are removed. This step provides two important inputs to the TES algorithm: the ground-leaving radiance and the atmospheric downwelling irradiance. The path radiance is the energy generated by the atmosphere on the path from the target to the sensor. The transmittance is the fraction of the energy emitted by the ground that survives the path from the target to the sensor. The downwelling irradiance is the energy incident on the target originating from the hemisphere above the target; it is often converted to


radiance by assuming Lambertian reflection from the target. Finally, the ground-leaving radiance is the energy leaving the target and is the combination of the target's self-radiation and the reflection of the atmospheric downwelling irradiance. Using the downwelling irradiance and the ground-leaving radiance, it is possible to extract the emissivity and the temperature of a target. The downwelling irradiance acts in the process as the reference. This principle has been used by Borel1 and resulted in the ISSTES algorithm. This algorithm has subsequently been studied thoroughly by Ingram and Muse2. In our technique, we use the downwelling irradiance as the reference for the emissivity. We also use iteration for the determination of temperature, as is done in Borel's procedure. Our technique's difference from ISSTES lies in the method used to select the appropriate temperature and its corresponding emissivity. That difference leads to an increased resistance to noise and to impairments such as an erroneous estimate of the downwelling irradiance. The paper is arranged as follows. In section 2 the assumed signal is described. Section 3 describes the general algorithm and highlights its most important features. Section 4 is devoted to a sensitivity analysis for noise and section 5 provides comparisons between computed and simulation results.

2. Signal model
A complete signal model, taking into account every feature of the atmosphere, is very complicated. It involves phenomena such as the heat-generated signal from ground objects and the atmosphere, scattering by the atmosphere (aerosols and clouds), and absorption by the surface and by the atmosphere. If every single phenomenon were taken into account perfectly, forward computation would be possible, but an inversion rapidly ends in an intractable problem. One solution is to specify conditions for which the model possesses a simpler solution. A more tractable, but still very complicated, situation arises under clear-sky conditions. For a given spectral band, the signal model becomes:

L_{mn} = \int_0^{\infty} f_n(\sigma) \left\{ \left[ \varepsilon(\sigma) B_T(\sigma) + \frac{1 - \varepsilon(\sigma)}{\pi} \int_0^{2\pi}\!\!\int_0^{\pi/2} L_i(\sigma, \theta, \phi) \sin(\theta) \cos(\theta)\, d\theta\, d\phi \right] \tau(\sigma) + L_p(\sigma) \right\} d\sigma    (1)

where σ is the wavenumber. This unit will be used throughout the paper since we also use it in conjunction with MODTRAN. The term L_mn is the measured radiance in band n and f_n is a normalized weighting function for the spectral response of the instrument. ε is the emissivity of the target and B_T is the Planck function at temperature T. L_i is the atmospheric radiance incident on the surface from the sky from direction (θ, φ), τ is the atmospheric transmittance on the path from the target to the sensor and L_p is the path radiance accumulated from the target to the sensor, given by:

L_p = \int_0^{a} B(T(y), \sigma) \left( -\frac{d\tau}{dy} \right) \exp\!\left( -\int_y^{a} \alpha(y', \sigma)\, dy' \right) dy    (2)

where T(y) is the temperature of the atmosphere at elevation y and α is the extinction coefficient of the atmosphere at elevation y. In this model the influence of single scattering by aerosol particles is neglected as a contribution to the signal; the aerosols' effects are considered, however, in the extinction coefficient. Another simplification lies in the reflection characteristics of the targets, which are considered to be Lambertian; directional reflection peculiarities of targets are discarded. The mathematical simulation of a potential sensor measurement therefore requires: the knowledge of the sensor's characteristics, such as the spectral response of each of its bands; the target's characteristics, which are the spectral emissivity, reflection characteristics and temperature; and the complete knowledge of the atmosphere, i.e. the temperature, pressure and humidity gradients. The clear-sky model needs to be simplified even further with the use of some physical assumptions. The targets' emissivities can be assumed constant within the bounds of each band; this holds also for the blackbody function, and it enables a bulk transmittance to be used for each band. The downwelling irradiance is spectrally highly variable, so a bulk transmittance cannot be used for the computation of its reflected transmission through the atmosphere if high precision is required. However, if the sky is clear and the atmosphere's equivalent temperature is much lower than the ground temperature, the generally high target emissivity renders the contribution of the downwelling irradiance marginal, and the atmospheric transmittance can be used to estimate the at-sensor contribution of the downwelling irradiance reflected by a target. This introduces difficulties for processing highly reflective targets, but since these targets are already extremely difficult to process, it does not increase the burden too heavily. The path radiance is additive and can therefore be considered separately. For any given band, the model can be simplified to provide the following equation:

L_{mn} = \left[ \varepsilon_n B_n(T) + (1 - \varepsilon_n) \frac{L_s}{\pi} \right] \tau_n + L_{pn} + N    (3)

In the remainder of the document, the π factor dividing the L_s contribution of atmospheric downwelling irradiance will be omitted. It must however be kept in mind that the L_s variable has been converted into radiance through this transformation. The bracketed part of the preceding equation is known as the ground-leaving radiance. N is the noise generated by the sensor. The removal of the path radiance and of the transmittance constitutes the first component of the processing chain, i.e. the atmospheric compensation. This provides the ground-leaving radiance. This last quantity can be measured directly by a ground spectrometer, and in this way data can be gathered to specifically verify a temperature-emissivity separation algorithm. The signal model used in the remainder of the paper is therefore:


L_{mn} = \varepsilon_n B_n(T) + (1 - \varepsilon_n) L_{sn} + \frac{N}{\tau_n}    (4)
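Under equations (3) and (4), atmospheric compensation reduces, per band, to subtracting the path radiance and dividing by the band transmittance; a minimal sketch, with array names chosen by us for illustration:

```python
import numpy as np

def ground_leaving_radiance(L_measured, L_path, tau):
    """Recover the ground-leaving radiance from the at-sensor radiance, per band:
    L_g = (L_measured - L_path) / tau."""
    return (np.asarray(L_measured) - np.asarray(L_path)) / np.asarray(tau)
```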

3. Algorithm description
3.1. Introduction
The TES system is built around the minimization of the total square error. We chose this criterion because it is possible to get a mathematical expression for the computation of the temperature error variance. From the analytical expression a procedure for filter design may also be produced. The total absolute error could also be the selection criterion; in simulation it did not produce significantly different results compared to the total square error, and its performance analysis cannot be carried out through a mathematical development. The total square error is given by:

E^2 = \sum_n \left( R_{gn} - R_{fn} \right)^2    (5)

where R_gn is the ground-emitted radiance in the nth band and R_fn is the emissivity-filtered radiance computed for band n. It is assumed that the ground-leaving radiance and the downwelling irradiance have been provided as inputs. Prior to operating on the image's pixels, the algorithm initialization must be done. This step encompasses the computation of the pixels' ground-leaving radiance and of the downwelling irradiance for the image. Figure 1 shows the structure of the algorithm for pixel-by-pixel processing. It contains the following steps: computation of a start temperature; emissivity computation; emissivity smoothing; computation of the radiance using the smoothed emissivity; computation of the total square error on radiance; decision to stop the process; and finally estimation of a new trial temperature.
3.2. Algorithm components description
3.2.1. Algorithm initialization
This step is not included in figure 1. The initialization phase of the algorithm consists of the input of basic data, which are the image size, the atmospheric profiles or atmospheric optical parameters (transmittance, path radiance and downwelling irradiance), the sensor characteristics such as the number of bands, their centers and widths and possibly their shift with lateral position (also known as the smile), and the sensor altitude. If the atmospheric profiles are provided, this step includes the computation of the atmospheric optical parameters; the computation is skipped if the optical atmospheric parameters for each band are provided instead of the atmospheric profiles. The image atmospheric compensation, which comprises the elimination from the data of the path radiance and of the transmittance, is part of the algorithm initialization.


[Figure 1 flowchart: start at a given temperature; compute the emissivity at that temperature; smooth the emissivity; compute the at-sensor radiance; compute the error on radiance; if the minimum is not reached, select a new temperature and repeat; otherwise output the emissivity and temperature.]

Figure 1: Structure of the proposed algorithm

3.2.2. Pixel initialisation
Pixel processing needs to be started at the best possible temperature estimate. The constraint associated with the rough estimate of the temperature is that it must be within the region of stability of the algorithm, and thus the estimate must be as close as possible to the true temperature of the pixel's sample. The technique uses two adjacent or nearly adjacent bands of ground-leaving radiance and their corresponding downwelling irradiance. The assumptions are that noise can be neglected and that the emissivities of the sample in the two bands are so similar that they can be assumed equal. The proximity of the two bands justifies this approximation. A high difference between the radiances of the


two bands is the selection criterion. The assumption that the emissivities are the same reduces the number of unknowns to two, and therefore the problem can be solved with only a small error. The two equations become:

R_1 = \varepsilon B_1(T) + (1 - \varepsilon) L_1
R_2 = \varepsilon B_2(T) + (1 - \varepsilon) L_2    (7)

where the index of the blackbody functions indicates one band and its nearest neighbor. Another approximation can be made by assuming that the blackbody functions for the two bands are equal. The equation set is then linear and the estimate for temperature is:

T_{coarse} = B_1^{-1}\!\left[ \frac{R_1 L_2 - R_2 L_1}{(R_1 - R_2) + (L_2 - L_1)} \right]    (8)
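A sketch of this two-band initialization is given below; it inverts the Planck function numerically at the centre wavenumber of the first band, with physical constants in SI units and arbitrarily chosen search bounds (the helper names are ours):

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_wn(sigma_cm, T):
    """Spectral radiance [W m-2 sr-1 (cm-1)-1] at wavenumber sigma_cm [cm-1] and temperature T [K]."""
    s = sigma_cm * 100.0                                   # cm-1 -> m-1
    return 100.0 * 2 * H * C**2 * s**3 / np.expm1(H * C * s / (KB * T))

def coarse_temperature(R1, R2, L1, L2, sigma1_cm, T_lo=200.0, T_hi=400.0):
    """Two-band coarse temperature, equation (8): invert B1 at the equivalent radiance."""
    B_eq = (R1 * L2 - R2 * L1) / ((R1 - R2) + (L2 - L1))
    return brentq(lambda T: planck_wn(sigma1_cm, T) - B_eq, T_lo, T_hi)
```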

3.2.3. Emissivity computation
The emissivity computation is the first step performed in the iteration loop on temperature, as given by equation (9).

\varepsilon = \frac{R_g - L}{B - L}    (9)

3.2.4. Emissivity filtering
Emissivity filtering is one of the critical parts of the algorithm. The filter behavior determines to a large extent the capability of the algorithm to estimate the temperature with precision. The application of the filter on the emissivity is given by:

\varepsilon_f = G\, \varepsilon    (10)

where ε is the emissivity, G is the matrix filter, and ε_f is the filtered emissivity.
3.2.5. Radiance estimation
The radiance estimation is a simple step performed once the emissivity has been filtered. The equation is the following:

R_f = \varepsilon_f B_t + (1 - \varepsilon_f) L    (11)

where R_f is the radiance computed with the filtered emissivity and B_t is the blackbody function computed at the trial temperature.
3.2.6. Error estimation
The error is calculated using the following expression for the total square error:

E^2 = \sum_n \left( R_{gn} - R_{fn} \right)^2    (12)

3.2.7. Decision
The decision to terminate pixel processing or to modify the temperature is based on the simplex method. The algorithm is initialized with a temperature step set to one degree. Initially the temperature is assumed to be slightly below the sample temperature, so the initial direction of movement for the temperature is upward. Each time the temperature is updated, the error is computed. If it decreases, the direction and step are kept at the same values. When it increases, the temperature step is divided by two and the direction inverted. This procedure is followed until the temperature step reaches a minimum value set by the user. The selected temperature is the one generating the minimum error value at that smallest temperature step. Figure 2 shows the procedure graphically.

[Figure 2 flowchart: initialize the temperature and the temperature variation step; compute the error at the initial temperature; increment the temperature by one step and compute the error; while the error decreases, keep stepping (T = T + Tstep); when it increases, halve the step and reverse its direction (Tstep = −Tstep/2); stop when Tstep reaches its minimum.]
Figure 2: Description of the minimum search algorithm
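Combining sections 3.2.3 to 3.2.7, the per-pixel iteration can be sketched as follows, assuming a vectorized Planck helper (such as the planck_wn sketch above) and a filter matrix G are provided; the minimum temperature step of 1/64 K is an arbitrary illustrative choice:

```python
import numpy as np

def defilte_pixel(Rg, Ldown, sigma_cm, G, T0, planck, t_step=1.0, t_min=1/64):
    """Iterate on temperature: compute the emissivity (9), smooth it (10), rebuild the radiance (11),
    and keep the temperature that minimizes the total square error (12)."""
    def total_square_error(T):
        B = planck(sigma_cm, T)                       # band blackbody radiances at trial T
        eps = (Rg - Ldown) / (B - Ldown)              # equation (9)
        eps_f = G @ eps                               # equation (10)
        Rf = eps_f * B + (1.0 - eps_f) * Ldown        # equation (11)
        return np.sum((Rg - Rf) ** 2)                 # equation (12)

    T, err = T0, total_square_error(T0)
    step = t_step                                     # start moving upward in 1 K steps
    while abs(step) > t_min:
        err_new = total_square_error(T + step)
        if err_new < err:
            T, err = T + step, err_new                # error decreased: keep direction and step
        else:
            step = -step / 2.0                        # error increased: halve the step and reverse
    B = planck(sigma_cm, T)
    eps_f = G @ ((Rg - Ldown) / (B - Ldown))
    return T, eps_f
```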


3.3. Comments on algorithm structure
We described in this section the algorithm as a system. Each component of the computation is described with its purpose in mind and what it provides to the processing. We did not describe what constitutes an optimal filter or what the effect of the filter is. This last topic is the subject of the next section.

4. Filter influence analysis
4.1. Introduction
In this section the effects of the filter on the performance of the algorithm are analyzed mathematically. Analytical expressions for the error on temperature due to errors on inputs, such as the atmospheric downwelling irradiance, and due to sensor noise are derived. The shape of the emissivity also has an effect on the algorithm behavior, because if the filter modifies it, the error will not have a zero minimum and the temperature location may not be the sample's temperature. The filter design therefore has a tremendous impact on the performance of the algorithm. The response to noise, the response to errors on the incident downwelling irradiance and the effect of a very variable emissivity are analyzed in this section, and a mathematical formulation for these errors is provided. The section begins with a study of the effects of the filter on a highly variable emissivity. It then looks at the impact of errors on the downwelling irradiance. We finally develop the formulation for the effects of noise on the temperature estimation. In this last part the bias due to noise is studied and the equations sufficient for the computation of the variance of temperature due to noise are given. The comparison of the results from simulation to computation is given in section 5.

4.2. Effect of a filter altering the original emissivity
In this part of the study the noise is assumed to be negligible. The error equation for that case is:

E^2 = \sum \left( \varepsilon B - \varepsilon_f B_c - (\varepsilon - \varepsilon_f) L \right)^2    (13)

Where ε_f is the filtered trial emissivity, which is obtained by filtering the emissivity computed with the trial temperature, and B_c is the blackbody function computed at the trial temperature. Equation (13) can also be expressed by:

E^2 = \sum \left( \varepsilon B - \varepsilon_f (B + B_1 T_d) - (\varepsilon - \varepsilon_f) L \right)^2    (14)

Where B_c is replaced using the assumption that the blackbody function can be approximated by a linear function of temperature. This assumption is justified if the temperature error remains below 5 Kelvin. It is obvious that if there is an error on the emissivity caused by the filter application, a bias on temperature results. The blackbody function, however, remains linear over a very large temperature band that will in


most cases exceed the bias generated by the filter. So when the error is at its minimum, the filtered emissivity and the blackbody function can respectively be expressed as:

\varepsilon_f = G\, \varepsilon_t    (15)

B_c = B + B_1 (T - T_o) = B + B_1 T_d    (16)

Where ε_f is the filtered emissivity and ε_t is the true sample emissivity. The second equation is the linear approximation for the blackbody function obtained from a Taylor series expansion of the Planck function. T_d is the temperature difference from the true sample temperature. Using these expressions the error equation becomes:

E^2 = \sum \left( (B - L)(\varepsilon - \varepsilon_f) - \varepsilon_f B_1 T_d \right)^2    (17)

In that equation, T_d is not a spectral quantity. All the other quantities present in the equation are spectrally variable. To obtain the minimum error with respect to the temperature, expression (17) can be differentiated with respect to T_d, giving:

\frac{dE^2}{dT_d} = \frac{d}{dT_d}\left[ \sum \left( (B - L)(\varepsilon - \varepsilon_f) - \varepsilon_f B_1 T_d \right)^2 \right] = 0    (18)

and T_d can be isolated to give:

T_d = \frac{\sum \varepsilon_f (\varepsilon - \varepsilon_f) B_1 (B - L)}{\sum (\varepsilon_f B_1)^2}    (19)

This expression gives the temperature offset generated by an inaccurate application of a filter on the emissivity. The temperature bias is estimated at approximately 0.3 K for a filter that generates an error of 1% on the emissivity when the sample has a temperature near 300 K. If the emissivity of the target is low, the effect of an error will be even larger, since the denominator has a higher power of the emissivity.
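Equation (19) can be evaluated directly once the band-averaged quantities are available; a minimal sketch, assuming ε is the true sample emissivity and B, B_1 and L are the band-averaged blackbody radiance, its temperature derivative and the downwelling radiance (the function name is ours):

```python
import numpy as np

def filter_bias(eps, G, B, B1, Ldown):
    """Temperature bias of equation (19) caused by the filter deforming the emissivity."""
    eps_f = G @ eps                                           # filtered true emissivity, eq. (15)
    num = np.sum(eps_f * (eps - eps_f) * B1 * (B - Ldown))
    den = np.sum((eps_f * B1) ** 2)
    return num / den
```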

4.3. Effect of a downwelling irradiance error
4.3.1. Temperature effects
Using an approach similar to the one developed in the previous section, the error on the downwelling irradiance can be studied. We suppose that the error on the estimation of the incident irradiance introduces an error on the temperature. It also introduces an error on the emissivity. The error expression for that case is:

E^2 = \sum \left( \varepsilon B + (1 - \varepsilon) L - \varepsilon_m (B + B_1 T_d) - (1 - \varepsilon_m)(L + \Delta L) \right)^2    (20)

Where ∆L is the error on the downwelling irradiance and εm is the filtered emissivity at the minimum temperature. Simplification of that expression yields:


E^2 = \sum \left( (\varepsilon - \varepsilon_m)(B - L) - (1 - \varepsilon_m) \Delta L - \varepsilon_m B_1 T_d \right)^2    (21)

The differentiation of the square error with respect to T_d is:

\frac{dE^2}{dT_d} = 0 = 2 \sum \left( (\varepsilon - \varepsilon_m)(B - L) - (1 - \varepsilon_m) \Delta L - \varepsilon_m B_1 T_d \right) \varepsilon_m B_1    (22)

The value of T_d that minimizes the error is obtained by solving equation (22), which gives:

T_d = \frac{\sum \left( (\varepsilon - \varepsilon_m)(B - L) - (1 - \varepsilon_m) \Delta L \right) \varepsilon_m B_1}{\sum (\varepsilon_m B_1)^2}    (23)





Note that this expression becomes equal to the expression giving the error associated with the emissivity modification by the filter when the error on the downwelling irradiance becomes null. Another remark is that if the error on the emissivity is neglected, a positive error on the downwelling irradiance generates a negative bias on the temperature. This negative bias, however, generates a lower estimated contrast between the ground and the sky, which introduces a positive bias on the emissivity. These effects are antagonistic in the above equation, so the effect of an error on the downwelling irradiance may be marginal as long as it is not too large.
4.4. Noise sensitivity analysis

Sensor noise is the only impairment that will always be present in any measurement system. All other problems could be avoided or greatly reduced by modeling and processing. These other problems are related to the estimation of the required variables of the system (the downwelling irradiance, the transmittance, the path radiance and the emissivity variations). It is reasonable to think that the optical atmospheric parameters as well as the sensor's parameters will be thoroughly known for a given sensor and measurement setup, so here we limit ourselves to the evaluation of the impacts of sensor noise on the temperature and emissivity estimation. What is required are the noise-introduced temperature bias and the variance of the temperature estimation due to noise.
4.4.1. Temperature bias due to noise
We begin with the total square error as a function of temperature:

E^2 = \sum_n \left( R_{gn} - \left[ B\, G\!\left( \frac{R_g - L}{B - L} \right) + \left( 1 - G\!\left( \frac{R_g - L}{B - L} \right) \right) L \right] \right)^2    (24)

The measured radiance is given by:

R_g = \varepsilon B + (1 - \varepsilon) L + N    (25)


Assuming the blackbody function can be expanded in a Taylor series for the region near the temperature of interest, the total square error containing noise can be expressed by:

E^2 = \sum \left\{ \left[ -\varepsilon_o B_1 T_d\, G\!\left( \frac{1}{B_o - L} \right) - G\!\left( \frac{N}{B_o - L} \right) + B_1 T_d\, G\!\left( \frac{N}{(B_o - L)^2} \right) \right] (B_o - L) + N - \left[ \varepsilon_o - \varepsilon_o B_1 T_d\, G\!\left( \frac{1}{B_o - L} \right) - G\!\left( \frac{N}{B_o - L} \right) + B_1 T_d\, G\!\left( \frac{N}{(B_o - L)^2} \right) \right] B_1 T_d \right\}^2    (26)

using the fact that B(T) ≈ B_o + B_1 T_d, where B_o is the blackbody function at the temperature of the target and B_1 is the first-order derivative of the blackbody function estimated at the target's temperature. T_d is the deviation from that temperature that is introduced by noise and ε_o is the true emissivity of the target. This function is quadratic in temperature and is valid in the region around the target's temperature. In this function all the variables are fixed or supposed to be known in a given case. Then, to find the best estimate for the temperature, one only has to minimize the function by differentiating the error function with respect to temperature and equating the derivative to zero to extract the temperature as a function of all the other variables. With a little rearrangement the derivative becomes

\frac{dE^2}{dT_d} = 2 \sum \left( R_g - R_f \right) \frac{\partial \left( R_g - R_f \right)}{\partial T_d} = 0    (27)

with R_g − R_f given by the bracketed expression of equation (26). Once expanded, this expression is very complicated and a lot of the terms contained inside it possess a low magnitude compared to the major terms. A study of each term's behaviour and magnitude near the minimum temperature is needed. The contrast between the ground


and the sky radiances is the signal component possessing the highest magnitude in the above expression. Following that magnitude are second-order components such as the noise and the radiance difference due to the temperature difference T_d. The emissivity possesses a magnitude of 1 and the temperature difference is expected to be very small if B_1 T_d is considered to have a magnitude at least comparable to the noise. Neglecting the smaller terms with the use of the preceding arguments and inverting equation (27) for the temperature difference leads to:

T_d = \frac{\sum \varepsilon_o B_1 (B_o - L)^2 \left[ (U - G)\frac{1}{B_o - L} \right] \left[ (U - G)\frac{N}{B_o - L} \right]}{\sum \left\{ \varepsilon_o B_1 (B_o - L) \left[ (U - G)\frac{1}{B_o - L} \right] \right\}^2}    (28)

This expression provides the bias due to noise in a given single case, where the noise would be known. The temperature estimate is unbiased if the noise magnitude is low enough and if the noise mean is null. To be valid, expression (28) requires a high signal-to-noise ratio.
4.4.2. Temperature variance due to noise
The variance of the temperature estimation can be estimated directly from these equations and gives:

\sigma_T^2 = \frac{\sum_i b_i^2 \left[ \sum_n \left( a_n^2 M_{ni}^2 + 2\, a_n M_{ni} \sum_{m=n+1} a_m M_{mi} \right) \right]}{Q^2}    (29)

(30)

  1  2 a n = ε o B1 ( Bo − L ) (U − G )    Bo − L   n 

(31)

M = U −G

(32)

 Q =   2

   i



  1   ε o B1 ( Bo − L )(U − G )    Bo − L   

2 2

  

(33)

DEFILTE, An Algorithm for Thermal Hyperspectral Image Processing

461

5. Computation results 5.1. Introduction

The main subject of this chapter is the comparison between the theoretical results obtained in section 4 and the simulations results obtained by the TES algorithm with the same design parameters. Each topic developed in the previous section is compared with its simulation counterpart. The aim is therefore not to make an experimental validation of the algorithm, but to put in evidence that the mathematical analysis is good and could be used instead of performing lengthy Monte-Carlo simulations to evaluate a particular filter or the performance of the technique in a given situation. To do the comparisons, for each topic studied in the previous chapter, the mathematical result is given without development and the proper sequence of figures is shown with the appropriate comments. Another purpose of section 5 is to illustrate the variability occurring for different case study depending of the variability of both the inputs and the components of the TES technique. The first part of the section is devoted to the description of the input parameters and of some components of the technique. The downwelling irradiance, a filter of gaussian shape and the error curves in function of temperature are described. Information is also provided relative to erroneous atmosphere that will be used in the study of the effects of error in atmospheric downwelling irradiance. The other parts of the section are the description and comments related to the topics covered analytically in the preceding section. The covered topics are: the impacts of downwelling irradiance error; the effect of the emissivity modified by a filter and of noise bias and variance. To do the comparisons, results are obtained with the use the analytical formulae and from the operation of the algorithm with the same parameters. Statistical parameters such as the bias and variance are computed with the analytical formulation and from Monte-Carlo simulations for their operational counterpart. For the assessment of the input parameters error effects or from impairments, analytical expressions are very useful tools. Their structure gives important pieces of qualitative information for filter design and capability of the technique. The precision of the computation enables a detailed analysis of the technique's capability. For example, a close study of the noise effects related with the number, position and characteristics of particular bands becomes possible. This kind of study can even have impacts on sensor design or on a lesser extent on filter design. Another example is the estimation of the required accuracy of atmospheric models like MODTRAN. This kind of study is reported to a later part of the project, but one important thing is that now if there are questions related to a given system performance, tools exist to evaluate them or they could be designed to implement that capability.

462

P. Lahaie

5.2. Filter description

The used filters are members of the gaussian family. The width of the gaussian function is changed to see what will be the effect of the width parameter on the filter's performance. When linear filters are used they are often characterized by the use of the bandwidth concept. In hyperspectral applications the term bandwidth can easily be confused with the spectral width of the channels, so we prefer not to use the term bandwidth associated with the linear filters. Fortunately with the gaussian type of filter the width can be used to characterize them. The simulated sensor has 101 bands. Each band is 4 cm-1 wide. The spectral scale is in wavenumber. The gaussian filter basic equation is:

g nm

 ( m − n)2   exp  −  2ω 2    =  ( m − n )2 exp  −  2ω 2 m 



(34)

   

Where m is the column index and n is the row index. The filter width parameter in terms of band index is ω. The filter is normalized to unity for each row. 0.06

Filter amplitude

0.05 0.04 0.03 0.02 0.01 0 0

20

40

60

80

100

Spectral or band index Figure 3: Extracted lines of a narrowband gaussian filter

This family of filters is not considered as optimal, but it enables the validation of the mathematical expression. We have not developed techniques for optimal filter design relatively to any criteria. Gaussian filters are used because they are well behaved and easy to use.

DEFILTE, An Algorithm for Thermal Hyperspectral Image Processing

463

5.3. Deformation of emissivity by the filter

Equation 19 gives the bias introduced by the filter on the temperature due to the deformation of the original emissivity. Graphically, the bias could be explained by the deformation of the total square error curve due to the deformation. The figure 4 shows an error curve for a constant emissivity. Figure 5 shows a very variable emissivity extracted from the ASTER spectral library3. The filename for this emissivity is zoisit1s. The figure shows the emissivity and the consecutive filter modified value. Figure 6 shows the error curve for a number of filters applied to the emissivity. Finally figure 7 shows the bias on the emissivity due to the use of filter of varying width and a comparison with the simulation of the filter's operation within the algorithm.

6

x 10

-3

Total square error

5 4 3 2 1 0 296

298

300

302

304

306

Temperature [K] Figure 4: Error curve for a constant emissivity

Emissivity

1

0.8

0.6

0.4

0.2 800

900

1000

1100

-1

Wavenumber [cm ] Figure 5: Zoisit1s emissivity from the ASTER spectral library

1200

464

P. Lahaie

Total square error

1

x 10

-3

Width = 1 Width = 2 Width = 4 Width = 8

0.8

0.6

0.4

0.2

0 296

298

300

302

304

Temperature [K] Figure 6: Error curves for various filter half width filters applied to the Zoisit 1 s emissivity

Temperature bias [K]

5 Computed Simulated

4 3 2 1 0 -1 0

2

4

6

8

10

Width of filter [number of bands] Figure 7: Results comparison for simulation and computation of the temperature bias due to the Zoisit1s emissivity modification by the filter, as function of filter half width

5.4. Effect of an error on the downwelling irradiance

Error on downwelling irradiance is difficult to quantify. Two general sources of errors exist, the water vapour and the error on temperature profiles. Gas such as carbon dioxide and pollutants concentration errors are other sources of error. Aerosols particles present in the air are often not well accounted for in radiative transfer software. Aerosols are difficult to characterize, the concentration, the variety of constituents of the particles and their size, determined by the generation process are difficult to model accurately. The errors caused by this lack of knowledge are very difficult to estimate. These, are

DEFILTE, An Algorithm for Thermal Hyperspectral Image Processing

465

however, smaller than the possible errors caused by the uncertainty of the atmospheric profiles used to compute the incident downwelling irradiance on targets. The errors will be quantifiable when a profile-estimating algorithm will exist. This algorithm will then be comparable with ground and air measurements of the optical atmospheric parameters. We can infer some plausible errors, study theoretically their effects and compare them with simulations to assess the effectiveness of the performance prediction formulation. The profiles we use for verifying the accuracy of the formula developed in the preceding section are the temperature offset atmospheric profiles. The relative humidity is the same as the one existing in MODTRAN's standard tropical atmosphere. Figure 4.7 shows the relative difference in irradiance due to the offset in the temperature profile. The emissivities used in this part of the study are the ZOISIT1S sample from the ASTER spectral library and the 0.9 constant emissivity. Equation 23 gives the error on temperature caused by an error on the downwelling irradiance estimation. The temperature selected by the algorithm is the one for which the trial emissivity is the less modified by the filter. If the temperature atmospheric profile estimation is higher in temperature than the existing profile, the spectral variations are more important as it could be seen in figure 8. The inverse behaviour will be observed for an underestimated temperature profile. When an emissivity is estimated at the wrong temperature in a case where there is no downwelling error the emissivity contains variations traceable to atmospheric irradiance. When the temperature is slightly above the sample's temperature the variations are negative and when the temperature is below the sample's temperature they are positive. So, if the downwelling irradiance is overestimated, the temperature estimation will be below the true sample's temperature in order to reduce the spectral variations modified by the filter. Therefore the overall emissivity estimation will be higher to compensate for a lower temperature. Curves on figure 9 and 10 illustrate that behavior for the 0.9 emissivity. The temperature is underestimated by 0.25 degree while the emissivity is slightly overestimated by 1 point. The inverse behaviour is observed for the underestimated profile. Figures 9 and 10 show the error on the estimation of the temperature for a constant emissivity. The computation is done using a 5 bands wide gaussian filter; however, we observed the filter does not have much influence on the results for constant emissivity.

P. Lahaie

Percentage of difference for irradiance

466

5 4 3 2 1 Warmer atmosphere +2

0

Colder atmosphere -2

-1 -2 -3 -4 -5 700

800

900

1000

1100

1200

1300

1400

1500

Wavenumber [cm-1] Figure 8: Percentage difference in irradiance for the +2 and -2 degrees atmospheres

12

Temperature bias [K]

10 Computed Simulated

8 6 4 2 0 -2 0.2

0.4

0.6

0.8

1

Emissivity Figure 9: Temperature estimation error due to an error of -2 degrees in the atmospheric profile as a function of emissivity, for a constant emissivity

DEFILTE, An Algorithm for Thermal Hyperspectral Image Processing

467

0

-0.5 Computed Simulated

Temperature bias due to atmospheric error [K]

-1

-1.5

-2

-2.5

-3

-3.5

-4

-4.5

-5 0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Emissivity

Figure 10: Temperature estimation error due to an error of +2 degrees in the atmospheric profile as a function of emissivity, for a constant emissivity

5.5. Effect of noise on the temperature estimation

For any sensor data processing method, noise effects must be studied. The first application of noise study is to estimate the limitation and final capability of a data processing technique. The design of a technique can also be improved through a proper noise effect analysis. In most optical sensor two types of noise are potentially annoying. The first type is electronic random noise produced by detectors and amplifiers and the other is photon shot noise. The first noise type can usually be reduced significantly, compared to shot noise, by proper cooling of the detectors. Photon shot noise is caused by the random arrival of photon on the detectors and signal to noise ratio can only be reduced by increasing the number of photon. Electronic noise is usually modeled by a gaussian function with a zero mean for which the variance is also called the noise power. Photon shot noise is modeled by a Poisson distribution, which can be approximated by a gaussian function with equal mean and variance in terms of photon number. Sensor design therefore has a very high influence on the noise characteristics. Two main effects can be produced by noise: a bias and a random deviation of the estimated parameter. The bias can exist mostly because an algorithm has a nonlinear behavior, while the random deviation is mostly a linear noise effect and is characterized by the estimated parameters variance. 5.5.1. The bias In our method, the error function depends on the independent variables of the system that are: the temperature, the emissivity, the downwelling irradiance and the sensor noise. The mathematical representation of the error function, which contains all the variables

468

P. Lahaie

explicitly, is very complicated and is given at equation 26. An expression for the minimum temperature is given at equation 28. This last expression contains only the effect of the linear term in temperatures and noise. According to it, the method should not introduce any bias on the temperature if only the noise is considered. This is an effect of having used only terms that are linear on temperature to derive it. Looking carefully at equation 27, it appears that nonlinear in noise terms exist within the detailed formulation. These terms can be considered as small in comparison with other linear terms for high signal to noise ratio. It implies however that if the signal to noise ratio is low, a noticeable bias can be introduced in the temperature results. This bias will also depend on the filter characteristics. In any signal processing techniques, the bias introduced by noise is something that must be minimized as much as possible. Non-linearity of second order is very difficult to avoid completely. Equation 28 enables the estimation of the bias on temperature introduced by noise. The dependency on noise of the temperature estimation becomes linear if noise is modeled as a zero mean random process. In an optical system, shot noise is described by a Poisson process, for which a gaussian function can be used as an approximation when the mean is high. To illustrate the non-linear effect of the technique, a Monte-Carlo simulation has been performed for which the results are shown at figure 4.14. The used filter has a gaussian shape and a width of 10 bands. A bias clearly exists, and it depends on the noise level. It is however limited in effect to a signal that possesses a very small contrast compared to the noise level. In circumstances where the noise level is very high, the bias still remains much below 0.1K. 0.035 0.03


Figure 11: Bias on temperature estimation as a function of noise equivalent radiance for a target having an emissivity of 0.96 at a temperature of 300K in a tropical atmosphere


5.5.2. The variance

The computation and the simulation shown in figure 12 are in very good agreement. The figure has been obtained by performing a Monte-Carlo simulation of the temperature estimation for an emissivity of 0.96 while varying the noise level; the Monte-Carlo run uses a sample size of 1000 for each noise value. The differences between the computed curve and its simulated counterpart are very small at large signal-to-noise ratio. At high noise levels, the discrepancy between the two standard deviation curves is due to the nonlinear behavior of the algorithm, most probably the third-order nonlinear term. The computation assumes that the noise is the same for each band of the spectrum, so that the relationship between the noise variance and the temperature variance is linear when equation 29 is considered. This will not be the case for a signal from a satellite or an airborne sensor, because the transmittance is not the same for each band; the noise, once referred to the ground-emitted radiance, will therefore not have the same level in each band.


Figure 12: Comparison of a Monte-Carlo simulation and a computation for the standard deviation of the estimated temperature as a function of noise standard deviation. The simulation is made with the MODTRAN standard tropical atmosphere, for an emissivity of 0.96 and a temperature of 300 K. The filter has a Gaussian shape with a half-width of 10 bands.
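To make the Monte-Carlo procedure concrete, the following Python sketch reproduces its structure under strong simplifications: the estimator below simply fits εB(ν,T) to a noisy spectrum for a known, constant emissivity, with no atmospheric transmittance or downwelling term, so it is not the DEFILTE error function of equations 26-29; the band range, noise levels and optimizer bounds are illustrative assumptions.

```python
# Illustrative Monte-Carlo check of the bias and standard deviation of a
# temperature estimate versus sensor noise, in the spirit of Secs. 5.5.1-5.5.2.
# NOT the DEFILTE error function: it fits eps*B(nu,T) to the noisy spectrum with
# a known constant emissivity and no atmospheric or downwelling terms.
import numpy as np
from scipy.optimize import minimize_scalar

C1 = 1.191042e-8   # first radiation constant (radiance form), W/(m^2 sr cm^-1) per (cm^-1)^3
C2 = 1.4387769     # second radiation constant, cm K

def planck(nu, T):
    """Blackbody spectral radiance [W/(m^2 sr cm^-1)] at wavenumber nu [cm^-1]."""
    return C1 * nu**3 / np.expm1(C2 * nu / T)

def estimate_T(radiance, nu, eps):
    cost = lambda T: np.sum((radiance - eps * planck(nu, T))**2)
    return minimize_scalar(cost, bounds=(280.0, 320.0), method="bounded").x

rng = np.random.default_rng(0)
nu = np.linspace(800.0, 1200.0, 200)      # LWIR window, cm^-1
eps, T_true = 0.96, 300.0                 # values used in the paper's example
clean = eps * planck(nu, T_true)

for ner in (1e-4, 5e-4, 1e-3, 2e-3):      # noise-equivalent radiance per band
    est = np.array([estimate_T(clean + rng.normal(0.0, ner, nu.size), nu, eps)
                    for _ in range(1000)])   # 1000 samples per noise value, as in the paper
    print(f"NER={ner:.0e}  bias={est.mean()-T_true:+.4f} K  std={est.std():.4f} K")
```

The printed bias and standard deviation play the same role as the quantities plotted in figures 11 and 12, but the simplified estimator means their values should not be compared quantitatively with those figures.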

5.5.3. Different emissivity, different variance

The shape of the emissivity has an influence on the variance of the estimated temperature. This effect originates from the shape of the error function curve. If an emissivity is such that


the minimum is flatter than for another emissivity, then the position of the minimum will be more influenced by the noise, for the same filter. One important feature that can be observed in figure 13 is that, regarding the variance of the estimated temperature, the most important factor affecting the performance of the technique is the mean of the emissivity. If the emissivity is very low, the estimation of the temperature is likely to be more difficult than for a high emissivity. The interpretation is that if a surface has a low emissivity, the amount of radiation originating directly from the material is smaller and the noise therefore has a greater impact. In that sense, the signal to be used in a signal-to-noise ratio evaluation can be interpreted as the contrast between the sky radiance and the surface radiance given by Planck's function weighted by the surface emissivity.

Figure 13: Comparison of the temperature standard deviation as a function of noise equivalent radiance for different emissivities (the Zoisit1s sample spectrum and constant emissivities of 0.2, 0.5, 0.8 and 0.96).

5.5.4. Different filter, different variance

A careful look at the components of the variance in equation 29 shows that the filter influences both the numerator and the denominator, and to a comparable degree. The weight of each part is very complicated to determine; in practice, only the computation of an empirical curve for the selection of a particular filter can be done. It is, however, the application of the filter to the inverse of the contrast, and not to the emissivity, that accounts for much of the impact on the variance, especially in the denominator. Designing the best filter is certainly not an easy task, but it could be based on inverting the filter effect in equation 29. Figure 14 shows the variance computed with different filters for the same constant emissivity.


Figure 14 shows that the variance changes with the filter width: the wider the filter, the smaller the variance. The filters are Gaussian and the varied parameter is the half-width of the filter. This means that if, for a given sample, the emissivity is thought to be nearly constant with the spectral index, then a wider filter will achieve a smaller variance of the temperature estimate. This requirement must be balanced against the bias due to the emissivity shape: if the emissivity varies strongly with the spectral index, the filter must be narrow enough to follow its variations. The conclusion is that highly variable emissivities will make the temperature estimation less accurate.
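The following minimal sketch illustrates the width trade-off just described, using generic Gaussian smoothing of a noisy spectral quantity. It is not the DEFILTE filter (which acts on the inverse of the contrast); the emissivity shapes and noise level are illustrative assumptions.

```python
# Minimal illustration of the filter-width trade-off: a wider Gaussian filter
# lowers the noise-driven variance but increasingly biases a spectrally variable
# signal.  Generic demonstration only, not the DEFILTE filter.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
bands = np.arange(200)
flat     = np.full(bands.size, 0.96)                      # near-constant emissivity
variable = 0.90 + 0.06 * np.sin(2 * np.pi * bands / 25)   # spectrally variable emissivity
noise_sigma = 0.01

for half_width in (1, 3, 7, 12, 15):                      # widths used in Figure 14
    for name, eps in (("flat", flat), ("variable", variable)):
        noisy = eps + rng.normal(0.0, noise_sigma, (500, bands.size))
        smoothed = gaussian_filter1d(noisy, sigma=half_width, axis=1, mode="nearest")
        noise_std = smoothed.std(axis=0).mean()            # residual random deviation
        bias = np.abs(smoothed.mean(axis=0) - eps).mean()  # systematic shape error
        print(f"width={half_width:2d}  {name:8s}  noise std={noise_std:.4f}  bias={bias:.4f}")
```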

Figure 14: Computed temperature standard deviation as a function of noise equivalent radiance for Gaussian filters of width 1, 3, 7, 12 and 15 bands. The emissivity is constant at 0.96.

6. Conclusion

The work presented in this paper pursued two objectives. The first was the development of an algorithm for the separation of temperature and emissivity with strong resistance to noise and to various impairments, such as an inaccurate knowledge of the inputs to the algorithm. The second was the development of tools enabling the prediction of the performance of the algorithm in the presence of the difficulties it has to cope with. The TES problem is fundamentally ill-posed because there are more unknowns than measurements. The algorithm therefore uses the atmospheric downwelling irradiance as the reference for the estimation of both the temperature and the emissivity. This creates a difficulty with the target's emissivity: if its variations are too large, they can affect the estimated temperature, which leads to the assumption that the emissivity is smooth compared to the ambient incident radiation. We developed a mathematical expression for the effect of the shape of the


emissivity, because the technique is sensitive to it. The tool can also be used to design particular filters that make the algorithm less sensitive to a given emissivity. Another major impairment is an inaccurate knowledge of the downwelling irradiance used as the reference. The tool developed for estimating this error can be used to design filters that reduce its effect. It can also be used to predict the effect, on the temperature and emissivity estimates, of techniques that predict the downwelling irradiance from indirect measurements when the true irradiance is known. From another perspective, it could also be used to estimate the effect of the target's BRDF, because the downwelling irradiance aggregated by the target and the downwelling irradiance from the atmosphere are different, whereas the algorithm assumes that targets are Lambertian. The last difficulty met by the algorithm in its operation is the noise generated by the sensor. We developed an expression for the prediction of the estimated temperature variance as a function of the sensor noise in each measurement band. The validity of this tool has been verified using a Monte-Carlo simulation. We observed that the method is very nearly linear and that nonlinearity shows only at low signal-to-noise ratio. In the development of the prediction tool we observed that the true signal-to-noise ratio depends on the contrast between the target blackbody emission and the downwelling irradiance. The target emissivity is also very important: if it is high, the accuracy of the temperature estimation will be higher than for a small emissivity. This leads to the conclusion that the accuracy of the temperature estimation depends on the amount of signal emitted by the target. We can also conclude that estimating the temperature of highly reflective targets is extremely difficult, and not only because their reflection can be far from Lambertian. Finally, the technique has been validated by simulation; its accuracy still needs to be verified against data gathered from ground and airborne measurements.

References

1. C.C. Borel, "Surface emissivity and temperature retrieval for a hyperspectral sensor", Proceedings of the 1998 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '98), Vol. 1, 6-10 July 1998, pp. 546-549.
2. P.M. Ingram and A.H. Muse, "Sensitivity of iterative spectrally smooth temperature/emissivity separation to algorithmic assumptions and measurement noise", IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 10, Oct. 2001, pp. 2158-2167.
3. ASTER Spectral Library through the courtesy of the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California. Copyright © 1999, California Institute of Technology. All rights reserved.
4. A.R. Gillespie, S. Rokugawa, S.J. Hook, T. Matsunaga and A.B. Kahle, "Temperature/emissivity separation algorithm theoretical basis document, version 2.4", Report prepared for NASA under contract NAS5-31372, 22 March 1999.
5. P. Dash, F.-M. Göttsche, F.-S. Olesen and H. Fischer, "Land surface temperature and emissivity estimation from passive sensor data: theory and practice - current trends", International Journal of Remote Sensing, Vol. 23, No. 13, 2002, pp. 2563-2594.


6. P. Lahaie, "A new technique for separation of temperature and emissivity", Defence R&D Canada - Valcartier, Technical Memorandum DRDC Valcartier TM 2004-124, December 2004.
7. A. Gillespie, S. Rokugawa, T. Matsunaga, J.S. Cothern, S. Hook and A.B. Kahle, "A temperature and emissivity separation algorithm for Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images", IEEE Transactions on Geoscience and Remote Sensing, Vol. 36, No. 4, July 1998, pp. 1113-1126.


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 701–711 © World Scientific Publishing Company

ESTIMATING THE LIMIT OF BIO-AEROSOL DETECTION WITH PASSIVE INFRARED SPECTROSCOPY

AGUSTIN IFARRAGUERRI
Science Applications International Corporation, 4001 N. Fairfax Drive, Suite 300, Arlington, Virginia 22203
[email protected]

AVISHAI BEN-DAVID AND RICHARD G. VANDERBEEK
U.S. Army Edgewood Chemical and Biological Center, 5183 Blackhawk Road, Aberdeen Proving Ground, Maryland 21010

To investigate the detection limits of biological aerosols using passive infrared measurements, we have developed a computational model that relies on physics-based simulations to generate a statistical sample. The simulation consists of three principal models: an atmospheric turbulence model, a radiative transfer model and a target detection model. The turbulence model is used to generate microscale atmospheric variability. Resulting temperature and density profiles, along with custom aerosol profiles, are used to generate inputs for MODTRAN5, which produces simulated atmospheric spectral radiance. The simulated data is then analyzed by using an optimal detection algorithm and a hypothesis test, resulting in receiver operating characteristic (ROC) curves.

1. Introduction

The US military has a vested interest in protecting troops as well as the general population from biological warfare attacks. An important component of bio-defense is early detection of an incoming attack in the form of an aerosol cloud. Standoff bio-aerosol detection provides valuable time to avoid contamination, prepare countermeasures, or possibly neutralize the threat. The standard approach to the detection of biological aerosol clouds is to use LIDAR technology to measure particle scattering. It has been shown that using multiple CO2 laser lines to excite an airborne sample can provide long-range detection and discrimination of biological aerosols. Performance at significant standoff ranges (> 1 km), however, requires large, powerful infrared lasers, posing logistical and safety challenges. Passive infrared spectroscopy, particularly FTIR, has been studied as a possible technique to detect biological aerosol clouds at a distance. FTIR instruments and techniques have improved significantly over the past two decades: sensor noise levels of less than 10⁻⁹ W/cm²/sr/cm⁻¹ noise-equivalent spectral radiance are now achievable with reasonably sized field instruments. These advances have made passive spectroscopy competitive with LIDAR for the bio-aerosol detection application. The purpose of this


work is to help establish the feasibility of using passive infrared spectroscopy to detect airborne biological aerosol clouds. In particular, we would like to determine the ultimate limits of detection under “general” conditions. Multiple field experiments have established that it is possible to detect biological aerosol clouds using passive spectroscopy1,2,3, but these have been performed at short ranges and under controlled conditions. Our approach to the detection limit estimation problem uses simulation to generate many different realizations of atmospheric spectral radiance. We then apply a “best case” detector and use the results to obtain a statistical estimate of the best possible detection performance given the input geometry and atmospheric conditions.

2. Method Overview


Our simulation concept includes three principal models: an atmospheric turbulence model, a radiative transfer model and a target detection model. The turbulence and radiative transfer models work together to generate a time series of atmospheric radiance spectra that is intended to simulate the natural background variability. Figure 1 illustrates this portion of the simulation. The turbulence model is used to generate microscale atmospheric variability (on the order of seconds). It takes as input the desired observation geometry and atmospheric conditions. A white noise process drives the model to produce time series of temperature and density fluctuations along the sensor line of sight (LOS). The density fluctuations in turn determine the fluctuations in the concentration of water vapor, ozone and background aerosol.


Fig. 1. Simulation concept for generating time series of atmospheric radiance spectra.

For each point in the time series, we can generate all the required inputs for a radiative transfer model. We have chosen to use the MODTRAN5 code4 for our simulations. MODTRAN is a well-known and generally accepted standard for predicting atmospheric radiance and transmittance at spectral resolutions on the order of


2 – 20 cm⁻¹, which is compatible with the passive infrared measurements of interest. The result of each MODTRAN “run” is a single simulated radiance spectrum. A spectrum is produced for each time point, thus simulating a time series of passive infrared spectral measurements. To include the bio-aerosol of interest, the extinction for a cloud of a given size (path length) is incorporated into the simulated radiance measurement using a two-layer band model (see figure 2). A second application of the band model can incorporate a foreground radiance and transmission to simulate extended ranges, resulting in a new time series of “at sensor” radiance spectra. Currently only total extinction is modeled and multiple scattering is neglected; it is not clear at this point whether multiple scattering effects are significant for the cloud sizes and concentration ranges of interest. The time series of “at sensor” radiance spectra, plus (optionally) simulated sensor noise, serves as input to an optimal linear detector. The detector model consists of a linear estimator and a hypothesis test. A vector of weighting coefficients is used to compute an inner product with each spectrum. The coefficients are determined by multiple regression using the known aerosol concentration profiles, which results in a “best-case” estimator of the aerosol concentration from the radiance data. A hypothesis test under a Gaussian noise assumption is then used to generate receiver operating characteristic (ROC) curves for any given threshold concentration. Finally, we define the detection limit as the aerosol concentration required to obtain a detection probability (PD) of 90% at a false alarm rate (PFA) of 10⁻⁴.
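A minimal sketch of the regression-based linear estimator described above is given below. The array names (spectra, conc) and the synthetic data are illustrative assumptions standing in for the MODTRAN5 time series and the known aerosol concentrations; only the structure (least-squares weights, inner product with each spectrum) follows the text.

```python
# Sketch of the "best-case" linear concentration estimator: fit weights by
# multiple regression against known concentrations, then estimate concentration
# as an inner product of each spectrum with the weight vector.
import numpy as np

def fit_linear_estimator(spectra, conc):
    """Least-squares weights (plus offset) mapping a radiance spectrum to an
    estimated concentration, i.e. multiple regression against the known truth."""
    X = np.hstack([spectra, np.ones((spectra.shape[0], 1))])  # add intercept column
    coeffs, *_ = np.linalg.lstsq(X, conc, rcond=None)
    return coeffs[:-1], coeffs[-1]

def estimate_concentration(spectra, weights, offset):
    """Inner product of each spectrum with the weight vector."""
    return spectra @ weights + offset

# Toy usage with synthetic data (stand-in for the simulated radiance time series).
rng = np.random.default_rng(0)
n_samples, n_bands = 500, 120
background = rng.normal(0.0, 1.0, (n_samples, n_bands))      # background variability
signature = rng.normal(0.0, 1.0, n_bands)                     # aerosol spectral signature
conc = rng.uniform(0.0, 1.0, n_samples)                       # known concentrations
spectra = background + np.outer(conc, signature)

w, b = fit_linear_estimator(spectra, conc)
c_hat = estimate_concentration(spectra, w, b)
residual = c_hat - conc                                       # r in Eq. (4) below
print("residual std:", residual.std())
```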


Fig. 2. Simulation concept for estimating the bio-aerosol detection limit.


2.1. Turbulence Model

Preliminary calculations using MODTRAN5 show that the vast majority of the radiance observed at the sensor is determined by the closest 100 km of atmosphere. Since we are primarily interested in limb observations (zenith angles near 90°), most or all of the first 100 km of our line of sight falls within the bottom 1 km of the atmosphere, thus only boundary layer turbulence is relevant. We assume isotropic turbulence following the Kolmogorov power spectrum5:

$\Phi_T(\nu) = C_T^2\,\nu^{-k}$    (1)

where $\Phi_T(\nu)$ is the power spectral density of the temperature T, $C_T^2$ is the temperature structure constant (analogous to $C_n^2$, the refractive index structure constant), ν is either temporal or spatial frequency, and k is a constant that varies between 5/3 and 8/3. In our simulations we use k = 5/3. The temporal frequency is related to the spatial frequency by the local wind speed. The Kolmogorov spectrum is commonly used to model stochastic changes in the refractive index of air, which are inversely proportional to temperature. However, for small temperature variations, the change in index with temperature can be approximated by a linear function, which allows us to use the same power spectrum to describe the temperature variability. We can generate a Kolmogorov time series by imposing an appropriate autocorrelation function on Gaussian white noise so that the result has the power spectral density given in Eq. 1. At present, $C_T^2$ is determined by specifying a temperature variability level and adjusting the average signal power to match it. The variability level is expressed in terms of percent of the nominal temperature in order to allow scaling of the turbulent energy with altitude. In the future we would like to use field measurements of $C_n^2$ to derive $C_T^2$ and thus improve the physical fidelity of our estimates.

In addition to temperature variability, atmospheric radiative transfer is affected by the concentration of absorbing molecular species and by the background aerosol load. Of the main absorbers in the atmosphere, water vapor and ozone are the most variable. Since we assume that temperature variation is the dominant physical process, the concentrations of these two species as well as the background aerosol are determined by the air density as it changes with temperature. To calculate the density ρ, we assume that hydrostatic equilibrium is maintained and that air (including the aerosols) behaves as an ideal gas. Using the hydrostatic equation and the ideal gas law, we can derive the pressure P at a given height H above the ground, given the pressure at the surface and the temperature profile:

$\log\!\left[\frac{P(H)}{P(0)}\right] = -\frac{g}{R}\int_0^H \frac{dz}{T(z)}$    (2)

where g is the gravitational acceleration constant and R is the ideal gas constant for (dry) air. We can then scale the concentration of each species by:


$\rho = \rho_r\,\frac{P}{P_r}\,\frac{T_r}{T}$    (3)

where the subscript r indicates the reference value (normally from the 1976 U.S. Standard Atmosphere). The calculated values for temperature, water vapor and ozone are input into MODTRAN5 as a user-supplied atmosphere. The effect of varying aerosol concentration is incorporated by modifying the input ground-level visibility.

Figure 3 shows an example of the simulation with a temperature variability level of 0.01%, which corresponds to ~29 mK RMS variation at ground level for the Standard Atmosphere. The graph on the left shows the input temperature time series at the lowest atmospheric layer. On the right is the standard deviation of the output radiance spectra for 500 samples. This graph gives an indication of the general level of radiometric variability that can be expected for a 0.01% variability level, and is analogous to the noise-equivalent spectral radiance (NESR) that is used to specify sensor noise levels.


Fig. 3. Example of a temperature-driven atmospheric variability simulation. The graph on the left shows the time series of temperature values following a Kolmogorov power spectrum with a variability level equivalent to 29 mK. The graph on the right shows the standard deviation of the resulting atmospheric radiance spectra from MODTRAN5.
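As a concrete illustration of the time-series generation described in Sec. 2.1, the sketch below shapes Gaussian white noise in the frequency domain so that it acquires a power-law (Kolmogorov-like) spectrum with k = 5/3. The normalization to a chosen RMS level, the sample count and the 1 s spacing are illustrative assumptions, not parameters taken from the paper.

```python
# Hedged sketch: generate a time series whose power spectral density follows
# f^(-k), by shaping the Fourier amplitudes of Gaussian white noise.
import numpy as np

def kolmogorov_series(n_samples, dt, k=5.0/3.0, rms=0.03, rng=None):
    """Zero-mean series of length n_samples whose PSD follows f^(-k)."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n_samples, dt)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-k / 2.0)          # |X(f)| ~ sqrt(PSD); skip f = 0
    phases = np.exp(2j * np.pi * rng.random(freqs.size))
    series = np.fft.irfft(amplitude * phases, n=n_samples)
    series -= series.mean()
    return rms * series / series.std()               # scale to the requested RMS

# Example: 500 samples at 1 s spacing, ~29 mK RMS about a 288.2 K mean,
# loosely mimicking the 0.01% variability case shown in Fig. 3.
T_mean = 288.2
fluct = kolmogorov_series(500, dt=1.0, rms=0.029, rng=np.random.default_rng(3))
temperature = T_mean + fluct
print(temperature[:5], temperature.std())
```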

2.2. Detection Limit Estimation

We define the detection limit for a given target material in terms of an additive signal-plus-noise model. Use of such a model requires that the signal be a known constant and that the noise statistics be fully determined. In our application the signal of interest is proportional to the aerosol concentration (for a given cloud depth and thermal contrast) and thus not constant. However, we can obtain an estimate of the aerosol concentration by performing a multiple regression of the simulated sensor data against the known aerosol concentrations. In terms of the true concentration c, the estimate is given by:


$\hat{c} = c + r$    (4)

where r is the residual, which can be treated as a random variable with zero mean. Thus the concentration estimate fits our signal-plus-noise model. Furthermore, an analysis of preliminary simulations indicates that, aside from a few low-value outliers, the residual distributions are approximately normal. Therefore we can set up the hypothesis test:

$H_0:\ \hat{c} = r,\ r \sim N(0,\sigma_0) \quad \text{vs.} \quad H_1:\ \hat{c} = c + r,\ r \sim N(0,\sigma_c)$    (5)

where σ0 is the standard deviation of the residuals at concentrations below the sensor noise level and σc is the standard deviation at concentration c. From these parameters, a receiver operating characteristic (ROC) curve is obtained by computing:

$P_f(\gamma) = 1 - F(\gamma/\sigma_0)$  and  $P_d(\gamma, c) = 1 - F((\gamma - c)/\sigma_c)$    (6)

for multiple values of γ, and plotting the probability of detection P_d vs. the false alarm probability P_f. F(·) is the standard Gaussian cumulative probability distribution function. From Eqs. (6) we can derive a formula for the detection limit concentration as a function of P_f and P_d by combining both equations and solving for c:

$c_{DL} = \sigma_0\,F^{-1}(1 - P_f) - \sigma_c\,F^{-1}(1 - P_d)$    (7)

Since we have no prior knowledge of the standard deviations, we must use estimates instead. If we assume that $\sigma_c = \sigma_0$ (i.e. constant noise power, which is the case for a detector-noise-limited system), then we obtain a simple formula for the detection limit concentration as a function of the estimated residual standard deviation $\hat{\sigma}_0$ and the difference of critical values at 1-P_f and 1-P_d:

$\hat{c}_{DL} = \hat{\sigma}_0\left[F^{-1}(1 - P_f) - F^{-1}(1 - P_d)\right]$    (8)

In the remainder of this paper we report detection limit values corresponding to the point P_d = 90%, P_f = 10⁻⁴ on the ROC curve, which corresponds closely to 5 “standard deviations” (F⁻¹(1 − 10⁻⁴) ≈ 3.72 and −F⁻¹(1 − 0.90) ≈ 1.28, so the bracketed factor in Eq. (8) is ≈ 5.0).

2.3. Confidence Limits

To obtain the confidence limits on $\hat{c}_{DL}$, we note that the detection limit estimate is proportional to the RMS error of the concentration estimate under the null hypothesis (Eq. 8). We can therefore use a chi-square statistic to determine the confidence intervals. Let:


$\chi^2(N) = \frac{N\,\hat{\sigma}_0^2}{\sigma_0^2}$    (9)

where $\hat{\sigma}_0^2$ is the estimate of the mean square residuals under the null hypothesis and $\sigma_0^2$ is the true value. N is the number of samples used in the estimate. It then follows that:

$\frac{\hat{\sigma}_0}{\sigma_0} = \sqrt{\frac{\chi^2(N)}{N}}$    (10)

We can define and calculate a relative error for a given confidence interval 100×(1-2α):

$re(\alpha) = 100 \times \left(\frac{\hat{\sigma}_0}{\sigma_0} - 1\right) = 100 \times \left(\sqrt{\frac{\chi^2(N,\alpha)}{N}} - 1\right)$    (11)

Figure 4 shows the relative error for α = 0.025 (95% CI). For 500 samples the confidence interval is ±6.2% and for 1000 samples it is ±4.4%.


Fig. 4. Confidence interval of detection limit estimate versus number of samples.
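The sketch below turns Eqs. (6)-(11) into numbers with scipy: the detection limit of Eq. (8) for P_d = 90% and P_f = 10⁻⁴ (bracketed factor ≈ 3.72 + 1.28 ≈ 5.0), and the chi-square relative error of Eq. (11). The synthetic residuals stand in for the regression residuals of Sec. 2.2; only the formulas are taken from the text.

```python
# Hedged numerical sketch of Eqs. (6)-(11): detection limit from the estimated
# residual standard deviation, plus its chi-square confidence interval.
import numpy as np
from scipy.stats import norm, chi2

def detection_limit(sigma0_hat, pd=0.90, pfa=1e-4):
    """Eq. (8): c_DL = sigma0_hat * [F^-1(1-Pfa) - F^-1(1-Pd)]."""
    return sigma0_hat * (norm.ppf(1.0 - pfa) - norm.ppf(1.0 - pd))

def relative_error_percent(n_samples, alpha=0.025):
    """Eq. (11): relative error of the DL estimate for a 100*(1-2*alpha)% CI."""
    lo = 100.0 * (np.sqrt(chi2.ppf(alpha, n_samples) / n_samples) - 1.0)
    hi = 100.0 * (np.sqrt(chi2.ppf(1.0 - alpha, n_samples) / n_samples) - 1.0)
    return lo, hi

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 2000.0, 500)   # synthetic null-hypothesis residuals [particles/liter]
sigma0_hat = residuals.std(ddof=1)

c_dl = detection_limit(sigma0_hat)         # Pd = 90%, Pfa = 1e-4
lo, hi = relative_error_percent(residuals.size)
print(f"estimated DL ~ {c_dl:.0f} particles/liter, 95% CI roughly {lo:+.1f}% / {hi:+.1f}%")
```

For 500 samples the computed interval reproduces the ±6.2% quoted above.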

3. Preliminary Results

As the model is currently under development, only preliminary results are available at this time. In this section we present some experiments designed to characterize the behavior of the model. All detection limit calculations were performed assuming a 100-meter deep cloud of Bacillus Subtilis var. Niger (BG)6. The observation geometry was near horizontal from the ground (zenith angle near 90° and observer altitude of 1 meter). All detection limit estimates are given in particles per liter of air.


3.1. Range dependence of Detection Limit

A question of primary importance is the change in detection limit with range. Some of the initial simulations were designed to qualify (and eventually quantify) the range dependence of the detection limit estimate. For each experiment, foreground and background atmospheric radiances plus foreground transmission were computed using different realizations of the same turbulence spectrum. Ranges of 0, 1, 5 and 10 km were simulated with varying sensor noise levels. Figure 5 shows results for two temperature variability levels and two sensor noise levels. The relationship with range is very close to linear, with varying slopes and intercepts. This result implies that, for ranges of less than 10 km, only two ranges need be simulated in order to accurately predict detection limits.


Fig. 5. Plot of estimated detection limit of a 100-meter deep aerosol cloud as a function of range for two levels of temperature variability and sensor noise.

3.2. Detection limit versus atmospheric variability and sensor noise

We quantify the temperature variability level in terms of a percentage of the nominal temperature. Since this is an ad-hoc approach, we need to characterize the relationship between our input variability level and the detection limit estimate. We conducted simulations using various input temperature variability values and computed the estimated detection limit for several sensor noise levels at zero range. The log-log graph shown in Figure 6 suggests that there is a power law relationship between the detection limit estimate and the atmospheric temperature variability level down to sensor noise levels just below 10⁻¹⁰ W/cm²/sr/cm⁻¹ NESR. At lower sensor noise levels, the relationship becomes more complex. The graph also suggests a power law relationship between detection limit and sensor NESR, since the trends for the different NESR orders


of magnitude are evenly spaced. This observation is confirmed by plotting the detection limit estimates versus sensor NESR, as shown in Figure 7. Note that for low NESR values, the curves in Figure 7 flatten out to a constant that depends on the temperature variability level. This region of the graph can be considered to represent a “variability-limited” region, whereas the curves seem to converge to a single power law curve as the sensor NESR increases, indicating a “noise-limited” region.


Fig. 6. Simulation results showing the relationship between detection limit and atmospheric temperature variability level at various sensor NESR levels (no noise, and 10⁻¹¹ to 10⁻⁸ W/cm²/sr/cm⁻¹).

4. Summary and Conclusions

We have presented a new computational model for estimating the detection limit of biological aerosols when using passive infrared spectroscopy. By assuming that short-term variations in atmospheric radiance are caused by isotropic turbulence, we can generate a time series of atmospheric temperature variations and compute the instantaneous atmospheric spectral radiance for each time point using MODTRAN5. The radiance data is then used as a background for simulating a detection experiment, from which we obtain statistics that allow us to calculate a detection limit based on a signal-plus-noise model. Although the model is still a work in progress, we have obtained some preliminary results from initial simulations that provide very good insight into the relationships between atmospheric variability level, sensor noise, range and detection limit. While the dependence of the detection limit on range appears to be linear, the relationships with atmospheric variability and with sensor noise are more complex, approximating a power law curve at the higher noise levels.


A key observation from the initial simulations is that for sensor noise levels greater than ~10⁻¹⁰ W/cm²/sr/cm⁻¹, sensor noise has more impact than atmospheric variability on the detection limit. As sensor noise decreases below this level, the effect of atmospheric temperature variability becomes increasingly important. If this result is confirmed, it may serve as a useful benchmark for future sensor design.


Fig. 7. Simulation results showing the relationship between detection limit and sensor NESR at various atmospheric temperature variability levels.

5. Acknowledgements

This work was funded by the U.S. Army ECBC through contract DAAD13-03-C-0046. The authors would like to thank the Air Force Research Lab and Spectral Sciences Inc. for providing and supporting the MODTRAN5 code. The data used for calculating the extinction of BG aerosol was collected by Kris Gurton of the Army Research Lab.

6. References

1. A. Ben-David, "Remote detection of biological aerosols at a distance of 3 km with a passive Fourier transform infrared (FTIR) sensor," Opt. Express 11, 418-429 (2003).
2. A. Ben-David and H. Ren, "Detection, identification, and estimation of biological aerosols and vapors with a Fourier-transform infrared spectrometer," Applied Optics 42, 4887-4900 (2003).
3. J.-M. Theriault, E. Puckrin and J.O. Jensen, "Passive standoff detection of Bacillus subtilis aerosol by Fourier-transform infrared radiometry," Applied Optics 42, 6696-6703 (2003).


4. A. Berk, G.P. Anderson, P.K. Acharya, L.S. Bernstein, L. Muratov, J. Lee, M.J. Fox, S.M. Adler-Golden, J.H. Chetwynd, M.L. Hoke, R.B. Lockwood, T.W. Cooley and J.A. Gardner, "MODTRAN5: A Reformulated Atmospheric Band Model with Auxiliary Species and Practical Multiple Scattering Options," Proc. SPIE 5655, 88-95 (2005).
5. M.C. Roggemann and B.M. Welsh, Imaging Through Turbulence, CRC Press (1996).
6. K.P. Gurton, D. Ligon and R. Kvavilashvili, "Measured Infrared Spectral Extinction for Aerosolized Bacillus Subtilis var. Niger Endospores from 3 to 13 µm," Applied Optics 40(25), 4443-4448 (2001).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 713–726 © World Scientific Publishing Company

EYE SAFE POLARIZATION DIVERSITY LIDAR FOR AEROSOL STUDIES: CONCEPT DESIGN AND PRELIMINARY APPLICATIONS

JAVIER FOCHESATTO
Geophysical Institute, University of Alaska Fairbanks, 903 Koyukuk Dr., Fairbanks, Alaska 99775, USA
[email protected]

RICHARD L. COLLINS
Geophysical Institute, University of Alaska Fairbanks, 903 Koyukuk Dr., Fairbanks, Alaska 99775, USA
[email protected]

KENNETH SASSEN
Geophysical Institute, University of Alaska Fairbanks, 903 Koyukuk Dr., Fairbanks, Alaska 99775, USA
[email protected]

HEATHER QUANTZ
Geophysical Institute, University of Alaska Fairbanks, 903 Koyukuk Dr., Fairbanks, Alaska 99775, USA
[email protected]

KRISHNAKANTH GANAPURAM
Electrical Engineering Department, University of Alaska Fairbanks, 203 Duckering Building, P.O. Box 755915, Fairbanks, Alaska 99775, USA
[email protected]

The concept design and preliminary applications of a new Eye Safe Polarization Diversity Lidar (ESPDL) instrument are described. The lidar operates with polarization diversity in the laser emission and polarization discrimination in the receiver at 1574 nm for tropospheric aerosol studies in the Arctic atmosphere. The instrument was originally designed to operate in the eye-safe wavelength range with a one-channel receiver and 20 dB linear polarization accuracy in the laser emission, assembled on a compact optical bench. The instrument was upgraded for polarization-diverse laser emission, i.e. polarization selectivity better than 30 dB, and for linear polarization discrimination in the receiver, i.e. polarization discrimination better than 50 dB. Geophysical lidar applications within the scope of this instrument, with an overall instrumental polarimetric accuracy better than 0.1%, focus on the identification of very dilute suspended aerosols in the troposphere, in-cloud ice initiation and aerosol/water interfaces, complex aerosols, sub-visible high-altitude clouds, and environmental issues and dynamic processes in the Arctic such as ice fog, forest fires, and the multilayered stably stratified Arctic boundary layer. In this article we describe the instrument design concept and the electronic synchronization necessary to achieve the maximum instrumental polarization accuracy. We report a preliminary case study of differential polarization analysis.

Keywords: scattering matrix, polarization diversity lidar


1. Introduction

The detection, identification, and speciation of atmospheric aerosols pose major challenges for laser remote sensing; these challenges are common to both the atmospheric sciences and the military. Aerosol processes in the atmosphere are better described when the aerosol's nature is known (e.g. the direct and indirect effects of aerosols on the radiation that reaches the earth, aerosol-cloud interactions, aerosol nucleation and freezing, aerosol hygroscopicity, and bio-aerosols and bio-hazard particles dispersed in the atmosphere). Despite its importance, determining the chemical speciation of aerosols poses a very challenging instrumental problem, since existing laboratory instruments designed for just such tasks suffer from severe limitations in field applications1. Polarization lidar offers a unique optical ranging signature for remote sensing of the troposphere and the stratosphere, and is able to distinguish and type aerosols at different ranges under diverse environmental conditions. Polarization lidar can discriminate between spherical and non-spherical particles, marine and terrestrial aerosols, optically thin haze and background aerosols, and sub-visible high-altitude clouds and background radiation, and can distinguish ice from super-cooled water aerosols and determine ice-plate orientation in ice clouds. Polarization lidar is a well-established technique in the atmospheric sciences for cloud physics research2. Classical polarization lidar configurations combine laser emission at one linear polarization state with a receiver unit equipped with two detection channels that possess polarization selectivity for the backscattered beam. One channel is used in l-polarization mode, aligned with the laser emission, and the other channel in r-polarization mode, perpendicular to the laser emission. In this configuration, the sensing scheme retrieves only two linear components of the backscattered optical field to calculate the depolarization ratio δ = Sp/Ss, where Sp and Ss are the two optical backscattered signals. The depolarization ratio is an intensive property of the air mass under interrogation; using it, one can type aerosols and clouds and, consequently, infer under certain conditions the dynamic and radiative influences of these atmospheric aerosols and clouds2,4,5. Improvements to linear depolarization measurements are necessary in cases where the backscattered return is under the influence of multiple scattering (i.e. after being backscattered from an optically thick cloud), when determining ice-plate angular orientation in high-altitude cirrus clouds and assessing their radiative influence, when characterizing complex aerosol mixtures composed of different coating layers, or when detecting the presence of aerosols with adsorbed molecules on their surface. The assessment of such problems implies the measurement of the full Stokes parameters of the backscattered field. This requires a more complex lidar instrument with the ability to emit laser radiation in a controlled polarization state in order to interrogate the aerosol/cloud air mass by selectively changing the emission polarization state. The depolarization fraction is an optical property detectable down to the molecular scale, although the detection threshold is wavelength-dependent. Molecular depolarization has values of ~3% at visible laser wavelengths (e.g. 694 nm)6. At the 1574 nm wavelength used by this instrument, the molecular depolarization return is very low (~0.1%).


In this article we discuss the concept behind the instrument design and some technical aspects of the ESPDL lidar in sections 2 and 3, and the laboratory calibration of the laser emission polarization state and the electronic synchronization of the instrument in section 4. Preliminary observations in the Arctic atmosphere are discussed in section 5.

2. Lidar Polarization Diversity Theory

The complete polarization state of a light beam, whether coherent or non-coherent, can be represented by a measurable quantity, the Stokes vector of the light beam: S = {I, Q, U, V}2. The general form of a polarized beam in the Stokes parameter representation is given in Eq. (1), where the subscripts l and r refer to the electric field components parallel and perpendicular, respectively, to the scattering plane, and the angle brackets denote time averages. The four elements of the Stokes vector are: (I) the total optical beam intensity; (Q) the dominant linear polarization, l-state (Q = 1) or r-state (Q = -1); (U) the amount of linear polarization at +45° (U = 1) or -45° (U = -1); and (V) the right (V = 1) or left (V = -1) circular polarization state:

$I = \langle E_l E_l^* + E_r E_r^* \rangle$
$Q = \langle E_l E_l^* - E_r E_r^* \rangle$
$U = \langle E_l E_r^* + E_r E_l^* \rangle$
$V = i\,\langle E_l E_r^* - E_r E_l^* \rangle$    (1)

The Stokes vector representation can thus describe the incident optical beam and the scattered beam at a given layer in the atmosphere; their relationship is given by the sixteen-element Mueller matrix, see Eq. (2), where λ is the laser wavelength and z is the distance between the scattering layer and the lidar receiver. The scattering matrix components S_ij contain information about the size distribution, shape, and refractive index of the scattering particles. The elements of this scattering matrix are dependent on scattering angle, wavelength, and the specific microphysical state and optical properties of the particular atmospheric layer:

$\begin{bmatrix} I_s \\ Q_s \\ U_s \\ V_s \end{bmatrix} = \frac{\lambda^2}{4\pi^2 z^2}\,[S]_{4\times4}\begin{bmatrix} I_i \\ Q_i \\ U_i \\ V_i \end{bmatrix}$    (2)

Although scattering matrix simplifications are commonly used in well-understood cases, potentially all 16 elements may be significant for certain environments involving non-randomly oriented non-spherical particles. For the case of light scattered by randomly oriented aerosols, the scattering matrix simplifies into 10 unknown coefficients for each scattering angle3. In this case the randomly oriented distribution of scatterers


supposes that each particle and its mirrored particle scatter light at the same time. For an anisotropic distribution of aerosols, the scattering matrix evaluated in the backscattered direction (180°) can be notably simplified7. For spherical particles, only four independent matrix elements are needed to describe the interaction with arbitrarily polarized light, because spherically symmetrical particles (e.g., cloud droplets) do not produce any change in the backscattered polarization state in single scattering7. An anisotropic distribution of particles may yield S14 ≠ 0, but an isotropic distribution, perhaps the usual case representing randomly oriented non-spherical particles in the atmosphere, has S14 = 0 (refs. 4, 5). It is also well known that particles are not always present in the atmosphere with random orientation; in fact they can be present in a preferred orientation, they can also be quasi-spherical, and in some cases multiple scattering may severely complicate the determination of the solid/liquid phase fraction when a reduced type of Mueller matrix is utilized8. For those cases, a lidar with diverse polarization has the ability to detect new observable quantities related to the scattering coefficients S_ij, permitting a full characterization of the scattering process. The application of polarization diversity theory relies on the measurement of different ratios (polarization factors) at different angular positions related to the scattering matrix coefficients S_ij. In order to obtain this measurement and successfully accomplish the corresponding calibration, it is necessary to induce different polarization states in the laser emission under very accurate and controlled conditions of {I_I, Q_I, U_I, V_I}. The analysis of the backscattered field at the lidar detection levels of {I_S, Q_S, U_S, V_S} must be made following high-accuracy standards in the polarization discrimination and in the polarization emission.
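As a small numerical illustration of Eqs. (1) and (2), the sketch below builds a Stokes vector from the complex field components and propagates it through a 4×4 scattering (Mueller) matrix. The example matrix is a made-up, block-diagonal backscatter matrix of the kind used for randomly oriented particles; its values, the wavelength and the range are illustrative assumptions.

```python
# Sketch of Eqs. (1)-(2): Stokes vector from complex field components, then
# propagation through a 4x4 scattering matrix.  The example matrix is a
# hypothetical block-diagonal backscatter matrix, not a measured one.
import numpy as np

def stokes_from_fields(E_l, E_r):
    """Stokes vector {I, Q, U, V} from complex parallel/perpendicular fields (Eq. 1)."""
    I = abs(E_l)**2 + abs(E_r)**2
    Q = abs(E_l)**2 - abs(E_r)**2
    U = 2.0 * np.real(E_l * np.conj(E_r))
    V = -2.0 * np.imag(E_l * np.conj(E_r))
    return np.array([I, Q, U, V])

def backscattered_stokes(S, stokes_in, wavelength, z):
    """Eq. (2): scattered Stokes vector a distance z from the scattering layer."""
    return (wavelength**2 / (4.0 * np.pi**2 * z**2)) * (S @ stokes_in)

# Linearly (l-state) polarized emission -> {1, 1, 0, 0}
s_in = stokes_from_fields(E_l=1.0 + 0j, E_r=0.0 + 0j)

# Hypothetical backscatter matrix (randomly oriented particles, block-diagonal form).
S = np.array([[1.00, 0.02, 0.0,  0.0],
              [0.02, 0.90, 0.0,  0.0],
              [0.0,  0.0, -0.90, 0.0],
              [0.0,  0.0,  0.0, -0.80]])

s_out = backscattered_stokes(S, s_in, wavelength=1.574e-6, z=5000.0)   # 1574 nm, 5 km
print("incident Stokes:", s_in)
print("scattered Stokes:", s_out)
```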

3. Polarization Diverse Lidar: Concept Design

3.1. Lidar Emission

In order to implement polarization emission control with an accurate polarization definition, it is first necessary to define a linear polarization state in the laser emission. In this case a Brewster-angle polarizer was chosen to define the initial polarization state of the laser beam. Brewster-angle polarizers are made of two calcite prisms assembled with an air space between them. The polarizer is designed to transmit the extraordinary (l-state) polarization beam with very low transmission loss (i.e. less than 2%), while the extinction of the crossed (r-state) polarization is approximately 60 dB. This high accuracy in polarization selection is achieved by arranging the four polarizer faces at the Brewster angle. The polarizer was specified with a surface flatness of λ_laser/2 and an AR coating on the crystal surfaces to maximize the optical throughput of the l-state beam at 1574 nm. Once a linear polarization state is defined, the laser beam is passed through a photoelastic modulator (PEM) that changes the polarization state of the optical beam passing through it to predefined values, in synchronization with the sequence of laser firing pulses. The PEM is arranged at an initial positive angle +φ referenced to the initial


laser polarization state (the [+] sign means angles measured clockwise, looking in the direction of laser beam propagation). The optical modulator produces a refractive index variation in a fused silica crystal driven by a piezoelectric modulator at a frequency of 42 kHz, with +/- 1 Hz uncertainty and variable amplitude. Based on this very precise PEM excitation frequency, with a deviation of less than 0.005%, a phase-locked loop electronic circuit produces a synchronous signal used to synchronize the laser optical pulse passing through the PEM. This signal, hereafter named the “retardation level”, gives a precise indication of the modulation timing inside the PEM, such that the laser pulse can be fired at different instants during one cycle of PEM modulation. This signal command is serially interfaced to the computer controller. Eq. (3) shows the polarization transformation of the beam after passing through the PEM. The total optical phase rotation is given by (ψ+φ), where φ is the physical angular position and ψ is the phase modulation introduced by the PEM. The device is mounted on a precision goniometric platform to allow opto-mechanical adjustments and polarization calibration.

$\begin{bmatrix} I_O \\ Q_O \\ U_O \\ V_O \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(2(\psi+\varphi)) & -\sin(2(\psi+\varphi)) & 0 \\ 0 & \sin(2(\psi+\varphi)) & \cos(2(\psi+\varphi)) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} I_I \\ Q_I \\ U_I \\ V_I \end{bmatrix}$    (3)

Laboratory calibration allows precise adjustment of both the retardation levels and the Q-switch trigger so that the laser fires at the correct modulation phase of the PEM. Since the PEM requires a period of 23.8 µs to produce the full modulation, the effect of variations of the Q-switch time delay within that interval, centered on the optimum Q-switch value, is less than 0.1%. Initially, two polarization states were generated using two levels of the PEM retardation control with the same Q-switch time delay, to induce two linear polarization states in the laser emission. To produce the linear polarization state {1,1,0,0} the phase angle to be induced must be -π/4; to generate {1,-1,0,0} the corresponding phase angle must be π/4. Figure 1 shows the electronic timing diagram for the computer-controlled polarization selectivity. The main synchronization control is exerted by the PEM signal at 42 kHz. This signal is the input to a programmable retardation generator with two outputs controllable through a GPIB (General Purpose Interface Bus) computer interface. This interface allows the control of the laser flash lamp firing circuit and the subsequent Q-switch with a maximum jitter of ~1 ps between the leading edges of the input and output pulses. The output pulses are 300 µs wide for both the laser flash lamp and the laser Q-switch. The electronic command is computer-interfaced in such a way that each polarization state is defined by a combination of retardation levels in the PEM and delays in the Q-switch signal through a LabVIEW software interface. Laboratory calibration permits the adjustment and selection of adequate working conditions. Figure 2 shows the sequence of PEM synchronization pulses and the corresponding laser pulse used to generate the vertical and horizontal polarization selection. The optical measurements were made with an optical accuracy of 0.1% in polarization discrimination.
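The short sketch below checks numerically what the rotation of Eq. (3) does for total phases of ∓π/4, reproducing the two linear emission states {1,1,0,0} and {1,-1,0,0} quoted above. The Stokes vector assumed to enter the PEM ({1,0,1,0}, i.e. linear at +45°) is an assumption chosen so that Eq. (3), taken literally as a rotation, yields those states; it is not a statement about the actual optical layout.

```python
# Quick numerical check of Eq. (3) for total phases psi + phi = -pi/4 and +pi/4.
# The assumed input Stokes vector {1, 0, 1, 0} is illustrative only.
import numpy as np

def pem_rotation(total_phase):
    """Mueller rotation matrix of Eq. (3) for a total optical phase psi + phi."""
    c, s = np.cos(2.0 * total_phase), np.sin(2.0 * total_phase)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

stokes_in = np.array([1.0, 0.0, 1.0, 0.0])   # assumed +45 deg linear state entering the PEM

for phase, label in ((-np.pi / 4, "{1, 1, 0, 0}"), (np.pi / 4, "{1,-1, 0, 0}")):
    out = pem_rotation(phase) @ stokes_in
    print(f"phase {phase:+.3f} rad -> Stokes {np.round(out, 3)}  (expected {label})")
```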


Figure 1. Timing diagram for polarization selectivity. A) Computer polarization mode selection, in this case change in the retardation level applied to the PEM modulator with Q-switch constant. B) PEM synchronization pulses, 42 kHz +/- 1 Hz. C) Flash lamp laser trigger at 10 Hz PRF (pulse repetition frequency) in-phase with the PEM synch. signal. D) Laser Q-switch pulses of 300 µsec width. E) Laser optical emission pulse with width of 8.97 ns. F) the lidar signal, ~ 100 µsec length.

Figure 2. PEM synchronization signal and laser optical pulse at different polarization states. The square wave pattern is the PEM signal input in the synchronization circuit. The upper panel shows a retardation level (ρ = 0.0) where the optical pulse is passing through the PEM 11.55 µsec after the leading edge of the PEM shown in the figure. The laser optical pulse is vertically polarized. The lower panel shows the case of retardation level (ρ = 0.6) where the laser pulse passes through the PEM with a delay of 13.61 µsec referenced to the PEM leading edge (shown in the figure). The PEM period in the upper panel is 24.24 µsec and in the lower panel is 26.06 µsec. The upper panel produces vertically polarized emission and the lower panel produces horizontally polarized emission.


3.2. Lidar Receiver

The lidar receiver was designed to collect the backscattered optical beam starting from distances very close to the laser emission. A maximum field-of-view angle of ~3 mrad, variable from 1-3 mrad in steps of 0.2 mrad, was implemented. In this configuration the instrument is able to measure optical returns from ~200 m. The optical design must be able to contain the backscattered beam within the surface of a very small semiconductor detector of 0.2 mm diameter (an InGaAs avalanche photodiode, APD) over the entire lidar profile range. A fast-capture (i.e. low f-number) telescope is therefore needed to keep the scattered image under controlled conditions at all ranges in the troposphere. In addition, a mechanical design for the telescope structure was required that could maintain a constant distance between the primary and secondary mirrors, to keep the image in focus under field operations in the Arctic environment (i.e. -60 °C). A custom telescope was designed and fabricated for that purpose. The optical specification is better than an aberration figure of four waves peak-to-valley at 632 nm in a Dall-Kirkham type telescope with no correction plate, at f/2.96 with a 900 mm focal distance. This mechanical configuration maintains zero focal drift down to -60 °C, and the mirrors are fabricated with a low coefficient of thermal expansion2. The polarization distortion introduced in the collected optical beam is less than -60 dB for a 3 mrad field of view. The polarization receiver was implemented in a mechanical assembly containing the interference filter, a half-wave plate, and the polarizing beam splitter. The mechanical assembly containing the polarization beam splitter, the focusing optics, and the detector units is able to rotate around the optical axis. This opto-mechanical design assists the optical alignment and the electronic calibration, and maximizes the polarization matching between the laser emission and the receiver subsystem. A Glan-Thompson polarizing beam splitter is used to separate the two orthogonal polarization components. This polarizer is made of two cemented calcite prisms. For an input beam at normal incidence to the polarizer input face, the polarizer splits the beam into two orthogonally polarized output beams that exit the crystal at normal incidence to their respective output faces with a separation angle of 45°, independent of the wavelength. The cross-talk extinction ratio is better than 50 dB.

Figure 3. Optical setup for polarization emission modes calibration. The dichroic mirror extracts the residual laser radiation at 1064 nm from the laser beam at 1574 nm. The Brewster angle polarizer allows only a vertical polarization state, with 60 dB accuracy. The PEM is set up at a 45º angle clockwise referenced to the vertical polarization. The cube beamsplitter polarizer analyzes the two polarized beams.


After passing through the beam splitter, the optical beams with l and r linear polarizations are collimated and focused onto the detector surface3. The beam focusing is achieved using an off-the-shelf optics assembly composed of a doublet achromatic lens (f/2, 30 mm diameter) followed by a plano-convex lens that corrects the focused beam for paraxial aberrations. Finally, on top of the detector an aspheric lens (f/0.85, 100 mm focal length) concentrates the light onto the detector surface. The focal length of the focusing system is 188.25 mm for a 1.1 mrad field of view, resulting in a 127.6 µm rms spot diameter as calculated using the Zemax optical software. The mechanical implementation maintains a tolerance of 10 µm. More details are available elsewhere3.

4. Preliminary Analysis of Differential Polarization Reflectivity

A preliminary field experiment was carried out at the end of the winter, in March 2006. The ESPDL lidar was set up at the Geophysical Institute in Fairbanks, Alaska. The lidar was pointed at the zenith (slightly off-vertical to avoid direct reflection: ~1 deg from zenith). The instrument collected approximately 10 hours of lidar profiles of the troposphere. The lidar emission was set up in two linear polarization modes, vertical and horizontal sequentially, emitting 10 Hz laser pulses as explained in section 3. The lidar signals were averaged over 100 laser shots and then, after a 1 sec acquisition waiting time to avoid polarization contamination in the receiver beam, a new acquisition sequence was started with the next polarization state. The receiver was set up without polarization discrimination (i.e. a single-channel receiver) using a commercial Cassegrain telescope (f/10, 2 m focal length, 8-inch primary diameter) described in detail elsewhere2. The optical throughput of the telescope is lower than 40% at the laser wavelength, while the polarization accuracy, as estimated by Zemax, is preserved at better than -40 dB. Polarization reflectivity signals were obtained by sequentially averaging 100 laser pulses (i.e. 10 sec time-averaging) at each polarization mode. The scattering matrix formulation for randomly oriented particles4,5, excited by two different linear polarization modes, gives the expression for the backscattered polarization optical beam. When the emission is set to vertical, Eq. (3) gives an optical field return Sr as shown in Eq. (4):

$\begin{bmatrix} I_S \\ Q_S \\ U_S \\ V_S \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} & 0 & 0 \\ S_{21} & S_{22} & 0 & 0 \\ 0 & 0 & S_{33} & S_{34} \\ 0 & 0 & S_{43} & S_{44} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}$    (3)

$S_r = I_S + Q_S = S_{11} + S_{22} + 2S_{12}$    (4)

When the emission is set to horizontal polarization as shown in Eq.(5), the returning field depends on the scattering coefficients, as shown in Eq.(6):


 I S   S11 Q   S  S  =  21 U S   0     VS   0

S12

0

S 22

0

0

S 33

0

S 43

0  1 0   −1 × S 34   0     S 44   0 

S l = I S + QS = S11 − S 22

495

(5)

(6)

The backscattering power (i.e. the apparent backscatter lidar return, uncorrected for extinction) for the two linear polarization reflectivities is shown in Figure 4 for vertical polarization emission and in Figure 5 for horizontal polarization emission. The false color code indicates different backscattering power levels (or apparent backscattering power). Figures 4 and 5 display a sub-visible cirrus cloud in the morning at a maximum altitude of 7 km, which subsides during the day as the cloud undergoes morphological and optical changes. Below 2 km (not shown here), a backscattering signature of Asian dust aerosols was indicated. A vertically layered analysis was performed to show the differential polarization reflectivity and its possible relationship to the linear polarization ratio. Vertical layers of 1 km were taken at heights of 3-4 km, 4-5 km, 5-6 km and 6-7 km. Figure 6 shows the backscatter power return at both polarizations for the different study heights of 3-4, 4-5, 5-6 and 6-7 km. The absolute differential polarization reflectivity ZDR_I was calculated as in Eq. (7), and the total polarized backscattering reflectivity is given in Eq. (8). These two reflectivities depend on different scattering matrix coefficients and vary independently with the two uncoupled scattering coefficients S22 and S11:

ZDRI = ZDRII =

Pr − Pl 2 Pr + Pl 2

= S 22 + S12

(7)

= S11 + S12

(8)

Based on the differential polarization reflectivities (ZDRI and ZDRII) a differential polarization reflectivity ratio can be formulated, see Eq.(9). This new quantity, the differential polarization reflectivity ratio, and the linear depolarization ratio have the same relationship to the Stokes coefficients of the scattering matrix: ∆=

ZDRI − ZDRII ZDRI + ZDRII

Figure 8 illustrates the calculation of ∆ at the specified tropospheric layers over time.

(9)
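A short sketch of how the layer quantities plotted in Figures 6-8 can be computed from the averaged returns; the function and array names (p_r, p_l) are illustrative, not from the instrument software:

import numpy as np

def differential_reflectivity(p_r, p_l):
    """Compute ZDRI, ZDRII (Eqs. 7-8) and the ratio Delta (Eq. 9) from
    range-resolved apparent backscatter profiles averaged over 100 shots at
    each emission polarization.  p_r: vertical-emission return, p_l:
    horizontal-emission return (hypothetical arrays, shape = [time, range])."""
    zdr_i = 0.5 * (p_r - p_l)      # ~ S22 + S12
    zdr_ii = 0.5 * (p_r + p_l)     # ~ S11 + S12
    delta = (zdr_i - zdr_ii) / (zdr_i + zdr_ii)
    return zdr_i, zdr_ii, delta

# Example: average Delta over a 1 km layer (e.g. 5-6 km at 0.75 m resolution)
# p_r, p_l = ...  (load the apparent backscatter profiles)
# lo, hi = int(5000 / 0.75), int(6000 / 0.75)
# layer_delta = differential_reflectivity(p_r, p_l)[2][:, lo:hi].mean(axis=1)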


Figure 4. Vertical polarization reflectivity for March 2 (UTC), 2006. The temporal series was obtained at 0.75 m vertical resolution and 10 sec temporal resolution. The false color picture represents levels of backscattering power (i.e. apparent backscattering).

Figure 5. Horizontal polarization reflectivity for March 2 (UTC), 2006. The temporal series was obtained at 0.75 m vertical resolution and 10 sec temporal resolution. The false color picture represents levels of backscattering power (i.e. apparent backscattering).


Figure 6. Backscattering power at vertical and horizontal polarization emissions. Panels indicate vertical averaging over layers in the troposphere from 6-7 km, 5-6 km, 4-5 km, 3-4 km, and 2-3 km. Vertical axis is the signal strength in all cases. Green trace is horizontal and Blue trace is vertical polarization.

Figure 7. Differential polarization reflectivity. Panels indicate vertical averaging by layer in the troposphere from 6-7 km, 5-6 km, 4-5 km and 3-4 km. The black trace is the total backscattered intensity ZDRII and the blue trace indicates the absolute difference between polarization reflectivities ZDRI.


Figure 8. Differential polarization reflectivity ratio. Vertical averages by layers in the troposphere from 6-7 km, 5-6 km, 4-5 km and 3-4 km.

5. Discussion and Conclusions

The eye-safe polarization diversity lidar is a novel instrument conceived for field operation in extreme environments and deployable on any platform. Attempts to perform lidar polarization diversity measurements have been made in the past using different static optical configurations9,10. The instrument described here represents a step forward in lidar remote sensing capabilities because it incorporates an accurate, electronically controlled optoelectronic system to manipulate and measure the scattering matrix at different altitudes in the atmosphere. The optoelectronic calibration and the electronic timing that fires the laser at the appropriate time during the refractive index modulation of the PEM are the key elements which enable high-accuracy polarization emission to be obtained. It is important to note that the same polarization state in the laser emission can be generated with different combinations of the retardation level and the laser Q-switch timing. Here, to generate the two linear polarization states, the option was selected that keeps the same energy per pulse for both polarization states; in addition, both polarization states are generated during the slow variation of the refractive index in the PEM device. This feature allows more reliable laser pulse triggering during the periods when these slow variations of refractive index occur within the PEM. Generation of ±45° linear polarization emission will allow auto-calibration in the lidar receiver; water droplets in cumulus clouds can be used to adjust both emitter channels in this linear polarization state.


Circular polarization dichroism opens a new frontier in our search for aerosol threats that may spread over large distances in different environments. Differential backscattering signals from circular polarization (left over right) will enable the detection of rotationally asymmetric molecules that may be adsorbed on the aerosol surface. The choice of a laser wavelength in the near infrared (NIR) is beneficial for eye safety and beam invisibility, and also provides a higher background contrast of aerosols over molecules when compared with visible lasers; at the same time, the NIR has the lowest threshold for detecting depolarization factors, since the molecular threshold is ~0.1%. Differences between the backscattered signals at the two polarization emission levels were consistently verified to be higher than their noise level difference. This means that the differential reflectivity measured throughout the different layers, as shown in Figure 7, is greater than the noise difference observed when no clouds or aerosols are present. As an example, in the upper levels (from 5 to 7 km) the differential signal is almost three times larger than the noise difference. The results obtained in this preliminary study indicate that the differential reflectivity ratio and the linear depolarization ratio have the same scattering matrix relationship. This preliminary result could be confirmed by comparing the same lidar measurement with other lidar wavelengths and by retrieving the angular scattering from a reflecting cloud or aerosol layer. Values of the differential reflectivity ratio are smaller than the ranges obtained for the same targets (i.e. clouds and aerosols at different altitudes) by classic depolarization lidars. An important difference can be established between Δ and the linear depolarization ratio: low depolarization ratios are generally associated with a small amount of signal coming through the perpendicular receiver channel, whereas in the case of the Δ calculation, low depolarization means that both signals are similar. The Δ calculation therefore has an advantage, since the zero-depolarization condition is obtained under conditions of high signal-to-noise ratio; in the classical case, in contrast, the zero-depolarization condition is obtained under conditions of no signal or poor signal-to-noise ratio. Another advantage of this particular lidar configuration is that it uses a single-channel receiver without the need for a calibration constant. Also, because this wavelength is only slightly attenuated by the atmosphere (i.e., a loss of 7% of transmission for micron particles and less than 3% for submicron particles), range extinction corrections are negligible for depolarization ratio calculations.

6. Acknowledgments

The initial eye-safe lidar prototype was funded by the University Partnering for Operational Support program, supported by the United States Air Force under contract number N00024-03-D-6606, and by the U.S. Army Research Laboratory under contract DAAD19-02-D-0003. The polarization diversity implementation was funded by the National Science Foundation project IRS 92-6000147.


7. References

1. Fochesatto J. and Sloan J. (2006). “Advances in Signal Processing of Multicomponent Raman Spectra of Particulate Matter”. Reviewed papers, Proceedings of the International Symposium on Spectral Sensing Research, Maine.
2. Sassen K. (1991). “The polarization lidar technique for cloud research: A review and current assessment”. Bull. Amer. Meteor. Soc. 72, 1848-1866.
3. Fochesatto J., Collins R.L., Yue J., Cahill C.F., and Sassen K. (2005). “Compact Eye-Safe Backscatter Lidar for Aerosols Studies in Urban Polar Environment”. Proceedings of SPIE 5887. doi: 10.1117/12.620970.
4. Mishchenko M., Travis L. and Lacis L. (1999). “Light scattering by non-spherical particles”. Academic Press, New York.
5. Mishchenko M., Hovenier J. and Travis L. (Eds.) (2000). “Light Scattering by Nonspherical Particles: Theory, Measurements, and Applications”. Academic Press, San Diego, 690 pp.
6. Sassen K. (2000). “Lidar Backscatter Depolarization Technique for Cloud and Aerosol Research”, in: “Light Scattering by Nonspherical Particles: Theory, Measurements, and Geophysical Applications”, Mishchenko M., Hovenier J. and Travis L. (Eds.), Academic Press, San Diego, 393-416.
7. Perrin F. (1942). “Polarization of Light Scattered by Isotropic Opalescent Media”. J. Chem. Phys. 10, 415-427.
8. Hu Y.-X., Yang P., Lin B., Gibson G., and Hostetler C. (2003). “Discriminating between spherical and non-spherical scatters with lidar using circular polarization: a theoretical study”. JQSRT 79-80, 757-764.
9. Houston J.D. and Carswell A.I. (1978). “Four-component polarization measurements of lidar atmospheric scattering”. Appl. Opt. 17, 614-620.
10. Woodward R., Collins R.L., and Disselkamp R.S. (1998). “Circular depolarization lidar measurements of cirrus clouds”. Proceedings of the 19th Int. Laser Radar Conference, NASA/CP-1998-207671/PT1, 47-50.

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 727–734 © World Scientific Publishing Company

AEROSOL TYPE-IDENTIFICATION USING UV-NIR-IR LIDAR SYSTEM S. EGERT, D. PERI Israel Institute for Biological Research P. O. Box 19, Ness-Ziona, Israel 74100 [email protected]

Identification of aerosol type and chemical composition may help to trace their origin and estimate their impact on land and people. Aerosol chemical composition, size distribution and particle shape manifest themselves in the spectral scattering cross-section. In order to make a reliable identification, a comprehensive spectral analysis of aerosol scattering should be carried out. Usually, spectral LIDAR measurements of aerosols are most efficiently performed using an Nd:YAG laser transmitter at the fundamental frequency and its 2nd, 3rd and 4th harmonics. In this paper we describe automatic detection and identification of several aerosol types and size distributions, using a multispectral lidar system operating in the IR, NIR and UV spectral regions. The LIDAR transmitter is based on a single Nd:YAG laser. In addition to the 3rd and 4th harmonics in the UV, two optical parametric oscillator units produce the eye-safe 1.5 µm wavelength in the near IR and up to 40 separable spectral lines in the 8-11 µm IR. The wide spectral coverage required for backscattering analysis, combined with fluorescence data, enables the generation of a large spectral data set for aerosol identification. Several natural and anthropogenic aerosol types were disseminated under controlled conditions to test system capabilities. Reliable identification of transient and continuous phenomena demands fast and efficient control and detection algorithms. System performance, using the specially designed algorithms, is described below.

Keywords: Multispectral, Lidar, UV, IR, NIR

1. Lidar System, Operational and Identification Algorithms

The combined IR-UV LIDAR transmitter is based on an Nd:YAG laser. The 3rd and 4th harmonics are generated in conventional BBO crystals. The eye-safe 1.5 µm near-IR radiation is generated in an OPO system. A second OPO unit produces any selectable spectral line in the 8-11 µm range (IR), with a 5 cm-1 spectral width. A selected wavelength set is transmitted sequentially at a 10 Hz rate. Therefore, in order to maintain fast acquisition, the wavelengths in the set should be carefully chosen and limited in number. The spectral scheme is selected to best represent the aerosols expected in the scene, subject to some restrictions that are described below. The detection unit, composed of three different sensors, covers the widely separated NIR, IR and UV spectral channels. The NIR channel provides sensitive aerosol detection and information on the cloud extent. Detailed spectral data of aerosol backscattering in the IR, and fluorescence radiation resulting from two different UV excitations, form the basis for aerosol identification. Spectral analysis of the fluorescence signals is performed using an
eight-channel spectrometer in the UV-VIS part of the spectrum.1 The transmitter and detection unit can be switched automatically between the spectral channels, based on a pre-designed detection protocol, or operated at specific wavelengths within one of the spectral channels.
The database for the LIDAR evaluation contains the spectra of several dust types, oil aerosols and diesel fogs. Their spectral backscattering cross-sections have been measured in the laboratory for specific realizations of the particle size distribution. For a more general class of size distribution functions, the backscattering coefficient has been calculated using the Mie scattering formalism. The IR refractive index, needed for the Mie calculations, has been evaluated from absorption measurements using the Kramers-Kronig relation. The scattering cross-section has been calculated for a number of size-distribution functions. To demonstrate the system capabilities during testing outside the laboratory, a few aerosol types were used. Oils were chosen to represent anthropogenic materials; due to their low volatility, stable aerosol size distributions can be realized. Silicone oil aerosols can be chemically similar to some of the dusts that contain silicates and yet spectrally different. Diesel fog was chosen mainly due to its strong fluorescence.
A set of detection and identification algorithms, as well as automatic management and control software, has been developed. The aerosol cloud detection algorithm is based on the spatial range resolution capability of the system. The LIDAR detected signals, converted to range coordinates, are spatially smoothed. At each step the deviation from the system's local RMS noise level is determined. The detection algorithm searches for a continuous positive deviation within three consecutive steps. A detection event is defined as a 'cloud' if such a local deviation passes a given threshold in terms of RMS noise and if the signal decreases to the noise level at a longer distance. Since the entire LIDAR signal is a decreasing function of range, mathematical procedures are applied for baseline correction, necessary due to the presence of ambient aerosols. The LIDAR scans angular sections that contain numerous lines of sight. Cloud detection is based on continuous detection of similar spatial features on a few successive lines of sight, or on steady continuous detection on a single line.
Aerosol cloud detection is performed using the 1.5 µm channel. The discrimination algorithm, based on the IR and UV channels, is applied only to lines of sight on which aerosols are detected. During the identification process, the IR channel is applied first. If the aerosol type is identified confidently, the LIDAR turns back to the 1.5 µm channel. Continuous tracking is performed using the NIR channel to exploit its high sensitivity, which can compensate for the decrease of aerosol concentration with cloud dispersion. When the IR operation does not produce a clear identification, the UV channel is operated. If the fluorescence channel produces a definite identification, the system turns back to the 1.5 µm channel for cloud tracking. If the cloud is not discriminated or it disappears from the scanned angular subsection, the entire process is re-initiated, operating the LIDAR over the full angular section. Computerized management and control software activates optical components directing the laser beam to the proper spectral channel for the transmitter as well as the
detector. The algorithm operates under the system control algorithm, producing a feedback cycle based on the findings of the detection and discrimination algorithms.
Identification algorithms are based on a least-squares fitting procedure. This procedure uses separate spectral databases for the two spectral channels, IR and fluorescence. The fluorescence data are composed of two separate spectral bases, one for each of the excitation wavelengths, 266 nm and 355 nm. For aerosol clouds whose size distribution is expected to change during the discrimination process, a few spectra were included in the library for each aerosol type. Since too many similar spectra can degrade the algorithm performance, only a limited number of spectra were allowed; these spectra addressed the major differences in the spectrum resulting from the various possible size distributions. Knowing the aerosols' spectral behavior from laboratory measurements and their calculated spectra, the transmitted wavelengths were carefully chosen to include the spectral maxima and minima of each aerosol type. This procedure ensured a high signal-to-noise ratio for the wavelengths of strong backscattering coefficient and a stable spectral echo for the wavelengths of low variance or minimal scattering. In order to increase the discrimination reliability in the case of spectral similarity between aerosol types, higher weight was given to wavelengths of low variance in the spectrum.

2. LIDAR Operation and Identification Results

The capability of discrimination between aerosol types using spectral scattering in the IR region was discussed previously.2 An early version of the LIDAR, employing a CO2 laser as transmitter, was operated with four spectral bands that covered parts of the 9-11 µm spectral region. Two types of dust and oil aerosols were analyzed using this IR LIDAR. The transmitted spectrum did not cover the very low backscattering of the oil at the 8.5 µm wavelength. This backscattering feature is characteristic and differs from the backscattering of the other specimens, even that of the desert dust having similar chemical components. Therefore, while using the CO2 transmitter, though spectral differences between the disseminated materials were evident, the difficulty of discriminating between the oil and some of the dusts was noted. The system presented in this paper exhibits improved identification capabilities. The improvement is a consequence of three factors: the quasi-continuous coverage of the 8-11 µm spectral range, the contribution of the NIR channel, and the contribution of the UV spectral channel.
As described above, the detection procedure starts with the NIR channel activated. The relatively high energy and large scattering cross-section provide a sensitive aerosol detection capability that is used to define the cloud position and extent. Figure 1 shows a 110° angular scan where the lines of sight that intersect the cloud are enclosed in a circle. The subsection limiting angles are given in the two boxes on the right side of the picture and recorded by the LIDAR for the discrimination procedure. The algorithm is capable of handling up to five 'cloud' events simultaneously and of rejecting static topographic features.
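A minimal sketch of the range-gated cloud-detection logic described in Section 1 (smoothed range profile, deviation from the local RMS noise, three consecutive bins above threshold, return to the noise level farther out); the threshold and window values below are illustrative assumptions, not the system's actual settings:

import numpy as np

def detect_cloud(profile, noise_rms, k_sigma=3.0, smooth=5, run=3):
    """Flag a 'cloud' along one line of sight: the baseline-corrected,
    range-converted signal must exceed k_sigma * noise_rms for `run`
    consecutive (smoothed) range bins and later fall back to the noise
    level.  Parameter values are illustrative, not the paper's."""
    sm = np.convolve(profile, np.ones(smooth) / smooth, mode="same")
    above = sm > k_sigma * noise_rms
    for start in range(len(above) - run):
        if above[start:start + run].all():
            # require the signal to return to the noise level farther out
            if not above[start + run:].all():
                return start          # first range bin of the detected cloud
    return None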


Fig. 1. A field-scan performed by the NIR channel. Aerosol cloud location is enclosed in a circle.

Aerosol detection and discrimination were performed using several aerosol dissemination points in this angular section, located at distances of 0.5-3 km from the LIDAR. The LIDAR scanned the defined subsection, sequentially activating the two spectral channels (UV, IR) on each line of sight (LOS) where aerosols had been detected. The known size distribution at the dissemination points changed with cloud dispersion and fluctuated due to natural atmospheric turbulence, challenging the discrimination algorithm. Automatic detection and identification were achieved by employing a decision protocol that controls the choice of spectral channel and the scanning procedure, and reports the findings. As described above, the discrimination algorithm uses a library containing the spectral features of the aerosols expected to be present in the scene. Due to variations in the aerosol size distribution it is impractical to predict the exact backscattering spectrum. The identification is therefore based on a procedure that finds the best fit to the library spectra; the algorithm output is the relative weights of the aerosols in the library. An example showing silicone oil identification, normalized to the path-concentration that produces a signal-to-noise ratio of unity, is shown in Figure 2. The silicone oil aerosol is represented by two somewhat different spectra (marked as silicone type-I and type-II). The difference between the weighted concentration of the silicone oil and that of the other specimens in the library is evident. The contribution of silicone type-I is larger than that of silicone type-II during the entire measurement; however, the level of silicone type-II is still higher than that of the fog oil or dust.
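The best-fit step can be sketched as a non-negative least-squares fit against the library spectra; scipy's nnls is used here as one possible solver, since the paper specifies only a least-squares fitting procedure, and the library layout is an assumption:

import numpy as np
from scipy.optimize import nnls

def identify(spectrum, library, weights=None):
    """Least-squares fit of a measured spectrum (one value per transmitted
    wavelength) against library spectra; returns the relative weight of each
    library aerosol.  `weights` optionally emphasizes low-variance wavelengths
    (the unequal-weight scheme of Fig. 4).  Assumed layout: library columns =
    aerosol types, rows = wavelengths; all inputs are numpy arrays."""
    if weights is None:
        weights = np.ones_like(spectrum)
    w = np.sqrt(weights)
    coeffs, _ = nnls(library * w[:, None], spectrum * w)
    total = coeffs.sum()
    return coeffs / total if total > 0 else coeffs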


Fig. 2. Detection and identification of a silicone-oil cloud as it disperses in time (normalized CL versus measurement number for dust, silicone I, silicone II and fog oil).


Fig. 3. Dust identification – equal spectral weights. Dust is wrongly identified as one of the silicone oils types.

Materials that have a characteristic feature in their spectral backscattering can be detected and identified even at low concentrations. If the detected materials show high spectral variance in such a characteristic spectral subregion, the efficiency of the discrimination procedure can be improved by giving that subregion a larger weight in the data processing. Figures 3 and 4 are examples of dust identification using equal spectral weights (Fig. 3) and emphasizing the spectral wavelengths that exhibit larger variance (Fig. 4). Trying to identify the dust aerosol with the equal-weight procedure (Figure 3) shows various events where silicone oil
instead of dust was wrongly identified. Using the variable-weight scheme (Figure 4) increases the number of accurate identifications. In order to minimize the number of errors, an 'identification' is declared only when k out of n measurements detect the same material and this condition repeats itself m times. The processing is fast enough to allow such a screening procedure, even during detection of transient events. The choice of the parameters n, k and m depends on the aerosol type, the wind conditions and the desired level of reliability. These parameters can be changed to fit the mission performed by the LIDAR.
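A sketch of the k-out-of-n, repeated-m-times screening described above; the bookkeeping details are an assumption, since only the criterion itself is given in the text:

from collections import Counter, deque

def confirmed_identification(stream, k, n, m):
    """Declare an identification only after the same material wins at least
    k of the last n single-measurement identifications, and that condition
    holds m times in succession.  `stream` yields one material label per
    measurement; k, n and m are mission-dependent, as noted in the text."""
    window = deque(maxlen=n)
    hits, candidate = 0, None
    for label in stream:
        window.append(label)
        if len(window) < n:
            continue
        material, count = Counter(window).most_common(1)[0]
        if count >= k and material == candidate:
            hits += 1
        else:
            candidate, hits = (material, 1) if count >= k else (None, 0)
        if hits >= m:
            return candidate
    return None   # no reliable identification declared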


Fig. 4. Dust identification using unequal spectral weights.

The UV channel is operated to identify aerosols that do not have distinctive spectral features in the IR but have a specific UV fluorescence spectrum. The channel is composed of the two transmitted wavelengths, 355 nm and 266 nm. The fluorescence resulting from the Nd:YAG 4th harmonic proved to be a reliable tool for aerosol identification, especially for pollens, burning canopy and Diesel fog. However, due to the poor atmospheric transmittance of the 4th harmonic, the 3rd harmonic was also used; in that channel, the higher atmospheric transmission can sometimes compensate for the lower fluorescence efficiency. In most applications the information from the two channels can be combined to increase identification reliability. An example of Diesel fog detection in the two UV channels, and the ratio of the concentrations derived separately from each channel, is depicted in Figure 5. In both channels the Diesel fog is identified correctly. Since these measurements were performed at an intermediate range (1200 m), the degradation in the performance of the 266 nm channel is not very evident. However, the deviation from unity of the ratio of the concentration path integrals for the two excitation wavelengths points to the
difference between their transmission values and their deviation from the standard atmosphere. At certain concentration levels, discrimination between dust and Diesel fog could rely on spectral IR backscattering, since the fog, unlike the other specimens in the library, is spectrally flat in the IR. Combining the information from the IR and UV channels can help to discriminate the specimen reliably even during events of low signal-to-noise ratio. Thanks to the system's wide spectral coverage it is possible to employ the automatic spectral detection and discrimination algorithms efficiently. During system testing, numerous release events were carried out. Under stable wind conditions, fast detection and identification were performed reliably. However, during high turbulence or undefined wind direction, frequent changes in the cloud concentration and orientation caused changes in the signal-to-noise ratio. These atmospheric conditions demanded numerous passages from one channel to another in order to perform reliable discrimination, lengthening the identification procedure. In order to minimize this, the constraints described above were added to the algorithm: 'reliable discrimination' was declared only when the criteria for reliability were met for m out of l events (m ≤ l). The m and l values were chosen to fit the worst cases expected in the scene. The parameters governing the system operation, detection and discrimination algorithms are changeable. These include the threshold values for detection and discrimination, the minimum number of measurements needed for a channel transition, the k and m values for the IR and UV channels, etc. These parameters can be changed according to the LIDAR mission.


Fig. 5. Detection and identification of a Diesel-oil aerosol cloud, using fluorescence measurements. The ratio of the path concentration calculated for 266 nm and 355 nm is marked with an arrow.


3. Summary

Successful operation of the versatile multi-channel LIDAR in the UV, NIR and IR was demonstrated during real-time spectral aerosol measurements. The efficiency of the LIDAR performance and the reliability of the aerosol type-identification demonstrated the availability of the technology. The detection/identification procedure that was built especially for this LIDAR performed efficiently:

• The efficient detection/identification algorithm enabled accurate automatic operation of the LIDAR
• The NIR channel provided fast detection and determined the aerosol extent
• The combination of the multi-spectral IR and UV channels provided accurate discrimination

Most airborne and space LIDAR systems are based on an Nd:YAG laser transmitter and OPO components (usually for 2nd and 3rd harmonics generation). The UV and IR channels are derived from similar elements. Improvement of atmospheric monitoring capability can therefore be achieved by adopting the UV and IR scheme and the algorithms designed for its efficient and reliable operation.

4. References

1. S. Fastig, Y. Erlich, S. Pearl, E. Naor, Y. Krauss, T. Inbar, D. Kats, Multispectral Lidar System Design, Build, Test, submitted to 23rd ILRC, Japan (2006).
2. D. Peri, N. Shiloah, S. Egert, Dust Type and Pollutants Identification Using a Multi-Spectral IR Lidar, Proc. 22nd ILRC, Italy (2002).

International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 735–745 © World Scientific Publishing Company

RARE-EARTH DOPED POTASSIUM LEAD BROMIDE MID-IR LASER SOURCES FOR STANDOFF DETECTION KRISHNA C. MANDAL EIC Laboratories, Inc., 111 Downey Street Norwood, MA 02062, United States of America [email protected] SUNG H. KANG EIC Laboratories, Inc., 111 Downey Street, Norwood, MA 02062, United States of America [email protected] MICHAEL CHOI EIC Laboratories, Inc., 111 Downey Street, Norwood, MA 02062, United States of America [email protected] R. DAVID RAUH EIC Laboratories, Inc., 111 Downey Street, Norwood, MA 02062, United States of America [email protected]

The single crystal growth of KPb2Br5 by the vertical Bridgman technique, using in-house zone-refined PbBr2 and KBr with rare-earth terbium doping, has been studied. The grown moisture-resistant crystals (1.5 cm diameter and 10 cm length) show high promise for low-phonon-energy, room-temperature solid-state laser applications on the longer-wavelength side of the mid-IR (4-15 µm) due to their long energy-storage lifetimes, wide tunability, and excellent optical quality. The processed crystals are highly transparent (T ≥ 80%) in the 0.4-25 µm spectral region. Repeated melting-freezing cycles during differential scanning calorimetry (DSC) experiments did not reveal any appreciable variation in the melting point or phase transitions, which is indicative of their excellent thermal stability. The emission spectra pumped with a 2 µm source show broadband emissions with peak wavelengths of 3 µm (7F4→7F6), 5 µm (7F5→7F6) and 7.9 µm (7F4→7F5). The KPb2Br5:Tb laser crystals will be highly useful for standoff detection of incoming chemical and biological threats using unique infrared absorption signatures.

Keywords: KPb2Br5:Tb; mid-IR laser sources; standoff detection.

1. Introduction

KPb2Br5:Tb is a potential candidate for a diode-pumped, multi-line mid-IR laser source in an important atmospheric transmission window (3-5 µm and 8-14 µm). In order to activate the laser transitions of Tb3+ in the mid-IR region, a low-phonon-energy host material is required.


KPb2Br5 (KPB) is a good candidate for the host crystal due to its low phonon energy (134 cm-1) and the possibility of Tb3+ doping on the divalent Pb sites.1 With the low lattice phonon energy and the possibility of high doping concentrations (2×1020 ions/cm3) of terbium ions, which have large cross sections and excellent energy storage lifetimes, Tb3+-doped KPb2Br5 is a very promising mid-IR laser material which could greatly improve current remote sensing technology. The KPb2Br5:Tb crystal is moisture-insensitive, robust and chemically very stable at or above room temperature,2 enabling large-scale mid-IR laser sources. The KPb2Br5:Tb crystal is an excellent candidate for compact, highly efficient mid-infrared lasers, which can be used for detecting incoming chemical and biological threats from a distance using unique infrared absorption signatures. Other promising applications include new portable and easily deployable mid-IR laser sources, infrared countermeasures, infrared optical components and imaging, thermal scene illumination, atmospheric remote sensing in the vibrational fingerprint region (pollution control), laser machining, and IR spectroscopy in clinical and medical diagnostic analysis. We have grown large-volume single-crystal KPb2Br5:Tb by the vertical Bridgman technique using in-house processed, zone-refined PbBr2 and KBr with rare-earth Tb3+ doping, and have characterized the crystals by physical, optical and spectroscopic methods.

2. Experimental

2.1. Purification of precursor materials

In order to grow high-quality KPb2Br5:Tb single crystals, commercially available precursor materials were purified by zone refining (ZR) techniques, and single crystals were grown from the in-house purified materials using a modified Bridgman method. The purification step is very important because the performance of KPb2Br5 as a host crystal is significantly degraded by the impurity centers of the commercially available starting materials. The precursor KBr and PbBr2 powders were purchased from Aldrich and Alfa Aesar, respectively, with a stated purity of 99.999% (5N). The purification of KBr involved 32 passes of the ZR process. The purified KBr was characterized by x-ray diffraction (XRD), and the impurity analysis was performed with glow discharge mass spectrometry (GDMS). Samples were tested for 72 elements from Li to Pb. The relative error associated with the GDMS technique is reported to be ~20%. The zone refining of PbBr2 involved 36 passes, and the purified PbBr2 material was also characterized by XRD and GDMS.

2.2. Synthesis and crystal growth

The synthesis and crystal growth of KPb2Br5:Tb were performed under argon (Ar) overpressure or bromine (Br2) overpressure. A quartz ampoule (15 mm ID, 18 mm OD) was filled with a 1:2 molar ratio of purified KBr and PbBr2 with 2.5 mole% or 5.2 mole% TbBr3 (Aldrich, 99.99%). Both synthesis and growth were carried out in an evacuated
(2×10-6 Torr) high-purity quartz ampoule with three successive argon gas purges (4×10-2 Torr). Finally, the ampoules were sealed under an Ar overpressure of 4×10-2 Torr or a Br2 overpressure of 1×10-2 Torr. During the growth of KPb2Br5:Tb crystals, there is a possibility that the precursor materials decompose and Br2 out-diffuses, resulting in a nonstoichiometric (Br-deficient) KPb2Br5 crystal with very high defect densities. To prevent the out-diffusion of Br2, crystal growth operations were carried out under both Br2 overpressure and Ar overpressure for comparison. The crystals were synthesized at 435°C and the crystal growth was conducted using a Mellen three-zone crystal growth unit. All the zones of the furnace are independently computer controlled using Mellen temperature programmer/controllers, and the complete thermal profile during growth is monitored using an Adept 2000 (Adept Technology) software program. The synthesized material was heated slowly at a rate of 10°C/hr up to 410°C. The upper zone was maintained at 410°C and the lower zone at 398°C. A schematic of the vertical Bridgman growth furnace, along with the temperature distribution in the growth chamber, is shown in Fig. 1(a). A state-of-the-art computer model, Multizone Adaptive Scheme for TRAnsport and Phase changes Processes (MASTRAP), is used to model heat and mass transfer in the Bridgman growth system and to predict the stress distribution in the as-grown crystal. The model accounts for heat transfer in the multiphase system, convection in the melt, and interface dynamics.3 The stream function and temperature distribution at different rotation rates were calculated as shown in Fig. 1(b)-(c). The optimum ampoule rotation rate was 10 rotations/hr. The ampoule was lowered at a rate of 1 mm/hr. The crystal was cooled in the following three steps: 410°C to 280°C at a rate of 1.8°C/hr, 280°C to 230°C at a rate of 0.4°C/hr (to avoid any solid-solid phase transition), and 230°C to 25°C at a rate of 1.6°C/hr. It was observed that slow cooling and slow rotation, i.e., maintaining a uniform temperature distribution along the center line of the growth axis, were highly suited to avoiding crystal cracking. Nevertheless, we have observed a few cracks around the whole crystal ingot length of ~10 cm (ID 1.5 cm).

2.3. Crystal processing and characterizations

The crystal growth ampoule was cut and the crystal ingot was taken out of the ampoule without any sticking. Various sizes of KPb2Br5:Tb crystals were cut from the grown ingot using a wire saw (South Bay Technology). Lapping and polishing operations were performed using fine alumina powder down to 0.3 µm. Final polishing was performed using n-heptane (Aldrich) soaked fine polishing films (0.3 µm, FIS Inc., NY). A few wafers of various sizes were specially polished by Optical Unlimited Company (CT, USA), and those samples were tested at LLNL for laser performance. Various samples of dimensions 6×4×2 mm3 to 8×4×3 mm3 were cut from the Bridgman-grown ingot for scanning electron microscopy (SEM). To remove the surface roughness caused by the sawing and grinding, the samples were etched with 2-3% Br2-methanol for 1 minute. The samples were then washed thoroughly with isopropanol and finally polished with
n-heptane soaked Kimwipes. After successful polishing and etching, the crystal surfaces were examined by SEM (Cambridge Instruments, Stereoscan 120). SEM was carried out to derive information about the imperfections or defects that occurred during crystal growth and during surface processing of the KPb2Br5:Tb crystals. Energy dispersive analysis by x-rays (EDAX) was carried out to determine the stoichiometry of the grown KPb2Br5:Tb crystal. An EDS 2000 (Model 500 analyzer) was used for the investigation. The KPb2Br5:Tb specimen was mounted on a stage which could be shifted or rotated and could be continuously viewed through the SEM in order to select a particular region for analysis. The samples were scanned in five different regions and the average intensity values were taken. XRD analysis was conducted on the grown crystals, and the crystal structure and lattice parameters were determined.


Fig. 1. (a) Schematic of the EIC Bridgman growth system and temperature distribution in the growth chamber. Stream function (left) and temperature distribution (right) at different rotation rates (b) 1 rph (c) 10 rph. (d) Von Mises stress distributions in the growing KPB crystal due to wall contact (left) and due to thermal stress only (right).

Differential scanning calorimetry (DSC) using a Perkin Elmer Pyris 1 DSC thermal analysis system was conducted to determine the phase transition temperature of the grown KPb2Br5:Tb crystals. A small piece (~10.5 mg) was crushed into powder, loaded onto a gold-coated stainless steel pan and hermetically sealed; an empty pan was used as reference. The sample was cooled from 25°C to 0°C at a rate of 10°C/min and stabilized at 0°C for two minutes. Then it was heated at the same rate to 300°C, well above the phase transition temperature. The cell was then stabilized for two minutes and brought back to 25°C. X-ray photoelectron spectroscopy (XPS) was used to examine the stoichiometry and various impurity concentrations at the KPb2Br5:Tb (2.5 mole%) crystal surfaces as well as within the bulk. The XPS was conducted using a Vacuum Generator ESCALAB MKII equipped with an XPS/Auger spectrometer with a MgKα source (E = 1253.6 eV) at 300 W power and 15 kV. Physical Electronics software
(version 6) was used for data acquisition and analysis. The work function of the spectrometer was calibrated on the basis of the Au 4f7/2 peak energy of 83.8 eV, and the reported binding energies are referenced to C (1s) at 284.6 eV. All the binding energies quoted here are within ±0.3 eV. Emission lifetime decays of the grown KPb2Br5:Tb crystals were measured at Lawrence Livermore National Laboratory (LLNL) by pumping into the 7F2 level using a pulsed Co:MgF2 laser (pulse width ~50 µs, 1 Hz) tuned to 2.0 µm and focused onto the grown KPb2Br5:Tb crystals. The resulting fluorescence was captured by a liquid-nitrogen-cooled HgCdTe (MCT) detector through a preamplifier. The different transitions were distinguished by using appropriate long-pass and short-pass filters (LP 4.0 µm and SP 5.1 µm filters).

Table 1. Impurity analysis by GDMS for (a) KBr and (b) PbBr2. Concentrations are in ppmw.

(a) GDMS analysis of KBr
                        Cl      Na      Pb      S       I
starting KBr            155     18      3.8     3.2     3.1
zone-refined KBr        2.8     0.12    0.15    0.014   0.05

(b) GDMS analysis of PbBr2
                        K       Cl      Ag      Na      Si
starting PbBr2          55      38      35      31.5    20
zone-refined PbBr2      1.05    1.01    0.02    0.14    1.2

3. Results and discussion

After 32 passes of the ZR process, the KBr appeared bright, highly transparent, and shiny. The XRD pattern indicated that the ZR KBr is polycrystalline, and the diffraction peaks corresponded very well to the standard XRD pattern of KBr with a cubic structure (a = 6.5924 Å). No other peaks due to impurities or other phases were observed within the sensitivity of the instrument (0.1%). The (200), (220), (222), (400), (420), and (422) peaks were found to have the highest intensities, as expected. The impurity analysis of the ZR KBr was conducted and the result is summarized in Table 1(a). The results confirmed substantial removal of impurities after zone purification. The zone refining of PbBr2 involved 36 passes, and the PbBr2 material appeared bright, highly transparent, and shiny. The XRD pattern indicated that the ZR PbBr2 is polycrystalline and the observed diffraction peaks corresponded very well to the standard x-ray pattern for PbBr2 with an orthorhombic structure (a = 8.058, b = 9.526, c = 4.687 Å). The impurity analysis result is summarized in Table 1(b).

A scanning electron micrograph of a grown KPb2Br5:Tb crystal is shown in Fig. 2(a). The surfaces were shiny and smooth, as shown in the inset of Fig. 2(b), and no visible microcracks, twin boundaries, dislocations or etch pits were observed under SEM.

Fig. 2. (a) SEM picture of the surface of a processed KPb2Br5:Tb (5.2 mole% TbBr3) crystal. (b) A typical EDAX spectrum of a KPb2Br5:Tb crystal grown under Br2-overpressure. (Inset: a picture of a processed crystal.)

The EDAX analysis results showed that the crystals grown under Br2 overpressure

were highly stoichiometric (1:2:5) without any bromine deficiency, as shown in Fig. 2(b). 6N purity PbBr2 and KBr and 4N purity TbBr3 samples were used as standards. The XRD pattern shown in Fig. 3(a) corresponded very well to the standard x-ray diffraction pattern for single-crystalline KPb2Br5 with an orthorhombic structure. The peak at 2θ = 31.77° corresponded to the (222) orientation with 100% intensity. However, a very small peak at 2θ = 22.93° corresponding to the (211) orientation appeared. The appearance of this small peak might be due to a new phase transformation generated by thermal stress from the quartz ampoule wall. Thermal stresses caused by the non-uniform temperature distribution, as well as the constraint from the ampoule wall, can cause plastic deformation in the growing crystal, giving rise to the multiplication of existing dislocations. If an ingot is grown in contact with the ampoule wall, either compression or traction is exerted on the ingot periphery due to the mismatch of the coefficients of thermal expansion (CTE) of the crystal and the ampoule. Fig. 1(d) shows numerical calculation results for the magnitude of the von Mises stress in a crystal grown in contact with the ampoule wall. Notably, high stress concentrates along the crystal peripheral surface near the interface. When the ingot is not in contact with the wall of the ampoule, the stress in the grown ingot is minimal. A typical DSC thermogram of a KPb2Br5:Tb (5.2 mole% TbBr3) crystal is shown in Fig. 3(b). From this result, the phase transition temperature is clearly identified during the heating and cooling cycles. During the cooling cycle, the measured value of Tph (242.6°C) agrees very well with the reported results of Nitsch et al.2 During the heating cycle, we obtained a slightly higher value of 249.1°C. Nitsch et al. reported a Tph of 242°C; however, they did not mention the experimental conditions used, particularly the rate of the heating or cooling cycle. It is possible that the variation of the oxygen content of their grown crystals is responsible for the differences, or that the impurities present may bond in the KPb2Br5 crystal in different ways, giving rise to inherent structural differences. It might also be possible that the previous thermal history of the measured crystal was an important factor.


Fig. 3. (a) XRD pattern of Bridgman grown KPb2Br5:Tb (2.5 mole% TbBr3) crystal and (b) A typical DSC thermogram on KPb2Br5:Tb (5.2 mole% TbBr3) crystal.

From the XPS survey scans on KPb2Br5:Tb (2.5 mole%) shown in Fig. 4, with and without Ar etching, all peaks present are identified and attributed to Pb, K, Br, Tb, O, C, and Cl. The presence of C and O on the sample surface may arise from surface contamination due to air exposure. The intensity of the C and O impurities decreased significantly after Ar etching. Fig. 5(a) shows that the observed binding energies of Pb 4f7/2 and 4f5/2 are 137.9 and 142.7 eV, respectively, which correspond very well with the binding energy of Pb2+ in the KPb2Br5 crystal. The low-energy shoulders at 136.8 eV (4f7/2) and 141.9 eV (4f5/2) correspond to atomic Pb. Ar-ion etching for 5 and 15 minutes (5.0 kV, sputtering rate 2.8 nm/min for Pb) confirms a uniform distribution of Pb2+ within the crystal surface to a depth of up to 42 nm. The low-energy shoulders corresponding to atomic Pb decreased significantly after 15 min of Ar-ion etching. Core-level spectra of the Tb (4d) levels in the KPb2Br5:Tb crystal with (15 minutes) and without Ar-etching are shown in Fig. 5(b). The spectra clearly show the broadening of the Tb (4d) levels in the higher-energy region without Ar-etching and the multiplet structures after Ar-etching. The Tb binding energy level is clearly characterized by the sharpening of the observed peak at 146.9 eV.4,5

The maximum phonon energy of the KPb2Br5 crystal was determined by Raman scattering experiments, and the peak value was found to be 134 cm-1. This will minimize the nonradiative decay due to multiphonon interactions, permitting lasing on the longer-wavelength side of the mid-IR region out to 8 µm. The absorption spectrum of a KPb2Br5:Tb (2.5 mole%) crystal measured at LLNL is shown in Fig. 6(a). Terbium ions have several mid-IR transitions that can be directly pumped with 1.8 µm-4.5 µm lasers. The spectrum in Fig. 6(a) displays peaks assigned to transitions of the Tb3+ ions [Fig. 6(b)]. The absorption is influenced by a long tail originating from the band edge of the shorter-wavelength transitions, and by small peaks possibly due to water or NH4+ ion contaminants. The spectrum clearly shows a strong absorption band at 3.0 µm (7F4) which can be pumped to lase out to ~8 µm. It also shows that KPb2Br5:Tb potentially permits the pumping and lasing of mid-IR transitions, specifically 7F5→7F6 (~5 µm), 7F4→7F5 (~8 µm), and 7F3→7F4 (~10 µm).


Fig. 4. XPS survey scan for KPb2Br5:Tb (2.5 mole% TbBr3) crystal without and with (15 minutes) Ar-etching.

Fig. 5. (a) Photoemission spectra of the Pb (4f) level of the KPb2Br5:Tb (2.5 mole% TbBr3) crystal without (top) and with (bottom) Ar-ion etching (15 minutes), and (b) photoemission spectra of the Tb (4d) levels in the KPb2Br5:Tb (2.5 mole% TbBr3) crystal without (top) and with (bottom) Ar-ion etching (15 minutes).


Fig. 6. (a) Absorption spectrum of KPb2Br5:Tb (2.5 mole% TbBr3) crystal of thickness 4.0 mm measured at LLNL. (b) Trivalent terbium transitions in KPb2Br5:Tb crystals.

From the exponential decays of pulses from the 7F5 and 7F4 levels, the emission lifetimes were measured as shown in Fig. 7(a) and (b) and summarized in Table 2. The emission lifetime of the 7F4 level is much shorter than that of the 7F5 level because the energy gap is smaller. Lifetimes were found to be longer for the samples grown under Br2 overpressure, possibly showing the influence of the stoichiometry and the better crystal quality. These measured lifetimes are quite encouraging, since the 7F4 lifetime is adequate to obtain lasing at 8 µm.
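The lifetimes in Table 2 come from exponential fits to decay traces such as those in Fig. 7; a generic single-exponential fit of that kind might look as follows (a sketch, since the authors' exact fitting procedure is not described):

import numpy as np
from scipy.optimize import curve_fit

def fit_lifetime(t_ms, intensity):
    """Single-exponential fit I(t) = A*exp(-t/tau) + B to a normalized
    fluorescence decay trace (hypothetical array names).  t_ms: time axis
    in ms, intensity: normalized detector signal."""
    model = lambda t, a, tau, b: a * np.exp(-t / tau) + b
    # fit only the region after the ~50 microsecond pump pulse
    mask = t_ms > 0.05
    (a, tau, b), _ = curve_fit(model, t_ms[mask], intensity[mask],
                               p0=(1.0, 1.0, 0.0))
    return tau   # lifetime in ms (e.g. ~18 ms for 7F5, ~0.36 ms for 7F4)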


Fig. 7. Emission lifetimes measured at LLNL on EIC grown KPb2Br5:Tb crystals. Top: 7F5→7F6 emission (LP 4 µm, SP 5.1 µm filters); Bottom: 7F4→7F5 emission (LP 6 µm, SP 8 µm filters)

Table 2. Decay lifetimes of the 7F5 and 7F4 levels for crystals grown under Ar overpressure and Br2 overpressure.

                                      Ar-overpressure    Br2-overpressure
Decay lifetime of the 7F5 level       4 ms               18 ms
Decay lifetime of the 7F4 level       193 µs             360 µs


4. Conclusions

Single crystals of KPb2Br5:Tb have been grown by the vertical Bridgman method from EIC's zone-refined, ultrahigh-purity precursor materials. The crystals have been characterized by various methods, starting from the purification step. The KPb2Br5:Tb single crystals grown under Br2 overpressure showed very good stoichiometry with excellent optical quality and thermal stability. The XPS results have shown the presence of Pb, K, Br, Tb, O, C, and Cl, and the binding energies of the Pb (4f) and Tb (4d) levels in the KPb2Br5:Tb crystals have also been characterized. Optical absorption measurements showed that the KPb2Br5:Tb crystal potentially permits pumping and lasing on the longer-wavelength side of the mid-IR region: 7F5→7F6 (~5 µm), 7F4→7F5 (~8 µm), and 7F3→7F4 (~10 µm). From the emission decay measurements, it was observed that the 7F4 lifetime is suitable for lasing at 8 µm and that the lifetimes were longer for the samples grown under Br2 overpressure. The KPb2Br5:Tb crystals are very promising candidates for new portable and easily deployable mid-IR laser sources and for atmospheric sensing applications in the vibrational fingerprint region.

Acknowledgments

This research was supported by US ARO (C-W911NF-05-C-0039) and DOE (DE-FG0202ER83403). The authors would like to thank Prof. Arnold Burger of Fisk University, Prof. Hui Zhang of the State University of New York at Stony Brook and Dr. Stephen Payne of Lawrence Livermore National Laboratory.

References

1. U.N. Roy, R.H. Hawrami, Y. Cui, S. Morgan, A. Burger, Krishna C. Mandal, Caleb C. Noblitt, K. Rademaker, and S.A. Payne, Tb3+-doped KPb2Br5: Low-energy phonon mid-infrared laser crystal, Appl. Phys. Lett. 86, 151911-1-3 (2005).
2. K. Nitsch, V. Hamplova, M. Nikl, K. Polak and M. Rodova, Lead bromide and ternary alkali lead bromide single crystals - growth and emission properties, Chem. Phys. Lett. 258, 518-522 (1996).
3. Krishna C. Mandal, Sung H. Kang, Michael Choi, Job Bello, Lili Zheng, Hui Zhang, Michael Groza, Utpal N. Roy, Arnold Burger, Gerald E. Jellison, David E. Holcomb, Gomez W. Wright, and Joseph A. Williams, J. Electron. Mater. 35, 1251-1256 (2006).
4. D. Majumdar and T. K. Hatwar, Effects of Pt and Zr on the oxidation behaviour of FeTbCo magneto-optic films: X-ray photoelectron spectroscopy, J. Vac. Sci. Technol. A 7, 2673-2677 (1989).
5. W. F. Stickle and D. Coulman, An investigation of the chemistry of the dielectric/FeCoTb interface by X-ray photoelectron spectroscopy and Auger electron spectroscopy, J. Vac. Sci. Technol. A 5, 1128-1131 (1987).


International Journal of High Speed Electronics and Systems Vol. 18, No. 3 (2009) 747–758 © World Scientific Publishing Company

3D DECONVOLUTION OF VIBRATION CORRUPTED HYPERSPECTRAL IMAGES A. H. WEBSTER M.R. DAVENPORT [email protected] [email protected] MacDonald Dettwiler and Associates, 13800 Commerce Pkwy, Richmond, BC V6V 2J3, Canada J.-P. ARDOUIN [email protected] Defence Research and Development Canada, Valcartier 2459 Pie-XI Blvd North, Québec, Québec, G3J 1X5, Canada

We have developed a hyperspectral deconvolution algorithm that sharpens the spectral dimension in addition to the more usual across-track and along-track dimensions. Using an individual three-dimensional model for each pixel’s point spread function, the algorithm iteratively applies maximum likelihood criteria to reveal previously hidden features in the spatial and spectral dimensions. Of necessity, our solution is adaptive to unreported across-track and along-track vibrations with amplitudes smaller than the ground sampling distance. We sense and correct these vibrations using a combination of maximum likelihood deconvolution and gradient descent registration that maximizes statistical correlations over many bands. Test panels in real hyperspectral imagery show significant improvement when locations are corrected. Tests on simulated imagery show that the precision of relative corrected positions improves by about a factor of two. Keywords: Hyperspectral; Vibration Correction; Sharpening; Maximum Likelihood; Remote Sensing.

1. Introduction and Objectives

Remote-sensing imaging spectrometers (“hyperspectral cameras”) convolve spatial and spectral information from a scene with the camera’s optical response and movement-induced blurring. Traditional hyperspectral analysis focuses on only the spectral characteristics of each pixel, without much thought about how that spectrum was spatially distributed on the ground.1, 2 Moreover, there is justifiable suspicion of any resampling process that might pollute the recorded spectrum of one pixel with data from neighboring pixels.3 This paper describes part of our ongoing work to extract, from a hyperspectral image, detailed knowledge of how spectral information maps onto specific spatial locations. We do this both by deconvolving individual bands, and by fusing information from multiple bands or even multiple images. The end result is a hyperspectral image with higher
resolution than the original, which may reveal sub-pixel targets that were previously lost in the noise.
Our approach begins with a detailed model of the imaging process, in the form of a set of point spread functions (PSFs) for all the pixels. Deconvolution is particularly concerned with the overlaps between neighboring pixels, and in a hyperspectral image such overlaps occur both spatially and spectrally, so these PSFs need to be fully three-dimensional – in x, y, and λ. Interestingly, this opened up the possibility of recovering both spatial and spectral features in a scene.
Our initial approach was to deconvolve in the abundance domain, and then fuse the high-resolution abundance image with all the spectral bands. The abundance image, we reasoned, offers the following advantages:
• it contains a statistically optimal interpretation of what all the bands mean,
• it has fewer “bands” than the full scene and can thus be processed faster, and
• it has a built-in add-to-one constraint that reduces ringing in the scene.
This worked very well on synthetic data, where we could make the PSF identical in all bands. Unfortunately the PSF varies significantly from band to band in some hyperspectral images. This may be due to spatial misalignments (due e.g. to keystoning) between the bands, or due to a changing PSF footprint (due e.g. to the optical design or photon diffusion in the detector array4). Whatever the cause, it then becomes impossible to assign an accurate PSF to each pixel of the abundance image. Such varied PSFs also meant that we could not deconvolve on a band-reduced principal components image, as some have done.5, 6
Our approach, therefore, was to deconvolve all the hyperspectral bands at once, based on the precise shape and location of their overlapping PSFs. We added a second feedback loop that estimated high-resolution abundances from the deconvolved image, and encouraged the emerging radiance estimates to be consistent with the apparent abundances.
Deconvolution was made much more difficult by the presence, in one of our test scenes, of unreported vibrations in both the cross-track and along-track directions. Deconvolution and super-resolution algorithms are highly sensitive to the accuracy of low-resolution pixel locations, because those locations are mapped onto a much finer grid than the original image. We used a gradient descent search, on a selected subset of bands, to find the instantaneous vibration displacement of each scanned line. This greatly improved the accuracy of the deconvolution results, as discussed below. We believe this same vibration correction algorithm could be used on images with other types of vibration or mis-registration.

2. PSF estimation

Accurate deconvolution requires a reasonable estimate of the instrument point spread function (PSF) and its location. If the instrument PSF is unknown, elements estimated from the acquired images can be used to estimate the overall PSF.7 Table 1 lists common,
Table 1  Typical PSF components, their corresponding dimensions, and their shape

Component                            Dimensions   Equation / Determination
Focal Plane Array (FPA) Pixel Size   x-λ          Square function
Camera Optics                        x-y-λ        Can be approximated by an Airy Disk
Dispersive Optics                    λ            Grating dispersion in the case of an Offner spectrometer8
Photon Diffusion in FPA              x-λ          Approximated as a Gaussian
Slit Projection                      y-λ          Slit width coupled with sensor motion
Motion Blur                          x-y          Motion is interpolated using geolocations of pixel centers in the x-y plane

Each (x,y,λ) point in the data cube has a PSF associated with it. Figure 1, for example, plots the 50% contour surface of an example PSF. Each PSF consists of a set of weights on a grid that matches the spacing of the desired high-resolution image. We implemented wavelength-dependent variations in the PSF by making five example prototypes that spanned the instrument wavelengths, and interpolating specific PSFs for other wavelengths from those prototypes.

Figure 1  Contour of an Example Three-Dimensional Instrument Point Spread Function. The contour shows where the PSF is 50% of maximum, plotted against the along-track, across-track, and wavelength axes. The nominal location of any PSF may be offset from its geometric center, if this band is offset from other bands of this pixel.
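As an illustration of the prototype-interpolation step described before Figure 1, the sketch below blends the two prototype PSFs nearest to a requested wavelength. The linear blend, the renormalization, and all names are illustrative assumptions; the paper does not state which interpolation scheme is used.

```python
import numpy as np

def interpolate_psf(wavelength, proto_wavelengths, proto_psfs):
    """Estimate a PSF at an arbitrary wavelength from prototype PSFs.

    proto_wavelengths : 1D array (increasing) of the wavelengths at which the
                        prototype PSFs were built (five in the paper).
    proto_psfs        : array of shape (n_protos, nx, ny, nl) holding each
                        prototype's weights on the high-resolution grid.
    """
    w = np.clip(wavelength, proto_wavelengths[0], proto_wavelengths[-1])
    i = np.searchsorted(proto_wavelengths, w)       # index of the upper prototype
    if i == 0:
        psf = proto_psfs[0].copy()
    else:
        lo, hi = proto_wavelengths[i - 1], proto_wavelengths[i]
        t = (w - lo) / (hi - lo)                    # blending fraction in [0, 1]
        psf = (1.0 - t) * proto_psfs[i - 1] + t * proto_psfs[i]
    return psf / psf.sum()                          # keep unit total weight
```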

3. Methodology

The deconvolution algorithm works directly on the radiance images, and requires the following inputs:

• all sensor and platform parameters needed to define the shape of the PSF,
• the center (x, y) spatial location and (λ) spectral location for each PSF, and
• Regions of Interest (ROIs) representing all end-members, so that high-resolution abundances can be extracted to guide the deconvolution.

The current vibration-correction algorithm further requires that the sensor is push-broom, because it assumes that each vibration offset applies to a whole line of data.
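One possible way to organize these inputs is sketched below; the container name, field names, and array shapes are illustrative assumptions, not the authors' data structures.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DeconvolutionInputs:
    """Illustrative container for the inputs listed above."""
    psf_prototypes: np.ndarray    # (n_protos, nx, ny, nl) PSF weights on the hi-res grid
    psf_wavelengths: np.ndarray   # (n_protos,) wavelength of each prototype PSF
    pixel_centers_xy: np.ndarray  # (lines, samples, 2) spatial (x, y) center of each PSF
    band_centers: np.ndarray      # (bands,) spectral (lambda) center of each PSF
    endmember_rois: dict          # end-member name -> boolean mask of ROI pixels
```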

524

A. H. Webster, M. R. Davenport & J.-P. Ardouin

3.1. Maximum Likelihood Deconvolution

Our iterative deconvolution method, shown schematically in Figure 2, is based on a maximum likelihood estimator.9 The process is as follows:

(i) a “Renderer” uses the 3D PSF model to create a first estimate of the high-resolution (hi-res) image from the overlapped low-resolution (low-res) data;
(ii) an unmixing algorithm10 creates a hi-res abundance map from the hi-res image, to be used in (v) below;
(iii) a camera model down-samples this hi-res image to show what the low-res image would be like if the hi-res image were exactly correct;
(iv) the algorithm calculates a “low-res error map” by subtracting the true original image from this estimated low-res image;
(v) a maximum likelihood estimator, with guidance from the hi-res abundance map, translates the low-res error map into a hi-res map of recommended changes;
(vi) the “hi-res radiance adjustor” applies these recommended changes;
(vii) steps (ii)–(vi) are repeated until the overall low-res error reaches an acceptably low level, or stops improving.

Because deconvolution solutions are not unique – many different high-resolution images could be consistent with a given low-res image – some form of constraint is typically used to regularize the solution. Our algorithm relies on the abundance feedback in step (v) to provide that constraint. This feedback provides an abundance fraction for each pixel, and flags as anomalous those pixels that do not fit within the statistical distribution of an endmember. For non-anomalous pixels, the hi-res abundances are used to help push the radiances to be consistent with the mean spectra of a pixel with the given abundance fraction.
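The sketch below shows the shape of this loop under strong simplifications: a single band, one shift-invariant blur kernel standing in for the full 3D PSF model, and a plain back-projection update in place of the abundance-guided maximum likelihood step (the abundance feedback of steps (ii) and (v) is omitted). All names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

def deconvolve_band(low_res, psf, scale=4, n_iter=50, step=1.0):
    """Iteratively estimate a hi-res band consistent with a low-res band.

    low_res : 2D array of measured low-resolution radiances
    psf     : 2D blur kernel on the hi-res grid (assumed symmetric, sums to 1)
    scale   : integer ratio of hi-res to low-res sampling
    """
    # (i) initial hi-res estimate: replicate each low-res pixel
    hi_res = np.kron(low_res, np.ones((scale, scale)))
    for _ in range(n_iter):
        # (iii) camera model: blur with the PSF, then sample back to low-res
        blurred = convolve(hi_res, psf, mode="nearest")
        predicted = blurred[scale // 2::scale, scale // 2::scale]
        # (iv) low-res error map
        error = low_res - predicted
        # (v)-(vi) spread each low-res error over its hi-res footprint and adjust
        hi_err = np.kron(error, np.ones((scale, scale)))
        hi_res += step * convolve(hi_err, psf, mode="nearest")
    return hi_res
```

In the full algorithm, the update in the last step would additionally be weighted by the abundance feedback, pushing non-anomalous pixels toward the mean spectra of their estimated end-members.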

Figure 2  Concept of Operation of the Deconvolution Algorithm. Deconvolution is an iterative update of the high-resolution radiances in response to discrepancies between them and the low-resolution image data. The blocks in the diagram are the Renderer, the 3D camera model, the maximum likelihood estimator, the hi-res radiance adjustor, and the adaptive spectral unmixing step; the error is computed for each low-res pixel of each band by comparing the original low-res radiances with the reconstructed low-res pixel.


3.2. Vibration Correction Capability

Figure 3 illustrates a severe vibration problem that we encountered in an important set of trial imagery. The source of this vibration is not clearly understood at this time. At first sight this posed an insurmountable obstacle to deconvolution, because there was no accurate data on the locations of the pixels. We decided, however, to extend our algorithm to automatically detect and adapt to the vibrations.

The updated algorithm is shown in Figure 4. The camera model begins with “nominal” pixel geolocations, corrected for platform motion as documented by the instrument sensors. We use the low-res error and the gradient of the PSF to determine how to shift the nominal cross-track and along-track locations, and we also redistribute the radiance values as indicated by the maximum likelihood estimator, to reduce the error. A gradient-descent “auto-registration” algorithm then iteratively corrects the positions and the hi-res radiances.

Figure 3  Sub-pixel Along-track and Cross-track Sensor Vibrations in Hyperspectral Images. Panels (a) and (c) show ground truth for two test panels, whose images are shown in (b) and (d) respectively. The fan target is designed such that the stripe width is 0.5 m at one end and 2.0 m at the other. Cross-track vibrations cause straight vertical edges to show up as wavy lines. Along-track vibrations occur throughout the scene, but are most obvious at the leading and trailing edges of the panels, where panel pixels seem to be cut adrift from the main panel.


Our assumption here is that, when averaged over a whole push-broom line, radiance errors will tend to average to zero, whereas pixel misalignment errors will tend to accumulate.

The addition of the auto-registration feedback slows the algorithm significantly, so we divided it into two stages:

• Stage 1 includes the auto-registration feedback, as shown in Figure 4, but accepts only a small number of bands (e.g. 3 to 5) as input and ignores spectral overlaps;
• Stage 2 uses the most precise registration information available (from Stage 1 if necessary), does not include auto-registration (as shown in Figure 2), accepts all spectral bands as input, and includes spectral overlaps. Unlike Stage 1, it is not limited to data from push-broom sensors.

Stage 1 is able to ignore spectral overlaps because our image has no vibrations in the spectral direction. We selected bands for Stage 1 that collectively provided the most spatial information about the scene, which basically meant that they had good contrast and were all very different from each other. The Stage 1 algorithm auto-registered each band individually and averaged the spatial results over the selected bands. In this stage we are not seeking the most accurate radiance values, and therefore do not need the added computational time of an unmixing feedback loop, as we do during deconvolution.

The gradient-descent auto-registration was complicated by the tendency for shifts in one image line to ripple iteratively through neighboring image lines. To minimize this, we shift lines in random rather than sequential order, and iterate through all lines until the overall error plateaus, allowing positions to settle into their most likely locations. In our experience, this typically occurs after approximately 10 iterations.
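A minimal sketch of this Stage 1 line-shift search is given below. It estimates one cross-track shift per line of one band, visiting lines in random order and sweeping until the shifts settle; a coarse search over candidate shifts stands in for the paper's gradient-descent step, and the camera model is abstracted as a user-supplied callable. All names are illustrative assumptions.

```python
import numpy as np

def register_lines(band, render_line, n_sweeps=10,
                   candidates=np.linspace(-0.5, 0.5, 21)):
    """Estimate a per-line cross-track shift (in pixels) for one band.

    band        : 2D array, one push-broom band (lines x samples)
    render_line : callable(line_index, shift) -> predicted low-res line,
                  i.e. the camera model evaluated with the trial shift
    """
    n_lines = band.shape[0]
    shifts = np.zeros(n_lines)
    rng = np.random.default_rng(0)
    for _ in range(n_sweeps):                 # iterate until the error plateaus
        for i in rng.permutation(n_lines):    # random, not sequential, order
            errors = [np.sum((band[i] - render_line(i, s)) ** 2)
                      for s in candidates]
            shifts[i] = candidates[int(np.argmin(errors))]
    return shifts
```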

Figure 4  Concept of Operation of the Vibration Correction Algorithm. Auto-registration is achieved by finding spatial shifts that reduce the low-resolution errors when averaged over a whole line of the push-broom image. Relative to Figure 2, the diagram adds an auto-registration block that updates the low-res pixel locations and PSFs fed to the (2D) camera model.


It is important to note that the nature of this algorithm requires variance in the terrain for accurate correction. In the limiting case of an entirely uniform image, we could shift the lines by any amount with no noticeable effect on the error.

4. Experimental results

The following sections discuss our experimental results for vibration correction and deconvolution processing.

4.1. Vibration Correction

We validated and characterized the vibration correction algorithm using synthetic data for which the correct vibration was known. We made the synthetic image by resampling a real high-resolution image using the sensor’s PSF and realistic “nominal” locations of each line, and by adding enough vibration to make the resulting images appear similar to the real data. The test locations provided to the vibration correction algorithm did not include the vibrations. We used two measurements to quantify performance:

• the absolute difference between the true locations and the corrected locations, and
• the true distance between neighboring pixels, compared to the same distance in the corrected imagery.
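The paper does not give explicit formulas for these two measurements; the sketch below is our reading of them, with locations expressed in units of GSD so the numbers are comparable to those quoted in the next paragraph.

```python
import numpy as np

def registration_metrics(true_xy, corrected_xy):
    """Mean location error and neighbor-displacement error, in GSD.

    true_xy, corrected_xy : arrays of shape (n, 2) giving the (x, y) location
    of each line's center pixel, scaled so that 1.0 equals one GSD.
    """
    # mean absolute distance between true and corrected locations
    loc_err = np.mean(np.linalg.norm(true_xy - corrected_xy, axis=1))
    # error in the displacement from each pixel to its along-track neighbor
    true_step = np.diff(true_xy, axis=0)
    corr_step = np.diff(corrected_xy, axis=0)
    disp_err = np.mean(np.linalg.norm(true_step - corr_step, axis=1))
    return loc_err, disp_err
```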

Figure 5  Validation of the Auto-Registration Using Synthetic Data. The auto-registration algorithm did a good job of estimating the vibration in this synthetic image, particularly the relative locations of sequential pixels. The straight line up the center shows the “nominal” track of the center pixel; the true locations are shown in dark gray, and the estimated locations in gray.


Figure 5 shows example validation results. Before correction, the pixels were on average 0.3 GSD (ground sampling distance) from their true locations, and the average error in displacement to neighboring pixels was 0.197 GSD. After correction, the pixels were, on average, 0.2 GSD from their true locations, and the average displacement error to neighbors was reduced to 0.103 GSD. This factor-of-two improvement in displacement-to-neighbors was observed for all headings and resolutions tested, except when the scene was too uniform and blurry, in which case the displacement-to-neighbor error was reduced only to about 0.15 GSD. For our deconvolution algorithm, it is the relative positions that matter most, and this improvement in position yields better deconvolution results.

Figure 6(a) shows what the vibration-corrupted scene from Figure 3(b) would look like if geocoded using just the instrument information and plotted using maximum likelihood resampling, and Figure 6(b) shows the estimated motion of the center pixel. The nominal GSD of this image is 1 m (along-track) and 0.5 m (cross-track). A more important test of the auto-registration is how much it improves the deconvolved image, as discussed in the following paragraphs.

Figure 6  Estimated Vibration in a Real Image. A geocoded version (a) of the “Fan Panel” target in Figure 3(b) shows blurry vibration effects and loss of resolution when vibrations are ignored. The plot in (b), with axes in meters, shows our algorithm’s estimate of the line-by-line location of the data due to this vibration. Note how the view frequently moves backwards relative to the nominal direction of motion (the platform heading).

4.2. Two-Dimensional Deconvolution in Real Data

Figure 7 shows the result of our deconvolution, both with and without vibration correction, for the fan panel and another target in the scene. When the data are deconvolved without correcting for vibrations, the vibration artifacts are amplified, as shown in Figures 7(a) and 7(c). The same data, deconvolved with the vibration included in the model, give much better results, as shown in Figures 7(b) and 7(d).

Figure 7  2D Deconvolution Improvement with Vibration Estimation. Deconvolution without vibration correction, panels (a) and (c), amplifies the effects of vibration on the image target panels. Deconvolution with vibration correction yields improved panels, (b) and (d).

4.3. Three-Dimensional Deconvolution

We have tested the full three-dimensional deconvolution using synthetic “low-res” hyperspectral data. These low-res images were created by convolving three-dimensional PSFs from a real sensor onto a “very-hi-res” (high spatial and spectral resolution) synthetic image, built using spectra from a real image. The low-res image was spatially four times coarser than the very-hi-res image, and spectrally twice as coarse. We then used the 3D deconvolution algorithm to create a “hi-res” hyperspectral image, and compared it to the very-hi-res image to validate the deconvolution algorithm.

We examined the results on the edges of large targets that cover several low-resolution pixels. For example, the target shown in the very-hi-res image in Figure 8(a) was used to make a low-res version (b), which after 3D deconvolution looked as shown in (c). Note, for example, that the deconvolution has revealed the “gun barrel” in (c).

To establish the success of the spectral component of deconvolution, we examine a single low-res pixel on the edge of the target, as marked in Figure 9(a), and the four corresponding hi-res pixels, as marked in Figure 9(b). Of these four pixels, one becomes pure background, one pure target, and two remain mixed; their spectra are shown in Figure 10. Figure 11 compares the two “pure” deconvolved pixels to the true hi-res image, showing that the target spectrum is well separated from the background. Several material-specific spectral features re-emerge in these hi-res pixels, as indicated in the figure.
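For concreteness, one way to generate such a synthetic low-res cube is sketched below; it assumes a single shift-invariant 3D PSF kernel (the paper uses per-pixel PSFs) and simple decimation, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def make_low_res(very_hi_res, psf3d, spatial=4, spectral=2):
    """Simulate a low-res hyperspectral cube from a very-hi-res one.

    very_hi_res : array indexed (y, x, lambda) on the fine grid
    psf3d       : 3D PSF kernel expressed on the same fine grid
    """
    blurred = convolve(very_hi_res, psf3d, mode="nearest")  # apply the 3D PSF
    # decimate: spatial sampling 4x coarser, spectral sampling 2x coarser
    return blurred[spatial // 2::spatial,
                   spatial // 2::spatial,
                   spectral // 2::spectral]
```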


Figure 8  Spatial Deconvolution Results. The target (a) in the very-high-resolution synthetic image is used as ground truth. The low-resolution image (b) was created by convolving (a) with realistic instrument PSFs. The deconvolved hi-res target (c) was extracted from (b) using the 3D deconvolution algorithm.

Figure 9  Quality of the Target Edge. Ideal deconvolution would extract a sharp edge from a mixed-pixel low-res edge such as the one marked in (a). Our algorithm has nicely created pure-target and pure-background hi-res pixels along the edge of the target in (b), as discussed in the text and as shown in Figures 10 and 11.

Figure 10  Spectra from the Deconvolved Pixel. These graphs show the spectra for the pixels marked in Figure 9(a) and (b), offset for clarity: the original low-res pixel, the deconvolved target and background parts of that pixel, and the mixed pixel remaining after deconvolution. Note the spectral features at (a) and (b) that emerge during deconvolution and can be compared to Figure 11.

Figure 11  Spectral Features from the Deconvolution. When the deconvolved “pure” pixels are compared to the very-high-res image, distinguishing spectral features appear at (a) and (b) that were not visible in the low-resolution mixed pixel shown in Figure 9. The curves show the deconvolved target and background pixels together with the original high-resolution target and background pixels; true spectra are plotted in the upper right corner, and the spectra in the lower left are offset for clarity.

5. Future work

In principle, the deconvolution algorithm should be able to accept multiple low-res images and thus achieve super-resolution. We are exploring this possibility by addressing the two major obstacles: achieving adequate spatial registration, and matching spectra between scenes taken at different times.

6. Acknowledgments

This research was funded in part by the Canadian Department of National Defence under the Defence Industrial Research (DIR) program.

References

1. J. A. Richards and X. Jia, Remote Sensing Digital Image Analysis (Springer, Germany, 1999).
2. N. Keshava and J. F. Mustard, Spectral unmixing, IEEE Signal Process. Mag., 19, no. 1, 44-57 (Jan. 2002).
3. F. C. Billingsley, Modeling Misregistration and Related Effects on Multispectral Classification, Phot. Eng. & Rem. Sens., 48, 421-430 (1982).


4. R. Widenhorn, A. Weber, M. M. Blouke, A. J. Bae, and E. Bodegom, PSF Measurements on Back-Illuminated CCDs, Proc. SPIE Electronic Imaging, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications IV, 5017, 176-184 (2003).
5. T. Akgun, Y. Altunbasak and R. M. Mersereau, Super-Resolution Reconstruction of Hyperspectral Images, IEEE Trans. Image Process., 14, no. 11, 1860-1875 (2005).
6. P. Chavez Jr., S. Sides, and J. Anderson, Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Phot. Eng. & Rem. Sens., 57, no. 3, 295-303 (1991).
7. E. H. Barney Smith, PSF estimation by gradient descent fit to the ESF, Proc. SPIE Electronic Imaging, Image Quality and System Performance III, 6059, paper 605914 (2006).
8. A. Karcher, C. J. Bebek, W. F. Kolbe, D. Maurath, V. Prasad, M. Uslenghi, and M. Wagner, Measurement of lateral charge diffusion in thick, fully depleted, back-illuminated CCDs, Nuclear Science Symposium Conference Record, 2003 IEEE, 3, 1513-1517 (2003).
9. W. K. Pratt, Digital Image Processing (Wiley, New York, 1991).
10. W. Ressl, M. Davenport, and J.-P. Ardouin, Comparison of pixel unmixing algorithms for subpixel target detection in hyperspectral imagery, Fourth Joint International Military Sensing Symposium, NATO RTO Sensors and Electronics Technology (SET) Panel, Paris (September 2000).