Springer Proceedings in Physics 232
H. Paul Urbach Qifeng Yu Editors
5th International Symposium of Space Optical Instruments and Applications Beijing, China, September 5–7, 2018
Springer Proceedings in Physics Volume 232
The series Springer Proceedings in Physics, founded in 1984, is devoted to timely reports of state-of-the-art developments in physics and related sciences. Typically based on material presented at conferences, workshops and similar scientific meetings, volumes published in this series will constitute a comprehensive up-to-date source of reference on a field or subfield of relevance in contemporary physics. Proposals must include the following:

– name, place and date of the scientific meeting
– a link to the committees (local organization, international advisors etc.)
– scientific description of the meeting
– list of invited/plenary speakers
– an estimate of the planned proceedings book parameters (number of pages/articles, requested number of bulk copies, submission deadline).
More information about this series at http://www.springer.com/series/361
Editors H. Paul Urbach Optics Research Group Delft University of Technology Delft, Zuid-Holland, The Netherlands
Qifeng Yu National University of Defense Technology Kaifu, Changsha, Hunan, China
ISSN 0930-8989          ISSN 1867-4941 (electronic)
Springer Proceedings in Physics
ISBN 978-3-030-27299-9          ISBN 978-3-030-27300-2 (eBook)
https://doi.org/10.1007/978-3-030-27300-2

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
The 5th International Symposium on Space Optical Instruments and Applications was successfully held in Beijing, China, on September 5–7, 2018. It was organized by the Sino-Holland Space Optical Instruments Joint Laboratory, supported by TU Delft. As in previous years, the joint lab continued its efforts to encourage international communication and cooperation in space optics and to promote innovation, research, and engineering development. The symposium focused on key innovations in space-based optical instruments and applications, and on the newest developments in optical theory, technology, and applications in both China and Europe. It is a very good platform for exchanges in the space optics field and for information on current and planned optical missions. There were about 230 attendees. The speakers came mainly from Chinese, Dutch, and other European universities, space institutes, and companies. The main topics included:

• Space optical remote sensing system design
• Advanced optical system design and manufacturing
• Remote sensor calibration and measurement
• Remote sensing data processing and information retrieval
• Remote sensing data applications
Delft, The Netherlands Changsha, China
H. Paul Urbach Qifeng Yu
Contents
Analysis on NETD of Thermal Infrared Imaging Spectrometer . . . . 1
  Jiacheng Zhu, Zhicheng Zhao, Shu Shen, Shujian Ding, and Weimin Shen

Demand Analysis of Optical Remote Sensing Satellites Under the Belt and Road Initiative . . . . 11
  Lingli Sun, Qiuyan Zhao, and Zhiqiang Shen

Design of an All-Spherical Catadioptric Telescope with Long Focal Length for High-Resolution CubeSat . . . . 21
  Li Liu, Li Chen, Zhenkun Wang, and Weimin Shen

Design of Lobster-Eye Focusing System for Dark Matter Detection . . . . 33
  Yun Su, Xianmin Wu, Long Dong, Hong Lv, Zhiyuan Li, and Meng Su

Design of Satellite Payload Management Integrated Electronic System Based on Plug and Play Technology . . . . 47
  Guofu Yong, Lanzhi Gao, and Wenxiu Mu

Full Path Analysis of Impact on MTF of Remote Sensing Camera by Jitter Spectrum . . . . 59
  Liu Yong, Lu Yuting, and Gao Lingyan

Influence of Micro-Vibration Caused by Momentum Wheel on the Imaging Quality of Space Camera . . . . 69
  Qiangqiang Fu, Yong Liu, Nan Zhou, and Jinqiang Wang

Integrated Design of Platform and Payload for Remote Sensing Satellite . . . . 81
  Jiaguo Zu, Teng Wang, and Yanhua Wu

Optical System Design of Space Remote Sensor Based on the Ritchey-Chretien Cassegrain System . . . . 91
  Yang-Guang Xing, Lin Li, and Jilong Peng

Suppression of the Self-Radiation Stray Light of Long-Wave Thermal Infrared Imaging Spectrometers . . . . 101
  Shu Shen, Jiacheng Zhu, Xujie Huang, and Weimin Shen

Star Camera Layout and Orientation Precision Estimate for Stereo Mapping Satellite . . . . 111
  W. Z. Wang, Q. L. Wang, J. J. Di, Y. B. Yu, Y. H. Zong, and W. J. Gao

Study on Optical Swap Computational Imaging Method . . . . 119
  Xiaopeng Shao, Yazhe Wei, Fei Liu, Shengzhi Huang, Lixian Liu, and Weixin Feng

CCD Detector's Temperature Effect on Performance of the Space Camera . . . . 129
  Xiaohong Zhang, Xiaoyong Wang, Zhixue Han, Chunmei Li, Pan Lu, and Jun Zheng

The Thermal Stability Analysis of the LOS of the Remote Sensor Based on the Sensitivity Matrix . . . . 135
  Yuting Lu, Yong Liu, and Zong Chen

Visible and Infrared Imaging Spectrometer Applied in Small Solar System Body Exploration . . . . 147
  Bicen Li, Baohua Wang, Tong Wang, Hao Zhang, and Weigang Wang

A Novel Optical System of On-Axis Three Mirror Based on Micron-Scale Detector . . . . 159
  Jianing Hu, Xiaoyong Wang, and Ningjuan Ruan

Design of a Snapshot Spectral Camera Based on Micromirror Arrays . . . . 175
  Shujian Ding, Xujie Huang, Jiacheng Zhu, and Weimin Shen

Design, Manufacturing and Evaluation of Grazing Incidence X-Ray Optics . . . . 187
  Fuchang Zuo, Loulou Deng, Haili Zhang, Zhengxin Lv, and Yueming Li

Fabrication of the Partition Multiplexing Convex Grating . . . . 197
  Quan Liu, Peiliang Guo, and Jianhong Wu

Research on the Optimal Design of Heterodyne Technique Based on the InGaAs-PIN Photodiode . . . . 205
  Zongfeng Ma, Ming Zhang, and Panfeng Wu

Research on the Thermal Stability of Integrated C/SiC Structure in Space Camera . . . . 213
  Li Sun, Hua Nian, Zhiqing Song, and Yusheng Ding

The Application Study of Large Diameter, Thin-Wall Bearing on Long-Life Space-Borne Filter Wheel . . . . 221
  Yue Wang, Heng Zhang, Shiqi Li, Zhenghang Xiao, and Huili Jia

A Step-by-Step Exact Measuring Angle Calibration Applicable for Multi-Detector Stitched Aerial Camera . . . . 235
  C. Zhong, Z. M. Shang, G. J. Wen, X. Liu, H. M. Wang, and C. Li

Absolute Distance Interferometric Techniques Used in On-Orbit Metrology of Large-Scale Opto-Mechanical Structure . . . . 245
  Yun Wang and Xiaoyong Wang

Development and On-Orbit Test for Sub-Arcsec Star Camera . . . . 253
  Yun-hua Zong, Jing-jing Di, Yan Wang, Yu-ning Ren, Wei-zhi Wang, Yujie Tang, and Jian Li

Opto-Mechanical Assembly and Analysis of Imaging Spectrometer . . . . 263
  Quan Zhang, Xin Li, and Xiaobing Zheng

An Optimized Method of Target Tracking Based on Correlation Matching and Kalman Filter . . . . 273
  Mingdao Xu and Liang Zhao

Design and Verification of Micro-Vibration Isolator for a CMG . . . . 281
  Yongqiang Hu, Zhenwei Feng, Jiang Qin, Fang Yang, and Jian Zhao

Development of a High-Frame-Rate CCD for Lightning Mapping Imager on FY-4 . . . . 287
  T. D. Wang, C. L. Liu, and N. M. Liao

Evaluation of GF-4 Satellite Multispectral Image Quality Enhancement Based on Super Resolution Enhancement Method . . . . 295
  Wei Wu, Wei Liu, and Dianzhong Wang

Micro-Vibration Environment Parameters Measurement and Analysis of a Novel Satellite in Orbit . . . . 305
  Pan-feng Wu, Zhen-long Xu, Xiao-yu Wang, Jiang Yang, and De-bo Wang

Multiband Image Fusion Based on SRF Curve and Regional Variance Matching Degree . . . . 315
  Yuan-yuan Wang, Zhi-yong Wei, Jun Zheng, and Sheng-quan Yu

Remote Sensing Image Mixed Noise Denoising with Noise Parameter Estimation . . . . 325
  Mutian Wang, Sijie Zhao, Xun Cao, Tao Yue, and Xuemei Hu

Research on Infrared Image Quality Improvement Based on Ground Compensation . . . . 335
  Xiaofei Qu, Xiangyang Lin, and Mu Li

Research on the Influence of Different Factors of Micro-Vibration on the Radiation Quality of TDICCD Images . . . . 349
  Jingyu Liu, Yufu Cui, Hongyan He, and Huan Yin

SVM-Based Cloud Detection Using Combined Texture Features . . . . 363
  Xiaoliang Sun, Qifeng Yu, and Zhang Li

An Oversampling Enhancement Method of the Terahertz Image . . . . 373
  Zhi Zhang, Xu-Ling Lin, Jian-Bing Zhang, and Hong-Yan He

Band Selection for Hyperspectral Data Based on Clustering and Genetic Algorithm . . . . 379
  Gaojin Wen, Limin Wu, Yun Xu, Zhiming Shang, Can Zhong, and Hongming Wang

Tree Species Classification of Airborne Hyperspectral Image in Cloud Shadow Area . . . . 389
  Junling Li, Yong Pang, Zengyuan Li, and Wen Jia

Optical Design of a Simple and High Performance Reflecting Lens for Telescope . . . . 399
  Jingjing Ge, Yu Su, Yingbo Li, Chao Wang, Jianchao Jiao, and Xiao Han

Index . . . . 405
Organization
Hosted By:
Organized By:
Jointly Organized By:
National Center for International Research on Space Optical Instrument
Beijing International Cooperation Base of Science and Technology on Space Optical Instrument
Space Remote Sensing Committee, Chinese Society of Astronautics
Space Optics Committee, The Chinese Optical Society
Remote Sensing Committee, China Association of Remote Sensing Application
Beijing Key Laboratory of Advanced Optical Remote Sensing Technology
Beijing Engineering Technology Research Center of Aerial Intelligent Remote Sensing Equipments
Optical Ultraprecise Processing Technology Innovation Center for Science and Technology Industry of National Defense
CASC Processing Technology Center of Optical Manufacture
Remote Sensing Payload Group, Science and Technology Committee, China Academy of Space Technology
Beijing Aerospace Innovative Intelligence Science and Technology Co., Ltd

Chairmen:
Paul Urbach, TU Delft
Qifeng Yu, NUDT

Executive Chairmen:
Kees Buijsrogge, TNO
Hu Chen, BISME

Secretaries-General:
Andrew Court, TNO
Bin Fan, BISME

Organizing Committee:
Peng Xu, BISME
Xiaoli Chen, BISME
Yue Li, BISME
Meng Wang, BISME
Zhenjiang Cong, BISME
Chunwei Wang, BISME
Yanxia Chen, BISME
Shumi Xia, BISME
Yonghui Pang, BISME
Peiwen Zhu, BISME
Yiting Wang, BISME
Junning Gao, BISME
Jing Zou, TNO
Sandra Baak, TNO
Analysis on NETD of Thermal Infrared Imaging Spectrometer

Jiacheng Zhu(*), Zhicheng Zhao, Shu Shen, Shujian Ding, and Weimin Shen

Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou, China
Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province, Soochow University, Suzhou, China
[email protected]
Abstract. In order to suppress the noise of a thermal infrared imaging spectrometer and improve its detection sensitivity, the relationship between the noise equivalent temperature difference (NETD) and various noise factors was analyzed. The noise of a thermal infrared imaging spectrometer mainly includes stray light noise, defined as nonimaging light scattered by opto-mechanical components' surfaces and generated by nonworking orders of the grating; background radiation noise from the opto-mechanical components; and detector noise. Starting from the physical mechanism of NETD, an expression for NETD containing these noise factors was deduced, and a noise analysis model of the thermal infrared imaging spectrometer was established. An Offner-type imaging spectrometer was taken as an example, with a spectral range from 8 to 12.5 μm, an F-number of 2.7, 1024 spatial channels, and 45 spectral channels, using a HgCdTe detector with a D* of 4 × 10¹⁰ cm·Hz^{1/2}·W⁻¹. The relationship between the NETD of this Offner-type spectrometer and its main influencing factors, including the cryogenic temperature and the optical properties of the inner-wall surfaces of the spectrometer's mechanical elements, was analyzed. When these parameters take different values, the proportion of each noise and the main factor affecting NETD differ, and targeted suppression methods were proposed for the different noises. Usually, background radiation noise is suppressed by cooling the opto-mechanical components and brightening the inner-wall surfaces, but stray light noise increases when the inner-wall surfaces are brightened. The change of NETD with cryogenic temperature was therefore compared between blackened and brightened inner-wall surfaces. At a cryogenic temperature of 90 K, NETD was 0.88 K in both the blackened and the brightened case. At 70 K, NETD was 0.1 K with blackened inner-wall surfaces and 0.52 K with brightened ones. The conclusions have guiding significance for determining the cryogenic temperature and the inner-wall surfaces' optical properties of a thermal infrared imaging spectrometer.

Keywords: NETD · Thermal infrared imaging spectrometer · Noise · Background radiation
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_1
1 Introduction

A thermal infrared imaging spectrometer obtains the spatial and spectral information of the thermal radiation of an object itself. It has all-weather monitoring capability, which gives it obvious advantages in the detection and identification of ground objects and events, and it is widely used in the military field, chemical gas detection, mineral exploration, forest fire monitoring, and so on [1–3]. Compared with visible/near-infrared or short-wave infrared imaging spectrometers, several difficulties arise in the development of thermal infrared imaging spectrometers. At normal temperature, the thermal radiation of the instrument itself is also in the thermal infrared band, which greatly interferes with signal detection and can even drown the radiation signal. In addition, high-performance thermal infrared detectors are difficult to develop and few kinds are available; in particular, long-wave infrared detectors have a low detection rate for weak target signals. Therefore, reducing background radiation is the main problem to be solved to improve the performance of a thermal infrared imaging spectrometer.

Noise equivalent temperature difference (NETD) is commonly used to evaluate the sensitivity of a thermal infrared imaging spectrometer. It is an important index describing the system's ability to distinguish a temperature change of the target, or the temperature difference between the target and the background. The lower the NETD, the higher the sensitivity of the spectrometer. To improve detection sensitivity, suppression of background noise is especially important. A common method is to cool the whole instrument or to use a cold stop, and such methods have been used in several missions.

In 1994, the airborne thermal infrared imaging spectrometer AHI [4] was developed by the University of Hawaii for mineral exploration. It works at 7.5–11.5 μm with a spectral resolution of 125 nm. The effective F-number of the spectrometer is reduced from 4 to 1.7 by a background suppressor, which works in a vacuum dewar cooled to 90 K with liquid nitrogen; the NETD of the spectrometer is 0.1 K. In 2006, JPL developed the quantum-well thermal infrared imaging spectrometer QWEST [5] for earth science detection and to verify their quantum-well infrared detector technology. The spectral range of QWEST is 8–12.5 μm, with a spectral resolution of 17.6 nm and an F-number of 1.6; its NETD is 0.13 K when the whole optical path is cooled below 40 K. Similarly, the thermal infrared imaging spectrometers MAKO [6] and MAGI [7], developed by The Aerospace Corporation, use cooled Dyson spectrometers to improve detection sensitivity. Clearly, refrigeration can improve the sensitivity of a thermal infrared imaging spectrometer. It is therefore particularly important to analyze the relationship between NETD and the cryogenic temperature, among other influencing factors, during the design stage of a thermal infrared imaging spectrometer.
2 Physical Mechanism of NETD

NETD is defined as the temperature difference between the target and the background when the signal-to-noise ratio (SNR) of the instrument is 1. The definitional equation of NETD is:

$$\mathrm{NETD} = \frac{\Delta T}{\Delta V_s / V_n} \qquad (1)$$
where ΔT is the temperature difference between the target and the background, ΔV_s is the differential signal voltage generated by the detector, and V_n is the total noise voltage output by the detector.

The operation schematic of the thermal infrared imaging spectrometer is shown in Fig. 1. The aperture diameter of the system is D, the focal length is f, the detector pixel area is A_d, the target distance is R, and the instantaneous field of view is IFOV. The solid angle subtended by the instrument at the observation area is Ω_0, and the area of the observation area is S_0. These parameters are related by:

$$S_0 = (\mathrm{IFOV})^2 R^2 \qquad (2)$$

$$\Omega_0 = \frac{\pi D^2}{4R^2} \qquad (3)$$

$$F = \frac{f}{D} \qquad (4)$$
where F is the F-number of the thermal infrared imaging spectrometer. The target radiation source can be considered a blackbody. Its spectral radiant exitance M(λ, T) is given by the Planck formula:

$$M(\lambda, T) = \varepsilon(\lambda)\,\frac{c_1}{\lambda^5\left[\exp(c_2/\lambda T) - 1\right]} \qquad (5)$$
Fig. 1 Operation schematic diagram of thermal infrared imaging spectrometer (target of area S_0 at distance R; aperture diameter D, focal length f, detector pixel area A_d, instantaneous field of view IFOV, solid angle Ω_0)
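The Planck-law step above can be checked numerically. The sketch below evaluates Eq. (5) in SI units with emissivity taken as 1; the Stefan-Boltzmann sanity check at the end is our addition, not part of the paper.

```python
# Blackbody spectral radiant exitance via the Planck formula (Eq. 5), eps = 1.
import math

C1 = 3.7418e-16  # first radiation constant c1, W m^2
C2 = 1.4388e-2   # second radiation constant c2, m K

def spectral_exitance(lam, temp):
    """M(lambda, T) in W/m^3 (W per m^2 of surface per m of wavelength)."""
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * temp)) - 1.0))

def band_exitance(lam1, lam2, temp, steps=20000):
    """In-band exitance (W/m^2) by trapezoidal integration of M over [lam1, lam2]."""
    h = (lam2 - lam1) / steps
    s = 0.5 * (spectral_exitance(lam1, temp) + spectral_exitance(lam2, temp))
    s += sum(spectral_exitance(lam1 + i * h, temp) for i in range(1, steps))
    return s * h
```

At λ = 10.25 μm and T = 300 K this gives M ≈ 3.1 × 10⁷ W/m³ (about 31 W·m⁻²·μm⁻¹), and integrating over essentially the whole spectrum recovers σT⁴, as it should.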
where λ is the radiation wavelength, ε(λ) is the target emissivity, T is the target temperature, c₁ = 3.7418 × 10⁻¹⁶ W·m² is the first radiation constant, and c₂ = 1.4388 × 10⁻² m·K is the second radiation constant. The self-radiation of most natural target surfaces can be regarded as Lambertian, so the radiance L(λ, T) is related to the spectral radiant exitance M(λ, T) by:

$$L(\lambda, T) = \frac{M(\lambda, T)}{\pi} \qquad (6)$$
Neglecting atmospheric absorption, the radiant flux entering the optical system is:

$$\Phi_O(\lambda, T) = L(\lambda, T)\, S_0\, \Omega_0 \qquad (7)$$
With the transmittance of the thermal infrared imaging spectrometer denoted τ(λ), the radiant flux received by a detector pixel can be deduced as:

$$\Phi_I(\lambda, T) = \frac{\varepsilon(\lambda)\,\tau(\lambda)\, M(\lambda, T)\, A_d}{4F^2} \qquad (8)$$
The detector's output signal voltage V_s is related to the received radiant flux Φ_I(λ, T) and the detector voltage responsivity R_v [8]:

$$R_v(\lambda) = \frac{V_{n\text{-}detector}\, D^*(\lambda)}{\sqrt{A_d\, \Delta f}} \qquad (9)$$
where D*(λ) is the peak detectivity of the infrared detector, Δf is the noise equivalent bandwidth, and V_{n-detector} is the noise voltage of the detector itself. For a spectral band from λ₁ to λ₂, the detector's output signal voltage V_s is:

$$V_s(\lambda, T) = \int_{\lambda_1}^{\lambda_2} \Phi_I(\lambda, T)\, R_v(\lambda)\, d\lambda \qquad (10)$$
The thermal infrared imaging spectrometer has a spectral resolution Δλ, a small quantity. When the imaging spectrometer observes a target scene with a target-background temperature difference ΔT, the differential signal voltage ΔV_s can be deduced and approximated as follows:

$$\Delta V_s = \int_{\lambda}^{\lambda+\Delta\lambda} \left[ V_s(\lambda, T+\Delta T) - V_s(\lambda, T) \right] d\lambda \approx \varepsilon(\lambda)\tau(\lambda)\, \frac{A_d}{4F^2}\, \frac{V_{n\text{-}detector}\, D^*(\lambda)}{\sqrt{A_d\, \Delta f}}\, \frac{\partial M(\lambda, T)}{\partial T}\, \Delta T\, \Delta\lambda = \varepsilon(\lambda)\tau(\lambda)\, M(\lambda, T)\, \frac{\sqrt{A_d}\, V_{n\text{-}detector}\, D^*(\lambda)}{4F^2 \sqrt{\Delta f}}\, \frac{hc/k}{\lambda T^2}\, \Delta T\, \Delta\lambda \qquad (11)$$
where h is the Planck constant, c is the velocity of light, k is the Boltzmann constant.
The total noise voltage contains the noise voltage of the detector itself, V_{n-detector}; the noise voltage of the background radiation, V_{n-background}; and the noise voltage caused by stray light generated during light transmission in the spectrometer, V_{n-stray}. Background radiation and stray light reaching the detector are both invalid, and the noise they produce on the detector acts in the same way, so the two noise voltages can be referred to collectively as the invalid radiation noise voltage, V_{n-invalid}. Define Φ_invalid as the invalid radiant flux received by the detector. The invalid illuminance on the detector is E_invalid, the sum of the background radiation illuminance E_background and the stray light illuminance E_stray. The invalid radiation noise voltage output by the detector can be expressed as:

$$V_{n\text{-}invalid} = \Phi_{invalid}\, \frac{V_{n\text{-}detector}\, D^*(\lambda)}{\sqrt{A_d\, \Delta f}} = E_{invalid}\, \sqrt{A_d}\, \frac{V_{n\text{-}detector}\, D^*(\lambda)}{\sqrt{\Delta f}} \qquad (12)$$
The total noise voltage V_n is:

$$V_n = \sqrt{V_{n\text{-}detector}^2 + V_{n\text{-}invalid}^2} \qquad (13)$$
The SNR of the thermal infrared imaging spectrometer can then be deduced as:

$$\mathrm{SNR} = \frac{\Delta V_s}{V_n} = \frac{\varepsilon(\lambda)\tau(\lambda)\, M(\lambda, T)\, (hc/k)\, \sqrt{A_d}\, D^*(\lambda)\, \Delta T\, \Delta\lambda}{4\lambda F^2 T^2 \sqrt{\Delta f + \left( E_{invalid}\, \sqrt{A_d}\, D^*(\lambda) \right)^2}} \qquad (14)$$
When SNR = 1, ΔT equals the NETD of the thermal infrared imaging spectrometer. The expression for NETD is thus:

$$\mathrm{NETD} = \frac{4\lambda F^2 T^2 \sqrt{\Delta f + \left( E_{invalid}\, \sqrt{A_d}\, D^*(\lambda) \right)^2}}{\varepsilon(\lambda)\tau(\lambda)\, M(\lambda, T)\, (hc/k)\, \sqrt{A_d}\, D^*(\lambda)\, \Delta\lambda} \qquad (15)$$
Obviously, if the thermal infrared imaging spectrometer has a fast F-number, high transmittance, and a strong ability to suppress invalid radiation, the system can achieve high sensitivity, i.e., a low NETD.
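Eq. (15) can be evaluated numerically. The sketch below works in CGS-consistent units (cm, W, Hz) so that D* in cm·Hz^{1/2}·W⁻¹ cancels cleanly; the default emissivity and transmittance of 1 and the spectral resolution Δλ = 0.1 μm (4.5 μm spread over 45 channels) are our assumptions, not values stated in the paper.

```python
# NETD per Eq. (15). Units: cm, W, Hz, so D* (cm*Hz^(1/2)/W) and E_invalid
# (W/cm^2) combine to Hz under the square root. eps, tau, dlam_cm are
# placeholder assumptions.
import math

def planck_exitance_cgs(lam_cm, temp_k):
    """Blackbody spectral exitance M, in W/cm^2 per cm of wavelength (Eq. 5, eps = 1)."""
    c1 = 3.7418e-12  # first radiation constant, W cm^2
    c2 = 1.4388      # second radiation constant, cm K
    return c1 / (lam_cm ** 5 * (math.exp(c2 / (lam_cm * temp_k)) - 1.0))

def netd_k(lam_cm, temp_k, f_num, ad_cm2, dstar, df_hz, dlam_cm, e_invalid,
           eps=1.0, tau=1.0):
    """Eq. (15): NETD in kelvin; e_invalid is the invalid illuminance in W/cm^2."""
    hc_over_k = 1.4388  # hc/k = c2, cm K
    root_ad = math.sqrt(ad_cm2)
    noise = math.sqrt(df_hz + (e_invalid * root_ad * dstar) ** 2)
    signal = (eps * tau * planck_exitance_cgs(lam_cm, temp_k)
              * hc_over_k * root_ad * dstar * dlam_cm)
    return 4.0 * lam_cm * f_num ** 2 * temp_k ** 2 * noise / signal

# Parameters from Sect. 3: lambda = 10.25 um, T = 300 K, F = 2.7,
# pixel 24 x 32 um, D* = 4e10 cm*Hz^(1/2)/W, noise bandwidth 50 Hz.
PARAMS = dict(lam_cm=10.25e-4, temp_k=300.0, f_num=2.7,
              ad_cm2=24e-4 * 32e-4, dstar=4e10, df_hz=50.0, dlam_cm=1e-5)
```

With E_invalid = 0 these parameters give a detector-noise-limited NETD of a few tenths of a kelvin; adding invalid illuminance raises it, reproducing the qualitative trend discussed below.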
3 Modeling and Analysis

Equation (15) contains the various noise factors: stray light noise, background radiation noise, and detector noise. On the detector's photosensitive surface, the stray light illuminance E_stray and the background radiation illuminance E_background depend on the reflectivity of the inner-wall surfaces of the spectrometer's mechanical elements.
Moreover, E_background is closely related to the cryogenic temperature. An Offner-type thermal infrared imaging spectrometer designed to work in geostationary orbit is analyzed here. Its spectral range is 8–12.5 μm, its F-number is 2.7, its number of spatial channels is 1024, and its number of spectral channels is 45. A HgCdTe detector with a pixel size of 24 × 32 μm and a D* of 4 × 10¹⁰ cm·Hz^{1/2}·W⁻¹ is used, and its Δf is 50 Hz. The spectrometer's noise is analyzed on its own, since invalid radiation from the fore-optics is blocked by the slit. The schematic of the Offner-type imaging spectrometer's opto-mechanical system is given in Fig. 2. Two models with different inner-wall surface reflectivities are established for noise analysis: one has blackened inner-wall surfaces and the other brightened. Detailed optical properties of the spectrometer's optical and mechanical elements in the two models are shown in Table 1. Without noise, the signal radiation illuminance E_signal on the detector can be expressed as:
$$E_{signal} = \frac{\varepsilon(\lambda)\,\tau(\lambda) \int_{\lambda}^{\lambda+\Delta\lambda} M(\lambda, T)\, d\lambda}{4F^2} \qquad (16)$$
Fig. 2 Cut-view of opto-mechanical system of the Offner-type imaging spectrometer (slit, fold mirrors FM1 and FM2, spherical mirror, grating, detector, and mechanical frame)

Table 1 Optical properties of the spectrometer's optical and mechanical elements' surfaces

Slit (both models): reflectivity 90%
Mirror (both models): reflectivity 98.5%; scattering coefficient 0.02%
Grating (both models): 1st-order diffraction efficiency 80%
Mechanical frame, Model 1 (blackened): absorptivity 92%; near-specular reflectivity 4%; Lambertian scattering rate 4%
Mechanical frame, Model 2 (brightened): absorptivity 5%; specular reflectivity 90%; Lambertian scattering rate 5%
Table 2 E_background of the two models at 10.25 μm (unit of illuminance: W/mm²)

Cryogenic temperature (K)   E_signal        E_background, Model 1   E_background, Model 2
180                         1.60 × 10⁻⁷     2.76 × 10⁻⁶             1.20 × 10⁻⁶
160                         1.60 × 10⁻⁷     1.08 × 10⁻⁶             4.70 × 10⁻⁷
140                         1.60 × 10⁻⁷     3.26 × 10⁻⁷             1.43 × 10⁻⁷
120                         1.60 × 10⁻⁷     6.84 × 10⁻⁸             3.00 × 10⁻⁸
100                         1.60 × 10⁻⁷     8.11 × 10⁻⁹             3.60 × 10⁻⁹
80                          1.60 × 10⁻⁷     4.51 × 10⁻¹⁰            2.59 × 10⁻¹⁰
70                          1.60 × 10⁻⁷     3.73 × 10⁻¹¹            1.63 × 10⁻¹¹
60                          1.60 × 10⁻⁷     2.01 × 10⁻¹²            8.78 × 10⁻¹³
When the target temperature is 300 K, E_signal calculated by (16) is 1.60 × 10⁻⁷ W/mm² at a radiation wavelength of 10.25 μm. With the help of the nonsequential ray-tracing software LightTools, E_stray and E_background of the two models were obtained. E_stray is independent of the cryogenic temperature, while E_background is temperature dependent. By simulation, E_stray is 2.20 × 10⁻¹¹ W/mm² for Model 1 and 1.10 × 10⁻⁹ W/mm² for Model 2. E_background of the two models at different cryogenic temperatures is given in Table 2. Clearly, background radiation can be suppressed by lowering the cryogenic temperature, and the spectrometer must be cooled to about 140 K or below for the signal not to be drowned by background radiation noise. Furthermore, E_background of Model 2 is less than that of Model 1, but E_stray of Model 2 is about 50 times larger than that of Model 1. Once the inner-wall surfaces of the mechanical elements are polished brightly, stray light reflects back and forth multiple times inside the spectrometer, and more stray light travels to the detector. Blackening the mechanical elements is therefore an effective way to reduce stray light noise.

With the spectrometer's parameters and the E_invalid values analyzed above, we obtained the relationship between NETD and cryogenic temperature using (15); the change of NETD with cryogenic temperature is shown in Fig. 3. For Model 1, when the cryogenic temperature is above 70 K, E_background is several orders of magnitude larger than E_stray, and NETD drops sharply as the temperature drops. Below 70 K, stray light noise accounts for a larger proportion than background radiation noise, and NETD levels off as the temperature drops. For Model 2, this critical temperature is 90 K. At a cryogenic temperature of 90 K, NETD is 0.88 K in both Model 1 and Model 2. NETD of Model 2 is lower above 90 K because of its lower background radiation noise. NETD of Model 1 is lower below 90 K because of its lower stray light noise. At a cryogenic temperature of 70 K, the NETD is 0.1 K in Model 1 and 0.52 K in Model 2.
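As a cross-check of the critical temperature near 90 K, the total invalid illuminance of each model can be compared directly from the simulated values quoted in this section; a minimal sketch (all values in W/mm², at 10.25 μm), which only ranks the models — converting E_invalid to NETD additionally requires Eq. (15):

```python
# E_invalid = E_stray + E_background for each model and cryogenic temperature.
E_STRAY = {1: 2.20e-11, 2: 1.10e-9}  # Model 1 (blackened), Model 2 (brightened)

# cryogenic temperature (K) -> (E_background of Model 1, of Model 2), Table 2
E_BACKGROUND = {
    180: (2.76e-6, 1.20e-6),
    160: (1.08e-6, 4.70e-7),
    140: (3.26e-7, 1.43e-7),
    120: (6.84e-8, 3.00e-8),
    100: (8.11e-9, 3.60e-9),
    80:  (4.51e-10, 2.59e-10),
    70:  (3.73e-11, 1.63e-11),
    60:  (2.01e-12, 8.78e-13),
}

def e_invalid(model, temp_k):
    """Total invalid illuminance (W/mm^2) of a model at a cryogenic temperature."""
    return E_STRAY[model] + E_BACKGROUND[temp_k][model - 1]

# Which model has less invalid radiation at each temperature: brightened
# Model 2 wins where background dominates (high T), blackened Model 1 wins
# where stray light dominates (low T).
better_model = {t: (2 if e_invalid(2, t) < e_invalid(1, t) else 1)
                for t in E_BACKGROUND}
```

Running this ranking shows Model 2 ahead at 100 K and above and Model 1 ahead at 80 K and below, consistent with the crossover near 90 K described above.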
Fig. 3 Simulation curve of NETD for the two models (NETD in K versus cryogenic temperature from 60 to 110 K)

4 Conclusion and Future Work

In this paper, the relationship between the NETD of a thermal infrared imaging spectrometer and its various noise factors, mainly detector noise, stray light noise, and background radiation noise, was analyzed. An expression for NETD containing these noise factors was deduced, and two noise analysis models of an Offner-type thermal infrared imaging spectrometer were established. By simulation, the variation of NETD with cryogenic temperature was obtained for blackened and for brightened inner-wall surfaces. NETD drops as the temperature drops and levels off when the cryogenic temperature is low enough. At different cryogenic temperatures, different noise factors dominate: background radiation noise is the major noise when the cryogenic temperature is above 90 K, while stray light noise and detector noise are the major noises below 70 K. When the spectrometer's inner-wall surfaces are brightened, its NETD is lower than that of the blackened version at high cryogenic temperatures, but higher at low cryogenic temperatures. These conclusions have guiding significance for determining the cryogenic temperature and the inner-wall surfaces' optical properties of a thermal infrared imaging spectrometer.

In future work, the NETD of a thermal infrared imaging spectrometer with mechanical elements partly blackened and partly brightened will be studied. The authors believe that the NETD may be further improved with an appropriate selection of brightened surfaces.

Acknowledgments This work was supported in part by the National Key Research and Development Program of China (2016YFB05500501-02).
References

1. Hall, J.L., Boucher, R.H., Buckland, K.N., Tratt, D.M., Warren, D.W.: Mako airborne thermal infrared imaging spectrometer: performance update. In: Proceedings of SPIE: Imaging Spectrometry XXI, vol. 9976, p. 997604 (2016)
2. Yuan, L., He, Z., Lv, G., Wang, Y., Li, C., Xie, J., Wang, J.: Optical design, laboratory test, and calibration of airborne long wave infrared imaging spectrometer. Opt. Express 25(19), 22440 (2017)
3. Knollenberg, J., Gebhardt, A., Weber, I., Zeh, T., Hiesinger, H.: MERTIS: the thermal infrared imaging spectrometer onboard of the Mercury Planetary Orbiter. In: International Conference on Space Optics, p. 13 (2017)
4. Lucey, P.G., Julian, J., Schaff, W., Winter, E.M., Stocker, A.D., Bowman, A.P.: AHI: an airborne long-wave infrared hyperspectral imager. In: Proceedings of SPIE: Airborne Reconnaissance XXII, vol. 3431, pp. 36–43 (1998)
5. Johnson, W.R., Hook, S.J., Mouroulis, P.Z., Wilson, D.W., Gunapala, S.D.: QWEST: quantum well infrared earth science testbed. In: Proceedings of SPIE: Imaging Spectrometry XIII, vol. 7086, p. 708606 (2008)
6. Warren, D.W., Boucher, R.H., Gutierrez, D.J., Keim, E.R., Sivjee, M.G.: MAKO: a high-performance, airborne imaging spectrometer for the long-wave infrared. In: Proceedings of SPIE: Imaging Spectrometry XV, vol. 7812, p. 781219 (2010)
7. Hall, J.L., Boucher, R.H., Buckland, K.N., Gutierrez, D.J., Hackwell, J.A., et al.: MAGI: a new high-performance airborne thermal-infrared imaging spectrometer for earth science applications. IEEE Trans. Geosci. Remote Sens. 53, 5447–5457 (2015)
8. Dudzik, M.C.: The Infrared & Electro-Optical Systems Handbook, vol. 4, pp. 19–22. Infrared Information and Analysis Center, Ann Arbor (1993)
Demand Analysis of Optical Remote Sensing Satellites Under the Belt and Road Initiative

Lingli Sun1(*), Qiuyan Zhao2, and Zhiqiang Shen1

1 Beijing Institute of Spacecraft Engineering, Beijing, China
[email protected]
2 China Academy of Space Technology, Beijing, China
Abstract. The Belt and Road Initiative is an open regional cooperation initiative announced by Chinese President Xi Jinping. In this paper, the basic situation of the countries along the Belt and Road is introduced first, and the problems faced by the governments, organizations, and companies participating in the construction are pointed out. Secondly, the paper comprehensively analyzes the remote sensing satellite resources currently available in the countries along the Belt and Road, including satellites developed independently by the relevant countries and those operated by commercial companies. Thirdly, the space information application needs of the countries in the relevant areas are discussed, such as emergency disaster reduction, mineral resources survey, mapping, weather forecasting, and ocean observation. Finally, the paper puts forward suggestions on the construction direction of the space information service system, which provide references for the construction and application of space resources.

Keywords: The Belt and Road Initiative · Demand · Optical remote sensing
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_2

1 Introduction

In September and October 2013, President Xi Jinping proposed a major initiative to jointly build the Silk Road Economic Belt and the twenty-first century Maritime Silk Road (hereinafter referred to as the Belt and Road Initiative). In March 2015, the State Council of China issued the Vision and Action of the Belt and Road Initiative, taking policy coordination, facilities connectivity, unimpeded trade, financial integration, and people-to-people bonds as its main lines, and clearly defined improving aerospace (satellite) information services as an important aspect of facilities connectivity. In May 2017, the Chinese government organized the first Belt and Road international cooperation summit forum, which attracted more than 100 countries and international organizations. The Belt and Road Initiative has received great attention from the international community and positive responses from the relevant countries. It has brought significant opportunities for China's aerospace sector to go global and to provide high-quality aerospace products and services to more international users. The initiative covers countries with vast territories and diverse natural environments. Most of them are economically underdeveloped and often suffer from disasters. All kinds of
natural disasters and the secondary disasters they cause occur from time to time, which seriously restricts the implementation of the Belt and Road Initiative. Space infrastructure, with its advantages of high vantage point, global coverage, and so on, is an important means of facilities connectivity; in some scenarios, it is the only means available. The information services provided by space infrastructure play an important role in policy coordination and trade flow. In this paper, the basic situation of the countries along the Belt and Road is introduced first, and the problems faced by the governments, organizations, and companies participating in the construction are pointed out. Secondly, the paper comprehensively analyzes the remote sensing satellite resources currently available in the countries along the Belt and Road, including satellites developed independently by the relevant countries and those launched by commercial companies. Thirdly, the space information application needs of the countries in the relevant areas are discussed, such as emergency disaster reduction, mineral resources survey, mapping, weather forecasting, and ocean observation. Finally, the paper puts forward suggestions on the construction direction of the space information service system, which provide references for the construction and application of space resources.
2 Basic Situation Along the Belt and Road

The Belt and Road Initiative is an open regional cooperation initiative. At present, the Silk Road Economic Belt covers East Asia, Central Asia, West Asia, Central and Eastern Europe, and other regions through land transportation, while the twenty-first century Maritime Silk Road connects East Asia, Southeast Asia, South Asia, and West Asia by sea and radiates to parts of southern Europe. In its basic sense, the Belt and Road Initiative is a regional cooperation initiative covering 65 countries. It connects the two economic circles of Asia-Pacific and Europe and is the largest and most promising economic cooperation belt in the world. The countries along the route have more than 4.3 billion people, accounting for more than 60% of the world's population, and their GDP accounts for nearly one-third of the world total. In the more than 4 years since the Belt and Road Initiative was proposed, more than 100 countries and international organizations around the world have actively supported and participated in its construction. As of September 2017, China had signed Belt and Road cooperation documents with 74 countries and international organizations and various investment and trade agreements with 58 countries. In the first three quarters of 2017, 2893 new enterprises were established in China. However, the area along the Belt and Road is wide, the terrain and landscape environment are complex, and the climate is changeable. Among the countries along the route, those of Southeast Asia and South Asia suffer some of the most frequent and most severe disasters in the world, and the terrain conditions in South Asia are especially complicated. Take Pakistan as an example: it is located in the Himalayan earthquake zone, has frequent earthquakes, and is affected by the South Asian tropical monsoon, so precipitation is uneven and the country is prone to drought or flood disasters.
There are many geological hazards along the Belt and Road; they are widely distributed and cause serious damage. For example, the
China-Pakistan Highway is a veritable museum of road geological hazards, and many such hazards seriously restrict and affect the construction and normal operation of the road. There are many projects under the Belt and Road Initiative, most of which are located in remote areas with poor infrastructure and undeveloped communications, making on-site safety supervision of construction areas difficult. Governments, organizations, and companies that want to participate in related projects need effective measures to ensure the safety of their activities under the initiative; only then can maximizing benefits be considered. Space resources have the characteristics of high vantage point and global coverage, giving them natural advantages in applications such as emergency disaster reduction, mineral resources survey, and surveying and mapping. In June 2018, China signed a declaration of cooperation with the United Nations on the development of the Belt and Road space information corridor, which brings opportunities for space-based remote sensing applications. This paper sorts out the application requirements for these space resources in detail, which can provide services and solutions for the relevant governments, organizations, and enterprises and also provide a reference for the construction and application of space resources.
3 Status of Space-Based Remote Sensing Resources Along the Route

3.1 Status of Remote Sensing Resources in Countries Along the Route
Along the Belt and Road, the countries with substantial space-based remote sensing resources of their own are mainly Russia and India; the rest, such as Saudi Arabia and Indonesia, have only a few remote sensing satellites.

3.1.1 Russia

Russian remote sensing satellites have a long history. In the early days, military applications such as reconnaissance and early warning were the mainstay; later, the satellites gradually developed toward combined military and civilian use. In June 2006, the Resurs-DK satellite was launched, with a panchromatic resolution better than 1 m. While satisfying civilian remote sensing applications, the system was also commercialized, selling high-resolution remote sensing satellite images worldwide. The Canopus-B remote sensing satellite launched in June 2012 carries three payloads: the Panchromatic Imaging System (PSS), the Multispectral Imaging System (MSS), and the Multispectral Scanner (MSU-200), with spatial resolutions of 2.5, 12, and 25 m, respectively. It can be used for land and ocean surface observation, ice imaging, natural and man-made disaster monitoring, mapping, and monitoring of forest fires and atmospheric pollutants.

3.1.2 India

The capabilities of Indian mapping and imaging satellites continue to increase. The CartoSat-2, 2A, and 2B satellites are mainly used for high-resolution imaging observations, as well as detailed mapping of key areas on the basis of global infrastructure mapping. CartoSat-2B was launched in July 2010. Its resolution is up to
0.8 m, and its revisit period is 4–5 days. India is developing the CartoSat-3 satellite, mainly for military high-resolution reconnaissance and mapping. The satellite has a mass of about 600 kg, a panchromatic resolution of 0.3 m, and a swath width of about 10 km.

3.1.3 Others

The Saudi Arabian satellites are a series of small Earth observation satellites developed by the Saudi Arabian Space Research Institute. The series currently comprises six satellites across the SaudiSat-1, -2, -3, and -4 designations. Among them, only SaudiSat-2, -3, and -4 have imaging capability, and SaudiSat-2 has been retired. Pakistan does not have a national remote sensing satellite. It previously cooperated with the United Kingdom to develop the Badr-B satellite: the British side provided the satellite platform, the Rutherford Appleton Laboratory (RAL) developed the main payload, and the Pakistani side was responsible for satellite assembly. It was launched on December 10, 2001, by a Russian Zenit-2 rocket from the Baikonur launch site and has since been retired. The Institute of Space Technology CubeSat-1 (iCUBE-1) launched in 2013 is a 1 U CubeSat carrying a low-resolution camera for technology verification; it has also been retired. Indonesia has two remote sensing satellites in orbit. LAPAN-TUBSAT is a technology test satellite developed in cooperation with Germany and Japan and launched in 2007. Indonesia's national aerospace research institute then developed the LAPAN-ORARI satellite based on that technology and launched it in 2015. Thailand's main remote sensing satellite is THEOS, developed by France's Astrium company. It uses a sun-synchronous orbit at a height of 822 km, with a panchromatic resolution of 2 m and a multispectral resolution of 15 m.
Kazakhstan's remote sensing satellite KazEOSat-1 was custom-built by Airbus Defence and Space of France, with a best resolution of 1 m and a swath width of 20 km; it can cover 220,000 km² in 24 h. It was launched in 2014 with a 7-year design life. The KazEOSat-2 satellite, also launched in 2014, was built on the SSTL-150 platform of Surrey Satellite Technology Ltd. and has a swath width of 77 km. Other resources are summarized in Table 1.

3.2 Other Available Resources
The world's spacefaring nations publish low-resolution data for free and sell high-resolution, time-sensitive remote sensing data and images to users around the world through commercial channels, relying mainly on their technical advantages in the field of remote sensing satellites. While satisfying national civilian applications, they are committed to exploiting the surplus observation capability of on-orbit satellites and promoting commercial applications. Most of these commercial services are open to the countries along the Belt and Road and thus constitute another available remote sensing resource.
Table 1 Summary of some remote sensing resources in the countries along the Belt and Road

Country | Satellite name | Prime contractor | Main load | Performance | Launch time
Malaysia | RazakSat | Astronautic Technology (M) Sdn Bhd | Medium-size aperture camera | Panchromatic resolution 2.5 m, multispectral resolution 5 m | 2009.7
United Arab Emirates | DubaiSat-1 | Satrec Initiative (South Korea) | EOS-C camera | Panchromatic resolution 2.5 m, multispectral resolution 5 m | 2009.7
United Arab Emirates | DubaiSat-2 | Satrec Initiative (South Korea) | EOS-D camera | Panchromatic resolution 1 m, multispectral resolution 4 m | 2013.11
Vietnam | Vietnam natural resources, environment, and disaster monitoring satellite-1 | EADS Astrium | Optical imaging payload | Full-color resolution 2.5 m, multispectral resolution 10 m | 2013.5
Iran | Fajr | | Earth surface imaging equipment | Imaging resolution 500–1000 m, can return HD images | 2015
Turkey | GOKTURK-2 | | | 2.5 m resolution optical image | 2012
Israel | EROS-B | | CCD panchromatic camera | 0.7 m | 2006.4
Ukraine | Sich-2 | Southern Design Bureau of Ukraine | Multispectral earth imager | Spatial resolution 8 m | 2011.8
Egypt | EgyptSat-2 | Russian Energia rocket group | Full-color and multispectral push-broom imager | | 2014
3.2.1 US Remote Sensing Satellite System

Landsat is the representative land observation satellite series of the United States. After three generations of development, the satellite technology has steadily improved. Landsat-8 and Landsat-7 are currently in orbit, and Landsat-5 served for many years. The panchromatic resolution is 15 m, the multispectral resolution 30 m, the infrared resolution 60 m, and the swath width 178 km. A large amount of remote sensing data has been acquired and widely used for more than 30 years. The newly designed fourth-generation satellite Landsat-8 carries the Thermal Infrared Sensor (TIRS) and the Operational Land Imager (OLI). TIRS uses a new low-cost quantum well infrared photodetector (QWIP) that captures surface radiation images in two thermal infrared (long-wave infrared) bands.
High-resolution imaging satellites such as WorldView-1 and GeoEye-1 have an optical imaging resolution of 0.45 m. The ground resolutions of satellites such as IKONOS, QuickBird, and OrbView-3 are better than 1 m, and these satellites have large-angle side-swing observation capabilities. The optical imaging resolution of the WorldView-2 and GeoEye-2 satellites has reached the 0.25–0.3 m level, and the US government has allowed satellite images of 0.25 m resolution to be sold commercially.

3.2.2 French Remote Sensing Satellite System

The French SPOT series satellites have been very successful in the global commercial remote sensing market; their data are dominated by a spatial resolution of 2.5 m. The Pleiades optical remote sensing satellite has a best resolution of 0.5 m and operates in the same public-private partnership (PPP) mode as the SPOT series. The satellite can achieve a base-to-height imaging ratio of 0.1–1.0. The positioning accuracy is better than 20 m without external reference and can be better than 0.5 m after correction with ground control points at 80 km intervals.

3.2.3 Italian Radar Satellite System

The COSMO-SkyMed satellite system consists of four satellites. The ground-track repeatability of each satellite in the constellation is less than 1 km, and the revisit period of the four-satellite network is only a few hours, allowing rapid revisits of areas of interest. Its beam mode has a resolution of 0.7 m and a swath width of 10 km.

3.2.4 German Remote Sensing Satellite

In the RapidEye constellation, five satellites are evenly distributed in a sun-synchronous orbit at 630 km. Each satellite carries six cameras with a resolution of 6.5 m. The system can access any place on Earth within 1 day and can cover the entire agricultural areas of North America and Europe within 5 days.
The five satellites download their image data to an X-band antenna in northern Norway; the data are then transmitted to the German headquarters for processing and analysis, and the finished product is finally delivered to the user.
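The Pleiades base-to-height ratio mentioned above determines how stereo parallax converts into terrain height. A small sketch of the standard relation dh = dp / (B/H), where dp is the parallax measurement accuracy projected on the ground (the numerical values below are illustrative, not specifications of any satellite discussed here):

```python
def stereo_height_accuracy(base_to_height, parallax_accuracy_m):
    """Height accuracy from stereo parallax: dh = dp / (B/H).
    A larger base-to-height ratio gives better height accuracy
    for the same parallax measurement accuracy."""
    return parallax_accuracy_m / base_to_height

# Hypothetical case: B/H = 0.5 and 0.5 m ground parallax accuracy
# yield roughly 1 m height accuracy.
print(stereo_height_accuracy(0.5, 0.5))
```

This is why an agile satellite that can select its base-to-height ratio within 0.1–1.0 can trade swath geometry against elevation accuracy from pass to pass.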
4 Satellite Remote Sensing Application Needs Analysis

Space-based resources are characterized by high vantage point and global coverage. They are an important means of connecting facilities and, in certain scenarios, the only means available. The information services they provide play an important role in policy coordination, unimpeded trade, and people-to-people exchange. The needs of various applications are analyzed below from the perspective of space-based remote sensing.

4.1 Emergency Disaster Reduction and Relief
There are many geological disasters in the countries along the Belt and Road. The ASEAN region, for example, is one of the world's hotspots of frequent disasters. The region is rich in rivers and its ecological environment is complex, so there is an urgent need to monitor vegetation cover, water pollution, forest exploitation, and biodiversity.
Research on the quality of the ecological environment and its evolution is needed, and regional ecological models should be established and improved. Satellites obtain scientific remote sensing data on resources, environment, and ecology in the region and provide basic data support for research on global climate change, ecological protection, and industrial policy formulation. The users of this type of demand are mainly the government organizations of the relevant countries, which seek to prevent disasters from causing significant human and economic losses. Satellite remote sensing is required to provide monitoring, forecasting, and early warning for disasters such as floods, marine disasters (including tsunamis, sea ice, red tides, etc.), and drought. Satellite remote sensing can also be used for disaster detection and dynamic change monitoring. Disaster risk maps at different scales for high-risk and key disaster areas should be produced from satellite remote sensing data, and 1:10,000–1:250,000 basic earthquake disaster information maps should be produced and updated. The spatial distribution and time series of various earthquake precursors should be detected by satellites. Products such as rapid earthquake disaster monitoring and disaster assessment products are needed.

4.2 Mineral Resources Survey
In the implementation of the Belt and Road Initiative, in order to support and attract relevant enterprises to participate in mineral resources exploration and development, the primary and most basic task is to provide accurate spatial information services for mineral resources. Accordingly, the countries along the Belt and Road urgently need to use space technology to establish a comprehensive and accurate spatial information support system for mineral resources. They can then conduct global mineral resources surveys and evaluations, focusing on oil and gas and on mineral resources such as uranium, iron, copper, aluminum, nickel, lead, and zinc. A global remote sensing map platform can be built to support development strategy research for scarce minerals. Mining enterprises need in-depth exploration of specific mineral resource prospects and target areas, decision-making data for the construction of large-scale mineral resources development bases, and ways to reduce enterprise risks and production costs. The non-ferrous metals industry should use the Belt and Road Initiative to accelerate going global, targeting low-grade mine development, expanding the applications of non-ferrous metals, breaking the bottleneck of resource and environmental constraints, and strengthening capacity and resource cooperation with the relevant countries. To achieve the above objectives, the remote sensing products that the resource survey needs to produce include 1:250,000 maps of the mine geological environment background, the development and utilization status of regional mineral resources, and the implementation of mineral resources planning.
Remote sensing data with resolutions better than 2.5 m and better than 1 m should be used to carry out multi-objective remote sensing surveys and monitoring of the development and utilization status of mineral resources and the mine environment in key metallogenic belts or mining areas at scales of 1:50,000 and 1:10,000.
4.3 Surveying and Mapping
Mapping geographic information objectively represents the spatial location, morphological characteristics, and correlations of important natural geographical elements and artificial facilities on the Earth's surface. It can accurately describe the spatial location and extent corresponding to human factors such as place names and administrative boundaries, and it is an important basic information platform for national economic and social informatization. At present, the countries along the Belt and Road have not yet formed a large-scale production system for satellite mapping. Regional data suffer from widespread problems such as outdated data, overly small scales, lack of engineering geological content, and incomplete elements. Existing mapping techniques and capabilities cannot fully meet the needs of all parties, so new mapping methods are urgently needed to obtain new basic maps. For railway and infrastructure construction enterprises, detailed topographic map data are an indispensable basis for overseas railway survey and design, infrastructure construction, and so on. However, in overseas railway projects the data are often missing or difficult to collect, or the topographic map data are outdated and unusable. These situations make railway line selection difficult and pose huge obstacles to project implementation. The implementation of the Belt and Road Initiative is bound to drive rapid economic and social development along the route. The changes in geographical features caused by the construction of supporting infrastructure such as transportation and urban facilities will be very fast, so countries and enterprises will need 1:10,000 or 1:5000 topographic map updates. In addition, there are urgent needs for high-resolution Earth observation data for applications such as unmanned area mapping and coastal zone and island mapping.

4.4 Weather Forecast and Climate Change Research
There are many unmanned areas along the Belt and Road, such as mountainous areas, plateaus, deserts, and oceans, and many blind spots in meteorological observation, which have become shortcomings for weather forecasting. The countries along the Belt and Road urgently need meteorological satellites to conduct all-weather, three-dimensional observations of the atmosphere, clearly capture changes in wind and cloud, and effectively compensate for the shortcomings of ground observation. At the same time, meteorological disaster forecast information is of great significance for engineering enterprises in arranging construction schedules and ensuring personnel safety. The natural environment of the countries along the Belt and Road varies greatly and includes many major climate types, such as tropical monsoon climate (Indochina, most parts of South Asia), tropical rainforest climate (Malay Peninsula, Indonesia, southern Philippines), maritime tropical monsoon climate (northern Philippines), temperate desert-steppe continental climate (Eurasian hinterland), temperate maritime climate (western Central and Eastern Europe), and temperate continental climate (eastern
Central and Eastern Europe). At the same time, the economic development mode of the Belt and Road region is relatively extensive. Energy consumption, raw wood consumption, material consumption, and carbon dioxide emissions per unit of GDP are more than half of the world average, while steel, cement, non-ferrous metals, water, and ozone-depleting substance consumption per unit of GDP have reached or exceeded twice the world average. In addition, the ecological environment in the region is relatively fragile: many countries are in arid or semi-arid environments, and forest coverage is lower than the world average. The countries should carry out exchanges and applications of meteorological products and technologies among themselves. They should also take part in global meteorological forecasting and climate change research by providing global temperature and humidity profiles as well as meteorological parameters such as cloud and radiation data, all of which are used in numerical weather prediction models. They also need to monitor natural disasters, the ecological environment, global ice and snow cover, and ozone distribution; with those data, the countries can provide information services for climate change analysis and prediction. Besides, they need to provide global and regional meteorological information for agriculture, forestry, marine, transportation, and other applications. The information can be applied to government decision-making, disaster prevention and mitigation, and economic and social development.
5 Conclusion

The remote sensing resources of the countries along the Belt and Road fall far short of their application needs, while existing international commercial remote sensing satellite resources can play a good complementary role. However, most countries' economies cannot support the cost of commercial resources. The implementation of the Belt and Road Initiative is of great strategic importance to China, and the implementation of the Space Information Corridor project may be a solution. Based on the communication, navigation, and remote sensing satellite resources that China has in orbit and under planned construction, appropriately supplemented by space-based resources and a ground information sharing network, the system will be able to provide more powerful spatial information service capabilities for the countries and regions along the Belt and Road. It will also establish space information corridors to promote information interconnection. At the same time, it will enable China's space information industry to reach a new level of marketization and internationalization in the corridor area and lay a solid foundation for further development worldwide.
Design of an All-Spherical Catadioptric Telescope with Long Focal Length for High-Resolution CubeSat

Li Liu1,2(*), Li Chen1,2, Zhenkun Wang1,2, and Weimin Shen1,2

1 Key Lab of Modern Optical Technologies of Education Ministry of China, School of Optoelectronic Science and Engineering, Suzhou, China
2 Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province, School of Optoelectronic Science and Engineering, Suzhou, China
[email protected]
Abstract. Telescopes with aspherical mirrors, long focal lengths, and large apertures are usually used to acquire high-spatial-resolution remote sensing images of the Earth's surface, but they are expensive and large in volume. CubeSat is one of the fastest growing sectors in space science and technology because of its relative superiority over other micro- and mini-satellites in terms of shorter manufacturing time, lower complexity, and, most importantly, lower cost. An optical design of an all-spherical catadioptric telescopic objective for 3 U CubeSat missions is proposed in this chapter, aiming at as low a cost as possible. In order to correct the aberrations of an all-spherical system, two corrector groups are introduced. One is an afocal lens group at the front end of the primary mirror that eliminates the spherical aberration and coma of the two-mirror objective. It is composed of two lenses of opposite focal power for achromatic design, and its diameter is close to the size of the primary mirror. The other is a negative-power group of four lenses before the image plane that amplifies the focal length of the two-mirror structure and corrects the astigmatism, field curvature, and distortion of the whole system. All lenses are made from the same glass material for the convenience of athermalization design. Furthermore, in order to minimize the outer barrel length, the secondary mirror adopts a Mangin type, which greatly reduces the marginal ray height of the full field of view. Based on this basic idea and structure, the primary aberration formulas are derived, and methods for correcting the aberrations are analyzed. A design example is then given; it approaches diffraction-limited imaging performance, and the total length of the system is less than 1/6 of its effective focal length. Finally, tolerance analysis is performed to verify its manufacturability.
Keywords: ISSOIA 2018 · All-spherical system · Compact and low cost · Corrector group
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_3

1 Introduction

Aerospace remote sensing can acquire abundant information about the Earth's surface, so it is widely used in the fields of meteorological observation, resource investigation, mapping, environmental monitoring, and so on. However, current high-resolution remote sensing cameras have problems such as large size, heavy weight, and high development
cost and launch cost, which restrict the development of the remote sensing industry. A new class of picosatellite, the Cube Satellite (CubeSat), was introduced by California Polytechnic State University (Cal Poly) and Stanford University in 1999. The CubeSat effort is dedicated to standardized satellites that can be developed and launched at very low cost within a year or two, making it possible to develop high-resolution, low-cost satellite constellations. A standard one-unit (1 U) CubeSat occupies a volume of 10 cm × 10 cm × 10 cm and weighs no more than 1 kg [1]; other standards include 2 U, 3 U, and so on. The telescopic objective proposed in this chapter is suitable for 3 U CubeSat missions. In order to achieve high resolution at low cost, the optical system is required to have a long focal length and a compact structure, with all components spherical. Long-focal-length optical systems generally adopt all-mirror structures, such as the classic Cassegrain system, the Ritchey–Chretien system, the three-mirror Korsch structure, and crossed structures. But these systems employ aspheric mirrors to correct aberrations and improve performance, resulting in high processing difficulty [2]. In order to reduce the processing difficulty while ensuring that the performance of the optical system does not decrease, an all-spherical catadioptric telescopic objective incorporating correctors can be employed. In 1934, V.N. Churilovskii proposed an afocal apochromatic corrector of a single material placed in front of the image surface of a Cassegrain system, a typical form of the sub-aperture corrector group. His investigations showed that one apochromatic corrector can eliminate the spherical aberration and coma of the system, while two correctors can in principle eliminate all the image aberrations [3]. But his proposals did not attract public attention.
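The link between focal length and achievable resolution can be made concrete with the standard nadir ground-sample-distance relation GSD = H·p/f (the orbit altitude, pixel pitch, and focal length below are hypothetical example values, not the design parameters of this chapter):

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """GSD = H * p / f: the ground footprint of one detector pixel at nadir."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical example: a CubeSat at 500 km altitude with a 2.5 um pixel pitch
# needs roughly a 1.25 m focal length to reach 1 m GSD.
print(ground_sample_distance(500e3, 2.5e-6, 1.25))
```

This is why meter-class imaging from a 3 U volume forces the focal length well beyond the physical length of the satellite, motivating the folded, telephoto-style catadioptric layout pursued in this chapter.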
In 1941, Maksutov presented a full-aperture corrector group that uses a thick meniscus lens instead of a Schmidt correction plate. Under the achromatic condition, the spherical aberration of the primary mirror can be compensated by proper selection of the curvature radii and thickness of the front and back surfaces of the thick meniscus lens, and the coma can be adjusted through the distance between the meniscus lens and the primary mirror [4]. In 1975, Klevtsov adopted a Mangin-type mirror instead of the secondary mirror and placed a meniscus near it. The Mangin mirror can be approximated as a mirror plus a negative lens. The aberration theory of this system showed that a reflective element of Mangin type makes it possible to compensate longitudinal chromatism of either sign, while the monochromatic aberrations introduced by this element do not differ strongly from those of a convex secondary mirror [5]. All the systems mentioned above are typical corrector groups, divided into full-aperture and sub-aperture correction according to the aperture size. The full-aperture corrector group has one or two large lenses at the front end of the primary mirror, whose diameter is basically the same as the pupil diameter. The sub-aperture corrector group corrects the aberration caused by the spherical primary mirror with a few small lenses near the secondary mirror or before the image surface. When correcting the same spherical aberration, the full-aperture corrector group is more efficient than the sub-aperture corrector group, but it carries a significant weight penalty. The aim of this chapter is to present an all-spherical telescopic objective with long focal length and high resolution for 3 U CubeSat missions by analyzing the aberration characteristics of spherical optical systems.
Design of an All-Spherical Catadioptric Telescope with Long Focal. . .
2 Telescope Design

2.1 Design Concepts
A two-mirror system corrects spherical aberration and coma by choosing aspherical coefficients, and it folds the optical path to reduce volume. However, aspherical mirrors bring problems such as processing difficulty, high cost, and long fabrication periods. The most innovative feature of the proposed system is its all-spherical design, which overcomes these problems; how to correct spherical aberration and coma with only spherical components is therefore the key to the design. The aberration generated by the primary and secondary mirrors is corrected by combining two kinds of corrector groups. The full-aperture corrector group has a strong aberration-correcting ability and is applied to the proposed system where the weight budget permits. A sub-aperture corrector group is also adopted to amplify the focal length of the two mirrors and to correct the astigmatism, field curvature, and distortion of the whole system. The concave spherical primary mirror of the R-C-type system introduces no chromatic aberration but large negative spherical aberration and coma. The afocal corrector group at the front end of the primary mirror is composed of positive and negative lenses made from the same material for an achromatic design, so the spherical aberration and coma produced by the primary mirror can be compensated by this front achromatic corrector. However, the aberration balance affects not only the structure of the system but also its tolerances: in a high-quality system designed around large compensating aberrations, even small changes in some parameters can noticeably degrade the result, so the tolerances become very strict [6]. In this case, if the aberration produced by the primary mirror were corrected by the front corrector alone, the front corrector's tolerances would become very tight, which is not conducive to fabrication and alignment.
It is therefore worth replacing the secondary mirror with a Mangin type to share the task of correcting the primary mirror's aberration. The Mangin mirror is a good element for eliminating spherical aberration [7]. It consists of a mirror and a meniscus attached to it, and the spherical aberration can be adjusted by selecting the curvature radii of its surfaces. Furthermore, the Mangin mirror compresses the marginal ray height over the full field of view and shortens the outer barrel. The whole system draws on the long-focal-length layout of the telephoto objective: the sub-aperture corrector group together with the primary and secondary (Mangin) mirrors forms the positive-power group, and four lenses form the negative-power group. On the one hand, the focal length is multiplied and the length of the optical system is reduced; on the other hand, the chromatic aberration introduced by the Mangin mirror can be balanced, as well as the residual coma, astigmatism, field curvature, and distortion of the system. While pursuing optical performance, the difficulty of fabrication and alignment must also be considered, and an outer barrel must be designed for this system, so a balance between system performance, system tolerances, and the length of the outer barrel is needed. The smaller the radius of curvature of the primary mirror, the more compact the system, but the more stringent the tolerances. With the radius of curvature of the primary mirror and the secondary-mirror obscuration held constant, the lower the light compression of the Mangin mirror, the shorter the outer barrel, but the stricter the Mangin mirror's tolerances and the higher the fabrication difficulty, possibly even beyond the present processing level. The outer barrel together with the optical part should fit within the 3 U volume, and the tolerances should be within the current level of fabrication and alignment. The optical system given below was chosen from many candidate results in consideration of the factors above.

2.2 Aberration Analysis
The structure and aberration of the system are analyzed under the normalization condition. Based on the design concepts, the ratio of the focal power ϕ1 of the primary mirror to the focal power ϕ of the whole system indicates how much the rear lens group amplifies the focal length of the primary and secondary mirrors, and the relationship between ϕ1 and ϕ is as follows:

ϕ1 = mϕ   (1)

The linear obscuration coefficient α is the ratio of the marginal ray height of the on-axis field of view on the secondary mirror to that on the primary mirror, and β is the lateral magnification of the secondary mirror. According to the volume limit and experience, the initial structural parameters of the two-mirror system can be calculated by (2), (3), (4), and (5), and the third-order aberration coefficients can be obtained from (6) and (7):

R1 = 2/β   (2)

R2 = 2α/(β + 1)   (3)

d1 = (1 − α)/β   (4)

Δ = α − (1 − α)/β   (5)

ΣSI,mirror = 2ρ1³ − 6α²ρ1² + 4α²ϕ1ρ1   (6)

ΣSII,mirror = 2ρ1²uz − 2ρ2²uz d1 α³ρ2   (7)
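As a numerical check on the initial-structure relations (2)–(5), they can be evaluated for trial values of α and β. A minimal sketch under the normalization f′ = 1; the sample values of α and β below are illustrative, not the chapter's actual design data:

```python
# Initial structural parameters of the two-mirror system under the
# normalization f' = 1: alpha is the linear obscuration coefficient,
# beta the lateral magnification of the secondary mirror.
def two_mirror_parameters(alpha, beta):
    R1 = 2.0 / beta                   # (2) radius of the primary mirror
    R2 = 2.0 * alpha / (beta + 1.0)   # (3) radius of the secondary mirror
    d1 = (1.0 - alpha) / beta         # (4) primary-secondary separation
    delta = alpha - d1                # (5) rear working distance
    return R1, R2, d1, delta

# Illustrative trial values only.
R1, R2, d1, delta = two_mirror_parameters(alpha=0.3, beta=3.0)
print(R1, R2, d1, delta)
```

Scanning α and β in this way quickly maps out which obscuration/magnification pairs keep the separation d1 and rear distance Δ within the 3 U volume budget.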
where R1 and R2 are the radii of the primary and secondary mirrors, ρ1 and ρ2 are their curvatures, d1 is the separation between the primary and secondary mirrors, Δ is the rear working distance, and uz is the half field of view. For the spherical aberration and coma of the whole system to be corrected, the following two conditions should be satisfied:

ΣSI,front + ΣSI,mirror = 0   (8)

ΣSII,front + ΣSII,mirror = 0   (9)
To simplify the calculation of the spherical aberration and coma of the front corrector group, its lenses are regarded as thin lenses in contact, and the achromatic condition of the corrector group is:

φ1/ν1 + φ2/ν2 = 0   (10)

Because the same material is used in the two lenses, ν1 = ν2, so

φ1 + φ2 = 0   (11)

that is, the two lenses must have equal and opposite optical powers, and the front corrector group has zero total power. From conditions (8), (9), and (11), together with the minimum spherical aberration of the first lens, the structural parameters of the front corrector group can be obtained. For the latter corrector group, expressions (12), (13), (14), and (15) follow from the optical power requirement, the flat-field condition, the position achromatism condition, and the lateral achromatism condition:

h5φ5 + h6φ6 + h7φ7 + h8φ8 = 1 − m   (12)

ΣSIV,residual + (φ5 + φ6 + φ7 + φ8)/n = 2/R2 − 2/R1   (13)

CI,Mangin + h5²φ5 + h6²φ6 + h7²φ7 + h8²φ8 = 0   (14)

CII,Mangin + h5hy5φ5 + h6hy6φ6 + h7hy7φ7 + h8hy8φ8 = 0   (15)
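For fixed ray heights, (12)–(15) are linear in the rear-group powers φ5–φ8, so each trial set of heights yields one solution; scanning different height sets produces the multiple groups of solutions mentioned below. A minimal sketch with pure-Python Gaussian elimination; all numeric inputs here are illustrative placeholders, not the chapter's design data:

```python
# Solve the 4x4 linear system (12)-(15) for the rear-group powers
# phi5..phi8, given marginal ray heights h, chief ray heights hy,
# and the four right-hand sides (power, flat field, two color terms).
def solve_linear(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rear_group_powers(h, hy, rhs):
    # Rows: (12) power requirement, (13) flat field (1/n factor folded
    # into rhs), (14) position achromatism, (15) lateral achromatism.
    A = [h,
         [1.0, 1.0, 1.0, 1.0],
         [hi * hi for hi in h],
         [hi * hyi for hi, hyi in zip(h, hy)]]
    return solve_linear(A, rhs)

# Illustrative placeholder heights and right-hand sides.
phis = rear_group_powers(h=[0.9, 0.8, 0.6, 0.5],
                         hy=[0.1, 0.2, 0.3, 0.4],
                         rhs=[1.0, 0.0, -0.2, 0.1])
print(phis)
```

Comparing the solutions obtained for different height sets against the total length and rear working distance then selects the final configuration, as described below.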
where n is the refractive index of the lens material, φi is the optical power of lens i, hi is the height of the marginal ray of the on-axis field on lens i, and hyi is the height of the chief ray of the off-axis field on lens i. Multiple groups of solutions can be obtained from (12), (13), (14), and (15); by comparing them with respect to the total system length and the rear working distance, the solution with a short total length and a suitable rear working distance is selected.

2.3 Results and Evaluation
The optical system is designed and optimized on the basis of the above analysis and theoretical deduction. It is rotationally symmetric about the optical axis, all components are spherical, and the stop is on the primary mirror. Compared with a traditional two-mirror system, a Mangin mirror is adopted instead of the secondary mirror. Light is refracted by the front corrector group onto the primary mirror; the primary mirror focuses the light onto the secondary Mangin mirror, where it passes twice through the Mangin mirror's front surface; finally, the light passes through the latter corrector group to the image plane. Four lenses, L5, L6, L7, and L8, shorten the total length of the optical system and correct the residual aberrations. The total length from the secondary mirror to the focal plane (FP) in the optical design is less than 1/6 of the effective focal length, and the rear working distance is longer than 20 mm for detector installation. Because of the strong compression ability of the Mangin mirror, the outer barrel required by the system is only 50 mm long, so this novel optical system can be applied to 3 U CubeSat missions. All lenses are made from K7. Figure 1 shows the design layout of the CubeSat telescopic objective. The design and analysis are carried out with the Zemax optical design program. To evaluate the imaging quality of the whole system, the modulation transfer function (MTF) curves, spot diagram, distortion curve, and longitudinal aberration are given. As shown in Fig. 2, the MTF curves of all fields of view are close to the diffraction
Fig. 1 Layout of the telescopic objective
Fig. 2 MTF of the telescopic objective
limit, and the MTF of the full field of view is higher than 0.22 at the Nyquist frequency. Figure 3 depicts the spot diagram of each field of view on the image plane. In the whole band and over all fields of view, the spots lie mainly within the Airy disk, so the system images at the diffraction limit. The RMS spot radii of the central and full fields are 4.126 μm and 3.253 μm, respectively. The distortion curve on the image plane can be seen in Fig. 4; the abscissa represents the relative distortion in percent, and the ordinate represents the field of view. The distortion is largest at the full field of view, where it is 0.3%. Figure 5 shows the longitudinal aberration of the system; the abscissa is the distance from the focal plane, the ordinate is the normalized aperture, and the curves of different colors represent different wavelengths. The maximum longitudinal aberration is within the focal depth. In summary, the results meet the specified requirements with a compact structure and a suitable distance for detector installation, and the imaging quality approaches the diffraction limit. Compared with existing compact long-focal-length optical remote sensing payloads, the designed coaxial all-spherical catadioptric telescopic objective is suitable for 3 U CubeSat remote sensing. It has the advantages of light weight, small size, easy processing, and low cost, although it does require an outer barrel.
Fig. 3 Spot diagram of the telescopic objective
Fig. 4 Distortion of the telescopic objective
Fig. 5 Longitudinal aberration of the telescopic objective
3 Tolerance

Tolerance analysis verifies the practicality and reliability of an optical design: it not only avoids excessive sensitivities in the system but also predicts the performance under fabrication and assembly precision limits. Compared with a non-symmetrical design, the symmetrical design helps to loosen the production and assembly tolerances. In this system the most sensitive component is the Mangin mirror, whose wedge tolerance is as tight as 0.1 mm; the surface tolerances of the primary mirror and the Mangin mirror are the other main factors that degrade the performance. When the spherical aberration is left uncontrolled, the drop in MTF caused by the wedge tolerance of the Mangin mirror is as high as 0.1. Table 1 lists the barrel length required for systems of different lengths, the spherical aberrations of the primary and Mangin mirrors, and the tolerance-induced decreases in MTF. S8 is the primary mirror, and S9, S10, and S11 together represent the Mangin mirror. As can be seen, the spherical aberration of the Mangin mirror significantly affects the decrease in MTF under the same tolerance condition. The result given above controls the spherical aberration in order to loosen the tolerances, achieving a balance between performance, tolerances, and barrel length. Meanwhile, all other tolerances are easy to implement: they are all within the precision limits of the fabrication facility and practical assembly experience. The compensators are the movement of the image plane and the tilt, decenter, and movement of the whole latter corrector group. The cumulative probability of the final system can be seen in Fig. 6, and Table 2 shows the degradation of the MTF value at the Nyquist frequency. Therefore, after tolerancing, this telescopic objective for 3 U CubeSat missions can still meet the specification.
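The cumulative-probability result of Fig. 6 comes from Monte-Carlo tolerancing. The procedure can be sketched as follows, with purely hypothetical linear sensitivities and tolerance bands (the real analysis uses the optical-design program's perturbation model, not a linear one):

```python
import random

# Monte-Carlo sketch of tolerance analysis: perturb each toleranced
# parameter uniformly within its band, accumulate an MTF drop through
# hypothetical linear sensitivities, and report the probability of
# staying above a threshold MTF.
SENSITIVITIES = {                 # hypothetical MTF drop per unit error
    "mangin_wedge": 1.0,          # per mm of wedge
    "primary_irregularity": 0.11, # per fringe
    "mangin_irregularity": 0.13,  # per fringe
}
TOLERANCES = {                    # hypothetical tolerance bands
    "mangin_wedge": 0.01,         # mm
    "primary_irregularity": 0.1,  # fringe
    "mangin_irregularity": 0.1,   # fringe
}

def mc_mtf(nominal_mtf=0.21, threshold=0.18, trials=20000, seed=1):
    random.seed(seed)
    passed = 0
    for _ in range(trials):
        drop = sum(SENSITIVITIES[k] * random.uniform(0.0, TOLERANCES[k])
                   for k in SENSITIVITIES)
        if nominal_mtf - drop >= threshold:
            passed += 1
    return passed / trials

print(mc_mtf())
```

Tightening or loosening any one band and re-running the loop shows directly how much of the MTF budget that tolerance consumes, which is the trade explored in Table 1.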
Table 1 Comparison of MTF decrease in different conditions

Barrel (mm)  Surface  Spherical aberration  Tolerance  Decrease in MTF  Total decrease (%)
45           S8       60                    IRR 0.1    0.011            21
             S9       8.1                   TIR 0.01   0.022
             S10      11.4                  IRR 0.1    0.013
             S11      6.1
80           S8       49                    IRR 0.1    0.011            18.5
             S9       6.5                   TIR 0.01   0.007
             S10      10                    IRR 0.1    0.014
             S11      6.3
130          S8       39.7                  IRR 0.1    0.010            18
             S9       5.8                   TIR 0.01   0.005
             S10      8                     IRR 0.1    0.014
             S11      4.8
Fig. 6 Cumulative probability of the CubeSat telescope

Table 2 Degradation percentage of MTF value at the Nyquist frequency

Normalized field of view  MTF value designed  MTF value final  Degradation percentage
(0, 0) Meridian           0.2174              0.1823           15.1
(0, 0) Sagittal           0.2147              0.1824           15.1
(0, 0.7) Meridian         0.2135              0.1819           14.8
(0, 0.7) Sagittal         0.2114              0.1800           14.9
(0, 1) Meridian           0.2113              0.1799           14.9
(0, 1) Sagittal           0.2099              0.1797           14.4
4 Conclusion

A compact all-spherical catadioptric telescopic objective for 3 U CubeSat missions has been studied and designed. The design concept of the system is introduced, with emphasis on the aberration analysis method. The example given has the advantages of light weight, easy processing, and compact structure. The MTF curves, spot diagram, distortion curve, and longitudinal aberration are used to evaluate the image quality; the results show that the system has good image quality and meets the specified requirements. According to the tolerance allocation, the tolerance requirements are within the existing process level, and the toleranced system still meets the specification.
Acknowledgments This work was supported in part by the National Key Research and Development Program of China (2016YFB05500501-02) and by the Priority Academic Program Development (PAPD) project of Jiangsu Higher Education Institutions.
References

1. Mahmoud, A.A., Elazhary, T.T., Zaki, A.: Remote sensing CubeSat. Proc. SPIE 7826, 78262I-1–78262I-8 (2010)
2. Mingqiu, X., Weimin, S., Junhua, P.: Optical system for space remote sensing. In: The Development of Modern Optics: Academician Wang Daheng Engaged in Scientific Research Activities, pp. 243–265 (2003)
3. Churilovskii, V.N.: Concerning a new type of astrophotographic objective with apochromatic, aplanatic, anastigmatic, and orthoscopic correction. Trudy LITMO. 1(4) (1940)
4. Maksutov, D.D.: New catadioptric meniscus systems. J. Opt. Soc. Am. 34(5), 270–284 (1944)
5. Klevtsov, Y.A.: Prospects for developing Cassegrain telescopes with a corrector in convergent beams. J. Opt. Technol. 71(10), 659–665 (2004)
6. Zhijiang, W.: Theoretical Basis of Optical Design, 2nd edn, pp. 304–305. Science Press, Beijing (1985)
7. Riedl, M.J.: The Mangin mirror and its primary aberrations. Appl. Opt. 13(7), 1690–1694 (1974)
Design of Lobster-Eye Focusing System for Dark Matter Detection

Yun Su1(*), Xianmin Wu1, Long Dong1, Hong Lv1, Zhiyuan Li2, and Meng Su3

1 Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
[email protected]
2 Nanjing University, Nanjing, China
3 Hong Kong University, Hong Kong, China

Abstract. Dark matter and dark energy are recognized as "two dark clouds in the clear sky of twenty-first-century physics." Experimental and theoretical exploration of dark matter and dark energy will greatly deepen our understanding of the microcosm and the macrocosm and may bring about a revolutionary breakthrough in the basic theory of physics. X-ray detection of dark matter has attracted more and more attention in the dark matter search. This paper focuses on the detection of sterile-neutrino dark matter particles in the keV band and designs a detection scheme for them. Based on the theory of X-ray total reflection, the imaging principles of lobster-eye optics are discussed, and the influence of structural factors on the focusing performance is investigated. Models of lobster-eye optics designed for different detection modes have been set up in a simulation tool, and imaging simulations are carried out. The results show that the focusing performance of the lobster-eye optics improves prominently when the optical structure is designed for the target energy spectrum distribution. Within the limits of existing processing technology, a lobster-eye optics experimental sample has been fabricated; the experimental result shows a clear cross pattern and peak intensity, verifying the focusing properties of lobster-eye optics.

Keywords: Dark matter · Sterile neutrino · X-ray detection · Lobster-eye optics
1 Introduction

A large number of astronomical observations over the past half century have established that the main components of the universe are dark matter and dark energy [1]. However, the standard model of particle physics cannot explain the physical nature of dark matter and dark energy, so they are recognized as "two dark clouds in the clear sky of twenty-first-century physics." The experimental and theoretical exploration of dark matter and dark energy will greatly deepen our understanding of the microcosm and macrocosm, and it is very likely to bring a revolutionary breakthrough in the fundamental theory of physics.

© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_4
Theoretical studies have shown that sterile neutrinos may play an important role in fundamental problems of particle physics such as the baryon asymmetry, the lepton asymmetry, and primordial nucleosynthesis [2–4]. Of particular importance, sterile neutrinos with masses in the keV range are ideal dark matter candidates with the properties of "warm dark matter" (WDM) [5, 6]. The value of sterile neutrinos to particle physics, astrophysics, and cosmology has attracted extensive attention from astronomers and experimental physicists, and many teams worldwide are currently working on their detection [7–9]. A detection, if achieved, would be not only a major breakthrough in cosmology but also a milestone in particle physics. This project plans to develop a new X-ray detector technology that combines a large field of view with high energy resolution to search for evidence of keV sterile-neutrino dark matter in nearby galaxy clusters.
2 Progress

The rapid development of X-ray astronomy has been made possible by the wide application of X-ray focusing telescopes on satellites. Wolter I [10] reflecting optics (Fig. 1), consisting of a parabolic mirror followed by a hyperbolic mirror, is the most popular geometry. It has flown aboard many satellites, including Chandra and XMM-Newton in the 1990s (see Figs. 2 and 3), and Swift, Suzaku, and NuSTAR in the twenty-first century. Although telescopes with Wolter I optics have remarkable spatial resolution, their FoV is usually very small (less than 1°). Lobster-eye optics [11], however, overcomes the restricted FoV of Wolter optics: a telescope adopting lobster-eye optics can have a much wider FoV and lower mass at the same time, which makes it very suitable as an X-ray all-sky monitor. Lobster-eye optics has therefore become a hot research topic [12]. Early X-ray searches for keV sterile-neutrino decay signals failed to detect clear signals but still provided effective limits for some
Fig. 1 Wolter I X-ray collecting system
Fig. 2 Chandra telescope
Fig. 3 XMM-Newton telescope
mechanisms of sterile neutrinos [12], as shown in Fig. 4. In 2014, Bulbul et al. [13] analyzed XMM-Newton X-ray data accumulated over a decade, from the superposition of the Perseus galaxy cluster and dozens of other nearby clusters. A suspected emission line (confidence of approximately 3σ) was found in the spectrum with a peak energy of approximately 3.5 keV, inconsistent with the characteristic line energy of any known element. According to Bulbul et al., this signal is likely to consist of X-ray photons generated by the decay of sterile neutrinos with a mass of 7 keV. Assuming that the dark matter in the cluster is composed entirely of sterile neutrinos, the expected X-ray flux Fcl is:
Fig. 4 Mass and the mixing angle of the inert neutrino
Fcl = ΓN→γνa MDM,FoV / (4πDL² ms)
    ≈ 1.4×10⁻⁷ [sin²(2θ)/10⁻¹¹] [ms/7 keV]⁴ [MDM,FoV/10¹³ M⊙] [DL/100 Mpc]⁻² ph cm⁻² s⁻¹   (1)
where ΓN→γνa is the radiative decay rate, determined by the sterile-neutrino mass ms and the mixing angle θ between the sterile neutrino and ordinary neutrinos; MDM,FoV is the mass of dark matter contained in the telescope field of view; and DL is the distance of the galaxy cluster. The above emission line signal therefore corresponds to ms ≈ 7.0 keV and sin²(2θ) ≈ 7×10⁻¹¹. As shown in Fig. 4, the colored region is the parameter space excluded by existing observations, and the data point at a mass of 7 keV corresponds to the suspected emission line from nearby clusters detected by the XMM-Newton satellite [13]. This discovery immediately caught the attention of the astronomy and physics communities. Many international teams have used different X-ray telescopes to track different targets (dwarf galaxies, massive galaxies, and clusters), some of which support the 3.5 keV signal; for example, a 3.5 keV weak emission line was detected in the M31 nuclear region using XMM-Newton data [14], and a similar emission line was found in the Perseus cluster with Suzaku observations, coinciding with the results of Bulbul et al. However, other work failed to detect the 3.5 keV signal [15–17] at the expected flux (i.e., assuming a universal sterile-neutrino mixing angle) (Figs. 5 and 6).
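The scaling of the decay flux with the sterile-neutrino parameters can be evaluated numerically. A sketch of the standard decay-flux scaling (Γ ∝ sin²(2θ) ms⁵, divided by ms for the photon count per unit dark matter mass), using the normalization quoted in the text:

```python
# Sterile-neutrino decay flux scaling: F ∝ sin^2(2θ) m_s^4 M_DM / D_L^2,
# normalized so that sin^2(2θ)=1e-11, m_s=7 keV, M_DM=1e13 M_sun and
# D_L=100 Mpc give 1.4e-7 ph cm^-2 s^-1.
def decay_flux(sin2_2theta, m_s_keV, M_dm_Msun, D_L_Mpc):
    return (1.4e-7
            * (sin2_2theta / 1e-11)
            * (m_s_keV / 7.0) ** 4
            * (M_dm_Msun / 1e13)
            / (D_L_Mpc / 100.0) ** 2)

# Parameters of the suspected 3.5 keV line: m_s ≈ 7 keV (a 7 keV
# neutrino decays to a ~3.5 keV photon), sin^2(2θ) ≈ 7e-11.
F = decay_flux(7e-11, 7.0, 1e13, 100.0)
print(F)  # ≈ 9.8e-7 ph cm^-2 s^-1
```

The strong ms⁴ dependence is why even a modest uncertainty in the line energy translates into a wide band in the mass-mixing-angle plane of Fig. 4.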
Fig. 5 Earth material energy line
Fig. 6 A weak 3.5 keV emission line signal (3σ) found by XMM-Newton in the Andromeda Galaxy (M31)
The current observations are mainly limited by the signal-to-noise ratio of the available data: whether a signal is present or not, the statistical confidence is only about 3σ, leaving considerable room for further exploration. Based on the previous research results, this paper designs an X-ray lobster-eye focusing lens with a high-sensitivity detector for deep exposures of several nearby massive galaxy clusters. The sensitivity is 1–2 orders of magnitude higher than existing observations, so the sterile-neutrino decay signal can be either detected or excluded, and the relevant theoretical models can be effectively constrained.
3 Lobster-Eye Optics Design

3.1 Design Principle

Lobster-eye optics comprises very thin spherical plates containing numerous micro square pores whose axes point radially toward a common center of curvature. X-rays are reflected at grazing angles from the polished pore walls, as are scattered electrons and protons. The X-rays going through this type of optics form a cruciform point-spread function [11, 18] on a sphere with half the radius of curvature of the optics. A system adopting lobster-eye optics can have [19] a much wider FoV and lower mass at the same time, which makes it very suitable as an X-ray all-sky monitor; lobster-eye optics has therefore become a hot research topic [12] (Fig. 7).

Where the X-rays end up on the detector plane depends on how they pass through the micro pores. Suppose the numbers of reflections off the two pairs of parallel walls are Nx and Ny. If the micro pores are all perfectly oriented, grazing-incidence X-rays undergoing only one reflection (or, in general, with one of Nx and Ny even and the other odd) are redirected to form two perpendicular line foci; those that undergo two reflections from adjacent walls (or, in general, with both Nx and Ny odd) are brought to a true focus; and rays passing through the micro pores without reflection (or, in general, with both Nx and Ny even) are not focused. This imaging principle makes a wide-angle focusing telescope possible [12] (Fig. 8).

The imaging formula of the micro-pore optical array system is:

1/l + 1/s = 2/R   (2)

Fig. 7 Lobster-eye optics design principle
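The parity rule above can be encoded directly. A minimal sketch classifying a ray by its reflection counts (Nx, Ny) off the two wall pairs:

```python
# Classify where a ray ends up in the lobster-eye point-spread function
# from its reflection counts off the two pairs of pore walls.
def psf_component(nx, ny):
    odd_x, odd_y = nx % 2 == 1, ny % 2 == 1
    if odd_x and odd_y:
        return "true focus"   # both counts odd: central spot
    if odd_x or odd_y:
        return "line focus"   # one odd, one even: cruciform arm
    return "unfocused"        # both even: diffuse patch

assert psf_component(1, 1) == "true focus"
assert psf_component(1, 0) == "line focus"
assert psf_component(0, 0) == "unfocused"
```

In a ray-trace of the optic, tallying rays into these three classes gives the relative intensity of the central spot, the cross arms, and the diffuse background of the point-spread function.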
Fig. 8 Imaging of the micro-hole optical array system
Let s be infinite; then the focal length of the system is:

f = R/2   (3)

The spread of the spot contributed on the image plane by each reflecting unit of the micro-pore array follows from the geometry:

Δs = (t sin φn)/cos 2φn   (4)

The focal spot of the system:

Δl = [(R + t/2)/2 − (R cos φn)/2] tan 2φn   (5)

Effective aperture of the system:

2D = 2R tan φn   (6)

3.2 Wider FoV All-Sky Monitor Lobster-Eye Optics Design
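The first-order relations (3), (4), and (6) can be checked numerically; with the radius of curvature used later in this chapter (R = 750 mm), the focal length comes out at 375 mm. A sketch (the 1° grazing angle is an illustrative value, not a design parameter from the chapter):

```python
import math

# First-order lobster-eye relations: focal length (3), single-pore
# image spread (4), and effective aperture (6).
def lobster_eye(R_mm, t_mm, phi_n_rad):
    f = R_mm / 2.0                                             # (3)
    ds = t_mm * math.sin(phi_n_rad) / math.cos(2 * phi_n_rad)  # (4)
    aperture = 2.0 * R_mm * math.tan(phi_n_rad)                # (6)
    return f, ds, aperture

# R = 750 mm and pore length t = 1.06 mm are the values quoted in
# Sect. 3.4; phi_n = 1 degree is an illustrative grazing angle.
f, ds, aperture = lobster_eye(750.0, 1.06, math.radians(1.0))
print(f)  # 375.0 mm, matching the focal length quoted in Sect. 3.4
```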
The multi-lens array shown in Fig. 9 is composed of a 2 × 2 spherical lens array. This design effectively enlarges the field of view of the system, detecting X-ray signals from multiple incident directions (the different colored beams in the figure), so the structure is suitable for observing a wide range of X-ray astronomical phenomena. This project uses the large field of view of the MPO optical system so that a single exposure can cover an entire galaxy cluster (Fig. 9).

3.3 Coating Design
3.3.1 Materials

The reflectivities of Ir and Ni are analyzed at 0.4, 1, 4, and 10 keV for a 20 nm coating thickness, 1 nm roughness, and grazing incidence angles of 0–1.5°. In order to achieve higher reflectance in each spectral
Fig. 9 2 2 optic system
Fig. 10 The reflection of the two materials
segment at larger grazing incidence angles, the Ir coating is preferred (Fig. 10).

3.3.2 Thickness

The film thickness affects the reflectivity, the adhesion of the film, the plating cycle, and so on, so it must be optimized. Using the membrane design software, the reflectivity was analyzed for film thicknesses of 20, 30, and 40 nm at grazing incidence angles of 0–2° and energies of 400 eV, 1 keV, 4 keV, and 10 keV; the results are shown in Fig. 11. Considering the low deposition rate and long plating time of ALD, a film thickness of 20 nm is preferred (Fig. 11).
Fig. 11 The reflection of different film thickness. (a) 0.4 keV, (b) 1 keV, (c) 4 keV, (d) 10 keV
3.3.3 Roughness

Reasonable and feasible roughness requirements must be proposed according to the project needs. Here the roughness–reflectance characteristics of a 20-nm-thick film were analyzed for roughnesses of 0.2, 0.5, 1, 1.5, 2, 2.5, and 3 nm at 400 eV, 1 keV, 4 keV, and 10 keV over grazing incidence angles of 0–2°; the effect of film roughness on reflectivity is shown in Fig. 12. The red curves represent the reflectance at a roughness of 0.2 nm, the green a roughness of 0.5 nm, and the blue curves roughnesses of 1 nm and above; the reflectivity decreases as the roughness increases. The roughness should therefore be better than 1 nm (Fig. 12).

3.4 Lobster-Eye Optics
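The sensitivity to roughness can be illustrated with the Debye–Waller attenuation factor exp[−(4πσ sinθ/λ)²], which multiplies the ideal reflectivity. This is a simplified textbook model, not the membrane-design software used above:

```python
import math

# Debye-Waller roughness attenuation of grazing-incidence reflectivity:
# R = R0 * exp(-(4*pi*sigma*sin(theta)/lambda)^2), with sigma the rms
# roughness and lambda the photon wavelength.
def roughness_factor(sigma_nm, theta_deg, energy_keV):
    lam_nm = 1.2398 / energy_keV   # photon wavelength in nm
    q = 4.0 * math.pi * sigma_nm * math.sin(math.radians(theta_deg)) / lam_nm
    return math.exp(-q * q)

# At 1 keV and a 1-degree grazing angle, 1 nm roughness costs only a
# few percent, but at 4 keV the same roughness is already severe.
print(roughness_factor(1.0, 1.0, 1.0))   # ~0.97
print(roughness_factor(1.0, 1.0, 4.0))   # ~0.6
```

This is why the 1 nm roughness bound above matters most for the high-energy end of the band of interest.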
This system consists of the lobster-eye optics, a CCD detector, supporting structures, and some simple shielding. The lobster-eye optics is composed of a 2 × 2 spherical lens array. Each lens has a thickness of 1.06 mm, an aperture of about
Fig. 12 The reflection of different film roughness. (a) 0.4 keV, (b) 1 keV, (c) 4 keV, (d) 10 keV
Fig. 13 Lobster-eye optics
45 × 45 cm², and a radius of curvature of 750 mm. The micro pores in each lens have a size of 20 × 20 μm² and a length of 1.06 mm, and the wall between two adjacent pores is 6 μm thick. The material of the micro-square-pore lens is a glass comprising mainly SiO2 and PbO, and the pores are coated with 20-nm-thick iridium [12]. The lobster-eye optics has a focal length of 375 mm, as shown in Fig. 13.
Design of Lobster-Eye Focusing System for Dark Matter Detection
3.5
Lobster-Eye Optics Test
The lobster-eye optics testing program includes the following: (1) curvature test; (2) wire pointing test; (3) roughness test; (4) film test; and (5) X-ray focusing test (Table 1).
4 Lobster-Eye Focusing System 4.1
Prototype
The structure of the lobster-eye four-channel camera is designed as shown. In order to bring the four sets of MPO components to a common focus, the mounting end face of the MPO frame is designed as a bevel. After the MPO components are installed, by choosing the tilt angle of each lens end face, the rays from the four MPO sets can be focused onto the four quadrants of the detector (Fig. 14). 4.2
Detector
The active area of the detector is 13.3 × 13.3 mm², and each pixel is 13 μm. The optics together with the detector produce a FoV of 2° × 2° (Fig. 15).
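The numbers above are mutually consistent: a spherically bent micro-pore lens of radius R focuses at f = R/2, and the stated focal length and detector size reproduce the 2° × 2° field of view. A quick arithmetic check:

```python
import math

R = 750.0        # lens radius of curvature, mm
f = R / 2        # lobster-eye focal length, mm (spherical MPO focuses at R/2)
det = 13.3       # detector side length, mm

fov_deg = 2 * math.degrees(math.atan(det / 2 / f))
print(f"focal length = {f:.0f} mm, field of view = {fov_deg:.2f} deg")
```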
Table 1 Test results

No.  Test item        Designed
1    Radius           750 ± 10 mm
2    Wire pointing    0.1
3    Film roughness
4    Film thickness
6    X-ray focusing

momentum wheel X > momentum wheel Y > momentum wheel S. 2.3
Difference Between Single Running and Simultaneous Running
The micro-vibration acceleration response produced by the simultaneous running of the four momentum wheels is shown in Table 3. It can be seen that the micro-vibration generated when the four momentum wheels run simultaneously has a greater effect on the camera than that of any single momentum wheel; its level is slightly larger than that caused by momentum wheel Z running alone, which means that momentum wheel Z makes the greatest contribution to the micro-vibration at the camera. Combining Tables 2 and 3 and comparing the root-mean-square values at the secondary mirror, the micro-vibration caused by the simultaneous running of the four momentum wheels is about 1.3–2.2 times the magnitude of that when momentum wheel Z runs alone, and 2.5–3.5 times that of momentum wheel X running alone.
3 Analysis of Rotation Angle Produced by Micro-Vibration

Because of the high stiffness of the mirrors, the change of the mirror surface shape caused by the micro-vibration need not be considered [4], and each mirror surface is regarded as a rigid body. It has been shown that translational displacement has little influence on the camera image [5, 6]. The MTF decline and image shift of the optical system caused by rotation of the mirrors are therefore the main subjects analyzed in this paper. 3.1
The Establishment of the Finite Element Model
In order to carry out the dynamic analysis of the micro-vibration of the space camera caused by the momentum wheels based on the ground test, a finite element model of the satellite is established. The satellite coordinate system is defined as follows: the X axis points in the direction of satellite flight, the Z axis is parallel to the optical axis of the camera and points toward the center of the earth, and the Y axis is determined by the right-hand rule. A mass point is used to simulate each momentum wheel, and the MPC is connected with the
Influence of Micro-Vibration Caused by Momentum Wheel on the Imaging. . .
Fig. 1 The finite element model of the satellite
momentum wheel bracket and finally connected with the satellite plate. First, the boundary condition of the model is set as a fixed support at the junction with the support vehicle, and the finite element model is modified according to the ground test data so that the analytical response at the key positions of the camera is consistent with the test results. The vibration characteristics of momentum wheels have been studied by Liu and Maghami; the results show that the output disturbance spectrum of a momentum wheel is dominated by the frequency component at the wheel speed [7], and the disturbance can be modeled by a sinusoidal load at the wheel-speed frequency [8]. The amplitude of the load should be set from the RMS of the experimental data, so that the disturbance energy is well reproduced. Although a small part of the simulation results shows large errors compared with the ground test data, the trend of response propagation is consistent. It can be concluded that the micro-vibration dynamics simulation model is accurate and, to a certain extent, can effectively reproduce the satellite micro-vibration and its transmission characteristics. Finally, the solar arrays are added to the model, the boundary condition is set to a free state, and the analysis of the rotation of the mirrors in orbit is carried out. The finite element model of the satellite micro-vibration analysis is shown in Fig. 1. 3.2
Calculation of Rotation Angle of Mirrors
The angle between the center normal of the primary mirror and the Z axis is taken as the rotation angle of the optical axis of the camera, and the angle between the center normals of the secondary mirror and the primary mirror as the rotation angle of the camera's internal misalignment. When the optical axis of the camera rotates around the Z axis by 1″, the offset at the focal plane is not greater than 0.05 μm; therefore, rotation of the optical axis around the Z axis is not considered in this paper. At the same time, the camera is a coaxial three-mirror optical system, and the primary and secondary mirrors are both surfaces of revolution, so a change of the angle between the primary and secondary mirrors around the Z axis does not affect the position of the image. A MATLAB program is used to calculate the angles between the normals. The rotation angles of the micro-vibration generated by the momentum wheels running are shown in Table 4.
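The normal-angle computation described above can be sketched as follows; the deformed normal vector is a hypothetical placeholder for finite element output (the original analysis used MATLAB; Python is used here only for illustration):

```python
import numpy as np

def rotation_angle_arcsec(normal, reference):
    """Angle between a (deformed) mirror normal and a reference axis, in arcseconds."""
    n = normal / np.linalg.norm(normal)
    r = reference / np.linalg.norm(reference)
    ang_rad = np.arccos(np.clip(np.dot(n, r), -1.0, 1.0))
    return np.degrees(ang_rad) * 3600.0

z_axis = np.array([0.0, 0.0, 1.0])
# hypothetical deformed primary-mirror normal taken from one FE output time step
primary_normal = np.array([4.0e-7, -2.0e-7, 1.0])

angle = rotation_angle_arcsec(primary_normal, z_axis)
print(f"optical-axis rotation = {angle:.4f} arcsec")
```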
Table 4 The rotation angles of micro-vibration generated by momentum wheel running (unit: ″)

                        Wheel S/Test 1  Wheel X/Test 2  Wheel Y/Test 3  Wheel Z/Test 4  All/Test 5
Direction               Max    Rms      Max    Rms      Max    Rms      Max    Rms      Max    Rms
Optical axis
  Around X axis         0.05   0.014    0.043  0.011    0.069  0.011    0.036  0.010    0.048  0.017
  Around Y axis         0.041  0.016    0.040  0.014    0.047  0.009    0.036  0.009    0.064  0.016
Internal misalignment
  Around X axis         0.085  0.025    0.099  0.033    0.17   0.035    0.068  0.018    0.2    0.093
  Around Y axis         0.069  0.018    0.100  0.033    0.065  0.018    0.05   0.015    0.169  0.043
Q. Fu et al.
From Table 4, we can see that the rotation angles (according to the RMS values) of the optical axis around the X axis are, in order: all momentum wheels running > momentum wheel S > momentum wheel X = momentum wheel Y > momentum wheel Z. The rotation angles of the optical axis around the Y axis are, in order: all momentum wheels running = momentum wheel S > momentum wheel X > momentum wheel Y = momentum wheel Z. The rotation angles of the camera's internal misalignment around the X axis are, in order: all momentum wheels running > momentum wheel Y > momentum wheel X > momentum wheel S > momentum wheel Z, and around the Y axis: all momentum wheels running > momentum wheel X > momentum wheel Y > momentum wheel S > momentum wheel Z. The momentum wheels with the greatest impact on the optical axis jitter of the camera are momentum wheels X and S, while those with the greatest impact on the misalignment between the primary and secondary mirrors are momentum wheels Y and X. In Sect. 2, the influence ranking of the momentum wheels on the camera according to the acceleration response was different, which shows that judging the image-quality impact of each momentum wheel by the magnitude of the acceleration alone is of limited value.
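The first of these orderings can be reproduced directly from the RMS values in Table 4:

```python
# RMS rotation angles (arcsec) from Table 4, optical axis around the X axis
rms_optical_x = {"All": 0.017, "S": 0.014, "X": 0.011, "Y": 0.011, "Z": 0.010}

ranking = sorted(rms_optical_x, key=rms_optical_x.get, reverse=True)
print(" > ".join(ranking))   # All > S > X > Y > Z (X and Y tie at 0.011)
```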
4 Analysis of the Influence of Micro-Vibration on Imaging Quality

The influence of micro-vibration on the imaging quality of a space camera includes two aspects: one is image blur, which is mainly caused by high-frequency micro-vibration; the other is image distortion, which is mainly caused by low-frequency micro-vibration. Low and high frequency are divided according to the critical frequency: micro-vibration with frequency higher than the critical frequency is called high-frequency micro-vibration, otherwise low-frequency micro-vibration. The critical frequency is related to the integration time and the number of integration stages of the TDICCD [9, 10]. According to the common imaging parameters of the TDICCD space camera in this paper, the critical frequency is found to be 146.9 Hz. According to Sect. 3, when all the momentum wheels are running, the optical axis angles and the camera misalignment angles change the most, so in this section the influence of momentum wheel micro-vibration on the imaging quality of the camera is judged only under this condition. When all the momentum wheels are running, the spectra of the camera's optical axis rotation angle and misalignment rotation angle are shown in Figs. 2 and 3. As these figures show, the frequency of the micro-vibration caused by the momentum wheels is mostly well below the critical frequency of 146.9 Hz, and the proportion above the critical frequency is very small. Therefore, the influence of the momentum wheels on the imaging quality of the space camera on this satellite platform is mainly image distortion. The above rotating angle
[Fig. 2: rotation-angle spectra, d/″ vs. f/Hz over 0–500 Hz; labeled peaks — optical axis: 9.481 Hz/0.00955″, 24.3 Hz/0.0131″, 118.5 Hz/0.006045″; misalignment: 121.5 Hz/0.08996″, 118.5 Hz/0.01668″, 24.3 Hz/0.01043″]
Fig. 2 The spectra of the optical axis rotation angle (left) and the misalignment rotation angle (right) around the X axis
[Fig. 3: rotation-angle spectra, d/″ vs. f/Hz over 0–500 Hz; labeled peaks — optical axis: 16.89 Hz/0.008385″, 24.3 Hz/0.005031″, 42.07 Hz/0.00659″, 237.3 Hz/0.008542″; misalignment: 16.89 Hz/0.01″, 106.7 Hz/0.04731″, 121.5 Hz/0.02343″]
Fig. 3 The spectra of the optical axis rotation angle (left) and the misalignment rotation angle (right) around the Y axis
spectrum data are combined with the optical system parameters to simulate a standard target image. The simulated image under the all-wheels-running condition is shown in Fig. 4. At the original scale of the image, the distortion is almost invisible; the impact of momentum wheel micro-vibration on the image quality of the space camera is acceptable. By comparison with images from the space camera in orbit (shown in Fig. 5), the simulation results are found to agree with the visual effect of the on-orbit images. This proves that the micro-vibration analysis method in this paper can provide a reference for evaluating the quality of on-orbit images.
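The dominance of low-frequency content can be checked against the peaks labeled by the data cursors in Figs. 2 and 3:

```python
critical_hz = 146.9   # critical frequency for the TDICCD imaging parameters

# (frequency Hz, amplitude arcsec) peaks labeled in Figs. 2 and 3
peaks = [(9.481, 0.00955), (24.3, 0.0131), (118.5, 0.006045),
         (121.5, 0.08996), (118.5, 0.01668), (24.3, 0.01043),
         (237.3, 0.008542), (42.07, 0.00659), (24.3, 0.005031),
         (16.89, 0.008385), (106.7, 0.04731), (121.5, 0.02343),
         (16.89, 0.01)]

low = [p for p in peaks if p[0] < critical_hz]
print(f"{len(low)} of {len(peaks)} labeled peaks lie below {critical_hz} Hz")
```

Only the 237.3 Hz peak lies above the critical frequency, consistent with the conclusion that the dominant effect is distortion rather than blur.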
5 Conclusion

In this paper, based on the ground micro-vibration test data of multiple momentum wheels on the satellite, the characteristics of the micro-vibration acceleration response at the camera are studied, and the finite element model of the satellite is corrected
Fig. 4 The simulation image of the momentum wheel under the all running condition
according to the experimental data. The measured acceleration excitation is applied at the momentum wheel positions of the satellite finite element model, and the rotation angles of the mirrors are obtained with a MATLAB program. By comparison, it is found that the momentum wheel that causes the maximum acceleration response does not necessarily produce the largest mirror rotation angle; therefore, it is of limited value to evaluate the imaging-quality impact of each momentum wheel solely by the magnitude of its acceleration response. Combined with the optical system and imaging parameters of the camera, it is found that the influence of momentum wheel micro-vibration on image quality is mainly distortion. Finally, the distorted image caused by the micro-vibration is obtained through simulation of a standard target image, the result is compared with the visual effect of the camera in orbit and found to be in accordance with the on-orbit images. This proves that the micro-vibration analysis method in this paper can provide a reference for evaluating the quality of on-orbit images.
Fig. 5 The image with the space camera on orbit
References

1. Qing-jun, Z., Guang-yuan, W., Gang-tie, Z.: Micro-vibration attenuation methods and key techniques for optical remote sensing satellite. J. Astronaut. 36(2), 125–135 (2015). (in Chinese)
2. Eyeman, C.E., Shea, J.F.: A Systems Engineering Approach to Disturbance Minimization for Spacecraft Utilizing Controlled Structures Technology. MIT, Boston (1990). SER Report
3. Guo-wei, J., Xu-bin, Z., Jun-feng, S.: Modeling and simulation of micro-vibration for a satellite. Spacecr. Environ. Eng. 28(1), 40–44 (2011). (in Chinese)
4. Brain, H., Thomas, Neville, W.: Assessment and control of vibrations affecting large astronomical telescopes. Proc. SPIE 1732, 130–156 (1987)
5. Yiyun, M., Shiping, C.: An innovative method for the real time measurements of image shift due to the motion of the satellite. Spacecr. Recover. Remote Sens. 24(3), 24–29 (2003). (in Chinese)
6. Rudoler, S., Hadar, O., Fisher, M.: Image resolution limits resulting from mechanical vibrations. Part 4: real-time numerical calculation of optical transfer functions and experimental verification. Opt. Eng. 33(2), 566–578 (1994)
7. Liu, K., Maghami, P., Blaurock, C.: Reaction Wheel Disturbance Modeling, Jitter Analysis, and Validation Tests for Solar Dynamics Observatory. AIAA 2008-7232, Washington, D.C. (2008)
8. Rudoler, S., Hadar, O., Fisher, M.: Image resolution limits resulting from mechanical vibrations, part 3: numerical calculation of modulation transfer function. Opt. Eng. 31(3), 581–589 (1992)
9. Bin, F., Qifeng, Y.: Simulation analysis of platform vibration on image quality of satellite-based high-resolution optical system. Chin. Space Sci. Tech. 37(3), 86–92 (2017). (in Chinese)
10. Bowen, Z.: Analysis on effect of micro-vibration on rigid-body space camera imaging. In: Proceedings of the Second High Resolution Remote Sensing Satellites Conference, vol. 47, Beijing (2013). (in Chinese)
Integrated Design of Platform and Payload for Remote Sensing Satellite

Jiaguo Zu¹(*), Teng Wang¹, and Yanhua Wu¹

¹ Beijing Institute of Spacecraft System Engineering, Beijing Institute of Space Mechanics and Electricity, Beijing, China
[email protected]
Abstract. Integrated mechanical design of platform and payload is an important part of system design for remote sensing satellites. In this paper, the necessity of integrated design is studied based on user requirements. Three parts of integrated mechanical design are summarized, and each part is discussed and studied in depth. Keywords: Remote sensing satellite · Integrated mechanical design · Platform · Payload
1 Introduction

The integrated design of satellite platform and payload is an important part of remote sensing satellite system design, especially at present, as satellite users are eager for higher satellite performance, which brings increasing payload size. The integrated design of platform and payload is an effective technical approach to this problem. In recent years, research on integrated mechanical design has become more and more in-depth: satellites in orbit, whether large or small, military surveillance or commercial remote sensing, can be found to apply the idea of integrated mechanical design. This paper analyzes the demand for satellite integrated mechanical design, then summarizes the three main contents of integrated mechanical design and conducts deeper research and discussion on each of them.
2 Integrated Mechanical Design Requirements

Optical remote sensing satellites have been developing toward higher resolution, better image quality, and higher efficiency, while behaving more reliably and costing less. In addition to improvements in payload performance, the mechanical design of satellite systems must be refined and integrated.
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_8
J. Zu et al.
Fig. 1 The necessity of integrated mechanical design
High-resolution requirements: cameras on such satellites are usually designed with large apertures and long focal lengths; the camera diameter can exceed 1 m and its mass 1 t, so the camera takes up a large portion of the satellite volume. A large camera requires a satellite platform with stronger carrying capacity; at the same time, the camera requires a good mechanical environment. Having the satellite platform and the payload independently achieve high strength and high stiffness is costly. High image quality requirements: the optical axis of the camera is sensitive to micro-vibration and thermal deformation of the platform, so the stability requirements on the satellite platform are extremely demanding; an integrated design of the full link is required from the system perspective. High image target positioning accuracy requires a uniform and stable space-time reference between the camera and the satellite platform, eliminating deviations in matching and conversion. High usage performance requirements: satellites must have efficient task execution capability, so platform and payload need to be integrated around agility, including agile configuration design and small-inertia/high-maneuverability design (Fig. 1).
3 Integrated Mechanical Design Content

According to the analysis of task requirements, the overall idea of integrated mechanical design is to conduct the satellite configuration design and structure design around the payload configuration, optimize the satellite mass and inertia, optimize the force transmission path, develop a distributed vibration isolation design, and balance the interests of both payload and platform in the design. The optimal satellite configuration and structural parameters are obtained through integrated optical, mechanical, and thermal design and analysis methods, so that the overall system performance is optimal. The contents of integrated mechanical design can be mainly summarized as configuration and layout design, structural mechanics design, and micro-vibration suppression design (Fig. 2).
Integrated Design of Platform and Payload for Remote Sensing Satellite
Fig. 2 Integrated mechanical design content
Fig. 3 Configuration design
4 Integrated Overall Configuration and Layout Design

The integrated configuration of the remote sensing satellite payload and platform brings compression of the whole-satellite scale, reduction of the whole-satellite rotational inertia, and improvement of the platform bearing ratio, built around a reasonable layout of the payload. The topology of the remote sensing satellite configuration evolved from the original platform–payload docking configuration to the embedded configuration, and from the embedded configuration to the converged configuration, as shown in Fig. 3. The current general opinion is that both the embedded configuration and the converged configuration count as typical integrated configurations. The difference between the two configurations: the embedded configuration is based on the integrated configuration of the
Table 1 Configuration topology comparison

No.  Item                                Docking configuration                      Embedded configuration                     Converged configuration
1    Interface of platform and payload   Located on the docking surface of the      Located on the docking surface of the      The main bearing frame of the payload is
                                         payload and the platform, acting as a      main load-bearing frame and the            part of the main bearing system of the
                                         structural plate or frame                  platform, acting as an independent         satellite and there is no obvious
                                                                                    payload adaptation structure               interface
2    Platform section form               Mainly square                              Mainly hexagonal                           Regular hexagon/octagon or round
3    Centroid height                     High                                       Low                                        Low
4    Weight ratio of bearing structure   10–15%                                     6–12%                                      0.344
5    Moment of inertia (kg·m²)
6    Examples

4 Optical Design Principle of Space Remote Sensor 4.1
4 Optical Design Principle of Space Remote Sensor 4.1
The Design of R-C Cassegrain System
The coaxial two-mirror system is characterized by large aperture, no chromatic aberration, and a wide spectral range of reflection. It has very important applications in astronomical telescopes and in infrared or ultraviolet optical systems. Both the primary mirror and the secondary mirror are quadric surfaces, whose surface equation is:

r² = 2Rx − (1 − e²)x²,  with r² = y² + z²   (3)

k = −e²   (4)
where e denotes the eccentricity of the quadric surface, R the radius of curvature at the vertex of the mirror, and k the conic coefficient. The basic structure of the coaxial two-mirror system is shown in Fig. 1. The secondary mirror obstructs the central part of the beam incident on the primary mirror, and the obscuration ratio is:

ψ = h₂/h₁ = m₂/f₁′   (5)
The lateral magnification of the secondary mirror can be denoted as:

β = m₂′/m₂   (6)
Y.-G. Xing et al.
Fig. 1 Structure diagram of the coaxial two-mirror system
The radii of curvature of the primary and secondary mirrors satisfy the following equation:

R₁/R₂ = (1 + β)/(ψβ)   (7)
The R-C system is the aplanatic form of the Cassegrain system. Setting both the spherical aberration and the coma equal to zero, the surface parameters of the primary and secondary mirrors are obtained:

e₁² = 1 + 2ψ/[(1 − ψ)β²]   (8)

e₂² = [(1 − β)/(1 + β)]² + 2β/[(1 − ψ)(1 + β)³]   (9)
The initial parameters of the R-C system are designed as follows. In order to ensure high energy utilization efficiency, the obscuration ratio is taken as 25%; with R₁ = 1350 and R₂ = 450, Eq. (7) gives β = −4, and Eqs. (4), (8), and (9) give the conic coefficient of the primary mirror k₁ = −1.0417 and that of the secondary mirror k₂ = −3.1728. The initial conic coefficients of the primary and secondary mirrors are both less than −1, so both mirrors are hyperbolic, meeting the requirements of the R-C system. 4.2
Achromatic Principle of Rear Lens Group
The optical material has different refractive indices for different light waves. Therefore, the light with the same aperture and different colors has different intersection points with
Optical System Design of Space Remote Sensor Based on the Ritchey. . .
the optical axis after passing through the optical system; the situation is the same for light of different colors passing through different aperture zones. At any image position, the image of an object point is a colored diffuse spot. The difference between the axial positions of the image points for two different colors of light is called the axial chromatic aberration, which is related only to the aperture. The Taylor series expansion of the axial chromatic aberration shows that it is a function of even powers of the aperture; when the aperture is zero, the axial chromatic aberration is not zero, so the series expansion contains a constant term. It can be represented as:

ΔL′_FC = A₀ + A₁h² + A₂h⁴ + ⋯ = δl′_FC + (A_F1 − A_C1)h² + (A_F2 − A_C2)h⁴ + ⋯   (10)
where δl′_FC denotes the axial chromatic aberration of the paraxial rays, A_Fi the second-order spherical aberration coefficients at wavelength F, and A_Ci those at wavelength C. For most optical systems, the terms beyond the quadratic in the expansion are already small, so the primary axial chromatic aberration for a single refracting sphere is defined as:

δL′_FC = A₀ + A₁h² = δl′_FC + (A_F1 − A_C1)h²   (11)
According to the ray-tracing formulas for a single refracting sphere, it can be expressed as:

δL′_FC = (1/(n′_k u′_k²)) Σᵢ₌₁ᵏ C_I,  C_I = l·u·n·i·(Δn/n − Δn′/n′)   (12)
where Δn′ = n′_F − n′_C and Δn = n_F − n_C, and C_I denotes the distribution coefficient of the primary axial chromatic aberration. For a system of thin lenses, according to Eq. (12), we can get:

Σᵢ₌₁ᴺ C_I = Σᵢ₌₁ᴺ h_i²(ϕ_i/υ_i)   (13)
where ϕ_i denotes the power of a thin lens, υ_i the Abbe number of its glass, N the number of thin lenses, and h_i the semi-aperture of the lens. The same medium has different refractive indices for different wavelengths. Since the focal length of the system is a function of the focal length, spacing, and refractive index of each lens, and according to the lateral magnification of the optical system β = l′/l = f/x, for off-axis object points the lateral magnifications of different colors of light are not equal. This difference is called lateral chromatic aberration. The Taylor series expansion of the lateral chromatic aberration can be expressed as:
Δy′_FC = b₀y + b₁y³ + b₂y⁵ + ⋯   (14)
The primary lateral chromatic aberration of a thin-lens system can be represented as:

Σᵢ₌₁ᴺ C_II = Σᵢ₌₁ᴺ h_i h_pi (ϕ_i/υ_i)   (15)
From Eqs. (13) and (15), we can conclude that the contribution of each lens to the chromatic aberration is related to the power of the lens, the Abbe number of its glass, and the position of the lens in the optical path. Therefore, by distributing the power among the lenses of the group, the primary chromatic aberration can be eliminated.
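As a minimal illustration of this power-distribution idea (not the five-lens design itself), a thin contact doublet splits a total power ϕ between two glasses so that Σϕᵢ/υᵢ = 0; the Abbe numbers below are hypothetical crown/flint values:

```python
def achromat_powers(phi_total, v1, v2):
    """Thin contact doublet: split phi_total so that phi1/v1 + phi2/v2 = 0."""
    phi1 = phi_total * v1 / (v1 - v2)
    phi2 = -phi_total * v2 / (v1 - v2)
    return phi1, phi2

phi = 1 / 500.0                             # total power for a 500 mm focal length, mm^-1
p1, p2 = achromat_powers(phi, 60.0, 36.0)   # hypothetical crown/flint Abbe numbers

print(f"phi1 = {p1:.5f}, phi2 = {p2:.5f} mm^-1")
assert abs(p1 + p2 - phi) < 1e-12           # total power preserved
assert abs(p1 / 60.0 + p2 / 36.0) < 1e-12   # primary chromatic aberration cancelled
```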
5 Result of Design 5.1
Layout and Parameters of Space Optical Remote Sensor
The initial structure of the R-C system was determined in Sect. 4; for the rear lens group, five spherical thin lenses are selected. First, glass materials with suitable Abbe numbers are selected; then, assuming a half aperture of 40 mm for each lens, the powers of the five single lenses are determined according to the achromatic condition of the system, giving the original parameters of each thin lens. The full field of view is chosen to be 0.8° × 0.8°. Ray tracing and optimization are performed using ZEMAX software [6]. The focal length of the space optical remote sensor is optimized to 5400 mm, and the total length of the system is 880 mm. The structure of the system is shown in Fig. 2. The initial and optimized parameters of the R-C system are shown in Table 2 below:
Fig. 2 Structure diagrams of space optical remote sensor. (a) Optical system diagram, (b) 3D layout
Table 2 The parameters of the R-C system

Parameter   R1        R2       d       k1       k2       ψ (%)   β
Initial     1350      450      506.25  −1.0417  −3.1728  25      −4
Optimized   1453.38   434.23   557.76  −1.0672  −3.1043  25.27   −6.48
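The initial column of Table 2 can be reproduced from Eqs. (7)–(9); the formulas below are as reconstructed in this chapter, with the sign convention β = −4 made explicit:

```python
psi = 0.25      # obscuration ratio
beta = -4.0     # secondary lateral magnification, from Eq. (7) with R1/R2 = 3

e1_sq = 1 + 2 * psi / ((1 - psi) * beta**2)                                    # Eq. (8)
e2_sq = ((1 - beta) / (1 + beta))**2 + 2 * beta / ((1 - psi) * (1 + beta)**3)  # Eq. (9)

k1, k2 = -e1_sq, -e2_sq    # conic coefficients, k = -e^2 (Eq. 4)
print(f"k1 = {k1:.4f}, k2 = {k2:.4f}")   # both < -1: hyperbolic, as an R-C requires
```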
Table 3 The power distribution of each thin lens

Lens          1          2          3        4          5
h_i (mm)      47.647     42.498     39.210   37.110     41.878
ϕ_i (mm⁻¹)    8.357E−4   1.032E−3   0        6.616E−3   2.453E−3
Fig. 3 The MTF of space optical remote sensor. (a) MTF, (b) MTF vs full field of view (in +Y direction)
The rear lens group consists of five thin lenses with different glass Abbe numbers. The power distribution of each thin lens after several optimizations is shown in Table 3. According to Eqs. (11) and (13), the axial chromatic aberration distribution coefficient of the rear lens group is 0.000656, and the lateral chromatic aberration distribution coefficient is 0.000049. Since the R-C system itself has no chromatic aberration, the space optical remote sensor basically eliminates chromatic aberration. 5.2
Image Quality Evaluation
The modulation transfer function curve of the space optical remote sensor is shown in Fig. 3a. The MTF is better than 0.45 at the Nyquist frequency (fN = 50 lp/mm), and better than 0.40 over the full field of view (in the +Y direction), as shown in Fig. 3b. The space optical remote sensor is almost diffraction limited. The standard spot diagram of the space optical remote sensor is shown in Fig. 4a. The maximum RMS spot radius is less than 6 μm; moreover, the RMS spot radius is
Fig. 4 RMS spot diagram and geometric encircled energy of the system. (a) RMS spot diagram, (b) geometric encircled energy
Fig. 5 The aberration of the system. (a) Field curvature and distortion, (b) wavefront aberration vs. wavelength
smaller than the Airy spot radius (1.22λF# = 1.22 × 0.6 μm × 9 ≈ 6.6 μm) in the full field of view. The curve of geometric encircled energy is shown in Fig. 4b: within one pixel, the energy concentration is better than 70%, and the image quality is good. The field curvature of the space optical remote sensor is less than 0.25 mm in the full field of view, and the distortion is less than 1.1%, as shown in Fig. 5a. The variation of wavefront aberration with wavelength is shown in Fig. 5b; the RMS wavefront aberration is 0.02–0.08 waves over the full field of view.
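The Nyquist frequency and the diffraction limit quoted above are easy to cross-check; the F-number of 9 and wavelength of 0.6 μm follow from the Airy-radius expression used in the text:

```python
import math

pixel_mm = 1 / (2 * 50.0)     # Nyquist of 50 lp/mm  ->  10 um pixel
wavelength_mm = 0.6e-3        # design wavelength, 0.6 um
f_number = 9.0

cutoff = 1 / (wavelength_mm * f_number)    # incoherent diffraction cutoff, lp/mm
x = 50.0 / cutoff
mtf_diffraction = (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

print(f"pixel = {pixel_mm * 1e3:.0f} um, cutoff = {cutoff:.0f} lp/mm, "
      f"diffraction MTF at Nyquist = {mtf_diffraction:.2f}")
```

The diffraction-limited MTF at Nyquist is about 0.66, so the designed value of 0.45 is a large fraction of the theoretical limit, consistent with the "almost diffraction limited" claim.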
6 Conclusions

Here, a space optical remote sensor with high resolution (GPR

Y′ = (Vec1 + Vec2)/|Vec1 + Vec2|
Z′ = (Vec1 × Vec2)/|Vec1 × Vec2|   (1)
X′ = Y′ × Z′
Then the transition matrix from the new coordinate system constructed by Vec1 and Vec2 to the mapping camera coordinate system is given by,
Fig. 1 Relationship between the mapping camera coordinate system, the front and back camera coordinate systems, and the STC bore-sights
Star Camera Layout and Orientation Precision Estimate for Stereo. . .
Fig. 2 Diagram of how the STC bore-sight error transfers to the axes of the new coordinate system
M_STC1-STC2 = [X′ Y′ Z′]   (2)
Let the bore-sight errors of the two STCs both equal δϕ_STC. As shown in Fig. 2, the equivalent error about the Y′ axis is:

δϕ′_Y = δϕ_STC/(√2 sin θ)   (3)

where θ is half of the angle between STC1 and STC2, and the factor √2 accounts for the adjustment effect of using two STCs. Similarly, the determination errors of the X′ and Z′ axes, δϕ′_X and δϕ′_Z, are obtained as:

δϕ′_X = δϕ_STC/(√2 sin(θ + π/2)),  δϕ′_Z = δϕ_STC/(√2 sin(π/2))   (4)
Then the three-axis error matrix of the stereo mapping camera coordinate system X, Y, Z is given by:

[δϕ_X  δϕ_Y  δϕ_Z]ᵀ = M_STC1-STC2 · diag(δϕ′_X, δϕ′_Y, δϕ′_Z)   (5)
Furthermore, the determination error of the X, Y, and Z axes of the stereo mapping camera coordinate system is shown in Eq. (6):
W. Z. Wang et al.
δϕ_X = √(δϕ_X(1)² + δϕ_Y(1)² + δϕ_Z(1)²)
δϕ_Y = √(δϕ_X(2)² + δϕ_Y(2)² + δϕ_Z(2)²)   (6)
δϕ_Z = √(δϕ_X(3)² + δϕ_Y(3)² + δϕ_Z(3)²)
A similar method can be applied to obtain the three-axis errors of the front and back camera coordinate systems.
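Equations (1)–(6) can be sketched end to end as follows (the equation forms are as reconstructed in this chapter, and the bore-sight directions are hypothetical examples):

```python
import numpy as np

def stc_axis_errors(vec1, vec2, dphi_stc):
    """Propagate a common STC bore-sight error to the mapping camera axes,
    following Eqs. (1)-(6)."""
    v1 = vec1 / np.linalg.norm(vec1)
    v2 = vec2 / np.linalg.norm(vec2)
    y = (v1 + v2) / np.linalg.norm(v1 + v2)                 # Eq. (1)
    z = np.cross(v1, v2) / np.linalg.norm(np.cross(v1, v2))
    x = np.cross(y, z)
    M = np.column_stack([x, y, z])                          # Eq. (2)

    theta = 0.5 * np.arccos(np.clip(v1 @ v2, -1.0, 1.0))    # half the STC separation
    dY = dphi_stc / (np.sqrt(2) * np.sin(theta))            # Eq. (3)
    dX = dphi_stc / (np.sqrt(2) * np.sin(theta + np.pi / 2))  # Eq. (4)
    dZ = dphi_stc / (np.sqrt(2) * np.sin(np.pi / 2))

    E = M @ np.diag([dX, dY, dZ])                           # Eq. (5)
    return np.sqrt((E ** 2).sum(axis=1))                    # Eq. (6)

# two STC bore-sights separated by 60 deg, symmetric about the Y axis (hypothetical)
a = np.deg2rad(30.0)
err = stc_axis_errors(np.array([np.sin(a), np.cos(a), 0.0]),
                      np.array([-np.sin(a), np.cos(a), 0.0]), dphi_stc=1.0)
print("axis errors (units of the single-STC error):", np.round(err, 3))
```

For this symmetric layout the bisector axis Y′ is the worst determined, as Eq. (3) predicts.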
3 Error Weighted Orientation Accuracy Modeling

According to photogrammetric theory, when the exterior orientation elements of a stereo image pair are known, computing ground point coordinates by the forward space intersection method is the most basic operation of space photogrammetry [6]. According to [1], the forward intersection geometry of photogrammetric space is shown in Fig. 3, where f and f′ are the focal lengths of the front and back cameras, φF and φB are the angles between the camera bore-sights and the vertical, B is the photographic baseline, H is the orbital height, α is the geocentric angle corresponding to the two stations, R is the radius of the earth, and y is the distance from the edge of the line array of the mapping camera. When only the STC attitude measurement error is considered, the image positioning accuracy formula of the mapping camera simplifies to:
Fig. 3 Diagram of photogrammetry space forward intersection
Star Camera Layout and Orientation Precision Estimate for Stereo. . .
$$m_{XM} = \sqrt{\frac{H^2}{2}\left(1 + \tan^2\alpha\right) m_\varphi^2 + \frac{(Hy\tan\alpha)^2}{2f^2}\, m_\omega^2 + \frac{(Hy)^2}{2f^2}\, m_\kappa^2}$$

$$m_{YM} = \sqrt{\frac{\left(f^2 H \tan^2\alpha + H y^2\right)^2}{2 f^4 \tan^2\alpha}\, m_\varphi^2 + \frac{(Hy)^2}{2(f\tan\alpha)^2}\, m_\omega^2 + \frac{H^2}{2}\, m_\kappa^2}$$

$$m_{ZM} = \frac{H}{B}\sqrt{\frac{2H^2}{\cos^4\alpha}\, m_\varphi^2 + \frac{2(Hy\tan\alpha)^2}{f^2}\, m_\omega^2 + \frac{2(Hy)^2}{f^2}\, m_\kappa^2} \quad (7)$$
where m_XM, m_YM, m_ZM are the positioning errors in plane X, Y and in elevation Z, respectively. The measurement errors of the attitude angles of the camera system, m_φ, m_ω, m_κ, can be calculated from the attitude determination error model established above. The XY plane positioning accuracy is shown in Eq. (8):

$$m_{XYM} = \sqrt{m_{XM}^2 + m_{YM}^2} \quad (8)$$
Furthermore, the EWO method is introduced:

$$m_{IWO} = \sqrt{Q^2 m_{XYM}^2 + m_{ZM}^2} \quad (9)$$
where m_IWO is the positioning accuracy evaluation index and Q is the weighting factor given by Eq. (10):

$$Q = \frac{RMSE_z}{RMSE_p} \quad (10)$$
where RMSE_p and RMSE_z are the plane position error and the elevation error listed in Table 1, respectively.
Table 1 1:10000 scale stereo mapping requirements [7]

Scale   | Terrain     | Plane position error RMSEp (m) | Height error RMSEz (m)
1:10000 | Flat land   | 5                              | 0.5
        | Hills       | 5                              | 1.5
        | Mountain    | 7.5                            | 3
        | Alpine land | 7.5                            | 6
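The EWO index of Eqs. (8)-(10) is straightforward to evaluate. The sketch below uses the hills requirement from Table 1; the positioning-error inputs are illustrative values, not results from the paper.

```python
import math

def ewo_index(m_xm, m_ym, m_zm, rmse_p, rmse_z):
    """Error-weighted orientation (EWO) index of Eqs. (8)-(10)."""
    m_xym = math.hypot(m_xm, m_ym)                # Eq. (8): plane error
    q = rmse_z / rmse_p                           # Eq. (10): weighting factor
    return math.sqrt(q**2 * m_xym**2 + m_zm**2)   # Eq. (9): EWO index

# Hills requirement from Table 1: RMSEp = 5 m, RMSEz = 1.5 m.
# The m_* positioning errors (metres) are made-up inputs.
idx = ewo_index(m_xm=2.0, m_ym=1.5, m_zm=2.1, rmse_p=5.0, rmse_z=1.5)
```

Because Q < 1 whenever the elevation requirement is tighter than the plane requirement, the index weights elevation error more heavily relative to the plane error, which is the point of the method.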
4 Simulation Results and Analysis

The transition matrices from the stereo mapping camera coordinate system to the STC1/2 coordinate systems are described by Eq. (11), using the ZXY rotation sequence:

$$C(\gamma,\beta,\alpha) = \begin{bmatrix} \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & \cos\alpha\sin\gamma + \cos\gamma\sin\alpha\sin\beta & -\cos\beta\sin\alpha \\ -\cos\beta\sin\gamma & \cos\beta\cos\gamma & \sin\beta \\ \cos\gamma\sin\alpha + \cos\alpha\sin\beta\sin\gamma & \sin\alpha\sin\gamma - \cos\alpha\cos\gamma\sin\beta & \cos\alpha\cos\beta \end{bmatrix} \quad (11)$$
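A sketch of building a ZXY-sequence matrix like Eq. (11) from elementary rotations. The exact sign pattern depends on the angle conventions, which the extracted equation does not fully determine, so treat the signs below as an assumption; any such product of rotations must in any case be orthogonal with unit determinant.

```python
import numpy as np

def c_matrix(gamma, beta, alpha):
    """ZXY sequence: yaw gamma about Z, then roll beta about X,
    then pitch alpha about Y (one common sign convention)."""
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    rz = np.array([[cg, -sg, 0.0], [sg, cg, 0.0], [0.0, 0.0, 1.0]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    return ry @ rx @ rz   # Z applied first, then X, then Y

C = c_matrix(np.radians(30.0), np.radians(40.0), np.radians(20.0))
# A valid rotation matrix satisfies C @ C.T = I and det(C) = +1.
```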
where α is the pitch angle (rotation about the Y axis), β is the roll angle (rotation about the X axis), and γ is the yaw angle (rotation about the Z axis). The STC1 and STC2 bore-sights are arranged symmetrically about the YOZ plane, so that if the STC1 transition matrix is M_STC1 = C(γ, β, α), the STC2 transition matrix is the mirrored counterpart M_STC2 = C(−γ, β, −α). The remaining simulation parameters are set as follows: orbit height H = 500 km, front and back camera focal length f/f′ = 8 m, line array length y = 300 mm, φF = φB = 15°, STC bore-sight determination accuracy δφ_STC = 0.5″, and RMSE_p and RMSE_z taken as 5 m and 1.5 m, respectively. The three-axis pointing errors of the stereo mapping camera system and of the front and back cameras under different roll and pitch angle conditions are shown in Figs. 4, 5, and 6.
Fig. 4 Stereo mapping camera system triaxial determination error. (a) Stereo mapping camera axis X, (b) stereo mapping camera axis Y, (c) stereo mapping camera axis Z
Fig. 5 Front camera triaxial determination error. (a) Front camera axis XF, (b) front camera axis YF, (c) front camera axis ZF
Fig. 6 Back camera triaxial determination error. (a) Back camera axis XB, (b) back camera axis YB, (c) back camera axis ZB
Fig. 7 The EWO evaluation result with the relationship of the angle of the star sensor bore-sight. (a) The comprehensive evaluation of positioning accuracy. (b) The angle between STC1 and STC2 boresight
Figure 4a shows that the X-axis determination error of the stereo mapping camera system increases as the roll and pitch angles increase, while Fig. 4b shows how the Y-axis determination error varies with the roll and pitch angles. Figure 4c shows that the Z-axis determination error of the stereo mapping camera system is independent of the pitch angle and decreases as the roll angle increases. The results of Figs. 5 and 6 are similar to Fig. 4. Clearly, the X, Y, and Z axes of the stereo mapping system cannot all be made optimal at the same time, so a comprehensive evaluation is needed. Applying the EWO evaluation method introduced in Sect. 3 yields the results shown in Fig. 7. Figure 7a shows that the EWO evaluation method gives the system-optimal roll and pitch layout angles of the star cameras. At this optimum, the attitude accuracy of the mapping camera system in the pitch direction is 0.4″, and the elevation and plane positioning accuracies are 2.1 m and 2 m, respectively. Notably, Fig. 7b shows that the angle between the STC1 and STC2 bore-sights does not track the positioning accuracy, which differs from [4].
5 Conclusion

Based on the principle of double-vector attitude determination, this paper establishes a model relating the STC bore-sight determination error to the coordinate system of the two-line-array stereo mapping camera system. By introducing the EWO positioning accuracy evaluation method, an optimal layout design of the STCs is given that satisfies the requirements of the high-precision stereo mapping mission.
References

1. Wang, X.Y., Gao, L.Y., Yin, M., Wang, J.R.: Analysis and evaluation of position error of transmission stereo mapping satellite. J. Geomatics Sci. Technol. 29(6), 427–434 (2012)
2. Ma, H.L., Chen, T., Xu, S.J.: Optimal attitude estimation algorithm based on multiple star-sensor observations. J. Beijing Univ. Aeronaut. Astronaut. 39(7), 869–874 (2013)
3. Li, J., Zhang, G.J., Wei, X.G.: Modeling and accuracy analysis for multiple heads star tracker. Infrared Laser Eng. 44(4), 1223–1228 (2015)
4. Lv, Z.D., Lei, Y.J.: Satellite Attitude Measurement and Determination, 1st edn, pp. 112–114. National Defense Industry Press, Beijing (2013)
5. Wang, R.X., Wang, J.R., Hu, X.: Analysis of location accuracy without ground control points of optical satellite imagery. Acta Geodaetica et Cartographica Sinica 46(3), 332–337 (2017)
6. Wang, R.X.: Satellite Photogrammetric Principle for Three-Line-Array CCD Imagery, 2nd edn, pp. 11–14. Sino Maps Press, Beijing (2016)
7. GB-T 33177: National Fundamental Scale Maps-1:5000 1:10000 Topographic Maps. Standard Press of China, Beijing (2016)
Study on Optical SWaP Computational Imaging Method

Xiaopeng Shao(*), Yazhe Wei, Fei Liu, Shengzhi Huang, Lixian Liu, and Weixin Feng

School of Physics and Optoelectronic Engineering, Xidian University, Xi'an, China
[email protected]
Abstract. A novel optical SWaP (Size, Weight, and Power/Price) computational imaging framework is proposed to address the large size, heavy weight, and high power consumption of traditional military optoelectronic equipment. Traditional optical system design focuses on correcting the various geometrical aberrations by adding lenses, which leads to complex structures. The optical SWaP computational imaging method instead establishes a physical model of geometrical aberration correction that couples optical system design with image processing theory, and uses it to optimize the optical imaging system. In this study, the difficulty of correcting different kinds of geometric aberrations by image processing is analyzed, making it possible to leave the easily corrected aberrations to image processing. Furthermore, a simple-lens imaging system is designed by the proposed method, and an objective evaluation criterion is employed to analyze the final imaging performance. The results show that the optical SWaP computational imaging method achieves excellent imaging performance compared with the traditional method, and that correcting optical system aberrations through an image processing algorithm is feasible. Owing to the fewer lenses needed, higher transmittance efficiency, simpler implementation, and cost-effectiveness, the optical SWaP computational imaging method exhibits great potential.

Keywords: Computational imaging · Optical system design · Modulation transfer function
1 Introduction

In a traditional optical imaging system, imaging resolution and detection distance depend on optical parameters such as aperture and focal length, and there are trade-offs among the field of view (FOV), image resolution, and the size and weight of the optical system. In addition, the traditional imaging method is subject to an insurmountable diffraction limit [1]. To solve these problems, researchers began to study problem-oriented computational imaging, which can realize tasks that cannot be accomplished by traditional imaging [2]. Computational imaging converts physical problems into mathematical problems and then solves them with mathematical tools. Generally, computational imaging comprises two parts: optical system design and image processing design.

© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_12
X. Shao et al.
In conventional computational imaging, image processing is only a supplement to lens design; the two are not equally important, which reflects the natural development of the history of photography [3]. In the optical SWaP computational imaging method, however, image processing is no longer a supplement to optical design but plays a role equal to it. The optical SWaP method aims to minimize the size, weight, power, and price of the optical system. The rapid development of electronic integration has followed Moore's law from the submicron to the nanoscale level [4], and integrated design gives electronic systems a significant advantage in weight, size, power, and price. In contrast, the integration of optics is constrained by basic theory. Dowski and Cathey built a new imaging methodology called wavefront coding, using aspheric optics and signal processing to reduce system complexity and deliver high-quality imagery [5, 6]. Robinson and Stork introduced a novel framework for designing digital imaging systems, specifically an end-to-end merit function based on pixel-wise mean squared error [7]. Based on these methods, we analyze the difficulty of correcting different kinds of geometric aberrations by image processing and then try to correct spherical aberration with a Wiener filter [5, 6, 8, 9]. In this paper, a whole-chain optimization design model is established from the optical system to image processing. The optical SWaP computational imaging method can break through the bottlenecks of traditional optical imaging, greatly reducing the size, weight, power, and price of military optoelectronic systems. It can be widely used in wide-area surveillance alarm systems, airborne photoelectric equipment, ground-based early warning systems, high-resolution earth observation systems, and other fields.
2 Theory

Fermat's principle states that light travels from one point to another along a path of extremal optical length [10]. As shown in Fig. 1, from the object point S at a finite object distance to the image point S′, the geometrical distances of the different rays passing through the lens differ. According to Fermat's principle, their optical paths must take an extreme value; since they cannot all take the maximum or the minimum, the only possibility is a constant value. In other words, the optical paths of the different rays from the object point through the lens to the image point are equal.
Fig. 1 Finite object distance imaging
Fig. 2 Optical path difference at the exit pupil
The ideal condition of optical imaging is that the optical paths from object points to image points are equal. Real imaging can therefore be characterized by the optical path difference (OPD). Since the exit pupil is the common exit of all imaging beams, the OPD of each ray can be calculated against a reference ray. As shown in Fig. 2, the wavefront OPD W(x, y) is the distance from the real wavefront to the reference wavefront at the exit pupil, and is a function of the exit pupil coordinates (x, y) of the optical system. Zernike polynomials are a complete set of basis functions that are orthogonal on the unit circle. Since most optical systems have a circular pupil, the OPD at the exit pupil can be described by weighted Zernike polynomials, whose polar form is [11]:

$$Z_n^m(\rho,\theta) = \begin{cases} N_n^m R_n^{|m|}(\rho)\cos(m\theta), & m \ge 0,\ 0 \le \rho \le 1,\ 0 \le \theta \le 2\pi \\ N_n^m R_n^{|m|}(\rho)\sin(m\theta), & m < 0,\ 0 \le \rho \le 1,\ 0 \le \theta \le 2\pi \end{cases} \quad (1)$$
where N_n^m is the normalization factor, given by:

$$N_n^m = \begin{cases} \sqrt{n+1}, & m = 0 \\ \sqrt{2(n+1)}, & m \ne 0 \end{cases} \quad (2)$$
where n is the radial order of the Zernike polynomial and m is its angular frequency. The radial polynomial R_n^{|m|}(ρ) is given by:

$$R_n^{|m|}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!}{s!\,[0.5(n+|m|)-s]!\,[0.5(n-|m|)-s]!}\,\rho^{\,n-2s} \quad (3)$$
The deviation between tangential field curvature and sagittal field curvature represents astigmatism, so defocus, distortion, coma, spherical aberration, and astigmatism
can represent the monochromatic aberrations of an optical system. The order n and angular frequency m of the Zernike polynomial corresponding to each monochromatic aberration are [12, 13]: defocus n = 2, m = 0; distortion n = 1, m = ±1; coma n = 3, m = ±1; spherical aberration n = 4, m = 0; astigmatism n = 2, m = ±2. The wavefront OPD over a circular aperture can be expressed as a weighted sum of Zernike polynomials:

$$W(\rho,\theta) = \sum_{n=0}^{k}\sum_{m=-n}^{n} W_n^m Z_n^m(\rho,\theta) \quad (4)$$
where W_n^m is the coefficient of the Zernike polynomial. In Cartesian coordinates the wavefront OPD is:

$$W(x,y) = \sum_{j=0}^{j_{\max}} W_j Z_j(x,y) \quad (5)$$
The wavefront OPD containing the geometric aberrations above can be used to form the exit pupil function p(x, y) at the exit pupil point (x, y), described as [14]:

$$p(x,y) = a(x,y)\, e^{ikW(x,y)} \quad (6)$$
where a(x, y) represents the aperture function of the optical system. The point spread function (PSF) on the image surface is then expressed as:

$$PSF(x,y) = \frac{1}{\lambda^2 d^2 A_p}\left|FT\left\{p(x,y)\, e^{i\frac{2\pi}{\lambda}W(x,y)}\right\}\right|^2_{\,f_x = x/\lambda d,\; f_y = y/\lambda d} \quad (7)$$
where λ represents the principal wavelength, d represents the diameter of the pupil, A_p represents the area of the pupil, and FT represents the Fourier transform. The optical modulation transfer function (MTF) of the geometric aberrations is related to the PSF by [15]:

$$MTF(s_x, s_y) = \frac{FT\{PSF\}}{FT\{PSF\}\big|_{s_x=0,\,s_y=0}} \quad (8)$$
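The PSF and MTF of Eqs. (6)-(8) can be sketched with a sampled circular pupil and FFTs. The grid size and the 0.1-wave spherical-aberration coefficient below are illustrative choices, not the paper's exact setup, and the constant prefactor of Eq. (7) is absorbed into the normalization:

```python
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)
aperture = (rho <= 1.0).astype(float)            # a(x, y): circular pupil

# OPD in waves: spherical aberration Z_4^0 with coefficient 0.1
W = 0.1 * np.sqrt(5.0) * (6*rho**4 - 6*rho**2 + 1) * aperture

pupil = aperture * np.exp(1j * 2*np.pi * W)      # aberrated pupil function
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()                                 # PSF normalized to unit energy

otf = np.abs(np.fft.fft2(psf))
mtf = otf / otf[0, 0]                            # Eq. (8): unity at zero frequency
```

Because the PSF is non-negative, the MTF cannot exceed its zero-frequency value of 1, which is a useful sanity check on the normalization.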
According to Eqs. (4)–(8), the MTF of each optical aberration can be calculated. If the Zernike coefficient W_n^m is 0.1, the principal wavelength is 570 nm, and the pupil diameter is 7.5 mm, Fig. 3 is obtained. Figure 3 shows the optical MTF curves of the monochromatic aberrations when each aberration exists alone and produces an equal wavefront OPD. It can be seen from Fig. 3 that some MTFs decline more rapidly than others, and some MTFs have more zeros. These characteristics are closely
Fig. 3 MTFs of monochromatic aberrations
Fig. 4 End-to-end imaging model
related to the difficulty of image processing. According to the fundamentals of digital imaging [16], an MTF that declines too fast, has more zeros, or has a lower zero-crossing frequency is not beneficial to image processing.
3 Simulation

The optical SWaP computational imaging method simplifies the complex optoelectronic imaging process into an end-to-end imaging model [17]. An end-to-end performance metric is adopted as the optimization criterion for the overall design, and it is optimized by adjusting the parameters of every part of the imaging chain simultaneously, as shown in Fig. 4. The optical SWaP computational imaging method does not pursue the optimal performance of the optical imaging system or the image processing system separately; it treats them as a whole so as to design a globally optimal system [7, 17, 18]. The end-to-end performance metric connects the optical subsystem and the image processing subsystem, and the two can be optimized as a whole by using the end-to-end
performance metric to realize the globally optimal imaging system. A metric that can analyze and predict the end-to-end overall imaging performance is therefore needed [19, 20]. The mean square error (MSE) is a statistical measure of the average error of all pixels in an image, so the MSE between the processed image and the ideal image is taken as the performance metric of the optical SWaP computational imaging method. It is defined as [21]:

$$MSE = \varepsilon\left\{\left[\hat{s}(x,y) - s(x,y)\right]^2\right\} \quad (9)$$
where s(x, y) is the intensity of the pixel of the ideal image at coordinate (x, y), ŝ(x, y) is the intensity of the pixel of the final image after image processing, and ε denotes the expectation taken between the final image and the ideal image. To compare the optical SWaP computational imaging method with the traditional design method, the simulation shown in Fig. 5 is carried out using the optical design software Zemax and software written in C. The specific process is:

1. Perform aberration correction on the initial single-lens structure, optimize to obtain the optimal optical structure, and export its simulated image for image processing.
2. Perform selective aberration correction on the initial single-lens structure, export its simulated image to Zemax Extensions through DDE for image processing and image quality evaluation, and import the performance metric back into Zemax through DDE. With the MSE as the end-to-end performance metric, the damped least-squares optimization of Zemax is used to optimize the optical subsystem and
Fig. 5 Simulation implementation process
image processing subsystem synchronously, so as to obtain a simple optical system suited to image processing.
3. Compare the two processed images from the traditional design method and the optical SWaP computational imaging design method to evaluate the overall imaging performance.
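The MSE metric of Eq. (9) and the frequency-domain Wiener restoration used in the following section can be sketched as below. The constant noise-to-signal ratio and the identity-OTF sanity check are deliberate simplifications for illustration, not the paper's actual optimization setup:

```python
import numpy as np

def mse(restored, ideal):
    """Eq. (9): mean squared error between two images."""
    a = np.asarray(restored, dtype=float)
    b = np.asarray(ideal, dtype=float)
    return float(np.mean((a - b)**2))

def wiener_restore(blurred, otf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a constant
    noise-to-signal ratio nsr (an illustrative simplification)."""
    G = np.fft.fft2(blurred)
    f_hat = np.conj(otf) / (np.abs(otf)**2 + nsr) * G
    return np.real(np.fft.ifft2(f_hat))

ideal = np.zeros((64, 64))
ideal[24:40, 24:40] = 1.0                 # simple bright square as test target
no_blur_otf = np.ones((64, 64))           # identity system: a sanity check
restored = wiener_restore(ideal, no_blur_otf, nsr=1e-6)
err = mse(restored, ideal)
```

With an identity OTF and a tiny noise-to-signal ratio the filter is nearly a pass-through, so the MSE should be close to zero; a real run would use the lens's aberrated OTF instead.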
4 Result and Discussion

Following Fig. 5, an attempt is made to correct the spherical aberration deliberately left for image processing in the optical SWaP computational imaging method, and the overall performance metrics are compared with the traditional method. A single lens is used for the experiment; its specifications are listed in Table 1.

Table 1 Lens specifications

Wavelength      | 0.55 μm
F#              | 4.5
FOV             | 20°
Focal length    | 10 mm
Glass material  | N-BK7 (n = 1.5168)

While the traditional design method pursues the optimal performance of the optical subsystem and the image processing subsystem separately, the optical SWaP computational imaging design method seeks the global optimum. To compare the two design methods, this paper compares the imaging performance metrics of the optical system and of the overall system. As shown in Fig. 6a, b, when the spatial frequency is lower than 60 lp/mm, the MTF values of the traditional optical design are higher than those of the optical SWaP computational imaging method; the situation reverses above 60 lp/mm. The optical SWaP computational imaging method therefore gains higher spatial frequency information at the cost of losing lower spatial frequency information. As shown in Fig. 6c, d, the root mean square (RMS) spot radius of the traditional optical design is smaller than that of the optical SWaP computational design, indicating better purely optical performance of the traditional design. As shown in Fig. 6e, f, the PSF curves of the optical SWaP method oscillate less and have fewer zeros in each field than those of the traditional method. Since the image processing algorithm cannot recover image information lost at the zeros, the optical image of the optical SWaP computational imaging method is easier to restore by the image processing algorithm. Figure 7 shows the processed images of the two design methods; to make the comparison more obvious, the same area of the magnified images was selected. The symbol S denotes the size of the Wiener filter in pixels. As shown in Fig. 7, the optical image clarity of the traditional method is higher than that of the SWaP method. However, when processed with the Wiener filter algorithm of the same filter size S, the noise of the SWaP method is weak and the image quality at the edges is improved
Fig. 6 Optical performances of the two design methods. (a) MTF curve of traditional optical system, (b) MTF curve of SWaP optical system, (c) spot diagram of traditional optical system, (d) spot diagram of SWaP optical system, (e) PSF of traditional optical system, (f) PSF of SWaP optical system
Fig. 7 Overall performances of the two design methods: the original image, the traditional and SWaP optical images, and the images processed by each method with Wiener filter sizes S = 31×31, 29×29, 27×27, 25×25, and 23×23
Fig. 8 MSE curve of the two design methods
obviously, and the image becomes sharper. Unfortunately, the noise of the traditional method is strong and its edges show an obvious ringing effect. As the Wiener filter size S becomes smaller, the noise in the processed images of both design methods is well suppressed, but the image contrast also decreases. The MSE between the processed image and the original image is plotted against the Wiener filter size S in Fig. 8. While the optimum filter size of the traditional method is S = 25×25 with a corresponding MSE = 0.0257, that of the SWaP method is S = 27×27 with MSE = 0.0180. The overall performance metric of the SWaP method is therefore better than that of the traditional method: although the SWaP optical image is less clear, it is better suited to Wiener filtering to obtain a high-resolution image.
5 Conclusion

In this paper, we have demonstrated a method that reduces the complexity of the optical system by sharing the burden with image processing. Simulation results show that the optical SWaP computational imaging method yields a better processed image and a smaller MSE than the traditional method, indicating that image reconstruction algorithms are feasible for correcting optical system aberrations. While traditional optical design often requires a complex structure to correct aberrations, the optical SWaP method can reduce lens complexity while retaining imaging quality, which matters for optical systems that must be small, light, and inexpensive. A Wiener filtering algorithm is used to restore the image in this paper, but its inherent linearity easily produces ringing effects; nonlinear filtering algorithms could therefore be considered for restoring the optical image in subsequent research.
References

1. Fischer, R.E., Tadic-Galeb, B., Yoder, P.R.: Optical System Design. Academic Press, New York (1983)
2. Brady, D.J., Dogariu, A., Fiddy, M.A., Mahalanobis, A.: Computational optical sensing and imaging: introduction to the feature issue. Appl. Optics 47, 1–2 (2008)
3. Lam, E.Y.: Computational photography: advances and challenges. Proc. SPIE 8122, 1579–1584 (2011)
4. Thompson, S.E., Parthasarathy, S.: Moore's law: the future of Si microelectronics. Mater. Today 9, 20–25 (2006)
5. Dowski, E.R.: Wavefront coding: jointly optimized optical and digital imaging systems. Proc. SPIE 4041, 114–120 (2000)
6. Kubala, K., Dowski, E., Cathey, W.: Reducing complexity in computational imaging systems. Opt. Express 11, 2102–2108 (2003)
7. Robinson, D., Stork, D.G.: Joint design of lens systems and digital image processing (2006)
8. Dowski, E.R., Kubala, K.S.: Reducing size, weight, and cost in a LWIR imaging system with wavefront coding. Proc. SPIE, 66–73 (2004)
9. Heide, F., Rouf, M., Hullin, M.B., Labitzke, B., Heidrich, W., Kolb, A.: High-quality computational imaging through simple lenses. ACM Trans. Graph. 32, 149 (2013)
10. Smith, W.J.: Modern Optical Engineering, 4th edn (2007)
11. Thibos, L.N., Applegate, R.A., Schwiegerling, J.T., Webb, R.: Standards for reporting the optical aberrations of eyes. In: International Congress of Wavefront Sensing and Aberration-Free Refractive Correction, pp. S652–S660 (2002)
12. Mahajan, V.N.: Optical imaging and aberrations. In: Storage and Retrieval for Image and Video Databases (2013)
13. Malacara, D., Roddier, F.: Optical shop testing. Appl. Optics 97, 454–464 (2007)
14. Geary, J.M.: Introduction to Lens Design: with Practical ZEMAX Examples. Willmann-Bell, Richmond, VA (2002)
15. Goodman, J.W.: Introduction to Fourier Optics. Phys. Today 22, 97–101 (1969)
16. Trussell, J., Vrhel, M.: Fundamentals of Digital Imaging (2008)
17. Harvey, A.R., Vettenburg, T., Demenikov, M., Lucotte, B., Muyo, G., Wood, A., et al.: Digital image processing as an integral component of optical design. Proc. SPIE 7061, 706104 (2008)
18. Robinson, D., Stork, D.: Leveraging Digital Processing to Minimize Optical System Costs. Ricoh Innovations, Menlo Park, CA (2009)
19. Stork, D.G., Robinson, M.D.: Theoretical foundations for joint digital-optical analysis of electro-optical imaging systems. Appl. Optics 47, B64–B75 (2008)
20. Robinson, M.D., Stork, D.G.: Joint digital-optical design of imaging systems for grayscale objects. In: Optical Systems Design, pp. 710011-1–710011-9 (2008)
21. Gonzalez, R.C., Woods, R.E., Eddins, S.L.: Digital Image Processing Using MATLAB, 2nd edn, pp. 197–199 (2010)
CCD Detector's Temperature Effect on Performance of the Space Camera

Xiaohong Zhang(*), Xiaoyong Wang, Zhixue Han, Chunmei Li, Pan Lu, and Jun Zheng

Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
[email protected]
Abstract. The CCD detector is the core device of a space remote sensing camera, and its performance strongly affects the performance of the camera. The detector's temperature has a significant influence on its performance. To study this influence, temperature impact verification of a CCD detector was carried out: a dark-signal test scheme under a high/low-temperature environment and a radiometric calibration test scheme under different temperature conditions were developed, and the test verification was completed. Based on analysis of the experimental data, the quantitative relations between temperature and parameters such as dark signal, dark noise, signal-to-noise ratio, and dynamic range were obtained. These results provide important reference values for the development of space cameras.

Keywords: Remote sensing · Space camera · Charge-coupled device · Time delay integration CCD · Dark noise · Signal-to-noise ratio · Dynamic range
1 Introduction

To study the influence of CCD detector temperature on CCD performance, a test verification scheme was developed. According to the scheme, a temperature impact test of an 8192-pixel TDICCD detector was carried out. The experimental results were analyzed, and the quantitative relations between detector performance (dark signal, signal-to-noise ratio, dynamic range) and temperature were obtained.
2 Temperature Verification Test Scheme of the CCD Detector

The detector used is a linear-array TDICCD (time delay integration CCD); the pixel size is 7 μm × 7 μm, the pixel count is 8192, and the maximum number of TDI stages is 96. In the experiment, the horizontal transfer frequency of the TDICCD is 16.67 MHz, the integration time is 1.05 ms, and the quantization is 14 bits.

© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_13
X. Zhang et al.
The dark signals of the TDICCD are tested at different temperatures in a normal-pressure thermal cycling chamber. The TDICCD and the corresponding focal plane circuit and signal processing circuit are all placed inside the chamber, while the power supply, thermal control equipment, and image acquisition equipment are placed outside. During the test, the inside of the chamber is completely dark (no light). The connection of the equipment is shown in Fig. 1. The test temperature range is set to −15 °C to +30 °C with a temperature control precision of ±1 °C, and a test point is set every 5 °C. The heating and cooling rates are kept slow, no more than 1 °C per minute. The soak time at each test point is 15 min, during which the temperature of the TDICCD device is measured until it is basically stable. Figure 2 shows the setting of the test temperature and
Fig. 1 Test equipment connection
Fig. 2 The setting of test temperature and thermal insulation time
CCD Detector’s Temperature Effect on Performance of the Space Camera
Fig. 3 The TDICCD, focal plane circuit, and signal processing circuit participating in the experiment
thermal insulation time. Figure 3 is the TDICCD, focal plane circuit, and signal processing circuit participating in the experiment. The experiment is conducted according to the following steps:
1. Connect and self-check the circuits; close the light source in the oven, then block the TDICCD so that it is completely dark.
2. Heat up to +30 °C at a rate of 1 °C/min, hold the temperature for 0.5 h, then power on the circuit and collect an image.
3. Drop to +25 °C at a rate of 1 °C/min, hold for 15 min, then power on the circuit and collect an image.
4. Drop to +20 °C at a rate of 1 °C/min, hold for 15 min, then power on the circuit and collect an image.
5. Continue dropping at 1 °C/min, holding for 15 min at each temperature point, then powering on the circuit and collecting an image.
6. Drop to −15 °C at a rate of 1 °C/min, hold for 0.5 h, then power on the circuit and collect an image.
7. Heat up to −10 °C at a rate of 1 °C/min, hold for 15 min, then power on the circuit and collect an image.
8. Continue heating at 1 °C/min, holding for 15 min at each temperature point, then powering on the circuit and collecting an image.
9. Heat up to +30 °C at a rate of 1 °C/min, hold for 0.5 h, then power on the circuit and collect an image.
10. Drop to room temperature at a rate of 1 °C/min; the test ends.
At each temperature point, 1024 lines of images are collected continuously, so each temperature point yields an image of 8192 (columns) × 1024 (lines).
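As a sketch, the temperature sequence above can be generated programmatically. The function below reproduces the set-temperature soak points implied by steps 1–10 (30 min soaks at the +30 °C and −15 °C extremes, 15 min elsewhere); it is an illustration of the test plan, not the authors' control software.

```python
def dark_test_profile():
    """Return (set_temperature_C, soak_minutes) for each measurement point:
    +30 C down to -15 C in 5 C steps, then back up to +30 C, ramping at
    no more than 1 C/min between points."""
    down = list(range(30, -20, -5))   # +30, +25, ..., -15 C
    up = list(range(-10, 35, 5))      # -10, -5, ..., +30 C
    # 30 min soaks at the +30 C and -15 C extremes, 15 min at all other points
    return [(t, 30 if t in (30, -15) else 15) for t in down + up]
```

The profile yields 19 measurement points, matching the 19 rows of Table 1.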
3 Temperature Verification Test Results of the CCD Detector The mean value and the root mean square value of the dark signal of the TDICCD at different temperatures can be obtained from the image collected at each temperature point. The dynamic range of the TDICCD at different temperatures can be calculated by the
Table 1 The calculation results of dark signals and dynamic range at different temperatures of TDICCD
Set temp (°C)   Measured temp (°C)   Dark signal mean   Dark signal RMS   Dynamic range
 30              30.0                 571.19             13.457            1175.06:1
 25              25.5                 429.78             12.197            1308.04:1
 20              20.8                 330.31             11.163            1438.12:1
 15              15.9                 263.32             10.349            1557.70:1
 10              10.9                 223.06              9.772            1653.80:1
  5               5.8                 199.20              9.348            1731.36:1
  0               1.0                 184.61              9.058            1788.41:1
 −5              −4.1                 176.51              8.875            1826.20:1
−10              −9.0                 171.67              8.731            1856.87:1
−15             −14.1                 168.40              8.623            1880.51:1
−10              −9.2                 172.98              8.735            1855.87:1
 −5              −4.3                 177.92              8.880            1825.01:1
  0               0.4                 185.50              9.074            1785.16:1
  5               5.5                 198.36              9.342            1732.57:1
 10              10.4                 220.68              9.733            1660.67:1
 15              15.2                 257.83             10.294            1566.56:1
 20              20.1                 317.91             11.046            1454.47:1
 25              25.0                 413.84             12.085            1321.49:1
 30              30.3                 559.65             13.421            1179.07:1
following methods: subtract the mean value of the dark signal at the current temperature from the full range (the full range of 14 bits quantization is 16,384) and divide by the root mean square value of the dark signal at the current temperature. Table 1 shows the calculation results of dark signals and dynamic range at different temperatures of TDICCD. After analyzing the experimental data, the quantitative relations between the temperatures of the TDICCD and the dark signal and dynamic range of the TDICCD are obtained. Figure 4 is the quantitative relationship between temperatures and the mean values of dark signals. Figure 5 is the quantitative relation between the temperatures of the TDICCD and the root mean square values of dark signals (dark noise). Figure 6 is the quantitative relation between temperatures and dynamic range of the detector.
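The calculation just described can be sketched as follows. This is a minimal illustration, not the authors' processing code; "root mean square value" is interpreted here as the standard deviation about the mean (the dark noise). The dynamic-range formula is checked against the 30 °C and −15 °C rows of Table 1.

```python
import numpy as np

FULL_SCALE = 2 ** 14  # 14-bit quantization: full range = 16384

def dark_statistics(dark_frame):
    """Mean dark signal and its root mean square value (interpreted here as
    the standard deviation about the mean, i.e., the dark noise)."""
    return float(dark_frame.mean()), float(dark_frame.std())

def dynamic_range(dark_mean, dark_rms):
    """DR = (full range - mean dark signal) / RMS of the dark signal."""
    return (FULL_SCALE - dark_mean) / dark_rms

# Table 1, 30 C row: mean 571.19, RMS 13.457 -> DR ~ 1175:1
# Table 1, -15 C row: mean 168.40, RMS 8.623 -> DR ~ 1881:1
```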
4 Conclusion The influence of temperature on the TDICCD dark signal and dynamic range is obtained by analyzing the experimental results. The following conclusions can be drawn: (1) the lower the control temperature of the CCD, the lower the dark signal level, the lower
Fig. 4 The quantitative relation between temperatures and the mean value of dark signals (temperature-drop and temperature-rise curves; fitted: y = 0.0001x^4 + 0.0043x^3 + 0.0855x^2 + 1.9727x + 186.15, R^2 = 1)
Fig. 5 The quantitative relation between temperatures and the root mean square values of dark signals (dark noise) (temperature-drop and temperature-rise curves; fitted: y = 3e−07x^4 + 5e−05x^3 + 0.0016x^2 + 0.0452x + 9.07, R^2 = 1)
the dark signal noise, and the higher the dynamic range of the detector; (2) temperature control technology can reduce the dark signal of the CCD, especially the dark signal noise, which improves the dynamic range of the camera and its imaging capability under low illuminance. Acknowledgments The work was supported by the National Key Research and Development Program of China (No. 2016YFB0500802).
Fig. 6 The quantitative relationship between temperatures and dynamic range of the detector (temperature-drop and temperature-rise curves)
The Thermal Stability Analysis of the LOS of the Remote Sensor Based on the Sensitivity Matrix Yuting Lu1(*), Yong Liu1, and Zong Chen1 1
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected]
Abstract. The positioning accuracy and the stability of the camera LOS (line of sight) are increasingly demanded by the users of remote sensors due to the extensive use of remote sensing images. Temperature has an increasing effect on the stability of the camera LOS as the focal length of the camera becomes longer and longer. In order to evaluate the effect of temperature change on the stability of the LOS, this chapter takes a camera as an example, using TD (Thermal Desktop) to establish the heat conduction model, finite element software to create the structural model, and CODE V to obtain the LOS sensitivity matrix of the optical system, completing a thermal-structural-optical integrated model of the camera. On this basis, four extreme thermal boundary conditions over the camera's on-orbit life are obtained by analyzing the external heat flux; finite element model (FEM) analysis then yields the six-degree-of-freedom displacements of the optical elements. Combining these displacement data with the optical sensitivity matrix gives the short-term and long-term change values of the camera LOS. The results show that the maximum change value of the LOS is 0.268″ and that the positioning accuracy error caused by camera temperature is no more than 0.8 m. The method in this chapter can provide a reference for the LOS analysis of other cameras and for camera thermal control. Keywords: Line of sight · Stability · Sensitivity matrix
1 Introduction With the widespread use of remote sensing images, users have higher and higher requirements for the positional accuracy of the image [1]. Positional accuracy has now become an important index of a satellite system. Among foreign remote sensing satellites, the technology has developed rapidly from the 0.82 m panchromatic resolution and 9 m uncontrolled positioning accuracy (CE90) of the IKONOS satellite in 1999 [2–4], to the 0.41 m panchromatic resolution and 2 m uncontrolled positioning accuracy (CE90) of GeoEye-1 in 2008 [5, 6], and the 0.3 m panchromatic resolution and 3 m uncontrolled positioning accuracy (CE90) of WorldView-4 launched in 2016 [7, 8]. The first satellite of TH-1 achieved 15.2 m (CE90) uncontrolled positioning accuracy nationwide in 2010.
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_14
The third satellite of TH-1, launched in 2015, achieved 10.6 m (CE90) uncontrolled positioning accuracy [9, 10]. The factors that influence satellite image positioning accuracy include not only the measurement accuracy of the star sensor and gyroscope and the accuracy of ground image data processing, but also the stability of the satellite in-orbit system, especially the stability of the inner orientation parameters of the camera payload and of the camera LOS relative to the star sensor imaging reference [11]. All foreign surveying and mapping satellites have carried out LOS stability design, and our country has paid more and more attention to the stability of LOS. However, the LOS changes with the launch impact, on-orbit micro-vibration, and the orbital space environment. The in-orbit temperature environment of the remote sensor is very complex, and temperature differences seriously influence the image quality of the optical system; as a result, the thermo-optical properties of the optical system have become important indicators of a space remote sensor [12–14]. As the camera focal length becomes longer and longer, the impact of temperature on the camera LOS becomes bigger and bigger. In order to evaluate the influence of on-orbit temperature variation on the stability of the camera LOS, taking one camera as an example, a thermal-structural-optical integrated model of the camera is built. Based on this model, this chapter obtains the short-term and long-term change values of the camera LOS under the extreme thermal boundary conditions of the camera's orbital life.
2 The Principle of LOS Affecting the Positioning Accuracy LOS can cause image motion, and some foreign literature describe this phenomenon with the change of LOS. The change in the direction of roll axis and pitch axis can be uniformly called the change in the direction of LOS, which has similar effects on image motion. In general, the change of LOS is only related to angular displacement. The relationship between LOS and image motion is shown in Fig. 1. As shown in Fig. 1, the angle between the main light of the target and LOS is α before rotation, and the angular displacement around ox axis (or oy axis) is θx (or θy) qx(qy)
l’
f
l
α
z
H target
L
O
y
x
Fig. 1 The relationship between LOS and image motion
Fig. 2 Analytical path
within an exposure time. Then the image motion of the image point in the oy direction (ox direction) after rotation can be approximately expressed as:

d_im = f [tan(α + θ_x) − tan α] ≈ f tan θ_x ≈ f θ_x   (1)
During the orbit of the camera, due to the influence of external heat flux, the temperature of the satellite's star sensor and of the camera itself changes constantly. The change of temperature leads to distortion of the star sensor's structure and the camera's structure, which makes the angle between the LOS of the camera and the LOS of the star sensor change from time to time. This temperature-induced change of LOS is difficult to calibrate; it results in a difference between the position coordinates of the image calculated by the star sensor and the position indicated by the camera LOS, and thus in a decrease in the location accuracy of the image without ground control points. In addition to thermal effects, jitter also affects location accuracy, but the influence of jitter is mainly short term. Thermal effects can be divided into long-term and short-term effects. This chapter mainly discusses the change of location accuracy caused by the thermal environment. The analytical path from the thermal environment to the location accuracy without ground control points is shown in Fig. 2.
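A small-angle numerical sketch of Eq. (1); the focal length used in the usage note below is purely illustrative, since the chapter does not state the camera's focal length.

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def image_motion(focal_length_mm, theta_arcsec):
    """Image motion d_im = f*[tan(a + theta) - tan(a)] ~ f*tan(theta) ~ f*theta
    for a small LOS angular displacement theta (Eq. 1)."""
    return focal_length_mm * math.tan(theta_arcsec * ARCSEC_TO_RAD)
```

For example, a 1 mrad (about 206.26″) LOS change on a hypothetical 1000 mm focal length gives roughly 1 mm of image motion.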
3 LOS Sensitivity Matrix Optical sensitivity matrix is a bridge between optical performance index and structural analysis results. Their relationship is [15]: C ¼ C0 þ ∂C=∂μ μ
ð2Þ
In this formula: C is the error vector of LOS stability to be calculated; C0 is the initial error vector of LOS stability; əC/əμ is the sensitivity matrix of LOS movement; μ is the displacement vector for each degree of freedom of the optical element, which can be obtained by finite element analysis An optical model is established in optical software CODEV to calculate the effect of single-direction unit disturbance on optical performance index by tracing the light. The
Table 1 Sensitivity matrix (LOS, ″/μm or ″/sec)

Item / perturbation    Sec (X)   Sec (Y)
Primary mirror
  dx (μm)              0.3354    0
  dy (μm)              0         0.336
  dz (μm)              0         0
  dtx (sec)            0         2
  dty (sec)            1.99      0
  dtz (sec)            0         0
Secondary mirror
  dx (μm)              0.251     0
  dy (μm)              0         0.250
  dz (μm)              0         0
  dtx (sec)            0         0.451
  dty (sec)            0.448     0
  dtz (sec)            0         0
Tertiary mirror
  dx (μm)              0.134     0
  dy (μm)              0         0.133
  dz (μm)              0.01      0
  dtx (sec)            0         0.351
  dty (sec)            0.348     0
  dtz (sec)            0         0.0081
Small flat mirror
  dx (μm)              0.01      0
  dy (μm)              0         0
  dz (μm)              0.003     0
  dtx (sec)            0         0.0223
  dty (sec)            0.0868    0
  dtz (sec)            0         0.0708
Focal plane
  dx (μm)              0.001     0
  dy (μm)              0         0.13
  dz (μm)              0.13      0
  dtx (sec)            0         0
  dty (sec)            0         0
  dtz (sec)            0         0
sensitivity matrix of LOS stability error can be obtained, which is shown in Table 1. LOS changes caused by the unit changes of primary mirror, secondary mirror, tertiary mirror, small flat mirror, and focal plane can be seen.
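Equation (2) can be applied numerically as sketched below. The sensitivity block and displacement vector reuse the primary-mirror values from Tables 1 and 3, but the signs of those entries were lost in extraction and C0 is assumed to be zero, so the result is illustrative only and does not reproduce Table 4.

```python
import numpy as np

# Primary-mirror sensitivity block (Table 1): rows = LOS (Sec X, Sec Y),
# columns = (dx, dy, dz, dtx, dty, dtz); "/um for translations, "/" for rotations.
S_primary = np.array([
    [0.3354, 0.0,   0.0, 0.0, 1.99, 0.0],
    [0.0,    0.336, 0.0, 2.0, 0.0,  0.0],
])

# Primary-mirror FEA displacements, high-temperature working condition (Table 3);
# magnitudes only, signs unknown.
mu = np.array([3.88, 0.69, 1.1, 0.16, 0.073, 0.082])

C0 = np.zeros(2)          # assumed initial LOS error vector
C = C0 + S_primary @ mu   # LOS error contribution of the primary mirror, arcsec
```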
4 Thermal Elastic Analysis of a Camera LOS The satellite is in a sun-synchronous orbit and carries two TDICCD panchromatic and multispectral cameras. 4.1
Temperature Load
The extreme working condition of the camera is determined by the extreme working conditions of external heat flux condition, internal heat source, and the cabin’s thermal boundary. The extreme working condition of internal heat source refers to working for a consecutive 10 min period; the extreme low-temperature working condition refers to non-working for a long time. The extreme working condition of the camera is found through a comprehensive analysis of the extreme working conditions of external heat flux, internal heat source, and the cabin, as shown in Table 2.
Table 2 Thermal analysis working conditions

No. 1: High-temperature transient condition
  External heat flux: after entering the lighting area, the entrance promptly scrolls; the radiator keeps loading near the descending node
  Satellite attitude: scroll near the descending node
  Temperature of the mounting surface: 25 °C
  Temperature of the cabin wall: 30 °C
  Internal heat source: 10 min/circle

No. 2: Low-temperature transient condition
  External heat flux: the entrance immediately scrolls; the radiator does not scroll
  Satellite attitude: scroll
  Temperature of the mounting surface: 5 °C
  Temperature of the cabin wall: 10 °C
  Internal heat source: non-working
According to the transient analysis of external heat flux, the highest temperature of the satellite and camera occurs in winter. When the camera just enters the sunlight area, the entrance promptly scrolls; at this time, the satellite and camera receive the maximum radiation heat from the sun, and the temperature in the satellite cabin can reach 30 °C, which is the high-temperature transient for external heat flux. The lowest temperature of the satellite and camera occurs in summer. The satellite immediately scrolls near the descending node; at this time, the entrance of the camera deviates from the sunlight, and the temperature in the satellite cabin can reach 10 °C, which is the low-temperature transient for external heat flux. Due to the angle relationship with the sunlight, the high- and low-temperature transient conditions of the external heat flux are formed over a year of satellite operation, but the satellite and camera have active thermal control, which maintains the temperature of each part of the camera within a certain range. As the satellite moves in and out of the earth's shadow zone along its orbit, the temperatures fluctuate, as shown in Fig. 3. This fluctuation period is much shorter than the monthly cycle of the external heat flux, being on the order of tens of minutes. Therefore, with active temperature control, there are four types of extreme on-orbit thermal conditions:
1. High-temperature working condition for external heat flux in the high-temperature transient condition.
2. Low-temperature working condition for external heat flux in the high-temperature transient condition.
3. High-temperature working condition for external heat flux in the low-temperature transient condition.
4. Low-temperature working condition for external heat flux in the low-temperature transient condition.
Fig. 3 The temperature change of the SM (secondary mirror) with temperature control under the low-temperature transient condition (back center, front center, and circumferential direction of the SM; approximately 19.0–20.0 °C over 60,000–120,000 s)
The long-term and short-term LOS variation can be evaluated by examining LOS variation under these four extreme conditions. LOS variation under high and low temperature of every external heat flux transient conditions (in minutes) is regarded as the maximum short-term LOS variation; the maximum LOS variation between external heat flux transient conditions (in months) is regarded as long-term LOS variation. 4.2
Thermal Analysis Results
A thermal model was established using TD (Thermal Desktop). According to the external heat flux, the temperature field of each part can be obtained. Taking the SM as an example, the temperature field distribution of the SM under the low-temperature working condition in the low-temperature transient condition is given in Fig. 4. Finite element software is used to build the finite element model used in structural thermodynamics, which is shown in Fig. 5. The results of the TD thermal analysis were added to the structural model by temperature mapping, and the thermo-elastic deformation results were then obtained by finite element analysis. Figure 6 shows the displacement cloud diagram of each part of the camera under the low-temperature working condition for external heat flux in the low-temperature transient condition. Table 3 shows the displacement of each optical element in all working conditions.
Fig. 4 The temperature cloud map of the camera under a certain working condition (T = 3.51 × 10^4 s; temperatures approximately 19.9–20.0 °C)
Fig. 5 Finite Element Model
5 Thermal Analysis of Camera LOS We get the LOS variations relative to a constant reference temperature of 20 °C under the four working conditions by multiplying the six-degree-of-freedom displacements by the LOS optical sensitivity matrix; the results are shown in Table 4. It is observed that the maximum variation occurs during the high-temperature transient condition, when the camera LOS variation in the X direction is 0.248″ (camera A) and the maximum variation in the Y direction is 0.08″ (camera B). The maximum LOS variation during the low-temperature transient condition is only 0.03″, which is much smaller than that during the high-temperature transient condition.
Fig. 6 The displacement cloud diagram of each part of the camera under low-temperature working condition for external heat flux in low-temperature transient condition

Table 3 Displacement of optical components (columns: high-/low-temperature working condition under the high-temperature transient condition, then high-/low-temperature working condition under the low-temperature transient condition)

Component / Direction   HT high    HT low     LT high   LT low
Main body A
Primary mirror
  TX/μm                 3.88       0.0012     0.45      0.461
  TY/μm                 0.69       0.0002     0.081     0.081
  TZ/μm                 1.1        0.0004     0.13      0.131
  RX/″                  0.16       4.04e–06   0.019     0.019
  RY/″                  0.073      0.00046    0.009     0.01
  RZ/″                  0.082      0.000143   0.009     0.009
Secondary mirror
  TX/μm                 3.92       0.0026     0.45      0.455
  TY/μm                 0.34       0.0029     0.037     0.033
  TZ/μm                 0.96       0.048      0.065     0.005
  RX/″                  0.17       0.001      0.019     0.018
  RY/″                  0.07       0.0035     0.011     0.017
  RZ/″                  0.49       0.13       0.187     0.363
Tertiary mirror
  TX/μm                 4.05       1.47e–05   0.47      0.461
  TY/μm                 1.06       5.25e–06   0.12      0.081
  TZ/μm                 1.23       9.17e–05   0.14      0.131
  RX/″                  0.079      5.15e–05   0.009     0.019
  RY/″                  0.10       0.0001     0.012     0.01
  RZ/″                  0.13       0          0.02      0.009
Small flat mirror
  TX/μm                 3.85       0.001      0.45      0.455
  TY/μm                 0.78       0.0002     0.09      0.092
  TZ/μm                 1.12       0.0024     0.12      0.126
  RX/″                  0.18       0          0.022     0.022
  RY/″                  0.064      0.0008     0.008     0.009
  RZ/″                  0.72       0          0.085     0.086
Focal plane
  TX/μm                 3.78       0.001      0.44      0.446
  TY/μm                 1.05       0.0002     0.12      0.123
  TZ/μm                 1.13       0.0024     0.13      0.133
  RX/″                  ——         ——         ——        ——
  RY/″                  ——         ——         ——        ——
  RZ/″                  ——         ——         ——        ——
Main body B
Primary mirror
  TX/μm                 3.89       0.0016     0.45      0.462
  TY/μm                 0.69       0.0001     0.08      0.081
  TZ/μm                 1.11       0.0008     0.13      0.133
  RX/″                  0.16       8.04e–05   0.019     0.019
  RY/″                  0.076      0.00079    0.009     0.011
  RZ/″                  0.081      0.00018    0.009     0.009
Secondary mirror
  TX/μm                 3.93       4.72e–05   0.46      0.463
  TY/μm                 0.34       0.001      0.039     0.038
  TZ/μm                 0.97       0.046      0.068     0.006
  RX/″                  0.17       0.00089    0.021     0.023
  RY/″                  0.075      0.004      0.013     0.019
  RZ/″                  0.49       0.12       0.18      0.351
Tertiary mirror
  TX/μm                 4.062      2.75e–05   0.47      0.478
  TY/μm                 1.06       3.4e–05    0.12      0.125
  TZ/μm                 1.23       8.675e–05  0.14      0.145
  RX/″                  0.079      1.09e–05   0.009     0.009
  RY/″                  0.10       0.0001     0.012     0.012
  RZ/″                  0.10       9.2e–06    0.011     0.012
Small flat mirror
  TX/μm                 3.85       0.0012     0.45      0.456
  TY/μm                 0.78       5.17e–05   0.09      0.092
  TZ/μm                 1.12       0.0026     0.12      0.126
  RX/″                  0.18       0          0.022     0.045
  RY/″                  0.063      0.0006     0.008     0.092
  RZ/″                  0.72       0          0.085     0.126
Focal plane
  TX/μm                 3.78       –7e–06     0.44      0.446
  TY/μm                 1.05       0.00033    0.12      0.123
  TZ/μm                 1.13       0.00014    0.13      0.134
  RX/″                  ——         ——         ——        ——
  RY/″                  ——         ——         ——        ——
  RZ/″                  ——         ——         ——        ——
Table 4 LOS (″) of the two main bodies in different working conditions

                         High-temperature transient   Low-temperature transient
Component     Direction  high working  low working    high working  low working
Main body A   X          0.248         0.0004         0.03          0.02
Main body A   Y          0.0171        0.0001         0.002         0.005
Main body B   X          0.245         0.0003         0.03          0.02
Main body B   Y          0.08          0.0004         0.009         0.008
We define the maximum value of the high-temperature transient condition minus the minimum value of the low-temperature transient condition as the maximum LOS variation over the long term. We can get the following: over a long period, the maximum LOS variation of Main Body A is 0.268″ (0.248″ + 0.02″), and the maximum LOS variation of Main Body B is 0.265″ (0.245″ + 0.02″). Whether short term or long term, the maximum LOS variation is 0.268″. According to the formula in the second section, it can be calculated that the corresponding image motion is 0.8 pixels, and the image positioning error caused by the thermal influence of the camera itself is less than 0.8 m.
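The long-term bookkeeping described above can be sketched as follows. The sign convention is an assumption: the signs of the Table 4 entries were lost in extraction, so the low-temperature extreme is taken to be of opposite sign, which reproduces the quoted 0.248″ + 0.02″ = 0.268″.

```python
def max_long_term_los(ht_values_arcsec, lt_values_arcsec):
    """Maximum long-term LOS variation: the maximum value over the
    high-temperature transient condition minus the minimum value over the
    low-temperature transient condition (signed arcseconds)."""
    return max(ht_values_arcsec) - min(lt_values_arcsec)

# Main body A, X direction: 0.248" under the high-temperature transient
# condition and an assumed -0.02" extreme under the low-temperature transient
# condition give 0.268".
```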
6 Conclusion This chapter completes a thermal-structural-optical integrated model of the camera. On this basis, four extreme thermal boundary conditions over the camera's on-orbit life are obtained by analyzing the external heat flux; finite element model (FEM) analysis then yields the six-degree-of-freedom displacements of the optical elements. Combining these displacement data with the optical sensitivity matrix gives the short-term and long-term change values of the camera LOS. The results show that the maximum change value of the LOS is 0.268″ and that the positioning accuracy error caused by camera temperature is no more than 0.8 m. The method of this chapter can provide a reference for the LOS analysis of other cameras and for camera thermal control.
References
1. Wu, Z., Chunling, L.: Design and analysis of determining the image position error of push-broom remote sensing satellite. Spacecraft Eng. 16(2), 6–11 (2007)
2. DigitalGlobe: IKONOS data sheet. https://global.digitalglobe.com/sites/default/files/DG_IKONOS_DS.pdf (accessed 2015-02-26)
3. Satellite Imaging Corp.: IKONOS satellite imagery and satellite sensor specifications. https://www.satimagingcorp.com/satellite-sensors/ikonos (accessed 2015-02-26)
4. Earth Observation Portal: Ikonos-2. https://directory.eoportal.org/web/eoportal/satellite-missions/i/ikonos-2 (accessed 2015-02-26)
5. DigitalGlobe: GeoEye data sheet. https://global.digitalglobe.com/sites/default/files/DG_GeoEye1.pdf (accessed 2015-02-26)
6. Earth Observation Portal: GeoEye-1. https://directory.eoportal.org/web/eoportal/satellite-missions/g/geoeye-1 (accessed 2015-02-26)
7. Satellite Imaging Corp.: GeoEye-2 (WorldView-4) satellite sensor. https://www.satimagingcorp.com/satellite-sensors/geoeye-2 (accessed 2015-03-23)
8. Fei, Y.: WorldView satellite. Satellite Appl. 2016(11), 81 (2016)
9. Renheng, W., Jianrong, W., Xin, H.: Preliminary location accuracy assessments of the 3rd satellite of TH-1. Acta Geod. Cartogr. Sin. 45(10), 1135–1139 (2016)
10. Huang, Z., Wang, S., Li, J.: The different expressions of the positioning accuracy of "TH-01" satellite. In: The Eighth China Satellite Navigation Conference, p. 24. Nanjing (2017)
11. Hongtao, G., Wenbo, L., Haitao, S., et al.: Structural stability design and implementation of ZY-3 satellite. Spacecraft Eng. 25(6), 18–24 (2016)
12. Dun, G., Tieyin, I., Hong, W.: Image quality testing of space remote sensing optical system under thermal environment. Chin. Optics Appl. Optics Abstr. 5(6), 602–609 (2012)
13. Zhenming, Z., Bing, W., Juan, G.: Preliminary research on thermal design methods of the geosynchronous orbit staring camera. Spacecraft Recovery Remote Sens. 31(3), 34–40 (2010)
14. Danying, F., Chunyong, Y., Chongde, W.: A study of thermal/structural/optical analysis of a space remote sensor. J. Astronaut. 22(3), 105–110 (2001)
15. Xiaobo, L., Yuanqing, Z.: Application of integrated simulation technology in the design of space optical remote sensor. Equip. Environ. Eng. 13(4), 102–111 (2016)
Visible and Infrared Imaging Spectrometer Applied in Small Solar System Body Exploration Bicen Li1(*), Baohua Wang1, Tong Wang2, Hao Zhang3, and Weigang Wang1 1
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected] 2 Beijing Institute of Spacecraft System Engineering, Beijing, China 3 School of Earth Sciences, China University of Geosciences (Wuhan), Wuhan, China
Abstract. With the development of human society and advances in science and technology, the need to look for extraterrestrial energy and to expand human living space is increasing. Spectrometers have played an irreplaceable role in deep space exploration in recent decades and are very effective in research on the physical and chemical properties of substances. By combining image data with fine spectral information, an imaging spectrometer can obtain the spectral characteristics and spatial distributions of material compositions at the same time. Driven by the need for detailed determination of a target body in both the spatial and spectral dimensions, the study of sensors with high spatial resolution, high spectral resolution, and high detection sensitivity has become an important technical direction in the deep space exploration field. Such sensors can provide higher-precision detection means and science data with richer information for space science research. The application of hyperspectral imaging in deep space exploration is discussed. The spectrometers using filters, grating spectrometers, and Fourier transform spectrometers applied in previous deep space exploration missions are summarized. The advantages and disadvantages of the different spectroscopic methods are discussed, and the development trend of spectrometers for deep space exploration is analyzed. Based on the characteristics of small solar system body exploration, the scientific objectives of detection for near-earth asteroids (NEAs) and main belt comets (MBCs) are presented. According to the mission requirements of NEA remote sensing, in situ analysis and sample return, and MBC investigation, and based on the requirements for composition studies of NEAs and MBCs, the main specifications of a visible and infrared imaging spectrometer for small solar system bodies are designed. The spectral coverage of the spectrometer is from the visible to the medium wave infrared (0.4–5 μm). Spatial and spectral information can be obtained simultaneously, with a spatial resolution of 0.5 m at a 5 km observation distance. A high spectral resolution of 5 nm in the visible band and a medium spectral resolution of 10 nm in the infrared band are realized. The signal-to-noise ratio of the instrument can be better than 100. A convex grating with two different groove densities in different regions is used to realize the integration of the visible and infrared bands in a single Offner spectrometer. The optical system is very compact, so the volume and mass of the instrument can be kept low. Considering that the albedo and temperature of most target bodies are very low, cryogenic refrigeration of the detector is adopted to reduce the dark current and its noise, and the optical-mechanical system is cooled to reduce the noise produced by background radiation. Thus the
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_15
detection sensitivity of the instrument can be guaranteed by all these measures. This instrument is devoted to material composition mapping of small bodies over a wide spectral range and to landing site selection for the mission. Keywords: Deep space exploration · Small solar system body · Near-earth asteroid · Main belt comet · Grating spectrometer · Medium wave infrared
1 Introduction Deep space exploration refers to the exploration of planets and other space matter by probes that leave the gravitational field of the earth and enter the solar system and beyond. As an important branch of remote sensing, astronomical observation, and planetary science, deep space exploration has become an important means to study the origin and evolution of the universe, to understand the formation and evolution of the earth's environment, and thereby to use space resources and expand living space [1]. Pioneer-5, launched successfully by NASA in March 1960, was the first deep space probe in human history. Since the twenty-first century, the United States, Russia, Europe, and Japan have made long-term plans and schedules for such exploration, and deep space exploration has become one of the main development directions of space activities in the world [2]. Imaging spectrometry, born in the 1980s, is an interdisciplinary technique combining optical imaging and spectral detection. Both geometric and spectral characteristics can be obtained from imaging spectral data, so that material compositions can be classified, analyzed, and recognized. With the development of electronics, device processing, and remote sensing science, imaging spectroscopy has become an important observation means in airborne and space remote sensing, widely used in mineral exploration, agricultural production, ocean remote sensing, environmental monitoring, disaster prevention and mitigation, military reconnaissance, and other fields. The emergence of hyperspectral imaging is a revolution in the field of remote sensing. Rapidly applied in deep space exploration, hyperspectral imaging has become an effective method for studying the physical and chemical properties of substances such as atmospheric compositions, rocks and minerals, and water ice.
By combining image data with fine spectral information, the spectral features and spatial distributions of target bodies can be determined simultaneously. Continuous improvement of detection technology enhances humanity's ability to explore space; meanwhile, the demands of space science research for finer detection elements and higher retrieval accuracy also drive the rapid development of highly sensitive hyperspectral imaging technology. In this paper, the application of hyperspectral imaging in deep space exploration is discussed. Taking the spectral detection of small solar system bodies as the baseline, the science goals, main specifications, and key techniques are analyzed.
Visible and Infrared Imaging Spectrometer Applied in Small Solar System. . .
2 Application of Hyperspectral Imaging in Deep Space Exploration
The spectrum is the unique identification card of matter, capable of finely characterizing its properties. The high-resolution spectral data provided by hyperspectral imaging make it possible to distinguish the physical and chemical properties of substances more reliably. Nearly 50% of existing deep space detection payloads are spectral imaging instruments, which have collected a large amount of spectral image data for analyzing the material composition of celestial surfaces. For example, a primary purpose of space exploration is to understand whether life exists or once existed, which is closely related to detecting whether there is water on an object. One method is to detect water ice or frost directly on the surface. Another is to detect carbonates, which are formed by the chemical reaction of carbon dioxide dissolved in water with metals. Moreover, many minerals have distinguishable absorption and reflection characteristics in the visible and infrared bands, where numerous mineral absorption peaks and metallogenic structure features are concentrated.
The lunar surface, which has no vegetation, is mainly composed of four kinds of minerals: pyroxene, calcium feldspar, olivine, and ilmenite. According to studies of the samples returned by the Apollo missions, these minerals show characteristic spectral features in the visible and infrared bands. In 2007, China's first lunar exploration satellite, Chang'e-1, carried an interference imaging spectrometer. It achieved continuous spectral detection of the moon over the visible and near-infrared bands, accomplishing the scientific goals of analyzing useful elements and their distribution and studying the types, contents, and distribution of materials on the lunar surface [3]. Life, geology, climate, and manned landing are the four major scientific directions of NASA's Mars exploration.
Using the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) carried on the Mars Reconnaissance Orbiter (MRO, launched in 2005), NASA found spectral evidence of hydrated salts on Mars, which supported the existence of liquid water and raised hopes of finding life on Mars [4]. In China's first Mars exploration mission, spectral observation and distribution mapping of minerals and rocks for mineral identification on the Martian surface is a main task. In 2016, NASA's Juno spacecraft entered orbit around Jupiter, starting the first close observation of the planet. Information about Jupiter's deep structure has been returned by the Jovian Infrared Auroral Mapper (JIRAM), the Ultraviolet Imaging Spectrograph (UVS), and JunoCam (JCM), providing evidence for theories of the origin and evolution of the solar system [5]. For asteroid detection, NASA launched its first asteroid sample return mission, OSIRIS-REx, in 2016, carrying a Thermal Emission Spectrometer (OTES) and a Visible and Infrared Spectrometer (OVIRS), which can rapidly obtain key surface physical information of the asteroid in preparation for asteroid impact mitigation and for studies of the evolution of the universe and the origin of life [6]. At present, spectral imaging for deep space exploration mainly uses the following kinds of spectrometer: imaging spectrometers using filters, dispersion imaging spectrometers, and interference imaging spectrometers.
2.1 Imaging Spectrometer Using Filter
This type of spectrometer uses a filter wheel to switch between bands in turn, detecting a spectral image for each band. The Surface Stereo Imager (SSI) carried on the Phoenix Mars Lander was used for high-resolution geological mapping, mapping of the robotic arm working areas, multispectral analysis, and atmospheric observations. Its two cameras, imitating human stereoscopic vision, have two filter wheels providing 12 spectral bands for each camera [7]. This method is only suitable for multispectral detection, as the number of spectral channels is limited by the number of filters.
A linear variable filter is a kind of wedge filter placed in front of the detector. Because the film thickness changes linearly with position, interference makes light of different wavelengths pass through different positions of the filter. One dimension of the array detector is then the spatial dimension and the other the spectral dimension, and the spectrogram of a scene is built up from different pixels at different times. OVIRS employs a wedged filter to cover the spectrum from 0.4 to 4.3 μm with a spectral resolution of 7.5–22 nm. Operating in scanning mode, OVIRS will provide asteroid spectral data and global spectral maps of mineralogical and molecular components including carbonates, silicates, sulfates, oxides, absorbed water, and a wide range of organic species [8]. The advantages of this scheme are its simple structure and small volume, but it suffers from spectral drift and nonsimultaneous imaging when the incidence angle of the light is large.
An acousto-optic tunable filter (AOTF) is an electro-optic modulation spectroscopic device based on solid crystal materials. Exploiting the acousto-optic effect, wavelength selection is realized by Bragg diffraction of the light incident on the transmission medium while sound waves propagate in the anisotropic medium.
SPICAM (Spectroscopy for the Investigation of the Characteristics of the Atmosphere of Mars) is an ultraviolet and infrared atmospheric spectrometer on the ESA Mars Express mission, with a UV channel (118–320 nm) and an infrared channel (1.1–1.7 μm). The infrared channel uses an AOTF to reduce the mass of the instrument and achieves a spectral resolution of 0.5–1.2 nm. The optical system configuration and the appearance of the instrument are shown in Fig. 1. The instrument is mainly used to measure water vapor in the Martian atmosphere and to analyze the solar radiation spectrum reflected by the surface and absorbed by the atmosphere [9]. The infrared imaging spectrometer on China's first
Fig. 1 SPICAM optical system (left) and envelope (right)
lunar landing and roving mission, Chang'e-3, employs an AOTF to select the wavelength by changing the driving frequency. It accomplishes spectral imaging in the VIS and NIR bands (0.45–0.95 μm) and spectral detection in the SWIR band (0.9–2.4 μm), and has provided a wealth of scientific observation data on the mineral composition of the rover's patrol areas [10]. An AOTF spectrometer offers flexible spectral programmability and a simple, compact structure. However, it suffers from image blurring and offset, and its spectral calibration is complicated, requiring accurate performance testing and correction of the AOTF device.

2.2 Dispersion Imaging Spectrometer
In a dispersion imaging spectrometer working in push-broom mode, the incident beam is imaged onto a slit by a telescope; the slit image is then dispersed by a prism or grating spectrometer along the slit-width direction and imaged onto the detector. A prism is not well suited to achieving a large field of view together with high spectral resolution, and a prism-based system tends to be longer and bulkier than a grating spectrometer, which offers higher spectral resolution and a larger field of view. The visible and infrared mineralogical mapping spectrometer (OMEGA), launched on the ESA Mars Express orbiter in 2003, is a grating spectrometer working over 0.35–5.1 μm with a spectral resolution of 7–20 nm [11]. The optical system and the layout of the instrument are shown in Fig. 2. The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) was launched on the Rosetta comet probe in 2004; its scientific goals are to study the properties of the solid nucleus, to determine the gas composition, to measure the surface temperature of the nucleus, to analyze the comet's physical condition, and to help find the best landing site. One channel is for spectral
Fig. 2 Optical system of OMEGA and the envelope of the instrument. (a) Optical system of OMEGA. (b) OMEGA envelope
Fig. 3 Principle of the Michelson interferometer (the optical path difference is 2d when the movable mirror M2 moves by d)
imaging and is housed in the Mapper optical subsystem (VIRTIS-M), while VIRTIS-H is devoted to spectroscopy with higher spectral resolution. The resolving power (λ/Δλ) of VIRTIS-M is 70–380 [12]. A grating spectrometer can directly obtain the spectrum and image of the target with both high spatial resolution and high spectral resolution, and has therefore been widely employed in deep space exploration payloads, although it places high demands on optical throughput and detector performance.

2.3 Interference Imaging Spectrometer
The interference imaging spectrometer is based on the principle of the Michelson interferometer, shown in Fig. 3. The beam emitted from a light source is collimated and then split into a reflected beam and a transmitted beam by a beamsplitter. The reflected part, after reflection by a fixed mirror (M1) and transmission through the beamsplitter, meets the transmitted beam reflected by a moving mirror (M2). The resulting interference light is focused on the detector; its intensity is a function of the optical path difference between the two beams, generated by the motion of the moving mirror. The spectral image of the target is thus transformed into an interferogram, from which the spectrum can be recovered by an inverse Fourier transform of the signal recorded by the detector; this kind of spectrometer is therefore also called a Fourier transform spectrometer (FTS). MINI-TES is a time-modulated FTS on the "Spirit" and "Opportunity" Mars Exploration Rovers (MER), covering the spectral range of 5–29 μm with a spectral resolution of 10 cm⁻¹. The optical system of MINI-TES is shown in Fig. 4. The distribution images of the temperature and thermal emission of the Martian surface are produced by an inverse FFT of the time-series interferogram of each pixel [13]. The Fourier transform imaging spectrometer is well suited to infrared hyperspectral imaging with very high spectral resolution and medium spatial resolution; its wide spectral coverage allows more compositional and thermal emission characteristics of low-temperature targets to be determined. Because of its moving mechanism, however, a stable and reliable design adapted to the space environment must be considered especially carefully for an FTS.
Based on the above analysis, different spectroscopic schemes are selected for deep space imaging spectrometers according to the mission requirements, the scientific objectives, and the resource allocation. The spectral coverage
Fig. 4 Optical system of MINI-TES
of these spectrometers is becoming wider, extending from the visible to the very long wave infrared. Visible to medium wave infrared (MWIR) bands are used to detect minerals and atmospheric compositions, while medium and long wave infrared bands serve to determine thermal emission features. Combining spectral information from different bands makes it easier to distinguish among more material compositions.
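The interferogram-to-spectrum relation underlying an FTS can be sketched numerically. In this minimal demonstration the spectral lines and sampling parameters are invented for illustration (they are not MINI-TES values): each spectral component contributes a cosine versus optical path difference, and the spectrum is recovered by a Fourier transform of the recorded interferogram.

```python
import numpy as np

# Simulate an FTS: a moving mirror sweeps an optical path difference (OPD) x,
# and the detector records the interferogram
#   I(x) = sum over wavenumbers k of S(k) * cos(2*pi*k*x).
n = 4096                      # interferogram samples
dx = 1e-4                     # OPD step [cm], so max wavenumber = 1/(2*dx) = 5000 cm^-1
x = np.arange(n) * dx         # one-sided OPD samples [cm]
k = np.fft.rfftfreq(n, d=dx)  # wavenumber axis [cm^-1]

# Toy emission spectrum: two Gaussian lines (stand-ins for spectral features)
spectrum = (np.exp(-((k - 900.0) / 20.0) ** 2)
            + 0.5 * np.exp(-((k - 1400.0) / 30.0) ** 2))

# Build the interferogram: each spectral component adds a cosine in OPD
interferogram = spectrum @ np.cos(2 * np.pi * np.outer(k, x))

# Recover the spectrum with a real FFT (cosine transform) of the interferogram
recovered = np.fft.rfft(interferogram).real
recovered *= spectrum.max() / recovered.max()   # normalize for comparison

# The strongest recovered line sits at the same wavenumber as the input line
assert abs(k[np.argmax(recovered)] - k[np.argmax(spectrum)]) < 5.0
print("strongest line recovered near %.0f cm^-1" % k[np.argmax(recovered)])
```

The DFT bin spacing 1/(n·dx) ≈ 2.4 cm⁻¹ plays the role of the spectral resolution; a longer mirror travel (larger n·dx) gives finer resolution, which is why MINI-TES reaches 10 cm⁻¹ with a modest scan.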
3 Spectral Detection of Small Solar System Bodies
Humanity's 244 deep space exploration missions have covered almost all kinds of celestial bodies in the solar system: the moon, the seven other planets and their satellites, dwarf planets, asteroids, and comets. Judging by the exploration frequency of the past 20 years, the hot spots of planetary science research now and in the future are Mars, asteroids, and the moon. As lunar and Mars exploration booms, asteroid exploration is also attracting attention; missions have developed from early flybys to low-altitude orbital exploration, and recently to soft landing on and sample return from asteroids.
Asteroids are objects that orbit the sun but are too small to be called planets. There are about 2000 asteroids larger than 1 km in diameter and about 300,000 larger than 100 m. These planetary bodies formed 4.6 billion years ago and contain a large amount of information on the early formation and evolution of the solar system. The mineral content and distribution of asteroid surfaces are preconditions for studying asteroid geological evolution and the origin of life, and for the future exploitation and utilization of asteroid resources. Information such as precise orbital data and basic physical properties has been obtained by optical and radar observations from the ground, and many physical, chemical, and mineralogical properties have been learned by studying meteorites, the counterparts of asteroids. Restricted by the accuracy and limitations of these methods, however, only some physical parameters of large asteroids can be measured from the ground, and observing bodies below a kilometer in diameter is difficult. Through asteroid exploration,
understanding the formation and evolution of planetary systems, the origin of the solar system and of life, and the further utilization of asteroid resources have become focal points of many countries' space plans.

3.1 Spectrum Detection of Asteroids
The main purposes of asteroid spectrum detection are to obtain the mineral types and their distribution on the surface, to study water-related minerals and their distribution, to analyze the composition and content of minerals in low-temperature areas, to collect distribution data on surface rocks and soils, and to select landing sites based on all these data.
1. To obtain spectral information of the asteroid surface for mineral and water ice detection. Owing to their distinct absorption features in the infrared, mineral compositions and water-bearing materials can be identified from the spectral information of the asteroid surface. This includes: collecting evidence of past or present biological activity by identifying rocks and soils on the surface of asteroids; studying the formation process of water by identifying specific mineral types, including carbonates, sulfates, phyllosilicates, evaporites, and phosphates; and searching for minerals that can result from biological processes, such as manganese dioxide and carbonate.
2. To analyze the surface hardness of asteroids in support of landing site selection. The firmness or looseness of the soil in an area can be analyzed by exploiting the different spectral features of different materials on the asteroid surface, and of substances with the same composition but different hardness. Whether an area of the asteroid's surface is suitable for landing or roving can therefore be decided by surveying the composition and distribution of its rocks and soils.

3.2 Main Belt Comet Detection
Rich in water ice and other volatiles and located in the asteroid belt between Mars and Jupiter, main belt comets are probably among the bodies that brought water to the early earth. Since their discovery at the beginning of the twenty-first century, main belt comets have aroused great interest in planetary science and have become important candidate targets for future orbital detection missions. To understand the characteristics, sources, and distributions of the various volatiles, including water ice, carried by a main belt comet, key parameters such as surface composition, its distribution, and its production rate should be measured quantitatively. The main scientific objectives are: to find activity caused by surface impacts and understand the physical mechanisms of the various mass-loss processes of small bodies; to find complex organic compounds in the nucleus and identify their composition; to determine the types, composition, and morphology of the various kinds of ice on the main belt comet; to understand the evolution of asteroid-like and comet-like bodies by determining the surface geology and the structure and distribution of minerals; and to determine the composition and nature of silicate minerals on the surface of the main belt comet.
4 Design of the Visible and Infrared Imaging Spectrometer for Small Solar System Bodies Detection
The visible and infrared imaging spectrometer has become an essential payload for material composition detection in deep space exploration missions. Thanks to its wide spectral coverage, high spatial resolution, and high spectral resolution, this kind of spectrometer has been employed on Cassini, Mars Express, Rosetta, Venus Express, Dawn, and many other missions. The small solar system body exploration mission considered here will fly to, land on, and sample a near-earth asteroid, fly around a main belt comet, and return to Earth. The spectrometer's tasks are to investigate the surface minerals in detail, to analyze the hardness of the surface, to support landing site selection, and to determine the composition of the main belt comet. Based on these mission requirements, the spectrometer must cover the visible to MWIR band with hyperspectral imaging capability and high sensitivity, and its structure must be very compact to meet the low-mass, small-volume requirements of deep space exploration payloads.

4.1 Main Specification Design of the Spectrometer
The spectral characteristics of a substance, its "spectral fingerprint", are the most effective and common means of identifying minerals. At present, the spectral range of most imaging spectrometers for mineral detection is 0.4–2.5 μm. Experience from previous asteroid detection missions shows that extending coverage into the MWIR band helps distinguish minerals whose spectral absorption lines are similar in the visible to short wave infrared (SWIR) band, so that new minerals can be discovered. On the other hand, since the signal-to-noise ratio (SNR) would be degraded by noise from the thermal emission of the optical-mechanical structure, the long-wave edge of the spectrometer is set to 5 μm.
Laboratory analysis of several mineral samples shows that the spectral sampling interval must be small for substances whose spectral curves have shallow absorption features; otherwise those features cannot be detected in the instrument signal, or the position of the absorption feature will drift, degrading the accuracy of mineral identification. For minerals with similar spectral characteristics whose absorption lines lie very close together, higher spectral resolution is needed to improve the recognition rate of different minerals in similar rocks. Our present understanding of asteroids is limited; to identify more extraterrestrial minerals and find new types, the spectral resolution should be as high as practical. Considering the spectral range of the asteroid detection spectrometer and the current state of the technology, the recommended spectral resolution is 5–10 nm.
Spectrometers for asteroid detection missions are usually carried on an orbiter, observing the asteroid from a distance. For such an instrument, the design of the spatial resolution must consider the detection distance and the size of the asteroid.
For bodies of small diameter, the material distribution of the surface should be measured with high
spatial resolution. For bodies that the probe merely flies by, high spatial resolution is also advantageous because of the long detection distance. For these reasons the instantaneous field of view (IFOV) of the spectrometer is set to 0.1 mrad, corresponding to a spatial resolution of 0.5 m at a distance of 5 km.
The observed radiance of a visible and infrared spectrometer depends on the albedo of the asteroid and on the observation geometry between the asteroid and the sun. The albedo is taken as 0.13 according to NASA's near-earth object observation program, and the albedo of a comet is about 0.05 or lower. In the infrared, the asteroid's surface temperature is generally taken to be about 200 K or lower. Thermal emission dominates in the MWIR band, while the energy detected in the SWIR includes both reflected solar radiance and the thermal emission of the target itself. The SNR of previous deep space spectrometers is generally no less than 100, which meets most scientific objectives. The spectrometer for asteroid and comet detection should therefore achieve an SNR better than 100 even for targets of low reflectivity and low temperature.
4.2 Key Techniques of the Asteroid Detection Spectrometer
1. Wide spectral coverage and compact design of the spectrometer. Many factors in small-body composition detection remain unknown, and scientists lack prior data on the mineral types and their spectral curves. A spectrometer with wider spectral coverage and higher spectral resolution can obtain more detailed science data for identifying and determining material compositions. A visible and infrared imaging spectrometer covering the visible to the medium wave infrared makes channel division and grating design and manufacture harder than in traditional instruments. The optical design must guarantee performance with a very compact layout to meet the low-mass, small-volume requirements of deep space exploration payloads. By using a single grating integrating the visible and infrared bands, folding the optical path, and other measures, the optical system can be made very compact with few optical components while achieving high grating diffraction efficiency, small spectral distortion, and good imaging quality. The all-reflective optical system is composed of a Schafer telescope and an Offner spectrometer, covering the whole spectral range with one optical system and no correction lens, dichroic plate, or objective lens. This design strongly reduces weight and volume, as shown in Fig. 5. The division of the visible and infrared bands is realized by one grating with different groove densities in different zones, as shown in Fig. 6.
2. High-sensitivity spectral imaging technology. The energy detected in each spectral channel of the spectrometer is very weak, because of the very low reflectivity and temperature of deep space targets and because of the small IFOV and the dispersive principle of a spectrometer with high spatial and spectral resolution. The sensitivity requirements of the instrument are therefore very strict.
For a spectrometer whose MWIR band extends to 5 μm, the dark current noise of the detector must be reduced, and the noise produced by the thermal
Fig. 5 Optical system layout of the visible and infrared imaging spectrometer
Fig. 6 Grating zones (IR1, IR2, V1, V2)
emission of the instrument itself must be limited to a negligible level. The thermal emission noise over the whole detection chain should be simulated, and low-temperature design of the optical system together with detector cooling technology is adopted to control the noise effectively.
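To illustrate why the low-temperature design of the optics matters at the 5 μm band edge, the sketch below compares instrument self-emission for room-temperature and cooled optics. The 293 K and 200 K optics temperatures are illustrative assumptions, not design values from the text.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam, t):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return 2 * H * C**2 / lam**5 / math.expm1(H * C / (lam * KB * t))

lam = 5.0e-6                # long-wave edge of the instrument [m]
warm = planck(lam, 293.0)   # optics at room temperature
cold = planck(lam, 200.0)   # cooled optics (illustrative temperature)

print(f"self-emission reduction at 5 um: about x{warm / cold:.0f}")

# Cooling the optics from 293 K to 200 K suppresses the 5 um thermal
# background by roughly two orders of magnitude
assert 50 < warm / cold < 200
```

Because the Planck function falls exponentially on its short-wavelength side, even modest cooling of the optical bench yields a large reduction of the in-band thermal background, which is the rationale for the low-temperature design mentioned above.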
5 Conclusion
The spectrometer has become an indispensable instrument for material composition detection in deep space exploration. Because of the uncertainty of targets and mission requirements, spectrometers must cover a wider spectral range, offer high spatial resolution, high spectral resolution, and high sensitivity, and keep volume and mass low, in order to obtain more detailed and accurate spectral images of material compositions. The visible and infrared imaging spectrometer covering the visible to the MWIR (0.4–5 μm) described here determines spatial distribution and spectral information simultaneously. The spatial resolution is 0.5 m at a distance of 5 km. Hyperspectral detection with 5 nm spectral resolution and imaging with a medium spectral resolution of
10 nm are realized, and the SNR of the instrument can be better than 100. The spectrometer can determine the target's global properties (size, shape, albedo, etc.) and local properties (mineralogical features, topography, roughness, dust and gas production rates, etc.). With its low-temperature technology, the spectrometer can also supplement resource distribution data for cold targets and for regions of low temperature and low illumination, giving it wide application prospects.
References
1. Ouyang, Z., Li, C., Zou, Y., Liu, J., Xu, L.: Progress of deep space exploration and Chinese deep space exploration stratagem. In: International Symposium on Deep Space Exploration Technology and Application. Chinese Society of Astronautics, Qingdao (2002)
2. Ye, J., Peng, J.: Deep space exploration and its prospect in China. Eng. Sci. 8(10), 13–18 (2006)
3. Zhao, B., Yang, J., Chang, L., et al.: Optical design and on-orbit performance evaluation of the imaging spectrometer for Chang'e-1 lunar satellite. Acta Photon. Sin. 38(3), 479–483 (2009)
4. Smith, M.D., Wolff, M.J., Clancy, R.T., The CRISM Science Team: CRISM observations of water vapor and carbon monoxide. NASA Report (2008)
5. Grassi, D., Adriani, A., Moriconi, M.L., et al.: Jupiter's hot spots: quantitative assessment of the retrieval capabilities of future IR spectro-imagers. Planet. Space Sci. 58, 1265–1278 (2010)
6. Gal-Edd, J., Cheuvront, A.: The OSIRIS-REx asteroid sample return: mission operation design. In: 13th International Conference on Space Operations, Pasadena, CA, May 2014 (2014)
7. Smith, P.H., Weinberg, J., Friedman, T., et al.: The MVACS surface stereo imager on Mars Polar Lander. J. Geophys. Res. 106(E8), 17589–17607 (2001)
8. Reuter, D.C., Simon-Miller, A.A.: The OVIRS visible/IR spectrometer on the OSIRIS-REx mission. NASA Report (2012)
9. Korablev, O., Bertaux, J.-L., Dimarellis, E., et al.: An AOTF-based spectrometer for Mars atmosphere sounding. In: Infrared Spaceborne Remote Sensing X, Seattle, WA (2002)
10. Wang, Y., Jia, J., He, Z., Wang, J.: Key technologies of advanced hyperspectral imaging system. J. Remote Sens. 20(5), 950–957 (2016)
11. Bellucci, G., Altieri, F., Bibring, J.P., et al.: OMEGA/Mars Express: visual channel performances and data reduction techniques. Planet. Space Sci. 54, 675–684 (2006)
12. Coradini, A., Capaccioni, F., Drossart, P., et al.: VIRTIS: an imaging spectrometer for the ROSETTA mission. Space Sci. Rev. 128, 529–559 (2007)
13. Silverman, S., Peralta, R., Christensen, P., Mehall, G.: Miniature thermal emission spectrometer for the Mars Exploration Rover. In: Optical Spectroscopic Techniques and Instrumentation for Atmospheric and Space Research V, San Diego, CA (2003)
A Novel Optical System of On-Axis Three Mirror Based on Micron-Scale Detector
Jianing Hu(*), Xiaoyong Wang, and Ningjuan Ruan
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected]
Abstract. To meet the demand of aerospace remote sensing for higher resolution and much greater pixel density, micron-scale detectors have emerged. The detector pixel size of a typical space remote sensing camera is over 6 μm, with a Nyquist frequency below 100 lp/mm. This chapter employs a pixel size of only 1.4 μm, with a Nyquist frequency over 357 lp/mm; it is the first time that such a small pixel is used for visible light, and the optical system therefore requires a demanding and sophisticated design. First, the development of small pixel size detectors is reviewed. Second, the effect of pixel size on the optical and detection system is analyzed. Then the design of a novel large-aperture on-axis three-mirror optical system with a small F-number is presented, in which the primary and tertiary mirrors are ready for integration on a single substrate. This novel system shortens the total length, which not only decreases manufacturing complexity but also enhances the quality of remote sensing images. The resulting system works at 0.4–0.7 μm, operates at less than F/2 with a 300 mm pupil and over a 2.2° × 2.2° diagonal full field of view, and its MTF value is above 0.25 at 357 lp/mm.
Keywords: Space remote sensing · Micron-scale detector · Small pixel · On-axis three mirror · Optical design
© Springer Nature Switzerland AG 2020. H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_16

1 Introduction
A pixel is the smallest element of an image: the smaller the pixel, the higher the resolution of the image can be, and a large pixel count means the image carries a prodigious amount of information. With the development of electro-optical imaging systems, the "pixel war" is becoming increasingly fierce, and obtaining higher-resolution images for space exploration, national defense, disaster monitoring, and geological survey has become a hot research topic. Normally, there are four methods of obtaining 100-megapixel images [1]: detector stitching, single-shot stitching, multi-shot stitching, and multi-scale gaze stitching. Each has its disadvantages. Detector stitching has difficulty achieving seamless splicing, which leads to dead zones. Considering its time delay, single-shot stitching is not the first choice. Multi-shot stitching solves the time-delay problem but is expensive and wastes too much space. Multi-scale gaze stitching is limited by the available glass materials. To acquire higher-resolution images and overcome these problems, applying a micron-scale detector to a space remote sensing camera is urgently needed.
Small pixel size detectors are already mature in camera lenses and cell phones: the continuous development of photoelectric detectors reduced the pixel size from 5.6 μm in 2004 to 1.4 μm in 2012 [2], and the smallest pixel today is only 0.9 μm. In space remote sensing cameras, however, the pixel size is still larger than 6 μm, because applying small pixels causes many problems in optical system design, fabrication and assembly tolerancing, and camera testing. Space remote sensing cameras are quite different from SLR lenses: small pixels must be combined with a large-aperture lens, which requires a small F-number, and a small F-number optical system gives rise to many optical design problems, especially aperture-related aberrations. This chapter discusses the optical system design of a space remote sensing camera using a 1.4 μm pixel size, micron-scale detector.
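The Nyquist figures quoted above follow directly from the pixel pitch, since the detector Nyquist frequency is 1/(2 × pitch). A minimal sketch (the focal length and orbit height used in the GSD line are hypothetical values chosen only for illustration):

```python
def nyquist_lp_per_mm(pixel_um: float) -> float:
    """Detector Nyquist frequency in line pairs per mm: 1 / (2 * pixel pitch)."""
    return 1000.0 / (2.0 * pixel_um)

# Conventional space-camera pixel vs the micron-scale detector discussed here
assert nyquist_lp_per_mm(6.0) < 100          # 6 um pixel  -> about 83 lp/mm
assert round(nyquist_lp_per_mm(1.4)) == 357  # 1.4 um pixel -> about 357 lp/mm

# Smaller pixels also shrink the ground sampled distance (GSD) for a fixed
# focal length f and orbit height h: GSD = pixel * h / f  (hypothetical values)
pixel, f, h = 1.4e-6, 10.0, 500e3   # pixel pitch [m], focal length [m], height [m]
print(f"GSD = {pixel * h / f:.2f} m")
```

The optical system must therefore deliver usable MTF out to 357 lp/mm, rather than the sub-100 lp/mm typical of 6 μm pixel cameras, which is what drives the small F-number design discussed below.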
2 The Development and Application of Small Pixel Size Detectors

The photoelectric converter, which samples the target and transforms the optical signal into an electrical signal, is the core device of a photoelectric detector. It groups many light sensors together on a semiconductor element; each light sensor is called a pixel. The pixel is the smallest element of an image: the smaller the pixel, the smaller the GSD (Ground Sampled Distance). On the other hand, a smaller pixel leads to a decline in SNR (Signal-to-Noise Ratio), MTF (Modulation Transfer Function) and the field of view per pixel. A small pixel size detector has low sensitivity: each pixel receives less light energy, which makes it hard to obtain satisfactory images. A smaller pixel also corresponds to a higher sampling frequency, which requires the optical system to meet strict quality standards, that is, to deliver high image quality at a higher Nyquist frequency. Image sensor size is related to optical system performance; as a result, when designing a system, the designer should take the pixel size into account in the various parameter indexes and performance specifications.

2.1 The Development of Small Pixel Size Detectors
Advanced CMOS manufacturing technology has decreased the detector pixel size from 5.6 μm to 0.9 μm, while the phone camera's pixel count has grown rapidly from 300 thousand to 24 million. Nowadays, flagship cell phones such as the iPhone use 12-megapixel cameras, while some Chinese brands use 24-megapixel front cameras. The last decade has seen a huge amount of technological progress, especially in making pixels smaller. In the area of small pixel size and large pixel count, SONY, Samsung and Omnivision are the leading brands of imaging sensors (Tables 1 and 2).
A Novel Optical System of On-Axis Three Mirror Based on Micron-Scale Detector
Table 1 CMOS detector pixel size comparison

Brand       Model         Pixel size  Pixel count
SONY        IMX298        1.12 μm     16 mega
SONY        IMX318        1.0 μm      22.5 mega
SONY        IMX211        4.6 μm      100 mega
SONY        IMX411        3.76 μm     150 mega
Samsung     ISOCELL       1.0 μm      16 mega
Omnivision  OV13870       1.25 μm     13 mega
Omnivision  PureCel-Plus  1.0 μm      13 mega
Omnivision  OV24A         0.9 μm      24 mega
Table 2 Space camera pixel size comparison [3, 4]

Pixel size  Remote sensing satellite  Launch year
12 μm       IKONOS                    1999
12 μm       QuickBird                 2001
12 μm       HiRISE                    2005
8 μm        WorldView-1               2007
8 μm        GeoEye-1                  2008
8 μm        WorldView-2               2009
8 μm        GeoEye-2                  2012
7 μm        TopSat                    2005
7 μm        DubaiSat-1                2009
6.5 μm      SkySat-1                  2013
6.5 μm      Flock                     2014
5.5 μm      Pathfinder                2016
5.5 μm      OVS-1A                    2017
5.5 μm      OVS-1B                    2017
Small pixel size detectors are already mature for cell phones, camera lenses and industrial cameras. Nonetheless, sub-small pixels cannot yet be applied to space remote sensing cameras. For instance, the sensing area and saturation signal still need to improve, which makes it hard for the MTF and SNR to reach a high level; both image sensor manufacturing and optical design must advance. Generally, the pixel size used on space-borne high-resolution remote sensing cameras is larger than 7 μm, because there is a wide gap in SNR, dynamic range, sensitivity, quantum efficiency and saturation signal between sub-small-pixel detectors and the CCD or CMOS devices applied in space technology. In recent years, as commercial remote sensing satellites develop fast, more and more micro-nano satellites use small pixels in order to reduce their weight at lower cost (Fig. 1). Overseas, research is ongoing to reduce the pixel size of infrared detectors; studies show that reducing pixel size helps reduce the weight and volume of a system. The pixel size of long-wave infrared systems has been reduced from over 20 μm to 12 μm, and there are mid-infrared detectors whose pixel size is only 8 μm. DRS, an American company, is reportedly dedicated to studying 5 μm pixel size long-wave infrared cameras with support from DARPA, aiming at further progress in reducing weight and volume at lower cost.
Fig. 1 The comparison of different pixel size long-wave infrared systems on weight and size [5]

Table 3 The comparison between a big pixel size detector and a small pixel size detector

Index               KODAK KAI-11002   CANON 120MXS
Resolution          4008 × 2672       13,280 × 9184
Maximum data rate   28 MHz            45 MHz
Pixel size          9 μm × 9 μm       2.2 μm × 2.2 μm
Frame rate          4.8 fps           9.4 fps
Saturation signal   >60,000 e−        10,000 e−
Quantum efficiency  50%               >50%
Dynamic range       66 dB             Unknown
Table 3 gives the performance parameters of a big pixel size detector and a small pixel size detector. The KAI-11002 is a scientific CCD made by KODAK, with about 10 megapixels at a 9 μm pixel size. The 120MXS, CANON's latest product, has about 120 megapixels with a pixel size of only 2.2 μm. Table 3 shows clearly that small pixel size detector performance has become nearly equal to, or even better than, that of big pixel detectors. Thanks to their huge pixel count, high resolution, light weight and miniaturization, small pixel size detectors play a significant role in security monitoring, geographical mapping, urban planning, disaster monitoring and other fields where high-resolution images are required. Without doubt, sub-small pixel size detectors will seize the aerospace market.

2.2 The Application of Small Pixel Size Detectors
2.2.1 Space Remote Sensing

A small pixel size detector has the great advantage of a small GSD, and it offers a larger pixel count for a given detector dimension. Accordingly, small pixel size detectors can be applied to high-resolution imaging systems and high Earth orbit observation. For a given GSD, compared with a big pixel size detector, a small pixel size detector allows the focal length to be reduced and the remote sensing camera to be made light and miniature. A low Earth orbit observation system could adopt push-broom TDI or staring imaging in order to obtain high-resolution images, improve data acquisition, save the cost of a multi-satellite network, and easily provide global coverage supporting many services. In high Earth orbits, small pixel size detectors will mainly be used for staring imaging and video observation, and the accuracy will be improved greatly. Nowadays, a 20 m resolution CCD camera in geostationary orbit weighs 1200 kg. Using small pixel size detectors would not only greatly reduce satellite weight, lower the cost of a multi-satellite network and realize global staring imaging, but could also serve battlefield control, hot spot patrol, target search and real-time tracking.

2.2.2 Star Sensors

Small pixel size detectors can also be used for star sensors, which locate stars and provide satellite attitude data to the spacecraft. Star sensing observes point objects: a point object is shaped into a dispersed spot by diffraction, and star extraction uses 3 × 3 or 5 × 5 pixels. If the CCD pixel size were 10 μm, the dispersed spot would need to be at least 30 μm × 30 μm. Using a small pixel size detector reduces the size of the dispersed spot and hence greatly reduces the star sensor F number. A small pixel size detector also allows a shorter focal length for a given aperture, making star sensors compact, light and miniature; installing more star sensors on the satellite platform improves measurement accuracy.
3 The Influence of Detector Pixel Size on the Imaging System

3.1 The Influence of Pixel Size on the Performance of the Optical and Detection Systems

The MTF is one of the most essential criteria for an imaging system [6]. Changes in pixel size influence both the optical system and the detection system. The system MTF equals the product of the MTFs of every part of the system; for a space remote sensing camera, the camera MTF equals the product of the optical system MTF and the detection system MTF. Therefore, we model and analyse how the pixel size impacts the optical system and the detection system, and work out the influence of pixel size changes on the space remote sensing camera [7].

3.1.1 Optical System

The optical system MTF is taken as that of a diffraction-limited optical system:
$$\mathrm{MTF}_{\mathrm{diffract}} = \frac{2}{\pi}\left\{\cos^{-1}\!\left(\frac{f_k}{f_c}\right) - \frac{f_k}{f_c}\left[1-\left(\frac{f_k}{f_c}\right)^{2}\right]^{1/2}\right\} \qquad (1)$$
Table 4 The change rule of optical MTF with different wavelengths, pixel sizes, and F/#

F/#   p = 0.7 μm   1.4 μm   2.1 μm   2.8 μm   3.5 μm   4.2 μm   4.9 μm
1.0   0.5679       0.7806   0.8533   0.8899   0.9119   0.9265   0.9370
1.5   0.3695       0.6729   0.7806   0.8351   0.8679   0.8899   0.9056
2.0   0.1946       0.5679   0.7086   0.7806   0.8241   0.8533   0.8742
2.5   0.0576       0.4664   0.6376   0.7265   0.7806   0.8169   0.8429
3.0   0            0.3695   0.5679   0.6729   0.7373   0.7806   0.8117
3.5   0            0.2784   0.4998   0.6200   0.6943   0.7445   0.7806
4.0   0            0.1946   0.4335   0.5679   0.6517   0.7086   0.7496
Here f_c is the optical bandpass limit for incoherent light, f_c = 1/(λF); F is the F number, equal to f/D; λ is the wavelength of light, taken as the working wavelength (λ1 + λ2)/2 for the wavelength range [λ1, λ2]; and f_k is the spatial frequency. According to the Shannon sampling theorem, the highest recoverable frequency is half the sampling frequency and is called the Nyquist frequency [8]. The sampling frequency is f_s = 1/p and the Nyquist frequency is f_Nyquist = 1/(2p). The optical MTF was calculated at a wavelength of 0.587 μm for pixel sizes from 0.7 μm to 4.9 μm and F numbers from 1 to 4. It is evident from Table 4 that the lower the F number, the larger the optical system MTF, and that a declining pixel size lowers the optical system MTF at the Nyquist frequency. When the pixel size matches the F number for a given wavelength, the optical system achieves a better MTF value. This chapter aims to design a novel optical system with a small pixel size detector and a low F number. However, it is unwise to blindly reduce the F number, because scene information will be aliased; moreover, the F number needs to balance imaging quality against the optical system structure.
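Equation (1) evaluated at the detector Nyquist frequency can be checked numerically. A minimal Python sketch (function names are ours, not from the chapter; exact values depend on the wavelength assumed and may differ slightly from Table 4):

```python
import math

def diffraction_mtf(fk, fc):
    """Diffraction-limited optical MTF of Eq. (1) for a circular, unobscured pupil."""
    r = fk / fc
    if r >= 1.0:
        return 0.0  # no contrast is transferred beyond the optical cutoff
    return (2.0 / math.pi) * (math.acos(r) - r * math.sqrt(1.0 - r * r))

def optical_mtf_at_nyquist(wavelength_um, pixel_um, f_number):
    """Evaluate Eq. (1) at the detector Nyquist frequency f_k = 1/(2p)."""
    f_nyquist = 1.0 / (2.0 * pixel_um)           # cycles per micrometre
    f_cutoff = 1.0 / (wavelength_um * f_number)  # optical bandpass limit 1/(lambda*F)
    return diffraction_mtf(f_nyquist, f_cutoff)

# Example at lambda = 0.587 um, p = 1.4 um: lowering F/# raises the optical MTF.
# For p = 0.7 um the Nyquist frequency already exceeds the cutoff at F/# = 3.
for f_no in (1.0, 2.0, 3.0):
    print(f_no, round(optical_mtf_at_nyquist(0.587, 1.4, f_no), 4))
```

The loop reproduces the qualitative trend of Table 4: the smaller the pixel and the larger the F number, the lower the optical MTF at Nyquist.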
3.1.2 Detection System

The detection system MTF is that of the image sensor geometry:

$$\mathrm{MTF}_{\mathrm{geometry}} = \frac{\sin(\pi p f_k)}{\pi p f_k} \qquad (2)$$
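Equation (2) can be sketched in a few lines (naming is ours). One property worth noting: at its own Nyquist frequency the argument πpf_k equals π/2 for any pixel pitch, so the detection system MTF there is always 2/π ≈ 0.64:

```python
import math

def detector_mtf(fk, pixel_um):
    """Geometric (pixel-aperture) MTF of Eq. (2): |sinc| of the pixel pitch."""
    x = math.pi * pixel_um * fk
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

# At f_k = 1/(2p) the argument is pi/2 regardless of p, giving 2/pi for every
# pixel size; at a fixed spatial frequency, a smaller pixel gives a larger MTF.
for p in (0.7, 1.4, 4.9):
    print(p, round(detector_mtf(1.0 / (2.0 * p), p), 4))
```

The camera MTF is then the product of this geometric term and the optical term of (1).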
The detection system MTF was calculated for pixel sizes from 0.7 μm to 4.9 μm; the trend can be seen clearly in Fig. 2. At a given spatial frequency, the pixel size is inversely correlated with the detection system MTF: a small pixel size detector has a high sampling frequency and a large MTF value. In contrast, for the optical system, the smaller the pixel size, the lower the optical MTF at the Nyquist frequency. Thus, it is important to select the detector sensibly in order to achieve an ideal system.

Fig. 2 The detection system MTF with different pixel sizes

3.2 The Difficulty of Designing an Optical System with a Small Pixel Size Detector

3.2.1 MTF

A small pixel size detector has a high sampling frequency. This chapter uses a 1.4 μm pixel as an example, for which the sampling frequency is 714 lp/mm and the Nyquist frequency is 357 lp/mm. In most countries, the Nyquist frequency of a space remote sensing camera's optical system is below 100 lp/mm; using such a small pixel therefore demands a much stricter quality standard of the optical system.

3.2.2 F Number

The minimum distance between two points that can be distinguished by an optical system is called the optical system resolution. The Airy disk should match the pixel size, so the smaller the pixel size, the lower the required F number. The Airy disk diameter is

$$D = 2.44\,\lambda\,F/\# \qquad (3)$$
In general, the detector sampling frequency and the optical bandpass limit are used to work out the F number. The detector sampling frequency is f_s = 1/p and the optical bandpass limit is f_c = 1/(λF). Dividing the sampling frequency by the bandpass limit gives λF/p, a measure of how finely the detector samples the diffraction-limited optics' PSF (Point Spread Function). Most space remote sensing cameras are designed with λF/p < 2; when λF/p = 1, the system has a higher MTF near the Nyquist frequency and the best image quality [9, 10]. This chapter takes λ = 0.578 μm and p = 1.4 μm as an example: λF/p = 1 gives F = 2.4, so to match the small pixel the F number must be less than 2.4. Aberration correction is difficult for a large-aperture, small F number optical system, especially the spherical aberration; in addition, the normal wide spectral band and moderate field angle lead to chromatic aberration and field-of-view aberrations (Fig. 3).
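The matching rules above are simple arithmetic; a short sketch (helper names are ours):

```python
def f_number_for_ratio(wavelength_um, pixel_um, lam_f_over_p=1.0):
    """F number giving a chosen sampling ratio lambda*F/p (Sect. 3.2.2)."""
    return lam_f_over_p * pixel_um / wavelength_um

def airy_diameter_um(wavelength_um, f_number):
    """Airy disk diameter of Eq. (3): D = 2.44 * lambda * F/#."""
    return 2.44 * wavelength_um * f_number

# lambda = 0.578 um, p = 1.4 um: lambda*F/p = 1 requires F/# of about 2.4,
# so matching a 1.4 um pixel forces the F number below roughly 2.4.
print(round(f_number_for_ratio(0.578, 1.4), 2))
print(round(airy_diameter_um(0.578, 2.4), 2))
```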
Fig. 3 Blur matches small pixels [11]
3.2.3 Depth of Focus

$$\delta = 2.44\,\lambda\,(F/\#)^{2} \qquad (4)$$

For a given wavelength, the depth of focus scales with the square of the F number. Compared with a large F number optical system (F/# = 10), a small F number system (F/# ≈ 2) has a depth of focus dozens of times shorter, and a short depth of focus is sensitive to defocus. This chapter's example: for λ = 0.578 μm, F = 10 gives δ = 141 μm, while F = 2.4 gives δ = 8 μm.
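The two worked numbers follow directly from (4); a quick sketch (function name is ours):

```python
def depth_of_focus_um(wavelength_um, f_number):
    """Depth of focus of Eq. (4): delta = 2.44 * lambda * (F/#)^2."""
    return 2.44 * wavelength_um * f_number ** 2

# Reproduces the chapter's example at lambda = 0.578 um:
print(round(depth_of_focus_um(0.578, 10.0)))  # about 141 um
print(round(depth_of_focus_um(0.578, 2.4)))   # about 8 um
```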
4 A Novel Optical System of On-Axis Three Mirror

4.1 Key Points in Designing the Optical System

Based on the small pixel size detector, a large-aperture optical system, and the low Earth orbit observation scenario, the design indexes of the optical system are given in Table 5.

Table 5 The design indexes of the optical system

Design index          Parameter
Wavelength coverage   0.4 μm – 0.7 μm
Aperture              300 mm
F number              2
Field of view         2.2° × 2.2°
Lens MTF              0.25 @ 357 lp/mm

Fig. 4 The optical system of Mersenne–Schmidt

This chapter designs a novel on-axis three-mirror optical system based on the Mersenne–Schmidt form. A reflective optical system has no chromatic aberration, which makes it suitable for large-aperture, wide-spectral-band imaging systems. Compared with an on-axis three-mirror optical system, an off-axis three-mirror optical system is much bigger; it is difficult to align and places high demands on precision, and a compact structure is normally not among its advantages. Because of its asymmetric structure, an off-axis system is also harder to align than an on-axis one. An on-axis three-mirror optical system, by contrast, lends itself to a compact structure and saves space. Thanks to its high imaging performance at low cost, the large-aperture on-axis three-mirror optical system is widely used in space remote sensing. It does have disadvantages, however, including central obscuration and reduced pupil energy, which diminish image quality; the field of view must be matched to the central obscuration in order to meet the MTF requirement.

Compared with the Mersenne–Schmidt, the primary mirror of the novel optical system is tangent to the tertiary mirror, which makes the optical system lighter and more miniature. The LSST (Large Synoptic Survey Telescope) also adopts this kind of optical system, but with three lenses before the image plane [12, 13]. This chapter designs a novel optical system without those three lenses, which reduces the number of optical elements and the loss of light energy (Figs. 4 and 5).

Fig. 5 The optical system of LSST

All three mirrors of the novel optical system adopt high-order aspheric surfaces. A spherical mirror suffers from spherical aberration and coma, and a small F number optical system with a large aperture has aperture aberrations. A high-order aspheric surface is not only good at aberration correction but also reduces the number of optical elements, enabling a shorter total length and saving space.

4.2 Performance Analysis
After optimization, the novel optical system has an aperture of 300 mm, an F number of 1.46 and a 2.2° × 2.2° full field of view. The focal length is 438 mm with 35% central obscuration, and the total length is only 250 mm. Compared with a big F number optical system with big pixels, the novel system has a shortened total length and saves space; it is an effective way to achieve a light, miniature system at lower cost. The diffraction limit is close to 0.27 at the Nyquist frequency of 357 lp/mm, and the full-field MTF almost reaches the diffraction limit, which meets the requirement of MTF over 0.25 at 357 lp/mm. Furthermore, the MTF is close to 0.2 at 714 lp/mm. The novel optical system thus surmounts the difficulties of the micron-scale detector (Figs. 6 and 7). The RMS spot radius over the full field of view is less than the Airy disk diameter of 2.093 μm, in accordance with the design demands. There is a little high-order spherical aberration near the edge of the field of view, but the aberration over the full field is less than 5 μm; aberrations are under control in this novel optical system (Figs. 8 and 9). Sagittal and meridional field curvature are both less than 2 μm, and distortion is controlled below 0.1%; field curvature and distortion meet the requirements of space remote sensing.
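The reported first-order parameters are mutually consistent, as a quick arithmetic cross-check shows (values from the text; variable names are ours):

```python
aperture_mm = 300.0
f_number = 1.46

# Focal length follows from f = D * F/#; the text reports 438 mm.
focal_length_mm = aperture_mm * f_number

# Airy disk diameter at lambda = 0.587 um, Eq. (3); the text compares the
# full-field RMS spot radius against this ~2.09 um figure.
airy_diameter_um = 2.44 * 0.587 * f_number

print(focal_length_mm, round(airy_diameter_um, 3))
```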
Fig. 6 The final optical system
Fig. 7 The final MTF at 357 lp/mm
Fig. 8 The final SPT
Fig. 9 The final Ray fan
From the diffraction encircled energy perspective, the blur spot that concentrates over 30% of the energy is regarded as the effective blur, and its reciprocal is the optical system resolution. The diffraction encircled energy of the novel optical system is over 60% at a blur radius of 1.4 μm (Figs. 10 and 11).
Fig. 10 The final field curvature and distortion
Fig. 11 The final diffraction encircled energy radius
5 Conclusion

This chapter introduces the development and application of small pixel size detectors in detail, surveying the state of image sensor development and the application of small pixels in space remote sensing. It is not hard to see that more and more small pixel size detectors are being used on space remote sensing cameras. The study indicates that small pixels contribute to shortening the total length, reducing camera weight and size, and lowering cost. This chapter compares two image sensors with different pixel sizes; the result shows that small pixel size detectors are already close to, or even better than, big pixel size detectors in some performance parameters. In the future, small pixel size detectors will play an important role in space remote sensing.

A change of pixel size changes the image quality, and the MTF is in some sense the most essential criterion of an imaging system. Taking the space remote sensing camera into account, we modeled and analysed the impact of pixel size on the optical system and the detection system. The research shows that the smaller the pixel size, the higher the detection system MTF and the lower the optical system MTF at the Nyquist frequency; consequently, to obtain a high-quality image, the optical system design must match the detection system. There is no doubt that small pixels bring huge difficulties to optical design in terms of Nyquist frequency performance, aberration correction and defocus. This chapter designs a novel on-axis three-mirror optical system that overcomes the difficulties caused by small pixels. Its novelty is that the primary mirror is tangent to the tertiary mirror, which makes the optical system lighter and more miniature. This novel optical system meets the requirements for applying the micron-scale detector. Micron-scale detectors have sub-small pixels and ultrahigh sampling frequency, and may capture more detail in an image.

The novel on-axis three-mirror optical system based on a micron-scale detector felicitously solves the problems that a large-aperture optical system with a small F number brings. This chapter has demonstrated its viability for future space remote sensing cameras.
References

1. Fenggang, X.: Research on Design of Wide Field of View High Resolution Imaging Optical System [D]. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences (2017)
2. Wenjing, X.: Research and Design of Column-Level ADC for CMOS Image Sensor [D]. Tianjin University (2012)
3. Changyuan, H.: Recent earth imaging commercial satellites with high resolutions [J]. Chin. J. Opt. Appl. Opt. 3(3), 201–208 (2010)
4. Cawley, S.: TopSat: low cost high-resolution imagery from space [J]. Acta Astronaut. 56(1), 147–152 (2005)
5. Kinch, M.: Case for small pixels: system perspective and FPA challenge [C]. In: Image Sensing Technologies: Materials, Devices, Systems, and Applications, p. 91000I. International Society for Optics and Photonics (2014)
6. Kingslake, R.: Optical System Design [M]. Academic Press, New York (1983)
7. Dexian, Z., Xiaoping, D.: Influence of detector's pixel size on performance of optical detection system [J]. Chin. Space Technol. 31(3), 51–55 (2011)
8. Ma, W.: Space Optical Remote Sensing Technology [M]. China Science and Technology Press, Beijing (2011)
9. Fiete, R.D.: Image quality and λFN/p for remote sensing systems [J]. Opt. Eng. 38, 1229 (1999)
10. Ningjuan, R., Bingxin, Y.: Study on parameters of influencing relative aperture of TDICCD camera [J]. Spacecraft Recov. Remote Sens. 26(3), 52–55 (2005)
11. Changyuan, H.: Performance optimization of electro-optical imaging systems [J]. Opt. Precis. Eng. 23(1), 1–9 (2015)
12. Ivezić, Ž., Connolly, A.J., Jurić, M.: Everything we'd like to do with LSST data, but we don't know (yet) how [J]. Proc. Int. Astron. Union 12(S325) (2016)
13. Jones, L.: Asteroid detection with the Large Synoptic Survey Telescope (LSST) [J]. IAU Gen. Assembly 22(1), 1–9 (2015)
Design of a Snapshot Spectral Camera Based on Micromirror Arrays

Shujian Ding1,2(*), Xujie Huang1,2, Jiacheng Zhu1,2, and Weimin Shen1,2

1 Key Lab of Modern Optical Technologies of Education Ministry of China, School of Optoelectronic Science and Engineering, Soochow University, Suzhou, China
[email protected]
2 Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province, School of Optoelectronic Science and Engineering, Soochow University, Suzhou, China
Abstract. Spectral imaging is an emerging tool for a variety of scientific and engineering applications because of the additional information it provides about the nature of the materials being imaged. Snapshot spectral imaging has great advantages over traditional imaging systems in data acquisition. Examples of such systems are the computed tomographic imaging spectrometer (CTIS) and the coded aperture snapshot spectral imager (CASSI); both are based on computational imaging and do not directly capture an image of the target. Another class of snapshot spectral imaging systems provides all spatial-spectral information on one or more CCD image sensors with a one-to-one correspondence between datacube voxels and detector pixels. Integral field spectrometry with faceted mirrors (IFS-M), integral field spectrometry with coherent fiber bundles (IFS-F), and the image mapping spectrometer (IMS) are typical examples. In IFS-M, the two-dimensional field of interest is optically divided into small samples that are re-imaged at the entrance plane of the spectrograph; its spatial resolution is low. IFS-F uses reformatting fiber optics to map a 2D image to a linear array that serves as the input slit of an imaging spectrometer; the drawback of fibers is that efficient light coupling is quite difficult, so part of the light energy is lost on entering the fiber. The micromirror array is the IMS's core component: it consists of a series of long, narrow mirrors that slice the image and map it onto the detector image plane. The reflective nature of the micromirror array and the use of prisms for spectral dispersion give the system a high light collection capacity, and its spatial resolution is higher than that of IFS-M. A snapshot spectral camera design based on the IMS is given in this chapter. The system operates over a wavelength range of 400 nm to 900 nm, with a focal ratio of 15 and a spectral resolution of 14.29 nm; the design results are analyzed and evaluated, and good imaging quality is achieved. It is expected to be applied to remote sensing.

Keywords: Snapshot spectral camera · Micromirror array · Spectral imaging
1 Introduction

Spectral imaging technology is a novel detection approach that acquires a two-dimensional visual picture together with one-dimensional spectral information [1], typically collecting a three-dimensional (3-D) dataset. Because different substances have different spectral signatures, spectral imaging has a wide range of applications in coastal ocean imaging, environmental monitoring, and tactical military imaging [2–4]. Depending on the technical means of obtaining the data cube, spectral imaging systems can be classified into scanning spectral imaging systems and snapshot spectral imaging systems [5]. Traditional dispersive imaging spectrometers employ "push-broom" or "whisk-broom" scanning, whereas a snapshot spectral imager obtains the entire dataset during a single detector integration period. Compared with a scanning spectral imager, a snapshot spectral imager has an advantage in detecting moving targets; snapshot systems can also offer much higher light collection efficiency because no slit is used [6].

At present, many snapshot spectral imaging techniques have emerged. The integral field spectrometer with lenslet arrays, proposed early on, has the advantage of a simple structure: the lenslet array is placed at the image plane of the telephoto objective, which can reduce the volume of the instrument. However, the spatial resolution of the system depends on the size of the lenslet array and is low. The integral field spectrometer with coherent fiber bundles can reformat the field of view through the fiber bundle and thereby acquire the data cube of the object, but arranging the fiber image bundles is difficult and the operating band is limited by the transmission of the fiber. The coded aperture snapshot imaging spectrometer is based on the theory of compressive sensing and uses a coding plate to reduce the complexity of the optical system [7]; its disadvantage is that the data cube must be reconstructed by computer algorithms.

© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_17

S. Ding et al.
The computed tomography imaging spectrometer uses two-dimensional dispersive elements to obtain projections of the data cube in different directions; it suffers from information loss, and computer software algorithms are likewise needed to reconstruct the data cube. Integral field spectrometry with faceted mirrors, widely used in astronomy, uses special optical elements to segment the instrument's field of view to acquire images and spectra, but has the disadvantages of low spatial resolution and a small field of view. Building on the image slicer, the image mapping spectrometer achieves higher spatial resolution and a larger field of view by means of micromirror arrays. The snapshot spectral camera given in this chapter follows this design idea: it is based on micromirror arrays and has no moving parts. Spectral and spatial information can be obtained directly from the system image plane, and the instrument samples all of the spectral bands within the target spectrum simultaneously and quickly. It is expected to be applied in the field of remote sensing.
2 Principle and Design Considerations Figure 1 shows the concept design of the snapshot spectral camera. It consists of a telephoto objective, micromirror arrays, a collimating lens, an array of prisms, an array of lenses, and a detector. The detector is situated in the image plane of the system. The telephoto objective is used to image objects at infinity onto the image plane. A rectangular image will be produced on the image plane. Micromirror arrays are the same size as the rectangular image. The micromirror array is a periodic optical component. In order to make the image divided, different sub-mirrors in the cycle have different rotation angles. On the other hand, the telephoto objective is telecentric, and its exit pupil is at infinity. The micromirror array is a component without power. Thus the collimating lens is used to image the pupil and a series of pupil images will be obtained on the back focal plane of the collimating lens. In order to achieve pupil matching, the entrance pupil of imager that consists of prisms and lenses is located in front of the imager. Finally, the spatial information and spectral information of the object are obtained on the image plane of the system. Sub-mirrors of micromirror arrays have different angles of rotation. The micromirror array is composed of an array of tiny mirror facets that reflect linear mappings of the image to different regions in the detector. The micromirror array is shown as a simplified 3D model which has only 8 slicing components, tilted in the direction (Ai, Bj) (i, j ¼ 1,2) in Fig. 1. But the micromirror array used in the system will contain more sub-mirrors. The camera is
The telescopic objective
Micromirror arrays (A1,B1) (A2,B1) (A1,B2) (A2,B2) (A1,B1) (A2,B1) (A1,B2) (A2,B2)
Collimating lens Pupil array
An array of prisms An array of imaging lenses Image plane Fig. 1 Concept design of the snapshot spectral camera
178
S. Ding et al. Table 1 The main parameters of the snapshot spectral camera Drone flight altitude Flight speed Field Spectral range Spectral resolution Spatial resolution SNR(signal to noise ratio) Sampling period
5 km 50 m/s 6.6 6.6 400–900 nm 14.29 nm 4m 60 Better than a data cube per second
Table 2 The main parameters of the detector Number of pixels Pixel size Readout noise Charge storage Dark signal
2048(H) 2064(L) 15 μm square 8 e at 1 MHz 4 e at 50 kHz 150,000e 0.2 e/pixel/s
designed with unmanned aerial vehicle remote sensing as the background. The main parameters of the system are shown in Table 1. Combining the flying height and field of view of the camera, we can get the swath ∘ 6:6 5000 tan 2 ¼ 576 ðmÞ 2
ð1Þ
Since the camera ground resolution is 4 m, the number of spatial pixels is 144 × 144. Because the spectral resolution required by the target is 14.29 nm and the spectral range of the snapshot spectral camera is 400–900 nm, the number of spectral channels of the system is 35. The number of sub-mirrors in a single period of the micromirror array is 36, and the scale of the generated pupil array is 6 × 6. The micromirror array consists of 144 sub-mirrors arranged in four periods. For this number of spatial pixels and this spectral resolution, we use TELEDYNE e2v's photodetector model CCD230-42. The parameters of the detector are shown in Table 2. Figure 2 shows the quantum efficiency curve of the detector; it has a high response in the 400–900 nm range and meets our usage requirements. In Fig. 3a, H is 5000 (m) and CD is 4 (m). Assume that each sub-mirror has a width x (μm); then the focal length of the telephoto objective is 1.25x (mm). Combined with the 6.6° × 6.6° field of view, the size of the image on the image plane of the telephoto objective in the vertical direction is

1.25x × tan(3.3°) × 2 = 0.144x (mm)    (2)
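These geometric relations are easy to verify numerically. The sketch below uses the Table 1 values to reproduce the swath of Eq. (1) and the pixel and channel counts derived above; it is our numerical check, not part of the published design.

```python
import math

H = 5000.0            # drone altitude (m), Table 1
fov_deg = 6.6         # full field of view (degrees)
gsd = 4.0             # ground sampling distance (m)
band = (400.0, 900.0) # spectral range (nm)
dlam = 14.29          # spectral sampling interval (nm)

# Eq. (1): swath = 2 * H * tan(FOV / 2)
swath = 2 * H * math.tan(math.radians(fov_deg / 2))

# Spatial pixels per axis and number of spectral channels
n_spatial = round(swath / gsd)
n_channels = round((band[1] - band[0]) / dlam)

print(round(swath), n_spatial, n_channels)  # 577 144 35
```

The swath evaluates to about 576.6 m (the text truncates this to 576 m), giving the 144 × 144 spatial pixels and 35 spectral channels quoted above.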
Figure 3b shows the telephoto objective lens imaging diagram. On the one hand, the system should avoid the interference between the incident light path before the
Design of a Snapshot Spectral Camera Based on Micromirror Arrays
[Fig. 2: quantum efficiency (%) versus wavelength (300–1000 nm)]
Fig. 2 The quantum efficiency curve of the detector
Fig. 3 (a) Camera imaging diagram (b) Telephoto objective imaging schematic
micromirror array and the exit light path after it. On the other hand, when the tilt angle of the micromirror array is large, the amount of defocus also increases. To avoid defocus, the relative aperture of the telephoto objective must decrease, which ultimately decreases the relative aperture of the system. Therefore, the pre-tilt angle of the micromirror array is set to 15°. In order to obtain a clear image on the micromirror array, the axial distance between the upper and lower ends of the micromirror array should be smaller than the focal depth of the telephoto objective. Considering the depth of focus at 400 nm, (3) should be satisfied:

4 × 0.4 × 0.001 × F²_front ≥ 0.144x × tan(15°)    (3)
F_front ≥ √(24.12x)    (4)
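Relation (4) sets a lower bound on the telephoto F-number as a function of the sub-mirror width x. The check below, using x = 60 μm (the value finally adopted later in the text), is an illustrative numerical evaluation of ours:

```python
import math

def f_number_min(x_um):
    """Lower bound on the telephoto objective F-number from Eq. (4),
    i.e. the focal-depth condition 4 * lambda * F^2 >= 0.144 * x * tan(15 deg)
    evaluated at lambda = 400 nm (x in micrometres)."""
    return math.sqrt(24.12 * x_um)

# For the adopted x = 60 um, the bound is about F/38, so the F/60
# telephoto objective chosen below satisfies it with margin.
print(round(f_number_min(60), 1))  # 38.0
```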
As shown in Fig. 3a, H is the flying height of the drone and f is the focal length of the telephoto objective. AB represents the width of a single sub-mirror in the micromirror array, and CD indicates the ground sampling width. These quantities satisfy (5):

H / CD = f / AB    (5)
The system's signal-to-noise ratio requirement is higher than 60. Equation (6) is an expression for the signal-to-noise ratio [8]:

SNR = S_signal / S_noise = S_signal / √(S_photon + S_dark + S_back + S²_read)    (6)
where S_signal stands for the signal and S_noise for the noise. S_photon represents the photon noise generated when the signal is detected, S_dark the dark-current noise of the detector, S_back the detector background noise, and S_read the detector readout noise. For the selected detector, dark noise and background noise are small; when only read noise and photon noise are considered, the signal-to-noise ratio is as shown in (7):

SNR = S / √(S + S²_read),  where  S = π a b j λ τ(λ) η(λ) QE(λ) T₁ m Δλ L(λ, r_g, ϕ₀) / (4F²hc)    (7)
where a stands for the length of the pixel, b for the width of the pixel, j for the number of pixel merges in the spatial dimension, τ(λ) for the optical system transmittance, η(λ) for the dispersive-element energy utilization, QE(λ) for the quantum efficiency, T₁ for the integration time of the detector, m for the oversampling rate, and Δλ for the spectral sampling interval. L(λ, r_g, ϕ₀) stands for the radiance at the entrance pupil of the optical system at wavelength λ, solar zenith angle ϕ₀, and ground albedo r_g; F is the reciprocal of the system relative aperture, h is the Planck constant, and c is the speed of light. Taking into account the relevant parameters of the detector, a = b = 15 μm, j = 1, and τ(λ) = 0.6. Since the dispersive element uses a prism, η(λ) = 0.9. Since the spatial resolution is 4 m and the flying speed of the drone is 50 m/s, the detector integration time should not exceed 80 ms in order to keep the image shift below one pixel; we take T₁ = 60 ms, m = 1, Δλ = 14.29 nm, and r_g = 0.3. We use the MODTRAN software to simulate the spectral radiance curve at the entrance of the system. For F-numbers of 14, 15, 16, and 17, we analyze the variation of the system signal-to-noise ratio with wavelength at different zenith angles.
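The radiometric model of Eq. (7) is straightforward to evaluate directly. The sketch below implements it with the parameter values listed above (read noise from Table 2; the default QE is a representative value read off Fig. 2). The entrance-pupil radiance L is a free input, which the paper obtains from MODTRAN, so the single value used here is only a placeholder for illustration:

```python
import math

H_PLANCK = 6.626e-34  # Planck constant, J*s
C_LIGHT = 2.998e8     # speed of light, m/s

def snr(L, wavelength, F, a=15e-6, b=15e-6, j=1, tau=0.6, eta=0.9,
        qe=0.9, T1=60e-3, m=1, dlam=14.29e-9, read_noise=8.0):
    """Shot-noise-plus-read-noise SNR of Eq. (7).
    L: spectral radiance at the entrance pupil (W m^-2 sr^-1 m^-1),
    wavelength in m, F: system F-number; the remaining defaults
    follow the values quoted in the text."""
    # Detected signal electrons: collected in-band power times lambda/(h*c)
    signal = (math.pi * a * b * j * wavelength * tau * eta * qe * T1
              * m * dlam * L) / (4 * F**2 * H_PLANCK * C_LIGHT)
    return signal / math.sqrt(signal + read_noise**2)

# Placeholder radiance; real values come from a MODTRAN run.
print(snr(L=2e7, wavelength=650e-9, F=15) > 60)  # True
```

With this placeholder radiance the SNR at F/15 comes out well above the required 60, and, as expected, it falls as the F-number grows and rises with radiance.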
[Fig. 4, panels (a)–(d): system SNR versus wavelength (0.4–0.9 μm) for F = 14, 15, 16, and 17, each plotted at solar zenith angles of 30°, 45°, and 60°; SNR spans roughly 60–340]
Fig. 4 Signal-to-noise ratio curves at different zenith angles
[Fig. 5a: relay imaging from the object via the collimating lens and imager to the image; Fig. 5b: pupil array with dimensions D_x and D_y]
Fig. 5 (a) Relay imaging diagram (b) pupil array
From Fig. 4, we can see that when the F-number is 17, the signal-to-noise ratio of the system does not reach the requirement at 400 nm. Taking into account the design difficulty of the system, and in order to set aside a design margin, the final system F-number is set to 15. As shown in Fig. 5a, considering the collimating lens and the imager as a relay imaging system, with sub-mirror width x (μm), detector pixel size 15 μm, and imager F-number 15, the telephoto objective F-number F_front satisfies (8).
F_front / x = 15 / 15    (8)
With (4) and (8), we can get that the minimum of x is 47.8125 μm. The final value of x is 60 μm, so the telephoto objective has an F-number of 60 and a focal length of 75 mm. Figure 5b shows a schematic diagram of the pupil array. Since the collimating lens images the pupil to generate a pupil array, this places a requirement on the diameter of the collimating lens:

D = √(D²_x + D²_y) = √[2 × (6 × 2f_collimator × NA_micromirror × 2.21)²]    (9)
where f_collimator stands for the focal length of the collimating lens and NA_micromirror for the numerical aperture of the micromirror array. The minimum numerical aperture of the collimating lens then follows from (9):

NA_min = D / (2f_collimator) = 6√2 × 2.21 × NA_micromirror = 0.156    (10)
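Equations (9) and (10) can be inverted to see what micromirror numerical aperture the quoted NA_min = 0.156 implies; this back-calculation is ours, for illustration only:

```python
import math

DIFFRACTION_FACTOR = 2.21  # diffraction broadening factor of the micromirror array [9]
N_PUPILS = 6               # the pupil array is 6 x 6

def collimator_na_min(na_micromirror):
    """Eq. (10): minimum collimating-lens NA for the 6 x 6 pupil array."""
    return N_PUPILS * math.sqrt(2) * DIFFRACTION_FACTOR * na_micromirror

# Back out the micromirror NA implied by the quoted NA_min = 0.156:
na_mm = 0.156 / (N_PUPILS * math.sqrt(2) * DIFFRACTION_FACTOR)
print(round(na_mm, 4))                      # ~0.0083
print(round(collimator_na_min(na_mm), 3))   # 0.156
```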
In (10), the factor 2.21 is a diffraction factor due to the micromirror array [9]. Considering that the pupils are not closely packed in the actual optical path, the numerical aperture of the collimating lens is finally 0.24. At this point, the main component parameters of the system have been determined, as shown in Tables 3, 4, and 5.

Table 3 The main parameters of the telephoto objective
Spectral range: 400–900 nm
Field of view: 6.6° × 6.6°
Focal length: 75 mm
F-number: 60
Pupil: image-space telecentric
Table 4 The main parameters of the collimating lens
Spectral range: 400–900 nm
Field: 9 mm × 9 mm
Focal length: 75 mm
Numerical aperture: 0.24
Pupil: object-space telecentric
Table 5 The main parameters of the imager
Spectral range: 400–900 nm
Field of view: 6.6° × 6.6°
Focal length: 18.75 mm
F-number: 15
Spectral width: 0.525 mm
3 Design Results and Analysis

Based on the above parameter analysis, the designed snapshot spectral camera is shown in Fig. 6. The modulation transfer function (MTF) reflects the ability of the optical system to transmit information, so we use it to evaluate the imaging quality of the snapshot spectral camera. Since the detector pixel size is 15 μm, the system Nyquist frequency is about 34 lp/mm. Since the system is dispersive, we chose three characteristic wavelengths, 400 nm, 650 nm, and 900 nm, to evaluate the imaging quality of the snapshot spectral camera. It can be seen from Fig. 7 that the MTF curves of the system at 34 lp/mm are close to the diffraction limit: the snapshot spectral camera has good imaging quality, and the system has a high energy concentration within a single detector pixel.
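The Nyquist frequency quoted here follows directly from the pixel pitch; a one-line check of ours:

```python
pixel_pitch_mm = 0.015               # 15 um detector pixel (Table 2)
nyquist = 1 / (2 * pixel_pitch_mm)   # Nyquist frequency in lp/mm
print(round(nyquist, 1))             # 33.3; the text rounds this to ~34 lp/mm
```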
Fig. 6 The optical layout of dispersion imager
Fig. 7 MTF curves of the snapshot spectral camera
4 Conclusion

The optical system design of a snapshot spectral camera based on micromirror arrays has been discussed. The telephoto objective adopts two separated lens groups, and the imager uses an Amici prism to obtain the spectrum. The MTF values of the system are all above 0.4 at the spatial frequency of 34 lp/mm. The designed snapshot spectral camera is suitable for detecting and measuring energetic targets in real time. Moreover, the snapshot camera will be well suited to remote sensing platforms.

Acknowledgments This work was supported in part by the National Key Research and Development Program of China (2016YFB05500501-02) and the project of the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.
References

1. Goetz, A.F.H., Vane, G., Solomon, J.E., Rock, B.N.: Imaging spectrometry for earth remote sensing. Science 228, 1147–1153 (1985)
2. Mouroulis, P., Van Gorp, B., Green, R.O., et al.: Portable remote imaging spectrometer coastal ocean sensor: design, characteristics, and first flight results. Appl. Opt. 53(7), 1363–1380 (2014)
3. Cai, F., Lu, W., Shi, W., He, S.: A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera. Sci. Rep. 7, 1–9 (2017)
4. Lockwood, R.B., Cooley, T.W., Nadile, R.M., et al.: Advanced responsive tactically-effective military imaging spectrometer (ARTEMIS) system overview and objectives. Proc. SPIE 6661, 666102-1–666102-6 (2007)
5. Hagen, N., Kudenov, M.W.: Review of snapshot spectral imaging technologies. Opt. Eng. 52(9), 090901-1–090901-23 (2013)
6. Hagen, N., Kester, R.T., Gao, L., Tkaczyk, T.S.: Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems. Opt. Eng. 51(11), 1371–1379 (2012)
7. Kittle, D.S., Marks, D.L., Brady, D.J.: Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager. Opt. Eng. 51(7), 071403-1–071403-10 (2012)
8. Bovensmann, H., Buchwitz, M., et al.: A remote sensing technique for global monitoring of power plant CO2 emissions from space and related applications. Atmos. Meas. Tech. 3, 781–811 (2010)
9. Kester, R.T., Gao, L., Tkaczyk, T.S.: Development of image mappers for hyperspectral biomedical imaging applications. Appl. Opt. 49(10), 1886–1899 (2010)
Design, Manufacturing and Evaluation of Grazing Incidence X-Ray Optics Fuchang Zuo1(*), Loulou Deng1, Haili Zhang1, Zhengxin Lv1, and Yueming Li1 1
Beijing Institute of Control Engineering, Beijing, China [email protected]
Abstract. On November 10, 2016, the pulsar navigation test satellite XPNAV-01 was launched from the Jiuquan Satellite Launch Center, and the payload has since acquired a large amount of observation data, from which the flux, spectrum, and profile of the observed target have been obtained. These show high consistency with observation data from other missions such as RXTE. The function and performance of China's first grazing-incidence X-ray optics, onboard XPNAV-01, were thus verified in orbit. The design of the X-ray optics was carried out according to the mission requirements to obtain the parameters of each mirror layer and the relationship between adjacent mirrors, and a four-layer optics was designed. Focusing analysis was then implemented to optimize the parameters and evaluate the performance of the optics. The mirrors were fabricated by electroformed-Ni replication, and their roughness and reflectivity were tested; the results show that the mirror performance meets the requirements. Finally, the effective area of the optics was evaluated based on the in-orbit data to verify the design and fabrication, providing guidance and laying a foundation for the development of X-ray optics with larger effective area and better angular resolution.

Keywords: Multilayer nested · Grazing incidence · Optics · Manufacturing and test · Performance evaluation
1 Introduction

Over the past decade, X-ray astronomy has flourished; focusing X-ray telescopes have played a very prominent role in astronomy and cosmology, in positioning astrophysics at the frontier of fundamental physics, and in space climate detection, pulsar observation, and pulsar navigation. The critical component of an X-ray telescope is its X-ray optics. Various types of X-ray optics have been developed and adopted, such as collimated optics, coded apertures, normal-incidence optics, lobster-eye optics, and grazing-incidence optics. Collimated optics, coded apertures, and normal-incidence optics have gradually fallen into disuse in recent years. The lobster eye is only suitable for monitoring spatial X-ray burst sources, and its background level is relatively high. Normal-incidence reflection optics and refraction optics cannot focus X-rays effectively. To realize X-ray focusing and increase the detection area, only grazing-incidence reflection optics can be selected. Grazing-incidence focusing optics is widely used in the observation of X-ray point sources and in pulsar navigation, with the advantages of reducing background and improving SNR. It plays an irreplaceable role in the field of space X-ray observation and pulsar navigation, and has become one of the core components of X-ray astronomy and pulsar navigation [1–3].

National Key R&D Program of China (No.: 2017YFB0503300).
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_18
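For the Wolter-I-type grazing-incidence geometry used in such optics, a ray reflects once off a paraboloid and once off a hyperboloid, each reflection turning it through twice the grazing angle, so the focal length is approximately f ≈ r / tan(4α) for a shell of radius r and grazing angle α. The sketch below illustrates the relation; the shell values are made up for illustration and are not the XPNAV-01 parameters:

```python
import math

def focal_length_mm(radius_mm, grazing_angle_deg):
    """Approximate Wolter-I focal length: f = r / tan(4 * alpha),
    since the two grazing reflections deflect the ray by 4 * alpha total."""
    return radius_mm / math.tan(math.radians(4 * grazing_angle_deg))

# Example shell: 30 mm radius at a 0.9 deg grazing angle (illustrative values)
print(round(focal_length_mm(30.0, 0.9)))  # 477 (mm)
```

This also shows why grazing-incidence telescopes are long and thin: small grazing angles, needed for efficient X-ray reflection, force focal lengths much larger than the shell radius.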
2 Requirements on Optics of Pulsar Navigation

In the 1970s, the Wolter-I grazing-incidence optics was first verified on the Einstein telescope. It was realized that not all applications require imaging with high angular resolution.

I(E) = 13.925 + 38.59E − 19.5E²,        0.3–0.5 keV
I(E) = …,                               0.5–0.9 keV
I(E) = 8.52853 − 4.65161E + 0.71452E²,  0.9–5.0 keV    (9)

6.2 SDD Quantum Efficiency

The four-layer nested X-ray grazing-incidence optics focuses the Crab pulsar X-ray photons onto an SDD detector, so the response curve of the SDD detector also affects the performance evaluation of the optics. According to the manual of the SDD detector, the SDD quantum efficiency in the range of 0.2–5.0 keV is obtained by piecewise fitting as follows:

QE(E) = 0.30125 + 1.34673E − 0.37946E²,  0.2–1.6 keV
QE(E) = 0.19147 + 0.26572E − 0.02258E²,  1.6–5.0 keV    (10)

6.3 Effective Area Evaluation
Thus, the efficiency of the optics in the range of 0.3–5.0 keV is

η_in-orbit(E) = F_obser(E) / [F_radiated × QE(E)] = F_obser(E) / [I(E) × QE(E) × A_ap]    (11)
And the effective area is

A_eff(E) = F_obser(E) / [I(E) × QE(E)]    (12)
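Equations (9), (10), and (12) together give a direct recipe for the in-orbit effective-area evaluation. The sketch below reproduces the printed fits and Eq. (12); the 0.5–0.9 keV branch of I(E) is not recoverable from the text and is omitted, and the observed rate passed in is a made-up illustrative number, not XPNAV-01 data:

```python
def crab_flux(E):
    """Crab pulsar photon flux fit I(E) of Eq. (9), as printed
    (ph cm^-2 s^-1 keV^-1, E in keV). Only the 0.3-0.5 and
    0.9-5.0 keV branches are reproduced here."""
    if 0.3 <= E <= 0.5:
        return 13.925 + 38.59 * E - 19.5 * E**2
    if 0.9 <= E <= 5.0:
        return 8.52853 - 4.65161 * E + 0.71452 * E**2
    raise ValueError("branch not covered in this sketch")

def sdd_qe(E):
    """SDD quantum-efficiency fit QE(E) of Eq. (10), as printed (E in keV)."""
    if 0.2 <= E <= 1.6:
        return 0.30125 + 1.34673 * E - 0.37946 * E**2
    if 1.6 < E <= 5.0:
        return 0.19147 + 0.26572 * E - 0.02258 * E**2
    raise ValueError("E outside the fitted range")

def effective_area(f_obser, E):
    """Eq. (12): A_eff = F_obser / (I(E) * QE(E)), in cm^2, where
    f_obser is the observed count rate density (counts s^-1 keV^-1)."""
    return f_obser / (crab_flux(E) * sdd_qe(E))

# Illustrative observed rate, chosen so that A_eff comes out near the
# quoted 4.22 cm^2 @ 1 keV; not a measured value.
print(round(effective_area(24.58, 1.0), 2))  # 4.22
```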
According to the observation data F_obser, the Crab pulsar target-source characteristic (Eq. 9), and the SDD detector response curve (Eq. 10), the effective area of the optics is evaluated as shown in Fig. 6. It can be seen from Fig. 6 that the maximum effective area is 6.84 cm²@… keV, and the typical value is 4.22 cm²@1 keV, which is lower than the design value of 15.6 cm²@1 keV. We attribute the smaller effective area to space thermal deformation and contamination of the optics in orbit. In the energy band of 0.3–0.7 keV, the effective area increases as the energy increases, and this trend is inconsistent with the theoretical calculation and ground test results. The likely reason is that the Crab pulsar flux is low in the 0.3–0.7 keV range, the randomness in X-ray photon energy is large, and the quantum efficiency of the SDD in this range is also low, causing the evaluation uncertainty to increase. In the energy range of 0.7–5 keV, the effective-area trend is consistent with the ground test result, and the evaluation is reliable.
[Fig. 6: effective area (cm², logarithmic scale, 10⁻²–10¹) versus energy (keV)]
Fig. 6 Evaluated effective area based on in-orbit data
7 Conclusions

Through the optical design and the derivation of the recursive relationship between the mirror layers of the multilayer nested grazing-incidence optics, reasonable initial structural parameters of the optics are given. Based on these initial parameters, optical analysis software is used to analyse the focusing performance. Electroformed-Ni replication is adopted to manufacture the ultra-smooth mirrors, and the measured roughness and reflectivity reached 0.383 nm and 67%@1.5 keV, respectively. From in-orbit data analysis, observation-target source characteristics, and the design parameters of the optics, the effective area is evaluated to be 4.22 cm²@1 keV, much lower than the design value of 15.6 cm²@1 keV; this is caused by factors such as the space environment and contamination during ground test, transport, and storage. This chapter provides guidance for the development of large-area multilayer nested X-ray grazing-incidence optics: in addition to controlling the accuracy of the optics itself, it is necessary to strengthen the control of environmental factors, and to carry out space-environment adaptability design and evaluation before launch.
References

1. Ray, P.S., Wood, K.S., Phlips, B.F.: Spacecraft navigation using X-ray pulsars. NRL Rev. 95–102 (2006)
2. Emadzadeh, A., Speyer, J.L.: A new relative navigation system based on X-ray pulsar measurements. In: IEEE Aerospace Conference, Big Sky, Montana, pp. 1–8 (2010)
3. Gorenstein, P.: Focusing X-ray optics for astronomy. X-Ray Opt. Instrum. 2010, Article ID 109740, 19 pp. (2010)
4. Lorimer, D., Kramer, M.: Handbook of Pulsar Astronomy. Cambridge University Press, Cambridge, UK (2004)
5. Keith, C.G., Zaven, A., Takashi, O.: The neutron star interior composition ExploreR (NICER): an explorer mission of opportunity for soft x-ray timing spectroscopy. Proc. SPIE 8443, 8 (2012)
6. Lv, Z.X., Mei, Z.W., Deng, L.L., Li, L.S., Chen, J.W.: In-orbit calibration and data analysis of China's first grazing incidence focusing X-ray pulsar telescope. Aerosp. China 18(3), 3–10 (2017)
7. Petre, R.: Thin shell, segmented X-ray mirrors. X-Ray Opt. Instrum. 2010, 15 (2010)
8. Louis, D., Robert, C., Takashi, O., Peter, S., Yang, S., Richard, K., et al.: Development of full shell foil X-ray mirrors. Proc. SPIE 8450 (2012)
9. Friedrich, P., Bruninger, H., Budau, B., Burkert, W., Eder, J., Freyberg, M.J., et al.: Design and development of the eROSITA X-ray mirrors. Proc. SPIE 7011, 70112T-1–70112T-8 (2008)
10. Lumb, D.H., Schartel, N., Jansen, F.A.: X-ray multi-mirror mission (XMM-Newton) observatory. Opt. Eng. 51(1), 011009-1–011009-10 (2012)
Fabrication of the Partition Multiplexing Convex Grating Quan Liu1,2(*), Peiliang Guo1, and Jianhong Wu1,2 1
School of Optoelectronic Science and Engineering and Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, China 2 Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province and Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou, China [email protected]
Abstract. Visible-shortwave infrared imaging spectrometers for deep space exploration need to meet the requirements of wide spectral coverage, high spectral resolution, and miniaturization. This chapter explores how to achieve higher and more consistent diffraction efficiency of the partition multiplexing convex grating in the visible-shortwave infrared (0.4–2.5 μm). The investigation of the diffraction efficiency indicates that when the two blaze angles of the convex grating lie between 5.6° and 6.5° and between 16.5° and 17.5°, respectively, the first-order diffraction efficiency is over 25% across the visible-shortwave infrared band. The partition multiplexing convex grating with 408 L/mm, a curvature radius of 51.64 mm, an aperture of 30 mm, the two blaze angles 6.3° and 17.3°, and the two antiblaze angles 66° and 56°, has been fabricated by holographic lithography and segmented-scan ion beam etching.

Keywords: Convex grating · Partition multiplexing grating · Holographic lithography · Ion beam etching · Diffraction efficiency
1 Introduction

Hyperspectral imaging spectrometers are radiation sensors that provide a collection of spectral images of an inhomogeneous scene, allowing the spectral signature of each object point to be determined. Application areas include forestry, geology, agriculture, medicine, security, manufacturing, colorimetry, oceanography, ecology, and others [1–3]. The Offner imaging spectrometer, often adopted in hyperspectral instruments, outperforms other designs used in pushbroom imaging spectrometry because of advantages such as low chromatic aberration, a compact size with low optical distortion, and a large numerical aperture [4, 5]. The convex grating is one of the key parts of Offner-type hyperspectral spectrometers. Visible-shortwave infrared imaging spectrometers for deep space exploration need to meet the requirements of wide spectral coverage, high spectral resolution, and miniaturization. However, achieving high diffraction efficiency of the
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_19
convex grating is still a big challenge, especially in the visible-shortwave infrared broadband. Therefore, partition multiplexing convex gratings are suggested: the grating area is split and each region is blazed at a different wavelength. In 2009, NASA launched TacSat-3, whose hyperspectral imaging system ARTEMIS [6] used a double-angled blazed grating with a spectrum ranging from 0.4 to 2.5 μm. In this chapter, a partition multiplexing convex grating has been designed to obtain higher and more uniform diffraction efficiency in the visible-shortwave infrared (0.4–2.5 μm). The study concludes that control of the two partition blaze angles of the partition multiplexing convex grating ensures a first-order diffraction efficiency over 25% within the wavelength range from 0.4 to 2.5 μm.
2 Diffraction Characteristics

The diffraction efficiency, a key parameter of the convex grating, was analyzed by the finite-difference time-domain (FDTD) method. Figure 1 is a schematic diagram of the blazed grating (Λ represents the spatial frequency; β the blaze angle; γ the antiblaze angle; α the vertex angle). The relation between the diffraction efficiency and the blaze angle has been investigated. The vertex angle was assumed to be 90°. In the analysis, the substrate material was aluminum, incidence was at 20° for natural light, and the spatial frequency of the convex grating was 408 L/mm. Figure 2 demonstrates the relationship between the first-order diffraction efficiency and wavelength for different blaze angles. As shown in Fig. 2, it is difficult to realize high diffraction efficiency of the convex grating over the whole range from 0.4 to 2.5 μm. To obtain the desired diffraction efficiency (above 25%) in the visible-shortwave infrared broadband, the relation between diffraction efficiency and wavelength for different blaze-angle combinations was analyzed; the results are presented in Fig. 3. From Fig. 3, when the two blaze angles of the convex grating were within the ranges from 5.6° to 6.5° and from 16.5° to 17.5°, respectively, the first-order diffraction efficiency was more than 25% across the visible-shortwave infrared band.
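A quick way to see why these two blaze angles cover the band is the first-order Littrow blaze condition, mλ = 2d sin β. This back-of-envelope check is ours and stands in for the full FDTD analysis used in the paper:

```python
import math

GROOVES_PER_MM = 408
d_nm = 1e6 / GROOVES_PER_MM  # groove spacing, ~2451 nm

def littrow_blaze_nm(beta_deg, order=1):
    """Littrow blaze wavelength: m * lambda = 2 * d * sin(beta)."""
    return 2 * d_nm * math.sin(math.radians(beta_deg)) / order

print(round(littrow_blaze_nm(6.3)))   # 538  -> the 6.3 deg zone peaks in the visible
print(round(littrow_blaze_nm(17.3)))  # 1458 -> the 17.3 deg zone peaks in the SWIR
```

The shallow-blaze zone peaks near 0.54 μm and the steep-blaze zone near 1.46 μm, consistent with the measured efficiency maximum of 47% at 1.4 μm reported below.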
Fig. 1 The structure of blazed grating
Fig. 2 Diffraction efficiency as a function of wavelength with different blaze angles
Fig. 3 Diffraction efficiency as a function of wavelength with different blaze angle combinations
3 Fabrication

A diagrammatic sketch of fabricating the partition multiplexing convex grating is shown in Fig. 4. Holographic exposure and development were employed for grating-pattern generation; finally, the partition multiplexing convex grating was fabricated by tilting ion beam etching at two different etching angles. Following LIU Quan et al. [7], the convex grating mask with a spatial frequency of 408 L/mm was produced by holographic exposure and development. The convex grating mask is presented in Fig. 5; the depth of the mask was about 560 nm and its duty cycle was nearly 0.48. The mask pattern was transferred to the fused silica substrate by segmented-scan ion beam etching. A portion of the area-A convex grating is shown in Fig. 6, and a portion of the area-B convex grating in Fig. 7. We can see that the two blaze angles were approximately 6.3° and 17.3°, respectively, and the two antiblaze
Fig. 4 Schematic diagram of the fabrication of the partition multiplexing convex grating. (a) Photoresist coating, (b) holographic exposure and development, (c) area A ion beam etching, (d) area B ion beam etching, (e) the partition multiplexing convex grating
Fig. 5 AFM photograph of the convex grating mask
Fig. 6 AFM photograph of the area A convex grating
Fig. 7 AFM photograph of the area B convex grating
angles were about 66° and 56°, respectively. The partition multiplexing convex grating is shown in Fig. 8. Figure 9 shows the diagrammatic sketch of the diffraction efficiency measurement. Owing to the current experimental conditions, the diffraction efficiency was measured in the 450–1450 nm band, as shown in Fig. 10. Furthermore, FDTD was used to analyze the efficiency of the fabricated partition multiplexing convex grating.
Fig. 8 Photograph of the partition multiplexing convex grating
[Fig. 9 layout: supercontinuum source → parabolic mirror → convex grating → detector]
Fig. 9 Experimental setup for measuring the diffraction efficiency
Figure 10 shows the evolution of the first-order diffraction efficiency across the visible-shortwave infrared wavelengths. The first-order diffraction efficiency is more than 25% across the visible-shortwave infrared band and reaches 47% at 1.4 μm. The measured first-order efficiency is close to the theoretical efficiency; the slight discrepancy between the predicted and measured efficiency is due to a slight deviation of the profile from an ideal triangular grating.
4 Summary

The diffraction efficiency of the partition multiplexing convex grating was analyzed using the finite-difference time-domain method. In order to achieve a first-order diffraction efficiency over 25% for natural light in the visible-shortwave infrared, the partition multiplexing convex grating profile was optimized. The simulation results show that the required high diffraction efficiency can be achieved when the two blaze angles of the convex grating are between 5.6° and 6.5° and between 16.5° and 17.5°, respectively.
[Fig. 10: first-order diffraction efficiency (0–0.9) versus wavelength (0.4–2.5 μm); curves for the 6.3° zone, the 17.3° zone, the combined 6.3°/17.3° grating, and the measured values]
Fig. 10 Diffraction efficiency versus the wavelength
The partition multiplexing convex grating with 408 L/mm, the two blaze angles 6.3° and 17.3°, respectively, and the two antiblaze angles 66° and 56°, has been fabricated by holographic lithography and segmented-scan ion beam etching.

Acknowledgments This research is supported by the National Science and Technology Major Project of China (GFZX04061502), the National Key R&D Program of China (2016YFB0500501), the National Natural Science Foundation of China (60907017), the Opening Fund of the Shanghai Key Laboratory of All-Solid-State Laser and Applied Techniques (2014ADL02), the Suzhou Science and Technology Department (SYG201328), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
References

1. Gat, N.: Imaging spectroscopy using tunable filters: a review. Proc. SPIE 4056, 50–64 (2000)
2. Pearlman, J., Barry, P.S., Segal, C.C., Shepanski, J.: Hyperion, a space-based imaging spectrometer. IEEE Trans. Geosci. Remote Sens. 41, 1160–1173 (2003)
3. He, C., Feng, Z.K., Yuan, J.J., Wang, J., Gong, Y.X., Dong, Z.H.: Advances in the research on hyperspectral remote sensing in biodiversity and conservation. Spectrosc. Spectr. Anal. 32, 1628–1632 (2012)
4. Mouroulis, P., McKerns, M.: Pushbroom imaging spectrometer with high spectroscopic data fidelity: experimental demonstration. Opt. Eng. 39, 808–816 (2000)
5. Wang, Y.M., Lang, J.W., Wang, J.Y.: Status and prospect of space-borne hyperspectral imaging technology. Laser Optoelectron. Prog. 50, 010008 (2013)
6. Cooley, T.W., Lockwood, R.B., Davis, T.M., et al.: Advanced responsive tactically-effective military imaging spectrometer (ARTEMIS) design. Int. J. High Speed Electron. Syst. 18, 369–374 (2008)
7. Liu, Q., Wu, J.H., Zhou, Y., Gao, F.: The convex grating with high efficiency for hyperspectral remote sensing. Proc. SPIE 10156, 101561K (2016)
Research on the Optimal Design of Heterodyne Technique Based on the InGaAs-PIN Photodiode Zongfeng Ma1(*), Ming Zhang1, and Panfeng Wu1 1
Shandong Aerospace Electro-Technology Institute, Yantai, China [email protected]
Abstract. For coherent detection technology, strict matching of the electromagnetic fields of the optical beams is essential. A heterodyne system based on all-fiber instruments, using an InGaAs PIN photodiode field-effect transistor module as the photodetector, is presented. The optimum detection signal-to-noise ratio (SNR) for a coherent Doppler lidar (CDL) depends on the local oscillator (LO) power. An approach for analyzing the SNR is introduced that takes into account the non-linearity of the photodetector. Results of our numerical simulations show that a suitable value of the LO power is essential to ensure better performance.

Keywords: Times roman · Image area · Acronyms · References
1 Introduction

Lidar is a system in which a beam of laser light, instead of microwaves, is used to make range-resolved remote measurements. A lidar system transmits a beam of light, which interacts with the object to be detected. Some of this light is scattered back to the receiver, and properties of the returned light, such as frequency shift, amplitude, or polarization, are then detected. Compared with microwave radar, lidar can offer both high resolution and high precision due to its considerably shorter carrier wavelength. In the last decade, lidar systems have been developed by many institutions. Two different types of lidar system, coherent and incoherent, can be used to determine the Doppler shift. Compared with incoherent detection, coherent detection provides robust shot-noise-limited detection. Specific advantages in this case include [1]: (a) immunity to detector dark counts and amplifier noise, (b) immunity to background light, solar illumination, and other interference, and (c) resistance to countermeasures. In recent years, with the development of coherent optical fiber communication technology, there has been increasing interest in the development of high-pulse-energy, high-efficiency, eye-safe coherent lidar. These coherent optical fiber technologies, such as narrow-linewidth laser sources and optical fiber amplifiers, can be applied to coherent lidar systems easily.
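The Doppler shift a coherent lidar must resolve is 2v/λ for a target with line-of-sight velocity v. For the eye-safe fiber sources mentioned above this is on the order of a megahertz per m/s; the 1.55 μm wavelength used in this illustration is our assumption, as the chapter does not state one at this point:

```python
def doppler_shift_hz(v_mps, wavelength_m=1.55e-6):
    """Two-way Doppler shift for line-of-sight velocity v: df = 2 * v / lambda."""
    return 2.0 * v_mps / wavelength_m

# 1 m/s of radial velocity at 1.55 um gives about 1.29 MHz of shift
print(round(doppler_shift_hz(1.0) / 1e6, 2))  # 1.29
```

Heterodyning the return against the LO moves this small shift into the RF domain, where it can be measured directly; this is what makes the coherent architecture velocity-sensitive.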
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_20
Z. Ma et al.
2 Outline of Lidar System

A schematic of a generic lidar system is shown in Fig. 1; it can be broadly divided into a transmitter, a receiver, and a detector. More specifically, the lidar consists of a transmitter subsystem that sends a laser beam down through the atmosphere; a receiver subsystem and a detector subsystem that collect and measure the light backscattered from the target; a payload processor subsystem that controls the above subsystems and performs some onboard processing of the detector signals; and the data downlink systems [2].

2.1 Transmitter
As shown in Fig. 1, the beam of light generated by the laser is sent through a small telescope, called a beam expander, which increases the diameter of the beam and yields a sufficiently well-collimated laser beam for transmission. To obtain high resolution, high precision, and long-distance sensing, a source with low beam divergence, extremely narrow spectral width, and short, intense pulses is required for the lidar. Figure 2 shows the light beam of the Shandong Aerospace Electro-technology Institute laser transmitter against the night background.
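The beam expander's role can be quantified: expanding a collimated beam by a magnification M reduces its far-field divergence by the same factor, since the product of beam diameter and divergence is conserved. A minimal sketch with illustrative numbers (not taken from the paper):

```python
# Sketch: a beam expander of magnification M widens the collimated beam and
# reduces its far-field divergence by the same factor (conservation of the
# beam parameter product D * theta).

def expand_beam(d_in_mm: float, theta_in_mrad: float, magnification: float):
    """Return (output diameter in mm, output full-angle divergence in mrad)."""
    return d_in_mm * magnification, theta_in_mrad / magnification

# Illustrative: a 2 mm beam with 1 mrad divergence through a 10x expander.
d_out, theta_out = expand_beam(d_in_mm=2.0, theta_in_mrad=1.0, magnification=10.0)
print(d_out, theta_out)  # 20.0 mm, 0.1 mrad
```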
Fig. 1 Schematic of a lidar system (transmitter: laser and beam expander; receiver: light-collecting telescope and optical filtering; detector: optical-to-electrical transducer and signal processing)
Fig. 2 Laser beam transmitted from the laser transmitter
Research on the Optimal Design of Heterodyne Technique Based. . .
2.2 Receiver
The receiver system of a lidar collects the scattered laser light, which is filtered by a narrowband interference filter and is then incident onto a photodetector. A narrowband interference filter, typically much narrower than 1 nm in width, provides sufficient rejection of background light for the lidar to operate. The size of the optical element that collects the scattered light is a key factor in determining the efficiency of a lidar system: long-range detection requires a large primary optic, whereas a smaller aperture suffices otherwise.

2.3 Detector
The wavelengths used by transmitters may lie anywhere from the infrared through the visible and into the ultraviolet, so the photodetectors used in actual systems differ according to the measurement requirements. The detection of light intensity is done electronically. The photodetector, a square-law device that converts light into an electrical signal, is selected according to the operating wavelength.

2.4 Lidar Equation
In general, for a lidar system, the object identification is evaluated from the characteristics of the received signals. The lidar equation is the general relation between the power transmitted and the power collected by the aperture, its field of view, and the scattering coefficients, showing that the received power is proportional to the transmitted power [3]. The total optical power received at the detector is given by the lidar equation:

P_r = P_t \frac{\sigma}{4\pi r^2}\,\frac{\pi d^2}{4\pi r^2}\,\eta_{at}\,\eta_{sys}    (1)

where P_r is the power backscattered into the receiver, P_t is the transmitted signal power, σ is the target cross section, r is the range to the target, d is the aperture diameter, η_at is the atmospheric transmission factor, and η_sys is the receiver optical system efficiency.
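As a quick numerical illustration of Eq. (1) (the parameter values below are illustrative, not taken from the paper):

```python
import math

def lidar_received_power(p_t, sigma, r, d, eta_at=1.0, eta_sys=1.0):
    """Eq. (1): P_r = P_t * [sigma/(4*pi*r^2)] * [pi*d^2/(4*pi*r^2)] * eta_at * eta_sys.
    Powers in W, lengths in m."""
    return (p_t * sigma / (4 * math.pi * r ** 2)
            * (math.pi * d ** 2) / (4 * math.pi * r ** 2)
            * eta_at * eta_sys)

# Illustrative: 1 kW transmitted, 1 m^2 cross section, 1 km range,
# 100 mm aperture, ideal transmission factors.
print(lidar_received_power(p_t=1e3, sigma=1.0, r=1e3, d=0.1))  # ~2e-13 W
```

The steep 1/r⁴ falloff is why the received power is many orders of magnitude below the transmitted power, motivating the sensitive detection schemes of the following sections.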
3 Heterodyne Detection

Heterodyne detection is used in lidar systems built for velocity measurement. In the heterodyne detection lidar illustrated in Fig. 3, the weak backscattered light is mixed on a photomixer with strong, frequency-stable light from a local oscillator. The Doppler shift, i.e., the frequency difference between the two optical signals introduced by the motion of the object, is proportional to
Fig. 3 Schematic of a heterodyne lidar system (laser, target, backscattered signal, local oscillator, and photodetector)
Fig. 4 Detector-amplifier equivalent circuit (current source i_d, detector resistance r_d, capacitance C, load resistance R_L, output V_out)
the velocity. A radio-frequency (RF) beat signal generated by the photomixer can be used to determine the Doppler frequency shift induced by the target. The optical power on the photodetector is given by:

P = P_s + P_{lo} + 2\sqrt{P_s P_{lo}}\,\cos(2\pi f_d t + \varphi)    (2)
where P is the optical power on the detector, P_s is the optical power of the backscattered signal, P_lo is the local oscillator (LO) power, f_d is the Doppler shift frequency, and φ is the phase difference. Recently, there has been interest in developing lidar systems operating at wavelengths near 1550 nm. The signal is detected by a low-dark-current photodiode. A photodiode can be modeled by its so-called equivalent circuit: a current generator in parallel with an appropriate resistance and capacitance. An equivalent circuit for the detector and an amplifier configuration is shown in Fig. 4. R_L is the equivalent resistance of any amplifier and bias circuits connected to the detector (represented as the source i_d), r_d is the detector parallel resistance, and C is the combined amplifier and diode capacitance [4].
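The beat term in Eq. (2) is what carries the Doppler information. A short sketch (the sample rate, powers, Doppler shift, and phase below are assumed values, not measurements from this work) synthesizes the detector power and recovers f_d from the beat spectrum:

```python
import numpy as np

# Assumed acquisition and signal parameters (illustrative only).
fs = 1e6                        # sample rate (Hz)
t = np.arange(0, 25e-3, 1 / fs) # 25 ms record, matching the span of Fig. 6
P_s, P_lo = 1e-4, 0.32          # backscattered and LO power (mW)
fd, phi = 100e3, 0.3            # Doppler shift (Hz) and phase offset (rad)

# Detector power per Eq. (2).
P = P_s + P_lo + 2 * np.sqrt(P_s * P_lo) * np.cos(2 * np.pi * fd * t + phi)

# Remove the DC terms and locate the beat peak in the spectrum.
spec = np.abs(np.fft.rfft(P - P.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[np.argmax(spec)])  # ≈ 100 kHz, the assumed Doppler shift
```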
4 Signal-to-Noise Ratio

The advantage of a coherent lidar system over an incoherent one is embodied in the signal-to-noise ratio (SNR). For a shot-noise-limited coherent lidar, the SNR of the photodetector can be taken as the signal-beam power detected at the receiver aperture multiplied by the mixing efficiency. For a coherent lidar system with fine spatial mode
matching between the local-oscillator field and the signal field, the mixing efficiency is unity. Otherwise, the contributions to the photocurrent from different parts of the receiver aperture can interfere destructively, reducing the instantaneous mixing efficiency. This method of overcoming the noise contribution is not possible with an incoherent lidar system. A coherent lidar system may obtain an SNR up to 3-4 orders of magnitude greater than an incoherent one [4], represented by [5]:

SNR = \frac{\rho P_{lo} P_s}{Bq\,(P_{lo} + P_s + P_{bk} + P_{dk} + P_{th})}    (3)

where ρ is the responsivity of the detector, B is the electronic bandwidth, q is the electron charge, P_bk is the background noise, P_dk is the dark current power, and P_th is the thermal noise of the parasitic resistances. If ρ is a constant, Eqs. (2) and (3) suggest that the LO power should be increased as much as possible. The optical local-oscillator shot noise then becomes the dominant noise contributor, and the SNR is approximately constant. Equation (3) simplifies to

SNR_{shot} = \frac{\rho P_s}{Bq}    (4)
However, a real photodetector has only a finite linear amplification range. It has been demonstrated experimentally that the photodetector responsivity decreases with increasing LO power outside the linear operating range. The detector DC output current is expressed as [4, 6]:

i_d = \rho P_{in} - \rho\alpha P_{in}^2,  \quad (0 \le P_{in} \le 1/2\alpha)    (5)

where P_in is the input power of the photodetector and α is the fitting coefficient. If the input power exceeds 1/2α, the undesirable situation occurs that the detector drops into heavy saturation. In this condition, the SNR is a function of P_lo and is given by [5]:

SNR = \frac{(1 - 2\alpha P_{lo})^2 P_{lo}\,\rho q R}{P_{lo}(1 - \alpha P_{lo})\,\rho q R + 2000\,\kappa T}\cdot\frac{\rho P_s}{Bq}    (6)
where R is the equivalent resistance, κ is Boltzmann's constant, and T is the temperature in kelvin.
5 The Numerical Simulation

In this section, we choose an InGaAs-PIN photodiode field-effect-transistor (pinFET) module as the photodetector to measure the beat frequency. The simulation results presented below refer to a coherent lidar at a radiation wavelength of 1550 nm. To analyze
Table 1 Simulation conditions

Parameter              Symbol   Value
Fitting coefficient    α        0.46 mA/mW
Responsivity           ρ        0.91 mA/mW
Temperature            T        295 K
Equivalent resistance  R        60 Ω
Fig. 5 The numerical simulation result: SNR/SNR_shot versus LO power (mW), with a maximum of 0.1323 at 0.314 mW
the influence of the LO power on the SNR, we assumed the relevant parameters listed in Table 1. The effect of the LO power on the signal-to-noise ratio of a coherent lidar system was analyzed using Eq. (6). The calculated result is shown in Fig. 5: the SNR depends on the LO power, with a maximum at 0.314 mW. A simple continuous-wave coherent lidar at 1550 nm for speed measurement was designed, based on all-fiber instruments. The interference signal in the time domain and in the frequency domain for different local oscillator powers is shown in Figs. 6 and 7.
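Equation (6), normalized by the shot-noise limit of Eq. (4) and evaluated with the parameter values of Table 1, can be checked with a few lines of code. The sketch below reproduces the optimum quoted above (q and κ are the standard physical constants; the mixed mW-based units follow the paper's convention, with α treated as 1/mW):

```python
import numpy as np

q = 1.602e-19         # electron charge (C)
kappa = 1.380649e-23  # Boltzmann constant (J/K)
rho, alpha = 0.91, 0.46   # responsivity and fitting coefficient (Table 1)
T, R = 295.0, 60.0        # temperature (K) and equivalent resistance (ohm)

def snr_over_shot(p_lo):
    """SNR/SNR_shot from Eqs. (4) and (6); p_lo in mW, per the paper's units."""
    c = rho * q * R
    num = (1 - 2 * alpha * p_lo) ** 2 * p_lo * c
    den = p_lo * (1 - alpha * p_lo) * c + 2000 * kappa * T
    return num / den

# Sweep the LO power up to the saturation limit 1/(2*alpha).
p = np.linspace(1e-4, 1 / (2 * alpha), 200001)
ratio = snr_over_shot(p)
i = int(np.argmax(ratio))
print(round(p[i], 3), round(ratio[i], 4))  # ≈ 0.314 mW, ≈ 0.1323
```

The peak value matches the point (0.314, 0.1323) marked in Fig. 5.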
Fig. 6 The experimental results with 0.32 mW LO. (a) Interference signal in time domain (mV vs. t in ms), (b) beat frequency spectrum (mW vs. f in kHz)
Fig. 7 The experimental results with 0.65 mW LO. (a) Interference signal in time domain (P in mW vs. t in ms), (b) beat frequency spectrum (P in mW vs. f in kHz)
6 Conclusion

An approach for analyzing the SNR and the optimum LO power of a coherent Doppler lidar has been introduced, taking into account the nonlinearity of the photodetector. The result is based on numerical simulation using an InGaAs-PIN photodiode field-effect transistor. This provides a method for specifying the optimum local oscillator power given the photodetector responsivity and fitting coefficient. Experiments have been carried out in the laboratory using a fiber laser at different LO powers. The results show that a suitable value of the LO power is essential to ensure good performance.

Acknowledgment This research work was supported by the National Natural Science Foundation of China under Grant No. 61302162.
References

1. Prasad, N.S., DiMarcantonio, A., Rudd, V.: Development of coherent laser radar for space situational awareness applications. In: AMOS 2013, Sept 9-12, Maui, Hawaii
2. Hunt, W.H., Winker, D.M., Vaughan, M.A., et al.: CALIPSO lidar description and performance assessment. J. Atmos. Ocean. Technol. 26, 1214-1228 (2009)
3. Pedersen, A.T.: Frequency swept fibre laser for wind speed measurements. PhD thesis, Technical University of Denmark, Kongens Lyngby, December 31, 2011, pp. 5-13
4. Stannard, A.: Optical considerations of a 1.55 μm optical fibre anemometer. MSc dissertation in Applied and Modern Optics, August 2002, pp. 37-45
5. Zongfeng, M., Ji-qiang, W., Guangming, L., et al.: Optimum optical local-oscillator power for all fiber coherent lidar. Semicond. Optoelectron. 30(2), 286-290 (2009)
6. Bobb, R.L.: Doppler shift analysis for a holographic aperture ladar system. MSc thesis in Electro-Optics, University of Dayton, Dayton, Ohio, May 2012, pp. 54-57
Research on the Thermal Stability of Integrated C/SiC Structure in Space Camera

Li Sun1(*), Hua Nian1, Zhiqing Song1, and Yusheng Ding2

1 Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
[email protected]
2 Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, China
Abstract. C/SiC composite is a continuous fiber-reinforced ceramic composite (CFCC) with carbon fibers and a SiC matrix. Due to advantages such as high specific stiffness, thermal stability, and the ability to form a complex structure in one piece, C/SiC has received wide attention in the aerospace field in recent years. To meet the high thermal stability requirements of the main bearing structure of a space optical remote sensor, this chapter proposes an integrated main bearing structure scheme for an off-axis TMA space mapping camera. The structure is a frame-skin configuration, with an outer profile of 1.8 m × 0.9 m × 1.3 m and a weight of 110 kg. Unlike the traditional split-type structure, this product adopts an integral molding design, which reduces the weight and greatly improves the structural rigidity and thermal stability. First, carbon fiber cloth is used to make integrated preforms by three-dimensional weaving and stitching; then the SiC matrix is added by chemical vapor infiltration (CVI). This chapter introduces the optimization design, the FEA analysis method, and the manufacturing of the entire product, as well as the thermal stability experiment. The product is wrapped in multilayer insulation components, and several temperature fields are applied through film resistance heaters. The displacement and angle change of each mirror mounting interface are measured by laser dual-frequency interferometers. Analysis of the experimental data shows a macroscopic coefficient of thermal expansion of 0.73 × 10−6/°C, and the experiment results are in good agreement with the simulation analysis. The product can meet the requirements of thermal stability. Compared to conventional metal or resin-based composite materials, C/SiC has a great advantage.

Keywords: C/SiC · Integrated structure · Thermal stability · Space camera
1 Introduction

C/SiC composite is a continuous fiber-reinforced ceramic composite (CFCC) with carbon fibers and a SiC matrix. In recent years, due to its high specific stiffness, high thermal stability, and the ability to integrate complex structures, C/SiC has gained wide attention and application in spacecraft, especially in space remote sensors.

© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_21
C/SiC has been extensively researched at home and abroad as a mirror material and a support structure and has achieved certain results. Zhang Litong [1] prepared 2D C/SiC composites by modifying the matrix of C/SiC composites. Guo Youjun [2] prepared a 3D C/SiC ceramic matrix composite with fiber reinforcement in the thickness direction by the CVI method, which showed good structural characteristics and excellent mechanical properties. Dalmaz [3] studied and analyzed the cyclic fatigue properties and elastic modulus of 2.5D C/SiC composites. Germany's Dornier Satellite Systems has prepared a space telescope main mirror with a diameter of 630 mm and a mass of only 4 kg [4]. The German Aerospace Institute [5] prepared several C/SiC test pieces. The GF-2 satellite used C/SiC for the front barrel and achieved good results [6].
2 Structural Design and Optimization

This chapter proposes an integrated main bearing structure scheme for a high-precision surveying and mapping space optical remote sensor. The remote sensor uses an off-axis three-mirror optical system, and the main bearing structure provides a mounting interface for each mirror and for the satellite. The outer contour size is 1.8 m × 0.9 m × 1.3 m, and the weight does not exceed 110 kg. The main mirror, the third mirror, and the focal plane are mounted on the rear frame; the second mirror and the flat mirror are mounted on the front frame; and the bottom is the connection point with the satellite. Since the main loads are arranged on both sides of the structure, high-strength frames are adopted on both sides, connected in the middle by a skin-and-rib structure. The main mirror, the third mirror, and the focal plane are the heaviest components, so the rear frame stiffness is higher than that of the front, and the frame is slightly larger to ensure the rigidity of the entire structure. The front and rear frames are in the form of I-beams, closed toward the outer panel and open toward the inner panel for easy weaving and sintering.

According to the optical temperature stability analysis, in order to meet the requirements on the stability of the camera's interior orientation elements, the thermal expansion coefficient of the main frame is required to be lower than 1.5 × 10−6/°C. Based on this requirement, parameters such as the ply order, fiber direction, and fiber volume content of the preform were optimized; the design value of the thermal expansion coefficient is 0.5 × 10−6/°C. Topology optimization and size optimization of each position of the ribs were carried out during the front and rear frame design. The optimization objective and boundary conditions are as follows:

Optimization goal: the front and rear frame volume fraction is the smallest.

Boundary conditions:
(a) The overall fundamental frequency is not lower than 120 Hz.
(b) Under 1 g gravity in the X direction, the displacement of each mirror in the X direction is not more than 5 μm.
(c) Under 1 g gravity in the Y direction, the displacement of each mirror in the Y direction is not more than 5 μm.
The topology optimization results are shown in Fig. 1. After topology optimization, the internal ribs of the front and rear frames were designed according to the optimization results. The optimized first-order free-mode frequency is 129 Hz (Fig. 2).
Fig. 1 Front and rear frame topology optimization volume fraction distribution. (a) Front frame, (b) rear frame
Fig. 2 The optimized first-order free-mode of the frame
3 Manufacture

The main preparation techniques for C/SiC composites include PIP (polymer infiltration pyrolysis) [7], CVD (chemical vapor deposition) [8], CVI (chemical vapor infiltration) [9], LSI (liquid silicon infiltration) [10], and the like. The main frame manufacturing process is as follows. First, the overall weaving of the preform is carried out, with the inner and outer shape retention and thickness control assisted by tooling. When the preform is woven, three-dimensional stitching and needling processes are adopted, and all the carbon cloths are overlapped at the joints to ensure that the fibers are continuous. After the preform is completed, densification takes place in several stages. The first stage of densification uses the CVI process, and the second stage uses the PIP method: polycarbosilane (PCS) fills the internal pores of the main frame and is cross-linked and vitrified at a certain temperature to be converted into silicon carbide ceramic particles. After multiple impregnation-pyrolysis cycles, the main frame is densified. After the second stage of densification, the main frame surface is treated chemically, and silicon carbide is formed by high-temperature reaction to fix the loose particles inside and on the surface of the main frame, so as to avoid slag or dust falling during the mission. After the surface treatment, the metal parts are installed and then machined as a whole. All the mass dummies are then installed for the main frame vibration test, the geometrical tolerances of each installation interface are retested after the test, and the final machining is carried out (Figs. 3, 4, 5, and 6).
Fig. 3 Preform weave
Fig. 4 First-stage densification
Fig. 5 Vibration test
Fig. 6 Manufacture process of the main frame: preform weave → first-stage densification → second-stage densification → surface treatment → metal part installation → metal part machining → vibration test → final machining → delivery acceptance
4 Thermal Stability Test

The main frame provides the mounting reference for each mirror assembly and the focal plane assembly. It therefore requires extremely high thermal stability to ensure the image quality and the stability of the camera's interior orientation elements in an alternating temperature environment. Through reasonable ply optimization design, the thermal expansion coefficient in the optical axis direction is made as small as possible while ensuring the mechanical properties. However, the C/SiC material itself is anisotropic, and its coefficient of thermal expansion depends on the fiber weaving direction. There may be slight differences between the actual fiber orientation and the design value during the weaving process. At the same time, large structural parts also exhibit a certain inhomogeneity in matrix deposition and reaction sintering. Both effects may affect the uniformity of the thermal expansion coefficient of the material, which in turn affects its thermal stability. After the main frame was manufactured, a thermal stability test was therefore conducted.

In this test, the main frame is installed on an air-floating platform via three flexible supports, which release thermal deformation and decouple the frame from the mounting surface. The outer surface of the main frame is fitted with heating sheets and covered with multilayer insulation, and the tooling for mounting the test mirrors is thermally insulated from the main frame. The test product and the test platform are thermally isolated by an insulating mat. Using the temperature control loops on the main frame, the temperature of the main frame is controlled and raised relative to the test chamber to establish uniform temperature-rise conditions and temperature-gradient conditions, and the displacements and angle changes measured in each state are compared to draw conclusions about the stability. During the test, multiple high-precision laser dual-frequency interferometers (LDFI) and photoelectric autocollimators (PEAC) were used to monitor the deformation of the main frame (Table 1, Fig. 7).
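The data reduction implied by the length measurements is simply strain per unit temperature rise; a sketch with illustrative numbers (not the actual interferometer readings):

```python
# Sketch of the CTE data reduction: the macroscopic coefficient of thermal
# expansion is the measured length change per unit gauge length per degC.

def cte_per_degC(delta_length_um: float, gauge_length_m: float, delta_T: float) -> float:
    """Coefficient of thermal expansion in 1/degC."""
    return (delta_length_um * 1e-6) / (gauge_length_m * delta_T)

# e.g. a 19 um elongation over a 1.3 m span for a 20 degC rise:
print(cte_per_degC(19.0, 1.3, 20.0))  # ≈ 0.73e-6 /degC
```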
Table 1 Test content of each device

No.  Device    Test content
1    1# LDFI   Length change in X direction on +Z side
2    2# LDFI   Length change in X direction on −Z side
3    3# LDFI   Length change in Z direction on +Y side
4    4# LDFI   Length change in Z direction on −Y side
5    5# LDFI   Length change in Y direction on +Z side
6    6# LDFI   Length change in Y direction on −Z side
7    1# PEAC   Angle change of +Y direction on +Z side
8    2# PEAC   Angle change of −Y direction on +Z side
9    3# PEAC   Angle change of −Y direction on −Z side
10   4# PEAC   Angle change of +Y direction on −Z side
11   5# PEAC   Angle change of the experiment platform
Fig. 7 The thermal stability test

Table 2 Coefficient of thermal expansion of the main frame (10−6/°C)

          X direction         Y direction         Z direction
          +Z side   −Z side   +Z side   −Z side   +Y side   −Y side
Average   0.85      0.54      1.20      1.01      0.46      0.51

Table 3 Change of the angle of the main frame at different temperatures (″/°C)

              Front +Y                   Rear −Y
Temperature   X direction  Y direction   X direction  Y direction
30 °C         0.070        0.243         0.017        0.036
40 °C         0.107        0.382         0.012        0.017
To reduce the test error while respecting the heating power limit, the main frame temperature was raised in two steps of 10 °C each, for a maximum temperature rise of 20 °C. Four temperature conditions were applied: a uniform temperature rise, and gradient tests in the X, Y, and Z directions. Under the different working conditions, the temperature fluctuation of the measuring points on each surface of the main frame is not more than 0.5 °C, and the temperature uniformity of each surface does not exceed 0.5 °C (Tables 2 and 3). The coefficient of thermal expansion of each surface does not exceed the index requirement of 1.5 × 10−6/°C, and the average value is 0.73 × 10−6/°C. The change of the angle of the main frame is less than 0.5″/°C. If the main frame were made of titanium, the coefficient of thermal expansion would be more than 8.8 × 10−6/°C, the angle change would be more than 3″/°C, and the weight would be twice that of the C/SiC. Therefore, C/SiC has great advantages in structural thermal stability compared to traditional metal materials.
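The comparison with titanium can be made concrete with a back-of-envelope elongation estimate (the 1.3 m characteristic length and 20 °C rise are assumptions drawn from the frame dimensions and the test temperature range):

```python
# Thermal elongation dL = alpha * L * dT for the measured C/SiC CTE versus
# the titanium value quoted in the text. L and dT are assumed values.

L, dT = 1.3, 20.0  # characteristic length (m) and temperature rise (degC)
elongation_um = {name: alpha * L * dT * 1e6
                 for name, alpha in [("C/SiC", 0.73e-6), ("titanium", 8.8e-6)]}
for name, um in elongation_um.items():
    print(name, round(um, 1), "um")
# C/SiC ~19 um vs. titanium ~229 um: an order of magnitude more stable
```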
5 Summary

This chapter introduced the design, simulation, manufacturing, and thermal stability test of the main bearing structure. Different temperature fields were applied to the main bearing structure, and the displacement changes of each mirror mounting interface were measured under the various temperature environments to obtain its macroscopic thermal stability. A series of tests shows that the main bearing structure meets the stability requirements, and it has a greater advantage than traditional metal materials or resin-based composite materials.
References

1. Xiufeng, H., Litong, Z.: Influence of matrix modification on microstructure and properties of carbon fiber reinforced silicon carbide composites. J. Chin. Ceram. Soc. 34(7), 871 (2006)
2. Youjun, H., Jingjiang, N.: Microstructure and mechanical properties of three-dimensional needled C/SiC composite. J. Chin. Ceram. Soc. 36(2), 144 (2008)
3. Dalmaz, A., Ducret, D., El Guerjouma, R., et al.: Elastic moduli of a 2.5D C/SiC composite: experimental and theoretical estimates. Compos. Sci. Technol. 60, 913 (2000)
4. Kull, Y., Yun, Y., et al.: Mechanical-activation-assisted combustion synthesis of SiC powders with polytetrafluoroethylene as promoter. Mater. Res. Bull. 42, 1625 (2007)
5. Heidenreich, B., Renz, R., Krenkel, W.: Short fibre reinforced CMC materials for high performance brakes. In: 4th International Conference on High Temperature Ceramic Matrix Composites (HT-CMC4), Munich, Germany (2001)
6. Haibin, J.: Technology of high-density and high-resolution camera of GF-2 satellite. Spacecr. Recover. Rem. Sens. 36(4), 4 (2015)
7. Shuhai, W.: Modern Preparation Technology of Advanced Ceramics. Chemical Industry Press, Beijing (2007)
8. Zhiqiao, Y.: Preparation of SiC coatings on surface of C/SiC composites by chemical vapor deposition and their oxidation resistance behaviour. J. Chin. Ceram. Soc. 36(8), 1098 (2008)
9. Shizhen, Z., Junhong, L.: Preparation and properties of continuous carbon fiber-reinforced silicon carbide composites. J. Beijing Inst. Technol. 22(4), 422 (2002)
10. Yuhui, W.: Structure and mechanical properties of C/SiC composites by reactive melt infiltration. Fiber Reinf. Plast. Compos. 38(5), 20 (2005)
The Application Study of Large Diameter, Thin-Wall Bearing on Long-Life Space-Borne Filter Wheel

Yue Wang1,2(*), Heng Zhang2, Shiqi Li2, Zhenghang Xiao1, and Huili Jia1

1 Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
2 Huazhong University of Science and Technology, Wuhan, China
[email protected]
Abstract. An application study of a large diameter, thin-wall bearing on a long-life space-borne filter wheel is presented in this chapter. A precise bearing mounting method is introduced to provide sufficient stiffness and adequate operating accuracy. A flexure bearing axial preload method was optimized to ensure the on-orbit working life cycle. A simulated environmental life test under vacuum was performed to verify the performance of the filter wheel. Scanning electron microscope (SEM) and energy dispersion spectrum (EDS) observations were performed to analyze the abrasion of the raceway of the thin-wall bearing, and a friction torque test and analysis is given. The test results show that the filter wheel meets the 2.1 × 10⁵ on-orbit life cycle requirement with the application of a sputtered-MoS2-lubricated thin-wall bearing. The application method of the large diameter, thin-wall bearing provides engineering references for long-life space filter wheel design.

Keywords: Thin-wall bearing · Long life · Filter wheel · Space-borne · Solid lubricant
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_22

1 Introduction

To meet multispectral imaging requirements, a space-borne camera needs to operate various opto-mechanical devices on-orbit. Typical opto-mechanical devices include the focus mechanism, the calibration mechanism, the multispectral splitter mechanism, and the scanning mirror mechanism. Most of these mechanisms achieve their rotary operation using ball bearings [1]. On the ground, ball bearings are generally lubricated with grease to ensure a long working life and precise running. However, the on-orbit working environment of a space-borne camera is characterized by high vacuum, large temperature differences, high radiation, long life requirements, etc. [2]. The traditional grease lubrication method cannot meet the space-borne camera application requirements. Solid lubrication materials such as graphite and molybdenum disulfide (MoS2) are well established substitutes for grease lubrication in space missions [3-5]. The MoS2 dry film coating method is widely used for ball bearing lubrication. With this method, the ball bearings can easily meet the low
working friction moment requirement as well as the low-contamination working environment needed for the space remote sensing camera.

A space remote sensing camera usually achieves its multispectral imaging function by using a series of optical filters. The filters usually adopt the rolling wheel method to switch into or out of the light path. Eighteen filters with different spectral selection capabilities are integrated in the MIRI cryogenic filter wheel of JWST. A brushless DC motor is adopted as the driving power source of the filter wheel. The rotary wheel is mounted on a center shaft, and the center shaft and the DC motor are integrated by angular contact ball bearings with MoS2-coated lubrication [6]. The filter wheel and the grating wheel in the NIRSpec of JWST use the same mounting method as the filter wheel in MIRI [7, 8]. The filter wheel mechanism in the Euclid camera incorporates a centrally driven titanium wheel disk that houses the four optical elements; the disk is driven by a brushless DC motor supported by a duplex bearing pair [9]. The centrally supported design concept allows small diameter ball bearings to be used, and the operating friction moment can be kept relatively low in a cryogenic environment. However, this design concept leads to low structural stiffness of the filter wheel and a complicated operating strategy for the mechanism. When the mechanism works at ambient temperature and the number of filters is large, a peripherally supported filter wheel design using a large diameter ball bearing can effectively improve the structural stiffness of the mechanism. In this chapter, an application study of the large diameter, thin-wall bearing on the long-life space-borne filter wheel is presented. An appropriate bearing axial preload method was optimized to ensure the on-orbit working life cycle. A simulated environmental life test under vacuum was performed to verify the performance of the filter wheel. SEM and EDS observations were performed to analyze the abrasion of the raceway of the thin-wall bearing, and a friction torque test and analysis is given. The mechanical design process and the life test results of the large diameter, thin-wall bearing are discussed in the following sections.
2 The Application Design of the Large Diameter, Thin-Wall Bearing

2.1 Design Description of the Filter Wheel Mechanism
The filter wheel disk needs to integrate five filters of different spectral transmittance with the same external size of 96 × 96 × 12 mm³. As shown in Fig. 1, the design space of the filter wheel mechanism in the optical axis dimension is restricted to 65 mm, and the maximum outer diameter of the filter wheel is restricted to 363 mm. The filter wheel will operate in an ambient temperature environment on-orbit. Figure 2 shows the mechanism of the filter wheel. The wheel is driven by a spur gear system with a gear ratio of 1/10.5. The power source is a step motor. The small pinion is attached to the motor shaft. The large teeth ring is mounted onto the filter
The Application Study of Large Diameter, Thin-Wall Bearing on Long-Life. . .
Fig. 1 The design space restriction of the filter wheel
Fig. 2 Exploded view of the filter wheel mechanism with the main subcomponents identified (1. main supporting structure, 2. thin-wall ball bearings, 3. rolling filter wheel disk, 4. filter, 5. step motor, 6. small pinion, 7. position sensor, 8. flexure pre-load ring)
wheel disk, with which the small pinion engages. A pair of angular contact thin-wall ball bearings connects the filter wheel with the main supporting structure. The ball bearings are coated with MoS2 film. The rotating accuracy and the on-orbit working life are guaranteed by adequate manufacturing tolerances and a precise flexure pre-load setup. The main design parameters of the filter wheel mechanism are listed in Table 1.

2.2 Design Description of the Large Diameter, Thin-Wall Ball Bearing
The main factor reducing the working life of space-borne solid-lubricated ball bearings is excessive friction torque caused by poor solid lubrication conditions, which can arise from factors such as excessive assembly stress, cold welding in vacuum, temperature fluctuation, and residual contaminants [10, 11]. The working life of a solid-lubricated bearing decreases as the bearing contact stress and rolling speed increase [12]. The cleanliness of the working environment affects the working life of the bearing as well.
Y. Wang et al.

Table 1 The main design parameters of the filter wheel mechanism
Property: Parameter
Filter: Five K9 glass filters with different spectral transmittance layer coatings
Position accuracy of optical axis: 0.1 mm
Angular accuracy of optical axis: 15 arcsec
Gears: Spur gear pair with a gear ratio of 1/10.5
Step motor: J55BYG450 step motor with 280 mN·m torque output ability
Support bearings: Integrated duplex angular contact bearing with MoS2-coated lubrication
Position sensor: CS3040 Hall sensor
Total mass: 10.4 kg
On-orbit operating life: 8 years with 2.1 × 10⁵ cycles
Material: Main structure: TC4; Flexure pre-load ring: TC4; Small pinion: Vespel SP3; Large teeth ring: TC4
Table 2 Main parameters of the thin-wall ball bearing

Item: Parameter
Inner diameter (mm): 355.6
Outer diameter (mm): 374.65
Axial height (mm): 9.525
Total number of balls: 120
Contact angle (°): 30
Lubricant: Sputtered MoS2
Material brand of the rings and balls: 9Cr18
Radial load ability (N): C0r: 37740, Cr: 9510
Axial load ability (N): C0a: 108900, Ca: 27430
In the space-borne camera design application, the working environment of the solid-lubricated bearings benefits from the stable working temperature and the super-clean environment of the optical components. The outer diameter of the filter wheel disk reaches 355 mm with the integration of the five filters. With lightweight design optimization, the mass of the filter wheel disk is controlled to 5 kg. A typical large diameter, thin-wall ball bearing was selected to connect the filter wheel disk and the main structure. The bearing rings and balls are made of the same material brand, 9Cr18. The cages are manufactured from polytetrafluoroethylene (PTFE). Sputtered MoS2 is applied on the balls and cages to form the solid lubrication film. The outer diameter of the bearing reaches 374.65 mm while its axial height is only 9.525 mm. The main parameters of the thin-wall bearing are listed in Table 2. The Hertzian contact theory is widely used to estimate the dynamic contact stress in the bearing [13]. Detailed descriptions of the Hertzian contact stress calculation
procedures are illustrated in [14, 15], and the contact stress carrying capability of the MoS2 lubricant film is discussed in [2, 14]. In the bearing pre-load design procedure, it is necessary to keep the axial load at a relatively low level to limit the peak Hertzian stress of the solid lubricant to a safe region. Simultaneously, the pre-load should be large enough that nominally no gapping of the balls occurs during random vibration, to prevent deterioration of the balls, the races, or the lubricant. After comprehensive consideration, the peak Hertzian stress of the solid lubricant under pre-load is limited to less than 1500 MPa. Usually, the on-orbit operating torque margin of a space-borne mechanism should be no less than 2.5. As the friction torque of the mechanism increases with the pre-load of the ball bearings, the pre-load value must be adjusted so that the torque margin of the mechanism meets the design requirement. The torque margin ηk can be expressed as follows:

ηk = (M − R)/MI − 1    (1)
where M is the minimum driving torque, R is the maximum resistance torque, and MI is the maximum inertia moment. The maximum inertia moment of the rolling wheel disk MI, calculated from the detailed 3D model, is 51.84 mN·m. The minimum driving torque of the step motor M is 280 mN·m. The resistance torque can be measured by applying a certain mass load to the ball bearings. After a series of optimizations, the axial pre-load of the ball bearings was set to 10 kgf. From Eq. (1), the torque margin ηk is then 2.68. Based on the Hertzian contact theory and the ball bearing parameters listed in Table 2, the Hertzian stress is 1204 MPa under a 10 kgf axial pre-load in the 1 g gravity field, which is in the safe range of the sputtered MoS2 lubricant film. When the ball bearings work on-orbit, no gravity load acts on the bearings and the Hertzian stress decreases to 800 MPa, so the expected on-orbit working life is longer than in the ground life test. The axial pre-load of the ball bearing was therefore confirmed as 10 kgf. However, it is difficult to apply a 10 kgf pre-load precisely to the thin-wall ball bearings because of their peculiar character of low self-stiffness. The deformation compatibility principle can effectively characterize the relationship between the applied load and the micro deformation [16]. In this case, we adopt a flexure pre-load ring structure to apply the flexible axial pre-load to the thin-wall bearings [17]. The size-optimized flexure pre-load ring is shown in Fig. 3. A 10 kgf axial pre-load produces a 0.01 mm axial pre-load deformation of the duplex ball bearings. The finite element analysis result of the flexure pre-load ring is shown in Fig. 4. The axial pre-load deformation is achieved by the precise manufacture of the pre-load ring.
From the finite element analysis, the maximum von Mises stress of the ring is 4.26 MPa and the maximum axial deformation of the ring is 0.01 mm. The flexure pre-load ring is manufactured from TC4, whose allowable stress is no less than 800 MPa, so the strength of the flexure ring fully meets the design requirement.
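The torque-margin arithmetic of Eq. (1) is easy to check numerically. The short sketch below is our own illustration (function and variable names are not from the paper); it uses M = 280 mN·m and MI = 51.84 mN·m from the text, together with the maximum friction torques later reported in Table 3.

```python
# Torque margin per Eq. (1): eta_k = (M - R) / M_I - 1, with
# M   = minimum driving torque (mN·m),
# R   = maximum resistance (friction) torque (mN·m),
# M_I = maximum inertia moment of the wheel disk (mN·m).
def torque_margin(M, R, M_I):
    return (M - R) / M_I - 1

M, M_I = 280.0, 51.84

# Maximum friction torques of the two prototypes before the life test
# (Table 3) give margins of about 3.22 and 2.68, matching the reported
# 3.21 and 2.67 to within rounding.
for R in (61.39, 89.38):
    print(round(torque_margin(M, R, M_I), 2))
```

With the design requirement ηk ≥ 2.5, the corresponding maximum allowable friction torque is 280 − 3.5 × 51.84 ≈ 98.6 mN·m, which is the limit line drawn in Fig. 10.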
Fig. 3 The cut view of the flexure pre-load ring structure
Fig. 4 The finite element analysis result of the flexure pre-load ring (left: contour plot of the von Mises stress; right: contour plot of the axial deformation)
3 The Accelerated Life Test of the Large Diameter, Thin-Wall Bearing

The on-orbit operating characteristic of the filter wheel is intermittent reciprocating rotation. The rotation speed is 3 r/min, so the filter wheel disk completes a 72° angular rotation in 4 s, with a 1 s pause between every two rotary motions. From the on-orbit working life cycle requirement listed in Table 1, the total on-orbit rotation number of the mechanism is 2.1 × 10⁵ cycles. Considering manufacturing and time cost factors, space-borne solid-lubricated mechanisms generally adopt the accelerated life test method to verify their on-orbit life behavior [18, 19]. The accelerated life test procedure is shown in Fig. 5.
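These rotation parameters are mutually consistent, as a quick arithmetic sketch shows (our own illustration; the 1/10.5 gear ratio is taken from Table 1):

```python
# At 3 r/min the wheel turns 3 * 360 / 60 = 18 deg/s, so a 72 deg
# index move takes 4 s, as stated above.
wheel_speed_rpm = 3.0
deg_per_s = wheel_speed_rpm * 360.0 / 60.0    # 18.0
move_time_s = 72.0 / deg_per_s                # 4.0

# Through the 1/10.5 spur-gear reduction, the step motor shaft turns
# 10.5 times faster than the filter wheel.
motor_speed_rpm = wheel_speed_rpm * 10.5      # 31.5
print(deg_per_s, move_time_s, motor_speed_rpm)
```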
Fig. 5 The procedure of the accelerated life test (flowchart boxes: Start; confirm the test parameters and failure criterion; choose stress type; confirm the stress applying method; confirm the accelerated stress level and test environment; confirm the participating number of solid lubricant ball bearings; confirm the life distribution and accelerated test formula; confirm the test termination criterion; test execution; test data processing; test result analysis and discussion; End)
For solid-lubricated ball bearings, temperature, load, and rotating speed are the three commonly used stress types. Different stress types and stress levels are used to confirm the corresponding failure criteria of the ball bearings under test. The Weibull distribution theory is widely used in the life verification procedures of the ball bearings of space-borne mechanisms. When the reliability and the test number of the mechanisms are confirmed, the life test characteristic can be expressed by the equation below:

XR = X0 · [ln β / (n · ln R(X0))]^(1/m)    (2)
where XR is the quantum of the life test, X0 is the task parameter of the mechanism, n is the participation number of the mechanisms, m is the shape factor, β is the judgment risk, and R = P(X > X0) is the reliability demand of the mechanism. The parameter β is related to the confidence value γ of the mechanism by β = 1 − γ. The reliability demand of the rolling filter wheel mechanism is 80%, the shape factor is 2, and the confidence value is 65%. We chose two rolling filter mechanisms as the accelerated life test prototypes. From Eq. (2), the quantum of the life test is then 3.2 × 10⁵ cycles.
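Substituting the stated values (X0 = 2.1 × 10⁵ cycles, n = 2, m = 2, R = 0.8, γ = 0.65 and hence β = 0.35) into Eq. (2) reproduces the quoted test quantum; the sketch below is our own illustration.

```python
import math

# Life test quantum per Eq. (2): X_R = X_0 * (ln(beta) / (n * ln R))**(1/m)
def life_test_quantum(X0, n, m, R, gamma):
    beta = 1.0 - gamma            # judgment risk from the confidence value
    return X0 * (math.log(beta) / (n * math.log(R))) ** (1.0 / m)

XR = life_test_quantum(X0=2.1e5, n=2, m=2, R=0.8, gamma=0.65)
print(round(XR))   # about 3.2e5 cycles
```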
Fig. 6 The experimental setup for the accelerated life test (vacuum chamber containing test prototypes #1 and #2, with a mechanism control unit and a mechanism data processing unit)
Fig. 7 Flowchart of the mechanism accelerated life test (Start; accelerated life test phase 1; function test 1; accelerated life test phase 2; function test 2; …; accelerated life test phase n; function test n; End)
In the accelerated life test procedure, the rolling speed of the test prototypes is set to 30 r/min, which is 10 times the normal operating rolling speed. To avoid influence from the laboratory environment, the life test is conducted in vacuum. An active temperature control system is applied in the vacuum chamber to ensure consistency between the life test status and the on-orbit status. The experimental setup of the accelerated life test system is shown in Fig. 6. The orientation accuracy of the filter wheel mechanism is determined by the operating accuracy of the step motor. The mechanism control unit sends the driving pulse signal to the step motor of the test prototype, and the step motor rotates the filter wheel disk. The data processing unit automatically records the actual step count of the step motor and compares it with the driving pulse number. If there is a mismatch between the driving pulse number and the actual step number, the data processing unit records the difference and makes a logical judgment. If the difference exceeds 5, the data processing unit sends a termination signal to the control unit: the orientation accuracy of the mechanism is judged to exceed its requirement and the mechanism has reached its end of life. The test is operated in a series of phases, with every 4 × 10⁴ cycles defined as a phase. A function test of the mechanism is performed between every two phases to confirm that the mechanism is in a healthy status. Figure 7 shows the flowchart of the accelerated life test of the mechanism, and Fig. 8 shows the experimental setup under the vacuum environment.
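The step-loss termination logic described above can be sketched as follows. This is a hypothetical illustration of the data processing unit's judgment, not the actual test software; all names and numbers are ours.

```python
# Hypothetical sketch: the data processing unit compares the commanded
# pulse count with the actual step count of the step motor and requests
# termination once the difference exceeds 5 steps.
STEP_LOSS_LIMIT = 5

def judge_step_loss(commanded_pulses, actual_steps):
    """Return (lost_steps, terminate_flag) for one rotation phase."""
    lost = abs(commanded_pulses - actual_steps)
    return lost, lost > STEP_LOSS_LIMIT

print(judge_step_loss(1000, 998))   # (2, False): within limit, continue
print(judge_step_loss(1000, 993))   # (7, True): end-of-life criterion met
```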
Fig. 8 Experimental setup for the accelerated life test
Fig. 9 The step-loss trend of the life test prototypes
4 Experimental Data Analysis and Discussion

Two test prototypes of the rolling filter wheel mechanism finished the accelerated life test. During the test, both mechanisms worked in a normal status. The temperature control system in the vacuum chamber also worked well, and the temperature was controlled within 20 °C ± 1 °C throughout the whole test. The motor step-loss data during the test were recorded by the data processing unit; the detailed trend curves are shown in Fig. 9. From the test data in Fig. 9, the step-loss numbers of the two test prototypes are both less than five steps. Both the operating accuracy of the filter wheel and the on-orbit life of the solid-lubricated ball bearings are thus well validated. After the accelerated life test, friction torque measurements were applied to both test prototypes; the results are listed in Table 3. The friction torques of the two test prototypes both increased to a certain extent. The comparison of the maximum friction torque with the torque limit is
Table 3 The friction torque test results of the test prototypes (mN·m)

Prototype 1 — Before test: average 20.43, maximum 61.39, torque margin 3.21; After test: average 43.41, maximum 89.35, torque margin 2.67
Prototype 2 — Before test: average 29.72, maximum 89.38, torque margin 2.67; After test: average 93.49, maximum 141.07, torque margin 1.68
Fig. 10 Comparison of the maximum friction torque of the two test prototypes (bars: before and after the life test for prototypes 1 and 2; horizontal line: torque margin limit at 98.56 mN·m)
shown in Fig. 10. At the end of the life test, the friction torque of the No. 2 prototype exceeded the torque margin limit, but the mechanism still operated in good condition and the orientation accuracy was well maintained. The tested thin-wall ball bearings were disassembled after the accelerated life test. Scanning electron microscope (SEM) analyses were applied to two randomly selected balls in each bearing of the two test prototypes. The SEM photos in Fig. 11 show that the ball surfaces remain in good condition. Energy dispersive spectroscopy (EDS) analyses were applied to different zones of the balls to verify the element composition on their surfaces. The EDS spectrum results are shown in Fig. 12, and Tables 4 and 5 show the detailed EDS analysis results for the two test balls. From the EDS data and the abrasion of the ball surfaces, every detected element except Fe and Cr belongs to the solid lubricant film or the ball cages; Fe and Cr are the background elements of the balls. No outside element contamination occurred during the accelerated life test, and the transfer film of solid lubricant on the friction surface formed normally. After the disassembly of the test bearings, both the inner and the outer raceways showed obvious abrasion dust. The abrasion dust on the 2# bearing is
Fig. 11 The SEM photos of the balls in the two test prototypes (left is the ball in 1# test prototype and right is the ball in 2# test prototype)
Fig. 12 The EDS photos of the balls in the two test prototypes (left is the ball in 1# test prototype and right is the ball in 2# test prototype)

Table 4 The EDS analysis result of the ball in 1# test prototype

Element | Line type | Consistency | Weight percentage | Atom percentage
C | K line | 1.47 | 5.92 | 20.26
O | K line | 5.38 | 3.50 | 8.99
F | K line | 5.30 | 2.74 | 5.93
Si | K line | 0.38 | 0.47 | 0.69
Cr | K line | 19.50 | 13.87 | 10.96
Fe | K line | 84.63 | 70.51 | 51.89
Mo | L line | 2.81 | 2.99 | 1.28
Total | | | 100 | 100
more abundant than on the 1# bearing. After wiping away the abrasive dust, no abnormal abrasion zone was observed on the inner or outer races of the two bearings. The cages of the two bearings were not seriously worn; the abrasion dust in the bearings comes mainly from the cages. Based on the Weibull distribution theory, the accelerated life test was accomplished and the minimum torque margin is 1.68. At the end of the life test procedure, both of the
Table 5 The EDS analysis result of the ball in 2# test prototype

Element | Line type | Consistency | Weight percentage | Atom percentage
C | K line | 2.37 | 8.81 | 28.33
O | K line | 4.82 | 3.00 | 7.24
F | K line | 4.01 | 1.95 | 3.96
Si | K line | 0.40 | 0.47 | 0.65
S | K line | 0.15 | 0.13 | 0.16
Cr | K line | 20.75 | 13.96 | 10.37
Fe | K line | 88.91 | 70.70 | 48.90
Mo | L line | 0.97 | 0.98 | 0.40
Total | | | 100 | 100
two test prototypes worked well. The actual on-orbit life of 2.1 × 10⁵ cycles is thus verified under the stated preconditions. Considering the time and cost of the test, the participation number of the mechanisms was confirmed as two. It is difficult to perform the life test at a much higher confidence level, as this would require more test time and more test prototypes to verify the actual on-orbit life of the mechanism.
5 Conclusion

Based on the long on-orbit working life demand of the space-borne rolling filter wheel, an application of the large diameter, thin-wall bearing is presented in this chapter. A precise bearing mounting method was introduced to provide sufficient stiffness and adequate operating accuracy. A flexure bearing axial preload method was optimized to ensure the on-orbit working life cycle. Based on the Weibull distribution theory, an accelerated life test under vacuum was designed, and two test prototypes of the rolling filter mechanism finished the test. SEM and EDS analyses were performed to analyze the abrasion of the raceways of the thin-wall bearings, and a friction torque test and analysis was given. The test results show that the filter wheel has a 2.1 × 10⁵ on-orbit cycle capability with the application of the sputtered MoS2-lubricated thin-wall bearing. This application of the large diameter, thin-wall bearing provides an engineering reference for long-life space filter wheel design.
References

1. Oswald, F.B., Jett, T.R., Predmore, R.E., Zaretsky, E.V.: Probabilistic analysis of space shuttle body flap actuator ball bearings. NASA/TM-2008-215057
2. Lingfeng, W.: Study on damage mechanism of solid lubrication bearing in vacuum environment. Harbin Institute of Technology, Harbin (2009). (in Chinese)
3. Binhai, Z.: Application of solid-lubricated technology in rolling bearing. Hefei University of Technology, Hefei (2003). (in Chinese)
4. Jun, Y., Han, Z., Yinmu, J.: Experimental analyses of the bearings used in the driving mechanism of the tilting board of solar cell. J. Hefei Univ. Technol. 25(S1), 946–948 (2002). (in Chinese)
5. Xinzhan, Y., Xiaoping, W., Hui, W., Bo, W.: Life analysis and test for fine adjustment antenna pointing mechanism. Spacecr. Environ. Eng. 29(3), 308–311 (2012). (in Chinese)
6. Lemke, D., Böhm, A., de Bonis, F., et al.: Cryogenic filter- and spectrometer wheels for the Mid Infrared Instrument (MIRI) of the James Webb Space Telescope (JWST). Proc. SPIE 6273, 627324, 1–8 (2006)
7. Weidlich, K., Sedlacek, M., Fischer, M., et al.: The grating and filter wheels for the JWST NIRSpec instrument. Proc. SPIE 6273, 627323, 1–8 (2006)
8. Weidlich, K., Fischer, M., Ellenrieder, M.M., et al.: High-precision cryogenic wheel mechanisms for the JWST NIRSPEC instrument. Proc. SPIE 7018, 701821, 1–12 (2008)
9. Holmes, R., Grozinger, U., Krause, O., et al.: A filter wheel mechanism for the Euclid near-infrared imaging photometer. Proc. SPIE 7739, 77391A, 1–10 (2010)
10. Tingwei, L., Ning, Z., Wei, L., et al.: Failure simulation analysis and experimental verification of bearings for spaceborne moving mechanism. China Mech. Eng. 25(21), 2864–2868 (2014). (in Chinese)
11. Ai Hong, S., You, M., Guo, L.Z., et al.: Effect of space environment on working life of solid-lubricated rotating parts. Opt. Precis. Eng. 22(12), 3265–3271 (2014). (in Chinese)
12. Xinli, L., Zhiquan, L., Jin, Y.: A wear failure model for solid-lubricated ball bearings of spacecraft mechanisms. Spacecr. Eng. 17(4), 109–113 (2008). (in Chinese)
13. Shengjie, S.: Study on bearing preload of space joint and its dynamic characteristic. Harbin Institute of Technology, Harbin (2010). (in Chinese)
14. Qing, Z.: Fatigue life test and failure behavior analysis of bearing in space motor. Harbin Institute of Technology, Harbin (2010). (in Chinese)
15. Chunxu, Y., Rui, L., Feng, G.: Design and mechanical simulation of a micro drive assembly for space application. Aerosp. Control Appl. 42(6), 20–25 (2016). (in Chinese)
16. Malka, R., Desbiens, A.L., Chen, Y., et al.: Principles of microscale flexure hinge design for enhanced endurance. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2879–2885 (2014)
17. Rad, F.P.: Design and characterization of curved and spherical flexure hinges for planar and spatial compliant mechanisms. Università di Bologna, Bologna (2014)
18. Hui, Z.: Accelerated life testing method research for solid lubrication rolling bearing. Chongqing University, Chongqing (2013). (in Chinese)
19. KaiFeng, Z., Hui, Z., JiXing, H., et al.: Life test for angular contact ball bearings 708C with sputtering MoS2/Ti films in atmospheric environment. Bearing 4, 32–35 (2014). (in Chinese)
A Step-by-Step Exact Measuring Angle Calibration Applicable for Multi-Detector Stitched Aerial Camera

C. Zhong (*), Z. M. Shang, G. J. Wen, X. Liu, H. M. Wang, and C. Li

Engineering Technology Research Center for Aerial Intelligent Remote Sensing Equipment of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
[email protected]
Abstract. Nowadays in surveying and mapping applications, the demand for aerial mapping cameras with a large image frame is increasing rapidly because of their great imaging efficiency. Because of the difficulty of fabricating a single large detector, stitching multiple detectors is an approach to constructing large-format digital aerial cameras. Practice also shows that imagery created from multiple images acquired by a multi-detector camera system can have similar metric quality, so this sort of camera system will remain in use. As a result, mapping the multi-center projected images acquired by multiple detectors onto an equivalent single-center projected virtual image is a core issue, for which an accurate calibration is necessary. The exact measuring angle method is a common and very accurate laboratory camera calibration method. It also performs well for the indoor calibration of long-focal-length cameras, without limitation of imaging distance. However, it is not applicable to cameras whose central area does not form an image. This chapter proposes a step-by-step calibration method. Step 1 calibrates the central area with traditional exact angle measurement, and step 2 establishes the image point transformation according to the relative position of the various detectors and hence accomplishes the calibration of the edge area using the result of the central area. Theoretical derivation and simulation prove that the proposed method is suitable for multi-detector stitched aerial cameras and overcomes the impossibility of direct measurement in the edge area. Measurement error caused by optical axis deviation and focal length inconsistency is also considered, and its influence on the focal length and principal point calculation is analyzed.
By deriving the error propagation model of the proposed exact measuring angle method, the assembly accuracy of some key points in the camera system is estimated, which will be a key indicator of accuracy control during the process of camera system production. Keywords: Exact measuring angle · Calibration · Interior element · Accuracy
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_23

1 Introduction

With the development of aerial photogrammetry technology, there is an increasing demand for high image resolution and large image coverage; as a result, the need for large-frame aerial mapping cameras is growing. Since it is still difficult to manufacture a single large-frame detector, multi-detector imaging to form an equivalent
projected image is an alternative way. In the process of establishing a single central projection from multiple projections, accurate image interior elements are needed [1–3]. Basically, image interior elements are measured and calculated by exact measuring angle [4–8] or 3D control field calibration. Accurate angle measurement is a laboratory-based calibration method which is highly accurate. However, the operating circumstances of the equipment are very strict and rather expensive to maintain; hence, this method is not widely applied. The 3D control field calibration method takes photos of a control field which includes many precisely measured 3D control points and calculates the interior elements based on multiple backward intersections. However, as the image frame becomes large, the scale of the control field becomes tremendously large as well, which increases the difficulty of establishing and maintaining it. For a multi-detector camera, some of the sub-cameras have the disadvantage that the central area of the frame does not form an image, which prevents the normal accurate angle measurement method from being applied. Based on these situations, this chapter introduces a step-by-step calibration method based on accurate angle measurement and analyzes the measuring accuracy for a multi-detector camera such as the UCXp.
2 Method of Calibration

2.1 Image Interior Element
The image interior elements constitute the 3D displacement vector of the camera lens center with regard to the image frame; as shown in Fig. 1, the elements include the focal length f and the principal point coordinates (x0, y0). For some large mapping cameras, the lenses are designed with marginal and symmetric distortion; hence, the calculated image interior elements are defined as the group of parameters which minimizes the quadratic sum of the image distortion. That is the basis of accurate angle measurement.
Fig. 1 Image interior element
2.2 Principle of Accurate Angle Measurement
Figure 2 shows the principle of accurate angle measurement in one dimension. O is the center of the image, N is the ideal principal point, P is the image point of the target, w is the bundle angle corresponding to P, dw is the bundle angle corresponding to O, and l and l0 are the 1D coordinates of P and O, respectively. The observation values are w and l. The geometric relation between the coordinate and the bundle angle of P is established as

l − l0 = f · tan(w − dw)    (1)
Based on the fact that the principal point has minimum distortion, after measurements of several points, the most probable values of the principal point coordinate and the focal length are calculated using Eq. (2) according to the least squares criterion:

l0 = (Σθi⁴ · Σθi²li − Σθi³ · Σθi³li) / (Σθi² · Σθi⁴ − (Σθi³)²)
f = (Σθi² · Σθi³li − Σθi³ · Σθi²li) / (Σθi² · Σθi⁴ − (Σθi³)²)    (2)
where θi = tan wi.

2.3 Calibration for Central Detector
The process described in Sect. 2.2 is based on 1D coordinates, which is suitable for a linear detector camera. For a frame image camera, l and l0 should be considered as displacements along the scan line, and several scan lines ought to be taken into account for better calibration, as shown in Fig. 3.
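To make the fitting step concrete, the sketch below recovers l0 and f from simulated angle/coordinate pairs by fitting the linearized model l = l0 + f·tan w with ordinary least squares. This is our own simplified stand-in for Eq. (2), not the paper's exact weighting scheme.

```python
import math

def fit_interior(ws_deg, ls_mm):
    """Ordinary least-squares fit of l = l0 + f * tan(w)."""
    th = [math.tan(math.radians(w)) for w in ws_deg]
    n = len(th)
    s_t = sum(th)
    s_t2 = sum(t * t for t in th)
    s_l = sum(ls_mm)
    s_tl = sum(t * l for t, l in zip(th, ls_mm))
    det = n * s_t2 - s_t * s_t
    f = (n * s_tl - s_t * s_l) / det
    l0 = (s_l - f * s_t) / n
    return l0, f

# Synthetic check with f = 100.5 mm (the focal length used in Sect. 4)
# and a 0.05 mm principal point offset:
ws = [-20, -15, -10, -5, 5, 10, 15, 20]
ls = [0.05 + 100.5 * math.tan(math.radians(w)) for w in ws]
l0, f = fit_interior(ws, ls)
print(round(l0, 4), round(f, 3))   # recovers 0.05 and 100.5
```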
Fig. 2 Principle of accurate angle measurement
Fig. 3 Scan line distribution for frame image
Fig. 4 Calibration of edge detector (scan line wi across the central and edge detectors, showing image points pe and pc)
2.4 Calibration for Edge Detector
For multi-detector cameras such as the UCXp, there are sub-cameras whose detectors are distributed centrally symmetrically with regard to the lens center, but whose central areas do not form an image. In this situation, the image point 1D coordinate l cannot be obtained, since point O is not in the image. To overcome this shortcoming, a step-by-step method is applied, in which l for the edge detectors is calculated based on the calibration result of the central detector. As shown in Fig. 4, since pe is impossible to measure directly, pc in the central detector and the designed displacement between the two detectors are used to fabricate a pc in the image plane, and l is hence calculated using pc. Then the focal length and the principal point are calculated according to Eq. (2).
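The transfer step can be illustrated with a line of arithmetic: the fabricated 1D coordinate for an edge-detector point is the measured central-detector coordinate plus the designed displacement between the two detectors. The function name and the 30 mm offset below are hypothetical values of our own, not parameters of the UCXp.

```python
# Hypothetical sketch of the step-by-step transfer (Sect. 2.4): the 1D
# coordinate l for the edge area is fabricated from a central-detector
# point and the designed displacement between the detectors.
def fabricate_edge_l(l_central_mm, designed_offset_mm):
    return l_central_mm + designed_offset_mm

# A point 4.0 mm from the reference on the central detector, with a
# made-up 30.0 mm designed displacement between detector centers:
print(fabricate_edge_l(4.0, 30.0))   # 34.0
```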
3 Error Analysis for Step-by-Step Calibration

3.1 Error Caused by Optical Axes Displacement
In the process of camera installation, it is inevitable that errors exist in the displacement between the optical axes, which would lead to a wrong pc for transferring points as described in Sect. 2.4. The displacement error is shown in Fig. 5.

3.2 Error Caused by Inequality of Focal Length
It is probable that the focal lengths of the different sub-cameras are unequal, which leads to uncertainty in the calculated coordinates of points in the overlapping area between two detectors. The result for l is imprecise if the focal lengths are assumed to be the same. The error caused by different focal lengths is shown in Fig. 6.
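The magnitude of this error can be estimated from simple imaging geometry: a target at field angle w images at x = f·tan w, so a focal-length mismatch Δf shifts the coordinate by Δx = x·Δf/f. This back-of-envelope estimate is our own sketch, not a formula from the paper.

```python
import math

# Back-of-envelope estimate of the coordinate error from a focal-length
# mismatch between sub-cameras: x = f * tan(w)  =>  dx = x * df / f.
def focal_mismatch_error_mm(x_mm, f_mm, df_mm):
    return x_mm * df_mm / f_mm

f = 100.5                                   # mm, as in Sect. 4
x = f * math.tan(math.radians(20.0))        # image coordinate at 20 deg
dx = focal_mismatch_error_mm(x, f, 0.010)   # 10 um focal-length mismatch
print(round(dx / 0.009, 2), "pixels")       # about 0.4 pixel (9 um pixels)
```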
Fig. 5 Measurement error caused by optical axes displacement (lens displaced by Δx0 relative to the image plane, shifting the image point from p1 to p2)
Fig. 6 Measurement error caused by various focal lengths (a point q images at xq in the central image plane and at x′q in the edge image plane, with Δxq = x′q − xq)
4 Experiment and Analysis

4.1 Experiment Design
In our experiment, the camera interior elements and distortions are simulated with reference to the parameters of the UCXp. The simulated data are used to test the step-by-step calibration and accuracy estimation. The main procedure of the simulation is: (1) set camera parameters such as focal length, frame size, principal point, and distortion model; (2) set scan lines and calculate each point coordinate with limited random error; (3) change the camera parameters and re-calculate the point coordinates. The calibrated results are compared with the set parameters in order to estimate accuracy. In our experiment, the basic focal length is 100.5 mm and the pixel size is 9 μm.

4.2 Calibration of Central Detector
Two groups of experiments were carried out, considering the conditions of optical axes displacement and varying focal lengths, with and without lens distortion. Results of changing the focal length in the central detector experiments are shown in Fig. 7; the normal accurate angle measurement method performs well in obtaining a precise focal length and principal point. Although there is a constant error in the solved focal length under the lens distortion condition, this error can be considered to be caused only by the lens distortion. Besides, the principal points are almost the same as the set parameters. Hence, the change of focal length does not influence the calibration result of the central detector.
Fig. 7 Results of varying focal lengths for central detector
Fig. 8 Results of principal point displacement for central detector (plot: error of result in pixels versus principal point displacement in pixels; curves: dx and df, each with and without distortion)
Results of the principal point displacement experiments for the central detector are shown in Fig. 8. If no distortion exists, the displacement of the principal point is precisely calculated by the accurate angle measurement method; the focal length is influenced only slightly, with the maximum change less than 1 pixel size. Under the lens distortion condition, the calculated principal point coordinate presents a linear relation with the displacement of the principal point. As for the focal length, besides the constant difference that may be caused by distortion, there are variations influenced by the principal point displacement, although the changes are below 1/3 pixel size.

4.3 Calibration of Edge Detector
Similarly, another two groups of experiments are performed. Results of the changing-focal-length experiments for the edge detector are shown in Fig. 9. In the condition of no distortion, the calculated result is precise. However, when distortion exists, both the principal point and the focal length have linear errors with respect to focal length changes, and the maximum error is over 1 pixel size. This indicates that the variation of focal lengths is an important factor influencing the calibration results for the edge detector. Results of the principal point displacement experiments for the edge detector are shown in Fig. 10. If no distortion exists, the curve of the calculated result is parabolic, with a maximum principal point coordinate error below 1/5 pixel size and a focal length error below 1/2 pixel size. However, under the lens distortion condition, both the principal point and the focal length have errors over 1 pixel size. This indicates that the influence caused by principal point displacement is not negligible.
Fig. 9 Results of varying focal lengths for edge detector
Fig. 10 Results of principal point displacement for edge detector
5 Conclusion

This chapter introduces a step-by-step calibration method based on accurate angle measurement for a multi-detector stitched camera. An analysis of the calibration error is performed, which indicates that the displacement of the optical axes (equivalently, of the principal points) influences the focal length result, and that focal length variation influences both the principal point coordinate and the focal length results. The calibration
errors increase when there is lens distortion. The experimental results support this conclusion. It can be concluded that for a multi-detector camera it is of great significance to control the optical axis displacement and the focal length variation, in order to obtain an accurate calibration of the image interior elements.
References

1. Wang, H., Wu, Y.D., Zhang, Y.S.: Modeling and analyzing of geometric joint error for CCD matrix images of digital aerial camera. J. Inst. Surv. Mapp. 20(4), 257–261 (2003)
2. Li, J., Liu, X.L., Liu, F.D., et al.: Mosaic model of SWDC-4 large format aerial digital camera and accuracy analysis of stereo mapping. Sci. Surv. Mapp. 33(2), 104–106 (2008)
3. Devernay, F., Faugeras, O.: Straight lines have to be straight—automatic calibration and removal of distortion from scenes of structured environments. Mach. Vision Appl. 13, 14–24 (2001)
4. Chen, T., Shibasaki, R., Lin, Z.: A rigorous laboratory calibration method for interior orientation of an airborne linear push-broom camera. Photogramm. Eng. Remote Sens. 73, 369–374 (2007)
5. Wu, G., Han, B., He, X.: Calibration of geometric parameters of line array CCD camera based on exact measuring angle in lab. Opt. Precis. Eng. 15, 1628–1632 (2007)
6. Zhao, Z., Ye, D., Zhang, X., Chen, G.: Calibration of area-array camera parameters based on improved exact measuring angle method. Opt. Precis. Eng. 24, 1592–1599 (2016)
7. Liu, W., Ding, Y., Jia, J.: Factor analysis for the impaction on accuracy in exact measuring angle method. Sym. Photo. Optoelectron. 53, 1–4 (2010)
8. Yuan, G., Ding, Y., Hui, S., Liu, L., Yu, C.: Grouped approach algorithm for mapping camera calibration based on method of exact measuring angle. Acta Opt. Sin. 13, 14–24 (2001)
Absolute Distance Interferometric Techniques Used in On-Orbit Metrology of Large-Scale Opto-Mechanical Structure

Yun Wang1(*) and Xiaoyong Wang1

1
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected]
Abstract. Optical misalignment caused by opto-mechanical structural deformation, due to the zero-gravity state and the on-orbit unsteady-state temperature field, becomes more serious as the sensor aperture increases. High-precision metrology of the distances between optical components is demanded to meet the required optical accuracy in on-orbit alignment. Submicron accuracy in absolute distance is needed, which is constrained by the space environment, implementation conditions, system running costs, etc. Aiming at engineering implementation, this chapter presents research on on-orbit laser-interferometric absolute distance metrology of large-scale opto-mechanical structures, as follows. In terms of measuring method, the double heterodyne interferometer and its related technology branches are reasonable choices for acquiring the required precision; the technical principle and feasibility are provided. The possible measuring range is matched to the application range of the displacement. A mathematical solution based on the method of excess fractions is considered valuable for expanding the measurement range without additional system costs. This chapter also analyzes the main error sources, and an error control method based on controlling the measurement time is concluded.

Keywords: Absolute distance metrology · Interferometry · Large-scale opto-mechanical structure
1 Introduction

The requirement for high-precision real-time displacement metrology of optical components is increasing with the expanded apertures, increased focal lengths, and lightweight designs in the remote sensing area. Take a large-aperture remote sensor with an aperture of 1.3 m and a distance of 1.4 m between the primary and secondary mirror as an example. Simulation indicated an approximately 70 μm displacement within the system after gravity offload. This error, dozens of times larger than the acceptable error in the optical design, would cause unacceptable imaging quality. It is absolutely unacceptable
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_24
without effective offset or efficient on-orbit focal length adjustment. Besides the step gravity unload, the moisture desorption of composite material would also lead to a long-term position change between the mirrors and the related supporting structures, resulting in more serious imaging quality deterioration. Moreover, the periodic variation caused by the illumination condition would affect the stability of the secondary mirror supporting pole. This thermal deformation would also tilt the optical axis and decrease the positioning accuracy. Meanwhile, micro-vibration on the platform would bring unpredictable influence to the imaging quality, resulting in image degradation. The factors affecting the image quality become more severe with the expanding aperture and resolution of the remote sensor. At the present stage, the solution to this problem is mainly based on simulation and ground-based experiments; e.g., the focal plane can be preset before launch based on ground-based gravity unload experiments. The on-orbit mission plan is settled by means of defocus prediction from the deformation. Analysis of the temperature influence, by means of thermal control and simulation, is used for data adjustment. However, all the adjustments mentioned above are built on simulation and prediction. The lack of real-time position sensing can cause the adjustment to fail. The performance of a highly precise large-aperture space camera sets new requirements for an on-orbit real-time system. It is important to acquire the precise displacement between the main optical components, so as to provide data for system control and analysis. The assessment indicated that micrometer accuracy, over a range of several millimeters within a 30 m baseline, is required in a large-aperture optical system.
Laser interferometry, as the most precise distance measurement method, is widely used not only in industry but also in several international collaborative programs, i.e., stellar detection, dark matter detection, gravitational wave detection, etc. The feasibility of this method is widely accepted. Laser interferometry and its related branches lead to high performance in position and attitude measurement, followed by precise position sensing; laser interferometry is the key to the success of the projects mentioned above. Based on simulation and experiments, it is indicated that the space environment is relatively stable in vibration and the disturbance is simplified, which makes it possible for laser interferometers to work stably. With compensation and temperature stabilization, the theoretical feasibility of space laser interferometry is highly acceptable. This chapter presents the research and feasibility study of absolute distance metrology for large-aperture optical structures on orbit. A precise metrology method adapted to the measurement of large optical components is provided, following a summarized analysis of laser interferometry metrology technologies. It is shown that, by means of a double-wavelength double heterodyne interferometry system, with proper wavelength setting and adjustment, it is possible to meet the requirements of a large-aperture on-orbit real-time remote sensing system.
2 Theory of Absolute Metrology and Laser Heterodyne

The requirement for absolute metrology exists widely in space programs, from angstrom-level length sensing to precise distance metrology over several kilometers between spacecraft [1]. Absolute displacement metrology is distinguished from relative displacement metrology by definition: it is a measuring process that can be paused and traced back. Since there is an inherent contradiction between a long measuring range and high accuracy, in terms of error analysis the continuous high-accuracy measurement over a specific range is also considered absolute metrology. There are two sorts of absolute metrology methods based on lasers: time-of-flight, a non-coherent measurement method, and optical interferometry, a coherent measurement. The time-of-flight method is limited by the detector, causing relatively low accuracy. For example, for a measurement of distance L = cτ/2, with c the speed of light and τ the time of flight, a detector bandwidth with a resolution of 3 ps yields an accuracy of about 1 mm [2]. Laser interferometry, traced to the wavelength of the laser, is known as the most accurate metrology method. This method takes Michelson interferometry as the theoretical basis and the light wavelength as the scale, so as to sense the deviation of the distance by measuring the phase difference between the reference arm and the measurement arm. The displacement can be considered as a function of the phase, i.e., L = (φ/2π)λ, where the period of φ is 2π, 0 ≤ φ < 2π; therefore, the distance L is measured as follows:

L = (N + φ/2π)λ  (1)
where N is an undetermined integer. In this method, a distance larger than a wavelength is not acceptable, because if the distance is larger than λ, the phase exceeds 2π and the measurement repeats, which cannot be distinguished in the system. This is named the integer ambiguity. To solve the integer problem and expand the measurement range beyond half a wavelength, a guide rail could be a solution: by continuously and rapidly tracing the phase change, it is theoretically possible to acquire the accurate displacement from the starting point. However, under engineering conditions, with environmental disturbances such as step pulses, this method is not reliable in some circumstances. In the field of absolute measurement, heterodyne methods of frequency scanning and multi-wavelength interferometry are commonly used to acquire large-range displacement. The heterodyne method takes lasers of different wavelengths as the light source; a synthetic wavelength is created by the beat frequency. The measurement scale is thus expanded by the synthetic wavelength, which is much larger than the original wavelength, so the range of the integer ambiguity can be expanded. Take a two-wavelength heterodyne interferometer as an example. The measurement mechanism is indicated as follows (Fig. 1) [3]:
Fig. 1 Schematic diagram of the heterodyne interferometer
Assume two heterodyne frequencies f1 and f2. The phase differences during measurement can be indicated as

φ1 = 2πf1L/c  (2)

φ2 = 2πf2L/c  (3)

where L is the length of the reference arm. The displacement from A to B can be indicated as the phase difference along the common path, shown as follows:

Δφ = 2πL(f2 − f1)/c  (4)

which corresponds to a synthetic wavelength:

λsynth = c/(f2 − f1) = c/Δf = λ1λ2/(λ2 − λ1)  (5)
The synthetic wavelength introduced here also has a cycle of 2π. Theoretically, λsynth increases with decreasing Δf, which amounts to an expanded scale, resulting in a longer measurement range. Based on the analysis of integer ambiguity resolution, a longer absolute measurement follows. Therefore, by adjusting the frequency difference Δf, the synthetic wavelength can be expanded to a certain determinable range, so as to eliminate the effect of the integer ambiguity. For instance, a system with a frequency difference of 15 GHz gives a synthetic wavelength of 2 cm. Considering reasonable impact load stress and mechanical stiffness, in a large
aperture optical system the distance deviation can be controlled within one cycle of the synthetic wavelength. A subsequent Δφ measurement then yields the absolute distance.
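The 15 GHz example above can be checked numerically with Eq. (5); this is a small verification sketch, not part of the measurement system itself:

```python
c = 299_792_458.0  # speed of light, m/s

def synthetic_wavelength(delta_f_hz):
    """Eq. (5): lambda_synth = c / (f2 - f1) = c / delta_f."""
    return c / delta_f_hz

lam_synth = synthetic_wavelength(15e9)
print(f"synthetic wavelength: {lam_synth * 100:.2f} cm")  # ~2 cm

# Unambiguous (coarse) range: one synthetic-wavelength cycle, i.e.
# lambda_synth / 2 for a round-trip displacement measurement.
coarse_range_mm = lam_synth / 2 * 1000
print(f"unambiguous range: {coarse_range_mm:.1f} mm")
```

A smaller Δf would enlarge this range further, at the cost of the calibration and accuracy restrictions discussed in the next section.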
3 Realistic Feasibility

In theory, the heterodyne method can provide a scale of any length, but this is not the case in engineering implementation; several restrictions apply. For instance, a larger synthetic wavelength requires a smaller frequency difference, which requires a critically accurate calibration standard for the laser, so the preciseness of the calibration is restricted by the laser itself. Moreover, the accuracy of the same electrical circuit decreases with a larger frequency difference. A cascade fails when the accuracy of one stage exceeds the half-wavelength ambiguity of the following stage. In a large-aperture system, the measurement should balance the large baseline (30 m) and the high measurement accuracy (micron or sub-micron). There are four conditions for accomplishing a highly precise metrology system. Firstly, the coherence length of the laser should be longer than the displacement of interest to avoid the integer ambiguity. Secondly, the accuracy of the laser should meet the requirement of the measurement accuracy. Then, the combination of synthetic wavelength and phase resolution should fulfill the accuracy requirement. Last but not least, high accuracy of the synthetic wavelength is crucial. It is not easy to accomplish all four conditions in one heterodyne interferometer. A double-wavelength double heterodyne interferometer system is a practical solution to the question above. Two heterodyne frequencies are settled in this system: a wide-frequency system at GHz scale is used to decide the cycle error and accomplish the coarse measurement, while a narrow-frequency system is used for the fine measurement by measuring the phase difference. The system is designed as follows: two heterodyne frequencies, linked to each other, provide both the coarse and the fine measurement.
Similar to the traditional Michelson interferometer system, the measurement is acquired by comparing the relative phase difference between the reference arm and the measurement arm. In order to eliminate environmental disturbance, the system is designed so that the reference light and the measurement light share a common path to the highest possible degree. In the coarse stage, two single-frequency laser sources are chosen, and their frequencies are locked to each other. Refs. [4, 5] provide a solution with the Lightwave 2000 LOLA laser source, which can lock the frequency difference to 15 GHz. Following this stage, AOMs are applied to accomplish a high-accuracy optical frequency shift. In this way, a fast optical frequency shift is applied to accomplish a stable frequency difference, so as to acquire the coarse frequency signal. A fine frequency adjustment follows, using the two laser sources from the former stage. A small frequency difference of around 100 kHz can be acquired, and the fine phase measurement is acquired in this way.
Following the above-mentioned process, a real-time switch between coarse and fine measurement can be accomplished according to the following function:

φ = 2πfL/c  (6)

Consider a system with frequency f0, and assume a reference arm of length L0 and a measurement arm of length L = L0 + 2d, where the distance d is the displacement between the reference mirror and the measurement mirror, doubled by the round trip. The phase difference between the reference signal and the measurement signal can then be indicated as:

φ = φ1 − φ2 = 2πf0L/c − 2πf0L0/c = (4π/c)f0d  (7)

Differentiating both sides of this equation indicates the phase change during measurement as follows:

Δφ = (4π/c)dΔf0 + (4π/c)f0Δd  (8)
Equation (8) clearly indicates the processes of both absolute and relative measurement. The first part of the equation indicates that the phase difference is linear in the frequency difference. In the second part, measuring the phase change caused by the slight displacement can be considered the fine measurement process. This system decomposes the measurement into two parts, which expands the measurement range while high accuracy is guaranteed. The contradiction in the measurement can thus be resolved.
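The coarse/fine decomposition can be illustrated with a minimal numerical sketch: the synthetic-wavelength phase resolves the integer ambiguity, and the optical phase refines the result. The values below (1.3 μm wavelength, 2 cm synthetic wavelength) follow the examples in this chapter; phase noise is ignored, so this is an idealized illustration rather than the flight algorithm:

```python
import math

LAM = 1.3e-6        # optical wavelength, m (assumed)
LAM_SYNTH = 0.02    # synthetic wavelength, m (15 GHz difference)

def measure_phase(d, lam):
    """Round-trip fractional phase, in cycles: frac(2*d/lam)."""
    return (2 * d / lam) % 1.0

def resolve(d_true):
    # Coarse stage: the synthetic-wavelength phase locates d at the
    # lambda_synth scale (unique within one synthetic cycle).
    d_coarse = measure_phase(d_true, LAM_SYNTH) * LAM_SYNTH / 2
    # Fine stage: the optical phase fixes d modulo lambda/2; the coarse
    # value selects the integer number of half-wavelengths N.
    frac = measure_phase(d_true, LAM)
    n = round((d_coarse - frac * LAM / 2) / (LAM / 2))
    return (n + frac) * LAM / 2

d = 3.217e-3  # 3.217 mm, within one synthetic-wavelength cycle
print(f"reconstruction error: {abs(resolve(d) - d):.2e} m")
```

In practice the coarse stage only needs to be accurate to better than a quarter of the optical wavelength for the integer selection to succeed, which is the cascade condition described above.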
4 Adaptive and Robustness Analysis

4.1 Adaptive Analysis
In engineering, the adaptability of this metrology system is decided by whether the measurement range can match the variation of the target. Based on the analysis above, the variation during the process must not exceed the ambiguities of the coarse stages, or, by means of other assistive methods, the absolute distance at the measurement scale must be decidable. With a linked accuracy analysis across the two stages, the aim of acquiring both a large measurement range and high accuracy can be accomplished, and the system implementation can be simplified. The step pulse of gravity unload and the thermal deformation of the material are considered the main factors affecting a large-aperture system. Ground-based simulation can give an indication. However, all the available assessments were acquired by
simulation. There is not enough data to evaluate the relative displacement for the on-orbit system. Moreover, even if there were on-orbit data for an existing system, slight differences in system design, material, and operation environment might cause failure for the required system. Once the measurement range exceeds the synthetic wavelength, the measurement accuracy cannot be guaranteed, which may cause a failure of the measurement. A larger synthetic wavelength means higher reliability in measurement range; however, due to the calculation method, the accuracy would be lower. In order to expand the measurement range without losing accuracy, a mathematical method based on excess fractions is provided in our system. The theory of excess fractions provides a solution for calculating the result when the measurement range is larger than the synthetic wavelength. By analysis of a wavelength series and the related fractions, an expanded range and a heterodyne wavelength can be acquired. By this means, through mathematical calculation, the measurement range can be expanded without adequate a priori knowledge.

4.2 Error Analysis and Control
In the previous analysis, it was assumed that the reference arm L0 remains the same during the measurement process. Actually, vibration that changes the path length exists throughout the measurement. Assuming the common path changed on average by δ between the two measurements, the measured displacement L′ can be represented as follows [6]:

2πf2(L + δ)/c − 2πf1L/c = 2πf2L′/c − 2πf1L′/c

that is,

L′ = L + f2/(f2 − f1) δ = L + (λsynth/λ) δ
where λ is the wavelength of the laser. Assuming λ = 1.3 μm and λsynth = 2 cm, the length change δ is amplified by about 15,000 times. It can be seen that the synthetic wavelength expands the measurement wavelength while the error is amplified by the same factor. Therefore, the measurement time must not be too long, in order to limit this error. Several methods have been provided to shorten the measurement time. One efficient solution is the phase-shifting multi-wavelength dynamic interferometer. Apart from the traditional fringe-counting solution, this method acquires the phase information by phase-diagram analysis, so as to increase the processing speed. Four interferograms of the same frequency are acquired, each with a handling time of several hundred microseconds, which is a great improvement over traditional interferometry. With this method, the measurement is less sensitive to vibration.
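The amplification factor λsynth/λ quoted above can be made concrete with the chapter's values:

```python
lam = 1.3e-6       # optical wavelength, m
lam_synth = 0.02   # synthetic wavelength, m

amplification = lam_synth / lam
print(f"amplification factor: {amplification:.0f}x")  # ~15,000x

# A 10 nm common-path drift between the two measurements thus corrupts
# the result by roughly 0.15 mm, which is why the measurement must be fast.
delta = 10e-9
print(f"resulting error: {amplification * delta * 1e3:.3f} mm")
```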
5 Summary

The requirement on optical component distance metrology rises with the expanding aperture of a large-scale optical system, and this high-precision metrology is important for the subsequent system control and adjustment. Based on analysis and comparison, this chapter shows that the aim is achievable by means of a double heterodyne interferometer system. As for the measuring range, a calculation based on the method of excess fractions can solve the problem. The main error sources were analyzed, and an increased measuring speed can solve the problem of error amplification.

Acknowledgments The work was supported by the National Key Research and Development Program of China (No. 2016YFB0500802).
References

1. Dubovitsky, S., Lay, O.P., Peters, R.D., Liebe, C.C.: Absolute optical metrology: nanometers to kilometers. IELCONF (2005)
2. Wang, Q.Y., et al.: Femtosecond Laser Applications in Advanced Technologies. Beijing, 240 (2005)
3. Zhao, F.: Development of high precision laser heterodyne metrology gauges. In: Advanced Sensor Systems and Applications II
4. Mason, J.E.: Absolute metrology for the kite testbed. Interferometry in Space, SPIE (2003)
5. Zhao, F.: Picometer laser metrology for the space interferometry mission (SIM). IELCONF (2004)
6. Falaggis, K., Towers, D.P., Towers, C.E.: Method of excess fractions with application to absolute distance metrology: analytical solution. Appl. Opt. 52(23) (2013)
Development and On-Orbit Test for Sub-Arcsec Star Camera

Yun-hua Zong1(*), Jing-jing Di1, Yan Wang1, Yu-ning Ren1, Wei-zhi Wang1, Yujie Tang2, and Jian Li2

1
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected] 2 Beijing University of Aeronautics and Astronautics, Beijing, China [email protected]
Abstract. Firstly, some main parameters of the star camera are designed by analysis to achieve high pointing accuracy, and then some important development steps are described. A high-precision assembly and test system for this star camera is constructed in this chapter. Many measurement images are acquired in on-axis and off-axis star camera imaging tests, implemented by moving the image plane of a long-focal-length collimator. The optimal focal plane position of this star camera is determined by a surface fitting method, which considers the distribution character of the imaging points in the central and edge views. The sub-arcsec pointing accuracy is verified by an in-orbit test. The measurement result is finally consistent with the design value.

Keywords: Design · High-precision star camera · In orbit
1 Introduction

The measurement accuracy of the satellite attitude is the main factor affecting the geolocation accuracy, and a high-accuracy measuring equipment is all the more important when no control points are available. At present, the accuracies of some foreign-made star cameras are better than 1", but they are embargoed to our country, and there is no domestic star camera on orbit with a precision of 1". So it is urgent and significant to develop a high-accuracy star camera, which is installed integrally and isothermally with the optical remote sensing camera and works synchronously with it on orbit, in order to obtain high-precision direction measurement and finally improve the accuracy of image location [1–3].
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_25
2 Analysis of Mission and General Design Scheme

2.1 Analysis of Mission
The goal of the star camera is to realize sub-arcsec pointing accuracy. There are plenty of factors influencing the accuracy. The parameters are selected preliminarily by the equation used by Hughes Corporation to evaluate the attitude accuracy of a star sensor. The error in the pitching direction or yaw direction is calculated according to the following Eq. (1) [4]:

σcrossboresight = θFOV σcentr / (Npixel √Nstar)  (1)

where σcrossboresight is the error in the pitching or yaw direction, θFOV is the angle of field of the star camera, σcentr is the error of the centroid of a star point, Npixel is the number of pixels of the photo sensor, and Nstar is the number of stars involved in the calculation. At present, the error of the centroid of a star point σcentr can be taken as 0.1 pixel [5]. The number of stars involved in the calculation Nstar is 16. A large-array (5120 × 3840) CMOS device is chosen as the photo sensor. Once the sensor is decided, according to Eq. (1) the angle of field should be less than 21° to achieve a measurement accuracy better than 0.5" when one piece of CMV20000 is used. Considering the engineering feasibility, the focus is set at 100 mm. The relationship between the angle of field and the focus is given by Eq. (2):

f tan(θFOV/2) = L/2  (2)

where f is the focus of the star camera, θFOV is the angle of field of the star camera, and L is the length of the photosensitive area. It is easy to see that L should not be less than 31.6 mm, so one piece of CMV20000 is satisfactory, the shorter side of its photosensitive area being 32.76 mm. Finally, the focus f is chosen as 100 mm, and the angle of field θFOV is set to 18°.

Analysis of Accuracy
The errors in both the pitching direction and the yaw direction equal 0.316" when θFOV = 18°, Npixel = 5120, and Nstar = 16 are put into Eq. (1). Considering the design margin, it is feasible that the errors in the pitching and yaw directions are less than 0.5".

2.2 General Design Scheme
After the analysis of the mission, and considering the engineering feasibility, the star camera is actualized by the technical methods of a transmission-type lens with a large wide field and a large-array CMOS device. The high accuracy and mini type of the star camera are achieved by an integrated design of optics, mechanics, electronics, and heat. The star camera consists of a lens hood, lens assembly, electronics component, and heat-controlling assembly, as shown in Fig. 1.

Fig. 1 The configuration of the sub-arcsec star camera
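As a numerical cross-check of the parameter analysis in Sect. 2.1, Eqs. (1) and (2) can be evaluated with the values given in the text (a verification sketch only):

```python
import math

theta_fov_deg = 18.0   # angle of field
n_pixel = 5120         # pixels along the long side of the CMV20000
sigma_centr = 0.1      # centroiding error, pixels
n_star = 16            # stars involved in the calculation
f_mm = 100.0           # focal length

# Eq. (1): cross-boresight error, in arcseconds
sigma = theta_fov_deg * 3600 * sigma_centr / (n_pixel * math.sqrt(n_star))
print(f"cross-boresight error: {sigma:.3f} arcsec")  # ~0.316"

# Eq. (2): required photosensitive side length L = 2 f tan(theta_FOV / 2)
L_mm = 2 * f_mm * math.tan(math.radians(theta_fov_deg / 2))
print(f"required photosensitive length: {L_mm:.1f} mm")  # ~31.7 mm
```

Both results match the 0.316" accuracy and the 31.6 mm minimum sensor length stated in the text.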
3 Some Important Steps

3.1 The Focal Plane Precision Assembly
The star camera is designed with a wide field of view, small f-number, long focal length, and short focal depth, so there are high requirements for the focal plane assembly and calibration accuracy. A poor assembly of the focal plane will weaken the energy concentration of the star images and affect the measurement accuracy of the star camera [6]. The focal plane assembly of the star camera should ensure a precise position on the axis and verticality to the axis. The CMOS focal plane will not be identical to the image surface because of the field curvature of a wide-angle, large-relative-aperture camera lens; Fig. 2 shows the influence of the field curvature. So the best focal plane position rests with the energy concentration over the whole field of view, off-axis as well as on-axis. In this chapter, a high-precision assembly and test system for this star camera is constructed, as shown in Fig. 3. Many measurement images are acquired in on-axis and off-axis star camera imaging tests, implemented by moving the image plane of a long-focal-length collimator. A surface fitting method is applied to these images, which considers the distribution character of the imaging points in the central and edge views.
Fig. 2 The influence of field curvature (CMOS focal plane vs. image surface)
Fig. 3 The high-precision assembly and test system
Firstly, the defocusing amount of each view is obtained from the maximum of the 3 × 3 pixel energy concentration. Then a surface is fitted to these defocusing amounts, and finally the sizes of the gaskets are determined from the surface. The focal plane assembly is finished successfully in one step with sufficient machining precision. Thus, the optimal focal plane position of this star camera (focal length f = 100 mm, F = 2.22, field of view θFOV = 18°) is determined.

3.2 Laboratory Calibration
The focal plane calibration of the star camera with high precision is completed through a laboratory method which combines the inner orientation parameters with the exterior orientation parameters of this star camera. The temporal error, low spatial frequency error, and high spatial frequency error are considered in this method [7, 8].
Table 1 The measurement accuracy of single star of the star camera

              X (")    Y (")
Single star   0.4162   0.4362
ELSFE         0.3198   0.3215
EHSFE         0.2340   0.2563
ETE           0.1271   0.1455
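Since the three error components are independent, the single-star values in Table 1 are their root-sum-square, which can be checked directly (X-axis values from the table):

```python
import math

# Table 1 values for the X axis, in arcseconds
e_lsfe, e_hsfe, e_te = 0.3198, 0.2340, 0.1271

# Root-sum-square of the three independent error components
sigma_x = math.sqrt(e_lsfe**2 + e_hsfe**2 + e_te**2)
print(f"single-star accuracy X: {sigma_x:.4f} arcsec")  # ~0.4162

# Averaging over 16 stars reduces this by sqrt(N_star)
print(f"camera-level accuracy X: {sigma_x / math.sqrt(16):.3f} arcsec")
```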
Because these three errors are independent, the measurement accuracy of a single star of the star camera is

σα/β = √(E²TE + E²LSFE + E²HSFE)  (3)
where σα/β is the measurement accuracy of a single star, ETE is the temporal error, ELSFE is the low spatial frequency error, and EHSFE is the high spatial frequency error. And so the measurement accuracy of the star camera is

Ex/y = σα/β / √Nstar  (4)
where E_{x/y} is the measurement accuracy of the star camera and N_star is the number of stars involved in the calculation. Calculation from the test data shows that the single-star measurement accuracy is better than 0.5″; the result, shown in Table 1, meets the design requirement.

3.3 The Vacuum Focal Plane Setting and Testing
Considering the influence of the thermal vacuum environment, the focal plane is pre-set to its vacuum position before satellite launch. The star camera and a static star simulator are fixed on the same platform, which is placed in a vacuum tank while the vacuum position of the focal plane is calibrated. During the test the temperature is strictly controlled at 20 °C to keep the performance of the static star simulator stable. The experimental setup is shown in Fig. 4. The quaternion error of the star camera is obtained from sufficient test data (Table 2), and the correctness of the focal plane vacuum position is verified by testing in the simulated vacuum environment.

3.4 Test on Orbit
The star camera is working successfully in orbit. Many star images captured by the camera have been transmitted from the satellite to the ground, and the characteristic parameters have been calculated from these images. The performance is as follows: the detectable stellar magnitude is 7.5 Mv, and the energy concentration is 80% in a 3 × 3 pixel neighborhood, as shown in Figs. 5 and 6.
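The 3 × 3 energy concentration quoted above is the fraction of the star signal falling in the 3 × 3 pixel neighborhood of the brightest pixel. A minimal sketch on a synthetic Gaussian star spot (the window size and spot width are illustrative):

```python
import numpy as np

# Synthetic star image: Gaussian spot (sigma = 0.7 px) on an 11 x 11 window
y, x = np.mgrid[0:11, 0:11]
spot = np.exp(-((x - 5)**2 + (y - 5)**2) / (2 * 0.7**2))

# Energy concentration: 3x3 sum around the brightest pixel over the total
cy, cx = np.unravel_index(np.argmax(spot), spot.shape)
ec = spot[cy-1:cy+2, cx-1:cx+2].sum() / spot.sum()
```

For this spot width the concentration comfortably exceeds the 80% figure reported for the flight camera.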
Fig. 4 Vacuum experimentation state

Table 2 The result of quaternion error of the star camera
Test item                    Before thermal  High         Low          After thermal
                             experiment      temperature  temperature  experiment
Quaternion error (″)
  Right ascension            0.638           0.414        0.418        0.682
  Declination                0.534           0.343        0.328        0.521
Star center DN               131             189          190          132
Fig. 5 The picture of the star camera
The in-orbit quaternion of this star camera is analyzed by the difference method; the noise equivalent angle is X: 0.1974″, Y: 0.2298″, as shown in Fig. 7. The overall error, obtained by quintic polynomial fitting [9], is X: 0.3681″, Y: 0.3814″, as shown in Fig. 8. The measurement result accords with the design requirement.
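The paper does not spell out the difference method; a common variant treats frame-to-frame differences of the attitude angle as white noise of doubled variance, so the noise equivalent angle is the standard deviation of the differences divided by √2. A sketch on synthetic data (the noise level and slow drift below are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(5000)
# Synthetic attitude angle (arcsec): slow drift plus white noise of sigma = 0.2
angle = 0.5 * np.sin(2 * np.pi * t / 5000) + rng.normal(0.0, 0.2, t.size)

# Differencing suppresses the slow drift; the diffs have variance 2*sigma^2,
# so dividing their standard deviation by sqrt(2) recovers the noise level.
nea = np.std(np.diff(angle)) / np.sqrt(2.0)
```

The estimate comes out close to the injected 0.2″ noise, illustrating why differencing isolates the high-frequency (NEA) component from the slowly varying overall error.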
Fig. 6 The image of 7.5 Mv star on orbit
Fig. 7 The NEA error of the star camera
Fig. 8 The overall error of the star camera
4 Conclusion

The main parameters of the sub-arcsec star camera are designed by analysis and calculation, and the key processes are described. In this chapter, a high-precision assembly and test system for the star camera is constructed, taking into account the deviation of the image surface caused by the field curvature of the wide-angle, small-F lens. A surface fitting method determines the optimal focal plane position of the star camera ( f = 100 mm, F = 2.22, θFOV = 18°) in a short time. The sub-arcsec pointing accuracy is finally verified by the thermal vacuum test and the in-orbit test; the measurement result is consistent with the design value.
References

1. Wang, R.: Chinese photogrammetry satellite without ground control points (2): technical thinking of 1:10000 scale data-transferring photogrammetry satellite. Spacecraft Recov. Remote Sens. 35(2), 1–5 (2014)
2. Wang, R.: Three Line Array CCD Image Photogrammetry Principle. Surveying and Mapping Publishing House, Beijing (2006)
3. Wang, X., Gao, L., et al.: Analysis and evaluation of position error of transmission stereo mapping satellite. J. Geomatics Sci. Technol. 29(6), 427–434 (2012)
4. ECSS-E-ST-60-20C: Star sensors terminology and performance specification. ESA, Noordwijk (2008)
5. Mattla, C., Francesca, F., Francesca, G., et al.: A new rigorous model for high-resolution satellite imagery orientation: application to EROS A and QuickBird. Int. J. Remote Sens. 33(8), 2321–2354 (2012)
6. Zheng, X., Jin, G., Wang, D., et al.: Focal plane assembly and calibrating of CMOS star sensor. Opto Electron. Eng. 38(9), 1–5 (2011)
7. Zheng, X., Zhang, G., Mao, X.: A very high precision errors test method for star sensor. Infrared Laser Eng. 44(5), 1605–1609 (2015) (in Chinese)
8. Xiong, K., Wei, X.G., Zhang, G.J., et al.: High-accuracy star sensor calibration based on intrinsic and extrinsic parameter decoupling. Opt. Eng. 54(3), 034112 (2015)
9. Schmidt, U., Elstner, Ch., Michel, K.: ASTRO 15 star tracker flight experience and further improvements towards the ASTRO APS star tracker. In: AIAA Guidance, Navigation and Control Conference and Exhibit, 18–21 August 2008, Honolulu, Hawaii
Opto-Mechanical Assembly and Analysis of Imaging Spectrometer Quan Zhang1,2(*), Xin Li1, and Xiaobing Zheng1 1
Key Laboratory of Optical Calibration and Characterization, Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei, China 2 University of Science and Technology of China, Hefei, China [email protected]
Abstract. This paper introduces an imaging spectrometer developed by our group, discusses its opto-mechanical design and alignment, and analyzes the results of the spectral tests. The instrument's spectral range is 400–1050 nm, and its opto-mechanical part is mainly composed of the hood, instrument frame, imaging system, spectroscopic system, and detection system. According to the design requirements, an off-axis three-mirror imaging system was adopted, and a single-aspherical-mirror variant was selected through comparative MTF testing. The Offner spectroscopic system was chosen for its image quality and compactness, and the detection system uses an area-array CCD camera. The assembly and testing of each subsystem of the opto-mechanical part were carried out separately, and the complete instrument was finally tested with a mercury-argon lamp. The test results show that the spectral resolution of the imaging spectrometer is less than 7 nm, better than the design index of 10 nm, achieving the expected goal; they also verify the soundness of the opto-mechanical design and alignment. Keywords: ISSOIA 2018 · Opto-mechanical assembly · Imaging spectrometer · Spectral analysis
1 Introduction

The direction, extent, and speed of climate change is one of the major scientific and political issues affecting human survival and development. Satellite observation of climate change has the natural advantages of long-term coverage, large scale, and timeliness. Only satellite observations are accurate enough to establish an uncontested climate record, measure whether and how the climate is changing, and support climate models for long-term trend prediction. At present, however, the accuracy and stability of satellite observations are far from sufficient for measuring long-term changes, and they even produce controversial observations [1]. Among the internationally agreed principles of climate observation, "accuracy and long-term stability of observations through high-precision calibration" is one of the most important technical requirements.
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_26
The calibration must be independent, shared, and
traceable to the International System of Units (SI). The spatial radiation reference, with cross-calibration as the basic means, provides regular radiometric/spectral calibration and observational verification of operational observation satellites using a global calibration field network [2, 3]. To explore and verify high-precision calibration technology for satellite observation, the United States has begun to implement the CLARREO (Climate Absolute Radiance and Refractivity Observatory) program and listed it as one of the top four Decadal Survey missions [1]. CLARREO's task of providing a permanent benchmark for global climate records and establishing corresponding observing standards is of great importance for understanding and predicting the global climate and reducing uncertainty. The University of Colorado is working on high-precision hyperspectral imaging technology in the visible and near-infrared bands and expects to achieve SI-traceable accuracy of 0.2%. In the solar spectral band, CLARREO uses a hyperspectral imager to observe the sun directly, referencing the on-orbit calibration to a stable solar source; in cross-calibration, the hyperspectral imager observes a ground target simultaneously with the operational payload and traces the output of the payload to be calibrated to the SI [4–6]. The spectral range of the CLARREO payloads is 320–2300 nm, and the spectral resolution is 8 nm. At present, the Anhui Institute of Optics and Fine Mechanics of CAS and the Shanghai Institute of Technical Physics of CAS are carrying out related technical research. Our research group takes the research of imaging spectrometer technology in the visible near-infrared band as a breakthrough.
Based on the calibration technology accumulated over the whole process in recent years, an elementary prototype imaging spectrometer with a spectral resolution better than 10 nm will be developed to lay the foundation for a subsequent short-wave infrared imager. The instrument will eventually be used to obtain high-precision, continuously subdivided spectral radiance of the Earth-atmosphere system, providing long-term stable radiation data for climate monitoring research. As a reference payload, it can also cross-calibrate other payloads in orbit to improve the accuracy and long-term consistency of operational payloads.
2 Principle

The high-precision imaging spectrometer project was first developed in the visible near-infrared band to solve key technologies such as self-calibration by solar observation, ratioed Earth observation, and cross-calibration, and to verify them through practical application and comparative evaluation in space observation. The imaging spectrometer works in push-broom mode. The instrument is fixed on a two-axis turntable installed at the rear of the satellite platform; the turntable points the instrument at the observation target, satisfying the requirements for observing the sun, the moon, and the Earth's surface. The working principle of the imaging spectrometer is shown in Fig. 1. The most notable feature of the instrument is its ability to observe the sun directly for self-calibration and to verify the accuracy and stability of the calibration by observing the moon.
Fig. 1 Working principle diagram
Through periodic direct observation of the sun and the moon, the instrument's response attenuation is corrected, and long-term high-precision Earth observation is maintained. Where orbits overlap, cross-calibration with time/space/spectrum/angle matching is performed, so that the on-orbit reference payload ensures high accuracy and traceability of all operational payloads. Self-calibration by direct solar observation relies mainly on the design of the attenuator [7, 8]. When the imaging spectrometer measures surface reflectance, it observes the solar radiation and the ground reflection separately, and the ratio of the two outputs gives the reflectance. The spectrometer periodically scans the sun, and the output of detector pixel i at wavelength λ is

S^solar_{i,λ} = E_solar (t_attenuator T_attenuator A_attenuator) R^sensor_{i,λ}    (1)

The spectrometer's output for ground observation is

S^earth_{i,λ} = L_earth R^sensor_{i,λ}    (2)

The ground reflectance is the ratio of the two outputs:

ρ^earth_λ = π L^earth_{i,λ} / (E^solar_{i,λ} cos θ_solar) = π S^earth_{i,λ} t_attenuator T_attenuator A_attenuator / (S^solar_{i,λ} cos θ_solar)    (3)
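Equations (1)-(3) reduce to a simple ratio once the attenuator terms are known. A sketch, with all numerical inputs as placeholders:

```python
import math

def ground_reflectance(s_earth, s_solar, t_att, T_att, A_att, theta_solar):
    """Eq. (3): reflectance from the ratio of earth-view and solar-view outputs.

    s_earth, s_solar: detector outputs for ground and (attenuated) solar views
    t_att, T_att, A_att: attenuator terms appearing in Eq. (1)
    theta_solar: solar zenith angle in radians
    """
    return (math.pi * s_earth * t_att * T_att * A_att
            / (s_solar * math.cos(theta_solar)))

# Consistency check: with unit attenuation and theta = 0, a Lambertian surface
# of rho = 1 has L_earth = E_solar / pi, i.e. s_earth / s_solar = 1 / pi
rho = ground_reflectance(1.0, math.pi, 1.0, 1.0, 1.0, 0.0)
```

The detector response R^sensor cancels in the ratio, which is the point of the ratioed-observation scheme.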
3 Instrument

3.1 Overview

The imaging spectrometer has a spectral range of 400–1050 nm, a spectral resolution of 10 nm or better, and a full field of view of 11°. The opto-mechanical system is mainly composed of the lens hood, the off-axis three-mirror imaging system, the Offner spectroscopic system, the CCD camera, and the instrument frame. The overall structural composition is shown in Fig. 2, and the design parameters of the instrument are listed in Table 1.

Fig. 2 Overall structural composition diagram

Table 1 Design parameters of spectroscopic system and detection system
Wavelength range: 400–1050 nm
Spectral resolution: 10 nm
Smile and keystone: 1 μm
Convex grating size: 25 mm
Vertex curvature radius of convex grating: 70 mm
CCD pixel size: 26 μm
CCD pixels: 1024 × 256

3.2 Imaging System
The optical system of a high-quality imaging spectrometer must combine a large line field of view, large relative aperture, wide spectral range, small size, and light weight, so an off-axis three-mirror imaging system is adopted [9, 10]. A traditional multi-mirror design can reach diffraction-limited image quality; however, since the aspheric departures of its mirrors are not small, a high-precision compensator would have to be developed. Using a single aspheric mirror reduces the cost and difficulty of development, and comparison of MTF test results led to the final selection of the single-aspheric-mirror system. Figure 3 shows the optical path and a three-dimensional model of this system.

3.3 Spectroscopic System and Detection System
As a concentric optical system, the Offner-type spectroscopic system has the advantages of small inherent aberration, large relative aperture, small spectral line curvature and color distortion, high imaging quality, and compact structure. It is suitable for the spectroscopic system of high-resolution imaging spectrometer [11, 12]. Therefore, the spectroscopic system of the ratio radiometer adopts a convex grating Offner structure.
Fig. 3 Imaging system design. (a) Optical path; (b) 3D simulation
Fig. 4 Spectroscopic system and detection system design. (a) Optical path; (b) 3D simulation

Fig. 5 Physical map. (a) Convex grating; (b) Experimental test
The optical components of the beam splitting module include a grating, a planar mirror, and a concave mirror. The detector uses a cooled CCD camera manufactured by FLI. Figure 4 is a schematic diagram of the optical path and structural composition of the Offner spectroscopic system and detection system. Figure 5 is the physical map of convex grating and experimental test.
Fig. 6 MTF test at 650 nm. (a) MTF curve of optical lens; (b) Star point on the axis
4 Test of Imaging System

4.1 Effective Focal Length

The effective focal length is measured on the optical test system according to the nodal method. The wavelength of the illumination source is 650 nm, and the focal length of the lens is obtained by averaging multiple measurements. The design value of the effective focal length is 84.00 mm, the measured value is 86.63 mm, and the focal length error is about 3%.

4.2 MTF
The MTF of the lens is measured by the Optest optical test system with a 650 nm red illumination source. The measured lens MTF curve is shown in Fig. 6a; the measurement error of the MTF is not more than 0.03. The figure shows that the MTF is greater than 0.7 at the Nyquist frequency, a high transfer function value that meets the design requirements. The on-axis star point obtained during the MTF test is shown in Fig. 6b.

4.3 Rear Working Distance
The rear working distance of the lens is defined as the vertical distance from the mounting surface of the slit member to the focal plane; its accurate measurement is essential for accurate installation of the slit. It is measured on the Optest optical test system, with the collimator producing collimated light into the lens; the test optical path is shown in Fig. 7. First, the microscope is focused on the slit mounting plane and the microscope guide position A is read and recorded. The microscope is then focused on the star-point image at the focal plane and position B is read and recorded. The difference between parameters A and B is the rear working distance of the lens. Three measurement groups were averaged to obtain a rear working distance of 3.25 mm.
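The computation is just the averaged difference of the two guide readings; with hypothetical readings in mm:

```python
# (A, B) microscope guide readings: A on the slit mounting plane, B at focus
pairs = [(12.10, 15.36), (12.08, 15.33), (12.12, 15.36)]
rear_wd = sum(b - a for a, b in pairs) / len(pairs)   # averaged B - A, in mm
```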
Fig. 7 Schematic diagram of the rear working distance measuring device
5 Test of Spectroscopic System

5.1 Optical Component Mounting

The test equipment includes the spectroscopic system, a CCD camera, a laptop with camera software, a mercury-argon calibration source (185–1100 nm), optical fibers, and a debug slit. Both the grating and the planar mirror are fixed to their mounts with black silicone rubber. The left/right and upper/lower sides of the concave mirror are pressed with 0.5 mm metal foil, and the rear is held by a pressure plate with 0.5 mm foil, ensuring that the concave mirror sits firmly and reliably in its holder. The grating assembly, the planar mirror assembly, and the concave mirror assembly are all fixed to the base plate with M3 or M4 screws. The debugging gauge is first fixed on the base plate, and the three assemblies are then mounted in turn against its limit stops. The through holes of each component are oversized by 0.25 mm, which allows fine adjustment during alignment. The CCD camera is connected to the housing through an adapter plate.

5.2 Pinhole Slit Test
After the optical components of the beam splitting module are installed, the CCD camera is connected to the computer and power supply, and the optical fiber is connected to the calibration source and the beam splitting module. The lights are turned off and the entire module is covered with a blackout cloth. The CCD camera takes test photos in continuous shooting mode (exposure time 0.5 s). The fiber head is placed at some distance from the light source interface, with frosted glass added. With the CCD camera attached to the spectroscopic system, the axial distance can be fine-tuned with a brass threaded ring; for the concave mirror assembly, the axial distance is adjusted with the two rear screws, rotated simultaneously for parallel fine adjustment. Figure 8 shows the schematic diagram of the pinhole slit: nine 30 μm clear holes are distributed on the slit, and the spacing between each pair of closely spaced holes is 40–80 μm. In the test image of Fig. 9, the horizontal direction is the spatial dimension and the vertical direction is the spectral dimension; when two closely spaced spots are displayed clearly and separately, the alignment is good.

Fig. 8 Schematic diagram of the pinhole slit

Fig. 9 Pinhole slit test

Fig. 10 Spectral test. (a) Test site map; (b) Spectrogram
6 Overall Spectrum Test

To determine the spectral resolution of the imaging spectrometer, the complete instrument must undergo spectral testing. Because the output wavelength range of the mercury-argon calibration source, about 185–1100 nm, covers the spectrometer's 400–1050 nm range, the mercury-argon lamp is selected as the test source. The spectrometer is first connected to the computer and power supply, and the light exit of the mercury-argon lamp is aligned with the light entrance of the spectrometer. The background signal is measured first, then the instantaneous signal; the instantaneous signal minus the background is the effective signal. Figure 10a shows the test site, and Fig. 10b the resulting spectrogram.
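The Gaussian fits that produce the central pixels and FWHMs of Table 2 were done in Origin; a minimal script-level equivalent uses the three-point log-parabola (Caruana) estimate around each line peak, which is exact for noiseless Gaussian samples. The line position and width below are synthetic:

```python
import math

def gaussian_center_fwhm(y, k):
    """Fit a Gaussian to samples y around peak index k via a log-parabola."""
    lm, l0, lp = math.log(y[k-1]), math.log(y[k]), math.log(y[k+1])
    denom = lm - 2.0 * l0 + lp                  # negative at a peak
    mu = k + (lm - lp) / (2.0 * denom)          # sub-pixel line center
    sigma = math.sqrt(-1.0 / denom)
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma   # 2.3548 * sigma
    return mu, fwhm

# Synthetic spectral line: center 100.3 px, FWHM 2.5 px, sampled on 256 pixels
sig = 2.5 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
line = [math.exp(-(k - 100.3)**2 / (2.0 * sig**2)) for k in range(256)]
peak = max(range(256), key=lambda k: line[k])
mu, fwhm = gaussian_center_fwhm(line, peak)
```

Because the logarithm of a Gaussian is a parabola, the three samples around the peak determine the center and width exactly; with real, noisy data one would fit more points.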
Fig. 11 Spectrum curve (pixel intensity vs. pixel). (a) Imaging spectrometer; (b) mercury argon lamp

Table 2 Analysis of spectral curve
Number  Wavelength/nm  Central pixel  FWHM     Resolution/nm
1       365.02          11.22104      1.73353  4.157
2       404.66          27.60834      2.07165  4.96782
3       435.84          40.46239      1.88507  4.5204
4       546.07          86.47952      1.74874  4.19348
5       696.543        148.65662      2.54362  6.0996
6       750.387        171.1757       2.19069  5.25327
7       763.511        176.60908      1.88623  4.52318
8       772.376        180.20384      2.55906  6.13663
9       811.531        196.39765      2.21765  5.31792
10      826.452        202.6765       2.40229  5.76069
11      842.465        209.06729      2.61606  6.27331
12      912.297        238.44352      2.46424  5.90925
Select a point in the spatial dimension of the spectrum and analyze the whole column at that position with Origin software to obtain the curve in Fig. 11a. Using the characteristic spectrum of the mercury-argon lamp (Fig. 11b), a Gaussian fit of each line in the spectral curve gives the correspondence between wavelength and central pixel listed in Table 2. A cubic polynomial fit of the wavelengths in Table 2 against the central pixels yields

y = 338.32824 + 2.39662x + 1.28451 × 10⁻⁴x² − 3.37275 × 10⁻⁷x³,  x = 0, 1, ..., 255    (4)

Substituting the starting pixel number x = 0 and the cutoff pixel number x = 255 gives a spectral range of 338.3282–952.2266 nm, so the average resolution of a single detector pixel is R = (952.2266 − 338.3282)/256 ≈ 2.3980 nm. Gaussian fitting of each independent wavelength point in Origin also yields its full width at half maximum (FWHM) in pixels; the product of this average single-pixel resolution and the FWHM is the spectral resolution at that wavelength, as shown in Table 2. The table shows that the spectral resolution of the imaging spectrometer is less than 7 nm, better than the design index of 10 nm, achieving the expected target.
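Evaluating the published calibration polynomial of Eq. (4) reproduces the numbers quoted above; a sketch using the coefficients from Eq. (4) and the 546.07 nm FWHM from Table 2:

```python
# Cubic pixel-to-wavelength calibration, coefficients from Eq. (4)
c = (338.32824, 2.39662, 1.28451e-4, -3.37275e-7)

def wavelength(x):
    return c[0] + c[1] * x + c[2] * x**2 + c[3] * x**3

# Average single-pixel resolution over the 256-pixel range, about 2.398 nm/px
r_avg = (wavelength(255) - wavelength(0)) / 256.0

# Spectral resolution at the 546.07 nm line: FWHM in pixels times nm per pixel
res_546 = 1.74874 * r_avg   # about 4.19 nm, cf. Table 2
```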
7 Conclusion

The imaging spectrometer adopts an off-axis three-mirror imaging system, an Offner spectroscopic system, and an area-array CCD camera as the detection system. The assembly and testing of each subsystem of the opto-mechanical part were carried out separately, and the complete instrument was finally tested with a mercury-argon lamp. The test results show that the spectral resolution of the imaging spectrometer is less than 7 nm, better than the design index of 10 nm, achieving the expected goal and verifying the soundness of the opto-mechanical design and alignment.
References

1. Wielicki, B.A., Baize, R.R.: The CLARREO Mission: Earth's Climate Change Observations (2015)
2. Aghedo, A.M., Bowman, K.W., Shindell, D.T., Faluvegi, G.: The impact of orbital sampling, monthly averaging and vertical resolution on climate chemistry model evaluation with satellite observations. Atmos. Chem. Phys. 11, 6493–6514 (2011)
3. Anderson, J.G., Dykema, J.A., Goody, R.M., Hu, H., Kirk-Davido, D.B.: Absolute, spectrally resolved, thermal radiance: a benchmark for climate monitoring from space. J. Quant. Spec. Radiat. Trans. 85, 367–383 (2004)
4. Arechi, A.V., McKee, G.A., Durell, C.N.: RF-excited plasma lamps for use as sources in OGSE integrating spheres. In: Proceedings of SPIE, vol. 8153, San Diego, California (2011)
5. Barnes, W.L., Salomonson, V.V.: MODIS: a global image spectroradiometer for the earth observing system. Crit. Rev. Opt. Sci. Technol. CR47, 285–307 (1993)
6. Berk, A., Bernstein, L.S., Anderson, G.P., Acharya, P.K., Robertson, D.C., Chetwynd, J.H., Adler-Golden, S.M.: MODTRAN cloud and multiple scattering upgrades with application to AVIRIS. Remote Sens. Environ. 65, 367–375 (1998)
7. Smith, P., Drake, G., Espejo, J.: A solar irradiance cross-calibration method enabling climate studies requiring 0.2% radiometric accuracies
8. Mccorkel, J., Thome, K.: Instrumentation and first results of the reflected solar demonstration system for the climate absolute radiance and refractivity observatory. SPIE Opt. Eng. Appl. 8510(1), 85100B (2012)
9. Zhang, T., Liao, Z.: Design of off-axis reflective optical system with 3 mirrors used in imaging spectrometers. Infrared Laser Eng. 42(7), 1863–1865 (2013)
10. Li, H., Xiang, Y.: Optical design of off-axis three-mirror telescope systems of imaging spectrometers. Infrared Laser Eng. 38(3), 500–504 (2009)
11. Rudong, X., Yiqun, J., Weimin, S.: Design of a spectroscopic system for SWIR Offner imaging spectrometer. J. Suzhou Univ. (Nat. Sci.). 27(3), 61–66 (2009)
12. Xuxia, L.: Research and Design of Littrow-Offner Spectroscopic Imaging System, pp. 9–10. Soochow University, Suzhou (2013)
An Optimized Method of Target Tracking Based on Correlation Matching and Kalman Filter Mingdao Xu1(*) and Liang Zhao1 1
Shandong Aerospace Electro-technology Institute, Yantai, Shandong, China [email protected]
Abstract. Correlation matching is a simple method for target image tracking, but global search requires a large amount of computation, and the target is lost when it is occluded. Combining correlation matching with a Kalman filter predicts a likely position in the next frame, which reduces computation time and prevents target loss. In this paper, this combined algorithm is further improved with a self-updating matching template and a self-adapting detection window. The method copes well with disturbances from rotation, scaling, or deformation of the target and with the uncertainty of its motion. Because of these advantages, the algorithm can be used in target tracking systems on spacecraft. Keywords: Target tracking · Correlation matching · Kalman filter · Self-updated template · Self-adapted window
1 Introduction

With the development of space imaging, the demands on the intelligence and reliability of target tracking keep rising. Correlation matching is one of many tracking algorithms; its strengths include no requirement for high image quality, adaptation to complicated targets or backgrounds, and strong anti-interference ability [1–3]. Kalman filter theory provides a state-space method in the time domain, introducing the concepts of state variable and state space [4]. This paper improves these two algorithms by using a self-updating matching template and a self-adapting detection window.
2 Base Algorithms

2.1 Correlation Matching

The correlation problem is to find the best-matching location of a given sub-image in the input image. The correlation of two functions f(x, y) and h(x, y) of size M × N is defined as:
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_27
f(x, y) ∘ h(x, y) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) h(x + m, y + n)    (1)

Correlation matching computes the correlation coefficient between the template and the input image: a correlation of 1 means the best match, −1 the worst match, and 0 no correlation at all. The matching correlation of template T(x, y) and input image I(x, y) is

R_ccoeff(x, y) = Σ_{x′,y′} [T′(x′, y′) · I′(x + x′, y + y′)]    (2)

T′(x′, y′) = T(x′, y′) − (1/(w·h)) Σ_{x″,y″} T(x″, y″)    (3)

I′(x + x′, y + y′) = I(x + x′, y + y′) − (1/(w·h)) Σ_{x″,y″} I(x + x″, y + y″)    (4)

where w and h are the template width and height. The normalized correlation matching method reduces the influence of illumination changes:

R_ccoeff_normed(x, y) = R_ccoeff(x, y) / Z(x, y)    (5)

Z(x, y) = √( Σ_{x′,y′} T′(x′, y′)² · Σ_{x′,y′} I′(x + x′, y + y′)² )    (6)

2.2 Kalman Filter
The Kalman filter is a recursive estimator proposed by Rudolph E. Kalman. Under suitable assumptions, given the history of measurements of a system, it builds the model that maximizes the posterior probability of the state. Because it is recursive, it does not need the whole history of input signals but only the previous estimate, which makes it usable on a computer in real-time tasks. The Kalman filter is composed of two phases: the predicting phase, which generates the next position from past information, and the correcting phase, which modifies the model on the basis of new measurements. The predicting phase is:

x⁻_k = F x_{k−1}    (7)

P⁻_k = F P_{k−1} Fᵀ + Q_{k−1}    (8)

x⁻_k is the predicted state at time k; x_{k−1} is the state at the previous time, obtained from the correcting phase; F is the transfer matrix, which represents the system's transfer characteristic.
P⁻_k is the predicted error covariance at time k; P_{k−1} is the error covariance at the previous time, obtained from the correcting phase; Q_{k−1} is the process noise covariance. The correcting phase is:

K_k = P⁻_k H_kᵀ (H_k P⁻_k H_kᵀ + R_k)⁻¹    (9)

x_k = x⁻_k + K_k (z_k − H_k x⁻_k)    (10)

P_k = (I − K_k H_k) P⁻_k    (11)

K_k is the gain at time k, H_k is the measurement matrix at time k, and R_k is the measurement noise covariance. Equation (10) corrects the state x_k on the basis of the measurement z_k, and this x_k becomes x_{k−1} in the next predicting phase, Eq. (7). This is how the recursive filter works.
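Equations (7)-(11) can be exercised with a small constant-velocity example; the model matrices, noise levels, and measurement sequence below are illustrative, not the ones used in the paper:

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # constant-velocity transfer matrix
H = np.array([[1.0, 0.0]])        # we measure position only
Q = np.eye(2) * 1e-6              # process noise covariance Q_{k-1}
R = np.array([[1.0]])             # measurement noise covariance R_k

x = np.array([[0.0], [0.0]])      # initial state [position, velocity]
P = np.eye(2) * 10.0              # initial error covariance

for k in range(1, 51):
    # predicting phase, Eqs. (7)-(8)
    x = F @ x
    P = F @ P @ F.T + Q
    # correcting phase, Eqs. (9)-(11): noiseless measurement z_k = k
    z = np.array([[float(k)]])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

With the noiseless measurements above, the estimate converges toward the true state of the simulated target (position 50, velocity 1) after 50 steps.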
3 Algorithm Improvement and Implementation

The combined correlation matching and Kalman filter algorithm runs as follows. Obtain the target template from the initial target, initialize the detection window at the current target position, and set the detection window to a fixed size larger than the target. Compute the correlation using Eq. (5); if the maximum correlation exceeds the threshold, its position is the matched position, which is fed to the Kalman filter correcting phase as the measurement. If the correlation is below the threshold, matching has failed, and the predicted position is taken as the matched position. The Kalman filter predicting phase then computes the next predicted position from the transfer matrix, and the detection window is centered on it. The basic algorithm flow is shown in Fig. 1.

Considering the target's rotation, scaling, shape change, and motion, the algorithm needs improvement. Traditional schemes only update the template content, achieving more stable and accurate tracking by updating a local template or by combining the initial template with an updated one; the sizes of the template and the detection window do not change. When the target's size changes due to motion, matching errors can arise because the target no longer fits in the template, or because the template contains too much background, eventually causing tracking failure. We therefore first scale the template up and down into two new templates and compute the correlation with all three; the template with the largest correlation gives the successful match. If the successful template is a scaled one, the target has changed scale. At the same time, the sub-image at the matched position becomes the next template, so the improved algorithm adapts to changes of the target. We scale the template up or down with the following bilinear interpolation:
M. Xu and L. Zhao
Fig. 1 Base algorithm flow (Start → get target template T → init detect window position to object position and detect window size to object size W×H → calculate correlation R → if max R > threshold, measurement z = matched position = position of max R, else matched position = predicted position → Kalman correct → Kalman predict to get the predicted position as the detect window position)
$$f(x,y)=\begin{bmatrix}1-x & x\end{bmatrix}\begin{bmatrix}f(0,0) & f(0,1)\\ f(1,0) & f(1,1)\end{bmatrix}\begin{bmatrix}1-y\\ y\end{bmatrix} \qquad (12)$$
f(x, y) is the scaled template; f(0, 0), f(0, 1), f(1, 0), and f(1, 1) are the four pixels of the original template surrounding f(x, y), and x and y are normalized coordinates. We then correct the Kalman filter's detecting window according to the template scaling, the motion, and the matching accuracy:

$$Y_k=(2-r_k)\,p\,M_k\,(1+\sigma_k) \qquad (13)$$
Yk is the size of the detecting window, i.e., (wyk, hyk); Mk is the size of the current template; and p is the ratio of detecting window to template. rk is the best matching accuracy, which lies between the matching threshold and 1; a small value means the detecting window must be enlarged. σk is the indeterminacy of the motion, given by:
An Optimized Method of Target Tracking Based on Correlation Matching. . .
Fig. 2 Improved algorithm flow (Start → get target template T → init detect window position to object position and detect window size to p times the object size → calculate the correlations of the three templates → if max R > threshold, measurement z = matched position = position of max R and update template, else matched position = predicted position → calculate scaled templates → Kalman correct → Kalman predict to get the predicted position as the detect window position → calculate the indeterminacy of movement → calculate the new detect window size)
$$\sigma_k=\frac{\sqrt{\left[(a_k-E)^2+(a_{k-1}-E)^2+(a_{k-2}-E)^2\right]/3}}{\sqrt{a_k^2+a_{k-1}^2+a_{k-2}^2}} \qquad (14)$$
ak, ak−1, and ak−2 are the accelerations (ax, ay) in the x and y directions at the current moment, the previous moment, and the moment before that, respectively, and E is the mean of these three accelerations. σk lies between 0 and 1; a larger value means more uncertain motion, requiring a larger detecting window. The improved algorithm flow is shown in Fig. 2. In one example implementation, the correlation matching threshold is 0.92, the template scale-up and scale-down ratios are 1.05 and 0.95, the window ratio p is 2, and the noise covariances Qk−1 and Rk are 10⁻⁶. The state, transfer matrix, measurement, and measurement matrix are:
$$x_k=\begin{bmatrix}x\\ y\\ v_x\\ v_y\end{bmatrix},\quad F=\begin{bmatrix}1 & 0 & dt & 0\\ 0 & 1 & 0 & dt\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix},\quad z_k=\begin{bmatrix}x\\ y\end{bmatrix},\quad H=\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{bmatrix}. \qquad (15)$$
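The building blocks of the improved tracker, Eqs. (12)–(14), can be sketched as follows. This is an illustrative NumPy version under simplifying assumptions: the acceleration history is treated as one scalar per moment (the chapter tracks (ax, ay) per direction), and all function names are ours:

```python
import numpy as np

def scale_template(tpl, ratio):
    """Bilinear resampling of the template (Eq. 12): each output pixel
    interpolates the four surrounding input pixels f(0,0)..f(1,1)."""
    h, w = tpl.shape
    nh, nw = max(1, int(round(h * ratio))), max(1, int(round(w * ratio)))
    out = np.empty((nh, nw))
    for i in range(nh):
        for j in range(nw):
            y = i * (h - 1) / max(nh - 1, 1)
            x = j * (w - 1) / max(nw - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = ((1 - dx) * (1 - dy) * tpl[y0, x0]
                         + dx * (1 - dy) * tpl[y0, x1]
                         + (1 - dx) * dy * tpl[y1, x0]
                         + dx * dy * tpl[y1, x1])
    return out

def motion_uncertainty(a_hist):
    """Eq. 14: normalized spread of the last three accelerations."""
    a = np.asarray(a_hist[-3:], float)
    E = a.mean()
    denom = np.sqrt((a ** 2).sum())
    return 0.0 if denom == 0 else np.sqrt(((a - E) ** 2).mean()) / denom

def detect_window_size(M_k, r_k, p=2.0, sigma_k=0.0):
    """Eq. 13: Y_k = (2 - r_k) * p * M_k * (1 + sigma_k)."""
    return (2.0 - r_k) * p * np.asarray(M_k, float) * (1.0 + sigma_k)
```

A steady target (equal accelerations) gives σk = 0 and hence the smallest window; erratic motion or a low matching accuracy rk enlarges it, matching the behavior described in the text.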
4 Result Analysis

Using Visual C++, we implemented the three algorithms: correlation-only tracking, combined correlation and Kalman filter tracking, and the improved tracking. All three programs were run on the same 90 frames of 640 × 480 images on the same computer (Intel Core i7-3770 CPU, 4 GB RAM). The comparison shows that combined tracking is far more efficient than plain correlation tracking, reducing processing time from about 160 ms to about 18 ms per frame. When the object spins, deforms, or is occluded, plain correlation tracking loses the object and cannot recover. Combined tracking tolerates occlusion to some degree, but still loses the object when it spins, scales, or deforms; and when the object's motion is uncertain, the combined tracker's detecting window may fail to contain the object. For example, at frame 15 in Fig. 3, combined tracking loses the object. The result of the improved tracking program is shown in Fig. 3: the object spins at frame 15, is occluded at frame 57, and scales down over all 90 frames, with turns and accelerations along its course. Figure 3 shows that the improved algorithm keeps tracking while the object spins. When the object is occluded in
Fig. 3 Result of the improved algorithm. (a) Frame 15. (b) Frame 30. (c) Frame 45. (d) Frame 57. (e) Frame 62. (f) Frame 75
frame 57, the predicted position remains accurate, and when the object reappears the algorithm continues to track it well; it also tracks well while the object scales throughout the course. Because of its simplicity and efficiency, the improved algorithm can run well on embedded hardware, and parallelizing the program would further increase its speed, so it can be applied in space-to-earth image tracking systems.
Design and Verification of Micro-Vibration Isolator for a CMG

Yongqiang Hu, Zhenwei Feng, Jiang Qin, Fang Yang, and Jian Zhao

DFH Satellite Co. Ltd., Beijing, China
[email protected]
Abstract. Control moment gyros (CMGs) are widely used in attitude control to achieve high agility. However, a CMG produces undesirable micro-vibration disturbances during on-orbit operation and is one of the main micro-vibration sources on a satellite. Passive isolators are generally employed to attenuate the micro-vibration transmitted to optical payloads because of their simplicity, low cost, and high reliability. In this article, the number and dynamic parameters of the isolators are optimized and verified. First, a theoretical model is established for a preliminary selection of stiffness and damping coefficients. Second, the number and material of the isolators are designed. Finally, their function and performance are verified by micro-vibration, vibration, and other tests. After optimization, the standard deviations of displacement on the CMG bracket are reduced significantly. The results show that the efficiency of the optimized isolator exceeds 80%, with good performance at 60 Hz, 100 Hz, and the other main disturbance frequencies of the CMG.

Keywords: Micro-vibration · CMG · Isolator · Parameter optimization
1 Introduction

Micro-vibration has become a critical factor for the image quality of high-resolution satellites [1]. Given current design, manufacturing, and testing capabilities, mounting an isolator at the disturbance source is the most feasible mitigation [2]. There are four kinds of isolation techniques: passive, active, active-passive hybrid, and semi-active. Passive isolators have generally been employed in space applications because their simplicity, low cost, and high reliability suffice to achieve the desired isolation performance [3–7]. The CMG produces undesirable micro-vibration disturbances during on-orbit operation, which is one of the main sources of image-quality degradation for high-resolution observation satellites. In this article, theoretical and FEM models are established for a preliminary selection of stiffness and damping coefficients, and the number and material of the isolators are designed and verified by micro-vibration, vibration, and other tests.
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_28
Y. Hu et al.
2 Model Building and Analysis of the CMG Isolation System

Considering the flexibility of the foundation and the coupling with other on-board structures, the finite element method (FEM) is a good way to take all of these into account. In the FEM model of a typical high-resolution remote sensing satellite, the Hex8 element is used to simulate the micro-vibration isolator. Figure 1 shows the FEM model of the CMG and isolators. Table 1 compares the ProE model with the FEM model; the error is within 3%. Table 2 compares the natural frequencies obtained by test and simulation; the error is within the range acceptable for engineering, so the FEM model represents the real structure well. Table 3 shows the relationship between the elastic modulus E of the isolator material and the acceleration at 100 Hz. As shown in the table, for the X and Z directions,
Fig. 1 The FEM model of the CMG and isolators

Table 1 The comparison of the ProE model and the FEM model

Model | Mass (kg) | Location of CG (mm) | Inertia tensor at CG (kg·m²)
ProE | 19.65 | X 85.4, Y 165.3, Z 0 | [0.198, 0.0321, 0.00185; 0.0321, 0.1475, 0.0012; 0.00185, 0.0012, 0.2387]
FEM | 19.17 | X 87.2, Y 166.8, Z 0.74 | [0.2062, 0.04126, 0.00168; 0.04126, 0.1456, 0.00148; 0.00168, 0.00148, 0.2432]
Table 2 The comparison of natural frequency between test and simulation

Direction | Simulation | Test | Error
X 15.6 Hz 16.1 Hz 3.1%
Y 38.7 Hz 43.2 Hz 10.4%
Z 19.9 Hz 18.5 Hz 7%
Table 3 Relationship between E of the isolator and the acceleration at 100 Hz

Isolator | E of the material | Accel. at 100 Hz, X (m/s²) | Y (m/s²) | Z (m/s²)
1# | 400,000 | 0.2132 | 0.2025 | 0.1366
2# | 500,000 | 0.2607 | 0.2088 | 0.1624
3# | 600,000 | 0.2787 | 0.205 | 0.1831
4# | 700,000 | 0.2952 | 0.1969 | 0.2068
Without isolator | – | 0.3934 | 0.3524 | 0.4054
Fig. 2 Micro-vibration test of the CMG
the smaller the E of the isolator, the better its performance. For the Y direction, the acceleration changes little with E; moreover, the optical axis of the camera is parallel to the Y direction, so this direction's influence on image quality is very small compared with the other two. Therefore, if only isolation efficiency is considered, the E of the isolator should be as small as possible.
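The preliminary stiffness/damping selection can be illustrated with the classical single-degree-of-freedom transmissibility model. This is a generic textbook formula, not the chapter's FEM model, and the numbers in the note below are only indicative:

```python
import math

def transmissibility(f, f_n, zeta):
    """Transmissibility of a linear 1-DOF isolator:
    T = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)), r = f/f_n.
    Isolation (T < 1) begins above r = sqrt(2)."""
    r = f / f_n
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

def isolation_efficiency(f, f_n, zeta):
    """Commonly defined as 1 - T at the disturbance frequency."""
    return 1.0 - transmissibility(f, f_n, zeta)
```

With a mounted natural frequency near 15 Hz (cf. Table 2, X direction) and light damping, this model predicts isolation above 90% at 100 Hz; the measured efficiencies in Sect. 3 are somewhat lower, as expected for a real flexible structure with higher-order modes.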
3 Verification of the CMG Isolation System by Test

To verify the real performance and adaptability of the isolator, micro-vibration and vibration tests must be carried out. Furthermore, the orientation precision must be maintained during and after the whole test campaign.

3.1 Micro-Vibration Test

The test cases were set according to the possible working cases in orbit. Figure 2 shows the micro-vibration test of the CMG.
Table 4 The micro-vibration test result of the CMG in time domain (acceleration in mg)

Isolator | X Max | X Std. dev. | Y Max | Y Std. dev. | Z Max | Z Std. dev.
Without isolator | 60.280 | 24.592 | 72.928 | 24.202 | 91.455 | 32.947
1# | 10.466 | 3.006 | 4.090 | 1.113 | 6.369 | 1.562
2# | 17.634 | 6.070 | 48.366 | 17.924 | 25.504 | 10.372
3# | 27.957 | 10.615 | 7.853 | 2.556 | 25.299 | 8.867
4# | 45.404 | 14.538 | 103.476 | 28.885 | 49.496 | 16.664

Isolation efficiency:
1# | 81.5% | 80.3% | 89.1% | 89.0% | 88.7% | 89.2%
2# | 70.7% | 75.3% | 33.7% | 25.9% | 72.1% | 68.5%
3# | 53.6% | 56.8% | 89.2% | 89.4% | 72.3% | 73.1%
4# | 24.7% | 40.9% | 41.9% | 19.3% | 45.9% | 49.4%
Table 5 The micro-vibration test result of the CMG in frequency domain (acceleration in mg)

Isolator | X 60 Hz | X 100 Hz | Y 60 Hz | Y 100 Hz | Z 60 Hz | Z 100 Hz
Without isolator | 0.232 | 3.546 | 0.469 | 5.746 | 0.407 | 5.994
1# | 0.226 | 0.403 | 0.047 | 0.800 | 0.250 | 0.372
2# | 0.086 | 0.836 | 0.188 | 0.464 | 0.217 | 1.043
3# | 0.143 | 0.980 | 0.021 | 0.884 | 0.112 | 1.020
4# | 0.087 | 2.084 | 0.247 | 1.146 | 0.212 | 4.882

Isolation efficiency:
1# | 30.6% | 90.9% | 61.1% | 81.8% | 47.3% | 93.7%
2# | 62.7% | 76.4% | 59.9% | 91.9% | 46.6% | 82.6%
3# | 38.5% | 72.4% | 95.4% | 84.6% | 72.6% | 83.0%
4# | 62.3% | 41.2% | 47.4% | 80.1% | 47.9% | 18.6%
Table 4 shows the micro-vibration test result of the CMG in the time domain. As shown in the table, isolator 1# gives the best isolation performance: the isolation efficiency of the standard deviation exceeds 80%. The time-domain data can then be transformed to the frequency domain by FFT, as shown in Table 5, where the isolation efficiency of isolator 1# at 100 Hz also exceeds 80%.

3.2 Vibration Test
The vibration test is an important test to verify the functional stability of the isolator. Figure 3 shows the vibration test of the CMG. Table 6 compares the natural frequency and transmission ratio before and after the vibration test; the relative changes are mostly within 5%, which satisfies the demands of satellites. Table 7 compares the orientation precision before and after the vibration test. The relative deviations are mostly within 3′, except in the Y and Z directions of isolator 3#, and isolator 1# achieves the best performance.
Fig. 3 Vibration test of the CMG

Table 6 Comparison of natural frequency and transmission ratio before and after vibration test

Isolator | Direction | Nat. freq. before (Hz) | Nat. freq. after (Hz) | Relative change | Transm. ratio before | Transm. ratio after | Relative change
1# | X | 14.6 | 14.3 | -2.1% | 3.51 | 3.50 | -0.3%
1# | Y | 12.3 | 11.6 | -5.7% | 3.74 | 3.65 | -2.4%
1# | Z | 36.7 | 36.9 | +0.5% | 3.67 | 3.75 | +2.1%
2# | X | 16.6 | 16.1 | -3.0% | 4.46 | 4.32 | -3.1%
2# | Y | 13.6 | 12.9 | -5.2% | 4.04 | 4.38 | +8.4%
2# | Z | 41.5 | 41.5 | 0 | 3.96 | 4.05 | +2.2%
3# | X | 19.7 | 19.0 | -3.6% | 4.88 | 4.76 | -2.4%
3# | Y | 17.6 | 15.5 | -11.9% | 4.84 | 4.97 | +2.7%
3# | Z | 44.0 | 45.2 | +2.7% | 4.45 | 4.47 | +0.5%
4# | X | 25.2 | 18.7 | -25.8% | 4.6 | 4.31 | -6.3%
4# | Y | 16.7 | 14.7 | -12.0% | 4.45 | 4.38 | -1.5%
4# | Z | 46.4 | 44.7 | -3.7% | 3.68 | 3.78 | +2.7%
Table 7 Comparison of orientation precision before and after vibration test

Isolator | X | Y | Z
1# | 0.420 | 0.660 | 0.670
2# | 0.2700 | 0.9180 | 0.8040
3# | 0.7080 | 3.0120 | 3.0120
4# | 0.520 | 1.590 | 1.590
4 Conclusion

In this article, an FEM model was established for a preliminary selection of stiffness and damping coefficients, and the function and performance of the isolators were then verified by micro-vibration and vibration tests. After the optimal selection, the standard deviations of displacement on the CMG bracket were reduced significantly. This method can be used in future designs of micro-vibration isolators.
References
1. Feng, Z., Cui, Y., Yang, X., Qin, J.: Micro-vibration issues in integrated design of high resolution optical remote sensing satellites. In: ISSOIA, Springer Proceedings in Physics, vol. 192, pp. 459–469 (2017)
2. Feng, Z., Hu, Y., Cui, Y., Xinfeng, Y., Jiang, Q.: Optimal design of dynamic characteristic of integrated design satellite structure based on transmission property analysis. In: ISSOIA (2018)
3. Borelli, J., Trowitzsch, J., Brix, M., et al.: The LBT real-time based control software to mitigate and compensate vibrations. Proc. SPIE 7740, 774005 (2010)
4. Zhenhua, Z., Lei, Y., Shiwei, P.: Analysis of micro-vibration environment of high-precision spacecraft. Spacecraft Environ. Eng. 26(2), 528–534 (2009)
5. Liu, C., Jing, X., Daley, S., Li, F.: Recent advances in micro-vibration isolation. Mech. Syst. Signal Process., 55–80 (2015)
6. Ibrahim, R.A.: Recent advances in nonlinear passive vibration isolators. J. Sound Vib. 314, 371–452 (2008)
7. Zhang, Y., Xu, S.J.: High frequency vibration isolation of CMG for satellite. J. Astronaut. 32(8), 1722–1727 (2011)
Development of a High-Frame-Rate CCD for Lightning Mapping Imager on FY-4

T. D. Wang, C. L. Liu, and N. M. Liao

Chongqing Optical-Electrical Research Institute (COERI), Chongqing, China
[email protected]
Abstract. A high-frame-rate charge-coupled device (CCD) capable of operating at 500 frames per second (fps) was designed and prototyped. It has a 400 × 300 split-frame transfer CCD area array with 26 μm × 26 μm photosensitive elements and a 420 ke⁻ full well capacity. The device was originally developed for the Lightning Mapping Imager (LMI) on China's latest and most powerful weather satellite, FengYun-4 (FY-4). Including dark elements, the total area array is 408 (H) × 304 (V), with eight outputs settled on the two sides of the device. The active photosensitive area is about 11.3 mm × 11.3 mm. Owing to its high dynamic range and 2 ms frame period (which nearly equals the duration of a single lightning event), the Lightning Mapping Imager can map total lightning activity over Chinese and surrounding territory continuously, day and night. This will facilitate forecasting severe thunderstorms and typhoon activity, as well as convective weather impacts on aviation safety and efficiency.

Keywords: High frame rate · Charge-coupled device · Lightning Mapping Imager · FengYun-4
1 Introduction The first weather satellite of China’s most powerful second-generation three-axis stabilized, geostationary meteorological satellite series named FengYun-4 (FY-4-01) was successfully launched by a Long March-3B rocket at Xichang Satellite Launch Center in southwest China’s Sichuan province on December 11, 2016. It is an important milestone for the development of weather satellites in China [1]. FY-4-01 carried four advanced instruments, namely, Advanced Geosynchronous Radiation Imager (AGRI), Geosynchronous Interferometric Infrared Sounder (GIIRS), Lightning Mapping Imager (LMI) and Space Environment Package (SEP). The LMI is China’s first lightning detection component on weather satellites, which is designed and prototyped for mapping total lightning activity on the Chinese and surrounding territory continuously day and night. The data and images acquired by LMI will be used in forecasting and warning of convection precipitation, and studying of the Earth’s electric field [2].
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_29
T. D. Wang et al.
A 400 × 300 pixel CCD with a split-frame transfer architecture was developed for the LMI. The CCD was designed and manufactured by the Chongqing Optical-Electrical Research Institute (COERI) at the request of the FY-4-01 manufacturer, the China Academy of Space Technology (CAST).
2 CCD Architecture

The challenge in designing the imager for the LMI is the requirement to achieve a frame rate of more than 500 fps together with a high dynamic range and high global uniformity across the eight outputs. The solution is a split-frame transfer architecture in which eight identical outputs are settled symmetrically on the two sides of the device.

2.1 Active Pixel Configuration and Structure of Image Section
In order to achieve the full-well-capacity goal, a four-phase structure with two layers of polysilicon is used; more than half the area of one pixel stores the generated carriers during the integration time. To meet the dark-current requirement of the application, multipinned phase (MPP) operation is used [3]. MPP is a technology that permits multiphase CCDs to operate fully inverted; the signal carriers generated in the CCD are isolated from surface defects, so a low dark current level can be realized. The pixel configuration of the image section is shown in Fig. 1, where an MPP implant is incorporated by implanting additional boron under the last phase. Charge then collects under the other three phases, with the last phase acting as the barrier.
Fig. 1 Active pixel configuration of image section
Dark pixels with the same configuration as the active pixels are arranged in eight columns on the left and right sides of the device, and two rows of dark pixels are settled in each half of the split frame. These dark pixels can be used as reference pixels for signal processing.

2.2 Structure of Storage Section
Since four amplifiers are settled on each half of the split frame, some space is needed, so the channel stops of the storage section are designed in tilted shapes. At the end of a defined integration time, the CCD shift registers of the photosensitive area transfer the charge packets to the corresponding CCD channels in the storage section. This transfer has to be done as quickly as possible to avoid disturbing the information already held in the imaging-section shift registers with extra generated charge. Theoretical analysis and measurement show that the transfer time is no more than 0.2 ms, which is acceptable.

2.3 Total Device Structure
The total device structure is shown in Fig. 2. The device is a split-frame transfer CCD with eight subsections corresponding to the eight outputs. The CCD integrates for approximately 2 ms per frame and then transfers the charge to the storage regions. Each frame is read out row by row at a 15 MHz data rate from each of the eight outputs. For subsections 1, 3, 6, and 8, the photosensitive and storage elements are both 104 (H) × 152 (V); for subsections 2, 4, 5, and 7, they are both 100 (H) × 152 (V). The details of the elements of the different subsections are shown in Fig. 3.
Fig. 2 Configuration of total device structure (400 × 300 split-frame array divided into subsections 1–8, with outputs OS1–OS8 on the two sides of the device)
Fig. 3 Structures of different sections. (a) Sections 2, 4, 5 and 7. (b) Sections 1 and 8. (c) Sections 3 and 6
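The subsection sizes above, together with the 15 MHz output rate and the 2 ms frame period quoted in Sect. 2.3, can be cross-checked with simple arithmetic (our variable names; row-transfer and other overheads are ignored in this sketch):

```python
# Readout-time sanity check for the split-frame architecture:
# each output reads at most 104 (H) x 152 (V) pixels per frame.
cols_per_output = 104            # widest subsections (1, 3, 6, 8)
rows_per_output = 152            # half of the 304-row array
pixel_rate_hz = 15e6             # pixel rate per output

pixels_per_output = cols_per_output * rows_per_output   # 15,808 pixels
readout_s = pixels_per_output / pixel_rate_hz           # about 1.05 ms
frame_period_s = 1 / 500                                # 2 ms at 500 fps

# The eight subsections together cover the full 408 x 304 array
# (active 400 x 300 plus dark reference pixels):
total_pixels = 4 * 104 * 152 + 4 * 100 * 152            # = 408 * 304
```

The worst-case readout of about 1.05 ms fits comfortably inside the 2 ms frame period, which is consistent with the 500 fps requirement.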
Fig. 4 Circuit schematic of amplifier (labels in the schematic: transistors T1–T5, supplies VDD and VRD, reset R, clock H3, output gate VOG, bias VB, output Vout)
Conventional two-stage MOSFET source-follower amplifiers are used, which can operate at more than 15 MHz. The device uses eight such output amplifier circuits, each capable of a pixel rate of no less than 15 MHz; the electronic schematic is shown in Fig. 4, where multistage amplifiers fulfill the desired pixel rates. All outputs must be used for full-image readout. The total layout is shown in Fig. 5, and the final packaged device is displayed in Fig. 6.
Fig. 5 Total layout of the CCD
Fig. 6 Picture of packaged CCD
Table 1 The measured CCD operating parameters

Device architecture: Frame-transfer CCD
Pixels used for imaging: 300 (V) × 400 (H)
Dark reference: 4 per side
Outputs: 8
Pixel size: 26 μm × 26 μm
Fill factor: 100%
Peak wavelength of QE: (777.4 nm − 97.4 nm) ~ (777.4 nm + 22.6 nm)
Quantum efficiency: 30% @ 777.4 nm
Dynamic range: 10,000:1
CTE: 0.99998 (both in parallel and in serial)
Nonuniformity (for one output): 1%
Readout noise: 100 e⁻
Frame rate: 500 fps
Pixel rate per output: 15 MHz
Linearity (from 10% to 80% of full well): 2%
Nonuniformity (global): 5%
Dark current: 0.02 nA/cm² (@500 fps, 25 ± 3 °C)
Radiation hardness: 50 krad(Si)
MTF: 0.5 (680~800 nm, both in parallel and in serial directions)
Life prediction: 10 years
3 CCD Characterization

The measured CCD operating parameters are displayed in Table 1.
4 Summary

The heart of the LMI instrument on FY-4-01 is a high-frame-rate (500 frames per second), 400 × 300 pixel focal plane, integrated with low-noise electronics and specialized optics to detect the weak signals of total lightning activity over Chinese and surrounding territory continuously, day and night. It can measure lightning activity in real time. The images and data will be employed by meteorologists to forecast severe weather, storm tracks, and convective precipitation, and a long-term data record will also permit scientific studies, including examination of the Earth's electric field. A pair of 400 × 300 pixel split-frame transfer CCDs is arranged in the LMI to reach the required spatial coverage over Chinese and surrounding territory, watching over a ground area of 9000 km (diagonal field of view).
The CCDs in the LMI operate at a frame rate of 500 images per second with peak QE near 777.4 nm, so the LMI can count flashes and measure their intensity for lightning events.
References
1. Yang, J., Zhang, Z., Wei, C., Cuo, Q.: Introducing the new generation of Chinese geostationary weather satellites—FengYun 4 (FY-4). Bull. Am. Meteorol. Soc. 98(8) (2016)
2. Bao, S., Li, Y.F., Tang, S., Liang, H., Zhao, X.: Instantaneous real-time detection technology of GLI on FY-4 geostationary meteorological satellite. Aerospace China 18(2), 23–30 (2017)
3. Janesick, J.R.: Multipinned phase charge-coupled device. US Patent 4,963,952 A (1990)
Evaluation of GF-4 Satellite Multispectral Image Quality Enhancement Based on Super Resolution Enhancement Method

Wei Wu, Wei Liu, and Dianzhong Wang

National Disaster Reduction Center of China, Ministry of Emergency Management, Beijing, China
[email protected]
Beijing Institute of Space Mechanics and Electricity, Beijing, China
Abstract. The GF-4 satellite is the world's first high-resolution geostationary Earth-observation satellite and has broad application prospects in disaster prevention and relief, natural resource surveys, and atmospheric environmental monitoring. To improve target recognition and classification with GF-4 imagery, super-resolution reconstruction is adopted. In this chapter, GF-4 image quality enhancement software based on a contour-consistent image super-resolution method is used to enhance the spatial resolution of GF-4 PMS images. For a given spatial-resolution enhancement capability, the number of image frames needed for multiframe super-resolution is analyzed quantitatively with multiple evaluation indexes. Maximum likelihood classification is applied to land cover using the resolution-enhanced and the original GF-4 images, respectively, and a classification map of a 16 m resolution GF-1 WFV image produced with the same method serves as the reference data for evaluating classification accuracy. Taking Kunshan City, Jiangsu Province as the experimental area, nine high-frequency-imaging GF-4 PMS images are selected for the experiments. The results show that, as the number of GF-4 PMS images participating in super-resolution increases, the radiometric performance of the enhanced image improves overall; the enhanced image gains not only spatial resolution but also a better visual effect. Combined with the characteristics of GF-4 imagery and the geography of Kunshan City, land cover is divided into residential land, water body, and vegetation. The classification results show that the overall accuracy based on the GF-4 super-resolution-enhanced image is higher than that based on a single-frame GF-4 image.

Keywords: GF-4 · Super resolution reconstruction · Land classification · Image quality evaluation
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_30
W. Wu et al.
1 Introduction

Since its successful launch on December 29, 2015, the GF-4 satellite has operated in orbit for three years and accumulated a large amount of observation data over China. GF-4 is the world's first geosynchronous-orbit high-resolution optical imaging satellite, mainly used for comprehensive disaster prevention and mitigation, meteorological warning and forecasting, forest resource surveys, desertification monitoring, environmental governance, and other fields [1]. With its spectral characteristics and fast imaging, a number of application results have been achieved. Reference [2] analyzed water body extraction based on GF-4 data. Taking the Dongting Lake area of Hunan Province as an example, Ref. [3] found that GF-4 imagery extracts wetland-type information well. Reference [4] evaluated the capability of GF-4 imagery for drought disaster monitoring, and Ref. [5] analyzed the ability of GF-4 images to identify flaming, smoke, and burned scars in different combustion states during forest fires. However, the high-frequency continuous imaging advantage of GF-4 has not yet been effectively utilized in practical applications. Reference [6] found through experiments that, after super-resolution reconstruction of GF-4 images, satellite images with higher resolution can be obtained while maintaining clarity and information detail. Using the staring imaging capability of GF-4, this chapter analyzes the effect of the number of continuously imaged frames on image quality after super-resolution reconstruction. Land cover classification based on the super-resolution-enhanced images is carried out and evaluated, providing a technical scheme for fully exploiting the application capability of GF-4.
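The chapter relies on dedicated contour-consistent super-resolution software. As a generic illustration of how several staring frames with known sub-pixel shifts can be combined on a finer grid, here is a minimal shift-and-add sketch; this is not the authors' algorithm, and the function and parameter names are ours:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Minimal multi-frame super-resolution: place each low-resolution
    frame onto a 'factor'-times-finer grid at its known sub-pixel shift,
    then average overlapping samples.
    frames: list of HxW arrays; shifts: list of (dy, dx) in LR pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor))
        ox = int(round(dx * factor))
        ys = (np.arange(h) * factor + oy) % (h * factor)
        xs = (np.arange(w) * factor + ox) % (w * factor)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1          # leave unobserved HR cells at zero
    return acc / cnt
```

With factor 2 and four frames shifted by half a pixel in each direction, every high-resolution cell is observed exactly once, which is the idealized case that motivates using more staring frames.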
2 Method

2.1 GF-4 Satellite and Data Characteristics

The GF-4 satellite is fixed in a geosynchronous orbit at 105.6° E, with a total weight of 5040 kg and a design life of 8 years [7]. It is equipped with a staring camera that allows full-time observation of China and surrounding areas, imaging the visible/near-infrared and medium-wave infrared channels simultaneously. The continuous-imaging time resolution of a single visible/near-infrared spectral band is 5 s, multispectral continuous imaging reaches minute-level resolution, and the medium-wave infrared channel images continuously at 1 s intervals, which is especially suitable for monitoring moving or fast-changing targets [8]. The main specifications of the GF-4 satellite and its payload are shown in Table 1. The PMS data contain one panchromatic band and four visible/near-infrared bands, while the IRS data contain only one medium-wave infrared band. The spatial resolution of the PMS data can reach 50 m, but as the latitude
Table 1 GF-4 satellite main technical indexes [9] (items: total weight (kg), orbit altitude (km), orbit inclination (°), spectral range (μm), spatial resolution (m), swath (km), downlink code rate (Mbit/s), design life (years))

If M > T, where T is a fixed threshold, both images contain abundant detail information; the fused wavelet coefficients are then obtained by a weighted-average operation, so that more detail is preserved and less noise is introduced. If M < T, the information content of the two images differs considerably (one contains much more detail than the other), and it is better to take the high-frequency coefficient with the larger regional variance as the fused wavelet coefficient. The fusion criterion can be described by the following expressions. When M > T,

$$D^{\varepsilon}_{l,F}(x,y)=\begin{cases}K_{\max}\,D^{\varepsilon}_{l,I}(x,y)+K_{\min}\,D^{\varepsilon}_{l,P}(x,y), & V^{\varepsilon}_{l,I}(x,y)\ge V^{\varepsilon}_{l,P}(x,y)\\ K_{\min}\,D^{\varepsilon}_{l,I}(x,y)+K_{\max}\,D^{\varepsilon}_{l,P}(x,y), & V^{\varepsilon}_{l,I}(x,y)< V^{\varepsilon}_{l,P}(x,y)\end{cases} \qquad (2)$$

$$K_{\min}=0.5-0.5\,\frac{1-M}{1-T},\qquad K_{\max}=1-K_{\min} \qquad (3)$$

When M < T,

$$D^{\varepsilon}_{l,F}(x,y)=\begin{cases}D^{\varepsilon}_{l,I}(x,y), & V^{\varepsilon}_{l,I}(x,y)\ge V^{\varepsilon}_{l,P}(x,y)\\ D^{\varepsilon}_{l,P}(x,y), & V^{\varepsilon}_{l,I}(x,y)< V^{\varepsilon}_{l,P}(x,y)\end{cases} \qquad (4)$$
ε represents horizontal, vertical, and diagonal direction.
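The criterion of Eqs. (2)–(4) translates directly into code. A minimal NumPy sketch, where the array and parameter names follow the equations and the inputs are assumed to be precomputed detail coefficients and regional variances:

```python
import numpy as np

def fuse_hf(D_I, D_P, V_I, V_P, M, T):
    """High-frequency wavelet-coefficient fusion (Eqs. 2-4).
    D_I, D_P: detail coefficients of the two source images;
    V_I, V_P: their regional variances; M: matching degree; T: threshold."""
    if M > T:                                   # both images rich in detail:
        K_min = 0.5 - 0.5 * (1 - M) / (1 - T)   # weighted average (Eq. 3)
        K_max = 1 - K_min
        return np.where(V_I >= V_P,
                        K_max * D_I + K_min * D_P,
                        K_min * D_I + K_max * D_P)
    # otherwise pick the coefficient with the larger regional variance (Eq. 4)
    return np.where(V_I >= V_P, D_I, D_P)
```

The function is applied once per decomposition level and per direction ε (horizontal, vertical, diagonal).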
3 PAN Component Calculated Based on the SRF Curve

Ideally, each multispectral band can be well separated, and the radiant energy of the panchromatic band equals the sum of the radiant energy of the multispectral bands under the panchromatic SRF (spectral response function) [18]. The radiant energy that the sensor records results from the interaction between the entrance-pupil radiant energy and the spectral response:

$$L_k=\int L(\lambda)\,R_k(\lambda)\,d\lambda \qquad (5)$$
λ is wavelength, Lk is radiation luminance value of band k, L(λ) is entrance pupil radiation energy and Rk(λ) is spectral response in wavelength λ.
Multiband Image Fusion Based on SRF Curve and Regional Variance Matching Degree
As for the sample camera, we can define several weights to estimate the panchromatic-band radiance; they should satisfy the following formula:

$$\mathrm{PAN} = \omega_B B + \omega_G G + \omega_R R + \omega_Y Y + \omega_{NIR1}\, NIR1 + \omega_{NIR2}\, NIR2 + other \tag{6}$$

where PAN, B, G, R, Y, NIR1, and NIR2 stand for the radiance of the panchromatic, blue, green, red, yellow, near-infrared band 1, and near-infrared band 2 spectra, respectively, and ω_B, ω_G, ω_R, ω_Y, ω_NIR1, and ω_NIR2 are the corresponding weight coefficients. In fact, the panchromatic spectrum covers more components than these six bands, so the term "other" is taken into account. When the entrance-pupil radiance curve changes slowly, L can be treated as a constant, and the radiance L_i of band i satisfies

$$L_i = \int L(\lambda)\, R_i(\lambda)\, d\lambda = L \int R_i(\lambda)\, d\lambda \tag{7}$$

Then the weight coefficient of band i satisfies

$$\omega_i = \frac{L_i}{L_P} = \frac{\int R_i(\lambda)\, d\lambda}{\int R_P(\lambda)\, d\lambda} = \frac{S_i}{S_P} \tag{8}$$

where R_i and R_P are the SRFs of band i and of PAN, and S_i and S_P are the areas under the SRFs; the weight coefficient can thus be considered the ratio of the areas covered by the SRFs. The I_new component unavoidably contains luminance components other than the three spectral bands that participate in the fusion:

$$I_{new} = \mathrm{PAN} - \sum_i \omega_i\, MS_i \tag{9}$$

where MS_i is multispectral band i covered by the panchromatic band, and ω_i is the weight coefficient constructed from the spectral response function. The procedure of the proposed image fusion method combined with the SRF is summarized as follows:

(a) Calculate the weight coefficients ω_i from the spectral response functions;
(b) Construct the I_new component from the low-resolution multispectral data and the weight coefficients;
(c) Apply I_new in the inverse wavelet transform to reconstruct the new I component.
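Steps (a) and (b) can be sketched numerically as below. The trapezoidal rule approximates the SRF integrals of Eq. (8); the rectangular SRFs used here are illustrative stand-ins, not the measured curves of the example camera.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral of samples y over the wavelength grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def srf_weights(wl, srf_bands, srf_pan):
    """Eq. (8): weight of each MS band as the SRF-area ratio S_i / S_P."""
    S_P = trapezoid(srf_pan, wl)
    return [trapezoid(r, wl) / S_P for r in srf_bands]

def i_new(pan, ms_bands, weights):
    """Eq. (9): PAN minus the weighted sum of the covered MS bands."""
    return pan - sum(w * b for w, b in zip(weights, ms_bands))

# illustrative rectangular SRFs on a 400-1000 nm grid
wl = np.linspace(400.0, 1000.0, 601)
srf_pan = np.ones_like(wl)              # PAN responds over the whole range
srf_b = np.where(wl < 700.0, 1.0, 0.0)  # a band covering half the range
w = srf_weights(wl, [srf_b], srf_pan)   # ~ [0.5]: half the PAN area
```

A band whose SRF covers half the panchromatic range thus receives a weight of about 0.5, matching the area-ratio interpretation of Eq. (8).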
4 Experimental Results

According to the proposed algorithm and fusion criterion, we have carried out a large number of experiments. Owing to space limitations, only one set of results is presented. The experiment uses panchromatic and multispectral images from an imaging test of the example camera under study; the SRF curve of the example camera is
Fig. 2 SRF curve of the example camera (relative spectral response vs. wavelength, 300–1100 nm, for the PAN band and bands B1–B6)
Fig. 3 Original images and degraded images. (a) Band 2 image. (b) Band 3 image. (c) Band 4 image. (d) Original multispectral image. (e) Degraded multispectral image. (f) Degraded panchromatic image
shown in Fig. 2. Figure 3a–c shows the original multispectral images of band 2, band 3, and band 4, whose resolution is four times lower than that of the panchromatic image. We combine the original images of band 4, band 3, and band 2 into one color image (Fig. 3d). All four images capture the same scene in the same area at the same time and have been strictly registered. In order to
Fig. 4 Fusion results of different methods. (a) Original multispectral image. (b) Fused image based on IHSWT. (c) Fused image based on this chapter. (d) Detail image of original multispectral image. (e) Detail image of IHSWT fusion result. (f) Detail image of this chapter fusion result
verify the spectral fidelity of this algorithm, we choose the original-resolution multispectral color image as the reference image and apply four-times down-sampling to the two original images to obtain a lower-resolution panchromatic image (Fig. 3f) and a lower-resolution multispectral image (Fig. 3e). In the experiment, the proposed method is compared with the IHSWT fusion method mentioned in Sect. 2 of this chapter. The fusion results are shown in Fig. 4. To evaluate the quality of the fused images quantitatively, six indices are introduced: entropy, average grade, spectrum distortion, standard deviation, correlation coefficient, and space frequency. The objective evaluation results for the above fusion methods are shown in Tables 1 and 2. Entropy is an important index measuring the richness of image information; the greater the value, the more information the fused image contains. The average grade reflects the ability to express the contrast of tiny details; the greater the value, the clearer the fused image. The standard deviation reflects the spread of gray values around the mean; the greater the value, the higher the image contrast. The correlation coefficient reflects the degree of correlation between the fused image and the original multispectral image; the closer the value is to 1, the less spectral information the fused image loses. Spectrum distortion measures the deviation of the fused image from the reference spectrum; the smaller the value, the better the spectral preservation. Space frequency indicates the activity of the image in the spatial domain; the greater the value, the better the fusion effect. The bold numbers in Tables 1 and 2 mark the optimal value of each index. From Fig. 4 and Tables 1 and 2, it follows that:
Table 1 Quantitative results of the original images and of the different fusion methods

  Evaluation index     Channel   Original MS   Original PAN   Degraded MS   Degraded PAN   Fused (IHSWT)   Fused (this chapter)
  Entropy              R         13.1805       14.7333        13.159        14.7019        13.9386         13.8608
                       G         12.6467       –              12.617        –              13.7256         13.628
                       B         12.5738       –              12.5603       –              13.6447         13.5414
  Average grade        R         107.4883      407.6937       77.2207       494.6113       241.2942        229.1352
                       G         79.2633       –              58.1582       –              226.419         214.154
                       B         69.0194       –              51.5548       –              219.8593        207.5582
  Standard deviation   R         2979.7        8445.5         2902.6        8138.1         4805.3          4577.5
                       G         2332.6        –              2271.9        –              4195.2          3964.0
                       B         2154.0        –              2102.3        –              3961.1          3734.5
  Space frequency      R         560.3600      1992.3         400.0432      1315.9         1142.7          1086.6
                       G         429.3648      –              314.4001      –              1069.3          1013.2
                       B         382.6726      –              283.7059      –              1037.5          981.3

The panchromatic images have a single channel; their values are listed in the R rows.
Table 2 Spectral retention quantitative results of the different fusion methods

  Evaluation index          Channel   Fused (IHSWT)   Fused (this chapter)
  Correlation coefficient   R         0.9108          0.9386
                            G         0.8990          0.9307
                            B         0.8682          0.9216
  Spectrum distortion       R         4494.5507       278.2741
                            G         4489.641        240.2803
                            B         4492.0492       234.3115
1. Every index of the fused images is better than that of the original multispectral image, which means that fusion with the panchromatic image can enhance the quality of the multispectral image.
2. The two fusion methods give almost identical results on entropy, average grade, standard deviation, and space frequency, with the method of this chapter slightly worse.
3. The fusion method combined with the SRF is clearly better at preserving spectral information; its correlation coefficient and spectrum distortion are superior to those of the IHSWT method.

Therefore, the proposed method combined with the SRF not only enhances spatial information but also largely preserves spectral information. Both the subjective and objective evaluations show that the new method is effective.
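For reference, three of the evaluation indices can be computed as below (NumPy). The exact normalizations used in this chapter are not stated, so these sketches follow the common textbook definitions.

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def correlation_coefficient(fused, reference):
    """Correlation between a fused band and the reference band
    (close to 1 means little spectral loss)."""
    return float(np.corrcoef(fused.ravel(), reference.ravel())[0, 1])

def spectrum_distortion(fused, reference):
    """Mean absolute gray-level deviation from the reference band
    (smaller means better spectral preservation)."""
    return float(np.mean(np.abs(fused.astype(float) - reference.astype(float))))
```

A perfect fusion result would give a correlation coefficient of 1 and a spectrum distortion of 0 against the reference multispectral image.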
5 Conclusion

In conclusion, a novel fusion method for multispectral and panchromatic images is proposed, combining the SRF with the IHS and wavelet transforms. The fusion criteria for the HF and LF coefficients are described in detail. Experimental results show that the proposed fusion method is more effective than the compared fusion methods. The method is promising for minimizing spectral distortion while improving spatial resolution.
References

1. Kang, Y.: Data Fusion Theory and Applications, 2nd edn. Xidian University Press, Xi'an (2006)
2. Mao, S., Zhao, W.: Comments on multisensor image fusion techniques. J. Beijing Univ. Aeronaut. Astronaut. 28, 512–517 (2002)
3. Xia, M., He, Y., Tang, X.: The present situation and prospect of image fusion. Ship Electron. Eng. 2–13 (2002)
4. Jia, Y., Li, D., Sun, J.: Data fusion techniques for multisources remotely sensed imagery. Remote Sens. Technol. Appl. 15, 41–44 (2000)
5. Xing, K., He, H., Yue, C.: Straight line extraction in remote sensing image based on Hough transform. Spacecraft Recov. Remote Sens. (2015)
6. Yuan, J., Wang, W.: Research on multi-source remote sensing information fusion application. Geo Inform. Sci. 7, 97–103 (2005)
7. Sang, G., Xuan, S., Zhao, B.: Fusion algorithm for infrared and visible light images using object extraction and NSCT. Comput. Eng. Appl. 49, 583–588 (2013)
8. Xue, J., Yu, S., Wang, H.: An image fusion algorithm based on lifting wavelet transform and IHS transform. J. Image Graph. 14, 340–345 (2009)
9. Chang, L., Wang, G., Gao, F., et al.: Implementation of HIS transform and lifting wavelet transform based on FPGA. Video Eng. 36, 264–269 (2012)
10. Li, X., Gao, Y., Yue, S.: A novel scale-aware Pansharpening method. J. Astronaut. 38, 1348–1353 (2017)
11. Li, H., Liu, F., Yang, S., et al.: Remote sensing image fusion based on deep support value learning networks. Chinese J. Comput. 39, 1583–1596 (2016)
12. Duan, C., Wang, X., Wang, S., et al.: Remote image fusion based on dual tree compactly supported Shearlet transform. J. Univ. Electron. Sci. Technol. China 44, 43–49 (2015)
13. Xia, X., Wang, B., Zhang, L.: Remote sensing image fusion based on generalized IHS transformation and compressive sensing. J. Comput. Aided Design Comput. Graph. 25, 1399–1409 (2013)
14. Huang, W., Feng, J., Liu, Y.: An improved image fusion algorithm based on regional features and IHS transform. Electron. Design Eng. 20, 184–187 (2012)
15. Zhu, K., He, X., Yang, B.: A selective remote sensing image fusion method based on the local feature of wavelet coefficients. Remote Sens. Inform. 9–14 (2011)
16. Piella, G.: A general framework for multi-resolution image fusion: from pixels to regions. Inform. Fusion 4, 259–280 (2003)
17. Tao, G., Li, D., Lu, G.: Study on image fusion based on different fusion rules of wavelet transform. Infrared Laser Eng. 32, 173–176 (2003)
18. Dou, W., Sun, H., Chen, Y.: Comparison among remotely sensed image fusion methods based on spectral response function. Spectrosc. Spectr. Anal. 31, 746–752 (2011)
Remote Sensing Image Mixed Noise Denoising with Noise Parameter Estimation

Mutian Wang(*), Sijie Zhao, Xun Cao, Tao Yue, and Xuemei Hu

School of Electronic Science and Engineering, Nanjing University, Nanjing, China
[email protected]
Abstract. Images acquired by optical remote sensing often suffer from noise, due to low-light conditions and long distances from the object. Noise introduces undesirable effects such as artifacts, unrealistic edges, hidden lines and corners, blurred objects, and disturbed background scenes, and it decreases the performance of visual and computational analysis. The great majority of denoising methods focus on a single noise type, mostly assuming additive Gaussian white noise. In practice, however, depending on the sources of noise during the capturing process, real image noise may be composed of noise from several different distributions, e.g., a Poisson distribution from shot noise and dark current, a Gaussian distribution from readout noise, and so on. It is therefore hard to model such complex noise with a single noise model, which limits the practical performance of image denoisers on real captured images. To address this problem, we propose a deep residual network with parameter estimation to model and remove mixed noise from remote sensing data, without any noise-type or noise-level information. In this chapter, a deep convolutional neural network based framework is proposed to implement both noise estimation and noise removal. The framework is composed of a noise estimator module and a denoiser module. The noise estimator module estimates the noise distribution characteristics of the input image, which are used to synthesize the noisy dataset for training the denoiser module. After training, the denoiser module can denoise the input image specifically for the noise characteristics of that image. Besides, we adopt several advanced design choices, such as residual learning, skip connections, and a perceptual loss, to improve the performance of the network.
Our model works both for the given noise types and for others not taken into account in the training process; for the latter case we test the model on noise mixed from different types, i.e., shot noise, Gaussian noise, salt-and-pepper noise, speckle noise, dark current, and quantization noise, covering most of the common noise types in practical situations. The noise distribution estimated by the proposed estimator is compared with the ground truth to validate the performance of the estimator. We also compare the proposed method with existing denoising methods, including both a traditional algorithm (BM3D (Dabov et al., IEEE Trans Image Process 16:2080–2095, 2007)) and a deep neural network (DnCNN (Zhang et al., IEEE Trans Image Process 26:3142–3155, 2017)). Experimental results show that our method outperforms these state-of-the-art methods in both PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). Keywords: Remote sensing image · Mixed noise denoising · Noise level · Convolutional neural network
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_33
1 Introduction

Images acquired by optical remote sensing often suffer from noise during acquisition and transmission. Noise introduces undesirable effects, degrading the performance of later visual and computational analysis, such as image classification [1], target detection [2], and environment monitoring [3]. The denoising task on remote sensing data has therefore arisen as an important and also tough issue, where various kinds of noise with different levels may be introduced into images. In the past decades, many denoising algorithms have been proposed. In [4], wavelet techniques are explored to filter speckle noise in microwave remote sensing images. Song et al. [5] present a weighted double sparsity unidirectional variation (WDSUV) model to reduce stripe noise in remote sensing images. Recently, many neural networks [6–8] have achieved superior performance on image processing tasks as well. However, most of these methods are designed for a certain type of noise, and assumptions on the statistical properties of the noise are usually required to get better performance. In this chapter, we propose a deep residual network with parameter estimation (DRN-PE) to deal with various types of noise in remote sensing images. We use three typical noise types, Poisson noise, additive Gaussian white noise, and salt-and-pepper noise, to represent the noise of the target image. The estimator, a network stacking convolutional layers and densely connected layers, is utilized to estimate the levels of these three noise types. The estimator is connected to a denoiser, where we train another network targeting removal of this specific mixed noise. In this process, we achieve the denoising task without extra information on the noise distribution, let alone explicit statistical characteristics. We notice that the model also works on other types of noise that did not appear during training.
A given combination of noise may share statistical similarities with the mixed noise types seen in the training process, allowing the estimator to choose the best-matching combination of noise. Experiments demonstrate that our model outperforms traditional methods like BM3D as well as deep learning methods like DnCNN. In particular, we make the following contributions:

1. We propose a neural network based estimator to model complex real-world noise with a combination of basic noise types.
2. We propose a deep neural network architecture suitable for remote sensing image denoising tasks, with skip connections, residual learning, and a perceptual loss to boost the accuracy of the output image.
3. The combined estimator-denoiser model succeeds in removing noise from an input image with unknown noise type and noise level.
2 Related Work

In the field of image denoising, numerous approaches have been proposed. Traditional methods such as total variation (TV) [9, 10], the BM3D algorithm [11], and dictionary-learning-based methods [12] have shown good performance on image restoration topics
like image denoising and super-resolution. Yuan et al. [13] extended the TV model to the hyperspectral image denoising problem and also obtained good performance. Although some of the proposed traditional models have achieved big progress, further improvement is still needed in this area. Deep neural network (DNN) based methods have great potential for image restoration tasks and have remained active in recent years. Xie et al. [6] succeeded in removing noise from corrupted images with the stacked sparse denoising autoencoder (SSDA). Other neural-network-based methods, such as the plain multi-layer perceptron (MLP) of Burger et al. [14], also achieve attractive performance in mapping noisy patches onto noise-free ones. They also concluded that with large networks and large training data, neural networks can achieve state-of-the-art image denoising performance. Zhang et al. [7] proposed feed-forward denoising convolutional neural networks (DnCNNs) to tackle denoising tasks, with residual learning and batch normalization utilized to speed up the training process and boost performance. Mao et al. [8] added symmetric skip connections in a very deep convolutional encoder-decoder network and gained higher PSNR and SSIM. For the denoising of remote sensing images, deep learning methods have been less active. Zhang et al. [15] utilized the L1∕2-regularized deconvolution network for the restoration and denoising of RS images, improving on the result of the L1-regularized deconvolution network. However, most previously proposed methods focus on a single type of noise and need a statistical prior on the noise to get better performance. Our proposed model eliminates the need to determine the noise type beforehand, let alone its statistical characteristics.
3 Algorithm

In this section, we present the proposed DRN-PE network in detail. Figure 1 illustrates the whole process.
Fig. 1 Architecture of our proposed model. The estimator is used to simulate noise from a single input image in order to synthesize noisy images, which are then fed to the denoiser as a dataset for end-to-end training. The blue arrows indicate the training process, and the black dotted lines represent actual application using the pre-trained model
3.1 Parameters Estimator
For the network architecture design, we modify the VGG network [16] to make it suitable for parameter estimation. Following the principle in [16], we adopt a 3×3 filter size and replace max-pooling layers with strided convolutional layers, which decreases the feature size while retaining the detail of the input. We set the depth of the convolutional layers to 14 to gain a receptive field wide enough for the input patches. Three densely connected layers follow the convolutional layers to map the hidden features to the variables expected as parameters for the given noise types. Dropout is used to deal with overfitting by preventing units from co-adapting too much [17]. For the given noise types used in training, we choose Poisson noise, additive Gaussian white noise, and salt & pepper noise. For Poisson noise, the image is divided by λ prior to applying the noise; the result is then multiplied by λ. There is actually no need for the network to learn the exact values of the parameters, but rather to use the given noise types to model the unknown input noise. In Sect. 5 we use the estimated parameters to generate fake noisy images and draw histograms of the noise distributions to test the similarity between fake and actual noise.

3.2 Denoiser
Architecture. Our denoiser module uses a deep convolutional neural network to achieve an end-to-end mapping between the input noisy image and the output clean image. As the network goes deeper, it becomes harder for the later hidden layers to retain image details. To address this problem, we use residual learning and skip connections to retain the details of the input to a great extent, and the model achieves faster convergence and a high-quality local optimum as well. In this process, an input noisy image is mapped through a denoising network f with trainable parameters θ. We use a weighted combination of the mean squared error (MSE) and a perceptual loss, where the latter is measured by a pre-trained loss network φ that is kept fixed during training. The loss function is defined as

$$l(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left[\lambda_1 \left\| f(x_i;\theta) - y_i \right\|_F^2 + \lambda_2 \left\| \varphi\bigl(f(x_i;\theta)\bigr) - \varphi(y_i) \right\|_F^2\right] \tag{1}$$
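Equation (1) can be sketched directly in NumPy. Here `phi` is a stand-in callable for the fixed pre-trained loss network, and the weights `lam1` and `lam2` are placeholders, since their values are not reported in the chapter.

```python
import numpy as np

def combined_loss(outputs, targets, phi, lam1=1.0, lam2=0.1):
    """Weighted MSE + perceptual loss of Eq. (1).

    outputs, targets : arrays of shape (N, ...), i.e. f(x_i; theta) and y_i.
    phi              : callable feature extractor (stand-in for the fixed,
                       pre-trained loss network).
    """
    N = len(outputs)
    # pixel-wise (Frobenius-norm) term
    pixel = sum(np.sum((o - t) ** 2) for o, t in zip(outputs, targets))
    # perceptual term in feature space
    percep = sum(np.sum((phi(o) - phi(t)) ** 2) for o, t in zip(outputs, targets))
    return (lam1 * pixel + lam2 * percep) / N
```

With `phi` as the identity and equal weights, the loss reduces to twice the per-sample squared error, which is a convenient sanity check during implementation.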
4 Discussions

1. Residual learning. Since residual learning [18] was originally proposed to solve performance degradation problems, many studies have applied this strategy to various image tasks. Comprehensive empirical evidence shows that residual networks are easier to optimize and can gain accuracy from considerably increased depth [18]. Residual learning is naturally suitable for denoising tasks, and we adopt it to train a residual mapping, learning the noise of the image instead of
Remote Sensing Image Mixed Noise Denoising with Noise Parameter Estimation
329
the clean image. In DnCNN [7] and RED-Net [8], this strategy is also exploited for image restoration tasks, and both gain very good performance.
2. Skip connections. Skip connections [8] are also used to retain detail information of the image. Assuming that detail information is generally lost in a straightforward network, we extract feature maps in the front layers and add them to the corresponding hidden layers to maintain detail information of the input. At the same time, skip connections also help back-propagate the gradient to the bottom layers, which makes training a deep network much easier.
3. Perceptual loss. During the training process, we noticed that even though a pixel-wise loss function tends to yield higher PSNR, the output image appears over-smooth and some detail information is missing. A perceptual loss [19] can address this problem. A pre-trained loss network (VGG-16) is utilized to measure high-level perceptual differences. It has been demonstrated that the use of a perceptual loss often generates visually pleasing results by better reconstructing fine details and edges.
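Point 1 above, in code form: the network is trained to predict the noise map rather than the clean image, and the clean estimate is the input minus that prediction. This one-line sketch uses a `residual_net` callable as a stand-in for the trained CNN.

```python
import numpy as np

def residual_denoise(noisy, residual_net):
    """Residual learning for denoising: the network outputs an estimate
    of the noise R(x), and the clean image is recovered as x - R(x)."""
    return noisy - residual_net(noisy)
```

If the residual network predicted the noise perfectly, the subtraction would recover the clean image exactly; in practice the network only approximates the noise map.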
5 Experiments

For each incoming noisy image, we send it to the pre-trained parameter estimator to obtain the noise description; the output parameters are then transmitted to a specific denoiser, which is trained on a dataset generated according to this specific noise. In practice, both the generated noisy images and the clean ones are sent to the denoiser for training.

5.1 Experimental Settings
We use the WHU-RS19 [20] dataset both for parameter estimation and for denoising. The dataset collects satellite images exported from Google Earth, which provides high-resolution satellite images at up to 0.5 m. It contains 1005 meaningful scenes of high-resolution satellite imagery with 600×600 pixels each. Since there is no standard remote sensing test set for the image restoration task, we simply set aside 38 images from WHU-RS19 for testing, extracting two at random from each class; these extracted images do not appear in the training process. For training the parameter estimator, we set the batch size to 16, each sample being a 128×128 input cut from a randomly chosen image at a randomly chosen location. The output parameters and the input image are transmitted to the denoiser, which is trained with a batch size of 32 on 64×64 cuts.

5.2 Experiment Results
We test the performance of our model on noise composed of the given noise types only (i.e., Poisson, Gaussian, salt & pepper) and on other types of noise that were not taken into consideration during the training process (e.g., speckle noise).
Results on noise with given noise types. We generate four kinds of mixed noise from the given noise types of the training process with random parameters. We use the aforementioned 38 images for testing and average the results for each noise type. The denoising results are shown in Table 1. We use BM3D and DnCNN at the specific Gaussian variance for comparison. Figure 2 gives an example from mixed noise 1. We also report the parameter estimation results in Table 2; note that there is no need to output exact parameters, only to use the mixed noise to describe the distribution of the given noise. Results on noise with untrained noise types. Our model also succeeds on noisy images with untrained noise types. For this part, we consider several noise types that are common in practice in addition to Poisson (shot) noise, Gaussian noise, and salt-and-pepper noise. Speckle noise, often inherent in remote sensing images, can be expressed as I′ = I + I·n, where n follows a uniform distribution; we use the variance of n to denote the speckle noise level. Quantization noise is modeled as a uniform distribution ranging from −0.5 to 0.5, and we use ρ to denote the density of that noise. Dark current is also taken into consideration.
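The noise models above can be combined into a simple synthesizer (NumPy sketch). The parameter names mirror Table 3; the 8-bit value range [0, 255] and the clipping behavior are assumptions, since the chapter does not specify them.

```python
import numpy as np

def add_mixed_noise(img, lam=5.0, sigma=10.0, sp=0.005, speckle_var=0.0,
                    rng=None):
    """Synthesize mixed noise on a clean image with values in [0, 255].

    lam         : Poisson (shot-noise) scale; the image is divided by lam
                  before sampling and multiplied back afterwards.
    sigma       : std of the additive Gaussian white noise.
    sp          : salt-and-pepper density.
    speckle_var : variance of the uniform multiplier n in I' = I + I*n.
    """
    rng = rng or np.random.default_rng()
    out = rng.poisson(np.clip(img, 0, None) / lam).astype(float) * lam  # shot noise
    out += rng.normal(0.0, sigma, img.shape)                            # Gaussian
    if speckle_var > 0.0:
        a = np.sqrt(3.0 * speckle_var)   # uniform on [-a, a] has variance a^2 / 3
        out += out * rng.uniform(-a, a, img.shape)                      # speckle
    mask = rng.random(img.shape)
    out[mask < sp / 2] = 0.0             # pepper
    out[mask > 1.0 - sp / 2] = 255.0     # salt
    return np.clip(out, 0.0, 255.0)
```

Drawing the four parameters at random per image reproduces the "random parameters" setting used to build the mixed-noise test sets.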
Table 1 Average PSNRs and SSIMs for BM3D, DnCNN, and ours; best values in bold

  Noise           BM3D (PSNR/SSIM)   DnCNN (PSNR/SSIM)   Ours (PSNR/SSIM)
  Mixed noise 1   28.87/0.5067       27.57/0.5070        29.86/0.5199
  Mixed noise 2   27.10/0.4675       22.30/0.4562        33.65/0.7181
  Mixed noise 3   26.47/0.4514       22.78/0.4330        31.63/0.6365
  Mixed noise 4   28.90/0.5051       21.71/0.3958        31.42/0.6374

Fig. 2 Qualitative comparison with state-of-the-art denoising methods without delicate noise parameter estimation, on mixed noise 1 (PSNR/SSIM: Clean; Noisy 19.02/0.2073; BM3D 30.03/0.5581; DnCNN 28.47/0.5484; Ours 30.51/0.5581)

Table 2 Average parameter estimation results

  Noise           Poisson (orig./est.)   Gaussian (orig./est.)   Salt & pepper (orig./est.)
  Mixed noise 1   1.19/1.41              24.9/27.0               0.65%/0.87%
  Mixed noise 2   1.21/1.37              7.3/7.6                 1.91%/1.95%
  Mixed noise 3   1.54/1.72              15.5/15.1               2.30%/1.87%
  Mixed noise 4   0.34/0.37              5.2/4.6                 0.99%/1.03%
We design three kinds of mixed noise composed of six common noise types, i.e., shot noise, Gaussian noise, salt-and-pepper noise, speckle noise, dark current, and quantization noise, as described in Table 3. The denoising results are shown in Table 4, and an example from mixed noise 6 is given in Fig. 3. To validate the accuracy of the parameter estimation, we use the estimates to generate a fake noisy image from the clean one and draw the histograms of the noise distributions to show their similarity. The noise distributions of the actual noisy image and the fake image are compared in Fig. 4.
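The PSNR values reported in Tables 1 and 4 follow the usual definition for 8-bit images (peak value 255); a minimal implementation is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    # identical images have zero MSE, i.e. infinite PSNR
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; a per-pixel error of one gray level against a 255 peak corresponds to roughly 48 dB.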
Table 3 Noise type description

  Noise           Original                                     Estimation
                  λ1     σ1    ρ1      σ2      λ2    ρ2        λ      σ      ρ
  Mixed noise 5   5      10    0.3%    0.005   2     30%       1.14   9.3    0.32%
  Mixed noise 6   1      15    0.5%    0.01    3     50%       0.43   13.1   0.49%
  Mixed noise 7   0.2    20    0.7%    0.01    5     100%      0.12   15.3   0.71%

Parameters λ1, σ1, ρ1, σ2, λ2, and ρ2 denote the noise levels of shot noise, Gaussian noise, salt & pepper noise, speckle noise, dark current, and quantization noise, respectively.
Table 4 Average PSNRs and SSIMs for BM3D, DnCNN, and ours; best values are in bold

  Noise           Noisy (PSNR/SSIM)   BM3D (PSNR/SSIM)   DnCNN (PSNR/SSIM)   Ours (PSNR/SSIM)
  Mixed noise 5   25.95/0.4042        31.16/0.5633       29.53/0.6497        34.70/0.7470
  Mixed noise 6   20.76/0.3473        29.35/0.5196       25.18/0.4892        30.42/0.5999
  Mixed noise 7   17.60/0.2481        25.14/0.4352       22.35/0.3847        28.19/0.4938

Fig. 3 A denoising example from mixed noise 6 (PSNR/SSIM: Clean; Noisy 20.50/0.4991; BM3D 28.24/0.7216; DnCNN 24.55/0.6666; Ours 29.19/0.7829)
Fig. 4 Noise distributions of the actual noisy image and the fake image: one pair each from mixed noise 5 and mixed noise 6 and two pairs from mixed noise 7, arranged left to right, top to bottom
6 Conclusion

In this chapter, a deep convolutional neural network is proposed for image denoising to tackle mixed noise of unknown types, without prior statistical properties. Residual learning is adopted to remove noise from the input images, and skip connections are integrated with it to speed up the training process as well as retain more detail information. A perceptual loss is utilized to boost visual performance. Experimental results demonstrate that the proposed network achieves better performance than DnCNN and BM3D. Moreover, we presented the similarity of the noise distributions of the estimator output and the ground truth to confirm the feasibility of our network. In the future, we will investigate other neural networks and focus on other general image restoration tasks. Acknowledgments This work was partially supported by NSFC Project 61671236, the National Science Foundation for Young Scholars of Jiangsu Province, China (Grant No. BK20160634), and the Fundamental Research Funds for the Central Universities, China (Grant No. 0210-14380067).
References

1. Yu, C., Qiu, Q., Zhao, Y., Chen, X.: Satellite image classification using morphological component analysis of texture and cartoon layers. IEEE Geosci. Remote Sens. Lett. 10(5), 1109–1113 (2013)
2. Li, H., Zhang, L.: A hybrid automatic endmember extraction algorithm based on a local window. IEEE Trans. Geosci. Remote Sens. 49(11), 4223–4238 (2011)
3. Martínez-López, J., Carreño, M.F., Palazón-Ferrando, J.A., Martínez-Fernández, J., Esteve, M.A.: Remote sensing of plant communities as a tool for assessing the condition of semiarid Mediterranean saline wetlands in agricultural catchments. Int. J. Appl. Earth Obs. Geoinf. 26, 193–204 (2014)
4. Misra, A., Kartikeyan, B., Garg, S.: Noise Removal Techniques for Microwave Remote Sensing Radar Data and its Evaluation. AIRCC Publishing Corporation (2013)
5. Song, Q., Wang, Y., Yan, X., Gu, H.: Remote sensing images stripe noise removal by double sparse regulation and region separation. Remote Sens. 10(7), 998 (2018)
6. Xie, J., Xu, L., Chen, E.: Image denoising and inpainting with deep neural networks. In: Advances in Neural Information Processing Systems, pp. 350–358 (2012)
7. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
8. Mao, X., Shen, C., Yang, Y.B.: Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In: Advances in Neural Information Processing Systems, pp. 2802–2810 (2016)
9. Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)
10. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1–4), 259–268 (1992)
11. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.O.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
12. Chatterjee, P., Milanfar, P.: Clustering-based denoising with locally learned dictionaries. IEEE Trans. Image Process. 18(7), 1438–1451 (2009)
13. Yuan, Q., Zhang, L., Shen, H.: Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 50(10), 3660–3677 (2012)
14. Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: can plain neural networks compete with BM3D? In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2392–2399 (2012)
15. Zhang, J., Zhong, P., Chen, Y., Li, S.: L1/2-regularized deconvolution network for the representation and restoration of optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 52(5), 2617–2627 (2014)
16. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015)
17. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 770–778 (2016)
19. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, pp. 694–711. Springer, Cham (2016)
20. WHU-RS19 dataset, http://www.escience.cn/people/yangwen/whu-rs19.html
Research on Infrared Image Quality Improvement Based on Ground Compensation

Xiaofei Qu1(*), Xiangyang Lin2, and Mu Li1

1 Command and Control Engineering College, PLA Army Engineering University, Nanjing, China
[email protected]
2 Beijing Institute of Remote Sensing Information, Beijing, China
Abstract. Due to the non-linear response of the infrared payload in the low-temperature region, the stripes that appear in the low-temperature regions of infrared images seriously deteriorate the image quality and the quantitative retrieval accuracy of infrared remote sensing imagery. Existing methods based on the calibration parameters of the infrared remote sensing satellite cannot solve the stripe problem caused by the non-linear response, so ground-based compensation to improve the quality of infrared images is particularly urgent. This paper analyses the generation mechanism of the non-uniformity of the satellite infrared camera payload and, in view of the shortcomings of traditional stripe-removal methods, studies two improved stripe-removal algorithms based on Kalman filtering and moment matching. First, the advantages of Kalman filtering for noise processing and the deficiencies of the algorithm in image processing are introduced, and a method based on Kalman filtering is proposed to improve the radiation quality of infrared images. Second, aiming at the "banding effect" of traditional moment matching, a one-dimensional moving window is proposed to segment the image and protect image detail, and we also propose a method to enhance the radiation quality of moment-matched infrared images based on adaptive-moving-window weighted column-mean compensation. Processing of real data shows that the two improved methods effectively eliminate the stripes in the image, improve the target temperature inversion accuracy in a variety of complex scenes, and enhance the visual effect of the image, which will greatly improve the quantitative usability of infrared remote sensing data. Keywords: Ground compensation · Infrared image · Banding effect · Kalman filter · Moment matching
1 Introduction

Infrared imaging can sense a target's infrared spectral information and can be used for night-time imaging. Because on-satellite calibration requires the detector to be linear, while the detector cannot guarantee linearity near the high- and low-temperature critical values, the detector elements produce a nonlinear response, which means the fringes cannot be eliminated. Especially in the low-temperature areas of infrared images, © Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_34
X. Qu et al.
the phenomenon of deep and shallow stripes is very obvious, which seriously reduces the image quality and quantitative inversion accuracy of infrared remote sensing images [1]. Existing methods based on on-satellite calibration parameters cannot solve this nonlinearity-induced fringe problem. Therefore, it is particularly urgent to improve the quality of infrared images through ground compensation methods. Domestic and foreign methods for non-uniformity correction can be divided into two categories: one is the calibration-based correction algorithm, and the other is the scene-based non-uniformity correction algorithm. At present, a large body of research at home and abroad focuses on scene-based non-uniformity correction techniques, and a variety of scene-based non-uniformity correction methods have been proposed, e.g., the Kalman filtering method, the moment matching method, the wavelet transform method [2], the Fourier transform method, and so on. This article focuses mainly on two of them: the Kalman filtering method and the moment matching method. Current Kalman filtering and moment matching methods have certain advantages in removing the fringe problem. In large engineering applications, the linear model is still used for Kalman filtering radiation correction. However, the detector response characteristics are actually nonlinear; if the linear model is still used, the nonlinearity introduces a larger error into the non-uniformity correction and reduces the correction performance [3]. The reference column mean and reference column standard deviation used in standard moment matching are the averages of the column means and column standard deviations over the entire image [4]. This results in a certain degree of grayscale distortion and weakened texture information, and when the ground object types are complex, it usually produces a "ribbon effect".
In view of the above problems, improved versions of these two techniques are proposed after analyzing the mechanism of the inhomogeneity of infrared camera image data. Qualitative and quantitative analyses of experimental calibration results illustrate the advantages of the two methods and provide a reference for selecting an appropriate non-uniformity correction method according to image characteristics in subsequent research and applications. Unlike in optical imaging, infrared stripe elimination does not affect the overall image information [5], because the stripes themselves are caused by radiation distortion, and stripe elimination is a means of compensating for that distortion.
2 Traditional Methods

2.1 Stripe Removal Algorithm Based on Kalman Filtering [6]
The Kalman filtering method [7] is a signal processing method that solves the optimal linear filtering [8] and prediction problem under the minimum mean square error criterion. Its core idea is to describe the non-uniformity of the focal plane by means of state equations and to use Kalman filtering to recursively track the drift of the noise parameters to achieve non-uniformity correction [9]. The Kalman filtering method assumes that the gain and offset of the detector response are constant within the same time block and vary slowly and randomly between time blocks. Therefore, a Gauss-Markov model is used to describe the state variables to obtain the measurement
equation and the state equation, and uses the estimated gain and offset to achieve non-uniformity correction. Assume that the detector obeys the linear response model. In time block k, the response of the detection unit in the n-th frame image is expressed as

$$Y_k(n) = A_k T_k(n) + B_k + V_k(n), \qquad k = 1, 2, 3, \ldots \tag{1}$$
where $Y_k(n)$ is the gray value of the image, $A_k$ and $B_k$ are the gain and bias of the detection unit, respectively, $T_k(n)$ is the infrared radiation energy received by the detection unit, and $V_k(n)$ is the noise. The values of the gain and bias in time block k + 1 are random disturbances of their values in time block k. The state equation of the system can be expressed as

$$X_{k+1} = \Phi_k X_k + W_k \tag{2}$$
Among them, the state transition matrix $\Phi_k$ and the system state vector $X_{k+1}$ are

$$\Phi_k = \begin{bmatrix} \alpha_k & 0 \\ 0 & \beta_k \end{bmatrix} \tag{3}$$

$$X_{k+1} = \left[A_{k+1},\; B_{k+1}\right]^T \tag{4}$$
where $\alpha \in (0, 1)$ and $\beta \in (0, 1)$ are drift coefficients, determined by the amplitude of the detector response parameter drift, and $W_k = \left[w_k^{(1)}, w_k^{(2)}\right]^T$ is the system's disturbance noise, whose autocorrelation matrix is

$$Q_k = \begin{bmatrix} (1-\alpha^2)\,\sigma_{A0}^2 & 0 \\ 0 & (1-\beta^2)\,\sigma_{B0}^2 \end{bmatrix} \tag{5}$$
Hereby, the observation equation of the system can be obtained from the above formulas:

$$Y_k = H_k X_k + V_k \tag{6}$$

where $V_k$ is the observed-noise vector, $X_k$ is the two-dimensional state vector consisting of the gain $A_k$ and offset $B_k$ of the k-th frame block, $Y_k$ is an l-dimensional observation vector (l being the number of frames included in the k-th block), and $H_k$ is the $l \times 2$ observation matrix:

$$H_k = \begin{bmatrix} T_{k1} & \cdots & T_{kl} \\ 1 & \cdots & 1 \end{bmatrix}^T \tag{7}$$
The Kalman filtering method is mainly based on the following recursive formulas:

$$\bar{X}_k = \Phi_{k-1}\,\hat{X}_{k-1} \tag{8}$$

$$\bar{P}_k = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + Q_{k-1} \tag{9}$$

$$K_k = \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1} \tag{10}$$

$$\hat{X}_k = \bar{X}_k + K_k\left(Y_k - H_k \bar{X}_k\right) \tag{11}$$

$$P_k = \left(I - K_k H_k\right)\bar{P}_k \tag{12}$$
where $\bar{X}_k$ and $\hat{X}_k$ are the prediction and filtering estimates of the current state, respectively; $\bar{P}_k$ and $P_k$ are the covariance matrices of the prediction and filtering estimates, respectively; $K_k$ is the gain matrix of the Kalman filter; $\Phi_{k-1}$ is the state transition matrix; and $R_k$ and $Q_{k-1}$ are the covariance matrices of the observation noise and driving noise, respectively. The Kalman filter is determined by the system state model and observation model, and a linear minimum mean square error estimate is computed for each detector unit. The estimate of the state vector for each detection unit is obtained through this set of recursive formulas, with the initial conditions set to

$$\bar{X}_0 = \hat{X}_0 = E[X_0] = \left[A_0,\; B_0\right]^T \tag{13}$$

$$P_0 = \begin{bmatrix} \sigma_{A0}^2 & 0 \\ 0 & \sigma_{B0}^2 \end{bmatrix} \tag{14}$$

where $X_0$ encapsulates the gain and offset parameters, $A_0$ and $B_0$ represent the mean gain and mean offset of each detector in the initial frame block, and $\sigma_{A0}^2$ and $\sigma_{B0}^2$ represent the variances of the gain and offset of each detector in the initial frame block, respectively. The corrected frame is thus obtained:

$$T_k(i) = \frac{Y_k(i) - B_k}{A_k} \tag{15}$$
where Ak, Bk, and Yk have been described before. The Kalman filter-based non-uniformity correction algorithm can make full use of the information in the previous frame sequence and utilize the current segment data to effectively update the estimated parameters, which will in turn improve the performance.
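As a concrete illustration, the recursion of Eqs. (8)-(12) can be sketched for a single detector element in NumPy. The simulated gain/offset, noise levels, block sizes, and drift coefficients below are illustrative assumptions, not the authors' configuration, and the true radiances are taken as known, as in a calibration-like setting.

```python
import numpy as np

# Sketch of the Kalman-filter non-uniformity correction recursion
# (Eqs. 8-12) for a single detector element; all values are assumed.

def kalman_nuc_step(x, P, Phi, Q, R, H, y):
    """One predict/update cycle for the state x = [gain A, offset B]."""
    # Prediction (Eqs. 8-9)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # Kalman gain (Eq. 10)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update (Eqs. 11-12)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
A_true, B_true = 1.2, 5.0            # true gain/offset (held constant here)
x = np.array([1.0, 0.0])             # initial estimate [A0, B0]
P = np.diag([0.5, 25.0])             # initial covariance, Eq. (14)
Phi = np.diag([0.999, 0.999])        # drift coefficients alpha, beta
Q = np.diag([1e-4, 1e-2])            # driving-noise covariance
for _ in range(200):
    T = rng.uniform(10, 50, size=4)  # radiances seen in one frame block
    H = np.stack([T, np.ones_like(T)], axis=1)   # observation matrix, Eq. (7)
    y = A_true * T + B_true + rng.normal(0, 0.5, size=4)
    x, P = kalman_nuc_step(x, P, Phi, Q, 0.25 * np.eye(4), H, y)

# Corrected radiance for a new reading (Eq. 15)
y_new = A_true * 30.0 + B_true
T_hat = (y_new - x[1]) / x[0]
```

After enough blocks the gain/offset estimates converge and the corrected radiance `T_hat` approaches the true input of 30.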
2.2 Fringe Removal Algorithm Based on Moment Matching [6]
The detector generally uses a linear response function within the spectral response range. The relationship between the gray value OUT of the image output and the input actual infrared radiation value T can be expressed as

$$OUT = A \cdot T + B + \delta \tag{16}$$
where A and B are the gain and offset of the detector response function, respectively, and δ is noise. Because different detectors respond differently to the same incident light intensity, gray-scale deviations may occur in the image. If the output values of the different detectors are normalized, the non-uniformity of infrared images can be effectively reduced. For a sufficiently large image, the probability distribution of the gray values generated by each detector is the same; therefore, the difference between the mean and standard deviation of the real radiation in a local region of the image is small [10, 11]. The standard moment matching method assumes that the features detected by each sensor have a uniform radiation distribution [12] and that one row or one column of the image has statistical consistency [13]. Its central idea is to achieve radiation correction by adjusting the mean and standard deviation of each sensor to reference values [12]. The standard moment matching method is as follows:

$$Y = \frac{\delta_r}{\delta_i}\, T + \mu_r - \mu_i \frac{\delta_r}{\delta_i} \tag{17}$$

where T and Y are the radiance values before and after image pixel correction, $\mu_i$ and $\delta_i$ are the mean and standard deviation of the column to be corrected, and $\mu_r$ and $\delta_r$ are the mean and standard deviation of the reference image (or the panoramic image), respectively.
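The standard moment matching of Eq. (17) amounts to a per-column affine rescaling. A minimal NumPy sketch (the stripe gains/offsets and scene statistics are assumed values for illustration) is:

```python
import numpy as np

# Sketch of standard moment matching (Eq. 17): every column of a striped
# image is rescaled so that its mean/std match image-wide reference values.

def moment_match(img):
    mu_i = img.mean(axis=0)            # per-column mean
    sd_i = img.std(axis=0) + 1e-12     # per-column std (avoid divide-by-zero)
    mu_r = mu_i.mean()                 # reference mean (image-wide average)
    sd_r = sd_i.mean()                 # reference std  (image-wide average)
    return (sd_r / sd_i) * img + (mu_r - mu_i * sd_r / sd_i)

rng = np.random.default_rng(1)
scene = rng.normal(100.0, 10.0, size=(256, 64))
gain = rng.uniform(0.9, 1.1, size=64)      # per-column stripe gain
offset = rng.uniform(-5.0, 5.0, size=64)   # per-column stripe offset
striped = scene * gain + offset
corrected = moment_match(striped)
```

After correction every column has exactly the reference mean and standard deviation, which is what removes the stripes (and also what causes the grayscale distortion discussed above when the scene itself is not statistically uniform).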
3 Improved Algorithm

The two improved methods described below, which are designed for line-array imaging, optimize infrared image radiation correction, eliminate infrared fringes, and maximize infrared image radiation quality.

3.1 Improved Kalman Filtering Algorithm Research Process
Kalman filtering is a signal processing method that solves the optimal linear filtering and prediction problem under the criterion of minimum mean square error. It uses the equation of state to characterize the non-uniformity of the focal plane and uses recursive Kalman filtering to track the drift of the noise parameters to achieve non-uniformity correction. Here, we refer to one readout of each probe element as a frame and divide the image into several blocks, each containing several frames. The response
parameters within the same frame block are kept constant. The response parameters of adjacent blocks are fitted using a Markov random process, and the gain and offset of each probe element are regarded as random variables generated by the Markov process. The Kalman filter is designed to update the estimates of the gain and offset of each probe element, with the gain and offset of the probe serving as state variables. As discussed previously, the infrared camera response model may be linear or nonlinear, while traditional Kalman filtering usually uses a linear response model, which inevitably introduces a larger error and reduces the non-uniformity correction performance [14, 15]. Thus, to eliminate this error and improve the system performance, a new non-linear model is proposed, whose response characteristics can be expressed as

$$Y = \frac{A}{1 + e^{B - Cx}} \tag{18}$$

where A represents the dynamic range of the detector response, C represents the slope of the response characteristic curve, and B is the intercept of the response characteristic curve. By taking the logarithm of the non-linear response, Eq. (18) can be re-expressed as

$$\ln\left(\frac{A}{Y} - 1\right) = B - Cx \tag{19}$$
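A quick numerical check of the linearization in Eqs. (18)-(19): the forward transform makes the response exactly linear in the input x, and the exponential inverse recovers Y. The values of A, B, and C below are illustrative assumptions.

```python
import numpy as np

# Sketch of the linearization trick of Eqs. (18)-(19): the non-linear
# response Y = A / (1 + exp(B - C*x)) becomes linear in x after the
# transform S = ln(A/Y - 1) = B - C*x, can then be processed by a
# linear-model correction, and is mapped back via Y = A / (exp(S) + 1).

A, B, C = 200.0, 3.0, 0.05
x = np.linspace(10.0, 90.0, 50)          # input radiance (assumed range)
Y = A / (1.0 + np.exp(B - C * x))        # non-linear response, Eq. (18)

S = np.log(A / Y - 1.0)                  # forward transform, Eq. (19)
# ... S would be corrected here with the linear-model Kalman filter ...
Y_back = A / (np.exp(S) + 1.0)           # inverse (exponential) operation
```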
Let $S = \ln(A/Y - 1)$. The linear relationship between S and the input x satisfies the linear assumption of the Kalman filtering algorithm described above, so S can be used for the correction processing. The image S is corrected using the linear-model Kalman filter as described above; the corrected result S′ then undergoes the inverse exponential operation, $Y = \frac{A}{e^{S'} + 1}$, and finally the corrected image Y is obtained.

3.2 Improved Moment Matching Algorithm Research Process
The improved moment matching algorithm is a moment matching method based on weighted column-mean compensation with an adaptive moving window. The reference column mean and reference column standard deviation used in standard moment matching are the averages of the column means and column standard deviations over the entire image. This causes a certain degree of grayscale distortion in the processed image, weakens the texture information, and, when the ground object types are complex, usually produces a "band effect". When the image covers complex object types, using the average of all the column statistics as a reference value is not sufficient to adapt to changes in the scene. Here we segment the image with a one-dimensional moving window to reduce the range of a single processing step and protect the image details [16, 17]. Let the window width be W, with starting and ending columns m and n, respectively, and center column number k. The average value of the image column
average and the column standard deviation within the window W are used as the reference values, and the center column is subjected to moment matching. Considering the spatial correlation of the features, the weight of each column is set according to its distance from the central column within the window. The weights are assigned using the Gaussian weight formula

$$w_j = \exp\left[-\frac{(x_j - x_k)^2}{2t^2}\right], \qquad m \le j \le n$$

SVM-Based Cloud Detection Using Combined Texture Features

$$p(i,j \mid d, 45^\circ) = \#\left\{((k,l),(m,n)) \in (L\times L)\times(L\times L) \;\middle|\; \begin{pmatrix}k-m=d,\\ l-n=-d\end{pmatrix} \text{ or } \begin{pmatrix}k-m=-d,\\ l-n=d\end{pmatrix},\; I(k,l)=i,\; I(m,n)=j\right\}$$

$$p(i,j \mid d, 90^\circ) = \#\left\{((k,l),(m,n)) \in (L\times L)\times(L\times L) \;\middle|\; |k-m|=d,\; l-n=0,\; I(k,l)=i,\; I(m,n)=j\right\}$$

$$p(i,j \mid d, 135^\circ) = \#\left\{((k,l),(m,n)) \in (L\times L)\times(L\times L) \;\middle|\; \begin{pmatrix}k-m=d,\\ l-n=d\end{pmatrix} \text{ or } \begin{pmatrix}k-m=-d,\\ l-n=-d\end{pmatrix},\; I(k,l)=i,\; I(m,n)=j\right\} \tag{4}$$

The final GLCM is obtained by normalizing the sum of the matrices corresponding to each θ. Following [22, 23], we use ten textural features in the proposed method. Let p(i, j) denote
the (i, j)-th entry of the GLCM. The means and standard deviations for the columns and rows of the GLCM are

$$\mu_x = \sum_i \sum_j i\, p(i,j), \qquad \mu_y = \sum_i \sum_j j\, p(i,j)$$

$$\sigma_x^2 = \sum_i \sum_j (i - \mu_x)^2\, p(i,j), \qquad \sigma_y^2 = \sum_i \sum_j (j - \mu_y)^2\, p(i,j) \tag{5}$$
Other auxiliary quantities are

$$p_x(i) = \sum_j p(i,j), \qquad p_y(j) = \sum_i p(i,j)$$

$$HXY = -\sum_i \sum_j p(i,j)\,\log p(i,j)$$

$$HXY1 = -\sum_i \sum_j p(i,j)\,\log\left(p_x(i)\, p_y(j)\right)$$

$$HXY2 = -\sum_i \sum_j p_x(i)\, p_y(j)\,\log\left(p_x(i)\, p_y(j)\right) \tag{6}$$
The ten features are as follows:

1. Entropy: $f_1 = -\sum_i \sum_j p(i,j)\,\log p(i,j)$
2. Energy: $f_2 = \sum_i \sum_j p^2(i,j)$
3. Homogeneity: $f_3 = \sum_i \sum_j \frac{1}{1 + (i-j)^2}\, p(i,j)$
4. Contrast: $f_4 = \sum_i \sum_j |i-j|\, p(i,j)$
5. Cluster shade: $f_5 = \sum_i \sum_j \left(i + j - \mu_x - \mu_y\right)^3 p(i,j)$
6. Cluster prominence: $f_6 = \sum_i \sum_j \left(i + j - \mu_x - \mu_y\right)^4 p(i,j)$
7. Correlation: $f_7 = \frac{\sum_i \sum_j (i - \mu_x)(j - \mu_y)\, p(i,j)}{\sigma_x \sigma_y}$
8. Information measure of correlation 1: $f_8 = \frac{HXY - HXY1}{\max\{HX, HY\}}$
9. Information measure of correlation 2: $f_9 = \left(1 - \exp\left(-2.0\,(HXY2 - HXY)\right)\right)^{1/2}$
10. Maximum probability: $f_{10} = \max_{i,j} p(i,j)$
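As a sketch, a normalized symmetric GLCM for one displacement and a few of the listed features (entropy $f_1$, energy $f_2$, contrast $f_4$) can be computed in plain NumPy. This is a loop-based illustration under assumed parameters, not an optimized or authoritative implementation.

```python
import numpy as np

# Sketch: build a normalized, symmetric co-occurrence matrix for one
# displacement (dr, dc) and compute a few GLCM features.

def glcm(img, L, dr, dc):
    """Symmetric GLCM of an L-level image for displacement (dr, dc)."""
    P = np.zeros((L, L))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
                P[img[r2, c2], img[r, c]] += 1   # count both directions
    return P / P.sum()                           # normalize to probabilities

rng = np.random.default_rng(2)
patch = rng.integers(0, 8, size=(64, 64))        # quantized to L = 8 levels
p = glcm(patch, L=8, dr=0, dc=1)                 # d = 1, theta = 0 degrees

nz = p[p > 0]
f1 = -np.sum(nz * np.log(nz))                    # entropy
f2 = np.sum(p ** 2)                              # energy
i, j = np.indices(p.shape)
f4 = np.sum(np.abs(i - j) * p)                   # contrast
```

In practice the matrices for all four θ values would be summed and normalized together, as the text describes, before the features are extracted.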
Rotation-invariant LBP (RILBP) was proposed by Ojala et al. [24]. It is defined as

$$LBP(x_c, y_c) = \sum_{p=0}^{P-1} 2^p\, s\!\left(i_p - i_c\right) \tag{7}$$

where $(x_c, y_c)$ is the centre pixel, $i_c$ and $i_p$ denote the grey levels of the centre pixel and its neighbours, and $s(\cdot)$ is the sign function. R and P are the radius of the neighbour region and the number of neighbour pixels, respectively. Commonly used settings are (R = 1, P = 8), (R = 2, P = 16), and (R = 3, P = 24).
X. Sun et al.
Uniform patterns are introduced to reduce the dimension of the RILBP. The RIULBP is defined as

$$LBP_{P,R}^{riu2} = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s\!\left(i_p - i_c\right), & U(LBP_{P,R}) \le 2 \\[2mm] P + 1, & \text{otherwise} \end{cases} \tag{8}$$

where

$$U(LBP_{P,R}) = \left|s(i_{P-1} - i_c) - s(i_0 - i_c)\right| + \sum_{p=1}^{P-1} \left|s(i_p - i_c) - s(i_{p-1} - i_c)\right| \tag{9}$$
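The RIULBP code of Eqs. (8)-(9) for a single pixel with (R = 1, P = 8) can be sketched as follows. The 3 × 3 neighbourhood is used as an approximation of the circular sampling, and the two test patterns are illustrative assumptions.

```python
import numpy as np

# Sketch of the rotation-invariant uniform LBP code (Eqs. 8-9) for the
# centre pixel of a 3x3 window, with R = 1, P = 8.

def riulbp_code(window):
    """window: 3x3 array; returns the LBP^riu2 code of its centre pixel."""
    ic = window[1, 1]
    # the 8 neighbours in circular order
    nbr = np.array([window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                    window[2, 2], window[2, 1], window[2, 0], window[1, 0]])
    s = (nbr >= ic).astype(int)                      # sign function s()
    # U: number of 0/1 transitions around the circle (Eq. 9)
    U = np.abs(s[-1] - s[0]) + np.sum(np.abs(np.diff(s)))
    return int(s.sum()) if U <= 2 else len(s) + 1    # Eq. (8)

flat = np.full((3, 3), 7)          # uniform region: all neighbours >= centre
edge = np.array([[0, 0, 9],
                 [0, 5, 9],
                 [0, 0, 9]])       # vertical edge: a uniform pattern
```

A flat patch maps to code P = 8, and the edge maps to the number of "bright" neighbours (3 here), unchanged when the pattern is rotated — which is the rotation invariance the feature relies on.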
We use the combination of GLCM features and RIULBP to capture the characteristics of cloud texture.

2.3 Feature Discrimination

The proposed method adopts an SVM for feature discrimination. The linear SVM is defined as

$$y = g\!\left(w^T x + b\right) \tag{10}$$

where $y \in \{-1, 1\}$ is the class label, x denotes the feature vector, w is the weight vector, and $g(\cdot)$ is the threshold function. A kernel function enables the SVM to handle non-linear classification cases. We use the RBF kernel in this chapter:

$$K(x, x_i) = \exp\left(-\frac{|x - x_i|^2}{\sigma^2}\right) \tag{11}$$
Relaxation variables and penalty coefficients are introduced to penalize outliers and noise. The training process is realized by maximizing the margin between two classes.
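The RBF kernel of Eq. (11), as it would be evaluated into a Gram matrix for a kernel SVM trainer, can be sketched in NumPy; the σ value and sample vectors are assumptions for illustration.

```python
import numpy as np

# Sketch of the RBF kernel of Eq. (11) evaluated between feature vectors.

def rbf_kernel(X, Z, sigma):
    """Gram matrix K[a, b] = exp(-||X[a] - Z[b]||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma ** 2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X, X, sigma=1.0)
```

The diagonal is always 1 (a point compared with itself), and entries decay with squared distance, which is what lets the margin-maximizing training separate classes that are not linearly separable in the original feature space.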
3 Experimental Results

3.1 Experimental Set-up
To validate the proposed method's performance qualitatively and quantitatively, we built a dataset of real satellite images collected from the internet, and experiments were conducted on this dataset. All the real satellite images in the dataset have a size of 15541 × 15781 pixels. As mentioned above, the proposed method takes sub-images of size 64 × 64 as the units to be processed. We collected sub-images from the real satellite images; the resulting database contains nearly 20k sub-images. Figure 2 lists eight samples
Fig. 2 Sample sub-images from the collected database
Fig. 3 Detection accuracies of GLCM features with different parameter settings
picked from the collected database. The first four are cloud samples (positive samples) and the last four are ground samples (negative samples). This chapter explores the influence of different parameter settings and different combinations of features on the proposed method's performance. The penalty coefficient and the radius of the RBF kernel are obtained via cross-validation before model training. We divided the database into five subsets of equal size; four subsets are taken as the training set and the remaining one as the test set. The reported detection accuracy is the average of the results corresponding to each test set.

3.2 Results
We first perform experiments using GLCM features with different parameter settings. Three parameters need to be set in GLCM feature extraction: θ, L, and d. Following [22, 23], θ is set to {0°, 45°, 90°, 135°}. L is mainly determined by the sub-image size and the characteristics of the texture, and d is closely related to the scale of the textural structures. We test different combinations of L ∈ {8, 16, 64} and d ∈ {1, 2, 3} on the database. The experimental results are shown in Fig. 3. The combination (L, d) = (64, 1) achieved the highest detection accuracy, 92.16%, on the collected database. RIULBP can capture the characteristics of texture at different scales by varying the radius of the neighbour region. Ojala et al. [24] give three commonly used parameter combinations: (R = 1, P = 8), (R = 2, P = 16), and (R = 3, P = 24). As with the GLCM features, we test the RIULBP feature with different parameter settings on the database. The detection accuracies are presented in Table 1; (R = 1, P = 8) achieves the best performance in the experiments.
Table 1 Detection accuracies of RIULBP with different parameter settings
(R, P)                    (1, 8)    (2, 16)    (3, 24)
Detection accuracy (%)    98.25     98.09      97.70
Table 2 Comparison of detection accuracy

Feature                   RIULBP (R = 1, P = 8)    GLCM (L = 64, d = 1)    RIULBP + GLCM
Detection accuracy (%)    98.25                    92.16                   98.92
Fig. 4 Cloud detection on real image. (a) Original sub-image. (b) Cloud detection
As mentioned in Sect. 1, different features have their own advantages in capturing the properties of texture. We combine GLCM features and RIULBP for cloud detection, adopting the optimal parameter setting (R = 1, P = 8, L = 64, d = 1) according to the above experimental results (Table 2). The experimental results indicate that RIULBP performs better than GLCM on the database collected in this chapter, and the combined features achieve the highest accuracy in the experiments. We use RIULBP and GLCM with the optimal parameter setting to detect cloud in real images. Figure 4 presents the detection result for a sub-image of a real large image. The sub-image is of size 320 × 256 and is divided into 20 patches; a red square indicates a patch detected as cloud. The result shows that the proposed method detects the cloud reliably.
4 Conclusion

This chapter proposed an SVM-based method for cloud detection in satellite images. The GLCM and RIULBP features are calculated on sub-images and combined as a new feature for training the SVM classifier, which then identifies each sub-image as cloud or not. A dataset for cloud detection based on real satellite images was built in this
chapter. Three methods were validated on this dataset. The results showed that our proposed method outperforms the methods using only GLCM or only RIULBP. The proposed method cannot yet delineate the cloud region accurately at the pixel level; in future work, we will investigate pixel-level features and accurate segmentation methods for refining the detection result. Acknowledgement This work is supported by Hunan Provincial Natural Science Foundation of China (2019JJ50732). We would like to thank all reviewers for their helpful insights and suggestions.
References

1. Tian, B., Shaikh, A., Azimi-Sadjadi, M.R., Vonder Haar, T.H., Reinke, C.L.: A study of cloud classification with neural networks using spectral and textural features. IEEE Trans. Neural Netw. 10, 138–152 (1999)
2. Liu, L., Sun, X., Chen, F., et al.: Cloud classification based on structure features of infrared images. J. Atmos. Ocean. Technol. 28, 410–417 (2011)
3. Isosalo, A., Turtinen, M., Pietikainen, M.: Cloud characterization using local texture information. In: Proceedings of the Finnish Signal Processing Symposium, Oulu, Finland, pp. 1–6 (2007)
4. Calbo, J., Sabburg, J.: Feature extraction from whole-sky ground-based images for cloud-type recognition. J. Atmos. Ocean. Technol. 25, 3–12 (2008)
5. Heinle, A., Macke, A., Srivastav, A.: Automatic cloud classification of whole sky images. Atmos. Meas. Tech. Dis. 3, 557–567 (2010)
6. Zhou, W., Cao, Z., Xiao, Y.: Cloud classification of ground-based images using texture-structure features. J. Atmos. Ocean. Technol. 31, 79–92 (2014)
7. Singh, M., Glennen, M.: Automated ground-based cloud recognition. Formal Pattern Anal. Appl. 8, 258–271 (2005)
8. Christodoulou, C.I., Pattichis, C.S., Pantziaris, M., et al.: Texture-based classification of atherosclerotic carotid plaques. IEEE Trans. Med. Imaging 22, 902–912 (2003)
9. Xie, M.H., Li, R.Y., Tian, Y.Q., et al.: The removing clouds method based on large remote sensing image. J. Beijing Normal Univ. 42, 42–47 (2006)
10. Azimi-Sadjadi, M.R., Zekavat, S.A.: Cloud classification using support vector machine. vol. 31, pp. 669–671 (2000)
11. Chen, Y., Zhang, C.: Multi-features cloud classification based on SVM and fractal dimension. Int. J. Digital Content Technol. Appl. 6, 211–220 (2013)
12. Zhou, X., et al.: Salient binary pattern for ground-based cloud classification. Acta Meteor. Sin. 27, 211–220 (2013)
13. Zhou, X., Wu, F.: An improved approach to remove cloud and mist from remote sensing images based on the Mallat algorithm. Proc. SPIE Int. Soc. Opt. Eng. (2008)
14. Ma, J., Wang, C.: Image fusion for automatic detection and removal of clouds and their shadows. Proc. SPIE 6419, 64191X (2006)
15. Lee, J., Weger, R.C., Sengupta, S.K., et al.: A neural network approach to cloud classification. IEEE Trans. Geosci. Remote Sens. 28, 846–855 (1990)
16. Welch, R.M., Sengupta, S.K., Goroch, A.K., et al.: Polar cloud and surface classification using AVHRR imagery: an intercomparison of methods. J. Appl. Meteorol. 31, 405–420 (1992)
17. Tian, B., Shaikh, M.A., Azimi-Sadjadi, M.R., et al.: A study of cloud classification with neural networks using spectral and textural features. IEEE Trans. Neural Netw. 10, 138–151 (1999)
18. Buch, K.A., Sun, C.H.: Cloud classification using whole-sky imager data. In: Ninth Symposium on Meteorological Observations & Instrumentation, vol. 16, pp. 353–358 (1995)
19. Liang, D., Kong, J., Hu, G.S., et al.: The removal of thick cloud and cloud shadow of remote sensing image based on support vector machine. Acta Geodaetica et Cartographica Sinica 41, 225–232 (2012)
20. Kong, J., Hu, G.S., Liang, D.: Thin cloud removing approach of remote sensing image based on support vector machine. Comp. Eng. Design 32, 599–602 (2011)
21. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Prentice Hall, Upper Saddle River (2003)
22. Soh, L.K., Tsatsoulis, C.: Texture analysis of SAR sea ice imagery using grey level co-occurrence matrices. IEEE Trans. Geosci. Remote Sens. 37, 780–795 (1999)
23. Al-Janobi, A.: Performance evaluation of cross-diagonal texture matrix method of texture analysis. Pattern Recogn. 34, 171–180 (2001)
24. Ojala, T., Pietikainen, M.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24, 971–987 (2002)
An Oversampling Enhancement Method of the Terahertz Image

Zhi Zhang1,2(*), Xu-Ling Lin1, Jian-Bing Zhang3, and Hong-Yan He1

1 Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
2 Nanjing University of Science and Technology, Nanjing, China
[email protected]
3 Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai, China
Abstract. Terahertz imaging is a new trend in the field of space optical remote sensing detection. However, the resolution of terahertz imaging is limited by the manufacturing level of the terahertz detector. In this paper we present an image processing method based on a terahertz oversampling system. Multiple frames observing the same region are collected by our terahertz imaging system, with sub-pixel shift displacements between adjacent frames in the horizontal and vertical directions. The multiple frames are filtered under a norm constraint and enhanced, and a higher-resolution grid is reconstructed once the staggered frames are obtained. The results show that the background noise is suppressed and the target edges are more continuous with the proposed method. The image quality evaluation indices are raised, which means that the proposed method outperforms other conventional methods, and the detail information of the processed image is enhanced. The proposed oversampling method, combining the p2-norm optimization and bilateral filtering, is especially suited to dim-target and cluttered-background data such as terahertz and infrared images. Keywords: Terahertz imaging · Oversampling · Remote sensing image · Filtering · Space optics
1 Introduction

Terahertz (THz) waves have been a hot research area over the years since they were put forward. They are very important for space optical remote sensing of the upper atmosphere [1], industrial inspection, security, and so on. There are some bottlenecks for terahertz applications, such as low resolution and poor definition [2, 3], and the hardware level is limited under existing conditions [4, 5]. An oversampling technique is adopted here to meet the need for high resolution [6].
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_37
Z. Zhang et al.
2 The Proposed Method

There is a rigorous geometric topological mapping relationship between the minimum resolvable object size and the resolution of the on-board imaging system [7]. Super-resolution is a useful and popular way to raise the resolution of a space optical imaging system; examples include the SPOT5 super mode and the European MSG (Meteosat Second Generation) diamond sampling [8–10], with which the resolution of the imaging system is significantly raised. In this paper, an enhancement method based on oversampling is proposed to improve the system performance. Firstly, the mathematical model [11] of the terahertz system is usually expressed as follows:

$$F(m, n) = S(m, n) \otimes psf + N \tag{1}$$
Here, (m, n) is the size of the terahertz image, psf is the point spread function of the system, S(m, n) is the original scene, N is the noise, and F(m, n) is the data obtained from the terahertz system. Secondly, the multi-observation images are optimized under a norm constraint and enhanced; the information is constraint-optimized for every observation, and the norm optimization model operates on a single frame. The filtering formula is given by

$$\arg\min_{b=1\ldots4} \left\|F_{k,b}(m,n) - F_{0,b}(m_0,n_0)\right\|_2^2 + \lambda_b D_b = \arg\min_{b=1\ldots4} \left\|S_{k,b}(m,n)\otimes psf_b(m,n) + N_b - F_{0,b}(m,n)\right\|_2^2 + \lambda_b D_b \tag{2}$$
Here, $\arg\min_{b=1\ldots4}\|\cdot\|_2^2$ indicates the p2-norm optimization, k is the number of cyclic computations, b is the exposure index, $\lambda_b$ is an adjustable parameter, and $D_b$ is the sharpness information of the LR image:

$$D_b = \sqrt{\left(F_b(m+1,n) - F_b(m,n)\right)^2 + \left(F_b(m,n+1) - F_b(m,n)\right)^2} \tag{3}$$
Here, $S^{New}(m,n)$ is the enhanced image after sub-pixel shifting between adjacent images in the same direction. Thirdly, the result is enhanced by the bilateral filter. The formula is

$$S_b^{New}(m,n) = \frac{\sum_{i,j=-\omega}^{\omega} G_s(m_0,n_0;m,n)\, G_r(m_0,n_0;m,n)\, S_{k,b}(m,n)}{\sum_{i,j=-\omega}^{\omega} G_s\!\left(m_0,n_0;m_i,n_j\right) G_r\!\left(m_0,n_0;m_i,n_j\right)}$$

$$G_s(m_0,n_0;m,n) = \exp\left(-\frac{(m_0-m)^2 + (n_0-n)^2}{2\sigma_s^2}\right)$$

$$G_r(m_0,n_0;m,n) = \exp\left(-\frac{\left|S_{k,b}(m_0,n_0) - S_{k,b}(m,n)\right|^2}{2\sigma_r^2}\right) \tag{4}$$
Here, Gs(m0, n0; m, n) is Gaussian kernel function, indicating space similarity. (m0, n0) is the centre of inner points set. ω is radius of the inner points set. σ s is variance of the inner points set. After being superimposed by oversampling method, the result is S0 ðu, vÞ ¼
m n 1 X X New S ðm, nÞ M u¼1 v¼1 b
ð5Þ
where S_0(u, v) is the reconstructed image, M is the frame number, and (u, v) indexes the reconstructed sampling units. The HR (high-resolution) reconstructed image for terahertz imaging, interpolated from multi-frame LR images, is thereby obtained; the final terahertz image is better than the original observation images.
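The three steps above (sharpness term, bilateral filtering, and superposition of the shifted frames) can be sketched in Python/NumPy as follows. This is a minimal illustration of Eqs. (3)–(5), not the authors' code: the window radius and the σ_s, σ_r values are placeholders, and the brute-force bilateral filter is written for clarity rather than speed.

```python
import numpy as np

def sharpness_map(f):
    """Gradient-magnitude sharpness D_b of a frame, Eq. (3)."""
    gx = np.diff(f, axis=0, append=f[-1:, :])   # F(m+1, n) - F(m, n)
    gy = np.diff(f, axis=1, append=f[:, -1:])   # F(m, n+1) - F(m, n)
    return np.sqrt(gx**2 + gy**2)

def bilateral_filter(f, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter, Eq. (4): spatial kernel G_s, range kernel G_r."""
    h, w = f.shape
    pad = np.pad(f, radius, mode='edge')
    out = np.zeros_like(f)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))          # space similarity
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gr = np.exp(-(win - f[i, j])**2 / (2 * sigma_r**2))  # grey-level similarity
            wgt = gs * gr
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out

def oversampling_enhance(frames):
    """Average the bilateral-filtered sub-pixel-shifted frames, Eq. (5)."""
    filtered = [bilateral_filter(f) for f in frames]
    return np.mean(filtered, axis=0)
```

In practice the sub-pixel registration of the frames would be applied before the superposition; it is omitted here for brevity.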
3 Experiment and Analysis

The experiment is carried out on the TAS7400TS platform from the Advantest Corporation. The frequency range of the system is 0.5–7 THz. The imaging result is shown in Fig. 1a. Here, the '10' sample is placed on the THz transmission imaging platform. Four frames are obtained in the experiment, shifted by half a pixel relative to each other in the vertical and horizontal directions. Figure 1a shows that the number '10' is blurred and faint. The image is seriously degraded by noise, so the edge of '10' is not continuous. The main factors are temperature variation and detector noise fluctuation. There is a large amount of block-like noise around the object, which is called the background fluctuation effect. Figure 1b is the result processed by the POCS (Projection Onto Convex Sets) method, and Fig. 1c is the result processed by the IBP (Iterative Back Projection) method. Obviously, the noise is reduced and the edge is more continuous. Figure 1d is the result processed by the proposed method, which has high contrast, a clear object and continuous edges. There is no block noise in the image or the background (Table 1). Figure 2a shows the target imaging result of the experiment. There are eight frames of LR THz images, shifted by sub-pixel amounts (shifting distance 0.1–0.25 pixels) relative to each other in the vertical and horizontal directions. Figure 2b is the result processed by the POCS method, and Fig. 2c is the result processed by the IBP method. Figure 2d shows that the block-like noise disappears and the object edge is clear. As seen from the reconstructed images and the comparison, the proposed method does a better job both visually and photometrically (Table 2).
Z. Zhang et al.
Fig. 1 Comparison of different methods on the ‘10’ sample image. (a) Original image. (b) Result of the POCS method. (c) Result of the IBP method. (d) Result of the proposed method
Table 1 Evaluation index of image quality

Oversampling methods    Original image   POCS    IBP     Proposed method
Mean value              85.93            89.87   89.23   137.17
Variance                37.18            34.49   30.50   28.10
SNR (dB)                7.27             8.31    9.32    13.76
Contrast ratio          20.59            24.73   21.92   21.58
Information entropy     6.86             7.10    6.95    6.84
Image power spectrum    40.03            40.21   40.15   41.29
MTF ratio               1                1.23    1.14    1.43
The experimental results show that the proposed method not only constrains the background noise but also keeps the target edge continuous. The evaluation indices of image quality are raised, meaning that the detail of the processed image is improved. The proposed method is better than conventional methods in minimum distinguishable detail, though not as strong in raising the SNR. In summary, the proposed oversampling method not only superimposes multi-observation images to enhance texture, but also effectively reduces noise and preserves the edge information of the image.
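Evaluation indices of the kind listed in the tables (mean value, variance, SNR and information entropy) can be computed as follows. The chapter does not give its exact definitions, so the usual textbook forms are used here as an illustration: SNR as the mean-to-standard-deviation ratio in dB, and Shannon entropy of the grey-level histogram.

```python
import numpy as np

def quality_indices(img, bins=256):
    """No-reference image quality indices (assumed textbook definitions)."""
    img = np.asarray(img, dtype=float)
    mean = img.mean()
    std = img.std()
    snr_db = 20 * np.log10(mean / std)        # SNR in dB (mean/std convention)
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))         # information entropy in bits
    return {'mean': mean, 'std': std, 'snr_db': snr_db, 'entropy': entropy}
```

The contrast ratio, power-spectrum and MTF-ratio indices depend on definitions not stated in the chapter and are therefore not reproduced here.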
An Oversampling Enhancement Method of the Terahertz Image
Fig. 2 Comparison of different methods on the metal target image. (a) Original image. (b) Result of the POCS method. (c) Result of the IBP method. (d) Result of the proposed method

Table 2 Evaluation index of image quality

Oversampling methods    Original image   POCS    IBP     Proposed method
Mean value              119              120     119     118
Variance                77.78            78.15   78.35   78.69
SNR (dB)                3.75             3.72    3.67    3.56
Contrast ratio          70.58            71.26   71.43   72.46
Information entropy     5.40             6.39    6.42    7.04
Image power spectrum    37.35            41.37   41.36   41.39
MTF ratio               1                1.12    1.45    1.62
4 Conclusion

The proposed oversampling method, combined with norm optimization and bilateral filtering, can improve the quality of THz images. In the experiment, the image processed by the proposed method is better than those of conventional methods. The
proposed method is especially suited to enhancing images acquired under dim illumination. It is expected to be applied in complex diffraction-limited systems and extended to remote sensing applications in the future [11]. Of course, such observations will require further development of our method.
References

1. Na, H.Y.: Improvement of Terahertz Active Imaging System. Nanjing University, Nanjing (2016)
2. Shams, M.B., Jiang, Z.G., Rahman, S., et al.: Approaching real-time terahertz imaging using photoinduced reconfigurable aperture arrays. Terahertz Physics, Devices, and Systems VIII: Advanced Applications in Industry and Defense, pp. 910207-1–910207-8 (2014)
3. Kannegulla, A., Jiang, Z., Rahman, S.M., et al.: Coded aperture imaging using photo-induced aperture arrays for mapping terahertz beams. IEEE Transactions on Terahertz Science and Technology, pp. 321–327 (2014)
4. Gergelyi, D., Földesy, P., Zarándy, Á.: Scalable, low-noise architecture for integrated terahertz imagers. Int. J. Infrared Millimeter Waves. 36, 520–536 (2015)
5. Xu, L.M.: High Resolution Terahertz Image Processing. Xi'an Institute of Optics & Precision Mechanics, Chinese Academy of Sciences, Xi'an (2013)
6. Dong, J.M., Peng, X.Y., Ma, X.H., et al.: Progress of detection technology of ultra-broadband THz time-domain spectroscopy. Spectrosc. Spectr. Anal. 1277–1283 (2016)
7. Liu, Z.J., Zhang, Z., Zhou, F., et al.: A geometrical mapping method optical imaging for space pushbroom system. Spacecraft Recov. Remote Sens. 39–45 (2012)
8. Pan, H.B., Cong, M.Y., Zhang, W., et al.: Imaging model study of space objects from space remote sensor. J. Harbin Inst. Technol. 40, 1699–1702 (2008)
9. Wang, S.T., Zhang, W., Jin, L.H., et al.: Point target detection based on temporal-spatial oversampling system. J. Infrared Millimeter Waves. 32, 68–72 (2013)
10. Chen, B.Y., Chen, F.S., Chen, G.L., et al.: A new method of improving spatial resolution of linear matrix scanner by over sample. Infrared Technol. 395–402 (2009)
11. Lin, X.L., Zhang, Z., Zhang, J.B.: Super-Resolution Reconstruction for Terahertz Pulsed Imaging. The 42nd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz) (2017)
Band Selection for Hyperspectral Data Based on Clustering and Genetic Algorithm

Gaojin Wen1, Limin Wu1, Yun Xu2(*), Zhiming Shang1, Can Zhong1, and Hongming Wang1

1 Engineering Research Center of Aerial Intelligent Remote Sensing Equipment of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China
2 Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, China
[email protected]
Abstract. Band selection is the most widely used method to overcome the data redundancy of hyperspectral image data; it finds a subset of spectral bands among the more than 100 original spectral bands. In this chapter, we give a short review of band selection methods developed in recent years, and a new band selection method based on k-means++ clustering and a genetic algorithm is developed. Firstly, the k-means++ clustering method is applied to generate several optional subsets of spectral bands from the original hyperspectral data. Then a genetic algorithm is used to select bands within a framework of supervised classification of the hyperspectral data. Two hyperspectral datasets are used to test the proposed method. It is shown that the presented band selection method can reduce the dimensionality of hyperspectral data efficiently and yields much better accuracy for classification applications.

Keywords: Band selection · Genetic algorithm · Clustering · Supervised classification
© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_38

1 Introduction

Nowadays, hyperspectral imaging techniques have entered a prosperous period with the use of high-speed optic-fiber storage devices. More and more hyperspectral data are captured and processed for many different kinds of applications, such as environmental protection, agriculture monitoring, mineral detection and forest management. Large redundancy is an intrinsic drawback of hyperspectral data, which often leads to large computational cost and data noise in wide application. To overcome this drawback and decrease the quantity of hyperspectral data to be processed, several dimensionality reduction techniques are applied, such as principal component analysis (PCA), maximum noise fraction (MNF), independent component analysis (ICA) and band selection. Among these dimensionality reduction methods, band selection is a special preprocess, which finds a subset of spectral bands among the more than 100 original spectral bands. Because it does not modify the physical meaning of the original spectral data, band selection has become the most widely used method among engineers and researchers for the analysis of hyperspectral image data. In this chapter, we give a short review of band selection methods developed in recent years, and a new band selection method based on k-means++ clustering and a genetic algorithm is developed. Firstly, the k-means++ clustering method is applied to generate several optional subsets of spectral bands from the original hyperspectral data. Then a genetic algorithm is used to select bands within a framework of supervised classification of the hyperspectral data. Two hyperspectral datasets are used to test the proposed method. It is shown that the presented band selection method can reduce the dimensionality of hyperspectral data efficiently and yields much better accuracy for classification applications.

The rest of this chapter is structured as follows. Recent related works are introduced in Sect. 2. Section 3 illustrates the proposed band selection framework based on k-means++ and genetic algorithm. Section 4 presents the experiments and analysis. Finally, a short conclusion is drawn in Sect. 5.
2 Related Work

Band selection for hyperspectral data is in essence a combinatorial optimization problem. Different objective functions have been proposed for band selection, resulting in two major categories: supervised band selection and unsupervised band selection. Based on this classification, related works in recent years are reviewed in the rest of this section.

2.1 Supervised Band Selection
Hyperspectral data are often used for classification applications, such as plant identification and disease detection. The category of each hyperspectral image point is known beforehand, so the classification accuracy is naturally used as the objective function, which turns band selection into a supervised classification problem. Band selection methods developed on this criterion form one large category: supervised band selection. To solve this supervised classification problem, many combinatorial optimization techniques have been proposed and analyzed, such as the chaotic binary coded gravitational search algorithm [1] and the genetic algorithm [2].

2.2 Unsupervised Band Selection
Besides classification accuracy, many information criteria are used as the objective function, such as saliency [3], maximin distance [4], sparsity [5], entropy [6], mutual information [7], correlation [8], variance [9] and change detection [10]. This kind of band selection method is categorized as unsupervised band selection.
To solve the unsupervised band selection problem, many combinatorial optimization techniques based on evolutionary methods have been proposed and analyzed, such as the binary social spider optimization algorithm [6]. Other optimization methods have also been proposed, such as the iterative method [3], multiobjective optimization [9], the split-and-merge approach [8], greedy search [10] and constrained energy minimization [11].
3 Band Selection Based on Clustering and Genetic Algorithm

3.1 Framework
Usually, hyperspectral data have more than 100 bands, and the spectral resolution of each band is about λ/100, where λ is the wavelength of visible light. Such high resolution leads to strong correlation among adjacent bands of hyperspectral data. Strong correlation leads to high data redundancy, which decreases the efficiency of band selection and gravely hinders applications of hyperspectral data. So it is natural to eliminate the correlation of spectral bands before other processing, e.g. band selection and data dimensionality reduction. Clustering is an efficient way of grouping similar bands into multiple clusters; the k-means++ method is a convenient choice because it is easy to implement. There are two major issues for band clustering. One is to decide the proper number of clustering centers, which is an input of k-means++ clustering. The other is to evaluate the clustering result. For the first problem, the straightforward method is to iterate over all possible numbers of clustering centers. For the second problem, classification accuracy is used as the criterion to find the best clustering result. After band clustering, the number of spectral bands decreases to a proper value. A genetic algorithm is then used to implement band selection. Since the genetic algorithm is a random search method based on evolutionary theory, the best result is selected from several runs. The framework of band selection based on clustering and genetic algorithm is as follows:

1. Band Clustering Step
   For k = k1 : k2 do {
     For s = 1 : S1 do {
       Use k-means++ to do band clustering, resulting in k clustering centers;
       Find the closest band to each clustering center and generate band centers;
       Calculate the classification accuracy of the clustered band centers; } }
   Sort the accuracy sequence to find the largest value and the corresponding band centers;

2. Band Selection Step
   For s = 1 : S2 do {
     Generate an initial population of P individuals through random selection;
     For g = 1 : G do {
       Calculate the classification accuracy of each individual as its fitness value;
       Calculate the best and average fitness values;
       If (exit condition satisfied) break;
       else { Commit GA operators: crossover, mutation and selection; } } }
   Find the largest fitness value and the corresponding selected bands.

3.2 Distance of Bands
For n samples of hyperspectral data, bands x and y are two numeric sequences with n elements in the interval (0, 1). A reasonable distance between bands should reflect the effect of the correlation of two bands on hyperspectral data classification. The correlation coefficient gives a proper distance, computed from the normalized covariance of the two bands x and y according to (1):

d(x, y) = 1 − (X − X̄)′(Y − Ȳ) / (s_X s_Y)    (1)

where X̄ and s_X are the mean and standard deviation of X.

3.3 Evaluation of Clustering Results
After clustering, the n bands are categorized into k clusters. For each band cluster, the band closest to the cluster centroid is selected as the representative band of that cluster; the number of bands of the hyperspectral data is thus reduced to k. The class label of the original hyperspectral data is known beforehand, having been annotated during the capture process, and the classification accuracy is calculated through supervised classification methods. Several classification methods can be used, and the best accuracy is recorded as the score. Discriminant analysis classifiers are used in the Linear, Quadratic, DiagLinear, DiagQuadratic and Mahalanobis modes.

3.4 Individual Representation
As mentioned above, band selection is a combinatorial optimization problem: several bands are to be selected from k bands, and a (0 or 1) bit string is a natural representation for implementing GAs. For bit strings, many genetic operators have been developed and proved successful, such as single-point crossover, two-point crossover and the bit-inversion mutation operator. For an individual g, the original hyperspectral data can be reduced from k bands to s bands, where s is the number of 1s in g. Supervised classification in the five modes mentioned above is carried out, and the best accuracy is recorded as the fitness value of g.
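The components of Sects. 3.1–3.4 can be sketched in Python/NumPy as follows. This is an illustrative implementation under stated simplifications, not the authors' MATLAB code: only the k-means++ seeding step of band clustering is shown, a nearest-class-mean rule stands in for the five discriminant-analysis classifiers, and the population size, generation count and mutation rate are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_distance(x, y):
    """Correlation-coefficient distance of two bands, Eq. (1) of Sect. 3.2."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def cluster_bands(X, k):
    """Pick k representative bands from the columns of X (samples x bands)
    using k-means++-style seeding under the correlation distance."""
    n_bands = X.shape[1]
    D = np.array([[corr_distance(X[:, i], X[:, j]) for j in range(n_bands)]
                  for i in range(n_bands)])
    centers = [int(rng.integers(n_bands))]
    while len(centers) < k:            # seed proportionally to squared distance
        d2 = np.min(D[:, centers], axis=1) ** 2
        centers.append(int(rng.choice(n_bands, p=d2 / d2.sum())))
    return sorted(centers)

def fitness(X, y, mask):
    """Classification accuracy on the selected bands (nearest class mean)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    classes = np.unique(y)
    means = np.array([Xs[y == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(((Xs[:, None, :] - means[None]) ** 2).sum(-1), axis=1)]
    return float((pred == y).mean())

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Bit-string GA of Sect. 3.4: elitist selection, single-point crossover,
    bit-inversion mutation."""
    k = X.shape[1]
    P = rng.integers(0, 2, size=(pop, k))
    for _ in range(gens):
        f = np.array([fitness(X, y, ind) for ind in P])
        P = P[np.argsort(f)[::-1]]            # keep the fitter half (elitism)
        children = []
        for _ in range(pop // 2):
            a, b = P[rng.integers(pop // 2, size=2)]
            cut = int(rng.integers(1, k))     # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(k) < p_mut)  # bit-inversion mutation
            children.append(child)
        P[pop // 2:] = children
    f = np.array([fitness(X, y, ind) for ind in P])
    return P[np.argmax(f)], f.max()
```

In the full framework, `cluster_bands` would first reduce the original bands to k representatives, and `ga_select` would then search over those k bits, with the whole procedure repeated S1 and S2 times as in the listing above.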
4 Experiments and Discussions

The proposed band selection method based on k-means++ and GAs has been implemented in MATLAB. Two hyperspectral datasets are used to test its performance. In this section, we give the details of the experiments.

4.1 Hyperspectral Dataset of Broadleaf
The test dataset is hyperspectral data of 66 broadleaf species; each species has 50 samples with 359 spectral bands. A hyperspectral data sample of an Osmanthus fragrans leaf is shown in Fig. 1. All 3300 hyperspectral data samples are shown in Fig. 2. For the input hyperspectral data, the number of band clusters varied from k1 = 10 to k2 = 80, and C1 (= 20) runs of band clustering were performed. The statistical results are displayed in Fig. 3. It can be seen that the classification accuracy rises to a peak value
Fig. 1 A hyperspectral data sample of osmanthus fragrans leaf
Fig. 2 Hyperspectral dataset of broadleaf
Fig. 3 Statistic result of band clustering
Fig. 4 Best band clustering result
of 99.3097% as the number of clusters increases. After that, the classification accuracy does not increase even as the number of clusters continues to grow. The band clustering run containing the best classification accuracy is recorded in Fig. 4. The discriminant analysis classifier in the Quadratic mode shows the best performance, followed by the Mahalanobis mode; the other three modes are less efficient. In the best clustering result, the original 359 spectral bands are grouped into 66 clusters, and the corresponding group distribution is shown in Fig. 6. Adjacent bands in the spectral index regions [4–24], [69–159], [180–255] and [266–341] are clustered into groups efficiently. C2 (= 20) runs of band selection based on GAs were performed; the best and average fitness values of each generation for one run are shown in Fig. 5, and the corresponding best solution is shown in Fig. 6. The best solution selects 40 bands from the total 66 bands, and the corresponding classification accuracy is 99.3425%. It can be seen from the above three figures that the band selection method works efficiently and robustly.
Fig. 5 Convergence plot of genetic algorithm
Fig. 6 Result of band selection based on GA. The best solution of 40 bands is marked with “”
Fig. 7 Hyperspectral data of Salinas scene and ground truth of classification result
4.2 Salinas Hyperspectral Dataset
A publicly available hyperspectral scene is used as the second test dataset. It was acquired by the AVIRIS sensor over the Salinas Valley, California, and consists of 512 × 217 pixels and 224 spectral bands. It includes 16 classes such as vegetables, bare soil and vineyards. The ground truth of the classification is shown in Fig. 7.
Fig. 8 Statistic result of band clustering
Fig. 9 Best band clustering result
For the input hyperspectral data, the number of band clusters varied from k1 = 10 to k2 = 60, and C1 (= 20) runs of band clustering were performed. The statistical results are displayed in Fig. 8. It can be seen that the classification accuracy rises to a peak value of 88.5348% as the number of clusters increases; after that, it does not increase even as the number of clusters continues to grow. In the best clustering result, the original 224 spectral bands are grouped into 47 clusters, and the corresponding band distribution is shown in Fig. 11. The band clustering run containing the best classification accuracy is recorded in Fig. 9. The discriminant analysis classifier in the Quadratic and Mahalanobis modes shows better performance than the other three modes. C2 (= 20) runs of band selection based on GAs were performed; the best and average fitness values of each generation for one run are shown in Fig. 10, and the corresponding best solution is shown in Fig. 11. The best solution selects 24 bands from the total 224 bands, and the corresponding classification accuracy is 89.7172%. It can be seen from the above four figures that the band selection method works efficiently and robustly.
Fig. 10 Convergence plot of genetic algorithm
Fig. 11 Result of bands clustering with 47 classes and band selection based on GAs marked with “”
5 Conclusion

In this chapter, a band selection method based on a genetic algorithm for the classification of hyperspectral data has been developed. The goal of obtaining the fewest bands without loss of classification accuracy is achieved. The proposed method is evaluated on two real captured hyperspectral datasets for classification applications. The results show that the proposed algorithm gives more accurate results with fewer bands.
References

1. Wang, M., Wan, Y., Ye, Z., Gao, X., Lai, X.: A band selection method for airborne hyperspectral image based on chaotic binary coded gravitational search algorithm. Neurocomputing. 273, 57–67 (2018)
2. Koushik, N., Jones, S., Sarkar, S., Singh, A.K., Singh, A., Baskar, G.: Hyperspectral band selection using genetic algorithm and support vector machines for early identification of charcoal rot disease in soybean. Plant Methods. 14(1) (2017)
3. Su, P., Liu, D., Li, X., Liu, Z.: A saliency-based band selection approach for hyperspectral imagery inspired by scale selection. IEEE Geosci. Remote Sensing Lett. 15(4), 572–576 (2018)
4. Kumar, V.S., Vasuki, S.: Maximin distance based band selection for endmember extraction in hyperspectral images using simplex growing algorithm. Multimedia Tools Appl. 77(6), 7221–7237 (2018)
5. Li, F., Zhang, P., Lu, H.: Unsupervised band selection of hyperspectral images via multi-dictionary sparse representation. IEEE Access. 99 (2017)
6. Shukla, U.P., Nanda, S.J.: A binary social spider optimization algorithm for unsupervised band selection in compressed hyperspectral images. Expert Syst. Appl. 97, 336–356 (2018)
7. Asma, E., Sarhrouni, E., Ahmed, H., Chafik, N.: A new band selection approach based on information theory and support vector machine for hyperspectral images reduction and classification. ISNCC, pp. 1–6 (2017)
8. Rashwan, S., Dobigeon, N.: A split-and-merge approach for hyperspectral band selection. IEEE Geosci. Remote Sensing Lett. 14(8), 1378–1382 (2017)
9. Xu, X., Shi, Z., Pan, B.: A new unsupervised hyperspectral band selection method based on multiobjective optimization. IEEE Geosci. Remote Sensing Lett. 14(11), 2112–2116 (2017)
10. Tales, I., Carlos, M.B.J., Cdric, R.: Band selection for nonlinear unmixing of hyperspectral images as a maximal clique problem. IEEE Trans. Image Processing. 26(5), 2179–2191 (2017)
11. Wang, L., Li, H.-C., Xue, B., Chang, C.-I.: Constrained band subset selection for hyperspectral imagery. IEEE Geosci. Remote Sensing Lett. 14(11), 2032–2036 (2017)
Tree Species Classification of Airborne Hyperspectral Image in Cloud Shadow Area

Junling Li1(*), Yong Pang1(*), Zengyuan Li1, and Wen Jia1

1 Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing, China
[email protected]
Abstract. Tree species classification is crucial for forest management. Airborne hyperspectral data provide images with high spatial and spectral resolution, which makes forest species classification possible. The radiance of an airborne hyperspectral image is decreased where cloud exists: the radiance of objects in the cloud shadow area is much lower than in non-shaded areas, and the spectral variance of different objects in the shadow is also lower than in the non-shaded region. Although the radiance of the image in the cloud shadow is decreased, there are still certain differences among objects, which provides the possibility for forest species classification. We used vegetation indices and texture features to recombine a new image. The narrow band vegetation indices include the red edge normalized vegetation index (NDVI705), improved red edge ratio vegetation index (mSR705), improved red edge normalized vegetation index (mNDVI705), Vogelmann1 (VOG1), Vogelmann2 (VOG2) and the Red Edge Position index (REP). Bands 31 (0.67 μm), 51 (0.86 μm) and 55 (0.89 μm), selected using the optimum index factor (OIF), were used to calculate the texture information. Tree species training samples were selected based on high-resolution aerial photographs. The support vector machine (SVM) method was used to classify the reflectance images and the recombined feature images. The classification results were verified with field data, and the overall accuracy and Kappa coefficient were used as the evaluation indices for classification accuracy. Compared with the classification result of the reflectance image, the combination of vegetation indices and texture information improves classification accuracy significantly. The overall accuracy and Kappa coefficient are 90.4% and 0.88, increases of 18% and 0.2 respectively. The classification accuracy of each tree species is also significantly improved. It can be seen from the confusion matrix that when the reflectance image is used for classification, Pinus koraiensis is mistakenly classified as Pinus sylvestris; using the vegetation indices, the number of Pinus koraiensis pixels misclassified as Pinus sylvestris is greatly reduced. We conclude that cloud-shaded forests can be classified based on the narrow band vegetation indices (NDVI705, mSR705, mNDVI705, VOG1, VOG2, REP) and texture information, which performs better than the reflectance image alone.

© Springer Nature Switzerland AG 2020
H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_39
Keywords: AISA Eagle II · Airborne hyperspectral images · Cloud shadow area · Tree species classification · Narrow band vegetation indices
1 Introduction

Tree species classification is an important part of forest resource management and scientific research [1]. The composition and distribution of tree species are important not only for the ecological economy but also for managers making effective decisions. Traditional methods of obtaining tree species information mainly rely on field surveys, which are not applicable to some remote forest areas and require a great deal of manpower, material and financial resources. Remote sensing can provide faster and more accurate tree species information; the rich information contained in remote sensing images is the basis for tree species classification [2]. The use of airborne hyperspectral data to classify forest species in different temperature zones in China and abroad proves that airborne hyperspectral data have great potential in forest resource monitoring and tree species classification [3–6]. However, if there are clouds in the sky while an airborne hyperspectral image is acquired, the radiance of objects under the cloud shadow is much lower than in non-shaded areas, and the spectral variance of different objects in the shadow is also lower than in the non-shaded region. Gao et al. used spectral feature thresholds to detect cloud shadows and then used a comprehensive model to compensate for them [7]. Wang et al. used a principal component analysis method to detect cloud shadows and then compensated with the unshaded pixel values of the same feature type in the image [8]; this compensation method needs to know the feature types in the image, and the compensated image is not suitable for classification either. Such cloud-shaded images are usually discarded, which wastes image resources. In this study, we used vegetation indices and texture information to recombine new images. These recombined images were classified by a support vector machine (SVM) classifier to explore the potential of narrow band vegetation indices and texture information for classification in cloud-shaded forest.
2 Study Area and Data Collection

2.1 Overview of the Study Area

The study area is located in Jiamusi City, Heilongjiang Province, at 130°32′–130°52′ east longitude and 46°20′–46°30′ north latitude. The annual average temperature is 2.7 °C; the extreme maximum temperature is 35.6 °C and the extreme minimum temperature is −34.7 °C. The annual average rainfall is 550 mm, and the annual sunshine duration is 1955 h. The forest farm is dominated by plantations, which account for 76.7% of the total forest area. Larix olgensis, Pinus sylvestris and Pinus koraiensis account for about 80% of the total plantation area, and there is a small amount of Picea asperata in the plantations. Broad-leaved trees are mainly distributed in natural forests (Fig. 1).
Fig. 1 Study area
2.2 Remote Sensing Data and Field Data Acquisition

The hyperspectral image was obtained with the CAF's (Chinese Academy of Forestry) LiCHy (LiDAR, CCD and Hyperspectral) system, owned by the Institute of Resource Information of the Chinese Academy of Forestry [9]. The acquisition time was June 5, 2017. The AISA Eagle II image used in this study has 64 bands with a spectral resolution of 3.3 nm and a ground spatial resolution of 0.5 m. A Trimble handheld GPS was used to record the positions of the tree species. The field data mainly include 50 Larix olgensis, 50 Pinus koraiensis, 41 Pinus sylvestris and 48 Picea asperata training samples. The training samples were selected based on DOM data with a spatial resolution of 0.1 m, acquired synchronously with the hyperspectral data.
2.3 Hyperspectral Data Processing and Tree Species Classification
Because remote sensing images are affected by various factors during hyperspectral data acquisition, the quality of the remote sensing data is reduced, which affects the accuracy of tree species classification. Therefore, it is necessary to preprocess the images before analysis. The preprocessing mainly includes radiometric calibration, geometric correction and atmospheric correction. The SVM method was adopted to classify the radiance image and the recombined image. The classification results were verified with field data; the overall accuracy and Kappa coefficient were used as the evaluation indices for tree species classification accuracy (Fig. 2).

2.3.1 Geometric Correction and Atmospheric Correction

Since the image is distorted by various factors during acquisition, the original image needs to be geometrically corrected.
Fig. 2 Workflow of fine classification: radiometric calibration, geometric correction and atmospheric correction; cloud shadow extraction; SVM classification of the reflectance image; calculation of narrowband vegetation indices, principal component analysis, band selection and texture analysis; recombination into a new image; SVM classification; post-classification; mapping of the classification result
Fig. 3 Hyperspectral image after atmospheric correction (band 38, band 18, band 8)

Table 1 Vegetation index abbreviations and formulas

Vegetation index and abbreviation                          Calculation formula                               References
Red edge normalized vegetation index (NDVI705)             NDVI705 = (σ750 − σ705)/(σ750 + σ705)             [10]
Improved red edge ratio vegetation index (mSR705)          mSR705 = (σ750 − σ445)/(σ705 − σ445)              [11]
Improved red edge normalized vegetation index (mNDVI705)   mNDVI705 = (σ750 − σ705)/(σ750 + σ705 − 2σ445)    [11]
Vogelmann red edge index 1 (VOG1)                          VOG1 = σ740/σ720                                  [12]
Vogelmann red edge index 2 (VOG2)                          VOG2 = (σ734 − σ747)/(σ715 + σ726)                [13]
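The narrow-band indices of Table 1 can be computed directly from a reflectance cube. The sketch below assumes a NumPy array of shape (rows, cols, bands) and a matching list of band-centre wavelengths in nm; `band_index` and `narrowband_indices` are illustrative helper names, not functions from the paper's processing chain.

```python
import numpy as np

def band_index(wavelengths, target_nm):
    """Index of the band whose centre wavelength is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def narrowband_indices(cube, wavelengths):
    """Compute the five narrow-band indices of Table 1 for a (rows, cols, bands) cube."""
    b = lambda nm: cube[:, :, band_index(wavelengths, nm)].astype(float)
    s445, s705, s715, s720 = b(445), b(705), b(715), b(720)
    s726, s734, s740, s747, s750 = b(726), b(734), b(740), b(747), b(750)
    eps = 1e-12  # guard against division by zero in dark/shadow pixels
    return {
        "NDVI705":  (s750 - s705) / (s750 + s705 + eps),
        "mSR705":   (s750 - s445) / (s705 - s445 + eps),
        "mNDVI705": (s750 - s705) / (s750 + s705 - 2.0 * s445 + eps),
        "VOG1":     s740 / (s720 + eps),
        "VOG2":     (s734 - s747) / (s715 + s726 + eps),
    }
```

Because the 3.3 nm sampling rarely hits the nominal wavelengths exactly, the nearest-band lookup stands in for whatever band-matching the original processing used.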
geometrically corrected; CaliGeoPro 2.0 software was used for radiometric calibration and geometric correction. ATCOR4 software was used to obtain the solar azimuth, zenith angle and aerosol type, and the adjacency-pixel visibility, atmospheric visibility and other parameters were adjusted based on the MODTRAN5 model to correct the influence of the atmosphere on the data acquired by the sensor and to obtain the true reflectance (Fig. 3).

2.3.2 Calculation of Narrow-Band Vegetation Index and Texture Analysis

The narrowband vegetation indices are very sensitive to chlorophyll content, leaf surface canopy, leaf clumps and canopy structure. They use the red to near-infrared region, the "red edge", which lies at 690–740 nm and covers the absorption and scattering of chlorophyll and other pigments; they are more sensitive than broad-band vegetation indices. The narrow-band vegetation indices used here include the red edge normalized vegetation index (NDVI705) [10], improved red edge ratio vegetation index (mSR705) [11], improved red edge normalized vegetation index (mNDVI705) [11], Vogelmann red edge index 1 (VOG1) [12], Vogelmann red edge index 2 (VOG2) [13] and red edge position index (REP) [14] (Table 1). The band combination used to calculate the texture information was selected by the optimum index factor (OIF); the higher the OIF value, the larger the amount of information contained in the bands. The selected bands are band 31 (0.67 μm), band 51 (0.86 μm) and band 55 (0.89 μm). The texture information is calculated from these three bands, and the
texture features include Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment and Correlation. Because hyperspectral images have many bands, the bands are often highly correlated. Principal component analysis removes redundant information between bands and compresses the multi-band image information into fewer components than the original bands. The first four principal components were selected for this paper, containing 99.9% of the information of the original image. We used the vegetation indices, texture features and the first four principal components to recombine a new image.

2.4 Supervised Classification
2.4.1 Determination of the Classification System

According to the field data, the main tree species in the research area are Pinus sylvestris, Pinus koraiensis, Larix olgensis and Picea asperata; there are also many shrubs, farmland and grassland. The final classification system is therefore: farmland, shrub, grassland, Pinus sylvestris, Pinus koraiensis, Larix olgensis and Picea asperata. Hyperspectral pixels that are not affected by cloud shadow have the correct reflectance after atmospheric correction, whereas for pixels affected by cloud shadow the reflectance values are suppressed by the cloud. The radiance of objects in the cloud shadow area is much lower than in the non-shaded area. The atmospherically corrected spectral curves are shown in Fig. 4.

2.4.2 Classifier

The SVM was proposed by Corinna Cortes [14]. It shows many unique advantages in solving small-sample, nonlinear and high-dimensional pattern recognition problems, and can be applied to function approximation. Support vector machines have been increasingly used to classify airborne hyperspectral images [15] and are more accurate than traditional techniques (e.g. maximum likelihood, neural networks, decision tree classifiers) [16]. Taking the temperate mountain forest as an example, the SVM classifier is used to classify the atmospherically corrected reflectance image and the recombined image, and to
Fig. 4 Spectral curves of the objects (data value vs. wavelength, µm) for Larix olgensis, Pinus koraiensis, Pinus sylvestris and Picea asperata. (a) Spectra of tree species in cloud shadow. (b) Spectra of tree species outside cloud shadow
evaluate the influence of the narrow-band vegetation indices and texture information on classification accuracy.

2.4.3 Selection of Training Samples

Tree species training samples were selected based on high-resolution aerial photographs with a spatial resolution of 0.1 m. The final numbers of training samples are eight farmland samples, nine shrub samples, 24 grassland samples, 50 Pinus sylvestris samples, 57 Pinus koraiensis samples, 50 Larix olgensis samples and 60 Picea asperata samples.

2.4.4 Image Classification

The SVM method was adopted to classify the reflectance image and the recombined feature image, using the same training samples for both. The classification results were verified with field data. Majority analysis was performed on the classification results to eliminate the salt-and-pepper phenomenon, and the classified images were then evaluated using the field data.
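The majority-analysis step used to suppress salt-and-pepper pixels can be sketched as a modal filter over each pixel's neighbourhood. This is a minimal pure-NumPy illustration (the paper performed the step in ENVI, and the SVM classification itself would typically be a library call, e.g. an RBF-kernel SVC in scikit-learn); `majority_filter` is a hypothetical helper name.

```python
import numpy as np

def majority_filter(class_map, size=3):
    """Replace each pixel by the most frequent class label in its
    size x size neighbourhood (edge-replicated), suppressing isolated
    salt-and-pepper pixels in a classified map."""
    class_map = np.asarray(class_map)
    pad = size // 2
    padded = np.pad(class_map, pad, mode="edge")
    out = np.empty_like(class_map)
    rows, cols = class_map.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + size, c:c + size].ravel()
            values, counts = np.unique(window, return_counts=True)
            out[r, c] = values[np.argmax(counts)]
    return out
```

A single misclassified pixel surrounded by a uniform class is flipped to the surrounding class, which is the effect the majority analysis is after.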
3 Classification Results and Analysis

3.1 Image Classification Results
The reflectance image and the recombined image were separately classified by the SVM method, implemented in the SVM classifier of the ENVI software. The classification results are shown in Fig. 5.

3.2 Accuracy Verification
The verification samples are composed of field data; the classification results of the reflectance image and of the recombined image were verified, respectively. The verification results are shown in Tables 2 and 3.
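The two evaluation indices can be computed from a confusion matrix in a few lines. This generic sketch (not the authors' code) returns the overall accuracy, the proportion of samples on the diagonal, and Cohen's kappa, which corrects the observed agreement for chance agreement.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: predicted class, columns: reference class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)
```

For a perfectly diagonal matrix both indices equal 1; strong off-diagonal confusion (e.g. Pinus koraiensis classified as Pinus sylvestris) lowers both.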
Fig. 5 Classified images (legend: Picea asperata, Larix olgensis, Pinus koraiensis, Pinus sylvestris, grassland, shrub, farmland). (a) Reflectance image classification map. (b) Feature image classification map
Table 2 Class confusion matrix, overall accuracy and Kappa coefficient of the classification result from the reflectance image

Object type        Picea asperata  Larix olgensis  Pinus koraiensis  Pinus sylvestris  Grassland  Shrub  Farmland
Picea asperata     557             51              23                0                 0          0      0
Larix olgensis     54              658             0                 0                 0          58     0
Pinus koraiensis   108             0               247               83                0          15     0
Pinus sylvestris   15              13              259               547               0          0      0
Grassland          0               0               0                 121               327        0      185
Shrub              0               114             0                 0                 0          306    0
Farmland           0               0               0                 0                 0          0      317
Overall accuracy: 72.68%; Kappa coefficient: 0.68
Table 3 Class confusion matrix, overall accuracy and Kappa coefficient of the classification result from the recombined image

Object type        Picea asperata  Larix olgensis  Pinus koraiensis  Pinus sylvestris  Grassland  Shrub  Farmland
Picea asperata     569             32              33                0                 0          0      0
Larix olgensis     40              802             19                4                 0          0      0
Pinus koraiensis   95              0               460               29                0          0      0
Pinus sylvestris   15              14              17                718               0          0      0
Grassland          0               0               0                 0                 327        7      67
Shrub              0               16              0                 0                 0          372    0
Farmland           0               0               0                 0                 0          0      435
Overall accuracy: 90.64%; Kappa coefficient: 0.88

3.3 Classification Results and Analysis
In this paper, we used vegetation indices and texture information to recombine a new image. The reflectance image and the recombined image were classified with the SVM classifier. The results show that the narrow-band vegetation indices and texture information can significantly improve the classification accuracy: the overall accuracy and Kappa coefficient reach 90.64% and 0.88, an increase of about 18 percentage points and 0.2, respectively. The classification accuracy of each tree species is also significantly improved. The confusion matrices show that when the reflectance image is classified, Pinus koraiensis is frequently misclassified as Pinus sylvestris, whereas with the vegetation indices the number of Pinus koraiensis pixels misclassified as Pinus sylvestris is greatly reduced. This indicates that the vegetation indices and texture information used in the feature image greatly improve the separability of Pinus koraiensis and Pinus sylvestris. The confusion matrices also show misclassification between farmland and grassland, because some farmland has been cultivated and the growing crops have characteristics similar to grassland.
4 Discussion

Airborne hyperspectral remote sensing is a passive remote sensing method whose flight platform is below the cloud layer, so the image is strongly affected by the weather during acquisition: if clouds block the sunlight, cloud shadows appear on the image. In forest areas in particular, the weather changes quickly and cloud shadows are common. Discarding images with cloud shadow areas would greatly increase the cost of image acquisition, so it is important to select a proper method to improve the classification accuracy of hyperspectral images with cloud shadow. Since the cloud shadow area does not receive direct solar illumination, its pixel values are lower than those of non-shadow areas under the same imaging conditions, as the spectral curves clearly show. When the forest species were classified using the reflectance image, the overall accuracy and Kappa coefficient were 0.96 and 0.95 in the cloud-free area, but only 0.72 and 0.68 when the tree species were classified using the hyperspectral image with cloud shadow. This shows that cloud shadow reduces the classification accuracy of forest tree species. Traditional methods for handling cloud shadow mainly use a threshold to identify the shadow area and then compensate the detected shadow pixels with the values of the same object class elsewhere; however, the pixels used for compensation may not represent the true spectral features of the objects in the shadow area, so an image compensated in this way is no longer suitable for tree species classification. Vegetation indices and texture features represent the characteristics of the objects themselves; the selected vegetation indices are calculated from bands between the red and near infrared.
This can eliminate the influence of clouds to some extent, thereby improving the classification accuracy of tree species.
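The traditional threshold-based shadow detection mentioned above can be illustrated with a minimal sketch: shadow pixels are flagged where a near-infrared band falls below a brightness threshold. The function name and the default threshold (half the scene mean) are illustrative assumptions, not the method of any cited paper.

```python
import numpy as np

def cloud_shadow_mask(nir_band, threshold=None):
    """Boolean mask of probable cloud-shadow pixels: pixels darker than the
    threshold in a near-infrared band. If no threshold is given, a simple
    fraction of the scene mean is used (illustrative only)."""
    nir = np.asarray(nir_band, dtype=float)
    if threshold is None:
        threshold = 0.5 * nir.mean()
    return nir < threshold
```

In practice the threshold would be tuned per scene; the point of the paper is precisely that classifying via illumination-insensitive indices avoids depending on such a compensation step.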
5 Conclusion and Prospects

This paper classifies airborne hyperspectral images containing cloud shadows. The SVM was adopted to classify the reflectance image and the recombined feature image. Compared with the classification result of the reflectance image, the combination of vegetation indices and texture information significantly improves the classification accuracy, including that of each individual tree species. The confusion matrices also show that when the reflectance image is classified, Pinus koraiensis is misclassified as Pinus sylvestris, whereas with the vegetation indices this misclassification is greatly reduced. We conclude that forest under cloud shadow can be classified based on the narrow-band vegetation indices (NDVI705, mSR705, mNDVI705, VOG1, VOG2, REP) and texture information, and that the classification result is better than that of the reflectance image. However, this paper uses only narrow-band vegetation indices and not other vegetation indices, such as broad-band
vegetation indices, the light utilization index and the canopy water content index. In subsequent classification work, all vegetation indices can be combined to select the optimal indices for classification, which can further improve the tree species classification accuracy.

Acknowledgement This paper is supported by the National Key Research and Development Program of China "Multi-scale larch plantation growth prediction".
References
1. Tong, Q.X., Zhang, B., Zheng, L.F.: Hyperspectral Remote Sensing, pp. 364–373. Higher Education Press, Beijing (2006)
2. Xue, D.Y.: Application status and prospect of remote sensing technology in forestry. Sci. Technol. Vis. 21, 309–311 (2014)
3. Dian, Y.Y., Li, Z.Y., Pang, Y.: Spectral and texture features combined for forest tree species classification with airborne hyperspectral imagery. J. Indian Soc. Remote Sens. 43, 1–7 (2014)
4. Jia, W., Pang, Y., Yue, C.R., Li, Z.Y.: Tree classification of mountain forest based on AISA Eagle II airborne hyperspectral data. For. Invent. Plan. 40, 9–14 (2015)
5. Dalponte, M., Bruzzone, L., Gianelle, D.: Tree species classification in the southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 123, 258–270 (2012)
6. Féret, J., Asner, G.P.: Tree species discrimination in tropical forests using airborne imaging spectroscopy. Geosci. Remote Sens. 51, 73–84 (2013)
7. Gao, X.J., Wan, Y.C., He, P.P., Yang, Y.W.: Automatic removal of cloud shadow in single aerial images. J. Tianjin Univ. Sci. Technol. 47, 771–777 (2014)
8. Wang, Y., Wang, S.G.: High resolution remote sensing image shadow monitoring and compensation of principal component analysis (PCA). J. Appl. Sci. 28, 136–141 (2010)
9. Pang, Y., Li, Z., Ju, H., Lu, H., Jia, W., Si, L., Guo, Y., Liu, Q., Li, S., Liu, L., Xie, B., Tan, B., Dian, Y.: LiCHy: the CAF's LiDAR, CCD and hyperspectral integrated airborne observation system. Remote Sens. 8, 398 (2016)
10. Gitelson, A.A., Merzlyak, M.N.: Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves. Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 143, 286–292 (1994)
11. Sims, D.A., Gamon, J.A.: Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 81, 337–354 (2002)
12. Vogelmann, J.E., Rock, B.N., Moss, D.M.: Red edge spectral measurements from sugar maple leaves. Int. J. Remote Sens. 14, 1563–1575 (1993)
13. Huang, J., Wang, Y., Wang, F., Liu, Z.: Red edge characteristics and leaf area index estimation model using hyperspectral data for rape. Trans. CSAE 22, 22–26 (2006)
14. Hu, G.S., Qian, L., Zhang, G.H.: Research on multi-classification algorithm of support vector machine. Syst. Eng. Electron. 28, 127–132 (2006)
15. Mercier, G., Lennon, M.: Support vector machines for hyperspectral image classification with spectral-based kernels. IEEE Int. Geosci. Remote Sens. Symp. 3, 21–25 (2003)
16. Bazi, Y., Melgani, F.: Toward an optimal SVM classification system for hyperspectral remote sensing images. IEEE Trans. Geosci. Remote Sens. 44, 3374–3376 (2006)
Optical Design of a Simple and High Performance Reflecting Lens for Telescope

Jingjing Ge (*), Yu Su, Yingbo Li, Chao Wang, Jianchao Jiao, and Xiao Han
Key Laboratory for Advanced Optical Remote Sensing Technology of Beijing, Beijing Institute of Space Mechanics and Electricity, Beijing, China [email protected]
Abstract. In this chapter, a new optical design with two reflective mirror elements and a compressed optical path that reflects the beam four times is described. The novel system has a longer focal length and a shorter physical length; it can be fabricated inexpensively and reduces the difficulty of testing. The system is designed with a focal length of 850 mm, an F-number of 3.6, a pixel size of 30 μm × 30 μm and a field of view of 3° × 3°, using an uncooled M/LWIR microbolometer array detector. In the design, four reflections are used to reduce the length of the system. The results show that the system can be utilized efficiently; it is compact and the image quality is favorable.

Keywords: Optical design · Compact telescopes · Reflect mirror · FLIR
1 Introduction

In recent years, much research on simple and novel optical systems for commercial remote sensing has called for compact structures that achieve better performance. The traditional Cassegrain system (a two-mirror system) has been the main choice for researchers [1–3]. However, as the focal length and field of view increase, the imaging performance is limited by aberrations such as field curvature. To obtain larger fields of view, three-mirror [4, 5], four-mirror [6–8] and off-axis [9–12] systems are widely used in lens design, but the size of such systems is not guaranteed to be compact. In this chapter, we present a new compact optical design with a folded, four-reflection optical path. The whole system is lightweight, compact and simple in structure, with high image quality.
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2_40
J. Ge et al.
2 Optical Design

2.1 Requirements and Concept for Design of the Optical Telescope
The final goal of this optical design is a compact system operating in the 3–12 μm band with an uncooled sensor, whose total length is no more than one-eighth of the focal length. To fit in a small space, we chose a folded-path reflective system. Table 1 shows the details of the optical system. The idea for the novel telescope comes from the classical Gregorian telescope. Although the parabolic primary mirror and the elliptical secondary mirror can produce a good image, the image quality and the system length are not good enough for the long focal length and large field of view. We therefore insert two dummy mirrors with refractive power into the system in order to obtain a nearly aberration-free image. Figure 1 shows the whole design idea, and the calculation of the original four-mirror parameters is given in the following equations.
Table 1 The specifications of the optical system

No.  Parameter           Value
1    Focal length (mm)   850
2    Effective F/#       3.6
3    Field of view (°)   3 × 3
4    Wavelength (μm)     3–12
5    Pixel size (μm)     30
6    Length (mm)         161

Fig. 1 An illustration of an embodiment of the compact telescope
Fig. 2 Layout of the system
a = f1 − d/2    (1)

tan(β) = h0/(2f1)    (2)

h1 = 2a·tan(β) = a·h0/f1    (3)

h2 = 2(d − f1)·tan(β) = h0(d − f1)/f1    (4)
where f1 is the focal length of the P-S mirror pair (the primary and secondary mirrors); a is the effective distance between the secondary dummy mirror and the first focal plane of the P-S pair; d is the distance between the primary mirror and the tertiary mirror; β is the beam field angle; and h0, h1 and h2 are the diameters of the primary, secondary and quaternary mirrors. To achieve a compact path, a novel four-mirror system was created, shown in Fig. 2. The primary, secondary and tertiary mirrors are all annular, each using part of a mirror surface. The primary mirror and the tertiary mirror share one piece of mirror with independent areas, and the secondary mirror and the quaternary mirror share another piece with relatively independent areas. The whole design and optimization process was carried out in the Zemax software. Controlling the relatively independent areas is crucial: they must neither intersect nor occlude each other, and the inner and outer mirrors need to match perfectly.

2.2 Results
The final design of the four-fold optical mirrors is given in Fig. 3. The whole system consists of an outer primary mirror, an outer secondary mirror, an inner tertiary mirror and an inner quaternary mirror. The annular primary mirror and annular tertiary mirror can be manufactured in one piece, and the annular secondary mirror and circular quaternary mirror can be manufactured in another piece. It can be seen that the size of the
Fig. 3 Four-mirror optical layout
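Equations (1)–(4) of the first-order layout can be checked numerically. The values of f1, d and h0 below are placeholders chosen for illustration, not the actual design parameters of this telescope.

```python
import math

# Illustrative first-order check of Eqs. (1)-(4); all three inputs are
# assumed values, not the parameters of the 850 mm design.
f1 = 200.0  # focal length of the P-S mirror pair (mm), assumed
d = 250.0   # primary-tertiary mirror separation (mm), assumed
h0 = 100.0  # primary mirror diameter (mm), assumed

a = f1 - d / 2.0                        # Eq. (1)
beta = math.atan(h0 / (2.0 * f1))       # Eq. (2)
h1 = 2.0 * a * math.tan(beta)           # Eq. (3), equivalently a*h0/f1
h2 = 2.0 * (d - f1) * math.tan(beta)    # Eq. (4), equivalently h0*(d-f1)/f1
```

The two closed forms agree because substituting tan(β) = h0/(2f1) into the geometric expressions reproduces a·h0/f1 and h0(d − f1)/f1 exactly.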
Fig. 4 (a) Calculated MTFs for several fields of view. (b) Calculated MTFs up to the diffraction limit of the sensor
secondary mirror is smaller than the tertiary mirror, because the F# is small and the field angle is somewhat larger. The whole length of the optical system is 108 mm. Figure 4 shows the layout and the calculated MTF. The MTF curves for fields of view up to 3° × 3° are shown in Fig. 4a. Although the system reaches the diffraction limit, the MTF values are not very high, because the annular mirrors reduce the MTF at the middle and high spatial frequencies. There is a
Fig. 5 The wave-front of the compact telescope at different fields of view and different wavelengths. (a) The wave-front at 0° × 0°. (b) The wave-front at 2.31° × 2.31°. (c) The wave-front at 3° × 3°
Fig. 6 The field curvature and the distortion of the system
long flat region in the MTF curve. We give two types of MTF curves: one over the whole spatial frequency range, and the other ending at the Nyquist limit (17 cyc/mm). Figure 5 shows the wave-front of the compact telescope for different fields of view (0° × 0°, 2.31° × 2.31°, 3° × 3°) and different wavelengths (3 μm, 5 μm, 8 μm, 12 μm). Figure 6 shows the field curvature and the distortion of the compact telescope. The MTF, wave-front, field curvature and distortion curves all perform well.
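The quoted Nyquist limit follows directly from the detector pitch: for the 30 μm pixels of Table 1, f_Nyquist = 1/(2 × pitch).

```python
# Detector Nyquist frequency for the 30 µm pixel pitch given in Table 1:
# f_Nyquist = 1 / (2 * pitch) ~= 16.7 cyc/mm, consistent with the
# ~17 cyc/mm cutoff used for the second MTF curve.
pixel_pitch_mm = 0.030
f_nyquist_cyc_per_mm = 1.0 / (2.0 * pixel_pitch_mm)
```

Any spatial frequency above this value cannot be sampled without aliasing by the microbolometer array, which is why the second MTF plot is truncated there.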
3 Conclusion

A new compact telescope with a lightweight uncooled M/LWIR imager has been created. The optical design, with two reflective mirror elements and a compressed optical path that reflects the beam four times, has been described. The system is designed with a focal length of 850 mm, an F-number of 3.6 and a pixel size of 30 μm × 30 μm in the 3–12 μm band, with a large field of view of 3° × 3°, using an uncooled M/LWIR microbolometer array detector. The path is so compact that the system length of 108 mm is only about one-eighth of the focal length. It is shown that the optical system can be utilized efficiently; it is compact and the image quality is favorable.
References
1. Lin, Y.Y., et al.: Design of compact integral structure of two-mirror system. Opt. Precis. Eng. 21(3), 561–566 (2013)
2. Pan, J.H.: Research on the field corrector design for the R-C system of the large telescope Cassegrain focus. Opt. Precis. Eng. 10(3) (2002)
3. Yan, P.P., Fan, X.W.: Optical design and stray light analysis of R-C system. Infrared Technol. 33(4) (2011)
4. Zhaojun, L.: Study on coaxis three-mirror-anastigmat space camera with long focal length. Spacecraft Recov. Remote Sens. 32(6), 46–52 (2011)
5. Yuan, T., Xiong, Y.J., Wu, H.P.: Design of coaxial three-mirror infrared optical system for space camera. Electrooptic Technol. Appl. 26(2) (2011)
6. Liang, S.T.: Design of a four-mirror optical system with wide field of view. In: Proceedings of SPIE Optical Design and Testing IV, vol. 7849 (2010)
7. Sasian, J.M.: Flat-field, anastigmatic, four-mirror optical system for large telescopes. Opt. Eng. 26(12), 261297 (1987)
8. Puryayev, D.T.: Aplanatic four-mirror system for optical telescopes with a spherical primary mirror. Opt. Eng. 37(8), 2334–2342 (1998)
9. Zhao, W.C.: Design of improved off-axial TMA optical systems. Opt. Precis. Eng. 19(12), 2837–2843 (2011)
10. Moretto, G., Kuhn, J.R.: Off-axis systems for 4-m class telescopes. Appl. Opt. 37(16), 3539–3546 (1998)
11. Jiang, X.: Studies on cooled off-axis reflective optical systems. Doctoral Dissertation, Changchun University of Science and Technology (2016)
12. Draganov, V.: Compact telescope. Patent No. US 6667831 B2, 23 Dec 2003
Index
A Absolute displacement metrology, 247 Absolute distance interferometric techniques absolute metrology and laser heterodyne, 247–249 adaptive analysis, 250–251 error analysis and control, 251 realistic feasibility, 249–250 Acousto-optic tunable filter (AOTF), 150–151 Active-passive hybrid isolation, 281 Active-type isolators, 281 Additive Gaussian white noise, 326 Advanced Geostationary/Geosynchronous Radiation Imager (AGRI), 306 Advanced on-orbit system (AOS) protocol, 49, 50 Aerial photogrammetry technology, 235 Airborne hyperspectral image in cloud shadow area accuracy verification, 395, 396 classification results and analysis, 396 classification workflow, 392 geometric correction and atmospheric correction, 392–393 image classification results, 395 narrow-band vegetation index, 393–394 remote sensing data and filed data obtaining, 391 spectral feature thresholds, 390 study area, 390, 391 supervised classification, 394–395 SVM method, 392 texture analysis, 393–394 texture information, 390 vegetation indices, 390
Airborne thermal infrared imaging spectrometer, 2 AISA Eagle II image, 391 All-spherical catadioptric telescope, for 3U CubeSat missions aberration analysis, 24–25 design concepts, 23–24 design layout of, 26 distortion of, 28 longitudinal aberration of, 26, 28 MTF, 26 spot diagram of, 26, 27 tolerance analysis, 29–30 Amplitude impact analysis, 354 Asteroids, 153–154 spectrometer techniques, 156–157 spectrum detection, 154 Atmospheric correction for GF-4 satellite, 299 hyperspectral data processing, 392–393
B Background noise, 180 Background radiation, 7 Band selection methods, for hyperspectral data clustering and genetic algorithm, 381–382 hyperspectral dataset of broadleaf, 383–385 Salinas hyperspectral dataset, 385–386 supervised, 380 unsupervised, 380–381 Belt and Road Initiative remote locations, 13 remote sensing resources, 13–15 space infrastructure, 12
© Springer Nature Switzerland AG 2020 H. P. Urbach, Q. Yu (eds.), 5th International Symposium of Space Optical Instruments and Applications, Springer Proceedings in Physics 232, https://doi.org/10.1007/978-3-030-27300-2
Belt and Road Initiative (cont.) space resources, 13 terrain and landscape environment, 12 Bilinear interpolation method, 351 Blazed grating, structure of, 198 Broadleaf hyperspectral dataset, 383–385
C Calibration-based correction algorithm, 336 Camera LOS thermal analysis, 141, 144 thermal elastic analysis displacement cloud diagram, 140, 142 finite element model, 140, 141 long-term and short-term variation, 140 optical component, displacement of, 140, 142–143 TDICCD panchromatic multispectral cameras, 138 temperature cloud map, 140, 141 temperature load, 138–140 thermal desktop, 140 CCD, see Charge-coupled device Central detector, 237, 238, 240–241 Chandra telescope, 35 Chang’e-1, 149 Charge-coupled device (CCD) architecture active pixel configuration and image section structure, 288–289 storage section structure, 289 total device structure, 289–291 characterization, 292 packaged, 290, 291 pixel size, 162 split-frame transfer, 289 temperature verification test (see Time delay integration CCD) total layout, 290, 291 Chemical vapor deposition (CVD) process, in C/SiC composite, 216 Chemical vapor infiltration (CVI) process, in C/SiC composite, 216 Chongqing Optical-Electrical Research Institute (COERI), 288 Churilovskii, V.N., 22 Classical Gregorian telescope, 400 Climate Absolute Radiance and Refractivity Observatory (CLARREO) program, 264 Climate change research, 18–19, 263 Cloud detection, satellite images
spectral feature-based methods, 363 SVM-based cloud detection (see SVM-based cloud detection) texture feature-based methods, 364 Clustering band selection on, 381–382 best classification accuracy, 384, 386 statistic results, 383, 384, 386 Coded aperture snapshot imaging spectrometer, 176 Coded aperture X-ray optics, 187 Coefficient of thermal expansion, 219 Coherent lidar vs. incoherent detection, 205, 208 SNR, 209 Collimated type X-ray optics, 187 Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), 149 Computational imaging image processing design, 119–120 optical system design, 119 problem-oriented, 119 Computed tomography imaging spectrometer, 176 Control moment gyros (CMGs) isolation system micro-vibration test, 283–284 model build and analysis, 282–283 vibration test, 284–285 Correlation matching, 273–275 Crab Nebula soft X-ray band radiation, 194 C/SiC composite in space camera applications, 213–214 integrated main bearing structure scheme, 214 manufacture process, 216, 217 properties, 213 structural design and optimization, 214–215 thermal stability test, 218–219 Cube Satellite (CubeSat), all-spherical system, see All-spherical catadioptric telescope, for 3U CubeSat missions
D Dark noise, 180 See also Dark signals Dark signals in atmospheric heat cycle box, 130 mean value, 131–133 root mean square values, 132, 133 Data sampling processing technology, 309 Deep neural network (DNN), 327
Index Deep residual network with parameter estimation (DRN-PE) denoiser, 328 experiments, 329–332 noise types, 326 parameters estimator, 328 perceptual loss, 329 proposed model, 327 residual learning, 328–329 skip connections, 329 Deep space exploration definition, 148 hyperspectral imaging in dispersion imaging spectrometer, 151–152 imaging spectrometer using filter, 150–151 interference imaging spectrometer, 152–153 lunar surface, materials on, 149 physical and chemical properties of substances, 148, 149 Pioneer-5, 148 small solar system bodies detection spectrum detection, 153–154 visible and infrared imaging spectrometer, 155–157 Denoising convolutional neural networks (DnCNNs), 327 Densification, C/SiC composites, 216, 217 Depth of focus, 166 Detector, 43–44 Discriminant analysis classifier, 384 Dispersion imaging spectrometer, 151–152, 176 Distortion curve, 26 Donier Satellite Systems, 214 Doppler shift frequency, 208 by lidar systems, 205, 207–208 Double-wavelength double heterodyne interferometer system, 249 Driver bootstrap technology, SpaceWire broadcast functions, 52 network discovery function, 51–52 network topology, 52 service configuration, 52 DRN-PE, see Deep residual network with parameter estimation Dynamic range, of TDICCD, 131–133 Dyson thermal infrared spectrometer, 102
407
E Edge detector, 238, 241, 242 Electro-optic modulation spectroscopic device, 150 Emergency disaster reduction and relief, 16–17 Error Weighted of Orientation (EWO) algorithm, 112
F Feature discrimination, 364, 368 FEM, see Finite element model FengYun-4 (FY-4-01), instruments in, 287 Fermat’s principle, 120 Finite difference time domain (FDTD), 198, 202 Finite element model (FEM) camera LOS, 140, 141 jitter spectrum, 60, 61, 67 satellite micro-vibration analysis, 72-73 F Number, 164–166 Fourier transform imaging spectrometer, 152 Fourier transform spectrometer (FTS), 152 French Remote Sensing Satellite System, 16 Frequency impact analysis, 355–358 Full width at half maximum (FWHM), 271
G Gaussian fitting, 271 Genetic algorithm (GA) band selection on, 381–382, 384–387 convergence plot, 384–387 GeoEye-1 satellite, 16 GeoEye-2 satellite, 16 Geometric correction, 392–393 Geostationary/Geosynchronous Interferometric Infrared Sounder (GIIRS), 306 German Remote Sensing Satellite, 16 GF-4 satellite characteristics, 296 and data characteristics, 296–297 multispectral image quality based superresolution enhancement classification accuracy, 302 evaluation technology flow, 298, 299 image classification results, 299, 301 image quality evaluation, 298, 300 PMS false color images, 298, 301 radiation calibration and atmospheric correction, 299 super-resolution reconstruction, 296 technical process, 297–298
GOES-R satellite, 306 Grating spectrometer, 151–152 Grazing incidence X-ray optics multilayer nested grazing incidence focusing optics design, 189–191 effective area evaluation, 194–195 focusing performance analysis, 191, 192 manufacturing and testing, 191–193 SDD quantum efficiency, 194 target source characteristics, 194 pulsar navigation, requirements on, 188–189 X-ray point source, 188 Grey-level co-occurrence matrix (GLCM) features, 365–371 Ground compensation, infrared images, see Infrared images, ground compensation Ground pixel resolution (GPR), 93 Ground sampled distance (GSD), 160
H Hertzian contact stress, 224–225 Hertzian contact theory, 224–225 Heterodyne detection, in lidar system, 207–208 Heterodyne interferometer, 247–248 High frame rate CCD, LMI, see Charge-coupled device Holographic lithography, 203 Hyperspectral image data band selection methods clustering and genetic algorithm, 381–382 hyperspectral dataset of broadleaf, 383–385 Salinas hyperspectral dataset, 385–386 supervised, 380 unsupervised, 380–381 dimensionality reduction methods, 379 intrinsic drawback, 379 Hyperspectral imager, 264 Hyperspectral imaging spectrometers application areas, 197 convex grating Offner imaging spectrometer, 197 partition multiplexing convex gratings (see Partition multiplexing convex gratings) visible-shortwave infrared imaging spectrometers, 197–198
I IKONOS, 16 Image classification airborne hyperspectral image in cloud shadow area, 395 GF-4 satellite, 299, 301 Image denoising deep neural network, 327 denoising convolutional neural networks, 327 DRN-PE denoiser, 328 experiments, 329–332 noise types, 326 parameters estimator, 328 perceptual loss, 329 proposed model, 327 residual learning, 328–329 skip connections, 329 neural network-based methods, 327 stacked sparse denoising autoencoder, 327 traditional methods, 326–327 wavelet techniques, 326 WDSUV model, 326 Image fusion multispectral image (see Multi-band image fusion) observation goal, 315 remote image fusion method, 316 wavelet transform, 315–316 Image interior element, 236 Image processing design, 119–120 Imaging spectrometer dispersion, 151–152 imaging system, 266, 267 interference, 152–153 optomechanical system, 265 overall structural composition diagram, 266 principle, 264–265 spectroscopic system and detection system, 266–267 spectrum test, 270–272 test of imaging system effective focal length, 268 MTF, 268 rear working distance, 268, 269 test of spectroscopic system optical component mounting, 269 pinhole slit test, 269–270 using filter, 150–151 visible near-infrared band, 264 Imaging spectrometry, 148 See also Imaging spectrometer
Improved Kalman filtering algorithm research process, 339–340 Improved moment matching algorithm research process, 340–341 Incoherent lidar vs. coherent detection, 205, 208 SNR, 209 Indian mapping and imaging satellites, 13–14 Infrared images, ground compensation calibration-based correction algorithm, 336 evaluation criteria, 341–342 Kalman filtering characteristics and applicability analysis, 345, 346 experimental results, 342–344 improved algorithm research process, 339–340 traditional stripe removal algorithm, 336–338 key technology and technological advancement, 346 moment matching characteristics and applicability analysis, 345, 346 experimental results, 343–345 improved algorithm research process, 340–341 traditional fringe removal algorithm, 339 on-satellite calibration, 335–336 scene-based non-uniformity correction algorithm, 336 technical indicators, specification of, 342 InGaAs-PIN photodiode field effect transistor (pinFET), 209–211 Instantaneous field of view (IFOV), 156 Integral field spectrometer with coherent fiber bundles, 176 with faceted mirrors, 176 with lenslet array, 176 Integrated design configuration and layout design, 83–85 integrated mechanical design, 81–83 micro-vibration suppression designs, 87–88 structural mechanics design, 85–87 Integrated Modeling Environment, 59–60 Interference imaging spectrometer, 152–153 International System of Units (SI), 264 Ion beam etching, 200 Italian Radar Satellite System, 16
J JITTER1.0, 63 Jovian Infrared Auroral Mapper (JIRAM), 149 JunoCam (JCM), 149
K Kalman filter, 274–276 Kalman filter-based non-uniformity correction algorithm, 338 Kalman filtering infrared image radiation quality improvement method characteristics and applicability analysis, 345, 346 experimental results, 342–344 improved algorithm research process, 339–340 traditional stripe removal algorithm, 336–338 Kalman filtering stripe removal algorithm, 336–338 KazEOSat-1, 14
L LAPAN-ORARI satellite, 14 LAPAN-TUBSAT, 14 Large diameter, thin-wall ball bearing on filter wheel mechanism accelerated life test data processing unit, 228 experimental setup, 228, 229 life test characteristic, 224 mechanism flowchart, 228 on-orbit operating characteristic, 226 orientation accuracy, 228 procedure, 226, 227 design description excessive friction torque, 223 flexure pre-load ring structure, 225, 226 Hertzian contact theory, 224–225 main parameters, 224 sputtered MoS2, 224 torque margin, 225 experimental data analysis energy dispersion spectrum (EDS) analysis, 230–232 friction torque test results, 229, 230 participation number, 232 scanning electron microscope (SEM) analysis, 230, 231 test prototypes, step lost trend of, 229 Weibull distribution theory, 232 Large-scale pixel, 159 Large synoptic survey telescope (LSST), 167, 168 Laser dual-frequency interferometers (LDFI), 218 Laser heterodyne method, 247–249 Laser interferometry, 246, 247 Lidar system
Lidar system coherent, 205 definition, 205 detector, 207 equation, 207 heterodyne detection in, 207–208 incoherent, 205 pinFET, numerical simulation of, 209–211 receiver, 207 schematic, 206 SNR, 208–209 transmitter, 206 types, 205 Lightning Mapping Imager (LMI), 287 CCD architecture, 288–291 characterization, 292 Linear variable filter, 150 Line of sight (LOS) camera (see Camera LOS) and image motion, 136 positioning accuracy, principle affecting, 136–137 sensitivity matrix, 137–138 LMI, see Lightning Mapping Imager Lobster-eye optics, dark matter detection Chandra telescope, 35 coating design, 39–41 design principle, 38 detector, 43–44 experimental and theoretical exploration, 33 imaging test, 44–45 mass and mixing angle of inert neutrino, 36 micro-hole optical array system, 39 prototype, 43 signal-to-noise ratio, 37 2 × 2 spherical lens array, 39 testing program, 43 Wolter I X-ray collecting system, 34 XMM-Newton telescope, 35 Lobster eye X-ray optics, 187 Local binary pattern (LBP), 367–368 Longitudinal aberration, 26 Long-life space-borne filter wheel accelerated life test, thin-wall bearing, 226–229 design description, 222–224 on-orbit operating characteristic, 226 Long wave infrared imaging spectrometer cold stop, of focal plane array detector, 107–108 imaging and spectral analysis, 102 LWIR hyperspectral systems, 102
MERTIS, 102 SEBASS, 102 SIELETERS, 102 slit image, 102 stray light analysis, 103, 106–107 system design, 103, 104–105 whole cooling opto-mechanical parts, 109–110 LOS, see Line of sight
M Main belt comets detection, 154 Mangin mirror, 22 Mapper optical subsystem (VIRTIS-M), 152 Mars Reconnaissance orbiter (MRO), 149 MATLAB program, 73 Mean square error (MSE), 124 Measurement accuracy of satellite attitude, 253 of single star, 257 Medium wave infrared (MWIR) bands, 153, 155, 156 Mercury-argon lamp calibration, 270, 271 Michelson interferometer, 152 Michelson interferometry, 247 Micromirror array definition, 177 numerical aperture of, 182 pre-tilt angle of, 179 sub-mirrors, 177, 178 Micron-scale detector, 160 Micro-vibration acceleration response, 71–72 components, 69 environment parameters measurement data sampling processing technology, 309 errors of optical payloads, 309 GOES-R satellite, 306 in-orbit data analysis, 309–310 in-orbit measurement systems, 306 loads with high precision in orbit, 312–313 low noise and high precision acquisition technology, 309 MTG satellite, 306 overall scheme design, 307, 308 principle diagram, 307, 308 validity verification of, 310–311 variable range technology, 308–309 whole satellite in-orbit, 311–312 finite element model, 72–73
imaging quality analysis, 75–76 isolator, CMG isolation system model build and analysis, 282–283 verification, 283–286 mechanism, 69 micro-vibration ground test, 70–71 remote sensing error compensations, 306 rotation angle of mirrors, 73–75 single running and simultaneous running, 72 on spacecraft, 305 TDICCD camera image radiation quality amplitude impact analysis, 354 evaluation indexes, 352 frequency impact analysis, 355–358 model establishment, 350–351 number of stage impact analysis, 355, 357, 359, 360 primary and secondary relationship and interaction relationship, 350 rule application, 357, 361 simulation input, 352–353 vibration suppression, 306 Mineral resources survey, 17 MINI-TES, 152, 153 MIRI cryogenic filter wheel, 222 Modulation transfer function (MTF), 26, 66–67, 122–123 detection system, 164 imaging spectrometer, 268 optical system, 163–165, 172 optical telescope, reflecting lens, 402 of snapshot spectral camera, 184 Moment matching fringe removal algorithm, 339 Moment matching infrared image quality improvement method characteristics and applicability analysis, 345, 346 experimental results, 343–345 improved algorithm research process, 340–341 traditional fringe removal algorithm, 339 Momentum wheel micro-vibration (see Micro-vibration) remote sensing camera, influence of frequency domain analysis of, 64–65 optical model, 62–63 satellite control system, 61–62 structural model, 60, 61 structure–optic interface program, 63 time domain analysis of, 65–66 vibration source model, 60, 61 simulation image of, 77
MTF, see Modulation transfer function MTG satellite, 306 Multi-band image fusion IHS transform and wavelet transform algorithm process, 316–317 fusion criterion, 317–318 vs. IHSWT fusion method, 321 proposed fusion method, 317 SRF curve correlation coefficient, 321 entropy, 321 of example camera, 320 fusion results, 321 objective evaluation, 321, 322 original images and degraded image, 320 PAN component calculation, 318–319 spectral retention quantitative results, 323 Multi-detector stitched aerial camera accuracy, 236 step-by-step exact angle measurement calibration of central detector, 237, 238, 240–241 of edge detector, 238, 241, 242 error analysis, 239 experiment design, 240 image interior element, 236 principle of, 237 Multilayer nested grazing incidence focusing optics, 189 design, 189–191 effective area evaluation, 194–195 focusing performance analysis, 191, 192 manufacturing and testing, 191–193 SDD quantum efficiency, 194 target source characteristics, 194 Multi-layer perceptron (MLP), 327 Multipinned phase (MPP) mode operation, 288 Multi-scale gaze stitching, 159 Multi-shot stitching, 159 Multispectral image, 315 Multispectral image quality based super-resolution enhancement classification accuracy, 302 evaluation technology flow, 298, 299 image classification results, 299, 301 image quality evaluation, 298, 300 PMS false color images, 298, 301 radiation calibration and atmospheric correction, 299
N Narrow-band vegetation index, 393–394 NASA’s Juno spacecraft, 149 National Aerospace Research Institute of Indonesia, 14 NETD, see Noise equivalent temperature difference Neutron Star Interior Composition Explorer (NICER), 188 Noise equivalent temperature difference (NETD) physical mechanism of, 3–5 simulation curve of, 8 thermal infrared imaging spectrometer airborne thermal infrared imaging spectrometer, 2 background radiation, 7 cryogenic temperature, 2 detection sensitivity, 2 Ebackground, 7 MAKO and MAGI, 2 opto-mechanical system, Offner-type imaging spectrometer, 6 spectrometer’s optical and mechanical elements, 6 Non-uniformity correction, 336 Normal incidence X-ray optics, 187–188 Number of stage impact analysis, 355, 357, 359, 360
O Off-axis three mirror system, 166 Offner-type hyperspectral spectrometers, 197 Offner-type imaging spectrometer, 6 On-axis three mirror optical system design indexes, 166, 167 LSST, 167, 168 Mersenne–Schmidt, 166, 167 performance analysis, 168–171 On-orbit mission plan, 246 On-orbit operating torque, 225 OPD, see Optical path difference Optical interferometry, 247 Optical local-oscillator shot noise, 209 Optical path difference (OPD) at exit pupil, 121 wave front, 122 Optical remote sensing satellites, 81–82 Optical SWaP computational imaging method Fermat’s principle, 120 finite object distance imaging, 120 goals of, 120
lens specifications, 125 MTF, 122–123 OPD, 121, 122 optics integration, 120 PSF, 122 radial polynomial, 121 RMS, 125 simulation end-to-end imaging model, 123–124 implementation process, 124–125 MSE, 124 vs. traditional design method, 124–127 wavefront coding, 120 Zernike polynomials, 121, 122 Optical system of LSST, 167, 168 of MINI-TES, 152, 153 MTF, 163–165, 172 of OMEGA, 151 small pixel size detector, 163–164 designing, 164–166 of on-axis three mirror, 166–171 specifications of, 400 Optical telescope, reflecting lens for embodiment of, 400 field curvature and system distortion, 403 folded path, 399 four-fold optical mirrors, 401, 402 MTFs, 402 optical design, 400–401 system layout, 401 wave-front, 403 Opto-mechanical devices, 221 OrbView-3, 16 OSIRIS-REx Thermal Emission Spectrometer (OTES), 149 OSIRIS-REx Visible and Infrared Spectrometer (OVIRS), 149, 150 Oversampling enhancement method, terahertz image bilateral filter, 374–375 experiment and analysis, 375–377 mathematical model, 374 multi-observation images, 374 super-resolution, 374
P Panchromatic image, 315 Pansharpening method, 316 Partition multiplexing convex gratings blazed grating structure, 198 diffraction efficiency
experimental setup for, 201, 202 finite difference time domain, 198, 202 spatial frequency, 198 vs. wavelength, 202, 203 wavelength with different blaze angles, 198, 199 fabrication, 200–202 Passive-type isolators, 281 Patran/Nastran software, 65 Phase-shifting multi-wavelength dynamic interferometer, 251 Phase shift interference (PSI) mode, 193 Photoelectric autocollimators (PEAC), 218 Photon noise, 180 Pinhole slit test, 269–270 Pioneer-5, 148 Pixel definition, 159 detector stitching, 159 large-scale, 159 multi-scale gaze stitching, 159 multi-shot stitching, 159 single shot stitching, 159 small pixel, 160 in space remote sensing cameras, 160 Platforms and payload bearing system, 84 star sensors and, 85 Plug and play technology, SpaceWire network architecture design of, 49 ASIM interface module, 55 bus monitoring technology, 50 driver bootstrap technology, 50–53 enhanced SpaceWire router scheme, 52–53 IEEE 1355-1995 and IEEE 1596.3, 49 integrated control software architecture, 53–55 Mars Express and Smart-1, 50 software operation mechanism, 55–56 Point spread function (PSF), 122 Poisson noise, 326 PolyCarboSilane (PCS) fills, 216 Polymer infiltration pyrolysis (PIP), in C/SiC composite, 216 Positioning accuracy LOS principle affecting, 136–137 satellite image, 136 uncontrolled, 135 Pre-processing methods, 364–366 Problem-oriented computational imaging, 119 Prototype, 43 Pulsar navigation, grazing incidence X-ray optics, 188–189
Pulsar Navigation Test Satellite 01 (XPNAV-1), 188–189 Push-broom scanning, 176
Q Quantum well infrared detector (QWIP), 15 QuickBird, 16
R Radiation calibration, for GF-4 satellite, 299 Reading noise, 180 Relative radiation correction accuracy, 342 Remote sensing camera momentum wheel vibration frequency domain analysis of, 64–65 optical model, 62–63 satellite control system, 61–62 structural model, 60, 61 structure–optic interface program, 63 time domain analysis of, 65–66 vibration source model, 60, 61 MTF, impact of, 66–67 Remote sensing images airborne hyperspectral (see Airborne hyperspectral image in cloud shadow area) denoising deep neural network, 327 denoising convolutional neural networks, 327 DRN-PE (see Deep residual network with parameter estimation) neural network-based methods, 327 stacked sparse denoising autoencoder, 327 traditional methods, 326–327 wavelet techniques, 326 WDSUV model, 326 and field data, 391 positional accuracy, 135–136 terahertz imaging (see Terahertz image, oversampling enhancement method) traditional Cassegrain system, 399 tree species classification (see Tree species classification) Remote sensing satellites SuperView-1 satellite’s resolution, 349 TDICCD camera images (see TDICCD camera image radiation quality, micro-vibration)
Ritchey-Chretien Cassegrain system achromatic principle, of rear lens group, 94–96 coaxial two-mirror system, 93–94 large aperture and obscuration ratio, 92 parameters of, 94 See also Space remote sensor, R-C system RIULBP, see Rotation invariant uniform local binary pattern Root mean square (RMS) spot diagram, 97, 98 of TDICCD, 131–133 of traditional optical design method, 125 Rotation invariant uniform local binary pattern (RIULBP), 365, 366, 368–371 Russian remote sensing satellites, 13 Russian Zenit-2 rocket, 14 Rutherford Appleton Laboratory (RAL), 14
S Salinas hyperspectral dataset, 385–386 Salt and pepper noise, 326 Satellite image positioning accuracy, 136 Satellite observations, of climate change, 263 Saudi Arabian satellite, 14 Scene-based non-uniformity correction methods, 336 SDD quantum efficiency, 194 Semi-active isolation, 281 Sensitivity matrix, LOS, 137–138 Short wave infrared (SWIR) band, 155 Shot-noise-limited coherent lidar, 208 Signal-to-noise ratio (SNR) lidar system, 208–209 snapshot spectral camera, 180 Simpson integral method, 351 Single landscape width (SLW), 93 Single shot stitching, 159 Small pixel size detector vs. big pixel size detector, 162 CMOS detectors, 160, 161 detection system, 164 development, 160–162 ground sampled distance, 160 optical system, 163–164 designing, 164–166 of on-axis three mirror, 166–171 space camera, 160, 161 space remote sensing, 162–163 star sensor, 163 Small solar system bodies detection low-altitude orbital exploration, 153 spectrum detection asteroids, 153–154
main belt comets detection, 154 visible and infrared imaging spectrometer asteroid detection spectrometer, 156–157 specification design, 155–156 Snapshot spectral camera, micromirror arrays axial distance, 179 CCD230–42 photodetector, 178 collimating lens minimum numerical aperture of, 182 parameters of, 182 concept design, 177 depth of focus, 179 dispersion imager, optical layout of, 183 imager, parameters of, 182 main parameters, 178 MTF curves of, 184 pupil array, 181, 182 quantum efficiency curve of detector, 178, 179 relay imaging diagram, 181 SNR, 180–181 spectral information and spatial information, 176 telephoto objective lens imaging diagram, 178, 179 telephoto objective, parameters of, 182 zenith angles, 180, 181 SNR, see Signal-to-noise ratio Space-borne cameras ball bearing lubrication, 221 opto-mechanical devices, 221 solid lubrication materials, 221 Space camera CCD detector’s temperature effect (see Time delay integration CCD) C/SiC composite in (see C/SiC composite in space camera) small pixel size detector, 160, 161 Space Data Systems Advisory Committee (CCSDS), 49 Space Environment Package (SEP), 287 Space optical imaging system, terahertz imaging, 373, 374 Space remote sensing camera micron-scale detector, 160 rolling wheel method, 222 Space remote sensor, R-C system calculation of system parameters, 93 catadioptric optical system, 92 characteristics, 91–92 design requirements, 92 image quality evaluation, 97–98 layout and parameters of, 96–97 optical design principle of, 93–96
reflective optical system, 92 refractive optical system, 92 SpaceWire, see Plug and play technology, SpaceWire network Spatial radiation reference, 264 Spectral feature thresholds, 390 Spectral imaging technology applications, 176 definition, 175 scanning spectral imaging systems, 176 snapshot spectral imaging systems, 176 Spectral response function (SRF) curve, multi-band image fusion correlation coefficient, 321 entropy, 321 of example camera, 320 fusion results, 321 objective evaluation, 321, 322 original images and degraded image, 320 PAN component calculation, 318–319 spectral retention quantitative results, 323 Spectroscopy for the Investigation of the Characteristics of the Atmosphere of Mars (SPICAM), 150 Spot diagram, 26 Sputtered MoS2, in thin-wall ball bearing, 224 SRF curve, see Spectral response function curve Stacked sparse denoising autoencoder (SSDA), 327 Star camera stereo mapping satellite, 112, 117 sub-arcsec (see Sub-arcsec star camera) Stereo mapping double-vector attitude error modeling determination error, 113 front and back camera coordinate system, 114 STC bore-sight error, 113 transition matrix, 112 error weighted orientation accuracy modeling, 114–115 simulation results and analysis, 116–117 Stray light analysis, 103, 106–107 relay lens connection, 108 self-thermal radiation and components, 109 Sub-arcsec star camera analysis of mission, 254 configuration, 255 focal plane precision assembly, 255–256 general design scheme, 254–255 laboratory calibration, 256–257 7.5 Mv star on orbit image, 257, 259 NEA error of, 258, 259
overall error of, 258, 260 picture of, 258 pointing accuracy, 254 quaternion error, 257 test on orbit, 257–260 vacuum focal plane setting and testing, 257, 258 Supervised band selection, 380 Supervised classification, 394–395 Surface stereo imager (SSI), 150 Survey drawing, 18 SVM-based cloud detection, combined GLCM and RIULBP texture features experiments results, 369–370 set-up, 368–369 feature discrimination, 364, 368 flow chart, 365 Kernel function, 368 pre-processing methods, 364–366 texture feature extraction, 364, 366–368 SVM method, for tree species classification, 392
T Target tracking correlation matching, 273–274 essential algorithm flow, 275, 276 failure of, 275 improved algorithm flow, 277 Kalman filter, 274–275 result analysis, 278–279 template’s scaling, 276 transfer matrix, 277–278 TDICCD, see Time delay integration CCD TDICCD camera image radiation quality, micro-vibration amplitude impact analysis, 354 evaluation indexes, 352 frequency impact analysis, 355–358 model establishment, 350–351 number of stage impact analysis, 355, 357, 359, 360 primary and secondary relationship and interaction relationship, 350 rule application, 357, 361 simulation input, 352–353 working principle diagram, 350, 351 TDICCD panchromatic multispectral cameras, 138 Terahertz image, oversampling enhancement method bilateral filter, 374–375
Terahertz image experiment and analysis, 375–377 mathematical model, 374 multi-observation images, 374 super-resolution, 374 Texture feature-based methods, SVM-based cloud detection experiments results, 369–370 set-up, 368–369 feature discrimination, 364, 368 GLCM, 365 pre-processing methods, 364–366 RIULBP, 365 texture feature extraction, 364, 366–368 Texture feature extraction, 364, 366–368 THEOS satellite, 14 Thermal deformation, 246 Thermal desktop (TD), 140 Thermal stability analysis of LOS, see Camera LOS 3D control field calibration method, 236 Time delay integration CCD (TDICCD) dark signal and dynamic range, results of, 131–133 experiment procedure, 131 focal plane circuit, and signal processing circuit, 131 scheme, 129–131 test equipment connection, 130 test temperature and thermal insulation time, 130 Time-of-flight method, 247 Tolerance analysis, 29–30 Tree species classification airborne hyperspectral image in cloud shadow area accuracy verification, 395, 396 classification results and analysis, 396 classification workflow, 392 geometric correction and atmospheric correction, 392–393 image classification results, 395 narrow-band vegetation index, 393–394 remote sensing data and field data obtaining, 391 spectral feature thresholds, 390 study area, 390, 391 supervised classification, 394–395 SVM method, 392 texture analysis, 393–394 texture information, 390 vegetation indices, 390 remote sensing, 390 traditional survey method, 390
U Ultraviolet Imaging Spectrograph (UVS), 149 Uncontrolled positioning accuracy, 135 Unsupervised band selection, 380–381 US Remote Sensing Satellite System, 15
V Vacuum testing system, 44 Visible and infrared imaging spectrometer asteroid detection spectrometer, 156–157 observation radiance of, 156 optical system layout, 156, 157 specification design, 155–156 Visible and infrared mineralogical mapping spectrometer (OMEGA), 151 Visible and Infrared Thermal Imaging Spectrometer (VIRTIS), 151 Visible-shortwave infrared imaging spectrometers, 197–198
W Warm dark matter (WDM), 33 Wavefront coding, 120 Weather forecast, 18–19 Weibull distribution theory, 232 Weighted double sparsity unidirectional variation (WDSUV) model, 326 Whisk-broom scanning, 176 Wiener filter algorithm, 125, 127 WorldView-1 satellite, 16 WorldView-2 satellite, 16
X Xi Jinping, 11 XMM-Newton telescope, 35 X-ray astronomy, 187 X-ray detector technology, 33 X-ray optics coded aperture, 187 collimated type, 187 grazing incidence (see Grazing incidence X-ray optics) lobster eye, 187 normal incidence, 187–188 X-ray telescopes, 187
Z Zemax optical design program, 26 Zernike polynomials, 121, 122