BISTATIC SYNTHETIC APERTURE RADAR
BISTATIC SYNTHETIC APERTURE RADAR Jianyu Yang Professor, University of Electronic Science and Technology of China (UESTC), China
Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2022 National Defense Industry Press. Published by Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-822459-5

For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals
Publisher: Glyn Jones
Editorial Project Manager: Naomi Robertson
Production Project Manager: Prem Kumar Kaliamoorthi
Cover Designer: Christian Bilbow
Typeset by STRAIVE, India
Contents

List of symbols
List of abbreviations
Preface

1. Overview of bistatic SAR
   1.1 Imaging principle
   1.2 Configuration classification
   1.3 System composition
   1.4 Performance parameters
   1.5 Development trends
   References

2. Bistatic SAR imaging theory
   2.1 Imaging method
   2.2 Resolution performance
   2.3 Configuration design
   2.4 Echo model
   References

3. Bistatic SAR imaging algorithm
   3.1 Basic tasks of imaging algorithm
   3.2 Bistatic SAR time-domain imaging algorithm
   3.3 Bistatic SAR frequency-domain imaging algorithm
   References

4. Bistatic SAR parameter estimation
   4.1 Motion measurement, parameter calculation, and accuracy requirements
   4.2 Parameter estimation based on echo law
   4.3 Parameter estimation based on iterative autofocus
   References

5. Bistatic SAR motion compensation
   5.1 Source and influence of motion error
   5.2 Motion error tolerance
   5.3 Motion error measurement and perception
   5.4 Motion error control and echo error compensation
   References

6. Bistatic SAR synchronization
   6.1 Space synchronization
   6.2 Time and frequency synchronization
   References

7. Verification of bistatic SAR
   7.1 Test levels and principles
   7.2 Experimental conditions
   7.3 Test plan
   7.4 Experiment implementation
   7.5 Experiment example
   References

Index
List of symbols a A A am, m 5 0, 1⋯ M aR aR ar1 aRx, aRy, aRz aT aT aTx, aTy, aTz ax, ay, az ay1 A() B Ba Bf bm Bn Br C c CP CR CT C f^dr CP(t0) D d dR dT Dα DϕT Dγ G Em Es E[] Ei()
the acceleration vector the area/the amplitude the position vector of scattering point A polynomial coefficient/lower bound of the m-th search interval the acceleration vector of the receiving platform the error rate with the receiving carrier frequency the projection factor from the transmitter to the rR direction the components of the acceleration of the receiver in the three directions of x, y, z the acceleration vector of the transmitting platform the error rate with the transmitting carrier frequency the components of the acceleration of the transmitter in the three directions of x, y, z the component of acceleration vector velocity in three directions of x, y, z the mapping relationship between y and r1 the amplitude function the position vector of scattering point B the Doppler bandwidth the bandwidth of the matched filter the upper bound of the mth search interval the receiver noise bandwidth the transmitted signal bandwidth image contrast the speed of light the slant range mean curve the slant range curve of receiver/circular probable error of receiver space synchronization the slant range curve of transmitter/circular probable error of transmitter space synchronization the image contrast function curvature of time-delay migration in the synthetic aperture the real aperture of the antenna the scale of ground objects the length of the subaperture of the receiver the length of the subaperture of the transmitter the value ranges of parameter α the value ranges of parameter ϕT the value ranges of parameter γ G the main lobe energy the sidelobe energy mathematical expectation the incident power density at the target
Es() exp() f F fc fd fd0 fdc fdc0 fdm fR d fdr f^dr fdrε a fdrε ac fdrε as fdrε
fdt fdT f^dt fdε fP0 FR fR Fr fs FT fT ft ft 0 ftcR ftcT ftR ftT f(t) fd(t) fd(t; x, y) Fi() fi() fP(t) fP(t) fR(t) fT(t) fτ
the received power density at the receiving antenna the exponential function the frequency variable the unit conversion factor the carrier frequency the Doppler frequency difference/Doppler frequency the Doppler frequency caused by heading velocity the Doppler center frequency the baseband Doppler centroid the maximum Doppler frequency the Doppler frequency corresponding to the receiver the Doppler frequency rate the estimated value of fdr the Doppler frequency rate error the Doppler frequency rate error caused by the acceleration of the transmitter and the receiver the Doppler frequency rate error caused by the acceleration along the route the Doppler frequency rate error caused by the acceleration along the direction of sight the derivative of Doppler frequency rate the Doppler frequency corresponding to the transmitter the estimated value of f^dr the Doppler frequency error the Doppler centroid antenna direction factor of the receiver the local oscillator frequency of the receiver the pulse repetition frequency the fast-time sampling frequency antenna direction factor of the transmitter the actual carrier frequency of the transmitting signal the slow-time frequency the slow-time frequency after transformation the Doppler centroids contribution of the receiver the Doppler centroids contribution of the transmitter the Doppler frequency contributed by the receiver the Doppler frequency contributed by the transmitter the frequency function versus time the Doppler frequency function the instantaneous Doppler frequency function function the Doppler frequency the echo Doppler frequency at point P the Doppler frequency function of the receiver the Doppler frequency function of the transmitter the fast-time frequency
fτ 0 f0 F0 GR gr1 GT gy1 G() H H() h(t, τ; x, y) h∗(t, τ; x, y) H0(t) h0(t, τ) hA(t, τ) I ðx, y; xe, e yÞ i, j, k Im[] Inpp J k k K ksx, ksy, ksz kt kx, ky, kz e K k(x, y) K2 l L LA Lat LB LJ Lng Lr Ls M mA(fd) Mamb
the fast-time frequency after transformation the center frequency/ideal carrier frequency the receiver noise coefficient the receive antenna power gain the first-order expansion coefficients of t0T with Δr the transmit antenna power gain the first-order expansion coefficients of t0T with Δr the correlation function the platform height; the equal phase center of the transmitters and receivers the entropy function the normalized echo of the signal scattering point (x, y) the conjugation of h(t, τ; x, y) the range walk correction function two-dimensional LFM signal the echo signal of point A the image domain of several configurations the unit vector representing the direction of x, y, z axis the operation of taking the imaginary part the polarization scattering term the merging series coefficient or constant/wave number the Boltzmann constant/coefficient/Doppler third-order term the range walk slope/Doppler frequency rate the wavenumber component of the scattering direction the ambulation slope of the delay migration trajectory the wavenumber component of the incident direction the dimensionless walk slope the imaging process normalization factor the straight-line slope in the discrete time-frequency plane the independent variable of correlation function relative to the relative length L the transceiver baseline length/the relative moving amount of the target/the relevant length the synthetic aperture length of target A latitude the synthetic aperture length of target B the length of the Jth subaperture longitude the synthetic aperture length the system loss factor the equivalent independent look number/polynomial order/the number of sampling points the inverse Fourier transform modulus of the normalized transmitted and received signal power ratio the Doppler ambiguity number
N Na Nr Nt Nt Nx Ny Nτ Nτ n(t, τ) OR OT Pav Pm Pr pr1(ft, fτ) Ps Pt P? t py1(ft, fτ) P? τ p() p(τd) qr1(ft, fτ) qRr2(ft) qy1(ft, fτ) r rA rB rbi Re[] rect() rLOS rLOSI rLOSII rmax rmin rP(t) rP(t; x, y) rp0 ðtÞ rP0 (t; x, y) RR rR rR Rrr
the sampling number/positive integer the number of slow-time sampling points the number of fast-time sampling points slow-time SNR improvement total number of slow-time sampling points in synthetic aperture time the fast-time scale of imaging scene the angle of imaging scene fast-time echo matrix fast-time SNR improvement random noise the track crosscut point of the receiver the track crosscut point of the transmitter the average power the main lobe peak power the peak transmit power of the receiving antenna the first-order expansion coefficients of Ft with Δr the highest sidelobe peak power the peak transmit power of the transmitter the projection matrix of the Doppler ground resolution to the ground the first-order expansion coefficients of Ft with Δy the projection matrix of the delay ground resolution to the ground the probability density function/the delay signal function the normalized modulus of the delay signal the first-order expansion coefficients of FR with Δr the second-order terms of qr1(ft, fτ) with respect to fτ the first-order expansion coefficients of FR with Δy the slant range the slant range of target A the slant range of target B the sum of bistatic distances the operation of taking the real part the square wave function the slant range error for line of sight the line-of-sight error for the slant range in the imaging area center the spatial variant part of the slant range error along the line of sight the equivalent radius of the transmitting beam the equivalent radius of the receiving beam the mean slant range the slant range mean value resolution of the transmit and receive the first derivative of rP(t) the first derivative of rR(t; x, y) the distance between target and receiver/track crosscuts of the receiving station the slant range between the receiver and the target the track crosscuts to the receiver the distance from the receiver to the beam footprint center of the receiver
Rrt rR(t) rR(t; x, y) rR0 rT rT rT(t) rT(t; x, y) rT0 r0 r(t) r0 (t) rΣ r0R r0T S sa(t) sA(t; x, y) sC(t) sD(t) SFAR SNR SNRd sR[τ] SRFM(ft, fτ; rR0) S(f ) s(t) S(t, fτ) s(t, τ) S(ϕT, ϕR) S∗(f ) s∗(2t) S0(fτ) S2f(ft, fτ) T t Ta td td Tf tn tP(x, y) Tr
the distance from the receiver to the beam footprint center of the transmitter the range history of the receiver the slant range of the receiver the slant range of the transmitter when beam center irradiates target the slant range between the transmitter and the target the track crosscuts to the transmitter the range history of the transmitter the slant range of the transmitter the slant range of the receiver when beam center irradiates target the slant range of the equivalent phase center of the transceiver the migration trajectory the first derivative of the migration trajectory the distance between the transceiver and the target the zero-time slant ranges of the receiver the zero-time slant ranges of the transmitter the area the wave or signal of the scattering point A the antenna pattern modulation function the wave or signal of target C the wave or signal of target D signal-to-first ambiguity ratio the signal-to-noise ratio the degradation of signal-to-noise ratio the transmitting pulse baseband waveform amplitude the two-dimensional frequency-domain reference function of the formula of the center scene echo data the frequency-domain function a wave or signal two-dimensional data matrix of the fast-time frequency domain and slow-time compression time domain two-dimensional time domain data the bistatic masking function the conjugation of S(f ) the inverse conjugation of s(t) signal spectrum two-dimensional data matrix of the fast-time frequency domain and slow-time frequency domain the pulse width the time variable/slow time the synthetic aperture time the delay time the time delay the time width of the matched filter the nth sampling point of slow time the time when the vertex of the skew mean curve appears the pulse repetition period
tR tT ^tPb ^tPR ^tPT t(f ) t0 T0 tref 0T v v vE vF vP vR vR vRx, vRy, vRz vs vT vT vTx, vTy, vTz vx, vy, vz ! vE W W(n) wP(t0) xmn xR(t) xT(t) ym, n yR(t) yT(t) e ym, n zR(t) zT(t) α α αD αR αT α0 β
the time at which the receiver arrives at the track crosscut points the time at which the transmitter arrives at the track crosscut points the stationary phase point of the bistatic SAR the stationary phase point of the receiver the stationary phase point of the transmitter the time function versus frequency mean value of starting and ending time of target irradiated by transmitting and receiving beams the receiver noise temperature the track crosscuts at the launch station to the center of the scene the platform velocity vector the speed of the platform the speed size of the equivalent phase center of the transceiver the speed of the airborne platform the relative radial velocity the speed of the receiver the velocity vector of the receiver the component of the velocity of the receiving platform in the direction x, y, z the speed of the satellite platform the speed of the transmitter the velocity vector of the transmitter the component of the velocity of the transmitting platform in the direction x, y, z the component of vector velocity in three directions of x, y, z the velocity vector of the equivalent phase center of the transceiver the width of the imaging area the Fourier transform of the surface correlation function to the nth power migration of time-delay migration in the synthetic aperture gray value of the (m, n) sampling unit the x coordinate change of the receiving platform the x coordinate change of the transmitting platform discrete representation of two-dimensional data the y coordinate change of the receiving platform the y coordinate change of the transmitting platform discrete representation of two-dimensional data with errors the z coordinate change of the receiving platform the z coordinate change of the transmitting platform the downward angle of the antenna beam the flight direction angle of the transceiver platform the angle between the directions of delay ground resolution and the Doppler ground resolution the elevation angles of the receiver the elevation angles of the transmitter the top view angle of the equivalent phase center of the transceiver the baseline inclination of the transmitter and the receiver
β β βR βR βT βT β_ β1 β2 χ (A, B) Δ Δη Δf Δf0 Δf0max Δfd Δfd Δfdc Δfdr ΔfRmax ΔK e ΔK ΔL Δr Δr Δr(t) ΔrAB ΔrLOS_R(t) ΔrLOS_T(t) Δt Δta Δtr Δx Δxmax ΔxR(t) ΔxT(t) Δy ΔyR(t) ΔyT(t) ΔzR(t) ΔzT(t) Δα1 Δα2 Δβj Δβ Δη
the maximum change of the walking angle the platform yaw angle the azimuth angle of the receiving antenna the yaw angle of the receiver the azimuth angle of the transmitting antenna the yaw angle of the transmitter the platform yaw angular velocity the baseline angle of target A relative to the transmitter the baseline angle of target B relative to the transmitting T2 ambiguity function the sampling interval/step size the variation of η the frequency interval the fixed frequency error of the receiver and transmitter the maximum deviation of the allowable fixed frequency the equal Doppler increment the normalized Doppler increment the Doppler centroid error the Doppler frequency rate error the maximum possible variation of the receiver local oscillator the variation of the walk slope the variation of the walking slope of dimensionless the ground interval of the iso-Doppler line the sampling interval of the slant range the track crosscuts difference between the scattering point and the reference point the error of slant range history the sum of bistatic distance between A and B the line-of-sight errors corresponding to the transmitter the line-of-sight errors corresponding to the receiver the signal delay/time difference the slow-time sampling interval the fast-time sampling interval the length of scattering unit/the azimuth offset the maximum deviation the error of the receiving platform in axial x direction the error of the transmitting platform in axial x direction the width of scattering unit/y-axis position difference the error of the receiving platform in axial y direction the error of the transmitting platform in axial y direction the error of the receiving platform in axial z direction the error of the transmitting platform in axial z direction the coarse step size the fine step size the jth line-of-sight interval for the transmitter the line-of-sight interval for the transmitter the deviation angle formed by the two beam footprint centers
Δφ Δφquadic Δτ δz δ(x, y) δ(x 2 x0, y 2 y0) ε εb εr εR εR(t) εRi εT εT(t) εTi ε0 ε00 Φ ΦRA ΦTA ϕ ϕb(t) ϕe ϕe(t) ϕn ϕR ϕR ϕR(t) ϕ00R ð^tPR Þ ϕRES(ft, fτ; rR, y, rR0, y0) ϕS ϕT ϕT ϕT(t) ϕ00T ð^tPT Þ ϕ0 () ϕ0 () ΓR ΓRP(t) ΓT
the output phase difference of the phase discriminator the quadratic phase error the fixed delay increment/the time deviation the root mean square height the two-dimensional impulse function the delay of the impulse function threshold/constant coefficient the beam footprint center deviation error complex permittivity the local oscillator frequency error of the receiver a function of the local oscillator frequency error of the receiver the random frequency deviation in the i-th PRI for the receiving carrier frequency the transmitter carrier frequency a function of the transmitter carrier frequency the random frequency deviation in the i-th PRI for the transmitting carrier frequency relative permittivity the imaginary part of complex permittivity the phase error vector the unit vector of the scattering point pointing to the transmitter and the receiver the unit vector of the scattering point pointing to the transmitter the projection angle of the Doppler resolution direction to the ground the phase of the integrated function the phase error the function of the relative error the phase error of the nth slow-time sampling point the downward angle of the receiver the scattering angle the phase of the integrated function of the receiver the second derivative of ϕR ð^t PR Þ the residual phase function of the formula the relative azimuth of transceiver station the incident angle the squint angle of the transmitter the phase of the integrated function of transmitter the second derivative of ϕT ð^t PT Þ the first derivative of ϕ() the phase function the unit vector along the direction of rotation of the line of sight of the receiver unit vectors along the direction of rotation of the line of sight of the receiver the unit vector along the direction of rotation of the line of sight of the transmitter
ΓTP(t) γ γG γn η η ηb ηs ϑ φ φA(t; x, y) φLOS φR φR φR(τ) φR0 φT φT φT0 φ(t) φ0 (t) λ μ μA μA μI μR μRP(t) μt μTP(t) Θ Θ ΘT θ θA θB θP θR θRP θsR θsR(ft) θsT θsT(ft)
unit vectors along the direction of rotation of the line of sight of the transmitter the bistatic angle the ground projection of the bistatic angle the radiative resolution the dip angle corresponding to the dimensionless walk slope the ratio of the slant range of the transmitter and the receiver the ratio between the actual echo power and the maximum echo power the 3 dB footprint overlap ratio for the transceiver beam the projection angle of the delay resolution to the ground the deviation angle of the central velocity vector of the equivalent phase the initial phase the error phase for line of sight the antenna incidence angle of the receiving platform the azimuth angle of the receiver the transmitting pulse baseband waveform phase the initial phase of the local oscillator the antenna incidence angle of the transmitting platform the azimuth angle of the transmitter the initial phase of the transmitting signal the Doppler phase history the first derivative of the Doppler phase the wavelength the frequency rate the Doppler frequency rate the frequency rate of target A the mean value of the image amplitude frequency rate of transmitting FM pulse the unit vector of the receiver to the light of sight of point P at time t the degree of bending of the delay migration trajectory the unit vector of the transmitter to the light of sight of point P at time t the direction of delay ground resolution the unit vector of the direction of the bistatic angle bisector the transpose of Θ the incident angle of the antenna/the beam width of the antenna the synthetic beam width of target A the synthetic beam width of target B the synthetic rotation of the transmitter and the receiver relative to the point P the elevation angle of the receiving antenna the rotations of the receiver relative to the point P in the synthetic aperture time the squint angle of beam center of receiver the squint angle of the receiver when the system Doppler frequency is ft the squint angle of beam center of transmitter the squint angle of the transmitter when the system Doppler frequency is ft
θt θT θTP θτ ρa ρa ρg ρr ρr0 ρrΣ ρRt1(ft) ρG t ρTr2(ft) ρTt1(ft) ρτ ρG τ ρΩ ρΨ ρ() σ σI σ^k ðx, yÞ σ^k, m ðx, yÞ σs σS σ0 σ0 σ 0(x, y) σ^0 ðx, yÞ σ 0 ðxe, e yÞ σ 0(x, y, h) τ τd τij(t) τmin τP(t) τP(t) τP(t; x, y) τ0 ς Rr1(ft) ς Ry1(ft)
the angle between difference direction of target position vector and time delay resolution direction the elevation angle of the transmitting antenna the rotations of the transmitter relative to the point P in the synthetic aperture time angle between difference direction of target position vector and time delay resolution direction the azimuth resolution the azimuth resolution the ground resolution the slant range resolution the resolution of the imaging projection surface the range sum resolution the linear terms of qr1(ft, fτ) with respect to fτ, ρTt1(ft) the Doppler ground resolution the second-order terms of pr1(ft, fτ) with respect to fτ the linear terms of pr1(ft, fτ) with respect to fτ, ρTt1(ft) the delay resolution the delay ground resolution the 3 dB main lobe width corresponding to the sidelobe extending direction of delay resolution the 3 dB main lobe width corresponding to the sidelobe extending direction of Doppler resolution the correlation coefficient function the mean square error the mean square deviation the kth subimage of the subaperture the mth subimage of the kth subaperture the scattering cross-sectional area the target bistatic radar scattering cross-sectional area the equivalent noise/constant the target scattering coefficient the scattering coefficient distribution function the estimated value of σ 0(x, y) the scattering coefficient of the position ðxe, e yÞ the three-dimensional distribution function of microwave scattering rate the time delay/fast time the signal delay difference the echo time delay migration trajectories corresponding to the center position of the grid located in row i and column j the minimum time delay time delay of the target point P the delay migration trajectory the pulse echo delay the delay mean of the delay migration trajectory the first-order expansion coefficients of ftR(ft) with Δr the first-order expansion coefficients of ftR(ft) with Δy
ς Tr1(ft) ς Ty1(ft) ω ωA ωRA ωRP ωTA ωTP Ξ — fP(t) — fG P (t) — τP(t) — τG P (t)
the first-order expansion coefficients of ftT(ft) with Δr the first-order expansion coefficients of ftT(ft) and ftR(ft) with Δy angular frequency/discrete angular frequency the synthetic angular velocity the rotational angular velocity of the receiver relative to the scattering point the rotational angular velocity of the transmitter and the receiver relative to point P the rotational angular velocity of the transmitter relative to the scattering point the rotational angular velocity of the transmitter relative to point P the direction of Doppler ground resolution the gradient of fP(t) the projection of the gradient of fP(t) on the ground the gradient of τP(t) the projection of the gradient of τP(t) on the ground
List of abbreviations

A/D     analog to digital
BP      back projection
CPU     central processing unit
CS      chirp scaling
D/A     digital to analog
DDS     direct digital synthesizer
DFT     discrete Fourier transform
DMO     dip moveout
DPT     discrete polynomial-phase transform
DSP     digital signal processor
EPC     equivalent phase center
FFBP    fast factorized back projection
FFT     fast Fourier transform
FPGA    field programmable gate array
GAF     generalized ambiguity function
GEO     geostationary earth orbit
GPS     global positioning system
GPU     graphics processing unit
IFFT    inverse fast Fourier transform
IMU     inertial measurement unit
INS     inertial navigation system
IRW     impulse response width
ISLR    integrated sidelobe ratio
LBF     Loffeld's bistatic formula
LEO     low earth orbit
LFM     linear frequency modulated
LOS     line of sight
MSR     method of series reversion
NLCS    nonlinear chirp scaling
NUFFT   nonuniform FFT
PD      phase detector
PLL     phase-locked loop
POS     positioning and orientation system
PPS     pulse per second
PRF     pulse repetition frequency
PSLR    peak sidelobe ratio
RD      range Doppler
SAR     synthetic aperture radar
SFAR    signal-to-first ambiguity ratio
SNR     signal-to-noise ratio
STFT    short time Fourier transform
TOPS    terrain observation by progressive scans
VCO     voltage-controlled oscillator
VCXO    voltage-controlled crystal oscillator
WGS     world geodetic system
WVD     Wigner-Ville distribution
Preface

Synthetic aperture radar (SAR) has the ability of all-day and all-weather high-resolution terrain imaging, and is one of the main technologies for Earth observation. Even in recent years, SAR has kept improving and evolving into different forms. Among them, bistatic SAR, with its unique anti-interference and forward-looking capabilities, is one of the most striking evolved forms; it is moving from research and verification to applications and is becoming an important branch of SAR.

Compared with monostatic SAR, bistatic SAR, whose transmitter and receiver are mounted on different platforms, has more degrees of freedom, and its echo law presents new features, causing bistatic SAR to face complex new problems in synchronization, imaging, and compensation. Although the concept of bistatic SAR was proposed as early as the 1970s, it developed slowly in the early stages. In recent decades, driven by application demand and coupled with advances in relevant technologies, bistatic SAR has been developing rapidly. Deeper and more comprehensive understandings of bistatic SAR than those in existing books are scattered across many different research papers. To avoid the huge time and effort needed to acquire this new knowledge from varied sources, a new book about bistatic SAR needed to be written.

The research team under my leadership has carried out long-term and extensive research on bistatic SAR. In 2007, we successfully conducted China's first airborne bistatic SAR imaging experiment, working in side-looking mode, and in 2012 we obtained the first airborne bistatic forward-looking SAR image. In 2020, we realized not only spaceborne-airborne bistatic forward-looking SAR but also bistatic forward-looking SAR-GMTI for the first time in experiments. It is because of the experience of leading and participating in this research that I was able to write such a book, comprehensively presenting the relevant theories, methods, and techniques involved in bistatic SAR.

The contents of the book are divided into seven chapters, including an introduction and chapters on imaging theory, imaging algorithms, parameter estimation, motion compensation, bistatic synchronization, and experimental verification. This book includes not only the necessary and practical theories and methods involved in bistatic SAR, but also the design of experimental systems and the results of experimental verification. In addition, this book reports the latest research results and the potential development of bistatic SAR, involving many useful processing algorithms for bistatic SAR with different configurations, such as translational variant bistatic SAR and bistatic forward-looking SAR, among others.

As bistatic SAR is a new kind of SAR system that is still being studied and tested, there are still many aspects that need to be investigated and improved. Hence, there are inevitably shortcomings and mistakes in the book, and your suggestions would be appreciated.

Jianyu Yang
University of Electronic Science and Technology of China, Chengdu, People's Republic of China
CHAPTER 1
Overview of bistatic SAR Synthetic aperture radar (SAR) uses microwave transmitting and receiving devices carried on a moving platform to obtain scattering echoes of ground objects from different observation angles, and then carries out echo data processing through a signal processing device to obtain the estimated value of the distribution function of the microwave scattering rate of ground objects, so as to achieve ground imaging. The SAR system can use the intensity of the echo signal to distinguish various types of ground objects with different scattering rates, and then achieve radiation resolution. Using the pulse compression of wideband signals, the equivalent narrow pulse is formed to realize the range resolution. A large antenna aperture is synthesized by using the change of viewing angle generated by motion to realize the azimuth resolution. By using the phase interferometrics of multichannel echo, the difference of wave path between channels can be used to obtain height measurement. Based on the difference of frequency and polarization response of the scattering rate, the features of the ground object are extracted to realize the material resolution. The active microwave radiation of the radar transmitter is used to illuminate the observed ground object features for operation both day and night. Based on the characteristics of electromagnetic wave propagation in the microwave frequency band, through the clouds, rain, fog, foliage, and ground, penetrating observation can be realized. Therefore the SAR system can obtain planar, black-and-white, and even stereo and color images of ground objects. It has all-day, all-weather, high-resolution, and multidimensional earth observation and imaging capabilities. It has been widely used in civil and military applications and is an indispensable means of earth observation. The transmitting and receiving devices of the bistatic SAR system are carried on different platforms, and the system configuration is flexible and diverse; the receiving device has strong adaptability and good electromagnetic concealment character. It not only can realize side-looking imaging and obtain observation information different from monostatic SAR but also can realize forward-looking, downward-looking, and backward-looking imaging, expanding the observation direction of the imaging radar. Bistatic Synthetic Aperture Radar Copyright © 2022 National Defense Industry Press. https://doi.org/10.1016/B978-0-12-822459-5.00001-3 Published by Elsevier Inc. All rights reserved.
1
2
Bistatic synthetic aperture radar
Therefore bistatic SAR has become a new development direction of SAR technology. Because there are obvious differences between bistatic SAR and traditional monostatic SAR in geometry, working mode, resolution performance, and application fields, bistatic SAR presents a series of new theoretical, methodological, and technical problems in imaging theory, system composition, transceiver synchronization, parameter estimation, motion compensation, imaging processing, and experimental verification. This chapter elaborates on the imaging principle, system classification, system composition, performance parameters, research status, and development trend of bistatic SAR from the perspective of spatial relationships and physical concepts. It briefly analyzes the similarities and differences between bistatic SAR and monostatic SAR and then puts forward the theoretical and technical issues that need attention. A more detailed discussion is carried out in the following chapters.
1.1 Imaging principle

A microwave radar system carried on a moving platform periodically transmits broadband pulse signals, receives and records the ground scattering echoes, and processes the echo data to obtain microwave images of ground objects. This process is called radar imaging of ground objects, and the microwave images obtained are called radar images. The microwave image represents the measured or estimated value of the spatial distribution function of the ground object scattering rate, and the scattering rate reflects the scattering ratio of ground objects of different materials and roughness to the incident electromagnetic wave. The scattering rate changes with the extension and undulation of the Earth's surface, which constitutes the three-dimensional distribution function of the scattering rate. The radar two-dimensional image is the estimated value of the aerial view of this three-dimensional distribution function of the surface scattering rate, that is, the projection of the distribution function onto its vertical plane (called the imaging plane) along a direction close to the height dimension.

In order to achieve high-quality 2D imaging, the radar system needs to have high radiation resolution and high 2D spatial resolution at the same time. The high radiation resolution enables the SAR system to effectively distinguish the difference in echo amplitude between scattering points with similar scattering rates and to accurately reflect the relative strength of the scattering rates at different scattering points, so that this strength relationship is reflected by different light and dark gray values in the image. The high 2D spatial resolving power enables the SAR system to separate and focus the echo signal energy of different scattering points, effectively distinguish the echoes of scattering points that are spatially close to each other, and accurately reflect the relative position relationships between scattering points in the two orthogonal directions on the ground, so that the scattering rate of each scattering point is represented at the correct location in the image. In order to obtain high radiation resolution and spatial resolution, the radar system must have appropriate system resources, geometry, and processing methods.

The transmitting and receiving devices of bistatic SAR are carried on different platforms. The transmitting station, receiving station, and imaging region form a specific triangular relationship, which is different from the simple connection between the transmitting/receiving station and the imaging region in monostatic SAR. In bistatic SAR, the two key factors affecting spatial resolution, namely the time delay of the ground object echo and its Doppler frequency shift, are jointly determined by the sum of the two ranges from the transmitting station to the ground and from the ground to the receiving station, and by their relative motion relationships. This means that bistatic SAR has characteristics different from those of monostatic SAR. Therefore bistatic SAR has special problems, different from those of monostatic SAR, in system configuration, system composition, echo modeling, and imaging processing, and these problems are also more complicated.
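To make the role of the bistatic range sum concrete, the following minimal Python sketch (an illustration added here, not code from the book; all positions, velocities, and the wavelength are assumed values) evaluates the echo delay τ = (rT + rR)/c and the bistatic Doppler shift fd = -(1/λ)·d(rT + rR)/dt for a stationary ground point P:

import numpy as np

c = 3.0e8     # speed of light (m/s)
lam = 0.03    # wavelength (m); X-band value assumed for illustration

# Assumed positions (m) and velocities (m/s); P is a stationary ground point
p_T = np.array([-2000.0, 0.0, 8000.0]); v_T = np.array([0.0, 150.0, 0.0])
p_R = np.array([ 2000.0, 0.0, 6000.0]); v_R = np.array([0.0, 120.0, 0.0])
p_P = np.array([    0.0, 5000.0, 0.0])

def range_sum(t):
    # bistatic range sum rT(t) + rR(t) for the ground point P
    r_T = np.linalg.norm(p_T + v_T * t - p_P)
    r_R = np.linalg.norm(p_R + v_R * t - p_P)
    return r_T + r_R

tau = range_sum(0.0) / c                                   # echo time delay (s)
dt = 1e-3                                                  # finite-difference step (s)
f_d = -(range_sum(dt) - range_sum(-dt)) / (2 * dt) / lam   # Doppler shift (Hz)
print(tau, f_d)

Both quantities depend only on the range sum and its rate of change, which is why the transmitter and receiver geometries enter the bistatic resolution analysis jointly rather than separately.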
1.1.1 Basic principles

Under certain conditions, the imaging process of bistatic SAR is very close to that of monostatic SAR. Therefore the system configuration of SAR can first be explained from the perspective of the simplest case, echo acquisition and processing in monostatic SAR, and then the principle of the 2D spatial resolution of bistatic SAR can be explained by comparing the differences between monostatic and bistatic SAR.

In monostatic side-looking SAR, the radar transmitting and receiving devices, with working wavelength λ, are carried on the same flight platform. The height of the platform above the ground is H, and it moves uniformly, linearly, and horizontally at a constant speed v. In the elevation direction perpendicular to the flight path, the radar antenna beam, with a downward-looking angle α, covers an area of width W on the ground, as shown in Fig. 1.1A. In the azimuth direction parallel to the route, the real aperture of the radar antenna is D, forming an azimuth beam with a beamwidth of λ/D and illuminating the side-looking area along the direction perpendicular to the route, as shown in Fig. 1.1B.
Fig. 1.1 Geometric configuration of monostatic SAR.
During the imaging process, the radar transmits a wideband coherent pulse train with equal pulse interval Tr and bandwidth Br and receives the ground scattering echoes. According to radar signal detection theory, the pulse echo of each ground scattering point can be converted into a narrow pulse of width 1/Br through matched filtering of each received pulse, thereby focusing the energy of each single pulse echo of a scattering point. At the same time, the delay differences between the echoes of different scattering points are retained, so that a slant range resolution of c/2Br can be realized by using the delay resolution of 1/Br (see Section 1.1.5 for details). For example, to obtain a slant range resolution of 1 m, a pulse signal with a bandwidth of 150 MHz theoretically needs to be transmitted. In SAR imaging, the process of implementing this echo matched filtering is called slant range compression.

With the movement of the carrier platform, the radar transmitting and receiving antennas appear at different positions at equal intervals along the flight route. The observation angle of the ground scatterers in the observation area changes by λ/D, and the azimuth echo reflecting the phase change between the pulse echoes of the ground scatterers is obtained. According to array antenna theory, through signal processing of the azimuth echo, it is possible to synthesize an antenna aperture Lr
that is much larger than the real aperture D and is proportional to the slant range r of the scattering point and to the change in viewing angle λ/D, as shown in Fig. 1.1B. This achieves energy focusing of the azimuth echoes of each scattering point while retaining the time differences between the azimuth echoes of different scattering points, so the angular resolution of D/2r can be used to obtain an azimuth resolution of D/2 on a straight line parallel to the route at a range of r (normal to the platform's line of sight) (see Section 1.1.4 for details); that is, the azimuth resolution of SAR imaging is, under certain conditions, independent of the slant range. For example, for a 3-cm-wavelength X-band radar, to obtain an azimuth resolution of 1 m, the real antenna aperture D is 2 m, so the corresponding azimuth beamwidth is 0.86 degrees. In SAR imaging, the process of achieving the synthetic aperture is called azimuth compression.

From the preceding principle, the two compression processes mentioned are linear processes, which retain the amplitude differences between the echoes of different scattering points. Therefore they provide the basic conditions for the realization of high radiation resolution. On the other hand, it can be seen that the 2D spatial resolution of monostatic side-looking SAR comes from the narrow-pulse delay resolution formed by wideband pulse-echo matched filtering and the narrow-beam angular resolution formed by the aperture synthesized from observation angle changes. The two resolution directions coincide with the slant range direction and the azimuth direction, and the ground resolution corresponding to the time delay resolution and the ground resolution corresponding to the angular resolution are orthogonal in direction. Therefore the high resolution of these two dimensions constitutes exactly the prerequisite for two-dimensional imaging. However, in other types of SAR system configurations, such as most monostatic squint-looking SAR and bistatic SAR configurations, the directions of the two types of resolution may not coincide with the platform's line-of-sight direction and its orthogonal direction, respectively. Therefore it is necessary to treat delay resolution and range resolution, and angular resolution and azimuth resolution, as different concepts and to distinguish them in nomenclature. In addition, the ground resolution directions corresponding to the delay and angular resolution directions are usually nonorthogonal, so a comprehensive measure of the two-dimensional resolution and the correction of imaging geometric distortion must also be considered.
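As a quick numerical check of the resolution relations quoted above (slant range resolution c/2Br, azimuth resolution D/2, azimuth beamwidth λ/D), here is a minimal Python sketch using the example numbers from the text; it is an added illustration, not part of the original book:

import math

c = 3.0e8                           # speed of light (m/s)

B_r = 150e6                         # transmitted bandwidth, 150 MHz (example above)
rho_r = c / (2 * B_r)               # slant range resolution c/(2*B_r) -> 1.0 m

lam = 0.03                          # 3 cm wavelength, X-band (example above)
D = 2.0                             # real antenna aperture (m)
rho_a = D / 2                       # azimuth resolution D/2 -> 1.0 m
beamwidth = math.degrees(lam / D)   # azimuth beamwidth lambda/D -> about 0.86 degrees

print(rho_r, rho_a, beamwidth)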
Fig. 1.2 Geometric configuration of bistatic SAR: (a) side view; (b) top view.
The transmitting and receiving devices of parallel-flying bistatic side-looking SAR are carried on separate flight platforms, and the imaging area is illuminated by the transmitting beam and the receiving beam, respectively, as shown in Fig. 1.2. When the distance between the imaging area and the transmitting and receiving platforms is far greater than the distance between the transmitting and receiving platforms themselves, the angle between the lines of sight to the two platforms observed from the imaging area (i.e., the bistatic angle) is very small, and the imaging geometry of bistatic SAR is very close to that of monostatic SAR. It can then be treated as an equivalent monostatic SAR imaging configuration whose antenna is located at the equivalent phase center (EPC) of the transmitting and receiving antennas. From the perspective of surface reflection characteristics and echo delay, the EPC is located on the bistatic angle bisector, and its slant range from the imaging area center is equal to the average of the slant ranges from the transmitting and receiving antenna phase centers to the imaging area center. In this case, the imaging principle of bistatic SAR can be qualitatively introduced and understood in terms of the equivalent phase center, the equivalent transmitting and receiving beam, the corresponding time delay resolution, and aperture synthesis.
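The equivalent-phase-center idea can be checked numerically: when the transmitter-receiver separation is much smaller than the range to the scene, half the bistatic range sum is very close to the slant range from the midpoint of the two antenna phase centers, which is used here as a simple stand-in for the EPC. A minimal sketch with assumed positions (illustration values only, not parameters from the book):

import numpy as np

# Assumed positions (m): two closely spaced platforms, scene center far away
p_T = np.array([0.0,   0.0, 8000.0])          # transmitter phase center
p_R = np.array([0.0, 500.0, 8000.0])          # receiver phase center, 500 m away
p_C = np.array([30000.0, 0.0, 0.0])           # imaging area center

r_T = np.linalg.norm(p_T - p_C)
r_R = np.linalg.norm(p_R - p_C)
p_E = 0.5 * (p_T + p_R)                       # midpoint of the two phase centers
r_E = np.linalg.norm(p_E - p_C)

# With the baseline much smaller than the range, (r_T + r_R)/2 is close to r_E
rel_err = ((r_T + r_R) / 2 - r_E) / r_E
print(rel_err)                                # on the order of 1e-5 for this geometry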
For the case of a large bistatic angle, the formation of the time delay and phase changes of the ground object echo in bistatic SAR is quite different from that in monostatic SAR, but the basic principle of imaging is still similar. Therefore bistatic SAR can still be understood, in terms of physical concepts, from the viewpoint of an equivalent monostatic SAR: the principle of two-dimensional spatial resolution can be explained by the time-delay resolution of the narrow pulse formed by wideband pulse-echo matched filtering and by the angular resolution of the narrow beam formed by the aperture synthesized from the change of the observation angle of the equivalent phase center relative to the target.
1.1.2 Treatment process

Monostatic side-looking SAR and bistatic side-looking SAR can obtain multiple pulse echoes of the predetermined imaging area through the platform motion, and then process the echo data through slant range compression and azimuth compression, so as to realize two-dimensional imaging of ground objects.

The wideband pulse signal transmitted by the radar is scattered by the ground, forming a pulse echo that is captured by the radar receiver. After down-conversion, quadrature detection, and sampling, complex baseband echo data are obtained and recorded as a row in a two-dimensional array; the row direction is called the fast-time direction. As the platform moves, the receiving and transmitting antennas of the radar appear at different positions on the flight path, and subsequent pulse-echo data are obtained. These data are arranged together in the column direction in the order they are received, and the column direction is called the slow-time direction. In this way the echo data (also known as the raw data) are obtained in a two-dimensional format formed by multiple pulse echoes, whose amplitude is shown in Fig. 1.3A.

In the image processing, matched filtering with the transmitted pulse baseband signal is performed on each row of the raw data to realize slant range compression. This turns the echo from each scattering point into a narrow pulse and focuses the energy of each single pulse from the various scattering points, while retaining their relative amplitude and delay differences, so as to realize the slant range (or range sum) resolution. The resolution is determined by the speed of light and the reciprocal of the transmitted signal bandwidth (see Section 1.1.5). The slant-range-compressed data (called intermediate data) are shown in Fig. 1.3B.
Fig. 1.3 Recording mode and processing of SAR baseband echo (University of Electronic Science and Technology of China, 2007, airborne side-looking bistatic SAR data, RD algorithm).
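The row-by-row slant range compression described above can be sketched in a few lines of Python. The simulation below is an added illustration only (an assumed LFM pulse and two point echoes, not data or code from the book): each row of the raw data matrix is matched-filtered with the transmitted baseband pulse, which turns each echo into a narrow pulse while preserving its delay.

import numpy as np

fs, B, T = 200e6, 100e6, 10e-6                 # sample rate, bandwidth, pulse width (assumed)
t = np.arange(int(T * fs)) / fs
pulse = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # baseband LFM pulse

n_slow, n_fast = 64, 4096                      # pulses (rows) x fast-time samples
raw = np.zeros((n_slow, n_fast), dtype=complex)
for m in range(n_slow):                        # two point echoes in every pulse
    for delay_bin, amp in ((1000, 1.0), (1010, 0.5)):
        raw[m, delay_bin:delay_bin + pulse.size] += amp * pulse

# Slant range compression: matched-filter every row with the transmitted pulse
H = np.conj(np.fft.fft(pulse, n_fast))         # matched filter, frequency domain
rc = np.fft.ifft(np.fft.fft(raw, axis=1) * H, axis=1)

print(np.argmax(np.abs(rc[0])))                # peak at the stronger echo's delay bin (1000)

Azimuth compression is then applied column by column in the same spirit, with the azimuth phase history taking the place of the transmitted pulse as the matched-filter reference.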
Then, synthetic aperture processing of the azimuth echo is conducted on each column of the slant-range-compressed data to carry out azimuth compression. This focuses the energy of the multiple pulse echoes from each scattering point while retaining the relative range and azimuth positions of the scattering points, so as to realize the azimuth resolution. This resolution is determined by the platform velocity and the reciprocal of the Doppler signal bandwidth (see Section 1.1.6). As a result, slant range-azimuth two-dimensional high-resolution data (called image data), as shown in Fig. 1.3C, are obtained. At this point the imaging signal processing is complete: a two-dimensional array reflecting the variation of the ground scattering rate has been obtained in the slant range-azimuth plane, and the transformation from two-dimensional echo data to two-dimensional image data has been carried out.

Of course, the actual process is more complex than that shown in Fig. 1.3, due to three main causes.

The first is the complexity of the echo law. As the radar antenna moves with the platform, the slant range between a scattering point on the ground and
the phase center of the transceiver antenna changes. Therefore, in the fast-time/slow-time plane, the tracks formed by the multiple pulse echoes of a scattering point are not confined to the column direction but bend and incline to different degrees, and the echo tracks of scattering points at different positions have different, intersecting shapes. At the same time, the azimuth echo phase histories of scattering points at different positions also differ from each other. These two factors would require pixel-by-pixel two-dimensional calculations during processing, which demands an enormous amount of computation and makes it difficult to achieve the real-time performance required by applications. In efficient imaging processing, corresponding processing steps must be added to deal with the adverse effects of these factors. In SAR imaging, this kind of problem is associated with echo modeling and the imaging algorithm.

The second cause is that the actual motion of the platform is not known exactly. Azimuth matched filtering needs accurate echo signal parameters as the basis of processing, but these parameters are unknown. Therefore it is necessary to use a motion measurement device to measure the spatial position, velocity, and beam direction of the receiving and transmitting antennas. Moreover, in the imaging process, the regularity of the echo data and its aggregation in the transform domain must be exploited to achieve a more accurate estimation of the relevant parameters. In SAR, this kind of problem belongs to motion measurement and parameter estimation.

The third cause is the randomness of the actual movement of the platform. This randomness causes the echo law to deviate from the assumptions of regular platform motion and constant attitude. Therefore it is necessary to use the motion measurement device to measure the antennas' spatial position, velocity, and actual beam pointing in real time, and to reduce the influence of irregular motion and attitude changes on the echo by using beam-pointing stabilization and pulse repetition frequency control devices. At the same time, corresponding processing steps must be added to estimate and compensate more accurately for the echo errors caused by the remaining irregular motion and attitude changes, so as to return the echo law to the state corresponding to regular motion and constant attitude. In SAR technology, such problems belong to motion error control and motion error compensation.

Although the imaging process of bistatic SAR is similar to that of monostatic SAR, its geometric relationship, echo model, imaging algorithm, parameter estimation, and motion compensation are more complex and diverse, and the corresponding processing is accordingly more complicated [1].
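The bending of the echo trajectories described above can be made visible by computing the bistatic delay history of a single scattering point over the synthetic aperture. In the following minimal sketch (all platform tracks and signal constants are assumed illustration values, not the book's parameters), the delay varies with slow time, so the echo of one point migrates across many fast-time (range) cells from pulse to pulse:

import numpy as np

c, prf, fs = 3.0e8, 1000.0, 200e6                   # assumed constants
n_slow = 2048
t = (np.arange(n_slow) - n_slow / 2) / prf          # slow time (s)

# Assumed straight, level transmitter and receiver tracks and one ground point
p_T0, v_T = np.array([0.0, -3000.0, 8000.0]), np.array([0.0, 150.0, 0.0])
p_R0, v_R = np.array([500.0, -2500.0, 6000.0]), np.array([0.0, 130.0, 0.0])
p_P = np.array([10000.0, 0.0, 0.0])

r_sum = (np.linalg.norm(p_T0 + np.outer(t, v_T) - p_P, axis=1)
         + np.linalg.norm(p_R0 + np.outer(t, v_R) - p_P, axis=1))
delay = r_sum / c                                   # delay migration trajectory
cells = np.round((delay - delay.min()) * fs).astype(int)

print(cells.max())   # number of fast-time cells this single echo migrates across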
1.1.3 Imaging characteristics

The geometric configuration, working principle, and operating frequency band of SAR are quite different from those of optical imaging equipment. Therefore the microwave images obtained by SAR have many distinct features that differ from optical images.

According to the geometric configuration of the monostatic SAR shown in Fig. 1.1A, the two-dimensional image obtained by the SAR is the projection of the three-dimensional distribution function of the ground scattering rate onto the imaging plane along the iso-time-delay lines (circles centered on the phase center of the transmitting and receiving antennas), because the ground echoes are projected along these iso-time-delay lines. This imaging plane is the plane formed by the motion of the beam centerline along the route. Therefore, although the radar works at a certain height away from the imaging area, the two-dimensional microwave image obtained has an equivalent aerial-view effect. The equivalent observation point lies in the normal direction of the imaging surface, and its angle of departure from the height direction is equal to the downward-looking angle α of the beam. As a result, the microwave image obtained by SAR has an imaging effect similar to "looking down on the earth lit by a slanting sun." In fact, the "looking down" here deviates from the height direction by the angle α, the "slanting sun" is the radar transmitter, and the direction opposite to the shadow extension always points to the radar transmitter.

Although the monostatic SAR system in Fig. 1.1 obtains this equivalent aerial imaging effect through side observation, the two-dimensional image obtained is still different from a real aerial image because of the working frequency band, the imaging principle, and the geometric configuration of the monostatic SAR system shown in Fig. 1.1. In the microwave image there are corner effects (that is, the effects of corner reflectors), mirror effects, and slope effects; other features that can occur are ground object shadows, near-range compression, tower-top defocusing, clear shadows with illegible object images, blurred objects with retained shadows, or unclear trees with clear shadows, resulting in radiometric and geometric distortion. Most of these image features appear in a similar form in the microwave images obtained by a bistatic SAR system, as shown in Fig. 1.4, where the first and second rows illustrate airborne bistatic SAR images in side-looking and forward-looking modes obtained by the author's research group at the University of Electronic Science and Technology of China, while the third row shows spaceborne-airborne bistatic SAR images in forward-looking and side-looking modes obtained by the author's research
Fig. 1.4 Image characteristics of bistatic SAR.
group, University of Electronic Science and Technology of China, in cooperation with China Academy of Space Technology [2]. However, in bistatic SAR, because the transmitter and receiver are mounted on different platforms, these image features will produce some changes, such as the corner effect, and the shadow effect. These image features are different from those of monostatic SAR, and can provide new information about ground objects, making bistatic SAR and monostatic SAR complementary in applications. The combination of the two can improve the image dimension of earth observation and provide more abundant information on ground features. Here we briefly explain the similarities and differences between bistatic SAR and monostatic SAR imaging in terms of scattering characteristics, light-dark relationship, shadow phenomenon, and speckles. (1) Scattering characteristics The scattering rate distribution function is directional, that is, it is related to the incident direction and the observation direction. What monostatic SAR observes is the backscatter distribution function, that is, the observed direction coincides with the incident direction. What bistatic SAR observes is the distribution function of the lateral scattering rate. Its observation direction is generally not coincident with the incident direction, as shown in Fig. 1.5. Therefore, even if it has the same transmitter or receiver position or equivalent phase center position as monostatic SAR, what bistatic SAR
Fig. 1.5 Rough surface and its bistatic scattering characteristics.
observes is a scattering rate distribution function with different directional attributes. This enables bistatic SAR to obtain ground object scattering characteristics different from those of monostatic SAR, and it is also conducive to the detection of ground stealth targets that rely on backscattering stealth measures. For the same reason, the corner reflector that generates a strong echo in monostatic SAR is generally not a strong reflector for bistatic SAR, because most of the energy incident on the corner reflector is reflected back towards the incident direction, that is, towards the transmitting station; the receiving station of bistatic SAR can therefore barely receive the scattered energy, so corner reflector camouflage or interference is basically ineffective against bistatic SAR. Figs. 1.6 and 1.7, obtained by the author's research group at the University of Electronic Science and Technology of China, show imaging results of monostatic SAR and bistatic SAR for the same area at the same time, and the obvious difference between them can be observed. (2) Light-dark relationship The relationship between light and dark in SAR images is related to a variety of factors. In addition to the difference in scattering characteristics caused by the incident and observation angles, it is also related to the electromagnetic parameters of ground objects, the surface roughness, and the surface slope. The electromagnetic parameters of ground objects obviously affect their absorption and scattering of the
Fig. 1.6 Difference of light-dark relationship between monostatic and bistatic SAR image: (a) monostatic SAR image; (b) bistatic SAR image.
In addition, surface roughness has an important influence on the light-dark relationship of SAR images. From the perspective of electromagnetic scattering theory, a smooth plane of scale d scatters an electromagnetic wave of wavelength λ incident at angle ϕT into different directions with different intensities. If d exceeds about 10λ, a clear directional scattering pattern forms, whose scattering beamwidth is λ/(d cos ϕT). The centerline of this beam lies in the specular direction on the other side of the plane normal. As shown in Fig. 1.5, when the observation direction deviates from the beam centerline by more than the beamwidth, the receiving station receives only the sidelobe energy of the scattering pattern, which is very weak compared with the energy in the beam centerline direction.
Fig. 1.7 The difference in light-dark relationship between monostatic SAR image, bistatic SAR image, and an optical image: (a) monostatic SAR image; (b) bistatic SAR image; (c) optical image.
Therefore only those planes whose normals point to the equivalent phase center of the radar transceiver can form strong echoes at the radar receiving station. Once the plane normal deviates from the direction of the equivalent phase center, the scattered energy captured by the receiving station decreases noticeably; moreover, the larger the plane scale, the faster the decline. Whether the ground surface is smooth or not, at a given observation scale it exhibits undulations of different degrees and roughness. The ground surface can be regarded as a connection of many small-scale subsections with different slopes (generally larger than 10λ), as shown in Fig. 1.5. For a given imaging resolution unit (that is, a quadrilateral region on the ground whose two side lengths are equal to the two ground range resolutions), the normals of these subsections point in different directions and diverge to different degrees from the normal of the mean plane of the undulating surface. The scattered energy captured by the receiving station mainly comes from those subsections whose normals point towards the transmitting and receiving equivalent phase center. In this sense, if bistatic SAR and monostatic SAR share the same equivalent phase center position and the same change process, imaging results with a similar light-dark relationship will be obtained. Statistically speaking, within the same imaging resolution unit, the normal directions of all subsections follow a probability distribution, as shown in Fig. 1.8. Its peak usually appears in the normal direction of the mean surface, and both sides show a downward trend.
Fig. 1.8 Probability function of normal direction distribution of different slopes.
Fig. 1.9 Beam modulation effect and slope illumination distortion.
The rate of this decline is related to the roughness or smoothness of the surface. For a relatively smooth surface, such as a water surface, road, or runway, the probability density function decreases rapidly and the variance is small; for a relatively rough surface, such as farmland, grassland, and forest, the probability density function decreases slowly and the variance is large. As the average surface slope changes from left to right in Fig. 1.9, the angle between the mean surface normal and the direction of the equivalent phase center increases in turn, and the peak of the probability density function of the subsection normals gradually moves to the right, as in Fig. 1.8. The probability of subsection normals pointing towards the EPC is reduced accordingly, so the SAR receives less scattered energy and the brightness of the corresponding resolution cell in the image decreases. Therefore a smooth surface with its mean surface normal pointing to the EPC has the strongest echo; in order of decreasing strength follow the rough surface with its normal pointing to the EPC, the rough upstream slope, the rough horizontal surface, the rough downstream slope, the smooth upstream slope, the smooth horizontal surface, and the smooth downstream slope. This ordering of echo strength is also reflected in the light-dark relationship of the corresponding areas in the image domain. As shown in Fig. 1.4, the upslope is brighter, the horizontal and back slopes are darker, and the water surface is black.
Because of surface slope fluctuation, the estimate of the surface scattering rate distribution obtained from SAR images corresponds to different directional properties in areas of different slopes, resulting in slope-fluctuation radiation distortion. Therefore SAR cannot obtain an estimate of the scattering rate distribution function with a uniform directional property. In addition to the already-mentioned effects of the electromagnetic parameters, roughness, and slope of the surface, the light-dark relationship in a SAR image also includes the effects of beam modulation and propagation attenuation. Nonuniform irradiation and reception by the radar antenna produces the beam modulation effect, causing radiation distortion in the elevation plane: owing to the pitch beam modulation, the imaging area is bright in the middle and gradually dims on both sides. At the same time, because of the attenuation of electromagnetic waves propagating in space, the incident and received power densities on the ground show a near-strong, far-weak trend, producing radiation distortion in the slant range direction.
(3) Shadow phenomenon It can also be seen in Fig. 1.9 that if the back slope is too steep, it causes shadow. Such areas cannot be illuminated by the transmitting beam; even if they are covered by the receiving beam, no scattered energy returns to the receiver, so a black shaded area forms in the image. In bistatic SAR, the area where an echo can be received is the area that the lines of sight of the transmitting and receiving stations can reach simultaneously. Therefore, when the bistatic angle γ is large, if the bistatic SAR image is well focused and the surface has tower-like undulations, a double shadow phenomenon can be observed, quite different from monostatic SAR, as shown in Fig. 1.7. One shadow extends in the direction away from the transmitting station, and the other extends in the direction away from the receiving station. In high-resolution SAR images, the mostly smooth surfaces of ground objects (e.g., tanks, buildings, aircraft) often delineate the object contour less clearly than their shadows do, resulting in the phenomenon of "clear shadow but illegible object image." In windy conditions, the monostatic SAR image shows an "unclear tree but clear shadow" phenomenon, with a blurred tree crown but a clear tree shadow. The reason is that the swaying of branches and leaves in the
wind produces random displacements comparable to the wavelength, which introduce wind-induced phase changes among the successive echoes that differ significantly from those of stationary ground objects, thus disturbing the aperture synthesis of the tree canopy by the SAR system and defocusing the image. On the other hand, if the displacement of the canopy is small relative to the resolution scale, the shape of the shadow changes little in the image, and the low vegetation around the shadow experiences wind-induced phase changes much smaller than the canopy, so the aperture can still be synthesized normally and a clear tree-shadow edge is formed. A similar "object blur but shadow retention" phenomenon can also be produced by ground moving targets in monostatic SAR images, which results from a similar mechanism. Bistatic SAR exhibits phenomena similar to monostatic SAR. Accompanying the shadow phenomenon, the prominent ground protrusions that produce shadows appear in the SAR image displaced towards the phase center of the transmitting and receiving antenna, forming the "tower falling to close range" phenomenon; the formation mechanism is shown in Fig. 1.1A. At the same time, the echo law of ground scattering points on the top of the tower is not the same as that of points at the same slant range on the flat ground. An imaging process matched to the flat-ground echo law yields a well-focused SAR image, but ground protrusions such as the top of a tower cannot be fully focused, resulting in the "tower top defocusing" phenomenon.
(4) Speckle Since the echo of each ground resolution element is the coherent superposition of echoes from a large number of subsections with random normals, as shown in Fig. 1.5, there are path differences among them along the incident and reflection paths, and the amplitude of the whole resolution element becomes random after coherent superposition. Therefore, SAR images of grasslands, Gobi, fields, and other distributed features show a specific texture, formed by the variation of pixel grayscale with spatial location among different resolution units of the same type of surface. This texture, caused by random fluctuations, is called speckle in SAR images. Although the speckle formation mechanism of bistatic SAR is similar to that of monostatic SAR, the observed scattering characteristics are different from those of monostatic SAR
due to the separation of the transmitter and the receiver. Therefore the speckle texture shows some characteristics different from those of monostatic SAR, as seen in Figs. 1.6 and 1.7.
(5) Penetration phenomenon The working wavelength of SAR is much longer than that of optical imaging equipment, so some optically opaque features appear transparent to SAR, and SAR can observe objects concealed beneath other objects. For example, SAR can penetrate the sand layer of the Sahara Desert and observe buried ancient river channels. This penetrability varies with the type of object and the operating wavelength of the radar. Generally, a radar with a longer operating wavelength has better penetrability, and nonmetallic materials and low-moisture surfaces are easier to penetrate. This penetration capability allows SAR to be used for concealed target detection, and there have been many examples of this. The penetration phenomenon originates from the characteristics of electromagnetic wave propagation, which are similar for monostatic and bistatic SAR; the differences need to be studied further.
1.1.4 Aperture synthesis
In monostatic side-looking SAR and parallel-flying bistatic side-looking SAR, both the azimuth resolution and the slant range resolution can be explained accurately by the pulse compression effect produced by matched filtering of a linear frequency-modulated signal with a large time-bandwidth product. However, from an intuitive physical point of view, the azimuth resolution can also be explained by the large-aperture antenna array formed by the motion of the transceiver antenna, which is also the origin of the name SAR. As shown in Fig. 1.10A, monostatic SAR carries the transmitting and receiving devices, with working wavelength λ, on the same flight platform. The platform moves the transmitting and receiving antenna, of aperture scale D and beamwidth θ = λ/D, in uniform linear motion at speed v, and the beam irradiates the ground perpendicular to the flight path. The radar transmitter periodically transmits wideband pulse signals at a certain interval, while the receiver receives and records the ground echo. With the movement of the carrier platform, the radar transceiver antenna and its phase center appear, one after another, at equally spaced positions along the route. According to array antenna theory, the echoes received at these different positions can be used to synthesize, through signal processing, an array antenna with a large aperture, whose array elements are the transmitting and receiving antenna at its different positions along the route.
Fig. 1.10 Principle of aperture synthesis.
For a specific scattering point on the ground, the corresponding synthetic array elements must be at positions that can illuminate the point and receive its echo. According to this principle, points A and B, lying on two ground lines parallel to the route, form array antennas with aperture lengths LA = rA λ/D and LB = rB λ/D, respectively. Unlike the real aperture D of the transceiver antenna, the aperture of this large-scale array antenna is a virtual aperture synthesized by signal processing of the echoes generated from points A and B during the irradiation of the transceiver beam, so it is called the synthetic aperture. The process of forming this synthetic aperture is called aperture synthesis, and the corresponding irradiation time is called the synthetic aperture time. The aperture scales LA and LB of this synthetic array antenna are much larger than the scale of its array element, that is, the true aperture scale D of the transmitting and receiving antenna, and therefore the synthetic beamwidths are much narrower: θA = λ/(2LA) = D/(2rA) and θB = λ/(2LB) = D/(2rB). The factor of 1/2 arises because the two-way transmission is considered when calculating the beamwidth of the array antenna, and its beamwidth is half that of the one-way pattern. Since the transmitting and receiving antenna beam covers a certain angular range, the two ground scattering points A and B at different slant ranges in Fig. 1.10 have different starting points, ending points, and durations of beam irradiation, and the positions, scales, and synthetic beamwidths of the synthesized array antennas also differ. The difference in synthetic beamwidth and the influence of the target's slant range exactly cancel each other, so that the two points A and B located at different ranges have the same azimuth
resolution ρa = rA θA = rB θB = D/2, that is, exactly one-half of the real aperture scale D of the receiving and transmitting antennas, forming the phenomenon of "the same azimuth resolution at different ranges." In fact, although points A and B lie at different ranges from the track, the radar illuminates the ground with the same beam. As a result, within the durations over which the echoes of A and B are received, the same observation angle change θ = λ/D is formed for both, and hence the same azimuth resolution ρa = D/2 = λ/(2θ). Therefore, the physical essence of "the same azimuth resolution at different ranges" is that the azimuth resolution of SAR actually depends on the working wavelength and the change of observation angle of the radar, and has nothing to do with the range. This conclusion is also consistent with the explanation given by microwave holographic theory. For the bistatic SAR case in Fig. 1.10B, although the transmitting antenna phase center, receiving antenna phase center, and transmitting and receiving equivalent phase center are separated, the principle of aperture synthesis is still similar to that of monostatic SAR. That is, as the platforms move, the equivalent phase center still forms a change of viewing angle relative to the target. Therefore, we can still process the echo signals received at different platform positions to achieve aperture synthesis, and use the angular resolution of the synthetic narrow beam to achieve azimuth resolution. However, unlike monostatic SAR, the bistatic SAR transmitting and receiving antennas have their own independent motion directions and antenna beams, and the quantitative expressions for the synthetic aperture and azimuth resolution differ from those of monostatic SAR and are more complex, especially in the case of a large bistatic angle. A more detailed discussion is presented in Chapter 2.
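This range independence can be checked with a few lines of arithmetic. The sketch below is only an illustration: the 0.03 m wavelength, 2 m real aperture, and the two slant ranges are assumed values, not parameters of any system described in this book.

# Illustrative check of "the same azimuth resolution at different ranges".
wavelength = 0.03          # m, assumed X-band wavelength
D = 2.0                    # m, real antenna aperture (assumed)

for r in (30e3, 50e3):                    # two assumed slant ranges rA, rB
    L_syn = r * wavelength / D            # synthetic aperture length L = r*lambda/D
    theta_syn = wavelength / (2 * L_syn)  # two-way synthetic beamwidth lambda/(2L)
    rho_a = r * theta_syn                 # azimuth resolution r*theta = D/2
    print(f"r = {r/1e3:.0f} km: L = {L_syn:.0f} m, rho_a = {rho_a:.2f} m")
# Both ranges give rho_a = D/2 = 1.0 m: the azimuth resolution is range independent.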
1.1.5 Ground range resolution
In monostatic side-looking and parallel-flying bistatic side-looking SAR, two-dimensional imaging is achieved through high azimuth resolution and high slant range resolution. High azimuth resolution is realized by the narrow beam formed by the aperture synthesis effect of platform motion, while high slant range resolution is realized by the narrow pulse formed by the pulse compression effect of a wideband signal.
(1) Linear frequency-modulated signal According to radar signal theory, the delay resolution of a radar equals 1/Br, the reciprocal of the transmitted pulse signal bandwidth Br. That is, the larger the signal bandwidth Br, the higher
the delay resolution. For a simple single-carrier pulse, the product of pulse width T and signal bandwidth Br is 1, that is, BrT = 1. Because of this tight coupling between time width and bandwidth, the pulse width T of the transmitted signal must be reduced to increase the signal bandwidth Br and the delay resolution. However, considering the echo signal-to-noise ratio and working distance, the pulse width T of the transmitted signal should be increased. To overcome this contradiction, a large time-bandwidth signal formed by phase modulation within the pulse duration can be used so that BrT ≫ 1, thereby decoupling the time width T from the bandwidth Br. Such signals include the LFM signal, phase-coded signals, and so on, among which the most commonly used is the LFM pulse signal. For example, an LFM pulse with a time width of 100 μs and a bandwidth of 150 MHz has a time-bandwidth product of 1.5 × 10⁴. Signals with a large time-bandwidth product can meet the requirements of detection range and delay resolution through their time width and bandwidth, respectively. The baseband time-domain expression of the LFM pulse signal s(t) is given in Eq. (1.1a), where rect() represents the square-wave function with an amplitude of 1 and a width of 1. According to the principle of stationary phase, the frequency-domain expression can be obtained as in Eq. (1.1b), in which the constant amplitude factor and the constant phase factor of −π/4 are omitted:

s(t) = rect(t/T) exp(jπμt²)          (1.1a)
S(f) ≈ rect(f/Br) exp(−jπf²/μ)       (1.1b)

The left and right sides of Fig. 1.11 show the time-domain and frequency-domain waveforms of the signal, respectively; from top to bottom are the instantaneous frequency, real part, imaginary part, envelope, and phase. It can be seen that LFM pulse signals have highly similar functional shapes in the time domain and the frequency domain. The main difference is that, while the time-domain envelope is rectangular, the frequency-domain envelope is only close to rectangular and exhibits Fresnel ripples, which can be ignored when the time-bandwidth product satisfies BrT ≫ 1.
Fig. 1.11 Linear frequency-modulated signal.
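As a concrete illustration of Eq. (1.1a), the following minimal Python sketch generates the baseband LFM pulse for the 100 μs, 150 MHz example quoted above; the sampling rate is an assumption chosen only for this sketch.

import numpy as np

# Baseband LFM pulse of Eq. (1.1a): s(t) = rect(t/T) * exp(j*pi*mu*t^2).
# T and Br follow the 100 us / 150 MHz example in the text; fs is an assumed sampling rate.
T = 100e-6                       # pulse width (s)
Br = 150e6                       # bandwidth (Hz)
mu = Br / T                      # chirp rate, so that mu*T = Br
fs = 2 * Br                      # sampling rate, above Nyquist for the complex baseband

t = np.arange(-T / 2, T / 2, 1 / fs)
s = np.exp(1j * np.pi * mu * t**2)   # the rect(t/T) window is implicit in the finite t span

print("time-bandwidth product Br*T =", Br * T)           # 1.5e4, as stated in the text
print("instantaneous frequency sweep =", mu * T / 1e6, "MHz")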
(2) Pulse compression and delay resolution The baseband pulse signal s(t) is generated by the waveform generator in the transmitter, up-converted to the carrier frequency f0 = c/λ, and then transmitted by the antenna. The echoes generated by two scattering points C and D on the ground are down-converted to baseband after reaching the receiving antenna and receiver, yielding the complex baseband pulse echoes sc(t) and sd(t). As shown in Fig. 1.12, both have the same waveform as s(t) but differ from each other in amplitude, delay, and initial phase. After matched filtering with s(t), the two signals, which have different amplitudes and delays and partially overlap in the time domain, are each compressed into a narrow pulse of width 1/Br. The compression ratio is T/(1/Br) = BrT ≫ 1, the main peaks are resolved in the time domain, and a time-delay resolution ρτ = 1/Br, equal to the output pulse width, is obtained.
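The following sketch illustrates this matched-filtering step numerically. The pulse parameters and the 0.3 μs spacing of the two point echoes are assumed values chosen for the illustration, not the book's example, and the processing shown is only the generic matched filter s*(−t), not the full SAR processing chain.

import numpy as np

# Generic matched-filter (s*(-t)) compression of two overlapping chirp echoes.
# All values below are assumptions for the illustration.
T, Br = 20e-6, 50e6                          # pulse width, bandwidth
mu, fs = Br / T, 4 * Br                      # chirp rate, sampling rate
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * mu * (t - T / 2) ** 2)

dt = 0.3e-6                                  # delay difference between the two scatterers
d = int(dt * fs)
echo = np.zeros(len(t) + d, dtype=complex)
echo[:len(t)] += chirp                       # echo of scatterer "c"
echo[d:d + len(t)] += 0.7 * chirp            # weaker echo of scatterer "d", overlapping in time

mf = np.abs(np.convolve(echo, np.conj(chirp[::-1])))     # matched filter s*(-t)
i1 = int(np.argmax(mf))                                  # main peak of the stronger echo
masked = mf.copy()
masked[max(0, i1 - int(0.1e-6 * fs)):i1 + int(0.1e-6 * fs)] = 0   # blank its main lobe
i2 = int(np.argmax(masked))                              # main peak of the weaker echo
print("measured separation: %.2f us (true value %.2f us)" % (abs(i2 - i1) / fs * 1e6, dt * 1e6))
print("compressed pulse width ~ 1/Br =", 1e6 / Br, "us; compression ratio Br*T =", Br * T)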
Fig. 1.12 Pulse compression of linear frequency-modulated signal.
Fig. 1.13 Slant range and ground range resolution.
At the same time, the original delay and amplitude differences are retained in the processing, which provides an important foundation for restoring the positional relationship of the scattering points and achieving radiometric resolution. According to the theory of optimal detection and estimation, the process of pulse compression and delay resolution of the pulse echo is also the process of optimal detection and optimal delay estimation of the scattering point echo in the Gaussian white noise of the receiver.
(3) Slant range resolution and ground range resolution In monostatic side-looking SAR, the delay resolution ρτ is the basis of the slant range resolution ρr (along the line of sight of the platform observer) and the ground range resolution ρg (the corresponding distance on the ground), as shown in Fig. 1.13A.
In the monostatic SAR of Fig. 1.13A, the isotime delay rings are concentric circles centered on the phase center of the transceiver antenna. The imaging plane is orthogonal to the isodelay lines and coincides with the slant range direction, that is, the line of sight of the platform observer. From the electromagnetic propagation speed c and the round-trip two-way propagation path, the slant range resolution is ρr = cρτ/2 = c/(2Br), and the corresponding ground range resolution is ρg = ρr/cos α = c/(2Br cos α). Therefore the ground range resolution ρg is realized indirectly through the echo delay resolution ρτ and is jointly determined by the transmitted pulse bandwidth Br and the look-down angle α. Because α varies across the scene, the ground range resolution ρg changes nonlinearly. As the slant range of the imaging area decreases, the angle α of the propagation path increases and ρg = c/(2Br cos α) grows, so the ground range resolution deteriorates, forming the "close range compression" phenomenon and the corresponding geometric distortion of the SAR image. In the bistatic SAR of Fig. 1.13B, the isodelay ring becomes an elliptical ring with foci T and R and center E. The imaging plane becomes the plane swept out by the changing line of sight of the equivalent phase center; it forms an angle α′ with the ground, but its direction no longer coincides with the lines of sight of the transmitting and receiving platforms. The resolution of the range sum rΣ = rT + rR is ρrΣ = cρτ = c/Br, the resolution of the mean of the transmit and receive slant ranges r = (rT + rR)/2 is ρr = c/(2Br), and the resolution on the imaging projection plane is ρr′ = kρr, so the corresponding ground range resolution is ρg = ρr′/cos α′ = kc/(2Br cos α′), where the parameters k and α′ are jointly determined by the baseline height H, length L, inclination angle β, and their geometric relationship with the imaging area. When the bistatic angle γ decreases to 0, the isodelay elliptical rings degenerate into concentric circles centered at point E and the transmitting path gradually coincides with the receiving path, so the relationship between the imaging-plane resolution ρr′ and the ground range resolution ρg degenerates to the monostatic SAR case.
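For the monostatic relations just given, a small numerical sketch (assuming a 150 MHz bandwidth and a few illustrative look-down angles) shows how the ground range cell widens as α grows:

import math

# Ground range resolution rho_g = c / (2*Br*cos(alpha)) for a few look-down angles.
# Br and the alpha values are illustrative assumptions.
c, Br = 3e8, 150e6
rho_r = c / (2 * Br)                                   # slant range resolution, here 1 m
for alpha_deg in (30, 45, 60):
    rho_g = rho_r / math.cos(math.radians(alpha_deg))  # ground range resolution
    print(f"alpha = {alpha_deg} deg: rho_g = {rho_g:.2f} m")
# The ground cell widens as alpha grows, which is the "close range compression" effect.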
1.1.6 Azimuth resolution
In Section 1.1.4, the principle of azimuth resolution of monostatic side-looking SAR and parallel-flying bistatic side-looking SAR was explained from the viewpoint of the synthetic aperture, using antenna theory. In fact, it can be explained more accurately from the pulse compression of the azimuth echo, which is closer to the actual imaging signal processing. Here the simpler monostatic side-looking SAR is taken as an example, and the explanation is then extended to the case of parallel-flying bistatic side-looking SAR.
(1) The formation process of azimuth echo During the movement of the platform, the slant range between a scattering point on the ground and the phase center of the radar transceiver antenna, and the point's angular position in the antenna pattern, are physical quantities that change continuously with time. This time-varying characteristic carries over directly to the amplitude, time delay, and initial phase of the successive pulse echoes of the scattering point. If A(t) and ϕ(t) respectively represent the amplitude and initial-phase change history of the pulse echoes of scattering point A in Fig. 1.14, then sA(t) = A(t) cos ϕ(t) represents the azimuth echo of scattering point A.
Fig. 1.14 Azimuth Doppler signal formation and pulse compression.
Because the first derivative ϕ′(t)/2π of the phase ϕ(t) represents the instantaneous Doppler frequency fd(t) of the scattering point echo, the azimuth echo is often called a Doppler signal. In a monostatic side-looking SAR imaging process, the formation of the azimuth echo is illustrated in Fig. 1.14A–D. To simplify the description and the drawing, the antenna beam is drawn much wider than in reality and the equidistance lines are drawn as straight lines instead of circular arcs, because for a small beamwidth a circular arc can be approximated by a straight segment. The transmitting and receiving antenna is carried on a platform in uniform linear motion with velocity v, and the radar irradiates the area to the side of the motion direction through a beam of azimuth aperture size D and beamwidth λ/D. As shown in Fig. 1.14D, from the perspective of the platform, scattering point A on the ground moves in a straight line parallel to the flight path at the constant speed v. Owing to this relative motion between point A and the platform, the initial phase of the pulse echo changes, and the center frequency of the pulse echo acquires an additional Doppler shift relative to the carrier frequency of the transmitted pulse. Over time, the radial speed vr = v cos α of point A relative to the platform changes from +v to −v, and the echo Doppler frequency changes from +2v/λ to −2v/λ. During this process, when point A is located directly abeam of the platform, the instantaneous Doppler frequency is 0, as shown in Fig. 1.14C. Because of the directional selectivity of the antenna in illuminating the ground and receiving the echo, the period during which the radar can illuminate point A and receive its echo is a finite width TA, a quantity proportional to the range rA, which represents the time width of the azimuth echo SA(t) of point A, as shown in Fig. 1.14A. When the antenna beamwidth θ is small, the Doppler frequency fd(t) within the interval TA can generally be approximated as changing linearly, as shown by the solid line in Fig. 1.14C; that is, SA(t) can be modeled as a linear frequency-modulated signal whose rate of change μA is inversely proportional to the range rA. As shown in Fig. 1.14C, the bandwidth Ba of SA(t) can be determined from the Doppler frequency difference between the instants when point A enters and exits the antenna beam. Considering that θ is usually small, it is easy to obtain Ba = 2v/D, which indicates that the bandwidth Ba of SA(t) is a quantity independent of the range rA.
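To make these relations concrete, the sketch below evaluates the azimuth echo duration, Doppler bandwidth, and Doppler rate for assumed platform parameters (0.03 m wavelength, 200 m/s speed, 2 m aperture); the specific numbers are illustrative only.

# Azimuth (Doppler) signal parameters for assumed platform values.
wavelength, v, D = 0.03, 200.0, 2.0        # m, m/s, m (illustrative)

Ba = 2 * v / D                             # Doppler bandwidth, independent of range
for rA in (30e3, 50e3):                    # two assumed slant ranges
    TA = wavelength * rA / (D * v)         # illumination time of the scattering point
    muA = Ba / TA                          # Doppler rate, inversely proportional to rA
    print(f"rA = {rA/1e3:.0f} km: TA = {TA:.2f} s, Ba = {Ba:.0f} Hz, muA = {muA:.1f} Hz/s")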
Point A′ on the ground enters and exits the azimuth beam earlier than point A, and the azimuth Doppler signal it forms leads that of point A in time; the product of the delay difference Δt and the platform speed v reflects the azimuth distance between them, as shown in Fig. 1.14A. Since the radar transmits pulse signals at equal intervals Tγ and receives the echoes of scattering point A at the same intervals, the radar can only obtain samples of the azimuth echo SA(t) spaced Tγ apart. However, as long as Tγ is small enough to satisfy 1/Tγ > 2v/D, or vTγ < D/2, as required by the Nyquist sampling rate, the azimuth echo SA(t) can be recovered from these samples. This corresponds to emitting one pulse every time the platform advances by no more than half the antenna aperture scale. For example, for an antenna aperture of 2 m, a wideband pulse must be transmitted every time the platform flies 1 m. If the platform is an aircraft flying at 200 m/s, the transmit interval is 5 ms and the corresponding pulse repetition frequency is 200 Hz.
(2) Azimuth pulse compression and azimuth resolution According to the working wavelength λ, slant range rA, transmitting and receiving antenna aperture D, actual flight speed v, and other parameters, the azimuth echo time width Ta and bandwidth Ba corresponding to point A can be calculated in order to construct a matched filter, with which pulse compression can be performed on the azimuth echo of point A. Scattering points A and A′ lie on the same ground line parallel to the route, so their azimuth echoes have the same waveform but differ in amplitude, initial phase, and time delay, differences that come mainly from their scattering rates and azimuth positions. The two signals, with different amplitudes and delays and partially overlapping in the time domain, are compressed by the matched filter into narrow pulses of width 1/Ba = D/(2v). The corresponding compression ratio 2λrA/D² is usually much larger than 1; for example, for an X-band radar with a wavelength of 3 cm, an antenna size of 2 m, and an observation distance of 50 km, the pulse compression ratio is 750. In the process of azimuth pulse compression, the energy of the two azimuth signals is focused and their main output peaks are separated in the time domain, thus obtaining a delay resolution ρτ = D/(2v) equal to the width of the output pulse.
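The numbers quoted above can be re-derived directly from the stated formulas; the short sketch below simply re-computes the sampling constraint and the azimuth compression ratio from the book's example values.

# Re-computation of the numerical examples above (D = 2 m, v = 200 m/s, lambda = 3 cm, rA = 50 km).
wavelength, D, v, rA = 0.03, 2.0, 200.0, 50e3

prf_min = 2 * v / D                        # minimum PRF required by 1/T > 2v/D
print("minimum PRF:", prf_min, "Hz, i.e. one pulse every", 1e3 / prf_min, "ms")
print("platform advance per pulse:", v / prf_min, "m (at most D/2 =", D / 2, "m)")

ratio = 2 * wavelength * rA / D**2         # azimuth compression ratio Ta*Ba
print("azimuth compression ratio:", ratio, "; azimuth resolution D/2 =", D / 2, "m")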
At the same time, the original delay and amplitude differences are retained in the process, which provides the basic conditions for restoring the spatial position relationship of A and A′ and achieving radiometric resolution. From the formation mechanism of the azimuth echo delay difference, the corresponding azimuth ground resolution is ρa = ρτ v = D/2; that is, if an azimuth resolution of 1 m is needed, the corresponding antenna aperture is 2 m. For the range unit rB in Fig. 1.14, the azimuth resolution process is similar to that of range unit rA, except that the matched filter parameters differ, and an azimuth resolution of D/2 is also obtained; that is, the resolution obtained by azimuth compression is independent of range. The azimuth echoes generated by scattering points in different range units can first be separated by range compression, and then processed by azimuth compression separately, so as to accommodate their different matched filter parameters. According to the theory of optimal detection and estimation, the process of using azimuth pulse compression to realize the time delay resolution of the azimuth echo is also the optimal detection of the scattering point azimuth echoes and the optimal estimation of their time delay differences in Gaussian white noise.
(3) Azimuth resolution of bistatic SAR The azimuth resolution of constant-speed parallel-flying side-looking bistatic SAR can be explained by analogy with side-looking monostatic SAR. As shown in Fig. 1.15, the radar transmitting station T and receiving station R are carried on two parallel flight platforms with speed v, so their equivalent phase center E moves with the same velocity vector. The transmitting station transmits a linear frequency-modulated signal with a large time-bandwidth product at intervals Tγ. The receiving station receives the echo signal synchronously and, through slant range pulse compression, realizes range resolution, which separates the azimuth echoes of scattering points in different range units. Then, through azimuth compression in every range unit, azimuth resolution is achieved, realizing 2D imaging of ground objects. This works because, viewed from the equivalent phase center, the ground scattering point A moves along a straight line parallel to the track at speed v, and its echo Doppler frequency also undergoes a change from +2v/λ to −2v/λ.
Fig. 1.15 Azimuth resolution of parallel-flying side-looking bistatic SAR at constant velocity.
During the irradiation of the equivalent receiving beam, the azimuth Doppler signal can again be regarded as a linear frequency-modulated signal. Therefore, as in monostatic SAR, after range pulse compression of each pulse echo, the corresponding azimuth Doppler matched filter can be constructed from the measured motion parameters and geometric relationships, and azimuth pulse compression can be carried out for the range resolution unit at each slant range rA, achieving azimuth resolution. Strictly speaking, however, in bistatic SAR the echo Doppler frequency of point A is determined by its Doppler shifts relative to both T and R, so the quantitative expression of the azimuth resolution differs from that of monostatic SAR and is more complex, especially for a large bistatic angle. The related analysis is discussed in detail in Chapter 2.
1.2 Configuration classification
In bistatic SAR the transmitter and receiver are mounted on different platforms, and the two platforms move independently. The system configuration, which reflects the spatial relationship between the transmitting station, the receiving station, and the area to be imaged, is therefore more complex and diverse than in monostatic SAR. In some configurations,
the imageable area of the receiving platform also extends from the side-looking and squint-looking of monostatic SAR to forward-looking, downward-looking, and backward-looking, forming application modes with different characteristics and values. Different system configurations are also an important basis for classifying bistatic SAR systems. The system configuration and main classifications of bistatic SAR are described below from the following aspects: transceiver baseline type, synthetic aperture direction, baseline aperture combination, transceiver flight mode, imaging region location, transceiver scanning mode, and bearing platform combination.
1.2.1 Transceiver baseline type
The relative spatial position among the transmitter, the receiver, and the area to be imaged, together with its time-varying characteristics, is one of the key elements of the bistatic SAR system configuration. It has an important influence on the imaging spatial resolution and on the complexity of efficient imaging processing, and it is an important way to classify bistatic SAR. As shown in Fig. 1.16, the line connecting the transmitting and receiving antenna phase centers T and R is called the transceiver baseline, and its length is L. The included angle β between the baseline and the ground plane is called the baseline inclination. The projection of the baseline on the ground is called the baseline projection. The distance H between the center E of the transceiver baseline and the ground plane is called the baseline height. The opening angle γ subtended by the two endpoints of the transceiver baseline at the ground scattering point P is called the bistatic angle. The distances rT and rR from T and R to the scattering point are called the transmitting and receiving slant ranges, respectively, and their ratio is called the slant range ratio η = rT/rR.
Fig. 1.16 Transceiver baseline length, height, inclination, and isotime delay line.
These baseline parameters are key parameters of the bistatic SAR system configuration and also reflect the degree of difference between bistatic SAR and monostatic SAR. For example, a large bistatic angle γ enables bistatic SAR to obtain feature information significantly different from that of monostatic SAR, and it is also a necessary condition for the antidetection and antijamming capability of a bistatic SAR receiving station. In addition, a large slant range ratio can significantly reduce the gain requirement of the receiving antenna. The other baseline parameters have an important influence on the isodelay line distribution, the ground range resolution, and the image ambiguous area.
(1) Isotime delay line The pulse echo generated by a scattering point P on the ground travels along two paths, from the phase center T of the transmitting antenna to the scattering point P, and then from P to the phase center R of the receiving antenna. Relative to the transmission time of the signal, a time delay τ is generated, determined by the speed of light c and the sum of the two distances rP = rT + rR, that is, τ = (rT + rR)/c. The set of ground points with the same delay at a given time is usually called the isotime delay line or time delay contour. In bistatic SAR, the isotime delay line is the intersection of the ground plane with the ellipsoid of revolution whose foci are the baseline endpoints T and R and whose axis is the baseline direction; it is usually an ellipse. If the isotime delay lines on the ground are drawn with a fixed delay increment Δτ, a cluster of concentric ellipses of equal delay is obtained. Its center O corresponds to the minimum time delay of ground scattering points, τmin = √(L² cos² β + 4H²)/c, and lies on the side of the baseline projection towards the descending end of the baseline, at a distance OE′ = L² cos β sin β/(4H) from the projection point E′ of the baseline center. When the inclination angle β = π/4, OE′ reaches its maximum value L²/(8H). Since H > L sin β/2 and the maximum value of OE′ is L/(2√2), the distance between O and E′ is usually smaller than the baseline length. The isotime delay elliptical clusters are distributed symmetrically on both sides of the baseline projection line. Moreover, moving gradually away from point O, the isotime delay lines become denser and the ellipses gradually approach circles. In regions far from the center O, the spacing between isotime delay rings reaches its minimum value, determined directly by the delay increment Δτ, namely cΔτ/2.
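A short numerical sketch of these baseline-geometry relations is given below; the baseline length, height, and inclination are illustrative assumptions rather than values taken from the book.

import math

# Baseline geometry quantities for an assumed bistatic configuration.
c = 3e8
L, H, beta = 2000.0, 6000.0, math.radians(30)      # baseline length, height, inclination (assumed)

tau_min = math.sqrt(L**2 * math.cos(beta)**2 + 4 * H**2) / c   # minimum delay of ground points
OE = L**2 * math.cos(beta) * math.sin(beta) / (4 * H)          # offset of ring center O from E'
OE_max = L**2 / (8 * H)                                        # largest offset, reached at beta = pi/4
print(f"tau_min = {tau_min * 1e6:.2f} us, OE' = {OE:.1f} m, max OE' = {OE_max:.1f} m")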
(2) Delay ground range resolution When the increment Δτ is taken as the delay resolution ρτ, the spacing of the ground isotime delay lines represents the minimum distinguishable ground distance, that is, the ground range resolution ρg. The density of the isotime delay lines on the ground therefore represents the ground range resolution in that area: the denser the isotime delay lines, the finer the ground range resolution. The best ground range resolution direction lies along the normal to the isotime delay line, so the direction of the ground isotime delay lines also determines the best resolution direction of the ground range in that region. Therefore the height, length, and inclination of the baseline not only strongly influence the distribution of the isotime delay line clusters on the ground, determining the density and trend of the contours in a specific region, but also directly determine the ground range resolution of different regions for a given delay resolution. From the distribution of the isotime delay lines, it can be seen that the ground range resolution in different areas is related to the distance between the area and the center O of the isotime delay rings. In areas far from the center, the bistatic angle γ becomes smaller, the isodelay lines are denser, and the ground range resolution is better; at infinity the ground range resolution approaches the range resolution determined by the transmitted signal bandwidth, ρg = cρτ/2 = c/(2Br). In addition, a nonlinear effect similar to that of monostatic SAR exists between the delay resolution ρτ and the ground range resolution ρg: near the center of the rings the isotime delay line spacing is not uniform, and the resulting microwave image exhibits the "close range compression" geometric distortion; away from the central region the isotime delay line spacing tends to be uniform, and this phenomenon gradually weakens until it disappears.
(3) Image ambiguous area The height, length, and inclination of the baseline determine the ground position of O and thus also affect the position of the imaging ambiguity. If the imaging area lies in the central concentric rings and contains a complete isotime delay ring, echoes from different positions on the ring cannot be distinguished by their time delay difference. Moreover, regardless of how the motion direction of the equivalent phase center is designed, areas of equal Doppler frequency appear on the ring, and echoes from these two areas cannot be distinguished by differences in Doppler frequency or Doppler history either. Ghost images are thereby formed in the image domain, resulting in ambiguity, so the area around O cannot be imaged.
Fig. 1.17 Three typical configurations for the transceiver baseline: (a) vertical; (b) tilted; (c) horizontal.
(4) Baseline type selection According to the relationship between the transceiver baseline and the ground, bistatic SAR can be divided into three categories: vertical baseline, inclined baseline, and horizontal baseline, as shown in Fig. 1.17. For these three typical baseline types, the ground range resolution associated with the isotime delay rings has no obvious directionality across different regions; with respect to the angle θ in Fig. 1.16 it is essentially isotropic. Therefore, for the ground range resolution of a specific region, the baseline projection direction does not have a significant impact during system configuration design. Bistatic SAR can be further divided into two categories, translational variant and translational invariant, according to how the transceiver baseline changes with time, as shown in Fig. 1.18. Because the efficient imaging processing for these two types of configurations differs significantly, they belong to different categories of bistatic SAR. If the direction and length of the baseline remain unchanged within the synthetic aperture time Ta, the mode is called translational invariant; if one or both of the baseline direction and length change within the synthetic aperture time Ta, the mode is called translational variant.
Fig. 1.18 Time variability of the transceiver baseline: (a) translational invariant; (b) translational variant.
In the process of aperture synthesis, the spatial position of the baseline center E must change, because only in this way can the required synthetic aperture be obtained. The essence of the division into translational variant and translational invariant is whether the positional relationship between the two platforms, each moving in a straight line at uniform speed, changes over time. The translational invariant mode corresponds to an imaging system in which the two bearing platforms form a virtually rigid geometric structure, while the translational variant mode corresponds to a flexible structure. The complexity of efficient imaging algorithms differs markedly between these two baseline configurations. In the translational invariant mode, the relative position of the transmitter and receiver remains unchanged, and the transmitting platform, receiving platform, and equivalent phase center have the same velocity and parallel tracks. Point targets on different ground lines parallel to the flight track have different Doppler histories, whereas scattering points on the same line parallel to the flight track have the same Doppler history, differing only in time offset. Therefore, in the translational invariant mode, the Doppler history of ground point targets exhibits one-dimensional space-variant characteristics with target position, and efficient imaging processing is similar to that of monostatic SAR. In the translational variant mode, however, because the relative position of the transmitter and receiver changes with time, the Doppler history of ground scattering points exhibits obvious two-dimensional space-variant characteristics with scattering point position, so the efficient imaging processing differs from that of monostatic SAR and its complexity increases significantly. For a detailed discussion, see Section 3.1.
1.2.2 Synthetic aperture direction
The synthetic aperture direction refers to the motion direction of the equivalent phase center of the bistatic SAR transmitting and receiving antennas. It is the second key element of the bistatic SAR system configuration, it has an important influence on the location of the best imaging area on the ground, and it is also an important way of classifying bistatic SAR.
In engineering applications, the aperture synthesis process of bistatic SAR is usually described accurately by the time-difference resolution of the Doppler signal. In terms of physical concepts, however, bistatic SAR can also be made approximately equivalent to a monostatic SAR configuration by adopting the concept of the equivalent phase center of the transceiver. The virtual synthetic aperture formed by the motion of the equivalent phase center can then be used to explain, qualitatively and intuitively, the Doppler ground range resolution performance of different regions in bistatic SAR, which plays a macroscopic guiding role in engineering applications. Since this is a qualitative analysis, the equivalent phase center may, depending on the purpose, be approximated either as the center of the transceiver baseline or as the intersection of the bistatic angle bisector with the transceiver baseline. For convenience, in this section the transceiver baseline center E is taken as the equivalent phase center. As shown in Fig. 1.19, the initial height of the equivalent phase center E above the ground is H, its ground projection point is E′, its velocity vector is vE with magnitude vE, and the intersection of the extended velocity direction with the ground plane is O. According to the angle between the velocity vector vE (the direction of the equivalent synthetic aperture) and the ground plane, the distance between point O and point E′ can range from 0 to +∞, corresponding respectively to falling-body motion, diving motion, and horizontal motion. The deviation angle φ of a ground scattering point P from vE has an important influence on the magnitude and change history of the echo Doppler frequency of point P. The initial height H of the equivalent phase center and the magnitude and direction of the velocity vE have important effects on the ground Iso-Doppler line distribution, the ground range resolution, and the imaging ambiguous area.
(1) Iso-Doppler line As shown in Fig. 1.19, because the equivalent phase center E moves relative to the ground, the echo of scattering point P on the ground acquires a Doppler shift fd.
Fig. 1.19 Typical movement of equivalent phase center.
This shift is determined by the relative radial velocity vP = vE cos φ and the working wavelength λ = c/f0, namely fd = 2vE cos φ/λ. At a given moment, the set of ground points with the same Doppler frequency is called an Iso-Doppler line or Doppler contour. In monostatic SAR, the Iso-Doppler line is the intersection of the ground plane with the cone whose vertex is E and whose axis is the velocity vector direction; it can take three different shapes, circle, ellipse, or hyperbola, depending on the angle between the velocity vector and the ground. If the Iso-Doppler lines on the ground are drawn with a fixed Doppler increment Δfd, a cluster of Iso-Doppler curves centered on point O is obtained, where point O corresponds to the maximum Doppler shift 2vE/λ, as shown in Fig. 1.19. For bistatic SAR, the equivalent phase center is an approximate concept, because the Doppler frequency is generated by the motion of the scattering point P relative to two parts, the transmitting station and the receiving station. Although the Iso-Doppler lines are deformed relative to the monostatic case, they can still be used for macroscopic analysis; to simplify the analysis and obtain qualitative, macroscopic conclusions, the influence of this deformation is neglected in this chapter. In the horizontal motion mode of Fig. 1.19C, point O lies at infinity along the extension of the velocity projection, and the Iso-Doppler clusters are distributed symmetrically on both sides of the projection line of the velocity vector vE. As the Iso-Doppler lines move gradually away from point O and approach E′, φ gradually increases and the Iso-Doppler lines become denser. At point E′, φ = π/2, and the Iso-Doppler line spacing reaches a minimum determined by the Doppler increment Δfd, the velocity vE, the wavelength λ, and the height H, namely H·Δfd/(2vE/λ), that is, H times the Doppler increment normalized by 2vE/λ.
Suppose the ground spacing of the Iso-Doppler lines along the relative motion direction of point P is ΔL. Within the synthetic aperture time Ta during which the echo of point P can be received, the relative displacement of point P is L = vP Ta, the number of Doppler contours crossed is N = L/ΔL, the corresponding Doppler bandwidth is Ba = N Δfd, and the corresponding Doppler time-difference resolution is ρτ = 1/Ba. It is easy to show that the corresponding ground range resolution is ρg = vP ρτ = ΔL/(Ta Δfd). The ground range resolution ρg of point P along its relative motion direction is therefore directly proportional to the contour spacing ΔL. When the contour increment Δfd is set to 1/Ta, the contour spacing represents the minimum distinguishable ground distance, that is, the ground range resolution ρg. The density of the Iso-Doppler lines therefore represents the ground range resolution in the area: the denser the contour lines, the finer the ground range resolution. The best resolution direction lies along the normal to the Iso-Doppler line, so the direction of the Iso-Doppler lines on the ground also determines the best resolution direction of the area. Consequently, the initial position and velocity vector of the equivalent phase center not only affect the distribution pattern of the Iso-Doppler line clusters on the ground but also determine the contour density and direction in a specific region, and thus have a direct impact on the Doppler rate of the scattering point's Doppler signal. Moreover, for a given synthetic aperture time they directly determine the Doppler signal bandwidth, and hence the delay resolution and the corresponding ground range resolution of the Doppler signal. For the case in Fig. 1.19C, the distribution of the Iso-Doppler lines shows that the ground range resolution of different regions is related to the distance between the region and the ground projection point of the equivalent phase center: near point E′ the contours are denser and, for the same synthetic aperture time, the ground range resolution is better. In addition, because adjacent Iso-Doppler lines are not parallel in most regions, the density changes along the contour direction, so the ground range resolution obtained through the Doppler signal delay resolution is also accompanied by geometric distortion, which must be corrected after imaging processing.
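The relation ρg = ΔL/(Ta Δfd) can be illustrated for the horizontal-motion case near the sub-platform point E′, where the contour spacing is H·Δfd/(2vE/λ) as noted above. The platform height, speed, wavelength, and aperture time in the sketch are illustrative assumptions.

# Doppler ground range resolution near E' for horizontal motion (illustrative values).
wavelength, vE, H, Ta = 0.03, 200.0, 5000.0, 1.0

dfd = 1 / Ta                                   # Iso-Doppler increment set to 1/Ta
dL = dfd / (2 * vE / wavelength) * H           # contour spacing near E' (normalized increment * H)
rho_g = dL / (Ta * dfd)                        # ground range resolution from the Doppler delay resolution
print(f"contour spacing dL = {dL:.3f} m, Doppler ground range resolution rho_g = {rho_g:.3f} m")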
(3) Imaging ambiguous area The position and velocity direction of the equivalent phase center E also affect the position of the imaging ambiguous area. In the vicinity of point O, if the imaging area contains a complete Iso-Doppler ring, the scattering points at different positions on the ring have the same initial Doppler frequency and similar instantaneous Doppler frequency histories, so they cannot be distinguished by Doppler frequency or Doppler signal delay differences. Moreover, no matter how the baseline direction and the isotime delay line direction are chosen, an isotime delay line passes through the Iso-Doppler ring, so two different regions with the same time delay are formed on the ring and cannot be distinguished by the time delay difference either. Ghosts are thereby formed in the image domain, causing ambiguity, and the area near point O is therefore a nonimaging area.
(4) Choice of synthetic aperture direction According to the size of the angle φ in Fig. 1.19C, the equivalent synthetic aperture can be divided into three categories: lateral (side-looking) synthetic aperture, oblique synthetic aperture, and forward synthetic aperture, corresponding to φ near π/2, φ near π/4, and φ close to 0, respectively. These three types of synthetic aperture provide distinctly different Doppler resolution for the region around point P. For example, when φ = π/2, the relative motion direction of point P is close to the best resolution direction and the contour density is largest; in other regions the contours become sparser, the motion direction of point P deviates from the best resolution direction, and the best Doppler ground range resolution cannot be achieved. Therefore, to obtain the best Doppler ground range resolution, the velocity vE and the synthetic aperture direction should, if conditions permit, be set so that the imaging area lies as close as possible to the direction φ = π/2; that is, the lateral synthetic aperture is preferred. Only when the application conditions do not allow it are the oblique synthetic aperture and the forward synthetic aperture considered.
1.2.3 Baseline aperture combination

The combination of baseline and aperture refers to the geometric relationship between the baseline direction and the aperture direction.
It is the third key element of the bistatic SAR system configuration. It has an important influence on the location distribution and imaging performance of the imageable area on the ground, and it is also an important way to classify bistatic SAR. As mentioned earlier, the baseline determines the distribution and direction of the ground isotime delay lines, while the position and velocity vector of the equivalent phase center determine the distribution and trend of the ground Iso-Doppler line clusters. Their different combinations significantly change the distribution of imageable areas on the ground and also directly affect the two-dimensional ground range resolution of specific areas.
(1) Two-dimensional resolution relation
The basic condition for realizing two-dimensional imaging is that the imaging system must have a sufficiently high one-dimensional spatial resolution in two orthogonal directions on the ground. In synthetic aperture radar imaging, the two one-dimensional spatial resolutions are derived from the delay resolution of the echo pulse signal and the delay resolution of the Doppler signal, referred to respectively as "the time delay resolution" and "the Doppler resolution." Their magnitudes are determined by the bandwidths of these two signals, and their best ground range resolution directions are perpendicular to the isotime delay line and to the Iso-Doppler line, respectively. Since the angle between the two contours is closely related to the system configuration and to the position of the region, they are usually not orthogonal on the ground, and therefore the best ground range resolution directions of the time-delay resolution and the Doppler resolution are usually not orthogonal either. As mentioned before, when the isotime delay line increment Δτ is taken as the reciprocal of the transmitted signal bandwidth Br and the Iso-Doppler line increment Δfd is taken as the reciprocal of the synthetic aperture time Ta, the spacing between these two kinds of ground contours represents the minimum resolvable ground range, that is, the ground range resolution. Therefore, in order to obtain ground range resolution that is evenly distributed and balanced in two dimensions over the image field, it is necessary to design the system configuration reasonably, so that the two contour clusters segment the imaging area into a near-square grid of balanced scale as far as possible.
In bistatic SAR, the combination mode of baseline and velocity determines the distribution density, intersection angle, and grid shape of the two contour clusters in different regions, which affects the distribution form of the imageable regions and also affects their resolution performance. Therefore, when the application conditions permit, the two contour clusters should be made to segment the area to be imaged into a high-density and nearly square uniform grid, which is the basic principle for the design of the baseline-velocity combination configuration.
(2) Baseline aperture combination
According to the preceding principles, it is easy to find that some combination modes can form imageable areas over a large region, although nonimageable areas also exist, while other combination modes cannot form an imageable area on the ground at all. For example, the falling motion pattern in Fig. 1.19A, combined with the three baseline types in Fig. 1.17, is not a good combination pattern: under these combination modes, the crossing relation of the two isolines cannot conform to the above configuration design principle in any region. Therefore the falling body movement mode and the large subduction angle movement mode of the equivalent phase center E are not adopted in the system configuration of SAR imaging. To simplify the discussion, only the combination of the horizontal motion mode in Fig. 1.19C with the baseline types in Fig. 1.17 is considered in this book. According to the included angle between the velocity vector of the equivalent phase center and the projection direction of the baseline, bistatic SAR can be divided into three main categories: along the baseline, tangent to the baseline, and oblique to the baseline, as shown in Fig. 1.20. These kinds of baseline-aperture combinations can form an imageable area on the ground, which can meet the needs of different applications.
(3) Configuration design principles
As mentioned earlier, for the different baseline configurations, the isotime delay line clusters formed on the ground are all concentric elliptical clusters, and their centers O are located near the baseline center projection point E′. Except for the direct down-looking region of the equivalent phase center E (that is, the region near the O point), which belongs to the nonimageable region due to ambiguity, the ground range resolution performance of the other regions is approximately isotropic with the O point as the center.
Fig. 1.20 Combination mode of baseline and velocity: (a) vertical baseline-tangent baseline; (b) slant baseline-along the baseline; (c) slant baseline-tangent baseline; (d) slant baseline-oblique baseline; (e) horizontal baseline-along the baseline; (f) horizontal baseline-tangent baseline; (g) horizontal baseline-oblique baseline.
However, the Iso-Doppler line cluster formed on the ground by the equivalent phase center velocity vector vE, and the corresponding ground range resolution performance, are anisotropic. In the zone crossing the E′ point and perpendicular to the direction of the velocity projection, the Iso-Doppler line distribution is denser and the Doppler ground range resolution performance is better. Moreover, under the various combination modes, this zone forms an almost orthogonal crossing relation with the isotime delay lines, so it is also possible to obtain good ground-range resolution in two mutually orthogonal directions on the ground there. Consequently, the distribution of the imageable region is mainly determined by the position of the equivalent phase center and the direction of its velocity vector. Taking the equivalent phase center E as reference, the best imaging area is on the broadside of the velocity vector projection, and the second best is on the oblique side. The areas in front of and behind the equivalent phase center E, in all combination modes, can only provide one-dimensional ground-range resolution because the two contour lines there are nearly parallel, so they are not imaging areas.
Fig. 1.21 Effect of the combined mode of baseline and aperture on the imageable area.
Understood from the perspective of the synthetic aperture of the equivalent phase center, these regions are located at both ends of the SAR array and cannot obtain the resolution perpendicular to the aperture direction, that is, a resolution direction orthogonal to that of the delay resolution. Fig. 1.21 shows the baseline state at a certain time. Assuming that the system is in a translational-invariant configuration, there is a significant difference in the distribution of the imageable area between the two typical cases of the horizontal baseline-along baseline motion combination (a) and the horizontal baseline-tangent baseline motion combination (b). For the specific areas A, C and B, D, the imaging performance under the two different baseline-aperture combination modes is transposed. To obtain high-resolution two-dimensional imaging in the area to be imaged, two intuitive configuration design principles need to be followed: (1) the lateral observation mode of the synthetic aperture should be adopted as far as possible, so as to point the direction of the densest Doppler isolines toward the region to be imaged and obtain the best Doppler ground-range resolution; (2) the synthetic aperture direction should be parallel or nearly parallel to the isotime delay lines of the imaging area, so that the Iso-Doppler lines and the isotime delay lines cross orthogonally or quasiorthogonally. Roughly speaking, this is equivalent to aligning the broadside of the motion direction of the equivalent phase center with the area to be imaged. For example, for the baseline morphology shown in Fig. 1.21, high-resolution two-dimensional imaging in areas B and D along the tangent-baseline direction requires a synthetic aperture along the projection direction of the baseline, while high-resolution two-dimensional imaging in areas A and C along the baseline requires a synthetic aperture along the tangent-baseline direction.
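The orthogonality principle stated above can be made concrete with a small numerical sketch. Under an assumed bistatic geometry (transmitter and receiver positions and velocities chosen arbitrarily for this example), it computes the ground gradients of the bistatic range sum and of the Doppler frequency at one ground point; the angle between these gradients is the crossing angle of the isotime delay line and the Iso-Doppler line there, and the closer it is to 90 degrees, the closer the local resolution grid is to a square.

```python
import numpy as np

# Sketch (assumed geometry) of the crossing angle between the isotime delay line
# and the Iso-Doppler line at one ground point. The isotime delay line is a level
# set of the bistatic range sum R_T + R_R; the Iso-Doppler line is a level set of
# fd = -(1/lambda) d(R_T + R_R)/dt. Their ground gradients give the two best
# resolution directions.
wavelength = 0.03
xT, vT = np.array([-30e3, 0.0, 9000.0]), np.array([120.0, 0.0, 0.0])  # transmitter (assumed)
xR, vR = np.array([  5e3, 0.0, 4000.0]), np.array([100.0, 0.0, 0.0])  # receiver (assumed)

def range_sum(p):
    return np.linalg.norm(p - xT) + np.linalg.norm(p - xR)

def doppler(p):
    uT = (p - xT) / np.linalg.norm(p - xT)
    uR = (p - xR) / np.linalg.norm(p - xR)
    return (vT @ uT + vR @ uR) / wavelength   # equals -(1/lambda)*d(RT+RR)/dt for fixed p

def ground_gradient(f, p, d=1.0):
    # numerical gradient restricted to the ground plane (z = 0)
    gx = (f(p + [d, 0, 0]) - f(p - [d, 0, 0])) / (2 * d)
    gy = (f(p + [0, d, 0]) - f(p - [0, d, 0])) / (2 * d)
    return np.array([gx, gy])

P = np.array([0.0, 12e3, 0.0])                # ground point of interest (assumed)
g_tau, g_fd = ground_gradient(range_sum, P), ground_gradient(doppler, P)
cosang = abs(g_tau @ g_fd) / (np.linalg.norm(g_tau) * np.linalg.norm(g_fd))
print(f"contour crossing angle at P: {np.degrees(np.arccos(cosang)):.1f} deg")
```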
1.2.4 Transmitting and receiving flight mode

The transmitting and receiving flight mode refers to the relationship between the velocity vector of the transmitting station and that of the receiving station. As the fourth key element of a bistatic SAR system configuration, it has an important influence on the time variability of the system baseline and on the position distribution of the imageable area relative to the receiving platform, and it is also an important way of classifying bistatic SAR systems. According to the directional relationship between the two platform velocity vectors, the bistatic SAR configuration can be divided into three modes: parallel flight (the velocity vectors are parallel), oblique flight (the velocity vector directions form a nonorthogonal included angle), and orthogonal flight (the velocity vector directions are orthogonal). The same equivalent phase center velocity vector vE can be realized by combining different transmitting platform velocity vectors vT and receiving platform velocity vectors vR, thereby bringing greater configuration flexibility to practical applications (a sketch of this combination appears at the end of this subsection). For example, as shown in Fig. 1.22, the tangent baseline motion mode of the vertical baseline can be realized by horizontal flight at different speeds, horizontal flight at the same speed, orthogonal flight at different speeds, orthogonal flight at the same speed, fixed reception, and other ways. However, the isotime delay line clusters and Iso-Doppler line clusters of these several implementations form similar grid divisions on the ground and show similar imaging spatial resolution performance in the same area on the ground.
Fig. 1.22 Different implementation modes of transceiver flight mode.
Therefore the position of the imageable region relative to the motion direction of the receiving platform can be changed significantly by adjusting the transmitter-receiver flight mode, under the condition that the crossing relation of the two contours and the shape of the grid distribution remain generally unchanged.
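The point that the same vE can be realized by different vT/vR combinations can be illustrated with a trivial sketch. It assumes the simple midpoint equivalent-phase-center model (EPC at the baseline midpoint, so vE ≈ (vT + vR)/2), which is only one of the equivalence ideas mentioned in this chapter and not an exact result; the numbers are arbitrary.

```python
import numpy as np

# Sketch of how different transmitter/receiver velocity pairs can realize the same
# equivalent phase center (EPC) velocity, under the assumed midpoint EPC model
# (EPC at the baseline midpoint, so vE ~= (vT + vR) / 2).
def epc_velocity(vT, vR):
    return 0.5 * (np.asarray(vT) + np.asarray(vR))

pairs = {
    "same-speed parallel flight":      ([100.0, 0.0, 0.0], [100.0, 0.0, 0.0]),
    "different-speed parallel flight": ([150.0, 0.0, 0.0], [ 50.0, 0.0, 0.0]),
    "fixed reception":                 ([200.0, 0.0, 0.0], [  0.0, 0.0, 0.0]),
}
for name, (vT, vR) in pairs.items():
    print(f"{name:32s} -> vE = {epc_velocity(vT, vR)}")
# All three assumed pairs give vE = [100, 0, 0] m/s, i.e., the same EPC velocity.
```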
1.2.5 Location of imaging area

The location of the imaging area refers to the relationship between the imaging area and the moving direction of the receiving platform. It is the fifth key element of the bistatic SAR system configuration, and it has an important impact on the application efficiency of the bistatic SAR system and on the complexity of efficient imaging processing. Therefore, it is also an important way to classify bistatic SAR. As shown in Fig. 1.23A, monostatic SAR has imaging capability only on the two sides of the flight path (including the forward and backward oblique sides), because the equivalent phase center coincides with the position of the platform, while the area that cannot be imaged is concentrated along the ground projection of the flight path. Because the directions of the time-delay ground range resolution and the Doppler ground range resolution in this area are nearly parallel, only one-dimensional ground range resolution can be provided and two-dimensional imaging cannot be realized. As described in Section 1.2.4, bistatic SAR separates the position of the transmitting-receiving equivalent phase center from the position of the receiving platform, and separates the velocity vector of the equivalent phase center from the velocity vector of the receiving platform. These two separations significantly increase the degrees of freedom of the imaging configuration, form an imageable-region distribution significantly different from that of monostatic SAR, and give the receiving platform unprecedented forward-looking, downward-looking, and backward-looking imaging capabilities.
Fig. 1.23 Differences in imaging area between monostatic and bistatic SAR.
In the horizontal baseline and tangent baseline motion combination mode shown in Fig. 1.23B, the equivalent phase center of the bistatic SAR is separated from the receiving platform, so that the ground projection of the receiving station's flight path is separated from the nonimaging area. This makes it possible for bistatic SAR to realize side-looking imaging in most areas, and it also places the forward, backward, and downward areas of the receiving station in an oblique or orthogonal crossing relationship between the isotime delay lines and the Iso-Doppler lines, thereby providing the valuable backward-looking, downward-looking, and forward-looking imaging capability. In the vertical baseline and tangent baseline motion combination configuration shown in Fig. 1.23C, the bistatic SAR equivalent phase center velocity vector is separated from the receiving station velocity vector. The limitation that the best imaging area is fixed on the broadside of the moving direction of the receiving station is removed, and the extremely valuable backward-looking and forward-looking imaging abilities of the receiving station are obtained. It is worth noting that in different imaging regions there is a significant difference in the crossing relation between the two contours, which has an important impact on the echo pattern and on the processing complexity of efficient imaging. Therefore, the location of the imaging area relative to the receiving platform is also a basis for the classification of bistatic SAR, such as side-looking, squint-looking, forward-looking, backward-looking, and downward-looking.
1.2.6 Transmitting and receiving scan mode

The transmitting/receiving scanning mode refers to the relative motion relationship between the ground footprints of the transmitting and receiving beams, and to their motion relative to the ground. It is the sixth key element of the bistatic SAR configuration. The scanning mode not only plays a key role in the selection of the imaging region but also has a decisive influence on the starting and ending times of the Doppler signal, thus exerting an important influence on the echo model and the imaging processing. Therefore, it is also an important means of bistatic SAR classification.
Fig. 1.24 Bistatic SAR scanning method: (a) follow-up; (b) sliding; (c) following stripmap; (d) sliding spotlight; (e) following spotlight.
In order to use the transmitter power effectively and obtain a high signal-to-noise ratio over the scheduled imaging region, to control the Doppler signal bandwidth and relax the constraints on the transmitted signal repetition period Tr, and to reduce the aliasing of echoes from nonimaging regions onto the echoes of the scheduled imaging area in both the time domain and the frequency domain, the imaging system needs an antenna beam with good spatial selectivity, high gain, and low side lobes, and in particular a transceiver scanning mode that illuminates the imaging area for a period of time. According to the relative relationship between the transmitting and receiving beam footprints, bistatic SAR can be divided into the follow-up mode and the sliding mode, as shown in Fig. 1.24A and B. According to the motion of the overlapping transmit-receive beams relative to the ground, bistatic SAR can be divided into the following stripmap mode, the sliding spotlight mode, and the following spotlight mode, as shown in Fig. 1.24C–E. In the follow-up mode shown in Fig. 1.24A, the transmitting and receiving beam footprints overlap each other on the ground and move relative to the ground at the same speed. In the sliding mode shown in Fig. 1.24B, the transceiver beam footprints move at different speeds on the ground and overlap only partially. In the follow-up stripmap mode shown in Fig. 1.24C, the transceiver beam footprints overlap and move at a certain speed on the ground to form a strip imaging region. In the sliding spotlight mode shown in Fig. 1.24D, the beam centerline at different slow times always points through a fixed point O located at a certain depth below the ground, so the position of the intersection O remains unchanged during the synthetic aperture time, and the ground footprints of the transmitting and receiving beams can overlap for a long time.
The beam footprint moves at a speed much lower than the platform speed, and a rectangular imaging area can be obtained. This mode is often used to obtain higher spatial resolution than the stripmap mode, because it has a longer observation time, a larger Doppler signal bandwidth, and a higher Doppler signal delay resolution. In the following spotlight mode shown in Fig. 1.24E, the transmitting and receiving beam footprints completely overlap and are fixed relative to the ground. This mode is mostly used for ultra-high-resolution imaging of specific areas, because the synthetic aperture time is the longest under this mode, which yields the maximum Doppler signal bandwidth and the highest Doppler signal delay resolution. In different scanning modes, scattering points at different positions in the predetermined imaging area are irradiated by the overlapping transceiver beams with different starting and ending times, resulting in different trace extents of the multiple pulse echoes of the scattering points in the data plane, as well as different starting and ending times and signal bandwidths of the Doppler signals. Therefore different scanning modes cause significant changes in the echo signal model, resulting in great differences in the imaging processing and the imaging algorithm.
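As a rough numerical illustration of why spotlight-type dwell improves Doppler delay resolution, the following sketch compares a beam-limited (stripmap-like) dwell with an assumed four-times-longer spotlight-type dwell, using the monostatic-equivalent phase-center approximation for the Doppler rate. All parameter values are assumptions for this example only.

```python
# Sketch (assumed parameters, monostatic-equivalent phase-center approximation)
# comparing stripmap and spotlight-type dwell times. With Doppler rate
# Ka ~= 2*v^2/(lambda*R), the Doppler bandwidth collected from one scatterer is
# Ba = Ka * Ta, and the achievable azimuth resolution is roughly v / Ba.
wavelength = 0.03   # m (assumed)
v   = 100.0         # equivalent phase center ground speed, m/s (assumed)
R   = 20e3          # range to scene center, m (assumed)
L_a = 1.0           # real antenna azimuth length, m (assumed)

Ka = 2.0 * v**2 / (wavelength * R)          # Doppler rate, Hz/s

# Stripmap: dwell time limited by the real beam footprint sweeping over the point.
Ta_strip = wavelength * R / (L_a * v)
# Spotlight-type: dwell time set by beam steering, here assumed 4x longer.
Ta_spot  = 4.0 * Ta_strip

for name, Ta in [("stripmap", Ta_strip), ("spotlight (assumed 4x dwell)", Ta_spot)]:
    Ba  = Ka * Ta
    rho = v / Ba
    print(f"{name:28s} Ta = {Ta:5.2f} s  Ba = {Ba:7.1f} Hz  azimuth res ~ {rho:.2f} m")
```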
1.2.7 Bearing platform combination

The two bearing platforms of bistatic SAR can be of the same type or of different types, and both can be in motion, or one in motion and the other at rest. These combination relationships are called bearing platform combinations. The different types of platform combinations not only correspond to different application backgrounds, but also significantly change the relative contributions of the two platforms to the equivalent phase center velocity vector, and they bring about big differences in transceiver synchronization, echo model, parameter estimation, motion compensation, and imaging algorithm. Therefore the bearing platform is also an important way to classify bistatic SAR.
(1) Carrying platform
Both moving and stationary platforms at a certain height above the ground can be used as bearing platforms for bistatic SAR. Generally speaking, flight vehicles are the main bearing platforms for bistatic SAR, including high- and low-orbit satellites, near-space vehicles, airships, aircraft, and missiles. Stationary platforms such as mountain tops, building tops, and aerostats can also be used as transmitting or receiving station bearing platforms.
Fig. 1.25 Bistatic SAR receiving and transmitting bearing platforms, by working height: space (high orbit, low orbit), near space, air, and ground.
According to the working height of the bearing platform, the platforms can be roughly divided into four levels: space, near space, air, and ground, as shown in Fig. 1.25. The platform carrying the bistatic SAR transmitting station requires a large payload weight and volume, as well as a strong power supply capacity, to accommodate the volume, load, and power consumption of the high-power transmitter. It is therefore suited to larger flight platforms or stationary platforms, for example, satellites, aircraft, airships, floating platforms, towers, etc. In addition, under certain conditions, cooperative and noncooperative monostatic SAR systems can be used as transmitting stations for bistatic SAR; even cooperative and noncooperative radiation sources intended for other purposes, such as communication and navigation, can serve as transmitting stations for a bistatic SAR system, provided their ground power density and signal bandwidth are sufficient. The platform that carries the bistatic SAR receiving station can have a smaller payload weight and volume, as well as a lower power supply capacity, because the receiving station does not need to transmit high-power signals; it only needs to receive and record the ground echo and transmit or process it. The receiving station therefore has significantly lower volume, load, and power supply requirements than a monostatic SAR system, and its platform adaptability is significantly improved. It is suitable not only for large flight platforms, but also for micro and small platforms such as microsatellites, small UAVs, cruise missiles, air-to-ground missiles, and guided projectiles.
Therefore, bistatic SAR can significantly expand the application field of SAR imaging technology, thus forming new SAR imaging technology products and equipment.
(2) Combination modes
The transmitting and receiving platforms of bistatic SAR are separate and flexible in configuration, giving it application characteristics different from those of monostatic SAR, and different combination modes can be selected according to the application conditions and purposes. In addition to "far-range transmission, near-range reception" or "side transmission, front reception" combinations of the same or similar platform types, combinations of different platform types such as "one static and one moving" or "one slow and one fast," as well as the "one transmission and multiple receptions" mode, can also be used. These different combination modes can meet the different requirements of practical applications for platform security, imaging performance, and detection distance. Typical combination modes include satellite-satellite, satellite-aircraft, satellite-missile, aircraft-missile, satellite-ground, aircraft-ground, etc. These combination modes differ greatly in echo model, imaging processing, parameter estimation, motion compensation, time-frequency synchronization, and scanning mode, owing to the influence of platform height, velocity difference, and platform stability. Combinations of the same or similar platform types usually have similar flight altitudes and speeds and generally belong to the type with a nearly horizontal transmit-receive baseline, which can be either translationally variant or translationally invariant. In the collaborative application of a multiplatform formation, this kind of combination can better realize the space-time coordination of the transceiver platforms, and a long synthetic aperture time and high imaging resolution can be formed by keeping the transmit and receive beams jointly viewing the predetermined imaging area for a long time. With a LEO satellite as the transmitting platform, its combination with aircraft, missile, ground, and other heterogeneous receiving platforms belongs to the typical "one fast, one slow" combination, the "one high, one low" vertical baseline type, and the large translational variant baseline type. In this case the high speed of the satellite platform means that the velocity vector of the equivalent phase center and the direction of the synthetic aperture are mainly determined by the satellite velocity vector.
Fig. 1.26 Satellite/aircraft SAR combination: (a) backward-looking imaging; (b) forward-looking imaging. (Each panel shows the satellite velocity vS, the equivalent phase center EPC with velocity vE, the aircraft velocity vF, and the corresponding imaging area.)
The receiving platform can therefore flexibly choose its flight direction and, while remaining in radio silence, form side-looking, downward-looking, backward-looking, and forward-looking imaging capabilities relative to itself. For example, in the satellite-aircraft SAR shown in Fig. 1.26, the direction of the synthetic aperture is mainly determined by the satellite velocity vector, and the receiving aircraft can realize side-looking, forward-looking, or backward-looking imaging by selecting its flight direction. Low-orbit satellites move fast, their beam ground residence time is short, and their revisit cycle is long, so it is difficult to form good space-time cooperation with slower aircraft and other platforms; unless constellation relay illumination is used, the application is limited. The combination mode with a high-orbit satellite as the transmitting station and a low-orbit satellite constellation as the receiving stations is a heterogeneous platform combination of the "one transmitting and multiple receiving," "one high and one low," and "one fast and one slow" type. It also belongs to the tilted baseline and large translational variant mode, and it can realize high-revisit-rate, wide-area imaging monitoring of the ground; it is a bearing platform combination mode with good application prospects. Compared with multiple monostatic SAR systems composed of a low-orbit constellation, this combination allows multiple low-orbit satellite platforms to share the transmitting signal resources of one or more high-orbit satellites, which significantly reduces the payload capacity and power supply requirements of each low-orbit satellite platform by eliminating its transmitting subsystem.
It is conducive to using multiple light and small satellite platforms to implement large-scale, high-revisit-rate ground monitoring in a constellation manner, thereby significantly reducing the development and launch costs of the entire system. At the same time, this combination has better concealment and antijamming ability than a monostatic SAR constellation. The combination of a high-orbit satellite as the transmitting platform with air-based receiving platforms such as aircraft is also a heterogeneous platform combination of the "one high, one low," vertical baseline, and translational variant type. However, because the difference between the ground-track speed of a high-orbit satellite and the ground speed of aircraft and similar platforms is relatively small, they can form good space-time coordination and share some characteristics of a same-type platform combination. High-orbit satellites, especially monostatic SAR carried by geostationary orbit (GEO) satellites, can provide wide-area, stable illumination of the ground. Therefore bistatic SAR with a GEO satellite platform as the transmitting station and air-based platforms such as aircraft as the receiving station is a bearing platform combination mode with a very important application prospect. After GEO SAR is put into operation, this combination will have an important impact on SAR technology products and equipment, and a variety of ground imaging systems equipped only with receiving stations can be formed. It can also give micro and small spaceborne platforms that cannot accommodate a monostatic SAR the capability of SAR imaging, and it can give receiving station platforms forward-looking, downward-looking, and backward-looking imaging capabilities that monostatic SAR cannot achieve. Stratospheric airships and similar platforms have relatively slow ground projection speeds and can also provide long-term, large-area illumination of the predetermined imaging area, so they are also very suitable for bistatic SAR applications. With various aircraft as receiving stations, it is possible to obtain a covert ground imaging capability for the receiving platform with a small payload and system resource overhead.
1.3 System composition

The transmitting and receiving devices of the bistatic SAR system are placed on different platforms. In addition to the transmitting and receiving functions of conventional radar, the time, frequency, and space synchronization
problems between the transmitting and receiving subsystems need to be solved. Therefore a bistatic SAR system generally consists of a transmitting extension, a receiving extension, and a synchronous extension, and is divided into a transmitting station subsystem and a receiving station subsystem, according to the platform.
1.3.1 Transmitting station subsystem

The bistatic SAR transmitting station subsystem, in addition to the conventional antenna feeder extension, transmitting extension, signal generation extension, and power supply extension, also includes the synchronization subsystem extension, as shown in Fig. 1.27. In the global navigation and positioning terminal of the synchronous extension, the system time and one-pulse-per-second signals of the global navigation and positioning system are combined with the local rubidium clock and crystal oscillator to provide a unified time reference, synchronization pulses, and frequency source for the transmitting station and the receiving station, thereby realizing time and frequency synchronization between them. At the same time, the synchronous extension obtains the attitude and positioning information of the platform through GPS/INS and other equipment, and the data link is used to share this information between the two platforms. According to the coordinate information of the observation area, the beam pointing angle is calculated.
Fig. 1.27 Composition block diagram of the transmitting station subsystem (signal generation, transmitting, antenna feeder, synchronization, and power supply extensions, including the waveform generator, frequency synthesizer, time schedule controller, transmitter up-conversion and final-stage amplifier, antenna servo, GPS/INS, data transmission, data memory, and control modules).
By controlling and adjusting the antenna servo, the ground footprints of the transmitting and receiving antenna beams are made to overlap. The frequency source of the synchronous extension provides the transmitting station with a frequency reference that is related to that of the receiving station, and the frequency synthesizer generates the local oscillator signal and the clock signals required by the other units. The timing of the whole system is controlled by the time schedule controller. The waveform generator generates a broadband linear frequency-modulated signal at the specified repetition frequency, which is up-converted to radio frequency by the transmitter, amplified by the final high-power amplifier, and radiated through the antenna. The high-voltage power supply mainly feeds the final high-power amplifier, while the low-voltage power supply feeds the other units in the transmitting station.
1.3.2 Receiving station subsystem

Fig. 1.28 shows a block diagram of the receiving station subsystem, which consists of the synchronous extension, the receiving antenna feeder extension, the receiving extension, the data acquisition and storage extension, the signal processor, the data transmission extension, and the power supply extension. The composition and function of the synchronous extension are the same as those of the synchronous extension in the transmitting station. The frequency source of the synchronous extension provides the receiving station with a frequency reference that is related to that of the transmitting station, and the frequency synthesizer generates the local oscillator and clock signals required by the other units. The ground echo signal received by the antenna of the receiving station is amplified by a low-noise amplifier, mixed with the local oscillator to obtain the intermediate frequency signal, and then filtered, amplified, and I/Q demodulated to obtain the complex baseband signal. The data collector then samples the I and Q channels separately to obtain the digital signals corresponding to the real and imaginary parts of the baseband echo, which are sent to the data memory for storage and finally to the signal processor to form a radar image. Signal processing can also be done on the ground, in which case the receiving platform should carry a data transmission extension responsible for sending the data to the ground, where it is then processed accordingly.
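The I/Q demodulation step described above can be sketched digitally as follows. The intermediate frequency, sampling rate, and chirp parameters are assumed values chosen only to make the example self-contained; what matters is the structure, that is, quadrature mixing at the IF followed by low-pass filtering to obtain the I and Q channels.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Minimal digital I/Q demodulation sketch. The IF frequency, sampling rate, and
# chirp parameters below are assumed for illustration only.
fs, f_if = 200e6, 50e6              # sampling rate and intermediate frequency, Hz
T, B = 10e-6, 40e6                  # chirp duration and bandwidth (assumed)
t = np.arange(int(T * fs)) / fs
k = B / T                           # chirp rate, Hz/s

# Real-valued IF echo of a single linear-FM pulse centred on the IF (unit amplitude).
s_if = np.cos(2 * np.pi * f_if * t + np.pi * k * (t - T / 2) ** 2)

# Mix with quadrature local oscillators at the IF, then low-pass filter: the
# filtered outputs are the I (real) and Q (imaginary) baseband channels.
lo_i = np.cos(2 * np.pi * f_if * t)
lo_q = -np.sin(2 * np.pi * f_if * t)
lpf = firwin(numtaps=129, cutoff=0.6 * B, fs=fs)   # passes +/-B/2, rejects the 2*f_if term
i_ch = lfilter(lpf, 1.0, 2 * s_if * lo_i)
q_ch = lfilter(lpf, 1.0, 2 * s_if * lo_q)
baseband = i_ch + 1j * q_ch         # complex baseband samples to be recorded

print("baseband samples:", baseband.size,
      "mean power ~", round(float(np.mean(np.abs(baseband) ** 2)), 3))
```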
Fig. 1.28 Block diagram of the receiving station subsystem (antenna feeder, receiving, data acquisition and storage, data transmission, synchronous, and power supply extensions, plus the signal processor; the receiving chain comprises the LNA, IF amplification, I/Q demodulation, data collector, and data memory, supported by the frequency synthesizer, time schedule controller, GPS/INS, and control module).
1.4 Performance parameters

Similar to the performance parameters of monostatic SAR, the imaging performance of bistatic SAR mainly includes three aspects: space performance, radiation performance, and technical performance.
1.4.1 Space performance

The space performance mainly covers the size and position of the imaging area, the location accuracy of the imaging target, and the spatial resolution of the imaging. It mainly includes:
(1) The observation distance and observation swath width, which reflect the location and area of the imaging region.
(2) The incidence angle and reflection angle, which reflect the pitching directions of the transmitting and receiving stations and have important effects on the observed ground scattering rate and target characteristics.
(3) The positioning accuracy and geometric distortion, which reflect the absolute and relative position accuracy of the imaging.
(4) The spatial resolution, which reflects the minimum distance between ground scattering points that can be distinguished in the radar image and is a key performance indicator of image quality.
1.4.2 Radiation performance

Radiation performance reflects the ability of radar images to characterize and distinguish ground object scattering rates; it mainly includes the dynamic range, the radiation accuracy, and the radiation resolution.
(1) Dynamic range and equivalent noise
The dynamic range can be divided into two aspects: the system dynamic range and the image dynamic range. The system dynamic range refers to the range of ground scattering rates that can be imaged with their values correctly reflected. The image dynamic range refers to the range of pixel levels in the final image, and its lower limit is the additive thermal noise level in the final image. The dynamic range of the image is related to the nature of the ground scene, the elevation angles of the transmitting and receiving antennas, and the imaging processing algorithm. The equivalent noise σ0 reflects the lower limit of the scattering rate that can be imaged with its value correctly reflected, and it is usually set as the ground scattering coefficient corresponding to the additive thermal noise level in the final image.
It is worth noting that in bistatic SAR, the scattering coefficient is a quantity related to both the incidence and the reflection angles. Different ground scenarios and application conditions require the system dynamic range to be between 50 and 90 dB, and the typical value of σ0 is −20 dB.
(2) Radiation accuracy, ambiguity, and impulse response side lobes
These performance parameters represent, from different aspects, the accuracy with which the final image reflects the ground scattering rate. The radiation accuracy characterizes the accuracy of the ground object scattering rate reflected by the pixel level in the image. In addition to being affected by the latter two performance parameters, it is also related to the internal and external calibration performance of the SAR system. Typical values are an absolute accuracy better than 2 dB and a relative accuracy better than 1 dB. The ambiguity reflects the degree to which the observed area in the image is polluted by nonobserved ambiguous areas, which is related to the ambiguity design and control of the system; typically the ambiguity is required to be less than −18 dB. The side lobes of the impulse response reflect the degree to which the side lobes of a strong target response mask weak targets in the image; they are related to the side lobes of the receiving and transmitting antennas, the imaging processing methods, and various errors of the system, and the typical value is −20 dB. When the ambiguity and impulse-response performance parameters are poor, the image contrast is low, it is difficult to observe the shadows formed by weak targets and ground protrusions, and false targets or false superimposed images may even appear in the image.
(3) Radiation resolution and equivalent number of independent looks
The radiation resolution reflects the ability of the final image to distinguish differences in scattering coefficients and is determined by the signal-to-noise ratio of the final image. The so-called final image refers to the image obtained after reducing noise by incoherent stacking of multilook images. These "looks" correspond to different synthetic aperture time ranges and observation angles, so their noise samples are independent of those of the other looks. The signal power here refers to the gray value corresponding to the average scattering intensity of the resolution cell, and the noise power refers to the gray fluctuations corresponding to the receiver thermal noise or the speckle noise.
When the speckle noise plays the major role, the typical value of the radiation resolution in the single-look case is 3–4 dB. The equivalent number of independent looks reflects the number of single-look images with mutually independent noise samples in the set of single-look images, and its typical value is 3–4.
1.4.3 Technical performance

Technical performance parameters mainly reflect the technical level and status of the equipment, including synchronization accuracy, working frequency, signal bandwidth, transmission power, antenna gain, beamwidth, frequency stability, spectral purity, noise figure, pulse width, gain, and stability.
1.4.4 Several notes

Several important parameters are described in detail in this section.
(1) Spatial resolution
The traditional way to describe the spatial resolution of an imaging system is the two-point critical resolution criterion for points of equal scattering rate, among which the Rayleigh criterion is the most commonly used. In the image-domain response profile shown in Fig. 1.29A, the two scattering points are considered fully distinguishable; in the case shown in Fig. 1.29B, under the Rayleigh criterion, the two scattering points are considered critically distinguishable, and the spacing ρ0 of the scattering points in this case is the Rayleigh resolution.
Fig. 1.29 One-dimensional profile of image domain response of two scattering points (the vertical line indicates the position of the scattering point).
In practice, the resolution is usually determined by the half-power (3 dB) width of the main lobe of the scattering-point image-domain response, because the resolution obtained in this way is very close to the Rayleigh resolution.
(2) Radiation resolution
The radiation resolution γn is determined by the single-look image signal-to-noise ratio (SNR) and the equivalent number of independent looks M, and is defined as:

γn (dB) = 10 lg[1 + (1 + 1/SNR)/√M]   (1.2)

Under the conditions of large transmitting power, high antenna gain, and short observation distance, the speckle noise in the single-look image is much larger than the receiver noise; in this case the former is the main noise source, while in the opposite case the latter is. When the receiver noise plays the major role, the SNR in Eq. (1.2) is determined by the bistatic SAR SNR equation, which is related to the bistatic SAR system parameters and the surface bistatic scattering coefficient (the details are given in Section 2.2.2). When the speckle noise plays the major role, SNR = μI/σI, where μI is the gray value of the single-look image corresponding to the average scattering coefficient of the resolution cell, and σI is the gray-value fluctuation of the single-look image corresponding to the variance of the resolution cell's scattering coefficient caused by coherent speckle, which is jointly determined by surface material, roughness, incidence angle, reflection angle, and other factors. After multilook processing, the average scattering coefficient of distributed features such as grassland, Gobi, and fields is maintained while the noise variance is reduced; therefore multilook processing improves the radiation resolution of the image. As shown in Fig. 1.30, the single-look and multilook SAR images of two kinds of distributed features are displayed at the top, and a single row of each image is plotted at the bottom. It can be seen that after multilook processing the variance of the noise is reduced and the fluctuation of the amplitude A is significantly smaller. This corresponds to the fact that the mean of the amplitude probability density function is unchanged while its spread is narrowed, which makes it easier to distinguish the average scattering coefficients of different ground objects. By the same mechanism, the receiver noise in the image can also be suppressed by multilook processing.
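Eq. (1.2) can be evaluated directly to see the effect of multilook processing on the radiation resolution. In the following sketch the single-look SNR and the look numbers are assumed values used only to show the trend.

```python
import math

# Evaluate Eq. (1.2): gamma_n(dB) = 10*lg(1 + (1 + 1/SNR) / sqrt(M)).
# The SNR value and look numbers below are assumed, purely to show the trend.
def radiation_resolution_db(snr_linear, looks):
    return 10.0 * math.log10(1.0 + (1.0 + 1.0 / snr_linear) / math.sqrt(looks))

snr = 10.0 ** (20.0 / 10.0)      # assume a 20 dB single-look SNR
for M in (1, 3, 4, 16):
    print(f"M = {M:2d} looks -> radiation resolution = {radiation_resolution_db(snr, M):.2f} dB")
# With M = 1 this gives about 3 dB, consistent with the 3-4 dB single-look value
# quoted in the text; increasing M (multilook processing) lowers gamma_n.
```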
Fig. 1.30 Schematic diagram of different radiation resolutions (single-look versus multilook radar images of two distributed scenes, with one image row of amplitude A plotted under each; after multilook processing the amplitude fluctuations narrow and the two scenes become distinguishable).
(3) Impulse response side lobes
The impulse response side lobes refer to the side lobes of the scattering-point image-domain response. The indexes used to measure the side lobe characteristics are the integrated side lobe ratio (ISLR) and the peak side lobe ratio (PSLR). As shown in Fig. 1.31, the ISLR is defined as the ratio of the side lobe energy Es (black area), generally taken within 10 times the 3 dB main lobe width, to the main lobe energy Em (gray area):

ISLR = 10 lg(Es/Em)   (1.3)

It describes the ability of the SAR system to avoid the gray-level distortion of the image caused by nearby distributed targets, and it quantitatively describes the extent to which a locally dark area is submerged by the energy leaking from surrounding bright areas. The smaller the ISLR, the higher the image quality; the ISLR is usually required to be less than −12 dB.
The PSLR is defined as the ratio of the highest side lobe peak power Ps to the main lobe peak power Pm of the impulse response function:

PSLR = 10 lg(Ps/Pm)   (1.4)

It indicates the ability of the SAR system to observe weak scattering points near strong scattering points; this ability improves as the PSLR decreases. The PSLR of SAR images is usually required to be less than −20 dB.
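The two ratios can be measured from a point-target response as sketched below. The response here is the ideal sinc-squared power profile of an unweighted matched filter rather than a real SAR cut, and the 10-main-lobe-width integration window follows the ISLR definition above; everything else (sampling, null detection) is an implementation choice for this example.

```python
import numpy as np

# Sketch: measure PSLR and ISLR from a simulated 1-D impulse response.
# Here the response is the classic sinc-squared power profile of an unweighted
# matched filter, sampled finely; in practice it would be a cut through a
# point-target image.
x = np.linspace(-12.0, 12.0, 48001)          # position in units of the resolution cell
h = np.sinc(x) ** 2                          # power response (peak normalised to 1)

# Main lobe region: between the first nulls around the peak.
peak = np.argmax(h)
nulls = np.where(h < 1e-12)[0]
left  = nulls[nulls < peak][-1]
right = nulls[nulls > peak][0]

Pm = h[peak]
Ps = max(h[:left].max(), h[right + 1:].max())          # highest side lobe peak
pslr_db = 10 * np.log10(Ps / Pm)

# ISLR over roughly 10 main-lobe widths, as in the definition above.
main_width = x[right] - x[left]
window = np.abs(x - x[peak]) <= 5 * main_width
Em = np.trapz(h[left:right + 1], x[left:right + 1])
Es = np.trapz(h[window], x[window]) - Em
islr_db = 10 * np.log10(Es / Em)

print(f"PSLR = {pslr_db:.1f} dB (ideal sinc ~ -13.3 dB), ISLR = {islr_db:.1f} dB")
```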
(4) Ambiguity
Fig. 1.31 PSLR/ISLR of impulse response function.
The side lobes of the radar antenna pattern and the pulsed working mode inevitably cause time-domain aliasing between the pulse echoes of distant and near-range ground objects, as well as aliasing between the echo Doppler spectra of ground objects on the left and right sides. These effects are called range ambiguity and azimuth ambiguity; they cause the images corresponding to different regions to shift and overlap with each other, degrading image quality. In serious cases, "ghosts" of unknown origin are superimposed on the images, affecting the estimation accuracy of the ground object scattering rate and the image interpretation of the SAR system. As shown in Fig. 1.32, azimuth ambiguity is mainly caused by the side lobes of the azimuth beam of the antenna: when the pulse repetition frequency fr = 1/Tr is too low, too much of the ambiguous signal spectrum is folded into the frequency range of the main signal and superimposed on its spectrum. Range ambiguity is mainly caused by the side lobes of the elevation beam of the antenna: when the pulse repetition frequency fr of the transmitted radar signal is too high, too many ambiguous echoes are folded in the time domain into the time interval where the main signal appears and are superimposed on it.
Fig. 1.32 Cause of SAR azimuth ambiguity: the scattering-point Doppler spectrum formed by the antenna azimuth beam modulation (main lobe and side lobes) repeats at multiples of fr, so the first ambiguous spectra on the left and right fold into the effective Doppler bandwidth of the real target.
The ambiguity ratio is a measure of the degree of ambiguity of the SAR image, defined as the ratio of the energy of the ambiguous signal to that of the main signal; the range ambiguity is defined in the time domain, and the azimuth ambiguity is characterized in the frequency domain. High-quality SAR images generally require that the range and azimuth ambiguity be no higher than −18 dB.
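For the azimuth case, the ambiguity ratio can be sketched as the energy of the Doppler spectral replicas folded into the processed band divided by the energy of the main spectrum, as illustrated in Fig. 1.32. In the snippet below, the sinc-to-the-fourth two-way azimuth pattern, the processed Doppler bandwidth, and the PRF values are assumptions chosen only to show the trend with PRF; they are not design values from this book.

```python
import numpy as np

# Sketch of the azimuth ambiguity-to-signal ratio (AASR). Assumptions: the
# two-way azimuth antenna gain versus Doppler frequency is modelled as sinc^4
# with its first null at Bd, the processed Doppler bandwidth is Bp, and the
# Doppler spectrum repeats at multiples of the PRF fr (see Fig. 1.32).
def aasr_db(fr, Bp, Bd, n_amb=5, n_pts=2001):
    f = np.linspace(-Bp / 2, Bp / 2, n_pts)
    gain = lambda fd: np.sinc(fd / Bd) ** 4            # assumed two-way pattern
    main = np.trapz(gain(f), f)                        # energy of the main signal
    amb = sum(np.trapz(gain(f + m * fr), f)            # energy folded in from replicas
              for m in range(-n_amb, n_amb + 1) if m != 0)
    return 10 * np.log10(amb / main)

Bd = 300.0    # Doppler scale of the pattern (first null), Hz (assumed)
Bp = 250.0    # processed Doppler bandwidth, Hz (assumed)
for fr in (300.0, 400.0, 600.0):
    print(f"PRF = {fr:5.0f} Hz -> AASR = {aasr_db(fr, Bp, Bd):6.1f} dB")
# Raising the PRF pushes the ambiguous spectral replicas further from the
# processed band and lowers the AASR, at the cost of tighter range-ambiguity limits.
```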
1.5 Development trends

Since Carl Wiley introduced the concept of SAR imaging in 1951, SAR technology has developed a great deal and evolved into many different forms, and this evolution continues to bring new capabilities adapted to different application needs. For example, in terms of the morphology of the synthetic aperture, curvilinear and planar apertures have evolved from the original linear aperture, and new categories such as circular SAR, curved SAR, tomographic SAR, and array SAR have been developed. From the perspective of the spatial relationship between transmitter and receiver, the original configuration with collocated transmission and reception at a single station has evolved into layouts with separated transmitting and receiving locations or stations, and new categories such as interferometric SAR, distributed SAR, and bistatic (multistatic) SAR have been developed.
Bistatic SAR, with its separated transmitter and receiver and flexible configurations, can obtain ground object information different from that of monostatic SAR. It can also expand the imaging direction relative to the receiving platform from side-looking and squint-looking to forward-looking, downward-looking, and backward-looking. It has valuable applications and is one of the important development directions of SAR technology.
1.5.1 Research trends

The first research on bistatic SAR appeared in the late 1970s. In 1977, Xonics Inc. in the United States showed that the bistatic mode can realize synthetic aperture imaging, and in 1979 it cooperated with Goodyear on a tactical bistatic radar verification program supported by the US Air Force, which obtained a bistatic SAR image during a test in 1983. Later, some patents and papers in the United States addressed bistatic SAR imaging processing and space, time, and frequency synchronization. However, by the end of the last century bistatic SAR had not attracted widespread attention. In May 2004, at the EUSAR international conference held in Neu-Ulm, Germany, more than 10 papers were devoted to the theoretical research and experimental progress of bistatic SAR, showing its unique application characteristics and performance advantages, and bistatic SAR became a focus of the conference discussion. Since then, research and experiments on bistatic SAR-related technologies have gradually attracted attention from countries around the world, and the international IEEE Radar, IGARSS, and EUSAR conferences have set up bistatic SAR sessions every year. This has led to a series of valuable achievements and has deepened the understanding of the application value and technical problems of bistatic SAR.
(1) Imaging theory
Bistatic SAR has separated transmission and reception and diverse configurations, and its geometric configuration and system parameters have important effects on the imaging performance. In practical applications, the quantitative relationships between the imaging performance indexes, the geometric configuration, and the system parameters must first be grasped theoretically, whether for designing the geometric configuration and radar system under given imaging performance requirements or for predicting and analyzing the imaging performance under a given geometric configuration and given radar system parameters.
These quantitative relationships involve resolution models, ambiguity theory, and imaging range equations; they do not involve specific implementation methods and technical approaches, but strive to reflect, theoretically and quantitatively, the imaging performance limits of bistatic SAR and the factors that influence them, and to provide the mathematical tools required for the design of bistatic SAR systems and their geometric configurations. Therefore this body of work can be classified as the imaging theory of bistatic SAR. For example, there are three kinds of analytical methods for bistatic SAR spatial resolution analysis: the gradient method, the integrated generalized ambiguity function method, and the frequency projection method. Cardillo studied the spatial resolution of bistatic SAR by using the gradient theory of isorange and Iso-Doppler lines [3]; however, this method is based on the assumption of linear motion and constant velocity of the SAR platform. Zeng Tao et al. analyzed the spatial resolution characteristics of bistatic SAR by using the two-dimensional ambiguity function method [4]; this method can be used in the case of curved motion and time-varying speed of the SAR platform and has higher universality. Moccia et al. used the gradient method to analyze the influence of different distributed transceiver platforms and spatial configurations on the distributed SAR spatial resolution and compared the application scope of the different analysis methods [5]. Generally speaking, the spatial resolution of bistatic SAR has been fully studied, but the research on the radiation resolution and the ambiguity of bistatic SAR is not yet in-depth enough, especially for the forward-looking imaging mode of the receiving station.
(2) Echo model
The essence of the imaging process is to process the echo data so as to refocus the energy of the various scattering-point echoes in the data plane and correctly reflect the positional relationships among the scattering points and their scattering strengths. To realize such processing, it is necessary to use quantitative mathematical expressions to accurately describe the echo generated by each scattering point in the data plane, according to factors such as the geometric relationship of the motion, the working mode of the system, and the waveform parameters of the transmitted signal, and to grasp the trajectory and amplitude variation law of the echo in the data plane, thereby determining the processing method and designing the imaging processing flow.
In engineering implementation, in order to simplify the processing, reduce the computational complexity, and adapt to the real-time requirements of the application, it is necessary to make an appropriate approximation of the quantitative echo expression to obtain an approximate expression that meets the imaging quality requirements. This approximate expression is called the echo model; it reflects the trace and amplitude variation law of the echo in a simple, analytical mathematical form while ensuring the imaging quality, and it is conducive to the construction of an imaging algorithm with a simple flow and efficient operation. According to the categories and combinations of the variables used to describe the echo, echo models can be divided into two-dimensional time-domain models, time-domain/frequency-domain models, and two-dimensional frequency-domain models, which serve the construction of different types of algorithms. Constrained by the architecture and capabilities of existing carrier platforms and computing equipment, high-efficiency imaging processing algorithms are usually required, and fast frequency-domain imaging algorithms can meet this demand; therefore two-dimensional frequency-domain models are currently the focus of study. For bistatic SAR, the range history of most geometric configurations must be described as the sum of two square roots, which makes it difficult to use the principle of stationary phase to obtain an exact two-dimensional frequency-domain model. For this problem, the most representative methods are the LBF model and the MSR model. In 2004, Otmar Loffeld et al., from the University of Siegen, Germany, made second-order Taylor approximations of the transmitting and receiving slant-range phase histories around their respective stationary phase points, and then obtained the common stationary phase point from the approximate formulas, thereby deriving an analytical bistatic SAR two-dimensional frequency-domain expression, which is called the Loffeld's Bistatic Formula (LBF) model [6]. This model can be divided into a monostatic term and a bistatic term, which is convenient for derivation and understanding, but its accuracy is limited and it is not suitable for all bistatic SAR configurations. Later, considering the different contributions of the receiving and transmitting stations to the Doppler history, Loffeld et al. presented an improved LBF model.
In 2007, Neo Yew Lam and others at the University of British Columbia in Canada applied a Taylor expansion of the range history in the slow-time domain; through series reversion, they first obtained the bistatic SAR azimuth stationary phase point and then obtained the two-dimensional spectrum, giving what is called the MSR (method of series reversion) model [7]. The accuracy of this model depends on the order retained in the Taylor expansion, so it can achieve high accuracy in theory; however, because of its series-expansion form, the expression is not very analytical, which is not conducive to the construction of an efficient algorithm. The essence of the frequency-domain echo model is to obtain an analytical expression of the point-target response in the time domain or frequency domain by means of different approximation methods; the difficulty is how to obtain simplicity and adaptability at the same time with sufficient accuracy. When it is difficult to meet such requirements simultaneously, different frequency-domain echo models with different approximation methods and degrees are needed to provide mathematical support for different types of frequency-domain imaging algorithms and to promote the diversification of imaging algorithms, so as to adapt to different geometric configurations and application conditions, or to emphasize either imaging quality or real-time performance in different applications.
(3) Imaging algorithm
In essence, existing SAR imaging processing performs the optimal detection of the echoes from the different scattering points on the ground and the optimal estimation of their scattering intensities. If two-dimensional matched filtering is performed directly on the various scattering-point echoes, the entire processing is a point-by-point, two-dimensional, and inefficient correlation process, and the amount of computation is huge, far from meeting the real-time requirements of engineering applications. Therefore it is necessary to construct a faster calculation method with higher efficiency through proper mathematical transformations, so as to greatly reduce the amount of computation in the imaging process while ensuring sufficiently high imaging quality. This high-efficiency and high-precision calculation method, known as the imaging algorithm, is one of the core technologies of SAR engineering and has a decisive impact on the quality and real-time capability of imaging. Bistatic SAR has echo laws different from those of monostatic SAR, and different geometric configurations, observation angles, and scanning modes also lead to significant differences in the corresponding echo laws.
high-precision imaging algorithms. Current bistatic SAR imaging algorithms can be divided into two categories, namely time-domain imaging algorithms and frequency-domain imaging algorithms.

The representative time-domain imaging algorithm is the back-projection (BP) algorithm, whose structure is simple, whose imaging accuracy is high, and which is suitable for bistatic SAR with arbitrary geometric configurations and maneuvering trajectories. The BP algorithm first performs pulse compression in the fast time direction and then performs compensated integration along the echo trajectory to achieve pulse compression in the slow time direction (a minimal back-projection sketch is given at the end of this discussion of imaging algorithms). Although its efficiency is greatly improved compared with direct two-dimensional matched filtering, it still uses a pixel-by-pixel computing structure, and the computation becomes excessive when there are many pixels (for example, imaging a 10-km scene width at 1-m resolution). Real-time imaging processing is usually far from realizable on the current mainstream nonparallel computing equipment used in engineering applications. Therefore fast BP imaging algorithms and frequency-domain imaging algorithms are the main research directions worldwide. Although the accuracy of frequency-domain imaging algorithms is slightly lower than that of the BP algorithm, their computational efficiency is higher, and they facilitate embedding echo-based parameter estimation and platform motion error compensation into the processing flow, so they have received wide attention.

At present, there are three main types of frequency-domain imaging algorithms. The first type converts the bistatic problem into a monostatic one: the bistatic SAR echo data are first converted into equivalent echo data under a monostatic model, and then a monostatic SAR imaging algorithm is applied, for example, the approach based on the midpoint of the transmitter-receiver baseline (close to the equivalent phase center of the transmitter and receiver) and the approach based on the hyperbolic equivalence. In addition, Rocca et al. introduced the dip move out (DMO) method from seismic signal processing into bistatic SAR imaging processing, which also converts the bistatic SAR echo into an equivalent monostatic form by preprocessing [8]. The second category comprises imaging algorithms based on explicit two-dimensional spectrum models of the echo. For example, using the LBF and MSR models, R. Wang, Y.L. Neo, and others proposed range-Doppler (RD) and nonlinear chirp scaling (NLCS) imaging algorithms for bistatic SAR [9, 10].
The third category comprises imaging algorithms based on an implicit two-dimensional spectrum: the accurate implicit two-dimensional spectrum is obtained by numerical calculation, and the imaging algorithm is then constructed through phase decomposition. See, for example, the imaging algorithms proposed by Ender, Giroux, and others [11, 12].

Different frequency-domain imaging algorithms adopt different ideas and calculation flows for different situations. Although they differ greatly in form, they are consistent in essence. All of them start from the echo model and, through mathematical processing such as same-domain conversions and cross-domain transformations, first eliminate and aggregate the space variance to create conditions for batch processing, then realize decoupling and dimensionality reduction, converting the two-dimensional processing into two orthogonal, serially executed one-dimensional matched filtering operations. Finally, fast calculation methods such as the FFT are used to perform the calculations in batches, thereby significantly improving operational efficiency. This type of algorithm is suitable for real-time processing on computing devices with nonparallel architectures such as DSP + FPGA. The pixel-by-pixel nature of time-domain imaging algorithms such as BP makes their serial computing efficiency much lower than that of frequency-domain algorithms, but it also makes them well suited to computing devices with GPUs and other multithreaded parallel processing architectures. Combined with existing fast BP imaging algorithms, real-time processing can also be achieved. Of course, the BP imaging algorithm needs accurate platform motion data, so in engineering applications it is mainly used on platforms equipped with POS or other high-precision attitude measurement and positioning equipment. When such equipment is lacking or insufficiently accurate, an autofocus BP imaging algorithm must be adopted, which increases the computational cost and has a certain impact on real-time imaging. Owing to the high computational accuracy and strong adaptability of the BP algorithm, and with the development and popularization of high-precision POS and other attitude measurement and positioning technologies and of GPU multithreaded computing, the fast autofocus BP algorithm will be applied more widely and is likely to become the mainstream imaging algorithm in future engineering applications.
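To make the compensation-integration idea of the BP algorithm described above more concrete, the following minimal Python sketch back-projects range-compressed bistatic echo data onto a ground grid and coherently accumulates it over slow time. It is only an illustrative sketch under simplifying assumptions: the function name, array layout, per-pulse linear interpolation, and the availability of per-pulse transmitter and receiver phase-center positions are all assumptions of this example, not the specific implementation discussed in later chapters.

import numpy as np

def bistatic_backprojection(rc_data, tau, tx_pos, rx_pos, grid_xyz, wavelength):
    """Minimal bistatic back-projection sketch.

    rc_data  : (n_slow, n_fast) complex, range-compressed echo (fast-time compression done)
    tau      : (n_fast,) fast-time axis [s]
    tx_pos   : (n_slow, 3) transmitter phase-center positions per pulse [m]
    rx_pos   : (n_slow, 3) receiver phase-center positions per pulse [m]
    grid_xyz : (n_pix, 3) ground pixel coordinates [m]
    """
    c = 299792458.0
    image = np.zeros(grid_xyz.shape[0], dtype=complex)
    for k in range(rc_data.shape[0]):                        # slow-time (pulse) loop
        r_bi = (np.linalg.norm(grid_xyz - tx_pos[k], axis=1) +
                np.linalg.norm(grid_xyz - rx_pos[k], axis=1))   # bistatic range history
        delay = r_bi / c
        # pick the range-compressed sample at each pixel's bistatic delay
        samp = (np.interp(delay, tau, rc_data[k].real) +
                1j * np.interp(delay, tau, rc_data[k].imag))
        # phase compensation and coherent accumulation (slow-time compression)
        image += samp * np.exp(1j * 2.0 * np.pi * r_bi / wavelength)
    return image

Although the pixel dimension is vectorized here, the per-pulse interpolation and phase compensation for every pixel still reflect the pixel-by-pixel cost that motivates fast BP variants and frequency-domain algorithms.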
(4) Motion compensation

Motion errors arise from the unknown and random motion of the platforms; their measurement, estimation, and compensation have an important impact on SAR image quality in practical applications and constitute one of the core technologies of SAR. In the analysis of the echo model and the construction of the imaging algorithm, in order to simplify the analysis and facilitate understanding, it is generally assumed that the receiving and transmitting platforms fly in a predetermined manner with constant attitude. In practical applications, however, the flight platform is affected by various factors, resulting in irregular motion and random attitude changes, which cause the actual echo variation law to deviate from the echo model on which the imaging processing is based and significantly degrade SAR image quality. To solve this problem, on the one hand, an inertially stabilized platform is needed to reduce the influence of attitude errors on the antenna phase center; on the other hand, motion measurement devices such as an IMU, INS, GPS, or POS on the platform must be used to record the motion parameters and error data of the platform, or the echo data themselves are used during imaging processing to estimate echo parameters and errors as the basis for model parameter correction and motion error compensation.

The purpose of motion compensation is to suppress or eliminate the influence on the echo of the platform's position and attitude errors relative to the specified state, so that the compensated echo is restored to the regular motion state with known motion parameters and high-quality imaging can be achieved. Moreover, in order to reduce repeated calculations and improve the real-time performance of imaging processing, motion compensation usually needs to be embedded into the processing flow of the imaging algorithm. Because the transmitter and receiver of bistatic SAR are separated, its motion error comes from the independent motion of the two platforms, so its motion error measurement and estimation methods and the corresponding motion compensation methods differ from those of monostatic SAR and require dedicated research. At present, there are two kinds of motion compensation in bistatic SAR: one is based on motion measurement information and the other is based on echo data.
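Before examining these two approaches in detail, the following Python sketch illustrates the basic quantity handled by measurement-based motion compensation: the bistatic range error of each pulse relative to the nominal trajectories, converted into a phase correction referenced to the scene center. It is a first-order (bulk) sketch under assumptions of this example only: the echo phase is taken as exp(-j2πR/λ), the residual envelope shift and the space-variant part of the error are ignored, and all names and inputs are illustrative.

import numpy as np

def bistatic_motion_phase_correction(rc_data, tx_nom, tx_meas, rx_nom, rx_meas,
                                     scene_center, wavelength):
    """Bulk motion-compensation sketch: remove the phase error caused by
    deviations of both platforms from their nominal tracks.

    rc_data          : (n_slow, n_fast) range-compressed echo
    tx_nom, tx_meas  : (n_slow, 3) nominal and measured transmitter positions [m]
    rx_nom, rx_meas  : (n_slow, 3) nominal and measured receiver positions [m]
    scene_center     : (3,) reference point for the bulk correction [m]
    """
    r_nom = (np.linalg.norm(tx_nom - scene_center, axis=1) +
             np.linalg.norm(rx_nom - scene_center, axis=1))
    r_act = (np.linalg.norm(tx_meas - scene_center, axis=1) +
             np.linalg.norm(rx_meas - scene_center, axis=1))
    dr = r_act - r_nom                      # bistatic range error per pulse [m]
    # with echo phase exp(-j*2*pi*R/lambda), the error is removed by the opposite rotation
    phase_corr = np.exp(1j * 2.0 * np.pi * dr / wavelength)
    return rc_data * phase_corr[:, None]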
The key to compensation based on motion measurement information is how to obtain high-precision motion measurement data and how the subsequent compensation accounts for the space-variant characteristics of the error. In 1985, John C. Kirk of Raytheon in the United States published the first paper on bistatic SAR motion compensation and pointed out that it requires accurate measurement of the transmitting and receiving antenna phase centers and mutual communication of the measurement information between the transmitting and receiving platforms [13]. Holger Nies et al. of the University of Siegen in Germany exploited the fact that GPS measurement accuracy does not drift with time while an INS offers high short-term accuracy to study a bistatic SAR motion compensation method based on GPS/INS information fusion [14]. Owing to foreign technology embargoes, the accuracy of currently available product-level motion measurement equipment cannot meet the requirements of motion compensation, so motion compensation methods based on echo data are also required.

The key to motion compensation based on echo data lies in the accurate estimation of parameters and errors. At present, there are two kinds of estimation methods. The first is based on the echo law of the Doppler signal: according to the Doppler signal laws of bistatic SAR in different configurations, the Doppler signal is modeled as a polynomial phase signal, and the parameters are then estimated either with polynomial phase signal estimation methods or by transforming the Doppler signal into a transform domain in which its energy aggregates, such as the time-frequency or time-frequency-rate domain, and locating the resulting peaks. The second is based on iterative autofocus methods, which iterate on phase gradients or phase differences, or on image quality metrics such as contrast and entropy, to obtain the corresponding parameters or phase errors while achieving high-quality imaging. In general, these methods can estimate parameters that meet the accuracy requirements, but most of them are scene-dependent, for example requiring strong point targets or an unrealistically high signal-to-noise ratio.

Because of the differences between bistatic SAR and monostatic SAR, how to implement the nesting of the imaging algorithm and error
compensation is also a problem worthy of attention. Moreira et al. of DLR in Germany proposed an extended CS algorithm [15] with two-step motion compensation. However, this two-step motion compensation cannot be embedded directly into the ω-k wavenumber-domain algorithm, because that algorithm performs migration correction and slow-time focusing simultaneously. In 2006, A. Reigber et al. of DLR in Germany proposed an improved Stolt mapping [16], which modified the wavenumber-domain algorithm to carry out migration correction and slow-time focusing separately and thus realized the embedding of the two-step motion compensation.

(5) Test verification

The purpose of experimental verification is to obtain real echo data and platform motion data and to provide the basis and basic conditions for research on and verification of the imaging principle, imaging theory, echo model, imaging algorithm, motion compensation, and system performance. At the same time, successful verification tests often indicate a comprehensive grasp of bistatic SAR technology. The strategy and method of experimental verification are therefore also worth studying. To date, bistatic SAR imaging experiments have been carried out in the United States, the United Kingdom, France, Germany, and China. Among them, the US Air Force first obtained bistatic SAR images in 1980. Subsequently, between 2002 and 2004, the British Defence Research Agency (DRA), the French Aerospace Research Institute (ONERA), the German Aerospace Center (DLR), the Research Institute for High Frequency Physics and Radar Techniques (FHR) of the German Research Establishment for Applied Science (FGAN), and other institutions successively carried out experiments to verify the feasibility of different modes of bistatic SAR imaging. In China, the University of Electronic Science and Technology of China (UESTC) also organized and carried out several airborne bistatic side-looking and forward-looking SAR tests in 2007, 2012, and 2013, obtained the first airborne bistatic side-looking SAR image, and verified the feasibility of airborne bistatic forward-looking SAR imaging for the first time in the world. At present, most of the reported bistatic SAR experiments have been carried out by German research institutions, which were the first to verify the feasibility of several bistatic imaging modes, such as one-station-fixed bistatic SAR and satellite/aircraft backward-looking SAR.
For example, in November 2003, the German FGAN-FHR used the AER-II SAR system carried on a Dornier 228 aircraft as the transmitting station and the PAMIR radar carried on a Transall C-160 aircraft as the receiving station to carry out an X-band bistatic squint-looking SAR imaging experiment [17]. In May 2004, FGAN-FHR conducted a bistatic SAR imaging test with the receiver fixed on the ground [18]; the MEMPHIS radar installed on a C-160 aircraft served as the transmitting station, and the BP imaging algorithm was used for imaging processing. In December 2007, the PAMIR radar installed on a C-160 was used as the receiver in a further imaging test. Also in December 2007, the German DLR used F-SAR as the airborne receiving station and TerraSAR-X as the transmitting platform and completed a spaceborne/airborne bistatic side-looking SAR imaging test [19]. To ensure the synthetic aperture time required for high-resolution imaging, the TerraSAR-X satellite operated in spotlight mode, while F-SAR operated in inverse sliding spotlight mode. In 2009, FGAN-FHR carried out a satellite/aircraft bistatic backward-looking SAR test [20], which used the TerraSAR-X satellite as the transmitting station and the PAMIR radar installed on a C-160 aircraft as the receiving station. The University of Electronic Science and Technology of China (UESTC) carried out bistatic side-looking SAR [21] and bistatic forward-looking SAR [22] imaging experiments, using two Yun-5 aircraft as the receiving and transmitting platforms, in 2006 and 2012; the preliminary imaging results are shown in Figs. 1.33 and 1.34. Several subsequent flight tests verified that high-resolution bistatic forward-looking SAR imaging can be realized using existing airborne radar. The experimental verification of bistatic SAR is undergoing a transition from simple configurations to complex ones. With the further development of bistatic SAR, experimental verification of more complex configurations and of configurations better suited to practical applications will be carried out successively.
1.5.2 Development trends

Through the joint efforts of researchers around the world over many years, bistatic SAR has developed considerably, but it has not yet reached
Fig. 1.33 Airborne bistatic side-looking SAR test in UESTC (frequency-domain algorithm).
Fig. 1.34 Experiment of airborne bistatic forward-looking SAR in UESTC (frequency-domain algorithm).
the mature state of monostatic SAR. It still requires systematic and in-depth research and experimental work on imaging theory, platform combinations, imaging modes, scanning methods, dimensional resources, observation areas, and related methods and technologies. In terms of imaging theory, spatial resolution is already well understood, and radiation resolution and ambiguity analysis will be the focus of future research. In terms of platform combinations, airborne bistatic SAR is expected to be the first combination mode to be applied, and the GEO satellite/aircraft combination, the GEO-LEO satellite/satellite combination, and combinations of near-space platforms with airborne platforms will be the focus of future research. In terms of imaging modes, research on the translational invariant mode is relatively sufficient, while more complex imaging modes such as the translational variant mode and the maneuvering mode will become the main research directions. In terms of scanning methods, current research is mostly focused on the stripmap mode, and sliding spotlight, TOPS, and other modes will be the focus of future research. In terms of dimensional resources, bistatic SAR will develop in the direction of multiband and multipolarization operation, and bistatic interferometric SAR will also attract attention. In terms of the observation area, related studies of bistatic SAR will focus more on squint-looking,
forward-looking, backward-looking, and downward-looking geometries, so as to achieve full coverage of the imaging observation directions of bistatic SAR. In terms of imaging processing, research will focus more on processing under complex configurations, such as echo models, imaging algorithms, and motion compensation for strongly translational variant modes, as well as motion compensation technologies that account for terrain height variations. With the development, improvement, and popularization of POS and other high-precision attitude measurement and positioning equipment and of GPU-architecture processing technology, the fast autofocus BP imaging algorithm, with its strong adaptability to configuration and space variance and its high imaging accuracy, may become the mainstream of research and application. In addition to the research described above, ground moving target detection and image classification and recognition for bistatic SAR will also be research hotspots in the future.

Because the transmitter and receiver of bistatic SAR are separated, the requirements on the transmitting station are reduced. Bistatic SAR can not only observe ground object scattering characteristics different from those seen by monostatic SAR, but also offers concealed imaging at the receiving station and can realize forward-looking, downward-looking, and backward-looking imaging at the receiving station, greatly expanding the observation directions and application fields of SAR imaging. With the deepening of research, bistatic SAR will be widely used and will have a profound impact on the SAR imaging product family, the application fields of SAR technology, and more.
References
[1] Y. Jianyu, Bistatic synthetic aperture radar technology, J. Univ. Electron. Sci. Technol. China 45 (4) (2016) 482–501.
[2] Z. Sun, J. Wu, L. Zheng, et al., Spaceborne-airborne bistatic SAR experiment using GF-3 illuminator: description, processing and results, in: IEEE International Geoscience & Remote Sensing Symposium (IGARSS), July 11-16, Brussels, Belgium, 2021.
[3] G.P. Cardillo, On the use of the gradient to determine bistatic SAR resolution, in: Antennas and Propagation Society International Symposium, vol. 2, 1990, pp. 1032–1035.
[4] T. Zeng, M. Cherniakov, T. Long, Generalized approach to resolution analysis in BSAR, IEEE Trans. Aerosp. Electron. Syst. 41 (2) (2005) 461–474.
[5] A. Moccia, A. Renga, Spatial resolution of bistatic synthetic aperture radar: impact of acquisition geometry on imaging performance, IEEE Trans. Geosci. Remote Sens. 49 (10) (2011) 3487–3503.
[6] O. Loffeld, H. Nies, V. Peters, et al., Models and useful relations for bistatic SAR processing, IEEE Trans. Geosci. Remote Sens. 42 (10) (2004) 2031–2038.
[7] Y.L. Neo, F. Wong, I.G. Cumming, A two-dimensional spectrum for bistatic SAR processing using series reversion, IEEE Geosci. Remote Sens. Lett. 4 (1) (2007) 93–96.
[8] D. D'Aria, A. Monti Guarnieri, F. Rocca, Focusing bistatic synthetic aperture radar using dip move out, IEEE Trans. Geosci. Remote Sens. 42 (7) (2004) 1362–1376.
[9] R. Wang, O. Loffeld, Y.L. Neo, et al., Focusing bistatic SAR data in airborne/stationary configuration, IEEE Trans. Geosci. Remote Sens. 48 (1) (2010) 452–465.
[10] R. Wang, O. Loffeld, Y.L. Neo, et al., Extending Loffeld's bistatic formula for the general bistatic SAR configuration, IET Radar Sonar Navig. 4 (1) (2010) 74–84.
[11] J.H.G. Ender, I. Walterscheid, A.R. Brenner, Bistatic SAR–translational invariant processing and experimental results, IEE Proc. Radar Sonar Navig. 153 (3) (2006) 177–183.
[12] V. Giroux, H. Cantalloube, F. Daout, Frequency domain algorithm for bistatic SAR, in: Proceedings of the 6th European Conference on Synthetic Aperture Radar (EUSAR), 2006.
[13] J.C. Kirk Jr., Bistatic SAR motion compensation, in: International Radar Conference, vol. 1, 1985, pp. 360–365.
[14] H. Nies, O. Loffeld, K. Natroshvili, Analysis and focusing of bistatic airborne SAR data, IEEE Trans. Geosci. Remote Sens. 45 (11) (2007) 3342–3349.
[15] A. Moreira, Y. Huang, Airborne SAR processing of highly squinted data using a chirp scaling approach with integrated motion compensation, IEEE Trans. Geosci. Remote Sens. 32 (5) (1994) 1029–1040.
[16] A. Reigber, E. Alivizatos, A. Potsis, et al., Extended wavenumber-domain synthetic aperture radar focusing with integrated motion compensation, IEE Proc. Radar Sonar Navig. 153 (3) (2006) 301–310.
[17] I. Walterscheid, J.H.G. Ender, A.R. Brenner, et al., Bistatic SAR processing and experiments, IEEE Trans. Geosci. Remote Sens. 44 (10) (2006) 2710–2717.
[18] F. Balke, Field test of bistatic forward-looking synthetic aperture radar, in: IEEE International Radar Conference, 2005, pp. 424–429.
[19] M. Rodriguez-Cassola, S.V. Baumgartner, G. Krieger, et al., Bistatic TerraSAR-X/F-SAR spaceborne–airborne SAR experiment: description, data processing, and results, IEEE Trans. Geosci. Remote Sens. 48 (2) (2010) 781–794.
[20] T. Espeter, I. Walterscheid, J. Klare, et al., Bistatic forward-looking SAR: results of a spaceborne–airborne experiment, IEEE Geosci. Remote Sens. Lett. 8 (4) (2011) 765–767.
[21] X. Li, X. Jintao, H. Yulin, Y. Jianyu, Research on airborne bistatic SAR squint imaging mode algorithm and experiment data processing, in: 1st Asian and Pacific Conference on Synthetic Aperture Radar (APSAR 2007), IEEE, 2007.
[22] J. Yang, Y. Huang, H. Yang, J. Wu, W. Li, A first experiment of airborne bistatic forward-looking SAR—preliminary results, in: IEEE International Geoscience and Remote Sensing Symposium, 2013, pp. 4202–4204.
CHAPTER 2
Bistatic SAR imaging theory

Imaging theory refers to the sum of the quantitative mathematical relationships between the many factors involved in SAR system imaging and the resulting imaging effects. These factors include target characteristics, geometry, platform speed, the direction of observation, observation time, antenna beam, system parameters, operating modes, and others. Imaging effects include the width, resolution, signal-to-noise ratio, and contrast of the radar image obtained. Determining the quantitative relationships between them and obtaining the corresponding mathematical models through appropriate approximation and simplification is the theoretical basis for SAR system hardware design, performance prediction and analysis, and imaging processing.

Bistatic SAR adopts a configuration with separate transmitter and receiver platforms, resulting in obvious differences from traditional monostatic SAR in terms of imaging mechanism, conditions, models, modes, and capabilities. A thorough comprehension of these differences is an important prerequisite for the design of bistatic SAR systems, the selection of configuration modes, and the construction of imaging processing algorithms. This chapter mainly discusses bistatic SAR imaging methods, resolution characteristics, configuration design, and the echo model, and lays a theoretical foundation for the coverage of imaging algorithms, parameter estimation, and motion compensation in subsequent chapters.

It should be pointed out that most existing aircraft fly an approximately straight trajectory and maintain a relatively stable speed and flight attitude during the imaging observation time, so their motion characteristics are relatively simple and regular. This chapter therefore assumes the ideal conditions of a uniform linear motion trajectory and a stable, constant flight attitude. This assumption not only leads to a simple echo mathematical model but also yields a clear imaging processing flow, which helps in understanding the mathematical process and physical essence of echo generation and imaging processing more clearly. In practical flight situations, the influence on the echo of deviations of the aircraft trajectory and attitude from this ideal situation is attributed to motion error; its influence and elimination methods are discussed in the following chapters.
2.1 Imaging method

The imaging method refers to the mathematical method that converts the echo data of periodically transmitted pulses into image data corresponding to the two-dimensional distribution function of ground scattering intensity; the imaging process refers to the implementation of this method. In Chapter 1, to simplify the narrative and facilitate understanding, the imaging process was interpreted as fast time compression and slow time compression (or synthetic aperture). Strictly speaking, however, according to the currently used two-dimensional correlation imaging method, the process of SAR imaging is a two-dimensional correlation operation on the echoes of the various scattering points in the observed scene. To understand this process, we first need to understand the purpose of imaging, clarify the principle of the method, and identify the related problems that the method must solve in practical applications.
2.1.1 Purpose of imaging processing

According to physical principles, each scattering unit (i.e., a certain area of the earth's surface) reflects the incident electromagnetic wave, forming a corresponding echo. Its reflection intensity can be characterized by the scattering coefficient σ0, defined as the ratio between the radar cross-section (hereinafter referred to as RCS) of the scattering unit and the scattering unit area. Note that the RCS has different meanings for monostatic SAR and bistatic SAR. Taking the translational invariant bistatic side-looking SAR with parallel flight paths as an example, we define the coordinate system in the ground plane x-o-y (referred to as the target domain). Due to the influence of factors such as surface material and roughness, the scattering coefficient is a function of the position coordinates (x, y) and can be expressed as σ0(x, y), known as the scattering coefficient distribution function, shown in Fig. 2.1 (left). For the convenience of mathematical analysis, σ0(x, y) is usually considered to be the linear superposition of a series of impulse functions with different positions (x̃, ỹ) and intensities σ0(x̃, ỹ):

σ0(x, y) = ∬ σ0(x̃, ỹ) δ(x − x̃, y − ỹ) dx̃ dỹ    (2.1)

According to the definition of σ0(x, y), the RCS of the scattering unit centered at coordinates (x̃, ỹ) with length Δx and width Δy is σ0(x̃, ỹ)ΔxΔy. However, the RCS corresponding to a certain area S of the surface is given by:
Fig. 2.1 Purpose of SAR imaging.
σS = ∬S σ0(x, y) dx dy    (2.2)
Since radar works by periodically transmitting signals and receiving echoes, the time variable is usually decomposed into fast time and slow time, in order to distinguish the amplitude and phase changes of the echo caused by electromagnetic wave propagation and scattering from those caused by the interpulse movement of the platform or the change of observation angle; this facilitates the separate processing of delay resolution and Doppler resolution. During SAR transmission, reception, and echo recording, the echo of the entire observed area is recorded as a two-dimensional array in the storage device, with one dimension corresponding to slow time t and the other to fast time τ. Each scattering unit in the observed area forms echo data in the slow time t-fast time τ plane (hereinafter referred to as the data domain) and covers a certain area in that plane. At a given slow time t, the data along the τ axis correspond to the envelope of the echo generated by the pulse transmitted at that time, and the envelope center position corresponds to the echo delay of the scattering point. The echo signal strength corresponding to the scattering unit with center position coordinates (x̃, ỹ) and area ΔxΔy is proportional to its RCS σ0(x̃, ỹ)ΔxΔy. However, the coverage area and variation law of the echo signal in the data domain are related to the relative motion geometric parameters between the radar and the scattering unit, to the antenna beamwidth, pointing direction, and their variation, and to the signal waveform and parameters of the radar system. It is also related to other
factors and varies with the center position coordinates (x̃, ỹ) of the scattering unit. Therefore, the echo signal generated by the unit can be expressed as σ0(x̃, ỹ)ΔxΔy · h(t, τ; x̃, ỹ). If the scattering unit has unit RCS, i.e., σ0(x̃, ỹ)ΔxΔy = 1, the corresponding echo is h(t, τ; x̃, ỹ), known as the normalized echo. Therefore h(t, τ; x̃, ỹ) can be regarded as the echo generated by a scattering unit of unit RCS with center coordinates (x̃, ỹ). Mathematically, h(t, τ; x̃, ỹ) is also equivalent to the echo generated by an isolated scattering point with scattering coefficient distribution function σ0(x, y) = δ(x − x̃, y − ỹ), because according to Eq. (2.2) such an isolated scattering point has unit RCS and is located at the coordinates (x̃, ỹ).

In translational invariant bistatic side-looking SAR with parallel flight paths, when the transmitted signal is a linear frequency modulated signal, h(t, τ; x̃, ỹ) usually appears as a two-dimensional linear frequency modulated signal in the data domain. The typical shape of its real part is shown in Fig. 2.1 (middle). Its coverage area is symmetrically curved, and its center position can be calculated from the corresponding scattering point position coordinates (x̃, ỹ) and the platform velocity. If the platform height is neglected and the x-axis direction coincides with the platform track, the center position is located at (x̃/v, 2ỹ/c), where v is the common velocity of the transmitter and receiver and c is the speed of light. In most other modes of bistatic SAR, the coverage area of h(t, τ; x̃, ỹ) will be different, generally appearing as an asymmetric or slanted bend.

Since the entire observation area consists of a large number of adjacent scattering units, the echoes corresponding to scattering units with different center position coordinates (x̃, ỹ) in the target domain superimpose linearly in the data domain. Therefore the echo s(t, τ) of the entire observed surface recorded by the radar, apart from the noise introduced by the receiver, can be expressed as

s(t, τ) = ∬ σ0(x̃, ỹ) h(t, τ; x̃, ỹ) dx̃ dỹ    (2.3)

The purpose of imaging is to reconstruct and estimate the surface scattering coefficient distribution function σ0(x, y) in the target domain by using the echo data s(t, τ) in the data domain and the radar motion information recorded synchronously during data acquisition. The essence of imaging is to regather the echo energy of the various scattering points from the data domain, restore the geometric position and scattering intensity information of each scattering point in the image domain (x, y), and form the radar image
σ̂0(x, y) of the ground objects, making σ̂0(x, y) = σ0(x, y) through appropriate mathematical processing, as shown in Fig. 2.1 (right). However, it is difficult to achieve the ideal effect shown in Fig. 2.1 (right) in practical applications, because the entire observation area and every scattering unit are composed of a large number of dense scattering points, and the echoes of immediately adjacent scattering points have overlapping coverage areas and similar variation laws, so it is difficult to distinguish them effectively. See the analysis in the next section for details.
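The normalized echo h(t, τ; x̃, ỹ) described above can also be generated numerically. The following Python sketch simulates the demodulated echo of a unit-RCS point target for a parallel-track, translational invariant geometry with a linear frequency modulated pulse; the antenna pattern weighting is omitted, the rectangular envelope and all parameter names are assumptions of this example, and the sketch is meant only to make the shape of h(t, τ; x̃, ỹ) in the data domain tangible.

import numpy as np

def point_target_echo(t_slow, tau, target_xy, tx0, rx0, v, fc, kr, tp):
    """Simulate the normalized echo h(t, tau; x, y) of a unit-RCS point target.

    t_slow     : (n_slow,) slow-time axis [s]
    tau        : (n_fast,) fast-time axis [s]
    target_xy  : (x, y) target ground coordinates [m]
    tx0, rx0   : (3,) transmitter / receiver positions at t = 0 [m]
    v          : (3,) common platform velocity (parallel tracks) [m/s]
    fc, kr, tp : carrier frequency [Hz], LFM chirp rate [Hz/s], pulse width [s]
    """
    c = 299792458.0
    tgt = np.array([target_xy[0], target_xy[1], 0.0])
    h = np.zeros((t_slow.size, tau.size), dtype=complex)
    for k, t in enumerate(t_slow):
        # bistatic range history: transmitter-target plus target-receiver distance
        r_bi = (np.linalg.norm(tx0 + v * t - tgt) +
                np.linalg.norm(rx0 + v * t - tgt))
        td = r_bi / c                        # bistatic echo delay
        u = tau - td
        env = np.abs(u) <= tp / 2.0          # rectangular pulse envelope
        # demodulated echo: LFM phase about the delayed pulse center plus carrier delay phase
        h[k] = env * np.exp(1j * np.pi * kr * u**2) * np.exp(-1j * 2.0 * np.pi * fc * td)
    return h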
2.1.2 Two-dimensional correlation imaging method

The current imaging method is a point-by-point two-dimensional correlation operation on the echoes of each scattering point. In the image domain (x, y), the result this method produces for each scattering point appears as a two-dimensional sinc-like function, as shown in Fig. 2.2 (right). Although the method can gather the echo energy of each scattering point from the data domain into the image domain and restore its positional information, some echo energy remains spread out with an oscillating, decaying trend. This adversely affects the resolution of adjacent scattering points and the estimation of their intensities. The reasons for this phenomenon are discussed shortly.

Correlation computation is a widely used processing method in radar signal processing. It obtains the integral value σ̂0 by integrating the conjugate product S·H*. The essence of the correlation computation is to test the similarity or correlation between S and
Fig. 2.2 The process of synthetic aperture radar imaging (taking the translational invariant bistatic side-looking SAR with parallel flight paths as an example).
H, so it can also be called a similarity integral. When S = N + σ0H, where σ0 is a constant independent of the signal variable, the H-signal component σ0H is already contained in the S signal. When this component is multiplied by H*, it forms an in-phase stacking over the entire domain of integration and thus makes the dominant contribution to the integral value σ̂0. If the integral of H·H*, denoted 1/k, meets the normalization condition 1/k = 1, and the contribution to the integral of the component N contained in S, i.e., the integral of N·H*, is 0 (because N is orthogonal to or uncorrelated with H), then the correlation integral value σ̂0 will be equal to σ0. This means that the correlation operation can extract the amount σ0 of the H signal contained in the S signal. If 1/k ≠ 1, multiplying the correlation integral by k achieves the same goal. Of course, if N and H do not meet the orthogonality or uncorrelatedness condition, the correlation integral σ̂0 will deviate from σ0.

In SAR imaging processing, the same idea is applied to restore or estimate the scattering coefficient σ0(x, y), namely adopting the following two-dimensional correlation operation to obtain the radar image σ̂0(x, y):

σ̂0(x, y) = k(x, y) ∬ s(t, τ) h*(t, τ; x, y) dt dτ    (2.4)

where k(x, y) is the normalization factor of the imaging process. It is used to achieve dimensional conversion and calibration of the imaging process, and it meets the following normalization condition:

k(x, y) ∬ h(t, τ; x, y) h*(t, τ; x, y) dt dτ = 1    (2.5)

Note that k(x, y) is related to many factors in the whole imaging process, including the geometric configuration during imaging data acquisition, distance, surface slope, transmit power, signal waveform, antenna beam, scanning mode, receive gain, and so on.

According to Eq. (2.4), in order to obtain the scattering coefficient estimate σ̂0(x, y) corresponding to the position coordinates (x, y) in the image domain, it is necessary to construct the normalized echo h(t, τ; x, y) corresponding to those coordinates according to the imaging geometry, the motion parameters, and the echo variation law. Then h(t, τ; x, y) is conjugate-multiplied with the entire echo s(t, τ) and integrated two-dimensionally according to Eq. (2.4), a two-dimensional correlation operation, so that the scattering coefficient estimate σ̂0(x, y) corresponding to
the position coordinates (x, y) can be obtained; the estimate of the scattering rate density function σ0(x, y) is obtained at the same time.

The imaging mechanism of Eq. (2.4) can be analyzed further from both qualitative and quantitative perspectives, in order to understand the characteristics of this imaging method as well as its advantages and limitations in practical application.

From the qualitative perspective, to obtain the estimate σ̂0(x0, y0) of the scattering coefficient at the point (x0, y0) in the image domain of Fig. 2.2, one simply follows Eq. (2.4): guided by the similarity-measure principle, h(t, τ; x0, y0) is used to check whether the echo data s(t, τ) contain the echo component σ0(x0, y0)h(t, τ; x0, y0) generated by the scattering point at (x0, y0). If this component is contained in the echo data s(t, τ) and there are no echo components generated by other scattering points, then according to the normalization condition of Eq. (2.5), the integral value of Eq. (2.4) will be exactly σ0(x0, y0). Unfortunately, in practical applications there are always echo components generated by other scattering points within the echo data s(t, τ), and they contribute to the result of Eq. (2.4). Their contribution depends not only on how far they deviate from (x0, y0), but also on the value of their scattering coefficients relative to σ0(x0, y0). For example, in Fig. 2.2 (left), the scattering point (0, y0), which is near (x0, y0), produces the echo σ0(0, y0)h(t, τ; 0, y0). Since the overlap of h(t, τ; 0, y0) and h(t, τ; x0, y0) is small, a large range of in-phase stacking cannot form in the integral of Eq. (2.4); therefore, when σ0(0, y0) and σ0(x0, y0) are comparable, the contribution of the scattering point (0, y0) to the integral value is not large. Although the individual contribution of each such adjacent scattering point is small, the actual surface consists of a large number of dense scattering points, so their combined contribution cannot be ignored: it can cause the result of Eq. (2.4) to deviate significantly from σ0(x0, y0), resulting in a large estimation error, and points whose scattering coefficients are much larger than σ0(x0, y0) have a particularly non-negligible effect. According to Eq. (2.4), the other scattering points (xi, yi) that can adversely affect σ̂0(x0, y0) lie within a certain distance of (x0, y0), equal to the sum of the unilateral extension ranges of h(t, τ; x0, y0) and h(t, τ; xi, yi). The scattering point (0, y1) in Fig. 2.2 (left), which is far from (x0, y0), produces the echo σ0(0, y1)h(t, τ; 0, y1), but it lies beyond this extension distance, so the coverage areas of h(t, τ; 0, y1) and h(t, τ; x0, y0) do not overlap. Therefore the scattering point (0, y1) does not affect the result.
From the quantitative perspective, substituting Eq. (2.3) into Eq. (2.4) gives the result σ̂0(x, y) of the two-dimensional correlation imaging method:

σ̂0(x, y) = ∬ σ0(x̃, ỹ) I(x, y; x̃, ỹ) dx̃ dỹ    (2.6)

where
I(x, y; x̃, ỹ) = k(x, y) ∬ h(t, τ; x̃, ỹ) h*(t, τ; x, y) dt dτ    (2.7)
This represents the impulse response of the entire imaging process, namely the response of a scattering point in the image domain, as shown in Fig. 2.2 (bottom). Comparing the expressions of σ0(x, y), s(t, τ), and σ̂0(x, y), it can be seen that the scattering coefficient, the echo data, and the radar image are all formed by the stacking of the contributions of multiple scattering points. Moreover, δ(x − x̃, y − ỹ) generates h(t, τ; x̃, ỹ) during data recording and generates I(x, y; x̃, ỹ) during processing, so I(x, y; x̃, ỹ) can be regarded as the response generated in the image domain by the single scattering point (x̃, ỹ). In fact, comparing Eq. (2.6) with Eq. (2.1) shows that because I(x, y; x̃, ỹ) ≠ δ(x − x̃, y − ỹ), it must be that σ̂0(x, y) ≠ σ0(x, y). This shows that the result σ̂0(x, y) of the correlation operation is not an exact estimate of σ0(x, y); higher estimation accuracy can only be obtained when the scattering points are sparse, as in Fig. 2.2 (left). For an actual scene, the reconstruction error of this imaging method is determined by the specific shape of σ0(x, y), by the overlap of the echo dispersion ranges of different scattering points in the data domain, and by the similarity of their variation laws.

The reconstruction error of this imaging method can be analyzed in more detail from the expression for I(x, y; x̃, ỹ) in Eq. (2.7). In fact, Eq. (2.7) indicates that I(x, y; x̃, ỹ) represents a cross-correlation value, or similarity, between h(t, τ; x, y) and h(t, τ; x̃, ỹ). When (x, y) = (x̃, ỹ), h(t, τ; x, y) = h(t, τ; x̃, ỹ) in Eq. (2.7), its coverage area perfectly overlaps that of h(t, τ; x̃, ỹ) in the data domain (t, τ), and their conjugate product adds in phase over the entire integration domain, so the integral reaches its maximum value; according to the normalization condition of Eq. (2.5), the value of I(x, y; x̃, ỹ) is then 1. But when (x, y) ≠ (x̃, ỹ), the overlap area of h(t, τ; x, y)
and h(t, τ; x̃, ỹ) in Eq. (2.7) is obviously reduced, the conjugate product values at different (t, τ) no longer stack completely in phase, and the integral value decreases significantly. Therefore it must hold that |I(x, y; x̃, ỹ)| ≤ 1, with equality when (x, y) = (x̃, ỹ). In other words, in the image domain with x and y as variables, the peak of I(x, y; x̃, ỹ) is at (x̃, ỹ). As (x, y) deviates farther and farther from (x̃, ỹ), the overlap area of h(t, τ; x, y) and h(t, τ; x̃, ỹ) gradually decreases until it eventually disappears, and the value of |I(x, y; x̃, ỹ)| shows a decaying, oscillating trend. Fig. 2.2 (right) shows the typical form of |I(x, y; x̃, ỹ)| when the transmitted signal is a linear frequency modulated signal and the operating mode is translational invariant bistatic side-looking SAR with parallel flight paths; it is very close to the cross-shaped pattern of a two-dimensional sinc function. In addition to a main lobe of a certain width, there are sidelobe extensions in the echo delay direction and in the cross direction, and the unilateral extent of the sidelobes is equal to the sum of the unilateral extents of h(t, τ; x, y) and h(t, τ; x̃, ỹ). The main lobe width determines the spatial resolution of the system, while the sidelobes significantly affect the estimated scattering coefficients near weak scattering points and affect the contrast of the entire image. As Section 2.2 explains, because in other bistatic SAR configurations the delay resolution direction and the Doppler resolution direction are usually nonorthogonal and not aligned with the range and azimuth directions, the scattering point echo appears as an obliquely bent coverage area in the data domain, and the shape of |I(x, y; x̃, ỹ)| is also different, usually showing a nonorthogonal cross shape. This phenomenon is analyzed further in Section 2.2.1.

In fact, Eqs. (2.6) and (2.7) show that the σ̂0(x, y) obtained by this imaging processing method is the weighted sum of the image domain responses I(x, y; x̃, ỹ) produced by every scattering point (x̃, ỹ) in the scene, the weights being exactly the scattering coefficients σ0(x̃, ỹ) of the scattering points. This shows that the two-dimensional correlation integral produces a significant cross-effect between the image domain responses of the scattering points, so that the σ̂0(x, y) calculated by Eq. (2.6) does not accurately reconstruct the scattering rate density function σ0(x, y) of the target scene. The reason for this cross-effect is the overlap of the coverage areas and the similarity of the variation laws of the echoes generated by adjacent scattering points in the data domain.
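The two-dimensional correlation operation of Eqs. (2.4) and (2.5) can be evaluated numerically in exactly the point-by-point form described above. The following Python sketch does so, with a discrete sum standing in for the integrals and the sampling intervals absorbed into the normalization; the function names and the callable used to construct h(t, τ; x, y) are assumptions of this example. The deliberate inefficiency of this direct evaluation anticipates the efficient-calculation problem raised as the seventh key issue in the next subsection.

import numpy as np

def correlate_image(echo, grid_xy, make_h):
    """Point-by-point two-dimensional correlation imaging in the spirit of Eq. (2.4).

    echo    : (n_slow, n_fast) recorded echo s(t, tau)
    grid_xy : sequence of (x, y) image-domain pixel coordinates
    make_h  : callable returning the normalized echo h(t, tau; x, y) for a pixel,
              e.g. the point_target_echo sketch above with the geometry fixed
    """
    image = np.zeros(len(grid_xy), dtype=complex)
    for i, (x, y) in enumerate(grid_xy):
        h = make_h(x, y)
        # normalization factor k(x, y) from the discrete form of Eq. (2.5)
        k_xy = 1.0 / np.sum(np.abs(h) ** 2)
        # two-dimensional correlation (similarity integral) of Eq. (2.4)
        image[i] = k_xy * np.sum(echo * np.conj(h))
    return image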
2.1.3 Related questions to be solved

In fact, when there is only a single scattering point in the target scene and receiver noise is mixed into the echo, the two-dimensional correlation imaging method is, from the perspective of optimum detection and estimation theory, the best method for estimating the amplitude of the scattering point echo and for determining the position coordinates of the scattering point, although the estimation result still shows a limited sidelobe extension. However, because the actual scene contains a large number of dense scattering points, the method suffers from errors caused by the cross-effect of the scattering point image responses. Therefore, even from the perspective of optimum detection and estimation theory, this method is not the best imaging method. Despite these obvious shortcomings, the two-dimensional correlation imaging method has long been the mainstream imaging method, simply because no better method has yet replaced it. It should be noted that when adopting this imaging method for ground imaging in engineering applications, a number of key issues must be resolved.

The first key issue is imaging calibration, that is, how to determine the normalization factor k(x, y) of the imaging process in Eq. (2.4) so that the imaging result σ̂0(x, y) directly represents an estimate of the scattering coefficient distribution function σ0(x, y). Since k(x, y) involves many factors, calculating it directly would introduce large errors. In engineering applications, internal and external calibration methods are usually used, especially when σ0(x, y) must be measured as accurately as possible for the inversion of surface or ground material and physical and chemical parameters. Of course, if the purpose of imaging is only to use the light-dark relations or contour shapes in the image domain to distinguish various types of objects or targets in the observation area, calibration is not necessarily essential. However, omitting the normalization factor is equivalent to treating k(x, y) as a constant. The radar image obtained under this condition is an uncalibrated image and can only reflect the relative magnitudes of the scattering rates of adjacent scattering units; it does not represent the actual scattering rate of the target object, and there are also significant errors in the relative magnitudes of the scattering rates of scattering units far apart from each other. As shown in the fourth picture in Fig. 1.4 of Chapter 1, there is a large difference in the brightness of the image corresponding to similar
features in different areas of the uncalibrated radar image. This occurs because, when k(x, y) is taken as a constant, the effects of electromagnetic wave propagation attenuation, antenna beam modulation, and surface slope remain in the radar image.

The second key issue is the echo model: the normalized echo h(t, τ; x, y) and the normalization factor k(x, y) must be constructed for every coordinate position (x, y) of the target domain and used as the basis for calculating σ̂0(x, y) in Eq. (2.4). It is therefore necessary to derive accurate expressions for h(t, τ; x, y) and k(x, y) according to the predetermined motion geometry, transmitted signal waveform, antenna pattern, and scanning mode. However, in order to avoid the excessive computational resource consumption caused by complex mathematical operations, the exact expression must be properly approximated while maintaining sufficient accuracy, forming an echo model that reduces computational complexity and resource consumption.

The third key issue is parameter estimation, which refers to obtaining the exact value of each parameter in the expression of the normalized echo h(t, τ; x, y). In practical applications, the relative motion geometric parameters of the two platforms and the observation area can be obtained by the platform motion measurement devices, and the relevant parameters of h(t, τ; x, y) can then be calculated accordingly. Unfortunately, given the errors of current motion measurement and time-frequency synchronization devices, the echo signal parameters calculated from the measured motion parameters and space-time coordinate conversion are generally not accurate enough to yield well-focused scattering point echoes and accurately positioned images. It is therefore necessary to study parameter estimation methods based on the echo data themselves in order to provide accurate estimates of the echo signal parameters, although this comes at the expense of additional computation.

The fourth key issue is motion compensation, which refers to measuring, estimating, and compensating for the random errors produced by the irregular motion of the platform. This compensation is needed because, in building the echo model, it is usually assumed that the platform moves regularly in a predetermined manner in order to simplify the mathematical analysis and obtain a simple expression. But in practical applications, due to various factors such as airflow disturbances and flight control, the actual motion of the platform will randomly
deviate from the expected regular motion, which causes a corresponding deviation of the echo law in the data domain and thereby affects the imaging result. It is therefore necessary to use motion measurement devices and the echo data to measure and estimate this deviation and to perform corresponding correction and compensation so that the echo law in the data domain returns to the predetermined state.

The fifth key issue is sidelobe control, which refers to controlling the sidelobe level of the image-domain scattering point response I(x, y; x̃, ỹ), reducing the diffusion of each scattering point's echo energy in the image domain, and suppressing the cross-effect of adjacent scattering point responses in the image domain, in order to reduce the difference between σ̂0(x, y) and σ0(x, y) for weak scattering points and improve the accuracy of scattering coefficient estimation. In actual processing, frequency-domain or time-domain windowing methods are usually used to control the sidelobe level. This effectively reduces the masking of adjacent weak scattering point responses by the sidelobes of strong scattering point responses in the image domain. Intuitively, this yields better image contrast, so that strong scattering regions and adjacent weak scattering regions appear as light and dark areas with clear boundaries in the image. However, the main lobe of I(x, y; x̃, ỹ) broadens and the spatial resolution decreases at the same time.

The sixth key issue is ambiguity control. The fast time τ and the slow time t in the two-dimensional correlation operation of Eq. (2.4) are actually derived from segmenting the time axis into equal lengths: each segment begins at the corresponding pulse transmission time and has a length equal to the radar pulse repetition period Tr = 1/Fr. Aligning the echo segments of the successive periods and arranging them in order yields the echo s(t, τ) in the data domain; the τ direction represents the time within each period from its beginning to its end, and the t axis direction represents the sequence of the periods. Therefore, for an echo whose delay is larger than Tr, the apparent delay in the data domain is the remainder after deducting an integer multiple of Tr, forming a delay ambiguity, also called distance ambiguity. For an echo whose Doppler frequency is higher than Fr/2, the apparent frequency in the data domain is the remainder after deducting an integer multiple of Fr, forming a Doppler ambiguity, called azimuth ambiguity. Both ambiguities cause the echo data in the data domain to misrepresent the mutual positional relationship of the
scattering point echoes at different positions, leading to aliasing of their echoes in the data domain and to significant ghost images, a "ghosting" phenomenon, in the image domain. To this end, the causes and laws of this problem must be analyzed carefully and targeted measures proposed to effectively control the influence of these two kinds of ambiguity, which constitutes ambiguity control in SAR (a simple numerical check of both ambiguity conditions is sketched below, after the eighth key issue).

The seventh key issue is efficient calculation, that is, constructing a fast algorithm to replace the inefficient Eq. (2.4). Eq. (2.4) is a point-by-point process: for each coordinate position in the image domain, the corresponding normalized scattering point echo h(t, τ; x, y) is constructed and the integral is then calculated, one position at a time. At the same time, the integral of Eq. (2.4) is a two-dimensional correlation operation requiring two-dimensional calculation. These two factors make the computational efficiency of the two-dimensional correlation imaging method very low. With the existing serial computing devices carried by the moving platform, the required imaging processing time exceeds that allowed by practical applications by several orders of magnitude. It is therefore necessary to work out a fast algorithm that significantly improves the computational efficiency of Eq. (2.4), an imaging algorithm that shortens the time required for imaging processing, so that the two-dimensional correlation imaging method can meet the real-time requirements of engineering applications.

The eighth key issue is projection distortion correction. To improve calculation efficiency and simplify the processing in fast algorithms, the horizontal-plane rectangular coordinate system is often not used to describe the target domain and the image domain. In this case, a two-dimensional mapping of the obtained image is needed to find the estimate σ̂0(x, y) of the horizontal-plane scattering rate distribution function. The mapping is determined by the projection relationship between the horizontal plane and the adopted target domain or image domain coordinate system, and it can be used to correct the geometric distortion of the equidistant circular projection corresponding to the "close distance compression" phenomenon mentioned in Chapter 1. In addition, if the surface undulates, there will remain, between the σ̂0(x, y) obtained by Eq. (2.4) and the horizontal projection σ0(x, y) of the surface three-dimensional scattering rate distribution function σ0(x, y, h), projection geometric distortion and shadow-region radiation distortion that cannot be corrected by two-dimensional imaging, namely the "near tower upside down" and "back slope shadow" effects mentioned in Chapter 1.
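As a simple numerical illustration of the two ambiguity conditions described in the sixth key issue, the following Python check compares the longest echo delay with the pulse repetition period and the Doppler bandwidth with the pulse repetition frequency. All numerical values are assumed example figures, not system specifications from this book.

c = 299792458.0        # speed of light [m/s]
prf = 1500.0           # pulse repetition frequency Fr [Hz] (assumed example)
Tr = 1.0 / prf         # pulse repetition period [s]

max_bistatic_range = 120e3           # longest transmitter-target-receiver path [m] (assumed)
max_delay = max_bistatic_range / c   # longest echo delay [s]
doppler_bandwidth = 1200.0           # total Doppler bandwidth of the echo [Hz] (assumed)

# Delay (distance) ambiguity: echoes whose delay exceeds Tr fold back by multiples of Tr.
range_ambiguous = max_delay > Tr
# Doppler (azimuth) ambiguity: a spectrum wider than Fr aliases by multiples of Fr.
doppler_ambiguous = doppler_bandwidth > prf

print(f"max delay = {max_delay * 1e6:.1f} us, Tr = {Tr * 1e6:.1f} us, "
      f"delay-ambiguous: {range_ambiguous}")
print(f"Doppler bandwidth = {doppler_bandwidth:.0f} Hz, Fr = {prf:.0f} Hz, "
      f"azimuth-ambiguous: {doppler_ambiguous}")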
The ninth key issue is system parameter and geometry design. First, receiver noise introduces an additive noise term n(t, τ) into the echo s(t, τ) of Eq. (2.3), which appears as additive noise in the point target response I(x, y; x̃, ỹ) of Eq. (2.7) and in the final image σ̂0(x, y) of Eq. (2.6), thereby affecting the actual spatial resolution and radiation resolution of the image and ultimately its interpretability. It is therefore necessary to derive the SAR radar equation and understand the quantitative relationship between the echo signal-to-noise ratio and the transmission and reception distances, the target scattering intensity, and the system parameters, so that the influence of noise can be reduced through reasonable system design and constrained to an allowable range. Second, the variation law and parameters of the point target echo h(t, τ; x̃, ỹ) in Eq. (2.3) have a decisive influence on the point target response I(x, y; x̃, ỹ) and the final image σ̂0(x, y), and they determine the resolution and sidelobe level of the system. However, h(t, τ; x̃, ỹ) is related not only to the transmitted signal waveform and parameters, but also to the antenna beam shape and pointing and their time variation, and it depends strongly on the triangular configuration formed by the transmitting station, the receiving station, and the imaging area and on its time variation. It is therefore necessary to grasp the quantitative relationship between resolution and these related factors and to derive geometric configuration and system parameter design methods, so that the imaging resolution and sidelobes can meet the expected requirements through rational design and selection of an appropriate configuration.
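As a rough illustration of the signal-to-noise budget involved in the ninth key issue, the following Python sketch evaluates the generic bistatic radar equation for the single-pulse echo SNR before pulse compression and synthetic aperture integration. All parameter values are assumed examples; the complete SAR radar equation, including processing gains and the scattering coefficient of a resolution cell, is beyond this sketch.

import numpy as np

k_B = 1.380649e-23      # Boltzmann constant [J/K]

Pt = 1e3                # peak transmit power [W] (assumed)
Gt = 10**(35 / 10)      # transmit antenna gain, 35 dB (assumed)
Gr = 10**(35 / 10)      # receive antenna gain, 35 dB (assumed)
lam = 0.03              # wavelength [m], X-band (assumed)
sigma = 1.0             # bistatic RCS of the resolution cell [m^2] (assumed)
Rt, Rr = 30e3, 20e3     # transmitter-target and target-receiver ranges [m] (assumed)
T0 = 290.0              # reference noise temperature [K]
B = 100e6               # receiver (signal) bandwidth [Hz] (assumed)
F = 10**(4 / 10)        # receiver noise figure, 4 dB (assumed)
L = 10**(6 / 10)        # system losses, 6 dB (assumed)

snr = (Pt * Gt * Gr * lam**2 * sigma) / \
      ((4 * np.pi)**3 * Rt**2 * Rr**2 * k_B * T0 * B * F * L)
print(f"single-pulse SNR = {10 * np.log10(snr):.1f} dB")
# Pulse compression and coherent slow-time integration add further gain on top of this figure.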
2.2 Resolution performance

Resolution performance analysis is one of the most important components of SAR imaging theory and covers both spatial resolution and radiation resolution. Radiation resolution reflects the minimum difference in scattering rate that can be distinguished in a radar image. Spatial resolution reflects the minimum spatial separation between two targets that can be distinguished in the radar image; it is usually measured by the range and azimuth resolutions, which reflect the resolving power along the ground projection of the receiving station's line of sight and along its orthogonal direction, respectively. From the perspective of the SAR imaging principle, the two-dimensional spatial resolving power derives from the propagation-delay resolution of the fast-time echo signal and the Doppler resolution of the slow-time Doppler signals of different scattering points, which are simply referred to as the delay resolution and the Doppler resolution.
Their corresponding spatial resolving powers are called the delay ground resolution and the Doppler ground resolution, respectively, and the processes that form them are called fast-time compression and slow-time compression. In monostatic side-looking SAR, the direction of the delay ground resolution coincides with the range direction, the direction of the Doppler ground resolution coincides with the azimuth direction, and these two directions are orthogonal. However, in most monostatic squint-looking and bistatic SAR modes this correspondence no longer holds: the direction of the delay ground resolution and the direction of the Doppler ground resolution are generally nonorthogonal. Therefore it is necessary to distinguish the delay resolution from the range resolution and the Doppler ground resolution from the azimuth resolution. Spatial and radiation resolution are the core indicators for measuring the imaging performance of a SAR system. In bistatic SAR they are determined not only by the radar system parameters but also by factors such as the platform positions, the platform motion parameters, and the beam scanning mode, so they have characteristics different from those of monostatic SAR. The purpose of studying the resolution characteristics of bistatic SAR is to grasp the qualitative and quantitative relationships between spatial and radiation resolution and these factors, providing a theoretical basis for the design of the transceiver configuration and the working mode. In this section, based on the ground delay and Doppler contour gradient fields produced by the spatial relationship of the bistatic SAR transmitter and receiver and its time variation, the quantitative relationship between spatial resolution and the transceiver spatial configuration is derived. This describes the nonorthogonality and spatial variation of the spatial resolution and gives a quantitative formula for the radiation resolution of bistatic SAR.
2.2.1 Spatial resolution

In Chapter 1, concepts such as spatial resolution, equal delay lines, and equal Doppler lines were introduced. It was explained that when the equal delay line increment is set to 1/Br, the reciprocal of the transmitted signal bandwidth Br, the interval between equal delay lines represents the ground resolution corresponding to the delay resolution, that is, the delay ground resolution, whose direction is orthogonal to the
direction of the equal delay line. Similarly, when the Doppler increment is set to 1/Ta, the reciprocal of the observation time Ta, the interval between equal Doppler lines represents the ground resolution corresponding to the Doppler resolution, namely, the Doppler ground resolution, whose direction is orthogonal to the direction of the equal Doppler line. It is worth noting that in most bistatic SAR configurations, the equal delay lines of the area to be imaged are nonorthogonal to the equal Doppler lines. This makes the directions of the two ground range resolutions nonorthogonal and causes geometric shape distortion in the imaging results of frequency-domain imaging algorithms, thereby affecting the recognizability of the image. In essence, this geometric distortion arises from the distribution pattern and mutual relationship of the equal delay lines and equal Doppler lines, in addition to the signal bandwidth and observation time; it is rooted in the geometry and time variation of the transceiver stations relative to the area to be imaged. Thus, understanding the impact of a specific configuration on spatial resolution performance and grasping the mathematical relationship between them from a quantitative perspective has important theoretical value for predicting the spatial resolution performance of bistatic SAR systems and for guiding the spatial relationship of the platform formation and its time variation in application task planning.

(1) Spatial resolution equation

From the perspective of vector calculus, the distributions of echo time delay and Doppler on the ground can each be regarded as a scalar field. The gradient of such a scalar field is a vector field whose direction is the fastest-growing direction of the scalar field, which is also the direction of minimum spacing between two adjacent contours, and whose magnitude is the maximum rate of change with position; it is obtained by taking the gradient of the time delay [1]. Because the gradient reflects the ratio of the delay increment to the ground-distance increment, when the given delay increment is the delay resolution 1/Br, the corresponding ground-distance increment obtained from the gradient is the delay ground resolution. Likewise, when the given Doppler increment is the Doppler resolution 1/Ta, the corresponding ground-distance increment is the Doppler ground resolution [2]. Assume that the echo time delay generated by the scattering point P with coordinates (x, y) in the ground rectangular coordinate
system is τP(t) at time t. Then, at this moment, the gradient of τP(t) with respect to spatial position is:

$$\nabla\tau_P(t) = \frac{\partial\tau_P(t)}{\partial x}\mathbf{i} + \frac{\partial\tau_P(t)}{\partial y}\mathbf{j} + \frac{\partial\tau_P(t)}{\partial z}\mathbf{k} = \frac{1}{c}\left[\boldsymbol{\mu}_{TP}(t) + \boldsymbol{\mu}_{RP}(t)\right] \tag{2.8}$$
where μTP(t) and μRP(t) are the unit line-of-sight vectors from the transmitting station and the receiving station to the point P at time t, respectively. According to the parallelogram rule of vector composition, the direction of ∇τP(t) is the composite direction of the two line-of-sight unit vectors at this time, and the magnitude of ∇τP(t) is determined by the angle between the line-of-sight vectors. Assume that the echo Doppler frequency at the point P at time t is fP(t). Its gradient in rectangular coordinates is:

$$\nabla f_P(t) = \frac{\partial f_P(t)}{\partial x}\mathbf{i} + \frac{\partial f_P(t)}{\partial y}\mathbf{j} + \frac{\partial f_P(t)}{\partial z}\mathbf{k} = \frac{1}{\lambda}\left[\omega_{TP}\,\Gamma_{TP}(t) + \omega_{RP}\,\Gamma_{RP}(t)\right] \tag{2.9}$$

where ωTP and ωRP are the rotational angular velocities of the transmitting station and the receiving station relative to the point P, and ΓTP(t) and ΓRP(t) are unit vectors along the rotation directions of the transmitting and receiving lines of sight, respectively. This formula shows that the direction of ∇fP(t) is the composite direction of the vectors ωTPΓTP(t) and ωRPΓRP(t). Projecting these two gradients onto the ground gives $\nabla\tau_P^G(t) = P_\tau^\perp\nabla\tau_P(t)$ and $\nabla f_P^G(t) = P_t^\perp\nabla f_P(t)$, where $P_\tau^\perp$ and $P_t^\perp$ are the projection matrices onto the ground for the delay and Doppler gradients, respectively. Thus the delay ground resolution and the Doppler ground resolution can be expressed by Eqs. (2.10) and (2.11), respectively.
$$\rho_\tau^G = \frac{1}{B_r\left\|P_\tau^\perp\nabla\tau_P(t)\right\|} = \frac{c}{B_r\left\|P_\tau^\perp\left[\boldsymbol{\mu}_{TP}(t) + \boldsymbol{\mu}_{RP}(t)\right]\right\|} = \frac{c}{2B_r\cos\dfrac{\gamma}{2}\cos\vartheta} \tag{2.10}$$

where γ is the angle between the lines of sight from the transmitting station and the receiving station to the point P, which is called the
bistatic angle, and ϑ is the projection angle of the delay resolution direction onto the ground. It can be seen that the delay ground range resolution $\rho_\tau^G$ is mainly determined by the transmitted signal bandwidth Br and the bistatic angle γ: the larger the signal bandwidth, the better the resolution, while the larger the bistatic angle, the sparser the local distribution of equal delay lines and the worse the resolution.

$$\rho_t^G = \frac{1}{T_a\left\|P_t^\perp\nabla f_P(t)\right\|} = \frac{\lambda}{\left\|P_t^\perp\left[\theta_{TP}\,\Gamma_{TP}(t) + \theta_{RP}\,\Gamma_{RP}(t)\right]\right\|} = \frac{\lambda}{2\theta_P\cos\phi} \tag{2.11}$$
Here, θTP and θRP are the rotation angles of the lines of sight of the transmitting station and the receiving station relative to the point P within the synthetic aperture time, respectively; ϕ is the projection angle of the Doppler resolution direction onto the ground; and θP is the modulus of [θTPΓTP(t) + θRPΓRP(t)]/2, which is the bistatic SAR synthetic rotation angle. θP is a combination of the transmitter and receiver rotations, which may reinforce or cancel each other depending on the mode and geometric configuration. It can be seen from Eq. (2.11) that the Doppler ground range resolution $\rho_t^G$ is mainly determined by the synthetic rotation θP of the transmitting and receiving stations relative to the point P: the larger the rotation angle, the better the resolution. According to the delay and Doppler ground range resolution expressions (2.10) and (2.11), the directions of the delay and Doppler ground range resolutions are:

$$\Theta = \frac{\nabla\tau_P^G(t)}{\left\|\nabla\tau_P^G(t)\right\|} \tag{2.12}$$

$$\Xi = \frac{\nabla f_P^G(t)}{\left\|\nabla f_P^G(t)\right\|} \tag{2.13}$$

The angle between the delay and Doppler ground range resolution directions is

$$\alpha_D = \cos^{-1}\left(\Xi\cdot\Theta\right) \tag{2.14}$$
It can be seen from Eq. (2.14) that, owing to the flexibility and diversity of the transmit-receive geometric configuration and beam pointing, the delay ground range resolution and the Doppler ground range resolution of bistatic SAR are usually nonorthogonal. Therefore the spatial resolution of bistatic SAR cannot be fully characterized by the delay ground resolution and the
Doppler ground resolution alone; it is also necessary to consider the area of the resolution unit, that is, the area enclosed by the 3 dB contour of a scattering point response in the image domain. Its calculation formula is

$$S = \frac{\rho_\tau^G\,\rho_t^G}{\sin\alpha_D} \tag{2.15}$$
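As a numerical cross-check of Eqs. (2.8)-(2.15), the following Python sketch builds the delay and Doppler gradients from an assumed platform geometry, projects them onto the ground, and evaluates the two ground resolutions, their directions, and the resolution-cell area. All geometry and system values here are illustrative assumptions, not a worked example from the book.

```python
import numpy as np

c, lam = 3e8, 0.03            # propagation speed, wavelength (assumed X-band)
Br, Ta = 150e6, 1.0           # signal bandwidth [Hz], observation time [s] (assumed)

P  = np.array([0.0, 0.0, 0.0])           # scattering point on the ground
T  = np.array([-8e3, -6e3, 9e3])         # transmitter position (assumed)
R  = np.array([ 7e3, -5e3, 6e3])         # receiver position (assumed)
vT = np.array([0.0, 150.0, 0.0])         # platform velocities (assumed)
vR = np.array([0.0, 120.0, 0.0])

def unit(v):
    return v / np.linalg.norm(v)

mu_T, mu_R = unit(P - T), unit(P - R)    # line-of-sight unit vectors to P
grad_tau = (mu_T + mu_R) / c             # Eq. (2.8)

def los_rotation(v, los, rng):           # omega*Gamma: tangential velocity / range
    v_t = v - np.dot(v, los) * los
    return v_t / rng

grad_f = (los_rotation(vT, mu_T, np.linalg.norm(P - T)) +
          los_rotation(vR, mu_R, np.linalg.norm(P - R))) / lam   # Eq. (2.9)

Pg = np.diag([1.0, 1.0, 0.0])            # projection onto the ground plane
g_tau, g_f = Pg @ grad_tau, Pg @ grad_f

rho_tau = 1.0 / (Br * np.linalg.norm(g_tau))    # Eq. (2.10)
rho_dop = 1.0 / (Ta * np.linalg.norm(g_f))      # Eq. (2.11)
Theta, Xi = unit(g_tau), unit(g_f)              # Eqs. (2.12)-(2.13)
alpha_D = np.arccos(np.clip(np.dot(Theta, Xi), -1, 1))   # Eq. (2.14)
S = rho_tau * rho_dop / np.sin(alpha_D)         # Eq. (2.15)
print(rho_tau, rho_dop, np.degrees(alpha_D), S)
```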
(2) The nonorthogonality of spatial resolution

For monostatic SAR, the time delay resolution direction is generally orthogonal to the sidelobe extension direction of the Doppler resolution. Therefore, according to the spatial resolution performance metric given in Section 1.4.4, the delay resolution and the Doppler resolution can be obtained, respectively, as the 3 dB main lobe widths of the image-domain response of an isolated scattering point, taken along the sidelobe extension direction of the delay resolution and along the sidelobe extension direction of the Doppler resolution. However, for bistatic SAR these two sidelobe extension directions are usually not orthogonal, so the delay resolution and the Doppler resolution are no longer equal to the 3 dB main lobe widths of the profiles along their sidelobe extension directions. Therefore it is necessary to first analyze the sidelobe characteristics through the bistatic SAR generalized ambiguity function (GAF) and then derive the corresponding measurement method for the spatial resolution of bistatic SAR. Assume that A and B are the position vectors of two adjacent scattering points (x̃, ỹ) and (x, y); then the modulus of the bistatic SAR GAF is [3]

$$\left|\chi(A,B)\right| = \frac{\left|\displaystyle\iint h_A(t,\tau)\,h_B^{*}(t,\tau)\,dt\,d\tau\right|}{\sqrt{\displaystyle\iint\left|h_A(t,\tau)\right|^2 dt\,d\tau}\;\sqrt{\displaystyle\iint\left|h_B(t,\tau)\right|^2 dt\,d\tau}} \tag{2.16}$$

where hA(·) and hB(·) are the echo signals of points A and B, respectively. Eq. (2.16) is actually another form of Eq. (2.7): the GAF is the image-domain response I(x, y; x̃, ỹ) of an isolated scattering point of unit scattering cross section, when (x̃, ỹ) is taken as the coordinate origin. After approximate simplification [3], this formula can be rewritten as

$$\left|\chi(A,B)\right| = p(\tau_d)\,m_A(f_d) \tag{2.17}$$
Eq. (2.17) is an ambiguity function of the bistatic SAR in terms of echo delay and Doppler frequency. Here, τd is the delay difference between the signals of the two scattering points A and B at slow time t, and fd is the Doppler frequency difference between the two signals at slow time t; p(τd) is the normalized modulus of the time delay signal, and mA(fd) is the modulus of the inverse Fourier transform of the normalized transmitted and received signal power ratio. Performing a Taylor expansion of the expressions for τd and fd at B = A, we obtain:

$$\tau_d \approx \frac{\left[\Phi_{TA} + \Phi_{RA}\right]^{T}(B - A)}{c} \tag{2.18}$$

$$f_d \approx \frac{1}{\lambda}\left[\omega_{TA}\,\Gamma_T + \omega_{RA}\,\Gamma_R\right]^{T}(B - A) \tag{2.19}$$

where ΦTA and ΦRA are the unit vectors from the scattering point (x̃, ỹ) to the transmitting station and the receiving station, respectively; ωTA and ωRA are the rotational angular velocities of the transmitting and receiving stations relative to the scattering point (x̃, ỹ); and ΓT and ΓR are unit vectors along the rotation directions of the transmitting and receiving lines of sight, respectively. Based on these two formulas, Eq. (2.17) can be converted into

$$\left|\chi(A,B)\right| = p\!\left(\frac{2\cos(\gamma/2)\,\Theta^{T}(B-A)}{c}\right) m_A\!\left(\frac{2\omega_A\,\Xi^{T}(B-A)}{\lambda}\right) \tag{2.20}$$

where Θ is the unit vector along the bisector of the bistatic angle, and ωA is the modulus of (ωTAΓT + ωRAΓR)/2, which is the synthetic angular velocity. The 3 dB width of p(·) corresponds to the time delay ground resolution, and the 3 dB width of mA(·) corresponds to the Doppler ground resolution. Evaluating the bistatic SAR GAF of Eq. (2.20) for a bistatic angle γ = 43 degrees gives the results shown in Fig. 2.3. According to the physical meaning of the ambiguity function, a scattering point located on the centerline of the time delay GAF (the dotted line in Fig. 2.3D) has the same time delay as the reference point and thus cannot be resolved by the time delay difference, so the direction of the time delay resolution is perpendicular to this centerline.
Fig. 2.3 Generalized ambiguity function: (a) time delay GAF; (b) Doppler GAF; (c) two-dimensional GAF; (d) GAF ground contour map showing the delay and Doppler resolution directions and their sidelobe extension directions.
Since the centerline direction coincides with the sidelobe extension direction of the Doppler resolution, the time delay resolution direction is perpendicular to the sidelobe extension direction of the Doppler resolution. Similarly, the Doppler resolution direction is perpendicular to the sidelobe extension direction of the delay resolution. Given these characteristics of bistatic SAR two-dimensional resolution, the measurement of the time delay and Doppler resolutions must be reconsidered.
Fig. 2.4 Resolution projection diagram.

As shown in Fig. 2.4, assume that the scattering point B lies along the sidelobe extension direction of the delay resolution and that the modulus ρΩ of the vector (B − A) is the 3 dB main lobe width along that direction; then
$$\begin{cases}\Theta^{T}(B-A) = \rho_\Omega\cos\theta_\tau \\ \Xi^{T}(B-A) = 0\end{cases} \tag{2.21}$$
where θτ is the angle between (B − A) and the direction of the time delay resolution. From the ambiguity function (2.20) and the definition of resolution, ρΩ should satisfy

$$\left|\chi(A,B)\right| = p\!\left(\frac{2\rho_\Omega\cos\theta_\tau\cos(\gamma/2)}{c}\right)m_A(0) = \frac{1}{\sqrt{2}} \tag{2.22}$$

When the transmitted signal has a rectangular envelope, this equation can be solved as

$$\rho_\Omega = \frac{c}{2B_r\cos(\gamma/2)\cos\theta_\tau} \tag{2.23}$$

Comparing Eqs. (2.10) and (2.23), the relationship between ρΩ and the bistatic SAR delay ground resolution $\rho_\tau^G$ is

$$\rho_\tau^G = \rho_\Omega\cos\theta_\tau/\cos\vartheta \tag{2.24}$$
The preceding equation shows that the time delay ground range resolution can be obtained by projecting the 3 dB main lobe width
measured along the sidelobe extension direction of the delay resolution onto the direction of the delay resolution, and then projecting that onto the ground. Similarly, if the point target B in Fig. 2.4 is assumed to lie along the sidelobe extension direction of the Doppler resolution, and the modulus ρΨ of the vector (B − A) is the 3 dB main lobe width along that direction, then

$$\begin{cases}\Theta^{T}(B-A) = 0 \\ \Xi^{T}(B-A) = \rho_\Psi\cos\theta_t\end{cases} \tag{2.25}$$

where θt is the angle between (B − A) and the direction of the Doppler resolution. According to the ambiguity function and the definition of resolution, ρΨ should satisfy

$$\left|\chi(A,B)\right| = p(0)\,m_A\!\left(\frac{2\omega_A\rho_\Psi\cos\theta_t}{\lambda}\right) = \frac{1}{\sqrt{2}} \tag{2.26}$$
When the antenna pattern gain has no sidelobes, the preceding equation can be solved as

$$\rho_\Psi = \frac{\lambda}{2\theta_A\cos\theta_t} \tag{2.27}$$
where θA is the modulus of [θTAΓTA(t) + θRAΓRA(t)]/2, which represents the synthetic rotation angle of target A in bistatic SAR. Comparing Eqs. (2.11) and (2.27), the relationship between ρΨ and the bistatic SAR Doppler ground range resolution $\rho_t^G$ is

$$\rho_t^G = \rho_\Psi\cos\theta_t/\cos\phi \tag{2.28}$$
Therefore, by projecting the 3 dB main lobe width measured along the sidelobe extension direction of the Doppler resolution onto the direction of the Doppler resolution, and then projecting that onto the ground, the Doppler ground range resolution is obtained.

(3) The spatial-variant characteristics of spatial resolution

According to the previous analysis, the resolution of bistatic SAR is related to configuration parameters such as the bistatic angle γ and the transceiver rotation angle θA, and these parameters are closely related to the position of the scattering point. Therefore the resolution of bistatic SAR varies with spatial position on the ground; that is, it has a spatial-variant characteristic [6], as the following sketch and Fig. 2.5 illustrate.
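To visualize this spatial variance, the following Python sketch evaluates the delay ground resolution of Eq. (2.10) over a ground grid for one assumed transmitter/receiver placement. The positions, bandwidth, and grid are illustrative assumptions, not the configuration used to produce Fig. 2.5.

```python
import numpy as np

c, Br = 3e8, 150e6                        # assumed signal bandwidth
T = np.array([-30e3, -20e3, 10e3])        # transmitter position (assumed)
R = np.array([ 25e3, -20e3,  8e3])        # receiver position (assumed)

x = np.linspace(-40e3, 40e3, 201)
y = np.linspace(-40e3, 40e3, 201)
X, Y = np.meshgrid(x, y)
P = np.stack([X, Y, np.zeros_like(X)], axis=-1)   # ground points

def unit(v):                               # normalize along the last axis
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

mu = unit(P - T) + unit(P - R)             # mu_TP + mu_RP at every ground point
grad_tau_ground = mu[..., :2] / c          # ground projection of Eq. (2.8)
rho_tau = 1.0 / (Br * np.linalg.norm(grad_tau_ground, axis=-1))  # Eq. (2.10)

# rho_tau is now a map of the delay ground resolution; contouring it reproduces
# the kind of spatial-variant behavior shown in Fig. 2.5A.
print(rho_tau.min(), rho_tau.max())
```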
Fig. 2.5 Spatial resolution of a typical bistatic SAR configuration: (a) time delay ground range resolution; (b) Doppler ground range resolution; (c) direction of the time delay ground range resolution; (d) direction of the Doppler ground range resolution; (e) resolution unit area spatial-variant diagram; (f) region where the resolution unit area is less than 1 m².
Fig. 2.5A-D shows contour distribution maps and direction maps of the time delay ground range resolution and the Doppler ground range resolution for a typical bistatic SAR configuration. Fig. 2.5E shows the spatial variation of the resolution unit area, and Fig. 2.5F shows the geographic region within which the ground resolution unit area is less than 1 m².
It can be seen from Fig. 2.5 that the magnitude, direction, and resolution unit area of the bistatic SAR spatial resolution all have spatial-variant characteristics, which requires attention in system design and performance analysis.
2.2.2 Radiation resolution

The definition of the radiation resolution of bistatic SAR is the same as that of monostatic SAR: it describes the ability of the SAR system to distinguish differences in target scattering intensity. It directly affects the dynamic range of the image obtained by the SAR system (the richness of the gray levels) and the contrast of the image (the clear distinction between black and white). For SAR image understanding and target recognition, the radiation resolution and the spatial resolution are equally important, because they represent, respectively, the ability of the SAR image to distinguish target position (horizontal) and target scattering intensity (longitudinal). The following analysis shows that, in low signal-to-noise-ratio cases such as long-distance imaging, the radiation resolution depends on the radar system parameters, the slant range of the imaging area, and the surface scattering intensity.

(1) Radiation resolution formula

In Section 1.4.4, the definition of radiation resolution under high SNR conditions was given, and it was explained that the main factor affecting radiation resolution is speckle noise, which depends on the ratio of the surface scattering mean to its standard deviation. For long-distance imaging, however, the system noise contributed by the antenna, receiver thermal noise, and sampling quantization noise is much larger than the coherent speckle noise. In this case, the radiation resolution γn depends on the ratio of the ground scattering power to the system noise power in the image domain, defined as the SNR, as follows:

$$\gamma_n = 1 + \frac{1}{\sqrt{M}}\left(1 + \frac{1}{\mathrm{SNR}}\right) \tag{2.29}$$

where M is the multilook number, representing the number of independent observations of the same scene. The radiation resolution is usually expressed logarithmically as

$$\gamma_n(\mathrm{dB}) = 10\lg\left[1 + \frac{1}{\sqrt{M}}\left(1 + \frac{1}{\mathrm{SNR}}\right)\right] \tag{2.30}$$
Eq. (2.30) applies when all M looks are considered to have the same SNR. If the SNR of each look is different, the radiation resolution is calculated according to Eq. (2.31):

$$\gamma_n(\mathrm{dB}) = 10\lg\left[1 + \frac{\sqrt{\displaystyle\sum_{i=1}^{M}\left(1 + (\mathrm{SNR})_i\right)^2}}{\displaystyle\sum_{i=1}^{M}(\mathrm{SNR})_i}\right] \tag{2.31}$$
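A small numerical check of Eq. (2.31), and of the fact that it collapses to Eq. (2.30) when every look has the same SNR, can be written as follows; the look SNR values are arbitrary illustrative numbers.

```python
import numpy as np

def radiometric_resolution_db(snr_per_look):
    """Radiation resolution per Eq. (2.31) for per-look linear SNR values."""
    snr = np.asarray(snr_per_look, dtype=float)
    num = np.sqrt(np.sum((1.0 + snr) ** 2))
    return 10.0 * np.log10(1.0 + num / np.sum(snr))

def radiometric_resolution_equal_db(snr, M):
    """Radiation resolution per Eq. (2.30) for M looks of identical SNR."""
    return 10.0 * np.log10(1.0 + (1.0 + 1.0 / snr) / np.sqrt(M))

print(radiometric_resolution_db([10.0, 10.0, 10.0, 10.0]))   # Eq. (2.31)
print(radiometric_resolution_equal_db(10.0, 4))              # Eq. (2.30), same value
```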
It can be seen that the radiation resolution of the bistatic SAR image is related to the image SNR in addition to the multilook number: the higher the SNR, the better the radiation resolution.

(2) Bistatic SAR SNR equation

The SNR in Eqs. (2.29) to (2.31) can be estimated using the bistatic radar equation. Suppose that the peak transmit power of the transmitting station is Pt, the transmit antenna power gain is GT, the bistatic radar cross section of the target is σS, the receive antenna power gain is GR, and the operating wavelength is λ; then the scattered power collected by the receive antenna can be expressed as

$$P_r = \frac{P_t G_T G_R \lambda^2 \sigma_S}{(4\pi)^3 R_T^2 R_R^2} \tag{2.32}$$
In practical applications, the bistatic radar equation must also account for antenna pattern, loss, and atmospheric factors. To include these effects, a parameter Ls (Ls > 1) called the system loss factor is introduced, along with the antenna pattern factors FT and FR of the transmitting and receiving stations. With k the Boltzmann constant, T0 the receiving noise temperature, Bn the noise bandwidth, and kT0Bn the system noise power, the echo SNR equation is obtained as:

$$\mathrm{SNR} = \frac{P_t G_T G_R \lambda^2 \sigma_S F_T^2 F_R^2}{(4\pi)^3 k T_0 B_n L_s (R_T R_R)^2} \tag{2.33}$$
In the SAR imaging process, fast-time compression and slow-time compression improve the SNR by factors of Nτ and Nt, respectively. According to Eq. (2.15), the target scattering cross section σS can be expressed as $\sigma_S = \sigma^0\rho_\tau^G\rho_t^G/\sin\alpha_D$, where σ⁰ is the surface
normalized bistatic scattering coefficient corresponding to the particular spatial resolution unit. Thus the SNR equation of the bistatic SAR image, expressed in terms of peak power, is:

$$\mathrm{SNR} = \frac{P_t G_T G_R \lambda^2 \sigma^0 \rho_\tau^G \rho_t^G F_T^2 F_R^2 N_t N_\tau}{(4\pi)^3 (R_T R_R)^2 k T_0 B_n F_0 L_s \sin\alpha_D} \tag{2.34}$$
Here Nτ = BrTr, where Tr is the pulse width and Br is the signal bandwidth; Pt = Pav/(FrTr), where Pav is the average transmit power and Fr is the pulse repetition frequency; and F0 is the noise figure. In addition, Nt = BaTa, where Ta is the synthetic aperture time and Ba is the Doppler signal bandwidth, and Bn = Br. Therefore the SNR equation of the bistatic SAR image expressed in terms of average power is:

$$\mathrm{SNR} = \frac{P_{av} G_T G_R \lambda^2 \sigma^0 \rho_\tau^G \rho_t^G F_T^2 F_R^2 B_a T_a}{(4\pi)^3 (R_T R_R)^2 k T_0 F_0 L_s F_r \sin\alpha_D} \tag{2.35}$$
It can be seen that the SNR of the bistatic SAR image is inversely proportional to the square of the product of the transmitting and receiving distances. For a given SNR requirement, using a large transmit antenna gain, or a suitable receiving-station working mode, can reduce the required receive antenna gain, so that the antenna size of the receiving subsystem can be reduced and the platform adaptability of the airborne receiving station enhanced. Because $\rho_t^G$ and $\rho_\tau^G$ are correlated with the geometric configuration of the bistatic SAR, which is far more complicated than in the monostatic case, Eq. (2.35) can be further rewritten for the different bistatic SAR configurations. Eq. (2.35) is the SNR equation of the bistatic SAR image expressed in average power; it can be used to calculate the image SNR for a specific geometry and, on this basis, the system radiation resolution.

(3) Bistatic SAR scattering coefficient

The σ⁰ in Eq. (2.35) can be obtained by actual measurement or estimated from electromagnetic scattering theory. Except for man-made objects, bistatic SAR is usually employed to observe random rough surfaces such as the ground or the ocean. According to target electromagnetic scattering theory, σ⁰ can be written as [7]:

$$\sigma^0 = \lim_{R_R\to\infty}\frac{4\pi R_R^2}{A}\,\frac{E_s(\phi_R,\varphi_R)}{E_i(\phi_T,\varphi_T)} \tag{2.36}$$
where A is the area of the target and RR is the distance from the receiver to the target, while Es(·) and Ei(·) are the received power density at the receiving antenna and the incident power density at the target, respectively. The definitions of ϕT and ϕR are shown in Fig. 2.6; they represent the incident angle and the scattering angle, respectively, while φT and φR are the azimuth angles of the transmitter and receiver, whose difference is the relative azimuth angle ϕS. This indicates that σ⁰ is related to factors such as the system operating frequency, polarization mode, incident angle, scattering angle, permittivity of the ground materials, and roughness. In bistatic SAR, σ⁰ has a strong dependence on geometric parameters such as the bistatic angle γ and the plane deflection angle. The calculation of σ⁰ can therefore be attributed to the bistatic scattering modeling of a random rough surface in electromagnetic scattering theory. With the help of the integral equation method of electromagnetic scattering theory, a calculation method for σ⁰ can be given, which provides the necessary theoretical basis for system design, configuration design, and radiation resolution performance evaluation. The random rough surface height fluctuation in Fig. 2.6 is z = Z(x, y), and the statistics describing it include the height probability density function p(z), the root mean square height δz, the height correlation function G(l), the correlation coefficient ρ(l), and the correlation length L. The height probability density function p(z) represents the distribution of the height fluctuations, while the root mean square height δz describes the degree of deviation of
Fig. 2.6 Spatial geometry of radar bistatic electromagnetic scattering.
the random rough surface height fluctuation from its mean. Using one-dimensional discrete samples of the surface height, obtained for example with a needle-type roughness profiler, the root mean square height δz can be expressed as

$$\delta_z = \sqrt{E\left[Z^2(x)\right] - \left\{E\left[Z(x)\right]\right\}^2} \tag{2.37}$$

Since the rough surface height fluctuation function Z(x, y) is isotropic, it is sufficient to consider the height fluctuation Z(x) along one direction. The height correlation function G(l) measures the degree of association between the heights of any two points on the random rough surface and can be expressed as

$$G(l) = E\left[Z(x+l)\,Z(x)\right] \tag{2.38}$$
The correlation coefficient can be expressed as

$$\rho(l) = G(l)/\delta_z^2 \tag{2.39}$$
The correlation length L is the value of l at which ρ(l) falls to e⁻¹; in applications, L is used as a criterion for judging whether two points on the rough surface are mutually independent. The dielectric property of a random rough surface is represented by the complex permittivity εr. When the medium is isotropic, the complex permittivity can be expressed as

$$\varepsilon_r = \varepsilon' - j\varepsilon'' \tag{2.40}$$
where ε′ is the relative permittivity, which represents the dispersion characteristics of the medium, and ε″ is related to the attenuation of the incident electromagnetic wave in the medium. For most natural surfaces, the imaginary part of the complex permittivity is much smaller than the real part. For a specific polarization, based on the integral equation model, the expression of the bistatic scattering coefficient σ⁰ of a random rough surface can be obtained as follows:

$$\sigma^0 = S(\phi_T,\phi_R)\,\frac{k^2}{2}\exp\!\left[-\delta_z^2\left(k_z^2 + k_{sz}^2\right)\right]\sum_{n=1}^{\infty}\delta_z^{2n}\,\frac{\left|I_{pp}^{n}\right|^2}{n!}\,W^{(n)}\!\left(k_{sx}-k_x,\,k_{sy}-k_y\right) \tag{2.41}$$

where k is the wavenumber at the medium surface; kx, ky, and kz are the wavenumber components of the incident direction; ksx, ksy,
and ksz are the wavenumber components of the scattering direction; W(n) is the Fourier transform of the nth power of the surface correlation function; S(ϕT, ϕR) is the bistatic shadowing function, which reflects the effect of rough-surface shadowing on the scattering; and I_pp^n is the polarization scattering term, whose specific form can be found in [7]. The integral equation model starts from the integral equations satisfied by the electric and magnetic fields and takes multiple scattering and shadowing effects into account. It can be used to calculate the scattering coefficient of surfaces of different roughness and provides a mathematical tool for studying the scattering characteristics of random rough surfaces such as the ground and the ocean. The calculation results of the random rough surface bistatic scattering coefficient σ⁰ for several typical parameter sets are given below; they can be used as typical values of σ⁰ when estimating bistatic SAR performance in practical applications. The calculation parameters are listed in Table 2.1; in each calculation, the relative azimuth angle ϕS varies continuously, and the results are shown in Fig. 2.7A-D. From these results, the following conclusions can be drawn. VV polarization is less sensitive to the incident angle ϕT and the scattering angle ϕR than HH polarization. For HH polarization, appropriately increasing the incident angle ϕT or reducing the scattering angle ϕR increases σ⁰ and therefore yields a larger SNR and better radiation resolution. For both HH and VV polarization, the relative azimuth angle ϕS should avoid the notch regions visible in Fig. 2.7, where σ⁰ becomes too small to achieve a high SNR and good radiation resolution.

Table 2.1 Calculation parameters of random rough surface bistatic σ⁰.
No.   f (GHz)   δz (cm)   L (cm)   εr   ϕT (degrees)   ϕR (degrees)
1     9.65      1.002     21.34    7    Variable       70
2     9.65      1.002     21.34    7    70             Variable
Fig. 2.7 Calculation of bistatic SAR random rough surface σ⁰.
In this section, the factors affecting the SNR and the scattering coefficient, which together determine the radiation resolution of bistatic SAR, have been analyzed, and the corresponding calculation formulas have been given. The radiation resolution of a particular bistatic SAR system in a particular operating mode can be estimated from Eqs. (2.31), (2.35), and (2.41) once the polarization mode, geometric configuration, and other parameters are known.
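As an illustration of how Eqs. (2.35) and (2.30) chain together, the following Python sketch evaluates the average-power SNR equation for one set of assumed system parameters and then converts the result into a radiation resolution. Every numerical value here is an arbitrary assumption for demonstration, not a worked example from the book.

```python
import numpy as np

k_B = 1.380649e-23                            # Boltzmann constant [J/K]

# Assumed system and geometry parameters (illustrative only)
P_av, G_T, G_R = 200.0, 10**3.0, 10**2.5      # average power [W], antenna gains (linear)
lam, sigma0 = 0.03, 10**(-12/10)              # wavelength [m], sigma0 of -12 dB
rho_tau, rho_t, alpha_D = 1.0, 1.6, np.deg2rad(90.0)
F_T = F_R = 1.0                               # antenna pattern factors
B_a, T_a, F_r = 200.0, 1.2, 1500.0            # Doppler bandwidth, aperture time, PRF
R_T, R_R = 60e3, 35e3                         # transmit / receive slant ranges [m]
T0, F0, L_s = 290.0, 10**(3/10), 10**(4/10)   # noise temperature, noise figure, losses

# Eq. (2.35): image SNR expressed with average power
snr = (P_av * G_T * G_R * lam**2 * sigma0 * rho_tau * rho_t *
       F_T**2 * F_R**2 * B_a * T_a) / (
       (4*np.pi)**3 * (R_T*R_R)**2 * k_B * T0 * F0 * L_s * F_r * np.sin(alpha_D))

# Eq. (2.30): radiation resolution for M equal-SNR looks
M = 4
gamma_n_db = 10*np.log10(1 + (1 + 1/snr)/np.sqrt(M))
print(10*np.log10(snr), gamma_n_db)
```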
2.3 Configuration design

Configuration design is a new problem arising in bistatic SAR because the transmitting and receiving stations are separated, the spatial degrees of freedom are high, and the imaging performance depends on the geometric relationship among the transmitter, the receiver, and the imaged area. It has important guiding significance for mission planning and image processing in real-time applications of bistatic SAR. From the analysis of the resolution characteristics of bistatic SAR, it can be seen that the spatial and radiation resolution performance is related not only to system parameters such as transmit power,
signal bandwidth, and antenna gain, but also to geometric parameters such as the relative positions and velocities of the transceiver stations and the target. Therefore, how to design the imaging geometry of bistatic SAR so as to achieve the expected image performance metrics, such as spatial resolution and radiation resolution, is a problem that must be solved in practical applications.
2.3.1 Configuration design principles

Configuration design refers to determining the system mode parameters, the transceiver geometry, and the platform velocity vectors so that the bistatic SAR achieves predetermined image performance indexes such as spatial resolution, radiation resolution, and mapping swath width, while satisfying application constraints such as the bistatic angle [8]. As shown by the previous analysis, the spatial and radiation resolution of bistatic SAR are closely related to the geometric configuration and relative motion parameters of the transceiver stations and the target area. To obtain the expected imaging performance, the corresponding geometry and platform motion parameters are required, so these parameters must be designed rationally. Because there are many related parameters with certain correlations among them, a minimum parameter set needs to be selected to represent all of them in the configuration and velocity design. The selection of this parameter set should generally meet three conditions: ① it uniquely determines the system configuration; ② its elements are mutually independent; ③ its size is as small as possible. In the bistatic SAR spatial geometry of Fig. 2.8, T and R represent the transmitting station and the receiving station, respectively, and their motion velocity vectors are VT and VR.
Fig. 2.8 Spatial geometric relationship of bistatic SAR.
Their slant distances to the origin of the rectangular coordinate system are RT(t) and RR(t), respectively. γG is the ground projection of the bistatic angle, and the definition of ϕS is shown in Fig. 2.6. ϕT and ϕR are the incident angle and the scattering angle, respectively, and α is the angle between the flight directions of the two platforms. Using this parameter set, the expressions for the spatial resolution and radiation resolution of bistatic SAR, such as γn, $\rho_\tau^G$, $\rho_t^G$, αD, and SNR, can be transformed into nonlinear functions of the main geometric parameters γG, ϕR, ϕT, and α [9]:

$$\begin{cases}\rho_\tau^G = F_1\{\gamma_G,\phi_R,\phi_T\} \\ \rho_t^G = F_2\{\gamma_G,\phi_R,\phi_T,\alpha\} \\ \alpha_D = F_3\{\gamma_G,\phi_R,\phi_T,\alpha\} \\ \mathrm{SNR} = F_4\{\gamma_G,\phi_R,\phi_T,\alpha\}\end{cases} \tag{2.42}$$

where the specific form of each function Fi(·) can be obtained from the spatial resolution and image SNR formulas. In practical applications, the design of configuration and motion parameters is usually the process of solving for the remaining parameters given several known parameters and the imaging performance requirements. For example, suppose the expected imaging performance comprises the delay ground resolution $\rho_\tau^G$, the Doppler ground resolution $\rho_t^G$, the angle between the two resolution directions, and the SNR, while the slant distance, incident angle, and velocity of the receiving station are given. Then the spatial geometry parameters to be designed are only γG, ϕT, and α, and the problem can be transformed into the following system of multivariate nonlinear equations:

$$\begin{bmatrix} f_1(x)\\ f_2(x)\\ f_3(x)\\ f_4(x)\end{bmatrix} = \begin{bmatrix} F_1(\phi_T,\gamma_G) - \rho_\tau^G\\ F_2(\phi_T,\gamma_G,\alpha) - \rho_t^G\\ F_3(\phi_T,\gamma_G,\alpha) - \alpha_D\\ F_4(\phi_T,\gamma_G,\alpha) - \mathrm{SNR}\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix} \tag{2.43}$$

where x = (ϕT, γG, α). The solution of this system of multivariate nonlinear equations is the bistatic spatial geometry that matches the imaging indexes.
2.3.2 Configuration design method

As mentioned earlier, the configuration design amounts to finding solutions of the preceding multivariate nonlinear equation system (NES). However, the number of solutions of this NES is not unique. Mathematically, such problems can
be solved by numerical calculation, and optimization is a better option [8]. To this end, the preceding nonlinear equations are transformed into the following constrained multiobjective optimization problem:

$$\begin{cases}\arg\min_{x} F_1(x) = \displaystyle\sum_{i=1}^{4}\left|f_i(x)\right| \\[6pt] \arg\min_{x} F_2(x) = \max\left(\left|f_1(x)\right|,\cdots,\left|f_4(x)\right|\right)\end{cases},\quad \text{s.t.}\;\begin{cases}\phi_T\in D_{\phi_T}\\ \gamma_G\in D_{\gamma_G}\\ \alpha\in D_{\alpha}\end{cases} \tag{2.44}$$
where DϕT, DγG, and Dα are the value ranges of the parameters ϕT, γG, and α, respectively. By solving this optimization problem, an optimal approximation of the configuration parameters that meet the specified imaging performance can be obtained; the multiobjective genetic algorithm NSGA-II [10] can be adopted for this purpose. On this basis, the flowchart of the configuration design method is obtained, as shown in Fig. 2.9, where Gen is the iteration counter of the multiobjective genetic algorithm and G is the maximum number of iterations. An example illustrates the effectiveness of this configuration design method (see also the sketch below). Assume that the required ground delay resolution is ρτG = 1 m, the azimuth resolution is ρtG = 1.6 m, the angle between the resolution directions is αD = 90 degrees, and the image SNR is 10 dB; the other simulation parameters are shown in Table 2.2. According to the preceding configuration design method, the bistatic spatial geometric configuration parameters that meet these requirements can be obtained.
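The book solves Eq. (2.44) with the multiobjective genetic algorithm NSGA-II. As a much rougher stand-in that still illustrates the structure of the problem, the following Python sketch scalarizes the residuals of Eq. (2.43) and runs a random search inside the constraint boxes. The mapping functions F1-F4 are hypothetical placeholders here, since their closed forms depend on the full geometry of Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder mappings (phi_T, gamma_G, alpha) -> performance.
# In a real design these come from the resolution and SNR formulas of Section 2.2.
def F(x):
    phi_T, gamma_G, alpha = x
    rho_tau = 1.0 / (np.abs(2*np.cos(gamma_G/2)*np.cos(phi_T)) + 1e-6)  # stand-in for F1
    rho_t   = 1.6 / np.sin(alpha)                                       # stand-in for F2
    alpha_D = np.degrees(alpha)                                         # stand-in for F3
    snr_db  = 25.0 - 10.0*np.log10(rho_tau*rho_t)                       # stand-in for F4
    return np.array([rho_tau, rho_t, alpha_D, snr_db])

target = np.array([1.0, 1.6, 90.0, 10.0])    # rho_tau, rho_t, alpha_D [deg], SNR [dB]
bounds = np.array([[np.deg2rad(30), np.deg2rad(60)],    # D_phiT
                   [np.deg2rad(60), np.deg2rad(250)],   # D_gammaG
                   [np.deg2rad(45), np.deg2rad(135)]])  # D_alpha

best, best_cost = None, np.inf
for _ in range(20000):                        # crude random search over the box
    x = bounds[:, 0] + (bounds[:, 1] - bounds[:, 0]) * rng.random(3)
    f = F(x) - target                         # residual vector of Eq. (2.43)
    cost = np.sum(np.abs(f)) + np.max(np.abs(f))   # blend of both objectives in Eq. (2.44)
    if cost < best_cost:
        best, best_cost = x, cost
print(np.degrees(best), best_cost)
```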
Fig. 2.9 Configuration design method flowchart (construct the imaging performance function relationships; model them as nonlinear equations; transform to a multiobjective optimization; initialize the parent population; selection, crossover, and mutation; nondominated sorting and crowding calculation; iterate Gen = Gen + 1 until Gen > G; realize the given imaging indexes).
Table 2.2 Simulation parameters.
Table 2.3 gives six sets of geometric configuration parameters and their corresponding imaging performance indexes. It can be seen that all six sets of solutions achieve the predetermined imaging performance. To further verify the validity of these solutions, bistatic SAR echoes were generated separately for the first three spatial geometric configurations, and the time-domain back-projection algorithm was used for imaging; the results are shown in Fig. 2.10. It can be seen from the figure that the bistatic SAR spatial geometries designed by this method achieve the desired spatial resolution. In practical applications, one of the six solutions can be selected according to the allowed flight airspace and the convenience of the flight.
Table 2.3 Geometric configuration parameters and imaging performance (columns 2-4: configuration design parameters; columns 5-8: imaging performance).

Design  Incident angle of      Bistatic angle  Angle of velocity  Delay           Doppler         Angle of              SNR
        transmitter (degrees)  (degrees)       (degrees)          resolution (m)  resolution (m)  resolution (degrees)  (dB)
1       46.3758                115.0891        59.9547            1.0283          1.5912          89.9385               10.3256
2       45.4026                245.3092        59.9947            1.0272          1.5890          90.2252               10.5720
3       45.1193                114.0097        59.8747            1.0285          1.5951          90.0966               10.0414
4       46.7136                244.8724        59.9645            1.0268          1.6137          89.9893               10.1218
5       46.6455                244.8724        59.9478            1.0264          1.6089          90.0031               10.1708
6       45.7310                114.9524        59.9547            1.0272          1.5890          90.0052               10.7652
Fig. 2.10 Scattering point responses in the image domain of several configurations: (a) Configuration 1; (b) Configuration 2; (c) Configuration 3 (measured values ρr ≈ 1.03 m, ρa ≈ 1.60 m, and αD ≈ 90-90.4 degrees).
2.4 Echo model

The purpose of studying the bistatic SAR echo model is to obtain a practical and accurate mathematical expression for the scattering point echo h(t, τ; x, y) in Eq. (2.4), in the fast/slow-time two-dimensional time domain and in its transform domain. This helps in analyzing and mastering the particular delay migration and Doppler variation laws of the bistatic SAR echo, as well as its two-dimensional space variance and coupling, and lays a theoretical basis for building efficient, high-precision imaging algorithms. This section first presents the mathematical models of the mean slant range history and the Doppler history of bistatic SAR and analyzes their characteristics. Then, the analytical expressions of the bistatic SAR scattering point echo in the two-dimensional time domain and two-dimensional frequency domain are derived; these are the echo models in the time domain and frequency domain. The two-dimensional spectral model of the scattering point echo, which is the generalized Loffeld's bistatic formula (LBF), benefits the construction of fast frequency-domain imaging algorithms for bistatic SAR.
2.4.1 Slant range history

In bistatic SAR, from transmission to reception, the electromagnetic wave goes through two propagation stages: from the transmitting station
to the target, and then from the target to the receiving station. The lengths of the two propagation paths are called the transmitting slant range and the receiving slant range; their average is called the mean slant range, and the changing process of the mean slant range is called the mean slant range history. The mean slant range has a decisive influence on the echo delay, so understanding the course and laws of the mean slant range and its influencing factors plays an important role in mastering the variation law of the echo delay. As shown in Fig. 2.11, assume that the transmitting station T and the receiving station R move linearly along their own routes at speeds vT and vR, respectively, that the track-crosscut (closest approach) distances from the scattering point P with coordinates (x, y) to the transmitting and receiving tracks are rT and rR, and that the times at which the transmitting and receiving stations reach the track crosscut points are tT and tR, respectively. Then the transmitting slant range rT(t) and the receiving slant range rR(t) at time t can be expressed as:

$$r_T(t) = \sqrt{r_T^2 + v_T^2(t - t_T)^2},\qquad r_R(t) = \sqrt{r_R^2 + v_R^2(t - t_R)^2} \tag{2.45}$$

In the slant range-slow time plane of Fig. 2.12, each of these takes the form of a hyperbola, called the U-shaped line, whose opening width is determined by the platform velocity and whose apex corresponds to the track crosscut distance and the crosscut arrival time. The mean slant range rP(t) of the scattering point P can be expressed as the sum of two radicals:

$$r_P(t) = \left[\sqrt{r_T^2 + v_T^2(t - t_T)^2} + \sqrt{r_R^2 + v_R^2(t - t_R)^2}\right]\!/2 \tag{2.46}$$
Fig. 2.11 Mean slant distance of the scattering point and the transceiver station.
Fig. 2.12 Variation law of the target point’s mean slant range.
Therefore, in the slant range-slow time plane, rP(t) no longer has the hyperbolic form of monostatic SAR; it is a flat-bottomed hyperbola, called the V-shaped domain, formed as the mean of two hyperbolas with different vertex positions and opening widths. Its variation law is determined by six parameters: the velocities, track crosscut distances, and crosscut arrival times of the two platforms. Accordingly, the echo delay τP(t) of the target point P can be expressed as:

$$\tau_P(t) = \left[\sqrt{r_T^2 + v_T^2(t - t_T)^2} + \sqrt{r_R^2 + v_R^2(t - t_R)^2}\right]\!/c \tag{2.47}$$
In the slow-time and fast-time plane, τP(t) also appears as the flat-bottom hyperbolic form. Usually, this curve is called the delay migration trajectory, which has a delay migration law that is significantly different from monostatic SAR. However, in the actual echo recording process, due to the antenna beam modulation effect, only a small segment of the flat bottom hyperbola corresponding to τP(t) can be observed by radar. The position of the visible curve segment on the entire flat bottom hyperbola is determined by t0, which is the mean of the starting and ending time of the transmitting and receiving beams collectively illuminating the P point, while the time span of the curve segment is determined by the duration TA (i.e., the synthetic aperture time) at which the transmitting and receiving beams collectively illuminate the P point. Therefore, on the fast- and slow-time plane, the trajectory
produced by the position of each pulse echo envelope center on the fast-time axis, which changes with slow time, appears slanted and bent. Performing a Taylor series expansion of Eq. (2.47) at time t0 gives:

$$\tau_P(t) = \tau_0 + k_t(t - t_0) + \mu_t(t - t_0)^2 + \cdots \tag{2.48}$$

where

$$\tau_0 = \tau_P(t_0) = \left[\sqrt{r_T^2 + v_T^2(t_0 - t_T)^2} + \sqrt{r_R^2 + v_R^2(t_0 - t_R)^2}\right]\!/c \tag{2.49}$$

$$k_t = \frac{v_T^2(t_0 - t_T)}{c\sqrt{r_T^2 + v_T^2(t_0 - t_T)^2}} + \frac{v_R^2(t_0 - t_R)}{c\sqrt{r_R^2 + v_R^2(t_0 - t_R)^2}} \tag{2.50}$$

$$\mu_t = \frac{v_T^2 r_T^2}{2c\left[r_T^2 + v_T^2(t_0 - t_T)^2\right]^{3/2}} + \frac{v_R^2 r_R^2}{2c\left[r_R^2 + v_R^2(t_0 - t_R)^2\right]^{3/2}} \tag{2.51}$$
are the mean delay, the ambulation slope, and the curvature of the delay migration trajectory, respectively. The inclined part of the delay migration trajectory corresponds to the first-order term, which is called delay ambulation. The bending corresponds to the quadratic and higher-order terms; generally, the migration corresponding to the quadratic term is called delay bending, which means the influence of higher-order terms is ignored. Whether the higher-order terms can indeed be ignored depends on the specific working mode. It can be seen from Eq. (2.50) that when the angle between the velocity vectors of the transmitting and receiving platforms is π/2 and t0 lies between tT and tR, the ambulation slope is usually small, corresponding to the joint squint-back and squint-front observation mode. When t0 is larger than both tT and tR, the ambulation slope is positive, corresponding to the squint-back observation mode; when t0 is smaller than both tT and tR, the ambulation slope is negative, corresponding to the squint-front observation mode. Moreover, the greater the differences between t0 and tT and between t0 and tR, the larger the ambulation slope and the larger the corresponding squint angle of observation; the limit of the ambulation slope is (vT + vR)/c, corresponding to directly forward- or backward-looking observation. When t0 = tT = tR, the ambulation slope is 0, corresponding to the side-looking mode.
Fig. 2.13 Schematic diagram of the ambulation and bending of the delay migration (the trajectory τP(t) around t0, with ambulation wP(t0) and bending CP(t0) over the synthetic aperture time TA).
As shown in Fig. 2.13, let wP(t0) and CP(t0) be the ambulation and the bending of the delay migration within the synthetic aperture time. Their expressions can be derived from Eqs. (2.50) and (2.51) as:

$$w_P(t_0) = k_t T_A = \frac{v_T^2(t_0 - t_T)T_A}{c\sqrt{r_T^2 + v_T^2(t_0 - t_T)^2}} + \frac{v_R^2(t_0 - t_R)T_A}{c\sqrt{r_R^2 + v_R^2(t_0 - t_R)^2}} \tag{2.52}$$

$$C_P(t_0) = \frac{1}{4}\mu_t T_A^2 = \frac{v_T^2 r_T^2 T_A^2}{8c\left[r_T^2 + v_T^2(t_0 - t_T)^2\right]^{3/2}} + \frac{v_R^2 r_R^2 T_A^2}{8c\left[r_R^2 + v_R^2(t_0 - t_R)^2\right]^{3/2}} \tag{2.53}$$

If the delay migration exceeds the reciprocal of the transmitted signal bandwidth Br, that is, the delay resolution unit 1/Br, then when constructing a frequency-domain imaging algorithm it is necessary to equalize the delay ambulation according to the ambulation slope and to straighten the delay bending according to the degree of bending (i.e., migration correction and bending correction), in order to implement dimension-reduction processing and improve algorithm efficiency. Generally, for the squint-front or squint-back modes, ambulation correction is a necessary operation, while the necessity of bending correction has to be evaluated according to the actual conditions.
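The delay history and its ambulation and bending terms are easy to evaluate numerically. The following Python sketch computes τP(t) from Eq. (2.47) for one assumed bistatic geometry, compares it with the second-order expansion of Eqs. (2.48)-(2.51), and reports the ambulation and bending of Eqs. (2.52)-(2.53). All parameter values are illustrative assumptions.

```python
import numpy as np

c = 3e8
rT, rR = 20e3, 12e3          # track-crosscut distances [m] (assumed)
vT, vR = 150.0, 120.0        # platform speeds [m/s] (assumed)
tT, tR = -0.6, 0.4           # crosscut arrival times [s] (assumed)
t0, TA = 0.1, 1.5            # aperture center time and synthetic aperture time [s]

def tau_P(t):                # Eq. (2.47): bistatic echo delay
    return (np.sqrt(rT**2 + vT**2*(t - tT)**2) +
            np.sqrt(rR**2 + vR**2*(t - tR)**2)) / c

# Taylor coefficients of Eqs. (2.49)-(2.51)
tau0 = tau_P(t0)
kt = (vT**2*(t0 - tT)/(c*np.sqrt(rT**2 + vT**2*(t0 - tT)**2)) +
      vR**2*(t0 - tR)/(c*np.sqrt(rR**2 + vR**2*(t0 - tR)**2)))
mut = (vT**2*rT**2/(2*c*(rT**2 + vT**2*(t0 - tT)**2)**1.5) +
       vR**2*rR**2/(2*c*(rR**2 + vR**2*(t0 - tR)**2)**1.5))

t = np.linspace(t0 - TA/2, t0 + TA/2, 1001)
approx = tau0 + kt*(t - t0) + mut*(t - t0)**2        # Eq. (2.48), truncated
print("max model error [s]:", np.max(np.abs(tau_P(t) - approx)))

wP = kt*TA                   # Eq. (2.52): delay ambulation over the aperture
CP = 0.25*mut*TA**2          # Eq. (2.53): delay bending over the aperture
Br = 150e6
print("ambulation / (1/Br):", wP*Br, " bending / (1/Br):", CP*Br)
```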
3=2 3=2 4 8c rT2 + vT2 ðt0 tT Þ2 8c rR2 + vR2 ðt0 tR Þ2 (2.53) If the delay migration exceeds the reciprocal of the transmitted signal bandwidth BR, which is the delay resolution unit scale 1/BR, when constructing the frequency domain imaging processing algorithm, it is necessary to equalize the delay ambulation according to the ambulation slope and perform the straightening operation on the delay bending (i.e., migration correction and bending correction) according to the degree of bending to implement dimension reducing processing and improve algorithm efficiency. Generally, for squint front or squint back mode, ambulation correction is a necessary operation, since the necessity of bending correction needs to be evaluated according to actual conditions.
2.4.2 Doppler history The change of the mean slant range value not only causes echo delay variation but also causes the echo phase to change, thereby forming a Doppler signal that changes with time. The Doppler frequency of the echo and its
118
Bistatic synthetic aperture radar
time variation reflect the laws and parameters of the echo phase varying with slow time. Therefore understanding the variation history and the influence factors of the echo instantaneous frequency plays an important role in mastering the law of echo variation and constructing the normalized echo for two-dimensional correlation operations. According to physics principles, if the radar operating wavelength is λ, the Doppler frequency of the echo fP(t) ¼ 2rp0 (t)/λ. Thus, according to Eq. (2.46), we have: 1 1 fP ðt Þ ¼ fT ðt Þ + fR ðtÞ 2 2
(2.54)
where 2vT2 ðt tT Þ 2vR2 ðt tR Þ ffi fT ðt Þ ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ; fR ðtÞ ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 2 2 λ rT2 + vT2 ðt tT Þ2 λ rR + vR ðt tR Þ
(2.55)
It can be seen from Eqs. (2.54), and (2.55) that in bistatic SAR, the Doppler frequency of the echo is jointly generated by the transmission Doppler frequency fT(t) and the receiving Doppler frequency fR(t), and is the average of them. For different bistatic SAR configurations, the contribution of the transceiver platform to the Doppler and Doppler change rate of the echo may vary greatly. This phenomenon is called the asymmetric Doppler contribution. According to Fig. 2.14 and Eq. (2.54), since the platform moves from far to near, and then from near to far relative to the point P,
Fig. 2.14 The law of echo Doppler frequency variation.
Bistatic SAR imaging theory
119
fT(t) will vary from +2vT/λ to 2vT/λ, and fR(t) will vary from +2vR/λ to 2vR/λ. The Doppler frequency is 0 at the time of tT and tR; meanwhile, the Doppler change rate is 2v2T/λrT and 2v2R/λrR. And the echo-Doppler change curve fP(t) is the average mean of the curves fT(t) and fR(t). Therefore, the velocity of both the platforms, the track crosscut, the track crosscut arrival time, and other interrelationships have a decisive influence on the variation history of the Doppler frequency of the echo. Besides, since P is collectively illuminated by the transmitting and receiving beams for a finite duration, the observable width of the curve fP(t) is the synthetic aperture time TA, and the center position of the observable curve path t0 is the median of the starting and ending time of the common illumination of the transmitting and receiving beams. Therefore fP(t) is usually close to the linear frequency modulated signal. Perform a Taylor series expansion on Eq. (2.54) at point t0; it can then be obtained that: 1 fP ðtÞ ¼ fP0 + μA t2 + kt 3 + … 2
(2.56)
vT2 tT vR2 tR ffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi + fP0 ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi λ rT2 + vT2 tT2 λ rR2 + vR2 tR2
(2.57)
where:
μA ¼ k¼
vT2 rT2 3=2
λðrT2 + vT2 tT2 Þ 3vT4 rT2 tT
5
2λðrT2 + vT2 tT2 Þ2
+ +
vR2 rR2 λðrR2 + vR2 tR2 Þ
3=2
(2.58)
5
(2.59)
3vR4 rR2 tR 2λðrR2 + vR2 tR2 Þ2
are the Doppler centroid, Doppler frequency, and the third-order Doppler term at time t0, respectively. It can be seen from this equation and Fig. 2.14 that when t0 is equivalent to tT and tR, the radar is working at the squint front with squint back mode, under a fixed TA, the Doppler signal has small Doppler centroid and high Doppler frequency, and large Doppler signal bandwidth and Doppler resolution. However, when t0 is much larger or smaller than tT and tR, corresponding to the squint back and squint front mode, respectively, there is a large Doppler centroid absolute value and a low Doppler frequency absolute value. Under the situation in which TA is certain, the Doppler bandwidth and Doppler resolution are small. In order to achieve the same Doppler resolution, the squint front and squint back modes require a longer synthetic aperture time TA.
120
Bistatic synthetic aperture radar
For the mixed observation modes where t0 is between tT and tR, the slope of the Doppler curve is large, and the synthetic aperture time required to achieve the expected Doppler resolution and bandwidth is short, so the curve segment fP(t) can generally be regarded to be linear so that the Doppler signal can be modeled as a linear frequency modulated signal, which means that the effects of second-order frequency modulation or third-order phase modulation are ignored. For the squint mode with t0 larger or smaller than tT and tR, since the slope of the curve segment fP(t) is small, the synthetic aperture time required to achieve the same Doppler resolution and bandwidth is longer, so the curvature of fP(t) will be significant. Therefore, it is generally necessary to consider the effects of second-order frequency modulation or third-order phase modulation. This analysis is mainly for the conditions with several meters of resolution in the ground distance. From a more accurate point of view, whether the third-order phase modulation needs to be considered in the imaging process should be quantitatively evaluated according to the following formula. 3 kT π=4 A
(2.60)
whose physical meaning is that during the synthetic aperture time, the phase change caused by the third-order phase modulation term does not significantly affect the in-phase stacking of the Doppler signal during slow time compression. Note that equations from (2.45) to (2.59) are related to the scattering point P. As the coordinate (x, y) of the point P changes, the calculation results of these formulas will also change correspondingly, which is referred to as space-variant characteristics. So strictly speaking, x and y should be adopted as parametric variables. According to the two-dimensional correlation imaging method introduced in Section 2.1.2, the synthetic aperture radar determines the position coordinates of the scattering point based on the positions of the migration trajectory of the corresponding echo in the fast-slow time domain, and implement the echo energy aggregation and scattering intensity estimation according to the baseband waveform of the transmitted signal, the shape of the delay migration trajectory, and the Doppler frequency modulation parameters, thus forming the radar image. Therefore it can be considered that the variation of the slant range mean value caused by the movement of the platform results in two consequences, the delay migration and the
Doppler frequency modulation. Although the former can be used to determine the position coordinates of the scattering point, it does not benefit highly efficient imaging processing, so when constructing the imaging algorithm we must also focus on eliminating its adverse effects. The latter is an important basis and a favorable factor for slow-time echo energy aggregation and scattering intensity estimation.
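The criterion of Eq. (2.60) is easy to evaluate numerically. The sketch below (all geometry and timing values are illustrative assumptions, not values from this book) computes the slow-time phase history of a bistatic pair directly from the two slant-range square roots, fits a cubic phase model around the beam-center time, and checks the resulting third-order phase over the half-aperture against the π/4 bound; because the exact bookkeeping of the factor in Eq. (2.60) depends on how k is defined, the check is done directly on the fitted phase.

```python
import numpy as np

# Numerical sketch: evaluate the slow-time phase history of a bistatic pair,
# fit a cubic phase model around the beam-center time t0, and check the
# third-order phase against the pi/4 bound, in the spirit of Eq. (2.60).
# All geometry and timing values are illustrative assumptions.
wavelength = 0.03                      # carrier wavelength (m), X-band assumed
r_T, r_R = 12e3, 9e3                   # track shortcuts of the two stations (m)
v_T, v_R = 120.0, 120.0                # platform speeds (m/s)
t_T, t_R = 0.0, 2.0                    # times of closest approach (s)
t0, T_A = 4.0, 1.0                     # beam-center time and synthetic aperture time (s)

t = np.linspace(t0 - T_A / 2, t0 + T_A / 2, 2001)
r_mean = 0.5 * (np.sqrt(r_T**2 + v_T**2 * (t - t_T)**2)
                + np.sqrt(r_R**2 + v_R**2 * (t - t_R)**2))
phase = -4.0 * np.pi * r_mean / wavelength       # slow-time phase history (rad)

# cubic fit: phase ~ a0 + a1*(t-t0) + a2*(t-t0)^2 + a3*(t-t0)^3
a3, a2, a1, a0 = np.polyfit(t - t0, phase, 3)
doppler_centroid = a1 / (2 * np.pi)              # Hz
doppler_rate = a2 / np.pi                        # Hz/s
cubic_phase = abs(a3) * (T_A / 2) ** 3           # third-order phase over the half-aperture

print(f"Doppler centroid : {doppler_centroid:10.2f} Hz")
print(f"Doppler FM rate  : {doppler_rate:10.2f} Hz/s")
print(f"3rd-order phase  : {cubic_phase:.4f} rad "
      f"({'negligible' if cubic_phase < np.pi / 4 else 'should be compensated'} vs pi/4)")
```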
2.4.3 Time-domain echo model
The time-domain echo model describes the history of the echo amplitude and phase as a function of fast and slow time by simple and precise mathematical relationships obtained through appropriate approximation. It is the basis of the time-domain imaging algorithm and the starting point for building the frequency-domain model. The analysis of the mean slant range history and the Doppler history in the previous two sections can be used to build the time-domain model. Assume the radar carrier wavelength is λ and the speed of light is c. When the transmitter and the receiver move in straight lines at constant speeds, for the scattering point P at position coordinates (x, y) in the target domain, the mean value rP(t; x, y) = [rT(t; x, y) + rR(t; x, y)]/2 of the slant ranges rT(t; x, y) and rR(t; x, y) from the transmitting station and the receiving station will change with the slow time t, simultaneously causing corresponding changes in the pulse echo delay τP(t; x, y) = 2rP(t; x, y)/c and the initial phase φA(t; x, y) = 4πrP(t; x, y)/λ of each pulse. It should be noted that x and y appear as parametric variables here because the laws of echo delay variation and initial phase variation are closely related to the parameters and to the position (x, y) of the scattering point; that is, the echo has space-variant characteristics. According to the analysis in Section 2.4.1, with the change of t, the platform moves from far to near and then from near to far with respect to point P; the echo delay τP(t; x, y) of point P will appear as a U-shaped migration trajectory in the fast-slow time domain, as shown in Fig. 2.12. However, due to the beam pattern modulation and truncation effects, only a small segment of the U-shaped migration trajectory can be observed in the fast-slow time domain. The center time of the segment is t0, and the time span is the synthetic aperture time TA. Therefore the echo coverage area of the scattering point in the fast-slow time domain generally appears squinted and bent. At the same time, the initial phase φA(t; x, y) varies and forms a slow-time frequency-modulated signal, which is the Doppler signal, and the variation
law of its instantaneous frequency fd(t; x, y) = −2r′P(t; x, y)/λ is shown in Fig. 2.14; its envelope shape is determined by the antenna pattern modulation function sA(t; x, y). This is because sA(t; x, y) is related to the shape and scanning mode of the antenna beam; according to the time, position, and process of each scattering point entering and exiting the beam, it produces amplitude modulation and signal truncation of the echo data along the slow-time t direction. Besides, when the transmitted pulse is a frequency-modulated signal, for each transmitted pulse the echo of point P forms a baseband frequency-modulated signal in the τ direction of the fast-slow time domain, which is the fast-time modulated signal, whose envelope shape and phase variation law are determined by the baseband waveform sR[τ] exp[jφR(τ)] of the transmitted pulse, while the envelope center and phase center are determined by τP(t; x, y). Therefore the normalized echo h(t, τ; x, y) of point P becomes a deformed two-dimensional frequency-modulated signal whose mathematical expression is

h(t, \tau; x, y) = s_R[\tau - \tau_P(t; x, y)] \exp\{ j\varphi_R[\tau - \tau_P(t; x, y)] \} \cdot s_A(t; x, y) \exp[-j\varphi_A(t; x, y)]   (2.61)
According to Eq. (2.3), the echo s(t, τ) received by the bistatic SAR receiver from the observation area Ω is the integral over Ω of the two-dimensional echoes of all scattering points in the region:

s(t, \tau) = \iint_{\Omega} \sigma_0(x, y)\, h(t, \tau; x, y)\, dx\, dy   (2.62)
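As a concrete illustration of Eqs. (2.61) and (2.62), the sketch below synthesizes the normalized echo of a single scattering point as a deformed two-dimensional frequency-modulated signal, using an LFM transmitted pulse and a rectangular window in place of the antenna-pattern modulation sA. All numerical parameters are illustrative assumptions.

```python
import numpy as np

# Sketch of the time-domain echo model of Eq. (2.61) for one scattering point.
# Assumed values: X-band carrier, LFM pulse, side-looking invariant geometry.
c, wavelength = 3e8, 0.03
mu_R, T_R = 1e12, 2e-6                 # fast-time chirp rate (Hz/s) and pulse width (s)
prf, T_A = 400.0, 1.0                  # pulse repetition frequency (Hz), aperture time (s)
r_T0, r_R0, v_T, v_R = 12e3, 9e3, 120.0, 120.0

t = np.arange(-T_A / 2, T_A / 2, 1 / prf)        # slow time, one sample per pulse
tau = np.arange(-T_R, T_R, 1 / 200e6)            # fast time about the reference delay

r_P = 0.5 * (np.sqrt(r_T0**2 + (v_T * t)**2) + np.sqrt(r_R0**2 + (v_R * t)**2))
tau_P = 2 * (r_P - r_P.min()) / c                # delay migration, referenced to the low point
phi_A = 4 * np.pi * r_P / wavelength             # slow-time (Doppler) phase of Eq. (2.61)

d = tau[None, :] - tau_P[:, None]                # fast time relative to the echo delay
envelope = (np.abs(d) <= T_R / 2).astype(float)  # sR[tau - tau_P], rectangular pulse
h = envelope * np.exp(1j * np.pi * mu_R * d**2) * np.exp(-1j * phi_A[:, None])

# the scene echo of Eq. (2.62) would be the scattering-intensity-weighted sum of such h
print("deformed 2-D FM signal, shape (pulses, fast-time samples):", h.shape)
```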
In bistatic SAR, the transmitted signal is a linear or nonlinear frequency-modulated signal in order to obtain a large time-bandwidth product. The slow-time signal varies with the observation mode, geometry, and synthetic aperture time, and can generally also be regarded as a linear or nonlinear frequency-modulated signal. Therefore the echo of the scattering point P appears as a deformed two-dimensional frequency-modulated signal on the fast-slow time plane. For different modes, the envelope and phase center delay τP(t; x, y) of the fast-time frequency-modulated signal varies with the slow time t according to a different migration rule, so that the curve corresponding to τP(t; x, y) exhibits different degrees of squint and bending. Fig. 2.15 shows the real part of the scattering point echo in several typical modes when the transmitted signal has a negative frequency modulation rate,
Fig. 2.15 Real part graphic of scattering point echo in bistatic SAR.
where the vertical axis is fast time τ and the horizontal axis is slow time t. It can be seen that, besides the coverage area exhibiting different degrees of squint and bending, the echo also presents an annular strip pattern that gradually thickens outward from the center or from one side to the other. When the echoes can be modeled as linear frequency-modulated signals in both the fast- and slow-time directions, the number of strip rings is equal to one-quarter of the signal time-bandwidth product. For conditions with multiple scattering points, bistatic SAR echoes have intersection, coupling, and space-variant characteristics, as shown in Fig. 2.16. Among them, intersection refers to the overlap of the echo coverage areas of adjacent scattering points. Coupling corresponds to the squint and bending of the echo coverage area caused by the delay migration and the corresponding phase variation law; mathematically, the expressions of the echo envelope and phase of a scattering point cannot be split into the product of two univariate factors, each containing only the fast-time variable τ or the slow-time variable t.
Fig. 2.16 Schematic diagram of multiple scattering point bistatic SAR echoes.
Space-variant means that the echo law and parameters change with the spatial position of the scattering point. In fact, the bistatic SAR echo is seemingly disordered two-dimensional data stacked from the echoes of a large number of scattering points in the observation area, as shown in Fig. 1.3A. The purpose of imaging processing is to transform these seemingly disordered two-dimensional data into ordered two-dimensional images, as shown in Fig. 1.3C. The time-domain imaging method of Eq. (2.4) matches the two-dimensional data with the echo of each scattering point, point by point, so that the echoes of different targets can be focused.
2.4.4 Frequency-domain echo model
The frequency-domain echo model characterizes the two-dimensional spectrum of the point-target echo by a simple and sufficiently accurate analytical expression obtained through appropriate approximation. Its purpose is to grasp the amplitude and phase variation law of the point-target echo in the two-dimensional frequency domain, to understand the quantitative relationship between the two-dimensional spectra of different scattering point echoes, and to support the building of the frequency-domain
imaging algorithm so that the signal processor can perform the imaging process efficiently and quickly and obtain a well-focused two-dimensional image. On platform computing devices with low parallelism, the frequency-domain imaging algorithm requires much less time than the time-domain imaging algorithm, and it is easy to embed parameter estimation and motion compensation into it. It is also the mainstream imaging algorithm in engineering applications. Therefore it is very important to build a frequency-domain echo model. The analysis of Section 2.4.3 shows that the echo of bistatic SAR is a deformed two-dimensional frequency-modulated signal in the fast-slow time domain. Obtaining a two-dimensional analytical expression for the spectrum of this signal essentially requires solving integrals of second-order or even higher-order complex exponential functions. Thus the principle of stationary phase is needed to simplify the analytical calculation of the frequency-modulated signal integration. Here, the most important step is to find the stationary phase point in the slow-time direction. The so-called stationary phase point refers to the time point at which the derivative of the phase of the complex exponential signal is zero. The value of the complex exponential signal at this point and in its vicinity dominates the integral far beyond other time periods, and it can be used to approximate the integral of the higher-order complex exponential function. This is called the principle of stationary phase [11, 12]. The method of building the echo spectrum model is to use the principle of stationary phase to obtain a relatively simple and accurate analytical expression of the echo two-dimensional spectrum. The history of the bistatic SAR slant range mean value contains two square roots, one from the transmitting station and one from the receiving station, making the phase modulation of the Doppler signal much more complicated than that of monostatic SAR. This means that, when building the frequency-domain echo model, it is difficult to directly calculate the stationary phase point, and thus the principle of stationary phase cannot be directly applied. So far, there are two methods for calculating an analytical expression of the two-dimensional spectrum. One is the method of series reversion (MSR) [13], and the other is the Loffeld bistatic formula (LBF) [4]. The result of MSR is a series whose form is complex and not well suited to algorithm derivation, while the traditional LBF model has insufficient adaptability to the configurations of bistatic SAR. So in this subsection, a generalized LBF model [14] is presented to accurately describe the echo spectrum of multiple bistatic SAR configurations.
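The accuracy of the stationary-phase approximation is easy to check numerically for a single chirp. The sketch below (all parameters are illustrative assumptions) compares the FFT of a finite-duration linear FM signal with its stationary-phase spectrum; the two agree inside the chirp bandwidth up to the Fresnel ripples caused by the finite duration.

```python
import numpy as np

# Numerical check of the principle of stationary phase on a slow-time chirp.
mu_a, T_a, fs = -80.0, 1.0, 400.0        # FM rate (Hz/s), duration (s), sampling rate (Hz)
t = np.arange(-T_a / 2, T_a / 2, 1 / fs)
sig = np.exp(1j * np.pi * mu_a * t**2)

f = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
# continuous-time spectrum estimate: scale by dt and account for the time origin t[0]
spec_fft = np.fft.fftshift(np.fft.fft(sig)) / fs * np.exp(-1j * 2 * np.pi * f * t[0])

# stationary phase point t* = f/mu_a, giving the POSP spectrum inside the chirp band
spec_posp = (1 / np.sqrt(np.abs(mu_a))
             * np.exp(-1j * np.pi * f**2 / mu_a + 1j * np.pi / 4 * np.sign(mu_a)))

in_band = np.abs(f) < 0.4 * np.abs(mu_a) * T_a   # stay away from the band edges
err = np.max(np.abs(spec_fft[in_band] - spec_posp[in_band]))
print(f"POSP amplitude 1/sqrt(|mu|) = {1 / np.sqrt(abs(mu_a)):.4f}")
print(f"max in-band deviation from the FFT spectrum = {err:.4f}")
```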
(1) Generalized LBF
After performing the Fourier transform along the fast-time direction τ on the time-domain echo signal in Eq. (2.61), the following can be obtained:

S(t, f_\tau) = S_0(f_\tau) \exp\left[ -j2\pi (f_\tau + f_0) \frac{r_T(t) + r_R(t)}{c} \right]   (2.63)

where fτ is the fast-time frequency, f0 is the center carrier frequency, and S0(fτ) is the spectrum of the transmitted signal. In order to simplify the expression, the coordinates (x, y) representing the position of the scattering point are omitted. Then, by performing the Fourier transform along the slow-time direction t on Eq. (2.63), the two-dimensional spectrum of the scattering point echo can be obtained [14]:

S_{2f}(f_t, f_\tau) = S_0(f_\tau) \int_{-\infty}^{+\infty} \exp[ j\phi_b(t) ]\, dt   (2.64)

where

\phi_b(t) = -2\pi \left[ \frac{f_\tau + f_0}{c} \left( r_T(t) + r_R(t) \right) + f_t\, t \right]   (2.65)
and ft is the slow-time frequency. In order to obtain an analytical expression for Eq. (2.64), we need to know the slow-time stationary phase point. Generally, when calculating the stationary phase point, the first-order derivative of the phase of the integrand is taken, and the time point at which the derivative is zero is the stationary phase point. However, since the two radicals rT(t) and rR(t) are both included in Eq. (2.65), it is difficult to obtain an analytical solution by directly differentiating and solving for the stationary phase point. Therefore in this section we divide the phase ϕb(t) of the integrand into two parts corresponding to the transmitting station and the receiving station, solve for their stationary phase points separately, and derive the quantitative relationship between the slow-time stationary phase point of the bistatic SAR system and the slow-time stationary phase points of the transmitting
station and the receiving station. Thus the analytical expression of the two-dimensional spectrum of the bistatic SAR echo can be obtained.
(a) Slow-time stationary phase point
First, divide ϕb(t) into two parts corresponding to the transmitting station and the receiving station:

\phi_T(t) = -2\pi \left[ \frac{f_\tau + f_0}{c} r_T(t) + f_{tT}\, t \right]   (2.66)

\phi_R(t) = -2\pi \left[ \frac{f_\tau + f_0}{c} r_R(t) + f_{tR}\, t \right]   (2.67)

where ftT and ftR represent the Doppler frequencies contributed by the transmitter and the receiver, respectively; their specific expressions are given in Eqs. (2.85) and (2.86). Then set ϕ′T(t) = 0 and ϕ′R(t) = 0 and solve them separately; the stationary phase points corresponding to the transmitting station and the receiving station are obtained as

\hat{t}_{PT} = \frac{r_{0T}\sin\theta_{sT}}{v_T} - \frac{c\, r_{0T}\cos\theta_{sT}\, f_{tT}}{v_T^2 F_T}   (2.68)

\hat{t}_{PR} = \frac{r_{0R}\sin\theta_{sR}}{v_R} - \frac{c\, r_{0R}\cos\theta_{sR}\, f_{tR}}{v_R^2 F_R}   (2.69)
where θsT and θsR are the squint angles of the beam centers of the transmitter and the receiver, respectively, r0T and r0R are the zero-time slant ranges of the transmitting station and the receiving station, vT and vR are the velocities of the transmitting station and the receiving station, and

F_T = \sqrt{ (f_\tau + f_0)^2 - \left( \frac{c f_{tT}}{v_T} \right)^2 }   (2.70)

F_R = \sqrt{ (f_\tau + f_0)^2 - \left( \frac{c f_{tR}}{v_R} \right)^2 }   (2.71)

Then, perform the Taylor expansion of ϕT(t) and ϕR(t) at t̂PT and t̂PR, respectively, and retain terms up to second order; we then have
S_{2f}(f_t, f_\tau) = S_0(f_\tau) \exp\left[ j\phi_T(\hat{t}_{PT}) + j\phi_R(\hat{t}_{PR}) \right] \int_{-\infty}^{+\infty} \exp\left[ \frac{j}{2}\phi''_T(\hat{t}_{PT})(t - \hat{t}_{PT})^2 + \frac{j}{2}\phi''_R(\hat{t}_{PR})(t - \hat{t}_{PR})^2 \right] dt   (2.72)

where ϕ″T(t̂PT) and ϕ″R(t̂PR) are the second-order derivatives of ϕT(t) and ϕR(t) at t̂PT and t̂PR, respectively. By differentiating the phase inside the integral of this formula and setting the derivative to zero, the stationary phase point of the bistatic SAR system can be obtained:

\hat{t}_{Pb} = \frac{ \phi''_R(\hat{t}_{PR})\,\hat{t}_{PR} + \phi''_T(\hat{t}_{PT})\,\hat{t}_{PT} }{ \phi''_R(\hat{t}_{PR}) + \phi''_T(\hat{t}_{PT}) }   (2.73)
The relationship between the slow-time stationary phase points of the transmitting and receiving stations and the slow-time stationary phase point of the bistatic SAR system will now be derived. Theoretically, the Doppler frequencies contributed by the transmitting station and the receiving station can be written as

f_{tT}(f_t) = \frac{ v_T \sin\theta_{sT}(f_t)\, (f_\tau + f_0) }{ c }   (2.74)

f_{tR}(f_t) = \frac{ v_R \sin\theta_{sR}(f_t)\, (f_\tau + f_0) }{ c }   (2.75)

where θsT(ft) and θsR(ft) are the squint angles of the transmitting station and the receiving station when the system Doppler frequency is ft, respectively. So

F_T = (f_\tau + f_0)\cos\theta_{sT}(f_t)   (2.76)

F_R = (f_\tau + f_0)\cos\theta_{sR}(f_t)   (2.77)

Thus, by introducing Eqs. (2.74)–(2.77) into Eqs. (2.68) and (2.69), the stationary phase points of the transmitting station and the receiving station can be written as

\hat{t}_{PT} = \frac{r_{0T}}{v_T}\left[ \sin\theta_{sT} - \cos\theta_{sT}\tan\theta_{sT}(f_t) \right]   (2.78)

\hat{t}_{PR} = \frac{r_{0R}}{v_R}\left[ \sin\theta_{sR} - \cos\theta_{sR}\tan\theta_{sR}(f_t) \right]   (2.79)
In Eqs. (2.78) and (2.79), the first term is the time the platform takes from the moment when the beam center points to the target to the time of the track crosscut point, and the second term is the time the platform takes from the moment when the system Doppler frequency is ft to the time of the track crosscut point. Therefore it can be seen from these expressions that the stationary phase point of the transmitter and of the receiver is the time interval from the moment when the beam center points to the target to the moment when the system Doppler frequency is ft. As can be seen from Fig. 2.14, there is a one-to-one correspondence among the slow time, the system Doppler frequency, and the Doppler frequency contributions of the transmitter and the receiver, so the times that the transmitting station and the receiving station take from the moment when the system Doppler frequency is ft to the moment when the beam center of the transceiver points to the target are the same. Thus t̂PT = t̂PR = t̂Pb.
(b) Transceiver Doppler frequency contribution modeling
Theoretically, if we substitute Eqs. (2.68) and (2.69) into Eq. (2.72) and then apply the Fresnel integral formula, the analytical expression for the two-dimensional spectrum of bistatic SAR can be obtained. However, in Eqs. (2.68) and (2.69), the frequency-domain analytical expressions of the Doppler contributions of the transmitting station and the receiving station are not given. Therefore we first give the relationship between the Doppler contributions of the transceiver and the slow time, then perform Taylor expansions with respect to the slow time t and eliminate t as an intermediate variable, obtaining the quantitative relationship between the Doppler contributions of the transceiver stations and the system Doppler frequency. In the slow-time domain, the Doppler frequencies generated by the transmitter and receiver are

f_{tT}(t) = \frac{ (v_T^2 t - r_{0T} v_T \sin\theta_{sT})(f_\tau + f_0) }{ c\sqrt{ r_{0T}^2 + v_T^2 t^2 - 2 r_{0T} v_T t \sin\theta_{sT} } }   (2.80)

f_{tR}(t) = \frac{ (v_R^2 t - r_{0R} v_R \sin\theta_{sR})(f_\tau + f_0) }{ c\sqrt{ r_{0R}^2 + v_R^2 t^2 - 2 r_{0R} v_R t \sin\theta_{sR} } }   (2.81)
and the total system Doppler frequency is ft(t) = ftT(t) + ftR(t). Performing the Taylor expansion of Eqs. (2.80) and (2.81) with respect to t gives

f_{tT}(t) \approx f_{tcT} + f_{trT}\, t + f_{t3T}\, t^2   (2.82)

f_{tR}(t) \approx f_{tcR} + f_{trR}\, t + f_{t3R}\, t^2   (2.83)

f_t(t) = f_{tT}(t) + f_{tR}(t) \approx f_{tc} + f_{tr}\, t + f_{t3}\, t^2   (2.84)

Eliminating the intermediate variable t yields the relationship between ftT, ftR, and ft:

f_{tT}(f_t) \approx f_{tcT} + \frac{f_{trT}}{f_{tr}}(f_t - f_{tc}) - \frac{f_{trT} f_{t3} - f_{t3T} f_{tr}}{f_{tr}^3}(f_t - f_{tc})^2   (2.85)

f_{tR}(f_t) \approx f_{tcR} + \frac{f_{trR}}{f_{tr}}(f_t - f_{tc}) - \frac{f_{trR} f_{t3} - f_{t3R} f_{tr}}{f_{tr}^3}(f_t - f_{tc})^2   (2.86)
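The elimination of the intermediate variable t in Eqs. (2.85) and (2.86) can be checked numerically. The sketch below (the geometry values are illustrative assumptions, loosely borrowed from the airborne Mode B parameters of Table 2.4) fits the quadratic models of Eqs. (2.82)–(2.84) to the exact Doppler histories and compares the recomposed ftT(ft) of Eq. (2.85) with its direct evaluation.

```python
import numpy as np

# Verify Eqs. (2.82)-(2.86): fit quadratic models to f_tT(t), f_tR(t), f_t(t),
# eliminate t, and compare the recomposed f_tT(f_t) with direct evaluation.
c, f0 = 3e8, 9.65e9
r0T, r0R, vT, vR = 14.3e3, 8.96e3, 120.0, 120.0
thT, thR = np.deg2rad(8.0), np.deg2rad(38.0)
ftau = 0.0                                     # evaluate at the carrier frequency

def ftT(t):   # Eq. (2.80)
    return (vT**2 * t - r0T * vT * np.sin(thT)) * (ftau + f0) / (
        c * np.sqrt(r0T**2 + vT**2 * t**2 - 2 * r0T * vT * t * np.sin(thT)))

def ftR(t):   # Eq. (2.81)
    return (vR**2 * t - r0R * vR * np.sin(thR)) * (ftau + f0) / (
        c * np.sqrt(r0R**2 + vR**2 * t**2 - 2 * r0R * vR * t * np.sin(thR)))

t = np.linspace(-0.5, 0.5, 201)                # slow-time interval of interest (s)
ft3T, ftrT, ftcT = np.polyfit(t, ftT(t), 2)    # Eq. (2.82) coefficients
ft3R, ftrR, ftcR = np.polyfit(t, ftR(t), 2)    # Eq. (2.83) coefficients
ft3, ftr, ftc = ft3T + ft3R, ftrT + ftrR, ftcT + ftcR   # Eq. (2.84)

ft = ftT(t) + ftR(t)                           # system Doppler frequency samples
d = ft - ftc
ftT_model = ftcT + (ftrT / ftr) * d - (ftrT * ft3 - ft3T * ftr) / ftr**3 * d**2  # Eq. (2.85)

print("max |Eq.(2.85) - direct| :", np.max(np.abs(ftT_model - ftT(t))), "Hz")
print("Doppler centroid ftc     :", ftc, "Hz")
```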
(c) Two-dimensional spectrum model
Substituting Eqs. (2.85) and (2.86) into Eqs. (2.68) and (2.69), then substituting the result into Eq. (2.72) and applying the Fresnel integral formula, the analytical expression of the two-dimensional spectrum of the bistatic SAR point-target echo can be expressed as

S_{2f}(f_t, f_\tau) = \frac{\sqrt{2\pi}}{\sqrt{\phi''_T(\hat{t}_{PT}) + \phi''_R(\hat{t}_{PR})}} \exp\left( j\frac{\pi}{4} \right) S_0(f_\tau) \exp\left\{ -j\frac{2\pi}{c}\left[ r_{0T}\cos\theta_{sT} F_T + r_{0R}\cos\theta_{sR} F_R \right] + j2\pi\left[ \frac{r_{0T}\sin\theta_{sT}}{v_T} f_{tT}(f_t) + \frac{r_{0R}\sin\theta_{sR}}{v_R} f_{tR}(f_t) \right] \right\}   (2.87)

Compared with the original LBF [4], the modified LBF [5], and other forms of LBF, Eq. (2.87) can be regarded as a more general form of the frequency-domain echo model, which can be applied to both translationally variant and invariant bistatic SAR in forward-looking and squint modes. Fig. 2.17 gives the real part of the scene-center scattering point echo spectrum in several typical modes, calculated by Eq. (2.87) when the transmitted signal has negative frequency modulation. It can be seen that the two-dimensional
Fig. 2.17 Real part of bistatic SAR scattering point echo spectrum (axes: slow time and fast time, in sample points).
spectrum of the scattering point echo is also an annular strip pattern that gradually thickens outward from the center, and the number of strip rings is the same as in the corresponding time-domain signal of Fig. 2.15, because when both the fast- and slow-time directions of the scattering point echo can be modeled as linear frequency-modulated signals, the number of strip rings in the time domain and in the frequency domain is equal to one-quarter of the signal time-bandwidth product. Note that in order to show the equal-phase annular strips, the figure uses reduced time-bandwidth products in fast time and slow time, and the annular strips visible in the four corners of the spectrum reflect the spectrum aliasing caused by finite time-domain sampling. As for a scattering point deviating from the center of the scene, although there is no significant difference in the shape of the spectral coverage area, its two-dimensional spectrum acquires linear phase factors related to the position of the scattering point and additional phase factors caused by the space-variant characteristics. Since both Eq. (2.62) and the two-dimensional Fourier transform are linear operations, the synthesized spectrum corresponding to the echo of the entire scene is exactly the seemingly disordered two-dimensional data formed by the weighted summation of the echo spectra of the scattering points. The weights correspond to the scattering intensities of the scattering points, and the distribution range of the synthesized spectrum is determined by the bandwidth of the transmitted signal, the Doppler centroid of each point target, and the Doppler bandwidths. The frequency-domain imaging algorithm exploits the variation law and correlation of the echo spectra of the scattering points based on the frequency-domain echo model. By frequency-domain scaling, phase multiplication, and other operations, the frequency-domain phase factors are normalized: the space-variant phase factors of each scattering point spectrum are eliminated, while the spectral amplitude information, the linear phase factors that reflect the positions, and the phase factors corresponding to the two-dimensional linear frequency modulation are preserved, so that the echo spectra of all scattering points can be collected together. By multiplying with the reversed-phase spectrum corresponding to the annular strips, the annular strips of all scattering points are eliminated, retaining the equally spaced parallel linear
strips corresponding to the linear phase factors of each scattering point. This converts the two-dimensional spectrum of each scattering point echo into a two-dimensional pulse function with a linear phase factor, which corresponds in the image domain to a narrow pulse located at the position of the scattering point; the echo energy of each scattering point is thereby focused, and the radar image of the ground scene is obtained.
(2) Model accuracy analysis
In the process of deriving the generalized LBF model using the principle of stationary phase, although a simple two-dimensional spectrum analytical expression of the scattering point is obtained, there is still a certain error between it and the exact spectrum of the scattering point echo, which is the two-dimensional Fourier transform of the time-domain echo of the scattering point. This error reflects the accuracy of the model; it depends on the bistatic SAR configuration and mode, affects the accuracy of the frequency-domain imaging algorithm constructed from the model, and influences the resulting image focus. Therefore it is necessary to quantitatively evaluate the accuracy of the model. The method of evaluating the accuracy of a scattering point model is to compare the results of the model under evaluation with the exact spectrum, either by numerical calculation or by image focusing quality. Here we first use the scattering point image-domain focusing quality to evaluate the accuracy of the spectral models: the computed exact spectrum, the original LBF, the modified LBF, and the generalized LBF are used as matched filters to focus the reference point echoes, and their focusing quality is compared. Table 2.4 gives a set of related parameters for the comparison, where mode A is a spaceborne-airborne bistatic SAR. In this mode, the airborne receiving station works in the large squint mode, and the transmitting station works in the small squint mode. In mode B, both the transmitting station and the receiving station are airborne with medium squint angles. In mode C, the receiving station operates in large squint mode and the transmitting station is in side-looking mode. Table 2.5 shows the image-domain focusing performance for the exact spectrum, original LBF, modified LBF, and generalized LBF, with the corresponding resolution (IRW), peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR).
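For reference, focusing metrics of the kind listed in Table 2.5 can be measured from a compressed point-target response as sketched below. The −3 dB width convention and the definition of the mainlobe extent (between the first nulls) are common choices assumed here, not prescriptions from this book.

```python
import numpy as np

# Minimal sketch of measuring IRW, PSLR, and ISLR from a 1-D point-target response.
def focusing_metrics(response):
    p = np.abs(response) ** 2
    k0 = int(np.argmax(p))
    # IRW: -3 dB width of the mainlobe, in samples
    half = p[k0] / 2
    left = k0
    while left > 0 and p[left] > half:
        left -= 1
    right = k0
    while right < p.size - 1 and p[right] > half:
        right += 1
    irw = right - left - 1
    # mainlobe extent: between the first local minima on either side of the peak
    l = k0
    while l > 0 and p[l - 1] < p[l]:
        l -= 1
    r = k0
    while r < p.size - 1 and p[r + 1] < p[r]:
        r += 1
    side = np.concatenate([p[:l], p[r + 1:]])
    pslr = 10 * np.log10(side.max() / p[k0])
    islr = 10 * np.log10(side.sum() / p[l:r + 1].sum())
    return irw, pslr, islr

# example: ideal sinc response of an unweighted aperture
x = np.arange(-64, 64, 0.125)
print(focusing_metrics(np.sinc(x)))   # expect PSLR near -13.3 dB, ISLR near -10 dB
```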
Table 2.4 List of calculation parameters.

Simulation parameters                              The transmitting station        The receiving station
Carrier frequency (GHz)                            9.65
Bandwidth of delay direction (MHz)                 150
PRF (Hz)                                           Mode A: 3500; Modes B, C: 400
Mode A: Spaceborne/airborne
  Distance from platform to point target (km)      859                             6.7
  Speed (m/s)                                      7600                            120
  Squint angle (degrees)                           6.67                            63
Mode B: Airborne
  Distance from platform to point target (km)      14.3                            8.96
  Speed (m/s)                                      120                             120
  Squint angle (degrees)                           8                               38
Mode C: Airborne
  Distance from platform to point target (km)      14.14                           11.2
  Speed (m/s)                                      120                             120
  Squint angle (degrees)                           0                               63

Table 2.5 List of performance parameters.
                                   Resolution (sampling units)   Peak sidelobe ratio (dB)   Integrated sidelobe ratio (dB)
Mode A   Exact spectrum            43.35                         -13.27                     -10.09
         LBF [4]                   Invalid                       Invalid                    Invalid
         Modified LBF [5]          47.54                         -12.94                     -9.99
         Generalized LBF           43.39                         -13.26                     -10.09
Mode B   Exact spectrum            25.24                         -12.32                     -10.15
         LBF [4]                   186.3                         Invalid                    Invalid
         Modified LBF [5]          158.43                        Invalid                    Invalid
         Generalized LBF           25.27                         -12.32                     -10.15
Mode C   Exact spectrum            37.43                         -13.06                     -10.14
         LBF [4]                   Invalid                       Invalid                    Invalid
         Modified LBF [5]          Invalid                       Invalid                    Invalid
         Generalized LBF           37.45                         -13.06                     -10.14
As can be seen from Table 2.5, for mode A the modified LBF and the generalized LBF have imaging performance similar to that of the exact spectrum. In both airborne modes, the performance of the modified LBF decreases as the squint angle of the receiving station increases. For mode B with medium squint, the original LBF and the modified LBF have a poor focusing effect, while the generalized LBF has a good focusing effect, as shown in Fig. 2.18. For mode C, when the receiving station operates in the large squint mode, the modified LBF does not achieve any focusing in the slow-time direction, whereas adopting the generalized LBF proposed in this chapter can still achieve an effect close to that of adopting the exact spectrum.

Fig. 2.18 Effect of focusing echoes on scattering points adopting different spectral models (mode B): (a) original LBF; (b) modified LBF; (c) generalized LBF. Axes: slow time vs. fast time (sampling points).
Fig. 2.19 Phase error of different spectrum models (phase error in rad, plotted against azimuth frequency and range frequency in Hz).
Therefore the generalized LBF has good model accuracy even for large squint and forward-looking modes. The significant differences in the preceding focusing performances are essentially due to the differences in accuracy of the spectral models. For example, for the airborne large squint case of mode C, Fig. 2.19 shows the phase error of several spectral models. It can be seen that the phase errors of the original LBF and the modified LBF reach 1433π and 955π, respectively, which far exceed the upper bound of π/4 required for good focusing; therefore their focusing in the slow-time direction is poor. The phase error of the generalized LBF is much smaller than π/4, so its focusing in the slow-time direction is good.
References
[1] G.P. Cardillo, On the use of the gradient to determine bistatic SAR resolution, in: Antennas and Propagation Society International Symposium, vol. 2, 1990, pp. 1032–1035.
[2] X.K. Yuan, Introduction to Spaceborne Synthetic Aperture Radar, National Defense Industry Press, 2003.
[3] T. Zeng, M. Cherniakov, T. Long, Generalized approach to resolution analysis in BSAR, IEEE Trans. Aerosp. Electron. Syst. 41 (2) (2005) 461–474.
[4] O. Loffeld, H. Nies, V. Peters, et al., Models and useful relations for bistatic SAR processing, IEEE Trans. Geosci. Remote Sens. 42 (10) (2004) 2031–2038.
[5] R. Wang, O. Loffeld, Q. Ul-Ann, H. Nies, A. Medrano Ortiz, A. Samarah, A bistatic point target reference spectrum for general bistatic SAR processing, IEEE Geosci. Remote Sens. Lett. 5 (3) (2008) 517–521.
[6] Q. Zhang, J. Wu, Z. Li, Y. Huang, J. Yang, H. Yang, On the spatial resolution metrics of bistatic SAR, in: CIE International Conference on Radar, 2016, pp. 989–993.
[7] T.D. Wu, K.S. Chen, J. Shi, et al., A study of an AIEM model for bistatic scattering from randomly rough surfaces, IEEE Trans. Geosci. Remote Sens. 46 (9) (2008) 2584–2598.
[8] Z. Sun, J. Wu, J. Pei, Z. Li, Y. Huang, J. Yang, Inclined geosynchronous spaceborne–airborne bistatic SAR: performance analysis and mission design, IEEE Trans. Geosci. Remote Sens. 54 (1) (2016) 343–357.
[9] Z. Sun, J. Wu, J. Yang, Y. Huang, C. Li, D. Li, Path planning for GEO-UAV bistatic SAR using constrained adaptive multiobjective differential evolution, IEEE Trans. Geosci. Remote Sens. 54 (11) (2016) 6444–6457.
[10] K. Deb, A. Pratap, S. Agarwal, et al., A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[11] P. Yiming, Y. Jianyu, F. Yusheng, Y. Xiaobo, Principle of Synthetic Aperture Radar, UESTC (University of Electronic Science and Technology of China) Press, 2007.
[12] I.G. Cumming, F.H. Wong, Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation, Artech House, Norwood, 2004.
[13] Y.L. Neo, F. Wong, I.G. Cumming, A two-dimensional spectrum for bistatic SAR processing using series reversion, IEEE Geosci. Remote Sens. Lett. 4 (1) (2007) 93.
[14] J. Wu, Z. Li, Y. Huang, J. Yang, Q.H. Liu, An omega-K algorithm for translational invariant bistatic SAR based on generalized Loffeld's bistatic formula, IEEE Trans. Geosci. Remote Sens. 52 (10) (2014) 6699–6714.
CHAPTER 3
Bistatic SAR imaging algorithm
The physical process of imaging involves homing and focusing the echo energy of each point according to the echo law and parameters, so as to transform the time-domain echo into a two-dimensional image in the target domain. According to detection theory, the current imaging method amounts to optimal detection and scattering intensity estimation of each point against the noise background. From the perspective of the processing flow, it is a two-dimensional matched filter applied to the time-domain echo for each point. From a computational perspective, it is a two-dimensional correlation operation per pixel. The purpose of constructing an imaging algorithm is to provide a high-efficiency calculation method that ensures the required accuracy, instead of the pixel-by-pixel two-dimensional correlation operation, whose computational efficiency is very low. It ensures that the time required for imaging processing can be kept within the range allowed by the practical application. Therefore the imaging algorithm is one of the indispensable core technologies in SAR engineering applications. The layout of bistatic SAR, with separated transmitter and receiver, significantly affects the migration, coupling, and space-variant characteristics of the echoes, forming echo patterns significantly different from those of monostatic SAR. Thus, to achieve accurate homing and focusing, it is impossible to simply copy the monostatic SAR imaging algorithms. Moreover, different geometric configurations, observation directions, and scanning modes of bistatic SAR also form different echo patterns, and the specific problems to be solved by the imaging algorithm and the corresponding algorithm structures differ accordingly and need to be studied and discussed. This chapter first introduces the main tasks and mathematical essence of the imaging algorithm, analyzes the basic ideas and characteristics of the time-domain imaging algorithm and the frequency-domain imaging algorithm, and expounds the problems faced in constructing the frequency-domain imaging algorithm, along with their causes and countermeasures. Then it introduces the bistatic SAR time-domain imaging algorithm and frequency-domain imaging algorithm, including algorithm construction, processing
flow, and simulation verification. Among these, the discussion of the frequency-domain bistatic SAR imaging algorithm focuses on the representative translational variant bistatic side-looking and squint-looking imaging algorithms.
3.1 Basic tasks of imaging algorithm
The term imaging algorithm refers to an efficient calculation method for the two-dimensional correlation imaging method, one that can significantly shorten the imaging processing time and meet the real-time requirements of SAR imaging in practical applications while ensuring imaging accuracy. Although the imaging algorithms corresponding to different bistatic SAR modes and processor architectures have different implementation approaches, they all have the same mathematical essence, and the main problems they face and the corresponding coping strategies are essentially the same.
3.1.1 The mathematical essence of imaging algorithm
As described in Section 2.1, the current two-dimensional correlation imaging method is a point-by-point, two-dimensional operation process, which requires a large amount of calculation. On the existing serial computing equipment carried by motion platforms, these calculation requirements far exceed the real-time processing capabilities, so this imaging method is difficult to apply directly. According to the analysis of the imaging method represented by Eq. (2.4), the fundamental reason for point-by-point processing is the difference in the variation law or parameters of the normalized echo h(t, τ; x, y) of each scattering point; that is, the space-variant characteristics. The fundamental reason for the two-dimensional operation is the two-dimensional coupling of the echo signal: in its mathematical expression, both the slow time t and the fast time τ appear in the envelope and phase variables. The cross-multiplication terms of t and τ make it impossible to directly split the two-dimensional integral into the product of two one-dimensional integrals. The purpose of building an imaging algorithm is to greatly improve the computing efficiency of the pixel-by-pixel, two-dimensional imaging method. Its mathematical essence is, by same-domain conversion and cross-domain transformation, to achieve homogeneous collection and fast calculation in batches and to reduce duplicated calculations, so as to dramatically reduce the imaging processing time required by the serial computing
equipment. As a result, the real-time requirements of practical imaging applications can be met. Based on the difference in operating structure, imaging algorithms can be divided into two categories: time-domain and frequency-domain imaging algorithms. The time-domain imaging algorithm processes the echo directly in the time domain. Its basic idea is as follows: first, since the scattering point echo is a delayed version of the baseband signal of the transmitted pulse, s(t, τ) is matched-filtered along the τ direction to realize fast-time pulse compression, causing the scattering point echo energy to gather on the delay migration trajectory along the τ direction. Then, a conjugate reference function is constructed according to the phase variation law of the Doppler signal of that point, and slow-time pulse compression is realized by multiplying and integrating along the delay migration trajectory, so as to make the point echo converge to the position corresponding to the observed direction. The fast-time compression corresponds to a line integral, while the slow-time compression corresponds to a curve integral. Therefore, although this algorithm implements dimension-reducing processing to a certain extent, the decomposition of the two-dimensional integral is not complete. The time-domain imaging algorithm has a clear and simple concept that is easy to understand, a simple structure, high accuracy in theory, and no geometric distortion such as "close compression," and, in principle, it is suitable for any imaging configuration and observation direction. However, it is still a point-by-point process and does not achieve batch processing. Therefore, for the existing serial computing equipment on mobile platforms, its computing efficiency is still low and the processing time will still greatly exceed that allowed by the actual application. Therefore it is necessary to study fast algorithms to improve computing efficiency. In this chapter, a fast factorized back projection (FFBP) algorithm is introduced, which adopts the idea of subaperture division and level-by-level combination; it can reduce the number of repeated calculations between different pixels in the imaging process, thus greatly improving the computational efficiency of the time-domain back-projection (BP) imaging algorithm. In addition, when the accuracy of the platform motion measurement device is insufficient, the processing of the time-domain imaging algorithm will also involve an additional computational burden in terms of parameter estimation and autofocus, resulting in a further reduction in real-time
performance. This is because the echo parameter estimation required for imaging processing still needs to be carried out in a transform domain such as the frequency domain; moreover, in high-resolution applications, an iterative imaging process, i.e., autofocus, is also needed to achieve good focusing of the scattering point energy. A frequency-domain imaging algorithm converts the echo to the frequency domain for processing. Although its dependence on the imaging geometry and observation direction is somewhat higher, it can directly and accurately estimate the echo delay migration path, the Doppler frequency modulation parameters, and the higher-order echo error parameters, with low dependence on the accuracy of the platform motion measurement device and of the time and frequency synchronization devices. Moreover, the efficiency of the frequency-domain imaging algorithm is very high, and real-time processing can be realized on the existing serial computing equipment of the platform. Therefore the frequency-domain imaging algorithm is still the main imaging algorithm used at present. The frequency-domain imaging algorithm can significantly improve the calculation efficiency of Eq. (2.4). Its main tasks can be described as follows. First, eliminate the space-variant characteristics, to facilitate measures for collecting echoes of the same kind and to create the conditions for the subsequent decoupling, dimension-reducing, and batch-processing steps. Second, remove the coupling, so that dimension-reducing processing can be carried out by using two cascaded one-dimensional operations instead of a two-dimensional correlation operation. Third, use fast transforms: FFT and other fast algorithms replace the low-efficiency correlation operations after dimension reduction. Fourth, use embedded estimation and compensation: estimate the echo parameters and compensate the motion errors synchronously within the imaging-processing flow, to save calculations that would otherwise duplicate intermediate imaging-processing steps. Fifth, reduce processing errors: minimize the residuals of the space-variance removal and decoupling operations and the additional errors they introduce, to ensure the amplitude accuracy and geometric accuracy of σ̂0(x, y); in order to constrain the computational cost of such operations, the echo mathematical expression must be approximated to reduce the complexity of these operations as much as possible, while tolerating the corresponding residuals and errors. The sixth main task is geometric and radiometric correction: correct as much as possible the image-domain amplitude distortion
and geometric distortion caused by the space-variance removal and decoupling steps: that is, reduce the amplitude and geometric errors of σ̂0(x, y) relative to the true value σ0(x, y). Among these tasks, the first three are the most basic, as their purpose is to greatly improve the computational efficiency; the other three are remedial measures to reduce the cost in algorithmic precision and repeated calculation. In order to overcome the difficulties caused by the space-variant characteristics and the coupling when constructing frequency-domain imaging algorithms and implementing batching and dimension-reduction operations, it is necessary to understand their formation mechanisms, manifestations, and influencing factors in advance, and then to propose more effective targeted measures.
3.1.2 Causes and countermeasures of space-variant characteristics
As shown in Fig. 3.1, it is assumed that the transmitting station and the receiving station move along straight routes at speeds vT and vR, respectively; the included angle between the routes is α; and the track shortcuts of the scattering point P in the target domain are rT and rR, respectively.
Fig. 3.1 Relationship between echo space-variant characteristics and bistatic SAR configuration.
On the plane of the average slant range r versus time t, the slant ranges rT(t) and rR(t) from the scattering point P to the two stations take the form of hyperbolas, recorded as CT and CR, respectively. Their degree of bending is determined by the speeds and the corresponding track shortcuts; the ordinate of each low point is determined by the track shortcut, and the abscissa of each low point is determined by the times tT and tR at which each station reaches its track shortcut point OT or OR. When the track shortcut rT or rR is zero, the corresponding hyperbola degenerates into a V shape, whose opening width is determined by the speed of the corresponding station. Since the mean slant range of the scattering point P is rP(t; x, y) = rP(t) = [rT(t) + rR(t)]/2, the corresponding curve CP is a deformed hyperbola obtained by taking the mean of curves CT and CR. Of course, due to the truncation effect on the echo signal caused by the limited irradiation time of the antenna beam, only a small segment of the curve CP can be observed in the fast-slow time domain; this segment reflects the visible delay migration trajectory of the corresponding point echo data and directly determines the variation law of the echo data and the trend of its coverage area in the fast-slow time domain. It will be explained in the following text that the shape of the mean slant range curve CP changes with the spatial position of the scattering point, showing the so-called space-variant characteristics. Moreover, different bistatic geometric configurations have different degrees of space variation. For the squint-flying translational variant bistatic configuration shown in Fig. 3.1A, the scattering points on the route parallel line PP′ all have the same track shortcut rR but different track shortcuts rT, while on the route parallel line PP″ it is just the opposite. This indicates that any two points at different positions have different track shortcut combinations, resulting in different shapes of the mean slant range curve CP in addition to different positions. Therefore the echo law of scattering points in this kind of bistatic configuration exhibits two-dimensional space-variant characteristics. In addition, the low-point time difference Δt = tT − tR corresponding to any two fixed points on the route parallel lines is usually not the same: that is, Δt is space-variant, which further aggravates the echo space-variant characteristics, unless the included angle α and the speeds vT and vR meet the special configuration condition vT = vR cos α or vR = vT cos α, that is, the projection of one station's speed onto the other's route direction is equal to the other station's speed. For the parallel-flying translational variant bistatic configuration shown in Fig. 3.1B, the scattering points on the parallel lines of different routes have different track shortcut combinations, so they have an echo
space-variant characteristic perpendicular to the route direction. However, any two scattering points on the same route parallel line have the same track shortcut combination; but since this configuration is equivalent to α = 0 in Fig. 3.1A with vT ≠ vR, it does not meet the stated special configuration condition, and the low-point time difference Δt is still space-variant. Therefore, in the direction parallel to the route, it also has echo space-variant characteristics. Thus, although the echo space-variant characteristics of this bistatic configuration are weakened, they are still two-dimensional. For the translational invariant bistatic configuration shown in Fig. 3.1C, the track shortcut combinations are also different for scattering points on the parallel lines of different flight routes, so there are echo space-variant characteristics perpendicular to the route direction. However, any two scattering points on the same route parallel line have the same track shortcut combination, and since vT = vR, the aforementioned special configuration condition is met, so their low-point time difference Δt is the same: that is, Δt is space-invariant. Therefore, in the direction parallel to the route, the configuration has space-invariant characteristics, meaning that the variation law is the same for the echoes of different points on the same route parallel line, apart from their different times and positions. The echo space-variant characteristics of the translational invariant bistatic configuration are thus further weakened, but one-dimensional space-variant characteristics perpendicular to the flight path remain. This configuration degenerates to monostatic SAR when rT = rR and tT = tR, but the law of the echo space-variant characteristics remains unchanged. Space-variant characteristics are adverse factors affecting the computational efficiency of imaging processing and are the fundamental cause of the point-by-point calculation required in Eq. (2.4). Therefore, in the construction of the imaging-processing algorithm, special mathematical operations such as scale transformation are needed to transform the two-dimensional space-variance problem into a one-dimensional one. Then, in the space-invariant direction, the echoes are collected in the frequency domain by using the property that a time-domain shift corresponds, under the Fourier transform, to multiplication by a linear phase factor in the frequency domain, and the fast computation of the Fourier transform and the batch processing of pulse compression are realized by the FFT. In the space-variant direction of the echo, a differential processing strategy is adopted, which does not affect the computational efficiency of the two cascaded one-dimensional processing steps.
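The batch-processing idea mentioned above can be illustrated with a short sketch: because a delay in the time domain maps to a linear phase factor in the frequency domain, a single frequency-domain matched filter compresses all pulses in a data block at once. The waveform parameters and the two example delays below are illustrative assumptions.

```python
import numpy as np

# Batch fast-time pulse compression via the FFT: one frequency-domain matched
# filter is applied to a whole block of pulses at once.
fs, B, T_R = 200e6, 150e6, 2e-6
mu = B / T_R
tau = np.arange(0, 4 * T_R, 1 / fs)
ref = np.where(tau < T_R, np.exp(1j * np.pi * mu * (tau - T_R / 2)**2), 0)

# two pulses whose echoes arrive at different delays (the time shift is applied
# as a linear phase factor in the fast-time frequency domain)
freqs = np.fft.fftfreq(tau.size, 1 / fs)
delays = [0.5e-6, 1.3e-6]
data = np.array([np.fft.ifft(np.fft.fft(ref) * np.exp(-1j * 2 * np.pi * freqs * d))
                 for d in delays])

H = np.conj(np.fft.fft(ref))                     # one matched filter for all pulses
compressed = np.fft.ifft(np.fft.fft(data, axis=1) * H, axis=1)
print("detected delays (s):", tau[np.argmax(np.abs(compressed), axis=1)])
```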
3.1.3 Causes and countermeasures of coupling
For a standard two-dimensional chirp signal, the mathematical expression is

h_0(t, \tau) = s_R[\tau - \tau_0] \exp\{ j\pi\mu_R [\tau - \tau_0]^2 \} \cdot s_A[t - t_0] \exp\{ j\pi\mu_A [t - t_0]^2 \}   (3.1)

where sR[·] and sA[·] are assumed to be standard rectangular pulses centered at 0 with widths TR and TA, respectively. It can be seen that τ and t do not appear simultaneously in any of the two envelope variables and two phase variables of h0(t, τ). Therefore it can be divided into the product of two terms each containing only a single variable, τ or t, corresponding to the fast-time FM signal (the first and second terms) and the slow-time FM signal (the third and fourth terms). The bandwidth of the fast-time FM signal is BR = μRTR, and its number of oscillation cycles is BRTR/4; the bandwidth of the slow-time FM signal is BA = μATA, and its number of oscillation cycles is BATA/4. Fig. 3.2A shows the real part of h0(t, τ) when μR and μA are both negative. The intuitive observation is that, along the τ direction or the t direction, chirp characteristics with only τ or only t as the variable can be observed, respectively. For the normalized echo h(t, τ; x, y) of the scattering point P, according to Eq. (2.61), when φR(τ) = πμRτ², i.e., the baseband signal of the transmitted
Fig. 3.2 Two-dimensional coupling of deformed two-dimensional chirp signals: (A) standard two-dimensional linear frequency modulated signal (real part); (B) deformed two-dimensional linear frequency modulated signal (real part). The annotations mark the range and azimuth frequency-modulated phases and the two-dimensional coupling.
pulse is the LFM signal sR[τ] exp[jπμRτ²], the fast-time FM signal becomes an LFM signal with τ as the variable, whose envelope center and phase center are determined by τP(t; x, y) = 2rP(t; x, y)/c; the envelope variation law of the slow-time FM signal is related to the modulation and truncation effects of the antenna beam, and under certain conditions its phase variation law can also be modeled as an LFM signal with t as the variable. At this time, h(t, τ; x, y) becomes a deformed two-dimensional LFM signal, which is obviously different from the standard two-dimensional LFM signal h0(t, τ). The following takes the simplest translational invariant bistatic side-looking mode as an example. To simplify the analysis, it is assumed that the antenna pattern has no side lobes, the main lobe gain is constant, and the beam pointing is perpendicular to the route. In this case, sA(t; x, y) in Eq. (2.61) can be expressed as sA(t; x, y) = sA[t − tP(x, y)], where sA[·] is a rectangular pulse centered at 0, and tP(x, y) is the moment at which the low point of the mean slant range curve CP in Fig. 3.1 occurs. When the beam is narrow, the slow-time FM signal can be modeled as a chirp signal sA[t − tP(x, y)] exp{jπμA(x, y)[t − tP(x, y)]²}. The mathematical expression of h(t, τ; x, y) is then

h(t, \tau; x, y) = s_R[\tau - \tau_P(t; x, y)] \exp\{ j\pi\mu_R [\tau - \tau_P(t; x, y)]^2 \} \cdot s_A[t - t_P(x, y)] \exp\{ j\pi\mu_A(x, y) [t - t_P(x, y)]^2 \}   (3.2)

Fig. 3.2B shows the real part of h(t, τ; x, y) when μR is negative. From Eq. (3.2), it can be seen that, as the mean slant range delay τP(t; x, y) changes with time t, it causes echo delay migration and the simultaneous appearance of τ and t in the fast-time FM envelope of h(t, τ; x, y) (the first term). If τP(t; x, y) is expanded in a Taylor series with t as the variable, cross-multiplication terms of τ and t appear in the fast-time FM phase (the second term) of h(t, τ; x, y), forming a cross-frequency-modulation effect. Therefore h(t, τ; x, y) cannot be divided into the product of two components each with only one variable, τ or t, which shows obvious two-dimensional coupling. As a result, Eq. (2.4) cannot be divided into the product of two one-dimensional integrals, and a two-dimensional integral operation must be carried out. As shown in Fig. 3.2, the most intuitive expression of coupling in the fast-slow time domain is that, although the frequency-modulation phase changes corresponding to the transmitted signal waveform can be observed along the τ axis direction, the centers of the phase and the envelope change with t. If the echo signal is observed along the time
delay migration trajectory τ = τP(t; x, y), then according to Eq. (3.2) the observed signal hc(t, τ; x, y) can be expressed as

h_c(t, \tau; x, y) = s_R[0] \exp( j\pi\mu_R \cdot 0^2 ) \cdot s_A[t - t_P(x, y)] \exp\{ j\pi\mu_A(x, y) [t - t_P(x, y)]^2 \}   (3.3)
This signal accurately reflects the envelope and phase variation of the slow-time FM signal (the third and fourth terms), because the envelope value of the fast-time FM component (the first term) is constant and its phase value (the second term) is zero; only the influence of the slow-time FM component is retained. This is also the mathematical basis of the time-domain imaging method, which first performs fast-time pulse compression and then performs slow-time compression along the migration trajectory. However, when the echo signal is observed along the t axis on the straight line τ = τ0, the observed signal h(t, τ0; x, y) can be expressed as

h(t, \tau_0; x, y) = s_R[\tau_0 - \tau_P(t; x, y)] \exp\{ j\pi\mu_R [\tau_0 - \tau_P(t; x, y)]^2 \} \cdot s_A[t - t_P(x, y)] \exp\{ j\pi\mu_A(x, y) [t - t_P(x, y)]^2 \}   (3.4)

By comparing Eqs. (3.3) and (3.4), it can be seen that, in addition to the envelope and phase factors reflecting the slow-time frequency modulation (the third and fourth terms), an additional phase (the second term) appears in h(t, τ0; x, y), whose value is exactly equal to the fast-time frequency modulation phase corresponding to the time difference τ0 − τP(t; x, y) by which the straight line τ = τ0 deviates from the delay migration trajectory; i.e., a cross-frequency-modulation phenomenon occurs. Therefore the slow-time FM phase variation law cannot be accurately reflected in h(t, τ0; x, y). In addition, the influence of the fast-time frequency modulation envelope (the first term) is added to h(t, τ0; x, y); as t changes, it produces an additional amplitude modulation and truncation effect on the slow-time frequency modulation signal: that is, cross-amplitude modulation appears. As can be seen from the example in Fig. 3.2, h(t, τ0; x, y) obviously deviates from the envelope and phase variation law of an LFM signal. If pulse compression of the slow-time FM signal is performed directly along the t axis direction, the echo signal energy in the t axis direction will not be gathered well, resulting in obvious slow-time defocusing in the image domain. Coupling is an adverse factor that affects the computational efficiency of imaging processing, and it is the fundamental reason that Eq. (2.4) must be calculated in two dimensions. Therefore, in the process of building an
imaging algorithm, special mathematical operations such as interpolation mapping and scale transformation are required to carry out walk correction and bend-straightening operations on the deformed two-dimensional chirp signal of Fig. 3.2B, which is transformed into the standard two-dimensional chirp signal of Fig. 3.2A once the two-dimensional coupling of the echo is removed. This allows the computing device to perform the fast-time compression process in the column direction and the slow-time compression process in the row direction, thereby converting the two-dimensional operation of Eq. (2.4) into two cascaded one-dimensional operations, realizing the dimensionality reduction and thus greatly improving the calculation efficiency. However, the actual mathematical operation of decoupling is often much more complex than simple leveling and straightening operations performed directly in the fast-slow time domain. This is because, when the delay curvature cannot be ignored, the delay migration trajectories of different points exhibit misalignment and crossover in the fast-slow time domain, which leads to contradictory requirements for the leveling and straightening operations in the fast-slow time domain. Therefore, when constructing the imaging algorithm, it is generally necessary to first gather the space-invariant echoes in the space-invariant direction using the Fourier transform, so that they have overlapping migration paths in the frequency domain, thereby unifying their leveling and straightening requirements, resolving the stated contradiction, and achieving batch leveling and straightening. In this sense, the operation of removing space variance is often the premise of decoupling.
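Whether the walk and bend of the migration trajectory actually have to be corrected can be judged by comparing the delay migration over the synthetic aperture with a fast-time resolution cell, roughly 1/B. The sketch below uses illustrative numbers (loosely based on the airborne Mode B geometry of Table 2.4); it is an assumption-laden back-of-the-envelope check, not a prescribed procedure.

```python
import numpy as np

# Quick check of the decoupling problem of Section 3.1.3: when the delay
# migration over the synthetic aperture exceeds a fast-time resolution cell
# (about 1/B), slow-time compression along a fixed tau line fails and a
# walk/bend correction is required first.
c, B, T_A = 3e8, 150e6, 1.0
r0T, r0R, vT, vR = 14.3e3, 8.96e3, 120.0, 120.0
thT, thR = np.deg2rad(8.0), np.deg2rad(38.0)

t = np.linspace(-T_A / 2, T_A / 2, 1001)
rT = np.sqrt(r0T**2 + (vT * t)**2 - 2 * r0T * vT * t * np.sin(thT))
rR = np.sqrt(r0R**2 + (vR * t)**2 - 2 * r0R * vR * t * np.sin(thR))
delay = (rT + rR) / c

migration_cells = (delay.max() - delay.min()) * B   # delay migration in resolution cells
print(f"delay migration over the aperture: {migration_cells:.1f} fast-time cells")
print("walk/bend correction required" if migration_cells > 1 else "negligible migration")
```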
3.2 Bistatic SAR time-domain imaging algorithm The typical representative of the bistatic SAR time-domain imaging algorithm is the time-domain back-projection (BP) imaging algorithm [1], which is a time-domain imaging algorithm that coherently accumulates the migration trajectory along the delay for each resolution unit. The imaging process is shown in Fig. 3.3. The algorithm first compresses the pulse in the fast-time τ direction in echo s(t, τ), gathers the echo energy of each resolution unit for the first time, and obtains the echo S(t, τ) after the fast-time compression; it then divides the area to be imaged into two-dimensional resolution unit grids, each resolution grid corresponding to an image pixel. According to the radar carrier frequency f0, light speed, and attitude measurement positioning data, the
Fig. 3.3 Principle of BP imaging algorithm.
echo time-delay migration trajectory τij(t) = [γT(t; xi, yj) + γR(t; xi, yj)]/c corresponding to the resolution grid center position (xi, yj) in row i, column j, is calculated. Based on this, the compensation phase factor exp[j2πf0τij(t)] is constructed and multiplied with the corresponding fast-time compressed echo data S[t, τij(t)] to obtain the phase-compensated migration trajectory echo data S[t, τij(t)] exp[j2πf0τij(t)]. Finally, the gray value σ̂0(xi, yj) of the image pixel for the resolution grid is obtained by integrating along the delay migration trajectory. After a similar operation for each resolution grid, the registered and focused radar image σ̂0(xi, yj) is obtained. The imaging process of the BP imaging algorithm can be described by the following mathematical formula:

σ̂0(xi, yj) = ∫ S[t, τij(t)] exp[j2πf0τij(t)] dt   (3.5)

The corresponding processing flow is shown in Fig. 3.4.
Fig. 3.4 BP algorithm flow chart. (Raw echo data undergo fast-time pulse compression; attitude measurement data together with the imaging configuration and resolution theory drive resolution cell meshing, calculation of the delay migration trajectory, and calculation of the phase compensation factor; the corresponding data are extracted, phase compensated, and coherently integrated to form the radar image.)
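To make the data flow of Fig. 3.4 concrete, the following is a minimal sketch of the per-pixel back-projection of Eq. (3.5). It assumes the fast-time compressed echo, the platform position histories, and the pixel grid are already available; the function and argument names (bp_image, range_compressed, tx_pos, rx_pos) are illustrative placeholders, and nearest-neighbour extraction stands in for the fast-time interpolation a practical implementation would use.

```python
import numpy as np

c = 299_792_458.0  # speed of light (m/s)

def bp_image(range_compressed, t, tau, tx_pos, rx_pos, grid_xyz, f0):
    """Back-projection per Eq. (3.5).

    range_compressed : complex array [n_slow, n_fast], fast-time compressed echo S(t, tau)
    t, tau           : slow-time and fast-time sample vectors (tau ascending)
    tx_pos, rx_pos   : platform positions per slow time, arrays [n_slow, 3]
    grid_xyz         : pixel centers, array [n_pix, 3]
    f0               : carrier frequency (Hz)
    """
    img = np.zeros(len(grid_xyz), dtype=complex)
    for k, p in enumerate(grid_xyz):
        # bistatic delay migration trajectory tau_ij(t) = (r_T + r_R) / c
        r_t = np.linalg.norm(tx_pos - p, axis=1)
        r_r = np.linalg.norm(rx_pos - p, axis=1)
        tau_ij = (r_t + r_r) / c
        # extract the echo along the trajectory (nearest neighbour in fast time)
        cols = np.clip(np.searchsorted(tau, tau_ij), 0, len(tau) - 1)
        data = range_compressed[np.arange(len(t)), cols]
        # compensate the carrier phase and integrate coherently along slow time
        img[k] = np.sum(data * np.exp(1j * 2 * np.pi * f0 * tau_ij))
    return img
```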
From a calculation efficiency point of view, assuming that the number of echo data slow-time sampling points and resolution grid points are both N, the calculation complexity of the algorithm is O (N3). For practical applications, such a large amount of computation is still very difficult to process in real time, so it is necessary to study more efficient imaging algorithms [2]. Therefore this section focuses on a fast factorization BP algorithm for bistatic SAR.
3.2.1 Fast BP imaging process BP algorithms have a large amount of computation because the delay migration trajectories of different resolution grids intersect, and there are many repetitive calculations among the pixel value computations of these grids. As shown in Fig. 3.3, the echo data S(t0, τ0) at (t0, τ0) is composed of the echoes of different grids. During processing of the BP algorithm, S(t0, τ0) is back-projected to different grids such as B and C many times, and the corresponding calculations of the compensation factor exp[j2πf0τ0] and its multiplication with S(t0, τ0) are also repeated many times. From a more intuitive perspective, the reason for the low computational efficiency of the BP imaging algorithm is that, for each slow time, the calculation processes the pixel values of every resolution grid through which the ground equal delay line passes; thus there are many repetitive calculations. As shown in Fig. 3.5, at the slow time t0, all grids contribute to the
Fig. 3.5 Calculation repeatability of BP algorithm.
value of S(t0, τ0) on the equal delay line with a delay of τ0 in the imaging region. In the process of the BP imaging algorithm, the echo data S(t0, τ0) are back-projected to all the grids through which the equal delay line passes. These grids need to repeat the calculation of the same phase compensation factor exp[j2πf0τ0] and the same multiplication operation S(t0, τ0) exp[j2πf0τ0]. Therefore, if we can reduce this kind of repetitive calculation and carry out batch processing, we can significantly improve the calculation efficiency of the BP imaging algorithm. Based on this consideration, a factorized fast time-domain back-projection algorithm, called the FFBP algorithm for short, has appeared in recent years. This algorithm adopts the ideas of subaperture division and step-by-step merging. First, the whole aperture corresponding to the observation time is divided into several subapertures; back-projection rough imaging is then carried out on each subaperture, and the subimages are merged step by step to improve the imaging accuracy and finally form a full-aperture image. With this method, the slow-time resolution of each subaperture image is lower, and the number of meshes in the slow-time direction decreases. The fewer the grids in the slow-time direction, the fewer times the back-projection value of a projected point on the equal delay line in the imaging scene is recomputed, which significantly reduces this repeated calculation and improves the calculation efficiency. After discretizing Eq. (3.5), we can obtain

σ̂0(x, y) = Σ_{n=1}^{Nt} s[tn, τn(x, y)] exp[j2πf0τn(x, y)]   (3.6)
In this formula, Nt represents the total number of slow-time sampling points in the synthetic aperture time Ta, and tn represents the nth sampling time of the slow time. First, the entire synthetic aperture is decomposed into K subapertures, and K is called the merging factor. Then Eq. (3.6) can be written as:

σ̂0(x, y) = Σ_{k=1}^{K} Σ_{i=1}^{Nt/K} s[tm, τm(x, y)] exp[j2πf0τm(x, y)] = Σ_{k=1}^{K} σ̂k(x, y)   (3.7)
Among them, m = (k − 1)Nt/K + i, and σ̂k(x, y) is the subimage obtained by applying the algorithm to the kth subaperture. Further, each subaperture can be decomposed again: for example, each subaperture is further decomposed into K subapertures:

σ̂k(x, y) = Σ_{m=1}^{K} σ̂k,m(x, y)   (3.8)
Among them, σ̂k,m(x, y) is the subimage formed by the mth subaperture among the K subapertures of the kth subaperture. The preceding steps can be iterated step by step to decompose the aperture down to the smallest subaperture. As shown in Fig. 3.6, it is assumed that the full aperture length is 16 azimuth sampling points, the merging factor K = 2, and the merging series J = 4, where the merging factor is the number of subapertures of the previous stage needed to synthesize a new aperture at the next stage, and the merging series is the number of merging stages needed to form the final full-aperture image. The FFBP algorithm is the inverse process of aperture decomposition. First, the whole synthetic aperture is divided into several subapertures at the bottom layer, and the traditional BP is used for initial rough imaging of each subaperture. Then the coherent superposition of several adjacent subaperture subimages into the image of the next level is iterated, until all the subapertures are merged into the entire aperture and the final radar image is obtained. The specific process of merging is shown in the third step in Section 3.2.3. The subimages obtained by the FFBP algorithm at each level of each subaperture are coarse images with low resolution in the slow-time direction; therefore, in the subimage imaging process, the imaging area can be divided into coarser and fewer image grids in the slow-time direction. If the image
Fig. 3.6 Schematic illustration of FFBP algorithm aperture stepwise decomposition.
mesh of the slow-time direction of the imaging area is not divided into thicker and fewer in each level of subaperture imaging, the amount of FFBP calculation is equivalent to that of the BP algorithm. In each level of subaperture imaging, with the number of grids in the slow-time direction being less, the number of repeated calculations of the back-projection value of the point on the equal delay line in the imaging scene will be less. Therefore the FFBP algorithm can avoid the computational redundancy of BP algorithm to some extent and improve the computational efficiency. Thus the key to improving the computational efficiency of the FFBP algorithm is to adopt the appropriate image scene grid division strategy for each level of subimage.
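The recursive structure just described can be written compactly. The following skeleton is a sketch rather than a full implementation: it shows only the subaperture division and the level-by-level merging, while the coarse BP imaging on each bottom-level subaperture and the interpolation and phase-compensation step of the merge are left as placeholder callables (coarse_bp, merge_pair), since they depend on the grid division rules derived in Section 3.2.2.

```python
import numpy as np

def ffbp(echo, slow_times, coarse_bp, merge_pair, K=2, L_min=2):
    """Skeleton of the FFBP recursion (subaperture division + level-by-level merging).

    echo       : fast-time compressed data, split along slow time
    slow_times : slow-time samples belonging to the current (sub)aperture
    coarse_bp  : callable(sub_echo, sub_times) -> coarse subimage on a coarse grid
    merge_pair : callable(img_a, img_b) -> merged subimage on a finer angular grid
    Both callables are placeholders for the grid division, interpolation, and
    phase-compensation steps of Sections 3.2.2 and 3.2.3.
    """
    n = len(slow_times)
    if n <= L_min:
        # bottom level: plain BP on the shortest subaperture, coarse angular grid
        return coarse_bp(echo, slow_times)
    # divide the aperture into K subapertures and image each one recursively
    sub_images = [
        ffbp(echo[i * n // K:(i + 1) * n // K],
             slow_times[i * n // K:(i + 1) * n // K],
             coarse_bp, merge_pair, K, L_min)
        for i in range(K)
    ]
    # merge adjacent subimages into the next-level image (coherent superposition
    # after coordinate unification and nearest-neighbour interpolation)
    merged = sub_images[0]
    for s in sub_images[1:]:
        merged = merge_pair(merged, s)
    return merged
```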
3.2.2 Image meshing and hierarchical merging According to the analysis in the preceding section, when the FFBP algorithm is used for imaging, it is necessary to determine the method of subimage meshing at each level. In monostatic SAR, FFBP takes the center moment of the entire aperture as reference and uses the ground projection of the platform line of sight together with the equal delay lines to divide the resolution unit grid, and the final image is located in the polar coordinate system composed of distance and azimuth. In bistatic SAR, the ground equal delay lines and the ground projection of the transmitting station line of sight can be used to divide the resolution cell grid, and the final image is located in the elliptic polar coordinate system composed of the bistatic distance and the transmitting station viewing angle. Of course, the image coordinate system can also be defined by the ground projection of the receiving station or the equivalent phase center line of sight. After the method of subimage meshing is determined, the two-dimensional mesh interval needs to be solved. Because the mesh interval of the subimage must be smaller than its resolution to complete the precise merging of the subimage, the equal delay line interval needs to satisfy the following formula:

Δτ ≤ 1/Br   (3.9)
Since only the phase of the center point of the grid is compensated for during subaperture synthesis, the principle for determining the line-of-sight interval Δβ of the transmitting station is that the projection value of the scattering points on both sides of the boundary of the grid at the two ends of the subaperture can form in-phase superposition. As shown in Fig. 3.7, T1 and T2, R1 and R2 are the first and last points of a subaperture in a certain stage of the transmitting station and receiving station, and the corresponding
Fig. 3.7 Bistatic distance change of bistatic SAR scattering point.
moments are t1 and t2. The distance dT between T1 and T2 is the length of the subaperture of the transmitting station, and the distance dR between R1 and R2 is the length of the subaperture of the receiving station. The ellipse segment shown by the dotted line in the figure is the equal delay line at moment t1, and the corresponding bistatic distance is rb. The two scattering points A and B on the equal delay line represent the boundary points of the line-of-sight angle interval of the transmitting station, and their line-of-sight angle interval seen from T1 is Δβ = β1 − β2. The distances from T1 to A and B are rT(t1; A) and rT(t1; B), and the distances from R1 to A and B are rR(t1; A) and rR(t1; B). The distances from T2 to A and B are rT(t2; A) and rT(t2; B), the distances from R2 to A and B are rR(t2; A) and rR(t2; B), and the baseline vector between T1 and R1 is b = [bx, by, bz]. At moment t1 the bistatic distance sums of A and B are both equal to rb, so their difference is zero; at moment t2 the bistatic distance sums of A and B differ. The bistatic distance sums of A and B at t2 are

r(t2; A) = rT(t2; A) + rR(t2; A)
r(t2; B) = rT(t2; B) + rR(t2; B)   (3.10)
The difference between the bistatic distance sums of the two points is

ΔrAB = r(t2; A) − r(t2; B)   (3.11)

where

rT(t2; A) = [rT²(t1; A) + dT² − 2dT rT(t1; A) cos β1]^(1/2)
rT(t2; B) = [rT²(t1; B) + dT² − 2dT rT(t1; B) cos β2]^(1/2)
rR(t2; A) = [rR²(t1; A) + dR² − 2dR(by + rT(t1; A) cos β1)]^(1/2)
rR(t2; B) = [rR²(t1; B) + dR² − 2dR(by + rT(t1; B) cos β2)]^(1/2)   (3.12)

Therefore

ΔrAB = rT(t2; A) − rT(t2; B) + rR(t2; A) − rR(t2; B)
     ≈ rT(t1; A) + [dT² − 2dT rT(t1; A) cos β1]/[2rT(t1; A)] − rT(t1; B) − [dT² − 2dT rT(t1; B) cos β2]/[2rT(t1; B)]
       + rR(t1; A) + [dR² − 2dR(by + rT(t1; A) cos β1)]/[2rR(t1; A)] − rR(t1; B) − [dR² − 2dR(by + rT(t1; B) cos β2)]/[2rR(t1; B)]   (3.13)

According to the following approximate relationships:

rR(t1; A) ≈ rR(t1; B),  rT(t1; A) ≈ rT(t1; B),  cos β1 − cos β2 ≈ −Δβ sin β1   (3.14)

Eq. (3.13) can be reduced to

ΔrAB ≈ rT(t1; A) Δβ sin β1 [dT/rT(t1; A) + dR/rR(t1; A)]   (3.15)
In order to ensure that the echoes at the two ends of the subaperture form a coherent superposition on the echo projection values of the two pixel points A and B at the grid boundary, it is required that the conditions of
the following formula should be satisfied between ΔrAB and the radar operating wavelength λ:

ΔrAB ≤ λ/4   (3.16)
From the formulas in (3.15) and (3.16), the value range of Δβ can be deduced, which is the condition that the line-of-sight interval of the transmitting station of the bistatic SAR needs to meet:
Δβ ≤ λ / {4rT(t1; A) sin β1 [dT/rT(t1; A) + dR/rR(t1; A)]}   (3.17)

In translational-invariant bistatic SAR dT = dR = d, so

Δβ ≤ λ / {4rT(t1; A) d sin β1 [1/rT(t1; A) + 1/rR(t1; A)]}   (3.18)
Formula (3.18) is the constraint relation between the line-of-sight angle interval of the subimage and the length of the subaperture; it is determined by the wavelength and the subaperture length and is inversely proportional to the subaperture length. According to this formula, the size of the line-of-sight interval in the resolution cell grid of each level of image can be calculated. To sum up, Formulas (3.9) and (3.17), respectively, give the conditions for dividing the equal delay line interval and the transmitting station line-of-sight interval in the subimages of the FFBP algorithm for bistatic SAR, so that the resolution cell grid of each level of subimage can be divided. Based on the preceding FFBP subimage mesh division method, the level-J subaperture mesh is divided, BP coarse-focusing imaging is performed on each subaperture, and the subimages are merged step by step to improve the subimage imaging accuracy. Finally, the full-aperture high-precision image is formed. Assuming that the current merging level is j (that is, the subimages of level j + 1 have already been obtained), it is necessary to combine K consecutive subapertures of level j + 1 into a new subaperture to obtain the subimage of level j. First, the imaging grid is divided according to the subaperture center position of level j and the two-dimensional sampling interval. According to Eq. (3.18), since the line-of-sight angle interval is inversely proportional to
the subaperture length d, the line-of-sight angle interval at this level is 1/K of that of the previous level:

Δβj = Δβj+1/K   (3.19)
After the subimage grid interval is determined, the K consecutive subapertures of level j + 1 can be merged into a new subaperture of level j. When merging into a new subaperture, because the central position of each subimage pixel grid in level j + 1 defines a different elliptic polar coordinate system, they need to be transformed into a unified rectangular coordinate system. In this rectangular coordinate system, each pixel value of the level-j subimage can be solved by the method of nearest-neighbor interpolation: first find, for each pixel grid center of the level-j subimage, the adjacent pixel grid centers in the K level-(j + 1) subimages; then compensate the phase error caused by the distance difference and carry out the coherent superposition interpolation, so as to obtain the pixel gray value of the level-j subimage. For each subaperture of level j, the corresponding level-j subimage can be obtained by the preceding interpolation operation. Because at level j the sampling interval of the line-of-sight angle is reduced, and the gray value of each pixel is the result of the coherent superposition of the adjacent pixel values of the level-(j + 1) resolution grids, the line-of-sight angle resolution of the subimage is improved compared with that of level j + 1. Finally, the level-j subimages need to be transformed from the rectangular coordinate system back to their own elliptic polar coordinate systems.
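The two mesh intervals and their per-level refinement can be computed directly from Eqs. (3.9), (3.18), and (3.19). The sketch below does so; the numerical values in the example call (wavelength, track shortcuts, viewing angle, subaperture length) are assumptions chosen only for illustration.

```python
import numpy as np

def subimage_grid_intervals(Br, lam, rT, rR, beta1, d, K, J):
    """Grid intervals for each FFBP level, per Eqs. (3.9), (3.18), and (3.19).

    Br     : transmitted bandwidth (Hz);  lam : wavelength (m)
    rT, rR : track shortcuts of transmitter / receiver to the grid point (m)
    beta1  : transmitter line-of-sight angle (rad)
    d      : subaperture length at the coarsest level J (m)
    K, J   : merging factor and merging series
    """
    d_tau = 1.0 / Br                                                   # Eq. (3.9)
    d_beta_J = lam / (4 * rT * d * np.sin(beta1) * (1 / rT + 1 / rR))  # Eq. (3.18)
    # Eq. (3.19): the angular interval shrinks by 1/K at each merging level,
    # returned here from the coarsest level J down to the finest level 1
    return d_tau, [d_beta_J / K**(J - j) for j in range(J, 0, -1)]

# example with assumed values (X-band, 10 / 14 km track shortcuts, 100 m subaperture)
print(subimage_grid_intervals(Br=100e6, lam=0.031, rT=10e3, rR=14e3,
                              beta1=np.deg2rad(60), d=100, K=2, J=4))
```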
3.2.3 Algorithm process and performance analysis Sections 3.2.1 and 3.2.2 analyze the fast BP algorithm for bistatic SAR and the subimage grid division and hierarchical merging method, according to which the processing flow of the imaging algorithm can be given, its computational efficiency analyzed, and its imaging performance verified by numerical simulation. 1. Algorithm process According to the image mesh partitioning method determined by Eqs. (3.9) and (3.17) and the subaperture division and subimage merging method determined by Eqs. (3.6)–(3.8), the algorithm flow chart can be given in combination with the subimage hierarchical merging method in Section 3.2.3, as shown in Fig. 3.8.
Fig. 3.8 FFBP algorithm flowchart of bistatic SAR. (Echo fast-time compression; determination of the merging series J, the merging factor K, and the level-J subaperture length LJ; level-J imaging: divide the imaging grid and perform BP imaging for each subaperture; levels J − 1 through 1: divide the imaging grid and interpolate and overlay the previous-level subimages; finally convert the image to rectangular coordinates to obtain the bistatic SAR full-aperture image.)
Step 1: Compress the echo in the fast-time domain. Based on the whole aperture length, determine the merging series J, the merging factor K, and the subaperture length LJ of level J. Step 2: According to Formulas (3.9) and (3.18), determine the size of the subimage grid, divide the imaging grid, and then use the BP algorithm for each subaperture to obtain the bottom-level subimages. Step 3: According to the subimage hierarchical merging method in Section 3.2.2, the subimages of level j + 1 are merged into the subimages corresponding to the subapertures of level j according to the subaperture subordination relationship of the two levels. Step 4: Repeat step 3 until the subapertures are merged into the full aperture; the image in the elliptic polar coordinate system is then the BP full-aperture image. Step 5: Convert the image from elliptic polar coordinates to the rectangular coordinate system and output the FFBP imaging results of bistatic SAR. 2. Computational analysis Assume that the size of the echo matrix is Nτ × Nt (fast-time direction × slow-time direction) and the size of the imaging scene is Nx × Ny (angle direction × fast-time direction). Considering that the scene imaging scale is of the same order of magnitude as the number of slow-time sampling points, here we set Ny = Nx = Nt = Nτ = N, the minimum subaperture length LJ = 2, and the merging factor K = 2; then the merging series is J = log2N, and the number of floating-point operations and the relative relationship of the FFBP algorithm and the BP algorithm can be estimated accordingly. For the traditional BP algorithm, the floating-point operation order is O(N³). That is, the total floating-point operation number
is proportional to the third power of N, while the operation count of the FFBP algorithm needs to be calculated level by level. Level J: the total aperture is divided into K^(J−1) = N/2 subapertures, each subaperture is composed of two slow-time sampling points, and the corresponding subimage size of each subaperture is 2 × N (angle direction × fast-time direction). The total number of floating-point operations required for level J is 2N², since level J requires a coarse image with two back-projections for each subaperture. Level j (j ≠ J): the full aperture is divided into 2^(j−1) subapertures, each of which is a combination of two subapertures of the previous level. Each subimage, which is formed by the interpolation superposition of two subimages of the former level, contains N/2^(j−1) × N (angle direction × fast-time direction) pixels. There are 2N² interpolation operations in total, and each interpolation requires 2(2M − 1) floating-point operations, where M is the number of neighboring points required for the interpolation operation. Therefore, the number of floating-point operations required for level j is 4(2M − 1)N². Since the number of aperture synthesis levels of the FFBP algorithm is J = log2N, the total number of floating-point operations of FFBP is 2N² + 4(2M − 1)N²(log2N − 1). Therefore the order of floating-point operations of the FFBP algorithm is O(N²log2N). Compared with the traditional BP algorithm, the operation time of the FFBP algorithm is greatly reduced, and the reduction factor is

K = O(N³) / O(N²log2N)   (3.20)
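The reduction factor of Eq. (3.20) can be evaluated from the leading-order operation counts derived above. The sketch below assumes the counts N³ for BP and 2N² + 4(2M − 1)N²(log2N − 1) for FFBP, with M interpolation neighbours.

```python
import numpy as np

def bp_flops(N):
    # leading-order count of the direct BP algorithm
    return N**3

def ffbp_flops(N, M=2):
    # leading-order count of the FFBP algorithm derived above
    return 2 * N**2 + 4 * (2 * M - 1) * N**2 * (np.log2(N) - 1)

for N in (1024, 4096, 16384):
    # speed-up factor K of Eq. (3.20) for a few scene sizes
    print(N, bp_flops(N) / ffbp_flops(N))
```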
3. Performance analysis In the subaperture image synthesis of the FFBP algorithm, the pixel value of the latter subimage is formed by the interpolation superposition of the corresponding pixel value of the former subimage. The processing precision is slightly lost, but sufficient precision is maintained and the computational efficiency is greatly improved. The following is analyzed and verified by specific simulation. The simulation scene consists of nine points with a distance of 100 m in three rows and three columns. The initial coordinates of the transmitting station and the receiving station are (10, 0, 10)km and (6, 10, 10)km,
Fig. 3.9 FFBP algorithm and BP algorithm imaging: (a) FFBP algorithm point targets imaging; (b) BP algorithm point targets imaging. (Axes: along track and cross track of the transmitter, m.)
the speed is 100 m/s, the transmitted signal bandwidth is 100 MHz, and the carrier frequency is 9.6 GHz. Fig. 3.9 shows the imaging results of the BP algorithm and the FFBP algorithm for the scene target points. It can be seen that FFBP achieves good focusing for the nine target points in the scene, and the imaging effect is basically the same as that of the BP algorithm. Fig. 3.10 shows the contour map of the image corresponding to the center point of the scene. It can be seen that the FFBP algorithm has the same focusing effect as the BP algorithm. Table 3.1 compares the imaging quality indicators. It can be seen that the imaging quality of the FFBP algorithm is very close to that of the BP algorithm, but the running time is reduced by a factor of 84.
Fig. 3.10 Contour map of the center point of the scene: (a) FFBP algorithm point target imaging; (b) BP algorithm point target imaging. (Axes: along track and cross track of the transmitter, m.)
Table 3.1 BP and FFBP algorithm point target imaging quality indicators.

Algorithm | Time (ms) | Fast-time 3 dB main lobe width (m) | Slow-time 3 dB main lobe width (m) | Fast-time PSLR (dB) | Slow-time PSLR (dB)
BP | 887,016 | 1.32 | 1.20 | −13.29 | −13.30
FFBP | 105 | 1.32 | 1.21 | −13.31 | −13.24
3.3 Bistatic SAR frequency-domain imaging algorithm The frequency-domain imaging algorithm of bistatic SAR is usually related to the imaging mode, but considering that the translational-invariant mode and one-station fixed mode are special cases of the translational variant mode, the frequency-domain imaging algorithm of the translational variant mode is mainly introduced here. In the geometric configuration of translational variant bistatic SAR, the velocity vectors of the receiving and transmitting platform are different, and the relative position of the receiving and transmitting platform changes in the imaging process. The bistatic SAR echo has the problems of timedelay migration, two-dimensional coupling, and two-dimensional spatial variation of the slow-time reference function with the target position. These problems become more serious in the squint observation. In order to overcome the impact of these problems on the construction of efficient imaging algorithms, it is necessary to find ways to correct migration, eliminate space variance and perform decoupling, and create conditions for homogeneous collection, dimension-reducing processing, and batch fast computation. However, the existing frequency-domain imaging algorithms of the translational-variant mode, such as the 2D scaled IFFT algorithm [3–6], NLCS algorithm, bistatic RD algorithm based on the Loffeld model [7] [8], and the bistatic CS algorithm, all simply characterize the higher-order variation of the space-variation along the track of the receiving station with a constant factor, and do not take into account the space-variance of the receiver along the flight path with time-delay migration. They also ignore the two-dimensional high-order coupling problem. There is a large error in the echo model, and the error will be greater during squint observation. Therefore, there will be a significant centrifugal (i.e., deviation from the reference point) defocus phenomenon in the slow-time direction. To this end, a bistatic SAR frequency-domain imaging algorithm for the translational variant side-looking/squint-looking mode is introduced here.
3.3.1 ω–k imaging process In order to image bistatic side-looking/squint-looking SAR using a frequency-domain algorithm, it is necessary to use a two-dimensional spectrum model (BPTRS) of the scattering point echo with higher accuracy, such as the generalized Loffeld two-dimensional spectrum model described in Section 2.4.4. At the same time, the generalized Loffeld model needs to be linearized in two-dimensional space to separate the location variable of the scattering point from the two-dimensional frequency, creating conditions for the elimination of the echo space-variant characteristics. Subsequently, the two-dimensional Stolt transform is used to eliminate the two-dimensional space variance and remove the two-dimensional coupling. The constructed bistatic ω–k algorithm can achieve high-precision imaging of bistatic squint-looking and forward-looking SAR in the case of unequal speeds and large bistatic angles between the transmitting and receiving stations. 1. Generalized Loffeld model of the 2D spectrum of scattering point echo Fig. 3.11 shows the imaging geometry of the dual-flight translational variant mode squint-looking SAR, where the receiving station is defined in the x, y, z rectangular coordinate system and flies at speed vR in a direction parallel to the y axis. The transmitting station is defined in the x′, y′, z′ coordinate system and flies at speed vT in a direction parallel to the y′ axis. The two coordinate systems have the same origin O, that is, the scene center. If the angle between the y axis and the y′ axis is α, the relationship between the two coordinate systems is:
[x′, y′]ᵀ = [[cos α, sin α], [−sin α, cos α]] [x, y]ᵀ   (3.21)
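As a small worked example of Eq. (3.21), the following sketch rotates receiver-frame ground coordinates into the transmitter frame; the sign convention of the off-diagonal terms follows the standard rotation matrix, since the signs are not legible in the printed equation, and the numerical values are arbitrary.

```python
import numpy as np

def to_transmitter_frame(x, y, alpha):
    """Rotate receiver-frame ground coordinates (x, y) into transmitter-frame
    coordinates (x', y') per Eq. (3.21); alpha is the angle between y and y' (rad)."""
    R = np.array([[np.cos(alpha),  np.sin(alpha)],
                  [-np.sin(alpha), np.cos(alpha)]])
    return R @ np.array([x, y])

# example: a scatterer 1 km along x and 2 km along y in the receiver frame, alpha = 30 deg
print(to_transmitter_frame(1000.0, 2000.0, np.deg2rad(30)))
```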
Fig. 3.11 Geometric relationship of translational variant bistatic side-looking/squintlooking SAR imaging with two flight paths.
P is the scattering point whose coordinates are (x, y) in the imaging area, and its corresponding coordinates are (x′, y′) in the coordinate system x′, y′, z′; hT and hR are the heights of the transmitting station and receiving station, respectively. When the slow-time variable is zero, the coordinates of the transmitting station are (xT0, yT0, hT) and the coordinates of the receiving station are (xR, yR, hR); rT(x′) and rR(x) are the track shortcuts of the transmitting station and the receiving station, respectively; θsT(x′) and θsR(x) are the squint angles of the transmitting station and the receiving station, and, like rT(x′) and rR(x), are functions of x′ and x, respectively. For simplicity, we use rT, rR, θsT, θsR to represent them. According to the analysis in Section 2.4.4, the analytical expression of the two-dimensional spectrum of the scattering point P can be given as:

S2f(ft, fτ; x, y) = S0(fτ) exp{jΦG(ft, fτ; x, y)}   (3.22)
where S0(fτ) is the spectrum of the transmitted baseband signal, and ft and fτ are the slow-time frequency and fast-time frequency;

ΦG(ft, fτ; x, y) = (2π/c)[rT FT(ft, fτ) + rR FR(ft, fτ)] + 2π[ftT(ft)t0T + ftR(ft)t0R]   (3.23)
where the expressions of FT( ft, fτ), FR( ft, fτ), ftT( ft), and ftR( ft) are shown in Chapter 2, as Eqs. (2.70), (2.71), (2.85), and (2.86). Here t0T and t0R are the track shortcuts of the transmitting station and the receiving station, respectively. 2. Spatial linearization of two-dimensional spectrum of scattering point echo To construct the ω k imaging algorithm of the translational variant bistatic side-looking/squint-looking SAR, spatial two-dimensional linearization of the two-dimensional spectrum of the scattering point echo is required. The purpose is to separate the first-order terms for track shortcuts and y axis coordinates from the higher-order terms for analysis and processing, respectively. Put Eq. (3.23) along the direction of rR and the y direction of the receiving aircraft flight direction. A Taylor expansion is carried out for track shortcuts rR0 and y axis coordinates y0 corresponding to pixel point (x0, y0). For the moment, the remaining second-order terms of Δr and Δy and their coupling terms are not considered, and only first-order terms are retained. Formula (3.24) can be obtained, and the influence of the residual high-order terms
and the corresponding compensation measures can be referred to in the relevant literature [9, 10].

ΦG(ft, fτ) ≈ 2π{[rT0 FT^ref + rR0 FR^ref]/c − ftR^ref yR/vR + ftT^ref t0T^ref}
 + 2πΔr{[ar1 FT^ref + pr1(ft, fτ)rT0 + qr1(ft, fτ)rR0 + FR^ref]/c + ςTr1(ft)t0T^ref − ςRr1(ft)yR/vR + gr1 ftT^ref}
 + 2πΔy{[ay1 FT^ref + py1(ft, fτ)rT0 + qy1(ft, fτ)rR0]/c + ςTy1(ft)t0T^ref − ςRy1(ft)yR/vR + gy1 ftT^ref + ftR^ref/vR}   (3.24)
where Δr = rR − rR0 and Δy = y − y0 are the track-shortcut difference and the y-axis position difference between the scattering point and the reference point, respectively; ar1 and ay1 are the coefficients of the linear expansion of rT with respect to rR and y, respectively; pr1 and qr1 are the first-order expansion coefficients of FT and FR with respect to Δr; py1 and qy1 are, respectively, the first-order expansion coefficients of FT and FR with respect to Δy. Also, ςTr1 and ςRr1 are the first-order expansion coefficients of ftT(ft) and ftR(ft) with respect to Δr; ςTy1 and ςRy1 are, respectively, the first-order expansion coefficients of ftT(ft) and ftR(ft) with respect to Δy; gr1 and gy1 are, respectively, the first-order expansion coefficients of t0T with respect to Δr and Δy; FT^ref = FT(ft, fτ; rR0, y0), FR^ref = FR(ft, fτ; rR0, y0), ftT^ref = ftT(ft; rR0, y0), and ftR^ref = ftR(ft; rR0, y0); t0T^ref is the track-shortcut moment of the transmitting station with respect to the scene center. In Formula (3.24), the first term represents the space-invariant phase, corresponding to the spectrum phase of the reference point apart from the phase of S0(fτ); it can be compensated by multiplying by the reference function
in the two-dimensional frequency domain. The second term is the linear space-variant part in Δr, and the third term is the linear space-variant part in Δy. So far, the spatial variables in the two-dimensional spectrum of the point target have been separated, and the coefficients of Δr and Δy in the previous formula are only functions of the frequency variables. 3. Two-dimensional Stolt transformation After spatial linearization, the two-dimensional Stolt transformation is the key step to eliminate the two-dimensional space-variant characteristics of the echo and to remove the two-dimensional coupling. However, before the transformation, the phase-conjugate multiplication of the two-dimensional spectrum of the entire scene should be carried out based on the phase of the two-dimensional spectrum of the reference point, so as to focus the scene's central reference point in the image domain while retaining the differential phase information related to the position of each scattering point in the two-dimensional spectrum. (1) Multiplication of reference functions After the scene echo data are transformed to the two-dimensional frequency domain, they can be multiplied by the reference function of Eq. (3.25), so that the obtained spectrum corresponds to the image domain and the two-dimensional focusing of the reference point is realized. This is because, according to the quantitative relationship between Formula (3.22), Formula (3.24), and the reference function, at the reference point (rR0, y0), Δr = 0 and Δy = 0; after multiplying by the reference function, the spectrum phase of the reference point is converted to zero phase, and its spectrum becomes a two-dimensional rectangular real function, corresponding to a two-dimensional sinc function in the image domain, that is, full focus.

SRFM(ft, fτ; rR0) = S0*(fτ) exp{j2π[(rT0 FT^ref + rR0 FR^ref)/c − ftR^ref yR/vR + ftT^ref t0T^ref]}   (3.25)

However, when the reference function of Eq. (3.25) is used to multiply the echo spectrum of scattering points (rR, y) with Δr ≠ 0 or Δy ≠ 0 other
than the reference point, there is still a residual phase in the obtained spectrum:

ϕRES(ft, fτ; rR, y, rR0, y0) = 2π(Δr/c)[ar1 FT^ref + pr1(ft, fτ)rT0 + qr1(ft, fτ)rR0 + FR^ref + ςTr1(ft)t0T^ref c − ςRr1(ft)yR c/vR + gr1 ftT^ref c]
 + 2π(Δy/vR){[ay1 FT^ref + py1(ft, fτ)rT0 + qy1(ft, fτ)rR0]vR/c + ςTy1(ft)t0T^ref vR − ςRy1(ft)yR + gy1 ftT^ref vR + ftR^ref}   (3.26)
Therefore, only partial compression can be achieved in the image domain, and good focusing cannot be achieved. (2) Stolt transform The residual phase in the previous equation is linear in Δr and Δy. Letting the coefficients of Δr and Δy each define a new frequency variable, a two-dimensional frequency transformation relationship can be obtained:

fτ′ + f0 = ar1 FT^ref + pr1(ft, fτ)rT0 + qr1(ft, fτ)rR0 + FR^ref + ςTr1(ft)t0T^ref c − ςRr1(ft)yR c/vR + gr1 ftT^ref c
ft′ = ay1 FT^ref vR/c + py1(ft, fτ)rT0 vR/c + qy1(ft, fτ)rR0 vR/c + ςTy1(ft)t0T^ref vR − ςRy1(ft)yR + gy1 ftT^ref vR + ftR^ref   (3.27)
where fτ′ and ft′ are, respectively, the fast-time and slow-time frequency after transformation. Thus, the value of ϕRES(·) at the coordinate (ft, fτ) in the original fast- and slow-time frequency domain can be mapped to the value at (ft′, fτ′) in the new fast- and slow-time frequency domain according to the two-dimensional frequency transformation relation of Eq. (3.27). After the preceding frequency transformation, the following is obtained:

ϕRES(ft′, fτ′; rR, y, rR0, y0) ≈ 2π(fτ′ + f0)Δr/c + 2πft′Δy/vR   (3.28)
where Δr and Δy reflect the location information of the target. As the phase is linear to both spatial and frequency variables, it is a linear phase shift factor. According to the characteristics of the Fourier transform, the phase shift
factor corresponds to the position of the scattering point relative to the reference point in the image domain. Therefore the two-dimensional inverse fast Fourier transform (IFFT) can be used to obtain the focused image of the echo energy of each point. This process thus achieves the goal of eliminating the centrifugal (that is, deviating from the reference point) defocusing phenomenon in the image domain, and thereby eliminates the image depth-of-field atrophy caused by the two-dimensional space variation. The essence of Eq. (3.27) is a two-dimensional nonlinear affine transformation of the spectrum, a geometric-deformation mathematical operation on the original spectrum that eliminates the frequency-nonlinear phase of ϕRES(·) and retains the frequency-linear term. The effect of this transformation is that the spectrum phase contours of the scattering point echoes other than the reference point are transformed from clusters of nonequally spaced curves into clusters of equally spaced parallel straight lines. Moreover, the direction and density of the parallel straight-line clusters of the phase contours correspond to the relative coordinates Δr and Δy of the scattering point with respect to the reference point, as shown in Fig. 3.12. (3) Stolt transformation of special geometric modes The preceding bistatic SAR imaging algorithm for the translational variant mode can also be applied to some special modes. In order to facilitate the construction of a bistatic SAR imaging algorithm, expressions of the Stolt transform for some special bistatic SAR modes are given here. a. Cross-flight translational variant mode In this mode, the velocity vectors of the transmitting and receiving stations are orthogonal: α = π/2, so ar1 = 0 and gy1 = 0; substituting these two values into Eq. (3.27) gives the corresponding Stolt transform expression.
Fig. 3.12 Stolt transform phase contour change schematic diagram.
b. Parallel-flight translational variant mode In this mode, the transmitting and receiving stations fly along two parallel straight lines, but at different speeds, so vT ≠ vR and α = 0, corresponding to ay1 = 0 and gr1 = 0. At this time, the Stolt transformation is still a two-dimensional transformation according to the analysis in Section 3.1, although the space-variant characteristics are weaker than those of the cross-flight translational variant mode. c. One-station-fixed translational variant mode In this mode, one platform is stationary, and the aperture is synthesized only through the motion of the other platform, so the Doppler frequency is completely provided by the moving platform. If the transmitting station is stationary, then ftT = 0 and ftR = ft, and then FT^ref = fτ + f0, FR^ref = FR, py1 = 0, qy1 = 1/v, ςTy1 = 0, ςRy1 = 0, gy1 = 0, gr1 = 0. At this point, the Stolt transform degenerates to:

fτ′ + f0 = ar1(fτ + f0) + FR
ft′ = ay1(fτ + f0)vR/c + ft   (3.29)

In Formula (3.29), the fast-time frequency transformation is still a nonlinear mapping, while the slow-time frequency transformation degenerates into a linear mapping. d. Parallel-flight translational invariant mode In this mode, the flight directions of the transmitting and receiving stations are parallel and the speeds are the same, i.e., α = 0 and vT = vR = v, corresponding to ay1 = 0, py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 1/v, gr1 = 0. Therefore, the Stolt transform degenerates to:

fτ′ + f0 = ar1 FT^ref + pr1(ft, fτ)rT0 + qr1(ft, fτ)rR0 + FR^ref + ςTr1(ft)t0T^ref c − ςRr1(ft)yR c/v   (3.30)

Therefore the Stolt transform in this case degenerates into a one-dimensional transform, which only requires frequency transformation along the fast-time frequency direction. This is because, according to the analysis in Section 3.1, the model is one-dimensional in this case. e. Monostatic translational invariant mode This mode can be regarded as a special case of the translational invariant mode, with ftT = ftR = ft/2, pr1 = 0, qr1 = 0, ςTr1 = 0, ςRr1 = 0, ar1 = 1, ay1 = 0, py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 1/v, and gr1 = 0. At this time, the Stolt transform degenerates to:

fτ′ + f0 = √[(fτ + f0)² − (cft/(2v))²]   (3.31)
By comparing Eq. (3.31) with the Stolt transform formula adopted in the ω–k algorithm of monostatic SAR in the literature, it can be seen that the two are completely consistent. This shows that Eq. (3.27) is a unified expression for the monostatic/bistatic Stolt transformation.
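Putting the pieces of this subsection together, the processing chain is: two-dimensional FFT, multiplication by the reference function of Eq. (3.25), two-dimensional Stolt interpolation per Eq. (3.27), and a two-dimensional inverse FFT. The sketch below shows only this skeleton; the reference phase and the Stolt mapping are passed in as placeholder callables, because their closed forms depend on the geometry-specific coefficients derived above, and a simple linear regridding stands in for the higher-order interpolation kernels used in practice.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def omega_k_bistatic(echo, ft, ftau, ref_phase, stolt_map):
    """Skeleton of the frequency-domain (omega-k) chain of this section.

    echo      : raw echo in the (t, tau) domain, array [n_slow, n_fast]
    ft, ftau  : fftshift-ordered (monotonic) slow-time / fast-time frequency axes
    ref_phase : callable(FT, FTAU) -> phase of the reference function, Eq. (3.25)
    stolt_map : callable(FT_new, FTAU_new) -> (ft, ftau), the inverse of the
                two-dimensional Stolt mapping of Eq. (3.27)
    """
    # 1. two-dimensional FFT of the echo
    spec = np.fft.fftshift(np.fft.fft2(echo))
    FT, FTAU = np.meshgrid(ft, ftau, indexing="ij")
    # 2. bulk compression: conjugate reference phase focuses the scene centre
    spec *= np.exp(-1j * ref_phase(FT, FTAU))
    # 3. two-dimensional Stolt interpolation onto the new (ft', ftau') grid
    old_ft, old_ftau = stolt_map(FT, FTAU)
    pts = np.stack([old_ft.ravel(), old_ftau.ravel()], axis=-1)
    interp_re = RegularGridInterpolator((ft, ftau), spec.real,
                                        bounds_error=False, fill_value=0.0)
    interp_im = RegularGridInterpolator((ft, ftau), spec.imag,
                                        bounds_error=False, fill_value=0.0)
    spec = (interp_re(pts) + 1j * interp_im(pts)).reshape(spec.shape)
    # 4. two-dimensional inverse FFT: each scatterer focuses at its own offset
    return np.fft.ifft2(np.fft.ifftshift(spec))
```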
3.3.2 The role of the two-dimensional Stolt transformation The two-dimensional Stolt transform is the core step of the previously discussed imaging algorithm. Further clarifying the physical significance of the two-dimensional Stolt transform is very important for understanding the imaging algorithm and realizing the process of de-space-variance and decoupling. It may be easier to illustrate this by considering three scattering points in the imaging scene: P0, PM, and PN. Their positions are shown in Fig. 3.13, where P0 is the reference point and its position coordinate is (rR0, y0). For PM, Δy = 0, and only the fast-time frequency transformation affects the focusing performance of the PM point. For PN, Δr = 0, and only the slow-time frequency transformation affects the focusing performance of the PN point.
Fig. 3.13 Top-view structure of a translational variant bistatic squint-looking SAR.
1. Fast-time frequency transform Expanding FT^ref and FR^ref, pr1(ft, fτ), and qr1(ft, fτ) in second-order Taylor series with respect to fτ at zero frequency, it can be obtained that:

fτ′ + f0 ≈ [ar1 DT(ftT^ref)f0 + DR(ftR^ref)f0 + pr1(0, ft)rT0 + qr1(0, ft)rR0 + ςTr1(ft)t0T^ref c − ςRr1(ft)yR c/vR + gr1 ftT^ref c]
 + [ar1/DT(ftcT^ref) + 1/DR(ftcR^ref) + ρTr1(ftcT^ref)rT0 + ρRr1(ftcR^ref)rR0]fτ
 + [ar1/DT(ftT^ref) − ar1/DT(ftcT^ref) + 1/DR(ftR^ref) − 1/DR(ftcR^ref)]fτ
 + {[ρTr1(ft) − ρTr1(ftcT^ref)]rT0 + [ρRr1(ft) − ρRr1(ftcR^ref)]rR0}fτ
 + [ρTr2(ft)rT0 + ρRr2(ft)rR0 − ar1 c²(ftT^ref)²/(2DT³(ftT^ref)vT²f0³) − c²(ftR^ref)²/(2DR³(ftR^ref)vR²f0³)]fτ²   (3.32)

where DT(ftT^ref) = √[1 − (c ftT^ref/(vT f0))²] and DR(ftR^ref) = √[1 − (c ftR^ref/(vR f0))²];
ρTr1(ft) and ρRr1(ft) are the linear terms of pr1(ft, fτ) and qr1(ft, fτ) with respect to fτ; ρTr2(ft) and ρRr2(ft) are the second-order terms of pr1(ft, fτ) and qr1(ft, fτ)
with respect to fτ; ftcT^ref = ftcT(rR0, y0) and ftcR^ref = ftcR(rR0, y0); ftcT and ftcR are, respectively, the centroids of the Doppler contributions of the transmitter and the receiver. In this formula, the first term is constant with respect to the fast-time frequency and represents the residual Doppler shift and Doppler modulation after the reference function multiplication. As shown in Section 2.4.4, the two-dimensional spectrum of a point target for translational variant bistatic SAR can be decomposed into two parts, corresponding to the contributions of the transmitting station and the receiving station, respectively. After the reference function match, PM is partially compressed: ar1DT(ftT^ref)f0 is the residual slow-time compression factor caused by the transmitting station, and ar1 represents the projection factor from the transmitting station onto the rR direction. For PM, the difference between the shortest slant ranges to the transmitting station is ar1Δr, and DR(ftR^ref)f0 is the residual slow-time compression factor caused by the receiving station; gr1 ftT^ref is the position difference between P0 and PM along the flight direction of the transmitting station. After the fast-time frequency transformation, P0 and PM fall into the same slow-time unit. The second term represents the stretching transformation of the fast-time frequency axis. This scaling transformation leads to a linear scaling of the fast-time axis, and the bistatic distance sum rbi becomes the shortest receiving distance rR, where rbi = rT + rR. The third and fourth terms are also linear terms in the fast-time frequency, but the difference is that these scale factors vary with the slow-time frequency and represent the residual time-migration factors. Among them, the third term represents the residual migration factor caused by the change of the distance coordinate, while the fourth term represents the residual migration factor caused by the change of the Doppler parameters. The fifth term represents the residual quadratic fast-time compression factor. From the preceding analysis, it can be seen that the fast-time frequency transformation can remove the space-variant characteristics along rR, including the residual time migration, the residual slow-time compression factor, and the secondary fast-time compression factor. 2. Slow-time frequency transform If FT^ref and FR^ref, pr1(ft, fτ), qr1(ft, fτ), and fτ are subjected to a second-order Taylor expansion at zero frequency and substituted into Eq. (3.27), we can obtain:
ft′ ≈ [ay1 DT(ftT^ref)f0 vR/c + py1(0, ft)rT0 vR/c + qy1(0, ft)rR0 vR/c + ςTy1(ft)t0T^ref vR − ςRy1(ft)yR + gy1 ftT^ref vR + ftR^ref]
 + (vR/c)[ay1/DT(ftcT^ref) + ρTy1(ftcT^ref)rT0 + ρRy1(ftcR^ref)rR0]fτ
 + (vR/c)[ay1/DT(ftT^ref) − ay1/DT(ftcT^ref)]fτ
 + (vR/c){[ρTy1(ft) − ρTy1(ftcT^ref)]rT0 + [ρRy1(ft) − ρRy1(ftcR^ref)]rR0}fτ
 + (vR/c)[ρTy2(ft)rT0 + ρRy2(ft)rR0 − ay1 c²(ftT^ref)²/(2DT³(ftT^ref)vT²f0³)]fτ²   (3.33)

In this formula, the first term is independent of the fast-time frequency and represents the slow-time migration and the residual slow-time compression factor. After multiplying by the reference function, the receiving station part of the slow-time compression factor for PN has been fully compensated, because PN and P0 have the same rR. The role of ay1DT(ftT^ref)f0vR/c in this term is to compensate the remaining slow-time compression phase caused by the transmitting station, where ay1 represents the mapping relationship between y and rT. The second term is a linear term with respect to the slow-time frequency, which represents the position offset correction along Δr caused by the different velocity vectors of the transmitting and receiving stations. Due to the space-variant characteristics along the track direction of the receiving station, PN and P0 fall into different fast-time units after multiplication by the reference function SRFM(ft, fτ; rR0), and the difference between the bistatic distance sums corresponding to the fast-time units where PN and P0 are located is:

Δrbi(rR0, y; y0) = rT2/DT[ftcT(rR0, y)] − rT0/DT(ftcT^ref) + rR0/DR[ftcR(rR0, y)] − rR0/DR(ftcR^ref)
 ≈ [ay1/DT(ftcT^ref) + rT0 κTy(ftcT^ref)/DT²(ftcT^ref) + rR0 κRy(ftcR^ref)/DR²(ftcR^ref)]Δy
 = [ay1/DT(ftcT^ref) + rT0 ρTy1(ftcT^ref) + rR0 ρRy1(ftcR^ref)]Δy   (3.34)
where rT2 is the shortest slant range from the transmitting station to PN(rR0, y). It can be seen that the second term of Eq. (3.33) corrects the difference between the bistatic distance sums of PN and P0. After the slow-time frequency transformation, PN and P0 fall into the same fast-time unit. In fact, the echo of the translational variant squint-looking SAR is obtained in the (t, rbi) domain. The fast-time Stolt transform converts the echo from the (t, rbi) domain to the (t, rR) domain, while the slow-time frequency transform converts the echo from the (t, rR) domain to the (rR, y) domain. The third and fourth terms represent the remaining time-delay migration correction factors. The fifth term represents the remaining quadratic fast-time compression factor. From the preceding analysis, it can be seen that the slow-time frequency transformation can remove the space-variant characteristics along the track direction of the receiving station. Combined with the fast-time frequency transform, the two-dimensional Stolt frequency transform in this section can eliminate the two-dimensional space variation of the translational variant mode.
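The net effect described in this subsection is that, after the two-dimensional Stolt transform, each scatterer is left with a phase that is linear in both frequency variables, and the Fourier shift property then places it at its own offset after the inverse transform. The following self-contained sketch illustrates that property on a synthetic spectrum; the grid size and encoded offsets are arbitrary and unrelated to any particular SAR geometry.

```python
import numpy as np

# A pure linear phase ramp in the 2-D frequency domain corresponds, after the
# inverse FFT, to a shifted impulse. This is why the residual phase of
# Eq. (3.28), linear in (ft', ftau'), focuses each scatterer at its offset
# (Delta r, Delta y) from the reference point.
N = 256
shift_rows, shift_cols = 17, -9
kr = np.fft.fftfreq(N)[:, None]   # one frequency axis (cycles per sample)
ka = np.fft.fftfreq(N)[None, :]   # the other frequency axis
spectrum = np.exp(-1j * 2 * np.pi * (kr * shift_rows + ka * shift_cols))
image = np.fft.ifft2(spectrum)
peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print(peak)  # -> (17, 247): the energy lands at the encoded offset (mod N)
```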
3.3.3 Algorithm flow and performance analysis In Sections 3.3.1 and 3.3.2, the main mathematical operations involved in the ω k imaging algorithm based on the two-dimensional spectrum generalized Loffeld model and two-dimensional Stolt transform are analyzed. Based on this, the processing flow of the imaging algorithm can be given, its computational efficiency analyzed, and its imaging performance verified by numerical simulation. 1. Algorithm flow Fig. 3.14 shows the flowchart of the frequency-domain imaging algorithm described in this section, in which the inside of the dotted box is the core process, while the outside of the dotted box is used to provide the signal spectrum parameters needed for the processing, where the expression of the reference function SRFM() is given by Eq. (3.25). The operation of multiplying the reference function corresponds to the two-dimensional focusing of the reference point in the image domain. The expression of the 2D Stolt transform is given by Eq. (3.27), which aims to eliminate the image depth of field atrophy caused by the 2D space variation. The function of high-order error compensation is to eliminate the influence of the residual high-order
Fig. 3.14 Flowchart of the ω–k imaging algorithm for translational variant bistatic squint-looking SAR. (The echo data of the bistatic SAR undergo a two-dimensional FFT, multiplication by the reference function SRFM(·), the two-dimensional Stolt frequency transform, and a final two-dimensional inverse FFT to give the bistatic SAR image; attitude measurement data and echo parameter estimation provide the reference function and the Stolt mapping relationship.)
term when the spectrum is linearized in the spatial domain, further improving the imaging accuracy. 2. Computational complexity As shown in Fig. 3.14, the ω–k imaging algorithm of translational variant bistatic squint-looking SAR requires two FFT/IFFT operations along the slow-time direction, two FFT/IFFT operations along the fast-time direction, one fast-time interpolation, one slow-time interpolation, and one reference function multiplication. Assuming Nr sampling points in the fast-time direction and Na sampling points in the slow-time direction, the total number of real floating-point operations required by this algorithm is:

10NaNr log2(Na) + 10NaNr log2(Nr) + 2NaNr + 4(2M − 1)NaNr   (3.35)

where M is the number of neighboring points required for the interpolation operation. If Na = Nr = N, the complexity of the algorithm is O(N²log2N). Compared with the FFBP algorithm, the total number of floating-point operations of this method is slightly smaller, and the computational complexity is equivalent. However, this method is easier to combine with parameter
Table 3.2 Simulation parameters.

Common parameters: Carrier frequency 9.65 GHz; Bandwidth 120 MHz; PRF 1000 Hz; Synthetic aperture time 3 s.

Mode 1: vT ≠ vR, α ≠ 0
Parameter | The transmitting station | The receiving station
Position (x, y, z) | (10, 3, 10) km | (5, 2, 7) km
x speed | 100 m/s | 0 m/s
y speed | 200 m/s | 140 m/s
Squint angle | 30 degrees | 13 degrees
Bistatic angle | 9.3 degrees
Velocity angle | 27 degrees

Mode 2: vT ≠ vR, α = 0
Parameter | The transmitting station | The receiving station
Position (x, y, z) | (20, 20, 10) km | (10, 5, 5) km
x speed | 0 m/s | 0 m/s
y speed | 300 m/s | 200 m/s
Squint angle | 42 degrees | 24 degrees
Bistatic angle | 66 degrees
estimation and motion compensation, and the method has higher application value in practical processing. Compared with the ω–k algorithm of monostatic SAR, the ω–k algorithm of bistatic SAR in the translational invariant mode has a similar amount of computation, while in the translational variant mode an interpolation operation for the azimuth Stolt transform is added. 3. Algorithm performance In order to illustrate the effectiveness of the algorithm described in this section, a simulation example of airborne bistatic SAR imaging processing is given here. Mode 1 is the oblique-flight translational variant squint-looking mode, and mode 2 is the parallel-flight translational variant squint-looking mode. The simulation parameters are shown in Table 3.2. The imaging scene consists of 15 equally spaced scattering points in 3 rows and 5 columns, with a spacing of 500 m along the x axis and 100 m along the y axis. The central scattering point is O, with coordinates (0, 0); the two scattering points at the vertices are denoted P1 and P2, with coordinates (1000, 100) m and (1000, 100) m, respectively. (1) Mode one In this mode, the flight directions of the transmitter and the receiver are different, and the imaging scene is located in the squint region of the transmitting
station and the receiving station. Therefore this mode belongs to the oblique-flight translational variant squint-looking mode. The following describes the processing steps and imaging performance of the algorithm in the simulation process. A. Processing steps The two-dimensional frequency Stolt transform is the main step of the algorithm. Fig. 3.15A shows the normal value of the two-dimensional spectrum of the echo before the 2D frequency Stolt transformation after multiplying by the reference function SRFM(ft, fτ; rR0), showing a parallelogram distribution. The Doppler bandwidth at the reference point is 455 Hz, but due to the space-variant characteristics of the Doppler centroid, the Doppler bandwidth for the entire scene is approximately 480 Hz. The spectrum distribution is skewed because the Doppler centroid is a function of the fast-time frequency. In order to demonstrate the effects of slow-time frequency
Fig. 3.15 Two-dimensional spectrum distribution and image domain focus during imaging: (a) two-dimensional spectrum before Stolt transformation; (b) focus before Stolt transform; (c) two-dimensional spectrum after slow time-frequency Stolt transformation; (d) focusing after slow time-frequency Stolt transformation; (e) two-dimensional spectrum after fast time-frequency Stolt transformation; (f) focusing after fast time-frequency Stolt transformation; (g) two-dimensional spectrum after two-dimensional Stolt transformation; (h) focusing after two-dimensional Stolt transformation.
transformation and fast-time frequency transformation, respectively, Fig. 3.15C and E gives the normal value of the two-dimensional spectrum only after the slow-time frequency Stolt transformation or fast-time frequency Stolt transformation. The slow-time frequency transformation slightly reduces the skew degree of the two-dimensional spectrum distribution, while the fast-time frequency transformation changes the shape of the two-dimensional spectrum from a parallelogram to skew bending. Fig. 3.15G shows the two-dimensional spectrum after the Stolt transformation of fast-time and slow-time frequency, whose shape is similar to Fig. 3.15E. Compared with Fig. 3.15A, Fig. 3.15G reflects the obvious
change of the shape of the spectrum distribution. The real essence of the shape transformation is the change of the spectrum phase distribution and isophase line shape, which eliminates the nonlinear phase term of each scattering point and the influence of space-variant characteristics. Fig. 3.15B, D, F and H shows the focus of the image domain corresponding to Fig. 3.15A, C, E and G. After multiplying the reference function SRFM(ft, fτ; rR0) in Eq. (3.25), the shape of the imaging area changes from a rectangle to a parallelogram. Also, the target position is severely shifted along the horizontal axis of Fig. 3.15B, and there is a certain tilt along the vertical axis of Fig. 3.15B. The slow-time frequency Stolt transformation mainly completes position correction along the horizontal axis of Fig. 3.15D. Fig. 3.16 shows the focus of the image domain response of the three scattering points in the middle column of Fig. 3.15B and D. In this figure, the target points with the same horizontal axis value as O are extracted, and it can be found that the slow-time frequency transformation can correct the positions in the vertical axis of Fig. 3.15D of the target, so that they fall into the same fast-time unit, while the fast-time frequency transformation can achieve the remaining slow-time compression and position correction in the vertical axis direction. Because the algorithm uses a second-order approximation, after the twodimensional frequency Stolt transformation, there is a second-order position error in the image domain, as shown in Fig. 3.15H. After the phase errors are corrected, the corresponding focus image is shown in Fig. 3.17. For the focus image of the P1 point, the position error along the longitudinal axis and the horizontal axis becomes 1.5 m and 0.75 m; the position error
Fig. 3.16 Effect of slow-time frequency Stolt transformation on spatial phase focus: (a) focusing before slow-time Stolt transformation; (b) focusing after slow-time Stolt transformation.
Fig. 3.17 Imaging results after error compensation.
B. Focusing performance
Fig. 3.18 shows the contour maps of the focused images of P1, O, and P2. To show the details, ninefold interpolation is carried out. It can be found that the three scattering point echoes can be focused at the same time. This is because the algorithm in this section considers the space-variant characteristics and the high-order coupling along the y direction (i.e., the along-track direction of the receiving aircraft). When deriving the Stolt transform, only the linearization in the spatial domain is performed, and the higher-order terms of the frequency are retained. Therefore the algorithm can simultaneously eliminate the effect of the two-dimensional spatial variance of the Doppler parameters. According to the gradient theory, the theoretical time-delay ground range resolution corresponding to the algorithm in this section is 1.09 m, and the theoretical Doppler ground range resolution is 0.31 m. Table 3.3 gives the quantitative index analysis of the algorithm in this section. It can be seen that, with this method, the time-delay ground range resolution is consistent with the theoretical value, the Doppler ground range resolution has a maximum broadening of 2%, and the PSLR and ISLR are consistent with the theoretical values.
Fig. 3.18 Point target imaging results (mode one): (a) target P1; (b) target O; (c) target P2.
Table 3.3 Imaging performance parameters (mode one).

             Time-delay direction                Doppler direction
             IRW (m)   PSLR (dB)   ISLR (dB)     IRW (m)   PSLR (dB)   ISLR (dB)
Target P1    1.11      13.21       10.23         0.32      13.24       10.21
Target O     1.10      13.29       10.27         0.31      13.25       10.24
Target P2    1.12      13.25       10.24         0.33      13.23       10.17
(2) Mode two
In this mode, the transmitting station and the receiving station have the same flight direction but different speeds, so this mode belongs to the flat-flight translational variant squint mode. Fig. 3.19 shows the focused images and contour maps of P1, O, and P2, and Table 3.4 shows the performance indexes. It can be found that the algorithm in this section can still achieve good focusing of each scattering point echo in the image domain. According to the gradient theory, the theoretical Doppler ground range resolutions of the three points are 0.39 m, 0.42 m, and 0.45 m, and the results are in good agreement with the theoretical values.
Fig. 3.19 Point target imaging results (mode two): (a) target P1; (b) target O; (c) target P2.
Table 3.4 Imaging quality parameters (mode two).

             Time-delay direction                Doppler direction
             IRW (m)   PSLR (dB)   ISLR (dB)     IRW (m)   PSLR (dB)   ISLR (dB)
Target P1    1.38      13.26       10.23         0.47      13.25       10.26
Target O     1.39      13.31       10.27         0.49      13.27       10.24
Target P2    1.39      13.29       10.16         0.49      13.23       10.23
The preceding analysis and simulation show that this algorithm not only greatly improves the computational efficiency but also, by using the efficient two-dimensional spectrum model, spatial-domain linearization, the two-dimensional frequency Stolt transform, and other processing, effectively eliminates the spatial variance and the range-azimuth coupling. It can achieve consistent focusing at the center and at the edge of the scene, even in the translational variant bistatic mode with a large squint angle.
CHAPTER 4
Bistatic SAR parameter estimation

Migration correction and slow-time compression in SAR imaging require accurate knowledge of the target echo migration trajectory and of the Doppler signal parameters of each order. In motion error compensation, the higher-order parameters introduced by motion error must also be accurately known. These parameters are determined by the actual motion trajectory and attitude of the SAR platform and by its geometric relationship with the imaging area. Therefore, the actual motion of the platform needs to be measured, recorded, and processed by the motion measurement device to obtain the parameters of each order required for migration correction and slow-time compression, as well as the higher-order parameters required for motion error compensation. When the accuracy of the motion measurement device is insufficient, echo data must be used to estimate more accurate parameters of each order.
In the actual imaging process, the SAR platform undergoes complex three-dimensional motion of its center of mass together with yaw, pitch, roll, and other attitude changes, so the motion has many degrees of freedom and is difficult to estimate. In bistatic SAR, whose transmitter and receiver are mounted on different platforms, the two platforms move independently, which adds degrees of freedom and error sources, and it is more difficult to track the position and attitude changes of the two platforms in the same coordinate system at the same time. Therefore, the direct estimation of parameters from echo data becomes even more important.
Mathematically, owing to the derivative relationship between the target echo range history and the Doppler phase history, the SAR parameter estimation problem can be reduced to a Doppler parameter estimation problem. However, for different bistatic SAR configurations, the order of the Doppler parameters to be estimated differs. For example, in the forward-looking mode of bistatic SAR with parallel flight paths, the Doppler signal can be modeled as a second-order polynomial phase signal, so for migration correction and slow-time compression, only the second-order
and below parameters need to be estimated; for motion error compensation, the parameters of third order and above must also be estimated. This chapter focuses on the measurement, calculation, and estimation methods of bistatic SAR Doppler parameters and analyzes the corresponding accuracy requirements. Starting from the two aspects of echo law and iterative autofocus, it describes typical methods for estimating the Doppler centroid, the Doppler frequency rate, and the derivative of the Doppler frequency rate of bistatic SAR, and gives examples.
4.1 Motion measurement, parameter calculation, and accuracy requirements
The law and parameters of Doppler signals are derived from the motion geometric relationship. Therefore it is a natural and reasonable technical idea to measure and record the movement history of the antenna phase center by using the motion-sensing device on the platform, and then obtain the Doppler signal parameters through calculation.
4.1.1 Motion measurement
Currently, many motion-sensing devices are used on SAR platforms, including GPS, INS, IMU, and their combinations, which can measure the position and attitude information of the platform in real time and can be used for the calculation of Doppler parameters.
The global positioning system (GPS) provides real-time position information for the platform, such as longitude, latitude, altitude, and speed, by receiving pseudoranges from multiple navigation satellites and performing calculations. Its measuring accuracy is independent of time and free from the influence of gravity, and it provides high precision over the long term, but it has a low data update rate. It is a nonautonomous system and is prone to cycle slips and loss of lock, resulting in discontinuities in the navigation information.
The inertial navigation system (INS) is a navigation parameter calculation system with gyroscopes and accelerometers as its sensitive devices. The gyros are used to form a navigation coordinate system and give direction and attitude angle information. The accelerometers measure the acceleration of the platform, from which speed and range are obtained after one and two integrations. The inertial navigation system is an autonomous system with continuous navigation information, low noise, a high update rate, and good short-term accuracy and stability.
The inertial measurement unit (IMU) is mainly composed of three accelerometers and three gyroscopes, which are directly fixed on the antenna. The accelerometers detect the three linear accelerations of the antenna and the gyroscopes detect the three angular rates; these six quantities are processed by the strapdown computing unit to realize the mathematical transformation between the platform coordinate system, the navigation coordinate system, and the inertial coordinate system, and to provide acceleration, speed, yaw, roll, pitch angle, and other information. The IMU can be considered to provide a short-term, relatively accurate measurement of the high-frequency motion of the antenna phase center.
Systematic errors exist in the inertial measurement system and increase with time, causing poor long-term accuracy. Therefore, external information such as GPS is usually used to assist it in integrated navigation, so as to ensure both the long-term and short-term accuracy of the motion measurement. For example, GPS/INS, GPS/IMU, and POS measurement systems are widely used at present.
4.1.2 Parameter calculation
There is a clear quantitative relationship between the Doppler parameters and the motion geometric relationship of the imaging process. This quantitative relationship is the basis for calculating Doppler parameters from motion measurement data. In this section, taking bistatic forward-looking SAR as an example, a method is given to calculate, from the motion geometry, the Doppler centroid, the Doppler frequency rate, and the derivative of the Doppler frequency rate of the target point P on the beam centerline.
To obtain the Doppler parameters, the instantaneous Doppler frequency must be found first, which can be obtained from the phase history or range history of the target. According to the geometric relationship shown in Fig. 4.1, the range history of the point target within the synthetic aperture time is

r(t) = r_T(t) + r_R(t)    (4.1)

where r_T(t) and r_R(t) are the range histories of the transmitting station and the receiving station, respectively. According to the law of cosines,

r_T(t) = \sqrt{ r_{T0}^2 + (vt)^2 - 2 r_{T0} v t \sin\phi_T }    (4.2)
Fig. 4.1 Geometric model of translational invariant bistatic forward-looking SAR.
r_R(t) = \sqrt{ r_{R0}^2 + (vt)^2 - 2 r_{R0} v t \cos\phi_R }    (4.3)

where v_T and v_R are the speeds of the transmitting and receiving platforms (both written v in the translational invariant case); r_{T0} and r_{R0} are the slant ranges when the centers of the transmitting and receiving beams pass through the target; \phi_T is the squint angle of the transmitting station; and \phi_R is the downward angle of the receiving station. The Doppler phase caused by the motion of the two platforms is \varphi(t) = -2\pi r(t)/\lambda, and the Doppler frequency is f_d(t) = \varphi'(t)/2\pi = -r'(t)/\lambda. Substituting the expression for r(t), carrying out a Taylor expansion, and retaining terms up to second order, the Doppler frequency is obtained as

f_d(t) \approx \left[ \frac{v_T \sin\phi_T}{\lambda} - \frac{v_T^2 \cos^2\phi_T}{\lambda r_{T0}} t - \frac{3 v_T^3 \cos^2\phi_T \sin\phi_T}{2\lambda r_{T0}^2} t^2 \right] + \left[ \frac{v_R \cos\phi_R}{\lambda} - \frac{v_R^2 \sin^2\phi_R}{\lambda r_{R0}} t - \frac{3 v_R^3 \sin^2\phi_R \cos\phi_R}{2\lambda r_{R0}^2} t^2 \right]    (4.4)
Then the mathematical expressions of the Doppler centroid f_dc, the Doppler frequency rate f_dr, and the derivative of the Doppler frequency rate f_dt can be obtained:

f_{dc} = \frac{v_T \sin\phi_T}{\lambda} + \frac{v_R \cos\phi_R}{\lambda}    (4.5)

f_{dr} = -\frac{v_T^2 \cos^2\phi_T}{\lambda r_{T0}} - \frac{v_R^2 \sin^2\phi_R}{\lambda r_{R0}}    (4.6)

f_{dt} = -\frac{3 v_T^3 \cos^2\phi_T \sin\phi_T}{\lambda r_{T0}^2} - \frac{3 v_R^3 \sin^2\phi_R \cos\phi_R}{\lambda r_{R0}^2}    (4.7)
It can be seen that the Doppler parameters of bistatic forward-looking SAR are related to the speeds, viewing angles, and slant ranges of the transmitting and receiving platforms and to the transmitted signal wavelength. Corresponding formulas can be obtained by similar derivations for other bistatic SAR configurations, and their Doppler parameters are also related to the preceding factors. For bistatic SAR with one station fixed, the Doppler contribution is provided only by the moving platform; for bistatic side-looking SAR, since \phi_R is \pi/2 and \phi_T is zero, the Doppler centroid is zero; for a satellite platform, the calculation of the Doppler parameters is slightly more complicated owing to the influence of orbital curvature and earth rotation [1].
In addition, similar to monostatic SAR, since the sampling rate (i.e., the pulse repetition frequency F_r) limits the maximum observable Doppler frequency, only frequencies within [-F_r/2, F_r/2) can be observed unambiguously. In this frequency range, the energy center of the Doppler signal spectrum is generally referred to as the baseband Doppler centroid. For bistatic squint-looking and forward-looking SAR and other geometric configurations, the Doppler centroid is usually large. When it exceeds this unambiguous range, the Doppler centroid is ambiguous. In this case, the Doppler centroid f_dc is determined by the baseband Doppler centroid f_{dc0} and the Doppler ambiguity number M_amb:

f_{dc} = f_{dc0} + M_{amb} F_r    (4.8)
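As a quick numerical illustration of Eqs. (4.5)–(4.8), the following Python sketch computes the three Doppler parameters from the platform geometry and splits the full centroid into its baseband value and ambiguity number. The platform speeds, angles, slant ranges, wavelength, and PRF used in the example are illustrative placeholders, not values taken from the text.

import numpy as np

def bistatic_doppler_params(v_T, v_R, phi_T, phi_R, r_T0, r_R0, wavelength):
    """Doppler centroid, rate, and rate derivative from Eqs. (4.5)-(4.7)."""
    f_dc = (v_T * np.sin(phi_T) + v_R * np.cos(phi_R)) / wavelength
    f_dr = -(v_T**2 * np.cos(phi_T)**2 / r_T0
             + v_R**2 * np.sin(phi_R)**2 / r_R0) / wavelength
    f_dt = -3.0 * (v_T**3 * np.cos(phi_T)**2 * np.sin(phi_T) / r_T0**2
                   + v_R**3 * np.sin(phi_R)**2 * np.cos(phi_R) / r_R0**2) / wavelength
    return f_dc, f_dr, f_dt

def resolve_centroid(f_dc, prf):
    """Split the full centroid into baseband centroid and ambiguity number, Eq. (4.8)."""
    m_amb = int(np.round(f_dc / prf))
    f_dc0 = f_dc - m_amb * prf
    return f_dc0, m_amb

# Illustrative (not from the text): X-band, 100 m/s platforms, 10 km slant ranges.
f_dc, f_dr, f_dt = bistatic_doppler_params(
    v_T=100.0, v_R=100.0, phi_T=np.deg2rad(30.0), phi_R=np.deg2rad(60.0),
    r_T0=10e3, r_R0=10e3, wavelength=0.03)
print(f_dc, f_dr, f_dt)
print(resolve_centroid(f_dc, prf=1000.0))
# The same helper reproduces the spaceborne example given later in Section 4.3.2:
print(resolve_centroid(7020.15, prf=1256.98))   # baseband centroid, ambiguity number 6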
4.1.3 Accuracy requirements
The calculation and estimation accuracy of the Doppler parameters is the basis for selecting the motion measurement device and for evaluating the parameter estimation method. Doppler parameters of different orders have different effects on imaging quality, so it is natural to derive the corresponding requirements on the estimation accuracy of the Doppler parameters from the required imaging performance.
1. Doppler centroid
The Doppler centroid is the center frequency of the Doppler spectrum and the constant term of the Doppler frequency polynomial, and it is an important parameter for SAR image processing. Doppler centroid estimation, also known as clutter locking, is derived from traditional airborne pulse-Doppler radar.
When the Doppler centroid estimate has errors, the center frequency of the constructed azimuth matched filter deviates from the energy center of the signal spectrum, resulting in mismatch, which reduces the energy of the target response, increases the energy of the ambiguity region, and worsens the signal-to-ambiguity ratio. At the same time, the noise spectrum is flat and its energy does not change, so the SNR is also reduced. In addition, the linear phase error introduced by a Doppler centroid error causes image dislocation and positioning deviation. Therefore, the estimation accuracy of the Doppler centroid should be within the range acceptable for target positioning, signal-to-ambiguity ratio, and SNR.
Fig. 4.2 illustrates, for a sinc-squared antenna beam pattern and an oversampling rate of 1.1, the decrease of the signal-to-first-ambiguity ratio and of the signal-to-noise ratio caused by the Doppler centroid error. The signal-to-first-ambiguity ratio decreases linearly with the centroid error, and the loss of signal-to-noise ratio follows a quadratic trend; they can be fitted as

SFAR = 22 - 44 \frac{\Delta f_{dc}}{F_r}, \quad 0 \le \Delta f_{dc} \le F_r/2    (4.9)

SNR_d = -20 \left( \frac{\Delta f_{dc}}{F_r} \right)^2, \quad 0 \le \Delta f_{dc} \le F_r/2    (4.10)

where SFAR is the signal-to-first-ambiguity ratio, F_r is the pulse repetition frequency, and SNR_d is the signal-to-noise ratio degradation. For example, the estimation error must be lower than 4.6% of the pulse repetition frequency when the SFAR loss is required not to exceed 2 dB, and it must be less than 32% of the pulse repetition frequency when the SNR deterioration is required to be no more than 2 dB. The accuracy of Doppler centroid estimation can, of course, also be specified from the perspective of image registration or positioning accuracy.
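Assuming the fitted curves of Eqs. (4.9) and (4.10) hold, the allowed centroid error for a given loss budget follows by simple inversion; the short sketch below reproduces the 4.6% and 32% figures quoted above.

# Invert the fitted curves of Eqs. (4.9)-(4.10) to get the allowed centroid
# error as a fraction of the PRF for a given loss budget (values in dB).
def max_centroid_error_for_sfar_loss(loss_db):
    # SFAR drops by 44*(dfdc/Fr) dB from its error-free value of 22 dB.
    return loss_db / 44.0           # e.g. 2 dB -> about 4.6% of the PRF

def max_centroid_error_for_snr_loss(loss_db):
    # SNR degradation is 20*(dfdc/Fr)^2 dB.
    return (loss_db / 20.0) ** 0.5  # e.g. 2 dB -> about 32% of the PRF

print(max_centroid_error_for_sfar_loss(2.0), max_centroid_error_for_snr_loss(2.0))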
Fig. 4.2 Effect of Doppler centroid error: (a) deterioration of signal-to-first-ambiguity ratio with Doppler centroid error; (b) deterioration of signal-to-noise ratio with Doppler centroid error.
2. Doppler frequency rate
The Doppler frequency rate is the coefficient of the first-order term of the Doppler frequency polynomial. It is the core parameter of SAR image processing and the most important factor affecting the focus quality of SAR; for this reason, Doppler frequency rate estimation is also called autofocus in some cases. A Doppler frequency rate error leads to a phase error in the slow-time matched filter reference function or compensation factor, resulting in defocusing of the target pulse response, main lobe broadening, peak decline, sidelobe elevation, and other problems, which seriously affect the imaging quality. Fig. 4.3 shows the slow-time focusing for 0% and 5% Doppler frequency rate errors and the variation of image contrast with the Doppler frequency rate error. Since the quadratic phase error is largest at the edge of the aperture, it can be expressed as

\Delta\varphi_{quad} = \pi \Delta f_{dr} \left( \frac{T_a}{2} \right)^2    (4.11)
Fig. 4.3 The effect of the Doppler frequency rate error on the contrast of the image: (a) 0% frequency rate error; (b) 5% frequency rate error; (c) change of contrast.
where T_a is the synthetic aperture time and \Delta f_{dr} is the Doppler frequency rate error; the quadratic phase error is thus directly tied to the Doppler frequency rate error. Under typical parameters, the required accuracy of the Doppler frequency rate estimate can be obtained from the allowed main lobe broadening. As shown in Fig. 4.4, for main lobe broadening of less than 2%, \Delta\varphi_{quad} should be kept within \pi/4. Based on Eq. (4.11), the following equations give the estimation accuracy requirements for the Doppler frequency rate f_dr:

|\Delta f_{dr}| \le \frac{1}{T_a^2}    (4.12)

\left| \frac{\Delta f_{dr}}{f_{dr}} \right| \le \frac{1}{T_a B_a}    (4.13)

where B_a is the Doppler bandwidth. It can be seen that the larger the time-bandwidth product, the more stringent the required relative accuracy of the Doppler frequency rate estimate.
Fig. 4.4 Effect of Doppler frequency error on main lobe widening (percentage of widening versus |QPE| in π radians).
For example, when the time-bandwidth product of the Doppler signal is 20, the relative accuracy of the f_dr estimate should be better than 5%. In addition, the estimation accuracy requirement for the Doppler frequency rate can also be derived from the peak sidelobe ratio (PSLR) and the integrated sidelobe ratio (ISLR).
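A similarly small sketch, assuming Eqs. (4.12) and (4.13), turns a given synthetic aperture time and Doppler bandwidth into the corresponding Doppler-rate accuracy requirement; the numbers used are illustrative.

# Required Doppler frequency rate accuracy from Eqs. (4.12)-(4.13),
# for a pi/4 quadratic phase error budget at the aperture edge.
def doppler_rate_accuracy(T_a, B_a):
    abs_limit = 1.0 / T_a**2          # Hz/s, Eq. (4.12)
    rel_limit = 1.0 / (T_a * B_a)     # relative to f_dr, Eq. (4.13)
    return abs_limit, rel_limit

# Illustrative: a Doppler signal with time-bandwidth product 20 (T_a = 1 s, B_a = 20 Hz)
print(doppler_rate_accuracy(T_a=1.0, B_a=20.0))   # relative accuracy requirement of 5%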
4.2 Parameter estimation based on echo law
After migration correction, the Doppler signal of different bistatic SAR geometries follows different laws. Some can be modeled as a linear frequency-modulated (FM) signal, and others must be modeled as a quadratic or cubic FM signal. The Doppler parameters can be estimated from the position of the peak value by using the energy accumulation characteristics of each order of the signal in the corresponding transform domain. Commonly used methods are the polynomial phase transform, transform-domain slope detection, and slow-time signal correlation. Some of the transform domains used by these parameter estimation methods are the same as those involved in the imaging algorithm. Therefore, in practical applications, Doppler parameter estimation can be embedded in the imaging processing to avoid redundant computation and improve efficiency.
4.2.1 Polynomial phase transform method
This method models the Doppler signal after migration correction as a polynomial phase signal. For example, for bistatic side-looking SAR or bistatic SAR with a small squint angle, it is modeled as a linear FM signal, while for bistatic SAR with a large squint angle, it is modeled as a quadratic FM signal. The polynomial phase transform can then be used to estimate the Doppler parameters; it requires little computation and is simple to implement.
1. Principle of the polynomial phase transform
Suppose z(n) is a complex polynomial phase signal of order M, with N sampling points, sampling interval \Delta, and coefficients a_m, m = 0, 1, ..., M:

z(n) = \exp\left\{ j \sum_{m=0}^{M} a_m (n\Delta)^m \right\}, \quad 0 \le n \le N-1    (4.14)
Define the following operators for z(n):

DP_1[z(n), t_d] = z(n)
DP_2[z(n), t_d] = z(n) z^*(n - t_d)
DP_M[z(n), t_d] = DP_2[ DP_{M-1}[z(n), t_d], t_d ]    (4.15)
where t_d is the time delay, which is an integer, and (·)^* denotes complex conjugation. It can be seen that applying the operator DP_M(·, t_d) to an M-order polynomial phase signal yields a single-frequency signal whose frequency is related to the coefficient a_M. As long as the corresponding frequency point is found, the coefficient a_M can be obtained. Therefore an M-order polynomial phase transform can be defined as the discrete Fourier transform of the operator DP_M(·, t_d) [2]:

DPT_M[z(n), \omega, t_d] = DFT\{ DP_M[z(n), t_d] \} = \sum_{n=(M-1)t_d}^{N-1} DP_M[z(n), t_d] \exp\{-j\omega n\Delta\}    (4.16)

where \omega is the frequency. In particular, DPT_1[z(n), \omega, t_d] = DFT\{z(n)\}, so the first-order polynomial phase transform is the discrete Fourier transform; and DPT_2[z(n), \omega, t_d] = DFT\{z(n) z^*(n - t_d)\}, so the second-order polynomial phase transform is a discrete ambiguity function.
2. Estimation process
Assume the Doppler signal phase of bistatic SAR can be approximated by a third-order polynomial, i.e.,
\varphi(t) \approx -\frac{2\pi}{\lambda}(r_{T0} + r_{R0}) + 2\pi f_{dc} t + \pi f_{dr} t^2 + \frac{\pi}{3} f_{dt} t^3    (4.17)

According to the preceding discussion, the Doppler parameters of bistatic SAR can be estimated by the third-order polynomial phase transform. The detailed process is as follows.
First, the signal in one range bin is extracted from the data after fast-time compression and migration correction, and the third-order polynomial phase transform is carried out; the resulting single-frequency signal produces a spectrum with a single peak.
Second, from the relationship between the frequency of the spectral peak and the third-order Doppler term f_dt, an estimate \hat{f}_{dt} of the derivative of the Doppler frequency rate is obtained.
Finally, the Doppler frequency rate and the Doppler centroid are estimated by successively reducing the order of the signal. Specifically, the extracted azimuth signal is multiplied by \exp(-j\pi \hat{f}_{dt} t^3/3), which reduces it to a quadratic polynomial phase signal; the second-order polynomial phase transform is then performed, and the Doppler frequency rate \hat{f}_{dr} is estimated. By analogy, the quadratic polynomial phase signal is multiplied by \exp(-j\pi \hat{f}_{dr} t^2) to reduce it to a single-frequency signal, and the Doppler centroid is then obtained by the Fourier transform.
It should be noted that, because of this estimation flow, the method has an error propagation effect, and the estimation accuracy of the low-order Doppler parameters is relatively low. Other types of polynomial phase signal parameter estimation methods can also be used, such as the product cubic phase function [3] and the integrated cubic phase function [4]. In addition, when there is space variance in the Doppler parameters of bistatic SAR, it is often necessary to eliminate the space variance first, for example by nonlinear chirp scaling (NLCS) equalization or by partitioning in the slow-time direction, or else to study parameter estimation methods for multicomponent polynomial phase signals.
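The following Python sketch illustrates the estimation flow described above on a synthetic cubic-phase Doppler signal: the third-order transform gives f_dt, the signal is de-ramped and the second-order transform gives f_dr, and a final FFT gives f_dc. The signal parameters, the delay t_d, and the zero-padded peak search are illustrative choices, not values or implementation details taken from the text; the error propagation mentioned above appears here as small biases in the lower-order estimates.

import numpy as np

def peak_angular_freq(sig, dt, pad=16):
    """Angular frequency (rad/s) of the strongest tone in sig, via a zero-padded FFT."""
    nfft = pad * len(sig)
    spec = np.fft.fftshift(np.fft.fft(sig * np.hanning(len(sig)), n=nfft))
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=dt))
    return 2.0 * np.pi * freqs[np.argmax(np.abs(spec))]

def dpt_estimate_cubic(z, dt, td):
    """Estimate the phase coefficients a1, a2, a3 of
    z(n) = exp{j(a0 + a1 t + a2 t^2 + a3 t^3)}, t = n*dt,
    by successive polynomial phase transforms and order reduction."""
    t = np.arange(len(z)) * dt
    tau = td * dt

    # Third-order transform: DP3 is a tone at angular frequency 6*a3*tau^2.
    dp2 = z[td:] * np.conj(z[:-td])
    dp3 = dp2[td:] * np.conj(dp2[:-td])
    a3 = peak_angular_freq(dp3, dt) / (6.0 * tau**2)

    # Remove the cubic term; DP2 of the residual is a tone at 2*a2*tau.
    z2 = z * np.exp(-1j * a3 * t**3)
    dp2 = z2[td:] * np.conj(z2[:-td])
    a2 = peak_angular_freq(dp2, dt) / (2.0 * tau)

    # Remove the quadratic term; what is left is a tone at a1.
    z1 = z2 * np.exp(-1j * a2 * t**2)
    a1 = peak_angular_freq(z1, dt)
    return a1, a2, a3

# Synthetic Doppler signal (illustrative values, not from the text):
# phi(t) = 2*pi*f_dc*t + pi*f_dr*t^2 + (pi/3)*f_dt*t^3, as in Eq. (4.17).
prf, T_a = 1000.0, 1.0
f_dc, f_dr, f_dt = 120.0, -80.0, 30.0
t = np.arange(int(prf * T_a)) / prf
z = np.exp(1j * (2*np.pi*f_dc*t + np.pi*f_dr*t**2 + (np.pi/3)*f_dt*t**3))

a1, a2, a3 = dpt_estimate_cubic(z, dt=1.0/prf, td=len(z)//3)
print("f_dc ~", a1/(2*np.pi), " f_dr ~", a2/np.pi, " f_dt ~", 3*a3/np.pi)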
4.2.2 Transform domain slope detection method
In the slope detection method, the energy of the Doppler signal aggregates along a straight line in a suitable transform domain; a line detection method such as the Radon transform then maps the data into an intercept-slope domain, and the Doppler signal energy is
further gathered into a single peak. The Doppler parameters are then estimated from the peak position. In essence, this is an estimation method based on the echo law, and it is applicable to the estimation of the Doppler centroid, the Doppler frequency rate, and the derivative of the Doppler frequency rate.
1. Doppler centroid
The key to the Doppler centroid estimation method based on slope detection is to find the analytical relationship between the slope in the transform domain and the Doppler centroid (or ambiguity number). Based on this idea, this section discusses the mapping between the echo geometric features in the fast-time compressed, slow-time time domain and the Doppler centroid (or ambiguity number), and then gives the mechanism, process, and simulation of the estimation method.
(1) Geometric features of echo data in the fast-time compressed, slow-time time domain
In squint-looking or forward-looking bistatic SAR configurations, the range walk component of the migration is much greater than the range curvature and higher-order migration, so the migration trajectory is mainly characterized by a large walk. Therefore, in the fast-time compressed, slow-time time domain, the echo trajectory of a point target appears as a markedly inclined straight line, as shown in Fig. 4.5.
(2) Methodology
According to the calculation method of the Doppler centroid,

f_{dc} = -\frac{1}{\lambda} \left. \frac{dr(t)}{dt} \right|_{t=t_0} = -\frac{K}{\lambda}    (4.18)

where \lambda is the signal wavelength, r(t) is the migration trajectory, t_0 is the moment when the center of the beam crosses the target, and K is the slope of the walk. It can be seen that the Doppler centroid of an echo with a large walk can be estimated by detecting the slope of its geometric signature in the fast-time compressed, slow-time time domain. How to obtain a measured or estimated value of this slope therefore becomes the key to Doppler centroid estimation. Substituting Eq. (4.5), the theoretical Doppler centroid of bistatic forward-looking SAR, into Eq. (4.18) gives

K = -v_T \sin\phi_T - v_R \cos\phi_R    (4.19)
Fig. 4.5 Echo characteristics of point targets with different azimuth and range in the range-compressed time domain: (a) point targets with different azimuth; (b) point targets with different range.
This means that the walk slope is related to the speeds and angles of the transmitting and receiving platforms, so in principle the measured values from the platform motion measurement equipment could be used to calculate the walk slope from the preceding formula. However, the slope obtained in this way is usually of low accuracy and cannot reach the accuracy required for imaging. To obtain a more accurate slope estimate, it should be estimated from the echo data. Because the object of the estimation process is a two-dimensional array, the dimensionless slope \tilde{K} is obtained first; its relationship with the slope K is

K = \frac{N_r \Delta r}{N_a \Delta t_a} = \frac{N_r (c \Delta t_r)}{N_a \Delta t_a} = \tilde{K} \frac{c \Delta t_r}{\Delta t_a} = \tilde{K} F = \tan\eta \cdot F    (4.20)
where N_r and N_a are the numbers of fast-time and slow-time sampling points, respectively; \tilde{K} is the dimensionless walk slope; \Delta r is the range sampling interval; \Delta t_r is the fast-time sampling interval, which equals 1/f_s; c is the speed of light; \Delta t_a is the slow-time sampling interval; F is the unit conversion factor; and \eta is the dip angle corresponding to the dimensionless walk slope.
Based on the preceding analysis, Doppler centroid estimation can be realized by detecting the straight line in the two-dimensional plane. There are many methods for line detection, such as the Radon and Hough transforms. In practice, the object of the operation is grayscale data, and both methods involve a two-dimensional search, resulting in a heavy computational burden. Therefore a coarse search interval is usually determined from the prior information provided by the inertial navigation system, and a fine search interval is then set around the coarse estimate. In addition, when the echo signal and noise are relatively strong, binarization (sparsification) of the grayscale data can be used to save storage space and computation time.
The search step size can be determined from the required estimation accuracy. From Eq. (4.18), we get

\Delta f_{dc} = \frac{1}{\lambda}\Delta K = \frac{1}{\lambda}\Delta\tilde{K} F = \frac{1}{\lambda}\Delta\eta \cot\eta \cdot F    (4.21)

Assuming that the required estimation accuracy is \Delta f_{dc}, the fine step size \Delta\alpha_2 must satisfy at least

\Delta\alpha_2 = \Delta f_{dc} \Big/ \left( \frac{1}{\lambda}\cot\eta \cdot F \right)    (4.22)
In general, the number of coarse search steps is set equal to the number of fine search steps; that is, the coarse step size is

\Delta\alpha_1 = \sqrt{2\beta\Delta\alpha_2}    (4.23)

where \beta is the maximum variation of the coarse search angle of the Radon transform, which can be determined from the prior information of the INS.
(3) Simulation and experimental results
To verify the effectiveness of the method, a simulation of bistatic forward-looking SAR is carried out first. Then, measured data of a spaceborne squint-looking SAR with obvious walk characteristics are evaluated. The results are shown in Figs. 4.6–4.8, where (A) is the fast-time compressed data, (B) is the binarization result, and (C) is the coarse Radon transform result. It can be seen from the simulation and the experimental data that the echo signal energy shows an obvious aggregation in the Radon transform result. From the position of the focusing peak in Figs. 4.6C–4.8C, the corresponding angle can be determined so as to obtain the estimate of \eta, and the unambiguous Doppler centroid f_dc can then be calculated by Eqs. (4.20) and (4.18) [5].
In addition, for the point target echoes of forward-looking and squint forward-looking bistatic SAR in the slow-time compressed time domain with the fast-time frequency domain, and in the slow-time compressed time domain with the fast-time time domain, the envelope energy presents a linear distribution, which can be confirmed by theoretical analysis and simulation, as shown in Fig. 4.9. Therefore the relationship between the slope of the envelope energy distribution and the Doppler centroid or the Doppler ambiguity number can also be derived to resolve the Doppler centroid ambiguity. However, this approach needs to estimate the Doppler frequency rate and the geometric slope at the same time, so the effect of error propagation and accumulation is unavoidable.
2. Doppler frequency rate estimation
For bistatic side-looking SAR or bistatic SAR with a small squint angle, the Doppler signal extracted from the echo data after migration correction can usually be modeled as a linear frequency-modulated signal. We can take advantage of the fact that the energy of a linear FM signal in the time-frequency domain gathers along an inclined straight line, and use slope
Fig. 4.6 Multipoint target simulation.
Fig. 4.7 Experimental results (1).
20
40
60
80 100 Angle(degree)
120
140
160
180
800
850
900
0
500
500 Azimuth samples
Azimuth samples
0
1000
1500
1000
1500 950
1000
1050 1100 1150 Range samples
1200
1250
20
40
1300
950
1000
1050 1100 1150 Range samples
140
160
300
400
Distance(samples)
500
600
700
800
900
1000
Fig. 4.8 Experimental results (2).
Fig. 4.9 Echo characteristics of multipoint targets in different transform domains: (a) echo in azimuth-compressed time domain and range frequency domain; (b) echo in azimuth-compressed time domain and range time domain.
detection in the time-frequency plane to realize the Doppler frequency rate estimation.
There are many time-frequency analysis methods for linear FM signals, such as the short-time Fourier transform (STFT) and the Wigner-Ville distribution (WVD). The WVD has higher time-frequency resolution and more practical applications. However, because the WVD is essentially a bilinear transform, "cross-term interference" appears when it is applied to multicomponent signals. Therefore, to apply this method to bistatic SAR configurations with space variance in the slow-time direction, cross-term suppression methods should be studied. For a bistatic SAR configuration without space variance in the slow-time direction, there is no difference in the Doppler parameters, so the influence of cross terms need not be considered.
In the discrete time-frequency plane used in the digital calculation, the Doppler frequency rate f_dr is related to the straight-line slope K_2 by

f_{dr} = K_2 \frac{\Delta f}{\Delta t_a} = K_2 \frac{F_r/2N_a}{T_a/N_a} = K_2 \frac{F_r}{2T_a}    (4.24)
where \Delta f and \Delta t_a are the frequency and time intervals in the WVD time-frequency plane, F_r is the pulse repetition frequency, T_a is the synthetic aperture time, and N_a is the number of slow-time sampling points.
Fig. 4.10 shows the time-frequency diagram of a linear FM signal obtained with the WVD and the result of its Radon transform. Figs. 4.11 and 4.12 show the processing results for SAR experimental data, whose azimuth signal can be modeled as a linear FM signal. In the WVD time-frequency plane, the energy of the point target echo gathers along an inclined straight line, while in the Radon transform domain of the WVD it gathers into a single peak, whose angle coordinate can be found. The estimate of the Doppler frequency rate can therefore be calculated by Eq. (4.24). The estimation accuracy achievable with this method is related to the signal-to-noise ratio of the data and the search step size of the Radon transform [6].
Fig. 4.10 Wigner-Ville distribution and slope detection results of an LFM signal.
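As a simplified stand-in for the WVD-plus-Radon processing described above, the sketch below estimates the Doppler frequency rate of a synthetic linear FM signal from the slope of its ridge in a spectrogram (an STFT magnitude map), using a least-squares line fit instead of a Radon peak search; all signal parameters are illustrative.

import numpy as np
from scipy.signal import spectrogram

def estimate_doppler_rate_tf(sig, prf):
    """Estimate the Doppler (chirp) rate of a linear FM signal from the slope of its
    ridge in a time-frequency map; a spectrogram and a line fit replace WVD + Radon."""
    f, t, s = spectrogram(sig, fs=prf, nperseg=128, noverlap=120,
                          return_onesided=False, mode='magnitude')
    f = np.fft.fftshift(f)
    s = np.fft.fftshift(s, axes=0)
    ridge = f[np.argmax(s, axis=0)]       # peak frequency in each time slice
    slope = np.polyfit(t, ridge, 1)[0]    # Hz per second = Doppler rate
    return slope

# Illustrative LFM Doppler signal: f_dc = 50 Hz, f_dr = -150 Hz/s, 1 s aperture.
prf = 1000.0
t = np.arange(int(prf)) / prf
sig = np.exp(1j * (2*np.pi*50.0*t + np.pi*(-150.0)*t**2))
print(estimate_doppler_rate_tf(sig, prf))    # close to -150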
4.2.3 Slow-time signal correlation method
In the slow-time signal correlation method, the Doppler parameters are estimated by computing a correlation function and exploiting the internal
relationship between the correlation function and the Doppler parameters. In essence, it uses the law of the echo signal to realize parameter estimation. Methods of this kind, such as the time-domain correlation Doppler centroid estimator [7] and the subaperture correlation Doppler frequency rate estimator [8], are covered in standard references on synthetic aperture radar and are not repeated here.
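For completeness, a minimal sketch of the correlation idea behind the time-domain Doppler centroid estimator of Ref. [7] is given below: the baseband centroid follows from the phase of the average cross-correlation between adjacent slow-time samples. The synthetic test data are purely illustrative, and the ambiguity number still has to be resolved separately, for example by the geometry-based methods of this section.

import numpy as np

def accc_doppler_centroid(data, prf):
    """Baseband Doppler centroid from the average cross-correlation coefficient
    between adjacent slow-time samples; data is a 2-D array of range-compressed
    echoes with slow time along axis 0. Only the baseband value is obtained."""
    accc = np.sum(data[1:, :] * np.conj(data[:-1, :]))
    return prf * np.angle(accc) / (2.0 * np.pi)

# Toy check: a slow-time phase ramp at 123 Hz on correlated clutter (PRF 1000 Hz).
prf = 1000.0
n = np.arange(2048)[:, None]
echo = np.exp(2j * np.pi * 123.0 * n / prf) * (1.0 + 0.3 * (np.random.randn(2048, 64)
                                                            + 1j * np.random.randn(2048, 64)))
print(accc_doppler_centroid(echo, prf))   # approximately 123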
Fig. 4.11 Experimental data processing results (1).
40
60
80 100 Angle(degree)
120
140
160
180
1200
1400
200
200
400 600 Distance(samples)
Frequency samples
400 600 800 1000
800 1000 1200 1400 1600
1200 1800
1400
2000
200
400
600 800 1000 Time samples
1200
20
1400
40
60
80 100 Angle(degree)
200 400
Distance(samples)
600 800 1000 1200 1400 1600 1800 2000 20
Fig. 4.12 Experimental data processing results (2).
4.3 Parameter estimation based on iterative autofocus
In practical applications, to obtain accurate estimation results, it is necessary to iterate, forming a closed-loop process that gradually reduces the estimation error according to preset thresholds and criteria. A representative approach is the iterative autofocus parameter estimation method based on image quality evaluation.
4.3.1 Image quality evaluation criteria
Generally, there are two ways to evaluate SAR imaging quality: subjective visual evaluation and objective parameter evaluation. Since subjective visual judgment cannot accurately represent SAR image quality, and considering the requirements of the imaging stage, an appropriate objective criterion must be selected to evaluate the focusing quality of the image.
According to the measurement method, image quality evaluation can be divided into methods based on point targets and on distributed targets, using indicators such as spatial resolution, integrated sidelobe ratio, and peak sidelobe ratio, as well as radiometric resolution and signal-to-ambiguity ratio. These indicators are mostly used for evaluation after imaging is completed. In the imaging processing stage, Shannon entropy, contrast, and similar measures are widely used. The image Shannon entropy is defined as

H = -\sum_{m,n} p_{mn} \ln p_{mn}, \quad p_{mn} = \frac{|x_{mn}|^2}{\sum_{m,n} |x_{mn}|^2}    (4.25)

where x_{mn} is the value of the (m, n) sampling unit of the image x, and m and n denote the slow-time and fast-time sampling indexes, respectively. It can generally be considered that the smaller the image entropy, the better the focus quality. In addition, when x is a one-dimensional sequence, H is also called the waveform entropy.
There are many ways to quantify contrast. For example, the maximum gray-level difference of the image can be defined as the contrast, and the ratio of the standard deviation to the mean of the gray values can also be used. An image with ideal focus has a clear distinction between black and white, light and dark, and is rich in layers. The
target is prominent above the background, and the image amplitude fluctuates dramatically compared with the surrounding scene. For the defocused image, the main lobe of the target is broadened; the difference between the amplitude of the target and the surrounding background is smaller, so it is difficult to distinguish the clear target contour from the background. Therefore, generally, the higher the image contrast, the better the focusing quality.
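A direct transcription of Eq. (4.25) and of the standard-deviation-over-mean contrast measure mentioned above might look as follows; both functions accept a complex or real image array.

import numpy as np

def image_entropy(img):
    """Shannon entropy of a SAR image, Eq. (4.25)."""
    p = np.abs(img)**2
    p = p / np.sum(p)
    p = p[p > 0]                     # avoid log(0)
    return -np.sum(p * np.log(p))

def image_contrast(img):
    """Contrast as the ratio of the standard deviation to the mean of the
    image intensity (one of the definitions mentioned in the text)."""
    inten = np.abs(img)**2
    return np.std(inten) / np.mean(inten)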
4.3.2 Method based on image quality assessment
The method based on image quality assessment continuously updates the estimates of the parameters required for imaging according to the result of an image quality assessment, until the image quality is satisfactory and accurate parameter estimates are obtained. The mechanisms and processes of Doppler centroid and Doppler frequency rate estimation based on quality assessment are described in the following paragraphs.
1. Doppler centroid estimation
For bistatic SAR configurations with a large range walk, the migration trajectory in the two-dimensional time domain appears as a cluster of clearly inclined straight lines. It has been shown that the slope of the walk is directly proportional to the Doppler centroid. Therefore, by predicting the slope of the line, iterating the range walk correction, and taking the best walk-correction quality as the iteration termination criterion, estimates of the line slope and of the unambiguous Doppler centroid can be obtained. This method is mathematically identical to the line detection method applied to the echo in the two-dimensional time domain, but the two-dimensional search is replaced by a one-dimensional search.
(1) Algorithm principle
Taking the geometric configuration of bistatic forward-looking SAR with a fixed transmitting station as an example, because of the downward angle of the receiving station, the range curvature and higher-order migration are far smaller than the walk and can usually be considered to remain within one range cell. The primary task of migration correction is therefore walk correction. According to the mapping relationship between the time domain and the frequency domain, a time shift of a signal corresponds to multiplying its spectrum by a linear phase factor, so walk correction can be realized by multiplying the echo data in the fast-time frequency domain by a walk correction function.
According to the relationship between the walk slope and the walk term, the walk correction function H_0(t) can be expressed as

H_0(t) = \exp\left( j2\pi f_r \frac{K t}{c} \right)    (4.26)

where f_r is the range frequency, t is the slow time, c is the speed of light, and K is the walk slope, which satisfies

K = -v\cos\phi_R    (4.27)
where \phi_R is the downward angle of the receiving platform and v is the platform speed. Because the platform velocity v and the downward angle are difficult to measure accurately with the platform motion measurement device, the accurate walk slope must be found from the echo data. According to the analysis of the migration characteristics, the migration trajectory is corrected exactly when the walk slope is accurate, so the problem becomes one of judging whether the walk trajectory has been corrected.
First, the walk slope calculated from the information of the platform motion measurement device is used for a preliminary walk correction. Then the data after this initial correction are integrated along the slow-time direction to obtain a waveform sequence, and the walk correction quality is measured by the entropy of this waveform. Theoretically, when the waveform entropy is smallest, the correction quality is best and the walk slope is most accurate. Otherwise, the estimate of the walk slope is updated and the waveform entropy is recalculated and compared. The update step size can be determined according to the required estimation accuracy, or a coarse-then-fine strategy can be adopted.
Because the slope estimation is a one-dimensional search, the amount of calculation can be reduced in the following two ways:
(1) According to the information provided by the platform motion measurement device, the initial walk slope is obtained, and the maximum variation range of the walk slope is determined from prior information on the maximum error.
(2) First, the updating direction of the walk slope is determined; then, during the update of the walk slope, a binary search method is used, with the strategy that
follows. Let the search interval at the mth iteration be [a_m, b_m].
① Calculate the midpoint (a_m + b_m)/2 of the search interval;
② If H(a_m) ≥ H(b_m), let a_{m+1} = (a_m + b_m)/2 and b_{m+1} = b_m;
③ If H(a_m) < H(b_m), let b_{m+1} = (a_m + b_m)/2 and a_{m+1} = a_m;
and so on, until the length of the search interval is less than the required estimation accuracy and the search stops (H(·) denotes the waveform entropy).
(2) Algorithm simulation
To illustrate the effectiveness of the method, a simulation of bistatic forward-looking SAR with a fixed transmitting station is given here, and measured data of a squint SAR with large walk characteristics are used for verification.
▪ Simulation data verification
In the ideal case, the simulation results for single-point and multipoint targets are shown in Fig. 4.13A and B, where A is the fast-time compressed data, B is the walk-corrected data corresponding to the minimum entropy, C is the energy accumulation along the slow-time direction of A, and D is the energy accumulation along the slow-time direction of B. It can be seen that when the migration is dominated by range walk, this method achieves migration correction well. The Doppler centroid estimate is obtained from the walk slope correction corresponding to the minimum waveform entropy and the wavelength of the transmitted signal. The effect of noise is not considered here, so the estimation error is determined by the initial slope and the search step size.
▪ Experimental data validation
Since this method is applicable to any SAR configuration with a large walk feature, spaceborne squint-looking SAR data conforming to this feature are used to verify its effectiveness. The processing results of the two sets of data are shown in Fig. 4.14, where A is the fast-time compressed echo data, B is the echo data after the correction corresponding to the minimum entropy, C is the energy accumulation along the slow-time direction of A, and D is the energy accumulation along the slow-time direction of B. The range walk slope corresponding to the minimum waveform entropy is 397.13 m/s; combined with the transmitted signal wavelength, the Doppler centroid is 7020.15 Hz. Considering that the pulse repetition frequency is 1256.98 Hz, the resolved Doppler ambiguity number is 6, which is consistent with the results given in Ref. [9]. For detailed analysis, refer to Ref. [10].
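The following sketch outlines the minimum-waveform-entropy search for the walk slope. It assumes range-compressed data with slow time along the first axis, implements the walk correction of Eq. (4.26) as a linear phase in the fast-time frequency domain (with the slope defined as the rate of change of the bistatic range), and replaces the bisection strategy of the text with a golden-section search over a slope bracket supplied by the motion measurements; the function and variable names are illustrative.

import numpy as np

def walk_correct(data, slope, prf, fs, c=3e8):
    """Apply a range-walk correction in the fast-time frequency domain, as in
    Eq. (4.26); data has slow time along axis 0 and fast time along axis 1."""
    n_az, n_rg = data.shape
    t_slow = (np.arange(n_az) - n_az // 2)[:, None] / prf
    f_rg = np.fft.fftfreq(n_rg, d=1.0 / fs)[None, :]
    spec = np.fft.fft(data, axis=1)
    spec = spec * np.exp(2j * np.pi * f_rg * slope * t_slow / c)
    return np.fft.ifft(spec, axis=1)

def waveform_entropy(w):
    p = np.abs(w)**2
    p = p / np.sum(p)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_of_slope(data, slope, prf, fs):
    corrected = walk_correct(data, slope, prf, fs)
    profile = np.sum(np.abs(corrected), axis=0)   # accumulate along slow time
    return waveform_entropy(profile)

def search_walk_slope(data, k_lo, k_hi, prf, fs, tol=0.5):
    """One-dimensional golden-section search (a stand-in for the bisection of the
    text) for the slope that minimizes the waveform entropy; [k_lo, k_hi] comes
    from the motion measurement device and its error bound, tol is in m/s."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = k_lo, k_hi
    while b - a > tol:
        c1, c2 = b - g * (b - a), a + g * (b - a)
        if entropy_of_slope(data, c1, prf, fs) < entropy_of_slope(data, c2, prf, fs):
            b = c2
        else:
            a = c1
    return 0.5 * (a + b)

The returned slope, together with the wavelength, gives the unambiguous Doppler centroid in the same way as described in the text.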
Fig. 4.13 Single-point and multipoint target simulation results.
Fig. 4.14 Experimental results of spaceborne SAR.
2. Doppler frequency rate
As in monostatic SAR, the Doppler frequency rate is the most important parameter affecting the imaging quality of bistatic SAR. It is therefore common to evaluate the focusing quality of the image and estimate the Doppler frequency rate through iteration. If a point target is well focused, most of the signal energy is gathered together; otherwise, the signal energy is scattered. In other words, good focus gives high contrast, and as the focus deteriorates, the contrast decreases. For a bistatic SAR configuration in which only the Doppler frequency rate needs to be estimated, the contrast can be expressed as a function of the Doppler frequency rate:

C(\hat{f}_{dr}) = \frac{ \sqrt{ E\left\{ \left[ I^2(n, \hat{f}_{dr}) - E\left( I^2(n, \hat{f}_{dr}) \right) \right]^2 \right\} } }{ E\left\{ I^2(n, \hat{f}_{dr}) \right\} }, \quad 1 \le n \le N    (4.28)

where E(·) denotes the expectation, I(n, \hat{f}_{dr}) is the one-dimensional image amplitude when the Doppler frequency rate estimate is \hat{f}_{dr}, and N is the number of slow-time sampling points. The contrast-based Doppler frequency rate estimation method adjusts \hat{f}_{dr} until the contrast reaches its maximum, at which point \hat{f}_{dr} is taken as the correct estimate of the Doppler frequency rate. The detailed process is as follows.
(1) With the information measured by the platform motion measurement device, the initial Doppler frequency rate is calculated, the reference function is constructed, the echo data after fast-time compression are compressed in slow time, an appropriate scene area is selected, and its contrast is calculated. The initial iteration step size \Delta of the Doppler frequency rate and the preset termination threshold \varepsilon are set;
(2) The reference function is updated with \hat{f}_{dr}^{(i+1)} = \Delta^{(i)} + \hat{f}_{dr}^{(i)}, the coarse image of the selected scene is reconstructed, and its contrast is calculated;
(3) The contrasts of the two images before and after the update are compared, and the iteration step size is modified according to the following three cases:
I. If C(\hat{f}_{dr})_{after} > C(\hat{f}_{dr})_{before}, keep the step: \Delta^{(i+1)} = \Delta^{(i)};
II. If C(\hat{f}_{dr})_{after} < C(\hat{f}_{dr})_{before}, reverse and halve the step: \Delta^{(i+1)} = -\Delta^{(i)}/2;
III. If C(\hat{f}_{dr})_{after} = C(\hat{f}_{dr})_{before}, halve the step: \Delta^{(i+1)} = \Delta^{(i)}/2.
(4) Steps (2) and (3) are repeated until the iteration step is smaller than the preset threshold \varepsilon.
This kind of algorithm is not restricted to strong points in the scene, but the contrast of different regions should be evaluated comprehensively. The method can also be used to estimate higher-order Doppler parameters. There are many other estimation methods based on iterative autofocus, such as the common phase gradient autofocus algorithm and its improved variants [11]. It should be pointed out that the convergence and speed of the closed-loop iterative process directly affect the imaging processing efficiency and real-time performance. Generally, convergence is closely related to the selection of the scene, and the speed of convergence is affected by many factors, such as the choice of the initial estimate, the iteration step, and the error threshold.
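A compact version of the contrast-maximization iteration of steps (1)–(4) might look as follows. The azimuth compression is reduced to a dechirp-and-FFT for clarity, and the step-size update is a simplified reverse-and-halve rule rather than the exact three-case logic above; the initial value, step, and threshold are assumed to come from the motion measurement device and its error bound.

import numpy as np

def contrast(img):
    """Contrast of Eq. (4.28): standard deviation of intensity over its mean."""
    inten = np.abs(img)**2
    return np.sqrt(np.mean((inten - np.mean(inten))**2)) / np.mean(inten)

def azimuth_compress(rc_data, f_dr, prf):
    """Slow-time compression with a quadratic reference built from the current
    Doppler frequency rate estimate; rc_data is range-compressed and
    migration-corrected data with slow time along axis 0."""
    n_az = rc_data.shape[0]
    t = (np.arange(n_az) - n_az // 2)[:, None] / prf
    ref = np.exp(-1j * np.pi * f_dr * t**2)
    return np.fft.fftshift(np.fft.fft(rc_data * ref, axis=0), axes=0)

def estimate_fdr_by_contrast(rc_data, f_dr0, prf, step0, eps=1e-2):
    """Iterative contrast-maximization search for the Doppler frequency rate."""
    f_dr, step, direction = f_dr0, step0, 1.0
    c_best = contrast(azimuth_compress(rc_data, f_dr, prf))
    while abs(step) > eps:
        c_new = contrast(azimuth_compress(rc_data, f_dr + direction * step, prf))
        if c_new > c_best:                 # contrast improved: accept and keep going
            f_dr, c_best = f_dr + direction * step, c_new
        else:                              # worse: reverse direction and halve the step
            direction, step = -direction, step / 2.0
    return f_dr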
References
[1] X. Yuan, Introduction to the Spaceborne Synthetic Aperture Radar, National Defence Industry Press, Beijing, 2002.
[2] S. Peleg, B. Friedlander, The discrete polynomial-phase transform, IEEE Trans. Signal Process. 43 (8) (1995) 1901–1914.
[3] P. Wang, J. Yang, Multicomponent chirp signals analysis using product cubic phase function, Digital Signal Process. 16 (6) (2006) 654–669.
[4] P. Wang, H. Li, D. Igor, Integrated cubic phase function for linear FM signal analysis, IEEE Trans. Aerosp. Electron. Syst. 46 (3) (2010) 963–975.
[5] W. Li, Y. Huang, J. Yang, J. Wu, L. Kong, An improved Radon-transform-based scheme of Doppler centroid estimation for bistatic forward-looking SAR, IEEE Geosci. Remote Sens. Lett. 8 (2) (2011) 379–383.
[6] W. Li, J. Yang, Y. Huang, Improved Doppler parameter estimation of squint SAR based on slope detection, Int. J. Remote Sens. 35 (4) (2014) 1417–1431.
[7] S.N. Madsen, Estimating the Doppler centroid of SAR data, IEEE Trans. Aerosp. Electron. Syst. 25 (2) (1989) 134–140.
[8] Z. Bao, M. Xing, T. Wang, Radar Imaging Technique, Electronic Industry Press, Beijing, 2005.
[9] I.G. Cumming, F.H. Wong, Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation, Artech House, Norwood, 2005.
[10] W. Li, J. Yang, Y. Huang, J. Wu, A geometry-based Doppler centroid estimator for bistatic forward-looking SAR, IEEE Geosci. Remote Sens. Lett. 9 (3) (2012) 388–392.
[11] S. Zhou, M. Xing, X.G. Xia, et al., An azimuth-dependent phase gradient autofocus (APGA) algorithm for airborne/stationary BiSAR imagery, IEEE Geosci. Remote Sens. Lett. 10 (6) (2013) 1290–1294.
CHAPTER 5
Bistatic SAR motion compensation

In the preceding chapters, when constructing the echo model and imaging algorithms, the platform motion was assumed to be a regular motion state to simplify analysis and processing. In the actual flight process, however, owing to airflow disturbances and flight control, the instantaneous position and attitude of the SAR platform exhibit motion errors that deviate from the regular motion state, resulting in echo delay, amplitude, and phase errors relative to the regular motion. If these cannot be compensated, the image quality will be seriously degraded. The purpose of motion compensation is to reduce the echo error caused by irregular motion through control or correction measures, so that the compensated echo is restored as closely as possible to the state of regular motion. High-quality imaging can then be realized with the imaging processing methods and flow corresponding to the regular motion state.
SAR motion compensation can be divided into two parts: compensation based on motion-sensing information and compensation based on echo data. Using the motion and attitude parameters provided by the motion-sensing equipment, the uniformity of the spatial sampling can be controlled in real time and the beam direction stabilized, so that the echo error is reduced at the source, and the residual echo error can also be corrected preliminarily. Using the motion error information estimated from the echo data, the residual echo delay and phase errors can be corrected finely. In practice, because of the limited precision of the motion-sensing equipment, the two compensation methods are often combined, following the idea of coarse first and then fine, or hardware first and then software.
Compared with monostatic SAR, bistatic SAR has separate transmitter and receiver, independent motion of the two platforms, multiple geometric configurations, more error sources, and new characteristics of the error laws, so its analysis and compensation are more complicated and difficult. In essence, however, the core task of bistatic SAR motion compensation is still the correction of the echo migration trajectory error and the repair of the Doppler signal phase error.
Based on an analysis of the sources and effects of motion errors, this chapter discusses the motion error tolerance and focuses on the motion
217
218
Bistatic synthetic aperture radar
compensation method. Examples of simulation and experimental verification are given.
5.1 Source and influence of motion error
Bistatic SAR motion errors have many sources, which are coupled with each other. To simplify the analysis and reach clear conclusions with a clear physical meaning, the motion errors from different sources need to be analyzed separately and their effects discussed. This lays the foundation for analyzing motion error tolerance and for achieving motion error control and echo error compensation.
5.1.1 Source of motion error
The motion of the platform can be described by six degrees of freedom: three describe the motion of the platform's center of mass, which constitutes the track, and the other three describe the rotation of the platform around its center of mass, which constitutes the attitude. During actual flight, random disturbances such as airflow and crosswind inevitably introduce motion errors, which fall into two categories: track error and attitude error. The track error is the three-dimensional deviation of the real track from the ideal track of the platform, while the attitude error is the three-dimensional deviation of the actual attitude from the predetermined attitude. Bistatic SAR involves two independent platforms and therefore requires 12 degrees of freedom to characterize its motion error, but these can still be grouped into the same two categories: track error and attitude error. Taking bistatic side-looking SAR as an example, track error and attitude error are illustrated in Fig. 5.1, where the solid line is the ideal track corresponding to regular motion and the dotted line is the real track corresponding to the actual flight. Compared with the ideal track, the real track deviates in all three directions.
Fig. 5.1 Schematic diagram of bistatic side-looking SAR with motion error: (a) track error; (b) attitude error.
Motion errors can also be classified according to their characteristics. For example, they can be divided into deterministic errors and random errors, or into low-frequency, high-frequency, and wideband errors according to how the error varies over time. A low-frequency error is one whose period is longer than the synthetic aperture time; it includes linear, quadratic, and higher-order errors and is mostly caused by platform speed errors, such as linear and nonlinear speed errors. A high-frequency error is one whose period is shorter than the synthetic aperture time; it is mostly caused by regular vibration or jitter of the platform, such as a sinusoidal error. In addition, depending on whether the motion error is related to the spatial position of the scattering points in the scene, motion errors can be divided into spatially variant and spatially invariant errors [1]. It should be pointed out that these classifications overlap: for example, wideband errors are random errors, while low- and high-frequency errors are deterministic errors.
5.1.2 Motion error effect Ground scattering point echoes are mainly related to the spatial position of the scattering point relative to the platform and beam modulation effects, in addition to the transmitted signal waveform. However, the platform track error and attitude error will cause trajectory error of the antenna phase center, which will cause the delay and initial phase error of each pulse echo. Attitude error will also affect antenna beam pointing and generate additional beam modulation, resulting in echo amplitude error. All these echo errors have adverse effects on imaging. 1. Track error The platform track error can be decomposed into three components along heading, altitude, and horizontal, which are mainly derived from platform speed errors in different directions. As shown in Fig. 5.2, because the pulse repetition frequency is usually not synchronized with the platform speed, the random error of the speed along the heading will cause a
nonuniformity of spatial sampling. The echo data, which are stored at uniform intervals in the slow-time direction, then appear nonlinearly compressed or stretched, resulting in an echo phase error. Random errors of the horizontal or vertical velocity further cause errors in the slant range to the scattering points, which produce delay and initial phase errors in the echo data.
Fig. 5.2 Schematic diagram of track error impact.
The simplest case, bistatic side-looking SAR with parallel flight paths, is taken as an example to illustrate the influence of the track errors of the two platforms. According to the geometric relationship shown in Fig. 5.1, the distances from the phase centers of the receiving and transmitting antennas to the point target P can be expressed as:
r_R(t) = \sqrt{[x_R(t) - x_n]^2 + [y_R(t) - y_n]^2 + [z_R(t) - 0]^2}    (5.1)
r_T(t) = \sqrt{[x_T(t) - x_n]^2 + [y_T(t) - y_n]^2 + [z_T(t) - 0]^2}    (5.2)
where x_n and y_n are the horizontal and vertical coordinates of point P; x_R(t), y_R(t), and z_R(t) represent the receiving platform position at time t; and x_T(t), y_T(t), and z_T(t) represent the transmitting platform position at time t. Suppose that Δx_T(t), Δy_T(t), Δz_T(t) and Δx_R(t), Δy_R(t), Δz_R(t) are the deviations of the transmitting and receiving platforms from their regular tracks in the x, y, and z directions, respectively. By Taylor expansion and approximation of Eqs. (5.1) and (5.2), the slant-range history error introduced by the errors of the two platforms can be expressed as
\Delta r(t) \approx \frac{(v_R t - y_n)\,\Delta y_R(t)}{r_R} + \frac{(v_T t - y_n)\,\Delta y_T(t)}{r_T} + \Delta r_{LOS\_R}(t) + \Delta r_{LOS\_T}(t)    (5.3)
where v_T and v_R are the ideal motion speeds of the transmitting and receiving platforms, respectively; r_T and r_R are the perpendicular (closest-approach) distances from point P to the ideal tracks of the transmitting and receiving platforms; the first and second terms in Eq. (5.3) are the contributions of the heading-direction errors of the receiving and transmitting stations; and the third and fourth terms are the corresponding line-of-sight errors, which satisfy:
\Delta r_{LOS\_R}(t) = \Delta x_R(t)\sin\varphi_R + \Delta z_R(t)\cos\varphi_R    (5.4)
\Delta r_{LOS\_T}(t) = \Delta x_T(t)\sin\varphi_T + \Delta z_T(t)\cos\varphi_T    (5.5)
where φ_T and φ_R are the incidence angles of the transmitting and receiving platforms. Fig. 5.3 illustrates Eq. (5.4), showing the projection onto the line of sight of the x- and z-direction deviations of the receiving station.
According to the preceding analysis, the track errors of the transmitting and receiving platforms cause deviations of the slant-range history, nonuniform spatial sampling of the echo, and echo phase errors. A linear phase error causes a spectrum shift and a target position error in the image, while a quadratic phase error causes spectrum broadening and defocusing of the target image.
Fig. 5.3 Projection onto the line of sight of the z-direction and x-direction deviations of the receiving station.
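To make Eqs. (5.3)–(5.5) concrete, the following minimal Python sketch maps assumed track deviations of the two platforms into the line-of-sight slant-range error and the corresponding echo phase error; all numerical values (wavelength, incidence angles, deviation histories) are illustrative assumptions rather than parameters from the text.

```python
import numpy as np

# Illustrative parameters (assumed, not from the text)
lam = 0.03                                       # wavelength (m), X-band
phi_T, phi_R = np.deg2rad(45), np.deg2rad(35)    # incidence angles
t = np.linspace(-1.0, 1.0, 501)                  # slow time (s)

# Assumed track deviations of the two platforms (m)
dx_T = 0.05 * np.sin(2 * np.pi * 0.4 * t)
dz_T = 0.02 * np.cos(2 * np.pi * 0.3 * t)
dx_R = 0.04 * np.sin(2 * np.pi * 0.5 * t + 0.7)
dz_R = 0.03 * np.cos(2 * np.pi * 0.2 * t)

# Line-of-sight slant-range errors, Eqs. (5.4)-(5.5)
dr_los_R = dx_R * np.sin(phi_R) + dz_R * np.cos(phi_R)
dr_los_T = dx_T * np.sin(phi_T) + dz_T * np.cos(phi_T)

# Line-of-sight part of the slant-range history error, Eq. (5.3)
dr = dr_los_R + dr_los_T

# Resulting echo phase error (transmit path plus receive path)
phase_err = 2 * np.pi * dr / lam
print("peak slant-range error: %.4f m" % np.abs(dr).max())
print("peak phase error: %.1f rad" % np.abs(phase_err).max())
```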
2. Attitude error
Instability of the platform attitude leads to antenna pointing errors and hence to changes of the antenna gain in the target direction, which impose adverse random modulation on the echo signal power. In bistatic SAR it also affects the space synchronization of the transmitting and receiving beams, significantly reducing the echo signal-to-noise ratio or even preventing echo reception. At the same time, it causes an error in the position of the antenna phase center, which leads to an error in the mean slant range and in the echo delay. Therefore platform attitude instability has a significant impact on imaging quality. In different bistatic SAR configurations, the influence of attitude error on the echo and on imaging has different characteristics. Here, bistatic side-looking SAR with parallel flight paths is again used as an example to discuss the impact of platform yaw, pitch, and roll on imaging.
(1) Yaw
As shown in Fig. 5.4, yaw changes the direction of the transmitting and receiving beams and shifts the beam footprints, which changes the echo amplitude and, in severe cases, prevents the echo from being received at all. It also changes the mean slant range between the target and the transmitting/receiving platforms, thus affecting the phase of the echo signal and its trajectory in the data plane. When there is no yaw, the velocity of the receiving platform is along the y direction; when yaw occurs, it is equivalent to introducing a squint angle. When the yaw angle is small, the three velocity components of the two platforms can be approximated as:
v_{Rx} \approx v_R\beta_R, \quad v_{Ry} \approx v_R, \quad v_{Rz} = 0    (5.6)
v_{Tx} \approx v_T\beta_T, \quad v_{Ty} \approx v_T, \quad v_{Tz} = 0    (5.7)
where v_{Tx}, v_{Ty}, v_{Tz} and v_{Rx}, v_{Ry}, v_{Rz} are the components of the transmitting and receiving platform velocities in the x, y, and z directions, and β_T and β_R are the yaw angles of the transmitting and receiving stations, respectively.
Fig. 5.4 Top view of parallel-flying bistatic side-looking SAR with and without yaw error: (a) no yaw error; (b) yaw error.
According to the relationship between the platform speed and the Doppler frequency, the Doppler frequency error f_dε and Doppler frequency rate error f_drε caused by the platform yaw can be obtained as
f_{d\varepsilon} = \frac{1}{\lambda}\left(v_T\beta_T\sin\varphi_T + v_R\beta_R\sin\varphi_R\right)    (5.8)
f_{dr\varepsilon} = \frac{\beta_T^2 v_T^2}{\lambda r_T} + \frac{\beta_R^2 v_R^2}{\lambda r_R}    (5.9)
It can be seen that platform yaw changes the velocity components along and across the heading, and the magnitude and direction of the yaw angle determine the errors of the Doppler frequency and the Doppler frequency rate.
(2) Pitch
As shown in Fig. 5.5, a pitch error changes the direction of the transmitting and receiving beams and the position of the beam footprints, resulting in an echo amplitude error. It also produces a height error, which leads to an error in the mean slant range from the target to the transmitting and receiving platforms and thus affects the delay and phase of the echo signal.
Fig. 5.5 Side view of bistatic side-looking SAR with pitch error at the receiving station.
The influence of platform pitch on bistatic SAR imaging is similar to that of yaw. The pitch of the transmitting and receiving platforms rotates the radar beam center around the ox axis, forming velocity errors in the y and z directions.
Thus the Doppler frequency and Doppler frequency rate errors caused by pitch are as follows:
f_{d\varepsilon} = \frac{1}{\lambda}\left(v_T\alpha_T\cos\varphi_T + v_R\alpha_R\cos\varphi_R\right)    (5.10)
f_{dr\varepsilon} = \frac{\alpha_T^2 v_T^2}{\lambda r_T} + \frac{\alpha_R^2 v_R^2}{\lambda r_R}    (5.11)
where α_T and α_R are the pitch angles of the transmitting and receiving stations, respectively. It can be seen that platform pitch also changes the velocity components, and the magnitude and direction of the pitch angle determine the errors of the Doppler frequency and the Doppler frequency rate.
(3) Roll
Fig. 5.6 compares the cases with and without roll. It can be seen that, if the lever-arm effect of the antenna phase center is not considered, a small-angle roll disturbance does not change the mean slant range from the target to the transmitting and receiving platforms, so no corresponding delay or phase error is introduced. However, roll moves the antenna beam footprint, which produces an error in the echo amplitude. At the same time, the overlap area of the transmitting and receiving beam footprints is reduced, thereby reducing the size of the imaging area; in severe cases, the echo cannot be received.
Fig. 5.6 Front view of the dual platforms with and without roll error: (a) without roll; (b) with roll.
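As a numerical illustration of Eqs. (5.8)–(5.11), the sketch below evaluates the Doppler frequency and Doppler frequency rate errors caused by assumed yaw and pitch angles; the platform speeds, slant ranges, and incidence angles are invented for the example.

```python
import numpy as np

# Illustrative platform and geometry parameters (assumed)
lam = 0.03                                   # wavelength (m)
vT, vR = 150.0, 130.0                        # platform speeds (m/s)
rT, rR = 20e3, 15e3                          # slant ranges (m)
phiT, phiR = np.deg2rad(45), np.deg2rad(35)  # incidence angles

def yaw_errors(betaT, betaR):
    """Doppler frequency / rate errors caused by yaw, Eqs. (5.8)-(5.9)."""
    fde = (vT * betaT * np.sin(phiT) + vR * betaR * np.sin(phiR)) / lam
    fdre = (betaT**2 * vT**2) / (lam * rT) + (betaR**2 * vR**2) / (lam * rR)
    return fde, fdre

def pitch_errors(alphaT, alphaR):
    """Doppler frequency / rate errors caused by pitch, Eqs. (5.10)-(5.11)."""
    fde = (vT * alphaT * np.cos(phiT) + vR * alphaR * np.cos(phiR)) / lam
    fdre = (alphaT**2 * vT**2) / (lam * rT) + (alphaR**2 * vR**2) / (lam * rR)
    return fde, fdre

# Example: 0.1 degree yaw and pitch errors on both platforms
beta = alpha = np.deg2rad(0.1)
print("yaw:   f_de = %.2f Hz, f_dre = %.4f Hz/s" % yaw_errors(beta, beta))
print("pitch: f_de = %.2f Hz, f_dre = %.4f Hz/s" % pitch_errors(alpha, alpha))
```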
5.2 Motion error tolerance
Track and attitude errors cause echo delay errors and amplitude and phase distortions, which degrade the bistatic SAR image quality and the various imaging quality indicators. There is therefore a quantitative correspondence between motion errors and imaging quality indicators, and the corresponding motion error tolerance can be derived from the requirements on those indicators. The tolerance can serve as a threshold for judging whether motion compensation is needed, or as an upper bound on the residual error allowed after motion compensation, and thus has important practical significance for the design of bistatic SAR systems and of their motion compensation capability.
5.2.1 Track error
According to the previous analysis, the track error mainly comes from the velocity errors in three directions. Mathematically, the flight path error can be represented in various forms, such as power series, Bessel functions, or Legendre functions, and the corresponding error tolerance can then be evaluated quantitatively according to image quality indicators such as main lobe broadening, integrated sidelobe ratio, and peak sidelobe ratio [2]. It should be pointed out that different bistatic SAR geometric configurations have different motion error sources and influence mechanisms, so the motion error tolerances also differ significantly and need to be analyzed separately.
Taking parallel-flying bistatic side-looking SAR as an example, assume the spatial geometric model shown in Fig. 5.7, in which the spatial coordinate system is defined with reference to the location of the receiving station and each variable is defined as in the figure. According to the geometric relationship in the figure, the distance vectors r_T and r_R from the transmitting station and the receiving station to the point P can be approximately expressed as:
\mathbf{r}_T = r_T\sin\varphi_T\,\mathbf{i} + 0.5\rho_a\,\mathbf{j} - h_T\,\mathbf{k}
\mathbf{r}_R = r_R\sin\varphi_R\,\mathbf{i} + 0.5\rho_a\,\mathbf{j} - h_R\,\mathbf{k}    (5.12)
where \mathbf{i}, \mathbf{j}, \mathbf{k} are the unit vectors along the x, y, and z axes, and ρ_a is the azimuth resolution.
Fig. 5.7 Geometric model of parallel-flying bistatic side-looking SAR.
Assuming that the platform velocity vector is v and the acceleration vector is a, they can be decomposed as:
\mathbf{v} = v_x\,\mathbf{i} + v_y\,\mathbf{j} + v_z\,\mathbf{k} \quad (a) \qquad \mathbf{a} = a_x\,\mathbf{i} + a_y\,\mathbf{j} + a_z\,\mathbf{k} \quad (b)    (5.13)
where v_x, v_y, v_z and a_x, a_y, a_z are the components of velocity and acceleration in the x, y, and z directions. Therefore the Doppler frequency f_d and the Doppler frequency rate f_dr related to point P(x_R, ρ_a/2, 0) are:
f_d = \frac{1}{\lambda}\frac{d}{dt}(r_T + r_R) = \frac{\mathbf{r}_T\cdot\mathbf{v}_T}{\lambda r_T} + \frac{\mathbf{r}_R\cdot\mathbf{v}_R}{\lambda r_R} \quad (a)
f_{dr} = \frac{1}{\lambda}\frac{d^2}{dt^2}(r_T + r_R) = \frac{\mathbf{r}_T\cdot\mathbf{a}_T + v_T^2}{\lambda r_T} + \frac{\mathbf{r}_R\cdot\mathbf{a}_R + v_R^2}{\lambda r_R} \quad (b)    (5.14)
where \mathbf{v}_T, \mathbf{v}_R and \mathbf{a}_T, \mathbf{a}_R are the velocity and acceleration vectors of the transmitting and receiving platforms, respectively.
(1) Velocity error constraint
Combining Eqs. (5.12), (5.13(a)), and (5.14(a)) gives
f_d = f_d^T + f_d^R = f_{d0} + f_{d\varepsilon}    (5.15)
where
f_{d0} = \frac{1}{\lambda}\left(\frac{v_{Ty}\rho_a}{2r_T} + \frac{v_{Ry}\rho_a}{2r_R}\right), \qquad
f_{d\varepsilon} = \frac{1}{\lambda}\left(v_{Tx}\sin\varphi_T - v_{Tz}\cos\varphi_T + v_{Rx}\sin\varphi_R - v_{Rz}\cos\varphi_R\right)    (5.16)
In Eq. (5.15), f_d^T and f_d^R are the Doppler frequencies corresponding to the transmitting station and the receiving station; f_d0 is the Doppler frequency caused by the heading velocity and is the desired component (the along-track velocity error is disregarded here); and f_dε is the Doppler frequency caused by the two velocity components perpendicular to the heading and is the error component. When the velocity error vectors of the two platforms add constructively, the Doppler frequency error may increase; when they cancel, it may decrease. Generally, the residual Doppler frequency error after compensation is required to satisfy [3]:
f_{d\varepsilon} < \varepsilon B_f    (5.17)
where B_f = 2f_{d0} corresponds to the bandwidth of an SAR matched tracking filter with resolution ρ_a, and ε usually takes a value between 0.1 and 0.5. Using Eq. (5.17), the tolerance of the velocity error can be obtained.
(2) Acceleration error constraint
Combining Eqs. (5.12)–(5.14(b)) and neglecting the velocity terms, the Doppler frequency rate error caused by the accelerations of the transmitting and receiving stations can be obtained as
f_{dr\varepsilon}^{a} = f_{dr\varepsilon}^{ac} + f_{dr\varepsilon}^{as}    (5.18)
where
f_{dr\varepsilon}^{ac} = \frac{1}{\lambda}\left(\frac{a_{Ty}\rho_a}{2r_T} + \frac{a_{Ry}\rho_a}{2r_R}\right), \qquad
f_{dr\varepsilon}^{as} = \frac{1}{\lambda}\left(a_{Tx}\sin\varphi_T - a_{Tz}\cos\varphi_T + a_{Rx}\sin\varphi_R - a_{Rz}\cos\varphi_R\right)    (5.19)
where a_{Tx}, a_{Ty}, a_{Tz} and a_{Rx}, a_{Ry}, a_{Rz} are the components of the accelerations of the transmitting and receiving stations in the x, y, and z directions; f_drε^ac is the Doppler frequency rate error caused by the along-track (y-direction) acceleration; and f_drε^as is the Doppler frequency rate error caused by the accelerations in the x and z directions. In practice, except for maneuvering flight, the platform speed changes slowly along the track and the effect of the along-track acceleration can be ignored. Usually, after compensation, the Doppler frequency offset is required to satisfy the inequality
f_{dr\varepsilon}^{as}\, T_f < \varepsilon B_f    (5.20)
where T_f = 1/B_f. Using Eq. (5.20), the constraint conditions on the acceleration error can be obtained. For a typical airborne platform with wavelength λ = 3 cm, B_f = 2 Hz, and ε = 0.1, Eqs. (5.16)–(5.20) yield:
v_{Tx}\sin\varphi_T - v_{Tz}\cos\varphi_T + v_{Rx}\sin\varphi_R - v_{Rz}\cos\varphi_R \le 6\times 10^{-3}\ \mathrm{m/s}
a_{Tx}\sin\varphi_T - a_{Tz}\cos\varphi_T + a_{Rx}\sin\varphi_R - a_{Rz}\cos\varphi_R \le 1.2\times 10^{-2}\ \mathrm{m/s^2}    (5.21)
This is a very strict requirement on the platform motion error, which means that motion compensation is absolutely necessary.
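The tolerance values in Eq. (5.21) follow directly from the system parameters; the short sketch below reproduces them for the stated airborne example (the parameter values are the ones quoted in the text).

```python
# Reproduces the numerical tolerances in Eq. (5.21) for the stated
# airborne example (lambda = 3 cm, Bf = 2 Hz, eps = 0.1).
lam = 0.03      # wavelength (m)
Bf = 2.0        # matched-filter bandwidth (Hz)
eps = 0.1       # allowed fraction of Bf
Tf = 1.0 / Bf   # filter time (s)

# Velocity-error tolerance from f_de < eps*Bf, Eqs. (5.16)-(5.17):
v_tol = eps * Bf * lam
# Acceleration-error tolerance from f_dre*Tf < eps*Bf, Eqs. (5.19)-(5.20):
a_tol = eps * Bf / Tf * lam

print("velocity error tolerance    : %.1e m/s" % v_tol)    # 6.0e-03 m/s
print("acceleration error tolerance: %.1e m/s^2" % a_tol)  # 1.2e-02 m/s^2
```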
5.2.2 Attitude error
According to the analysis in Section 5.1, the platform attitude error causes Doppler frequency or phase errors, which affect the imaging quality. The attitude error therefore needs to be constrained according to the accuracy requirements on the Doppler parameters. In addition, the yaw, pitch, and roll errors can be constrained following the same reasoning as for the track error. For a typical spaceborne platform, the allowable yaw angle β and yaw angular velocity β̇ are [3]:
\beta < (0.1\text{--}0.5)\times 0.00019\ \text{degrees}
\dot{\beta} < (0.1\text{--}0.5)\times 0.42\times 10^{-4}\ \mathrm{rad/s}    (5.22)
For bistatic SAR, considering the attitude errors of both platforms, the right-hand side of the constraint becomes half of the preceding values. Meeting these constraints requires a stable platform and high-precision attitude control.
5.3 Motion error measurement and perception
The premise of compensating for the echo amplitude, phase, and delay errors caused by motion error is to know, directly or indirectly, the magnitude and direction of the various errors, that is, to measure and perceive the motion error. In practical applications there are two common approaches. One is to use motion measurement equipment mounted on the SAR platform to measure the motion error information and, based on this, to calculate the echo delay and phase error information. The other is to exploit the special characteristics of the motion error in the echo transform domain, from which the time delay, phase, and Doppler error information are
estimated directly from the echo data using signal detection and estimation algorithms. Alternatively, exploiting the special characteristics of the motion error in the imaging results, iterative imaging processing can be used to bring each defocused point in the image back into focus, and the echo error information is thereby estimated implicitly.
5.3.1 Motion-sensing information
The motion-sensing information reflects the actual track and attitude changes of the platform and is perceived and recorded by the motion-sensing equipment during the platform's movement; it includes the platform's longitude, latitude, altitude, velocity, acceleration, attitude angles, and so on. After preprocessing, this information can be used as the basis for calculating echo parameters during imaging processing, and the motion error information extracted from it serves as the basis for motion compensation.
(1) Motion-sensing devices
High-precision motion compensation requires motion-sensing equipment that can accurately measure the platform motion errors. At present, the most common motion-sensing devices are the inertial navigation system (INS), the global positioning system (GPS), and the inertial measurement unit (IMU). In practice, these technologies are often combined to improve the motion-sensing accuracy, for example in GPS/INS or GPS/IMU combinations or in the POS attitude and positioning systems that are now widely used.
In a GPS/INS combination, the GPS antenna and the INS are installed on the same platform. After the measurement data streams are fused by the INS signal processing unit, the system provides the yaw angle, roll angle, pitch angle, heading angle, three velocity components, and three acceleration components of the aircraft. Based on this information, the control unit of the SAR system adjusts the antenna beam direction to keep it constant and thus reduces the impact of attitude errors; at the same time, it adjusts the pulse repetition frequency to reduce the impact of the heading velocity error, so as to achieve steady illumination of the target area and uniform spatial sampling. It should be pointed out that when the platform is not rigid, or when the INS is far from the phase center of the radar antenna, the motion measured by the platform INS does not represent the motion of the antenna phase center, and a lever-arm correction must be applied; an auxiliary inertial measurement unit (IMU) needs to be installed to measure the motion of the antenna phase center.
In addition, INS and IMU systems based on gyros and accelerometers both suffer from positioning error drift and poor long-term positioning accuracy, which needs to be corrected by a global positioning system (GPS or similar); the compensation accuracy, however, ultimately depends on the short-term measurement accuracy of the INS or IMU. For example, to achieve 0.1 m ultrahigh-resolution monostatic SAR imaging, the Lynx SAR developed by Sandia National Laboratories in the United States uses a combination of the high-precision SIMU motion measurement device and carrier-phase differential GPS, with a gyro drift of less than 0.01 degrees, an accelerometer accuracy of 50 μg, a high-frequency position accuracy of better than 0.025 mm, and a velocity accuracy of better than 1 cm/s [4].
(2) Preprocessing of measurement data
The measurement information obtained by motion-sensing equipment usually needs to be preprocessed before it can be used for SAR motion compensation, mainly for the following four reasons. First, the information obtained by the measuring equipment consists of coordinates and their changes in the geodetic coordinate system, so the dual-platform coordinates must be converted to the bistatic SAR imaging coordinate system and decomposed into regular motion information and motion error information. Second, because the data update rate of the measurement equipment is much lower than the radar pulse repetition frequency, the measurement data need to be interpolated. Third, the measurement data usually contain high-frequency random noise and must be smoothed and filtered. Fourth, if the motion measurement data and the SAR echo are not synchronized in time, the two data streams need to be aligned on the time axis.
The preprocessing involved can be illustrated by taking a strapdown attitude instrument as an example. It is composed of three rate gyros, three accelerometers, a high-speed navigation computer, and a GPS. The platform angular velocity information measured by the gyros and the platform acceleration information measured by the accelerometers are both transferred to the navigation computer after analog-to-digital conversion by the data collector. The navigation computer uses Kalman filtering to fuse the INS and GPS measurements, improving the measurement accuracy of the navigation system and obtaining the heading angle, pitch angle, roll angle, and other motion information during the flight of the platform. The process is shown in Fig. 5.8.
Fig. 5.8 Navigation principle and process of a strapdown attitude instrument.
(1) Coordinate transformation The coordinate transformation includes the transformation between the geodetic coordinate system and the northeast coordinate system, the fitting of the reference track, and the transformation between the northeast coordinate system and the heading coordinate system. First, the longitude and latitude heights of the two platforms in the geodetic coordinate system recorded by the motion measurement equipment are converted into the northeast coordinate system (due east as the x axis, due north as the y axis) to eliminate the errors between the spherical coordinate system and the rectangular coordinate system. It should be pointed out that, after converting to the northeast coordinate system, the height of the measured data is subtracted from the altitude of the scene. As shown in Fig. 5.9, the attitude indicator data is converted to the northeast coordinate system during an outfield test, where two lines represent the round-trip flight path of the aircraft. Then, according to the measured track data in the northeast coordinate system, the reference track is fitted, and the reference track of the platform is determined as a straight line. The reference track reflects the regular motion track of the platform, which is the basis for calculating the migration trajectory parameters and Doppler parameters during imaging processing, while the part deviating from the reference track is summed up as the motion error, which is the basis for subsequent motion compensation. Fig. 5.10 shows the actual motion trajectory and the fitted reference trajectory in certain outfield data. Since the imaging processing is usually carried out in the heading coordinate system, the flight path in the northeast coordinate system should be converted to the heading coordinate system to facilitate the
Fig. 5.9 Flight trajectories of the transmitting and receiving stations recorded by the attitude indicator in the northeast coordinate system: (a) transmitting station; (b) receiving station.
Fig. 5.10 Track fitting of the transmitting and receiving stations (actual flight path vs. fitted reference track): (a) transmitting station; (b) receiving station.
motion-compensation processing. The heading coordinate system refers to the right coordinate system with the direction of the receiving station’s reference track as axis x and the vertical direction as axis z, and it specifies that x ¼ 0 represents the moment of the first pulse launch. At this point, the coordinate system conversion process is completed. It should be noted that the preceding description only provides one of the methods and other ideas can also be used in practice, as long as the measurement data coordinate system and the imaging coordinate system are unified. (2) Smoothing and interpolation Based on the fitted reference track, the yaw and altitude motion errors can be calculated. In order to eliminate the high-frequency noise contained in the motion measurement data, it is necessary to use neighborhood average or Kalman filtering and other methods to smooth the data, as shown in Fig. 5.11. In addition, the update rate of measured data is usually 10–100 Hz, while the pulse repetition frequency is usually hundreds of hertz, or even thousands of hertz. The update rate of measured data is much lower. In order to achieve one-to-one correspondence, the measured data must be interpolated. (3) Data synchronization For the bistatic SAR transceiver division, two platforms need to have the measuring equipment installed. Therefore it inevitably involves the timesynchronization problem of two-platform motion measurement data. The time synchronization error of the measurement data leads to mismatched data, which causes errors related to the synchronization time difference between the platform position and attitude information. Using measurement data with obvious errors to process the echo will seriously affect migration correction, the effect of slow-time compression, and motion compensation. Therefore it is necessary to use synchronization technical measures to reduce the synchronization time difference, and when the synchronization time difference is known or measurable, to perform data alignment and matching processing based on the time difference. On the other hand, the time synchronization of the measurement data and echo data is also a problem that must be solved, because the synchronization time difference will cause the wrong match between them. Using this kind of misaligned measurement data to process echo data will seriously affect the migration correction, the effect of slow-time compression, and motion compensation.
Fig. 5.11 Yaw error of the transmitting and receiving stations before and after smoothing: (a) yaw error at the transmitting station; (b) yaw error at the receiving station.
Because the motion measurement data is used for parameter estimation and error estimation of echo data, the synchronization accuracy should reach the order of the pulse repetition interval with echo data. The pulse repetition frequency is usually between several hundred and several thousand hertz, and the accuracy of the time synchronization error between the measurement data of two platforms and the echo data can reach the order of microseconds. In fact, for bistatic SAR, the time synchronization of the transceiver station is a problem that must be solved to achieve imaging and ranging. When the transmitter and receiver are equipped with synchronous extension, the time synchronization accuracy of each extension of the transmitter and receiver can reach the nanosecond level, and the previous two kinds of data synchronization problems can be solved at the same time.
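A minimal sketch of the preprocessing steps (1) and (2) described above — reference-track fitting, smoothing, and interpolation to the pulse times — is given below; the measurement rate, PRF, noise level, and the simple moving-average smoother are illustrative assumptions (a Kalman filter could replace the smoother, as noted in the text).

```python
import numpy as np

# Assumed measurement stream: positions in a local Cartesian frame,
# updated at 100 Hz, versus a radar PRF of 1000 Hz.
t_meas = np.arange(0.0, 2.0, 0.01)                 # measurement times (s)
x_meas = 120.0 * t_meas                            # along-track position (m)
y_meas = 0.3 * np.sin(2 * np.pi * 0.7 * t_meas) \
         + 0.02 * np.random.randn(t_meas.size)     # cross-track with noise (m)

# (1) Fit a straight reference track (regular motion) by least squares
coef = np.polyfit(t_meas, y_meas, 1)
y_ref = np.polyval(coef, t_meas)
y_err = y_meas - y_ref                             # motion error to compensate

# (2) Smooth the high-frequency measurement noise (moving average here)
win = 11
kernel = np.ones(win) / win
y_err_smooth = np.convolve(y_err, kernel, mode="same")

# (3) Interpolate the smoothed error to the pulse times (PRF = 1000 Hz)
t_pulse = np.arange(0.0, 2.0, 0.001)
y_err_pulse = np.interp(t_pulse, t_meas, y_err_smooth)
print(y_err_pulse.shape)                           # one error value per pulse
```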
5.3.2 Information extracted from echo In bistatic SAR, the ultimate purpose of measuring the position and attitude information of the two platforms using motion-sensing equipment is to be able to calculate the signal parameters and error information of the echo through the measured values and the dual-platform coordinates conversion. However, due to the limited measurement accuracy of the motion-sensing device, the obtained parameter values and error values usually cannot meet the requirements of high-precision imaging and high-precision motion compensation. Therefore it is necessary to use these parameter values and error values as initial values, and then use the special characteristics of regular and irregular motion in the echo data transformation domain and image domain to estimate more accurate parameter values and error values. Generally, phase or motion errors are extracted by Doppler parameter estimation or autofocus methods. For details, please refer to the sections on Doppler parameter estimation in the previous chapter and autofocus at the end of this chapter. It should be pointed out that the separation of motion errors is complicated because bistatic SAR involves the motion of two independent platforms. In practice, the method used for motion information extraction based on echo is often to treat the transmitting platform and the receiving platform as one, no longer distinguishing which platform the motion error comes from, and perform a unified compensation, which implies the idea of an equivalent phase center motion error of the transmitting and receiving stations and equivalent single station.
5.4 Motion error control and echo error compensation
SAR motion compensation includes two major parts: motion error control and echo error compensation. In practical applications, the techniques involved in these two parts are tailored and combined according to the specific conditions.
The purpose of motion error control is, through real-time control measures, to keep the phase centers of the transmitting and receiving antennas as close as possible to the spatial positions corresponding to regular motion, and to keep the antenna beams pointing as closely as possible in the directions corresponding to a constant attitude during signal transmission and echo reception. In this way it isolates and reduces the effects of irregular platform motion and random attitude changes on the echo. The basic approach is to measure the error information in real time with the aid of motion-measuring equipment and clutter-locking techniques, to use a pulse-repetition-frequency control device to compensate for the nonuniform platform speed with a nonuniform pulse repetition interval and thus realize uniform spatial sampling, and at the same time to use an antenna stabilization platform to automatically stabilize the beam direction. For bistatic SAR, the problem of beam direction stability becomes a problem of beam space synchronization; see Chapter 6 for details.
Echo error compensation is the focus of this section. Its purpose is to preprocess the echo data, based on the available motion error information, so as to reduce or eliminate the effect of motion error on the uniformity of spatial sampling and to repair the echo delay, amplitude, and phase errors caused by motion errors. When the motion error information comes from motion-measurement equipment, the amount of computation is small but the precision is limited, which makes this route suitable for correcting echo errors caused by high-frequency motion errors. When the motion error information is estimated from the echo data, the amount of computation is large and the precision is high, but a high echo signal-to-noise ratio is required; this route is suitable for correcting low-frequency or slowly varying motion errors.
Echo error compensation methods can be classified in different ways. According to the source of the motion error information, they can be divided into motion compensation based on motion measurement data and motion compensation based on echo data. According to the category of motion error, they can be divided into along-track error compensation
and line-of-sight error compensation. In addition, the residual motion error should be compensated for after a certain compensation treatment. Different bistatic configurations have different echo laws and different compensation methods [5]. In this section, translational invariant bistatic side-looking SAR with parallel flight paths is taken as an example. The method used for the bistatic SAR echo compensation is described from three aspects: along-track motion compensation, line-of-sight motion compensation, and autofocus.
5.4.1 Along-track motion compensation
For translational invariant bistatic side-looking SAR with parallel flight paths, the along-track errors of the two platforms should be compensated for separately. Generally, within the synthetic aperture time the effect of acceleration can be ignored and only the velocity error needs to be compensated. Ideally, the platform moves uniformly in a straight line and the radar transmits at equal time intervals, so the pulses are transmitted and received at equal spatial intervals. When there is an along-track speed error, however, the spatial interval flown by the platform between adjacent pulses becomes nonuniform, as shown in Fig. 5.12.
Fig. 5.12 Schematic diagram of nonuniform sampling of monostatic SAR caused by along-track motion error.
There are two common ways to control and compensate for the along-track motion error. The first is applied during data acquisition: the pulse repetition frequency is adjusted in real time to match the current platform speed so that the spatial interval between pulses remains uniform; in essence, the radar then forms equal-angle sampling of the target. This technique belongs to the error control measures. In the second, the acquired data are resampled by azimuth interpolation so that the resampled azimuth data correspond to a uniform spatial interval; this is an error compensation measure.
For bistatic SAR, because the transmitting and receiving platforms are separated, transmission and reception do not occur at the same position, and the along-track
motion errors of the two platforms are not consistent, so the transmitting and receiving positions of successive pulses are both nonuniform. Traditional real-time pulse-repetition-frequency adjustment can only ensure equal-interval transmission or equal-interval reception, as in monostatic SAR, and traditional resampling can only resample according to the positions of the receiving station or of the transmitting station; neither can guarantee equal-interval sampling when along-track speed errors exist at the transmitting and receiving stations at the same time. Based on the principle of the equivalent phase center of the transmitting and receiving stations, the bistatic SAR along-track motion compensation method first converts the speeds of the two platforms into the speed of an equivalent single platform, or the sampling times of the two platforms into those of an equivalent single platform. Resampling methods such as cubic spline interpolation are then used to resample the echo data and eliminate the influence of nonuniform sampling in the slow-time direction. The nonuniform fast Fourier transform (NUFFT) [6] can also be used to compute the Fourier transform of the nonuniformly sampled data directly, thereby solving the along-track motion compensation problem.
A simulation example for bistatic side-looking SAR is given in Fig. 5.13. It can be seen that nonuniform sampling causes sidelobe asymmetry and main lobe broadening, and that the compensation algorithm based on instantaneous velocity equivalence and the NUFFT compensates well for the along-track error. Fig. 5.13D compares the point target responses obtained by resampling the echo data according to the along-track error of the receiving station alone, according to that of the transmitting station alone, and by the compensation scheme based on instantaneous speed equivalence and the NUFFT; the latter shows better compensation performance.
In addition, the along-track motion error can also be compensated by exploiting the correspondence between the instantaneous platform speed and the Doppler frequency rate [7]. However, since the instantaneous Doppler frequency rate is affected not only by the along-track speed error but also by the line-of-sight error, it is difficult to separate the two.
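The sketch below illustrates the along-track resampling step: each range line is interpolated from the nonuniform (equivalent) along-track sampling positions back onto a uniform grid with a cubic spline. Taking the mean of the two platform positions as the equivalent sampling position is one possible reading of the equivalent-phase-center idea, not necessarily the scheme used in the text, and all data here are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_along_track(echo, x_T, x_R, n_out=None):
    """Resample echo (n_pulses x n_range, complex) from nonuniform
    along-track positions onto a uniform grid.

    x_T, x_R : along-track positions of transmitter and receiver at each
               pulse; their mean is used as an equivalent sampling position
               (one possible equivalent-phase-center choice).
    """
    x_eq = 0.5 * (np.asarray(x_T) + np.asarray(x_R))   # equivalent positions
    n_out = echo.shape[0] if n_out is None else n_out
    x_uniform = np.linspace(x_eq[0], x_eq[-1], n_out)  # uniform grid
    out = np.empty((n_out, echo.shape[1]), dtype=complex)
    for k in range(echo.shape[1]):                     # per range bin
        spl_re = CubicSpline(x_eq, echo[:, k].real)
        spl_im = CubicSpline(x_eq, echo[:, k].imag)
        out[:, k] = spl_re(x_uniform) + 1j * spl_im(x_uniform)
    return out

# Tiny synthetic example: 64 pulses, 8 range bins, jittered sampling
rng = np.random.default_rng(0)
xT = np.sort(np.arange(64) * 0.5 + 0.05 * rng.standard_normal(64))
xR = np.sort(np.arange(64) * 0.4 + 0.05 * rng.standard_normal(64))
echo = np.exp(1j * 0.1 * np.arange(64))[:, None] * np.ones((64, 8))
echo_uniform = resample_along_track(echo, xT, xR)
```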
5.4.2 Line-of-sight motion compensation The purpose of line-of-sight motion compensation is to compensate for the echo error caused by the slant range error. For a single platform, line-of-sight motion compensation mainly includes two aspects: envelope compensation,
Fig. 5.13 Simulation results: along-track velocity errors of the transmitter and receiver, point target responses before and after compensation, and comparison of the proposed method with resampling according to the receiver or transmitter track alone.
which is used to correct the delay error caused by the slant range error, and two-step phase compensation, which is used to compensate for the phase error caused by the slant range error. The first step is first-order motion compensation, that is, echo error compensation that ignores the spatial variance of the slant range error; the second step is second-order motion compensation, that is, compensation of the residual, range-dependent echo error. For bistatic SAR, the line-of-sight compensation of both platforms must be considered.
To simplify the analysis, the narrow-beam and flat-terrain assumptions are adopted here to introduce a line-of-sight motion compensation method for parallel-flying bistatic side-looking SAR. The method first resamples along fast time to compensate for the echo delay error caused by the mean slant-range error r_LOS. If the delay corresponding to r_LOS is less than half of a delay resolution cell, this step can be skipped, or the sub-resolution-cell resampling can be completed by multiplying by a linear phase factor in the fast-time frequency domain. Next comes the correction of the phase error φ_LOS = 2πr_LOS/λ.
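Before turning to the phase correction, the envelope correction just mentioned — a sub-resolution-cell range shift implemented as a linear phase factor in the fast-time frequency domain — can be sketched as follows; the sampling rate and shift amount are illustrative assumptions.

```python
import numpy as np

def shift_range_line(line, delta_r, fs, c=3e8):
    """Shift one range line by a sub-cell amount delta_r (metres of bistatic
    range) using a linear phase ramp in the fast-time frequency domain."""
    n = line.size
    f = np.fft.fftfreq(n, d=1.0 / fs)       # fast-time frequencies (Hz)
    delay = delta_r / c                      # delay error to remove (s)
    return np.fft.ifft(np.fft.fft(line) * np.exp(2j * np.pi * f * delay))

# Toy usage: remove a 0.3-cell envelope error from a synthetic range line
fs = 100e6                                   # range sampling rate (Hz)
cell = 3e8 / fs                              # bistatic-range cell size (m)
line = np.zeros(256, dtype=complex)
line[128] = 1.0
corrected = shift_range_line(line, -0.3 * cell, fs)
```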
Fig. 5.14 Line-of-sight error compensation process.
Phase correction usually uses a two-step compensation method. The first step compensates for the phase error caused by the mean slant-range error of a reference point, for example the error r_LOSI of the scene-center slant range in the imaging area; this is called first-order motion compensation, and it applies the same phase correction to the echoes of all ground scattering points, regardless of their mean slant range. The second step compensates for the spatially variant part r_LOS − r_LOSI of the mean slant-range error r_LOS, that is, for the difference between r_LOS and r_LOSI; this is called second-order motion compensation and is a differential correction for scattering points with different mean slant ranges. Fig. 5.14 shows the line-of-sight phase compensation process.
A simulation example of the two-step line-of-sight motion compensation for bistatic SAR is given here. The scene setting is shown in Fig. 5.15A, and the simulation results are shown in Fig. 5.15. Fig. 5.15B shows the imaging result without any motion compensation: the image is completely out of focus, especially in the slow-time direction. Fig. 5.15C shows the result after only first-order motion compensation: only the echo of the scene center is well focused, while other points still show obvious defocusing. Fig. 5.15D shows the result after both first-order and second-order motion compensation: all points in the scene are well focused. Fig. 5.16 shows the imaging results of the bistatic side-looking SAR experiment conducted by the University of Electronic Science and Technology of China (UESTC) in 2006 [8]; after compensation, the image quality is significantly improved.
Fig. 5.15 Simulation results.
It should be pointed out that, since the conventional wavenumber domain algorithm realizes migration correction and slow-time compression at the same time, it is difficult to embed the second-order motion compensation effectively. Therefore the extended wavenumber domain algorithm is needed [9]. In addition, a one-step motion compensation method [10] has recently appeared. Since the spatial variant error compensation of the mean value of slant-range is also carried out before migration correction, it can reduce the subsequent migration correction error to a certain extent.
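A minimal sketch of the two-step line-of-sight phase compensation described above is given below; the slant-range error model, its range dependence, and all numbers are illustrative assumptions, and the sign convention of the echo phase is chosen only for the example.

```python
import numpy as np

def los_phase_compensation(echo, r_los, r_los_center, lam):
    """Two-step line-of-sight phase compensation.

    echo         : complex echo, shape (n_pulses, n_range), after range
                   compression (envelope error already corrected or negligible)
    r_los        : mean slant-range error per pulse and range bin,
                   shape (n_pulses, n_range)
    r_los_center : slant-range error of the scene center per pulse, (n_pulses,)
    lam          : wavelength (m)
    """
    # First-order: same correction for all range bins (scene-center error)
    phase1 = 2 * np.pi * r_los_center / lam
    echo1 = echo * np.exp(1j * phase1)[:, None]

    # Second-order: residual, range-dependent part (r_LOS - r_LOSI)
    phase2 = 2 * np.pi * (r_los - r_los_center[:, None]) / lam
    return echo1 * np.exp(1j * phase2)

# Toy usage with synthetic numbers
n_az, n_rg, lam = 256, 64, 0.03
r_c = 0.01 * np.sin(2 * np.pi * np.linspace(0, 1, n_az))      # center error (m)
r_all = r_c[:, None] * (1 + 0.1 * np.linspace(-1, 1, n_rg))   # range-variant error
echo = np.ones((n_az, n_rg), dtype=complex) * np.exp(-1j * 2 * np.pi * r_all / lam)
compensated = los_phase_compensation(echo, r_all, r_c, lam)   # phase error removed
```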
5.4.3 Autofocus After the aforementioned motion compensation, it can be considered that most of the errors have been compensated for, but there are usually residual motion errors due to the limitation of measurement data accuracy and other factors. In addition, in some cases, such as small drones, there is no
Fig. 5.16 Bistatic SAR experimental data imaging results.
motion-sensing device or the accuracy is low, and only echo data can be relied on for motion compensation. Autofocus is a method of estimating residual error from echo data and compensating. It iteratively adjusts parameters to achieve good focus on image quality. In terms of classification, it belongs to motion compensation based on echo data. However, the definition of autofocus is slightly different from that in the previous chapter, which focuses on the estimation of Doppler frequency, namely, the estimation of quadratic phase error. Here the compensation for residual higher-order phase errors is considered. For specific autofocus methods, the phase gradient autofocus algorithm is the most common. It does not depend on the phase error model and can estimate and compensate for the phase error of any order, but it has specific requirements for the imaging scene [11]. Autofocus essentially estimates and compensates for the phase error. Therefore the existing monostatic SAR autofocus method can also be applied to bistatic SAR. However, due to the flexibility of the bistatic configuration, more spatial variances will be introduced, which will be the focus of bistatic SAR autofocusing. For example, for translational variant bistatic side-looking SAR with parallel flight paths, the two-dimensional spatial variant of echo will cause a spatial variant phenomena of the linear, quadratic, and third-order term
for the time delay migration. If the migration of the linear term cannot be properly corrected, it directly affects the quality of the subsequent autofocus [12, 13]. The quadratic and third-order migration terms generally do not cross a delay resolution cell, which reduces the processing complexity to some extent. However, the spatial variance of the quadratic phase usually requires an equalization operation (e.g., nonlinear chirp scaling), while the spatial variance of the third-order phase is not considered.
In this chapter, an autofocusing method is introduced using translational invariant bistatic side-looking SAR with parallel flight paths as an example. First, a slow-time dechirp function is constructed and multiplied with the echo data after fast-time compression and fine migration correction to realize the slow-time dechirp. The discretized data are written as y_{m,n} (m = 1, 2, …, M, n = 1, 2, …, N), where M is the number of fast-time sampling points and N is the number of slow-time sampling points. Let y_{m,n} denote the echo data without phase error; a discrete Fourier transform of y_{m,n} in the slow-time direction then yields the imaging result. Because of the motion error, there is a phase error Φ = (ϕ_1, ϕ_2, ϕ_3, ⋯, ϕ_N) in the slow-time direction of the echo data, where ϕ_n (n = 1, 2, …, N) is the phase error of the nth slow-time sample, and the actual slow-time dechirp result \tilde{y}_{m,n} can be expressed as
\tilde{y}_{m,n} = y_{m,n}\, e^{\,j\phi_n}    (5.23)
On this basis, the discrete Fourier transform in the slow-time direction is carried out, and the imaging result is affected by the error. To obtain a well-focused image, the error of each slow-time sample must be estimated. In practice, the error estimation can be realized by evaluating the image contrast or entropy. For example, the optimal estimate of the phase error vector Φ can be expressed as
\hat{\Phi} = \arg\max_{\Phi}\, C(\Phi)    (5.24)
where C(Φ) is the image contrast function. For Eq. (5.24), an iterative algorithm based on coordinate descent can be used to estimate the error vector. The basic idea is to keep the error estimates of other slow-time sampling points fixed in each iteration and to conduct a one-dimensional search for the errors of a specific slow-time sampling point [14]. Due to the huge amount of calculation, Newton method, conjugate gradient, coordinate projection, and other optimization algorithms can be used to accelerate the solving process [15].
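The contrast-maximization autofocus of Eqs. (5.23)–(5.24) with coordinate descent can be sketched as follows; the brute-force one-dimensional grid search, the contrast definition, and the toy data are illustrative simplifications of the method (accelerated solvers such as Newton-type methods would replace the grid search in practice).

```python
import numpy as np

def contrast(img):
    """Image contrast C = std(|img|^2) / mean(|img|^2)."""
    p = np.abs(img) ** 2
    return p.std() / p.mean()

def autofocus_coordinate_descent(data, n_iter=3, n_search=21, max_phi=np.pi):
    """Estimate the slow-time phase error by coordinate descent on image
    contrast, Eq. (5.24). `data` is dechirped echo, shape (M_fast, N_slow);
    imaging here is a DFT along the slow-time axis."""
    M, N = data.shape
    phi = np.zeros(N)                          # current phase-error estimate
    grid = np.linspace(-max_phi, max_phi, n_search)
    for _ in range(n_iter):
        for n in range(N):                     # one slow-time sample at a time
            best_c, best_p = -np.inf, phi[n]
            for p in phi[n] + grid:            # 1-D search around current value
                trial = phi.copy()
                trial[n] = p
                img = np.fft.fft(data * np.exp(-1j * trial)[None, :], axis=1)
                c = contrast(img)
                if c > best_c:
                    best_c, best_p = c, p
            phi[n] = best_p
    return phi

# Toy usage: error-free data times a known phase error, then re-estimated
M, N = 32, 16
clean = np.ones((M, N), dtype=complex)
true_phi = 0.8 * np.sin(2 * np.pi * np.arange(N) / N)
data = clean * np.exp(1j * true_phi)[None, :]          # Eq. (5.23)
phi_hat = autofocus_coordinate_descent(data)
```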
Fig. 5.17 Autofocus results: (a) before processing; (b) after processing.
Fig. 5.17 shows the imaging results before and after autofocusing. It can be seen that after autofocus processing, the image quality has been significantly improved. It needs to be added that since the autofocus method combined with the frequency-domain imaging algorithm has a strong dependence on the quality of migration correction, the idea of migration correction and error estimation iteration, or the two-dimensional autofocus method, is generally adopted. However, the autofocus method combined with the BP algorithm does not need migration correction and can be used for differential compensation of points in different positions in the scene, which not only can adapt to the situation of spatial variant error but also have no error-transmission phenomenon. This is also suitable for bistatic SAR autofocus motion compensation [16].
References
[1] W.C. Carrara, R.S. Goodman, R.M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms, Artech House, Boston, 1995.
[2] C. Zhang, Synthetic Aperture Radar: Principle, System Analysis and Application, Science Press, Beijing, 1989.
[3] X. Yuan, Introduce to the Spaceborne Synthetic Aperture Radar, China National Defence Industry Press, Beijing, 2002.
[4] X. Chou, C. Ding, D. Hu, Bistatic SAR Imaging Processing Technology, Science Press, Beijing, 2010.
[5] W. Li, Motion Compensation of Bistatic Forward-Looking Synthetic Aperture Radar, University of Electronic Science and Technology of China, 2012.
[6] Q.H. Liu, X.M. Xu, B. Tian, et al., Applications of nonuniform fast transform algorithms in numerical solutions of differential and integral equations, IEEE Trans. Geosci. Remote Sens. 38 (4) (2000) 1551–1560.
[7] Z. Bao, M. Xing, T. Wang, Radar Imaging Technique, Electronic Industry Press, Beijing, 2005.
[8] Y. Huang, Research on Key Technologies of Airborne Bistatic SAR, University of Electronic Science and Technology of China, 2008.
[9] W. Pu, W. Li, Y. Lv, J. Jang, et al., An extended omega-K algorithm with integrated motion compensation for bistatic forward-looking SAR, in: IEEE Radar Conference (RadarCon), 2015, pp. 1291–1295.
[10] M. Yang, D. Zhu, W. Song, Comparison of two-step and one-step motion compensation algorithms for airborne synthetic aperture radar, Electron. Lett. 51 (14) (2015) 1108–1110.
[11] M. Li, W. Pu, W. Li, et al., Range migration correction of translational variant bistatic forward-looking SAR based on iterative keystone transformation, in: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milano, 2015.
[12] W. Pu, J. Yang, W. Li, et al., A minimum-entropy based residual range cell migration correction for bistatic forward-looking SAR, in: IEEE Radar Conference (RadarConf), 2016.
[13] W. Pu, J. Yang, W. Li, et al., A residual range cell migration correction algorithm for SAR based on low-frequency fitting, in: IEEE Radar Conference (RadarCon), 2015, pp. 1300–1304.
[14] Y. Huang, W. Pu, J. Wu, et al., An azimuth-variant autofocus scheme of bistatic forward-looking synthetic aperture radar, in: IEEE Radar Conference (RadarConf), 2016, pp. 1–4.
[15] T.J. Kragh, Monotonic iterative algorithm for minimum-entropy autofocus, in: Adaptive Sensor Array Processing (ASAP) Workshop, 2006.
[16] J.N. Ash, An autofocus method for backprojection imagery in synthetic aperture radar, IEEE Geosci. Remote Sens. Lett. 9 (1) (2012) 104–108.
CHAPTER 6
Bistatic SAR synchronization
The basis of two-dimensional correlation imaging in synthetic aperture radar is the delay, Doppler, and amplitude resolution of the echoes of scattering points at different positions on the ground. The radar system must therefore be able to measure the echo delay, Doppler, and amplitude accurately, which requires the transmitting and receiving subsystems to share a unified time and frequency reference. In addition to sufficient transmitted power and receiving sensitivity, the antenna beam footprints must also overlap in the imaging region.
In monostatic SAR, transmission and reception take place on the same platform, and a unified time and frequency reference can be realized simply by using a common source. Moreover, the transmitter and receiver can share the same antenna, so their ground beam footprints naturally coincide; even if separate antennas are used, it is easy to calibrate the antenna electrical axes on the same platform to make the beam footprints coincide.
Bistatic SAR, by contrast, transmits and receives separately: the transmitting and receiving subsystems are located on different moving platforms, each with its own clock, frequency source, and antenna. Without appropriate technical measures, the transmitted and received time and frequency will deviate considerably and the beam footprints will not overlap, producing echo delay, Doppler, and amplitude errors. This further affects and damages the measurement and decomposition of delay, Doppler, and amplitude, causing image position offsets, target defocusing, and deterioration of SNR and contrast, seriously degrading the imaging quality or even preventing imaging altogether. Therefore, in bistatic SAR, technical measures for communication between the transmitting and receiving stations are needed to solve the problems of a unified time base, coherent transmission and reception, and common beam coverage, that is, the three synchronization issues of time, frequency, and space. The transmitting station and the receiving station can then operate separately yet form an organic whole, work in a coordinated way, and achieve
stable and coherent high-quality echo signal acquisition, high-precision echo delay, and Doppler measurement. This gives basic conditions for obtaining high-quality images through imaging processing. This chapter mainly analyzes the source and influence of synchronization error, puts forward the limitation conditions of the synchronization error to realize the imaging, introduces typical methods to achieve the three synchronizations, and gives an example and effects of transceiver synchronization.
6.1 Space synchronization
In monostatic SAR, the ground footprints of the transmitting and receiving beams naturally overlap because a common antenna is used for transmission and reception. In bistatic SAR, the transmitting beam and the receiving beam are generated by two antennas on different platforms, and their ground footprints do not overlap naturally. On the contrary, due to the influence of antenna pointing settings, platform position and attitude, servo control response, and other factors, there will be obvious fixed deviations and random errors, resulting in a decline of echo power, a narrowing of the imaging swath, and a change of imaging resolution, thus seriously affecting the imaging quality. Therefore it is necessary to take real-time, dynamic measures to control and stabilize the pointing of the receiving and transmitting antennas according to the platform position, attitude, and echo amplitude changes, so as to ensure that the beam footprints of the two antennas overlap as much as possible and remain stable during synthetic aperture echo acquisition; this is called space synchronization technology.
6.1.1 Space synchronization error Space synchronization refers to the state in which the ground footprint centers of the transmitting and receiving beams are completely coincident. Space synchronization error describes the degree of deviation from this state. In engineering applications, it is generally described quantitatively from two aspects: the center deviation error of the beam footprint and the beam center pointing error. We will discuss the constraints and sources of these two types of errors in depth. 1. Beam footprint center deviation error constraint The deviation error of the beam footprint center is defined as εb, which refers to the distance between the transmitting beam footprint center and the receiving beam footprint center. It can reflect the degree of beam
misalignment of the antennas of the transmitting station and the receiving station. The pattern of the antenna beam on the ground is related to its beam shape. To simplify the discussion, the beam footprints are assumed to be circular. Let the 3 dB equivalent footprint radii of the two beams be, respectively, rmax and rmin, with rmax > rmin. Then, as the deviation error εb between the beam footprint center of the transmitting station and the beam footprint center of the receiving station increases, various synchronization states, such as complete synchronization, critical synchronization, local synchronization, and failure of synchronization, will occur in turn, as shown in Fig. 6.1. The overlapping degree of the beam footprints is a direct representation of the space synchronization state, which has an important impact on the observation swath, the length of the synthetic aperture, and the echo power. It can be seen in Fig. 6.2 that with the increase of εb, the 3 dB footprint overlap area of the transmitting and receiving beams shows a step-down trend. The inflection points of the ηs curve occur at εb = rmax − rmin and εb = rmax + rmin. In general, the purpose of space synchronization is to
Fig. 6.1 Overlapping states of several typical transmitting and receiving beams.
Fig. 6.2 3 dB footprint overlap ratio.
keep the overlap ratio ηs close to 100%. Therefore εb ≤ rmax − rmin needs to be ensured, so that the transmitting and receiving beam footprints are fully synchronized, or at least critically synchronized. With the decrease of ηs, the corresponding overlapping area, the observation swath, and the synthetic aperture length will decrease, resulting in a reduced imaging swath and a broadened main lobe of the Doppler point target response. On the other hand, the increase of εb will lead to a decline of echo power, and hence a loss of image signal-to-noise ratio and image quality. The variation law of ηb with εb can be derived from the bistatic geometric configuration, the antenna beam shape, and the system parameters, as shown in Fig. 6.3. In general, when εb = rmax, ηb drops by 1.5 dB, and when εb = rmax + rmin, ηb has dropped by 3 dB. So when εb is a nonzero fixed value, ηb will be lower than its value when fully synchronized, resulting in a decrease in echo power and a loss of image signal-to-noise ratio. When εb changes randomly between 0 and rmax, the echo power will fluctuate by 0.75 dB, which exceeds the general requirement of 0.5 dB and results in significantly increased target response sidelobes in the imaging results. This is even worse when the range of εb exceeds rmax. Therefore, in order to obtain echoes of stable amplitude, εb ≤ rmax − rmin should be guaranteed.
2. Beam center pointing error constraint
Beam center pointing error refers to the deviation angle between the two beam footprint centers observed from the receiving station, denoted Δη. It is determined by the pointing angle solution error and the pointing angle control error. The pointing angle solution error is mainly related to the accuracy of attitude measurement and positioning, the beam pointing solution method, and the computational accuracy. The pointing angle control error is mainly caused by the control error of the servo system of the scanning antenna and the mechanical error of
Fig. 6.3 Relation between echo power and space synchronization status.
the motor, or by such factors as the beam array topology of the phased array antenna and the precision of the phase shifters. Let M be the predetermined point of the beam footprint center, and let PT and PR respectively denote the transmit beam and receive beam footprint centers. Due to the effects of the foregoing errors, PT and PR are distributed in circles of radius CT and CR, respectively, with a probability of 50% or more; CT and CR are respectively referred to as the circular probable errors of the transmitting and receiving beams, and εb ≤ CT + CR is obtained from Fig. 6.4. Since critical synchronization requires εb ≤ rmax − rmin, the requirement for the beam pointing circular probable errors is CT + CR ≤ rmax − rmin. In practical engineering applications, the opening angle Δη formed by the two beam footprint centers as seen from the receiving antenna is often used to measure synchronization. According to the spatial relationship in Fig. 6.5, we can get
Fig. 6.4 Relation between circular probability error and synchronization error.
Fig. 6.5 Relation between circle probability error and beam coincidence degree.
\Delta\eta = \arccos\left(\frac{R_{rt}^{2} + R_{rr}^{2} - \varepsilon_{b}^{2}}{2R_{rt}R_{rr}}\right)    (6.1)
where Rrt represents the distance from the receiving station to the beam footprint center of the transmitting station, and Rrr represents the distance from the receiving station to the beam footprint center of the receiving station. According to the preceding analysis, the beam footprints should be at least in a critical synchronization state, i.e., εb ≤ rmax − rmin. According to Eq. (6.1), the constraint on Δη is

\Delta\eta \le \arccos\left(\frac{R_{rt}^{2} + R_{rr}^{2} - (r_{\max} - r_{\min})^{2}}{2R_{rt}R_{rr}}\right)    (6.2)
The Δη corresponding to the beam pointing circular probable errors satisfies the following formula:

\Delta\eta \le \arccos\left(\frac{R_{rt}^{2} + R_{rr}^{2} - (C_{T} + C_{R})^{2}}{2R_{rt}R_{rr}}\right)    (6.3)
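As a quick numerical check of constraints (6.2) and (6.3), the short Python sketch below evaluates the allowed pointing angle for assumed footprint radii and receiver-to-footprint distances; all parameter values are illustrative, not taken from a particular system.

```python
import numpy as np

def pointing_angle_bound(r_max, r_min, R_rt, R_rr):
    """Upper bound on the pointing angle error (rad), Eq. (6.2):
    critical synchronization allows eps_b up to r_max - r_min."""
    eps_b = r_max - r_min
    cos_arg = (R_rt**2 + R_rr**2 - eps_b**2) / (2.0 * R_rt * R_rr)
    return np.arccos(np.clip(cos_arg, -1.0, 1.0))

# Hypothetical example: 3 dB footprint radii of 3 km and 2 km, receiver 60 km
# from both footprint centers.
d_eta = pointing_angle_bound(r_max=3e3, r_min=2e3, R_rt=60e3, R_rr=60e3)
print(f"allowed pointing angle error: {np.degrees(d_eta):.2f} deg")  # about 0.95 deg
```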
3. Sources of space synchronization errors In practice, the space synchronization error is caused by the platform attitude error, platform position error, positioning error, beam pointing control error, and other factors. The first and second types are caused by nonideal factors such as wind speed, wind direction, and airflow, which affect the constant attitude and uniform straight-line motion of the aircraft. Such errors will cause unexpected and random deviation of the beam direction of the transceiver antenna. The third and fourth categories are hardware system errors, which are caused by insufficient attitude and positioning accuracy of inertial navigation and other equipment, as well as the control accuracy and response characteristics of the antenna servo, resulting in fixed and random deviations in antenna beam pointing. In addition to these factors, the deviation between the antenna installation datum and the positioning reference will also lead to an antenna pointing error of the transceiver platform. All these errors will cause the direction of the transmitting and receiving beam to deviate from its due direction, resulting in nonoverlap of the footprint center, that is, the space is out of synchronization. Therefore the effective control of and compensation for these errors comprise the premise of good space synchronization.
6.1.2 Space synchronization technology
In order to control the space synchronization error within the allowable range, it is necessary to take corresponding technical measures. Generally, the spatial positions are used to calculate and control the beam pointing to achieve initial alignment of the beam footprints, and then clutter locking technology is used to realize precise tracking of the beam footprints.
1. Coarse alignment method based on pointing solution
At present, there are two kinds of synchronization methods available in engineering. The first category is relative positioning with direct control. According to the platform position and attitude measured by the attitude measurement and positioning device, combined with parameters such as the center position of the mapping strip, the antenna beam pointing angles of the transmitting and receiving stations are adjusted to complete the space synchronization. This method is easy to control and implement, and its space synchronization accuracy is mainly affected by the attitude measurement and positioning error and the pointing angle control error. The second category comprises absolute positioning with real-time calculation. This method uses GPS and integrated navigation devices to measure the attitude and position of the platform. Through multistage coordinate system transformations and the integration of spatial position relations, the antenna pointing angles of the transmitting and receiving stations are solved, and the space synchronization of the transmitting and receiving beams is realized through the control of the antenna pointing system. This method is suitable for high-accuracy space synchronization of bistatic SAR. Fig. 6.6 shows the implementation scheme of the second method, in which the attitude and position of the platform are measured by the attitude and positioning device. The antenna pointing controller is used to adjust the antenna pointing. The wireless transceiver unit realizes information transmission between the transmitting and receiving stations. The storage unit records and indicates the status of space synchronization, and stores the position and attitude of the platform during radar operation, providing data for subsequent signal processing. The common method for the antenna pointing solution is a space synchronization method based on solving the geometric spatial relationship. By multistage coordinate transformation, the position of the predetermined beam footprint center in the coordinate system of the carrier is obtained and
Fig. 6.6 Synchronization scheme of the absolute synchronization method.
then the antenna beam direction, including the azimuth angle and the elevation angle, is obtained. Taking the geodetic coordinate system as an example, the azimuth angle refers to the horizontal angle measured clockwise from the north direction line of the aircraft platform to the centerline of the antenna beam, and the elevation angle refers to the angle between the horizontal plane of the aircraft platform and the centerline of the antenna beam. In order to obtain the antenna pointing of the transmitting and receiving stations, it is necessary to know the spatial positions of the carrier platform of the transmitting station, the carrier platform of the receiving station, and the target. Through the positioning function of GPS, the required position information can be obtained. However, the platform position measured by the GPS receiver is given in the WGS-84 coordinate system, from which the antenna direction cannot be obtained directly, so a coordinate system transformation is needed. Here, taking bistatic forward-looking SAR as an example, the coarse alignment method based on the pointing solution [1] is presented. As shown in Fig. 6.7, the center of gravity O of the transmitting station platform T is taken as the coordinate origin, the due-north direction from O is the positive x axis, the downward vertical from O to the ground plane is the positive z axis, and the y axis is determined by the right-hand rule; in this way the geographical coordinate system O-xyz of the aircraft is established. The center of the beam footprint of the receiving antenna on the ground is the target point M; T0 and R0 represent the projections of the transmitting platform and the receiving platform on the ground. The coordinates (xr, yr, zr) of the receiving platform R in the geographic coordinate system, the
Fig. 6.7 Aircraft geographic coordinate system.
azimuth angle βR and elevation angle θR of the receiving antenna, the coordinates (xt, yt, zt) of the transmitting platform in the geographic coordinate system, and the height h of the receiving platform above the target point are known. Assume that the coordinates of the carrier platform P and the target M in the geographical coordinate system are, respectively, (xp, yp, zp) and (xm, ym, zm). In order to obtain the pointing of the transmitting antenna, the coordinates (xm, ym, zm) of the target point M in the geographical coordinate system must be obtained first, which can be done by solving the following equations:

\begin{cases} y_r - y_m = (x_m - x_r)\tan\beta_R \\ (x_m - x_r)^2 + (y_r - y_m)^2 = \dfrac{h^2}{\tan^2\theta_R} \end{cases}    (6.4)

In the aircraft geographic coordinate system of the platform, the coordinates of the target point become (xmp, ymp, zmp), Eq. (6.4) is still satisfied, and

\begin{pmatrix} x_{mp} \\ y_{mp} \\ z_{mp} \end{pmatrix} = \begin{pmatrix} -\sin Lat\,\cos Lng & -\sin Lat\,\sin Lng & \cos Lat \\ -\sin Lng & \cos Lng & 0 \\ -\cos Lat\,\cos Lng & -\cos Lat\,\sin Lng & -\sin Lat \end{pmatrix} \begin{pmatrix} x_m - x_p \\ y_m - y_p \\ z_m - z_p \end{pmatrix}    (6.5)

where Lng and Lat are, respectively, the longitude and latitude of the position of the carrier platform. Combining Formulas (6.4) and (6.5), the azimuth angle β and pitch angle θ of the antenna from the platform to the target point can be obtained:

\beta = \arctan\frac{y_{mp}}{x_{mp}}    (6.6)

\theta = \arctan\frac{z_{mp}}{\sqrt{x_{mp}^{2} + y_{mp}^{2}}}    (6.7)
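As a concrete illustration of Eqs. (6.4)–(6.7), the Python sketch below solves for the footprint center M (taking the positive branch of Eq. (6.4)), rotates the relative vector into the platform geographic frame, and converts it to azimuth and elevation angles. The rotation matrix is written in the standard north-east-down form assumed above, and all numerical values and variable names are illustrative only.

```python
import numpy as np

def footprint_center(xr, yr, h, beta_R, theta_R, z_ground):
    """Eq. (6.4), positive branch: receive-beam footprint center M on the ground.
    h is the receiver height above the target point; the z axis points down."""
    ground_range = h / np.tan(theta_R)            # horizontal distance receiver -> M
    xm = xr + ground_range * np.cos(beta_R)
    ym = yr - ground_range * np.sin(beta_R)
    return np.array([xm, ym, z_ground])

def geo_to_platform(p_target, p_platform, lat, lng):
    """Eq. (6.5): rotate the relative vector into the platform geographic frame
    (standard north-east-down rotation assumed)."""
    sL, cL = np.sin(lat), np.cos(lat)
    sG, cG = np.sin(lng), np.cos(lng)
    R = np.array([[-sL * cG, -sL * sG,  cL],
                  [-sG,       cG,      0.0],
                  [-cL * cG, -cL * sG, -sL]])
    return R @ (np.asarray(p_target) - np.asarray(p_platform))

def pointing_angles(v):
    """Eqs. (6.6)-(6.7): azimuth and elevation angles of vector v (quadrant-safe)."""
    beta = np.arctan2(v[1], v[0])
    theta = np.arctan2(v[2], np.hypot(v[0], v[1]))
    return beta, theta

# Illustrative numbers only: both platforms 3 km above the ground (z points down, so
# the ground plane is at z = +3 km), receiver 5 km north and 2 km east of the
# transmitter, receive beam pointed 30 deg in azimuth with a 25 deg depression angle.
M = footprint_center(xr=5e3, yr=2e3, h=3e3,
                     beta_R=np.radians(30), theta_R=np.radians(25), z_ground=3e3)
v = geo_to_platform(M, (0.0, 0.0, 0.0), lat=np.radians(30.6), lng=np.radians(104.1))
beta, theta = pointing_angles(v)
print(f"azimuth {np.degrees(beta):.2f} deg, elevation {np.degrees(theta):.2f} deg")
```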
So far, the solution of antenna pointing has been completed. However, at this time, the antenna pointing is located in the geographical coordinate system of the carrier, and its 0-degree direction is in the due north direction. Therefore it cannot be directly used for the pointing control of the antenna pointing controller. The transformation of antenna pointing from the geographic coordinate system to the coordinate system of the servo turntable requires the
transition through an intermediate coordinate system, which must coincide with the coordinate system of the servo turntable in the actual installation. In this way, the antenna pointing controller can transfer the antenna pointing from the geographic coordinate system of the carrier to the coordinate system of the servo turntable through its attitude compensation function. After the receiving and transmitting stations calculate the pointing angles of their antennas, they need to adjust the pointing to aim at the center of the mapping area. If the radar antenna direction is adjusted mechanically, mechanical errors will be introduced. If the antenna beam direction is adjusted by phase control, insufficient phase accuracy of the phase shifters will also introduce errors. No matter which method is adopted, the actual pointing angle of the antenna beam will differ from the calculated theoretical pointing angle. These errors need to be accounted for in the overall design of the system so that the impact of the total error does not exceed the constraints required by the imaging quality.
2. Clutter lock fine tracking
For a wide radar antenna beam, the preceding methods can meet the imaging requirements of bistatic and multistatic synthetic aperture radar. However, when the transmitting and receiving antennas have narrow beams, errors in the attitude and positioning of GPS and INS lead to errors in the calculated antenna pointing control parameters, which results in an insufficient coincidence area of the transmitting and receiving beam footprints and degrades the imaging quality. In order to solve the problem of inaccurate alignment of the transmitting and receiving antenna beams, a clutter lock fine tracking method is introduced here. This method assumes that the beams of the transmitting and receiving antennas already point at the target area, that is, the initial alignment has been completed, but that the alignment accuracy may not be sufficient, so the beam orientation of the transmitting station antenna needs to be fine-tuned to achieve accurate alignment. The basic idea is that, with the pointing of the receiving antenna unchanged, the transmitting antenna is scanned over a certain angular range under control. The direction of the receiving station beam footprint can then be determined from the power of the echo signal: when the echo power is the strongest, the corresponding beam center direction of the transmitting station antenna is the direction to be pointed.
Fig. 6.8 Clutter lock fine tracking.
As shown in Fig. 6.8, the transmitting station controls the antenna scanning, and the coverage area of the antenna beam passes through A, B, and C in sequence. At the same time, the clutter lock fine tracking method performs signal processing on the radar echo to obtain the antenna direction corresponding to the strongest echo power (the power center) and to lock onto that point. Because the radar echo is accompanied by noise, directly taking the maximum power of the echo matrix would give a relatively large error; therefore it is necessary to perform range pulse compression to improve the signal-to-noise ratio before calculating the power. The beam direction angle corresponding to the power center of the transmitting station antenna is the direction angle at which the transmitting station antenna is aimed at the beam center of the receiving station antenna. The antenna pointing controller then adjusts the antenna to this pointing angle, and high-precision space synchronization of the airborne bistatic SAR can be achieved. The workflow is shown in Fig. 6.9. In a simulation following the process in Fig. 6.9, with a beam width of 10 degrees, a scanning range of 55–80 degrees in azimuth and 15–40 degrees in pitch, and an echo signal-to-noise ratio of 5 dB, the results shown in Fig. 6.10 are obtained. The synchronization accuracy is 0.16 degrees in azimuth and 0.14 degrees in pitch. The clutter lock fine tracking method can improve the accuracy of space synchronization; it is mainly used in the initial stage of stripmap SAR space synchronization and is no longer used during radar operation.
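A minimal sketch of this scan-and-lock idea, assuming that a block of raw echo pulses has been recorded at each candidate beam angle: each block is range-compressed to raise the SNR, its mean power is computed, and the angle with the largest power is taken as the lock point. Function and variable names are illustrative and not part of the book's processing chain.

```python
import numpy as np

def range_compress(raw, ref_chirp):
    """Matched filtering in range via FFT (circular correlation is adequate here)."""
    n = raw.shape[-1]
    H = np.conj(np.fft.fft(ref_chirp, n))
    return np.fft.ifft(np.fft.fft(raw, n, axis=-1) * H, axis=-1)

def clutter_lock(scan_angles, echoes, ref_chirp):
    """Return the scan angle whose range-compressed clutter power is maximal.
    echoes: array of shape (n_angles, n_pulses, n_samples)."""
    powers = np.array([np.mean(np.abs(range_compress(raw, ref_chirp)) ** 2)
                       for raw in echoes])
    return scan_angles[np.argmax(powers)], powers

# Toy usage with noise-like clutter: 6 candidate azimuth angles between 55 and 80 deg.
rng = np.random.default_rng(0)
chirp = np.exp(1j * np.pi * 1e12 * (np.arange(128) / 20e6) ** 2)   # short LFM reference
echoes = rng.standard_normal((6, 64, 256)) + 1j * rng.standard_normal((6, 64, 256))
best_angle, powers = clutter_lock(np.linspace(55, 80, 6), echoes, chirp)
```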
Fig. 6.9 Clutter lock fine tracking process.
Fig. 6.10 Clutter lock fine tracking method synchronization accuracy.
6.2 Time and frequency synchronization
Time, frequency, and phase synchronization are mainly used to solve the positioning accuracy and phase coherence problems of bistatic SAR systems. This section focuses on the time and frequency synchronization of bistatic SAR. In monostatic SAR, the transmitter and receiver share the same clock, and it is easy to perform time delay measurement and ranging positioning. In bistatic SAR, the spatial separation of the transmitting and receiving stations means that each can only use its own clock. The difference between the clocks makes it difficult to coordinate transmission and reception, and also causes delay measurement difficulties and positioning errors. Therefore time synchronization measures must be taken to achieve a unified time base for transmitting and receiving. Time synchronization mainly solves the problem of clock synchronization and calibration between the transmitting and receiving stations, and tries to make the clocks in the two stations have the same starting point and the same rate. Time synchronization includes not only alignment at the hour, minute, and second level, but also nanosecond-level clock phase alignment. The former ensures that the separated transmitter and receiver can work together in step, and the latter ensures the accuracy of the echo delay measurement and the ranging and positioning of the system. The Doppler signal that aperture synthesis relies on is derived from the pulse-to-pulse change in echo phase caused by the change in the viewing angle of the equivalent phase center of the transmitting and receiving stations relative to the target. Therefore the transmitter and receiver must not introduce unknown phase changes into the Doppler signal, that is, the system is required to have good coherence. The transmitting and receiving subsystems of bistatic SAR are placed on independent platforms, each using its own local oscillator as the frequency source. The fixed frequency deviation between the two frequency sources, random frequency changes, and phase jitter will all introduce unknown, unrelated phase changes into the Doppler signal; this is the phase error of the echo signal [3]. The purpose of frequency and phase synchronization is to reduce this unrelated phase change and to avoid contaminating the Doppler signal phase. The basic requirement is that the phase error satisfy |φe| ≤ π/4 during the synthetic aperture time. The essence of this requirement is to constrain the phase error so that the in-phase superposition of the Doppler signal matched filtering is not significantly damaged. This requirement includes two
conditions: one is that the phase error between adjacent echo pulses should satisfy |φe| ≤ π/4, which reflects the frequency stability over one pulse interval (usually on the millisecond time scale); the other is that the total phase error of the echo pulses over the whole synthetic aperture time should satisfy |φe| ≤ π/4, which is a question of frequency stability at the synthetic aperture level (usually on the order of seconds). For a monostatic SAR system, since the transmitter carrier and the receiver local oscillator come from the same frequency source, there is no accumulation of phase error between pulses in the echo signal, and only the first constraint needs to be satisfied. For a bistatic SAR system, however, the second constraint must also be satisfied, because the phase error of the echo signal accumulates. Based on a discussion of the influence of time and frequency synchronization errors, this section gives the error bounds and the methods of time and frequency synchronization.
6.2.1 Time synchronization error
A time synchronization error at the second level causes the working mode, working flow, and working parameters of the transmitting and receiving stations not to be uniformly set to the predetermined working state at the specified time, which affects the normal operation of the system. It also causes the time steps of the direction adjustment of the transmitting and receiving beams to be inconsistent, resulting in nonoverlapping beam footprints on the ground. A microsecond-level time synchronization error may cause judgment errors and logical confusion in determining the order of events such as instructions, requests, and responses, which affects the normal operation of the system. It also affects the consistency of the attitude measurement and positioning data with the time base of the echo data recording. A nanosecond-level time synchronization error leads to inconsistency of the measurement time scales at the exact moments when events such as signal transmission and echo reception occur, which causes significant errors in echo time delay measurement and range measurement, and can even cause the radar to lose its positioning ability. Time synchronization at the coarser levels is easy to achieve in practice, but nanosecond-level time synchronization is more difficult. In engineering practice, the key to nanosecond synchronization is the synchronization of the PRF pulse signals, which is also the focus of the following analysis and discussion.
1. Effect of PRF pulse synchronization error on time delay measurement Radar is used to measure the distance and achieve positioning by the time delay of echo relative to the transmitting signal. In monostatic SAR, the signal transmitting time and echo receiving time can be measured by the same clock source and the same time reference and scale, and the difference value can be used to calculate the echo delay. In bistatic SAR, the transmission and reception are completed on two spatially separated platforms, so two clock sources can be used to measure the signal transmission and echo reception moments, respectively. If the two clock sources are not synchronized and there are unknown fixed time differences and random time differences, the accuracy of the radar echo time delay measurement will be reduced, which will affect the ranging and positioning ability. Therefore, in the actual system, the corresponding technical measures must be adopted to minimize the random time difference, eliminate the fixed time difference, and unify the time benchmark. When the fixed time difference cannot be eliminated, it must be able to be measured accurately. As shown in Fig. 6.11, the square wave pulse train signal output by clock source (i.e., the PRF signal) represents the time scale of the station. When the waveform generator DDS of the transmitting station detects the
Fig. 6.11 Working sequence of bistatic SAR.
waveform enable signal from the timing controller at time 0, it starts to generate a baseband linear frequency-modulated pulse signal with width T and bandwidth Br. Due to the deviation Δτ in the time references of the receiving and transmitting stations, the ADC data collection of the receiving station and the waveform generation of the transmitting station are not synchronized. The true echo time delay is equal to the time delay τ measured at the receiving station plus the clock difference Δτ between the two stations. If Δτ is a nonzero constant, it results in a fixed delay error; if Δτ is time-varying, it results in a varying delay and ranging error. When the unknown Δτ exceeds 10 ns, a range and positioning error at the meter level is generated in the slant range value given by the receiving station. When the fixed clock difference Δτ is above 10 μs, because the time reference points of the transmitting station and the receiving station are different, the radar observation area will be offset at the kilometer level. Based on the clock of the transmitting station, the receiving station samples the echo signal of the predetermined area from time t1 to time t2, which are determined by the spatial geometry of the system and the observation swath. However, due to the PRF synchronization error Δτ between the receiving and transmitting stations, the PRF at the receiving station lags by Δτ, making the actual ADC sampling window t1′ to t2′ (t1′ = t1 + Δτ, t2′ = t2 + Δτ), lagging overall by Δτ. Although echo data with time width t2 − t1 are recorded, part of the scheduled echo is not recorded properly. If the two frequency sources differ in rate, there will also be differences in time scale, resulting in errors in the starting point, ending point, and width of the recording window.
2. The effect of PRF synchronization error on imaging
If the time error is not taken into account, suppose that the transmitted signal is a linear frequency-modulated signal with carrier frequency fc, wavelength λ, and frequency modulation rate K; after downconversion with the receiver local oscillator signal exp{−j2πfcτ}, the zero-intermediate-frequency baseband signal of a point target is:

s_R(\tau, t) = \exp\left\{-j\frac{2\pi}{\lambda}R(t)\right\}\exp\left\{j\pi K\left[\tau - \frac{1}{c}R(t)\right]^{2}\right\}    (6.8)

Due to the influence of the time synchronization error, it is assumed that the transmitter clock is ahead of the receiver clock by Δτ(t), that the echo delay relative to the transmitting station trigger time is τ, and that the real echo delay
with the receiver clock as the reference is τ′ = τ + Δτ(t), so that the receiver local oscillator signal becomes exp{−j2πfc[τ + Δτ(t)]}. Then the actual expression for the echo signal is:

s_R'(\tau', t) = \exp\left\{-j\frac{2\pi}{\lambda}R(t)\right\}\exp\left\{j\pi K\left[\tau' - \Delta\tau(t) - \frac{1}{c}R(t)\right]^{2}\right\}\exp\{-j2\pi f_c\Delta\tau(t)\}    (6.9)
Based on Eq. (6.9), Δτ(t) not only introduces a delay error in range but also introduces a phase error in azimuth, which affects the Doppler processing of the echo. Next, we analyze the influence of the errors on imaging in detail according to the different forms of the time error.
(1) Effects of fixed time difference on imaging
When Δτ(t) is a fixed value, it is called the PRF fixed time difference, or PRF trigger error. Fig. 6.12 shows the point-target imaging results with a fixed time difference, together with the corresponding imaging performance. The fixed time difference affects the quality of range imaging because the width and bandwidth of the sampled signal are narrowed, which degrades the resolution in the delay direction. However, the fixed time difference does not change the phase relationship between successive pulses and does not affect the imaging quality in the Doppler direction.
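A hedged sketch of how such an error can be reproduced numerically from Eqs. (6.8)–(6.9): an ideal baseband point-target echo and one with a hypothetical fixed time difference Δτ are generated on the receiver's fast-time axis, so the delayed envelope and the residual phase term can be inspected directly. All parameter values are examples, not system specifications.

```python
import numpy as np

c = 3e8
fc, K, Tp = 10e9, 1e12, 10e-6           # carrier (Hz), chirp rate (Hz/s), pulse width (s)
lam = c / fc
fs = 40e6                                # complex sampling rate (Hz)
tau = np.arange(0, 30e-6, 1 / fs)        # receiver fast-time axis (s)

def point_echo(tau, R, dtau=0.0):
    """Baseband point-target echo per Eqs. (6.8)/(6.9); dtau is the clock offset."""
    t0 = R / c + dtau                                     # apparent delay at the receiver
    envelope = np.abs(tau - t0 - Tp / 2) <= Tp / 2        # rectangular pulse of width Tp
    phase = (-2 * np.pi * R / lam
             + np.pi * K * (tau - t0) ** 2
             - 2 * np.pi * fc * dtau)                     # residual clock-offset phase
    return envelope * np.exp(1j * phase)

s_ideal = point_echo(tau, R=3e3)               # bistatic range sum of 3 km
s_offset = point_echo(tau, R=3e3, dtau=0.1e-6) # with a 0.1 us fixed time difference
```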
Fig. 6.12 Effect of PRF triggering out of sync on point target imaging: (a) point target response without error; (b) point target response with a fixed time difference of 0.05 μs.
(2) The influence of linear time difference on imaging
The linear time difference comes from the difference in the periods of the clock and PRF signals between the receiving and transmitting stations, and it grows linearly with the azimuth time t. Fig. 6.13 shows the effect of a PRF linear error on imaging. It can be seen from Fig. 6.13 that the PRF linear time error causes a linearly varying range error in the recorded data position, as well as changes in the Doppler signal time width, center frequency, and bandwidth during azimuth processing. It also causes an azimuth processing mismatch, which shifts the point target response in the image domain in the azimuth direction; the azimuth resolution then decreases, and the PSLR and ISLR indexes deteriorate accordingly.
(3) Impact of jitter time difference on imaging
Under normal circumstances, the frequency stabilities of different clock sources cannot be completely consistent, and they also drift under the influence of factors such as operating temperature and vibration. This limited frequency stability of the master oscillator clock causes small changes in the interval between adjacent pulses. Reflected in the time synchronization of the transmitting and receiving stations, it manifests as jitter between the timing pulses generated by the clock sources of the two stations, causing instability of the PRF period; this is called PRF jitter [2]. Assume that there is a time jitter Δτ(t) between the timing pulses generated by the clock sources of the transmitting and receiving stations. Generally, it can be considered a Gaussian random process with zero mean and a mean square error of σ = 1 × 10⁻¹¹.
Fig. 6.13 Effect of PRF linear time error on imaging: (a) point target response without error; (b) point target response when Δ = 0.7 × 10⁻¹¹.
As the simulation results in Fig. 6.14A and B show, unlike the linear time difference, the jitter time difference mainly causes the sidelobe level to rise, while the azimuth position shift and main lobe broadening are not obvious. With the increase of the PRF jitter error, not only do the peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) deteriorate, but the resolution is also gradually reduced, because a larger jitter time difference significantly affects the phase relationship between adjacent pulses. When the mean square error is σ = 1 × 10⁻¹¹, the imaging quality of the target in the azimuth direction is significantly worse. In conclusion, the time synchronization error of a bistatic SAR system not only introduces a delay error in the range direction but also affects the azimuth processing of the signal. The fixed PRF error does not affect the azimuth signal processing. The linear PRF time error causes a position shift and resolution decrease of the target in the azimuth direction. The PRF jitter error mainly causes the sidelobes of the azimuth compressed signal to rise, which is difficult to compensate for.
6.2.2 Frequency synchronization error
The frequency synchronization error of bistatic SAR is caused by the different frequency sources used by the receiving and transmitting stations. It mainly includes the fixed frequency difference, the linear frequency difference, and the random frequency difference, which lead to image displacement, defocusing, and sidelobe deterioration [3]. When the time of the transmitting and receiving stations is completely synchronized, for the signal transmitted at time t0, after the echo reflected from the ground is synchronously demodulated by the receiver in the receiving station, the remaining phase can be expressed as:

\phi_{t_0} = 2\pi\int_{0}^{t_0}(f_R - f_T)\,dt + 2\pi\int_{t_0}^{t_0 + t_d} f_R\,dt + \varphi_{R0} - \varphi_{T0} = \varphi_{t_0} + \varphi_{d_0} + \varphi_0    (6.10)

where φt0 = 2π∫₀^t0 (fR − fT)dt, φd0 = 2π∫_{t0}^{t0+td} fR dt, φ0 = φR0 − φT0, and td is the time delay of the echo signal relative to the radar transmitted signal; fT is the actual carrier frequency of the transmitted signal; fR is the local oscillator frequency of the receiver; φT0 is the initial phase of the transmitted signal at time t0; and φR0 is the starting phase of the local oscillator at time t0. The following relationships hold:

f_T = f_0[1 + \varepsilon_T(t)]    (6.11)
Fig. 6.14 Impact of PRF jitter on imaging: (a) point target contour map when σ = 0.5 × 10⁻¹¹; (b) point target contour map when σ = 1 × 10⁻¹¹.
f_R = f_0[1 + \varepsilon_R(t)]    (6.12)
In Formulas (6.11) and (6.12), f0 is the ideal carrier frequency, and εT(t) and εR(t) are the relative frequency error functions of the transmitter carrier frequency and the receiver local oscillator, respectively. In the same way, for the signal transmitted at time t1, the remaining initial phase of the echo signal after synchronous demodulation has the same form as Formula (6.10). The phase error of the echo signal at time t1 relative to time t0 is:

\phi_e = \phi_{t_1} - \phi_{t_0} = 2\pi\int_{t_0}^{t_1}(f_R - f_T)\,dt + \varphi_{d_1} - \varphi_{d_0}    (6.13)

where

\varphi_{d_1} - \varphi_{d_0} = \int_{t_1}^{t_1 + t_d} 2\pi f_R\,dt - \int_{t_0}^{t_0 + t_d} 2\pi f_R\,dt \le 4\pi\Delta f_{R\max}\, t_d    (6.14)
In Eq. (6.14), the maximum value of the time delay td is usually on the order of the radar pulse repetition period, i.e., about millisecond level, as calculated from the pulse repetition frequency; ΔfRmax represents the maximum possible variation of the receiver local oscillator frequency within the time td. According to the frequency stability level of current crystal oscillators, for a radar system with f0 = 10 GHz the maximum phase error generated by 4πΔfRmax·td is far less than the phase error limit of π/4, so φe is mainly determined by the first term of Eq. (6.13), and the phase error can be approximated as follows:

\phi_e \approx 2\pi\int_{t_0}^{t_1}(f_R - f_T)\,dt = 2\pi f_0\int_{t_0}^{t_1}\bigl(\varepsilon_R(t) - \varepsilon_T(t)\bigr)\,dt    (6.15)
The time span over which the phase error introduced by frequency source asynchrony must be examined is at least one synthetic aperture time Ta, that is, t1 − t0 ≥ Ta. According to the preceding analysis, the phase error of the echo signal is directly determined by εT(t) and εR(t). Therefore it is necessary to study the effects of their different forms.
(1) Effect of fixed frequency error on imaging
Suppose there is only a fixed frequency deviation Δf0 between the transmitted carrier frequency and the receiver local oscillator frequency, with εT(t) = εT and εR(t) = εR constants. The instantaneous phase error of the echo signal is φe(t) = 2πf0(εR − εT)t = 2πΔf0t; then

\max\{|\phi_e(t)|\} = |\phi_e(T_a)| = |2\pi\Delta f_0 T_a|    (6.16)
In bistatic side-looking SAR with parallel flight paths, because φe(t) is a linear phase term, its main effect is a shift of the main lobe position of the azimuth compressed signal. The equivalent offset is approximately:

\Delta x = \frac{\lambda R_R R_T \Delta f_0}{v(R_R + R_T)}    (6.17)
In Eq. (6.17), RR and RT are, respectively, the shortest slant ranges from the point target to the receiver and the transmitter. When RT ≫ RR, Eq. (6.17) can be approximated as Δx ≈ λRRΔf0/v. The fixed frequency error mainly leads to an azimuth deviation of the point target image. As shown in Fig. 6.15, with the increase of the fixed frequency error, the azimuth displacement of the target gradually increases. If the maximum allowable offset in the azimuth direction is limited to Δxmax, the maximum value of Δf0 is Δf0max = v(RR + RT)Δxmax/(λRRRT), indicating that for a given Δxmax, the longer the working wavelength, the smaller the allowable fixed frequency deviation. For example, with λ = 0.03 m, Δxmax = 3 m, RT = 100 km, RR = 70 km, and v = 1 km/s, the corresponding Δf0max = 2.43 Hz.
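The quoted numbers follow directly from Eq. (6.17); the short check below reproduces them with the example parameters (the X-band wavelength of 0.03 m is assumed).

```python
# Azimuth shift budget versus fixed frequency difference, Eq. (6.17) rearranged.
lam, RT, RR, v = 0.03, 100e3, 70e3, 1000.0    # wavelength (m), ranges (m), velocity (m/s)
dx_max = 3.0                                  # allowed azimuth shift (m)

df0_max = v * (RR + RT) * dx_max / (lam * RR * RT)
print(f"max fixed frequency difference: {df0_max:.2f} Hz")   # about 2.43 Hz
```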
Fig. 6.15 Index diagram of fixed frequency error and imaging performance.
(2) The influence of linear frequency error on imaging
Because the frequency sources of the receiving and transmitting stations are separate and operate in different environments, their frequency drifts are not the same. During continuous operation of the system, the frequency source is affected by variations of component parameters and similar factors, and its output frequency usually increases or decreases monotonically with operating time, presenting an approximately linear behavior on the time scale of a synthetic aperture. Assuming that, due to the influence of the different environments, there is a linear time-varying error between the transmitted carrier frequency and the receiver local oscillator frequency, that is, εT(t) = aT·t and εR(t) = aR·t with aT and aR constants, then from Eq. (6.15) we obtain:

\phi_e(t) = \pi f_0 (a_R - a_T)(t_1 - t_0)^{2}    (6.18)
It can be seen that φe(t) is a quadratic phase term, whose effect is to defocus the image and raise the sidelobe level. After deducting the influence of the fixed frequency difference, φe(t) reaches its maximum at t1 − t0 = Ta/2:

\max\{|\phi_e(\tau)|\} = |\phi_e(T_a/2)| = \frac{1}{4}\pi f_0 (a_R - a_T) T_a^{2}    (6.19)

Considering the most unfavorable case, in which the error signs of εT(t) and εR(t) are opposite, according to the phase error restriction condition max{|φe(t)|} ≤ π/4 and assuming |aR| = |aT| = a, we get:

a \le \frac{1}{2 f_0 T_a^{2}}    (6.20)

or, expressed differently,

|a T_a| \le \frac{1}{2 f_0 T_a}    (6.21)
where |aTa| represents the linear relative frequency deviation produced by the frequency source within the synthetic aperture time. The influence of the linear frequency error on the azimuth compressed signal is illustrated in Fig. 6.16. The solid and dashed lines in the figure represent the compressed signal without frequency error and with a linear frequency error, respectively. It can be seen that when a linear frequency error is introduced, the main lobe of the compressed signal is broadened and the sidelobes are raised.
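To get a feel for how demanding constraint (6.20) is, the following snippet evaluates the allowed relative frequency drift rate for an assumed X-band system and a one-second synthetic aperture (illustrative values only).

```python
# Allowed linear relative frequency drift rate, Eq. (6.20): a <= 1 / (2 * f0 * Ta**2).
f0 = 10e9      # carrier frequency (Hz), example
Ta = 1.0       # synthetic aperture time (s), example

a_max = 1.0 / (2.0 * f0 * Ta**2)
print(f"max drift rate: {a_max:.1e} per second, i.e. |a*Ta| <= {a_max * Ta:.1e}")
# About 5e-11 per second: a level that calls for a high-stability (oven-controlled
# or atomic-referenced) frequency source over the synthetic aperture time.
```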
Fig. 6.16 Effect of linear frequency difference on compressed signal.
The reason is that the linear frequency offset adds an extra linear frequency modulation to the echo Doppler signal, which makes its frequency rate deviate from that of the azimuth reference signal and leaves residual linear frequency-modulated components after azimuth compression. Fig. 6.17 shows the relationship between the linear frequency difference and the imaging performance indexes.
(3) Effect of random frequency error on imaging
Random frequency error is used to describe the stability of the frequency source. To simplify the discussion, it is assumed that there is a random frequency error in each pulse repetition period and that it remains constant within each pulse repetition period TPRI. Let εRi and εTi respectively denote the random relative frequency deviations of the receiver local oscillator and the transmitted carrier frequency in the ith PRI, and let εRi and εTi be independent, identically distributed, zero-mean normal random variables with standard deviation σf. Then the relative phase error φe accumulated in one PRI follows the Gaussian distribution:
\phi_e \sim N\left(0, \sigma_\phi^{2}\right)    (6.22)

where

\sigma_\phi = 2\sqrt{2}\,\pi f_0 T_{PRI}\,\sigma_f    (6.23)
The phase error φe is introduced into each PRI of the echo signal for simulation, and its influence on the azimuth compressed signal can then be observed.
Fig. 6.17 Linear frequency difference and imaging performance indicators.
Fig. 6.18 Effect of normal random frequency difference on azimuth response of point target.
In Fig. 6.18, the solid line and the dotted line represent, respectively, the compressed signal without frequency error and with a normal random frequency error. It can be seen that when the normal random frequency error is introduced, the main lobe position of the compressed signal deviates slightly, but the sidelobes show a clear random increase. The reason is
similar to that of the linear error: the random frequency difference produces a random change in the instantaneous frequency of the azimuth echo signal, which deviates from the reference function and remains in the compressed signal as higher-order phase components, thus causing sidelobe deterioration. The main lobe offset in Fig. 6.18 derives from the particular realization of the random frequency difference. If the error variance increases further, the azimuth response will be severely defocused. As shown in Fig. 6.19, the PSLR of the azimuth compressed signal first increases slowly with the normal random frequency difference, and then increases rapidly when the mean square error of the random frequency error exceeds 2 × 10⁻⁸. In summary, when there is a fixed frequency difference between the frequency sources of the receiving and transmitting systems, the target is offset in azimuth. When there is a linear time-varying frequency error between the frequency sources, the main lobe of the azimuth compressed signal is obviously broadened, the azimuth resolution decreases, and the sidelobes rise as a whole. When there is a normal random frequency difference between the frequency sources, the azimuth sidelobes increase randomly and asymmetrically. These effects can result in distorted and defocused bistatic SAR images, or even prevent imaging.
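The sidelobe behavior described above can be reproduced with a small simulation: an azimuth chirp is corrupted with per-pulse Gaussian phase errors whose standard deviation follows Eq. (6.23) and is then compressed against the clean reference. The errors are applied independently pulse by pulse here as a simplification, and all system parameters are illustrative.

```python
import numpy as np

f0, T_pri, Ta = 10e9, 1e-3, 1.0          # carrier (Hz), PRI (s), aperture time (s)
Ka = 200.0                                # azimuth FM rate (Hz/s), example value
n = int(Ta / T_pri)
t = (np.arange(n) - n / 2) * T_pri        # slow-time axis

def azimuth_compress(sigma_f, rng=np.random.default_rng(0)):
    """Azimuth chirp with per-pulse random phase error, then matched filtering."""
    sigma_phi = 2 * np.sqrt(2) * np.pi * f0 * T_pri * sigma_f   # Eq. (6.23)
    phase_err = rng.normal(0.0, sigma_phi, n)                   # independent per pulse
    s = np.exp(1j * (np.pi * Ka * t**2 + phase_err))
    ref = np.exp(1j * np.pi * Ka * t**2)
    out = np.fft.fftshift(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(ref))))
    return 20 * np.log10(np.abs(out) / np.abs(out).max() + 1e-12)

clean = azimuth_compress(0.0)             # reference response
noisy = azimuth_compress(2e-8)            # random relative frequency error, sigma_f = 2e-8
```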
Fig. 6.19 Relationship between normal random frequency difference and signal PSLR.
6.2.3 Time frequency synchronization method
The time and frequency synchronization subsystem shall adopt appropriate methods to provide the time reference signal and the frequency reference signal for the receiving and transmitting stations, and ensure the time synchronization accuracy of the time reference signal and the stability of the frequency reference signal. Typical synchronization methods include the direct synchronization method, the atomic clock method, and the GPS synchronization method [3, 4]. These three methods have their own characteristics and different applications.
1. Direct synchronization method
The principle of the direct synchronization method is that the transmitting station transmits the crystal frequency-division signal and the PRF trigger signal directly to the receiving station through the antenna after digital modulation. After digital demodulation at the receiving station, the crystal frequency-division signal and PRF trigger signal are sent to the receiving station timing controller and phase-locking circuit. The stable frequency obtained after phase locking is sent to the frequency synthesizer to realize the time-frequency synchronization of the receiving and transmitting stations. As shown in Fig. 6.20, the PRF trigger instruction is sent to the timing controller of the transmitting station through external control. After receiving the PRF trigger instruction, the timing controller, on the one hand, sends the PRF trigger message to the receiving station through the digital circuit, so that the receiving station starts to trigger ADC sampling of the echo signal after receiving the PRF trigger message. On the other hand, after a delay of Ts = L0/c (L0 is the direct distance between the receiving and transmitting stations), it triggers the waveform generator to start generating the baseband transmitted signal. The receiving station digitally demodulates the received modulated signal and then sends the demodulated signal to the synchronization phase-locked loop composed of the PD, FPGA, DAC, and VCXO. The output signal, which is in phase with the frequency source of the transmitting station, is the frequency reference signal of the receiving station. The implementation of this method encounters the following problems: first, the spatial propagation attenuation and the Doppler effect generated by the relative motion of the aircraft will seriously reduce the accuracy and spectral purity of the synchronization signal, and it is difficult to meet the synchronization requirements of bistatic SAR applications.
Fig. 6.20 Block diagram of direct synchronization method.
Second, the carrier transmission of analog signals is susceptible to interference from space electromagnetic waves, which introduces additional synchronization errors. Therefore, if this method is used for bistatic SAR, there are still many theoretical and engineering problems to be solved.
2. Independent frequency source method of free oscillation
The principle of the free-oscillation independent frequency source synchronization method is as follows: the crystal oscillator signal of each of the receiving and transmitting stations is directly input to its frequency synthesizer, and the clock signal of the frequency synthesizer is sent to the timing controller. By frequency division in the timing controller, the independent PRF trigger signals of the receiving and transmitting stations are obtained, which respectively trigger the waveform generator to generate the transmitted signal and trigger the ADC sampling and recording. The frequency synthesizer generates the local oscillator signals of the receiving and transmitting stations through phase locking, frequency multiplication, and amplification of the crystal oscillator output. The block diagram of the free-oscillation independent frequency source synchronization system is shown in Fig. 6.21. The premise of this method is that the local oscillators of the transmitting and receiving stations are calibrated in advance at the same time and place against the same atomic clock, and then the inherent stability of the two local oscillators is relied upon to achieve the frequency and phase synchronization of the transmitted and received signals. In this case, since the receiving and transmitting stations cannot exchange real-time synchronization signals during operation, in order to realize PRF time synchronization at the starting time, the PRF trigger times of the receiving and transmitting stations must be determined in advance and their time error must be kept within a certain range to meet the required synchronization accuracy. Once the trigger time is determined, the system must run continuously and cannot be stopped; otherwise, the synchronization information will be lost. The synchronization accuracy of the PRF is then guaranteed by the local oscillators of the system. After a period of use, the independent frequency sources of the receiving and transmitting stations need to be recalibrated to ensure that the same order of synchronization accuracy can be achieved in the next operation.
3. GPS synchronization method
The GPS synchronization method uses GPS receivers at the receiving and transmitting stations to receive, respectively, the GPS time and the 1PPS second pulse signals, which serve as the unified time benchmark and frequency benchmark to realize the time-frequency synchronization of
Fig. 6.21 Block diagram of free oscillation independent frequency source synchronization method.
the transmitting and receiving stations, with which the long-term stability of the time reference signal can reach 2 × 10⁻¹²/day and the accuracy can reach 50 ns. The rising edge of the 1PPS second pulse signal is used as the PRF trigger signal to realize the PRF synchronization of the receiving and transmitting stations. The accuracy of the initial PRF synchronization time can reach the nanosecond level, and the error does not accumulate between periods, so the reliability is quite high, and the accuracy can be kept unchanged within 24 h after loss of the satellite signal. Therefore the PRF accuracy triggered by the 1PPS signal is also at the nanosecond level. The essence of this time-frequency synchronization method is that the receiving and transmitting stations use the satellite navigation system to provide a unified time, and the 1PPS signals at both stations are locked to the same source, so that the long-term stability of the 1PPS second pulse and the short-term stability of the local oscillator can both be exploited. As shown in Fig. 6.22, the VCOs located at the receiving and transmitting stations are locked to the 1PPS second pulse signals output by their respective GPS receivers, output stable frequency reference signals, and send them to the respective frequency synthesizers as the system frequency sources, thereby realizing the frequency synchronization of the receiving and transmitting stations. The specific implementation process is as follows: the VCO outputs a sinusoidal signal and, after frequency division, its phase is compared with the 1PPS pulse signal from the GPS receiver; the output phase difference of the phase discriminator is Δφ. The phase difference is converted into a voltage signal to control the VCO and adjust its frequency and phase. When there is no frequency difference between the VCO oscillation frequency and the 1PPS signal, and the phase difference is kept at a small fixed value Δε (which depends on the error voltage and is a known constant determined by the system design), the VCO completes the synchronization to the 1PPS second pulse signal. If the jitter processing and phase-locking strategy for the 1PPS second pulse are well designed, the output of the VCO is a stable sinusoidal signal with good short-term and long-term frequency stability. This method is not only suitable for the GPS satellite positioning system but is also compatible with other satellite positioning and timing systems, such as Russia's GLONASS and China's BDS. Using the GPS satellite time-frequency synchronization method, the time synchronization error can be kept within 3 ns, the frequency synchronization accuracy can reach the 10⁻¹⁰ level, and the frequency stability the 10⁻¹¹ level.
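The disciplining loop just described can be illustrated with a toy discrete-time model: once per 1PPS epoch the accumulated phase error of the divided-down VCO is measured against the 1PPS edge and a proportional-integral correction steers the VCO frequency. Gains, noise levels, and the ideal actuator are all invented for illustration and are not design values.

```python
import numpy as np

rng = np.random.default_rng(1)
f_nom = 10e6                    # nominal VCO frequency (Hz), example
F = f_nom * 3e-8                # frequency error of the free-running VCO (Hz), assumed
kp, ki = 0.4, 0.05              # proportional / integral loop gains (made up)

phase = 0.0                     # accumulated phase error at the 1PPS epochs, in cycles
integ = 0.0                     # integrator state
corr = 0.0                      # frequency correction currently applied to the VCO (Hz)
for epoch in range(60):         # 60 consecutive 1PPS epochs
    phase += (F - corr) * 1.0                 # phase drift over one second
    meas = phase + rng.normal(0.0, 0.05)      # 1PPS jitter of ~5 ns at 10 MHz (assumed)
    integ += meas
    corr = kp * meas + ki * integ             # steer the VCO (ideal, instantaneous)
print(f"residual relative frequency error: {(F - corr) / f_nom:.1e}")
```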
Fig. 6.22 GPS-based time-frequency synchronization scheme.
References [1] J. Wu, J. Xiong, Y. Huang, et al., Analysis of PRF jitter in bistatic SAR, in: Proceedings of 2006 IEEE CIE International Conference on Radar, 2006, pp. 563–566. [2] R. Liu, J. Xiong, Y. Huang, Analysis of bistatic SAR frequency synchronization, in: International Conference on Communications, Circuits and Systems Proceedings, 2006, pp. 380–383. [3] Y. Huang, J. Yang, J. Wu, et al., Precise time frequency synchronization technology for bistatic radar, J. Syst. Eng. Electron. 19 (5) (2008) 929–933. [4] Y. Huang, J. Yang, J. Xiong, Synchronization technology of bistatic radar system, in: International Conference on Communications, Circuits and Systems Proceedings, 2006, pp. 2219–2221.
CHAPTER 7
Verification of bistatic SAR
As discussed in the earlier chapters, bistatic SAR is a new radar system that is still in the research and exploration stages. Testing is ongoing to obtain ground object echoes under different configuration conditions; to discover new problems, phenomena, and rules; and to verify relevant theories, models, methods, and technologies. Good experimental system composition and performance, as well as sound scientific test verification methods, therefore all play key roles in acquiring real echo data under various configuration conditions with high efficiency and quality, to support research on echo models, imaging algorithms, motion compensation techniques, image understanding, and overall system technology. Because bistatic SAR transmitters and receivers are mounted on different platforms and move independently, the echo laws, imaging processing, error characteristics, and image characteristics are closely related to the geometric configuration of the transmitter, target area, and receiver and to the velocity vectors of the two platforms. It is necessary to carry out test verification for different platform combinations and geometric configurations. In addition, bistatic SAR raises many special new problems in ground equivalent testing and airborne testing. Therefore the experimental verification methods for bistatic SAR are topics that need further study and attention. This chapter focuses on ground equivalent and airborne flight testing for the verification of the principles of the side-looking and forward-looking imaging modes. It introduces test system construction principles, test plan preparation methods, and key issues in test implementation that can be used in practice. The chapter also provides typical test verification examples.
7.1 Test levels and principles

From concept research to product finalization, bistatic SAR needs experimental verification at different stages. Each stage has different levels with different purposes, but the stages follow some common principles and require similar work.
7.1.1 Test level

Based on the research stage, hardware level, and technology maturity, bistatic SAR testing is generally divided into the following four stages and levels:

1. Principle verification
The purpose of principle verification testing is to verify the feasibility of the system, achieve ground imaging, and accumulate the basic data needed for the initial exploration of the theory, methods, and technology, especially ground echo data, so as to identify problems, correct the theory, and improve the methods. Testing at this stage places few requirements on the platform, and any convenient bearing platform can be used. The requirements on the maturity of the experimental system, the three synchronizations, and motion compensation technology are also low, but good platform attitude measurement and positioning equipment and good data recording capability are required. Image processing is usually done offline.

2. Technical verification
The purpose of technical verification testing is to verify the feasibility of key technical approaches, improve imaging quality, and obtain further experimental data to improve and perfect the theory, methods, and key technologies. Testing at this stage places certain requirements on the bearing platform, which needs to be closer to the final installed application platform. There are relatively high requirements on the technical state of the experimental system, such as the transceiver and storage, attitude measurement and positioning, space-time-frequency synchronization, and online motion compensation. Imaging processing must be done online and in real time.

3. Demonstration verification
The purpose of demonstration verification testing is to verify the software and hardware functions and performance of the system more completely, to determine the application and technical indicators, and to achieve high-quality real-time imaging of typical scenes and targets under a variety of typical application configurations and modes, providing the experimental basis needed for product development. At this stage there are high-level requirements for the test platform; generally, a platform at least close in performance to the final installed application platform should be used. The experimental system should meet engineering prototype standards and have continuous high-quality real-time imaging processing capability.
4. Finalization (stereotype) test
The purpose of the finalization test is to verify the adaptability, stability, reliability, maintainability, environmental adaptability, application functions, and technical indicators of the product under typical application conditions, according to the relevant specifications and standards, before the product is finalized. In this stage, the test platform must be the final installed application platform, the product-level software and hardware systems should be used in the test, and the imaging processing and target information extraction must be completed online and in real time on the platform.
7.1.2 Testing principles

Although the various levels of experimental work differ in many ways, a successful bistatic SAR test generally needs to follow some common principles.

1. Phased implementation principle
Bistatic SAR is a flexible system with separate transmitter and receiver, and many complex factors can affect the results of an experiment. In order to identify the causes of adverse test results, these factors must be analyzed and classified one by one in advance, and the influences of different factors must be separated as far as possible into different test stages. For example, during the proof-of-principle phase, wide beams, low-power transmitters, and independent clock frequency sources can be used to relax the requirements on transmit-receive synchronization, detection distance, and positioning capability, so that ground echo data can be obtained as early as possible to achieve preliminary imaging, verify the principles, and check the system.

2. Hierarchical realization principle
The principle of hierarchical realization means that, when formulating the test purpose and test plan, the ultimate goal, key goals, and general goals to be achieved should be set out in stages according to the actual situation. During testing, when the final goal cannot be fully achieved at one time, the test should fall back to the key or general goals, in line with scientific, rigorous, and economical principles; this is the practical way to deal with test uncertainty. For example, in the principle verification stage it is difficult to estimate the echo SNR accurately because of various factors (a rough link-budget sketch is given after this list). Therefore, if the signal-to-noise ratio is insufficient due to low ground reflectivity during an imaging test in a specific area, the requirements on imaging distance and imaging width
should be reduced so that a sufficient echo signal-to-noise ratio and high-quality imaging can be achieved first.

3. Feasibility principle
The feasibility principle means that the required elements of the test must be complete and meet the requirements. First, the experimental radar system equipment must be feasible: it must be able to meet the test requirements of the specific stage and reach the corresponding technical state, as demonstrated through prior testing. Second, a feasible test platform must be able to carry the experimental system, provide the necessary power supply and information support, and accommodate the necessary experimental personnel onboard. It must be capable of the specific formation configurations and speeds, and an airspace or area permitting the required movement must be available. Third, the scene and target to be imaged must meet the purpose and requirements of the experiment, and the specific image content must be verifiable in the final imaging results.

4. Subject line principle
For the different types of content to be verified in each test, the test subjects must be listed separately, so as to facilitate formation scheduling of the platforms, evaluation of the test results, control of the test process, and adjustment of the test scheme and steps. In this way the test can be carried out in an orderly manner. When designing the test scheme, the correlation between subjects should be considered as much as possible, so as to facilitate the overall arrangement of the test and complete multiple sets of similar experiments in one subject line. In a flight test, several test subjects should be completed in one flight, to improve test efficiency and save costs.
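As an aid to the hierarchical realization principle above, the expected echo signal-to-noise ratio can be estimated in advance with a rough bistatic link budget. The sketch below uses assumed example values (not the parameters of any system described in this book) and the standard bistatic radar equation, with pulse compression and coherent azimuth integration counted as gains.

```python
import numpy as np

# Assumed example parameters, for illustration only.
Pt = 100.0                      # peak transmit power, W
Gt_dB, Gr_dB = 30.0, 30.0       # transmit/receive antenna gains, dB
lam = 0.03                      # wavelength, m
sigma0 = 0.05                   # assumed bistatic scattering coefficient of the terrain
res_rg, res_az = 2.0, 2.0       # expected resolution cell, m
Rt, Rr = 25e3, 25e3             # transmitter/receiver ranges to the scene, m
B, tau = 80e6, 10e-6            # bandwidth, Hz, and pulse width, s
prf, Tsyn = 1000.0, 1.0         # pulse repetition frequency, Hz; aperture time, s
F_dB, L_dB = 4.0, 6.0           # receiver noise figure and total losses, dB
k_B, T0 = 1.38e-23, 290.0       # Boltzmann constant, reference temperature

Gt, Gr = 10**(Gt_dB/10), 10**(Gr_dB/10)
F, L = 10**(F_dB/10), 10**(L_dB/10)
sigma = sigma0 * res_rg * res_az            # RCS of one resolution cell

snr1 = Pt*Gt*Gr*lam**2*sigma / ((4*np.pi)**3 * Rt**2 * Rr**2 * k_B*T0*B*F*L)
snr_img = snr1 * (B*tau) * (prf*Tsyn)       # range compression + azimuth integration gains
print(f"single-pulse SNR {10*np.log10(snr1):5.1f} dB, after imaging {10*np.log10(snr_img):5.1f} dB")
```

Such a rough budget shows before the flight how much margin exists, so that the imaging distance or swath requirement can be relaxed in time if the predicted SNR is marginal.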
7.2 Experimental conditions

The experimental conditions required for bistatic SAR tests include the experimental system, bearing platform, movement space, scene and targets, processing equipment, worksite, etc. In different research stages, the corresponding experimental conditions must be in place, matched to the experimental purpose, in order to carry out good experimental verification work.
7.2.1 Hardware conditions

The main hardware condition of concern in the experiment is the bistatic SAR experimental radar system.
In terms of function, in the principle verification and technical verification phases, the experimental radar system must be able to transmit and receive and to carry out time-base unification, transmit-receive coherence, beam synchronization, platform measurement, echo recording, scene recording, etc. The system must also be able to adapt to the platform's load and power supply conditions and provide good observability and test accessibility. In terms of composition, the system should be composed of extensions (subsystems) in an architecture similar to that shown in Fig. 7.1. The transmitting station subsystem and the receiving station subsystem can both be incomplete radar systems. For example, the transmitting station subsystem does not need a receiver or an echo recording/processing unit, because this part is not needed in the test except for comparison verification of monostatic and bistatic SAR imaging. The receiving station subsystem can omit the waveform generation, upconversion links, and transmitter, because the transmitted signal of bistatic SAR is provided by the transmitting subsystem mounted on the other platform.
Fig. 7.1 System configuration diagram.
Both subsystems, however, need equipment for measuring the position and attitude of the platform, to provide the basis for beam co-vision and spatial synchronization and the reference data for imaging processing and motion compensation. Data transmission equipment is also necessary for communication, to exchange instructions and data about working modes, working parameters, and attitude, so that the transmitting subsystem and the receiving subsystem can be coordinated into an organic whole and jointly realize the functions and performance of the whole bistatic SAR system. In particular, a synchronization unit is needed that can achieve the three synchronizations of time, frequency, and space, to ensure that the transmitting and receiving subsystems share a unified time base, achieve beam co-vision, and form a coherent transceiver.
In terms of performance, the experimental radar system should have the basic functions and performance of a coherent radar system. For example, the transmitting and receiving subsystems should have stable clock and frequency sources, high transmitted signal quality, low receiver noise, and sufficient recording capability for attitude measurement and positioning data and for ground echo data. The technical indicators, such as working frequency, signal bandwidth, beamwidth, sidelobe level, pulse width, repetition frequency, transmit power, receiver noise, sampling frequency, sampling accuracy, and storage depth, should meet the requirements of the specific experimental subjects.
In the technical verification phase, the synchronization extension is a necessary hardware condition for high-quality imaging and target positioning; attitude measurement and positioning, data transmission equipment, and a synchronization unit are therefore necessary system components. In the principle verification phase, the main purpose is to obtain echo data and achieve preliminary imaging, so the requirements on imaging quality are relatively low. At this point, the experimental system may lack attitude measurement and positioning, data transmission equipment, and a synchronization unit. Asynchronous acquisition and processing [1], parameter estimation based on echo data, and motion compensation technology can make up for the lack of such hardware; imaging of the ground can still be achieved, but ranging and positioning capability is lost, and the imaging quality also declines.
7.2.2 Platform conditions

The platform requirements for bistatic SAR testing are similar to those for monostatic SAR, but there are also differences. In the principle verification and technical verification stages, the basic requirements for
the carrying platform are that it can move, fly in formation, accommodate the installation, be wave-transparent where required, provide a power supply, and carry people. Specifically, the platform should provide the installation space and load capacity required by the experimental radar system; supply the power required by the experimental radar and its supporting instruments; provide sites to install the antenna, servo system, and wave-transparent fairing; reach the flight altitude and speed required by the test; and adapt to the formation movement required by the test. It is preferable for the platform to carry experimental personnel, to facilitate the necessary monitoring of and intervention in the experimental radar system during the test. Communication equipment is also needed for real-time communication between the experimental personnel and the pilot or pilots, as well as real-time communication and coordination with the experimenters on the other platform.
For ground testing, the platform is simply a vehicle, which is easy to coordinate and control. However, this often requires a self-contained power source, typically a generator as the primary source or a high-capacity lithium battery. For flight tests, Yun-5, Yun-7, and Yun-12 aircraft can meet most test verification requirements, but conversion power supply equipment matched to the specifications and capacity of the onboard power supply must be provided. For a short-range, low-power verification test, a self-contained lithium battery can be used to power the system.
Regarding the suitability of the platform, it is generally necessary to coordinate and communicate with the platform provider to adapt the platform to the test. For the installation of the experimental system, the mechanical interfaces must be coordinated on site with the platform provider, especially the mounting interfaces of the antenna, servo system, and wave-transparent radome. This may also involve modification and airworthiness assessment, for which corresponding arrangements need to be made in advance.
7.2.3 Scene and target

The scene and targets are the objects of the imaging test and are very important for data accumulation, performance verification, and demonstration of results. Factors such as the latitude, region, season, and meteorology of the imaging area affect the surface water content and vegetation state, and thus the surface scattering coefficient and echo signal-to-noise ratio, as well as image
contrast, dynamic range, radiation resolution, and other imaging quality indicators. The richness of scene features plays an important role in the visual evaluation of image resolution, peak sidelobe ratio, integrated sidelobe ratio, and imaging ambiguity. The imaging area should therefore include rivers, lakes, coasts, islands, highways, bridges, buildings, woods, hills, fields, cities, villages, runways, vehicles, airplanes, and other typical features as far as possible. Selecting the specific experimental scene involves trade-offs, and is generally decided by weighing the experimental purpose, the achievable imaging effects, the permitted range of platform movement, and the surrounding topographic and geomorphic characteristics; in practice it is usually settled by iterating between preliminary map study and field investigation.
Point targets are important equipment for the quantitative evaluation of imaging spatial resolution, radiation resolution, peak sidelobe ratio, integrated sidelobe ratio, and imaging ambiguity (a sketch of such measurements is given at the end of this subsection). For bistatic SAR, metal spheres of different sizes with sufficient radar cross section should be selected as targets, rather than the corner reflectors used in monostatic SAR testing, because most of the echo energy of a corner reflector is reflected back toward the transmitting station, and the energy scattered toward the receiving station is extremely low. The targets are usually laid out in a two-dimensional array with gradually varying spacing, to test the actual resolution of the image.
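A simple way to quantify image quality from such a point-target array is to take a one-dimensional cut through each target response and measure its 3 dB width, peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR). The sketch below is a minimal example under stated assumptions (an ideal sinc response sampled at an assumed 0.2 m spacing stands in for a measured cut); the evaluation tools used in practice are of course more elaborate.

```python
import numpy as np

def quality_metrics(cut, spacing):
    """Measure 3 dB width (m), peak sidelobe ratio (dB), and integrated
    sidelobe ratio (dB) from a 1-D cut through a point-target response."""
    p = np.abs(cut) ** 2
    ipk = int(np.argmax(p))
    width = np.count_nonzero(p >= 0.5 * p[ipk]) * spacing   # coarse -3 dB width
    # Main-lobe extent: walk down to the first null on either side of the peak.
    left = ipk
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = ipk
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    side = np.r_[p[:left], p[right + 1:]]
    pslr = 10 * np.log10(side.max() / p[ipk])
    islr = 10 * np.log10(side.sum() / p[left:right + 1].sum())
    return width, pslr, islr

# Demo: an ideal sinc response sampled at an assumed 0.2 m spacing.
x = np.arange(-64, 64) * 0.2
w, pslr, islr = quality_metrics(np.sinc(x / 1.0), 0.2)
print(f"3 dB width ~ {w:.2f} m, PSLR ~ {pslr:.1f} dB, ISLR ~ {islr:.1f} dB")
```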
7.3 Test plan

The test plan is a comprehensive plan for scheduling and using the experimental resources, such as the experimental system, bearing platforms, experimental scene, test equipment, funds, and personnel. It mainly covers the decomposition of the experimental content into subjects, the motion routes, the formation method of the experimental platforms, the implementation program, and the working modes and parameter settings of the experimental system. In order to eliminate defects in the test plan as far as possible and reduce repetition during the experiment, the plan should be simulated and reviewed in advance.
7.3.1 Content subjects

Each test generally covers many subjects of test content. The test content is divided into different test subjects based on the geometric configuration, velocity relationship, working mode, motion route, and target
scene of the test. In general, the contents of each test should be divided into subjects reasonably, so that similar tests can be grouped and carried out in batches. This is conducive to the scheduling and use of test resources, to rest for the test personnel, to the collection and analysis of experimental data, and to the adjustment and advance preparation of experimental resources. For each subject, the corresponding test program should be specified, including the geometric configuration, velocity relationship, working parameters, motion path, target scene, test steps, etc. In this way the experimenters and platform personnel can have a unified and comprehensive understanding of the test process and carry out the test work consistently.
7.3.2 Configuration route

In bistatic SAR, the spatial geometric configuration of the transmitting and receiving platforms and their relative motion with respect to the imaging region have a great impact on imaging performance. The configuration of the transceiver platforms and the way it changes dynamically are therefore the first things to be determined in the experiment. This configuration relationship, which lies at the heart of the test scheme and is reflected in its charts and data, usually includes parameters such as the transceiver platform coordinates, transceiver velocity vectors, scene center coordinates, and transceiver beam footprint centers, as well as their time-varying characteristics; these are determined according to the experimental purpose and the configuration design method described in Chapter 2.
The route of the transmit-receive formation has an important influence on the ground scenes and object categories that can be captured in the flight test imaging results. The permitted range of motion (for example, the direction and shape of the road or site usable in a ground vehicle test, or the formation flight airspace permitted in a flight test) and the formation configuration are constraints. The route should be proposed, after comprehensive analysis, on the principle that the beam footprint sweeps over terrain, landforms, typical features, and target layout areas that are as rich and as close to the desired ones as possible.
At the beginning and end of a test, the carrying platforms need to be at the same location, especially in a flight test. In the design of the motion route, in addition to the segments corresponding to imaging data recording, it is necessary to consider segments such as platform startup, return, formation, and turning. In the design of the experimental scheme, it is
necessary to make reasonable arrangements and efficient connections between the sections of different kinds and to mark the coordinates or times of the section connecting points on the map in advance, to guide the movement coordination of the two platforms during the experiment and to save time and cost during the test.
The design of the configuration and route not only requires comprehensive consideration of the experimental purpose, platform conditions, geographical conditions, permitted airspace, and many other factors, but also requires communication and consultation with the platform party and the pilots. Usually, a configuration and route scheme usable for the experiment is obtained after several rounds of coordination. In addition, during implementation such configuration and route schemes often need to be changed and adjusted to adapt to changes in the experimental conditions, so corresponding backup plans should generally be prepared.
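When drafting the configuration and route, it helps to tabulate the time-varying geometry numerically before discussing it with the platform side. The fragment below is a toy example with assumed straight-line tracks, fixed boresights, and a flat scene at z = 0; it prints the separation between the transmitting and receiving beam footprint centers (a crude co-vision check) and the bistatic range to the scene center along the route. All coordinates and speeds are illustrative assumptions, not values from any test in this book.

```python
import numpy as np

def footprint_center(pos, boresight):
    """Intersection of an antenna boresight ray with the flat ground plane z = 0."""
    s = -pos[2] / boresight[2]
    return pos + s * boresight

# Assumed example configuration (metres, metres/second), scene centre at the origin.
pT0, vT = np.array([-100., -3000., 800.]), np.array([50., 0., 0.])
pR0, vR = np.array([-100., -2000., 600.]), np.array([55., 0., 0.])
bT = np.array([0., 3000., -800.]); bT /= np.linalg.norm(bT)   # fixed (strip-map) boresights
bR = np.array([0., 2000., -600.]); bR /= np.linalg.norm(bR)

for tk in np.arange(0.0, 10.0, 2.0):                  # slow time along the route, s
    pT, pR = pT0 + vT * tk, pR0 + vR * tk
    dfoot = np.linalg.norm(footprint_center(pT, bT) - footprint_center(pR, bR))
    r_bi = np.linalg.norm(pT) + np.linalg.norm(pR)    # bistatic range to the scene centre
    print(f"t = {tk:4.1f} s   footprint offset = {dfoot:6.1f} m   bistatic range = {r_bi/1e3:6.2f} km")
```

Because the two assumed ground speeds differ slightly, the footprint offset grows along the route; in a real plan, a table of this kind is what is iterated with the pilots until the footprints stay overlapped for the whole recording segment.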
7.3.3 Mode and parameters

In order to complete the data recording and imaging processing according to the prescribed configuration, routes, and scene targets of a specific test subject and achieve the experimental purpose, the experimental radar system must be in the corresponding working state before and during the test. This working state requires proper working modes and working parameters. Therefore the scanning mode must be designed in advance; the working modes of the transmitting, receiving, recording, synchronization, and other subsystems must be determined; and the parameters of the experimental system, such as repetition frequency, pulse width, signal bandwidth, sampling frequency, storage capacity, and beam direction, must be calculated in advance so that they can be implemented in the test (a rough calculation sketch follows).
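A rough pre-test calculation of this kind can be scripted so that it is redone quickly whenever the configuration changes. The values below are assumed examples, and the azimuth bandwidth and echo window length use simple monostatic-style approximations (2v/La and 2·swath/c) that are only first-order estimates for a bistatic geometry.

```python
c = 3e8
# Assumed example inputs.
v_r    = 60.0       # receiver platform ground speed, m/s
La     = 0.6        # receiving antenna azimuth aperture, m
B      = 80e6       # signal bandwidth, Hz
tau    = 10e-6      # pulse width, s
swath  = 5e3        # required swath extent, m
T_rec  = 120.0      # recording time per pass, s
n_bits = 2 * 16     # I and Q samples, 16 bit each

Ba   = 2.0 * v_r / La                         # approximate Doppler bandwidth, Hz
prf  = 1.3 * Ba                               # PRF with ~30% oversampling margin
fs   = 1.2 * B                                # complex sampling rate with margin, Hz
n_rg = int((2.0 * swath / c + tau) * fs)      # range samples per pulse
rate = prf * n_rg * n_bits / 8.0              # recording rate, bytes per second
print(f"PRF ~ {prf:.0f} Hz, fs ~ {fs/1e6:.0f} MHz, {n_rg} range samples/pulse, "
      f"{rate/1e6:.1f} MB/s, {rate*T_rec/1e9:.2f} GB per pass")
```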
7.3.4 Simulation rehearsal

Bistatic SAR testing involves multiple degrees of freedom, and the experimental scheme may contain unforeseen or contradictory factors, so defects or errors caused by incomplete consideration can easily appear. Before the final determination of the test scheme, sufficient system simulation should therefore be carried out to grasp in advance the features of the echo data and the effectiveness of the methods for judging them, as well as the expected imaging effects and features, so as to identify unexpected phenomena and possible problems and improve the test scheme. Advance evaluation and
improvement of the data analysis and quick-look processing software to be used in the experiment can also be carried out, to improve the success rate of the experiment, shorten the test cycle, and reduce the test cost.
Based on the content and subjects of the test scheme, the system simulation should cover the whole process of echo generation and recording, imaging processing, motion compensation, and imaging quality evaluation, according to the corresponding imaging geometry, velocity vector relationship, working mode and parameters, and scene target category and characteristics. The effects of factors such as echo signal-to-noise ratio, beam shape and sidelobe clutter, space-time-frequency synchronization errors, platform motion and attitude errors, and attitude measurement and positioning errors should be considered as far as possible. In this way the validity of the test scheme can be evaluated more effectively, and the conditions to be expected in the test can be foreseen more accurately, so that corresponding arrangements and adjustments can be made.
In addition to system simulation, more complex tests may involve rehearsal and preparatory subjects, whose main purpose is to check the working state of the system and the coordination of the test personnel. For example, airborne tests are generally prepared for, to a certain extent, by ground tests.
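As a concrete starting point for such whole-process simulation, the fragment below generates the raw echo of a single point target for an assumed bistatic geometry and linear-FM waveform, and adds a linear transmit-receive clock drift as one example of an injected synchronization error. All parameters are illustrative assumptions; a full simulator would add the antenna patterns, clutter, noise, and attitude errors listed above.

```python
import numpy as np

c, fc = 3e8, 9.6e9                    # assumed carrier frequency
B, tau, fs, prf = 80e6, 10e-6, 100e6, 500.0
kr = B / tau                          # chirp rate
npulse, nrg = 256, 2048
t0 = 15e-6                            # assumed receive-window opening time, s
t_fast = t0 + np.arange(nrg) / fs

# Assumed straight-line trajectories (m, m/s); point target at the origin.
pT0, vT = np.array([-400., -3000., 800.]), np.array([50., 0., 0.])
pR0, vR = np.array([-400., -2000., 600.]), np.array([50., 0., 0.])
clk_drift = 20e-9 / npulse            # injected time-sync drift per pulse, s

echo = np.zeros((npulse, nrg), dtype=complex)
for m in range(npulse):
    ts = m / prf
    Rt = np.linalg.norm(pT0 + vT * ts)          # transmitter-to-target range
    Rr = np.linalg.norm(pR0 + vR * ts)          # target-to-receiver range
    delay = (Rt + Rr) / c + m * clk_drift       # bistatic delay plus sync error
    t = t_fast - delay
    inside = (t >= 0) & (t <= tau)
    echo[m, inside] = np.exp(1j*np.pi*kr*(t[inside] - tau/2)**2) * np.exp(-2j*np.pi*fc*delay)
print(echo.shape, "raw point-target echo generated")
```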
7.3.5 Test organization

Bistatic SAR testing involves two moving platforms and is therefore more difficult to organize than an ordinary SAR test. At the test site, the experimenters must deal with different objects such as equipment, platforms, scene, targets, and tower, and must complete preparation, installation, testing, switching on and off, monitoring, recording, analysis, processing, evaluation, adjustment, and other work of different natures. They must take on roles such as decision-making, coordination, command, operation, and support. Many people and items are involved, the procedures are strict, and the relationships are complex, which requires an appropriate number and level of experimental personnel, auxiliary personnel, and supporting test personnel, as well as careful planning, mobilization, communication, coordination, and rehearsal in advance. Only in this way can the tasks, procedures, and steps be made absolutely clear, with every item in the charge of a specific person and every person with a defined task. Only with a clear division of labor and careful coordination, in which all personnel complement each other step by step and accommodate change without
disorder, can the smooth progress of the experiment be ensured and the expected objectives reached. A successful test therefore needs to be fully prepared for beforehand, with a practical organization plan.
7.4 Experiment implementation

After defining the level of the test, securing the relevant test conditions, designing the test plan, and completing communication and docking with the platform, the core stage of the test is entered, following transportation of the experimental system, assembly of personnel, and on-site installation. In addition to carrying out the experiment step by step in accordance with the test purpose, test principles, and test plan, the keys are data acquisition, data analysis, and image processing.
7.4.1 Data record

In the principle verification and technical verification stages, the test data are the key achievement of the test work. They play an important role in verifying the imaging theory, methods, and technology, and provide a valuable data basis for subsequent research, so the test data must be recorded and archived to facilitate future analysis, processing, and research.
The recorded test data usually comprise five parts: the process record, scene record, attitude record, synchronization record, and echo record. The attitude record is produced automatically by the attitude measurement and positioning equipment of the experimental system or the platform, and is used to assist motion error compensation in the imaging process; it is also an important basis for target positioning. The echo record is produced automatically by the data acquisition and storage subsystem of the receiving station. Note that, in the design of the experimental system and of the experimental operations, the time bases of these two records must be unified and synchronized so that the two datasets can be used together in later data analysis and imaging processing. The synchronization record refers mainly to the record of spatial synchronization data, which is usually produced automatically by the synchronization extension during the experiment. It records the start and stop times and related parameters of the transmitted pulse train of the transmitting station, the start and stop times and related parameters of the data acquisition and storage extension of the receiving station, and the dynamic
change information of the beam pointing of the transmitting and receiving antennas.
The purpose of the scene record is to record, in real time, the stationary and moving objects in the target area during imaging, so that the ground scene and target situation can be reconstructed afterwards and compared with the imaging results, which helps in evaluating the imaging effect. For close-range imaging tests and forward-looking imaging tests, the scene record is usually made with a camera fixed to the aircraft, supplemented by photography and measurements by the onboard scene personnel. When conditions permit, the scene can be recorded with simultaneous aerial photography and video.
The process record mainly covers time, place, personnel, and events; its purpose is to allow the test process to be reconstructed afterwards and the experimental data and imaging effects to be analyzed. The events mainly include the subjects of the experiment, the start and stop of the platform movement, the start and stop of the experimental system, the related operations of the experiment, and any accidental phenomena and matters. Recording may be manual or by equipment, and should be carried out by different personnel at different locations; in a flight test, for example, this involves the command center or tower, the airport ground, the scene area, the target area, the aircraft cabin, visual observation posts, etc. In order to correlate and mutually verify all kinds of recorded data, a clear division of labor and arrangements must be made in advance, and the corresponding record tables and recording equipment must be prepared. In addition, before and during the test the time base must be unified: besides the normal time synchronization of the system, the timing equipment of the experimenters should be synchronized to the second. All kinds of test data should be backed up and filed with their associations in real time whenever possible, so that the data can be retrieved and used in the future.
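One concrete piece of this time-base unification is resampling the attitude and positioning record, which typically arrives at its own rate, onto the pulse time stamps of the echo record, once both are expressed in the same (for example, GPS) time. The sketch below assumes simple synthetic records and plain linear interpolation; the real record formats and rates are system specific.

```python
import numpy as np

# Assumed synthetic records: 100 Hz position samples and 500 Hz pulse time stamps,
# both already referred to the same time base.
t_att = np.arange(0.0, 10.0, 0.01)
pos = np.c_[50.0 * t_att,                       # x: constant ground speed
            np.full_like(t_att, -2000.0),       # y: nominally constant
            600.0 + 0.5 * np.sin(0.8 * t_att)]  # z: small altitude oscillation

prf = 500.0
t_pulse = 0.003 + np.arange(int(9.9 * prf)) / prf   # pulse times start slightly later

# Linear interpolation of each position component onto the pulse times.
pos_at_pulse = np.column_stack(
    [np.interp(t_pulse, t_att, pos[:, k]) for k in range(3)])
print(pos_at_pulse.shape, pos_at_pulse[0].round(2), pos_at_pulse[-1].round(2))
```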
7.4.2 Data analysis

The objects of data analysis mainly include the attitude data, synchronization data, echo data, and monitoring data. The content of the analysis is mainly the consistency of the data and the conformity of their behavior with expectations. The data analysis tools are the corresponding data reading, playback, display, processing, measurement, and evaluation software. The results of data
analysis are judgments on the validity of the data, as well as decisions that affect the test program and process: for example, eliminating certain errors or faults, adjusting the formation configuration or system status, repeating a subject test, or moving on to the next test subject.
The aircraft attitude data, which record the course of the platform's trajectory and attitude, can be used at the test site to quickly determine how well the actual movement conforms to the predetermined configuration and route, and to evaluate whether the motion and attitude errors are within the allowable tolerances. Combined with the beam pointing data, the prepared space synchronization evaluation software can quickly determine the overlap of the transmitting and receiving beams and their coverage of the imaging region center, and hence the accuracy of spatial synchronization. The monitoring data mainly refer to the system state, waveforms, and spectra monitored by instruments during the experiment; they are used to judge in real time whether the system is working normally, so that interventions and adjustments can be made in time if it is not.
The echo data record the amplitude and phase history of the signal reflected from the ground scene to the receiving station. They are not only the main data source for forming the scene image but also an important basis for quickly and accurately evaluating the quality and success of the test. The levels of synchronization accuracy, signal-to-noise ratio, and coherence are reflected in different aspects of the echo data. For example, a two-dimensional linear-FM interference pattern in the real and imaginary parts of the echo data in the fast- and slow-time domain indicates that there are strong reflecting point targets in the scene and that the system has good echo coherence. The echo amplitude after range pulse compression also supports many valuable judgments: defocusing in the range dimension indicates problems with the quality of the transmitted signal or the accuracy of the data sampling; bright lines extending in the slow-time direction indicate the echo signal-to-noise ratio and whether there are strong reflectors on the ground; and unexpected migration of a bright line can reveal a time synchronization error or motion error.
Bistatic SAR verification often involves multiple subjects with different configurations and modes, possibly with dependencies between them. It is therefore necessary to determine quickly, effectively, and accurately at the test site whether each test subject has been carried out successfully and with what quality, so as to make the relevant
decisions on the test scheme and process adjustments. The relevant data analysis software should therefore be prepared in advance, and sufficient simulation tests under various error conditions should be carried out beforehand, so that the specific manifestations of the various factors and errors in the echo data are well understood before the test (a quick-check sketch follows). In this way the data analysis software can strongly support the analysis of the test data and the adjustment of the test scheme and process.
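The kind of quick echo check described above can be largely automated. The sketch below range-compresses raw pulses with the transmit replica, then reports the drift of the bright line across slow time and a crude SNR figure. The synthetic demo data, the assumed noise level, and the one-bin-per-40-pulses drift are chosen only to exercise the check and are not taken from any real dataset.

```python
import numpy as np

def quicklook_checks(echo, replica):
    """Range-compress raw echo (pulses x samples) with the transmit replica and
    return the peak range bin per pulse and a rough SNR estimate in dB."""
    n = echo.shape[1]
    H = np.conj(np.fft.fft(replica, n))
    rc = np.fft.ifft(np.fft.fft(echo, n, axis=1) * H, axis=1)
    mag = np.abs(rc)
    peak_bin = mag.argmax(axis=1)                 # bright-line position vs. slow time
    peak_pow = (mag.max(axis=1) ** 2).mean()
    noise_pow = np.median(mag ** 2)               # crude noise-floor estimate
    return peak_bin, 10 * np.log10(peak_pow / noise_pow)

# Synthetic demo: noisy echoes of one scatterer whose delay drifts by one
# sample every 40 pulses, as an uncorrected time-sync error would produce.
fs, tau, B = 100e6, 5e-6, 40e6
t = np.arange(int(tau * fs)) / fs
replica = np.exp(1j * np.pi * (B / tau) * (t - tau / 2) ** 2)
echo = (np.random.randn(200, 1024) + 1j * np.random.randn(200, 1024)) * 0.1
for m in range(200):
    start = 300 + m // 40
    echo[m, start:start + len(replica)] += replica
peaks, snr = quicklook_checks(echo, replica)
print("peak-bin drift:", int(peaks.max() - peaks.min()), "bins;  SNR estimate:", round(snr, 1), "dB")
```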
7.4.3 Data processing

Data processing is the complex process of obtaining microwave images of ground scenes through synchronization compensation, motion compensation, and imaging processing, using the navigation attitude data, synchronization data, and echo data.
For the experiments in the principle verification and technical verification stages, the precise data processing after the experiment needs to consider the influence of high-order errors and space-variant characteristics. It should use more accurate parameter estimation, motion compensation, and imaging processing algorithms, making full use of all the test data described above, to obtain high-quality imaging results. In this process further problems can be found and solved, and improvements can be proposed for the next test.
The direct purpose of data processing at the test site is not to pursue high-quality imaging, but to further verify the quality of the echo data and attitude data, building on the data analysis, through a relatively simple processing flow and algorithm. It aims at local two-dimensional focusing of the imaging area, in which obvious features can be observed as targets. Through intuitive features such as image contrast, isolated peak width, and the brightness of shadow areas, it provides an overall judgment of the echo SNR, the two-dimensional compressibility, and the matching and usability of the attitude and synchronization data.
On-site data processing generally uses a simplified imaging algorithm that does not consider space-variant characteristics and various high-order errors in detail. For example, range pulse compression can be achieved by deskew processing, followed by migration correction and azimuth compression; the last two steps use the attitude data to calculate the relevant parameters. Then the validity and quality of the echo data are evaluated according to the visual impression and objective indices of the image. Next, the aircraft attitude data can be used to apply some low-order motion
compensation, after which the imaging quality can be observed to further evaluate the effectiveness of the aircraft attitude data.
It is therefore necessary to prepare the relevant software and human-computer interface in advance. Through echo generation and processing simulation under known error conditions, full tests should be carried out beforehand to grasp the form and characteristics of the various errors in the imaging results, so that the relevant software can provide quick and accurate auxiliary evaluation during the experiment (a simplified quick-look processing sketch follows).
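In the same spirit, a deliberately simplified two-dimensional quick-look processor can be kept on hand at the test site. The sketch below performs range deskew (dechirp) plus an FFT and then azimuth dechirp plus an FFT, ignoring range cell migration and all space-variant and higher-order terms. The chirp rates, the assumed azimuth FM rate (which in practice would be computed from the attitude data), and the synthetic single-scatterer demo are illustrative assumptions rather than the processing chain of any system in this book.

```python
import numpy as np

def quicklook_image(raw, kr, fs, ka, prf):
    """Very simplified on-site processing: range deskew + FFT, then azimuth
    dechirp + FFT. Ignores range cell migration and space-variant terms, so it
    is only a quick two-dimensional focusing check, not a precise image."""
    n_az, n_rg = raw.shape
    t = np.arange(n_rg) / fs                       # fast time within the window
    ts = (np.arange(n_az) - n_az / 2) / prf        # slow time about mid-aperture
    rg_ref = np.exp(-1j * np.pi * kr * t ** 2)     # range deskew reference
    az_ref = np.exp(-1j * np.pi * ka * ts ** 2)    # azimuth dechirp reference (ka from attitude data)
    x = np.fft.fftshift(np.fft.fft(raw * rg_ref, axis=1), axes=1)
    x = np.fft.fftshift(np.fft.fft(x * az_ref[:, None], axis=0), axes=0)
    return np.abs(x)

# Minimal demo: one synthetic scatterer with a quadratic azimuth phase history.
fs, prf, kr, ka = 100e6, 500.0, 8e12, -60.0
n_az, n_rg = 256, 512
t = np.arange(n_rg) / fs
ts = (np.arange(n_az) - n_az / 2) / prf
raw = np.exp(1j * np.pi * kr * (t - 2e-6) ** 2)[None, :] * np.exp(1j * np.pi * ka * ts ** 2)[:, None]
img = quicklook_image(raw, kr, fs, ka, prf)
iy, ix = np.unravel_index(img.argmax(), img.shape)
print("peak at azimuth bin", iy, "range bin", ix, "peak/mean =", round(img.max() / img.mean(), 1))
```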
7.5 Experiment example

Depending on the platform, bistatic SAR tests can be divided into three categories: ground tests, flight tests, and hybrid tests. Their purposes, methods, and processes differ in many ways. The following sections give some test examples based on the common problems and principles discussed in the previous sections.
7.5.1 Ground test

The purpose of the ground vehicle test is to verify the working status of the radar system and each extension before the flight test, at low cost and under easily controlled conditions. The bistatic SAR echo characteristics and two-dimensional focusing ability are tested under specific configurations and modes, and the imaging principle and imaging algorithms are initially verified, providing a basis for subsequent flight tests.
There are some adverse factors in ground testing. For example, near-ground clutter has an adverse effect on the echo signal and can even lead to test failure. More importantly, since the bistatic SAR test involves the movement of two platforms, choosing the test site or road for a ground vehicle test is much more difficult than for a monostatic SAR test. In addition, there is an obvious dependence between the imaging resolution and the depression angle in the bistatic forward-looking imaging mode, so special workarounds must be adopted in the geometric configuration of the ground test. In the following, a bistatic SAR ground vehicle test is used as an example to introduce the typical process of vehicle testing.

1. Test plan
This test belongs to the principle verification stage. In addition to checking the state of the system, its main purpose was to realize ground imaging with
bistatic side-looking SAR and to preliminarily verify the system, principle, and algorithm. Two imaging geometries were used: in the first, the receiving station was stationary and the transmitting station was vehicle-mounted and moving; in the second, two vehicle-mounted stations moved in parallel with an along-track offset. Both configurations used strip-map mode. The equipment was carried on ordinary passenger buses, which is convenient for carrying the test personnel and for controlling and monitoring the test equipment. Here the vehicle-borne test of the second configuration is described.
As the test belonged to close-range principle verification, the peak transmit power of the experimental system was only 1 W, the signal bandwidth was set to 80 MHz, and the pulse repetition frequency was set to 500 Hz based on the vehicle speed and spatial sampling requirements. Since there was no synchronization extension between the transmitter and receiver at that time, non-synchronized acquisition was used to record the echo, and the beamwidth was widened to 20 degrees. Since there was no suitable power supply on the vehicles, a generator and a lithium battery pack were used to power the test system and the supporting instruments.
The test route was a Yangtze River bridge in southwest China. The bridge deck is 40 m high, the straight road section is about 3 km long, and the two-way eight-lane road is suitable for vehicle testing. There is no obvious shielding structure above the bridge railing, and the effect of near-ground clutter is small. In the test, the receiving station and the transmitting station were each fixed on a bus, with the beams fixed perpendicular to the direction of motion and looking down at 15 degrees. The buses crossed the bridge at a speed of 30 km/h. The scene is shown in Fig. 7.2. In order to avoid traffic congestion that could affect the stability of the movement, the test was performed after midnight. The round trip was divided into two experimental subjects, observing the scenes on the two sides of the bridge. The imaging scene included Jiangxinzhou (a river island), the riverbank, roads, buildings, berthed ships, and so on.

2. Test results
For one of the subjects, the real-part intensity distribution of the echo data in the fast- and slow-time plane is shown in Fig. 7.3A. When the display is enlarged, obvious interference fringes, like two-dimensional linear
Fig. 7.2 Geometric configuration and ground scene of bistatic side-looking SAR imaging test with the transceiver mounted on the vehicle.
Fig. 7.3 Echo data obtained in the experiment.
frequency modulation, can be observed, which indicates that the system worked well and had good transmit-receive coherence. In the data after range pulse compression in Fig. 7.3B, bright lines extending in the slow-time direction can be observed, indicating that the transmitted waveform quality and the data acquisition were normal; the tilt of the bright lines was caused by the non-synchronized acquisition. This non-synchronized acquisition error can be corrected automatically using the adjacent correlation method, as shown in Fig. 7.3C (a simple version of such a correction is sketched at the end of this subsection). A simple range-azimuth compression algorithm was adopted for the on-site imaging processing, without motion compensation; the aim was to control the test process by observing the focusing in range and
Fig. 7.4 Dual-vehicle-borne bistatic SAR imaging scene and results.
azimuth. After the test, more detailed imaging results were obtained using RD, BP, and other processing algorithms combined with motion compensation [2], as shown in Fig. 7.4. Because of the poor motion stability of the two vehicles in this test and the lack of attitude measurement equipment and a synchronization extension, the imaging quality is not high. However, the focusing of the scene along the river and the light-dark contrast with the river surface can be clearly observed in the imaging results, so the test purposes of checking the state of the system and preliminarily verifying the principle were achieved. Generally, once the vehicle test work has been completed, formulation of the airborne test scheme can begin.
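The adjacent correlation idea referred to above can be sketched in a few lines. The fragment below is only one simple way such a correction can be realized, under the assumptions that the drift between consecutive pulses is an integer number of range bins and that a strong, roughly stationary feature dominates each range-compressed pulse; the synthetic demo drifts the bright line by one bin every 10 pulses.

```python
import numpy as np

def adjacent_correlation_align(rc):
    """Estimate and remove the pulse-to-pulse range drift caused by
    non-synchronized acquisition: correlate each range-compressed pulse with
    the previous one, accumulate the estimated lags, and circularly shift."""
    n_az, n_rg = rc.shape
    shift = np.zeros(n_az)
    prev = np.abs(rc[0]) - np.abs(rc[0]).mean()
    for m in range(1, n_az):
        cur = np.abs(rc[m]) - np.abs(rc[m]).mean()
        corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(prev)))
        lag = int(np.argmax(np.abs(corr)))
        if lag > n_rg // 2:
            lag -= n_rg                      # interpret large lags as negative shifts
        shift[m] = shift[m - 1] + lag        # accumulated drift in bins
        prev = cur
    aligned = np.array([np.roll(rc[m], -int(shift[m])) for m in range(n_az)])
    return aligned, shift

# Synthetic demo: a bright line drifting one bin every 10 pulses in noise.
rng = np.random.default_rng(0)
rc = 0.05 * (rng.standard_normal((100, 512)) + 1j * rng.standard_normal((100, 512)))
for m in range(100):
    rc[m, 200 + m // 10] += 1.0
aligned, shift = adjacent_correlation_align(rc)
print("estimated total drift:", shift[-1], "bins")
```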
7.5.2 Airborne test

For airborne applications, the flight test is the formal test verification link. Its purpose is to obtain real ground echo data, synchronization data, and attitude measurement and positioning data; to verify the imaging principle, motion compensation methods, and imaging processing algorithms; to test the echo model and imaging performance; and to find new problems that have not yet been noticed. It lays the experimental data foundation for improving the radar system, echo model, processing methods, and compensation methods.
According to whether the test platform is the same as the application platform, airborne tests fall into two categories: tests flown on the intended application aircraft and tests flown on a surrogate aircraft. In the principle and technology verification
phases, it is common to fly on a surrogate aircraft, to avoid issues that are not relevant to the verification purpose.
An airborne test involves more factors than a vehicle test, and the communication and coordination with the platform and the organization of the experiment are also more complicated and difficult. The selection and modification of the platform, the selection of the test airspace and flight routes, the determination of the target scene, and the deployment of targets are the main tasks in formulating the test plan, while the estimation and evaluation of the test signal-to-noise ratio, the adaptive adjustment of the imaging distance, and the data analysis and processing are the main tasks during the experiment. This chapter provides examples of several flight tests in the principle verification and technical verification stages for reference.

1. Bistatic side-looking imaging test
The purpose of this set of experiments was to obtain airborne bistatic side-looking SAR echo data, examine the imaging methods, and verify the imaging principles and key technologies.

(1) Test scheme
The test was divided into two stages, the first being a close-range test and the second a long-range test. The first stage was a principle verification test. A system that had already been checked out in vehicle tests was used. The chosen platform was the Yun-5 aircraft, which can carry test personnel to monitor the operational status of the equipment and determine and adjust the test progress in real time. The test modes included the shift and non-shift modes, as shown in Fig. 7.5. The test area was a site in southwest China. Although the flight test was conducted in midwinter, the ground vegetation was good, the surface water content was high, and the ground object reflectivity was strong.
The two aircraft flew in parallel at uniform speed along the predetermined route in the configuration shown in Fig. 7.5. The horizontal interval between the two aircraft was 200 m, and the along-track interval was 200 m. The aircraft took off successively from an airport, and the flight path was designed in the shape of a stadium runway; echo data were not recorded during the formation-forming and transit segments. Limited by the installation conditions and other factors, the experimental system was not equipped with attitude measurement and positioning or a synchronization extension, and the echo data were collected using asynchronous acquisition.
Fig. 7.5 Test scheme of airborne translational invariant bistatic SAR.
The second-stage test was a technical verification test. The test system was upgraded: the transmit power and system bandwidth were significantly increased, and attitude measurement and positioning and a synchronization extension were configured. At the same time, two types of aircraft closer to the intended application platforms and with higher speeds were selected as carriers, and an external pod was used to install the system. In this flight test there were no experimental personnel on board to monitor the system, which started to work automatically. In autumn, at a site in northwest China, a series of experiments in the shift flight mode was carried out. In the tests, the distance between the two aircraft was 10 km, the flight altitudes were 8000 m and 6000 m, and the distance to the imaging area was 25 km. A four-sided rounded flight path was used to conduct observation imaging experiments around the central area.

(2) Test results
The first-stage flight test results are shown in Fig. 7.6. Fig. 7.6A shows the real part of the echo signal in the fast- and slow-time domains; because there is no strongly reflecting point object in the scene, obvious interference fringes are not easy to see in the image. Fig. 7.6B shows the echo after range compression; obvious bright lines extending in the slow-time direction can be observed, indicating that the transmitted signal quality is high, echo acquisition is normal, range compression is good, and the echo signal-to-noise ratio is relatively high. Fig. 7.6C shows the result of azimuth compression processing [3]; the two-dimensional scene and various features
Fig. 7.6 Processing results of the airborne bistatic SAR test.
can be observed, with good focusing, rich gray levels, high contrast, and obvious geometric features. Among them are ponds, dwellings, and courtyards, all consistent with the actual scenes recorded by simultaneous aerial photography during the experiment. However, because of the short imaging distance, the geometric distortion caused by the “close range compression” effect is obvious, as shown in the upper part of Fig. 7.6C. According to the test, the imaging resolution is 3 m (range) × 2 m (azimuth), and the peak sidelobe ratio and integrated sidelobe ratio are also high, which verifies the imaging principle and imaging method of airborne bistatic side-looking SAR. This test was the first domestic implementation of bistatic side-looking SAR imaging, making China the fifth country to implement bistatic side-looking SAR verification after Great Britain, the United States, Germany, and France.
The imaging results of the second-stage experiment are shown in Fig. 7.7. The imaging scene is significantly larger than that of the first-stage experiment, and mountains, fields, roads, villages, and other features can be observed. Because the test area is located in
Fig. 7.7 Imaging results of the second-stage test of airborne bistatic side-looking SAR.
northwestern China and the test took place in late autumn, the ground and vegetation were relatively dry and the incidence angle was larger, so the image signal-to-noise ratio and contrast are slightly lower than in the first-stage test. However, with the increased system bandwidth and the correspondingly longer synthetic aperture time, the imaging resolution reached 1 m (range) × 1 m (azimuth), higher than in the first-stage test. At the same time, because of the long imaging distance, there was no obvious “close range compression” geometric distortion. The test also verified the technical performance of the attitude measurement and positioning device and the synchronization unit in the experimental radar, as well as the installation and environmental adaptability of the experimental radar.

2. Bistatic forward-looking SAR imaging test
The purpose of this test was to obtain airborne bistatic forward-looking SAR echo data, test the imaging method, verify the forward-looking imaging mechanism, and realize forward-looking ground imaging from the receiving aircraft.

(1) Experimental plan
Before the airborne flight test, a dual-vehicle-borne forward-looking imaging experiment was carried out on the ground to verify the operational coordination of the experimental system and the two-dimensional focusing performance in the image domain. The flight test was again divided into two stages. The first stage was a principle verification test, in which the experimental system was not equipped with attitude measurement and positioning or a synchronization extension. The experiment was conducted at a site in western China in early winter. Its main purpose was to obtain airborne bistatic forward-looking SAR echo data, to verify the principle of forward-looking imaging, and to provide
an early data basis for the echo model and imaging method of forward-looking imaging. In this experiment, a wide beam with a fixed pointing direction was still used to avoid the problem of spatial synchronization, while asynchronous data acquisition and adjacent-correlation correction were used to record and correct the echo data.
The second-stage test was a technical verification test. The attitude measurement and positioning device and the synchronization extension were added to the test system, and the system bandwidth was increased. The test was carried out in early winter at a site in central China, and flight tests were carried out in various modes, such as horizontal flight and oblique flight. The main purpose was to test the technical performance of time-frequency-space synchronization, to obtain better echo data together with the corresponding position measurement and synchronization data, and to improve the imaging quality. The tests in both stages were carried out on Yun-5 aircraft, with external pods used to install the antennas and servo systems. The two carriers were 500 m apart, 500 m above the ground, and 2 km from the imaging area, as shown in Fig. 7.8.

(2) Test results
Fig. 7.9 shows the imaging result of the first-stage airborne bistatic forward-looking SAR [1], with an imaging resolution of 3 m (range) × 3 m (azimuth), which realizes the principle verification of airborne bistatic forward-looking SAR. From the image we can observe the shapes and reflectivity differences of different ground plots, as well as the trend of the ridges,
Fig. 7.8 Experimental plan.
Fig. 7.9 Imaging result of the first-stage bistatic forward-looking SAR.
and can also observe the geometric distortion of “close range compression” and the radiometric distortion caused by antenna beam modulation. In addition, affected by factors such as the transmit power and ground moisture, the signal-to-noise ratio of the echo was relatively low, and the gray scale and contrast of the image appear insufficient. This was the first bistatic forward-looking SAR image in the world with the transmitter and receiver both mounted on aircraft, and it has been fully recognized by international peers and IEEE journals.
Fig. 7.10 shows part of the imaging result of the second-stage airborne bistatic forward-looking SAR. It can be observed that the image quality of the roads, hangars, airplanes, and other features in the airport apron area has improved. After that, airborne bistatic forward-looking SAR was verified with a product-grade airborne radar and a corresponding carrier aircraft; the imaging performance was significantly improved and met the application requirements, a result inseparable from those obtained in the first two test stages. Finally, the imaging result of the bistatic forward-looking SAR test carried out by the author's team in 2020 is illustrated in Fig. 7.11.
Fig. 7.10 Imaging result of the second-stage bistatic forward-looking SAR.
Fig. 7.11 Bistatic forward-looking SAR imaging result in 2020, UESTC.
In addition, in the second half of 2020, the author's team independently carried out experimental verification of airborne bistatic forward-looking SAR-GMTI [4], as well as spaceborne-airborne forward-looking SAR imaging in cooperation with the China Academy of Space Technology [5], as shown on the left of the third row in Fig. 1.4.
References
[1] J. Yang, Y. Huang, J. Wu, et al., A first experiment of airborne bistatic forward-looking SAR: preliminary results, in: IGARSS 2013, Melbourne, Australia, 2013.
[2] Y. Huang, J. Yang, L. Xian, et al., Vehicle-borne bistatic synthetic aperture radar imaging, in: IGARSS 2007, Barcelona, Spain, 2007.
[3] L. Xian, J. Xiong, Y. Huang, et al., Research on airborne bistatic SAR squint imaging mode algorithm and experiment data processing, in: APSAR 2007, Huangshan, China, 2007.
[4] Z. Liu, H. Ye, Z. Li, et al., Optimally matched space-time filtering technique for BFSAR nonstationary clutter suppression, IEEE Trans. Geosci. Remote Sens. (2021) (early access).
[5] Z. Sun, J. Wu, Z. Lv, et al., Spaceborne-airborne bistatic SAR experiment using GF-3 illuminator: description, processing and results, in: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, July 11–16, 2021.
Index

Note: Page numbers followed by f indicate figures, and t indicate tables.
A Absolute synchronization method, scheme of, 253, 254f Acceleration error constraint, 227 Accelerometers, 186–187 Accuracy of Doppler parameters, 189–193 Doppler centroid, 189, 191f Doppler frequency rate, 190, 192–193f configuration mode, 133 Airborne bistatic forward-looking SAR, 305 echo data, 303–304 Airborne bistatic SAR test, 302f Airborne test, 299–306, 301–303f Airborne translational invariant bistatic SAR, test scheme of, 301f Aircraft attitude data, 294–296 Aircraft-borne testing, 281 Aircraft geographic coordinate system, 255–256, 255f Algorithm back-projection (BP) imaging algorithm (see Back-projection (BP) imaging algorithm) fast factorized back projection (FFBP) algorithm, 141, 152–154, 153f, 159f frequency-domain imaging algorithm (see Frequency-domain imaging algorithm) imaging algorithm (see Imaging algorithm) iterative algorithm, 244 multiobjective genetic algorithm NSGAII, 110 principle, 209 simple range-azimuth compression algorithm, 298–299 simulation, 211 time-domain imaging algorithm (see Time-domain imaging algorithm) Along-track motion
compensation, 238–239 error, 238f Ambiguity azimuth ambiguity, 88–89 generalized ambiguity function (GAF), 95, 97f performance parameters, 58, 61 signal-to-first ambiguity ratio, 190 Antenna aperture, 1, 4–5, 29 Antenna beam footprints, 247 Antenna pointing controller, 253, 258 Aperture synthesis, 19–21 Asymmetric Doppler contribution, 118–119 Asynchronous acquisition technology, 300 Attitude error, 221, 228 Autofocus methods, 236, 242–245, 245f Auxiliary inertial measurement unit, 229 Azimuth ambiguity, 88–89 Azimuth compression signal, 271–273 Azimuth processing mismatch, 265 Azimuth resolution, 27–31 azimuth pulse compression and, 29 of bistatic SAR, 30 of parallel-flying side-looking bistatic SAR, 31f
B Back-projection (BP) imaging algorithm, 151–154, 245 calculation repeatability of, 151–152, 151f flow chart, 150, 150f imaging, 161, 161f, 162t principle of, 149–150, 150f Baseband Doppler centroid, 189 Baseline type selection, 35 Beam center pointing error constraint, 250 Beam direction angle, 258 Beam footprint center deviation error constraint, 248 Beam modulation effect, 16f, 17 Beam pointing solution method, 250–251
Bearing platforms, 49–53
Bessel functions, 225
Binarization (sparsity) of grayscale data, 198
Bistatic forward-looking SAR, 255–256
  imaging, 305f
Bistatic side-looking SAR, 7, 19, 218, 220–221, 239, 269
  imaging test, 300
  with motion error, 219f
  side view of, 223f
Bistatic synthetic aperture radar
  configuration classification (see Configuration classification)
  development trends (see Development trends)
  imaging theory (see Imaging theory)
  motion compensation (see Motion compensation)
  parameter estimation (see Parameter estimation)
  performance parameters (see Performance parameters)
  scattering coefficient, 103
  scattering point echo spectrum, 131f
  SNR equation, 102
  spatial resolution (see Spatial resolution)
  synchronization (see Synchronization)
  transceiver division, 234
Boltzmann constant, 102

C
Calculation parameters, spaceborne-airborne bistatic SAR, 134t
Carrying platform, 49
Circle probability error vs. beam coincidence degree, 251f
Circular probability error vs. synchronization error, 251f
"Close range compression" geometric distortion, 302–303
"Close range compression" effect, 301–302
Close-range imaging tests, 293
Close-range principle verification, 297
Clutter lock fine tracking method, 257–258, 258–259f
Clutter locking technology, 237
Coarse alignment method of pointing solution, 253
Computational analysis, 159
Configuration classification, 31–53
  baseline aperture combination, 40–44
  design principles, 42
  two-dimensional resolution relation, 41
  bearing platform combination, 49–53
  carrying platform, 49
  combination modes, 51
  imaging area, location of, 46–47
  synthetic aperture direction, 36–40
  choice of, 40
  Doppler ground range resolution, 38
  imaging ambiguous area, 40
  iso-Doppler line, 37
  transceiver baseline type, 32–36
  delay ground range resolution, 34
  image ambiguous area, 34
  iso-time-delay line, 33
  selection of, 35
  transmitting and receiving flight mode, 45–46
  scan mode, 47–49
Configuration design, 107–111
  method, 109–111, 110f
  principles, 108–109
Convenient bearing platform, 282
Coordinate transformation, 231
Cosine theorem, 187–188
Coupling, 146–149
Cross-flight translational variant mode, 168

D
Data analysis tools, 293–294
Data processing, 295–296
Data synchronization, 234
Decoupling, 149
Deformed two-dimensional frequency modulated signal, 122–123, 125
Delay ambulation, 116, 117f
Delay bending, 116
Delay ground range resolution, 34
Delay ground resolution, 90–91
Delay migration trajectory, 115–116
Delay resolution, 23
Demonstration verification testing, 282
Development trends, 63–75
  research trends, 64–73
  echo model, 65
  imaging algorithm, 67
  imaging theory, 64
  motion compensation, 70
  test verification, 72
Direct synchronization method, 274, 275f
Doppler bandwidth, 119
Doppler centroid, 119, 188–189, 196
  accuracy of Doppler parameters, 189, 191f
  baseband Doppler centroid, 189
  estimation, 209
  algorithm principle, 209
  algorithm simulation, 211
  transform domain slope detection method, 196
Doppler curve, 120
Doppler frequency, 88–89, 96, 118–119, 128, 188
  estimation method, 214–215
  modulation parameters, 120–121
  rate estimation, 199
Doppler ground range resolution, 90–91
Doppler ground range resolutions, 38, 94
Doppler ground range resolution, 90–92, 109
Doppler history, echo model, 117–121
Doppler lines, 91–92
Doppler parameters, 187–188, 231
  estimation, 193, 236
Doppler phase, 188
  history, 185–186
Doppler resolution, 85, 95, 119
Doppler signal, 8, 27–31, 27f, 37, 39, 49, 71, 120, 185–186, 260
  energy, 195–196
  matching filtering, 260–261
  parameters, 185
  phase of bistatic SAR, 194–195
  spectrum, 189
Doppler variation law, of bistatic SAR echo, 111
Dual-platform, 230
Dual-vehicle-borne bistatic SAR imaging, 299f
Dynamic range, 57–58
E
Echo-Doppler signal modulation, 270–271
Echo model, 65
  amplitude error, 223
  azimuth signal, 271–273
  characteristics of point targets, 197f, 203f
  data, 292, 298f
  data records, 294
  Doppler frequency, 93
  Doppler frequency variation, law of, 118f
  error, 217, 237–245
  information extracted from, 236
  law, 193–207
  parameter estimation based on, 193–207
  polynomial phase transform method, 194–195
  slow-time signal correlation method, 204–207
  transform domain slope detection method, 195–204
  model, 111–136
  Doppler history, 117–121
  frequency-domain echo model, 124–136
  slant range history, 113–117, 115f
  time-domain echo model, 121–124
  power vs. space synchronization status, 250f
  recording process, 115–116
  signal for simulation, 271–273
  two-dimensional spectrum, 125
Electromagnetic scattering theory, 103–104
Electromagnetic wave propagation, 1, 19, 26
Elevation angle, 253–255
Equivalent noise, 58
Equivalent phase center (EPC), 6, 16
Error
  acceleration error constraint, 227
  along-track motion, 238f
  attitude error, 221, 228
  beam center pointing error constraint, 250
  beam footprint center deviation error constraint, 248
  bistatic side-looking SAR with motion error, 219f
  circle probability error vs. beam coincidence degree, 251f
  echo, 217
  fixed frequency error, 269f
  frequency synchronization error, 266–273
  linear frequency error on imaging, 270
  PRF linear time error, 265f
  PRF synchronization error, 263
  random frequency error, 271
  reconstruction error, 84–85
  linear phase error, 190, 221
  linear time-varying frequency error, 270, 273
  microsecond time synchronization error, 261
  motion error (see Motion error)
  nanosecond time synchronization error, 261
  phase error, of different spectrum models, 136f
  pitch error, 223
  quadratic phase error, 192
  residual Doppler frequency error, 227
  second-level time synchronization error, 261
  space synchronization error, 248–252
  systematic errors, 187
  time and frequency synchronization error, 261–273
  track error, 219, 220f, 225–228
  velocity error constraint, 226
  yaw error, 222, 222f
Experimental radar system equipment, 284
Experimental verification, 294–295

F
Fast factorized back projection (FFBP) algorithm, 141, 152–154, 153f, 159f
  imaging, 161, 161f, 162t
Fast Fourier inverse transform (IFFT), 167–168
Fast-time frequency transform, 171
Feasibility principle, 284
Feasible test platform, 284
First-order motion compensation, 241
First-stage bistatic forward-looking SAR, 304f
Fitted reference track, 234
Fixed frequency error, index diagram of, 269f
Flight mode, 45–46
Follow-up mode, 48, 48f
Follow-up stripmap mode, 48–49, 48f
Formation process, 27
Forward-looking imaging model, 296
Forward-looking imaging tests, 293
Fourier transform, 126, 195
Free oscillation independent frequency source synchronization system, 276, 277f
Frequency-domain echo model, 82–83, 124–136
Frequency-domain imaging algorithm, 82–83, 124–125, 141–143, 162–183, 245
  Loffeld two-dimensional spectrum model, 163
  scattering point echo, 164
  two-dimensional Stolt transformation, 166
Frequency projection method, 65
Frequency synchronization error, 266–273
Frequency synthesizer, 276
Fresnel integral formula, 130
Fuzzy control in SAR, 88–89
Fuzzy function, physical meaning of, 96–97, 99

G
GAF. See Generalized ambiguity function (GAF)
Gaussian distribution, 271
Gaussian random process, 265
Generalized ambiguity function (GAF), 95, 97f
Generalized LBF model, 125–136
Geodetic coordinate system, 253–255
Geographical coordinate system, 255–256
Geometric configuration, 10
  parameters and imaging performance, 112t
Geometric model, of parallel-flying bistatic side-looking SAR, 226f
Geostationary orbit (GEO) satellites platform, 53
Gliding spotlight mode, 48f, 49
Global positioning system (GPS), 186, 229
  satellite positioning system, 278
  synchronization method, 276
GPS-based time-frequency synchronization scheme, 279f
Gradient method, 65
Gradient theory, 177–179
Ground equivalent testing, 281
Ground range resolution, 21–27
  delay resolution, 23
  linear frequency modulated signal, 21
  pulse compression, 23
  slant range resolution, 25
Ground test, 296–299
Gyro, 186
Gyroscopes, 187

H
Hardware conditions, bistatic SAR experimental radar system, 284–286
Height fluctuation correlation function, 105
Hierarchical merging, 154–158
Hierarchical realization principle, 283
Higher-order term parameters, 185
High-order Doppler parameters, 215
High-precision motion compensation, 229
High-quality imaging, 217
Hough transforms, 198

I
Image ambiguous area, 34, 40
Image-domain focusing effect, 133
Image meshing, 154–158
Image quality assessment, method based on, 209–215
  Doppler centroid estimation, 209
  Doppler frequency rate, 214
Image quality evaluation criteria, 208–209
Image Shannon entropy, 208
Imaging algorithm, 67
  algorithm flow, 174
  computational complexity, 175
  coupling, causes and countermeasures of, 146–149
  frequency-domain imaging algorithm, 141–143, 162–183
  Loffeld two-dimensional spectrum model, 163
  scattering point echo, 164
  two-dimensional Stolt transformation, 166
  mathematical essence of, 140–143
  performance, 176
  space-variant characteristics, 143–145
  time-domain imaging algorithm, 141–142, 149–161
  algorithm process, 158
  computational analysis, 159
  fast BP imaging process, 151–154
  hierarchical merging, 154–158
  image meshing, 154–158
  performance analysis, 160
  two-dimensional Stolt transformation, 166
  fast-time frequency transform, 171
  reference functions, multiplication of, 166
  role of, 170–174
  slow-time frequency transform, 172
  of special geometric mode, 168
  Stolt transform, 167
Imaging method, 78–90
  imaging processing, purpose of, 78–81, 79f
  two-dimensional correlation imaging method, 81–85
Imaging performance indicators, 272f
Imaging plane, 10
Imaging process, 2–31
  aperture synthesis, 19–21
  azimuth resolution, 27–31
  azimuth pulse compression and, 29
  bistatic synthetic aperture radar (SAR), 30
  formation process, 27
  characteristics, 7–9
  light-dark relationship, 12
  penetration phenomenon, 19
  scattering, 11
  shadow phenomena, 17
  speckle, 18
  ground range resolution, 21–27
  delay resolution, 23
  linear frequency modulated signal, 21, 23–24f
  pulse compression, 23
  slant range resolution, 25
  principle, 2–31
  treatment process, 7–9
Imaging theory, 64, 77
  configuration design, 107–111
  method, 109–111, 110f
  principles, 108–109
  echo model, 111–136
  Doppler history, 117–121
  frequency-domain echo model, 124–136
  slant range history, 113–117, 115f
  time-domain echo model, 121–124
  imaging method, 78–90
  imaging processing, purpose of, 78–81, 79f
  two-dimensional correlation imaging method, 81–85
  resolution performance analysis, 90–107
  radiation resolution, 101–107
  spatial resolution, 91–101
  equation, 92
  nonorthogonality of, 95
  spatial-variant characteristics of, 99
  of typical static SAR configuration, 100f
IMUs. See Inertial measurement units (IMUs)
Independent frequency source method of free oscillation, 276
Inertial measurement units (IMUs), 187, 229
Inertial navigation system (INS), 186, 229
Integral equation method in electromagnetic scattering theory, 104
Integral equation model, 106
Integrated generalized ambiguity function method, 65
Integrated side lobes ratio (ISLR), 61, 62f, 193, 266
Iso-time-delay line, 33
Iterative algorithm, 244
Iterative autofocus
  image quality assessment, 209–215
  evaluation criteria, 208–209
  methods, 71

J
Jitter time difference on imaging, 265

K
Kalman filtering technology, 230–231, 234

L
Legendre functions, 225
Light-dark relationship, 12–15
Linear frequency difference, 272f
Linear frequency error on imaging, 270
Linear frequency-modulated signals, 21, 23–24f, 120, 130–132, 193, 204
Linear phase error, 190, 221
Linear time difference on imaging, 265
Linear time-varying frequency error, 270, 273
Line detection method, 195–196
Line-of-sight angle interval, 154–155, 157–158
Line-of-sight motion compensation process, 239–242, 241f
Loffeld model (LBF), 125–136
Loffeld two-dimensional spectrum model, 163
Long-distance imaging, 101
Low signal-to-noise ratios, 101–107
Lynx SAR, 230

M
Mean slant range, 113–114, 114f
Method of series reversion (MSR), 125–136
Microsecond time synchronization error, 261
Migration characteristics, 210
Model accuracy analysis, 133
Monitoring data, 294
Monostatic SAR system, 95, 189, 217, 247–248, 260–262
  nonuniform sampling of, 238f
Monostatic side-looking SAR, 3, 5, 7, 25, 27–31
Monostatic translational invariant mode, 169
Motion compensation, 70, 217–218
Motion error
  control and echo error compensation, 237–245
  effect of, 219–224
  measurement and perception, 228–236
  source of, 218–219, 219f
  tolerance, 225–228
Motion-sensing devices, 186, 217, 229
Motion-sensing information, 229–236
MSR. See Method of series reversion (MSR)
Multiobjective genetic algorithm NSGAII, 110
Multiple scattering point bistatic SAR echoes, 124f
Multipoint target simulation, 200f, 212f
Multivariate nonlinear equations, 109–110

N
Nanosecond time synchronization error, 261
Navigation principle, 231f
Newton method, 244
Nonlinear chirp scaled (NLCS) equilibrium, 195
Nonorthogonality of spatial resolution, 95
Nonsynchronous acquisition technology, 297
Nonuniform Fourier transform (NUFFT), 239
Nonuniform sampling of monostatic SAR, 238f
NUFFT. See Nonuniform Fourier transform (NUFFT)

O
One station fixed translational variant mode, 169
One-step motion compensation method, 242
On-site data processing, 295–296

P
Parallel-flight translational invariant mode, 169
Parallel-flight translational variant mode, 169
Parallel-flying bistatic side-looking SAR, 19, 222f, 225
  geometric model of, 226f
Parallel-flying translational variant bistatic configuration, 144–145
Parameter estimation, 185–186
  accuracy requirements
  Doppler centroid, 189, 191f
  Doppler frequency rate, 190, 192–193f
  echo law, 193–207
  polynomial phase transform method, 194–195
  slow-time signal correlation method, 204–207
  transform domain slope detection method, 195–204
  iterative autofocus
  image quality assessment, method based on, 209–215
  image quality evaluation criteria, 208–209
  motion measurement, 186–187
  parameter calculation, 187–189
Parameters
  Doppler frequency, modulation, 120–121
  Doppler parameters, 187–193, 231
  Doppler signal, 185
  estimation (see Parameter estimation)
  geometric configuration, 112t
  higher-order term, 185
  high-order Doppler parameters, 215
  mode and, 290
  performance (see Performance parameters)
  radar system, 101–107
  spaceborne-airborne bistatic SAR, calculation, 134t
  system parameter design method, 90
Peak side lobes ratio (PSLR), 61, 62f, 193, 266
Penetration phenomenon, 19
Performance analysis, 160
Performance parameters, 57
  ambiguity, 61
  impulse response side lobe, 61
  radiation, 57
  accuracy, 58
  ambiguity, 58
  dynamic range, 57–58
  equivalent noise, 58
  pulse response side lobes, 58
  resolution, 58
  radiation resolution, 60
  space, 57
  spaceborne-airborne bistatic SAR, 134t
  spatial resolution, 59
  technical performance, 59
Phase correction, 241
Phased implementation principle, 283
Phase error, of different spectrum models, 136f
Physics principles, 78
Pitch error, 223
Pixel-by-pixel calculation, 68, 70
Platform
  aircraft geographic coordinate system, 256
Platform conditions, 286–287
Point-by-point two-dimensional correlation, 81
Polynomial phase transform method, 194–195
  estimation process, 194
  principle of, 194
Preceding configuration design method, 110–111
PRF jitter on imaging, 267f
PRF linear time error on imaging, 265f
PRF synchronization error on imaging, 263
PRF triggering out of sync, on point target imaging, 264f
Principle of stationary phase, 125
Principle verification test, 282, 300, 303–304
Projection distortion correction, 89
PSLR. See Peak side lobes ratio (PSLR)
Pulse compression, 23, 29

Q
Quadratic curve trend, 190
Quadratic phase error, 192

R
Radar bistatic electromagnetic scattering, spatial geometry of, 104f
Radar cross-section of the scattering unit (RCS), 78
Radar echo, 258
Radar signal detection theory, 4
Radar system, 247
  parameters, 101–107
Radiation performance, 57–59
Radiation resolution, 58, 60
  of bistatic SAR, 101–107
  formula, 101
Radon, 198
Radon transform, 195–196, 199
Radon transform domain of WVD, 204
Random frequency error on imaging, 271
RCS. See Radar cross-section of the scattering unit (RCS)
Receiving station subsystem, 55–56, 56f, 285–286
Receiving subsystem, 247–248
Reconstruction error, of imaging method, 84–85
Reference functions, 166
Reference track of platform, 231
Residual Doppler frequency error, 227
Resolution performance analysis, 90–107
  radiation resolution, 101–107
  spatial resolution, 91–101
  equation, 92
  nonorthogonality of, 95
  spatial-variant characteristics of, 99
  of typical static SAR configuration, 100f
Resolution projection, 97–98, 98f
Round-trip movement, 297
S
Sandia National Laboratory in the United States, 230
Scanning mode, 47–49, 290
Scattering, 11
  characteristics, 11
  point echo, 164
  point model, 133
  point responses, in image domain, 113f
Second-level time synchronization error, 261
Second-order frequency modulation, 120
Second-order motion compensation, 241–242
Second-order polynomial phase signal, 185–186
Second-stage test bistatic forward-looking SAR, 305f
Second-stage test of airborne bistatic side-looking SAR, 301, 303f
Self-contained lithium battery, 287
Servo turntable, 256–257
Shadow phenomena, 17
Short-range low-power verification test, 287
Short-term measurement accuracy of INS/IMU, 230
Short-time Fourier frequency (STFT), 204
Signal-to-first ambiguity ratio, 190
Signal-to-noise ratio, 283–284
Similarity integral, 81–82
Simple range-azimuth compression algorithm, 298–299
Simulation rehearsal, 290–291
Single-point target simulation results, 212f
Slant range history, echo model, 113–117, 115f
  resolution, 25
Sliding mode, 48, 48f
Slow-time frequency Stolt transformation, 172, 177–179, 179f
Slow-time sampling point, 244
Slow-time signal correlation method, 204–207
Slow-time stationary phase point, 127
Space and radiation resolution, 91
Spaceborne-airborne bistatic SAR, 133
Spaceborne SAR, experimental results of, 213f
Spaceborne squint-looking SAR, 199
Space performance, 57
Space synchronization, 248–259
  absolute synchronization method, 254f
  error, 248–252
  technology, 253–259
Space-variant characteristics, 143–145
Spatial coordinate system, 225
Spatial geometric relationship, 108f
Spatial geometry of radar bistatic electromagnetic scattering, 104f
Spatial linearization, 164
Spatial resolution, 59, 91–101
  equation, 92
  nonorthogonality of, 95
  spatial-variant characteristics of, 99
  of typical static SAR configuration, 100f
Spatial-variant characteristics of spatial resolution, 99
Speckle, 18
Spotlight mode, 48f, 49
Squint-flying translational variant bistatic configuration, 144
Stationary phase point, 125
Stationary receiving station, 297
Stereotype test, 283
STFT. See Short-time Fourier frequency (STFT)
Stolt transform, 167
Strapdown attitude instrument, 231f
Stratospheric airships, 53–54
Stripmap SAR space synchronization, 258
Single Subject Principle, 284
Synchronization, 247–248
  absolute synchronization method, scheme of, 253, 254f
  space synchronization, 248–259
  error, 248–252
  technology, 253–259
  time and frequency synchronization, 260–279
  error, 261–273
  method, 274–279
Synchronous record, 292–293
Synthetic aperture radar (SAR), 1, 36–40
  bearing platform, 185
  direction, 40
  imaging processing, 78–81, 79f
  processing, 8
Systematic errors, 187
System composition, 54
  receiving station subsystem, 55–56, 56f
  transmitting station subsystem, 54–55
System configuration diagram, 285f
System parameter design method, 90

T
Target echo range history, 185–186
Target electromagnetic scattering theory, 103–104
Target point's mean slant range, variation law of, 115f
Taylor expansion, 96, 127–128
Taylor series, 116, 119
Technical performance, 59
Technical verification phase, 286
Technical verification test, 282, 301
Testing principles, 283–284
Test plan, bistatic SAR verification, 288–292
  configuration route, 289–290
  content subjects, 288–289
  mode and parameters, 290
  simulation rehearsal, 290–291
  test organization, 291–292
Test scheme, 300
  of airborne translational invariant bistatic SAR, 301f
Test verification, 72
Third-order Doppler, 119
Third-order phase modulation, 120
Third-order polynomial, 194–195
3 dB footprint overlap ratio, 249–250, 249f
Time and frequency synchronization, 260–279
  error, 261–273
  method, 274–279
Time-domain echo model, 121–124
Time-domain imaging algorithm, 141–142, 149–161
  algorithm process, 158
  computational analysis, 159
  fast BP imaging process, 151–154
  hierarchical merging, 154–158
  image meshing, 154–158
  performance analysis, 160
Time-frequency analytical methods, 204
Time-frequency-space synchronization, 303–304
Track error, 219, 220f, 225–228
Traditional resampling method, 238–239
Transceiver baseline type, 32–36
Transceiver Doppler frequency contribution modeling, 129
Transceiver station flight trajectory, 232f
Transform domain slope detection method, 195–204
  Doppler centroid, 196
  Doppler frequency rate estimation, 199
Translational invariant bistatic configuration, 145
Translational invariant bistatic forward-looking SAR, geometric model of, 188f
Transmitting platform, 53
Transmitting slant range, 113–114
Transmitting station subsystem, 54–55, 285–286
Transmitting subsystem, 247–248
Two-dimensional autofocus method, 245
Two-dimensional correlation imaging method, 81–86, 89, 120–121
  operation process, 89
Two-dimensional coupling, 146, 146f
Two-dimensional FM interference pattern, 294
Two-dimensional Fourier transform, 132–133
Two-dimensional resolution, 41
Two-dimensional spectrum model, 130
  of scattering point echo, 111–113
Two-dimensional Stolt transformation, 166
  fast-time frequency transform, 171
  reference functions, multiplication of, 166
  role of, 170–174
  slow-time frequency transform, 172
  of special geometric mode, 168
  Stolt transform, 167
Typical airborne platform, 228
Typical spaceborne platform, 228
Typical static SAR configuration, spatial resolution of, 100f
Typical synchronization methods, 274

U
University of Electronic Science and Technology of China (UESTC), 241
U-shaped line, 114
U-shaped migration trajectory, 121

V
Vector composition parallelogram rule, 93
Velocity error constraint, 226
Verification, 281
  experimental conditions, 284–288
  hardware conditions, 284–286
  platform conditions, 286–287
  scene and target, 287–288
  experiment example, 296–306
  airborne test, 299–306, 301–303f
  ground test, 296–299
  experiment implementation
  data analysis, 293–295
  data processing, 295–296
  data record, 292–293
  testing principles, 283–284
  test levels and principles, 281–284
  test plan, 288–292
  configuration route, 289–290
  content subjects, 288–289
  mode and parameters, 290
  simulation rehearsal, 290–291
  test organization, 291–292
V-shaped domain, 115

W
Waveform entropy, 210
Waveform generator DDS, 262–263
Wigner-Ville distribution (WVD) method, 204, 205f
Wireless transceiver unit, 253
Working sequence, 262f
WVD method. See Wigner-Ville distribution (WVD) method

Y
Yaw error, 222, 222f
  of transceiver station, 235f