English · 269 pages · 2005
IET RADAR, SONAR AND NAVIGATION SERIES 19

Series Editors: Dr N. Stewart
                Professor H. Griffiths
Radar Imaging and Holography
Other volumes in this series:

Volume 1    Optimised radar processors A. Farina (Editor)
Volume 3    Weibull radar clutter M. Sekine and Y. Mao
Volume 4    Advanced radar techniques and systems G. Galati (Editor)
Volume 7    Ultra-wideband radar measurements: analysis and processing L.Yu. Astanin and A.A. Kostylev
Volume 8    Aviation weather surveillance systems: advanced radar and surface sensors for flight safety and air traffic management P.R. Mahapatra
Volume 10   Radar techniques using array antennas W. Wirth
Volume 11   Air and spaceborne radar systems: an introduction P. Lacomme (Editor)
Volume 13   Introduction to RF stealth D. Lynch
Volume 14   Applications of space-time adaptive processing R. Klemm (Editor)
Volume 15   Ground penetrating radar, 2nd edition D. Daniels
Volume 16   Target detection by marine radar J. Briggs
Volume 17   Strapdown inertial navigation technology, 2nd edition D. Titterton and J. Weston
Volume 18   Introduction to radar target recognition P. Tait
Volume 19   Radar imaging and holography A. Pasmurov and J. Zinoviev
Volume 20   Sea clutter: scattering, the K distribution and radar performance K. Ward, R. Tough and S. Watts
Volume 21   Principles of space-time adaptive processing, 3rd edition R. Klemm
Volume 101  Introduction to airborne radar, 2nd edition G.W. Stimson
Volume 102  Low-angle radar land clutter B. Billingsley
Radar Imaging and Holography

A. Pasmurov and J. Zinoviev
The Institution of Engineering and Technology
Published by The Institution of Engineering and Technology, London, United Kingdom

First edition © 2005 The Institution of Electrical Engineers
New cover © 2009 The Institution of Engineering and Technology

First published 2005

This publication is copyright under the Berne Convention and the Universal Copyright Convention. All rights reserved. Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted, in any form or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers at the undermentioned address:

The Institution of Engineering and Technology
Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom
www.theiet.org

While the authors and the publishers believe that the information and guidance given in this work are correct, all parties must rely upon their own skill and judgement when making use of them. Neither the authors nor the publishers assume any liability to anyone for any loss or damage caused by any error or omission in the work, whether such error or omission is the result of negligence or any other cause. Any and all such liability is disclaimed.

The moral rights of the authors to be identified as authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
British Library Cataloguing in Publication Data

Pasmurov, Alexander Ya.
Radar imaging and holography
1. Radar  2. Imaging systems  3. Radar targets  4. Holography
I. Title  II. Zinoviev, Julius S.  III. Institution of Electrical Engineers
621.3'848

ISBN (10 digit) 0 86341 502 4
ISBN (13 digit) 978-0-86341-502-9
Typeset in India by Newgen Imaging Systems (P) Ltd, Chennai
First printed in the UK by MPG Books Ltd, Bodmin, Cornwall
Reprinted in the UK by Lightning Source UK Ltd, Milton Keynes
Contents

List of figures  ix
List of tables  xvii
Introduction  1

1  Basic concepts of radar imaging  7
   1.1  Optical definitions  7
   1.2  Holographic concepts  10
   1.3  The principles of computerised tomography  14
   1.4  The principles of microwave imaging  20

2  Methods of radar imaging  27
   2.1  Target models  27
   2.2  Basic principles of aperture synthesis  31
   2.3  Methods of signal processing in imaging radar  33
        2.3.1  SAR signal processing and holographic radar for earth surveys  33
        2.3.2  ISAR signal processing  34
   2.4  Coherent radar holographic and tomographic processing  36
        2.4.1  The holographic approach  36
        2.4.2  Tomographic processing in 2D viewing geometry  41

3  Quasi-holographic and holographic radar imaging of point targets on the earth surface  49
   3.1  Side-looking SAR as a quasi-holographic radar  49
        3.1.1  The principles of hologram recording  50
        3.1.2  Image reconstruction from a microwave hologram  53
        3.1.3  Effects of carrier track instabilities and object's motion on image quality  57
   3.2  Front-looking holographic radar  60
        3.2.1  The principles of hologram recording  60
        3.2.2  Image reconstruction and scaling relations  62
        3.2.3  The focal depth  67
   3.3  A tomographic approach to spotlight SAR  70
        3.3.1  Tomographic registration of the earth area projection  70
        3.3.2  Tomographic algorithms for image reconstruction  72

4  Imaging radars and partially coherent targets  79
   4.1  Imaging of extended targets  80
   4.2  Mapping of rough sea surface  82
   4.3  A mathematical model of imaging of partially coherent extended targets  85
   4.4  Statistical characteristics of partially coherent target images  87
        4.4.1  Statistical image characteristics for zero incoherent signal integration  88
        4.4.2  Statistical image characteristics for incoherent signal integration  90
   4.5  Viewing of low contrast partially coherent targets  94

5  Radar systems for rotating target imaging (a holographic approach)  101
   5.1  Inverse synthesis of 1D microwave Fourier holograms  101
   5.2  Complex 1D microwave Fourier holograms  110
   5.3  Simulation of microwave Fourier holograms  112

6  Radar systems for rotating target imaging (a tomographic approach)  117
   6.1  Processing in frequency and space domains  117
   6.2  Processing in 3D viewing geometry: 2D and 3D imaging  119
        6.2.1  The conditions for hologram recording  120
        6.2.2  Preprocessing of radar data  124
   6.3  Hologram processing by coherent summation of partial components  126
   6.4  Processing algorithms for holograms of complex geometry  130
        6.4.1  2D viewing geometry  131
        6.4.2  3D viewing geometry  141

7  Imaging of targets moving in a straight line  147
   7.1  The effect of partial signal coherence on the cross range resolution  148
   7.2  Modelling of path instabilities of an aerodynamic target  151
   7.3  Modelling of radar imaging for partially coherent signals  152

8  Phase errors and improvement of image quality  157
   8.1  Phase errors due to tropospheric and ionospheric turbulence  157
        8.1.1  The refractive index distribution in the troposphere  157
        8.1.2  The distribution of electron density fluctuations in the ionosphere  166
   8.2  A model of phase errors in a turbulent troposphere  167
   8.3  A model of phase errors in a turbulent ionosphere  172
   8.4  Evaluation of image quality  173
        8.4.1  Potential SAR characteristics  173
        8.4.2  Radar characteristics determined from images  175
        8.4.3  Integral evaluation of image quality  177
   8.5  Speckle noise and its suppression  181
        8.5.1  Structure and statistical characteristics of speckle  182
        8.5.2  Speckle suppression  184

9  Radar imaging application  191
   9.1  The earth remote sensing  191
        9.1.1  Satellite SARs  191
        9.1.2  SAR sea ice monitoring in the Arctic  195
        9.1.3  SAR imaging of mesoscale ocean phenomena  204
   9.2  The application of inverse aperture synthesis for radar imaging  215
   9.3  Measurement of target characteristics  217
   9.4  Target recognition  222

References  231
List of abbreviations  241
Index  243
List of figures

Chapter 1
Figure 1.1   The process of imaging by a thin lens  8
Figure 1.2   A schematic illustration of the focal depth of an optical image: (a) image of point M lying on the optical axis; (b) image of point A; (c) image of point B and (d) image of points A and B in the planes M1, M2 and M3  9
Figure 1.3   The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave  11
Figure 1.4   Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram  13
Figure 1.5   Viewing geometry in computerised tomography (from Reference 15): Γm – circumference for measurements; Γc – circumference with the centre at point O enveloping a cross section; p – arbitrary point in the circle with the polar coordinates ρ and θ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ, δ–δ – parallel elliptic arcs defining the resolving power of the transmitter–receiver pairs (CC′ and DD′)  15
Figure 1.6   A scheme of X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line  17
Figure 1.7   The geometrical arrangement of the G(x, y) pixels in the Fourier region of a polar grid. The parameters ϑmax and ϑmin are the variation range of the projection angles. The shaded region is the SAR recording area  18
Figure 1.8   Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array  22

Chapter 2
Figure 2.1   Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers)  30
Figure 2.2   Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target  32
Figure 2.3   The holographic approach to signal recording and processing in SAR: 1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of target field in the form of a transparency (azimuthal recording of a 1D microwave hologram), 2 – 1D Fourier or Fresnel transformation, 3 – display  34
Figure 2.4   Synthesis of a microwave hologram: (a) quadratic hologram recorded at a high frequency, (b) quadratic hologram recorded at an intermediate frequency, (c) multiplicative hologram recorded at a high frequency, (d) multiplicative hologram recorded at an intermediate frequency, (e) quadrature holograms, (f) phase-only hologram  37
Figure 2.5   A block diagram of a microwave holographic receiver: 1 – reference field, 2 – reference signal cos(ω0 t + ϕ0 ), 3 – input signal A cos(ω0 t + ϕ0 − ϕ), 4 – signal sin(ω0 t + ϕ0 ) and 5 – mixer  39
Figure 2.6   Illustration for the calculation of the phase variation of a reference wave  39
Figure 2.7   The coordinates used in target viewing  42
Figure 2.8   2D data acquisition design in the tomographic approach  45
Figure 2.9   The space frequency spectrum recorded by a coherent (microwave holographic) system. The projection slices are shifted by the value fpo from the coordinate origin  46
Figure 2.10  The space frequency spectrum recorded by an incoherent (tomographic) system  47

Chapter 3
Figure 3.1   A scheme illustrating the focusing properties of a Fresnel zone plate: 1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image, 4 – real image and 5 – zeroth-order diffraction  50
Figure 3.2   The basic geometrical relations in SAR  51
Figure 3.3   An equivalent scheme of 1D microwave hologram recording by SAR  51
Figure 3.4   The viewing field of a holographic radar  60
Figure 3.5   A schematic diagram of a front-looking holographic radar  61
Figure 3.6   The resolution of a front-looking holographic radar along the x-axis as a function of the angle ϕ  61
Figure 3.7   The resolution of a front-looking holographic radar along the z-axis as a function of the angle ϕ  62
Figure 3.8   Generalised schemes of hologram recording (a) and reconstruction (b)  63
Figure 3.9   Recording (a) and reconstruction (b) of a two-point object for finding longitudinal magnifications: 1, 2 – point objects, 3 – reference wave source and 4 – reconstructing wave source  67
Figure 3.10  The focal depth of a microwave image: 1 – reconstructing wave source, 2 – real image of a point object and 3 – microwave hologram  68
Figure 3.11  The basic geometrical relations for a spot-light SAR  70

Chapter 4
Figure 4.1   The geometrical relations in a SAR  85
Figure 4.2   A generalised block diagram of a SAR  85
Figure 4.3   The variation of the parameter Q with the synthesis range Ls at λ = 3 cm, = 0.02 and various values of R  90
Figure 4.4   The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge: λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap); 2, 6 – 0.25(λR/2)1/2; 3, 7 – (λR/2)1/2; 4, 8 – 2.25(λR/2)1/2  91
Figure 4.5   The variation of the parameter Qh with the number of integrated signals Ni at various values of Ka  93
Figure 4.6   The variation of the parameter Qe with the synthesis range Ls at various signal correlation times τc  97
Figure 4.7   The parameter Q as a function of the synthesis range Ls at various signal correlation times τc  98

Chapter 5
Figure 5.1   A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc L of a circle of radius R0: 1 – transmitter, 2 – receiver  102
Figure 5.2   A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C  103
Figure 5.3   The geometry of data acquisition for the synthesis of a 1D microwave Fourier hologram of a rotating object  103
Figure 5.4   Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency  106
Figure 5.5   The dependence of microwave image resolution on the normalised aperture angle of the hologram  109
Figure 5.6   Microwave images reconstructed from Fourier holograms: (a) quadrature hologram, (b) complex hologram with carrier frequency, (c) complex hologram without carrier frequency and (d,e,f) the variation of the reconstructed image with the hologram angle ψs (complex hologram without carrier frequency)  113
Figure 5.7   The algorithm of digital processing of 1D microwave complex Fourier holograms  114
Figure 5.8   A microwave image of a point object, reconstructed digitally from a complex Fourier hologram as a function of the object's aspect ϕ0 (ψs = π/6): (a) ϕ0 = π/12, (b) ϕ0 = 5π/2 and (c) ϕ0 = 3π/4  115

Chapter 6
Figure 6.1   The aspect variation relative to the line of sight of a ground radar as a function of the viewing time for a satellite at the culmination altitudes of 31°, 66° and 88°: (a) aspect α and (b) aspect β  121
Figure 6.2   Geometrical relations for 3D microwave hologram recording: (a) data acquisition geometry; a–b, trajectory projection onto a unit surface relative to the radar motion and (b) hologram recording geometry  123
Figure 6.3   The sequence of operations in radar data processing during imaging  125
Figure 6.4   Subdivision of a 3D microwave hologram into partial holograms: (a) 1D partial (radial and transversal), (b) 2D partial (radial and transversal) and (c) 3D partial holograms  128
Figure 6.5   Subdivision of a 3D surface hologram into partial holograms: (a) radial, (b) 1D partial transversal and (c) 2D partial  129
Figure 6.6   Coherent summation of partial holograms. A 2D narrowband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image  132
Figure 6.7   Coherent summation of partial holograms. A 2D wideband microwave hologram: (a) highlighting of partial holograms, (b) formation of an integral image  133
Figure 6.8   The computational complexity of the coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images, (b) hologram samples  137
Figure 6.9   The relative computational complexity of coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images/CCA, (b) hologram samples/CCA  140
Figure 6.10  The relative computational complexity of coherent summation algorithms of hologram samples and transverse partial images versus the coefficient µ in the case of a wideband hologram  141
Figure 6.11  The relative computational complexity of coherent summation algorithms for radial and transverse partial images versus the coefficient µ in the case of a wideband hologram  142
Figure 6.12  The transformation of the partial coordinate frame in the processing of a 3D hologram by coherent summation of transverse partial images  143

Chapter 7
Figure 7.1   Characteristics of an imaging device in the case of partially coherent echo signals: (a) potential resolving power at C2 = 1, (b) performance criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m and 3 – dc = 0)  151
Figure 7.2   Typical errors in the impulse response of an imaging device along the s-axis: (a) response shift, (b) response broadening, (c) increased amplitude of the response side lobes and (d) combined effect of the above factors  153
Figure 7.3   The resolving power of an imaging device in the presence of range instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m, 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s  154
Figure 7.4   The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σx′,y′ = 0.1 m/s (other details as in Fig. 7.3), (b) σx′,y′ = 0.2 m/s (other details as in Fig. 7.3)  155
Figure 7.5   Evaluation of the performance of a processing device in the case of partially coherent signals versus the synthesis time Ts and the space step of path instability correlation dc: 1 – dc = 6.98 m, 2 – dc = 3.49 m  155

Chapter 8
Figure 8.1   The normalised refractive index spectrum Φn(χ)/Cn2 as a function of the wave number χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – Carman's model and 4 – modified Carman's model  160
Figure 8.2   The profile of the structure constant Cn2 versus the altitude for April at the SAR wavelength of 3.12 cm  164
Figure 8.3   The profile of the structure constant Cn2 versus the altitude for November at the SAR wavelength of 3.12 cm  165
Figure 8.4   A geometrical construction for a spaceborne SAR tracking a point object A through a turbulent atmospheric stratum of thickness ht  168
Figure 8.5   A schematic test ground with corner reflectors for investigation of SAR performance  176
Figure 8.6   A 1D SAR image of two corner reflectors  177
Figure 8.7   A histogram of the noise distribution in a SAR receiver  178
Figure 8.8   The grey-level (half-tone) resolution versus the number of incoherently integrated frames N  179
Figure 8.9   The dependence of the image interpretability on the linear resolution pa = pr = p  180
Figure 8.10  The dependence of the half-tone resolution on the number of incoherent integrations over the total real antenna pattern  181

Chapter 9
Figure 9.1   The mean monthly convoy speed in the NSR changes from V0 (without satellite data) to V1 (SAR images used by the icebreaker's crew to select the route in sea ice). The mean ice thickness (hi) is shown as a function of the season. (N. Babich, personal communication)  198
Figure 9.2   (a) Photo of grease ice and (b) a characteristic dark SAR signature of grease ice. © European Space Agency  199
Figure 9.3   Photo of typical nilas with finger-rafting  200
Figure 9.4   A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area of 500 km × 500 km around the northern Novaya Zemlya. A geographical grid and the coastline are superimposed on the image. © Canadian Space Agency  201
Figure 9.5   A RADARSAT ScanSAR Wide image of 3 March 1998, covering the boundary between old and first-year sea ice in the area to the north of Alaska. © Canadian Space Agency  202
Figure 9.6   (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR signature of pancake ice. A mixed bright and dark backscatter signature is typical for pancake and grease ice found at the ice edge. © European Space Agency  203
Figure 9.7   A RADARSAT ScanSAR Wide image of 8 May 1998, covering the south-western Kara Sea. © Canadian Space Agency  204
Figure 9.8   An ENVISAT ASAR image of 28 March 2003, covering the ice edge in the Barents Sea westward and southward of Svalbard. © European Space Agency  205
Figure 9.9   An ERS-2 SAR image of 11 September 2001, covering the Red Army Strait in the Severnaya Zemlya Archipelago. © European Space Agency  206
Figure 9.10  An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over the Black Sea (region to the east of the Crimea peninsula) and showing upwelling, natural films  208
Figure 9.11  SST retrieved from a NOAA AVHRR image on 24 June 2000  209
Figure 9.12  A fragment of an ERS-2 SAR image (26 km × 22 km) taken on 30 September 1995 over the Northern Sea near the Norwegian coast and showing swell  210
Figure 9.13  An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995 over the Northern Sea and showing an oil spill, wind shadow, low wind and ocean fronts  211
Figure 9.14  An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995 over the Northern Sea showing rain cells  213
Figure 9.15  An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995 over the Northern Sea showing an internal wave and a ship wake  214
Figure 9.16  The scheme of the reconstruction algorithm  221
Figure 9.17  A typical 1D image of a perfectly conducting cylinder  222
Figure 9.18  The local scattering characteristics for a metallic cylinder (E-polarisation)  223
Figure 9.19  The local scattering characteristics for a metallic cylinder (H-polarisation)  224
Figure 9.20  A mathematical model of a radar recognition device  226
List of tables

Chapter 6
Table 6.1  The number of spectral components of a PH  136

Chapter 8
Table 8.1  The main characteristics of the synthetic aperture pattern  174

Chapter 9
Table 9.1  Technical parameters of SARs borne by the SEASAT and Shuttle  192
Table 9.2  Parameters of the Almaz-1 SAR  192
Table 9.3  The parameters of the ERS-1/2 satellites  193
Table 9.4  SAR imaging modes of the RADARSAT satellite  194
Table 9.5  The ENVISAT ASAR operation modes  194
Table 9.6  The LRIR characteristics  216
Table 9.7  The variants of the sign vectors  227
Table 9.8  The valid recognition probability (a Bayes classifier)  228
Table 9.9  The valid recognition probability (a classifier based on the method of potential functions)  228
Introduction
The analysis of the current state and tendencies in radar development shows that novel methods of target viewing are based on a detailed study of echo signals and their informative characteristics. These methods are aimed at obtaining complete data on a target, with emphasis on revealing new, stable parameters for target recognition. One way of raising the efficiency of radar technology is to improve the available methods of radio vision, or imaging. Radio vision systems provide high resolution, considerably extending the scope of target detection and recognition. This field of radar science and technology is very promising, because it paves the way from the classical detection of a point target to the imaging of a whole object.

The physical mechanism underlying target viewing can be understood on a heuristic basis. An electromagnetic wave incident on a target induces an electric current on it, which generates a scattered electromagnetic wave. In order to find the scattering properties of the target, we must visualise the elements that make the greatest contribution to the wave scattering. This brings us to the concept of a radar image, which can be defined as a spatial distribution pattern of the target reflectivity. An image must therefore give a spatial, quantitative description of this physical property of the target with a quality not less than that provided by conventional observational techniques.

Radio vision makes it possible to sense an object as a visual picture. This is very important because we get about 90 per cent of all information about the world through vision. Of course, a radar image differs from a common optical image. For instance, a surface that is rough to light waves may be specular to radio waves (microwaves), and images of many objects will look like bright spots, or glare. Nevertheless, the representation of information carried by microwaves as visual images has become quite common.
It took much time and effort to get a high angular resolution in the microwave frequency band because of the limited size of a real antenna. It was not until the 1950–1960s that a sufficiently high resolution was obtained by a side-looking radar with a large synthesised antenna aperture. The synthetic aperture method was then described in terms of the range-Doppler approach. At about the same time, a new method of imaging in the visible spectrum emerged which was based on recording and reconstruction of the wave front and its phase, using a reference wave. A lens-free registration of the wave front (the holographic technique), followed by the image reconstruction, was first suggested by D. Gabor in
1948 and re-discovered by E. Leith and J. Upatnieks in 1963. The two researchers suggested a holographic method with a 'side reference beam' to eliminate the zeroth diffraction order. This principle was later used in a new, side-looking type of radar. A specific feature of holographic imaging is that a hologram records an integral Fourier or Fresnel transform of the object's scattering function.

The emergence of holography radically changed our conception of an object's image. Earlier, humans had dealt with images produced by recording the distribution of light intensity in a certain plane. But objects can generate a light field, or another kind of electromagnetic field, with all of its parameters modulated: the amplitude, phase, polarisation, etc. This discovery considerably extended the scope of spatial information that could be extracted about the object of interest. It should be noted that holography brought about revolutionary changes only in optics, because until then optics had possessed no means of recording the phase structure of an optical field. The application of holographic principles to the microwave frequency band, by contrast, proceeded easily and gave excellent results, because radio engineering had employed methods of registering the phase of an electromagnetic wave long before the emergence of holography.

For many years, radar imaging developed independently of holography, although some workers (E.N. Leith, W.E. Kock, D.L. Mensa, B.D. Steinberg) did note that many intermediate steps in the recording and processing techniques for radar imaging were quite similar to those of holography and tomography. These researchers, however, only briefly reviewed the holographic principles to point out the fundamental similarities and differences between optical and radar imaging; they did not analyse this connection comprehensively in the context of radar. E.N. Leith and A.L. Ingalls showed that the operation of a side-looking radar should be treated in terms of a holographic approach. Holograms recorded in the microwave frequency range were referred to as microwave holograms, and radar systems based on the holographic principle were called quasi-holographic by E.N. Leith. In fact, the work done in those years became the basis for designing a special type of radar to perform imaging.

The research into radar imaging developed quite intensively, and many scientists contributed to it: L.J. Cutrona, A. Kozma, D.A. Ausherman, G. Graf, I.L. Walker, W.M. Brown, D.L. Mensa, D.C. Munson, B.D. Steinberg, N.H. Farhat, V.C. Chen, D.R. Wehner and others. These efforts were accompanied by the development of tomographic techniques for image reconstruction in medicine and physiology (X-ray imaging). Initially, tomography was treated as a way of reconstructing the spatial distribution of a certain physical characteristic of an object by performing computational operations on data obtained while probing the object. This resulted in the emergence of reconstructive computerised tomography with its powerful mathematical methods. Later, tomographic techniques were suggested that could reconstruct a physical characteristic of an object by mathematical processing of the field reflected by it. Naturally, there have been suggestions to combine the available methods of radar imaging (e.g. the range-Doppler principles) with tomographic algorithms (D.L. Mensa, D.C. Munson). At present, the work on radar imaging goes on,
combining the principles of microwave holography, range-Doppler methods of reflected-field recording and tomographic image reconstruction. In Russia, the theory of side-looking radar has been developed by many workers: Yu.A. Melnik, N.I. Burenin, G.S. Kondratenkov, A.P. Reutov, Yu.A. Feoktistov, E.F. Tolstov, L.B. Neronsky and others. Important contributions to the theory of inverse aperture synthesis and tomographic image processing have been made by S.A. Popov, B.A. Rozanov, J.S. Zinoviev, A.Ya. Pasmurov, A.F. Kononov, A.A. Kuriksha, A. Manukyan and others.

This book presents systematised results on the application of direct and inverse aperture synthesis for radar imaging by holographic and tomographic techniques. The focus is on the research data obtained by the authors themselves. The book is primarily intended for engineers, designers and researchers who work in radar design and maintenance and are interested in the fundamental problem of extracting useful information from radar data.

The book consists of three parts: introductory Chapters 1 and 2, theoretical Chapters 3–8 and concluding Chapter 9. The first two chapters will be useful to a reader who has only a limited knowledge of optical holography, microwave holography and tomography. They cover material available in the literature, but the information is presented in such a way that the reader will be able to better understand the chapters that follow. In addition, Chapter 1 treats the equation for an optical hologram in a non-trivial way in order to explain the speckle structure of a radar image. Chapter 2 explains the physical difference between coherent (microwave holographic) and incoherent (tomographic) imaging. The mathematical relations presented can be regarded as an extension of the classical projection-slice theorem to coherent imaging. This allows the analytical methods of reconstructive computerised tomography to be applied to the further development of coherent imaging theory.
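The classical projection-slice theorem invoked above is easy to check numerically. The following sketch is ours, not the book's; the grid size and test object are arbitrary choices. It verifies with NumPy that the 1D Fourier transform of an object's projection coincides with the central (zero-frequency) slice of the object's 2D Fourier transform.

```python
import numpy as np

# A toy 2D "reflectivity" distribution: a bright rectangle on a dark background.
n = 64
f = np.zeros((n, n))
f[24:40, 20:44] = 1.0

# Projection p(x) = integral of f over y (here, a sum over rows).
p = f.sum(axis=0)

# Projection-slice theorem: the 1D Fourier transform of the projection
# equals the ky = 0 slice of the 2D Fourier transform of the object.
slice_from_projection = np.fft.fft(p)
slice_from_2d = np.fft.fft2(f)[0, :]

assert np.allclose(slice_from_projection, slice_from_2d)
```

Rotating the object and repeating the check reproduces the full theorem: each projection angle yields one radial slice of the 2D spectrum, which is exactly the polar-grid data acquisition geometry of Figure 1.7.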
Chapters 3–8 represent an attempt to treat the operation of imaging radar in terms of holography, microwave holography and tomography, without resorting to the Doppler approach. Most of this material consists of the authors' results published during the past 30 years. Chapter 3 discusses the holographic approach as applied to a side-looking radar. Its azimuthal channel is treated as a holographic system, in which the formation of a microwave hologram represents the recording of a field scattered from an artificial reference source, and the image reconstruction is described in terms of physical optics. We show that the use of a subcarrier frequency, introduced by turning the antenna beam away from the direction normal to the track velocity vector, leads to a distorted image. The holographic approach can readily evaluate a permissible deviation of the carrier's pathway from a straight line and find various radar parameters using conventional geometrical and optical methods. The holographic analysis of a front-looking radar on the basis of a generalised hologram geometry shows that the image is three-dimensional (3D); we describe the conditions for recording an undistorted image in the longitudinal and transversal directions. We also introduce the concept of focal depth and explain the pseudoscopic character of an image. The application of tomographic principles to a spotlight radar is largely discussed using
the results of D.C. Munson, who was the first to demonstrate their applicability to data processing. Chapter 4 considers radar aperture synthesis during the viewing of partially coherent and extended targets. The mathematical model of the aperture is also based on the holographic principle; the aperture is treated as a filter with a frequency-contrast characteristic, which registers the space-time spectrum of a target. This approach is useful for calculating the efficiency of incoherent integration in smoothing out low-contrast details of an image.

In Chapter 5 we discuss microwave imaging of a rotating target using 1D Fourier hologram theory, and we find the longitudinal and transverse scales of a reconstructed image, the target resolution and a criterion for the optimal processing of a microwave Fourier hologram. The resolution of a visual radar image is found to be consistent with the Abbe criterion for optical systems. One specific feature is that a space carrier frequency must be introduced to separate the two conjugate images and the image of the reference source. Here we have an analogy with synthetic aperture theory, except that we employ the concept of a complex microwave Fourier hologram; it is shown that digital reconstruction from a complex hologram produces no zeroth diffraction order. We formulate some requirements on methods and devices for synthesising this type of hologram. The method is easy and useful to implement in an anechoic chamber.

Chapter 6 focuses on tomographic processing of 2D and 3D microwave holograms of a rotating target in 3D viewing geometry with a non-equidistant arrangement of echo signal records in the registration of its aspect variation (for space objects). The suggested technique of image reconstruction is based on the processing of microwave holograms by coherent summation of partial holograms. These are classified into 1D, 2D and 2D radial, as well as narrowband and wideband, partial holograms.
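The role of the space carrier frequency noted for Chapter 5 can be illustrated with a short numerical sketch. It is ours, not the book's; the aperture length and the two spatial frequencies are arbitrary choices. A quadratic (intensity) 1D hologram of a point object against an offset reference yields a spectrum with a zeroth order flanked by two conjugate images, whereas a complex (quadrature) recording of the cross term yields a single image and no zeroth order.

```python
import numpy as np

n = 512
x = np.arange(n)

# Reference wave with a spatial carrier, and a point object's wave at a
# slightly different spatial frequency.
f_ref, f_obj = 60 / n, 75 / n
reference = np.exp(2j * np.pi * f_ref * x)
obj = 0.5 * np.exp(2j * np.pi * f_obj * x)

# Quadratic hologram: only the intensity |reference + object|^2 is recorded.
hologram = np.abs(reference + obj) ** 2

# Digital reconstruction by FFT: a zeroth order at bin 0 and two conjugate
# images at bins +/-(f_obj - f_ref) * n = +/-15.
spectrum = np.abs(np.fft.fft(hologram))
peaks = sorted(int(i) for i in np.argsort(spectrum)[-3:])
assert peaks == [0, 15, n - 15]

# Complex (quadrature) hologram: the complex cross term is recorded directly,
# so the reconstruction contains a single image and no zeroth order.
complex_spectrum = np.abs(np.fft.fft(np.conj(reference) * obj))
assert int(np.argmax(complex_spectrum)) == 15
```

If the carrier is chosen too close to the object's spatial frequency, the image terms of the quadratic hologram overlap the zeroth order, which is precisely why the side-reference-beam (carrier-frequency) arrangement is needed in that case.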
This technique is feasible in any mode of target motion. The method of hologram synthesis combined with coherent computerised tomography represents a new processing technique which accounts for the large variation of real hologram geometries in 3D viewing. No other processing procedure yet offers this advantage. Chapter 7 is concerned with methods of hologram processing for a target moving in a straight line and viewed by a ground radar processing partially coherent echo signals. The signal coherence is assumed to be perturbed by such factors as a turbulent medium, elastic vibrations of the target’s body, vibrations of parts of the engines, etc. We suggest an approach to modelling the track instabilities of an aerodynamic target and present estimates of the radar resolving power in a real cross-section region. Chapter 8 focuses on phase errors in radar imaging, evaluation of image quality and speckle noise. Finally, possible applications of radar imaging are discussed in Chapter 9. The emphasis is on spaceborne synthetic aperture radars for surveying the earth surface. Some novel and original developments by researchers and designers at the Nansen Environmental and Remote Sensing Centre in Bergen (Norway) and at the Nansen International Environmental and Remote Sensing Centre in St Petersburg (Russia) are described. They have much experience in processing holograms from various SARs: Almaz-1 (Russia), RADARSAT (Canada), ERS-1/2 and ENVISAT ASAR (the European Space Agency). Of special interest to the reader might be the information
about the use of microwave holography for classification of sea ice, navigation in the Arctic, global monitoring of ocean phenomena and characteristics used in surveying gas and oil resources. We illustrate the use of the holographic methods in a coherent ground radar for 2D imaging of the Russian spacecraft Progress and for the study of local radar responses of objects of complex geometry in an anechoic chamber, aimed at target recognition. To conclude, the methods and techniques described in this book are also applicable to many other research fields, including ultrasound and sonar, astronomy, geophysics, environmental sciences, resources surveys, non-destructive testing, aerospace defence and medical imaging, which have already started to utilise this rapidly developing technology. We hope that our book will also be used as an advanced textbook by postgraduate and graduate students in electrical engineering, physics and astronomy.
Acknowledgements

The idea to write a book about the application of holographic principles in radiolocation occurred to us at the end of the last century and was supported by the late Professor V.E. Dulevich. We are indebted to him for his encouragement and useful suggestions. We express our gratitude to the staff members of the Nansen Centres (Bergen and St Petersburg), who provided us with valuable information about the practical application of a side-looking radar. We should like to thank V.Y. Aleksandrov, L.P. Bobylev, D.B. Akimov, O.M. Johannessen and S. Sandven for their help in the preparation of these materials. Our deepest thanks also go to our colleagues E.F. Tolstov and A.S. Bogachev for their excellent description of the criteria for evaluation of radar images. This book is based on the results of our investigations carried out over a long period of time. We have collaborated with many specialists who helped to shape our conception of a coherent radar system. We thank them all, especially S.A. Popov, G.S. Kondratenkov, P.Ya. Ufimtzev, D.B. Kanareykin and Yu.A. Melnik, whose contribution was particularly valuable. We also thank our students V.R. Akhmetyanov, A.L. Ilyin and V.P. Likhachev for their assistance in the preparation of this book. We are also grateful to L.N. Smirnova, the translator of the book, for her immense help in producing the English version.
Chapter 1
Basic concepts of radar imaging
1.1 Optical definitions

At present, there is a certain class of microwave radars capable of imaging various types of extended targets. These are usually termed imaging radars. Before giving a definition of a ‘microwave image’, we should like to draw the reader’s attention to two circumstances. First, a microwave image is always viewed by a radar operator in the visible range, while the imaging is performed in the microwave range. Second, this book considers radar imaging based on a combination of holographic and tomographic approaches. Therefore, we should first recall the basic concepts necessary for the description of imaging by conventional photographic and holographic devices in the visible spectral range.

Let us construct the image of an object AB formed by a thin lens (Fig. 1.1) [19]. The lens thickness can be neglected, and one can assume that the principal planes coincide and pass through the lens centre (line M′N′). The other designations are the focal lengths HF = f and HF′ = f′, and the distances x and x′ separating the object and its image from the respective focal points F and F′. The straight line AA′ connecting the vertices of the object and the image passes through the centre of the lens H. The image of the point A is found by drawing an auxiliary ray AF, which intercepts the principal plane at the point N, and an auxiliary ray AM parallel to the optical axis; the refracted rays MA′ and NA′ intercept at the point A′. If we then drop the normal A′B′ from the point A′ to the optical axis, we obtain the optical image of the object AB. The similarity conditions yield the governing equations for an optical image, or Newton’s formulae:

y′/y = −f/x = −x′/f′, (1.1)

xx′ = ff′. (1.2)
Figure 1.1 The process of imaging by a thin lens
The relation between the elements of an image and the corresponding elements of an object is known as the linear, or transversal, lens magnification V defined as

V = y′/y. (1.3)

Since the lens is described by the equality f = −f′, Eq. (1.2) gives

xx′ = −f′². (1.4)

Newton’s formulae relate the distances of the object and the image to the respective focal points. However, it is sometimes more convenient to use their distances to the respective principal planes. Let us denote these distances as a1 and a2. Then, using Fig. 1.1 and Eq. (1.2), we can get

1/a2 − 1/a1 = 1/f′. (1.5)

The linear magnification can be expressed through a1 and a2 as

V = a2/a1. (1.6)
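As a quick numerical check of Eqs (1.1)–(1.6), the sketch below assumes the Cartesian sign convention implied by f = −f′ (signed distances along the optical axis, object to the left of the lens) and example values f′ = 0.1 m, a1 = −0.3 m; these numbers are chosen for illustration only.

```python
import math

# Thin lens in air, Cartesian sign convention: f = -f' (assumed example values)
f_prime = 0.100         # rear focal length, m
f = -f_prime            # front focal length
a1 = -0.300             # object distance from the principal plane (to the left)

# Gaussian lens equation (1.5): 1/a2 - 1/a1 = 1/f'
a2 = 1.0 / (1.0 / f_prime + 1.0 / a1)

# Newtonian distances measured from the focal points
x = a1 - f              # object to front focal point F
x_prime = a2 - f_prime  # image to rear focal point F'

# Newton's formulae (1.2) and (1.4): x x' = f f' = -f'^2
assert math.isclose(x * x_prime, f * f_prime)
assert math.isclose(x * x_prime, -f_prime**2)

# Magnification: Eqs (1.1), (1.3) and (1.6) must agree
V = a2 / a1
assert math.isclose(V, -f / x)
assert math.isclose(V, -x_prime / f_prime)
print(a2, V)            # image at 0.15 m, magnification -0.5
```

The negative magnification reflects the image inversion of a real image, consistent with the −y′ labelling in Fig. 1.1.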
Consider now the concept of focal depth in the image space [80]. When constructing the image produced by a lens, we assumed that the image and the object were in planes normal to the optical axis. Suppose now that the object AB, say, a bulb filament, is inclined to the optical axis, as shown in Fig. 1.2, while a photographic plate lies in the plane M1 normal to the optical axis of the objective lens. In order to
Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying on the optical axis; (b) image of point A; (c) image of point B and (d) images of points A and B in the planes M1, M2 and M3
find the image on the photoplate, we shall construct rays of light going away from individual points of the object. The light beams going from the object AB to the objective lens and from the objective lens to the image are conic, with the lens as the base and the points of the object and the image as the vertices. Imagine that the image of the point M of an object lying on the optical axis is on a photoplate in the plane M1 (Fig. 1.2(a) and (d)). Then the beam of rays converging onto this image will have its vertex on the plate. The object’s extreme points A and B will produce conic beams with their vertices in front of the photoplate (B′, Fig. 1.2(c)) and behind it
(A′, Fig. 1.2(b)). Thus, it is only the point M on the optical axis that will have its image as a bright point M′ (Fig. 1.2(a)). The end points A and B of the line will look like light circles A′ and B′, and the image of the line will look as shown in the plane M1 in Fig. 1.2(d). If the photoplate is shifted towards A′ (Fig. 1.2(b)) or B′ (Fig. 1.2(c)), we shall obtain the different images M2 or M3 (Fig. 1.2(d)). It follows from this representation that the image of a 3D object extended along the optical axis is focused differently at different points of the image space. In practice, however, images of such objects have a good contrast; hence, the objective lens possesses a considerable focal depth. This parameter is the longitudinal distance between two points of an object at which the sizes of their images do not exceed the eye’s unit resolution. Therefore, the classical recording on a photoplate produces a 2D image, which cannot be transformed into a 3D image. The third dimension may be perceived only through indirect cues such as perspective. Now let us describe real and virtual optical images and see how the image of a point object M can be constructed with rays. The rays go away from the object in all directions. If one of the rays encounters a lens along its path, its trajectory will change. If the rays deflected by the lens intercept when extended along the light propagation direction, a point image is formed at the interception and can be recorded on a screen or a photoplate. This kind of image is known as real. When the rays intercept only on extension in the direction opposite to the light propagation, both the interception point and the image are said to be virtual. The images in Fig. 1.2 are real because they are formed by rays intercepting at their extension along the light propagation. An optical image may possess orthoscopic or pseudoscopic properties.
Suppose a 2D object has a surface relief; its image is orthoscopic if the relief is not reversed longitudinally: the convex parts of the object look convex on the image. Using the above approach, we can show that the image formed by a thin lens is orthoscopic. If an image has a reversed relief, it is termed pseudoscopic; such images are produced by holographic cameras. Thus, images produced by classical methods have the following typical characteristics.
• Imaging includes only the recording of the incident light intensity, while the wave phase remains unrecorded. For this reason, this sort of image cannot be transformed into a 3D image.
• An image has a limited focal depth.
• An image produced by a thin lens is real and orthoscopic.
1.2 Holographic concepts

Holography is a lens-free way of recording images of 3D objects in 2D recording media [29]. This process includes two stages. The first stage is called hologram recording, during which the interference between the diffraction field from an object
Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object’s wave
and a reference field is recorded on a photoplate or another photosensitive material. A necessary condition is that both fields be coherent. In their original experiments, the pioneers of holography used mercury sources that were later replaced by lasers. The interference pattern registered on a photoplate was called a hologram. The second stage is that of image reconstruction, which includes the illumination of the processed photoplate with a wave identical to the reference wave. Suppose, for simplicity, that the reference wave is plane (Fig. 1.3) and propagates at an angle θ to the z-axis (x, y, z are coordinates in the hologram plane). The object’s wave is described by a complex function u(x, y) = a(x, y) exp(−jϕ(x, y)) and the reference wave by the function uo(x, y) = ao exp(−jωo x), where ωo = k sin θ, θ is the angle of incidence of the wave onto a photoplate located in the xOy plane, k = 2π/λ1 is the wave number, and λ1 is the wavelength of the coherent light source. The intensity of the interference pattern on the hologram is

I(x, y) = |uo(x, y) + u(x, y)|²
= ao² + a²(x, y) + ao a(x, y){exp[j(ϕ(x, y) − ωo x)] + exp[−j(ϕ(x, y) − ωo x)]}
= ao² + a²(x, y) + 2ao a(x, y) cos[ϕ(x, y) − ωo x]. (1.7)
In addition to the constant term ao² + a²(x, y), the hologram function in Eq. (1.7) contains a harmonic term 2ao a(x, y) cos[ϕ(x, y) − ωo x] with the carrier period

T = 2π/ωo = λ1/sin θ. (1.8)
The quantity ωo which defines this period is known as the space carrier frequency (SCF) of a hologram. For example, for a He–Ne laser beam (λ1 = 0.6328 µm) incident onto a hologram at an angle of 30°, the SCF corresponds to about 790 lines/mm. The minimum period of the SCF is reached at θ = π/2 and is equal to the wavelength λ1. The a/ao ratio is called the hologram modulation index. It follows from Eq. (1.7) that the amplitude and phase distributions of the object’s wave appear to be coded by the amplitude and phase modulations of the SCF, respectively. As a result, a hologram turns out to be the carrier of space frequency which contains spatial information, whereas a microwave is the carrier of angular frequency and contains temporal information. Phase-only holograms record only the phase variation rather than the amplitude. The first stage of the holographic process is terminated by recording the quantity I(x, y) on a photoplate. The transmittance of an exposed and processed photoplate is

Tn(x, y) = I^(−γ/2), (1.9)
where γ is the plate contrast coefficient. It is reasonable to take γ = −2, because the hologram then corresponds to a sine diffraction grating, which does not form diffraction orders higher than the first. So we have t(x, y) = Tn(x, y) = I(x, y).
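The recording stage is easy to illustrate numerically. The sketch below (all parameter values and waveforms are assumed for illustration) samples a plane reference wave and a slowly varying object wave along one hologram coordinate, checks the intensity expansion of Eq. (1.7) and evaluates the carrier period of Eq. (1.8) for a He–Ne wavelength at 30° incidence:

```python
import numpy as np

# 1D cut across the hologram plane; assumed illustrative parameters
x = np.linspace(0.0, 1.0e-3, 2048)                 # 1 mm cut, m
wavelength = 0.6328e-6                             # He-Ne wavelength, m
theta = np.pi / 6                                  # reference-wave incidence angle
omega_o = 2 * np.pi / wavelength * np.sin(theta)   # SCF, rad/m

a_o = 1.0                                          # reference amplitude
a = 0.3 * (1 + 0.5 * np.cos(2 * np.pi * x / 1e-3)) # slowly varying object amplitude
phi = 2 * np.pi * (x / 1e-3) ** 2                  # slowly varying object phase

u_o = a_o * np.exp(-1j * omega_o * x)              # reference wave
u = a * np.exp(-1j * phi)                          # object wave

# Recorded intensity and its expansion, Eq. (1.7)
I = np.abs(u_o + u) ** 2
I_expanded = a_o**2 + a**2 + 2 * a_o * a * np.cos(phi - omega_o * x)
assert np.allclose(I, I_expanded)

# Carrier period, Eq. (1.8): T = 2*pi/omega_o = wavelength / sin(theta)
T = 2 * np.pi / omega_o
print(1.0 / T / 1e3, 'lines/mm')                   # about 790 lines/mm at 30 degrees
```

The spatial frequency 1/T = sin θ/λ1 evaluates to roughly 790 lines/mm for these values, which a photographic emulsion must be able to resolve.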
During the reconstruction, a hologram is illuminated by the same reference wave as was used at the recording stage. The reconstruction occurs due to light diffraction on the hologram (Fig. 1.4). Immediately behind the hologram, a wave field is induced with the following components:

U(x, y) = exp(−jωo x) t(x, y) = exp(−jωo x) I(x, y)
= exp(−jωo x)[ao² + a²(x, y)] + ao a(x, y) exp[−jϕ(x, y)] + ao a(x, y) exp[jϕ(x, y)] exp(−2jωo x). (1.10)

With this, the second stage of the holographic process is terminated. The three terms in Eq. (1.10) describe waves that form three different images (Fig. 1.4). The first wave preserves the direction of the reconstructing (plane) wave and represents the zero diffraction order, or light background. The second wave, ao a(x, y) exp[−jϕ(x, y)], reproduces the object’s wave to within the amplitude factor ao, providing a virtual image of the object observed behind the hologram. At an angle −2θ relative to the normal to the hologram, a complex conjugate wave propagates, producing a real image in front of the hologram. It can be shown (Chapter 3) that the virtual image is orthoscopic and the real image is pseudoscopic. Of importance is the fact that the virtual image is 3D.
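The three-term decomposition in Eq. (1.10) can be checked numerically; in this sketch (all waveforms and parameter values are assumed for illustration) the recorded intensity is multiplied by the reconstruction wave and compared against the zero-order, virtual-image and conjugate (real-image) components:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2048)                  # normalised hologram coordinate
omega_o = 120.0                                  # assumed space carrier frequency
a_o = 1.0                                        # reference amplitude
a = 0.4 * (1 + 0.3 * np.sin(2 * np.pi * x))      # object amplitude
phi = 3.0 * np.pi * x**2                         # object phase

# Hologram transmittance t(x) = I(x) for gamma = -2, Eq. (1.7)
I = a_o**2 + a**2 + 2 * a_o * a * np.cos(phi - omega_o * x)

# Illuminate with the reference wave, Eq. (1.10)
U = np.exp(-1j * omega_o * x) * I

zero_order = np.exp(-1j * omega_o * x) * (a_o**2 + a**2)  # light background
virtual = a_o * a * np.exp(-1j * phi)                     # object wave (virtual image)
conjugate = a_o * a * np.exp(1j * phi) * np.exp(-2j * omega_o * x)  # real image

assert np.allclose(U, zero_order + virtual + conjugate)
```

The exp(−2jωo x) factor of the conjugate component is what tilts the real image by −2θ away from the hologram normal, separating it from the virtual image and the zero order.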
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram
Consider the basic properties of a holographic image, in particular, the hologram information structure. Suppose the object to be imaged is a discrete ensemble of N coherently radiating points with the coordinates rq. The object’s field on the hologram aperture can be described by the sum [108]

Ur = Σ_{q=1}^{N} aq exp(−jkrq) = Σ_{q=1}^{N} αq (1.11)

and the respective intensity distribution by the expression

|Ur|² = Σ_{q=1}^{N} Σ_{p=1}^{N} αq αp*, (1.12)

where the asterisk denotes a complex conjugate quantity. The reference beam on the hologram aperture will be given as

Uo = ao exp(−jkro) = αo, (1.13)

where ro are the reference beam coordinates. Then the intensity of the interference pattern can be written as

|Ur + Uo|² = ao² + αo* Σ_{q=1}^{N} αq + αo Σ_{p=1}^{N} αp* + Σ_{q=1}^{N} Σ_{p=1}^{N} αq* αp. (1.14)
The last term in Eq. (1.14) corresponds to Eq. (1.12) but usually it is not analysed completely. In holographic theory (Eq. (1.7)), one often restricts one’s consideration to the second and third terms. Commonly, the information about the object is assumed to be distributed uniformly across the hologram aperture; in reality, however, a hologram is synthesised from a set of microholograms. So the aperture is split into
a multiplicity of microapertures having various information significance. Such partial microholograms may correspond to the object’s field with varying polarisation. The hologram structure has three space-frequency levels. The first level is associated with the diffraction characteristics of individual radiating scatterers, more exactly, with their scattering patterns and the distance to the hologram plane. The second level is associated with the interference of overlapping fields of different radiating scatterers, a factor described by the last term in Eq. (1.14). Both levels determine the structure of the object’s field. The third level is due to the interference between the object’s speckle field and the reference beam field; this is a holographic structure possessing the highest space frequencies. A scatterer reflects waves in all directions; therefore, every point of the hologram receives information about the object as a whole (the third information level). That is why we can easily explain the experiment with a hologram broken into pieces: any piece can reconstruct the whole image because it contains information from all the scatterers. If a piece is small, the image quality will be poor, since some details are lost because of the poorer resolution. The result is a characteristic speckle pattern due to the greater effect of the second-order elements on the hologram. Thus, holographic images have the following specific features.
• The holographic method of image recording registers the field phase in addition to the amplitude.
• Reconstructed images are 3D.
• Holographic images possess a considerable focal depth.
• Holographic images may be orthoscopic or pseudoscopic.
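The expansion in Eq. (1.14) can be verified directly for a small ensemble of point scatterers; the sketch below uses random complex contributions αq (all values are assumed for illustration, with a scalar field model):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 16

# Complex point-scatterer contributions alpha_q = a_q exp(-j k r_q), Eq. (1.11)
a_q = rng.random(N)
k_r_q = 2 * np.pi * rng.random(N)
alpha = a_q * np.exp(-1j * k_r_q)
U_r = alpha.sum()

# Reference beam, Eq. (1.13); unit amplitude, arbitrary assumed phase
a_o = 1.0
alpha_o = a_o * np.exp(-1j * 0.7)
U_o = alpha_o

# Eq. (1.12): |U_r|^2 as a double sum over scatterer pairs
double_sum = sum(alpha[q] * np.conj(alpha[p]) for q in range(N) for p in range(N))
assert abs(double_sum.imag) < 1e-9
assert np.isclose(abs(U_r) ** 2, double_sum.real)

# Eq. (1.14): four-term expansion of the interference intensity
expansion = (a_o**2
             + np.conj(alpha_o) * alpha.sum()
             + alpha_o * np.conj(alpha).sum()
             + sum(np.conj(alpha[q]) * alpha[p] for q in range(N) for p in range(N)))
assert np.isclose(abs(U_r + U_o) ** 2, expansion.real)
```

The second and third terms carry the usual holographic cross products, while the last double sum is the self-interference of the object field that produces the second space-frequency level discussed above.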
1.3 The principles of computerised tomography

Computerised tomography is generally defined as a method of reconstructing the true image (density distribution) of an object, using special computational procedures applied to data registered when the object is probed [15]. Generally, probing is an arbitrary physical phenomenon (radiation, wave propagation, etc.) used for the study of an object’s structure, and density means the distribution of an arbitrary physical characteristic of the object to be reconstructed. The true image is an image in which the reconstructed density at any point in space is ideally independent of the true densities beyond the point’s vicinity, or beyond the minimum object volume resolvable by the measuring system. Since a probing wave interacts with the object and this interaction is ‘integrated’ along its passage through the object, it is clear why tomography is said to be a method of image reconstruction from integrated data such as beam sums, projections and so on. Therefore, computerised tomography is a way of producing 2D images of slices of 3D objects by means of digital processing of a multiplicity of 1D functions (projections) obtained at various viewing angles. There are three important aspects of this technique. First, there is the problem of the uniqueness of the reconstructed image, that is, the degree to which the object is describable by the available data. Second, it is necessary to know whether the reconstruction process is resistant to errors and noise in the initial data. Finally, one must design an algorithm for image reconstruction.
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15): Γm – circumference for measurements; Γc – circumference with the centre at point O enveloping a cross section; P – arbitrary point in the circle with the polar coordinates ρ and Φ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ, δ–δ – parallel elliptic arcs defining the resolving power of the transmitter–receiver pairs (CC′ and DD′)
The principle of computerised tomography can be conveniently illustrated with a 2D case [15]. We shall first consider tomographic procedures for the reconstruction of density across a body’s slice. Let us introduce a circumference Γc (Fig. 1.5) enveloping a body, more exactly, the cross section of a real 3D object. The inner Γc region can be termed the image space because it includes the object to be imaged. The medium outside this region is assumed to be free, which means that a probing wave interacts only with the object. If the probing sources are located outside the Γc region, the method is called remote-probing computerised tomography, as opposed to remote-sensing tomography, when the sources are located within this region. The latter is, however, of no interest for radar imaging. The probing effects are commonly measured outside the Γc region. It is clear from information theory that measurements made along a certain circumference Γm (Fig. 1.5) embracing Γc will be quite sufficient. Suppose a probing radiation transmitter is located on the circumference Γm. To prescribe the density at an arbitrary point P in the Γc region, we introduce the polar-coordinate function g = g(ρ, Φ) and express the total probing effect E = E(ρ, Φ, t) as a sum of the incident Ei = Ei(ρ, Φ, t) and secondary ES = ES(ρ, Φ, t) effects: E = Ei + ES. The problem then reduces to the reconstruction of the density distribution across the Γc region. Obviously, when a target is probed by electromagnetic radiation,
the quantity Ei is the part of E directly related to the initial wave front, which is the first to arrive at any point in Γc, while ES is composed of all effects scattered, often repeatedly, by all the points in the Γc region. The secondary probing effect must be given as

ES(ϑ, t) = {g(ρ, Φ); E(ρ, Φ, t); w}, (1.15)

where ES(ϑ, t) is the amplitude value of ES(ρ, Φ, t) on Γm, {· · ·} is an integral operator determined in the Γc region, and w is the distance between the point P and the receiver B. It is easy to see that a tomographic problem is a classical inverse source problem, since the function g(ρ, Φ) is to be reconstructed from the known values of ES(ϑ, t) and the source on Γm. Note that here we are faced with the problem of dimensionality, because the ES(ϑ, t) measurements are 2D, while the resulting effect E(ρ, Φ, t) is 3D. Because of this discrepancy, inversion algorithms become numerically unstable and sensitive to any error in the initial data. The solution to the inverse source problem is always approximate. An approximation most important to tomography involves geometrical optics, allowing the representation of probing effects as rays. This provides an optimal formulation of the inverse problem related directly to conventional computerised tomography, which reconstructs images from linear trajectories. This can be illustrated with Fig. 1.5. The signal recorded at the point B can be represented as a function of the variables ϑ and Φ to show that this signal varies with the position of the point B on Γm and with the radiation incidence:

S(ϑ, Φ) = ∫_{l(A)}^{l(B)} g(ρ, Φ) dl, (1.16)

where l is a coordinate going along the ray, whose initial and final points are denoted as l(A) and l(B), respectively. There are no dimensionality problems with this expression, because the measured quantity S(ϑ, Φ) and the reconstructed quantity g(ρ, Φ) are 2D. So if S(ϑ, Φ) is prescribed for a number of ϑ and Φ pairs sufficient for the description of g(ρ, Φ) with the desired accuracy, the true density distribution may be reconstructed such that the computational algorithm is stable. Equation (1.16) is a governing equation in conventional tomography. At present, there are various reconstruction techniques allowing the solution of this integral equation [88]. No doubt, it would be desirable to integrate the true image in the Γc region (in the image space). For practical considerations, however, the data may be integrated in a different space, whose properties depend on how the experimental data are related to the density function g(ρ, Φ). The quantity to be measured is often a Fourier image of the density distribution, so the data recording is said to be performed in the Fourier space. An example of this type of recording is that in a radio telescope with a synthetic aperture [118]. Although the data integration in the image space and the Fourier space is identical theoretically, the practical algorithms for image reconstruction differ
Figure 1.6 A scheme of an X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line
essentially. Many of the available algorithms for the reconstruction of an unknown 2D function g are based on the projection slice theorem [57,95]. It can be formulated with reference to Fig. 1.6 by introducing, in the image space, two rectangular coordinate systems xOy and uOv, rotated by the angle ϑ relative to each other. The projection of the g function at the angle ϑ is described as

Pϑ(u) = ∫_{−∞}^{∞} g(u cos ϑ − v sin ϑ, u sin ϑ + v cos ϑ) dv, (1.17)

where Pϑ(u) calculated at constant u = uo is a 1D integral along the respective straight line parallel to the v-axis, so that the Pϑ(u) function describes a set of such integrals for all ϑ values. The projection theorem states [57] that the 1D Fourier image of a projection made at an angle ϑ represents a ‘slice’ of the 2D Fourier transform of the g(x, y) function at the angle ϑ to the X-axis: Pϑ(U) = G(U cos ϑ, U sin ϑ), with

Pϑ(U) = ∫_{−∞}^{∞} Pϑ(u) e^{−juU} du,

G(X, Y) = ∫∫_{−∞}^{∞} g(x, y) e^{−j(xX + yY)} dx dy. (1.18)
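The projection slice theorem is easy to verify on a discrete grid. The sketch below (a random test image; axis-aligned angles only, so the discrete Fourier slices fall exactly on grid rows and no interpolation is required) checks the discrete analogue of Eqs (1.17) and (1.18) at ϑ = 0° and ϑ = 90°:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.random((64, 64))        # test density g[x, y] on a square grid

G = np.fft.fft2(g)              # 2D DFT: G[X, Y]

# Projection at angle 0: integrate g along the v (here y) axis, Eq. (1.17)
P0 = g.sum(axis=1)
# Its 1D DFT equals the slice G[X, 0] of the 2D DFT, Eq. (1.18)
assert np.allclose(np.fft.fft(P0), G[:, 0])

# Projection at 90 degrees: integrate along the x axis
P90 = g.sum(axis=0)
assert np.allclose(np.fft.fft(P90), G[0, :])
```

For intermediate angles the slice falls between grid points of G(X, Y), which is exactly why the polar-to-rectangular interpolation step discussed below is needed before an inverse 2D transform can be applied.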
Figure 1.7 The geometrical arrangement of the G(X, Y) pixels in the Fourier region on a polar grid. The parameters ϑmax and ϑmin are the variation range of the projection angles. The shaded region is the SAR recording area
In classical X-ray tomography, a body is probed by a collimated radiation beam (Fig. 1.6), while the Pϑ(u) function is measured by a set of sensors located along a straight line normal to the radiation direction. The set of Pϑi(u) projections for various ϑ angles is formed by rotating the object or the power transmitters and receivers. One then usually uses a convolution back-projection (CBP) algorithm, to be discussed later. An alternative is to use a Fourier transform. The latter approach is convenient when data recording is made in the Fourier space and the pixel values of the Pϑi(U) Fourier images are known. According to the projection theorem, these pixels also represent the pixels of the G(X, Y) function along a line at the angle ϑ to the X-axis. Therefore, the Pϑi(U) values obtained for a set of ϑ angles prescribe the G(X, Y) pixels on a polar grid (Fig. 1.7). By using an interpolation algorithm, one can go over to the G(X, Y) pixels on a rectangular grid and use an inverse Fourier transform to reconstruct the density g(x, y). There have been attempts to compute g(x, y) directly from the G(X, Y) pixels on a polar grid to avoid using an interpolation algorithm [100]. Let us now discuss approaches common to computerised tomography and radar imaging. In the latter, the target position is determined from the time delay of the radar echo and the antenna orientation. The range resolution is usually much higher than the angular resolution. Suppose a wide beam transmitter and a receiver of electromagnetic radiation are located at the points C and C′, respectively (Fig. 1.5). The geometrical positions of the scatterers whose echo signals arrive at the point C′ simultaneously form an ellipse with the foci at C and C′. More exactly, this is a band between two ‘concentric’ ellipses, its width characterising the resolution limit of the system. Part of this band is denoted as γ–γ.
In the monostatic case (when the points C and C′ coincide), the ellipses degenerate into circles. The total scattering
intensity is proportional to the band-averaged density of scatterers. Since the distance between the ellipses is equal to the resolution, it is sufficient to integrate the density along an average ellipse. The signal recorded at the point C′ can be written as

S(C, C′; γ) = ∫_γ g(ρ, Φ) dl, (1.19)

where γ denotes the average ellipse and dl is an element of the ellipse length. As in the case described by Eq. (1.16), there is no problem of dimensionality. The scatterers located in the vicinity of a certain point Q can be identified by changing the positions of the transmitter and the receiver. Figure 1.5 shows one of these positions, denoted as D and D′, and the respective band δ–δ. The true density distribution can be reconstructed from a sufficiently large number of measurements made at different points. It is clear that the cases described by Eqs (1.16) and (1.19) differ only in the integration direction. In X-ray tomography, the function g(x, y) describes an unknown distribution of the X-ray attenuation coefficient across a transversal slice (Fig. 1.6) to be measured. The Pθ(u) projection values are obtained in a multi-beam system represented as an array of X-ray transmitters and receivers located at the angle θ to the x-axis (Fig. 1.6). The intensity of the received radiation decreases exponentially as the beams pass along the line of g(x, y) integration. Therefore, the projection Pθ(u) of this function is

Pθ(u) = −log[Iθ(u)/Io], (1.20)

where Io is the X-ray source intensity and Iθ(u) is the intensity registered by the receivers. The set of Pθi(u) projections can be obtained by rotating the object or the array of transmitters and receivers by discrete angles θ = θi. The distribution of the attenuation coefficient g(x, y) is usually reconstructed from the measured Pθi(u) projections using the CBP method [127]. It enables one to estimate the spatial distribution of the inner physical parameters of the target. It will be shown below that a radar can register the Pθ(v) signal. For a given angle θ, its intensity is proportional to the scattering density gθ(u, v) integrated along the u-coordinate, that is, it is a tomographic projection onto the v-axis. So the Pθ(v) function is a 1D function of the variable v with the parameter θ defining the projection orientation. One can see that there is an essential difference between X-ray tomography and synthetic aperture radar (SAR) imaging. In the latter, the linear integral used to obtain a projection is taken in the direction normal to the microwave propagation, whereas in X-ray tomography it is taken along the X-ray propagation direction. It will be shown in Chapter 6 that the Doppler and holographic methods of SAR signal processing can provide such projections. Another important specificity is that these projections include a phase factor describing the time of the double path of the signal between a target and the radar antenna. Thus, a projection produced by SAR is a coherent tomogram that carries much more information about a target. This is especially evident when one uses holographic projections (see Chapter 6). On the
other hand, tomographic processing of projections is capable of reconstructing the arrangement of scatterers on a target, or, in fact, its shape.
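Equation (1.20) can be illustrated with a one-angle numerical sketch (an assumed discretised attenuation map, parallel beams and the natural logarithm): simulating Beer–Lambert attenuation and then taking −log(Iθ/Io) recovers the line integrals of the attenuation coefficient exactly:

```python
import numpy as np

# Assumed attenuation map g[x, y] on a unit-spaced grid (illustrative values)
g = np.zeros((32, 32))
g[10:20, 12:18] = 0.05          # a block of absorbing material

I_o = 1.0                        # X-ray source intensity

# Parallel beams along the y axis (theta = 0): exponential attenuation,
# I_theta(u) = I_o * exp(-integral of g along the beam)
line_integrals = g.sum(axis=1)   # one line integral per beam position u
I_theta = I_o * np.exp(-line_integrals)

# Eq. (1.20): the projection recovered from the measured intensities
P_theta = -np.log(I_theta / I_o)
assert np.allclose(P_theta, line_integrals)
```

Repeating this for a set of discrete angles θi yields the projection set from which CBP or Fourier-domain methods reconstruct g(x, y).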
1.4 The principles of microwave imaging

The past decade has witnessed an ever increasing interest in radars with a very high resolving power. For example, the ERS-1 and ERS-2 radars (side-looking synthetic aperture radars (SARs) of the European Space Agency [62]) provide microwave imagery of the earth surface with a resolution of 25 m × 25 m in the azimuth–range coordinates. An earth area of 100 km × 100 km (100 km is the radar swath width) is represented by 1.6 × 10^7 pixels. Modern ground radars have large antenna arrays with an aperture of about 10^4–10^5 λ1, where λ1 is the radar wavelength. They provide an angular resolution of 10^−4–10^−5 rad [129], so the radar vision field can be subdivided into 10^4–10^5 beams. A radar with a linear and angular resolution much higher than that of TV equipment (7 × 10^5–10^6 pixels) is capable of producing microwave images of extended targets (land areas and water surfaces) and complex objects (aircraft, spacecraft). So it is reasonable to give a definition of a microwave image. At present, there is no generally accepted definition, so we suggest the following formulation. A microwave image is an optical image whose structure reproduces, on a definite scale, the spatial arrangement of scatterers (‘radiant’ points) on a target illuminated by microwave beams. In addition to the arrangement, scatterers are characterised by a certain radiance. It should be emphasised that microwave beams can produce 3D images, whereas conventional optical systems in the visible range give only 2D images. Available methods of microwave imaging can be grouped into three classes:
• direct methods using real apertures;
• methods employing synthetic apertures;
• methods combining real and synthetic apertures.
Imaging by direct methods can, in turn, be performed by real antennas or antenna arrays. Real antennas were used in the early years of radar history.
An earth area was viewed by means of circular scanning or sector rocking of the antenna beam in the azimuthal plane. Such systems were termed panoramic or sector circular radars. Modern panoramic radars use 50–100λ1 apertures and their resolution is low. Since the application of airborne panoramic antenna arrays is a hard task, the only way to increase the resolution is to use the millimetre wavelength range. One is faced with a similar problem when dealing with a side-looking real antenna mounted along the aircraft fuselage. Such antennas may be as long as 10–15 m; at the wavelength λ1 = 3 cm, their angular resolution is less than 10 min of arc and the linear resolution on the earth surface is a few dozen metres, which is too low for some applications. For this type of antenna, the problem of increasing the aperture size was solved in a radical way – by replacing a real aperture with a synthesised aperture. Consider the potentialities of antenna arrays for aerial survey of the earth surface and for ground imaging of targets flying at low altitudes. Suppose we are to design an
Basic concepts of radar imaging
antenna array for aircraft imaging. The target has a characteristic size D and is illuminated by a continuous radar pulse. Then, according to the sampling theorem [103], the echo signal function in the aperture receiver can be described by a series of samples taken at the intervals

δL = Rλ1/D,  (1.21)

where R is the distance to the target. The aperture size necessary for getting a desired resolution Δ on the target can be defined in terms of Abbe's formula [131]:

Δ = λ1/(2 sin(α/2)) ≈ λ1R/L,  (1.22)

where α is the aperture angle and L is its length. The total number of receivers on an aperture of length L is

N = L/δL = DL/(λ1R).  (1.23)

With Eq. (1.22), we get

N = D/Δ.  (1.24)

Let us illustrate this with a particular problem. Suppose we have λ1 = 10 cm, R = 600 km, D = 20 m and Δ = 1 m. Then we get L = 60 km, δL = 3 km and N = 20. A planar aperture of L × L in size must contain n = N² = 400 individual receivers. This example shows that the applicability of direct imaging using large antenna arrays is quite limited. Nevertheless, one of these techniques, employing a radio camera designed by B. D. Steinberg, is of great interest [129]. The radio camera is based on a pulse radar with a real large antenna array and an adaptive beamforming (AB) algorithm. The principal task is to obtain a high resolution with a large aperture, avoiding severe restrictions on the arrangement of the antenna elements. The operation of a self-phasing algorithm requires the use of an additional external phase-synchronising radiation source with known parameters, which could generate a reference field and would be located near the target. The radio camera provides an angular resolution of 10⁻⁴–10⁻⁵ rad [129], and the image quality is close to that of optical systems. There is one limitation – the radio camera has a narrow vision field. But still, it may find a wide application in radar imaging of the earth surface, in surveying aircraft traffic, etc. To summarise, direct real aperture imaging of remote targets at distances of hundreds and thousands of kilometres is practically impossible. We turn now to methods employing a synthesised aperture. The idea of aperture synthesis, born during the designing of a side-looking aperture radar [32,74,86], was to replace a real antenna array with an equivalent synthetic antenna (Fig. 1.8). An antenna with a small aperture is to receive consecutive echo signals and make their coherent summation at various moments of time. For a coherent summation to be made, the radar must also be coherent, namely, it should possess a high transmitter
Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array
frequency stability and have a reference voltage of stable frequency against which echo signals are compared. We shall see below that the reference voltage is similar to a reference wave in holography, with the only difference that the 'wave' is created in the receiver by the voltage of a coherent generator. Under the conditions described above, the echo signals received by the real antenna are saved in a memory unit as their amplitudes and phases. When the aircraft flies over an earth area ΔX = Ls, the signals are summed up at the moment Ts = ΔX/V (the final moment of synthesis), where V is the track velocity of the aircraft. As
a result of coherent signal processing, which is similar to the processing by a real antenna (Fig. 1.8(a)), a synthetic aperture pattern θs similar to a real aperture pattern θr is formed. Thus, the real aperture length is replaced by the synthesised aperture length ΔX (ΔX = Ls). The width of this aperture pattern is

θs = λ1/(2ΔX).  (1.25)

Owing to its large size, a synthetic aperture can provide very narrow patterns, so the track range resolution

δx = θs R,  (1.26)
where R is the slant range to the target, may be very high even at large distances. To illustrate, if the synthetic aperture length is x = 400 m and λ1 = 3 cm, the resolution may be as high as δx = 6 m at R = 160 km. Similar principles apply to a stationary ground radar and a moving target. If one needs to obtain a high angular resolution, one can make use of the so-called inverse aperture synthesis. We shall show in Chapter 2 that the resolution on the target is then independent of the distance to it but is determined only by the radar wavelength and the synthesis angle. As a result, one can obtain a very high angular resolution and reconstruct the arrangement of the scatterers into a microwave image. Thus, current approaches to microwave imaging, based on direct and inverse synthesis of the aperture, provide 2D images which are structurally similar to optical images. Besides, there are methods combining both approaches. They apply a real, say, phased aperture and a synthetic aperture along the aircraft track. These techniques also produce images similar in structure to optical images [2]; they will be discussed in detail in Chapter 3. However, there are certain differences between the two types of 2D images. We summarise the most important ones below. 1. The wavelengths in the microwave range are 103 –106 times longer than in the visible range, and this determines an essential difference in the scattering and reflection by natural and man-made targets. In the visible range, the scattering by man-made targets is basically diffusive, and it can be observed when the surface roughness is of the order of a wavelength. This fact allows a target to be considered as a continuous body. In the microwave range, the picture is quite different because there is no diffusion. The signals are reflected by scattering centres, corner structures and specular surfaces. 
For this reason, a microwave image of a man-made target is discrete and is made up of ‘dark’ pixels and those produced by the strong reflectors we mentioned above. A good example is the microwave image of an aircraft that was obtained in Reference 130. Reflection by natural targets produces similar images. However, the reflection spectrum of the earth surface contains an essential diffusion component. 2. For these reasons, the dynamic range of microwave images varies between 50 and 90 dB, while it rarely exceeds 20 dB in optical images, reaching the value of 30 dB in bright sunlight.
3. The quality of an image does not depend on the natural luminosity of a target and depends only slightly on weather conditions.
4. Image quality strongly depends on the geometry of the earth region to be imaged, especially its slant angles, roughness and bulk features in the surface layer. So microwave imaging is used for all-weather mapping, soil classification, detection of boundaries of background surfaces, etc. There is no unified optimal angle (in the vertical plane) for viewing geological structures, and the best values should be adjusted to the local topography. For mountainous and undulating reliefs, for example, a small radiation incidence relative to the normal is preferable, while the imaging of plains requires the use of large incidence angles, which increase the sensitivity to surface roughness. For this reason, images produced by airborne SAR may be radiometrically inadequate (speckle noise) owing to the large variation in incidence across a swath because of a wide aperture pattern. Space SARs possess an approximately constant radiation incidence across a swath, so there is no such speckle on the image.
5. The density of blackened regions on a negative depends significantly on the dielectric behaviour of the surface being imaged, in particular, on the presence of moisture, both frozen and liquid, in the soil.
6. The microwave range gives the opportunity to probe subsurface areas. For example, the microwave images of the Sahara desert obtained by the SIR-A SAR showed the presence of dried river beds buried under the sands, which were invisible on the desert surface. This opens up new opportunities for archaeological surveys. It has been demonstrated experimentally that the penetration depth of the probing radiation in dry sand may be as large as 5 m. Besides, a sand stratum possessing a low attenuation is found to enhance images of subsurface roughness due to refraction at the air–soil interface.
This effect is particularly strong for horizontal polarisation at large incidence angles.
7. The specific propagation of the longer microwave wavelengths provides quality imagery of lands covered with vegetation.
8. The interaction of subwater phenomena, such as internal waves and subsurface currents, with the ocean surface allows imaging of the bottom topography and various subwater effects.
9. The use of moving target selection allows one to make precise measurements of the target's radial velocity relative to the SAR.
10. An important factor in imagery is the proper choice of radiation polarisation.
11. Quite specific is the imaging of urban areas and other anthropogenic targets. This is due to a large number of objects with a high dielectric permittivity (e.g. metallic objects), surface elements possessing specular reflection, resonance reflectors and objects with horizontal and vertical planes that form corner reflectors. The result of the latter is the following effect: streets parallel to the SAR carrier track produce white lines on the image (the positive), while streets normal to the track produce dark lines. Moreover, the presence or absence on the image of some linear elements of the radar scene, and the average density of blackening of the whole image, depend on the azimuthal angle, that is, the angle made by the SAR
beam in the plane tangential to the earth surface. This is a serious obstacle to the analysis of images of urban areas. 12. An image contains speckle noise associated with the coherence of the imaging process. To conclude, a microwave radar image may be 3D if it is recorded by holographic or tomographic techniques (Chapters 5 and 6, respectively).
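The numerical examples given in Section 1.4 can be checked with a short script (plain Python; the symbols follow Eqs (1.21)–(1.26), with the factor of two in Eq. (1.25) reflecting the two-way path):

```python
# Direct antenna-array sizing (Eqs 1.21-1.24): lambda1 = 10 cm, R = 600 km,
# target size D = 20 m, desired resolution Delta = 1 m
lam, R, D, Delta = 0.10, 600e3, 20.0, 1.0
L = lam * R / Delta          # required aperture length, Eq. (1.22)
dL = R * lam / D             # receiver spacing from the sampling theorem, Eq. (1.21)
N = L / dL                   # receivers per dimension, Eqs (1.23)-(1.24): N = D/Delta
print(L, dL, N, N**2)        # 60 km, 3 km, 20 receivers, 400 for a planar array

# Synthetic aperture (Eqs 1.25-1.26): dX = 400 m, lambda1 = 3 cm, R = 160 km
lam_s, dX, R_s = 0.03, 400.0, 160e3
theta_s = lam_s / (2 * dX)   # synthetic aperture beamwidth, Eq. (1.25)
dx = theta_s * R_s           # track-range resolution, Eq. (1.26)
print(dx)                    # 6.0 m
```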
Chapter 2
Methods of radar imaging
2.1 Target models

All radar targets can be classified into point and complex targets [138]. A point target is a convenient model object commonly used in radar science and practice to solve certain types of problems. It is defined as a target located at distance R from a radar at the viewing point '0', which scatters the incident radar radiation isotropically. For such a target, the equiphase surface is a sphere with the centre at '0'. Suppose a radar generates a wave described as

f(t) = a(t) exp j[ωo t + Φ(t)],

where f0 = ωo/2π is the carrier frequency, while a(t) and Φ(t) are the amplitude and phase modulation functions superimposed on the carrier. A point target located at distance R creates an echo signal

g(t) = σ f(t − 2R/c) = σ a(t − 2R/c) exp j[ωo(t − 2R/c) + Φ(t − 2R/c)],  (2.1)

where σ is a complex factor including the target reflectance and the signal attenuation along the path. The Doppler frequency shift is implicitly present in the variable R. If we assume that the radial velocity v1 is constant, we shall have

R = R1 + v1 t,
(2.2)
where R1 is the distance to the target at the initial moment of time t = 0. Equations (2.1) and (2.2) describe a simple model target to be further used for the analysis of the aperture synthesis and imaging principles. In practice, most radar targets belong to the class of complex targets. In spite of a great variety of particular targets, we can offer a common criterion for their
classification. This criterion is based on the relationship between the maximum target size and the radar resolving power in the coordinate space of the parameters R, α, β and Ṙ, which are the range, the azimuth, the elevation angle and the radial velocity of the target, respectively. An additional important parameter is the number of scattering centres (scatterers). In accordance with this criterion, all complex targets can be subdivided into extended compact targets and extended proper targets. A target is referred to as extended compact if it has a small number of scatterers, its linear and angular dimensions are much smaller than the radar resolution element, and the difference between the radial velocities of the extremal scatterers is appreciably smaller than the velocity resolution element. What is important is that this definition also holds for targets located at large distances. On the other hand, a target which has a size much larger than the radar resolution element and a large number of scatterers should be referred to as extended proper. Earth and water surfaces are examples of such targets. We shall first discuss extended compact targets (airplanes, spacecraft, etc.). In the high-frequency region, these targets should be represented as a set of scatterers, or radiant points. The mathematical model of an extended compact target, based on the concept of scatterers, has the form [138]:
U = Σ_{m=1}^{M} √σm exp(jΦm),  (2.3)
where M is the number of individual scatterers, σm is the radar cross-section (RCS) of the mth scatterer and Φm is the phase of the pulse reflected by the mth scatterer relative to that of the pulse reflected by the first scatterer. The value of σm is to be found for a particular polarisation. Equation (2.3) is usually used for monostatic incidence in the optical region (high-frequency approximation). It can also be used to find the relation between monostatic and bistatic scattering at the same target aspect α. For this, the phases of the scatterers, Φm, should be expressed as a sum of two terms [69]:

Φm = 2kZm(α) cos(β/2) + ξm,
(2.4)
where Zm(α) is the projection of the distance between the mth and the first scatterers onto the bisectrix of the bistatic angle, k = 2π/λ1 is the wave number of the incident wave, β is the bistatic angle and ξm is the residual phase contribution of the mth scatterer, including the contribution of the creeping wave. For scatterers retaining their position with changing bistatic angle, the mathematical model is

U = Σ_{m=1}^{M} √σm exp[j(2kZm(α) cos(β/2) + ξm)].  (2.5)
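The monostatic–bistatic equivalence that follows from this model is easy to verify numerically. A minimal sketch in Python (the scatterer RCS values, positions and residual phases are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5
sigma = rng.uniform(0.5, 2.0, M)   # scatterer RCS values (arbitrary example)
Z = rng.uniform(-1.0, 1.0, M)      # projections onto the bisectrix, metres
xi = rng.uniform(0, 2*np.pi, M)    # residual phases, assumed independent of beta

def field(k, beta):
    """Total field of Eq. (2.5) at wavenumber k and bistatic angle beta."""
    return np.sum(np.sqrt(sigma) * np.exp(1j*(2*k*Z*np.cos(beta/2) + xi)))

k = 2*np.pi / 0.03                 # lambda1 = 3 cm
beta = np.deg2rad(40)

# Equivalence: the bistatic field equals the monostatic field (beta = 0)
# taken at the wavenumber reduced by the factor cos(beta/2)
assert np.isclose(field(k, beta), field(k*np.cos(beta/2), 0.0))
```

This is exactly the statement of Kell's theorem discussed next: under the model (2.5), bistatic viewing at angle β along the bisectrix is indistinguishable from monostatic viewing at a frequency reduced by cos(β/2).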
Equation (2.5) allows us to introduce the concept of equivalence of mono- and bistatic scattering and to define conditions for this equivalence. The theorem of R. E. Kell states that (1) if the total field can be written as a sum of the fields of all scatterers and (2) if the quantity √σm, the Zm-coordinate and the residual phase ξm are all independent of the bistatic angle β in a particular range of β values at any given aspect α, then the total bistatic field for the angles α and β is equal to the monostatic scattering field measured along the bisectrix of the β angle at a frequency reduced by a factor of cos(β/2). This theorem will be used in Chapter 5 to justify the method of inverse aperture synthesis for recording and reconstruction of Fourier holograms. The amplitude and polarisation characteristics of individual scatterers are of special interest for the understanding of diffraction phenomena in extended compact targets. A comparison of respective experimental and theoretical values should be based on precise scattering models substantiated by the physical theory of diffraction, namely, by the edge wave method (EWM) [137] or by the geometrical theory of diffraction (GTD) [70]. To illustrate, let us consider the field scattered by a perfectly conducting cylinder of finite length l and radius a oriented towards the transmitting antenna. According to the EWM [12], the horizontal and vertical field components in the far zone are

Eφ = (ia/2) Eox Σ(ϑ) e^{ikR}/R,    Eϑ = (ia/2) Hox G(ϑ) e^{ikR}/R,  (2.6)
where k is the wave number and ϑ is the angle between the viewing direction and the cylinder symmetry axis, π/2 ≤ ϑ ≤ π:

Σ(ϑ) = Σ(1) + Σ(2) + Σ(3),  (2.7)

G(ϑ) = G(1) + G(2) + G(3),  (2.8)

Σ(1) = f(1)[J1(ζ) + iJ2(ζ)] e^{ikl cos ϑ},  (2.9)

Σ(2) = f(2)[−J1(ζ) + iJ2(ζ)] e^{ikl cos ϑ},  (2.10)

Σ(3) = f(3)[−J1(ζ) + iJ2(ζ)] e^{−ikl cos ϑ},  (2.11)

where ζ = 2ka sin ϑ, and J1(ζ) and J2(ζ) are the first- and second-order Bessel functions, respectively. Indices 1, 2 and 3 correspond to three scatterers on the cylinder (Fig. 2.1). Similar expressions can be obtained for the functions G(1), G(2) and G(3) by replacing f(1), f(2) and f(3) with g(1), g(2) and g(3), respectively. The latter are
Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers)
defined as

f(1), g(1) = (sin(π/n)/n) {[cos(π/n) − 1]⁻¹ ± [cos(π/n) − cos((π − 2ϑ)/n)]⁻¹},  (2.12)

f(2), g(2) = (sin(π/n)/n) {[cos(π/n) − 1]⁻¹ ∓ [cos(π/n) − cos(2ϑ/n)]⁻¹},  (2.13)

f(3), g(3) = ∓(sin(π/n)/n) {[cos(π/n) − cos((π + 2ϑ)/n)]⁻¹ − [cos(π/n) − 1]⁻¹},  (2.14)
n = 3/2. The functions (2.7)–(2.14) can be used to calculate the scattering characteristics (the RCS diagram, the amplitude and phase scattering diagrams, etc.) for an experimental study of diffraction in an anechoic chamber (AEC). The last two diagrams, for example, can be found as the modulus and argument of the functions (2.7) and (2.8). Moreover, the representation of the field as a sum of the fields re-transmitted by scatterers provides information on individual scatterers. Such characteristics are referred to as local responses [12]. The RCS diagrams for scatterers on a cylinder and
the E- and H-polarisations of the incident field can be written as

σnH(ϑ) = πa² |Σ(n)|²,    σnE(ϑ) = πa² |G(n)|²,    n = 1, 2, 3.  (2.15)
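The local responses above are straightforward to evaluate numerically. A sketch in Python (it assumes the sign conventions written in Eqs (2.12)–(2.14), takes the upper signs for f, and uses made-up cylinder dimensions; scipy supplies the Bessel functions):

```python
import numpy as np
from scipy.special import jv  # Bessel functions J1, J2

n = 1.5                        # wedge parameter from the text
a, l, lam = 0.1, 0.5, 0.03     # cylinder radius, length, wavelength (example values)
k = 2*np.pi/lam

def f_coef(idx, theta):
    """f(1), f(2), f(3) of Eqs (2.12)-(2.14), upper signs (assumed to give f)."""
    s = np.sin(np.pi/n)/n
    c = np.cos(np.pi/n)
    if idx == 1:
        return s*(1/(c - 1) + 1/(c - np.cos((np.pi - 2*theta)/n)))
    if idx == 2:
        return s*(1/(c - 1) - 1/(c - np.cos(2*theta/n)))
    return -s*(1/(c - np.cos((np.pi + 2*theta)/n)) - 1/(c - 1))

def sigma_local(idx, theta):
    """Local RCS of the idx-th scatterer, Eqs (2.9)-(2.11) and (2.15)."""
    zeta = 2*k*a*np.sin(theta)
    sign = 1.0 if idx == 1 else -1.0          # +J1 for scatterer 1, -J1 otherwise
    phase = np.exp(1j*k*l*np.cos(theta)*(1.0 if idx != 3 else -1.0))
    S = f_coef(idx, theta)*(sign*jv(1, zeta) + 1j*jv(2, zeta))*phase
    return np.pi*a**2*abs(S)**2

theta = np.deg2rad(120)        # pi/2 <= theta <= pi
rcs = [sigma_local(i, theta) for i in (1, 2, 3)]
print(rcs)                     # three non-negative local RCS values
```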
The phase responses of scatterers can be derived in the form of arguments of the complex-valued functions (2.9)–(2.11). A scattering model for a cylinder with bistatic incidence was designed in Reference 12 in the EWM approximation. Besides, it is shown in References 105 and 109 that the amplitude responses and the positions of scatterers on a target can be studied experimentally using images reconstructed from microwave holograms. We now turn to models of extended proper targets. Such targets include

• land surface;
• sea surface;
• large anthropogenic objects like urban areas and settlements;
• special standard objects for radar calibration.
An analysis of models of all of these targets would go far beyond the scope of this book. We give a brief survey of scattering models of sea surface in Chapter 4, including a model of a partially coherent extended proper target, which is used in the analysis of microwave radar imagery. It should be noted that extended compact targets may also be partially coherent (Chapter 7). In either case, these targets produce parasitic phase fluctuations which perturb radar imaging coherence. Target models are used for several purposes: to justify the principles of inverse aperture synthesis, to interpret microwave images, to obtain local RCS of scatterers on standard objects, and to calibrate measurements made in AECs.
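The point-target model of Eqs (2.1) and (2.2) already contains the Doppler shift that aperture synthesis exploits; a minimal sketch (Python, with made-up carrier and velocity values, a(t) = 1 and Φ(t) = 0) recovers fd = −2v1/λ1 from the phase history:

```python
import numpy as np

c = 3e8
f0 = 10e9                      # carrier, 10 GHz (example value)
lam = c / f0
v1, R1 = 150.0, 50e3           # radial velocity and initial range (example values)

t = np.arange(0, 1e-2, 1e-5)   # slow-time samples
R = R1 + v1 * t                # Eq. (2.2)
# Echo phase of Eq. (2.1) with a(t) = 1, Phi(t) = 0:
phase = 2*np.pi*f0*(t - 2*R/c)

# Doppler frequency: slope of the phase history over 2*pi, minus the carrier
fd = np.polyfit(t, phase, 1)[0]/(2*np.pi) - f0
print(fd, -2*v1/lam)           # both approximately -10 kHz
```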
2.2 Basic principles of aperture synthesis

We have mentioned in Chapter 1 that the use of a synthetic aperture is necessary if one needs to obtain a high angular resolution of targets at large distances. It has been shown by some researchers [73,109] that aperture synthesis is, in principle, possible for any form of relative motion of a target and a real antenna; what is important is that the target aspect should change together with the relative displacement. Today there are two basic methods of aperture synthesis – direct and inverse. Direct synthesis can be made by scanning a relatively stationary target by a real antenna (Fig. 2.2(a)). The target is on the earth surface and the antenna is located on an aircraft. Radar systems with direct aperture synthesis are known as side-looking synthetic aperture radars (SARs). The authors of Reference 85 have suggested for them the term quasi-holographic radars (Chapter 3). Methods of aperture synthesis using linear translational motion of a target or its natural rotation relative to a stationary ground antenna are called inverse methods, and radars based on such methods are
Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target
known as inverse synthetic aperture radars (ISARs) (Fig. 2.2(b) and (c)). There are also combined approaches to field recording. For example, a front-looking holographic radar (Chapter 3) combines direct synthesis along the track and transversal synthesis with a one-dimensional (1D) real antenna array (Fig. 3.4). A spot-light mode of synthesis is also possible: it uses both the linear movement of an airborne antenna and its constant axial orientation to a ground target (Fig. 3.11). Radars based on this principle are known as spot-light SAR [100]. Finally, ground radars operating in the inverse synthesis mode and viewing a linearly moving target can combine a real-phased antenna array and aperture synthesis.
This method was suggested by B. D. Steinberg [129] to employ adaptive beamforming (AB) together with aperture synthesis (ISAR + AB). In any method of aperture synthesis, the radar azimuthal resolution is determined by the aperture angle βo = Ls /R. The linear resolution along the angle coordinate is δl = λ1 /βo . It should be emphasised [73] that rotation of a synthetic antenna pattern (SAP) does not shift the target phase centre and, therefore, does not synthesise the aperture. For this reason, one cannot increase the angular resolution by rotating a real antenna, in contrast to the target rotation.
2.3 Methods of signal processing in imaging radar

Imaging radar signal processing can be considered from different points of view. Since there is an essential difference between the direct and inverse modes of aperture synthesis, the processing techniques should be described individually for each type of radar.
2.3.1 SAR signal processing and holographic radar for earth surveys

The SAR aperture synthesis by coherent integration is treated in terms of

• the antenna approach [74];
• the range-Doppler approach [85,140];
• the cross-correlation approach [85];
• the holographic approach [85,143];
• the tomographic approach [100].
The use of a variety of analytical techniques in radar imaging leads to various processing designs and physical interpretations of some of its details. The first four approaches provide a fairly complete analysis of the effects of SAR parameters on its performance characteristics, and the results are generally consistent. Each approach, however, enables one to see the image recording and reconstruction in a new light, because each has its own merits and demerits. In this book, we largely follow the holographic approach to the performance analysis of various SAR systems, which involves the theories of optical and holographic systems. According to one of the pioneers of optical and microwave holography, E. N. Leith, a holographic treatment of SAR performance has proved most fruitful. The recording of a signal is regarded as that of a reduced microwave hologram of the wave field along the azimuth, that is, along the flight track. Illumination of such a hologram by coherent light reconstructs the optical wave field, which is similar to the recorded microwave field on a certain scale. A schematic diagram illustrating the holographic approach to SAR signal recording and processing is presented in Fig. 2.3. For a point target, for instance, an optical hologram is a Fresnel zone plate. When the plate is illuminated by coherent light, the real and virtual images of the point target are reconstructed (Fig. 3.1). Thus, a microwave image of a point target can be obtained directly owing to the focusing properties of a Fresnel zone plate. The processing optics in that case is
Figure 2.3 The holographic approach to signal recording and processing in SAR: 1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of the target field in the form of a transparency (azimuthal recording of a 1D microwave hologram), 2 – 1D Fourier or Fresnel transformation, 3 – display
necessary only to compensate for various distortions inherent in SAR: anamorphism and the difference between the azimuth and range scale factors. Optical processing of SAR signals was first analysed in terms of holography [86]. The holographic approach will be used in Chapter 3 to describe SAR as a system for combined recording and reconstruction of microwave holograms. A general scheme of this process is shown in Fig. 2.3. The reference signal here is a heterodyne coherent pulse, whose role is actually much more important (see below). Holographic SAR for surveying the earth surface (Chapter 3) uses the cross-correlation [72] and holographic approaches. The scheme illustrating the holographic principle is similar to that in Fig. 2.3, with the only difference that one deals here with 2D microwave holograms. The tomographic approach is applied in descriptions of aperture synthesis by spot-light SAR (Chapter 3).
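The azimuth record of a point target described above is a Fresnel zone pattern, and compressing it with the conjugate reference reproduces the focusing step of Fig. 2.3. A minimal 1D sketch (Python; the platform and geometry values are made up):

```python
import numpy as np

lam, R0, V = 0.03, 10e3, 100.0        # wavelength, closest range, platform speed
t = np.arange(-1.0, 1.0, 1e-3)        # slow time over the synthetic aperture
x = V * t                             # along-track position

# Two-way phase history of a point target at x = 0 (Fresnel approximation):
R = R0 + x**2 / (2 * R0)
hologram = np.exp(-1j * 4 * np.pi * R / lam)   # the recorded 1D 'zone plate'

# Reconstruction = correlation with the matched reference chirp
reference = np.exp(-1j * 4 * np.pi * (R0 + x**2 / (2 * R0)) / lam)
image = np.abs(np.correlate(hologram, reference, mode='same'))

peak = np.argmax(image)
print(x[peak])                        # the target focuses at x close to 0
```

Note that numpy's `correlate` conjugates its second argument, which is exactly the matched-filter operation needed here.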
2.3.2 ISAR signal processing

Methods of inverse aperture synthesis have been discussed in a number of publications. The treatments involved are:

• a range-Doppler algorithm [13,21,24];
• a circular convolution algorithm (CCA) [94];
• correlated processing [13];
• extended coherent processing (ECP) [13];
• polar format processing [13];
• holographic processing [109];
• tomographic processing [9,106].
A serious limitation of the range-Doppler algorithm is its applicability only to a synthesis made at relatively small angle steps, which is an obstacle to achieving high resolutions. The restrictions on the time intervals of coherent processing were formulated by D. A. Ausherman et al. [13]. Any attempt to overcome these restrictions leads to displacement of individual scatterer images into adjacent resolution elements and, hence, to image degradation. The range-Doppler algorithm has been used in SAR for microwave imaging of aircraft [8]. Preliminarily, the radial movement of the target is compensated for in all range channels. The development of new processing algorithms based on larger angle steps required the use of spherical coordinates (polar coordinates in the 2D case) instead of Cartesian coordinates of the range–cross-range type. One of these is the CCA, permitting a limiting angle step of 2π with precise aperture focusing over the whole target space [94]. Moreover, it is applicable to the processing of both narrow- and wide-band radar signals. The conditions for viewing real targets differ from the conditions in which the CCA operates. First, discrete records of the target aspect variation, taken at a constant repetition frequency, are not equidistant in angle. Second, the angle between the radar line of sight (RLOS) and the rotation axis changes during the viewing. The first obstacle can be bypassed by interpolating the radar data. The second one inevitably leads to the necessity of considering a 3D problem. Attempts at using this algorithm, like other 2D algorithms, to process 3D data result in distorted images [8]. When applied to narrow-band signals, the CCA has another disadvantage: the whole ensemble of radar data must be processed simultaneously. So this algorithm should be employed only in measuring test areas and in AECs.
Correlated processing provides well-focused images of targets of any size, and the time intervals of coherent processing may be of arbitrary duration. On the other hand, its computational efficiency is quite low [8]. Both algorithms require special measures to compensate for the phase shift due to the radial displacement of the target. Extended coherent processing is based on coherent summation of microwave images, each of which is formed by a range-Doppler algorithm at a small angle step. The application of this technique increases the processing rate by approximately an order of magnitude with a good image quality for a fairly long processing time. Variable movement of a target relative to the RLOS necessitates the use of different algorithms for the synthesis of the final image from partial ones. So algorithms for ECP are subdivided into those for wide-angle imaging and those for multiple target rotations. Target aspects suitable for wide-angle imaging are chosen when a ground radar views a spacecraft stabilised along three axes or rotating around its centre of mass. Imaging by multiple rotations has the following specificity: when a space target is stabilised by rotation, the angle step remains the same in every consecutive rotation
of the target around its axis. In its latter modification, the ECP algorithm is used for 3D and stroboscopic microwave imaging [13]. Polar format processing is another effective way to overcome the scatterers' movement through the resolution elements. It is based on the representation of radar data in a 3D frequency space. In our opinion, a very promising way of inverse aperture synthesis is by holographic processing [109,146]. The possibility of using a holographic approach was first suggested by E. N. Leith [85]. Not only does it provide a new insight into the processes occurring in inverse synthesis but it also helps to find novel designs of recording and reconstruction devices. The schematic diagram of the holographic approach to ISAR signal recording and reconstruction is similar to that shown in Fig. 2.3. The first step is to record a 1D quadrature or complex microwave Fourier hologram (the diffraction pattern of the target field) (Section 2.3.1). The reference signal is a coherent heterodyne pulse. The second step is the implementation of a 1D Fourier transform. The last step is the image representation. Tomographic processing can be performed using one of three ways of image reconstruction:

• reconstruction in the frequency region [9];
• reconstruction in the space region by using a convolution back-projection algorithm [9];
• reconstruction by summation of partial images (Chapter 6).
The tomographic approach to ISAR analysis will be discussed in Section 2.4.2 and in Chapter 6.
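For small synthesis angles, the range-Doppler idea listed above reduces to a Fourier transform over the pulse sequence: scatterers at different cross-range positions on a rotating target produce different Doppler frequencies. A toy sketch (Python; the target and radar parameters are invented, with the rotation rate chosen so the Doppler frequencies fall on integer FFT bins):

```python
import numpy as np

lam = 0.03                       # wavelength, 3 cm
omega = 0.015                    # rotation rate, rad/s (chosen to give integer Doppler bins)
T, Np = 1.0, 256                 # synthesis time, s, and number of pulses
t = np.linspace(0, T, Np, endpoint=False)

# Two point scatterers at cross-range x = -2 m and +3 m on a rotating target;
# for small synthesis angles the radial displacement is x*omega*t
scatterers = [-2.0, 3.0]
echo = sum(np.exp(-1j*4*np.pi/lam * x*omega*t) for x in scatterers)

# Doppler spectrum -> cross-range profile: f_d = -2*omega*x/lam
spec = np.fft.fftshift(np.abs(np.fft.fft(echo)))
fd = np.fft.fftshift(np.fft.fftfreq(Np, T/Np))
x_axis = -fd*lam/(2*omega)

peaks = sorted(x_axis[np.argsort(spec)[-2:]])
print(peaks)                     # close to [-2.0, 3.0]
```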
2.4 Coherent radar holographic and tomographic processing

2.4.1 The holographic approach

Direct hologram recording, commonly used in the optical wavelength range, finds limited application in the microwave range because of the absence of a suitable substitute for a microwave photoplate. So the processing can be done by either of two methods – direct or inverse aperture synthesis (Section 2.2). These techniques allow the recording of two types of hologram. One is similar to an optical hologram, while the other has no optical counterpart. Suppose a exp(iΦ) is a target wave and ao exp(iΦo) is a reference wave. In the first case a quadratic microwave hologram is formed, which is described by the following equation:

H1(x, y) = |a exp(iΦ) + ao exp(iΦo)|² = ao² + a² + 2a ao cos(Φ − Φo),  (2.16)
Such holograms can be recorded by a quadratic detector in the high- and medium-frequency ranges (Fig. 2.4(a) and (b), respectively), using a high-frequency reference wave.

Figure 2.4  Synthesis of a microwave hologram: (a) quadratic hologram recorded at a high frequency, (b) quadratic hologram recorded at an intermediate frequency, (c) multiplicative hologram recorded at a high frequency, (d) multiplicative hologram recorded at an intermediate frequency, (e) quadrature holograms, (f) phase-only hologram

In the second case a multiplicative hologram is formed [109], which is defined as

H1(x, y) = Re[a exp(iΦ) · ao exp(−iΦo)] = a ao cos(Φ − Φo).  (2.17)
The latter can also be formed at high and medium frequencies (Fig. 2.4(c) and (d), respectively). In either case it is possible to record a quadrature microwave hologram:

H2(x, y) = ao² + a² + 2a ao sin(Φ − Φo),  (2.18)

H2(x, y) = a ao sin(Φ − Φo).  (2.19)
A pair of quadrature microwave holograms, (2.16) and (2.18) or (2.17) and (2.19), is recorded by using identical reference waves phase-shifted by π/2 relative to each other (Fig. 2.4(e)). Optical recording of the bipolar functions (2.17) and (2.19) for optical reconstruction requires the use of a reference level Hr found from the condition

Hr ≥ max{max|H1(x, y)|, max|H2(x, y)|}  (2.20)

and the linearity of the microwave recording. Then we arrive at the equations

H1(x, y) = Hr + a ao cos(Φ − Φo),  (2.21)

H2(x, y) = Hr + a ao sin(Φ − Φo).  (2.22)
Each pair of quadrature holograms makes up a complex microwave hologram: H (x, y) = H1 (x, y) + iH2 (x, y).
(2.23)
The imaginary unit i in Eq. (2.23) is introduced at the reconstruction stage, following the recording of only two quadrature holograms, say, (2.21) and (2.22). But this form of the complex hologram equation makes it possible to consider the pair as an entity, which is especially convenient for an analytical description of the reconstruction process. A complex microwave hologram is a means of registration of the total field scattered by a target. It will be shown later that this allows the reconstruction of a single image. The designs shown in Fig. 2.4(a)–(d) are largely used in laboratory and test set-ups, while radar stations use the design in Fig. 2.4(e). A typical microwave holographic receiver based on this design is shown in Fig. 2.5. In contrast to optical holography, the reference wave is produced by a coherent generator and phase-shifter 1 in the receiver. This is a radically new feature compared with optical holography: the reference wave is created by electrical modulation, and we call it an artificial reference wave. Its incidence angle can be simulated by varying the phase with phase-shifter 1 operating synchronously with the movement of the real radar antenna. The incidence angle α to the carrier track (Fig. 2.6) can be simulated by changing the phase as

Φ = (2π x sin α)/λ1,  (2.24)
where x is the position of the real antenna during the aperture synthesis. Microwave holograms can be classified in terms of the volume of recorded data on the target wave. If a hologram contains data on the wave amplitude and phase, it is said to be an amplitude–phase hologram. If the amplitude factor a(x, y) is neglected before the summation or multiplication of the target and reference waves, a hologram is said to be a phase-only hologram [109] (Fig. 2.4(f)). To describe the fields of reconstructed images, one can conveniently use the Fresnel–Kirchhoff diffraction formula [121] employed in optical holography. So it is reasonable to classify holograms in terms of the phase fronts of fields induced by reference sources and diffracted by a target.
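To make the role of the quadrature pair concrete, here is a minimal numerical sketch (Python/NumPy; the point-scatterer phase ramp and all amplitudes are invented for the example). It forms the multiplicative pair (2.17), (2.19) along a sampled 1D aperture, combines it into a complex hologram as in Eq. (2.23), and shows that the complex hologram reconstructs to a single image peak, while a single real-valued hologram yields a conjugate pair of peaks.

```python
import numpy as np

# 1D aperture with one point scatterer; the phase ramp is invented for the demo
N = 256
x = np.arange(N)
phi = 2 * np.pi * 32 * x / N        # target-wave phase along the aperture
a, ao, phio = 1.0, 1.0, 0.0         # target/reference amplitudes and phase

# Quadrature pair of multiplicative holograms, Eqs (2.17) and (2.19)
H1 = a * ao * np.cos(phi - phio)
H2 = a * ao * np.sin(phi - phio)

# Complex microwave hologram, Eq. (2.23)
H = H1 + 1j * H2

# Image reconstruction by a Fourier transform
img_complex = np.abs(np.fft.fft(H))
img_real = np.abs(np.fft.fft(H1))

# Count peaks above half maximum: the complex hologram yields a single image,
# the single real hologram a conjugate pair
peaks_complex = int(np.sum(img_complex > 0.5 * img_complex.max()))
peaks_real = int(np.sum(img_real > 0.5 * img_real.max()))
print(peaks_complex, peaks_real)    # → 1 2
```

This is the sense in which a complex hologram "allows the reconstruction of a single image": the quadrature channel removes the conjugate ambiguity of a real-valued record.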
Figure 2.5  A block diagram of a microwave holographic receiver: 1 – reference field, 2 – reference signal cos(ω0 t + ϕ0), 3 – input signal A cos(ω0 t + ϕ0 − ϕ), 4 – signal sin(ω0 t + ϕ0) and 5 – mixer

Figure 2.6  Illustration for the calculation of the phase variation of a reference wave
A Fresnel microwave hologram is synthesised by registration of the interference pattern of interaction between plane or spherical reference waves and waves diffracted by a target, which have a spherical phase front in the hologram plane. A Fraunhofer microwave hologram is formed by recording the interference pattern of plane or spherical reference waves interacting with diffracted waves having a plane phase front in the hologram plane. A Fourier microwave hologram is formed by recording the interference pattern of interaction between the diffracted waves having a spherical front in the hologram plane and a spherical reference wave with a curvature radius equal to an average curvature radius of the waves coming from the target and propagating in the same direction.
Fresnel and Fraunhofer holograms have found application in SAR theory, while Fourier holograms are used in ISAR theory (Chapters 3 and 5). Since the process of hologram synthesis implies that the radar is to be coherent, the question arises as to what requirements must be imposed on the coherence. Let us first define the concept of coherence in microwave radar theory. A signal is said to be coherent if it shows no abrupt changes in the basic frequency, or if such changes are small, of the order of 1–3° [14]. If the basic frequency changes are greater than these values, the signal reflected from a target is called partially coherent. This happens when the coherence is perturbed due to:

• an unstable frequency of the radar wave synthesiser or heterodyne;
• the effects of the target itself, say, of a sea surface (Chapter 4);
• a non-uniform motion of the aircraft, for example, yawing, pitching and banking (Chapter 7);
• the effects of the troposphere and ionosphere, such as sporadic changes in the wave propagation conditions (Chapter 8).
Within this definition, a continuous radiation is always coherent for a period of time when various instabilities in the transmitter performance can be neglected. When a radar operates in a pulse mode, coherence is determined by an unambiguous relation between the initial phase values of the carrier frequency of a train of pulses. The above definition of coherence also applies to radar signals with known phase jumps that can be avoided using coherent sensing. Since the first of the factors responsible for coherence instability is the most serious one, there was a suggestion to introduce in imaging theory the concept of frequency, rather than coherence, stability [87]. A comprehensive analysis of the requirements on frequency stability was made in SAR theory by R. O. Harger [55]. A simplified approach is considered in Reference 87. The latter will be discussed here in more detail in order to explain the physical mechanism of SAR instability. The treatment of this problem has yielded the following expression:

π α T² ≤ (π/4)(cT/2R),  (2.25)

where α is the rate of linear frequency variation due to the instability of the radar generator and T is the time for a pulse to reach the target at distance R and to come back. It is clear from Eq. (2.25) that the permissible phase error π α T² is π/4 for the time T = 2R/c. Therefore, Eq. (2.25) is the criterion for the coherence length in the holographing of reflecting targets; it should provide the frequency stability of the signal propagation for a time consistent with the scene depth (a full analogy with optical holography). Similar stability requirements can be imposed on coherent ISAR, in which coherence is preserved if the signal phase deviation due to the frequency instability is less than π/2. Then we have the expression

2π δfc T ≤ π/2,  (2.26)
where δfc is the deviation of the probing signal frequency over the time T. Neglecting the signal delay in the antenna-feeder waveguide, we get

δfc ≤ c/8R,  (2.27)

or, using the concept of short-term instability,

εf = δfc/fc,  (2.28)

where fc is the radar carrier frequency. Then we have

εf = c/(8 fc R).  (2.29)
The condition for long-term frequency instability can be found from coherent processing over the whole time interval of the synthesis, Ts, which varies with the type of hologram processing. The frequency stability in modern radars is achieved with highly stable, mainly caesium atomic-beam standards of time and frequency. These standards provide a long-term instability over 1 h of about 10⁻¹²–10⁻¹⁴ with possible adjustment, and a 1 ns random component in the 24 h behaviour of the timescale [25]. To maintain the stability, modern radars use phase-locked-loop control [44]. The long-term instability requirements imposed on coherent ISAR are very high. For example, in the Goldstone Solar System Radar (GSSR) for planet surveys [44] this parameter is about 10⁻¹⁵ for 1000 s, and the pulse-to-pulse instability is less than 1°. The GSSR project is designed for the observation of Mercury, Venus and Mars. In the LRIR (Long-Range Imaging Radar), designed for observation of space objects, the pulse-to-pulse instability is about 2–3° [20]. To summarise the discussion of factors causing coherence instabilities in radars: the frequency stability maintained by frequency standards and loop frequency control solves the problem of operation instabilities of the pulse generator or heterodyne, while the other three causes of instability can be removed by special signal processing in the radar (Chapters 4 and 7).
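To give Eqs (2.27) and (2.29) a feel for numbers, the sketch below evaluates the required short-term frequency stability for an illustrative case; the X-band carrier frequency and the 1000 km range are invented for the example.

```python
# Short-term frequency stability required for coherent ISAR, Eq. (2.29):
# eps_f = c / (8 * fc * R)
c = 3.0e8          # speed of light, m/s
fc = 10.0e9        # carrier frequency, Hz (X-band, assumed for the example)
R = 1.0e6          # target range, m (a 1000 km space object, for illustration)

delta_fc = c / (8 * R)        # permissible frequency deviation, Eq. (2.27), Hz
eps_f = c / (8 * fc * R)      # required relative stability, Eq. (2.29)

print(f"delta_fc = {delta_fc:.2f} Hz")   # → delta_fc = 37.50 Hz
print(f"eps_f = {eps_f:.2e}")            # → eps_f = 3.75e-09
```

The result, a few parts in 10⁹ over one pulse round trip, shows why the much stricter long-term figures quoted for GSSR and LRIR call for atomic frequency standards.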
2.4.2 Tomographic processing in 2D viewing geometry

It has been shown above that the signal processing in ISAR can be described in terms of Doppler frequencies, correlated processing, CCAs, etc. We believe that the most appropriate approach is tomographic processing, which allows focusing of a synthesised aperture over the whole target space and provides an image resolution restricted only by the diffraction limit [7,9,10]. Another advantage of this technique is the great scope it offers for optimisation of processing algorithms and devices. Consider a target being probed by a stationary coherent radar (Fig. 2.7), which radiates pulses with the carrier frequency fc and the modulation function w(t),

s(t) = w(t) exp(j2π fc t),  (2.30)
Figure 2.7  The coordinates used in target viewing
and measures the amplitude and phase of the complex envelope of an echo signal. The target is assumed to consist of a small number of independent scatterers, whose positions relative to the centre of mass of the target O and the radar are defined by the respective vectors (Fig. 2.7). The target moves along an arbitrary trajectory, rotating around its centre of mass. The conditions for the far zone and a uniform field amplitude of the wave incident on the target surface facing the radar are fulfilled. The algorithm for the processing of the complex envelope of an echo signal, synthesised by the radar receiver, is

sv(t) = ∫_V g(r̄o) w(t − 2|r̄|/c) exp(−j2kc|r̄|) d r̄o,  (2.31)

where g(r̄o) is the function of the target reflectivity and kc = 2π/λc is the wave number corresponding to the wavelength of the radar carrier oscillation. Equation (2.31) allows the estimation of the reflectivity ĝ(r̄o) of every scatterer. The integration of Eq. (2.31) is made over the target space. With the condition for the far zone, the vector r̄ describing the position of an arbitrary scatterer relative to the radar can be substituted by its projection on the line of sight:

|r̄| = |r̄a| + r̂,  (2.32)

where

r̂ = r̄o · ū,  ū = r̄a/|r̄a|,  (2.33)

and ū is a unit vector coinciding with the line of sight and directed away from the target rotation centre towards the radar. Generally, both terms of Eq. (2.32) vary during the viewing. However, the contribution to the imaging is made only by the variation in the relative range r̂. On the contrary, the range variation of the target's centre of mass |r̄a| produces distortions in the image. By substituting Eq. (2.32) into Eq. (2.31) and regrouping the terms for
the complex envelope distortion, we obtain

sv(t) = ∫_V g(r̄o) w(t − 2|r̄a|/c − 2r̂/c) exp(−j2kc|r̄a|) exp(−j2kc r̂) d r̄o.  (2.34)

It follows from the analysis of Eq. (2.34) that the correction of the received signal is to maintain a constant delay τ = 2|r̄a|/c and to multiply the signal by the phase factor exp(j2kc|r̄a|). After making the correction, the signal can be written (assuming τ = 0) as

sv(t) = ∫_V g(r̄o) w(t − 2r̂/c) exp(−j2kc r̂) d r̄o.  (2.35)
The exponential phase factor in Eq. (2.35) defines the coherence degree of the whole imaging system (the radar and the processing system) over the whole band-limited frequency spectrum. Coherence instability due to, say, an inaccurate compensation for the target radial movement leads to a poorer resolution; the possibility of imaging by a tomographic algorithm is, in principle, preserved. Let us process the signal in the frequency domain. The Fourier transform of the video signal corresponding to the change in the target aspect relative to the radar is

S(f) = F{sv(t)} = W(f) ∫_V g(r̄o) exp[−j2(kc + k)r̂] d r̄o,  (2.36)

where W(f) = F{w(t)} is the modulation function spectrum, k = 2π/λ is the wave number to be defined in the frequency spectrum, and F{·} is a 1D Fourier transform operator. Next, we perform standard range processing to obtain the resolution along the line of sight in a filter with the transmission characteristic K(f) [18]:

S(f) = H(f) ∫_V g(r̄o) exp[−j2(kc + k)r̂] d r̄o  (2.37)

with H(f) = W(f)K(f). The range processing can also be made in the time domain of the receiver using a filter with the impulse response h(t) = F⁻¹{K(f)}, where F⁻¹{·} is the inverse Fourier transform (IFT) operator. Note that the compensation for the target radial displacement can also be made by the processor (after the transformation of Eq. (2.37)) by multiplying the video signal spectrum by the phase factor exp[j2(kc + k)|r̄a|]. A particular method of processing requires a proper design of the receiver and processor. With Eq. (2.33), expression (2.37) can be presented as a 3D Fourier transform of the target reflectivity:

S(f) = H(f) ∫_V g(r̄o) exp[−j2(kc + k) ū·r̄o] d r̄o,  (2.38)

where (kc + k) is the 3D frequency vector modulus.
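The range processing step behind Eq. (2.37) can be sketched numerically. The choice K(f) = W*(f) (a matched filter) is one common transmission characteristic, not the only one; the linear-FM modulation function and all its parameters below are invented for the example.

```python
import numpy as np

# Linear-FM (chirp) modulation function w(t) -- parameters are illustrative
fs = 1.0e6                  # sample rate, Hz
T = 1.0e-3                  # pulse length, s
B = 2.0e5                   # swept bandwidth, Hz
t = np.arange(int(T * fs)) / fs
w = np.exp(1j * np.pi * (B / T) * t**2)

# Echo from a single point scatterer delayed by 300 samples
delay = 300
echo = np.concatenate([np.zeros(delay, complex), w, np.zeros(100, complex)])

# Range processing in the frequency domain with K(f) = conj(W(f)):
# H(f) = W(f) K(f) applied to the received data
n = len(echo)
W = np.fft.fft(w, n)
S = np.fft.fft(echo) * np.conj(W)
compressed = np.fft.ifft(S)

# Pulse compression concentrates the echo energy at the scatterer delay
print(int(np.argmax(np.abs(compressed))))   # → 300
```

The same operation done per pulse, followed by the aspect-dependent phase handling described next, is what builds up the frequency-space data set of Eq. (2.38).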
To calculate the target reflectivity, it is necessary to make an inverse transformation of the Fourier function over the respective volume:

ĝ(r̄o) = F⁻¹{S(f)} = g(r̄o) ∗ h(r̄o),  (2.39)

where ∗ denotes convolution, h(r̄o) is the processing system response to a single point target in the space frequency domain, h(r̄o) = F⁻¹{H(f)}, and H(f) is a 3D aperture function. It is clear from Eq. (2.39) that the value of ĝ(r̄o) is a distorted representation of the target reflectivity g(r̄o). The distortion is largely due to the limited frequency spectrum and the small angle step of the aspect variation. Equation (2.39) can be transformed in the 3D frequency domain. More often, however, one needs 2D images, which can be obtained using an appropriate 2D data acquisition design (Fig. 2.8). Equation (2.38) then has the form:

S(f) = H(f) ∫∫_{−∞}^{∞} g(u, v) exp[−j2(kc + k)v] du dv.  (2.40)
Keeping in mind that the function

Pθ(v) = ∫_{−∞}^{∞} g(u, v) du  (2.41)

represents the projection of the target reflectivity on the v-axis, the echo spectrum at the target aspect defined by the angle θ (Fig. 2.8) can be written as

Sθ(f) = H(f) ∫_{−∞}^{∞} Pθ(v) exp[−j2(kc + k)v] dv.  (2.42)

Using the notation fp = 2(fc + f)/c, we get

Sθ(f) = H(f)Pθ(fp) = H(f) ∫_{−∞}^{∞} Pθ(v) exp(−j2π fp v) dv,  (2.43)

where Pθ(fp) is the Fourier transform of the projection Pθ(v) with the space frequency fp. The substitution of Eq. (2.41) into Eq. (2.43) yields

Pθ(fp) = ∫∫_{−∞}^{∞} g(u, v) exp[−j2π(0·u + fp v)] du dv  (2.44)

or

Pθ(fp) = P(0, fp) = P(fp sin θ, fp cos θ),  (2.45)
Figure 2.8  2D data acquisition design in the tomographic approach
where P(·, ·) is the Fourier transform of the target reflectivity in the (x, y) coordinates. Then, using Eq. (2.45), we have

Sθ(f) = H(f) P(fp sin θ, fp cos θ).  (2.46)
Equation (2.45) represents the formulation of the projection theorem underlying the tomographic imaging algorithms [34,57]. Bearing in mind that v = y cos θ − x sin θ, we go from Eq. (2.43) to the 2D Fourier transform in the (x, y) coordinates related to the target:

S(fx, fy) = H(f) ∫∫_{−∞}^{∞} g(x, y) exp[−j2π(fx x + fy y)] dx dy,  (2.47)

where fx and fy are the respective space frequencies, fx = −(fpo + fp) sin θ and fy = (fpo + fp) cos θ; fpo = 2fc/c is the space frequency corresponding to the carrier frequency; and fp is the space frequency defined over the whole frequency band of the probing signal, 2fl/c < fp < 2fu/c, where fl and fu are the lower and upper frequencies of the spectrum. The solution to Eq. (2.47) yields the target reflectivity:

ĝ(x, y) = ∫∫_{−∞}^{∞} S(fx, fy) exp[j2π(fx x + fy y)] dfx dfy = g(x, y) ∗ h(x, y).  (2.48)
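The projection theorem of Eq. (2.45) is easy to check numerically in its classical form: the 1D Fourier transform of a projection of g(x, y) equals the central slice of the 2D Fourier transform of g taken at the same angle. The sketch below (NumPy; the test scene is invented) verifies the θ = 0 case, where the projection is a simple sum along one axis:

```python
import numpy as np

# A small random test reflectivity map g(x, y)
rng = np.random.default_rng(0)
g = rng.random((64, 64))

# Projection on the v-axis for theta = 0: integrate g along u (here: rows)
P = g.sum(axis=0)

# 1D spectrum of the projection vs the central slice of the 2D spectrum
slice_1d = np.fft.fft(P)
G2 = np.fft.fft2(g)
central_slice = G2[0, :]        # zero-frequency row: the theta = 0 slice

print(np.allclose(slice_1d, central_slice))   # → True
```

For other angles the same identity holds with an interpolated radial slice, which is exactly the interpolation burden discussed next.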
This approach to imaging can be implemented in the frequency and space domains (see Chapter 6). Note that the radar data on a signal are recorded in polar coordinates [8], while the imaging devices are represented as a dot matrix. This inconvenience necessitates the use of a cumbersome procedure of data interpolation and then finding a compromise between the degree of interpolation complexity (the greater the complexity, the better the image quality) and the computation resources. It will be shown
Figure 2.9  The space frequency spectrum recorded by a coherent (microwave holographic) system. The projection slices are shifted by the value fpo from the coordinate origin
in Chapter 6 that there is a procedure of processing in the space domain which successfully overcomes this difficulty. The space spectrum of each echo signal is represented in the (fx, fy) frequency plane (Fig. 2.9) as a straight line coinciding with a radial beam. The beam angular coordinate θ is equal to the angle ϑ which defines the target position at the moment of probing signal reflection. The space spectra of the echo signals are centred relative to an arc of radius fpo. In the frequency plane, their multiplicity forms a microwave hologram with the angular size Δθ equal to the angle step of the synthesis Δϑ. The inner and outer radii of the hologram are defined by the space frequencies fpl and fpt at the lower and top frequencies of the probing signal spectrum. It follows from Eq. (2.47) that the ensemble of radar data recorded under the above conditions is a 2D Fourier microwave hologram. The image reconstruction from such a hologram reduces to an IFT. Although the inversion of a hologram described by Eq. (2.47) is a simple mathematical procedure, the methods of its digital implementation are not as obvious. With the above assumptions of the far zone and the high-frequency spectrum, we can suggest that at every moment of time t = 2v/c the contribution to the echo signal will be made only by the local scatterers with the range coordinate v. Then the integral

Pϑ(v) = ∫_V g(u, v) du  (2.49)

taken along the transverse range represents the projection of the target reflectivity on the RLOS (radar line of sight).
Figure 2.10  The space frequency spectrum recorded by an incoherent (tomographic) system
With Eq. (2.49), expression (2.40) takes the form

Sθ(fpo + fp) = H(fp) ∫_V Pϑ(v) exp[−j2π(fpo + fp)v] dv = H(fp) Pθ(fpo + fp),  (2.50)

where

Pθ(fpo + fp) = F{Pϑ(v) exp(−j2π fpo v)}.  (2.51)

If the right-hand side of Eq. (2.50) is expressed in Cartesian coordinates, we have

Sθ(fp) = H(fp) P[−(fpo + fp) sin θ, (fpo + fp) cos θ].  (2.52)
Hence, the 1D spectrum of the product of the reflectivity function projection at the angle ϑ and the phase factor exp(−j2π fpo v) is the cross-section of the microwave hologram function along a straight line passing through the frequency plane origin at the angle θ (θ = ϑ). If the data acquisition system is incoherent and records only the complex envelope shape of the echo signal, the phase factor vanishes from Eqs (2.50)–(2.52). Equation (2.52) then reduces to the projection slice theorem, one of the fundamental theorems in computerised tomography [57]. Let us discuss the physical differences between coherent (holographic) and incoherent (tomographic) systems of microwave radar imaging by comparing Figs 2.9 and 2.10. The angle step of the target aspect variation and the frequency bandwidth of the probing signal are taken to be identical in both cases.
A specific feature of coherent systems is that the projection slices of a hologram are shifted radially by the value fpo away from the coordinate origin. Other things being equal, their resolution, defined in the first approximation by the data domain size in the frequency space, is therefore high [8]. A more important difference is that the projection Pϑ(v) recorded by an incoherent system is a real function of time. So the phase of any of the projection slices in the data domain of the frequency space is zero at the intersection with the coordinate origin, and the projection slices are independent of one another. In contrast, a coherent system records not only the changes in the complex envelope amplitude along the projection but also those of the phase of the echo signal carrier oscillation. As a result, in consecutive projection slices the phases of the central records, at the space frequency fpo, carry information about the ranges of all unscreened scatterers of the target relative to its rotation centre. The other records of a projection slice acquire additional phase shifts proportional to their space frequency differences with respect to the central record. In this way, all hologram records become interrelated, providing a resolution along any direction, including the transverse range. Thus, the mathematical theory of computerised tomography used for designing digital processing algorithms should be modified to meet the requirements of coherent imaging. The above mathematical expressions (2.50)–(2.52) can be regarded as generalised projection slice theorems for coherent radar imaging. This enables one to employ analytical methods of computerised tomography [57] as a basis for further development of the theory of coherent imaging. The advantages of this kind of treatment are physical clarity and computational efficiency (Chapter 6).
The holographic approach to the description of inverse synthesis by coherent radars accounts for arbitrary changes in the target aspect and the frequency bandwidth of the probing signal. Most of the available algorithms for microwave imaging have been designed for 2D viewing geometry, so digital processing for real target sizes and angle steps becomes a time-consuming endeavour. The well-elaborated mathematics of computerised tomography could considerably facilitate the development of effective computation algorithms for digital processing of 3D microwave holograms.
Chapter 3
Quasi-holographic and holographic radar imaging of point targets on the earth surface
3.1 Side-looking SAR as a quasi-holographic radar

We have shown in Chapter 2 that aperture synthesis can be described in different ways, including a holographic approach. It was first applied by E. N. Leith to a side-looking synthetic aperture radar (SAR) [85,86]. He analysed the optical cross-correlator, which processes the received and the reference signals, and concluded that 'if the reference function is a lens, the record of a point object's signal can also be considered as a lens, because the reference function has the same functional dependence as the signal itself' [85]. The signal from a point object is a Fresnel lens, and its illumination by a collimated coherent light beam creates two basic images – a real image and a virtual image (Fig. 3.1). The author also pointed out that the images formed by a Fresnel lens were identical to those created by correlation processing. He drew the conclusion that 'by reducing the optical system to only three lenses, we are, it appears, led to abolishing even these, as well as the correlation theory upon which all had been based' [85]. This was a radically new concept of SAR. The radar theory and the principles of optical processing were revised in terms of the holographic approach. Its key idea is that signal recording is not just a process of data storage, as in antenna or correlation theories; rather, it is the recording of a miniature hologram of the wave field along the carrier's trajectory. For this, the recording is made on a two-dimensional (2D) optical transparency (the 'azimuth–range' plane), or a complex reflected signal is recorded in two dimensions. The first procedure uses a photographic film to record the range across the film and the azimuth (pathway range) along its length. In optical recording, the image is reconstructed in the same way as in conventional off-axial holography, that is, along the carrier's pathway line.
If a microwave hologram is recorded optically, its illumination by coherent light reproduces a miniature optical representation of the radar wave field. Therefore, the object's resolution is determined by the size of the hologram recorded along the pathway line, rather than by the aperture
Figure 3.1  A scheme illustrating the focusing properties of a Fresnel zone plate: 1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image, 4 – real image and 5 – zeroth-order diffraction
of a real radar antenna. The range resolution is provided by the pulse modulation of radiated signals. Since the holographic approach to SAR is applicable only to its azimuthal channel, the authors of the work [85] termed it quasi-holographic. In his later publications on this subject, E. N. Leith pointed out that aperture synthesis should be described as a microwave analogue of holography to which holographic methods could be applied, rather than as holography proper. Thus, a combination of SAR and a coherent optical processor represents a ‘quasi-holographic’ system, whose azimuthal resolution is achieved by holographic processing of the recorded wave field. Both E. N. Leith and F. L. Ingalls believe [86] that this representation is most flexible and physically clear. The use of the holographic approach for SAR analysis has so far been restricted to optical processors [87]. There is a suggestion to represent the entire SAR azimuthal channel as a holographic system [143]. In that case the initial stage of the holographic process in this channel (the formation of a microwave hologram) is the recording of the field scattered by an artificial reference source. The second stage (the image reconstruction) is described in terms of physical optics.
3.1.1 The principles of hologram recording

Let us consider a SAR borne by a carrier moving with velocity v along the x′-axis (Fig. 3.2). The SAR antenna has the length LR (the real aperture) and the beam width ϑR along the pathway line. The SAR irradiates the viewed strip with short pulses and makes consecutive time recordings of the probing signal reflected by the object. The scattered field amplitude and phase are registered by a coherent (synchronous) detector due to the interference of the reference and received signals. This produces a multiplicative microwave hologram (Chapter 2). The role of the reference wave is
played by a signal directly supplied to the synchronous detector; this is the so-called 'artificial' reference wave. We shall now describe the receiving device of the synthetic aperture, which records a hologram on a cathode-ray tube display. Usually, a hologram is recorded by modulating the tube radiation intensity, with the photofilm moving with velocity vf relative to the screen. For objects with different ranges Ro from the pathway line, one can use a pulse mode and vertical display scanning. As a result, the device records a series of one-dimensional (1D) holograms having different positions along the film width, depending on the distance to the respective objects. Suppose all the objects are located at a distance Ro to the pathway line. For simplicity, the radiated signal can then be taken to be continuous, because the pulsed nature of the radiation is important only for the analysis of range resolution. Figure 3.3 shows an equivalent scheme of 1D microwave hologram recording. A synthetic aperture is located at point Q with the coordinates (x′, 0) (x′ = vt, where t is the current moment of time), and a hypothetical source of the reference wave is at point R(xr, zr). The source functions in a way similar to that of the reference wave during the hologram recording (Fig. 1.2). The point P(xo, zo = −Ro) belongs to the object being viewed along the xo-axis.

Figure 3.2  The basic geometrical relations in SAR

Figure 3.3  An equivalent scheme of 1D microwave hologram recording by SAR

If
the object's scattering characteristics are described by the function F(xo) and its size is small compared with Ro, one can use the well-known Fresnel approximation to define the diffraction field along the x′-axis [103]:

Uo(x′) = Co (exp(ik1 Ro)/√(λ1 Ro)) ∫_{−∞}^{∞} F(xo) exp[ik1(xo − x′)²/Ro] dxo,  (3.1)

where k1 = 2π/λ1 is the wave number and Co is a complex-valued constant. The complex amplitude of the reference wave is

Ur(x′) = Ar exp(iϕr).
Normally, this is a plane wave, i.e. ϕr = k1 x′ sin ϑ, where ϑ is the 'incidence' angle of the wave on the hologram. The inclination of the reference wave is equivalent to a reference signal with a linear phase shift, providing the introduction of the carrier frequency ωx = k1 sin ϑ. A coherent registration gives a hologram described as

h(x′) = Re(Ur*(x′) Uo(x′))  (3.2)

or h(x′) = Im(Ur*(x′) Uo(x′)). It follows from Eq. (3.1) that a synthetic aperture generally forms 1D Fresnel holograms. The following three types of hologram are possible, depending on the relation between the object's size, the synthetic aperture length Ls = vT (T is the recording time, or the time of the aperture synthesis) and the range Ro.

1. If the condition Ro ≫ k1(xo²)max/2 holds true (here (xo)max defines the maximum size of the object), we get the Fraunhofer approximation instead of Eq. (3.1):

uo(x′) = Co (exp(ik1 Ro)/√(λ1 Ro)) exp[ik1(x′)²/Ro] ∫_{−∞}^{∞} F(xo) exp(−i2k1 x′xo/Ro) dxo.  (3.3)

The hologram we obtain is of the Fraunhofer type.

2. If the condition Ro ≫ k1(x′)²max/2 = k1 Ls²/8 is valid, we can eliminate the term exp[ik1(x′)²/Ro] from Eq. (3.3) to obtain a Fourier hologram, which is described by the condition

Ls ≤ 2√(λ1 Ro/π).  (3.4)

3. For a point object, we have F(xo) ∼ δ(x′ − xo), and the Fraunhofer diffraction condition is immediately fulfilled.
Using the filtering properties of the δ-function and Eq. (3.3), we arrive at the following equation for the hologram (with the constant phase terms ignored):

h(x′) = Ar Ao cos(ωx x′ − k1(x′)²/Ro + 2k1 x′xo/Ro),  (3.5)

where Ao is the scattered wave amplitude at the receiver input. If Eq. (3.4) holds, expression (3.5) yields

h(x′) = Ar Ao cos(ωx x′ + 2k1 x′xo/Ro).  (3.6)
Thus, a synthetic aperture forms either a Fraunhofer or a Fourier hologram of a point object. The former looks like a 1D Fresnel zone plate, in accordance with Eq. (3.5), and the latter is a 1D diffraction grating with a constant step, in accordance with Eq. (3.6). During the photographic recording, the holograms are scaled by substituting the x′ -coordinate by the x-coordinate, where x = x′ /nx and nx = v/vf . A constant term ho (‘displacement’) is added to Eqs (3.5) and (3.6) for the photographic registration of the bipolar function h(x′ ).
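Equation (3.5) is readily visualised: sampled along the aperture, the point-object hologram is a 1D Fresnel zone plate whose local fringe frequency varies linearly with x′. A minimal sketch of this chirp structure (all radar parameters are invented for the example):

```python
import numpy as np

# Point-object hologram of Eq. (3.5):
# h(x') = Ar*Ao*cos(wx*x' - k1*x'^2/Ro + 2*k1*x'*xo/Ro)
lam1 = 0.03                      # radar wavelength, m (X-band, assumed)
k1 = 2 * np.pi / lam1
Ro = 5000.0                      # range to the object line, m (assumed)
xo = 2.0                         # object azimuth coordinate, m (assumed)
wx = 200.0                       # carrier space frequency, rad/m (assumed)
Ls = 50.0                        # synthetic aperture length, m (assumed)

x = np.linspace(-Ls / 2, Ls / 2, 4096)
phase = wx * x - k1 * x**2 / Ro + 2 * k1 * x * xo / Ro
h = np.cos(phase)                # the zone-plate fringe pattern

# The phase is quadratic in x', so the local fringe frequency is linear in x'
local_freq = np.gradient(phase, x, edge_order=2) / (2 * np.pi)  # cycles/m
slope = np.polyfit(x, local_freq, 1)[0]

# Expected slope of the fringe frequency: -2/(lam1*Ro)
print(np.isclose(slope, -2 / (lam1 * Ro)))   # → True
```

The linear frequency sweep is exactly what makes the record behave as a (cylindrical) lens under coherent illumination, as discussed next.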
3.1.2 Image reconstruction from a microwave hologram

It is reasonable to discuss the next step in the holographic process in terms of physical optics. Illumination of a photographic transparency by a plane coherent wave with the wave number k2 produces a diffraction field, whose distribution at a distance ρ from the hologram is described by the Huygens–Fresnel integral:

V(\xi) = \frac{e^{i(k_2 \rho - \pi/4)}}{\sqrt{\lambda_2 \rho}} \int_{-v_f T/2}^{v_f T/2} h(x)\, e^{i(k_2/2\rho)(x-\xi)^2}\, dx.   (3.7)
The substitution of Eq. (3.5) into Eq. (3.7) gives V(ξ) = Vo(ξ) + V1(ξ) + V2(ξ), where Vo(ξ) is the zeroth order corresponding to the displacement ho, and V1(ξ) and V2(ξ) are the functions of the reconstructed images of a point object. These functions are equal to

V_{1,2}(\xi) = C_o \int_{-v_f T/2}^{v_f T/2} e^{i[(k_2/2\rho) \pm (k_1 n_x^2/R_o)]x^2}\, e^{-i[(k_2 \xi/\rho) \pm (\omega_x n_x + 2k_1 n_x x_o/R_o)]x}\, dx.   (3.8)
The positions of the images along the z-axis can be found from the condition that the first (quadratic) exponent in Eq. (3.8) vanish:

\rho = \pm\frac{\lambda_1 R_o}{2\lambda_2 n_x^2}.   (3.9)

Obviously, one image is virtual and the other real.
By integrating Eq. (3.8) with the condition of Eq. (3.9), we obtain

V_{1,2}(\xi) = C_o\, \frac{\sin\{[\omega_x n_x + (2k_1 n_x^2/R_o)((x_o/n_x) - \xi)]\, v_f T/2\}}{[\omega_x n_x + (2k_1 n_x^2/R_o)((x_o/n_x) - \xi)]\, v_f T/2}.   (3.10)
Therefore, the image of a point object is described by a function of the sin ν/ν type. It follows from Eq. (3.10) that the image position along the x-axis is defined by the zeroth value of the argument ν, or

\xi = x_o/n_x + \omega_x R_o/(2k_1 n_x).   (3.11)
The first term in Eq. (3.11) corresponds to the real coordinate of the object and the second one describes the carrier frequency. Images of two point objects having the same coordinates xo (xo1 = xo2) but different ranges R1 and R2 (R1 ≠ R2) are characterised by different coordinates ξ1 and ξ2 (ξ1 ≠ ξ2). Therefore, the use of the carrier frequency leads to geometrical distortions of the coordinates of point objects. We should recall that the use of the carrier frequency in the first generation of SARs (with an optical processor) was necessitated by the application of Leith's off-axial holography in order to separate the images from the zeroth order. The carrier frequency becomes, however, unnecessary in digital image reconstruction from complex holograms (Chapter 2). Let us now discuss the SAR resolving power. According to Rayleigh's criterion, two points are thought to be separated if the major maximum of one of the sin x/x functions coincides with the first zero of the other function. This gives us the resolving power

\Delta x' = x'_1 - x'_2 = \pi R_o/(k_1 L_S).   (3.12)
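As a numerical sanity check of Eq. (3.12), here is a hedged Python sketch (the X-band parameter values are assumptions, not figures from the book). Taking the longest usable aperture Ls = λ1Ro/LR — the stretch of path over which a real antenna of length LR keeps the target inside its beam — reproduces the classical focused-SAR limit LR/2:

```python
import math

def focused_azimuth_resolution(lam1, R_o, L_s):
    """Rayleigh azimuth resolution of a focused aperture, Eq. (3.12):
    dx' = pi*Ro/(k1*Ls) = lam1*Ro/(2*Ls)."""
    k1 = 2.0 * math.pi / lam1
    return math.pi * R_o / (k1 * L_s)

lam1 = 0.03   # wavelength, m (assumed X-band example)
R_o = 10e3    # range, m (assumed)
L_R = 2.0     # real antenna length, m (assumed)

# Longest usable synthetic aperture for this real antenna
L_s = lam1 * R_o / L_R
dx = focused_azimuth_resolution(lam1, R_o, L_s)

# This reproduces the classical focused-SAR limit dx = L_R/2.
assert abs(dx - L_R / 2.0) < 1e-12
print(f"Ls = {L_s:.0f} m, azimuth resolution = {dx:.2f} m")
```

Note that the range Ro cancels out of the final figure: a longer range permits a proportionally longer aperture.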
The aperture creating a Fraunhofer hologram with Eq. (3.5) was termed a 'focused aperture' in classical SAR theory. The focusing here is treated as a compensation for the quadratic phase shift in Eq. (3.5) during image reconstruction, the compensation being made with the transform in Eq. (3.7). The case of an 'unfocused aperture' is described by Eq. (3.6) for the Fourier hologram, and the processing is performed with the Fourier transform of the hologram function:

V(\xi) = C_o \int_{-v_f T/2}^{v_f T/2} h(x)\, e^{-i k_2 x \xi/\rho}\, dx.   (3.13)
Eqs (3.13) and (3.6) yield

V_{1,2}(\xi) = C_o\, \frac{\sin\{[(k_2/\rho)\xi \pm (\omega_x n_x + (2k_1/R_o) n_x x_o)]\, v_f T/2\}}{[(k_2/\rho)\xi \pm (\omega_x n_x + (2k_1/R_o) n_x x_o)]\, v_f T/2}.   (3.14)

Here, ρ can be taken to be the focal length of the Fourier lens. The image position for a point object is defined as

\xi = \pm\frac{\rho}{k_2}\left(\omega_x n_x + \frac{2k_1 n_x x_o}{R_o}\right).   (3.15)
Images of two point objects with the same coordinates xo but different ranges R1 and R2 will also be distorted because of the dependence of ξ on Ro. The resolving power from Rayleigh's criterion is

\Delta x' = x'_1 - x'_2 = \pi R_o/(k_1 n_x v_f T).   (3.16)
With Eq. (3.4), the permissible limit for this parameter in SAR with unfocused processing has the value

\Delta x' = \sqrt{\pi \lambda_1 R_o}/4 \approx 0.44\sqrt{\lambda_1 R_o}.   (3.17)
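A short hedged sketch (assumed example values) showing that Eq. (3.17) follows directly from the Fourier-hologram aperture bound of Eq. (3.4) combined with Eq. (3.12):

```python
import math

lam1, R_o = 0.03, 10e3   # assumed example values (X-band, 10 km)

# Aperture limit for a Fourier (unfocused) hologram, Eq. (3.4)
L_s = 2.0 * math.sqrt(lam1 * R_o / math.pi)

# Resolution of this aperture from Eq. (3.12)
k1 = 2.0 * math.pi / lam1
dx_unfocused = math.pi * R_o / (k1 * L_s)

# Eq. (3.17): dx' = sqrt(pi*lam1*Ro)/4 ~ 0.44*sqrt(lam1*Ro)
assert abs(dx_unfocused - math.sqrt(math.pi * lam1 * R_o) / 4.0) < 1e-9
print(f"Ls = {L_s:.1f} m, unfocused resolution = {dx_unfocused:.2f} m")
```

Unlike the focused case, the unfocused resolution degrades as the square root of range rather than staying fixed at LR/2.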
Note that a hologram is written on a photofilm (in the case of an optical processor) or in a memory device (in the case of digital recording) continuously during the flight. For this reason, the focused or unfocused aperture regime is prescribed only at the reconstruction stage.

Synthetic aperture radar can also be considered in terms of geometrical optics, which implies phase structure analysis of a hologram. One of the expressions in (3.2) can be re-written as h(x′) = Ar Ao cos(ϕr − ϕo), where ϕo is the phase of the field scattered by the object. For a point object located at point P (Fig. 3.3), we can write two expressions taking into account the SAR wave propagation to the object and back: ϕo = −2k1(PQ − PO), ϕr = −2k1(RQ − RO), where RO = Rr is the distance between a hypothetical reference wave source and the origin of coordinates. By expanding ϕr and ϕo into series, we get for the first-order terms

\varphi_r - \varphi_o \cong -\frac{4\pi}{\lambda_1}\left[\left(\frac{1}{2R_r} - \frac{1}{2R_o}\right)(x')^2 - \left(\frac{x_r}{R_r} - \frac{x_o}{R_o}\right)x'\right].   (3.18)

In a simple case of xo = 0, xr = 0 and Rr = ∞ (a plane reference wave without linear phase shift), we have ϕr − ϕo = 4π(x′)²/2λ1Ro. The space frequency in the interference pattern is

\nu(x') = \frac{1}{2\pi}\,\frac{\partial(\varphi_r - \varphi_o)}{\partial x'} = 2x'/\lambda_1 R_o.   (3.19)

At a certain value x′cr = (LS)max/2, the frequency ν may exceed the resolving power of the field recorder, which is defined in this case by the real aperture angle and is equal to νcr = 1/LR. From this we have the condition

(L_S)_{max} \le \lambda_1 R_o/L_R = \vartheta_R R_o.   (3.20)
The substitution of (LS)max into Eq. (3.12) gives a classical relation for the attainable limit of SAR resolution:

\Delta x'_{lim} = L_R/2.

The pulsed nature of the signal allows determination of such an important radar parameter as the minimum repetition frequency of the probing pulses, χmin. Obviously, the pulse mode is similar to hologram discretisation. The distance between two adjacent records, Δx′ = v/χ, must meet the condition

\Delta x' \le [2\nu(x'_{cr})]^{-1}.
This condition and Eq. (3.19) give χmin = 2v/LR. By following the method suggested in Reference 92 we can obtain relations for the phase deviation of the reconstructed wave from the spherical shape (third-order wave aberrations):

\Delta\varphi^{(3)} = -\frac{k_2}{2}\left(D_0\,\frac{x^4}{4} - D_1 x^3 + D_2 x^2\right),   (3.21)

where

D_k = \frac{x_c^k}{R_c^3} - \frac{x_I^k}{R_I^3} \pm \frac{2\mu}{m^{4-k}}\left(\frac{x_o^k}{R_o^3} - \frac{x_r^k}{R_r^3}\right),

xc and Rc are the coordinates of the reconstructing wave source, µ = λ2/λ1 and m = n⁻¹x. The image coordinates for a point object are

\frac{1}{R_I} = \frac{1}{R_c} \pm \frac{2\mu}{m^2}\left(\frac{1}{R_o} - \frac{1}{R_r}\right), \qquad \frac{x_I}{R_I} = \frac{x_c}{R_c} \pm \frac{2\mu}{m}\left(\frac{x_o}{R_o} - \frac{x_r}{R_r}\right).

The value k = 0 is for the spherical aberration, k = 1 for the coma and k = 2 for the astigmatism. These relations can be used to find the maximum size of the synthetic aperture, (LS)max, from Rayleigh's formula (wave aberrations at the hologram edges should not be larger than λ2/4). Since the spherical aberration is the largest in order of magnitude, we obtain

(L_S)_{max} = 2\sqrt[4]{\frac{\lambda_1 R_o^3}{1 - 4\mu^2/m^2}}.   (3.22)

For typical conditions of SAR performance, the value of (LS)max calculated from Eq. (3.20) is smaller than (LS)max found from Eq. (3.22), that is, the effect of wave aberrations is inessential.
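The two aperture bounds can be compared numerically. Below is a hedged Python sketch — all parameter values, including the hologram scale factor nx, are illustrative assumptions — checking that the recorder-resolution bound of Eq. (3.20) is indeed the binding one:

```python
import math

# Comparing the two limits on synthetic-aperture length; values are assumed.
lam1 = 0.03        # radar wavelength, m
lam2 = 633e-9      # optical reconstruction wavelength, m
R_o = 10e3         # range, m
L_R = 2.0          # real antenna length, m
n_x = 1e4          # hologram scale factor v/vf (assumed)

mu, m = lam2 / lam1, 1.0 / n_x

# Recorder-resolution bound, Eq. (3.20)
Ls_recorder = lam1 * R_o / L_R

# Spherical-aberration (Rayleigh lambda/4) bound, Eq. (3.22)
Ls_aberration = 2.0 * (lam1 * R_o**3 / (1.0 - 4.0 * mu**2 / m**2)) ** 0.25

# For typical SAR conditions the recorder bound is the tighter one,
# i.e. wave aberrations are inessential:
assert Ls_recorder < Ls_aberration
print(f"Ls(3.20) = {Ls_recorder:.0f} m, Ls(3.22) = {Ls_aberration:.0f} m")
```

With these numbers Eq. (3.20) limits the aperture to about 150 m, while the aberration limit is several hundred metres, matching the text's conclusion.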
3.1.3 Effects of carrier track instabilities and object's motion on image quality

The carrier's trajectory instabilities are a major factor that can distort SAR images. The use of geometrical optics in the holographic approach provides a fairly simple estimation of the permissible trajectory deviations from a straight line. The object's wave phase ϕo(x′) can be written as

\varphi_o(x') = -2k_1\left\{\left[(z_o - g)^2 + (x' - x_o)^2\right]^{1/2} - R_o\right\},

where g = g(x′) is the trajectory deviation from the x′-axis. At Ro ≫ xo, x′ and g, the binomial expansion ignoring all terms of the order of g² gives an approximate expression for ϕo(x′):

\varphi_o(x') \cong -\frac{4\pi}{\lambda_1}\left[\frac{(x')^2 - 2x_o x'}{2R_o} - \frac{(x')^4 - 4x_o(x')^3 + 4x_o^2(x')^2}{8R_o^3} - \frac{z_o g}{R_o} + \frac{z_o g(x')^2 - 2z_o g x_o x'}{2R_o^3}\right].

The phase equation for a wave reconstructing one of the images has a standard form:

\varphi_I = \varphi_c \pm (\varphi_o - \varphi_r),   (3.23)

where ϕc is the phase of the reconstructing wave. On the other hand, ϕI can be written as

\varphi_I = -\frac{2\pi}{\lambda_2}\left[\frac{x^2 - 2x_I x}{2R_I} - \frac{x^4 - 4x_I x^3 + 4x_I^2 x^2}{8R_I^3}\right].   (3.24)
The phases ϕc and ϕr are described by expressions similar to (3.24). The phase differences between the respective third-order terms relative to 1/RI in Eqs (3.23) and (3.24) represent aberrations described as

\Delta\varphi = \Delta\varphi^{(3)} + \Delta\varphi_n^{(3)}.

The aberrations Δϕ(3) are defined by Eq. (3.21), and Δϕn(3) has the form

\Delta\varphi_n^{(3)} = -k_2\left(D_3\, g + D_4\, g x - D_5\, g x^2\right),   (3.25)

where

D_3 = \mp 2\mu z_o/R_o, \quad D_4 = \mp 2\mu z_o x_o/(m R_o^3), \quad D_5 = \mp \mu z_o/(m^2 R_o^3), \quad m = 1/n_x, \quad \mu = \lambda_2/\lambda_1,

and g is the trajectory deviation. The quantities D3, D4 and D5 describe the aberrations arising from the trajectory instabilities.
Equation (3.25), describing distortions in the hologram phase structure, can be used to calculate the compensating phase shift directly during the synthesis. For this, the SAR should be equipped with a digital signal processor. By applying Rayleigh's criterion to each term in Eq. (3.25), one can get the following conditions for the maximum permissible deviations of the carrier's trajectory:

g_3 \le \lambda_2/(4 D_3) = \lambda_1 R_o/(8 z_o) = \lambda_1/(8\cos\vartheta_o),   (3.26)

g_4 \le \lambda_2/(4 D_4 x_{max}) = \lambda_1 R_o^3/(4 L_S z_o x_o),   (3.27)

g_5 \le \lambda_2/(4 D_5 x_{max}^2) = \lambda_1 R_o^3/(z_o L_S^2).   (3.28)
Besides, if one knows the flight conditions and the carrier's characteristics, Eqs (3.26)–(3.28) can be used to find the constraints imposed on the parameter cos ϑo and on the maximum size of the synthetic aperture:

\cos\vartheta_o \le \lambda_1/(8g), \qquad L_{S\,max} \le \lambda_1 R_o^2/(4 g x_o), \qquad L_{S\,max} \le R_o\sqrt{\lambda_1/g}.
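A quick numerical illustration of Eqs (3.26)–(3.28) — the flight geometry below is an assumption chosen only to show the relative severity of the three tolerances:

```python
import math

# Permissible trajectory deviations, Eqs (3.26)-(3.28); sample values assumed.
lam1 = 0.03          # wavelength, m
R_o  = 10e3          # slant range, m
z_o  = 8e3           # height coordinate (cos(theta_o) = z_o/R_o), m
x_o  = 100.0         # target azimuth coordinate, m
L_s  = 150.0         # synthetic aperture length, m

cos_theta = z_o / R_o
g3 = lam1 / (8.0 * cos_theta)                    # Eq. (3.26)
g4 = lam1 * R_o**3 / (4.0 * L_s * z_o * x_o)     # Eq. (3.27)
g5 = lam1 * R_o**3 / (z_o * L_s**2)              # Eq. (3.28)

# The D3 condition is by far the tightest: millimetre-scale stability
assert g3 < g4 and g3 < g5
print(f"g3 = {g3*1e3:.1f} mm, g4 = {g4:.1f} m, g5 = {g5:.1f} m")
```

The millimetre-level g3 tolerance against the metre-level g4 and g5 tolerances is the numerical face of the statement below that only D3 severely restricts trajectory stability.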
Normally, SAR meets the conditions LS ≪ Ro and xo ≪ Ro. So D4 and D5 can be neglected, leaving only the factor D3, which severely restricts the trajectory stability (see Eq. (3.26)).

Effects arising in a synthetic aperture during the viewing of moving targets can be estimated in terms of physical optics. Suppose a point object moves radially (along the z-axis) at a velocity vo, such that its displacement is smaller than the range resolution for the synthesis time T. Then the equation for the hologram, ignoring constant phase terms, is

h(x) \sim \cos\left[\omega_x n_x x + 2k_1\frac{v_o}{v} n_x x + 2k_1\frac{n_x x_o}{R_o} x - k_1\frac{n_x^2 x^2}{R_o}\left(1 - \frac{v_o^2}{v^2}\right)\right].   (3.29)

The substitution of Eq. (3.29) into (3.7) gives a condition for viewing the focused image:

\rho = \pm\frac{k_2 R_o}{2 k_1 n_x^2}\left(1 + \frac{v_o^2}{v^2}\right).
Since vo/v ≪ 1, the image can be viewed practically in the same plane as that for an immobile object. Keeping this in mind, we can obtain, after the integration, a function describing one of the reconstructed images:

V(\xi) = C_o\, \frac{\sin\{[\omega_x n_x + 2k_1(v_o/v) n_x + (2k_1 n_x/R_o)(x_o - n_x \xi)]\, v_f T/2\}}{[\omega_x n_x + 2k_1(v_o/v) n_x + (2k_1 n_x/R_o)(x_o - n_x \xi)]\, v_f T/2}.

The image position is defined as

\xi = \frac{x_o}{n_x} + \frac{\omega_x R_o}{2 k_1 n_x} + \frac{R_o}{n_x}\,\frac{v_o}{v}.
Clearly, the object's motion is equivalent to the use of an additional carrier frequency at the recording stage, which causes the image shift. The optical processor deals with a real image recorded on a photofilm. The recording field on the film is limited by a diaphragm cutting off the background. The value of vo may become so large that no image will be recorded because of the shift.

The object's motion in the azimuthal direction (along the x′-axis) at a velocity vo is equivalent to a change in the SAR's flight velocity. Then Eq. (3.9) describing the position of the focused image along the z-axis can be re-written as

\rho' = \pm\lambda_1 R_o/(2\lambda_2 {n'_x}^2) = \pm\lambda_1 R_o v^2/[2\lambda_2 n_x^2 (v - v_o)^2],

where n′x = (v − vo)/vf. Therefore, the object's motion along the x′-axis changes the focusing conditions by the value

\delta\rho = \rho' - \rho = 2\rho\,\frac{v_o}{v}\left(1 - \frac{v_o}{2v}\right)\left(1 - \frac{v_o}{v}\right)^{-2},   (3.30)

where ρ is found from Eq. (3.9). If the condition vo ≪ v is fulfilled, we have

\delta\rho \approx 2\rho v_o/v.   (3.31)
Equation (3.30) yields

v_o = v\left[1 - \sqrt{1 - \delta\rho/(\rho + \delta\rho)}\right].

On the other hand, a simple geometrical consideration gives the following relation for the resolving power of SAR along the z-axis (longitudinal resolution):

\Delta\rho = 2(\Delta x' v_f)^2/(\lambda_2 v^2) = 2(\Delta x')^2/(\lambda_2 n_x^2).   (3.32)
The focusing depth Δρ is defined as the focal plane shift along the z-axis by a distance at which the azimuthal resolution Δx′ becomes twice as poor as the diffraction limit in Eq. (3.12). The viewing of a focused image of an object moving at a velocity vo requires an additional focusing of the optical processor. The object's velocities that require the focusing can be found from the condition Δρ < δρ, where δρ is given by Eq. (3.31). Using Eqs (3.9) and (3.32), we get vo > 2v(Δx′)²/λ1Ro. At lower velocities, there is no need to re-focus the processor, and the poorer image quality may be assumed to be inessential.

To conclude Section 3.1, we should like to emphasise the following. The SAR operation principles can be described by conventional methods (Chapter 2) that are still widely used [73] or with a holographic approach representing the side-looking synthetic aperture and the processor as an integral system for recording and reconstructing the wave field. The analysis of the aperture synthesis can be based on the well-elaborated principles of holography as well as on physical and geometrical optics. The examples we have discussed support the physical clarity of the holographic approach and its value for SAR analysis. We can get a better insight into the
mechanisms of image formation by SAR without relying on Doppler frequencies of reflected signals or on correlation theory.
3.2 Front-looking holographic radar

The operation principle of a front-looking holographic radar was discussed in Chapter 2. A high resolution across the pathway line (Fig. 3.4) is provided by the multibeam antenna pattern of a large receiving antenna array located, say, along the aircraft wings [72]. The resolution along the pathway line is achieved by the aperture synthesis. There is another radar design, in which the desired transversal resolution is provided by a phased antenna array mounted under the fuselage and the longitudinal resolution by a synthetic aperture [81,82].
3.2.1 The principles of hologram recording

A coherent transmitter (Fig. 3.5) generates a continuous or pulsed signal (to decouple the transmitter and the receiver) and illuminates the desired survey zone under the aircraft. The receiving antenna represents a linear or phased array of numerous receivers. The amplitude and phase of the reflected signal are recorded by each array element for the time Ts, synthesising a 2D aperture of size Xs × Y along the trajectory segment Xs = vTs. Signals at the receiver output are saved by a memory unit, for example, on a photofilm [81]. The film record can be regarded as a 2D plane optical hologram equivalent to a microwave hologram with the size Xs × Y (Fig. 3.4). If the radar has an optical processor, it reconstructs the wave front recorded on the optical hologram to produce an optical image of the earth surface within the view zone. Thus, the operation principle of this type of radar is totally holographic, and it is reasonable to call it a front-looking holographic radar [72,81]. Since it is an analogue of a 2D optical holographic system, it produces a 3D image.

Figure 3.4 The viewing field of a holographic radar

Figure 3.5 A schematic diagram of a front-looking holographic radar

Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as a function of the angle ϕ

The resolution of a holographic radar can be examined by analysing the uncertainty function [72]. The slicing of this function into equal power levels at the point of 0.7 gives an approximate radar resolution:

\delta y = 0.88\,\lambda_1 H/(Y \sin\varphi),   (3.33)

\delta x = 0.45\,\lambda_1 H/(X_s \sin^3\varphi),   (3.34)

\delta z = 7\lambda_1 H^2/(2X_s^2 \sin^3\varphi + Y^2 \sin^3\varphi),   (3.35)
where Xs is the synthetic aperture length, ϕ = 90◦ − α. Figures 3.6 and 3.7 show the dependence of δx and δz on the angle ϕ, plotted from the following initial parameters: λ1 = 1.78 cm, H = 300 m, Y = 1 m and Xs = 30 m. One can see that a holographic radar possesses a fairly large resolving power. It follows from Eqs (3.33), (3.34) and (3.35) that in addition to the ‘conventional’ resolution along and across the pathway line, a holographic radar has a longitudinal
Figure 3.7 The resolution of a front-looking holographic radar along the z-axis as a function of the angle ϕ
resolution δz even when its signal is continuous. This is due to the fact that a hologram contains information about the three dimensions of the object, including the longitudinal range (Chapter 2).
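Equations (3.33) and (3.34) can be checked directly against the parameters quoted for Figs 3.6 and 3.7. A minimal Python sketch (the δz expression is omitted here) reproducing the trend of Fig. 3.6:

```python
import math

# Resolution of the front-looking holographic radar, Eqs (3.33)-(3.34),
# with the parameters quoted for Figs 3.6 and 3.7.
lam1, H, Y, Xs = 0.0178, 300.0, 1.0, 30.0   # m

def delta_y(phi):
    return 0.88 * lam1 * H / (Y * math.sin(phi))

def delta_x(phi):
    return 0.45 * lam1 * H / (Xs * math.sin(phi) ** 3)

# Resolution is best at phi = 90 deg and degrades as phi shrinks,
# reproducing Fig. 3.6 (about 0.08 m at 90 deg, about 2 m at 20 deg).
assert delta_x(math.radians(20.0)) > delta_x(math.radians(90.0))
print(f"dx(90deg) = {delta_x(math.radians(90)):.3f} m, "
      f"dx(20deg) = {delta_x(math.radians(20)):.2f} m, "
      f"dy(90deg) = {delta_y(math.radians(90)):.2f} m")
```

The steep sin³ϕ dependence of δx explains why the curve of Fig. 3.6 rises so quickly towards small grazing angles.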
3.2.2 Image reconstruction and scaling relations

Consider now the processes of wave front recording and processing in this type of radar. As the radar is an analogue of a 2D holographic system, it is natural to analyse it in terms of the holographic approach developed in Section 3.1, which treats the radar and the processing unit as an integral system. For this, we shall examine a generalised hologram geometry [50,51]. Suppose a wave comes from a microwave point source with the coordinates (xo, yo, zo), and a reference wave is generated by a point source with the coordinates (xr, yr, zr), as shown in Fig. 3.8(a). The wave field being recorded has the wavelength λ1. At the second stage, the recorded hologram is illuminated by a spherical wave with the wavelength λ2, coming from a point source with the coordinates (xp, yp, zp), as shown in Fig. 3.8(b). A paraxial approximation will then give the coordinates of the two reconstructed images:

x_i = \pm\frac{\lambda_2 z_i}{\lambda_1 z_o}\,x_o \mp \frac{\lambda_2 z_i}{\lambda_1 z_r}\,x_r - \frac{z_i}{z_p}\,x_p,

y_i = \pm\frac{\lambda_2 z_i}{\lambda_1 z_o}\,y_o \mp \frac{\lambda_2 z_i}{\lambda_1 z_r}\,y_r - \frac{z_i}{z_p}\,y_p,

z_i = \left(\frac{1}{z_p} \pm \frac{\lambda_2}{\lambda_1 z_r} \mp \frac{\lambda_2}{\lambda_1 z_o}\right)^{-1}.   (3.36)
The upper arithmetic signs in the equalities of (3.36) are for the virtual image and the lower ones are for the real image. When zi is positive, the image is virtual and is on the left of the hologram; when zi is negative, the image is real and is located on
Figure 3.8 Generalised schemes of hologram recording (a) and reconstruction (b)
the right of the hologram. At λ1 = λ2, zr = zo and zp > 0 both images are virtual, whereas at λ1 = λ2, zr = zo and zp < 0 they are real. One can show with Eqs (3.36) that holographic images of objects more complex than just a point, for example, consisting of two point sources, can be magnified or diminished relative to the respective object [50,51]. As the reconstructed wave front is 3D, the transverse (along the x- and y-axes) and the longitudinal (along the z-axis) magnifications obtained during the reconstruction can be analysed separately. From Eq. (3.36), the transverse magnifications are: for the real image (superscript 'r')

M_t^r = \frac{\partial x_i}{\partial x_o} = \frac{\partial y_i}{\partial y_o} = \frac{\lambda_2 z_i}{\lambda_1 z_o},   (3.37)
and for the virtual image (superscript 'v')

M_t^v = \frac{\partial x_i}{\partial x_o} = \frac{\partial y_i}{\partial y_o} = -\frac{\lambda_2 z_i}{\lambda_1 z_o},   (3.38)

or

M_t^{r,v} = \left(1 - \frac{z_o}{z_r} \mp \frac{\lambda_1 z_o}{\lambda_2 z_p}\right)^{-1}.   (3.39)
Here the superscript sign is for the real image and the subscript one is for the virtual image. The transverse magnification describes the ratio of the width and height of the image to the appropriate parameters of the real object. The longitudinal magnification can be found by differentiating the expression (3.36) for zi with respect to zo: for the real image

M_l^r = \frac{\partial z_i}{\partial z_o} = \frac{\lambda_2 z_i^2}{\lambda_1 z_o^2} \cong \frac{\lambda_1}{\lambda_2}(M_t^r)^2,   (3.40)

and for the virtual image

M_l^v = \frac{\partial z_i}{\partial z_o} = -\frac{\lambda_2 z_i^2}{\lambda_1 z_o^2} \cong -\frac{\lambda_1}{\lambda_2}(M_t^v)^2.   (3.41)
The longitudinal magnification of a virtual image is always negative. This means that the image always has a relief inverse to that of the object: it is pseudoscopic. Equations (3.37), (3.38) and (3.40), (3.41) show that the longitudinal and transverse magnifications are not identical, so the image of a 3D object is distorted. The point is that the object's relief cannot be reproduced exactly in an image. The condition for obtaining an undistorted image can be derived from the equality of the transverse and longitudinal magnifications, Mtr = Mlr, or

\frac{\lambda_2 z_i}{\lambda_1 z_o} = \frac{\lambda_2 z_i^2}{\lambda_1 z_o^2}.

Therefore, a geometrical similarity is possible only if the image is reconstructed at the site the object occupied during the recording. By substituting the coordinate zi = zo into Eq. (3.36), we can get an expression for the coordinates of the reconstructing source:

\frac{1}{z_p} = \left(1 \pm \frac{\lambda_2}{\lambda_1}\right)\frac{1}{z_o} \mp \frac{\lambda_2}{\lambda_1 z_r}.   (3.42)

Another way of obtaining an undistorted image is to change the scale of the linear hologram size by a factor of m at the transition from the recording to the reconstruction [50]. At m < 1 the hologram becomes smaller, while at m > 1 it becomes larger. The coordinates of an image reconstructed from a hologram diminished m times can
be found from

x_i = \pm m\frac{\lambda_2 z_i}{\lambda_1 z_o}\,x_o \mp m\frac{\lambda_2 z_i}{\lambda_1 z_r}\,x_r - \frac{z_i}{z_p}\,x_p;

y_i = \pm m\frac{\lambda_2 z_i}{\lambda_1 z_o}\,y_o \mp m\frac{\lambda_2 z_i}{\lambda_1 z_r}\,y_r - \frac{z_i}{z_p}\,y_p;

z_i = \left(\frac{1}{z_p} \pm m^2\frac{\lambda_2}{\lambda_1 z_r} \mp m^2\frac{\lambda_2}{\lambda_1 z_o}\right)^{-1}.   (3.43)
The transverse magnifications are: for the real image (superscript 'r')

M_t^r = \frac{\partial x_i}{\partial x_o} = \frac{\partial y_i}{\partial y_o} = m\frac{\lambda_2 z_i}{\lambda_1 z_o},   (3.44)

and for the virtual image (superscript 'v')

M_t^v = \frac{\partial x_i}{\partial x_o} = \frac{\partial y_i}{\partial y_o} = -m\frac{\lambda_2 z_i}{\lambda_1 z_o}.   (3.45)

The longitudinal magnifications are: for the real image

M_l^r = \frac{\partial z_i}{\partial z_o} = m^2\frac{\lambda_1}{\lambda_2}(M_t^r)^2,   (3.46)

and for the virtual image

M_l^v = \frac{\partial z_i}{\partial z_o} = -m^2\frac{\lambda_1}{\lambda_2}(M_t^v)^2.   (3.47)
The condition for obtaining an undistorted image also follows from the equality of the transverse and longitudinal magnifications:

z_o = m z_i.   (3.48)

The substitution of Eq. (3.48) into Eq. (3.43) yields the coordinates of the reconstructing source; in particular, for zp we have

\frac{1}{z_p} = \left(m \pm m^2\frac{\lambda_2}{\lambda_1}\right)\frac{1}{z_o} \mp m^2\frac{\lambda_2}{\lambda_1 z_r}.   (3.49)

If the recorded hologram is magnified m times, the reconstructed image is at a distance

z_i = \left(\frac{1}{z_p} \pm \frac{\lambda_2}{\lambda_1 z_r m^2} \mp \frac{\lambda_2}{\lambda_1 z_o m^2}\right)^{-1}   (3.50)
from this hologram, and the transverse magnifications are: for the real image

M_t^r = \frac{\lambda_2 z_i}{\lambda_1 z_o m},   (3.51)

and for the virtual image

M_t^v = -\frac{\lambda_2 z_i}{\lambda_1 z_o m}.   (3.52)
In the case of imaging a 3D object, the distortions due to the difference in the transverse and longitudinal magnifications will be minimal at Mt = λ2/λ1. Then Eqs (3.51) and (3.52) give Mt = Ml. The distortions of the real and virtual images due to the shift are also eliminated at a = b = 0 (Fig. 3.9), but then the images and the zeroth order overlap, a situation unacceptable for optical holography. In a holographic radar capable of recording a complex hologram (Chapter 2), there is no problem with decoupling a single image and the zeroth order.

We shall now turn to the limiting longitudinal resolution in a holographic radar and consider the recording and reconstruction schemes (Fig. 3.9(a) and (b), respectively) in order to define the longitudinal magnifications. Using a paraxial approximation, the authors of the work [142] have shown that the minimal resolvable longitudinal distance for a reconstructed real image is written as

(d_r)_{min} \cong \Delta l_r \quad \text{at} \quad d \ll R_1,   (3.53)

where

\Delta l_r = l_r' - l_r'',

l_r' = \lambda_1 R_1 L_1 L_2/(\lambda' L_1 L_2 - \lambda' R_1 L_2 - \lambda_1 R_1 L_1),   (3.54)

l_r'' = \lambda_1 R_1 L_1 L_2/(\lambda'' L_1 L_2 - \lambda'' R_1 L_2 - \lambda_1 R_1 L_1),   (3.55)

and λ′ and λ′′ are the minimal and maximal wavelengths of the reconstructing beam. If the distance d is small compared to R1, the longitudinal magnification is Mlr = dr/d at d ≪ R1. Hence, we have

(d)_{min} \ge \Delta l_r\,(M_l^r)^{-1},   (3.56)

where Mlr = λ1λ2(L1L2)²/[λ2L1L2 − λ2R1L2 − λ1R1L1]² and λ2 = (λ′λ′′)^{1/2} is the average wavelength of the reconstructing source. Similar expressions can be derived for the reconstruction of a virtual image. The analysis we have made allows one to choose suitable recording and reconstruction procedures when one uses a holographic radar. Clearly, the parameters of these procedures are closely interrelated, so the radar and its processor should be regarded as an integral system.
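To see in numbers why the hologram scale factor m matters, here is a hedged Python sketch (all distances and wavelengths are assumed) evaluating Eq. (3.36) with the upper, virtual-image signs for an unscaled microwave hologram reconstructed optically:

```python
# Virtual-image position and magnifications from Eqs (3.36)-(3.41),
# upper signs; all numbers are assumed for illustration.
lam1 = 0.03        # recording (microwave) wavelength, m
lam2 = 633e-9      # reconstructing (optical) wavelength, m
z_o, z_r, z_p = 5e3, 1e4, 0.5   # object, reference, reconstruction distances, m

mu = lam2 / lam1
z_i = 1.0 / (1.0 / z_p + mu / z_r - mu / z_o)   # Eq. (3.36)
M_t = lam2 * z_i / (lam1 * z_o)                 # |Eq. (3.38)|
M_l = (lam1 / lam2) * M_t**2                    # |Eq. (3.41)|

# Because mu = lam2/lam1 is tiny, the image sits essentially at the
# reconstruction source and the magnification is negligible -- which is
# why the hologram must be rescaled (the factor m) before reconstruction.
assert abs(z_i - z_p) / z_p < 1e-6
print(f"z_i = {z_i:.6f} m, M_t = {M_t:.3e}, M_l = {M_l:.3e}")
```

The vanishing Mt and Ml obtained here are the quantitative motivation for diminishing the hologram (m < 1) and for the scaling relations (3.43)–(3.52).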
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for finding longitudinal magnifications: 1, 2 – point objects, 3 – reference wave source and 4 – reconstructing wave source
3.2.3 The focal depth

Consider now the focal depth of an image produced by a holographic radar. Chapter 2 discussed the problem of recording a 3D image in a 2D medium using a classical holographic approach. The image quality then depends on the focal depth of the image. The process of reconstruction gives the opportunity to obtain a 3D image of a scene. Following the reconstruction, the image is again recorded in a 2D medium, so the problem of focal depth arises once more. This parameter can be defined by analogy with the recommendations suggested in Reference 96.
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing wave source, 2 – real image of a point object and 3 – microwave hologram
The focal depth of a microwave holographic image is a longitudinal distance Δzi, along which the cross section of the beam reconstructing the virtual or real image of a point object is smaller than the resolution element Δxi, so that it is perceived as a point image (Fig. 3.10). A formula for the focal depth of a virtual image can be derived from Eqs (3.40) and (3.41) for the longitudinal magnification:

\Delta z_i = M_l^v\,\Delta z_o   (3.57)

or

\Delta z_i = -\frac{\lambda_1}{\lambda_2}(M_t^v)^2\,\Delta z_o.   (3.58)

With the relation for the transverse magnification (3.52), one can write

\Delta z_i = -\frac{\lambda_1}{\lambda_2}\,M_t^v\,\Delta x_i\,\frac{\Delta z_o}{\Delta x_o}.   (3.59)

The last factor in Eq. (3.59) can be expressed through the aperture angle αo in the object space:

\frac{\Delta x_o}{\Delta z_o} = \mathrm{tg}\,\alpha_o.   (3.60)

Then Eqs (3.59) and (3.60) yield

\Delta z_i = -\frac{\lambda_1}{\lambda_2}\,M_t^v\,\frac{\Delta x_i}{\mathrm{tg}\,\alpha_o}.   (3.61)

If the scale of the initial hologram is diminished m times, we have

\Delta z_i = -m^2\,\frac{\lambda_1}{\lambda_2}\,M_t^v\,\frac{\Delta x_i}{\mathrm{tg}\,\alpha_o}.   (3.62)
Let us now define the quantity Δxi. Although the resolution along the x- and y-axes is determined by different physical conditions, the resolution elements Δx and Δy must have the same values. Therefore, instead of Δxi one can use δx describing the resolution along the pathway line provided by the aperture synthesis. Then Eqs (3.62)
and (3.34) give

\Delta z_i = -\frac{0.45\,\lambda_1^2\,M_t^v H}{\lambda_2 X_s\,\mathrm{tg}\,\alpha_o \sin^3\varphi}.   (3.63)
A characteristic feature of this expression is that Δzi is inversely proportional to the synthetic aperture length Xs. It is also worth discussing some practical aspects of scaling in a holographic radar. Unlike SAR, this type of radar has no anamorphism, that is, the image planes coincide in azimuth and range. So there is no need to use special optics to eliminate anamorphism. However, the image proportions along the x- and y-axes do not coincide because the scaling coefficient in azimuth, Px, differs from that in range, Py. According to Reference 81, Px is defined as

P_x = v/V,   (3.64)
where v is the velocity of the transparency on which the hologram is recorded and V is the velocity of the antenna array. Along the y-axis, the scaling coefficient Py is

P_y = \frac{W}{2a},   (3.65)
where W is the transparency width and 2a is the double length of the antenna array. As a result, the holographic image appears to be defocused along the x- and y-axes. The image scale along these axes can be equalised by special optics – spherical or cylindrical telescopes. The optics suggested in Reference 81 can change the image scale from 4 to 25 times. The transversal and longitudinal scales of an image can be equalised by choosing a proper coefficient m. Therefore, the final values of the longitudinal magnification and focal depth can be found only after one has selected all the scaling coefficients Py, Px and m.

To conclude, we summarise the specific features of front-looking holographic radar systems.

1. It has been shown in Reference 74 that SAR systems have a serious limitation: when the view zone approaches the pathway line, the resolution in azimuth becomes much poorer. This makes it impossible to obtain quality images in the front view zone. In contrast, a holographic radar provides a high resolution directly under the aircraft.
2. Another essential advantage of a holographic radar is a high longitudinal resolution along the z-axis, even with a continuous signal, providing 3D relief images.
3. The 3D character of a holographic radar image is a basis for obtaining range contour lines, which can then be recalculated to get surface contours [81]. This operation mode is 'purely' holographic. In fact, it implements the principle of two-frequency holographic interferometry.
4. A high 3D quality of the image requires the use of a new parameter – the image focal depth – by analogy with optical systems.
5. The view field geometry in a holographic radar is equivalent to that of airborne infrared and optical devices, so it is possible to combine microwave images with infrared and optical images. Such fusion considerably increases the radar's capability to detect and identify targets.
3.3 A tomographic approach to spotlight SAR

3.3.1 Tomographic registration of the earth area projection

Today there are two practically valuable cases when tomographic algorithms can be used for the reconstruction of radar images: inverse aperture synthesis by rotating an object round its centre of mass (see Chapter 6) and aperture synthesis in a spotlight or telescopic mode [100]. We shall analyse the latter case. A microwave radar with a synthetic aperture borne by a carrier and operating in a spotlight mode has a real antenna oriented onto an earth area to be surveyed. The area is illuminated for a longer time than is normally done in stripe surface mapping [100], so this type of SAR has a greater resolving power than a conventional side-looking SAR.

Figure 3.11 shows the basic geometrical relations illustrating the spotlight mode. For simplicity, we shall consider a 2D case. Suppose that the coordinate origin is related to a certain point on the earth's surface; the x-axis is the range and the y-axis is the azimuth. During the carrier flight, a real antenna ray is incident onto this area at an angle ϑ to the x-axis. The SAR scans the target with wideband pulses, for example, linear frequency modulation (LFM) pulses of the Re S(t) type, where

S(t) = \begin{cases} e^{j(\omega_o t + \alpha t^2)}, & |t| \le \tau/2, \\ 0, & \text{otherwise}, \end{cases}   (3.66)
Figure 3.11 The basic geometrical relations for a spotlight SAR
where ωo is the SAR carrier frequency, 2α is the LFM slope and τ is the pulse duration. Note that the latter condition is not obligatory because the signal may have a narrow band. It is assumed that the target is in the far zone, and the microwave phase front in the target vicinity is planar. The signal reflected by a unit area of the surface at the point (xo, yo) is

r_o(t) = A\,\mathrm{Re}\left[g(x_o, y_o)\,S\!\left(t - \frac{2R}{c}\right)\right]dx\,dy,   (3.67)

where A is the amplitude coefficient accounting for the signal attenuation during the propagation; 2R/c is the time delay of the signal while it covers the distance R in both directions; g(x, y) = |g(x, y)| exp[jϕo(x, y)] is the density function, whose physical sense here is just the distribution of the earth surface reflectivity; and ϕo(x, y) is the signal phase shift due to the reflection. We also assume that the function g(x, y) remains constant within the given ranges of radiation frequencies and view angles ϑ. Normally, when the distance to the target is much larger than the target's size, elements of the ellipses in Fig. 3.11 may be regarded as segments of straight lines. Therefore, with Eqs (1.17) and (3.67) we can write down the total echo signal from all reflectors located within a narrow band normal to the u-axis and having the width du at u = uo:

r_1(t) = A\,\mathrm{Re}\left[p_\vartheta(u_o)\,S\!\left(t - \frac{2(R_o + u_o)}{c}\right)\right]du,

where Ro is the distance between the SAR and the target centre. The total signal from the area being surveyed is

r_\vartheta(t) = A\,\mathrm{Re}\left[\int_{-L}^{L} p_\vartheta(u)\,S\!\left(t - \frac{2(R_o + u)}{c}\right)du\right],   (3.68)
where L is the area length along the u-axis and A = const, which is valid at Ro ≫ L. In contrast to the classical situation presented in Fig. 1.6, the linear integral used for the projection is taken along the line normal to the microwave propagation direction. Now we substitute Eq. (3.66) for the LFM pulse into Eq. (3.68), simultaneously detecting the received signal with a couple of quadrature multipliers, and then we pass the output signals through low-frequency filtres. What we eventually get is the signal A cϑ (t) = 2
L
−L
2 4αu2 exp −j [ωo + 2α(t − τo )u] du, pϑ (u) exp j 2 c c
where τo = 2Ro /c
and
τ/2 + 2(Ro + L)/c ≤ t ≤ τ/2 + 2(Ro − L)/c.
(3.69)
The latter expression is the Fourier transform of the function p_ϑ(u) exp(j4αu²/c²), whose exponential factor can easily be eliminated: one finds the inverse Fourier transform of c_ϑ(t), multiplies the result by exp(−j4αu²/c²) and takes the Fourier transform again. This quadratic phase factor can quite often be neglected. Eventually, we have

    c_ϑ(t) = (A/2) P_ϑ((2/c)[ω_o + 2α(t − τ_o)]),                            (3.70)

where the time t satisfies Eq. (3.69). Therefore, if one uses LFM pulses, the demodulated signals received from every illumination direction are part of a 1D Fourier transform of the central projection of the earth area at the respective view angle. In other words, the processor output signal represents a Fourier image of the projection function (within the time interval considered), and the data are registered in Fourier space. In accordance with the projection theorem, the function (3.70) is a cross section, taken at the angle ϑ, of the 2D Fourier transform G(X, Y) of the desired density function g(x, y). It follows from Eq. (3.69) that the function P_ϑ(X) is defined in the range X_1 ≤ X ≤ X_2, where

    X_1 = (2/c)(ω_o − ατ + 4αL/c) ≅ (2/c)(ω_o − ατ),
    X_2 = (2/c)(ω_o + ατ − 4αL/c) ≅ (2/c)(ω_o + ατ).                         (3.71)

Since measurements are usually limited to a certain range of angles ϑ_min ≤ ϑ ≤ ϑ_max, the counts of G(X, Y) can be obtained at the polar grid points within a limited circular segment (the shaded region in Fig. 1.7). The inner and outer radii of the segment, X_1 and X_2, are proportional to the smallest (ω_o − ατ) and the largest (ω_o + ατ) frequencies of the LFM pulse. Further, one can employ classical algorithms based on interpolation and inverse Fourier transforms to reconstruct g(x, y). Before performing the latter procedure, it is useful to multiply the G function by a weight or 'window' function to reduce parasitic side lobes in the image.
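As a minimal numerical sketch of this interpolation-based reconstruction (not the authors' processor; all grid sizes, spectral radii and the scatterer position below are illustrative assumptions), one can build noise-free polar-format samples G(X, Y) of a single point scatterer, window them, regrid onto a rectangular grid and apply an inverse 2D FFT:

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative polar grid within a limited circular segment, cf. Eq. (3.71)
M, N = 64, 64                              # angular x radial sample counts
X1, X2 = 80.0, 120.0                       # inner/outer spectral radii (assumed)
th = np.linspace(-0.05, 0.05, M)           # narrow range of view angles, rad
r = np.linspace(X1, X2, N)                 # radial spatial frequencies
TH, R = np.meshgrid(th, r, indexing="ij")
X, Y = R * np.cos(TH), R * np.sin(TH)      # polar-grid nodes in Fourier space

x0, y0 = 0.15, -0.10                       # point scatterer position (assumed)
G = np.exp(-1j * (X * x0 + Y * y0))        # its noise-free Fourier samples

# Weight ('window') function to suppress parasitic side lobes
G *= np.outer(np.hamming(M), np.hamming(N))

# Interpolate the polar samples onto a rectangular grid (zero outside the segment)
gx = np.linspace(X.min(), X.max(), 128)
gy = np.linspace(Y.min(), Y.max(), 128)
GX, GY = np.meshgrid(gx, gy, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
Gc = (griddata(pts, G.real.ravel(), (GX, GY), fill_value=0.0)
      + 1j * griddata(pts, G.imag.ravel(), (GX, GY), fill_value=0.0))

# Inverse 2D FFT yields the image: one sharp dominant peak for a point target
img = np.abs(np.fft.fftshift(np.fft.ifft2(Gc)))
print(img.max() / img.mean())
```

The real and imaginary parts are interpolated separately here only to keep the interpolation call simple; the peak-to-mean ratio of the output magnitude is a crude indicator of the concentrated point response.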
3.3.2 Tomographic algorithms for image reconstruction

The next step in developing this algorithm is to perform a 2D inverse Fourier transform in polar coordinates. This can be done as follows. First, introduce the function

    F(x, y) = Σ_{(u,v)∈P} G(u, v) δ(x − u, y − v),

where P is a polar grid; δ(·) is the Dirac delta function; G = S·W, where S is a complex-valued function prescribed on P (the experimental data) and W is a real-valued weight function. We should also prescribe the real parameters a > 0, b > 0 and the integer parameters M > 0 and N > 0 such that the rectangular grid

    R = {(ma, nb) | −M/2 ≤ m < M/2, −N/2 ≤ n < N/2}

satisfies a discrete representation of the object sought for.
The quantity M × N is equal to the number of pixels in the image, each pixel having the size a × b. According to the sampling theorem, 1/a and 1/b are approximately equal to the size of the P region along the x- and y-axes, while 1/(Ma) and 1/(Nb) should equal the spacing between the grid nodes along the same axes. Thus, the P grid consists of approximately M radial lines and N pixels along each line. Note that in classical tomography we have M ≅ N, and the grid P includes about πM/2 radial lines and N pixels along each line. We can now estimate the inverse Fourier transform f of the function F across the region R:

    f(ma, nb) = ∫∫ F(x, y) E(xma + ynb) dx dy = Σ_{(u,v)∈P} G(u, v) E(uma + vnb),

with E(z) = exp(j2πz). A straightforward calculation of exact values of f in the region R using the last formula will require about M²N² elementary arithmetic operations. To estimate f in this region, however, one can employ conventional methods with a smaller number of operations, using the interpolation algorithm mentioned above and the convolution algorithm to be discussed below. There is also a fairly simple algorithm for a rigorous calculation of the values f(ma, nb) on the so-called homogeneous concentrically square polar grid, which requires about MN log₂(MN) operations. This polar grid P is described as

    P = {(u(i, k) = A(k) + iB(k), v(i, k) = C + kD) at 0 ≤ i < M, 0 ≤ k < N},

where A(k) = −(C + kD) tan(ϑ_o/2), B(k) = −2A(k)/(M − 1), ϑ_o is the angular size of the region, and C and D are selected real positive numbers. The values of the f function are found in two steps. First, for −M/2 ≤ m < M/2 and 0 ≤ k < N we find the function

    H(m, k) = E(m²aB(k)/2) Σ_{i=0}^{M−1} {G(i, k) E(i²aB(k)/2)} E(−(m − i)²aB(k)/2).

Second, for −M/2 ≤ m < M/2 and −N/2 ≤ n < N/2 we calculate the desired function

    f(ma, nb) = E(nbC) Σ_{k=0}^{N−1} {H(m, k) E(maA(k))} E(nk/N).
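To make the operation-count argument concrete, the sketch below evaluates the direct O(M²N²) sum f(ma, nb) = Σ G(u, v)E(uma + vnb) on a small polar grid for a point scatterer placed at a pixel node; the grid sizes, frequencies and the scatterer position are illustrative, not values from the text.

```python
import numpy as np

# Direct evaluation of f(ma, nb) = sum over polar nodes of G(u, v) E(uma + vnb),
# with E(z) = exp(j 2 pi z): the O(M^2 N^2) reference computation (toy sizes).
M, N = 16, 16
a = b = 1.0 / 8.0                          # pixel sizes (illustrative)

rad = np.linspace(3.0, 4.0, N)             # radial spatial frequencies (assumed)
ang = np.linspace(-0.4, 0.4, M)            # view angles, rad (assumed)
A, Rad = np.meshgrid(ang, rad, indexing="ij")
u, v = Rad * np.cos(A), Rad * np.sin(A)    # polar nodes (u, v) in Fourier space

m0, n0 = 3, -2                             # point scatterer at pixel (m0, n0)
x0, y0 = m0 * a, n0 * b
G = np.exp(-2j * np.pi * (u * x0 + v * y0))   # its samples on the polar grid

m = np.arange(-M // 2, M // 2)
n = np.arange(-N // 2, N // 2)
f = np.zeros((M, N), dtype=complex)
for i, mm in enumerate(m):                 # M*N output pixels ...
    for j, nn in enumerate(n):             # ... each an M*N-term sum
        f[i, j] = np.sum(G * np.exp(2j * np.pi * (u * mm * a + v * nn * b)))

im, jn = np.unravel_index(np.abs(f).argmax(), f.shape)
print(m[im], n[jn])                        # -> 3 -2: peak at the scatterer pixel
```

All phase terms add coherently only at the true pixel, so the magnitude maximum lands exactly at (m0, n0); the fast two-step factorisation above reduces this quartic cost to about MN log₂(MN) operations.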
Consider now a tomographic algorithm for the reconstruction of SAR images based on the convolution back projection (CBP) method. It employs the relation between the functions g(x, y) and G(X, Y) written in polar coordinates [34]:

    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_{−π/2}^{π/2} ∫_{−∞}^{∞} G(r cos ϑ, r sin ϑ)|r| exp[jrρ cos(Φ − ϑ)] dr dϑ.

With the projection theorem, the last expression can be re-written as

    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_{−π/2}^{π/2} ∫_{−∞}^{∞} P_ϑ(r)|r| exp[jrρ cos(Φ − ϑ)] dr dϑ.      (3.72)

The integral over the variable r can be interpreted as an inverse Fourier transform with the argument ρ cos(Φ − ϑ); by the convolution theorem, Eq. (3.72) reduces to

    g(ρ cos Φ, ρ sin Φ) = (1/2π) ∫_{−π/2}^{π/2} (P_ϑ ∗ k_r)(ρ cos(Φ − ϑ)) dϑ,                         (3.73)
where k_r is the Fourier transform of the function |r|. The algorithm used in computer-aided tomography (CAT) involves the calculation of the P_ϑ ∗ k_r convolution for each value of ϑ, followed by an approximate integration over the variable ϑ by summing up the results obtained. Since one measures the function P_ϑ(r), the reconstruction algorithm should be based on Eq. (3.72) rather than Eq. (3.73). It follows from Eq. (3.72) that P_ϑ(r) must be known for all r values, but it is clear from the foregoing (see Eq. (3.71)) that P_ϑ(r) is known only within a limited range of r values centred at r = 2ω_o/c. Besides, the circular segment of the P_ϑ function (Fig. 3.12) should be shifted towards the origin. With these remarks in mind, Eq. (3.72) can be reduced to

    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_{ϑ_min}^{ϑ_max} { ∫_0^{X_2 − X_1} P_ϑ(r + X_1)|r + X_1| W_1(r) exp[jrρ cos(ϑ − Φ)] dr } W_2(ϑ) exp[jX_1 ρ cos(Φ − ϑ)] dϑ,      (3.74)

where W_1(r) and W_2(ϑ) are additional weight functions [33]. The interpolation and convolution algorithms have been compared quantitatively. The comparison is based on two criteria: (1) the level of multiplicative noise (side lobes)

    R_MN = 10 lg(N²_oml / N²_iml),
where N_oml is the number of pixels outside the major lobe of a point scatterer's image and N_iml is the number of pixels inside the major lobe; and (2) the computation time and complexity, i.e. the number of elementary arithmetic operations to be performed. The value of R_MN for the convolution algorithm has been found to be −(30–40) dB. A similar result is obtained with the interpolation algorithm at a high interpolation order (8–16). The computational complexity of the first algorithm is about N³ (N × N is the number of pixels in the image) and that of the second is about kN² (k is a constant varying in proportion to the interpolation order). The computation time with the convolution algorithm is 3–5 times longer than with the interpolation algorithm. Its application is, however, preferred because it allows processing the primary data as they arrive (e.g. the internal integral in Eq. (3.74)) in real time for each projection individually. The convolution algorithm also lends itself to simultaneous (systolic) computation by a set of elementary processors, such as a multiplier, a summator and a saving register, which are not tightly coupled to one another.

There have been some attempts to design 'faster' tomographic algorithms using, for example, the Hankel transform. The principle of this algorithm is as follows. Because the functions g(ρ, Φ) and G(r, ϑ) are periodic with the period 2π, they can be expanded into Fourier series:

    g(ρ, Φ) = Σ_{n=−∞}^{∞} g_n(ρ) exp(jnΦ),
    G(r, ϑ) = Σ_{m=−∞}^{∞} G_m(r) exp(jmϑ),

where

    g_n(ρ) = (1/2π) ∫_{−π}^{π} g(ρ, Φ) exp(−jnΦ) dΦ,
    G_m(r) = (1/2π) ∫_{−π}^{π} G(r, ϑ) exp(−jmϑ) dϑ.

In addition, we can show that

    g_n(ρ) = 2π ∫_0^∞ r G_n(r) J_n(rρ) dr,                                   (3.75)

where J_n(·) is the Bessel function of the first kind of order n. This relation is known as the nth-order Hankel transform [103]. Apparently, these relations can be applied to the reconstruction of g from the known values of G. An important advantage of this algorithm is the use of data in a
polar format without interpolation. The Hankel transform takes the largest computational time; the available procedures for accelerating the computation are based on representing Eq. (3.75) as a convolution and on using an asymptotic representation of the Bessel function. The available tomographic algorithms for image reconstruction in spotlight SAR also include signal processing designs accounting for the wavefront curvature. These employ more complex transformations than just finding Fourier images. The 'efficiency' of such algorithms should be evaluated with the ill-posedness of the problem in mind. We should recall that a problem is considered ill-posed if it has no solution, or if the solution is ambiguous or unstable, that is, if it does not change continuously with the input data. It is the second circumstance that usually occurs in the case being discussed, because the experimental data fill only a small region of the transform space. Even if we assume that the G(X, Y) values are known over the whole polar grid, there is generally no sampling theorem for g(ρ, Φ) in the polar format. The tomographic approach allows estimation of all the major parameters of a spotlight SAR. In particular, the resolution was estimated as

    δx ≅ πc/(2αT),
    δy ≅ πc/[2ω_o sin(|ϑ_min| + |ϑ_max|)],

values coinciding with conventional radar estimates [100]. The conditions for the input data discretisation were also defined. Besides, requirements for the synthesis were formulated under which one can ignore the deviation of the projections from a straight line and their incoherence due to the wavefront curvature in the target vicinity. We have made the above analysis for a 2D case, neglecting the SAR's altitude. This circumstance does not, however, violate the generality of our treatment.
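The nth-order Hankel transform of Eq. (3.75) is easy to check numerically. The sketch below uses the zeroth-order transform in the common normalisation H{f}(ρ) = ∫₀^∞ r f(r) J₀(rρ) dr (Eq. (3.75) carries an extra 2π factor) and verifies the classical self-reciprocal pair for the Gaussian; the quadrature limits and sample counts are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv

# Zeroth-order Hankel transform H{f}(rho) = int_0^inf r f(r) J_0(r rho) dr,
# evaluated by simple trapezoidal quadrature on a truncated interval.
def hankel0(f, rho, r_max=20.0, n=4000):
    r = np.linspace(0.0, r_max, n)
    return trapezoid(r * f(r) * jv(0, r * rho), r)

# exp(-r^2/2) is self-reciprocal under this transform: H{f}(rho) = exp(-rho^2/2)
for rho in (0.0, 0.5, 1.0, 2.0):
    print(hankel0(lambda r: np.exp(-r**2 / 2), rho), np.exp(-rho**2 / 2))
```

The two printed columns agree to quadrature accuracy, confirming the transform pair; a production implementation would replace the quadrature by a convolution-based fast Hankel transform, as noted above.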
A correction for the altitude can easily be made by 'extending' the linear range by a factor of R_o/R, where R_o is the slant range to the target's centre and R is the slant range projection onto the earth plane. We should like to emphasise the following important difference between CAT systems and a SAR operating in the spotlight (telescopic) mode. In order to provide a high resolution, a CAT system must cover a much larger range of angles than a SAR, say, 360° against 6°. This can be understood in terms of image reconstruction from data obtained within a limited region of a 2D spatial spectrum. In this sense, the spectral region utilised by the SAR is shifted relative to the origin by 2ω_o/c (Eq. (3.71)), while the spectral region of a CAT system is not. We shall try to show why a high resolution can be achieved with a small aperture in SAR. We should first recall that resolution corresponds to the width of the major lobe of the pulse response, normally at the 3 dB level. The resolving power of both CAT and SAR systems depends only on the frequency band used in the 2D spectrum, and it should be independent of the carrier frequency ω_o, which is the frequency of the band shift.
To illustrate, the range resolution for the shaded region in Fig. 1.7 is inversely proportional to the frequency band width along the X-axis (or the u-axis), and the azimuthal resolution to that along the Y-axis (or the v-axis). If the number of point objects is large, the image quality becomes poor due to signal interference. This effect arises because the pulse response of the system, usually expressed as a 2D function sin x/x, contains a constant phase factor varying with the carrier frequency ω_o and the position of the point object. As is easy to see, the quality of a reconstructed image is independent of the ω_o variation provided that the function describing the object is complex-valued with a random uncorrelated phase. This means that the phases of signals reflected by different scattering centres are not correlated. The authors evaluated the image quality from a formula similar to that for the root-mean-square error. One can suggest that the process of SAR imaging meets this condition. As a result, the spectrum of the 'initial image' occupies a wide frequency band in the Fourier transform space, and the object's reflectivity can be reconstructed from a limited, shifted spectral region. This circumstance is similar to a fact well known in holography: the image of a diffusely scattering object can be reconstructed from any fragment of the hologram (Chapter 2). These aspects of image quality can be treated in a different way. The band width of spatial frequencies Δv, which defines the azimuthal resolution, 'grows' with the shift frequency (Fig. 1.7) as

    Δv = (|ϑ_min| + |ϑ_max|)(2ω_o/c).

For a CAT system, ω_o = 0 and

    Δv = (|ϑ_min| + |ϑ_max|)Δu,

where Δu ≅ 4αT/c ≪ 2ω_o/c. Therefore, in order to obtain a high azimuthal resolution, one must have information over the whole range of view angles, 360°. One can eventually say that the principal difference between the CAT and SAR systems is that the latter is coherent and can process complex signals. To conclude, the tomographic principle of synthetic aperture operation does not rely on the analysis of Doppler frequencies of the reflected signals. We shall turn to this point again in Chapters 5 and 6, where we describe the imaging of a rotating object by an inverse synthetic aperture. It will be shown that the holographic and tomographic approaches do not need an analysis of Doppler frequencies.
Chapter 4
Imaging radars and partially coherent targets
Remote sensing of the earth surface in the microwave frequency range is a rapidly developing field of fundamental and applied radio electronics [31,77]. It has already become a powerful method in many earth sciences, such as geophysics, oceanology, meteorology and resources survey. Among microwave sensors, side-looking synthetic aperture radars (SARs) are especially capable of providing high-resolution images of a background area at any time, irrespective of weather conditions. Extensive information has been obtained by airborne radars and radars carried by satellites and spacecraft: SEASAT-A and SIR (USA), RADARSAT (Canada), Almaz-1 (Russia), ERS and ENVISAT (European Space Agency), and Okean (Russia, Ukraine). A challenge to the radar scientist is the analysis of synthetic aperture imaging of extended targets. The various tasks of remote SAR sensing of the earth include the study of the ocean surface, sea currents, shelf zones, ice fields and many other problems [62]. Objects to be imaged are wind slicks, oil spills, internal waves, current boundaries, etc. Some of these targets are characterised by motions with unknown parameters, so they are considered to be partially coherent. This chapter focuses on theoretical problems of SAR imaging of such targets, while their practical aspects are discussed in Chapter 9. In contrast to a conventional radar, which measures instantaneous amplitudes of a signal reflected by a target, the SAR registers the signal phase and amplitude over a finite synthesis time T_s. The conversion of these data to a radar image requires knowledge of the time variation of these characteristics, which can be found if one knows a priori the time variation of the reflected signal. When the view zone includes only stationary targets, the required a priori data have the form of the time dependence of the distances between the SAR and the objects being viewed. If the time variation of the signal phase is unknown, the coherence is violated.
This may happen not only in SAR viewing of the sea surface but also because of sporadic fluctuations of the carrier trajectory (see Chapter 7). So partial coherence may be associated with the viewing conditions or with the target itself. The analytical method discussed below preserves its generality in this case.
4.1 Imaging of extended targets

Viewing of background surfaces by SAR involves two kinds of difficulty: one is associated with the evaluation and improvement of image quality, the other with image interpretation [59]. The first difficulty is due to the fact that one has to control the SAR performance (i.e. the operation of the transmitters/receivers and imaging devices), evaluate the capabilities of test systems, and compare the data from the synthetic aperture with those from other sensors. The other difficulty arises from the diversity of image applications. The point is that one resolution element contains a large number of elementary scatterers reflecting coherent signals which interfere with one another. This produces speckle noise on the radar image. The situation becomes especially complicated, for example, in sea surface viewing, when the elementary scatterers move, making the image intensity a random quantity. For this reason, one has to employ statistical methods to describe the imaging of extended targets. It is clear that both problems are closely interrelated. For instance, the statistical characteristics of speckle noise can be used to obtain information about the surface and to evaluate the image quality. Image quality is affected by numerous independent parameters of target imaging. Therefore, image evaluation requires quantitative factors which can objectively describe the image characteristics and relate this information to the parameters of the viewing system. The quality of any image, including a radar one, can be described by four parameters: geometrical accuracy, spatial resolution, radiometric precision and radiometric resolution. Geometrical accuracy defines the longitudinal and latitudinal precision of the image as an integral entity, which is particularly important for images of poorly recognisable surface areas. It also determines the mapping accuracy of different points on the image relative to one another.
Since a SAR is a coherent system, its ability to resolve neighbouring point scatterers depends on various factors, such as the relative phases of the scatterers, their relative effective cross sections, the system noise, etc. So it is reasonable to describe spatial resolution either with the half width of the major impulse response peak (usually, at 3 dB) or with the envelope of this response. The latter way enables one to find the extent to which the image is affected by the side lobes of the impulse response, which are comparable with the major peaks of responses from neighbouring, less intensive scatterers that can be erroneously taken for images of independent point targets. Spatial resolution can be evaluated by a photometric study of the image of a bright point object, say, of a corner reflector, or by determining the amplitude image profile of an object with a sharp reflectivity variation, followed by the calculation of the impulse response from this profile gradient. The second approach is more accurate because the resolution evaluation is less affected by the limited dynamic range of the aperture. Radiometric precision indicates to what extent the various brightness levels of the image reproduce the reflectivity variation of the radar target at particular wavelengths, polarisations and radiation incidences. To measure the radiometric precision, one can use calibrated extended targets with different values of the specific cross-section (SCS).
Radiometric (contrast) resolution characterises the ability to discern the SCS values of neighbouring elements and is largely determined by random signal fluctuations registered on the image. Such fluctuations may arise along the signal pathway from aperture noise or speckle noise. The radiometric resolution for homogeneous areas can be calculated from the probability density function of the image intensity. The reflectivity distribution across the area of interest is often assumed to be normal. Then the amplitude distribution of the reflected signal is described by the Rayleigh formula, while the phase is taken to be uniform in the range from 0 to 2π. The radar image intensity, which is equal to the squared signal modulus, has an exponential distribution:

    p(χ) = (1 + S)⁻¹ exp[−χ/(1 + S)],                                        (4.1)

where χ is the intensity normalised to unit noise power and S is the signal-to-noise ratio on the image. The average intensity and the distribution dispersion are, respectively,

    χ_m = 1 + S,                                                             (4.2)
    D_χ = (1 + S)².                                                          (4.3)
SCS measurements involve a large ambiguity. From Eqs (4.2) and (4.3) it follows that the standard deviation of the SCS estimate is equal to the mean image intensity. To estimate the SCS, it is necessary to find the mean noise intensity and subtract it from the image intensity; then χ_m = S, while the dispersion of the estimate is taken to be unchanged. If the radiometric resolution γ is defined at the level of one standard deviation (the ratio of the mean value plus one standard deviation to the mean value), then for the distribution described by Eq. (4.1) at zero noise we have

    γ = 10 lg(2 + 1/S).                                                      (4.4)
Obviously, γ will be no better (no smaller) than 3 dB, even at S → ∞. The simplest way to improve the radiometric resolution is to average the viewing results over several neighbouring resolution elements of an extended target (incoherent signal integration). Then we shall have

    γ = 10 lg[1 + (1 + S)/(N^{1/2} S)],                                      (4.5)
where N is the number of uncorrelated integrated versions of the image. Incoherent signal integration by SAR can be provided only at the expense of spatial resolution because this is normally done by multi-ray processing or by averaging the intensities of elements of a highly resolved image. For example, the SEASAT-A aperture used a four-ray processing which, nevertheless, could not totally remove the speckle noise [99]. Thus, there is a certain contradiction between spatial and radiometric resolutions [61]. A possible compromise is to choose a proper criterion for image quality. However, this is not very easy to do for two reasons. First, such a criterion must account for specific features of the object being viewed, which may happen to be diverse. Second, one must adapt this criterion to the subsequent processing of the
image – visual, automated, etc. Moore [99], for example, suggested using visual expert assessment of the image as a criterion for the evaluation of its quality. For a quantitative analysis he used the spatial grey-level (SGL) volume V = V_a V_R V_g(N), where V_a and V_R are the azimuth and range resolutions, respectively, and V_g(N) is the grey-level resolution defined by the number of uncorrelated integrated realisations, N. Before proceeding with the discussion of criteria that can optimise the coherent-to-incoherent signal ratio in the synthetic aperture, we think it necessary to consider briefly the available methods of describing SAR mapping of a typical fluctuating extended target – a rough sea surface.
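Equations (4.4) and (4.5) are easy to tabulate; note that Eq. (4.5) reduces to Eq. (4.4) at N = 1. A small illustrative calculation (the SNR value is arbitrary):

```python
import math

# Radiometric resolution, Eq. (4.5): gamma = 10 lg[1 + (1+S)/(sqrt(N) S)] dB;
# N = 1 reduces to Eq. (4.4), gamma = 10 lg(2 + 1/S).
def gamma_db(snr, n_looks=1):
    return 10 * math.log10(1 + (1 + snr) / (math.sqrt(n_looks) * snr))

snr = 100.0                        # a strong signal, S >> 1 (assumed value)
print(round(gamma_db(snr), 2))     # -> 3.03, close to the 3 dB single-look limit
print(round(gamma_db(snr, 4), 2))  # -> 1.78, four-look averaging (as on SEASAT-A)
```

The numbers illustrate the trade-off discussed above: incoherent integration over N looks improves γ below the 3 dB limit, but only at the expense of spatial resolution.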
4.2 Mapping of rough sea surface

At present we have much information on rough sea surface viewing by SAR systems [36,62], both airborne and carried by spacecraft. Most of the publications describe wave motions and their effect on radar image quality. However, this issue still remains controversial and is a subject of much debate [56]. When the sea surface is viewed by an airborne or space SAR, the incidence angle of the probing radiation varies from 20° to 70°. Bragg scattering by small-scale and capillary waves has the greatest effect on the reflection of electromagnetic radiation. The effect of large-scale (gravitational) waves on the radar image reveals itself in the modulation of the scattering by small-scale waves. These phenomena are usually described by a 2D model which considers the sea surface as a superposition of Bragg scatterers – capillary and longer gravitational waves. They can also be described by a facet model, in which the facets represent small-scale scatterers with superimposed capillary waves; the scatterers move with orbital velocities defined by the large-scale waves [59]. The imaging of large-scale waves is affected by the following factors:

• the energy modulation of capillary waves due to the hydrodynamic interaction between capillary and gravitational waves;
• the modulation of the facet inclination, which changes the effective incidence of the probing signal with respect to the facet surface normal and, in turn, the Bragg scattering coefficient;
• the variations in the facet parameters (the position and the normal direction) and in the Bragg scattering coefficient due to the facet movement during the synthesis.

The first two processes are important for sea viewing by any radar, whereas the third affects only SAR imaging. The effect of moving waves on the image quality can be found analytically if one bears in mind that the synthesis time (0.1–3 s) is much shorter than the period of a large-scale wave (8–16 s). Then the functions that describe the time variation of the facet parameters and scattering coefficients can be expanded into a Taylor series. The major expansion terms are related to the radial components (along the slant range) of the orbital velocity and acceleration of the facets. These components are responsible for two effects: the velocity bunching and the image
defocusing along the azimuth. The velocity bunching is associated with the azimuthal shift of each facet image because of the radial velocity effect; it represents a periodic rarefaction and thickening of the virtual positions of elementary scatterers along the large-scale wave pattern. The bunching degree varies with the number of images of individual facets per unit azimuthal length, which is proportional to

    ξ = (R/v)(du_r/dx),                                                      (4.6)

where R is the slant range, v is the SAR carrier velocity, u_r is the radial velocity component and x is the azimuthal coordinate on the sea surface. For small values of |ξ|, this effect is linear and is characterised by a linear transfer function; for large |ξ| values (>π/2), it becomes nonlinear, leading to image distortions. It is greatest for waves running along the azimuthal coordinate but practically vanishes for radial waves. Image defocusing of large-scale waves is interpreted as being either due to the radial acceleration of the facets or due to the change in the relative aperture velocity caused by the azimuthal phase velocity of the sea waves [61]. Investigations have shown that the latter explanation is better substantiated. The major contribution to the image is made by the amplitude modulation of the reflected signal due to the surface roughness and facet inclination, whereas the velocity bunching plays a minor role. As for the image defocusing, it can be removed by correcting the signal processing conditions, for example, by an additional adjustment of the optical processor or by refining the base function during digital image reconstruction. Generally, the sea wave behaviour appears to be quite complex. For this reason, the available models of a probing signal reflected by the sea surface depend on the particular problem to be solved. Models accounting for the orbital motion of liquid droplets are too sophisticated to be extended to the large class of objects defined as partially coherent. Besides, they do not readily apply to the analysis of the influence of aperture parameters on image quality, because the imaging is then determined only by the sea wave characteristics and viewing geometry. Probably, the only factor that affects sea imaging by SAR and is related to the choice of radar parameters is the image defocusing.
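The velocity bunching parameter of Eq. (4.6) is easy to estimate for a sinusoidal large-scale wave, u_r(x) = u₀ sin(2πx/Λ); all the numbers below are illustrative assumptions, not values from the text.

```python
import math

# Bunching parameter xi = (R/v) du_r/dx for u_r(x) = u0 sin(2 pi x / lam):
# its maximum slope is u0 * 2 pi / lam.
R = 850e3       # slant range, m (an assumed spaceborne geometry)
v = 7.5e3       # SAR carrier velocity, m/s (assumed)
u0 = 0.6        # orbital velocity amplitude, m/s (assumed)
lam = 150.0     # large-scale wavelength, m (assumed)

xi_max = (R / v) * (u0 * 2 * math.pi / lam)
print(round(xi_max, 2),
      "nonlinear" if abs(xi_max) > math.pi / 2 else "linear")  # -> 2.85 nonlinear
```

For this azimuth-travelling wave the parameter exceeds π/2, so the bunching is in the nonlinear, image-distorting regime described above; a radially travelling wave would give du_r/dx ≈ 0 and a vanishing effect.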
But even here, we deal with the mapping of sea waves, which is a particular problem that does not represent the whole class of partially coherent targets. On the other hand, of academic interest and practical importance are the problems of background dynamics, various anomalies in the extended target reflectivity (for the sea, these are slicks, spills of surface-active substances, etc.), as well as the proper choice of the SAR design for viewing this class of targets. The analysis shows that the results obtained can be extended to a large number of partially coherent extended targets. In principle, the basic characteristics of extended target images, including images of sea surface, could be found by solving the problem of electromagnetic wave scattering by a moving plane. The methods of dealing with these problems are well known but they involve cumbersome calculations.
Another way of describing a radar signal reflected by an extended target is to introduce the autocorrelation function of the object being viewed, as is done in optical systems theory [29]. In this approach, a complex signal reflected by the sea surface can be written as U(x, t) = u(x, t)u_r(x, t), where u(x, t) is a co-factor accounting for the effect of large-scale sea waves and u_r(x, t) is a random complex component describing the signal reflected by a capillary wave. The autocorrelation function of this signal is

    ⟨U(x_1, t_1)U*(x_2, t_2)⟩ = u(x_1, t_1)u*(x_2, t_2)⟨u_r(x_1, t_1)u_r*(x_2, t_2)⟩,

where the asterisk denotes the complex conjugate and ⟨·⟩ represents the ensemble average. The complex component u_r(x, t) can be written as

    u_r(x, t) = f(x)α(t | x),                                                (4.7)

where f(x) is a complex random amplitude of the reflected signal, defined by the surface roughness, and α(t | x) is a complex reflectivity describing the time fluctuations of the reflected signal at the coordinate x. Normally, f(x) describes a Gaussian random process with a zero average, which happens in the case of Bragg scattering of an electromagnetic wave on a rough surface. The spatial correlation function of this process can be approximated by the Dirac delta function when the spacing between the surface features is sufficiently small, a condition often fulfilled in practice:

    ⟨f(x_1)f*(x_2)⟩ = pδ(x_1 − x_2),                                         (4.8)

where p is a factor proportional to the object's SCS and is defined by the governing radar equation. The autocorrelation function of the time fluctuations of the surface is, in turn, equal to

    ⟨α(t_1 | x_1)α*(t_2 | x_2)⟩ = Γ[(t_1 − t_2) | x_1, x_2].                 (4.9)

It has been termed partial or autocorrelation coherence [103]. The possibility of employing this formalism is a fundamental feature of partially coherent objects, which can then be treated as a special class of targets. Thus, the autocorrelation function of the signal reflected by the sea surface can be written as

    ⟨U(x_1, t_1)U*(x_2, t_2)⟩ = u(x_1, t_1)u*(x_2, t_2)pδ(x_1 − x_2)Γ[(t_1 − t_2) | x_1, x_2].      (4.10)

Taking the time fluctuations of the signal to be stationary, we can approximate the autocorrelation function by the expression

    Γ[(t_1 − t_2) | x_1, x_2] = exp[−π(t_1 − t_2)²/τ_c²],                    (4.11)

where τ_c is the correlation time interval.
The radar signal model discussed above agrees well with experimental data [112]. Equation (4.11) has a general form allowing the solution of a large range of problems involved in the analysis of extended target imaging by SAR systems. In what follows we shall omit the modulation of the partially coherent background by large-scale waves, assuming u(x, t) = 1, in order to be able to extend the results to a sufficiently large class of objects. The model we have described can provide the basic statistical characteristics of partially coherent surface images, but we should first outline the imaging model itself.
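The model of Eqs (4.7)–(4.11) can be simulated directly. The sketch below (all parameters illustrative) draws a delta-correlated complex Gaussian f(x), builds α(t | x) by Gaussian filtering of white noise, which yields the Gaussian autocorrelation of Eq. (4.11), and compares the empirical temporal autocorrelation of u_r with Γ:

```python
import numpy as np

# Partially coherent signal model u_r(x, t) = f(x) alpha(t|x), Eqs (4.7)-(4.11).
rng = np.random.default_rng(0)
nx, nt, dt, tau_c = 1000, 256, 0.025, 0.5   # realisations, time samples, s

# A Gaussian filter of width sigma gives autocorr exp(-dt^2/(4 sigma^2)),
# so sigma = tau_c / (2 sqrt(pi)) reproduces Gamma of Eq. (4.11) exactly.
sigma = tau_c / (2 * np.sqrt(np.pi))
half = int(4 * sigma / dt)
kernel = np.exp(-(np.arange(-half, half + 1) * dt) ** 2 / (2 * sigma**2))

f = (rng.standard_normal(nx) + 1j * rng.standard_normal(nx)) / np.sqrt(2)
noise = (rng.standard_normal((nx, nt + 2 * half)) +
         1j * rng.standard_normal((nx, nt + 2 * half))) / np.sqrt(2)
alpha = np.array([np.convolve(row, kernel, mode="valid") for row in noise])
alpha /= np.sqrt(np.mean(np.abs(alpha) ** 2))   # unit average power
u_r = f[:, None] * alpha                        # Eq. (4.7)

# Empirical autocorrelation at lag tau_c/2 against Gamma = exp(-pi/4)
lag = int(round(tau_c / 2 / dt))
est = (np.mean(u_r[:, :-lag] * np.conj(u_r[:, lag:])).real
       / np.mean(np.abs(u_r) ** 2))
print(est, np.exp(-np.pi / 4))
```

The estimate converges to Γ(τ_c/2) = exp(−π/4) as the number of realisations grows, illustrating that the delta-correlated amplitude and the Gaussian temporal coherence are independent factors, as in Eq. (4.10).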
4.3 A mathematical model of imaging of partially coherent extended targets

Suppose a SAR is borne by a carrier moving uniformly along a straight line with a velocity v. The carrier position is described by the coordinate y = vt and the slant range R, while the position of an arbitrary element of the viewed surface is described by the x-coordinate (Fig. 4.1). The imaging process is subdivided into two stages – the registration of the reflected signal (hologram recording) and the image reconstruction. This approach allows one to represent a general block diagram of the synthetic aperture (Fig. 4.2), with the complex amplitude of the reconstructed image written as a sum of convolutions:

s = f ∗ w ∗ h + n ∗ h,
(4.12)
Figure 4.1 The geometrical relations in a SAR
Figure 4.2 A generalised block diagram of a SAR: surface model → SAR receiver (+ noise) → SAR processor → radar image
where f is a function of the viewed surface reflectivity; w and h are the impulse responses of the radar and the processor, respectively; n is the complex amplitude of additive noise; and ∗ denotes convolution. The optimal quality of images of point objects is achieved by matching the impulse responses of the radar and the aperture processor: h(y) = w∗ (y).
(4.13)
This condition cannot, however, provide an optimal image of an extended object proper [99], since it is impossible to integrate an incoherent signal and to reduce the speckle noise on the image. On the other hand, the fact that the image intensity g(u) = s(u)s*(u) is usually registered at the aperture processor output allows introducing the concept of a partially coherent processor in quadratic filtration theory [58]. One can then account simultaneously for the effects of coherent and incoherent signal integration by the aperture and eventually obtain the major statistical characteristics of images of partially coherent extended targets. This type of processor has the following impulse response:

Q(y1, y2) = γ(y1 − y2)h(y1)h*(y2),
(4.14)
where γ(y1 − y2) is a factor characterising the degree of incoherent signal integration. Then Eq. (4.13) will be valid for any class of targets. To avoid cumbersome calculations, we shall introduce Gaussian approximations of the functions

w(y) = exp(−ay²/2) exp(jby²/2),   (4.15)

h(y) = exp(−Ly²/2) exp(jby²/2),   (4.16)

γ(y1 − y2) = exp[−A(y1 − y2)²/2],   (4.17)

where ∫ exp(−ay²/2) dy = (2π/a)^{1/2} = θR is the width of the real antenna pattern projection onto the viewed area, which defines the synthesis range; b = 2π/λR; λ is the aperture wavelength; and ∫ exp(−Ly²/2) dy = (2π/L)^{1/2} = Ls is the synthesised aperture length. The A/a ratio describes the number of independent integrations of an incoherent signal, N = (1 + A/a)^{1/2}. Within this SAR model, the image intensity is

g(u) = ∫∫_{−∞}^{∞} Q(u − y1, u − y2)[sr(y1) + n(y1)][sr*(y2) + n*(y2)] dy1 dy2,   (4.18)

where sr(y) describes the complex hologram function and n(y) is a function describing the intrinsic noise of the aperture. The model of a synthetic aperture with a partially coherent processor can be used to analyse statistical characteristics of images of partially coherent targets and to reveal the effects of coherent and incoherent signal integration on the image parameters.
4.4 Statistical characteristics of partially coherent target images

Let us turn back to the synthetic aperture shown in Fig. 4.1. In one of the range channels, the reflected signal can be represented as a random complex field. For many real surfaces, the function f(y) in the centimetre wavelength range is a Gaussian random process with a zero average and a correlation function in the form of the Dirac delta-function obeying Eq. (4.8). The time variations of the surface can be described by the autocorrelation function of Eq. (4.9), and that of the reflected signal, assuming u(y1, t1) ≡ s0(y1, t1), is

⟨s0(y1, t1)s0*(y2, t2)⟩ = pδ(y1 − y2)Γ[(t1 − t2) | y1, y2],
(4.19)
where the function Γ[(t1 − t2) | y1, y2] is defined by Eq. (4.11). The process of imaging can be analysed in terms of the holographic approach, as applied to SAR. At the first stage, the hologram is recorded: sh = s0 ∗ w + n, where w is the impulse response of the aperture receiver, n is additive noise, and ∗ denotes convolution. At the second stage, the image is reconstructed with the intensity

g = ss* = (sh ∗ h)(sh ∗ h)*,
(4.20)
where s is the complex amplitude of the image and h is the impulse response of the aperture processor. To smooth out the image fluctuations, one usually uses incoherent signal integration. We can now evaluate the effects of two smoothing procedures: multi-ray processing and the averaging of neighbouring resolution elements on the image [2]. Additionally, we shall consider the potentiality of incoherent signal integration on the hologram. In the first case, when the image is reconstructed by an optical processor, its intensity is [60]

g1(u) = ∫ g(u, τ)Da(τ) dτ.   (4.21)

Here Da(τ) describes the light distribution across the aperture stop located in front of the secondary film which records the image, τ is the current exposure of the secondary film, and u is the reconstructed image coordinate. In the second case, the image intensity is

g2(u) = ∫ g(u′)Ga(u − u′) du′,   (4.22)

where Ga(u − u′) is the weighting function of the averaging. In the third case, the image intensity is given by Eq. (4.20), in which the hologram function is

sha(y′) = ∫ sh(y′′)Ca(y′ − y′′) dy′′,   (4.23)

where Ca(y′ − y′′) is the weighting function of the averaging and y′ = vt is the spatial coordinate on the hologram.
To simplify the calculations, let us approximate the above functions with the expressions

Da(τ) = exp[−D(τv)²/2],
Ga(u) = exp[−Gu²/2],   (4.24)
Ca(y) = exp[−Cy²/2],

where ∫ exp(−Dv²τ²/2) d(vτ) = De is the equivalent width of the aperture stop, and (2π/G)^{1/2} = Ge and (2π/C)^{1/2} = Ce are the equivalent widths of the respective weighting functions; Eqs (4.15) and (4.16) have been used as approximations of the functions w(y) and h(y). We can now find the following parameters characterising the statistical properties of the image: the average intensity ga, the intensity dispersion σa², the smoothing degree ga²/σa², the autocorrelation range uc and the signal-to-noise ratio Wg = ga/gan, where gan is the average noise intensity on the image.
4.4.1 Statistical image characteristics for zero incoherent signal integration

The parameters of interest can be found using the power spectrum of the image intensity [60]:

Sg(ω) = (2π)^{−1} ∫ |H(η, ω − η)|² Sh(η)Sh(ω − η) dη,   (4.25)

where H(η, ω) is the 2D transfer function of the aperture processor and Sh(ω) is the hologram power spectrum. In turn, H(η, ω) = H(η)H*(ω), where H = F{h} is the Fourier transform of the function h(y). With Eq. (4.16), we get

H(η, ω) = (L² + b²)^{−1/2} exp[−L(η² + ω²)/2(L² + b²)] × exp[jb(ω² − η²)/2(L² + b²)].   (4.26)

The function Sh(ω) represents the Fourier transform of the hologram spatial correlation function Rh(y′), which can be described, for low intrinsic aperture noise, as

Rh(y′) = ⟨sh(y1′)sh*(y2′)⟩ = p(π/a)^{1/2} exp[−(y′)²(a² + b² + 2aB)/4a]   (4.27)

with y′ = y1′ − y2′ and B = 2π/(vτc)².
Hence, we have

Sh(ω) = p[2π/(a² + b² + 2aB)]^{1/2} exp[−ω²/(a² + b² + 2aB)].
(4.28)
By substituting Eqs (4.26) and (4.28) into Eq. (4.25) and using the expression covg(u) = F{Sg(ω)}, we obtain for the background

ga = ∫ Sg(ω) dω = 2^{1/2}πp[aL(a + L + 2B) + b²(a + L)]^{−1/2},
σg² = covg(0) = 2(πp)²[aL(a + L + 2B) + b²(a + L)]^{−1},
uc = ∫ covg(u)/covg(0) du = π[2a/(a² + b² + 2aB) + 2L/(L² + b²)]^{1/2},
ga²/σg² = 1.   (4.29)

Assuming that the spectrum of the intrinsic aperture noise recorded on the hologram is uniform and has the spectral density Shn(ω) = n, we find the respective parameters of the image noise:

gan = n(π/L)^{1/2},
σn² = n²(π/L),
ucn = π[2L/(L² + b²)]^{1/2},
gan²/σn² = 1.   (4.30)
The signal-to-noise ratio Wg = ga/gan can be reduced to Wg = W0Q, where W0 is the classical quantity and

Q = [(a/b² + 1/L)(a + L) + 2aB/b²]^{−1/2}
(4.31)
is a factor largely determined by the real antenna pattern. A quantitative analysis of Eqs (4.29) and (4.30) shows that the statistical parameters of the image are practically independent of the surface fluctuations at the typical values of λ ≈ 3 cm, R ≈ 10–20 km, θ ≈ 0.02 and τc ≈ 0.01 s, and that the correlation ranges uc and ucn differ only slightly, with a maximum at Ls = (λR/2)^{1/2} (b = L). The latter circumstance can be attributed to the fact that the function h(y) essentially represents a linearly frequency-modulated signal, whose spectral width is proportional to its range at Ls > (λR/2)^{1/2} and inversely proportional to it at Ls < (λR/2)^{1/2}. So the spectral width of the image fluctuations is minimal at Ls = (λR/2)^{1/2}.
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at λ = 3 cm, θ = 0.02 and various values of R
At the minimal width of the |H(ω)| function, the difference between ga and gan is also insignificant. This accounts for the maximum of the Q function at Ls = (λR/2)^{1/2} (Fig. 4.3). A quantitative analysis of Q shows that the influence of the real aperture pattern on the signal-to-noise ratio is slight and reveals itself only at large synthesis ranges, Ls ≫ (λR/2)^{1/2}.
4.4.2 Statistical image characteristics for incoherent signal integration

According to Eq. (4.21), the image intensity in multi-ray processing is [60]

g(u) = ∫∫∫ Da(vτ) sh(y1′)sh*(y2′) exp[−L(y1′ − u − vτ)²/2] × exp[−L(y2′ − u − vτ)²/2] × exp[jb(y1′ − u)²/2] × exp[−jb(y2′ − u)²/2] dy1′ dy2′ d(vτ).
(4.32)
This relation describes the impulse response of the aperture processor and enables one to find its transfer function:

H(η, ω) = r(l² + b² + 2Al)^{−1/2} × exp{−[A(η + ω)² + l(η² + ω²)]/[2(l² + b² + 2Al)]} × exp{−jb(η² − ω²)/[2(l² + b² + 2Al)]},
(4.33)
with r = 2π/(D + 2L)^{1/2}, l = LD/(D + 2L) and A = L²/(D + 2L).
Following the same procedure and using the last two relations, we can find the characteristics of the background and noise on the image:

ga = 2^{1/2}πp{D[aL(a + L + 2B) + b²(a + L)] + 2aLb²}^{−1/2},   (4.34)

σg² = 2(πp)²{[L(D + L)(a² + b² + 2aB) + aD(L² + b²) + 2aLb²]² − L⁴(a² + b² + 2aB)²}^{−1/2},   (4.35)

uc = 2^{1/2}π{L(1 + 2L/D)/[L² + b²(1 + 2L/D)] + a/(a² + b² + 2aB)}^{1/2},   (4.36)

ga²/σg² = {1 + 2L²b²/[aD(L² + b²) + LDb² + 2aLb²]}^{1/2},   (4.37)

gan = πn[2/(LD)]^{1/2},   (4.38)

σn² = 2π²n²(1 + 2L/D)^{−1/2}/(LD),   (4.39)

ucn = 2^{1/2}π{L(1 + 2L/D)/[L² + b²(1 + 2L/D)]}^{1/2},   (4.40)

gan²/σn² = (1 + 2L/D)^{1/2}.   (4.41)
The analysis of these relations shows that the image smoothing is improved, as was expected, while the correlation functions of the clutter and radar noise images are practically the same, uc ≈ ucn. Figure 4.4 demonstrates the correlation range versus the normalised quantity Ls for various degrees of incoherent integration De, or for different aperture stop sizes. It is clear that the image correlation at Ls > (λR/2)^{1/2} (the focused processing region) will only slightly vary with De, but in the defocused processing region the correlation range in incoherent integration will become larger. The parameter Q then takes the form Q = [(a + L)(a/b² + 1/L) + 2aB/b² + 2a/D]^{−1/2}. Its quantitative analysis indicates that incoherent integration does not much affect the signal-to-noise ratio.

Figure 4.4 The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge; λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap); 2, 6 – 0.25(λR/2)^{1/2}; 3, 7 – (λR/2)^{1/2}; 4, 8 – 2.25(λR/2)^{1/2}

When neighbouring resolution elements are averaged according to Eq. (4.22), the processor transfer function is expressed as

H(η, ω) = r1(l1² + b1² + 2A1l1)^{−1/2} × exp{−[A1(η + ω)² + l1(η² + ω²)]/[2(l1² + b1² + 2A1l1)]} × exp{−jb1(η² − ω²)/[2(l1² + b1² + 2A1l1)]}
(4.42)
with r1 = [2π/(G + 2L)]^{1/2}, l1 = LG/(G + 2L), A1 = (L² + b²)/(G + 2L) and b1 = bG/(G + 2L).   (4.43)
Hence, we have

ga = 2^{1/2}πp{G[a(L² + b²) + L(a² + b² + 2aB)]}^{−1/2},   (4.44)

σg² = 2(πp)²{G²[Lb² + a(L² + b²)]² + 2Gb²(L² + b²)[Lb² + a(L² + b²)]}^{−1/2},   (4.45)

ga²/σg² = {1 + 2b²(L² + b²)/[aG(L² + b²) + LGb²]}^{1/2},   (4.46)

gan = πn[2/(LG)]^{1/2},   (4.47)

σn² = 2π²n²{LG[LG + 2(L² + b²)]}^{−1/2},   (4.48)

gan²/σn² = [1 + 2(L² + b²)/(LG)]^{1/2},   (4.49)

uc = 2^{1/2}π{[Lb² + a(L² + b²)]/[b²(L² + b²)] + 2/G}^{1/2},   (4.50)

ucn = 2^{1/2}π[L/(L² + b²) + 2/G]^{1/2}.   (4.51)
In this case, we also have uc ≈ ucn. Figure 4.4 illustrates this dependence at various widths of the integrating function Ge. Obviously, the image correlation range increases in proportion to the integrating window width. The expression for the coefficient Q coincides with Eq. (4.31), since the statistical properties of the background and noise images are similar, so that the averaging does not contribute to the signal-to-noise ratio.
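The smoothing effect of incoherent integration can be illustrated independently of the Gaussian aperture algebra. The toy sketch below assumes fully developed speckle with statistically independent looks (a stronger assumption than the correlated looks treated above): averaging N independent intensity looks raises the smoothing degree ga²/σ² from 1 to N.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully developed speckle: image intensity is |complex Gaussian field|^2
n_pix, n_looks = 200_000, 4
field = (rng.standard_normal((n_pix, n_looks)) +
         1j * rng.standard_normal((n_pix, n_looks))) / np.sqrt(2)
g = np.abs(field) ** 2          # single-look intensities

g1 = g[:, 0]                    # no incoherent integration
gN = g.mean(axis=1)             # average of n_looks independent looks

smooth1 = g1.mean() ** 2 / g1.var()   # smoothing degree, close to 1
smoothN = gN.mean() ** 2 / gN.var()   # close to n_looks
```

In a real aperture the effective number of looks is reduced by the correlation between them, which is exactly what the factors (1 + 2L/D) and similar terms in Eqs (4.37)–(4.49) quantify.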
In incoherent signal integration on the hologram, the H(η, ω) function is described by Eq. (4.26), and the Rh function, after the averaging by Eq. (4.23), takes the form

Rh(y′) = p[π/C(aC + a² + b² + 2aB)]^{1/2} × exp[−(y′)²C(a² + b² + 2aB)/4(aC + a² + b² + 2aB)].

The noise correlation function on the hologram can then be written as

Rhn(y′) = n(π/C)^{1/2} exp[−(y′)²C/4].
(4.52)
This expression yields the hologram signal-to-noise ratio Wh = Rh(0)/Rhn(0) = Wh0Qh, where Wh0 is the single-pulse ratio defined by the governing radar equation and Qh = Ni(1 + Ni²/Ka²)^{−1/2}; here Ka = da/(2vTir), da is the horizontal dimension of the real antenna, Tir is the pulse repetition period, and Ni is the number of pulses integrated incoherently on the hologram. The variation of Qh with Ni is shown in Fig. 4.5. One can see that incoherent integration is profitable only at Ni ≤ Ka, which agrees well with the condition Ce ≤ y′hc = 2(πa)^{1/2}/b, where y′hc is the correlation range of the hologram. If the latter condition is fulfilled, the basic statistical characteristics of the image can be described by expressions similar to Eqs (4.29)–(4.31), which means that there is no image smoothing.
Figure 4.5 The variation of the parameter Qh with the number of integrated signals Ni at various values of Ka
The results obtained allow the following conclusions to be made:
1. For typical conditions of SAR viewing of background surfaces and for real correlation times τc ≈ 0.01 s of the reflected signal, all the image parameters discussed above are practically independent of the degree of coherence of the objects being viewed, in contrast to the radar resolving power.

2. The statistical properties of images of background surfaces and aperture noise are practically identical. This fact can be used to calibrate radar apertures designed for the measurement of background SCS. The maximum period of spatial image fluctuations is observed at the synthesis range Ls = (λR/2)^{1/2}.

3. The analytical expressions we have derived can be used to calculate the image smoothing degree in the case of incoherent signal integration.

4. The signal-to-noise ratio of the image is nearly independent of the synthesised aperture length and of the incoherent integration range.

5. Incoherent integration on a hologram does not change the statistical characteristics of the image, that is, it does not lead to image smoothing, provided that the integrating function width is smaller than the hologram correlation range. Otherwise, there is no noticeable improvement of the signal-to-noise ratio on the hologram, and the signal integration procedure becomes meaningless.

6. The two methods of incoherent signal integration we have discussed (multi-ray processing and averaging of resolution elements) give similar results for the smoothing of image fluctuations. Multi-ray processing is performed automatically if the image is reconstructed by exposing the secondary film in an optical processor. In the case of digital reconstruction, usually based on fast Fourier transform algorithms, the averaging of resolution elements is preferable because such algorithms remain efficient when vast amounts of data have to be processed. Application of special-purpose digital processors may improve the situation.
4.5 Viewing of low contrast partially coherent targets

The major SAR characteristics for viewing low contrast targets such as sea currents, wind slicks, oil spills, etc., are the spatial resolution and the radiometric (contrast) resolution determined by the number of incoherent signal integrations [58]. It is clear that a proper choice of the proportion between the spatial and radiometric resolutions (coherent and incoherent integration) will depend not only on the radar parameters but also on the properties of the target to be viewed. So it is reasonable to consider the optimisation of SAR performance in the context of partial coherence of signals reflected by an extended target.

Recall that the process of imaging includes two stages. First, the received signal is recorded on a radar hologram as u(y′) = ∫ w(y′ − y)f(y) dy, where w(y′ − y) is the impulse response of the aperture receiver, f(y) is a function describing the spatial distribution of the target reflectivity, y is the coordinate in the viewed surface plane, y′ = vt is the SAR carrier coordinate, and t is the current viewing time. Second, the image field is recorded: g(y′′) = ∫ u(y′)h(y′ − y′′) dy′, where h(y′ − y′′) is the impulse response of the aperture processor and y′′ is the image coordinate.
Imaging can be described in terms of linear filtration theory. The concepts of a quadratic filter and a frequency-contrast characteristic (FCC), well known from optics, can be used to relate the spatial spectrum of the image intensity to that of the object:

SI(ω) = So(ω)KR(ω),
(4.53)
where So (ω) is the space frequency spectrum of the SCS of the object and KR (ω) is the FCC of the aperture. For instance, if the average SCS of the background is σ0 , the distribution of a low contrast target is described by the function σ (y) = σ0 [1 − m exp(−y2 A/2)],
(4.54)
where m < 1 is a factor defining the target’s initial contrast Kin = (1 − m)/(1 + m) with respect to the background, A = 2π/l 2 is a parameter related to the target’s size l, and the aperture FCC is given by the expression KR (ω) = exp[−ω2 /(2z)],
(4.55)
where z denotes its width. Then using Eq. (4.53), we can write the spatial distribution of the image intensity: g(y′′ ) = σ0 {1 − m[z/(A + z)]1/2 exp[−(y′′ )2 Az/2(A + z)]}.
(4.56)
Hence, the object’s contrast on the image is Kout = {1 − m[z/(A + z)]1/2 }/{1 + m[z/(A + z)]1/2 }
(4.57)
and its observable size is l ′ = [2π(1/A + 1/z)]1/2 .
(4.58)
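Equations (4.54)–(4.58) can be evaluated directly. In the sketch below (an illustration only; the values of m, l and the FCC width z are our own assumptions) the image contrast is pulled towards unity, the true and FCC sizes add in quadrature, and the true size is recoverable once the FCC width is known:

```python
import numpy as np

m = 0.5                       # modulation depth (assumed)
l = 50.0                      # true target size, m (assumed)
z = 2 * np.pi / 20.0 ** 2     # FCC width parameter, ~20 m response (assumed)

A = 2 * np.pi / l ** 2        # target-size parameter of Eq. (4.54)
s = np.sqrt(z / (A + z))

K_in = (1 - m) / (1 + m)                       # initial contrast
K_out = (1 - m * s) / (1 + m * s)              # Eq. (4.57): image contrast
l_obs = np.sqrt(2 * np.pi * (1 / A + 1 / z))   # Eq. (4.58): observable size

# the true size can be recovered from the image: l = sqrt(l_obs^2 - 2*pi/z)
l_rec = np.sqrt(l_obs ** 2 - 2 * np.pi / z)
```

With these numbers K_out > K_in (the dark patch looks brighter, i.e. the contrast is degraded) and l_obs > l (the patch looks larger), exactly the distortions described in the text.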
It is clear that the contrast and the target size on the image become distorted, but the knowledge of the explicit quantity KR(ω) can give the real object's parameters. For targets whose reflectivity varies with time randomly, the signal received by the aperture possesses a partial coherence, and the hologram function u(y′) is no longer a convolution integral. In that case it would be unreasonable to use linear filtration theory. We shall show, however, that statistical methods and physical assumptions concerning the time fluctuations of the objects' reflectivities can make this convenient formalism work successfully. For this, we shall find the aperture response for a low contrast target (m ≪ 1), whose reflectivity distribution is described by the function

f(y, t) = [1 + m cos(Ωy)]f(y)α(t | y),
(4.59)
where Ω is a certain space frequency and α(t | y) is a random complex function describing the time fluctuations of the reflected signal. The aperture FCC can be written as KR(Ω) = Kout/Kin, where Kin = (σ − σm=0)/σm=0 ≈ 2m; Kout = (g − gm=0)/gm=0; σ = ⟨f(0, 0)f*(0, 0)⟩; σ and g are the average values of the target's SCS and image intensity, respectively. The correlation function of the field in Eq. (4.59) is defined as

Rf = ⟨f(y1, t1)f*(y2, t2)⟩ = [1 + m cos(Ωy1)][1 + m cos(Ωy2)]⟨f(y1)f*(y2)⟩⟨α(t1 | y1)α*(t2 | y2)⟩.   (4.60)

For many real surfaces, f(y) in the centimetre wavelength range is a Gaussian process with a zero average and a correlation function in the form of the Dirac delta-function of Eq. (4.8). Assuming the time fluctuations of the signal to be a steady-state random process, we can use the approximation of Eq. (4.11). Together with Eq. (4.60) and with m ≪ 1, y′ = vt, we shall have

Rf = [1 + m cos(Ωy1) + m cos(Ωy2)]δ(y1 − y2) exp[−(y1′ − y2′)²B/2]

with B = 2π/(vτc)². The average image intensity is

g = ∫∫ h(y1′)h*(y2′)Ru(y1′, y2′) dy1′ dy2′,
(4.61)
where Ru(y1′, y2′) is the correlation function of the hologram:

Ru(y1′, y2′) = ∫∫ Rf w(y1 − y1′)w*(y2 − y2′) dy1 dy2.

Using the Gaussian approximations of the impulse responses in Eqs (4.15) and (4.16), we obtain, instead of Eq. (4.61), g = g0 + 2Δg, where

g0 = 2^{1/2}π[aL(a + L + 2B) + b²(a + L)]^{−1/2}

is the average intensity of the fluctuating background image and

Δg = mg0 × exp[−(Ω²/4)(a + L)(a + L + 2B)/(aL(a + L + 2B) + b²(a + L))].   (4.62)

For real viewing we have b² ≫ aL and b² ≫ LB, which reduces Eq. (4.62) to

Kout = 2m exp[−Ω²(a + L + 2B)/4b²],   KR(Ω) = exp[−Ω²(a + L + 2B)/4b²].
(4.63)
There is a certain relationship between the FCC and the azimuthal resolution of the aperture. The latter can be found from the width of the averaged impulse response to a fluctuating point target:

δa = ∫ ⟨g(y′′)⟩/⟨g(0)⟩ dy′′.   (4.64)

The signal reflected by this target can be prescribed as f(y, y′) = δ(y)α(y′), where α(y′) describes the time fluctuations of the signal, whose correlation properties are defined by Eq. (4.11). With Eqs (4.61) and (4.64), we get

δa = [π(a + L + 2B)/b²]^{1/2}.
Figure 4.6 The variation of the parameter Ωe with the synthesis range Ls at various signal correlation times τc
Of course, the aperture FCC can be presented as KR(Ω) = exp[−Ω²δa²/(4π)], and its equivalent width is Ωe = ∫ KR(Ω) dΩ = 2π/δa. The concept of the FCC allows the consideration of a SAR as a linear filter of space frequencies. On the other hand, the filter description essentially depends on the target's behaviour through the parameter B. Figure 4.6 illustrates the variation of Ωe with the synthesis range Ls for an airborne SAR. The basic radar parameters are λ = 3 cm, R = 10 km, θ ≈ 0.02 and v = 250 m/s. For zero signal fluctuations (τc → ∞), the width Ωe increases in proportion to Ls, but at Ls ≈ θR the linear dependence is violated because of the antenna pattern effect through the parameter a. When the signal fluctuates, the resolution becomes independent of Ls at Ls > vτc, being defined instead by the correlation time τc. Equation (4.63) can be re-written in the form:
KR(Ω) = K0(Ω)Kτ(Ω),
(4.65)
where K0(Ω) = exp[−Ω²(a + L)/(4b²)] is the aperture FCC in the absence of signal fluctuations and Kτ(Ω) = exp[−Ω²B/(2b²)] describes the multiplicative noise arising from fluctuations in the radar channel. Therefore, a SAR can be described as a set of two filters – a filter of space frequencies K0(Ω) and a narrow-band space–time filter Kτ(Ω), whose bandwidth is determined by the correlation time of the surface fluctuations. The image has a spatial intensity spectrum SI(Ω) = S0(Ω)KR(Ω). On the other hand, one can consider that the aperture measures the space–time spectrum S0τ(Ω) = S0(Ω)Kτ(Ω) if one assumes its FCC to be independent of the target's properties and describes the radar with the function K0(Ω).
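The two-filter description can be made concrete by evaluating the azimuthal resolution δa = [π(a + L + 2B)/b²]^{1/2} found above. The sketch below (an illustration; θ and v are assumed values) shows that for Ls > vτc the resolution saturates at a level set by τc rather than by the aperture length:

```python
import numpy as np

lam, R = 0.03, 10e3           # wavelength and slant range, m
theta, v = 0.02, 250.0        # beamwidth, rad (assumed), carrier speed, m/s

a = 2 * np.pi / (theta * R) ** 2
b = 2 * np.pi / (lam * R)

def delta_a(Ls, tau_c):
    """Azimuthal resolution (m) for a fluctuating target."""
    L = 2 * np.pi / Ls ** 2
    B = 2 * np.pi / (v * tau_c) ** 2
    return np.sqrt(np.pi * (a + L + 2 * B)) / b

# doubling Ls helps only while Ls < v*tau_c (here v*tau_c = 25 m)
print(delta_a(100.0, 0.1), delta_a(200.0, 0.1))   # saturated by fluctuations
print(delta_a(100.0, 1e9), delta_a(200.0, 1e9))   # keeps improving
```

With τc = 0.1 s the resolution hardly changes between Ls = 100 m and Ls = 200 m, while in the fluctuation-free case it continues to improve, which is the behaviour summarised by the curves of Fig. 4.6.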
Figure 4.7 The parameter Q as a function of the synthesis range Ls at various signal correlation times τc
To conclude, the parameters of radar apertures for viewing fluctuating targets can be optimised by matching the characteristics K0(Ω) ≈ Kτ(Ω). The latter equality provides imaging of a surface with nearly the maximum detail potentially achievable for a particular type of object. It can be obtained by choosing the value of Ls equal to Ls = vτc, which means that the synthesis time should not be longer than the signal correlation time. As a result, the aperture resolution appears to be limited to δa = λR/2Ls, but this choice of Ls provides N = θR/Ls independent image realisations. The aperture contrast resolution, defined by the number of incoherent integrations N, is in turn independent of the signal coherence time τc. So the choice of Ls > vτc does not provide the desired spatial resolution, but it decreases N, making the contrast resolution poorer.

The potentiality of the SAR in viewing low contrast targets can be conveniently described by the parameter Q = Ndh/(2δa), equal to unity at zero fluctuations. If fluctuations are present, Q essentially depends on the chosen synthesis range Ls (Fig. 4.7). For example, the signal fluctuations at Ls < vτc do not noticeably affect the image quality and Q = 1. At Ls > vτc, the aperture performance proves to be inferior to its potentiality (Q < 1), since the real aperture resolution does not fit the chosen value of Ls but is rather defined by the signal correlation time τc. We can draw the following conclusions from these results:

• To describe the imaging of fluctuating targets, one can make use of linear filtration theory, representing the radar as a filter with a certain FCC. The aperture can be considered as a device measuring the space–time spectrum of the object being viewed.
• One can suggest that the time fluctuations of the signal in the viewing channel create multiplicative noise decreasing the azimuthal resolution of the aperture.
• This approach provides a reasonable compromise between the potential azimuthal resolution and the aperture contrast resolution. This compromise can be achieved by choosing the synthesis time equal to the signal correlation time.
The overall analysis of the results presented in this chapter shows that the available methods for describing the properties of sea surface images can be supplemented by a more general approach to SAR viewing of partially coherent objects. The concept of partial coherence allows one to cover a much larger class of targets and to describe the basic principles of their imaging. The advantages of this approach are as follows. First, it is based on a fairly general model of the radar signal: expression (4.10) accounts for both the general and the specific features of the viewing of fluctuating targets. We shall show in the following chapters that the correlation function of the time fluctuations in Eq. (4.11) can be used, for example, to describe trajectory instabilities of the SAR carrier. Second, this approach provides an analytical description of the major statistical characteristics of images of partially coherent targets; these, in turn, enable one to evaluate the image quality. Finally, the relative simplicity of the mathematical calculations and the clear physical sense of the results obtained make this approach a convenient tool for solving practical tasks associated with SAR design and with remote sensing of partially coherent targets.
Chapter 5
Radar systems for rotating target imaging (a holographic approach)
The possibility of using the rotation of an object to resolve its scattering centres was, probably, first shown by W. M. Brown and R. J. Fredericks [21]. Independently, microwave video imaging of rotating objects was demonstrated theoretically and experimentally by other researchers [109]. An analysis of three approaches (in terms of the antenna, range-Doppler and cross-correlation theories) was made in References 104 and 146 for the imaging of rotating targets. Here we discuss this problem in terms of a holographic approach.
5.1 Inverse synthesis of 1D microwave Fourier holograms

We shall start with the basic principles of inverse synthesis of microwave holograms of an object rotating around its centre of mass. The analysis will be based on the holographic approach discussed in Sections 1.2 and 2.4. Lens-free optical Fourier holography [131] implies that an optical hologram is recorded when the amplitude and phase of the field scattered by the object are fixed in a certain range of bistatic angles 0 < β < β0 (Fig. 5.1). In the microwave range, this is equivalent to the displacement of the radar receiver along arc L of radius R0 from point A to point B, while the transmitter remains immobile. A coherent background must be created by a reference source located in the object plane. Since such a source is unfeasible, the coherent background is created by an artificial reference wave in the radar receiver (Chapter 2).

In further analysis, we shall use a model object made up of scattering centres described by Eq. (2.3). Then a direct synthesis along arc L of radius R0 by a bistatic radar system (Fig. 5.1) can produce a classical microwave Fourier hologram [109], with a subsequent image reconstruction as a 1D distribution of the scattering centres and their effective scattering surfaces.

Figure 5.1 A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc L of a circle of radius R0: 1 – transmitter, 2 – receiver

To discuss the principles of inverse synthesis and the formation of a 1D microwave Fourier hologram, we shall make use of the well-known relation for uni- and bistatic radars [69]. According to Kell's theorem, at small bistatic angles β the bistatic radar cross-section (RCS) for the angle α (Eq. 2.5) and the bistatic angle β is equal to the unistatic RCS measured along the bisectrix of the angle β at a frequency reduced by a factor of cos(β/2) (Chapter 2). Kell's theorem, together with the fact that the rotation of a transmitter–receiver unit around the object can be replaced by the rotation of the object round its axis passing through the centre of mass normal to the radar viewing line, leads one to the conclusion that such a unit, fixed at the point C (Fig. 5.2), can synthesise a 1D microwave Fourier hologram identical to a lens-free optical Fourier hologram. This approach was first discussed by S. A. Popov et al. [109].

In order to find analytical relations for the classical and synthesised Fourier holograms, let us consider the schematic diagram in Fig. 5.3. To simplify the calculations, we shall deal only with one, the kth, scattering centre with the coordinates

rkx = rk sin θk cos(ϕ + ϕk),
rky = rk sin θk sin(ϕ + ϕk),
(5.1)
rkz = rk cos θk,

where ϕ = Ωt is the object rotation angle, Ω being the magnitude of the angular velocity vector of the rotating object; ϕk is the initial angle between the projection of the vector rk on the xOy plane and the positive x-axis, and θk is the angle between the vector rk and the positive z-axis. In our further analysis, we shall follow References 109 and 145.
Figure 5.2 A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C
Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave Fourier hologram of a rotating object
With Eq. (5.1), the input receiver signal can be described as a function of the object rotation angle:

u̇r(ϕ) = u0 Σ_{k=1}^{N} σk exp[−j(4π/λ1)d(rk, R0)] exp(jω0ϕ/Ω),
(5.2)
where

$$d(\bar{r}_k, \bar{R}_0) \cong R_0\left[1 - \frac{r_k}{R_0}\bigl(\sin\gamma\,\sin\theta_k \cos(\varphi + \varphi_k) + \cos\gamma\,\cos\theta_k\bigr)\right], \qquad (5.3)$$

λ1 = 2πc/ω0 is the radar wavelength; σk is the amplitude coefficient accounting for the reflection characteristics of the kth scattering centre; γ = arctg(xo/zo) is the angle between the vector R̄0 and the positive z-axis; xo, yo, zo are the observation point coordinates; O1 is the observation point; and R0 = |R̄0| = √(xo² + zo²) is the distance between the observation point and the centre of mass of the object. In order to derive the hologram function in the way shown in Chapter 2, it is reasonable to use the multiplication procedure performed by an amplitude–phase detector, followed by averaging. The artificial reference signal is

$$\dot{u}_{ref}(\varphi) = u_0 \exp\!\left[-j\left(\omega_0\frac{\varphi}{\Omega} + \psi\right)\right], \qquad (5.4)$$

where t = ϕ/Ω is the current moment of time and ψ is an arbitrary initial phase. Using Eqs (5.2) and (5.4), we can write down the hologram function in the form:

$$H(\varphi) = \langle \operatorname{Re}\dot{u}_r(\varphi)\,\operatorname{Re}\dot{u}_{ref}(\varphi)\rangle = \frac{u_0^2}{2}\sum_{k=1}^{N} \sigma_k \cos\!\left[\frac{4\pi}{\lambda_1} r_k\bigl(\cos\gamma\cos\theta_k + \sin\gamma\sin\theta_k\cos(\varphi + \varphi_k)\bigr)\right], \qquad (5.5)$$

where the sign ⟨···⟩ stands for the averaging. To derive Eq. (5.5), the arbitrary initial phase of the reference signal has been chosen such that

$$\frac{4\pi}{\lambda_1} R_0 - \psi = 0.$$

By expanding the function cos(ϕ + ϕk) into the power series of ϕ and keeping only the first terms of the series, we get

$$H(\varphi) = \frac{u_0^2}{2}\sum_{k=1}^{N} \sigma_k \cos 2\left[\beta_k - \frac{2\pi}{\lambda_1} r_k l_k(\varphi)\right], \qquad (5.6)$$

with

$$\beta_k = \frac{2\pi}{\lambda_1} r_k(\cos\gamma\cos\theta_k + \sin\gamma\sin\theta_k\cos\varphi_k) \qquad (5.7)$$

and

$$l_k(\varphi) = \sin\gamma\sin\theta_k\left(\varphi\sin\varphi_k + \frac{\varphi^2}{2}\cos\varphi_k - \frac{\varphi^3}{6}\sin\varphi_k\right). \qquad (5.8)$$
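The accuracy of the truncated expansion in Eqs (5.6)–(5.8) can be checked numerically. The sketch below (parameter values are ours, chosen for illustration) compares the exact phase argument of Eq. (5.5) with the approximation 2[βk − (2π/λ1) rk lk(ϕ)] over small rotation angles:

```python
import numpy as np

lam1, r_k = 0.03, 0.5                       # wavelength and scatterer radius, m (illustrative)
gamma, theta_k, phi_k = np.pi / 3, np.pi / 2.5, 0.7

def exact_phase(phi):
    """Argument of the cosine in Eq. (5.5)."""
    return (4 * np.pi / lam1) * r_k * (np.cos(gamma) * np.cos(theta_k)
            + np.sin(gamma) * np.sin(theta_k) * np.cos(phi + phi_k))

def approx_phase(phi):
    """2*[beta_k - (2*pi/lam1)*r_k*l_k(phi)], Eqs (5.6)-(5.8)."""
    beta_k = (2 * np.pi / lam1) * r_k * (np.cos(gamma) * np.cos(theta_k)
             + np.sin(gamma) * np.sin(theta_k) * np.cos(phi_k))
    l_k = np.sin(gamma) * np.sin(theta_k) * (phi * np.sin(phi_k)
          + phi**2 / 2 * np.cos(phi_k) - phi**3 / 6 * np.sin(phi_k))
    return 2 * (beta_k - (2 * np.pi / lam1) * r_k * l_k)

phi = np.linspace(-0.1, 0.1, 201)           # small rotation angles, rad
err = np.max(np.abs(exact_phase(phi) - approx_phase(phi)))
```

For a ±0.1 rad rotation the truncated series reproduces the exact phase to a small fraction of a radian, which is why the expansion is acceptable only for small synthesis angles.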
Consider now the microwave hologram function of the same object (Fig. 5.1), obtained by a classical method. In this method, the radar receiver scans, with an angular velocity Ω, the surface of a cylinder of radius R0 sin γ whose generatrix is parallel to the z-axis. The transmitter is at the point A with the coordinates xA = R0 sin γ, yA = 0, zA = R0 cos γ, while the angle β0 is equal to the rotation angle ϕ. Then the function Hcl(ϕ) for the classical microwave Fourier hologram is

$$H_{cl}(\varphi) = \frac{u_0^2}{2}\sum_{k=1}^{N}\sigma_k\cos 2\left[\beta_k - \frac{\pi}{\lambda_1}r_k l_k(\varphi)\right], \qquad (5.9)$$

where the functions βk and lk(ϕ) are those of Eqs (5.7) and (5.8). A comparison of Eqs (5.6) and (5.9) shows that the function Hcl(ϕ) differs from the function H(ϕ) for the synthesised hologram of the same object in having the factor (1/2) in the second term of the argument cos 2[···]. Clearly, the synthesised hologram changes this argument twice as fast and, hence, has twice as high a resolution: it looks like a classical hologram recorded in a field with half the real wavelength. This effect is due to the simultaneous scanning by several elements of the transmitter–object–receiver system. It is easy to see that a microwave hologram recorded by a simultaneous receiver–transmitter scanning of a fixed object along the arc L (Fig. 5.1) is totally identical to the H(ϕ) hologram. In the case of inverse scanning, however, the rotation of the object alone is equivalent to the movement of two devices – the transmitter and the receiver. We shall show below that the constant initial phase βk does not affect the structure of microwave radar imagery. We shall use a simplified expression for the synthesised Fourier hologram:

$$H_1(\varphi) \cong \sum_{k=1}^{N}\sigma_k\cos\!\left[\frac{4\pi}{\lambda_1}r_k\sin\theta_k\cos(\varphi_k + \varphi)\right], \qquad (5.10)$$
where rk, θk, ϕk are the spherical coordinates of the kth centre. Equation (5.10) was derived from Eq. (5.5) on the assumption of γ = 90° and is valid in the far-zone approximation. Since the H1(ϕ) function basically coincides with Hcl(ϕ), the image reconstruction from a synthesised Fourier hologram can be made in visible light, using the same techniques as those of optical Fourier holography [131]. Sometimes, a microwave hologram recorded on a flat transparency is placed in the front focal plane of the lens L (Fig. 5.4(a)). When the transparency is illuminated by a plane coherent light wave, two real conjugate images of the object, M and M′, are formed near the rear focal plane of the lens. An alternative is to use a spherical transparency of radius F0, illuminated by a coherent light beam converging at the sphere centre (Fig. 5.4(b)). The two variants are identical in the sense that the operations to be performed are the same. Practically, it is convenient to use the first variant but to analyse the second one. If a microwave hologram is recorded on an optical transparency uniformly moving with velocity vt, the angular coordinate α = vt τ/F0 on the transparency in the reconstruction space will be related to the angular coordinate ϕ = Ωτ on the hologram in the recording space:

$$\alpha = \varphi\,\frac{v_t}{\Omega F_0} = \varphi/\mu, \qquad \mu = \Omega F_0/v_t. \qquad (5.11)$$
Figure 5.4  Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency
For a hologram of a point object, the distribution of complex-valued light amplitudes in the image space u, v at the point M(u, v) in the vicinity of the point O can be represented by an integral (at θk = 90°):

$$E(u,v) = A\int_{-\alpha_0}^{\alpha_0}\left\{1 + \cos\!\left[\frac{4\pi}{\lambda_1} r_k\cos(\mu\alpha + \varphi_k)\right]\right\}\exp\!\left[-j\frac{2\pi}{\lambda_2} d(u,v,\alpha)\right]d\alpha = I_0 + I_{+1} + I_{-1},$$

$$I_0 = A\int_{-\alpha_0}^{\alpha_0}\exp\!\left[-j\frac{2\pi}{\lambda_2} d(u,v,\alpha)\right]d\alpha,$$

$$I_{\pm1} = \frac{A}{2}\int_{-\alpha_0}^{\alpha_0}\exp[j\psi_{\pm1}(u,v,\alpha)]\,d\alpha,$$

$$\psi_{\pm1}(u,v,\alpha) = \pm\frac{4\pi}{\lambda_1} r_k\cos(\mu\alpha + \varphi_k) - \frac{2\pi}{\lambda_2} d(u,v,\alpha),$$

$$d(u,v,\alpha) = [F_0^2 + 2F_0(v\cos\alpha - u\sin\alpha) + u^2 + v^2]^{1/2}, \qquad (5.12)$$
where λ2 is the wavelength in the optical range; A is a complex-valued proportionality factor A = (u0²/4)σ1; σ1 is the amplitude coefficient accounting for the reflection characteristics of the scattering centre k = 1; d(u,v,α) is the distance between an arbitrary point on the arc L and the point M near the arc centre on the image; 2α0 is the angular size of the hologram in the image space; and F0 is the lens focal length. The integrals I0, I+1 and I−1 describe the distribution of the complex-valued light amplitudes in the zeroth and first diffraction orders, both positive and negative. If the angular dimensions of the hologram are not too large, the functions cos(µα + ϕk) and d(u,v,α) can be represented by the first terms of the respective expansion series to write down the function ψ(u,v,α):

$$\psi_{\pm1}(u,v,\alpha) = \frac{2\pi}{\lambda_2}\left\{\alpha\left[2\frac{\lambda_2}{\lambda_1}\mu r_k\sin\varphi_k \pm u\right] + \frac{\alpha^2}{2}\left[2\frac{\lambda_2}{\lambda_1}\mu^2 r_k\cos\varphi_k \pm v\right] - \frac{\alpha^3}{6}\left[2\frac{\lambda_2}{\lambda_1}\mu^3 r_k\sin\varphi_k \pm u\right]\right\}. \qquad (5.13)$$

Here we have omitted the constant expansion terms independent of the argument α. The coordinates of the points M(uM, vM) and M′(u′M, v′M), at which the two conjugate images of the point object are formed, can be found from the expressions

$$\frac{\partial\psi(u,v,\alpha)}{\partial\alpha} = 0, \qquad \frac{\partial^2\psi(u,v,\alpha)}{\partial\alpha^2} = 0. \qquad (5.14)$$
With xM = rk sin ϕk and yM = rk cos ϕk, using Eq. (5.14), we get

$$u_{M,M'} = \pm 2\mu\frac{\lambda_2}{\lambda_1} x_M, \qquad v_{M,M'} = \pm 2\mu^2\frac{\lambda_2}{\lambda_1} y_M. \qquad (5.15)$$

Equation (5.15), in turn, gives the transverse and longitudinal scales of the image being reconstructed:

$$m_x = \frac{u_M}{x_M} = 2\mu\frac{\lambda_2}{\lambda_1}, \qquad m_y = \frac{v_M}{y_M} = 2\mu^2\frac{\lambda_2}{\lambda_1}. \qquad (5.16)$$
An undistorted image of an object can be reconstructed only if all the derivatives of ψ(u,v,α) with respect to the argument α are simultaneously equal to zero. It is easy to show that this condition is met at one point (M and M′) at µ = ΩF0/vt = 1, that is, ϕ ≡ α. The latter identity defines the criterion for optical processing of synthesised Fourier holograms: the aperture angles in the recording and reconstruction spaces must be the same. If the reconstruction procedure has been designed in this optimal way, we have mx = my = m, and the object is reproduced without distortions along the longitudinal and transverse directions. A specific feature of a synthesised Fourier hologram is that the resolution obtained is independent of the distance to the object. Indeed, let us take the following expression to be the measure of the resolving power:

$$\Delta = |I(u_M)|^{-2}\int_{-\infty}^{\infty}|I(u)|^2\,du, \qquad (5.17)$$
where |I(u)|² is the light intensity distribution across the scattering centre image and uM is the coordinate of the maximum intensity of the image focusing. Equation (5.17) describes the receiver pulse response to the point object. Then, neglecting all the terms in Eq. (5.13) except for the first one and using the scale relations of Eq. (5.16), we can define the resolving power on the object as

$$\Delta x(\lambda_1, \psi_S) = \frac{\Delta u}{m_x} = \frac{\lambda_1}{4\varphi_0} = \frac{\lambda_1}{2\psi_S}, \qquad (5.18)$$
where ψS is the object angle variation during the recording. Therefore, when the hologram angles are small, the resolving power on the object varies with the wavelength and the synthesised aperture angle, rather than with the distance to the object or the reconstruction parameters. With the scale relations from Eq. (5.16), we find for µ = 1

$$m_y = m_x = 2\frac{\lambda_2}{\lambda_1}.$$

Then the criterion described by Eq. (5.18) can yield the resolution of a video microwave image:

$$\Delta u(\alpha_0) = \Delta x(\lambda_1, \psi_S)\,m_x = \frac{\lambda_2}{2\varphi_0}. \qquad (5.19)$$
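Eqs (5.18) and (5.19) tie the attainable resolution directly to the synthesis angle. A small numerical sketch (the wavelength and aspect change are illustrative values, not taken from the book):

```python
import math

lam1 = 0.03                 # radar wavelength, m (X-band, illustrative)
psi_s = math.radians(10)    # object aspect change during the recording, rad

# Eq. (5.18): resolving power on the object, independent of range
dx = lam1 / (2 * psi_s)
print(f"cross-range resolution: {dx * 100:.1f} cm")   # → cross-range resolution: 8.6 cm
```

Doubling the recorded aspect change ψS halves dx, while moving the object farther away changes nothing, which is the point made in the text.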
It follows from Eq. (5.19) that the resolution of a microwave image obtained by inverse synthesis and optimal processing is fully consistent with the Abbe criterion for optical devices (Chapter 1). Consider now distortions arising from the reconstruction of a microwave image. These are defined by the high-order terms of Eq. (5.13) for the following reason. When an image is viewed in one plane, some of the scattering centres are shifted relative to this plane, that is, they are defocused. With the quadratic term of Eq. (5.13), the field distribution in a defocused point image is defined as

$$I_{+1}(p, t_0) = \frac{A}{t_0}\exp\!\left[\pi j\left(\frac{4 r_x\alpha_0}{\lambda_1} - \frac{p^2}{2}\right)\right]\{C(t_0+p) + C(t_0-p) + j[S(t_0+p) + S(t_0-p)]\}, \qquad (5.20)$$
where t = √(2(vM − v)/λ2) describes the viewing plane shift relative to the focusing plane, p = uM/(λ2 t), t0 = α0 t, and S(z), C(z) are the Fresnel integrals. The resolution of a defocused microwave image is described by the function

$$\Delta(t_0) = |I_{+1}(0, t_0)|^{-2}\int_{-\infty}^{\infty}|I_{+1}(p, t_0)|^2\,dp \qquad (5.21)$$

shown in Fig. 5.5. Obviously, the best resolution Δ̂ = 1.2 is achieved at a certain optimal value t̂0 = 1 and an optimal aperture size

$$\hat{\alpha}_0 = [2(v_M - v)/\lambda_2]^{-1/2}. \qquad (5.22)$$
Figure 5.5  The dependence of microwave image resolution on the normalised aperture angle of the hologram
At v = 0, when the viewing plane is superimposed on the focal plane of the lens, we can use Eq. (5.15) to get

$$\hat{\alpha}_0 = \left(2\mu\sqrt{y_M/\lambda_1}\right)^{-1} = \left(\mu\sqrt{\tau_{max}}\right)^{-1}, \qquad (5.23)$$

where τmax = 2Lmax/λ1 is the maximum longitudinal dimension of the object, expressed in half-wavelengths. As the size of the object or the aperture increases, the influence of the high-order terms of Eq. (5.13) becomes more pronounced, resulting in distortions and a lower resolution. These factors impose constraints on the synthesised aperture size. The image reconstruction of microwave Fourier holograms has some specificity associated with the way the artificial reference wave is created. If the reference signal phase is not modulated, the phase of the coherent reference background along the hologram is constant, a situation equivalent to the presence of a point object at the rotation centre. So during the reconstruction, the three images – that of the reference source and the two conjugate images of the object – overlap. To separate these images, one should introduce a space carrier frequency (SCF) by changing the phase of the reference signal at a constant rate, as in the expression

$$d\psi/d\tau \ge 4\pi\Omega r_{max}/\lambda_1, \qquad (5.24)$$

where rmax is the radius vector modulus of the scattering centre located at the maximum distance from the object rotation centre. The reference wave phase can be modulated by a phase shifter or by introducing translational motion along the viewing line, in addition to the rotational motion. In the latter case, the translational velocity v must satisfy the inequality v > Ωrmax.
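The SCF condition (5.24) and the equivalent translational-velocity bound are simple to evaluate. Below is a sketch with hypothetical numbers (not taken from the book):

```python
import math

lam1 = 0.03      # radar wavelength, m (illustrative)
omega = 0.5      # object rotation rate Omega, rad/s (illustrative)
r_max = 2.0      # most distant scattering centre, m (illustrative)

# Eq. (5.24): minimum rate of the reference-signal phase for SCF introduction
dpsi_dtau_min = 4 * math.pi * omega * r_max / lam1   # rad/s

# Equivalent translational motion along the viewing line must satisfy v > omega * r_max
v_min = omega * r_max                                 # m/s
```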
5.2 Complex 1D microwave Fourier holograms

We have shown in Section 5.1 that a 1D quadrature microwave Fourier hologram H1(ϕ) can be described by Eq. (5.10). A conjugate quadrature Fourier hologram with a π/2 phase shift has the form:

$$H_2(\varphi) \cong \sum_{k=1}^{N}\sigma_k\sin\!\left[\frac{4\pi}{\lambda_1}r_k\sin\theta_k\cos(\varphi_k + \varphi)\right]. \qquad (5.25)$$

According to Eq. (2.23), the holograms H1(ϕ) and H2(ϕ) can form a complex Fourier hologram:

$$H(\varphi) = H_1(\varphi) + jH_2(\varphi) = \sum_{k=1}^{N}\sigma_k\exp\!\left[j\frac{4\pi}{\lambda_1}r_k\sin\theta_k\cos(\varphi_k + \varphi)\right]. \qquad (5.26)$$
This expression can be re-written in a simpler form:

$$H(x) = u\exp(j\Phi), \qquad (5.27)$$

where u and Φ are the amplitude and phase (in the recording plane) of the total field scattered by the object. The argument ϕ of the H function has been replaced by the linear x-coordinate, since a 1D microwave hologram is recorded on a flat transparency. The image reconstruction by a plane wave in a paraxial approximation reduces to the Fourier transformation of the hologram function, assuming for simplicity that the recording and the reconstruction are performed at the same wavelength:

$$V(\omega_x) = \int_{-\infty}^{\infty} H(x)\exp(-j\omega_x x)\,dx, \qquad (5.28)$$
where ωx is the space frequency corresponding to the coordinate in the image plane. The substitution into Eq. (5.28) of the expressions for the quadrature holograms in Eqs (5.10) and (5.25), re-written as in Eq. (5.27), gives

$$V_1(\omega_x) = \frac{1}{2}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\,dx + \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\,dx\right], \qquad (5.29)$$

$$V_2(\omega_x) = \frac{1}{2j}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\,dx - \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\,dx\right]. \qquad (5.30)$$

It is seen that each quadrature hologram gives two conjugate images described by the appropriate terms in Eqs (5.29) and (5.30). In a complex hologram, the first quadrature component gives the two conjugate images in Eq. (5.29), while the second component, entering as jH2, reconstructs the images

$$jV_2(\omega_x) = \frac{1}{2}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\,dx - \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\,dx\right]. \qquad (5.31)$$
The first terms in Eqs (5.29) and (5.31) are identical, while the second terms differ in phase by the value π. A combined reconstruction after summing up the fields in Eqs (5.29) and (5.31) yields one pair of conjugate images that enhance each other and another pair of images that annihilate each other; so we eventually have

$$V(\omega_x) = \int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\,dx. \qquad (5.32)$$

The complex-valued function V(ωx) describes the only image reconstructed from a complex hologram [145]. The image intensity can be defined as

$$W(\omega_x) = |V(\omega_x)|^2. \qquad (5.33)$$
To illustrate, consider the case when the object is a point and the parameters θ1 and ϕ1 are equal to π/2. For small values of ϕ (ϕ < 1 rad) and ϕ = Ωx/vt, where vt is the velocity of the recording transparency, Eq. (5.26) reduces to

$$H(x) \cong u\exp\!\left(j\frac{4\pi}{\lambda_1}r\frac{\Omega}{v_t}x\right). \qquad (5.34)$$

Since the hologram is recorded in a finite time interval, τ ∈ [−T/2, T/2], Eq. (5.28) yields

$$V(\omega_x) = \int_{-v_t T/2}^{v_t T/2} H(x)\exp(-j\omega_x x)\,dx. \qquad (5.35)$$

The substitution of Eq. (5.34) into Eq. (5.35) and the integration give

$$V(\omega_x) = 2\sigma\,\sin\!\left[\left(\frac{4\pi}{\lambda_1}r\frac{\Omega}{v_t} - \omega_x\right)\frac{v_t T}{2}\right]\Bigg/\left(\frac{4\pi}{\lambda_1}r\frac{\Omega}{v_t} - \omega_x\right). \qquad (5.36)$$

Clearly, this function is of the sin z/z type and has a maximum at ωx = (4π/λ1)r(Ω/vt), which corresponds to the image of the point. Digital reconstruction reduces to the calculation of the integral in Eq. (5.28) and has no zeroth order. So a complex hologram can be formed without introducing the carrier frequency, which decreases the amount of data to be processed: a single quadrature hologram requires at least twice as many discrete counts because of the high carrier frequency. Optical reconstruction produces the zeroth order, in addition to a single image, because of the presence of the reference level Hr (Eq. (2.20)). During the processing of a complex hologram recorded without the carrier frequency, the zeroth order overlaps the image. They can be separated spatially simply by introducing the carrier frequency, but then the use of a complex hologram makes no sense, since one no longer has to remove the conjugate image. Besides, the optical reconstruction of a complex hologram is hard to implement because of the strict requirements on the adjustment of the two-channel processing suggested in Reference 35. Thus, complex microwave holograms should be recorded without introducing the carrier frequency and reconstructed only digitally.
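The contrast between Eqs (5.29) and (5.32) — a conjugate pair from a single quadrature hologram versus a single image from the complex hologram — is easy to reproduce numerically. A minimal sketch for one point scatterer (all simulation parameters are illustrative, not from the book):

```python
import numpy as np

lam1, r = 1.0, 5.0                        # wavelength and scatterer radius (arbitrary units)
phi = np.linspace(-0.1, 0.1, 1024)        # small rotation angles; theta_1 = phi_1 = pi/2

# Phase of the scattered field for a single point scatterer, Eq. (5.26)
Phi = (4 * np.pi / lam1) * r * np.cos(np.pi / 2 + phi)
H1, H2 = np.cos(Phi), np.sin(Phi)         # quadrature holograms, Eqs (5.10) and (5.25)
H = H1 + 1j * H2                          # complex hologram, Eq. (5.26)

spec_quad = np.abs(np.fft.fftshift(np.fft.fft(H1)))   # Eq. (5.29): conjugate image pair
spec_cplx = np.abs(np.fft.fftshift(np.fft.fft(H)))    # Eq. (5.32): single image
```

The magnitude spectrum of the real hologram H1 contains two mirror peaks of equal height, while the complex hologram produces a single peak with no comparable conjugate partner.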
5.3 Simulation of microwave Fourier holograms

A comparison of various techniques applied in microwave Fourier holography can be made using a special algorithm for digital simulation of 1D quadrature and complex hologram recording and reconstruction for simple objects. The algorithm consists of two units, one of which records a hologram following Eq. (5.26) and the other reconstructs the image, that is, calculates the integral of Eq. (5.28). The image reconstruction from individual quadrature holograms is performed using an additional procedure for the calculation of the Fourier integrals of the real functions H1 and H2 from the Fourier transform of the complex function H = H1 + jH2.

Figure 5.6(a–c) illustrates some of the results of the digital simulation. The ordinate shows the image intensity in relative units and the abscissa the image size. In digital reconstruction, a microwave image represents a series of discrete counts spaced at a distance λ1/2ψS. The model object consisted of two scattering centres arranged to form a dumb-bell structure of 10λ1 in length, which rotated at a constant angular velocity round the centre of mass. The quantity θk (Fig. 5.3) was taken to be equal to π/2. The image illustrated in Fig. 5.6(a) was reconstructed from a single quadrature hologram. Peaks 1 and 2 correspond to one conjugate image of the two scattering centres and peaks 3 and 4 to the other. The image separation was made using the SCF, whose introduction was simulated by the radial displacement (with the velocity vl) of the object rotation centre relative to the receiver. One of the conjugate images vanished during the processing of the complex hologram (Fig. 5.6(b)), so the carrier frequency
Figure 5.6  Microwave images reconstructed from Fourier holograms: (a) quadrature hologram, (b) complex hologram with carrier frequency, (c) complex hologram without carrier frequency and (d, e, f) the variation of the reconstructed image with the hologram angle ψS (complex hologram without carrier frequency)
was not needed. This is clearly seen in Fig. 5.6(c), showing the image reconstructed from a complex hologram recorded without the carrier frequency. Figure 5.6(d–f) presents the variation of the reconstructed image with the hologram angle. The comparison of these results supports the above conclusion that there is an optimal size of the synthesised aperture. As the angle ψS becomes larger, the resolution increases to a certain limit, beyond which distortions arise in the image structure. The resolving power of this technique estimated from the results of the digital simulation is ∼λ1. Currently, there are two methods used in microwave Fourier holography. One is based on the recording of a single quadrature amplitude–phase hologram of the type described by Eq. (5.10) with the carrier frequency and optical image reconstruction. The other method records a complex hologram of the type described by Eq. (5.26) without introducing the carrier frequency but using a digital image reconstruction. The application of the first method involves some problems associated with the use of an anechoic chamber (AEC), because the linear displacement of the object needed to introduce the carrier frequency leads to the chamber decompensation. So we
Figure 5.7  The algorithm of digital processing of 1D microwave complex Fourier holograms: input of H1 and H2 → normalisation of H1 and H2 → selection of the synthesis interval → interpolation → fast Fourier transform → output of Re(V) and Im(V) → computation of W = |V|² → output of W
recommend the second technique when one uses an anechoic chamber. We shall discuss some of the results obtained by the second method. Figure 5.7 illustrates the algorithm of digital image reconstruction, which operates as follows. The setting of the discrete data is followed by their normalisation, that is, the data are reduced to the variation range [−1, 1]. The hologram is usually recorded over a full 2π rad rotation; so for the subsequent processing, one selects a series of counts in such a way that their number describes the optimal aperture and their position in the array corresponds to the required aspect. An interpolation unit makes it possible to reduce the number of signal records to 2^m, where m is a natural number. The image reconstruction is performed by a Fourier transform unit using the FFT algorithm for the complex-valued function H(x). Arrays of Re(V) and Im(V) numbers that define the image, whose intensity is found as W = Re²(V) + Im²(V), are produced at the unit output.

Figure 5.8 presents the results of digital processing of 1D complex Fourier holograms recorded experimentally in an anechoic chamber. The image intensity is plotted in relative units along the y-axis and its linear dimension along the x-axis. The object is a metallic sphere of radius 0.3λ1, rotating along a circumference of radius 3λ1. The positions of the point image in Fig. 5.8(a–c) are different and vary with the object aspect ψ0 as shown schematically in each figure.
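The processing chain of Fig. 5.7 maps directly onto a few lines of array code. A sketch of the pipeline (the function name and the synthetic test hologram are ours):

```python
import numpy as np

def reconstruct(h1, h2, start, length, m=10):
    """Digital reconstruction of a 1D complex Fourier hologram (Fig. 5.7).

    h1, h2 -- sampled quadrature holograms (full 2*pi rotation)
    start  -- first count of the synthesis interval (selects the aspect)
    length -- number of counts (chooses the aperture)
    m      -- the interval is resampled to 2**m points for the FFT
    """
    # Normalisation: reduce each record to the range [-1, 1]
    h1 = h1 / np.max(np.abs(h1))
    h2 = h2 / np.max(np.abs(h2))
    # Selection of the synthesis interval
    H = (h1 + 1j * h2)[start:start + length]
    # Interpolation to 2**m counts
    n = 2 ** m
    grid = np.linspace(0, length - 1, n)
    xp = np.arange(length)
    H = np.interp(grid, xp, H.real) + 1j * np.interp(grid, xp, H.imag)
    # Fast Fourier transform and intensity W = Re^2(V) + Im^2(V)
    V = np.fft.fftshift(np.fft.fft(H))
    return V.real**2 + V.imag**2

# Synthetic test hologram of a single scatterer
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
Phi = 40 * np.cos(phi)
W = reconstruct(np.cos(Phi), np.sin(Phi), start=0, length=256, m=10)
```

The linear interpolation here stands in for whatever resampling unit a real processor would use; its only purpose is to make the record length a power of two for the FFT.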
Figure 5.8  A microwave image of a point object, reconstructed digitally from a complex Fourier hologram as a function of the object's aspect ψ0 (ψS = π/6): (a) ψ0 = π/12, (b) ψ0 = 5π/2 and (c) ψ0 = 3π/4
The methods we have discussed have some advantages and limitations. The recording of single quadrature holograms is made in one channel but requires that the carrier frequency be introduced in one way or another. The recording of complex holograms does not require the carrier frequency but is more complicated because the channels must have a strictly quadrature character, their parameters must be identical, and the measurements must be well synchronised. However, the recording errors associated with these characteristics of a two-channel system can be easily eliminated by the processing. (We have mentioned above that complex microwave Fourier holograms should be processed only digitally.) The image reconstruction from quadrature holograms can be made both digitally and optically. The possibility of recording a hologram in a form suitable for digital processing increases the dynamic range of the system. It then does not need sophisticated units, such as high-resolution cathode-ray tubes or high-precision focusing and deflecting devices. In optical processing, the aperture size is normally limited by the characteristics of the reconstruction unit, so it cannot be made optimal. On the other hand, optical processing allows re-focusing of the observation plane without difficulty, providing a 2D image (in the longitudinal and transverse directions). The investigation and analysis of methods for microwave Fourier holography have shown that they can be successfully used for imaging objects which can be represented as an array of scattering centres. These methods are of interest to those studying diffraction in anechoic chambers (Chapter 9), in particular, for the experimental verification of the applicability of the physical theory of diffraction developed by P. Ya. Ufimtzev [137] and of the geometrical theory of diffraction by J. B. Keller [70]. These methods can also be useful in designing radar systems with an inversely synthesised aperture (Chapter 9).
Chapter 6
Radar systems for rotating target imaging (a tomographic approach)
6.1 Processing in frequency and space domains

Section 2.4.2 discussed the tomographic approach to target imaging in two-dimensional (2D) viewing geometry. We suggested an algorithm for processing in the frequency domain, which finds the reflectivity function ĝ(x, y) from Eq. (2.48). The first procedure to be performed is to reconstruct an image in the frequency domain by calculating the N discrete Fourier transform (DFT) records of the echo complex envelope

$$P_\theta(l, m) = \sum_{n=0}^{N-1} s_v(n\Delta t, m\delta\theta)\exp(-j2\pi ln/N) \qquad (6.1)$$

for each of the M target angular positions mδθ, m = 0, …, M − 1. The pixels found in this way are located at the polar grid nodes formed by the intersections of concentric circumferences separated by the frequency step 1/(NΔt) and radial beams rotated by the angle δθ from one another. Since an inverse DFT can be made only on a rectangular grid, the second procedure should include the finding of pixels at the equidistant nodes of a rectangular grid, using the Pθ(l, m) values obtained by the first procedure. This is followed by a 2D inverse DFT computation of the target reflectivity ĝ(xi, yj) at the rectangular grid nodes. This algorithm has two important features that deserve attention. First, since the complex envelope of an echo signal is finite, there are distortions near the ±1/(2Δt) boundaries of the major period of the Pθ(l, m) spectrum. The distortions arise from the superposition of high-frequency components of the adjacent spectral periods. Besides, the high-frequency spectrum may contain noise that dominates over the signal data. To reduce the noise, one has to resort to weighting by multiplying the Pθ(l, m) DFT data by a 'window' function. The choice of such a function should be based on the consideration of how much the noise degrades the radar data and what kind of target is being probed [57].
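The two procedures described above — a per-aspect DFT onto a polar grid, followed by resampling onto a rectangular grid and a 2D inverse DFT — can be sketched as follows. This is a toy implementation with nearest-neighbour resampling in place of a proper polar-to-rectangular interpolator; all names and grid sizes are ours:

```python
import numpy as np

def image_from_echoes(sv, f_step, d_theta):
    """Frequency-domain reconstruction from echo envelopes sv[m, n].

    sv      -- complex envelope, one row per target aspect m*d_theta
    f_step  -- radial (frequency) step of the polar grid
    d_theta -- angular step between aspects
    """
    M, N = sv.shape
    P = np.fft.fft(sv, axis=1)                 # Eq. (6.1): DFT per aspect -> polar samples
    # Polar node coordinates (radius l*f_step, angle m*d_theta)
    l = np.arange(N)
    theta = (np.arange(M) * d_theta)[:, None]
    fx = l * f_step * np.cos(theta)
    fy = l * f_step * np.sin(theta)
    # Nearest-neighbour scatter onto a rectangular frequency grid
    K = N
    F = np.zeros((K, K), dtype=complex)
    fmax = N * f_step
    ix = np.clip(np.round((fx / (2 * fmax) + 0.5) * (K - 1)).astype(int), 0, K - 1)
    iy = np.clip(np.round((fy / (2 * fmax) + 0.5) * (K - 1)).astype(int), 0, K - 1)
    F[iy, ix] = P                              # last sample wins; a real system interpolates
    return np.fft.ifft2(F)                     # 2D inverse DFT -> reflectivity estimate

g = image_from_echoes(np.ones((64, 32), dtype=complex), f_step=1.0, d_theta=np.pi / 64)
```

A practical processor would replace the nearest-neighbour scatter with a proper gridding interpolation and apply the window function discussed above before the inverse transform.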
Second, since the radar is a coherent system, it is important to define the discretisation step δθ of the θ angle as the target aspect changes. The criterion for choosing a δθ value can be formulated as follows: the phase shift of the echo signal from the point scatterer most remote from the target centre of mass should not be larger than π when the target aspect changes by δθ. This criterion is written as

$$\delta\theta \le \frac{\lambda_c}{4|\bar{r}_o|_{max}}. \qquad (6.2)$$

This expression is valid for relatively narrowband signals, whose spectral width is much less than the carrier frequency. Otherwise, one should substitute λc in Eq. (6.2) by the wavelength of the highest frequency component in the signal spectrum. It is worth noting that the method of synthesising the so-called unfocused aperture is a particular case of the above processing algorithm for the frequency domain. The movement of a point scatterer along an arc is approximated by the movement along a tangent to it. By substituting v = y cos θ − x sin θ into Eq. (2.40) and using sin θ ≈ θ and cos θ ≈ 1 − θ²/2, we get

$$S(f) = H(f)\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x, y)\exp[j(k_c + k)\theta^2 y]\exp[-j2(k_c + k)y + j2(k_c + k)\theta x]\,dx\,dy.$$

If we eliminate the squared phase term, it will be clear that the ĝ(x, y) function can be reconstructed by an inverse Fourier transform (IFT) over the rectangular raster which replaces the respective region of the polar raster. This approximation works well only if the aspect variation during the data acquisition is small. Let us discuss now the processing algorithm for the space domain, or the convolution algorithm. For this, Eq. (2.48) will be transformed from Cartesian to polar coordinates:

$$\hat{g}(x, y) = \int_0^\pi d\theta\int_{-\infty}^{\infty} S_\theta(f_p)|f_p|\exp[j2\pi f_p r\cos(\theta - \varphi)]\,df_p. \qquad (6.3)$$
The inner integral in Eq. (6.3) represents the IFT of the product of fp and the function defined by expression (2.43). The result is the convolution of the quantity F⁻¹{Sθ(fp)} with the so-called kernel function q(v) = F⁻¹{|fp|}. If one uses the window function F(fp) to reduce the effect of high-frequency spectral noise, one gets

$$q(v) = F^{-1}\{|f_p|F(f_p)\}. \qquad (6.4)$$
The result of the integration with respect to the variable fp in Eq. (6.3) using Eq. (6.4) is known as a convolutional projection ξθ(·). It can be used for the back projection procedure:

$$\hat{g}(x, y) = \int_0^\pi \xi_\theta[r\cos(\theta - \varphi)]\,d\theta. \qquad (6.5)$$

This procedure implies the integration of the contribution of each convolutional projection ξθ(·) to the resulting image. The substitution of the integral in Eq. (6.5) by the Riemann sum gives

$$\hat{g}(x_i, y_j) = \sum_{m=0}^{M-1} \xi_\theta[r(x_i, y_j, m\delta\theta)]\,\delta\theta, \qquad (6.6)$$

where

$$r(x_i, y_j, m\delta\theta) = \sqrt{x_i^2 + y_j^2}\,\cos[m\delta\theta - \mathrm{arctg}(x_i/y_j)]. \qquad (6.7)$$
The latter expression is used to find (by interpolation) the contribution of the convolutional projection obtained at the mth target aspect to each of the (xi, yj) pixels of the rectangular image grid. An important advantage of the convolution algorithm is the possibility of processing data as they become available, because the contribution of every projection to the final image is computed individually. If the transmitter signal contains a finite number L of discrete frequencies, Eq. (6.3) will take the form:

$$\hat{g}(x, y) = \sum_{l=1}^{L}\frac{4\pi f_{p,l}}{c}\int_0^\pi S_\theta(f_{p,l})\exp[j2\pi f_{p,l}\,r\cos(\theta - \varphi)]\,d\theta \qquad (6.8)$$

and the processing algorithm reduces to summing up 1D integrals with respect to the variable θ. We can make computations with formula (6.8) in two ways. One is to calculate the integral for every pixel (xi, yj); the other is to evaluate the subintegral expression at the M aspects for every frequency value, followed by interpolation, as in the common convolution algorithm. Thus, radar imaging of extended compact targets by inverse aperture synthesis can be made by using a number of algorithms well known in computerised tomography. The application of the convolution algorithm of the back projection method allows a reduction in the imaging time, as compared with the time of reconstruction in the frequency domain, due to the processing of individual echo signals. The interpolation can be omitted in the case of discrete-frequency transmitter signals, giving an additional reduction in the processing time. Another important feature of an imaging radar is its coherence, so it provides more information than conventional systems using computerised tomography. On the other hand, coherence must be maintained in all of the radar units during the operation. This circumstance also imposes restrictions on the minimum repetition rate of transmitter pulses.
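The convolution (filtered back-projection) chain of Eqs (6.4)–(6.7) can be sketched compactly. This is a toy implementation with a plain |fp| ramp filter and nearest-neighbour range interpolation; the function name, grid sizes and test data are ours:

```python
import numpy as np

def backproject(projections, d_theta, extent):
    """Space-domain reconstruction, Eqs (6.4)-(6.7).

    projections -- S_theta sampled in range, one row per aspect m*d_theta
    d_theta     -- aspect step delta-theta
    extent      -- half-size of the image, in the units of the range axis
    """
    M, N = projections.shape
    # Eq. (6.4): filter each projection with the |fp| (ramp) kernel in the frequency domain
    ramp = np.abs(np.fft.fftfreq(N))
    xi = np.fft.ifft(np.fft.fft(projections, axis=1) * ramp, axis=1).real
    # Eq. (6.6): accumulate each convolutional projection into the image
    K = N
    x = np.linspace(-extent, extent, K)
    X, Y = np.meshgrid(x, x)
    img = np.zeros((K, K))
    for m in range(M):
        th = m * d_theta
        # Eq. (6.7): projection of the pixel (x, y) onto the range axis at aspect th
        r = X * np.sin(th) + Y * np.cos(th)
        idx = np.clip(np.round((r + extent) / (2 * extent) * (N - 1)).astype(int), 0, N - 1)
        img += xi[m, idx] * d_theta
    return img

img = backproject(np.random.default_rng(0).normal(size=(90, 64)), np.pi / 90, extent=1.0)
```

Because each aspect's contribution is added independently inside the loop, the image can be built up as echoes arrive, which is exactly the advantage of the convolution algorithm noted in the text.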
6.2 Processing in 3D viewing geometry: 2D and 3D imaging

It has been shown in Chapter 5 that inverse aperture synthesis is the most promising technique for imaging extended proper and extended compact targets with a high
6.2.1 The conditions for hologram recording There are a number of applied tasks when the target aspect variation must reflect natural viewing conditions. Let us consider the aspect variation relative to the line of sight of a ground radar viewing a hypothetical satellite moving at an altitude H = 400 km along a circular orbit with the inclination i = 97◦ (Fig. 6.1). The target is assumed to be perfectly stabilised in the orbital coordinates, and its aspect in the orbital plane is defined by the angle α between the longitudinal construction line and the projection of the line of sight onto the orbital plane. The angle β between the line of sight and the orbital plane describes the aspect variation in the plane normal to orbital plane. The analysis of the plots presented shows that the aspect variation of this class of targets during hologram recording in real viewing conditions should be characterised by (1) a 3D viewing geometry and (2) a non-equidistant arrangement of samples within the view zone. To derive analytical relations for the description of a microwave hologram for 3D viewing geometry, we shall consider the following conditions for viewing an orbiting satellite. The target is scanned by a ground coherent radar transmitting a probing signal with the carrier frequency fo and the modulation function w(t) from Eq. (2.30). The radar measures the amplitude and phase of the echo signal (for a narrowband signal w(t) ˙ = A, where A is the complex envelope amplitude). The target is large relative to the wavelength λ of the radar carrier oscillation, such that the target can be represented as an ensemble of individual and independent scatterers. Every scatterer is rigidly bound to the target’s centre of mass or moves across its surface as its aspect changes with respect to the radar. The position of the nth scatterer at any moment of time is defined by the radius vector rno with the origin at point O rigidly bound to the target’s centre of mass. 
The positions of the arbitrary nth scatterer and the rotation centre of the satellite will be described by the radius vectors $\mathbf{r}_{no}$ and $\bar{\mathbf{R}}_o$, respectively (Fig. 2.8). In the general case of 3D viewing geometry, an echo signal is defined, within the accuracy of a constant factor, as

$$S_v(t) = \int_V g(\mathbf{r}_{no})\, w\!\left(t - 2|\bar{\mathbf{R}}_o|/c - 2\hat{r}_n/c\right) \exp\!\left(-j2\pi f_0\, 2|\bar{\mathbf{R}}_o|/c\right) \exp\!\left(-j2\pi f_0\, 2\hat{r}_n/c\right) d\mathbf{r}_{no}. \qquad (6.9)$$
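For a narrowband signal ($\dot{w}(t) = A$), Eq. (6.9) reduces to a coherent sum of per-scatterer phasors. The following sketch is not from the book; the carrier frequency and geometry are illustrative assumptions:

```python
import numpy as np

C = 3e8          # speed of light, m/s
F0 = 10e9        # carrier frequency, Hz (assumed X-band value)

def echo_narrowband(g, r_hat, R0):
    """Narrowband form of the echo of an ensemble of independent scatterers
    (Eq. 6.9): each scatterer contributes g_n * exp(-j 4 pi f0 (|R_o| + r_n) / c).

    g      : complex reflectivities g(r_no) of the scatterers
    r_hat  : projections r_n of the scatterer positions onto the line of sight, m
    R0     : range |R_o| to the target's rotation centre, m
    """
    phase = -2j * np.pi * F0 * 2.0 * (R0 + r_hat) / C
    return np.sum(g * np.exp(phase))

# two point scatterers 0.5 m apart along the line of sight, at 400 km range
s = echo_narrowband(np.array([1.0 + 0j, 0.5 + 0j]), np.array([0.0, 0.5]), 400e3)
```

The magnitude of the echo depends only on the relative phases of the scatterers, since the common factor due to $|\bar{\mathbf{R}}_o|$ is a pure rotation.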
Radar systems for rotating target imaging (tomographic approach) 121

Figure 6.1  The aspect variation relative to the line of sight of a ground radar as a function of the viewing time (0–500 s) for a satellite at the culmination altitudes of 31°, 66° and 88°: (a) aspect α and (b) aspect β
It follows from Eq. (2.34) that the signal noise due to the presence of coordinate information can be corrected by the receiver. The correction consists in selecting the time strobe position in accordance with the delay $2|\bar{\mathbf{R}}_o|/c$ and in introducing the phase factor $\exp[\,j2\pi f_0(2|\bar{\mathbf{R}}_o|/c)]$ into the reference signal during the coherent sensing.
As a result of the compensation for the radial displacement of the satellite, the family of spectra of video pulses must be represented as a microwave hologram. For this, we go from time frequencies to space frequencies to get

$$S(f_{po} + f_p) = F\{S_v(ct/2)\} = H(f_p) \int_V g(\mathbf{r}_{no}) \exp[-j2\pi(f_{po} + f_p)\hat{r}_n(t)]\, d\mathbf{r}_{no}, \qquad (6.10)$$
where $F\{\cdot\}$ is the Fourier transform operator, $W(f_p) = F\{w(v)\}$ is the space frequency spectrum of the transmitter pulse, $f_{po} = 2f_o/c$ is the space frequency corresponding to the spectral carrier frequency, $2f_l/c < f_p < 2f_u/c$ is the space frequency determined over the whole frequency bandwidth of the transmitter pulse, $H(f_p) = W(f_p)K(f_p)$ is the aperture function, and $K(f_p)$ is the transfer function of the filter for the range processing of video pulses.

The above analytical description of video pulse spectra in terms of space frequencies has not changed the $\hat{r}_n(t)$ function, which is still considered to be a time function at the synthesis step. Each moment $t$ of the synthesis step will now be assigned a pair of angular coordinates $\theta, B$ in the 3D frequency space (Fig. 6.2(b)). The microwave hologram function can be presented as a 3D Fourier transform in the spherical coordinates $f_p, \theta, B$:

$$S(\mathbf{f}_p) = H(f_p) \int_V g(\mathbf{r}_{no}) \exp(-j2\pi \mathbf{f}_p \cdot \mathbf{r}_{no})\, d\mathbf{r}_{no}, \qquad (6.11)$$

where $\mathbf{f}_p = (f_{po} + f_p)\,\mathbf{e}(\theta, B)$ is the radius vector of the space frequency in the frequency domain.

The geometrical relations for the recording of such a hologram will be derived for two typical cases of ground radar viewing of orbiting satellites. Fig. 6.2(a) shows the viewing geometry and Fig. 6.2(b) illustrates fragments of the holograms obtained. The angular position of the radar line of sight (RLOS) is described by the azimuthal angle $\theta = \alpha - 3\pi/2$ and the polar angle β with respect to the body-related coordinate system xyz. The line of sight is represented in space as a line across a unit sphere with the centre at the coordinate origin. The arrangement of the hologram pixels in the frequency domain is defined relative to the $f_x f_y f_z$ coordinates by the angles θ, B and the radial $f_p$ coordinate (Fig. 6.2(b)). The hologram recording should meet the conditions $\theta = \theta^*$ and $B = \beta^*$, where $\theta^*$ and $\beta^*$ are the estimates of the θ and β angles.

In the first of the above cases, a narrowband radar tracks a satellite, stabilised in the body-related coordinates along the three axes, during its translational motion along the orbit. The line of sight turns relative to the satellite to describe a curve on the unit sphere (the left side of Fig. 6.2(a)), which represents an arc in the xy plane if the radar is located in the orbital plane, or a 3D curve in all other cases. If the radar transmits a continuous wave, the hologram reproduces the shape of this line on the sphere $f_{po}$ in the frequency domain (Fig. 6.2(b)). If a radar transmits a pulsed signal with the repetition rate $F_r$, or if a continuous echo signal is appropriately discretised, a hologram will represent a series of individual
Figure 6.2  Geometrical relations for 3D microwave hologram recording: (a) data acquisition geometry; a–b, trajectory projection onto a unit sphere relative to the radar motion and (b) hologram recording geometry
samples separated by $\delta f_\psi = f_{po}\dot{\theta}^* \cos\beta^*/F_r$, where $\dot{\theta}^* = d\theta^*(t)/dt$ is the angular velocity of the satellite rotation in the orbital plane.

In the second case, one gets a wideband hologram of a satellite stabilised by rotation of the body-related coordinates around the z-axis (the right side of Fig. 6.2(a)). During the tracking, the angle between the line of sight and the rotation axis changes slowly by the value $\Delta\beta = \beta_2 - \beta_1$, with $\dot\beta \ll \dot\theta$. The interception of the unit sphere surface by the line of sight forms a spiral confined between two conic surfaces with the half angles $\pi/2 - \beta_1$ and $\pi/2 - \beta_2$ at the vertex. The resulting hologram represents a multiplicity of radial beams that form a spiral band (Fig. 6.2(b)). The band is transversely bounded by two spherical surfaces and is 'fitted' between two conical surfaces, with $B_1 = \beta_1$ and $B_2 = \beta_2$. The radii of the spheres are equal to the lower $f_{pl}$ and upper $f_{pu}$ space frequencies of the hologram. Figure 6.2(b) shows a fragment of such a hologram bounded by the azimuthal step $\Delta\theta$, while the satellite makes $\dot\theta\,\Delta t/2\pi$ rotations during the synthesis time step $\Delta t$. The adjacent hologram slices synthesised during consecutive rotations are spaced by the frequency step $\delta f_u = 2\pi f_{po}\dot\beta^*/\dot\theta^*$. Under the condition $\delta f_u^{-1} \ge D$, where D is the maximum linear size of a satellite, the resolution can be achieved by the synthesis in the plane intercepting the z-axis. The resulting 3D wideband hologram containing at least several slices will be referred to as a surface hologram. A surface hologram is usually synthesised by a wideband radar, when tracking a satellite stabilised along the three axes, or when dealing with a model target in an AEC. In the latter case, a hologram lies entirely in the $f_x$–$f_y$ plane. Every beam of a wideband microwave hologram corresponds to a single echo signal and is made up of a certain number of discrete pixels, L, since digital hologram processing implies discretisation of the echo pulse spectrum.

It is clear from the foregoing that the conditions for recording a hologram of a target performing a complex movement relative to an imaging radar are the compensation for its radial displacement and the recording of the video signal spectrum in a form adequate to the respective aspect variation, that is, in a spherical or polar geometry.
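As a numeric illustration (all parameter values are assumed, not taken from the book), the two spacings $\delta f_\psi$ and $\delta f_u$ just introduced can be evaluated as:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def hologram_sample_spacings(f0, theta_dot, beta_dot, beta, Fr):
    """Spacings of hologram samples for a rotation-stabilised satellite.

    f0        : carrier frequency, Hz
    theta_dot : estimated angular rate of rotation in the orbital plane, rad/s
    beta_dot  : estimated rate of the out-of-plane aspect angle, rad/s
    beta      : aspect angle beta*, rad
    Fr        : pulse repetition rate, Hz
    """
    fpo = 2.0 * f0 / C                                 # carrier space frequency
    d_f_psi = fpo * theta_dot * np.cos(beta) / Fr      # spacing between pulses
    d_f_u = 2.0 * np.pi * fpo * beta_dot / theta_dot   # spacing between slices
    return d_f_psi, d_f_u

# hypothetical numbers: 10 GHz carrier, 0.05 rad/s spin, slow out-of-plane drift
d_psi, d_u = hologram_sample_spacings(10e9, 0.05, 1e-4, np.radians(30), 1000.0)
```

Resolution in the plane intercepting the z-axis is then possible when $1/\delta f_u$ exceeds the maximum target size D.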
6.2.2 Preprocessing of radar data

The preliminary processing of radar data, integrated in the form of a microwave hologram to be used further for image reconstruction, can be described in terms of a linear filtering model as processing by an inverse filter in a limited frequency band. The transfer function of the filter is

$$H_f(f_p) = H^{-1}(f_p)\,H_o(f_p)\,H_r(f_p), \qquad (6.12)$$
where $H_o(f_p)$ is the aperture function, non-zero within the chosen boundaries $f_{ph}$ of the hologram (Fig. 6.2(b)):

$$H_o(f_p) = \mathrm{rect}(f_p/f_{ph}) = \begin{cases} 1, & \mathbf{f}_p \subset V_f, \\ 0, & \mathbf{f}_p \not\subset V_f, \end{cases} \qquad (6.13)$$

and $H_r(f_p) = \exp[\,j2\pi(f_{po} + f_p)|\mathbf{r}_a|]$ is the transfer function of the compensation step of the target radial displacement. The process of image reconstruction from a hologram described by Eq. (6.11) can be represented as

$$\hat{g}(\mathbf{r}_{no}) = F^{-1}\{S(\mathbf{f}_p)H_f(f_p)\} = \int_{V_f} S(\mathbf{f}_p)H_f(f_p) \exp(\,j2\pi \mathbf{f}_p \cdot \mathbf{r}_{no})\, d\mathbf{f}_p = g(\mathbf{r}_{no}) * h_o(\mathbf{r}_{no}), \qquad (6.14)$$

where $h_o(\mathbf{r}_{no}) = F^{-1}\{H_o(f_p)\}$ is a perfect impulse response which only describes the image noise due to the finite diffraction limit, that is, to the limited size of the aperture function $H_o(f_p)$.
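The effect of Eqs (6.13) and (6.14), a rect-limited aperture turning a point scatterer into a diffraction-limited response, can be sketched in one dimension (the array size and cut-off frequency are illustrative assumptions, not values from the book):

```python
import numpy as np

# 1D sketch of Eqs (6.12)-(6.14): reconstruction through an aperture limited
# by a rect window Ho.  The scatterer position and band limit are arbitrary.
N = 256
fp = np.fft.fftfreq(N)             # space-frequency samples
S = np.exp(-2j * np.pi * fp * 30)  # hologram of a point scatterer at pixel 30

Ho = (np.abs(fp) < 0.15).astype(float)  # rect(fp/fph): limited aperture
g_hat = np.fft.ifft(S * Ho)             # Eq. (6.14): diffraction-limited image

peak = int(np.argmax(np.abs(g_hat)))    # response still peaks at the scatterer
```

Without the window the inverse transform returns an ideal point; with it, the response broadens into the sin(x)/x-type impulse response $h_o$, illustrating why the aperture size sets the resolution.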
Thus, the processing of an echo signal during the imaging includes two stages (Fig. 6.3). The signal preprocessing is aimed at synthesising a Fourier hologram, whose size and shape are determined by the transmitter pulse parameters and the target aspect variation. The structure and composition of processing operations 1–5 are conventional radar operations and can be varied with the type of transmitter

Figure 6.3  The sequence of operations in radar data processing during imaging. Step 1, preprocessing of the echo signal: (1) coherent detection; (2) range processing; (3) analogue–digital transform; (4) DFT; (5) annihilation of phase distortions due to the turbulent troposphere; (6) annihilation of the target radial displacement (using the range estimate); (7) spherical (polar) recording (using the aspect estimate), yielding the microwave hologram. Step 2, image reconstruction: (8) subdivision into partial holograms; (9) partial image reconstruction by inverse DFT; (10) computation of partial image contributions to the total image, yielding the radar image
signal, the processing techniques used, and the tracking conditions. For example, a monochromatic pulse does not require operations 2 and 4. When a signal with linear frequency modulation (LFM) is subjected to correlation processing, operations 1 and 2 coincide, and operation 4 becomes unnecessary. The compensation for the radial displacement of a satellite during hologram recording in field conditions is a fairly complex problem [8,10]. In an AEC, the latter operation reduces to the introduction of the phase factor $\exp[\,j2\pi f_{pl}(2R_0/c)]$, where $f_{pl}$ is the space frequency of the first spectral component of the hologram and $R_0$ is the distance between the antenna phase centre and the target rotation centre [8,10]. Obviously, the phase factor is constant for a particular AEC.

A necessary operation specific to ISAR systems at the preprocessing stage is the recording of the target aspect variation. It is assumed that each pixel on the hologram is associated by a digital recorder with the family of coordinates defining its position in the frequency domain $f_x f_y f_z$ (in the frequency plane $f_x$–$f_y$) (see Fig. 6.2).

It is worth discussing a possible application of available processing algorithms for image reconstruction from a microwave hologram. The experience gained from the application of inverse aperture synthesis for imaging aircraft and spacecraft, as well as from the study of local radar characteristics, has stimulated the development of algorithms for processing echo signals of coherent radars. A fairly detailed analysis of the algorithms can be found in Reference 8 and in Chapter 2 of this book, so we shall discuss only the possibility of applying them to the aspect variation of real targets. It has been shown in Section 2.3.2 and in References 9 and 10 that the conditions for tracking real targets differ from the conditions in which available algorithms operate.
First, discrete aspect pixels are not equidistant because of the constant repetition rate of the transmitter pulses. Second, the angle between the RLOS and the target rotation axis changes during the viewing, which inevitably makes the problem three-dimensional. Attempts at applying the 2D algorithms discussed above to the processing of 3D data lead to substantial errors in the images [8]. The level of errors rises with increasing relative size of a target (the ratio of the maximum target size to the carrier radiation wavelength) and with increasing deviation from 90° of the angle formed by the line of sight and the target rotation axis. To conclude, radar imaging should account for the viewing geometry, which requires a radically new approach to data processing. The approach should provide 3D microwave holograms and be able to handle a non-equidistant arrangement of echo pixels representing the aspect variation of space targets.
6.3 Hologram processing by coherent summation of partial components

It has been shown earlier that image reconstruction from a microwave hologram should generally include a 3D inverse Fourier transform of the hologram function. The obtained estimate $\hat{g}(\mathbf{r}_{no})$ is a distorted representation of the target reflectivity function. If there is no processing noise and the radial displacement has been perfectly compensated, an error may be due to a limited bandwidth of the transmitter pulse
or a limited aspect variation. The resolving power of image-synthesising devices is then restricted only by the diffraction limit, and the image produced is known as a diffraction-limited image. Recording and processing noise additionally deteriorate image quality. So when designing algorithms and techniques for image processing, one should bear two things in mind: (1) the dimensionality of an image is not to be higher than that of a microwave hologram and (2) the image resolution in any direction is to be inversely proportional to the hologram length. Hence, processing of 3D holograms can yield 1D, 2D and 3D images. An advantage of a 3D image is that it fully represents the information recorded on the hologram, but it has to be processed and analysed by computer. For visualisation, an image must be displayed on 2D media, such as paper or photosensitive films, or on computer screens. Moreover, the 'third' dimension of a hologram is sometimes insufficient to get a good resolution. Nonetheless, neglecting the 3D format of a hologram leads to serious image errors during its processing. Therefore, the problem of producing undistorted 2D images from 3D holograms is quite important.

We can suggest two ways of solving this problem. One way is to obtain a 3D image and then intercept it with a plane of prescribed orientation. However, the computations with cumbersome 3D coordinates and data arrays of lower dimensionality require special processing algorithms and large computational resources. A simpler and more cost-effective approach is to compute directly the contributions of single 3D hologram components to a 2D image, if their dimensionality is not higher than that of the image. The computations become less complex, and all highlighted components of a hologram can be processed simultaneously, provided that the number of processors is sufficient.

The applicability of this technique can be easily extended to 2D holograms. This method of image reconstruction can be termed coherent summation of partial components of a hologram. It includes the following procedures.

Stage 1. A microwave hologram is subdivided into regions of limited size called partial holograms (PHs). Since the discrete pixels making up the hologram are formed by the interceptions of radial lines (corresponding to single echo signals) and confocal spherical surfaces (corresponding to discrete values of the space frequency), PHs can be separated from the initial hologram in different ways. The PH dimensionality is chosen from the initial hologram geometry and from considerations of processing convenience. In the case of a 2D hologram, the PHs may be one- or two-dimensional, while for a 3D hologram they may, in addition, be three-dimensional. Figures 6.4 and 6.5 depict 1D PHs as lines with points at their ends, which represent the initial and final pixels; the points on the surfaces of 2D and 3D PHs correspond to single pixels. One-dimensional PHs are composed either of pixels coinciding with the radial rays which correspond to single pulses (radial PHs) or of pixels located on the confocal spherical surfaces with $f_{po} = \mathrm{const}$ (transverse PHs). Radial 2D PHs are made up of ensembles of 1D radial PHs and represent regions of planar conic (Fig. 6.5(b)) or more complex curved (Fig. 6.4(b)) surfaces. Transverse 2D PHs
Figure 6.4  Subdivision of a 3D microwave hologram into partial holograms (ΔΨ ≈ Δθ cos B, ΔΨ ≈ ΔB): (a) 1D partial (radial and transversal), (b) 2D partial (radial and transversal) and (c) 3D partial holograms
can be separated only from volume holograms. They are regions of spherical surfaces with $f_{po} = \mathrm{const}$. If the angular discretisation of a hologram is uniform, the maximum angles of 1D transverse, 2D and 3D PHs are chosen from the following considerations. When a spherical coordinate grid (or a polar grid for plane holograms) is replaced by a rectangular grid, the phase noise at the PH edges should not exceed $\pi/2$. This criterion leads to the following restrictions:

$$\Delta\psi \le (\lambda/D)^{1/2}, \qquad (6.15)$$

$$\Delta\psi \le c/D\Delta f. \qquad (6.16)$$
If the intersample spacing on a hologram varies slowly because of a non-uniform rotation of a target, the choice of the PH angle should meet the condition

$$\delta\nu' \le \arccos(1 - \lambda/4r_n\cos\beta), \qquad (6.17)$$

where $\delta\nu'$ is the difference between the maximum (or minimum) discretisation step and its average value. Condition (6.17) is based on the limited phase noise due to the non-equidistant arrangement of the hologram samples.
Figure 6.5  Subdivision of a 3D surface hologram into partial holograms: (a) radial, (b) 1D partial transversal and (c) 2D partial
When choosing the PH angle, one should always follow the more rigid of the above criteria. The restriction on the PH size is introduced in order to keep the deviation of the hologram samples from the rectangular grid nodes within a prescribed limit. The PH angles can be easily calculated analytically when one of the angles of the spherical coordinates describing the PHs is constant or varies only slightly (Fig. 6.4(a)); in that case the PH boundaries will be close to the coordinate surfaces. If both angles θ and B change markedly (Fig. 6.5), the angular step Δψ should be found in the plane tangent to the PH.

Stage 2. Every PH is subjected to a DFT providing a radar image with the same dimensionality as that of the PH, with a resolution determined by the PH size.

Stage 3. The contributions of the partial images to the integral image are computed. When the dimensionalities of a PH and a partial image are the same, the pixels of the latter are interpolated to those of the integral image. If the dimensionality of the integral image is higher, the major computational procedure is back projection [127].

Consider algorithms for the reconstruction of 2D images by processing narrowband and wideband surface holograms (Fig. 6.5) produced by a ground radar viewing a satellite stabilised along three axes. With such algorithms we shall try to justify the specific features of coherent summation of partial components: (1) the possibility of selecting partial regions of various shapes on a PH and processing them independently and (2) the possibility of increasing the resolution of the integral image as the individual contributions of the partial components are accumulated, until the diffraction limit corresponding to the initial hologram size is reached. The above analysis allows the following conclusions to be drawn.
The most general approach to radar imaging of a satellite by inverse aperture synthesis, no matter how it moves and what probing radiation is used, includes two stages of echo signal processing. The preprocessing involves some conventional operations, the compensation for the phase noise specific to coherent radars, and the recording of data that follows the aspect variation to produce a microwave hologram. The second stage is to reconstruct the image by special digital processing of PHs. A procedure specific to preprocessing is the compensation for the phase shift due to the radial displacement of a space target. In the case of an AEC, this operation reduces to the introduction of constant phase factors into the wideband echo signal. The use of monochromatic transmitter pulses does not require this operation (Chapter 5). The complex pattern of aspect variation of low orbit satellites requires a 3D hologram with a non-equidistant arrangement of the aspect samples. Since there are no adequate methods for processing such holograms, we have designed a way of image reconstruction by coherent summation of PHs. This reduces the digital processing of a hologram of complex geometry to a number of simple operations: a hologram is subdivided into PHs, from which partial images are reconstructed using a fast Fourier transform (FFT), and the contributions of the partial images to the integral image are computed.
6.4 Processing algorithms for holograms of complex geometry

We should first change Eq. (2.38), generally relating the hologram and image functions, to the Cartesian coordinates necessary for a DFT:

$$\mathbf{f}_p \cdot \mathbf{r}_{no} = f_x r_x + f_y r_y + f_z r_z, \qquad (6.18)$$

where

$$f_x = |\mathbf{f}_p| \sin\theta \cos B, \qquad (6.19)$$

$$f_y = -|\mathbf{f}_p| \cos\theta \cos B, \qquad (6.20)$$

$$f_z = |\mathbf{f}_p| \sin B; \qquad (6.21)$$

$$r_x = |\mathbf{r}_{no}| \sin\nu \cos\beta, \qquad (6.22)$$

$$r_y = -|\mathbf{r}_{no}| \cos\nu \cos\beta, \qquad (6.23)$$

$$r_z = -|\mathbf{r}_{no}| \sin\beta. \qquad (6.24)$$
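A direct transcription of Eqs (6.19)–(6.24), written with $f_y$ symmetric to $r_y$ (i.e. with the factor $\cos\theta\cos B$, which makes the $B = \beta = 0$ case collapse to the $|f_p|\,r\cos(\nu - \theta)$ kernel of Eq. (6.25)); the function names are hypothetical:

```python
import numpy as np

def freq_cartesian(fp_mag, theta, B):
    """Cartesian space-frequency coordinates, Eqs (6.19)-(6.21)."""
    return np.array([fp_mag * np.sin(theta) * np.cos(B),
                     -fp_mag * np.cos(theta) * np.cos(B),
                     fp_mag * np.sin(B)])

def image_cartesian(r_mag, nu, beta):
    """Cartesian image coordinates, Eqs (6.22)-(6.24)."""
    return np.array([r_mag * np.sin(nu) * np.cos(beta),
                     -r_mag * np.cos(nu) * np.cos(beta),
                     -r_mag * np.sin(beta)])

# with B = beta = 0, the scalar product of Eq. (6.18) reduces to
# |fp| * r * cos(nu - theta)
fp_v = freq_cartesian(1.0, 0.3, 0.0)
r_v = image_cartesian(2.0, 1.1, 0.0)
dot = float(fp_v @ r_v)
```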
The substitution of Eq. (6.18) into Eq. (2.38) reduces it to the conventional 3D Fourier transform. However, it is impossible to apply it directly to a microwave hologram recorded in spherical coordinates (Fig. 6.2(b)). The transition to pixels located at rectangular grid nodes is considered as an interpolation problem. Even a first-order interpolation for a 2D case would require large computational resources. Besides, any noise arising from the interpolation would lead to large errors in the reconstructed image. The procedure of coherent summation of partial components simplifies this problem if we use the reverse order of computational operations: a number of DFT operations followed by the interpolation of their results (partial images) to the rectangular grid nodes of the integral image. Of special practical importance is the case when a PH and its partial image have a lower dimensionality than the integral image, owing to the higher computational efficiency of the algorithms used. The interpolation then represents a transition from a rectangular grid of lower dimensionality to one of higher dimensionality, a procedure known as back projection [127].

As previously mentioned, we shall focus on designing algorithms for producing 2D images by coherent summation of 1D PHs and of individual initial hologram samples. The algorithm for coherent summation of 2D partial images will be discussed largely for the theoretical completeness of the treatment. The analysis will start with algorithms for processing 2D holograms recorded in an AEC and during the imaging of low orbit satellites by a SAR located in the orbital plane.
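Back projection of a 1D partial image onto a 2D grid can be sketched minimally as follows; the function names, rotation convention and nearest-sample lookup are hypothetical simplifications (a practical system would interpolate more carefully):

```python
import numpy as np

def back_project(partial, xs, grid_x, grid_y, theta_m):
    """Spread a 1D partial image (resolved along x_m) over a 2D grid.

    Each integral-image pixel is rotated into the partial image's x_m, y_m
    frame and takes the partial value at its x_m coordinate (nearest sample),
    so the contribution is constant along y_m.
    """
    # x_m coordinate of every pixel for the m-th partial image orientation
    xm = grid_x * np.cos(theta_m) + grid_y * np.sin(theta_m)
    idx = np.clip(np.searchsorted(xs, xm), 0, len(xs) - 1)
    return partial[idx]

xs = np.linspace(-1.0, 1.0, 64)
partial = np.sinc(xs * 8)                 # a sin(x)/x-type partial response
gx, gy = np.meshgrid(xs, xs)
contribution = back_project(partial, xs, gx, gy, np.radians(30))
```

Summing many such contributions over the m orientations, each multiplied by its coherent processing phasor, builds up the integral image.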
6.4.1 2D viewing geometry

Equation (6.14) will be transformed to polar coordinates by substituting Eq. (6.18) into it and using Eqs (6.19)–(6.24). Assuming $B = \beta = 0$ and denoting $|\mathbf{f}_p| = f_{po} + f_p$ and $|\mathbf{r}_{no}| = r$, we get

$$\hat{g}(r, \nu) = \int_{\theta_i}^{\theta_f}\! \int_{f_{pl}}^{f_{pu}} S(f_{po} + f_p, \theta)\,|\mathbf{f}_p| \exp[\,j2\pi(f_{po} + f_p)\, r \cos(\nu - \theta)]\, df_p\, d\theta, \qquad (6.25)$$

where $\theta_i$ and $\theta_f$ are the initial and final values of the angle θ of the hologram (Figs 6.6 and 6.7), and $f_{pl} = f_{po} - \Delta f_p/2$ and $f_{pu} = f_{po} + \Delta f_p/2$ are the lower and upper boundaries of the space frequency band along the hologram radius.

It is easier to start the analysis of processing algorithms with the simple case of narrowband microwave holograms. The limit of expression (6.25) at $\Delta f_p \to 0$ is

$$\hat{g}(r, \nu) = f_{po} \int_{\theta_i}^{\theta_f} S(f_{po}, \theta) \exp[\,j2\pi f_{po}\, r \cos(\nu - \theta)]\, d\theta. \qquad (6.26)$$
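A discretised sketch of the narrowband reconstruction of Eq. (6.26); the carrier, scatterer position and sample count are assumed for illustration. The reconstructed value peaks at the true scatterer position and is strongly suppressed elsewhere:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def cca_narrowband(S, thetas, fpo, r, nu):
    """Discrete form of Eq. (6.26): reconstruct one polar image sample (r, nu)
    from a narrowband hologram S(fpo, theta) taken at uniform angles."""
    dtheta = thetas[1] - thetas[0]
    kernel = np.exp(2j * np.pi * fpo * r * np.cos(nu - thetas))
    return fpo * np.sum(S * kernel) * dtheta

# hologram of a single point scatterer at polar position (r0, nu0)
f0, r0, nu0 = 10e9, 0.4, 0.7
fpo = 2 * f0 / C
thetas = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
S = np.exp(-2j * np.pi * fpo * r0 * np.cos(nu0 - thetas))

at_target = abs(cca_narrowband(S, thetas, fpo, r0, nu0))
away = abs(cca_narrowband(S, thetas, fpo, r0, nu0 + 0.5))
```

At the scatterer the hologram phase and the kernel phase cancel for every aspect angle, so all contributions add in phase; away from it they oscillate and largely cancel.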
This expression coincides with the formula for the CCA for a narrowband signal [94]. When an image is reconstructed by this algorithm, a circular convolution is performed for every sample of the polar coordinate r in the image space with respect to the parameter θ of the hologram function and the phase factor. The contribution of all hologram samples to every (r, ν) node of the image polar grid is computed. If the satellite aspect changes non-uniformly, the samples are arranged along the hologram circumference with a variable step, so a discrete circular convolution becomes impossible.

Let us single out a series of adjacent regions on a hologram, or PHs, shown in Fig. 6.6(a), with an angle satisfying the condition of Eq. (6.15). The convolution of Eq. (6.26) over the whole hologram angle can then be represented as a sum of integrals, each taken over a limited angle step Δθ:

$$\hat{g}(r, \nu) = f_{po} \sum_{m=1}^{M} \int_{\Delta\theta} S_m(f_{po}, \theta) \exp[\,j2\pi f_{po}\, r \cos(\nu - \theta)]\, d\theta, \qquad (6.27)$$

where $S_m(f_{po}, \theta)$ is the mth PH and M is the total number of such holograms.
Figure 6.6  Coherent summation of partial holograms. A 2D narrowband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image
We now introduce the Cartesian $x_m, y_m$ coordinates (Fig. 6.6(b)) for each mth PH, with the origin O coinciding with that of the rectangular x–y coordinates of the integral image. The $x_m$-axis is parallel to the tangent, at its centre, to the arc connecting the mth PH pixels. Since the microwave hologram in question is 2D, let us introduce the azimuthal coordinate $f_{p\theta} = f_{po}\theta$ to describe it in the frequency $f_x$–$f_y$ plane (Fig. 6.6(a)), in addition to the radial polar coordinate $f_p$. With $x_m = r\sin\theta_m$ and $y_m = r\cos\theta_m$, the transformation of the phase factor under the integral of Eq. (6.27)
Figure 6.7  Coherent summation of partial holograms. A 2D wideband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image
will give

$$\hat{g}(x, y) = \sum_{m=1}^{M} \int_{f_{\theta m} - \Delta f_\theta/2}^{f_{\theta m} + \Delta f_\theta/2} S(f_{po}, \theta) \exp(\,j2\pi f_\theta x_m)\, df_\theta\, \Phi_m, \qquad (6.28)$$

where $df_\theta = f_{po}\, d\theta$ is the differential of the space frequency $f_\theta$, $f_{\theta m}$ is the space frequency corresponding to the mth PH centre, and $\Phi_m$ is the coherent processing phasor.
Expression (6.28) describes the algorithm for coherent summation of partial images obtained from 1D transverse (azimuthal) PHs. Each partial image results from a Fourier transformation of the appropriate PH and is resolved along the azimuthal $x_m$-coordinate. The synthesis of a PH is made simultaneously with its summation with the radar image by moving the partial image along the $y_m$-coordinate (back projection), accompanied by the multiplication of all of its samples by the coherent processing phasor $\Phi_m$.

The process of image summation by the algorithm of Eq. (6.28) will be discussed with reference to a point scatterer with the $x_n, y_n$ coordinates in the x–y coordinate system (Fig. 6.6(b)). This scatterer will be assumed to possess an isotropic local radar target characteristic $g(\mathbf{r}_{no}) = \sigma_n^{1/2}\exp(\,j\varphi_n)$. A narrowband microwave hologram is then defined as

$$S(f_{po}, \theta) = \sigma_n^{1/2} \exp[-j2\pi f_{po}\hat{r}_n(\vartheta)]\exp(\,j\varphi_n). \qquad (6.29)$$

The relative range of the point scatterer is expressed by the rectangular $x_m, y_m$ coordinates. The expansion of $\hat{r}_n(\vartheta)$ into a Taylor series with respect to the centre $\vartheta_m$ of the mth partial angle step, keeping the linear terms only, gives

$$\hat{r}_n(\vartheta) = \hat{r}_n(\vartheta_m) + \dot{\hat{r}}_n(\vartheta_m)\vartheta - \dot{\hat{r}}_n(\vartheta_m)\vartheta_m, \qquad (6.30)$$

where $\dot{\hat{r}}_n(\vartheta) = d\hat{r}_n(\vartheta)/d\vartheta$ and $\dot{\hat{r}}_n(\vartheta_m) = \dot{\hat{r}}_n(\vartheta)\big|_{\vartheta=\vartheta_m}$. By substituting Eq. (6.30) into Eq. (6.29) and denoting $\hat{r}_n(\vartheta_m) = y_{mn}$, $\dot{\hat{r}}_n(\vartheta_m) = x_{mn}$, we transform the expression for the mth PH to

$$S_m(f_{po}, \theta) = \sigma_n^{1/2} \exp\{-j2\pi f_{po}(y_{mn} - x_{mn}\vartheta_m)\} \exp(-j2\pi f_{po} x_{mn}\vartheta) \exp(\,j\varphi_n). \qquad (6.31)$$

It is further assumed that the estimate of the target rotation rate obtained during the hologram recording contains no error: $\theta = \vartheta$. It should also be taken into account that the rectangular window of width $\Delta f_\theta = f_{po}\Delta\theta$ framing the PH (6.31) is shifted relative to the centre of the space frequency axis by its half width: $f_{po}\vartheta_m = \Delta f_\theta/2$. Then the expression for the partial image can be written as

$$\hat{g}(x_m) = \int_{f_{\theta m} - \Delta f_\theta/2}^{f_{\theta m} + \Delta f_\theta/2} S(f_{po}, \theta) \exp(\,j2\pi f_\theta x_m)\, df_\theta = \sigma_n^{1/2}\,\Delta f_\theta\, \frac{\sin[\pi(x_m - x_{mn})\Delta f_\theta]}{\pi(x_m - x_{mn})\Delta f_\theta}\, \exp[\,j\pi(x_m - x_{mn})\Delta f_\theta]\, \Phi_{mn} \exp(\,j\varphi_n). \qquad (6.32)$$

The integral image will be described as

$$\hat{g}(x, y) = \sigma_n^{1/2} \exp(\,j\varphi_n)\,\Delta f_\theta \sum_{m=1}^{M} \frac{\sin[\pi(x_m - x_{mn})\Delta f_\theta]}{\pi(x_m - x_{mn})\Delta f_\theta}\, \exp[\,j\pi(x_m - x_{mn})\Delta f_\theta]\, \Phi_{mn}. \qquad (6.33)$$
It is clear from Eq. (6.33) that the complex phase factors varying with the $x_m, y_m$ coordinates reach their maximum values, equal to unity, at the integral image point corresponding to the position of the scatterer response. The contribution of a PH to the integral image is defined by the product of the local radar target characteristic of the scatterer and a function of the $\sin(x)/x$ type. Therefore, the PHs are summed equiphasically at the point $x_n = r_{no}\sin\vartheta_n$, $y_n = r_{no}\cos\vartheta_n$, while at other points of the image they mutually cancel. The width of the major lobe of the scatterer response in the partial image (a function of the $\sin(x)/x$ type) is determined by the PH length $\Delta f_\theta$, or by its angle $\Delta\theta$ (Fig. 6.6(a)). The limiting value of the response width in the partial image derived from Eq. (6.14) is expressed by the inequality $\delta x \ge 0.5(\lambda D)^{1/2}$. Since $D \gg \lambda$, the major lobe width is much greater than the transmitter pulse wavelength.

It follows from this treatment that the mth partial component of the integral image may be regarded as a 2D plane wave superimposed on the image plane. The wave front is normal to the $y_m$-axis and its period is equal to the half wavelength of the transmitter pulse. The initial wave phase (along the $x_m$-axis) is determined by the phasor $\exp[\,j\pi(x_m - x_{mn})\Delta f_\theta]$ in such a way that a positive half-wave always arrives at the scatterer's $x_{mn}, y_{mn}$ position. The wave amplitude along the $x_m$-axis is described by a $\sin(x)/x$ function with a maximum at the point $x_{mn}$. For this reason, the partial component has a 'comb' structure elongated by the back projection of the partial image parallel to the $y_m$-axis. Note that the resolution of the integral image is defined by the carrier wavelength rather than by the response width in the partial image.

The reduction in the PH size from the maximum value prescribed by Eq.
(6.15) to a single sample should not affect the result of summation in a PH. Therefore the synthesised aperture can be focused accurately over the whole image field. Keeping in mind that

$$\lim_{\Delta f_\theta \to 0} \int_{f_{\theta m} - \Delta f_\theta/2}^{f_{\theta m} + \Delta f_\theta/2} S(f_{po}, \theta) \exp(\,j2\pi f_\theta x_m)\, df_\theta = f_{po} S(f_{po}, \theta)\, d\theta, \qquad (6.34)$$

we obtain from Eq. (2.4) the algorithm for coherent summation of a PH made up of individual samples of the initial hologram:

$$\hat{g}(x, y) = f_{po} \sum_{m=1}^{M} S_m(f_{po}, \theta)\, \Phi_m. \qquad (6.35)$$
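Eq. (6.35) can be sketched directly in code: each hologram sample contributes one plane-wave phasor to every pixel of the integral image, with no requirement that the aspect samples be equidistant. All parameter values and the relative-range convention below are illustrative assumptions:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def image_from_samples(S, thetas, fpo, xs, ys):
    """Eq. (6.35): coherent summation of individual hologram samples.

    For each sample S(fpo, theta_k) the coherent processing phasor restores
    the phase of every (x, y) pixel at relative range x sin(theta) + y cos(theta).
    """
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros(gx.shape, dtype=complex)
    for s, th in zip(S, thetas):
        rng = gx * np.sin(th) + gy * np.cos(th)
        img += s * np.exp(2j * np.pi * fpo * rng)
    return fpo * img

f0 = 10e9
fpo = 2 * f0 / C
# deliberately non-equidistant aspect samples
thetas = np.sort(np.random.default_rng(0).uniform(0, 2 * np.pi, 512))
x0, y0 = 0.10, -0.05  # true scatterer position, m
S = np.exp(-2j * np.pi * fpo * (x0 * np.sin(thetas) + y0 * np.cos(thetas)))

xs = np.linspace(-0.2, 0.2, 41)
img = image_from_samples(S, thetas, fpo, xs, xs)
iy, ix = np.unravel_index(np.argmax(np.abs(img)), img.shape)
```

The peak of |img| lands on the pixel containing the scatterer, illustrating the algorithm's insensitivity to the sample arrangement, at the price of visiting every raster pixel for every hologram sample.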
The coherent summation algorithm for hologram samples essentially represents a particular case of that for 1D transverse (azimuthal) partial images described by Eq. (6.28). However, each has its own specificity. The major advantage of the algorithm for hologram samples is the absence of phase errors due to either the PH approximation or the non-equidistant distribution of samples. As a consequence, this algorithm is applicable to the processing of microwave holograms with any known sample arrangement. On the other hand, the coherent summation algorithm for partial images does not require excessive computer resources, because the exhaustive search of the raster pixels in the integral image
Table 6.1  The number of spectral components of a PH (radial/azimuthal)

  Target size                        Radial/azimuthal PH
  D (m)   Dλ = D/λ   µ = 0.02   µ = 0.04   µ = 0.06   µ = 0.08   µ = 0.1
  0.5       15        2/12       2/12       3/12       3/12       4/12
  1.0       25        2/15       3/15       5/15       6/15       8/15
  2.0       50        3/22       6/22       9/22      12/22      15/22
  4.0      100        6/30      12/30      18/30      24/30      30/30
  6.0      150        9/37      18/37      27/37      36/37      45/30
  8.0      200       12/43      24/43      36/43      48/38      60/30
  10.0     250       15/48      30/48      45/48      60/38      75/30
  15.0     325       23/54      45/54      68/50      90/38     113/30
during the computation of the partial contribution is made for a group of PH samples rather than for every single hologram sample. Figure 6.8–6.9 compares the computational complexity of the two algorithms as a function of the target size for a narrowband microwave hologram. The criterion for the degree of complexity is taken to be the algorithmic time of the programme realisation. The unit of measure of the algorithmic time is, in turn, taken to be 1 flop (floating point), that is, the time for one elementary operation of summation/multiplication of two operands with a floating point. So we have 1 Mflop = 106 flops. The estimations of the computational complexity and the programme realisation time have been made for a 2D image of 512 × 512 raster pixels in size and 2D microwave holograms with a 120◦ angle. When going from a narrowband hologram to a wideband one, we can just suggest that the number of spectral components increases from 1 to L. As the size of a one-digit image and the hologram discretisation step are inversely proportional to each other, the minimal number of spectral components at a given pulse frequency bandwidth must be proportional to the target size. Table 6.1 presents the L values for various PHs as a function of the maximum target size. The computations have been made for 0.04 m carrier (centre) frequency of the transmitter pulse spectrum and the ratio of the image field size to the maximum target length k = 1.5. One can easily see that the number of azimuthal PH samples rises with the target size as long as the limiting PH angle obeys the inequality (6.15). When a target is rather large and the relative frequency bandwidth is µ = f /f0 (the lower right-hand side of Table 6.1), the inequality (6.16) imposes a more rigid restriction on the PH size. Then both the PH size and its discretisation step decrease inversely with respect to the target size. 
Therefore, the number of PH azimuthal samples at a given transmitter pulse bandwidth Δf remains constant with increasing target size D. We shall start the discussion of digital processing of 2D wideband holograms with the algorithm for coherent summation of 1D azimuthal partial images, which is the
Radar systems for rotating target imaging (tomographic approach)
Figure 6.8  The computational complexity of the coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images, (b) hologram samples
extension of a similar algorithm for narrowband microwave holograms. Let us relate Eq. (6.28) to the lth, l = 1, …, L spectral component:
\hat{g}(x,y) = \sum_{m=1}^{M} \int_{f_{\theta m}-\Delta f_\theta/2}^{f_{\theta m}+\Delta f_\theta/2} S_m(f_{pl},\theta)\, \exp(j2\pi f_{\theta l} x_m)\, df_{\theta l}\, \Phi_{ml}    (6.36)
where f_{pl}, f_{θl} are the radial and azimuthal space frequencies and Φ_{ml} is the coherent processing phasor. By summing up the L number of PHs in each of the M number of partial angle steps, we get

\hat{g}(x,y) = \sum_{m=1}^{M} \sum_{l=1}^{L} f_{pl} \int_{f_{\theta m}-\Delta f_\theta/2}^{f_{\theta m}+\Delta f_\theta/2} S_m(f_{pl},\theta)\, \exp(j2\pi f_{\theta l} x_m)\, df_{\theta l}\, \Phi_{ml}    (6.37)
Equation (6.37) describes the following processing operations:
• the L number of azimuthal PHs are selected in each mth partial angle step;
• the DFT is applied to each PH to get the L number of 1D partial images;
• the L number of partial images in every mth group are back projected and the obtained contributions are multiplied by the coherent processing phasor Φ_{ml}.

The analysis of Eq. (6.37) shows that the consecutive multiplication of the partial-image contributions by the phasor Φ_{ml} can be supplemented with a DFT. The result is a new processing algorithm – the coherent summation algorithm for 2D partial images:
\hat{g}(x,y) = \sum_{m=1}^{M} \int_{-\Delta f_p/2}^{\Delta f_p/2} |f_p| \left[ \int_{f_{\theta m}-\Delta f_\theta/2}^{f_{\theta m}+\Delta f_\theta/2} S_m(f_p,\theta)\, \exp(j2\pi f_\theta x_m)\, df_\theta \right] \exp(j2\pi f_p y_m)\, df_p\, \Phi_m    (6.38)
Algorithm (6.38) implies the following series of operations:

• the M number of 2D PHs with an angle defined by the conditions of Eqs (6.15) and (6.16) are selected in the initial microwave hologram;
• each PH is subjected to a 2D DFT to produce the M number of 2D partial images. All of these have a common centre, which coincides with the integral image centre, and are rotated by the angle Δθ relative to one another;
• the contribution of each partial image to the integral image is calculated using a 2D interpolation, and the result is multiplied by the coherent processing phasor.

The last operation generally requires large computer resources, so we shall refer to the coherent summation algorithm for 2D partial images below only for theoretical completeness. The advantages of coherent summation of individual samples discussed above for narrowband holograms are fully valid for wideband holograms as well. Equations (6.37) and (6.34) yield

\hat{g}(x,y) = \sum_{m=1}^{M} \sum_{l=1}^{L} \int f_{pl}\, S_m(f_{pl},\theta)\, \Phi_{ml}\, d\theta    (6.39)
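A minimal numerical sketch of this per-sample coherent summation, in the spirit of Eq. (6.39), with the coherent phasor written out explicitly as a plane-wave factor for a polar-format hologram; the function name and the grid handling are our own assumptions:

```python
import numpy as np


def coherent_summation(samples, freqs, thetas, xs, ys):
    """Per-sample coherent summation over a polar-format hologram.

    samples[i, j] is the hologram value at radial frequency freqs[j] and
    aspect angle thetas[i].  Every sample contributes to every image
    pixel through the phasor exp(j*2*pi*f*(x*sin(t) + y*cos(t))),
    weighted by |f| (the Jacobian of the polar frequency grid).  A
    deliberately naive sketch: the book's per-sample algorithm performs
    the same operations grouped by partial holograms.
    """
    img = np.zeros((len(ys), len(xs)), dtype=complex)
    for i, t in enumerate(thetas):
        for j, f in enumerate(freqs):
            contrib = abs(f) * samples[i, j]
            for iy, y in enumerate(ys):
                for ix, x in enumerate(xs):
                    img[iy, ix] += contrib * np.exp(
                        2j * np.pi * f * (x * np.sin(t) + y * np.cos(t)))
    return img
```

For a simulated point scatterer the magnitude image peaks at the scatterer position; the triple pixel loop makes explicit why this structurally simple algorithm needs so many arithmetic operations.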
Among the wideband processing algorithms, the one described by Eq. (6.39) is the simplest, but it requires a large number of arithmetic operations because the processing is made online. The computational efficiency of this algorithm can be raised by using a 1D DFT along the mth hologram beam:

\hat{g}(x,y) = \sum_{m=1}^{M} \int \left[ \int_{-\Delta f_p/2}^{\Delta f_p/2} S(f_p,\theta)\, |f_p|\, \exp(j2\pi f_p y_m)\, df_p \right] \Phi_m\, d\theta    (6.40)
In accordance with the accepted classification, expression (6.40) is the algorithm of coherent summation of 1D radial (range) partial images. Its implementation involves the following processing operations:
• the hologram samples making up the mth radial PH are multiplied by the linear frequency function and are then subjected to a DFT;
• the resulting 1D range partial image is back projected and the result is multiplied by the coherent processing phasor.

The algorithm of Eq. (6.40) has much in common with the narrowband algorithm for partial images in Eq. (6.28), but it also has some specific features. One is that the 1D image modulus of a single point scatterer is described by the so-called kernel function of computerised tomography [57], rather than by a function of the sin(x)/x type. Depending on the chosen approximation of the linear frequency function in Eq. (6.40), the kernel function is its Fourier image and may be described analytically in various ways. It always has the form of an infinite periodic function with one major lobe and side lobes of decreasing amplitude. Another specific feature of this algorithm is that the back projection operation is performed along the x_m-axis. Still another characteristic of the algorithm of Eq. (6.40) is that the PH samples are arranged equidistantly along radial straight lines, so no restriction is imposed on the maximum size of a PH.

The relative computational complexities of wideband processing algorithms are compared in Figs 6.10 and 6.11. It is seen that the number of arithmetic operations always increases with the relative frequency bandwidth of the transmitter pulse µ and the relative target size Dλ, whereas the computational complexity of 1D partial image algorithms changes differently with these parameters. At given values of µ and Dλ, the more profitable algorithm is the one whose PH has the larger number of samples. This is because the efficiency of an FFT, as compared with an ordinary DFT, increases with the number of samples. For example, at small values of µ and Dλ, it is more reasonable to use the algorithm for azimuthal partial images (Fig. 6.11).
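The ramp-filtering step of Eq. (6.40) that produces the tomographic kernel function can be sketched generically as an |f|-filter followed by an inverse DFT; the helper and its frequency scaling are illustrative assumptions, not the book's code:

```python
import numpy as np


def range_partial_image(radial_samples, df):
    """1D range partial image from one radial partial hologram.

    The spectrum samples are multiplied by the linear frequency function
    |f| (the tomographic ramp filter) and inverse-DFT'd.  For a flat
    spectrum the output is the kernel function of computerised
    tomography: one major lobe flanked by decaying negative side lobes,
    rather than a sin(x)/x shape.
    """
    n = len(radial_samples)
    freqs = np.fft.fftfreq(n) * n * df   # [0, df, ..., -df] frequency axis
    filtered = np.asarray(radial_samples) * np.abs(freqs)
    return np.fft.fftshift(np.fft.ifft(filtered))
```

Feeding a flat spectrum (a point scatterer at zero range) returns the kernel itself: the peak sits at the centre sample, the side lobes are negative, and the kernel sums to zero because the DC component of the filtered spectrum vanishes.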
As the relative frequency bandwidth and the target size become larger, the number of samples in a radial PH at some point exceeds that of an azimuthal PH (see Table 6.1). This happens because the restriction on the azimuthal PH size in Eq. (6.15) begins to dominate over that of Eq. (6.16), so that the use of the coherent summation algorithm for radial partial images becomes more profitable. In spite of its structural simplicity, the coherent summation algorithm for hologram samples has the greatest computational complexity (Fig. 6.10).
Figure 6.9  The relative computational complexity of coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images/CCA, (b) hologram samples/CCA
It is clear that the time for wideband hologram processing by the above algorithms, estimated as the product of the computational complexity and the time of an elementary multiplication/summation operation, is excessively long, so one should consider separate, independent processing of the PHs in order to reduce this parameter considerably.
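The time estimate described here is a simple product; a hedged helper (the name and the ideal parallel-speedup model are our own assumptions) makes it explicit:

```python
def processing_time_s(complexity_mflop, flop_time_ns, n_parallel=1):
    """Processing time = computational complexity x time of one
    elementary floating-point summation/multiplication, divided by the
    number of partial holograms processed separately and independently
    (ideal parallel speedup is an illustrative simplification).
    """
    return complexity_mflop * 1e6 * flop_time_ns * 1e-9 / n_parallel
```

For example, 1 Mflop at 1000 ns per operation takes 1 s; spreading the same work over four independently processed PHs cuts the estimate to 0.25 s.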
Figure 6.10  The relative computational complexity of coherent summation algorithms of hologram samples and transverse partial images versus the coefficient µ in the case of a wideband hologram (curves labelled µ = 0.02, 0.04, 0.06, 0.08)
6.4.2 3D viewing geometry

We now express Eq. (6.14) in spherical coordinates and use Eqs (6.19)–(6.21) to get the relation

g(x,y,z) = \int_{V_f} S(f_p,\theta,B)\, \exp[\, j2\pi (f_{p0}+f_p)(y\cos\theta\cos B + x\sin\theta\cos B + z\sin B)\,]\, df_p\, d\theta\, dB    (6.41)
Figure 6.11  The relative computational complexity of coherent summation algorithms for radial and transverse partial images versus the coefficient µ in the case of a wideband hologram (curves labelled µ = 0.02, 0.04, 0.06, 0.08)
To make the coherent summation of PHs more convenient, it is reasonable to separate the integration variables in Eq. (6.41). This task is simplified if one of the variables remains constant through a synthesis step. For example, at B = const., the image in the z = 0 plane will be described as

g(x,y) = \int_{V_f} S(f_p,\theta,B)\, \exp[\, j2\pi f_{pe}(y\cos\theta + x\sin\theta)\,]\, df_p\, d\theta    (6.42)
Figure 6.12  The transformation of the partial coordinate frame in the processing of a 3D hologram by coherent summation of transverse partial images
where f_{pe} = (f_{p0} + f_p)\cos B is an 'equivalent' space frequency introduced just to reduce Eq. (6.42) to a conventional form. Clearly, the algorithms to be derived from Eq. (6.42) may differ from those for 2D holograms only in the space frequency value. In reality, this may happen in viewing geostationary objects stabilised by rotation. In hologram recording of a low-orbit satellite, both angles describing its geometry change simultaneously. For this reason, the polar angle can be considered 'fixed' only at certain moments of time. This hologram geometry is best served by the algorithms for coherent summation of individual hologram samples and of 1D radial partial images. Expressions for such algorithms can be derived from Eq. (6.42) or directly from Eqs (6.35), (6.39) and (6.40) by substituting f_{pe} for (f_{p0} + f_p).

To design algorithms for transverse PHs, we need to introduce in the frequency domain the partial coordinates f_{xm}, f_{ym}, f_{zm} with the origin at the point O_f, which is also the origin of the f_x, f_y, f_z coordinates (Fig. 6.12). The f_{xm}–f_{ym} plane of the partial coordinates is tangential to the PH at a point with the angular coordinates θ_m, B_m (the f_{xm}- and f_{ym}-axes are not shown in Fig. 6.12). Let us expand the polar angle B as a function of the azimuth θ into a Taylor series in the vicinity of θ_m:

B(\theta) = B_m + \left.\frac{dB(\theta)}{d\theta}\right|_{\theta=\theta_m} (\theta - \theta_m) + \Delta B    (6.43)

where B_m = B(θ) at θ = θ_m and ΔB denotes the residual terms of the series. Obviously, in the ideal case of ΔB = 0, the PH lies entirely in the f_{xm}–f_{ym} plane, and the difference between the PH and a straight line is determined only by the curvature of the sphere of radius f_{p0}. Non-zero polar angle derivatives of order above the first may generally lead to additional phase errors in the approximation of the PH by a straight
line. However, a digital simulation of the aspect variation of a real target has shown that the phase error is negligible. Therefore, we shall assume the PH angle to be defined by the conditions of Eqs (6.15) and (6.16). To describe the positions of PH samples in the f_{xm}–f_{ym} plane, we introduce the angle ψ and write down the partial Cartesian coordinates of the samples as f_{xm} = f_p sin ψ, f_{ym} = −f_p cos ψ. An acceptable processing algorithm can be obtained if the f_{xm}–f_{ym} plane is superimposed on the f_x–f_y plane, which corresponds to the x–y plane in the image space containing the integral image. The superposition is made by two consecutive rotations of the partial coordinates (Fig. 6.12):

• the rotation by the angle ξ_m = arctan[(dB/dθ)|_{θ=θ_m}] about the f_{ym}-axis gives the polar coordinates f'_{xm}, f'_{ym}, f'_{zm}, whose f'_{xm}-axis lies in the f_x–f_y plane;
• the rotation of the polar f'_{xm}, f'_{ym}, f'_{zm} coordinates by the angle B_m about the f'_{xm}-axis gives the sought-for polar coordinates f''_{xm}, f''_{ym}, f''_{zm}.

These transformations of the polar coordinates result in the following expression for the scalar product at the mth partial angle step:

(\mathbf{f}_p \cdot \mathbf{r}_{n0}) = -f_p b_m x'_m \sin\psi + f_{pe} y'_m \cos\psi    (6.44)

with

x'_m = x_m \cos\zeta_m + y_m \sin\zeta_m,
y'_m = -x_m \sin\zeta_m + y_m \cos\zeta_m,
b_m = (1 - \sin^2\xi_m \cos^2 B_m)^{1/2},  f_{pe} = f_p \cos B_m.
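The closed-form factors above can be checked numerically; the sketch below (our own helper, not from the book) evaluates b_m and ζ_m and relies on the identity (sin ξ sin B)² + cos²ξ = 1 − sin²ξ cos²B = b², which guarantees that the two trigonometric expressions form a consistent sine/cosine pair:

```python
import numpy as np


def partial_frame_params(xi, B):
    """Scale factor b_m and in-plane rotation angle zeta_m induced by the
    two consecutive rotations (by xi about f_ym, then by B_m about f'_xm).

    Returns the closed-form values b = sqrt(1 - sin(xi)**2 * cos(B)**2)
    and zeta = atan2(sin(xi)*sin(B), cos(xi)), which reproduce
    sin(zeta) = sin(xi)*sin(B)/b and cos(zeta) = cos(xi)/b.
    """
    b = np.sqrt(1.0 - np.sin(xi) ** 2 * np.cos(B) ** 2)
    zeta = np.arctan2(np.sin(xi) * np.sin(B), np.cos(xi))
    return b, zeta
```

At ξ = 0 (no polar angle variation) the helper returns b = 1 and ζ = 0, that is, no scale change and no extra rotation, as expected.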
In turn, sin ζ_m = sin ξ_m sin B_m / b_m and cos ζ_m = cos ξ_m / b_m. Thus, the variation of the polar angle B during hologram recording introduces two specific features into the coherent summation algorithm for transverse partial images. One is the necessity of an additional rotation of the partial x_m, y_m coordinates by the angle ζ_m about the z_m-axis. The other is a change in the partial image scale along the x_m- and y_m-axes by factors of b_m and cos B_m, respectively.

Let us now derive an expression for the coherent summation algorithm for transverse partial images in the case of wideband pulses. This can be done by substituting Eq. (6.44) into Eq. (6.14) and reducing the result to the form

\hat{g}(x,y) = \sum_{m=1}^{M} \sum_{l=1}^{L} f_{ple} \int_{f_{\psi m}-\Delta f_\psi/2}^{f_{\psi m}+\Delta f_\psi/2} S(f_p,\theta,B)\, \exp(j2\pi f'_{\psi 0} x'_m)\, df_{\psi 0}\, \Phi_{ml\theta}    (6.45)

where f_{ψ0} = f_{p0}ψ is the transverse space frequency; f'_{ψ0} = f_{ψ0} b_m; f_{ple} = f_{pl} cos B_m is the equivalent space frequency for the lth spectral component; and Φ_{mlθ} is the coherent processing phasor. The processing with the algorithm of Eq. (6.45) includes the following operations:
• the L number of transverse PHs (equal to the number of spectral components) are selected in every mth partial angle step;
• a DFT with the space frequency f_{ψ0} is performed on every PH to produce the L number of 1D partial images;
• every partial image is back projected, and its contribution is multiplied by the phasor Φ_{mlθ}. The back projection is made along the y'_m-axis, rotated by the angle ζ_m relative to the y_m-axis.

Equation (6.45) can easily be reduced to expressions for the coherent summation algorithms for 2D partial and 1D transverse images of narrowband pulses, by analogy with the case discussed in Section 6.4.1. As compared with the respective algorithms for 2D holograms, the computational complexity of coherent summation of individual hologram samples and of 1D partial radial images increases only because of the necessity to compute the sine of the polar angle B. However, it does not increase by more than 2–3 per cent even in the most unfavourable case of a narrowband signal and a relatively small target. The complexity rises considerably when a 2D geometry of transverse partial images is replaced by a 3D one. This is due both to the polar angle variation during the viewing and to the appearance of a variable hologram discretisation step. As a result, new operations come into play, but the increase in the computational complexity still lies within 10 per cent.

The above treatment allows the following conclusions to be made:

1. The algorithms for digital processing of microwave holograms designed in terms of the theory of coherent summation of partial components provide imaging in a wide range of viewing conditions, in particular, of the probing geometry and the frequency bandwidth of transmitter radiation.
2. The wider applicability of digital processing by coherent summation of partial components implies a greater complexity of computations than that required by available techniques.
However, one can choose the least time-consuming algorithm for particular values of the relative frequency bandwidth of the transmitter pulse and the size of the space target. A radical reduction in the processing time can be achieved by using separate processing of individual PHs.
Chapter 7
Imaging of targets moving in a straight line
When a target moves in a straight line normal to the radar line of sight, the inverse synthesis of a tracking aperture can be regarded in terms of Doppler information processing, in a way similar to the processing aimed at a high azimuthal resolution by a side-looking radar. Clearly, an inverse aperture can then be considered as a linear antenna array performing a periodic time discretisation of the radiation wave front. This is the so-called antenna approach, and its capabilities are discussed in Reference 139. The author analysed an equivalent array made up of (2N + 1) records of target movement across a real ground antenna beam of sufficient width. It was shown that the azimuthal resolution at the range R_0 along the φ direction could be defined as

\Delta\ell = \frac{\lambda R_0}{2 V T_r (2N+1)\cos\varphi}    (7.1)
where λ is the transmitter pulse wavelength, V is the target velocity, φ is the angle between the line directed to the target and the normal to the synthesising aperture, and T_r is the repetition period of the transmitter pulses.

Inverse aperture synthesis for a linearly moving target can also be examined in terms of a holographic approach. This was first done by H. Rogers to study the ionosphere [85], making use of D. Gabor's ideas of holography. Rogers described a method for hologram recording of microwaves reflected by ionospheric inhomogeneities. The principle of this method is as follows. When an ionospheric inhomogeneity moves, the resulting diffraction pattern on the earth's surface also moves across the receiver aperture. The sensed signal is recorded on a photofilm as a hologram. What is actually recorded is the wave front, and one can reconstruct the inhomogeneity image from the hologram. For these reasons, E. Leith considered Rogers' device to be truly holographic rather than quasi-holographic. Holographic concepts were successfully introduced in radar imaging by W. E. Kock [71], who showed that echo signals from a linearly moving target, recorded by the receiver of a coherent continuous-wave radar, were structurally equivalent
to 1D holograms. He pointed out the similarity among an airborne SAR, a ground coherent radar and a holographic system. The holographic approach treats inverse aperture synthesis of signals from a linearly moving target as a particular case of hologram recording by the scanning technique (Chapter 3). Here we shall analyse the process of radar imaging in the range–cross-range coordinates, using inverse synthesis under real target flight conditions, that is, imaging with partially coherent signals.

Radar images obtained in the range–cross-range coordinates allow estimation of the target size and shape, as well as of the reflectivity of its individual scatterers. Such images can be further used for target identification. The imaging should be performed by ISARs transmitting complex pulses [85,104]. Apart from the prescribed movement, an aerodynamic target makes random motions with unknown parameters induced by destabilising factors, such as the constant component of wind velocity, the operation of the internal control system, turbulent flows, elastic fuselage oscillations and vibrations due to the engine operation and the target aerodynamics. Some of these can be estimated in advance by comparing the synthesis time Ts and the correlation time of the perturbing effects Tc and by calculating the phase noise they introduce into the echo signal.

Among the above factors responsible for phase fluctuations ψ(φ) of an echo signal, of special importance are turbulent flows. This is because the constant wind velocity factor can be eliminated during the compensation for the radial displacement of a target. The second factor becomes important when a target is manoeuvring. For a typical synthesis time (Ts ∼ 1 s), the value of Tc is smaller than that of Ts. The effect of the fourth factor can be avoided by choosing the wavelength λ such that the condition λ/2 ≫ ε (where ε is the maximum displacement due to fuselage oscillations) is fulfilled [17].
An echo signal from this kind of target is partially coherent. In the case of direct aperture synthesis, the effect of turbulent flows on the carrier pathway is accounted for by introducing a phase correction in the echo signal, which is found from random radial velocity and acceleration measurements [136]. In inverse synthesis, it is very hard to correct phase fluctuations ψ(ϕ) of an echo signal. Below, we shall try to define the imaging conditions, primarily along the cross range coordinate, for a partially coherent signal [89]. The numerical simulation we have made shows that the destabilising factors of interest do not affect the range resolution.
7.1 The effect of partial signal coherence on the cross range resolution

Assuming that f(x) is the distribution of the complex scattering amplitude (the target reflectivity) along the cross-range x-coordinate, φ is an angle characterising the aspect variation, and z(φ) is an echo signal, we have

z(\varphi) = \int f(x)\, \exp\!\left(-j\,\frac{4\pi}{\lambda}\,\varphi x\right) dx    (7.2)
After the reconstruction of the radar image, which reduces to a Fourier transform of the echo signal (7.2) with the weight function w(φ), and taking the intensity, we obtain

|\nu(s)|^2 = \iint f(x_1)\, f^*(x_2)\, U(s-x_1,\, s-x_2)\, dx_1\, dx_2 + \eta(x)    (7.3)

U(s_1,s_2) = \iint w(\varphi_1)\, w^*(\varphi_2)\, \exp\!\left\{ j[\psi(\varphi_1) - \psi(\varphi_2)] + \frac{j4\pi}{\lambda}(s_1\varphi_1 - s_2\varphi_2) \right\} d\varphi_1\, d\varphi_2    (7.4)
where s is the cross-range coordinate in the image plane, the sign * indicates complex conjugation, η(x) is complex noise on the image, and U(s_1, s_2) is the cross-correlation function of the hologram. The statistical characteristics |ν(s)|² and U(s_1, s_2) will be analysed on the assumption that f(x) is a sum of the δ-functions of point scatterers and ψ(φ) obeys the normal distribution law.

Consider the average of U(s_1, s_2) over the phase fluctuations ψ(φ), taking them to be Gaussian. With the formula for the characteristic function and the expansion of ρ(φ_1 − φ_2) into a Taylor series at σ² ≫ 1, we get

\overline{\exp\{j[\psi(\varphi_1)-\psi(\varphi_2)]\}} = \exp\{-\sigma^2[1-\rho(\varphi_1-\varphi_2)]\} \cong \exp\{-\sigma^2(\varphi_1-\varphi_2)^2/(2\Theta^2)\},

where σ² is the phase noise dispersion, ρ(φ_1 − φ_2) is the correlation factor and Θ is a quantity inverse to the second derivative of the correlation factor at zero, which describes the angle correlation step of the target aspect variation. Assuming w(φ) = exp[−φ²/(2θ²)], where θ describes the angle step of the synthesis, we find

U(s_1,s_2) = \frac{\lambda^2 C}{64\pi\sqrt{(d_s^2+d_c^2)^2 - d_c^4}}\, \exp\!\left\{ -\frac{d_s^2+d_c^2}{2[(d_s^2+d_c^2)^2-d_c^4]} \left[ (s_1-s_2)^2 + \frac{2d_s^2}{d_s^2+d_c^2}\, s_1 s_2 \right] \right\}    (7.5)
where C = exp(−4π²); d_s = λ/(2θ) is the resolution step corresponding to the synthesis time Ts (or to the aspect variation θ = VTs sin α/r_0); V is the linear target velocity; α is the angle between the antenna pattern axis and the vector V; d_c = λσ/(2Θ) is the space correlation step of target path instabilities; and r_0 is the target range at the moment of time Ts/2. The average intensity of a point target image (the impulse response of the system), derived for a partially coherent echo signal, is

\overline{|\nu(s)|^2} = U(s-x,\, s-x) = C_1 |A|^2 \exp\!\left[ -\frac{(s-x)^2}{d_s^2 + 2d_c^2} \right]    (7.6)
where C_1 is the same exponential factor as in Eq. (7.5), A is the signal amplitude, and x is the scatterer coordinate. For a target composed of a multiplicity of scatterers, each scatterer will be represented by a peak in the image described by Eq. (7.6). The position of each scatterer's image along the s-coordinate corresponds to its real position along the x-coordinate in the target plane. Moreover, every pair of scatterers will be represented in the image function by an interference term

U(s-x_1,\, s-x_2) = C_1\, \mathrm{Re}\!\left\{ A_1 A_2^* \exp\!\left[ -\frac{[s-(x_1+x_2)/2]^2}{d_s^2 + 2d_c^2} - \frac{(x_1-x_2)^2}{4d_s^2} \right] \right\}    (7.7)

The additional term in Eq. (7.7) defines a peak located halfway between the images of the respective scatterers; it has the same width as the peak of any other scatterer and is governed by the ratio of the interscatterer distance to the resolution step at zero phase noise. If this ratio is large, the interference term due to the superposition of side lobes in individual pixel images is negligible as compared with the average image intensity.

Under the conditions of partial signal coherence, the real resolution can be found from the 0.5 level of the maximum intensity |ν(s)|²:

d'_s = 2s\big|_{|\nu(s)|^2 = 0.5} = C_2\sqrt{d_s^2 + 2d_c^2} = C_2 d_s \sqrt{1 + 2(\sigma T_s/T_c)^2}    (7.8)

where C_2 is a constant defined by the function w(φ); in the exponential and uniform approximations C_2 ≅ 1.66 and C_2 = 1, respectively. Obviously, if d_s decreases by the value Δ_s, the real resolution d'_s will improve only by Δ'_s (Fig. 7.1(a)):

\Delta'_s = C_2\sqrt{d_s^2 + 2d_c^2} - C_2\sqrt{(d_s-\Delta_s)^2 + 2d_c^2}    (7.9)

and with increasing Ts the gain in the real resolution becomes still smaller. Equation (7.9) can be reduced to

a d_s^2 + b d_s + c = 0,

where

a = 4(p^2 - \Delta_s^2);  b = 4\Delta_s(\Delta_s^2 - p^2);  c = p^2(2\Delta_s^2 + 4d_c^2) - p^4 - \Delta_s^4;  p = \Delta'_s/C_2.
We can now calculate the d_s and Ts values that may be considered most suitable for the synthesis at given Δ_s and Δ'_s:

d_{s\,\mathrm{opt}} = \Delta_s/2 + \sqrt{\Delta_s^2/4 - c/a}    (7.10)

T_{s\,\mathrm{opt}} = \lambda r_0 / (2V \sin\alpha\, d_{s\,\mathrm{opt}})    (7.11)

At the values of λ = 0.1 m, r_0 = 50 km, V = 600 m/s, α = 90°, Δ_s = 0.1 m, Δ'_s = 0.05 m and C_2 = 1, we find Ts opt = 1.83 s for Tc = 1.5 s and Tc = 3 s, respectively (d_c = 6.98 m and d_c = 3.49 m). Formula (7.11) defines the synthesis time of a partially coherent signal which is optimal in the sense that a longer synthesis would require greater computer resources without essentially improving the image quality determined by the real resolution d'_s or the Δ'_s/Δ_s
Figure 7.1  Characteristics of an imaging device in the case of partially coherent echo signals: (a) potential resolving power at C2 = 1, (b) performance criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m and 3 – dc = 0)
ratio (Fig. 7.1(b)). This ratio quantitatively describes the gain in the angular radar resolution owing to the synthesis of partially coherent signals, as compared with that for perfect viewing conditions (dc → 0). In the next section, we shall estimate the synthesis conditions by numerical simulation. The key factor in the imaging model to be described is target path fluctuations.
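The resolution relation of Eq. (7.8) lends itself to a quick numerical sketch; the function name and the way the phase-noise r.m.s. σ is supplied directly are our own assumptions:

```python
import math


def real_resolution(lam, r0, V, alpha, Ts, Tc, sigma, C2=1.0):
    """Real cross-range resolution d's of Eq. (7.8).

    d_s = lam*r0/(2*V*Ts*sin(alpha)) is the potential resolution step
    for synthesis time Ts, and the factor sqrt(1 + 2*(sigma*Ts/Tc)**2)
    accounts for partial coherence (sigma: phase-noise r.m.s.,
    Tc: correlation time of the path instabilities).
    """
    ds = lam * r0 / (2.0 * V * Ts * math.sin(alpha))
    return C2 * ds * math.sqrt(1.0 + 2.0 * (sigma * Ts / Tc) ** 2)
```

With perfect coherence (sigma = 0) the resolution improves as 1/Ts indefinitely; with phase noise it saturates near C2·(λr0/(2V sin α))·√2·σ/Tc, which is why an optimal synthesis time exists.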
7.2 Modelling of path instabilities of an aerodynamic target

Path instabilities will be considered either as random range displacements of a target (model I) or as independent fluctuations of the target velocity along the x'- and y'-axes (model II). The appropriate random processes are expressed by the recurrent difference equation [26]

Y_i[n] = \sum_{l=0}^{L} a_l X[n-l] + \sum_{k=1}^{K} b_k Y_i[n-k]    (7.12)
where the coefficients a_0, a_1, …, a_L and b_1, b_2, …, b_K, as well as L and K, vary with the cross-correlation function; the subscript i denotes the number of the model for a random deviation of the target motion parameter. The coefficients a_l and b_k in Eq. (7.12) necessary for obtaining the values of Y_i[n] with a prescribed correlation factor ρ(τ) were presented in the work [26].

In model I of path instabilities, the current range r_T[n] to a target is described as a sum of the predetermined range variation r[n] and the random component Y_1[n]; σ_p is the mean square deviation of the range. The quantities σ_p and Tc were taken as σ_p = 0.04 or 0.05 m and Tc = 1.5 or 3 s; their values were found heuristically from a preliminary simulation.

In model II, the modulus of the real target velocity vector is

V_r[n] = \sqrt{(V + Y_{2x'}[n])^2 + Y_{2y'}^2[n]}    (7.13)

where Y_{2x'}[n] and Y_{2y'}[n] are the current values of the random velocity deviations along the x'- and y'-axes, respectively; the mean square deviation of the velocity is σ_{x',y'} = 0.1 or 0.2 m/s at Tc = 1.5 or 3 s. The values of σ_{x',y'} and Tc are presented here courtesy of A. Bogdanov, O. Vasiliev, A. Savelyev and M. Chernykh, who measured them in real flight conditions. Their experimental data on coherent radar signals in the centimetre wave range are also described in Reference 28. The current angle between the antenna pattern axis and the vector V_r[n] in this model is
(7.14)
With Eqs (7.13) and (7.14) combined with the viewing conditions of model II, we have computed the real current range rT [n] to the target.
7.3 Modelling of radar imaging for partially coherent signals To make the next step in the modelling of a radar image, we suggest that the predetermined path component of a point target is normal to the antenna pattern axis, that is, α = 90◦ , the transmitter pulses have a spectral width fc = 75 MHz, and their other parameters are chosen with the account of well-known restrictions for the removal of image inhomogeneities [104]. The range image of a target was formed by coherent correlation processing of every echo signal. For every pixel on the range image, the nth (n = 1, . . . , 256) value of a complex echo signal was recorded to form a microwave hologram [138]. The reference function was formed ignoring the errors in the estimated parameters of target motion. The reconstructed image |ν(r, s)|2 was 2D in the r- and s-coordinates (range and cross range). The simulation showed that the phase noise due to path instabilities did not affect the range image of a target. Therefore, we shall further treat only its cross range section along the range axis. A visual analysis of impulse responses during the imaging of partially coherent echo signals (Tc = 3 s, Ts = 1.5 s) indicates that phase fluctuations largely produce the following types of noise (Fig. 7.2). First, there is a shift of the impulse response along the s-axis in the image field (Fig. 7.2(a)). Second, the peak of the
Imaging of targets moving in a line 153
|n(s)|2, rel. un.
(a) 1.0
(b)
0.5
0
|n(s)|2, rel. un.
(c) 1.0
(d)
0.5
0
20
40 s, m
Figure 7.2
60
0
20
40
60
s, m
Typical errors in the impulse response of an imaging device along the s-axis: (a) response shift, (b) response broadening, (c) increased amplitude of the response side lobes and (d) combined effect of the above factors
major impulse response becomes broader (Fig. 7.2(b)). Third, the side lobes of the impulse response become larger and form additional features commensurable in intensity with the major peak (Fig. 7.2(c)). Combinations of the three effects on the final image are also possible (Fig. 7.2(d)). It is worth noting that the first effect can be eliminated during the image processing by relating the window centre to the nth pixel with maximum intensity.

The presence of distorting effects necessitates finding ways to measure the real resolution step. A conventional way of estimating resolution is to measure the impulse response of the processing device at the level 0.5 of the maximum intensity |ν(s)|². In that case, all features of the image along the s-axis are analysed, irrespective of phase noise. Another way of measuring a resolution step is to consider all additional features of a point target image at the 0.5 level to be side lobes, irrespective of their intensity, and to remove them in advance.

Figures 7.3 and 7.4 present the estimates of the average resolution step d'_s for models I and II of path instabilities, respectively. The average value was calculated from 100 records of path instability of a point target for every discrete time moment Ts (Ts = 0.1, …, 2.9 s). The estimation of a resolution step within model I fails to predict the degree of the partial coherence effect on the radar image, since we know nothing about a perfect image a priori. The analysis of Fig. 7.3 has shown that the resolution step error is fairly large at σTs/Tc ≥ 1, where σ = 2πσ_p/λ. It is the appearance of false features above the 0.5 level with increasing synthesis time that leads to an overestimation of the resolution step computed from the impulse response
Figure 7.3  The resolving power of an imaging device in the presence of range instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s
width and, hence, to a larger error in the target size measurement. Such an error is inherent in this method of resolution evaluation. The d'_s(Ts) curves in Fig. 7.3 show reasonable agreement with the theoretical curves in Fig. 7.1(a). In the model of velocity instabilities (model II), the curve behaviour in Fig. 7.4 differs from the calculated dependences and from the model computations shown in Fig. 7.3 in that the d'_s(Ts) curve has a minimum. The latter is due to an error in the method of estimating a resolution step, since the calculated d'_s(Ts) curve does not indicate the presence of extrema. The simulation results (curve 1′ in Fig. 7.4(a)) can be used to find the synthesis time intervals for a particular type of signal (or a particular imaging algorithm): I – totally coherent, II – partially coherent and III – incoherent. One can choose various imaging algorithms for the available statistical characteristics of path instabilities and for a particular time Ts. For instance, it is reasonable to use incoherent processing algorithms at synthesis times for which a signal can be considered incoherent [78]. For the shorter intervals I and II, one should use coherent processing algorithms and evaluate their performance in terms of the criterion Δ'_s/Δ_s (Fig. 7.5).
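The two ways of measuring the resolution step can be sketched as follows; the helper names are our own, since the book does not give its measurement code:

```python
import numpy as np


def half_power_width(response, ds):
    """First way: the width at the 0.5 level of the maximum intensity,
    measured across ALL samples above the level, so detached false
    features raised above 0.5 by phase noise widen the estimate.

    `response` is the sampled intensity |v(s)|**2 with step ds.
    """
    level = 0.5 * response.max()
    above = np.nonzero(response >= level)[0]
    return (above[-1] - above[0] + 1) * ds


def main_lobe_width(response, ds):
    """Second way: only the contiguous run of samples around the
    maximum is counted; detached features above 0.5 are treated as
    side lobes and ignored."""
    level = 0.5 * response.max()
    peak = int(np.argmax(response))
    lo = peak
    while lo > 0 and response[lo - 1] >= level:
        lo -= 1
    hi = peak
    while hi < len(response) - 1 and response[hi + 1] >= level:
        hi += 1
    return (hi - lo + 1) * ds
```

On a response with a 3-sample main lobe and one detached feature above the 0.5 level, the first method counts everything between the outermost threshold crossings while the second returns only the main lobe, which reproduces the overestimation mechanism described in the text.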
Figure 7.4  The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σx′ = σy′ = 0.01 m/s (other details as in Fig. 7.3), (b) σx′ = σy′ = 0.2 m/s (other details as in Fig. 7.3)
Figure 7.5  Evaluation of the performance of a processing device in the case of partially coherent signals versus the synthesis time Ts and the space step of path instability correlation dc: 1 – dc = 6.98 m, 2 – dc = 3.49 m
The resolution estimate obtained by the second method is close to the theoretical value. However, this approach has a serious limitation, because a real target possesses a large number of scatterers. The positions of the respective intensity peaks on a radar image are unknown a priori, so the application of this technique may lead to a loss of information on adjacent scatterers in the image. The method works well if one knows in advance that the target being viewed is a point object or that a range pixel corresponds to a single scatterer. In that case, the imaging device can be 'calibrated' by evaluating the phase noise effect on it. The discrepancy between the simulation results presented in Figs 7.3 and 7.4 may be interpreted as follows. Model I of target path instabilities simulates random phase noise associated only with the displacement of range aperture pixels. Model II introduces greater phase errors into the echo signal, because the aperture is synthesised by non-equidistant pixels, which are additionally range-displaced. This model seems to better represent the real tracking conditions, since it accounts for random target yawing in addition to random range displacements. The analytical expressions given earlier and the simulation results on partially coherent signals with zero compensation for the phase noise can provide the real resolving power of an imaging device. Today, there are no generally accepted criteria for evaluating the performance of radar devices that image with partially coherent signals. The results discussed in this chapter allow estimation of the device performance in the ideal case of dc → 0; on the other hand, they enable one to evaluate the efficiency of the computer resources to be used in terms of the possible gain in resolving power. Track instabilities of real aerodynamic targets and other factors introducing phase noise give rise to numerous defects on an image.
So the application of conventional ways of estimating the resolving power of imaging systems leads to errors. However, there is an optimal synthesis time interval which provides the best angular resolution with a minimal effect of phase fluctuations. Therefore, when phase noise cannot be avoided, which is usually the case in practice, it is reasonable to make use of a statistical database on fluctuations of motion parameters for various classes of targets and viewing conditions. The processing model we have suggested can be helpful in the evaluation of the optimal time of aperture synthesis in particular viewing conditions. The viewing conditions also require a specific processing algorithm to be used, so radar-imaging devices should also be classified into coherent, partially coherent or incoherent. The simulation results presented in Fig. 7.4 do not question the validity of analytical relations (7.4), (7.5) and (7.7) but rather define their applicability, because a signal becomes incoherent when a fluctuating target is viewed for a long time.
Chapter 8
Phase errors and improvement of image quality
Possible sources of phase fluctuations of an echo signal, which negatively affect aperture synthesis, are turbulent flows in the troposphere and ionosphere. Fluctuations of the refractive index due to tropospheric turbulence impose restrictions on aperture synthesis at centimetre wavelengths, while ionospheric turbulence affects the far-decimetre wavelengths. Phase fluctuations decrease the resolving power of a synthetic aperture, leading to a lower image quality.
8.1 Phase errors due to tropospheric and ionospheric turbulence

8.1.1 The refractive index distribution in the troposphere

Fluctuations in the troposphere may arise from changes in the meteorological conditions and from air whirls. As a result, there are non-uniform local distributions of temperature and humidity, leading to a non-uniform distribution of the refractivity N:

N = (n − 1) × 10⁶,
(8.1)
where n is the refractive index. At centimetre wavelengths, a static air volume has refractivity N defined by the Smith–Weintraub formula:

N = 77.6 P/T + 3.73 × 10⁵ e/T²,  (8.2)

where P is the total atmospheric pressure measured in millibars, T is the temperature in kelvins and e is the specific water vapour pressure in millibars. It follows from Eq. (8.2) that the value of N at centimetre and longer wavelengths strongly depends on the water vapour concentration, while its variation with the wavelength λ is insignificant. The latter fact is quite important because it makes it possible to obtain phase fluctuation spectra for various wavelengths in the microwave range, using an experimental spectrum measured at any wavelength. The major type
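As a quick numerical illustration, refractivity can be evaluated directly from the standard two-term Smith–Weintraub expression (coefficients 77.6 and 3.73 × 10⁵); the sample meteorological values below are arbitrary assumptions:

```python
def refractivity(P_mbar, T_kelvin, e_mbar):
    """Refractivity N = (n - 1) * 1e6 from the Smith-Weintraub formula:
    a dry-air term 77.6*P/T plus a water-vapour term 3.73e5*e/T^2."""
    return 77.6 * P_mbar / T_kelvin + 3.73e5 * e_mbar / T_kelvin ** 2

# Typical near-surface conditions: 1013 mbar, 288 K, 10 mbar vapour pressure
N = refractivity(1013.0, 288.0, 10.0)
n = 1.0 + N * 1e-6          # the refractive index itself
print(round(N, 1))          # -> 317.9
```

Note that n − 1 is only of order 3 × 10⁻⁴ near the surface; it is the spatial fluctuation of this small quantity, not its mean value, that produces the phase errors discussed below.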
of non-uniformity responsible for amplitude and phase fluctuations of an electromagnetic wave is the so-called globule. Globules are spherical or ellipsoidal structures in which the refractive index differs, for some reason, from that of the environment. Generally, globules have arbitrary and irregular shapes. They arise from the local changes in the temperature, humidity or pressure accompanying turbulent phenomena in the troposphere. Since these causative factors behave differently at different points in space, the troposphere is generally non-uniform. We shall first briefly describe the characteristics of a turbulent troposphere. The refractive index of the troposphere is generally a function n(r, t) of the radius vector r and time t, which can be written as

n(r, t) = n̄ + δn(r, t),
(8.3)
where n̄ is the average value of the refractive index and δn(r, t) is its deviation from the average. Since the problem of interest is the fluctuation of the refractive index only, we shall further take n̄ = 1. The autocorrelation function of these fluctuations is

Bn(r1, r2, t1, t2) = ⟨δn(r1, t1) δn(r2, t2)⟩,
(8.4)
where r1, r2 are the radius vectors of the selected points. For a steady-state turbulence, the autocorrelation function is independent of t (steadiness in time):

Bn(r1, r2) = ⟨δn(r1, t) δn(r2, t)⟩.
(8.5)
For a statistically uniform turbulence (stationarity in space), the correlation function will not change if the pair of points r1 and r2 is displaced by the same distance in the same direction, that is, Bn(r1, r2) varies only with r1 − r2 = r. A spatially uniform distribution is called isotropic if Bn(r) depends only on r = |r|, that is, on the distance between the observation points but not on the direction. However, even in the case of a uniform and isotropic random distribution of the refractive index, it appears to be quite difficult to choose an autocorrelation function for its fluctuations that describes the real troposphere accurately. The only case in which the fluctuation distribution can be described from theoretical considerations is a locally uniform and isotropic turbulence. The general theory of this kind of turbulence was discussed in References 132 and 133. In real meteorological conditions, the distributions of wind velocity, pressure, humidity, temperature and the refractive index cannot be uniform or isotropic over large regions of space. But in a relatively small region, whose size Lo is known as the outer-scale size of turbulence, the distributions may be taken to be both uniform and isotropic. Theoretically, it is possible to describe fluctuations of the refractive index in terms of physical considerations of turbulence origin and development. The theory treats statistical fluctuations of velocity and of related scalar quantities (such as temperature and the refractive index), induced by disturbances in horizontal air currents because of wind and by perturbations in laminar flow due to convection. The physical mechanism of turbulence origin and development is as follows. When the Reynolds number of the translational wind flow exceeds its critical value, huge
whirls (globules) arise, and their size may exceed Lo. Such whirls are produced by the energy of translational flow, for example, by the wind power. This power is then passed on to whirls of size Lo, and so on. Eventually, the energy is dissipated through viscous friction in the smallest whirls of size lo, known as the inner-scale size of turbulence. In this way, huge whirls gradually split into smaller ones, and this process goes on until the power of rotational motion of the smallest whirls transforms to heat in overcoming the viscous forces. For this reason, the region where huge whirls transform into small ones is called the inertia region. Within such a region, the instantaneous distribution of the refractive index n(r) is an unsteady random function. However, the difference n(r1) − n(r2) is steady under the condition |r2 − r1| < Lo. In other words, n(r) appears to be a random function with steady first increments. Random processes of this kind, discussed in the books [132,133], are conveniently described by structure functions. The one for the refractive index distribution has the form:

Dn(r) = ⟨[n(r1) − n(r2)]²⟩.
(8.6)
The structure function is a fundamental characteristic of a random process with steady first increments, replacing the concept of the autocorrelation function, which simply does not exist for such random processes. The quantity Dn(r) describes the intensity of those n(r) fluctuations whose periods are smaller than or comparable with r. For a locally uniform and isotropic turbulence, it is defined as

Dn(r) = ⟨[n(r1 + r) − n(r1)]²⟩,
(8.7)
where r is an arbitrary increment of r1. Let us consider some statistical characteristics of the refractive index distribution in the troposphere. The detailed analysis made in References 132 and 133 has shown that the structure function of this parameter can be written as

Dn(r) = Cn² r^(2/3),  (lo ≪ r ≪ Lo),  (8.8)
where Cn² is a structure constant of the refractive index. Equation (8.8) describes the so-called 2/3 law of Obukhov and Kolmogorov for the refractive index distribution. Numerous measurements made in the near-earth troposphere [132,133] showed a good agreement between the fluctuation characteristics of n and the 2/3 law. The value of lo in the troposphere is found to be ∼1 mm. The quantity Lo is a function of direction and altitude. One may therefore assume that the horizontal extension of large whirls near the earth surface has the same order of magnitude as the altitude, as long as the altitudes in question lie in the range from 100 to 1000 m [110].
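The scaling implied by the 2/3 law can be sketched in a few lines; the value of Cn² used here is an arbitrary illustrative assumption:

```python
def Dn(r_m, Cn2=1e-14):
    """Structure function Dn(r) = Cn^2 * r^(2/3), Eq. (8.8),
    valid for lo << r << Lo. Cn2 is an illustrative value in m^(-2/3)."""
    return Cn2 * r_m ** (2.0 / 3.0)

# The 2/3 law: doubling the separation scales Dn by 2^(2/3)
ratio = Dn(2.0) / Dn(1.0)
print(round(ratio, 3))   # -> 1.587
```

The ratio is independent of Cn², which is why the power-law exponent, rather than the absolute level, identifies Kolmogorov-type turbulence in measured data.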
Figure 8.1  The normalised refractive index spectrum Φn(χ)/Cn² as a function of the wave number χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – Carman's model, 4 – modified Carman's model
The refractive index spectrum obeying the 2/3 law is

Φn(χ) = 0.033 Cn² χ^(−11/3),  (χo < χ < χm),  (8.9)
where χo ∼ (2π/Lo), χm ∼ (2π/lo) and χ is the spatial wave number. It has been found experimentally that the Φn(χ) spectrum has the form χ^(−11/3) in the inertia region, where the wave numbers are larger than χo. Figure 8.1 shows the normalised spectra for three regions: the region of whirl origin (χ < (2π/Lo)), the inertia region ((2π/Lo) ≪ χ ≪ (2π/lo)) and the dissipation region (χ ≥ (2π/lo)). It is seen that the spectral density Φn(χ) in the region of χ ≥ (2π/lo) decreases much faster than might be expected from the χ^(−11/3) formula. But in what way Φn(χ) decreases in this region is still unclear theoretically. One usually deals with three kinds of spectra in the dissipation region. One obeys the χ^(−11/3) law; another drops abruptly at χ = χm, implying that Φn(χ) = 0 at χ > χm; and, finally, the third changes on addition of the factor exp[−(χ²/χm²)]. In practice, the second case obeys Eq. (8.9). We have termed the respective model spectrum Tatarsky's model-I. It has been successfully employed in Reference 133 and some other studies. In Reference 132, V. Tatarsky used the following expression for
the refractive index spectrum:

Φn(χ) = 0.033 Cn² χ^(−11/3) exp(−χ²/χm²)  (8.10)

with χm = 5.92/lo rather than 2π/lo, as before. We have called the model for this case Tatarsky's model-II; it is fully valid in the inertia region but approximate at χ > χm. It follows from the analysis of the two models that they can adequately describe the statistical characteristics of the refractive index in the inertia region and are satisfactory for the dissipation region. In the region of χ < (2π/Lo), however, these models do not undergo any modification, that is, the dependence Φn(χ) remains χ^(−11/3). On the other hand, it is known from References 132 and 133 that the spectral density curve Φn(χ) at χ < (2π/Lo) is not universal and may change with the meteorological conditions. Therefore, the models of (8.9) and (8.10) are practically unable to evaluate the effects of this region on measurements. Besides, these models describe well only small-scale turbulence, which is quite clear from Fig. 8.1. In reality, however, most of the turbulence pulsation 'power' is accumulated in large whirls, at χ ≤ (2π/Lo). In such regions, the uniformity and isotropy of the random distribution of n(r, t) are also violated. Still, quantitative estimates can be made from interpolation formulae describing approximately the structure function behaviour at large Lo values, that is, in the range of small χ. One of these is Carman's function, which has the following spatial spectrum [133]:

Φn(χ) = 0.063 δn1² Lo³ / (1 + χ²Lo²)^(11/6)  at χ ≪ 2π/Lo,  (8.11)
where δn1² is the dispersion of the refractive index fluctuations. The spectral model of (8.11), known as Carman's model, works well for large-scale turbulence (Fig. 8.1). One can see from Eq. (8.11) that it does not include explicitly the constant Cn², which is related to the dispersion δn1² by the expression

Cn² = 1.9 δn1² Lo^(−2/3).  (8.12)
Using Eq. (8.12), one can derive similar expressions for Tatarsky's models I and II:

Φn(χ) = 0.063 δn1² Lo^(−2/3) χ^(−11/3)  at 2π/Lo ≪ χ ≪ 2π/lo,  (8.13)

Φn(χ) = 0.063 δn1² Lo^(−2/3) χ^(−11/3) exp(−χ²/χm²)  at 2π/Lo ≪ χ ≪ 2π/lo.  (8.14)
This representation is convenient when the refractive index fluctuations are given as δn1² rather than through Cn². The next point to discuss is the applicability of the spectra described by Eqs (8.9), (8.10) and (8.11). When using any of these spectral models in problems of parameter fluctuations of an electromagnetic wave in a turbulent medium, one should
bear in mind the following factors. First, the spectra are valid in the inertia region of a locally uniform and isotropic turbulence; sometimes the real turbulence spectrum may strongly differ from the above models. Second, the spectrum at χ ≤ χo is, at best, an approximation, even if one uses Carman's spectra. At χ ≥ χm, the model spectra are also only good approximations. Note that the spectrum of the form (8.11) transforms to that of (8.9) at χ²Lo² ≫ 1. In addition to the three types of spectra, there is a spectrum of the form:

Φn(χ) = α exp(−χ²/χm²) / (1 + χ²Lo²)^(11/6),

α = [δn1² Lo³ Γ(11/6) / (π^(3/2) Γ(1/3))] C(χm Lo),

C(χm Lo) ≈ [1 + (Γ(11/6) Γ(−1/3) / (Γ(1/3) Γ(3/2))) (χm Lo)^(−2/3)]^(−1).  (8.15)
At χm Lo ≫ 1, the correction term C(χm Lo) ≈ 1. Since lo ∼ (1 ÷ 10) mm and Lo ≥ 1 m, we have χm = 5.92/lo = (5.92 ÷ 59.2) cm⁻¹ and χm Lo ≥ 5.92 × 10³. Keeping this in mind, together with

Γ(11/6) / (π^(3/2) Γ(1/3)) ≈ 0.06,

we get

Φn(χ) = 0.06 δn1² Lo³ (1 + χ²Lo²)^(−11/6) exp(−χ²/χm²)  (8.16)

or

Φn(χ) = 0.06 Cn² Lo^(11/3) (1 + χ²Lo²)^(−11/6) exp(−χ²/χm²).  (8.17)
It would be reasonable to call a spectrum of the type (8.16) or (8.17) Carman's modified spectrum. If relation (8.12) is fulfilled, this spectrum coincides with those described by Eqs (8.10) and (8.14) at large values of χ; in the small-χ range, it coincides with the Carman spectrum shown in Fig. 8.1. The choice of a particular type of spectrum varies with the problem to be solved. Fluctuations of some electromagnetic wave parameters, such as phase and amplitude, are often sensitive to a particular part of the turbulence spectrum, that is, to large- or small-scale whirls. Keeping this important fact in mind, one should analyse carefully the applicability of the chosen spectrum before using it.
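The relationship between the models in Fig. 8.1 can be checked numerically; in particular, in the inertia region (χLo ≫ 1) Carman's spectrum approaches the 2/3-law form of Tatarsky's model-I. The values of δn1² and Lo below are arbitrary illustrative assumptions:

```python
def tatarsky_I(chi, Cn2):
    """Phi_n(chi) = 0.033 * Cn^2 * chi^(-11/3), Eq. (8.9), inertia region."""
    return 0.033 * Cn2 * chi ** (-11.0 / 3.0)

def carman(chi, dn2, Lo):
    """Phi_n(chi) = 0.063 * dn1^2 * Lo^3 * (1 + chi^2 Lo^2)^(-11/6), Eq. (8.11)."""
    return 0.063 * dn2 * Lo ** 3 * (1.0 + chi ** 2 * Lo ** 2) ** (-11.0 / 6.0)

# Illustrative parameters; Eq. (8.12) links Cn^2 to dn1^2 and Lo
dn2, Lo = 1e-13, 10.0
Cn2 = 1.9 * dn2 * Lo ** (-2.0 / 3.0)

# At chi*Lo = 100 >> 1 the two spectra should nearly coincide
chi = 100.0 / Lo
ratio = carman(chi, dn2, Lo) / tatarsky_I(chi, Cn2)
print(round(ratio, 3))
```

The ratio comes out close to unity (0.063 ≈ 0.033 × 1.9), which is the numerical content of the remark that (8.11) transforms to (8.9) at χ²Lo² ≫ 1.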
The best way of verifying a model is to compare the results obtained with available experimental data. Although the models of (8.9) and (8.10) are rather approximate at χ < (2π/Lo), they still provide a good agreement with measurements (e.g. of phase fluctuations). Moreover, they can give the results in an analytical form. On the other hand, the models of (8.11) and (8.15) are more accurate for large whirls, but they are unable to give clear analytical results. These circumstances have predetermined the applicability of the models of (8.9) and (8.10). In the study of phase fluctuations, both models yield similar analytical expressions. It is of importance to discuss in some detail a vertical profile model of the structure constant. This constant describes the degree of refractive index non-uniformity, because it relates the quantities Dn(r) and r (see Eq. (8.8)). The structure constant Cn² is related to the tropospheric parameters δn1² and Lo. For radiation propagation along an oblique path, the turbulence 'intensity' changes with altitude, and the Cn² values will be different at different altitudes. The structure function of n(r) will then be Dn(r) = Cn²(h) r^(2/3), where Cn²(h) is a structure constant varying with altitude. To obtain quantitative results, one first has to find the Cn²(h) variation. The theoretical treatment of the problem of parameter fluctuations for a plane wave in a turbulent troposphere [132] included the following Cn²(h) models:

Cn² = Cn0² exp(−h/h0),  (8.18)

Cn² = Cn0² / (1 + (h/h0)²),  (8.19)

where Cn0² is the structure constant of the refractive index near the earth surface, h is the altitude of the point in question, and h0 is a constant. However, the question whether Eqs (8.18) and (8.19) can really describe the Cn²(h) function in the microwave frequency band remains unanswered. In order to find the exact form of this function, it is necessary to examine the microstructure of the refractive index distribution in the microwave range and to design a Cn²(h) model. This became possible only after the publication of the work [134], which reported measurements made in experimental flight conditions. The structure constant profile of the refractive index was measured along an oblique microwave path. The results of the Cn(h) measurement were summarised in table 1 of Reference 134. Yet, it was impossible to plot the Cn(h) function from these data, because they had to be statistically processed first. This was accomplished by the authors of Reference 144. Figures 8.2 and 8.3 show some Cn²(h) plots for different seasons (April and November). The Cn² values in these plots represent records averaged over several runs of the squared structure constant measurement (the averaging was actually made over the time of day). The confidence limit was taken to be 0.98. Some of the Cn values presented in Reference 134 differ considerably from the average values and do not seem to be due to a statistical spread. To reveal such data, the authors used a
Figure 8.2  The profile of the structure constant Cn² versus the altitude for April at the SAR wavelength of 3.12 cm
criterion based on the assumption of a normal error distribution. The Cn records that differed from the average by more than the possible maximum of the statistical spread at the 0.98 confidence limit were eliminated from further analysis. The plots thus obtained were approximated by exponential functions, using the least squares method. As a result, the following analytical dependences were derived for the structure constant profile at the wavelength of 3.12 cm:

(a) the Cn(h) model for April:

Cn²(h) = Cn0² exp(−h/h0)  (8.20)

with Cn0² = 3.69 × 10⁻¹⁵ cm^(−2/3) and h0 = 2.17 × 10⁵ cm;
Figure 8.3  The profile of the structure constant Cn² versus the altitude for November at the SAR wavelength of 3.12 cm
(b) the Cn(h) model for November:

Cn²(h) = Cn0² exp(−h/h0),  (8.21)

with Cn0² = 1.27 × 10⁻¹⁴ cm^(−2/3) and h0 = 8.89 × 10⁴ cm.
We can see that the refractive index fluctuations decrease with altitude. The major contribution to the fluctuations is made by a tropospheric stratum 3 km thick above the earth. The contribution of the other 7 km thickness (the total thickness of the troposphere is taken to be 10 km) is five times smaller. It is known that the fluctuation of n increases with rising humidity. The most intense fluctuations are observed at
the air–cloud interface and inside the clouds. This model, however, ignores these effects because of the lack of experimental data. But some data are available on the effect of humidity and clouds on the dispersion δn² of the refractive index values. Therefore, a model of the vertical δn² profile allows estimation, in a first approximation, of the cloud effect on phase fluctuations. To conclude, it seems reasonable to extend the results for λ = 3.12 cm to other centimetre wavelengths, since the Smith–Weintraub formula (8.2) indicates only a slight dependence of N on the wavelength λ within the centimetre frequency band.
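The dominance of the lowest tropospheric stratum follows directly from the exponential profiles; evaluating the column integral of Cn²(h) with the fitted April and November parameters quoted above shows that most of the fluctuation 'column' lies below 3 km (the 10 km total tropospheric thickness is taken as in the text):

```python
import math

# Fitted profile parameters (wavelength 3.12 cm): Cn0^2 in cm^(-2/3), h0 in cm
april    = (3.69e-15, 2.17e5)
november = (1.27e-14, 8.89e4)

def column_fraction(h_top_cm, Cn0_sq, h0_cm, h_total_cm=1e6):
    """Fraction of the integral of Cn^2(h) = Cn0^2*exp(-h/h0) contributed
    below h_top (Cn0^2 cancels in the ratio but is kept for clarity)."""
    below = h0_cm * (1.0 - math.exp(-h_top_cm / h0_cm))
    total = h0_cm * (1.0 - math.exp(-h_total_cm / h0_cm))
    return below / total

for name, (c0, h0) in (("April", april), ("November", november)):
    print(name, round(column_fraction(3e5, c0, h0), 2))  # -> April 0.76, November 0.97
```

The exact fraction depends on the season through h0, but in both cases the stratum below 3 km clearly dominates, consistent with the statement above.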
8.1.2 The distribution of electron density fluctuations in the ionosphere

In contrast to the troposphere, the ionosphere is characterised by electron density fluctuations. Let Ne(r) denote the fluctuations of the electron density about the equilibrium value No, which is the average electron concentration. The variable ξ defined by the equality ξ = Ne(r)/No represents a uniform random distribution with a zero average and a standard deviation σξ. By definition, the autocorrelation function of this distribution is Bξ(r1 − r2) = ⟨ξ(r1)ξ(r2)⟩, where the angular brackets stand for averaging over an ensemble. According to the Wiener–Khinchin theorem, the autocorrelation function and the spectrum form a Fourier transform pair:

Φξ(χ) = (2π)⁻³ ∫ Bξ(r) e^(−jχ·r) d³r,  (8.22)

Bξ(r) = ∫ Φξ(χ) e^(jχ·r) d³χ,  (8.23)
where χ is the spatial wave number and the integrals are taken over all space. Experimental investigations have shown [141] that both the phase and the amplitude fluctuation spectra of a wave that has passed through a turbulent ionosphere have an asymptotic power-law dependence. Hence, the spectra of ionospheric whirls must also have a power-law dependence. Assuming the whirls to be isotropic within a spatial scale from 70 m to 7 km, a 3D whirl spectrum has the form [141]:

Φξ(χ) ≈ χ^(−P),  (8.24)

where P is the power index of the spectrum, varying between 2 ≤ P ≤ 3. C. L. Rino and various co-workers [113–116] have suggested a spectrum of the electron density fluctuations:

ΦNe(χ) = CS χ^(−(2ν+1))  at χo < χ < χi,  (8.25)

where CS is the turbulence parameter, χo is the wave number of the outer scale of ionospheric whirls, χi is that of their inner scale, and 2ν = P is the spectral power index. We have mentioned
above that the power index varies between 2 < P < 3, whereas the power index for the troposphere is P = 8/3 (Kolmogorov's spectrum). The turbulence parameter is described as

CS = 8π^(3/2) χo^(P−2) [Γ((P + 1)/2) / Γ((P − 1)/2)] ⟨Ne²⟩,  (8.26)
where Γ(·) is the gamma function and ⟨Ne²⟩ is the mean square value of the fluctuating component of the electron density. For a typical fluctuation distribution in the ionosphere, CS ∼ 10²¹ (MKS units). The quantity ⟨Ne²⟩ varies considerably with the ionospheric conditions, so CS ranges from 6.5 × 10¹⁹ (MKS units) at P = 2.9 to 1.3 × 10²³ (MKS units) at P = 1.5 [22]. The ionosphere has a thickness of about 200 km. The maximum electron density lies in the NmF2 layer at an altitude between 250 and 350 km. The outer-scale size of a turbulent whirl along its shortest dimension (ionospheric whirls are anisotropic) is about 10 km. The respective value for a turbulent troposphere is about 1 km.
8.2 A model of phase errors in a turbulent troposphere

When discussing SAR in Chapter 3, we pointed out that a turbulent non-uniform troposphere can be a source of spatial phase fluctuations. Let us consider a turbulence model with reference to a particular type of SAR – a radar with a focused aperture. Suppose a SAR moves along the carrier track (Fig. 8.4). For simplicity, we shall assume that there is only one point scatterer A across the swath width. This target is located at the point having an oblique range R and is scanned over the synthesis length Ls, i.e. Ls ≈ βH, where β is the aperture pattern width. The equiphase surface of the echo signal is a sphere centred at the target location. The track line is shown by the A1–A2 line, and the thickness of the turbulent tropospheric stratum is denoted as ht. The structure function of the phase fluctuations for the spherical wave of the point target A is

Dϕ(ρ) = ⟨[ϕ(r + ρ) − ϕ(r)]²⟩,
(8.27)
where ρ is the distance between the points at which the phase fluctuations are to be measured, for example, ρ = de. To find an analytical expression for Dϕ(ρ), consider a 2D spectrum of the wave phase fluctuations in a turbulent troposphere. Using a gradual perturbation approach, the authors of Reference 133 derived a simple formula relating the phase fluctuation parameters to the spectral density of the refractive index fluctuations Φn(χ). The 2D spectral density Fϕ(χ, 0) and Φn(χ) have the simplest relation, because the former is a 2D Fourier transform of the respective phase structure function in the plane x = const normal to the wave propagation direction. For a plane
Figure 8.4  A geometrical construction for a spaceborne SAR tracking a point object A through a turbulent atmospheric stratum of thickness ht
wave with the cross section x = L, we have

Fϕ(χ, 0) = π k² L [1 + (k/(χ²L)) sin(χ²L/k)] Φn(χ),  (8.28)
where L is the distance covered by the wave passing through the non-uniform turbulent medium. Using Eq. (1.51) from Reference 133, we can now turn to the structure function of phase fluctuations in the plane x = L:

Dϕ(ρ) = 4π ∫₀^∞ [1 − J0(χρ)] Fϕ(χ, 0) χ dχ,  (8.29)
where ρ is the distance between the points at which the structure function is to be measured in the plane x = L. It follows from Eq. (8.28) that the 2D spectrum Fϕ(χ, 0) is similar to the spectrum of the refractive index fluctuations Φn(χ) multiplied by the filtering function (in square brackets). Therefore, the wave propagation through a turbulent medium is similar to the effect of a linear filter in circuit theory.
The filtering function of the phase fluctuations is only slightly sensitive to parameter variations. For example, at χ = 0, the factor multiplying Φn(χ) in Fϕ(χ, 0) equals 2πk²L, and it changes smoothly toward πk²L with increasing χ. Therefore, the filtering is relatively uniform. The maximum product of the filtering function and Φn(χ) for typical SARs is observed at small values of χ, that is, in large whirls. For this reason, phase fluctuations and phase correlation are most sensitive to the outer-scale size of turbulence, Lo. With Eq. (8.29) and the turbulence models of (8.9) and (8.10), we can arrive at an expression for a uniform turbulence and a plane wave:

Dϕ(ρ) = α k² Cn² L ρ^(5/3),  (8.30)

where

α = 2.91 at ρ ≥ √(λL),
α = 1.46 at lo ≪ ρ ≪ √(λL),
and L is the electromagnetic wave path in the turbulent medium. In order to examine the effect of phase errors on the recording of 1D holograms by a side-looking radar, it would be useful to extend the above result to the case of a non-uniform turbulence and a spherical wave [144]. From Tatarsky's non-uniform model-I, we have

Dϕ(ρ) = 1.46 k² ρ^(5/3) ∫₀^L Cn²(h) dh,  (lo ≪ ρ ≪ √(λL)),  (8.31)

Dϕ(ρ) = 2.91 k² ρ^(5/3) ∫₀^L Cn²(h) dh,  (ρ ≫ √(λL)).  (8.32)
The last two expressions show that phase fluctuations are equally affected by all whirls, irrespective of their distance to the observation point. Moreover, when ρ passes through the value √(λL), which usually happens somewhere at the beginning of the path, the factor in front of Dϕ(ρ) increases 2-fold. Therefore, the experimental structure function Dϕ(ρ) must have a positive rise at ρ = √(λL). It is interesting to follow how Dϕ(ρ) changes when a plane wave is replaced by a spherical one. The formula relating the mean square value of the phase difference fluctuation at the base ρ for a spherical wave to that for a plane wave [132] is

⟨(ϕ1 − ϕ2)²⟩sp = [Dϕ(ρ)]sp = ∫₀¹ Dϕ(ρt) dt.
For the plane wave, Dϕ(ρ) = α k² Cn² L ρ^(5/3) = A0 ρ^(5/3), and we have

[Dϕ(ρ)]sp = A0 ∫₀¹ (tρ)^(5/3) dt = (3/8) A0 ρ^(5/3),
where A0 = α k² Cn² L. Hence,

[Dϕ(ρ)]sp = (3/8) [Dϕ(ρ)]pl.  (8.33)
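The coefficient 3/8 in Eq. (8.33) is simply the value of the integral of t^(5/3) over [0, 1]; a quick midpoint-rule check reproduces it:

```python
# Numerically evaluate the spherical-wave averaging integral of t^(5/3)
# over [0, 1]; its exact value is 3/(5/3 + 1) ... = 3/8 = 0.375.
n = 100000
dt = 1.0 / n
integral = sum(((i + 0.5) * dt) ** (5.0 / 3.0) for i in range(n)) * dt
print(round(integral, 6))   # -> 0.375
```

This is why the spherical-wave structure functions below carry the same ρ^(5/3) dependence as the plane-wave ones, reduced only by the constant factor 3/8.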
We can conclude that the phase fluctuations for a spherical wave are not as large as for a plane wave and that the structure functions for the former differ from those of the latter only in the numerical coefficients. For a medium with slowly changing characteristics, we have

[Dϕ(ρ)]sp = (3/8) · 1.46 k² ρ^(5/3) ∫₀^L Cn²(h) dh,  (lo ≪ ρ ≪ √(λL)),  (8.34)

[Dϕ(ρ)]sp = (3/8) · 2.91 k² ρ^(5/3) ∫₀^L Cn²(h) dh,  (ρ > √(λL)).  (8.35)
The initial expression for the structure function evaluation in a SAR is Eq. (8.35), because there is the relation ρ = de > √(λL). The Cn²(h) function was shown above to be given by

Cn²(h) = Cn0² exp(−h/h0).

As a result, we have the formula

∫₀^L Cn²(h) dh = Cn0² h0 [1 − exp(−L/h0)],

where L = ht cosec θ, θ is the angle between the wave propagation direction and the skyline, and ht is the total altitude of the turbulent stratum. A synthetic aperture is characterised by the equality ρ = de, where de is the equivalent base at the altitude ht. It follows from Fig. 8.4 that

de = Ls ht cosec ϑ / Ro = Ls ht / H.
Thus, we eventually get the relation

Dϕ(ρ) = βo (2π/λ)² (Ls ht / H)^(5/3) Cn0² h0 [1 − exp(−ht cosec ϑ / h0)],  (8.36)

where Ls = V̄ Ts, Ts is the synthesis time, V̄ is the track velocity of the radar carrier and βo = 1.09.
Equation (8.36) also allows finding the standard deviation of the phase difference fluctuations at the synthetic aperture ends:

σϕ(ρ) = [Dϕ(ρ)]^(1/2).  (8.37)
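Equations (8.36) and (8.37) are straightforward to evaluate. The viewing geometry below (synthesis length, stratum thickness, orbit altitude, vertical incidence) is an assumed illustrative example, with the April profile parameters of Eq. (8.20) converted to metric units:

```python
import math

CM23_TO_M23 = 100.0 ** (2.0 / 3.0)   # convert Cn^2 from cm^(-2/3) to m^(-2/3)

def D_phi(lam, Ls, ht, H, Cn0_sq, h0, theta, beta0=1.09):
    """Structure function of the phase-difference fluctuations, Eq. (8.36).
    lam, Ls, ht, H, h0 in metres; Cn0_sq in m^(-2/3); theta in radians."""
    column = Cn0_sq * h0 * (1.0 - math.exp(-ht / (h0 * math.sin(theta))))
    return beta0 * (2 * math.pi / lam) ** 2 * (Ls * ht / H) ** (5.0 / 3.0) * column

# Assumed geometry: lambda = 3.12 cm, Ls = 5 km, ht = 3 km, H = 800 km
D = D_phi(lam=0.0312, Ls=5e3, ht=3e3, H=8e5,
          Cn0_sq=3.69e-15 * CM23_TO_M23, h0=2.17e3, theta=math.pi / 2)
sigma_phi = math.sqrt(D)              # Eq. (8.37), radians
print(sigma_phi)
```

For these assumed numbers the phase-difference standard deviation comes out as a small fraction of a radian, i.e. tropospheric turbulence alone is a modest but non-negligible error source at centimetre wavelengths.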
We shall now examine how phase errors due to tropospheric turbulence affect the resolution limit and the optimal length of a synthetic aperture. W. Brown and Y. Riordan [23] have calculated both parameters for the case of phase errors with a structure function obeying a power law. It was stated that the phase difference [ϕ(r + ρ) − ϕ(r)] has a Gaussian distribution, and this is supported experimentally. For the above type of phase errors, the expression for the aperture resolution along the track is found to be

ρx = λR / (4πρo)  (8.38)
with ρo = 0.985b. The quantity b is to be calculated from the equation for the structure function of the phase error:

Dϕ(ρ) = b^n ρ^n,   n = 5/3.   (8.39)
Then Eqs (8.38) and (8.39) yield

ρx = [λR/(4πρ)]·[Dϕ(ρ)]^(3/5).   (8.40)
Using the equation for the structure function of the phase error (8.36) and ρ = de, we get

ρx = λ^(−1/5) R C0 (Cn0²)^(3/5) (h0)^(3/5) [1 − exp(−ht cosec ϑ/h0)]^(3/5),   (8.41)

where C0 = const. This equation shows that ρx depends only weakly on λ, decreasing slowly as λ increases. The optimal synthetic aperture length in a turbulent troposphere [23] can be found as

Lopt = 13.4/b.   (8.42)

Then Eqs (8.42) and (8.39) give

Lopt = d0 λ^(6/5) / {(Cn0²)^(3/5) (h0)^(3/5) [1 − exp(−ht cosec ϑ/h0)]^(3/5)},   (8.43)

with d0 = const. The mean square value of the phase error between the centre of the optimal aperture and its end point is

σϕ = [Dϕ(Lopt/2)]^(1/2),   (8.44)

where Dϕ and Lopt are to be calculated from Eqs (8.36) and (8.43).
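Equations (8.39), (8.42) and (8.44) can be transcribed directly; a minimal sketch, in which the reference values of Dϕ and ρ are hypothetical and the power law Dϕ(ρ) = (bρ)^(5/3) of Eq. (8.39) is assumed throughout:

```python
import math

N_EXP = 5.0 / 3.0  # power-law exponent n in Eq. (8.39)

def b_from_structure_function(d_phi_ref, rho_ref):
    """Recover b from D_phi(rho) = (b*rho)**n at a reference separation."""
    return d_phi_ref ** (1.0 / N_EXP) / rho_ref

def l_opt(b):
    """Optimal synthetic aperture length, Eq. (8.42)."""
    return 13.4 / b

def sigma_phi_opt(b):
    """r.m.s. phase error between aperture centre and end, Eq. (8.44)."""
    d = (b * l_opt(b) / 2.0) ** N_EXP  # D_phi evaluated at L_opt/2
    return math.sqrt(d)
```

Note an interesting consequence of this pair of formulas: in σϕ at Lopt/2 the quantity b cancels, so under this model the residual phase error at the optimal aperture length is a fixed number of radians, independent of the turbulence strength.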
Some other methods for reducing propagation-induced phase errors in coherent imaging systems were suggested in References 22 and 47.
8.3 A model of phase errors in a turbulent ionosphere

It was shown in the Appendix to Reference 114 that a good approximation for the structure function of phase fluctuations is the expression

D(y) ≅ Cδ² |y|^(2ν−1),   0.5 < ν < 1.5.   (8.45)

The phase structure constant Cδ² is defined as

Cδ² = Cp · 2Γ(1.5 − ν) / [2π Γ(ν + 0.5)(2ν − 1) 2^(2ν−1)],   0.5 < ν < 1.5,   (8.46)
where Cp = re²λ²lp CS, lp is the path length of an electromagnetic wave in the ionosphere, re is the classical electron radius (re = 2.81 × 10⁻¹⁵ m), λ is the transmitter wavelength, and CS is the turbulence parameter of the ionosphere described by Eq. (8.26). Using the phase screen model of Reference 116 and Eq. (8.46), one can show that the mean square value of the phase fluctuations along the path lp is defined as

δ² = 2√π re²λ²lp CS G χo^(−2ν+1) Γ(ν − 1/2) / [4π Γ(ν + 1/2)],   (8.47)
where the factor G was borrowed from the Appendix to Reference 113. This factor accounts for:

• the velocity of the scanning beam motion relative to electron density whirls (νo);
• the geometrical parameter due to the electron density anisotropy;
• the effective velocity of the scanning beam across the earth surface (Vef);
• the synthesised aperture length Ls.
The factor G is defined as

G = (Vef Ls/νo)^(p−1).   (8.48)

The equations for these parameters and for Vef can be found in Reference 113. All the fundamental concepts of the model we have just discussed were developed by Rino, so we think this model should bear his name. It has been successfully employed to analyse the effects of ionospheric turbulence on the performance of communication and navigation devices. But we also believe that this model can be useful for estimating aperture performance in the presence of whirls and their effect on the azimuth ambiguity function. The latter is important because one can then evaluate the aperture resolution errors.
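Equations (8.47) and (8.48) can be sketched as a literal numerical transcription. The exponent p and all sample values below are assumptions for illustration (the text defers p, Vef and the anisotropy parameter to Reference 113), and math.gamma supplies Γ(·):

```python
import math

def g_factor(v_ef, Ls, nu_o, p):
    """Scanning-geometry factor G, Eq. (8.48); p is an assumed exponent."""
    return (v_ef * Ls / nu_o) ** (p - 1.0)

def delta_sq(lam, l_p, C_S, G, chi_o, nu, r_e=2.81e-15):
    """Mean square ionospheric phase fluctuation, Eq. (8.47).

    Requires nu > 0.5 so that gamma(nu - 1/2) is defined.
    """
    num = (2.0 * math.sqrt(math.pi) * r_e ** 2 * lam ** 2 * l_p * C_S * G
           * chi_o ** (-2.0 * nu + 1.0) * math.gamma(nu - 0.5))
    return num / (4.0 * math.pi * math.gamma(nu + 0.5))
```

The λ² dependence makes the practical point of this section explicit: for the same turbulence, an L-band or P-band spaceborne SAR suffers far larger ionospheric phase variance than an X-band one.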
8.4 Evaluation of image quality¹

Synthetic apertures were primarily designed for obtaining images to be used by a human operator to solve research and applied problems. It is natural that the evaluation of aperture performance should largely be based on the analysis of image characteristics. To do so, one needs to have at one's disposal appropriate criteria for a quantitative description of the performance characteristics of a particular type of aperture, to be able to compare them with those of other apertures and to suggest appropriate improvements. At present, there is no generally accepted criterion for the evaluation of aperture performance or image quality, though some attempts have been made along this line [99]. The difficulties involved in developing a reliable criterion are due not only to the complex design and random behaviour of a synthetic aperture but also to the diversity of its applications (e.g. the great variety of target aspect angles at which imaging is made). Normally, potential characteristics or some individual parameters are used as criteria for the evaluation of aperture performance.
8.4.1 Potential SAR characteristics

SAR designers and researchers often use the so-called potential characteristics, since they describe the aperture response to an echo signal from a point scatterer and do not contain micronavigation noise [53]. The following parameters may be referred to as potential characteristics. We shall mostly list characteristics of apertures using digital signal processing and digital image reconstruction.

1. The major lobe width of the synthetic antenna pattern (SAP) characterises the potential resolving power of an aperture in azimuth, ρβ. This parameter is determined by the width of the aperture response to a point target at zero noise. In practice, the 3 dB SAP width is most often used as a criterion for evaluating the potential resolution, but there are other approaches, too. The potential resolution is usually evaluated with a uniform weighting function H(t) ≡ 1 to get

ρβ ≈ λ/(2L sin γ),   (8.49)
where L is the projection of the synthesis step onto the normal to the view line and γ is the incidence angle of the microwave radiation. If the weighting function is non-uniform, the major lobe width becomes 1.2–2.5 times larger, depending on the type of weighting function.

2. The integral level of the side lobes,

bi = [∫₋π^π I²(β) dβ − ∫₋ρβ/2^ρβ/2 I²(β) dβ] / ∫₋π^π I²(β) dβ,   (8.50)

¹ Section 8.4 was written by E. F. Tolstov and A. S. Bogachev.
Table 8.1 The main characteristics of the synthetic aperture pattern

Type of weighting function   Relative SAP width   bi       20 lg(bm)
Uniform                      1.0                  0.0705   −13.3
Parabolic                    1.3                  —        −20.6
Hanning's                    1.6                  0.0103   −32.0
Hamming's                    1.45                 0.178    −42.0
characterises the maximum of the SAP relative to the background created by the side lobes.

3. The maximum side lobe level is

bm = Ims/Im,   (8.51)

where Ims and Im are the maximum side lobe and major lobe intensities, respectively. This parameter is effective in sensing microwave-contrast targets against a weakly reflecting background. The integral and maximum levels of the side lobes, as well as the major lobe width, vary with the weighting function used in the SAR (Table 8.1). The relative width in the table is the SAP width normalised to that for a uniform weighting function.

4. The azimuthal sample characteristic is

ka = ρβ/ρ,   (8.52)
where ρ is the step between the azimuthal counts of the image digital signal. According to the sampling theorem, the sample characteristic must meet the condition ρ < ρβ. This parameter denotes the number of digital signal counts per azimuthal resolution element and describes the radar's capability to reconstruct an image. The larger the sample characteristic, the greater the image contrast; however, a larger coefficient entails a greater complexity of the image reconstruction design. The optimal value of this parameter is taken to be ka = 1.2.

5. Image stability characterises the ability of a digital image reconstruction device to sense and count the relative positions of partial frame centres and to provide the proper scale over all the sample characteristics when partial frames are matched and superimposed.

6. The gain in the signal-to-noise ratio in coherent and incoherent integration is calculated from the variations of this parameter at the processor output. It is assumed that the echo and image signals are integrated linearly in both coherent [17] and incoherent integration [59], whereas noise is integrated in quadratures. Therefore, the total gain in the signal-to-noise ratio Kg is

Kg = √(Nn),   (8.53)

where n is the number of echo counts over a synthesis step in one range channel and N is the number of incoherently integrated partial frames. In real flight conditions, the actual aperture characteristics differ from the potential ones. The reason for this is the noise from the processing and micronavigation devices, as well as the limitations of the imaging systems.
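The potential characteristics (8.49), (8.52) and (8.53) lend themselves to a short numerical sketch; the parameter values in the example are assumed purely for illustration:

```python
import math

def rho_beta(lam, L, gamma):
    """Potential azimuth resolution for uniform weighting, Eq. (8.49)."""
    return lam / (2.0 * L * math.sin(gamma))

def k_a(rho_b, rho_step):
    """Azimuthal sample characteristic, Eq. (8.52); requires rho_step < rho_b."""
    return rho_b / rho_step

def k_g(N, n):
    """Total signal-to-noise gain of coherent plus incoherent integration,
    Eq. (8.53): n coherent echo counts, N incoherently integrated frames."""
    return math.sqrt(N * n)

# Example: doubling the synthesis step halves the potential resolution cell
print(rho_beta(0.03, 100.0, math.radians(45)),
      rho_beta(0.03, 200.0, math.radians(45)),
      k_g(4, 9))
```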
8.4.2 Radar characteristics determined from images

The real performance characteristics of a radar system are evaluated from the results of statistical processing of image parameters registered during experimental flights over a test ground (of the type of Willcox Playa in the United States). The radar characteristics to be found experimentally are usually as follows.

1. The realistic aperture sharpness is taken to be the minimal distance between two corner reflectors discernible on an image along the respective coordinate, if the reflectors produce pulses of equal intensity and if the power of the reflected signals is much greater than the noise. Note that the sharpness evaluation is affected by the sample characteristic, which can normally be varied by the operator during the test.

2. The intensity of speckle noise on an image is defined as the ratio of the standard deviation to the mean signal intensity on an image for a statistically uniform area on the earth. The speckle arises from the presence of numerous point scatterers in a resolution element, which have an approximately identical radar cross section (RCS) and whose re-emission produces a pattern of random geometry. The speckle effect can be reduced by filtering or by incoherent integration of several independent images of the same region on the earth. Independent images can be obtained at different radiation frequencies, polarisations or aspect angles. Depending on the SAR application, the number of such images varies from 3 to 4 for military applications to 70 for resource survey tasks.

3. The dark level on an image is the average intensity of a signal from a region of the lowest reflectivity. Sometimes the dark level is taken to be the average image intensity with a zero echo signal at the input (the noise dark level). This parameter is related to the side lobe level of the synthesised antenna pattern and to the processing noise.

4.
The dynamic range is defined as the maximum-to-minimum signal intensity ratio on an image. It depends on the design of the transmitter–receiver unit, the processor characteristics, the receiver gain control, etc.

5. The contrast of adjacent samples is found as the ratio of the maximum signal intensity from a point target (much above the noise level) to the average intensity of the adjacent samples. This parameter characterises the SAR's ability to reconstruct the maximum spatial frequency on an image.

6. The mean image power is a parameter affected not only by the transmitter power, the antenna gain, the receiver sensitivity and the signal-to-noise gain at the processor output, but also by the post-processing before a signal is displayed (especially at the stage of defining its minimum threshold).
7. The intrinsic aperture noise level is the mean image signal level when there is only noise at the aperture input and its gain corresponds to the mean image signal. This parameter covers the total effect of the aperture noise during the synthesis.

8. The radar swath width is determined by the screen parameters (the number of lines and the number of pixels in a line) and by the discretisation step in range and azimuth. An acceptable number of image pixels on a screen normally varies from 512 × 512 to 1024 × 1024.

9. Geometrical distortions of an image are defined as the standard deviation of the positions of reference scatterers relative to their actual positions. The central reference mark is superimposed with the real reference. The standard deviation value is affected by the range, the view angle, the altitude, the distance between the reference and the image centre, as well as by the imaging time.

10. The imaging time is an important parameter of an aperture operating in real time.

A typical test ground for the study of aperture characteristics is a statistically uniform surface with three-edge corner reflectors (Fig. 8.5) arranged at different distances from each other (for evaluation of the aperture sharpness). The reflectors possess different reflectivities, so one can measure the dynamic range of the system. In addition to a uniform background, a test ground usually includes some common objects such as roads, fields, smooth surfaces, railways, etc.

In order to understand better the difference between the potential and real characteristics of a synthetic aperture and a SAR as a whole, we shall make use of test results with digital image reconstruction (the AN/APQ-102A modification) [53]. Its potential resolution was 12.2 m along the azimuth and range coordinates. The discretisation step for evaluation of the real azimuthal resolution was taken to be 3.04 m. Figure 8.6 shows an azimuthal signal from two corner reflectors.
Figure 8.5 A schematic test ground with corner reflectors for investigation of SAR performance

Figure 8.6 A 1D SAR image of two corner reflectors (radar image intensity versus azimuth channel number)

When the valley
between their images was 2 dB, the azimuthal resolution was found to be 21.28 m, or 7 pixels in an image line. Part of the test ground image was obtained by a 14-fold incoherent integration, with a mean signal value of 0.671 and a standard deviation of 0.201. The evaluated speckle was found to be 0.3, which is a sufficiently low level. The dark level was typically 23 dB relative to the grey-level value. Hence, the SAR dynamic range is 33 dB, with the contrast of adjacent samples being 2.8, or 4.5 dB. For a synthetic aperture with strongly suppressed side lobes, this parameter was 6–10 dB. The large standard deviation in this case is due to the use of corner reflectors with a large RCS.

Figure 8.7 shows a histogram of the noise distribution at the aperture output; one may suggest that the probability density has a Rayleigh pattern. The mean value of 0.21 was taken to be the dark level. One of the dark regions exhibits a Rayleigh distribution with a mean value of 0.42. A screen with 384 × 360 pixels covered a view zone of 4.8 × 4.5 km. The errors in the measurement of the range positions of the corner reflectors were 14 and 18 m at a distance of 1600 m from the image centre, whereas the radar was at 14.5 km from it. The azimuth measurement error was ∼50 m under the same conditions.
8.4.3 Integral evaluation of image quality

Figure 8.7 A histogram of the noise distribution in a SAR receiver (frequency versus radar image intensity)

The authors of Reference 99 have suggested a method for the integral evaluation of radar images. With this method one can compare images and establish a certain standard
for the transformation of resolution to the number of incoherent integrations or to a parameter related to the dynamic range of an image signal. It is shown that the interpretability U, i.e. the operator's ability to interpret an image, is related to the grey-level volume V as

U = U0 exp(−V/Vc),   (8.54)

where U0 is the maximum image interpretability and Vc is the critical grey-level volume. It has been found empirically that the interpretability is related to the grey-level volume defined as

V = pa pr pg,   (8.55)
where pa, pr are the linear resolutions in azimuth and range, respectively, and pg is the grey-level resolution (in half-tones). The new image parameter, the grey-level resolution, can be expressed as the ratio of the level a signal exceeds in 90 per cent of cases to the level it exceeds in 10 per cent of cases for independent samples. This parameter can be found from the formula

pg ≈ (√N + 1.282)/(√N − 1.282).   (8.56)
Where e = 2.78, this result, however, is based on the information theory and additionally takes into account properties of photointerpreter’s visual analyser. It was discovered that according to the criterion of the maximum image information capacity N = 2 is optimal.
Phase errors and improvement of image quality 179 rg Approximation
Experiment
1
Figure 8.8
10
100 N
The grey-level (half-tone) resolution versus the number of incoherently integrated frames N
An important experimental finding was the critical volume Vc – for a single frame synthesised by the aperture (N = 1). For the majority of frames, the length per square resolution element in the case of a 37 per cent interpretability was found to be 9.14 m. Such objects were vegetation and urban areas, low-contrast regions, communication lines, city and country roads, etc. Exceptions were the boundaries of water bodies and vegetation covers showing a 37 per cent interpretability even at the lowest linear resolution in azimuth and range (13.72 m). Since the grey-level resolution at N = 1 (Fig. 8.8) is 22, it is easy to find the critical volume: Vc = pa pr pg ≈ 9.142 × 22 ∼ 1850.
(8.57)
With this, the final interpretability expression takes the form: U = 4 exp{−pa pr pg /1850}.
(8.58)
Note that the calculation of the critical volume used the linear resolution of 9.14 m. Figure 8.9 shows the interpretability plotted against the linear resolution pa = pr = p for different numbers of incoherent integrations. When analysing the plots in Fig. 8.9, one should bear in mind that both the measurements and the calculations were based on some a priori assumptions. For example, the half-tone scale was chosen on the assumption that a photograph had the maximum interpretability and that it had an infinite number of incoherent integrations (N = ∞) and the half-tone resolution pg – (Fig. 8.8). An image synthesised without incoherent integrations (N = 1) was thought to have the poorest half-tone resolution, but the resolution was to be finite (pg < ∞), since the image preserved some, though very low, interpretability. It was established experimentally that the poorest half-tone resolution was equal to 22 (Fig. 8.8). The interpretability was evaluated by three qualified and experienced interpreters of radar and optical images, using the four-level scale (from 0 to 4) mentioned above. The interpreters worked with prints of 20.32 cm × 25.40 cm in size. The resolution elements varied in shape from square to rectangular (with the side ratio of 1:10) and in the number of incoherent integrations varying from 1 to ∞. All the experiments
180 Radar imaging and holography U/Uo 0.8 0.6
1
3
N=∞
10
0.4 0.2
0
Figure 8.9
25
50
p, m
The dependence of the image interpretability on the resolution versus linear resolution pa = pr = p
were carried out using a quadratic detector because the detection was performed on a quadratic film. It can be demonstrated theoretically, however, that experimental data can also be useful in linear detection of image signals if the half-tone resolution is calculated by another approximate formula: √ √ (8.59) pgl ≈ ( N + 0.6175)/( N − 0.6175). The major result of this series of investigations [99] was the experimental support of the idea that image interpretability depended only on the half-tone volume resolution, or on the product of the azimuthal, range and half-tone resolutions. Therefore, this parameter varies with the area rather than the shape of a resolution element (square or rectangular). On the other hand, it depends on the resolution element area and the number of incoherent integrations. So one can make a compromise when choosing the resolution in azimuth pa , in range pr and in half-tones pg [99]. Identical interpretabilities can be achieved by using different combinations of these parameters. This conclusion proved to be quite unexpected and may play an important role in solving some applied problems when one has to choose between the complexity and the cost of aperture processing techniques. Indeed, if this conclusion is correct, it is worth making an effort to achieve a high image interpretability by improving low-cost resolutions. To illustrate, a higher range resolution and an incoherent integration in spaceborne SARs can be achieved in a simpler way than a higher azimuthal resolution. For example, one can fix the azimuthal resolution but improve the range resolution or increase the number of incoherent integrations. We shall give a good example to illustrate the effectiveness of resolution redistribution with reference to a side-looking synthetic aperture. In this type of aperture, the azimuthal resolution depends linearly on the number of incoherent integrations N : pa (N ) = λro N /2Lm = po N ,
(8.60)
Figure 8.10 The dependence of the half-tone resolution pg on the number of incoherent integrations N over the total real antenna pattern
where λ is the wavelength, ro is the oblique range, Lm is the maximum possible length of the aperture, and po = λro/(2Lm) is the best aperture resolution. If we now fix the range resolution, the minimum of the product po N pg will show the optimal combination of azimuthal resolution and incoherent integration (Fig. 8.10). This optimum is found to lie at N = 3; hence, pa = 3po. The integral criterion for image evaluation from the half-tone volume resolution is convenient and relatively simple. But when using it in practice, one should bear in mind that the available amount of statistical data is insufficient, so the estimations of image quality may be quite subjective.
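The trade-off just described can be scanned numerically. The sketch below substitutes the approximate formula (8.56) for pg, which places the minimum of the product N·pg(N) at N = 4 rather than the N = 3 obtained from the measured half-tone curve of Fig. 8.10, so it should be read only as an illustration of the redistribution idea:

```python
import math

def p_g(N):
    """Approximate half-tone resolution, Eq. (8.56)."""
    s = math.sqrt(N)
    return (s + 1.282) / (s - 1.282)

def volume_factor(N):
    """Azimuth/half-tone part of the grey-level volume, po*N*pg(N), up to po."""
    return N * p_g(N)

# Scan N = 2..10 for the minimum of the product (N = 1 is outside the
# validity region of Eq. (8.56))
best = min(range(2, 11), key=volume_factor)
print(best, volume_factor(best))
```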
8.5 Speckle noise and its suppression

Synthetic aperture radar remote sensing of the earth is becoming increasingly popular in many areas of human activity (Section 9.1). The analysis of images may be made in terms of a qualitative or a quantitative approach [2]. A qualitative analysis is largely made by conventional methods of visual interpretation of aerial photography, combined with the researcher's knowledge and experience. Although radar images have much in common with aerial photographs (Chapter 1), the physical mechanisms of their synthesis set limits on the applicability of interpretation methods elaborated for optical imagery. Additional difficulties arise from the presence of speckle noise.

A quantitative analysis is based on the measurement of target characteristics for various backgrounds and objects [2], followed by computerised processing of the video information. The latter is normally used to solve the following tasks. One often has to improve image quality and interpretation procedures at the pre-processing stage, which includes various corrections, noise reduction, contrast enhancement, highlighting of contours, etc. It may also be necessary to compress and code images to be transmitted through communication channels. Besides, one may have to identify some of the items on an image and classify the various elements present on it. This is usually done by image segmentation, cluster analysis and so on. Obviously, this kind of image subdivision is always somewhat arbitrary. Here we shall discuss methods of solving the first type of task, with emphasis on those techniques specific to radar imagery, such as speckle suppression. Some others, like geometrical and radiometrical correction, have already been dealt with in the literature [2,31]. Some of the image processing techniques are quite versatile and have also been discussed in detail [2].
8.5.1 Structure and statistical characteristics of speckle

There has been much effort to understand the image speckle structure. The available publications on this subject can be classified into two groups according to the specific problems being tackled. The more extensive group covers work on speckle as a noise, suggesting various ways of filtering it. The other group includes publications on the useful properties of speckle, in particular on the possibility of deriving from it information about the area of interest. Naturally, there are problems in each trend that remain poorly understood. A feature common to all the publications is the description of the statistical characteristics of speckle.

Let us consider the statistical characteristics of an echo signal in terms of a general reflection model, when a resolution element contains many echo signals from different point scatterers. The signals are random, independent and have about the same intensity. Then the total signal represents a Gaussian random quantity and its amplitude has a Rayleigh pattern. This kind of reflection model is often termed the Rayleigh model. When a synthetic aperture changes its position relative to a target, the intensity fluctuations of the total echo signal give rise to a characteristic speckle pattern on an image. Clearly, the intensity I of individual pixels will obey the exponential law of probability density distribution

pI(x) = (1/2σo²) exp(−x/2σo²)   (8.61)

with the mean value I¯ = 2σo² and the dispersion σI² = 4σo⁴, while the phase θ of the image pixels is equiprobable in the range from −π to +π.

Another reflection model is applied when a resolution element has one bright point together with other point scatterers, such that the total echo signal contains one dominant signal of much higher intensity along with many random independent signals of nearly the same lower intensity.
Then the amplitude of the total signal is described by the Rice distribution, or a generalised Rayleigh distribution. This kind of model is called the Rice reflection model. The distribution of the intensity probability density at single pixels is

pI(x) = (1/2σo²) exp[−(x + so)/2σo²] Io(√(x so)/σo²)   (8.62)
with the mean value I¯ = 2σo² + so and the dispersion σI² = 4σo⁴(1 + 2r), where so is the squared amplitude of the highest-intensity component of the signal, r = so/(2σo²), Io(·) is the modified zero-order Bessel function of the first kind, and the distribution of the phase probability density is

pθ(x) = (1/2π) exp(−a²/2) + [a cos x/√(2π)] exp(−a² sin² x/2) Φ(a cos x),   (8.63)

where a = √so/σo and

Φ(t) = (1/√(2π)) ∫₋∞^t exp(−τ²/2) dτ
is the Laplace function. Since the signal-to-noise ratio (the ratio of the mean intensity I¯ to the standard deviation σI) is equal to 1 and (1 + r)/√(1 + 2r) for the Rayleigh and Rice models, respectively, the intensity fluctuation amplitude in the speckle structure is commensurable with the useful signal intensity for a complex target at r ≈ 1. For this reason, images of such targets have a well-pronounced speckle structure. Since it is difficult to analyse an echo signal from a target with Rice reflection, most authors discuss targets with Rayleigh reflection. It is worth noting that the above expressions for the probability density distribution in the case of a uniform and isotropic background are valid both for an ensemble of images at each resolution element and for a single image over a multiplicity of resolution elements. For a non-uniform background, however, these expressions are valid only for an ensemble of image realisations. When N independent images of the same earth area are summed up, the probability density distribution of the speckle structure takes the form

pI(x) = x^(N−1) exp(−x/2σo²) / [(2σo²)^N Γ(N)]   (8.64)
with the mean value I¯ = 2Nσo² and the dispersion σI² = 4Nσo⁴, where Γ(·) is the gamma function, described as Γ(N) = (N − 1)! for integer N. In this case, the signal-to-noise ratio is √N. The probability density distribution in Eq. (8.64) corresponds to the gamma distribution with the parameters N and 1/(2σo²), or to the χ²-distribution with 2N degrees of freedom at σo² = 1. A general expression for the initial moments of distribution (8.64) has the form

MkN = [(N + k − 1)!/(N − 1)!](2σo²)^k,

where MkN is the kth initial moment.

Reference 2 presents the fluctuation spectrum of the speckle amplitude and its autocorrelation function. Suppose a point scatterer is described by the Dirac δ-function and F(k) is the transfer function of a synthetic aperture, where k = 2π/λ is the wave number and λ is the wavelength of the echo signal. Then the amplitude spectrum of the echo signal from a point scatterer located at a point with the coordinate x relative to the SAR carrier track is F′ = (1/2)F(k) exp(jxk). For L randomly arranged point scatterers, the signal received by the aperture is defined as

F′(k) = (1/2π) F(k) Σ_{l=1}^{L} exp(j x_l k).
At L → ∞, the speckle power density spectrum can be determined within the accuracy of a constant factor: S(k) = |F ′ (k)|2 = |F(k)|2 . In other words, it is unambiguously dependent on the aperture transfer function. The autocorrelation function of speckle is related to its spectral power density by a Fourier transform. Therefore, the speckle autocorrelation function can be used to find the aperture impulse response directly. The statistical characteristics of speckle for a background represented as an array of randomly moving point scatterers are considered in Reference 2. It is shown that the concept of spatial resolution has no sense if the phase fluctuations of signals from the point scatterers are large (the phase changes by 2π several times during the synthesis).
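The Rayleigh-model statistics quoted above (a unit signal-to-noise ratio for a single look and a √N improvement after N-look averaging) can be checked with a small Monte-Carlo sketch; the sample sizes and seed are arbitrary:

```python
import random
import statistics

random.seed(1)
SIGMA_O = 1.0
M = 200_000

def one_look():
    """Intensity of one Rayleigh-model pixel: squared modulus of a
    zero-mean complex Gaussian echo (many independent scatterers)."""
    x = random.gauss(0.0, SIGMA_O)
    y = random.gauss(0.0, SIGMA_O)
    return x * x + y * y

def snr(intensities):
    """Mean-to-standard-deviation ratio of the intensity samples."""
    return statistics.fmean(intensities) / statistics.pstdev(intensities)

single = [one_look() for _ in range(M)]
four_look = [sum(one_look() for _ in range(4)) / 4.0 for _ in range(M // 4)]

# Single-look SNR is close to 1; 4-look averaging raises it to about 2
print(round(snr(single), 2), round(snr(four_look), 2))
```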
8.5.2 Speckle suppression

The available methods of suppression or smoothing out of image speckle can be subdivided into two groups. Some methods are based on the averaging of several independent images of the same background. This group is not large, but these methods have been used extensively owing to their relative simplicity. The other group of methods is much larger and includes so-called a posteriori procedures, in which speckle is suppressed by spatial filtering.

Independent images of the same earth area can be obtained in different ways based on a common principle of image segmentation with respect to a particular parameter, for example, the Doppler frequency, the carrier frequency or polarisation (i.e. sensing a background at different polarisations of the probing radiation). The first technique is known as multibeam processing, and it is most commonly used in practice [99]. A specific feature of multibeam processing is a proportional decrease of the aperture sharpness in track range when the Doppler frequency band is subdivided into N identical non-overlapping subbands. The specificity of speckle suppression procedures is that the signal-to-noise ratio increases by a factor of √N if N independent images are averaged. The methods of the first group can use other procedures, for example, median filtering [2], in addition to the averaging of N independent images.

The wide application of a posteriori techniques is primarily due to the rapid development of image processing technology. The lack of an adequate model of the speckle structure and useful signal makes it difficult to design effective algorithms for speckle suppression. Until recently, nearly all researchers working on speckle problems have regarded speckle as a multiplicative noise on a useful signal. However, there are more complex models. The authors consider the possibility of employing Wiener and Kalman filtering algorithms, homomorphic processing and various heuristic techniques to suppress speckle. However, the lack of objective criteria for the evaluation of image quality by visual perception creates additional difficulties. For this reason, nearly all the researchers cited below compare the processing results with expert assessment, which makes a comparative analysis of the suggested algorithms quite problematic.

The first attempts to suppress speckle by a posteriori techniques used the Wiener filtering algorithm, which varies with the signal [2]. The workers analysed an additive, signal-modelled noise approach and a multiplicative noise model. In the former, a distorted image is described by the expression

z(x, y) = s(x, y) ∗ h(x, y) + f [s(x, y) ∗ h(x, y)]n(x, y),   (8.65)
where $h(x, y)$ is the spatial impulse response, $f$ is, in general, a non-linear function and $n(x, y)$ is noise independent of the signal $s(x, y)$. By introducing the designations $s'(x, y) = f[s(x, y) * h(x, y)]$ and $n'(x, y) = s'(x, y)\,n(x, y)$, we transform Eq. (8.65) to $z(x, y) = s(x, y) * h(x, y) + n'(x, y)$. In the second noise model, an image is described as

$$z(x, y) = n(x, y)[s(x, y) * h(x, y)], \tag{8.66}$$
where $n(x, y)$ is signal-independent multiplicative noise. The Wiener filter has the transfer function $M(\mu, \nu) = \Phi_{zs}(\mu, \nu)/\Phi_{zz}(\mu, \nu)$ and minimises the standard deviation of the filtering error, provided that $z(x, y)$ and $s(x, y)$ are wideband, spatially uniform random fields; here $\Phi_{zs}$ and $\Phi_{zz}$ are the respective power density spectra. With Eq. (8.65), the first noise model gives the following transfer function of the Wiener filter:

$$M_1(\mu, \nu) = \frac{\Phi_{ss}(\mu, \nu)\,H^*(\mu, \nu)}{\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2 + \Phi_{s's'}(\mu, \nu) * \Phi_{nn}(\mu, \nu)} \tag{8.67}$$

on the assumption that $\bar{n}(x, y) = 0$. Here $n(x, y)$ is statistically independent of $s(x, y)$, $s'(x, y)$ is a uniform wideband field, and $H(\mu, \nu) = F[h(x, y)]$ is the system transfer function. At $f[s(x, y) * h(x, y)] = s(x, y) * h(x, y)$ we have $\Phi_{s's'}(\mu, \nu) = \Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2$, and Eq. (8.67) can be rewritten as

$$M_1(\mu, \nu) = \frac{\Phi_{ss}(\mu, \nu)\,H^*(\mu, \nu)}{\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2 + [\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2] * \Phi_{nn}(\mu, \nu)}. \tag{8.68}$$
If the noise is uniform, wideband and signal-independent, the transfer function of the Wiener filter in the second model will be

$$M_2(\mu, \nu) = \frac{\bar{n}\,\Phi_{ss}(\mu, \nu)\,H^*(\mu, \nu)}{\Phi_{nn}(\mu, \nu) * [\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2]}. \tag{8.69}$$
It is clear from Eq. (8.69) that at $\bar{n} = 0$ the filter transfer function is $M_2(\mu, \nu) = 0$. Suppose we introduce $n_1(x, y) = n(x, y) - \bar{n}$; then

$$M_2(\mu, \nu) = \frac{\Phi_{ss}(\mu, \nu)\,H^*(\mu, \nu)/\bar{n}}{\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2 + (1/\bar{n}^2)\,\Phi_{n_1 n_1}(\mu, \nu) * [\Phi_{ss}(\mu, \nu)|H(\mu, \nu)|^2]}. \tag{8.70}$$
Obviously, at $\bar{n} = 1$ the filters with the transfer functions (8.68) and (8.70) are equivalent. Modelling has shown that a Wiener filter for signal-dependent noise with the characteristics $M_1$ and $M_2$ performs better than one designed for additive, signal-independent noise. The essential limitations of the former, however, are the need for a large amount of a priori information about the signal and the noise, as well as extensive computations. Kalman filtering algorithms [2] suffer from similar disadvantages.

The possibility of homomorphic image processing is discussed in Reference 2. A homomorphic processing is any transformation of the observed quantities that converts the signal fluctuations into additive, signal-independent noise. Within the multiplicative speckle model, Eq. (8.64) yields

$$p(I) = \frac{N^N I^{N-1}}{\Gamma(N)\,\bar{I}^N}\,\exp\!\left(-\frac{NI}{\bar{I}}\right) \tag{8.71}$$

with $\sigma_I^2 = \bar{I}^2/N$. The homomorphic transformation then reduces to taking the logarithm. The distribution density of the quantity $D = \ln I$ is described as

$$p(D) = \frac{N^N}{\Gamma(N)}\,\exp[-N(D - D_0)]\,\exp\{-N \exp[-(D - D_0)]\} \tag{8.72}$$
with $D_0 = \ln \bar{I}$. In practice, the distribution of signal-dependent noise is often approximated by a normal distribution with a signal-dependent variance. At any value of $N$, the normal approximation is more accurate for the distribution (8.72) than for the distribution (8.71). The variable $D$ can then be processed by any algorithm available for the model of additive, signal-independent noise. It is pointed out in Reference 2 that applying the Wiener filtering algorithm after a preliminary homomorphic transformation of an image provides better results than a separate application of either algorithm. The authors of Reference 2 believe that a homomorphic transformation is a reasonable alternative to image processing in signal-dependent noise. On the other hand, experience indicates that it gives no essential advantage over the heuristic methods discussed below. Moreover, the need for both the direct and the inverse transformation increases the computational cost considerably.

There is another way of suppressing speckle noise – the local statistics technique [2]. Within the multiplicative speckle model, every element $z_{ij}$ of an image is represented as the product of the signal $s_{ij}$ and the noise $n_{ij}$. The noise has $\bar{n} = 1$ and variance $\sigma_n^2$. On the assumption that the signal and the noise are independent, the authors have derived the expressions $\bar{z} = \bar{s}\bar{n} = \bar{s}$
and

$$\sigma_z^2 = M[(sn - \bar{s}\bar{n})^2] = M[s^2]M[n^2] - \bar{s}^2\bar{n}^2.$$

If the signal intensity averaged over the processing window is constant, then $M[s^2] = \bar{s}^2$ and

$$\sigma_z^2 = \bar{s}^2(M[n^2] - \bar{n}^2) = \bar{s}^2\sigma_n^2, \quad \text{or} \quad \sigma_n = \sigma_z/\bar{z}.$$

This model is consistent with data obtained from the analysis of uniform surface imagery. The standard deviation $\sigma_n$ is found to be about 0.28, which is due to the multibeam processing and the other algorithms used to improve images synthesised by the SEASAT-A SAR. Using the local statistics technique for a selected window (usually 5 × 5 or 7 × 7 resolution elements), one can find the moving local average $\bar{z}$ and the variance $\sigma_z^2$. Then one gets

$$\bar{s} = \bar{z}/\bar{n}, \qquad \sigma_s^2 = \frac{\sigma_z^2 + \bar{z}^2}{\sigma_n^2 + \bar{n}^2} - \bar{s}^2. \tag{8.73}$$
The expansion of $z$ into a Taylor series, retaining first-order terms only, yields

$$z = \bar{n}s + \bar{s}(n - \bar{n}). \tag{8.74}$$
According to Eqs (8.73) and (8.74), minimisation of the mean square error of speckle suppression leads to the following formula for the estimate $\hat{s}$:

$$\hat{s} = \bar{s} + k(z - \bar{n}\bar{s}) \tag{8.75}$$

with

$$k = \frac{\bar{n}\,\sigma_s^2}{\bar{s}^2\sigma_n^2 + \bar{n}^2\sigma_s^2}.$$

Then at $\bar{n} = 1$, one gets

$$\hat{s} = \bar{s} + k(z - \bar{s}), \qquad k = \frac{\sigma_s^2}{\bar{s}^2\sigma_n^2 + \sigma_s^2}. \tag{8.76}$$
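Equations (8.73) and (8.76) can be turned into a working filter in a few lines. The sketch below is a minimal illustration of the local-statistics estimator, not the authors' implementation; the function name, the border handling and the clamping of the variance estimate at zero are choices made here for brevity:

```python
def local_stats_filter(img, win=7, sigma_n=0.28):
    """Local-statistics (Lee-type) speckle filter of Eqs (8.73) and (8.76).

    img     -- 2-D list of intensity values
    win     -- odd window size (the text uses 5x5 or 7x7 windows)
    sigma_n -- noise standard deviation (about 0.28 for SEASAT-A imagery)
    """
    h, w = len(img), len(img[0])
    m = win // 2
    var_n = sigma_n ** 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # moving window, clipped at the image border
            vals = [img[k][l]
                    for k in range(max(0, i - m), min(h, i + m + 1))
                    for l in range(max(0, j - m), min(w, j + m + 1))]
            n = len(vals)
            z_bar = sum(vals) / n                        # local average
            var_z = sum((v - z_bar) ** 2 for v in vals) / n
            # signal variance, Eq. (8.73) with n_bar = 1, clamped at zero
            var_s = max(0.0, (var_z + z_bar ** 2) / (var_n + 1.0) - z_bar ** 2)
            # gain k, Eq. (8.76): near 0 on uniform areas (output -> local
            # mean), near 1 where signal variance dominates (contours kept)
            denom = z_bar ** 2 * var_n + var_s
            k_gain = var_s / denom if denom > 0.0 else 0.0
            out[i][j] = z_bar + k_gain * (img[i][j] - z_bar)
    return out
```

On a uniform surface the estimated $\sigma_s^2$ is close to zero, so the output approaches the local mean; near a strong contrast $\sigma_s^2$ dominates and the pixel passes through largely unchanged, which is exactly the adaptive behaviour described in the text.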
The heuristic algorithm derived from the local statistics approach is especially effective for speckle suppression on images of uniform and isotropic surfaces, and it does not remove the contours of extended targets. This algorithm has provided good results in processing imagery from the SEASAT-A SAR. Its major advantages are simplicity and the adaptive properties associated with the computation of the local statistics. It has, however, a serious limitation: it cannot predict the error behaviour during speckle suppression. Besides, the need to compute the local average and, especially, the variance in a typical 7 × 7 window considerably reduces the algorithm's efficiency.

In order to decrease the computational costs inherent in local statistics algorithms, some workers have suggested using a sigma-filter. For a moving window of $(2m_1 + 1) \times (2m_2 + 1)$ in size ($m_1$ and $m_2$ are integers) with the central resolution element $z_{ij}$, the signal estimate $\hat{s}_{ij}$ is found from the formula

$$\hat{s}_{ij} = \sum_{k=i-m_1}^{i+m_1} \sum_{l=j-m_2}^{j+m_2} \delta_{kl} z_{kl} \Bigg/ \sum_{k=i-m_1}^{i+m_1} \sum_{l=j-m_2}^{j+m_2} \delta_{kl}, \tag{8.77}$$

where

$$\delta_{kl} = \begin{cases} 1, & (1 - 2\sigma_n)z_{ij} \le z_{kl} \le (1 + 2\sigma_n)z_{ij}, \\ 0, & \text{otherwise}. \end{cases}$$

It is clear that a filter with the characteristic (8.77) is more cost-effective than one with (8.76). An 11 × 11 window was used in Reference 2 to estimate $\sigma_n$. It was found that two passes of a sigma-filter were sufficient to obtain satisfactory suppression of speckle noise without smearing the contours; when the number of passes was increased to four or more, the image was damaged.

The following modification of the sigma-filter was discussed in Reference 2 for filtering impulse noise together with speckle suppression. One chooses a threshold B. If the number of elements retained in accordance with Eq. (8.77) is smaller than or equal to the threshold B, the average of the four neighbouring elements is assigned to the central position of the moving window. The choice of the threshold is critical because it affects the contours. It is pointed out in that work that the threshold value should be less than 4 for a 7 × 7 window and less than 3 for a 5 × 5 window. The use of a sigma-filter with an 11 × 11 window followed by another sigma-filter with a 3 × 3 window at the threshold B = 1 proved to be most effective; the small window allows suppression of impulse noise in the vicinity of sharp contours. Other filter modifications are also possible.

This type of filter was compared with a filter with the characteristic (8.76), with a median filter and with an averaging filter. It was concluded from expert assessment that the sigma-filter provides better results. Its disadvantage is that one cannot estimate a priori the behaviour of the speckle suppression error. Important merits of this type of filter are its simplicity, high computational efficiency and adaptive properties. These characteristics make the filter suitable for digital image processing in a real-time mode. The local statistics method can also be implemented with a linear filter minimising the mean square error of the filtering.
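The sigma-filter of Eq. (8.77) is equally easy to sketch. This is a minimal illustration rather than the implementation discussed in Reference 2; the border handling, the fallback for an empty selection and the function name are assumptions made here:

```python
def sigma_filter(img, m1=1, m2=1, sigma_n=0.28):
    """Sigma-filter of Eq. (8.77): the estimate for the central element
    z_ij of a (2*m1+1) x (2*m2+1) moving window is the average of those
    window elements z_kl lying within two noise standard deviations of
    the centre, i.e. (1 - 2*sigma_n)*z_ij <= z_kl <= (1 + 2*sigma_n)*z_ij.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            z_ij = img[i][j]
            lo = (1.0 - 2.0 * sigma_n) * z_ij
            hi = (1.0 + 2.0 * sigma_n) * z_ij
            acc, cnt = 0.0, 0
            for k in range(max(0, i - m1), min(h, i + m1 + 1)):
                for l in range(max(0, j - m2), min(w, j + m2 + 1)):
                    if lo <= img[k][l] <= hi:      # delta_kl = 1
                        acc += img[k][l]
                        cnt += 1
            # for non-negative intensities the centre always qualifies;
            # keep the centre pixel if, exceptionally, nothing does
            out[i][j] = acc / cnt if cnt else z_ij
    return out
```

Two passes, as recommended above, would be `sigma_filter(sigma_filter(img))`; because only intensities close to the central value are averaged, sharp contours survive while speckle within uniform areas is smoothed.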
In addition to the algorithms described above, there is a large number of heuristic algorithms for speckle suppression. Among these are algorithms for median filtering, averaging over a moving window with various weighting functions, algorithms for a non-linear transformation of the initial image, the reduction of an image histogram to a symmetric form, and so on. Most heuristic algorithms are simple to use and have a fairly high computational efficiency, but all of them share a serious drawback – they practically ignore the specifics of the SAR imaging process: while suppressing noise, they partly suppress the useful signal. It is usually hard to estimate the speckle suppression error when using such algorithms.

To conclude, image processing covers a wide range of tasks and problems, many of which have not been dealt with in this chapter. Among these are processing based on the properties of the human visual system, criteria for image quality and image optimisation, quantitative evaluation of the information contained in an image, and so on. Owing to the rapid development of cybernetics, information theory, iconics and computer science, these areas of investigation are constantly testing new approaches. For example, researchers have applied concepts of artificial intelligence to the processing of earth remote sensing data, used radar imagery as a database for visual interpretation, and fused images obtained in different wavelength ranges. The results of such studies can provide more information about the earth and other planets.
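The $\sqrt{N}$ signal-to-noise improvement from averaging N independent looks, quoted at the start of Section 8.5.2, is easy to check numerically. The sketch below is not from the book; it assumes the standard model of fully developed speckle, in which single-look intensity is exponentially distributed (so the single-look SNR is close to 1):

```python
import random
import statistics

def multilook(images):
    """Average N independent single-look intensity images of the same
    scene (the first group of speckle-suppression methods in Section 8.5.2)."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[i][j] for img in images) / n for j in range(w)]
            for i in range(h)]

def snr(image):
    """Signal-to-noise ratio of a uniform scene: mean over standard deviation."""
    flat = [v for row in image for v in row]
    return statistics.mean(flat) / statistics.pstdev(flat)

if __name__ == "__main__":
    random.seed(1)
    # Sixteen independent looks of a uniform scene with exponentially
    # distributed single-look intensity (fully developed speckle).
    looks = [[[random.expovariate(1 / 10) for _ in range(40)]
              for _ in range(40)] for _ in range(16)]
    print(round(snr(looks[0]), 2))          # close to 1 for a single look
    print(round(snr(multilook(looks)), 2))  # close to sqrt(16) = 4
```

The ratio of the two printed values is close to $\sqrt{16} = 4$, at the price of the resolution loss discussed above.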
Chapter 9
Radar imaging application
9.1 The earth remote sensing¹

9.1.1 Satellite SARs

Synthetic aperture radar imagery from satellites and aircraft has a high spatial resolution and is independent of daylight and cloud cover. Nearly real-time information and comprehensive SAR image analysis are of importance not only for scientific studies but also for practical purposes, providing information for companies dealing with off-shore oil and gas exploration, deep-ocean mining, fishing, marine transportation, weather forecasting, etc. [65].

In 1972 the NASA Office of Applications initiated the Earth and Oceans Dynamics Applications Program for the development of techniques for global monitoring of oceanographic phenomena and the design of an operational ocean dynamics monitoring system. Satellite SAR studies of the earth environment began in 1978, when the first series of images was obtained by the SEASAT during its 3-month operation. This L-band, horizontally polarised radar operated at a wavelength of 23 cm at an incidence angle of 20°. It was primarily designed for ocean wave imaging, although SAR imagery was also acquired over ice and terrestrial surfaces. It demonstrated the potential of satellite radar data in scientific and operational applications. The SEASAT data supported the notion that wind and wave conditions over the ocean could be measured from a satellite with an accuracy comparable to that achieved from surface platforms [5].

Various SAR instruments operating at different wavelengths, polarisations and incidence angles were mounted on board the Space Shuttles (Table 9.1). In November 1981 and October 1984, the SIR-A and SIR-B radars, which used the SEASAT technology
¹ Sections 9.1.1 and 9.1.2 were written by V. Y. Alexandrov, O. M. Johannessen and S. Sandven, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia, and Nansen Environmental and Remote Sensing Centre, Bergen, Norway. Section 9.1.3 was written by D. B. Akimov, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia.
Table 9.1  Technical parameters of SARs borne by the SEASAT and Shuttle

Parameter                       SEASAT SAR   SIR-A     SIR-B    SIR-C/X          X-SAR
Orbit inclination (°)           108          38        57       57               57
Altitude (km)                   800          260       225      225              225
Incidence angle (°)             20–26        47–53     15–60    20–55            20–55
Frequency (GHz)                 1.28         1.28      1.28     1.25 and 5.3     9.6
Polarisation                    HH           HH        HH       HH, VV, VH, HV   VV
Swath width (km)                100          50        30–60    15–90            15–45
Pixel size for four looks (m)   25 × 25      40 × 40   25       25               30 × (10–20)

Table 9.2  Parameters of the Almaz-1 SAR

Parameter                                Value
Satellite altitude (km)                  270–380
Orbit inclination (°)                    72.7
Wavelength (cm)                          9.6
Polarisation                             HH
Radiometric resolution, one look (dB)    2–3
Swath width (km)                         40
Spatial resolution, one look (m)         10–15
with the 23 cm wavelength and HH (Horizontal–Horizontal) polarisation, provided data targeted at land applications [77]. The SIR-C mission, using a two-frequency multipolarisation SAR with a variable incidence angle together with the X-band VV (Vertical–Vertical) SAR, operated in three flights during the period 1994–1996. The SIR-C was of interest to ocean remote sensing, and its data were used to extend the understanding of radar backscatter from the ocean and of SAR imaging of oceanographic processes [117].

The first USSR SAR mission started in July 1987 with the launch of the Cosmos-1870 satellite equipped with an S-band SAR. Its operation ended in July 1989, and it was followed by the Almaz-1 satellite, which operated from May 1991 until October 1992 (Table 9.2). The raw data of 300 km long and 40 km wide strips with a 10–15 m spatial resolution (one look) could be stored aboard and transmitted to a receiving ground station near Moscow as analogue radio holograms, with SAR images presented as photographic hard copies. Applications of the SAR data included studies of various ocean phenomena and sea ice [36].
Table 9.3  The parameters of the ERS-1/2 satellites

Parameter                             Value
Satellite altitude (km)               785
Orbit inclination (°)                 98.52
Wavelength (cm)                       5.66
Polarisation                          VV
Angle of incidence (°)                20–26
Swath width (km)                      100
Spatial resolution, three looks (m)   26 × 30
The first European Space Agency satellite, ERS-1, with a C-band SAR aboard, operated successfully from its launch in July 1991 until 1996 and provided a large amount of global, repeated observations of the environment, with a focus on ocean studies and sea ice monitoring [62,64]. In the high-resolution imaging mode, the ERS-1 SAR provides three-look, noise-reduced images with a spatial resolution of 26 m in range (across-track) and 30 m in azimuth (along-track) (Table 9.3). Because of the absence of onboard data storage, a network of ground receiving stations was needed to enable wide coverage by SAR images. ERS-2, the second satellite of this series, was launched in April 1995, and from mid-August 1995 both satellites operated in a tandem mode, in which ERS-2 imaged the same area as ERS-1 one day later.

The RADARSAT, launched by the Canadian Space Agency in November 1995, was the first SAR satellite with a clear operational objective: to deliver data on various earth objects. Using onboard data storage, it provides a much wider coverage than the ERS SAR [77], and processed SAR data can be delivered to users within several hours of acquisition. The RADARSAT operates in the C-band at HH polarisation, and in several imaging modes with different combinations of swath width and resolution (Table 9.4). One of its main applications is sea ice monitoring [42].

The advanced SAR (ASAR) onboard the European Space Agency ENVISAT satellite has been providing image acquisition since 2002 [43]. While its major parameters are similar to those of the RADARSAT, the ASAR can also operate in multipolarisation modes using two out of five polarisation combinations: VV, HH, VV/HH, HV/HH and VH/VV. The five major modes are: global monitoring, wide swath, image, alternating polarisation and wave modes (Table 9.5).
In the image and alternating polarisation modes the ASAR gives high-resolution data (30 m, three looks) in a relatively narrow swath (60–100 km), which can be located at different distances from the subsatellite track at incidence angles from 15° to 45°. The alternating polarisation mode provides two versions of the same scene, at HH, VV and/or cross-polarisation. The wide swath mode provides a 420 km swath with a spatial resolution of 150 m at 12 looks. In the global monitoring mode, the ASAR continuously gives a 420 km swath with a spatial resolution of 1000 m at 8 looks.
Table 9.4  SAR imaging modes of the RADARSAT satellite (RADARSAT-1 modes with selective polarisation: transmit H or V; receive H or V or (H and V))

Beam mode               Nominal swath   Incidence angles to left   Number     Spatial resolution
                        width (km)      or right side (°)          of looks   (approx.) (m)
Standard                100             20–50                      1 × 4      25 × 28
Wide                    150             20–45                      1 × 4      25 × 28
Small incidence angle   170             10–20                      1 × 4      40 × 28
High incidence angle    70              50–60                      1 × 4      20 × 28
Fine                    50              37–48                      1 × 1      10 × 9
ScanSAR wide            500             20–50                      4 × 2      100 × 100
ScanSAR narrow          300             20–46                      2 × 2      50 × 50

Table 9.5  The ENVISAT ASAR operation modes

Operation mode         Image mode    Alternating/        Wide swath    Global        Wave mode
parameter                            cross-polarisation  mode          monitoring
Polarisation           VV or HH      VV/HH, HH/HV        VV or HH      VV or HH      VV or HH
                                     or VV/VH
Spatial resolution     28 × 28       29 × 30             150 × 150     950 × 980     28 × 30
(along-track and
across-track) (m)
Radiometric            1.5           2.5                 1.5–1.7       1.4           1.5
resolution (dB)
Swath width (km)       Up to 100     Up to 100           400 (five     ≥400 (five    5 km vignette
                       (seven        (seven              subswaths)    subswaths)
                       subswaths)    subswaths)
Incidence angle (°)    15–45         15–45               15–45         15–45         15–45
At present, SAR data from the ERS, RADARSAT and ENVISAT satellites are widely used in earth observation and in the monitoring of various natural objects and phenomena. With its fine-scale resolution, a SAR is capable of observing a number of unique oceanic phenomena [117]. These include wind and waves [46,75], ocean circulation [63], internal waves [33], oil spills [40,41], shallow sea bathymetry [6], etc. Imaging radars are also used in a number of land applications, such as the study of soil moisture [84], forestry [97] and the study and monitoring of urban areas [135]. The use of satellite SAR data for monitoring the Arctic sea ice is briefly described below.
9.1.2 SAR sea ice monitoring in the Arctic

9.1.2.1 The use of satellite SAR for sea ice monitoring

The use of visible-range images for sea ice monitoring in the Arctic is limited by the lack of daylight in winter, while cloud cover precludes sea ice observations in the visible and infrared ranges during approximately 80 per cent of the time in summer [18,37,123]. Therefore, the development of remote radar sensing is essential for the polar regions.

The first satellite SAR images were acquired by the SEASAT satellite, which made over 100 passes over the Beaufort Sea on a nearly daily basis for the analysis of sea ice motion and changes in the ice distribution. The SIR-B SAR gave data on the Antarctic sea ice margin for October 1984 [45]. Several SAR surveys were made over the Antarctic and the Arctic with the Cosmos-1870 and Almaz-1 SARs, in spite of the fact that the satellite orbits precluded coverage of the high-latitude northern and southern regions. The Almaz-1 SAR data were used to support an emergency operation in the Antarctic, when the research vessel Mikhail Somov got stuck in the ice. During this operation, it was possible to detect icebergs and estimate their size, as well as to derive several sea ice parameters, such as the ice extent, the boundaries of stable and unstable fast ice, the ice types (nilas, young and first-year ice), prevailing ice forms, ridges and areas of strongly deformed ice [3].

The SAR images obtained from ERS-1/2 were used in a number of sea ice studies in the Arctic, the Antarctic and the ice-covered seas in different parts of the World Ocean [48,68,76,93,120]. The ERS-1 SAR proved to be a very powerful instrument for sea ice observations. Although the ERS satellites were not designed for operational service, their data were applied in sea ice monitoring in the United States, Canada, Finland and several other countries [18,27].
With the launch of the Canadian RADARSAT in 1995, the first satellite with operational ice monitoring as a prime objective, ice monitoring in the United States, Canada, Greenland, Norway, Finland, Sweden and some other countries entered a new era. The ScanSAR mode, with a 450 km wide swath and a 100 m resolution at 8 looks, allows daily mapping of the whole polar region north of 70°N, and it is used for operational ice services in the Canadian Arctic, the Greenland Sea, the Baltic Sea and other ice-covered areas [18,48,111]. With a systematic acquisition of ScanSAR images over large Arctic sea ice areas and the use of the RADARSAT geophysical processor, it has been possible to estimate sea ice motion, deformation and thickness from sequential imagery for several years from 1996 [79].

Within 6 h, the US National Ice Center routinely receives ScanSAR images from the Alaska SAR Facility and the Gatineau and Tromsø satellite stations, which together provide almost total Arctic coverage [18]. The sea ice analysis is made by integrating all available remote sensing and in situ data, using SUN SPARC and Ultra workstations and a satellite image processing system. The RADARSAT has improved the Ice Patrol's reconnaissance
efficiency, although radar iceberg identification remains problematic even with modern techniques. The RADARSAT ScanSAR wide data provide daily coverage of the Canadian Arctic, and higher-resolution modes are used for sea ice monitoring near the ports, along several selected routes and in the rivers. SAR images are synthesised at the Prince Albert and Gatineau receiving stations and are transmitted to the Ice Centre within 2.5 h, to be processed and then transmitted to the icebreakers of the Canadian Coast Guard and the department of ice operations for visualisation and analysis. Sea ice monitoring is the most successful online application of the RADARSAT data in Canada; it provides the best combination of geographic coverage and resolution, saving about 6 million dollars annually compared with airborne radar surveys [38]. From February 1996 until the end of 2003, the Canadian Ice Service (CIS) used approximately 25,000 scenes for this purpose [42]. During 2003, a special service carried out iceberg detection and monitoring from satellite SAR imagery, and the International Ice Patrol was the user of this information [42]. Now the RADARSAT ScanSAR imagery is the main data source for sea ice mapping in the Greenland waters. Wind conditions may be an important limitation to the operational use of radar satellite imagery in this area. Small (