Non-Line-of-Sight Radar
For a complete listing of titles in the Artech House Radar Series turn to the back of this book.
Non-Line-of-Sight Radar

Brian C. Watson
Joseph R. Guerci
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Cover design by John Gomes

ISBN 13: 978-1-63081-531-8

© 2019 ARTECH HOUSE
685 Canton Street
Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark. 10 9 8 7 6 5 4 3 2 1
Contents

Acknowledgments 9

1 Introduction 11
1.1 Background 11
1.2 NLOS Radar Overview 14
1.3 Chapter Outline 16
References 18

2 Review of Ground Surveillance Sensors 19
2.1 MTI Radar 19
2.2 Kalman Filter 22
2.2.1 Linear Kalman Filter 22
2.2.2 Extended Kalman Filter 28
2.3 Multiple Hypothesis Tracking 31
2.4 Bayesian Particle Filters 40
2.5 Track-Before-Detect 45
References 48

3 Exploitation of Multipath Physics in Detection and Tracking 51
3.1 Introduction 51
3.2 Review of EM Propagation and Multipath 55
3.2.1 Diffraction Assumption 55
3.2.2 Ray Tracing 57
3.2.3 Modeling Antenna Directivity and Gain 60
3.2.4 Modeling Atmospheric Loss 63
3.2.5 Treatment of Diffuse Scattering 64
3.2.6 Modeling Surface Reflection Polarization Dependence 65
3.2.7 Range-Doppler Processing 70
3.2.8 MTI Signal Processing 73
3.3 Geometric Analysis: A Basic EM Simulation Capability 75
3.4 Full Electromagnetic Simulations Incorporating Building Geometries 82
3.5 Radar System Analysis for Multipath Exploitation 84
3.6 Radar Track Simulations of the City Model with MER 86
3.7 Tower-Mounted Radar 88
3.7.1 Track 1 (Urban) Simulation of Tower-Mounted Radar 89
3.7.2 Track 2 (Street) Simulation of Tower-Mounted Radar 91
3.7.3 Track 3 (Interstate) of Tower-Mounted Radar 92
3.7.4 Tower Simulation Summary 93
3.8 Simulations of Phoenix/Tempe and Comparison with Experiment 94
3.9 NLOS Tracking 101
3.9.1 MER Data Collection Description 102
3.9.2 Validation of Multipath Signatures 104
3.9.3 KA-MAP NLOS Tracker 107
3.9.4 NLOS Tracker Results 110
3.10 Summary 113
References 114

4 Terrain Databases 117
4.1 Terrain Databases 117
4.1.1 DTED 118
4.1.2 Land Cover 119
4.1.3 Scattering Power versus Incident Angle 121
4.2 Urban Databases 123
4.2.1 Extracting Building Geometries from 2-D Databases 123
4.2.2 Automatic Corner Detection Using Image Processing Algorithms 125
4.2.3 Measuring Building Geometries from LiDAR 132
4.2.4 Existing 3-D Urban Models 135
References 138

5 High-Fidelity Modeling and Simulation 141
5.1 Geometric Optics Simulations of Terrain 142
5.1.1 Terrain Scattering LOS Calculation Algorithm 145
5.1.2 Terrain Scattering LOS Calculation Example 146
5.2 Diffraction 153
5.2.1 KED 153
5.2.2 UTD 155
5.3 SEKE 158
5.4 Radar Simulations Using Elliptical Conic Sections 163
5.5 Commercial Software 174
References 177

6 Computing Hardware Acceleration Strategies 179
6.1 GPU Computational Model 181
6.2 GPU Programming 183
6.3 FPGA 184
6.4 Ray-Tracing Line-of-Sight Algorithms 188
6.4.1 Shapes and Ray Intersections 188
6.4.2 Intersection Acceleration Algorithms 192
6.5 GPU Ray-Tracing Example 196
6.6 FPGA Ray Tracing 200
References 202

About the Authors 205

Index 207
Acknowledgments

Brian Watson thanks his wife and children for their help and encouragement. He is also grateful to Margaret Kumzi and her family. Their support made all the difference in the world.
CHAPTER 1
Introduction

Contents
1.1 Background
1.2 NLOS Radar Overview
1.3 Chapter Outline
1.1 Background

As the well-worn saying goes, "Necessity is the mother of invention." This is certainly the case for non-line-of-sight (NLOS) radar. At the turn of the century, military intelligence, surveillance, and reconnaissance (ISR) radar systems were increasingly operating in so-called "urban terrain" (i.e., towns and cities). This presented a fundamental and—at the time—seemingly insurmountable problem: Traditional moving target indicator (MTI) radars required direct line of sight (DLOS) to operate. When operating at reasonable standoff ranges and altitudes, it is a simple fact that most of the time surface targets (e.g., vehicles) are obscured by buildings. It would thus seem that MTI radars were rendered obsolete, virtually overnight.

Fortunately, President Dwight D. Eisenhower planted the seeds of a solution nearly 60 years ago! In 1957, the Soviet Union launched Sputnik. The panic it caused was well documented on the front pages of every major newspaper. When President Eisenhower asked his senior
leadership in the Department of Defense (DoD) how it could be that the Russians had beaten us into space, the answer essentially was "we did not have a requirement to launch a satellite." In those days, all of the focus in the United States was on the development of intercontinental ballistic missiles (ICBMs) and supporting technologies—not satellites. In fact, we were ahead of the Soviets in those areas. There was no vetted requirement for a satellite, and the military runs on requirements. Eisenhower, being the master logistician of World War II fame, immediately recognized the need for an entity that would prevent this kind of technical surprise in the future. Thus, the Defense Advanced Research Projects Agency (DARPA) was born [1].

Now, we fast-forward to the year 2000. The emergence of advanced high-performance embedded computing (HPEC), radio-frequency (RF) electronics, expert systems, and machine intelligence created an opportunity to reinvent advanced sensor systems operating in challenging environments. DARPA thus created the Knowledge-Aided Sensor Signal Processing and Expert Reasoning (KASSPER) project [2]. The goal was to create a new real-time knowledge-aided (KA) adaptive sensor architecture that would overcome the shortcomings of traditional adaptive processing techniques, particularly in the most challenging environments.

Traditional adaptive sensors, such as radar, used sample statistical techniques to estimate the ever-changing propagation/target channel. Methods such as the sample covariance matrix (SCM) and its many variants formed the basis for adaptive techniques such as space-time adaptive processing (STAP) [3]. This approach presupposes the existence of a wide-sense stationary set of training data of sufficient quantity to ensure an accurate covariance estimate. For all but the simplest of sensor systems, this assumption is crude at best. For ground MTI (GMTI) radar, the biggest challenge is accurately estimating the spatiotemporal (angle-Doppler) clutter properties [3]. Ground clutter can be very strong and competes with moving targets, especially slowly moving targets. For homogeneous terrain, the aforementioned statistical methods can perform well. However, this clearly is not the case in general. Fortunately, with the continuing proliferation of environmental databases (e.g., Google Earth™) coupled with the KASSPER KA architecture, it is possible to augment statistical methods with a priori KA information.

One example of a KA technique that is relevant to the subject of this book is physics-based KA processing. Under the KASSPER project, techniques were developed that used digital terrain and elevation data (DTED) and ray-tracing physics to estimate the clutter statistics a priori. This predicted clutter was then blended with sample statistics to provide much more
effective performance in challenging environments than conventional methods. These results are well documented in the literature [3].

During the mid-2000s, coming off the heels of the success of the KASSPER project, Dr. Edward Baranoski, program manager at DARPA, had the idea to extend the concept of KASSPER to a whole new level by exploiting urban terrain databases to resolve the NLOS multipath returns. A back-of-the-envelope calculation clearly shows that there is ample signal-to-noise ratio (SNR) in the multipath reflections of vehicles in urban terrain for many ISR GMTI radars. Thus, in principle, KA physics-based techniques could be used to predict where the multipath returns would show up in range and Doppler, assuming a priori knowledge of the building locations. As discussed later in Chapter 3, in addition to the occasional direct-path returns, there are single-bounce, double-bounce, triple-bounce, and even potentially quadruple-bounce reflections [4] as the EM waves ping-pong in the urban canyon.

As described in [5], the Multipath Exploitation Radar (MER) program was a complete success and ushered in a new era in radar: NLOS detection and tracking. One very surprising result, only clear in hindsight, was that the new NLOS tracker significantly outperformed traditional DLOS trackers operating at the same ranges against the same targets (but in open terrain)! But how is that possible? It turns out that the multipath signature (including fades) over time creates a unique fingerprint that rapidly zeros in on the only possible target location. Indeed, during a DARPA flight experiment in Tempe, Arizona, the track accuracies achieved were measured in meters, comparable to what would be obtained if the vehicles were instrumented with a Global Positioning System (GPS) [5]!

As will be discussed later in Chapter 3, NLOS GMTI radar applied to an urban and/or other NLOS setting can lose effectiveness as target densities increase. This performance loss is due to the increase in the number of target signatures: in a multipath environment, a single target can give rise to several MTI range-Doppler detections. A doubling of targets can result in a quadrupling of MTI detections, with a commensurate loss in free (usable) range-Doppler space. A large number of MTI detections can be accommodated by increasing range and Doppler resolution to effectively create more usable range-Doppler space. If a large enough antenna aperture exists, angle-of-arrival (AoA) can also help to alleviate the target density issue.

This book focuses on the exploitation of multipath for target geolocation in MTI radar systems. It does not cover over-the-horizon radars that utilize ionospheric or tropospheric scattering.
1.2 NLOS Radar Overview

Figure 1.1, adapted from the Industry Day Briefing for the DARPA MER project [4], shows the basic concept of NLOS radar in a known urban environment. A back-of-the-envelope calculation indicates that for many airborne GMTI radars there is sufficient SNR to detect the multipath reflections off building surfaces and moving vehicles (e.g., cars, trucks). Since ground vehicles have radar cross sections (RCS) well in excess of 10 dBsm, compared to, say, a human with an RCS of approximately 1 dBsm, the losses (~5 to 10 dB per bounce) are tolerable. Figure 1.2 shows how much more area can be covered by an NLOS radar as compared to DLOS for an urban location typical of many geographical regions.

Figure 1.3 depicts the basic detection association logic of a DLOS GMTI radar. New MTI detections are first compared with the existing target track database. Since the target is generally moving, the existing track must be extrapolated (predicted) to the next time the radar scans the general area where the target was last seen. For a range-Doppler detection map, association gates need to be set up for a given projected track. If an MTI detection falls in that gate, it will be associated with the previous track and used to update the track database.
Figure 1.1 Illustration of the basic NLOS MER concept. The lightly shaded line (double bounce) NLOS path can increase area coverage by a factor of 9, and the dark shaded line (quadruple bounce) by a factor of 25. (Source: [4].)
Figure 1.2 Illustration of the increased area visible to an airborne GMTI radar (a) without and (b) with NLOS techniques. The shaded regions are visible to the MTI radar. The data was collected by DARPA in Tempe, AZ. (Source: [5].)
Figure 1.3 DLOS detection association logic.
Figure 1.3 shows an example of multiple hypotheses for a single track; in this case, a stopping assumption is included for ground moving targets. Other maneuvers, such as turns, can also be hypothesized. Figure 1.4 shows the same association logic for the NLOS case. As explained in Chapter 3, the multiple multipath returns from a single target occupy different locations in range and Doppler. A multipath return travels along a longer path than DLOS and will always be delayed in range. The multipath returns can be distinguished from the DLOS signal with sufficient temporal resolution and knowledge of the multipath geometry.
Figure 1.4 Association logic example for NLOS radar.
Note also that the associated Doppler shifts for each multipath return differ (sometimes significantly) from the DLOS. Each leg in the multipath trajectory imparts a Doppler shift that is unique to the particular ray path. The NLOS case represents a distinctly different paradigm than traditional DLOS detection and tracking. There is significantly more information in the NLOS scenario in Figure 1.4, with potentially multiple range-Doppler detections for a single target. The arrangement of these detections in range-Doppler space uniquely identifies the target location if the building geometry is known. Even when line-of-sight (LOS) is completely blocked, we can effectively localize and track vehicles with accuracies that exceed the DLOS case. For DLOS, the Doppler shift is due to the ownship radar motion along the target LOS and the target radial velocity—and of course a two-way roundtrip (assuming monostatic operation). In contrast, for an example single-bounce multipath return, three Doppler shift contributions arise from: (1) ownship radar motion along the LOS induced on transmit; (2) the moving target's motion along the bounce-path LOS; and (3) ownship radar motion along the LOS on receive. It is clear that NLOS range-Doppler returns can be significantly displaced from the DLOS return in both range and Doppler.
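To see why each leg matters, consider (in our own notation, a sketch rather than the book's derivation) a monostatic radar at position r_p, a target at r_t, and a fixed wall reflection point r_w. For the bounce path radar → wall → target → radar, the two-way path length and the resulting Doppler shift are

\[ L = \|\mathbf{r}_p - \mathbf{r}_w\| + \|\mathbf{r}_w - \mathbf{r}_t\| + \|\mathbf{r}_t - \mathbf{r}_p\|, \qquad f_D = -\frac{1}{\lambda}\,\frac{dL}{dt} \]

Every term whose endpoints move contributes its own component to dL/dt: the ownship on the transmit leg, the target on the bounce leg, and both on the direct receive leg. The total therefore generally differs from the DLOS value.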
1.3 Chapter Outline

The rest of the book is structured as follows. Chapter 2 provides a review of conventional airborne DLOS GMTI radar, including existing methods for detecting and tracking moving objects. While NLOS GMTI radar is a major
technological advance, it is still grounded in the basic functions of GMTI radar (detection, estimation, and tracking).

In Chapter 3, a review of the key electromagnetic physics that form the foundation for NLOS radar is provided. In particular, the physics of scattering off of buildings and vehicles is considered. While the exact amplitudes of these reflections cannot be precisely predicted a priori using building databases and so forth, the predicted range(s) and Doppler(s) signature(s) of a target are quite accurate. For example, while a range-Doppler MTI detection can easily vary by 10 dB from its predicted value, or even from coherent pulse interval (CPI) to CPI, as long as the received signal is strong enough to exceed the detection threshold it can still be used to successfully update (and maintain) an NLOS track. Stated simply, the range and Doppler of an MTI detection is much more important than its amplitude. Insensitivity to scattering amplitude is a key enabler for NLOS radar, and a major reason why it performs well in practice.

The requisite terrain databases are discussed in Chapter 4, along with data sources such as Google Earth. When DARPA first conceived the MER program, it was unknown what accuracy in building/street locations (in three dimensions) was required. Consequently, DARPA collected extremely high-resolution laser interferometric data of the selected test site (Tempe, Arizona). Long story short: The high-resolution data was overkill. It was conclusively demonstrated that the accuracy of publicly available databases such as Google Earth worked just as well. Again, this is because the information is primarily used to (a) determine when a DLOS return, an NLOS return, or both (i.e., DLOS with multipath) is likely, and (b) determine the predicted range-Doppler for each return. Exquisite database information is not required to do a perfectly acceptable (and useful) job.

In Chapter 5, the requisite high-fidelity modeling and simulation (M&S) tools are described, along with their theoretical foundations (such as the uniform theory of diffraction). RF M&S tools such as RFView™ are discussed and utilized to illustrate a number of NLOS radar applications. Figure 1.5 highlights some of the features of the RF M&S tools used in Chapter 5.

Finally, in Chapter 6, the mapping of NLOS radar algorithms onto field programmable gate array (FPGA) and graphics processing unit (GPU) architectures is discussed and illustrated with several applications. The linearity of Maxwell's equations enables calculation of the multipath in a reasonable (tactically useful) time frame. The requisite multipath calculations can be performed in parallel (i.e., vectorized), with different portions of the radar field of view (FOV) processed on different cores of the FPGA/GPU.
Figure 1.5 Example of one of the high-fidelity physics-based M&S tools used in Chapter 5.
Thus, while the absolute number of calculations per second for NLOS radar can be many orders of magnitude greater than for conventional DLOS, they can nonetheless be performed in real time with a multicore parallel processing architecture.
References

[1] History of DARPA. Available: https://www.darpa.mil/about-us/darpa-history-and-timeline.
[2] Guerci, J. R., and E. J. Baranoski, "Knowledge-Aided Adaptive Radar at DARPA: An Overview," IEEE Signal Processing Magazine, Vol. 23, 2006, pp. 41–50.
[3] Guerci, J. R., Space-Time Adaptive Processing for Radar, 2nd ed., Norwood, MA: Artech House, 2014.
[4] Baranoski, E., "Multipath Exploitation Radar Industry Day," DARPA Strategic Technology Office Presentation, 2007.
[5] Fertig, L. B., M. J. Baden, J. C. Kerce, and D. Sobota, "Localization and Tracking with Multipath Exploitation Radar," IEEE Radar Conference (RADAR), 2012, pp. 1014–1018.
CHAPTER 2
Review of Ground Surveillance Sensors

Contents
2.1 MTI Radar
2.2 Kalman Filter
2.3 Multiple Hypothesis Tracking
2.4 Bayesian Particle Filters
2.5 Track-Before-Detect
In this chapter, we briefly review the fundamentals of MTI radar for both ground (GMTI) and airborne (AMTI) applications. In Section 2.1, the basics of MTI radar, including pulse Doppler, are outlined. Traditional MTI radar relies on DLOS, with only atmospheric propagation effects potentially being a factor. NLOS MTI radar will also exploit range-Doppler, but will incorporate a model-based approach to account for the significant ambiguities that can arise due to multipath. Sections 2.2 through 2.5 provide a survey of MTI radar multitarget tracking and estimation approaches. Topics addressed include the extended Kalman filter (EKF), multiple hypothesis tracking (MHT), Bayesian particle filters, and track-before-detect methods.
2.1 MTI Radar

MTI radar pertains to detecting and tracking moving targets, whether surface and/or ground-based (GMTI) or airborne (AMTI), including spaceborne [1]. The earliest forms of MTI radar would simply display all target
echo returns above a certain threshold on a cathode ray tube (CRT). An operator could then visually detect objects that were moving from scan to scan. Rudimentary automated detection techniques consisted of subtracting successive scans, which had the effect of filtering out static objects (e.g., clutter) while retaining objects that are moving. Later-generation MTI radars incorporated Doppler processing as a more direct means of rapidly detecting moving targets [1].

For airborne (or spaceborne) radars looking down at the surface, targets will also be competing with severe ground (and/or sea) clutter that will also be Doppler-shifted due to ownship motion. For these systems, additional spatial (angle) processing is usually employed to further assist with clutter rejection in the form of a multichannel phased array [2]. The use of coupled angle-Doppler (also called space-time) processing is one of the most powerful MTI radar processing techniques for maximizing detection performance, as well as providing advanced electronic protection (EP) against jamming.

The operation of MTI radar is characterized by the direct output (detections) and the resulting data products after processing:

MTI radar output produces detections in the range-Doppler domain, which appear as dots on an image (see Figure 2.1);
A tracker can convert the dots into useful data products:
  Identify where the targets originated;
  Predict where the targets are going;
  Determine how many targets are traveling together;
  Keep track of an identified target—where did a specific person go;
Figure 2.1 Simulated MTI detections (circles) at Aberdeen Proving Ground, Maryland. The lines represent roads. (Source: [3].)
  Where a target is predicted to be in the future, for targeting.

Figure 2.2 illustrates the effect of ownship motion on the angle-Doppler distribution of ground clutter. The largest Doppler spread occurs when the radar is side-looking, since clutter toward the bow of the aircraft has increasing Doppler shifts while clutter toward the aft has decreasing shifts [2]. Figure 2.3 shows the angle-Doppler "clutter ridge" for the side-looking MTI radar case, and illustrates how a moving target will generally not lie along the clutter ridge. Normalized angle and Doppler refer to the unambiguous (Nyquist) regions in both regimes (see [2] for details). This fact is exploited by modern MTI radars by crafting an angle-Doppler filter (also referred to as space-time filtering) that maximizes its 2-D response at an angle-Doppler cell of interest, while placing a null in regions of potential competing clutter.
Figure 2.2 Relationship between ownship radar motion and orientation and an iso-range clutter ring. (Source: [2].)
Figure 2.3 Angle-Doppler clutter distribution corresponding to the side-looking MTI radar case. Note that a moving target with sufficient radial velocity will move off of the clutter ridge. This property is key to modern MTI radar. (Source: [2].)
An example of an adapted angle-Doppler filter response is shown in Figure 2.4. The target (region) of interest is at a normalized Doppler of +0.25 and a normalized angle of 0 (boresight). Note the presence of a 2-D null along the clutter ridge, as well as spatial-only jammer nulls.
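As a concrete illustration of the scan-subtraction idea introduced above, the following minimal numpy sketch (our own; the PRF, wavelength, and target speed are invented values) applies a two-pulse canceller to one range cell and then forms a Doppler filterbank:

```python
import numpy as np

# Minimal sketch of classic MTI processing for a single range cell: a
# two-pulse canceller subtracts successive returns to remove static clutter,
# then an FFT across pulses forms the Doppler filterbank used to find movers.

prf = 1000.0                # pulse repetition frequency (Hz), assumed
n_pulses = 64               # pulses in the coherent dwell
wavelength = 0.03           # assumed X-band wavelength (m)

t = np.arange(n_pulses) / prf
clutter = 5.0 * np.ones(n_pulses)                  # static scatterer: zero Doppler
f_target = 2 * 5.0 / wavelength                    # 5 m/s radial velocity, two-way
target = np.exp(2j * np.pi * f_target * t)         # moving-target return
pulses = clutter + target                          # slow-time samples, one range cell

cancelled = pulses[1:] - pulses[:-1]               # two-pulse canceller

spectrum = np.fft.fftshift(np.fft.fft(cancelled))  # Doppler filterbank
freqs = np.fft.fftshift(np.fft.fftfreq(cancelled.size, d=1.0 / prf))
peak = freqs[np.argmax(np.abs(spectrum))]
print("peak Doppler %.1f Hz (target at %.1f Hz)" % (peak, f_target))
```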
2.2 Kalman Filter

2.2.1 Linear Kalman Filter

The Kalman filter (KF) [4-7] is a widely used method for reducing the errors associated with tracking moving targets based on noisy sensor data. Fundamentally, the KF is a Markov model, in that it assumes that the future behavior of the system can be predicted solely by knowing the current state and not the past history. The KF is the optimal linear tracking filter when the system dynamics and sensor observation noise are perfectly known.
Figure 2.4 Examples of angle-Doppler (space-time) filter patterns: (a) Without tapering, and (b) with 30-dB Chebyshev angle and Doppler tapers.
However, in practice, we often have only approximate knowledge of the system dynamics. Additionally, it is often difficult to obtain good estimates of the initial state [8, 9], as well as the process [10] and measurement noise [11] covariances. Furthermore, the process and measurement noise covariances are often a function of the system state. For instance, the measurement of vehicle position may become more inaccurate at higher vehicle speeds. Nevertheless, the KF is relatively robust to errors in these components and will continually adapt to the changing environment to maintain tracking performance.

The KF is appropriate for linear dynamic systems that can be discretized in the time domain. We can model the state of the system, x_k, as a linear stochastic process
\[ \mathbf{x}_k = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{w}_k \qquad (2.1) \]
where F_k is the state transition model and w_k is the process noise for discrete time update k. The process noise w_k represents fluctuations in the state from error sources that are not included in the state transition model. An example of process noise would be wind blowing on a moving vehicle and changing its speed in a random way, or friction that is not included in the transition model (F_k). Process noise might also include the acceleration of a vehicle from unknown control inputs of the driver. The process noise is often modeled as a zero-mean Gaussian distribution with covariance Q_k.
An example one-dimensional state for a moving vehicle is
\[ \mathbf{x}_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} \qquad (2.2) \]
where x is the position and ẋ is the speed of the vehicle along one dimension. A simple example of a linear state transition model (F_k) assuming a constant speed is
\[ \mathbf{F}_k = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \qquad (2.3) \]
where ∆t is the time difference between measurements. At each time increment, a measurement (z_k) of the vehicle's position is made, which necessarily entails a certain amount of noise (v_k):
\[ \mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k \qquad (2.4) \]
We model the measurement noise (v_k) as a zero-mean Gaussian process with covariance R_k. If we can only measure the vehicle's position and not the vehicle's speed, then the observation model is
\[ \mathbf{H}_k = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad (2.5) \]
For realistic scenarios, the observation model will be 2-D, depending on the geometry between the receiver and the ground vehicle, and may incorporate the curvature of the Earth. We have retained the subscript k in H_k since it is possible for the observation model to change over time in more complex examples. We define the vector y_k as the residual difference between the measured (z_k) and predicted (H_k x_{k|k−1}) vehicle position:
\[ \mathbf{y}_k = \mathbf{z}_k - \mathbf{H}_k \mathbf{x}_{k|k-1} \qquad (2.6) \]
The KF continually adjusts the system state x_k and state covariance P_k to reduce the residual y_k. In that way, the predicted vehicle position should closely match (track) the measured position (z_k) at each step. In (2.6), we used the nomenclature x_{k|k−1}, which represents the state (x_k) at time increment k assuming observations up to time increment k−1. Obviously, predicting the next vehicle position is critical to tracking performance and is the basis for all of the tracking methods presented in this chapter. If the vehicle position is measured near the expected location at each time increment, then it is much more likely that detections from the vehicle will be assigned to the correct (historical) track.

The KF is a two-part recursive Bayesian estimator: a prediction (a priori) step and an update (a posteriori) step. In the prediction phase, the vehicle position is estimated according to the physical laws of motion, including such parameters as velocity and acceleration. The KF continually updates the (hidden) underlying state of the system so that the prediction minimizes the residual error y_k. In the prediction (a priori) step, the new state is predicted as
\[ \mathbf{x}_{k|k-1} = \mathbf{F}_k \mathbf{x}_{k-1} \qquad (2.7) \]
from the previous system state and the system dynamics. Vehicle control inputs can be incorporated into (2.7) if they exist (e.g., driver acceleration). However, for most noncooperative vehicle tracking applications (ground or air), we will not know the control inputs and they will be treated as noise. Also, in the prediction step a new state covariance is calculated using an estimate of the process noise
\[ \mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1} \mathbf{F}_k^{T} + \mathbf{Q}_k \qquad (2.8) \]
where Q_k is the process noise covariance (the covariance of the process w_k). In (2.8), the expected process noise covariance is incorporated into the predicted state covariance at each time increment. The process noise represents the uncertainty in knowledge of the physical state of the system (in this case the vehicle position and speed). Since we do not know the details of the physical mechanisms that affect the vehicle position, we need to model them as a random process. For tracking of ground vehicles with radar, the largest component of the process noise will be the acceleration of the vehicle from unknown driver control inputs.
For the one-dimensional example scenario, we can estimate the process noise covariance by including the acceleration (a_k) in the equation of motion as a separate term:
\[ \mathbf{x}_{k|k-1} = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{M} a_k = \mathbf{F}_k \mathbf{x}_{k-1} + \begin{bmatrix} \tfrac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} a_k \qquad (2.9) \]
We can estimate the corresponding process noise covariance as
\[ \mathbf{Q}_k = \sigma_a^2 \mathbf{M}\mathbf{M}^{T} = \sigma_a^2 \begin{bmatrix} \tfrac{1}{4}\Delta t^4 & \tfrac{1}{2}\Delta t^3 \\ \tfrac{1}{2}\Delta t^3 & \Delta t^2 \end{bmatrix} \qquad (2.10) \]
where σ_a² is the expected variance of the acceleration due to the unknown driver control inputs.

Next, in the update phase (a posteriori step), the vehicle state and state covariance estimates are updated. The prefit residual covariance S_k is calculated as
\[ \mathbf{S}_k = \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^{T} \qquad (2.11) \]
Sk is an estimate of the expected variation of detections around the existing track that includes measurement and state uncertainty. The residual covariance (Sk) is used to calculate the KF gain (Kk) or the relative weight given to the recent measurements versus the current state estimate. With high gain, the prediction adheres closely to recent measurements; the prediction will agree with each data point better but will incorporate more noise. With a low gain, the filter relies more on the current state to calculate the model predictions; the track will be smoother (less noisy) but also less responsive to target track changes. The nominal KF automatically calculates the optimal filter gain Kk using
\[ \mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}_k^{T} \mathbf{S}_k^{-1} \qquad (2.12) \]
However, the filter gain could also be set manually to adjust the filter behavior or performance. The updated state x_k and state covariance P_k are calculated (assuming the optimal gain) as
\[ \mathbf{x}_k = \mathbf{x}_{k|k-1} + \mathbf{K}_k \mathbf{y}_k \qquad (2.13) \]
and
\[ \mathbf{P}_k = \mathbf{P}_{k|k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k|k-1} \qquad (2.14) \]
We can see from (2.13) that the KF is continuously trying to reduce the residual (y_k) to zero. If the residual (y_k) is zero, then the state (x_k) will remain unchanged. At each time increment, (2.6) to (2.8) and (2.11) to (2.14) are calculated to produce the new vehicle tracking state and state covariance.

As an example, we use the KF to track a moving vehicle along one dimension. The moving vehicle starts at a position of 10m and moves at 1 m/s. The radar measurement of the vehicle position is defined by a Gaussian distribution with a standard deviation of 10m. We can calculate Q_k using (2.10), assuming σ_a² = 1 and ∆t = 1. The transition and observation matrices F_k and H_k are found using (2.3) and (2.5), respectively. Since we do not initially have a good estimate of the state uncertainty, we set the state covariance P_k to all zeros. We assume the measurement covariance is
\[ \mathbf{R}_k = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & 1 \end{bmatrix} \qquad (2.15) \]
where σ_x is determined later. The initial state was set to x_k = [0 0]^T since the radar receiver did not know the vehicle position and speed when tracking was started.

The tracking results for two tests of the KF are plotted in Figure 2.5. The assumed position measurement standard deviation (σ_x) in (2.15) was set to 10m (see Figure 2.5(a)) and 200m (see Figure 2.5(b)). The vehicle position (radar) measurements are plotted as the plus symbols (the same points in both panels). Since the vehicle position measurement is relatively accurate when σ_x = 10m, the tracking filter output (thin line) includes much of the measured position variation. Nevertheless, the filter output is much closer to the true vehicle position (thick line) than the instantaneous measurements (plus symbols). When the measurement uncertainty is large, σ_x = 200m, the filter relies more heavily on the internal state and the reported position is much smoother (see Figure 2.5(b)). In this case, the reported position closely matches the true vehicle position. However, it takes longer for the filter output to approach the true position at the start of the simulation.
Figure 2.5 The result of a KF tracking a vehicle position in one dimension with the assumed position measurement standard deviation of (a) 10m and (b) 200m.
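This example is easy to reproduce. Below is a minimal numpy sketch of (2.6) to (2.8) and (2.11) to (2.14) applied to the same 1-D scenario; it is our illustration (the random seed and loop length are ours), not the authors' implementation:

```python
import numpy as np

# Minimal sketch of the 1-D constant-velocity KF example above.
dt, sigma_a, sigma_x = 1.0, 1.0, 10.0
F = np.array([[1.0, dt], [0.0, 1.0]])            # (2.3) state transition
H = np.array([[1.0, 0.0], [0.0, 0.0]])           # (2.5) observe position only
M = np.array([[0.5 * dt**2], [dt]])
Q = sigma_a**2 * (M @ M.T)                       # (2.10) process noise covariance
R = np.array([[sigma_x**2, 0.0], [0.0, 1.0]])    # (2.15) measurement covariance

x = np.zeros((2, 1))      # initial state unknown: [position, speed] = [0, 0]
P = np.zeros((2, 2))      # initial state covariance (all zeros, as in the text)

rng = np.random.default_rng(0)
truth = np.array([[10.0], [1.0]])                # starts at 10m, moves at 1 m/s
for k in range(50):
    truth = F @ truth
    z = H @ truth + np.array([[rng.normal(0.0, sigma_x)], [0.0]])
    x = F @ x                                    # (2.7) predict state
    P = F @ P @ F.T + Q                          # (2.8) predict covariance
    y = z - H @ x                                # (2.6) residual
    S = R + H @ P @ H.T                          # (2.11) residual covariance
    K = P @ H.T @ np.linalg.inv(S)               # (2.12) Kalman gain
    x = x + K @ y                                # (2.13) state update
    P = P - K @ H @ P                            # (2.14) covariance update

print("estimated position %.1f m, speed %.2f m/s" % (x[0, 0], x[1, 0]))
```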
2.2.2 Extended Kalman Filter

In practice, it is often difficult to describe systems using the linear model in (2.1). The usual method of handling a nonlinear observation equation is through application of the extended Kalman filter (EKF) [12, 13]. The EKF can process nonlinear measurement error statistics, for example, slant range and azimuth angle errors. The advantage of the EKF is faster convergence in terms of iterations compared to traditional methods for nonlinear systems, although the computational cost of each iteration is higher. Additionally, the EKF will be more robust for some systems that exhibit a moderate amount of nonlinear behavior in the state transition or observation functions.

In the EKF, the state transition is treated as a continuous nonlinear function f̂(x_{k−1}) of the previous state with additive noise w_k:
\[ \mathbf{x}_k = \hat{f}(\mathbf{x}_{k-1}) + \mathbf{w}_k \qquad (2.16) \]
Note that the state transition function f̂(x_{k−1}) takes a vector as input and outputs a vector. The hat symbol (^) signifies that it is a function.
The measured value is also modeled as a continuous nonlinear function ĥ(x_k) of the state with additive noise v_k:
\[ \mathbf{z}_k = \hat{h}(\mathbf{x}_k) + \mathbf{v}_k \qquad (2.17) \]
The EKF operates similarly to the linear KF above, using a linearized version of the signal model about the current state estimate. The transition and observation models are replaced with linearized estimates calculated from the Jacobian matrices F_k and H_k, written as partial derivatives of the transition (f̂) and observation (ĥ) functions:
\[ \mathbf{F}_k \approx \left.\frac{\partial \hat{f}(\mathbf{x})}{\partial \mathbf{x}}\right|_{\mathbf{x}_{k-1}} \quad \text{and} \quad \mathbf{H}_k \approx \left.\frac{\partial \hat{h}(\mathbf{x})}{\partial \mathbf{x}}\right|_{\mathbf{x}_{k|k-1}} \qquad (2.18) \]
The Jacobian matrix element F_k(row = i, col = j) is constructed by calculating the partial derivative of the ith component of f̂ with respect to the jth component of x. A similar process is repeated for H_k. After linearization, (2.7) and (2.6) are replaced by
\[ \mathbf{x}_{k|k-1} = \hat{f}(\mathbf{x}_{k-1}) \quad \text{and} \quad \mathbf{y}_k = \mathbf{z}_k - \hat{h}(\mathbf{x}_{k|k-1}) \qquad (2.19) \]
The rest of the traditional KF equations are unchanged. This approximation is only accurate when the observation model ĥ and transition model f̂ are linear on time scales that are equal to or larger than the discrete time interval ∆t. The EKF has several drawbacks, including possible divergence from the true state for incorrect choices of the initial state or the measurement/process covariances [14]. Additionally, the EKF is not an optimal nonlinear estimator, and so a number of improvements/alternatives have been proposed, for example, unscented KFs [15] and ensemble KFs (EnKF) [16].

A more realistic example of the EKF operating on simulated target detections is shown in Figure 2.6 [3]. The receiver was 35 km due west of the target at the start of the scenario. The vehicle was moving at 10 m/s going east initially. In this example, the state was described as
\[ \mathbf{x}_k = \begin{bmatrix} x(k) & y(k) & z(k) & \dot{x}(k) & \dot{y}(k) & \dot{z}(k) \end{bmatrix}^{T} \qquad (2.20) \]
Figure 2.6 Example output of the EKF on detections (points) from a ground moving target radar scenario. The ground truth is the dark black line, and the EKF output is the gray line. (Source: [3].)
and the corresponding linear transition matrix was used
\[ \mathbf{F}_k = \begin{bmatrix} 1 & 0 & 0 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 & \Delta t & 0 \\ 0 & 0 & 1 & 0 & 0 & \Delta t \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.21) \]
The observation model ĥ(x_k) was a nonlinear function of the target position and velocity and included the Earth's curvature. The instantaneous linear observation model was found using (2.18). Each measurement (dot in Figure 2.6) had a 10m standard deviation along range and a 0.57° standard deviation in cross range (or approximately 350m at a 35-km standoff range). The process noise covariance Q_k assumed values of downrange and cross range variance that were relatively large (compared to the measurement covariance), so that the EKF would be responsive to abrupt changes in the track direction (at the cost of an increased response to measurement errors and larger track noise). Increasing the process noise covariance raises the filter gain and causes the KF to rely more on new measurements rather than the previously developed state, enabling the tracker to faithfully follow the target through sudden (90°) turns. In Figure 2.6, the EKF is able to successfully track the target, and the tracker output (gray line) closely matches the ground truth (black line).
The target error decreases after the left-hand turn, since the along-range measurements are more accurate. However, after the turn, the receiver is not perfectly due west of the target and some of the cross range uncertainty leaks into the x-axis component of the target position. The corresponding target state position and speed error are plotted in Figure 2.7. The position and speed error temporarily increase during the turn due to the sudden change in vehicle state. The accuracy of the EKF predictions improve and the error decreases after the turn as the vehicle again moves in a straight line.
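The EKF recursion of (2.16) to (2.19) can be sketched compactly. In the example below the Jacobians of (2.18) are formed numerically, and the range/azimuth measurement function is an invented stand-in for the authors' Earth-curvature observation model:

```python
import numpy as np

# Sketch of one EKF predict/update cycle per (2.16)-(2.19); all constants
# other than the 10m / 0.57 degree noise values from the text are invented.

def f_hat(x, dt=1.0):
    """Constant-velocity transition for state [x, y, vx, vy] (cf. (2.21))."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    return F @ x

def h_hat(x):
    """Nonlinear measurement: slant range and azimuth from a radar at the origin."""
    r = np.hypot(x[0], x[1])
    az = np.arctan2(x[1], x[0])
    return np.array([r, az])

def jacobian(func, x, eps=1e-6):
    """Numerical Jacobian: column j is the partial derivative wrt x_j (2.18)."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - y0) / eps
    return J

# Target 35 km east of the radar, driving east at 10 m/s (per the text's scenario).
x = np.array([35e3, 0.0, 10.0, 0.0])
P = np.diag([10.0**2, 350.0**2, 25.0, 25.0])
Q = np.eye(4)
R = np.diag([10.0**2, np.deg2rad(0.57)**2])         # range/azimuth noise (text values)
z = np.array([35012.0, np.deg2rad(0.01)])           # one invented measurement

F = jacobian(f_hat, x)                              # linearized transition
x_pred = f_hat(x)                                   # (2.19) predicted state
P_pred = F @ P @ F.T + Q                            # (2.8)

H = jacobian(h_hat, x_pred)                         # linearized observation (2.18)
y = z - h_hat(x_pred)                               # (2.19) residual
S = R + H @ P_pred @ H.T                            # (2.11)
K = P_pred @ H.T @ np.linalg.inv(S)                 # (2.12)
x = x_pred + K @ y                                  # (2.13)
P = P_pred - K @ H @ P_pred                         # (2.14)
print("updated position estimate (m):", x[:2])
```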
2.3 Multiple Hypothesis Tracking

Tracking ground targets is relatively difficult due to the typically dense target environment and poor sensor resolution compared to the size of vehicles, roads, and so forth. The overlapping target probability distributions make association difficult. By comparison, tracking air targets is easy, since they are well separated in space and their possible flight dynamics are limited. For ground targets, it is often necessary to associate multiple targets with multiple previously existing tracks. If instantaneous assignment decisions are made when tracks cross (e.g., using nearest-neighbor association), then incorrect assignments are likely.

An archetypal GMTI tracking scenario is shown in Figure 2.8. Target detections are indicated by the circles with two shades of gray. The false alarms are designated by the open circles. The ovals represent the measurement uncertainty of each radar scan. As is typical for GMTI, the cross range resolution is much larger than the downrange resolution. The downrange uncertainty is inversely proportional to the radar bandwidth and is usually of the order of meters to tens of meters.
Figure 2.7 The target state (a) distance error and (b) speed error for the EKF tracking example. (Source: [3].)
Figure 2.8 An example GMTI scenario with two ground target tracks that cross. The shaded circles represent detections from two different targets, the open circles are false alarms, the ovals represent the measurement uncertainty of each detection, and the true target tracks are indicated by the lines. (Source: [3].)
The cross range uncertainty is inversely proportional to the radar dwell time at any particular location, which is constrained by other radar system performance factors such as the required area coverage. The radar cannot afford to stare at one area continuously. Consequently, cross range resolution is of the order of hundreds of meters. In Figure 2.8, the cross range measurement uncertainties overlap, producing ambiguous target-track assignments.

When the target tracks cross in Figure 2.8, it is impossible to reliably associate each target with the correct track using single detections. Binary correspondence decisions discard useful information. Better tracking results can be obtained if the correspondence decisions are deferred until detections from multiple frames are accumulated. By combining multiple detections, the two tracks (shaded lines) can be distinguished and reliable target-track association is possible. The MHT algorithm [17] was developed to maintain multiple correspondence hypotheses for each target. The track that is reported is the most likely set of correspondences over time. In this way, questionable track designations (i.e., when targets are crossing) can be confirmed or rejected using information from subsequent scans.

Example detections from a single radar scan are shown in Figure 2.9. Two targets are being tracked and their predicted positions in this scan are determined from the nonlinear observation model ĥ(x_k^i) of the target state (x_k^i).
Figure 2.9 The predicted positions of two targets ĥ(x_k^i) for target states x_k^i, where superscript i is the track number and subscript k is the time interval. The associated measurement gates for each state are depicted by the ovals. Three measurements (z_k^j) are reported, where superscript j is the measurement number. The tracking algorithm must decide how to associate each detection with a track prediction for each scan.
The superscript i signifies the track number and the subscript k is the discrete time interval. The tracking algorithm must decide which detection z_k^j to associate with each track, where the superscript j is the measurement number. It is also possible that some target detections will be below threshold and that some of the detections are false alarms. We could very simply gate the detections so that we ignore z_k^3, but that would disregard the possibility that detection z_k^3 is the start of a new track. We could choose the most probable detections for each increment, that is, the detections that are closest to each predicted measurement. Associating detection 2 with track 2 is probably a good guess. But perhaps detection 2 is a false alarm and detection 1 is associated with track 2? It is obvious that detection-track assignment errors will occur regularly when tracks cross.

The overall probability of each target-track hypothesis is determined by the individual association probabilities between each detection and track. For each hypothesis, we must also incorporate the possibility that any detection is a false alarm or the start of a new track. If we choose a hypothesis for the current scan, we can write its probability in general, using the expression derived in [17], as the product of multiple assignment probabilities:
\[ p(\Theta_k) = P_D^{N_{DT}} \left(1 - P_D\right)^{N_{TGT}} \beta_{FA}^{N_{FA}} \beta_{NT}^{N_{NT}} \prod_{m=1}^{N_{DT}} p\!\left(\mathbf{z}_k^j \mid \hat{h}(\mathbf{x}_k^i)\right) p(\Theta_{k-1}) \qquad (2.22) \]
where NTGT is the (integer) number of existing target tracks (in this case 2) and NDT is the number of detections that fall within existing target gates
and are associated with existing tracks (also 2). The integers N_FA and N_NT are the numbers of false alarms and new tracks, respectively. The probability of this hypothesis for this scan, p(Θ_k), is multiplied by the probability from the previous scan, p(Θ_{k−1}). The state observation model ĥ(x_k^i) is typically a nonlinear function of the position and velocity for ground targets and will change with time as the receiver and ground target move. The tracking algorithm provides the state x_k^i with each scan.

The individual Gaussian probabilities in the product operator in (2.22) are calculated using the distance between the prediction ĥ(x_k^i) and the measurement z_k^j, given a certain track state x_k^i, normalized by the covariance S_k^i:
\[ p\!\left(\mathbf{z}_k^j \mid \hat{h}(\mathbf{x}_k^i)\right) = \frac{\exp\!\left[-\frac{1}{2}\left(\mathbf{z}_k^j - \hat{h}(\mathbf{x}_k^i)\right)^{H} \left(\mathbf{S}_k^i\right)^{-1} \left(\mathbf{z}_k^j - \hat{h}(\mathbf{x}_k^i)\right)\right]}{\sqrt{(2\pi)^n \left|\mathbf{S}_k^i\right|}} \qquad (2.23) \]
where n is the dimensionality of the covariance. The covariance S_k^i is a combination of the measurement covariance R_k and the state covariance P_{k|k−1} (obtained from the tracker):

\[ \mathbf{S}_k = \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^{T} \approx \mathbf{R}_k \qquad (2.24) \]
We expect the combined covariance S_k to be dominated by the measurement covariance R_k. The state covariance is calculated by combining multiple measurements over time and should have less error than a single measurement.

The gates (ovals in Figure 2.9) are used to determine whether we associate measurements with an existing track. We can determine if a measurement (z_k^j) is within a gate area using
\[ \left(\mathbf{z}_k^j - \hat{h}(\hat{\mathbf{x}}_k^i)\right)^{H} \left(\mathbf{S}_k^i\right)^{-1} \left(\mathbf{z}_k^j - \hat{h}(\hat{\mathbf{x}}_k^i)\right) \le \eta^2 \qquad (2.25) \]
where η represents the probability cutoff in terms of the number of sigmas of the Gaussian distribution. The parameter η is usually selected to be around 2 to 3. For 3 sigmas, 99.7% of the detections associated with that track will fall within the gate area. Beyond this gate distance, detections are no longer automatically associated with existing tracks.
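A sketch of the gate test in (2.25) (our notation; for real-valued measurements the Hermitian transpose reduces to an ordinary transpose):

```python
import numpy as np

def in_gate(z, z_pred, S, eta=3.0):
    """Mahalanobis gate test per (2.25): accept detection z for a track whose
    predicted measurement is z_pred with residual covariance S (real-valued)."""
    y = z - z_pred
    d2 = y @ np.linalg.solve(S, y)     # (z - h(x))^T S^-1 (z - h(x))
    return d2 <= eta**2

# Example with invented numbers: 10m range sigma and 350m cross range sigma
# (the geometry quoted earlier in this chapter). A detection 20m off in range
# is 2 sigma away and passes a 3-sigma gate.
S = np.diag([10.0**2, 350.0**2])
print(in_gate(np.array([20.0, 0.0]), np.zeros(2), S))   # True
```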
Figure 2.10 Hypothesis tree for a single scan. Each detection in Figure 2.9 is assigned to either one of the existing tracks, a false alarm, or a new track.
We can construct a hypothesis tree for all three detections in Figure 2.9, as shown in Figure 2.10. Detection 1 may be associated with either track 1 or 2, or it may be a false alarm or the start of a new track. Given the choice for detection 1, we can assign detection 2, detection 3, and so forth. Detection 3 can only be a false alarm or a new track since it is outside the gates of either track. There are 22 possible hypotheses in Figure 2.10 to explain the scenario depicted in Figure 2.9 for a single scan.

The clustering of detections into groups is a key step to reduce the number of hypotheses that must be calculated. In the hypothesis tree shown in Figure 2.10, detection 3 should have been handled completely separately, since it is not associated with either track 1 or 2 (it is outside their detection gates in Figure 2.9). Thus, we can simplify the hypothesis tree as shown in Figure 2.11 to only 13 hypotheses. In MHT, detections are clustered to form separate hypothesis trees and reduce the total number of combinations. However, it is possible that the detections from one tree will eventually land within the detection gates of tracks from another tree.
Figure 2.11 Simplified hypothesis tree for a single scan. Each detection depicted in Figure 2.9 is assigned to either one of the existing tracks, a false alarm, or a new track.
In that case, the two clusters and the two corresponding hypothesis trees will need to be merged. Additionally, clusters should be split if a group of tracks in a cluster do not share detections (inside their detection gates) with any other track in the cluster for some specified number of scans.

The probability of hypothesis Θ1 can be found using (2.22). Hypothesis Θ1 (top line in Figure 2.11) assumes that detection 1 is associated with track 1, detection 2 is associated with track 2, and detection 3 is a false alarm. The probability of hypothesis Θ1 for this scan only can be written as the product of multiple assignment probabilities:
\[ p\!\left(\Theta_k^1\right) = P_D^2 \left(1 - P_D\right)^2 \beta_{FA} \prod_{m=1}^{2} p\!\left(\mathbf{z}_k^m \mid \hat{h}(\mathbf{x}_k^m)\right) \qquad (2.26) \]
where the superscript in the factor P_D² represents the square and the factor β_FA is the probability of a false alarm. If we had assumed that detection 3 was the start of a new track, then we would have used the factor β_NT. The probability of a false alarm or a new track is set using details of the particular ground vehicle-tracking scenario. The individual probabilities p(z_k^m | ĥ(x_k^m)) are calculated using (2.23).

As we add detections from other scans, the hypothesis trees will grow exponentially. Calculating each of the probabilities in (2.23) entails
significant computational cost. The state (x_k^i) and covariance (S_k^i) must be updated during each scan for each hypothesis (e.g., using the EKF or a particle filter). For realistic situations, after only a small number of scans, it will not be practical to score all of the possible hypotheses. Initially, different strategies were adopted to prune the total number of hypotheses. For instance, those with probabilities below a predetermined threshold can be dropped each scan. However, it is possible that low-probability hypotheses will become more likely with further information. Thus, only configurations that are very unlikely can be safely dropped. This approach is very inefficient, as it still retains the calculation of many probabilities that will eventually be thrown away.

The standard method adopted for MHT is to select a predetermined number of nonconflicting assignments for each scan that have a reasonable chance of success [18]. First, each of the possible measurement-track connections is scored using a log-likelihood function. That is, we score each hypothesis by the negative of the logarithm of the probability in (2.22). The score can be written as
\[ c_{ij} = \frac{n}{2}\log(2\pi) - N_{DT}\log(P_D) - N_{TGT}\log(1-P_D) - N_{FA}\log(\beta_{FA}) - N_{NT}\log(\beta_{NT}) + \frac{1}{2}\log\left|\mathbf{S}_k^i\right| + \sum_{m=1}^{N_{DT}} \frac{1}{2}\left(\mathbf{z}_k^j - \hat{h}(\hat{\mathbf{x}}_k^i)\right)^{H} \left(\mathbf{S}_k^i\right)^{-1} \left(\mathbf{z}_k^j - \hat{h}(\hat{\mathbf{x}}_k^i)\right) \qquad (2.27) \]
We have neglected to include the logarithm of the probability of the last scan, p(Θ_{k−1}), in (2.27) since it is easier to calculate the score for each scan independently. Using log-likelihood values makes it easy to check various track-detection associations by directly summing the corresponding scores. For each value of m in the sum, we include associations from the hypothesis between a track i and a detection j. For instance, if track 1 is associated with detection 1 and track 2 is associated with detection 2, then for m = 1, (i = 1, j = 1), and for m = 2, (i = 2, j = 2). We add the score from the last scan (branch) to the scores for the various hypotheses from the current scan (leaves) to get the total score for each hypothesis (in the tree) over multiple scans.

To determine the set of nonconflicting assignments with the highest probability (lowest score), we construct an assignment matrix, as shown in Figure 2.12. We used standard scores for false alarms (FA = 50), new tracks (NT = 40), and no detections (ND = 45) that are based on an estimate of the log of those probabilities.
Figure 2.12 An example MHT assignment table that corresponds to the scan depicted in Figure 2.9. T1 and T2 represent the two existing tracks and z1, z2, and z3 represent the three detections. The symbols FA, NT, and ND represent false alarm, new track, and no detection, respectively. (Source: [3].)
For each combination of track and detection, we calculated the score using (2.27). A score of 100 was used to forbid an assignment. The set of assignments we choose must be both consistent and produce the highest probability (lowest score). For a particular arrangement, we cannot assign the same detection to multiple tracks. From the assignment table, we can see that the most likely assignment (lowest total score) is that detection 1 corresponds to track 1, detection 2 corresponds to track 2, and detection 3 is the start of a new track. We also need to know the set of the most probable assignments in terms of score (and not just the single most likely assignment); for instance, we need the lowest score, the second lowest score, the third lowest score, and so forth. For large numbers of detections and tracks, the assignment table will be difficult to interpret manually. The combinatorial optimization algorithm that solves the assignment problem in polynomial time is called the Hungarian method, after the researchers who first published a solution [19]. The best N minimum-cost combinations of measurements and existing targets, new targets, false alarms, and/or missed detections are selected using the algorithm outlined in [20].

Even after limiting the total number of hypotheses in each scan, the number of possibilities after many scans will become computationally intractable. To limit the total number of hypotheses to test, we also limit the tree depth in terms of scans using N-scan pruning. In that approach, the hypothesis tree scan depth is limited to N scans, where N is typically larger than 5. The hypotheses prior to scan k−N are condensed to the single most probable hypothesis.
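The single-scan assignment search can be sketched with SciPy's linear_sum_assignment, which implements the same kind of polynomial-time assignment optimization. The FA = 50 and NT = 40 scores and the forbidden value of 100 follow the text; the track-detection costs are invented, and the no-detection (ND) score for tracks left unassigned is omitted for brevity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are detections z1..z3; columns are the tracks [T1, T2] plus one false
# alarm (FA) and one new track (NT) dummy column per detection, so several
# detections can be declared FA/NT simultaneously.
BIG = 100.0
track_cost = np.array([[10.0, 30.0],     # z1 vs T1, T2 (invented scores)
                       [35.0, 12.0],     # z2 vs T1, T2
                       [BIG,  BIG]])     # z3 lies outside both gates
n_det = track_cost.shape[0]
fa = np.full((n_det, n_det), BIG); np.fill_diagonal(fa, 50.0)   # false alarm
nt = np.full((n_det, n_det), BIG); np.fill_diagonal(nt, 40.0)   # new track
cost = np.hstack([track_cost, fa, nt])

rows, cols = linear_sum_assignment(cost)   # minimum-total-cost, conflict-free
labels = ["T1", "T2"] + ["FA"] * n_det + ["NT"] * n_det
for r, c in zip(rows, cols):
    print("detection z%d -> %s (cost %.0f)" % (r + 1, labels[c], cost[r, c]))
# Expected: z1 -> T1, z2 -> T2, z3 -> NT, matching the discussion above.
```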
Figure 2.13 MHT applied to an example simulated scenario: (a) Measurements, tracks, and ground truth and (b) the measurements that were used to form each track. Light shaded circles represent false alarms. The two shades of dark gray circles represent detections for two targets. The dark curved and white straight lines represent the estimated and ground truth target tracks, respectively.
An example radar scenario with two targets whose paths cross is shown in Figure 2.13(a). The two targets begin 1750m away from each other and move at 10 m/s on trajectories that cause them to cross paths in the middle of their movement pattern at t = 123 sec. The radar platform was located 70 km due west of the center of the track pattern. Each target was tracked using the EKF at an update rate of 1 second. The two tracks are indicated by the curved dark shaded lines. The ground truth is represented by the straight lines. MHT (using a 10-scan depth) can effectively differentiate between the two targets even when they cross, where a single detection-track assignment is temporarily ambiguous. The two shades of thick dark gray circles represent detections from the two targets. The target 1 measurements consistently associate with the first track, while the target 2 measurements associate with the second track. False alarms (very thin, lightly shaded circles) are consistently rejected. The detections that were assigned to each track are plotted in Figure 2.13(b).

The associated detection-track cost in the MHT scenario as a function of scan is plotted in Figure 2.14. The cost for detections associated with target 1 (z_k^1) is plotted in Figure 2.14(a). The cost for detections associated with target 2 (z_k^2) is plotted in Figure 2.14(b). The light and dark shaded lines represent the score for assignment to track 1 and track 2, respectively. The score for a false alarm is indicated by the dashed black line. At the start and end of the scenario, the cost for assignment to the correct track is low while the cost for incorrect assignments is high. When the tracks cross near 123 seconds, the scores for the two tracks are similar. Near the crossing point, detections from target 2 are temporarily closer to track 1 (have a lower score) and vice versa.
Figure 2.14 The target-track cost for each scan in the MHT scenario in Figure 2.13: (a) the cost for detections associated with target 1, z_k^1, and (b) the cost for detections associated with target 2, z_k^2. The dark shaded line is the cost for assignment to track 1, the lightly shaded line is the cost for track 2, and the black dashed line is the cost of a false alarm. (Source: [3].)
Using only single scans to perform the assignment, it would be possible for track 1 to be corrupted by target 2 detections so that it was slowly modified to follow target 2. The target-track assignments would often switch at the crossing point. By considering the cumulative cost over multiple (10) scans and deferring the assignment decision, the correct hypothesis is always chosen for this scenario.
2.4 Bayesian Particle Filters

The tracking problem is inherently a Markov process: the target track can be completely described by its current state (position, velocity, appearance, etc.), which is constructed from its past history. Future states are predicted solely based on the current state and its statistical transition behavior. For instance, we are not utilizing knowledge of a target's itinerary to predict its destination. A Markov process is accommodated by a Bayesian framework, which naturally facilitates incorporation of contextual information and implementation of multiple hypotheses.

Bayesian estimation is typically a sequential two-stage approach, consisting of a motion model to predict the target location/orientation, followed by a template (radar signature) comparison to weight promising locations and develop an overall target probability. This approach naturally incorporates feature-aided tracking if reliable features are available (with high range resolution radar). If we are able to match each radar signature to a template derived from the previous track history (in addition to a motion model), then the performance will be superior to MHT.
The template is compared to the current radar image via a least squares approach. If the radar resolution is sufficient, the comparison can provide a high probability of correct assignment even in the presence of multiple confusers. For many applications, particle filters have replaced linear estimators, as they are a more general approach encompassing arbitrary motion and measurement models, as well as extremely non-Gaussian noise. A Bayesian framework can incorporate many possible target modes and their transition probabilities (targets may appear, disappear, split, etc.) and implementation of multiple hypotheses. The target mode can be treated as just another component of the state.

The overall distribution that we want to calculate is the probability of the current state given all previous measurements, p(x_k|z_{1:k}), with x_k representing the current state at discrete time increment k (e.g., position, velocity, and image template) and z_{1:k} representing the previous measurements. Using Bayes' theorem [21], we can write p(x_k|z_{1:k}) as
$$p(x_k|z_{1:k}) \propto p(z_k|x_k)\,p(x_k|z_{1:k-1}) \tag{2.28}$$
This probability is updated sequentially using a two-stage recursive estimation process. As with the KF, the two-stage process consists of a prediction (a priori) step that estimates the most likely target locations over each problem dimension (the second factor on the right of (2.28)), followed by a measurement-update (a posteriori) step that calculates the probability of the target at those locations (the first factor on the right of (2.28)). In the a posteriori step, the prediction statistics (e.g., the motion model) are also updated. The maximum of the probability output $p(x_k|z_{1:k})$ serves as the estimated target location in the radar image. The approach behaves similarly to MHT when no information is available about the target or terrain. In the a priori step, the second probability term on the right of (2.28) is found by integrating over the previous solution $p(x_{k-1}|z_{1:k-1})$,
$$p(x_k|z_{1:k-1}) = \int p(x_k|x_{k-1})\,p(x_{k-1}|z_{1:k-1})\,dx_{k-1} \tag{2.29}$$
where $p(x_k|x_{k-1})$ is the transition or motion model. Because evaluation of the integral in (2.29) is usually intractable, it is estimated over many ($N$) discrete states $x_n$ drawn from a continuous distribution. That is, to make the calculation manageable, we assume point-mass (or particle) approximations of the continuous probability densities (i.e., a particle tracker
[22]). The sum is performed either over a uniform grid or over a random set of locations/states $x_n$ to develop the probability density function (PDF) as a function of location using the sum
$$p(x_k|z_{1:k-1}) = \sum_{n=1}^{N} p(x_k|x_n)\,p(x_n|z_{1:k-1}) \tag{2.30}$$
In (2.30), many candidate states, developed from the previous state according to a transition or motion model, are weighted by a likelihood function calculated from a least squares comparison between each template and the radar image. The motion model, $p(x_k|x_{k-1})$, predicts the probability distribution of the next (unknown) position given the current position distribution. For instance, a motion model for ground vehicles might be represented by an affine transformation: we expect ground vehicles to move, change aspect, and get closer or farther away (grow or shrink in apparent size). The corresponding motion (or transition) model between successive radar images is a five-parameter affine transformation (one rotation, two scaling, and two translation parameters) that accounts for gradual changes in aspect to the target. In homogeneous coordinates we can write the motion model update as
$$\begin{bmatrix} x_k \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & -a_{12} & b_1 \\ a_{12} & a_{22} & b_2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{k-1} \\ 1 \end{bmatrix} \tag{2.31}$$
where $x_k$ and $x_{k-1}$ are vectors in rectilinear coordinates, $[x\ y]^T$. The parameters $b_{1,2}$ describe the translation in $x$ and $y$, and the three $a_{ij}$ parameters describe the rotation and scaling of the target template. We use an affine rather than a projective transformation, neglecting image skew (the two zeros in the bottom row of (2.31)), since we do not expect the large perspective distortions that might occur at very low grazing angles. In the absence of terrain information, we develop the distribution for $b_{1,2}$ using a Kalman filter. The $a_{ij}$ parameters are randomly generated using independent Gaussian distributions to account for a small amount of rotation and scaling of the template between measurement frames. Road, obstacle, and land cover information can be incorporated into the transition model as constraints: knowledge of the land cover type determines the locations (e.g., building versus road) that are possible for a particular target as well as the range of appropriate speeds in each location.
This information can be assimilated into the motion model to provide better estimates of the state transition probability. In the second, a posteriori step, the first probability term in (2.28), $p(z_k|x_k)$, is found by comparing each target template with the radar image. Correlations developed from these comparisons become the weights applied to each potential location. The template may consist of a single radar cell or multiple cells; particle trackers can utilize whatever target signature information is available. A key feature of the particle approach is that it allows us to readily incorporate contextual information about the scene into the tracking solution. The PDF $p(z_k|x_k)$ describes the probability of an observation given a particular target position. Into this PDF we can fold as much environmental information as possible, including predictions of multipath and occlusions. For instance, for a particular location, the predicted observation might be a target signature appearing in multiple places due to multipath. In a straightforward manner, the particle algorithm compares its prediction (including multipath) with the radar observation to develop a probability for that target location. There are two ways to perform the template comparison. We can compare the absolute RCS of the target with the template: if only a single pixel or a small number of pixels is available, then the absolute value of the radar return contains most of the relevant information about the target. However, this technique will be suboptimal during global illumination changes (e.g., atmospheric or multipath attenuation) or with large-magnitude clutter. Alternatively, if enough resolution cells are available, the relationship between the intensities of the radar cells associated with the target can be used (imaging). The approach that relies on image structure rather than absolute RCS values will generally be more resistant to noise. We could also employ a hybrid of the two approaches if the cross section values are reliable and sufficient resolution exists for imaging. If we are using the target RCS in absolute terms, we can find the best location and template by direct comparison. As described in [23], the candidate template images (multipixel or single-pixel) are arranged columnwise to form the matrix $B$. The current measured radar signature is extracted from the radar return and arranged to form a vector $\Lambda$. The radar subimage and candidate templates are resampled to have the same number of pixels. We can then calculate the likelihood $p(z_k|x_k)$ of the template matching the current observation as the least squares difference (L2-norm) for each column in $B$
$$p(z_k|x_k) = \frac{1}{\Gamma}\exp\left\{-\alpha \left\| B - \Lambda \right\|_2^2\right\} \tag{2.32}$$
where $\alpha$ is a constant controlling the shape of the Gaussian kernel and $1/\Gamma$ is a normalization factor so that the total probability over all templates is one. The constant $\alpha$ should be roughly equal to the inverse of the measurement noise variance of the radar intensity. The result in (2.32) is inserted into (2.28) to find the overall target location probability. If sufficient radar resolution is available, we can rely on the image information rather than the absolute value of the radar return. We ignore the absolute value and calculate a dummy scaling factor (that we subsequently discard) between each template and the radar image using a least squares approach. The comparison between the radar image and the set of image templates that have been rotated, translated, and scaled is performed using an L1 minimization approach [23]. Least squares minimization can be unstable and lead to singular matrices, so L1 regularization of the least squares problem is used [24] to bound the solution weights. The weight vector $c$ is found by minimizing the L2-norm (least squares difference) between $Bc$ and the current image vector $\Lambda$, with a regularization contribution equal to the L1-norm of the weight vector, $\lambda\|c\|_1$:
$$a = \arg\min_{c} \left\| Bc - \Lambda \right\|_2^2 + \lambda \left\| c \right\|_1, \qquad (c \ge 0) \tag{2.33}$$
This minimization is computationally expensive, and various methods have been adopted to reduce the number of particles that must be evaluated. For instance, a technique referred to as bounded particle resampling (BPR) [25] uses an upper-bound approach to compute weights for only the most promising template candidates. Once we have the optimal weights $a$, the likelihood $p(z_k|x_k)$ of the template matching the observation is
$$p(z_k|x_k) = \frac{1}{\Gamma}\exp\left\{-\alpha \left\| Ba - \Lambda \right\|_2^2\right\} \tag{2.34}$$
The template probabilities act as weights that multiply the a priori probabilities in (2.28) to find $p(x_k|z_{1:k})$. The highest-probability association is used as the new target position and target template. The
entire probability distribution (not just the best location) is retained for input into the next frame update.
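To make the two-stage recursion concrete, the minimal sketch below propagates a cloud of particles with a random-walk stand-in for the affine motion model of (2.31), weights them with the template likelihood of (2.32), and resamples when the weights degenerate. The translation-only state and the noise scales are illustrative assumptions, not parameters from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_update(particles, weights, template, image, alpha):
    """One predict/update cycle in the spirit of (2.28)-(2.32).  The state per
    particle is a (row, col) template position; the full affine warp of (2.31)
    is reduced to pure translation so the sketch stays self-contained."""
    h, w = template.shape
    # A priori step: random-walk stand-in for the motion model p(x_k | x_{k-1}).
    particles = particles + rng.normal(scale=1.5, size=particles.shape)
    # A posteriori step: weight each particle by the likelihood (2.32).
    for n, (r, c) in enumerate(np.round(particles).astype(int)):
        r = int(np.clip(r, 0, image.shape[0] - h))
        c = int(np.clip(c, 0, image.shape[1] - w))
        err = np.sum((image[r:r + h, c:c + w] - template) ** 2)
        weights[n] *= np.exp(-alpha * err)
    weights = np.maximum(weights, 1e-300)   # guard against total underflow
    weights /= weights.sum()                # normalize to a PDF over particles
    # Multinomial resampling when the effective particle count collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```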
2.5 Track-Before-Detect

It is desirable for airborne radar platforms to operate at maximum range from targets, where some radar signatures will be near or below the detection threshold. In situations where a target is rarely detectable, intermittent detections will be widely separated in time and space. Significant variations (scintillation) of the radar reflections can occur due to small changes in target aspect, so that targets disappear sporadically. Also, for some scenarios, target behavior will contain periods of constant velocity interspersed with sudden accelerations, turns, and so forth. If the target is not visible for an extended time and has changed position/velocity, it may be impossible to determine the correct target-track assignments when the target is once again visible (above threshold). The conventional approach to tracking is to threshold the input measurements first and use the points above threshold as inputs to the tracking algorithm. This process discards significant information contained in the raw sensor returns that is essential to forming continuous tracks when the target SNR is low. Lowering the detection threshold produces too many false alarms: for low-SNR targets, it is not possible to choose a threshold that achieves an acceptable probability of detection, Pd, versus probability of false alarm, Pfa. Conventional tracking approaches that use linear estimation and assume Gaussian distributions fail for these challenging targets. Track-before-detect (TkBD) eliminates the thresholding problem by operating on the raw sensor returns, clustering multiple possible target returns into tracks before reporting detections. Integration gain is obtained by considering the raw returns to obtain a higher Pd without increasing Pfa. TkBD also mitigates the assignment problem, where the tracker must decide which detections to associate into a single track: in TkBD, the entire measurement is used as the detection. TkBD is a nonlinear estimation problem; the entire measurement will consist mostly of noise with potentially some weak target signatures. There are several different approaches to TkBD, including maximum a posteriori (MAP) [26], the Viterbi algorithm [27], particle filters [28], and the histogram probabilistic multihypothesis tracker [29]. A good summary of the different approaches is provided in [30]. It is not possible to cover all of these approaches in this section; we review the MAP algorithm [26] below as a good example of the TkBD approach.
In the MAP TkBD algorithm, the state position, velocity, and time are all discrete variables. The state space is searched over all combinations of the state to find the most likely tracks. We consider a target state space that includes the positions, velocities, and intensity of the signal, $I_k$:

$$x_k = \begin{bmatrix} x_k & \dot{x}_k & y_k & \dot{y}_k & I_k \end{bmatrix}^T \tag{2.35}$$
The state is discretized in $x$ and $y$ as

$$x_k = \begin{bmatrix} \Delta x\, q & \dfrac{\Delta x}{\Delta t}\, r & \Delta y\, s & \dfrac{\Delta y}{\Delta t}\, t & I_k \end{bmatrix}^T, \qquad (x_k \in \chi) \tag{2.36}$$
for integers $q$, $r$, $s$, and $t$, where $\Delta t$ is the fixed discrete sampling period. The intensity $I_k$ is treated as a continuous variable. The set $\chi$ includes all possible states. We assume the same linear stochastic process as in (2.1), except that the transition matrix is two-dimensional (2-D):

$$F_k = \begin{bmatrix} F_S & 0 & 0 \\ 0 & F_S & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad F_S = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \tag{2.37}$$
The Bayesian estimator in the discrete space can be written as
$$\hat{p}(x_k|z_k) \approx K\, \hat{\lambda}(z_k|x_k) \sum_{x_{k-1} \in \chi} \hat{p}(x_k|x_{k-1})\, \hat{p}(x_{k-1}|z_{k-1}) \tag{2.38}$$
where $K$ is a normalization constant chosen so that the probabilities $\hat p(x_k|z_k)$ sum to one over all states. The observation term $\hat\lambda(z_k|x_k)$ is the probability of the entire radar measurement given that a particular state is present. This factor can be found by comparing the data with the expected target signature (e.g., a pattern comparison); its exact form depends on how the radar measurement is conducted. For instance, we might calculate the probability using a least squares comparison between the entire radar image and the target signature. As is common to all TkBD approaches, a null state $x_k = \varnothing$ is included to signify the condition where no targets are present. The null state will occur frequently in typical radar applications and must be included so that
the algorithm does not invent (extremely unlikely) tracks from the noise input. The transition model is
$$\hat{p}(x_k|x_{k-1}) = \begin{cases} 1 - P_B & x_k = \varnothing,\ x_{k-1} = \varnothing \\ P_D & x_k = \varnothing,\ x_{k-1} \neq \varnothing \\ P_B / N & x_k \neq \varnothing,\ x_{k-1} = \varnothing \\ (1 - P_D)\, \hat{a}(y_k) & x_k \neq \varnothing,\ x_{k-1} \neq \varnothing \end{cases} \tag{2.39}$$
where $N$ is the number of discrete states in $\chi$ and $y_k = x_k - F_k x_{k-1}$ is the residual error at each time step. The fixed probabilities of target birth and target death are represented by $P_B$ and $P_D$, respectively; the target detection behavior can be adjusted by modifying them. For the discrete components of the state, the probability mass function (PMF) $\hat a(y_k)$ has a uniform density around the origin that reaches midway to the next state. It compensates for the discretized state space so that nearby states form a continuous state space. A finer discretization enables a narrower PMF $\hat a(y_k)$ and better accuracy, at the cost of more computational resources. For the continuous component of the state, $\hat a(y_k)$ is set to the expected variation of the target intensity. The algorithm is initialized at the start of processing, at $k = 0$, with
$$\hat{p}(x_{k=0} = \varnothing) = 1 \quad \text{and} \quad \hat{p}(x_{k=0}) = 0 \ \ \forall\, x_{k=0} \neq \varnothing \tag{2.40}$$
The algorithm proceeds by calculating (2.38) starting from k = 1. In the first step, the rightmost factor in (2.38) is found by assuming that the null state has a probability of one, as in (2.40), and all other states have a probability of zero. In subsequent time increments, the rightmost term is the result of the calculation from the previous time increment. The calculation continues recursively until reaching $k = k_{\max}$. The algorithm reports states whose probabilities exceed that of the null state as targets; otherwise, the null state is reported. If targets are present, the final target tracks can be obtained from the state history. Considering all discrete states at every time increment makes the MAP TkBD algorithm computationally demanding; however, its structure permits implementation on parallel hardware. An opportunity exists to exploit the enormous computing potential of modern computational adjuncts to enable TkBD tracking algorithms and increase their effectiveness for low-SNR targets, as discussed in Chapter 6.
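A heavily simplified version of the recursion (2.38)-(2.40) is sketched below on a 1-D grid with a position-only state, a brightness-based stand-in for the likelihood $\hat\lambda(z_k|x_k)$, and no kinematics; the birth/death probabilities and the gain constant are illustrative assumptions, not values from [26].

```python
import numpy as np

def map_tkbd(frames, p_birth=1e-3, p_death=1e-2, gain=2.0):
    """Heavily simplified MAP TkBD recursion after (2.38)-(2.40): a 1-D grid
    of position-only states plus a null state, with no kinematics or
    intensity state.  frames: (n_frames, n_cells) raw, unthresholded data."""
    n_cells = frames.shape[1]
    p = np.zeros(n_cells + 1)      # last entry is the null state
    p[-1] = 1.0                    # initialization (2.40)
    reports = []
    for z in frames:
        # Stand-in for the observation term: brighter cells look more
        # target-like; the null state sees a flat likelihood.
        like = np.append(np.exp(gain * z), 1.0)
        # Prediction (2.39), simplified: targets persist in place or die;
        # births from the null state are spread uniformly over the grid.
        pred = np.empty_like(p)
        pred[:-1] = (1.0 - p_death) * p[:-1] + p_birth * p[-1] / n_cells
        pred[-1] = (1.0 - p_birth) * p[-1] + p_death * p[:-1].sum()
        p = like * pred
        p /= p.sum()               # the normalization constant K in (2.38)
        # Report a target only when some state beats the null hypothesis.
        reports.append(int(np.argmax(p[:-1])) if p[:-1].max() > p[-1] else None)
    return reports
```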
References

[1] Skolnik, M. I., Radar Handbook, 3rd ed., New York: McGraw-Hill, 2008.

[2] Guerci, J. R., Space-Time Adaptive Processing for Radar, Norwood, MA: Artech House, 2003.

[3] Kirk, D., S. McNeil, and K. Ohnishi, Target Tracking Toolbox, Information Systems Laboratories, 2008.

[4] Kalman, R. E., “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, Vol. 82, No. 1, 1960, pp. 35–45.

[5] Grewal, M., and A. Andrews, Kalman Filtering: Theory and Practice Using MATLAB, Englewood Cliffs, NJ: Prentice-Hall, 1993.

[6] Stark, H., and J. W. Woods, Probability, Random Processes, and Estimation Theory for Engineers, Englewood Cliffs, NJ: Prentice-Hall, 1986.

[7] Brown, R. G., and P. Y. Hwang, Introduction to Random Signals and Applied Kalman Filtering: With MATLAB Exercises and Solutions, New York: Wiley, 1997.

[8] Jazwinski, A. H., Stochastic Processes and Filtering Theory, New York: Academic Press, 1970.

[9] Valappil, J., and C. Georgakis, “Systematic Estimation of State Noise Statistics for Extended Kalman Filters,” AIChE Journal, Vol. 46, No. 2, 2000, pp. 292–308.

[10] Garriga, J. L., and M. Soroush, “Model Predictive Control Tuning Methods: A Review,” Industrial and Engineering Chemistry Research, Vol. 49, No. 8, 2010, pp. 3505–3515.

[11] Odelson, B. J., M. R. Rajamani, and J. B. Rawlings, “A New Autocovariance Least-Squares Method for Estimating Noise Covariances,” Automatica, Vol. 42, No. 2, 2006, pp. 303–308.

[12] Einicke, G. A., and L. B. White, “Robust Extended Kalman Filtering,” IEEE Transactions on Signal Processing, Vol. 47, No. 9, 1999, pp. 2596–2599.

[13] Haykin, S. S. (ed.), Kalman Filtering and Neural Networks, New York: Wiley, 2001.

[14] Schneider, R., and C. Georgakis, “How to Not Make the Extended Kalman Filter Fail,” Industrial and Engineering Chemistry Research, Vol. 52, No. 9, 2013, pp. 3354–3362.

[15] Julier, S. J., and J. K. Uhlmann, “New Extension of the Kalman Filter to Nonlinear Systems,” Signal Processing, Sensor Fusion, and Target Recognition VI, Proc. SPIE, Vol. 3068, 1997, pp. 182–194.
[16] Evensen, G., “Sequential Data Assimilation with a Nonlinear Quasi-Geostrophic Model Using Monte Carlo Methods to Forecast Error Statistics,” Journal of Geophysical Research: Oceans, Vol. 99, No. C5, 1994, pp. 10143–10162.

[17] Reid, D., “An Algorithm for Tracking Multiple Targets,” IEEE Transactions on Automatic Control, Vol. 24, No. 6, 1979, pp. 843–854.

[18] Cox, I. J., and S. L. Hingorani, “An Efficient Implementation of Reid's Multiple Hypothesis Tracking Algorithm and Its Evaluation for the Purpose of Visual Tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 2, 1996, pp. 138–150.

[19] Kuhn, H. W., “The Hungarian Method for the Assignment Problem,” Naval Research Logistics Quarterly, Vol. 2, No. 1–2, 1955, pp. 83–97.

[20] Murty, K. G., “Letter to the Editor—An Algorithm for Ranking All the Assignments in Order of Increasing Cost,” Operations Research, Vol. 16, No. 3, 1968, pp. 682–687.

[21] Arulampalam, M. S., S. Maskell, N. Gordon, and T. Clapp, “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, Vol. 50, No. 2, 2002, pp. 174–188.

[22] Khan, Z., T. Balch, and F. Dellaert, “A Rao-Blackwellized Particle Filter for EigenTracking,” Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, 2004.

[23] Mei, X., H. Ling, Y. Wu, E. Blasch, and L. Bai, “Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection,” Air Force Research Laboratory, Wright-Patterson AFB, OH, 2011.

[24] Schmidt, M., “Least Squares Optimization with L1-Norm Regularization,” CS542B Project Report, 2005, pp. 14–18.

[25] Douc, R., and O. Cappé, “Comparison of Resampling Schemes for Particle Filtering,” Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005, pp. 64–69.

[26] Stone, L. D., T. L. Corwin, and C. A. Barlow, Bayesian Multiple Target Tracking, Norwood, MA: Artech House, 1999.

[27] Barniv, Y., “Dynamic Programming Algorithm for Detecting Dim Moving Targets,” in Multitarget-Multisensor Tracking: Advanced Applications, Y. Bar-Shalom (ed.), Norwood, MA: Artech House, 1990.

[28] Rutten, M. G., N. J. Gordon, and S. Maskell, “Recursive Track-Before-Detect with Target Amplitude Fluctuations,” IEE Proceedings on Radar, Sonar and Navigation, Vol. 152, No. 5, 2005, pp. 345–352.

[29] Streit, R. L., and W. R. Lane, “Tracking on Intensity-Modulated Data Streams,” Naval Undersea Warfare Center Division, Newport, RI, 2000.
[30] Davey, S. J., M. G. Rutten, and B. Cheung, “A Comparison of Detection Performance for Several Track-Before-Detect Algorithms,” EURASIP Journal on Advances in Signal Processing, Vol. 2008, p. 41.
CHAPTER 3

Exploitation of Multipath Physics in Detection and Tracking
3.1 Introduction

As discussed in Chapter 1, in urban environments and varied terrain, and at operational standoff ranges, GMTI radars do not always have a DLOS to targets. The propagation and scattering environment is complex due to buildings, towers, and other features of the landscape. Blockage of the direct-path signal, multipath, and strong clutter render the technology typically employed by conventional radars ineffective. Additionally, the Doppler shift of moving targets will tend to be small due to the steep look-down angles required to observe targets in urban canyons, so that target signals are masked by stationary clutter and the Doppler spread induced by ownship motion [1]. These issues could be overcome if radars were equipped with a NLOS mode that exploits knowledge of the
blockage and multipath that are inherent in urban and other environments [2]. An additional issue is that the clutter resulting from the urban landscape will be heterogeneous, with very large discrete scatterers from man-made structures. This environment creates the potential for high false alarm rates and also stresses the dynamic range requirements of any MTI system. Typical approaches to clutter cancellation are only partially effective in these environments [3]. Clutter cancellation is easier for static (nonmoving) radar platforms, since the Doppler spread of the clutter is minimized and its statistics are fixed. Moving platforms present a more severe clutter mitigation problem, as the statistics must be continuously estimated on the fly. In summary, wide-area GMTI coverage is difficult to achieve in an urban environment due to:

• Signal blockage from buildings;
• Low projected Doppler shift that results from steep look-down angles;
• Strong heterogeneous clutter, which requires a large GMTI system dynamic range;
• A moving receive platform with constantly changing clutter statistics.

Target geolocation is greatly complicated by the multipath environment relative to more open environments, as significant blockage of LOS signals results from buildings and other structures. Exploiting NLOS propagation can greatly enhance the system's coverage area and can enable reliable tracking of targets. This improvement can be achieved via tracking approaches that leverage knowledge of the propagation environment (e.g., building locations combined with a site-specific urban propagation model). Moreover, it has been shown in [4] that if NLOS multipath exploitation methods are employed, far more accurate geolocation is possible than for DLOS systems. This improvement is attainable because the multipath signature is extremely specific to the particular location and motion of the target and radar platform. Typically, there will be a limited number of paths that arrive at the target and are scattered back to the receiver [5, 6]. Generally, we can expect that, at most, two to three paths will be received; more often it will be a single path, either the direct path or a multipath component that reflects once or twice off the urban terrain. The target SNR, path delay, and path
Doppler will have a unique signature that evolves with time and depends on the target, the sensor locations, and the building/terrain geometry. If the LOS and multipath signatures can be predicted for a particular target location in an urban environment, one can then work backwards to geolocate the target. The signatures for multiple candidate target routes can be compared to the measured data to determine the best fit, identifying the most likely path. The range and Doppler shift of multipath returns are extremely sensitive to the target's location. As mentioned previously, and somewhat nonintuitively, targets can be geolocated with greater accuracy when NLOS information is available (with or without DLOS). A multipath signal provides significant information concerning the target's position. In fact, the lack of any signal (i.e., a signal fade in both LOS and multipath) can accurately indicate the target's position when combined with the previous target state and knowledge of the building geometry. An archetypal urban scenario for a continuously transmitting emitter is shown in Figure 3.1. The white polygons represent buildings. The emitter location probability density function (PDF) is plotted as grayscale with black
Figure 3.1 The PDF for a continuous emitter. The white polygons represent buildings. The emitter location PDF is plotted as grayscale with the black area equaling a probability of one. (a) If a signal is detected, the emitter must have LOS to the receiver. (b) If a signal is not detected, the emitter LOS must be blocked [7].
equaling a probability of one. The probability of the target being located in the lighter gray areas is much smaller. If a signal is detected (Figure 3.1(a)), the emitter must be located in the area shaded black, with LOS to the receiver. If a signal is not detected (Figure 3.1(b)), the emitter LOS must be blocked (area shaded black). The emitter position can be estimated more accurately when the signal is not detected. In an urban environment, the presence or absence of a signal alone is not sufficient to accurately determine the target location; we must also utilize the radar returns from multiple-bounce paths to help localize and track a target. As a target moves through an urban environment, the target SNR, path delay, and path Doppler will have a unique signature that evolves with time and depends on the overall target, sensor, and building geometry. The example scene in Figure 3.2 illustrates some of the behavior expected for a moving target. For target position A, the direct-path signal is blocked; however, a single-bounce multipath signal reaches the MTI platform. If the receiver does not take multipath into account, it will calculate an incorrect position for the target. At target position B, the direct path and a single-bounce path are both possible, and the receiver may naively assume that two targets are present. As the target moves from A to B, the target signature will change in a predictable way if the building geometry is
Figure 3.2 Example urban radar scenario [8].
known. The key to tracking is prediction, which is limited by knowledge of the propagation environment. KA multipath exploitation enables accurate target geolocation even during occlusions.
3.2 Review of EM Propagation and Multipath

An essential prerequisite is a thorough analytical and experimental understanding of multipath phenomenology. Below, we discuss the development of a modeling capability that treats EM radiation as a limited number of rays and incorporates antenna polarization, antenna directionality/gain, and atmospheric loss. In addition, we describe the conversion of received time-series signals into range-Doppler space and discuss basic MTI signal processing.

3.2.1 Diffraction Assumption

For typical MTI radar carrier frequencies (greater than 1 GHz), the RF wavelengths are much smaller than salient objects in the scene, and we can safely ignore diffraction from building edges and treat the EM radiation as rays moving in straight lines (geometric optics). Simulations can be extended to lower frequencies by including diffraction effects in the propagation model. Diffraction enables low-frequency MTI radar systems to “see around the corner.” Usually, diffraction is handled in simulation using the uniform theory of diffraction (UTD), which treats the EM radiation as rays during propagation between surfaces but applies diffraction at surface edges to change the directions of rays [9]. Many of the results in this chapter were obtained from the MER program [10, 11]. Diffraction effects are assumed to be minimal at the frequencies of interest for the MER program (X- and Ku-bands). To test this assumption, we evaluated the impact of diffraction on the propagation factor for a radar operating at 16.9 GHz moving along a simulated target track in an urban environment, as shown in Figure 3.3(a). The city model and details of the radar are discussed in Section 3.6, “Radar System Analysis for Multipath Exploitation Radar.” The material properties used in the city model are listed in Table 3.1, and the city model is shown in Figure 3.4. The propagation factor is the amount of one-way loss between the MTI platform and the ground target. The number of allowable reflections (maxR in Figure 3.3(a)) in the simulation was varied between 1 and 5. Usually, the signal is dominated by the direct path and possibly a single multipath reflection. As seen in Figure 3.3(a), including additional reflections in the
Figure 3.3 Simulated X-band radar scenario. (a) One-way propagation factor (loss) along a target track in an urban environment for varying number of reflections per path. (b) The simulated one-way propagation factor with and without diffraction. The diffraction effects are 30 dB or more below the one-way LOS path [12].
Table 3.1 Material Properties for Polarization Analysis of Concrete, Wood, Stucco, and Dry Dirt (the Relative Magnetic Permeability Was Assumed to Be Equal to One) [12]

Property                      Concrete   Wood   Stucco   Dry Dirt
Roughness (mm)                1          0.2    1        20
Relative permittivity (εr)    5          4      3        5
Conductivity (mho/m)          0.1        0.3    0.02     0.01
Figure 3.4 Example city model developed from overhead imagery and displayed in Wireless InSite, corresponding to Figure 3.19 [12, 13]. Target track 1 is represented by the solid line. Target tracks 2 and 3 are included as the dashed lines.
simulation does not change the result significantly. A single reflection is usually sufficient to characterize the multipath behavior. The urban simulation, including a maximum of five reflections, was repeated with and without diffraction effects included (diffraction was implemented using the UTD). The propagation factor is plotted in Figure 3.3(b) as a function of track distance. We see that the diffraction contribution is more than 30 dB below the one-way LOS path, which means that its impact on the two-way propagation factor is 60 dB or more below a LOS path. Therefore, diffraction effects will be below the noise level and will not significantly alter the outcome of simulations at X-band or Ku-band (i.e., for the MER MTI data collections). In general, we can safely ignore diffraction in urban environments at frequencies of S-band and higher (wavelengths smaller than approximately a tenth of a meter).

3.2.2 Ray Tracing

EM radiation from the target will reflect off objects in the scene (buildings, terrain, and so forth) along more than one path (i.e., multipath) and will generally arrive at the receiver with a delay and a different Doppler shift than the DLOS path. The signal from the shortest path (often the DLOS) arrives first, followed by the signals from all of the bounce paths. To determine whether signals from a bounce path intersect the receiver location, we can perform a geometric analysis as described in Section 3.3. Ignoring diffraction, we can assume that each path can be depicted as a straight path, or ray. The signals from the independent paths are summed to calculate the expected signal at the receiver. We can model all multipath reflections as specular, since diffuse scattering in directions other than the specular will almost always be too small to be observed for geometries relevant to MTI detections. We must keep track of the horizontal and vertical electric field (E-field) strengths, $E_h$ and $E_v$, for each ray, since each component is handled differently during reflections from surfaces and upon reception at the antenna. For coherent combination of paths, the total coherent received power $P_c$ at the receiver, summed over the ray paths (labeled with subscript $p$), is given by
$$P_c \approx \frac{\lambda_c^2\, \beta}{8\pi \eta_0} \left| \sum_{p=1}^{P} E_{\phi,p}\, \hat{g}_\phi(\phi_p, \theta_p) + E_{\theta,p}\, \hat{g}_\theta(\phi_p, \theta_p) \right|^2 \tag{3.1}$$
where $E_\phi$ and $E_\theta$ are the complex electric field components relative to the antenna orientation, $\hat g_\phi(\phi_p,\theta_p)$ is the antenna gain factor for a given ray angle of arrival $(\phi_p,\theta_p)$, $\lambda_c$ is the carrier wavelength, $\eta_0$ is the free-space impedance, and $\beta$ is a measure of the frequency spectrum overlap between the transmitted signal and the receiver sensitivity. Usually, we can assume that $\beta = 1$. Also, we can assume that the receive antenna is oriented so that the horizontal electric field component is weighted by $\hat g_\phi(\phi_p,\theta_p)$ and the vertical component by $\hat g_\theta(\phi_p,\theta_p)$. However, if the platform antenna is rotated with respect to the Earth, we will need to convert the horizontal and vertical ray electric fields into the antenna frame of reference, $E_\phi$ and $E_\theta$, through a rotation. The narrowband complex electric field components are calculated by taking into account the individual Doppler shifts and phases of each ray segment. For ray segments labeled with subscript $p$, we can write the complex electric fields $E_{\phi,p}$ and $E_{\theta,p}$ from each ray as
$$E_{\phi,p} = E_{\phi,p}\, \exp\left(-i\left(2\pi f_p t + \varphi_{\phi,p}\right)\right) \tag{3.2}$$

$$E_{\theta,p} = E_{\theta,p}\, \exp\left(-i\left(2\pi f_p t + \varphi_{\theta,p}\right)\right) \tag{3.3}$$
where $i = \sqrt{-1}$, $f_p$ is the frequency of each ray segment, $\varphi_{\phi,p}$ and $\varphi_{\theta,p}$ are the phases of each polarization component, and $E_{\phi,p}$ and $E_{\theta,p}$ are the (real) azimuth and polar electric field amplitudes of the ray, respectively. The phase of each ray segment, $\varphi_p$, is related to the distance traveled along the path, $L_p$, and the phase of the previous ray path segment as

$$\varphi_p = \varphi_{p-1} + f_p L_p / c \tag{3.4}$$
where $c$ is the speed of light. The cumulative phase is a function of the total distance traveled. A phase shift will also occur in a polarization-dependent manner upon reflection from surfaces, as described below (see the Fresnel equations). We can follow a ray path along each segment, keeping track of the cumulative phase shift, to find the total phase shift between the target and receiver. Knowing the phase shift enables us to sum the complex signals from each path and find the aggregate received signal [10]. The frequency along each ray path, $f_p$, includes the Doppler shift $f_{D,p}$ caused by motion of the target or receiver:
$$f_p = f_c + f_{D,p} \tag{3.5}$$
where $f_c$ is the radar carrier frequency. Each ray will experience a Doppler shift, for each path segment, of
$$f_{D,p} = \frac{\Delta \mathbf{v} \cdot \hat{\mathbf{r}}}{\lambda_c} \tag{3.6}$$
where $\Delta\mathbf{v} = \mathbf{v}_{\text{end}} - \mathbf{v}_{\text{start}}$ is the difference between the velocities of the two ray endpoints, $\hat{\mathbf{r}}$ is the unit vector between the two ray endpoints, and $\lambda_c$ is the wavelength of the RF carrier signal. Obviously, if both endpoints of a ray segment are stationary (e.g., building walls), then the Doppler shift for that segment, $f_{D,p}$, is zero. In (3.6), we are ignoring relativistic effects, which may degrade the performance of the matched filter for large receiver (or target) velocities due to the temporal compression of the baseband waveform. Usually, we will not need to consider relativistic corrections, as typical MTI radar waveforms are designed to be robust to Doppler effects. However, if the velocity of the receiver (or target) is large (e.g., satellite motion or even a jet aircraft), relativistic effects may become important. For GMTI, the motion of the target itself will almost certainly be too small to produce relativistic effects; nevertheless, relativistic effects can be incorporated along each segment of the ray path where appropriate. Relative emitter-receiver motion has two primary effects on the signal characteristics. First, the carrier signal is shifted in frequency by $\Delta f = \Delta v / \lambda_c$, where $\Delta v$ is defined as the relative motion of the receiver toward the source, $\Delta v = v_{\text{rec}} - v_{\text{source}}$. We can write the Doppler shift in terms of the frequency of the source, $f_c$, in a general relativistic way as
$$f = \gamma f_c \left(1 - \Delta v / c\right) = f_c \sqrt{\frac{1 - \Delta v / c}{1 + \Delta v / c}} \tag{3.7}$$
where $\gamma = 1/\sqrt{1 - (\Delta v / c)^2}$. Second, the emitter baseband signal will be compressed or expanded in time. This should be obvious if we consider a single (constant-frequency, constant-phase) square pulse: if the frequency changes due to Doppler
effects, we also expect the pulse to change in (temporal) length so that the total signal phase progression remains constant. We should not simply assume that relativistic effects on MTI processing are negligible, since the radar waveform will be compressed and the timing between elements of the radar waveform, which are precisely defined, will change. Time dilation occurs on the moving platform, so that in the stationary frame of reference the signal time period $\Delta T$ is longer than the nominal $\Delta T_s$ by a factor of $\gamma$. Our measurement of the period of the signal will be shortened or lengthened, if the platform is moving toward or away from the receiver, by a factor of $(1 + \Delta v / c)$:
$$\Delta T = \frac{\Delta T_s \left(1 + \Delta v / c\right)}{\sqrt{1 - (\Delta v / c)^2}} = \Delta T_s \sqrt{\frac{1 + \Delta v / c}{1 - \Delta v / c}} \tag{3.8}$$
We can show that the total phase progression ∆Φ of the signal waveform remains constant from either frame of reference
$$\Delta\Phi = f_c \sqrt{\frac{1 - \Delta v / c}{1 + \Delta v / c}}\; \Delta T_s \sqrt{\frac{1 + \Delta v / c}{1 - \Delta v / c}} = f_c\, \Delta T_s \tag{3.9}$$
Due to Doppler effects, the received waveform will be shifted in frequency and the baseband modulating signal will be compressed in time so that the total phase progression remains constant. If the speeds are sufficiently high, the matched filter temporal length must be adjusted to find the best correlation with the received signal. For relatively low maximum receiver (and target) speeds (~100 m/s), relativistic effects will be small, so we can ignore the waveform compression and use the transmit waveform directly as the matched filter. In addition, as mentioned earlier, most radar waveforms are designed to be robust to (limited) Doppler effects.
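To make the bookkeeping in (3.4)-(3.6) concrete, the minimal sketch below walks one ray path and accumulates the per-segment Doppler shift and path phase. The function name and array layout are illustrative assumptions; a full ray tracer would also apply the polarization-dependent Fresnel reflection phase of Section 3.2.6 at each bounce.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def path_doppler_and_phase(points, velocities, f_c):
    """Walk one ray path and accumulate the per-segment Doppler shift (3.6)
    and the path phase (3.4), using the book's cycles-based phase convention.
    points: (P+1, 3) segment endpoints; velocities: matching endpoint
    velocities (zero vectors for stationary walls); f_c: carrier in Hz.
    The Fresnel reflection phase at each bounce (Section 3.2.6) is omitted."""
    lam_c = C / f_c
    f_p, phase = f_c, 0.0
    for p in range(len(points) - 1):
        seg = points[p + 1] - points[p]
        L = np.linalg.norm(seg)            # segment length L_p
        r_hat = seg / L                    # unit vector between endpoints
        dv = velocities[p + 1] - velocities[p]
        f_p += np.dot(dv, r_hat) / lam_c   # per-segment Doppler shift (3.6)
        phase += f_p * L / C               # cumulative phase update (3.4)
    return f_p, phase
```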
3.2.3 Modeling Antenna Directivity and Gain

The ray from each path will be modified by the antenna pattern (unless the antenna is omnidirectional). Most likely, the antenna or antenna array will be directional to some extent, and rays converging on the MTI receiver from different angles will experience different amounts of gain. For commercial antennas, we can use the antenna pattern supplied by the manufacturer to determine the relative signal gain of each ray as a function of azimuth and polar angle. Each ray in a simulation can be multiplied by the corresponding gain or loss obtained from the known antenna pattern versus azimuth and polar angle, $\hat g_\phi(\phi,\theta)$ and $\hat g_\theta(\phi,\theta)$. Sometimes it is more convenient to adopt a user-defined antenna beam pattern. We can approximate an arbitrary antenna pattern by modeling a planar array of discrete antenna elements. For instance, consider an antenna array in the $y$-$z$ plane with elements separated by half a wavelength. An array spacing smaller than $\lambda_c/2$ is acceptable; however, a spacing larger than $\lambda_c/2$ can result in grating lobes (maxima in the gain profile in addition to the mainlobe). If the individual antennas in the array are polarized, then the final result will be similarly polarized; we can assume any desired polarization for the array. The dependence of the combined array signal $\hat f$ is determined independently along the azimuth and polar axes by summing over all of the discrete antenna element positions $y_n$ and $z_n$, taking into account the relative phase shifts from the path length differences:
$$\hat f_\phi(\phi) = \sum_n w_{\phi,n}\, \exp\left(-i k y_n \sin\phi\right) \tag{3.10}$$

$$\hat f_\theta(\theta) = \sum_n w_{\theta,n}\, \exp\left(-i k z_n \sin\theta\right) \tag{3.11}$$
where the wavenumber is $k = 2\pi f_c / c$. We can nominally assume all of the weights $w_n$ are equal to one, although different tapers (such as Hamming, Taylor, etc.) can be applied to modify the antenna pattern and/or reduce the sidelobes. The expected (transmit or receive) antenna array power as a function of angle (i.e., the antenna directivity pattern) is calculated from the complex functions above as
$$\hat d(\phi,\theta) = \hat f_\phi(\phi)\, \hat f_\phi^{\,*}(\phi)\; \hat f_\theta(\theta)\, \hat f_\theta^{\,*}(\theta) \tag{3.12}$$
where the asterisk represents the complex conjugate. The antenna gain $\hat g(\phi,\theta)$ is the directivity in (3.12) multiplied by the antenna efficiency $\varepsilon$ (usually assumed to be one):
$$\hat g(\phi,\theta) = \varepsilon\, \hat d(\phi,\theta) \tag{3.13}$$
Example antenna patterns for an L-band (1.2 GHz) antenna array are shown in Figure 3.5. We assumed a rectangular pattern of four horizontal elements by two vertical elements for a total of eight elements. We further assumed that the backlobe antenna pattern (region from 90° to 270° azimuth) was reduced by a factor of 30 dB.
Figure 3.5 Example antenna pattern for a 4 × 2 element array at 1.2 GHz: (a) azimuth, (b) two-dimensional, and (c) elevation.
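A minimal numerical version of (3.10)-(3.12) for the 4 × 2 array of Figure 3.5 follows. The uniform weights, the angle grid, and the crude peak-over-average directivity estimate (which omits proper solid-angle weighting) are all simplifying assumptions, and the 30-dB backlobe shaping of the figure is not modeled.

```python
import numpy as np

def array_pattern(n_y=4, n_z=2, f_c=1.2e9, n_pts=721):
    """Numerical version of (3.10)-(3.12) for a half-wavelength-spaced
    n_y x n_z planar array with uniform weights (the 4 x 2 L-band example of
    Figure 3.5, without the 30-dB backlobe shaping)."""
    c = 299_792_458.0
    k = 2.0 * np.pi * f_c / c
    d = 0.5 * c / f_c                                  # lambda_c / 2 spacing
    y = (np.arange(n_y) - (n_y - 1) / 2.0) * d         # element positions y_n
    z = (np.arange(n_z) - (n_z - 1) / 2.0) * d         # element positions z_n
    phi = np.linspace(-np.pi / 2, np.pi / 2, n_pts)
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_pts)
    f_phi = np.exp(-1j * k * np.outer(np.sin(phi), y)).sum(axis=1)      # (3.10)
    f_theta = np.exp(-1j * k * np.outer(np.sin(theta), z)).sum(axis=1)  # (3.11)
    d2 = np.abs(f_phi)[:, None] ** 2 * np.abs(f_theta)[None, :] ** 2    # (3.12)
    # Crude peak-over-average directivity estimate (no solid-angle weighting).
    return phi, theta, d2, d2.max() / d2.mean()
```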
Often it is important to model antenna arrays that have particular directivity or gain values along the azimuth and polar axes. We can directly calculate the antenna directivity from the antenna pattern as the peak radiated power divided by the average radiated power. Alternatively (but less accurately), we can estimate the directivity from the half-power beamwidths. The half-power beamwidth is the angle subtended between the points where the gain drops by 3 dB. The antenna directivity (and gain) can be estimated from the half-power azimuth and elevation beamwidths $\Delta\phi$ and $\Delta\theta$, in degrees, as

$$d \approx \left(\frac{180}{\pi}\right)^2 \frac{4\pi}{\Delta\phi\, \Delta\theta} \tag{3.14}$$
The antenna gain is estimated as the scalar directivity value in (3.14) multiplied by the antenna efficiency $\varepsilon$ (usually one). In Figure 3.5, the half-power azimuth and elevation beamwidths are 26.3° and 60°, respectively, and the corresponding antenna directivity from (3.14) is approximately 26 (roughly 14 dB).

3.2.4 Modeling Atmospheric Loss

If the ray path lengths are relatively long (tens of kilometers), we may want to include atmospheric attenuation in the calculation of the propagation factor. The total atmospheric attenuation as a function of frequency is plotted in Figure 3.6, with the individual peaks from the H2O and O2 atmospheric constituents identified. We used the expressions from [14], assuming a standard day (density of 1.225 kg/m³, pressure of 1013.25 hPa, temperature of 15°C, water density of 7.5 g/m³). For frequencies higher than Ku-band (18 GHz), the path attenuation can be relatively large. In addition, significant radar loss will be encountered due to rainfall. A plot of the one-way atmospheric attenuation due to rainfall is shown in Figure 3.7 [15]; the curves represent rainfall rates of 0.25, 1.25, 5, 25, 50, 100, and 150 mm/hour (bottom to top). During heavy rainfall, MTI performance at X-band and higher frequencies will be degraded. Below carrier frequencies of 1 GHz, the loss due to precipitation is relatively insignificant. Most radars (including MTI radars) use carrier frequencies at Ku-band or below for these reasons. Operating at higher frequencies enables the use of smaller antennas, with the drawback of increased atmospheric loss, especially due to precipitation.
Figure 3.6 Total one-way atmospheric attenuation as a function of the emitter frequency [14].
Figure 3.7 Total atmospheric attenuation as a function of rainfall and the emitter frequency [15]. Rainfall rate from bottom to top is 0.25, 1.25, 5, 25, 50, 100, and 150 mm/hour.
3.2.5 Treatment of Diffuse Scattering

The propagation models discussed here tend to focus on specular-geometry scatter only. However, for rough surfaces, diffuse scattering will be significant. EM
radiation from a ray incident on most building surfaces will be distributed over a range of angles rather than at just one angle (as in the case of specular reflection). The angular spread of reflected energy depends on the amount of surface roughness at various scales (from large scale down to the wavelength of the radiation). An ideal diffuse surface would exhibit equal luminance when viewed from all directions. In cases where the surface roughness is sufficient to eliminate the specular component, diffuse scatter off the urban terrain will still tend to be strongest in the specular direction. Scattering in other than the specular direction is usually too weak to be detected at the MTI platform receiver and can safely be ignored. Thus, we can almost always incorporate diffuse scattering in simulation merely by including a scattering loss factor for the specular component at each reflecting surface. At higher radar carrier frequencies, the losses due to building surface roughness will increase (diffuse scattering will increase), potentially reducing the effectiveness of all NLOS propagation modes.

3.2.6 Modeling Surface Reflection Polarization Dependence

Polarization can have a significant effect on the strength of the reflected multipath signal. The Fresnel equations describe the behavior of EM radiation at the interface between media of differing refractive indices [16]. We can calculate the reflectance of materials using the Fresnel equations, knowing the material's relative electrical permittivity, relative magnetic permeability, and conductivity. Example material parameters for concrete are listed in Table 3.1. For most materials, the relative magnetic permeability can be assumed to be equal to one (usually a very good assumption). The reflection geometry is defined in Figure 3.8. For typical geometries, the azimuth angle $\phi$ for rays will vary over a range that is much larger than
Figure 3.8 Definition of reflection geometry. For typical MTI geometries, the MTI platform will not be directly overhead of the target. The elevation angle is shallow and the plane of incidence (and reflection) is mostly parallel to the ground plane.
the elevation angle $\theta$, so we can consider the plane of reflection to be mostly parallel to the ground plane. The receive platform will be located far away and the streets will be relatively narrow, so the elevation angles $\theta$ will tend to be small. When EM radiation impacts a wall, some of the energy is reflected and some is transmitted into (absorbed by) the wall. The three rays corresponding to the incident, reflected, and transmitted energy form a plane. For transverse electric (TE) polarization, the electric field of the radiation is perpendicular to the plane of incidence, while for transverse magnetic (TM) polarization, the magnetic field of the radiation is perpendicular to the plane of incidence. The Fresnel equations describe the fraction of reflected power $R$ separately for EM radiation with TE and TM polarization as
$$R_{TE} = \left| \frac{\sqrt{\mu_2/\varepsilon_2}\, \cos\theta_i - \sqrt{\mu_1/\varepsilon_1}\, \cos\theta_t}{\sqrt{\mu_2/\varepsilon_2}\, \cos\theta_i + \sqrt{\mu_1/\varepsilon_1}\, \cos\theta_t} \right|^2 \tag{3.15}$$

$$R_{TM} = \left| \frac{\sqrt{\mu_2/\varepsilon_2}\, \cos\theta_t - \sqrt{\mu_1/\varepsilon_1}\, \cos\theta_i}{\sqrt{\mu_2/\varepsilon_2}\, \cos\theta_t + \sqrt{\mu_1/\varepsilon_1}\, \cos\theta_i} \right|^2 \tag{3.16}$$
The symbols $\mu$ and $\varepsilon$ represent the material magnetic permeability and permittivity, respectively. The reflected power $P_R$ is related to the incident powers in the TE and TM polarizations, $P_{I,TE}$ and $P_{I,TM}$, as $P_R = P_{I,TE} R_{TE} + P_{I,TM} R_{TM}$. Note that the permittivities in (3.15) and (3.16) can be written in terms of the relative value $\varepsilon_r$, compared to the permittivity of free space $\varepsilon_0$, as $\varepsilon = \varepsilon_r \varepsilon_0$. We can simplify the Fresnel equations by writing them in terms of the index of refraction $n = \sqrt{\varepsilon_r \mu_r} \approx \sqrt{\varepsilon_r}$, assuming that the relative magnetic permeability is approximately one ($\mu_{1,2} \approx 1$). Furthermore, the index of refraction of air is one, and for our case (i.e., rays propagating in air and reflecting from solid surfaces) we can always take $n_1 = 1$. With those assumptions, (3.15) and (3.16) can be written as
$$R_{TE} \cong \left| \frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t} \right|^2 = \left| \frac{\cos\theta_i - n_2 \cos\theta_t}{\cos\theta_i + n_2 \cos\theta_t} \right|^2 \tag{3.17}$$

and

$$R_{TM} \cong \left| \frac{n_1 \cos\theta_t - n_2 \cos\theta_i}{n_1 \cos\theta_t + n_2 \cos\theta_i} \right|^2 = \left| \frac{\cos\theta_t - n_2 \cos\theta_i}{\cos\theta_t + n_2 \cos\theta_i} \right|^2 \tag{3.18}$$
If the radiation is a mixture of TE and TM, then the reflected energy will be described by the proportional combination of (3.17) and (3.18). In simulations, rays are usually described using the complex amplitude (not power). We can also write reflection coefficients for the complex ray electric field amplitude as
$$r_{TE} \cong \frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t} = \frac{\cos\theta_i - n_2 \cos\theta_t}{\cos\theta_i + n_2 \cos\theta_t} \tag{3.19}$$
and
$$r_{TM} \cong \frac{n_2 \cos\theta_i - n_1 \cos\theta_t}{n_1 \cos\theta_t + n_2 \cos\theta_i} = \frac{n_2 \cos\theta_i - \cos\theta_t}{\cos\theta_t + n_2 \cos\theta_i} \tag{3.20}$$
The Fresnel power reflection coefficients are the squared magnitudes of the amplitude reflection coefficients, $R = |r|^2$. Note that $r_{TE} \le 0$; thus the electric field amplitude of a TE field changes sign upon reflection. For a TM-mode, or p-mode, wave (assuming the sign convention implicit in (3.19) and (3.20)), the direction of the electric field is defined relative to the direction of the wave (see Figure 3.9). For TE- and TM-mode polarizations, the ray complex amplitudes are updated at every reflection as $A_{2,TE} = r_{TE} A_1$ or $A_{2,TM} = r_{TM} A_1$. For a TE-mode, or s-mode, wave, the electric field direction remains the same before and after the reflection, as shown in Figure 3.9(b); for a TM mode, the E-field direction changes according to Figure 3.9(a). The calculated Fresnel reflection coefficients for concrete, for geometries expected to be encountered in an urban multipath environment, are shown in Figure 3.10.
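The amplitude coefficients (3.19) and (3.20) are straightforward to evaluate numerically; the sketch below uses the concrete permittivity from Table 3.1 with a purely real index (conductivity neglected), which is an illustrative simplification.

```python
import numpy as np

def fresnel_amplitude(theta_i_deg, n2):
    """Amplitude reflection coefficients (3.19)-(3.20) for a ray in air
    (n1 = 1) striking a surface of refractive index n2; theta_i is measured
    from the surface normal.  |r|^2 recovers the power coefficients
    (3.17)-(3.18)."""
    ti = np.radians(np.asarray(theta_i_deg, dtype=float))
    cos_i = np.cos(ti)
    # Snell's law; the principal square root keeps this valid for complex n2.
    cos_t = np.sqrt(1.0 - (np.sin(ti) / n2) ** 2 + 0j)
    r_te = (cos_i - n2 * cos_t) / (cos_i + n2 * cos_t)   # (3.19)
    r_tm = (n2 * cos_i - cos_t) / (cos_t + n2 * cos_i)   # (3.20)
    return r_te, r_tm

# Concrete from Table 3.1 (eps_r = 5, conductivity neglected): the TM curve
# nulls near the Brewster angle, arctan(sqrt(5)), about 66 deg from normal.
r_te, r_tm = fresnel_amplitude(np.linspace(0.0, 89.0, 90), np.sqrt(5.0))
```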
Figure 3.9 (a) For a TM (or p-mode) wave, the E-field direction is defined relative to the wave both before and after reflection. (b) For a TE (or s-mode) wave, the electric field direction remains constant.
Figure 3.10 Fresnel reflection coefficients assuming the material properties of concrete in Table 3.1: (a) TE (or vertically polarized) sensor, and (b) TM (or horizontally polarized) sensor [12].
For building multipath, the reflection surface will typically be vertical (since buildings are approximately right rectangular prisms) and the MTI platform will most likely be far from the narrow street corridor. Thus, we have only considered elevation angles up to 30°. The TE and TM polarizations for this situation are shown in Figures 3.10(a) and 3.10(b), respectively. For these geometries of interest, the plane of the incident and reflected rays is almost parallel to the ground plane, with a maximum of 30°
elevation angle tilt considered in Figure 3.10. In this geometry, we note that a vertically polarized radar will see mostly TE polarization at a vertical building surface, as shown in Figure 3.10(a). Conversely, a horizontally polarized radar will produce mostly TM polarization at the vertical building surface. For a horizontally polarized sensor, there is a null (the familiar Brewster angle effect) in the reflection coefficient (see Figure 3.10(b)) that falls within the typical range of angles of interest to a radar attempting to exploit multipath in an urban environment. For some azimuth angles, the multipath reflection will therefore be extremely small for a horizontally polarized radar due to the Brewster angle effect. The impact of EM polarization is illustrated in Figure 3.11, where we show the propagation factor for a number of multipath components along a simulated target track at two elevation angles. The propagation factor here is defined as the total two-way signal loss from transmit to receive. The plots
Figure 3.11 Simulated propagation factors over an example target track. The black squares represent LOS, and the grayscale range from darkest to lightest shaded circles represents one bounce, two bounce, three bounce, and four bounce, respectively [12].
show various types of multipath and indicate the total number of reflections (exclusive of ground multipath). An elevation angle of 8° is depicted in the two left plots and an elevation angle of 14° in the two right plots. In this example, one bounce implies a single reflection from a wall during the entire two-way propagation, two bounces imply reflections from two walls during the two-way propagation, and so forth. As expected, more signal loss is experienced as the number of bounces increases. On average, the propagation factor is improved when using a vertically polarized (TE) sensor, as predicted from Figure 3.10.

3.2.7 Range-Doppler Processing

At the receiver, the signal (consisting of rays from multiple paths that are delayed and Doppler shifted) is correlated with a matched filter consisting of the complex conjugate of the transmitted waveform. For a point target in noise, we would observe a single peak (a narrowband delta-function response) corresponding to the target. If multipath echoes are present, we will observe peaks for the different bounce paths at particular ranges and Doppler shifts. The received power of the multipath signals will depend on the number of bounces, the incidence angles, the type of surfaces, the material type and finish (rough, textured, smooth), and the EM polarization. For multipath analysis, it is convenient to display the radar returns (both clutter and discrete returns from targets) in 2-D range-Doppler space (with axes of range and Doppler shift). Two types of range-Doppler processing are usually applied: fast-time and slow-time processing. For typical monostatic MTI radars, fast-time processing is almost always utilized, since it requires far fewer computational resources. In range-Doppler processing, the received signal is grouped into sets of pulses. The number of pulses in each set, or the CPI, is limited in time so that the motion of the platform and antenna beam can be ignored (for instance, a set of 20 pulses at a 500-Hz pulse repetition frequency (PRF)). The integrated target power increases as the CPI is lengthened; however, eventually target signatures will be smeared over multiple range and Doppler bins. Thus, the CPI length is typically limited so that the signal energy from a target return is localized to a single range-Doppler bin. In fast-time range-Doppler processing, the received data is organized in a 2-D matrix with the samples following each pulse arranged along each row (x-axis); moving down the y-axis (pulse axis) are samples from successive pulses. The samples after each pulse must be aligned in time relative to the transmit pulse; for instance, the first sample along each row must represent a
constant time delay from the start of the transmit pulse. For monostatic radars, this time alignment is almost always automatic, since the radar clock system is synchronized with the transmit and acquisition hardware. For bistatic radars (transmitter and receiver in different locations), this time synchronization is difficult to achieve and slow-time range-Doppler processing is necessary. A signal taper (e.g., a Chebyshev taper) is applied along each column (the y-axis, or pulse axis) to reduce the Doppler sidelobes, and then a fast Fourier transform (FFT) is taken along each column. The data can be padded with extra rows of zeros to increase the FFT resolution in the range-Doppler map. Range-Doppler processing can be implemented extremely efficiently since it is based on the FFT. As an example, we analyzed the radar flight test dataset collected under the DARPA multiple-input multiple-output (MIMO) radar program using Telephonics' RDR-1700B radar. The parameters of that system were:

• Two receive channels;
• Two MIMO transmit channels;
• X-band (9.4779 GHz);
• PRF of 4 kHz;
• Linear frequency modulated pulse (bandwidth 17.34 MHz, width 18.75 µs);
• Radar sampling rate of 20 MHz;
• CPI length of 128 ms.

Although two channels were collected, we only analyzed the first channel. An example range-Doppler map is shown in Figure 3.12(a), with clutter that is modulated by terrain features. The flight pattern for this data collection was centered over Long Island, NY. Knowing the platform velocity and antenna-pointing angle, we can derive the relationship between Doppler and azimuth angle and overlay the range-Doppler map clutter on an overhead image from Google Earth, as shown in Figure 3.12(b). Features in the range-Doppler map, such as roads, match the underlying terrain. Reflections from building walls appear as bright returns. The radar returns from ocean surf (breaking waves) are apparent at the top of Figure 3.12(b).
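A bare-bones fast-time range-Doppler sketch along these lines is shown below, assuming pre-aligned complex baseband pulses. The SciPy Chebyshev window, the 60-dB sidelobe level, and the zero-pad factor are illustrative choices, not parameters of the RDR-1700B processing.

```python
import numpy as np
from scipy.signal.windows import chebwin

def range_doppler_map(pulses, waveform, pad=2, sidelobe_db=60):
    """Fast-time range-Doppler sketch (Section 3.2.7): pulse-compress each row
    against the transmit waveform, taper down the pulse (slow-time) axis with
    a Chebyshev window, then FFT along that axis.
    pulses: (n_pulses, n_samples) complex baseband data, with each row aligned
    to a fixed delay from its transmit pulse."""
    mf = np.conj(waveform[::-1])                      # matched filter taps
    compressed = np.array([np.convolve(row, mf, mode='same') for row in pulses])
    taper = chebwin(pulses.shape[0], at=sidelobe_db)  # Doppler sidelobe control
    tapered = compressed * taper[:, None]
    # Zero-padding the pulse axis refines the Doppler bin spacing.
    rd = np.fft.fftshift(np.fft.fft(tapered, n=pad * pulses.shape[0], axis=0),
                         axes=0)
    return 20.0 * np.log10(np.abs(rd) + 1e-12)        # dB map: Doppler x range
```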
Figure 3.12 (a) An example range-Doppler map showing the clutter ridge, and (b) the clutter ridge overlaid on a Google Earth map of the terrain. Features such as roads are visible in the clutter [17].
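As a concrete illustration of this fast-time processing chain, the short Python sketch below forms a range-Doppler map from synthetic data. The parameter values loosely follow the RDR-1700B numbers quoted above, but the synthetic point-target echo, the Hamming taper (standing in for a Chebyshev taper), and all names are illustrative assumptions rather than the actual data format or processing code of [17].

```python
import numpy as np

# A minimal sketch of fast-time range-Doppler map formation.
prf = 4e3                     # pulse repetition frequency (Hz)
fs = 20e6                     # fast-time sampling rate (Hz)
n_pulses = 512                # pulses in one CPI (128 ms at 4 kHz)
n_samples = 1024              # fast-time samples recorded per pulse

# Transmitted LFM pulse (bandwidth and width as quoted above)
t = np.arange(int(18.75e-6 * fs)) / fs
chirp = np.exp(1j * np.pi * (17.34e6 / 18.75e-6) * t**2)

# Synthetic raw data: noise plus one point target with a fixed range
# delay and a Doppler-induced pulse-to-pulse phase progression
delay_bins, f_doppler = 300, 700.0
data = 1e-3 * (np.random.randn(n_pulses, n_samples)
               + 1j * np.random.randn(n_pulses, n_samples))
for m in range(n_pulses):
    phase = np.exp(1j * 2 * np.pi * f_doppler * m / prf)   # slow-time phase
    data[m, delay_bins:delay_bins + chirp.size] += phase * chirp

# Matched filter along each row (fast time / range compression)
mf = np.conj(chirp[::-1])
compressed = np.apply_along_axis(lambda x: np.convolve(x, mf, 'same'), 1, data)

# Taper along each column (slow time), then FFT to form the Doppler axis;
# zero padding to 2*n_pulses rows interpolates the Doppler bins
window = np.hamming(n_pulses)[:, None]
rd_map = np.fft.fftshift(np.fft.fft(compressed * window, 2 * n_pulses, axis=0),
                         axes=0)
print(np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape))
```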
A simulated range-Doppler map at X-band of a runway and surrounding buildings at Eglin Air Force Base (AFB) is shown in Figure 3.13. The corresponding overhead optical imagery (Google Earth) is included to highlight the fidelity of the simulation. Clutter features are evident in the simulated data, such as the runway and the strong returns due to buildings and other man-made features.

The target radar returns must compete with stationary clutter. Ground clutter that is received in the mainlobe of the radar beam is called endoclutter. It forms a ridge extending along range at zero Doppler shift (after compensating for the motion of the receive platform). The width of this endoclutter ridge depends on the maximum relative Doppler shift of all the terrain clutter in the azimuth beamwidth of the radar; thus, the width of the endoclutter ridge increases with the antenna azimuth beamwidth. Fast-moving objects (along the radial direction) will appear as discrete returns outside of this clutter ridge, while slow-moving targets must compete with the endoclutter. Note that if a target is moving perpendicular to the radial direction, it will have a Doppler shift identical to that of the ground clutter and will be difficult to observe directly. Clutter will also enter through the sidelobes of the MTI antenna (i.e., exoclutter), so even fast-moving targets must compete with the exoclutter. The endoclutter and exoclutter regimes are illustrated in Figure 3.12(a) and Figure 3.14.
Figure 3.13 Example of a range-Doppler clutter map generated for a simulated scenario at Eglin AFB. Note that the features in the radar range-Doppler clutter map are rotated relative to the optical imagery due to the particular simulated look direction [18].
3.2.8 MTI Signal Processing

In the simplest case, MTI radar operates by alternating the transmit phase of the radar by 180° each pulse (i.e., a two-pulse canceler). When the received pulses are combined, the returns from stationary clutter cancel through destructive interference, while the returns from moving targets remain. Essentially, two samples are used to filter the data (e.g., to remove stationary clutter). If we have N independent time samples (or equivalently, N temporal degrees of freedom), we can produce N − 1 nulls in range-Doppler space. Thus, the two-pulse canceler provides a single null, corresponding to zero Doppler shift.

In practice, a two-pulse canceler will experience multiple issues that degrade performance. A target moving along the radial direction produces a pulse-to-pulse phase shift in the radar return through its Doppler shift, f_Doppler = 2 v_radial / λ_c, where v_radial is the radial speed (in meters per second) and λ_c is the wavelength of the carrier (in meters). At certain blind radial speeds (when the pulse-to-pulse phase shift 2π f_Doppler ΔT_p is an integer multiple of 2π), the target echo repeats identically from pulse to pulse, so the target returns will also cancel
Figure 3.14 The aircraft antenna beam pattern (solid lines). Endoclutter enters through the mainlobe with zero Doppler (after compensation for the motion of the platform) to compete with slow-moving targets. Exoclutter enters through the sidelobes and competes with fast-moving targets.
v_blind = n λ_c / (2 ΔT_p)   (3.21)
where ΔT_p is the time between pulses. Near these radial speeds, targets will not be detected by the radar. With modern MTI radars, the pulse phases are modulated in a more complex pattern to eliminate possible blind velocities.

Two-pulse cancelers also do not typically provide good target geolocation accuracy. If the MTI radar uses a single antenna phase center, the AoA estimate for a target is limited to the azimuth beamwidth of the radar, which is usually relatively large (e.g., 3°). It is difficult to maintain track on dozens of vehicles in an urban environment with such a large cross-range position ambiguity. Note that the antenna phase center is the apparent source of radiation of an antenna (if the antenna were transmitting), which is not necessarily
the physical center of the antenna. For single-element antennas, such as a dish antenna, the antenna phase center is usually near the physical center of the antenna. For antenna arrays, the antenna phase center can move depending on the relative phase shift applied to each element in the array.

Another issue with the two-pulse MTI canceler is that the aircraft moves between pulses, so the radar returns from stationary clutter will not have identical amplitudes and thus will not cancel perfectly. The look angle to the clutter changes slightly (due to aircraft motion), modifying the signal return. Exploiting pulses over a longer time has diminishing returns, as the clutter amplitude will vary by a larger amount. A common approach to mitigating the signal amplitude change caused by the motion of the aircraft is to shift the phase center of the antenna array toward the rear of the aircraft so that the effective antenna location remains constant in space as the aircraft moves; that is, the displaced phase center antenna (DPCA) pulse canceler. For this approach to work, the signal PRF and antenna adjustment must be a function of the aircraft speed. DPCA can effectively transmit and receive two or more pulses from essentially the same location in space, enabling better cancellation of stationary clutter.

We can improve on the two-pulse canceler by using multiple (spatial) phase centers for each transmit pulse (via an antenna array) to cancel clutter using spatial rather than temporal degrees of freedom. More degrees of freedom enable nulling of other signal components in addition to stationary clutter. The problem of determining the AoA to a target is one of optimal null steering. For instance, with two nulls (an antenna array with three phase centers, or three degrees of freedom), we can steer one null at the ground clutter and the other null at a target. In this way, target AoA estimation can be significantly improved and slow-moving targets may be detected in the endoclutter. Modern MTI platforms use three or more antenna phase centers for improved target detection in endoclutter and simultaneous AoA estimation. For instance, J-STARS uses three phase centers and the Vehicle And Dismount Exploitation Radar (VADER) uses four phase centers. For more details on advanced MTI, see [4].
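The following minimal sketch computes the blind speeds of (3.21) and demonstrates the cancellation behavior of a two-pulse canceler. The Ku-band wavelength and PRF are example values taken from the MER radar parameters discussed later in this chapter, and the helper function is hypothetical.

```python
import numpy as np

wavelength = 0.0177          # carrier wavelength (m)
prf = 6800.0                 # pulse repetition frequency (Hz)
delta_tp = 1.0 / prf         # time between pulses (s)

# Blind radial speeds v_blind = n * wavelength / (2 * delta_tp), per (3.21)
blind_speeds = [n * wavelength / (2 * delta_tp) for n in range(1, 4)]
print("blind speeds (m/s):", blind_speeds)    # ~60.2, 120.4, 180.5

def two_pulse_residue(v_radial):
    """Magnitude of the two-pulse canceler output for a unit target echo.

    The pulse-to-pulse phase advance is 2*pi*f_D*delta_tp with the two-way
    Doppler shift f_D = 2*v_radial/wavelength; stationary clutter (v = 0)
    cancels exactly, and so do targets at the blind speeds."""
    f_d = 2.0 * v_radial / wavelength
    return abs(np.exp(1j * 2 * np.pi * f_d * delta_tp) - 1.0)

print(two_pulse_residue(0.0))               # 0.0  -> clutter cancels
print(two_pulse_residue(blind_speeds[0]))   # ~0.0 -> target at blind speed
print(two_pulse_residue(15.0))              # nonzero -> detectable mover
```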
3.3 Geometric Analysis: A Basic EM Simulation Capability

A full EM simulation of an urban environment can provide performance characterization or verification of radar and/or communication systems using available knowledge of the urban environment and the scene geometry. Simulated data can be compared to experimentally observed data (e.g., radar returns), assuming that the conditions of an experimental data
collection (such as scene geometry) are known and can be replicated in the simulation. In high-level signal processing software (such as MATLAB™), it may be difficult to fully simulate multipath in an urban environment in a reasonable timeframe due to the computational complexity. A full EM simulation of an urban environment requires software that has been written in a low-level programming language (e.g., C++) and optimized for speed, for example, commercially available EM simulation or ray tracing software that is mapped in a parallel fashion onto a multicore high-performance computing (HPC) architecture.

However, even a relatively simple geometric analysis can provide significant information concerning possible multipath signals. To speed up the calculation, we can limit the analysis to specular bounces only, ignoring diffuse scattering and diffraction. In addition, we can limit the multipath to single reflections. This approach provides a computationally efficient method of predicting the occurrence of multipath for ground targets in an urban environment. The resulting information can be used to analyze complex urban scenes for possible multipath and to design experiments. To predict the direct-path and multipath signals for an urban environment, we need to know:

The MTI receiver location as a function of time;
The vertex coordinates of the nearby buildings;
The building heights;
The target position, including height above ground, as a function of time.

An example geometric analysis is described next. We assume that the MTI platform is above an urban scene with the streets forming narrow corridors. The rays emanate from the receiver and can travel directly to the target or bounce off of only one surface before traveling to the target. For computational reasons, we prohibit multiple bounces. Note that most commercial ray tracers also use rays that start from the receiver to minimize the number of ray calculations.

The rays are initiated in a grid over solid angle, with a constant angular separation between rays. The choice of the number of rays defines the simulation fidelity and the required processing time: a smaller angular separation increases the number of rays and improves the simulation accuracy. To limit the amount of processing, rays emanate from the receiver only over vectors that could possibly impact the
urban environment. Typically, we initiate rays in the half sphere oriented towards the ground. For each ray, we determine which surface in the city model it intersects first. We can parameterize each ray as a vector r,
r = p v + r_0   (3.22)
where r_0 is the receiver location, v is the vector direction of the ray, and p is a real number. We determine whether the ray intersects any of the planes in the simulation (ground or building face) by calculating the value of the parameter p = p_int:

r_int = p_int v + r_0,   where   p_int = ((p_0 − r_0) · n̂) / (v · n̂),   v · n̂ ≠ 0   (3.23)
where n̂ is the unit normal vector of the plane, p_0 is any point on the plane, and r_int is the intersection point. If v · n̂ ≠ 0, there is a single point of intersection; we can ignore cases where v · n̂ = 0, which indicate that the ray is perpendicular to the surface normal (i.e., parallel to the plane). Next, we compare the intersection point r_int with the corners of that surface to determine whether that building face intersects the ray. We ignore building surfaces if the intersection point lies outside of the surface extent. The smallest value of p_int determines which plane is intersected first. If this is the first reflection of this ray, then the ray is reflected specularly, as described below. If a ray impacts none of the surfaces in the model, then it must be directed into the air; in that case, a large arbitrary value of p_int is assumed, terminating the ray at some point above the urban model.

Every time a ray strikes either the ground or a vertical surface, that ray is examined for closest approach to the target. If any ray approaches the target within a certain distance d_threshold, it is assumed to intersect the target. The minimum distance threshold is the solid angular separation between rays Θ (in radians), multiplied by the slant range R between the MTI receive platform and the target, divided by two:
d_threshold ≥ (Θ R) / 2   (3.24)
The distance threshold can be increased above the value in (3.24) to accommodate uncertainty in the target position. The minimum distance d_min, or closest approach, between a ray defined by points p_r1 and p_r2 and the target point p_target is calculated as
d_min = |(p_target − p_r1) × (p_r2 − p_r1)| / |p_r2 − p_r1|   (3.25)
where the × symbol represents the vector cross product. If d_min ≤ d_threshold, the ray intersects the target. The first endpoint of the ray, p_r1, is either the receiver location r_0 or the location of a specular bounce from the ground or a building surface. The second endpoint, p_r2, is either an intersection point (with the ground or a building face) or a ray termination point above the urban model. The closest approach to the target for each ray (see (3.25)) is compared to the threshold (see (3.24)) to determine whether an intersection occurred. Rays that do not intersect the target are ignored, while rays that intersect the target are compiled to estimate the possible multipath returns.

If an intersection occurs on a vertical wall, we redirect the ray due to the specular reflection. According to the law of specular reflection, the incidence angle of a propagated path is equal to its reflected angle. Thus, if a path propagates from a target to a reflection (specular) point on a building face, the incidence angle is derived as
φ = acos(n̂ · v̂)   (3.26)
where n̂ is the unit normal vector to the building face at the specular point and v̂ is the unit vector from the receiver to the specular point. The incidence angle φ is measured from the normal to the plane (see Figure 3.15). To find the unit vector of the reflected ray leaving the specular point, we begin by computing the intermediate vector
x̂ = n̂ × v̂   (3.27)
and then a second cross product
ŷ = n̂ × x̂   (3.28)
Figure 3.15 A three-dimensional (3-D) view of the vector geometry. The vectors x̂ and ŷ are perpendicular and in the plane of the building face. The vector n̂ is normal to the building vertical wall (adapted from [12]).
Finally, the desired unit vector of the reflected ray from the specular point (to the next intersection with the ground or vertical building face) is computed as
ŵ = (w_x, w_y, w_z) = n̂ cos(φ) + ŷ sin(φ)   (3.29)

where n̂ cos(φ) is the azimuthal vector component of ŵ and ŷ sin(φ) is the elevation vector component of ŵ. The azimuth and elevation angles of the reflected ray from the specular point can be calculated as

φ_r = atan(w_x / w_y)   and   θ_r = asin(w_z)   (3.30)
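A compact Python sketch of these ray operations is given below. It implements the ray-plane intersection of (3.23), the closest-approach test of (3.24)-(3.25), and the specular bounce; the reflected direction is computed with the standard mirror formula w = v − 2(v·n̂)n̂, which is equivalent to the construction in (3.26)-(3.30). All geometry values are made up for the demonstration.

```python
import numpy as np

def intersect_plane(r0, v, p0, n_hat):
    """Return (p_int, r_int) for the ray r = p*v + r0 against the plane
    through p0 with unit normal n_hat, per (3.23); None if parallel."""
    denom = np.dot(v, n_hat)
    if abs(denom) < 1e-12:
        return None
    p_int = np.dot(p0 - r0, n_hat) / denom
    return p_int, r0 + p_int * v

def min_distance(p_r1, p_r2, p_target):
    """Closest approach of the ray through p_r1 and p_r2 to the target, (3.25)."""
    d = p_r2 - p_r1
    return np.linalg.norm(np.cross(p_target - p_r1, d)) / np.linalg.norm(d)

def reflect(v, n_hat):
    """Specular reflection of direction v off a surface with normal n_hat."""
    return v - 2.0 * np.dot(v, n_hat) * n_hat

# One ray from an elevated receiver toward a vertical wall (the x = 20 plane);
# the demo geometry guarantees an intersection, so the result is unpacked directly
r0 = np.array([0.0, 0.0, 17.0])            # receiver on a 17-m tower
v = np.array([1.0, 0.1, -0.02]); v /= np.linalg.norm(v)
wall_point, wall_normal = np.array([20.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])

p_int, r_int = intersect_plane(r0, v, wall_point, wall_normal)
w = reflect(v, wall_normal)                 # bounced ray direction

# Does the bounced ray pass close enough to the target?  Threshold per (3.24);
# in practice it is often inflated to cover target position uncertainty.
target = np.array([15.0, 5.0, 1.25])
theta = np.deg2rad(0.01)                    # ray angular spacing (rad)
d_threshold = theta * np.linalg.norm(target - r0) / 2.0
hit = min_distance(r_int, r_int + 100.0 * w, target) <= d_threshold
print(p_int, r_int, hit)
```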
ˆ are illustrated The relationships between the vectors nˆ , vˆ , xˆ , yˆ , and w in Figure 3.15. To summarize the process for the geometric analysis of an urban scene: Initiate rays from the receiver facing the city model. The ray spacing over solid angle is a constant. For each ray, calculate the first impact point (either building or ground).
Calculate the closest approach to the target: if the minimum distance between the ray and the target is less than the threshold value in (3.24), record the NLOS path to the target and terminate the ray; otherwise continue.
Calculate the new direction of the ray via a specular bounce.
Calculate the final endpoint of the ray (either building, ground, or no intersection).
Calculate the closest approach of this final segment to the target: if the minimum distance is less than the threshold value in (3.24), record the NLOS path to the target.

This procedure indicates whether a NLOS path exists for a particular target location and geometry. This information is very useful for determining whether NLOS tracking would be effective for a particular urban geometry and target/receiver location.

An example geometric analysis is shown in Figure 3.16, indicating two buildings separated by a narrow street. The two buildings have a height of 5.75m. The MTI platform was located at a nominal azimuth angle of 135° (or southeast), and the azimuth was allowed to vary by ±1° from 135°. The target was assumed to be within the area between the two buildings indicated by the shaded rectangle. The minimum distance of possible target tracks from the building was 2m, and the target height from the ground
Figure 3.16 Multipath prediction with the receiver located southeast at an azimuth angle of 134° to 136°: (a) Elevation angle of 2° to 3°—no multipath possible, and (b) elevation angle of 4° to 5°—multipath region indicated by the white line [12].
was 1.25m. The distance between simulated target positions in the shaded rectangular area was 1m. The radius of the target was set to 0.5m to match the distance between grid points (and the target position uncertainty) in the simulation. The added lines in Figure 3.16 indicate the region shadowed from the radar signal by the nearby buildings.

In this example, two elevation angles were tested. For this geometry, no multipath return is possible at elevation angles between 2° and 3° and azimuth angles of 135° ± 1° (Figure 3.16(a)). Some multipath return is present for elevation angles between 4° and 5° (Figure 3.16(b)), as indicated by the white line. In this way, we can predict single-bounce multipath propagation for an urban environment without using a large amount of computational resources. With this simple geometric analysis tool, we can design NLOS experiments without performing full EM simulations (discussed below). Estimating whether multipath is present in the scene for specified target and platform locations is useful in selecting a platform altitude and look angle for observing multipath returns from target locations, which can then be applied to determine an optimal scene geometry for an experimental data collection or a more detailed simulation.

We could expand this geometric analysis to provide a true simulation capability with limited effort. We assume that rays emanate from the MTI platform transmit antenna(s), reflect from surfaces in the urban environment, possibly strike the target, and return to the MTI platform receive antenna(s). The rays are separated by a constant solid angle, with a smaller separation resulting in better simulation fidelity. The amplitude of each ray is adjusted as a function of angle according to the (MTI platform) antenna pattern, so that rays in the mainlobe have a larger amplitude than rays in the sidelobes, and so forth. This adjustment occurs both on transmit and receive. Some simulators instead adjust the density of the ray pattern as a function of the antenna gain; however, this approach requires a more complicated intersection test with the target or receiver. It is easier to adjust the amplitude of each ray according to the antenna pattern (with a uniform solid angle separation). The initial polarization of each ray is determined by the transmit antenna polarization.

The aggregate path length, time delay, Doppler shift, and phase, as well as the polarization of each ray, are calculated at every intersection point. At every bounce, the power loss due to the diffusive surface (a function of the material) reduces the ray power. If a ray intersects the target, we can assume a reflection back along the initial ray path. However, more sophisticated target models can be used if available; for instance, the target could be modeled as a rectangular shape with specular ray reflections. For each ray, we also test for intersection with the MTI platform.
The aggregate received signal is formed by the combination of all rays that intersect the receiver. Thus, the received signal will consist of copies of the transmit signal that are delayed, Doppler shifted, and attenuated.
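Under those assumptions, the received signal can be sketched as a sum of delayed, Doppler-shifted, attenuated copies of the transmitted waveform, one per ray that intersects the receiver. The ray list below (delay, Doppler, amplitude) is fabricated for illustration; a real simulator would produce it from the ray trace and antenna pattern described above.

```python
import numpy as np

fs = 20e6                                   # receiver sample rate (Hz)
t = np.arange(2048) / fs
tx = np.exp(1j * np.pi * 1e12 * t**2)       # transmitted pulse (toy LFM)

# (delay s, Doppler Hz, linear amplitude) for a LOS path plus two bounce paths
rays = [(4.0e-6, 1130.0, 1.0), (4.6e-6, 940.0, 0.2), (5.1e-6, -310.0, 0.05)]

rx = np.zeros_like(tx)
for delay, f_d, amp in rays:
    shift = int(round(delay * fs))          # integer-bin delay for simplicity
    doppler = np.exp(1j * 2 * np.pi * f_d * t)
    echo = np.zeros_like(tx)
    echo[shift:] = tx[:tx.size - shift]     # delayed copy of the waveform
    rx += amp * doppler * echo              # attenuated, Doppler-shifted copy
# rx now holds the superposed multipath returns for one pulse
```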
3.4 Full Electromagnetic Simulations

The primary ray-based techniques for calculating scattering behavior combine the shooting and bouncing ray (SBR) method with the UTD [19]. With SBR, rays are emitted in all directions from a receiver (or transmitter) location, with the number of rays in any direction weighted by the antenna pattern. Propagation paths from the receiver back to the transmitter result from various reflections, diffractions, transmissions, or a combination of these. The environment is characterized using a 'city file,' which contains the geometry and materials for all surfaces. This brute-force ray tracing method requires a large number of rays to accurately characterize all scattering paths between transmitter and receiver. Due to the large number of rays to consider, commercial products are typically utilized for radar ray tracing simulations.

For instance, Wireless InSite™, developed by Remcom, is an electromagnetic urban propagation software modeling tool designed for predicting signal characteristics in an area containing buildings and varied terrain. It is widely used within the U.S. intelligence community for characterizing the electromagnetic environment in urban terrain and has been extensively validated against measured data [20]. Empirical, free-space, and ray-based propagation methods (appropriate for site-specific scattering analysis) can be used with Wireless InSite. The software has multiple modeling options; the most accurate is the full 3-D model, which places no restrictions on interactions, building parameters, or sensor placements. Once the ray paths have been determined, UTD evaluates the complex electric field associated with each ray path. The LOS and multipath signal components are characterized in terms of signal strength (propagation factor), path length, and AoA at each end of the path. The Doppler shift is then readily calculated based on an assumed platform velocity.

An example of urban propagation from an elevated sensor simulated using Wireless InSite is provided in Figure 3.17 [21]. The simulation used an urban model of Rosslyn, VA (shown in Figure 3.18). The city model was created by estimating the building corners from Google Earth and heights from Google Earth Street View™. The simulated radar was operating at a relatively low frequency of 908 MHz, and the simulation included diffraction from building edges via the UTD. The power that reaches various
Figure 3.17 Receive power calculated using the Wireless InSite simulation tool. (a) Overhead view of building height map, and (b) received power [21].
Figure 3.18 Perspective view of extracted building geometries for a section of Rosslyn, VA [21].
points on the ground is shown in Figure 3.17(b). The platform was 15 km due south of the area of interest and at a 3-km altitude. Shadowing due to buildings can be observed, as well as considerable 'filling in' of shadowed areas due to multipath and diffraction.
3.5 Incorporating Building Geometries

Any EM simulation tool will require the construction of a city model. The city model is a set of buildings described by their vertices and material properties. Each simulation tool has a proprietary format to describe boundary surfaces, such as building wall positions, heights, material composition of building surfaces, and associated electromagnetic properties.

The effectiveness of any NLOS approach is dependent on the accuracy of the available KAs. In addition to the basic inaccuracies introduced by errors in the measurement of range, angle, and Doppler shift, further inaccuracies will be introduced by errors in the knowledge of the building locations, heights, properties, and so forth. It is therefore critical that the model of each urban environment be as accurate as possible.

There are different methods for entering the building locations into an EM modeling environment. One way is to determine building locations by manually selecting their corner locations from (overhead) satellite imagery such as Google Earth, as shown in Figure 3.19. In this example, a human operator selected the corners of each building in Google Earth and recorded the geocoded (latitude and longitude) points [22]. Alternatively, the building corners can be extracted automatically from overhead images using software, as described in Chapter 4. In Figure 3.19, the building heights were estimated from Google Earth's Street View. However, the building heights could also be estimated from the length of shadows in overhead imagery and the position of the Sun. Optimally, building heights (and corners) can be obtained directly from Light Detection and Ranging (LiDAR) data, if available.

Recently, city models have become available in commercial products. For instance, a 3-D model of San Diego, California, obtained from Google Earth is shown in Figure 3.20. However, Google Earth does not provide access to their building models. Extracting building models from Google Earth imagery is discussed in Chapter 4.

As an example, we developed a city model using the building corners shown in Figure 3.19 [13]. The site is located in Avondale, Arizona, in Maricopa County. The area is bounded to the north by Interstate 10 (I-10), to the south by West Van Buren Street, to the east by North Litchfield Road, and
Figure 3.19 Example overhead view of a site showing buildings selected in Google Earth [12, 13]. The circles are manually identified building corners, and the numbers are building labels.
Figure 3.20 3-D model of San Diego, California from Google Earth.
to the west by South and North Estrella Parkway. Approximately 25 buildings and multiple fields are within the test site, which spans a flat terrain area of approximately 2.5 km². The building heights and construction materials were estimated from Google Earth's Street View. In this example city model, we limited the designated material types to three building materials (concrete, wood, and stucco) and a single ground material (dirt). Example properties for the different materials are listed in Table 3.1. All of the surfaces in the city model were assigned one of the materials in Table 3.1.

Figure 3.4 shows a 3-D view of the example city model (generated using the overhead imagery in Figure 3.19) and the estimated building heights from Google Earth's Street View, along with three user-defined target tracks. Three candidate track scenarios were analyzed: a target winding its way through several closely spaced buildings (track 1), a target traveling west on I-10 (track 2), and a target traveling west on West Van Buren Street (track 3). These target tracks were chosen to represent a variety of multipath exploitation possibilities, from target locations where multipath exploitation is expected to be most beneficial (urban environments) to locations where it is unnecessary (open areas). We assumed a tower radar located southeast of the site at a range of approximately 2.5 km and a height of 17m. The resulting tower radar look angle is extremely shallow, so that LOS is often blocked by the buildings in the city model. This scenario was used to generate the example simulated multipath results in the sections below.
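A minimal sketch of such a city-model structure is shown below. The footprint coordinates, heights, and material property values are placeholders (they are not the Table 3.1 entries or the actual Avondale model), and real tools such as Wireless InSite use their own proprietary formats.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Placeholder electromagnetic properties per material:
# (relative permittivity, conductivity in S/m) -- illustrative values only
MATERIALS = {
    "concrete": (5.3, 0.03),
    "wood":     (2.0, 0.01),
    "stucco":   (4.0, 0.02),
    "dirt":     (4.0, 0.005),
}

@dataclass
class Building:
    corners: List[Tuple[float, float]]   # footprint vertices (x, y) in meters
    height: float                        # roof height above ground (m)
    material: str                        # key into MATERIALS

# Two toy buildings forming a narrow street canyon
city_model = [
    Building([(0, 0), (12, 0), (12, 30), (0, 30)], 5.75, "stucco"),
    Building([(20, 0), (32, 0), (32, 30), (20, 30)], 5.75, "concrete"),
]
```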
3.6 Radar System Analysis for Multipath Exploitation Radar

In 2007, the DARPA MER program was initiated to enable a thorough analytical and experimental understanding of multipath phenomenology. There was a need to develop an entirely new GMTI radar mode that allowed for the NLOS detection and tracking of vehicles (particularly in urban settings). The MER program was focused on conducting high-quality data collections aimed at capturing the essential characteristics of target RF propagation required for subsequent algorithm development and validation. Both tower tests and helicopter tests were conducted to obtain data for a nominally stationary and a moving platform. These data sets were used to test NLOS radar and multipath exploitation strategies.

The radar hardware used in these tests was developed by Lockheed Martin Information Systems and Global Solutions (IS&GS). The radar is a two-channel (sum and difference) Ku-band system with data recording
capability and a maximum bandwidth that exceeds 100 MHz [22]. We can calculate the SNR of the radar as

SNR = (P_p δ G_r G_t σ_o λ² M) / ((4π)³ R⁴ L_t L_p f_p k_B T F_n)   (3.31)
with the parameters provided in Table 3.2 [12]. The carrier frequency was 16.9 GHz and the radar power was relatively low (22 dBW). Since the maximum range to the targets was short, the PRF [22] was relatively high (6,800 Hz). Each CPI was 17.6 ms, corresponding to 120 pulses. The calculated SNR of 61 dB corresponds to a range of approximately 2.5 km. The clutter-to-noise ratio (CNR) for this system is calculated for each clutter cell in the simulated area using the equation
CNR = (P_p δ G_r G_t σ_c A_c λ²) / ((4π)³ R⁴ L_t L_p f_p k_B T F_n)   (3.32)
Table 3.2 Radar Parameters Used in the Target SNR Calculation [12, 22]

Parameter                    Symbol   Value      Units
Peak power                   Pp       21.76      dBW
Duty factor                  δ        0.072      —
Transmit gain                Gt       35.7       dB
Receive gain                 Gr       35.7       dB
Target RCS                   σo       10         dBm²
Wavelength                   λ        0.0177     m
Number of pulses             M        120        pulses
Number of receive channels   N        1          sum channel
Range                        R        2537       m
Losses                       Lt       10.3       dB
Propagation losses           Lp       0.205      dB
PRF                          fp       6800       Hz
Noise factor                 Fn       2          dB
Thermal noise floor          kBT      −204.24    dB
SNR                          —        61.74      dB
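A dB-domain evaluation of (3.31) with the Table 3.2 values reproduces the tabulated SNR; a minimal sketch is shown below (the CNR of (3.32) follows the same pattern, with σ_o and M replaced by σ_c and A_c). The function name and structure are illustrative only.

```python
import math

def db(x):
    """Convert a linear ratio to decibels."""
    return 10.0 * math.log10(x)

# Table 3.2 values: quantities quoted in dB stay in dB; linear ones are converted
Pp = 21.76                       # peak power (dBW)
duty = db(0.072)                 # duty factor delta (dB)
Gt = Gr = 35.7                   # transmit/receive gains (dB)
sigma_o = 10.0                   # target RCS (dBm^2)
lam, M, R = 0.0177, 120, 2537.0  # wavelength (m), pulses, range (m)
Lt, Lp, Fn = 10.3, 0.205, 2.0    # losses and noise factor (dB)
prf, kT = 6800.0, -204.24        # PRF (Hz), thermal noise floor (dB)

# Equation (3.31) evaluated term by term in the dB domain
snr_db = (Pp + duty + Gt + Gr + sigma_o + db(lam**2) + db(M)
          - db((4.0 * math.pi)**3) - 4.0 * db(R)
          - Lt - Lp - db(prf) - kT - Fn)
print(f"SNR = {snr_db:.2f} dB")  # ~61.7 dB, consistent with Table 3.2
```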
The parameters used in (3.32) are listed in Table 3.3. The CNR of 46 dB in this example is calculated for a target at a range of 2.5 km from the radar. The area A_c is the size of the scattering area on the ground and is given by

A_c = R δφ δr   (3.33)
where R is the range to the cell and Rδφ and δr are the azimuth and range extent of the scattering cell projected onto the ground.
3.7 Track Simulations of the City Model with MER Tower-Mounted Radar

The three test site tracks in Figure 3.4 were analyzed using Wireless InSite to determine the level of multipath present. The simulated radar system was mounted on a tower with a height of 17m at a range of approximately 2.6 km from the area in Figure 3.19. The tower location latitude and longitude are provided in Table 3.4. Spacing between the simulated target track positions is provided in Table 3.5, along with the target collection radius. The collection radius is used by Wireless InSite to determine which rays, propagated through the environment during the SBR process, arrive at a given target location (intercept the target) from a transmitter location. There is a lower bound on the collection radius to ensure locally valid results.

Table 3.3 Radar Parameters Used in the CNR Calculation [12]

Parameter             Symbol   Value     Units
Peak power            Pp       21.76     dBW
Duty factor           δ        0.072     —
Transmit gain         Gt       35.7      dB
Receive gain          Gr       35.7      dB
Scatter coefficient   σc       −4        dB
Clutter cell area     Ac       58.7      m²
Wavelength            λ        0.0177    m
Range                 R        2537      m
Losses                Lt       10.3      dB
Propagation losses    Lp       0.1377    dB
PRF                   fp       6800      Hz
Noise factor          Fn       2         dB
Thermal noise floor   kBT      −204.24   dB
CNR                   —        46.44     dB
Table 3.4 Tower Test Site Radar Position and Polarization [12]

Parameter                     Value
Radar operating frequency     17 GHz
Radar position (Lat/Lon/Ht)   33.43438°N, 112.358642°W, 16.67m
Radar polarization            Horizontal
Table 3.5 Tower Test Site Simulation Inputs [12]

Parameter                          Value
Target tracks                      3 (1 urban, 2 highway)
Urban track spacing intervals      2m
Highway track spacing intervals    25m
Target collection radius           1.25m
Target height                      1.25m
Ray spacing in SBR method          0.01°
Maximum number of intersections    4 reflections
The lower bound is set to ensure that, in a free-space environment, at least one ray from the transmitter is capable of intersecting the collection sphere around a target. This intersection condition occurs if r ≥ (ΘR)/2, where r is the collection radius, R is the slant range from the transmitter to the target, and Θ is the ray spacing in radians. Given a slant range of 2.6 km and the given ray spacing of 0.01°, a minimum collection radius of 0.23m is required. The collection radius can be set larger than this minimum value to account for the uncertainty in the target position; with a larger collection radius, more possible multipath rays will be identified as intersecting the target. For this analysis, a 1.25-m collection radius with a center height of 1.25m was used. The upper bound on the collection radius is not as readily defined, but it generally needs to be of a small enough scale that only paths in the immediate vicinity of the target affect calculations for its given position. Ray spacing could be reduced to allow for a smaller collection radius, but reducing the ray spacing by a factor of M increases the computation time by a factor of M² for the full 3-D propagation model.

3.7.1 Track 1 (Urban) Simulation of Tower Mounted Radar

The effects of multipath propagation are evident in the propagation factor, which measures observed path loss relative to what is expected from
free-space path loss at the target's given locations. The propagation factor of each multipath component will dictate whether it is detectable. Along track 1 (see Figure 3.4), the target winds its way through several closely spaced buildings. The simulated one-way propagation factor is shown in Figure 3.21(a). LOS is only available to the radar during the track's initial turn and the final leg, where the target passes in front of the buildings, closer to the radar. The radar had sufficient sensitivity to intermittently observe multipath signatures of the target while it moved beyond LOS, between approximately 350 and 900m of the track distance. Individual path propagation factors relative to LOS path loss are also shown in Figure 3.21(b); the grayscale indicates the number of bounces for each signal. The ground bounce marker in Figure 3.21(b) indicates where a multipath component intersected the ground prior to the target.

The direct LOS signals follow a very similar trajectory to rays that first bounce off of the ground, due to the low grazing angle of the tower test site radar. The slant range was approximately 2.6 km and the tower radar altitude was 17m, producing a very shallow grazing angle. The interaction of LOS and ground-bounce paths produces a nonzero-dB propagation factor even during LOS availability, due to the interference pattern encountered with the coherent combination of two waveforms of similar signal strength (LOS and ground bounce). Normally, surface roughness attenuates the reflected multipath components, but attenuation of ground-bounce paths at low grazing angles is minimal. This creates a very clear relationship between the LOS/ground-bounce path phase difference and the resulting propagation factor, as seen in the destructive case for this target in Figure 3.22(a). As the relative
Figure 3.21 (a) Total one-way propagation factors simulated for target track 1. The multipath propagation factor includes ground-bounce components on the LOS path. (b) Individual path propagation factors [2, 12].
Figure 3.22 (a) Resulting change in propagation factor observed during target track 1, and (b) phase difference of LOS and ground-bounce path signals [12].
phase between the LOS and ground-bounce paths approaches 180°, the propagation factor decreases due to destructive interference. The expanded version of Figure 3.21(a), shown in Figure 3.22(a), indicates that the propagation factor in areas with LOS is dominated by the ground-bounce interference. The propagation factor matches the shape of the phase difference curve (compare Figures 3.22(a) and 3.22(b)). The small-scale variability in the propagation factor is attributed to contributions from other, weaker multipath components.

Target multipath components are detected at range and Doppler locations different from those expected from LOS detection, as seen in Figure 3.23. The legend of Figure 3.21 (indicating the number of reflections) also applies to Figure 3.23. As expected, the path range shown in Figure 3.23(a) is shortest for signals with LOS. The signal Doppler shift varies over the vehicle speed (from −10 to +10 m/s) due to multipath. As the vehicle moves through the city, the geometry of the signal paths that reach the MTI receiver changes, so the recorded range and Doppler shift for the single target vehicle appear to hop over many locations. The variation in the Doppler shift is reduced once the vehicle is in front of the buildings on the final leg of the path. Obviously, conventional tracking approaches will have a difficult time monitoring a single vehicle in an urban environment if the apparent target range and Doppler shift vary suddenly. Multipath exploitation is necessary to track the target while it travels within this building complex.
Figure 3.23 Range and Doppler of multipath components for target track 1. Legend for Figure 3.21 applies. (a) Multipath ranges, and (b) multipath Doppler velocities [12].
phase difference between the two signals at the target creates the propagation factor observed in Figure 3.24(a). As the target passes in front of buildings, additional NLOS paths are possible. However, the propagation factor for these multipath signals is far below that of the direct LOS or ground-bounce paths (see Figure 3.24(b)). The bounce paths produce only a small variation in the overall propagation factor (the small oscillations in Figure 3.24(a)). The target range and Doppler velocity are characterized for the complete track duration in Figure 3.25.

3.7.3 Track 3 (Interstate) of Tower Mounted Radar

For track 3, the target moves west along I-10. Because of the low grazing angle of the radar, portions of the track are shadowed by buildings. Multipath
Figure 3.24 (a) Total one-way propagation factors observed on target track 2. The multipath propagation factor includes ground-bounce components on the LOS path. (b) Individual path propagation factors (with ground bounce on the LOS path (0-dB points) excluded) for target track 2 [12].
Figure 3.25 Range and Doppler of multipath components for target track 2. Legend for Figure 3.24 applies. (a) Multipath ranges, and (b) multipath Doppler velocities [12].
exploitation opportunities are minimal for most of the NLOS track length, as indicated in Figure 3.26: either the target is observable through LOS or it is not detectable at all. The expected range and Doppler for the target as it moves along track 3 are plotted in Figure 3.27.

3.7.4 Tower Simulation Summary

The propagation factor was simulated for a candidate tower radar system against a target moving at 10 m/s along three different tracks in an urban environment. For this scenario, the target SNR was high, greater than 50 dB when a LOS component existed, due to the short range (approximately 2.5 km) to the target. Along track 1, the target was blocked by buildings on both
Figure 3.26 (a) Total one-way propagation factors observed on target track 3. The multipath propagation factor includes ground-bounce components on the LOS path. (b) Individual path propagation factors (with ground bounce on the LOS path (0-dB points) excluded) for target track 3 [12].
Figure 3.27 Range and Doppler of multipath components for target track 3. Legend for Figure 3.26 applies. (a) Multipath ranges, and (b) multipath Doppler velocities [12].
sides, and multipath signatures were readily available for exploitation. For track 2, where LOS was always available, tracking the target is trivial. For track 3, LOS to the target was available for most of the track but occluded by buildings for a portion of the route. No multipath signatures were available on track 3 to exploit, due to a lack of buildings behind the target.
3.8 Simulations of Phoenix/Tempe and Comparison with Experiment

Additional simulations were conducted for the Phoenix/Tempe area and compared to experimental data collections. The buildings in this area had a number of desirable characteristics: the building heights were of the proper order, the streets were narrow, and the spacing of the buildings was consistent. Candidate test areas were identified near the intersection of West University Drive and the Hohokam Expressway in the Phoenix/Tempe area. An overhead image of that area from Google Earth is shown in Figure 3.28. The location was divided into quadrants for individual assessments.

The building vertices were identified manually from overhead Google Earth images to produce the city file buildup. Google Earth Street View was initially used to determine building locations and heights. Later, DARPA provided LiDAR data of the test area, which was used to refine the building heights in the city model. An overlay of the city model based on Google Earth and the LiDAR data is shown in Figure 3.29. The LiDAR data set is composed of two data sets (tiles 14 and 15) as provided by the government. The height shown is the bare-earth value subtracted from the first-return value. No shifting of either data set was required in order to get
Figure 3.28 Candidate test areas near the intersection of West University Drive and the Hohokam Expressway in the Phoenix/Tempe area [12].
Figure 3.29 LiDAR height data overlaid with city model derived from Google Earth [12].
the displayed result, indicating good agreement between the Google Earth and LiDAR data sets. A number of candidate target paths were selected in each of the four quadrants of the region using an extensive and detailed search in Google Earth. Some candidate tracks for the northwest (NW) and northeast (NE) quadrants are shown in Figure 3.30. Each of the paths shown in
Figure 3.30 Candidate target tracks (lines) from Figure 3.28: (a) NW quadrant, (b, c) NE quadrant [12].
Figure 3.30 was evaluated using geometric analysis tools as well as Wireless InSite prior to the GMTI experimental data collection.

An example analysis of the NE quadrant is illustrated in Figure 3.31 [23]. An overhead drawing of the buildings is shown in Figure 3.31(a). The buildings in these areas are spaced close together, so a radar platform viewing the streets at a shallow elevation angle will only occasionally have LOS to vehicles in the street. The areas between buildings with LOS to a radar platform (located to the southeast) are indicated by the shaded areas in Figure 3.31(a). For most of a vehicle path along the horizontal streets, the vehicle will be in shadow. The lack of LOS detections for most of any path will significantly impact the accuracy of conventional tracking algorithms. The simulated power of multipath returns in the urban canyons for this area is shown in Figure 3.31(b). In the shadowed regions, there are locations where bright multipath returns are present. Only by leveraging the multipath can a NLOS mode maintain track throughout a path that winds behind the buildings (where LOS is not available).

A geometric analysis was performed to determine whether the building spacing in the NW and NE quadrants would provide the desired multipath phenomenology at the grazing angles of interest. Examples from this analysis are shown in Figures 3.32 and 3.33. In Figure 3.32, a small section of the track from the NE quadrant was analyzed. The GMTI platform was assumed to be oriented at an azimuth angle of 224° to 226° (southwest). The elevation angle was varied from 7° to 8° (see Figure 3.32(a)), 8° to 9° (see Figure 3.32(b)), and 10° to 11°
Figure 3.31 (a) Limited LOS (indicated by the shaded areas) is available especially in horizontal roads between buildings with the radar located to the southeast. (b) There is significant multipath power available for tracking as NLOS returns illuminate urban canyons [23].
Figure 3.32 Geometric analysis of buildings from the NE quadrant. The projection of the buildings on the ground is indicated by the perspective lines. The analysis region is designated by the shaded rectangles. Multipath is possible in the black areas [12].
(see Figure 3.32(c)). This analysis was repeated for the NW quadrant in Figure 3.33. Over this range of elevation angles, a variety of conditions was observed, including LOS propagation, multipath, and combinations of LOS and multipath. The projection of the buildings on the ground is indicated by the perspective lines; the areas within the perspective lines are in shadow, so that
Figure 3.33 Geometric analysis of buildings from the NW quadrant. The projection of the buildings on the ground is indicated by the perspective lines. The analysis region is designated by the shaded rectangles. Multipath is possible in the black areas [12].
LOS is not available. The rectangular areas shaded gray were analyzed for multipath, with dark black indicating areas where multipath is possible.

Since the geometric analysis confirmed that the general building spacing would support the desired phenomenology, we conducted a more detailed simulation of specific target paths. Four target routes were designated for more detailed analysis, as shown in Figure 3.34(a). Ultimately, data collection routes were derived from combinations of parts of all of these tracks. Analyzing the entire routes as shown was sufficient to provide adequate characterization of each of the candidate target trajectories. For this analysis, we used the same system parameters as in Table 3.2. We assumed that the sensor was operated in staring mode at a particular area of the city. The primary sensor altitude for the data collection was 1 km above ground level (AGL). A relatively close slant range of 8 km was assumed in the analysis to ensure adequate power for multipath signatures. The shorter slant range resulted in only a 250-m-wide cross-range swath, given the system azimuth beamwidth. An example of the range and cross-range swath for data collection is plotted on the overhead terrain in Figure 3.34(b). The scenarios would need to be very carefully designed so that the targets would remain within the radar's collection swath throughout their track.
Figure 3.34 (a) Route designations, and (b) overlay of example collection area on the test area with a 500-m range swath and 250-m cross range swath. The radar platform was located to the southeast [12].
Using the parameters in Table 3.2 and the assumed platform range and altitude, we calculated the signal-to-interference-plus-noise ratio (SINR) for each of the four paths. The target SINR over route 1 is shown in Figure 3.35. The legend indicates the total number of multipath reflections (exclusive of ground bounces close to the target, which are, when present, included coherently with the associated non-ground-bounce path). Thus, '1 refl' indicates that there is a bounce on either the inbound or the outgoing path, and a '1 refl' path
Figure 3.35 Route 1 target SINR analysis. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth [12].
can only be present when LOS is also present. Multipath on both the inbound and outgoing paths is indicated by '2 refl,' and additional multipath reflections are indicated by '4 refl.' No three-reflection multipath signals were possible for this scenario. Locations where the signal would be detectable (12-dB threshold) are shown in Figure 3.36. Though the map indicates a somewhat sparse detection pattern, there are sufficient detectable signals to continuously track the target using a NLOS approach that exploits multipath.

The SINR result for route 2 is shown in Figure 3.37, and the corresponding areas with sufficient SINR for detection are plotted in Figure 3.38. This scenario provides detections over a larger fraction of the target path relative to route 1. In addition, it is interesting to observe the change in the detection pattern with the variation in platform azimuth; each of the routes has a unique detection pattern.

The results for route 3 are shown in Figures 3.39 and 3.40. Here we see results similar to route 2: this route also provides detections over a significant fraction of the target path, and again the detection pattern changes with the variation in platform azimuth. In particular, we note the loss of the multipath-only signals in the middle of the scenario. This loss most likely results from the larger angle with respect to the normal of the buildings, which produces longer overall path lengths. Due to the limited building heights, the multipath signals intersect the ground before reaching the target locations.
Figure 3.36 Locations where the signal would be detectable (12-dB threshold) along route 1. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth. The lightly shaded dots are target locations with LOS only, the darkest shaded dots are target locations with multipath only, and the medium shaded dots are target locations with both LOS and multipath [12].
Figure 3.37 Route 2 analysis: (a) Platform at 135° azimuth, and (b) platform at 120° azimuth [12].
Figure 3.38 Locations where the signal would be detectable (12-dB threshold) along route 2. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth. The lightly shaded dots are target locations with LOS only, the darkest shaded dots are target locations with multipath only, and the medium shaded dots are target locations with both LOS and multipath [12].
The analysis of route 4 is shown in Figures 3.41 and 3.42. The detection pattern changes significantly between the two platform azimuths.
3.9 NLOS Tracking

Data was collected over the Phoenix/Tempe area during the MER Phase I data collection that occurred in December 2008. The several data sets supplied
Figure 3.39 Route 3 target SINR analysis. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth [12].
Figure 3.40 Locations where the signal would be detectable (12-dB threshold) along route 3. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth. The lightly shaded dots are target locations with LOS only, the darkest shaded dots are target locations with multipath only, and the medium shaded dots are target locations with both LOS and multipath [12].
by the government were analyzed to assess the data quality and to validate the MER hypothesis that multipath was observable and could be predicted.

3.9.1 MER Data Collection Description

The ground truth used for this assessment, collected via Air Force GPS units in the target vehicles, consisted of target locations (latitude, longitude, and elevation) and velocity as a function of time.
Figure 3.41 Route 4 target SINR analysis. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth [12].
Figure 3.42 Locations where the signal would be detectable (12-dB threshold) along route 4. (a) Platform at 135° azimuth, and (b) platform at 120° azimuth. The lightly shaded dots are target locations with LOS only, the darkest shaded dots are target locations with multipath only, and the medium shaded dots are target locations with both LOS and multipath [12].
The radar used in the data collection was built and operated by Lockheed Martin; its parameters are listed in Table 3.2. Full documentation of the data formats and radar operating parameters can be found in [24]. The radar operates at 16.9 GHz with a 120-MHz bandwidth and a 6,800-Hz PRF. Both the sum and delta channels were recorded. In addition, an auxiliary data file was provided that contains the radar operating parameters and the platform navigation information, recorded at a rate of about one entry for every 26 pulses.
The received data was processed via conventional Doppler processing using the sum channel. The CPI for the results shown here consisted of 512 pulses (75 ms). A 60-dB Chebyshev window was applied in both range and Doppler during processing. The data was processed at equally spaced intervals. To determine the data collection time of each CPI, we used the time stamp record in the auxiliary data file that was closest to the first pulse. The time stamp recorded in the radar auxiliary file was used to align the radar data and GPS ground truth. The pulse time and the ground truth time were both recorded as integer values. Due to the uncertainty of the collection time (for time intervals of less than one second), no interpolation was performed on the ground truth data to provide subsecond accuracy.

The data collection was partitioned into individual acquisitions two minutes in length, with the radar staring at a location near the center of each route. After the experiment, the radar data was processed over each two-minute collection interval and the detections were compared with the predicted results. The target ground truth was used in conjunction with the city model to predict the expected radar returns, both LOS and multipath, using Wireless InSite. These simulated radar returns were subsequently compared to the radar observations. Multipath returns were clearly observed at the positions predicted by the propagation model.

The alignment of the ground truth and the radar data was an empirical process to determine the correct range and time offset. The time offset was determined by identifying sudden changes in the vehicle velocity from GPS data (when the vehicle was executing a turn) and correlating those features with changes in the target Doppler signature. The range swath was adjusted to align the target detections observed in the radar data with the ground truth locations from GPS.

3.9.2 Validation of Multipath Signatures

Figure 3.43 shows the analysis of an example coherent processing interval developed from the recorded radar time series data. The figure on the left is the range-Doppler map, and the figure on the right shows the location of the targets and the paths of the simulated rays from Wireless InSite. The location of each target vehicle is circled. The radar is located to the southeast. At this snapshot in time, target 2 is behind both buildings with no predicted return, and target 1 is between the two buildings with a predicted multipath return. The location (range, Doppler) of the LOS signal, if it were present, is plotted on the range-Doppler map as the square and circle for
Figure 3.43 Comparison between data and simulation demonstrating good agreement between predicted and observed multipath. (a) Measured range-Doppler map, and (b) geometry with target locations circled. The inset shown in (b) represents the multipath geometry [2, 12].
targets 1 and 2. The location of the predicted multipath is shown by the plus sign. The simulation predicts that a path reflecting from the far building should reach the target, and the location and Doppler velocity of the observed multipath agree well with the prediction: significant multipath power is present near the plus sign. This agreement is evidence that we can use tools such as Wireless InSite to successfully predict multipath and match experimental multipath signatures in urban environments.

Another comparison between the experimental range-Doppler map and simulation, demonstrating good agreement between predicted and observed LOS and multipath signals, is provided in Figure 3.44. At this time, target 1 is in front of the building, and a LOS signal is observed at the predicted location (square in the range-Doppler map). Target 2 is between the buildings, so no LOS signal is predicted or observed (circle), but a multipath return is observed near the predicted location (x symbol).

A further example of a successful multipath prediction is provided in Figure 3.45. For target 1, between the buildings, the LOS signal is detected (square in the range-Doppler map). The multipath return is also observed in the range-Doppler map near the prediction (clutter near the plus sign). No returns are observed from target 2, behind the second building, as expected (circle). Comparison between the data and the simulation exhibits good agreement between the predicted and observed LOS and multipath signals.

Another example of a multipath analysis is shown in Figure 3.46. Target 1 is in front of the building, and a LOS signal (square) and multipath signal (plus sign) are observed at the predicted locations in the range-Doppler map.
Figure 3.44 Comparison between data and simulation demonstrating good agreement between predicted and observed LOS and multipath signals. (a) Measured range-Doppler map, and (b) geometry with target locations circled. The insets shown in (b) represent the multipath geometry [12].
Figure 3.45 Comparison between data and simulation demonstrating good agreement between predicted and observed LOS and multipath signals. (a) Measured range-Doppler map, and (b) geometry with target locations circled. The inset shown in (b) represents the multipath geometry [12].
Target 2 is behind the second building. In this location, no LOS return is predicted. However, the LOS signal location (circle) falls on the clutter ridge, so it is impossible to verify the absence of a signal. The multipath signal for target 2 is observed near the predicted location (x).

A final example of a multipath analysis is shown in Figure 3.47. A LOS signal and two multipath signals are predicted and observed for target 1 (square and plus symbols) in the range-Doppler map. One of the multipath signals is close to the clutter ridge and difficult to observe. No LOS or
Figure 3.46 Comparison between data and simulation showing good agreement between predicted and observed LOS and multipath signals. (a) Measured range-Doppler map, and (b) geometry with target locations circled. The insets shown in (b) represent the multipath geometry [12].
Figure 3.47 Comparison between data and simulation demonstrating good agreement between predicted and observed LOS and multipath signals. (a) Measured range-Doppler map, and (b) geometry with target locations circled [12].
multipath signals were predicted or observed for target 2 due to the building and radar platform geometry (circle and x).

3.9.3 KA-MAP NLOS Tracker

The effectiveness and accuracy of conventional trackers can be severely degraded when multipath or blockages are present. Existing tracker approaches
can be modified to utilize multipath by incorporating the predicted multipath signatures in the a posteriori step (in either the KF or Bayesian particle filter approaches). The resulting NLOS tracker is a KA maximum a posteriori (KA-MAP) formulation that inherently facilitates the incorporation of:

• Terrain effects (occlusions, multipath, and kinematic constraints);
• Highly non-Gaussian PDFs;
• Highly nonlinear and constrained kinematics.

The calculations are amenable to both parallel processing and particle filtering techniques. In the general Bayesian formulation, the probability of state x_k given measurements z_{1:k} is
\[ p(\mathbf{x}_k \mid \mathbf{z}_{1:k}) \propto p(\mathbf{z}_k \mid \mathbf{x}_k)\, p(\mathbf{x}_k \mid \mathbf{z}_{1:k-1}) \tag{3.34} \]
In the second, or a posteriori, step (which occurs after the measurement), the first probability term p(z_k|x_k) is found by comparing each target template with the radar image. The target state x_k is defined in the previous chapter and typically consists of the 2-D position and velocity for ground vehicles. In the general particle approach, we first develop many possible target states (particles) based on estimates of the previous target position and velocity. For each particle, we then predict the range and Doppler shift of the LOS signature (if present) and the range and Doppler of each (of possibly several) multipath signatures. The LOS path may be blocked, and for many geometries no multipath may exist. This prediction step typically requires significant computational resources.

As the target moves through an urban environment, we expect the signals due to LOS and multipath to appear sporadically. The NLOS tracker must exploit this limited available information efficiently. The comparison between experiment and prediction occurs in three-dimensional power-range-Doppler space. The output of the multipath prediction stage will initially be LOS and multipath powers at precise values of range and Doppler (determined by the relative receiver-target geometry, target state, receive platform position and velocity, and city file). Thus, the predicted range-Doppler map will initially consist of a series of points with the heights representing the expected power of the LOS and multipath
signatures. Some of these calculated multipath points will be below the noise floor and can be ignored. The noise floor can be nonuniform to account for the clutter ridge. The remaining points (above the noise threshold) are multiplied by a 2-D Gaussian point spread function with a variance along each axis (of range and Doppler) that represents the uncertainty in the range and Doppler measurements. This uncertainty must include propagating error from sources such as the platform position and velocity, as well as the effect of error in the urban geometry (or city model) on the measurement. If these error sources are difficult to calculate, then it is best to overestimate the variances in the Gaussian point spread function. The resulting range-Doppler map consists of Gaussian peaks at each predicted LOS or multipath location.

A range-Doppler map is produced for every target state from the (Gaussian-spread) LOS and multipath signatures. Each range-Doppler map that is constructed represents the overall predicted observation if the target is in a particular state (location and velocity) in the urban environment. The predicted range-Doppler surface (power versus range and Doppler shift) is compared directly to the measured range-Doppler surface for each predicted location using the L1 minimization approach described in Chapter 2. That is, we subtract the two surfaces to develop the probability of a match. If a large LOS or multipath signature is predicted (at a range-Doppler location) but not observed, then the difference between the predicted and observed range-Doppler maps will be large and the assigned probability for this target state will be low. Likewise, if a large target signature is observed but not predicted, then the probability of the corresponding target state will also be low. If a particle state provides LOS and multipath signatures that match the measured data, then the a posteriori probability will be high and that state will become the best estimate of the vehicle location and velocity at that discrete time increment. This approach naturally incorporates all three possible radar target return phenomenologies:

1. DLOS skin return (moving or stationary target);
2. NLOS multipath returns (moving target);
3. Terrain-blocked return (no multipath).
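To make the a posteriori comparison concrete, the sketch below scores one candidate target state (particle) by building a predicted range-Doppler surface from a list of predicted returns and differencing it against the measured surface. This is a minimal sketch under assumed units and array shapes; the uncertainties, noise floor handling, and function name are illustrative placeholders, not values from the MER processing chain.

% Score one candidate target state (particle) in the KA-MAP tracker.
% rdMeas:  measured range-Doppler power map (linear units).
% pred:    predicted returns for this state, one row per path:
%          [range_m, doppler_hz, power_linear] (LOS and/or multipath).
function score = particleScore(rdMeas, pred, rngAxis, dopAxis, noiseFloor)
    sigR = 5;  sigD = 2;                 % assumed range/Doppler uncertainties
    [D, R] = meshgrid(dopAxis, rngAxis); % grids matching rdMeas dimensions
    rdPred = zeros(size(rdMeas));
    for n = 1:size(pred, 1)
        if pred(n,3) < noiseFloor        % ignore returns below the noise floor
            continue
        end
        % Spread each predicted point with the 2-D Gaussian point spread
        rdPred = rdPred + pred(n,3) * ...
            exp(-((R - pred(n,1)).^2/(2*sigR^2) + ...
                  (D - pred(n,2)).^2/(2*sigD^2)));
    end
    % L1 difference between the predicted and measured surfaces; a small
    % difference maps to a high (unnormalized) state probability.
    score = -sum(abs(rdPred(:) - rdMeas(:)));
end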
Due to the presence of urban structures and multipath, the inputs to the NLOS tracker concerning the possible target states are extremely non-Gaussian. Thus, the target state PDF also becomes very non-Gaussian over
time. The tracker naturally considers multiple target states (target locations and velocities) whose probability is refined over time through the sparse target LOS and multipath signatures that are detected. This MAP formulation is inherently multihypothesis. The drawback of this KA-MAP NLOS tracking approach is that it requires significant computing resources to continuously predict multipath signatures for a set of target states. However, with modern computing architectures it is now possible to exploit multipath in real time to enable NLOS tracking.

An example of a target PDF in an urban environment is shown in Figure 3.48. The target state probability is concentrated at various discrete locations in the urban terrain. This distributed PDF is continuously modified to incorporate new information from both the presence and the lack of target signatures (where they are expected).

3.9.4 NLOS Tracker Results

A traditional KF tracker was applied to the experimental radar returns from a vehicle in the NE quadrant. The radar platform was located to the southeast. The track using a traditional KF approach is indicated in Figure 3.49 by the black squares. The true vehicle path is indicated by the light circles. When the vehicle goes behind the row of buildings, LOS is lost and the KF tracker (squares) erroneously assumes that the vehicle continues along the street in a straight line due to the lack of new information. The KA-MAP NLOS tracker (diamonds) is able to exploit multipath signatures to determine the true target location and accurately track the vehicle path. The
Figure 3.48 Example target location PDFs for two sequential time increments. In the NLOS tracker, the target model becomes highly nonlinear in urban terrain. This nonlinear target model leads to highly non-Gaussian PDFs [7].
Figure 3.49 Comparison of ground truth with tracking result for route 2. The true track of the vehicle is indicated by the circles. The output of the NLOS tracker is indicated by the diamonds. Using LOS alone (squares), the tracker finds an incorrect path [25].
NLOS tracker correctly identifies target locations behind the row of buildings from the (limited) multipath signatures.

The NLOS tracker was also applied to the experimental radar returns along four different paths in the NE quadrant, as shown in Figure 3.50. Each path is indicated by a different grayscale shaded square. In each case, the NLOS tracker accurately geolocates the vehicle, despite the lack of any direct path signal over sections of each track. The tracker output was compared with ground truth to develop estimates of the statistical accuracy
Figure 3.50 The RMS error of the NLOS tracker over four different paths through the NE quadrant. The radar platform was located to the southeast [25].
over the entire track. The RMS position error (the difference between each estimated position and ground truth) over each track is 6.3m for route 1, 15.7m for route 2, 4.7m for route 3, and 15.7m for route 4.

An alternative NLOS tracker (developed by [23]) was applied to the radar returns from two vehicles in the NW quadrant, as shown in Figure 3.51. The true vehicle tracks (ground truth) are indicated by the solid lines. The estimated trajectories using the NLOS tracker are included as the dashed lines. In each case, the NLOS tracker is able to accurately identify the vehicle path; the tracker output (dashed lines) matches the ground truth (solid lines) well. The building geometry is outlined by the lightly shaded lines.

The position error for the NLOS tracker compared to ground truth along path #1 as a function of time is shown in Figure 3.52 (corresponding to the darkest shaded path in Figure 3.51). The position error compared to ground truth along path #2 as a function of time is shown in Figure 3.53 (the bottom, lighter shaded path in Figure 3.51). The position errors occur when both LOS and multipath target signatures are temporarily unavailable as the target moves along its trajectory. Without updated information, the NLOS tracker assumes that the target continues along its previous velocity vector. If LOS and multipath signals are not available during a turn, the NLOS tracker will temporarily assume that the vehicle moves in a straight line. As in conventional trackers, once a signal is detected again, the tracker
Figure 3.51 Two true vehicle tracks (ground truth) are indicated by the solid lines. The estimated trajectories using an NLOS tracker are included as the dashed lines. The radar platform was located to the southeast [23].
Figure 3.52 The position error for an NLOS tracker compared to ground truth as a function of time for path #1 (top, darker shaded path in Figure 3.51) [23].
Figure 3.53 The position error for an NLOS tracker compared to ground truth as a function of time for path #2 (bottom, lighter shaded path in Figure 3.51) [23].
will be able to accurately predict the target location. However, NLOS trackers can also extract information from the total lack of a signal. For instance, if the tracker expects a target LOS or multipath signature and one is not observed, then the tracker is still able to localize the target (location/velocity) state by eliminating possible paths (target states) from consideration. The lack of a signal is almost as important as a signal itself, as both conditions carry significant information about the target.
3.10 Summary

Conventional trackers often do not perform well in urban environments due to the complicated propagation and scattering environment, blockage of the direct path signal, presence of multipath, and strong clutter. However,
with knowledge of the urban terrain (e.g., a city model), we can make accurate predictions of LOS blockages and multipath. In this chapter, we have shown several comparisons between multipath predictions and experimental range-Doppler signatures in an urban environment. By incorporating information on both occlusions and multipath propagation into the a posteriori step of a tracker, we can accurately predict the target state over time despite limited LOS to the target. The lack of a signal (either direct LOS reflection or multipath) also provides information about the target state. Accurate predictions that utilize knowledge of the urban terrain enable robust operation in an urban environment. The resulting target PDFs are spatially distributed and extremely non-Gaussian. We refer to this tracking approach as knowledge-aided maximum a posteriori (KA-MAP). The performance of the KA-MAP NLOS tracking approach has been validated experimentally using data collected during the DARPA MER program.
References

[1] Guerci, J. R., Space-Time Adaptive Processing for Radar, 2nd ed., Norwood, MA: Artech House, 2014.

[2] Techau, P. M., D. R. Kirk, L. K. Fomundam, and S. C. McNeil, “Experiment Design, Test Planning, and Data Validation for an Urban Mode Radar Data Collection,” Information Systems Laboratories, ISL-SCRO-TR-09-008, 2009.

[3] Fertig, L. B., J. M. Baden, and J. R. Guerci, “Knowledge-Aided Processing for Multipath Exploitation Radar (MER),” IEEE Aerospace and Electronic Systems Magazine, Vol. 32, No. 10, 2017, pp. 24–36.

[4] “DARPA Strategic Technology Office (STO) Industry Day,” Arlington, VA: Defense Advanced Research Projects Agency (DARPA), 2013.

[5] Hirschler-Marchand, P. R., and G. F. Hatke, “Superresolution Techniques in Time of Arrival Estimation for Precise Geolocation,” Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, IEEE, Vol. 2, 2002, pp. 1272–1277.

[6] Li, X., and K. Pahlavan, “Super-Resolution TOA Estimation with Diversity for Indoor Geolocation,” IEEE Transactions on Wireless Communications, Vol. 3, No. 1, 2004, pp. 224–234.

[7] Guerci, J., “Knowledge-Aided Radar Tracking in Complex Terrain,” 2010.

[8] Hulbert, C., B. Watson, and J. Bergin, “Detection and Tracking of Moving Vehicles and Dismounts Across the Land-Sea Boundary,” Information Systems Laboratories, 2012.
[9] Kouyoumjian, R. G., and P. H. Pathak, “A Uniform Geometrical Theory of Diffraction for an Edge in a Perfectly Conducting Surface,” Proceedings of the IEEE, Vol. 62, No. 11, November 1974, pp. 1448–1461.

[10] Peterson, A. F., S. L. Ray, and R. Mittra, Computational Methods for Electromagnetics, New York: IEEE Press, 1998.

[11] Durek, J., “Multipath Exploitation Radar Industry Day Presentation,” DARPA, Herndon, VA, 2009.

[12] Techau, P. M., D. R. Kirk, L. K. Fomundam, and S. C. McNeil, “Experiment Design, Test Planning, and Data Validation for an Urban Mode Radar Data Collection,” Information Systems Laboratories, ISL-SCRO-TR-09-008, March 2009.

[13] McNeil, S., and L. Fomundam, “Urban Electromagnetic Propagation Simulation Tools,” Information Systems Laboratories, March 2009.

[14] ITU-R, “Attenuation by Atmospheric Gases,” Recommendation ITU-R P.676-5, 2001.

[15] Ho, C. M., C. Wang, K. Angkasa, and K. Gritton, “Estimation of Microwave Power Margin Losses Due to Earth’s Atmosphere and Weather in the Frequency Range of 3–30 GHz,” report for the U.S. Air Force, 2004.

[16] Hecht, E., Optics, Reading, MA: Addison-Wesley, 1987.

[17] Image reproduced with permission from Telephonics Corporation, Farmingdale, NY, 2018.

[18] Bergin, J., “Vertically Integrated Sensing, Tracking, and Attack,” Information Systems Laboratories, 2007.

[19] Weinmann, F., “Ray Tracing with PO/PTD for RCS Modeling of Large Complex Objects,” IEEE Transactions on Antennas and Propagation, Vol. 54, No. 6, 2006, pp. 1797–1806.

[20] http://www.remcom.com/wireless-insite/examples/.

[21] Kirk, D., K. Ohnishi, S. McNeil, J. Bergin, and P. Techau, “Transmitter Geolocation in Urban Terrain,” DARPA Urban Propagation Workshop, October 4–5, 2006.

[22] Schneider, L., “Multi-Path Exploitation Radar (MER) Phase 1—Data Collection Kickoff, Task 1—Radar,” Lockheed Martin Document Number 5694A003-001-00, 2008.

[23] Fertig, L. B., M. J. Baden, J. C. Kerce, and D. Sobota, “Localization and Tracking with Multipath Exploitation Radar,” IEEE 2012 Radar Conference, pp. 1014–1018.
[24] “Dataset Documentation for the Multi-Path Exploitation Radar (MER),” Lockheed Martin Document Number 5694-A008-001-00, November 21, 2008.

[25] Bergin, J., D. Kirk, P. Techau, and J. Guerci, “Knowledge-Aided Geolocation,” Information Systems Laboratories, January 18, 2010.
CHAPTER 4

Terrain Databases

Greater fidelity in the terrain and city model used by the electromagnetic simulation and multipath prediction tools will lead to improved performance of KA NLOS tracking. In this chapter, we discuss different sources of data regarding the terrain, the geometry of buildings in urban areas, and the scattering physics. We limit our discussion to publicly available sources of information.
4.1 Terrain Databases

The underlying terrain (without buildings and other man-made structures) is usually fully described by three popular databases: Digital Terrain Elevation Data (DTED) (see https://www.nga.mil/ProductsServices/TopographicalTerrestrial/Pages/DigitalTerrainElevationData.aspx), land cover (for example, see https://nationalmap.gov/landcover.html), and the radar scattering coefficient versus angle, which is typically a database derived from experimental sources (see the discussion below). At every point, the terrain height and slope are specified via DTED, while the type of terrain is specified by a land cover
map. We obtain the scattering power for each land cover terrain type via a look-up table using the incident angle of the radiation relative to the terrain normal. High-fidelity physics-based models can also be used, depending on the required accuracy (for example, see http://www.islinc.com/rfview). Urban terrain databases are discussed in Section 4.2 and are, of course, of critical importance to NLOS MER radar.

4.1.1 DTED

The terrain height over a latitude/longitude grid is critical for determining if LOS is available from an emitter or receiver location to every terrain pixel in the radar field-of-view (FOV). In addition, the height and slope of the terrain will affect the amplitude of scattered radiation (as, of course, will the type of land cover). Typically, databases of the Earth’s surface height (and land cover type) are partitioned into a uniform geocoded (latitude/longitude) grid. Alternatively, equal area conic projections [1] are used, but these can be interpolated to a uniform grid over a local area. We can usually assume that the terrain database is single valued, with one terrain height for every geocoded position (ignoring caves, overhangs, etc.). With that assumption, the database can be restricted to two dimensions so that the memory requirements are relatively small. A database of the terrain height with a 30m resolution for the entire Earth can easily be stored on a modern (2018) computer.

The level of the DTED database is defined as a function of the grid spacing, as specified by military standard MIL-PRF-89020B [2]:

• Level 0 has a spacing of approximately 900m;
• Level 1 has a spacing of approximately 90m;
• Level 2 has a spacing of approximately 30m.

DTED for the entire planet is publicly available from the United States Geological Survey (U.S.G.S.) website [3]. High-resolution (level 2) DTED is available from the Shuttle Radar Topography Mission (SRTM): choose ‘Digital Elevation’ and ‘One Arc-Second SRTM’. Downloads are available in one-degree latitude/longitude patches or through bulk transfers. The data is available in multiple formats, including ‘GeoTiff’ picture files that can easily be read into the MATLAB environment.
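For example, a downloaded one arc-second SRTM tile in GeoTiff format can be read and displayed as follows. This is a minimal sketch assuming the MATLAB Mapping Toolbox; the file name is a placeholder.

% Read a one-degree SRTM tile downloaded from the U.S.G.S. as GeoTiff.
[elev, refInfo] = geotiffread('n36_w117_1arc_v3.tif');  % placeholder file
elev = double(elev);                  % terrain height in meters (MSL)
imagesc(elev); axis image; colorbar;
title('SRTM one arc-second terrain height (m)');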
4.1.2 Land Cover

Land cover data is a bit more difficult to obtain for the entire Earth. One public source of data is available at [4]. It consists of separate data for tree cover and bare ground as integer percentages at every terrain point (1% to 100%). In addition, there is a binary water mask. These data sets can be combined to produce a more detailed estimate of the land cover at every point. For instance, using the percentages, we assigned the land cover types into five categories: 11 water, 23 residential, 24 highly developed, 41 grass, and 81 forest:

• In every location where the water mask was set, we assigned that location to water (11);
• If the tree cover was equal to or higher than 50%, we assigned the land cover type to forest (81);
• If the tree cover was less than 50% and the bare ground was equal to or higher than 50%, we assigned the land cover type to highly developed (24);
• If the tree cover was equal to or greater than 20% but less than 50%, and the bare ground was equal to or greater than 20% but less than 50%, we assigned the land cover type to residential (23);
• If the tree cover was less than 20% and the bare ground was less than 20%, we assigned the land cover type to grass (41);
• Everything else was assigned to forest (81).

These criteria were chosen arbitrarily, and the code values (11, 23, 24, 41, 81) were designed to match the U.S.G.S. land cover assignments (a code sketch of this assignment logic is given below). An example is shown in Figure 4.1(b) for the Chesapeake Bay area. The corresponding DTED map, limited to a maximum altitude of 40m, is included in Figure 4.1(a).

Land cover data with more detailed terrain type assignments is also publicly available at one arc second (30m) resolution over limited areas. For instance, land cover data is available for the contiguous United States arranged state-by-state in [5]. The data for each state overlaps with adjacent states by 10 pixels or 300m. The data is in Albers equal area conical projection format and can be converted with commercially available tools such as Erdas Image Viewer [6]. An example of the land cover for Florida is
Figure 4.1 (a) DTED for the Chesapeake Bay area, and (b) corresponding land cover. Legend is 11 water, 23 residential, 24 highly developed, 41 grass, and 81 trees.
shown in Figure 4.2. The land cover data include a much larger number of land cover classifications (see the legend) [7]. Data for the entire contiguous United States is included in Figure 4.3.
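The five-category assignment logic listed above can be sketched as follows, where tree, bare, and water are assumed to be same-sized arrays holding the tree-cover percentage, bare-ground percentage, and binary water mask (the variable names are illustrative).

% Assign land cover codes (11 water, 23 residential, 24 highly developed,
% 41 grass, 81 forest) from tree/bare percentages and a water mask.
cover = 81 * ones(size(tree));       % default: forest (covers tree >= 50%
                                     % and the 'everything else' rule)
cover(tree < 20 & bare < 20) = 41;   % grass
cover(tree >= 20 & tree < 50 & ...
      bare >= 20 & bare < 50) = 23;  % residential
cover(tree < 50 & bare >= 50) = 24;  % highly developed
cover(water == 1) = 11;              % water mask overrides everything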
Figure 4.2 The land cover for Florida. Legend is from [7].
Figure 4.3 Land cover for the contiguous United States of America. Legend is from [7].
4.1.3 Scattering Power versus Incident Angle

In electromagnetic simulations, we obtain the scattering power for each land cover terrain type via a look-up table using the incident angle of the radiation relative to the terrain normal. To create the scattering tables, we interpolate between the values in references [8–10]. Example plots of the S-band polarimetric scattering coefficients for various land cover types (desert, grass, road, shrub, tree, and ocean) are included in Figure 4.4. Note that the ocean scattering curves do not go to zero at low grazing angles, as would be predicted by purely Bragg scattering from capillary ocean waves. As observed experimentally, due to foam created by ocean turbulence, the scattering coefficient approaches a constant at low grazing angles.

These scattering coefficient values are tabulated for monostatic geometries (the radar emitter and receiver are colocated so that the radiation has the same incident and scattering angles). To simulate bistatic scattering using the monostatic scattering coefficients from the tables, we use the monostatic-bistatic equivalence theorem [11]. That is, the bisector between the incident and scattered vectors is used as the monostatic scattering vector to provide the monostatic incident angle in the look-up tables (illustrated in Figure 4.5). If the incoming and outgoing rays are nearly specular, the bisector polar angle will approach the surface normal vector and a large
Figure 4.4 The polarimetric scattering coefficient as a function of grazing angle (grazing angle is defined as 90° minus incident angle) for different land cover types at S-band (derived from [8–10]): (a) desert, (b) grass, (c) road, (d) shrub, (e) tree, and (f) ocean.
Figure 4.5 The bistatic bisector angle is substituted for the monostatic angle in the scattering model via the monostatic-bistatic equivalence theorem.
scattering coefficient will be generated, as expected.
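The bisector substitution can be sketched as follows, assuming ki and ks are unit vectors from the surface patch toward the emitter and receiver, n is the unit surface normal, and grazTable/sigma0Table hold a tabulated monostatic scattering coefficient versus grazing angle (all names are illustrative).

% Monostatic-bistatic equivalence: substitute the bisector of the
% incident and scattered directions for the monostatic look direction.
b = (ki + ks) / norm(ki + ks);       % bisector unit vector
grazDeg = 90 - acosd(dot(b, n));     % bisector grazing angle (degrees)
sigma0 = interp1(grazTable, sigma0Table, grazDeg, 'linear', 'extrap');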
4.2 Urban Databases

The availability and fidelity of urban terrain databases is quite variable. Fortunately, the amount of high-quality, publicly available 3-D building data increases every year. Figure 4.6 is a 3-D map of midtown Manhattan obtained from Google Earth Pro (https://www.google.com/earth/desktop/). However, as discussed at the end of this chapter (Section 4.2.4), it is not always possible to extract this information (for free) from Google Earth. In Section 4.2.1, we discuss methods for deriving 3-D data from readily available 2-D databases.

4.2.1 Extracting Building Geometries from 2-D Databases

The most direct way to establish the building geometry in an urban area is for a human operator to select the corners of buildings in overhead images. An overhead image (e.g., from Google Earth) is loaded onto a computer screen and an operator manually selects the corner of each building with a cursor. Subsequently, ground-level images (e.g., from Google Street View) can be used to estimate the height of each building. This manual height estimate is an inexact process based on the number of floors in each building. For instance, in Figure 4.7, there are four floors on the building to the left, corresponding to a height of approximately 40 feet.
Figure 4.6 3-D map of midtown Manhattan via Google Earth.
Figure 4.7 Google Street View image of 130 N. Washington Ave., Scranton, Pennsylvania.
Another method for estimating building heights is to use the length of shadows in overhead images. Building height is proportional to the length of its shadow. Initially, a small subset of the building heights is found using street-level images. Then the heights of the remaining buildings can be estimated by (manually) measuring the relative length of each shadow (Figure 4.8). However, this approach will only work for buildings spaced widely enough that the extent of each shadow can be identified. If the buildings are closely spaced, this method may not be feasible.

4.2.2 Automatic Corner Detection Using Image Processing Algorithms

Selecting building corners in large overhead images can be tedious for a human operator. With modern computing resources, corner detection can be performed automatically using image processing algorithms. Usually a human is still required to check the resulting corner output for mistakes. In this section, we describe an automated image processing method for extracting building corners in the MATLAB environment.

Historically, there have been many approaches to corner detection using image processing algorithms. Unfortunately, the great majority of the corner detection approaches described in the literature are not effective at
Figure 4.8 Building heights are proportional to the shadow length (double arrows) in overhead images (Google Earth™).
detecting building corners and extracting building geometries. Most corner detectors are more accurately classified as feature detectors, where features are regions of sharp image curvature that may or may not correspond to actual physical corners. The feature curvature is typically measured using self-similarity [12], eigenvalues of the Hessian matrix [13], or an estimate of the number of separate contiguous regions surrounding each pixel in the image [14, 15]. Feature detectors are most useful for other types of image processing (e.g., image registration) where the consistency of the detection is more important than relevance to the geometric structure. Thus, most historical corner detectors are not effective for extracting the physical geometry of urban structures.

To extract building locations for simulations of urban environments, we need corner detection to define the physical structure of objects (buildings) in the image, especially the positions of walls. Therefore, it is important to have a detector that reports only corners created from three or more planar surfaces. To find corners more reliably, it makes sense to consider only areas in the image where two or more lines terminate, as line associations provide significant evidence of corners. Corner determination using lines is advantageous, as it samples a much larger pixel area than the usual corner feature approaches. In addition, the computational cost is lower for combined line/corner detection, since corner detection uses the result of the previous line finding operation instead of testing every pixel in the image.

To find boundaries in overhead images that pertain to physical structures, we need a line detector that also provides the location of the line endpoints. If multiple line endpoints occur in a small, localized area, then we can assume that the intersection point is a corner corresponding to a physical structure (building). There are a number of conventional line detecting algorithms for images, including Hough Transform-based algorithms, as well as Canny or other edge detection approaches [16, 17]. Hough Transform-based line detection is unsatisfactory, as it is computationally expensive and does not provide a convenient method for extracting line endpoints. Techniques based on edge detection (e.g., Canny or Laplace of Gaussian zero crossing) find the edges in an image and then perform straight line detection. The primary drawback of edge detection-based methods is that they sometimes miss important straight lines that do not coincide with the maximum intensity gradient. A method to extract all reasonable straight lines in an image is required for building detection.

For the method to work on any image regardless of the light level, all thresholds must be adaptive. That is, the criteria for deciding whether an
image component (line, edge, etc.) is relevant must always depend on a relative comparison to local noise and signal levels. Second, it is useful to apply competition so that any component is detected only if its signal strength exceeds that of nearby similar components. An analogous process, whereby strong signals inhibit nearby weaker signals, operates in the visual systems of mammals and is referred to as lateral inhibition. Finally, any image processing step must be able to fill in missing information if doing so improves the overall consistency of the result.

An approach used to assist robot navigation [18] is very effective at picking out lines due to its speed and simplicity. However, we modified that method extensively, as described below. As in many line finding approaches, the first and higher order derivatives of the image are computed using Laplace of Gaussian (LofG) operators to provide a larger spatial extent to the derivative operation and thus reduce noise. In essence, we are convolving the image with a Gaussian operator and then applying a derivative operator. However, the two operators can be combined into a single step by taking the derivative of the operator before convolution, which is mathematically equivalent to
\[ \frac{d}{dx}\left[G(x,y) * F(x,y)\right] = \frac{d}{dx}\int G(x-\tau, y-v)\,F(\tau,v)\,d\tau\,dv = \int \frac{d}{dx}G(x-\tau, y-v)\,F(\tau,v)\,d\tau\,dv = \left[\frac{d}{dx}G(x,y)\right] * F(x,y) \tag{4.1} \]
where G(x, y) and F(x, y) represent the operator and image data, respectively. The corresponding first and second order derivative operators are
\[ \frac{d}{dx}G(x,y) \propto -\frac{x}{\sigma_x^2}\exp\left[-\left(\frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2}\right)\right] \quad \text{and} \quad \frac{d^2}{dx^2}G(x,y) \propto \frac{x^2 - \sigma_x^2}{\sigma_x^4}\exp\left[-\left(\frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2}\right)\right] \tag{4.2} \]
where σx and σy dictate the amount of smoothing. In the following examples, we typically used σx=0.75 pixels normal to the line and σy=5 pixels tangential to the line, although the results are relatively insensitive to the amount of smoothing tangential to the line. The derivatives along the x
and y direction can be applied independently and thus mixed derivative operators can be obtained by extension of the results above. Example LofG operators are shown in Figure 4.9. The spatial scale of the operator can be varied by changing σx. We can use multiple scales to delineate lines that correspond to fuzzy boundaries. For typical overhead images, a single spatial scale is sufficient and only a single scale (σx=0.75) was employed for the examples below.
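The first-derivative operator of (4.2) can be generated and applied with a few lines of MATLAB; this is a minimal sketch, and the kernel support and image file name below are placeholders.

% Build the first-derivative LofG operator of (4.2) and convolve it
% with an overhead image to obtain the gradient normal to the x-axis.
sigx = 0.75; sigy = 5;                % smoothing normal/tangential to line
[x, y] = meshgrid(-8:8, -8:8);        % kernel support (pixels)
G = -(x ./ sigx^2) .* exp(-(x.^2/(2*sigx^2) + y.^2/(2*sigy^2)));
img = double(imread('overhead.png')); % placeholder image file
if size(img, 3) > 1, img = mean(img, 3); end   % collapse to grayscale
grad = conv2(img, G, 'same');         % oriented gradient response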
Figure 4.9 (a) An example first derivative LofG operator with σx=0.75 pixels and σy=5 pixels, (b) an example mixed first derivative LofG operator (first derivative normal and first derivative tangent to line), and (c) an example mixed second derivative LofG operator (first derivative normal and second derivative tangent to line).
In the original fast line finder [18], we encountered several issues. The detected lines either ran past the correct endpoints or terminated early. Furthermore, extraneous lines were detected, since the thresholding process cannot adequately compensate for the varying contrast. If the threshold is increased, valid lines are removed along with the extraneous lines. To correct these deficiencies, we employed several strategies.

In the original fast line finder, the LofG operators are applied to the image only along the horizontal and vertical orientations and then quantized (or binned) into a limited number of buckets (subimages) according to their gradient direction. In the updated algorithm, we used 16 angular orientations instead of just two to increase the angular specificity of each bucket. As before, each pixel was binned according to its maximum angular response. The threshold at each pixel was determined adaptively using a combination of the mean gradient response and the response from nearby lines.

We employed lateral inhibition to select only the most definite lines in the image. Strong lines suppress nearby weaker lines with similar angular orientations. The lateral inhibition was accomplished using the operator shown in Figure 4.10(a), which was applied after the binning procedure. The total inhibition in any particular bin included the response from nearby bins scaled by a factor of the cosine of the angular difference between those
Figure 4.10 (a) A lateral inhibition operator used to reduce the gradient response near a strong line, and (b) the line stop filter—one half of the 1st derivative LofG operator (renormalized so that the mean value is zero).
bins. Thus, a strong line would inhibit not only nearby lines at the same angle, but also, to some extent, nearby lines at slightly different angles.

A connected components algorithm [19] is then run on each bucket independently. Any connected region that is sufficiently straight is deemed a line, and the line endpoints are calculated using the bounding box. The line endpoints were adjusted by applying the first and second mixed derivative LofG operators (Figure 4.9(b, c), respectively). That is, we applied the first derivative normal to the line and the first or second derivative tangent to the line. We determined that the line endpoint was reached when the first derivative along the line indicated decreasing gradient intensity and the second derivative along the line was positive. To ensure that a line endpoint was not detected in the middle of a valid line, left and right line stop filters were also used (Figure 4.10(b)). Either the left or right line stop filter must be below the adaptive threshold (in addition to the proper sign for the first and second mixed derivatives) for a line endpoint detection to occur. Once the line endpoint was reached, gradient points past the detected endpoint were suppressed. Proper line terminations improve corner detection, as they reduce spurious line associations.

In addition, a line recovery algorithm was developed. Extraneous gradient points due to noise can occasionally merge separate connected pixel regions that correspond individually to valid lines. These merged regions do not pass the straightness criterion and are thus rejected. The recovery algorithm is computationally expensive; fortunately, the lateral inhibition described above reduces the probability of connected lines considerably. Still, we must be able to handle the occasional merged region. It is not possible to recover the separate lines using linear regression alone; the slope and intercept from the combined region will not correspond to either line individually. Instead, a most probable line is extracted using a sum-of-Gaussians approach. The density of pixels both normal and tangential to the region is calculated. The highest peaks on either side of the line correspond to the most probable line. That line is extracted from the pixel region and the remaining pixels are sent back to the connected components algorithm for further analysis.

As in the original algorithm, if a gradient was larger than the mean noise level for that bucket, it was added to that bucket. A connected components algorithm was run on each bucket and a straightness criterion was applied to each connected region to determine if it could be represented by a line. However, as we described above, if a region does not pass the straightness
criterion, the recovery algorithm is used to extract the most probable line if possible. The only free parameter is the minimum acceptable line length.

In addition to the changes described above, we also added a line merging capability. In complicated scenes, lines are often interrupted due to poor image quality, shadow, diffraction near edges, and so forth. The line merging capability connects nearby line pairs only if their endpoints are close relative to the length of the shorter line. The maximum possible misalignment is determined by the length of each line and the maximum possible error in each slope due to line quantization.

Once we identify the important lines in an image, we are able to use line convergence to indicate valid corners related to the physical geometry of the scene (Figure 4.11). Convergence between lines is determined by examining the endpoint of every line in the image. The necessary search distance around each line is linearly proportional to the length of the line. Accurate line endpoints are essential to eliminating spurious line associations by reducing the amount of area around each line that must be included in the search. Lines with similar angles …

… A value greater than zero signifies a vector oriented upward, away from the center of the Earth. In a similar manner, we can also find the elevation angle at the terrain point to the emitter. The expression (5.12) is used with q found at the terrain patch location (rather than the emitter location) and the unit vector having the opposite sign.
\[ \hat{\mathbf{p}}_{e-t} = \frac{\mathbf{p}_{emitter} - \mathbf{p}_{terrain}}{\left\| \mathbf{p}_{emitter} - \mathbf{p}_{terrain} \right\|} \tag{5.13} \]
\[ \varphi_{terrain} = 90^{\circ} - \cos^{-1}\left(\mathbf{q}_{terrain} \cdot \hat{\mathbf{p}}_{e-t}\right) \tag{5.14} \]
Using the procedure above, we can calculate the elevation angle to every point in our range-azimuth map on a curved ellipsoidal Earth. An example of the elevation angle as a function of range from the emitter and azimuth angle for the terrain in Death Valley, California, is shown in Figure 5.4(a). We assumed that the emitter was raised 10m above the valley floor, which was at an altitude of –82m, significantly below sea level. Because the emitter is in a valley, the calculated elevation angles from the emitter to the terrain are strongly positive for the nearby surrounding mountains and lower past the mountains. Obviously, the nearby hills will block LOS to terrain farther away.

To calculate LOS, for every azimuth angle bin, we move along range starting from the emitter out to the maximum range (100 km in this case). If the elevation angle is the largest (most positive) angle encountered thus far, then there is LOS between the emitter and that clutter patch. If the elevation angle is not the largest (most positive) angle encountered thus far, then that terrain patch is in shadow. The binary LOS determination to all nearby points for the Death Valley example is shown in Figure 5.4(b). The bright areas have LOS and the dark areas are in shadow. As expected, the areas beyond the nearby hills are hidden from the emitter, except along the valley floor.
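For each azimuth bin, this marching procedure reduces to a running maximum of the elevation angle along range. A minimal sketch follows, assuming elev is an azimuth-by-range matrix of elevation angles in degrees, as in Figure 5.4(a).

% Boolean LOS mask from an elevation-angle map (azimuth x range).
% A patch has LOS if its elevation angle is the largest encountered
% so far moving outward in range along its azimuth bin.
runMax = cummax(elev, 2);            % running maximum along range
losMask = (elev >= runMax);          % true wherever a new maximum occurs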
Figure 5.4 (a) Elevation angle as a function of range from the emitter and azimuth angle with north equal to 0° and clockwise rotation for positive angles. (b) Boolean LOS mask as a function of range from the emitter and azimuth angle. Bright areas have LOS and dark areas are in shadow.
For a bald, curved Earth, we would expect a visual LOS horizon range of 11.3 km (for a high-frequency emitter) at a height of 10m above the terrain. Note that for lower frequency emitters, the radar signal will extend past the visual horizon due to diffraction [7]. The visual LOS horizon range for a perfectly spherical Earth is
\[ r \approx \sqrt{h^2 + 2R_e h} \tag{5.15} \]
where Re is the radius of the Earth and h is the height of the emitter above the terrain. This equation provides a decent estimate at low altitudes …

… where ρ > 0 is the radius of curvature measured from the origin and β0 is the polar angle of the incident ray. As before, to simplify the calculation, we can assume ρ → ∞ in (5.28) to represent an incident plane wave. The values of Ds and Dh are calculated using
\[ D_{s,h}(\varphi, \varphi'; \beta_0) = \frac{-\exp(j\pi/4)\sqrt{L}}{\sqrt{\pi}\,\sin\beta_0}\left[\hat{f}(kL, \varphi-\varphi')\exp\left(j2kL\cos^2\frac{\varphi-\varphi'}{2}\right)\mathrm{sgn}(\pi+\varphi'-\varphi) \mp \hat{f}(kL, \varphi+\varphi')\exp\left(j2kL\cos^2\frac{\varphi+\varphi'}{2}\right)\mathrm{sgn}(\pi-\varphi'-\varphi)\right] \tag{5.30} \]
Note that the term in brackets in (5.30) consists of two terms that are subtracted or added to produce the parameters Ds or Dh, respectively. The Fresnel integral function f̂ in (5.30) is
\[ \hat{f}(kL, \eta) = \int_{\sqrt{2kL}\cos(\eta/2)}^{\infty} \exp(-j\tau^2)\, d\tau \tag{5.31} \]
For an incident plane wave, the distance L is defined as L = s sin²β0. If the incident wave is not a plane wave, then the distance L is written more generally as
\[ L = \frac{s\left(\rho_e^i + s\right)\rho_1^i \rho_2^i \sin^2\beta_0}{\rho_e^i\left(\rho_1^i + s\right)\left(\rho_2^i + s\right)} \tag{5.32} \]
The expressions for diffraction from other edge shapes such as curved screens and wedges can be found in [12].
5.3 SEKE

SEKE was developed by Lincoln Laboratory to predict losses for RF waves propagating over terrain [15, 16]. The model uses characteristics of the underlying terrain profile between each end of the propagation path to compute the loss in excess of free space using a weighted sum of multipath, KED, and spherical earth diffraction models. SEKE has been validated for use at frequencies from HF through X-band [17, 18]. The results in this section were taken from [15].

Figure 5.8 gives an example of the SEKE propagation factor predictions at 1.5 GHz for a scenario where the transmitter (Tx) and receiver (Rx) are both located in the open ocean with a separation of 65 km. The receive antenna has an altitude of 200m above the sea surface, and the transmit antenna altitude is varied from 200 to 600m above the ocean surface. As expected, the propagation factor (F) exhibits a lobing structure, since the dominant propagation mechanism in this type of scenario is multipath. In SEKE, the multipath signals
Figure 5.8 Top: Propagation scenario with the transmitter and receiver over the sea. Bottom: Plots of the power propagation factor (F) for both horizontal (HH) and vertical (VV) polarizations [15].
are combined coherently so that they either constructively or destructively interfere with the direct path signal, depending on the geometry.

The propagation factor for a similar geometry is shown in Figure 5.9, except that the emitter and receiver are both located over land at White Sands Missile Range in New Mexico with a separation of 54 km. As in the previous example, the height of the transmitter is varied. The dominant terrain feature is a large mountain between the transmitter and receiver. For low transmitter heights, the direct LOS path is blocked by the mountain. This occlusion gives rise to significant propagation losses, as predicted by the KED routine in SEKE. As the transmitter height is increased to around 300m, the propagation loss approaches 0 dB. For heights greater than 300m, the propagation is a combination of direct LOS and multipath. However, the lobing structure that was observed over the sea is not as pronounced, because the power of the ground bounce path is reduced by the significant roughness of the terrain.

The high-fidelity propagation loss predictions that SEKE provides may not be required in all modeling applications. In some cases, a simpler model
Figure 5.9 Top: Propagation scenario with the transmitter and receiver over land at White Sands Missile Range, New Mexico. Bottom: Plots of the power propagation factor (F) for both horizontal (HH) and vertical (VV) polarizations [15].
that determines the propagation factor based on whether or not there is a direct LOS path between the transmitter and receiver may be adequate. In general, LOS methods require less computation and are therefore better suited for problems where fidelity can be sacrificed for computational efficiency. In addition, for applications involving propagation at higher frequencies (e.g., X-band), LOS methods are expected to produce results very similar to SEKE, since KED and spherical earth diffraction effects make a relatively small contribution to the propagation factor at frequencies above ultra high frequency (UHF).

The one-way propagation factor between a receiver located at White Sands Missile Range, New Mexico, and the terrain is shown in Figure 5.10 [15]. The receiver was the Lincoln Laboratory-developed Radar Surveillance Technology Experimental Radar (RSTER) (labeled ‘Rx’ in Figure 5.10) located on North Oscura Peak [19]. The results shown represent the propagation factor between the receiver and all the ground scatterers in the
Figure 5.10 LOS propagation loss between the receiver (Rx) and each ground scatterer for 4 different receiver heights (10, 500, 2,000, and 20,000 meters). Note that RxExcessLoss represents the propagation factor (F) [15].
scenario. This simulation was conducted without SEKE, using only the LOS model. Each rectilinear terrain patch was 100m × 100m. In Figure 5.10, the LOS propagation results were repeated for the same scenario for four different receiver altitudes above the local terrain (10, 500, 2,000, and 20,000 meters). As expected, the number of regions (black areas) where propagation is obscured by the terrain is reduced as the receiver altitude increases. In the case of the 20-km altitude receiver (representative of a very high-altitude airborne surveillance radar), there are very few locations where propagation is blocked.

The propagation factor for the previous mountain top scenario, computed using both the LOS and SEKE models, is shown in Figure 5.11(a, c) at the nominal receiver altitude of 10m above the terrain. The propagation factors using LOS only are included for comparison in Figure 5.11(b, d). The results are provided for both UHF (top panels, 100 MHz) and X-band (bottom
Figure 5.11 Propagation loss computed for the mountain top scenario: (a) UHF SEKE, (b) UHF LOS, (c) X-band SEKE, and (d) X-band LOS. Note that RxExcessLoss represents the propagation factor (F) [15].
panels, 10 GHz). In general, the areas with significant losses coincide for both the SEKE and LOS results; however, lower losses are computed by SEKE due to the inclusion of KED and spherical earth diffraction propagation effects. As expected, the losses predicted by SEKE for the X-band scenario are significantly higher than for the UHF scenario, since propagation by way of diffraction mechanisms is less effective at the higher frequencies. The LOS results change very little with frequency (panels on the right-hand side of Figure 5.11). The area in the lower left-hand corner of the panels in Figure 5.11 exhibits significant blockages in the LOS model (right panels) and minimal losses in the SEKE model (left panels). The negligible propagation losses for the diffracted signal are explained by the existence of a very small knife edge that barely blocks the LOS path.

The power received by the radar (labeled Rx) in the same mountain top scenario at White Sands Missile Range is computed in Figure 5.12, including the direct path power from the transmitter located on Socorro Peak
Figure 5.12 Scattered power received for the mountain top scenario [15] (a) UHF SEKE, (b) UHF LOS, (c) X-band SEKE, and (d) X-band LOS.
(labeled Tx) and the ground clutter. The results are shown for both UHF (Figure 5.12(a, b)) and X-band (Figure 5.12(c, d)). For each operating frequency, the received power in the SEKE model (left panels) is calculated as a combination of the LOS and SEKE components. The LOS results alone are shown in the right panels of Figure 5.12 for comparison. This particular scenario results in strong scattering from the ground directly between the emitter and receiver (a.k.a. the glistening region). In the results computed using the SEKE model, the glistening region extends continuously from the emitter to the receiver. The LOS results, however, show an interruption to this region due to blockages of the LOS path from either the receiver or emitter to the scatterers within this region.

The terrain profile between the emitter and receiver for this particular scenario is illustrated in Figure 5.13. The terrain profile reveals a significant terrain feature directly between the emitter and receiver that helps explain the discontinuous nature of the glistening region seen in Figure 5.12. The ray extending to the top of the mountain is plotted for both the emitter and receiver. The sizes of the resulting shadowed regions behind the mountain are consistent with those seen in the purely LOS results of Figure 5.12(b, d).
5.4 Radar Simulations Using Elliptical Conic Sections

One alternative for radar simulations of urban areas is to approximate the radar transmit energy as a sparse array of ray tubes with finite width. An example of conic rays emanating from a source is shown in Figure 5.14. The fidelity of the simulation will increase as the number of rays increases. The shape and relative power of each ray tube can be normalized to match
Figure 5.13 Terrain profile for the mountain top scenario [15].
Figure 5.14 A ray tracing simulation conducted using a relatively small number of elliptical conic rays with finite width.
the antenna directivity pattern. Objects and buildings in the scene are constructed from a set of planes; curved objects can be approximated by many smaller planes. The radar reflection is estimated by considering the type of material of the object as well as the area of intersection of each elliptical radar beam with the plane. To apply this technique to urban simulations, we need to be able to calculate the area of intersection between an elliptical (or circular) ray cone and a plane [20]. It is also essential to estimate the shape of the area of intersection to know if it lies within the boundaries of the planar surface. Each ray is an elliptical or circular cone that extends out from the transmit point with separate half angles along the major and minor axes. The return scattering power Pn for each ray is calculated using the assumed radar backscatter coefficient σ⁰n for each surface type as a function of incident angle, and the area of intersection, An, of the planar surface with each elliptical conic radar beam
\[ P_n = \sigma_n^0 A_n \tag{5.33} \]
The main lobe and side lobes of the radar beam are modeled using many (dozens to hundreds of) individual rays; increasing the number of rays characterizes the radar antenna directivity pattern more accurately. To model multipath, the direction of the ray vector is changed upon reflection from a wall (plane), consistent with specular reflection. The power and polarization of the ray are modified accordingly, depending on the material properties of the reflecting surface. The shape of the ellipse is conserved through reflections so that it increases in size continuously with distance from the emitter. If only part of the ray intersects a plane, then the power of the reflected ray is adjusted accordingly; the reflected power is proportional to the intersection area on the building wall divided by the intersection
area over an infinite plane. We only consider intersections where the center of the ray (the ray vector) intersects the plane. The energy that is lost due to partial intersections is neglected; to increase the simulation fidelity, the number of rays is increased.

Four possible types of intersection shapes can occur. If the elliptical cone completely intersects the plane, then the shape of the intersection is also elliptical. If the cone does not completely intersect the plane, then the intersection curve is hyperbolic and open-ended. If the plane is parallel to the cone direction, then the intersection curve is parabolic and open-ended. Finally, if the plane barely touches the edge of the cone, then the intersection is a line with zero area. We are only interested in the condition where the cone completely intersects the plane and the shape of the intersection is elliptical, since the backscattered power at very small grazing angles resulting from partial intersections is not significant.

To obtain a cone in an arbitrary orientation, we start with a standard cone along the z-axis and then rotate it. The equation of an elliptical cone in the z-normal plane is
\[ (\mathbf{x} - \mathbf{x}_0)^T \mathbf{M} (\mathbf{x} - \mathbf{x}_0) = 0 \tag{5.34} \]
where x are the points on the cone, x0 is the cone origin, and M is defined as
\[ \mathbf{M} = \begin{bmatrix} -a^{-2} & 0 & 0 \\ 0 & -b^{-2} & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{5.35} \]
The parameters a and b are the semi-major and semi-minor axes of the ellipse, respectively. First, we must rotate the elliptical cone so that it lies along our chosen cone axis. The new cone matrix is derived from the z-normal matrix M as
\[ \mathbf{M}' = \mathbf{R}_1^T \mathbf{M} \mathbf{R}_1 \tag{5.36} \]
where T represents the transpose operation and the rotation matrix is the standard 3-D rotation matrix
\[ \mathbf{R}_1 = \begin{bmatrix} u_x^2 + (1-u_x^2)c & u_x u_y(1-c) - u_z s & u_x u_z(1-c) + u_y s \\ u_x u_y(1-c) + u_z s & u_y^2 + (1-u_y^2)c & u_y u_z(1-c) - u_x s \\ u_x u_z(1-c) - u_y s & u_y u_z(1-c) + u_x s & u_z^2 + (1-u_z^2)c \end{bmatrix} \tag{5.37} \]
where c and s are the cosine and sine of the rotation angle θ. The elliptical cone is rotated around an axis u that is perpendicular to both the z-axis and the new desired cone axis a, which is specified by the cone (or ray) direction. We can determine the axis u using
\[ \mathbf{u} = \hat{\mathbf{z}} \times \mathbf{a} \tag{5.38} \]
The rotation angle is obtained from the dot product of the two vectors
\[ \theta = \cos^{-1}(\hat{\mathbf{z}} \cdot \mathbf{a}) \tag{5.39} \]
We have an additional complication in that we wish to specify the vector of the major axis of the elliptical cone. The initial axis was along the x-direction as seen in (5.35). The new major axis of the cone ellipse can be determined after the previous initial rotation operation using the same rotation matrix
\[ \mathbf{x}' = \mathbf{R}_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \tag{5.40} \]
This new x-axis x′ will be different from the original x-axis and rotated from the intended vector by an angle φ
\[ \varphi = \cos^{-1}(\mathbf{x}' \cdot \hat{\mathbf{e}}) \tag{5.41} \]
where ê is the intended major axis of the ellipse. We can rotate the matrix M around the new cone axis â using
\[ \mathbf{R}_2 = \begin{bmatrix} a_x^2 + (1-a_x^2)c & a_x a_y(1-c) - a_z s & a_x a_z(1-c) + a_y s \\ a_x a_y(1-c) + a_z s & a_y^2 + (1-a_y^2)c & a_y a_z(1-c) - a_x s \\ a_x a_z(1-c) - a_y s & a_y a_z(1-c) + a_x s & a_z^2 + (1-a_z^2)c \end{bmatrix} \tag{5.42} \]
where now c and s are the cosine and sine of the negative of the angle φ in (5.41): c = cos(–φ) and s = sin(–φ). The final cone matrix is described as
\[ \mathbf{M}'' = \mathbf{R}_2^T \mathbf{R}_1^T \mathbf{M} \mathbf{R}_1 \mathbf{R}_2 \tag{5.43} \]
With this matrix, we can find any point on the elliptical cone at an arbitrary orientation defined by the cone axis â, the elliptical cone major axis ê, and the cone origin x0 (the radar antenna position)
( x − x0 )T M ′′ ( x − x0 ) = 0
(5.44)
Next, we parameterize the points on the plane. Any point on a plane can be written using the plane origin p0 and two in-plane perpendicular vectors p1 and p2
x = p0 + x1p1 + x 2 p2
(5.45)
In city models, it is usually convenient to specify the plane unit normal n̂ rather than the in-plane vectors and, therefore, the in-plane vectors must be calculated from the unit normal. For that purpose, we use the following procedure. First, an arbitrary point σ is picked that is not on the unit normal n̂ vector. Next, we find the point on the normal line that is closest to our chosen point. We assume that any point on the plane unit normal is defined by the parametric equation
x = p0 + nˆ t
(5.46)
The distance between the arbitrary point and the point on the line is
d = ‖p₀ + n̂t − σ‖
(5.47)
We minimize the distance between the points by taking the derivative with respect to the parametrization variable t. The optimal parameter t is given by
t_min = n̂ ⋅ (σ − p₀)
(5.48)
We then plug this value of t into (5.46) to find the closest point on the normal line. By construction, the line defined by the endpoints σ and p₀ + n̂[n̂ ⋅ (σ − p₀)] is an in-plane vector perpendicular to the unit normal
p₁ = σ − (p₀ + n̂[n̂ ⋅ (σ − p₀)])
(5.49)
A second perpendicular in-plane vector can be found by taking the cross product between n̂ and p₁

p₂ = n̂ × p₁
(5.50)
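A small sketch of this construction follows (helper and function names are ours); it mirrors (5.46) to (5.50). Normalizing p₁ and p₂, which the derivation does not require, would yield an orthonormal in-plane basis.

```cpp
// Hypothetical sketch: in-plane basis vectors from a plane origin p0 and a
// unit normal n, per (5.48)-(5.50). sigma is any point off the normal line.
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)     { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, double s) { return { v.x * s, v.y * s, v.z * s }; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

void planeBasis(Vec3 p0, Vec3 n, Vec3 sigma, Vec3& p1, Vec3& p2) {
    double t = dot(n, sub(sigma, p0));   // optimal parameter (5.48)
    Vec3 foot = add(p0, scale(n, t));    // closest point on the normal line
    p1 = sub(sigma, foot);               // in-plane, perpendicular to n (5.49)
    p2 = cross(n, p1);                   // second in-plane vector (5.50)
}
```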
Any point on the plane can be described by (5.45). Now that we have the equation of any point on both the elliptical cone and plane, we can find the intersection. We develop the intersection matrix Q using homogeneous coordinates in terms of the in-plane parameters x1 and x2 from (5.45) so that on the intersection curve
\begin{bmatrix} x_1 & x_2 & 1 \end{bmatrix} Q \begin{bmatrix} x_1 \\ x_2 \\ 1 \end{bmatrix} = 0
(5.51)
Taking into account the symmetry of the problem, we define the (symmetric) matrix as
Q = \begin{bmatrix} q_1 & q_2 & q_4 \\ q_2 & q_3 & q_5 \\ q_4 & q_5 & q_6 \end{bmatrix}
(5.52)
Combining the plane equation (5.45) with the conic equation (5.44), we obtain the equation of the intersection curve
[(p₀ + x₁p₁ + x₂p₂) − x₀]ᵀ M″ [(p₀ + x₁p₁ + x₂p₂) − x₀] = 0
(5.53)
After some algebra, we obtain a quadratic equation in two variables
q₁x₁² + 2q₂x₁x₂ + q₃x₂² + 2q₄x₁ + 2q₅x₂ + q₆ = 0

(5.54)

The individual parameters in (5.54) are
q₁ = p₁ᵀ M″ p₁

(5.55)

q₂ = p₁ᵀ M″ p₂

(5.56)

q₃ = p₂ᵀ M″ p₂

(5.57)

q₄ = (p₀ − x₀)ᵀ M″ p₁

(5.58)

q₅ = (p₀ − x₀)ᵀ M″ p₂

(5.59)

q₆ = (p₀ − x₀)ᵀ M″ (p₀ − x₀)

(5.60)
Knowing the ellipse equation would be helpful for calculating the area. We can rewrite (5.54) as the conventional equation of a 2-D ellipse by applying the correct translation and rotation operations to the intersection shape. We start by finding the rotation that will remove the cross term (the second term) in (5.54). The eigenvectors and eigenvalues of the top left 2 × 2 submatrix are defined as
\begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix} v_1 = \lambda_1 v_1 \quad\text{and}\quad \begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix} v_2 = \lambda_2 v_2
(5.61)
where λ1, λ2, and v1, v2 are the (two) eigenvalues and eigenvectors of the submatrix. These vectors also represent the normal modes of the submatrix and provide the vector basis to express the submatrix in its diagonal form. That is, we are performing eigen-decomposition of the submatrix of Q. This can be accomplished using the ‘eig.m’ function in MATLAB
\begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix} = V \Lambda V^T = V \Lambda V^{-1}
(5.62)
where V is the matrix formed using the eigenvectors for each column and Λ is the diagonal matrix formed by the eigenvalues. We are rotating the matrix Q into the canonical form of an ellipse

Hᵀ Q H = Q′
(5.63)
where Q ′ is a diagonal matrix in the form of the elliptical matrix (5.35). The homogeneous matrix that will provide the correct translation and rotation is
H = \begin{bmatrix} v_{11} & v_{12} & t_1 \\ v_{21} & v_{22} & t_2 \\ 0 & 0 & 1 \end{bmatrix}
(5.64)
The new translation t will remove the fourth and fifth terms in (5.54) or alternatively make q4′ and q5′ in Q ′ (5.63) equal to zero. In matrix form, the third column of (5.63) becomes
V^T \left( \begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix} \begin{bmatrix} t_1 \\ t_2 \end{bmatrix} + \begin{bmatrix} q_4 \\ q_5 \end{bmatrix} \right) = 0
(5.65)
If we rearrange terms and apply the eigenvalue equation in (5.61), we obtain
V^T \Lambda \begin{bmatrix} t_1 \\ t_2 \end{bmatrix} = -V^T \begin{bmatrix} q_4 \\ q_5 \end{bmatrix}
(5.66)
Finally, the new translation vector or center of the ellipse in the planar coordinate system can be calculated as
t = -\Lambda^{-1} \begin{bmatrix} q_4 \\ q_5 \end{bmatrix}
(5.67)
We have used the property VVᵀ = I, with I as the identity matrix. Note that the 'eig.m' function in MATLAB uses a different convention than the typical eigenvalue equation, or
\begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix} v_1 = v_1 \lambda_1
(5.68)
This produces a change in (5.66)
\Lambda V^T \begin{bmatrix} t_1 \\ t_2 \end{bmatrix} = -V^T \begin{bmatrix} q_4 \\ q_5 \end{bmatrix}

(5.69)

We can rewrite (5.69) as

\Lambda V^{-1} \begin{bmatrix} t_1 \\ t_2 \end{bmatrix} = -V^T \begin{bmatrix} q_4 \\ q_5 \end{bmatrix}

(5.70)
so that the result (5.67) relevant to the MATLAB environment is
\begin{bmatrix} t_1 \\ t_2 \end{bmatrix} = -V \Lambda^{-1} V^T \begin{bmatrix} q_4 \\ q_5 \end{bmatrix} = -\begin{bmatrix} q_1 & q_2 \\ q_2 & q_3 \end{bmatrix}^{-1} \begin{bmatrix} q_4 \\ q_5 \end{bmatrix}
(5.71)
The elliptical parameters can be taken directly from the matrix Q ′
a' = \sqrt{-q'_{33}/q'_{11}} \quad\text{and}\quad b' = \sqrt{-q'_{33}/q'_{22}}
(5.72)
Finally, the area of the ellipse can be found directly from the major and minor axes,
area = π a ′b ′
(5.73)
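The full chain from the quadratic coefficients to the ellipse center and area can be condensed into a few lines. The sketch below is ours, not from the text; it replaces the MATLAB eig call with the closed-form eigenvalues of the symmetric 2 × 2 block and assumes the intersection is a closed ellipse (both eigenvalues of the block share a sign).

```cpp
// Hypothetical sketch: ellipse center, semi-axes, and area from the
// coefficients q1..q6 of (5.54), following (5.61)-(5.73).
#include <cmath>

struct Ellipse2D { double cx, cy, a, b, area; };

Ellipse2D ellipseFromConic(double q1, double q2, double q3,
                           double q4, double q5, double q6) {
    const double PI = 3.14159265358979323846;

    // Center: solve [[q1,q2],[q2,q3]] t = -[q4;q5], cf. (5.71).
    double det = q1 * q3 - q2 * q2;
    double cx = -( q3 * q4 - q2 * q5) / det;
    double cy = -(-q2 * q4 + q1 * q5) / det;

    // Constant term after translating to the center (the q33' of (5.72)):
    // q6' = q6 + q4*cx + q5*cy, which uses A t = -[q4;q5].
    double q6p = q6 + q4 * cx + q5 * cy;

    // Closed-form eigenvalues of the symmetric 2x2 block (no eig() needed).
    double mean = 0.5 * (q1 + q3);
    double disc = std::sqrt(0.25 * (q1 - q3) * (q1 - q3) + q2 * q2);
    double lam1 = mean - disc;
    double lam2 = mean + disc;

    // Semi-axes from the canonical form lam1*x'^2 + lam2*y'^2 + q6' = 0,
    // i.e., (5.72); requires -q6p/lam > 0 for a real (closed) ellipse.
    double a = std::sqrt(-q6p / lam1);
    double b = std::sqrt(-q6p / lam2);
    return { cx, cy, a, b, PI * a * b };   // area = pi a'b', cf. (5.73)
}
```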
The center of the ellipse is the vector t in the planar coordinate system (defined by the in-plane vectors). The equation of intersection in the same coordinate system can be found by taking the (matrix) square root of the rotated ellipse equation
S = V \begin{bmatrix} a' & 0 \\ 0 & b' \end{bmatrix} V^T
(5.74)
The matrix S is the ellipse equation in the in-plane coordinate system centered at position t. The position of any point on the intersection in planar coordinates can be found from
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = S \begin{bmatrix} \cos\varphi \\ \sin\varphi \end{bmatrix} + t
(5.75)
where φ is an arbitrary angle. In normal coordinates, any intersection point is
x = x₁p₁ + x₂p₂ + p₀

(5.76)
With (5.76), we can draw the elliptical intersection area to any desired resolution. A MATLAB script was developed to test the function and provide a graphical display of the geometry. A graphical depiction of the intersection between a plane and cone is shown in Figure 5.15(a). The elliptic cone is 1m above the plane and oriented directly at the plane. The major and minor half angles were 20° and 10°, respectively, with the major ellipse axis along the y-direction. The bright line is the intersection shape. In this case, the area of the intersection is easily calculated as area = π tan 20° tan 10° = 0.2016. It is possible for the intersection area to be relatively large if the cone intersects the plane at a shallow (grazing) angle, as in Figure 5.15(b). In this case, the cone origin was still 1m from the plane, the relative angle was 61° (between the cone direction and plane normal), and the intersection area was 0.5836. To integrate the intersection function with a radar simulation, it is also usually necessary to limit the intersection area to within a radial interval representing the radar range resolution cell. Essentially, we need to calculate the area of intersection between an infinite plane, an elliptic cone, and a spherical shell defined by an inner and outer radius (see Figure 5.16). The limited area estimate is accomplished numerically using the previous elliptical equation (5.76); a sketch of this computation follows Figure 5.16. The intersection area is divided into many angular slices. Only the slices that are completely within the radial limits are summed to provide the area.
Figure 5.15 (a) The intersection (bright line) between an elliptic cone and plane with an area of 0.2016. (b) Another example of an intersection area at a different orientation with an area of 0.5836.
Figure 5.16 A graphical depiction of the intersection (bright line) between an infinite plane and elliptic cone. The area is further limited to the spherical shell defined by the mesh spheres.
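A minimal sketch of that numerical slice summation is given below; all names are ours, and it assumes p₁ and p₂ are orthonormal so that plane-coordinate areas equal physical areas. It walks the ellipse boundary via (5.75), converts points to world coordinates via (5.76), and keeps only the triangular slices whose boundary points fall inside the spherical shell.

```cpp
// Hypothetical sketch: intersection area restricted to a range cell.
// S is the 2x2 ellipse matrix of (5.74), t the center in plane coordinates,
// p0/p1/p2 the plane origin and in-plane basis, x0 the radar position.
#include <cmath>

struct V3 { double x, y, z; };
static V3 planeToWorld(V3 p0, V3 p1, V3 p2, double x1, double x2) {  // (5.76)
    return { p0.x + x1 * p1.x + x2 * p2.x,
             p0.y + x1 * p1.y + x2 * p2.y,
             p0.z + x1 * p1.z + x2 * p2.z };
}
static double range(V3 a, V3 b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

double rangeGatedArea(const double S[2][2], const double t[2],
                      V3 p0, V3 p1, V3 p2, V3 x0,
                      double rMin, double rMax, int nSlices) {
    const double PI = 3.14159265358979323846;
    double area = 0.0;
    for (int i = 0; i < nSlices; ++i) {
        double u[2], v[2];   // the two boundary points of this angular slice
        for (int k = 0; k < 2; ++k) {
            double phi = 2.0 * PI * (i + k) / nSlices;  // boundary angle
            double* w = (k == 0) ? u : v;
            w[0] = S[0][0] * std::cos(phi) + S[0][1] * std::sin(phi) + t[0];
            w[1] = S[1][0] * std::cos(phi) + S[1][1] * std::sin(phi) + t[1];
        }
        // Keep the slice only when both boundary points lie in the shell
        // (an approximation that improves as nSlices grows).
        double rA = range(planeToWorld(p0, p1, p2, u[0], u[1]), x0);
        double rB = range(planeToWorld(p0, p1, p2, v[0], v[1]), x0);
        if (rA < rMin || rA > rMax || rB < rMin || rB > rMax) continue;
        // Triangular slice area about the ellipse center (2-D cross product).
        double ax = u[0] - t[0], ay = u[1] - t[1];
        double bx = v[0] - t[0], by = v[1] - t[1];
        area += 0.5 * std::fabs(ax * by - ay * bx);
    }
    return area;
}
```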
5.5 Commercial Software

A number of high-fidelity, physics-based EM propagation software tools have recently become available that can significantly enhance NLOS radar design, analysis, and even implementation. For nonurban terrain, RFView from ISL, Inc., can perform very high-fidelity, physics-based calculations that capture many of the real-world effects required to perform meaningful simulations of real-world behavior [21].
Figure 5.17 Screen capture showing the basic RFView user interface. (Courtesy of ISL, Inc.)
Figure 5.18 Example of populating the RF parameters in RFView. (Courtesy of ISL, Inc.)
Figure 5.19 Example of the data visualization tools available with the RFView software. (Courtesy of ISL, Inc.)
Figure 5.17 shows a screen capture of the cloud-based (online) version of the RFView user interface. It features a Google Maps interface for positioning the transmitters (airborne or ground-based) and receivers, along with their relative motions. A number of pull-down menus are used to populate all of the relevant RF parameters (see Figure 5.18). Once all of the input parameters are entered, the RFView software loads in the relevant 3-D terrain databases, along with any available land cover information, which are used in calculating RF reflections. The output data files are essentially identical to what would be obtained by a real-time multichannel flight data recorder (e.g., raw in-phase and quadrature (I&Q) signal data). A number of data visualization and post-processing tools are available with the software. Figure 5.19 shows an example of one of the data visualization tools available with RFView. Here the radar clutter returns are overlaid onto a
Google Maps display, greatly facilitating the understanding of the interplay between electromagnetic propagation and terrain. For urban terrain, Remcom's Wireless InSite™ software can provide fully 3-D EM propagation, including NLOS multipath (https://www.remcom.com/wireless-insite-em-propagation-software/).
References

[1] Osterman, A., "Implementation of the R.Cuda.LOS Module in the Open Source GRASS GIS by Using Parallel Computation on the NVIDIA CUDA Graphic Cards," Elektrotehniški Vestnik, Vol. 79, No. 1-2, 2012, pp. 19–24.
[2] Amanatides, J., and K. Choi, "Ray Tracing Triangular Meshes," in Proceedings of the Eighth Western Computer Graphics Symposium, 1997, Vol. 43.
[3] Moyer, L., C. Morgan, and D. Rugger, "An Exact Expression for Resolution Cell Area in Special Case of Bistatic Radar Systems," IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 4, 1989, pp. 584–587.
[4] Weiner, M. M., and P. Kaplan, "Bistatic Surface Clutter Resolution Area at Small Grazing Angles," Mitre Corp., Bedford, MA, 1982.
[5] Vincenty, T., "Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations," Survey Review, Vol. 23, No. 176, 1975, pp. 88–93.
[6] Bergin, J. S., C. M. Texeira, and G. H. Chaney, "ISL Geographic Coordinate Transformation Toolbox," Tech Note ISL-TN-SCRD-02-006, November 2002.
[7] Shatz, M. P., and G. H. Polychronopoulos, "An Improved Spherical Earth Diffraction Algorithm for SEKE," MIT Lincoln Laboratory, Lexington, MA, 1988.
[8] Born, M., and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, Elsevier, 2013.
[9] Durgin, G. D., "The Practical Behavior of Various Edge-Diffraction Formulas," IEEE Antennas and Propagation Magazine, Vol. 51, No. 3, 2009.
[10] Sommerfeld, A., Optics, New York: Academic, 1954, pp. 129–139.
[11] Keller, J. B., "Geometrical Theory of Diffraction," JOSA, Vol. 52, No. 2, 1962, pp. 116–130.
[12] Kouyoumjian, R. G., and P. H. Pathak, "A Uniform Geometrical Theory of Diffraction for an Edge in a Perfectly Conducting Surface," Proceedings of the IEEE, Vol. 62, 1974, pp. 1448–1461.
[13] McNamara, D. A., and C. W. I. Pistorius, Introduction to the Uniform Geometrical Theory of Diffraction, Norwood, MA: Artech House, 1990.
[14] Chang, C.-W., "Radiowave Propagation Modeling Using the Uniform Theory of Diffraction," School of Engineering, University of Auckland, Auckland, New Zealand, Tech. Rep., 2003.
[15] Bergin, J., "A Line-of-Sight Propagation Model," Information Systems Laboratories, February 2000.
[16] Ayasli, S., "SEKE: A Computer Model for Low Altitude Radar Propagation Over Irregular Terrain," IEEE Transactions on Antennas and Propagation, Vol. 34, No. 8, 1986, pp. 1013–1023.
[17] Meeks, M. L., "Radar Propagation at Low Altitudes: A Review and Bibliography," MIT Lincoln Laboratory, Lexington, MA, 1981.
[18] Meeks, M. L., "Radar Propagation at Low Altitudes," NASA STI/Recon Technical Report A, Vol. 83, 1982.
[19] Titi, G. W., and D. F. Marshall, "The ARPA/NAVY Mountaintop Program: Adaptive Signal Processing for Airborne Early Warning Radar," in IEEE ICASSP, 1996, pp. 1165–1168.
[20] Calinon, S., and A. Billard, "Teaching a Humanoid Robot to Recognize and Reproduce Social Cues," in The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2006, pp. 346–351.
[21] RFView™. Available: http://rfview.islinc.com.
CHAPTER 6

Computing Hardware Acceleration Strategies

Contents
6.1 GPU Computational Model
6.2 GPU Programming
6.3 FPGA
6.4 Ray-Tracing Line-of-Sight Algorithms
6.5 GPU Ray-Tracing Example
6.6 FPGA Ray Tracing
Hardware acceleration strategies exploit many individual processors to perform tasks in less time. Utilizing more than one processor necessarily requires cooperation so that they can share resources effectively. In addition, the system must be designed so that processors are not sitting idle, waiting for intermediate results before they can complete their tasks. Utilizing many processors is also a challenge for the software engineer; how does a programmer assign a single task to multiple processors? Two of the most common computing hardware architectures enabling hardware acceleration through parallelization are GPUs and FPGAs. A single processor will perform poorly for applications that require many simultaneous parallel computations (e.g., robotic sensing, graphical display, ray-tracing simulations). In the single processor model, a central processing unit (CPU) operates sequentially in a manner similar to a Turing machine. A sequence of instructions, such as adding two numbers, storing a result in a register, and so
forth, is fed to the CPU one at a time. Multiple distinct processes are performed using a single CPU through an interrupt structure. When an application or hardware device connected to the system needs attention, it sets an interrupt. The level of the interrupt determines the order in which it is addressed. The processing rate per instruction is typically in the GHz range for contemporary digital computers. Because of the high speed of the processor, the CPU appears to run multiple applications simultaneously, in parallel. However, as the number of tasks increases, the CPU will perform each task more slowly, since the speed and resources of the CPU are limited. The ever-increasing demand for computing power has led to the adoption of parallel architectures consisting of many digital processors (multiprocessing). Most desktop personal computers purchased today (2018) have multiple processors and utilize multithreading to speed up execution times. There are different levels of parallelism, and the operating system will utilize the available processors in different ways. Most obviously, many individual applications can be run concurrently on different processors without conflict, since they do not typically share resources. A task scheduler optimizes utilization of all the computer resources (e.g., load balancing). The parallelization can be applied to individual applications or processes as well. Each process can be further partitioned into threads (individual tasks within a process), which may share resources. The system develops a pipeline for each process so that tasks that do not require data from other tasks can be performed concurrently on different processors. Parallelization also occurs at the lowest instruction level, where operations are assigned to individual cores in a multicore processor. The task and instruction assignment can occur statically during compilation or dynamically in real time through hardware. Parallelization offers a path forward to continue the steady march of increasing computing performance, even as Moore's law begins to hit limits related to minimum device (transistor) sizes. Emerging advanced processing architectures, including GPUs and FPGAs, offer enormous potential. Note that we could also have included cell processors or 3-D integrated circuits (ICs). Software tools have been developed that enable engineers to more easily utilize these parallel architectures. With these novel hardware architectures and associated software tools, there is an opportunity to implement advanced NLOS tracking algorithms in real time. The main advantage of using GPUs is that it is much easier initially to develop software. If a task can be partitioned into multiple, mostly parallel threads, a function can easily be written to utilize the GPU to perform the task without having a good understanding of how those threads are
scheduled in the GPU. Of course, with an understanding of memory conflicts/bottlenecks, the GPU can be leveraged further to perform the task more efficiently. The drawback of GPUs is that they use significantly more power than FPGAs for the same processing task. GPUs incorporate a large number of processors in an architecture that is designed to manage blocks of data in parallel to support video graphics. They typically have substantial memory bandwidths and memory buffers to facilitate the rapid transfer of data. They were initially embraced by the computer gaming industry to provide realistic, real-time, 3-D graphics. They have been subsequently adopted for general purpose computing, that is, general purpose GPUs (GPGPUs). GPGPUs provide an exciting new opportunity to perform real-time simulations of urban and terrestrial environments generating multipath predictions for NLOS tracking algorithms. Currently in 2018, two companies, NVIDIA and AMD, dominate the GPGPU market. In addition to the hardware changes, the development tools for GPU programming have improved significantly over the years. An open, royalty-free interface, termed the Open Computing Language (OpenCL), is available for programming GPGPUs for general non-graphics applications [1]. OpenCL is a standard originating at Apple and transferred to the Khronos group. It consists of an application programming interface (API) for coordinating parallel computation across heterogeneous processors, and a cross-platform programming language with a well-specified computation environment. Many companies contribute to the specification, including GPU vendors (i.e., NVIDIA and AMD), microprocessor vendors (e.g., Intel, AMD, and IBM), and others (e.g., Apple). Alternatively, two proprietary software development kits (SDKs) are available: NVIDIA's proprietary Compute Unified Device Architecture (CUDA) SDK [2] or AMD's Accelerated Parallel Processing (APP) SDK [3]. Both NVIDIA and AMD GPGPUs currently support the latest OpenCL standard, as well as their own development tools. Each SDK includes a debugger and profilers. The NVIDIA SDK is currently at version 9.2, and the AMD APP SDK is at version 3.0. The most commonly used integrated development environments (IDEs) for GPGPU programming in Windows and Linux are Visual Studio [4] and Eclipse [5], respectively.
6.1 GPU Computational Model

The OpenCL computational model (similar to NVIDIA's CUDA computational model) consists of multiple compute units (i.e., multiprocessors),
each with multiple processing elements, as shown in Figure 6.1 [6]. The software is divided into a host program and kernels that are executed on an OpenCL device. The individual multiprocessors operate asynchronously and concurrently with all other multiprocessors. Each multiprocessor has a group of stream or core processors. A modern NVIDIA Quadro P6000 GPU with the Pascal microarchitecture has 30 multiprocessors with 128 cores each, for a total of 3,840 cores. Each core can execute threads independently. However, the hardware is designed to execute the code in warps, which are groups of 32 threads in each core. All the cores in the same warp execute the same set of instructions, that is, a single instruction, multiple thread (SIMT) approach. The device code will operate more efficiently if it is designed to accommodate warps with multiple threads performing similar operations (with different memory addresses indexed from a common set of pointers). GPU devices have their own global memory, which is growing with every new architecture (e.g., 24 GB for the NVIDIA Quadro P6000). Data is transferred from the host to the device or vice versa, usually at the start
or the end of a calculation via direct memory access (DMA). For most GPUs, a high-speed Peripheral Component Interconnect (PCI) bus slot is used for host/device data transfer. All of the multiprocessors share the global device memory. However, obtaining data from global memory entails significant timing latency. Thus, each multiprocessor has a small amount of local memory that can be accessed at register speeds, usually called shared memory. For some GPUs, each multiprocessor also has additional local read-only memory (ROM), sometimes termed constant memory, which is designed for increased efficiency when the input data will not change during the kernel execution. In addition, each multiprocessor often has read-only texture memory to enable access to 2- or 3-D arrays. Multiprocessors also include a large number of registers as well as local caches to facilitate memory transfers. The NVIDIA Quadro P6000 delivers 12 teraflops of single-precision performance using 3,840 cores. However, GPU performance is often constrained by the memory bandwidth, or how quickly a GPU can transfer memory to and from its random access memory (RAM) (located on the same GPU circuit board or IC). A multibyte pipeline is employed to accomplish memory transfers at relatively high speeds. The absolute memory transfer speed for the NVIDIA Quadro P6000 is a rather amazing 432 GB/second using a bus width of 384 bits and a clock rate of 1,417 MHz. However, note that the memory transfer rate relative to the number of FLOPs is low, compared to historical trends. The bandwidth per clock cycle is approximately 305 bytes, and the bandwidth per clock cycle per core instruction is only 0.08 bytes. For older CPUs, the memory bandwidth was much higher in terms of bytes per instruction. GPU compute power is growing faster than memory bandwidth, which is why GPUs are using larger memory caches to compensate (with the disadvantage of additional latency).

Figure 6.1 OpenCL execution model. (Source: [6].)
6.2 GPU Programming

GPUs are programmed as a sequence of kernels; a kernel is a small unit of execution that performs a clearly defined function and that can be accomplished in parallel on the device. In CUDA and OpenCL, kernels are clearly differentiated from host functions. From the host code, a kernel is initiated and the resources of the entire GPU are dedicated to completion of the kernel. After completion, control is returned to the host code. It is possible for some threads to start running on the next kernel before all threads have completed the previous kernel, which could lead to incorrect results or
conflicts. Thus, there are built-in methods (synchronization) to ensure that all threads have completed before continuing to the next kernel. However, in the GPU software model, there is no method for communicating between multiprocessors to enable coordination [7]. It is difficult, if not impossible, to synchronize threads running on different multiprocessors. The device kernels are spawned with a predetermined (at compile time) number of blocks. Each block has a certain number of threads. The set of blocks is referred to as a grid. The block and grid size can be specified in multiple dimensions (e.g., block.x, block.y, and block.z) [8]. The total number of threads utilized is the number of blocks multiplied by the number of threads per block. Threads in different blocks are executed on different multiprocessors. The GPU scheduler determines how multiprocessors are assigned to each block dynamically in real time. Threads in the same block use the same local memory resources (e.g., registers, caches, shared, constant, and texture memory) and are also executed in sets of (usually) 32 threads (called a warp). In a warp, the same sequence of commands is applied to each thread, with the memory locations offset by a predetermined (indexed) amount for each thread. For instance, the threads might be assigned relative to pointers [*pa, *pb, *pc] as:
a (* pa + 0) + b (* pb + 0) = c (* pc + 0) for thread 0
(6.1)
a (* pa + 1) + b (* pb + 1) = c (* pc + 1) for thread 1
(6.2)
⋮

a (*pa + 31) + b (*pb + 31) = c (*pc + 31) for thread 31
(6.3)
If the total number of blocks launched is larger than the number of available multiprocessors, then the GPU schedules as many blocks as possible with the available resources. As blocks terminate, new blocks are launched on the vacated multiprocessors.
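A minimal CUDA sketch of this indexing pattern follows; the kernel and variable names are illustrative rather than taken from the text. Each thread applies the same instruction stream to elements offset by its global index, exactly as in (6.1) to (6.3).

```cpp
// Illustrative CUDA kernel: one thread per array element (SIMT execution).
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                  // guard threads past the end of the array
        c[i] = a[i] + b[i];     // a(*pa + i) + b(*pb + i) = c(*pc + i)
}

// Host-side launch: 128 threads per block (a multiple of the 32-thread
// warp), with enough blocks (the grid) to cover all n elements.
// vectorAdd<<<(n + 127) / 128, 128>>>(devA, devB, devC, n);
```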
6.3 FPGA FPGAs are ICs that are designed to be reconfigured (possibly multiple times) after manufacturing. Their flexibility in design makes them preferable in
many applications to application-specific integrated circuits (ASICs). ASICs generally provide faster performance with lower power and less chip area. However, it is expensive and time-consuming to design an ASIC. FPGAs can be configured in minutes (see the FPGA workflow described below) and designed to replicate any function that an ASIC can perform. For these reasons, FPGAs are vastly preferred for low-volume production. The process for configuring an FPGA, termed the workflow, is considerably more complicated than programming a GPU. First, an algorithm is converted into a hardware description language (HDL), with the two most common being VHSIC Hardware Description Language (VHDL) and Verilog. VHSIC is an acronym for Very High Speed Integrated Circuit. Then, a behavioral simulation of the circuit (or register-transfer level synthesis) is performed to test the digital flow and logical operations. Next, a gate-level description of the circuit is developed that is specific to the architecture and ready to be routed on the FPGA. At this point, timing simulations are usually performed to verify the design. Each manufacturer has its own set of tools to predict the timing delays along each signal path and adjust the logic circuit locations and interconnects to avoid timing errors or conflicts. Additionally, floor planning is often conducted, where the physical locations of the circuits on the FPGA IC are visualized and modified to improve performance. Finally, the design is translated onto the FPGA for bench-top testing. This workflow cycle may be repeated several times to achieve the desired performance. We note that there are higher-level languages available to program FPGAs, including Handel-C [9] and Mitrion-C [10, 11]. In addition, National Instruments is developing a LabVIEW™ module (in their graphical language) to configure FPGA hardware [12]. MATLAB also has toolbox support for programming FPGAs, including HDL Coder™ and HDL Verifier™. Currently, the FPGA market is dominated by three manufacturers: Microsemi, Altera, and Xilinx. Xilinx has the greatest market share, with approximately 50% of the market. Each of these manufacturers has its own set of tools and libraries. To help speed up design, reusable design blocks (a.k.a. Intellectual Property (IP) cores) are available commercially from FPGA vendors for various commonly used circuits. A photo of a Xilinx FPGA IC is shown in Figure 6.2 [13]. In this section, we adopt Xilinx terminology for the electrical components; other manufacturers use different nomenclature. FPGAs consist of three primary circuit elements:

Configurable logic blocks (CLBs);
Figure 6.2 Photo of a Xilinx Spartan-3 FPGA IC [13].
Programmable routing;
I/O blocks that enable off-chip connections.

Each of those elements is discussed below.

Configurable Logic Blocks

FPGAs typically have many (thousands to millions) of CLBs that are connected via routing channels (see Figure 6.3). The symbols FA, mux, LUT, and DFF represent full adder, multiplexer, lookup table, and D-type flip-flop, respectively [14]. The lookup tables (LUTs) can be programmed to implement any combinatorial logic, including conventional logic gates such as AND, OR, and so forth. The Boolean truth table for any logic gate is mapped to the configurable LUT. The multiplexers enable additional flexibility in the design. The output can be selected as synchronous or asynchronous using the final multiplexer on the right-hand side of Figure 6.3. The D-type flip-flop waits on the clock signal to transfer the input state to the output.

Programmable Routing

The programmable routing interconnects consist of wires and programmable switches. In a typical design, the CLBs are arranged in a 2-D array on the chip and connected via programmable horizontal and vertical routing channels (see Figure 6.4). Most (approximately 80 to 90%) of the FPGA chip array is typically devoted to interconnect [15]. Not surprisingly, most of the timing delay is due to the signal routing (rather than the CLBs). The programmable routing has to accommodate a wide variety of signals, and so it includes many short connections (small delays for speed) to nearby CLBs
Figure 6.3 Example CLBs in an FPGA [14].
Figure 6.4 FPGA logic blocks with interconnects [17].
as well as a small percentage of long connections to distant CLBs. Most logic designs include many local connections and so it is advantageous to include many short interconnects. The arrangement of routing resources is very important to the FPGA performance [16]. Two common FPGA architectures are Island Style and Hierarchical. In the Island Style (Figure 6.4), the CLBs are arranged in a 2-D array separated by horizontal and vertical routing channels (with I/O blocks around the edge). In the Hierarchical architecture, the CLBs are arranged in a two-level tree structure. Individual logic blocks contain multiple CLBs with local interconnects to support both
combinatorial and sequential logic. These logic blocks are connected to other blocks via longer signal channels.

I/O Blocks

The I/O blocks are arranged at the outside periphery of the CLB area, and provide user-configurable connections to external devices. These I/O blocks include drivers to produce digital signals over high-capacitance external wiring and buffers to protect the internal logic array from overvoltage or electrostatic discharges. The entire function of a microprocessor can be replicated via FPGA logic. Manufacturers usually offer soft IP processor cores as part of their design suite. Some FPGA models include distinct embedded hardware microprocessors. Additionally, a smaller number of FPGA designs include mixed signal or analog components such as analog-to-digital converters. Modern FPGAs incorporate both internal on-chip memory and methods for accessing external memory. In addition, memory circuits can be implemented via the programmable logic (using multiple gates).
6.4 Ray-Tracing Line-of-Sight Algorithms

For general ray tracing of a 3-D scene, we cannot always assume a ray over flat terrain (with a single height value for every terrain point). For a 3-D urban scene, a more general LOS algorithm must be implemented. Performing a 3-D urban simulation in real time is difficult even when using a GPU and a low-level computing language such as C++/CUDA. In this section, we discuss an algorithm for testing for ray-triangle intersections [18]. We also present three common ray-tracing acceleration schemes.

6.4.1 Shapes and Ray Intersections

Ray tracing follows the propagation path of a ray from an initial point through the scene until a surface is intersected [19]. In a photorealistic rendering of a 3-D scene, the goal is an image that is indistinguishable from a photograph of the scene. To achieve a realistic image, most systems utilize a ray-tracing algorithm and model objects and phenomena including:

Cameras (i.e., the viewing point of the image);
Light distribution;
Visibility;
Surface scattering;
Recursive rays (i.e., multipath propagation);
Propagation effects (e.g., fog).

The objects and phenomena modeled in high-fidelity photorealistic rendering are very similar to the desired phenomena modeled in electromagnetic models. Many of the algorithms used in these rendering systems can be directly applied to high-frequency radar scattering. In particular, the methods for checking for LOS propagation between an emitter, scatterers (e.g., clutter and targets), and receiver are similar. An example 3-D scene of the ocean surface created with the POV ray tracer is shown in Figure 6.5 [20]. The wave height map was generated in MATLAB using a typical ocean wave spectrum. In this scene, realistic waves, clouds, fog, and sky are depicted.
Figure 6.5 An example ocean surface created with MATLAB and POV ray tracer.
One of the core tasks of rendering is modeling of the scene as shapes, and finding intersections of a ray with the shape. Most rendering systems support shape primitives, which include triangles, spheres, cylinders, and other quadrics (surfaces described by quadratic polynomials). For instance, a rectangle that defines a building wall is a primitive. Complex surfaces are typically built as a mesh of primitives (e.g., triangles). Another method of modeling complex surfaces uses nonuniform rational B-splines (NURBS); although not all ray-tracing rendering systems support this method, an efficient technique for testing the intersection with NURBS has been developed [21]. To find the intersection of a ray with a shape primitive, the typical solution involves substituting the parametric representation of a ray into the equation of the shape and solving. The parametric representation of a ray originating at o, and propagating in the direction specified by the unit vector d, is given by

r(t) = o + t d

(6.4)
Since the triangle is the most commonly used primitive, we present a simple ray-triangle intersection algorithm [19, 22, 23] to illustrate the process. The most efficient ray-triangle intersection parameterizes the triangle in barycentric coordinates as
p (b1 , b2 ) = (1 − b1 − b2 ) p0 + b1 p1 + b2 p2
(6.5)
where p0, p1, and p2 are the vertices of the triangle as shown in Figure 6.6, along with the barycentric parametric form of the points. Note that a point that lies on the edge or inside of the triangle satisfies b1 ≥ 0, b2 ≥ 0, and b1 + b2 ≤ 1. Substituting (6.4) in (6.5), we obtain
o + t d = (1 − b1 − b2 ) p0 + b1 p1 + b2 p2
(6.6)
which can be rearranged as
o − p0 = −td + b1 ( p1 − p0 ) + b2 ( p2 − p0 )
(6.7)
Figure 6.6 Vertices and barycentric parametric points of a triangle.
Equation (6.7) can be solved for t and the barycentric coordinates using Cramer's rule, which gives

\begin{bmatrix} t \\ b_1 \\ b_2 \end{bmatrix} = \frac{1}{(d \times (p_2 - p_0)) \cdot (p_1 - p_0)} \begin{bmatrix} ((o - p_0) \times (p_1 - p_0)) \cdot (p_2 - p_0) \\ (d \times (p_2 - p_0)) \cdot (o - p_0) \\ ((o - p_0) \times (p_1 - p_0)) \cdot d \end{bmatrix}

(6.8)

where the symbols ⋅ and × represent the vector dot product and cross product, respectively. Equation (6.8) can be rewritten using intermediate values (as would be used in the implementation) to avoid duplicate operations as
\begin{bmatrix} t \\ b_1 \\ b_2 \end{bmatrix} = \frac{1}{s_1 \cdot e_1} \begin{bmatrix} s_2 \cdot e_2 \\ s_1 \cdot s \\ s_2 \cdot d \end{bmatrix}
(6.9)
where s = o − p₀, e₁ = p₁ − p₀, e₂ = p₂ − p₀, s₁ = d × e₂, and s₂ = s × e₁. If b₁ ≥ 0, b₂ ≥ 0, and b₁ + b₂ ≤ 1 (with t positive), then an intersection has occurred. The operations required to evaluate (6.9) are listed in Table 6.1; an implementation sketch follows the table. Note that the intersection solution requires a single reciprocal operation, which typically has an instruction load equal to 5–10 instructions for a GPU and 10–20 instructions for an x86 architecture. Using 10 as the instruction throughput factor gives a total FLOP count of approximately 58. Note that this computational
Table 6.1 Computational Complexity of a Single Isolated Ray-Triangle Intersection

Number  Operation                   Floating Point Operations
1       s = o − p₀                  3
2       e₁ = p₁ − p₀                3
3       e₂ = p₂ − p₀                3
4       s₁ = d × e₂                 9
5       s₂ = s × e₁                 9
6       s₁ ⋅ e₁                     9
7       1/(s₁ ⋅ e₁)                 10
8       b₁ complex multiplication   6
9       b₂ complex multiplication   6
Total                               58
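For reference, a compact C++ rendering of the test in (6.9) is sketched below (the structure and function names are ours, following the algorithm of [23]). It performs the single reciprocal counted in Table 6.1.

```cpp
// Hypothetical sketch of the ray-triangle test of (6.9) (Moller-Trumbore).
#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b)     { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Returns true when the ray o + t d hits triangle (p0, p1, p2); on success,
// t and the barycentric coordinates (b1, b2) describe the hit point.
bool rayTriangle(V3 o, V3 d, V3 p0, V3 p1, V3 p2,
                 double& t, double& b1, double& b2) {
    V3 s  = sub(o, p0);          // operation 1 of Table 6.1
    V3 e1 = sub(p1, p0);         // operation 2
    V3 e2 = sub(p2, p0);         // operation 3
    V3 s1 = cross(d, e2);        // operation 4
    V3 s2 = cross(s, e1);        // operation 5
    double denom = dot(s1, e1);  // operation 6
    if (std::fabs(denom) < 1e-12) return false;  // ray parallel to triangle
    double inv = 1.0 / denom;    // operation 7: the single reciprocal
    b1 = dot(s1, s) * inv;       // operation 8
    b2 = dot(s2, d) * inv;       // operation 9
    t  = dot(s2, e2) * inv;      // hit distance along the ray
    return b1 >= 0.0 && b2 >= 0.0 && (b1 + b2) <= 1.0 && t >= 0.0;
}
```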
load can be reduced when calculating intersections over a continuous scene by using the available partial results from adjacent triangles.

6.4.2 Intersection Acceleration Algorithms

It is inefficient to check every polygon in a scene for intersection. Based on (6.9), approximately 58 FLOPS are performed to test if a ray intersects a triangle. A scene with one million triangles would require 58 million FLOPS (MFLOPS), and shooting a modest 1,000 rays would require 58 billion FLOPS (GFLOPS). Thus, checking every object in a scene for each ray is an undesirable algorithm for ray tracing. Algorithms that speed up the intersection tests are known as intersection acceleration algorithms. The typical objective of the intersection acceleration algorithm is to quickly reject groups of shapes that do not need to be checked for a given ray. Three common acceleration algorithms (described in [19]) are:

Grid acceleration;
Bounding volume hierarchies (BVH);
K-dimensional (K-D) trees.
Each of these acceleration strategies is discussed separately below. In these approaches, the scene is partitioned into bounding boxes that each contains multiple primitives. Ray tracing consists of determining if a ray intersects a hierarchy-bounding box. Only if the ray intersects a bounding box is it checked against the enclosed primitives.

Grid Acceleration

The grid acceleration algorithm divides the scene into equally sized 3-D bounding boxes, or voxels. Each ray is checked against all voxels to determine any intersections. Since the number of voxels is relatively small, this initial intersection test can be performed quickly. Each voxel contains either primitives or nothing (i.e., an empty voxel). If a voxel is empty, the algorithm immediately moves to the next voxel in the ray's path. Otherwise, all of the children (e.g., primitives) in that voxel are checked for intersection. If no intersections are found, the algorithm moves to the next voxel in the ray's path. If an intersection is found, the ray tracing stops, the intersecting primitive is passed to the calling program, and the ray is terminated. Additional rays may be generated due to reflection (or diffraction). Uniform grid acceleration schemes have the advantage of being the most easily implemented in hardware. Additionally, statistical evaluations indicate that they are within a factor of two in terms of performance as compared to hierarchical tree structures such as BVHs or K-D trees [24]. An example of 2-D ray tracing using a 15 × 15 grid of bounding boxes is shown in Figure 6.7 [18]; a traversal sketch in the same spirit follows the figure. Two rays are shown traversing the grid. Only the primitives in the shaded voxels need to be tested for intersection. There may be many primitives (triangles, rectangles, etc.) in each voxel. The determination of which primitives are contained in each voxel is performed once at the start of the simulation. Some primitives will overlap between adjacent voxels. If each voxel contains many primitives (or very few), then the acceleration will not be very effective. There are many methods to improve the performance of grid acceleration. If there are a large number of primitives in the scene, it is advantageous to use a nested (two-staged) grid with a smaller grid of subvoxels corresponding to every large voxel of the main grid. In addition, grids with a nonuniform shape, such as a perspective grid, are sometimes used to increase efficiency.
Figure 6.7 Example 15 × 15 2-D bounding box (voxel) grid. Two rays are shown. The highlighted bounding boxes indicate intersections [18]. Only the primitives in the highlighted bounding boxes are checked.
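The voxel-to-voxel stepping can be sketched as a 3-D digital differential analyzer (our sketch, with names of our choosing; it assumes the grid spans [0, n × cell] on each axis and that the ray origin is inside the grid). Each visited voxel's primitive list would be tested with a routine such as rayTriangle above.

```cpp
// Hypothetical sketch: step a ray through a uniform voxel grid, visiting
// voxels in the order the ray crosses them. cell is the voxel edge length;
// visit() tests that voxel's primitives and returns true on a hit.
#include <cmath>
#include <functional>

void traverseGrid(double ox, double oy, double oz,        // ray origin
                  double dx, double dy, double dz,        // ray direction
                  double cell, int nx, int ny, int nz,
                  const std::function<bool(int, int, int)>& visit) {
    int ix = (int)(ox / cell), iy = (int)(oy / cell), iz = (int)(oz / cell);
    int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;

    // Ray parameter at the first voxel boundary crossing on each axis
    // (infinite when the ray is parallel to that axis).
    auto first = [cell](double o, double d, int i, int s) {
        if (d == 0.0) return INFINITY;
        double edge = (s > 0 ? (i + 1) : i) * cell;
        return (edge - o) / d;
    };
    double tx = first(ox, dx, ix, sx), ty = first(oy, dy, iy, sy),
           tz = first(oz, dz, iz, sz);
    // Per-voxel parameter increment along each axis.
    double txd = cell / std::fabs(dx), tyd = cell / std::fabs(dy),
           tzd = cell / std::fabs(dz);

    while (ix >= 0 && ix < nx && iy >= 0 && iy < ny && iz >= 0 && iz < nz) {
        if (visit(ix, iy, iz)) return;       // hit found: terminate the ray
        if (tx <= ty && tx <= tz) { ix += sx; tx += txd; }   // step in x
        else if (ty <= tz)        { iy += sy; ty += tyd; }   // step in y
        else                      { iz += sz; tz += tzd; }   // step in z
    }
}
```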
Bounding Volume Hierarchies

BVHs partition the primitives into a tree. Each node of the tree contains a bounding box of all the primitives descending from that node, and each leaf contains a primitive. When propagating a ray, it is checked for an intersection with a node's bounding box. If the ray does not intersect the bounding box, it cannot intersect any of the descending primitives, so the entire node can be skipped. If the ray does intersect the bounding box, then each leaf of the node is checked. A simple scene with three objects and a BVH-tree representation is shown in Figure 6.8. The main differences between BVH trees and K-D trees are [19]:

1. In a BVH tree, a primitive only appears once, whereas in K-D trees (and grids) a primitive may show up in multiple nodes or voxels (due to overlap) and undergo testing multiple times;
2. There are at most 2n − 1 nodes in the tree, where n is the number of primitives;
3. BVH trees are generally slightly slower than K-D trees for ray intersection calculations, but can be significantly faster to build.
Figure 6.8 (a) Scene with three primitives, and (b) BVH-tree representation of the scene [18].
K-Dimensional Trees

K-D trees are a variation of binary space partitioning (BSP) trees, where space is recursively subdivided. In K-D trees, the primitives are partitioned along an axis cut. For each left and right child, the primitives are divided by another axis cut. The splits are chosen to limit the maximum number of primitives per voxel. Once the desired number of primitives has been reached or a predetermined tree-depth limit has been realized, the tree is terminated. K-D trees partition the scene space similarly to grid acceleration, but the storage is a binary tree, and there are no empty nodes, unlike the grid's empty voxels. The K-D tree deterministically or adaptively divides the space along one of the dimensions at each split. For 3-D space, the K-D tree will be split along either the x, y, or z dimension each time. Primitives occurring on one side of the splitting plane go to one side of the node, and primitives on the other side of the splitting plane go to the other. Primitives that intersect the splitting plane go on both sides of the node, and thus there is primitive duplication (unlike the BVH tree). A minimal node layout is sketched below. The voxel pattern for an example K-D tree is shown in Figure 6.9(a). The corresponding tree diagram is shown in Figure 6.9(b). For this example, the binary partition occurs in the sequence x-axis, y-axis, x-axis, y-axis. A ray would be tested for intersections against the primitives that are located only in voxels that intersect the ray.
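One node layout consistent with this description might look as follows; the layout is ours, not from the text.

```cpp
// Hypothetical K-D tree node. Interior nodes carry an axis-aligned split;
// leaves carry primitive indices. A primitive straddling the splitting
// plane is referenced from both children (unlike a BVH).
#include <vector>

struct KdNode {
    int axis = -1;                // split axis: 0 = x, 1 = y, 2 = z; -1 = leaf
    double split = 0.0;           // position of the splitting plane
    KdNode* child[2] = { nullptr, nullptr };  // below / above the plane
    std::vector<int> prims;       // primitive indices (leaves only)
};
```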
Figure 6.9 (a) Hierarchical grid of size Nx × Ny, and (b) corresponding K-D tree structure [25].
6.5 GPU Ray-Tracing Example

The algorithms for ray tracing are highly parallel, with each ray traversing the scene almost independently. In the most obvious acceleration strategy for GPUs, we assign a single GPU thread to manage each ray traversal through the scene. If we wanted to include reflected and diffracted rays, we would spawn additional rays after each ray/primitive collision. In the GPU computing model it is not feasible to continue processing a list of rays that is dynamically increasing in size in the same GPU kernel. Thus, it is necessary to record each new ray (position, direction, power, polarization, etc.) in the initial kernel function call. Then, the new set of rays is processed in a separate call to the kernel. In that way, we can follow the ray through multiple reflections in the scene with each GPU function call. As an example, we considered the calculation of LOS to every point on a surface consisting of 251 × 251 pixels using the modified (C++/CUDA) LOS example supplied with the CUDA SDK [2, 26]. We implemented the terrain simulator on a desktop computer (Windows 7, 64-bit OS) with a GeForce GTX-980 GPU. The 3-D analytical surface is shown in Figure 6.10(a) with the emitter location indicated by the sphere. The corresponding 2-D image is included in Figure 6.10(b) to help indicate the height scale.
Figure 6.10 Example emitter (sphere) above a curved analytical surface [2].
The emitter altitude was increased from 20 to 200m in 1m increments. At each emitter altitude, LOS was calculated from the emitter location to every pixel. For every terrain simulation (at a chosen emitter altitude), the software was looped over every point on the outside of the surface in a C++ 'for' loop. A ray was sequentially launched to every terrain pixel on the circumference for a total of 1,001 rays. The angle to each terrain point (125 points) along each ray was calculated concurrently. Then LOS was determined using the calculated elevation angles by moving along in range and testing if the elevation angle was the largest encountered thus far. If not, the pixel was in shadow; otherwise it had LOS with the emitter. After each kernel, threads were synchronized on the host so that each GPU kernel was complete before the next one was started. Most of the computation in the simulation was performed by two GPU kernels (termed global functions in CUDA) and the built-in CUDA Thrust Library [27] functions. In the first GPU kernel, the elevation angle was calculated (directly below the ray) as the range from the emitter was increased. The height for arbitrary (noninteger coordinate) points over the terrain was determined during execution using a CUDA texture height field map. The analytical surface terrain was not organized into range-azimuth space before the simulation, as in the example from the previous chapter. Next, the cumulative elevation angle was calculated along each ray using a 'thrust::inclusive_scan' operation; a sketch of this step follows below. Finally, LOS was determined in a second GPU kernel function call by checking (via a Boolean comparison) that the cumulative elevation angle continually increased along the ray. An example elevation angle map with the emitter at a 70m altitude is shown in Figure 6.11. The minimum angles (–90°) are directly below the emitter in the center of the map. At the edges of the map, the elevation angle decreases but is still less than zero (pointed downwards), as expected. The emitter is slightly above the highest points on the surface. Locations where the elevation angle decreases as the range (from the centrally located emitter) increases will be in shadow.

Figure 6.11 An example of the elevation angles calculated for an emitter altitude of 70m.

We repeated the simulation 181 times, sequentially increasing the altitude of the emitter (sphere in Figure 6.10) in 1m increments from 20 to 200m. The output of the LOS calculation is shown in Figure 6.12 at four different emitter heights. As the height increased from 20 to 200m, the area in shadow (black area) decreased. The GTX-980 has 16 multiprocessors and 128 cores per multiprocessor, for a total of 2,048 cores. We initially used a block size of 128 (along one dimension, block.x) so that the entire ray calculation was performed on a single multiprocessor (the grid size was one). We launched all 125 threads (per ray) concurrently and we let the GPU scheduler sort out how to accomplish the parallelization (on the single multiprocessor). This intermediate angular result remained on the device (was not transferred to the host). The total time for all 181 LOS simulations was 65.540 seconds for an individual time of 362 ms for each terrain simulation (see top line of Table 6.2). This timing throughput measurement did not include the memory transfer time required to move the results to the host at the end of the simulation. When we used only 32 threads per block for a grid size of 4, the time was reduced to 62.807 seconds, corresponding to 347 ms per terrain simulation. Thus, using more multiprocessors, the time decreased slightly. When we set the block size to 8 for a grid size of 16, the total time actually increased to 64.780 seconds for an individual time of 358 ms, despite using more multiprocessors.
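A hedged sketch of that scan step follows (variable names are ours): thrust::inclusive_scan with a maximum functor produces, for each range bin along a ray, the largest elevation angle encountered so far, and the second kernel then marks a bin as visible when its own angle is not below that running maximum.

```cpp
// Illustrative Thrust call for the cumulative elevation angle along one ray.
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/functional.h>

void cumulativeMaxAngle(const thrust::device_vector<float>& elev,
                        thrust::device_vector<float>& cumMax) {
    thrust::inclusive_scan(elev.begin(), elev.end(), cumMax.begin(),
                           thrust::maximum<float>());
    // A pixel at bin i has LOS when elev[i] >= cumMax[i] (checked in a
    // second kernel via a Boolean comparison).
}
```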
Figure 6.12 LOS maps of the analytical surface at an altitude of 20, 50, 100, and 200m. As the emitter height increased, the area in shadow (black) decreased.
Table 6.2 The Timing for the GPU Terrain Simulation

Description            Block Size   Time per Terrain Simulation
Each ray sequentially  128          362 ms
Each ray sequentially  32           347 ms
Each ray sequentially  8            358 ms
All rays concurrently  64           0.27 ms
All rays concurrently  128          0.28 ms
All rays concurrently  256          0.28 ms
All rays concurrently  512          0.29 ms
All rays concurrently  1,024        0.32 ms
The GPU will attempt to use warps consisting of 32 threads even if we set a block size that is smaller than 32. If the block size is too small, then some computation is wasted and the ability of the GPU to coordinate activity across threads in a warp is reduced. Therefore, for maximum efficiency, we should assign threads in groups of 32. By launching rays sequentially in the previous SDK example, much of the GPU computational power was not used. The simulation operates much faster if more threads are initiated at once and run concurrently. For instance, in a faster implementation, we only used two global function kernels corresponding to the elevation angle calculation and LOS calculation. The elevation angle to each terrain patch was calculated by launching 251 × 251 threads, one for every pixel. Then, 1,001 threads were started simultaneously, corresponding to all rays from the emitter to an edge point. The block size was varied for both the angle and LOS calculations (see Table 6.2). The best time for all 181 simulations was 48.8 ms, or 0.27 ms per simulation. Obviously, using more threads vastly improves the time, with a speed-up of approximately a factor of 1,200. The updated terrain simulation could easily be run in real time at video frame rates. As seen in Table 6.2, by using a smaller block size (and additional multiprocessors), the performance improved slightly. It is possible that two or more rays will need to access the same memory locations as they traverse the scene, which will lead to memory bottlenecks. We could attempt to minimize conflicts by initiating rays that are separated spatially. For instance, we could launch every third ray in batches: ray numbers 1, 4, 7, … and then ray numbers 2, 5, 8, … and so forth; a host-side sketch of this batching follows below. To speed up the simulation further, we would initially organize the terrain into a polar format in terms of range and azimuth, as discussed in Chapter 5. The amount of improvement using a polar geometry will depend on the size of the terrain, as some overhead computation is required for the initial terrain conversion to polar format. For 3-D scenes, we would need to launch many more rays (millions). For efficient performance, the scene would need to be partitioned into a grid of bounding boxes using one of the strategies outlined above, including grid acceleration, BVH, or K-D trees.
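In the sketch below, traceRays, devRays, and Ray are hypothetical names (not from the CUDA SDK example) standing in for the application's ray-tracing kernel and ray record.

```cpp
// Illustrative host-side batching: launch rays 1,4,7,..., then 2,5,8,...,
// so concurrently executing rays start spatially separated.
void launchStridedBatches(Ray* devRays, int numRays) {
    const int stride = 3;
    const int block = 128;                   // a multiple of the warp size
    for (int offset = 0; offset < stride; ++offset) {
        int raysInBatch = (numRays - offset + stride - 1) / stride;
        int grid = (raysInBatch + block - 1) / block;
        traceRays<<<grid, block>>>(devRays, numRays, offset, stride);
        cudaDeviceSynchronize();             // or overlap batches with streams
    }
}
```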
6.6 FPGA Ray Tracing

For FPGA ray tracing, the connection between individual rays and the architecture is not as obvious. The FPGA is not organized into many discrete processors that can be assigned individually to threads (as in a GPU).
However, with the available FPGA logic units we can design multiple pipelines to process ray-triangle intersections that operate simultaneously, as long as they do not require access to the same data. In a straightforward manner, we can implement the ray-triangle intersection test (outlined in Section 6.4.1) on an FPGA. The first three operations in Table 6.1 (the vector subtractions) are performed first in parallel, since they are needed for subsequent operations. Then, the two cross products (operation numbers 4 and 5 in Table 6.1) are performed (using the results from the subtractions). The overall ray-triangle intersection pipeline requires 7 clock cycles to determine if an intersection occurs. It can be continuously fed new triangle data (in the front end of the pipeline) while calculations are occurring on previous triangles. In [22], the intersection pipeline was implemented using 20,000 (4-input) LUTs. The reciprocal operation only needs to be performed if an intersection occurs. Another 31 cycles are required to perform the reciprocal operation (and division) to determine the location of the intersection. The reciprocal and division calculation requires an additional 8,000 LUTs [22]. If the ray-triangle intersection test pipeline continuously feeds the division calculation, then the total pipeline can calculate an intersection location every 31 clock cycles. An FPGA with a 50-MHz clock and a single ray-triangle intersection pipeline can process 1.6 million ray-triangle intersections per second. We have not considered the additional time required to complete the ray tracing operations (such as shading of each primitive) since that processing does not significantly impact the timing results. Thus, the ultimate FPGA ray-tracing performance is limited by clock speed, the number of pipelines that can fit on an IC, and the memory bandwidth. For modern FPGAs (Xilinx Virtex-6), we could fit 15 ray-triangle intersection pipelines with enough room remaining for the additional required logic. With the nominal 650-MHz clock rate, we could calculate approximately 300 million ray-triangle intersections per second,
300 × 10⁶ Δ/sec ≈ (15 pipelines)(650 × 10⁶ Hz)/(31 cycles/Δ)
(6.10)
where Δ represents a triangle. This rate is only a rough estimate since, in modern FPGA designs, the number of clock cycles required for each division operation has decreased. Additionally, it is not clear that the division units will always be fully utilized. An FPGA pipeline may check many triangles before it finds one with an intersection and engages the division unit to calculate the intersection location. Multiple ray-triangle intersection
pipelines could share a single division unit, although bottlenecks will occur sporadically if many intersections are found simultaneously. In [22], 184 bits were required per triangle, although the number of bits will increase if higher primitive shape precision is warranted. The associated memory bandwidth for 15 pipelines is
15 GB/second ≈ (184 bits)(650 × 10⁶ Hz)/(8 bits/byte)
(6.11)
which is achievable using the latest FPGA designs that place the memory on the same IC as the FPGA.
References

[1] Khronos. Available: https://www.khronos.org/opencl/.
[2] NVIDIA CUDA Toolkit. Available: https://developer.nvidia.com/cuda-toolkit.
[3] AMD SDK. Available: https://developer.amd.com/tools-and-sdks/.
[4] Visual Studio. Available: https://visualstudio.microsoft.com/.
[5] Eclipse. Available: https://www.eclipse.org/.
[6] Lee, H., and M. Aaftab, "The OpenCL Specification, Version: 2.1, Document Revision: 23," Khronos Group, Vol. 3, 2015.
[7] Wolfe, M., and P. C. Engineer, "Understanding the CUDA Data Parallel Threading Model, a Primer," PGI Insider, 2010.
[8] Zeller, C., "CUDA C/C++ Basics," NVIDIA Corporation, 2011.
[9] Spivey, M., I. Page, and W. Luk, "How to Program in Handel," Technical Report, Oxford University Computing Laboratory, 1993.
[10] Mohl, S., "The Mitrion-C Programming Language," Mitrionics, Inc., 2006.
[11] Bångdahl, O., and M. Billeter, "FPGA Assisted Ray Tracing," master's thesis, Department of Computer Engineering, Computer Graphics Research Group, Chalmers University of Technology, Göteborg, Sweden, 2007.
[12] NI FPGA. Available: http://www.ni.com/labview/fpga/.
[13] Xilinx. Available: https://en.m.wikipedia.org/wiki/Xilinx.
[14] Kallstrom, P. Available: https://en.wikipedia.org/wiki/Field-programmable_gate_array.
[15] Betz, V., and J. Rose, "FPGA Routing Architecture: Segmentation and Buffering to Optimize Speed and Density," in Proceedings of the 1999 ACM/SIGDA Seventh International Symposium on Field Programmable Gate Arrays, 1999, pp. 59–68.
[16] Farooq, U., Z. Marrakchi, and H. Mehrez, Tree-based Heterogeneous FPGA Architectures: Application Specific Exploration and Optimization, Springer Science & Business Media, 2012.
[17] Prado, D. F. G., "Tutorial on FPGA Routing," Electrónica-UNMSM, No. 17, 2006, pp. 23–33.
[18] Hulbert, C., "Simulation and Modeling Using Graphics Processors (GPUs)," Information Systems Laboratories, 2011.
[19] Pharr, M., W. Jakob, and G. Humphreys, Physically Based Rendering: From Theory to Implementation, Morgan Kaufmann, 2016.
[20] POV Ray Tracer. Available: http://www.povray.org/.
[21] Martin, W., E. Cohen, R. Fish, and P. Shirley, "Practical Ray Tracing of Trimmed NURBS Surfaces," Journal of Graphics Tools, Vol. 5, No. 1, 2000, pp. 27–52.
[22] Fender, J., and J. Rose, "A High-Speed Ray Tracing Engine Built on a Field-Programmable System," in Proceedings of the 2003 IEEE International Conference on Field-Programmable Technology (FPT), 2003, pp. 188–195.
[23] Möller, T., and B. Trumbore, "Fast, Minimum Storage Ray/Triangle Intersection," in ACM SIGGRAPH Courses, 2005, p. 7.
[24] Havran, V., J. Prikryl, and W. Purgathofer, "Statistical Comparison of Ray-Shooting Efficiency Schemes," 2000.
[25] Hulbert, C. C., and J. S. Bergin, "Advanced Computing Architectures," Technical Note ISL-SCRD-TN-09-007, Information Systems Laboratories, April 2009.
[26] Blelloch, G. E., Vector Models for Data-Parallel Computing, Cambridge, MA: MIT Press, 1990.
[27] Thrust Library. Available: https://developer.nvidia.com/thrust.
About the Authors

Brian C. Watson has over 25 years of experience with a diverse range of research disciplines, including radar and image signal processing, nonlinear circuits, material physics, printed circuit board design, and antenna design. His Ph.D. thesis involved the study of quantum transitions in low-dimensional magnetic materials to increase the performance of high-temperature superconductors. During his time at the University of Florida, he also designed and built a 9-Tesla nuclear magnetic resonance system that can operate at temperatures near 1 kelvin. In 2000, Dr. Watson was selected to receive the University of Florida Tom Scott Memorial Award for distinction in experimental physics. He has worked for the Navy and Air Force developing software in support of DoD programs, including the Tomahawk and F-111 weapons systems. He is currently the chief technology officer of the Research Development and Engineering Solutions (RDES) Division at Information Systems Laboratories, Inc.

J. R. Guerci has over 30 years of experience in advanced technology research and development in government, industrial, and academic settings, including the U.S. Defense Advanced Research Projects Agency (DARPA) as director of the Special Projects Office (SPO), where he led the inception, research, development, execution, and ultimately the transition of next-generation multidisciplinary defense technologies. In addition to authoring over 100 peer-reviewed articles on next-generation sensor systems, he is the author of Space-Time Adaptive Processing for Radar and Cognitive Radar: The Knowledge-Aided Fully Adaptive Approach (both Artech House). In 2007, he received the IEEE Warren D. White Award for radar adaptive processing, and he is a Fellow of the IEEE for contributions to advanced radar theory and its embodiment in real-world systems. He is currently president/CEO of Information Systems Laboratories, Inc.
Index

A
Accelerated Parallel Processing (APP) SDK, 181
Acceleration strategies: computing hardware, 179–202; FPGA, 184–88; FPGA ray tracing, 200–202; GPU computational model, 181–83; GPU programming, 183–84; GPU ray-tracing example, 196–200; overview, 179–81; ray-tracing LOS algorithms, 188–96
Adapted angle-Doppler filter response, 22
Airborne MTI (AMTI) radars, 19
Angle-Doppler clutter distribution, 22
Angle-Doppler filter patterns, 23
Angle-of-arrival (AoA), 13
Antenna gain: estimating, 63; modeling, 60–63; relative signal, 61
Antenna patterns: backlobe, 62; example, 62; user-defined, 61
Application programming interface (API), 181
Atmospheric loss: in calculation of propagation factor, 63; modeling, 63–64; smaller antennas and, 63; total, as function of rainfall, 64; total one-way, 64
Automatic corner detection, 125–32
Azimuth angle, 65
Azimuth angular resolution, 143

B
Bayesian particle filters: distribution calculation, 41; minimization process, 44; motion model, 42; a posteriori step, 43; a priori step, 41; probability density function (PDF), 42, 43; radar resolution and, 44; recursive estimation process, 41; as replacing linear estimations, 41; roads, obstacles, and land cover information, 42–43; sequential two-stage approach, 40; as superior to MHT, 41; template comparison, 43; template matching likelihood, 44; template probabilities, 44–45; two-stage approach, 40
Bistatic bisector angle, 123
Bounding volume hierarchies (BVHs), 194–95
Building corners: automatic detection of, 125–32; detection approaches, 125–26; selecting in overhead images, 125
Building geometries: extracting from 2-D databases, 123–25; incorporating, 84–86; measuring from LiDAR, 132–35
Building heights: estimation, 123–25; from LiDAR data, 84, 95

C
Carrier frequencies, 63
Carrier wavelength, 58
Central processing unit (CPU), 179–80
Clutter cancellation, 52
Clutter-to-noise ratio (CNR), 87–88
Commercial software, 174–77
Cone matrix, 167
Configurable logic blocks (CLBs), 186, 187
Connected components algorithm, 130
Corner detection: approaches, 125–26; automatic, 125–32; connected components algorithm, 130; example LofG operators, 128; fast line finder, 129; lateral inhibition operator, 129; line convergence, 131; line detecting algorithms, 126; line endpoint adjustments, 130; line recovery algorithm, 130; output, 132; robot navigation approach, 127; threshold at pixels, 129–30
CUDA: built-in, 197; NVIDIA computational model, 181–83; SDK, 196; texture height field, 197

D
Defense Advanced Research Projects Agency (DARPA), 12, 14
Device kernels, 184
Diffraction: assumption, 55–57; geometry of translational object, 154; knife-edge (KED), 153–55; in low-frequency MTI systems, 55; uniform theory of (UTD), 55, 57, 155–57
Diffuse scattering, 64–65
Digital Terrain Elevation Data (DTED): for entire planet, 118; interpolation from rectilinear grid, 152; KASSPER project, 12; level of database, 118; level of function, 118; map, 120, 147; rectilinear grid, 143; terrain height, 146, 147
Directivity: estimating, 63; modeling, 60–63
Direct line of sight (DLOS) radar: defined, 11; detection association logic, 15; Doppler shift, 16; GMTI, 14
Direct memory access, 183

E
Earth-centered, Earth-fixed (ECEF) coordinates: calculation, 148, 149; defined, 148; point, 149
Eigenvalue equation, 171
Eigenvalues, 169–70
Eigenvectors, 169–70
Electronic protection (EP), 20
Elevation angle, 66, 70, 150
Elliptical conic sections: in city models, 167; cone matrix, 167; eigenvalue equation, 171; eigenvectors and eigenvalues, 169–70; elliptical parameters, 172; graphical depiction of intersection, 174; intersection function integration, 173; intersection shapes, 165; main lobe and side lobes modeling, 164; MATLAB and, 170–73; multipath modeling, 164–65; need for calculation of, 164; in-plane coordinate system, 172; quadratic equation in two variables, 169; radar simulations using, 163–74; ray tracing simulation, 164; rotation angle, 166; symmetry matrix, 168
Elliptical parameters, 172
EM propagation and multipath: diffraction assumption, 55–57; diffuse scattering, 64–65; modeling antenna directivity and gain, 60–63; modeling atmospheric loss, 63–64; modeling surface reflection polarization dependence, 65–70; MTI signal processing, 73–75; range-Doppler processing, 70–73; ray tracing, 57–60
EM radiation: impacting a wall, 66; reflection off of targets, 57
EM simulations: city model construction, 84; full, 82–84; geometric analysis, 75–82; high-fidelity, 141–77; incorporating building geometries, 84–86; performance characterization, 75; scattering power, 121
Expert systems, 12
Extended Kalman filter (EKF): advantage of, 28; defined, 28; drawback, 29; example, 29–31; example output of, 30; observation model, 30; operation, 29; state transition function, 28–29; See also Kalman filter (KF)

F
Fast Fourier Transform (FFT), 71
FPGA ray tracing: connection between rays and architecture, 200; memory bandwidth for pipelines, 202; performance, 201; pipeline, 201; ray-triangle intersection test, 201
FPGAs: configurable logic blocks (CLBs), 186, 187; defined, 184; flexibility in design, 184–85; I/O blocks, 188; logic blocks with interconnects, 187; for low-volume production, 185; market, 185; programmable routing, 186–88
Free-space impedance, 58
Fresnel equations, 65
Fresnel reflection coefficients, 67, 68
Full EM simulations, 82–84

G
Geometric analysis: as basic EM simulation capability, 75–82; distance threshold, 78; example, 76–80; MTI platform, 81; multipath prediction, 80; NLOS path existence, 80; parameter calculation, 77; Phoenix/Tempe simulations, 96, 97, 98; ray examination, 77; ray parameterization, 77; real simulation capability, 81–82; requirements for, 76; unit vector, 78, 79; urban scene summary, 79–80; vector geometry, 79
Geometrics optics simulations: azimuth angular resolution, 143; clutter power, 145; coherent or noncoherent calculation, 145; computational example, 144–45; over rectilinear grid with uniform spacing, 143; of terrain, 142–53; terrain scattering LOS calculation algorithm, 145–46; terrain scattering LOS calculation example, 146–53
Global Positioning System (GPS), 13
Google Earth: building corner estimation from, 82; overhead view from, 84, 85; Street View, 82, 86, 124; 3-D model from, 85, 86, 123, 124
GPU computational model, 181–83
GPUs: advantage of, 180–81; computational power, 200; device global memory, 182; general purpose (GPGPUs), 181; potential, 180; processors, 181; programming, 183–84
GPU terrain simulation, 199
Grid acceleration, 193–94
Ground MTI (GMTI) radars: airborne, increased area visible to, 15; defined, 12; DLOS, 14; ISR, 13; tracking scenario, 31–32; wide-area coverage, 52; See also NLOS GMTI

H
High-fidelity photorealistic rendering, 189–90
High-performance computing (HPC) architecture, 76
High-performance embedded computing (HPEC), 12
Histogram probabilistic MHT, 45
Hypothesis tree, 34–35, 36

I
Image processing algorithms: automatic corner detection using, 125–32; connected components, 130; line detection, 126; line recovery, 130; output, 132
Intersection acceleration algorithms: bounding volume hierarchies (BVHs), 194–95; grid acceleration, 193–94; K-D trees, 195–96; strategies, 193; types of, 192
Intersection shapes, 165
I/O blocks, 188

K
Kalman filter (KF): defined, 22; extended, 28–31; linear, 22–28; as Markov model, 22; system state adjustment, 25; in tracking moving vehicle, 27–28; as two-part Bayesian estimation prediction, 25
Kalman filter (KF) gain: calculation, 26; manual determination, 27; optimal, 26
KA maximum a posteriori (KA-MAP) tracker: Bayesian formulation, 108; defined, 108; drawback, 110; example target location PDFs, 110; experiment and prediction comparison, 108–9; general particle approach, 108; in NLOS tracking, 107–10; performance, 114; possible radar target return phenomenologies, 109; range-Doppler surface prediction, 109; state target PDF, 109–10
K-D trees, 195–96
Knife-edge diffraction (KED): in calculation of Fresnel zones, 154; defined, 141; formulation, 154
Knowledge-Aided Sensor Signal Processing and Expert Reasoning (KASSPER) project, 12–13

L
LabVIEW, 185
Land cover data: categories, 119; detailed type assignments, 119; illustrated, 120, 121; obtaining, 119; rectilinear grid, 143; See also Terrain databases
Least squares minimization, 44
Light Detection and Ranging (LiDAR) data: building heights from, 84, 95; coherent and incoherent, 132; defined, 132; discrete-return point cloud, 133; example illustration, 134; example point cloud, 133; measuring building geometries from, 132–35; overhead view, 133; recording for cities, 133
Linear Kalman filter, 22–28
Linear state transition model, 24
Line detecting algorithms, 126
Line recovery algorithm, 130

M
Machine intelligence, 12
MAP TkBD algorithm, 45, 46
Markov process, 40
Material properties for polarization analysis, 56
MATLAB, 76, 170, 171, 173, 185, 189
Measurement noise, 24
Monostatic-bistatic equivalence theorem, 121
Motion model, 42
Mountain top scenario, 161, 162, 163
Moving target indicator (MTI) radars: defined, 11; diffraction and, 55; Doppler processing, 20; dynamic range requirements, 52; electronic protection (EP), 20; processing techniques, 20; signal processing, 73–75; simulated detections, 20; two-pulse canceler, 73–74; types of, 19; See also Ground MTI (GMTI) radars
Moving target indicator (MTI) radars, airborne (AMTI), 19
Multipath analysis, examples of, 105–7
Multipath Exploitation Radar (MER) program: city model with tower-mounted radar, 88–94; data collection description, 102–4; defined, 13; focus, 86; illustrated, 14; initiation of, 86; radar system analysis for, 86–88
Multipath prediction, 105
Multipath propagation factors, 90, 91, 93
Multiple hypothesis tracking (MHT): algorithm development, 32; applied to example simulated scenario, 39; clustering of detection, 35; combined covariance, 34; detection-track cost, 39; example radar scenario, 38–39; example table, 38; Gaussian probabilities, 34; histogram probabilistic, 45; hypothesis tree, 34–35, 36; most probable assignments, 38; nonconflicting assignments determination, 37–38; probability of target-track hypothesis, 33; pruning hypotheses, 37; target differentiation, 39; target-track cost, 39, 40
Multiple-input multiple-output (MIMO), 71

N
Narrowband complex electric-field components, 58
Newport Condominiums, Chicago models, 135–38
NLOS GMTI, 16–17
NLOS tracking: alternative tracker, 112; applied to experimental radar returns, 111; coherent processing interval, 104; data and simulation comparison, 105, 106, 107; KA-MAP tracker, 107–10; MER data collection, 102–4; multipath prediction, 105; overview, 101–2; position error of tracker, 113; RMS error of, 111; summary, 113–14; tracker results, 110–13; validation of multipath signatures, 104–7; vehicle tracks illustration, 112
Non-line-of-sight (NLOS) radar: area coverage, 14; association logic example, 16; background, 11–13; effectiveness of approaches, 84; multipath exploitation methods, 52; overview, 14; propagation, exploiting, 52; See also NLOS tracking
NVIDIA CUDA computational model, 181–83
Nyquist regions, 21

O
Observation model, 24, 30
Open Computing Language (OpenCL): computational model, 181–82; defined, 181; execution model illustration, 182
Organization, this book, 16–18

P
Parallelization, 180
Phoenix/Tempe simulations: building vertices, 94; candidate target paths, 95–96; candidate test areas, 95; defined, 94; detectable signal locations, 100, 101, 102, 103; geometric analysis, 96, 97, 98; LiDAR height data, 95; limited LOS, 97; NE quadrant example analysis, 96; projection of buildings, 97–98; route 1 SINR analysis, 103; route 2 analysis, 101; route 3 SINR analysis, 102; route 4 SINR analysis, 103; route designations, 99; SINR calculation, 99–100; system parameters, 98; target routes, 98
Polarimetric scattering coefficient, 122
Polarization: effect of, 65; impact of, 69; surface reflection dependence, 65–70; transverse electric (TE), 66, 67–68; transverse magnetic (TM), 66, 67–68
Prediction, 53
Probability density function (PDF), 42, 43
Probability mass function (PMF), 47
Probability of hypothesis, 36
Process noise, 23, 25
Process noise covariance: corresponding, estimating, 26; estimating, 26; expected covariance, 25
Programmable routing, 186–88
Propagation factors: defined, 69; mountain top scenario, 161, 162, 163; multipath, 90, 91, 93; one-way, 160; SEKE prediction, 158–63; simulated, 69; total one-way, 90, 91, 93
Propagation losses, 159–62
Pulse repetition frequency (PRF), 70

R
Radar cross section (RCS), 14, 43
Radar Surveillance Technology Experimental Radar (RSTER), 160
Radar system analysis, 86–88
Radio-frequency (RF) electronics, 12
Range-Doppler processing: cluster map example, 73; CPI, 70; defined, 70; example map, 72; FFT, 71; received signal in, 70; target radar returns, 72
Ray tracing: brute force of, 82; Doppler effects and, 60; FPGA, 200–202; frequency along each path, 58–59; in near real time, 142; overview, 57–60; phase shift and, 58; physics, 12; receiver velocity and, 59; relative emitter-receiver motion and, 59–60; simulation, 164; time dilation and, 60
Ray tracing LOS algorithms: GPU example, 196–200; intersection acceleration algorithms, 192–96; overview, 188; shapes and ray intersections, 188–92
Ray-triangle intersection, 192
Reflection geometry, 65
Rendering, 189–90
Residual covariance, 26
RFView: data visualization tools, 176–77; interface, 175; RF parameters in, 175; 3-D terrain databases, 176
Robot navigation approach, 127
Rotation angle, 166

S
Sample covariance matrix (SCM), 12
Scattering coefficient, 121, 122
Scattering curves, 121
Scattering power versus incident angle, 117, 121–23
Shapes and ray intersections, 188–92
Shooting and bouncing ray (SBR) method, 82
Signal processing, MTI, 73–75
Signal-to-interference-noise ratio (SINR), 99–100
Signal-to-noise ratio (SNR), 13, 14, 45, 47, 52, 54, 87
Space-time adaptive processing (STAP), 12
Spherical Earth knife-edge (SEKE): defined, 141; development of, 158; high-fidelity propagation loss predictions, 159–60; LOS propagation loss, 160; propagation factor prediction, 158; propagation scenario, 158, 159; simulations conducted without, 161
State covariance, 27
State transition model, 23
Surface reflection polarization dependence, 65–70
Symmetry matrix, 168

T
Target geolocation, 52
Terrain databases: Digital Terrain Elevation Data (DTED), 117, 118, 120, 143; land cover, 117, 119–21; overview, 117–18; RFView, 176; scattering power versus incident angle, 117, 121–23; types of, 117; urban, 123–38
Terrain scattering: for bald curved Earth, 151; calculated range-azimuth clutter map, 151; elevation angle, 150; LOS calculation algorithm, 145–46; LOS calculation example, 146–53
3-D integrated circuits (ICs), 180
3-D urban models: existence of, 135; illustrated, 136–38; Newport Condominiums, Chicago, 135–38
Tower simulation (city model): summary, 93–94; test tracks, 88, 89; tower test site inputs, 89; track 1 (urban), 89–91; track 2 (street), 91–92; track 3 (interstate), 92–93
Track 1 (urban) simulation, 89–91
Track-before-detect (TkBD): approaches, 45; discrete states and, 47; MAP algorithm, 45, 46; measurement used as detection, 45; as nonlinear estimation problem, 45; null state and, 46–47; probability mass function (PMF), 47; threshold problem elimination, 45; transition model, 47
Transition model, 47
Transverse electric (TE) polarization, 66, 67–68
Transverse magnetic (TM) polarization, 66, 67–68
Triangle vertices and barycentric parametric points, 191
Two-pulse MTI cancelers: defined, 73; performance, improving, 75; radar returns from stationary clutter, 75; target geolocation accuracy, 74

U
Uniform theory of diffraction (UTD): defined, 55, 155; diffraction implementation with, 57; geometry of, 156; problem of, 155; See also Diffraction
Urban propagation, 82
Urban terrain databases: automatic corner detection, 125–32; existing 3-D models, 135–38; extracting building geometries, 123–25; Google Earth Pro, 123; measuring building geometries from LiDAR, 132–35; See also Terrain databases

V
Vector geometry, 79
VHSIC Hardware Description Language (VHDL), 185
Viterbi algorithm, 46

W
Warp, 184
Wireless InSite, 82, 83

X
X-band radar scenario, 56
Xilinx, 185
Recent Titles in the Artech House Radar Series
Dr. Joseph R. Guerci, Series Editor

Adaptive Antennas and Phased Arrays for Radar and Communications, Alan J. Fenn
Advanced Techniques for Digital Receivers, Phillip E. Pace
Advances in Direction-of-Arrival Estimation, Sathish Chandran, editor
Airborne Pulsed Doppler Radar, Second Edition, Guy V. Morris and Linda Harkness, editors
Basic Radar Analysis, Mervin C. Budge, Jr. and Shawn R. German
Basic Radar Tracking, Mervin C. Budge, Jr. and Shawn R. German
Bayesian Multiple Target Tracking, Second Edition, Lawrence D. Stone, Roy L. Streit, Thomas L. Corwin, and Kristine L. Bell
Beyond the Kalman Filter: Particle Filters for Tracking Applications, Branko Ristic, Sanjeev Arulampalam, and Neil Gordon
Cognitive Radar: The Knowledge-Aided Fully Adaptive Approach, Joseph R. Guerci
Computer Simulation of Aerial Target Radar Scattering, Recognition, Detection, and Tracking, Yakov D. Shirman, editor
Control Engineering in Development Projects, Olis Rubin
Design and Analysis of Modern Tracking Systems, Samuel Blackman and Robert Popoli
Detecting and Classifying Low Probability of Intercept Radar, Second Edition, Phillip E. Pace
Digital Techniques for Wideband Receivers, Second Edition, James Tsui
Electronic Intelligence: The Analysis of Radar Signals, Second Edition, Richard G. Wiley
Electronic Warfare in the Information Age, D. Curtis Schleher
Electronic Warfare Target Location Methods, Second Edition, Richard A. Poisel
ELINT: The Interception and Analysis of Radar Signals, Richard G. Wiley
EW 101: A First Course in Electronic Warfare, David Adamy
EW 102: A Second Course in Electronic Warfare, David Adamy
EW 103: Tactical Battlefield Communications Electronic Warfare, David Adamy
FMCW Radar Design, M. Jankiraman
Fourier Transforms in Radar and Signal Processing, Second Edition, David Brandwood
Fundamentals of Electronic Warfare, Sergei A. Vakin, Lev N. Shustov, and Robert H. Dunwell
Fundamentals of Short-Range FM Radar, Igor V. Komarov and Sergey M. Smolskiy
Handbook of Computer Simulation in Radio Engineering, Communications, and Radar, Sergey A. Leonov and Alexander I. Leonov
High-Resolution Radar, Second Edition, Donald R. Wehner
Highly Integrated Low-Power Radars, Sergio Saponara, Maria Greco, Egidio Ragonese, Giuseppe Palmisano, and Bruno Neri
Introduction to Electronic Defense Systems, Second Edition, Filippo Neri
Introduction to Electronic Warfare, D. Curtis Schleher
Introduction to Electronic Warfare Modeling and Simulation, David L. Adamy
Introduction to RF Equipment and System Design, Pekka Eskelinen
Introduction to Modern EW Systems, Andrea De Martino
An Introduction to Passive Radar, Hugh D. Griffiths and Christopher J. Baker
Linear Systems and Signals: A Primer, JC Olivier
Meter-Wave Synthetic Aperture Radar for Concealed Object Detection, Hans Hellsten
The Micro-Doppler Effect in Radar, Victor C. Chen
Microwave Radar: Imaging and Advanced Concepts, Roger J. Sullivan
Millimeter-Wave Radar Targets and Clutter, Gennadiy P. Kulemin
MIMO Radar: Theory and Application, Jamie Bergin and Joseph R. Guerci
Modern Radar Systems, Second Edition, Hamish Meikle
Modern Radar System Analysis, David K. Barton
Modern Radar System Analysis Software and User's Manual, Version 3.0, David K. Barton
Monopulse Principles and Techniques, Second Edition, Samuel M. Sherman and David K. Barton
MTI and Pulsed Doppler Radar with MATLAB®, Second Edition, D. Curtis Schleher
Multitarget-Multisensor Tracking: Applications and Advances, Volume III, Yaakov Bar-Shalom and William Dale Blair, editors
Non-Line-of-Sight Radar, Brian C. Watson and Joseph R. Guerci
Precision FMCW Short-Range Radar for Industrial Applications, Boris A. Atayants, Viacheslav M. Davydochkin, Victor V. Ezerskiy, Valery S. Parshin, and Sergey M. Smolskiy
Principles of High-Resolution Radar, August W. Rihaczek
Principles of Radar and Sonar Signal Processing, François Le Chevalier
Radar Cross Section, Second Edition, Eugene F. Knott, et al.
Radar Equations for Modern Radar, David K. Barton
Radar Evaluation Handbook, David K. Barton, et al.
Radar Meteorology, Henri Sauvageot
Radar Reflectivity of Land and Sea, Third Edition, Maurice W. Long
Radar Resolution and Complex-Image Analysis, August W. Rihaczek and Stephen J. Hershkowitz
Radar RF Circuit Design, Nickolas Kingsley and J. R. Guerci
Radar Signal Processing and Adaptive Systems, Ramon Nitzberg
Radar System Analysis, Design, and Simulation, Eyung W. Kang
Radar System Analysis and Modeling, David K. Barton
Radar System Performance Modeling, Second Edition, G. Richard Curry
Radar Technology Encyclopedia, David K. Barton and Sergey A. Leonov, editors
Radio Wave Propagation Fundamentals, Artem Saakian
Range-Doppler Radar Imaging and Motion Compensation, Jae Sok Son, et al.
Robotic Navigation and Mapping with Radar, Martin Adams, John Mullane, Ebi Jose, and Ba-Ngu Vo
Signal Detection and Estimation, Second Edition, Mourad Barkat
Signal Processing in Noise Waveform Radar, Krzysztof Kulpa
Space-Time Adaptive Processing for Radar, Second Edition, Joseph R. Guerci
Special Design Topics in Digital Wideband Receivers, James Tsui
Systems Engineering of Phased Arrays, Rick Sturdivant, Clifton Quan, and Enson Chang
Theory and Practice of Radar Target Identification, August W. Rihaczek and Stephen J. Hershkowitz
Time-Frequency Signal Analysis with Applications, Ljubiša Stanković, Miloš Daković, and Thayananthan Thayaparan
Time-Frequency Transforms for Radar Imaging and Signal Analysis, Victor C. Chen and Hao Ling
Transmit Receive Modules for Radar and Communication Systems, Rick Sturdivant and Mike Harris

For further information on these and other Artech House titles, including previously considered out-of-print books now available through our In-Print-Forever® (IPF®) program, contact:
Artech House
685 Canton Street
Norwood, MA 02062
Phone: 781-769-9750
Fax: 781-769-6334
e-mail: [email protected]

Artech House
16 Sussex Street
London SW1V 4RW UK
Phone: +44 (0)20 7596-8750
Fax: +44 (0)20 7630-0166
e-mail: [email protected]
Find us on the World Wide Web at: www.artechhouse.com