This Month’s Covers…. Front Cover: Image licensed by Ingram Publishing Back Cover: Image licensed by Ingram Publishing
IEEE AESS PUBLICATIONS BOARD Lance Kaplan, VP–Publications, Chair Daniel O’Hagan, Editor-in-Chief, Systems
Gokhan Inalhan, Editor-in-Chief, Transactions Amanda Osborn, Administrative Editor
IEEE AESS Society The IEEE Aerospace and Electronic Systems Society is a society, within the framework of the IEEE, of members with professional interests in the organization, design, development, integration and operation of complex systems for space, air, ocean, or ground environments. These systems include, but are not limited to, navigation, avionics, spacecraft, aerospace power, mobile electric power & electronics, military, law enforcement, radar, sonar, telemetry, defense, transportation, automatic test, simulators, and command & control. Many members are concerned with the practice of system engineering. All members of the IEEE are eligible for membership in the Society and receive the Society magazine Systems upon payment of the annual Society membership fee. The Transactions are unbundled, online only, and available at an additional fee. For information on joining, write to the IEEE at the address below. Member copies of publications are for personal use only.
THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, INC. Saifur Rahman, President & CEO Thomas M. Coughlin, President-Elect Forrest D. Wright, Director & Secretary Mary Ellen Randall, Director & Treasurer K. J. Ray Liu, Past President Rabab Ward, Director & Vice President, Educational Activities
Sergio Benedetto, Director & Vice President, Publication Services and Products Jill I. Gostin, Director & Vice President, Member and Geographic Activities Yu Yuan, Director & President, Standards Association John P. Verboncoeur, Director & Vice President, Technical Activities Eduardo F. Palacio, Director & President, IEEE-US
IEEE Publishing Operations Senior Director: DAWN MELLEY Director, Editorial Services: KEVIN LISANKIE Director, Production Services: PETER M. TUOHY Associate Director, Editorial Services: JEFFREY E. CICHOCKI Associate Director, Information Conversion and Editorial Support: NEELAM KHINVASARA Manager, Journals Production: PATRICK J. KEMPF Journals Production Manager: CATHERINE VAN SCIVER
IEEE Aerospace and Electronic Systems Magazine® (ISSN 0885-8985; USPS 212-660) is published monthly by the Institute of Electrical and Electronics Engineers, Inc. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society/Council, or its members. IEEE Corporate Office: Three Park Avenue, 17th Floor, New York, NY 10016, USA. IEEE Operations Center: 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855, USA. NJ Telephone: 732-981-0060. Price/Publication Information: To order individual copies for members and nonmembers, please email the IEEE Contact Center at [email protected]. (Note: Postage and handling charges not included.) Member and nonmember subscription prices available upon request. Copyright and reprint permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, USA. For all other copying, reprint, or republication permissions, write to the Copyrights and Permissions Department, IEEE Publications Administration, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855. Copyright © 2023 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Periodicals postage paid at New York, NY, and at additional mailing offices. Postmaster: Send address changes to IEEE Aerospace and Electronic Systems Magazine, IEEE, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855. GST Registration No. 125634188. CPC Sales Agreement #40013087. Return undeliverable Canada addresses to: Pitney Bowes IMEX, P.O. Box 4332, Stanton Road, Toronto, ON M5W 3J4, Canada. IEEE prohibits discrimination, harassment, and bullying. Printed in the U.S.A.
Editors Editor-in-Chief–Daniel W. O’Hagan, Fraunhofer FHR, Germany VP Publications–Lance Kaplan, U.S. Army Research Laboratory, USA AESS President–Mark Davis, Independent Consultant, USA Operations Manager, AESS–Amanda Osborn, Conference Catalysts, LLC, USA
Contributing Editors Awards–Fulvio Gini, University of Pisa, Italy Book Reviews Editor–Samuel Shapero, Georgia Tech Research Institute, USA Conferences–Braham Himed, Air Force Research Laboratory, USA Distinguished Lecturers & Tutorials–Alexander Charlish, Fraunhofer Institute for Communication, Information Processing and Ergonomics, Germany Education–Alexander Charlish, Fraunhofer Institute for Communication, Information Processing and Ergonomics, Germany History–Hugh Griffiths, University College London, UK Student Research–Federico Lombardi, University College London, UK Technical Panels–Michael Braasch, Ohio University, USA Tutorials–W. Dale Blair, Georgia Tech Research Institute, USA Website Updates–Amanda Osborn, Conference Catalysts, LLC, USA
Associate Editors and Areas of Specialty
Scott Bawden, Arctic Submarine Laboratory, USA – Energy Conversion Systems
Erik Blasch, US Air Force Research Lab (AFRL), USA
Roberto Sabatini, RMIT University, Australia – Avionics Systems
Stefan Brueggenwirth, Fraunhofer Institute for High Frequency Physics and Radar Techniques FHR, Germany – AI and ML in Aerospace
Dietrich Fraenken, Hensoldt Sensors, Germany – Fusion and Signal Processing
Lyudmila Mihaylova, The University of Sheffield, UK – Target Tracking
Mauro De Sanctis, University of Rome “Tor Vergata,” Italy – Signal Processing and Communications
Jason Gross, West Virginia University (WVU), USA – Navigation, Positioning
Giancarmine Fasano, University of Naples Federico II, Italy – Unmanned Aircraft Systems
Michael Brandfass, Hensoldt – Radar Systems
Raktim Bhattacharya, Texas A&M, USA – Space Systems
Haiying Liu, DRS Technologies, Inc., USA – Control and Robotic Systems
Michael Cardinale, Retired, USA – Electro-Optic and Infrared Systems, Image Processing
Ruhai Wang, Lamar University, USA – Systems Engineering
Marco Frasca, MBDA, Italy – Quantum Technologies in Aerospace Systems
September 2023
ISSN 0885-8985
Volume 38 Number 9
COLUMNS In This Issue –Technically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3
FEATURE ARTICLES Opportunities and Limitations in Radar Sensing Based on 5G Broadband Cellular Networks M. Płotka, K. Abratkiewicz, R. Maksymiuk, J. Wszołek, A. Księżyk, P. Samczyński, T.P. Zieliński . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4
LTP for Reliable Data Delivery From Space Station to Ground Station in the Presence of Link Disruption J. Liang, X. Liu, R. Wang, L. Yang, X. Li, C. Tang, K. Zhao . . . . . . . . . . . . . . . . .
24
Radar Challenges, Current Solutions, and Future Advancements for the Counter Unmanned Aerial Systems Mission A.D. Brown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
34
NEWS AND INFORMATION Call for Papers: Special Issue on Hypersonic Weapons: Threat and Defence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
22
Call for Papers: IEEE International Radar Conference 2023 . . . . . . . . . . . . . . .
23
AESS Virtual Distinguished Lecturer Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
51
Interview With Roy Streit S.P. Coraluppi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
52
Call for Papers: Special Issue on Ethics, Values, Laws, and Standards for Responsible AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
62
AESS Meet the Team: Roberto Sabatini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
63
2023 AESS Organization and Representatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
64
2023 Aerospace & Electronic Systems Society: Meetings and Conferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . inside back cover
How to Reach Us We welcome letters to the editor, but we reserve the right to edit for space, style, and clarity. Include your address and daytime phone number with your correspondence. E-mail: Daniel W. O’Hagan, [email protected]; Catherine Van Sciver, [email protected]
Publishers: Send books for review to Samuel Shapero, 407 Angier Place NE, Atlanta, GA 30308. If you have questions, contact Samuel by e-mail at [email protected]. Advertisers: If you are interested in advertising in the AESS SYSTEMS Magazine, please contact Anthony Land at Naylor Associates at [email protected] or Daniel W. O’Hagan at [email protected].
IEEE AESS SCHOLARSHIP PROGRAM
IEEE AEROSPACE AND ELECTRONIC SYSTEMS
$10,000 + CERTIFICATE
DEADLINE: DECEMBER 1
AESS awards one Graduate and one Undergraduate-level scholarship annually. Visit the website for requirements and eligibility.
UNDERGRADUATE LEVEL: Electrical Engineering
GRADUATE LEVEL: Systems Engineering
CONTACT US: [email protected] | ieee-aess.org/scholarship
In This Issue –Technically OPPORTUNITIES AND LIMITATIONS IN RADAR SENSING BASED ON 5G BROADBAND CELLULAR NETWORKS In recent years, the 5G network has been deployed more and more densely. The signal coverage area is increasing, and in many countries, it allows the use of the new radio in most large cities. This makes it possible to use 5G network signals for telecommunication and remote sensing purposes. This article describes the opportunities and limitations of using 5G signals for radars. The waveform characteristics of the 5G network were tested and compared, range analyses were made, and the possibilities of detecting targets using impulse signals sent for various purposes in the network were examined. The article presents measurable properties of 5G signals that allow one to detect targets. Also, possibilities of exploiting 5G signalling for radar applications are provided. Moreover, the main limitations of using 5G waveforms and the general 5G architecture for radar purposes are discussed. Finally, a preliminary concept validation is shown with experimental results of car detection using 5G network-based passive radar.
LTP FOR RELIABLE DATA DELIVERY FROM SPACE STATION TO GROUND STATION IN THE PRESENCE OF LINK DISRUPTION Designed as the primary transport protocol of delay/disruption-tolerant networking (DTN) in space, the Licklider Transmission Protocol (LTP) is expected to provide reliable data delivery service in a challenging networking environment regardless of the presence of random link disruptions and/or extremely long propagation delays. NASA has implemented the DTN protocol on the International Space Station for data delivery to the earth ground station. However, there is a lack of performance evaluation of LTP for its use in space station communications, especially in the presence of link disruption. In this article, a study using a PC-based experimental infrastructure is presented evaluating the performance of LTP for reliable data delivery from the space station to the ground stations. Realistic experiment results are presented with a focus on the effect of link disruption occurring either over the downlink or over the uplink. The effect of data loss due to channel error is also investigated.
RADAR CHALLENGES, CURRENT SOLUTIONS, AND FUTURE ADVANCEMENTS FOR THE COUNTER UNMANNED AERIAL SYSTEMS MISSION The proliferation of unmanned aerial system (UAS) technology has made counter unmanned aerial systems (C-UAS) a ubiquitous weapon in UAS conflicts. The advancement of UAS (drones) has elevated them to highly relied-upon assets for military applications, and they have become a weapon of choice for nonstate groups with nefarious intent. These groups employ the technology for surveillance, battlespace management, propaganda, and aerial strike attacks, often to considerable effect. Radars are the primary sensor for most C-UAS to combat threat UAS. They provide persistent detection, tracking, and classification at ranges beyond visual line of sight (LOS). Additionally, they are capable of 24/7 operation in all-weather conditions. This article provides insight into future radar advancements for C-UAS. Prior to describing radar advancements, a detailed overview of existing challenges and current solutions is presented. The challenges are expounded upon to provide an understanding of how they impact radar performance. Existing radar solutions are also discussed, providing insight into how C-UAS radar challenges are being solved today. The article closes with what is on the technological horizon for C-UAS radars.
IEEE A&E SYSTEMS MAGAZINE
3
Feature Article:
DOI. No. 10.1109/MAES.2023.3267061
Opportunities and Limitations in Radar Sensing Based on 5G Broadband Cellular Networks

Adam Księżyk, Nokia Solutions and Networks, 30-348 Kraków, Poland
Marek Płotka, Karol Abratkiewicz, and Radosław Maksymiuk, Warsaw University of Technology, 00-665 Warszawa, Poland
Jacek Wszołek, AGH University of Krakow, 30-059 Kraków, Poland, and also Ericsson POL, 30-392 Kraków, Poland
Piotr Samczyński, Warsaw University of Technology, 00-665 Warszawa, Poland
Tomasz P. Zieliński, AGH University of Krakow, 30-059 Kraków, Poland

Authors’ current addresses: Adam Księżyk is with Nokia Solutions and Networks, 30-348 Kraków, Poland (e-mail: [email protected]). Marek Płotka, Karol Abratkiewicz, Radosław Maksymiuk, and Piotr Samczyński are with the Institute of Electronic Systems, Warsaw University of Technology, 00-665 Warszawa, Poland (e-mail: [email protected], [email protected], [email protected], [email protected]). Jacek Wszołek and Tomasz Piotr Zieliński are with the Institute of Telecommunications, AGH University of Krakow, 30-059 Kraków, Poland, and also with Ericsson POL, 30-392 Kraków, Poland (e-mail: [email protected], [email protected]). Manuscript received 29 July 2022; revised 5 January 2023; accepted 6 April 2023; ready for publication 13 April 2023. Review handled by Daniel O’Hagan. 0885-8985/23/$26.00 © 2023 IEEE

INTRODUCTION

The main advantage of passive coherent location (PCL) is detecting targets without a cooperating signal source. The benefits are as follows: lower energy consumption due to the lack of energy-consuming transmitters; no spectrum allocation required; and the possibility of covert operation resulting from the lack of its own illuminator [1]. Nowadays, the emergence of a new digital broadcast system often entails a corresponding passive radar. The trend has been noticeable for years and can be observed by analyzing numerous systems and demonstrators of passive radar systems [2], [3], [4], [5], [6], [7], [8], [9]. The latest wireless transmission systems must meet several restrictive requirements. They result from many factors, including the most important ones, that is, dense allocation of spectral resources by various systems, the need to transmit
large amounts of data (especially in mobile telephony), the need to work with a low signal-to-noise ratio (SNR), and minimizing the signal sent in order to reduce the so-called electromagnetic smog [10]. The mentioned requirements are rational from the perspective of telecommunications and the demands placed by customers. On the other hand, limitations on bandwidth or the duration of continuous transmission will be reflected in the ability of passive radars to detect targets. The abovementioned problems have already been encountered in the implementation of passive radars (e.g., [5], [7], [11]). The latest works show that passive radar based on the 5G cellular network will be demanding [7]. This is mainly due to signal limitations, whose quantity and bandwidth strongly depend on the transmitted content. In addition, the standard allows the use of techniques such as digital beamforming and time division duplex (TDD), which further complicates the use of signals in the context of passive radar. The 5G network has not been sufficiently analyzed in terms of possible signals for use in noncooperative passive radars; it has been treated only partly in a few, mostly conceptual and theoretical works (e.g., [12], [13], [14]), although there are relatively many references (e.g., [15], [16], [17]). This article aims to present the signal structure and show the limitations and possibilities of using the 5G network in radars. Similar considerations were carried out for older-generation telecommunications systems, for which passive radar demonstrators were developed [18], [19]. The air interface of the 5G network is composed of a large number of channels and signals. There are downlink and uplink ones (from/to radio base stations). They carry user plane data and/or signaling. There are broadcast, multicast, and user equipment (UE) dedicated channels and signals. Some can be used as illumination signals in passive radars cooperating with a base
transceiver station (BTS) or in active radars realized inside the BTS as its additional functionality. Such a system could be called a joint radar and communication system, and would be a cutting edge technology. For this purpose, it is necessary to examine the properties of these signals, and from the point of view of radiolocation, the ambiguity function is the most informative tool since it characterizes the signal distribution for different delays and Doppler shifts. The rest of this article is organized as follows. In the “Principles of Active and Passive Radars in a 5G Network-Based Sensing Perspective” section, a brief description of active and passive radar basics is given. In particular, required radar signal features are mentioned. The “5G Signal Morphology” section details 5G signals used by a new radio (NR) user for accessing the 5G network. In the “Radar Characteristics of 5G Signals” section, the radar characteristics of described 5G signals are analyzed and compared. In the “Opportunities and Limitations of 5G-Based Radar” section, opportunities and limitations existing in 5G-based passive radar sensing are listed and analyzed. Finally, the “Conclusions” section is presented. In this article, the most important results and conclusions, presented in [7], are enhanced and generalized/unified in more readable form in the “Principles of Active and Passive Radars in a 5G Network-Based Sensing Perspective,” “5G Signal Morphology,” and “Radar Characteristics of 5G Signals” sections, while completely new material is discussed in the “Opportunities and Limitations of 5G-Based Radar” section. Analysis of contemporary joint communication and sensing systems is not addressed in this article. There are many good surveys on this subject, such as [14], [20], [21], and [22].
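To make the role of the ambiguity function concrete, the following minimal numpy sketch evaluates a cross-ambiguity (delay/Doppler) surface: a noise-like waveform stands in for a 5G downlink signal, and the surveillance channel contains a weak, delayed, Doppler-shifted echo. All parameters (sample rate, wavelength, echo delay, velocity, amplitudes) are illustrative assumptions rather than values from the article, and direct-path interference and clutter are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1e6                    # sample rate [Hz] (illustrative)
c = 3e8                     # speed of light [m/s]
lam = 0.1                   # wavelength [m], ~3 GHz carrier (illustrative)
N = 2 ** 15
t = np.arange(N) / fs       # integration time N/fs ~ 33 ms

# Reference channel: noise-like communication waveform
xr = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Surveillance channel: weak delayed, Doppler-shifted echo plus noise
true_delay = 12             # samples -> bistatic range 12 * c / fs = 3600 m
true_vb = 30.0              # bistatic velocity [m/s] -> Doppler vb/lam = 300 Hz
xs = 0.1 * np.roll(xr, true_delay) * np.exp(2j * np.pi * (true_vb / lam) * t)
xs += (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Cross-ambiguity surface over a bistatic range/velocity grid
delays = np.arange(0, 40)                 # range cells of c/fs = 300 m
vbs = np.linspace(-60, 60, 25)            # velocity grid, 5 m/s steps
caf = np.empty((len(delays), len(vbs)))
for k, vb in enumerate(vbs):
    comp = xs * np.exp(-2j * np.pi * (vb / lam) * t)   # Doppler compensation
    for i, d in enumerate(delays):
        # |<delayed reference, compensated surveillance>| (circular delay
        # via np.roll is a demo shortcut)
        caf[i, k] = np.abs(np.vdot(np.roll(xr, d), comp))

i, k = np.unravel_index(np.argmax(caf), caf.shape)
print(delays[i] * c / fs, vbs[k])         # peak near 3600 m, 30 m/s
```

The surface peaks at the echo's delay/velocity cell, which is exactly the "signal distribution over delays and Doppler shifts" that makes the ambiguity function informative for radar design.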
PRINCIPLES OF ACTIVE AND PASSIVE RADARS IN A 5G NETWORK-BASED SENSING PERSPECTIVE Nowadays, active radar is still the most popular type of radar. Recently, however, passive radars are gaining more and more popularity in the civilian and military environment [23]. Both serve as a sensor for range and velocity estimation using electromagnetic waves. Hypothetically, the 5G network can serve as a source of illumination in both of these cases. An example of an active radar geometry is presented in Figure 1(a). The radar consists of a cooperative transmitter and an integrated receiver. The transmitter sends an electromagnetic signal, which reflects from obstacles in the space. The reflected pulses reaching the receiving antenna are processed to give information about the distance and speed of moving targets in the space. It is possible to modify the network architecture by transferring a part of the transmitted signal to the cooperating radar receiver. Then one can compare the known waveform pattern with the received signal resulting from reflections from moving obstacles and returning to the receiver. As a result, analogous remote detection and sensing capabilities as in classical active radar are obtained. The various impulses and continuous signals available in the 5G network can be used for space observation. They may differ in properties, favoring the detection of weak targets because the 5G network is primarily used for data transmission. Theoretically, a slight change in a transmitted waveform with no or marginal influence on telecommunication capabilities may allow for plugging additional network features. For example, different modulation and radio transmission schemes can be used [24] with dedicated pilots for sensing purposes. Also, a small part of bandwidth can be aggregated only for radar sensing,
keeping the data rate almost at the same level. As a result, the base station can get both a more precise channel estimate (increasing transmission quality) and information about targets in the space. Moreover, such waveforms can be exploited for channel estimation in receivers (e.g., passive sensors). Knowledge of the signals present in the network can be used for radar purposes, which is explored in this work. The morphology of currently existing waveforms is described in the further part of this article. For a 5G network-based passive radar, the scenario can be illustrated, for example, as in Figure 1(b). Passive radar is a particular type of radar, where the transmitter and receiver are spatially separated. In addition, in practice, the receiver has no information about the signal sent by the transmitter, so it is necessary to obtain a reference waveform and/or reconstruct it. Current passive radars often use commercial signals as an illumination source [7], [9], [11], [19]. This scenario imposes the reception of a direct signal from the transmitting antenna [reference signal x_r(t)]. At the same time, the second surveillance channel of the receiver with signal x_s(t) (or several such channels) is used for sectoral space observation to detect moving targets. However, it should be kept in mind that when the 5G network is noncooperative, it is necessary to synchronize the receiver with a 5G base station [7]. If the signal can be decoded and reconstructed in the receiver, it can work with one antenna for reference x_r(t) and surveillance x_s(t) signal acquisition [25], [26].

Figure 1. Two possible scenarios comprising the 5G network as a source of illumination for active and passive radars. (a) Active radar with a cooperative 5G transmitter. (b) Passive radar with a noncooperative and spatially separated transmitter. R1: transmitter-to-target distance; R2: receiver-to-target distance; L: baseline.

Active and passive radars can work in a pulsed and continuous mode. This means that the illuminating signal may have the character of short bursts/pulses or a constant and full duration in time. For pulse-based radars, the range-Doppler map is computed based on matched filtering, resulting in a range-compressed signal. The matched filtering is defined as follows:

x(t) = ∫_{−∞}^{∞} x_s(τ) h(t − τ) dτ,   (1)

where h(t) = x_r*(−t) is the matched filter response. Next, using a set of the range-compressed signals, one can estimate the target velocity by performing a Fourier transform along the slow-time axis for each range cell. On this basis, velocity and range are estimated simultaneously. In passive radars, the range-Doppler map is obtained using the cross-ambiguity function (CAF) [1]:

C(R_b, V_b) = ∫_0^{T_int} x_s(t) x_r*(t − R_b/c) exp(−j(2π/λ)V_b t) dt,   (2)

where T_int is the integration time, R_b is the bistatic range, c is the speed of light, V_b is the bistatic velocity, and λ is the wavelength. Equations (1) and (2) look different; nevertheless, the effect is the same, and the difference is due to the different architecture of the two systems. In this article, the analysis of how 5G NR waveforms, transmitted by the base station, can be decoded by third-party receivers, resynthesized by them, and then used as a reference x_r(t) for x_s(t) in (2) is presented. The general idea of 5G signal processing in active (option 1) and passive radar (options 2, 3, and 4) is sketched in Figure 2. In option 1, a BTS is cooperating with a radar receiver, while in options 2 to 4 it does not. In mode 2, which can be identified as a PCL operation, the whole full-content signal obtained from the reference antenna is used as a search pattern, and possibly the uplink is additionally removed from it (in the case of TDD mode). In options 3 and 4, conceived as a passive pulse radar operation, resynthesized impulsive 5G synchronization and control signals are exploited as search patterns; they are extracted either from the reference antenna (option 3) or the surveillance antenna (option 4). Using option 3, we can achieve a single antenna passive radar,
but operating in option 4, at least two receiving antennas have to be provided. As mentioned, the processing in both types of radar, active and passive, is very similar. In passive radars, the signal can be additionally synchronized, allowing for an increase of the SNR at the output of the detector, but this operation is not mandatory. The range compression is performed according to matched filtering (1) or correlation (2) for active and passive radar, respectively. The next steps rely on clutter removal, velocity estimation, target detection, and tracking. These steps are close in passive and active systems and operate on the basis of the same principles.

Figure 2. General signal processing diagram for radar using a 5G network as a source of illumination.

5G SIGNAL MORPHOLOGY

A passive radar approach relies on the concept of utilizing an existing signal produced by ongoing communication between a transmitter and a receiver. In the 5G NR cellular network, these devices correspond to the BTS and UE. A key requirement is to identify the signals that are useful in the active and passive location techniques presented in Figure 1. The 5G NR waveform is constructed from several physical channels and signals. Each of them has a different purpose, occurs in different places in the time–frequency (TF) grid, and is coded in a different way. It is important to identify the position of those signals in the TF resource grid (RG) and, if possible, decode them from a third user perspective. In this article, the focus is put on those downlink physical signals and channels that are not user specific. Then, downlink channel reference signals will be described. As an introduction to the detailed analysis of 5G signals’ and physical channels’ morphology given in the following, a rough description of their usage during UE access to the 5G network is presented in Table 1.

NR RG AND NUMEROLOGY

Filtered orthogonal frequency division multiplexing (F-OFDM) was chosen as the 5G NR access scheme. Similar to the 4G long term evolution (LTE) deployment, all 5G physical channels and signals in both uplink and downlink form an RG on which particular symbols are placed in the frequency and time domain. The main differences between NR and LTE are i) the scalable (so-called) numerology, identified by the parameter µ ∈ {0, 1, 2, ...}, and ii) the maximum carrier bandwidth. The parameter µ directly determines the subcarrier spacing (SCS), the symbol and cyclic prefix duration, and the slot length. Possible values of all options for general parameters are presented in Table 2.
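The scaling behind the numerology can be sketched in a few lines of Python. The µ = 0 cyclic prefix value (4.69 µs) is taken from Table 2; the code simply applies the 2^µ scaling described above to reproduce the remaining entries.

```python
# NR numerology: SCS = 15 * 2**mu kHz; the useful OFDM symbol duration is
# 1/SCS, and the normal cyclic prefix (CP) shrinks by the same 2**mu factor
# (4.69 us at mu = 0, per Table 2).
for mu in range(5):
    scs_khz = 15 * 2 ** mu               # subcarrier spacing [kHz]
    symbol_us = 1e3 / scs_khz            # symbol duration [us] = 1 / SCS
    cp_us = 4.69 / 2 ** mu               # normal CP duration [us] (approx.)
    print(f"mu={mu}: SCS={scs_khz} kHz, "
          f"symbol={symbol_us:.2f} us, CP={cp_us:.2f} us")
```

Running the loop reproduces the Table 2 values, e.g., 66.67 µs symbols at 15 kHz SCS (µ = 0) down to 4.17 µs symbols at 240 kHz SCS (µ = 4).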
Table 1.
Some 5G Physical Channels/Signals Which Are Important for Radar Purposes

Analyzed | Receiving/Decoding | Goal
SSB      | PSS                | Frequency and symbol synchronization
SSB      | SSS                | Physical cell identification
SSB      | DM-RS and PBCH     | Frame synchronization, spatial filter selection
SSB      | MIB from PBCH      | Finding PDCCH configuration
PDCCH    | PDCCH              | Obtaining info about SIB1 position
PDSCH    | PDSCH              | Obtaining SIB1, i.e., minimum system info
CSI-RS   | CSI-RS             | Finding 5G channel characteristics

1. PSS: Primary synchronization signal. 2. SSS: Secondary synchronization signal. 3. DM-RS: Demodulation reference signal. 4. PBCH: Physical broadcast channel. 5. SSB = 1+2+3+4: Synchronization signal block. 6. MIB: Master information block from PBCH. 7. PDCCH: Physical downlink control channel. 8. PDSCH: Physical downlink shared/data channel. 9. SIB1: System information block type 1 from PDSCH. 10. CSI-RS: Channel state information reference signal.
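The synchronization role listed for the PSS in Table 1 can be illustrated generically: the receiver correlates the incoming stream against a sequence it knows a priori and takes the correlation peak as the symbol-timing estimate. The sketch below uses a random placeholder sequence of the PSS's 127-sample length rather than the actual 3GPP m-sequence, so it shows only the principle, not a standard-compliant detector.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder synchronization sequence known a priori at the receiver
# (the real PSS is a 127-sample m-sequence defined by 3GPP; a random BPSK
# sequence is used here purely for illustration).
pss = rng.choice([-1.0, 1.0], 127)

# Received stream: the sequence embedded at an unknown offset, plus noise
offset = 1000
rx = rng.standard_normal(4096)
rx[offset:offset + 127] += 3.0 * pss

# Symbol-timing estimate = peak of the sliding cross-correlation
corr = np.correlate(rx, pss, mode="valid")
timing = int(np.argmax(np.abs(corr)))
print(timing)                     # recovers the embedded offset
```

Because the sequence is known and noise-like, the correlation peak stands far above the noise floor, which is what makes the always-on PSS usable for synchronizing a noncooperative (passive) receiver to the BTS.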
Opportunities and Limitations in Radar Sensing Based on 5G Broadband Cellular Networks Table 2.
in [7]. The basic cell information is advertised to the UE through the SSB.
Mapping Between SCS and Symbol Length in NR Numerology m
0
1
2
3
4
SCS ¼ 15 2m [kHz]
15
30
60
120
240
OFDM symbol duration [ms]
66.67
33.33
16.67
8.33
4.17
Cyclic prefix duration [ms]
4.69
2.34
1.17
0.59
0.29
Maximum carrier bandwidth is 100 MHz for low/mid (middle)-band and 400 MHz for high band in Release 15 and 16. Release 17 introduces band n263 with the optional channel bandwidth up to 2 GHz and 960 kHz subcarrier spacing.
INITIAL DOWNLINK CHANNELS AND SIGNALS A 5G BTS periodically sends cell and system specific information. Every UE that wants to attach to the network needs to receive and decode them. This means that it is possible to decode them without knowing network or terminal specific parameters. First, the synchronization signal block (SSB) is sent to ensure time and frequency synchronization between the BTS and the UE and to provide the master information block (MIB) with basic cell information (see Figure 3). Next, The Type#0-physical downlink control channel (PDCCH) is transmitted, which carries the downlink control information (DCI) with the cyclic redundancy code (CRC) scrambled by the system information radio network temporary identifier (SI-RNTI). Because SI-RNTI is constant and defined in the standard, each UE can decode it. Finally, with all this information, a UE can find and read the system information block1 (SIB1) carried by the physical downlink shared channel (PDSCH).
SSB The SSB is the first signal that the UE has to receive to be able to attach to the 5G cell. It is the only signal, which is always on and is sent periodically with the default interval of 20 ms. The synchronization block consists of 20 RBs SSB (NRB ) in the frequency domain, each with 12 subcarriers (240 subcarriers altogether), and four OFDM symbols SSB (Nsymb ) in the time domain. The SSB itself is constructed of two synchronization signals (SS): primary (PSS) and secondary (SSS). Their purpose is to provide a reference for time and frequency synchronization for listening devices. The rest of the SSB structure is reserved for the physical broadcast channel (PBCH), which carries basic cell information. In a multibeam environment multiple SSBs are transmitted, one in each beam—they have the same frequency position but different time locations inside the 5G frame. The reception and location of the SSB are straightforward. It is a good candidate for passive radar goals. The signal is not user specific, therefore, it is not encrypted and a third party user can also receive and decode it. Furthermore, the exact position of the SSB is defined and then sent periodically.
TYPE#0-PDCCH After the acquisition of basic cell information, the UE can proceed with PDCCH detection and decoding. The PDCCH carries DCI. Its correct reception is mandatory for the correct demodulation of the PDSCH, which carries the user’s data. Each DCI is CRC scrambled by RNTI. It is impossible to decode it without the prior knowledge of RNTI. Each UE is assigned with a set of dedicated DCIs. However, as described previously, the
TIME AND FREQUENCY SYNCHRONIZATION
The 5G NR cellular network uses OFDM in its physical layer to aggregate many narrowband subcarriers into a wideband signal [27]. The signal structure can be represented by an RG, divided into resource blocks (RBs). An RB is a group of 12 consecutive subcarriers in the frequency domain and one OFDM symbol in the time domain. Precise time and frequency synchronization is thus required to obtain the exact positions of the RBs. This allows one to find reference signals and physical channels, which could be further used for radar purposes, as described in the following.
Figure 3. SSB structure.
IEEE A&E SYSTEMS MAGAZINE
SEPTEMBER 2023
Księżyk et al.
Figure 4. SSB, CORESET#0, and PDSCH containing SIB1.
Figure 5. PDCCH and SIB1.
Type#0-PDCCH uses a predefined SI-RNTI, which makes it decodable by a passive device. This procedure is present only in the standalone deployment of the 5G network. The PDCCH is located in the control resource set (CORESET), which is the potential position for the DCI. It can be found using information carried by the SSB [28]. The CORESET can, but does not have to, contain the PDCCH. The CORESET can consist of from 24 to 96 consecutive RBs (N_RB^CORESET#0) in the frequency domain and from 1 to 3 OFDM symbols (N_symb^CORESET#0) in the time domain. An example of a CORESET is marked in red in Figure 4; it occupies 48 RBs in one OFDM symbol. The control channel can be present within these resources. To find its exact position, monitoring occasions, as defined in [28], Section 10, are searched. The PDCCH can have different sizes, depending on the configuration, expressed in the number of resource element groups (REGs). The potential sizes and the number of potential positions are listed in Table 3. If the control channel is not present in the same 5G frame as the received SSB, it has to be present before the next SSB, because the Type#0-PDCCH periodicity is equal to that of the SSB. The CORESET from Figure 4 contains a PDCCH, which is shown in Figure 5. As can be seen, the PDCCH does not have to occupy all of the available resources. This
Table 3. PDCCH Size and Possible Positions

Size (REG) | Number of possible positions within CORESET
4          | 4
8          | 2
16         | 1
depends on the aggregation level and the exact placement of the candidate. Knowing the precise position of the PDCCH, one can demodulate and decode it. The obtained DCI can be used to locate and decode other channels.
PDSCH AND SIB1
After decoding the MIB and DCI blocks, the UE can proceed with reception of the SIB1. It is an element of the remaining minimum system information, which carries information necessary for a UE to connect to the network. The SIB1 is transmitted within bandwidth part #0 (BWP#0) in the PDSCH. The BWP#0 has the same amount of resources in the frequency domain as the CORESET#0 (N_RB^BWP#0 = N_RB^CORESET#0), that is, from 24 to 96 consecutive RBs, and from 4 to 13 symbols, depending on the configuration. The starting RB (N_start,RB^BWP#0) and OFDM symbol can be determined from the DCI. The SIB1 is sent with a periodicity equal to 20 ms or the SSB periodicity [29]. Using this information, one can locate the SIB1 and track it with the given periodicity. The decoding process of the PDSCH includes demodulation of the quadrature amplitude modulation (QAM) and downlink shared channel decoding, which is protected by low-density parity-check codes. An example of a SIB1 message is shown in Figure 5. Its position and resources were configured by the DCI carried by the PDCCH, which was described previously. This deployment of the PDSCH carrying the SIB1 has 34 RBs over 11 symbols.
CHANNEL REFERENCE SIGNALS
In the 5G NR, one distinguishes the following reference signals: 1) the demodulation reference signal (DM-RS), used to provide a channel reference for the demodulation of physical channels; 2) the channel state information reference signal (CSI-RS); and 3) the phase-tracking reference signal (PT-RS).
Opportunities and Limitations in Radar Sensing Based on 5G Broadband Cellular Networks
DM-RS
To ensure high-quality signal transmission, the new generations of mobile systems, for example, LTE and NR, use the DM-RS to provide an accurate reference for channel estimation. In 5G NR, the DM-RS is content specific and is transmitted only when data are present. The reference signal appears in every physical channel, regardless of the link direction (PBCH, PUCCH, PDSCH, etc.). The exact position in these channels can be very different. In the PBCH, the position is dependent on the physical cell ID, which can be derived from the synchronization signals (PSS and SSS) and supports only eight possible sequences. Therefore, one can calculate the position of the PBCH DM-RS by performing a simple correlation of the received signal with the eight known templates [27]. The SSB depicted in Figure 3 contains a PBCH (yellow) with its DM-RS (light stripes across the whole channel resources). The PDCCH has to be discussed together with the CORESET, despite the fact that the latter is only a placeholder and does not carry any information itself. Nevertheless, the CORESET parameters also configure the position and values of the DM-RS symbols. More precisely, the parameter pdcch-DMRS-ScramblingID designates the sent bit sequence. In general, the CORESET parameters are configured by a radio resource control (RRC) message, which is sent directly to a dedicated UE. The only exception is the CORESET#0, which can be derived from the MIB. An example of the PDCCH resources is shown in Figure 6 (left-hand side), together with its associated DM-RS (light stripes across the whole channel resources). The PDSCH is the data plane channel, and it occupies the largest amount of radio resources in the downlink. The associated DM-RS could be very valuable in the context of a reference signal for passive radar. Unfortunately, only the UEs are informed by the BTS about the PDSCH allocation: the channel position within the RG and the coding configuration are provided by the BTS via higher layer parameters.
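The eight-template correlation mentioned above can be sketched as follows. The sequence generator is the generic length-31 Gold sequence of TS 38.211, Section 5.2.1; the c_init expression follows our reading of Section 7.4.1.4.1 and should be treated as an assumption, as should the function names and the 144-RE DM-RS length.

```python
from math import sqrt

def gold_sequence(c_init, length, nc=1600):
    """Generic pseudorandom (Gold) sequence c(n) of TS 38.211, Sec. 5.2.1."""
    x1 = [1] + [0] * 30
    x2 = [(c_init >> i) & 1 for i in range(31)]
    for i in range(nc + length - 31):
        x1.append((x1[i + 3] + x1[i]) % 2)
        x2.append((x2[i + 3] + x2[i + 2] + x2[i + 1] + x2[i]) % 2)
    return [(x1[nc + n] + x2[nc + n]) % 2 for n in range(length)]

def pbch_dmrs(cell_id, i_ssb, n_re=144):
    """QPSK PBCH DM-RS candidate for SSB index hypothesis i_ssb
    (c_init per our reading of TS 38.211, Sec. 7.4.1.4.1)."""
    c_init = (2**11 * (i_ssb + 1) * (cell_id // 4 + 1)
              + 2**6 * (i_ssb + 1) + cell_id % 4)
    c = gold_sequence(c_init, 2 * n_re)
    return [complex(1 - 2 * c[2 * m], 1 - 2 * c[2 * m + 1]) / sqrt(2)
            for m in range(n_re)]

def detect_i_ssb(received, cell_id):
    """Correlate against the 8 hypotheses and pick the strongest."""
    score = lambda i: abs(sum(r * t.conjugate()
                              for r, t in zip(received, pbch_dmrs(cell_id, i))))
    return max(range(8), key=score)
```

For example, `detect_i_ssb(pbch_dmrs(123, 5), 123)` recovers hypothesis 5, since different c_init values yield Gold sequences with low cross-correlation.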
Since the PDSCH is a large channel, it has a larger number of associated DM-RS symbols, which gives one the opportunity to use them for passive radar processing. To find out which DM-RS sequence is used (one of 65,535 possible), one has to receive the DMRS-DownlinkConfig higher layer parameter. An example of the PDSCH resources is shown in Figure 6 (right-hand side), together with the associated DM-RS (light stripes across some symbols).
CSI-RS
CSI-RS are needed in the 5G NR system for many functionalities. These include link adaptation, DL precoding (both codebook and noncodebook based), frequency and time tracking, and beam management. CSI-RS was first introduced in LTE Release 10 to support transmission mode 9 (TM9) with up to eight layers. The
Figure 6. PDCCH, PDCCH DM-RS, PDSCH, and PDSCH DM-RS.
reason for introducing CSI-RS (in TM9) was to decrease the signaling overhead caused by the cell-specific reference signal (CRS) when using a high multiple-input multiple-output (MIMO) order (beyond 4). In the CRS scenario, each antenna in the MIMO setup must be assigned dedicated, broadcast reference symbols. Increasing the number of antennas also increases the number of RS and therefore the signaling overhead for all users in the cell, not only the 8×8 MIMO ones. On the other hand, CSI-RS are UE-specific reference symbols. This means that they are assigned only to the users that need them and are not constantly broadcast. This approach was further adapted and extended in 5G NR, where there are no CRS at all. From the telecommunication perspective, it is advantageous because of the lower energy consumption and signaling overhead. However, this approach generates a lot of challenges for 5G-based active and passive radars. In order to perform CSI-related measurements, each UE must be explicitly informed (via RRC signaling) which resources it should use to measure the channel and which to measure interference. As mentioned previously, CSI-RS are user specific. Each UE, when needed, is assigned a dedicated set of CSI-RS via RRC signaling. This indicates that, in general, it may be hard or even impossible to eavesdrop on a selected UE's RRC signaling and get information on the assigned CSI-RS. The task becomes even more difficult if the passive radar is obligated to extract a CSI-RS combination from all connected UEs within its range. The NR CSI-RS signal a_k,l^(p,μ), defined in Section 7.4.1.5.3 of [27], is based on the pseudorandom sequence r(m) defined in Section 7.4.1.5.2 and comprises one or more resource elements (REs) mapped to the RG according to RRC-configurable attributes: numerology μ, number of antenna ports X, port number p, density ρ, resource location in the time l and frequency k domains, cdm-
Figure 7. Example of a CSI-RS signal with: Ports: 1; Density: 3; cdm-Type: noCDM; k0 = 5; l0 = 2; CDM group index j: (0, 0, 0); k' = 0; l' = 0.
Figure 8. Example of a CSI-RS signal with: Ports: 8; Density: 3; cdm-Type: cdm4-FD2-TD2; k0 = 5; l0 = 4; CDM group index j: (0, 1); k' = (0, 1); l' = (0, 1).
Figure 9. Example of a CSI-RS signal with: Ports: 32; Density: 3; cdm-Type: cdm8-FD2-TD4; k0 = 3; l0 = 4; CDM group index j: (0, 1); k' = (0, 1); l' = (0, 1).
Type, and code division multiplexing (CDM) group index j [27]. Examples of CSI-RS signals are presented in Figures 7, 8, and 9. The relatively high number of attributes may suggest a high number of possible combinations
of CSI-RS. However, according to Table 7.4.1.5.3-1 in [27], the number of possible CSI-RS combinations is very limited. This is due to the limited number of values that can be assigned to the attributes l, k, ρ, and the CDM group index. Another important aspect is
Table 4. 5G Physical Channel/Signal PCL Features

Physical channel/signal | Position known | Decoding possible
SSB                     | Yes            | Yes
Type#0-PDCCH            | Yes            | Yes
SIB1                    | Yes            | Yes
CSI-RS                  | Possible       | Possible
that, despite the fact that CSI-RS signals are UE specific, nothing stops a radio base station (RBS) from assigning the same combination of CSI-RS to all UEs in the current cell range. Assuming that an RBS applies such a policy and a passive radar is connected to one of the UEs, which provides the current information on the used CSI-RS format, it should be possible to use CSI-RS in passive radar scenarios.
PT-RS
PT-RS are sent by a BTS only when operating in the high band (mmWave), that is, above 6 GHz, in order to minimize the effect of the oscillator phase noise in the transmitter and receiver. Their presentation and discussion are skipped here, since this article addresses only the radar usage in the sub-6 GHz bands.

RADAR CHARACTERISTICS OF 5G SIGNALS
Conclusions of the 5G NR signal and channel analysis described in the "5G Signal Morphology" section are presented in Table 4. The SSB, Type#0-PDCCH, and SIB1 can be used as an illumination source for passive radar deployment. This is because all of these signals are broadcast without being encrypted and are available for reception by all UEs, not only those registered in the current 5G network. On the other hand, according to the standard, CSI-RS signals are UE specific. Therefore, their applicability in passive radar deployment is not obvious. Assuming that i) the passive radar is equipped with a 5G terminal connected to the network and ii) the BTS is broadcasting the same combination of CSI-RS signals for all (or at least most) UEs in a current cell, it should be possible to decode, reassemble, and use CSI-RS as an illumination source. Inspection of the delay-Doppler compactness of the self-ambiguity function (SAF) of any waveform is the best method for checking its radar usability. The SAF is defined by (2) when xs and xr are the same signal. In Figure 10, SAFs for different periodically transmitted 5G waveforms are shown, namely for the SSB, Type#0-PDCCH, and SIB1 signals. Several attributes typical for the ambiguity function are apparent. First, one can see a strong main peak characteristic for noise-like signals. Its height above the mean sidelobe level is defined by the product of the bandwidth (B) and the integration time (Tint). Around the main peak are several sidelobes whose influence limits the possibility of target detection. For the sampling rate fs = 61.44 MHz, the signal durations were as follows: Tint = 143.9 μs for the SSB, Tint = 37.3861 μs for the Type#0-PDCCH, and Tint = 393.6 μs for the SIB1. In Table 5, several parameters of the SAFs of the 5G subwaveforms considered in this article are presented. The parameters in the table include the maximum value of the SAF, the peak-to-sidelobe ratio (PSLR), and the integrated sidelobe level (ISL).
Figure 10. SAF for different periodically transmitted 5G waveforms. (a) SAF of the SSB. (b) SAF of the Type#0-PDCCH. (c) SAF of the SIB1.
Table 5. Quantitative Metrics for the SAF of 5G Signals

Waveform     | Max value [dB] | PSLR [dB] | ISL [dB]
SSB          | 2.3284         | −15.5905  | 24.2335
Type#0-PDCCH | −2.1884        | −3.7130   | 22.4489
SIB1         | 9.0352         | −13.3599  | 20.3376

The presented values were calculated according to the distributions in Figure 10, that is, for delays from −10 to 10 μs and Doppler shifts from −200 to 200 kHz. As can be seen, the best performance is obtained for the SIB1, where the maximum value is the highest and the ISL has the lowest value. Such properties result from the best bandwidth and integration time product: in the analyzed scenario, the SIB1 uses the highest number of subcarriers and its duration is the longest, so the best integration gain is certainly expected. Its PSLR is slightly worse than for the SSB, which probably results from the squared envelope of the waveform. The Type#0-PDCCH has the worst correlation properties among the analyzed waveforms. The obtained SAF has strong sidelobes at the level of 3.713 dB below the main peak, whose value is the lowest among the compared signals. Among the signals in question, only SSBs are broadcast periodically in every network configuration in all beams, every Tdist = 5, 10, 20, 40, 80, or 160 ms. The default, and most often used, SSB periodicity is 20 ms. However, it is possible to shorten it to 5 ms to obtain a wider velocity unambiguity in 5G network-based radar. The unambiguous velocity in pulse-based radar (both active and passive) is given as

Vb ∈ (−λ/(4·Tdist), λ/(4·Tdist)),   (3)

where λ is the wavelength and Tdist is the repetition period. Table 6 illustrates the unambiguous velocity range for the available SSB periodicities and typical frequencies in a 5G network. As can be seen, the lower the frequency, the wider the velocity range. The best performance of the pulse 5G-based radar is obtainable for the lowest band, for example, fc = 750 MHz. However, in this case the available bandwidth is also narrower; in turn, better radar range resolution can be achieved for millimeter waves, where even 2 GHz of bandwidth is obtainable.
OPPORTUNITIES AND LIMITATIONS OF 5G-BASED RADAR
In this section, the opportunities and limitations of 5G communications and sensing applications are presented. The first subsection discusses the signal processing challenges, the second presents a study of the achievable ranges for sensing applications, and the last presents the validation of the 5G-based sensing concept, demonstrating the use of 5G illumination for passive radar applications.
SIGNAL PROCESSING The use of 5G signals in radar sensing is possible in many ways. The authors of this article have identified and shown in Figure 2 four possible signal processing paths. The authors’ work so far has been focused on passive bistatic radar using two receiver channels: the PCL [7] and the pulse radar-based processing techniques (paths 2 and 3, respectively). A very interesting concept (path number 4), which is a modification of path 3, is the use of a single receive channel [25], [26]. This places additional demands
Table 6. SSB Unambiguous Velocity (in [m/s]) for Different Carrier Frequencies fc and SSB Periodicities Tdist

fc [MHz] \ Tdist [ms] | 5    | 10   | 20   | 40   | 80   | 160
750                   | 20   | 10   | 5    | 2.50 | 1.25 | 0.63
3440                  | 4.36 | 2.18 | 1.09 | 0.54 | 0.27 | 0.13
5900                  | 5.17 | 2.59 | 1.29 | 0.64 | 0.32 | 0.16
25,000                | 0.60 | 0.30 | 0.15 | 0.08 | 0.04 | 0.02
60,000                | 0.25 | 0.13 | 0.06 | 0.03 | 0.02 | 0.01
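The entries above follow directly from (3): the unambiguous velocity is λ/(4·Tdist). A short script (ours; the function name is illustrative) reproduces selected entries:

```python
C = 3e8  # speed of light [m/s]

def unambiguous_velocity(fc_hz, t_dist_s):
    """V_ua = lambda / (4 * T_dist), per (3), where lambda = c / fc."""
    wavelength = C / fc_hz
    return wavelength / (4.0 * t_dist_s)

# Reproduce selected Table 6 entries
for fc_mhz, t_ms in [(750, 5), (3440, 5), (25000, 5)]:
    v = unambiguous_velocity(fc_mhz * 1e6, t_ms * 1e-3)
    print(f"fc = {fc_mhz} MHz, Tdist = {t_ms} ms -> {v:.2f} m/s")
```

Running it gives 20.00, 4.36, and 0.60 m/s, matching the first column of the table.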
on the appropriate SNR level and on the architecture of the analog receiving part (antennas, preamplifiers, receiver dynamic range, etc.). In addition, there is the need to fully decode the reference signal to use it as the reference in the single-antenna receiver approach, which is not an easy task. The last path recognized by the authors is the one numbered 1. This path assumes a fully cooperative 5G transmitter and receiver, and can be realized only in agreement with the 5G network operator. In this case, it is possible to operate the radar in both continuous signal mode (option 2) and pulse mode (option 3/4), but without the need to provide synchronization or re-synthesis of the signals. The rest of this section discusses selected issues related to paths 2 and 3/4, which have been best recognized by the authors so far [7].
PCL SIGNAL PROCESSING
From the point of view of PCL signal processing (option 2 in Figure 2) [7], one can observe a number of difficulties related to the nature of the 5G NR signal. The most serious of these seems to be the high variability of the content in the transmitted signal. This phenomenon also occurs in other signals used for passive radiolocation, such as FM [30], wireless fidelity (Wi-Fi) [11], or LTE [31]. Due to this content dependency, there are a number of issues that must be addressed, as follows.
Figure 11. Examples of CAFs for a 5G signal. (a) CAF for a selected time moment. (b) CAF for another moment in time.
Decrease in effective operating bandwidth, lowering distance discrimination and signal processing gain. Radar bistatic range resolution ΔRb can be expressed as [1]

ΔRb = c / B,   (4)

where c is the speed of light and B is the signal bandwidth. The processing gain Gint is defined as the product of the bandwidth and the integration time Tint:

Gint = B · Tint.   (5)
Assuming B = 38.16 MHz, which is a value used in existing 5G facilities [7], one can reach the best (minimum) bistatic range discrimination, equal to 7.86 m. Operating with half of the maximum available bandwidth, the range resolution is only about 16 m, which may not be sufficient for some urban scenarios, such as detecting multiple unmanned aerial vehicles (UAVs) simultaneously. At the same time, the processing gain is reduced by 3 dB, which decreases the radar's detection capabilities. CAF fluctuation between consecutive processing periods. Depending on the number of users connected to the network, as well as the amount of content they download, different 5G signals may or may not appear during the transmission. For this reason, the resulting 5G waveform can vary strongly over time. As a result, the CAFs can also change, or more precisely, the values of their PSLR, ISL, maximum value, or noise floor level. Examples of two strongly different CAFs are shown in Figure 11. Both results were determined for two consecutive integration periods equal to 20/40 ms. For this reason, it is necessary to develop an automatic algorithm for selecting data for processing, which is described in the next paragraph. The need to select time slots where signal integration makes sense. If the link occupancy is too low, it is difficult to obtain a reliably high SNR of the target in the absence of artifacts and additional bars observed on the CAF. The future algorithm that will perform these tasks should meet a number of criteria. First, its operation should be fully automatic. The passive radar must operate fully autonomously, and it is practically impossible for any operator to tune its parameters on the fly. It is, of course, allowed to calibrate it in advance. Second, the future algorithm should be based on the characteristics of the 5G NR signal. Third, it should be possible to tune it in advance in order to achieve the best
realization of the goal, such as detecting targets of a given kind (e.g., drones) with a preset probability of false alarm. Some help in evaluating the link occupancy during transmission can be provided by the image of the signal spectrogram. Figure 12(a) shows the spectrogram of the 5G signal for low link occupancy, and Figure 12(b) shows the result for high link occupancy. However, some difficulty here may be the computational complexity of the algorithm for determining the spectrogram, calculated based on the short-time Fourier transform. A decrease in signal filling also results in a reduction of the average transmitted power. As a result, the maximum range of the radar system is also reduced. This effect is presented in more detail in the "Coverage prediction" section. Another difficulty is working in the TDD mode, which requires the following. Ensuring synchronization to the received signal (detection of synchronization blocks). It is necessary here to know the numerology, including the TDD pattern, intervals, frequencies, etc. Without this basic knowledge, it will not be possible to properly synchronize the receiver and potentially modify the received signal. For this reason, the performance of the passive radar can be significantly reduced. Removing or skipping the part of the signal transmitted by the mobile terminal (uplink transmission). From the point of view of passive radiolocation, for a 5G NR network, the ideal situation would be for the BTSs to be the only source of the transmitted signal. Signals transmitted by mobile terminals, due to their relatively low power and possible target detection range, are, in the case under analysis, effectively interference, which should be eliminated as efficiently as possible. Due to the transmission method of the 5G network (TDD), part of the time slots are reserved for uplink transmission, and this is a loss that must be accepted in the case of passive radar operation. Because signals transmitted by mobile terminals can be received by both the reference and the observation antenna, additional artifacts from these signals may be visible on the CAF. This phenomenon can be reduced by zeroing the uplink slots or filling them with white noise.

Figure 12. Example of spectrograms for different amounts of content in the 5G signal. (a) No data content. (b) A lot of data content.
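The resolution and gain figures quoted earlier follow directly from (4) and (5); a minimal sketch (function names are ours) reproduces them:

```python
import math

C = 3e8  # speed of light [m/s]

def bistatic_range_resolution(bandwidth_hz):
    """Delta R_b = c / B, per (4)."""
    return C / bandwidth_hz

def processing_gain_db(bandwidth_hz, t_int_s):
    """G_int = B * T_int, per (5), expressed in dB."""
    return 10.0 * math.log10(bandwidth_hz * t_int_s)

B = 38.16e6  # bandwidth observed in an existing 5G deployment [7]
print(f"full band: {bistatic_range_resolution(B):.2f} m")      # 7.86 m
print(f"half band: {bistatic_range_resolution(B / 2):.2f} m")  # 15.72 m
```

Halving the bandwidth doubles the range cell and, since Gint scales linearly with B, costs exactly 10·log10(2) ≈ 3 dB of processing gain, as stated above.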
Removing/omitting the uplink is associated with a decrease in the effective integration time, resulting in a decreased resolution of the bistatic velocity measurement and a lower processing gain. The bistatic velocity resolution ΔVb is defined as follows [1]:

ΔVb = λ / Tint.   (6)
The selection of the proper integration time is also related to the range and velocity cell migration. For this reason, the coherent processing time cannot be too long [1], [4]. Assuming a bistatic velocity resolution of about 1 m/s, the integration time should be equal to 87 ms for a 5G station operating in the S-band (carrier frequency of 3.44 GHz) [7]. When the signal bandwidth is equal to 38.16 MHz, the processing gain [defined in (5)] is as high as 65 dB. However, when the uplink slots are not used in the signal processing, only part of the whole signal is covered. In the previously discussed case [7], only 74% of the whole signal is occupied by the downlink, which yields an effective Tint = 65 ms, an effective ΔVb = 1.35 m/s, and an effective processing gain Gint = 64 dB. However, these calculations assume that the content of the transmission is fully loaded, which is not always true. Less content is an additional aggravating factor for the bistatic range resolution. The integration time is strongly related to the number of samples that must be processed. Regardless of the effective duration of the integration time, the number of samples remains the same. This should be kept in mind when designing a real-time radar system and assuming a sufficiently high resolution of bistatic velocity measurements.
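The effective figures above can be checked with (6); the sketch below (ours) assumes the 74% downlink share reported for the deployment in [7]:

```python
C = 3e8  # speed of light [m/s]

def integration_time_for_velocity_res(fc_hz, dv_ms):
    """From (6): T_int = lambda / Delta V_b."""
    return (C / fc_hz) / dv_ms

fc = 3.44e9  # S-band 5G carrier [7]
t_int = integration_time_for_velocity_res(fc, 1.0)
print(f"T_int for 1 m/s resolution: {t_int * 1e3:.0f} ms")   # 87 ms

downlink_share = 0.74                 # fraction of slots carrying downlink [7]
t_eff = t_int * downlink_share        # uplink slots excluded
dv_eff = (C / fc) / t_eff
print(f"effective T_int: {t_eff * 1e3:.0f} ms, effective dV: {dv_eff:.2f} m/s")  # 65 ms, 1.35 m/s
```

Note that the effective resolution degrades exactly by the inverse of the downlink share, 1/0.74 ≈ 1.35.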
It should be noted, however, that with the relatively short integration times of 5G NR signals, one can obtain, "for free," the ability to refresh measurements very frequently. Assuming an integration period of 50 ms, one gets a maximum of 20 measurements per second. In the case of real-time radar operation, it may turn out that the processing time for such an amount of data (the operating bandwidth is of the order of tens or even hundreds of MHz) is much longer than the integration time. What can be proposed is to skip part of the data in order to realize real-time operation. In selected applications (e.g., car or UAV detection), a refresh rate of a few Hz is sufficient. Working in the TDD mode may also bring some benefits. A fixed carrier frequency for uplink and downlink allows hardware simplification for applications other than radar sensing. For example, terminal localization can be performed using the same receiver. For this purpose, one can remove the downlink signal, leaving only the signal from the terminal. Thanks to this, techniques such as direction-of-arrival estimation can be used for user localization in both outdoor and indoor conditions [32]. In the case of selected frequency bands of 5G NR network operation (sub-6 GHz), one deals with the frequency-division duplex mode. The operation of passive radar in such conditions seems to be much simpler, since there is no need to provide fine time and frequency synchronization to remove signal fragments coming from the mobile terminal (uplink).
PULSE SIGNAL-BASED PROCESSING
The previously described challenges mainly referred to passive radars operating in the PCL mode, focused on processing continuous signals. As mentioned in the "Principles of Active and Passive Radars in a 5G Network-Based Sensing Perspective" section, passive radars can also operate in pulse mode (options 3 and 4 in Figure 2), which brings new difficulties for designers working with 5G NR, such as the following. Selection of signals that can be applied in the pulse operation mode. This is a vast issue and relates to a number of concerns. The "5G Signal Morphology" section describes a number of signals that are potentially suitable for use in pulse-based passive radar operation. Only signals that can be decoded without knowing any user IDs or decryption codes are considered for re-synthesis. The only one that is always present in a 5G frame, and meets the presumed requirements, is the SSB. A detailed analysis of its possible use is presented in the "Radar
Characteristics of 5G Signals" section. However, the most serious problems with this signal are its limited operating bandwidth, its limited duration, and its too long repetition period (for some applications). Also in the "Radar Characteristics of 5G Signals" section, analyses are presented on the use of the Type#0-PDCCH and SIB1 signals. These are other potential signals on which radar processing can be based. However, the presence of these signals is not guaranteed in every case. A chance to increase the pool of usable signals is to use those that require knowledge of user IDs (not completely accessible to an outside observer). In the case of the PDSCH (SIB1), the modulation of the signal is known (QAM), through which it is possible to reconstruct the constellation and re-synthesize the signal. The disadvantage of such a solution is that it is impossible to verify that the re-synthesis process took place without error (no CRC check), but with a sufficiently high SNR of the signal, a negligible share of errors can be assumed. The unambiguous Doppler frequency range strongly depends on the characteristics of the selected signals and on the frequency band in which the 5G network operates. In the case of the SSB, this issue was analyzed in detail in the "5G Signal Morphology" section. In the case of the other signals (Type#0-PDCCH and SIB1), it is even possible that they do not occur at all.
COMMON ISSUES FOR BISTATIC RADARS Regardless of which mode the passive radar operates in (PCL or pulse-based), some difficulties are common due to the bistatic geometry of the measurement scenario. This includes such phenomena as propagation effects (detection losses due to multipath nulls) as well as antenna characteristics (elevation propagation pattern, antenna beam tilt, etc.). Due to the fact that this is a very complex topic, the next section is devoted to this.
COVERAGE PREDICTION
The coverage area of the radar system is determined by the power of the received echo signal Pr. To detect targets, Pr has to be large enough so that the SNR is above a minimal threshold value SNRmin. Assuming free-space loss of the signal, and thermal noise as the main source of system noise, the radius Rmax of the monostatic radar coverage area is given by [4]

Rmax = [ (Pt Gt Gr σ0 λ² B Tint) / ( (4π)³ L k Tr Br SNRmin ) ]^(1/4),   (7)

where Pt is the average transmitted power, Gt and Gr are the gains of the transmitter and receiver antennas, σ0 is the radar cross section (RCS), λ is the wavelength, B is the bandwidth of the transmitted signal, L is the propagation and system loss, k is the Boltzmann constant, Tr is the equivalent noise temperature of the receiver, and Br is the bandwidth of the receiver. In the case of passive radar, the bistatic geometry has to be considered. The boundaries of the coverage area of a bistatic radar have the shape of so-called Cassini ovals, determined by a pair of distances, R1 (the range from the transmitter to the target) and R2 (the range from the target to the receiver) (see Figure 1), that fulfill the following inequality:

R1 · R2 < sqrt[ (Pt Gt Gr σ0 λ² B Tint) / ( (4π)³ L k Tr Br SNRmin ) ].   (8)
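A minimal sketch (ours; function and variable names are illustrative) of how (7) can be evaluated, assuming the equivalent noise temperature is derived from a stated noise figure via Tr = 290·(10^(NF/10) − 1):

```python
import math

C = 3e8                     # speed of light [m/s]
K_BOLTZMANN = 1.380649e-23  # Boltzmann constant [J/K]

def monostatic_range(eirp_w, gr, rcs_m2, fc_hz, b_hz, t_int_s,
                     loss_db, nf_db, br_hz, snr_min_db):
    """Evaluate R_max from (7) for a monostatic geometry."""
    wavelength = C / fc_hz
    t_receiver = 290.0 * (10 ** (nf_db / 10) - 1)  # equivalent noise temperature
    num = eirp_w * gr * rcs_m2 * wavelength**2 * b_hz * t_int_s
    den = ((4 * math.pi) ** 3 * 10 ** (loss_db / 10) * K_BOLTZMANN
           * t_receiver * br_hz * 10 ** (snr_min_db / 10))
    return (num / den) ** 0.25

# Car-sized target (10 m^2), 57 dBm EIRP, and the remaining Table 7 values
r = monostatic_range(eirp_w=10 ** ((57 - 30) / 10), gr=10.0, rcs_m2=10.0,
                     fc_hz=3.44e9, b_hz=38.16e6, t_int_s=0.1,
                     loss_db=10.0, nf_db=3.6, br_hz=50e6, snr_min_db=11.0)
print(f"R_max ≈ {r / 1e3:.1f} km")
```

Note the fourth-root dependence: adding 12 dB of EIRP (a factor of 16) only doubles the range, which is why the curves in Figure 14 grow slowly with EIRP.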
Figure 13. Cassini ovals obtained in a bistatic scenario for 10–30 dB of SNR.
Figure 14. Range of the drone, car, and bus detection (0.1, 1, 10 m² of RCS) for different EIRPs and two variants of resource allocation percentage (RA).
Table 7. Network Parameters

Name of the parameter          | Value
Center frequency               | 3.44 GHz
EIRP (Pt Gt)                   | 57–77 dBm
Receiver antenna gain Gr       | 10 dBi
SNRmin                         | 11 dB
Integration time Tint          | 100 ms
Transmitted signal bandwidth B | 38.16 MHz
Receiver bandwidth Br          | 50 MHz
Loss L                         | 10 dB

Examples of the shapes obtained for an equivalent isotropic radiated power (EIRP) equal to 57 dBm, σ0 = 0.1 m², L = 20 dB, and several SNR values are depicted in Figure 13. Notice that for small SNRs, such as 10 dB, and large distances, the coverage area converges to the circular shape, as in the monostatic case.
In general, parameters related to the power and gain may vary significantly depending on the particular 5G network settings, especially on the usage of the massive MIMO (mMIMO) mode. Also, the radar range depends on the instantaneous resource allocation [which affects the average transmitted power used in (8)]. Therefore, the coverage area of the radar system should be estimated for every case separately. In order to predict the 5G-based radar range, let us consider the parameters (listed in Table 7) of the existing network, which was used for the concept validation described in the "5G-based sensing concept validation" section. Calculations were made for three RCS values, 0.1, 10, and 100 m², which are typical for targets such as a drone, a car, and a bus, respectively. Two cases were taken into account: full resource allocation (plotted with a solid line) and 50% resource allocation (dashed line). The equivalent noise temperature Tr of the receiver was obtained assuming T0 = 290 K and a noise factor equal to 3.6 dB. The results, depicted in Figure 14, are calculated for a monostatic scenario, but according to the considerations from the previous section, they are applicable to the bistatic case (8). It shows that the coverage of the 5G radar can reach over 50 km in a line-of-sight scenario for large targets (correspondingly less for lower resource allocation). Additional simulations were done to obtain the range of the pulse radar based on the 5G signals analyzed in the "Radar Characteristics of 5G Signals" section, namely the SSB, Type#0-PDCCH, and SIB1. It was assumed that all of them occur periodically every 20 ms and have the maximum bandwidth allowed in the specification. The integration time and other parameters were set the same as in the previous simulation (see Table 7). The results are depicted in Figure 15. It can be seen that the maximum range is significantly lower than in the case of the continuous wave from Figure 14. The
IEEE A&E SYSTEMS MAGAZINE
Opportunities and Limitations in Radar Sensing Based on 5G Broadband Cellular Networks
Figure 15.
Range of the drone, car, and bus detection (RCS equal to 0.1, 1, and 10 m²) for different EIRP values and for specific 5G NR signals.
largest range of the three analyzed signals was obtained for SIB1 due to its longer duration, which leads to a higher average transmitted power. The presented computations assume some idealization; in reality, several limitations need to be considered. First, the detection range could decrease significantly in an urban area due to non-line-of-sight or multipath propagation. A second important aspect is the antenna tilt, which is frequently applied to create large coverage of the communication cell at ground level rather than in the airspace (see Figure 16). Considering a 5G NR transmitter located at a height of H = 30 m, an antenna tilt β of 10°, and an elevation beamwidth α of 16°, the main beam will reach the ground in the area bounded by d1 = 95 m and d2 = 859 m. In addition, the transmitter radiation pattern may not be ideal across the whole beamwidth; therefore, illumination of some parts of the area may be reduced with respect to the assumed EIRP value. In this case, the coverage area is not as large as it appears from the calculations of the received power (see
Figure 14), but is limited by the beamwidth, transmit antenna tilt, and pattern. All the mentioned impairments are thoroughly described in [33] with respect to FM-based radar. The 5G NR standard allows usage of the m-MIMO technique, which means the utilization of narrow beams and leads to the division of the coverage area into even smaller spots, not all of which are illuminated at any given time. For active radars, this mechanism reduces the coverage area. In the case of passive radar, certain beams carrying the direct transmission may not reach the receiver position, which makes extraction of the reference signal challenging. Considering the scenario depicted in Figure 16, a radar system based on such a 5G network configuration would suit applications such as car detection in the vicinity of the transmitter or monitoring low-flying drones. In some urban scenarios, base stations are deployed and configured to provide coverage in tall buildings. Such transmitters would be a perfect fit for, for example, drone monitoring. Taking
Figure 16. Effect of antenna tilt.
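The quoted ground-intersection distances follow from elementary trigonometry if the beam edges are taken at the tilt angle plus or minus half the elevation beamwidth. With H = 30 m, β = 10°, and α = 16°, this simplified flat-earth model reproduces d2 = 859 m and gives d1 ≈ 92 m, close to the quoted 95 m (the small difference presumably stems from the exact beam model used in the article).

```python
from math import radians, tan

def footprint(height_m, tilt_deg, beamwidth_deg):
    """Near and far ground intersections of a downward-tilted beam,
    assuming beam edges at tilt +/- beamwidth/2 (flat-earth geometry)."""
    near = height_m / tan(radians(tilt_deg + beamwidth_deg / 2))
    far = height_m / tan(radians(tilt_deg - beamwidth_deg / 2))
    return near, far

d1, d2 = footprint(height_m=30.0, tilt_deg=10.0, beamwidth_deg=16.0)
print(f"d1 = {d1:.0f} m, d2 = {d2:.0f} m")  # -> d1 = 92 m, d2 = 859 m
```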
SEPTEMBER 2023
Księżyk et al.

advantage of the dense communication cell coverage, a multinode radar system could be deployed to monitor a large area. For aircraft surveillance, only a rural network configuration with large coverage and a line-of-sight environment would be suitable. Cooperation between communication and radar systems may overcome or significantly reduce issues related to the detection area. In a network designed for joint communication and drone monitoring, the antenna tilt could be adjusted to cover not only a wide area of ground but also to illuminate the airspace. Antennas with larger elevation beamwidths could also be used. In the case of m-MIMO, when the coverage area is reduced to small spots, two approaches could be utilized:
– periodically change the m-MIMO mode to broadcast mode for a short time to cover the whole sector with electromagnetic waves;
– periodically switch on additional beams dedicated to sensing to fill the gaps in the coverage area.
5G-BASED SENSING CONCEPT VALIDATION In this section, the authors present preliminary results demonstrating sensing capabilities using signals emitted by a 5G NR network. The concept of 5G-based active radar presented in this article requires hardware modifications and the cooperation of both mobile network operators and BTS manufacturers, which certainly makes it difficult to realize. Wishing nevertheless to demonstrate the sensing possibilities, the authors present in this section the target detection results for the passive radar described in [7]. Table 7 shows the most important network and signal parameters,
Figure 17. 5G passive radar measurement station.
Figure 18. Measurement scenario. Red cross—the 5G base station location, red circle—the recorder position, black arrow—the trajectory of the car movement.
although the signal integration time is 20 ms instead of 100 ms. The measurement setup is illustrated in Figure 17. The recorder was composed of commercial off-the-shelf antennas, a software-defined radio platform (Ettus USRP X310), and a computer (Intel Core i7-9700 2.66 GHz processor, 32 GB RAM, an SSD drive, and Ubuntu 18.04 OS) equipped with a recording application. The measurement scenario is presented in Figure 18. The cooperative target (a Volvo XC90) was illuminated by the BTS while moving around the parking lot of the Łódź University of Technology campus. Figure 19 presents a selected result obtained for the cooperative 5G NR network. This detection was obtained employing option 2 with uplink removal, as described in Figure 2 in the "Principles of Active and Passive Radars in a 5G Network-Based Sensing Perspective" section. A single target (the car) can be noticed in the CAF [see Figure 19(a)] at a bistatic range of about 49 m and a bistatic velocity of about 12 m/s. This detection was obtained for a case in which the RG was not fully allocated [see Figure 19(b)], proving that 5G-based PCL operation is possible even without full data allocation. However, a degradation of the bistatic velocity resolution is observed due to the reduction in effective integration time, and a deterioration of the bistatic range resolution is observed due to the reduction in effective signal bandwidth. The presented result proves the feasibility of using 5G for sensing. However, a deeper analysis of the challenges presented in this article will be necessary in the future, as they will have to be solved in order to create a fully functional autonomous passive radar.
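The CAF evaluated in Figure 19(a) is essentially a delay–Doppler correlation between the reference and surveillance channels. The toy sketch below illustrates the principle on synthetic noise-like signals; the sample rate, injected target parameters, and brute-force search are illustrative choices, not the processing chain of the actual recorder.

```python
import numpy as np

# Toy cross-ambiguity function (CAF): correlate the surveillance channel
# against delayed, Doppler-shifted copies of the reference channel.
rng = np.random.default_rng(0)
fs, n = 10_000, 4096
ref = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # reference channel

true_delay, true_doppler = 25, 400.0  # samples, Hz (injected target echo)
t = np.arange(n) / fs
surv = np.roll(ref, true_delay) * np.exp(2j * np.pi * true_doppler * t)

def caf(surv, ref, max_delay, dopplers, fs):
    t = np.arange(len(ref)) / fs
    out = np.empty((max_delay, len(dopplers)))
    for d in range(max_delay):
        shifted = np.roll(ref, d)
        for k, fd in enumerate(dopplers):
            # vdot conjugates its first argument: a matched-filter output
            out[d, k] = abs(np.vdot(shifted * np.exp(2j * np.pi * fd * t), surv))
    return out

dopplers = np.arange(-500.0, 501.0, 100.0)
m = caf(surv, ref, max_delay=50, dopplers=dopplers, fs=fs)
d_hat, k_hat = np.unravel_index(np.argmax(m), m.shape)
print(d_hat, dopplers[k_hat])  # peak at the injected delay and Doppler
```

Real PCL processing replaces the double loop with FFT-based correlation and must first remove the direct-path and clutter contributions, which dominate the CAF otherwise.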
Figure 19.
Results for a case when a single target is present and can be detected. (a) CAF. (b) Spectrogram.

CONCLUSION

This article presents opportunities and limitations in radar sensing based on 5G broadband cellular networks. The authors intended to present the current state of the art in this field and hope that the presented results will serve as a guideline for researchers working on joint communications and sensing applications using 5G networks, and as a prelude to using 6G-based communications for sensing. A detailed description of the fundamentals of 5G sensing and of the waveform defined by the standard was given. The article also discussed signal processing challenges in sensing applications. The predicted power budget and obtainable detection ranges were shown and finally confirmed by a real experiment using a passive radar for moving target detection with a 5G-based network as an illuminator of opportunity. In the authors' opinion, this kind of illumination can be effectively used in the future for joint radar and communication applications, including passive radar: channel estimates informing on target positions and velocities can be obtained both from the current 5G waveform and from modified modulation strategies [16], [22], [24]. Another prospective application is cooperation between 5G communication networks and distributed wide-area low-energy radar systems fully cooperative with those networks. This could become an additional commercial service offered by telecommunications providers: adding receivers with processing to already operating 5G networks can extensively increase the service area for mobile providers, offering new services for mobile users and security applications. Such operational conditions lead to radar-on-demand functionality, where target detection is activated when necessary.

ACKNOWLEDGMENTS

The authors would like to thank Professor Sławomir Hausman and Professor Piotr Korbel for enabling access to the 5G infrastructure at the Łódź University of Technology and for their valuable help during the measurement campaigns. They also thank Professor Krzysztof Kulpa for his support during the trials.

REFERENCES

[1] M. Malanowski, Signal Processing for Passive Bistatic Radar. Norwood, MA, USA: Artech House, 2019.
[2] M. Malanowski et al., "Passive radar based on LOFAR radio telescope for air and space target detection," in Proc. IEEE Radar Conf., 2021, pp. 1–6.
[3] G. Mazurek, "DAB signal preprocessing for passive coherent location," Sensors, vol. 22, no. 1, pp. 378–395, 2022.
[4] M. Malanowski, K. S. Kulpa, J. S. Kulpa, P. Samczynski, and J. Misiurewicz, "Analysis of detection range of FM-based passive radar," IET Radar Sonar Navigation, vol. 8, pp. 153–159, 2014.
[5] P. Falcone, F. Colone, and P. Lombardo, "Potentialities and challenges of WiFi-based passive radar," IEEE Aerosp. Electron. Syst. Mag., vol. 27, no. 11, pp. 15–26, Nov. 2012.
[6] O. Cabrera, C. Bongioanni, F. Colone, and P. Lombardo, "Non-coherent DVB-S passive radar demonstrator," in Proc. 21st Int. Radar Symp., 2020, pp. 228–231.
[7] P. Samczynski et al., "5G network-based passive radar," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5108209.
[8] P. Gomez-del Hoyo, N. del Rey-Maestre, M. P. Jarabo-Amores, D. Mata-Moya, and M. C. Benito-Ortiz, "Improved 2D ground target tracking in GPS-based passive radar scenarios," Sensors, vol. 22, no. 5, 2022, Art. no. 1724.
[9] R. Zemmari, U. Nickel, and W. D. Wirth, "GSM passive radar for medium range surveillance," in Proc. Eur. Radar Conf., 2009, pp. 49–52.
[10] M. Salovarda and K. Malaric, "Measurements of electromagnetic smog," in Proc. IEEE Mediterranean Electrotechnical Conf., 2006, pp. 470–473.
[11] S. Rzewuski, K. Kulpa, and P. Samczynski, "Duty factor impact on WIFIRAD radar image quality," in Proc. IEEE Radar Conf., 2015, pp. 400–405.
[12] P. Lingadevaru, B. Pardhasaradhi, P. Srihari, and G. Sharma, "Analysis of 5G new radio waveform as an illuminator of opportunity for passive bistatic radar," in Proc. Nat. Conf. Commun., 2021, pp. 1–6.
[13] J. A. Zhang, M. L. Rahman, X. Huang, S. Chen, Y. J. Guo, and R. W. Heath, "Perceptive mobile networks: Cellular networks with radio vision via joint communication and radar sensing," IEEE Veh. Technol. Mag., vol. 16, no. 2, pp. 20–30, Jun. 2021.
[14] J. A. Zhang et al., "Enabling joint communication and radar sensing in mobile networks–A survey," IEEE Commun. Surv. Tut., vol. 24, no. 1, pp. 306–345, Jan.–Mar. 2022.
[15] C. B. Barneto et al., "Full-duplex OFDM radar with LTE and 5G NR waveforms: Challenges, solutions, and measurements," IEEE Trans. Microw. Theory Techn., vol. 67, no. 10, pp. 4042–4054, Oct. 2019.
[16] R. S. Thomä et al., "Cooperative passive coherent location: A promising 5G service to support road safety," IEEE Commun. Mag., vol. 57, no. 9, pp. 86–92, Sep. 2019.
[17] O. Kanhere, S. Goyal, M. Beluri, and T. S. Rappaport, "Target localization using bistatic and multistatic radar with 5G NR waveform," in Proc. IEEE 93rd Veh. Technol. Conf., 2021, pp. 1–7.
[18] A. Evers and J. A. Jackson, "Analysis of an LTE waveform for radar applications," in Proc. IEEE Radar Conf., 2014, pp. 200–205.
[19] A. Evers and J. A. Jackson, "Cross-ambiguity characterization of communication waveform features for passive radar," IEEE Trans. Aerosp. Electron. Syst., vol. 51, no. 4, pp. 3440–3455, Oct. 2015.
[20] L. Zheng, M. Lops, Y. C. Eldar, and X. Wang, "Radar and communication coexistence: An overview: A review of recent methods," IEEE Signal Process. Mag., vol. 36, no. 5, pp. 85–99, Sep. 2019.
[21] J. A. Zhang et al., "An overview of signal processing techniques for joint communication and radar sensing," IEEE J. Sel. Topics Signal Process., vol. 15, no. 6, pp. 1295–1315, Nov. 2021.
[22] T. Wild, V. Braun, and H. Viswanathan, "Joint design of communication and sensing for beyond 5G and 6G systems," IEEE Access, vol. 9, pp. 30845–30857, 2021.
[23] D. Franken et al., "Integrating multiband active and passive radar for enhanced situational awareness," IEEE Aerosp. Electron. Syst. Mag., vol. 37, no. 8, pp. 36–49, Aug. 2022.
[24] Z. Wei et al., "Orthogonal time-frequency space modulation: A promising next-generation waveform," IEEE Wireless Commun., vol. 28, no. 4, pp. 136–144, Aug. 2021.
[25] W. C. Barott and J. Engle, "Single-antenna ATSC passive radar observations with remodulation and keystone formatting," in Proc. IEEE Radar Conf., 2014, pp. 0159–0163.
[26] K. Gronowski, P. Samczynski, K. Stasiak, and K. Kulpa, "First results of air target detection using single channel passive radar utilizing GPS illumination," in Proc. IEEE Radar Conf., 2019, pp. 1–6.
[27] ETSI, 5G NR Physical Channels and Modulation (3GPP TS 38.211 version 16.2.0, Release 16), Standard, ETSI, 2020.
[28] 3GPP, 5G NR Physical Layer Procedures for Control (3GPP TS 38.213 version 16.2.0, Release 16), Standard, ETSI, 2020.
[29] 3GPP, 5G NR Radio Resource Control (RRC) Protocol Specification (3GPP TS 38.331 version 16.1.0, Release 16), Standard, ETSI, 2020.
[30] F. Colone, C. Bongioanni, and P. Lombardo, "Multifrequency integration in FM radio-based passive bistatic radar. Part I: Target detection," IEEE Aerosp. Electron. Syst. Mag., vol. 28, no. 4, pp. 28–39, Apr. 2013.
[31] A. A. Salah, R. S. A. Raja Abdullah, A. Ismail, F. Hashim, and N. H. Abdul Aziz, "Experimental study of LTE signals as illuminators of opportunity for passive bistatic radar applications," Electron. Lett., vol. 50, no. 7, pp. 545–547, 2014.
[32] R. Yijie and W. Xiaojun, "Non-blind DOA estimation method for 5G mobile terminal," in Proc. IEEE Int. Conf. Signal Process., Commun. Comput., 2021, pp. 1–5.
[33] M. Malanowski, M. Żywek, M. Płotka, and K. Kulpa, "Passive bistatic radar detection performance prediction considering antenna patterns and propagation effects," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5101516.
Call for Papers IEEE Aerospace and Electronic Systems Magazine Special Issue on
Hypersonic Weapons: Threat and Defence The IEEE Aerospace and Electronic Systems Society (AESS) Magazine invites scholars, researchers, and industry professionals to submit papers for a special issue dedicated to the rapidly evolving domain of "Hypersonic Weapons: Threat and Defence". This issue is intended to be an outlet for the state-of-the-art regarding sensors for Hypersonic Missile Defence (HMD) as well as for platform/on-board sensors. The special issue will provide new insights into the latest developments, challenges, and opportunities in this crucial field of aerospace defence technology. The emergence of hypersonic weapons has disrupted the traditional paradigms of global security and defence. Capable of achieving speeds over Mach 5 and being highly manoeuvrable, these weapons pose a significant and complex threat. They challenge existing defence systems and necessitate innovation in threat detection, tracking, interception, and mitigation technologies. This special issue seeks high-quality original research and review articles that shed light on these aspects, bringing together theoretical analyses, empirical studies, and practical applications. Submissions may cover the following topics, including, but not limited to: Hypersonic Weapons Technologies: advancements in missile guidance systems; Threat Analysis: examination of the potential threats posed by hypersonic weapons, including their impact on current defence architectures and doctrines; Defensive Strategies and Technologies: development and deployment of systems capable of detecting, tracking, and neutralizing hypersonic threats; Policy and Regulation: international policies, treaties, and regulations on hypersonic weapons.
· Manuscript submission deadline: 30 November 2023
· First review completed: 31 January 2024
· Revised manuscript due: 28 February 2024
· Second review completed: 30 April 2024
· Final manuscript due: 31 May 2024
We strongly encourage you to submit your recent work. We provide a link to author instructions: http://sysaes.msubmit.net/cgi-bin/main.plex?form_type=do_cat&file_nm=info.htm Special Guest Editors: Daniel W. O’Hagan (Fraunhofer FHR), Michael Brandfass (Hensoldt), Georg Bahmeier (AMDC), Andreas Schmidt (NATO JAPCC), Nico de Bruijn (Thales) For questions, please contact Lead Guest Editor: [email protected]
RadarConf’24 2024 IEEE RADAR CONFERENCE
May 6-10, 2024 // Hilton Denver City Center // Denver, Colorado
The Peak of Radar Innovation KEY DATES
17 September 2023: Special Session Proposals Due
20 November 2023: Paper Submissions Due
27 November 2023: Tutorial Proposals Due
24 January 2024: Notification of Acceptance
4 March 2024: Final Paper Submission Due
Aim & Scope
In 2024, the IEEE Radar Conference will be held in Denver, Colorado. With over 300 days of sunshine per year and situated at the foot of the Rocky Mountains, Denver features beautiful weather, outdoor activities, world-class museums and public art, a performing arts complex, sporting events, and walkable shopping and dining destinations. The Denver area is home to offices of industry-relevant companies, major universities, the North American Aerospace Defense Command (NORAD), the US Air Force Academy (AFA), and the US Northern Command (NORTHCOM). Don't miss this exciting week filled with technical innovation and great adventure.
The Venue
The 2024 IEEE Radar Conference will be held at the Hilton Denver City Center. In addition to ample meeting space and conference amenities, the venue is within easy walking distance to the Larimer Square dining/shopping/entertainment district, Coors Field (baseball), the Denver Performing Arts Complex, and outdoor city parks. Red Rocks Park & Amphitheatre, Rocky Mountain National Park, and other outdoor adventure and hiking destinations are drivable within 30-90 minutes. Attendees will find myriad options for work, shopping, dining, exercise, relaxation, and wonder.
Call for Papers
Original papers describing significant advances in radar technologies, systems, applications, and techniques are sought. Prospective authors should prepare a 4-6 page full paper (including supporting figures) using the IEEE format. Papers should be submitted no later than 20 November 2023. Particular topics of interest include, but are not limited to:
» Radar Signal & Data Processing: STAP & adaptive processing, MIMO, waveform & frequency agility / software-defined radar, sparsity-based techniques, SAR / ISAR processing, digital beamforming & array processing, super-resolution techniques, detection & false alarm improvements, target tracking & fusion, classification & identification, AI/ML techniques
» Radar Phenomenology: target & clutter modeling and estimation, atmospheric propagation & scattering phenomenology, foliage & ground penetration, multipath exploitation
» Radar Systems & Applications: innovative designs / missions for airborne, spaceborne & shipborne radar, imaging radar, distributed active & passive radar, air traffic radar, over-the-horizon radar, automotive radar, multi-function radar / RF, sense & avoid radar, weather radar, medical / biomedical sensing
» Antenna Technology: conformal / low-profile arrays, design for low sidelobe level, ultra-wideband, metamaterials, multi-polarization, frequency-diverse arrays, dual / multi-band antennas & arrays, simultaneous multiple beams
» Subsystems and Components: novel & advanced processing architectures, processing & RF architectures for software-defined radar, RF system-on-chip (RFSoC) & other transceiver technologies, advanced components (e.g., GaN MMICs), real-time processing (e.g., FPGA, GPU, hybrid), T/R modules, advanced receiver designs, and simultaneous transmit / receive (STAR) architectures
» Emerging Radar Technologies: cooperative radar systems (scheduling, networking, fusion), cognitive radar, spectrum sharing & frequency agility, fully digital phased array radar, millimeter-wave / terahertz radar, application of AI/ML
For any inquiry, please contact [email protected]
2024.IEEE-RADARCONF.ORG
Feature Article:
DOI. No. 10.1109/MAES.2023.3290134
LTP for Reliable Data Delivery From Space Station to Ground Station in the Presence of Link Disruption Jie Liang, Xingya Liu, and Ruhai Wang, Lamar University, Beaumont, TX 77710 USA Lei Yang, Nanjing University, Nanjing 210093, China Xinghao Li, Tianjin University of Technology, Tianjin 300384, China Chao Tang, University of Michigan, Ann Arbor, MI 48109 USA Kanglian Zhao, Nanjing University, Nanjing 210093, China
INTRODUCTION It is commonly recognized that extremely long signal propagation delays, lengthy and frequent link disruptions, high data loss rates, and highly asymmetric channel rates are the major channel factors that degrade data transmission performance in space networks. Delay/disruption-tolerant networking (DTN) [1] was developed as a baseline networking technology to implement interplanetary deep-space networks [2]. As the main protocol of DTN, the bundle protocol (BP) [3] was developed to serve as the core "overlaying" protocol of DTN, under which a variety of transport-layer protocols can be accommodated for various user requirements. Intended as the primary transport protocol of DTN in space, the Licklider Transmission Protocol (LTP) [4], [5] is expected to operate underneath BP to provide reliable data delivery service in a challenging networking environment, regardless of the presence of random link disruptions and/or extremely long propagation delays. LTP implements multiple concurrent transmissions of its basic protocol data units (PDUs) (namely, blocks) in a
Authors’ current addresses; Jie Liang, Xingya Liu, and Ruhai Wang are with Lamar University, Beaumont, TX 77710 USA (email: [email protected]; [email protected]; [email protected]). Lei Yang and Kanglian Zhao are with Nanjing University, Nanjing 210093, China (e-mail: [email protected]; zhaokan [email protected]). Xinghao Li is with the Tianjin University of Technology, Tianjin 300384, China (e-mail: [email protected]). Chao Tang is with the University of Michigan, Ann Arbor, MI 48109 USA (e-mail: [email protected]). (Corresponding authors: Ruhai Wang; Kanglian Zhao.) Manuscript received 31 January 2023; accepted 23 June 2023, and ready for publication 28 June 2023. Review handled by Mauro De Sanctis. 0885-8985/23/$26.00 ß 2023 IEEE 24
continuous manner [5] without waiting for acknowledgments. In addition, to meet various user application requirements, selective data transmission services are available for LTP, including a transmission control protocol (TCP) type of reliable delivery service and a user datagram protocol (UDP) type of unreliable delivery service. The initial intent of DTN is deep-space communications, which are characterized by extremely long link delays. However, even when the link delay is short, DTN is expected to be highly effective in a networking environment with random link connectivity. A typical example of this application is reliable data transmission from space stations to ground stations, for which the link propagation delay is not long but the link frequently experiences disruptions due to the periodic movement of the station in a very low orbit [6]. The primary space station communication system is generally designed as a relay-based architecture: the data downlink paths are relayed through a high-bandwidth bent-pipe satellite operating at a much higher geostationary earth orbit (GEO) [7]. Even with the relay-based architecture, the contact windows are not aligned all the time. This is a scenario in which, using DTN's "store-and-forward" service, data can be "stored" in persistent memory and then "forwarded" upon availability of the next-hop data link. Extensive efforts have been made jointly by the Jet Propulsion Laboratory (JPL), California Institute of Technology, and other academic groups to exercise LTP [8], [9], [10], [11], [12], [13], [14], [15] and BP [16], [17], [18], [19], [20], [21], [22], [23], [24], [25] in Moon and Mars communications. The use of DTN for data transmission from the International Space Station (ISS) to earth ground stations has been implemented by the National Aeronautics and Space Administration (NASA) [6].
However, very little work has been done in studying the performance of LTP in terms of reliable data/file transfer from the space station to the ground stations, particularly in the
case of link disruption. Extensive work is needed for a solid performance evaluation of LTP for reliable file delivery over a relay-based communication architecture between the space station and the ground stations. In this article, a study of the effect of link disruption events on LTP using a PC-based experimental infrastructure is presented. The main contribution of this article is an experimental performance evaluation of LTP data transfer experiencing link disruptions between the space station and the ground stations. Realistic data transmission experiment results are presented with a focus on the effect of link disruptions occurring during data transmission. The analysis is done over both a clean channel (i.e., without data loss caused by channel error) and a lossy channel with a high channel error rate. To the best of our knowledge, this article presents the first experimental study of the effect of link disruptions on the use of LTP for reliable file delivery over a relay-based communication architecture between the ISS and the ground stations. The quantitative results collected using an experimental method are useful in characterizing the operation and transmission performance of LTP for reliable file delivery from the space station to the ground stations.
DTN PROTOCOLS AND RELIABLE TRANSMISSION MECHANISMS OF LTP FOR SPACE As mentioned, DTN was intended to serve as the baseline technology to implement space networks. As the core protocol of DTN, BP was designed to build a “store-andforward” overlay network for custody-based data transmission service to DTN. Figure 1 illustrates the BP-based DTN protocol stacks that can be deployed in space networks [18, with changes]. As the overlaying protocol of space DTN, BP utilizes the service of the underlying data transport protocols through an interfacing “convergence layer adapter” (CLA) for reliable or unreliable data SEPTEMBER 2023
delivery. As illustrated, a variety of data transport protocols can operate underneath BP to meet various user requirements from heterogeneous networks. These protocols may be the most widely deployed TCP, UDP, or the recently developed LTP. To improve data delivery efficiency and maximize link utilization, LTP is designed to operate on multiple concurrent transmissions of data blocks in a continuous manner. This is implemented using transmission "sessions." A "session" is defined as the sequence of segment exchanges of LTP undertaken for the successful transmission of a single data block. LTP enables multiple block sessions running concurrently in different stages of data transmission. One session corresponds to the transmission of a single block per second, with the data size of a block equal to the amount of data the channel can carry per second at its maximum rate. The transmission mechanism of block sessions is expected to maximize space link utilization by fully using the data link capacity. It also serves as a flow control for LTP. Selective data transmission mechanisms are available for LTP: the LTP service with respect to any single block can be reliable, unreliable, or both [5]. That is, LTP may transmit an entire block either reliably (in the manner of TCP) or unreliably (in the manner of UDP), or alternatively it may transmit part of a block reliably and the rest unreliably. Reliable transmission of a single block or a portion of it is effected by designating the data bytes as "red" data, while the remaining data are designated as "green" data. As this study is mainly concerned with the reliable transmission service of LTP used in space station communications, we only present an overview of the transmission of a block configured to be entirely "red." For the transmission of a "green" block or a mix of "green" and "red" parts of a block, refer to [5].
The “red” data block is generally segmented as multiple red LTP data segments (RDS) depending on the block size and the data link frame size with the last segment flagged as an asynchronous checkpoint (CP). The CP segment also serves as the end of
Figure 1. BP-based DTN protocol stacks for space networks.
red part (EORP) and the end of block (EOB). As soon as the queued CP/EORP/EOB segment is transmitted, the retransmission timeout (RTO) timer of the CP segment, termed the CP timer, is started at the sender. Similar to the operation of TCP, the CP segment (not the entire block) is retransmitted if the acknowledging report segment (RS) is not received from the LTP receiver upon expiration of the CP timer. The RS segment is sent by the receiving LTP in response to reception of the CP segment to report the delivery status of the entire block; the RS requests retransmission of any data segments that were not successfully received. If all the red data segments are reported to have been successfully received, a report acknowledgment (RA) segment is sent by the sender in response to the RS. However, if any segment is reported lost (that is, not delivered), it is retransmitted immediately. The last of the retransmitted segments of the block is again flagged as a CP, which requests another RS reporting on the delivery status of those retransmitted segments. The process repeats until all the segments of the original block are successfully delivered at the receiver. For a side-by-side comparison in tabular form between the conventional TCP/IP and LTP running with BP, see [8].
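The red-block exchange described above can be condensed into a toy model: segments are sent, the checkpoint solicits a report, and only the gaps listed in the RS are resent. This sketch deliberately ignores CP/RS loss and the associated timers and is not an implementation of the actual protocol (see RFC 5326 for the real procedures).

```python
# Toy model of one reliable ("red") LTP block session. The block is split
# into numbered segments; after each sending round the receiver's report
# segment (RS) is modeled as perfect knowledge of which segments are still
# missing, and only those are resent.

def transmit_red_block(n_segments, lost_first_pass):
    received = set()
    rounds = 0
    to_send = set(range(n_segments))
    while to_send:
        rounds += 1
        # segments lost on the first pass never arrive; retransmissions do
        delivered = to_send - lost_first_pass if rounds == 1 else to_send
        received |= delivered
        # RS: the receiver reports the remaining gaps for retransmission
        to_send = set(range(n_segments)) - received
    return rounds, sorted(received)

rounds, got = transmit_red_block(8, lost_first_pass={2, 5})
print(rounds, got)  # -> 2 [0, 1, 2, 3, 4, 5, 6, 7]
```

Note that only the missing segments travel in the second round, which is the key difference from a go-back-N scheme that would resend everything after the first gap.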
PRIMARY COMMUNICATION ARCHITECTURE FOR SPACE STATION It is well known that in the case of frequent and random link disruptions, a relay-based networking architecture should be adopted for more connectivity and higher data transmission effectiveness in space communications. Recent deep-space missions have shown that a significant increase in data delivery to earth can be achieved if a relay-based networking architecture is used. Considering the similarity with respect
to link connectivity, more extensive use of relay-based communication architectures should be made in near-earth space flight missions. The ISS and China's "Tian-gong" space station are no exceptions. Taking the ISS as an example, because of its fast traveling speed at a very low orbit, the direct-to-earth communication channel is disrupted for most of its operating time. For this reason, the communication system of the ISS is designed so that its primary data downlink paths are relayed via the Tracking and Data Relay Satellite System (TDRSS) [7] at a much higher GEO. With the bent-pipe relay service from the TDRSS, a much higher transmission efficiency is achieved for data delivery from the ISS to the earth. However, even with the relay service of the TDRSS, link disruption is inevitable because the contact windows between the TDRSS and the earth ground stations do not always align during the data transmission process. For a relaying GEO-satellite architecture such as the TDRSS, a single satellite can cover one third of the earth's surface; theoretically, three satellites can cover most of it. With a relaying GEO satellite available at a very high orbit, the space stations can easily establish and maintain the link (most of the time) with the ground stations on the earth through the bent-pipe relaying service. Two relaying architectures through GEO satellites are widely adopted for space station communications: a typical one-hop relaying architecture and a two-hop relaying architecture. One-hop relaying refers to an architecture in which a single GEO satellite serves as a relaying node between the space station and the earth ground station. In comparison, two-hop relaying refers to an architecture in which two GEO satellites provide the relaying service between the space station and the ground station, with each satellite serving as one relaying node. The flight period of the space station is about 90 min.
The space station communicates with three relay satellites
IEEE A&E SYSTEMS MAGAZINE
SEPTEMBER 2023
in different positions during the flight for reliable data delivery to the earth station. As the space station moves into the next satellite's coverage range, the link delay also changes because a different link path is taken. The data link paths may switch between the one-hop relay and the two-hop relay as needed to maintain contact.
DIFFERENT SCENARIOS OF LINK DISRUPTIONS BETWEEN SPACE STATION TO EARTH STATIONS

The periodic movement of the space station and the resulting end-to-end link disruption can be predicted. However, link disruption may also occur randomly due to other factors such as nonalignment of contact windows, weather, or solar storms. These disruption scenarios generally fall into two cases: 1) link disruption over the downlink from the space station to the ground station; and 2) link disruption over the uplink from the ground station to the space station. Figure 2 illustrates the transmission scenarios for an LTP block transmission experiencing a link disruption in each case.
CASE 1: LINK DISRUPTION OVER DOWNLINK FROM SPACE STATION TO GROUND STATION

In space communications, a primary goal is to have scientific data successfully delivered (from spacecraft) over the downlink to the earth ground station. The space station
communication is no exception. In other words, a link disruption over a downlink from the space station to an earth ground station mainly affects the transmission of the data sent from the space station. With respect to BP/LTP, it affects the transmission of the LTP segments of a block, whether regular data segment(s) and/or a CP segment. Figure 2(a) illustrates the transmission scenario for an LTP block that experiences a link disruption during transmission of its data segments. When a link disruption event occurs, all segments already sent out but not yet delivered at the receiver fail; that is, they are lost over the data link. The receiver receives only those data segments delivered prior to the link disruption, and the remaining segments of the block (including the last one, the CP segment) are lost. Because the CP is not delivered, the receiver is not triggered to transmit an RS. As a result, the CP segment is resent by the sender upon expiration of the RTO timer. Assuming the link has resumed by the time the CP segment is resent, the resent CP is successfully delivered, triggering the receiver to report delivery status. The receiver responds to the sender with an RS reporting the lost segments of the block that need to be resent. Then, the lost data segments are retransmitted.
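The Case 1 timer behavior can be sketched as a tiny simulation. This is an illustrative model only, not the ION implementation; the timings (CP first sent at 10 s, an RTO of 12 s, a disruption from 7 s to 45 s) are assumptions chosen to mirror the 38-s disruption scenario examined in the experiments.

```python
# Illustrative model of LTP Case 1: a checkpoint (CP) lost to a link
# disruption is resent on each RTO expiry until the link has resumed.
# All parameter values are assumptions, not taken from the ION code.

def cp_delivery_time(cp_send_time, rto, disruption_start, disruption_end):
    """Return (delivery_time, attempts) for a CP first sent at cp_send_time (s)."""
    t, attempts = cp_send_time, 1
    while disruption_start <= t < disruption_end:  # CP lost while link is down
        t += rto          # RTO timer expires at the sender; CP is resent
        attempts += 1
    return t, attempts

# Disruption lasting 38 s (from 7 s to 45 s), RTO of 12 s, CP first sent at 10 s:
t, n = cp_delivery_time(10, 12, 7, 45)
print(t, n)  # 46 4 -> delivered on the 4th attempt (3 retransmissions)
```

With these assumed timings, the model reproduces the reported behavior for a 38-s disruption: three CP retransmissions before the fourth attempt succeeds after the link resumes.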
CASE 2: DISRUPTION OVER UPLINK FROM GROUND STATION TO SPACE STATION

The reverse uplink from the ground station to the space station is mainly used for transmission of ACK segments. This implies
Figure 2. LTP block transmission scenarios having a link disruption experienced. (a) Disruption occurred during block transmission over a downlink. (b) Disruption occurred during transmission of an RS over uplink.
LTP for Reliable Data Delivery From Space Station to Ground Station in the Presence of Link Disruption

that, with respect to the operation of LTP, a link disruption over the uplink from an earth ground station to the space station affects only the transmission of the RSs sent by the receiver. Figure 2(b) illustrates the transmission scenario in this case. Because the link disruption occurs during the transmission of the RS, the RS is lost over the uplink. Because the RS is not delivered, the sender is not triggered to transmit an RA segment. As a result, similar to the retransmission mechanism of the CP, the RS is resent by the receiver upon expiration of its RTO timer.
EXPERIMENTAL EVALUATION RESULTS AND DISCUSSION
EXPERIMENTAL INFRASTRUCTURE AND CONFIGURATIONS

The PC-based space communication and networking testbed (SCNT) used in [8], [9], [10], [11], [12], and [13] was adopted as the infrastructure for the data block delivery experiments in this article. The testbed was validated and extensively described in previous work [13], so it is not discussed in detail here. The Interplanetary Overlay Network (ION) distribution v4.1.1 [26] developed by NASA's JPL was adopted as the LTP protocol implementation for the experiments. For the protocol configuration, LTP was configured to operate under the overlying BP (i.e., BP/LTP). The BP custody transfer option was disabled to remove the effect of the BP reliability service on LTP transmission, so that the reliable data delivery service of LTP alone could be evaluated. To configure the reliable transmission service of LTP, all the data bytes of the LTP block were designated as "red" data. The LTP block length was configured to be 400 Kbytes, and each block was fragmented into data segments of 1400 bytes for transmission. With respect to the space channel configurations, a delay of 0.35 s was adopted to emulate the inevitable one-way link delay. This delay emulates the one-way light-time propagation delay over the downlink channel from the space station through the GEO relaying satellite to the earth ground station, and over the uplink channel in the opposite direction. The effect of channel-rate asymmetry on file transmission was also integrated into the experiments: a downlink channel rate of 250 Kbytes/s and an uplink channel rate of 25 Kbytes/s, giving a channel ratio (CR) of 10/1. Corresponding to the two link disruption scenarios in Section "Different Scenarios of Link Disruptions Between Space Station to Earth Stations", two sets of experiments, Set 1 and Set 2, were conducted.
Set 1 corresponds to Case 1, in which the disruption starts at
7 s from the beginning of data transmission. Set 2 corresponds to Case 2, in which the disruption starts at 16.35 s from the beginning of data transmission. Set 1 leads to a loss of the CP segment over the downlink, and Set 2 leads to a loss of the RS over the uplink. To study the effect of different link disruption durations on LTP, six durations were configured for each set of experiments: 10 s, 24 s, 38 s, 52 s, 66 s, and 80 s. To evaluate the joint effect of link disruption and channel error, three channel bit-error rates, 10⁻⁶, 5 × 10⁻⁶, and 10⁻⁵, were configured for the experiments.
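From the stated configuration, a few quantities used later in the discussion can be derived directly. The constants below simply restate the configuration above; the derived values are a sketch of the arithmetic.

```python
# Quantities derived from the experiment configuration described above.
BLOCK_LEN = 400_000      # bytes per LTP block (400 Kbytes)
SEGMENT_LEN = 1400       # bytes per LTP data segment
FILE_LEN = 10_000_000    # bytes per file (10 Mbytes)
DOWNLINK_RATE = 250_000  # bytes/s
UPLINK_RATE = 25_000     # bytes/s

segments_per_block = -(-BLOCK_LEN // SEGMENT_LEN)  # ceiling division
blocks_per_file = -(-FILE_LEN // BLOCK_LEN)        # 25 blocks, as in the TSGs
channel_ratio = DOWNLINK_RATE // UPLINK_RATE       # CR of 10/1

print(segments_per_block, blocks_per_file, channel_ratio)  # 286 25 10
```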
The numerical results from the experiments are presented in this section. Time sequence graphs (TSGs) [27] are also used to analyze the link disruption effect at the data packet (or segment) level.
EFFECT OF LINK DISRUPTION

Figure 3 presents the time effect on LTP data block delivery caused by link disruption, measured from the block transfer experiments over a clean space channel (i.e., with a bit-error rate (BER) of 0) in Set 1 and Set 2. For both sets, the time effect on data delivery in Figure 3 increases linearly with the duration of link disruption. The link disruption in the experiments is configured to increase in steps of 14 s, leading to the linearly increasing pattern. A longer link disruption leads to a longer time for the sender or receiver in waiting for the link
Figure 3. Time effect on LTP data block delivery caused by various link disruptions in block transfer over a clean space channel (i.e., with a BER of 0) in set 1.
Figure 4. A TSG for transmission of data blocks over a clean space channel with a link disruption of 38 s in set 1.
to resume. This increases the number of times the CP timeout timer (in Set 1) or the RS timeout timer (in Set 2) expires during the disruption. As the CP segment of the block or the RS is resent upon each expiration of the CP/RS timer until the data link resumes, the number of CP/RS retransmission attempts increases, leading to the linearly increasing pattern of the effect on the data delivery time. To illustrate the effect of link disruption, Figure 4 presents a TSG for transmission of five data blocks with a link disruption of 38 s in Set 1. The blocks are transmitted following a smooth pattern, but a disruption event is experienced during transmission of the first block. The link disruption starts around 7 s and ends around 45 s from the beginning of the first block transmission. While the first block is affected by the disruption event, the four blocks sent after the link disruption are all successfully delivered, as indicated by the fact that each of them is acknowledged by the receiver with an RS. The first block is not acknowledged with an RS (as the other four blocks are) because its CP segment is lost due to the link unavailability caused by the disruption. As the CP is lost, it cannot trigger transmission of an RS at the receiver. The enlarged TSG in Figure 5 indicates that the CP segment of the first block is retransmitted three times, shown by three consecutive R flags (following the initial block transmission). In other words, four CP transmission attempts are made in response to the link disruption event: the first three are affected by the disruption, and the last one is successfully delivered. This is also consistent with the time effect at the link disruption of 38 s
reported in Figure 3. As shown in Figure 3, the time effect of the link disruption increases from 20 s (at which the first CP is sent) to 58 s. The time difference of around 38 s corresponds to three transmission attempts of the CP segment, given that the RTO timer is 12 s. When the disruption ends around 45 s, the last (third) resent CP segment is successfully delivered at the receiver. The delivered CP triggers the receiver to transmit an RS. As soon as the RS is delivered at the sender, all the segments lost during the disruption are resent by the sender, as shown in Figure 5. After that, the transmission of new blocks resumes. It is also observed from the enlarged view in Figure 5 that, for both the first-time transmission and the retransmission of the block, all data segments are sent following a consistent pattern, because no channel error is experienced. The time effect on the file delivery time in Figure 3 increases in a stepwise pattern with the duration of link disruption, due to the increasing number of retransmission attempts (of both the CP and data segments). With each 14-s increase in the duration of link disruption, which is slightly longer than the CP retransmission timeout timer, the number of CP retransmission attempts increases by one. This can be clarified by comparing the TSGs for two block transfer experiments with different link disruptions. Figure 6 presents a TSG for the block transfer experiment with a link disruption of 66 s in Set 1. All the transmission conditions are the same as in the experiment of Figure 4 except that the link disruption increases from 38 to 66 s. As shown, because of the much longer link disruption experienced by
Figure 5. An enlarged TSG illustrating a scenario of link disruption of 38 s for LTP data block delivery in set 1 and the associated retransmission of data segments.
the data transfer, the total number of retransmission attempts of the CP increases to five, for six transmission attempts in total for the block transfer. In other words, the block transfer with a link disruption of 66 s requires two more retransmission attempts than the one with a disruption of 38 s; that is, the extra 28 s of link disruption results in two more retransmission attempts for the data transfer, which is consistent with the earlier analysis. In comparison, for all the link disruption settings, the block delivery runs of Set 2 in Figure 3 show less
Figure 6. A TSG for transmission of data blocks with a link disruption of 66 s in set 1.
time effect than the runs in Set 1. The difference can be explained by the different link disruption starting times of the two sets. The link disruptions in Set 1 and Set 2 start at 7 and 16.35 s, respectively, from the beginning of the block transmission. It takes 16 s for the sender to transmit all the segments of the block at the given data rate of 250 Kbytes/s and block length of 400 Kbytes. When the link disruption starts at 7 s from the beginning of the transmission in Set 1, the CP segment (as the last segment of the block) and some data segments are certainly lost. The lost CP segment has to be retransmitted upon expiration of the long CP timeout timer. This does not happen in the runs of Set 2 because the disruption starts later: the CP segment has already been successfully transmitted when the link disruption occurs, and only the transmission of the RS from the receiver over the reverse channel is affected. The sender therefore does not wait through CP timeout expirations to resend any lost segments, leading to less time effect than in the runs of Set 1.
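The stepwise growth of the time effect discussed in this section admits a simple back-of-the-envelope approximation. The 14-s step (the 12-s RTO plus a round-trip allowance) is taken from the text; the formula itself is an illustrative model, not part of the testbed or the ION implementation.

```python
import math

def cp_retransmissions(disruption_s, step_s=14.0):
    """Approximate number of CP retransmission attempts for a given
    disruption duration. step_s ~ RTO (12 s) plus a round-trip allowance;
    an illustrative model only."""
    return math.ceil(disruption_s / step_s)

for d in (10, 24, 38, 52, 66, 80):
    print(d, cp_retransmissions(d))  # 38 s -> 3 and 66 s -> 5, matching the TSGs
```

Note that the model also reproduces the observation that the extra 28 s of disruption (66 s versus 38 s) costs exactly two additional retransmission attempts.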
JOINT EFFECT OF LINK DISRUPTION AND CHANNEL ERROR

Figure 7 presents the file delivery time of LTP measured for transmission of a 10-Mbyte file with variations of link disruption in Set 1 at three channel error rates: BERs of 10⁻⁶, 5 × 10⁻⁶, and 10⁻⁵. Because Set 1 and Set 2 show similar variation patterns with respect to the BER and the duration of link disruption, only Set 1 is presented in Figure 7. Even with both link disruption and channel error involved, the file delivery time shows an increasing pattern similar to the time effect caused by link disruption alone in Figure 3: a linear, staircase increase with the link disruption duration at all three channel error rates. This variation pattern was clarified in the discussion of the numerical results in Figure 3. In comparison, however, the file delivery times at all three BERs are significantly longer than those in Set 1 and Set 2 without channel error. This is mainly due to the introduction of channel error, which corrupts data segments (i.e., causes loss events) and thus forces their retransmission. These retransmissions may be done together with the regular data segments and CP segments lost due to the link disruption, or in separate retransmission attempts. As a result, additional retransmission attempts are needed, which is why the file delivery time is significantly longer than without channel error in Figure 3. The effect of channel error and the resulting data loss/retransmission on LTP can be seen from the TSG
Figure 7. File delivery time with variations of link disruption at three channel error rates for LTP in set 1.
in Figure 8, which is for transmission at a BER of 10⁻⁶ with a link disruption of 52 s. Given that the block length was configured to be 400 Kbytes, a 10-Mbyte file is transmitted as 25 blocks, as shown in the TSG. Because a BER of 10⁻⁶ is introduced during transmission, every block has data segments corrupted and retransmitted. As observed from the enlarged view, some blocks even require more than one retransmission attempt because multiple segments of a single block are corrupted. As the corrupted data segments need to be retransmitted before a new block is sent, the block delivery time increases. As a result, the entire file delivery time (for all the blocks) is significantly longer than in the TSG without channel error in Figure 4. It is also observed that the higher the channel error rate, the longer the file delivery time. Taking the transmission with a link disruption duration of 52 s in Figure 7 as an example, the file delivery time at a BER of 10⁻⁵ is 1065 s, much higher than the 900 s at a BER of 5 × 10⁻⁶ and the 770 s at a BER of 10⁻⁶. This is because a higher BER results in the loss of more data segments. All the lost segments need to be retransmitted, either together with the CP segment or by themselves. This leads to additional retransmission attempts for successful file delivery and, therefore, a longer file delivery time.
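The per-block segment losses observed at these BERs are consistent with a simple independent bit-error model. The sketch below assumes i.i.d. bit errors, which is an assumption for illustration only; the testbed's actual error process may differ.

```python
# Expected number of corrupted (lost) segments per block under an
# independent bit-error model -- an illustrative assumption only.
SEGMENT_BITS = 1400 * 8   # bits per 1400-byte segment
SEGMENTS_PER_BLOCK = 286  # ceil(400 Kbytes / 1400 bytes)

def expected_losses(ber):
    p_seg = 1 - (1 - ber) ** SEGMENT_BITS  # P(segment has at least 1 bit error)
    return p_seg * SEGMENTS_PER_BLOCK

for ber in (1e-6, 5e-6, 1e-5):
    print(f"BER {ber:g}: ~{expected_losses(ber):.1f} corrupted segments per block")
```

At a BER of 10⁻⁶ this model predicts roughly three corrupted segments per 286-segment block, in line with the observation that every block experiences segment corruption and retransmission.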
SUMMARY AND CONCLUSION

The quantitative results collected from realistic LTP data transmission experiments indicate that LTP is very effective in handling link disruption events in space station communications, even in the presence of a high channel error rate. The time effect on LTP transmission caused by link disruption increases with the duration of link
disruption. Under the joint effect of link disruption and channel error, the transmission performance deteriorates: the higher the channel error rate introduced, the longer the disruption time effect experienced, leading to a longer file delivery time. This is because a higher channel error rate causes more corruption of LTP data segments (i.e., more data losses), which results in additional retransmission attempts. The findings also indicate that for a given link disruption event, the earlier the event starts, the more transmission attempts fail. As a result, the number of transmission attempts increases, leading to a longer data block delivery time and significant degradation of transmission efficiency.

Figure 8. A TSG for file transmission with a link disruption of 52 s joint with a channel BER of 10⁻⁶, illustrating the effect of channel error on LTP in set 1.
FUTURE WORK

This article presents an experimental study, using a PC-based testbed infrastructure, of LTP for reliable data delivery from the space station to the ground stations in the presence of link disruption. The numerical results collected from the experimental runs may vary slightly if a different hardware setup or software/operating system is adopted for the infrastructure. As the major future work, we propose to build analytical models of the performance of LTP for reliable file delivery between the space station and the ground stations, with a focus on the effect of link disruption, which may verify the experimental results presented in this article.

REFERENCES

[1] S. Burleigh et al., "Delay-tolerant networking: An approach to interplanetary Internet," IEEE Commun. Mag., vol. 41, no. 6, pp. 128–136, Jun. 2003, doi: 10.1109/MCOM.2003.1204759.
[2] The Space Internetworking Strategy Group (SISG), "Recommendations on a strategy for space internetworking," IOAG.T.RC.002.V1, Report of the Interagency Operations Advisory Group, NASA Headquarters, Washington, DC 20546-0001, USA, Aug. 2010.
[3] Consultative Committee for Space Data Systems, "Bundle protocol specifications," CCSDS 734.2-B-1, Blue Book, Issue 1, Washington, DC, USA: CCSDS, Sep. 2015.
[4] Consultative Committee for Space Data Systems, "Licklider transmission protocol for CCSDS," CCSDS 734.1-B-1, Blue Book, Issue 1, Washington, DC, USA: CCSDS, May 2015.
[5] M. Ramadas, S. Burleigh, and S. Farrell, "Licklider transmission protocol specification," Internet RFC 5326, Sep. 2008.
[6] A. Schlesinger, B. M. Willman, L. Pitts, S. R. Davidson, and W. A. Pohlchuck, "Delay/disruption tolerant networking for the International Space Station (ISS)," in Proc. IEEE Aerosp. Conf., 2017, pp. 1–14, doi: 10.1109/AERO.2017.7943857.
[7] National Aeronautics and Space Administration, "Space network users' guide (SNUG)," Rev. 10, Goddard Space Flight Center, Greenbelt, MD, USA, Aug. 2012. Accessed: Feb. 2021. [Online]. Available: http://esc.gsfc.nasa.gov/assets/files/450-SNUG.pdf
[8] Q. Yu, R. Wang, Z. Wei, X. Sun, and J. Hou, "DTN Licklider Transmission Protocol (LTP) over asymmetric space channels," IEEE Aerosp. Electron. Syst. Mag., vol. 28, no. 5, pp. 14–22, May 2013, doi: 10.1109/MAES.2013.6516145.
[9] R. Wang, Z. Wei, Q. Zhang, and J. Hou, "LTP aggregation of DTN bundles in space communications," IEEE Trans. Aerosp. Electron. Syst., vol. 49, no. 3, pp. 1677–1691, Jul. 2013, doi: 10.1109/TAES.2013.6558012.
[10] Z. Yang et al., "Analytical characterization of Licklider Transmission Protocol (LTP) in cislunar communications," IEEE Trans. Aerosp. Electron. Syst., vol. 50, no. 3, pp. 2019–2031, Jul. 2014, doi: 10.1109/TAES.2013.120746.
[11] K. Zhao, R. Wang, S. C. Burleigh, M. Qiu, A. Sabbagh, and J. Hu, "Modeling memory variation dynamics for the Licklider Transmission Protocol in deep-space communications," IEEE Trans. Aerosp. Electron. Syst., vol. 51, no. 4, pp. 2510–2524, Oct. 2015, doi: 10.1109/TAES.2015.140907.
[12] Q. Yu, S. Burleigh, R. Wang, and K. Zhao, "Performance modeling of LTP in deep-space communications," IEEE Trans. Aerosp. Electron. Syst., vol. 51, no. 3, pp. 1609–1620, Jul. 2015, doi: 10.1109/TAES.2014.130763.
[13] R. Wang, S. C. Burleigh, P. Parik, C.-J. Lin, and B. Sun, "Licklider Transmission Protocol (LTP)-based DTN for cislunar communications," IEEE/ACM Trans. Netw., vol. 19, no. 2, pp. 359–368, Apr. 2011, doi: 10.1109/TNET.2010.2060733.
[14] J. Hu, R. Wang, X. Sun, Q. Yu, Z. Yang, and Q. Zhang, "Memory dynamics for DTN protocol in deep-space communications," IEEE Aerosp. Electron. Syst. Mag., vol. 29, no. 2, pp. 22–30, Feb. 2014, doi: 10.1109/MAES.2014.130123.
[15] L. Shi et al., "Integration of Reed-Solomon codes to Licklider Transmission Protocol (LTP) for space DTN," IEEE Aerosp. Electron. Syst. Mag., vol. 32, no. 4, pp. 48–55, Apr. 2017, doi: 10.1109/MAES.2017.160118.
[16] L. Yang, R. Wang, Y. Zhou, J. Liang, K. Zhao, and S. C. Burleigh, "An analytical framework for disruption of Licklider Transmission Protocol in Mars communications," IEEE Trans. Veh. Technol., vol. 71, no. 5, pp. 5430–5444, May 2022, doi: 10.1109/TVT.2022.3153959.
[17] K. Zhao, R. Wang, S. C. Burleigh, A. Sabbagh, W. Wu, and M. De Sanctis, "Performance of bundle protocol for deep-space communications," IEEE Trans. Aerosp. Electron. Syst., vol. 52, no. 5, pp. 2347–2361, Oct. 2016, doi: 10.1109/TAES.2016.150462.
[18] A. Sabbagh, R. Wang, K. Zhao, and D. Bian, "Bundle protocol over highly asymmetric deep-space channels," IEEE Trans. Wireless Commun., vol. 16, no. 4, pp. 2478–2489, Apr. 2017, doi: 10.1109/TWC.2017.2665539.
[19] G. Yang, R. Wang, A. Sabbagh, K. Zhao, and X. Zhang, "Modeling optimal retransmission timeout interval for bundle protocol," IEEE Trans. Aerosp. Electron. Syst., vol. 54, no. 5, pp. 2493–2508, Oct. 2018, doi: 10.1109/TAES.2018.2820398.
[20] A. Sabbagh, R. Wang, S. C. Burleigh, and K. Zhao, "Analytical framework for effect of link disruption on bundle protocol in deep-space communications," IEEE J. Sel. Areas Commun., vol. 36, no. 5, pp. 1086–1096, May 2018, doi: 10.1109/JSAC.2018.2832832.
[21] G. Wang, S. Burleigh, R. Wang, L. Shi, and Y. Qian, "Scoping contact graph routing scalability," IEEE Veh. Technol. Mag., vol. 11, no. 4, pp. 46–52, Dec. 2016, doi: 10.1109/MVT.2016.2594796.
[22] R. Wang and S. Horan, "Protocol testing of SCPS-TP over NASA's ACTS asymmetric links," IEEE Trans. Aerosp. Electron. Syst., vol. 45, no. 2, pp. 790–798, Apr. 2009, doi: 10.1109/TAES.2009.5089562.
[23] R. Wang et al., "Which DTN CLP is best for long-delay cislunar communications with channel-rate asymmetry?," IEEE Wireless Commun., vol. 18, no. 6, pp. 10–16, Dec. 2011, doi: 10.1109/MWC.2011.6108327.
[24] Q. Yu, X. Sun, R. Wang, Q. Zhang, J. Hu, and Z. Wei, "The effect of DTN custody transfer in deep-space communications," IEEE Wireless Commun., vol. 20, no. 5, pp. 169–176, Oct. 2013, doi: 10.1109/MWC.2013.6664488.
[25] R. Wang, A. Sabbagh, S. C. Burleigh, K. Zhao, and Y. Qian, "Proactive retransmission in delay-/disruption-tolerant networking for reliable deep-space vehicle communications," IEEE Trans. Veh. Technol., vol. 67, no. 10, pp. 9983–9994, Oct. 2018, doi: 10.1109/TVT.2018.2864292.
[26] S. C. Burleigh, "Interplanetary overlay network design and operation v4.1.1," JPL D-48259, NASA's Jet Propulsion Laboratory (JPL), California Inst. Technol., Pasadena, CA, USA, Jan. 2022. Accessed: Jan. 2022. [Online]. Available: http://sourceforge.net/projects/ion-dtn/files/latest/download
[27] "TCP analysis tool: TCPTRACE." Accessed: Jun. 2022. [Online]. Available: https://linux.die.net/man/1/tcptrace
Feature Article:
DOI. No. 10.1109/MAES.2023.3289928
Radar Challenges, Current Solutions, and Future Advancements for the Counter Unmanned Aerial Systems Mission

Arik D. Brown, Northrop Grumman, Linthicum, MD 21090 USA
Author's current address: Arik D. Brown, Northrop Grumman, Linthicum, MD 21090 USA (e-mail: arik. [email protected]). Manuscript received 29 August 2022, revised 11 May 2023; accepted 23 June 2023, and ready for publication 27 June 2023. Review handled by Daniel O'Hagan. 0885-8985/23/$26.00 © 2023 IEEE

INTRODUCTION

The advancement of unmanned aerial systems (UAS) has elevated them to be highly relied-upon assets for commercial and military applications. For the commercial/civilian sectors, UAS provide the ability to execute dangerous or difficult tasks safely and efficiently, saving time, money, and lives [1]. They support multiple applications such as public safety (e.g., police, firefighters, and other first responders), disaster mitigation, environmental protection, scientific research, and agriculture [1]. For military applications, a principal advantage of UAS is their ability to reduce the risk to humans, and thus to provide cost-effective military options that can be used when political or environmental conditions prohibit the use of manned systems [2]. Drones are also increasingly becoming a weapon of choice for nonstate groups that employ the technology for surveillance, battlespace management, propaganda, and aerial strike attacks, often to considerable effect [3]. The proliferation of UAS technology has made counter-drone systems ubiquitous in UAS conflicts. Although UAS have been a phenomenal technology advancement, their benefits have also been leveraged by adversaries with nefarious intent. Key advantages that nonstate actors exploit are listed below.

1) Affordability—For small UAS, such as a DJI Phantom, the cost of entry is low. These UAS are readily available worldwide and can be purchased over the internet. Many of them can be bought and deployed for low cost, individually or in swarms, with minimal sustainment. For larger UAS in the Group 2 to 3 category (see Table 1), nonstate actors can purchase these from other countries for far less than fighter or surveillance aircraft. An unmanned air force capability becomes achievable with UAS. Michael Kofman, a military analyst at the Center for Naval Analyses, points out, "An air force is a very expensive thing...and they permit the utility of air power to smaller, much poorer nations [4]." At least 95 countries possess drones, which can potentially furnish even poorly funded state actors with an aerial command of the battlespace that was previously unavailable [3].
2) Precision Navigation—UAS use GPS for navigation, which enables them to operate autonomously without an operator. Smaller UAS can be programmed with waypoints, flying precise routes to conduct surveillance or damage critical infrastructure [5]. Additionally, UAS can be flown in GPS-denied environments using onboard video cameras.
3) Payload Capacity—UAS can carry varying types of payloads, including multispectral imaging systems for surveillance and intelligence gathering, nonkinetic effectors for electronic attack, and kinetic effectors for hard-kill destruction [5].
4) Networked Lethality—UAS can provide real-time monitoring of geographical areas and real-time dissemination to other networked sensors and systems. With a group of UAS, a battlefield network can be employed for timely intelligence and precision strike of valued assets and infrastructure.

Over the years, various incidents involving nonfriendly drone attacks have caused severe damage and heightened security concern.

2013—A small quadcopter flew within feet of the German Chancellor and Defense Minister. The small UAS hovered briefly and then landed at
Chancellor Merkel's feet. Although the drone was harmless, it was an early sign of the potential for drone attacks [6].

2018—An assassination of Venezuelan President Nicolas Maduro was attempted. The attack used a retail drone purchased online and armed with military-grade explosives. The unsuccessful attack demonstrated the potential lethality of drone attacks [7].

2018—Gatwick Airport was shut down due to drone sightings at the airport. The incident took place during the travel period just before Christmas and led to the airport being closed for 30 h, disrupting 1000 flights and the holiday plans of thousands of travelers [8].

2019—The Saudi Aramco oil processing facility at Abqaiq was attacked. A swarm of drones, along with cruise missiles, was used to strike the oil infrastructure, shutting the facility down due to large fires and extensive damage [9].

2022—In Ethiopia's Tigray region, 19 people were killed over two days by UAS air strikes, and many more were injured. The drones used hovered before dropping bombs [10].
In addition to incidents like these, recent conflicts have shown how battlefields are being transformed by UAV technology. In the Nagorno–Karabakh War in 2020, the Azerbaijani forces, utilizing UAVs, had a decisive warfare advantage over Armenia. Losses confirmed via photographs or videos put Armenian losses at 185 T-72 tanks, 90 armored fighting vehicles, 182 artillery pieces, 73 multiple rocket launchers, 26 surface-to-air missile systems, 14 radars or jammers, 1 Su-25 warplane, 4 drones, and 451 military vehicles [4]. The definitive UAV advantage of the Azerbaijani forces forced a cease-fire and ended the war in 44 days [4]. In the conflict between Ukraine and Russia, UAVs have been used to varying degrees by both sides, with Ukraine employing them the most. UAVs have served a variety of purposes, from carrying out strikes to guiding artillery and recording video that feeds directly into information operations [11]. The use of drones has given Ukraine an edge over Russia, which is impressive given that Ukraine ranks fortieth in the world in defense spending [11]. Ukraine's fleet of Bayraktar TB-2s, Turkish-made military drones, has carried out numerous successful attacks against Russian forces, accounting for almost half of Russia's destroyed surface-to-air missiles and helping to sink the Moskva, the flagship of Russia's Black Sea Fleet [11]. Additionally, the scale of small commercial drone use in Ukraine is unprecedented, enabling cheap airborne surveillance and even strike capability [12].
Table 1.
UAS Group Categories

Group 1: weight < 20 lbs; altitude < 1,200 feet AGL; speed < 100 knots
Group 2: weight 21–55 lbs; altitude < 3,500 feet AGL; speed < 250 knots
Group 3: weight < 1,320 lbs; altitude < 18,000 feet AGL; speed < 250 knots
Group 4: weight > 1,320 lbs; altitude < 18,000 feet AGL; any speed
Group 5: weight > 1,320 lbs; altitude > 18,000 feet AGL; any speed
Figure 1. USMC's MADIS uses DRS RADA's multimission radars for 360° persistent coverage of UAS threats.
Due to the asymmetric nature of UAV threats, technology solutions that are cost effective, high performing, and size, weight, and power (SWaP) efficient are required to defend against small, medium, and large UAVs. The air defense systems that have traditionally been used to protect airspace are mostly designed with crewed aircraft in mind and are optimized for detecting, tracking, and shooting down large, fast-moving objects [3]. As a result, these traditional systems are challenged by small, slow, low-flying drones [3]. As UAV threats have evolved, the U.S. Department of Defense (DoD) has developed a variety of detection and countermeasure systems to combat UAVs. These systems are called counter unmanned aerial systems (C-UAS). This has driven the market for C-UAS, with over 537 systems on the market with varying levels of capability [3]. C-UAS are primarily composed of technologies that provide the following capabilities: detection/tracking, interdiction/mitigation, and command and control (C2). C-UAS detection and tracking are done with radar, electro-optical sensors, acoustic sensors, and passive RF detection. An excellent summary of these different technologies is provided in [3] and [13]. These sensors are
used to detect, identify, locate, and track UAVs. With the measurement of target tracks, classification processing is usually applied to provide additional threat information to the user [14], [15], [16], [17], [18], [19], [20], [21]. Mitigation technologies for interdiction can be divided into the following categories: soft kill (SK) and hard kill (HK). SK mitigation disables UAVs without physically destroying them and without collateral damage. These approaches span electronic attack (EA) via jamming (RF/GNSS) or spoofing, dazzling, and nets. HK mitigation involves physically destroying and/or damaging the threat UAV(s). HK approaches include directed energy [high-energy lasers (HEL) and high-power microwave (HPM)], munitions, and collision drones [3], [13], [22], [23]. Although rarely mentioned, the C2 in a C-UAS is very important. C-UAS typically employ heterogeneous combinations of detection/tracking and interdiction/mitigation subsystems. The C2 integrates the various subsystems to provide C-UAS capability. Examples of programs of record that employ C-UAS are the USMC's Marine Air Defense Integrated System (MADIS) (see Figure 1) and the U.S. Army's Maneuver Short Range Air Defense (M-SHORAD) (see Figure 2). The primary payload for effective C-UAS is a radar. It is the only detection/tracking system that provides target range, velocity, and angular location in a single sensor. Electro-optical and infrared (EO/IR) sensors, for example, do not provide range with a single sensor. Additionally, radars are able to provide day and night operation in all-weather environments. For C-UAS, other integrated payloads such as EO/IR cameras, EA sensors, and HK mitigation systems are cued based on the track information made available by the radar. Table 2 highlights key radar parameters for C-UAS. The parameters listed provide the performance needed for a radar system that will give the warfighter the required C-UAS battlefield capability.
Table 3 highlights why radars are the most robust sensor for detection and tracking with a comparison against alternate technologies.
Figure 2. U.S. Army’s M-SHORAD is a program of record that uses C-UAS technology.
Brown

Table 2.
Key C-UAS Radar Parameters That Ensure Maximum Radar Performance

Maximum detection range: Maximizing the detection range increases the decision-making reaction time for threat neutralization.
Minimum detection range: Providing coverage for close-in engagements is key for providing satisfactory coverage.
Elevation coverage: Radars with limited elevation angle coverage will have a coverage gap near zenith around the radar. This poses a danger for C-UAS targets.
Azimuth coverage: 360° AZ spatial coverage provides full capability for situational awareness.
Angle accuracy and resolution (range/azimuth): Directly impacts target tracking for C-UAS.
Number of simultaneous tracks: To protect against dynamic attacks from multiple angles, being able to track multiple targets per 90° sector is advantageous.
Operational capability - OTM: On-the-move (OTM) capability provides greater benefit than being only transportable. OTM means the radar is fully operational while the platform is moving.
Day and night operation: Radars detect and track with the same performance in day or night conditions, providing 24/7 availability.
All-weather operation: Radars are able to operate in various weather conditions such as rain and fog, providing persistent capability.

Table 3.
Detection/Tracking Technologies Key Performance Parameters

Key performance parameters compared across Radar, EO/IR, Audio, and Passive Detection: maximum detection range, minimum detection range, elevation coverage, azimuth coverage, angle accuracy, number of simultaneous tracks, operational capability - OTM, day and night operation, and all-weather operation. The colors in the original table indicate each technology's level of performance [blue (best) > green > yellow > red (worst)].

As C-UAS capabilities have advanced, more insight has been gained into the existing challenges specific to the C-UAS mission. An overview of these challenges will be expounded upon to provide an understanding of how they affect radar performance. Existing radar solutions will also be discussed, providing insight into how C-UAS radar challenges are being solved today. Finally, future advancements for C-UAS will be discussed, providing insight into what is on the technological horizon for C-UAS radars.
RADAR CHALLENGES AND EXISTING SOLUTIONS FOR C-UAS

SWaP

CHALLENGES
C-UAS radars for defense applications must support both base protection and mobile installations. For base protection, detection and tracking of UAS targets is required to protect critical infrastructure such as government facilities, airports, and military bases. For these types of fixed installations, SWaP is not tightly restricted. For mobile installations, this is not the case. Force expedition applications normally require 360° coverage on a vehicle. Examples of this include the U.S. Army's M-SHORAD [24] and mobile-low, slow, small unmanned aircraft integrated defeat system (M-LIDS) [25], and also the
MADIS [26]. M-SHORAD uses the Stryker, M-LIDS uses a mine-resistant all-terrain vehicle (M-ATV), and the MADIS platform is a joint light tactical vehicle (JLTV). For these vehicles and others, SWaP is restricted for the radar while at the same time requiring maximum detection range. Size is challenging because the radar must provide 360° coverage but not obstruct the view of other payloads on the vehicle. Additionally, for lightweight vehicles like the JLTV, there is limited space to mount a radar system. This forces the radar system employed to have a small footprint for the installation options of being mounted on the top of the vehicle or distributed around the vehicle. Weight has similar limitations as size. For vehicles like the Stryker, the C-UAS radar is not the only system on the platform. Other systems such as effectors, jammers, comms, etc., share space with the radar, in addition to making sure there is enough room for the crew to operate the vehicle. Because of this, the weight of each individual vehicle payload has to be minimized to meet the overall weight requirements. This is a challenge for radars that must provide enough effective radiated power (ERP) to meet maximum detection range requirements in a small form factor. For any vehicle platform, power is limited. Although the radar is important, there are many other critical vehicle systems that also require power. This means that the radar must be efficient in terms of power dissipation so that it requires little or no cooling. In extreme heat environments where the radar must operate continuously for long durations, this becomes critical for the reliability and maintainability of the radar. Additionally, the power required by the radar must fit into the overall power budget of the vehicle. Existing solutions to these SWaP challenges will be described next.
EXISTING SOLUTIONS
For a monostatic radar, detection SNR is a function of G², where G is the antenna gain. G can also be expressed by the equation:

G = 4πAf²/c².   (1)

In (1), A is the antenna area, f is the radar operational frequency, and c is the speed of light. Equation (1) shows that the antenna gain can be maximized by increasing the antenna size A and/or the operational frequency f. Selecting an operational bandwidth at higher frequencies enables achieving optimal antenna gain while minimizing antenna size (see Figure 3). Although operating at higher frequencies allows decreased antenna area for lower SWaP, the loss due to environmental effects such as rain increases, requiring increased transmit power. This is highlighted in Figure 4. As an example, if 20 dBi of antenna gain is required, looking at Figure 3, Ku-band might be selected to minimize the antenna area. However, Figure 4 shows that at
Figure 3. The antenna gain is selected based on maximum detection range requirements. Once the desired gain is determined, the operating frequency can be traded to minimize the antenna area as required based on size requirements.
Ku-band there is a detection range performance reduction of greater than 10% for ranges greater than a kilometer. This would cause the required transmit power to be increased to account for the loss. Size and weight are balanced with performance in the radar design by trading antenna area and frequency to achieve the required detection range performance. Table 4 shows a subset of companies that build and manufacture radars for C-UAS and the frequency bands of operation that are employed. Each company’s radars will have its own strengths and weaknesses based on the frequency selected.
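The gain-versus-frequency trade in (1) can be sketched numerically. This is a minimal illustration; the 0.1 m² aperture and the band center frequencies used below are assumed values.

```python
import math

def antenna_gain_dbi(area_m2: float, freq_hz: float) -> float:
    """Antenna gain from (1), G = 4*pi*A*f^2 / c^2, expressed in dBi."""
    c = 3.0e8  # speed of light, m/s
    g_linear = 4 * math.pi * area_m2 * freq_hz**2 / c**2
    return 10 * math.log10(g_linear)

# Same 0.1 m^2 aperture: moving from X-band to Ku-band raises the gain,
# or equivalently, the same gain fits in a smaller aperture at Ku-band.
gain_x = antenna_gain_dbi(0.1, 10e9)   # X-band, illustrative frequency
gain_ku = antenna_gain_dbi(0.1, 16e9)  # Ku-band, illustrative frequency
```

Because gain grows with f², each doubling of frequency buys 6 dB of gain for a fixed aperture, which is the lever Figure 3 depicts.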
MULTIPATH AT LOW ELEVATION ANGLES
CHALLENGES
Ground-based radars must contend with multipath for targets that are at low altitudes (small elevation angle relative to the radar). The IEEE radar definition for multipath is "the propagation of a wave from one point to another by more than one path" [35]. For radars, this occurs when the energy reflected from a target has more than one path to return back to the radar, as shown in Figure 5. The direct path is the desired return; however, the indirect path simultaneously provides a path for the signal to return with a different amplitude and phase than the direct path. The direct and indirect returns add together, causing performance degradation in angle accuracy and loss of signal. Multipath returns can be categorized as main beam (MB) or sidelobe (SL). SL multipath can be mitigated by weighting of the antenna SLs [36]; however, MB multipath mitigation is more challenging since it cannot be mitigated with sidelobe weighting. UAS are categorized into five different groups [37]. The groups are highlighted in Table 1 [38]. Radar systems
Figure 4. Operating at higher frequencies increases loss due to rain. This is accounted for by transmitting more power, which can impact the radar power system design.
employed for C-UAS can target subsets of the groups shown or all the groups. For UAS that fly above 1200 ft, SL multipath is dominant and can be mitigated, as mentioned previously, with SL weighting. For small UAS (sUAS) in the Group 1 (Figure 6) and Group 2 categories that fly at altitudes below 1200 ft, MB multipath is dominant. The radar beam must be scanned near the ground for low altitude coverage.
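The severity of the ground bounce can be quantified with the standard flat-earth two-ray approximation. This is a model sketch only; the radar height, target height, and range values below are illustrative, not values from the article.

```python
import math

def two_ray_path_difference(h_radar_m: float, h_target_m: float,
                            ground_range_m: float) -> float:
    """Flat-earth approximation: extra length of the ground-bounce
    (indirect) path relative to the direct path, valid when the
    range is much larger than the heights."""
    return 2 * h_radar_m * h_target_m / ground_range_m

def multipath_phase_rad(h_radar_m: float, h_target_m: float,
                        ground_range_m: float, wavelength_m: float) -> float:
    """Phase offset between direct and indirect returns. As the target
    moves, this phase sweeps, so the two returns alternately add
    constructively and destructively."""
    delta_r = two_ray_path_difference(h_radar_m, h_target_m, ground_range_m)
    return 2 * math.pi * delta_r / wavelength_m
```

For a low-flying target the path difference is a small fraction of a meter, yet it spans many wavelengths of phase at radar frequencies, which is why the lobing in Figure 5 is so pronounced at low elevation angles.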
EXISTING SOLUTIONS
An approach to multipath mitigation is to decrease the beamwidth of the radar antenna beam. The beamwidth of an antenna is a function of the largest length of the antenna in the dimension of interest, D, and the operating frequency f. This is expressed as

Beamwidth = k·c/(f·D).   (2)
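A numeric sketch of (2), assuming uniform illumination (k = 0.886) and illustrative frequency and aperture values:

```python
import math

def beamwidth_deg(freq_hz: float, length_m: float, k: float = 0.886) -> float:
    """Beamwidth from (2): theta = k * c / (f * D), in degrees.
    k = 0.886 corresponds to uniform illumination (no SL weighting)."""
    c = 3.0e8  # speed of light, m/s
    return math.degrees(k * c / (freq_hz * length_m))

# Doubling either f or D halves the beamwidth, and with it the MB
# multipath exposure near the ground.
bw_x = beamwidth_deg(10e9, 0.3)   # X-band, 0.3 m aperture (illustrative)
bw_ku2 = beamwidth_deg(20e9, 0.3) # doubled frequency, same aperture
```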
Table 4.
Companies That Manufacture Radars for C-UAS and Their Operational Frequencies

Company     Operational Frequency Band
DRS RADA    S-band [27]/X-band [28]
Echodyne    Ku-band/K-band [29]
Thales      X-band [30]
SRC         L-band [31]/S-band [32]/X-band [33]
Blighter    Ku-band [34]
Frequency is one of the primary design variables that drives performance of C-UAS radars.

In (2), k is the beamwidth factor and is equal to 0.886 for uniform illumination (no SL weighting) [36]. By designing the radar at higher frequencies, the radar beamwidth is decreased. This reduces the MB multipath discussed previously while also improving the angle accuracy. This does not come for free, because decreasing the radar beamwidth increases the time required to scan the FOV. This will be illustrated later when discussing maximizing detection range. Another approach to multipath mitigation is to decrease the elevation MB beamwidth by increasing the length of the radar in the elevation dimension. Similar to increasing the frequency, this also decreases the antenna beamwidth, minimizes MB multipath, and increases elevation (EL) angle accuracy. In many practical operational scenarios, multipath is most significant in the EL dimension, and the azimuth (AZ) dimension of the radar can be optimized to maintain a sufficient scan rate for performance. Figure 7 illustrates the dependence of the radar beamwidth on f and D. A software approach for multipath mitigation, in contrast to modifying the radar antenna, is to use algorithms for multipath suppression. Many of these algorithms attempt to characterize the reflected ground bounce from the indirect path and optimize the received signal from the direct path. A combination of narrow beamwidth and software algorithms is typically the most favored approach.

BIOLOGICAL VERSUS MAN-MADE TARGET CLASSIFICATION

CHALLENGES
A challenge for all C-UAS is differentiating between biological targets (birds) and man-made targets (sUAS). Birds fly at
Figure 5. Multipath illustration shows the direct and indirect paths for target reflections. The indirect path dynamically adds constructively and destructively, altering the estimated elevation angle measurement of the target. For C-UAS radars, this is primarily main beam multipath at elevation angles relative to the radar of less than 10°.
maximum speeds similar to sUAS (20–100 m/s) and have a radar cross section (RCS) that is also comparable to drones (−25 to −20 dBsm). This creates a source of ambiguity for the radar tracker and can lead to false positive and false negative classifications due to birds (see Figure 8). For C-UAS radar modes that simultaneously detect Group 1/2 and Group 3–5 UAS, this poses a problem. For larger targets that fly at speeds much greater than birds, classification is minimally impacted. However, if the requirements necessitate tracking slower moving Group 1 and 2 UAS, the improper classification of birds as UAS will clutter the air picture and degrade battlespace awareness. For C-UAS applications that focus only on Group 1/2 UAS, classification becomes a larger problem.
EXISTING SOLUTIONS The most prevalent solution for mitigating false bird detections is to couple the radar with an EO/IR sensor. The radar generates target tracks that are passed to a multispectral camera. The camera is cued in the direction of the radar track,
and visual confirmation is used to verify whether the track is a valid drone detection. Additionally, artificial intelligence/machine learning (AI/ML) is used for target classification. AI/ML classification algorithms for imagery are well understood and mature [40], [41]. These algorithms are applied to the images captured by an EO/IR sensor that is coupled with a radar in C-UAS. Autotracking cameras with AI/ML classification capability are a powerful mitigation solution for false positive bird tracks. Ideally, only the radar track data would be used, enabling operation without a separate camera sensor. This would minimize the SWaP of the overall C-UAS system in addition to the system complexity. Microdoppler is a type of processing that uses the time-varying doppler signature of target returns to determine if the track is a bird or a drone. Microdoppler [16], [42] is effective, but it requires a large SNR. This is because the returns of rotating blades on a quadcopter or any UAS with rotating blades are much
Figure 6. The quadcopter DJI UAS is an example of a low-cost and easily accessible drone that can be retrofitted and weaponized [39].

Figure 7. The radar array beamwidth is proportional to the array size and operational frequency. The beamwidth can be reduced by either increasing the antenna length and/or the operational frequency. This is depicted by the yellow arrow showing the reduction in beamwidth to less than 5° above the contour.
Figure 8. Example scenario with a single UAS and multiple birds. The birds cause a cluttered air picture if not properly classified.
smaller than that of the UAS body that the radar is designed to detect. This limits the detection range for which microdoppler is effective. Microdoppler also has limited capability for fixed-wing drones that do not have a large number of propeller blades for doppler discrimination.
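The blade-flash signature that microdoppler processing looks for can be sketched from first principles: the tip of a rotating blade produces the widest doppler spread. The rotor radius, spin rate, and wavelength below are illustrative assumptions, not values from the article.

```python
import math

def blade_tip_doppler_hz(rpm: float, blade_radius_m: float,
                         wavelength_m: float) -> float:
    """Maximum micro-doppler shift from a rotating blade tip:
    f_d = 2 * v_tip / wavelength, with v_tip = 2*pi*(rpm/60)*radius.
    The blade return smears over +/- f_d around the body doppler,
    which is the signature micro-doppler processing detects."""
    v_tip = 2 * math.pi * (rpm / 60) * blade_radius_m
    return 2 * v_tip / wavelength_m

# Illustrative quadcopter rotor at X-band (3 cm wavelength).
fd = blade_tip_doppler_hz(6000, 0.12, 0.03)
```

The spread scales linearly with rotor speed, so slow or few-bladed fixed-wing propellers yield a far weaker signature, consistent with the limitation noted above.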
MAXIMIZING DETECTION RANGE
CHALLENGES
The average power form of the radar range equation is shown below:

SNR = (T_CPI · duty cycle · P · G² · σ · λ²) / ((4π)³ · R⁴ · k · T₀ · F · L).   (3)

T_CPI is the coherent processing interval (CPI) time, duty cycle is the ratio of the pulsewidth to the pulse repetition interval (PRI), P is the peak transmitted power, G is the antenna gain, σ is the radar cross section (RCS), λ is the RF wavelength, R is the detection range, k is Boltzmann's constant, T₀ is 290 K, F is the noise factor, and L represents system losses. To maximize R, the following parameters are traded and optimized: P, G, T_CPI, and duty cycle. These parameters directly affect other radar performance parameters such as scan rate, and must be balanced accordingly. Rearranging (3) in terms of R better shows the dependencies of R on P, G, T_CPI, and duty cycle. This is highlighted in the following equation:

R = [ (T_CPI · duty cycle · P · G² · σ · λ²) / ((4π)³ · SNR · k · T₀ · F · L) ]^(1/4).   (4)
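Equation (4) can be checked numerically. The sketch below uses arbitrary illustrative parameter values (none are from the article) and demonstrates the fourth-root dependence of range on transmit power.

```python
import math

def detection_range_m(p_peak_w, gain_linear, rcs_m2, wavelength_m,
                      t_cpi_s, duty_cycle, snr_linear,
                      noise_factor=3.0, losses=5.0):
    """Detection range from (4); k is Boltzmann's constant, T0 = 290 K.
    All parameters are linear (not dB) quantities."""
    k_boltz, t0 = 1.380649e-23, 290.0
    num = (t_cpi_s * duty_cycle * p_peak_w * gain_linear**2
           * rcs_m2 * wavelength_m**2)
    den = (4 * math.pi)**3 * snr_linear * k_boltz * t0 * noise_factor * losses
    return (num / den) ** 0.25

# Illustrative: 16x the transmit power exactly doubles the detection range.
r_base = detection_range_m(10.0, 1000.0, 0.01, 0.03, 0.01, 0.1, 20.0)
r_16x = detection_range_m(160.0, 1000.0, 0.01, 0.03, 0.01, 0.1, 20.0)
```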
P and G
Detection range is proportional to the transmitted power P. Equation (4) shows that R ∝ P^(1/4). As an example, improving the detection range (using transmit power only) by a factor of two (a 100% increase) requires an increase in power by a factor of 16 (see Figure 9). SWaP constraints limit the allowable P because increasing the power too much can lead to a need for liquid cooling, which also increases the system weight and reduces reliability. These are undesirable for C-UAS. Maximizing G also increases the radar detection range and can be used to minimize the required P. Increasing G provides a quadratic improvement in SNR [see (3)] and improves the angle accuracy σ_BW, which is directly proportional to the ratio of the antenna beamwidth and SNR: σ_BW = BW/√(2·SNR). At first glance, this appears optimal. To rapidly scan the field of view (FOV), a larger beamwidth is desired. However, increasing the beamwidth reduces the antenna gain and decreases the angle accuracy. This is highlighted in Figure 10. Since G = 4πAf²/c², the antenna size A and the frequency f must be balanced to optimize G for detection range, angle accuracy, scan rate, and radar size/weight (see Figure 11).

T_CPI
The coherent processing interval time, when increased, directly increases SNR and maximizes detection range [see (4)]. Increasing T_CPI also improves doppler/velocity resolution. Increasing T_CPI, though, cannot be done without consideration of the scan rate. The scan rate can be defined as the inverse of the product of T_CPI and the number of beams required to scan the FOV (N_beams). For C-UAS, scan rates of 2 Hz or more and velocity resolution on the order of 0.5 m/s are typically desired to optimally maintain track continuity. This requires T_CPI to be selected carefully to balance detection performance, scan rate, and velocity resolution.
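The scan-rate and angle-accuracy relations in this trade can be written out directly. The numbers below are illustrative; the 2 Hz result matches the typical C-UAS scan rate mentioned above.

```python
import math

def scan_rate_hz(t_cpi_s: float, n_beams: int) -> float:
    """Scan rate as defined in the text: 1 / (T_CPI * N_beams)."""
    return 1.0 / (t_cpi_s * n_beams)

def angle_accuracy_deg(beamwidth_deg: float, snr_linear: float) -> float:
    """Angle accuracy sigma_BW = BW / sqrt(2 * SNR): narrower beams and
    higher SNR both sharpen the angle estimate."""
    return beamwidth_deg / math.sqrt(2 * snr_linear)

# Illustrative: a 10 ms CPI over 50 beams yields a 2 Hz scan rate.
# Halving the beamwidth doubles N_beams per scan dimension, halving
# the scan rate -- the trade highlighted in Figure 10.
rate = scan_rate_hz(0.01, 50)
sigma = angle_accuracy_deg(5.0, 50.0)
```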
Figure 9. The figure illustrates the required increase in power for improvement in detection range with constant antenna gain. Arbitrarily increasing the transmit power is not a viable option due to increased power dissipation and a potential need for liquid cooling. The required transmit power must be balanced accordingly.
Figure 10. Optimizing antenna gain to improve detection range requires a trade between scan rate of the FOV and angle accuracy. Increasing the gain decreases the antenna beamwidth for improved angle accuracy but at the cost of increased scan time. The radar beam in the figure is a pencil beam (equal beamwidths in AZ and EL).
Duty Cycle
Increasing the duty cycle also directly maximizes the detection range. This, though, must be traded with minimum detection range. Figure 12 shows two waveforms with their coverage overlaid. The second waveform's duty cycle is increased by 50%. Correspondingly, the minimum detection range has been increased. This poses a challenge for C-UAS applications that require a maximum detection range greater than 5 km with a minimum detection range on the order of 100 m.
EXISTING SOLUTIONS

P and G
P is typically determined by the semiconductor technology used for the transmit high-power amplifiers (HPAs). A semiconductor material is chosen that maximizes the allowable transmit power while also maximizing the transmit efficiency (minimizing power dissipation). Gallium Nitride
Figure 11. Transmit power and frequency are traded to balance detection range as a function of power and antenna gain. Increasing the operational frequency increases the antenna gain and reduces the required transmit power, and vice versa. The figure shows the maximum detection range as a function of frequency and transmit power for a 0.1 m² radar array.
(GaN) is typically the preferred choice for maximizing P. GaN-based amplifiers can provide a much higher output power in a smaller space [43] and transmit with better power efficiency at higher power levels than other semiconductors such as gallium arsenide (GaAs) [44]. Thus, for applications that require longer detection ranges without increasing the antenna size (and thereby gain), GaN is optimal because it is capable of transmitting high power with optimal power dissipation (maintaining SWaP).

T_CPI
As previously mentioned, increasing T_CPI maximizes SNR and thereby detection range, but also decreases the scan rate. To counter this, T_CPI is shortened to improve or maintain the scan rate at the expense of velocity resolution, which is inversely proportional to T_CPI. The velocity resolution value may degrade slightly but can be balanced to maintain adequate performance.

Duty Cycle
Increasing the duty cycle also directly maximizes the detection range, as previously discussed. For applications where minimum detection range is important, this is not favorable. Increasing the duty cycle by increasing the pulsewidth leads to an increase in the minimum detection range. Pulse compression [45] is nominally used to keep the pulsewidth the same while still increasing power on target for maximal detection range. However, in some instances a combination of both pulse compression and an increase in the pulsewidth is employed. The pulse compression gain offsets the amount of increase required by the pulsewidth and keeps the minimum detection range at a desirable level. Common pulse compression waveforms used are Barker codes [46] and linear frequency modulation (LFM) [47].
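As a sketch of why Barker codes are attractive here, the matched-filter (autocorrelation) output of the length-13 Barker code has a peak of 13 with no sidelobe larger than 1:

```python
def autocorrelation(code):
    """Aperiodic autocorrelation of a +/-1 code: the matched-filter
    output at each non-negative lag."""
    n = len(code)
    return [sum(code[i] * code[i + lag] for i in range(n - lag))
            for lag in range(n)]

# Length-13 Barker code: compression gain of 13 (~11.1 dB), with all
# autocorrelation sidelobes at magnitude <= 1.
BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
acf = autocorrelation(BARKER_13)
```

The 13:1 peak-to-sidelobe structure is what lets the radar put more energy on target without the range ambiguity penalty of simply lengthening an uncoded pulse.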
FOV COVERAGE
CHALLENGES
A standard approach for air defense applications is to create a scan fence with the radar. The maximum height of the scan fence is at the maximum detection range, as shown in Figure 13. The number of beams required to cover the scan fence region is a function of the radar's antenna beamwidths and drives the scan rate, as discussed previously. For C-UAS, UAS are detected in the search fence region and subsequently tracked. A vulnerability exists if a UAS target flies out of the search fence and the radar track is lost. When this occurs, the radar is unable to reacquire and establish a new track, and the UAS will go undetected by the radar. This is less of a problem for Group 3–5 UAS because, at the speeds they fly, dropping the track has a very low probability. However, for Group 1–2 drones flying at speeds less than 100 m/s with RCS profiles like birds, this is much more problematic. UAS target tracks can switch to bird tracks for birds in close proximity to the UAS, and at low speeds (< 30 m/s) with maneuvering flight profiles, maintaining track can be additionally challenging.
EXISTING SOLUTIONS
To maintain track outside of the search fence, the simplest solution is to add more search beams for increased elevation coverage. For mechanically steered radars, this is challenging: the time required to scan the radar antenna with a gimbal is prohibitive and would decrease the scan rate to undesirable levels. Active electronically scanned arrays (AESAs) enable an increased scan volume due to
Figure 12. Waveform 1 has a maximum range of 3000 m with a minimum range of 125 m. Increasing the pulsewidth of Waveform 1 results in Waveform 2. Waveform 2 has a longer pulsewidth and higher duty cycle for extended detection range; however, the minimum range has increased to 500 m.
SIMULTANEOUS TRACKING OF MULTIPLE UAS GROUPS
CHALLENGES
Some C-UAS applications require simultaneous tracking of UAS targets spanning different groups. Examples include Groups 1 and 2, Groups 3–5, and Groups 1–5. For applications requiring simultaneous tracking performance for Groups 1 and 2 or for Groups 3–5, a single mode can typically detect and track UAS with good performance. Simultaneously detecting and tracking Groups 1–5 is more challenging. The range of RCS sizes, maximum velocities, and maximum altitudes makes this a challenge for a single mode. This is because the number of beams required dramatically decreases the scan rate. The mode can be optimized for Groups 1 and 2 or Groups 3–5, but the nonoptimized group will suffer.
Figure 13. Scan fence example for air defense applications. C-UAS radars employ this approach.

their rapid scan capability [36]. AESA radars are therefore the preferred choice for C-UAS applications. If the scan rate degradation from adding more search beams is too great even for an AESA radar, another approach is to add dedicated track beams. When UAS targets that are being tracked leave the search fence coverage area, a dedicated track beam can be used for the UAS target while still executing track while scan (TWS). For this approach, an AESA is also required for operational implementation.

EXISTING SOLUTIONS
Overcoming this challenge is accomplished using two different approaches. The first approach is to use two different modes, each optimized for either Groups 1 and 2 or Groups 3–5. For a single radar, this approach does not provide simultaneous coverage and is suboptimal. However, in applications where multiple vehicles are used, the radar on each vehicle can use a different mode and collectively provide the required coverage. Figure 14 illustrates the modes for this approach. The second approach is to use a single mode to provide the simultaneous coverage. To overcome the challenges previously discussed, this can be accomplished by interleaving modes with waveforms that are tailored to different targets. For Groups 1 and 2, the waveform used would have a shorter maximum range, requiring a
Figure 14. Two modes, used simultaneously on different vehicles or interleaved in a single mode, provide the desired coverage of Groups 1–5.
shorter T_CPI and fewer required beams due to the lower maximum altitude. The waveform for Groups 3–5 would have a longer T_CPI due to the increased detection range, but an increased minimum detection range and minimum altitude. This allows the waveform to have an increased duty cycle and fewer beams to cover the modified FOV. Both waveforms are then optimized together to provide the required scan rate. To accomplish this, a radar like DRS RADA's MHR that uses AESA technology with a software-defined architecture is required (see Figure 15). AESAs provide rapid beam scanning, and the software-defined capability allows customizable modes for design flexibility.

Figure 15. Radars with AESA technology and software-defined architectures, such as the multimission hemispherical radar (MHR), enable challenging simultaneous tracking of Groups 1–5 UAS.

SWARMS

CHALLENGES
UAS swarms are a low-cost battlefield application that can overwhelm the common operational picture with targets; carry payloads such as explosives, cameras, and jammers; and pose a high-level threat to friendly forces [48]. UAS swarms have no limit on the number of individual UAS that comprise the swarm. This places a demand on the number of simultaneous targets that a radar must be able to detect. Radars that are limited to tracking fewer than 10–20 targets simultaneously will be vulnerable to larger swarms. Many C-UAS radars use multiple radar panels to provide 360° coverage. For these radars, an additional metric of simultaneous tracks per sector is also important. UAS swarms comprised of UAS that ingress at various angles over the radar system's field of regard (FOR) place an additional demand on the radar system to support a large track capacity. UAS swarms also challenge a radar's resolution capability. Swarms that fly in a tight formation will appear as a single target to a radar that does not have the requisite range and angle resolution to differentiate the individual UAS. This degrades battlefield awareness, as the operator will not be able to assess the composition and potential lethality of the threat swarm. With the increasing advancement of directed energy (DE) weapons, this can be extremely critical. DE HELs are best suited for neutralizing threat UAS one at a time, which is not optimal for UAS swarms. DE HPMs are optimal for simultaneous neutralization of multiple drones. It is important for a radar in a C-UAS to be able to discriminate drone swarms to aid in the decision-making process for selecting the right effector.

EXISTING SOLUTIONS
Doppler velocity discrimination is a key tool for drone swarms. Although frequency-modulated continuous wave (FMCW) radars can provide target doppler velocity [49], the following discussion will focus on pulse doppler radars. For high-performance tracking and long engagement ranges, pulse doppler radars are primarily employed for C-UAS applications. In a typical pulse doppler radar processing chain, the pulse doppler processing is done prior to detection. The processing gain that results from pulse compression and fast Fourier transform (FFT) processing is used to increase the SNR of the target return for a detection. The FFT pulse doppler processing measures the doppler velocity of the target, from which the absolute velocity can be calculated. After detection, angle of arrival (AoA) processing is performed, providing target angle measurement in EL and/or AZ. For swarms, if the velocity difference between drones is greater than the doppler velocity resolution, the individual targets will be resolved. This allows the target angles to be measured via AoA processing, even for targets measured in the same radar beam, which is the most stressing swarm condition. This is illustrated in Figure 16. Figure 16 shows that if the drones in the depicted swarm are traveling with a velocity differential less than the doppler velocity resolution, they will be detected in the same doppler bin and will not appear as separate targets. If the drones in the swarm are traveling at velocities that differ by an amount greater than the doppler velocity resolution, then they will be resolved. For AoA, monopulse processing is typically implemented; however, high-resolution array processing AoA techniques can be used as well [50], [51], [52], [53].
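The doppler velocity resolution that drives this discrimination can be sketched as follows. The wavelength and CPI values are illustrative; X-band with a 30 ms CPI lands on the ~0.5 m/s resolution cited earlier as typical for C-UAS.

```python
def velocity_resolution_ms(wavelength_m: float, t_cpi_s: float) -> float:
    """Doppler velocity resolution: dv = wavelength / (2 * T_CPI).
    Swarm targets in the same beam and range bin are resolved only if
    their radial-velocity difference exceeds this value."""
    return wavelength_m / (2 * t_cpi_s)

# X-band (3 cm wavelength, illustrative) with a 30 ms CPI.
dv = velocity_resolution_ms(0.03, 0.03)
```

Longer CPIs sharpen the doppler resolution, which is the same T_CPI lever traded against scan rate in the detection-range discussion above.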
FUTURE ADVANCEMENTS

Current radars in the C-UAS market have significant capabilities for addressing UAS threats. These capabilities were discussed in the previous section and include:

Simultaneous low SWaP and long range UAS detection;
Multipath mitigation;
Micro-Doppler processing for classification;
Track-While-Scan (TWS);
Waveform interleaving;
AESA technology;
Single multimission modes;
Pulse Doppler processing;
AoA processing.

Figure 16. In the figure, the drone swarm targets are in the same radar beam. They can be resolved only if they are separated in Doppler or if the difference in their range is greater than the range resolution of the waveform. If either criterion is met, the targets will be detected individually and AoA processing employed postdetection (i.e., monopulse). This implies waveforms that have a large bandwidth for high range resolution and long CPI dwells for high Doppler resolution. Additionally, higher operating frequencies enable better discrimination in Doppler as well.

IEEE A&E SYSTEMS MAGAZINE, SEPTEMBER 2023

Radar Challenges, Current Solutions, and Future Advancements for the Counter Unmanned Aerial Systems Mission

Looking toward the future, several technology areas will have an impact on new and emerging C-UAS radars. These forward-looking capabilities are described next.
ELEMENTAL AND QUASI-ELEMENTAL DIGITAL BEAMFORMING

Digital beamforming (DBF) is currently being implemented in C-UAS radars. An example of this is DRS RADA's nMHR, a multichannel DBF radar (see Figure 17). DBF provides the ability to do rapid FOV scanning with simultaneous narrow beam accuracy. Additionally, advanced algorithms can be implemented for performance enhancing capabilities such as adaptive array nulling, high resolution AoA algorithms, space-time adaptive processing (STAP), etc.

Figure 17. DRS RADA's nMHR is an X-band multimission DBF radar that supports C-UAS missions.

In practice, DBF is currently implemented primarily at the subarray (SA) level. With the continued advance of high-speed wideband digital receivers and existing wideband front-end electronics, elemental DBF (EDBF) is becoming more feasible. For short range C-UAS applications (< 3 km), the number of elements required is approximately less than a hundred, depending on the operational frequency. This is an optimal regime for EDBF implementation since the number of channels equals the number of array elements. For larger arrays (> 100 elements), SA DBF is typically implemented because the number of channels is more than can be practically implemented [54]. EDBF provides the ultimate capability because it offers the highest number of degrees of freedom (DOF) with which to process target returns. This provides improved performance for advanced algorithms and multiple simultaneous beams [54]. Impediments to EDBF implementation are usually cost and power dissipation. For small radars with a low number of elements/channels, the cost is becoming practical. Typical C-UAS waveforms have bandwidths on the order of several megahertz, which reduces the data rate for EDBF processing. These factors make EDBF very attractive for short range C-UAS. For longer C-UAS detection ranges (> 3 km), EDBF is not practical, as it would require greater than a hundred channels (for EDBF, the number of channels equals the number of elements) of streaming detections. A quasi-EDBF approach is to design repeatable and manufacturable SA building blocks with a
Brown
Figure 18. Subarrays enable quasi-EDBF performance. The subarray is comprised of a subset of array elements to form a repeatable building block. In this example, subarray building blocks of 2 × 2 elements and 4 × 4 elements are shown.

small number of elements. The SAs would be tiled together to form the complete array. The small number of elements gives the subarray behavior similar to that of a single element, enabling quasi-EDBF performance. Using the same repeatable building block lowers cost and enables scalability. As an example, consider a radar that requires 256 elements. By building a 2 × 2 SA building block (four elements), the number of channels to process is reduced from 256 to 64. Similarly, a 4 × 4 building block (16 elements) would reduce the number of channels from 256 to 16, as shown in Figure 18. For both examples, a family of radars of different sizes could be produced with the same repeatable building block, as shown in Figure 19.
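The channel-count arithmetic above can be sketched in a few lines. The helper below simply divides the element count by the subarray size; the 256-element case mirrors the example in the text.

```python
# Sketch of the subarray channel-count arithmetic described above
# (illustrative only): tiling an N-element array from k x k subarray
# building blocks reduces the number of digital channels by k*k.

def num_channels(total_elements: int, sa_rows: int, sa_cols: int) -> int:
    """One digital channel per subarray; assumes the array tiles evenly."""
    elements_per_sa = sa_rows * sa_cols
    assert total_elements % elements_per_sa == 0, "array must tile evenly"
    return total_elements // elements_per_sa

print(num_channels(256, 1, 1))  # 256 channels: elemental DBF (EDBF)
print(num_channels(256, 2, 2))  # 64 channels: 2 x 2 subarrays
print(num_channels(256, 4, 4))  # 16 channels: 4 x 4 subarrays
```

The trade is degrees of freedom for hardware cost: every increase in subarray size cuts the channel count (and data rate) but reduces the adaptive DOF available to the algorithms discussed above.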
AI/ML

As discussed previously, the primary methods of mitigating false bird tracks are to pair the radar with an EO/IR sensor and/or to apply micro-Doppler processing for classification. Using an accompanying EO/IR sensor in a C-UAS suite adds more hardware and more user responsibility to monitor both the radar tracks and the camera verification tracks. Micro-Doppler, although effective, requires a large SNR and is limited in maximum detection range. Using AI/ML to process radar detections for classification [55] without requiring a companion EO/IR sensor provides the ultimate flexibility. This can be done at the IQ level, detection level, track level, or a combination of the three. Paired with micro-Doppler, this could reduce false positives to less than 10%. An EO/IR sensor with AI/ML image classification could still be used in parallel for an even more robust total solution. In addition to bird-drone classification, UAS threat intent is also an application of radar-only AI/ML classification. Training on UAS flight profile behaviors for surveillance, loitering munitions, etc., would provide valuable situational awareness for the operator. This will require significant amounts of training data to train AI/ML algorithms but could provide valuable capability.
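As a hedged illustration of the track-level option mentioned above, the sketch below extracts two toy kinematic features from a radar track and applies an invented cutoff. The feature choice (speed steadiness) and the 1.0 m/s threshold are assumptions made purely for illustration; a fielded system would learn a classifier from labeled training data, as the text notes.

```python
# Hypothetical sketch of track-level feature extraction for radar-only
# bird/drone classification. Features and threshold are invented for
# illustration; real systems would train a classifier on labeled tracks.
import math
from statistics import mean, pstdev

def track_features(positions, dt):
    """positions: list of (x, y) in meters sampled every dt seconds.
    Returns (mean speed, speed variability) along the track."""
    speeds = [
        math.dist(positions[i], positions[i + 1]) / dt
        for i in range(len(positions) - 1)
    ]
    return mean(speeds), pstdev(speeds)

def looks_like_drone(positions, dt):
    """Toy rule: small UAS often hold a steadier speed than birds.
    The 1.0 m/s cutoff is illustrative only, not a fielded value."""
    _, speed_std = track_features(positions, dt)
    return speed_std < 1.0

steady = [(i * 10.0, 0.0) for i in range(20)]  # constant 10 m/s track
print(looks_like_drone(steady, 1.0))  # True
```

The same interface could be fed IQ-level or detection-level features instead; the article's point is that all three levels (and combinations) are candidates for learned classification.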
LOW PROBABILITY OF INTERCEPT (LPI)/LOW PROBABILITY OF DETECTION (LPD) WAVEFORMS

For expeditionary force applications, using a radar limits the ability to remain undetected. The radar scans the FOV with high gain array beams for maximum detection
Figure 19. Using a single SA building block, different radars can be manufactured by leveraging the subarray scalability. The radar topologies would differ only in the number of channels and subarrays.
range. To counter this, LPI/LPD waveforms would reduce the radar signature and minimize the radar's probability of being intercepted or detected [56], [57], [58], [59]. One way of accomplishing this is to use wide bandwidth waveforms that have a very low probability of detection. Another approach is the commonly known practice of frequency hopping. This requires the radar to operate over a wide band but reduces its probability of being intercepted. Techniques similar to those described will ultimately require AESA radars with wide operational bandwidths (greater than 1 GHz) to be effective. Wide operational bandwidths are discussed next.
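A minimal sketch of the frequency-hopping idea follows. The band edges, channel count, and hop count are invented for illustration (the article only states that bandwidths greater than 1 GHz are ultimately needed): an interceptor that cannot predict the pseudorandom sequence must search the entire band.

```python
# Illustrative frequency-hopping sketch: a pseudorandom hop sequence over
# a wide operational band. All numeric values are invented for illustration.
import random

def hop_sequence(f_lo_hz, f_hi_hz, n_channels, n_hops, seed=None):
    """Divide [f_lo, f_hi] into n_channels equal channels and draw a
    pseudorandom sequence of channel-center frequencies."""
    rng = random.Random(seed)
    step = (f_hi_hz - f_lo_hz) / n_channels
    centers = [f_lo_hz + (i + 0.5) * step for i in range(n_channels)]
    return [rng.choice(centers) for _ in range(n_hops)]

# A notional 1 GHz-wide X-band allocation split into 50 channels.
seq = hop_sequence(9.0e9, 10.0e9, 50, 8, seed=1)
print([f"{f/1e9:.3f} GHz" for f in seq])
```

In practice the seed would come from a cryptographic source shared between the radar's transmit and receive chains, not a fixed value as in this sketch.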
WIDE OPERATIONAL BANDWIDTHS

Wideband operation is becoming more important for mitigating intentional and unintentional RF interference. This is especially true for operational frequencies at X-band or higher. C-UAS radars are being deployed in locations where there is a large amount of interference from other RF systems. This creates an interference rich environment in which the radar must operate with acceptable performance. Radars with wide operational bandwidths will be able to find available spectrum regions in which to operate. From an Electronic Counter-Countermeasures (ECCM) perspective, wideband operation enables the radar to dynamically switch to different frequencies when being jammed. Work is currently under way on radars that autonomously switch frequencies and bandwidths when operating in RF signal rich environments [60]. This could be of added benefit for C-UAS radars; however, for optimum effectiveness, wideband operation is required.
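The "find available spectrum" behavior described above reduces, in its simplest form, to picking the least-occupied sub-band from an interference survey. The sub-band labels and power levels below are invented for illustration; a cognitive radar such as the one cited [60] would update such a survey continuously.

```python
# Illustrative sketch of spectrum selection: given coarse interference-power
# measurements per sub-band, operate in the quietest one. The survey values
# below are invented for illustration.

def quietest_subband(interference_dbm: dict) -> str:
    """Return the sub-band with the lowest measured interference power."""
    return min(interference_dbm, key=interference_dbm.get)

survey = {
    "9.0-9.2 GHz": -62.0,  # busy
    "9.2-9.4 GHz": -88.0,  # quiet
    "9.4-9.6 GHz": -71.0,
}
print(quietest_subband(survey))  # 9.2-9.4 GHz
```

A wider operational bandwidth simply gives this selection more rows to choose from, which is the article's point.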
NETWORKED INTEGRATION FOR CO- AND CROSS-MISSION APPLICATION

As the types of UAS threats increase in number and capability, using a single radar system may in some cases not be sufficient. The ability for radars to communicate with each other on the battlefield is an emerging capability. The radars can be performing the same or different mission(s) with the ability to communicate and share information [61], [62], [63]. This can reduce the burden on the C2 system by providing it with an integrated common operating picture without having to manage the multiple radars on its own. Below are some examples of the utility of radars communicating with each other in a networked configuration.

Multistatic Operation: One radar transmits while the other radars passively receive.
Layered Defense: Multiple radars optimized for short, medium, and long range UAS detection are spread out diversely to protect critical infrastructure.
Multimission: Multiple radars optimized for different missions, such as C-UAS and Counter Rockets, Artillery, and Mortars (C-RAM).
AI/ML Battle Space Awareness: C2 can use the networked track data to identify patterns and inform engagement response.

CONCLUSION

This article illustrates the importance of radars for the C-UAS mission. Radars are predominantly the primary sensor for C-UAS. They provide target range, angle, and velocity measurements to detect, track, and classify UAS threats. Radars provide persistent surveillance with 24/7 operational capability in all-weather environments at beyond-LOS ranges. With the increase of threat UAS such as loitering munitions, C-UAS has become an increasingly vital asset for the warfighter on the battlefield. This has placed a premium on high performance radars that can be the backbone of C-UAS defense. Understanding the challenges radars are presented with for C-UAS, and the current solutions, is of value for forecasting future advanced radar capability. The challenges and current solutions were discussed for:

SWaP;
Multipath at low elevation angles;
Biological versus man-made target classification;
Maximizing detection range;
FOV coverage;
Simultaneous tracking of multiple UAS groups;
Swarms.
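One low-level building block of the networked picture described above is registering each radar's local detections into a shared coordinate frame so C2 sees one integrated picture. The sketch below uses invented radar positions and a simple flat-earth x/y geometry, purely as an illustration of the idea.

```python
# Sketch of a networked-radar building block: converting each radar's local
# range/bearing detections into a shared Cartesian frame. Radar sites and
# the flat-earth geometry are simplifying assumptions for illustration.
import math

def to_common_frame(radar_xy, range_m, bearing_deg):
    """Convert a local (range, bearing-from-north) detection to the shared
    x/y frame (x east, y north), given the radar's surveyed position."""
    b = math.radians(bearing_deg)
    return (radar_xy[0] + range_m * math.sin(b),
            radar_xy[1] + range_m * math.cos(b))

# Two radars at different sites report the same target in local coordinates;
# in the common frame the reports coincide, enabling a single fused track.
print(to_common_frame((0.0, 0.0), 1000.0, 90.0))     # ~(1000, 0)
print(to_common_frame((2000.0, 0.0), 1000.0, 270.0))  # ~(1000, 0)
```

Real systems add time alignment, measurement covariances, and track-to-track association on top of this geometric step, but every configuration in the list above (multistatic, layered, multimission) depends on some form of it.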
Forward looking capabilities were described based on their relevance and impact to the C-UAS mission. These capabilities are based on existing and emerging trends in C-UAS defense. The future advancements described were:

Elemental and quasi-elemental digital beamforming;
AI/ML for radar-only UAS classification in the presence of birds;
LPI/LPD waveforms;
Wide operational bandwidths;
Networked integration for co- and cross-mission application.

Many of these capabilities are starting to be incorporated in new radars coming to market. As the C-UAS mission continues to advance, these capabilities will mature and advance as well.
REFERENCES

[1] Association for Unmanned Vehicle Systems International, "The benefits of unmanned aircraft systems: Saving time, saving money, saving lives," 2016. [Online]. Available: https://epic.org/events/UAS-Uses-Saving-Time-Saving-Money-Saving-Lives.pdf
[2] D. Glade, Unmanned Aerial Vehicles: Implications for Military Operations. Islamabad, Pakistan: Center Strategy Technol., Air War College, Air University, 2000.
[3] A. Michel, Counter-Drone Systems. Annandale-on-Hudson, NY, USA: Bard College, 2019.
[4] R. Dixon, "Azerbaijan's drones owned the battlefield in Nagorno-Karabakh—and showed future of warfare," Washington Post, Nov. 11, 2020.
[5] B. Wilson et al., Small Unmanned Aerial System Adversary Capabilities. Santa Monica, CA, USA: RAND Corporation, 2020.
[6] S. Gallagher, "German chancellor's drone 'attack' shows the threat of weaponized UAVs," Ars Technica, Sep. 18, 2013.
[7] N. P. Walsh, N. Gallon, E. Perez, D. Castrillon, B. Arvanitidis, and C. Hu, "Inside the August plot to kill Maduro with drones," CNN, 2019. [Online]. Available: https://www.cnn.com/2019/03/14/americas/venezuela-drone-maduro-intl/index.html
[8] N. Lomas, "Last year's Gatwick drone attack involved at least two drones, say police," TechCrunch, 2019. [Online]. Available: https://techcrunch.com/2019/09/27/last-years-gatwick-drone-attack-involved-at-least-two-drones-say-police/
[9] B. Hubbard, P. Karasz, and S. Reed, "Two major Saudi oil installations hit by drone strike, and US blames Iran," The New York Times, Sep. 15, 2019.
[10] "Ethiopia: 19 people killed in latest drone strikes in Tigray," 2022. [Online]. Available: https://www.theguardian.com/world/2022/jan/11/ethiopia-19-people-killed-in-latest-dronestrikes-in-tigray
[11] Z. Kallenborn, "Seven (initial) drone warfare lessons from Ukraine," Modern War Institute, West Point, NY, USA, 2022. [Online]. Available: https://mwi.usma.edu/seven-initial-drone-warfare-lessons-from-ukraine/
[12] M. Burgess, "Small drones are giving Ukraine an unprecedented edge," Wired, 2022. [Online]. Available: https://mwi.usma.edu/seven-initial-drone-warfare-lessons-from-ukraine/
[13] J. Wang, Y. Liu, and H. Song, "Counter-unmanned aircraft system(s) (C-UAS): State of the art, challenges, and future trends," IEEE Aerosp. Electron. Syst. Mag., vol. 36, no. 3, pp. 4–29, Mar. 2021.
[14] A. Bernardini, F. Mangiatordi, E. Pallotti, and L. Capodiferro, "Drone detection by acoustic signature identification," Electron. Imag., vol. 2017, no. 10, pp. 60–64, 2017.
[15] S. Jeon, J.-W. Shin, Y.-J. Lee, W.-H. Kim, Y. Kwon, and H.-Y. Yang, "Empirical study of drone sound detection in real-life environment with deep neural networks," in Proc. 25th Eur. Signal Process. Conf., 2017, pp. 1858–1862.
[16] V. Sineglazov, "Multi-functional integrated complex of detection and identification of UAVs," in Proc. IEEE Int. Conf. Actual Problems Unmanned Aerial Veh. Develop., 2015, pp. 320–323.
[17] A. Coluccia et al., "Drone-vs-bird detection challenge at IEEE AVSS2017," in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveill., 2017, pp. 1–8.
[18] A. Schumann, L. Sommer, J. Klatte, T. Schuchert, and J. Beyerer, "Deep cross-domain flying object classification for robust UAV detection," in Proc. IEEE 14th Int. Conf. Adv. Video Signal Based Surveill., 2017, pp. 1–6.
[19] H. Fu, S. Abeywickrama, L. Zhang, and C. Yuen, "Low-complexity portable passive drone surveillance via SDR-based signal processing," IEEE Commun. Mag., vol. 56, no. 4, pp. 112–118, Apr. 2018.
[20] N. Mohajerin, J. Histon, R. Dizaji, and S. L. Waslander, "Feature extraction and radar track classification for detecting UAVs in civilian airspace," in Proc. IEEE Radar Conf., 2014, pp. 0674–0679.
[21] G. J. Mendis, T. Randeny, J. Wei, and A. Madanayake, "Deep learning based Doppler radar for micro UAS detection and classification," in Proc. IEEE Mil. Commun. Conf., 2016, pp. 924–929.
[22] C. Lyu and R. Zhan, "Global analysis of active defense technologies for unmanned aerial vehicle," IEEE Aerosp. Electron. Syst. Mag., vol. 37, no. 1, pp. 6–31, Jan. 2022.
[23] H. Kang, J. Joung, J. Kim, J. Kang, and Y. S. Cho, "Protect your sky: A survey of counter unmanned aerial vehicle systems," IEEE Access, vol. 8, pp. 168671–168710, 2020.
[24] J. Allen, "M-SHORAD system bolsters Army's air defense capabilities," U.S. Army, 2021. [Online]. Available: https://www.army.mil/article/245530/m_shorad_system_bolsters_armys_air_defense_capabilities
[25] A. Deshpande, "M-LIDS mobile-low, slow, small unmanned aircraft integrated defeat system," 2022. [Online]. Available: https://www.army-technology.com/projects/m-lids-mobile-low-slow-small-unmanned-aircraft-integrated-defeat-system/
[26] Admin, "US marine corps MADIS remote weapon station program kicks off US production," MilitaryLeak, 2022. [Online]. Available: https://militaryleak.com/2022/05/31/us-marine-corps-madis-remote-weapon-station-program-kicks-off-us-production/
[27] RADA USA, "MHR radar," 2022. [Online]. Available: https://www.rada.com/products/mhr
[28] "nMHR radar," 2022. [Online]. Available: https://www.rada.com/products/nmhr
[29] Echodyne, "EchoGuard and EchoShield radars," 2022. [Online]. Available: https://www.echodyne.com/defense/counter-uas-radar/
[30] Thales, "Ground observer 80 radar," 2021. [Online]. Available: https://www.thalesgroup.com/sites/default/files/database/document/2021-10/GO80_datasheet_R0.pdf
[31] SRC, "AESA50 multi-mission radar," 2021. [Online]. Available: https://www.srcinc.com/products/radar/aesa50-multi-mission-radar.html
[32] SRC, "SkyChaser on-the-move multi-mission radar," 2021. [Online]. Available: https://www.srcinc.com/products/radar/skychaser-on-the-move-radar.html
[33] SRC, "Gryphon family of radars," 2022. [Online]. Available: https://www.srcinc.com/products/radar/gryphon-radars.html
[34] Blighter, "A800 3D multi-mode radar," 2022. [Online]. Available: https://www.blighter.com/products/a800-3d-multi-mode-radar/
[35] IEEE Standard for Radar Definitions, IEEE Standard 686-2017 (Revision of IEEE Standard 686-2008), 2017.
[36] A. D. Brown, Electronically Scanned Arrays: MATLAB Modeling and Simulation. Boca Raton, FL, USA: CRC Press, 2012.
[37] L. King, "DoD unmanned aircraft systems training programs," International Civil Aviation Organization 12-S-0493, 2015.
[38] "Unmanned aerial vehicles (UAVs) classes," 2020. [Online]. Available: https://www.globalsecurity.org/intell/systems/uav-classes.htm
[39] V. Valkov, "DJI phantom 2 drone image," 2015. [Online]. Available: https://www.siliconrepublic.com/machines/iot-dji-microsoft-nokia
[40] Q. Dong and Q. Zou, "Visual UAV detection method with online feature classification," in Proc. IEEE 2nd Inf. Technol., Netw., Electron. Autom. Control Conf., 2017, pp. 429–432.
[41] D. Lee, W. G. La, and H. Kim, "Drone detection and identification system using artificial intelligence," in Proc. Int. Conf. Inf. Commun. Technol. Convergence, 2018, pp. 1131–1133.
[42] S. Björklund, "Target detection and classification of small drones by boosting on radar micro-Doppler," in Proc. 15th Eur. Radar Conf., 2018, pp. 182–185.
[43] M. Lamarche, "The benefits and challenges of using GaN technology in AESA radar systems," Military Embedded Systems, 2022. [Online]. Available: https://militaryembedded.com/radar-ew/rf-and-microwave/the-benefits-and-challenges-of-using-gan-technology-in-aesa-radar-systems
[44] A. D. Brown, Active Electronically Scanned Arrays: Fundamentals and Applications: Transmit Receive Modules. Hoboken, NJ, USA: Wiley, 2022.
[45] M. N. Cohen, "Pulse compression in radar systems," in Principles of Modern Radar. Berlin, Germany: Springer, 1987, pp. 465–501.
[46] D. Adamy, "EW 101: Pulse compression by Barker code," J. Electromagn. Dominance, vol. 45, no. 8, pp. 36–38, 2022.
[47] M. A. Richards, J. Scheer, W. A. Holm, and W. L. Melvin, Principles of Modern Radar, vol. 1. Citeseer, 2010.
[48] D. Hermann, "Swarm: A drone wars story," U.S. Naval Institute Blog, 2022. [Online]. Available: https://blog.usni.org/posts/2022/06/07/swarm-a-drone-wars-story
[49] M. Jankiraman, FMCW Radar Design. New York, NY, USA: Artech House, 2018.
[50] S. Haykin, Array Signal Processing. Hoboken, NJ, USA: Prentice-Hall, 1985.
[51] S. U. Pillai, Array Signal Processing. Berlin, Germany: Springer, 2012.
[52] A. Pezeshki, B. D. Van Veen, L. L. Scharf, H. Cox, and M. L. Nordenvaad, "Eigenvalue beamforming using a multirank MVDR beamformer and subspace selection," IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1954–1967, May 2008.
[53] W. L. Melvin and J. Scheer, Principles of Modern Radar: Advanced Techniques. Chennai, India: SciTech Pub., 2013.
[54] A. D. Brown, Active Electronically Scanned Arrays: Fundamentals and Applications. Hoboken, NJ, USA: Wiley, 2021.
[55] S. Roychowdhury and D. Ghosh, "Machine learning based classification of radar signatures of drones," in Proc. IEEE 2nd Int. Conf. Range Technol., 2021, pp. 1–5.
[56] K. A. Lukin, "Radar design using chaotic and noise waveforms," in Proc. IEEE Int. Waveform Diversity Des. Conf., 2006, pp. 1–5.
[57] R. M. Narayanan and N. S. McCoy, "Delayed and summed adaptive noise waveforms for target matched radar detection," in Proc. 22nd Int. Conf. Noise Fluctuations, 2013, pp. 1–4.
[58] J. R. van der Merwe, W. P. du Plessis, F. D. V. Maasdorp, and J. E. Cilliers, "Introduction of low probability of recognition to radar system classification," in Proc. IEEE Radar Conf., 2016, pp. 1–5.
[59] J. Yu and Y.-D. Yao, "Detection performance of chaotic spreading LPI waveforms," IEEE Trans. Wireless Commun., vol. 4, no. 2, pp. 390–396, Mar. 2005.
[60] J. A. Kovarskiy, B. H. Kirk, A. F. Martone, R. M. Narayanan, and K. D. Sherbondy, "Evaluation of real-time predictive spectrum sharing for cognitive radar," IEEE Trans. Aerosp. Electron. Syst., vol. 57, no. 1, pp. 690–705, Feb. 2021.
[61] Q. Liu, Z. Liu, R. Xie, S. H. Zhou, and Y. F. Liu, "Radar assignment for stealth targets detection and tracking based on BPSO in air-defense radar network," in Proc. IET Int. Radar Conf., 2013, pp. 1–5.
[62] C. Shi, Z. Shi, S. Salous, and J. Zhou, "Joint optimization of radar assignment and resource allocation for target tracking in phased array radar network," in Proc. CIE Int. Conf. Radar, 2021, pp. 2445–2447.
[63] T. J. Nohara, P. Weber, G. Jones, A. Ukrainec, and A. Premji, "Affordable high-performance radar networks for homeland security applications," in Proc. IEEE Radar Conf., 2008, pp. 1–6.
The Virtual Distinguished Lecturer Program (VDLP) offers AESS participants and the aerospace and electronic systems community the opportunity to hear from our respected Distinguished Lecturers from around the world, both live and on demand. Registration is free for all webinars. If you are unable to attend the "live" virtual events, the presentations will be available after the event.

Scan to Register: ieee-aess.org/vdl

100+ DL talk titles to choose from in a variety of technical fields

38 available DLs providing lectures online and in person at your next meeting, chapter event, and more

UPCOMING WEBINARS

THURS. 14 SEPTEMBER, 11 AM ET/3 PM UTC: Advances in Detect and Avoid for Unmanned Aircraft Systems and Advanced Air Mobility (Giancarmine Fasano)
THURS. 21 SEPTEMBER, 11 AM ET/3 PM UTC: Recent Research on Deep Learning Based Radar Automatic Target Recognition (Uttam Kumar Majumder)
THURS. 28 SEPTEMBER, 11 AM ET/3 PM UTC: An Introduction to Quantum Computing and Data Fusion (Felix Govaers)
TUES. 10 OCTOBER, 11 AM ET/3 PM UTC: Measurement Extraction for a Point Target from an Optical Sensor (Yaakov Bar-Shalom)
THURS. 12 OCTOBER, 11 AM ET/3 PM UTC: Adaptive Radar Detection (Antonio De Maio)
THURS. 26 OCTOBER, 11 AM ET/3 PM UTC: Multiple-Hypothesis Tracking (Stefano P. Coraluppi)
THURS. 2 NOVEMBER, 11 AM ET/3 PM UTC: Sense and Avoid Radar (Lorenzo Lo Monte)
WED. 15 NOVEMBER, 11 AM ET/3 PM UTC: Ontological Decision-Making Support for Air Traffic Management (Carlos C. Insaurralde)
WED. 29 NOV., 11 AM ET/4 PM UTC: Towards Trustworthy Autonomy: How AI Can Help Address Fundamental Learning and Adaptation Challenges (Gokhan Inalhan)

Full schedule is available on ieee-aess.org/vdl
DOI. No. 10.1109/MAES.2023.3289785
Interview

Interview With Roy Streit
Interviewer: Stefano P. Coraluppi , Systems and Technology Research, Woburn, MA 01801 USA
Index Terms—Information fusion, target tracking, signal processing

Stefano: Roy, I am honored to be the one to conduct this interview with you. Can you start by telling us a bit about when and where you were born, and memories of your childhood?

Roy: I was born in Oklahoma, but I grew up in the West Texas oilfields (the Permian Basin, for those who know the Southwest). We lived where the big rigs were drilling, so we moved a lot. I can date my childhood memories reliably because I can remember where I was at the time. It is a little bit like archeology. My early memories are filled with desert. Heat. Sand. Sun. No open water for a hundred miles in any direction. Dust bowl like sandstorms that pitted the paint on cars. Dramatic weather. Squall lines marching across the desert, and timing when to go inside. Towering anvil thunderheads, with lightning unlike any I have seen in the northeast. The clean, sweet smell of long dry soil moistened by the first raindrops. Glorious sunsets. Strange insects. When I got my driver's license (age 14), I would drive to the edge of the Caprock at high noon to see the horizon stretch away 40 miles in one direction and 15 in the other. Why high noon? To avoid the rattlesnakes, who were smarter than me and stayed out of the noonday sun. The air was crystal and nothing manmade could be seen.

NATO lecture series in Rome (2022).

This was a fantastic landscape for a kid to grow up in. The problem was that those who did not own "mineral rights" to their land, if they owned land, were poor. Much later I realized that there was almost no middle class, just like Appalachia. I was not in the middle. My parents were wonderful people. My father was very friendly and always happy to talk. He shared a warm smile with everyone he met. He could talk easily to anyone. He loved to laugh. He was down to earth. He was kind and understanding, and he accepted people exactly as they were. Not once did I hear him say a hateful word about anyone. He kept his opinions strictly to himself, but his stories spoke volumes. I loved his stories, which he never repeated even in his later years. I deeply admired his instinctive love and respect for everyone he met. My mother was a different kettle of fish. She was reserved and smiled as needed. She had opinions that she guarded closely. She had a pithy, wickedly sharp tongue, but she policed it well and few ever saw it. She wanted to go to college, but she graduated high school in the depths of the Great Depression. Instead, she went to work in the local shoe factory. That left a mark, but she was tenacious. Some 20 years later, she was able to go to college and get a nursing degree. She worked as a nurse in the local (now regional) hospital for the next 25 years. My parents both wanted to see their children get the education they could not, and to see them (in their words) "stand on their own two feet." My brother and I were not the first in the family to attend and graduate from college—that unique honor belongs to my mother. On graduation day, my father was teary, and my mother beamed. Their profound love of family and deep inner

Author's current address: Stefano P. Coraluppi is with Systems and Technology Research, Woburn, MA 01801 USA (e-mail: [email protected]). Manuscript received 3 May 2023; accepted 23 June 2023, and ready for publication 3 July 2023. Review handled by Daniel O'Hagan. 0885-8985/23/$26.00 © 2023 IEEE
strength made possible many things that otherwise might not have been.

Roy, the day the earth stood still (1948).

Stefano: Please tell us a bit about how you first developed an interest in mathematics, science, and engineering.

Roy: I liked history and storytelling. They made me ask questions. I wanted (needed?) to understand what I was seeing, hearing, smelling. I was very shy, much too shy to ask questions. (For those who know me now, that must be hard to believe, but it is true—I have report cards to prove it.) No one talked about what mattered to me, but even if I had asked, they would have been puzzled by my questions. A lot of them were weird. One question I fondly recall asking myself, while looking at my bowl of breakfast Cheerios, "Which one is the last Cheerio I will eat?" There had to be one, but only by eating the bowl could I find it. I was 4 (I remember what I saw out of the window). And then Sputnik happened in October 1957. I was in the 4th grade. We had had a television for less than a year. They called Sputnik a new moon, and they said you could see it overhead. I went outside at the appointed time, and there it was. A bright dot in the dark sky moving fast. I remember thinking that was "almost" interesting. But in December of that same year, I watched on TV as an early Vanguard rocket rose 4 feet in the air before settling almost straight down into an inferno of flames. I thought that was very interesting. I recall thinking something vaguely like "they don't know what they're doing," which turned into "I want to help them." To do that I knew I needed to learn a few things. In the sixth grade, I discovered that the U.S. Army Artillery and
Missile School in Ft. Sill, OK, ran an amateur rocket testing facility. I wrote them a letter (3 cents postage), and they sent me a small booklet, "A Guide to Amateur Rocketry." I am reading it again, now! The chapters on propellants, rocket engine design, ignition systems, aerodynamics, launchers, and performance analysis are written for high school and college students. I understood none of it back then, but it became a guide to what I needed to learn. I did a lot of reading on my own. I loved science and engineering, but math was a dull thing until I entered Mr. Kuser's math class in the 12th grade. He was a retired Dallas lawyer who taught by the Socratic method, just as he had been taught in law school. (Back in the 1930s, it was common for lawyers to get dual degrees in math and law.) The very first day, he asked some poor soul, "What is the reciprocal of 5?" Answer, "it's the number upside down." Mr. Kuser went to the board, carefully drew the number 5, and then he drew it upside down. "But . . ." protested the student. "You had your chance," said Mr. Kuser, and moved to the next student, whose answer was, "It's one over that number." Mr. Kuser stretched and wrote the number 1 far above the 5 already on the board. "But. . ." said the student. "You had your chance," said Mr. Kuser. And so it went until he got the answer he wanted. When he got to me, he asked, "What is a function?" I had no answer, and I was deeply mortified. For most students, Mr. Kuser was not a good teacher, but for me he was ideal. By Christmas, I was reading an inspiring book by Courant and Robbins ["What is Mathematics?" (1941)] and teaching myself calculus (Granville, Smith, and Longley). The following Christmas found me reading a little book by Kamke ["Theory of Sets"
Roy's sixth grade Ft. Sill roadmap (1958).
The future begins (1981).
(Dover ed., $1.35)]. Math won me over totally—for beauty's sake—and physics came along for the ride. Learning was a great joy and a consolation. It gave me the courage to keep going.

Stefano: What are your memories of your time at East Texas State (B.A. in mathematics and physics) and University of Missouri (M.A. in mathematics)?

Roy: You did not mention Stevens Institute of Technology in Hoboken, NJ. I was there for the summer of 1968, in between East Texas and Missouri. Stevens sits atop a cliff in New Jersey across from midtown Manhattan. I watched as the SS United States and the SS France sailed up the Hudson and docked in Manhattan, as they did several times that summer. What a magnificent sight! I often took the tube (the name the locals give to the tunnels that carry subway trains between Manhattan and Jersey City) over to Manhattan, sometimes alone, often with other students at Stevens. I spent several delightful evenings at the Bitter End in Greenwich Village, which had many folk singers. Joni Mitchell performed there one evening, and I was sitting right in front of her in the first row. How did that happen? Pure luck and the fact that she was almost unknown at the time. Anyway, a guitar string broke, and while she was fixing it, she started chatting with me. (I like to think she was flirting, and I was too dumb to know it at the time.) I have many wonderful memories of that summer (though some are sad, like the RFK funeral at St. Patrick's.) So how did I get to Stevens? I had a small NSF undergraduate research participation grant ($600), one that I found out about and applied for entirely on my own. Why Stevens? Because it was the only program I knew about (no Internet, folks). Where was Stevens? I did not care—it was not Texas. How did I get there?
On my very first airplane flight to anywhere. It was also my first trip “East of the Mississippi.” It was the summer of 1968. For those who did not experience 1968, be thankful. It was a Dickensian “best of times/worst of times” year.

Stevens changed my life. East Texas (now Texas A&M University at Commerce) was not a good school at that time, and I felt trapped and more than a little bit angry. Stevens helped me escape. I made one of my professors at Stevens so annoyed he would not meet with me again. (I was late for a meeting, and he missed his train home. I had no clue what that meant.) He did give me a problem to work on, a good one too. It was to prove the existence of an optimal control for a linear system of differential equations. Wow! That was very different from anything I had seen before. When I finally settled into the problem with a different professor, I solved the problem exactly as the first professor had outlined to me—it was my first encounter with convex functionals and Hilbert spaces. I loved it. I also wrote it up in a nice way and typed it myself, painfully, one special character at a time (this was long before LaTeX). To this day, I have no idea why I did that. When I showed it to the new professor, he was shocked, as he quite reasonably had expected nothing from me. My fellows in the program watched me write it and were emotionally supportive (I needed it, too).

When the summer ended, I went back to East Texas, bereft at the loss of my new friends. But I had learned a lot and discovered that I had the audacity to slap a cover page on that paper and submit it as an honors thesis. It was almost too late, and a little finagling was needed. The department had never seen anything like it. I graduated with honors. As you can tell, I do not particularly enjoy speaking of East Texas State University. The University of Missouri is a very different story.
I spent the first year (1968–1969) making up deficits caused by East Texas, but the next year was a treasure. I spent the summer of the in-between year (1969) in New London, CT. I will talk about that in a moment. The second year (1969–1970) at Mizzou (what they call the state university in Columbia) was special and challenging. I got As from professors who very rarely gave As. I loved analysis, but group theory with its upper central descending series was a bore. Lots of good memories.

One story I rarely tell is my involvement in a student demonstration following the Kent State murders. A large group of students assembled in the Quad. There were a couple of hundred, which was very large by Missouri standards. It was tense. The students were really angry. The Quad happens to be next to the residence of the university president. At one point someone shouted, “Let’s go talk to the president” and headed that way. I was first in line behind him. Being too naïve for words, I really did want to hear what the president had to say. We arrived
SEPTEMBER 2023
Coraluppi
First born, a bridge, and Stanford (1982).
on his porch and stopped at the front door. The porch was filling up behind us. What now? We looked at each other. We had a choice—open the door or ring the doorbell. He rang the doorbell. The president opened the door himself. I learned two things instantly: one was that I was a very lucky young man, for behind him and nearly out of sight were four state troopers, armed. The other was that the president was a remarkable man, with no small measure of courage. We talked and dispersed, all very peacefully. I soon found out that someone took a picture of me sitting on the porch and it was published in a local paper of some kind. I never saw it myself, but it seemed like everyone else did. I graduated in 1970, in part because a man rang a doorbell. Had he opened the door instead, the state troopers would have pounced. Enough said.

Stefano: What brought you to NUWC in 1970 and what are your best memories of your time there? What were the major challenges? Tell us a bit about the projects on which you worked.

Roy: Well, you would not be at all surprised to hear me say that I got there by accident. I had two summer intern jobs; the first was the summer of 1969. One evening in the spring of that year, I was wandering around Jesse Hall (the admin building at Mizzou) after hours with more than one beer in me. How did I get in? The door was unlocked! Anyway, I bumped into a set of mail slots and grabbed onto a brochure from the U.S. Navy Underwater Sound Laboratory (USN/USL) in New London, CT. It was a summer intern job. In that state, I recall thinking it might be fun to work underwater. I knew better the next morning, but I applied anyway. They accepted. I had $50 to my name, so I used the offer letter to borrow enough money to get me to my first paycheck. You would not be surprised by now to hear I did not care for the job, but the size of my first “real” paycheck astounded me. This was the summer of
1969. I watched the moon landing on a TV in the common room of a women’s college dormitory. I nonchalantly relegated this astonishing feat of astronautical engineering to the category of “of course they can do it” because I somehow thought that would impress the ladies. My penance for that bonehead comment, which betrayed what I really thought, was to spend my career working with engineers.

Summer of ’69 over, I returned to Mizzou. By the time I graduated from Mizzou in 1970, I knew that I liked having money in my pocket and needed a break from being a student. The booming economy of the 1960s had gone bust, and, finding no jobs, I reluctantly returned to NUWC for my second summer intern job. I told myself that by the end of the summer I would find a permanent job. That did not happen the way I expected because no one was hiring. My NUWC supervisor noticed me, though, and noticed that I could write well, so he was able to convert my summer intern job into a full-time position. A hiring freeze hit two weeks later. Once again, I was very lucky. I could not have found a better job, but it took me several years to figure that out, and many more years to fully appreciate my good fortune. The men (yes, all men) were tough but fair, and tolerated sloppy work very poorly. I was not very likable, but I was able to do what they wanted done and write about it clearly too, which I think surprised them.

The first two years or so I was doing ray tracing for underwater acoustic propagation. It was my first encounter with asymptotics. I recall too well the sad day I suggested the speed profile should be modeled as a random variable. Enough said. I moved on to more open-minded groups. In time there came the magical day when I found I could do things others did not know how to do. This led to an encounter with a man whom all feared. He was an exceptionally talented structural acoustician. I was warned that he was combative and ran over people who he felt did not measure up.
I was sent to talk to him, not knowing what to expect. He looked me up and down (I was 26), chomped on his lit cigar (allowed in offices in the 1970s), and growled that he never expected to need a mathematician to do his job. To which I replied instantly that I never thought I would need an engineer to have a job. At that, he bit his cigar practically in half. He took a moment, grinned in a devilish way, and invited me into his office, whereupon I was gifted with a marvelous one-hour extemporaneous lecture on structural acoustics. (It was a kind of in-depth interview, but I passed.) His engineering vision was blocked by the need to solve certain large dense generalized Hermitian eigenproblems. In those days, most numerical analysts thought that an order 10 matrix was pretty big, but his problems were of order several hundred. Solving them with the available computer resources was fun. We were friends to the end of his days, though after a few years, we never worked
together again. After that, I was no longer the man who was going to leave next year.

What matters most (1993).

Stefano: What led to your choice to pursue the Ph.D. in mathematics (University of Rhode Island, 1978)? Was it difficult to perform coursework and research while maintaining your position at NUWC? What was the focus of your dissertation? Do you recommend advanced studies while employed, or would you recommend a different approach?

Roy: As I have said, the lack of resources drove me out of academia in 1970. My accidental discovery of USN/USL meant that I suddenly had money in my pocket for the first time. I liked how that felt. But to escape the sound propagation group, if only for an afternoon, I started taking one-off courses at URI. Why URI? Because it was closer than UConn. I was not planning to get a degree, as I was still “leaving the next year” to get a degree somewhere else TBD. But inertia comes from investment, and I was getting old (I was 25), so URI it was. My topic was approximation theory, which had an appeal to me, and for which I found applications in beamforming. For my year “in residence” at URI, both NUWC and URI considered me full time. I did that by working too many hours. By doing that, the rules being what they were at that time, I graduated with no time commitments—I was free to leave, as I always dreamed of doing. And then I did not. Why I did not leave had to do with the structural acoustician story, and the fact that the job market was still awful. Large eigenproblems had morphed into large constrained optimization problems for compact high-power steerable active arrays, and I found a way to study the project for a year at Stanford University in the Operations Research Department, where I learned many of the tools of the trade that are useful in machine learning. I remained a full-time NUWC employee for that year, but I (we, as I was married with a one-year-old baby boy) lived in Palo Alto. I personally thought of it as a kind of postdoc, but Stanford called me a Visiting Scholar. It was a very good year. It was my first year away from NUWC. I discovered I could leave without leaving.
Stefano: During your time at NUWC, you had several extended stays around the world, including Stanford University, La Spezia (Italy), and Adelaide (Australia). The longest visit was in Australia, from 1987 to 1989. Please tell us a bit about these visits and how they contributed to your personal and professional life.

Roy: Well, I think I just said the how and why of Stanford. That year opened my mind to many things, but I was blind to other things. It was 1982–1983, and I had an offer to get in on the ground floor of computer games. I declined, a lack of imagination on my part I suppose. When I returned to Connecticut, I was blessed—a stronger word than fortunate—to move into a group that was everything I wanted. It was called the Towed Array Group. One section designed arrays, another built prototypes, and a third evaluated them in sea tests at AUTEC. The group did the full arc from theory to test to evaluation, and I was part of all of it for five very happy years. I could talk about that period in my life for hours. It is relevant to my life’s later trajectory that I was just beginning to explore modeling spatially nonhomogeneous self-noise in towed arrays using hidden Markov models (HMMs), being inspired to consider them by their use in speech processing. Then out of the clear blue sky came an opportunity to go to Australia for a year, all expenses paid. It was too good to pass up, but still I refused to go until and unless I was guaranteed a return to my beloved Towed Array Group. They agreed. My family and I packed up (we now had two boys) and we went and had a grand time in every way imaginable. Our third child, Katherine, was born there.

Before I accepted the posting, the Aussies sent along a rank-ordered list of about ten topics they might ask me to work on while I was there. The last item was tracking. I read all such lists in reverse order, an old habit which is ofttimes very revealing. So it was this time. Before I went to Australia, to be frank, whenever I was in a meeting and the subject of tracking came up, I did my best to leave as quietly as possible. Tracking was not on my list of interesting topics. Three weeks after I arrived in Adelaide, however, I realized that by swapping a space variable for a time variable, the HMMs I had begun to study back in the States were models for tracking. Australia was a watershed.

After concluding my posting to DSTO (Defence Science and Technology Organisation) in Australia, we stayed on for four more months in Adelaide. Our house in Connecticut was still rented, so why return early when we could live in this idyll in Australia? I had saved a lot of my vacation time, so I used it. Free and clear—no obligations to “The Man” for four whole months! Adelaide University offered me an office (which I declined) and access to the library and faculty club. I spent my time studying artificial neural networks (NNs) on my own and totally self-supported. It was liberating. I moved fast and in those few months developed what I later turned into three NN patents, all assigned by me (not the Navy). It was 1989, and the AI winter was coming, but no one knew that at the time. These NN ideas came back in a totally unexpected new guise five years later [the probabilistic multihypothesis tracker (PMHT)].

Roy in the lab with lofargrams (2000).
Stefano: Before continuing the technical discussion, I would like to ask you to talk to us about your lovely wife Nancy, whom many of us know from her frequent participation at the FUSION conferences and other events, as well as your children Adam, Katherine, and Andrew, and their families.

Roy: Ah, the joy and delight of my life. We had one and only one opportunity to meet, and that singular moment was in January 1976, in JFK Airport about 3 P.M. Talk about luck. We have been together ever since. We are very different people united by deeply shared values of every kind—personal, family, social, professional, religious, and political. Nancy started coming to FUSION conferences because of the wonderful venues. After I retired from NUWC, I was finally able to participate actively in ISIF (International Society of Information Fusion), and her attendance became regular and is now expected. We bring our children when we can, especially Andrew, who still lives with us. It has become a family affair. Our other children
attend when they can, and our grandchildren (Anna and Clara) will soon be attending their first in Charleston, SC. Good conferences can be family affairs. The stronger the bonds, the more productive the conference.
Stefano: In 2005, you made a significant career move by leaving NUWC, where at this point you had reached the level of Senior Executive Service, and joined Metron. Please tell us about this time in your life and what motivated this transition. Looking back on those days, do you think this was the right career move for you? Take this opportunity to talk more broadly about career choices and what advice you might have for our listeners and readers.

Roy: Leaving NUWC and moving to Metron was absolutely the right move. I was at NUWC for 35 years, but it was not as if I had the same job all that time. In fact, I had at least five very different jobs, and unlike nearly everyone who works there, I switched departments at least four times, not counting my two long-term assignments (Stanford for a year and DSTO in Australia for almost two years). With each switch, I gained experience and learned new things. I learned to avoid working in groups whose supervisor functioned as an administrator and not a leader. When I had to choose between two technical jobs, one that I deemed safe and within my skill set and another just outside what I knew, I learned to make the riskier choice. Not once did I ever look back with regret.

I stayed in my final position at NUWC for about five years. I was at the top of my technical career ladder, and further promotions were not possible. My position was on the technical side of the SES, which few are aware of, and it came with little intrinsic authority. I could have stayed in it for many years, and part of me wanted to do just that. It was demanding in several ways. I had to deal with internal politics, which is hugely important, and I hated it. I had skillfully managed to avoid it before then. Upper-level management often does not like change.
(John Milton wrote of one tragic figure who declared, “Better to reign in hell than serve in heav’n.”) I also had to engage with a larger Navy community external to NUWC via a program called Advanced Processing Build (APB). The best of APB reminded me of my treasured earlier years in the Towed Array Group. I discovered that Milton’s tragic figure lives also on the technical side. Gaining acceptance of new ideas in established programs to try to meet the larger needs of the APB program was a challenge. Getting technical groups to cooperate that have never previously worked together, and passionately do not want to do so, was another. In large spiral programs like APB, competing institutional interests add a special spice to the
Granddad trying to be a tree (2017).
Gang’s all here (2020).
program (cayenne, jalapeño, habanero). The APB program was fortunate to have good leadership at NAVSEA and good engineers throughout. For five years, I found it rewarding and fascinating to watch the spirals in the program evolve, but it was also often frustrating, and I slowly began to recognize the signs—it was time to move on, but this time it meant leaving NUWC. It turned out to be psychologically harder to leave than I thought it would be: “As you bend the twig, so grows the tree.” In terms of the work, though, moving to Metron was easy because I went back to doing the kinds of work that I like best. I am fond of telling all who will listen that my intention was to stay at Metron for about five years and then slow down. I made no secret of that when I accepted the offer. That was 18 years ago. This story is my way of saying that I believe that Metron is a great small company with all the right values. It highly esteems good work and strongly supports its employees. Moving to Metron began a whole new phase of my career.
Stefano: You are perhaps best known for your co-development of the PMHT along with Tod Luginbuhl at NUWC. Please tell us about this work and how you view the PMHT today.

Roy: That is an interesting story. I mentioned my NN patents earlier, the ideas for which I developed in Australia. They were based on Gaussian mixtures. I trained them using the expectation-maximization (EM) method, which was rarely used outside of statistics at that time. The algorithm did not use HMMs because I did not need them. On my return to NUWC after Australia, word about my work on NNs got around, and I was asked to do some work with them for classification. It was a perfect application for my patents. Tod and I, and others too, developed these ideas further, with a special eye on dimensional reduction, which included aggregating training data over a time window. One day, someone complained that these models were static, meaning the NNs were time independent. Sure, I said, so you want them to evolve? By this time Tod and I had a close working relationship, and a quick glance told us that we both knew how to do that. Well, we knew what it was we wanted to do, which was to “put the Gaussian mixtures in motion,” but the technical details eluded us for a while. To make things easier, we decided to make the Gaussian mixture have exactly one term. That is when the connection to multitarget tracking became obvious—our classes were targets, and our aggregated-over-time data were sensor measurements. We had a sliding window multitarget tracker. What is more, we did not use the “at most one measurement per target” rule because we used EM. The technical details were finally ironed out when we used what is today called a graph-based model to elucidate the conditioning, and then we had the first PMHT filter. It did not come out of the tracking community, but from the classification community. Today one might say that it was AI inspired, but 30 years ago during the AI winter, that would not have been a good thing to say.
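The EM-based soft association that distinguishes PMHT from classical trackers can be illustrated with a toy sketch. Under deliberately simplified assumptions (static one-dimensional targets, known measurement variance, no dynamics), the association step reduces to one Gaussian-mixture EM iteration: every measurement updates every target with a posterior weight, rather than being hard-assigned under an “at most one measurement per target” rule. The function and variable names below are illustrative and not taken from any PMHT implementation.

```python
import numpy as np

def pmht_em_step(z, x, pi, sigma2):
    """One EM iteration of PMHT-style soft association for static 1-D targets.
    z: (N,) measurements; x: (M,) target states; pi: (M,) mixing weights."""
    # E-step: posterior probability that measurement n originated from target m.
    lik = np.exp(-0.5 * (z[:, None] - x[None, :]) ** 2 / sigma2)   # (N, M)
    w = pi[None, :] * lik
    w /= w.sum(axis=1, keepdims=True)                              # rows sum to 1
    # M-step: weighted centroids -- every measurement contributes a fraction
    # of itself to every target, instead of a hard one-to-one assignment.
    x_new = (w * z[:, None]).sum(axis=0) / w.sum(axis=0)
    pi_new = w.mean(axis=0)
    return x_new, pi_new

# Two targets near -5 and +5; 50 noisy measurements from each.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(-5, 1, 50), rng.normal(5, 1, 50)])
x, pi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(20):
    x, pi = pmht_em_step(z, x, pi, sigma2=1.0)
```

After a few iterations the estimated states settle near the two measurement clusters. Restoring target dynamics and a batch (sliding-window) of scans turns this same E-step/M-step structure into the PMHT recursion described above.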
Stefano: In more recent years, particularly since your move to Metron, you have contributed to the area of label-free tracking methods. This is the realm in which the PHD filter was developed, along with many of its more recent versions and extensions (including labeled tracking). How do you view this subfield in the tracking
community? How does your point-process approach differ from the PHD? Should practicing engineers be interested in these methods as design alternatives to more established methods such as PDA, MHT, etc.?

Roy: I had no interest in what you call label-free methods until 2008. When I began (belatedly) to read what was out there, I found exceedingly poor mathematical discussions that were written in what struck me as a newly invented jargon that failed totally to acknowledge connections to established mathematics. That puzzled me then, and it puzzles me still. Finite point processes and random finite sets (as defined by the Vos) are the very same thing. The intensity function of a point process is identical to what is called a PHD. Many today still do not acknowledge these facts. Those who claim there is a difference are either just plain wrong, or they are using Mahler’s flawed definition of an RFS that does not make sense on discrete spaces and has theoretical problems on continuous ones. These methods have been extended to tracking labeled targets. On the very face of it, labeling the targets in an unlabeled filter would seem to give up the very advantages that are touted for the unlabeled filter. In my view, labeled versions of the PHD intensity filter are a limited subclass of MHT trackers. It is regrettable that the advocates of labeled methods are oblivious to the obvious connections to MHT. This posture is at variance with the entire body of scientific thought. By inference, this is also a sad statement about the quality of the editorial and reviewing oversight in our journals and transactions.
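The terminological point can be made precise in one line. For a finite point process $\Xi$, the intensity function (the first-moment density) is, in standard point-process notation, the function $\lambda$ satisfying

```latex
% Intensity (first-moment) function of a finite point process \Xi:
% its integral over any region A gives the expected number of points in A.
\[
  \mathbb{E}\bigl[\,\lvert \Xi \cap A \rvert\,\bigr]
  \;=\; \int_{A} \lambda(x)\,\mathrm{d}x
  \qquad \text{for every measurable region } A .
\]
```

This $\lambda$ is exactly the quantity the PHD filter propagates; “PHD” and “intensity function” are two names for the same first moment.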
Stefano: While the active scientific discussion is of great benefit, the subfield to which the PHD filter belongs has led to many, sometimes acrimonious discussions within the tracking community. Is there anything you would like to share with us here? What lessons can we learn from this for ongoing research endeavors?

Roy: Your words “sometimes acrimonious” do not begin to describe it. I was amazed at the animosity that permeated the entire subfield for many years. It was far and away the most egregious unprofessional behavior I have ever witnessed. Deep personal hostility was directed at anyone who gave alternative technical formulations that differed from the accepted doctrine of RFS. The overt hostility drove established researchers out of the field and motivated others to avoid studying it. It made some students shift fields after graduating. It is pointless to go into the details. In short, it sullied the reputation and academic standing of the field in the eyes of many. For several years, as a friend of mine once said, its reputation was “lower than pond scum.” The reputation of the field of unlabeled target tracking has improved somewhat in the last few years, which
is good because it has real value. The recognition of earlier scholarly research in other countries may improve its standing further. For example, a 1976 paper in Russian by Bakut and Ivanchuk (translated into English the same year) derived—as a special case—what is called the PHD intensity filter. This article was merely one of a series of papers inspired by the work of giants in statistical physics.

Before I became interested in the subject, I saw everything only from outside. That changed in 2008, when I presented an alternative derivation of the PHD intensity filter at the FUSION conference in Cologne. I had not yet realized that alternative derivations were “not allowed.” I was not naïve and fully expected some strenuous technical exchanges, but I was amazed at the animosity. The attacks started immediately and persisted for years. My papers were misrepresented in print, and the misrepresentations were attacked. Some authors were “ordered” by anonymous reviewers to remove references to my papers as a condition for acceptance. Some reviewers, in the second round of reviews, attacked other reviewers who, in the first round of reviews, had offered objective criticism. On learning I had a paper that was still under review, an individual wrote to the editor demanding to be made a reviewer, and they were. I could go on, but eventually, one individual involved in this unprofessional activity was publicly sanctioned by the IEEE for violating its ethical standards.

Do you mean lessons learned about myself, or about how professional societies can fail their membership? For myself, I discovered that I am still that young kid in West Texas who did not let what others thought and said change my thinking unless and until I was convinced by fact and careful discussion. I am not easily overwhelmed by lies and distortions. As for societies, I do have a few heartfelt thoughts about associate editors (AEs) and the important role they play.
The big one is that, in my opinion, many do not fully understand one of their most important roles, perhaps because it so rarely needs to be exercised—they are responsible for protecting the integrity of the blind review process. As I have learned firsthand, and from others who have been victimized this way, bad actors can and will find ways to abuse this process. Why wouldn’t they, since, after all, they can attack like cowards from the dark, anonymously, in novel ways? They expect AEs to be confined behind a code of silence. In truth, AEs have a choice—they can stay silent and risk being accused of violating ethical standards, or they can act to control the abuse and risk being accused of violating the blind review process. It requires courage and sound judgment on the part of AEs to do their duty and not abdicate it. Undoubtedly some AEs (and there are often 50 or more nowadays) need training to understand how to
exercise their authority. Do they get this kind of training? I think not.

Stefano: What are you most proud of in your technical accomplishments?

Roy: I cannot talk about many of the things I am most proud of, but I know of them, and that is enough. There was a period of about ten years starting about 1985 that was especially exhilarating. For me personally, all I can say is that I am very grateful to have been so lucky. I have already talked about those years. When I mentioned my last five years at NUWC, I neglected to say that I was part of a small but determined team that was dedicated to getting things done. What an incredible team they were to work with. It is always an honor to work with extraordinarily talented people. I was heartbroken to leave the team when I left NUWC, but I am very proud of the work we accomplished.

Stefano: Your most recent work has been in analytic combinatorics (AC). Please tell us about this work.

Roy: Until very recently, say the last ten years, I would have never believed I would be interested in this subject. I was the kind of guy who enjoyed seeing answers to combinatorial problems but had absolutely no interest in deriving them. That is because I cannot count worth beans and worry that I have overcounted or undercounted. AC is really a subject for younger minds, but hallelujah here I am. The subject liberated me—I no longer worry about counting because I am confident that I can differentiate correctly. That only makes sense in AC because—magically—the terms in the derivatives map one-to-one to combinatorial configurations in the problem. Problems are reduced to modeling the functionals to be differentiated, and—yet more magic—the governing functionals are derived from first principles, just as things are done in physics! All the standard tracking filters such as PDA, JPDA, IJPDA, multi-Bernoulli, PHD, JiFi, and more can be formulated in this way. The functionals that describe them are very compact, concise, complete, and exact. Moreover, these functionals are closely related and can be used to organize the filters into a beautiful family tree. But I digress. There are other reasons to be interested in AC too, and these have to do with higher level information fusion. I feel lucky to have been blessed to have discovered such interests, especially now. They help me feel young at heart.

Stefano and Roy in Charleston (2023).

Stefano: Tell us about the most memorable work-related travel experiences, at conferences or lecture series.

Roy: There have been many. An especially treasured moment was meeting the legendary Paul Erdős at the 1978 World Congress of Mathematics in Helsinki. There was a queue of people wanting to ask him a question. I joined the queue and asked my question; he listened, smiled at me with extraordinarily kind eyes, and directed me to speak to a man sitting on a low wall. I had not noticed the wall until that moment, but arrayed along it were about 20 people, all world-class experts in their own fields, who were there to support Erdős. Another incredible meeting was in 1999 in Paris at a workshop on tracking organized by the late Jean-Pierre LeCadre to mark the five years since PMHT was invented. A third was the FUSION Conference in Florence that you, Stefano, helped organize in 2006. There was the FUSION Conference that I co-chaired in Istanbul in 2013, which was held shortly after the riots in Taksim Square near the conference venue and during which Ramadan began. Finally, and more recently, the several NATO lecture series that I have been honored to be part of on multiple occasions have led to lasting friendships and working relationships. As I said, so many wonderful experiences, and after I joined Metron in 2005, I was able to share many of them with my family.
Stefano: Do you have any further words of advice for young researchers in our field? How should they navigate important career choices? What are some open problems that you believe are most worthy of investigation?

Roy: Trust yourself. Go where your intellectual interests are, if you can. Remember that hiding inside every problem that someone else is having difficulty solving, there resides a real problem, and it is often interesting. Take ownership of it, and you may be rewarded. When in time you believe you can “feel the boundaries of the job
you’re in,” maybe it is time to think about doing something else, but be patient. Opportunity comes to the well prepared. When given career choices, go to the boundaries of what you know, assess the choices, and take the riskier one, provided it is not too risky. What do I mean by risk? Well, risk is in the eye of the beholder. Given my lack of resources when I was younger, my appetite for risk never included financial risk. Professional risk, sure, but I am fairly confident that I can contribute if I put my mind to it, whether or not I like the work at first. Often as not, the core problem will seduce me and, if not, I have always found that another opportunity will pop up. So here I am, nearer the end of my career than the start of it, and I find myself interested in AC. The amazing thing is that over the last two years, AC has stimulated in me a deep and growing interest in quantum computing. Inquiring minds walk through many doors.

I have little to say about navigating the choice between technical and corporate career paths. As your experience grows over time, it is inevitable that you will move into ever more senior positions. At some point, a corporate opportunity may tempt you. All I can say is to try to be prepared for that moment. There is no one right choice. Good leadership is quite rare but priceless, and poor leadership is vastly overpaid (and poorly respected).

Some open problems are purely technical and scientific, and I see little need to talk about them here. However, I will venture to say that many important problems are only partly technical. They arise from the ever-increasing abundance of mis- and disinformation, much of which can now be generated automatically without cost. Its purpose is to sow doubt and uncertainty about all kinds of facts, to say nothing of half-truths, distortions, and lies. It spreads with astonishing ease on social media.
Contributing to the problem are the “echo chambers” that are enhanced by automated methods (that I imagine are akin to reinforcement learning). There are grave societal risks if we completely ignore these problems. And there are grave societal risks if we go the other way and overly regulate the space. Finding a middle road will be a serious challenge and, like the problem itself, finding it will be partly technical and partly not.

Stefano: Please share with us any further thoughts that we have not had a chance to discuss. Also, please tell us a bit about any plans for the coming years.

Roy: I am looking forward to clamming this summer up in Maine. But I have no plans to retire, at least not fully, if that is what you were asking. Too many exciting things are happening to leave the playing field just when things are heating up. I think I could go on and on from here, but I have said enough.

Stefano: Thank you, Roy, for spending time on this very interesting and enlightening discussion. And, on behalf of our tracking and fusion technical community, thank you for your significant contributions and service over so many years. Best wishes for the years to come!
FORTHCOMING RECOGNITION

Roy has been selected to receive the highly prestigious ISIF Yaakov Bar-Shalom Award for a Lifetime of Excellence in Information Fusion. Yaakov Bar-Shalom was the inaugural winner in 2015, and the award now carries his name. Only three other individuals have received the award since that time. Further details may be found at https://isif.org/isif-yaakov-bar-shalom-award-lifetime-excellence-information-fusion. The award will be given at the ISIF/IEEE FUSION conference in Charleston, SC, USA, in June 2023.
Call for Papers IEEE Aerospace and Electronic Systems Magazine
Special Issue on
Ethics, Values, Laws, and Standards for Responsible AI

June 2023

Dear Colleagues,

Artificially intelligent automation is permeating Aerospace and Electronic Systems. Functions that were previously performed by the ‘Natural Intelligence’ of conscious users and their deliberate decisions of will are increasingly fulfilled by cognitive machines and executed in a partially or highly automated manner. In many cases, these AI-based systems operate more reliably, safely, and resiliently. But how do human users remain mentally, morally, and psychologically equal to the power that artificially intelligent machines give them? How can humans be supported to retain the ultimate authority in mission execution? And how can aerospace and electronic systems engineers shape the technosphere that surrounds us in such a way that it serves people and their goals, and not vice versa?

The IEEE Aerospace and Electronic Systems Society (AESS) will publish a Special Issue of its magazine focusing on these questions from interdisciplinary perspectives. We cordially invite you and your colleagues, recognised experts in the field of Artificial Intelligence and its applications, to submit contributions from mathematical, engineering, philosophical, legal, societal, and political domains and perspectives.

There is a pressing need to uncover ethically relevant problems early in the technical design phase and as systematically as possible. This requires that we identify the core values, value qualities, and value dispositions that make it possible to approve artificially intelligent systems for safe use. We therefore also encourage papers on standardization and certification of AI, as well as on Explainable AI (XAI).

With more than 2000 delegates from over 80 nations, the Summit on Responsible AI in the Military Domain (REAIM) 2023 was the largest global event dealing with the topics of this special issue. In Spring 2025, REAIM will take place in South Korea.
The Special Issue will be published for REAIM 2025. If you plan to submit a paper, please reply with a working title and a very short abstract by October 1, 2023 to [email protected]. The deadline for submission of your paper is April 1, 2024. Please submit it to [email protected] according to the IEEE instructions here: https://ieee-aess.org/publications/systems-magazine/special-issue-guidelines. Short papers are encouraged. The peer-review process will take three months. Authors will be notified by July 1, 2024. Based on the feedback of the reviewers, authors are to submit their final version by October 1, 2024.
Special Guest Editors Ariel Conn, IEEE-Standards Association Research Group, Defense Systems, [email protected] Carmody Grey, Durham University, UK, Ethics and Philosophy, [email protected] Wolfgang Koch, Fellow IEEE, Fraunhofer FKIE, Germany, Computer Science, [email protected]
MEET THE TEAM Learn more about your AESS Technical Panel Chairs
Roberto Sabatini
Avionics Systems Panel Khalifa University of Science and Technology (UAE) and RMIT University (Australia)
How many years have you been involved with IEEE, AESS, and the Panel? 20 years
What positions have you held in IEEE, AESS, and the Panel?
2015-Present – Avionics Systems Associate Editor (TAES Technical Areas and Editors)
2021-Present – Avionics Systems Panel Chair
2023-Present – BoG Industry Relations Committee Member
2022-Present – Member-at-Large (BoG)
2022-Present – BoG Publications Committee Member
2022-Present – Cyber Security Panel Member
2022-2022 – BoG Technical Operations Committee Member
2019-2021 – Vice-Chair (Avionics Systems Panel)
2015-2021 – Senior Editor for Avionics Systems
2020-2024 – Distinguished Lecturer
What are some of your research interests?
Avionics Systems, Air Traffic Management, GNC, Trusted Autonomy, UAS, UTM, AAM, Cyber-Security, Space Systems, GNSS, Navigation, Tracking, Sense-and-Avoid, Space Domain Awareness, Space Traffic Management, Multi-Domain Traffic Management
What are some of your interests and activities outside of your professional career? Volunteering, Coaching and Mentoring, Sports, Travel, Writing
What has been the greatest contribution of the Panel to AESS and its field of interest?
Research and Innovation (R&I) – Participation in NASA UTM and AAM activities; connections/collaborations with NextGen in the US and SESAR in the EU; other national and international Avionics/ATM/UAS programs; collaboration with JARUS, ICAO, and IFATCA (UAS/UTM).
Where do you see the panel going in the next 50 years?
Addressing the challenges and opportunities offered by cyber-physical systems and AI for trusted autonomous air and space operations. Boosting the digital transformation and sustainable development of the aerospace and aviation sectors.
What has been your favorite memory of being involved with the IEEE and/or AESS?
Developing and co-delivering ASP Tutorials at DASC; developing magazine articles and special issues that have generated significant impact; impacting the global aerospace and aviation community through our direct involvement in NASA, EU, FAA, and JARUS R&D and standardization initiatives.
How might interested AESS members get involved?
Membership in the ASP is open to active IEEE members from the aerospace community who desire to advance avionics technology and system capabilities. Currently, the panel includes several standing committees: the Avionics Research and Innovation (R&I) Committee; Avionics Conference Committee; Awards, Nominations, and Elections Committee; Standards Committee; Education Committee; Journal Publications Committee; UAV Panel Committee; and Cyber Security Panel Committee. The ASP has a strategic agenda of initiatives liaising with national, regional, and international research organizations that are impacting the future of the aviation and aerospace sectors. The ASP develops and maintains a robust research cooperation program of work in collaboration with relevant industry and government organizations.
ieee-aess.org
Scan QR Code for Full Interview
2023 Aerospace & Electronic Systems Society Organization and Representatives OFFICERS
President – Mark Davis
President-Elect – Sabrina Greco
Past President – Walt Downing
Secretary – Kathleen Kramer
Treasurer – Mike Noble
VP Conferences – Braham Himed
VP Education – Alexander Charlish
VP Finance – Peter Willett
VP Industry Relations – Steve Butler
VP Member Services – Lorenzo Lo Monte
VP Publications – Lance Kaplan
VP Technical Operations – Michael Braasch
OTHER POSITIONS
Undergraduate Student Rep – Abir Tabarki Graduate Student Rep – Jemma Malli Young Professionals Program Coordinator – Philipp Markiton Operations Manager – Amanda Osborn
BOARD OF GOVERNORS 2023 Members-at-Large
2021-2023: Laura Anitori, Steve Butler, Michael Cardinale, Alexander Charlish, Stefano Coraluppi, Braham Himed, Lorenzo Lo Monte, Peter Willett
2022-2024: Alfonso Farina, Maria Sabrina Greco, Hugh Griffiths, Puneet Kumar Mishra, Laila Moreira, Bob Rassa, Michael Noble, Roberto Sabatini
2023-2025: William Dale Blair, Arik Brown, Joe Fabrizio, Francesca Filippini, Wolfgang Koch, Luke Rosenberg, Marina Ruggieri, George Schmidt

STANDING COMMITTEES & CHAIRS
Awards – Fulvio Gini
M. Barry Carlton Award – Gokhan Inalhan
Harry Rowe Mimno Award – Daniel O’Hagan
Warren D. White Award – Scott Goldstein
Pioneer Award – Daniel Tazartes
Fred Nathanson Award – Braham Himed
Robert T. Hill Best Dissertation Award – Alexander Charlish
AESS Early Career Award – George T. Schmidt
AESS Judith A. Resnik Space Award – Maruthi Akella
Chapter Awards – Kathleen Kramer
Distinguished Service Award – Peter Willett
Industrial Innovation Award – Mike Noble
Engineering Scholarship – Bob Rassa
Chapter Program Coordinator – Kathleen Kramer
Constitution, Organization & Bylaws – Hugh Griffiths
Education – Alexander Charlish
Distinguished Lecturer Program – Alexander Charlish
Fellow Evaluation – Hugh Griffiths
Fellow Search – George T. Schmidt
History – Alfonso Farina
International Director Liaison – Joe Fabrizio
Member Services – Lorenzo Lo Monte
Nominations & Appointments – Walt Downing
Publications – Lance Kaplan
Systems Magazine – Daniel O’Hagan
Transactions – Gokhan Inalhan
Tutorials – W. Dale Blair
QEB – Francesca Filippini; Philipp Markiton
Strategic Planning – Sabrina Greco
Student Activities – Kathleen Kramer
Technical Operations – Michael Braasch
Avionics Systems – Roberto Sabatini
Cyber Security – Aloke Roy
Glue Technologies for Space Systems – Claudio Sacchi
Gyro & Accelerometer Panel – Jason Bingham
Navigation Systems Panel – Michael Braasch
Radar Systems Panel – Laura Anitori
Visions and Perspectives (ad hoc) – Joe Dauncey
CONFERENCE LIAISONS
IEEE Aerospace Conference – Claudio Sacchi
IEEE AUTOTESTCON – Bob Rassa, Dan Walsh, Walt Downing
IEEE International Carnahan Conference on Security Technology – Gordon Thomas
IEEE/AIAA Digital Avionics Systems Conference – Kathleen Kramer
IEEE Radar Conference – Kristin Bing
IEEE/ION Position, Location & Navigation Symposium – Michael Braasch
IEEE/AIAA/NASA Integrated Communications Navigation & Surveillance – Aloke Roy
IEEE International Workshop for Metrology for Aerospace – Pasquale Daponte
FUSION – W. Dale Blair

REPRESENTATIVES TO IEEE ENTITIES
Journal of Lightwave Technology – Michael Cardinale
Nanotechnology Council – Yvonne Gray
Sensors Council – Paola Escobari Vargas, Peter Willett
Systems Council – Bob Rassa, Michael Cardinale
IEEE Women in Engineering Committee – Kathleen Kramer
Please send corrections or omissions for this page to the Operations Manager at [email protected].
Visit our website at ieee-aess.org.
2023-2024 Aerospace & Electronic Systems Society Meetings and Conferences

The information listed on this page was valid as of 1 August 2023. Please check the respective conference websites for the most up-to-date information.

19-20 Sept. 2023 – DGON Inertial Sensors and Systems (ISS) – Braunschweig, Germany – www.aconf.org/conf_187089.html
20-22 Sept. 2023 – 20th European Radar Conference (EuRAD) – Berlin, Germany – www.eumweek.com/conferences/eurad.html
26-28 Sept. 2023 – Signal Processing Symposium (SPSympo) – Karpacz, Poland – spsympo23.pwr.edu.pl
1-5 Oct. 2023 – IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC) – Barcelona, Spain – 2023.dasconline.org
2-6 Oct. 2023 – 13th European Space Power Conference (ESPC) – Elche, Spain – atpi.eventsair.com/espc2023
2-6 Oct. 2023 – European Data Handling & Data Processing Conference (EDHPC) – Juan-les-Pins, France – atpi.eventsair.com/edhpc-conference
11-15 Oct. 2023 – IEEE International Carnahan Conference on Security Technology (ICCST) – Pune, India – site.ieee.org/iccst
6-8 Nov. 2023 – IEEE International Conference on Microwaves, Communications, Antennas, Biomedical Engineering and Electronic Systems (COMCAS) – Tel Aviv, Israel – www.comcas.org
6-10 Nov. 2023 – IEEE International Radar Conference (RADAR) – Sydney, Australia – www.radar2023.org
27-29 Nov. 2023 – IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI) – Bonn, Germany – www.fkie.fraunhofer.de/de/Veranstaltungen/sdf2023.html
21-24 Jan. 2024 – IEEE Radio and Wireless Symposium (RWS) – San Antonio, TX, USA – www.radiowirelessweek.org
2-9 March 2024 – IEEE Aerospace Conference (AERO) – Big Sky, MT, USA – www.aeroconf.org
23-25 April 2024 – IEEE Integrated Communications, Navigation and Surveillance Conference – Herndon, VA, USA – i-cns.org/
24-26 April 2024 – International Conference on Global Aeronautical Engineering and Satellite Technology (GAST) – Marrakesh, Morocco – gast24.sciencesconf.org/resource/acces
6-10 May 2024 – IEEE Radar Conference (RadarConf’24) – Denver, CO, USA – 2024.ieee-radarconf.org/
7-11 July 2024 – International Conference on Information Fusion (FUSION) – Venice, Italy – fusion2024.org
For a full list of AESS-sponsored conferences, visit ieee-aess.org/conferences. For corrections or omissions, contact [email protected]. VP Conferences, Braham Himed