Study on Signal Detection and Recovery Methods with Joint Sparsity. Doctoral Thesis accepted by Tsinghua University, Beijing, China (ISBN 9789819941162, 9789819941179)



Table of contents:
Supervisor’s Foreword
Preface
Acknowledgments
Contents
Abbreviations
1 Introduction
1.1 Background
1.2 Related Works
1.2.1 Detection Methods for Jointly Sparse Signals
1.2.2 Recovery Methods for Jointly Sparse Signals
1.3 Main Content and Organization
References
2 Detection of Jointly Sparse Signals via Locally Most Powerful Tests with Gaussian Noise
2.1 Introduction
2.2 Signal Model for Jointly Sparse Signal Detection
2.3 LMPT Detection Based on Analog Data
2.3.1 Detection Method
2.3.2 Theoretical Analysis of Detection Performance
2.4 LMPT Detection Based on Coarsely Quantized Data
2.4.1 Detection Method
2.4.2 Quantizer Design and the Effect of Quantization on Detection Performance
2.5 Simulation Results
2.5.1 Simulation Results of the LMPT Detector with Analog Data
2.5.2 Simulation Results of the LMPT Detector with Quantized Data
2.6 Conclusion
References
3 Detection of Jointly Sparse Signals via Locally Most Powerful Tests with Generalized Gaussian Model
3.1 Introduction
3.2 The LMPT Detector Based on Generalized Gaussian Model and Its Detection Performance
3.2.1 Generalized Gaussian Model
3.2.2 Signal Detection Method
3.2.3 Theoretical Analysis of Detection Performance
3.3 Quantizer Design and Analysis of Asymptotic Relative Efficiency
3.3.1 Quantizer Design
3.3.2 Asymptotic Relative Efficiency
3.4 Simulation Results
3.5 Conclusion
References
4 Jointly Sparse Signal Recovery Method Based on Look-Ahead-Atom-Selection
4.1 Introduction
4.2 Background of Recovery of Jointly Sparse Signals
4.3 Signal Recovery Method Based on Look-Ahead-Atom-Selection and Its Performance Analysis
4.3.1 Signal Recovery Method
4.3.2 Performance Analysis
4.4 Experimental Results
4.5 Conclusion
References
5 Signal Recovery Methods Based on Two-Level Block Sparsity
5.1 Introduction
5.2 Signal Recovery Method Based on Two-Level Block Sparsity with Analog Measurements
5.2.1 PGM-Based Two-Level Block Sparsity
5.2.2 Two-Level Block Matching Pursuit
5.3 Signal Recovery Method Based on Two-Level Block Sparsity with 1-Bit Measurements
5.3.1 Background of Sparse Signal Recovery Based on 1-Bit Measurements
5.3.2 Enhanced-Binary Iterative Hard Thresholding
5.4 Simulated and Experimental Results
5.4.1 Simulated and Experimental Results Based on Analog Data
5.4.2 Simulated and Experimental Results Based on 1-Bit Data
5.5 Conclusion
References
6 Summary and Perspectives
6.1 Summary
6.2 Perspectives
References
Appendix A Proof of (2.61)
Appendix B Proof of Lemma 1
Appendix C Proof of (3.6)
Appendix D Proof of Theorem 1
Appendix E Proof of Lemma 2
About the Author

Springer Theses Recognizing Outstanding Ph.D. Research

Xueqian Wang

Study on Signal Detection and Recovery Methods with Joint Sparsity

Springer Theses Recognizing Outstanding Ph.D. Research

Aims and Scope

The series “Springer Theses” brings together a selection of the very best Ph.D. theses from around the world and across the physical sciences. Nominated and endorsed by two recognized specialists, each published volume has been selected for its scientific excellence and the high impact of its contents for the pertinent field of research. For greater accessibility to non-specialists, the published versions include an extended introduction, as well as a foreword by the student’s supervisor explaining the special relevance of the work for the field. As a whole, the series will provide a valuable resource both for newcomers to the research fields described, and for other scientists seeking detailed background information on special questions. Finally, it provides an accredited documentation of the valuable contributions made by today’s younger generation of scientists.

Theses may be nominated for publication in this series by heads of department at internationally leading universities or institutes and should fulfill all of the following criteria:

• They must be written in good English.
• The topic should fall within the confines of Chemistry, Physics, Earth Sciences, Engineering and related interdisciplinary fields such as Materials, Nanoscience, Chemical Engineering, Complex Systems and Biophysics.
• The work reported in the thesis must represent a significant scientific advance.
• If the thesis includes previously published material, permission to reproduce this must be gained from the respective copyright holder (a maximum 30% of the thesis should be a verbatim reproduction from the author’s previous publications).
• They must have been examined and passed during the 12 months prior to nomination.
• Each thesis should include a foreword by the supervisor outlining the significance of its content.
• The theses should have a clearly defined structure including an introduction accessible to new PhD students and scientists not expert in the relevant field.

Indexed by zbMATH.

Xueqian Wang

Study on Signal Detection and Recovery Methods with Joint Sparsity Doctoral Thesis accepted by Tsinghua University, Beijing, China

Author
Dr. Xueqian Wang
Tsinghua University
Beijing, China

Supervisor
Prof. Gang Li
Department of Electronic Engineering
Tsinghua University
Beijing, China

ISSN 2190-5053 ISSN 2190-5061 (electronic)
Springer Theses
ISBN 978-981-99-4116-2 ISBN 978-981-99-4117-9 (eBook)
https://doi.org/10.1007/978-981-99-4117-9

Jointly published with Tsinghua University Press
The print edition is not for sale in China Mainland. Customers from China Mainland please order the print book from: Tsinghua University Press.

© Tsinghua University Press 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.

Paper in this product is recyclable.

Supervisor’s Foreword

Sparsity means redundancy in signals. The idea behind compressed sensing is that the characteristics of a sparse signal can be captured from very few sampled data, since high-dimensional signals admit sparse representations over suitable atom matrices. In practice, sensors often do not measure signals in isolation, and their measurements are correlated. Joint sparsity further models the dependence among sparse signals captured by multiple sensors and thus describes signals more accurately. Signal detection and recovery are two key tasks in signal processing. For the detection of signals with joint sparsity, low communication and computation costs are required in practice; for the recovery of signals with joint sparsity, a key issue is to improve reconstruction performance with better sparse representation models.

In this book, Dr. Xueqian Wang systematically summarizes the research findings of his Ph.D. thesis on signal detection and recovery methods with joint sparsity. First, a method is presented for the detection of jointly sparse signals based on the strategy of the locally most powerful test with coarsely quantized observations, together with theoretical detection performance analysis in Gaussian and non-Gaussian noise scenarios. This method significantly reduces the computational and communication burden of the detection task. Second, a method is proposed for the recovery of jointly sparse signals based on atom selection with a look-ahead strategy. Experiments based on UWB through-wall radar data demonstrate that this method improves the accuracy of signal recovery with joint sparsity and reduces the number of artifacts in radar images. Third, a new sparsity model, i.e., two-level block sparsity, is presented for better recovery of jointly sparse signals. By further considering the clustered structure in jointly sparse signals, this model greatly improves radar image quality, as shown in the experiments on real polarimetric radar data.

I have known Dr. Xueqian Wang for eight years. Based on my personal interactions with him, I consider him an outstanding young researcher who has carried out impactful research and explained complicated matters in an instructive

and easy-to-understand way. I do hope that this book will provide theoretical and technical support and serve as a good reference for undergraduate/graduate students and researchers in the area of sparse signal processing. Readers entering the field of sparse signal processing who wish to learn about the frontier progress regarding joint sparsity will save considerable effort with a book like this.

Beijing, China
March 2023

Prof. Gang Li

Preface

The task of signal detection is to decide whether signals of interest are present on the basis of observed data; in the task of signal recovery, the signals are reconstructed or their key parameters are estimated from the observations. Sparsity is a natural characteristic of most signals in practice, and the fact that multiple sparse signals share common locations of dominant coefficients is called joint sparsity. In the context of signal processing, the joint sparsity model leads to higher performance in signal detection and recovery. This book focuses on the tasks of detecting and reconstructing signals with joint sparsity. The main contents include key methods for the detection of jointly sparse signals, with corresponding theoretical performance analysis, and methods for jointly sparse signal recovery, with their application to radar imaging. The main contributions of this book are as follows:

(1) For the problem of detecting jointly sparse signals, a method is proposed based on the strategy of the locally most powerful test. Its theoretical detection performance is provided for the cases of analog observations, coarsely quantized observations, Gaussian noise, and non-Gaussian noise, respectively. For signal detection with quantized observations, the thresholds of the optimal quantizer are derived, and the detection performance loss caused by quantization is quantitatively evaluated under the optimal quantizer; a strategy to compensate for this loss is also provided. Compared with existing detection methods, the proposed method significantly reduces the computational and communication burden without noticeable detection performance loss.

(2) For the problem of recovering jointly sparse signals, a method is proposed based on atom selection with a look-ahead strategy, where atoms correspond to the locations of nonzero values in sparse signals. The method evaluates the effect of the current atom selection on the future recovery error in the iterative process, and theoretical analysis indicates that it improves the stability of atom selection. Its application to multi-channel radar imaging is considered.

Experiments based on real radar data demonstrate that the proposed method improves the accuracy of signal recovery with joint sparsity and reduces the number of artifacts in radar images.

(3) For the problem of recovering jointly sparse signals, a method is proposed based on two-level block sparsity, which exploits not only the joint sparsity of multiple signals but also the clustering structure within each sparse signal. Experimental results based on real radar data show that, compared with existing methods, the dominant pixels in the radar images generated by the proposed method are more concentrated in the target area and there is less energy leakage into non-target areas, i.e., the proposed method provides better imaging quality. Furthermore, this method is extended to the 1-bit quantization scenario to reduce the hardware consumption of radar imaging systems; experiments based on real radar data demonstrate that the proposed two-level block sparsity-based method significantly improves the quality of 1-bit radar imaging.

This book is organized as follows. In Chap. 1, the background of and related works on joint sparsity are briefly reviewed. In Chaps. 2 and 3, joint sparsity-driven signal detection methods in Gaussian and non-Gaussian noise environments, respectively, are presented to accelerate existing methods. In Chaps. 4 and 5, joint sparsity-driven signal recovery methods based on look-ahead-atom-selection and two-level block sparsity are studied to enhance the performance of radar imaging. Chapter 6 summarizes the book and discusses future perspectives.

I do hope that this book can serve as a good reference for undergraduate/graduate students and researchers in the areas of signal processing and radar imaging and provide theoretical and technical support for their research and engineering work.

Beijing, China

Xueqian Wang

Acknowledgments

I would like to sincerely thank my supervisor, Prof. Gang Li, for his dedicated guidance of my Ph.D. research from the very beginning. His talent and spirit have inspired me and pushed me forward on the way to science. Professor Li also helped me enthusiastically in daily life, paid much attention to my career development, and set an outstanding example for me. Without his help and guidance, this book could not have been finished.

I would like to sincerely thank Prof. Pramod K. Varshney for hosting my visit at Syracuse University and providing kind guidance on my doctoral research. I will benefit from the half-year visit to Prof. Varshney’s lab for my whole life. I would like to thank Prof. Xiao-Ping Zhang at Ryerson University, Prof. Antonio Plaza at the University of Extremadura, Prof. Moeness G. Amin at Villanova University, Prof. Robert Burkholder at Ohio State University, and Prof. Qun Wan at the University of Electronic Science and Technology of China for their guidance on my academic articles.

I would like to sincerely acknowledge the National Key R&D Program of China under Grant 2021YFA0715201, the National Natural Science Foundation of China under Grants 61790551, 61925106, and 62101303, the Post-doctoral Innovative Talent Support Program under Grant BX20200195, the China Post-doctoral Science Foundation under Grant 2020M680561, the Shuimu Tsinghua Scholar Program, and the Autonomous Research Project of the Department of Electronic Engineering at Tsinghua University for funding my research. I also wish to express my thanks to the staff of Springer and Tsinghua University Press for their interest and efforts in the publication of this book.

Finally, I would like to sincerely thank all my family for their continuous and generous support of my research career.



Abbreviations

ADC  Analog-to-Digital Converter
ARE  Asymptotic Relative Efficiency
AUC  Area Under the Curve
CDF  Cumulative Distribution Function
DOMP  Distributed Orthogonal Matching Pursuit
DSP  Distributed Subspace Pursuit
GBCD  Greedy Block Coordinate Descent
GLRT  Generalized Likelihood Ratio Test
HMP  Hybrid Matching Pursuit
IHT  Iterative Hard Thresholding
LAAS  Look-Ahead-Atom-Selection
LAHMP  Look-Ahead Hybrid Matching Pursuit
LMPT  Locally Most Powerful Test
LRT  Likelihood Ratio Test
MAP  Maximum a Posteriori
MMV  Multiple Measurement Vector
MP  Matching Pursuit
MRF  Markov Random Field
MT-BCS  Multitask Bayesian Compressive Sensing
MV  Majority-Vote
N(a, b)  Gaussian Distribution with mean a and variance b
OMP  Orthogonal Matching Pursuit
PD  Probability of Detection
PDF  Probability Density Function
PFA  Probability of False Alarm
PGM  Probabilistic Graph Model
PMF  Probability Mass Function
PSO  Particle Swarm Optimization
RIP  Restricted Isometry Property
ROC  Receiver Operating Characteristic
SNR  Signal-to-Noise Ratio
SOMP  Simultaneous Orthogonal Matching Pursuit
SP  Subspace Pursuit
SSP  Simultaneous Subspace Pursuit
TCR  Target-to-Clutter Ratio
TLBMP  Two-Level Block Matching Pursuit

Chapter 1

Introduction

1.1 Background

Traditional signal processing methods are based on the Nyquist sampling theorem: the sampling frequency for a real-valued signal should be larger than twice its bandwidth, while for a complex-valued signal (e.g., a radar signal) the sampling frequency should be larger than its bandwidth. With the recent development of sensor technologies, huge amounts of data have been generated under Nyquist sampling [1]. This significantly increases the burden and complexity of data transmission, processing, and storage, and also leads to large hardware costs. Therefore, it is urgent to explore a new signal processing framework that reduces the redundancy in massive data and extracts the key information of interest from a small number of observations.

Exploiting a priori knowledge of signals helps reduce data redundancy and improve the performance of signal processing. Sparsity is a commonly used characteristic of signals; it means that most of the elements in a high-dimensional signal are zero or close to zero [1–3]. In practice, a large number of signals of interest exhibit sparsity. For example, optical images are sparse in the wavelet domain [4]; aerial and marine targets occupy only a few pixels in range-azimuth radar images [5]; user signals are sparse in the frequency domain in the context of spectrum sensing [6]; and so on. Compressed sensing aims at extracting the information of high-dimensional sparse signals from only a few compressed observations [1–3]. With the emergence and development of compressed sensing technologies, the theory and methods of sparse signal processing have received much attention [1–35] and have been widely investigated in the fields of face recognition [8–10], speech identification [11–13], object retrieval in videos [14–17], radar imaging [18–22], medical imaging [23–26], image super-resolution [27–29], image denoising [30–32], and channel estimation [33, 34].

Joint sparsity is a common characteristic of multiple sparse signals: different sparse signals share the same locations of non-zero values, which represents the dependence among the multiple sparse signals. In comparison with the (single-signal) sparsity characteristic, joint sparsity models signals more accurately.
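The two models just described can be stated compactly. The following is an illustrative sketch with generic notation of my choosing (Φ for the sensing matrix, K for the sparsity level, L for the number of channels); the precise models used in later chapters may differ:

```latex
% Single-signal compressed sensing: M << N noisy linear measurements
% of an N-dimensional signal x with at most K non-zero entries.
y = \Phi x + n, \qquad \Phi \in \mathbb{R}^{M \times N}, \quad \|x\|_{0} \le K \ll N .
% Joint sparsity (multiple-measurement-vector form): L sparse signals
% share one common support set Omega, though their amplitudes differ.
y_{l} = \Phi x_{l} + n_{l}, \qquad \operatorname{supp}(x_{l}) = \Omega, \quad |\Omega| \le K, \quad l = 1, \dots, L .
```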


The initial interest in joint sparsity was motivated by the inverse problem in magnetoencephalography (MEG), i.e., the recovery of MEG signals [36]. MEG signals are produced by activity at activation areas in the brain, and since only a small number of areas are active at any given moment, MEG signals are sparse. In practice, multiple snapshots (i.e., measurement vectors) of the MEG signals can be obtained within a very short period, during which the activation magnitudes may change but the activation sites do not; therefore, the MEG signals corresponding to the multiple snapshots share joint sparsity. Experimental results have demonstrated that exploiting joint sparsity improves the quality of brain imaging with only a few observations [37]. Joint sparsity also plays an important role in multi-channel radar imaging [18, 38–40]. A radar target presents different scattering characteristics under different polarizations, frequencies, and observation angles, and accordingly shows different scattering amplitudes in multi-channel radar images; however, the locations of radar targets are the same across the multi-channel images. Joint sparsity can thus be utilized to enhance targets and to reduce the clutter and noise in multi-channel radar images [18, 38–40]. In addition, in sensor networks, the amplitudes of the received signals differ across the local sensors due to channel attenuation, but the observed scenario is the same for all local sensors, so the signals in sensor networks also satisfy the definition of joint sparsity. Existing theory and simulation results show that joint sparsity leads to improved performance of target localization and detection [41–47]. Furthermore, the joint sparsity model also helps provide higher performance metrics in the fields of hyperspectral image processing [48–51], array signal processing [52–54], and medical tomography [37].

Signal detection and reconstruction are two fundamental tasks in signal processing [56, 57], and recent research on joint sparsity has mainly focused on these two tasks [36–54, 56–62]. Between the two, the reconstruction of jointly sparse signals first received much attention in academia. In this problem, multiple sparse signals are to be solved from their linear observations and a priori overcomplete sensing matrix (or matrices) [36, 58–60]. The reconstruction of signals with joint sparsity has been widely used in sensor networks and radar imaging. However, existing reconstruction methods for jointly sparse signals still face the following challenges. (1) Robustness of the selection of atom signals. A key problem in the recovery of jointly sparse signals is the selection of atom signals from the overcomplete sensing matrices to represent the jointly sparse signals. In the iterative process of existing approaches, the atom signals selected at the current iteration may increase the reconstruction error in future iterations [63]; improving the stability of atom selection in iterative approaches is therefore desired to enhance the recovery performance of jointly sparse signals. (2) Combination of joint sparsity and clustering structure. The non-zero elements in a single signal often occur in clusters rather than as isolated points. The clustering structure is a basic characteristic of array signals and high-resolution images [64–68], and many theoretical and experimental studies have shown that utilizing the clustering structure helps reduce the recovery errors of sparse signals.
Note, however, that existing methods mainly focus on the joint sparsity and the clustering structure of signals independently, and few consider exploiting both simultaneously. Since joint sparsity and the clustering structure are special cases of block sparsity [65], their combination is referred to as two-level block sparsity in the remainder of this book. (3) Reconstruction of jointly sparse signals from 1-bit observations. Traditional methods for sparse signal recovery depend on processing analog or finely quantized measurements, which increases the burden of data storage and transmission. In 1-bit quantization, only the signs of the measurements are retained; this extremely compresses the original measurements, reduces the cost of the analog-to-digital converter (ADC), and alleviates the burden of storing and transmitting large volumes of data [45–47, 69–72]. Sparsity implies that there is redundancy in data; accordingly, the reconstruction of jointly sparse signals from 1-bit data is expected to achieve storage and transmission savings without loss of reconstruction performance.

Differing from the estimation of signal values or parameters considered in the task of signal recovery, the detection of jointly sparse signals aims to determine the presence or absence of signals with joint sparsity. Existing methods for the detection of signals with joint sparsity still face the following challenges. (1) High complexity. Existing methods often partly or completely reconstruct the sparse signals and then design test statistics and thresholds to detect them. Note that the sparse signal reconstruction stage involves iterative processing and matrix inversion operations, while the output of the detection stage is only a binary variable (i.e., ‘1’ for the presence of signals and ‘0’ for their absence). Accordingly, the computational complexity of existing signal detection methods (based on partly or completely reconstructed signals) is relatively heavy, which leads to poor real-time performance. (2) Signal detection based on coarsely quantized measurements. Detection of jointly sparse signals is often applied in distributed sensor networks, where the fusion center decides the presence or absence of signals according to the measurements received from many local sensors. It is worth emphasizing that the battery resources of local sensors and the communication bandwidth for data transmission are extremely limited in distributed sensor networks [73]. As noted above, coarsely quantized measurements (1–3 bits) lead to reduced hardware cost and higher battery/bandwidth efficiency. Several open research problems remain in the detection of signals with joint sparsity from coarsely quantized data, e.g., how to design the optimal test statistics and quantization thresholds, and how to compensate for the performance loss caused by coarse quantization. (3) Detection of jointly sparse signals in non-Gaussian noise. Commonly used signal detection methods assume Gaussian noise, while much noise encountered in practice is non-Gaussian, e.g., ship transit noise (generalized Gaussian with shape parameter 2.8) and noise from sea surface agitation (generalized Gaussian with shape parameter 1.6) [74]. Research on the detection of jointly sparse signals in non-Gaussian environments is beneficial to enhance the robustness of detection performance in practical applications.
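Two ingredients of these challenges can be written out for concreteness; the notation below is generic and chosen for illustration, not taken from later chapters:

```latex
% 1-bit quantization: only the sign pattern of the analog measurements
% is retained, one bit per measurement.
y = \operatorname{sign}(\Phi x + n) .
% Generalized Gaussian density with mean mu, scale alpha > 0, and shape
% beta > 0: beta = 2 recovers the Gaussian and beta = 1 the Laplacian,
% so the cited shape parameters 2.8 and 1.6 bracket the Gaussian case.
p(v) = \frac{\beta}{2 \alpha \, \Gamma(1/\beta)} \exp\!\left( - \left( \frac{|v - \mu|}{\alpha} \right)^{\!\beta} \right) .
```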

1.2 Related Works

1.2.1 Detection Methods for Jointly Sparse Signals

Detection methods for jointly sparse signals can be divided into two categories, known- and unknown-signal detection, and each category can be further divided into two sub-categories, deterministic- and stochastic-signal detection. In the following, existing detection methods for jointly sparse signals are summarized by category. Note that a single sparse signal is a special case of jointly sparse signals, and detection methods for the former can be extended to the latter, so detection methods for a single sparse signal are also briefly reviewed.

(1) Detection of Known Signals

Known and Deterministic Signals. According to the Neyman–Pearson criterion, the optimal strategy for known and deterministic signals with joint sparsity is matched filtering [75]. Baraniuk R G designed a test statistic for jointly sparse signals based on the matched filtering strategy and, using the restricted isometry property (RIP), provided an analysis of the upper/lower bounds of the probability of detection (PD) at a fixed probability of false alarm (PFA) [75]. Moreover, the detection performance for sparse signals based on compressed measurements is very close to that based on uncompressed measurements when the signal-to-noise ratio (SNR) is larger than 15 dB [75]. Using a priori probabilities of the different hypotheses, Cao J proposed a Bayesian method for sparse signals and, based on the RIP, derived an approximate expression and upper/lower bounds for the error probability of the detection task. Varshney P K developed a signal detection method for distributed sensor networks under the constraints of joint sparsity and physical-layer secrecy and provided the analytical error probability and its bounds [76]. Inspired by the fact that sparse signals can be regarded as “weak” signals [56], i.e., most of the elements in a sparse vector are zero, Nagananda K G proposed a sparse signal detection method based on Padé approximation [114] and analyzed its detection performance with compressed measurements.

Known and Stochastic Signals. Wimalajeewa T provided a theoretical analysis of the bounds of PD at a fixed PFA for the likelihood ratio test (LRT) detector when detecting a sparse signal with a known Gaussian probability density function (PDF) [77]. Assuming that the subspace of the stochastic sparse signals is known, Rao R proposed a sparse signal detection method based on the LRT and derived theoretical bounds on PD and PFA [78].

(2) Detection of Unknown Signals

Unknown and Deterministic Signals. A commonly used strategy for the detection of unknown and deterministic signals with joint sparsity is to first reconstruct them and then design test statistics based on the reconstructed results. Zayyani H proposed a detection method for jointly sparse signals based on the generalized LRT (GLRT) [62], which is an approximately optimal detection strategy. In [62], the design of the test
statistics depends on the completely reconstructed sparse signals and requires a huge computation cost. Duarte M F [79] proposed a detection method for sparse signals based on partially reconstructed signals, which shows higher computational efficiency than [62]. For the detection of sparse signals in distributed sensor networks, Wimalajeewa T formulated a detection method based on distributed orthogonal matching pursuit (DOMP) [80] in which the majority-voting (MV) strategy is used to design the test statistics: each sensor first obtains its signal recovery results via DOMP, and the test statistics are then designed based on the common support of the (reconstructed) sparse signals at all sensors, derived via the MV strategy. In [42], four variants of DOMP for the detection of signals with joint sparsity were developed, reducing the communication burden of sensor networks or improving the detection performance. Li G proposed a distributed subspace pursuit (DSP) method for the detection of jointly sparse signals and provided a theoretical analysis of its performance bounds [81, 82]; the DSP method reduces the communication cost of distributed sensor networks without remarkable loss of detection performance. It is worth mentioning that all the aforementioned detection methods [42, 62, 79–82] require the recovery of (jointly) sparse signals. Given that the output of the detection task is only “0 or 1”, the large computational complexity of sparse signal recovery (which requires an iterative process) degrades the computational efficiency of sparse signal detection.

Unknown and Stochastic Signals. Ma J proposed a detector based on matched filtering and ordered statistics for the detection of a stochastic sparse signal without complete knowledge of its PDF [83] and then extended it to the detection of jointly sparse signals [84]. The detection method in [83] utilizes only the element with the maximum amplitude in a sparse signal and neglects the other informative elements, which may deteriorate its detection performance. In [85], Babaie-Zadeh M proposed a GLRT approach for the detection of unknown and stochastic signals. However, the detector in [85] still requires the estimation of unknown signal parameters, increasing the complexity of the detection stage. In addition, the “weak” characteristic of sparse signals deteriorates the performance of the estimation step in the GLRT and leads to degraded detection performance.
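Although the LMPT statistics for joint sparsity are derived only in Chaps. 2 and 3, the generic score-test form that underlies every locally most powerful test may help orient the reader here. In the sketch below, p(y; θ) is the likelihood with a scalar weak-signal parameter θ, I(0) is the Fisher information at θ = 0, and the hypotheses are H0: θ = 0 versus the close alternative H1: θ > 0:

```latex
% Locally most powerful (score) test: differentiate the log-likelihood at
% theta = 0 and normalize by the root Fisher information; no signal
% reconstruction is needed to evaluate the statistic.
T(\mathbf{y}) = \frac{1}{\sqrt{I(0)}} \left. \frac{\partial \ln p(\mathbf{y};\theta)}{\partial \theta} \right|_{\theta = 0}
\;\; \mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1} \;\; \tau,
\qquad T(\mathbf{y}) \sim \mathcal{N}(0, 1) \ \text{asymptotically under } \mathcal{H}_0 ,
```

so the threshold τ can be set directly from the desired false alarm probability via the standard normal CDF.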

1.2.2 Recovery Methods for Jointly Sparse Signals

The output of signal recovery is the complete form of the signals, rather than the presence-or-absence decision produced in the signal detection stage. Reconstructing jointly sparse signals requires solving underdetermined systems of equations, which is an l0-norm optimization problem and a non-deterministic polynomial-time hard (NP-hard) problem. Reconstruction algorithms for jointly sparse signal recovery may be broadly divided into four types: convex optimization [48, 49, 59, 86–90], non-convex optimization [36, 91–99], Bayesian optimization [100–105], and greedy pursuit [18, 36, 41, 58, 106–109] methods. The former three approximate the l0-norm optimization problem and then find sub-optimal solutions,
while the latter directly finds a sub-optimal solution of the l0-norm problem based on a priori knowledge of the number of non-zero elements in a sparse signal. The four kinds of signal recovery methods are briefly introduced as follows.

(1) Convex optimization

The commonly used convex optimization methods for sparse signal recovery approximate the l0-norm optimization problem by an l1-norm one. To further enforce the joint sparsity of multiple signals, Chen J utilized the l1,1-norm to solve the recovery of jointly sparse signals and theoretically showed that an approximately optimal sparse solution can be obtained based on the l1,1-norm [86, 87]. Tropp J A proposed using the l∞,1-norm to formulate the optimization problem of jointly sparse recovery [59]; compared with l1,1-norm optimization, l∞,1-norm optimization further enforces the dependence among the multiple sparse signals and prevents the sparse signal matrix (whose columns are the sparse signals) from degenerating into a column-sparse matrix [59]. Willsky A S proposed a recovery method for jointly sparse signals based on the l2,1-norm and applied it to practical source localization problems [35]. Eldar Y C provided a sufficient condition for the exact recovery of jointly sparse signals using the l2,1-norm [88]. Several open-source packages, such as CVX [89] and l1-magic [90], can be used to solve the aforementioned convex optimization problems. Beck A proposed a recovery method for jointly sparse signals based on the iterative shrinkage-thresholding strategy, which needs no matrix inversion in the iterative process and converges faster than generic solvers such as CVX [48, 49]. It is worth emphasizing that the sparse solutions obtained by the above convex optimization methods enjoy theoretically rigorous convergence guarantees [59, 86–88] but often require a time-consuming iterative process; the low computational efficiency of convex optimization is a key factor limiting its practical application, especially for large-scale problems.

(2) Non-convex optimization

Non-convex optimization methods aim at higher recovery accuracy for jointly sparse signals by designing non-convex cost functions, at the price of weaker theoretical convergence guarantees than convex methods. Hyder M M approximated the non-differentiable l0-norm by a smooth negative-exponential function and then adopted a gradient descent strategy to iteratively recover the jointly sparse signals [91–94]; under certain conditions, the non-convex negative-exponential function leads to better recovery performance than convex methods based on the l1-norm [91]. The lp,1-norm-based M-FOCUSS (p ≤ 1) is another classical non-convex optimization method [36], which takes the reciprocals of the signal elements calculated in each iteration as the weights of the sparse signals in the next iteration. Candes E J proposed an iterative reweighted signal recovery method by designing a logarithmic surrogate for the l1-norm [95–97], where the reciprocals of the absolute values of the signal elements obtained in each iteration are taken as the weights of the sparse signals in the next iteration. It has been demonstrated that iterative reweighted signal recovery methods enhance the sparsity of signals in comparison with M-FOCUSS [96].
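Two of the formulations reviewed above can be made concrete. The sketch below shows an l2,1 mixed-norm program in the spirit of [35, 88] and a reweighting rule in the spirit of [95–97]; the symbols (X with the L sparse signals as columns, ε, λ, and the small constant ϵ) are generic illustrative choices:

```latex
% Row-wise mixed norm: the l2-norm of each row of X, summed over rows,
% promotes a support shared by all columns (channels).
\min_{X \in \mathbb{R}^{N \times L}} \; \sum_{i=1}^{N} \| X_{i,:} \|_{2}
\quad \text{s.t.} \quad \| Y - \Phi X \|_{F} \le \varepsilon .
% Iterative reweighting: small coefficients receive large weights in the
% next iteration, which sharpens the sparsity of the solution.
w_{i}^{(k+1)} = \frac{1}{| x_{i}^{(k)} | + \epsilon}, \qquad
x^{(k+1)} = \arg\min_{x} \; \| y - \Phi x \|_{2}^{2} + \lambda \sum_{i} w_{i}^{(k+1)} | x_{i} | .
```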

Fosson S M proposed an iterative reweighted method for jointly sparse signal recovery in the context of distributed sensor networks with constrained communication bandwidth [98]. It is worth noting that non-convex optimization problems usually have multiple local minima, and the above (iterative) non-convex signal recovery methods lack a theoretical guarantee of convergence to the optimal solution. In addition, there is still large room for theoretical analysis of the above non-convex methods in terms of hyper-parameter selection, initialization strategies, and robustness to noise [99].

(3) Bayesian optimization

Signal recovery methods based on Bayesian optimization assume a priori statistical distributions for the sparse signals and then recover the signals via the maximum a posteriori (MAP) criterion. Rao B D proposed a multiple measurement vector (MMV)-based sparse Bayesian learning method to recover jointly sparse signals, where the elements of the sparse signals are modeled as Gaussian variables [100]. The advantage of MMV-based sparse Bayesian learning is that its globally optimal solution is always the sparsest solution, and it has fewer locally optimal solutions than the M-FOCUSS method [36]. Carin L proposed a multi-task Bayesian compressive sensing (MT-BCS) method to reconstruct jointly sparse signals [101], where a hierarchical Bayesian network establishes the relationships among the key variables of the signal model and the Gamma distribution is used to enhance the sparsity of signals. Zhang Z proposed two improved MMV-based sparse Bayesian learning methods in which the intra-correlation within a single sparse signal is exploited when recovering the jointly sparse signals [102]. Based on the MT-BCS framework, Amin M G further utilized the two-level block sparsity (joint sparsity and clustering characteristics) of signals and improved the performance of radar imaging [103–105]. Hannak G proposed a signal recovery method based on approximate message passing that adopts Gaussian quadratic approximation and a belief propagation network to recover jointly sparse signals with good convergence [110]. Bayesian optimization methods involve many time-consuming matrix inversion operations in the iterative process. In addition, inappropriate a priori statistical distributions may deteriorate the recovery of jointly sparse signals, and how to select the most appropriate a priori distribution in practical applications is still an open problem.

(4) Greedy pursuit

The performance of jointly sparse signal recovery mainly depends on the accuracy of locating the non-zero elements in the sparse signals, that is, on the recovery accuracy of the support set (the index set of atom signals). Based on the correlation between the atoms of the over-complete sensing matrix and the (linear) measurement vectors, greedy pursuit methods reconstruct the support set of jointly sparse signals in an iterative manner (determining one or more support elements in each iteration) and then recover the sparse signals from the reconstructed support set via the least-squares method.

Rao B D proposed the MMV matching pursuit (M-MP) and MMV orthogonal matching pursuit (M-OMP) methods based on the l2,0-norm [36]. Compared with M-MP, M-OMP further utilizes orthogonal projection to update the signal residual, which ensures the orthogonality of the atoms selected into the support set and achieves better recovery performance. Tropp J A proposed a simultaneous orthogonal matching pursuit (SOMP) method based on the l1,0-norm; in comparison with the l2,0-norm-based M-OMP, SOMP enhances the sparsity of signals and improves the recovery of jointly sparse signals [58]. Wimalajeewa T [41] and Skoglund M [106] proposed a distributed OMP (DOMP) method and its variants for distributed sensor networks with limited communication bandwidth. Compared with the M-MP, M-OMP, and SOMP methods, DOMP [41, 106] alleviates the burden of data transmission in distributed sensor networks at the cost of some loss in recovery performance. The aforementioned OMP-based methods select only the single currently best support element in each iteration, whereas subspace pursuit (SP), another popular greedy pursuit algorithm, selects multiple support elements per iteration and adds a “backtracking” step to correct the existing support set [111]. Based on SP, Feng J M proposed a simultaneous SP (SSP) method to reconstruct jointly sparse signals [112]. Skoglund M [106] and Li G [107] proposed distributed SP (DSP) methods to reduce the communication burden of distributed sensor networks; experiments show that DSP methods achieve better jointly sparse recovery than DOMP methods under the same communication constraints [107]. Li G also proposed a hybrid matching pursuit (HMP) method [18] that combines the advantages of OMP and SP to obtain higher recovery accuracy of jointly sparse signals. Iterative hard thresholding (IHT) is another commonly used greedy pursuit method, which solves for the support sets of sparse signals by applying a hard threshold based on a priori knowledge of the number of non-zero elements [108]. Blanchard J D proposed four IHT-based jointly sparse signal recovery methods [109]: simultaneous IHT (SIHT), simultaneous normalized IHT (SNIHT), simultaneous hard thresholding pursuit (SHTP), and simultaneous normalized hard thresholding pursuit (SNHTP). In SIHT, the l2,0-norm function and the IHT method are combined to recover the jointly sparse signals. Unlike SIHT, which obtains the sparse solutions directly by hard thresholding, SHTP projects the observed data onto the subspace spanned by the currently estimated support set to obtain the sparse signals. SNIHT and SNHTP are extended versions of SIHT and SHTP, respectively, in which the gradient-descent step size is adaptively selected to improve recovery accuracy during the iterations. Theoretical performance bounds for these four IHT-based methods are also provided in [109], and experimental results there show that SNIHT delivers the best recovery performance among the four. Compared with the other types of methods, greedy pursuit methods require fewer computational resources and allow rapid implementation, and theoretical analysis indicates that they can accurately reconstruct jointly sparse signals under certain conditions [113].

It is worth emphasizing that investigations combining greedy pursuit methods with signal characteristics beyond joint sparsity (e.g., the clustering structure within a single signal) are scarce. Moreover, most existing greedy pursuit methods are designed for analog or finely quantized measurements; there remains much room to investigate greedy-pursuit-based recovery of jointly sparse signals from coarsely quantized measurements, i.e., at low hardware cost.
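To make the greedy-pursuit template above concrete, here is a minimal NumPy sketch of a SOMP-style recovery in the spirit of [58]. It is an illustrative baseline under generic assumptions (Gaussian sensing matrix, exactly K-sparse signals), not code from the later chapters; the function and variable names are my own:

```python
import numpy as np

def somp(Y, Phi, K):
    """SOMP-style sketch: recover L jointly sparse columns of X from
    Y = Phi @ X by selecting one shared atom per iteration."""
    M, N = Phi.shape
    R = Y.copy()                    # residual, shape (M, L)
    support = []
    X_S = np.zeros((0, Y.shape[1]))
    for _ in range(K):
        # Correlate every atom with every channel's residual and
        # aggregate across channels (l1-norm over the channel axis).
        corr = np.abs(Phi.T @ R).sum(axis=1)
        corr[support] = 0.0         # never reselect a chosen atom
        support.append(int(np.argmax(corr)))
        # Orthogonal projection: least-squares fit on the current support.
        X_S, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ X_S
    X = np.zeros((N, Y.shape[1]))
    X[support, :] = X_S
    return X

# Toy usage: L = 3 channels sharing a common 4-element support.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 128)) / np.sqrt(32)
X_true = np.zeros((128, 3))
X_true[rng.choice(128, size=4, replace=False), :] = rng.standard_normal((4, 3))
X_hat = somp(Phi @ X_true, Phi, K=4)
```

The per-iteration aggregation rule is the main design choice here: summing absolute correlations corresponds to the l1,0 flavor attributed to SOMP above, whereas summing squared correlations would give an l2,0 (M-OMP-like) variant.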

1.3 Main Content and Organization

Focusing on improving the performance of existing jointly sparse signal processing methods, this book studies the key technologies of jointly sparse signal detection and recovery. A new method for the detection of jointly sparse signals is proposed, which improves the computational efficiency of existing detection methods without noticeable loss of detection performance; theoretical performance analysis of the proposed detection method is provided for the cases of analog measurements, coarsely quantized measurements, and Gaussian/non-Gaussian noise. For the problem of jointly sparse signal recovery, motivated by the advantages of the greedy pursuit strategy in terms of computational complexity and recovery accuracy, the greedy pursuit strategy is adopted for the recovery task. First, a new jointly sparse signal recovery method with a more robust atom-selection mechanism is proposed to improve the accuracy of signal recovery. Second, a new greedy pursuit-based method is proposed that exploits the two-level block sparsity of jointly sparse signals, not only reducing the cost of data quantization/computation but also improving the accuracy of signal recovery. Taking sparse radar imaging as an example, the effectiveness of the proposed recovery methods is verified on measured radar data. The main content of this book is summarized as follows; the overall framework is shown in Fig. 1.1.

[Fig. 1.1 The framework of the main content of this book]

(1) For the problem of jointly sparse signal detection, a new method based on the locally most powerful test (LMPT) with Gaussian noise is proposed that requires no signal recovery, alleviating the complexity of the detection system with negligible performance loss. This LMPT method is also extended to the case of coarsely quantized measurements to save communication resources in distributed sensor networks. To reduce the detection performance loss caused by coarse quantization, the optimal quantizers are theoretically derived; with these quantizers, the loss relative to the analog-observation scenario is accurately evaluated from both theoretical and simulated perspectives, and a strategy to compensate for it is provided. Furthermore, since the generalized Gaussian distribution is a commonly used non-Gaussian distribution, the LMPT detection method and the related theoretical analysis under Gaussian noise are also extended to the generalized Gaussian case, broadening the application scope of the proposed LMPT method.

(2) For the problem of jointly sparse signal recovery, a new method based on look-ahead-atom-selection is proposed: a look-ahead strategy is adopted during atom selection to evaluate the effect of the currently selected atom signals on future iterations, and the atom signal that causes the smallest future recovery error is added to the candidate atom set. Theoretical analysis of the condition numbers of the involved matrices indicates that the robustness of jointly sparse signal recovery is improved by the look-ahead-atom-selection strategy. Experimental results based on measured multi-channel radar data show that the proposed method significantly improves the accuracy of jointly sparse signal recovery, reduces the clutter level in radar images, and improves the quality of multi-channel radar imaging.

(3) For the problem of jointly sparse signal recovery, a new greedy pursuit method based on two-level block sparsity is proposed, which simultaneously utilizes the joint sparsity among multiple signals and the clustering property within each single sparse signal. Experimental results based on measured multi-channel radar data show that, compared with existing methods, the proposed method enhances the recovery of jointly sparse signals, concentrates the non-zero pixels more accurately in the target regions, and improves the quality of radar imaging. In addition, the two-level block sparsity-based method is extended to the scenario with 1-bit quantized measurements, which reduces the hardware cost of quantizers and the data storage burden of the signal processing system.

Experimental results based on measured 1-bit radar data show that, compared with existing 1-bit compressed sensing methods, the proposed two-level block sparsity-based signal recovery method improves the quality of radar imaging with comparable computational complexity.

The chapters of this book are organized as follows. This chapter describes the research background and significance of the detection and recovery of jointly sparse signals, summarizes and analyzes the corresponding state-of-the-art research, and provides an overview of the book.

Chapter 2 first establishes the relationship between jointly sparse signal detection and close and one-sided hypothesis testing. Based on the close and one-sided hypothesis testing model, the LMPT detection method under a Gaussian noise background is derived, its theoretical detection performance is analyzed, and the detection performance loss caused by coarse quantization (relative to analog observation data) is analyzed theoretically. The detection performance of the LMPT method is compared with that of existing methods, and the theoretical analysis of LMPT under the Gaussian noise background is verified via simulation results.

Chapter 3 proposes the LMPT detection method for jointly sparse signals under generalized Gaussian noise, derives the analytical form of the asymptotically optimal 1-bit quantizers, and calculates the analytical asymptotic relative efficiency (ARE) to compare the detection performance based on coarsely quantized versus analog observation data. Simulation and experimental results verify the effectiveness of the proposed method and the corresponding theoretical analysis.

Chapter 4 proposes a recovery method for jointly sparse signals based on look-ahead-atom-selection. Theoretical analysis is provided to demonstrate the superiority of the proposed method with the look-ahead-atom-selection strategy, and its computational complexity is also analyzed. Taking multi-channel radar imaging as an application example, measured radar data are used to verify the improvement in imaging quality achieved by the proposed method.

Chapter 5 first constructs a probabilistic graph model (PGM) to describe the two-level block sparsity of signals. In accordance with the PGM, a new signal recovery method with analog observation data is proposed, and its superiority in terms of radar imaging quality is verified using measured radar data. In addition, this signal recovery method based on two-level block sparsity is extended to the case of 1-bit observation data; experimental results based on measured radar data show that the proposed method improves the quality of 1-bit radar imaging.

Chapter 6 concludes this book and provides some perspectives on plausible future research.
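The enhanced-binary IHT of Chap. 5 is not reproduced here, but for orientation the following is a minimal NumPy sketch of the standard binary iterative hard thresholding (BIHT) baseline that 1-bit recovery methods of this kind typically start from. The step size, iteration count, and names are illustrative assumptions, not the book's settings:

```python
import numpy as np

def biht(y_sign, Phi, K, iters=100, tau=1.0):
    """BIHT sketch: estimate a K-sparse x (up to scale) from 1-bit
    measurements y_sign = sign(Phi @ x)."""
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(iters):
        # Gradient-like step that reduces sign inconsistencies.
        a = x + (tau / M) * (Phi.T @ (y_sign - np.sign(Phi @ x)))
        # Hard threshold: keep only the K largest-magnitude entries.
        idx = np.argsort(np.abs(a))[-K:]
        x = np.zeros(N)
        x[idx] = a[idx]
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x  # 1-bit data fixes x only up to scale

# Toy usage with hypothetical sizes.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((256, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, size=5, replace=False)] = rng.standard_normal(5)
x_hat = biht(np.sign(Phi @ x_true), Phi, K=5)
```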


Chapter 2

Detection of Jointly Sparse Signals via Locally Most Powerful Tests with Gaussian Noise

2.1 Introduction

In recent years, compressive sensing (CS) has emerged as a new paradigm for sparse signal processing, which aims at extracting valuable information about sparse signals from a small number of measurements [1–3]. CS was originally proposed to accurately reconstruct sparse signals from a small number of measurements. Besides CS-based signal recovery, another emerging need is the extraction of test statistics from only a few compressed measurements for the problem of sparse signal detection, i.e., deciding the presence or absence of sparse signals [4, 5]. In [4], an MP method is proposed for sparse signal detection. The theoretical analysis of detection performance for (deterministic and known) sparse signals and stochastic sparse signals is derived in [5] and [6], respectively. Note that the aforementioned works [4–6] consider the detection of a single sparse signal, that is, there is only a single sensor in the signal detection system.

In a distributed sensor network, multiple sensors are scattered at several locations in the monitoring area. Each sensor node observes its surrounding environment and transmits its processed data to the fusion center, which makes the final decision regarding the absence or presence of signals. Recently, advances in CS have resulted in new schemes of sparse signal detection with sensor networks: instead of high-dimensional data, compressed measurements can be transmitted within the network and further processed to save resource usage. Sparse signal detection in distributed sensor networks plays a key role in many practical applications, e.g., radar, communication, and other fields [8–11]. Due to the disturbance of noise and multipath channels, the signal values at different sensors are different. However, because each sensor in the distributed network observes the same scene, the signals at all sensors share the same sparse structure, that is, the same support set. Therefore, the problem of sparse signal detection in distributed sensor networks can be formulated as the problem of jointly sparse signal detection. In [12, 13], distributed subspace pursuit (DSP) algorithms are proposed for sparse signal detection by exploiting the joint sparsity pattern of signals, where the complete estimation of the unknown sparse signals

is not required. In the iterative process of DSP, the absence or presence of jointly sparse signals is determined by observing whether the (current) estimated support sets from all sensors contain common elements. Similar to DSP, several variants of distributed orthogonal matching pursuit (DOMP) are proposed in [14, 15], where test statistics are derived based on partial estimation of the support set and the energy detector.

Existing jointly sparse signal detection methods still suffer from two main limitations. First, the computational burden is relatively large. Existing methods require (partial) signal recovery results; for sparse signals, recovery has no analytical solution and generally needs an iterative process. However, the detection process only provides a binary output (0 or 1), i.e., whether the signal exists. Compared with this binary output, the iterative calculation in the signal recovery stage is very expensive. Second, the existing detection methods assume that analog data can be used to calculate test statistics. In practice, the communication bandwidth of distributed sensor networks is often limited [16]. In addition, the large number of sensors in distributed networks limits the use of expensive high-precision quantizers at local sensors. Due to these limitations, transmitting analog data in sensor networks is not conducive to saving communication resources and reducing the cost of sensor networks.

This chapter proposes the LMPT detection method for jointly sparse signal detection based on the Gaussian noise model. Firstly, the detection performance of the proposed method based on analog observation data is theoretically analyzed. The proposed method does not require signal estimation or iterative calculation, and thus reduces the computational complexity of the detection process without noticeable loss of detection performance. Coarsely quantized data are helpful to reduce the communication burden and hardware cost of distributed sensor networks. In this chapter, the detection performance of the proposed LMPT method with coarsely quantized observation data is also provided, and a method to compensate for the loss of detection performance caused by quantization is proposed. Simulation results verify the superiority of the proposed method over existing methods and demonstrate the effectiveness of the corresponding theoretical analysis.

2.2 Signal Model for Jointly Sparse Signal Detection

The detection system is illustrated in Fig. 2.1. In a distributed sensor network, all the local sensors jointly observe signals with the same sparsity pattern. After linear compression of the signals, all the compressed data are transmitted to the fusion center, which then decides whether the signals exist or not. The hypothesis testing model [17] of the detection problem is as follows:

$$
\begin{cases}
H_0: y_l = w_l, & l = 1, 2, \cdots, L, \\
H_1: y_l = \mathbf{h}_l^T \mathbf{x}_l + w_l, & l = 1, 2, \cdots, L,
\end{cases} \tag{2.1}
$$

Fig. 2.1 Illustration of jointly sparse signal detection in distributed sensor networks

where $H_0$ and $H_1$ denote the hypotheses that jointly sparse signals are absent/present, respectively, $l$ denotes the sensor index, $y_l \in \mathbb{R}$ is the compressed measurement, $\mathbf{x}_l \in \mathbb{R}^{N \times 1}$ is the sparse signal containing only a few nonzero coefficients, $\mathbf{h}_l \in \mathbb{R}^{N \times 1}$ is the known random linear operator constructed without any knowledge of the signals, $\{w_l, l = 1, 2, \cdots, L\}$ are independent and identically distributed (i.i.d.) Gaussian noise variables with $w_l \sim \mathcal{N}(0, \sigma_w^2), \forall l$, and $(\cdot)^T$ denotes the transpose operation. Upon receiving the compressed measurements $\{y_l, l = 1, 2, \cdots, L\}$ from all the sensors, the fusion center produces a global decision about the absence or presence of the sparse signals.

Next, the Bernoulli-Gaussian (BG) distribution is introduced, which has been widely used to model sparse signals. Due to the joint sparsity pattern, the locations of the nonzero coefficients in $\mathbf{x}_l$ are the same across all the sensors. Define an $N \times 1$ binary vector $\mathbf{s}$ to describe the joint sparsity pattern of $\{\mathbf{x}_l, l = 1, 2, \cdots, L\}$, where

$$
\begin{cases}
s_n = 1, & \text{with } x_{l,n} \ne 0, \ l = 1, 2, \cdots, L, \\
s_n = 0, & \text{with } x_{l,n} = 0, \ l = 1, 2, \cdots, L,
\end{cases} \tag{2.2}
$$

and $n = 1, 2, \cdots, N$. Assume that the elements $\{s_n, n = 1, 2, \cdots, N\}$ are i.i.d. Bernoulli variables, i.e., the probabilities of $s_n = 1$ and $s_n = 0$ are $p$ and $1 - p$, respectively, where $p \in (0, 1]$. According to (2.2), each coefficient in the sparse vector $\mathbf{x}_l$ is nonzero with probability $p$, for $l = 1, 2, \cdots, L$. Moreover, each nonzero coefficient is assumed to follow an i.i.d. Gaussian distribution $\mathcal{N}(0, \sigma_x^2)$. Therefore, the BG distribution is imposed on $x_{l,n}$:

$$
x_{l,n} \sim p \mathcal{N}(0, \sigma_x^2) + (1 - p) \delta(x_{l,n}), \ \forall n, \forall l, \tag{2.3}
$$

where $\delta(\cdot)$ is the Dirac delta function. In (2.3), the Bernoulli parameter $p$ is defined as the sparsity degree, which is close to zero under $H_1$ to promote the sparsity of signals, i.e., $p \to 0^+$. In general, it is difficult to know the sparsity degree of the signals before the signal detection task. Here the sparsity degree $p$ is not known a priori, but $\sigma_w^2$ and $\sigma_x^2$ are. In practice, $\sigma_w^2$ can be measured in the absence of the signal [18], and $\sigma_x^2$ can be obtained from the statistical characteristics of the sparse signals. The robustness of the proposed detection methods with respect to the hyper-parameters $\sigma_w^2$ and $\sigma_x^2$ will be discussed in Sect. 2.5. In this chapter, Gaussian signal and noise cases are considered; non-Gaussian cases will be discussed in Chap. 3.
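To make the signal model concrete, the following sketch generates synthetic data according to (2.1)–(2.3). It is only an illustrative simulation under assumed parameter values (the choices of L, N, p, σ_x, σ_w are placeholders, not settings from this book):

```python
import numpy as np

def generate_measurements(L, N, p, sigma_x, sigma_w, rng):
    """Draw compressed measurements y_l under H1 following (2.1)-(2.3)."""
    s = rng.random(N) < p                          # shared Bernoulli(p) support, (2.2)
    X = s * rng.normal(0.0, sigma_x, size=(L, N))  # BG coefficients per sensor, (2.3)
    H = rng.normal(0.0, 1.0, size=(L, N))          # known random operators h_l
    w = rng.normal(0.0, sigma_w, size=L)           # i.i.d. Gaussian noise
    y = np.einsum('ln,ln->l', H, X) + w            # y_l = h_l^T x_l + w_l, (2.1)
    return y, H

rng = np.random.default_rng(0)
y, H = generate_measurements(L=100, N=1000, p=0.05,
                             sigma_x=np.sqrt(5.0), sigma_w=1.0, rng=rng)
```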

2.3 LMPT Detection Based on Analog Data

2.3.1 Detection Method

Due to the unknown sparsity degree $p$, the hypothesis testing problem in (2.1) can be recast as the parameter testing problem

$$
H_0: p = 0, \quad H_1: 0 < p \le 1, \tag{2.4}
$$

which is a problem of one-sided and close hypothesis testing. "One-sided" means that $p > 0$ under $H_1$, and "close" means that the sparsity degree $p$ is close to zero under $H_1$ and equal to zero under $H_0$. First, we consider the LRT detector:

$$
\frac{P(\mathbf{y}|H_1; p_1)}{P(\mathbf{y}|H_0; p_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \eta, \tag{2.5}
$$

where $\mathbf{y} \triangleq [y_1, y_2, \cdots, y_L]$ represents the local data collected at the different sensors, $P(\mathbf{y}|H_1; p_1)$ and $P(\mathbf{y}|H_0; p_0)$ denote the likelihood functions under $H_1$ and $H_0$, respectively, $p_0 \triangleq 0$, $p_1$ denotes the true sparsity degree under $H_1$, and $\eta$ is the decision threshold. In accordance with the Neyman–Pearson lemma [19], the LRT achieves the highest probability of detection for a fixed false alarm rate. However, the LRT is not suitable for the parameter testing problem in (2.4) due to the unknown sparsity degree $p_1$.

For this type of hypothesis testing, the uniformly most powerful test (UMPT) has the optimal detection performance [19]. In the UMPT, all the unknown parameters in the test statistic are moved to the side of the decision threshold, so that the test statistic does not depend on the unknown parameters. Generally, it is difficult to achieve this simplification, so a UMPT seldom exists. In the absence of a UMPT, most approaches adopt the GLRT strategy [18]. In the GLRT, the unknown parameters are first estimated via maximum likelihood estimation (MLE), and the estimates are then substituted into the likelihood ratio detector. However, the GLRT still has two limitations. On the one hand, when the SNRs of sparse signals are very low, the performance of the estimation step in the GLRT degrades significantly [20]. In [21], it is shown that the increase of the estimation error leads to a remarkable deterioration of the GLRT detection performance for scalar signals at extremely low SNRs, and that the detection performance of the LMPT is higher than that of the GLRT at extremely

low SNRs. On the other hand, the theoretical analysis of the detection performance of the GLRT is difficult in the case of one-sided hypothesis testing [19], which makes it hard to determine the decision threshold $\eta$ of the GLRT according to a pre-defined false alarm rate.

The LMPT strategy is asymptotically optimal for the one-sided and close hypothesis testing problem [22]. Compared with the GLRT, the LMPT, which is based on a Taylor series expansion, does not require the estimation of unknown parameters, and the decision threshold of the LMPT detector can be unambiguously selected based on the theoretical analysis of the detection performance. This motivates us to use the LMPT strategy to solve the problem of detection of jointly sparse signals. The detailed derivation of the LMPT detector for jointly sparse signals is given below. From (2.1), the PDF of $y_l|H_0$ is

$$
y_l|H_0 \sim \mathcal{N}(0, \sigma_w^2), \ \forall l. \tag{2.6}
$$

Sparse signals are high-dimensional signals, i.e., $N$ is a very large number. In this case, the asymptotical PDF of $y_l|H_1$ can be written as

$$
y_l|H_1 \overset{a}{\sim} \mathcal{N}(0, p_1 \kappa_l^2 + \sigma_w^2), \ \forall l, \tag{2.7}
$$

based on the Lyapounov Central Limit Theorem [23], where $\kappa_l^2 \triangleq \sigma_x^2 \|\mathbf{h}_l\|_2^2$ and 'a' denotes 'asymptotical'. We refer the readers to the Appendix of [18] for the detailed proof of (2.7). The elements in $\mathbf{y} \triangleq [y_1, y_2, \cdots, y_L]$ are independently collected at different sensors. Accordingly, we have

$$
P(\mathbf{y}|H_0; p_0) = \prod_{l=1}^{L} P(y_l|H_0; p_0), \tag{2.8}
$$

$$
P(\mathbf{y}|H_1; p_1) = \prod_{l=1}^{L} P(y_l|H_1; p_1). \tag{2.9}
$$

Taking the logarithm, (2.5) becomes

$$
\ln P(\mathbf{y}|H_1; p_1) - \ln P(\mathbf{y}|H_0; p_0) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta. \tag{2.10}
$$

Taylor's series expansion provides the approximation of a differentiable function in the neighborhood of a given point. In (2.10), since the difference between $p_1$ and $p_0$ is very small and $\kappa_l^2$ is a limited value, the first-order Taylor's series expansion of $\ln P(\mathbf{y}|H_1; p_1)$ around $p_0$ is given by

$$
\ln P(\mathbf{y}|H_1; p_1) \approx \ln P(\mathbf{y}|H_1; p_0) + \left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0} (p_1 - p_0), \tag{2.11}
$$

Fig. 2.2 The illustration of Taylor’s series expansion in (2.11)

where

$$
\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p} = \frac{1}{2} \sum_{l=1}^{L} \left[ \frac{y_l^2 \kappa_l^2}{(\sigma_w^2 + p \kappa_l^2)^2} - \frac{\kappa_l^2}{\sigma_w^2 + p \kappa_l^2} \right]. \tag{2.12}
$$

The illustration of (2.11) is provided in Fig. 2.2. From (2.6) and (2.7), we have

$$
\ln P(\mathbf{y}|H_1; p_0) = \ln P(\mathbf{y}|H_0; p_0). \tag{2.13}
$$

By substituting (2.13) into (2.11), the NP test in (2.10) can be rewritten as

$$
\left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\ln \eta}{p_1 - p_0}. \tag{2.14}
$$

Note that one-sided hypothesis testing is implied in (2.14), i.e., the sparsity degree $p$ is a positive value under $H_1$. Multiplying both sides of (2.14) by a scale factor, the resulting LMPT detector based on analog observations is given as

$$
T^{\text{ana}}(\mathbf{y}) = \left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0} \Big/ \sqrt{\mathrm{FI}^{\text{ana}}(p_0)} \ \underset{H_0}{\overset{H_1}{\gtrless}} \ \eta', \tag{2.15}
$$

where the superscript 'ana' denotes 'analog observations' and $\mathrm{FI}^{\text{ana}}(p)$ represents the Fisher information,

$$
\mathrm{FI}^{\text{ana}}(p) = E\left[\left(\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right)^2\right] = \sum_{l=1}^{L} \frac{\kappa_l^4}{2(\sigma_w^2 + p \kappa_l^2)^2}, \tag{2.16}
$$

$E(\cdot)$ denotes the expectation operation, and $\eta' = [(p_1 - p_0)\sqrt{\mathrm{FI}^{\text{ana}}(p_0)}]^{-1} \ln \eta$ is the final decision threshold. The LMPT detector in (2.15) is the first-order Taylor approximation of the logarithmic LRT in (2.10) and is asymptotically optimal. Besides, the test statistic in (2.15) does not require the estimation of unknown parameters.
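As a hedged illustration of (2.12), (2.15), and (2.16), the sketch below evaluates the score and the Fisher information at $p = p_0 = 0$ and returns the normalized test statistic; the function and variable names are ours, not the book's:

```python
import numpy as np

def lmpt_analog(y, H, sigma_x, sigma_w):
    """Analog LMPT statistic T^ana(y) of (2.15), evaluated at p = p0 = 0."""
    kappa2 = sigma_x**2 * np.sum(H**2, axis=1)   # kappa_l^2 = sigma_x^2 ||h_l||_2^2
    # Score (2.12) at p = 0
    score = 0.5 * np.sum(y**2 * kappa2 / sigma_w**4 - kappa2 / sigma_w**2)
    # Fisher information (2.16) at p = 0
    fisher = np.sum(kappa2**2) / (2.0 * sigma_w**4)
    return score / np.sqrt(fisher)               # compare with the threshold eta'

t = lmpt_analog(y, H, sigma_x=np.sqrt(5.0), sigma_w=1.0)  # y, H from the earlier sketch
```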

2.3.2 Theoretical Analysis of Detection Performance

In this subchapter, we theoretically analyze the detection performance of the LMPT method in (2.15). Under the $H_0$ hypothesis, we have

$$
E\left[\left.\frac{\partial \ln P(\mathbf{y}|H_0; p)}{\partial p}\right|_{p = p_0}\right] = 0, \tag{2.17}
$$

$$
E\left[\left(\left.\frac{\partial \ln P(\mathbf{y}|H_0; p)}{\partial p}\right|_{p = p_0}\right)^2\right] = \mathrm{FI}^{\text{ana}}(p_0), \tag{2.18}
$$

based on the regularity condition [24]. When there are a large number of local sensors, i.e., $L \to \infty$, we can obtain the asymptotical PDF of the test statistic $T^{\text{ana}}(\mathbf{y})$ in (2.15) under the $H_0$ hypothesis:

$$
H_0: T^{\text{ana}}(\mathbf{y}) \overset{a}{\sim} \mathcal{N}(0, 1), \tag{2.19}
$$

based on the Central Limit Theorem (CLT). Under the $H_1$ hypothesis, the first-order Taylor approximation of $\left.\partial \ln P(\mathbf{y}|H_1; p)/\partial p\right|_{p = p_0}$ in (2.15) is

$$
\left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0} \approx \left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_1} + \left.\frac{\partial^2 \ln P(\mathbf{y}|H_1; p)}{\partial p^2}\right|_{p = p_1} (p_0 - p_1). \tag{2.20}
$$

In accordance with (2.20), we have

$$
E\left[\left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0}\right] \approx p_1 \mathrm{FI}^{\text{ana}}(p_1), \tag{2.21}
$$

$$
E\left[\left(\left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p = p_0}\right)^2\right] \approx \mathrm{FI}^{\text{ana}}(p_1). \tag{2.22}
$$

From (2.16), it is clear that $\mathrm{FI}^{\text{ana}}(p)$ is a right-continuous function at $p = 0$. Since $p_1 \approx p_0$, we have the approximation

$$
\mathrm{FI}^{\text{ana}}(p_1) \approx \mathrm{FI}^{\text{ana}}(p_0). \tag{2.23}
$$

Similar to (2.19), we can obtain the asymptotical PDF of the test statistic $T^{\text{ana}}(\mathbf{y})$ in (2.15) under the $H_1$ hypothesis:

$$
H_1: T^{\text{ana}}(\mathbf{y}) \overset{a}{\sim} \mathcal{N}(\mu^{\text{ana}}, 1), \tag{2.24}
$$

where

$$
\mu^{\text{ana}} = p_1 \sqrt{\mathrm{FI}^{\text{ana}}(p_1)} = \frac{p_1}{\sqrt{2}} \sqrt{\sum_{l=1}^{L} \frac{\kappa_l^4}{(\sigma_w^2 + p_1 \kappa_l^2)^2}}. \tag{2.25}
$$

From (2.19), (2.24), and (2.25), we obtain the relationship among the probability of false alarm $P_{FA}$, the probability of detection $P_D$, and the decision threshold $\eta'$ of the LMPT detector:

$$
P_{FA} = P\left(T^{\text{ana}}(\mathbf{y}) > \eta' \,|\, H_0\right) = 1 - F(\eta'), \tag{2.26}
$$

$$
P_D = P\left(T^{\text{ana}}(\mathbf{y}) > \eta' \,|\, H_1\right) = 1 - F'(\eta', \mu^{\text{ana}}), \tag{2.27}
$$

$$
\eta' = F^{-1}(1 - P_{FA}), \tag{2.28}
$$

where

$$
F(b) = \int_{-\infty}^{b} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{a^2}{2}\right) \mathrm{d}a, \tag{2.29}
$$

$$
F'(b, c) = \int_{-\infty}^{b} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(a - c)^2}{2}\right) \mathrm{d}a, \tag{2.30}
$$

and $F^{-1}(\cdot)$ denotes the inverse function of $F(\cdot)$.

Different from the existing detection methods for jointly sparse signals, sparse signal recovery is not required by the proposed LMPT detector in (2.15), as shown in Fig. 2.3. This means that the LMPT detector can achieve faster detection than detectors based on partial or complete signal reconstruction. For example, the complexity of the detector in [15] based on partial signal recovery by DOMP is $O(tNL)$, where $t$ is the number of iterations of DOMP, whereas the computational complexity of the proposed LMPT detector is only $O(L)$, which is significantly lower. In addition, because the LMPT is an approximation of the optimal LRT detection strategy, its detection performance is asymptotically optimal for sparse signal detection. In Sect. 2.5, the detection performance and computational complexity of the proposed LMPT method and the existing detection method will be compared via simulation results.
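The theoretical ROC of (2.25)–(2.28) reduces to standard Gaussian CDF evaluations. A minimal sketch, using scipy's normal distribution and illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def theoretical_pd(p1, kappa2, sigma_w, pfa):
    """P_D of the analog LMPT detector for given P_FA, via (2.25)-(2.28)."""
    # Deflection coefficient mu^ana of (2.25)
    mu = p1 * np.sqrt(np.sum(kappa2**2 / (2.0 * (sigma_w**2 + p1 * kappa2)**2)))
    eta = norm.ppf(1.0 - np.asarray(pfa))   # decision threshold eta', (2.28)
    return 1.0 - norm.cdf(eta - mu)         # (2.27), since T^ana ~ N(mu, 1) under H1

kappa2 = 5.0 * np.ones(100)                 # homogeneous sensors, kappa_l^2 = sigma_x^2 = 5
print(theoretical_pd(0.05, kappa2, 1.0, [0.01, 0.05, 0.1]))
```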

Fig. 2.3 Comparison between the methods proposed in this chapter and the existing methods. a Framework of existing detection methods of jointly sparse signals; b Framework of the proposed LMPT detection method

2.4 LMPT Detection Based on Coarsely Quantized Data

In Sect. 2.3 above, the LMPT detection method was proposed based on analog data. In some scenarios, stringent bandwidth constraints necessitate the transmission of coarsely quantized data within sensor networks. Besides, coarsely quantized data can greatly reduce the cost of the analog-to-digital converters (ADCs) in the detection system [25]. Thus, in this chapter, we design the LMPT detector for jointly sparse signals with quantized data. Note that quantization, especially coarse quantization, leads to a loss of detection performance. We therefore also design optimal quantizers that maximize the Fisher information to reduce the loss of detection performance caused by quantization, and propose a strategy to compensate for this loss by quantitatively increasing the number of sensors in the network.

2.4.1 Detection Method

Keeping (2.1) in mind, analog measurements collected by all the sensors are transmitted to the fusion center. When the bandwidth in the sensor network is limited, the local sensors have to quantize their real-valued measurements $\{y_l, \forall l\}$ before transmitting them to the fusion center. The $q$-bit quantizer $Q_l^q(\cdot)$ at the $l$-th sensor is defined as

$$
\mathbf{u}_l = Q_l^q(y_l) = \begin{cases}
\mathbf{v}_1, & \tau_{l,0} \le y_l < \tau_{l,1}, \\
\mathbf{v}_2, & \tau_{l,1} \le y_l < \tau_{l,2}, \\
\vdots & \vdots \\
\mathbf{v}_{2^q}, & \tau_{l,2^q - 1} \le y_l < \tau_{l,2^q},
\end{cases} \tag{2.31}
$$

where $l = 1, 2, \cdots, L$, $q \in \mathbb{Z}^+$ denotes the bit depth (bits per quantized measurement), $\{\tau_{l,m}, m = 0, 1, \cdots, 2^q\}$ denote the quantization thresholds, the output $\mathbf{u}_l$ indicates the interval in which $y_l$ lies, and $\{\mathbf{v}_k, k = 1, 2, \cdots, 2^q\}$ are the binary code words with $\mathbf{v}_k \in \{0, 1\}^q$. For example, given $q = 2$, we have $\mathbf{v}_1 = $ '00', $\mathbf{v}_2 = $ '01', $\mathbf{v}_3 = $ '10', and $\mathbf{v}_4 = $ '11'. Figure 2.4 illustrates the $q$-bit quantizer at the $l$-th sensor. Note that (2.31) degenerates to the 1-bit quantization case when $q = 1$. In addition, the quantization thresholds $\{\tau_{l,m}, m = 0, 1, \cdots, 2^q\}$ may differ across sensors. When the fusion center receives all the quantized data, it makes the decision on the absence or presence of jointly sparse signals.

Next, the mathematical form of the LMPT detector based on quantized data is derived. The probability mass function (PMF) of $\mathbf{u}_l$, i.e., the data received from the $l$-th local sensor, is given by

$$
P(\mathbf{u}_l) = \prod_{i=1}^{2^q} [P(\mathbf{u}_l = \mathbf{v}_i)]^{I(\mathbf{u}_l, \mathbf{v}_i)}, \ \forall l, \tag{2.32}
$$

where $I(\alpha, \beta) = 1$ if $\alpha = \beta$ and $0$ otherwise. According to (2.31), $\mathbf{u}_l$ is the quantized version of the compressed measurement $y_l$. Thus, we have

$$
P(\mathbf{u}_l = \mathbf{v}_i) = P\left(\tau_{l,i-1} \le y_l < \tau_{l,i}\right), \ \forall l, i. \tag{2.33}
$$

Fig. 2.4 Illustration of the $q$-bit quantizer at the $l$-th sensor, where $\{\tau_{l,m}, m = 0, 1, \cdots, 2^q\}$ are the quantization thresholds and $\{\mathbf{v}_k, k = 1, 2, \cdots, 2^q\}$ are the binary code words, for $l = 1, 2, \cdots, L$
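A minimal implementation of the quantizer (2.31) is sketched below; it uses integer interval indices in place of the binary code words $\mathbf{v}_i$, which is an equivalent labeling:

```python
import numpy as np

def quantize(y, thresholds):
    """q-bit quantizer (2.31): return the threshold-interval index of each y_l.

    `thresholds` holds the 2^q - 1 finite thresholds in increasing order; the
    saturated extremes tau_0 = -inf and tau_{2^q} = +inf are implicit. The
    integer output in {0, ..., 2^q - 1} stands in for the code word v_{i+1}.
    """
    return np.digitize(y, thresholds)

tau = np.array([-2.263, -1.325, 1.623])   # 2-bit thresholds of Table 2.1 (sigma_w = 1)
print(quantize(np.array([-3.0, -1.5, 0.4, 2.0]), tau))   # -> [0 1 2 3]
```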

In (2.6) and (2.7), we have provided the PDFs of $y_l|H_0$ and $y_l|H_1$. Based on (2.6), (2.7), (2.32), and (2.33), we have

$$
P(\mathbf{u}_l|H_0) = \prod_{i=1}^{2^q} \left[ F\left(\frac{\tau_{l,i}}{\sigma_w}\right) - F\left(\frac{\tau_{l,i-1}}{\sigma_w}\right) \right]^{I(\mathbf{u}_l, \mathbf{v}_i)}, \tag{2.34}
$$

$$
P(\mathbf{u}_l|H_1; p_1) = \prod_{i=1}^{2^q} \left[ F\left(\frac{\tau_{l,i}}{\sqrt{p_1 \kappa_l^2 + \sigma_w^2}}\right) - F\left(\frac{\tau_{l,i-1}}{\sqrt{p_1 \kappa_l^2 + \sigma_w^2}}\right) \right]^{I(\mathbf{u}_l, \mathbf{v}_i)}. \tag{2.35}
$$

All the quantized measurements received at the fusion center are denoted by $\mathbf{U} \triangleq \{\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_L\}$. Due to the statistical independence of the quantized data $\{\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_L\}$ collected at different sensors, we have

$$
P(\mathbf{U}) = \prod_{l=1}^{L} P(\mathbf{u}_l). \tag{2.36}
$$

Similar to (2.8)–(2.14), we can derive the LMPT detector with quantized data:

$$
T^{Q}(\mathbf{U}) = \left.\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p}\right|_{p = p_0} \Big/ \sqrt{\mathrm{FI}^{Q}(p_0)} \ \underset{H_0}{\overset{H_1}{\gtrless}} \ \eta', \tag{2.37}
$$

where the superscript 'Q' denotes 'quantization',

$$
\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p} = \sum_{l=1}^{L} \frac{\partial \ln P(\mathbf{u}_l|H_1; p)}{\partial p} = \sum_{l=1}^{L} \sum_{i=1}^{2^q} \frac{\kappa_l^2 \, I(\mathbf{u}_l, \mathbf{v}_i)}{2(p\kappa_l^2 + \sigma_w^2)^{3/2}} \\
\times \left[ \tau_{l,i-1} G\left(\frac{\tau_{l,i-1}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) - \tau_{l,i} G\left(\frac{\tau_{l,i}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) \right] \\
\times \left[ F\left(\frac{\tau_{l,i}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) - F\left(\frac{\tau_{l,i-1}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) \right]^{-1}, \tag{2.38}
$$

$$
\mathrm{FI}^{Q}(p) = E\left[\left(\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p}\right)^2\right] = \sum_{l=1}^{L} E\left[\left(\frac{\partial \ln P(\mathbf{u}_l|H_1; p)}{\partial p}\right)^2\right] \triangleq \sum_{l=1}^{L} \mathrm{FI}_l^{Q}(p), \tag{2.39}
$$

$$
\mathrm{FI}_l^{Q}(p) = \frac{\kappa_l^4}{4(p\kappa_l^2 + \sigma_w^2)^2} \sum_{i=1}^{2^q} \left[ \frac{\tau_{l,i-1}}{\sqrt{p\kappa_l^2 + \sigma_w^2}} G\left(\frac{\tau_{l,i-1}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) - \frac{\tau_{l,i}}{\sqrt{p\kappa_l^2 + \sigma_w^2}} G\left(\frac{\tau_{l,i}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) \right]^2 \\
\times \left[ F\left(\frac{\tau_{l,i}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) - F\left(\frac{\tau_{l,i-1}}{\sqrt{p\kappa_l^2 + \sigma_w^2}}\right) \right]^{-1}, \tag{2.40}
$$

$$
G(\alpha) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{\alpha^2}{2}\right). \tag{2.41}
$$
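For quantizer design, only $\mathrm{FI}_l^{Q}(p)$ in (2.40) needs to be evaluated numerically. A direct transcription is sketched below, with $F$ and $G$ realized by scipy's normal CDF/PDF; this is an illustrative sketch, not the book's code:

```python
import numpy as np
from scipy.stats import norm

def fisher_q_sensor(p, kappa2, sigma_w, thresholds):
    """Per-sensor Fisher information FI_l^Q(p) of (2.40)."""
    s = np.sqrt(p * kappa2 + sigma_w**2)
    tau = np.concatenate(([-np.inf], np.sort(thresholds), [np.inf]))  # saturated ends (2.48)
    z = tau / s
    zf = np.where(np.isfinite(z), z, 0.0)     # placeholder at +-inf ...
    zG = zf * norm.pdf(z)                     # ... where z*G(z) -> 0 anyway
    num = (zG[:-1] - zG[1:])**2               # squared bracket of (2.40)
    den = norm.cdf(z[1:]) - norm.cdf(z[:-1])  # interval probabilities
    return kappa2**2 / (4.0 * (p * kappa2 + sigma_w**2)**2) * np.sum(num / den)

print(fisher_q_sensor(0.0, 5.0, 1.0, [1.575]))  # 1-bit case: about 0.152 * 5^2 = 3.8
```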

2.4.2 Quantizer Design and the Effect of Quantization on Detection Performance

Similar to (2.17)–(2.24), we can derive the PDFs of the test statistic $T^{Q}(\mathbf{U})$:

$$
T^{Q}(\mathbf{U}) \overset{a}{\sim} \begin{cases}
\mathcal{N}(0, 1), & H_0, \\
\mathcal{N}(\mu^{Q}, 1), & H_1,
\end{cases} \tag{2.42}
$$

where

$$
\mu^{Q} = p_1 \sqrt{\mathrm{FI}^{Q}(p_1)}. \tag{2.43}
$$

The relationship among the probability of false alarm $P_{FA}$, the probability of detection $P_D$, and the decision threshold $\eta'$ of the LMPT detector based on quantized data is

$$
P_{FA} = P\left(T^{Q}(\mathbf{U}) > \eta' \,|\, H_0\right) = 1 - F(\eta'), \tag{2.44}
$$

$$
P_D = P\left(T^{Q}(\mathbf{U}) > \eta' \,|\, H_1\right) = 1 - F'(\eta', \mu^{Q}), \tag{2.45}
$$

$$
\eta' = F^{-1}(1 - P_{FA}). \tag{2.46}
$$

Obviously, the quantization operation reduces the information carried by the data, so the detection performance of the LMPT detector based on quantized data is lower than that based on the analog data in Sect. 2.3. From (2.39), (2.40), and (2.44)–(2.46), we can observe that the quantization thresholds at the local sensors influence the detection performance. In particular, from (2.44)–(2.46), we can see that increasing the mean value $\mu^{Q}$ improves the detection performance of the quantized LMPT detector. This inspires us to design the quantization thresholds by maximizing $\mu^{Q}$, so as to reduce the performance loss caused by quantization:

$$
\{\hat{\tau}_{l,m}, l = 1, 2, \cdots, L, m = 1, 2, \cdots, 2^q - 1\} = \arg\max_{\tau_{l,m}} \mu^{Q}, \quad \text{s.t. } \tau_{l,1} < \tau_{l,2} < \cdots < \tau_{l,2^q - 1}, \ \forall l, \tag{2.47}
$$

where $\hat{\tau}_{l,m}$ is the estimate of $\tau_{l,m}$, $\forall l, m$. In (2.47), saturated quantization is considered, i.e.,

$$
\tau_{l,0} = -\infty, \quad \tau_{l,2^q} = +\infty, \ \forall l, \tag{2.48}
$$

and these extreme thresholds will not be optimized. Since the mean value $\mu^{Q}$ is a monotonically increasing function of the Fisher information $\mathrm{FI}^{Q}(p_1)$ (see (2.43)), and

$$
\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0), \tag{2.49}
$$

Equation (2.47) can be simplified as

$$
\{\hat{\tau}_{l,m}, l = 1, 2, \cdots, L, m = 1, 2, \cdots, 2^q - 1\} = \arg\max_{\tau_{l,m}} \mathrm{FI}^{Q}(0), \quad \text{s.t. } \tau_{l,1} < \tau_{l,2} < \cdots < \tau_{l,2^q - 1}, \ \forall l. \tag{2.50}
$$

In Sect. 2.5, the effectiveness of the approximation $\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0)$ in (2.49) will be evaluated by simulations. Since $\mathrm{FI}^{Q}(0) = \sum_{l=1}^{L} \mathrm{FI}_l^{Q}(0)$ (as shown in (2.39) and (2.40)) and $\mathrm{FI}_l^{Q}(0)$ is a function of the quantization thresholds at the $l$-th local sensor only, we can decouple the optimization problem in (2.50) into $L$ independent problems:

$$
\{\hat{\tau}_{l,m}, m = 1, 2, \cdots, 2^q - 1\} = \arg\max_{\tau_{l,m}} \mathrm{FI}_l^{Q}(0), \quad \text{s.t. } \tau_{l,1} < \tau_{l,2} < \cdots < \tau_{l,2^q - 1}, \tag{2.51}
$$

for $l = 1, 2, \cdots, L$. For ease of notation, we drop the subscript $l$ in (2.51) and use $\Omega$ to denote the objective function $\mathrm{FI}_l^{Q}(0)$ in the following, i.e.,

$$
\{\hat{\tau}_m, m = 1, 2, \cdots, 2^q - 1\} = \arg\max_{\tau_m} \Omega, \quad \text{s.t. } \tau_1 < \tau_2 < \cdots < \tau_{2^q - 1}. \tag{2.52}
$$

It is difficult to find closed-form optimal solutions of (2.52), so we solve it using a numerical optimization algorithm. For a maximization problem, commonly used
optimization methods, such as the gradient descent (GD) method or Newton's method, require the concavity of the objective function. However, the objective function $\Omega$ in (2.52) is non-concave. Therefore, the particle swarm optimization (PSO) algorithm is used here to solve the problem in (2.52), since it does not rely on concavity. PSO stems from the social cooperative and competitive behaviors of bird flocking and fish schooling, and optimizes a problem by iteratively improving a set of particles, i.e., candidate solutions, with respect to an objective function [26, 27]. PSO is a stochastic optimization algorithm whose global convergence has been shown under certain assumptions [28]. It is worth emphasizing that PSO is based on simple iterations with low computational complexity. We refer the readers to [26, 27] for detailed descriptions of PSO.

Next, the performance of the proposed quantized LMPT detector (with local quantizers designed using PSO) is compared with that of the clairvoyant LMPT detector (based on analog data), and an approximate analysis of the performance loss caused by quantization is carried out. When the communication bandwidth in the sensor network is extremely limited, 1-bit data transmission may be needed [29]. In the case of 1-bit quantization, the problem in (2.52) degenerates to

$$
\hat{\tau} = \arg\max_{\tau} \Omega_{1\text{-bit}}\left(\frac{\tau}{\sigma_w}\right), \tag{2.53}
$$

where

$$
\Omega_{1\text{-bit}}(z) = \frac{\kappa^4}{4\sigma_w^4} [z G(z)]^2 \left\{ [F(z)]^{-1} + [1 - F(z)]^{-1} \right\}. \tag{2.54}
$$

The numerical solution of (2.53) provided by PSO [21, 25–27] converges to two points:

$$
\frac{\hat{\tau}}{\sigma_w} \approx \pm 1.575. \tag{2.55}
$$

The function $\Omega_{1\text{-bit}}(z)$ is plotted in Fig. 2.5, which also verifies the solution in (2.55). Since $\Omega_{1\text{-bit}}(z)$ is an even function, selecting $\hat{\tau} \approx 1.575\sigma_w$ or $\hat{\tau} \approx -1.575\sigma_w$ produces the same detection performance. Without loss of generality, we select

$$
\hat{\tau} \approx 1.575\sigma_w \tag{2.56}
$$

in the following discussion. Note that the quantization threshold given in (2.56) is valid for all the sensors. Substituting (2.56) into (2.54), $\Omega_{1\text{-bit}}(z)$ achieves approximately its maximum value $\Omega_{1\text{-bit}}^{\max}$:

$$
\Omega_{1\text{-bit}}^{\max} \approx \frac{0.152 \kappa_l^4}{\sigma_w^4}. \tag{2.57}
$$

Fig. 2.5 The function $\Omega_{1\text{-bit}}(z)$, where $z = \tau/\sigma_w$ and the scaling factor $\kappa^4/4\sigma_w^4$ is set to 1. The maximum values are achieved at $\hat{z} \approx \pm 1.575$, i.e., $\hat{\tau}/\sigma_w \approx \pm 1.575$
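The fixed points in (2.55) and the constant in (2.57) can be checked without PSO by a simple grid search over the unscaled objective of (2.54); a sketch:

```python
import numpy as np
from scipy.stats import norm

# Objective (2.54) with the scale factor kappa^4 / (4 sigma_w^4) stripped off,
# which does not change the maximizing z = tau / sigma_w.
z = np.linspace(-4.0, 4.0, 80001)
F, G = norm.cdf(z), norm.pdf(z)
omega = (z * G)**2 * (1.0 / F + 1.0 / (1.0 - F))

print(abs(z[np.argmax(omega)]))   # ~1.575, matching (2.55)
print(omega.max() / 4.0)          # ~0.152, matching (2.57)
```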

In a homogeneous scenario, all the sensors are expected to have the same $\kappa_l^2$ [30]. When $\kappa_l^2 = \kappa^2$ for $l = 1, 2, \cdots, L$, from (2.16) and (2.39) we have

$$
\mathrm{FI}^{1\text{-bit}}_{\max}(p_1) \overset{p_1 \to 0^+}{\approx} \sum_{l=1}^{L} \Omega_{1\text{-bit}}^{\max} = \frac{0.152 L \kappa^4}{\sigma_w^4}, \tag{2.58}
$$

$$
\mathrm{FI}^{\text{ana}}(p_1) \overset{p_1 \to 0^+}{\approx} \frac{L \kappa^4}{2 \sigma_w^4}. \tag{2.59}
$$

Therefore, to ensure that the 1-bit LMPT detector and the clairvoyant LMPT detector have the same detection performance, i.e., $\mathrm{FI}^{1\text{-bit}}_{\max}(p_1) = \mathrm{FI}^{\text{ana}}(p_1)$, it is easy to derive that

$$
\frac{L^{1\text{-bit}}}{L^{\text{ana}}} = \left(\frac{1}{0.55}\right)^2 \approx 3.3, \tag{2.60}
$$

where $L^{1\text{-bit}}$ and $L^{\text{ana}}$ denote the required numbers of sensors for the 1-bit LMPT detector and the clairvoyant LMPT detector, respectively. It is clear from (2.60) that, for the problem of detection of sparse stochastic signals, the 1-bit LMPT detector needs to increase the number of sensors to $3.3L$ to achieve approximately the same performance as the clairvoyant LMPT detector with $L$ sensors. As for the communication burden, the proposed 1-bit LMPT detector is more efficient than the clairvoyant LMPT detector, since the former only needs to transmit a total of $3.3L$ bits, while the latter transmits $L$ analog measurements.
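Equating (2.58) and (2.59) makes the factor in (2.60) explicit:

$$
0.152\, L^{1\text{-bit}}\, \frac{\kappa^4}{\sigma_w^4} = \frac{L^{\text{ana}} \kappa^4}{2\sigma_w^4} \ \Longrightarrow\ \frac{L^{1\text{-bit}}}{L^{\text{ana}}} = \frac{1}{2 \times 0.152} \approx 3.29,
$$

which matches $(1/0.55)^2$ since $\sqrt{2 \times 0.152} \approx 0.55$.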

Compared with 1-bit quantization, multilevel quantization incurs less loss of detection performance. A question naturally arises: can a multilevel LMPT detector with optimal local quantizers achieve detection performance similar to that of the clairvoyant LMPT detector? We first consider the extreme case when the bit depth $q$ tends to infinity, in which the detection performance of the multilevel LMPT detector naturally converges to that of the clairvoyant LMPT detector, i.e.,

$$
\lim_{q \to +\infty} \mu^{q\text{-bit}} = \mu^{\text{ana}}. \tag{2.61}
$$

For completeness, a mathematical proof of (2.61) is provided in Appendix A. In Tables 2.1 and 2.2, we provide the quantization thresholds for 2-bit and 3-bit quantization, obtained by the PSO algorithm. Substituting the designed quantization thresholds into (2.39), we have

$$
\mu^{2\text{-bit}}_{\max} = 0.85 \mu^{\text{ana}}, \tag{2.62}
$$

$$
\mu^{3\text{-bit}}_{\max} = 0.95 \mu^{\text{ana}}. \tag{2.63}
$$

From (2.26), (2.27), (2.43), and (2.63), it follows that the 3-bit LMPT detector can attain detection performance very close (within 95%) to that of the clairvoyant LMPT detector when the designed quantization thresholds are used at all the local sensors. In other words, for the problem of detection of sparse stochastic signals, it is unnecessary to use finely quantized measurements to compensate for the performance loss caused by quantization.

Table 2.1 Designed quantization thresholds for the 2-bit LMPT detector (obtained by the PSO algorithm)

$\hat{\tau}_1 = -2.263\sigma_w$, $\hat{\tau}_2 = -1.325\sigma_w$, $\hat{\tau}_3 = 1.623\sigma_w$

Table 2.2 Designed quantization thresholds for the 3-bit LMPT detector (obtained by the PSO algorithm)

$\hat{\tau}_1 = -2.694\sigma_w$, $\hat{\tau}_2 = -1.787\sigma_w$, $\hat{\tau}_3 = -1.057\sigma_w$, $\hat{\tau}_4 = -0.980\sigma_w$, $\hat{\tau}_5 = 1.565\sigma_w$, $\hat{\tau}_6 = 2.154\sigma_w$, $\hat{\tau}_7 = 2.915\sigma_w$

2.5 Simulation Results

2.5.1 Simulation Results of the LMPT Detector with Analog Data

In this subchapter, simulation results are presented to show the performance of the LMPT detector based on analog data in Sect. 2.3. The length of the sparse signals is fixed at $N = 1000$. The elements of the linear operators $\mathbf{h}_l \in \mathbb{R}^{N \times 1}$ are drawn from a standard normal distribution, for $l = 1, 2, \cdots, L$. A homogeneous scenario is considered, i.e., all the linear operators are normalized so that $\|\mathbf{h}_l\|_2^2 = 1$, for $l = 1, 2, \cdots, L$. The SNRs at the local sensors are defined as $\text{SNR} = p_1 \sigma_x^2 / \sigma_w^2$, and all the local sensors are assumed to have the same SNR.

First, Monte Carlo (MC) results with $10^4$ trials of the proposed LMPT detector are presented to corroborate the theoretical analysis in Sect. 2.3.2. In Fig. 2.6, we plot the receiver operating characteristic (ROC) curves of the LMPT detector versus the sparsity degree, where $\sigma_x^2 = 5$, $\sigma_w^2 = 1$, $L = 100$. The solid lines represent the theoretical performance, while the star marks denote the performance of the MC trials. As observed in Fig. 2.6, our theoretical analysis and the simulations are quite consistent. In addition, a larger sparsity degree leads to better detection performance of the LMPT detector since it produces a larger SNR. In Fig. 2.7, we plot the ROC curves of the LMPT detector versus the number of sensors, where $\sigma_x^2 = 10$, $\sigma_w^2 = 1$, $p_1 = 0.05$. The consistency between the MC results and our theoretical analysis can also be seen in Fig. 2.7. Besides, increasing the number of sensors yields better detection performance of the LMPT detector.

The LMPT detector assumes that $\sigma_x^2$ and $\sigma_w^2$ are known a priori. However, errors may exist in the pre-estimated $\sigma_x^2$ and $\sigma_w^2$. In Fig. 2.8, we show the performance

Fig. 2.6 ROC curves of the LMPT detector versus sparsity degree, where $\sigma_x^2 = 5$, $\sigma_w^2 = 1$, $L = 100$, and 'Theo.' denotes 'Theoretical'

Fig. 2.7 ROC curves of the LMPT detector versus the number of sensors, where $\sigma_x^2 = 10$, $\sigma_w^2 = 1$, $p_1 = 0.05$

of the LMPT detector versus the normalized mean square error (NMSE) of $\sigma_x^2$ and $\sigma_w^2$, where $\text{NMSE}(\theta) = (\hat{\theta} - \theta)^2 / \theta^2$ and $\hat{\theta}$ is the estimate of $\theta$. From Fig. 2.8, it is observed that the perturbation in the performance of the LMPT detector caused by small errors in $\sigma_x^2$ and $\sigma_w^2$ is negligible.

Next, the LMPT detector is compared with the DOMP-based detector proposed in [15] via simulations. In Fig. 2.9, we compare the ROC curves of the LMPT detector and the DOMP-based detector with different numbers of sensors and sparsity degrees, where $\sigma_x^2 = 5$, $\sigma_w^2 = 1$. From Fig. 2.9, it is clear that the two detectors have almost the same detection performance. In Table 2.3, a comparison of runtime between these two detectors is carried out for different numbers of sensors, where $\sigma_x^2 = 5$, $\sigma_w^2 = 1$, $p_1 = 0.05$. We can observe that the

Fig. 2.8 ROC curves of the LMPT detector versus the NMSE of $\sigma_x^2$ and $\sigma_w^2$, where $L = 100$, $p_1 = 0.01$, and the true values are $\sigma_x^2 = 10$, $\sigma_w^2 = 1$

Fig. 2.9 ROC curves of the LMPT detector and the DOMP-based detector, where $\sigma_x^2 = 5$, $\sigma_w^2 = 1$

Table 2.3 Runtime of the LMPT detector and the DOMP-based detector [15] for different numbers of sensors

L      Runtime (s), LMPT detector    Runtime (s), DOMP-based detector
50     4.7 × 10⁻⁶                    3.1 × 10⁻³
100    5.9 × 10⁻⁶                    6.6 × 10⁻³
150    7.2 × 10⁻⁶                    1.0 × 10⁻²
200    7.8 × 10⁻⁶                    1.3 × 10⁻²

computational time of the proposed LMPT detector is much less than that of the DOMP-based detector. This implies that the proposed LMPT detector reduces the computational complexity at the fusion center without noticeable loss of detection performance and achieves fast detection of jointly sparse signals.
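For readers who wish to reproduce ROC points of this kind, a minimal Monte Carlo sketch is given below; it reuses generate_measurements and lmpt_analog from the earlier sketches, and all parameter values are illustrative, not the book's exact settings:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, p1, sx, sw, trials = 100, 1000, 0.05, np.sqrt(5.0), 1.0, 1000

t0, t1 = np.empty(trials), np.empty(trials)
for k in range(trials):
    H = rng.normal(size=(L, N)) / np.sqrt(N)     # normalized so ||h_l||_2^2 is about 1
    w = rng.normal(0.0, sw, size=L)
    t0[k] = lmpt_analog(w, H, sx, sw)            # H0: noise only
    s = rng.random(N) < p1                       # H1: jointly sparse signals, per (2.2)-(2.3)
    X = s * rng.normal(0.0, sx, size=(L, N))
    t1[k] = lmpt_analog(np.einsum('ln,ln->l', H, X) + w, H, sx, sw)

for eta in np.quantile(t0, [0.90, 0.95, 0.99]):  # empirical H0 thresholds
    print(f"PFA = {np.mean(t0 > eta):.2f},  PD = {np.mean(t1 > eta):.2f}")
```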

2.5.2 Simulation Results of the LMPT Detector with Quantized Data

In this subchapter, simulation results are provided to corroborate our theoretical analysis and demonstrate the performance of the proposed local quantizers and the quantized LMPT detector. In Fig. 2.10, the effectiveness of the approximation $\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0)$ in (2.49) is evaluated, where $L = 100$, $q = 2$. As the baseline, the dotted lines in Fig. 2.10 denote the case with a priori known sparsity degree, where the quantization thresholds are obtained by solving the following problem:

$$
\{\hat{\tau}_{l,m}, l = 1, 2, \cdots, L, m = 1, 2, \cdots, 2^q - 1\} = \arg\max_{\tau_{l,m}} \mathrm{FI}^{Q}(p_1), \quad \text{s.t. } \tau_{l,1} < \tau_{l,2} < \cdots < \tau_{l,2^q - 1}, \ \forall l. \tag{2.64}
$$

Fig. 2.10 The mean value $\mu^{2\text{-bit}}_{\max}$ versus the sparsity degree $p$, where $L = 100$. The solid lines denote the case when the sparsity degree $p$ is unknown and the local quantizers are designed according to (2.51). The dotted lines denote the baseline that the sparsity degree $p$ is known a priori. Four scenarios are considered, i.e., $\sigma_x^2/\sigma_w^2 = 1$, $2$, $3$, and $5$

The problem in (2.64) can be decoupled into $L$ independent problems and solved by PSO. The solid lines in Fig. 2.10 denote the case where the sparsity degree $p$ is unknown and the quantization thresholds are designed according to (2.51). From Fig. 2.10, we can see that the approximation $\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0)$ has a negligible influence on the detection performance when the sparsity degree is small. In addition, when $\sigma_x^2/\sigma_w^2 \gg 1$, a large sparsity degree (e.g., $p > 0.1$) increases the performance loss caused by the approximation $\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0)$. This does not reduce the effectiveness of the proposed quantized LMPT detector, however, since the SNRs of sparse signals are naturally low, i.e., $\sigma_x^2/\sigma_w^2$ is a limited value and $p \to 0^+$. Therefore, the approximation $\mathrm{FI}^{Q}(p_1) \approx \mathrm{FI}^{Q}(0)$ is valid for the problem of sparse signal detection.

Then, we evaluate the effect of the 1-bit quantization threshold given in (2.56) on the detection performance. In Fig. 2.11, four scenarios with different parameters are considered to examine the proposed 1-bit quantization threshold with $10^4$ MC trials. The sparsity degree is $p_1 = 0.05$, and the probability of false alarm is $P_{FA} = 0.05$. It is observed that the threshold provided in (2.56) leads to the optimal probability of detection. For multilevel quantization, the problem in (2.52) is a multivariable optimization problem, and it is difficult to directly show the optimality of the proposed multilevel quantization thresholds (as in Fig. 2.11). Without loss of generality, we select the 2-bit LMPT detector as an example to show the superiority of the proposed quantizers. Uniform quantizers have been widely investigated in the existing literature. In Fig. 2.12, we compare receiver operating characteristic

(ROC) curves of the 2-bit LMPT detector generated by the uniform quantizers and by the proposed quantizers. The range of quantization thresholds of the uniform quantizer is set to $[-5\sigma_w, 5\sigma_w]$. In Fig. 2.12, the solid lines represent the theoretical performance, while the star and diamond marks denote the performance of the MC trials. As shown in Fig. 2.12, the 2-bit LMPT detector with the proposed quantization thresholds provides better performance than that based on uniform quantizers, since uniform quantizers do not account for the relationship between the quantization thresholds and the detection performance.

In Fig. 2.13, ROC curves of the 1-bit LMPT detector and the clairvoyant LMPT detector are plotted for different numbers of sensors, where the number of sensors of the proposed 1-bit LMPT detector is set to 3.3 times that of the clairvoyant LMPT detector in all the MC trials. As illustrated in Fig. 2.13, the 1-bit LMPT detector with $3.3L$ sensors and the clairvoyant LMPT detector with $L$ sensors provide approximately the same detection performance. This is consistent with our theoretical analysis in (2.60).

Fig. 2.11 Probability of detection of the 1-bit LMPT detector versus different thresholds. The sparsity degree is $p_1 = 0.05$, and the probability of false alarm is $P_{FA} = 0.05$. a $L = 100$, SNR = −6 dB; b $L = 100$, SNR = −3 dB; c $L = 300$, SNR = −6 dB; d $L = 300$, SNR = −3 dB

Fig. 2.12 ROC curves of the 2-bit LMPT detector generated by the uniform quantization and the proposed quantization thresholds, where p1 = 0.05, SNR = −6 dB. Two scenarios are considered, i.e., L = 100 and L = 500

Fig. 2.13 ROC curves of the 1-bit LMPT detector and the clairvoyant LMPT detector for different numbers of sensors, where p1 = 0.01 and SNR = −5 dB

In Fig. 2.14, the ROC curves of the LMPT detector based on analog/quantized data are plotted with different parameters. It is observed from Fig. 2.14 that the simulated ROC curves are consistent with the theoretical ROC curves. Figure 2.14 also shows that there is an obvious performance gap between the 1-bit LMPT detector and the clairvoyant LMPT detector when they use the same number of sensor nodes. Increasing the bit depth leads to a gain in detection performance, and the performance loss caused by quantization becomes negligible when the bit depth increases to 3, which is consistent with our theoretical analysis in (2.63).


Fig. 2.14 ROC curves of the LMPT detector based on analog/quantized data, where σx2 = 5, σw2 = 1. a p1 = 0.01, L = 100, b p1 = 0.03, L = 100, c p1 = 0.05, L = 100, d p1 = 0.01, L = 300, e p1 = 0.03, L = 300, f p1 = 0.05, L = 300


2.6 Conclusion

The problem of detection of jointly sparse signals with sensor networks was studied in this chapter. The LMPT detector was proposed to detect jointly sparse signals and to improve the computational and communication efficiency of existing methods. The main work of this chapter is summarized as follows: (1) The problem of detection of jointly sparse signals was converted into a close and one-sided hypothesis testing problem. (2) The LMPT detectors based on analog and quantized data were derived. (3) The theoretical detection performance of the LMPT detectors was analyzed; the theoretical performance loss caused by coarse quantization was quantified, and a strategy to compensate for this loss was proposed. (4) Simulations were conducted to verify the theoretical results of this chapter, including a comparison between the DOMP detector and the proposed LMPT detector.

Based on the aforementioned works, we have the following conclusive comments. LMPT is asymptotically optimal for the problem of detection of jointly sparse signals. The LMPT detector operates without the requirement of signal reconstruction; for a similar detection performance, it has a much lower computational burden than the existing detectors based on signal reconstruction. The LMPT detector based on quantized data alleviates the communication burden of sensor networks. The proposed 1-bit LMPT detector with 3.3L sensors achieves approximately the same performance as the clairvoyant LMPT detector with L sensors collecting analog measurements, and the proposed multilevel LMPT detector based on 3-bit quantized measurements provides detection performance very close to that of the clairvoyant LMPT detector.

In this chapter, the data transmission in distributed sensor networks does not consider channel interference or data dependence. The LMPT detector with noisy channels and dependent data will be studied in our future work on the problem of detection of jointly sparse signals.

References

1. Baraniuk RG (2007) Compressive sensing. IEEE Signal Process Mag 24(4):118–121
2. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
3. Eldar YC, Kutyniok G (2012) Compressed sensing: theory and applications. Cambridge University Press
4. Duarte MF, Davenport MA, Wakin MB et al (2006) Sparse signal detection from incoherent projections. In: Proceedings of international conference on acoustics, speech and signal processing. IEEE, pp 305–308


5. Davenport MA, Boufounos PT, Wakin MB et al (2010) Signal processing with compressive measurements. IEEE J Sel Top Signal Process 4(2):445–460
6. Wimalajeewa T, Chen H, Varshney PK (2010) Performance analysis of stochastic signal detection with compressive measurements. In: Proceedings of forty-fourth Asilomar conference on signals, systems and computers. IEEE, pp 813–817
7. Sun H, Nallanathan A, Wang CX et al (2013) Wideband spectrum sensing for cognitive radio networks: a survey. IEEE Wirel Commun 20(2):74–81
8. Zeng F, Li C, Tian Z (2010) Distributed compressive spectrum sensing in cooperative multihop cognitive networks. IEEE J Sel Top Signal Process 5(1):37–48
9. Wang X, Li G, Varshney PK (2018) Detection of sparse signals in sensor networks via locally most powerful tests. IEEE Signal Process Lett 25(9):1418–1422
10. Wang X, Li G, Varshney PK (2019) Detection of sparse stochastic signals with quantized measurements in sensor networks. IEEE Trans Signal Process 67(8):2210–2220
11. Wang X, Li G, Quan C et al (2019) Distributed detection of sparse stochastic signals with quantized measurements: the generalized Gaussian case. IEEE Trans Signal Process 67(18):4886–4898
12. Li G, Zhang H, Wimalajeewa T et al (2014) On the detection of sparse signals with sensor networks based on subspace pursuit. In: Proceedings of IEEE global conference on signal and information processing, pp 438–442
13. Zhao W, Li G (2015) On the detection probability of sparse signals with sensor networks based on distributed subspace pursuit. In: Proceedings of IEEE China summit and international conference on signal and information processing, pp 324–328
14. Wimalajeewa T, Varshney PK (2014) OMP based joint sparsity pattern recovery under communication constraints. IEEE Trans Signal Process 62(19):5059–5072
15. Wimalajeewa T, Varshney PK (2016) Sparse signal detection with compressive measurements via partial support set estimation. IEEE Trans Signal Inf Process Netw 3(1):46–60
16. Haupt J, Bajwa WU, Rabbat M et al (2008) Compressed sensing for networked data. IEEE Signal Process Mag 25(2):92–101
17. Kailkhura B, Wimalajeewa T, Varshney PK (2016) Collaborative compressive detection with physical layer secrecy constraints. IEEE Trans Signal Process 65(4):1013–1025
18. Hariri A, Babaie-Zadeh M (2017) Compressive detection of sparse signals in additive white Gaussian noise without signal reconstruction. Signal Process 131:376–385
19. Kay SM (1998) Fundamentals of statistical signal processing: detection theory. Prentice-Hall, Upper Saddle River, NJ
20. Kay SM, Gabriel JR (2002) Optimal invariant detection of a sinusoid with unknown parameters. IEEE Trans Signal Process 50(1):27–40
21. Wang X, Li G, Varshney PK (2019) Distributed detection of weak signals from one-bit measurements under observation model uncertainties. IEEE Signal Process Lett 26(3):415–419
22. Kassam SA (2012) Signal detection in non-Gaussian noise. Springer Science & Business Media
23. Papoulis A, Pillai SU (2002) Probability, random variables, and stochastic processes. Tata McGraw-Hill Education
24. Kay SM (1998) Fundamentals of statistical signal processing: estimation theory. Prentice-Hall, Upper Saddle River, NJ
25. Stein MS, Bar S, Nossek JA et al (2018) Performance analysis for channel estimation with 1-bit ADC and unknown quantization threshold. IEEE Trans Signal Process 66(10):2557–2571
26. Duarte C, Barner KE, Goossen K (2016) Design of IIR multi-notch filters based on polynomially-represented squared frequency response. IEEE Trans Signal Process 64(10):2613–2623
27. Gao F, Guo L, Li H et al (2014) Quantizer design for distributed GLRT detection of weak signal in wireless sensor networks. IEEE Trans Wireless Commun 14(4):2032–2042
28. Hui Q, Zhang H (2016) Global convergence analysis of swarm optimization using paracontraction and semistability theory. In: Proceedings of IEEE American control conference, pp 2900–2905


29. Zayyani H, Haddadi F, Korki M (2016) Double detector for sparse signal detection from one-bit compressed sensing measurements. IEEE Signal Process Lett 23(11):1637–1641
30. Cheung RCY, Aue A, Lee TCM (2017) Consistent estimation for partition-wise regression and classification models. IEEE Trans Signal Process 65(14):3662–3674

Chapter 3

Detection of Jointly Sparse Signals via Locally Most Powerful Tests with Generalized Gaussian Model

3.1 Introduction

In Chap. 2, we considered the problem of detection of jointly sparse signals with Gaussian signals and noise. It is well known that the Gaussian assumption does not capture all the scenarios arising in practical applications [1–9]. As shown in Fig. 3.1, the generalized Gaussian distribution adds a shape parameter to the Gaussian distribution to obtain a large family of non-Gaussian distributions for heavy/light-tailed scenarios. Thanks to its flexible parametric form, the generalized Gaussian distribution has received considerable attention for modeling the signal and noise variables in practical applications, e.g., communication systems [2], speech/image processing [3, 4], and sea clutter suppression [5, 10]. It is worth mentioning that the generalized Gaussian distribution has also been studied for the modeling of sparse (or compressible) signals [6], signal denoising [1], and signal sampling [7] in the context of CS. However, to the best of our knowledge, studies on the detection of jointly sparse signals with generalized Gaussian distributions have not been reported in the existing literature.

In this chapter, we consider the detection of jointly sparse signals based on a generalized Gaussian model. Assuming that both the noise and the dominant elements in sparse signals follow the generalized Gaussian distribution, the corresponding LMPT detector is proposed and asymptotically optimal 1-bit quantizers are designed. The asymptotic relative efficiency (ARE) is derived analytically to measure and compensate for the performance loss of the proposed LMPT detector caused by the quantization of local measurements. Simulation results corroborate our theoretical analysis of the proposed LMPT detector with a generalized Gaussian model.


Fig. 3.1 The PDFs of the generalized Gaussian distribution with different shape parameters, where β denotes the shape parameter and α is the scale parameter

3.2 The LMPT Detector Based on Generalized Gaussian Model and Its Detection Performance

3.2.1 Generalized Gaussian Model

Here, we still consider the hypotheses for the detection of jointly sparse signals in (2.1). All the dominant entries in the sparse signals {x_l, l = 1, 2, ..., L} are assumed to follow the i.i.d. generalized Gaussian distribution with zero mean and variance σ_x², i.e., if x_{l,n} ≠ 0,

$$x_{l,n} \sim f_{\beta_x}(x_{l,n}) = \frac{\alpha_x \beta_x}{2\Gamma(1/\beta_x)} \exp\left[-\left(\alpha_x |x_{l,n}|\right)^{\beta_x}\right], \quad \forall l, n, \qquad (3.1)$$

where 0 < β_x < +∞ is the shape parameter, α_x = σ_x^{-1}√(Γ(3/β_x)/Γ(1/β_x)) is the scale parameter of the generalized Gaussian distribution, and

$$\Gamma(a) = \int_0^{+\infty} t^{a-1} \exp(-t)\,dt, \quad a > 0, \qquad (3.2)$$

denotes the Gamma function. In (3.1), the GG distribution is imposed on the non-zero entries of the signals, since this distribution has a flexible parametric form and has been successfully used to model the fluctuations of signal sources [11]. Based on (3.1), the Bernoulli-generalized Gaussian distribution imposed on x_{l,n} can be given by

$$x_{l,n} \sim p\, f_{\beta_x}(x_{l,n}) + (1-p)\,\delta(x_{l,n}). \qquad (3.3)$$

Similarly, the noise variable w_l with zero mean and variance σ_w² follows

$$w_l \sim f_{\beta_w}(w_l) = \frac{\alpha_w \beta_w}{2\Gamma(1/\beta_w)} \exp\left[-(\alpha_w |w_l|)^{\beta_w}\right], \quad \forall l, \qquad (3.4)$$

where α_w = σ_w^{-1}√(Γ(3/β_w)/Γ(1/β_w)). The generalized Gaussian distribution imposed on the noise variables {w_l, l = 1, 2, ..., L} is motivated by non-Gaussian noise environments, e.g., ship transit noise (β_w = 2.8) or noise from sea surface agitation (β_w = 1.6) in sea environments [12]. It is worth emphasizing that the values of β_w in (3.4) and β_x in (3.1) may be different.

Proposition 1 Let s_l denote the compressed measurement of the sparse signal x_l at the l-th sensor, i.e., s_l = h_l^T x_l, for l = 1, 2, ..., L. Assume that the sparse signal x_l is high-dimensional (N → +∞) and h_{l,n} ≠ 0, ||h_l||_2 < +∞, ∀l, n. With a fixed sparsity degree p, the compressed measurement s_l asymptotically follows

$$s_l \overset{a}{\sim} \mathcal{N}\left(0,\; p\,\sigma_x^2 \|h_l\|_2^2\right). \qquad (3.5)$$

Proof The compressed measurement s_l = h_l^T x_l is the sum of independent and non-identically distributed random variables. According to the Lyapunov CLT [13], a sufficient condition for Proposition 1 is the Lyapunov condition: for some ξ > 0,

$$\lim_{N \to +\infty} \frac{1}{\omega^{2+\xi}} \sum_{n=1}^{N} E\left[\left|h_{l,n}\, x_{l,n}\right|^{2+\xi}\right] = 0, \qquad (3.6)$$

where ω² = σ_x² Σ_{n=1}^{N} h_{l,n}². In Appendix C, we demonstrate that the above Lyapunov condition holds based on (3.3). Therefore, we can directly obtain Proposition 1 from the Lyapunov CLT. □

From Proposition 1, it follows that when the dimension of the sparse signals is large, the PDF of the compressed measurement of the sparse signal is asymptotically Gaussian regardless of the kind of generalized Gaussian distribution that the dominant elements of the sparse signals follow. In Fig. 3.2, the simulated PDFs of the variable s_l with different shape parameters β_x are plotted. It is observed that all the simulated PDFs are consistent with the theoretical PDF in (3.5). It is worth emphasizing that although the compressed measurements of the signal are asymptotically Gaussian, we will show that the generalized Gaussian noise makes the detector proposed in this chapter significantly different from that in Chap. 2, which considered the Gaussian signal and noise case.
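To make the model concrete, the following minimal numpy/scipy sketch (with illustrative parameter values; `sample_s` is a hypothetical helper name, not thesis code) draws signals from the Bernoulli-generalized Gaussian model (3.3) — scipy's `gennorm` matches (3.1) when its scale is set to 1/α_x — and applies a Kolmogorov–Smirnov test to the compressed measurements s_l = h_l^T x_l against the asymptotic Gaussian law (3.5), mirroring the check reported in Fig. 3.2.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import gennorm, kstest

rng = np.random.default_rng(0)
N, p, beta_x, var_x = 3000, 0.015, 1.2, 1.0

# Scale parameter alpha_x from (3.1); gennorm uses scale = 1/alpha_x.
alpha_x = np.sqrt(gamma(3.0 / beta_x) / gamma(1.0 / beta_x)) / np.sqrt(var_x)

def sample_s(num_trials: int) -> np.ndarray:
    """Draw compressed measurements s = h^T x under the joint sparsity model."""
    h = rng.standard_normal(N)
    h /= np.linalg.norm(h)                      # ||h||_2 = 1, as in the simulations
    s = np.empty(num_trials)
    for t in range(num_trials):
        support = rng.random(N) < p             # Bernoulli sparsity pattern, (3.3)
        x = np.zeros(N)
        x[support] = gennorm.rvs(beta_x, scale=1.0 / alpha_x,
                                 size=int(support.sum()), random_state=rng)
        s[t] = h @ x
    return s

s = sample_s(5000)
# Proposition 1: s ~ N(0, p * var_x * ||h||^2) asymptotically in N.
sigma = np.sqrt(p * var_x)
print(kstest(s, "norm", args=(0.0, sigma)))     # a large p-value supports (3.5)
```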


Fig. 3.2 Theoretical and simulated PDFs of variable sl , where N = 3000, η = 0.015, σx2 = 1, ||hl ||22 = 1. The simulated PDFs (denoted by ‘Simu.’) are obtained using the Matlab function ksdensity with 5000 random samples of sl

Based on (3.5), since the shape parameter β_x in the PDF of the signal variables is unrelated to the distribution of y_l, it will not appear in the rest of this chapter. In addition, we drop the subscript 'w' in {α_w, β_w} and use {α, β} to denote the parameters in the PDFs of the noise variables for ease of notation. Based on Proposition 1, the detection problem in (2.1) can be formulated as

$$\begin{cases} H_0: \; y_l = w_l, & l = 1, 2, \cdots, L, \\ H_1: \; y_l = \sqrt{p}\,\breve{s}_l + w_l, & l = 1, 2, \cdots, L, \end{cases} \qquad (3.7)$$

where $\breve{s}_l \overset{a}{\sim} \mathcal{N}(0, \sigma_x^2 \|h_l\|_2^2)$. Since the sparsity degree p is close to zero, the hypothesis testing problem in (3.7) is a close hypothesis testing problem.

3.2.2 Signal Detection Method

Based on the mathematical deduction in Chap. 2, the proposed LMPT detector based on analog data under the generalized Gaussian model is formulated as

$$T^{\mathrm{ana}}(\mathbf{y}) = \left.\frac{\partial \ln P(\mathbf{y}|H_1; p)}{\partial p}\right|_{p=0} = \sum_{l=1}^{L} \frac{\|h_l\|_2^2}{2}\left(\alpha^{2\beta}\beta^2 |y_l|^{2\beta-2} - \alpha^{\beta}\beta(\beta-1)|y_l|^{\beta-2}\right). \qquad (3.8)$$

Note that a detector similar to that in (3.8) has been provided in [9], which does not involve the context of sparse signals and quantized data. Following the quantization rule in (2.31), all the quantized measurements received at the fusion center are still denoted by U = {u_1, u_2, ..., u_L}. Based on (2.37), the proposed LMPT detector based on quantized data can be formulated as


$$T^{Q}(\mathbf{U}) = \left.\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p}\right|_{p=0} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta, \qquad (3.9)$$

where η is the decision threshold. Next, we calculate the expression of the test statistic in (3.9). Based on the statistical independence of {u_1, u_2, ..., u_L}, we have

$$\left.\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p}\right|_{p=0} = \sum_{l=1}^{L} \left.\frac{\partial \ln P(u_l|H_1; p)}{\partial p}\right|_{p=0}. \qquad (3.10)$$

In (3.10), the PMF P(u_l|H_1; p) is

$$P(u_l|H_1; p) = \prod_{i=1}^{2^q} \left[P(u_l = v_i|H_1; p)\right]^{I(u_l, v_i)}, \quad \forall l, \qquad (3.11)$$

where q is the bit depth of quantization. Therefore, (3.10) can be reformulated as

$$\left.\frac{\partial \ln P(\mathbf{U}|H_1; p)}{\partial p}\right|_{p=0} = \sum_{l=1}^{L} \sum_{i=1}^{2^q} \frac{I(u_l, v_i)}{P(u_l = v_i|H_1; p)}\, \left.\frac{\partial P(u_l = v_i|H_1; p)}{\partial p}\right|_{p=0}. \qquad (3.12)$$

From the quantization rule in (2.31), we have

$$P(u_l = v_i|H_1; p) = P\left(\tau_{l,i-1} \le y_l \le \tau_{l,i}\,|\,H_1; p\right) = \int_{\tau_{l,i-1}}^{\tau_{l,i}} \int_{-\infty}^{+\infty} f_{\beta}\left(y_l - \sqrt{p}\,\breve{s}_l\right) g(\breve{s}_l)\, d\breve{s}_l\, dy_l, \quad \forall l, \forall i, \qquad (3.13)$$

where g(\breve{s}_l) is the PDF of the variable \breve{s}_l in (3.7). Then, P(u_l = v_i|H_1; p)|_{p=0} in (3.12) is calculated as

$$P(u_l = v_i|H_1; p)\big|_{p=0} = \int_{\tau_{l,i-1}}^{\tau_{l,i}} f_{\beta}(y_l)\, dy_l = \Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1}), \qquad (3.14)$$

where Υ_β(·) is the cumulative distribution function (CDF) of generalized Gaussian variables:

$$\Upsilon_{\beta}(\tau_{l,i}) = \frac{1}{2} + \frac{\mathrm{sign}(\tau_{l,i})\,\gamma\left(1/\beta,\, (\alpha|\tau_{l,i}|)^{\beta}\right)}{2\Gamma(1/\beta)}, \qquad (3.15)$$


$$\gamma(a, b) = \int_0^{b} t^{a-1} \exp(-t)\, dt, \quad a > 0, \; b \ge 0, \qquad (3.16)$$

γ(·, ·) denotes the incomplete gamma function, and sign(x) is 1 if x > 0 and −1 otherwise. The first-order derivative of P(u_l = v_i|H_1; p) at p = 0 presented in (3.12) is calculated as

$$\begin{aligned}
\left.\frac{\partial P(u_l = v_i|H_1; p)}{\partial p}\right|_{p=0}
&\overset{\langle 1 \rangle}{=} \left.\int_{\tau_{l,i-1}}^{\tau_{l,i}} \int_{-\infty}^{+\infty} \frac{\partial f_{\beta}(y_l - \sqrt{p}\,\breve{s}_l)}{\partial p}\, g(\breve{s}_l)\, d\breve{s}_l\, dy_l \right|_{p=0} \\
&= \left.\int_{\tau_{l,i-1}}^{\tau_{l,i}} \int_{-\infty}^{+\infty} \frac{-\breve{s}_l}{2\sqrt{p}}\, \frac{\partial f_{\beta}(y_l - \sqrt{p}\,\breve{s}_l)}{\partial (y_l - \sqrt{p}\,\breve{s}_l)}\, g(\breve{s}_l)\, d\breve{s}_l\, dy_l \right|_{p=0} \\
&\overset{\langle 2 \rangle}{=} \int_{\tau_{l,i-1}}^{\tau_{l,i}} \int_{-\infty}^{+\infty} \frac{\breve{s}_l^2}{2}\, \frac{\partial^2 f_{\beta}(y_l)}{\partial y_l^2}\, g(\breve{s}_l)\, d\breve{s}_l\, dy_l
= \frac{\sigma_x^2 \|h_l\|_2^2}{2} \int_{\tau_{l,i-1}}^{\tau_{l,i}} \frac{\partial^2 f_{\beta}(y_l)}{\partial y_l^2}\, dy_l \\
&\overset{\langle 3 \rangle}{=} \frac{\alpha\beta\sigma_x^2 \|h_l\|_2^2}{4\Gamma(1/\beta)} \Big[ \mathrm{sign}(\tau_{l,i})\,\beta\,\gamma\big(2 - 1/\beta,\, (\alpha|\tau_{l,i}|)^{\beta}\big) - \mathrm{sign}(\tau_{l,i})(\beta-1)\,\gamma\big(1 - 1/\beta,\, (\alpha|\tau_{l,i}|)^{\beta}\big) \\
&\qquad\quad - \mathrm{sign}(\tau_{l,i-1})\,\beta\,\gamma\big(2 - 1/\beta,\, (\alpha|\tau_{l,i-1}|)^{\beta}\big) + \mathrm{sign}(\tau_{l,i-1})(\beta-1)\,\gamma\big(1 - 1/\beta,\, (\alpha|\tau_{l,i-1}|)^{\beta}\big) \Big] \\
&\overset{\langle 4 \rangle}{\propto} \Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1}), \quad \beta > 1, \qquad (3.17)
\end{aligned}$$

where Υ'_β(·) is defined as

$$\Upsilon'_{\beta}(\tau_{l,i}) = \frac{\alpha^2 \beta^2 \|h_l\|_2^2\, \mathrm{sign}(\tau_{l,i})}{4\Gamma(1/\beta)}\, \left(\alpha|\tau_{l,i}|\right)^{\beta-1} \exp\left[-\left(\alpha|\tau_{l,i}|\right)^{\beta}\right], \quad \forall l, \forall i, \; \beta > 1. \qquad (3.18)$$

In (3.17), ⟨1⟩ is derived based on Appendix 9 of [14], ⟨2⟩ is obtained based on L'Hôpital's rule due to ∫ \breve{s}_l g(\breve{s}_l) d\breve{s}_l = 0, and ⟨4⟩ stems from [15],

$$\gamma(a+1, b) = a\,\gamma(a, b) - b^{a}\exp(-b), \quad \text{for } a > 0, \; b \ge 0. \qquad (3.19)$$


Note that (3.17) is established under the constraint β > 1 to satisfy the definition of the incomplete gamma function γ(·, ·) (see ⟨3⟩ of (3.17)). When 0 < β ≤ 1, the incomplete gamma function in (3.17) may be divergent [16], and it is difficult to calculate the analytical expression of (3.17). Substituting (3.14) and (3.17) into (3.12), the test statistic of the proposed quantized LMPT detector in (3.9) can be written as

$$T^{Q}(\mathbf{U}) = \sum_{l=1}^{L} \sum_{i=1}^{2^q} I(u_l, v_i)\, \frac{\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})}{\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})}, \qquad (3.20)$$

for β > 1. It is worth mentioning that when β = 2, the proposed detector in (3.20) degenerates to that in Chap. 2 (see (2.37)), i.e., the detector in (2.37) is a special case of (3.20).
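The statistic in (3.20) only requires the GG CDF Υ_β and the closed-form term Υ'_β in (3.18). The following self-contained Python sketch (illustrative values; identical thresholds at every sensor and ||h_l||_2 = 1 are simplifying assumptions, not the general setting) evaluates T^Q(U) from the quantization-bin indices reported by the sensors.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import gennorm

def upsilon(tau, alpha, beta):
    """GG CDF in (3.15); scipy's gennorm uses scale = 1/alpha."""
    return gennorm.cdf(tau, beta, scale=1.0 / alpha)

def upsilon_prime(tau, alpha, beta, h_norm2=1.0):
    """The term in (3.18), valid for beta > 1; it vanishes at tau = +/-inf."""
    tau = np.asarray(tau, dtype=float)
    out = np.zeros_like(tau)
    m = np.isfinite(tau)
    a = alpha * np.abs(tau[m])
    out[m] = (alpha**2 * beta**2 * h_norm2 / (4.0 * gamma(1.0 / beta))
              * np.sign(tau[m]) * a**(beta - 1.0) * np.exp(-a**beta))
    return out

def lmpt_statistic(u, thresholds, alpha, beta):
    """T^Q(U) in (3.20); u[l] is the quantization-bin index at sensor l."""
    tau = np.concatenate(([-np.inf], thresholds, [np.inf]))
    num = upsilon_prime(tau[1:], alpha, beta) - upsilon_prime(tau[:-1], alpha, beta)
    den = upsilon(tau[1:], alpha, beta) - upsilon(tau[:-1], alpha, beta)
    return float(np.sum((num / den)[u]))

# 1-bit example: five sensors report bins u_l in {0, 1} for a common threshold 0.9
print(lmpt_statistic(np.array([0, 1, 1, 0, 1]), np.array([0.9]), alpha=1.0, beta=2.4))
```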

3.2.3 Theoretical Analysis of Detection Performance

Similar to the deduction in Chap. 2, we have

$$T^{Q}(\mathbf{U})\,|\,H_0 \overset{a}{\sim} \mathcal{N}\left(\mu_{q\text{-bit},0},\, \sigma^2_{q\text{-bit},0}\right), \qquad (3.21)$$

$$T^{Q}(\mathbf{U})\,|\,H_1 \overset{a}{\sim} \mathcal{N}\left(\mu_{q\text{-bit},1},\, \sigma^2_{q\text{-bit},1}\right), \qquad (3.22)$$

where

$$\mu_{q\text{-bit},0} = 0, \qquad (3.23)$$

$$\sigma^2_{q\text{-bit},0} = \sum_{l=1}^{L} \sum_{i=1}^{2^q} \frac{\left[\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})\right]^2}{\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})}, \qquad (3.24)$$

$$\mu_{q\text{-bit},1} \approx p\,\sigma_x^2\, \sigma^2_{q\text{-bit},0}, \qquad (3.25)$$

$$\sigma^2_{q\text{-bit},1} \approx \sigma^2_{q\text{-bit},0} + p\,\sigma_x^2 \sum_{l=1}^{L} \sum_{i=1}^{2^q} \frac{\left[\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})\right]^3}{\left[\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})\right]^2} - p^2\sigma_x^4 \sum_{l=1}^{L} \left\{\sum_{i=1}^{2^q} \frac{\left[\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})\right]^2}{\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})}\right\}^2, \qquad (3.26)$$


for β > 1. The results in (3.23)–(3.26) can be easily derived from Appendix 6D of [17]. According to (3.21)–(3.26), the probability of false alarm P_FA and the decision threshold η of the quantized LMPT detector satisfy the following relationship,

$$P_{FA} = P\left(T^{Q}(\mathbf{U}) > \eta\,|\,H_0\right) = 1 - F_{\mu_{q\text{-bit},0},\,\sigma^2_{q\text{-bit},0}}(\eta), \qquad (3.27)$$

$$\eta = F^{-1}_{\mu_{q\text{-bit},0},\,\sigma^2_{q\text{-bit},0}}(1 - P_{FA}), \qquad (3.28)$$

where

$$F_{\mu,\sigma^2}(b) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{b} \exp\left[-\frac{(a-\mu)^2}{2\sigma^2}\right] da, \qquad (3.29)$$

and F^{-1}_{μ,σ²}(·) denotes the inverse function of F_{μ,σ²}(·). Accordingly, the probability of detection P_D of the proposed quantized LMPT detector is

$$P_D = P\left(T^{Q}(\mathbf{U}) > \eta\,|\,H_1\right) = 1 - F_{\mu_{q\text{-bit},1},\,\sigma^2_{q\text{-bit},1}}(\eta). \qquad (3.30)$$
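In code, (3.27)–(3.30) reduce to two calls to the Gaussian CDF/quantile. The short sketch below uses placeholder moment values standing in for (3.24)–(3.26); the numbers are illustrative assumptions, not thesis results.

```python
from scipy.stats import norm

p, var_x = 0.015, 4.0
sigma2_0 = 120.0                   # placeholder for sigma^2_{q-bit,0} from (3.24)
mu1 = p * var_x * sigma2_0         # mu_{q-bit,1} ~ p * sigma_x^2 * sigma^2_{q-bit,0}, (3.25)
sigma2_1 = sigma2_0                # higher-order p terms neglected for p -> 0+

p_fa = 0.05
eta = norm.ppf(1.0 - p_fa, loc=0.0, scale=sigma2_0 ** 0.5)     # threshold from (3.28)
p_d = 1.0 - norm.cdf(eta, loc=mu1, scale=sigma2_1 ** 0.5)      # detection prob., (3.30)
print(f"eta = {eta:.2f}, P_D = {p_d:.3f}")
```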

3.3 Quantizer Design and Analysis of Asymptotic Relative Efficiency

3.3.1 Quantizer Design

In Sect. 3.2, the LMPT detector for jointly sparse signals was proposed in the generalized Gaussian case and its theoretical detection performance was provided. As shown in (3.22)–(3.26), the detection performance of the proposed quantized LMPT detector depends on the selection of the local quantizers {τ_{l,i}, ∀l, i}. Similar to the research in Sect. 2.3.3, a natural question is: how should the local quantizers be designed to achieve the optimal detection performance of the proposed quantized LMPT detector? In this subchapter, we first introduce the efficacy [18] of the proposed detector to measure the discrimination ability of the test statistic T^Q(U) between H_0 and H_1, which is a function of the local quantization thresholds. Then, the local quantizers are designed to maximize the efficacy of the proposed detector. In the following, we also discuss the relationship between the efficacy and the Fisher information of the LMPT detector.

First, we consider the case of 1-bit quantization. As an extreme case of quantization, 1-bit quantization is popular since it is helpful for bandwidth saving and


low-cost hardware implementation [19]. Next, the design of quantizers is presented for the proposed LMPT detector in (3.20) with q = 1. When q = 1, the quantizer in (2.31) degenerates to the 1-bit quantization case:

$$u_l = Q_l^1(y_l) = \begin{cases} v_1 & \text{if } \tau_{l,0} \le y_l < \tau_{l,1}, \\ v_2 & \text{if } \tau_{l,1} \le y_l \le \tau_{l,2}, \end{cases} \qquad (3.31)$$

where v_1 = '0', v_2 = '1'. In what follows, we drop the subscript '1' of τ_{l,1} and use τ_l to denote the threshold of the 1-bit quantizer to be optimized. The efficacy of the proposed 1-bit LMPT detector T^{1-bit}(U) is [18]

$$\zeta\left[T^{1\text{-bit}}(\mathbf{U})\right] = \frac{1}{L\,\sigma^2_{1\text{-bit},1}\big|_{p=0}} \left(\left.\frac{\partial \mu_{1\text{-bit},1}}{\partial p}\right|_{p=0}\right)^2 = \frac{\alpha^4\beta^4\sigma_x^4}{4L} \sum_{l=1}^{L} \|h_l\|_2^4\, \frac{(\alpha|\tau_l|)^{2\beta-2} \exp\left[-2(\alpha|\tau_l|)^{\beta}\right]}{\Gamma^2(1/\beta) - \gamma^2\left(1/\beta,\, (\alpha|\tau_l|)^{\beta}\right)}, \qquad (3.32)$$

for β > 1. Efficacy measures the discrimination ability of a test statistic between two close hypotheses for a large number of sensors [18]. Assume that the first-order and second-order terms of the sparsity degree p in σ²_{1-bit,1} (see (3.26)) are negligible when p → 0⁺; then we have σ²_{1-bit,1} ≈ σ²_{1-bit,0}. Multiplying both sides of (3.21) and (3.22) by a scale factor 1/σ_{1-bit,0}, they can be rewritten as

$$\frac{T^{1\text{-bit}}(\mathbf{U})}{\sigma_{1\text{-bit},0}}\,\Big|\,H_0 \overset{a}{\sim} \mathcal{N}\left(\mu_{1\text{-bit},0},\, 1\right), \qquad (3.33)$$

$$\frac{T^{1\text{-bit}}(\mathbf{U})}{\sigma_{1\text{-bit},0}}\,\Big|\,H_1 \overset{a}{\sim} \mathcal{N}\left(\tilde{\mu}_{1\text{-bit},1},\, 1\right), \qquad (3.34)$$

where

$$\tilde{\mu}_{1\text{-bit},1} = \frac{pL}{\sigma_x^2}\, \zeta\left[T^{1\text{-bit}}(\mathbf{U})\right]. \qquad (3.35)$$

For a fixed probability of false alarm (i.e., a fixed decision threshold), a larger value of the efficacy ζ[T^{1-bit}(U)] means an improved probability of detection. Based on the above observations, we design the 1-bit quantizers so as to maximize the efficacy of the proposed 1-bit LMPT detector as follows,

$$\left\{\hat{\tau}_l, \forall l\right\} = \arg\max_{\tau_l, \forall l}\; \zeta\left[T^{1\text{-bit}}(\mathbf{U})\right]. \qquad (3.36)$$

It is worth mentioning that the efficacy in (3.36) is the value of the Fisher information in (2.39) after the square root operation and a linear transformation, and does


not influence the trend of the Fisher information with respect to the quantization thresholds. In other words, maximizing the efficacy and maximizing the Fisher information are equivalent with respect to the quantization thresholds. In (3.36), the Fisher information is not adopted in the objective function for convenience in deducing the ARE in the following. Based on (3.32), the optimization problem in (3.36) can be simplified as

$$\hat{\tau}_l = \arg\max_{\tau_l}\; \kappa_l, \quad \text{s.t.} \; \tau_l \ge 0, \qquad (3.37)$$

where

$$\kappa_l = \frac{(\alpha|\tau_l|)^{2\beta-2} \exp\left[-2(\alpha|\tau_l|)^{\beta}\right]}{\Gamma^2(1/\beta) - \gamma^2\left(1/\beta,\, (\alpha|\tau_l|)^{\beta}\right)}. \qquad (3.38)$$

The constraint τ_l ≥ 0 in (3.37) is set because κ_l is an even function with respect to τ_l. It is difficult to directly solve (3.37) due to the integral form of the incomplete gamma function in κ_l (3.38). A common strategy for complicated optimization problems is to solve their (log-)concave approximation problems. Fortunately, we can find a log-concave lower bound of the cost function κ_l in (3.37), as described in Theorem 1.

Theorem 1 Assume β > 1, τ_l ≥ 0. Then, we have

$$\kappa_l \ge \tilde{\kappa}_l = \frac{(\alpha\tau_l)^{2\beta-2} \exp\left[-\left(2 - \frac{1}{\beta}\right)(\alpha\tau_l)^{\beta}\right]}{2\Gamma^2(1/\beta)}. \qquad (3.39)$$

The lower bound κ̃_l in (3.39) is a log-concave function with respect to τ_l. The maximum value of κ̃_l is achieved at

$$\hat{\tau}_l = \hat{\tau} = \frac{1}{\alpha}\left(\frac{2\beta-2}{2\beta-1}\right)^{1/\beta}, \quad \forall l. \qquad (3.40)$$

Proof See Appendix D. □
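The closed form (3.40) is easy to check numerically. The sketch below (parameter values are illustrative) compares it with a direct maximization of the lower bound κ̃_l in (3.39); the two maximizers should coincide.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

def tau_hat(alpha, beta):
    """Closed-form near-optimal 1-bit threshold from (3.40), beta > 1."""
    return ((2.0 * beta - 2.0) / (2.0 * beta - 1.0)) ** (1.0 / beta) / alpha

def kappa_tilde(tau, alpha, beta):
    """Log-concave lower bound (3.39) on the efficacy-related cost kappa_l."""
    a = alpha * tau
    return (a ** (2.0 * beta - 2.0) * np.exp(-(2.0 - 1.0 / beta) * a ** beta)
            / (2.0 * gamma(1.0 / beta) ** 2))

alpha, beta = 1.0, 2.4
res = minimize_scalar(lambda t: -kappa_tilde(t, alpha, beta),
                      bounds=(0.0, 5.0), method="bounded")
print(tau_hat(alpha, beta), res.x)   # both values are the maximizer of (3.39)
```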

Based on Theorem 1, we can obtain the asymptotically optimal, closed-form expression of the 1-bit quantizers without complex computation. In the simulations of Sect. 3.4, we will show the effectiveness of the 1-bit quantizers in (3.40). Similar to (3.36), the multilevel quantizers for the proposed quantized LMPT detector can be formulated as

$$\left\{\hat{\tau}_{l,i}, \forall l, i\right\} = \arg\max_{\tau_{l,i}, \forall l, i}\; \zeta\left[T^{q\text{-bit}}(\mathbf{U})\right], \quad \text{s.t.} \; \tau_{l,1} < \tau_{l,2} < \cdots < \tau_{l,2^q-1}, \; \forall l, \qquad (3.41)$$

where

$$\zeta\left[T^{q\text{-bit}}(\mathbf{U})\right] = \frac{1}{L\,\sigma^2_{q\text{-bit},1}\big|_{p=0}} \left(\left.\frac{\partial \mu_{q\text{-bit},1}}{\partial p}\right|_{p=0}\right)^2 = \frac{\sigma_x^4}{L} \sum_{l=1}^{L} \sum_{i=1}^{2^q} \frac{\left[\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})\right]^2}{\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})}, \quad \beta > 1. \qquad (3.42)$$

The cost function in (3.41) is non-monotonic and non-concave, which makes (3.41) more difficult to solve than the 1-bit case. Thus, we still resort to the PSO algorithm [20, 21] to solve the optimization problem in (3.41).
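A minimal PSO sketch for (3.41) is given below, under the simplifying assumptions of identical thresholds across sensors and ||h_l||_2 = 1; the swarm size, inertia, and acceleration coefficients are illustrative choices, not the thesis settings. The per-sensor inner sum of (3.42) is the objective, and sorting each particle's coordinates enforces the ordering constraint.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import gennorm

alpha, beta, q = 1.0, 2.4, 2           # GG noise parameters, 2-bit quantization

def ups(t):                            # GG CDF (3.15)
    return gennorm.cdf(t, beta, scale=1.0 / alpha)

def ups_p(t):                          # (3.18) with ||h||_2 = 1; vanishes at +/-inf
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    m = np.isfinite(t)
    a = alpha * np.abs(t[m])
    out[m] = (alpha**2 * beta**2 / (4 * gamma(1 / beta)) * np.sign(t[m])
              * a**(beta - 1) * np.exp(-a**beta))
    return out

def efficacy(th):                      # inner sum of (3.42) for one sensor
    tau = np.concatenate(([-np.inf], np.sort(th), [np.inf]))
    num = ups_p(tau[1:]) - ups_p(tau[:-1])
    den = ups(tau[1:]) - ups(tau[:-1])
    return np.sum(num**2 / den)

rng = np.random.default_rng(1)
n_part, n_dim, iters = 40, 2**q - 1, 200
x = rng.uniform(-3, 3, (n_part, n_dim))
v = np.zeros_like(x)
pbest, pval = x.copy(), np.array([efficacy(pt) for pt in x])
g = pbest[pval.argmax()].copy()
for _ in range(iters):                 # standard global-best PSO update
    r1, r2 = rng.random((2, n_part, n_dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    val = np.array([efficacy(pt) for pt in x])
    better = val > pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[pval.argmax()].copy()
print("PSO thresholds:", np.sort(g))
```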

3.3.2 Asymptotic Relative Efficiency

Here, the ARE is used to theoretically evaluate the loss of detection performance and then provide guidelines to compensate for the performance loss. The ARE represents the ratio between the numbers of sensors required by the quantized detector and by the clairvoyant detector when they attain the same detection performance. The ARE can also be equivalently expressed as the ratio of efficacies between the clairvoyant detector and the quantized detector [18]. First, the efficacy of the clairvoyant LMPT detector in the generalized Gaussian case is

$$\zeta\left[T^{\mathrm{ana}}(\mathbf{y})\right] = \frac{\rho\,\sigma_x^4}{4L} \sum_{l=1}^{L} \|h_l\|_2^4, \qquad (3.43)$$

where ρ = α⁴β²(3β − 4)(β − 1)Γ(2 − 3/β)/Γ(1/β), for β > 3/2. Similar to (3.17), the constraint β > 3/2 is set to satisfy the definition of the gamma function Γ(·) in ρ [22]. Based on (3.42) and (3.43), the ARE can be calculated as

$$\mathrm{ARE}_{\mathrm{ana}/q\text{-bit}} = \frac{L_{q\text{-bit}}}{L_{\mathrm{ana}}} = \frac{\zeta\left[T^{\mathrm{ana}}(\mathbf{y})\right]}{\zeta\left[T^{q\text{-bit}}(\mathbf{U})\right]} = \frac{\rho}{4}\; \frac{\displaystyle\sum_{l=1}^{L_{\mathrm{ana}}} \|h_l\|_2^4}{\displaystyle\sum_{l=1}^{L_{q\text{-bit}}} \sum_{i=1}^{2^q} \frac{\left[\Upsilon'_{\beta}(\tau_{l,i}) - \Upsilon'_{\beta}(\tau_{l,i-1})\right]^2}{\Upsilon_{\beta}(\tau_{l,i}) - \Upsilon_{\beta}(\tau_{l,i-1})}}, \qquad (3.44)$$

for β > 3/2. When all the local sensors share identical quantization thresholds, ARE_{ana/q-bit} in (3.44) can be simplified as


$$\mathrm{ARE}_{\mathrm{ana}/q\text{-bit}} = \frac{2\rho\,\Gamma(1/\beta)}{\alpha^4\beta^4} \left\{ \sum_{i=1}^{2^q} \frac{\left[(\alpha|\tau_i|)^{\beta-1}\exp\left(-(\alpha|\tau_i|)^{\beta}\right) - (\alpha|\tau_{i-1}|)^{\beta-1}\exp\left(-(\alpha|\tau_{i-1}|)^{\beta}\right)\right]^2}{\mathrm{sign}(\tau_i)\,\gamma\left(1/\beta,\, (\alpha|\tau_i|)^{\beta}\right) - \mathrm{sign}(\tau_{i-1})\,\gamma\left(1/\beta,\, (\alpha|\tau_{i-1}|)^{\beta}\right)} \right\}^{-1}, \qquad (3.45)$$

where the subscript ‘l’ is dropped.
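The simplified ARE in (3.45) is straightforward to evaluate. The sketch below (illustrative values) computes ARE_{ana/1-bit} at the closed-form threshold (3.40), using scipy's regularized lower incomplete gamma function (`gammainc` multiplied by Γ(a) gives γ(a, ·)).

```python
import numpy as np
from scipy.special import gamma, gammainc

def are_1bit(alpha, beta, tau):
    """ARE_{ana/1-bit} from (3.45), valid for beta > 3/2."""
    g1b = gamma(1.0 / beta)
    rho = alpha**4 * beta**2 * (3*beta - 4) * (beta - 1) * gamma(2 - 3/beta) / g1b
    def num(t):            # (alpha|t|)^{beta-1} exp(-(alpha|t|)^beta); 0 at +/-inf
        if not np.isfinite(t):
            return 0.0
        a = alpha * abs(t)
        return a**(beta - 1) * np.exp(-a**beta)
    def den(t):            # sign(t) * gamma(1/beta, (alpha|t|)^beta)
        if not np.isfinite(t):
            return np.sign(t) * g1b
        return np.sign(t) * gammainc(1/beta, (alpha*abs(t))**beta) * g1b
    taus = [-np.inf, tau, np.inf]
    s = sum((num(taus[i]) - num(taus[i-1]))**2 / (den(taus[i]) - den(taus[i-1]))
            for i in (1, 2))
    return 2 * rho * g1b / (alpha**4 * beta**4) / s

alpha, beta = 1.0, 2.4
tau = ((2*beta - 2) / (2*beta - 1))**(1/beta) / alpha    # threshold from (3.40)
print(are_1bit(alpha, beta, tau))    # compare with the values reported in Table 3.1
```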

3.4 Simulation Results

In this subchapter, simulation results are provided to demonstrate the performance of the proposed quantized LMPT detector in the generalized Gaussian case. For simulation purposes, the linear operators {h_1, h_2, ..., h_L} are sampled from the i.i.d. standard normal distribution [23], and the l_2-norm of each h_l is then normalized to 1, ∀l. The size of the sparse signal vectors to be detected is N = 3000.

In Fig. 3.3, we compare the cost function κ_l in (3.38) and its log-concave approximation κ̃_l given in (3.39). The maximum values of κ_l and κ̃_l presented in Fig. 3.3 are obtained by PSO and (3.40), respectively. From Fig. 3.3, it is observed that the lower bound function κ̃_l reflects the trend of the true cost function κ_l well. Meanwhile, we can see that the 1-bit quantizers analytically designed in (3.40) are close to the numerically obtained optimal solution.

In Fig. 3.4, we further investigate the effect of the 1-bit quantizers analytically designed in (3.40) on the detection performance of the proposed 1-bit LMPT detector. The black dot-dash lines in Fig. 3.4 denote the ROC curves of the 1-bit LMPT detector with the near-optimal quantizers analytically given in (3.40), while the black solid lines represent the ones with numerically obtained optimal quantizers using PSO. From Fig. 3.4, we can see that the analytically derived near-optimal 1-bit quantizers provide detection performance comparable to the PSO-generated quantizers. Note that, in Fig. 3.4, the scenarios β = 1.8 and β = {2.4, 3.0} correspond to the heavy-tailed and light-tailed cases, respectively.

In Fig. 3.5, we show the ROC gap between 1-bit LMPT detectors with the quantizers derived in (3.40) and the PSO-generated quantizers versus the shape parameter β. Here the ROC gap is evaluated by the 'distance' between the areas under the curves (AUCs). As can be observed, the AUCs of the detectors with the quantizers in (3.40) and the PSO-generated quantizers are always comparable. The maximum gap between the two AUCs is only 0.04, which is attained at β = 3.5. Besides, the performance gap is negligible for 1 < β < 2 or β > 6. The above simulation results demonstrate the asymptotic optimality of our analytical 1-bit quantizers.

In Fig. 3.6, the ROC curves of the multilevel LMPT detector and the clairvoyant LMPT detector are plotted with different parameters. As shown in Fig. 3.6, the theoretical ROCs are consistent with the simulated ROCs. By comparing Figs. 3.4


Fig. 3.3 The cost function κl in (3.38) and its lower bound κ˜l in (3.39) versus quantization threshold τl , where σw2 = 1 and the maximum value of each curve is marked by ‘o’. a β = 1.2, b β = 1.4, c β = 2, d β = 3

and 3.6, it is observed that there is a fairly large performance gap between the 1-bit LMPT detector and the clairvoyant LMPT detector in the generalized Gaussian case. The performance loss caused by coarse quantization is negligible when the bit depth increases to q = 3. This is a more general conclusion compared with that in Sect. 2.6, where only the Gaussian case is considered.

Next, we present the ARE_{ana/q-bit} performance as predicted in (3.44). The values of ARE_{ana/q-bit} for β = 1.8, 2.4, 3.0 are presented in Table 3.1. From Table 3.1, it is observed that ARE_{ana/3-bit} ≈ 1, which is consistent with the fact that the ROC curves of the 3-bit LMPT detector and the clairvoyant LMPT detector are quite close in Fig. 3.6. Then, we select the 1-bit quantization case as an example. In Fig. 3.7, the ROC


Fig. 3.4 ROC curves of the 1-bit LMPT detectors, where 'Theo.' and 'Simu.' denote the theoretical and simulated ROC curves, respectively, and σ_x² = 4, σ_w² = 1, p = 0.015, L = 2000. a β = 1.8; b β = 2.4; c β = 3.0

Fig. 3.5 AUCs of 1-bit LMPT detectors with the quantizer in (3.40) and the PSO-generated quantizer versus the shape parameter β, where σ_x² = 4, σ_w² = 1, p = 0.015, L = 2000


Fig. 3.6 ROC curves of the clairvoyant LMPT detector and the multilevel LMPT detector, where σ_x² = 4, σ_w² = 1, p = 0.015, L = 2000. a β = 1.8; b β = 2.4; c β = 3.0

curves of the 1-bit LMPT detector and the clairvoyant LMPT detector are plotted for β = 1.8, 2.4, 3.0, where the number of sensors for the 1-bit LMPT detector is set to ARE_{ana/1-bit} times that for the clairvoyant LMPT detector. For the 1-bit LMPT detector, the local quantizers are designed using (3.40). As illustrated in Fig. 3.7, the proposed 1-bit LMPT detector with L_{1-bit} = ARE_{ana/1-bit} × L_{ana} sensors and the clairvoyant LMPT detector with L_{ana} sensors provide approximately the same detection performance. This further corroborates the validity of the ARE_{ana/q-bit} calculated in (3.44).


Table 3.1 The value of ARE_{ana/q-bit}

                                                β = 1.8   β = 2.4   β = 3.0
ARE_{ana/1-bit} (with quantizers in (3.40))       3.98      4.19      4.77
ARE_{ana/1-bit} (with PSO quantizers)             3.57      3.18      3.20
ARE_{ana/2-bit} (with PSO quantizers)             1.41      1.36      1.39
ARE_{ana/3-bit} (with PSO quantizers)             1.13      1.10      1.12

Fig. 3.7 ROC curves of the clairvoyant LMPT detector and the 1-bit LMPT detector, where σ_x² = 4, σ_w² = 1, p = 0.015. The number of sensors of the proposed 1-bit LMPT detector is set to ARE_{ana/1-bit} times that of the clairvoyant LMPT detector. a β = 1.8; b β = 2.4; c β = 3.0


3.5 Conclusion

Generalized Gaussian distributions with flexible forms are commonly used to model non-Gaussian signals/noise in the context of signal processing. It is therefore of great significance to study the problem of detection of jointly sparse signals based on the generalized Gaussian model. The main contribution of this chapter is threefold: (1) We proposed the LMPT detector based on quantized measurements in the generalized Gaussian case, which is a beneficial improvement of the existing detection methods for sparse signals in non-Gaussian environments. (2) Near-optimal closed-form 1-bit quantizers were obtained by optimizing the log-concave approximation of the efficacy. (3) In order to measure the performance loss caused by the quantization of local measurements, the ARE between the clairvoyant LMPT detector and the proposed quantized LMPT detector was analytically derived. The ARE can also be used to quantitatively increase the number of local sensors in the network to compensate for the performance loss caused by quantization. Simulation results corroborate our theoretical analysis of the proposed LMPT detector and the ARE.

The signal detection method in this chapter relies on pre-estimated parameters of the generalized Gaussian distribution. Future research related to this chapter includes the analysis of the loss of detection performance caused by estimation errors in the pre-estimation step.

References

1. Jacques L, Hammond DK, Fadili JM (2010) Dequantizing compressed sensing: when oversampling and non-Gaussian constraints combine. IEEE Trans Inf Theory 57(1):559–571
2. Ciuonzo D, Papa G, Romano G et al (2013) One-bit decentralized detection with a Rao test for multisensor fusion. IEEE Signal Process Lett 20(9):861–864
3. Wang G, Zhu J, Xu Z (2019) Asymptotically optimal one-bit quantizer design for weak-signal detection in generalized Gaussian noise and lossy binary communication channel. Signal Process 154:207–216
4. Do MN, Vetterli M (2002) Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans Image Process 11(2):146–158
5. Gazor S, Zhang W (2003) Speech probability distribution. IEEE Signal Process Lett 10(7):204–207
6. Novey M, Adali T, Roy A (2009) A complex generalized Gaussian distribution: characterization, generation, and estimation. IEEE Trans Signal Process 58(3):1427–1433
7. Baraniuk RG, Cevher V, Wakin MB (2010) Low-dimensional models for dimensionality reduction and signal recovery: a geometric perspective. Proc IEEE 98(6):959–971
8. Guo C, Davies ME (2013) Sample distortion for compressed imaging. IEEE Trans Signal Process 61(24):6431–6442
9. Poor HV, Thomas JB (1978) Locally optimum detection of discrete-time stochastic signals in non-Gaussian noise. J Acoust Soc Am 63(1):75–80


10. Xue S, Oelmann B (2006) Unary prefixed Huffman coding for a group of quantized generalized Gaussian sources. IEEE Trans Commun 54(7):1164–1169
11. Dytso A, Bustin R, Poor HV et al (2018) Analytical properties of generalized Gaussian distributions. J Stat Distrib Appl 5(1):6
12. Banerjee S, Agrawal M (2013) Underwater acoustic noise with generalized Gaussian statistics: effects on error performance. In: Proceedings of MTS/IEEE OCEANS-Bergen. IEEE, pp 1–8
13. Papoulis A, Pillai SU (2002) Probability, random variables, and stochastic processes. Tata McGraw-Hill Education
14. Billingsley P (2008) Probability and measure. Wiley, Hoboken, NJ, US
15. Davis PJ (1959) Leonhard Euler's integral: a historical profile of the gamma function: in memoriam: Milton Abramowitz. Am Math Mon 66(10):849–869
16. Durrett R (2019) Probability: theory and examples. Cambridge University Press
17. Kay SM (1998) Fundamentals of statistical signal processing: detection theory. Prentice-Hall, Upper Saddle River, NJ
18. Kassam SA (2012) Signal detection in non-Gaussian noise. Springer Science & Business Media
19. Cheung RCY, Aue A, Lee TCM (2017) Consistent estimation for partition-wise regression and classification models. IEEE Trans Signal Process 65(14):3662–3674
20. Duarte C, Barner KE, Goossen K (2016) Design of IIR multi-notch filters based on polynomially-represented squared frequency response. IEEE Trans Signal Process 64(10):2613–2623
21. Gao F, Guo L, Li H et al (2014) Quantizer design for distributed GLRT detection of weak signal in wireless sensor networks. IEEE Trans Wireless Commun 14(4):2032–2042
22. Alzer H (1997) On some inequalities for the incomplete gamma function. Math Comput 66(218):771–778
23. Li S, Yin H, Fang L (2012) Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans Biomed Eng 59(12):3450–3459

Chapter 4

Jointly Sparse Signal Recovery Method Based on Look-Ahead-Atom-Selection

4.1 Introduction

Different from the signal detection algorithms in Chaps. 2 and 3, whose output is H_0 or H_1, signal recovery approaches aim to solve an inverse problem, i.e., to accurately reconstruct sparse signals from an overcomplete sensing matrix and a small number of measurements. The task of jointly sparse signal recovery plays an important role in distributed sensor networks, multi-channel radar imaging, spectrum sensing, etc. [1–5]. Greedy algorithms are commonly used in the context of jointly sparse signal recovery. Generally speaking, greedy algorithms are computationally simpler than the methods based on convex optimization and Bayesian learning, and they have a theoretical guarantee on the recovery performance [6, 7]. 'Atom' is an important concept for greedy methods. Atom signals are the column vectors in the overcomplete sensing matrix corresponding to the positions of non-zero elements in the sparse signal. In each iteration, a greedy algorithm selects one or more atom signals that are most correlated with the compressed measurements to reconstruct the jointly sparse signals. Existing greedy algorithms based on the joint sparsity model include SOMP [6], SSP [8], and HMP [1]. Based on orthogonal projection operations, SOMP guarantees the orthogonality among the selected atom signals in the iteration process. SSP selects more than one atom signal in each iteration, and reevaluates the reliability of the atom signals selected before, so as to remove poor atom signals and to add new candidates. HMP is superior to other greedy algorithms in terms of the accuracy of atom selection thanks to its capability of combining the strengths of SOMP and SSP, i.e., orthogonality among the newly selected atom signals and the reevaluation of all the atom signals selected at past iterations. As demonstrated in [1], we can see a significant performance improvement of signal recovery by using HMP. However, in the iterative process of HMP, new atom candidates are selected to directly expand the previous support set. It is noted that this straightforward expansion of the atom candidate set cannot provide a guarantee of the reduction of the recovery error. Namely, some unreliable entries in the atom candidate set may bring a destabilizing factor


and, therefore, increase the recovery error at the next iteration. Look-ahead-atom-selection (LAAS) was first presented in [9], where an optimal new atom signal is selected from the set of candidates by appraising its effect on the final performance in the sense of minimizing the norm of the recovery error at the end of all future iterations. As reported in [9], the LAAS strategy can improve the reliability of the selected atom signals. In this chapter, we extend HMP by embedding the LAAS strategy into the iterative process of HMP and refer to the new algorithm as 'look-ahead hybrid matching pursuit' (LAHMP). LAHMP combines the superiorities of HMP and LAAS. Theoretical analysis is provided by checking the conditioning of the least-squares problems in LAHMP and HMP to demonstrate the advantage of LAHMP. Besides, LAHMP is applied to solve the multi-channel radar imaging problem. Experiments based on measured multi-channel radar data show that, compared with popular greedy pursuit algorithms, LAHMP suppresses more clutter points and yields radar images of better quality.

4.2 Background of Recovery of Jointly Sparse Signals

Assume there are L measurement vectors {y^{(1)}, y^{(2)}, ..., y^{(L)}}, y^{(l)} ∈ C^{M×1}, l = 1, 2, ..., L. Based on linear algebra theory, y^{(l)} can be linearly represented by a few vectors {φ_1^{(l)}, φ_2^{(l)}, ..., φ_N^{(l)}}:

$$y^{(l)} = \sum_{n=1}^{N} x_n^{(l)}\, \varphi_n^{(l)}, \qquad (4.1)$$

where x_n^{(l)} denotes the representation coefficient, φ_n^{(l)} ∈ C^{M×1}, l = 1, 2, ..., L, n = 1, 2, ..., N. Generally, we can obtain the overcomplete representation of y^{(l)}, i.e., M < N. For example, in the signal model of radar imaging, the number of vectors in {φ_1^{(l)}, φ_2^{(l)}, ..., φ_N^{(l)}} corresponds to the number of pixels in the generated radar image, and the latter is far larger than the dimension of y^{(l)} [10]. (4.1) can be rewritten as

$$y^{(l)} = \psi^{(l)} x^{(l)}, \qquad (4.2)$$

where l = 1, 2, ..., L, ψ^{(l)} = [φ_1^{(l)}, φ_2^{(l)}, ..., φ_N^{(l)}] denotes the sensing matrix, and x^{(l)} is the sparse signal to be reconstructed. In practice, the measurement y^{(l)} is often corrupted by additive noise w^{(l)}. Thus, we reformulate (4.2) as

$$y^{(l)} = \psi^{(l)} x^{(l)} + w^{(l)}. \qquad (4.3)$$

Similar to Chaps. 2 and 3, we assume that {x(1) , x(2) , . . . , x(L) } in (4.3) share the same sparsity pattern, i.e., the locations of non-zero elements in each vector


in {x^{(1)}, x^{(2)}, ..., x^{(L)}} are the same. Let K be the sparsity degree of each sparse signal [11], and K ≪ N. The recovery of jointly sparse signals is to reconstruct {x^{(1)}, x^{(2)}, ..., x^{(L)}} with the jointly sparse structure based on the low-dimensional measurements {y^{(1)}, y^{(2)}, ..., y^{(L)}} and the sensing matrices {ψ^{(1)}, ψ^{(2)}, ..., ψ^{(L)}}. For example, in the context of multi-channel radar imaging, the recovery of jointly sparse signals is to reconstruct the sparse scenes observed by the radar. The relationship between K here and the sparsity degree defined by the probability p in (2.3) is that K refers to the upper limit of the number of non-zero values in a sparse signal, while pN denotes the expectation of the number of non-zero elements in a sparse signal.

The signal recovery methods based on greedy strategies aim to accurately estimate the positions of the non-zero elements in the sparse signals (i.e., the support set of the sparse signals) and then estimate the values of the non-zero elements. HMP is a classical method for the recovery of jointly sparse signals based on the greedy strategy, which combines the orthogonality of the atoms selected by the SOMP method and the reevaluation of the atoms in the SSP method. Experiments based on measured data show that HMP provides more accurate signal recovery results than the existing competitors SOMP and SSP [1]. In what follows, the HMP algorithm is briefly reviewed. In the following description, (·)^H is the conjugate transpose, (·)† represents ((·)^H(·))^{-1}(·)^H, and

$$\mathrm{max\_ind}(\sigma, K) \triangleq \{\text{the set of indices corresponding to the } K \text{ largest amplitude components of } \sigma\}, \qquad (4.4)$$

$$x = \mathrm{OMP}(y, \psi, K). \qquad (4.5)$$

In (4.5), x denotes the output of the standard OMP algorithm [11], where y, ψ, and K represent the linear measurement vector, the overcomplete sensing matrix, and the number of non-zero elements required by OMP, respectively. HMP is summarized in Algorithm 4.1. At the initialization phase, the solution of each sparse signal is obtained via OMP. Then, the global estimate of the common support set Λ_old is determined by seeking the K indices corresponding to the largest coefficients in the fusion result of all the sparse signals. After that, local residual vectors are obtained by projecting the measurement y^{(l)} onto the column subspace of ψ^{(l)}_{Λ_old}, for l = 1, 2, ..., L. HMP iterates over two key steps: set expansion and reevaluation. The expanded common support Λ_temp is obtained by merging the K indices, which correspond to the largest entries in the fusion result of all the OMP outputs, into the index set Λ_old obtained at the previous iteration. Accordingly, the enlarged set Λ_temp contains no more than 2K indices. Subsequently, reevaluation of the expanded support set is carried out. For l = 1, 2, ..., L, we calculate the projection coefficients of the local measurements y^{(l)} onto the column subspace ψ^{(l)}_{Λ_temp}. The set Λ_new, which contains the K indices corresponding to the largest entries in the fusion result of all the projection coefficients, is deemed the global estimate of the common support set. After reevaluation, the residual vectors are updated by projecting the local measurement data y^{(l)} onto the column


subspace of ψ^{(l)}_{Λ_new}. The iterative process is terminated when the global recovery error no longer decreases.

Algorithm 4.1 HMP [1]
Input: {y^{(l)}, ψ^{(l)}, l = 1, 2, ..., L}, K
Initialization:
1. x'^{(l)} = OMP(y^{(l)}, ψ^{(l)}, K), l = 1, 2, ..., L;
2. Λ_old = max_ind(Σ_{l=1}^{L} |x'^{(l)}|, K);
3. r_old^{(l)} = y^{(l)} − ψ^{(l)}_{Λ_old}(ψ^{(l)}_{Λ_old})† y^{(l)}, l = 1, 2, ..., L
Iterations:
4. x''^{(l)} = OMP(r_old^{(l)}, ψ^{(l)}, K), l = 1, 2, ..., L;
5. Λ_temp = Λ_old ∪ max_ind(Σ_{l=1}^{L} |x''^{(l)}|, K);
6. Λ_new = max_ind(Σ_{l=1}^{L} |(ψ^{(l)}_{Λ_temp})† y^{(l)}|, K);
7. r_new^{(l)} = y^{(l)} − ψ^{(l)}_{Λ_new}(ψ^{(l)}_{Λ_new})† y^{(l)}, l = 1, 2, ..., L;
8. If Σ_{l=1}^{L} ||r_old^{(l)}||_2^2 > Σ_{l=1}^{L} ||r_new^{(l)}||_2^2, then r_old^{(l)} = r_new^{(l)}, Λ_old = Λ_new, and return to step 4; else x̂^{(l)}_{Λ_old} = (ψ^{(l)}_{Λ_old})† y^{(l)} and stop the iteration
Output: x̂^{(l)} ∈ R^{N×1}, whose elements are zero except the entries indexed by Λ_old, l = 1, 2, ..., L.
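For readers who prefer code to pseudocode, the following compact numpy sketch re-implements Algorithm 4.1; `omp`, `ls_residual`, and `max_ind` are hypothetical helper names, and this is an illustrative re-implementation rather than the thesis code.

```python
import numpy as np

def ls_residual(y, Psi, idx):
    """Residual and coefficients of projecting y onto the columns of Psi in idx."""
    sub = Psi[:, idx]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    return y - sub @ coef, coef

def omp(y, Psi, K):
    """Standard OMP: returns an N-vector holding K recovered coefficients."""
    idx, r, coef = [], y.copy(), None
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Psi.conj().T @ r))))
        r, coef = ls_residual(y, Psi, idx)
    x = np.zeros(Psi.shape[1], dtype=complex)
    x[idx] = coef
    return x

def max_ind(sigma, K):
    """Indices of the K largest-amplitude components of sigma, cf. (4.4)."""
    return list(np.argsort(np.abs(sigma))[::-1][:K])

def hmp(ys, Psis, K):
    L = len(ys)
    fused = sum(np.abs(omp(ys[l], Psis[l], K)) for l in range(L))     # steps 1-2
    lam = max_ind(fused, K)
    res = [ls_residual(ys[l], Psis[l], lam)[0] for l in range(L)]     # step 3
    err = sum(np.linalg.norm(r) ** 2 for r in res)
    while True:
        fused = sum(np.abs(omp(res[l], Psis[l], K)) for l in range(L))  # step 4
        lam_tmp = sorted(set(lam) | set(max_ind(fused, K)))             # step 5
        proj = sum(np.abs(np.linalg.pinv(Psis[l][:, lam_tmp]) @ ys[l]) for l in range(L))
        lam_new = [lam_tmp[j] for j in max_ind(proj, K)]                # step 6
        res_new = [ls_residual(ys[l], Psis[l], lam_new)[0] for l in range(L)]
        err_new = sum(np.linalg.norm(r) ** 2 for r in res_new)
        if err_new >= err:                                              # step 8
            break
        lam, res, err = lam_new, res_new, err_new
    xs = []
    for l in range(L):                        # final least-squares fill-in
        x = np.zeros(Psis[l].shape[1], dtype=complex)
        x[lam] = ls_residual(ys[l], Psis[l], lam)[1]
        xs.append(x)
    return xs, lam
```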

4.3 Signal Recovery Method Based on Look-Ahead-Atom-Selection and Its Performance Analysis

From the above description of HMP (step 5 in Sect. 4.2), we can see that the expansion of the support set does not consider its impact on the recovery error in the future. In this chapter, inspired by LAAS [9], we propose the LAHMP algorithm by embedding the LAAS operation into the iterative process of HMP. Then the innovation of LAHMP and the computational complexity of LAHMP are analyzed in detail.


Algorithm 4.2 LAAS
Input: {y^{(l)}, ψ^{(l)}, l = 1, 2, ..., L}, K, previous estimate of the support set Λ_old, candidate index set Λ_c
Initialization:
1. n = [n_1, n_2, ..., n_K]^T ← 0_{K×1};
Iterations: for k = 1 : K
2. Λ_test = Λ_old ∪ Λ_c(k);
3. n_k = Σ_{l=1}^{L} ||y^{(l)} − ψ^{(l)}_{Λ_test}(ψ^{(l)}_{Λ_test})† y^{(l)}||_2;
end for
Output: i = Λ_c(j), where j = arg min_k {n_k, k = 1, 2, ..., K}.
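A direct transcription of Algorithm 4.2 follows; it reuses the `ls_residual` helper from the HMP sketch above (an assumption of this illustration), and K is implicit in the length of the candidate set.

```python
import numpy as np

def laas(ys, Psis, lam_old, lam_c):
    """Return the candidate atom whose inclusion minimizes the summed residual norm."""
    n = np.empty(len(lam_c))
    for k, cand in enumerate(lam_c):                    # test each candidate atom
        lam_test = sorted(set(lam_old) | {cand})        # step 2: temporary support
        n[k] = sum(np.linalg.norm(ls_residual(ys[l], Psis[l], lam_test)[0])
                   for l in range(len(ys)))             # step 3: look-ahead error
    return lam_c[int(np.argmin(n))]                     # output: best atom index
```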

4.3.1 Signal Recovery Method

First, the LAAS operation is described in Algorithm 4.2, which selects an optimal atom from the candidate set by evaluating its effect on the future recovery error in the sense of minimizing the norm of the residual vector after the support set expansion. In Algorithm 4.2, the inputs are {y^{(l)}, ψ^{(l)}, l = 1, 2, ..., L}, the previous estimate of the support set Λ_old, the candidate support set Λ_c, and the number of non-zero elements K. Λ_c contains K potential atoms selected from the additive fusion result of the standard OMP outputs. First, the k-th atom in Λ_c is merged into the previous support set, and a temporary set is formed as Λ_test = Λ_old ∪ Λ_c(k). Then, by projecting the measurements onto the subspace spanned by the columns indexed by Λ_test, we can calculate the average of the norms of the residual vectors over all the measurement vectors, denoted as n_k. Obviously, n_k represents the effect on the recovery error caused by the k-th atom in the candidate set. After testing all the atoms in Λ_c, the index i of the most promising atom is found, which corresponds to the smallest component in {n_k, k = 1, 2, ..., K}. When more than one atom corresponds to the same global minimum recovery error, we arbitrarily select one among them. A compact notation of LAAS is defined here as

$$i = \mathrm{LAAS}\left(\{y^{(l)}, \psi^{(l)}, l = 1, ..., L\},\, K,\, \Lambda_{\mathrm{old}},\, \Lambda_c\right). \qquad (4.6)$$


Algorithm 4.3 LAHMP
Input: {y^{(l)}, ψ^{(l)}, l = 1, 2, ..., L}, K
Initialization:
1. x'^{(l)} = OMP(y^{(l)}, ψ^{(l)}, K), l = 1, 2, ..., L;
2. Λ_old = max_ind(Σ_{l=1}^{L} |x'^{(l)}|, K);
3. r_old^{(l)} = y^{(l)} − ψ^{(l)}_{Λ_old}(ψ^{(l)}_{Λ_old})† y^{(l)}, l = 1, 2, ..., L
Iterations:
4. x''^{(l)} = OMP(r_old^{(l)}, ψ^{(l)}, K), l = 1, 2, ..., L;
5. Λ_c = max_ind(Σ_{l=1}^{L} |x''^{(l)}|, K);
6. i = LAAS({y^{(l)}, ψ^{(l)}, l = 1, ..., L}, K, Λ_old, Λ_c);
7. Λ_temp = Λ_old ∪ {i};
8. Λ_new = max_ind(Σ_{l=1}^{L} |(ψ^{(l)}_{Λ_temp})† y^{(l)}|, K);
9. r_new^{(l)} = y^{(l)} − ψ^{(l)}_{Λ_new}(ψ^{(l)}_{Λ_new})† y^{(l)}, l = 1, 2, ..., L;
10. If Σ_{l=1}^{L} ||r_old^{(l)}||_2^2 > Σ_{l=1}^{L} ||r_new^{(l)}||_2^2, let r_old^{(l)} = r_new^{(l)}, Λ_old = Λ_new, and return to step 4; else stop the iteration and let x̂^{(l)}_{Λ_old} = (ψ^{(l)}_{Λ_old})† y^{(l)}
Output: x̂^{(l)} ∈ R^{N×1}, whose elements are zero except the entries indexed by Λ_old, l = 1, 2, ..., L.

By embedding the forward-looking atom selection of LAAS into HMP, the LAHMP algorithm is summarized in Algorithm 4.3. Similar to existing greedy algorithms, the LAHMP algorithm also aims to reconstruct the common support set. In LAHMP, after the initialization phase, which is the same as in HMP, three main steps are contained in the iterative process: index selection, set expansion, and reevaluation. At the index selection step, local indices are obtained by utilizing standard OMP at each channel, and the indices corresponding to the largest K entries in the fusion result of all the OMP outputs are put into the candidate support set Λ_c. Then, the LAAS algorithm is invoked to evaluate each atom in Λ_c, and the atom i corresponding to the smallest future recovery error is found and merged into the previous estimate of the support set to form the temporarily expanded support set Λ_temp. The next step is reevaluation, just like the operations in HMP: after refining the expanded support set based on the projection coefficients, the new global estimate of the common support set Λ_new is obtained. Finally, the local residual vectors are updated. The iterations are terminated when the global recovery error no longer decreases.

Similar to the existing greedy algorithms, the proposed LAHMP requires the value of K as a part of its input. The value of K can be deemed sufficient so that


the dimension of the determined subspace is large enough to comprise the desired dominant components of the sparse signal. A simple approach to choosing the value of K is to run a greedy algorithm over a range of sparsity levels such as K = 1, 2, 4, ..., M and then select the appropriate sparse recovery result, at the cost of a runtime increase by a factor of no more than O(log M) [12]. Another way of selecting the sparsity is to use the correlation method [13], where the observation vectors are correlated with the atoms in ψ^{(l)} and the value of K is chosen as the number of atoms whose absolute correlation exceeds a predefined threshold. The authors of [14] suggest increasing the value of K with a step length in an iterative process that is terminated when the norm of the residual is lower than a preset parameter.
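Before turning to the performance analysis, the LAHMP iteration of Algorithm 4.3 can be sketched compactly by combining the two previous sketches: it differs from HMP only in steps 5–7, where the candidate set feeds LAAS and a single look-ahead atom expands the support. The code below reuses the `omp`, `ls_residual`, `max_ind`, and `laas` helpers defined above (an assumption of this illustration).

```python
import numpy as np

def lahmp(ys, Psis, K):
    L = len(ys)
    fused = sum(np.abs(omp(ys[l], Psis[l], K)) for l in range(L))     # steps 1-2
    lam = max_ind(fused, K)
    res = [ls_residual(ys[l], Psis[l], lam)[0] for l in range(L)]     # step 3
    err = sum(np.linalg.norm(r) ** 2 for r in res)
    while True:
        fused = sum(np.abs(omp(res[l], Psis[l], K)) for l in range(L))  # step 4
        lam_c = max_ind(fused, K)                                       # step 5
        i = laas(ys, Psis, lam, lam_c)                                  # step 6
        lam_tmp = sorted(set(lam) | {i})                                # step 7: <= K+1 atoms
        proj = sum(np.abs(np.linalg.pinv(Psis[l][:, lam_tmp]) @ ys[l]) for l in range(L))
        lam_new = [lam_tmp[j] for j in max_ind(proj, K)]                # step 8
        res_new = [ls_residual(ys[l], Psis[l], lam_new)[0] for l in range(L)]
        err_new = sum(np.linalg.norm(r) ** 2 for r in res_new)
        if err_new >= err:                                              # step 10
            break
        lam, res, err = lam_new, res_new, err_new
    return lam
```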

4.3.2 Performance Analysis

In step 5 of HMP, as seen in Algorithm 4.1, the previous support set estimate Λ_old is expanded by adding K indices newly selected by the standard OMP algorithm. This expansion does not consider the effect of the newly added atoms on the recovery error in the future and, therefore, cannot guarantee that the updated atom set will reduce the overall recovery error. In contrast, the look-ahead operation in LAHMP can select an optimal atom from the candidate set by evaluating its effect on the future reconstruction quality. In this way, we can decide how to choose a proper atom candidate to continuously reduce the recovery error. Let us look at step 6 in HMP and step 8 in LAHMP, both of which solve a least-squares approximation problem. In what follows, the stability of the least-squares approximation before and after embedding LAAS is analyzed. For an arbitrary least-squares problem y = ψx, where y ∈ C^{a×1}, ψ ∈ C^{a×b}, and x ∈ C^{b×1}, the stability of the solution depends on how well conditioned ψ is, which is determined by the condition number κ(ψ) [15], i.e.,

$$\frac{\|\Delta x\|}{\|x\|} \le \frac{\kappa(\psi)}{1 - \kappa(\psi)\dfrac{\|\Delta\psi\|}{\|\psi\|}} \left(\frac{\|\Delta\psi\|}{\|\psi\|} + \frac{\|\Delta y\|}{\|y\|}\right), \qquad (4.7)$$

where Δ(·) denotes a perturbed term of (·). A smaller value of κ(ψ) helps to guarantee a stable least-squares solution when the coefficient matrix ψ or the measurement vector y has a perturbation term. Under the assumption a ≥ b, let {ϑ_1, ϑ_2, ..., ϑ_R} denote the non-zero singular values of ψ arranged in non-increasing order, where R = rank(ψ). The condition number κ(ψ) can be expressed as [15]

$$\kappa(\psi) = \frac{\vartheta_1}{\vartheta_R}. \qquad (4.8)$$

Theorem 2 [16] Let ψ ∈ C^{a×b} be a given matrix and let ψ̃ ∈ C^{a×v} be the matrix generated by removing any b − v columns from ψ, where 1 ≤ v < b. Let


{ϑ_1, ϑ_2, ..., ϑ_R} denote the singular values of ψ and let {ϑ̃_1, ϑ̃_2, ..., ϑ̃_R̃} consist of the singular values of ψ̃, both arranged in non-increasing order, where R̃ = rank(ψ̃). If a ≥ b, then for each integer d such that 1 ≤ d ≤ R̃ we have the conclusion:

$$\vartheta_d \ge \tilde{\vartheta}_d \ge \vartheta_{d+R-\tilde{R}}. \qquad (4.9)$$

Combining (4.8) and (4.9), it can be deduced that

$$\kappa(\psi) = \frac{\vartheta_1}{\vartheta_R} = \frac{\vartheta_1}{\vartheta_{\tilde{R}+R-\tilde{R}}} \ge \frac{\vartheta_1}{\tilde{\vartheta}_{\tilde{R}}} \ge \frac{\tilde{\vartheta}_1}{\tilde{\vartheta}_{\tilde{R}}} = \kappa(\tilde{\psi}). \qquad (4.10)$$

(4.10)

At the t-th iteration of HMP, where the look-ahead operation is not embedded, ψ (l) Ʌtemp contains no more than 2K columns. In contrast, no more than K + 1 columns

are involved in ψ (l) Ʌtemp after using the look-ahead operation in the LAHMP. From the above Theorem 2, it follows that the embedding of the look-ahead operation lowers (l) . Thus, in step 8 of LAHMP, we can obtain more the condition number of ψ Ʌtemp stable projection coefficients to form the refined support set Ʌnew , which will be used at the next iteration. This is the reason why the look-ahead operation can provide a beneficial effect on the next iteration. Accordingly, through the application of LAAS at every iteration, it is expected that LAHMP will outperform HMP in terms of the selection of atoms corresponding to the true dominant values in sparse signals. Next, we discuss the computational complexity of the LAHMP algorithm. Since one new atom is added into the previous support set estimate at every iteration of LAHMP, the number of iterations required for LAHMP is about O(K ). As reported in [1], HMP also requires O(K ) iterations. However, at each iteration of HMP, more than one atom (K atoms) are merged into the previous support set estimate. Inevitably, more iterations are required for LAHMP to select all correct atoms compared to HMP. The new operation in the LAHMP is the application of LAAS. In LAAS, the average of the norm  of residual vectors is performed K times and each calculation the costs O K 3 L M by using the modified Gram-Schmidt algorithm [17]. Hence,   additional computational complexity owing to LAAS itself is about O K 4 L M , when LAHMP is terminated. In step 6 of HMP and step 8 of LAHMP, the leastsquares estimations require (2K )2 L M and (K + 1)2 L M floating point operations, respectively. That is to say that embedding LAAS into HMP reduces the computational complexity of the least-squares estimation per iteration. Thus, when LAHMP is terminated, computational complexity caused by the embedding of  the reduced  compuLAAS is O K 3 L M , which is negligible in comparison with  2 the additional  O K N L M , it is clear that tational complexity. Since the complexity of HMP is     the total computational complexity of LAHMP is O K 2 K 2 + N L M . Compared with HMP, the increase of the computational complexity of LAHMP is affordable when K 2  N . Moreover, the computational complexity of LAHMP is still lower compared with Bayesian learning and convex optimization algorithms. For  example, the complexity of clustered multitask Bayesian CS (CMT-BCS) is O tIter N L M 3

4.4 Experimental Results

69

[18], where t Iter is the number of iterations in the The  computational  Gibbs sampling.  burden of convex optimization methods is O N 2 L M or O L N 3 [19].

4.4 Experimental Results The problem of multi-channel radar imaging can be formulated as the problem of recovery of jointly sparse signals. Taking radar imaging as an example, measured multi-channel radar data is used to demonstrate the effectiveness of our proposed LAHMP method. Running times of LAHMP and other competitors are also reported. First, we introduce the background of radar imaging, which plays an important role in the context of search-and-rescue operations, law enforcement, urban surveillance, and mine detection [10, 20–22]. The aim of radar imaging is to obtain high-resolution images of targets from the observation data. Traditional radar imaging methods such as back-projection (BP) are based on matched filtering over all the spatial and temporal measurements. For BP-based algorithms, a high down-range resolution is obtained by employing ultra-wideband waveforms and high cross-range resolution depends on a large antenna aperture. Nevertheless, the physical size of the real or synthetic aperture may be limited by a number of restrictions, for example, covertness, non-cooperative environment, and space/time limitations. Therefore, images generated by BP-based algorithms may suffer from low resolution and high sidelobe levels. Recently, the development of the theory of sparse signal recovery has provided a new framework for data reduction without sacrificing performance in the application of radar imaging [10]. By exploiting the sparse feature of radar images, fewer radar measurements are required by the sparse radar imaging technologies with higher resolution and reduced sidelobe. In this chapter, multi-polarization is used to obtain the multi-channel radar data. Polarimetric sensing contains more independent observations, in other words, more abundant target information in sharp contrast to a single polarization, thanks to the differing scattering behavior of a target under diverse polarizations [23]. Since the rules of discretization of the observed scene are the same for all the channels, the problem of polarimetric radar imaging is naturally a problem of recovery of jointly sparse signals. It is worth mentioning that, multi-polarization radar is only an example of multichannel radar. Our LAHMP method can also be used in the applications of multi-static and multi-band radar imaging. Besides, the topic of target detection and recognition utilizing polarimetric information is not included in this book. In this chapter, the radar data was generated by the Radar Imaging Lab of the Center for Advanced Communications at Villanova University [24]. A stepped frequency radar of 1 GHz bandwidth centered at 2.5 GHz with a frequency step size of 5 MHz is used to acquire multi-polarization data. In this chapter, measurements from three polarimetric channels, S11 (HH), S12 (VV), and S22 (HV) are used to evaluate different algorithms. At each polarimetric channel, the data comprises the same 201 frequency points and 69 antenna positions. Two horn antennas were mounted side-by-side on a field probe scanner which model H-1479 by BAE Systems,

70

4 Jointly Sparse Signal Recovery Method Based …

Fig. 4.1 Picture of the imaging scene: a image depicting the nine targets, b the ground-truth image

one oriented for vertical polarization and the other for horizontal polarization. The transmit power of signal was set to 5 dBm. The synthetic aperture length is 1.51 m in cross-range and the imaging scene includes one 6'' trihedral, three 3'' trihedrals, three 12'' dihedrals, a 12'' diameter sphere, and a 3'' diameter cylinder, all placed at different spatial positions as shown in Fig. 4.1. Through-wall measured data were obtained through a 127 mm thick nonhomogeneous plywood and gypsum board wall which is positioned 1.27 cm in downrange from the front of the antennas. Measured data in the free-space scenario were collected in the same setup but without the wall. We refer readers to [25] for complete details of the experiment setup. The observed down-range-cross-range imaging plane is discretized as 121 × 81 pixels so that the resolution of the scene is 0.05 × 0.05 m2 , which is comparable with the size of the smallest 3'' trihedral in the scene. The imaging plane is at the height of 1.06 m above the ground. The results of performing the BP algorithm in the free-space and through-wall scenario on the full measured data are shown in Figs. 4.2 and 4.3, respectively, where the additive fusion method is used to generate the composite images.1 From Figs. 4.2 and 4.3, we can observe the joint sparsity pattern of radar images obtained from different polarization channels. Furthermore, the change of clutter and the distortion of the image due to the presence of the wall can also be observed in Figs. 4.2 and 4.3. It is obvious that BP suffers from low resolution and high sidelobe levels. 1

Here, additive fusion is used due to its simplicity. Other algorithms for fusing images such as fuzzy fusion [26] and principal component analysis fusion [27] also can be used after the formation of source images.


Fig. 4.2 Images by BP algorithm in free-space scenario: a HH, b VV, c HV, d composite image by additive fusion. Color scale is in dB relative to peak magnitude

Fig. 4.3 Images by BP algorithm in through-wall scenario: a HH, b VV, c HV, d composite image by additive fusion

Then, we evaluate four greedy algorithms for jointly sparse signal recovery, i.e., SOMP, SSP, HMP, and LAHMP, using both free-space and through-wall data. Here, 100 frequency points and 69 antenna positions are selected for all the polarimetric channels. K is set to 70, which is sufficient to image the scene of interest, since the number of dominant pixels in Figs. 4.4 and 4.5 is smaller than 70. It can be observed from Figs. 4.4 and 4.5 that the greedy algorithms allow targets to be discriminated from clutter and noise with fewer data. In Fig. 4.4a, SOMP finds all point-like scatterers but suffers from many artifacts outside the target areas. In Fig. 4.4b, SSP suppresses artifacts well but fails to find all scatterers in the observed scene. From Fig. 4.4c and d, we can see that the dominant coefficients produced by LAHMP are more concentrated than those of HMP. In Fig. 4.5, LAHMP also shows its superiority even though the presence of the wall leads to a large increase in propagation complexity. Similar to the previous experiment, SOMP and HMP are disturbed by more artifacts, while SSP suffers from the loss of information about seven targets. LAHMP outperforms the other algorithms in the following two aspects: (1) the dominant coefficients are more concentrated around the true scatterers; (2) the clutter and artifacts are more effectively suppressed. None of the greedy algorithms detects the trihedral at 5.7 m down-range, owing to its weak scattering characteristics.

To quantitatively compare the performance of these algorithms, the target-to-clutter ratio (TCR) is defined as

$$ \mathrm{TCR} = \frac{(1/N_{\mathrm{Tar}}) \sum_{(x,y) \in \mathrm{Tar}} |\mathrm{Img}(x,y)|^2}{(1/N_{\mathrm{Clu}}) \sum_{(x,y) \in \mathrm{Clu}} |\mathrm{Img}(x,y)|^2}, \quad (4.11) $$

Fig. 4.4 Composite images in free-space scenario: a SOMP, b SSP, c HMP, d LAHMP

Fig. 4.5 Composite images in through-wall scenario: a SOMP, b SSP, c HMP, d LAHMP


where Img(x, y) is the magnitude of the pixel (x, y) in the composite image, "Tar" and "Clu" denote the target and clutter regions, and $N_{\mathrm{Tar}}$ and $N_{\mathrm{Clu}}$ are the numbers of pixels in the regions "Tar" and "Clu", respectively. A large value of TCR indicates that the targets are accurately located and that the clutter outside the target areas is effectively suppressed. In Table 4.1, the TCR values of the four algorithms are compared as a function of the percentage of measurements. Because SSP recovers only two targets and cannot reflect the true information about the observed scene, its TCR is not included in Table 4.1. As shown in Table 4.1, the quality of the reconstructed image improves as the number of measurements increases. The TCRs of SOMP and HMP are lower due to a large number of artifacts. A consistently higher TCR is provided by LAHMP, which implies that LAHMP achieves more accurate localization of targets and better suppression of artifacts. It is worth mentioning that the presence of the wall lowers the TCR due to the complexity of through-wall propagation, which is consistent with the comparison between Figs. 4.4 and 4.5.

The higher image quality generated by LAHMP comes at the expense of computational complexity. Next, we compare HMP and LAHMP in terms of running time and the number of iterations. We perform 50 trials on the through-wall data, where 50 frequency points and 30 azimuth positions are randomly selected from every polarimetric channel for each trial. The average running time and the average number of iterations of HMP and LAHMP are compared in Table 4.2. It is observed that the running time and the number of iterations of LAHMP are larger than those of HMP and, furthermore, that the increase in the computational complexity of LAHMP becomes more pronounced as the value of K increases. The computational burden of LAHMP is similar to that of HMP when K is small.
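For readers who wish to reproduce the TCR evaluation, the following minimal NumPy sketch computes (4.11) from a composite image and binary masks of the target and clutter regions. The function name and the mask-based interface are our own illustrative choices, not part of the original experiments.

```python
import numpy as np

def tcr_db(img, target_mask, clutter_mask):
    """Target-to-clutter ratio (4.11), returned in dB.

    img          : 2-D array of pixel values Img(x, y)
    target_mask  : boolean array marking the region "Tar"
    clutter_mask : boolean array marking the region "Clu"
    """
    power = np.abs(img) ** 2
    tar = power[target_mask].mean()    # (1/N_Tar) * sum over "Tar"
    clu = power[clutter_mask].mean()   # (1/N_Clu) * sum over "Clu"
    return 10.0 * np.log10(tar / clu)
```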

Table 4.1 TCR of four different imaging algorithms

| Percentage of measurements (%) | Free-space: LAHMP | HMP | SOMP | BP | Through-wall: LAHMP | HMP | SOMP | BP |
|---|---|---|---|---|---|---|---|---|
| 50 | **24.18** | 23.84 | 23.71 | 19.41 | **21.92** | 21.38 | 20.33 | 16.15 |
| 37.5 | **23.80** | 23.77 | 23.65 | 18.37 | **21.59** | 21.25 | 19.80 | 15.39 |
| 25 | **23.67** | 23.45 | 23.61 | 16.64 | **20.61** | 19.97 | 19.65 | 14.10 |

Note: Bold indicates better performance

Table 4.2 Running time and iterations of HMP and LAHMP

| Algorithm | Avg. running time (K = 10) | Avg. iterations (K = 10) | Avg. running time (K = 40) | Avg. iterations (K = 40) | Avg. running time (K = 70) | Avg. iterations (K = 70) |
|---|---|---|---|---|---|---|
| HMP | 3.72 | 2.0 | 47.08 | 3.0 | 119.35 | 3.4 |
| LAHMP | 4.56 | 2.3 | 121.39 | 7.0 | 501.99 | 8.0 |


The initial step of the original look-ahead algorithm in [9] is that the atom candidate is merged into the previous support set to form a test support set, and the recovery error is then obtained. After that, orthogonal projections are applied to iteratively add new atoms into the test support set until the recovery error no longer decreases. As summarized in Algorithm 4.3, the look-ahead strategy used in LAHMP can be regarded as a "one-step" look-ahead strategy. Without loss of generality, we can also set up a "p-step" look-ahead strategy, which evaluates the atom candidate by the recovery error after p new atoms have been merged into the previous support set; a minimal sketch is given below. As seen in Fig. 4.6, even when the original look-ahead strategy [9] is embedded in HMP, its TCR is only 11% higher than that of LAHMP, while its running time is about 33 times that of LAHMP. Therefore, the simplification of the look-ahead strategy in this chapter leads to a good tradeoff between performance and complexity.
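The sketch below illustrates how such a p-step look-ahead score for a single candidate atom could be computed in the single-channel case. It is a simplified illustration under our own assumptions (a single sensing matrix and a plain correlation-based inner greedy step), not the exact LAAS procedure of Algorithm 4.3; with p = 1 it mimics the "one-step" strategy used in LAHMP.

```python
import numpy as np

def lookahead_residual(Psi, y, support, candidate, p=1):
    """Residual norm after adding `candidate` to `support` and then
    performing p further greedy least-squares steps (p-step look-ahead)."""
    S = list(support) + [candidate]
    for step in range(p + 1):
        coef, *_ = np.linalg.lstsq(Psi[:, S], y, rcond=None)
        r = y - Psi[:, S] @ coef
        if step < p:
            corr = np.abs(Psi.conj().T @ r)   # correlation with the residual
            corr[S] = 0.0                     # exclude already-selected atoms
            S.append(int(np.argmax(corr)))
    return np.linalg.norm(r)

# The candidate with the smallest look-ahead residual would be selected.
```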

Fig. 4.6 Comparison of LAHMP with algorithms combining HMP and different versions of the look-ahead strategy (25% of the measurements of the through-wall scenario are selected): a TCR, b running time


4.5 Conclusion

Recovery of jointly sparse signals refers to the reconstruction of multiple high-dimensional signals with a joint sparsity pattern from a small number of measurements. In this chapter, we have developed a new greedy algorithm, referred to as LAHMP, for the recovery of jointly sparse signals. LAHMP is an improved version of HMP. The HMP method does not consider the influence of the selected atoms on future iterations, whereas the LAHMP method innovatively uses the LAAS operation, which allows us to evaluate the future performance implied by an atom selection and ensures the well-conditioning of the least-squares problems involved in LAHMP. LAAS in LAHMP also helps to continuously reduce the signal reconstruction error over the iterations. Taking multi-channel radar imaging as an example, experiments based on real radar data have demonstrated that, compared with SOMP, SSP, and HMP, the proposed LAHMP method better suppresses the clutter in radar images and improves the quality of multi-channel radar imaging at an affordable increase in computational burden.

The experimental part of this chapter involves through-wall radar imaging, where the background subtraction technique is used to suppress the wall clutter. In practice, the background observations of the through-wall scenario may not be directly available. The existing literature shows that wall clutter exhibits a low-rank characteristic [23]. Inspired by this, greedy signal recovery methods based on low-rank wall clutter could be designed in the future to achieve accurate reconstruction of jointly sparse signals together with the suppression of low-rank clutter.

References

1. Li G, Burkholder RJ (2015) Hybrid matching pursuit for distributed through-wall radar imaging. IEEE Trans Antennas Propag 63(4):1701–1711
2. Zeng F, Li C, Tian Z (2010) Distributed compressive spectrum sensing in cooperative multihop cognitive networks. IEEE J Select Topics Signal Process 5(1):37–48
3. Fosson SM, Matamoros J, Antón-Haro C et al (2016) Distributed recovery of jointly sparse signals under communication constraints. IEEE Trans Signal Process 64(13):3470–3482
4. Hannak G, Perelli A, Goertz N et al (2018) Performance analysis of approximate message passing for distributed compressed sensing. IEEE J Select Topics Signal Process 12(5):857–870
5. Li G, Wimalajeewa T, Varshney PK (2015) Decentralized and collaborative subspace pursuit: a communication-efficient algorithm for joint sparsity pattern recovery with sensor networks. IEEE Trans Signal Process 64(3):556–566
6. Tropp JA, Gilbert AC, Strauss MJ (2006) Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Process 86(3):572–588
7. Blanchard JD, Cermak M, Hanle D et al (2014) Greedy algorithms for joint sparse recovery. IEEE Trans Signal Process 62(7):1694–1704
8. Sundman D, Chatterjee S, Skoglund M (2014) Distributed greedy pursuit algorithms. Signal Process 105:298–315
9. Chatterjee S, Sundman D, Vehkaperä M et al (2012) Projection-based and look-ahead strategies for atom selection. IEEE Trans Signal Process 60(2):634–647
10. Amin MG (2014) Compressive sensing for urban radar. CRC Press


11. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53(12):4655–4666
12. Needell D, Tropp JA (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmon Anal 26(3):301–321
13. Choi JW, Shim B (2015) Statistical recovery of simultaneously sparse time-varying signals from multiple measurement vectors. IEEE Trans Signal Process 63(22):6136–6148
14. Do TT, Gan L, Nguyen N et al (2008) Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In: Proceedings of the 42nd Asilomar conference on signals, systems and computers. IEEE, pp 581–587
15. Horn RA, Johnson CR (2012) Matrix analysis. Cambridge University Press
16. Deepa KG, Ambat SK, Hari KVS (2014) Modified greedy pursuits for improving sparse recovery. In: Proceedings of the twentieth national conference on communications. IEEE, pp 1–5
17. Björck Å (1996) Numerical methods for least squares problems. SIAM
18. Wu Q, Zhang YD, Ahmad F et al (2014) Compressive-sensing-based high-resolution polarimetric through-the-wall radar imaging exploiting target characteristics. IEEE Antennas Wirel Propag Lett 14:1043–1047
19. Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press
20. Qiu W, Zhou J, Fu Q (2019) Jointly using low-rank and sparsity priors for sparse inverse synthetic aperture radar imaging. IEEE Trans Image Process 29:100–115
21. Hu X, Tong N, Zhang Y et al (2018) MIMO radar imaging with nonorthogonal waveforms based on joint-block sparse recovery. IEEE Trans Geosci Remote Sens 56(10):5985–5996
22. Peng X, Tan W, Hong W et al (2015) Airborne DLSLA 3-D SAR image reconstruction by combination of polar formatting and l1 regularization. IEEE Trans Geosci Remote Sens 54(1):213–226
23. Tang VH, Bouzerdoum A, Phung SL (2017) Multipolarization through-wall radar imaging using low-rank and jointly-sparse representations. IEEE Trans Image Process 27(4):1763–1776
24. Dilsavor R, Ailes W, Rush P et al (2005) Experiments on wideband through-the-wall radar imaging. In: Algorithms for synthetic aperture radar imagery XII, vol 5808. International Society for Optics and Photonics, pp 196–209
25. Orlando JR, Mann R, Haykin S (1990) Classification of sea-ice images using a dual-polarized radar. IEEE J Oceanic Eng 15(3):228–237
26. Seng CH, Bouzerdoum A, Amin MG et al (2013) Probabilistic fuzzy image fusion approach for radar through wall sensing. IEEE Trans Image Process 22(12):4938–4951
27. Wimalajeewa T, Varshney PK (2014) OMP based joint sparsity pattern recovery under communication constraints. IEEE Trans Signal Process 62(19):5059–5072

Chapter 5

Signal Recovery Methods Based on Two-Level Block Sparsity

5.1 Introduction

The clustering property of sparse signals means that the non-zero elements occur in clusters rather than at a few isolated points. The existing literature shows that exploiting the clustering structure of sparse signals helps to enhance signal recovery performance [1–6]. Building on the signal recovery method introduced in Chap. 4, here we improve the recovery performance of greedy pursuit methods by exploiting the clustering property of sparse signals.

The signal model in this chapter is inspired by the problem of multi-channel radar imaging. As described in Chap. 4, the radar images from different channels share the same sparsity pattern, since the rules of discretization of the observed scene are the same for all the channels. Besides, in L-band to X-band high-resolution radar imaging, most targets extend over a set of pixels rather than isolated pixel points [3–5]. Thus, the problem of high-resolution multi-channel radar imaging can be formulated as the problem of recovering signals with both joint sparsity and clustering properties. It is worth mentioning that both the joint sparsity pattern and the clustering property are special cases of block sparsity [1]. In the following, the joint sparsity and clustering properties of multiple signals are referred to as two-level block sparsity.

In [3], a method named clustered multi-task Bayesian compressive sensing (CMT-BCS) was proposed to exploit the two-level block sparsity in multiple signals. Based on a paired spike-and-slab prior on the sparse signals, CMT-BCS uses Gibbs sampling to learn the two-level sparse solution in an iterative framework. However, CMT-BCS requires many a priori statistical assumptions about the signals and exhibits a low convergence speed. Differing from Bayesian learning, greedy algorithms make a decision based on some locally optimal criterion at each step. As reported in [7], a greedy algorithm can (finally) produce the globally optimal solution or an approximate overall solution. Generally, greedy pursuit algorithms have lower computational complexity than methods based on Bayesian learning, as seen in [8, 9]. The existing literature has contributed to the application of greedy pursuit algorithms for signal recovery with block sparsity [1, 10–12].


As variants of OMP and SP, respectively, the block orthogonal matching pursuit (BOMP) [1] and the subspace matching pursuit (SMP) [11] have been proposed to recover sparse signals with a block structure. However, BOMP and SMP require information about the block partition. Lattice matching pursuit (LaMP) [10, 12] is a PGM-based greedy algorithm that captures the clustering structure of a sparse signal. Compared with BOMP and SMP, the advantage of LaMP is that it does not require the block partition in advance. However, the PGM in [10, 12] does not take the structure of a series of jointly sparse signals into account. Furthermore, the popular greedy algorithms for the reconstruction of jointly sparse signals (mentioned in Chap. 4), such as SOMP, HMP, and LAHMP, do not emphasize the clustering structure of each sparse signal. At present, there are few greedy methods that can simultaneously exploit the joint sparsity and clustering properties of signals.

Another limitation of the aforementioned signal recovery methods (exploiting joint sparsity or clustering properties) is that all of them assume the availability of analog observations. In practice, however, the analog values must be quantized, i.e., each measurement is mapped to a discrete value from a finite set via an ADC. Recently, signal processing systems based on 1-bit quantization have received much attention [13, 14]. The 1-bit quantization is appealing due to its simple hardware implementation, as the ADC takes the form of a comparator to zero with very low cost. Moreover, the 1-bit measurement is optimal in the presence of strong noise, i.e., measurement noise can be neglected as long as it does not alter the signs of the measurements. In addition, 1-bit quantization significantly alleviates the burden of large data transmission and storage. Existing research has been devoted to developing sparse signal recovery methods based on 1-bit measurements [15–18], among which binary iterative hard thresholding (BIHT) [17] receives more attention due to its simpler and more comprehensible framework. However, the existing methods in [15–18] do not consider exploiting the two-level block sparsity of signals.

In this chapter, we first propose a new greedy algorithm based on analog measurements, referred to as two-level block matching pursuit (TLBMP), by combining PGM and the greedy pursuit framework. TLBMP clusters the dominant values and enforces the joint sparsity pattern for all the sparse signals, which improves the performance of signal recovery. In addition, we propose a new 1-bit CS algorithm, the enhanced-binary iterative hard thresholding (E-BIHT) algorithm, to reduce the cost caused by high-precision quantization without noticeable performance loss. Experiments with measured data demonstrate the superiority of the proposed signal recovery methods based on the two-level block sparsity.


5.2 Signal Recovery Method Based on Two-Level Block Sparsity with Analog Measurements

5.2.1 PGM-Based Two-Level Block Sparsity

Here, we still adopt the signal model in (4.3), i.e., $\mathbf{y}^{(l)} = \Psi^{(l)} \mathbf{x}^{(l)} + \mathbf{w}^{(l)}$, where $l = 1, 2, \ldots, L$. Next, we define the two-level block sparsity of the signals $\{\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(L)}\}$ based on PGM.

The non-zero elements in a single sparse signal $\mathbf{x}^{(l)}$ occur in clusters. A Markov random field (MRF) captures the spatial dependencies between an element and its neighbors by defining a local conditional probability distribution. In this chapter, a second-order MRF model, the auto-logistic model [19], is used to cluster the dominant coefficients in a single sparse signal. For a compact description, we define the support set $\Lambda^{(l)}$ as the set of indices corresponding to the non-zero coefficients in $\mathbf{x}^{(l)}$; the complement of $\Lambda^{(l)}$ is $\bar{\Lambda}^{(l)}$. A support area $\mathbf{s}$ is denoted by a vector composed of 1 and −1 entries, where

$$ s_{\Lambda^{(l)}} = 1, \quad s_{\bar{\Lambda}^{(l)}} = -1. \quad (5.1) $$

This support area $\mathbf{s}$ can be described by the auto-logistic model, in which each element interacts only with its neighbors [19]:

$$ p\left(s_n \mid \mathbf{s}_{\mathcal{N}_n}\right) = \exp\left(\psi_n' s_n + \sum_{n' \in \mathcal{N}_n} \psi_{n,n'}'' s_n s_{n'}\right) \Big/ Z\left(\mathbf{s}_{\mathcal{N}_n}\right), \quad (5.2) $$

where $\mathcal{N}_n$ is the set of all neighbors of the $n$-th element, $\psi_n' \geq 0$ denotes the prior information about $s_n$, $\psi_{n,n'}'' > 0$ controls the interaction between $s_n$ and $s_{n'}$ ($n' \in \mathcal{N}_n$), and $Z(\mathbf{s}_{\mathcal{N}_n})$ is the normalizing function of the conditional PDF. In this chapter, we choose

$$ \psi_{n,n'}'' = \psi > 0, \quad \forall n, n', \quad (5.3) $$

$$ \psi_n' = 0, \quad \forall n. \quad (5.4) $$

The condition in (5.3) means that each pair of neighboring elements exerts the same influence on each other, while the condition in (5.4) indicates that we avoid making any additional prior assumptions about $s_n$. Without loss of generality, we assume $\psi = 1$ in (5.3). Due to the same sparsity pattern of $\{\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(L)}\}$, we denote the common support set as $\Lambda$, i.e.,

$$ \Lambda = \Lambda^{(l)}, \quad l = 1, 2, \ldots, L, \quad (5.5) $$


Fig. 5.1 A simple example of the two-level block sparsity. The colored cells indicate the non-zero areas, while the white cells represent the zero areas. Across the different sub-images, the locations of the non-zero areas are the same; within a single sub-image, the non-zero elements show the clustering structure

and $s_\Lambda = 1$, $s_{\bar{\Lambda}} = -1$. Figure 5.1 presents a simple example of the two-level block sparsity, where $\mathbf{x}^{(l)}$ is represented by a 2-D image signal for convenience.

Next, we briefly introduce the background of PGM. A PGM uses a graph-based representation as the foundation for compactly encoding a complex distribution. Let $Gr = (V, E)$ be a directed graph formed by a collection of vertices $V$ and edges $E$. Each vertex $v \in V$ represents a random variable $X^v$. A directed edge indicates that the conditional probability density of $X^v$ is a function of only those variables with edges directed towards $v$. We denote the set of parent vertices of $v$ as $\mathrm{Pa}(v)$, as shown in Fig. 5.2. A directed acyclic graph (DAG) is a directed graph without directed loops. It is easy to show that the joint PDF of a DAG can be factorized as [20]

$$ p\left(X^1, \ldots, X^v, \ldots\right) = \prod_{v \in V} p\left(X^v \mid X^{\mathrm{Pa}(v)}\right). \quad (5.6) $$

Based on (5.6), we propose a PGM describing the two-level block sparsity of multiple signals, i.e., the connection between $\{\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(L)}\}$, $s_n$, and $\mathbf{s}_{\mathcal{N}_n}$, as shown in Fig. 5.3. We can observe that (1) the support vertex $s_n$ is related to its neighbors $\mathbf{s}_{\mathcal{N}_n}$ according to the clustering property, and (2) the value of $s_n$ determines whether the values of $\{x_n^{(1)}, x_n^{(2)}, \ldots, x_n^{(L)}\}$ are zero according to the joint sparsity pattern.

Fig. 5.2 An example of the relationship between a vertex and its parent vertex. In this directed graph, A is the parent vertex of B, and B is the parent vertex of C

Fig. 5.3 PGM for the two-level block sparsity

By assuming independence of the data over the different observations, we have

$$ P\left(x_n^{(1)}, x_n^{(2)}, \ldots, x_n^{(L)} \mid s_n\right) = \prod_{l=1}^{L} P\left(x_n^{(l)} \mid s_n\right). \quad (5.7) $$

In (5.7), the form of $P(x_n^{(l)} \mid s_n)$ should naturally reflect the fact that $x_n^{(l)}$ is a dominant non-zero value with high probability when $s_n = 1$; on the contrary, $x_n^{(l)}$ is a zero value with high probability when $s_n = -1$ [12]. Next, we discuss the estimation of $s_n$ based on the PGM in Fig. 5.3. According to (5.6), the joint distribution of the vertices in Fig. 5.3 is

$$ P\left(\mathbf{s}_{\mathcal{N}_n}, s_n, x_n^{(1)}, x_n^{(2)}, \ldots, x_n^{(L)}\right) = P\left(\mathbf{s}_{\mathcal{N}_n}\right) P\left(s_n \mid \mathbf{s}_{\mathcal{N}_n}\right) \prod_{l=1}^{L} P\left(x_n^{(l)} \mid s_n\right) \propto P\left(s_n \mid \mathbf{s}_{\mathcal{N}_n}\right) \prod_{l=1}^{L} P\left(x_n^{(l)} \mid s_n\right) \triangleq J(s_n). \quad (5.8) $$

By maximizing the joint PDF in (5.8), we can estimate $s_n$ as

$$ \hat{s}_n = \arg\max_{s_n \in \{-1, 1\}} J(s_n), \quad \forall n. \quad (5.9) $$

Note that the optimization problem in (5.9) is equivalent to MAP optimization. From (5.9), we can see that the value of $J(s_n)$ is determined by two parts: (1) the probability $P(s_n \mid \mathbf{s}_{\mathcal{N}_n})$ measures the influence of the neighbors $\mathbf{s}_{\mathcal{N}_n}$ on $s_n$, which is described by the MRF in (5.3); maximizing $P(s_n \mid \mathbf{s}_{\mathcal{N}_n})$ is key to preserving the underlying continuous structure. (2) The product $\prod_{l=1}^{L} P(x_n^{(l)} \mid s_n)$ represents the relationship between the common support area and the reflection coefficients for the different observations. For example, when $x_n^{(l)}$ is a dominant coefficient for all the observations ($l = 1, 2, \ldots, L$), $\prod_{l=1}^{L} P(x_n^{(l)} \mid s_n)$ reaches a maximum if and only if $s_n = 1$. On the other hand, $\prod_{l=1}^{L} P(x_n^{(l)} \mid s_n)$ reaches a maximum if and only if $s_n = -1$ when $x_n^{(l)}$ is a zero value for all the observations. Hence, maximizing $\prod_{l=1}^{L} P(x_n^{(l)} \mid s_n)$ enforces the joint sparsity pattern across all the observations.

Due to the finite-alphabet constraint on $s_n$, the optimization problem (5.9) can be converted to hypothesis testing. The two hypotheses $H_0$ and $H_1$ are

$$ \begin{cases} H_0: s_n = -1, \\ H_1: s_n = 1, \end{cases} \quad \forall n. \quad (5.10) $$

Hypothesis $H_0$ is assumed to be true if

$$ J(s_n = -1) \geq J(s_n = 1), \quad (5.11) $$

otherwise, $H_1$ is assumed to be true. We define the two functions

$$ F_{0,n} = \log(J(s_n = -1)), \quad (5.12) $$

$$ F_{1,n} = \log(J(s_n = 1)). \quad (5.13) $$

Consequently, the solution of (5.9) is given by

$$ \begin{cases} \hat{s}_n = 1, & \text{when } \Delta_n > 0, \\ \hat{s}_n = -1, & \text{when } \Delta_n \leq 0, \end{cases} \quad (5.14) $$

where

$$ \Delta_n = F_{1,n} - F_{0,n} = 2 \sum_{n' \in \mathcal{N}_n} s_{n'} + \sum_{l=1}^{L} \log\left[ P\left(x_n^{(l)} \mid s_n = 1\right) \Big/ P\left(x_n^{(l)} \mid s_n = -1\right) \right]. \quad (5.15) $$

Below, we discuss the geometrical approximations of $P(x_n^{(l)} \mid s_n = 1)$ and $P(x_n^{(l)} \mid s_n = -1)$ in (5.15). In general, $s_n = 1$ means that $x_n^{(l)}$ is a dominant coefficient, while $s_n = -1$ implies a zero value. Because of this property, we use the geometrical approximations [10, 12] in Fig. 5.4a, b to describe the PDFs $P(x_n^{(l)} \mid s_n = 1)$ and $P(x_n^{(l)} \mid s_n = -1)$, respectively. In Fig. 5.4a, b, the threshold $\tau$ is a slack parameter that separates large and small signal coefficients and can be selected by an adaptive method, which will be introduced in Sect. 5.2.2. As seen in Fig. 5.4b, when $s_n = -1$, $x_n^{(l)}$ tends to assume values close to zero. The situation is reversed when $s_n = 1$, where $x_n^{(l)}$ is described by its dominant value, as shown in Fig. 5.4a.


Fig. 5.4 Geometrical approximations of the PDFs in (5.15). The horizontal axis represents the absolute value of $x_n^{(l)}$, and the vertical axis denotes a $P(x_n^{(l)} \mid s_n = 1)$, b $P(x_n^{(l)} \mid s_n = -1)$, c $P(x_n^{(l)} \mid s_n = 1) / P(x_n^{(l)} \mid s_n = -1)$, respectively

We can directly draw the geometrical approximation of $P(x_n^{(l)} \mid s_n = 1) / P(x_n^{(l)} \mid s_n = -1)$, as shown in Fig. 5.4c. In this way,

$$ p\left(x_n^{(l)} \mid s_n = 1\right) \Big/ p\left(x_n^{(l)} \mid s_n = -1\right) = \begin{cases} \varepsilon_3 / \varepsilon_2, & |x_n^{(l)}| \geq \tau, \\ \varepsilon_4 / \varepsilon_1, & |x_n^{(l)}| < \tau. \end{cases} \quad (5.16) $$

Based on (5.16), (5.15) can be rewritten as

$$ \Delta_n = 2 \sum_{n' \in \mathcal{N}_n} s_{n'} + \sum_{l=1}^{L} \log\left[ \frac{\varepsilon_3}{\varepsilon_2} \cdot \frac{1 + \mathrm{sign}\left(|x_n^{(l)}| - \tau\right)}{2} + \frac{\varepsilon_4}{\varepsilon_1} \left(1 - \frac{1 + \mathrm{sign}\left(|x_n^{(l)}| - \tau\right)}{2}\right) \right]. \quad (5.17) $$

5.2.2 Two-Level Block Matching Pursuit

In this subchapter, we propose a new PGM-based greedy algorithm, named TLBMP, which combines the clustering property and the joint sparsity pattern of multiple signals. We then discuss the selection of the model parameters and address the computational complexity of TLBMP. For ease of notation, we define the following function:

$$ \mathrm{add}(\mathbf{a}, \Lambda) \triangleq \{\forall i \in \Lambda,\ a_i = a_i + 1\}. \quad (5.18) $$


5.2.2.1 Algorithm Description

The steps of Algorithm 5.1, highlighted in the box below, constitute a majority-voting (MV) scheme. The latter can be viewed as a democratic voting strategy for estimating the common support set, i.e., the joint sparsity pattern. In Algorithm 5.1, the input is the set of estimated support sets from the different observations. Using the notation defined in (5.18), the common support set is obtained based on the premise that a support index present across multiple signals should belong to the common support set. In essence, the MV algorithm relies on the fact that the elements of the common support set have the highest scores in terms of their occurrences.

Algorithm 5.1 MV
Input: $\{\Lambda^{(l)}, l = 1, 2, \ldots, L\}$, $K$
Initialization:
1. $\mathbf{a} = [a_1, a_2, \ldots, a_N]^T \leftarrow \mathbf{0}_{N \times 1}$
Calculation:
2. for $l = 1, 2, \ldots, L$: $\mathbf{a} = \mathrm{add}(\mathbf{a}, \Lambda^{(l)})$; end for
3. $\Lambda_{\text{vote}} = \mathrm{max\_ind}(\mathbf{a}, K)$
Output: $\Lambda_{\text{vote}}$.
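A compact NumPy version of Algorithm 5.1 might read as follows; the implementation details (array-based scoring and an argsort-based max_ind) are our own.

```python
import numpy as np

def majority_vote(support_sets, N, K):
    """Algorithm 5.1 (MV): fuse L local support sets into a common one."""
    a = np.zeros(N)                        # occurrence scores a_1..a_N
    for Lam in support_sets:               # step 2: a = add(a, Lambda^(l))
        a[list(Lam)] += 1
    # step 3: max_ind(a, K) -- indices of the K largest scores
    return set(np.argsort(-a)[:K])
```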

Algorithm 5.2 TLBMP
Input: $\{\mathbf{y}^{(l)}, \Psi^{(l)}, l = 1, 2, \ldots, L\}$, $K$, $\{\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4\}$
Initialization:
1. $\mathbf{r}^{(l,\text{old})} = \mathbf{y}^{(l)}$, $\hat{\mathbf{x}}^{(l,\text{old})} = \mathbf{0}_{N \times 1}$, $l = 1, 2, \ldots, L$, $\mathbf{s} = -\mathbf{1}_{N \times 1}$
Iterations:
2. $\tilde{\mathbf{x}}^{(l)} = (\Psi^{(l)})^H \mathbf{r}^{(l,\text{old})} + \hat{\mathbf{x}}^{(l,\text{old})}$, $l = 1, 2, \ldots, L$;
3. $\Lambda^{(l)} = \mathrm{max\_ind}(\tilde{\mathbf{x}}^{(l)}, K)$;
4. $\Lambda_{\text{vote}} = \mathrm{MV}(\{\Lambda^{(l)} \mid l = 1, 2, \ldots, L\}, K)$, $s_{\Lambda_{\text{vote}}} = 1$, and $s_{\bar{\Lambda}_{\text{vote}}} = -1$;
5. For each $n \in \{1, 2, \ldots, N\}$, compute $\Delta_n$ based on (5.17); if $\Delta_n > 0$, then $\hat{s}_n = 1$, else $\hat{s}_n = -1$;
6. $\Lambda_{\text{esti}} = $ {the set of indices corresponding to the locations of '1' in $\hat{\mathbf{s}}$};
7. $\hat{\mathbf{x}}^{(l,\text{new})}_{\Lambda_{\text{esti}}} = (\Psi^{(l)}_{\Lambda_{\text{esti}}})^{\dagger} \mathbf{y}^{(l)}$, $\hat{\mathbf{x}}^{(l,\text{new})}_{\bar{\Lambda}_{\text{esti}}} = \mathbf{0}$; then the smallest $N - K$ coefficients of $\hat{\mathbf{x}}^{(l,\text{new})}$ are set to zero, for $l = 1, 2, \ldots, L$;
8. $\mathbf{r}^{(l,\text{new})} = \mathbf{y}^{(l)} - \Psi^{(l)} \hat{\mathbf{x}}^{(l,\text{new})}$, for $l = 1, 2, \ldots, L$;
9. If $\sum_{l=1}^{L} \|\mathbf{r}^{(l,\text{old})}\|_2^2 > \sum_{l=1}^{L} \|\mathbf{r}^{(l,\text{new})}\|_2^2$, then $\mathbf{r}^{(l,\text{old})} = \mathbf{r}^{(l,\text{new})}$, $\hat{\mathbf{x}}^{(l,\text{old})} = \hat{\mathbf{x}}^{(l,\text{new})}$, and return to step 2; else stop the iteration
Output: $\hat{\mathbf{x}}^{(l)} \in \mathbb{C}^{N \times 1}$, for $l = 1, 2, \ldots, L$.

The TLBMP algorithm is given in the box as Algorithm 5.2. After the initialization phase, each iteration of TLBMP starts by calculating the temporary coefficient estimates, denoted by $\tilde{\mathbf{x}}^{(l)}$. These estimates are the sum of the previous estimates $\hat{\mathbf{x}}^{(l,\text{old})}$ and $(\Psi^{(l)})^H \mathbf{r}^{(l,\text{old})}$ [12]. In step 3, the local support sets $\{\Lambda^{(l)} \mid l = 1, 2, \ldots, L\}$ are obtained by seeking the $K$ indices corresponding to the largest values of $\tilde{\mathbf{x}}^{(l)}$ for $l = 1, 2, \ldots, L$. In step 4, the temporary common support set $\Lambda_{\text{vote}}$ is estimated by MV of $\{\Lambda^{(l)} \mid l = 1, 2, \ldots, L\}$, followed by setting $s_{\Lambda_{\text{vote}}} = 1$ and $s_{\bar{\Lambda}_{\text{vote}}} = -1$. In step 5, for each $n \in \{1, 2, \ldots, N\}$, the joint PDF in (5.9) is maximized using $\mathbf{s}_{\mathcal{N}_n}$ and the temporary signal estimates $\{\tilde{\mathbf{x}}^{(1)}, \tilde{\mathbf{x}}^{(2)}, \ldots, \tilde{\mathbf{x}}^{(L)}\}$. From the analysis in Sect. 5.2.1, maximizing the joint PDF in (5.9) not only captures the clustering structure but also enhances the joint sparsity pattern. The solution of (5.9), $\hat{\mathbf{s}}$, is given in (5.14). In step 6, the updated common support set $\Lambda_{\text{esti}}$ is obtained by selecting the set of indices corresponding to the locations of '1' in $\hat{\mathbf{s}}$. In step 7, TLBMP generates an updated signal estimate by least-squares estimation with the selected columns of the sensing matrix; to ensure sparsity, the smallest $N - K$ coefficients of $\hat{\mathbf{x}}^{(l)}$ are set to zero. In step 8, the residual vectors are updated. The iterations are terminated when the global recovery error no longer decreases. A simplified sketch of one TLBMP iteration is given below.
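The following sketch outlines one TLBMP iteration for real-valued data with a single sensing matrix shared by all $L$ channels (a simplification of the per-channel $\Psi^{(l)}$ above). The $\Delta_n$ update follows (5.20) with the adaptive threshold $\tau$; the neighbor lists and all helper names are our own.

```python
import numpy as np

def tlbmp_iteration(Y, Psi, X_old, R_old, K, tau, neigh, omega=1e-6):
    """One TLBMP iteration (Algorithm 5.2), simplified to real data.

    Y, R_old : M x L measurements and residuals
    X_old    : N x L previous signal estimates
    neigh    : list of arrays, neigh[n] = indices of the grid neighbors N_n
    """
    N, L = X_old.shape
    X_tmp = Psi.T @ R_old + X_old                          # step 2
    # steps 3-4: local supports and majority voting
    votes = np.zeros(N)
    for l in range(L):
        votes[np.argsort(-np.abs(X_tmp[:, l]))[:K]] += 1
    s = -np.ones(N)
    s[np.argsort(-votes)[:K]] = 1.0
    # step 5: Delta_n from (5.20); |x| >= tau gives log(1/omega), else log(omega)
    s_hat = np.empty(N)
    for n in range(N):
        big = np.abs(X_tmp[n, :]) >= tau
        log_ratio = np.where(big, -np.log(omega), np.log(omega)).sum()
        s_hat[n] = 1.0 if 2.0 * s[neigh[n]].sum() + log_ratio > 0 else -1.0
    supp = np.flatnonzero(s_hat == 1.0)                    # step 6
    # steps 7-8: least squares on the common support, prune to K, update residuals
    X_new = np.zeros_like(X_old)
    for l in range(L):
        coef, *_ = np.linalg.lstsq(Psi[:, supp], Y[:, l], rcond=None)
        X_new[supp, l] = coef
        small = np.argsort(-np.abs(X_new[:, l]))[K:]       # keep only K terms
        X_new[small, l] = 0.0
    R_new = Y - Psi @ X_new
    return X_new, R_new
```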

5.2.2.2 Model Parameter Selection

In Fig. 5.4, the threshold $\tau$ is a slack parameter that separates large and small signal coefficients and is adaptively chosen at step 2 of TLBMP. By sorting the magnitudes of the coefficients of $\tilde{\mathbf{x}}^{(l)}$ in descending order, the $2K$-th magnitude is selected as the slack parameter $\tau$. This gives preference to the largest $2K$ coefficients of $\tilde{\mathbf{x}}^{(l)}$, i.e., only the largest $2K$ coefficients can achieve the state $s_n = 1$. The pruning operation in step 7 of TLBMP guarantees the desired sparsity $K$. According to (5.17), only the values of $\varepsilon_4/\varepsilon_1$ and $\varepsilon_3/\varepsilon_2$ are required, rather than the individual values of $\varepsilon_1, \ldots, \varepsilon_4$. Considering the natural constraint $\varepsilon_4/\varepsilon_1 < \varepsilon_3/\varepsilon_2$, we set, for simplicity,

$$ \frac{\varepsilon_4}{\varepsilon_1} = \omega, \quad \frac{\varepsilon_3}{\varepsilon_2} = \frac{1}{\omega}, \quad 0 < \omega < 1. \quad (5.19) $$

Then, we can rewrite (5.17) as


$$ \Delta_n = 2 \sum_{n' \in \mathcal{N}_n} s_{n'} + \sum_{l=1}^{L} \log\left[ \frac{1 + \mathrm{sign}\left(|x_n^{(l)}| - \tau\right)}{2\omega} + \omega\left(1 - \frac{1 + \mathrm{sign}\left(|x_n^{(l)}| - \tau\right)}{2}\right) \right]. \quad (5.20) $$

According to (5.19) and the fact that the integral of each PDF in Fig. 5.4a, b equals 1, it can be deduced that

$$ \frac{\varepsilon_1}{\varepsilon_2} = \frac{\varpi - \tau}{\omega \tau}, \quad \frac{\varepsilon_3}{\varepsilon_4} = \frac{\tau}{\omega(\varpi - \tau)}, \quad (5.21) $$

where $\varpi$ denotes the maximum coefficient amplitude in Fig. 5.4. Since $\varepsilon_1 > \varepsilon_2$ and $\varepsilon_3 > \varepsilon_4$, as seen in Fig. 5.4, we can conclude that

$$ \omega < \min\left\{ \frac{\tau}{\varpi - \tau},\ \frac{\varpi - \tau}{\tau} \right\}. \quad (5.22) $$

We assume $\omega = 10^{-6}$, which comfortably satisfies the constraint in (5.22). The number of non-zero elements in the signals, $K$, can be selected according to the method introduced in Sect. 4.3.1.
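In code, the adaptive choice of $\tau$ and the admissible range of $\omega$ from (5.22) can be expressed as follows. This is a minimal sketch; the argument w_max, standing in for the maximum amplitude $\varpi$, is our own naming.

```python
import numpy as np

def select_tau(x_tmp, K):
    """Slack threshold: the 2K-th largest magnitude of the temporary
    coefficient estimate (step 2 of TLBMP)."""
    return np.sort(np.abs(x_tmp))[::-1][2 * K - 1]

def omega_upper_bound(tau, w_max):
    """Admissible upper bound on omega from (5.22); w_max stands for varpi."""
    return min(tau / (w_max - tau), (w_max - tau) / tau)
```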

5.3 Signal Recovery Method Based on Two-Level Block Sparsity with 1-Bit Measurements

In Sect. 5.2 above, we introduced a signal recovery method that exploits the two-level block sparsity based on analog (or finely quantized) measurements. In a few practical applications, such as small handheld radar equipment or outdoor sensor networks for monitoring the atmosphere and rivers, there are stringent constraints on the data sampling rate, transmission rate, and storage space [15]. 1-bit quantization is an extreme case of quantization, where only the signs of the real-valued measurements are retained and used for follow-on processing. It requires only simple hardware implementations, with fast data sampling and low-cost data transmission and storage. In this subchapter, we propose the E-BIHT algorithm to enhance signal recovery performance with the two-level block sparsity based on 1-bit measurements. Taking radar imaging as an example, measured radar data demonstrate that, in comparison with existing competitors, the E-BIHT algorithm offers better imaging results, i.e., better signal recovery performance, based on 1-bit measurements.


5.3.1 Background of Sparse Signal Recovery Based on 1-Bit Measurements

Similar to (4.3), we consider the complex signal model

$$ \mathbf{y} = \Psi \mathbf{x} + \mathbf{w}, \quad (5.23) $$

where $\mathbf{y} \in \mathbb{C}^{M \times 1}$ is the measurement vector, $\mathbf{x} \in \mathbb{C}^{N \times 1}$ is the complex sparse signal vector, $\Psi \in \mathbb{C}^{M \times N}$ denotes the sensing matrix, and $\mathbf{w} \in \mathbb{C}^{M \times 1}$ represents the noise vector. The number of non-zero coefficients in $\mathbf{x}$ is not larger than $K$. The difference between (5.23) and (4.3) is that there is only one observation in (5.23), i.e., the case of $L = 1$ in (4.3). In this subchapter, the two-level block sparsity of signals manifests as the clustering structure within $\mathbf{x}$ and the joint sparsity pattern of the real and imaginary parts of $\mathbf{x}$. The method proposed in this subchapter can be directly extended to the case of $L > 1$, which is not presented in this book to avoid a tedious description.

The complex data are quantized in terms of the I/Q channels. The vector of 1-bit quantized measurements of $\mathbf{y}$ can be written as

$$ \bar{\mathbf{y}} = \mathrm{sign}\left(\begin{bmatrix} \mathrm{Re}(\mathbf{y}) \\ \mathrm{Im}(\mathbf{y}) \end{bmatrix}\right), \quad (5.24) $$

where $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ represent the real-part and imaginary-part operators, respectively. Then, from (5.23) and (5.24), we have

$$ \bar{\mathbf{y}} = \mathrm{sign}\left(\bar{\Psi}\bar{\mathbf{x}} + \bar{\mathbf{w}}\right), \quad (5.25) $$

where

$$ \bar{\Psi} = \begin{bmatrix} \mathrm{Re}(\Psi) & -\mathrm{Im}(\Psi) \\ \mathrm{Im}(\Psi) & \mathrm{Re}(\Psi) \end{bmatrix}, \quad \bar{\mathbf{x}} = \begin{bmatrix} \mathrm{Re}(\mathbf{x}) \\ \mathrm{Im}(\mathbf{x}) \end{bmatrix}, \quad \bar{\mathbf{w}} = \begin{bmatrix} \mathrm{Re}(\mathbf{w}) \\ \mathrm{Im}(\mathbf{w}) \end{bmatrix}. \quad (5.26) $$
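The stacked real-valued model (5.25)–(5.26) is straightforward to form in code, as in the following sketch (the function name is ours). Note that np.sign maps exact zeros to 0; in practice the noisy measurements are almost surely non-zero.

```python
import numpy as np

def one_bit_measurements(Psi, x, w):
    """Stacked real model (5.26) and 1-bit data (5.25)."""
    Psi_bar = np.block([[Psi.real, -Psi.imag],
                        [Psi.imag,  Psi.real]])       # (5.26)
    x_bar = np.concatenate([x.real, x.imag])
    w_bar = np.concatenate([w.real, w.imag])
    y_bar = np.sign(Psi_bar @ x_bar + w_bar)          # (5.25)
    return y_bar, Psi_bar, x_bar
```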

The quantization consistency function measures the consistency between the original 1-bit measurement vector $\bar{\mathbf{y}}$ and the re-generated measurement vector $\bar{\Psi}\bar{\mathbf{x}}$ [17]:

$$ J(\bar{\mathbf{x}}) = \left\| \left[ \bar{\mathbf{y}} \odot \left( \bar{\Psi}\bar{\mathbf{x}} \right) \right]_{-} \right\|_1, \quad (5.27) $$

where $\|\cdot\|_1$ is the $\ell_1$ norm, $\odot$ denotes the Hadamard product, and $[\cdot]_{-}$ is the elementwise negative function

$$ [x]_{-} = \frac{x - |x|}{2}, \quad (5.28) $$


i.e., $[x]_{-} = x$ if $x < 0$ and $0$ otherwise. Intuitively, when $\bar{\mathbf{y}}$ and $\bar{\Psi}\bar{\mathbf{x}}$ have the same signs, $J(\bar{\mathbf{x}})$ achieves its minimum value, i.e., $\min_{\bar{\mathbf{x}}} J(\bar{\mathbf{x}}) = 0$. We can now proceed to estimate $\bar{\mathbf{x}}$ by

$$ \hat{\bar{\mathbf{x}}} = \arg\min_{\bar{\mathbf{x}}} J(\bar{\mathbf{x}}) \quad \text{s.t.} \quad \|\bar{\mathbf{x}}\|_0 \leq 2K, \quad (5.29) $$

where $\|\cdot\|_0$ denotes the $\ell_0$ norm. Note that in (5.29), the real and imaginary parts of $\mathbf{x}$ each have at most $K$ dominant coefficients, so the sparsity level of $\bar{\mathbf{x}}$ is $2K$. The problem in (5.29) can be solved by the BIHT algorithm [17], which is reviewed in Algorithm 5.3. After the initialization phase, BIHT first computes a subgradient descent step to decrease $J(\bar{\mathbf{x}})$, i.e.,

$$ \mathbf{a} = \bar{\mathbf{x}}^{t-1} - \mu \nabla J\left(\bar{\mathbf{x}}^{t-1}\right), \quad (5.30) $$

where $\mathbf{a}$ is the solution of the subgradient descent step and $\mu$ is the descent step size. The subgradient of $J(\bar{\mathbf{x}})$ is

$$ \nabla J(\bar{\mathbf{x}}) = \frac{1}{2} \bar{\Psi}^T \left[ \mathrm{sign}\left(\bar{\Psi}\bar{\mathbf{x}}\right) - \bar{\mathbf{y}} \right]. \quad (5.31) $$

Then, a nonlinear sparse approximation step is used to guarantee the sparsity level, where $H_K(\cdot)$ is the hard-thresholding operator that sets all the coefficients of a vector to zero except the $K$ with the strongest amplitudes.

Algorithm 5.3 BIHT [17]
Input: $\bar{\mathbf{y}}$, $\bar{\Psi}$, $K$, the descent step size $\mu$, the maximum iteration count $t_{\max}$, the predefined accuracy $\epsilon$
Initialization:
1. $\bar{\mathbf{x}}^0 = \mathbf{0}_{2N \times 1}$, $t = 0$;
Iterations:
2. $t = t + 1$;
3. $\mathbf{a} = \bar{\mathbf{x}}^{t-1} - \mu \nabla J(\bar{\mathbf{x}}^{t-1})$; {Subgradient descent step}
4. $\bar{\mathbf{x}}^t = H_{2K}(\mathbf{a})$; {Nonlinear sparse approximation}
5. If $t \geq t_{\max}$ or $\|\bar{\mathbf{x}}^t - \bar{\mathbf{x}}^{t-1}\|_2 / \|\bar{\mathbf{x}}^t\|_2 < \epsilon$, then stop the iteration; else return to step 2
Output: $\bar{\mathbf{x}}^t$.

Although BIHT is capable of recovering the sparse vector $\bar{\mathbf{x}}$, the inherent structure in $\bar{\mathbf{x}}$ is not fully considered. This motivates the extension of the original BIHT algorithm to account for such a structure (two-level block sparsity), as described in the following; a minimal sketch of the baseline BIHT iteration is also given below. Note that the proposed extension can also be applied to other sparse signal recovery methods based on 1-bit measurements; in this chapter, we specifically consider BIHT due to its computational simplicity [17].
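For reference, a minimal NumPy sketch of Algorithm 5.3 following (5.30)–(5.31) is given below; the stopping rule mirrors step 5, and the parameter defaults are illustrative only.

```python
import numpy as np

def biht(y_bar, Psi_bar, K, mu=0.1, t_max=100, eps=1e-2):
    """Algorithm 5.3 (BIHT) on the stacked real model; sparsity level 2K."""
    x = np.zeros(Psi_bar.shape[1])
    for _ in range(t_max):
        grad = 0.5 * Psi_bar.T @ (np.sign(Psi_bar @ x) - y_bar)  # (5.31)
        a = x - mu * grad                                        # (5.30)
        x_new = np.zeros_like(a)
        keep = np.argsort(-np.abs(a))[:2 * K]                    # H_{2K}(a)
        x_new[keep] = a[keep]
        if np.linalg.norm(x_new - x) < eps * max(np.linalg.norm(x_new), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```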

5.3.2 Enhanced-Binary Iterative Hard Thresholding

In this subchapter, we extend the original BIHT algorithm into the E-BIHT algorithm by exploiting the joint sparsity pattern and the clustering property of the sparse signals. We also analyze the difference between the models of two-level block sparsity in this subchapter and in Sect. 5.2.1.

The real and imaginary parts of a complex target reflectivity are the projections of the same complex value onto two orthogonal axes. To a large extent, the real and imaginary components assume zero or non-zero values synchronously, i.e., they share a joint sparsity pattern [21]. A simple modification of step 4 of the original BIHT algorithm that exploits the joint sparsity pattern proceeds as follows:

$$ \bar{\mathbf{x}}^t = \arg\min_{\mathbf{b}} \|\mathbf{a} - \mathbf{b}\|_2^2 \quad \text{s.t.} \quad \sum_{n=1}^{N} \left\| \sqrt{(b_n)^2 + (b_{n+N})^2} \right\|_0 \leq K, \quad (5.32) $$

where $\mathbf{a}$ is calculated by (5.30) and $\mathbf{b} \in \mathbb{R}^{2N \times 1}$ is the unknown vector to be optimized. Compared with step 4 of the original BIHT algorithm, the constraint $\sum_{n=1}^{N} \| \sqrt{(b_n)^2 + (b_{n+N})^2} \|_0 \leq K$ in (5.32) enhances the joint sparsity pattern between the real and imaginary components of the sparse signal.

Non-zero values in sparse signals are represented not by isolated points but by a clustering structure. Here, we still adopt the MRF in (5.2) to describe the clustering structure of the signals. Let $\Lambda$ denote the set of indices corresponding to the non-zero coefficients in $\bar{\mathbf{x}}$, and let $s_\Lambda = 1$, $s_{\bar{\Lambda}} = -1$. The interaction of elements in the clustering structure can be given by

$$ P\left(s_n \mid \mathbf{s}_{\mathcal{N}_n}\right) \propto \exp\left(\sum_{n' \in \mathcal{N}_n} s_n s_{n'}\right). \quad (5.33) $$

A pseudo-likelihood function is defined by [19]

$$ U(\mathbf{s}) = -\log \prod_{n=1}^{N} P\left(s_n \mid \mathbf{s}_{\mathcal{N}_n}\right) \propto -\sum_{n=1}^{N} \left( s_n \sum_{n' \in \mathcal{N}_n} s_{n'} \right), \quad (5.34) $$


which is expected to be minimized in order to enforce the clustering structure. A pertinent question is how to select a function that describes the relationship between $\bar{\mathbf{x}}$ and its corresponding support area $\mathbf{s}$. This function should satisfy

$$ s_n = \begin{cases} 1, & |x_n| \neq 0, \\ -1, & |x_n| = 0. \end{cases} \quad (5.35) $$

In Sect. 5.2.1, step functions, which are non-differentiable, were selected to approximately describe the relationship between $\mathbf{x}$ and $\mathbf{s}$. Inspired by the smooth Gaussian indicator function provided in [22], here we choose

$$ s_n = g'(x_n) = 1 - 2\exp\left(-\frac{x_n x_n^*}{2\sigma^2}\right) \quad (5.36) $$

to approximately describe the relationship between $\mathbf{x}$ and $\mathbf{s}$, where $(\cdot)^*$ denotes the conjugate operation and $\sigma > 0$ determines the quality of the approximation in (5.36). Note that

$$ \lim_{\sigma \to 0} g'(x_n) = \begin{cases} 1, & |x_n| \neq 0, \\ -1, & |x_n| = 0, \end{cases} \quad (5.37) $$

or, approximately [23],

$$ g'(x_n) = \begin{cases} 1, & |x_n| \gg \sigma, \\ -1, & |x_n| \ll \sigma, \end{cases} \quad (5.38) $$

where the parameter $\sigma$ can be selected by an adaptive method, as presented in "Model Parameter Selection" below. Then, the pseudo-likelihood function in (5.34) can be rewritten as

$$ U(\mathbf{x}) \propto -\sum_{n=1}^{N} \left\{ g'(x_n) \left[ \sum_{n' \in \mathcal{N}_n} g'(x_{n'}) \right] \right\}. \quad (5.39) $$

An equivalent form of (5.39) on the stacked real vector $\bar{\mathbf{x}}$ is given as follows:

$$ \tilde{U}(\bar{\mathbf{x}}) \propto -\sum_{n=1}^{N} \left\{ g'\left(\sqrt{(\bar{x}_n)^2 + (\bar{x}_{n+N})^2}\right) \left[ \sum_{n' \in \mathcal{N}_n} g'\left(\sqrt{(\bar{x}_{n'})^2 + (\bar{x}_{n'+N})^2}\right) \right] \right\}. \quad (5.40) $$
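The smooth indicator (5.36) and the clustering penalty (5.40) translate directly into code, as in the sketch below (our own naming), where neigh[n] lists the grid neighbors of pixel n.

```python
import numpy as np

def g_smooth(x, sigma):
    """Smooth Gaussian indicator (5.36): close to -1 for |x| << sigma
    and close to +1 for |x| >> sigma."""
    return 1.0 - 2.0 * np.exp(-np.abs(x) ** 2 / (2.0 * sigma ** 2))

def U_tilde(b, sigma, neigh):
    """Clustering penalty (5.40) on the stacked real vector b (length 2N)."""
    N = b.size // 2
    amp = np.sqrt(b[:N] ** 2 + b[N:] ** 2)   # joint magnitude per pixel
    g = g_smooth(amp, sigma)
    return -sum(g[n] * g[neigh[n]].sum() for n in range(N))
```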


Algorithm 5.4 E-BIHT
Input: $\bar{\mathbf{y}}$, $\bar{\Psi}$, $K$, the descent step size $\mu$, the maximum iteration count $t_{\max}$, the predefined accuracy $\epsilon$
Initialization:
1. $\bar{\mathbf{x}}^0 = \mathbf{0}_{2N \times 1}$, $t = 0$;
Iterations:
2. $t = t + 1$;
3. $\mathbf{a} = \bar{\mathbf{x}}^{t-1} - \mu \nabla J(\bar{\mathbf{x}}^{t-1})$; {Subgradient descent step}
4. $\bar{\mathbf{x}}^t = \arg\min_{\mathbf{b}} \|\mathbf{a} - \mathbf{b}\|_2^2 + \lambda \tilde{U}(\mathbf{b})$ s.t. $\sum_{n=1}^{N} \| \sqrt{(b_n)^2 + (b_{n+N})^2} \|_0 \leq K$; {Two-level block sparsity approximation}
5. If $t \geq t_{\max}$ or $\|\bar{\mathbf{x}}^t - \bar{\mathbf{x}}^{t-1}\|_2 / \|\bar{\mathbf{x}}^t\|_2 < \epsilon$, then stop the iteration; else return to step 2
Output: $\bar{\mathbf{x}}^t$.

In what follows, we propose E-BIHT, summarized in Algorithm 5.4, which attempts to enforce the two-level block sparsity, i.e., to cluster the dominant values and to impose the joint sparsity pattern of the real and imaginary parts of the sparse signal. After the initialization phase, each iteration of E-BIHT starts by calculating the temporary signal estimate $\mathbf{a}$, exactly as in step 3 of BIHT. Then, in step 4 of E-BIHT, we extend (5.32) by adding a regularization term that further imposes the clustering property:

$$ \bar{\mathbf{x}}^t = \arg\min_{\mathbf{b}} \|\mathbf{a} - \mathbf{b}\|_2^2 + \lambda \tilde{U}(\mathbf{b}) \quad \text{s.t.} \quad \sum_{n=1}^{N} \left\| \sqrt{(b_n)^2 + (b_{n+N})^2} \right\|_0 \leq K, \quad (5.41) $$

where $\lambda \geq 0$ is the so-called regularization parameter. The iterations of E-BIHT are terminated when the number of iterations exceeds the predetermined maximum iteration count or the normalized mean squared error falls below the predefined accuracy.

Next, we proceed to solve (5.41). The objective function $\|\mathbf{a} - \mathbf{b}\|_2^2 + \lambda \tilde{U}(\mathbf{b})$ is differentiable; however, the sparsity constraint $\sum_{n=1}^{N} \| \sqrt{(b_n)^2 + (b_{n+N})^2} \|_0 \leq K$ is non-differentiable. These observations suggest an iterative greedy block coordinate descent (GBCD) algorithm [24, 25] for solving (5.41). The GBCD algorithm is similar to the standard block coordinate descent method but adopts a greedy selection rule that gives preference to joint sparsity. Let $F(\mathbf{b})$ denote the objective function in (5.41), i.e.,

$$ F(\mathbf{b}) = \|\mathbf{a} - \mathbf{b}\|_2^2 + \lambda \tilde{U}(\mathbf{b}). \quad (5.42) $$


Algorithm 5.5 GBCD
Input: $\mathbf{a}$, $K$, the maximum iteration count $\tilde{t}_{\max}$, the predefined error accuracy $\tilde{\epsilon}$
Initialization:
1. $\mathbf{b}^0 = \mathbf{a}$, $\tilde{t} = 0$;
Iterations:
2. $\mathbf{g} = \mathbf{b}^{\tilde{t}}$;
3. $\tilde{t} = \tilde{t} + 1$;
4. for $j = 1, 2, 3, \ldots, 2N$: $b_j^{\tilde{t}} = \arg\min_{b_j} F(g_1, \ldots, g_{j-1}, b_j, g_{j+1}, \ldots, g_{2N})$ {only the $j$-th variable is optimized; the other variables are fixed}; $g_j = b_j^{\tilde{t}}$; end for
5. $\mathbf{b}^{\tilde{t}} = \tilde{\Gamma}(\mathbf{b}^{\tilde{t}}, K)$;
6. If $\tilde{t} \geq \tilde{t}_{\max}$ or $\|\mathbf{b}^{\tilde{t}} - \mathbf{b}^{\tilde{t}-1}\|_2 / \|\mathbf{b}^{\tilde{t}}\|_2 < \tilde{\epsilon}$, then stop the iteration; else return to step 2
Output: $\bar{\mathbf{x}}^t = \mathbf{b}^{\tilde{t}}$

The GBCD algorithm for solving (5.41) is given in Algorithm 5.5. In steps 2–4 of GBCD, $F(\mathbf{b})$ is minimized by block coordinate descent. In step 4, the variable $b_j^{\tilde{t}}$ can be optimized by the gradient descent method, which requires only a few iterations to reach an accurate solution. Gradient descent computes the following recursion until the predefined stopping tolerance or maximum iteration count is reached:

$$ b_j^{\tilde{t}} = b_j^{\tilde{t}} - \tilde{\mu} \frac{\partial F}{\partial b_j^{\tilde{t}}}, \quad (5.43) $$

where $\tilde{\mu}$ is a scalar that controls the gradient descent step size and $j \in \{1, 2, 3, \ldots, 2N\}$. When $j \leq N$, we have

$$ \frac{\partial F}{\partial b_j^{\tilde{t}}} = -2\left(a_j - b_j^{\tilde{t}}\right) - \frac{4\lambda b_j^{\tilde{t}}}{\sigma^2} \exp\left(-\frac{(b_j^{\tilde{t}})^2 + (b_{j+N}^{\tilde{t}})^2}{2\sigma^2}\right) \left[ \sum_{j' \in \mathcal{N}_j} g'\left(\sqrt{(b_{j'}^{\tilde{t}})^2 + (b_{j'+N}^{\tilde{t}})^2}\right) \right], \quad (5.44) $$

otherwise,

$$ \frac{\partial F}{\partial b_j^{\tilde{t}}} = -2\left(a_j - b_j^{\tilde{t}}\right) - \frac{4\lambda b_j^{\tilde{t}}}{\sigma^2} \exp\left(-\frac{(b_j^{\tilde{t}})^2 + (b_{j-N}^{\tilde{t}})^2}{2\sigma^2}\right) \left[ \sum_{j' \in \mathcal{N}_j} g'\left(\sqrt{(b_{j'}^{\tilde{t}})^2 + (b_{j'-N}^{\tilde{t}})^2}\right) \right]. \quad (5.45) $$

In step 5 of Algorithm 5.5, the greedy selection rule is used to guarantee the joint sparsity pattern, where $\tilde{\Gamma}(\mathbf{b}^{\tilde{t}}, K)$ is an elementwise operation: when $j \leq N$,

$$ \tilde{\Gamma}\left(b_j^{\tilde{t}}, K\right) = \begin{cases} b_j^{\tilde{t}}, & \text{if } \sqrt{(b_j^{\tilde{t}})^2 + (b_{j+N}^{\tilde{t}})^2} \geq \tilde{\rho}, \\ 0, & \text{if } \sqrt{(b_j^{\tilde{t}})^2 + (b_{j+N}^{\tilde{t}})^2} < \tilde{\rho}, \end{cases} \quad (5.46) $$

and when $j > N$,

$$ \tilde{\Gamma}\left(b_j^{\tilde{t}}, K\right) = \begin{cases} b_j^{\tilde{t}}, & \text{if } \sqrt{(b_j^{\tilde{t}})^2 + (b_{j-N}^{\tilde{t}})^2} \geq \tilde{\rho}, \\ 0, & \text{if } \sqrt{(b_j^{\tilde{t}})^2 + (b_{j-N}^{\tilde{t}})^2} < \tilde{\rho}. \end{cases} \quad (5.47) $$

In (5.46)–(5.47), $\tilde{\rho}$ is the $K$-th largest element of $\left\{ \sqrt{(b_n^{\tilde{t}})^2 + (b_{n+N}^{\tilde{t}})^2},\ n = 1, 2, \ldots, N \right\}$. A sketch of this selection rule is given below.
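The elementwise operator (5.46)–(5.47) keeps the $K$ pixel pairs with the largest joint magnitude, as in the following sketch (our own implementation; ties at the threshold may keep slightly more than $K$ pairs).

```python
import numpy as np

def greedy_select(b, K):
    """Step 5 of Algorithm 5.5: the operator (5.46)-(5.47)."""
    N = b.size // 2
    amp = np.sqrt(b[:N] ** 2 + b[N:] ** 2)
    rho = np.sort(amp)[::-1][K - 1]          # K-th largest joint magnitude
    mask = np.concatenate([amp >= rho] * 2)  # same pattern for Re and Im parts
    return np.where(mask, b, 0.0)
```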

5.3.2.1 Comparison Between TLBMP and E-BIHT

Similar to TLBMP in Sect. 5.2, the E-BIHT algorithm proposed in this subchapter also utilizes the two-level block sparsity of signals. The differences between TLBMP and E-BIHT are as follows: (1) TLBMP performs sparse signal recovery with analog data, while E-BIHT reconstructs sparse signals from 1-bit data. (2) In TLBMP, the MV strategy is used to obtain an initial estimate of the common support set (of multiple signals) to enforce the joint sparsity pattern, while in E-BIHT an $\ell_0$-norm constraint is imposed on the complex signal to enhance the joint sparsity pattern. This is because E-BIHT considers the joint sparsity pattern of the real and imaginary parts of a complex signal (only two vectors share the joint sparsity pattern), and it is difficult for a voting-based MV strategy to obtain the common support set of just two vector signals. (3) In TLBMP, step functions are used to describe the relationship between the sparse signals and the binary vectors (corresponding to the support set of the signals), while in E-BIHT the smooth Gaussian function is used to represent this relationship, reducing the number of hyper-parameters in the signal model.

5.3.2.2 Model Parameter Selection

In (5.38), $\sigma$ is a slack parameter that separates large and small signal coefficients. The slack parameter $\sigma$ can be adaptively chosen in step 3 of E-BIHT: by sorting the values $\left\{ \sqrt{(a_n)^2 + (a_{n+N})^2},\ n = 1, 2, \ldots, N \right\}$ in descending order, the $K$-th magnitude is selected as the slack parameter $\sigma$. Similar to BIHT, the proposed E-BIHT requires the sparsity $K$ as a part of its input. As suggested in [26], the following condition can guarantee the successful recovery of sparse signals from 1-bit measurements:

$$ M = \Omega_{\text{lower}}\left(K \log(N/K)\right), \quad (5.48) $$

where $\Omega_{\text{lower}}(\cdot)$ denotes the lower bound. Note that (5.48) also provides a strategy for the selection of $K$.
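As a rough guideline, (5.48) says that the number of 1-bit measurements should scale as $K \log(N/K)$; the constant is not specified by the bound and must be chosen empirically. A trivial helper (ours) illustrating this scaling:

```python
import numpy as np

def bit_budget(K, N, c=1.0):
    """Order-of-magnitude bit budget implied by (5.48): M ~ c*K*log(N/K).
    The constant c is not given by the bound and must be tuned."""
    return int(np.ceil(c * K * np.log(N / K)))
```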

5.4 Simulated and Experimental Results

In this subchapter, taking radar imaging as an example, we demonstrate the effectiveness of the proposed sparse signal recovery methods exploiting the two-level block sparsity, based on simulated and measured radar data.

5.4.1 Simulated and Experimental Results Based on Analog Data

Here, we compare the signal recovery performance of BP, TLBMP, CMT-BCS [3], SOMP [27], and HMP [8] using multi-channel radar data. The hyper-parameters involved in CMT-BCS can be selected according to [3]. The quality of radar imaging is still quantitatively evaluated via the TCR in (4.11). The radar data are collected from stepped-frequency synthetic aperture radars.

5.4.1.1 Simulated Results

The simulated system is a radar consisting of a 61-antenna array with an aperture of 0.6 m. The antenna array is placed against the front wall. The stepped-frequency signal covers a 1 GHz bandwidth, ranging from 2 to 3 GHz, with 201 frequency points. The observed scene is situated behind a homogeneous wall of thickness 0.15 m and dielectric constant $\eta = 7.6632$, which is typical for block concrete. The multipath returns are all considered to be 10 dB weaker than the direct path, and only single-bounce multipath is considered [28]. Here, three polarization channels are used: HH, VV, and HV. Measurements in the free-space scenario were collected with the same setup but without the wall. White Gaussian noise with a 25 dB signal-to-noise ratio (SNR) is added to the simulated measurements. Figure 5.5 shows the layout of the simulated scene.

Figure 5.6 illustrates the BP fusion results obtained from 12.5% of the measurements (51 frequencies and 35 antenna positions are selected) in the free-space and through-wall scenarios. As can be observed, the formed images contain many large sidelobes around the targets. From Fig. 5.6a and b, we can observe the displacement of the targets, mainly in range and marginally in azimuth, owing to the presence of the front homogeneous wall. In addition, there are some ghost targets in Fig. 5.6b due to the indoor multipath environment.

Using the simulated free-space and through-wall radar data, the results of the sparsity-aware methods are presented in Figs. 5.7 and 5.8. The number of non-zero elements for the greedy algorithms is set to K = 30. As shown in Fig. 5.7, CMT-BCS, HMP, and TLBMP provide relatively concentrated target areas, while SOMP suffers from some isolated pixels in the target area. In Fig. 5.8, CMT-BCS, SOMP, and HMP fail to remove the spurious pixels outside the target area. The image produced by the proposed TLBMP is better than those of the other methods in terms of clustering the dominant pixels and removing artificial pixels.

The TCRs of the reconstructed images corresponding to BP and the sparsity-aware methods are listed in Table 5.1, where TLBMP yields a higher TCR than the other methods in both the free-space and through-wall scenarios.

The running time results are shown in Table 5.2, from which we find that BP has the minimum running time because it contains only a simple correlation operation. The running time of TLBMP is slightly less than those of SOMP and HMP. CMT-BCS is more computationally demanding than the greedy methods due to a few matrix inversion operations therein and its low convergence speed.

Fig. 5.5 Layout of the simulated scene


Fig. 5.6 Composite BP results by additive fusion on 12.5% of the measurements (simulated data): a free-space scenario, b through-wall scenario. The targets are marked by white rectangles, while the multipath ghosts are marked by white circles

Fig. 5.7 Composite images by 12.5% measurements in free-space scenario (simulated data): a CMT-BCS, b SOMP, c HMP, d TLBMP

Fig. 5.8 Composite images by 12.5% measurements in through-wall scenario (simulated data): a CMT-BCS, b SOMP, c HMP, d TLBMP

Table 5.3 lists the numbers of iterations of the greedy algorithms; the numbers of iterations of both HMP and TLBMP are lower than that of SOMP.

Table 5.1 TCR of reconstructed images using simulated data (in dB)

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| BP | 19.25 | 16.57 |
| CMT-BCS | 25.56 | 21.40 |
| SOMP | 32.13 | 24.74 |
| HMP | 27.67 | 22.84 |
| TLBMP | 56.51 | 31.14 |

Table 5.2 Computational time of radar imaging methods using simulated data

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| BP | 0.1555 | 0.1809 |
| CMT-BCS | 2039.9 | 2179.1 |
| SOMP | 4.6626 | 4.6676 |
| HMP | 13.950 | 14.252 |
| TLBMP | 1.4389 | 1.7114 |

Table 5.3 Iterations of greedy algorithms using simulated data

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| SOMP | 30 | 30 |
| HMP | 2 | 3 |
| TLBMP | 3 | 4 |

5.4.1.2 Experimental Results

Here, we still use the measured multi-channel radar data introduced in Sect. 4.4, observing the free-space and through-wall scenarios; see Sect. 4.4 for a detailed description of these data. For imaging the observed scene, the value of K is set to 50; the imaging results in Figs. 5.9 and 5.10 confirm that the number of dominant pixels is less than 50.

In Fig. 5.9, the scene imaged by the sparsity-aware methods in free-space is shown using 12.5% of the full measurement data. From Fig. 5.9a, it is evident that the CMT-BCS method tends to highlight the four strongest targets but ignores the others. In Fig. 5.9b, c, both the SOMP and HMP methods locate the targets in the scene but suffer from some artifacts outside the target area. The imaging result of the proposed TLBMP algorithm is given in Fig. 5.9d, where the dominant pixels are concentrated against a relatively clean background.

The above experiment is repeated for the through-wall scenario with the same parameter settings; the imaging results are presented in Fig. 5.10. As seen in Fig. 5.10a, the CMT-BCS method generates a clean background with concentrated target areas, similar to Fig. 5.9a; however, some targets are still missing. This could be attributed to the absence of the orthogonal projection operations over the iterations of CMT-BCS; these operations are present in the greedy algorithms so as to guarantee orthogonality between the selected atoms and the residual during each iteration. In Fig. 5.10b, c, the SOMP and HMP methods, which are blind to the target structure (i.e., the clustering structure), can locate the scatterers but fail to attenuate the spurious artifacts outside the target area. It is evident from Fig. 5.10d that the behind-the-wall targets assume accurate locations and that the level of artifacts obtained by TLBMP is much lower than those of the other methods. Quantitatively, a higher TCR is provided by TLBMP in both the free-space and through-wall scenarios, as seen in Table 5.4. This implies that TLBMP achieves accurate localization of targets and effective suppression of artifacts; again, these experimental results confirm the superiority of the proposed TLBMP algorithm. However, the TCRs of TLBMP in Table 5.4 are slightly worse than those in Table 5.1, which could be attributed to the fact that the electromagnetic wave undergoes more complex propagation in the heterogeneous wall than in the homogeneous wall.

Fig. 5.9 Composite images by 12.5% measurements from free-space scenario (real data): a CMT-BCS, b SOMP, c HMP, d TLBMP

Fig. 5.10 Composite images by 12.5% measurements from through-wall scenario (real data): a CMT-BCS, b SOMP, c HMP, d TLBMP

Table 5.4 TCR of reconstructed images using real data (in dB)

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| BP | 16.41 | 13.80 |
| CMT-BCS | 21.64 | 15.49 |
| SOMP | 18.56 | 17.66 |
| HMP | 20.68 | 17.55 |
| TLBMP | 23.85 | 21.72 |

Table 5.5 Computational time of radar imaging methods using measured data

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| BP | 0.1738 | 0.1732 |
| CMT-BCS | 2684.8 | 3123.3 |
| SOMP | 12.709 | 14.496 |
| HMP | 56.146 | 75.505 |
| TLBMP | 5.0642 | 7.5815 |

Table 5.6 Iterations of greedy algorithms using measured data

| Method | Free-space scenario | Through-wall scenario |
|---|---|---|
| SOMP | 50 | 50 |
| HMP | 3 | 4 |
| TLBMP | 7 | 7 |

It is worth mentioning that the imaging result of the trihedral at 5.7 m down-range in Figs. 5.9 and 5.10 is weak or difficult to detect, perhaps owing to its weak scattering characteristics. The comparisons of running time and iterations are given in Tables 5.5 and 5.6, respectively; the relative algorithm complexities are similar to those in Tables 5.2 and 5.3.

5.4.2 Simulated and Experimental Results Based on 1-Bit Data

In this subchapter, we compare the quality of radar images generated by BP, BIHT [17], 1-Bit-MAP [16], and the proposed E-BIHT algorithm using simulated and measured stepped-frequency radar data. BIHT considers only the sparsity of the signals, while 1-Bit-MAP exploits only the clustering structure of the signals. The hyper-parameters involved in 1-Bit-MAP can be selected according to [16]. For E-BIHT, we choose $t_{\max} = \tilde{t}_{\max} = 100$ as the maximum iteration count, the predefined accuracy is $\epsilon = \tilde{\epsilon} = 10^{-2}$, and the step size is set as $\mu = \tilde{\mu} = 10^{-1}$. The regularization parameter $\lambda$ is problem-dependent and needs to be tuned appropriately; we suggest $\lambda = 0.4\sigma^2$, which we have found to work well in terms of convergence speed and accuracy (note that the parameter $\sigma$ can be adaptively selected as introduced in Sect. 5.3.2). We evaluate the performance of the reconstructed images in terms of the TCR.

5.4.2.1

Simulated Results

The settings of the simulated radar system are the same as those in Sect. 5.4.1. There are three point targets in the simulated scene, as seen in Fig. 5.11. Uniform Subsample Pattern introduced in [29] is used to decide which space-frequency samples are selected and the number of space-frequency samples is 1500. Before quantization, Gaussian noise is added to the measurements, wherein SNR = 15 dB. Figure 5.12 illustrates the BP imaging results based on matched filtering by using 8-bit data and 1bit data, which suffer from high sidelobe levels. Note that the target region is marked by white rectangles. Due to the heavier amplitude and phase imbalances between I/Q channels caused by 1-bit quantization, Fig. 5.12b shows more cluttered background than Fig. 5.12a. In Fig. 5.13, we provide sparse radar images reconstructed using 1-bit CS methods, i.e., BIHT, 1-Bit-MAP, and the proposed E-BIHT algorithm. The sparsity level is set to K = 30. It is evident from Fig. 5.13a, b that both BIHT and 1-Bit-MAP can locate all the targets; however, they exhibit isolated artifacts in the non-target area. The imaging result of the proposed E-BIHT algorithm is given in Fig. 5.13c, where the dominant pixels are clustered and the artifacts are alleviated. Furthermore, as illustrated in Fig. 5.13, E-BIHT yields a higher TCR than other methods. Next, we evaluate the performance of 1-bit CS methods versus SNR and the bit budget. In each random trail, the Random Subsample Pattern introduced in [29]

Fig. 5.11 Layout of the simulated scene (for 1-bit radar imaging), where black circles indicate the locations of targets


Fig. 5.12 BP imaging result of the simulated scene with a 8-bit data, TCR = 22.9021 dB, b 1-bit data, TCR = 18.4830 dB

Fig. 5.13 Imaging results of simulated data: a BIHT, TCR = 28.2003 dB, b 1-Bit-MAP, TCR = 28.7149 dB, c the proposed E-BIHT algorithm, TCR = 32.1669 dB

provides guidelines to decide which space-frequency samples are selected. The TCR of the reconstructed images is obtained by averaging over 50 random trials. Figure 5.14a shows the trend of TCR as SNR increases; the performance of E-BIHT is clearly better than that of BIHT and 1-Bit-MAP when SNR ≥ 5 dB. Figure 5.14b depicts TCRs for different values of the bit budget, where E-BIHT consistently outperforms the other 1-bit CS methods. Next, the complexity and convergence of the proposed E-BIHT are investigated. Compared to BIHT, exploiting the two-level block sparsity in E-BIHT increases the computational complexity; however, it is difficult to quantify this increase, since the complexity of E-BIHT depends on its number of iterations. In Fig. 5.15, we provide the computation times of BIHT, 1-Bit-MAP, and the proposed E-BIHT using simulated data, where the running times are averaged over 50 random trials. The computational time of E-BIHT is larger than that of BIHT but smaller than that of 1-Bit-MAP as the bit budget increases. In Fig. 5.16,


Fig. 5.14 Comparison of TCRs of different methods: a TCR versus SNR, with the bit budget set to 1500; b TCR versus bit budget, with the SNR set to 15 dB

Fig. 5.15 Averaged computational time versus bit budget. SNR is set to 15 dB

the trends of TCR of the images produced by E-BIHT versus the number of iterations are plotted for different parameter settings. As illustrated in Fig. 5.16, E-BIHT converges as the number of iterations increases.
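For completeness, the 1-bit data in these simulations come from sign-quantizing the in-phase and quadrature channels separately after adding noise at the stated SNR. The following is a minimal sketch of this measurement generation; the variable names are illustrative, and the actual space-frequency sampling follows [29]:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(z, snr_db):
    """Add circular complex Gaussian noise at the requested SNR (in dB)."""
    sig_pow = np.mean(np.abs(z) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(z.shape)
                                      + 1j * rng.standard_normal(z.shape))
    return z + noise

def one_bit_quantize(z):
    """1-bit quantization of complex data: the in-phase (I) and quadrature (Q)
    channels are sign-quantized separately."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

# Toy stand-in for the simulated setup: 1500 space-frequency samples, SNR = 15 dB
z = rng.standard_normal(1500) + 1j * rng.standard_normal(1500)
y = one_bit_quantize(add_noise(z, snr_db=15))
```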

5.4.2.2 Experimental Results

The measured radar data are acquired using the radar introduced in Sect. 4.4, where the polarimetric channel is HH and the Uniform Subsample Pattern introduced in [29] is used to decide which space-frequency samples are selected. The number of space-frequency samples is 1750. Figure 5.17 illustrates the BP imaging results based on 8-bit data and 1-bit data; the target regions in Fig. 5.17a, b are both blurry. Similar to Fig. 5.12, we observe from Fig. 5.17 that coarser quantization causes a greater loss of image quality.


Fig. 5.16 TCR versus the number of iterations of E-BIHT, a for different SNRs with a fixed bit budget of 1500, b for different bit budgets with fixed SNR of 10 dB

Fig. 5.17 BP imaging results of measured data: a 8-bit data, TCR = 17.1426 dB, b 1-bit data, TCR = 13.3063 dB

The imaging results of the three 1-bit CS methods are provided in Fig. 5.18. The sparsity level K of this image is set to 70, which is sufficient to image the scene of interest. Beyond sparsity, BIHT is insensitive to the underlying target extent and structure. From Fig. 5.18a, it is evident that BIHT achieves localization of the targets, but


Fig. 5.18 Imaging results of measured 1-bit data: a BIHT, TCR = 15.6962 dB, b 1-Bit-MAP, TCR = 23.0237 dB, c the proposed E-BIHT algorithm, TCR = 27.2799 dB

suffers from strong artifacts outside the target area. 1-Bit-MAP implicitly imposes a joint sparsity pattern on the real and imaginary parts of the complex target reflectivity, but without consideration of the clustering property. As shown in Fig. 5.18b, compared to BIHT, 1-Bit-MAP provides relatively better image quality but still suffers from isolated artifact points. As shown in Fig. 5.18c, E-BIHT outperforms the other algorithms in two aspects: (1) the dominant coefficients are more concentrated around the positions of the true scatterers; (2) the clutter and artifacts are significantly suppressed. Quantitatively, a higher TCR is provided by E-BIHT, as seen in Fig. 5.18. This implies that E-BIHT achieves accurate localization of targets and effective suppression of artifacts. Again, the superiority of the proposed E-BIHT algorithm is clearly demonstrated.

5.5 Conclusion

Two-level block sparsity refers to the joint sparsity and clustering characteristics of multiple signals, which constitute prior knowledge of signals in the context of multi-channel radars and distributed sensor networks. In this chapter, a PGM representing the two-level block sparsity of signals is proposed, and the PGM-based TLBMP algorithm is proposed to reconstruct sparse signals from analog data. Compared with existing greedy methods that exploit joint sparsity or clustering features separately, TLBMP captures both features at the same time, which helps to achieve more accurate sparse signal recovery. Compared with the Bayesian method CMT-BCS, which is also designed based on two-level block sparsity, our proposed TLBMP method significantly reduces the computational cost of signal recovery. Experiments based on simulated and measured multi-channel radar data show that the proposed TLBMP method enhances targets in radar images and improves the accuracy of sparse signal recovery without significantly increasing the computational burden.


In hardware systems for signal processing, 1-bit data have attracted much attention because of their simple quantization structure and low hardware cost. To reduce the hardware cost of radar imaging, we also proposed, based on the two-level block sparsity model, a sparse signal recovery method with 1-bit measurements, E-BIHT. Experiments based on simulated and measured radar data show that the proposed E-BIHT improves upon existing sparse signal reconstruction methods based on 1-bit data while maintaining low hardware complexity, thanks to the exploitation of two-level block sparsity. The methods proposed in this chapter involve hyper-parameters that characterize the relationship between sparse signals and their support sets. In the future, we will continue to study how to reduce the number of hyper-parameters or find more principled parameter-setting methods. Future research related to this chapter also includes theoretical analysis of the signal recovery error bounds caused by data quantization when two-level block sparsity is exploited.

References

1. Eldar YC, Kuppinger P, Bolcskei H (2010) Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans Signal Process 58(6):3042–3054
2. Baraniuk RG, Cevher V, Duarte MF et al (2010) Model-based compressive sensing. IEEE Trans Inf Theory 56(4):1982–2001
3. Wu Q, Zhang YD, Ahmad F et al (2014) Compressive-sensing-based high-resolution polarimetric through-the-wall radar imaging exploiting target characteristics. IEEE Antennas Wirel Propag Lett 14:1043–1047
4. Wu Q, Zhang YD, Amin MG et al (2014) Multi-task Bayesian compressive sensing exploiting intra-task dependency. IEEE Signal Process Lett 22(4):430–434
5. Wu Q, Zhang YD, Amin MG et al (2015) High-resolution passive SAR imaging exploiting structured Bayesian compressive sensing. IEEE J Select Topics Signal Process 9(8):1484–1497
6. Korki M, Zhang J, Zhang C et al (2016) Iterative Bayesian reconstruction of non-IID block-sparse signals. IEEE Trans Signal Process 64(13):3297–3307
7. Elad M (2010) Sparse and redundant representations: from theory to applications in signal and image processing. Springer Science & Business Media
8. Li G, Burkholder RJ (2015) Hybrid matching pursuit for distributed through-wall radar imaging. IEEE Trans Antennas Propag 63(4):1701–1711
9. Li G, Wimalajeewa T, Varshney PK (2015) Decentralized and collaborative subspace pursuit: a communication-efficient algorithm for joint sparsity pattern recovery with sensor networks. IEEE Trans Signal Process 64(3):556–566
10. Cevher V, Indyk P, Carin L et al (2010) Sparse signal recovery and acquisition with graphical models. IEEE Signal Process Mag 27(6):92–103
11. Joshi A, Kannu AP (2014) On recovery of block sparse signals from multiple measurements. In: Proceedings of international conference on acoustics, speech and signal processing. IEEE, pp 7163–7167
12. Cevher V, Duarte MF, Hegde C et al (2009) Sparse signal recovery using Markov random fields. In: Advances in neural information processing systems, pp 257–264
13. Li C, He Y, Wang X et al (2019) Distributed detection of sparse stochastic signals via fusion of 1-bit local likelihood ratios. IEEE Signal Process Lett 26(12):1738–1742
14. Jacques L, Hammond DK, Fadili JM (2010) Dequantizing compressed sensing: when oversampling and non-Gaussian constraints combine. IEEE Trans Inf Theory 57(1):559–571
15. Boufounos PT, Baraniuk RG (2008) 1-bit compressive sensing. In: Proceedings of 42nd annual conference on information sciences and systems. IEEE, pp 16–21
16. Dong X, Zhang Y (2015) A MAP approach for 1-bit compressive sensing in synthetic aperture radar imaging. IEEE Geosci Remote Sens Lett 12(6):1237–1241
17. Jacques L, Laska JN, Boufounos PT et al (2013) Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans Inf Theory 59(4):2082–2102
18. Plan Y, Vershynin R (2012) Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans Inf Theory 59(1):482–494
19. Li SZ (2009) Markov random field modeling in image analysis. Springer Science & Business Media
20. Koller D, Friedman N (2009) Probabilistic graphical models: principles and techniques. MIT Press
21. Wu Q, Zhang YD, Amin MG et al (2014) Complex multitask Bayesian compressive sensing. In: Proceedings of international conference on acoustics, speech and signal processing. IEEE, pp 3375–3379
22. Mohimani H, Babaie-Zadeh M, Jutten C (2008) A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm. IEEE Trans Signal Process 57(1):289–301
23. Hu X, Tong N, Zhang Y et al (2017) MIMO radar imaging with nonorthogonal waveforms based on joint-block sparse recovery. IEEE Trans Geosci Remote Sens 56(10):5985–5996
24. Wei X, Yuan Y, Ling Q (2012) DOA estimation using a greedy block coordinate descent algorithm. IEEE Trans Signal Process 60(12):6382–6394
25. Li H, Fu Y, Hu R et al (2014) Perturbation analysis of greedy block coordinate descent under RIP. IEEE Signal Process Lett 21(5):518–522
26. Baraniuk RG, Foucart S, Needell D et al (2017) Exponential decay of reconstruction error from binary measurements of sparse signals. IEEE Trans Inf Theory 63(6):3368–3385
27. Tropp JA, Gilbert AC, Strauss MJ (2006) Algorithms for simultaneous sparse approximation. Part I: greedy pursuit. Signal Process 86(3):572–588
28. Leigsnering M, Ahmad F, Amin MG et al (2015) Compressive sensing-based multipath exploitation for stationary and moving indoor target localization. IEEE J Select Topics Signal Process 9(8):1469–1483
29. Lagunas E, Amin MG, Ahmad F et al (2012) Joint wall mitigation and compressive sensing for indoor image reconstruction. IEEE Trans Geosci Remote Sens 51(2):891–906

Chapter 6

Summary and Perspectives

6.1 Summary

Joint sparsity is a natural feature of signals and reflects the dependence among multiple sparse signals. Compared with modeling and processing multiple sparse signals independently, exploiting joint sparsity helps to improve the performance of signal detection and recovery. Focusing on the problems of detection and recovery of jointly sparse signals, this book mainly introduces jointly sparse signal detection and recovery methods, together with the corresponding performance analysis, for both analog and coarsely quantized measurements. The main contributions of this book are as follows:

(1) We propose a detection method for jointly sparse signals based on the LMPT to reduce the computation and data transmission burden of existing methods, and extend this method and its theoretical detection performance to the case of coarsely quantized measurements and the generalized Gaussian signal model. The problem of detecting jointly sparse signals is formulated as a one-sided hypothesis testing problem with close hypotheses, and a signal detection method based on the LMPT is then designed. Compared with existing detection methods, the main advantage of our LMPT detection method is that it greatly reduces the computation and data transmission cost in the detection process while the loss in detection performance is negligible. In addition, the theoretical detection performance of the LMPT method based on analog data and on coarsely quantized data is derived, and the optimal thresholds of the quantizers are solved. We make the following observations. Under the Gaussian noise background: (a) the proposed 1-bit LMPT detector with 3.3L sensors achieves approximately the same performance as the clairvoyant LMPT detector with L sensors collecting analog measurements; (b) with the same amount of measurements, the proposed multilevel LMPT detector based on 3-bit quantized measurements provides detection performance very close to that of the clairvoyant LMPT detector. Under the generalized Gaussian noise background, an analytically and asymptotically optimal 1-bit quantizer is designed. The theoretical ARE of the LMPT detection method is derived, which can be used to compensate for the performance loss caused by data quantization. This LMPT work improves upon existing jointly sparse signal detection methods and enhances the completeness of the corresponding theory.

(2) We propose a jointly sparse signal recovery method based on the strategy of look-ahead-atom-selection. Taking radar imaging as an example, the superiority of this method in terms of reducing clutter in sparse radar images is verified using measured radar data. Atoms correspond to the positions of the non-zero elements in a sparse signal, and the correct selection of atoms is key to the recovery of jointly sparse signals. Existing jointly sparse signal recovery methods do not consider the effect of currently selected atoms on future iterations. In this book, a jointly sparse signal recovery method based on look-ahead-atom-selection is introduced, which evaluates the effect of the selected atoms on the signal recovery error in the next iteration and selects the optimal atoms. Theoretical analysis based on the condition number shows that this method enhances the stability of atom selection. Based on measured multi-channel radar data, it is shown that, compared with existing competitors, the proposed method improves the accuracy of jointly sparse signal recovery and reduces the clutter in sparse radar images. This work is a beneficial supplement to existing jointly sparse signal recovery methods.

(3) We propose a jointly sparse signal recovery method based on two-level block sparsity, where the clustering structure of the sparse signals is exploited in addition to their joint sparsity. Taking radar imaging as an example, the superiority of this method is verified using measured radar data. The clustering characteristic of sparse signals means that the non-zero elements of a signal tend to be concentrated rather than isolated; it provides natural information about the signal in a dimension complementary to the joint sparsity pattern. Existing greedy sparse signal recovery methods consider either the joint sparsity or the clustering characteristic of multiple signals, but not both. In this book, a signal recovery method based on two-level block sparsity is introduced, where both the joint sparsity and the clustering characteristic are exploited to strengthen the sparsity constraints. The forms of this method for analog and for 1-bit quantized data are presented. Experiments based on measured radar data show that, compared with existing methods, the proposed method improves the quality of radar image reconstruction: non-zero target pixels are more concentrated in the target area, and energy leakage into the non-target area is better suppressed. This work is a beneficial supplement to existing jointly sparse signal recovery methods.


6.2 Perspectives

Building on the research in this book, the following issues deserve attention in the future:

(1) Detection of jointly sparse signals. (a) Dependence within signals. Joint sparsity is a dependence feature among multiple sparse signals. Existing signal detection methods assume that the values of the non-zero elements within each sparse signal are independent of each other. In practice, the non-zero elements within each sparse signal may also be dependent [1]. Existing models of data dependence, such as the Copula model [2], can be utilized to further exploit the dependence of jointly sparse signals, which is expected to improve signal detection performance. (b) Jointly sparse signal detection based on heterogeneous/non-uniform sensor fusion. The signal detection methods introduced in this book mainly consider homogeneous sensor fusion. In practice, there are also heterogeneous/non-uniform sensors, such as sensor networks with different noise levels due to varying channel attenuation and multipath factors, and the heterogeneous fusion of acoustic and infrared sensors to detect human gait signals [1]. How to design detectors and analyze the theoretical detection performance under heterogeneous/non-uniform sensor fusion remains a largely open problem.

(2) Recovery of jointly sparse signals. (a) Reducing the number of hyper-parameters in signal recovery methods. This book mainly investigates jointly sparse signal recovery methods based on the greedy pursuit strategy. Most existing greedy pursuit methods require one or more hyper-parameters to be pre-determined, and improper hyper-parameter settings may deteriorate the performance of sparse signal recovery. In recent years, deep neural networks have made great progress in image, speech, and text processing due to their powerful learning capability and excellent performance. Deep neural networks are expected to reduce the number of hyper-parameters in jointly sparse signal recovery methods by learning from training data. (b) Exploiting clutter/noise features. For example, this book studies through-wall radar imaging problems, and the existing literature shows that wall clutter exhibits a low-rank feature [3]. Inspired by this, greedy-pursuit-based signal recovery methods exploiting the low-rank clutter feature can be investigated to effectively suppress low-rank wall clutter and achieve more accurate reconstruction of through-wall observation scenes.

(3) Applications of sparse signal processing in practice. This book considers the application of jointly sparse signal recovery in the field of radar imaging, that is, the linear inversion of high-resolution radar images from a small number of radar measurements. The subsequent steps of radar imaging include the detection and recognition of targets in radar images. In satellite synthetic aperture radar images, marine targets are naturally sparse in the spatial domain. Due to the influence of sea states, radar incidence angles, and inshore clutter, there are still many technological bottlenecks in the detection and recognition of marine targets.


How to use the sparsity of (marine) targets to improve the performance of target detection and recognition in radar images remains a largely open research question.

References

1. Wimalajeewa T, Varshney PK (2017) Compressive sensing-based detection with multimodal dependent data. IEEE Trans Signal Process 66(3):627–640
2. Cherubini U, Luciano E, Vecchiato W (2004) Copula methods in finance. Wiley
3. Tang VH, Bouzerdoum A, Phung SL (2017) Multipolarization through-wall radar imaging using low-rank and jointly-sparse representations. IEEE Trans Image Process 27(4):1763–1776

Appendix A

Proof of (2.61)

We first introduce Lemma 1.

Lemma 1  $\lim_{\Delta\to 0^{+}} \frac{g_1(\Delta)}{g_2(\Delta)} = 1$, where

$$g_1(\Delta) = \frac{\left[\tau G(\tau) - (\tau-\Delta)G(\tau-\Delta)\right]^2}{F(\tau) - F(\tau-\Delta)}, \tag{A.1}$$

$$g_2(\Delta) = \Delta\, G(\tau)\left[1-\tau^2\right]^2. \tag{A.2}$$

Proof  See Appendix B.

Denote $\Delta_{l,i} = \tau_{l,i} - \tau_{l,i-1}$, where $l = 1, 2, \cdots, L$ and $i = 1, 2, \cdots, 2^q$. When the bit depth $q$ tends to infinity, we have $\Delta_{l,i} \to 0^{+}$ for all $l, i$. From (2.39) and (2.40), we have

$$
\begin{aligned}
&\lim_{\Delta_{l,i}\to 0^{+},\,\forall l,i} \mathrm{FI}_Q(p_1)\\
&\quad= \sum_{l=1}^{L} \frac{\kappa_l^4}{4\left(p_1\kappa_l^2+\sigma_w^2\right)^2}
\lim_{\Delta_{l,i}\to 0^{+},\,\forall l,i} \sum_{i=1}^{2^q}
\left[\frac{\tau_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\, G\!\left(\frac{\tau_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\right)
- \frac{\tau_{l,i}-\Delta_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\, G\!\left(\frac{\tau_{l,i}-\Delta_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\right)\right]^2\\
&\qquad\times
\left[F\!\left(\frac{\tau_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\right)
- F\!\left(\frac{\tau_{l,i}-\Delta_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\right)\right]^{-1}\\
&\quad\overset{(a)}{=} \sum_{l=1}^{L} \frac{\kappa_l^4}{4\left(p_1\kappa_l^2+\sigma_w^2\right)^2}
\lim_{\Delta_{l,i}\to 0^{+},\,\forall l,i} \sum_{i=1}^{2^q}
\frac{\Delta_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\,
G\!\left(\frac{\tau_{l,i}}{\sqrt{p_1\kappa_l^2+\sigma_w^2}}\right)
\left[1 - \frac{\tau_{l,i}^2}{p_1\kappa_l^2+\sigma_w^2}\right]^2\\
&\quad= \sum_{l=1}^{L} \frac{\kappa_l^4}{4\left(p_1\kappa_l^2+\sigma_w^2\right)^2}
\int_{-\infty}^{+\infty} G(\varsigma)\left[1-\varsigma^2\right]^2 \mathrm{d}\varsigma\\
&\quad\overset{(b)}{=} \sum_{l=1}^{L} \frac{\kappa_l^4}{4\left(p_1\kappa_l^2+\sigma_w^2\right)^2}
\left[\int_{-\infty}^{+\infty} G(\varsigma)\,\mathrm{d}\varsigma
- 2\int_{-\infty}^{+\infty} \varsigma^2 G(\varsigma)\,\mathrm{d}\varsigma
+ \int_{-\infty}^{+\infty} \varsigma^4 G(\varsigma)\,\mathrm{d}\varsigma\right]\\
&\quad= \sum_{l=1}^{L} \frac{\kappa_l^4}{4\left(p_1\kappa_l^2+\sigma_w^2\right)^2}\,(1-2+3)
= \sum_{l=1}^{L} \frac{\kappa_l^4}{2\left(p_1\kappa_l^2+\sigma_w^2\right)^2}. \tag{A.3}
\end{aligned}
$$

The equality (a) of (A.3) is a consequence of Lemma 1, and the fourth-order moment of a normal random variable used in equality (b) of (A.3) is calculated in [1]. According to (A.3), it can be derived that

$$\lim_{q\to+\infty} \mu_{q\text{-bit}}
= \lim_{\Delta_{l,i}\to 0^{+},\,\forall l,i} p\sqrt{\mathrm{FI}(p)}
= p\sqrt{\sum_{l=1}^{L}\frac{\kappa_l^4}{2\left(p\kappa_l^2+\sigma_w^2\right)^2}}
= \mu^{\mathrm{ana}}. \tag{A.4}$$

This completes the proof.
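As a numerical sanity check of the limit in (A.3) (our illustration, taking $G$ and $F$ to be the standard normal pdf and cdf, as in the derivation above), the inner sum indeed approaches $\int G(\varsigma)(1-\varsigma^2)^2\,\mathrm{d}\varsigma = 2$ as the quantizer bins shrink:

```python
import numpy as np
from scipy.stats import norm

# As the quantizer bins shrink, the inner sum in (A.3),
#   sum_i [tau_i G(tau_i) - tau_{i-1} G(tau_{i-1})]^2 / [F(tau_i) - F(tau_{i-1})],
# approaches the integral of G(s) * (1 - s^2)^2 ds = 1 - 2 + 3 = 2.
for n_bins in (16, 64, 256, 1024):
    tau = np.linspace(-6.0, 6.0, n_bins + 1)
    num = (tau[1:] * norm.pdf(tau[1:]) - tau[:-1] * norm.pdf(tau[:-1])) ** 2
    den = norm.cdf(tau[1:]) - norm.cdf(tau[:-1])
    print(n_bins, (num / den).sum())  # tends to 2 as the bins shrink
```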

Appendix B

Proof of Lemma 1

Proof of Lemma 1

$$
\begin{aligned}
\lim_{\Delta\to 0^{+}} \frac{g_1(\Delta)}{g_2(\Delta)}
&= \lim_{\Delta\to 0^{+}} \frac{\left[\tau G(\tau) - (\tau-\Delta)G(\tau-\Delta)\right]^2}{\Delta\, G(\tau)\left[1-\tau^2\right]^2\left[F(\tau) - F(\tau-\Delta)\right]}\\
&\overset{(a)}{=} \lim_{\Delta\to 0^{+}} \frac{2\left[\tau G(\tau) - (\tau-\Delta)G(\tau-\Delta)\right]\left[G(\tau-\Delta) - (\tau-\Delta)^2 G(\tau-\Delta)\right]}{G(\tau)\left[1-\tau^2\right]^2\left\{\left[F(\tau) - F(\tau-\Delta)\right] + \Delta\, G(\tau-\Delta)\right\}}\\
&\overset{(b)}{=} \lim_{\Delta\to 0^{+}} \frac{2\left[G(\tau-\Delta) - (\tau-\Delta)^2 G(\tau-\Delta)\right]}{\left[1-\tau^2\right]\left[2G(\tau-\Delta) + \Delta(\tau-\Delta)G(\tau-\Delta)\right]}\\
&= \frac{2\left[G(\tau) - \tau^2 G(\tau)\right]}{\left[1-\tau^2\right]\left[2G(\tau)\right]} = 1, \tag{B.1}
\end{aligned}
$$

where the equalities (a) and (b) stem from L'Hôpital's rule [2].
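Lemma 1 can also be checked numerically (our illustration, with $G$ and $F$ the standard normal pdf and cdf); the ratio $g_1(\Delta)/g_2(\Delta)$ tends to 1 as $\Delta \to 0^{+}$ for any $\tau$ away from $\pm 1$, where $g_2$ is nonzero:

```python
import numpy as np
from scipy.stats import norm

def g_ratio(tau, d):
    """g1(d)/g2(d) from (A.1)-(A.2), with G, F the standard normal pdf, cdf."""
    g1 = (tau * norm.pdf(tau) - (tau - d) * norm.pdf(tau - d)) ** 2 \
         / (norm.cdf(tau) - norm.cdf(tau - d))
    g2 = d * norm.pdf(tau) * (1.0 - tau ** 2) ** 2
    return g1 / g2

for d in (1e-1, 1e-3, 1e-5):
    print(d, g_ratio(0.5, d))  # tends to 1 as d -> 0+
```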

Appendix C

Proof of (3.6)

It is sufficient to check Lyapunov's condition with $\xi = 1$ [3]. Then, we have

$$
\begin{aligned}
E\!\left[\left|h_{l,n}x_{l,n}\right|^3\right]
&= \eta\left|h_{l,n}\right|^3 \int_{-\infty}^{+\infty} \left|x_{l,n}\right|^3
\frac{\alpha_x\beta_x}{2\Gamma(1/\beta_x)}
\exp\!\left[-\left(\alpha_x\left|x_{l,n}\right|\right)^{\beta_x}\right]\mathrm{d}x_{l,n}\\
&= \eta\left|h_{l,n}\right|^3 \frac{\alpha_x\beta_x}{\Gamma(1/\beta_x)}
\int_{0}^{+\infty} t^3 \exp\!\left[-(\alpha_x t)^{\beta_x}\right]\mathrm{d}t\\
&= \eta\,\sigma_x^3\left|h_{l,n}\right|^3\,
\frac{\Gamma(4/\beta_x)/\Gamma(1/\beta_x)}{\left(\Gamma(3/\beta_x)/\Gamma(1/\beta_x)\right)^{3/2}}. \tag{C.1}
\end{aligned}
$$

Based on (C.1), we can derive that

$$\lim_{N\to+\infty} \frac{1}{\omega^3}\sum_{n=1}^{N} E\!\left[\left|h_{l,n}x_{l,n}\right|^3\right]
= \lim_{N\to+\infty}
\frac{\eta\,\Gamma(4/\beta_x)/\Gamma(1/\beta_x)}{\left(\Gamma(3/\beta_x)/\Gamma(1/\beta_x)\right)^{3/2}}
\cdot \frac{\sum_{n=1}^{N}\left|h_{l,n}\right|^3}{\left(\sum_{n=1}^{N} h_{l,n}^2\right)^{3/2}}. \tag{C.2}$$

The gamma function $\Gamma(\cdot)$ has the following property:

$$0 < \Gamma(a) < +\infty, \quad \text{when } 0 < a < +\infty. \tag{C.3}$$

In addition, it is easy to derive that

$$0 = \lim_{N\to+\infty} \frac{N\underline{h}^3}{N^{3/2}\tilde{h}^3}
\le \lim_{N\to+\infty} \frac{\sum_{n=1}^{N}\left|h_{l,n}\right|^3}{\left(\sum_{n=1}^{N} h_{l,n}^2\right)^{3/2}}
\le \lim_{N\to+\infty} \frac{N\tilde{h}^3}{N^{3/2}\underline{h}^3} = 0, \tag{C.4}$$

where $\underline{h}$ and $\tilde{h}$ are the minimum and maximum values in $\left\{\left|h_{l,1}\right|, \left|h_{l,2}\right|, \ldots, \left|h_{l,N}\right|\right\}$, respectively. Based on (C.3) and (C.4), (C.2) can be calculated as

$$\lim_{N\to+\infty} \frac{1}{\omega^3}\sum_{n=1}^{N} E\!\left[\left|h_{l,n}x_{l,n}\right|^3\right] = 0. \tag{C.5}$$

This completes the proof of (3.6).
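The closed form of the third absolute moment in (C.1) can be verified numerically (our illustration; the $\alpha_x$, $\beta_x$ values are arbitrary, and the common factor $\eta|h_{l,n}|^3$ is dropped since it enters both sides):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check of the closed form in (C.1) for a zero-mean generalized Gaussian
# variable with pdf f(x) = a*b / (2*Gamma(1/b)) * exp(-(a*|x|)**b):
# E|x|^3 = sigma^3 * (Gamma(4/b)/Gamma(1/b)) / (Gamma(3/b)/Gamma(1/b))**1.5,
# where sigma^2 = Gamma(3/b) / (a**2 * Gamma(1/b)) is the variance.
a, b = 1.3, 1.7  # arbitrary scale and shape parameters
pdf = lambda x: a * b / (2 * gamma(1 / b)) * np.exp(-(a * abs(x)) ** b)
m3_numeric = quad(lambda x: abs(x) ** 3 * pdf(x), -np.inf, np.inf)[0]
sigma = np.sqrt(gamma(3 / b) / (a ** 2 * gamma(1 / b)))
m3_closed = sigma ** 3 * (gamma(4 / b) / gamma(1 / b)) \
            / (gamma(3 / b) / gamma(1 / b)) ** 1.5
print(m3_numeric, m3_closed)  # the two values agree
```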

Appendix D

Proof of Theorem 1

First, we demonstrate (3.39) in Theorem 1. Lemmas 1 and 2 are introduced in the following.

Lemma 1  If $b \ge 0$ and $c > 1$, then

$$\frac{\gamma\left(1/c,\, b^c\right)}{\Gamma(1/c)} \ge \left[1 - \exp\left(-b^c\right)\right]^{1/c}. \tag{D.1}$$

Lemma 1 is a direct consequence of Theorem 1 in [4].

Lemma 2  If $0 \le t \le 1$ and $c > 1$, then

$$(1-t)^{1/c} \ge 1 - t^{1/c}. \tag{D.2}$$

Proof of Lemma 2  See Appendix E.

The denominator of $\kappa_l$ in (3.38) is calculated as

$$
\begin{aligned}
\Gamma^2(1/\beta) - \gamma^2\left(1/\beta, (\alpha|\tau_l|)^\beta\right)
&= \left[\Gamma(1/\beta) + \gamma\left(1/\beta, (\alpha|\tau_l|)^\beta\right)\right]
   \left[\Gamma(1/\beta) - \gamma\left(1/\beta, (\alpha|\tau_l|)^\beta\right)\right]\\
&\overset{(a)}{\le} 2\Gamma(1/\beta)\left[\Gamma(1/\beta) - \gamma\left(1/\beta, (\alpha|\tau_l|)^\beta\right)\right]
 = 2\Gamma^2(1/\beta)\left[1 - \frac{\gamma\left(1/\beta, (\alpha|\tau_l|)^\beta\right)}{\Gamma(1/\beta)}\right]\\
&\overset{(b)}{\le} 2\Gamma^2(1/\beta)\left\{1 - \left[1 - \exp\left(-(\alpha|\tau_l|)^\beta\right)\right]^{1/\beta}\right\}\\
&\overset{(c)}{\le} 2\Gamma^2(1/\beta)\left\{1 - \left[1 - \exp\left(-\tfrac{1}{\beta}(\alpha|\tau_l|)^\beta\right)\right]\right\}
 = 2\Gamma^2(1/\beta)\exp\left(-\tfrac{1}{\beta}(\alpha|\tau_l|)^\beta\right), \tag{D.3}
\end{aligned}
$$

where (a) is derived from $\Gamma(a) \ge \gamma(a, b)$, (b) is a consequence of Lemma 1, and (c) stems from Lemma 2. Substituting (D.3) into (3.37), we have

$$\kappa_l = \frac{(\alpha|\tau_l|)^{2\beta-2}\exp\left[-2(\alpha|\tau_l|)^\beta\right]}{\Gamma^2(1/\beta) - \gamma^2\left(1/\beta, (\alpha|\tau_l|)^\beta\right)}
\ge \frac{(\alpha|\tau_l|)^{2\beta-2}\exp\left[-\left(2-\tfrac{1}{\beta}\right)(\alpha|\tau_l|)^\beta\right]}{2\Gamma^2(1/\beta)}
\triangleq \tilde{\kappa}_l, \tag{D.4}$$

which establishes (3.39) in Theorem 1.

Next, we prove the log-concavity of the lower-bound function $\tilde{\kappa}_l$ and (3.40) in Theorem 1. When $\tau_l > 0$, we have $\tilde{\kappa}_l > 0$ and

$$\frac{\partial \ln\tilde{\kappa}_l}{\partial \tau_l} = \frac{2\beta-2}{\tau_l} - \alpha(2\beta-1)(\alpha\tau_l)^{\beta-1}, \tag{D.5}$$

$$\frac{\partial^2 \ln\tilde{\kappa}_l}{\partial \tau_l^2} = -\frac{2\beta-2}{\tau_l^2} - \alpha^2(2\beta-1)(\beta-1)(\alpha\tau_l)^{\beta-2} < 0 \quad \text{for } \beta > 1. \tag{D.6}$$

When $\tau_l = 0$, we have

$$\ln\tilde{\kappa}_l = -\infty. \tag{D.7}$$

Based on (D.6) and (D.7), it is derived that the lower bound $\tilde{\kappa}_l$ is log-concave [5] with respect to $\tau_l$ when $\beta > 1$ and $\tau_l \ge 0$. Setting the first derivative of $\ln\tilde{\kappa}_l$ in (D.5) to zero, one obtains

$$\frac{\partial \ln\tilde{\kappa}_l}{\partial \tau_l} = \frac{2\beta-2}{\tau_l} - \alpha(2\beta-1)(\alpha\tau_l)^{\beta-1} = 0
\ \Rightarrow\ \hat{\tau}_l = \hat{\tau} = \frac{1}{\alpha}\left(\frac{2\beta-2}{2\beta-1}\right)^{1/\beta}. \tag{D.8}$$

This completes the proof of Theorem 1.
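The bound (D.4) and the maximizer (D.8) can be verified numerically (our illustration; the $\alpha$, $\beta$ values are arbitrary with $\beta > 1$, and SciPy's `gammainc` is the regularized lower incomplete gamma function $\gamma(a,x)/\Gamma(a)$):

```python
import numpy as np
from scipy.special import gamma, gammainc  # gammainc(a, x) = lower-incomplete / Gamma(a)

alpha, beta = 1.0, 1.5  # arbitrary scale and shape with beta > 1
tau = np.linspace(1e-3, 5.0, 20000)
u = (alpha * tau) ** beta
# kappa_l from (3.37)/(D.4): Gamma^2 - gamma^2 = Gamma^2 * (1 - P^2), where
# P is the regularized lower incomplete gamma function.
P = gammainc(1 / beta, u)
kappa = (alpha * tau) ** (2 * beta - 2) * np.exp(-2 * u) \
        / (gamma(1 / beta) ** 2 * (1 - P ** 2))
kappa_tilde = (alpha * tau) ** (2 * beta - 2) * np.exp(-(2 - 1 / beta) * u) \
              / (2 * gamma(1 / beta) ** 2)
assert np.all(kappa >= kappa_tilde)                  # the bound (D.4)
tau_hat = (1 / alpha) * ((2 * beta - 2) / (2 * beta - 1)) ** (1 / beta)
print(tau[np.argmax(kappa_tilde)], tau_hat)          # maximizer matches (D.8)
```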

Appendix E

Proof of Lemma 2

Let $Z = (1-t)^{1/c} - \left(1 - t^{1/c}\right)$. Then, we have

$$\frac{\partial^2 Z}{\partial t^2} = -\frac{1}{c}\left(1 - \frac{1}{c}\right)\left[t^{1/c-2} + (1-t)^{1/c-2}\right] < 0, \quad \text{for } c > 1,\ 0 \le t \le 1. \tag{E.1}$$

Thus, $Z$ is a concave function with respect to $t$ on $0 \le t \le 1$ for $c > 1$, i.e.,

$$Z|_{t=pt_1+(1-p)t_2} \ge p\,Z|_{t=t_1} + (1-p)\,Z|_{t=t_2}, \quad \text{for } 1 \ge p \ge 0,\ 0 \le t_1, t_2 \le 1. \tag{E.2}$$

When $t_1 = 0$ and $t_2 = 1$ are selected, we have

$$Z|_{t=t_1} = Z|_{t=t_2} = 0. \tag{E.3}$$

Substituting $t_1 = 0$ and $t_2 = 1$ into (E.2), it is derived that

$$Z|_{t=1-p} \ge 0, \quad \text{for } 1 \ge p \ge 0 \text{ and } c > 1, \tag{E.4}$$

i.e.,

$$Z|_{0 \le t \le 1} \ge 0, \quad \text{for } c > 1. \tag{E.5}$$

Therefore, we have $(1-t)^{1/c} \ge 1 - t^{1/c}$ if $0 \le t \le 1$ and $c > 1$. This completes the proof of Lemma 2.
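Lemma 2 is easy to confirm numerically (our illustration; the grid and tolerance are arbitrary):

```python
import numpy as np

# Check of Lemma 2: (1 - t)**(1/c) >= 1 - t**(1/c) for 0 <= t <= 1 and c > 1.
t = np.linspace(0.0, 1.0, 1001)
for c in (1.5, 2.0, 5.0):
    assert np.all((1 - t) ** (1 / c) >= 1 - t ** (1 / c) - 1e-12)
print("Lemma 2 holds on the test grid")
```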


References

1. Papoulis A, Pillai SU (2002) Probability, random variables, and stochastic processes. Tata McGraw-Hill Education
2. Wang X, Li G, Varshney PK (2019) Detection of sparse stochastic signals with quantized measurements in sensor networks. IEEE Trans Signal Process 67(8):2210–2220
3. Hariri A, Babaie-Zadeh M (2017) Compressive detection of sparse signals in additive white Gaussian noise without signal reconstruction. Signal Process 131:376–385
4. Alzer H (1997) On some inequalities for the incomplete gamma function. Math Comput 66(218):771–778
5. Davis PJ (1959) Leonhard Euler's integral: a historical profile of the Gamma function: In memoriam: Milton Abramowitz. Am Math Mon 66(10):849–869

About the Author

Xueqian Wang received the B.S. degree in Electronic Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2015, and the Ph.D. degree in Electronic Engineering from Tsinghua University, Beijing, China, in 2020, both with the highest honors. From 2018 to 2019, he visited Syracuse University, Syracuse, NY, USA. From 2020 to 2022, he was a Post-Doctoral Fellow with the Department of Electronic Engineering, Tsinghua University, Beijing, China. He is currently an Assistant Professor with the Department of Electronic Engineering, Tsinghua University. His main research interests include target detection, information fusion, remote sensing, radar imaging, and distributed signal processing. He has authored or coauthored 50 journal and conference papers. He is an IEEE Member and a reviewer for IEEE Transactions on Geoscience and Remote Sensing, IEEE Transactions on Signal Processing, IEEE Transactions on Communications, IEEE Transactions on Circuits and Systems II: Express Briefs, among others. His doctoral thesis received the awards of "Excellent Doctoral Thesis of the China Education Society of Electronics" and "Excellent Doctoral Thesis of Tsinghua University". He has also been selected for the 2020 Postdoctoral Innovative Talent Support Program, and was named a 2020 Outstanding Graduate of Beijing and a 2022 Outstanding Postdoctoral Fellow of Tsinghua University.
