Sampling: Theory and Applications: A Centennial Celebration of Claude Shannon (Applied and Numerical Harmonic Analysis) 3030362906, 9783030362904

The chapters of this volume are based on talks given at the eleventh international Sampling Theory and Applications conference.


Table of contents :
ANHA Series Preface
Preface
Overview of SampTA
Book Chapters
Crossroads
The Classical (WKS) Sampling Theorem
Dedication—John Rowland Higgins (17 August 1935–24 February 2020)
Contents
Claude Shannon: American Genius
Reconstruction of Signals: Uniqueness and Stable Sampling
1 Orthogonal Bases, Riesz Bases, and Complete Systems
2 Frames
2.1 Examples
2.2 Frame Decomposition
3 Uniqueness and Stable Sampling
3.1 Fourier Transform on the Line
3.2 Paley–Wiener Spaces
3.3 Sampling Problem
3.3.1 Uniqueness
3.3.2 Stable Sampling
3.3.3 Whittaker–Kotelnikov–Shannon Sampling Theorem
4 Beurling's Sampling Theorem
4.1 Uniform Densities
4.2 Bernstein Space Bσ
5 Riesz Sequences and Interpolation
5.1 Interpolation
5.2 Kahane's and Beurling's Interpolation Theorems
6 Disconnected Spectra: Landau's Theorems
6.1 Landau's Necessary Density Conditions
6.2 Uniformly Minimal Sets
6.3 Uniformly Minimal Sets of Exponentials
6.3.1 Proof of Theorem 8
6.4 Proof of Theorem 7
7 Universal Sampling
7.1 Universality Problem
7.2 Meyer's Model Sets
7.3 No Universal Sampling for Non-compact Sets
7.4 Universal Completeness
7.5 Universal Completeness on the Circle
8 Unbounded Spectra
8.1 Uniqueness Sets
8.1.1 Completeness of Exponentials on Subsets of (0,2π)
8.1.2 Periodization
8.1.3 Proof of Theorem 13
8.2 Uniqueness Sets for Sobolev Spaces
8.2.1 Sobolev Spaces
8.2.2 Periodic Gaps
8.2.3 Random Gaps
9 Back to Exponential Frames
9.1 Frames on Unbounded Sets
9.2 What Is a Good Exponential Frame?
9.2.1 Discrete Situation
9.3 Construction of Good Frames
9.4 Extraction of Frames from Continuous Frames
10 Sampling Bound for Bernstein Spaces
10.1 Sampling Bound
10.2 Sampling of Polynomials
10.3 Proof of Theorem 19
10.4 Interpolation Bound for Bernstein Spaces
11 Completeness of Translates
11.1 Examples
11.2 The Case p=1
11.3 The Case p=2
11.4 The Case p>2
11.5 The Case 1 < p < 2

Applied and Numerical Harmonic Analysis

Stephen D. Casey Kasso A. Okoudjou Michael Robinson Brian M. Sadler Editors

Sampling: Theory and Applications A Centennial Celebration of Claude Shannon

Applied and Numerical Harmonic Analysis

Series Editor
John J. Benedetto, University of Maryland, College Park, MD, USA

Advisory Editors
Akram Aldroubi, Vanderbilt University, Nashville, TN, USA
Douglas Cochran, Arizona State University, Phoenix, AZ, USA
Hans G. Feichtinger, University of Vienna, Vienna, Austria
Christopher Heil, Georgia Institute of Technology, Atlanta, GA, USA
Stéphane Jaffard, University of Paris XII, Paris, France
Jelena Kovačević, Carnegie Mellon University, Pittsburgh, PA, USA
Gitta Kutyniok, Technical University of Berlin, Berlin, Germany
Mauro Maggioni, Johns Hopkins University, Baltimore, MD, USA
Zuowei Shen, National University of Singapore, Singapore, Singapore
Thomas Strohmer, University of California, Davis, CA, USA
Yang Wang, Hong Kong University of Science & Technology, Kowloon, Hong Kong

More information about this series at http://www.springer.com/series/4968

Stephen D. Casey • Kasso A. Okoudjou Michael Robinson • Brian M. Sadler Editors

Sampling: Theory and Applications A Centennial Celebration of Claude Shannon

Editors
Stephen D. Casey, Department of Mathematics and Statistics, American University, Washington, DC, USA
Kasso A. Okoudjou, Norbert Wiener Center, University of Maryland, College Park, MD, USA
Michael Robinson, Department of Mathematics and Statistics, American University, Washington, DC, USA
Brian M. Sadler, Army Research Laboratory, Adelphi, MD, USA

ISSN 2296-5009    ISSN 2296-5017 (electronic)
Applied and Numerical Harmonic Analysis
ISBN 978-3-030-36290-4    ISBN 978-3-030-36291-1 (eBook)
https://doi.org/10.1007/978-3-030-36291-1
Mathematics Subject Classification: 42-XX, 42-06, 43-XX, 43-06, 65-XX, 65-06

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2020

All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

ANHA Series Preface

The Applied and Numerical Harmonic Analysis (ANHA) book series aims to provide the engineering, mathematical, and scientific communities with significant developments in harmonic analysis, ranging from abstract harmonic analysis to basic applications. The title of the series reflects the importance of applications and numerical implementation, but richness and relevance of applications and implementation depend fundamentally on the structure and depth of theoretical underpinnings. Thus, from our point of view, the interleaving of theory and applications and their creative symbiotic evolution is axiomatic.

Harmonic analysis is a wellspring of ideas and applicability that has flourished, developed, and deepened over time within many disciplines and by means of creative cross-fertilization with diverse areas. The intricate and fundamental relationship between harmonic analysis and fields such as signal processing, partial differential equations (PDEs), and image processing is reflected in our state-of-the-art ANHA series.

Our vision of modern harmonic analysis includes mathematical areas such as wavelet theory, Banach algebras, classical Fourier analysis, time-frequency analysis, and fractal geometry, as well as the diverse topics that impinge on them. For example, wavelet theory can be considered an appropriate tool to deal with some basic problems in digital signal processing, speech and image processing, geophysics, pattern recognition, biomedical engineering, and turbulence. These areas implement the latest technology from sampling methods on surfaces to fast algorithms and computer vision methods. The underlying mathematics of wavelet theory depends not only on classical Fourier analysis, but also on ideas from abstract harmonic analysis, including von Neumann algebras and the affine group. This leads to a study of the Heisenberg group and its relationship to Gabor systems, and of the metaplectic group for a meaningful interaction of signal decomposition methods.

The unifying influence of wavelet theory in the aforementioned topics illustrates the justification for providing a means for centralizing and disseminating information from the broader, but still focused, area of harmonic analysis. This will be a key role of ANHA. We intend to publish with the scope and interaction that such a host of issues demands.


Along with our commitment to publish mathematically significant works at the frontiers of harmonic analysis, we have a comparably strong commitment to publish major advances in the following applicable topics in which harmonic analysis plays a substantial role:

Antenna theory
Biomedical signal processing
Digital signal processing
Fast algorithms
Gabor theory and applications
Image processing
Numerical partial differential equations
Prediction theory
Radar applications
Sampling theory
Spectral estimation
Speech processing
Time-frequency and time-scale analysis
Wavelet theory

The above point of view for the ANHA book series is inspired by the history of Fourier analysis itself, whose tentacles reach into so many fields. In the last two centuries Fourier analysis has had a major impact on the development of mathematics, on the understanding of many engineering and scientific phenomena, and on the solution of some of the most important problems in mathematics and the sciences.

Historically, Fourier series were developed in the analysis of some of the classical PDEs of mathematical physics; these series were used to solve such equations. In order to understand Fourier series and the kinds of solutions they could represent, some of the most basic notions of analysis were defined, e.g., the concept of “function.” Since the coefficients of Fourier series are integrals, it is no surprise that Riemann integrals were conceived to deal with uniqueness properties of trigonometric series. Cantor’s set theory was also developed because of such uniqueness questions.

A basic problem in Fourier analysis is to show how complicated phenomena, such as sound waves, can be described in terms of elementary harmonics. There are two aspects of this problem: first, to find, or even define properly, the harmonics or spectrum of a given phenomenon, e.g., the spectroscopy problem in optics; second, to determine which phenomena can be constructed from given classes of harmonics, as done, for example, by the mechanical synthesizers in tidal analysis.

Fourier analysis is also the natural setting for many other problems in engineering, mathematics, and the sciences. For example, Wiener’s Tauberian theorem in Fourier analysis not only characterizes the behavior of the prime numbers, but also provides the proper notion of spectrum for phenomena such as white light; this latter process leads to the Fourier analysis associated with correlation functions in filtering and prediction problems, and these problems, in turn, deal naturally with Hardy spaces in the theory of complex variables. Nowadays, some of the theory of PDEs has given way to the study of Fourier integral operators. Problems in antenna theory are studied in terms of unimodular trigonometric polynomials. Applications of Fourier analysis abound in signal processing, whether with the fast Fourier transform (FFT), or filter design, or the


adaptive modeling inherent in time-frequency-scale methods such as wavelet theory. The coherent states of mathematical physics are translated and modulated Fourier transforms, and these are used, in conjunction with the uncertainty principle, for dealing with signal reconstruction in communications theory. We are back to the raison d’être of the ANHA series!

John J. Benedetto
Series Editor
University of Maryland, College Park

The figure is designed to reflect the rich harmonic analysis/signal processing community in the greater DC metropolitan region. Like DC itself, it radiates outward symmetrically from the Capitol. First, the function sin(π|z|)/(π|z|) was plotted in MATLAB in the region C \ D, where C represents the complex plane and D represents the unit disk in the plane. Then, the U.S. Capitol Dome was “lathed” onto the image, joining the radial sinc function at the unit circle ∂D. The design was created by Stephen Casey and Randy Mays, with Mays providing the Capitol Dome framework, filling it in with MATLAB coloring.

Preface

Seminal ideas are triggered by amazing insights, often derived from straightforward ideas. Many lead to important developments far beyond what was envisioned. The ideas of Claude E. Shannon on sampling and signal processing, entropy, information theory, and cryptography are examples of such seminal ideas. He placed what we commonly refer to as the Classical (a.k.a. Whittaker–Kotel’nikov–Shannon (WKS)) Sampling Theorem as a cornerstone of information theory in his paper Communication in the Presence of Noise.1 The harmonic analysis and signal processing communities have adopted Shannon’s point of view, as is evidenced in the papers in this monograph.

Our “Age of Information” presents us with new challenges, and, in particular, with respect to sampling. We are in the time of miniature and handheld devices for communications, robotics, and micro aerial vehicles, cognitive radio, radar, etc. We are also presented with the challenge of powering these and other systems. We marvel at the rich extent of mathematical research and significant applications that Shannon’s ideas have inspired, from the mathematics hiding in compact discs and videos to modern advances in signal and image processing, compressed sensing, deep learning, real and complex analysis, applied and computational harmonic analysis, geosciences, inverse problems, optics, computational neuroscience, etc. We have witnessed many breakthroughs in some of these areas in the past two decades, and more is yet to come.

The Applied and Numerical Harmonic Analysis (ANHA) series publishes works in harmonic analysis and its myriad of incredible applications. It has and will continue to report on the rich tapestry of these disciplines as they continue to grow. The papers in this volume are outgrowths of the SAMPTA conferences. The 13th SAMPTA occurred in July 2019 at Université de Bordeaux.

1 Shannon himself was careful to note that the Sampling Theorem did not originate with him. The history of the theorem is rich and includes the names Cauchy, Raabe, and Ogura, among others.


Overview of SAMPTA

SAMPTA (Sampling Theory and Applications) is a biennial interdisciplinary international conference for mathematicians, engineers, and applied scientists. The main purpose of SAMPTA is to exchange recent advances in sampling theory and to explore new trends and directions in the related areas of application. SAMPTA has traditionally focused on such fields as signal processing and image processing, coding theory, control theory, real and complex analysis, harmonic analysis, and the theory of differential equations. The conference has always featured plenary talks by prominent speakers, special sessions on selected topics reflecting the current trends in sampling theory and its applications to the engineering sciences, as well as regular sessions about traditional topics in sampling theory, and poster sessions. The conferences are a bridge between the mathematical and engineering signal processing communities. The mix between mathematicians and engineers is unique and leads to extremely useful and constructive dialogs. SAMPTA has had sessions on theory—compressed sensing, frames, geometry, wavelets, nonuniform and weighted sampling, finite rate of innovation, universal sampling, time-frequency analysis, deep learning, operator theory, and applications—A-to-D conversion, computational neuroscience, mobile sampling issues, and biomedical imaging.

SAMPTA Meetings

• SAMPTA 2019: Université de Bordeaux, Bordeaux, France, July 8–12, 2019
• SAMPTA 2017: Tallinn University, Tallinn, Estonia, July 3–7, 2017
• SAMPTA 2015: American University, Washington DC, USA, May 25–29, 2015
• SAMPTA 2013: Jacobs University, Bremen, Germany, July 1–5, 2013
• SAMPTA 2011: Nanyang Technological University, Singapore, May 2–6, 2011
• SAMPTA 2009: CIRM, Marseilles, France, May 18–22, 2009
• SAMPTA 2007: Aristotle University, Thessaloniki, Greece, June 1–5, 2007
• SAMPTA 2005: Samsun, Turkey, July 10–15, 2005
• SAMPTA 2003: Strobl, Austria, May 26–30, 2003
• SAMPTA 2001: UCF, Orlando, Florida, USA, May 13–17, 2001
• SAMPTA 1999: Loen, Norway, August 11–14, 1999
• SAMPTA 1997: University of Aveiro, Aveiro, Portugal, July 16–19, 1997
• SAMPTA 1995: Riga, Latvia, September 20–22, 1995

The SAMPTA 2015 conference had 203 attendees and achieved some rather notable milestones. The meeting was endorsed by the Institute of Electrical and Electronics Engineers (IEEE) and the Society for Industrial and Applied Mathematics (SIAM). The conference papers were published in IEEE Xplore. SAMPTA 2015 also received grant support from the Air Force Office of Scientific Research (AFOSR) (BAA-AFOSR-2014-0001) and the Army Research Office (ARO) (BAA W911NF-12-R-0012-02). The conference was also supported by the efforts of the


College of Arts and Sciences and the Department of Mathematics and Statistics at American University and the Norbert Wiener Center at the University of Maryland. The interaction at SAMPTA has pushed the envelope forward in both the mathematics and engineering communities. The SAMPTA conferences have served, and will continue to serve, as a meeting ground for harmonic analysts and electrical engineers, giving graduate students and junior investigators a chance to learn about developments in the subject. The conference provides the community an opportunity to interact with some of the leaders in the field in a relaxed and yet very constructive environment. We thank all the authors, editors, and referees for their contributions to this volume and their ongoing efforts to support the Sampling/ANHA/SAMPTA community.

Book Chapters

The volume opens with the delightful essay Claude Shannon: American Genius by Professor John Rowland Higgins (Anglia Polytech), our historian on sampling theory and expositor par excellence. The essay is a salute to Shannon’s genius and tells us the tales of not only his seminal mathematics but also of several of his extremely clever inventions. The subsequent chapters report on major advances in their areas; collectively, they explore impressive frontiers of modern sampling theory.

The first book chapter—Reconstruction of Signals: Uniqueness and Stable Sampling—is by Alexander Olevskii (Tel Aviv) and Alexander Ulanovskii (Stavanger). It addresses sets of uniqueness and sampling, discussing uniform densities, universal sampling, and universal completeness. Recall that the classical sampling problem is to reconstruct, in a unique and stable way, a continuous signal with a given spectrum S from its samples on a discrete set Λ. Through the Fourier transform, the problems ask for which sets of frequencies Λ the exponential system {e^{iλt}, λ ∈ Λ} is complete, or constitutes a frame or a Riesz sequence, in the space L2 on a given set S of finite measure. When S is a single interval, these problems were essentially solved in deep papers by A. Beurling and P. Malliavin in terms of appropriate densities of the discrete set Λ. The main tool in these papers was the theory of entire functions. However, this tool becomes considerably less effective in the case of disconnected spectra S. H. Landau extended the necessity of the density conditions in these results to the general bounded spectra. However, when S is a disconnected set, no sharp sufficient condition for sampling and completeness can be expressed in terms of the density of the set Λ. Both the size and arithmetic structure of Λ come into play. This chapter gives a short introduction to the subject of sampling and related problems. It presents both classical and recent results on universal sampling.

The chapter by M. Maurice Dodson (York) and J. Rowland Higgins (Anglia Polytech)—Sampling Theory in a Fourier Algebra Setting—gives a way of developing sampling in terms of the Approximate Sampling Theorem. This structure, a Fourier algebra setting, is not as well-known as it might be, possibly because


the Approximate Sampling Theorem, central to their discussion, had what could be called a mysterious birth and a confused adolescence. They also discuss functions of the familiar bandlimited and bandpass types, showing that these too have a place in this viewpoint. The chapter combines an expository and historical treatment of the origins of exact and approximate sampling, including bandpass sampling, all from the Fourier algebra viewpoint. It has two objectives. The first is to provide an accessible and rigorous account of this sampling theory. The various cases mentioned above are each discussed in order to show that the Fourier algebra, a Banach space, is a broad and natural setting for the theory. The second objective is to clarify the early development of approximate sampling in the Fourier algebra and to unravel its origins.

This is followed by Sampling Series, Refinable Sampling Kernels, and Frequency Band Limited Functions by Wolodymyr R. Madych (Connecticut). It studies sampling series and their relationship to frequency bandlimited functions. Motivated by the theories of multiresolution analysis and subdivision, particular attention is paid to sampling series whose kernels are refinable. After developing the basic properties of the sampling kernels under study, the chapter focuses on three families of such kernels: 1. damped cardinal sines, 2. the fundamental functions for cardinal spline interpolation, and 3. a family of compactly supported kernels defined in terms of their masks. The limiting kernel of each family is the cardinal sine. In cases (1) and (2), it presents results concerning the limiting behavior of the corresponding sampling series when the data samples {c_n} have polynomial growth as n −→ ±∞.

The chapter Prolate Shift Frames and Sampling of Bandlimited Functions by Jeffrey Hogan (Newcastle) and Joseph Lakey (New Mexico State) reviews recent developments involving frames of time- and bandlimited functions for Paley–Wiener spaces. The classic Shannon sampling theorem can be viewed as a special case of (generalized) sampling reconstructions for bandlimited signals in which the signal is expressed as a superposition of shifts of finitely many bandlimited generators. The coefficients of these expansions can be regarded as generalized samples taken at a Nyquist rate determined by the number of generators and basic shift rate parameters. When the shifts of the generators form a frame for the Paley–Wiener space, the coefficients are inner products with dual frame elements. There is a trade-off between time localization of the generators and localization of dual generators. The Shannon sampling theorem is an extreme manifestation in which the coefficients are point values but the generating sinc function is poorly localized in time. This work reviews and extends some recent related work of the authors regarding frames for the Paley–Wiener space generated by shifts of prolate spheroidal wave functions and considers the question of trade-off between localization of the generators and of the dual frames.

The volume finishes with A Survey on the Unconditional Convergence and the Invertibility of Frame Multipliers with Implementation by Peter Balazs and Diana Stoeva (Acoustics Research Institute, Austrian Academy of Sciences). Multipliers


are operators which consist of an analysis stage, a multiplication, and then a synthesis stage. This is a very natural concept, and it occurs in a lot of scientific questions in mathematics, physics, and engineering. In physics, multipliers are the link between classical and quantum mechanics, the so-called quantization operators. Multipliers link sequences (or functions) m_k corresponding to the measurable variables in classical physics to operators M_{m,Φ,Ψ}, which are the measurables in quantum mechanics. In signal processing, multipliers are a particular way to implement time-variant filters. One of the goals in signal processing is to find a transform which allows certain properties of the signal to be easily discovered. Via such a transform, one can focus on those properties of the signal one is interested in or would like to change. The coefficients can be manipulated directly in the transform domain, thus allowing certain signal features to be amplified or attenuated. This is, for example, what a sound engineer operating an equalizer does during a concert, i.e., changing the amplification of certain frequency bands in real time. Convolution operators correspond to a multiplication in the Fourier domain, and therefore to a time-invariant change of frequency content.

The chapter presents a survey of frame multipliers and related concepts. In particular, it includes a short motivation of why multipliers are of interest and a review and extensions of recent results. These include the unconditional convergence of multipliers, sufficient and/or necessary conditions for the invertibility of multipliers, and representation of the inverse via Neumann-like series and via multipliers with particular parameters. Multipliers for frames with specific structure, namely Gabor multipliers, are also considered. Results for the representation of the inverse multiplier are implemented in MATLAB code, and the algorithms are described.
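To make the analyze–multiply–synthesize structure described above concrete, here is a minimal finite-dimensional Python sketch (added as an illustration, not taken from the chapter); the frames Psi and Phi, the symbol m, and the dimensions are arbitrary toy choices standing in for the function-space frames treated in the survey.

```python
import numpy as np

# Toy frame multiplier M_{m,Phi,Psi} in R^4: analyze with Psi, multiply the
# coefficients by the symbol m, then resynthesize with Phi.
rng = np.random.default_rng(2)
d, k = 4, 6
Psi = rng.standard_normal((k, d))              # analysis frame, rows psi_n
Phi = rng.standard_normal((k, d))              # synthesis frame, rows phi_n
m = np.array([1.0, 0.5, 0.0, 2.0, 1.0, 0.3])   # symbol: amplify/attenuate bands

def multiplier(x):
    coeffs = Psi @ x                           # analysis: <x, psi_n>
    return Phi.T @ (m * coeffs)                # weighted synthesis

x = rng.standard_normal(d)
print(multiplier(x))
```

This is the equalizer picture in miniature: each coefficient is scaled before the signal is rebuilt.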

Crossroads

Our wonderful discipline sits at a crossroads. Just as mathematics and engineering used to be driven by mathematical physics, our mathematics and engineering will be motivated by the science of information. The Sampling/ANHA/SAMPTA community is in a unique position to contribute to these efforts. Our work provides the tools for advances in a very wide spectrum of areas, from communications, data, and information, to advances in signal and image processing, to breakthroughs in the physical, medical, social, and political sciences. These efforts are ongoing, as witnessed by the SAMPTA conferences and the growing body of literature generated by the Sampling/ANHA/SAMPTA community.

Stephen D. Casey, Washington, DC, USA
Kasso A. Okoudjou, College Park, MD, USA
Michael Robinson, Washington, DC, USA
Brian M. Sadler, Adelphi, MD, USA

Claude E. Shannon

The Classical (WKS) Sampling Theorem

Let f ∈ PW, sinc(t) = sin((π/T)t) / (πt), and δ_{nT}(t) = δ(t − nT).

1. If T ≤ 1/2, then for all t ∈ R,

   f(t) = T Σ_{n∈Z} f(nT) · sin((π/T)(t − nT)) / (π(t − nT)) = T Σ_{n∈Z} (δ_{nT} f ∗ sinc)(t).

2. If T ≤ 1/2 and f(nT) = 0 for all n ∈ Z, then f ≡ 0.
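As a quick numerical illustration of the cardinal series above (a sketch added here, not part of the original text), the Python snippet below reconstructs a band-limited test signal from its samples taken with spacing T = 1/2. The test function np.sinc (spectrum [−π, π]) and the truncation of the series are choices made for the demo, assuming the band allowed by T ≤ 1/2 contains [−π, π].

```python
import numpy as np

# Reconstruct f from its samples f(nT) via the cardinal (sinc) series.
# np.sinc(x) = sin(pi x)/(pi x), so sum_n f(nT) sinc((t - nT)/T) equals the
# series T * sum_n f(nT) sin(pi (t - nT)/T) / (pi (t - nT)) stated above.
f = np.sinc                                # test signal, spectrum [-pi, pi]
T = 0.5                                    # sampling period, T <= 1/2
n = np.arange(-400, 401)                   # truncate the infinite series
t = np.linspace(-3.0, 3.0, 1201)           # points at which to reconstruct

samples = f(n * T)
kernel = np.sinc((t[:, None] - n * T) / T)
reconstruction = kernel @ samples

print("max error:", np.max(np.abs(reconstruction - f(t))))  # small; limited only by truncation
```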

Dedication—John Rowland Higgins (17 August 1935–24 February 2020)

The Sampling/ANHA/SAMPTA community is shocked and saddened by Rowland Higgins’ unexpected death during the final stages of publication of this volume. Just a few weeks earlier, Rowland and his wife Nan had returned from the south of France, where they had lived since retirement, to Cambridge, where they used to live, to be with their children and grandchildren. Despite health problems over recent years, Rowland had continued to think about sampling and, having completed his joint chapter in this volume, was planning to write more papers. Alas, it was not to be. Rowland was a mainstay of our Sampling/ANHA/SAMPTA community from the start. His papers, books, and lectures were key elements in the growing body of knowledge of the community. He served on the original Steering Committee of the SAMPTA meetings and then went on to fulfill his duties as a Founding Member. Rowland was a gift to our community. Through his deep interest in the rich mathematics of sampling theory and his wide knowledge of its fascinating history, he provided for all of us an informed perspective and a sense of the history of the subject. We were able to see the deep roots and interconnections of sampling and appreciate the tremendous potential of this theory, as it branched out to exciting new areas, even beyond the genius of Shannon. However, even more brilliant than the excellence of Rowland as a mathematician and lecturer was his role in our community as a friend and mentor. With an encyclopedic knowledge of sampling which he wore lightly, he was in a real sense a scholar and a gentleman. His kindness, generosity, and courtesy brought out, in many ways, our best. And for that gift, we will always be grateful. He will be deeply missed by all of us and by his family. In honor of his enormous contribution to the Sampling/ANHA/SAMPTA community, the editors are dedicating this book to Rowland Higgins.

Stephen D. Casey, Washington, DC, USA
M. Maurice Dodson, York, UK


Contents

Claude Shannon: American Genius (J. Rowland Higgins) 1
Reconstruction of Signals: Uniqueness and Stable Sampling (Alexander Olevskii and Alexander Ulanovskii) 9
Sampling Theory in a Fourier Algebra Setting (M. Maurice Dodson and J. Rowland Higgins) 51
Sampling Series, Refinable Sampling Kernels, and Frequency Band Limited Functions (Wolodymyr R. Madych) 93
Prolate Shift Frames and Sampling of Bandlimited Functions (Jeffrey A. Hogan and Joseph D. Lakey) 141
A Survey on the Unconditional Convergence and the Invertibility of Frame Multipliers with Implementation (Diana T. Stoeva and Peter Balazs) 169

Claude Shannon: American Genius

J. Rowland Higgins

Abstract Claude Shannon was born one hundred years ago. This essay is a salute to his genius.

Claude Shannon was born one hundred years ago. This essay is a salute to his genius. What is genius? Opinions will differ of course, but in general terms we would not be far wrong in allowing that genius goes beyond mere talent. The philosopher Schopenhauer said1 “Talent is like the marksman who hits a target which others cannot reach; genius is like the marksman who hits a target, as far as which others cannot even see.” Some of the more explicit qualities often associated with genius are: an ability to initiate major advances in a subject, sometimes even to create a new subject, a prodigious memory, and an unflagging curiosity. Astounding examples are not hard to come by, and Claude Shannon sits well in their company. As for major advances, Shannon’s three most important advances in the sciences of his time were, undoubtedly, his creation of information theory, his contributions to switching theory and his contributions to cryptography. Next, memory. One thinks of Euler’s extraordinary memory, for example; he knew Homer’s Iliad by heart, and given any page number in his copy he could remember the first and last lines on that page. Shannon’s memory is evident in, for example, his practice of dictating his articles for publication entirely from memory and without the need for correction. Incidentally, his collected works list 127 publications.

1 In his book: Die Welt als Wille und Vorstellung (The World as Will and Representation, Tr. R.B. Haldane).

J. R. Higgins (deceased) Cambridge, UK © Springer Nature Switzerland AG 2020 S. D. Casey et al. (eds.), Sampling: Theory and Applications, Applied and Numerical Harmonic Analysis, https://doi.org/10.1007/978-3-030-36291-1_1


Now to curiosity. The art historian Kenneth Clark recognizes2 Leonardo da Vinci as one of the greatest geniuses, and writes that “all his gifts were dominated by one ruling passion—curiosity. He was the most relentlessly curious man in history. Everything he saw made him ask why and how.” It is clear that Shannon too was strongly motivated by curiosity. In his own words:3 I don’t think I was ever motivated by the notion of winning prizes, . . . I was more motivated by curiosity. Never by the desire for financial gain. . . . There are many things I have done and never written up at all.

Much scholarly attention has been given to Claude Shannon and his work in mathematics and the theory of communications; throughout all of this attention his genius is in full view. But, strange as it may seem, among the several excellent commentaries on Shannon to be found in the literature, there are some that make little or no acknowledgment of his usage of sampling theory. This may be due to the fact that by his time basic sampling theorems were already considered part of the general background4; however, it was Shannon who recognized the theorem as having a central place in information theory. We shall return to these points later, with particular emphasis on sampling methods and some of their many generalizations and extensions; meanwhile, let us expand a little on Shannon and the three major advances mentioned above, and on Shannon the inventor of strange devices. First, in the following excerpt from the paper by Gallager5 we begin to get an idea of Shannon’s creation. Much more is to be found in this source.

By 1948, all the pieces of “A mathematical theory of communication”6 had come together in Shannon’s head. He had been working on this project, on and off, for eight years. There were no drafts or partial manuscripts; remarkably, he was able to keep the entire creation in his head . . . . His theory was about the entire process of telecommunications, from source to data compression to channel coding to modulation to channel noise to detection to error correction. . . . Shannon employed the provocative term “information” for what was being communicated . . . . This new notion of information reopened many age-old debates about the difference between knowledge, information, data, and so forth . . . the paper (that of footnote 6) totally changed both the understanding and the design of telecommunication systems . . . .

2 Civilization, a personal view, Penguin Books, 1969, p. 105.
3 The quotation is from an interview: Anthony Liversidge, “Profile of Claude Shannon,” Claude Elwood Shannon Collected Papers, edited by N. J. A. Sloane, Aaron D. Wyner (IEEE, 1993).
4 His source is found in the work of J. M. Whittaker from the middle 1930s.
5 “Claude E. Shannon: A retrospective on his life, work and impact.” IEEE Trans. Info. Th., 47, 2001.
6 Bell System Tech J., 27, pp. 379–423, 623–656, 1948.


It is important to realize how surprising the paper “A mathematical theory of communication” really was, even to Shannon’s close colleagues. The following quotation7 underlines this. “We had an internal distribution system” recalls a Bell Labs mathematician, Brock McMillan, who worked in the office next to Shannon’s at Murray Hill at the time, “so people in the math department could read things before publication. Shannon was not particularly talkative. He talked to us about his theories maybe a little bit. But the 1948 paper caught me almost stone cold.” Most of the Bell researchers, with the exception of Shannon’s friend Barney Oliver, experienced a similar sense of shock. John Pierce likened the surprise he felt upon encountering his friend’s ideas to the dropping of a powerful explosive.

Second, in 1937 Shannon wrote a master’s thesis at MIT entitled “A symbolic analysis of relay and switching circuits.” Shannon’s insight was decisive; the symbolic logic of Boole’s Laws of thought (1854) provided a mathematical model for switching theory. Achieved at the age of 21, this thesis is one of his main claims to fame and was received with abundant and universal praise. A detailed study can be found in the book by Nahin.8 Third, Shannon became interested in cryptography during WWII. In his only published paper in this area,9 he established a mathematical theory for secrecy systems and showed that the encryption system known as the “one-time pad” achieves perfect secrecy. This paper was enormously influential. Kotel’nikov also proved this in a report of 1941,10 which is apparently still classified! Shannon’s childhood and undergraduate days were spent in Michigan. It was here that he started inventing gadgets and strange devices, including model airplanes and a telegraph device to a neighbor’s house using a wire fence. As time went by we find him inventing a calculator that worked in Roman numerals, a motorized pogo stick and a juggling machine that could actually juggle three balls and was decorated to look like the entertainer W.C. Fields. These were just a few of Shannon’s devices. During his time at Bell Telephone Laboratories he could be seen in the corridors riding a unicycle and juggling four balls at the same time. Later, Shannon gave the unicycle’s wheel an eccentric mounting whereupon he was seen to bob up and down “like a duck.” We may ask why Shannon gave time and effort to what may appear at first to be mere frivolity. Well, surely it was not a waste of time. I think he did it, at first anyway, out of pure joie de vivre; he could do these unusual things and doing them must have given him enormous pleasure. But a more important consideration is the attunement of the eager mind, the habituation to a way of thought as the purposes inevitably became more sophisticated and more serious.

7 Found in a footnote to “The idea factory: the Bell Labs and the great age of American innovation,” The Penguin Press, 2012, by Jon Gertner.
8 The Logician and the Engineer: How George Boole and Claude Shannon created the information age. Princeton University Press, 2013.
9 “Communication theory of secrecy systems,” BSTJ, 1949.
10 See Sergei N. Molotkov, “Quantum cryptography and V. A. Kotel’nikov’s one-time key and sampling theorems.”


One example of this emerging maturity was a mechanical mouse; another, the programming of computers for playing chess. As for the mouse, in about 1950 Shannon introduced a mechanical mouse that could negotiate a maze successfully.11 It should be borne in mind that this mouse belonged to the pre-electronic era; in fact, he was controlled using relay circuits and a magnet underneath the maze, and proceeded via an exhaustive search algorithm. Although it is an interesting and early example of such an algorithm, current opinion does not allow Shannon’s mouse the status of an artificial learning device, or indeed to have any intelligence at all. Although striking at the time, the algorithm is now considered elementary. Shannon began the formalization of chess via computer programs in a paper published in The Philosophical Magazine12 of 1950: “Programming a computer for playing chess.”13 Let Shannon himself be our guide as we contemplate, briefly, the theme of his paper. Following on directly from the title Shannon tells us: Although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and greater significance.

Shannon goes on to list several possibilities. Among these are: machines for performing symbolic mathematical operations, for translation of one language into another, for making strategic decisions and for orchestrating a melody. Shannon goes on: It is believed that all of these and many other devices of a similar nature are possible developments in the immediate future.

He describes how his chess-playing programs were designed, and emphasizes that the computers of his time were general enough and flexible enough to work symbolically with elements representing words, propositions, or other conceptual entities and speculates on whether computers can be said to “think.” Thus, Shannon’s interest was not so much in the game of chess itself as in the evolution of software for more important purposes. Before that could happen chess would provide a move in the right direction. Shannon had the insight to realize14 (from now on let us call the paper in footnote 14 just (1949)) that sampling theory is an essential part of information theory since it shows, in particular, that band limited analog signals, that is, continuous

11 The mouse was called Theseus, a name borrowed appropriately from Greek mythology.
12 One of the earliest scientific journals, founded by Alexander Tilloch in 1798.
13 In 1965 Shannon met the then world chess champion M.M. Botvinnik at a conference in Moscow. Botvinnik had been three times world champion and was a pioneer of computerized chess-playing programs. Information about this meeting seems hard to come by, but it is known that they discussed chess and its computerization. More can be found in Botvinnik’s book: Computers, chess and long-range planning, Heidelberg Science Library, 1970, Springer, Translator’s preface.
14 “Communication in the presence of noise,” Proc. IRE, 37, 1949, pp. 10–21.


functions that are square integrable over R and have Fourier transform supported on the interval [−πw, πw] (call this interval the “set of spectral support”), carry the same information as the discrete signal composed only of their samples. This is the essence of analog-to-digital and digital-to-analog conversion. Shannon’s development of sampling theory is mostly found in (1949); by bringing parts of this paper together we can say that Shannon had, in fact, identified a classical sampling principle. It consists of four parts: sets of uniqueness, sampling series, sampling rate, and stability. Actually, the principle may be allowed a fifth part because one could add the necessity of adopting quantized values of the samples, whose instantaneous values are of course unavailable. Shannon and his colleagues recognized the need for quantization15 in the context of pulse code modulation and its early development. We will now make a brief review of Shannon’s treatment.

In the paper (1949) Shannon tacitly assumes that signals are of finite energy, that is, they belong to L2(R). First, let f belong to the class B of band limited analog signals. Then the set {n/w}, n ∈ Z, is a set of uniqueness for B; that is, if f ∈ B vanishes at every point n/w, then it vanishes identically. Shannon remarked that this was “common knowledge in the communications art” and gave an intuitive proof, followed by a more mathematical proof. Second, for a representation of f in terms of its samples Shannon chose the sampling series

f(t) = Σ_{n∈Z} f(n/w) · sin(π(wt − n)) / (π(wt − n)).

He found that this sampling series is an orthonormal expansion in B and fits well into the surrounding Fourier analysis. He gave an intuitive proof here as well, then appealed to the classical literature of the mid 1930s for a rigorous proof. Third, we saw above that the spacing between sample points was 1/w seconds. Shannon called this the Nyquist interval; the rate of taking samples is therefore w samples per second and this is now known as the Nyquist sampling rate. Shannon’s treatment of sampling and the Nyquist rate is found in (1949) (§II) and constitutes what soon became known as “Shannon’s sampling theorem.” Although other sources are now known, this name persists in the literature. It is often called just the classical sampling theorem. Fourth, we are not quite finished; there is still stability to consider. Research at the (then) Bell Telephone Laboratories dating from the middle 1960s made it clear that a sampling theorem was incomplete without an associated condition of stability. Intuitively, this means that in digital-to-analog conversion, a small error in reading a sample must produce only a correspondingly small error in the recovered analog signal. Shannon had anticipated this in (1949), page 15.

15 B.M. Oliver, J.R. Pierce and C.E. Shannon, “The Philosophy of PCM,” Proc. IRE, 36, 1948, pp. 1324–1331.
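The uniqueness part of the principle fails as soon as the sampling rate drops below the Nyquist rate. The short Python sketch below (added here for illustration; the rate and the pair of frequencies are arbitrary choices) shows the classical aliasing effect: two different tones produce identical samples.

```python
import numpy as np

# Sampling at fs = 1 sample per second cannot distinguish a 0.2 Hz tone from
# a 1.2 Hz tone: the two signals agree at every sampling instant t = n/fs.
fs = 1.0
n = np.arange(20)
t = n / fs

low = np.cos(2 * np.pi * 0.2 * t)
high = np.cos(2 * np.pi * (0.2 + fs) * t)   # frequency shifted by fs

print(np.allclose(low, high))               # True: same samples, different signals
```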


Shannon made explicit suggestions for the extension of sampling methods. He mentions the band pass case (where the spectral support has more than one component); derivative sampling (which led eventually to ideas of multi-channel sampling); and cases where the sample points are irregularly spaced. Some of these ideas have a long history, going back to seminal work of Cauchy in the 1840s. There are settings for the extension and generalization of sampling theorems that were not directly observed by Shannon in the paper (1949), but which, with hindsight of course, may be seen there nevertheless and indicate directions in which the subject was to develop. All the following settings for sampling theory are suggested by the classical sampling series (above) and all have been studied in the literature.

A good example is the setting of locally compact abelian groups. It is a natural step to pass from classical Fourier analysis to abstract harmonic analysis, since R is a locally compact abelian group under addition with discrete subgroup {n/w}, n ∈ Z. Here an abstract treatment starts to appear. Abstract versions of sampling in this setting have been known for many years; it is so general as to include the settings of multi-dimensional Euclidean space, the circle, the torus, the integers, and the dyadic group, to mention just a few.

Another good example is the setting of reproducing kernel Hilbert space; and recently of reproducing kernel Banach space. This setting is suggested by the sampling series above, which can be viewed as a discrete reproducing formula, f being “reproduced” from a discrete set of samples. The setting of time–frequency analysis is suggested by observing the wavelet-like appearance of the expansion functions in the sampling series above. Here the phrase “Shannon wavelet” gained currency. More settings are suggested by the observation that, on formally replacing t with a complex variable z, the sampling series takes the form of an expansion in integer translates of a basic function, sin(z)/z, which is entire and of exponential type. So here is another potentially favorable setting. Further fruitful settings in this general area are: the class of polynomially bounded entire functions, and the classes of harmonic functions and of polyharmonic functions. Again, the translation structure of the sampling series suggests that shift invariant spaces will provide another fruitful setting.

Several settings for a sampling theory arise from suggestions that are rather more tenuous than those mentioned above. For example, it is known that aspects of sampling theory in the Fourier algebra, which comprises inverse Fourier transforms of elements of L1(R), are direct extensions of the classical case. Further examples are found in the theory of distributions and in stochastic analysis. A slightly different usage of the sampling series is found in certain expansions for special functions. Other areas of development include sampling and integral transforms, sampling and uncertainty principles, the problem of missing samples, and many more. Perhaps there are still areas that remain to be discovered.

Shannon did not claim to be the originator of the sampling theorem, but he did realize its significance for information theory. The very existence of the journal


(Sampling Theory in Signal and Image Processing) and the biennial SAMPTA conferences testify to Shannon’s long-lasting influence.

Acknowledgements The present essay has been helpfully informed in several places by articles in Wikipedia. The author takes pleasure in thanking Maurice Dodson, Paulo Ferreira, David A. Gustafson, and Gerhard Schmeisser for their help, and my wife Nancy for her constant support.

Reconstruction of Signals: Uniqueness and Stable Sampling

Alexander Olevskii and Alexander Ulanovskii

Abstract The classical sampling problem is to reconstruct a continuous signal with a given spectrum S from its samples on a discrete set Λ. Through the Fourier transform, the problems ask for which sets of frequencies Λ the exponential system {e^{iλt}, λ ∈ Λ} is complete, or constitutes a frame, in the space L2 on a given set S of finite measure. When S is a single interval, these problems were essentially solved by A. Beurling and by A. Beurling and P. Malliavin in terms of appropriate densities of the discrete set Λ. H. Landau extended the necessity of the density conditions in these results to general bounded spectra. However, when S is a disconnected set, no sharp sufficient condition for sampling and completeness can be expressed in terms of the density of the set Λ. Not only the size, but also the arithmetic structure of Λ comes into play. This paper gives a short introduction to the subject of sampling and related problems. We present both classical and recent results.

A. Olevskii, Tel Aviv University, Tel Aviv, Israel. e-mail: [email protected]
A. Ulanovskii, Stavanger University, Stavanger, Norway. e-mail: [email protected]

© Springer Nature Switzerland AG 2020
S. D. Casey et al. (eds.), Sampling: Theory and Applications, Applied and Numerical Harmonic Analysis, https://doi.org/10.1007/978-3-030-36291-1_2

1 Orthogonal Bases, Riesz Bases, and Complete Systems

Let H be a separable infinite-dimensional complex Hilbert space equipped with an inner product ⟨·, ·⟩_H and the norm ‖·‖_H = ⟨·, ·⟩_H^{1/2}. We are interested in systems of vectors in H which have good approximation or representation properties. The orthonormal bases (ONBs) enjoy the “best” representation property. It is well-known that given an ONB {u_n}, every vector x ∈ H can be represented as a series

x = Σ_n c_n u_n,    (1)

which converges to x in the norm of H. The coefficients are defined by the Fourier formula c_n = ⟨x, u_n⟩, and the Parseval equality holds

‖x‖_H² = Σ_n |c_n|².

We will be mostly interested in the case H = L2(S), where S is a measurable set in one dimension. However, most of the results below hold also in several dimensions. Here and below L2(S) denotes the subspace of L2-functions which vanish a.e. outside S.

There are a number of important ONBs in the L2-spaces on an interval, on the whole line or in Rn. In particular, the classical Haar system was historically the first example of the so-called wavelets. It is an ONB generated from a single function by translations and dilations (compressions). Such systems have been studied actively within the last decades and turned out to be useful in applications. In this paper we will focus on the representation/approximation properties of the exponential systems.

Let us say that a set Λ ⊂ R is uniformly discrete (u.d.), if

δ(Λ) := inf_{λ,λ′∈Λ, λ≠λ′} |λ − λ′| > 0.    (2)

The constant δ(Λ) is called the separation constant of Λ. Denote by

E(Λ) := {e^{iλt}, λ ∈ Λ},   t ∈ R,

the corresponding system of exponentials. It is commonly known that the system

{ (1/√(2π)) e^{int}, n ∈ Z }

is an ONB in the space L2(−π, π). By re-scaling and normalization, one can construct an exponential ONB in L2(I) for any given interval I.
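The short computation below (added as an illustration; the test function F(t) = t, the quadrature grid, and the truncation range are arbitrary choices) checks the Parseval equality numerically for this exponential ONB.

```python
import numpy as np

# Parseval check for u_n(t) = e^{int}/sqrt(2 pi) on L^2(-pi, pi), with F(t) = t.
t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]
F = t

norm_sq = np.sum(np.abs(F) ** 2) * dt                 # ||F||^2 = 2 pi^3 / 3 here

ns = np.arange(-200, 201)
coeffs = np.array([np.sum(F * np.exp(-1j * n * t)) * dt / np.sqrt(2 * np.pi)
                   for n in ns])                      # c_n = <F, u_n>
print(norm_sq, np.sum(np.abs(coeffs) ** 2))           # approximately equal (truncation in n)
```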


However, if S ⊂ R is not a single interval, the space L2(S) does not always admit an exponential ONB. For example, it is easy to check that for the simple set S = [0, 2] ∪ [3, 5] there is no set Λ such that E(Λ) forms an orthogonal basis in L2(S) (for a more general result, see [27]). One may ask for which sets S ⊂ Rn of positive measure does the space L2(S) admit an exponential ONB? Fuglede’s conjecture [15] says that L2(S) admits an exponential ONB if and only if it is possible to tile Rn by a family of translates of S. The first counterexample was constructed by Tao [53] who showed that Fuglede’s conjecture is false in Rn for n ≥ 5. Later, Tao’s counterexample was improved to disprove both directions in Fuglede’s conjecture for dimensions 3 or higher. In the cases of dimensions 1 and 2, both directions in the conjecture are still open. On the other hand, it was shown in [20] that the conjecture holds for convex compact sets in the plane. A similar result for convex polytopes in R3 is proved in [17]. Finally, recently it has been proved in [29] that Fuglede’s conjecture for convex domains is true in all dimensions.

Fortunately, there are more flexible bases which preserve the main advantages of ONBs.

Definition 1 A collection of vectors {u_n} in H is called a Riesz basis for H if it is the image of an orthonormal basis for H under an invertible linear transformation. In other words, if there is an orthonormal basis {e_n} for H and an invertible linear transformation T such that T e_n = u_n for all n.

If {u_n} is a Riesz basis for H, then every x ∈ H admits a unique representation (1), and there is a constant C which does not depend on x such that the coefficients c_n in (1) satisfy

Σ_n |c_n|² ≤ C ‖x‖_H².

A recent result [25] shows that L2(S) admits an exponential Riesz basis whenever S is a finite union of disjoint intervals. The same is true for a finite union of rectangles in Rn with edges parallel to the axes, see [26]. However, there are not many other examples of sets S in one or several dimensions, such that the space L2(S) admits an exponential Riesz basis. For example, it is not known if an exponential Riesz basis exists for L2(S), where S is the unit disk in R2. On the other hand, it is not difficult to find an exponential system E(Λ) which is complete in L2(S).

Definition 2 A collection of vectors {u_n} in a Hilbert space H is called complete, if for every vector x ∈ H and every ε > 0, x admits approximation with error ε,

‖x − p‖_H < ε,

by a finite linear combination of the form p := Σ_n c_n u_n.

A collection {un } is complete in a Hilbert space H if and only if there is no vector in H orthogonal to all un . Observe that the completeness property does not guarantee the possibility of decomposition of a vector into a series. Take, for instance, the system of powers {t j , j = 0, 1, 2, . . . }. By the Weierstrass approximation theorem, it is complete in L2 on any interval, but only analytic functions can be decomposed in a power series. Therefore, it is desirable to have a concept of a “good” system of vectors in a Hilbert space, which, on the one hand, inherits nice properties of ONBs and, on the other hand, extends their constructive abilities. Such an important concept, the concept of frame, was introduced by Duffin and Schaeffer in [12].

2 Frames

Definition 3 A collection of vectors {u_n} in a Hilbert space H is called a frame if there exist numbers 0 < A ≤ B such that

A ‖x‖_H² ≤ Σ_n |⟨x, u_n⟩|² ≤ B ‖x‖_H²,   for every x ∈ H.    (3)

The optimal constants (maximal for A and minimal for B) are called the frame bounds. The right inequality in (3) is called the Bessel inequality. In most situations, it is not difficult to check whether it holds or not. Determining whether the left inequality holds is usually more problematic.

Consider the operator T acting from H to l² as T x = {⟨x, u_n⟩_H}_n. T is called the analysis operator. Due to the Bessel inequality, T is a bounded linear operator satisfying ‖T‖ ≤ √B. The adjoint operator acts from l² to H, and one may show that

T*{c_n} = Σ_n c_n u_n.

T* is called the synthesis operator. Since T* has the same norm as T, we have ‖T*‖ ≤ √B. Hence, the Bessel inequality is equivalent to the inequality

‖Σ_n c_n u_n‖_H² ≤ B Σ_n |c_n|²,   for every sequence {c_n} ∈ l².
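A finite-dimensional sketch (added here; the random vectors, seed, and dimensions are arbitrary) may help fix ideas: for finitely many vectors spanning R^d, the optimal frame bounds A and B in (3) are the extreme eigenvalues of the frame operator T*T.

```python
import numpy as np

# Frame bounds of a finite collection u_1..u_m spanning R^d: A and B are the
# smallest/largest eigenvalues of the frame operator S = sum_n u_n u_n^T.
rng = np.random.default_rng(0)
d, m = 4, 9
U = rng.standard_normal((m, d))                 # row n is the frame vector u_n

S = U.T @ U                                     # frame operator (= T*T)
eigs = np.linalg.eigvalsh(S)
A, B = eigs[0], eigs[-1]

x = rng.standard_normal(d)
energy = np.sum((U @ x) ** 2)                   # sum_n <x, u_n>^2
norm_sq = x @ x
print(A * norm_sq <= energy <= B * norm_sq)     # True, as in (3)
```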


2.1 Examples

Example 1 Assume we have an ONB in L2(I). Then it is a frame in L2(S) for any subset S ⊂ I, with the frame constants A = B = 1 (such frames are called Parseval frames).

Example 2 Let S be a bounded set and Λ a u.d. set. Then the Bessel inequality holds:

Σ_{λ∈Λ} |⟨F, e^{iλt}⟩|² ≤ B ‖F‖_{L2(S)}²,   for every F ∈ L2(S),    (4)

where B is a constant. Here and below we set

⟨F, G⟩ := ∫_R F(t) \overline{G(t)} dt,    ‖F‖_{L2(S)} := ( ∫_S |F(t)|² dt )^{1/2}.

Proof As explained above, it suffices to prove the inequality

‖Σ_{λ∈Λ} c(λ) e^{iλt}‖_{L2(S)}² ≤ B Σ_{λ∈Λ} |c(λ)|²,   for every {c(λ)} ∈ l².    (5)

Assume that c(λ) = 0 for all but a finite number of λ ∈ Λ. Choose σ > 0 such that S ⊂ [−π/(4σ), π/(4σ)] and δ(Λ) ≥ 2σ, where δ(Λ) is the separation constant defined in (2). Set

h(t) := ( sin(σt) / (σt) )².

Put a := min_{t∈S} h(t), and denote by H the Fourier transform of h (see (8) below). One may check that a > 0, H(0) > 0, and H(x) = 0 for |x| ≥ 2σ. Since δ(Λ) ≥ 2σ, we have H(λ − λ′) = 0 whenever λ, λ′ ∈ Λ, λ ≠ λ′. Therefore,

∫_S |Σ_{λ∈Λ} c(λ) e^{iλt}|² dt ≤ (1/a) ∫_{−∞}^{∞} |Σ_{λ∈Λ} c(λ) e^{iλt}|² h(t) dt
  = (1/a) Σ_{λ,λ′∈Λ} c(λ) \overline{c(λ′)} H(λ − λ′) = (H(0)/a) Σ_{λ∈Λ} |c(λ)|².

Hence, (5) holds for finite sequences c(λ).


Given an arbitrary sequence {c(λ)} ∈ l², the argument above readily shows that the sums

Σ_{λ∈Λ, |λ|≤n} c(λ) e^{iλt},   n ∈ N,

[…] σ > 0 such that F(t) = 0 for a.e. |t| > σ. In this case the Fourier integral (8) admits analytic continuation to the complex plane C, and using the Cauchy–Schwarz inequality one gets the estimate

|f(x + iy)| ≤ C e^{σ|y|},   x, y ∈ R,    (10)

where C = (σ/π)^{1/2} ‖F‖₂.

Reconstruction of Signals: Uniqueness and Stable Sampling

17

3.2 Paley–Wiener Spaces Let S be a measurable subset of R and L2 (S) denote the subspace of L2 (R) of functions supported by S. Definition 4 The space P WS of all L2 -functions f = Fˆ such that F ∈ L2 (S) is called the Paley–Wiener space with spectrum S. Hence, P WS consists of all functions f ∈ L2 (R) whose spectrum lies in S. Equipped with the L2 -norm · 2 , P WS is a Hilbert space. When S is bounded, the elements of P WS are entire functions of exponential type. When S is an unbounded set of finite measure, from the Cauchy–Schwartz inequality we get L2 (S) ⊂ L1 (S), and so the elements of the Paley–Wiener space P WS are continuous. However, in general they are not analytic anymore. In what follows, we denote by |S| the measure of S. The classical case is when S = [−σ, σ ], σ > 0, an interval in R. For simplicity, we will write P Wσ := P W[−σ,σ ] . The space P Wσ is completely determined by a growth condition in the complex plane: Theorem 2 (Paley–Wiener) A function f belongs to P Wσ if and only if f ∈ L2 (R) and it can be extended to the complex plane as an entire function satisfying (10) with some constant C = C(f ) depending on f . The necessity is explained at the end of Section 3.1. For the proof of sufficiency, see [49], or [56, p. 86], or [31, p. 69]. One can easily see that when S is a disconnected set, already when S is a union of two intervals, the space P WS cannot be characterized in terms of a growth condition in the complex plane.

3.3 Sampling Problem Let Λ ⊂ R be a u.d. set. In signal processing, sampling is the restriction of a continuous signal f to a set Λ: f → f |Λ := {f (λ), λ ∈ Λ}, where Λ ⊂ R is a given (usually, u. d.) set. Given a set Λ ⊂ R, it is a classical problem to determine whether it is possible to reconstruct the original signal f from the samples f |Λ . There are two aspects of this problem: 1. Uniqueness. When can the signal f be uniquely defined by the samples? 2. Stable sampling. When can the samples provide a “stable reconstruction” of the signal?

18

A. Olevskii and A. Ulanovskii

3.3.1

Uniqueness

Definition 5 A set Λ is called a uniqueness set (US) for the space P WS , if f ∈ P WS , f |Λ = 0 ⇒ f = 0. Claim Assume |S| < ∞. Then Λ is a US for P WS if and only if the exponential system E(Λ) is complete in L2 (S). Proof Indeed, assume Λ is not a US for P WS , that is, there is a non-trivial function f ∈ P WS which vanishes on Λ. Denote by F the inverse Fourier transform of f . Then F, eiλt = 0 for λ ∈ Λ. This means that F is orthogonal to E(Λ), and so E(Λ) is not complete in L2 (S). On the other hand, if E(Λ) is not complete in L2 (S), then there is a non-trivial function F ∈ L2 (S) orthogonal to E(Λ). Its Fourier transform f ∈ P WS vanishes on Λ, which means that Λ is not a US for P WS . Example 4 Λ = Z+ is a US for P Wσ , 0 < σ < π . This follows from Example 3 above.

3.3.2

Stable Sampling

Definition 6 Λ is called a stable sampling set (SS) for P WS if there exists C such that  |f (λ)|2 , for every f ∈ P WS . (11)

f 22 ≤ C λ∈Λ

Claim Let S be a bounded set and Λ a u.d. set. Then Λ is an SS for P WS if and only if E(Λ) is a frame in L2 (S). Proof Indeed, recall that E(Λ) is a frame in L2 (S) if there exist 0 < A ≤ B such that  |F, eiλt |2 ≤ B F 22 , F ∈ L2 (S). A F 22 ≤ λ∈Λ

The right inequality follows from (4). Using the Fourier transform and the Plancherel identity, we see that the left inequality is equivalent to (11).

3.3.3

Whittaker–Kotelnikov–Shannon Sampling Theorem

Given a set S, the sampling problem is to determine which sets Λ form SSs for P WS .

Reconstruction of Signals: Uniqueness and Stable Sampling

19

Classical sampling results deal with equidistant sampling, i.e., the case where Λ is an arithmetic progression. Example 5 The set of integers Z is an SS for P Wπ . Every signal f ∈ P Wπ can be written as  sin π(x − n) f (n) f (x) = , (12) π(x − n) n∈Z

convergence being absolute and uniform on compact subsets of C, and with respect to the L2 -norm. Moreover  |f (n)|2 , for every f ∈ P Wπ . (13)

f 22 = n∈Z

Both the representation formula (12) and (13) follow easily from the fact that the exponential system E(Z) is an orthogonal basis in L2 (−π, π ) (see details in [56], Chapter 2). Equality (12) is called a sampling theorem since it allows to reconstruct a continuous function f ∈ P Wπ from its sample values f (n). It is often called the Whittaker–Kotelnikov–Shannon sampling theorem. Observe that the sampling sets for P WS are always uniqueness sets for P WS , but the converse is not true. For example, the set Λ = Z+ is a US but not an SS for P Wσ , 0 < σ < π , see Examples 3 and 4.

4 Beurling’s Sampling Theorem 4.1 Uniform Densities The lower uniform density of a u.d. set Λ is defined as #(Λ ∩ (a, a + l)) , l→∞ a∈R l

D − (Λ) := lim inf and its upper uniform density is

#(Λ ∩ (a, a + l)) . l→∞ a∈R l

D + (Λ) := lim sup

Roughly speaking, one may say that if D − (Λ) ≥ d then Λ is everywhere at least as dense as the lattice (1/d)Z, while the condition D + (Λ) ≤ d means that Λ is everywhere at least as sparse as the lattice (1/d)Z. When these two densities coincide, we say that Λ is regularly distributed and possesses the uniform density D(Λ) := D − (Λ) = D + (Λ).

20

A. Olevskii and A. Ulanovskii

Theorem 3 Let Λ be a u.d. set. If D − (Λ) > σ/π then Λ is an SS for P Wσ . If D − (Λ) < σ/π then it is not. This result follows from Beurling’s characterization of the sampling sets for the Bernstein space Bσ presented below. Observe that a complete description of the SSs for P Wσ is obtained in [48, 52]. It is not given in terms of some density of Λ.

4.2 Bernstein Space Bσ Definition 7 In the classical case S = [−σ, σ ], the Bernstein space Bσ was introduced by S.N. Bernstein in [3]. It consists of all entire functions f satisfying (10). Observe that in (10) one can take (see [43], Lecture 2) C = sup |f (x)|. x∈R

This definition and the Paley–Wiener theorem show that P Wσ = Bσ ∩ L2 (R). Definition 8 A set Λ is called a set of stable sampling (SS) for Bσ if for every f ∈ Bσ there exists C = C(σ ) such that

f ∞ ≤ C f |Λ ∞ ,

for every f ∈ Bσ .

(14)

Here

f ∞ := sup |f (x)|,

f |Λ ∞ := sup |f (λ)|. λ∈Λ

x∈R

Theorem 4 (Beurling [5]) A u.d. set Λ is an SS for Bσ if and only if D − (Λ) > σ/π. We sketch some main ideas of the proof (for details see [5] or [43], Lecture 3). The following compactness property of the space Bσ will be used: Given any bounded sequence fn ∈ Bσ ,

sup fn ∞ < ∞, n

there is a subsequence fnj converging uniformly on compacts of the complex plane C to a function f ∈ Bσ .

Reconstruction of Signals: Uniqueness and Stable Sampling

21

In the proof of Theorem 4 we may assume that σ = π . 1. Let D − (Λ) > 1. We have to show that Λ is an SS for P Wπ . Assume Λ is not an SS for Bσ . This means that there are functions fn ∈ P Wπ satisfying

fn ∞ = 1,

fn |Λ ∞
1/2. (a) Assume that all xn = 0. By the compactness property, there is a subsequence fnj which converges to some function f ∈ Bπ . We see that |f (0)| ≥ 1/2 and f |Λ = 0. The following principle is well-known: An entire function of a given growth in C cannot have too many zeros. In particular, a non-trivial function f from the space Bπ (which means that f satisfies (10) with σ = π ) cannot vanish on a set Λ of the lower density satisfying D − (Λ) > 1. This is an easy consequence of Jensen’s formula (see [31], Lecture 2). (b) Assume xn = 0. How does one adjust the proof above? We translate: Set gn (t) := fn (t + xn ). Then |gn (0)| > 1/2,

gn ∞ = 1,

g|Λ−xn ∞ ≤

1 . n

We may assume that gn → g ∈ Bπ , n → ∞. Assume satisfies 0 < < δ(Λ)/2, where δ(Λ) is the separation constant defined in (2). Then any interval I = (a − , a + ) contains at most one point of Λ. It is easy to check that there is a subsequence nj and a u.d. set Γ such that the translates Λ − xnj converge to Γ in the sense that the number of elements of Γ and Λ − xnj on every interval I whose endpoints are not elements of Γ , is the same for all but a finite number of j . The set Γ is called the weak limit of the translates Λ − xnj . Clearly, we have |g(0)| ≥ 1/2 and g|Γ = 0. Claim D − (Γ ) ≥ D − (Λ). This claim follows from the definition of density D − . Using the claim, similarly to part (i), we come to contradiction. 2. To complete the proof of Theorem 4, it suffices to prove that no u.d. set Λ with the “critical density” D − (Λ) = 1 is an SS for P Wπ . This is the most delicate point in Beurling’s proof. It involves the connection between sampling and balayage discovered by Beurling, and some other ideas (see [5] or [43], Lecture 3). We present another proof, which follows from Theorem 19 below.

22

A. Olevskii and A. Ulanovskii

Theorem 3 can be easily deduced from Theorem 4 by using the following. Proposition 1 ([43]) Assume Λ is a u.d. set, and 0 < τ < σ . (i) If Λ is not an SS for Bσ −τ , then it is not an SS for P Wσ . (ii) If Λ is an SS for Bσ +τ , then it is an SS for P Wσ . Proof The proof of part (i) is easy: Take functions fn ∈ Bσ −τ satisfying (15) and find points xn such that |fn (xn )| > 1/2. Then set gn (x) := fn (x)

sin τ (x − xn ) . τ (x − xn )

Then one may check that gn ∈ P Wσ , infn gn 2 > 0 while gn |Λ 2 → 0 as n → ∞. This means that Λ is not an SS for P Wσ . Beurling [5] proved part (ii) using the so-called linear balayage operators. We suggest a simpler proof. We may assume that σ + τ ≤ π . Fix a function h ∈ P Wτ with h(0) = 1 and satisfying C1 := sup



|h(k − x)|2 < ∞.

x∈[0,1] k∈Z

Then for every f ∈ P Wσ we have f (·)h(k − ·) ∈ Bσ +τ and |f (k)| ≤ max |f (x)h(k − x)|. x∈R

Hence, by (13),

f 22 =



|f (k)|2 ≤

k∈Z

≤C

 k∈Z

 k∈Z

max |f (x)h(k − x)|2 x∈R

max |f (λ)h(k − λ)|2 ≤ C λ∈Λ



|f (λ)h(k − λ)|2

k∈Z λ∈Λ

≤ CC1



|f (λ)|2 ,

λ∈Λ

which shows that Λ is an SS for P Wσ .

5 Riesz Sequences and Interpolation We will now recall one more geometric concept which has an important interpretation in terms of the signal processing.

Reconstruction of Signals: Uniqueness and Stable Sampling

23

Definition 9 A system of vectors {uk } in a Hilbert space H is called a Riesz sequence (RS) if there is a constant γ > 0 such that



ck uk 2H ≥ γ

k



|ck |2 ,

for every finite sequence of scalars ck .

(16)

k

Example 6 Let Λ = {2n , n ∈ N}. Then the system of exponentials E(Λ) is an RS in the space L2 (S), for every set S ⊂ (−π, π ) of positive measure, see [57]. Example 7 On the other hand, E(N) = {eint , n ∈ N} is not an RS for L2 (−σ, σ ), for every 0 < σ < π . Indeed, it is easy to check that there exist trigonometric polynomials pn satisfying

pn L2 (−π,π ) = 1,

pn L2 (−σ,σ ) → 0,

n → ∞.

A similar argument shows that if a set Λ ⊂ Z contains arbitrary long arithmetic progressions then there is a set S ⊂ (−π, π ) of positive measure such that E(Λ) is not an RS in L2 (S). In particular, this is true for the set of primes, due to the famous Green–Tao theorem [16]. The concept of RS is in a way dual to the concept of frame. The following lemma illustrates this. Lemma 1 (Duality Lemma) Let W = {wn } be an ONB in L2 on an interval I . Suppose that I = A ∪ B is a decomposition of I into two disjoint measurable sets. Let W = U ∪ V be a decomposition of W into two disjoint sub-systems. Then U is a frame in L2 (A) if and only if V is a RS in L2 (B). Proof We will prove “a half” of this lemma. The prove for the other half is similar. Assume V is not an RS on B. Then for any > 0 there is a “polynomial” p satisfying p :=



c(v)v,

p 2L2 (I ) = 1,

p 2L2 (B) < .

v∈V

Hence,

p 2L2 (A) > 1 − . Since p is orthogonal to U in L2 (I ), we have p, u L2 (A) = −p, u L2 (B) , It follows that

u ∈ U.

24

A. Olevskii and A. Ulanovskii



|p, u L2 (A) |2 =

u∈U



|p, u L2 (B) |2 ≤ p 2L2 (B) < .

u∈U

This means that U is not a frame in L2 (A). One can reformulate this lemma for any Hilbert space H , see [43], Lecture 1.

5.1 Interpolation Let S ⊂ R be a bounded set of positive measure, and Λ a u.d. set. Definition 10 A u.d. set Λ ⊂ R is called an interpolation set (IS) for the space P WS , if for every l 2 -sequence {c(λ), λ ∈ Λ}, there is a function f ∈ P WS such that f (λ) = c(λ),

λ ∈ Λ.

In other words, this means that the restriction operator Rf = {f (λ), λ ∈ Λ}

(17)

acting from P WS into l 2 (Λ) (R is bounded due to the Bessel inequality) is a surjection. Let F be the inverse Fourier transform of f . Then √ f (λ) = F (t), eiλt / 2π , and so the restriction operator√can be identified with the operator from L2 (S) to l 2 (Λ) given by F → F, eiλt / 2π . Hence, the conjugate operator R ∗ : l 2 (Λ) → L2 (S) can be identified with the operator (see Sect. 2.2) 1  R ∗ {c(λ)} = √ c(λ)eiλt . 2π λ∈Λ Due to a fundamental Banach theorem (see Theorem E9, [19]), the surjectivity of R is equivalent to the following estimate for the conjugate operator R ∗ :

 λ

c(λ)eiλt 2L2 (S) ≥ γ



|c(λ)|2 ,

for every finite sequence c(λ)

λ

(with a positive γ , not depending on c(λ)). This proves the following Claim Λ is an IS for P WS if and only if E(Λ) is an RS for L2 (S).

Reconstruction of Signals: Uniqueness and Stable Sampling

25

5.2 Kahane’s and Beurling’s Interpolation Theorems The following theorem provides an essential characterization of interpolation sets for P Wσ in terms of the upper uniform density D + . Theorem 5 (Kahane, [21]) Let S = [−σ, σ ], σ > 0, and let Λ be a u.d. set. (i) If D + (Λ) < σ/π then Λ is an IS for P Wσ . (ii) If D + (Λ) > σ/π then it is not. When Λ is a subset of an infinite arithmetic progression, this result can be deduced directly from Theorem 3 by the Duality Lemma 1. A complete characterization of the interpolation sets for P Wσ is given in [52]. As in the case of sampling, it cannot be expressed in terms of a density of Λ. Recall that the elements of the Bernstein space Bσ are bounded on R. Therefore, the restriction operator R defined in (17) also acts from Bσ to l ∞ (Λ). Similarly to Sect. 5.1, one can define the concept of interpolation set for the Bernstein space Bσ . Definition 11 A u.d. set Λ is called an IS for Bσ if the restriction operator R : Bσ → l ∞ (Λ) is a surjection. Theorem 6 (Beurling, [6]) A u.d. set Λ is an IS for Bσ if and only if D + (Λ) < σ/π. For the proof see [6] or [43], Lecture 4. Observe that there is an analogue of Proposition 1 for the property of interpolation. Using this result, one may deduce Theorem 5 from Theorem 6, see details in [43], Lecture 4.

6 Disconnected Spectra: Landau’s Theorems When S = [−σ, σ ] is a single interval, Theorems 3 and 5 show that the sampling and interpolation properties of Λ in P WS can be essentially described in terms of appropriate densities of Λ. This is no longer so when S is a disconnected set, already when it is a union of two disjoint intervals: No density condition is sufficient for a u.d. set Λ to be a sampling or interpolation set. For example, the uniform density D(Λ) can be large (small) with respect to |S|, but Λ fails to be a sampling (interpolation) set for P WS . Example 8 The set of integers Z is not a sampling set (not even a uniqueness set) for the space P WS , where S := (−π − , −π + ) ∪ (π − , π + ). Indeed, the function f (x) =

sin( x) sin π x x

vanishes on Z and belongs to the space P WS .

26

A. Olevskii and A. Ulanovskii

The next example illustrates the situation with interpolation. Example 9 Let N be a large number. Set SN =

N 

[2π k, 2π k + π ].

k=0

One can easily check that Z is not an IS for P WSN .

6.1 Landau’s Necessary Density Conditions On the other hand, H. Landau discovered that the necessity of the density conditions in Theorems 3 and 5 remains to be true for disconnected spectra, and also in several dimensions. For simplicity we present the results below in 1-D setting. Theorem 7 (Landau, [28]) Let S ⊂ R be a bounded measurable set and Λ ⊂ R a u.d. set. (i) If Λ is an SS for P WS then D − (Λ) ≥ |S|/2π ; (ii) If Λ is an IS for P WS then D + (Λ) ≤ |S|/2π . The proof below follows [35], see also [43], Lecture 4. It essentially differs from the original one in [28], and is much simpler.

6.2 Uniformly Minimal Sets First, we recall the concept of uniformly minimal set in Hilbert space. We denote by C different positive constants. Definition 12 A system of vectors {uk }, uk H > C, is called uniformly minimal (UM) if there is a constant γ > 0 such that for every k, 

uk − c(n)un H > γ , n=k

for any finite family of coefficients c(n). Another form of this definition is as follows: A system above is a UM if there is a system {vk }, vk H < C, which is biorthogonal to uk , which means that  1 k=j uk , vj = . 0 k = j One can see easily that these two definitions are equivalent.

Reconstruction of Signals: Uniqueness and Stable Sampling

27

The UM-property is weaker than the RS-property. Indeed, it is straightforward to check that if {uk } is a Riesz sequence then it is uniformly minimal. The inverse implication is not true, even for the exponential systems. Claim ([51]) Let Λ = {λj , j ∈ Z}, where ⎧ 1 ⎨ j + 4 , j = 1, 2, . . . , λj := 0, . j = 0, ⎩ j − 14 , j = −1, −2, . . . The exponential system E(Λ) is complete and uniformly minimal in L2 (−π, π ), but it is not an RS.

6.3 Uniformly Minimal Sets of Exponentials Theorem 8 ([35]) Let S ⊂ R be set of finite measure (not necessarily bounded), and Λ be a u.d. set. If E(Λ) is a UM system in L2 (S) then D + (Λ) ≤

|S| . 2π

The proof is based on the following simple geometric lemma. Lemma 2 Let {fj , gj , 1 ≤ j ≤ n} be a biorthogonal system in an n-dimensional subspace V of P WS . Then 0≤

n 

fj (x)gj (x) ≤

j =1

|S| , 2π

x ∈ R.

Proof Set 1 ex := √ eitx 1S (t). 2π Denoting by Fj and Gj the inverse Fourier transform of fj and gj , respectively, we get n  j =1

fj (x)gj (x) =

n  j =1

n  Fj , ex ex , Gj =  ex , Gj Fj , ex . j =1

28

A. Olevskii and A. Ulanovskii

The last sum is the orthogonal projection of the vector ex to the space Vˇ , obtained from V by the inverse Fourier transform. So the scalar product above does not exceed ex 2L2 (S) , which proves the lemma. 6.3.1

Proof of Theorem 8

Denote by h the Fourier transform of the indicator function 1S . Fix > 0 and choose b = b( ) such that |h(x)|2 dx < 2 . (18) |x|>b

Since E(Λ) is uniformly minimal in L2 (S), the system {h(x − λ), λ ∈ Λ} obtained from E(Λ) by a unitary operator (the Fourier transform) has the same property in P WS . So, there is a bounded system {gλ ∈ P WS , λ ∈ Λ} such that {h(x − λ), gλ } is a biorthogonal system. Fix a long interval I and set V := span{h(x − λ), λ ∈ Λ ∩ I },

J := I + (−b, b).

Let PV be the orthogonal projector onto V . We see that {h(x − λ), PV gλ , λ ∈ Λ ∩ I } is a biorthogonal system in V ,

(19)

and

PV gλ < C.

(20)

Then Lemma 2 gives 

0
0 such that every set Λ := {λ + (λ), λ ∈ Λ},

| (λ)| < ,

λ ∈ Λ,

is also an SS for P WS . It is natural to call the set Λ in Lemma 3 an -perturbation of Λ. Fix a bounded set S and chose from the lemma above so small that S lies in [−π/ , π/ ]. Then find an -perturbation Λ of Λ such that Λ lies on the lattice Z, and E(Λ ) is an SS for P WS . The Duality Lemma 1 implies that E( Z \ Λ ) is an IS for L2 ((−π/ , π/ ) \ S). Now, applying part (ii) of Theorem 7 and keeping in mind that D − (Λ ) + D + ( Z \ Λ ) = 1/ , we finish the proof.

30

A. Olevskii and A. Ulanovskii

7 Universal Sampling 7.1 Universality Problem Let S be a disconnected bounded set on R, say a finite union of intervals. How can one find a sampling set Λ, which allows one to recover signals f ∈ P WS ? A possible way is to cover S by an interval I and take Λ to be an SS for P WI . However, if the measure of S is small with respect to the length of I , this requires a large volume of measurements (compared to the measure of S), see Example 8. On the other hand, it is desirable to avoid a special construction of Λ, depending on the structure of S. Hence, the following problem appears: Is it possible to find a u.d. set Λ, which serves as an SS for each spectrum S of fixed measure, independently of its structure and localization? This problem was put forward in [40]. We proved that for the compact spectra such a “universal” sampling does exist. Moreover, it can be found with a “minimal possible” density. Theorem 9 ([40]) There exist a “universal” u.d. set Λ, D(Λ) = 1, which is an SS for P WS , whenever S is a compact set, |S| < 2π . By re-scaling, one can get a universal sampling set Λ, D(Λ) = d, which provides stable sampling for any compact S, |S| < 2π d. The exponential system E(Λ) from Theorem 9 is a frame in L2 (S) for an arbitrary compact S of measure < 2π . In fact, we proved a stronger property: E(Λ) is a Riesz basis in the space L2 (S), whenever S is any finite union of intervals of total measure 2π , with rational endpoints (multiplied by π ). The construction is based on a special perturbation of the lattice 2π Z.

7.2 Meyer’s Model Sets Later, an alternative example of a universal sampling set was suggested by B. Matey and Y. Meyer. The construction is based on Meyer’s “cut and project” sets. Take a lattice Γ in R2 in general position. Cut it by a horizontal strip a < y < b and project the lattice points in the strip onto the x-axis. The set M obtained is called a simple Meyer’s model. Theorem 10 ([33]) The set M is an SS for P WS , for every compact S, |S| < 2π D(M).

Reconstruction of Signals: Uniqueness and Stable Sampling

31

7.3 No Universal Sampling for Non-compact Sets Assume a u.d. set Λ is an SS for P WS , i.e., the inequality (11) holds. Denote by K(Λ, P WS ) the smallest constant C in (11), i.e., K(Λ, P WS ) := sup

f ∈P WS

f 2 .

f |Λ 2

It is natural to call this constant the sampling bound. It describes the “quality” of sampling provided by the set Λ. In order to obtain a stronger form of universality, one may wish to construct a u.d. set Λ such that the sampling bound K(Λ, P WS ) admits an estimate depending only on the measure of S. However, this is not possible, even for simple spectra S lying on a fixed interval. More precisely, we have Theorem 11 ([40]) Let Λ be a u.d. set, D − (Λ) = 1, and let > 0. Then sup K(Λ, P WS ) = ∞, S

the supremum is taken over all finite union of intervals S, S ⊂ [0, 2π(1 + )], |S| < . The proof is based on interaction of analytic and combinatorial arguments. The latter is based on Szemeredi’s theorem on arithmetic progressions. One can deduce easily from Theorem 11 that the non-compact bounded spectra do not admit in general a universal sampling. For a detailed exposition of the above results, see [40] and [43], Lecture 6.

7.4 Universal Completeness The following theorem shows that a weaker “universality” property, completeness on all sets of fixed measure, can be achieved with no topological restriction. Theorem 12 ([41]) Let Λ = {n + 2−|n| , n ∈ Z}. The exponential system E(Λ) is complete in L2 (S), for every bounded set S, |S| < 2π . In fact, one can take any Λ, obtained from the lattice Z by an exponentially small perturbation. Observe, in a contrast, that the perturbations needed for the proof of Theorem 9 have a special arithmetic structure, and cannot decrease too fast. By re-scaling, one can get a universal u.d. set Λ of the critical density for any given value of measure.

32

A. Olevskii and A. Ulanovskii

7.5 Universal Completeness on the Circle The universality phenomenon can be also observed on the circle T := (−π, π ). In this case it is natural to require that Λ ⊂ Z. Due to this additional arithmetic structure, our results are not as complete as in the non-periodic case. Proposition 2 ([41]) There is a set Λ ⊂ Z, D(Λ) = 1/2, such that the system E(Λ) is complete in L2 (S), for every set S ⊂ T, |S| = π . In fact, one can take Λ = 2Z+ ∪ (2Z− − 1). Similar results are true for some sets Λ of density greater than 1/2. However, we do not know if there exist sets Λ ⊂ Z of density less than 1/2 such that E(Λ) is complete in L2 (S) for every S ⊂ T of small measure.

8 Unbounded Spectra Above we have always assumed that S is a bounded set. Now we are going to discuss the case when S is an unbounded set of finite measure. The essential difference is that the elements f of the Paley–Wiener space P WS are no longer analytic functions. Even smoothness is lost, in general. However, the condition |S| < ∞ implies the continuity of the elements of P WS . So, the problem of reconstruction of signals from their discrete sampling still makes sense, but presumably becomes harder.

8.1 Uniqueness Sets We start with the uniqueness problem. We will show that for every (bounded or unbounded) set S of finite measure there is a u.d. set Λ of the critical density D(Λ) = |S|/2π , which is a US for P WS . An equivalent form is as follows: For every set S of measure 2π there is a u.d. set Λ, D(Λ) = 1, such that the system of exponentials E(Λ) is complete in L2 (S). In fact, we will prove a stronger version of this result. Define the “projection” of the set S on the interval [0, 2π ) as A := Proj S := (S + 2π Z) ∩ [0, 2π ). Define also the “multiplicity function”: w(t) := #{n ∈ Z : t + 2π n ∈ S}, Clearly, A = {t ∈ [0, 2π ) : w(t) ≥ 1}.

t ∈ [0, 2π ).

Reconstruction of Signals: Uniqueness and Stable Sampling

33

It is easy to check that if |S| = |A| = 2π then w(t) ≡ 1 and Λ = Z is a US for P WS . Theorem 13 ([42]) Assume that |S| < ∞ and |A| < 2π . There is a u.d set Λ of density 1 which is a US for P WS . Below we sketch the proof of this result. It is based on two ingredients.

8.1.1

Completeness of Exponentials on Subsets of (0, 2π )

Let us introduce the weighted space X := L2 (A, w) = {F :

|F (t)|2 w(t) dt < ∞}. A

One can check that the dual space is X∗ = L2 (A, 1/w), with respect to the duality G, F A :=

F (t)G(t) dt. A

It is also not difficult to see that X∗ ⊂ L1 (A). Using this, one can easily obtain the following Claim The system of exponentials {eint , |n| > N } is complete in X, for every N ∈ N. In turn, this allows one to prove Lemma 4 There is a partition Z = ∪∞ j =1 Zj on pairwise disjoint sets Zj , such that for every j the exponential system E(Zj ) is complete in X. Proof Using the claim above, the proof follows from the induction: Fix a dense countable set of elements {Fk , k ∈ N} in X. On the first step, we set B1 := {0, ±1, . . . , ±(n1 − 1)},

34

A. Olevskii and A. Ulanovskii

where we choose n1 ∈ N so large that the function F1 can be approximated in X by a trigonometric polynomial with frequencies from B1 with an error less than 1. On the j -th step of induction, by the claim above, one may choose nj ∈ N so large so that every function Fk , k = 1, . . . , j, can be approximated by a a trigonometric polynomial pk,j with the frequencies from the block Bj = {±nj −1 , ±nj + 1, . . . , ±(nj − 1)} with the error less than 1/j . Take any partition N = ∪∞ j =1 Δj , where Δj are infinite and disjoint, and set Zj := ∪l∈Δj Bl . It is easy to see that for every k, j ∈ N, the function Fk can be approximated with an arbitrarily small error by a trigonometric polynomial with frequencies from Bj . This proves the lemma.

8.1.2

Periodization

Given a function F ∈ L1 (R) set P F (t) :=



F (t + 2π k).

k∈Z

This is 2π -periodic function belonging to L1 on the circle T := (−π, π ). One can easily establish the following connection between the Fourier coefficients of this function and the Fourier transform of F :



P F (t)e−int dt =

√ 2π Fˆ (n),

n ∈ Z.

0

This implies



P (F (·)e−ix· )(t)eint dt =



2π Fˆ (n + x).

(22)

0

The condition F ∈ L2 (R) does not imply that Pf ∈ L2 (T). However, using the definition of the weight w and the Cauchy–Schwartz inequality for sums, one can check that F ∈ L2 (S) implies P F ∈ X∗ . Indeed, applying the inequality (a1 + . . . + an )2 ≤ a12 + . . . + an2 , n

Reconstruction of Signals: Uniqueness and Stable Sampling

35

we get |P F (t)|2 A



1 dt = w(t)

k∈Z

|F (t + 2π k)|2 dt =

A k∈Z

8.1.3

 1 dt ≤ ( |F (t + 2π k)|)2 w(t) A |F (t)|2 dt < ∞. S

Proof of Theorem 13

Choose any sequence {xj } dense in [0, 1) and set Λ := ∪∞ j =1 (Zj + xj ),

(23)

where Zj satisfy Lemma 4. Let us show that Λ is a US for P WS . Assume f ∈ P WS , f = Fˆ , and f |Λ = 0. We have to show that f = 0. The assumption f |Λ = 0 is equivalent to the equalities f (n + xj ) = 0, n ∈ Zj , j ∈ N. From (22), we see that the Fourier coefficients of the function P (F (·)eixj · ) vanish on Zj . This function is supported by the set A. So, by Lemma 4, it is zero a.e. Therefore, using again (22), we get f (n + xj ) = 0, n ∈ Z. Since this is true for all j and {xj } is dense in [0, 1), we conclude that f vanishes on a dense set in R. But f is continuous, so that it is identically zero. Theorem 13 is proved.

8.2 Uniqueness Sets for Sobolev Spaces So far we have considered spaces with unbounded spectra of finite measure. Now we are going to discuss the uniqueness problem for function spaces with spectrum of infinite measure. Since the functions f ∈ P WS , |S| = ∞, are not necessarily continuous, we need to change slightly the spaces we are working with.

8.2.1

Sobolev Spaces

Definition 13 For every number α > 1/2, we denote by W (α) the Sobolev space of functions f such that the inverse Fourier transform F, f = Fˆ , satisfies 2

F α := (1 + |t|2α )|F (t)|2 dt < ∞. R

(α)

We denote by WS the subspace of W (α) of functions f with spectrum in S, i.e., F = 0 a.e. outside S.

36

A. Olevskii and A. Ulanovskii

Observe that if F 2α < ∞ then F ∈ L1 (R). Hence, W (α) consists of continuous functions.

8.2.2

Periodic Gaps

Clearly, for every u.d. set Λ there exists a non-trivial smooth function which vanishes on Λ. So, if one wishes to have a u.d. uniqueness set for a space of smooth functions with spectrum in S, one should assume that S has gaps. Moreover, one may check that the spectral gaps must be dense in the sense that S cannot contain arbitrarily long intervals. However, this condition is not sufficient for existence of u.d. uniqueness set. The arithmetic structure of the spectral gaps plays a crucial role. Definition 14 We say that S has periodic gaps if there is a bounded set Q of positive measure (the gap) and a number T (the period) such that the set Q + T Z is disjoint from S. (α)

Theorem 14 ([44]) The Sobolev space WS u.d. uniqueness set.

with periodic spectral gaps admits a

The proof is based on the ideas described in Sec. 8.1.1 and 8.1.2.

8.2.3

Random Gaps

The following result shows the importance of the periodicity condition: Theorem 15 ([44]) The Sobolev space WS(α) with random spectral gaps does not admit a u.d. uniqueness set. More precisely: Let wn be an independent sequence of random variables, say uniformly distributed over the interval [3, 4]. Let Jn , n = 1, 2, . . . , be the sequence of intervals defined by J0 = [0, 1],

Jn = Jn−1 + wn ,

S := ∪n Jn .

Then with probability 1, for any u.d. set Λ there is a Sobolev function with spectrum in S, which vanishes on Λ. In fact, a stronger form of non-uniqueness is true: Every discrete function c(λ) ∈ l 2 (Λ) can be interpolated by an analytic L2 -function with spectrum in S. Some arithmetic properties of random sets are crucial in the proof, see [44].

Reconstruction of Signals: Uniqueness and Stable Sampling

37

9 Back to Exponential Frames 9.1 Frames on Unbounded Sets It was an open problem if for every unbounded set S, |S| < ∞, the space L2 (S) admits exponential frames. A positive solution was obtained recently: Theorem 16 ([37]) For every set S of finite measure, the space L2 (S) admits an exponential frame. Moreover, there is a discrete set Λ such that E(Λ) is a frame in L2 (S) with frame bounds A ≥ c|S| and B ≤ C|S|, where c and C are absolute positive constants. In contrast to Theorem 13, the proof is not purely constructive. It is based on a recent outstanding result [34], whose proof involves some stochastic elements. We will discuss the subject below.

9.2 What Is a Good Exponential Frame? We will describe briefly the approach to the proof of Theorem 16. Firstly, observe that the method used in Theorem 13 does not work in this case. The uniqueness is a much more “liberal” property than the sampling. For example, consider the decomposition of Z in Lemma 4. Clearly, some of Zj must have infinitely many arbitrarily long gaps, and this does not contradict to the completeness of E(Zj ) in the L2 -space on a subset of (−π, π ). However, such a system cannot be a frame, due to Landau’s theorem above. Basically, the main problem one faces when trying to prove Theorem 14 is as follows: What is a “good” frame in L2 (S) for a subset S ⊂ (−π, π )? Let us illustrate the problem by a simple example. Assume S = [0, 2π/m], where m is a large integer. Then E(Z) is a frame in L2 (S) (see Example 1). But it is too overcomplete. In particular, the frame constants A = B = 1 are much larger than the measure of S. On the other hand, E(mZ) is an orthogonal basis in L2 (S) whose frame constants A = B = 1/m are proportional to the measure of S. One may ask if such a good exponential frame exists for every set S ⊂ (−π, π ) of small measure? This question was considered in [36], where the approach was based on a result from [2]. The proof of Theorem 16 is based on the result in [34], see below.

9.2.1

Discrete Situation

Let A be an orthogonal matrix of order m. Choose any n columns of A, where we assume that m is much larger than n. An important problem is as follows: Can one find say, 2n rows, so that the corresponding 2n × n submatrix J is “well-invertible”, in the sense that the inequality holds

38

A. Olevskii and A. Ulanovskii

c

n n

x 2 ≤ J x 2 ≤ C x 2 , m m

x ∈ Rn ,

(24)

where C, c are absolute positive constants? Such a problem was considered by B. Kashin [23] in connection with theory of orthogonal series and by J. Bourgain and L. Tzafriry [9], inspired by the so-called Kadison–Singer problem. A paper by Lunin [32] should also be mentioned in the context. He proved (in the previous notations) that for any A and any family of n columns there are n rows such that the right inequality in (24) holds. Later, using a different approach, the authors of [2] proved proved a similar result for the left inequality. Using [2], the following was proved Proposition 3 ([36]) For every measurable set S ⊂ (−π, π ) there is a set Λ ⊂ Z, D(Λ) < 2|S|, such that |S| f 22 ≤ C



|f (λ)|2 ,

f ∈ P WS ,

λ∈Λ

with an absolute constant C. However, one would wish to obtain a two-sided estimate. Such an improvement became available after the outstanding result of Markus– Spielman–Srivastava: Theorem 17 ([34]) Given an n × m matrix of orthogonal columns such that the length of every row uj , 1 ≤ j ≤ m, is less than , there is a partition {1, 2, . . . , m} = S1 ∪ S2 such that for every x ∈ Rn and k = 1, 2, (1 −

 5 5 |x, u |2 < (1 + ) x 2 . ) x 2 ≤ 2 2 u∈Sk

This proved a Weaver’s conjecture, which has been known to be equivalent to the Kadison–Singer problem, see [54].

9.3 Construction of Good Frames Based on Theorem 17, good exponential frames on the circle were constructed:

Reconstruction of Signals: Uniqueness and Stable Sampling

39

Theorem 18 ([37]) For every set S ⊂ (−π, π ) of positive measure there exists Λ ⊂ Z such that E(Λ) is a frame for L2 (S) with frame bounds A ≥ c|S| and B ≤ C|S|, where c and C are absolute positive constants. Finally, the construction of good frames was extended in this paper to the unbounded sets of finite measure, which in particular delivers the proof of Theorem 16.

9.4 Extraction of Frames from Continuous Frames The concept of frame can be generalized to a family indexed by some measure space: Definition 15 Let (X, Σ, μ) be a positive, σ -finite measure space and let H be a separable Hilbert space. A measurable function Ψ : X → H is a continuous frame for H if there exists constants A, B > 0 such that 2 A x H ≤ |x, Ψ (t) |2 dμ(t) ≤ B x 2H , for every x ∈ H. X

The frame Ψ is called bounded if Ψ (t) H < C for all t ∈ X. It is called Parseval if A = B = 1. If X is a countable set with counting measure, then Ψ : X → H is a continuous frame iff {Ψ (t), t ∈ X} is a frame for H . Thus, frames are a special case of continuous frames. Example 10 Let H = L2 (S), where S is any set of finite measure and let X = R, dμ = dt. Then Ψ : R → L2 (S),

1 Ψ (t) = √ eit· 1S (·) 2π

is a bounded continuous Parseval frame for L2 (S). Indeed, by the Plancherel identity, for every F ∈ L2 (S) we get

1 |F, √ eit· L2 (S) |2 dt = Fˆ 22 = F 22 . 2π R

The idea from [36] and [37] to use discrete estimates of Spielman et al type for constructing frames was recently applied in [14]. It is proved in [14] that from every continuous bounded frame Ψ (t) one may extract a frame {Ψ (tj )}. Moreover, if Ψ (t) is Parseval and Ψ (t) ≤ 1, t ∈ X, then there is a frame {Ψ (tj )} with frame bounds A ≥ c and B ≤ C, where 0 < c ≤ C are absolute constants. Theorem 16 follows from the latter result and Example 10.

40

A. Olevskii and A. Ulanovskii

10 Sampling Bound for Bernstein Spaces 10.1 Sampling Bound Recall that a set Λ is called a stable sampling set (SS) for the Bernstein space Bσ , if there exists C such that

f ∞ ≤ C f |Λ ∞ ,

for every f ∈ Bσ .

We will call the minimal constant C for which this holds the sampling bound K(Λ, Bσ ). In other words, K(Λ, Bσ ) =

f ∞ .

f |Λ ∞ f ∈Bσ ,f =0 sup

We set K(Λ, Bσ ) = ∞ when Λ is not an SS for Bσ . Therefore, Λ is an SS for Bσ if and only if K(Λ, Bσ ) < ∞. Assume Λ is a u.d. set satisfying D − (Λ) = 1. By Beurling’s Theorem 3, Λ is an SS for every space Bσ , σ < π, and is not an SS for Bσ whenever σ ≥ π. Using this and the compactness property for Bσ formulated in Sect. 4.2, one can show that the constants K(Λ, Bσ ) must grow to infinity when σ approaches π from below. When Λ = Z, S.N. Bernstein [4] proved that the growth is precisely logarithmical: K(Z, Bσ ) =

π 2 log (1 + o(1)), π π −σ

σ ↑ π.

(25)

We prove that K(Λ, Bσ ) has at least logarithmic growth as σ ↑ π, for every Λ satisfying D − (Λ) = 1. In the rest of this section we denote by C different positive constants. Theorem 19 ([47]) Let Λ be a u.d. set with D − (Λ) = 1. Then K(Λ, Bσ ) ≥ C log

π , π −σ

π/2 < σ < π.

Let σ > σ . Then Bσ ⊂ Bσ and clearly, K(Λ, Bσ ) ≥ K(Λ, Bσ ). Hence, Theorem 19 implies K(Λ, Bπ ) = ∞. This provides a new proof for the critical case in Beurling’s Theorem 4 above. The proof of Theorem 19 is based on a reduction of the sampling problem for the Bernstein space Bσ to a similar one for the algebraic polynomials.

Reconstruction of Signals: Uniqueness and Stable Sampling

41

10.2 Sampling of Polynomials Denote by Pn the space of all algebraic polynomials of degree ≤ n on the unit circle in the complex plane ∂D := {z ∈ C : |z| = 1}. Given a finite set Λ ⊂ ∂D, #Λ > n, one may introduce the corresponding sampling constant K(Λ, Pn ) :=

maxz∈∂D |P (z)| . P ∈Pn ,P =0 maxλ∈Λ |P (λ)| sup

Theorem 20 ([47]) For every Λ ⊂ ∂D, #Λ > n, the estimate holds K(Λ, Pn ) ≥ C log

n . #Λ − n

The following result essentially goes back to Faber: Let U be a projector from the space C(T) onto the subspace Pn . Then U > C log n, see [22], Chapter 7. Faber’s approach is based on averaging over translations. Different versions of the result have been obtained by this approach. To prove Theorem 20 in [47], we use the following one due to Al. A. Privalov [50]: For every projector U above and every family of linear functionals ψj (1 ≤ j ≤ m) in C(T), there is a unit vector f in C(∂D) such that Uf > C log(n/m), and the functionals vanish on f .

10.3 Proof of Theorem 19 Theorem 19 can be easily deduced from Proposition 4 below. Let N ∈ N and Λ be a finite set on [−N, N ]. Set ΛN := Λ ∪ (−∞, −N ] ∪ [N, ∞). By Theorem 3, ΛN is a sampling set for Bπ . Proposition 4 ([47]) For every Λ ⊂ [−N, N ], #Λ > 2N, we have K(ΛN , Bπ ) ≥ C log

2N . #Λ − 2N

(26)

Assume N is a large number. Then estimate (26) shows that the sampling constant K(ΛN , Bπ ) must be large unless the number of points of Λ in (−N, N ) is “much larger than” 2N .

42

A. Olevskii and A. Ulanovskii

10.4 Interpolation Bound for Bernstein Spaces If Λ is an IS for Bσ , Banach theory implies that there is a constant K such that for every datum {c(λ), λ ∈ Λ} ∈ l ∞ , a function f ∈ Bσ exists such that f (λ) = c(λ), λ ∈ Λ and

f ∞ ≤ K sup |c(λ)|. λ∈Λ

The minimal constant K for which this estimate holds is called the interpolation bound Ki (Λ, Bσ ). Assume Λ satisfies D + (Λ) = 1. By Theorem 6, Λ is an IS for Bσ , σ > π, and is not an IS for Bσ , σ ≤ π . One may check that Ki (Λ, Bσ ) → ∞ as σ approaches π from above. Bernstein [4] proved that a similar to (25) asymptotic estimate holds the problem of interpolation on Z: Ki (Z, Bσ ) =

2 π log (1 + o(1)), π σ −π

σ ↓ π.

We prove that for every Λ satisfying D + (Λ) = 1, the growth of Ki (Λ, Bσ ) is at least logarithmic: Theorem 21 ([47]) Let Λ be a u.d. set with D + (Λ) = 1. Then Ki (Λ, Bσ ) ≥ C log

π , σ −π

π < σ < 3π/2.

The main step of the proof is to get sharp estimates on the interpolation bound for complex polynomials. The proof is considerably more technical than the one for the sampling bound in Theorem 20.

11 Completeness of Translates Classical Wiener’s [55] theorems provide necessary and sufficient condition on a ˆ whose translates {g(t−s), s ∈ R} span the space L1 (R) or L2 (R): function g = G 1. The translates of g ∈ L1 (R) span L1 (R) if and only if G does not vanish; 2. The translates of g ∈ L2 (R) span L2 (R) if and only if G is non-zero almost everywhere on R. There is no similar result for 1 < p < 2, since the spanning property of the translates of g ∈ Lp (R) cannot be expressed in terms of the zero set of G, see [30]. It is well-known that sometimes even a discrete set of translates may span Lp (R).

Reconstruction of Signals: Uniqueness and Stable Sampling

43

Definition 16 We call a discrete set Λ ⊂ R p-generating, if there is a function g ∈ Lp (R) such that the family of translates {g(t − λ), λ ∈ Λ} spans Lp (R). Such a function g is called a Λ-generator. The property of being a p-generating set is intimately connected with the completeness property of exponential systems. In particular, when p = 2, using the definition above and the Fourier transform, one can prove the following Claim A set Λ is 2-generating if and only if there exists G ∈ L2 (R) such that the system {G(t)eiλt , λ ∈ Λ}

(27)

spans the whole space L2 (R). Let p ≥ 1. We ask which discrete sets Λ are p-generating? Observe, that if Λ is p-generating, then it is also p -generating, for every p > p (see [8]). So, it is “more difficult” to be a p-generating set for small values of p. Below we present a brief account of the results, see [43], Lectures 11 and 12 for details. In Sect. 11.5 we present some recent results.

11.1 Examples We start with some examples. Example 11 The set of integers Z is not 2-generating. This follows easily from the claim above: Indeed, in this case the system (27) becomes {G(t)eint , n ∈ Z}. Due to 2π -periodicity of the functions eint , one may find a (non-periodic) bounded function F , such that the function F G cannot be approximated by finite linear combinations of the elements of the system. On the other hand, the following is true: √ Example 12 ([43], Lecture 11) The set Λ = { n, n ∈ N} is p-generating for every p ≥ 1. The set Λ in this example is not uniformly discrete. So, one may wonder if it is possible for a u.d. set Λ to be p-generating? We will see that the answer is negative for p = 1, and it is positive for p > 1 (See Sect. 11.6 for an extension of this dichotomy to general function spaces). Moreover, when p > 1, the p-generating property of a u.d set Λ depends on its arithmetic structure.

44

A. Olevskii and A. Ulanovskii

11.2 The Case p = 1 The case p = 1 is the only case where a complete description of generating sets is known: Theorem 22 ([10]) The following are equivalent: (i) Λ is 1-generating; (ii) The exponential system E(Λ) is complete in L2 (−σ, σ ), for every σ > 0. Given a discrete set Λ, the completeness radius of Λ is defined to be the number R(Λ) := sup{σ ≥ 0 : E(Λ) is complete in L2 (−σ, σ )}. The classical Beurling–Malliavin theorem [7] states that the completeness radius can be expressed in terms of a certain density, R(Λ) = π DBM (Λ). The density DBM (Λ) is usually called the upper (or exterior) Beurling–Malliavin density of Λ. Hence, by Theorem 22, Λ is 1-generating if and only if DBM (Λ) = ∞. Assume Λ is a u.d. set. From the definition of DBM (Λ) (see [7]) it follows that DBM (Λ) < ∞. Therefore, no u.d. set Λ is 1-generating.

11.3 The Case p = 2 Definition 17 Let us call a Λ an almost integer set if Λ = {λn := n + an , n ∈ Z},

an = 0, n ∈ Z,

an → 0, |n| → ∞.

(28)

We have seen that the set of integers Z is not 2-generating. On the other hand, the following is true: Theorem 23 ([38]) Every almost integer set Λ is 2-generating. This means that for every set Λ satisfying (28) there is a Λ-generator, i.e., a function g ∈ L2 (R) whose Λ-translates form a complete set in L2 (R). One may ask for which sets Λ a Λ-generator g can be chosen with a good “time-frequency” localization, that is, g ∈ S(R)? Here S(R) denotes the space of Schwartz functions. It turns out that this is the case when the perturbations an are exponentially small: Theorem 24 ([39]) Assume the “perturbations” an in (28) are exponentially small: |an | < Cr |n| ,

n ∈ Z; for some C > 0, 0 < r < 1.

(29)

Then Λ admits an L2 -generator g ∈ S(R). This result may seem surprising, since when (28) and (29) hold, the set Λ is “very close” to the “limiting case” Λ = Z when no generator exists.

Reconstruction of Signals: Uniqueness and Stable Sampling

45

Examples in [39] show that there exist almost integer sets which do not admit Schwartz generators.

11.4 The Case p > 2 Recall (see Example 11) that due to the periodicity of the functions eint , the set of integers Z is not 2-generating. However, rather surprisingly, it is p-generating, for every p > 2. The proof in [1] is based on an effective construction, which allows one to get the result in a stronger form: Theorem 25 ([1]) For every p > 2 there is a smooth function (Z-generator) g ∈ Lp (R) ∩ L2 (R) such that the family {g(t − n), n ∈ Z} is complete and minimal in Lp (R). The contrast between the cases p > 2 and p = 2 is due to the fact that the Fourier transforms of functions from Lp (R), p > 2, are no longer functions, but distributions.

11.5 The Case 1 < p < 2 Recently, Theorem 24 was extended to the case p ∈ (1, 2): Theorem 26 ([45]) Every set Λ satisfying (28) and (29) is p-generating, for every p > 1. The proof sketched below is based on a uniqueness theorem for a certain class of tempered distributions.

11.5.1

Distributions with Deep Zeros

Definition 18 Denote by K0 the class of continuous functions Φ satisfying the condition 1

|Φ(t)| ≤ C1 e−C2 (|t|+ d(t,Z) ) ,

t ∈ R,

(30)

where d(t, Z) = minn∈Z |t − n|, and C1 = C1 (Φ), C2 = C2 (Φ) are positive constants depending on Φ. Condition (30) means that Φ has “deep” zeros on the set of integers and at infinity. Let S (R) denote the space of tempered distributions on R. Denote by F, Φ the action of the distribution F ∈ S (R) on the test function Φ ∈ S(R).

46

A. Olevskii and A. Ulanovskii

We would like to extend condition (30) to tempered distributions. Definition 19 Denote by K the class of all tempered distributions F ∈ S (R) which admit a representation (k1 )

F = Φ1

(kl )

+ · · · + Φl

,

(31)

where l ∈ N, k1 , . . . , kl ∈ N ∪ {0} and Φ1 , . . . , Φl are continuous functions satisfying (30).

11.5.2

Uniqueness Theorem for a Class of Distributions

Let Kˆ denote the class the distributional Fourier transforms of the distributions from K. Using (30) and (31), one may check that every element f ∈ Kˆ is a function analytic in some horizontal strip x + iy, |y| < C. Theorem 26 is a fairly easy consequence of the following ˆ If f |Λ = 0 then Theorem 27 ([45]) Assume Λ satisfies (28) and (29) and f ∈ K. f = 0. This result shows that the sets Λ satisfying (28) and (29) are uniqueness sets for ˆ the class K.

11.6 Discrete Translates in Function Spaces Theorem 26 can be extended to more general function spaces. Consider Banach function spaces X satisfying two conditions: (a) The Schwartz space S(R) is continuously embedded and dense in X; (b) No non-trivial element h ∈ X∗ has spectrum on Z. Theorem 28 ([46]) There is a function ϕ ∈ S(R) such that the family of Λtranslates {ϕ(· − λ), λ ∈ Λ} spans every Banach function space X satisfying (a) and (b), where Λ is any u.d. set satisfying (28) and (29). The proof in [46] is based on Theorem 27. One may check that every space X = Lp (R), p > 1, satisfies conditions (a) and (b), and so Theorem 26 follows from Theorem 28. Example 13 Let w > 0 be a weight vanishing at infinity. Then Theorem 28 holds for the space X = L (w, R) := {f : f w = 1

R

|f (x)|w(x) dx < ∞}.

Reconstruction of Signals: Uniqueness and Stable Sampling

47

Clearly, the elements of the dual space X∗ = L∞ (1/w, R) are functions vanishing at infinity. One may prove that conditions (a) and (b) are satisfied. However, by Theorem 22, Theorem 28 ceases to be true for w = 1, that is for the space L1 (R). Theorem 28 is also applicable to the symmetric and Sobolev spaces, see [46].

12 Some Open Problems 1. Is it true that for every compact set S there exists an exponential frame E(Λ) in L2 (S) with the critical density D(Λ) = |S|/(2π )? 2. Is it true that for every compact set S there exists an exponential system E(Λ) which is complete and minimal in L2 (S)? 3. Does there exist a set Λ ⊂ Z of density D(Λ) = 1/3, such that the system E(Λ) is complete in L2 (S) for every set S ⊂ T of small measure? Compare with Proposition 2. 4. Does a set Λ of finite density exist which is a uniqueness set for P WS , for every set S of sufficiently small measure? Compare with Theorem 12. 5. Let S be an unbounded set of finite measure. Does there exist a u.d. set Λ such that F ∈ L1 (S), Fˆ |Λ = 0 implies F = 0 a.e.? Compare with Theorem 13. 6. Does there exist a system of exponentials which is a Riesz basis in the space L2 over a disk or a triangle in the plane? Notice that so far no example is known of a set S such that L2 (S) does not admit an exponential Riesz basis.

References 1. Atzmon, A., Olevskii, A. Completeness of integer translates in function spaces on R. J. Approx. Theory 87 (1996), no. 3, 291–327. 2. Batson J., Spielman D. A., Srivastava N. Twice-Ramanujan sparsifiers. SIAM Rev., 56(2) (2014), 315–334. 3. Bernstein, S. N. Sur une propriete des fonctions entieres. C.R. 176 (1923). 4. Bernstein, S. N. The extension of properties of trigonometric polynomials to entire functions of finite degree. (Russian) Izvestiya Akad. Nauk SSSR. Ser. Mat. 12, (1948), 421–444. 5. Beurling, A. Balayage of Fourier–Stiltjes transforms. In: The collected Works of Arne Beurling, v. 2, Harmonic Analysis. Birkhauser, Boston, 1989. 6. Beurling, A. Interpolation for an interval in R. In: The Collected Works of Arne Beurling, v. 2, Harmonic Analysis. Birkhäuser, Boston, 1989. 7. Beurling, A., Malliavin, P. On the closure of characters and the zeros of entire functions. Acta Math., 118 (1967), 79–93. 8. Blank, N. Generating sets for Beurling algebras. J. Approx. Theory 140 (2006), no. 1, 61–70. 9. Bourgain, J., Tzafriri, L. Invertibility of “large” submatrices with applications to the geometry of Banach spaces and harmonic analysis. Israel J. Math. 57 (1987), no. 2, 137–224. 10. Bruna, J., Olevskii, A., Ulanovskii, A. Completeness in L1 (R) of discrete translates. Rev. Mat. Iberoam. 22 (2006), no. 1, 1–16.

48

A. Olevskii and A. Ulanovskii

11. Christensen, O. Frames and bases. An introductory course. Applied and Numerical Harmonic Analysis. Birkhä user Boston, Inc., Boston, MA, 2008. 12. Duffin, R. J., Schaeffer, A. C. A class of nonharmonic Fourier series. Trans. Amer. Math. Soc. 72 (1952), 341–366. 13. Duren, P. L. Theory of H p Spaces. Dover books on mathematics, 2000. 14. Freeman D., Speegle, D. The discretization problem for continuous frames, Adv. Math., 345 (2019), 784–813. 15. Fuglede, B. Commuting Self-Adjoint Partial Differential Operators and a Group Theoretic Problem. J. Func. Anal. 16 (1974), 101–121. 16. Green, B., Tao, T. The primes contain arbitrarily long arithmetic progressions. Annals of Mathematics. 167 (2) (2008), 481–547. 17. Greenfeld, R., Lev, N. Fuglede’s spectral set conjecture for convex polytopes, Anal. PDE 10 (2017), no. 6, 1497–1538. 18. Helson, H. Harmonic Analysis, Addison–Wesley, 1983. 19. Hewitt, E., Ross, K. A. Abstract harmonic analysis. Volume II, Springer–Verlag, Berlin, 1970. 20. Iosevich, A., Katz, N., Tao, T. The Fuglede spectral conjecture holds for convex planar domains, Math. Res. Lett. 10 (2003), no. 5–6, 559–569. 21. Kahane, J.-P. Sur les fonctions moyenne-périodiques bornées. Ann. Inst. Fourier, 7 (1957), 293–314. 22. Kantorovich, L.V., Akilov, G.P. Functional Analysis. Pergamon Press, 1982. 23. Kashin, B. S. On a property of bilinear forms. Soobshch. Akad. Nauk Gruz. SSR, 97 (1980), 29–32. 24. Katznelson, Y. An Introduction to Harmonic Analysis, Cambridge University Press, 2004. 25. Kozma, G., Nitzan, S. Combining Riesz bases. Invent. Math. 199 (2015), no. 1, 267–285. 26. Kozma, G., Nitzan, S. Combining Riesz bases in Rd , Rev. Mat. Iberoam. 32 (2016), no. 4, 1393–1406. 27. Laba, I. Fuglede’s conjecture for a union of two intervals. Proc. Amer. Math. Soc. 129 (2001), no. 10, 2965–2972. 28. Landau, H. J. Necessary density conditions for sampling and interpolation of certain entire functions. Acta Math. 117 (1967), 37–52. 29. Lev, N., Matolcsi, M. The Fuglede conjecture for convex domains is true in all dimensions. arXiv:1904.12262, (2019). 30. Lev, N., Olevskii, A. Wiener’s “closure of translates” problem and Piatetski–Shapiro’s uniqueness phenomenon. Ann. of Math. (2) 174, (2011), no. 1, 519–541. 31. Levin, B. Ya. Lectures on entire functions. In collaboration with and with a preface by Yu. Lyubarskii, M. Sodin and V. Tkachenko. Translated from the Russian manuscript by Tkachenko. Translations of Mathematical Monographs, 150. American Mathematical Society, Providence, RI, 1996. 32. Lunin, A. A. On operator norms of submatrices. Mat. Zametki 45 (1989), no. 3, 94–100, 128 (in Russian). 33. Matei, B., Meyer, Yves. Quasicrystals are sets of stable sampling. C. R. Math. Acad. Sci. Paris 346 (2008), no. 23–24, 1235–1238. 34. Marcus, A., Spielman, D.A., Srivastava, N. Interlacing families II: Mixed characteristic polynomials and the Kadison-Singer problem. Ann. of Math. 182 (2015), no. 1, 327–350. 35. Nitzan, S., Olevskii, A. Revisiting Landau’s density theorems for Paley–Wiener spaces. C. R. Acad. Sci. Paris, Ser. ´ I Math. 350 (2012), no. 9–10, 509–512. 36. Nitzan, S., Olevskii, A., Ulanovskii, A. A few remarks on sampling of signals with small spectrum. Proceedings of the Steklov Institute of Mathematics 280 (2013), 240–247. 37. Nitzan, S., Olevskii, A., Ulanovskii, A. Exponential frames for unbounded sets. Proc. Amer. Math. Soc. 144 (2016), 109–118. 38. Olevskii, A., Completeness in L2 (R) of almost integer translates. C. R. Acad. Sci. Paris Ser. ´ I Math. 324 (1997), no. 
9, 987–991. 39. Olevskii, A., Ulanovskii, A. Almost integer translates. Do nice generators exist? J. Fourier Anal. Appl. 10 (2004), no. 1, 93–104.

Reconstruction of Signals: Uniqueness and Stable Sampling

49

40. Olevskii, A., Ulanovskii, A. Universal sampling of band-limited signals. C. R. Math. Acad. Sci. Paris 342 (2006), no. 12, 927–931. 41. Olevskii, A., Ulanovskii, A. Universal sampling and interpolation of band-limited signals. Geom. Funct. Anal. 18 (2008), 1029–1052. 42. Olevskii, A., Ulanovskii, A. Uniqueness sets for unbounded spectra. C. R. Acad. Sci. Paris, Sér. I 349 (2011), 679–681. 43. Olevskii, A., Ulanovskii, A. Functions with Disconnected Spectrum: Sampling, Interpolation, Translates. AMS, University Lecture Series, 65, 2016. 44. Olevskii, A., Ulanovskii, A. Discrete Uniqueness Sets for Functions with Spectral Gaps. Mat. Sb., 208 (2017), No. 6, 130–145. 45. Olevskii, A., Ulanovskii, A. Discrete translates in Lp (R), Bull. Lond. Math. Soc. 50 (2018), no. 4, 561–568. 46. Olevskii, A., Ulanovskii, A. Discrete translates in function spaces, Anal. Math. 44 (2018), no. 2, 251–261. 47. Olevskii, A., Ulanovskii, A. On Irregular Sampling and Interpolation in Bernstein Spaces, Proc. Steklov Inst. Math. 303 (2018), no. 1, 178–192. 48. Ortega-Cerdá, J., Seip, K. Fourier frames. Ann. of Math. (2) 155 (2002), no. 3, 789–806. 49. Paley, R., Wiener, N. Fourier Transform in the Complex Domain, AMS 1934. 50. Privalov, Al. A. The growth of the powers of polynomials, and the approximation of trigonometric projectors. (Russian) Mat. Zametki 42, no. 2 (1987), 207–214. English translation in: Math. Notes 41 (1987), no. 1–2, 619–623. 51. Redheffer, R.M., Young, R.M. Completeness and basis properties of complex exponentials. Trans. Amer. Math. Soc. 277 (1983), no. 1, 93–111. 52. Seip, K. Interpolation and Sampling in Spaces of Analytic Functions, University Lecture Series, 33. American Mathematical Society, Providence, RI, 2004. 53. Tao, T. Fuglede’s conjecture is false in 5 and higher dimensions, Math. Res. Lett. 11 (23) (2004) 251–258. 54. Weaver, N. The Kadison-Singer problem in discrepancy theory. Discrete Math. 278 (2004), 227–239. 55. Wiener, N. Tauberian Theorems, Annals of Math., 33, (1) (1932), 1–100. 56. Young, R.M. An introduction to Nonharmonic Fourier Series, Academic Press. 2001. 57. Zygmund, A. Trigonometric series. Vol. I, II. Third edition, With a foreword by Robert A. Fefferman. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 2002. xii; Vol. I, II.

Sampling Theory in a Fourier Algebra Setting

M. Maurice Dodson and J. Rowland Higgins

M. M. Dodson, Department of Mathematics, University of York, York, UK. J. R. Higgins (deceased), Cambridge, UK.

Abstract Sampling theory has been studied in a variety of function spaces and our purpose here is to develop the theory in a Fourier algebra setting. This aspect is not as well-known as it might be, possibly because the approximate sampling theorem, central to our discussion, had what could be called a mysterious birth and a confused adolescence. We also discuss functions of the familiar bandlimited and bandpass types, showing that they too have a place in this Fourier algebra setting. This paper combines an expository and historical treatment of the origins of exact and approximate sampling, including bandpass sampling, all in the Fourier algebra setting. It has two objectives. The first is to provide an accessible and rigorous account of this sampling theory. The various cases mentioned above are each discussed in order to show that the Fourier algebra, a Banach space, is a broad and natural setting for the theory. The second objective is to clarify the early development of approximate sampling in the Fourier algebra and to unravel its origins.

1 Introduction

It is sometimes possible to represent a function f by a formula involving the restriction of f to a subset of its domain. A familiar example is Cauchy's integral formula for functions analytic on the unit disc, where the restriction is to the unit circle. The restriction can also be to a countable set, such as the rationals Q for continuous functions, or the discrete set (π/w)Z for analytic functions of exponential type w (see Sect. 3.3), where w > 0 and throughout will be an independent parameter. This last type of representation in terms of partial information is proved in the classical sampling theorem¹ (Theorem 2 in Sect. 4.1) for continuous complex square-integrable functions f on the real line R. If the maximal frequency of f : R → C is πw radians per second (or w/2 hertz), then f is given by the formula

f(t) = \sum_{n\in\mathbb{Z}} f\left(\frac{n}{w}\right) \frac{\sin\pi(wt-n)}{\pi(wt-n)},    (1)

where the samples n/w are taken every 1/w seconds, the Nyquist rate of twice the maximal frequency (see Sect. 4.1 for more details). In engineering, the functions f are usually referred to as bandlimited signals (see Sect. 2.2). This result, which in principle is exact, is associated with C. E. Shannon [51] and many others [12, 15, 29, 32]. It is widely applied in science and engineering (e. g., [7, 19, 37, 49]) and underpins communication theory, particularly ‘digital-to-analogue’ conversion (e. g., [2, 5, 36]). It is a fundamental result in the mathematical theory, where the setting is often taken to be L2 , corresponding to finite energy signals (see Sect. 2.1). The mathematical theory has been extended in many directions and studied in a variety of function spaces (see, for example, [5, 17, 62]). Our purpose in this chapter is to develop a sampling theory in a Fourier algebra setting parallel to that in the classical theory. The classical theory begins in a subspace of L2 (R) (defined in Sect. 2.1) in which the above sampling result is presented as Theorem 2 and called an Exact Sampling Theorem2 (see Sect. 4.1). However, the Fourier algebra aspect of sampling theory is not as well-known as it might be, partly because it involves L1 (R) which has less structure than L2 (R) and possibly also because the approximate sampling theorem (Theorem 6), central to our discussion, had what could be called a mysterious birth and a confused adolescence. In order to clarify this, an expository and historical treatment is given of the origins of exact and approximate sampling; it is not intended to be a survey but a fresh look at early work, with some attention being paid to the finer points of the analysis. Bandlimited and bandpass functions (see Sect. 6) will be shown to have a place in the Fourier algebra setting as well as in the L2 one. Although L2 -norm information does not appear, it emerges that the Fourier algebra, which will be denoted by A (see Sect. 3.1) and is a Banach space, is a broad and natural domain for sampling.

¹ The name classical sampling theorem already occurs in [8, p. 75].
² The adjective 'exact' is in contrast with 'approximate' sampling; the two forms are defined in Sect. 2.4.
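As a concrete illustration of the classical formula (1) (not part of the original text), the following minimal Python sketch reconstructs a bandlimited test signal from its samples taken every 1/w seconds; the test signal, truncation level N and evaluation grid are illustrative choices.

```python
import numpy as np

# Reconstruct a bandlimited test signal from its samples f(n/w) via formula (1),
# truncating the series to |n| <= N.  Test signal, N and the grid are illustrative.
w = 2.0
t = np.linspace(-3.0, 3.0, 601)

def f(x):
    # spectrum contained in [-pi*w, pi*w]: a sum of two shifted sinc pulses
    return np.sinc(w * x) + 0.5 * np.sinc(w * (x - 0.3))

N = 200
n = np.arange(-N, N + 1)
samples = f(n / w)                                   # one sample every 1/w seconds
# np.sinc(x) = sin(pi*x)/(pi*x), so sinc(w*t - n) is exactly the kernel in (1)
recon = samples @ np.sinc(w * t[None, :] - n[:, None])

print(np.max(np.abs(recon - f(t))))                  # small, and shrinks as N grows
```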

1.1 Some Background

Classical sampling involves square-integrable, continuous functions with frequencies limited to the interval [−πw, πw]. These functions constitute the richly endowed Paley–Wiener space, denoted by PW_w (see Sect. 3.3). The classical exact sampling theorem establishes that members of PW_w have a representation as the sampling series (1) that is both mathematically exact and widely applicable. The Paley–Wiener space is a standard mathematical model of the engineers' class of bandlimited signals, but a reduction in a priori information, such as inaccurate bounds for the frequencies, can cause errors which compromise its effectiveness. A different and more realistic model would result by dropping the bandlimited requirement at the expense of losing the exactness of the PW_w model. This was the approach of P. Weiss [56], who announced in a brief note in 1963, without proof or references, an approximate or generalized sampling theorem ([16, p. 251, Theorem B]). Weiss's generalization was two-fold: the bandlimitation of the frequencies and the square-integrability were dropped and replaced simply by the integrability of the spectral or frequency function (see Sect. 2.2) over R. Thus instead of functions in an L² setting, as in the classical exact sampling theorem, he considered the inverse Fourier transforms of functions in L¹(R), i.e., functions in the Fourier algebra³ A. Weiss also required that the functions should satisfy a number of other technical conditions, though these turned out to be unnecessary. He concluded with the assertion that the constant in the approximate sampling theorem was best possible but did not provide an extremal.
Weiss's note was followed within a few years by a flurry of papers which supplied proofs, simplifications and generalizations of his theorem: J. L. Brown [8, 9], C. J. Standish [52], D. C. Stickler [54] and R. P. Boas [6]. All the proofs, with the partial exception of Brown's, are in the Fourier algebra. The details are not straightforward and are clarified in Sect. 7. The upshot was that Brown proved the approximate sampling theorem (Sect. 5.2) for the class F² of square-integrable functions with integrable spectral functions (see (16) in Sect. 3.4), while Boas established it for functions in the larger Fourier algebra A (see Sect. 5.3). There are other fruitful generalizations of the exact sampling theorem, see, for example, Chaps. 8, 9, 16 in [33] and the contributions in [36], but these topics lie outside the present study.
Although functions in the Fourier algebra A are not necessarily integrable, the spaces F² and A can be seen as counterparts in roughly parallel theories. The subset A_w of functions in A with frequencies limited to the interval [−πw, πw] is the counterpart of PW_w but has significantly less structure. In particular, the duality enjoyed by functions and spectral functions in the classical case is no longer valid. Not surprisingly, the exact sampling theorem for bandlimited functions in A, i.e., for A_w (Theorem 3), is much weaker than for bandlimited functions in L²(R), i.e., for PW_w (Theorem 2), and only establishes pointwise convergence for the sampling series.

³ This terminology was not used by these authors but at the time it was relatively recent [27].

On the other hand, A has a simpler definition and has a broader setting than F², which it strictly contains (Theorem 1). Moreover, the sampling series is the same for both spaces (see Theorems 5 and 6 for functions in F² and A, respectively), so that functions f ∉ L²(R) (i.e., signals not admitting a finite energy condition) can still be reconstructed approximately using the same sampling formula as in the exact case (19). Finally, instead of the time domain and signal being in the foreground as in the classical case, the primary focus of interest in the Fourier algebra setting is the frequency domain and spectral functions. This has some advantages.
The terminology and definitions needed to discuss sampling are set out in Sect. 2 and some relationships between the Fourier algebra and other function spaces are studied in Sect. 3. The classical exact sampling theorem for functions in PW_w is stated and its Fourier algebra analogue, the exact sampling theorem for functions in A_w, is proved in Sect. 4. The development of the more difficult approximate sampling in the Fourier algebra A is discussed fully in Sect. 5, where two proofs of the approximate sampling theorem are given. Exact and approximate bandpass sampling theorems, which rest on these bandlimited results, thus increasing the complexity of the arguments, are proved in Sect. 6, while Sect. 7 ends our study with some historical notes.

2 Notation, Definitions and Terminology

A knowledge of basic measure theory and the Lebesgue integral is assumed, together with some familiarity with Fourier analysis, including of course both the Fourier transform and series, as covered in the books [4, 24, 38, 48, 53, 55]. Some basic definitions and terminology are introduced to fix notation.

2.1 Measure and Integral

Given an exponent p ∈ [1, ∞), the set of complex Lebesgue measurable functions f : R → C on R for which |f|^p is integrable, i.e.,

\int_{\mathbb{R}} |f(t)|^{p}\, dt < \infty,

is denoted by L^p(R). The p-norm ‖f‖_p of f ∈ L^p(R) is defined by

\|f\|_{p} := \Big( \int_{\mathbb{R}} |f|^{p} \Big)^{1/p} < \infty.

The sets of Lebesgue integrable and square-integrable functions on R will be denoted by L¹(R) and L²(R), respectively. They are both Banach spaces, i.e., complete normed linear spaces. An inner product \int_{\mathbb{R}} f\bar{g} is defined for each f, g ∈ L²(R), which makes it a Hilbert space as well.
The Lebesgue measure of a measurable set X ⊂ R is given by the integral \int_{\mathbb{R}} \chi_X(x)\,dx, where the characteristic function χ_X of the set X is given by χ_X(x) = 1 when x ∈ X and 0 otherwise. The integral of a function f : X → R can be defined by \int_X f = \int_{\mathbb{R}} f\chi_X, with norm over X given by (\int_X |f|^p)^{1/p}. Such functions comprise L^p(X) and, when the meaning is clear, will be simply referred to as integrable. We use L^p to describe a theory involving the Lebesgue integral for exponents p and spaces X.
A useful property of the Lebesgue integral concerns sets E ⊂ R which are null or of Lebesgue measure zero. Sets of discrete points, such as the integers Z, or countable sets, such as the rationals Q, are examples of null sets. A null set E is 'negligible' in the sense that for any integrable ψ, \int_E \psi = 0 [48, Chap. 1], [55, Chap. X]. As a consequence, the values of an integrable function and the range of integration can be changed on a null subset without altering the value of the integral. If two functions f and g differ only on a null subset of R, i.e., if f and g agree almost everywhere (a.e.) or almost always (a.a.) on R, then the integrals \int_{\mathbb{R}} f = \int_{\mathbb{R}} g and we write f ∼ g. The relation ∼ is an equivalence relation on L¹(X) and on L²(X). The resulting space of equivalence classes is usually identified with the original space of functions [48, Sect. 3.10].

2.2 Signals and Frequencies

We will model signals and their frequencies in the Fourier algebra and related spaces (see Sect. 3) by complex-valued functions defined on the real line. There are two types of function. First, analogue signals, which will be taken to be continuous, complex-valued functions and denoted by f : R → C. In engineering, signals are often taken to be real, but in this chapter they are not necessarily real unless stated otherwise. They are also not necessarily integrable nor square-integrable over R or subsets of R such as intervals, unless otherwise stated. Signals can be thought of as depending on time and their domain R is called the time domain by engineers. These analogue signals or functions will usually be denoted by Roman letters, such as f, g or h, and the real or 'time' variables by t or x. The term 'analogue' will usually be omitted and the terms function and signal will be used interchangeably when the meaning is clear.
Secondly, a spectral function ϕ : R → C of a signal f is a complex-valued function of the frequencies, assumed to be real and, to use engineering terminology, in the frequency domain. The extension to complex frequencies and entire functions will not be considered here (more details are in [33, Sect. 6.3]). Spectral functions will usually be Lebesgue integrable or square-integrable (e.g., in L¹(R) or L²(R), respectively) but are not assumed to be continuous. They will be denoted by Greek letters, e.g., ϕ, ψ, η, ρ, with frequencies usually denoted by τ or ω. The frequencies can be measured in cycles per second (hertz) or radians per second, depending on the choice of kernel for the Fourier transform (see Sect. 2.3.1). From a mathematical point of view, the Fourier transform and its inverse (see Sect. 2.3.1) relate a signal and its frequencies.⁴
Our main concern is with integrable spectral functions and their inverse Fourier transforms (see (3)), which are signals or continuous functions of time. Square-integrable spectral functions (or finite energy signals), where the frequencies and signal are related by the Fourier–Plancherel transform and its inverse (see Sect. 2.3.1), are also considered. The support (also known as the essential support) of a spectral function ϕ is defined to be the smallest closed set in R outside which ϕ vanishes a.e. and is denoted by supp ϕ [33, Defn. 7.1], [48, Defn. 2.9]. The support of spectral functions is basic to the theory of sampling considered here. A function or signal f with a spectral function ϕ is called bandlimited if its frequencies are bounded, i.e., if its support supp ϕ lies in a bounded interval, such as [−πw, πw]. The term bandwidth is used for the measure of the support of a spectral function ϕ. Thus when supp ϕ ⊆ [−πw, πw], the bandwidth is at most 2πw.

2.3 Fourier Analysis

Fourier analysis lies at the heart of sampling theory and Fourier series, as well as the Fourier transform, will play an important rôle in the paper. Fuller accounts of this theory, including basic material and further references on Lebesgue measure and integral, are in the books cited at the beginning of the section, together with [33, 41].

2.3.1 The Fourier Transform

Given an integrable (spectral) function ϕ : R → C, i.e., ϕ ∈ L¹(R), the Fourier transform ϕ∧ of ϕ will be defined by

\varphi^{\wedge}(t) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(\tau)\, e^{-i\tau t}\, d\tau.    (2)

The kernel has modulus 1/\sqrt{2\pi}, so that its product with an integrable function ϕ is also integrable. Indeed the transform ϕ∧ is a uniformly continuous function and, by the Riemann–Lebesgue Lemma, ϕ∧(t) → 0 as |t| → ∞, whence ϕ∧ is in C₀, the Banach algebra of continuous functions vanishing at infinity [48, Sect. 3.16], [53, Theorem 1.2].
The kernel e^{-it\tau}/\sqrt{2\pi} chosen for the Fourier transform corresponds to the frequency τ being in radians per second. The kernel e^{-2\pi it\tau}, corresponding to τ being measured in hertz, has a number of technical advantages and has been widely adopted across mathematics.

⁴ The time and frequency domains are both represented by the real line because it is self-dual [47].

Our choice of the kernel in (2) was dictated by the historical component that is an essential part of this chapter, in order to make comparisons with the original papers on approximate sampling simpler.
The inverse Fourier transform ϕ∨ of ϕ ∈ L¹(R) is defined by

\varphi^{\vee}(t) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(\tau)\, e^{i\tau t}\, d\tau    (3)

and is also in C₀. The functions f = ϕ∨ in (3) constitute the Fourier algebra A = {ϕ∨ : ϕ ∈ L¹(R)} = (L¹(R))∨ ⊂ C₀ (see Sect. 3.1). If the spectral function ϕ ∼ ψ on R, then the inverse Fourier transforms of ϕ and ψ agree everywhere on R, i.e., ϕ∨ = ψ∨.
For convenience, the characteristic function of the open interval (−πw, πw) will be written

\chi_{w} := \chi_{(-\pi w,\, \pi w)}.    (4)

It will also be used for the closed interval [−πw, πw] as well as the half-open ones, since the endpoints {−πw, πw} are a null set. The inverse Fourier transform (χ_w)∨ of χ_w is given by

(\chi_{w})^{\vee}(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \chi_{w}(\tau)\, e^{i\tau t}\, d\tau = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} e^{i\tau t}\, d\tau = \sqrt{2\pi}\, w\, \operatorname{sinc} wt,    (5)

where

\operatorname{sinc} t := \begin{cases} \dfrac{\sin\pi t}{\pi t} & : t \neq 0 \\[4pt] 1 & : t = 0. \end{cases}    (6)

The sinc function and its parameterization given by t ↦ sinc(wt) occur throughout the following analysis, e.g., (9). If f ∈ L¹(R) and ϕ = f∧ ∈ L¹(R), then f∧∨ = ϕ∨ is in C₀ and the L¹ Fourier inversion theorem (f∧)∨ ∼ f holds, i.e., f∧∨(t) = f(t) for a.a. t ∈ R. If f is also continuous, then equality holds for all t ∈ R.
Usually we consider Lebesgue integrable functions f, but square-integrable functions f will also be encountered and their Fourier–Plancherel transforms Ff needed. A brief account of their properties follows (for more details see [48, 53]). When f is in L¹ ∩ L²(R), the Fourier–Plancherel transform is given by the L¹(R) transform, i.e.,

(\mathcal{F}f)(\tau) = f^{\wedge}(\tau) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(t)\, e^{-it\tau}\, dt.

The definition of this transform is extended to functions in the complete Hilbert space L2 (R) using the right-hand side of (2) but with the integral to be understood in the sense of L2 (R) [48, Chap. 9]. The different notations used here for the L1 (R) and the L2 (R) Fourier transforms are for clarity and to make the difference between Theorems 5 and 6 explicit. The Fourier–Plancherel transform F is an isometry of L2 (R) onto L2 (R), with inverse denoted by F−1 . For each f in L2 (R), F −1 F f ∼ f, so that F−1 Ff (t) = f (t) for a. a. t ∈ R. When f is continuous, F −1 Ff (t) = f (t) for all t ∈ R. Analogous relations hold for inverse transforms; and also when the support of f ∧ is restricted to the interval [−π w, π w].
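A quick numerical sketch (not from the original text) of the convention just fixed: with the radians-per-second kernel of (2)–(3), the inverse transform of χ_w computed by quadrature matches the closed form (5). The value of w and the sample points are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Check of (5) with the radians-per-second kernel of (2)-(3): the inverse
# transform of chi_w equals sqrt(2*pi) * w * sinc(w*t).  The value of w is arbitrary.
w = 1.5

def chi_w_inverse(t):
    re, _ = quad(lambda tau: np.cos(tau * t), -np.pi * w, np.pi * w)
    im, _ = quad(lambda tau: np.sin(tau * t), -np.pi * w, np.pi * w)
    return (re + 1j * im) / np.sqrt(2.0 * np.pi)

for t in (0.0, 0.37, 1.0, 2.5):
    lhs = chi_w_inverse(t)
    rhs = np.sqrt(2.0 * np.pi) * w * np.sinc(w * t)   # np.sinc is sin(pi x)/(pi x)
    print(t, abs(lhs - rhs))                          # agreement to quadrature accuracy
```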

2.3.2 Fourier Series

A function ψ : R → C is 2πw-periodic if ψ(ω + 2πwk) = ψ(ω) for each ω ∈ R and k ∈ Z, and is associated with a Fourier series denoted by Tψ. We will use the complex form of the Fourier series Tψ for the 2πw-periodic function ψ, defined formally as the 2πw-periodic, doubly infinite symmetric sum given by

T\psi(\omega) := \lim_{N\to\infty} T^{(N)}\psi(\omega) = \sum_{k\in\mathbb{Z}} c_{k}\, e^{ik\omega/w},    (7)

where

T^{(N)}\psi(\omega) := \sum_{k=-N}^{N} c_{k}\, e^{ik\omega/w}

and where

\widehat{\psi}(k) = c_{k} := \frac{1}{2\pi w} \int_{-\pi w}^{\pi w} \psi(\omega)\, e^{-ik\omega/w}\, d\omega    (8)

is the k-th Fourier coefficient of ψ. The k-th Fourier coefficient ψ̂(k) ∈ C of a periodic function ψ should not be confused with ψ∧(k), the value at k of the Fourier transform of an integrable function ψ. However, there is an interplay between them: the Fourier series for a 2πw-periodic function ψ associated with a spectral function vanishing outside the interval [−πw, πw] plays an important part in sampling theory (see Sect. 2.5). Indeed Shannon [51] observed that the coefficient f(n/w) in the sampling formula (19) is essentially the n-th coefficient in a Fourier series expansion of the spectral function made periodic.
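The following small Python sketch (an addition, not from the original text) illustrates Shannon's observation numerically: for a spectral function ψ supported in [−πw, πw], the coefficients (8) are a fixed multiple of the samples of f = ψ∨ (the exact relation is proved in Lemma 1 below). The Gaussian choice of ψ is purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Shannon's observation in miniature: for a spectral function psi supported in
# [-pi*w, pi*w], the Fourier coefficient (8) is a constant multiple of a sample
# of f = psi^vee (made precise in Lemma 1 below).  The choice of psi is illustrative.
w = 1.0
psi = lambda tau: np.exp(-tau ** 2)          # restricted to [-pi*w, pi*w] below

def f(t):
    # f = psi^vee with the kernel of (3); psi is real and even, so f is real
    re, _ = quad(lambda tau: psi(tau) * np.cos(tau * t), -np.pi * w, np.pi * w)
    return re / np.sqrt(2.0 * np.pi)

def coeff(k):
    # definition (8) of the k-th Fourier coefficient of the periodized spectrum
    re, _ = quad(lambda om: psi(om) * np.cos(k * om / w), -np.pi * w, np.pi * w)
    return re / (2.0 * np.pi * w)            # imaginary part vanishes by evenness

for k in range(-3, 4):
    print(k, coeff(k), f(-k / w) / (w * np.sqrt(2.0 * np.pi)))   # the columns agree
```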

The convergence of the Fourier series representation Tψ of the 2πw-periodic function ψ given by (7) is more delicate than in the L2 (R) case. For example, the Fourier series Tψ of a function ψ integrable over the interval [−π w, π w] might not converge anywhere ([40], [41, Theorem 19.2]). However, if ψ is continuous as well, its Fourier series Tψ converges to ψ a. e. [41, Theorem 19.4], [48, Sect. 5.11], i.e., Tψ ∼ ψ (the symbol ∼ can also be used to indicate that T ψ is the formal Fourier series of ψ but in this chapter it means equality a. e.). A useful fact is that if the function ψ is of bounded variation5 in a finite interval [48, Sect. 8.12], [55, Sect. 11.4], then the sequence of partial sums T (N ) ψ of the Fourier series converges boundedly [55, Sect. 13.232]. At a discontinuity at ω say of such a function, the Fourier series converges to the mean of the values ψ(ω − 0) and ψ(ω + 0) [55, Sect. 13.232]. Together, bounded variation and continuity imply uniform convergence to ψ [55, Sect. 13.25]. Integrating Fourier series is less problematic. Any Fourier series, whether convergent or not, can be integrated termwise over any finite limits [55, Sect. 13.5]. Again, the product of a Fourier series of any integrable periodic function with a function of bounded variation can be integrated termwise over any finite limits [55, Sect. 13.53] (see Sects. 4.2, 5.3 below).

2.4 The Sampling Series S_w f

The bandlimited sampling series or cardinal series [33] for a function f : R → C is displayed in (1). For a function f in a class of functions (or signals), this series will be denoted by S_w f and defined formally by the limit of the finite symmetric sum

(S_w^{(N)} f)(t) := \sum_{k=-N}^{N} f\left(\frac{k}{w}\right) \operatorname{sinc} w\left(t - \frac{k}{w}\right),

i.e.,

(S_w f)(t) := \lim_{N\to\infty} (S_w^{(N)} f)(t) := \sum_{k\in\mathbb{Z}} f\left(\frac{k}{w}\right) \operatorname{sinc} w\left(t - \frac{k}{w}\right),    (9)

an infinite symmetric sum. When Sw f converges to f and f = Sw f , the sampling is said to be exact. For f in the Paley–Wiener space P Ww (see Sect. 3.3), the series

⁵ In the interval ψ has only finitely many discontinuities and its real and imaginary parts have only finitely many maxima and minima [55, p. 407].

converges by the classical exact sampling theorem (Theorem 2) and in the Fourier algebra analogue A_w by the exact sampling theorem for f ∈ A_w (Theorem 3). When the limit S_w f exists but f ≠ S_w f, the aliasing error for f is

E_w f := f - S_w f    (10)

and the sampling is called approximate. The sampling series Sw f converges in F 2 (Theorem 5) and in A (Theorem 6) and is a natural approximation to a nonbandlimited function f (see Sect. 5). The sinc kernel (6) in Sw f arises from its spectral function of being supported in the interval [−π w, π w]. When the limit does not exist, the sampling series is used in a formal sense, as with the closely related Fourier series discussed in Sect. 2.3.2. The term ‘foldover error’ is also used [54] but covers other distortions such as the related ‘sideband foldover’ [50, p. 76]. For equivalent forms and some history see [33, Chap. 1]. There are other types of sampling series that depend on the nature of the signal and the sampling, such as bandpass, interlaced or multichannel sampling, which can have quite different sampling series (see [33] for more details). Bandpass sampling is associated with functions which have spectral functions supported in a pair of frequency bands symmetrically placed about the origin and is discussed in Sect. 6. Although derived from the bandlimited sampling series, bandpass sampling series are much more complicated (see Theorem 8). For brevity, sampling series will be taken to be ‘bandlimited’ unless described otherwise (mainly in Sect. 6).
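To make the definitions (9) and (10) concrete (an added sketch, not from the original text), the following Python lines evaluate the truncated sampling series for two illustrative signals: one bandlimited to [−πw, πw], where the sampling is exact, and one undersampled, where the aliasing error is visible.

```python
import numpy as np

# Direct transcription of the truncated sampling series (9); the test signals and
# the truncation level are illustrative.  The first signal is bandlimited to
# [-pi*w, pi*w] (exact sampling), the second is not (nonzero aliasing error (10)).
def sampling_series(f, w, t, N=500):
    k = np.arange(-N, N + 1)
    return f(k / w) @ np.sinc(w * np.atleast_1d(t)[None, :] - k[:, None])

w = 1.0
t = np.linspace(-2.0, 2.0, 401)
exact = lambda x: np.sinc(w * x)           # S_w f = f
under = lambda x: np.sinc(2.0 * w * x)     # spectrum reaches out to 2*pi*w: undersampled

print(np.max(np.abs(sampling_series(exact, w, t) - exact(t))))   # essentially zero
print(np.max(np.abs(sampling_series(under, w, t) - under(t))))   # aliasing error E_w f
```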

2.5 Periodization

Let ρ : R → C be a spectral function. The form of periodization of ρ used here is the function ρ⁺ : R → C ∪ {∞} defined formally by an infinite symmetric sum

\rho^{+}(\tau) := \lim_{N\to\infty} \sum_{k=-N}^{N} \rho(2k\pi w + \tau) := \sum_{k\in\mathbb{Z}} \rho(2k\pi w + \tau).    (11)

The function ρ + is evidently 2π w-periodic and is called the 2πw-periodization of ρ. It is related to the Poisson summation formula [6], [53, Chap. 7, Sect. 2]. Various cases can occur. For example, in the case of a spectral function ρ vanishing outside the fundamental period [−πw, πw), the construction (11) is equivalent to repeatedly translating the restriction of ρ to the period [−π w, π w) by 2πw along the frequency axis6 R. Thus ρ + is a well-defined periodic function on R and similarly for the interval (−π w, π w]. Apart from overlapping endpoints,

⁶ The translates of the period are essentially cosets of the quotient group R/S¹ ≅ Z.

the same holds for the periodization of a spectral function with frequency band [−π w, π w]. Typically, the periodization ρ + will be discontinuous at the end points of the translated interval, where the set of endpoints {(2k + 1)π w : k ∈ Z} is null. In any case, these bandlimited spectral functions ρ + reduce to ρ on the open interval (−π w, π w). Brown [8], Standish [52] and Stickler [54] used the above constructions, as did Shannon [51] before them. More recent examples are in [11, 16] and below in Lemma 1 and in Sect. 5.2. Periodization can also be applied to integrable functions on R, producing a 2πwperiodic function whose Fourier series representation converges pointwise a. e. on R and is integrable over [−π w, π w] (Lemma 4); this is used in the second proof of Theorem 6 in Sect. 5.3. Generally the periodization will have discontinuities at the endpoints of the translates, even if the original function is continuous. Periodization also arises naturally with lattices in the abstract group setting [3, 22, 28].
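A brute-force numerical rendering of the periodization (11) follows (an added sketch, not part of the original text); the Gaussian spectral function and the truncation level are illustrative assumptions.

```python
import numpy as np

# The periodization (11) computed by brute force: rho^+ sums the 2*pi*w-translates
# of rho.  A Gaussian rho and the truncation level are illustrative choices.
w = 1.0
rho = lambda tau: np.exp(-tau ** 2)

def periodization(tau, N=50):
    k = np.arange(-N, N + 1)
    return rho(np.atleast_1d(tau) + 2.0 * np.pi * w * k[:, None]).sum(axis=0)

tau = np.linspace(-np.pi * w, np.pi * w, 5)
# 2*pi*w-periodicity: shifting the argument by a full period leaves rho^+ unchanged
print(np.max(np.abs(periodization(tau) - periodization(tau + 2.0 * np.pi * w))))
```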

3 The Fourier Algebra and Related Spaces

The Fourier algebra A and its subspace A_w are presented as settings for approximate and exact sampling, respectively. They will be compared with the more familiar space F² and its subspace the Paley–Wiener space PW_w.

3.1 The Fourier Algebra A

The Fourier algebra A = A(R) for the reals R consists of all those functions that are inverse Fourier transforms of members of L¹(R), i.e.,

A := \{ f : \mathbb{R} \to \mathbb{C} : f = \varphi^{\vee} \text{ for some } \varphi \in L^{1}(\mathbb{R}) \} = (L^{1}(\mathbb{R}))^{\vee}.    (12)

Thus for each f ∈ A, there exists a spectral function ϕ ∈ L¹(R) such that

f(t) = \varphi^{\vee}(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(\tau)\, e^{it\tau}\, d\tau.    (13)

It is readily verified that A = (L¹(R))∧, the usual definition of A [38, p. 123], [46, p. 6], [47, Chap. 5], but (12) is more appropriate for sampling theory, as f corresponds to a signal and ϕ to frequencies. Each f is continuous and A is a proper subset of C₀ (more details are in [38, Chap. 6, Sect. 1.8], [48, Theorem 9.6], [53, Chap. 1, Theorem 1.2]). By (5), the sinc function is in A and in L²(R) but not in L¹(R), although the improper Lebesgue integral \int_{\mathbb{R}} \operatorname{sinc} = 1. Any function ψ ∼ ϕ would also serve as a spectral function for f = ϕ∨. The characteristic function χ_w, defined in (4), is integrable and, by (5), is a spectral

function of a parameterization by w of the sinc function (6). Thus sinc ∈ A and functions or signals in A do not need to be integrable, nor even square-integrable, as is required in F 2 (see Sect. 3.4 and Theorem 1 below), the counterpart of A. Addition and scalar multiplication in L1 (R) are preserved under the Fourier transform, while convolution in L1 (R) is replaced by pointwise multiplication in A [48, Theorem 9.2]. Thus A is algebraically isomorphic to L1 (R) and, with norm transferred from L1 (R), becomes a Banach algebra of functions [46, p. 6]. As a subset of C0 , A inherits the sup norm, corresponding to pointwise uniform convergence, and is dense in and a proper subset of C0 . Having natural extensions to Rn and to locally compact abelian groups [46], A and its generalizations are of interest in functional analysis [47, Chap. 5]. For example, an approximate sampling theorem in the Fourier algebra has been proved for locally compact abelian groups [22]. Fourier algebras for non-abelian locally compact groups have also been studied [27] but as far as we know sampling theory has not been developed in this more general setting. In passing, it is of interest that from (12), A is a shift invariant space.

3.2 The Subspace A_w

The space A_w ⊂ A consists of the inverse Fourier transforms of spectral functions ϕ ∈ L¹(R) with support in the interval [−πw, πw], i.e.,

A_w = \{ f \in A : f = \varphi^{\vee} \text{ for some } \varphi \in L^{1}(\mathbb{R}),\ \operatorname{supp}\varphi \subseteq [-\pi w, \pi w] \},

so that each f ∈ A_w has the representation

f(t) = \varphi^{\vee}(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(\tau)\, e^{it\tau}\, d\tau = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} \varphi(\tau)\, e^{it\tau}\, d\tau,    (14)

for some ϕ ∈ L1 (R) with supp ϕ ⊆ [−π w, π w]. The definition above for Aw is similar to that of the Paley–Wiener space P Ww , defined in Sect. 3.3 below. The subspace Aw can be identified with (L1 ([−π w, π w]))∨ since ϕ ∼ ϕ χ [−π w, π w] can be identified with the restriction ϕ|[−π w, π w] ∈ L1 ([−π w, π w]). While Aw is a normed linear space (with the supremum or sup norm) and a subset of A, it is not a sub-algebra of A. This is because, unlike L1 (R), L1 ([−π w, π w]) is not (algebraically) closed under convolution of its elements; hence by the usual Fourier transform calculus, Aw is not closed under multiplication.

3.3 The Paley–Wiener Space PW_w

We take the Paley–Wiener space PW_w⁷ to be the set of continuous square-integrable functions f : R → C,⁸ so that f = F⁻¹ϕ, with the support supp ϕ of their spectral functions ϕ = Ff ∈ L²(R) contained in the interval [−πw, πw]. More concisely,

PW_w := \{ f \in L^{2} \cap C(\mathbb{R}) : \operatorname{supp} \mathcal{F}f \subseteq [-\pi w, \pi w] \},

where C(R) is the set of continuous functions on R, and for each t ∈ R,

f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} (\mathcal{F}f)(\tau)\, e^{it\tau}\, d\tau.

Thus the functions f ∈ P Ww are bandlimited signals and the content of the classical sampling theorem is that the sampling formula (1) holds for functions in P Ww (see Sect. 4). More details of this interesting space are in [33, Chaps. 6,7].

3.4 The Space F²

Although approximate sampling began with the Fourier algebra A, classical sampling (exact and approximate) has usually been considered for continuous functions in L²(R). Other integrability conditions have been studied in the sets F^p in L^p(R) ∩ C(R), 1 ≤ p < ∞ (see [16]). The space

F^{1} := \{ f \in L^{1}(\mathbb{R}) \cap C(\mathbb{R}) : f^{\wedge} \in L^{1}(\mathbb{R}) \} \subset L^{1}(\mathbb{R})    (15)

has quite restrictive conditions and, for example, does not include the sinc function (6), but see [10]; an abstract analogue is in [22]. Classical approximate sampling (see Sect. 5, Theorem 5) is studied in a generalization of PW_w given by

F^{2} := \{ f \in L^{2}(\mathbb{R}) \cap C(\mathbb{R}) : \mathcal{F}f \in L^{1}(\mathbb{R}) \}    (16)

(see [11, 16]). As f ∈ L2 (R), the associated spectral function ϕ is given by the inverse Fourier transform, i.e., ϕ = F f , and f = F −1 ϕ. This nice duality is absent in A. Since it is continuous, f ∈ F 2 is determined uniquely by any function ψ ∼ ϕ. By contrast with the Fourier algebra A, the functions in F 2 are in L2 (R) and required to be continuous (see Sect. 3.4).

⁷ See [33, Sect. 6.3], [48, Chap. 19] for the complex analysis theory.
⁸ The extensions to C are entire [33, Theorem 7.2].

3.5 Comparing the Function Spaces

It is readily seen that the class F², which is a subset of L²(R), is also contained in A, but it is not so simple to prove that the inclusion is strict (see [22, Prop. 1] for the analogue in the Fourier algebra A(G), where G is a locally compact abelian group).

Theorem 1 F² ⊂ A.

Proof Let f ∈ F², and let ϕ := Ff, so that ϕ ∈ L² ∩ L¹(R). But ϕ∨ is continuous and ϕ∨ and F⁻¹ϕ agree a.e., i.e., ϕ∨ ∼ F⁻¹ϕ = F⁻¹Ff ∼ f. Moreover, f is continuous by hypothesis, whence ϕ∨(t) = f(t) for every t ∈ R, and so f ∈ A and F² ⊆ A.
To show that the inclusion is strict, we construct a g ∈ A \ F². The function ψ defined by

\psi(\tau) = \begin{cases} |\tau|^{-1/2} & : 0 < |\tau| \leq 1 \\ 0 & : \text{otherwise} \end{cases}    (17)

is a standard example of an even (real) L¹(R) function not in L²(R). Hence the inverse Fourier transform g := ψ∨ of ψ is even and belongs to A. The function g is now calculated directly and shown not to be in L², and a fortiori not in F². Suppose first that t ≥ 0. Then since ψ is even, its inverse Fourier transform g = ψ∨ = ψ∧ reduces (modulo constants) to a cosine Fourier transform and

g(t) = \psi^{\vee}(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} |\tau|^{-1/2} \chi_{[-1,1]}(\tau)\, e^{it\tau}\, d\tau = \sqrt{\frac{2}{\pi}} \int_{0}^{1} \tau^{-1/2} \cos(t\tau)\, d\tau.

By straightforward changes of variable, it follows that

g(t) = \sqrt{\frac{2}{\pi t}} \int_{0}^{t} \tau^{-1/2} \cos\tau\, d\tau = 2\sqrt{\frac{2}{\pi t}} \int_{0}^{\sqrt{t}} \cos(\omega^{2})\, d\omega = \frac{2}{\sqrt{t}}\, C\!\left(\sqrt{\frac{2t}{\pi}}\right),    (18)

where C is the Fresnel integral [1, p. 300, Sect. 7.3.1],⁹ given by

C(t) := \int_{0}^{t} \cos\!\left(\frac{\pi}{2}\,\omega^{2}\right) d\omega.

The Fresnel integral C is analytic with power series obtained by integrating the expansion for the integrand term by term [1, p. 301, Sect. 7.3.11].

⁹ The Fresnel integral is defined differently in [25, Vol. 1, p. 267, Sect. 6.9.2, (29)], [42, p. 353].

Fresnel integral is defined differently in [25, Vol. 1, p. 267, Sect. 6.9.2,(29)], [42, p. 353].

Sampling Theory in a Fourier Algebra Setting

65

is bounded away from 0 and indeed C(∞) = 1/2 [1, Sect. 7.3.20]. More precisely, using contour integration, it can be shown that for large t that C(t) =

1 +O 2

1 t

and an asymptotic formula can be obtained using integration by parts.10 Hence, for all t  K, where K is a sufficiently large constant, the Fresnel integral C(t) > 1/3 and so by (18), 1 g(t) > √ 3 t for all t  π K 2 /2. It follows that



|g|2 

0

1 9





π K 2 /2

1 d t = ∞. t

¹⁰ The formula is

C(t) = \frac{1}{2} + \frac{1}{\pi t}\, \sin\!\left(\frac{\pi}{2}\, t^{2}\right) + O\!\left(\frac{1}{t^{2}}\right);

this also follows from [42, p. 356].

;

66

M. M. Dodson and J. R. Higgins

4 Exact Sampling Theory In exact sampling the signal is reconstructed exactly (in principle) from samples taken at equally spaced points in R. Mathematical treatments of exact sampling go back a long time and can be traced to works of Cauchy, dating from the 1820s [34]. Most modern treatments adopt the L2 (R) framework (see the survey article [32] and the books [5, 33]).

4.1 Exact Sampling for P Ww The classical exact sampling theorem for functions in P Ww ⊂ L2 (R) is now stated for comparison with Theorem 3, the Fourier algebra analogue. Theorem 2 Let f ∈ P Ww . Then for each t ∈ R, f (t) = (Sw f )(t) :=



f (k/w)sinc (wt − k),

(19)

k∈Z

where the sampling series Sw f converges both uniformly to f and absolutely. Moreover, by Parseval’s theorem,

f 22 = Sw f 22 =

1  |f (k/w)|2 , w

(20)

k∈Z

so that the series Sw f converges to f in the L2 (R) norm. The Hilbert space structure enjoyed by the Paley–Wiener space P Ww as a subset of L2 (R) gives the sampling series (19) good convergence properties. The functions f are analytic and by the above can (theoretically) be reconstructed exactly from samples taken at the Nyquist rate of w samples per second. The theorem is related to a surprising number of results in mathematical analysis (see [11, 13, 14, 35] and references therein) and has been proved in many different ways (see, for example, [6, 23, 32]). While periodization of the spectral function is a natural approach and used by Shannon [51], the theorem can also be deduced from the reproducing kernel property [33, Lemma 6.9], using contour integral methods [33, Chap. 9] or from F (P Ww ) being a Hilbert space, so that the analogues of the inversion and Plancherel’s theorems hold [21], [48, Sect. 4.26]). It can also be seen as an interpolation result and indeed was proved as such by E. T. Whittaker in 1915 [57]. Some further sources for the above remarks are the books [5, 33, 61] and the survey [10]. We now turn to exact sampling in Aw .
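The Parseval identity (20) is easy to test numerically; the sketch below (an addition, not from the original text) compares the sample energy with a Riemann-sum estimate of the L² energy for an illustrative PW_w signal.

```python
import numpy as np

# Check of (19)-(20) for a PW_w signal: the L^2 energy equals (1/w) * sum |f(k/w)|^2.
# The test signal, sample range and integration grid are illustrative.
w = 2.0
f = lambda x: np.sinc(w * x) + 0.25 * np.sinc(w * (x - 1.0))

k = np.arange(-4000, 4001)
sample_energy = np.sum(np.abs(f(k / w)) ** 2) / w

t = np.linspace(-400.0, 400.0, 800001)            # crude Riemann sum for ||f||_2^2
l2_energy = np.sum(np.abs(f(t)) ** 2) * (t[1] - t[0])

print(sample_energy, l2_energy)                   # the two energies agree closely
```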

4.2 The Exact Sampling Theorem for A_w

Although the functions in the space A_w do not enjoy the same nice convergence properties as those in PW_w, it can be shown that the sampling series for each function in A_w converges pointwise everywhere.

Theorem 3 Let f ∈ A_w. Then for each t ∈ R,

f(t) = \sum_{k\in\mathbb{Z}} f(k/w)\, \operatorname{sinc}(wt - k) = (S_w f)(t).

Note that there cannot be any L² norm results for Theorem 3 on the lines of (20) in Theorem 2.
In 1972 R. P. Boas [6]¹¹ and H. Pollard and O. Shisha [45]¹² proved Theorem 3 directly and independently. The proof is neat and a useful prelude to Theorem 5. As f ∈ A_w, there exists a ϕ ∈ L¹(R) supported in [−πw, πw] and such that f = ϕ∨. The 2πw-periodization ϕ⁺ of ϕ, given by (11), consists of translates of [−πw, πw] by 2πw along R and is well-defined everywhere. It is 2πw-periodic and is integrable over [−πw, πw]. First the Fourier series Tϕ⁺ for ϕ⁺ is calculated.

Lemma 1 The formal Fourier series Tϕ⁺ for ϕ⁺ is given by

T\varphi^{+}(\tau) = \sum_{k\in\mathbb{Z}} \widehat{\varphi^{+}}(k)\, e^{ik\tau/w} = \frac{1}{w\sqrt{2\pi}} \sum_{k\in\mathbb{Z}} f(k/w)\, e^{-ik\tau/w}.    (21)

Proof From the definitions, f(t) = ϕ∨(t) for t ∈ R, and ϕ⁺(τ) = ϕ(τ) for |τ| < πw, whence by (8) the Fourier coefficient

\widehat{\varphi^{+}}(k) = \frac{1}{2\pi w} \int_{-\pi w}^{\pi w} \varphi^{+}(\tau)\, e^{-ik\tau/w}\, d\tau = \frac{1}{2\pi w} \int_{-\pi w}^{\pi w} \varphi(\tau)\, e^{-ik\tau/w}\, d\tau = \frac{1}{w\sqrt{2\pi}}\, \varphi^{\vee}(-k/w) = \frac{1}{w\sqrt{2\pi}}\, f(-k/w).

¹¹ Most of this note concerns summation formulae and establishes Theorem 6 (the approximate sampling theorem for A). Brown [8] and Weiss [56] are cited but not Standish [52] or Stickler [54]. In a short concluding section, Theorem 3 (the exact sampling theorem for A_w) is proved under rather general hypotheses, namely that the signal f is in A_w.
¹² Their paper discusses continuous analogues of the binomial series and does not mention sampling.

The ratio −k/w differs in sign from that in [6], where f = ϕ∧, while here we take f = ϕ∨. By (7), the Fourier series Tϕ⁺ for ϕ⁺ is given by

T\varphi^{+}(\tau) = \frac{1}{w\sqrt{2\pi}} \sum_{k\in\mathbb{Z}} f(-k/w)\, e^{ik\tau/w}

and (21) follows on changing −k to k.

Proof of Theorem 3. On the interval (−πw, πw), ϕ = ϕ⁺. The inverse Fourier transform ϕ∨ of ϕ is the integral over the interval (−πw, πw) of ϕ = ϕ⁺ multiplied by the kernel e^{it\tau}/\sqrt{2\pi}. Hence

f(t) = \varphi^{\vee}(t) = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} \varphi(\tau)\, e^{it\tau}\, d\tau = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} \varphi^{+}(\tau)\, e^{it\tau}\, d\tau

and since e^{it\tau} is a continuous function of bounded variation on (−πw, πw), it follows from (21) that

f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\pi w}^{\pi w} T\varphi^{+}(\tau)\, e^{it\tau}\, d\tau = \frac{1}{2\pi w} \int_{-\pi w}^{\pi w} \Big( \sum_{k\in\mathbb{Z}} f\!\left(\frac{k}{w}\right) e^{-ik\tau/w} \Big) e^{it\tau}\, d\tau

and by [55, Sect. 13.53] that the Fourier series Tϕ⁺ can be integrated term by term, to give

f(t) = \frac{1}{2\pi w} \sum_{k\in\mathbb{Z}} f\!\left(\frac{k}{w}\right) \int_{-\pi w}^{\pi w} e^{i\tau(t - k/w)}\, d\tau = \frac{1}{2\pi w} \sum_{k\in\mathbb{Z}} f\!\left(\frac{k}{w}\right) \frac{2\sin\pi w(t - k/w)}{t - k/w},

which simplifies to (Sw f )(t) and establishes the result. This argument can be readily adapted to give another proof of the sampling theorem for the Paley–Wiener space above (Theorem 2). A more general form of the above argument is used in approximate sampling (see Sect. 5).
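The pointwise convergence asserted by Theorem 3 can be observed numerically even for an A_w signal of infinite energy. The sketch below (an addition, not from the original text) uses the integrable-but-not-square-integrable spectral function |τ|^(−1/2) on [−πw, πw] (cf. Corollary 1); the truncation level is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import quad

# Pointwise check of Theorem 3 for a signal in A_w that is not in PW_w: the
# spectral function |tau|^(-1/2) on [-pi*w, pi*w] is integrable but not
# square-integrable (cf. Corollary 1).  Truncation level N is illustrative.
w = 1.0

def f(t):
    # f(t) = sqrt(2/pi) * int_0^{pi*w} tau^(-1/2) cos(t*tau) dtau (even spectrum);
    # the substitution tau = u^2 removes the integrable singularity at 0
    val, _ = quad(lambda u: 2.0 * np.cos(t * u ** 2), 0.0, np.sqrt(np.pi * w), limit=200)
    return np.sqrt(2.0 / np.pi) * val

def S(t, N=800):
    return sum(f(k / w) * np.sinc(w * t - k) for k in range(-N, N + 1))

for t in (0.3, 1.7):
    print(t, f(t), S(t))     # the truncated series approaches f(t) as N grows
```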

4.3 A New Proof of Theorem 3

In 1929 J. M. Whittaker [59] proved two sampling results for the Fourier–Stieltjes transforms

f(t) = \int_{-\pi w}^{\pi w} \big( \cos\omega t\, d\Psi_{1}(\omega) + \sin\omega t\, d\Psi_{2}(\omega) \big),

where Ψ₁ and Ψ₂ are continuous [59, p. 171, Theorem 2]. The second result, which we formulate for continuous complex functions ϑ : [−πw, πw] → C, considers convergence of the sampling series to f in terms of (C, 1) summability. It is now stated as Theorem 4.

Theorem 4 (J. M. Whittaker [59]) Let ϑ : [−πw, πw] → C be continuous and let f : R → C be given by

f(t) := \int_{-\pi w}^{\pi w} e^{it\tau}\, d\vartheta(\tau).    (22)

Then the sampling series

(S_w f)(t) := \sum_{n\in\mathbb{Z}} f(n/w)\, \operatorname{sinc}(wt - n)    (23)

ω −π w

ϕ(τ) dτ.

Then by [48, Th 8.17], ϑ is an absolutely continuous function (and measure), of normalized bounded variation and ϑ (ω) = ϕ(ω) for a. a. ω ∈ [−π w, π w]. Hence dϑ(ω) = ϕ(ω) dω for a. a. ω ∈ [−π w, π w] and (22) holds with this choice of ϑ. Thus Theorem 4 applies to f and the sampling series (23) is summable (C, 1) to f (t). Now from (14), f (t)  → 0 as |t| → ∞ by the Riemann–Lebesgue lemma, and since f is continuous, f (n/w) n∈Z is a bounded sequence. It follows that the n-th term of the series in (23) is O(1/n) for each fixed t. But if the n-th term of a (C, 1) summable series is O(1/n), the series is pointwise convergent by Hardy’s convergence theorem (see [58, p. 156] for details and references) and Theorem 3 follows. It will be noticed that the sampling rate is w samples per second in both the classical and the Aw exact sampling theorems. This is compatible with the Nyquist– Landau minimal sampling rate. We will not discuss minimal rates here but the interested reader can find more information in [33, p. 103].

70

M. M. Dodson and J. R. Higgins

5 Approximate Sampling Theory Approximate sampling results were first obtained by Brown [8] and Standish [52], both stemming from Weiss’s announcement [56], and independently by Stickler [54]. In [9], Brown established the following approximate sampling theorem for F 2 . We recall that the aliasing error is defined in (10) by Ew f := f − Sw f . Theorem 5 Let f ∈ F 2 . Then for each t ∈ R, f (t) = (Sw f )(t) + (Ew f )(t), where the aliasing error Ew f satisfies

|(Ew f )(t)| 

2 π

|τ|>π w

|Ff (τ)|dτ.

(24)

Furthermore, f (t) = lim (Sw f )(t) w→∞

(25)

uniformly in t. A self-contained account of this result is included in a comprehensive study of convergence by Butzer et al. [16, Theorem B]. A direct proof without frequency splitting is given in [11, Sect. 5, Theorem 1], where Theorem 5 is chosen to close an equivalence ring of seven theorems. The proof of Boas [6] for functions in A is different but both proofs require some intricate arguments which will be discussed in Sects. 5.2 and 5.3.

5.1 The Approximate Sampling Theorem for A We now return to the Fourier algebra setting, where the approximation result below differs slightly from the F 2 case. Two proofs will be given. The representations of the sampling series are the basis of the approximation (Lemmas 3 and 6). Both proofs, which involve dominated convergence arguments, use periodization and bounded variation but in different ways. Theorem 6 Let f ∈ A be given by 1 f (t) = √ 2π

R

ϕ(τ)eitτ dτ = ϕ ∨ (t),

where ϕ ∈ L1 (R). Then for each t ∈ R, f (t) = (Sw f )(t) + (Ew f )(t),

Sampling Theory in a Fourier Algebra Setting

71

where the aliasing error Ew f satisfies

|(Ew f )(t)|  Moreover, the constant



2 π

|τ|>π w

|ϕ(τ)| dτ.

(26)

2/π cannot be reduced and f (t) = lim (Sw f )(t) w→∞

(27)

uniformly in t. The integrand in the error estimate (26) above for f ∈ A differs from that in (24) of Theorem 5, where f ∈ F 2 . However, the asymptotic formula (27) has the same 2 form as (25). Since √ F ⊂ A, the result for A implies Theorem 5 and so the best possible constant 2/π is also the same.

5.2 Periodization and the Fourier Kernel The proof given here has similarities with Brown’s pioneering paper [8] and with [52, 54] and indeed is close to [11, Sect. 5, Theorem 1], where Theorem 5 above is proved for functions in F 2 . In these papers, the restriction ξ√ t to a fundamental interval of the exponential factor in the Fourier kernel ei t τ / 2π is periodized (see Sect. 2.5 and Lemma 2). Let t ∈ R be fixed. The function ξ t : R → C is defined by  ξ t (τ) =

ei t τ : −π w  τ < π w, 0 : otherwise.

(28)

The function ξ t is in L1 (R), is of modulus 1 on [−π w, π w) and vanishes otherwise. It is of bounded variation and continuous on (−π w, π w). The translates of the interval [−π w, π w) by integer multiples of 2π w form a disjoint cover for R. Thus for each τ ∈ R, there exists a unique integer kτ such that τ lies in the translate [−π w, π w) + 2π w kτ . Let τ = ω + 2π w kτ , where ω is in [−π w, π w), so that kτ vanishes when −π w  τ < π w. By (11), the of ξ t at τ is given by periodization ξ + t +

ξ t (τ) :=

 k∈Z

 ξ t (τ + 2πwk) =

/ (2Z + 1)π w ei t (τ−2π w kτ ) : τ ∈ −iπ w t : τ ∈ (2Z + 1)π w. e

(29)

72

M. M. Dodson and J. R. Higgins

Hence ξ + is a well behaved 2πw-periodic function with modulus 1, of bounded t variation13 on [−π w, π w], satisfies ξ + = ξ t on (−π w, π w) where it is continut + ous. Thus, by periodicity, ξ t is continuous a. e. in R and ξ + (τ) = ei t (τ−2π w kτ ) a. e. t in R. Lemma 2 For each t ∈ R, the Fourier series T ξ + for ξ + is given by t t T ξ+ (τ) = t



sinc(wt − k)eiτ k/w

k∈Z

and T ξ + ∼ ξ+ in R. t t + (k) of the Fourier series Proof By definition (8), the k-th Fourier coefficient ck = ξ t + for ξ t is given by + (k) = ξ t

1 2π w



πw

−π w

eitτ e−ikτ/w dτ =

sin π(wt − k) := sinc(wt − k). π(wt − k)

The Fourier series follows from substitution in (7). Since ξ + is continuous and of t bounded variation in the interval (−π w, π w), by Sect. 2.3.2 when |τ| < π w, the partial sums T (N ) ξ + (τ) of the Fourier series T ξ + (τ) converge uniformly to ξ + (τ). t t t + + Thus by periodicity T ξ t = ξ t a. e. on R. The Fourier series for ξ + was established by L. Euler [26] (see [35] for comments t and references). The Fourier coefficient indicates the link with the sampling series (19). The representation of Sw f when f ∈ A is now obtained. If the support of ϕ is contained in [−π w, π w], the integral reduces to f , giving Theorem 3. Lemma 3 Let f ∈ A have a spectral function ϕ ∈ L1 (R), where f = ϕ ∨ . Then for each t ∈ R, the sampling series Sw f for f is given by 1 (Sw f ) (t) = √ 2π

R

ϕ(τ)ξ + (τ) dτ. t

Proof From thedefinition (28), |ξ + (τ)| = 1 for all τ ∈ R, whence ϕξ + ∈ L1 (R) t t + and the integral R |ϕ ξ t | < ∞. By Lemma 2, R

ϕξ + = t

R

ϕ T ξ+ = t



R

ϕ

. lim T (N ) ξ + t

N →∞

(30)

is one simple discontinuity at each of the end points ±π w of the interval and the number of maxima and minima of cos tτ and sin tτ is O(tw).

13 There

Sampling Theory in a Fourier Algebra Setting

73

Again, since ξ + is of bounded variation, the partial sums T (N ) ξ + are uniformly t t bounded convergent sequences and it follows for each τ ∈ R and k ∈ N, N 

|T (N ) ξ + (τ)| = | t

sinc(wt − k)eiτk/w |  K,

k=−N

for some K ∈ R+ . Hence |ϕ(τ)T (N ) ξ + (τ)|  K|ϕ(τ)| for N ∈ N and τ ∈ R and t so by Lebesgue’s dominated convergence theorem [48, Theorem 1.34], the limit in the integrand on the right-hand side of (30) can be taken outside the infinite integral. Thus, R

ϕ(τ)ξ + (τ)dτ = lim t



N →∞ R

= lim

N →∞

N 

ϕ(τ)

k=−N

N  k=−N

sinc(wt − k)ei k τ/w dτ

R

i k τ/w

ϕ(τ)e

dτ sinc(wt − k)

√ √ k sinc(wt − k) = 2π(Sw f )(t) = 2πf w k∈Z

and the lemma follows. The First Proof of Theorem 6 By definition and the representation in Lemma 3, 1 1 it τ Ew f (t) = f (t) − (Sw f )(t) = √ ϕ(τ)e dτ − √ ϕ(τ)ξ + (τ) dτ t 2π R 2π R     1 1 =√ ϕ(τ) eitτ − ξ + (τ) dτ = ϕ(τ)ei t τ 1 − e−i t2π w kτ dτ √ t 2π R 2π R by (29). But kτ = 0 when |τ| < π w and hence the integrand vanishes over this range, while the modulus of the integrand is at most |ϕ(τ)| otherwise. The estimate (26) and, since ϕ ∈ L1 (R), the asymptotic formula (27) follows. However, we will go a little further in order to compare this approach with that of Boas (36) in the next section. The error can be rewritten as 2i Ew f (t) = √ ϕ(τ)eit (τ−π w kτ ) sin(π w kτ t)dτ 2π R

2  (2k+1)π w ϕ(τ)ei t (τ−π w k) sin (πkwt)dτ, =i π (2k−1)π w k=0

since kτ = k on the interval [(2k − 1)π w, (2k + 1)π w) and vanishes on [−π w, π w) (this equation is the same as (36) below).

74

M. M. Dodson and J. R. Higgins

Brown’s extremal function is now constructed. In [8], Brown showed that the function f  , given by f  (t) = 2w sinc(2wt − 1)

(31)

satisfies f  (1/(2w)) = 2w and vanishes at t = k/w, k ∈ Z (f  is undersampled). Thus by (9), the sampling series Sw f  (t) vanishes for all t, so that by (10), f  = Ew f  . It is readily verified that ϕ  is a spectral function of f  := (ϕ  )∨ , where ϕ  (τ) :=

e−iτ/(2w) χ 2w (τ), √ 2π

(32)

and χ 2w is the characteristic function of the interval (−2πw, 2πw) (4). Thus f  is in A2w ⊂ A and √ 1 |ϕ  (τ)|dτ = (33) √ χ 2w (τ)dτ = 2π w. 2π |τ|>πw πwπw

which is (39). The upper bound in Theorem 7 is one half that of Theorem 6. The approximate reconstruction involves samples of fw and periodization is not used. The question of finding an extremal function f ∈ A for which the estimate (39) is an equality is open.

Sampling Theory in a Fourier Algebra Setting

79

6 Bandpass Sampling in A In exact bandlimited sampling, we considered bandlimited signals f which have no high frequencies. However, there could be no low frequencies either, so that its positive and negative frequency ranges would be confined to two separate bands. In this case, f is said to be a bandpass signal. To make this more precise we consider functions f = g + ih ∈ A, where g, given by g(t) = Re(f (t)) ∈ R, and h, given by h(t) = Im(f (t)) ∈ R, are, respectively, the real and imaginary parts of f . The spectral function of f = ϕ ∨ is ϕ = ψ + iη ∈ L1 (R), where ψ, η ∈ L1 (R) are spectral functions for g, h, respectively. Suppose the spectral function ϕ = ψ + iη is supported in the bands [−πw, πw] + γ and [−πw, πw] − γ .14 If γ = πw, the intervals reduce to a single interval [−2πw, 2πw] and f is a bandlimited function with bandwidth 4π w, already covered. So we now assume that γ > π w, so that the bands are disjoint and f is a bandpass signal, with bandwidth 4πw (Sect. 2.2). To simplify notation, we let B(γ ) := [−π w, π w] + γ , B(−γ ) := [−π w, π w] − γ and     B(±γ ) := [−π w, π w] − γ ∪ [−π w, π w] + γ . The class of bandpass functions in L2 with frequencies confined to B(±γ ) will be denoted by P WB(±γ ) , an analogue of P Ww (see [33, Sect. 6.6] for more details). The class of bandpass functions in A with frequencies confined to B(±γ ) will be ±γ denoted by Aw , a bandpass analogue of Aw . By adapting the argument in Cor. 1, ±γ it can be shown that P WB(±γ ) ⊂ Aw . ±γ From (13), each f ∈ Aw ⊂ A can be expressed as 1 f (t) = √ 2π



ψ(ω)ei ω t dω = ψ ∨ (t),

B(±γ )

where the spectral function ϕ ∈ L1 (R) for f is supported on B(±γ ). Simply using the bandlimited exact sampling theorem (Theorem 3) with the maximum frequency πw + γ to obtain a sampling theorem is evidently very inefficient.

14 The

symmetry of the intervals is consistent with the signals being real but this restriction is not imposed on f .

80

M. M. Dodson and J. R. Higgins

In 1954, A. Kohlenberg [39] extended the classical exact sampling theorem to the bandpass case by introducing ‘interlaced’ sampling for real signals. The interlaced samples allow an exact but very complicated sampling series with combined sampling rates equal to the Nyquist rate. A sampling rate close to the Nyquist rate for bandpass functions in L2 (R) was constructed in [23, p. 101] to provide an exact and less complicated representation. An exact sampling theorem for L2 (R) bandpass functions with frequencies supported in B(±γ ) has been established using a 2channel approach (related to that of [39]) in [33, Sect. 13.6]. However, these results have not been extended to the approximate theory and will not be pursued here. In [8, Sect. 3], Brown also applied his bandlimited approximation result to the more complicated question of bandpass approximation. He relied on earlier work on exact bandpass sampling where essentially the real bandpass function was transformed to a bandlimited one represented by its sampling series and the pair transformed back (see, for example, ([30, Sect. 2.3], [43, Sect. 4.2–4], [60, pp. 34– 41]). Brown used this idea to obtain a bandpass approximate sampling formula for functions in A, under a weaker condition than the function being real. He again produced an extremal that established that the bound was sharp. We give a self-contained proof of the bandpass exact sampling theorem for ±γ bandpass functions f ∈ Aw (Sect. 6.1, Theorem 8). Then we prove a bandpass approximate sampling theorem for all functions f in A (Sect. 6.2, Theorem 9). The arguments depend on a detailed understanding of transformations of bandpass formulae to and from bandlimited ones. We start with some elementary considerations of the inverse Fourier transform which lead to the Hilbert transform.15 From now on, g : R → C will always be a real-valued function in A. By the representation (13) for g = ψ ∨ , 1 g(t) = ψ ∨ (t) = √ 2π



∞ −∞

ψ(ω)ei ω t dω = JR+ (t) + JR− (t),

(40)

where 1 JR+ (t) := √ 2π





iωt

ψ(ω)e 0

1 dω and JR− (t) := √ 2π



0

−∞

ψ(ω)ei ω t dω.

Since g is real, ψ(ω) = ψ(−ω) and it is readily verified that JR− = JR+ , so that by (40), g(t) = 2Re(JR+ )(t). The difference g, JR+ − JR− = JR+ − JR+ = i2Im(JR+ ) = (sgn.ψ)∨ := i 

15 Known

as the ‘quadrature’ function of g in signal processing [7, p. 269]. We are grateful to Professor Alister Burr of the Dept. of Electronics, University of York for explaining its significance in signal processing.

Sampling Theory in a Fourier Algebra Setting

81

where sgn(ω) = ω/|ω| for ω = 0 and ω(0) = 0, so that  g (t) = −i(sgn.ψ)∨ = i(JR− (t) − JR+ (t)) = 2Im(JR+ (t)).

(41)

The function  g ∈ A is the ‘multiplier’ form of the Hilbert transform of g [33, Appendix B], [31] and is real. Note that the spectral functions ψ and i(sgn.ψ) of g and i g , respectively, agree on R+ but have the opposite sign on R− . The sum a := g + i  g∈A

(42)

is called the analytic or complex signal in signal processing [7, p. 269] (to avoid confusion, the term ‘analytic’ will not be used). By (40) and (41),

a(t) = 2JR+ (t) =

2 π





ψ(ω)ei ω t dω = 2(ψ χ [0,∞ )∨ (t),

0

(43)

whence the spectral function of a is 2ψ χ [0,∞) . Thus the spectral function of a is supported in [0, ∞), i.e. the complex function a has no negative frequencies. The same holds for any complex signal b := h + i h when h is a real-valued function in A. This definition of the Hilbert transform can be extended to complex-valued functions in A by considering real and imaginary parts. For f = g + ih ∈ A, where g and h are real, the Hilbert transform f of f can be defined for each t ∈ R as f(t) :=  g (t) + i h(t). Translating the frequency variable ω by −γ to ω = ω − γ shifts 0 to −γ and the interval B(γ ) ⊂ [0, ∞) to [−π w, π w] ⊂ [−γ , ∞) since γ > π w, so that the spectral function ψ(ω) becomes ψ(ω) = ψ(ω + γ ) := ψγ (ω ).

(44)

Multiplying a(t) by e−iγ t changes it to aγ (t), where aγ is defined below in (45), and moves the frequencies of a to [−γ , ∞), as follows:

−iγ t

a(t)e

=

=

2 π 2 π





i t (ω−γ )

ψ(ω)e 0



∞ −γ

:= aγ (t).

dω =

2 π



∞ −γ

ψ(ω + γ )ei t ω dω

ψγ (ω)ei tω dω = 2(ψγ χ [−γ ,∞) )∨ (t) (45)

(this is the ‘shift’ theorem of Fourier analysis [48, Theorem 9.2 (b)]). Note that the spectral function of the shifted signal aγ ∈ A is 2ψγ χ [−γ ,∞) and so is supported in [−γ , ∞) which contains the interval [−π w, π w].

82

M. M. Dodson and J. R. Higgins

±γ

6.1 Exact Bandpass Sampling in Aw

±γ

Suppose that the function f = g + i h ∈ Aw , where g and h are real, so that by definition, the spectral functions ϕ, ψ and η of f, g and h, respectively, are supported in B(±γ ). The spectral functions of g and  g agree on B(γ ) but have opposite sign in B(−γ ) (40), (41) and therefore cancel in the sum a := g + i g. ±γ Hence the spectral function of the complex signal a := g + i g ∈ Aw is supported in the single band B(γ ) and so is not a bandpass signal in the sense of two bands used here. The same is true for the complex signal b := h + i h ∈ A for h. By (45), the complex signals aγ , bγ , shifted from a, b, are bandlimited to the interval [−π w, π w] (a subset of [−γ , ∞)) and so lie in Aw , with respective spectral functions 2ψγ χ w , 2ηγ χ w . Hence the bandlimited exact sampling theorem for Aw (Theorem 3) holds for aγ and bγ , implying aγ = Sw aγ , bγ = Sw bγ ,

(46)

where Sw aγ and Sw bγ are convergent bandlimited sampling series (by Lemmas 3 or 6) which represent the bandlimited functions aγ and bγ , respectively. By (46) and Theorem 3,

 k k sinc w t − (47) aγ (t) = (Sw aγ )(t) = aγ w w k∈Z

and similarly for bγ : bγ (t) = (Sw bγ )(t) :=





k∈Z



k k sinc w t − . w w

Shifting $a_\gamma$ back to $a = g + i\tilde g$ (42) takes the bandlimited sampling series $S_w a_\gamma$ (47) to another convergent series $e^{it\gamma}(S_w a_\gamma)(t) := (S_w^{\gamma}a)(t)$. Since $a_\gamma$ lies in $A_w$, it follows that $(S_w^{\gamma}a)(t) = e^{it\gamma}(S_w a_\gamma)(t) = e^{it\gamma}a_\gamma(t) = a(t)$ by (45), whence by (47) and (42),

$$(S_w^{\gamma}a)(t) = e^{it\gamma}(S_w a_\gamma)(t) = e^{it\gamma}\sum_{k\in\mathbb{Z}}a_\gamma\!\left(\tfrac{k}{w}\right)\operatorname{sinc}(wt-k) = e^{it\gamma}\sum_{k\in\mathbb{Z}}a\!\left(\tfrac{k}{w}\right)e^{-i\gamma k/w}\operatorname{sinc}(wt-k) = \sum_{k\in\mathbb{Z}}\Bigl(g\!\left(\tfrac{k}{w}\right)+i\tilde g\!\left(\tfrac{k}{w}\right)\Bigr)e^{i\gamma(t-k/w)}\operatorname{sinc}(wt-k) = a(t) \tag{48}$$

by (42). Symbolically, the transform $a_\gamma(t) = e^{-i\gamma t}a(t)$ (45), written as $T : a \mapsto a_\gamma$, induces a 'similarity' $a = T^{-1}(Ta) = T^{-1}a_\gamma = T^{-1}S_w(Ta) := S_w^{\gamma}a$.

Now by (42), $g = \operatorname{Re}(a) \in A_w^{\pm\gamma}$, so that taking real parts of (48) gives

$$g(t) = \operatorname{Re}\bigl(e^{it\gamma}(S_w a_\gamma)(t)\bigr) := S_w^{\pm\gamma}g(t),$$

the convergent bandpass sampling series for the bandpass function $g$ with its frequencies supported in $B(\pm\gamma)$. Similarly,

$$h(t) = \operatorname{Re}\bigl(e^{it\gamma}(S_w b_\gamma)(t)\bigr) := S_w^{\pm\gamma}h(t).$$

By using either Euler's formula for $e^{i\gamma(t-k/w)}$ in (48) or by adding the series $e^{it\gamma}(S_w a_\gamma)(t)$ to its complex conjugate $e^{-it\gamma}\overline{(S_w a_\gamma)(t)}$, it can be verified that

$$(S_w^{\pm\gamma}g)(t) = \sum_{k\in\mathbb{Z}}\Bigl[g\!\left(\tfrac{k}{w}\right)\cos\gamma\!\left(t-\tfrac{k}{w}\right) - \tilde g\!\left(\tfrac{k}{w}\right)\sin\gamma\!\left(t-\tfrac{k}{w}\right)\Bigr]\operatorname{sinc}(wt-k).$$

Similarly, the bandpass sampling series $S_w^{\pm\gamma}h$ for $h = \operatorname{Re}(b) \in A_w^{\pm\gamma}$ is given by

$$(S_w^{\pm\gamma}h)(t) = \sum_{k\in\mathbb{Z}}\Bigl[h\!\left(\tfrac{k}{w}\right)\cos\gamma\!\left(t-\tfrac{k}{w}\right) - \tilde h\!\left(\tfrac{k}{w}\right)\sin\gamma\!\left(t-\tfrac{k}{w}\right)\Bigr]\operatorname{sinc}(wt-k).$$

Combining the last four equations gives $f = S_w^{\pm\gamma}g + iS_w^{\pm\gamma}h = S_w^{\pm\gamma}f$, whence

Theorem 8 The bandpass sampling series for $f \in A_w^{\pm\gamma}$ is given by

$$f(t) = \sum_{k\in\mathbb{Z}}\Bigl[f\!\left(\tfrac{k}{w}\right)\cos\gamma\!\left(t-\tfrac{k}{w}\right) - \tilde f\!\left(\tfrac{k}{w}\right)\sin\gamma\!\left(t-\tfrac{k}{w}\right)\Bigr]\operatorname{sinc}(wt-k).$$

When the different parameters, such as the bandwidth, are taken into account, this sampling formula is identical to (13.6.11) in [33, Sect. 13.6]. However, the bandpass functions considered there are square-integrable.
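As an illustration of the formula in Theorem 8 (our addition; the test signal, the value of $\gamma$ and the truncation length are assumptions made for this example), the sketch below evaluates a truncated version of the bandpass sampling series for $f(t) = m(t)\cos(\gamma t)$ with the lowpass factor $m(t) = \operatorname{sinc}(t/2)^2$, whose spectrum lies in $[-\pi w, \pi w]$ for $w = 1$. For such a product, Bedrosian's product theorem gives the Hilbert transform $\tilde f(t) = m(t)\sin(\gamma t)$, so both sample sequences needed in the series are available in closed form.

```python
import numpy as np

w, gamma = 1.0, 2.5 * np.pi          # sampling parameter w and band centre gamma > pi*w
m = lambda t: np.sinc(t / 2.0) ** 2  # lowpass factor with spectrum in [-pi*w, pi*w]
f = lambda t: m(t) * np.cos(gamma * t)        # bandpass test signal
f_tilde = lambda t: m(t) * np.sin(gamma * t)  # its Hilbert transform (Bedrosian)

def bandpass_series(t, K=500):
    """Truncated bandpass sampling series of Theorem 8."""
    k = np.arange(-K, K + 1)
    tk = k / w
    terms = (f(tk) * np.cos(gamma * (t - tk))
             - f_tilde(tk) * np.sin(gamma * (t - tk))) * np.sinc(w * t - k)
    return terms.sum()

for t in [0.3, 1.7, -2.45]:
    print(t, f(t), bandpass_series(t))
```

The printed values of the truncated series agree with $f(t)$ up to the truncation error, which decays with the tail of the sample sequence.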

6.2 Approximate Bandpass Sampling in $A$

The exact bandpass sampling formula for functions in $A_w^{\pm\gamma}$ can be generalized to any function in $A$. By analogy with the bandlimited case, we consider the sampling series approximation $S_w^{\pm\gamma}f := S_w^{\pm\gamma}g + iS_w^{\pm\gamma}h$ for $f = g + ih \in A$, with a bandpass aliasing error $E_w^{\pm\gamma}f$ for $f$ given by

$$E_w^{\pm\gamma}f(t) := f(t) - S_w^{\pm\gamma}f(t).$$

Note that since $S_w^{\pm\gamma}g(t) := \operatorname{Re}\bigl(e^{it\gamma}(S_w a_\gamma)(t)\bigr)$, where $S_w a_\gamma$ converges by Lemma 6, the corresponding bandpass sampling series $S_w^{\pm\gamma}g$ for $g \in A$ converges; similarly $S_w^{\pm\gamma}h$ converges.

We begin by calculating the aliasing error $E_w^{\pm\gamma}g$ for $g$, the real part of $f$. This relies on Boas's formula (36) for the bandlimited aliasing error in $A$, which opens the way to a proof of the bandpass approximate sampling theorem for all functions $f$ in $A$ (Theorem 9).

Lemma 7 Let $g \in A$, so that $g = \psi^{\vee}$ for a spectral function $\psi \in L^1(\mathbb{R})$, and suppose that $g$ is real. Then for each $t \in \mathbb{R}$, the aliasing error $E_w^{\pm\gamma}g$ satisfies

$$E_w^{\pm\gamma}g(t) = i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\Bigl[e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\psi\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega - e^{+i\pi kwt}\int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}\psi\chi_{\mathbb{R}^-}(\omega)e^{i\omega t}\,d\omega\Bigr].$$

Proof Recall that by (42), the analytic signal of $g$ is defined by $a := g + i\tilde g \in A$. From (43), the spectral function of $a$ is $2\psi\chi_{[0,\infty)}$, where $\psi$ is the spectral function of $g$. It can be seen from (45) that the shifted analytic signal $a_\gamma \in A$ has spectral function $2\psi_\gamma\chi_{[-\gamma,\infty)}$, where $\psi_\gamma$ is given by (44). Now since $\gamma > \pi w$, the interval $[-\pi w,\pi w] \subset [-\gamma,\infty)$, so that Theorem 6 holds for $a_\gamma$, which can thus be approximated by the convergent bandlimited sampling series $S_w a_\gamma$, given by the sum in (47). By (10), the aliasing error is $E_w a_\gamma = a_\gamma - S_w a_\gamma$. Multiplying this equation by $e^{it\gamma}$ and using (45) yields

$$a(t) - e^{it\gamma}(S_w a_\gamma)(t) = e^{it\gamma}(E_w a_\gamma)(t) := (E_w^{\gamma}a)(t). \tag{49}$$

Hence, taking real parts of (49) and noting that $g - S_w^{\pm\gamma}g = E_w^{\pm\gamma}g$ is the definition of the bandpass aliasing error for $g$, we get

$$g(t) - (S_w^{\pm\gamma}g)(t) = \operatorname{Re}\bigl(e^{it\gamma}(E_w a_\gamma)(t)\bigr) = \operatorname{Re}\bigl((E_w^{\gamma}a)(t)\bigr) = (E_w^{\pm\gamma}g)(t).$$

Substituting $a_\gamma$ and its spectral function $2\psi_\gamma\chi_{[-\gamma,\infty)}$ (45) into Eq. (36), we obtain the bandlimited error $E_w a_\gamma$ for $a_\gamma$:

$$(E_w a_\gamma)(t) = i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\,e^{-i\pi kwt}\int_{(2k-1)\pi w}^{(2k+1)\pi w}2\psi_\gamma(\omega)\chi_{[-\gamma,\infty)}(\omega)e^{i\omega t}\,d\omega,$$


since sin(π kwt) vanishes for k = 0. Hence, by (45) and (49), the bandpass error

$$E_w^{\gamma}a(t) = 2e^{it\gamma}\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,i\sin(\pi kwt)\,e^{-i\pi kwt}\int_{(2k-1)\pi w}^{(2k+1)\pi w}\psi(\omega+\gamma)\chi_{[-\gamma,\infty)}(\omega)e^{i\omega t}\,d\omega = 2ie^{it\gamma}e^{-it\gamma}\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\,e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\psi(\omega)\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega = 2i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\,e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\psi\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega.$$

Next, since $E_w^{\gamma}a(t) + \overline{E_w^{\gamma}a(t)} = 2\operatorname{Re}\bigl(E_w^{\gamma}a(t)\bigr) = 2E_w^{\pm\gamma}g(t)$,

$$E_w^{\pm\gamma}g(t) = i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\Bigl[e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\psi\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega - e^{+i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\overline{\psi\chi_{\mathbb{R}^+}}(\omega)e^{-i\omega t}\,d\omega\Bigr]. \tag{50}$$

But $g = \psi^{\vee}$ and $\chi_{\mathbb{R}^+}$ are real, whence $\overline{\psi\chi_{\mathbb{R}^+}}(\omega) = \psi(-\omega)\chi_{\mathbb{R}^-}(-\omega)$ and

$$\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\overline{\psi\chi_{\mathbb{R}^+}}(\omega)e^{-i\omega t}\,d\omega = \int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\psi\chi_{\mathbb{R}^-}(-\omega)e^{-i\omega t}\,d\omega = \int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}\psi\chi_{\mathbb{R}^-}(\omega)e^{i\omega t}\,d\omega.$$

The lemma follows on replacing the last integral in (50) with the last integral above.

We now consider the bandpass sampling error $E_w^{\pm\gamma}f$ for $f = g + ih \in A$, where $g$, $h$ are real and in $A$.

Theorem 9 Let $f \in A$, so that $f = \varphi^{\vee}$ for some spectral function $\varphi \in L^1(\mathbb{R})$. Then for each $t \in \mathbb{R}$, the bandpass aliasing error $E_w^{\pm\gamma}f$ satisfies

$$|(E_w^{\pm\gamma}f)(t)| = |f(t) - (S_w^{\pm\gamma}f)(t)| \le \sqrt{\tfrac{2}{\pi}}\int_{\mathbb{R}\setminus B(\pm\gamma)}|\varphi|. \tag{51}$$

The constant $\sqrt{2/\pi}$ cannot be reduced.


Proof Consider $f = g + ih \in A$. Recall that $\varphi, \psi, \eta \in L^1(\mathbb{R})$ are the respective spectral functions of $f, g, h$ and that $\varphi = \psi + i\eta$. By Lemma 7,

$$E_w^{\pm\gamma}h(t) = i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\Bigl[e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\eta\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega - e^{+i\pi kwt}\int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}\eta\chi_{\mathbb{R}^-}(\omega)e^{i\omega t}\,d\omega\Bigr].$$

Thus the error $E_w^{\pm\gamma}f$ for the sum $f = g + ih$ is given by

$$E_w^{\pm\gamma}f(t) = E_w^{\pm\gamma}g(t) + iE_w^{\pm\gamma}h(t) = i\sum_{k\neq0}\sqrt{\tfrac{2}{\pi}}\,\sin(\pi kwt)\Bigl[e^{-i\pi kwt}\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}(\psi+i\eta)\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega - e^{i\pi kwt}\int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}(\psi+i\eta)\chi_{\mathbb{R}^-}(\omega)e^{i\omega t}\,d\omega\Bigr],$$

and since $\psi + i\eta = \varphi$, taking moduli,

$$|E_w^{\pm\gamma}f(t)| \le \sqrt{\tfrac{2}{\pi}}\sum_{k\neq0}\Bigl(\Bigl|\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}\varphi\chi_{\mathbb{R}^+}(\omega)e^{i\omega t}\,d\omega\Bigr| + \Bigl|\int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}\varphi\chi_{\mathbb{R}^-}(\omega)e^{i\omega t}\,d\omega\Bigr|\Bigr) \le \sqrt{\tfrac{2}{\pi}}\sum_{k\neq0}\Bigl(\int_{(2k-1)\pi w+\gamma}^{(2k+1)\pi w+\gamma}|\varphi\chi_{\mathbb{R}^+}(\omega)|\,d\omega + \int_{-(2k+1)\pi w-\gamma}^{-(2k-1)\pi w-\gamma}|\varphi\chi_{\mathbb{R}^-}(\omega)|\,d\omega\Bigr) = \sqrt{\tfrac{2}{\pi}}\Bigl(\int_{\mathbb{R}^+\setminus B(\gamma)}|\varphi| + \int_{\mathbb{R}^-\setminus B(-\gamma)}|\varphi|\Bigr) = \sqrt{\tfrac{2}{\pi}}\int_{\mathbb{R}\setminus B(\pm\gamma)}|\varphi|,$$

since the two integrals in the $k = 0$ term, excluded from the sum, have ranges of integration $B(\gamma)$ and $B(-\gamma)$. Brown showed that the function $f^{\diamond}$ given by

$$f^{\diamond}(t) := 2w\,\cos\Bigl(\frac{\gamma}{w}(wt-1)\Bigr)\operatorname{sinc}(wt-1)$$

is an extremal for the inequality (51). The cosine factor splits the nonzero-set of the spectral function of $f^{\diamond}$ for the bandlimited approximate sampling theorem (Theorem 6) appropriately.

Note that if $f \in A_w^{\pm\gamma}$, then $E_w^{\pm\gamma}f = 0$ and Theorem 8 holds. Also, by Theorem 1, $F^2 \subset A$, so that these results hold in $F^2$.


7 Some Historical Details

Approximate sampling, or the study of aliasing, goes back to C. J. de la Vallée Poussin in 1908. However, his approach has a history of its own in the context of approximation theory (see [20], particularly the commentaries by Butzer and Nessel, and by Butzer and Stens, pp. 421–453). de la Vallée Poussin proved the asymptotic result (27) for functions satisfying conditions of bounded variation and continuity, but not in the Fourier algebra setting. We now take up in more detail the tangled origins of the approximate sampling theorem in the Fourier algebra setting, referred to in the Introduction. On reviewing early sources, the authors were surprised to find that the approximate sampling theorem first arose in the Fourier algebra setting with the brief note by Weiss in 1963 [56].

7.1 Weiss's Note

Weiss announced that an approximate sampling theorem held for functions f with spectral functions ϕ which satisfied the following four hypotheses: (i) f = ϕ̂, ϕ ∈ L¹(R); (ii) ϕ(ω) equals the complex conjugate of ϕ(−ω); (iii) ϕ is of bounded variation over R; (iv) ϕ(ω) = ½[ϕ(ω + 0) + ϕ(ω − 0)]. The first of these, as we now recognize, is that f ∈ A = (L¹(R))∨, while (ii) implies that f is real. Weiss also stated that if ϕ(ω) = 0 for |ω| > Ω, then f satisfies the Sampling Theorem; that is, he assumed that the exact sampling theorem held in Aw. But in the absence of any references or proofs, it is impossible to know what source Weiss had in mind. Perhaps he was familiar with the 1957 paper [2] of A. V. Balakrishnan, in which a direct and a converse sampling theorem are given for continuous-parameter stochastic processes (the paper claims to be the first such stochastic formulation). The setting may be seen as a stochastic version of the Fourier algebra. In any case, the earliest source known to us for an exact sampling result in a Fourier algebra setting (but not with that name) is the 1929 paper of J. M. Whittaker, discussed in Sect. 4.3, in which the Fourier–Stieltjes transform version of the Fourier algebra is considered.

Weiss also stated that the upper bound in (26) can be attained, but gave no explicit extremal and did not mention the asymptotic result (27). The first appearance of this result that we know of occurs in [18, p. 15], but under hypotheses that are rather restrictive and less general than those in Theorem 6.

88

M. M. Dodson and J. R. Higgins

7.2 After Weiss Weiss’s announcement of 1963 was followed in 1967 by the independent papers of Brown [8] and Stickler [54], which were discussed by Standish [52] in a further paper in the same year.16 The hypotheses were significantly simpler but the proofs are quite a mixture. They are mathematically unsatisfactory in one way or another but all three papers contain at least the basis of a correct proof (see Sect. 5.2). Brown’s paper [8] dispensed with the last three of Weiss’s conditions, assuming just f ∈ A. However, in a frequency splitting argument in which frequencies in [−π w, π w] and its complement in R were analysed differently, he relied on exact sampling in Aw (Theorem 3) but gave a fallacious proof. In a correction [9] a year later, Brown added a square-integrability condition which allowed the use of the classical Exact Sampling Theorem (Theorem 2) and established the bandlimited Approximate Sampling Theorem for functions in F 2 (Theorem 5). In Sect. 3 of [8], Brown discussed bandpass sampling and applied his bandlimited approximation result to functions in A for which the modulus of their spectral functions was even, a weaker condition than the √ function being real. He also produced extremals which showed that the constant 2/π in Theorems 6 and 9 could not be reduced. Stickler’s brief note [54] had the aim of improving a result of A. Papoulis [44] on the error involved in reconstructing undersampled real bandlimited signals as a consequence of under-estimating the maximal frequency. Stickler’s argument extended naturally to signals with unbounded frequencies, giving an estimate equivalent to (26). However, no analytic properties of the signal or its associated spectral function were specified. It was explicitly assumed that the signal was real and tacitly that the (analogue) signal satisfied the mathematical conditions required. Stickler appears to have been unaware of the work of Weiss and Brown as the single reference was to Papoulis. Standish [52] pointed out the error in [8] and addressed the questions of convergence omitted by Stickler from [54]. He also placed the approximate sampling theorem in the more general Fourier–Stieltjes algebra setting by considering Fourier–Stieltjes transforms of functions G of normalized bounded variation, which in addition satisfied some technical continuity conditions. However, the partial sums of the sampling series are not taken to be symmetrical, which unnecessarily compromises the argument. The approximate sampling theorem for A follows since the derivative G is essentially the integrable spectral function for f . A breakdown of the result in the L2 setting is also discussed. Five years later, in his note [6] of 1972, Boas related sampling to Fourier analysis and in particular to Poisson summation, thereby proving Theorem 6. He essentially used the periodization of the spectral function, which makes his proof quite different from those above. His novel approach is of independent interest, as it is a natural extension of the exact sampling case and applicable in other settings. Moreover,

16 A striking case of multiple discovery occurs for exact sampling in the L2 setting [29].


his analysis provides a key to proving the bandpass approximate sampling theorem for all functions in A. In a concluding remark, Boas also sketched a very short proof of the exact sampling theorem in Aw (Theorem 3), as did Pollard and Shisha independently at the end of their paper [45]. As well as comparing and clarifying these papers on approximate sampling in the Fourier algebra in Sect. 5, we show in Theorem 1 that the Fourier algebra A strictly contains F 2 (the setting for classical approximate sampling theory); provide a new, classically based proof of the exact sampling theorem for Aw in Sect. 4.3; establish an elementary yet sharper type of approximation formula in Sect. 5.4; and in Sect. 6 give a self-contained treatment of exact and approximate bandpass sampling in A. Acknowledgements We are grateful to the editors for the opportunity to contribute to this centennial ANHA volume for Claude Shannon and particularly to Stephen Casey for his patience and long-distance support. Our thanks for their long standing interest and helpful comments also go to Paul Butzer, Paulo Ferreira and Simon Eveson, who also gave us invaluable assistance with LaTeX, and to the referee for suggesting an improvement to the presentation. We are also grateful to our families for their wonderful support which enabled us to complete this paper.

References 1. M. Abramowitz and I. Stegun, Handbook of mathematical functions : with formulas, graphs, and mathematical tables, Dover Publications, New York, 1965. 2. A. V. Balakrishnan, A note on the sampling principle for continuous signals, IRE Trans. Inf. Th. IT-3 (1957), 143–146. 3. M. G. Beaty, M. M. Dodson, S. P. Eveson, and J. R. Higgins, On the approximate form of Kluvánek’s theorem, J. Approx. Th. 160 (2009), 281–303. 4. J. J. Benedetto and W. Czaja, Integration and modern analysis, Birkhäuser, Boston, 2009. 5. J. J. Benedetto and J. S. G. Ferreira, Modern sampling theory, Birkhäuser, Boston, 2001. 6. R. P. Boas, Summation formulas and band-limited signals, Tohôku Math. Journ. 24 (1972), 121–125. 7. R. N. Bracewell, The Fourier Transform and its Applications. 2nd. edn. McGraw-Hill, New York, 1978. 8. J. L. Brown, Jr., On the error in reconstructing a non-bandlimited function by means of the band-pass sampling theorem, J. Math. Anal. Appl. 18 (1967), 75–84. 9. J. L. Brown, Jr., Erratum, J. Math. Anal. Appl. 21 (1968), 699. 10. P.L. Butzer, A survey of the Whittaker–Shannon sampling theorem and some of its extensions, J. Math. Research and Exposition, 3 (1983), 185–212. 11. P. L. Butzer, M. M. Dodson, P. J. S. G. Ferreira, J. R. Higgins, G. Schmeisser and R. L. Stens, Seven pivotal theorems of Fourier analysis, signal analysis, numerical analysis and number theory: their interconnections, Bull. Math. Sci. 4, 3, (2014), 481–525. 12. P. L. Butzer, P. J. S. G. Ferreira, J. R. Higgins, S. Saitoh, G. Schmeisser and R.L. Stens, Interpolation and Sampling: E. T. Whittaker, K. Ogura and Their Followers, J. Fourier Anal. Appl. 17, (2014), 320–354. 13. P. Butzer and A. Gessinger, The approximate sampling theorem, Poisson’s sum formula, a decomposition theorem for Parseval’s equation and their interconnections, Ann. Numerical Math. 4 (1997), 143–160. 14. P. L. Butzer, M. Hauss, and R. L. Stens, The sampling theorem and its unique rôle in various branches of mathematics, Mathematical Sciences, Past and Present, 300 years of Mathematische Gesellschaft in Hamburg, Mitteilungen Math. Ges. Hamburg, Hamburg, 1990.


15. P. L. Butzer, J. R. Higgins, and R. L. Stens, Sampling theory of signal analysis. Development of mathematics 1950–2000, 193–234, Birkhäuser, Basel, 2000. 16. P. L. Butzer, J. R. Higgins, and R. L. Stens, Classical and approximate sampling theorems: Studies in the Lp (R) and the uniform norms, J. Approx. Theory 137 (2005), no. 2, 250–263. 17. P. L. Butzer, G. Schmeisser, and R. L. Stens, Shannon’s sampling theorem for bandlimited signals and their Hilbert transform, Boas-type formulae for higher order derivatives – the aliasing error involved by their extensions from bandlimited to non-bandlimited signals, Entropy vol. 14 (2012), 2192–2226. 18. P. L. Butzer and W. Splettstößer, Approximation und Interpolation durch verallgemeinerte Abtastsummen, Forschungsberichte des Landes Nordrhein-Westfalen, vol. 2708, Westdeutscher Verlag, Opladen, 1977. 19. P. L. Butzer, W. Splettstößer and R. L. Stens, The sampling theorem and linear prediction in signal analysis, Jber. d. Dt. Math.-Verein., vol. 3 (1988), 1–70. 20. C. J. de la Vallée Poussin, Collected works/Oeuvres Scientifiques, vol. III, P. L Butzer, J. Mawhin and P. Vetro, eds., Académie Royale de Belgique and Circolo Matematico di Palermo, Brussels and Palermo, 2004. 21. M. M. Dodson, Shannon’s sampling theorem, incongruent residue classes and Plancherel’s theorem, J. Théor. Nombres Bordeaux 14 (2002), 425–437. 22. M. M. Dodson, Abstract exact and approximate sampling theorems, Chap 1. In: A I. Zayed and G. Schmeisser (eds.) New perspectives on approximation and sampling theory, Appl. Numer. Harmon. Anal., Birkhäuser/Springer, Cham (2014). 23. M. M. Dodson and A. M. Silva, Fourier analysis and the sampling theorem, Proc. Royal Irish Acad. 85A (1985), 81–108. 24. H. Dym and H. P. McKean, Fourier Series and Integrals, Probability and Mathematical Statistics, No. 14. Academic Press, New York-London, 1972. 25. A. Erdélyi et al., (Bateman Manuscript Project) Higher transcendental functions, McGrawHill, New York, 1953.  m−1 26. L. Euler, Investigatio valoris integralis 1−2xx k cosdx a termino x = 0 ad x = ∞ extensi, θ +x 2k Opuscula Analytica II, (1785), 55–75. 27. P. Eymard, L’algèbre de Fourier d’un groupe localement compact, Bull. Soc. Math. France 92 (1964), 181–236 (in French). 28. A. Faridani, A generalized sampling theorem for locally compact abelian groups, Math. Computation 63 (1994), 307–327. 29. P. J. S. G. Ferreira and J. R. Higgins, The establishment of sampling as a scientific principle—a striking case of multiple discovery, Notices Amer. Math. Soc. 58 (2011), no. 10, 1446–1450. 30. S. Goldman, Information theory, Constable, London, 1953. 31. E. A. Gonzalez-Velasco and E. Sanvicente, The Analytic Representation of Band-Pass Signals, J. Franklin Inst.310 (1980), 135–142. 32. J. R. Higgins, Five short stories about the cardinal series, Bull. Amer. Math. Soc. 12 (1985), 45–89. 33. J. R. Higgins, Sampling theory in Fourier and signal analysis: Foundations, Clarendon Press, Oxford, 1996. 34. J. R. Higgins, Historical origins of interpolation and sampling, Sampling Th. Signal and Image Proc. 2 (2003), 117–128. 35. J. R. Higgins, Two basic formulae of Euler and their equivalence to Tschakalov’s sampling theorem, Sampling Th. Signal and Image Proc. 2 (2003), 259–270. 36. J. R Higgins and R. L.Stens, Sampling theory in Fourier and signal analysis: Advanced topics, Oxford University Press, 1999. 37. A. J. Jerri, The Shannon sampling theorem— its various extensions and applications: a tutorial review, Proc. IEEE 65 (1977), 1565–1596. 38. Y. 
Katznelson, An Introduction to Harmonic Analysis. Second ed. Dover, New York, 1976. 39. A. Kohlenberg, Exact interpolation of bandlimited functions, J. Appl. Phys. 24 (1953), 1432– 1436.


40. A. Kolmogorov, Une série de Fourier-Lebesgue divergente partout, C. R. Acad. Sci. Paris, 183 (1926), 1327–1328. 41. T. W. Körner, Fourier Analysis, Cambridge University Press, 1988. 42. W. Magnus et al., Formulas and Theorems for the Special Functions of Mathematical Physics, Springer-Verlag, New York, 1966. 43. D. Middleton, An Introduction to Statistical Communication Theory, McGraw-Hill, New York, 1960. 44. A. Papoulis, Error analysis in sampling theory, Proc. IEEE 54 (1966), 947–955. 45. H. Pollard and O. Shisha, Variations on the binomial series, Amer. Math. Monthly 79 (1972), 495–499. 46. H. Reiter and J. D. Stegeman, Classical Harmonic Analysis and Locally Compact Groups, London Mathematical Society Monographs: New Series 22, Clarendon Press, Oxford, 2000. 47. W. Rudin, Fourier analysis on groups, John Wiley, 1962. 48. W. Rudin, Real and complex analysis, McGraw Hill, New York, 1986. 49. D. Sayre, Some implications of a theorem due to Shannon, Acta Cryst., 5 (1952), 834. 50. M. Schwartz, W. R. Bennett and S. Stein, Communication Systems and Techniques, McGraw Hill, New York, 1966. 51. C. E. Shannon, Communication in the presence of noise, Proc. IRE, 37 (1949), 10–21. 52. C. J. Standish, Two remarks on the reconstruction of sampled non-bandlimited functions, IBM J. Res. Develop. 11 (1967), 648–649. 53. E.M. Stein and G. Weiss, Introduction to Fourier analysis on Euclidean spaces, Princeton University Press, Princeton, 1971. 54. D. C. Stickler, An upper bound on the aliasing error, Proc. IEEE (Lett.) 55 (1967), 418–419. 55. E. C. Titchmarsh, The Theory of Functions, 2nd edn., Oxford University Press, Oxford, 1939. 56. P. Weiss, An estimate of the error arising from misapplication of the sampling theorem, Notices Amer. Math. Soc. 10 (1963), p. 351. 57. E. T. Whittaker, On the functions which are represented by the expansions of the interpolation theory, Proc. Roy. Soc. Edinburgh, 35 (1915), 181–194. 58. E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, fourth ed., Cambridge University Press, Cambridge, 1962. 59. J. M. Whittaker, The “Fourier” theory of the cardinal function, Proc. Edinburgh Math. Soc. 1 (1929), 169–176. 60. P. M. Woodward, Probability and information theory, with applications to radar. Second ed. Pergamon Press, Oxford, 1964. 61. A. I. Zayed, Advances in Shannon’s sampling theory, CRC Press, Boca Raton, 1993. 62. A I. Zayed and G. Schmeisser (eds.), New perspectives on approximation and sampling theory, Appl. Numer. Harmon. Anal., Birkähuser/Springer, Cham, 2014.

Sampling Series, Refinable Sampling Kernels, and Frequency Band Limited Functions

Wolodymyr R. Madych

Abstract We study sampling series and their relationship to frequency band limited functions. Motivated by the theories of multiresolution analyses and subdivision, particular attention is paid to sampling series whose kernels are refinable. After developing the basic properties of the sampling kernels under study, we consider three families of such kernels: (1) damped cardinal sines, (2) the fundamental functions for cardinal spline interpolation, and (3) a family of compactly supported kernels defined in terms of their masks. The limiting kernel of each family is the cardinal sine. In the cases (1) and (2) we present results concerning the limiting behavior of the corresponding sampling series when the data samples {cn } have polynomial growth as n → ±∞.

1 Introduction

1.1 Description of the Subject Matter

Given a bi-infinite sequence of numerical values $\{c_n : n = 0, \pm1, \pm2, \ldots\}$, a cardinal sampling series is the expression

$$\sum_{n=-\infty}^{\infty} c_n\,\Phi(\rho x - n), \tag{1}$$

where the sampling kernel $\Phi(x)$ is a continuous function of the variable $x$, $-\infty < x < \infty$, that satisfies


$$\Phi(n) = \begin{cases} 1 & \text{if } n = 0,\\ 0 & \text{if } n = \pm1, \pm2, \ldots, \end{cases} \tag{2}$$

and the positive constant $\rho$ is the sampling rate. If the series (1) converges locally uniformly it defines a continuous function $f(x)$ with the property that $f(n/\rho) = c_n$, $n = 0, \pm1, \pm2, \ldots$. In this case (1) extends the discrete data sequence $\{(n/\rho, c_n) : n = 0, \pm1, \pm2, \ldots\}$ to the continuous graph $\{(x, f(x)) : -\infty < x < \infty\}$.

In this article we will be concerned with the case of a fixed sampling rate $\rho$. Since a dilation argument shows that there is no loss of generality in restricting attention to the special case $\rho = 1$, we do so in what follows.

The quintessential example of a sampling kernel is the familiar cardinal sine

$$\Phi(x) = \operatorname{sinc}(x) := \frac{\sin\pi x}{\pi x} \tag{3}$$

that plays a critical role in Shannon's sampling theorem and many other applications.

The main subjects of our study are sampling kernels that are refinable, in the sense that they satisfy the two scale difference equation

$$\Phi(x) = \sum_n s_n\,\Phi(2x-n). \tag{4}$$

The sequence of coefficients $\{s_n\}$ is referred to as the scaling sequence or mask. Relations such as (4) play a significant role in the theory of multiresolution analyses and wavelets. We will make use of some of the techniques and results of this theory in our development.

The familiar cardinal sine is also an example of a sampling kernel that is refinable. Namely, it follows from Shannon's sampling theorem that

$$\operatorname{sinc}(x) = \sum_{n=-\infty}^{\infty}\operatorname{sinc}(n/2)\operatorname{sinc}(2x-n). \tag{5}$$
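As a quick numerical sanity check of (5) (added here as an illustration), one can compare sinc(x) with a symmetric truncation of the right-hand side; since the coefficients sinc(n/2) decay only like 1/n, a fairly long truncation is needed for good agreement.

```python
import numpy as np

def sinc_refined(x, N=20000):
    """Truncated right-hand side of (5): sum_n sinc(n/2) * sinc(2x - n)."""
    n = np.arange(-N, N + 1)
    return np.sum(np.sinc(n / 2.0) * np.sinc(2.0 * x - n))

for x in [0.25, 1.0 / 3.0, 2.7]:
    print(x, np.sinc(x), sinc_refined(x))
```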

One reason why property (4) may be desirable is that it can be used in evaluating the series (1) for dyadic values of $x$ by the method of iterative refinement or subdivision. Namely, suppose $\Phi(x)$ satisfies (4) and $f$ enjoys the representation

$$f(x) = \sum_{n=-\infty}^{\infty} f(n)\Phi(x-n),$$

with known data samples $\{f(n) : n = 0, \pm1, \pm2, \ldots\}$. Then, when $x$ is in the refined lattice $\{n/2 : n = 0, \pm1, \pm2, \ldots\}$, $f(x)$ can be evaluated via

$$f(n/2) = \begin{cases} f(n/2) & \text{if } n \text{ is even,}\\ \sum_m f(m)\,s_{n-2m} & \text{if } n \text{ is odd.} \end{cases}$$

By iteration, for any positive integer k and any integer m, f (x) can be evaluated at x = m/2k+1 in terms of its values on the lattice {n/2k : n = 0, ±1, ±2, . . .}. This method is particularly effective when the subsequence of the non-zero terms in the mask {sn } is finite or rapidly decaying. In what follows, we develop the basic properties of sampling kernels and refinable sampling kernels that are Fourier transforms of non-negative integrable functions, a property enjoyed by many useful examples. Then we consider three families of such kernels: (1) damped cardinal sine functions, (2) fundamental functions of interpolation for piecewise polynomial cardinal splines, and (3) compactly supported kernels defined in terms of their masks. In each case, the members of the family can be parametrized in such a way that when the parameter tends to its limiting value the corresponding kernels tend to the cardinal sine. For each value of the parameter, which for the moment we will call α, the series fα (x) =

$$f_\alpha(x) = \sum_{n=-\infty}^{\infty} c_n\,\Phi_\alpha(x-n)$$

converges and is a well-defined continuous function whenever the data samples {cn } do not grow too rapidly as n → ±∞. The main results presented here, which are new, concern the behavior of fα (x) as α tends to its limiting value. Namely, when the value of the parameter tends to its limiting value while the data samples remain fixed, fα (x) either converges to an entire function of exponential type no greater than π or fails to converge for essentially all non-integer values of x. A consequence of such limiting behavior is that corresponding sampling series can be used to approximate or extend the classical cardinal sine series.
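Before turning to the background, here is a small illustration (our addition) of the refinement step described above, taking the cardinal sine as the kernel so that the mask is $s_n = \operatorname{sinc}(n/2)$ by (5). Starting from the integer samples $f(n)$ of a test function representable by the series, one pass of the scheme produces the values $f(n/2)$ at the new half-integer points; the test function and the truncation length are arbitrary choices made for the example.

```python
import numpy as np

# Kernel: the cardinal sine, whose mask is s_n = sinc(n/2) by (5).
f = lambda x: np.sinc(x / 2.0) ** 2   # test function with rapidly decaying samples
M = 2000
m = np.arange(-M, M + 1)
samples = f(m)                         # known data f(n) on the integer lattice

def refine_to_half(n):
    """One refinement step: value at n/2 from the integer samples (n odd)."""
    s = np.sinc((n - 2 * m) / 2.0)     # mask entries s_{n-2m}
    return np.dot(samples, s)

for n in [1, 5, -7]:                   # odd n, i.e. new points n/2
    print(n / 2.0, f(n / 2.0), refine_to_half(n))
```

Iterating the same step on the half-integer data produces values on the quarter-integer lattice, and so on, exactly as described in the text.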

1.2 Background, Contents, and Conventions

1.2.1 Background

The impetus for this work comes from the theories of multiresolution analyses [17, 44], subdivision [12, 14], and sampling [28, 60], viewed from the point of view of Schoenberg's cardinal spline summability theory [51, Definition 2, p. 106]. Readers familiar with the theory of wavelets and multiresolution analyses no doubt recognize that if $\phi(x)$ is an orthogonal scaling function and $\tilde\phi(x) = \phi(-x)$, then the convolution $\Phi(x) = \phi*\tilde\phi(x)$ is a refinable sampling kernel. One objective of this work is to develop the basic properties of such kernels in a straightforward and direct manner. In this I was most strongly influenced by xerox copies of various handwritten notes, long since lost or mislaid, of Yves Meyer that I had access


to in the late 1980’s courtesy of Ronald Coifman. Other early and fundamental contributions to the subject include [16, 35, 42, 44]. Since then, of course, many fine works on wavelets and multiresolution analyses have appeared, including the books [13, 17, 26, 33, 43, 57, 59] and the articles [15, 20, 39, 55, 56] to mention only a few samples. Subdivision theory was created almost concurrently with wavelets, or so it appears. A few examples of the work in this area include [8, 9, 12, 14, 19–24]. On the other hand, the cardinal sine series used in classical sampling have a long rich history that is nicely outlined in [27]; other examples of work on this subject include [6, 7, 10, 11, 28, 30, 45, 46, 49, 58, 60]. Another objective of this article is to indicate that the use of appropriate families of such kernels in sampling series naturally lead to regularization or summability methods for the classical cardinal sine series that are similar to the spline sum introduced in [51, Definition 2, p. 106]. The primary inspiration here is the work of Schoenberg [52–54] and the extension described in [18]. Other works that deal with this aspect of sampling series include [1, 2, 25, 34, 37, 47, 48]. The final objective is to present new results concerning the limiting behavior of two families of sampling series, (1) damped cardinals sines and (2) piecewise polynomial cardinal splines, as described in Sect. 1.1. These results were mainly motivated by the developments in [47, 54].

1.2.2 Contents

The basic properties of refinable sampling kernels are developed in Sect. 2. Sections 3 and 4 deal with damped cardinal sines and piecewise polynomial splines, respectively. They contain the precise statements of our results concerning the limiting behavior of certain families of cardinal sampling series when the data samples {cn } are fixed and are of no greater than polynomial growth. Such families include piecewise polynomial cardinal splines as their degree tends to infinity. Section 5 treats a family of compactly supported scaling functions that are defined in terms of their masks. This section is somewhat incomplete in the sense that while the limiting behavior of the sampling kernels is treated, the limiting behavior of the sampling series with coefficients {cn } of appropriate generality is not treated. Each section closes with a subsection containing miscellaneous comments. For the most part, Sects. 2, 3, and 4 are written in a narrative style. By this I mean that the relevant definitions are stated and the relatively evident consequences are deduced in the text in a discursive manner. Results that require arguments that are technically more involved are either given a formal proof that is clearly delineated in the text or provided with an appropriate reference. Section 5 is somewhat different in tone, specifically in the technical nature of the definitions and arguments required to establish the properties of the family of compactly supported refinable sampling kernels that are presented there.

1.2.3 Conventions

We use standard notation and only alert the reader to the fact that $E_\sigma$, where $\sigma > 0$, denotes the class of entire functions of exponential type no greater than $\sigma$ that have no greater than polynomial growth along the real axis. In view of the distributional variant of the Paley–Wiener theorem, for example see [29, Theorem 1.7.7], $E_\sigma$ consists of the Fourier transforms of distributions with support in the interval $[-\sigma,\sigma]$. The Fourier transform of a tempered distribution $u$ is denoted by $\hat u$ and, in the normalization used here, is defined by

$$\hat u(\xi) = \int_{-\infty}^{\infty} e^{-i\xi x}u(x)\,dx$$

when u is an integrable function. As is customary, the symbol C is used to denote generic constants whose value depends on the context.

2 Basic Properties of Refinable Sampling Kernels

2.1 Sampling Kernels: Basic Assumptions and Their Consequences

In what follows, we assume that the sampling kernel $\Phi(x)$ is the inverse Fourier transform of a non-negative integrable function $\hat\Phi(\xi)$. That is,

$$\Phi(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ix\xi}\hat\Phi(\xi)\,d\xi, \tag{6}$$

where

$$\hat\Phi(\xi) \ge 0 \quad\text{for all real } \xi \tag{7}$$

and

$$\|\hat\Phi\|_{L^1(\mathbb{R})} = \int_{-\infty}^{\infty}|\hat\Phi(\xi)|\,d\xi < \infty. \tag{8}$$

This implies that

$$\Phi(x)\ \text{is continuous on the real line}\ \mathbb{R} \tag{9}$$

and

$$\lim_{x\to\pm\infty}\Phi(x) = 0. \tag{10}$$


Also, we are assuming that Φ is a sampling kernel, which means that n = 0, ±1, ±2, . . . ,

Φ(n) = δ0,n

(11)

where δ0,n is the Kronecker delta,  δ0,n =

1 if n = 0 0 if n = ±1, ±2, . . . .

Hence properties (7) and (8) can be re-expressed with more information as 1 Φ(0) = 2π





 )dξ = Φ(ξ

−∞

1  L1 (R) = 1.

Φ



(12)

 is integrable leads to Property (11) and the fact that Φ ∞ 

 − 2π m) = 1. Φ(ξ

(13)

m=−∞

This will play a significant role in what follows.  is integrable Proof of (13) To see (13) note that property (11) and the fact that Φ allow us to write 1 Φ(n) = 2π





−∞

e

 )dξ = Φ(ξ

inξ

1 2π



π −π

einξ

∞  

  − 2π m) dξ = δ0,n . Φ(ξ

m=−∞

The last equality in the above string implies (13).



Identity (13) together with assumption (7) imply that  )≤1 0 ≤ Φ(ξ

for all real ξ.

(14)

 is square integrable. Namely, Note that (12) and (14) imply that Φ



−∞

 )|2 dξ ≤ |Φ(ξ



∞ −∞

 )dξ = 2π, Φ(ξ

(15)

which also implies that Φ is square integrable and



−∞

|Φ(ξ )|2 dξ ≤ 1.

(16)

2.1.1 Examples

The classical cardinal sine (3) is the quintessential example of a sampling kernel of the type described above. Modulated variants Φa (x) that are defined in terms of the real parameter a, −∞ < a < ∞, by 

Φa (x) = e

iax

sinc(x)

 ) = 1 when |ξ − a| ≤ π, with Φ(ξ 0 otherwise,

(17)

also provide accessible examples. Note that, in addition to being a sampling kernel, the system of functions and Φa (x − n : n = 0, ±1, ±2, . . .} is orthogonal, that is



−∞

Φa (x − m)Φa (x − n)dx = δm,n .

Kernels Φ(x) that possess such properties and also enjoy more rapid decay as x → ±∞ are studied in [31]. A class of examples Φ(x) that are defined in terms of their Fourier transform and the homogeneous functions |ξ |−α , α > 1 are defined by |ξ |−α . −α n=−∞ |ξ − 2π n|

 ) = ∞ Φ(ξ

(18)

In the case α = 2 the kernel Φ(x) can be expressed as  Φ(x) = (1 − |x|)+ =

1 − |x|

when |x| ≤ 1,

0

otherwise,

and it is evident that this kernel has compact support. This is not the case when α = 2. Many other examples can be found in what follows.

2.2 Refinable Sampling Kernels

As mentioned in the introduction, the main subjects of our study are sampling kernels that are also refinable scaling functions, i.e., that satisfy the two scale difference equation

$$\Phi(x) = \sum_n s_n\,\Phi(2x-n). \tag{19}$$


Properties of the scaling coefficients, or mask, {sn } that are evident from (11) and (19) are the following: s2k = δ0,k ,

k = 0, ±1, ±2 . . . ,

(20)

and s2k−1 = Φ(k − 12 ),

k = 0, ±1, ±2 . . . .

(21)

To see this, evaluate (19) at x = m/2 and applying (11) we get Φ(k) = s2k Φ(0) = δ0,k

when m is even, m = 2k, and

  Φ (2k − 1)/2 = s2k−1 Φ(0).

when m is odd, m = 2k − 1,

Since Φ(0) = 1, (20) and (21) follow. Identities (20) and (21) allow us to deduce that the sequence of scaling coefficients {sn } is square summable, ∞ 

|sn |2 < ∞.

(22)

n=−∞

Proof of (22) If we let A(ξ ) =

∞ 

 − 4π n) Φ(ξ

n=−∞

and note that A(ξ ) is 4π periodic then in view of (6) and (21) we may write s2k−1 = Φ((2k − 1)/2) = = =

1 2π 1 2π

1 = 2π



2π −2π 2π 0 π −π

1 2π



∞ −∞

 )dξ ei(2k−1)ξ/2 Φ(ξ

ei(2k−1)ξ/2 A(ξ )dξ   eikξ e−iξ/2 A(ξ ) + e−i(ξ −2π )/2 A(ξ − 2π ) dξ

   eikξ e−iξ/2 A(ξ ) − A(ξ − 2π ) dξ.

The fact that 0 ≤ A(ξ ) =

∞  m=−∞

 − 4π m) ≤ Φ(ξ

∞  n=−∞

 − 2π n) = 1 Φ(ξ


implies that −1 ≤ A(ξ ) − A(ξ − 2π ) ≤ 1 and hence the subsequence of coefficients s2k−1 , k = 0, ±1, ±2m . . . , is square summable since they are the Fourier coefficients of a bounded, 2π periodic function, namely ∞ 

  s2k−1 e−ikξ = e−iξ/2 A(ξ ) − A(ξ − 2π ) .

(23)

k=−∞



This together with (20) and (21) implies (22). The Fourier transform of (19) is  ) = 1  Φ(ξ 2 s(ξ/2)Φ(ξ/2), where  s(ξ ) is the Fourier transform of  s(ξ ) =



(24)

sn δ(x − n),

∞ 

sn e−inξ .

(25)

n=−∞

Let s(ξ ). S(ξ ) = 12

(26)

 ) = S(ξ/2)Φ(ξ/2)  Φ(ξ

(27)

0 ≤ S(ξ ) ≤ 1

(28)

S(ξ ) + S(ξ − π ) = 1.

(29)

Then

and it follows that

and

Proof of (28) and (29) To see (28) re-express (27) as  ) = S(ξ )Φ(ξ  ) Φ(2ξ so that ∞  n=−∞

∞     2(ξ − 2π n) = S(ξ )  − 2π n) Φ Φ(ξ n=−∞


which, in view of (13), reduces to ∞ 

S(ξ ) =

 Φ(2ξ − 4π n).

(30)

n=−∞

Since ∞ 

0≤

 Φ(2ξ − 4π n) ≤

n=−∞

∞ 

 Φ(2ξ − 2π n) = 1,

n=−∞

identity (30) implies (28). Relation (29) can be seen, by applying identities (13) and (27) to write 1=

∞ 

∞ 

 − 2π m) = Φ(ξ

m=−∞

= S(ξ/2)

     (ξ − 2π m)/2 S (ξ − 2π m)/2 Φ

m=−∞ ∞ 

∞         (ξ/2) − 2π k + S (ξ/2) − π  (ξ/2) − π − 2π k Φ Φ

k=−∞

k=−∞



 = S(ξ/2) + S (ξ/2) − π .

 If we iterate identity (27) we get   ) = S(ξ/2)S(ξ/4)Φ(ξ/4)  Φ(ξ = ··· =

N 

 N  S(ξ/2 ) Φ(ξ/2 ). n

(31)

n=1 n In view of (28), N n=1 S(ξ/2 ), N = 1, 2, . . . , is a decreasing sequence bounded below by 0 and hence converges to a value v(ξ ), 0 ≤ v(ξ ) ≤ 1. If v(ξ ) > 0 N ), N = 1, 2, . . . , converges, but otherwise, without  we may conclude that Φ(ξ/2  the behavior of this sequence is indeterminate. To more information concerning Φ,  ) in avoid unnecessary complications that may arise subject to the behavior of Φ(ξ  ) neighborhoods of ξ = 0, in the remainder of this subsection we assume that Φ(ξ is continuous at ξ = 0. This is not a serious restriction since essentially all practical examples enjoy this property. (For examples of sampling kernels that do not enjoy this property see item 2.5.2 in Sect. 2.5.)  ) is continuous at ξ = 0 then If Φ(ξ

  )= Φ(ξ

∞ 

n=1

  S(ξ/2n ) Φ(0)


and it follows that  = 0 Φ(0)

(32)

and ∞ 

S(ξ/2n ) > 0

for some ξ.

(33)

n=1

 ) = 0 for all ξ , which is a contradiction. Otherwise Φ(ξ In view of (32), relation (27) implies that S(0) = 1,

(34)

S(π ) = 0

(35)

which can be used to show that

and  Φ(2π n) = δ0,n ,

n = 0, ±1, ±2, . . . .

(36)

Furthermore, (32) together with (27) imply that S(ξ ) is continuous at ξ = 0 and  ) is continuous at ξ = 2π n, n = ±1, ±2, . . . , via (36) implies that Φ(ξ 

 + 2π n) ≤ 0 ≤ Φ(ξ

 + 2π m) = 1 − Φ(ξ  ). Φ(ξ

−∞ 0 when |ξ | ≤ π/2

(51)

is sufficient to do the job. Proof that (51) does the job To see this let F∗ (ξ ) =

∞ 

F (ξ − 2π n).

n=−∞

Then F∗ is 2π periodic, continuous, F∗ (0) = 1, 0 ≤ F∗ (ξ ) ≤ 1, and in view of the fact that F (ξ ) = S(ξ/2)F (ξ/2)     F∗ (ξ ) = S(ξ/2)F∗ (ξ/2) + S (ξ/2) + π F∗ (ξ/2) + π . To verify that F∗ (ξ ) = 1 for all ξ it suffices to verify it for |ξ | ≤ π . We shall do so by contradiction. Suppose that the minimum of F∗ (ξ ) is c. If c is < 1 then there is a point ξ = ξ0 that enjoys  the property  that |ξ0 | ≤ π , F∗ (ξ 0 ) = c, and|ξ0 | = min{|ξ | : F∗ (ξ ) = c}. Thus S (ξ0 /2) + π F∗ (ξ0 /2) + π ≥ S (ξ0 /2) + π c and since |ξ0 /2| ≤ π/2 we may conclude that S(ξ0 /2) > 0 and S(ξ0 /2)F∗ (ξ0 /2) > S(ξ0 /2)c. Hence, in view of the last displayed expression, we may write   c = F∗ (ξ0 ) > S(ξ0 /2)c + S (ξ0 /2) + π c = c, which is a contradiction.



We summarize the above observations as follows. Proposition 1 Suppose {sn } is absolutely summable, positive definite, and satisfies (46) and (47). Then there is a refinable function Φ(x) whose scaling sequence, or mask, is {sn }. Furthermore, if

110

W. R. Madych ∞ 1  sn e−inξ 2 n=−∞

S(ξ ) = then

 Φ(x) =

∞ 

S(ξ/2n ),

n=1

where the infinite product converges uniformly on compact subsets of R,

 ) ≥ 0, Φ(ξ

and



−∞

 )dξ ≤ 2π. Φ(ξ

If, in addition, S(ξ ) satisfies (51), then Φ is a refinable sampling kernel. To see the construction of Φ(x) in the time domain, let  gN (ξ ) = GN (ξ ), where GN (ξ ) is defined by (50). Then g1 (x) =



sn1 φ(2x − n1 ),

n1

g2 (x) =





sn2 g1 (2x − n2 ) =

n2

sn2



n2

sn1 φ(4x − 2n2 − n1 )

n1

=

  

g3 (x) =

sn3 g2 (2x − n3 ) =

n3

=



   n1

n2

n3

=

n3

sn3

   n1

n2

sn2 sn1 −2n2 φ(4x − n1 ),

n2

n2 1





 sn2 sn1 −2n2 φ(4(2x − n3 ) − n1 ) 

sn3 sn2 sn1 −2n2 −4n3 φ(8x − n1 )    n1

n2

 sn3 sn2 −2n3 sn1 −2n2 φ(8x − n1 ),

n3

and by induction gN (x) =

⎧  ⎨ n1



n2

···

 nN

snN

N −1  j =1

⎫ ⎬ snj −2nj +1



φ(2N x − n1 ).

(52)

Refinable Sampling Kernels

111

If x is an integer multiple of 1/2N , x = k/2N , then in view of (52) gN (k/2N ) =

 n2

···



snN

N −1 

snj −2nj +1 ,

where n1 = k.

j =1

nN

Comparing this with (44) and relabeling the indices leads to gN (k/2N ) = Φ(k/2N ),

k = 0, ±1, ±2, ±3, . . . .

(53)

Since Φ is a bounded, uniformly continuous function we can state the following: Proposition 2 The function gN is a continuous, piecewise linear function that in view of (53) interpolates Φ at the points k/2N , k = 0, ±1, ±2, ±3, . . . . Furthermore lim gN (x) = Φ(x)

N →∞

uniformly on R.

2.4.1

Examples

It may be interesting to note that the modulated cardinal sines Φa (x) defined by (17) are refinable when |a| ≤ π but fail to be refinable when |a| > π . The kernels defined by (18) in terms of the homogeneous functions |ξ |−α , α > 1, are all refinable. This is a consequence of a calculation using the homogeneity of  ) = Q(ξ/2)Φ(ξ  ), where Q(ξ ) is 2π periodic. An example |ξ |−α that shows that Φ(ξ of this type of calculation can be seen immediately preceding Sect. 4.2.

2.5 Miscellaneous Comments All the conclusions in this section follow from the definitions and hypotheses in a straightforward manner with the possible exception of the proof that (51) does the job. The argument used here, adapted from [26, p. 80], requires no additional continuity assumptions on S(ξ ).

2.5.1 Note that if the refinable sampling kernel Φ(x) is integrable, then in view of (36) we may write  Φ(2π n) =





−∞

Φ(x)e−i2π n dx =

0

1

∞  j =−∞

 Φ(x − j ) e−i2π n dx = δ0,n ,

112

W. R. Madych

which implies that ∞ 

Φ(x − j ) = 1.

j =−∞

This means that the interpolating sampling series reproduces constants.  ) is differentiable and the values Furthermore, if x Φ(x) is integrable, then Φ(ξ

 of its derivative Φ (ξ ) at the integer multiples of 2π can be expressed as  (2π n) = Φ





−∞

ixΦ(x)e

−i2π n



1

dx = i 0

 (x − j )Φ(x − j ) e−i2π n dx.

∞  j =−∞

 (2π n) = 0 for all integers n, then Hence if Φ ∞ 

(x − j )Φ(x − j ) = 0 = x −

j =−∞

∞ 

j Φ(x − j ),

j =−∞

which implies that the interpolating sampling series reproduces algebraic polynomials of degree ≤ 1. Analogous calculations and an argument using induction lead to the following: Proposition 3 Suppose the refinable sampling kernel Φ(x) is integrable. Then the interpolating sampling series reproduces constants. Furthermore, if x N Φ(x) is  is in C N (R) and if, in addition, integrable for some positive integer N , then Φ (k)  Φ (2π n) = 0 for k = 1, . . . , N and all integers n, then the interpolating sampling series reproduces algebraic polynomials of degree ≤ N . In other words, ∞ 

nk Φ(x − n) = x k ,

for k = 0, 1, . . . , N.

j =−∞

2.5.2 The function Φ(x) = eiπ x sinc(x) whose Fourier transform is  1 when 0 ≤ ξ ≤ 2π  Φ(ξ ) = 0 otherwise  ) that is not continuous at is an example of a refinable sampling function with Φ(ξ ξ = 0. For more complicated examples see [5] or [39, item 3.1.2, p. 279]

Refinable Sampling Kernels

113

2.5.3 Before leaving this section, we mention that the fixed point iteration gN +1 (x) =



sn gN (2x − n)

n

used in the proof of Proposition 2 is also known as the cascade algorithm, [17, p. 202].

3 Damped Cardinal Sines 3.1 Preliminaries A straightforward way of obtaining a sampling kernel that enjoys better decay than the cardinal sine is to multiply sinc(x) by an appropriate damping function φ(x), Φ(x) = sinc(x)φ(x),

(54)

where φ(x) is smooth, decays sufficiently rapidly, and satisfies φ(0) = 1.

(55)

Also, to ensure that the conditions described in Sect. 2.1 are met, the constraints  ∈ L1 (R) φ

and

(ξ ) ≥ 0 φ

(56)

will suffice. Typical examples include the Gaussian φ(x) = exp(−x 2 ),

(57)

the function φ(x) =

1 (1 + x 2 )α

with constant α > 0,

(58)

and, for some fixed value of k, k = 1, 2, . . . , the function

φ(x) =

sin x/k x/k

k (59)

114

W. R. Madych

and the normalized B spline φ(x) = ck Bk (x) = ck b ∗ · · · ∗ b(x) , %& ' $

(60)

k fold convolution

where b(x) = (1 − |x|)+ and 1/ck = Bk (0). Also, any dilate φ( x), with > 0, of such a function is an appropriate damping function. One reason for considering such dilates is that in addition to ensuring that sinc(x)φ( x) is a sampling function, conditions (55) also imply that lim sinc(x)φ( x) = sinc(x)

→0

giving rise to the speculation that, with sufficiently small , the resulting sampling series will provide a good approximation of a function f in Eπ . However, the resulting sampling kernels need not necessarily be refinable. Indeed, in spite of the fact that all the abovementioned examples enjoy properties (55) and (56), if φ(x) is one of the examples (57), (58), or (60), then Φ (x) = sinc(x)φ( x) fails to be refinable for every value of > 0. This is a consequence of  (ξ ) > 0 for all ξ , so Φ  (2π n) = δ0,n , violating the fact that, for every positive , Φ (36). This is not the case with example (59).

3.2 Refinable Damped Cardinal Sines To obtain properties of φ that will give rise to a refinable sampling kernel take the Fourier transform of (54). This results in  )= Φ(ξ

1 1 (ξ ) = χ ∗φ 2π 2π



π −π

(ξ − η)dη. χ (η)φ

(61)

 ) defined by (61) enjoys the relation A painless way to ensure that the Φ(ξ   (ξ ) whose Φ(ξ ) = S(ξ/2)Φ(ξ/2) for some 2π periodic function S(ξ ) is to choose φ support is contained in the interval [− , ] where < π/3. Then Φ will be refinable. To be more precise, we state the following: Proposition 4 Suppose that in addition to (55) the function φ is in E1 . Then Φ (x) = sinc(x)φ( x) is in Eπ + . If < π/3 then Φ is refinable and  (ξ ) = S (ξ/2)Φ  (ξ/2), Φ

Refinable Sampling Kernels

115

where S (ξ ) is the 2π periodic function ∞ 

S (ξ ) =

   2(ξ − 2π n) . Φ

n=−∞

 (ξ ) are not necessarily non-negative unless φ  also satisfies (56). S (ξ ) and Φ  (ξ ) has support in the interval |ξ | ≤ π + and Proof Suppose < π/3. Then Φ  (ξ/2) has support in the interval |ξ | ≤ 2(π + ) is = 1 when |ξ | ≤ π − while Φ and is = 1 when |ξ | ≤ 2(π − ). The 4π periodic function S(ξ/2) =

∞ 

 (ξ − 4π n) Φ

n=−∞

satisfies  S (ξ/2) =

 (ξ ) if |ξ | ≤ π + Φ if π + < |ξ | < 3π − .

0

Since  (ξ/2) = Φ

 1 0

if |ξ | ≤ 2(π − ) if |ξ | > 2(π + ),

it follows that if < π/3 then π + < 2(π − ) and  (ξ/2) = Φ  (ξ ). S (ξ/2)Φ  The functions φ(x) of example (59) are members of E1 . Examples that enjoy (ξ ) faster decay can be obtained as inverse Fourier transforms of C ∞ functions φ 1 (ξ )dξ = 2π , for instance, that have support in |ξ | ≤ 1 and satisfy −1 φ 1   1 φ(x) = c −1 exp ixξ + ξ 2 −1 dξ where the constant c is such that φ(0) = 1.

3.3 The Bernstein–Boas Formula If 0 ≤ σ < π and f (z) is a function in Eσ that is bounded on the real axis, then   sin (z − n) f (n) sinc(z − n) f (z) = (z − n) n=−∞ ∞ 

if 0 < < π − σ.

(62)

116

W. R. Madych

This identity is known as the Bernstein–Boas formula. Since the proof is short and involves a handy procedure, we reproduce it here. In view of the sampling theorem, for any w ∈ C we may write     ∞  sin (w − z) sin (w − n) = sinc(z − n) f (z) f (n) (w − z) (w − n) n=−∞

if 0 < < π − σ,

because the right-hand side of the above relation is in P Wπ = Eπ ∩ L2 (R). This results in (62) when w → z. Now consider the functions in example (59),

φk (z) =

sin z/k z/k

k ,

k = 1, 2, 3, . . . .

If 0 ≤ σ < π and f (z) is a function in Eσ that grows no faster than a polynomial of degree k − 1, then an argument analogous to the one above shows that f (z) =

∞ 

  f (n) sinc(z − n)φk (z − n) if 0 < < π − σ.

(63)

n=−∞

In particular (63) is valid whenever f (z) is a polynomial of degree no greater than k − 1. By choosing the damping function φ(x) to have faster decay, the restriction on the growth of f (x) at ±∞ on the real axis can be eliminated. We summarize these observations as follows. Proposition 5 Suppose φ(z) is a function in E1 that satisfies φ(0) = 1, (56), and for some positive integer k, sup{|x k φ(x)| : −∞ < x < ∞} ≤ C < ∞. If σ < π and f is a function in Eσ that grows no faster than a polynomial of degree k − 1, then f (z) =

∞ 

  f (n) sinc(z − n)φ (x − n) when 0 < < π − σ.

(64)

n=−∞

If φ satisfies sup{|x k φ(x)| : −∞ < x < ∞} ≤ Ck < ∞ for all positive integers k, then (64) is valid for all functions f in Eσ . Suppose φ satisfies the hypothesis of Proposition 5 and Φ (z) = sinc(z)φ( z). Then Φ (z/2) is in Eσ for σ = (π + )/2. If < π/3 then < π − σ and in view of (64), it follows that Φ (z/2) =

∞  n=−∞

Φ (n/2)Φ (z − n).

Refinable Sampling Kernels

117

Replacing z with 2z in the above relation shows that Φ (z) satisfies the scaling equation Φ (z) =

∞ 

Φ (n/2)Φ (2z − n).

(65)

n=−∞

This derivation of (65), via the Bernstein–Boas formula, provides an alternate proof of Proposition 4. Before closing this subsection we bring attention to certain similarities in the hypotheses and conclusions in Propositions 3 and 5. For example, both propositions make use of a kind of “flatness” at the integers of the Fourier transform of the sampling kernel.

3.4 The Limiting Behavior of Sampling Series Consisting of Damped Cardinal Sines In Sect. 3.1 we mentioned that one reason to consider sampling series with sampling function Φ (x) = sinc(x)φ( x) is to obtain approximations of functions f in Eπ in terms of the data samples {cn : n = 0, ±1, ±2, . . .}. To this end consider the sampling series F (x) =

∞ 

  cn sinc(x − n)φ (x − n)

(66)

n=−∞

and suppose φ(x) enjoys the following property: φ(x) is in C ν (R) where ν is non-negative integer, φ(0) = 1, φ(−x) = φ(x), and for j = 0, . . . , ν there is a constant C independent of x such that |φ (j ) (x)| ≤

C . 1 + |x|ν

We will refer to this as hypothesis H(ν). Note that example (57), the Gaussian, satisfies hypothesis H(ν) for all ν. Examples (58) and (59) are in C ∞ but satisfy the decay condition only when ν ≤ α and ν ≤ k, respectively. The B-spline example, (60) is compactly supported but is in C ν (R) only when ν ≤ 2k − 2.

118

W. R. Madych

Theorem 1 Suppose cn = O(nρ ) as n → ±∞, where 0 ≤ ρ < κ and κ is a positive integer. Also suppose {φ(x)} satisfies hypothesis H(ν) with ν ≥ κ. When > 0, the series (66) converges absolutely and F (x) is a well-defined function in C ν (R). As tends to zero, F (x) either converges uniformly on compact subsets of R to a function F0 (x) in Eπ or diverges for all but at most κ − 1 non-integer values of x. In the case of convergence, the function F0 is defined by ∞    cn sin(π x) , sinc(x − n) − P (x) F0 (x) = c0 sinc(x) + x κ κ n π n=−∞ n=0

where P (x) is a polynomial of degree ≤ κ − 1. Furthermore, if the data samples {cn } are the point values cn = f (n) of an entire function f in Eσ where σ < π then lim F (x) = f (x) = F0 (x).

→0

For the sake of brevity we omit the proof which can be found in [40]. However, we do mention that, in the case of convergence, there are formulas for the coefficients of the polynomial P (x) in Theorem 1 that involve the data samples {cn } and φ. These formulas and the proof of the following corollary can be found in [40]. Corollary 1 Suppose the sequence {cn } consists of the coefficients of a convergent cardinal sine series, ∞ 

f (x) =

cn sinc(x − n).

n=−∞

If φ satisfies hypothesis H(ν) with ν ≥ 3, then the series (66) converges absolutely and F (x) is a well defined function in C ν (R). Furthermore lim F (x) = f (x)

→0

uniformly on compact subsets of R. The cardinal sine series converges when its symmetric partial sums converge uniformly on compact subsets. This corollary, that can be considered as an Abel type theorem, justifies the view that the series (66) is a regularization of the cardinal sine series.

Refinable Sampling Kernels

119

3.5 Miscellaneous Comments The Gaussian (57) is a popular damping function, for example see [45, 46, 49]. It is interesting to note that the corresponding sampling series fail to reproduce constants. The Bernstein–Boas formula and some of its important applications can be found in [36, Lecture 21]. The cardinal sine series converges when its symmetric partial sums converge uniformly on compact subsets. There are several classical results that imply this, including the celebrated WKS sampling theorem, the Plancherel–Polya theorem, etc., that can be found in the cited literature on sampling theory. A necessary and sufficient condition for the convergence of the cardinal sine series is the convergence of both ∞  cn − c−n (−1)n n n=1

and

∞  cn + c−n (−1)n . n2 n=1

See [3]. A more direct, but less efficient, proof of a version of the corollary to Theorem 1 in terms of damping factor φ that is entire can be found in [4].

3.5.1 It may be of some interest to consider the damping function of example (60) relabeled as φk (x) = ck Bk (x) = ck b ∗ · · · ∗ b(x) , %& ' $ k fold convolution

where b(x) = (1 − |x|)+ and 1/ck = Bk (0). In view of the behavior of the Fourier transform of φk , it follows that lim φk (x) = 1 uniformly on compact subsets of R.

k→∞

The behavior of the sampling series Fk (x) =

∞ 

cn sinc(x − n)φk (x − n)

n=−∞

as k tends to ∞ should be analogous to the behavior of the sampling series in Theorem 1.

120

W. R. Madych

4 Piecewise Polynomial Cardinal Splines 4.1 Basic Setup Piecewise polynomial cardinal splines were introduced by I. J. Schoenberg as a kind of regularization method for the classical cardinal sine series, [50]. Since we are concerned only with splines of even order, our notation and some conventions are slightly different from that found in the CBMS lecture notes [51]. If k is a positive integer, the space Sk of cardinal piecewise polynomial splines of order 2k with knots at the integers can be defined as consisting of functions s(x) that satisfy the following properties: • In each interval n ≤ x ≤ n + 1, n = 0, ±1, ±2, . . . , s(x) is a polynomial of degree no greater than ≤ 2k − 1. • s(x) is in C 2k−2 (R) • s(x) has no greater than polynomial growth as x → ±∞. In other words, |s(x)| ≤ C(1 + |x|)m where C and m are constants independent of x. The space Sk can also be defined more succinctly as the class of tempered distributions s(x) that satisfy ∞ 

s (2k) (x) =

cn δ(x − n),

(67)

n=−∞

where δ(x) is the Dirac delta measure at the origin. Members of Sk are uniquely determined by their values on the integers, {s(n) : n = 0, ±1, ±2, . . .}. Conversely, for every bi-infinite sequence {cn : n = 0, ±1, ±2, . . .} of no greater than polynomial growth there is a unique spline s(x) in Sk that interpolates {cn }, in other words s(n) = cn for n = 0, ±1, ±2, . . . . To indicate the dependence of this spline on k and on the sequence {cn } we denote it by Sk ({cn }, x). Thus, if s(x) is in Sk , then s(x) = Sk ({s(n)}, x). The function Lk (x) = Sk ({δ0,n }, x) is the sampling kernel in Sk , namely the spline of order 2k that interpolates the Kronecker delta sequence {δ0,n }. The definition (67) together with the fact that Lk (n) = δ0,n imply that the Fourier transform of Lk (x) is ξ −2k . −2k j =−∞ (ξ + 2πj )

k (ξ ) = ∞ L

(68)

 This can also  be verified by observing that the inverse Fourier transform of Lk (ξ ) is  in Sk and n Lk (ξ − 2π n) = 1. Lemma 1 There are positive constants, A and a, independent of x such that |Lk (x)| ≤ Ae−a|x|

and

|L k (x)| ≤ Ae−a|x| .

Refinable Sampling Kernels

121

k (ζ ) is holomorphic in a strip Proof This is a consequence of the fact that L {ζ = ξ + iη : |η| < } for some positive and integrable on lines parallel to the real axis. Namely, if 0 < a < then 2π e±ax Lk (x) = e±ax



∞ −∞

k (ξ )eiξ x dξ L

=



−∞

k (ξ )ei(ξ ∓ia)x dξ = L



∞ −∞

k (ξ ± ia)eiξ x dξ L

so ±ax e Lk (x) ≤ 1 2π



∞ −∞

k (ξ ± ia)|dξ |L

and hence the desired bound on |Lk (x)| follows. Analogous reasoning, mutatis mutandis, justifies the bound on |L k (x)|.



Lemma 2 lim Lk (x) = sinc(x)

k→∞

uniformly on R.

Proof Note that k (ξ ) ≤ ξ −2k (ξ − 2π n)2k 0≤L

when |(ξ − 2π n)| ≤ π

or, more crudely, k (ξ ) ≤ ξ −2k π 2k , 0≤L which is valid for all ξ . These inequalities imply that for all ξ k (ξ ) ≤ min{1, (ξ/π )−2k }. 0≤L k (ξ ) can be re-expressed as Now, L k (ξ ) = L

1+



1 |j |≥1 (ξ/(ξ

− 2πj ))2k

from which it follows that

k (ξ ) = lim L

k→∞

⎧ ⎪ ⎪ ⎨1 1/2

⎪ ⎪ ⎩0

if |ξ | < π if |ξ | = π if |ξ | > π.

(69)

122

W. R. Madych

In view of (69) and the dominated convergence theorem we may conclude that the last limit is valid in $L^1(\mathbb{R})$, which implies the desired conclusion. ∎

Every spline $S_k(\{c_n\}, x)$ in $S_k$ enjoys the representation
$$S_k(\{c_n\}, x) = \sum_{n=-\infty}^{\infty} c_n L_k(x-n). \qquad (70)$$
In view of Lemma 1 the series (70) converges absolutely and uniformly on compact subsets of $\mathbb{R}$ for every data sequence of samples $\{c_n\}$ of no greater than polynomial growth.

Note that $s(x/2)$ is a member of $S_k$ whenever $s(x)$ is. In particular $L_k(x/2)$ is a member of $S_k$, so in view of (70) it follows that
$$L_k(x/2) = S_k(\{L_k(n/2)\}, x) = \sum_{n=-\infty}^{\infty} L_k(n/2)\,L_k(x-n).$$

The variable $x$ can be replaced by $2x$ in the above relation, so we may conclude that $L_k(x)$ is refinable and satisfies the scaling equation
$$L_k(x) = \sum_{n=-\infty}^{\infty} s_{k,n}\,L_k(2x-n), \quad\text{where } s_{k,n} = L_k(n/2). \qquad (71)$$
The Fourier transform of (71) is
$$\widehat{L}_k(\xi) = P_k(\xi/2)\,\widehat{L}_k(\xi/2), \qquad (72)$$
where
$$P_k(\xi) = \frac{1}{2}\sum_{n=-\infty}^{\infty} s_{k,n}\,e^{-in\xi}.$$

Identity (72) is also evident from
$$\widehat{L}_k(\xi) = \frac{\xi^{-2k}}{\sum_j(\xi+2\pi j)^{-2k}} = \frac{(\xi/2)^{-2k}\sum_j\big((\xi/2)+2\pi j\big)^{-2k}}{2^{2k}\sum_j(\xi+2\pi j)^{-2k}\,\sum_j\big((\xi/2)+2\pi j\big)^{-2k}} = \frac{\sum_j\big((\xi/2)+2\pi j\big)^{-2k}}{2^{2k}\sum_j(\xi+2\pi j)^{-2k}}\;\widehat{L}_k(\xi/2),$$
where
$$P_k(\xi) = \frac{\sum_{j=-\infty}^{\infty}(\xi+2\pi j)^{-2k}}{2^{2k}\sum_{j=-\infty}^{\infty}(2\xi+2\pi j)^{-2k}}.$$
This expression for $P_k(\xi)$ also follows from an application of (30) with $S(\xi) = P_k(\xi)$ and $\widehat{\Phi}(\xi) = \widehat{L}_k(\xi)$.
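As a quick numerical sanity check on (72), one can compare the kernel with the product dictated by the two-scale relation, using the closed-form expression for $P_k$ just given. A minimal sketch (Python/NumPy; the truncation bound J and the evaluation grid are illustrative assumptions):

```python
import numpy as np

def S(xi, k, J=60):
    """S(xi) = sum over j of (xi + 2*pi*j)^(-2k), truncated to |j| <= J."""
    j = np.arange(-J, J + 1).reshape(-1, 1)
    return np.sum((xi + 2 * np.pi * j) ** (-2.0 * k), axis=0)

def Lk_hat(xi, k):
    return xi ** (-2.0 * k) / S(xi, k)                 # eq. (68)

def Pk(xi, k):
    return S(xi, k) / (2 ** (2 * k) * S(2 * xi, k))    # closed form for the symbol P_k

k = 2
xi = np.linspace(0.1, 3.0, 7)                          # avoid xi = 0, where (68) has a removable singularity
lhs = Lk_hat(xi, k)
rhs = Pk(xi / 2, k) * Lk_hat(xi / 2, k)                # identity (72)
print(np.max(np.abs(lhs - rhs)))                       # should be at the level of the truncation error
```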

4.2 The Limiting Behavior of Cardinal Splines

In [54, Theorem 11], Schoenberg observed, as an extension of a result in [47, Theorem 5], that if $f(x)$ is a continuous bounded function and
$$\lim_{k\to\infty} S_k(\{f(n)\}, x) = f(x) - C\sin(\pi x), \quad\text{where } C \text{ is a constant},$$
uniformly on compact subsets of $\mathbb{R}$, then $f(x)$ is in the Bernstein class $B_\pi$. The class $B_\pi$ consists of those functions $f$ in $E_\pi$ that are bounded on the real axis. The following spline analogue of Theorem 1 is an improvement of this observation.

Theorem 2. Suppose $c_n = O(n^\rho)$ as $n \to \pm\infty$, where $0 \le \rho < \kappa$ and $\kappa$ is a positive integer. As $k$ tends to $\infty$, $S_k(\{c_n\}, x)$ either converges uniformly on compact subsets of $\mathbb{R}$ to a function $S_\infty(\{c_n\}, x)$ in $E_\pi$ or diverges for all but at most $\kappa - 1$ noninteger values of $x$. In the case of convergence, the function $S_\infty(\{c_n\}, x)$ is defined by
$$S_\infty(\{c_n\}, x) = c_0\operatorname{sinc}(x) + x^{\kappa}\sum_{\substack{n=-\infty\\ n\neq 0}}^{\infty}\frac{c_n}{n^{\kappa}}\operatorname{sinc}(x-n) - P(x)\,\frac{\sin(\pi x)}{\pi},$$
where $P(x)$ is a polynomial of degree $\le \kappa - 1$. Furthermore, if the data samples $\{c_n\}$ are the values $c_n = f(n)$ of an entire function $f$ in $E_\sigma$, where $\sigma < \pi$, then
$$\lim_{k\to\infty} S_k(\{c_n\}, x) = f(x) = S_\infty(\{c_n\}, x).$$

Observe that the right-hand side of the formula for $S_\infty(\{c_n\}, x)$ is an entire function in $E_\pi$. Hence if $S_\infty(\{c_n\}, x)$ is bounded it must be in $B_\pi$. As a corollary of Theorem 2 we have the following:

Corollary 2. Suppose $f(z)$ is a convergent cardinal sine series. Then
$$\lim_{k\to\infty} S_k(\{f(n)\}, x) = f(x)$$
uniformly on compact subsets of $\mathbb{R}$.


In short, the spline summability method is regular. The proof of Theorem 2 and its corollary involves a tedious argument showing that $L_k(x)/\operatorname{sinc}(x)$ behaves like $\phi_\epsilon(x)$ in Theorem 1 with $\epsilon = 1/k$. For the sake of conciseness we omit it. The details can be found in [40]; for a more succinct outline see [41].

4.3 Miscellaneous Comments

Expansions such as the right-hand side of (70), which Schoenberg labeled as cardinal in [50], prompted him to introduce splines as a regularization method for the classical cardinal sine series. No doubt results such as Lemmas 1 and 2 and related observations encouraged this point of view. The details concerning the properties of the piecewise polynomial cardinal splines listed here can be found in [51]. The details concerning the definition of these splines as tempered distributions that satisfy (67), and an alternate derivation of some of their properties, can be found in [37].

The fact that $\lim_{k\to\infty} S_k(\{f(n)\}, x) = f(x)$ when $f$ is in $E_\sigma$, $\sigma < \pi$, was observed in [38, 48]. In work on the subject by Schoenberg [53], it was shown that when $f$ is in $E_\pi$ and has a derivative of some order in $L^2(\mathbb{R})$ then $\lim_{k\to\infty} S_k(\{f(n)\}, x) = f(x)$. This motivated him to consider interpolation by cardinal splines as a summability method for the classical cardinal sine series, which he christened the "spline sum" [51, Definition 2, p. 106]. However, it was never established that this summability method was regular, that is, that $\lim_{k\to\infty} S_k(\{f(n)\}, x)$ exists and equals $f(x)$ whenever $f$ is a convergent cardinal sine series. The corollary to Theorem 2 fills this gap.

5 Refinable Sampling Functions with Compact Support

5.1 Preliminaries

The piecewise linear spline $L_1(x) = (1-|x|)_+$ is the quintessential example of a sampling function with compact support. It also satisfies the scaling relation
$$L_1(x) = \tfrac12 L_1(2x+1) + L_1(2x) + \tfrac12 L_1(2x-1).$$
Other examples of sampling functions with compact support can be constructed by imitating the examples in Sect. 3: take a sampling function $\Psi(x)$ and a compactly supported continuous function $\phi(x)$ with $\phi(0) = 1$. Then it is evident that $\Phi(x) = \Psi(x)\phi(x)$ is a sampling function with compact support. To some extent, the properties of $\Phi$ can be regulated by the choice of $\phi$ and $\Psi$. However, in general, the result will not be a scaling function. A specific example of such a compactly supported function $\phi(x)$ is given by (60).

One way to construct examples with compact support that are refinable is to start with a finite sequence of coefficients $\{s_n\}$ that satisfy properties (46) and (47). The Fourier transform of $\sum_n s_n\delta(x-n)$ will be the trigonometric polynomial $\widehat{s}(\xi) = \sum_n s_n e^{-in\xi}$. Define the function $\Phi(x)$ via the formula for its Fourier transform as in Proposition 1. Alternatively, $\Phi(x)$ can be defined as the limit as $N \to \infty$ of the fixed point iteration $g_{N+1}(x) = \sum_n s_n g_N(2x-n)$ with $g_0(x) = (1-|x|)_+$. These procedures will lead to a refinable function with compact support. However, the result need not necessarily be a sampling function unless the polynomial $\widehat{s}(\xi)$ satisfies additional properties, such as (51), that assure that the limit is a sampling function.

With this objective in mind, in Sect. 5.2 we consider a family of trigonometric polynomials that have certain desirable properties. In Sect. 5.3 we use these trigonometric polynomials to define a family of refinable sampling functions and show that in the limiting case its members converge to the cardinal sine. The corresponding scaling sequences or masks are arrived at via an alternate procedure, of subdivision type, in Sect. 5.4.

5.2 A Family of Polynomials with Desirable Properties

Following Strichartz [56, top of p. 551], consider the binomial expansion of the left-hand side of the identity
$$\big(\sin^2(\theta/2) + \cos^2(\theta/2)\big)^n = 1 \qquad (73)$$
and let $P_n(\theta)$ be the first half of this expansion. More precisely, if $n$ is odd, $n = 2m-1$,
$$P_{2m-1}(\theta) = \sum_{j=0}^{m-1}\binom{2m-1}{j}\big(\sin^2(\theta/2)\big)^j\big(\cos^2(\theta/2)\big)^{2m-1-j}, \qquad (74)$$
and when $n$ is even, $n = 2m$,
$$P_{2m}(\theta) = \sum_{j=0}^{m-1}\binom{2m}{j}\big(\sin^2(\theta/2)\big)^j\big(\cos^2(\theta/2)\big)^{2m-j} + \frac{1}{2}\binom{2m}{m}\sin^{2m}(\theta/2)\cos^{2m}(\theta/2). \qquad (75)$$


The following properties of $P_n(\theta)$ are readily apparent from its definition:
$$P_n(\theta) \text{ is } 2\pi\text{-periodic, even, and non-negative,} \qquad (76)$$
and of degree no greater than $n$. Namely,
$$P_n(\theta) = \sum_{k=-n}^{n} a_k e^{ik\theta} = a_0 + \sum_{k=1}^{n} 2a_k\cos(k\theta) \ge 0, \qquad (77)$$
where $a_{-k} = a_k$;
$$P_n(0) = 1 = \sum_{k=-n}^{n} a_k; \qquad (78)$$
$$P_n(\pi) = 0 = \sum_{k=-n}^{n} (-1)^k a_k. \qquad (79)$$
Identities (78) and (79) are equivalent to
$$\sum_{k\ \mathrm{even}} a_k = \sum_{k\ \mathrm{odd}} a_k = \frac{1}{2}. \qquad (80)$$
Since $P_n(\theta+\pi)$ is the second half of the binomial expansion of the left-hand side of (73), we may conclude that
$$P_n(\theta) + P_n(\theta+\pi) = 1. \qquad (81)$$
Substituting $\theta = -\pi/2$ into (81) and using the fact that $P_n(\theta)$ is even, $P_n(-\theta) = P_n(\theta)$, yields
$$P_n(\pi/2) = 1/2. \qquad (82)$$
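These identities are easy to confirm numerically. A short sketch (Python/NumPy; the value of m and the grid are arbitrary illustrative choices) builds $P_{2m-1}(\theta)$ directly from (74) and checks (78), (79), (81), and (82):

```python
import numpy as np
from math import comb

def P(theta, m):
    """P_{2m-1}(theta): first half of the binomial expansion of (sin^2 + cos^2)^{2m-1}, eq. (74)."""
    s2, c2 = np.sin(theta / 2) ** 2, np.cos(theta / 2) ** 2
    n = 2 * m - 1
    return sum(comb(n, j) * s2 ** j * c2 ** (n - j) for j in range(m))

m = 4
theta = np.linspace(-np.pi, np.pi, 1001)
print(np.max(np.abs(P(theta, m) + P(theta + np.pi, m) - 1)))   # (81): the sum is identically 1
print(P(np.array([np.pi / 2]), m))                             # (82): value 1/2
print(P(np.array([0.0]), m), P(np.array([np.pi]), m))          # (78), (79): values 1 and 0
```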

Lemma 3. The polynomials $P_{2m-1}(\theta)$ and $P_{2m}(\theta)$ are identical. Namely, $P_{2m}(\theta) = P_{2m-1}(\theta)$.

Proof. To see this, let
$$Q_{2m-1}(t) = \sum_{j=0}^{m-1}\binom{2m-1}{j} t^j (1-t)^{2m-1-j} \qquad (83)$$
when $n = 2m-1$ is odd and
$$Q_{2m}(t) = \sum_{j=0}^{m-1}\binom{2m}{j} t^j (1-t)^{2m-j} + \frac{1}{2}\binom{2m}{m} t^m (1-t)^m$$
when $n = 2m$ is even. Then $Q_n\big(\sin^2(\theta/2)\big) = P_n(\theta)$ for $n = 1, 2, 3, \ldots$ and the conclusion of the lemma will follow if
$$Q_{2m-1}(t) = Q_{2m}(t). \qquad (84)$$
To see (84) write
$$(1-t)\,Q_{2m-1}(t) = (1-t)^{2m} + \sum_{j=1}^{m-1}\binom{2m-1}{j} t^j (1-t)^{2m-j}$$
and
$$t\,Q_{2m-1}(t) = \sum_{j=1}^{m}\binom{2m-1}{j-1} t^j (1-t)^{2m-j}.$$
Thus
$$Q_{2m-1}(t) = (1-t)Q_{2m-1}(t) + tQ_{2m-1}(t) = (1-t)^{2m} + \sum_{j=1}^{m-1}\left[\binom{2m-1}{j} + \binom{2m-1}{j-1}\right] t^j (1-t)^{2m-j} + \binom{2m-1}{m-1} t^m (1-t)^m.$$
Since
$$\binom{2m-1}{j} + \binom{2m-1}{j-1} = \binom{2m}{j} \quad\text{and}\quad \binom{2m-1}{m-1} = \binom{2m-1}{m},$$
we also have
$$\binom{2m-1}{m-1} = \frac{1}{2}\left[\binom{2m-1}{m-1} + \binom{2m-1}{m}\right] = \frac{1}{2}\binom{2m}{m}.$$
In view of these identities, the last expression for $Q_{2m-1}(t)$ reduces to
$$Q_{2m-1}(t) = \sum_{j=0}^{m-1}\binom{2m}{j} t^j (1-t)^{2m-j} + \frac{1}{2}\binom{2m}{m} t^m (1-t)^m = Q_{2m}(t).$$
This proves (84) and the conclusion of the lemma follows. ∎




In view of Lemma 3, in what follows we focus on the polynomials $P_{2m-1}$, $m = 1, 2, \ldots$.

Lemma 4. The polynomial $P_{2m-1}(\theta)$ can be expressed in terms of its Fourier coefficients as
$$P_{2m-1}(\theta) = \frac{1}{2} + \frac{8m}{4^{2m}}\binom{2m-1}{m}\sum_{k=1}^{m}\frac{(-1)^{k+1}}{2k-1}\binom{2m-1}{m-k}\cos\big((2k-1)\theta\big). \qquad (85)$$

Proof. We prove identity (85) by first obtaining an expression for $P_{2m-1}'(\theta)$ whose integral yields (85). Let $N = 2m-1$, use the notation established in the proof of Lemma 3, $Q_N\big(\sin^2(\theta/2)\big) = P_N(\theta)$, and write
$$Q_N'(t) = \sum_{j=0}^{m-1}\binom{N}{j}\Big\{j t^{j-1}(1-t)^{N-j} - (N-j)t^j (1-t)^{N-j-1}\Big\}$$
$$= N!\left\{\sum_{j=1}^{m-1}\frac{t^{j-1}(1-t)^{N-j}}{(j-1)!(N-j)!} - \sum_{j=0}^{m-1}\frac{t^j(1-t)^{N-j-1}}{j!(N-j-1)!}\right\} = N!\left\{\sum_{j=0}^{m-2}\frac{t^{j}(1-t)^{N-j-1}}{j!(N-j-1)!} - \sum_{j=0}^{m-1}\frac{t^j(1-t)^{N-j-1}}{j!(N-j-1)!}\right\} = N!\,\frac{-\{t(1-t)\}^{m-1}}{(m-1)!(N-m)!},$$
which can be expressed more succinctly as
$$Q_N'(t) = -m\binom{N}{m}\{t(1-t)\}^{m-1}.$$
Hence
$$P_N'(\theta) = Q_N'\big(\sin^2(\theta/2)\big)\sin(\theta/2)\cos(\theta/2) = -m\binom{N}{m}\big(\sin^2(\theta/2)\cos^2(\theta/2)\big)^{m-1}\sin(\theta/2)\cos(\theta/2)$$
$$= -m\binom{N}{m}\big(\sin(\theta/2)\cos(\theta/2)\big)^{2m-1} = -\frac{m}{(4i)^N}\binom{N}{m}\big(e^{i\theta} - e^{-i\theta}\big)^N.$$


Now
$$\big(e^{i\theta} - e^{-i\theta}\big)^N = (-1)^N\sum_{j=0}^{N}(-1)^j\binom{N}{j}\,e^{i(2j-N)\theta}$$
and pairing the $j$ and the $N-j$ terms in the last expression on the right-hand side of this string results in
$$\big(e^{i\theta} - e^{-i\theta}\big)^N = (-1)^N\sum_{j=0}^{m-1}\left\{(-1)^j\binom{N}{j}\,e^{i(2j-N)\theta} + (-1)^{N-j}\binom{N}{N-j}\,e^{i(N-2j)\theta}\right\}$$
$$= (-1)^N\sum_{j=0}^{m-1}(-1)^j\binom{N}{j}\Big(e^{i(2j-N)\theta} - e^{-i(2j-N)\theta}\Big) = (-1)^N 2i\sum_{j=0}^{m-1}(-1)^j\binom{N}{j}\sin\big((2j-N)\theta\big)$$
$$= (-1)^N 2i\sum_{k=1}^{m}(-1)^{m-k-1}\binom{N}{m-k}\sin\big((2k-1)\theta\big),$$
where we used the substitution $k = m-j$ to obtain the final expression.

Substituting the final expression for $\big(e^{i\theta} - e^{-i\theta}\big)^N$ into the one for $P_N'(\theta)$ results in
$$P_N'(\theta) = -\frac{m}{(4i)^N}\binom{N}{m}(-1)^N 2i\sum_{k=1}^{m}(-1)^{m-k-1}\binom{N}{m-k}\sin\big((2k-1)\theta\big),$$
which upon integration yields
$$P_N(\theta) = c - \frac{m}{(4i)^N}\binom{N}{m}(-1)^N 2i\sum_{k=1}^{m}(-1)^{m-k}\binom{N}{m-k}\frac{\cos\big((2k-1)\theta\big)}{2k-1}.$$
Finally, in view of (82), $P_N(\pi/2) = c = 1/2$ and the desired result (85) follows. ∎

In view of (85) we may update (77) as follows: $P_{2m-1}(\theta)$ is a trigonometric polynomial of degree $2m-1$,
$$P_{2m-1}(\theta) = \sum_{j=-(2m-1)}^{2m-1} a_j e^{ij\theta} = a_0 + \sum_{k=1}^{m} 2a_{2k-1}\cos\big((2k-1)\theta\big), \qquad (86)$$
where $a_{-j} = a_j$, $a_0 = 1/2$, and
$$a_{2k-1} = \frac{(-1)^{k+1}\,4m}{4^{2m}(2k-1)}\binom{2m-1}{m}\binom{2m-1}{m-k}. \qquad (87)$$

Lemma 5.
$$\lim_{m\to\infty} P_{2m-1}(\theta) = \begin{cases} 1 & \text{if } |\theta| < \pi/2 \\ 1/2 & \text{if } |\theta| = \pi/2 \\ 0 & \text{if } \pi/2 < |\theta| \le \pi. \end{cases} \qquad (88)$$
The convergence is uniform outside neighborhoods of $\theta = \pm\pi/2$.

Since $P_{2m-1}(\theta)$ is $2\pi$-periodic, (88) implies that $\lim_{m\to\infty} P_{2m-1}(\theta) = 1$ whenever $|\theta - 2\pi n| < \pi/2$, etc., where $n$ is any integer. In other words, if
$$\chi(\theta) = \begin{cases} 1 & \text{if } |\theta| < \pi/2 \\ 1/2 & \text{if } |\theta| = \pi/2 \\ 0 & \text{if } |\theta| > \pi/2, \end{cases}$$
then
$$\lim_{m\to\infty} P_{2m-1}(\theta) = \sum_{n=-\infty}^{\infty}\chi(\theta - 2\pi n). \qquad (89)$$

Proof of Lemma 5. To establish (88) we use the notation used to verify (85), where $N = 2m-1$ and $Q_N\big(\sin^2(\theta/2)\big) = P_N(\theta)$. For $0 \le p \le 1$ let
$$w_j = \binom{N}{j} p^j (1-p)^{N-j}, \quad j = 0, 1, \ldots, N, \qquad \mu_N = \sum_{j=0}^{N} j\,w_j, \quad\text{and}\quad V_N = \sum_{j=0}^{N}(j-\mu_N)^2 w_j.$$
Then
$$\mu_N = pN \quad\text{and}\quad V_N = p(1-p)N,$$
which follows from the fact that $\mu_N$ and $V_N$ are the mean and variance of the binomial distribution with parameters $N$ and $p$. Of course this can also be verified by direct calculation.

Since $Q_N(p) = \sum_{j=0}^{m-1} w_j$, if $p > 1/2$ we may write
$$Q_N(p) = \sum_{j=0}^{m-1} w_j \le \Big(\tfrac{N}{2}-\mu_N\Big)^{-2}\sum_{j=0}^{m-1}(j-\mu_N)^2 w_j \le \Big(\tfrac{N}{2}-\mu_N\Big)^{-2}\sum_{j=0}^{N}(j-\mu_N)^2 w_j = \frac{p(1-p)N}{\big(\tfrac{N}{2}-\mu_N\big)^2}$$
and, since $\mu_N = pN$, it follows that
$$0 \le Q_N(p) \le \frac{p(1-p)N}{N^2\big(\tfrac12 - p\big)^2} = \frac{p(1-p)}{N\big(\tfrac12 - p\big)^2}. \qquad (90)$$
Hence if $p > 1/2$,
$$\lim_{N\to\infty} Q_N(p) = 0.$$

If $p < 1/2$, use $1 - Q_N(p) = \sum_{j=m}^{N} w_j$ and similar reasoning to obtain
$$\Big(\tfrac{N}{2}-\mu_N\Big)^2\{1 - Q_N(p)\} \le \sum_{j=m}^{N}(j-\mu_N)^2 w_j \le p(1-p)N,$$
which implies that
$$0 \le 1 - Q_N(p) \le \frac{p(1-p)}{N\big(\tfrac12 - p\big)^2}. \qquad (91)$$
Hence if $p < 1/2$,
$$\lim_{N\to\infty} Q_N(p) = 1.$$

Note that in both cases the convergence is uniform outside neighborhoods of $p = 1/2$. Since $P_N(\theta) = Q_N\big(\sin^2(\theta/2)\big)$ and $P_N(\pi/2) = 1/2$ for all $N$, this completes the proof of the lemma. ∎

For future reference note that items (90) and (91) in the above proof show that
$$1 \ge P_{2m-1}(\theta) \ge 1 - \frac{C}{m} \quad\text{when } |\theta| \le a < \pi/2, \qquad (92)$$
where the constant $C$ depends only on $a$, and
$$0 \le P_{2m-1}(\theta) \le \frac{C}{m} \quad\text{when } \pi/2 < b_1 \le |\theta| \le b_2 < 3\pi/2, \qquad (93)$$
where the constant $C$ depends only on $b_1$ and $b_2$. Finally, we observe that
$$1 \ge P_{2m-1}(\theta) \ge 1/2 \quad\text{when } |\theta| \le \pi/2. \qquad (94)$$
This follows from the fact that $P_{2m-1}(0) = 1$, $P_{2m-1}(\pi/2) = 1/2$, and $P_{2m-1}(-\theta) = P_{2m-1}(\theta)$, together with the fact that $P_{2m-1}'(\theta) < 0$ when $0 < \theta < \pi$.
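The threshold behavior in (88) and the rates in (92) and (93) can be observed directly; a brief sketch (Python/NumPy, with illustrative values of m and of the test intervals):

```python
import numpy as np
from math import comb

def P(theta, m):
    """P_{2m-1}(theta) built from (74)."""
    s2, c2 = np.sin(theta / 2) ** 2, np.cos(theta / 2) ** 2
    n = 2 * m - 1
    return sum(comb(n, j) * s2 ** j * c2 ** (n - j) for j in range(m))

for m in (2, 8, 32, 128):
    inner = P(np.linspace(0.0, 1.2, 200), m)     # |theta| <= a = 1.2 < pi/2: values approach 1, cf. (92)
    outer = P(np.linspace(1.8, 3.0, 200), m)     # pi/2 < 1.8 <= |theta| <= 3.0 < 3*pi/2: values approach 0, cf. (93)
    print(m, 1 - inner.min(), outer.max())       # both deviations shrink roughly like C/m
```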

5.3 A Family of Compactly Supported Refinable Sampling Functions

Suppose $P_{2m-1}$, $m = 1, 2, \ldots$, are the polynomials studied in the previous subsection. Namely,
$$P_{2m-1}(\xi) = \sum_{j=0}^{m-1}\binom{2m-1}{j}\big(\sin^2(\xi/2)\big)^j\big(\cos^2(\xi/2)\big)^{2m-1-j}.$$
Then, by virtue of Proposition 1 and properties (76), (78), (81), and (94) of $P_{2m-1}$, the function
$$\widehat{\Phi}_m(\xi) = \prod_{n=1}^{\infty} P_{2m-1}(\xi/2^n) \qquad (95)$$
is well defined and is the Fourier transform of a refinable sampling function $\Phi_m(x)$. In view of Proposition 2 and (87), $\Phi_m(x)$ can also be defined as the limit, uniform in $x$,
$$\Phi_m(x) = \lim_{n\to\infty} g_n(x) \qquad (96)$$
of the fixed point iteration
$$g_{n+1}(x) = \sum_{j=-(2m-1)}^{2m-1} s_{m,j}\,g_n(2x - j),$$
where $g_0(x) = (1-|x|)_+$ and the coefficients $s_{m,j}$ are specified by $s_{m,-j} = s_{m,j}$, $s_{m,2k} = \delta_{0,k}$, $k = 0, 1, 2, \ldots$,

$$s_{m,2k-1} = \frac{(-1)^{k+1}\,8m}{4^{2m}(2k-1)}\binom{2m-1}{m}\binom{2m-1}{m-k}, \quad k = 1, 2, \ldots, m, \qquad (97)$$
and $s_{m,2k-1} = 0$, $k = m+1, m+2, \ldots$. We summarize these observations as

Proposition 6. The function $\Phi_m(x)$ defined by (95), or equivalently by (96), is a refinable sampling function with scaling coefficients described by (97).

The next proposition provides a bound on the compact support of the sampling kernel $\Phi_m(x)$.

Proposition 7. $\Phi_m(x)$ is supported in the interval $[-(2m-1), 2m-1]$.

Proof. This follows from (95) and an application of the Paley–Wiener theorem, but can also be seen, perhaps more directly, from (96). Namely, $g_0(x)$ has support $[-1,1]$ and $g_1(x)$ has support $[-(N+1)/2, (N+1)/2]$, where $N = 2m-1$. More generally, if $g_n(x)$ has support $[-r, r]$ then $g_{n+1}(x)$ has support $[-(N+r)/2, (N+r)/2]$. Hence by induction $g_n(x)$ has support in $[-r_n, r_n]$, where
$$r_n = \frac{1}{2^n} + N\sum_{k=1}^{n}\frac{1}{2^k}.$$
Since $\lim_{n\to\infty} r_n = N = 2m-1$, the desired result follows. ∎
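The fixed point iteration (96) with the mask (97) is convenient to carry out in its equivalent subdivision form, which produces the values of Φm on dyadic grids. The sketch below (Python/NumPy; the choices of m and of the number of refinement levels are illustrative) also checks the sampling property Φm(n) = δ0,n and the support bound of Proposition 7.

```python
import numpy as np
from math import comb

def mask(m):
    """Scaling coefficients s_{m,j}, j = -(2m-1)..(2m-1), from (97)."""
    s = np.zeros(4 * m - 1)
    s[2 * m - 1] = 1.0                                   # s_{m,0} = 1; the other even-indexed entries are 0
    for k in range(1, m + 1):
        val = ((-1) ** (k + 1) * 8 * m * comb(2 * m - 1, m) * comb(2 * m - 1, m - k)
               / (4.0 ** (2 * m) * (2 * k - 1)))
        s[2 * m - 1 + (2 * k - 1)] = val                 # s_{m,2k-1}
        s[2 * m - 1 - (2 * k - 1)] = val                 # s_{m,-(2k-1)} = s_{m,2k-1}
    return s

def cascade(m, levels=6):
    """Subdivision form of (96): v_{new}[i] = sum_j s_{i-2j} v[j], starting from the delta sequence.
    Returns approximate samples of Phi_m on the dyadic grid of spacing 2**(-levels)."""
    s, v = mask(m), np.array([1.0])
    for _ in range(levels):
        up = np.zeros(2 * len(v) - 1)
        up[::2] = v                                      # upsample by 2
        v = np.convolve(up, s)                           # refine onto a grid twice as fine
    return v

m, levels = 3, 6
v = cascade(m, levels)
x = (np.arange(len(v)) - (len(v) - 1) // 2) * 2.0 ** (-levels)   # grid, symmetric about 0
print("support radius ~", x[np.nonzero(np.abs(v) > 1e-12)][[0, -1]])   # stays inside [-(2m-1), 2m-1]
ints = np.isclose(x % 1.0, 0.0) | np.isclose(x % 1.0, 1.0)
print("values at integers:", np.round(v[ints & (np.abs(x) <= 3)], 6))  # ~ the delta sequence
```

For m = 1 the mask reduces to {1/2, 1, 1/2} (the linear spline of Sect. 5.1), and for m = 2 it gives the familiar weights 9/16 and −1/16, which is one way to see that (97) is internally consistent.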

cn Φm (x − n)

(98)

n=−∞

is well defined for any sequence of data samples {cn : n = 0, ±1, ±2, . . .}. Proposition 8

m (ξ ) = lim Φ

m→∞

⎧ ⎪ ⎪ ⎨1

if |ξ | < π

1/2 ⎪ ⎪ ⎩0

if |ξ | = π if |ξ | > π.

The limit is uniform in ξ for |ξ | ≤ a < π and |ξ | ≥ b > π . Proof To see the assertion in the case |ξ | ≤ a < π write P2m−1 (ξ ) = 1 + 0

ξ

P2m−1 (θ )dθ,

recall from the proof of Lemma 8 that

2m−1 2m − 1 

(θ ) = −m , sin(θ/2) cos(θ/2) P2m−1 m

(99)

134

W. R. Madych

and note that

2m−1 m

≤ 22m−2 . Hence if 0 ≤ θ ≤ π ,

 2m−1

−P2m−1 (θ ) ≤ m22m−2 sin(θ/2) ≤ m22m−2 (θ/2)2m−1 , and

ξ

P2m−1 (ξ ) ≥ 1 −

m22m−2 (θ/2)2m−1 dθ

0

= 1 − m22m−2 (ξ/2)2m

ξ 2m 2 =1− . 2m 4

Since 1 − c/e ≥ e−c when 0 ≤ c ≤ 1 and 1 − c/4 ≥ 1 − c/e when c ≥ 0, we may write P2m−1 (ξ ) ≥ exp(−ξ 2m )

when |ξ | ≤ 1.

Hence, if |ξ | < π and j ≥ 2, then |ξ/2j | < π/4 < 1 and we may write m (ξ ) = P2m−1 (ξ/2) Φ

∞ 

P2m−1 (ξ/2j )

j =2

≥ P2m−1 (ξ/2)

∞ 

     exp − (ξ/2j )2m = P2m−1 (ξ/2) exp − (ξ/2j )2m .

j =2

j ≥2

Since ∞ 

) * (ξ/2j )2m = (ξ/4)2m 22m /(22m − 1) < 2(ξ/4)2m

j =2

and 1 ≥ P2m−1 (ξ/2) ≥ 1 −

C m

when |ξ | ≤ a < π

where the constant C depends only on a, we may conclude that    C m (ξ ) ≥ 1 − 1≥Φ exp − 2(ξ/4)2m . m Since |ξ/4| ≤ a/4 < 1, it follows that m (ξ ) = 1 lim Φ

m→∞

uniformly when |ξ | ≤ a < π.

This proves the assertion of the proposition in the case |ξ | ≤ a.

(100)

Refinable Sampling Kernels

135

If $\xi = \pi$ then $P_{2m-1}(\xi/2) = 1/2$ for all $m$ and, since $|\xi/2| < \pi$, identity (100) yields
$$\lim_{m\to\infty}\widehat{\Phi}_m(\xi) = \lim_{m\to\infty} P_{2m-1}(\xi/2)\,\widehat{\Phi}_m(\xi/2) = \frac{1}{2}\lim_{m\to\infty}\widehat{\Phi}_m(\xi/2) = \frac{1}{2}.$$

Finally, to see the assertions of the proposition in the case $|\xi| > \pi$, suppose $b$ satisfies $\pi/2 < b < 3\pi/4$. Then $2b < 3\pi/2$ and, in view of (93),
$$0 \le P_{2m-1}(\xi/2^j) \le \frac{C}{m} \quad\text{when } b \le |\xi/2^j| \le 2b,$$
where the constant $C$ depends only on $b$. Since
$$\{\xi : |\xi| \ge b\} = \bigcup_{j=0}^{\infty}\{\xi : b \le |\xi/2^j| \le 2b\},$$
for any $\xi$ such that $|\xi| \ge b$ there is a $j_0$ such that $b \le |\xi/2^{j_0}| \le 2b$, so that
$$0 \le \widehat{\Phi}_m(\xi) = P_{2m-1}(\xi/2^{j_0})\prod_{\substack{j=1\\ j\neq j_0}}^{\infty} P_{2m-1}(\xi/2^j) \le P_{2m-1}(\xi/2^{j_0}) \le \frac{C}{m}.$$
This implies that
$$\lim_{m\to\infty}\widehat{\Phi}_m(\xi) = 0 \quad\text{uniformly when } |\xi| \ge b > \pi$$
and completes the proof of the proposition. ∎
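Proposition 8 is already visible numerically for moderate m if the infinite product (95) is truncated at a modest depth; a sketch (Python/NumPy; the truncation depth and the sample frequencies are illustrative choices):

```python
import numpy as np
from math import comb

def P(theta, m):
    """P_{2m-1}(theta) from (74)."""
    s2, c2 = np.sin(theta / 2) ** 2, np.cos(theta / 2) ** 2
    n = 2 * m - 1
    return sum(comb(n, j) * s2 ** j * c2 ** (n - j) for j in range(m))

def Phi_hat(xi, m, depth=25):
    """Truncation of the infinite product (95); the omitted factors are extremely close to 1."""
    out = np.ones_like(xi, dtype=float)
    for n in range(1, depth + 1):
        out *= P(xi / 2 ** n, m)
    return out

xi = np.array([0.5 * np.pi, 0.9 * np.pi, np.pi, 1.1 * np.pi, 2.0 * np.pi])
for m in (2, 8, 32):
    print(m, np.round(Phi_hat(xi, m), 4))
# The rows tend toward [1, 1, 0.5, 0, 0] as m grows, in line with Proposition 8.
```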

m→∞

in the distribution sense

 is in L1 (R), or, if φ lim φ ∗ Φm (x) = φ ∗ sinc(x)

m→∞

uniformly in x ∈ R.

(101)

5.4 An Alternate Development

In this subsection we describe an alternate approach that leads to compactly supported refinable sampling functions. Given the data samples $\{c_n : n = 0, \pm1, \pm2, \ldots\}$ and a positive integer $m$, the interpolation procedure of Deslauriers and Dubuc [19] extends the data to a function $f$ defined on the refined lattice $\{n/2 : n = 0, \pm1, \pm2, \ldots\}$ as follows: When $n$ is even, $n = 2k$, then $f(n/2) = f(k) = c_k$. When $n$ is odd, $n = 2k-1$, then $f(n/2) = f(k - 1/2) = p(k - 1/2)$ is the value of the unique polynomial $p$ of degree $2m-1$ that satisfies $p(j) = f(j) = c_j$, $j = k-m+\ell$, $\ell = 0, 1, 2, \ldots, 2m-1$; in other words, $f(k - 1/2)$ is the value of the unique polynomial of degree $2m-1$ that interpolates the data at the set of $2m$ points consisting of the nearest $m$ points to the left of $k - 1/2$ and the nearest $m$ points to the right of $k - 1/2$. More explicitly,
$$f(x) = c_k \quad\text{when } x = k \qquad (102)$$
and
$$f(x) = \sum_{j=k-m}^{k+m-1} c_j\prod_{\substack{\ell=k-m\\ \ell\neq j}}^{k+m-1}\frac{x-\ell}{j-\ell} \quad\text{when } x = k - 1/2. \qquad (103)$$
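A direct implementation of one refinement step (102)–(103) may help make the procedure concrete. The sketch below (Python; m, the data length, and the helper names are illustrative choices) inserts the half-integer values by local Lagrange interpolation; applied to the delta sequence it reproduces the odd-indexed coefficients of (97), in line with the discussion that follows.

```python
from math import comb

def refine_once(c, m):
    """One Deslauriers-Dubuc step: keep c at the integers (102) and fill the half-integers by
    degree 2m-1 Lagrange interpolation through the 2m nearest samples (103).
    c is indexed by n = 0..len(c)-1; half-integers without a full stencil are skipped."""
    out = {2 * k: c[k] for k in range(len(c))}          # f(k) = c_k
    for k in range(m, len(c) - m + 1):                  # need nodes k-m .. k+m-1 inside the data
        x, val = k - 0.5, 0.0
        for j in range(k - m, k + m):
            w = 1.0
            for l in range(k - m, k + m):
                if l != j:
                    w *= (x - l) / (j - l)              # Lagrange weight, as in (103)
            val += c[j] * w
        out[2 * k - 1] = val                            # value at (2k-1)/2
    return out

m = 2
delta = [0.0] * 9
delta[4] = 1.0                                          # delta sequence centered at n = 4
step = refine_once(delta, m)
print([round(step[8 + d], 4) for d in (-3, -1, 1, 3)])  # ~ [-0.0625, 0.5625, 0.5625, -0.0625]
s = lambda k: (-1) ** (k + 1) * 8 * m * comb(2 * m - 1, m) * comb(2 * m - 1, m - k) / (4.0 ** (2 * m) * (2 * k - 1))
print([round(s(k), 4) for k in (2, 1)])                 # s_{m,3} and s_{m,1} from (97): the same numbers
```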

By iterating this procedure $N$ times the data can be extended to the lattice $\{n/2^N : n = 0, \pm1, \pm2, \ldots\}$ and, in the limit if it exists, to all the dyadic rationals in $\mathbb{R}$. Since the process is linear, translation invariant, and 2-scale invariant, at the $N$th stage the result $f(x)$ enjoys the representation
$$f(x) = \sum_{k=-\infty}^{\infty} c_k\,\Psi_m(x-k) \quad\text{when } x = n/2^N,$$
where $\Psi_m(x)$ is the result of applying the process to the data sequence $\{\delta_{0,n}\}$ and thus satisfies
$$\Psi_m(x) = \sum_{n=-\infty}^{\infty}\Psi_m(n/2)\,\Psi_m(x-n) \quad\text{when } x = n/2^N. \qquad (104)$$

Hence, if the limit as $N \to \infty$ exists and can be extended continuously to all of $\mathbb{R}$, $\Psi_m(x)$ is a refinable sampling function.

Applying (102) and (103) to the sequence $\{\delta_{0,n}\}$ results in
$$\Psi_m(n/2) = \Psi_m(k) = \delta_{0,k} \quad\text{when } n = 2k \text{ is even} \qquad (105)$$
and, when $n = 2k-1$ is odd,
$$\Psi_m(n/2) = \Psi_m(k - 1/2) = \prod_{\substack{j=k-m\\ j\neq 0}}^{k+m-1}\frac{k - 1/2 - j}{-j} \qquad (106)$$
when $1-m \le k \le m$, and $\Psi_m(k - 1/2) = 0$ otherwise. A straightforward, but somewhat tedious, calculation shows that when $k = 1, 2, \ldots, m$,
$$\prod_{\substack{j=k-m\\ j\neq 0}}^{k+m-1}\frac{k - 1/2 - j}{-j} = \frac{(-1)^{k+1}\,8m}{4^{2m}(2k-1)}\binom{2m-1}{m-1}\binom{2m-1}{m-k}.$$

Comparing this with (97) leads to the conclusion that
$$\widehat{\Psi}_m(\xi) = \sum_{n=-(2m-1)}^{2m-1}\Psi_m(n/2)\,e^{-in\xi} = 2P_{2m-1}(\xi), \qquad (107)$$
where $P_{2m-1}$ is the polynomial defined by (74) and $\Psi_m(n/2) = \Phi_m(n/2)$, where $\Phi_m$ is the refinable sampling kernel defined by (95) and (96). Hence $\Psi_m(x)$ is indeed well defined on the dyadic rationals and can be extended continuously to all of $\mathbb{R}$. The resulting function $\Psi_m(x)$ is the same as the function $\Phi_m(x)$ defined by (95) and (96). We summarize this as

Proposition 9. Suppose $\Phi_m(x)$ is the refinable sampling kernel defined by (95) or, equivalently, by (96). If $\{f(n) = c_n : n = 0, \pm1, \pm2, \ldots\}$ is any data sequence then
$$f(x) = \sum_{n=-\infty}^{\infty} f(n)\,\Phi_m(x-n)$$
is a well-defined continuous function such that, for each integer $k$, $f(k - 1/2)$ is equal to the value of the unique polynomial of degree $2m-1$ that interpolates the data at the set of $2m$ points consisting of the nearest $m$ points to the left of $k - 1/2$ and the nearest $m$ points to the right of $k - 1/2$. Explicitly,
$$f(k - 1/2) = \sum_{j=k-m}^{k+m-1} f(j)\prod_{\substack{\ell=k-m\\ \ell\neq j}}^{k+m-1}\frac{k - 1/2 - \ell}{j - \ell}, \quad k = 0, \pm1, \pm2, \ldots.$$
This is the case at every dyadic level, namely, $f\big((2k-1)/2^{N+1}\big)$ is equal to the value of the unique polynomial of degree $2m-1$ that interpolates $\{f(n/2^N) : n = 0, \pm1, \pm2, \ldots\}$ at the set of $2m$ points consisting of the nearest $m$ points to the left of $(2k-1)/2^{N+1}$ and the nearest $m$ points to the right of $(2k-1)/2^{N+1}$. Explicitly,
$$f\big((2k-1)/2^{N+1}\big) = \sum_{j=k-m}^{k+m-1} f(j/2^N)\prod_{\substack{\ell=k-m\\ \ell\neq j}}^{k+m-1}\frac{(2k-1)/2 - \ell}{j - \ell}, \quad k = 0, \pm1, \pm2, \ldots.$$


5.5 Miscellaneous Comments

The reader familiar with the theory of multiresolution analyses and orthogonal wavelets may recognize that the trigonometric polynomials $P_{2m-1}$ defined by (74) are the squares of the moduli of the polynomials that give rise to the scaling functions associated with the celebrated Daubechies wavelets; in the cases $m = 2, 3, \ldots, 10$, the scaling coefficients of these scaling functions are listed in Table 6.1 of [17]. Formula (74) was adapted from [56, top of p. 551].

A theorem analogous to Theorems 1 and 2 should be valid for the cardinal series defined by (98). Partially due to time constraints, I have been unable to verify this supposition.

The interpolation method introduced in [19] that is described in Sect. 5.4 is very convenient, numerically robust, and preserves polynomials of degree $\le 2m-1$. A result equivalent to the fact that $\widehat{\Psi}_m(\xi) = 2P_{2m-1}(\xi)$, which is the content of (107), was apparently first observed in [55].

Finally, I want to thank Stuart Nelson, who kindly provided the proof of Lemma 4 and simplifications of my arguments for Lemmas 3 and 5.

References 1. A. Aldroubi and M. Unser, Sampling procedures in function spaces and asymptotic equivalence with Shannon’s sampling theory, Numer. Funct. Anal. Optim., 15 (1994), no. 1–2, 1–21 2. A. Aldroubi, and M. Unser, Families of wavelet transforms in connection with Shannon’s sampling theory and the Gabor transform, Wavelets, 509–528, Wavelet Anal. Appl., 2, Academic Press, Boston, MA, 1992. 3. B. A. Bailey and W. R. Madych, Convergence of Classical Cardinal Series and Band Limited Special Functions, J. Fourier Anal. Appl. 19 (2013), no. 6, 1207–1228. 4. B. A. Bailey and W. R. Madych, Convergence and summability of cardinal sine series, Jaen J. Approx., 10, No. 1–2, (2018), 49–72. 5. J. J. Benedetto and R. L. Benedetto, The construction of wavelet sets. Wavelets and multiscale analysis, 17–56, Appl. Numer. Harmon. Anal., Birkhäuser/Springer, New York, 2011. 6. P. L. Butzer and R. L. Stens, A modification of the Whittaker-Kotel’nikov-Shannon sampling series, Aequation’es Mathematicae 28 (1985), 305–311. 7. P. L. Butzer, J. R. Higgins, and R. L. Stens, Sampling theory of signal analysis, Development of mathematics 1950–2000, 193–234, Birkhäuser, Basel, 2000. 8. C, A. Cabrelli, C. Heil, U. M. Molter, Polynomial reproduction by refinable functions. Advances in wavelets, (Hong Kong, 1997), 121–161, Springer, Singapore, 1999. 9. C. A. Cabrelli, S. B. Heineken, U. M. Molter, Local bases for refinable spaces. Proc. Amer. Math. Soc. 134 (2006), no. 6, 1707–1718. 10. S. D. Casey and D. F. Walnut, Systems of convolution equations, deconvolution, Shannon sampling, and the wavelet and Gabor transforms. SIAM Rev. 36 (1994), no. 4, 537–577. 11. S. D. Casey and J. G. Christensen, Sampling in Euclidean and non-Euclidean domains: a unified approach, Sampling theory, a renaissance, 331–359, Appl. Numer. Harmon. Anal., Birkhäuser/Springer, Cham, 2015. 12. A. S. Cavaretta, W. Dahmen, and C. A. Micchelli, Stationary subdivision, Mem. Amer. Math. Soc. 93 (1991), no. 453, vi+186 pp.


13. C. K. Chui, An introduction to wavelets, Wavelet Analysis and its Applications, 1. Academic Press, Inc., Boston, MA, 1992. x+264 pp. 14. C.K. Chui and J. de Villiers, Wavelet Subdivision Methods, CRC Press, Boca Raton, 2010. 15. A. Cohen, I. Daubechies, J.-C. Feauveau, Biorthogonal bases of compactly supported wavelets, Comm. Pure Appl. Math. 45 (1992) 485–560. 16. I. Daubechies, Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math., 41 (1988), no. 7, 909–996. 17. I. Daubechies, Ten lectures on wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, 61. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992. xx+357 pp. 18. C. de Boor, K. Höllig, and S. Riemenschneider, Convergence of cardinal series. Proc. Amer. Math. Soc., 98 (1986), no. 3, 457–460. 19. G. Deslauriers and S. Dubuc, Symmetric iterative interpolation processes. Constr. Approx. 5 (1989), no. 1, 49–68. 20. D. L. Donoho and T. P. Y. Yu, Deslauriers-Dubuc: ten years after, Spline functions and the theory of wavelets , 355–370, CRM Proc. Lecture Notes, 18, Amer. Math. Soc., Providence, RI, 1999. 21. N. Dyn, K. Hormann, M. A. Sabin, and Z. Shen, Polynomial reproduction by symmetric subdivision schemes. J. Approx. Theory, 155 (2008), no. 1, 28–42. 22. N, Dyn, P. Oswald, Univariate subdivision and multi-scale transforms: the nonlinear case. Multiscale, nonlinear and adaptive approximation, 203–247, Springer, Berlin, 2009. 23. T. N. T. Goodman, Refinable spline functions and Hermite interpolation. Mathematical methods for curves and surfaces, 147–161, Innov. Appl. Math., Vanderbilt Univ. Press, Nashville, TN, 2001. 24. T. N. T. Goodman, C. A. Micchelli, and J. D. Ward, Spectral radius formulas for subdivision operators. Recent advances in wavelet analysis, 335–360, Wavelet Anal. Appl., 3, Academic Press, Boston, MA, 1994. 25. K. Hamm and J. Ledford, Cardinal interpolation with general multiquadrics. Adv. Comput. Math. 42 (2016), no. 5, 1149–1186. 26. E. Hernández and G. Weiss, A First Course on Wavelets, CRC Press, New York, 1996. 27. J. R. Higgins, Five short stories about the cardinal series, Bull. Amer. Math. Soc. (N.S.) 12, no. 1, (1985), 45–89. 28. J.R. Higgins, Sampling Theory in Fourier and Signal Analysis: Foundations, Oxford Science Publications, Clarendon Press, Oxford, 1996. 29. L. Hörmander, Linear partial differential operators, Third revised printing, Springer-Verlag, New York 1969. 30. A. J. Jerri, The Shannon sampling theorem-its various extensions and applications: a tutorial review, Proc. IEEE, 65, (11), (1977), 1565–1596. 31. N. Kaiblinger and W. R. Madych, Orthonormal sampling functions. Appl. Comput. Harmon. Anal. 21 (2006), no. 3, 404–412. 32. Y. Katznelson, An introduction to harmonic analysis, John Wiley & Sons, Inc., New YorkLondon-Sydney 1968 xiv+264 pp. 33. F. Keinert, Wavelets and multiwavelets, Studies in Advanced Mathematics. Chapman & Hall/CRC, Boca Raton, FL, 2004. xii+275 pp. 34. J. Ledford, On the convergence of regular families of cardinal interpolators. Adv. Comput. Math., 41 (2015), no. 2, 357–371. 35. P. G. Lemarie and Y. Meyer, Ondelettes et bases Hilbertiennes, Rev. Mat. Ibero-Amer., 2, no,1– 2, (1986), 1–18. 36. B. Ya. Levin, Lectures on entire functions, In collaboration with and with a preface by Yu. Lyubarskii, M. Sodin and V. Tkachenko. Translated from the Russian manuscript by Tkachenko. Translations of Mathematical Monographs, 150. American Mathematical Society, Providence, RI, 1996. 37. W. R. Madych and S. A. 
Nelson, Polyharmonic cardinal splines. J. Approx. Theory, 60 (1990), no. 2, 141–156.


38. W. R. Madych, Polyharmonic splines, multiscale analysis and entire functions. Multivariate approximation and interpolation, 205–216, Internat. Ser. Numer. Math., 94, Birkhäuser, Basel, 1990. 39. W. R. Madych, Some elementary properties of multiresolution analyses of L2 (Rn ), Wavelets, 259–294, Wavelet Anal. Appl., 2, Academic Press, Boston, MA, 1992. 40. W. R. Madych, The limiting behavior of certain sampling series and cardinal splines, J. Approx. Theory, 249 (2020). 41. W. R. Madych, On the Convergence of Cardinal Splines, Appl. Comput. Warmon. Anal., 48 (2020), no. 1, 508-512. 42. S. G. Mallat, Multiresolution approximations and wavelet orthonormal bases of L2 (R), Trans. Amer. Math. Soc., 315 (1989), no. 1, 69–87. 43. S. G. Mallat, A wavelet tour of signal processing, Academic Press, Inc., San Diego, CA, 1998. xxiv+577 pp. 44. Y. Meyer, Wavelets and operators, Translated from the 1990 French original by D. H. Salinger. Cambridge Studies in Advanced Mathematics, 37. Cambridge University Press, Cambridge, 1992. xvi+224 pp. 45. L. Qian, On the regularized Whittaker-Kotel’nikov-Shannon sampling formula, Proc. Amer. Math. Soc. 131 (2003), no. 4, 1169–1176. 46. L. Qian and H. Ogawa, Modified Sinc kernels for the localized sampling series, Sampl. Theory Signal Image Process. 4 (2005), no. 2, 121–139. 47. F. B. Richards and I. J. Schoenberg, Notes on spline functions. IV. A cardinal spline analogue of the theorem of the brothers Markov. Israel J. Math., 16 (1973), 94–102. 48. S. D. Riemenschneider, Convergence of interpolating splines: power growth, Israel J. Math, 23 (1976), no. 3–4, 339–346. 49. G. Schmeisser and F. Stenger, Sinc approximation with a Gaussian multiplier, Sampl. Theory Signal Image Process. 6 (2007), no. 2, 199–221. 50. I. J. Schoenberg, Contributions to the problem of approximation of equidistant data by analytic functions. Part A. On the problem of smoothing or graduation. A first class of analytic approximation formulae. Quart. Appl. Math., 4, (1946). 45–99. 51. I. J. Schoenberg, Cardinal spline interpolation, Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 12. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1973. vi+125 pp. 52. I. J. Schoenberg, Notes on spline functions. III. On the convergence of the interpolating cardinal splines as their degree tends to infinity, Israel J. Math., 16 (1973), 87–93. 53. I. J. Schoenberg, Cardinal interpolation and spline functions. VII. The behavior of cardinal spline interpolants as their degree tends to infinity. J. Analyse Math. 27 (1974), 205–229. 54. I. J. Schoenberg, On the remainders and the convergence of cardinal spline interpolation for almost periodic functions. Studies in spline functions and approximation theory, pp. 277–303. Academic Press, New York, 1976. 55. M. J. Shensa, The discrete wavelet transform: wedding the a trous and Mallat algorithms, IEEE Transactions on Signal Processing, 40, no. 10 (1992), 2464–2482. 56. R. S. Strichartz, How to make wavelets, Amer. Math. Monthly , 100 (1993), no. 6, 539–556. 57. D.F. Walnut, An Introduction to Wavelet Analysis, Birkhäuser, Boston, 2004. 58. G. G. Walter, Sampling bandlimited functions of polynomial growth, SIAM J. Math. Anal., 19, no. 5, (1988), 1198–1203. 59. G. G. Walter and X. Shen, Wavelets and other orthogonal systems, 2nd ed. Studies in Advanced Mathematics. Chapman and Hall/CRC, Boca Raton, FL, 2001. 60. A. I. Zayed, Advances in Shannon’s sampling theory. CRC Press, Boca Raton, FL, 1993.

Prolate Shift Frames and Sampling of Bandlimited Functions

Jeffrey A. Hogan and Joseph D. Lakey

Abstract The Shannon sampling theorem can be viewed as a special case of (generalized) sampling reconstructions for bandlimited signals in which the signal is expressed as a superposition of shifts of finitely many bandlimited generators. The coefficients of these expansions can be regarded as generalized samples taken at a Nyquist rate determined by the number of generators and basic shift rate parameter. When the shifts of the generators form a frame for the Paley–Wiener space, the coefficients are inner products with dual frame elements. There is a tradeoff between time localization of the generators and localization of dual generators. The Shannon sampling theorem is an extreme manifestation in which the coefficients are point values but the generating sinc function is poorly localized in time. This work reviews and extends some recent related work of the authors regarding frames for the Paley– Wiener space generated by shifts of prolate spheroidal wave functions, and the question of tradeoff between localization of the generators and of the dual frames is considered.

1 Introduction

This paper reviews and extends several recent results of the authors involving frames and bases generated by shifts of functions that are bandlimited and essentially time limited. The frames potentially provide pointwise expansions of bandlimited signals such that the value at any point is nearly a finite linear sum of bandlimited components that are concentrated near that point. Contrast this with the classical sinc series, $f(t) = \sum_{k\in\mathbb{Z}} f(k)\operatorname{sinc}(t-k)$ ($f \in \mathrm{PW}$), which expresses $f$ as a


141

142

J. A. Hogan and J. D. Lakey

superposition of shifted sinc functions: the coefficients—the integer samples—are as concentrated as possible, but the expansion functions—shifted sinc functions— are not concentrated. There is necessarily a tradeoff between concentration of coefficients (the analysis) and of expansion (the synthesis) and here is no exception: when the expansion functions are concentrated pointwise, the coefficients come from diffuse signal values. The expansion functions considered here are shifts of the most concentrated prolate spheroidal wave functions, which are eigenfunctions of the operator PΩ Q that first time limits to the interval [−1, 1] via Q, then bandlimits to [−Ω/2, Ω/2] via PΩ . The tradeoff between concentration of coefficients versus expansions will be discussed in Sect. 6. First we will review frame and Riesz basis/sequence properties of prolates in Sects. 3 and 4 and establish some generalizations of prior results for functions frequency limited to a finite union of intervals. After this we consider some specific instances of frames for bandpasslimited functions generated by what we call bandpass prolates in Sect. 5. Some of the technical apparatus is to combine baseband representations into multiband ones. In Sect. 6 we show how to compute dual frames for the baseband and bandpass cases, and we consider the tradeoff between concentration of signals versus concentration of dual frame generators when considering subspaces of bandlimited signals.

2 Background For f ∈ L1 (R), its Fourier transform fˆ and Fourier inversion f = (fˆ)∨ are F (f )(ξ ) = fˆ(ξ ) =

∞ −∞

f (t)e−2π itξ dt

(fˆ)∨ (t) =

and

∞ −∞

fˆ(ξ )e2π itξ dξ = f (t) ,

where the latter converges if fˆ ∈ L1 (R) as well. The Fourier transform extends to ∞ a unitary mapping of L2 (R) (with inner product to be f, g = −∞ f (t)g(t) dt). Given Ω > 0 we define the Paley–Wiener space PWΩ by PWΩ = {f ∈ L2 (R); fˆ(ξ ) = 0 if |ξ | > Ω/2}. Orthogonal projections PΩ and Q defined by Qf (t) = 1[−1,1] (t)f (t) and PΩ f (t) = (1[−Ω/2,Ω/2] fˆ)∨ (t) =





−∞

f (s)

sin(π Ω(t − s)) ds π(t − s)

(1)

on L2 (R) give rise to an iterated projection PΩ Q : PWΩ → PWΩ acting via PΩ Qf (t) =

1 −1

f (s)

sin(π Ω(t − s)) ds. π(t − s)

(2)

Prolate Shift Frames

143

The prolate spheroidal wave functions (PSWFs) or prolates {ϕn }∞ n=0 are the L2 (R)-normalized eigenfunctions of the self-adjoint compact operator PΩ Q, i.e., PΩ Qϕn = λn ϕn . Attractive properties of these functions include 1. The prolates may be chosen such that they are real-valued; the even-indexed prolates are even functions, while the odd-indexed prolates are odd functions; and the eigenvalues are decreasing: 1 > λ0 > λ1 > · · · > 0. 2. The eigenvalues {λn }∞ n=0 have a threshold behavior [9]: their first (approximately) a = 2Ω eigenvalues are bunched near 1, the next (approximately) log a eigenvalues plunge towards 0, and subsequent eigenvalues decay very rapidly to zero: λn ≤ Ce−αn log n . ∞ 3. Orthogonality on the line: −∞ ϕn (t)ϕm (t) dt = ϕn , ϕm = δnm . 1 4. Orthogonality on [−1, 1]: −1 ϕn (t)ϕm (t) dt = Qϕn , Qϕm = λn δnm . Property (3) of course follows directly from the nondegeneracy in (1), while (4) follows from the symmetry of the kernel in (2). The quantity a = 2Ω is the product of the lengths of the intervals on which the prolates are band limited and time concentrated, and is known as the time–bandwidth product. More generally, taking QT f = f 1[−T ,T ] , eigenfunctions of PΩ QT are called (T , Ω)-prolates. The prolates are different for different values 2ΩT . Prolates of the same order n corresponding to different time and frequency concentrations but the same time– bandwidth product, such as (Ω, 1) and (1, Ω)-prolates, are necessarily dilates of one another. We take this fact for granted when specifying time and frequency parameters in the figures. For further details on prolate functions, the reader is referred to [13] and [3]. In [14], shifts of the lowest order prolate ϕ0 were shown to generate the shiftinvariant space PWΩ . The Nyquist rate shifts of ϕ0 are not orthogonal, but they do form a frame for PWΩ , though the lower frame bound is prohibitively small. In [5], more prolates are used to generate frames for PWΩ with snug bounds.

3 Frame Properties of Shifts of Eigenfunctions of Time and Frequency Limiting Some properties of the eigenfunctions of PΩ Q carry over to limiting to more general subsets of time and frequency. We continue to work in one variable. Let S and Σ to be compact subsets of R. Let QS denote multiplication by 1S and PΣ = F −1 QΣ F . We shall assume that Σ = −Σ, which implies that the kernel of PΣ is real and symmetric. The operator PΣ QS PΣ is then self-adjoint. The eigenvalues λn = λn (S, Σ) of PΣ QS can be expressed 1 > λ0 ≥ λ1 ≥ · · ·  0 and we continue to denote by ϕn = ϕn (S, Σ) the eigenfunction belonging to λn . We  λ ϕn (ξ )|2 converge begin with a lemma stating, in effect, that partial sums N n=0 n | to the ideal multiplier 1Σ .



Lemma 1 Under the conditions above, the eigenfunctions ϕn of PΣ QS , normalized to ϕn L2 (R) = 1, form a complete orthogonal family in L2 (S) and  λn | ϕn (ξ )|2 = |S|, ξ ∈ Σ. Proof To prove that {ϕn } are complete in L2 (S), if f ∈ L2 (S) and f, ϕn = 0 for  all n then (writing f = QS f ) Q ϕn = 0 and, since { ϕn } forms an orthonormal Sf ,    basis for L2 (Σ), one concludes that Q f = 0 a.e. on Σ. But Q S S f is real analytic,  so QS f = 0 identically and therefore QS f = f = 0. Orthogonality follows from the eigenvalue property. If ρΣ is the inverse Fourier transform of 1Σ then, since ρΣ is real and symmetric, a change of order of integration and the reproducing property of ρ give

∞ 1 ϕn ϕm = ρΣ (t − s) ϕn (s) ds ρΣ (t − u) ϕm (u) du dt λn λm −∞ S S −∞ 1 1 = ϕm (u) ϕn (s)ρΣ (u − s) ds du = ϕm (u)ϕn (u)du . λn λm S λm S S ∞

Orthogonality in L2 (S) then follows from orthogonality in L2 (R). The identity PΣ QS ϕn = λn ϕn now implies that when ξ ∈ Σ, 

λn | ϕn (ξ )|2 =

∞  1 ϕn (t) e−2π itξ dt ϕ n (s) e2π isξ ds λn S S n=0

 ∞   1 e2π isξ ϕ n (s) ds ϕn (t) e−2π itξ dt = S n=0 λn S = e2π itξ e−2π itξ dt = 1S = |S| . S

 Here we used the orthonormal expansion e2π itξ = n λ1n e2π iξ , ϕn L2 (S) ϕn (t) of  the exponential in L2 (S) for ξ ∈ Σ. Thus λn | ϕn (ξ )|2 = |S| on Σ.   N ϕn (ξ )|2 Since  ϕn = (QS ϕn )∧ /λn is continuous on Σ, the partial sums n=0 λn | are continuous and monotone increasing, so Dini’s theorem implies that the convergence is uniform in ξ . As such, given A < |S| there is an N such that  A ≤ λn | ϕn (ξ )|2 ≤ |S| for √ all ξ ∈ Σ. Denote Φn (t) = λn ϕn (t) > 0 fixed and √ and, for α ∈ Z, let Φn,α (t) = Φn (t − α ) = λn ϕn (t − α ). Set Fα = Fα (S, Σ) = {Φn, α ; Fα,N = {Φn,α ;

∈ Z, n ≥ 0}

∈ Z, 0 ≤ n < N}.

(3)



Theorem 1 Suppose that S ⊂ R and α > 0 are such that for almost every t ∈ R, the translated lattice {t − α } ∈Z intersects S at least A times and at most B times. Then Fα = {Φ n,α : ∈ Z,n ≥ 0} forms a frame for PWΣ with lower  frame bound  1S (t −α ) and upper frame bound B ≤ ess supt 1S (t −α ) . A ≥ ess inft Proof Let f ∈ PWΣ . Then with τs f (t) = f (t − s), after a change of variable and using the self-adjointness of PΣ and QS one has (where ϕn is the √ nth eigenfunction of PΣ QS which, as before, is assumed self-adjoint, and Φn = λn ϕn ) 

|f, Φn,α |2 =

n

=

∞ −∞



τ−α f (t)







τ−α f (t)τ−α f (s)λn ϕn (t)ϕn (s) dtds

τ−α f, ϕn PΣ QS ϕn (t) dt =



τ−α f, PΣ QS τ−α f

n

QS τ−α f 2 = =



−∞ −∞

n



=





∞ −∞



|f (t + α )|2 dt =



S

|f (t)|2 dt

S+α

1S (t − α )|f (t)|2 dt =

∞  −∞

 1S (t − α ) |f (t)|2 dt .

We used the fact that f ∈ PWΣ in the third identity and that {ϕn } forms an orthonormal basis in Lemma 1) in the second.   Consequently,  for PWΣ (established 1S (t − α ) and B = ess supt 1S (t − α ) then if A = ess inft A f 2 ≤



|f, Φn,α |2 ≤ B f 2

n

which was to be proved.

√ Corollary 1 Fix α ≤ 1 and Ω > 0 and let Φn = λn ϕn where ϕn are the ∈ Z, n ≥ 0} forms a frame eigenfunctions of PΩ Q. Then F2α = {Φn,2α : for PWΩ with lower and upper frame bounds A ≥ "1/α# and B ≤ $1/α%. In particular, if 1/α = M ∈ N then F2α is an M-tight frame and F2 is a Parseval frame for PWΩ . Proof For each t ∈ R, the translated lattice {t − 2α } "1/α# times and at most $1/α% times.

∈Z

intersects [−1, 1) at least

The proof of Theorem 1 required that {ϕn } forms a complete orthonormal family for PΣ . The normalization of the functions Φn is not intuitive, but is what enables √ tight frame bounds. Taking Φn to have L2 (R)-norm λn insures that high order terms do not add too much energy to the coefficient sum when f is shifted into a region in which ϕn is more concentrated. A different path to tight frames of shifted



prolates comes via the identity in Lemma 1, when 1/α is an integer multiple of the length of the convex hull of Σ. Theorem 2 Let S, Σ ⊂ R be compact with Σ = −Σ and Σ ⊂ [−Ω, Ω]. The ∞,∞ family {Φn (t − k/(2Ω))}n=0,k=−∞ forms a tight frame for PWΣ with bound 2|S|Ω. Proof Using Plancherel’s theorem for the Fourier series of a function in L2 [−Ω, Ω] and Lemma 1 we have ∞ +   , f, λn ϕn · − n=0

=

∞ 

 + λn f, eπ i



·/Ω

- 2

∞ - 2    ϕn (·) = λn

n=0

n=0

= 2Ω

∞  n=0

λn

Ω

−Ω

2 f(ξ )  ϕn (ξ ) dξ = 2Ω = 2Ω|S|



Ω −Ω

Ω

−Ω Ω −Ω

f(ξ ) e−π i

|f(ξ )|2

∞ 

ξ/Ω

2  ϕn (ξ ) dξ

2 ϕn (ξ ) dξ λn 

n=0

|f(ξ )|2 = 2Ω|S|



|f(ξ )|2

Σ

where we have used the fact that f ∈ PWΣ , that is, f(ξ ) = 0 unless ξ ∈ Σ. Theorem 2 was proved in [5] for the case in which Σ = [−Ω, Ω] and S = [−1, 1]. The present frame bounds still depend on Ω, and thus on the ratio Ω/|Σ|, which is one factor in quantifying redundancy. The other is the number of eigenfunction generators—infinitely many in this case.  −1 2 to |S| on Σ implies the following corollary of Convergence of N λ | ϕ (ξ )| n=0 n n Lemma 1, which provides (non-tight) frame bounds using finitely many generators. Corollary 2 With S, Σ, and Ω as in Theorem 2, for N sufficiently large, the shifts N −1,∞ {Φn (t − k/(2Ω))}n=0,k=−∞ form a (non-tight) frame for PWΣ . Proof Calculating as above gives −1 +   N , f, λn ϕn · − n=0



N −1 - 2  2 ϕn (ξ ) dξ. |f(ξ )|2 λn  = 2Ω Σ

n=0

2  −1 ϕn (ξ ) converge uniformly to |S| on Σ so, The partial sums SN (ξ ) = N n=0 λn  if A < |S| is given then for large enough N one has A ≤ SN ≤ |S| on Σ. Consequently 2ΩA f 2 ≤

−1 +   N , f, λn ϕn · − n=0

which proves the claim.



- 2 ≤ 2Ω|S| f 2



 −1 ϕn (ξ ) 2 . In the The frame bounds are 2Ω times the inf/sup of SN (ξ ) = N n=0 λn  special case in which S = . [−1, 1] and Σ = [−Ω/2, Ω/2] one can take advantage   2 n Qϕn 2ξ of the identity  ϕn = (−i) Ωλ Ω to deduce the following, cf., [5]. n Theorem 3 For α = 1/Ω, Fα,N forms a frame for PWΩ with upper and lower frame bounds BN and AN , respectively, satisfying AN ≥ 2 inf

|ξ |≤1

N −1  n=0

|ϕn (ξ )|2 ;

BN ≤ 2 sup

N −1 

|ξ |≤1 n=0

|ϕn (ξ )|2 .

Partial sums SN (ξ ) for different N (times 2) are plotted in Fig. 2.

4 Low Redundancy Prolate Shift Frames and Riesz Sequences None of the results in Sect. 3 depends on the fine structure of eigenfunctions of PΣ QS , but rather on generic properties of the operators themselves. The frame bounds do, however, rely on redundancy in a couple of ways: first, in that the eigenfunctions are complete in L2 (S) and, second, that Σ is compact and one can take Fourier series over its convex hull (Theorem 2). Thinking of frame coefficients as generalized samples and of shifted eigenfunctions as a type of generalization of the sinc function, one might only require a total of |Σ| eigenfunction shifts per unit time to provide stable recovery of f ∈ PWΣ from its coefficients f, ϕn (· − α ) . For general concentration sets S and frequency supports Σ, there is not √ yet a general method to deduce low redundancy frame properties Fα,N = { λn ϕn (· − α ) : n = 0, . . . , N − 1, ∈ Z}, but such properties can be established in the special case when S and Σ are intervals. The following results are (essentially) proved in [5], see also [8]. In both results we assume that S = [−1, 1], Σ = [−Ω/2, Ω/2]. Theorem 4 If N ≥ $Ωα% then Fα,N is a frame for PWΩ . Theorem 5 If N ≤ "Ωα# then Fα,N forms a Riesz basis for its span. In fact, only the case Ωα = N ∈ N is proved in [5]. In that case we refer to 1/α as the Nyquist rate of Fα,N . In view of Theorem 4, Fα,N forms a Riesz basis for PWΩ in that case. However, much the same arguments provide the result for N ≤ "Ωα#, though Fα,N is not complete in PWΩ when N < Ωα. Fundamental to the proofs is the fact that the prolate spheroidal wave functions form a Markov system [3], a property that relies crucially on the fact that the prolates are also eigenfunctions of a Sturm–Liouville system—and a property that does not extend to eigenfunctions of PΣ QS when S or Σ is not a single interval.



A Markov system on [−1, 1] is a collection of functions F = {fi }∞ i=1 such that for all integers N ≥ 1 and points {xj }N j =1 ⊂ R satisfying −1 ≤ x1 < x2 < · · · < xN ≤ 1, the matrix M ∈ CN ×N with (j, k)-th entry Mj,k = fj (xk ) has non-vanishing determinant. Riesz bounds in Theorems 4 and 5 can be estimated numerically, in the case Ωα ∈ N, in terms of the pointwise norms of the N × N matrix function M(ξ ) defined on [−Ω/2, −Ω/2 + Ω/N] defined by Mj n (ξ ) = ϕn (2ξ/Ω + 2j/N ) and its inverse. Theorem 5 involves the discrete parameter N (number of prolate generators), the fixed bandwidth Ω, and the shift parameter α. The constraint N = Ωα ∈ N is needed to establish the matrix bounds corresponding to Riesz bounds. In the general case N ≤ "Ωα#, a similar argument relies on uniform injectivity of the matrix whose columns are ϕn (2ξ/Ω + 2j/(Ωα)) (whose size can depend on ξ ). At present, we do not know of any simple quantification of the Riesz bounds for Fα,N in terms of N, Ω, α and the eigenvalues λn of PΩ Q when N = Ωα ∈ N. What follows is a method to quantify Riesz sequence bounds for systems of shifts Fα,N of the first several eigenfunctions of PΩ Q when α is large. This rules out the completeness of Fα,N in PWΩ , but the argument does not rely on the Markov property. In fact, it readily extends to the case in which [−Ω/2, Ω/2] is replaced by a finite union Σ of pairwise disjoint intervals of equal length. The method should extend further to more general unions of intervals (the full extension requires a nonuniform sampling approach and is not presented here). Nevertheless, the Riesz bounds do depend explicitly on the eigenfunction/eigenvalue properties of the generators. The Feichtinger conjecture—that any bounded frame can be written as a finite union of Riesz basic sequences—implies that whenever ψ1 , . . . , ψm generate a shift-invariant space for which the shifts of these generators form a frame, one can subdivide the shifts {ψn (· − αk)} (n = 1, . . . , m; k ∈ Z) into finitely many Riesz basic sequences. The following method quantifies a concrete means to do so when the ψn are the first several eigenfunctions of PΣ Q for Σ a finite union of intervals of equal length. That is, Corollary 2 provides sufficient frame conditions on Fα,N , and Theorem 6 below provides sufficient Riesz sequence conditions on Fβ,N for the same N for such Σ. Choosing sufficient α, β such that β/α ∈ N and defining Fβ,α ,N = {Φn,βk (· − α ) : k ∈ Z}, = 0, . . . , (β/α) − 1, one obtains Fα,N = ∪ Fβ,α ,N , expressing the frame Fα,N as a finite union of Riesz sequences Fβ,α ,N .

4.1 Riesz Bounds for Prolate Shifts Here we estimate Riesz bounds for Fα,N for a range of α and N . The number of shifts per unit frequency will necessarily be less than the Nyquist rate satisfied by the examples of Theorem 5, but the method will enable bounds in the case in which the frequency support Σ is a finite union of intervals of equal length.



Theorem 6 Let α > 0 and Ω > 0 be such that αΩ ∈ 2N and N ∈ N. Let Σ be a symmetric union of J intervals of length Ω with pairwise disjoint interiors and let ϕn be the eigenfunctions of √ PΣ Q with eigenvalues λn arranged in nonincreasing order. As before, let Fα,N = { λn ϕn (· − αk); n = 0, . . . N − 1, k ∈ Z}. Then (i) Fα,N is a Riesz basis for its span provided λN −1 >

2



2π ΩJ . √ 3α

(ii) Under these circumstances, the lower and upper Riesz bounds AN and BN satisfy λN −1 −

√ √ 2 2π ΩJ 2 2π ΩJ ≤ AN < BN ≤ λ0 + √ . √ 3α 3α

The theorem generalizes to arbitrary αΩ > 0 but bounds depend then on the integer and fractional parts of α and αΩ, see [8]. We will prove the special case Σ = [−Ω/2, Ω/2], (i.e., J = 1) in detail, then outline the steps of the general case when Σ is a union of J intervals of length Ω. The proof of Theorem 6 requires the following quadrature estimate. The estimate generalizes to the case of noninteger M, but is simpler to state and prove for M ∈ N. Lemma 2 Fix r ∈ R, Ω > 0, M a positive integer and 0 ≤ η ≤ 2/M. Then QM,η,r

M−1  π ir( 2k +η−1) M 2 π ir(t−1) M = e − e dt ≤ π |r|. 2 0

(4)

k=0

Before proving the lemma, observe that when M ∈ N, QM,η,r



π ir(η−1/M) e M = sin(π r) − sin(π r/M) πr

and Taylor approximations lead to the estimate QM,η,r  | sin π r|(1 + 2π |r|/(3M) + O((π |r|/M)3 ), which indicates that the estimate (4) is essentially sharp. Proof (of Lemma 2) The quantity on the left-hand side of (4) may be rewritten as QM,η,r

M−1 2(k+1) M  M π ir( 2k +η−1) π ir(t−1) M = [e −e ] dt . 2k 2 M k=0

Using the chord–arc estimate |eix − eiy | ≤ |x − y|,

150

QM,η,r

J. A. Hogan and J. D. Lakey M−1 2(k+1) M 2k M  ≤ |eπ ir( M +η−1) − eπ ir(t−1) | dt 2k 2 M k=0

M−1 2(k+1) M π M|r|  ≤ 2k 2 M k=0



π M|r| 2

M−1  k=0

M−1  2k 2  2 + η − t dt = π M|r| η2 − η + 2 M 2 M M k=0

2 = π |r| M2

(5)

since η ∈ [0, 2/M]. This proves the lemma. Proof (of Theorem 6) Let C = {Cn ; 0 ≤ n ≤ N − 1, theorem, define /N−1 ∞ , /  FN (C) = / Cn λn ϕn (· − α /

/2 / )/ /

n=0 =−∞

=

Ω 2

−Ω 2

N−1 ∞ ,   Cn λn  ϕn (ξ )e−2π ıα

∞ −2π i where Cn (ξ ) = =−∞ Cn e allows us to express

FN (C) =

2 2 Ω N−1 2  , ξ dξ = λn  ϕn (ξ ) Cn (ξ ) dξ, Ω − 2 n=0

n=0 =−∞

N −1 

,



αΩ 2 −1

λn λm

k=− αΩ 2

n,m=0

∈ Z}. Using Plancherel’s

0

αξ ,

1 α

which has period 1/α. This periodicity

Cn (ξ )Cm (ξ ) ϕn (ξ + k/α)  ϕ m (ξ + k/α) dξ .

Since PΩ Qϕn = λn ϕn , setting 2M = αΩ, one can write N −1 

1 FN (C) = √ λn λm n,m=0



1 α

Cn (ξ )Cm (ξ )

0

1



1

−1 −1

ϕn (s)ϕm (t) e(t − s, ξ ) ds dt dξ,

where e(t − s, ξ ) =

M−1  k=−M

Finally, we can write

e2π i(ξ +k/α)(t−s) .

(6)

Prolate Shift Frames

151

N −1 

1 FN (C) = √ λn λm n,m=0



1 α

Cn (ξ )Cm (ξ )(An,m + Bn,m (ξ ))dξ = I + I I,

(7)

0

where I and I I are the sums with An,m and Bn,m terms, respectively, where An,m = M

1





1

−1 −1

ϕn (s)ϕm (t)

1

−1

eπ iΩu(t−s) du ds dt

(the factor M is for subsequent use of Lemma 2) and Bn,m (ξ ) =

1



1

−1 −1

ϕn (s)ϕm (t) f (t − s, ξ ) ds dt ,

where f (u, ξ ) = e(u, ξ ) − M

1

−1

eπ iΩuv dv .

(8)

But An,m = 2M

1



1

ϕn (s)ϕm (t)

−1 −1

sin π Ω(t − s) ds dt π Ω(t − s) 2Mλm 1 2Mλ2n = δmn ϕn (s)ϕm (s) ds = Ω Ω −1

using the fact that the truncated sinc function is the integral kernel for PΩ Q and that 1 −1 ϕn ϕm = λn δn –the double orthogonality property of the prolates. Consequently, N −1 

1 I= √ λn λm n,m=0 =



1 α

Cn (ξ )Cm (ξ )An,m dξ

0

N −1 

2Mλ2n δmn √ Ω λn λm n,m=0



1 α

Cn (ξ )Cm (ξ )dξ =

0

n=0

=

N −1 N −1   2M   λn |Cn |2 = λn |Cn |2 . Ωα n=0

Now rewrite e in (6) as

1 N −1 α 2M  λn |Cn (ξ )|2 dξ Ω 0

n=0

152

J. A. Hogan and J. D. Lakey M−1 

e(u, ξ ) =

e2π iu(ξ +k/α) =

2M−1 

k=−M

e2π iu(ξ +k/α−M/α)

k=0 2M−1 

=

eπ iuΩ

 2ξ Ω

2k 2M + Ωα − αΩ



k=0

=

2M−1 

eπ iuΩ

 2ξ Ω

k +M −1

 .

k=0

By Lemma 2 then, f in (8) becomes |f (t − s, ξ )| = e(t − s, ξ ) − M

1 −1

eπ iΩu(t−s) du

2M−1    π i(t−s)Ω 2ξ + k −1 Ω M = e −M

2

π iΩ(t−s)(u−1)

e

0

k=0

du ≤ π Ω|t − s|,

where in the Lemma, r = Ω(t − s) and η = 2ξ/Ω = 2ξ α/(αΩ) ≤ 1/M since M = αΩ/2 and ξ ∈ [0, 1/α] (note that 2M here corresponds to M in the lemma). To estimate the term I I in (7) we begin by estimating the norm of the matrix E defined by (suppressing dependence on ξ ) En,m = √

1 λn λm



1



1

−1 −1

ϕn (s)ϕm (t)f (s − t) ds dt = √

1 Bn,m . λn λm

To do so, first define F (z, u) =

N −1  n=0

1 √ ϕn (u) zn ; λn

The map z → F (z, ·) is unitary since

F (z, ·) = 2



−1 1 N  −1 n=0

1

−1 ϕn ϕm

z = (z0 , . . . , zN −1 ) . = λn δnm implies

2 N −1  1 |zn |2 . √ ϕn (u)zn du = λn n=0

Therefore 1 1 |Ez, w | = F (z, s)F (w, t)f (s − t) ds dt −1 −1

≤ πΩ

1 1 −1 −1

|s − t||F (z, s)F (w, t)| ds dt

Prolate Shift Frames

≤ πΩ

153

1 1 −1 −1

|s − t|2 ds dt

From this we conclude that E op given by C(ξ )n = Cn (ξ ), we have

II =

1/2

√ , 2 2π Ω

z

w .

z

w ≤ π Ω 8/3 z

w = √ 3

√ 2 2π Ω ≤ √ . Therefore, with C : [0, 1/α] → CN 3

N−1 1 1  α α 1 Cn (ξ )Cm (ξ )Bn,m (ξ )dξ = Cn (ξ )Cm (ξ )En,m dξ √ λn λm 0 0 n,m=0 n,m=0 N−1 

=

1 α

0



=

1 α

0

EC(ξ ), C(ξ ) dξ ≤

1 α

0

|EC(ξ ), C(ξ ) | dξ

√ √ 1 N−1 α  2 2π Ω 2 2π Ω |Cn (ξ )|2 dξ

C(ξ ) 2 dξ = √ √ 3 3 0 n=0

√ N−1 1 α 2 2π Ω  √ 3 n=0 0

∞  Cn e2π iα =−∞

√ 2 N−1 ∞ Ω   ξ dξ = 2 2π |Cn |2 . √ α 3 n=0 =−∞

Combining the estimates for I and I I one has

FN (C) ≤

N −1 

λn



n=0

√ N −1 ∞ 2 2π Ω   |Cn | + |Cn |2 √ α 3 n=0 =−∞ 2

√ N −1 ∞

2 2π Ω   ≤ λ0 + |Cn |2 . √ α 3 n=0 =−∞ Using the identity for I and bound for I I one also obtains

FN (C) ≥

N −1  n=0

λn



√ N −1 ∞ 2 2π Ω   |Cn | − |Cn |2 √ α 3 n=0 =−∞ 2

√ N −1 ∞ 2 2π Ω   ≥ λN −1 − |Cn |2 . √ α 3 n=0 =−∞

√ √ Thus the lower frame bound condition is that λN −1 > 2 2π Ω/(α 3) as the theorem states, in the case J = 1. We now briefly indicate the key steps in extending from the case J = 1 to the case in which Σ is the pairwise disjoint union of J intervals of the form ωj + [−Ω/2, Ω/2]. First, taking C = {Cn ; 0 ≤ n ≤ N − 1, ∈ Z} as before, and using Plancherel’s theorem and the relation (Qϕn )∧ = λn  ϕn on Σ, one obtains

154

J. A. Hogan and J. D. Lakey N −1 

1 FN (C) = √ λn λm n,m=0



1 α

Cn Cm (ξ )

0

1



1

−1 −1

ϕn (s)ϕm (t)×

J −1 M−1 

e−2π i(s−t)(ξ +ωj +k/α) ds dt .

j =0 k=−M

Next, applying Lemma 2 on each term of the sum over j , one has

J −1

M−1 

e−2π iωj (s−t)

j =0

e−2π i(s−t)(ξ +k/α)



J −1

Me−2π i(s−t)ωj

eπ iΩu(s−t) du

u=−1

j =0

k=−M

1

with an error π ΩJ |s − t| by adding up the corresponding errors for each of the J intervals. One then proceeds as before from (7), where now An,m = M

1



J −1

1

−1 −1

ϕn (s)ϕm (t)

e2π iωj (s−t)

j =0

1

−1

eπ iΩu(t−s) du ds dt

and Bn,m (ξ ) =

1



1

−1 −1

ϕn (s)ϕm (t) f (t − s, ξ ) ds dt

where f (t − s, ξ ) =

J −1 j =0

2π iωj (s−t)

e

 M

−2π i(s−t)(ξ +k/α)

e

k=−M

−M

1 −1

π iΩu(t−s)

e

du

.

Keeping in mind that, now, ϕn are eigenfunctions of PΣ Q, one obtains just as before 2Mλ2

that An,m = Ω n δn,m , while Bn,m is also estimated just as before, meaning now there are J terms, each separately satisfying the same bound obtained in the case J = 1. The remainder of the argument is just as before. A lower Riesz bound now requires that λN −1

√ 2 2π ΩJ > . √ α 3

The factor ΩJ here replaces Ω before (i.e., J = 1) as the bandwidth parameter. This completes the outline of the general case and thus the proof of Theorem 6.


5 Bandpass Pseudo Prolates

Implementing frame decompositions requires the ability to compute the frame elements themselves. Prolate spheroidal wave functions can be computed because they are eigenfunctions of a Sturm–Liouville operator, see, e.g., [1, 11]. When Σ is multiband, P_Σ Q no longer commutes with any second order differential operator with polynomial coefficients (see, e.g., [10, 12]), barring this approach to numerical computation of its eigenfunctions. In [3, p. 201 ff.], a general method to compute eigenfunctions of P_{Σ_1 ∪ Σ_2} Q in terms of eigenfunctions ϕ_n^{Σ_i} of P_{Σ_i} Q (i = 1, 2) was outlined, involving the eigenspace decomposition of the matrix with entries ⟨Qϕ_n^{Σ_1}, ϕ_m^{Σ_2}⟩. The latter is not always helpful, but it is in certain cases in which the eigenfunctions associated with the non-negligible eigenvalues can be approximated using a finite dimensional truncation of this matrix. This was shown to be the case for what we call bandpass prolates (see [4, 6])—eigenfunctions of the operator (P_Ω − P_{Ω′})Q (Ω′ < Ω). Slepian and Pollak [13, p. 63] referred to them as eigenfunctions (of P_Σ Q) for the bandpass kernel. The annulus {ξ : Ω′ ≤ |ξ| ≤ Ω} (0 < Ω′ < Ω) can also be written as A_{Ω_0,Δ} = {ξ : |ξ ± Ω_0| < Δ/2}, where Ω_0 = (Ω + Ω′)/2 is the center frequency and Δ = Ω − Ω′ is the passband width. We denote by PW_{Ω_0,Δ} the space of square-integrable functions that are frequency supported in A_{Ω_0,Δ} and denote by P_{Ω_0,Δ} the orthogonal projection onto PW_{Ω_0,Δ}. Functions in PW_{Ω_0,Δ} have the form e^{2πiΩ_0ξ} f_+(ξ) + e^{−2πiΩ_0ξ} f_−(ξ), where f_± ∈ PW_Δ. The method of [6] results in expansions of bandpass prolates ψ_n in the form e^{2πiΩ_0ξ} ψ_n^+(ξ) + e^{−2πiΩ_0ξ} ψ_n^−(ξ), where ψ_n^± = ∑_{m=0}^{∞} c_{nm}^± ϕ_m and ϕ_m is the baseband prolate eigenfunction of P_Δ Q. The coefficients in these expansions decay rapidly in |n − m|. If F_{α,N} is a prolate shift frame for PW_Δ, then one also has expansions of the bandpass prolate components ψ_n^± in terms of F_{α,N}. Then f_± can be expanded in this frame, and thus f in appropriate modulates of the frame elements. In light of this observation, in designing frames of shifts of bandpass-limited functions, rather than taking the eigenfunctions of P_{Ω_0,Δ} Q as generators, it makes sense to take as generators the most concentrated linear combinations of the modulated baseband prolates e^{2πiΩ_0ξ} ϕ_n and e^{−2πiΩ_0ξ} ϕ_n, n = 0, …, N − 1. We refer to the resulting functions as bandpass pseudo prolates (BPψPs) because they are not actual eigenfunctions of P_{Ω_0,Δ} Q, but rather of the surrogate operator P^{(N)}_{Ω_0,Δ} Q, where P^{(N)}_{Ω_0,Δ} denotes orthogonal projection onto the span of the up- and down-modulated baseband prolates e^{±2πiΩ_0ξ} ϕ_n, n = 0, …, N − 1. Numerical computation of the BPψPs then amounts to finding the coefficients of e^{±2πiΩ_0ξ} ϕ_n in the orthogonal expansions of the eigenfunctions of P^{(N)}_{Ω_0,Δ} Q. For large N, the BPψPs are good L²-approximations of true bandpass prolates, since one has

‖P^{(N)}_{Ω_0,Δ} Q − P_{Ω_0,Δ} Q‖_{HS} ≤ C ∑_{n=N}^{∞} λ_n(P_Δ Q),


where ‖·‖_{HS} denotes the Hilbert–Schmidt norm. More precise estimates of ϕ_n − ϕ_n^{(N)}—the error between the nth eigenfunctions of P_{Ω_0,Δ} Q and of P^{(N)}_{Ω_0,Δ} Q—can be obtained from the methods of [6].

Let ψ = ∑_{n=0}^{N−1} w_n e^{2πiΩ_0 t} ϕ_n + ∑_{n=0}^{N−1} z_n e^{−2πiΩ_0 t} ϕ_n be a linear combination of the generating elements of PW^{(N)}_{Ω_0,Δ}. Then

⟨P^{(N)}_{Ω_0,Δ} Qψ, e^{2πiΩ_0 t} ϕ_m⟩ = ∑_{n=0}^{N−1} ⟨P^{(N)}_{Ω_0,Δ} Q((w_n e^{2πiΩ_0 t} + z_n e^{−2πiΩ_0 t}) ϕ_n), e^{2πiΩ_0 t} ϕ_m⟩
= ∑_{n=0}^{N−1} w_n ∫_{−1}^{1} ϕ_n(t) ϕ_m(t) dt + ∑_{n=0}^{N−1} z_n ∫_{−1}^{1} e^{−4πiΩ_0 t} ϕ_n(t) ϕ_m(t) dt
= λ_m w_m + ∑_{n=0}^{N−1} Γ_{mn} z_n = (Λw + Γz)_m,    (9)

where Λ ∈ C^{N×N} is the diagonal matrix with entries Λ_{mn} = λ_n δ_{mn} (λ_n being the nth eigenvalue of P_Δ Q), Γ ∈ C^{N×N} has entries

Γ_{mn} = ∫_{−1}^{1} e^{−4πiΩ_0 t} ϕ_n(t) ϕ_m(t) dt,

and w = (w_0, w_1, …, w_{N−1})^T, z = (z_0, z_1, …, z_{N−1})^T. Similarly,

⟨P^{(N)}_{Ω_0,Δ} Qψ, e^{−2πiΩ_0 t} ϕ_m⟩ = (Γ^* w + Λz)_m.    (10)

Combining (9) and (10), we see that ψ is an eigenfunction of P^{(N)}_{Ω_0,Δ} Q with eigenvalue μ ∈ R if and only if

[ Λ   Γ  ]  [ w ]       [ w ]
[ Γ^* Λ  ]  [ z ]  = μ  [ z ].    (11)

The 2N × 2N matrix Y = [ Λ  Γ ; Γ^*  Λ ] is self-adjoint, so it has 2N eigenvectors c^{(j)} = ( w^{(j)} ; z^{(j)} ) ∈ C^{2N} (0 ≤ j ≤ 2N − 1) that can be chosen to be orthonormal. Efficient computation of matrices having the structure of Y is addressed in [6]. Regarding each w^{(j)} and z^{(j)} as a column vector in C^N defines the N × 2N matrices

W = ( w^{(0)}  w^{(1)}  ⋯  w^{(2N−1)} ),    Z = ( z^{(0)}  z^{(1)}  ⋯  z^{(2N−1)} )    (12)


such that the partitioned matrix U = [ W ; Z ] ∈ C^{2N×2N} (W stacked above Z) is unitary and

U U^* = [ W ; Z ] ( W^*  Z^* ) = [ WW^*  WZ^* ; ZW^*  ZZ^* ] = [ I_N  0_N ; 0_N  I_N ],

that is,

W W^* = Z Z^* = I_N;    W Z^* = Z W^* = 0_N    (13)

and

U^* U = ( W^*  Z^* ) [ W ; Z ] = W^* W + Z^* Z = I_{2N}.    (14)

Taking, as before, Φ_n = √λ_n ϕ_n for the baseband prolates, we define the bandpass pseudo prolates (BPψP) {ψ_j}_{j=0}^{2N−1} by (Fig. 1)

ψ_j(t) = ∑_{n=0}^{N−1} [ w_n^{(j)} e^{2πiΩ_0 t} + z_n^{(j)} e^{−2πiΩ_0 t} ] Φ_n(t).    (15)

These ψ_j differ from the eigenfunctions of P^{(N)}_{Ω_0,Δ} Q, in which Φ_n is replaced by ϕ_n. This normalization enables frame properties of F_{α,N} for PW_Δ to be inherited by the corresponding bandpass prolate shift frames. A different definition would take ψ_j(t) to be the true eigenfunction ∑_{n=0}^{N−1} [ w_n^{(j)} e^{2πiΩ_0 t} + z_n^{(j)} e^{−2πiΩ_0 t} ] ϕ_n(t) of P^{(N)}_{Ω_0,Δ} Q.
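To make the construction concrete, the following is a minimal MATLAB sketch (not the implementation of [6]) of how one might assemble the matrix Y of (11) from samples of the baseband prolates and synthesize the BPψPs of (15). The input names (t, dt, phi, lambda, Omega0) are hypothetical, the integral defining Γ is approximated by a Riemann sum on the sample grid, and no claim of numerical optimality is made.

```matlab
% Sketch: build Y = [Lambda Gamma; Gamma' Lambda] from baseband prolate samples
% and synthesize bandpass pseudo prolates.  Assumed (hypothetical) inputs:
%   t      - column of sample points covering [-1,1], spacing dt
%   phi    - length(t)-by-N matrix whose columns are the baseband prolates phi_n
%   lambda - the N eigenvalues of P_Delta*Q
%   Omega0 - center frequency
function psi = bpsp_sketch(t, dt, phi, lambda, Omega0)
  N = size(phi, 2);
  Lambda = diag(lambda);
  % Gamma_{mn} = int_{-1}^{1} exp(-4*pi*i*Omega0*t) phi_n(t) phi_m(t) dt  (Riemann sum)
  Gamma = (phi' * (exp(-4i*pi*Omega0*t(:)) .* phi)) * dt;
  Y = [Lambda, Gamma; Gamma', Lambda];      % self-adjoint block matrix of (11)
  [C, ~] = eig((Y + Y') / 2);               % orthonormal eigenvectors c^{(j)}
  W = C(1:N, :);  Z = C(N+1:2*N, :);        % N x 2N blocks as in (12)
  Phi = phi .* sqrt(lambda(:).');           % normalized generators Phi_n
  up = exp(2i*pi*Omega0*t(:));  down = conj(up);
  % psi(:, j) = sum_n [W(n,j)*up + Z(n,j)*down] .* Phi(:, n), cf. (15)
  psi = (up .* Phi) * W + (down .* Phi) * Z;
end
```

Each column of psi is then one BPψP ψ_j sampled on the grid t.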

5.1 Frames of Shifted Bandpass Pseudo Prolates

With ψ_j (0 ≤ j ≤ 2N − 1) defined as in (15) and given α > 0, we consider also the shifted bandpass pseudo prolates ψ_{j,αℓ} (0 ≤ j ≤ 2N − 1, ℓ ∈ Z) defined by ψ_{j,αℓ}(t) = ψ_j(t − αℓ). We seek conditions under which

G_{α,2N} = { ψ_{j,αℓ} : ℓ ∈ Z, 0 ≤ j ≤ 2N − 1 }

is a frame for PW_{Ω_0,Δ}.

Theorem 7 Suppose α > 0 and N ∈ N are such that F_{α,N} is a frame for PW_Δ. Then G_{α,2N} is a frame for PW_{Ω_0,Δ} with the same frame bounds.


Fig. 1 The three most concentrated symmetric BPψPs, plotted for the case Δ = 5/2 and Ω_0 = 10, using N = 5 in the definition of P^{(N)}_{Ω_0,Δ}

If H = H_1 ⊕ H_2 is an orthogonal direct sum, the union of frames for H_1 and H_2, respectively, forms a frame for H whose bounds are the min and max of the subspace frame bounds. After an orthogonal transformation of generators, the theorem effectively reduces to this observation.

Proof Suppose f ∈ PW_{Ω_0,Δ}. Then

⟨f, ψ_{j,αℓ}⟩ = ∑_{n=0}^{N−1} w_n^{(j)} ∫ f(t) e^{−2πiΩ_0(t−αℓ)} Φ_n(t − αℓ) dt + ∑_{n=0}^{N−1} z_n^{(j)} ∫ f(t) e^{2πiΩ_0(t−αℓ)} Φ_n(t − αℓ) dt
= e^{2πiΩ_0 αℓ} ∑_{n=0}^{N−1} w_n^{(j)} ∫ f(t) e^{−2πiΩ_0 t} Φ_n(t − αℓ) dt + e^{−2πiΩ_0 αℓ} ∑_{n=0}^{N−1} z_n^{(j)} ∫ f(t) e^{2πiΩ_0 t} Φ_n(t − αℓ) dt =: C_{jℓ} + D_{jℓ},

so that, for fixed ℓ, ∑_{j=0}^{2N−1} |⟨f, ψ_{j,αℓ}⟩|² = ∑_j ( |C_{jℓ}|² + |D_{jℓ}|² + 2ℜ(C_{jℓ} \overline{D_{jℓ}}) ). However, ∑_j w_n^{(j)} \overline{z_m^{(j)}} = (W Z^*)_{nm} = 0 by (13), so ∑_{j=0}^{2N−1} C_{jℓ} \overline{D_{jℓ}} = 0. Therefore,

∑_{j=0}^{2N−1} |⟨f, ψ_{j,αℓ}⟩|² = ∑_{j=0}^{2N−1} ( |C_{jℓ}|² + |D_{jℓ}|² )
= ∑_{j=0}^{2N−1} ( | ∑_{n=0}^{N−1} w_n^{(j)} ⟨f e^{−2πiΩ_0 t}, Φ_n(· − αℓ)⟩ |² + | ∑_{n=0}^{N−1} z_n^{(j)} ⟨f e^{2πiΩ_0 t}, Φ_n(· − αℓ)⟩ |² )
= ‖W^T x‖² + ‖Z^T y‖²,

with W, Z ∈ C^{N×2N} as in (12) and x, y ∈ C^N given by x_n = ⟨f e^{−2πiΩ_0 t}, Φ_n(· − αℓ)⟩ and y_n = ⟨f e^{2πiΩ_0 t}, Φ_n(· − αℓ)⟩ (0 ≤ n ≤ N − 1). However, by (13), ‖W^T x‖² = ⟨x, W W^* x⟩ = ‖x‖², and similarly ‖Z^T y‖² = ‖y‖². We therefore have ∑_{j=0}^{2N−1} |⟨f, ψ_{j,αℓ}⟩|² = ‖x‖² + ‖y‖², and since F_{α,N} is a frame for PW_Δ (with upper and lower frame bounds B and A),

∑_{ℓ=−∞}^{∞} ∑_{j=0}^{2N−1} |⟨f, ψ_{j,αℓ}⟩|² = ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} ( |⟨f e^{−2πiΩ_0 t}, Φ_n(· − αℓ)⟩|² + |⟨f e^{2πiΩ_0 t}, Φ_n(· − αℓ)⟩|² )
= ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} ( |⟨P_Δ(f e^{−2πiΩ_0 t}), Φ_n(· − αℓ)⟩|² + |⟨P_Δ(f e^{2πiΩ_0 t}), Φ_n(· − αℓ)⟩|² )
≤ B ‖P_Δ(f e^{−2πiΩ_0 t})‖² + B ‖P_Δ(f e^{2πiΩ_0 t})‖² = B ‖f‖²,

and similarly, ∑_{ℓ=−∞}^{∞} ∑_{j=0}^{2N−1} |⟨f, ψ_{j,αℓ}⟩|² ≥ A ‖f‖². This completes the proof.

5.2 Riesz Bases of Shifted Bandpass Pseudo Prolates

Theorem 8 Suppose α > 0 and N ∈ N are such that F_{α,N} is a Riesz basis for PW_Δ. Then G_{α,2N} is a Riesz basis for PW_{Ω_0,Δ} with the same Riesz bounds.

Proof Suppose the upper and lower Riesz bounds of F_{α,N} are B and A, respectively, that C_{jℓ} (0 ≤ j ≤ 2N − 1, ℓ ∈ Z) satisfies ∑_{ℓ=−∞}^{∞} ∑_{j=0}^{2N−1} |C_{jℓ}|² < ∞, and that

f(t) = ∑_{ℓ=−∞}^{∞} ∑_{j=0}^{2N−1} C_{jℓ} ψ_{j,αℓ}(t).

Then with W_{nj} = w_n^{(j)} and Z_{nj} = z_n^{(j)} we have

f(t) = ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} [ (W C_ℓ)_n e^{2πiΩ_0 t} + (Z C_ℓ)_n e^{−2πiΩ_0 t} ] Φ_{n,ℓ}(t),

where C_ℓ = (C_{jℓ})_{j=0}^{2N−1} ∈ C^{2N}. Hence,

f̂(ξ) = ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} [ (W C_ℓ)_n Φ̂_{n,ℓ}(ξ − Ω_0) + (Z C_ℓ)_n Φ̂_{n,ℓ}(ξ + Ω_0) ].

The first term in this sum is supported on I_+ = Ω_0 + [−Δ/2, Δ/2], while the second term is supported on I_− = −I_+, and these intervals are disjoint when Ω_0 > Δ/2. Consequently,

∫_{−∞}^{∞} |f(t)|² dt = ∫_{I_+} |f̂(ξ)|² dξ + ∫_{I_−} |f̂(ξ)|² dξ.

But for the integral over I_+, with C_ℓ ∈ C^{2N} the column with entries C_{jℓ} as above, we have (since F_{α,N} is a Riesz basis)

∫_{I_+} |f̂(ξ)|² dξ = ∫_{I_+} | ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} (W C_ℓ)_n Φ̂_{n,ℓ}(ξ − Ω_0) |² dξ = ∫_{−Δ/2}^{Δ/2} | ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} (W C_ℓ)_n Φ̂_{n,ℓ}(ξ) |² dξ
= ∫_{−∞}^{∞} | ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} (W C_ℓ)_n Φ_{n,ℓ}(t) |² dt ≤ B ∑_{ℓ=−∞}^{∞} ∑_{n=0}^{N−1} |(W C_ℓ)_n|² = B ∑_{ℓ=−∞}^{∞} C_ℓ^* (W^* W) C_ℓ.    (16)

Similarly,

∫_{I_−} |f̂(ξ)|² dξ ≤ B ∑_{ℓ=−∞}^{∞} C_ℓ^* (Z^* Z) C_ℓ.    (17)

Combining (16) and (17) and applying (14) gives

∫_{−∞}^{∞} |f(t)|² dt ≤ B ∑_{ℓ=−∞}^{∞} C_ℓ^* (W^* W + Z^* Z) C_ℓ = B ∑_{ℓ=−∞}^{∞} ∑_{j=0}^{2N−1} |C_{jℓ}|²,

thus giving the upper Riesz bound. The lower Riesz bound is obtained similarly.


As indicated, one can define the BPψP ψ_j alternatively as the jth eigenfunction of P^{(N)}_{Ω_0,Δ} Q. In view of Theorem 2, defining Ψ_j = √μ_j ψ_j, where μ_j is the jth eigenvalue of P^{(N)}_{Ω_0,Δ} Q, will also provide shift frames/Riesz bases.

6 Dual Frames

Applying the inverse of the frame operator f ↦ ∑_{n,ℓ} ⟨f, Φ_{n,αℓ}⟩ Φ_{n,αℓ} to the generators Φ_n produces the generators Φ̃_n of the dual frame F̃_{α,N}. In Sect. 6.3 we will see that when the Φ_n are well localized, the canonical dual generators are not. We then consider alternative frames and duals that may provide a better balance between localization of the shift frame generators and of the dual generators.

6.1 Duals of Redundant Prolate Shift Frames

Any dual frame of a shift-invariant frame is also shift-invariant, so computing a dual frame is a matter of computing its shift generators. Suppose that {g_n(· − αk) : 0 ≤ n ≤ P − 1, k ∈ Z} is a subset of (a closed subspace of) PW_Ω. If 1/α ≥ Ω then, as a consequence of the Shannon sampling theorem,

⟨f, g⟩ = α ∑_{ℓ∈Z} f(αℓ) \overline{g(αℓ)},    f, g ∈ PW_Ω,    (18)

and the formal frame operator Tf = ∑_{n=0}^{P−1} ∑_{k=−∞}^{∞} ⟨f, g_{n,αk}⟩ g_{n,αk} can then be written

Tf(t) = ∑_{n=0}^{P−1} ∑_{k=−∞}^{∞} ( ∫_{−∞}^{∞} f(x) g_n(x − αk) dx ) g_n(t − αk)
= ∑_{n=0}^{P−1} ∫_{−∞}^{∞} f(x) ∑_{k=−∞}^{∞} g_n(x − αk) g_n^*(αk − t) dx
= ∑_{n=0}^{P−1} (1/α) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x) g_n(x − s) g_n^*(s − t) ds dx = (1/α) ∑_{n=0}^{P−1} f ∗ g_n ∗ g_n^*(t)


when 1/α ≥ Ω, where f∗g denotes the convolution of f and g and g^*(t) = \overline{g(−t)}. Taking Fourier transforms one obtains

(Tf)^∧(ξ) = f̂(ξ) μ(ξ);    μ(ξ) = (1/α) ∑_{n=0}^{P−1} |ĝ_n(ξ)|².    (19)

In other words, the formal frame operator is a Fourier multiplier T = T_μ, and the shifts {g_n(· − αk)} form a frame if μ is bounded above and below on its support (the frame bounds are A = inf μ and B = sup μ). Let Σ denote the support of μ in [−Ω/2, Ω/2]. If μ is non-vanishing on Σ, then μ can be inverted on Σ and T = T_μ can be inverted on PW_Σ by defining

(T^{−1} f)^∧(ξ) = f̂(ξ)/μ(ξ),    ξ ∈ Σ.

In the special case in which g_n = Φ_n = √λ_n ϕ_n and ϕ_n is the nth prolate eigenfunction of P_Ω Q, the sums defining μ for different P are plotted in Fig. 2. The canonical dual frame is obtained by applying T^{−1} to each of the frame elements, e.g., [2]. For generators g_n ∈ PW_Ω, the generators g̃_n of the canonical dual are obtained by applying T^{−1} to the generators of the primal frame, namely,

(g̃_n)^∧(ξ) = ĝ_n(ξ)/μ(ξ),    |ξ| ≤ Ω/2.    (20)

Fig. 2 Multiplier μ = μ_P in (19) for g_n = √λ_n ϕ_n using P = 5, 10, and 15 terms, with a time–bandwidth product of 10 and 1/α equal to twice the Nyquist rate. The values of μ_5 near ±1 are approximately 5 × 10^{−5}

Expanding 1/μ in its Fourier series on [−Ω/2, Ω/2],


1/μ(ξ) = ∑_{ℓ=−∞}^{∞} b_ℓ e^{2πiℓξ/Ω};    b_ℓ = (1/Ω) ∫_{−Ω/2}^{Ω/2} e^{−2πiℓη/Ω} / μ(η) dη,

allows one to express

g̃_n(t) = ∫_{−Ω/2}^{Ω/2} ( ĝ_n(ξ)/μ(ξ) ) e^{2πitξ} dξ = ∑_ℓ b_ℓ ∫_{−Ω/2}^{Ω/2} ĝ_n(ξ) e^{2πiξ(t + ℓ/Ω)} dξ = ∑_ℓ b_ℓ g_n(t + ℓ/Ω).

One can estimate g̃_n numerically by estimating the coefficients {b_ℓ}: either by numerical estimation of the integral defining b_ℓ if one has precise values of μ, or by estimating {b_ℓ} as a convolution inverse of the sequence of Fourier coefficients {μ_ℓ} of μ. Using the Shannon sampling theorem as above, one can compute

μ_ℓ = (1/Ω) ∑_{n=0}^{P−1} ∑_{k=−∞}^{∞} g_n((ℓ − k)/Ω) g_n^*(k/Ω).    (21)

In the case of a frame F_{α,P} generated by the first P (1, Ω)-PSWFs, when P ≲ 2Ω a variant of the 2ΩT-theorem allows good approximation of the non-negligible μ_ℓ by truncated samples of the ϕ_n corresponding to (21). For small P, μ is nearly vanishing at ±Ω/2, making inversion of μ problematic. On the other hand, if P ≳ 2Ω, then estimating μ_ℓ via truncated samples becomes problematic because ϕ_n is no longer concentrated on [−1, 1] when n ≳ 2Ω. In [7] a method was devised to allow accurate frame expansions for functions bandlimited to a strictly smaller bandwidth. We will return to address better frame expansions for the full space PW_Ω in Sect. 6.3 (Fig. 3).
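The following MATLAB sketch (not the authors' code, and only a quadrature-based approximation) indicates one way to carry out the computation in (19)–(20) numerically: estimate μ on a frequency grid from generator samples, compute the Fourier coefficients b_ℓ of 1/μ by a Riemann sum, and form samples of the dual generators via g̃_n(t) = ∑_ℓ b_ℓ g_n(t + ℓ/Ω). The input names (t, g, alpha, Omega, Lmax) are hypothetical, and the sample grid is assumed to cover the effective support of the generators.

```matlab
% Sketch of the dual-generator computation based on (19)-(20).
%   t     - column of sample points with spacing alpha
%   g     - length(t)-by-P matrix of generator samples g_n(t)
%   Omega - bandwidth; Lmax - number of Fourier coefficients of 1/mu kept
function [gdual, b] = dual_generators_sketch(t, g, alpha, Omega, Lmax)
  P   = size(g, 2);
  xi  = linspace(-Omega/2, Omega/2, 2048);          % frequency grid on the band
  E   = exp(-2i*pi*t(:)*xi);                        % kernel e^{-2*pi*i*t*xi}
  ghat = alpha * (E.' * g);                         % Riemann-sum estimate of ghat_n(xi)
  mu  = sum(abs(ghat).^2, 2) / alpha;               % (19): mu = (1/alpha) sum_n |ghat_n|^2
  dxi = xi(2) - xi(1);
  ell = (-Lmax:Lmax).';
  b   = (exp(-2i*pi*(ell/Omega)*xi) * (1./mu)) * dxi / Omega;   % b_ell of 1/mu
  gdual = zeros(size(g));
  for k = 1:numel(ell)                              % gtilde_n(t) = sum_ell b_ell g_n(t + ell/Omega)
    for n = 1:P
      gdual(:, n) = gdual(:, n) + b(k) * interp1(t, g(:, n), t + ell(k)/Omega, 'linear', 0);
    end
  end
  gdual = real(gdual);                              % the prolate generators are real
end
```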

Fig. 3 The most concentrated generators ϕ_n (top) and their duals (bottom), plotted for n = 0, 2, 4 using P = 5 and shift parameter α = 1/(4Ω), with a time–bandwidth product equal to 10. The generators are computed by sinc-interpolating 8000 Nyquist sample estimates. The dual generators are not themselves concentrated on the concentration interval of the prolates ([−2.5, 2.5] here)


6.2 Duals of Bandpass Prolates

In the case of bandpass prolates, or of the BPψPs defined in Sect. 5, dual generators for frames of PW_{Ω_0,Δ} generated by the first Q (Ω_0, Δ)-BPψPs can be obtained from the operator

(T_ψ f)^∧(ξ) = f̂(ξ) μ_ψ(ξ);    (T_ψ^{−1} f)^∧(ξ) = f̂(ξ)/μ_ψ(ξ),    μ_ψ(ξ) = (1/α) ∑_{n=0}^{Q−1} |ψ̂_n(ξ)|²    (22)

(with the terms of μ_ψ possibly normalized by eigenvalues), where the inversion is restricted to {ξ : |ξ ± Ω_0| ≤ Δ/2}. Since μ_ψ vanishes outside A_{Ω_0,Δ}, one cannot compute the Fourier series of 1/μ_ψ on {ξ : |ξ| < Ω_0 − Δ/2}. Instead, one observes that the ψ_j defined in (15) have the form

ψ_j = ψ_j^+ e^{2πiΩ_0 t} + ψ_j^− e^{−2πiΩ_0 t};    ψ_j^− = \overline{ψ_j^+},

such that ψ̂_j^+ is supported in [−Δ/2, Δ/2] and |ψ̂_j^+| is symmetric about zero. As in the baseband case, one can define μ_ψ^+ = (1/α) ∑_{n=0}^{Q−1} |ψ̂_n^+(ξ)|² on [−Δ/2, Δ/2] and invert its Fourier series there to define ψ̃_n^+ = ψ_n^+ ∗ (1/μ_ψ^+)^∨, then recover the dual frame generator by defining

ψ̃_n = 2 Re ( ψ̃_n^+ e^{2πiΩ_0 t} ).

6.3 Better Prolate Shift Frames Prolate shift frames potentially give rise to representations of bandlimited signals whose oscillatory behavior is captured locally in the shifted prolates. If Fα,N is a prolate shift frame with dual generators  ϕn then one has f =

−1  N

λn f,  ϕn (· − α ) ϕn (· − α )

f ∈ PWΩ .

(23)

∈Z n=0

However, if N ≲ 2Ω then the multiplier μ nearly vanishes near ±Ω/2, see Fig. 4. This translates into slow decay of the Fourier coefficients of 1/μ, hence poor localization of the duals ϕ̃_n. On the other hand, if N ≳ 2Ω, then ϕ_n itself is poorly localized (n ≳ 2Ω). In neither case does (23) provide a representation of f ∈ PW_Ω in which f(t) is well approximated by sums of nearby shifts.


Fig. 4 Multiplier μ_{5,base}(Ωξ) for Ω = 2.5 (top left) and μ_{5,base} + μ_{1,pass} (bottom left). On the right are shown the Fourier coefficients of the inverses of the functions on the left. These coefficients convolve the prolate generator samples to define the samples of the dual prolate generators. Hence, the more compact Fourier series of 1/(μ_{5,base} + μ_{1,pass}) results in more compact dual frame generators

The fundamental problem is that the prolates that capture the high-frequency information within [−Ω/2, Ω/2] are poorly localized in time. A potential remedy is, instead, to generate a frame by combining shifts of the low-order prolates with those of one or more functions that are frequency localized near the edge of [−Ω/2, Ω/2], such as the most concentrated BPψPs. Let μ_{P,base} = ∑_{n=0}^{P−1} λ_n |ϕ̂_n|² and let μ_{Q,pass} = 1_{[−Ω/2, Ω/2]} ∑_{m=0}^{Q−1} |ψ̂_m|², where the ϕ_n are (1, Ω)-prolates and the ψ_m are (Ω/2, Ω/2)-BPψPs (the choice Δ = Ω/2 is ad hoc). The multipliers μ_{5,base} and μ_{5,base} + μ_{1,pass} are plotted in Fig. 4. The differences between the duals defined by these multipliers and the primal PSWF generators are plotted in Fig. 5. Since μ_{1,pass} is symmetric about Ω/2, its restriction to [−Ω/2, Ω/2] can be regarded as a periodic shift of a smooth periodic function. The price paid for this mollification of the frame generated by the first P baseband prolates is that the shifts of the most concentrated bandpass prolate are not bandlimited to [−Ω/2, Ω/2]. Nevertheless, the component of f reflected in a sum ∑_{ℓ∈Z} ⟨f, ψ_m(· − ℓ/Ω)⟩ ψ_m(t − ℓ/Ω) is itself Ω-bandlimited, and pointwise values of such a component are well approximated by partial sums of nearby terms. In addition, this component can be computed from the Nyquist samples of f.

Lemma 3 Let ψ ∈ PW_{2Ω} satisfy ψ̂(ξ) = ψ̂(ξ + Ω) for −Ω < ξ < 0. Then for any f ∈ PW_Ω (i.e., f̂ supported on [−Ω/2, Ω/2]),

⟨f, ψ⟩ = (1/(2Ω)) ∑_{ℓ∈Z} f(ℓ/Ω) \overline{ψ(ℓ/Ω)}.


Fig. 5 Top row: prolate generators (n = 0, 2, 4) using N = 5 and Ω = 2.5 (time–bandwidth product of 10). Second row: the difference between each generator and the corresponding dual generator when the duals are defined in terms of the multiplier μ_{5,base} + μ_{1,pass}. Third row: the difference between each generator and the corresponding dual generator when the duals are defined in terms of the multiplier μ_{5,base} alone (the lack of symmetry is due to quadrature errors). The oscillatory errors in the third row reflect the high-frequency shift coefficients plotted in the top right of Fig. 4, indicating that the dual frame generators in that case depend on distant shifts of the primal generators

Proof By the Shannon sampling theorem,

⟨f, ψ⟩ = ⟨f, P_Ω ψ⟩ = (1/Ω) ∑_ℓ f(ℓ/Ω) \overline{(P_Ω ψ)(ℓ/Ω)}
= (1/Ω) ∑_ℓ f(ℓ/Ω) \overline{ ∫_{−Ω/2}^{Ω/2} ψ̂(ξ) e^{2πiℓξ/Ω} dξ }
= (1/Ω) ∑_ℓ f(ℓ/Ω) \overline{ ( ∫_{0}^{Ω/2} ψ̂(ξ) e^{2πiℓξ/Ω} dξ + ∫_{−Ω/2}^{0} ψ̂(ξ) e^{2πiℓξ/Ω} dξ ) }
= (1/Ω) ∑_ℓ f(ℓ/Ω) \overline{ ( ∫_{0}^{Ω/2} ψ̂(ξ) e^{2πiℓξ/Ω} dξ + ∫_{Ω/2}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ ) }
= (1/Ω) ∑_ℓ f(ℓ/Ω) \overline{ ∫_{0}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ },

where the passage from the third to the fourth line uses ψ̂(ξ) = ψ̂(ξ + Ω) for −Ω < ξ < 0 together with e^{2πiℓ(ξ+Ω)/Ω} = e^{2πiℓξ/Ω}. On the other hand, the same two facts give, for all integers ℓ,

∫_{−Ω}^{0} ψ̂(ξ) e^{2πiℓξ/Ω} dξ = ∫_{0}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ,

so that

ψ(ℓ/Ω) = ∫_{−Ω}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ = ( ∫_{−Ω}^{0} + ∫_{0}^{Ω} ) ψ̂(ξ) e^{2πiℓξ/Ω} dξ = 2 ∫_{0}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ.

Since ψ(ℓ/Ω) = 2 ∫_{0}^{Ω} ψ̂(ξ) e^{2πiℓξ/Ω} dξ, the lemma follows.


When μ has the form μ = μ_{P,base} + μ_{Q,pass}, the shifts of the first P baseband prolates and the first Q bandpass prolates form a frame for PW_Ω with dual generators ϕ̃_n = (ϕ̂_n/μ)^∨ (n = 0, …, P − 1) and ψ̃_m = (ψ̂_m/μ)^∨ (m = 0, …, Q − 1). The ψ_m satisfy the condition of Lemma 3, so the frame coefficients can be computed by taking inner products of (shifted) samples of f with the samples of ϕ̃_n and ψ̃_m. Each such sample sequence is well localized (when P, Q are small) because it is obtained by convolution of the samples of ϕ_n or ψ_m with the Fourier coefficients of 1/μ, which is well behaved in this case.

Acknowledgements JAH thanks the Centre for Computer-Assisted Research in Mathematics and its Applications at the University of Newcastle for its continued support. JAH is also supported by Australian Research Council Discovery Project Grant DP160101537. Thanks Roy. Thanks HG.

References

1. J.P. Boyd, "Algorithm 840: computation of grid points, quadrature weights and derivatives for spectral element methods using prolate spheroidal wave functions—prolate elements," ACM Trans. Math. Softw., vol. 31, pp. 149–165, 2005.
2. C. Heil, "What is . . . a frame?" Notices Amer. Math. Soc., vol. 60, pp. 748–750, 2013.
3. J.A. Hogan and J.D. Lakey, Duration and Bandwidth Limiting. Prolate Functions, Sampling, and Applications. Boston, MA: Birkhäuser, 2012.
4. J.A. Hogan and J.D. Lakey, "Letter to the Editor: On the numerical evaluation of bandpass prolates," J. Fourier Anal. Appl., vol. 19, pp. 439–446, 2013.
5. J.A. Hogan and J.D. Lakey, "Frame properties of shifts of prolate spheroidal wave functions," Appl. Comput. Harmon. Anal., vol. 39, pp. 21–32, 2015.
6. J.A. Hogan and J.D. Lakey, "On the numerical evaluation of bandpass prolates II," J. Fourier Anal. Appl., vol. 23, pp. 125–140, 2017.
7. J.A. Hogan and J.D. Lakey, "Frame expansions of bandlimited signals using prolate spheroidal wave functions," Sampl. Theory Signal Image Process., vol. 15, pp. 139–154, 2016.
8. J.A. Hogan and J.D. Lakey, "Riesz bounds for prolate shifts," 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 271–274, 2017.
9. H.J. Landau and H. Widom, "Eigenvalue distribution of time and frequency limiting," J. Math. Anal. Appl., vol. 77, pp. 469–481, 1980.
10. J.A. Morrison, "On the eigenfunctions corresponding to the bandpass kernel," Quart. Appl. Math., vol. 21, pp. 13–19, 1963.
11. A. Osipov, V. Rokhlin, and H. Xiao, Prolate Spheroidal Wave Functions of Order Zero: Mathematical Tools for Bandlimited Approximation. New York, NY: Springer, 2013.
12. I. SenGupta, B. Sun, W. Jiang, G. Chen, and M.C. Mariani, "Concentration problems for bandpass filters in communication theory over disjoint frequency intervals and numerical solutions," J. Fourier Anal. Appl., vol. 18, pp. 182–210, 2012.
13. D. Slepian and H.O. Pollak, "Prolate spheroidal wave functions, Fourier analysis, and uncertainty. I," Bell Syst. Tech. J., vol. 40, pp. 43–63, 1961.
14. G.G. Walter and X. Shen, "Wavelets based on prolate spheroidal wave functions," J. Fourier Anal. Appl., vol. 10, pp. 1–26, 2004.

A Survey on the Unconditional Convergence and the Invertibility of Frame Multipliers with Implementation

Diana T. Stoeva and Peter Balazs

Abstract The paper presents a survey of frame multipliers and related concepts. In particular, it includes a short motivation of why multipliers are of interest, a review and extension of recent results devoted to the unconditional convergence of multipliers, sufficient and/or necessary conditions for the invertibility of multipliers, and representations of the inverse via Neumann-like series and via multipliers with particular parameters. Multipliers for frames with specific structure, namely Gabor multipliers, are also considered. Some of the results on the representation of the inverse multiplier are implemented in Matlab codes and the algorithms are described.

1 Introduction

Multipliers are operators which consist of an analysis stage, a multiplication, and then a synthesis stage (see Definition 1). This is a very natural concept that occurs in a lot of scientific questions in mathematics, physics, and engineering. In Physics, multipliers are the link between classical and quantum mechanics, the so-called quantization operators [1]. Here multipliers link sequences (or functions) m_k, corresponding to the measurable variables in classical physics, to operators M_{m,Φ,Ψ}, which are the measurables in quantum mechanics, via (1), see, e.g., [20, 24]. In Signal Processing, multipliers are a particular way to implement time-variant filters [28]. One of the goals in signal processing is to find a transform which allows certain properties of the signal to be easily found or seen. Via such a transform, one can focus on those properties of the signal one is interested in, or would like to change. The coefficients can be manipulated directly in the transform domain and thus, certain signal features can be amplified or attenuated. This is, for example,





the case of what a sound engineer does during a concert, operating an equalizer, i.e., changing the amplification of certain frequency bands in real time. Filters, i.e., convolution operators, correspond to a multiplication in the Fourier domain, and therefore to a time-invariant change of frequency content. They are one of the most important concepts in signal processing. Many approaches in signal processing assume a quasi-stationary assumption, i.e., a shift-invariant approach is assumed only locally. There are several ways to have a true time-variant approach (while still keeping the relation to the filtering concept), and one of them is to use the so-called Gabor Filters [5, 28]. These are Gabor multipliers, i.e., time-frequency multipliers. An additional audio signal processing application is the transformation of a melody played by one instrument to a sound played by another. For an approach of how to identify multipliers which transfer one given signal to another one, see [31]. In Acoustics, the time-frequency filters are used in several fields, for example, in computational auditory scene analysis (CASA) [48]. The CASA-term refers to the study of auditory scene analysis by computational means, i.e., the separation of auditory events. The CASA problem is closely related to the problem of source separation. Typically, an auditory-based time-frequency transform, see, e.g., [9, 29], is calculated from certain acoustic features and the so-called time-frequency masks are generated. These masks are directly applied to the perceptual representation; they weight the “target” regions (mask = 1) and suppress the background (mask = 0). This corresponds to binary time-frequency multipliers. Such adaptive filters are also used in perceptual sparsity, where a time-frequency mask is estimated from the signal and a simple psychoacoustical masking model, resulting in a time-frequency transform, where perceptual irrelevant coefficients are deleted, see [10, 30]. Last but certainly not least, multipliers were and are of utmost importance in Mathematics, where they are used for the diagonalization of operators. Schatten used multipliers for orthonormal bases to describe certain classes of operators [36], later known as Schatten p-classes. The well-known spectral theorems, see, e.g., [19], are just results stating that certain operators can always be represented as multipliers of orthonormal bases. Because of their importance for signal processing, Gabor multipliers were defined as time-frequency multipliers [13, 22, 26], which motivated the definition of multipliers for general frames in [3]. Recently, the formal definition of frame multipliers led to a lot of new approaches to multipliers [2, 7, 34, 35] and new results [23, 41, 43]. Like in [11] we show a visualization of a multiplier Mm,Ψ,Ψ in the timefrequency plane in Fig. 1, using a different setting at a different soundfile. The visualization is done using algorithms in the LTFAT toolbox [33, 38], in particular using the graphical user interface MULACLAB1 for multipliers [33]. We consider 2 s long excerpt of a Bulgarian folklore song (“Prituri se planinata,” performed by Stefka Sabotinova) as signal f . For a time-frequency representation of the musical

1 http://ltfat.github.io/doc/base/mulaclab.html.



Fig. 1 An illustrative example to visualize a multiplier. (TOP LEFT) The time-frequency representation of the music signal f . (TOP RIGHT) The symbol m, found by a (manual) estimation of the time-frequency region of the singer’s voice. (BOTTOM LEFT) The multiplication in the TF domain. (BOTTOM RIGHT) Time-frequency representation of Mm,Ψ,Ψ f

signal f (TOP LEFT) we use a Gabor frame Ψ (a 23 ms Hanning window with 75% overlap and double length FFT). This Ψ constitutes a Parseval Gabor frame, so it is self-dual. By a manual estimation, we determine the symbol m that should describe the time-frequency region of the singer’s voice. This region is then multiplied by 0.01, the rest by 1 (TOP RIGHT). Finally, we show the multiplication in the TF domain (BOTTOM LEFT), and time-frequency representation of the modified signal (BOTTOM RIGHT). In this paper we deal with the mathematical concept of multipliers, in particular with frame multipliers. We will give a survey over the mathematical properties of these operators, collecting known results and combining them with new findings and giving accompanying implementation. We give proofs only for new results. We consider the case of multipliers for general sequences (Sect. 3), as well as multipliers for Gabor and wavelet systems (Sect. 4). The paper is organized as follows. In Sect. 2 we collect the basic needed definitions and the notation used in the paper. In Sect. 3, first we discuss well-definedness of multipliers, as well as necessary and sufficient conditions for the unconditional convergence of certain classes of multipliers. Furthermore, we list relations between the symbol of a multiplier and the operator type of the multiplier. Finally, we consider injectivity, surjectivity, and invertibility of multipliers, presenting necessary and/or sufficient conditions. In the cases of invertibility of frame multipliers, we give formulas for the inverse operators in two ways—as Neumann-like series and as multipliers



determined by the reciprocal symbol and appropriate dual frames of the initial ones. For the formulas of the inverses given as Neumann-like series, we provide implementations via Matlab-codes. In Sect. 4, first we state consequences of the general results on unconditional convergence applied to Gabor and wavelet systems; next we consider invertibility of Gabor multipliers and representation of the inverses via Gabor multipliers with dual Gabor frames of the initial ones. Finally, in Sect. 5, we implement the inversion of frame multipliers according to some of the statements in Sect. 3 and visualize the convergence rate of one of the algorithms in Fig. 2. For the codes of the implementations, as well as for the scripts and the source files which were used to create Figs. 1 and 2, see the webpage https://www.kfs.oeaw.ac. at/InversionOfFrameMultipliers.

2 Notation and Basic Definitions Throughout the paper, H denotes a separable Hilbert space and I denotes a countable index set. If not stated otherwise, Φ and Ψ denote sequences (φn )n∈I and (ψn )n∈I , respectively, with elements in H ; m denotes a complex number sequence (mn )n∈I , m—the sequence of the complex conjugates of mn , 1/m—the sequence of the reciprocals on mn , mΦ—the sequence (mn φn )n∈I . The sequence Φ (resp. m) is called norm-bounded below (in short, N BB) if 0 < infn∈I φn

(resp. 0 < infn∈I |mn |). The sequence m is called semi-normalized if it is N BB and supn∈I |mn | < ∞. An operator F : H → H is called invertible on H (or just invertible, if there is no risk of confusion of the space) if it is a bounded bijective operator from H onto H . Recall that Φ is called – a Bessel  sequence in H if there is BΦ ∈ (0, ∞) (called a Bessel bound of Φ) so that n∈I |f, φn |2 ≤ BΦ f 2 for every f ∈ H ; – a frame for H [21] if there exist  AΦ ∈ (0, ∞) and BΦ ∈ (0, ∞) (called frame bounds of Φ) so that AΦ f 2 ≤ n∈I |f, φn |2 ≤ BΦ f 2 for every f ∈ H ; – a Riesz basis for H [12] if it is the image of an orthonormal basis under an invertible operator. Recall the needed basics from frame theory (see, e.g., [14, 17, 21]). Let Φ be a frame for H . Then there exists a frame Ψ for H so that f = n∈I f, φn ψn =  n∈I f, ψn φn for every f ∈ H ; such a frame Ψ is called a dual frame of Φ. One associates the analysis operator UΦ : H → 2 given by UΦ f  = (f, φn )n∈I , the synthesis operator TΦ : 2 → H given by TΦ (cn )n∈I = n∈I cn φn , and the frame operator SΦ : H → H given by SΦ f = n∈I f, φn φn ; all these operators are well-defined and bounded for Bessel sequences. For a frame Φ for −1 H , the frame operator SΦ is invertible on H and the sequence (SΦ φn )n∈I forms

A Frame Multiplier Survey

173

 = (φ n )n∈I . If Φ is a dual frame of Φ called the canonical dual, denoted by Φ a Riesz basis for H (and thus a frame for H ), the canonical dual is the only dual frame of Φ. If Φ is a frame for H which is not a Riesz basis for H (the so-called overcomplete frame), then in addition to the canonical dual there are other dual  frames. If a sequence Ψ not  necessarily being a frame for H satisfies f = f, φ ψ (resp. f = n n n∈I n∈I f, ψn φn ) for all f ∈ H , it is called a synthesis (resp. analysis) pseudo-dual of Φ; for more on analysis and synthesis pseudo-duals see [40]. Let Λ = {(ω, τ )} be a lattice in R2d , i.e., a discrete subgroup of R2d of the form AZ2d for some invertible matrix A. For ω ∈ Rd and τ ∈ Rd , recall the modulation operator Eω : L2 (Rd ) → L2 (Rd ) determined by (Eω f )(x) = e2π iωx f (x) and the translation operator Tτ : L2 (Rd ) → L2 (Rd ) given by (Tτ f )(x) = f (x − τ ). For g ∈ L2 (Rd ), a system (resp. frame for L2 (Rd )) of the form (Eω Tτ g)(ω,τ )∈Λ is called a Gabor system (resp. Gabor frame for L2 (Rd )). Recall also the dilation operator Da : L2 (R) → L2 (R) (for a = 0) determined by (Da f )(x) = √1|a| f ( xa ). Given ψ ∈ L2 (R), a system of the form (Daj Tbj ψ)j ∈I for a discrete set (aj , bj )j ∈I in R+ × R is called a wavelet system for L2 (R). Definition 1 Given m, Φ, and Ψ , the operator Mm,Φ,Ψ given by Mm,Φ,Ψ f =



mn f, ψn φn , f ∈ D(Mm,Φ,Ψ ),

(1)

 with domain D(Mm,Φ,Ψ ) = {f ∈ H : mn f, ψn φn converges}, is called a multiplier. The sequence m is called the symbol (also, the weight) of the multiplier. When Φ and Ψ are Gabor systems (resp. Bessel sequences, frames, Riesz bases, wavelet systems), Mm,Φ,Ψ is called a Gabor (resp. Bessel, frame, Riesz, wavelet) multiplier. A multiplier Mm,Φ,Ψ is called well-defined (resp. unconditionally convergent) if the series in (1) is convergent (resp. unconditionally convergent) in H for every f ∈ H . Note that frame multipliers with constant symbol m = (1) are called frame-type operators or mixed frame operators in the literature.
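In a finite-dimensional setting the action of a multiplier is easy to make explicit. The following is a small MATLAB sketch (an illustration, not code from the paper): for sequences Φ and Ψ given as the columns of matrices P and Q and a symbol m, the multiplier of Definition 1 is simply the matrix P·diag(m)·Q^*. All names below are hypothetical.

```matlab
% Finite-dimensional illustration of Definition 1:
% M_{m,Phi,Psi} f = sum_n m(n) * <f, psi_n> * phi_n  =  P * diag(m) * Q' * f
d = 8; K = 12;
P = randn(d, K);                 % columns phi_n of the sequence Phi
Q = randn(d, K);                 % columns psi_n of the sequence Psi
m = exp(-(0:K-1).'/4);           % a (decaying) symbol
M = P * diag(m) * Q';            % matrix of the multiplier M_{m,Phi,Psi}
f = randn(d, 1);
Mf = M * f;                      % same as summing m(n)*(Q(:,n)'*f)*P(:,n) over n
```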

3 Properties of Multipliers In this section we consider some mathematical properties of multipliers using sequences, without assuming a special structure for them, while Sect. 4 is devoted to certain classes of structured sequences.

174

D. T. Stoeva and P. Balazs

3.1 On Well-Definedness, Unconditional Convergence, and Relation to the Symbol Multipliers are motivated by applications and they can be intuitively understood. Still the general definition given alone is a mathematical one, and so we have to clarify basic mathematical properties like boundedness and unconditional convergence. We collect and state some basic results: Proposition 1 For a general multiplier, the following relations to unconditional convergence hold: (a) A well-defined multiplier is always bounded, but not necessarily unconditionally convergent. (b) If m ∈ ∞ , then a √ Bessel multiplier Mm,Φ,Ψ is well-defined bounded operator with Mm,Φ,Ψ ≤ BΦ BΨ m ∞ and the series in (1) converges unconditionally for every f ∈ H . The converse does not hold in general, even for a frame multiplier Mm,Φ,Ψ . (c) If Φ and Ψ are NBB, then a Bessel multiplier Mm,Φ,Ψ is unconditionally convergent if and only if m ∈ ∞ . (d) If Φ, Ψ , and m are NBB, then Mm,Φ,Ψ is unconditionally convergent if and only if Mm,Φ,Ψ is a Bessel multiplier and m is semi-normalized. (e) If Φ is NBB Bessel, then Mm,Φ,Ψ is unconditionally convergent if and only if mΨ is Bessel. (f) If Φ is NBB Bessel and m is NBB, then Mm,Φ,Ψ is unconditionally convergent if and only if Mm,Φ,Ψ is a Bessel multiplier. (g) If Φ is a Riesz basis for H , then Mm,Φ,Ψ is unconditionally convergent if and only if it is well-defined if and only if mΨ is Bessel. (h) If Φ is a Riesz basis for H and Ψ is NBB, then well-definedness of Mm,Φ,Ψ implies m ∈ ∞ , while the converse does not hold in general. (k) A Riesz multiplier Mm,Φ,Ψ is well-defined if and only if m ∈ ∞ if and only if it is unconditionally convergent. Proof (a) That every well-defined multiplier is bounded is stated in [42, Lemma 2.3], the result follows simply by the uniform boundedness principle. However, not every well-defined multiplier is unconditionally convergent— consider, for example, the sequences in [43, Remark 4.10(a)], namely Φ = (e1 , e2 , e2 , e2 , e3 , e3 , e3 , e3 , e3 , . . .) and Ψ = (e1 , e2 , e2 , −e2 , e3 , e3 , −e3 , e3 , −e3 , . . .) for which one has that M(1),Φ,Ψ is the identity operator on H but it is not unconditionally convergent. (b) The first part of (b) is given in [3]. To show that the converse is not valid in general, i.e., that unconditionally convergent frame multiplier does not require m ∈ ∞ , consider, for example, [43, Ex. 4.6.3(iv)], namely m = (1, 1, 1, 1, 2, 2, 1, 3, 3, . . .), Φ = (e1 , e1 , −e1 , e2 , e2 , −e2 , e3 , e3 , −e3 , . . .),

A Frame Multiplier Survey

(c) (e) (f)

(g) (k)

175

and Ψ = (e1 , e1 , e1 , e2 , 12 e2 , 12 e2 , e3 , 13 e3 , 13 e3 , . . .), for which one has that Mm,Φ,Ψ is unconditionally convergent. (resp. (d)) One of the directions follows from (b) and the other one from [42, Prop. 3.2(iii)] (resp. [42, Prop.3.2(iv)]). is given in [42, Prop. 3.4(i)]. Let Φ be NBB Bessel and let m be NBB. If Mm,Φ,Ψ is unconditionally convergent, then by (e) the sequence mΨ is Bessel which by the NBB-property of m implies that Ψ is Bessel. The converse statement follows from (b). and (h) can be found in [42, Prop. 3.4]. The first equivalence is given in [42, Prop. 3.4(iv)] and for the second equivalence see (g).

A natural question for any linear operator is how its adjoint looks. For multipliers we can show the following. Proposition 2 ([42]) For any Φ, Ψ , and m, the following holds: ∗ (i) If Mm,Φ,Ψ is well-defined (and hence bounded), then Mm,Φ,Ψ equals Mm,Ψ,Φ in a weak sense. ∗ (ii) If Mm,Φ,Ψ and Mm,Ψ,Φ are well-defined on all of H , then Mm,Φ,Ψ = Mm,Ψ,Φ .

For more on well-definedness and unconditional convergence, we refer to [42]. As a consequence of Proposition 2, every well-defined multiplier Mm,Φ,Φ with real symbol m is self-adjoint. Below we list some further relations between the symbol and the operator type of a Bessel multiplier. In fact we investigate when a multiplier belongs to certain operator classes. Proposition 3 Let Mm,Φ,Ψ be a Bessel multiplier. (a) If m ∈ c0 , then Mm,Φ,Ψ is a compact operator. 1 , then M (b) If m ∈ is a trace  class operator with m,Φ,Ψ √

Mm,Φ,Ψ trace ≤ BΦ BΨ m 1 and tr(Mm,Φ,Ψ ) = n mn φn , ψn . 2 , then M (c) If m ∈ m,Φ,Ψ is a Hilbert Schmidt operator with √

Mm,Φ,Ψ H S ≤ BΦ BΨ m 2 . (d) If m ∈ p for 1 √ < p < ∞, then Mm,Φ,Ψ is a Schatten-p class operator with

Mm,Φ,Ψ Sp ≤ BΦ BΨ m p . Proof (a)–(c) is in [3]. (d) follows directly from (b) and Proposition 1(b) by complex interpolation [49, Rem. 2.2.5, Theorems 2.2.6 and 2.2.7].

3.2 On Invertibility As mentioned above, a multiplier is a time-variant filtering, which can be seen as the mathematical description of what a sound engineer does during a concert or recording session. Should the technician make an error, can we get the original

176

D. T. Stoeva and P. Balazs

signal back? Or in the mathematical terms used here: How and under which circumstances can we invert a multiplier? To shorten notation, throughout this section M denotes any one of the multipliers Mm,Φ,Ψ and Mm,Ψ,Φ .

3.2.1

Riesz Multipliers, Necessary and Sufficient Conditions for Invertibility

Schatten [36] investigated multipliers for orthonormal bases and showed many nice results leading to certain operator classes. Extending the notion to Riesz bases keeps the results very intuitive and easy, among them the following one: Proposition 4 Let Φ be a Riesz basis for H . The following statements hold: (a) If Ψ is a Riesz basis for H and m is semi-normalized, then Mm,Φ,Ψ is invertible and −1 = M1/m,Ψ,Φ. Mm,Φ,Ψ

(2)

(b) If Ψ is a Riesz basis for H , then M is invertible if and only if m is seminormalized. (c) If m is semi-normalized, then M is invertible if and only if Ψ is a Riesz basis for H . Proof (a) is [3, Prop. 7.7], (b) and (c) are in [41, Theorem 5.1]. Under the assumptions of Proposition 4, one can easily observe that M is invertible if and only if mΨ is a Riesz basis for H . The following proposition further clarifies the cases when M is injective or surjective. Proposition 5 ([44]) Let Φ be a Riesz basis for H . The following equivalences hold: (a) (b) (c) (d)

Mm,Φ,Ψ Mm,Ψ,Φ Mm,Φ,Ψ Mm,Ψ,Φ

is injective if and only if mΨ is a complete Bessel sequence in H . is injective if and only if TmΨ is injective. is surjective if and only if mΨ is a Riesz sequence. is surjective if and only if mΨ is frame for H .

For further results related to invertibility and non-invertibility of M in the cases when Φ is a Riesz basis and m is not necessarily semi-normalized, see [44].

3.2.2

Bessel Multipliers, Necessary Conditions for Invertibility

Looking at invertible multipliers indicates somehow a kind of duality of the involved sequences. To make this more precise, let us state the following: Proposition 6 ([41]) Let Mm,Φ,Ψ be invertible.

A Frame Multiplier Survey

177

(a) If Ψ (resp. Φ) is a Bessel sequence for H with bound B, then mΦ (resp. mΨ ) 1 satisfies the lower frame condition for H with bound . −1 2 B Mm,Φ,Ψ

(b) If Ψ (resp. Φ) is a Bessel sequence in H and m ∈ ∞ , then Φ (resp. Ψ ) satisfies the lower frame condition for H . (c) If Ψ and Φ are Bessel sequences in H and m ∈ ∞ , then Ψ , Φ, mΦ, and mΨ are frames for H .

3.2.3

Frame Multipliers

Naturally, due to the non-minimality the case for overcomplete frames is not that easy as for Riesz bases. First we are interested to determine cases when multipliers for frames are invertible: Sufficient Conditions for Invertibility and Representation of the Inverse via Neumann-Like Series In this part of the section we present sufficient conditions for invertibility of multipliers Mm,Φ,Ψ based on perturbation conditions, and formulas for the inverse −1 Mm,Φ,Ψ via Neumann-like series. In Sect. 5 we provide Matlab-codes for the inversion of multipliers according to Propositions 8, 9, and 11. Let us begin our consideration with the specific case when Ψ = Φ and m is positive (or negative) semi-normalized sequence. In this case the multiplier is simply a frame operator of an appropriate frame: Proposition 7 ([6, Lemma 4.4]) If Φ is a frame for H and m is positive (resp. negative) and semi-normalized, then Mm,Φ,Φ = S(√mn φn ) (resp. Mm,Φ,Φ = √ −S(√|mn | φn ) ) for the weighted frame ( mn φn )n∈I and is therefore invertible on H. If we give up with the condition Φ = Ψ , but still keep the condition on m to be semi-normalized and positive (or negative), then the multiplier is not necessarily invertible and thus it is not necessarily representable as a frame operator. Consider, for example, the frames Φ = (e1 , e1 , e2 , e2 , e3 , e3 , ...) and Ψ = (e1, e1, e2, e3, e4, e5, ...); the multiplier M(1),Φ,Ψ is well-defined (even unconditionally convergent) but not injective [43, Ex. 4.6.2]. For the class of multipliers which satisfy the assumptions of the above proposition except not necessarily the assumption Φ = Ψ , below we present a sufficient condition for invertibility, which is a reformulation of [41, Prop. 4.1]. The reformulation is done aiming at an efficient implementation of the inversion of multiple multipliers— √ given Φ and m, once the frame operator of the weighted frame ( mn φn ) and its inverse are calculated, one can invert Mm,Φ,Ψ for different Ψ fast, just with matrix summation and multiplication (see Algorithm 1 in Sect. 5). Proposition 8 Let Φ be a frame for H , m be a positive (or negative) seminormalized sequence, and a and b satisfy 0 < a ≤| mn |≤ b for every n. Assume that the sequence Ψ satisfies the condition

178

D. T. Stoeva and P. Balazs

P1 :



|h, ψn − φn |2 ≤ μ h 2 , ∀ h ∈ H ,

for some μ ∈ [0,

a 2 A2Φ ). b2 BΦ

Then Ψ is a frame for H , M is invertible on H , and

1 1

h ≤ M −1 h ≤

h , √ √ bBΦ + b μBΦ aAΦ − b μBΦ

M

−1

 ∞ =

−1 k −1 √ √ √ k=0 [S( mn φn ) (S( mn φn ) − M)] S( mn φn ) , −1 k −1 √ √ √ k=0 (−1)[S( |mn |φn ) (S( |mn |φn ) + M)] S( |mn |φn ) ,

∞

where the n-term error M −1 −

n

k=0 . . .

(3)

if mn > 0, ∀n, if mn < 0, ∀n,

is bounded by

√ n+1 b μBΦ 1 · . √ aAΦ aAΦ − b μBΦ

(4)

For μ = 0, the above statement gives Proposition 7. Note that P1 is a standard perturbation result for frames (for μ < AΦ ). Notice that the bound

a 2 A2Φ b2 BΦ

for μ is sharp in the sense that when the inequality in P1

holds with μ ≥

a 2 A2Φ , b2 BΦ

the conclusions may fail (consider, for example,

(en )∞ n=1 , Ψ

= (2e1 , 12 e2 , 13 e3 , 14 e4 , . . .)), but may also hold (consider, m = (1), Φ = for example, m = (1), Φ = (en )∞ n=1 , Ψ = (2e1 , e2 , e3 , e4 , . . .)). Note that for an overcomplete frame Φ, a frame Ψ satisfying P1 with μ < a 2 A2Φ (≤ b2 BΦ

AΦ ) must also be overcomplete, by Christensen [17, Cor. 22.1.5]. This is also in correspondence with the aim to have an invertible frame multiplier in the above statement—when m is semi-normalized and Φ and Ψ are frames, then invertibility of Mm,Φ,Ψ requires either both Φ and Ψ to be Riesz bases, or both to be overcomplete (see Proposition 4). Now we continue with consideration of more general cases, giving up the positivity/negativity of m, allowing complex values. Again, aiming at an efficient inversion implementation for a set of multipliers (for given Φ and m, and varying Ψ ), we reformulate the result from [41, Prop. 4.1]: Proposition 9 Let Φ be a frame for H and let λ := supn |mn − 1| < that the sequence Ψ satisfies the condition P1 with μ ∈ [0, a frame for H , Mm,Φ,Φ and M are invertible on H , and

AΦ BΦ .

(AΦ −λBΦ )2 ). (λ+1)2 BΦ

Assume

Then Ψ is

1 1 −1

h ≤ Mm,Φ,Φ h ≤

h , (λ + 1)BΦ AΦ − λBΦ 1 (λ + 1)(BΦ +



μBΦ )

h ≤ M −1 h ≤

1

h , √ AΦ − λBΦ − (λ + 1) μBΦ

A Frame Multiplier Survey

179

−1 Mm,Φ,Φ =

∞ 

−1 −1 [SΦ (SΦ − Mm,Φ,Φ )]k SΦ ,

(5)

k=0

 where the n-term error is bounded by

M

−1

=

∞ 

λBΦ AΦ

n+1

·

1 AΦ −λBΦ ,

and

−1 −1 [Mm,Φ,Φ (Mm,Φ,Φ − M)]k Mm,Φ,Φ ,

(6)

k=0

 where the n-term error is bounded by

n+1 √ (λ+1) μBΦ AΦ −λBΦ

·

1 √ . AΦ −λBΦ −(λ+1) μBΦ

In the special case when Φ and Ψ are dual frames, one has a statement concerning invertibility of the multiplier Mm,Φ,Ψ with the simpler formula (7) for the inverse operator in comparison to (6): Proposition 10 ([41, Prop. 4.4]) Let Φ be a frame for H and let Ψ be a dual frame of Φ. Assume that m is such that there is λ satisfying |mn − 1| ≤ λ < √B 1 B Φ Ψ for all n ∈ N. Then M is invertible, 1 1 , ,

h ≤ M −1 h ≤

h , ∀h ∈ H , 1 + λ BΦ BΦ d 1 − λ BΦ BΦ d M −1 =

∞  (I − M)k ,

(7)

k=0 √

n+1 λ BΦ BΨ ) and the n-term error is bounded by ( 1−λ√ . B B Φ

Ψ

Note that the bound for λ in the above proposition is sharp in the sense that one cannot claim validity of the conclusions using a bigger constant instead of √ 1 . Indeed, if the assumptions hold with supn |mn − 1| = λ = √B 1 B , then BΦ BΨ Φ Ψ the multiplier might not be invertible on H , consider, for example, M(1/n),(en ),(en ) which is not surjective. Next we extend Proposition 10 to approximate duals and provide implementation for the extended result. Recall that given a frame Φ for H , the sequence Ψ is called an approximate dual of Φ [18] if for some ε ∈ [0, 1) one has TΨ UΦ f −f ≤ ε f

for all f ∈ H (in this case we will use the notion ε-approximate dual of Φ). Proposition 11 Let Φ be a frame for H and let Ψ be an ε-approximate dual frame of Φ for some ε ∈ [0, 1). Assume that m is such that λ := supn |mn − 1| < √B1−εB Φ Ψ for all n ∈ N. Then M is invertible, 1 1 , ,

h ≤ M −1 h ≤

h , ∀h ∈ H , 1 + λ BΦ BΦ d + ε 1 − λ BΦ BΦ d − ε

180

D. T. Stoeva and P. Balazs

M −1 =

∞  (I − M)k ,

(8)

k=0 √

n+1 λ BΦ BΨ +ε) and the n-term error is bounded by ( 1−λ√ . B B −ε Φ

Ψ

Proof The case ε = 0 gives Proposition 10. For the more general case ε ∈ [0, 1), observe that for every f ∈ H , ,

Mf − f ≤ Mf − TΨ UΦ f + TΨ UΦ f − f ≤ (λ BΦ BΨ + ε) f . Applying [41, Prop. 2.2] (which is based on statements in [25] and [15]), one comes to the desired conclusions. For the cases when Φ and Ψ are equivalent frames, one can use the following sufficient conditions for invertibility and representations of the inverses: Proposition 12 ([41]) Let Φ be a frame for H , G : H → H be a bounded bijective operator and ψn = Gφn , ∀n, i.e., Φ and Ψ are equivalent frames. Let m be semi-normalized and satisfy one of the following three conditions: m is positive; m is negative; or supn |mn − 1| < AΦ /BΦ . Then Ψ is a frame for H , M is invertible, −1 −1 −1 −1 = (G−1 )∗ Mm,Φ,Φ , and Mm,Ψ,Φ = Mm,Φ,Φ G−1 . Mm,Φ,Ψ  of a frame Φ, it is clearly equivalent If one considers the canonical dual frame Φ to Φ and furthermore, it is the only dual frame of Φ which is equivalent to Φ [27]. In this case one can apply both Propositions 10 and 12. Representation of the Inverse as a Multiplier Using the Reciprocal Symbol and Appropriate Dual Frames of the Given Ones Some of the results above were aimed at representing the inverse of an invertible frame multiplier via Neumann-like series. Here our attention is on representation of the inverse as a frame multiplier of a specific type. The inverse of any invertible frame multiplier Mm,Φ,Ψ with non-zero elements of m can always be written using the reciprocal symbol 1/m, e.g., trivially, by any frame G and an −1 appropriate sequence related to a dual frame (gnd )n∈I of G (e.g., as Mm,Φ,Ψ = M1/m,(M −1 (mn gnd )),(gn ) ). Actually, any invertible operator V can be written as a multiplier in such a way (V = M1/m,(V (mn gnd )),(gn ) ). The focus here is on the −1 possibility for a representation of Mm,Φ,Ψ using the reciprocal symbol and specific dual frames of Φ and Ψ , more precisely, using

M1/m,Ψ d ,Φ d for some dual frames Φ d of Φ and Ψ d of Ψ . The motivation for the consideration of such representations comes from Proposition 4(a), which concerns Riesz multipliers with semi-normalized weights and representation using the reciprocal symbol and the canonical duals (the only dual

A Frame Multiplier Survey

181

frames in this case) of the given Riesz bases. The representation (2) is not limited to Riesz multipliers (as a simple example consider the case when Φ = Ψ is an overcomplete frame and m = (1)), and it is clearly not always valid. Naturally this leads to the investigation of cases where such inversion formula holds for overcomplete frames. Furthermore, for the cases of non-validity of the formula, it opens the question whether the canonical duals can be replaced with other dual frames. Theorem 1 ([11, 45]) Let Φ and Ψ be frames for H , and let the symbol m be such that mn = 0 for every n and the sequence mΦ is a frame for H . Assume that Mm,Φ,Ψ is invertible. Then the following statements hold: (i) There exists a unique sequence Ψ † in H so that −1 = M1/m,Ψ † ,Φ ad , ∀ a-pseudo-duals Φ ad of Φ, Mm,Φ,Ψ −1 (mn φn ))∞ and it is a dual frame of Ψ . Furthermore, Ψ † = (Mm,Φ,Ψ n=1 . (ii) If G = (gn )∞ is a sequence (resp. Bessel sequence) in H such that n=1 M1/m,Ψ † ,G is well-defined and −1 Mm,Φ,Ψ = M1/m,Ψ † ,G ,

then G must be an a-pseudo-dual (resp. dual frame) of Φ. Note that the uniqueness of Ψ † in the above theorem is even guaranteed from the −1 validity of Mm,Φ,Ψ = M1/m,Ψ † ,Φ d for all dual frames Φ d of Φ. Indeed, take Ψ † to be determined by the above theorem and assume that there is a sequence F such −1 that Mm,Φ,Ψ = M1/m,F,Φ d for all dual frames Φ d of Φ. Then M1/m,F −Ψ † ,Φ d for all dual frames Φ d of Φ, which by Stoeva and Balazs [45, Lemma 3.2] implies that F = Ψ †. Similar statements hold with respect to an appropriate dual frame of Φ: Theorem 2 ([11, 45]) Let Φ and Ψ be frames for H , and let the symbol m be such that mn = 0 for every n and the sequence mΨ is a frame for H . Assume that Mm,Φ,Ψ is invertible. Then the following statements hold: (i) There exists a unique sequence Φ † in H so that −1 = M1/m,Ψ sd ,Φ † , ∀ s-pseudo-duals Ψ sd of Ψ, Mm,Φ,Ψ −1 )∗ (mn ψn ))∞ and it is a dual frame of Φ. Furthermore, Φ † = ((Mm,Φ,Ψ n=1 . ∞ (ii) If F = (fn )n=1 is a sequence (resp. Bessel sequence) in H such that M1/m,F,Φ † is well-defined and −1 Mm,Φ,Ψ = M1/m,F,Φ † ,

182

D. T. Stoeva and P. Balazs

then F must be an s-pseudo-dual (resp. dual frame) of Ψ . Corollary 1 ([45]) Let Φ and Ψ be frames for H , and let the symbol m be such that mn = 0 for every n and m ∈ ∞ . Assume that Mm,Φ,Ψ is invertible. Then Theorems 1 and 2 apply. Below we consider the task of representing the inverse using the canonical dual frames of the given ones and the question when the special dual Φ † (resp. Ψ † ) coincides with the canonical dual of Φ (resp. Ψ ). Proposition 13 ([11, 45]) Let Φ and Ψ be frames for H and let m be seminormalized. Assume that Mm,Φ,Ψ be invertible. Then −1 ; = M1/m,Ψ,Φ ⇐ Ψ is equivalent to mΦ ⇔ Ψ † = Ψ Mm,Φ,Ψ −1  Mm,Φ,Ψ = M1/m,Ψ,Φ ⇐ Φ is equivalent to mΨ ⇔ Φ † = Φ.

(9) (10)

−1 Mm,Φ,Ψ = M1/m,Ψ,Φ  (Φ is equivalent to mΨ ) or (Ψ is equivalent to mΦ).

In the cases when the symbol m is a constant sequence, the one-way implications in the above proposition become equivalences, more precisely, the following holds: Proposition 14 ([11]) Let Φ and Ψ be frames for H . For symbols of the form m = (c, c, c, . . .), c = 0, the following equivalences hold: −1 Mm,Φ,Ψ is invertible and M(c),Φ,Ψ =M(1/c),Ψ,Φ ⇔ Ψ is equivalent to Φ

 ⇔ Mm,Φ,Ψ is invertible and Ψ †=Ψ  ⇔ Mm,Φ,Ψ is invertible and Φ †=Φ. Motivated by Theorems 1 and 2, it is natural to consider the question whether (9) and (10) from Proposition 13 hold under more general assumptions. The answer is affirmative: Proposition 15 Let Φ and Ψ be frames for H , and let the symbol m be such that mn = 0 for every n and the sequence mΦ be a frame for H . Then the following holds:  ⇔ Ψ is equivalent to mΦ Mm,Φ,Ψ is invertible and Ψ † =Ψ −1 ⇒ Mm,Φ,Ψ is invertible and Mm,Φ,Ψ = M1/m,Ψ,Φ.

, where Ψ † is determined based Proof First let Mm,Φ,Ψ be invertible and Ψ † = Ψ −1 on Theorem 1. Then using Theorem 1(i), one obtains Mm,Φ,Ψ = M1/m,Ψ,Φ and † −1 0 −1 −1 −1 furthermore ψn = SΨ (ψn ) = SΨ (ψn ) = SΨ Mm,Φ,Ψ (mn φn ), ∀n, leading to equivalence of the frames Ψ and mΦ.

A Frame Multiplier Survey

183

Conversely, let the frames Ψ and mΦ be equivalent. Then by Proposition 14, the multiplier M(1),mΦ,Ψ (= Mm,Φ,Ψ ) is invertible, and thus, by Theorem 1, a dual −1 (mn φn ))∞ frame Ψ † of Ψ is determined by Ψ † = (Mm,Φ,Ψ n=1 . This implies that the † dual frame Ψ of Ψ is at the same time equivalent to the frame Ψ , which by [27, Prop. 1.14] implies that Ψ † must be the canonical dual of Ψ . In a similar way as above, one can state a corresponding result involving Φ † : Proposition 16 Let Φ and Ψ be frames for H , and let the symbol m be such that mn = 0 for every n and the sequence mΨ is a frame for H . Then the following holds:  ⇔ Φ is equivalent to mΨ Mm,Φ,Ψ is invertible and Φ † =Φ −1 ⇒ Mm,Φ,Ψ is invertible and Mm,Φ,Ψ =M1/m,Ψ,Φ.

For examples, which illustrate statements from this section, see [11, 45].

4 Time-Frequency Multipliers In this section we focus on sequences with particular structure, namely on Gabor and wavelet sequences, which are very important for applications. Considering the topic of unconditional convergence, we apply results from Sect. 3.1 directly. For the topic of invertibility we go beyond the presented results in Sect. 3.2 and consider further questions motivated by the specific structure of the sequences.

4.1 On the Unconditional Convergence As a consequence of Proposition 1, for Gabor and wavelet systems, we have the following statements: Corollary 2 Let Mm,Φ,Ψ be a Gabor (resp. wavelet) multiplier. (c) If Mm,Φ,Ψ is furthermore a Bessel multiplier, then it is unconditionally convergent if and only if m ∈ ∞ . (d) If m is NBB, then Mm,Φ,Ψ is unconditionally convergent if and only if Mm,Φ,Ψ is a Bessel multiplier and m is semi-normalized. (e) If Φ is Bessel (resp. Φ is Bessel and m is NBB), then Mm,Φ,Ψ is unconditionally convergent if and only if mΨ is Bessel (resp. Ψ is Bessel). (f) If Mm,Φ,Ψ is furthermore a Riesz multiplier, then it is well-defined if and only if it is unconditionally convergent if and only if m ∈ ∞ .

4.2 On the Invertibility

For the particular case of Gabor and wavelet multipliers, one can apply the general invertibility results from Sect. 3.2. In addition, one can ask whether the dual frames Φ†, Ψ† induced by an invertible Gabor (resp. wavelet) frame multiplier also have a Gabor (resp. wavelet) structure, and whether one can write the inverse multiplier as a Gabor (resp. wavelet) multiplier. Here we consider the Gabor case.

Proposition 17 ([46]) Let Λ = {(ω, τ)} be a lattice in R^{2d}, i.e., a discrete subgroup of R^{2d} of the form AZ^{2d} for some invertible matrix A, and let Φ = (E_ω T_τ v)_{(ω,τ)∈Λ} and Ψ = (E_ω T_τ u)_{(ω,τ)∈Λ} be (Gabor) frames. Assume that the (Gabor) frame-type operator V = M_{(1),Φ,Ψ} is invertible. Then the following holds.
(a) The frames Φ† and Ψ† determined by Theorems 1 and 2 are Gabor frames.
(b) V^{-1} can be written as a Gabor frame-type operator as follows:
V^{-1} = M_{(1),(E_ω T_τ V^{-1} v)_{(ω,τ)∈Λ},Φ^d} = M_{(1),Ψ^d,(E_ω T_τ (V^{-1})^* u)_{(ω,τ)∈Λ}},
using any dual Gabor frames Φ^d and Ψ^d of Φ and Ψ, respectively (in particular, the canonical duals).
(c) Let (E_ω T_τ g)_{(ω,τ)∈Λ} be a Gabor frame for L^2(R). Then V^{-1} can be written as the Gabor frame-type operator M_{(1),(E_ω T_τ g)_{(ω,τ)∈Λ},(h̃_{ω,τ})_{(ω,τ)∈Λ}}, where h_{ω,τ} = E_ω T_τ V g, (ω, τ) ∈ Λ.

As an illustration of the above proposition, one may consider the following example with Gabor frame-type operators:

Example 1 ([46]) Let Ψ = (E_{kb} T_{na} u)_{k,n∈Z} be a frame for L^2(R) (for example, take u(x) to be the Gaussian e^{-x²} and a, b > 0 so that ab < 1). Furthermore, let F be an invertible operator on L^2(R) which commutes with all E_{kb} T_{na}, k, n ∈ Z (e.g., take F = E_{k_0/a} T_{n_0/b} for some k_0, n_0 ∈ Z), and let Φ = (E_{kb} T_{na} F u)_{k,n∈Z}. Then the Gabor frame-type operator V = M_{(1),Φ,Ψ} (= F S_Ψ) is invertible and furthermore:
(a) Φ† = (E_{kb} T_{na} (V^{-1})^* u)_{k,n∈Z} = Φ̃ and Ψ† = (E_{kb} T_{na} S_Ψ^{-1} u)_{k,n∈Z} = Ψ̃;
(b) V^{-1} = M_{(1),Ψ†,Φ^d} = M_{(1),Ψ̃,Φ^d} for any dual Gabor frame Φ^d of Φ. In particular, choosing the canonical dual Φ̃, one comes to the formula V^{-1} = M_{(1),Ψ̃,Φ̃}, whose validity is in correspondence with Proposition 14;

(c) applying Proposition 17(c) with g = u, we have V^{-1} = M_{(1),Ψ,(h̃_{k,n})_{k,n∈Z}}, where h_{k,n} = E_{kb} T_{na} V u; applying Proposition 17(c) with g = S_Ψ^{-1} u, we obtain V^{-1} = M_{(1),Ψ̃,Φ̃}.

The representation of the inverse of a Gabor frame-type operator as a Gabor frame-type operator has turned out to be related to commutation properties of the multiplier with the time-frequency shifts:

Proposition 18 ([11]) Let Λ = {(ω, τ)} be a lattice in R^{2d}, i.e., a discrete subgroup of R^{2d} of the form AZ^{2d} for some invertible matrix A, let g ∈ L^2(R^d), and let (E_ω T_τ g)_{(ω,τ)∈Λ} be a Gabor frame for L^2(R^d). Let V : L^2(R^d) → L^2(R^d) be a bounded bijective operator. Then the following statements are equivalent:
(a) For every (ω, τ) ∈ Λ, V E_ω T_τ g = E_ω T_τ V g.
(b) For every (ω, τ) ∈ Λ and every f ∈ L^2(R^d), V E_ω T_τ f = E_ω T_τ V f (i.e., V commutes with E_ω T_τ for every (ω, τ) ∈ Λ).
(c) V can be written as a Gabor frame multiplier with the constant symbol (1) and with respect to the lattice Λ, i.e., V is a Gabor frame-type operator with respect to the lattice Λ.
(d) V^{-1} can be written as a Gabor frame multiplier with the constant symbol (1) and with respect to the lattice Λ, i.e., V^{-1} is a Gabor frame-type operator with respect to the lattice Λ.

The above two Propositions 17 and 18 concern the cases when the symbol of an invertible Gabor frame multiplier is the constant sequence (1). They show that such multipliers commute with the corresponding time-frequency shifts and that the frames induced by these multipliers have Gabor structure. In the more general case where m is not necessarily a constant sequence, the commutation of the multiplier with the time-frequency shifts E_ω T_τ is no longer necessarily valid (see [46, Corol. 5.2 and Ex. 5.3]), but it can still happen that the frames (E_ω T_τ v)†_{(ω,τ)∈Λ} and (E_ω T_τ u)†_{(ω,τ)∈Λ} determined by Theorems 1 and 2 have Gabor structure (see [46, Ex. 5.6]). Below we consider cases in which the commutativity, together with Φ† (resp. Ψ†) being a Gabor frame, implies that m is a constant sequence:

Proposition 19 Let Λ = {(ω, τ)} be a lattice in R^{2d}, let Φ = (E_ω T_τ v)_{(ω,τ)∈Λ} and Ψ = (E_ω T_τ u)_{(ω,τ)∈Λ} be Gabor frames for L^2(R), and let m = (m_{ω,τ})_{(ω,τ)∈Λ} be such that m_{ω,τ} ≠ 0 for every (ω, τ) ∈ Λ. Assume that the Gabor frame multiplier M_{m,Φ,Ψ} is invertible on L^2(R) and commutes with E_ω T_τ for every (ω, τ) ∈ Λ. Then the following statements hold:
(a) If mΦ is a frame for L^2(R) and the frame Ψ† determined by Theorem 1 has Gabor structure, then m must be a constant sequence.
(b) If mΨ is a frame for L^2(R) and the frame Φ† determined by Theorem 1 has Gabor structure, then m must be a constant sequence.
(c) If Φ is a Riesz sequence, then m must be a constant sequence.

Proof (a) and (b) are [46, Cor. 5.5].
(c) Fix an arbitrary couple (ω', τ') ∈ Λ. Having in mind the general relation
M_{(m_{ω,τ})_{(ω,τ)∈Λ},Φ,Ψ} E_{ω'} T_{τ'} = E_{ω'} T_{τ'} M_{(m_{ω+ω',τ+τ'})_{(ω,τ)∈Λ},Φ,Ψ}
(see [46, Corol. 5.2]), and using the assumption that M_{m,Φ,Ψ} commutes with E_{ω'} T_{τ'} and the invertibility of E_{ω'} T_{τ'}, we get
M_{(m_{ω,τ})_{(ω,τ)∈Λ},Φ,Ψ} = M_{(m_{ω+ω',τ+τ'})_{(ω,τ)∈Λ},Φ,Ψ}.
The assumption that Φ is a Riesz sequence now leads to the conclusion that
(m_{ω,τ} − m_{ω+ω',τ+τ'}) ⟨f, E_ω T_τ u⟩ = 0, ∀(ω, τ) ∈ Λ, ∀f ∈ H.
Since u ≠ 0 (as otherwise M_{m,Φ,Ψ} would not be invertible), one can take f = E_ω T_τ u ≠ 0 above and conclude that m_{ω,τ} = m_{ω+ω',τ+τ'} for every (ω, τ) ∈ Λ. Then, for any fixed (ω, τ) ∈ Λ, we have m_{ω,τ} = m_{ω+ω',τ+τ'}, ∀(ω', τ') ∈ Λ, implying that m is a constant sequence.

Observe that, using the language of tensor products, in a similar way as in Proposition 19(c) one can show that (φ_λ ⊗ ψ_λ)_{λ∈Λ} being a Riesz sequence in the tensor product space H ⊗ H would lead to the conclusion that m must be a constant sequence. Notice that when m is not a constant sequence, it is still possible to have Φ† and Ψ† with Gabor structure (see [46, Ex. 5.6]), which motivates further investigation of this topic.

In this manuscript we have presented results focusing on Gabor frames. Note that some of the above results hold for a more general class of frames, containing the Gabor frames and coherent frames (the so-called pseudo-coherent frames), defined in [11] and further investigated in [46]. Note that wavelets are in general not pseudo-coherent frames, and one cannot state corresponding results like Propositions 17 and 18 for wavelet frames. The wavelet case is a topic of interest for further investigation of questions related to the structure of Φ† and Ψ† for wavelet multipliers. This might be related to the topic of localized frames (see, e.g., [8]), where wavelets also cannot be included.
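The commutation at the heart of Proposition 18 can be observed directly in the finite-dimensional Gabor setting (see, e.g., [32, 37] for finite Gabor theory). The following sketch is only an illustration in plain Matlab, not code from [11, 46]: the window g, the parameters L = 12, a = 3, M = 4, and the names T, E, S, W are our own choices. It builds the Gabor frame-type operator with constant symbol (1) on C^L and checks numerically that it commutes with all lattice time-frequency shifts; the frame property of the system is not needed for this check.

% Finite Gabor model on C^L: the Gabor frame-type operator with constant
% symbol (1) commutes with the lattice time-frequency shifts.
L = 12; a = 3; M = 4; b = L/M;             % lattice: time shifts n*a, frequency shifts m*b
g = exp(-((0:L-1)' - L/2).^2 / L);         % a sampled Gaussian-like window (column vector)
T = @(tau) circshift(eye(L), tau);         % circular translation by tau
E = @(om)  diag(exp(2i*pi*om*(0:L-1)'/L)); % modulation by om
S = zeros(L);                              % frame-type operator M_{(1),Phi,Phi}
for n = 0:(L/a - 1)
  for m = 0:(M - 1)
    h = E(m*b) * T(n*a) * g;               % Gabor atom E_{mb} T_{na} g
    S = S + h * h';                        % rank-one term <., h> h
  end
end
maxcomm = 0;                               % largest commutator norm over the lattice
for n = 0:(L/a - 1)
  for m = 0:(M - 1)
    W = E(m*b) * T(n*a);
    maxcomm = max(maxcomm, norm(S*W - W*S));
  end
end
fprintf('largest commutator norm over the lattice: %g\n', maxcomm);  % of the order of machine precision

This mirrors the implication (c) ⇒ (b) of Proposition 18 in a finite model: once an operator is a Gabor frame-type operator with respect to the lattice, it commutes with every lattice time-frequency shift.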

5 Implementation of Inversion of Frame Multipliers

For the inversion of multipliers M_{m,Φ,Ψ} and M_{m,Ψ,Φ} according to Propositions 8, 9, and 11, we provide Matlab-LTFAT codes (including demo files), which are available on the webpage https://www.kfs.oeaw.ac.at/InversionOfFrameMultipliers. The inversion is done using an iterative approach based on the formulas from the corresponding statements. Here we present the algorithms underlying these codes. Note that these implementations are only meant as a proof of concept; they could be made significantly more efficient, which will be the topic of future work but is beyond the scope of this manuscript. Implementations naturally require working in the finite-dimensional setting; for specifics of finite frame theory, and in particular of finite Gabor theory, we refer, e.g., to [4, 16, 32, 37, 39, 47].

As an implementation of Proposition 8, we provide three Matlab-LTFAT codes: one for computing M_{m,Φ,Ψ}^{-1} (see Algorithm 1), a second one for computing M_{m,Φ,Ψ}^{-1} f for a given f (see Algorithm 2), and a third one for computing M_{m,Φ,Ψ}^{-1} in the case when Φ and Ψ are Gabor frames (see Algorithm 3). For an illustration of the convergence rate of Algorithm 3, see Fig. 2: the horizontal axis shows the iteration number on a linear scale, and the vertical axis shows the absolute error (the norm of the difference between the n-th iterate and the true inverse) on a logarithmic scale. To produce this plot we used Gabor frames with transform length L = 4096, time shift a = 1024, number of channels M = 2048, a Hann window for Φ, a Gaussian window for Ψ, a symbol m with elements between 1/2 and 1, and parameter e = 10^{-8}, which guides the number of iterations via the predicted error bound (4). The size of the multiplier matrix in this test is 4096 × 4096. The inversion takes only a few iterations; it stops at n = 8 with error 6.1514 · 10^{-14}.

We also provide implementations of Proposition 9 (see Algorithm 4) and Proposition 11 (see Algorithm 5). Note that the codes allow multiple inversions of multipliers varying the frame Ψ. Although the initial step in the implementations of Propositions 8 and 9 involves the inversion of an appropriate frame operator and is computationally demanding, it depends only on Φ and m, and thus multiple inversions with different Ψ will, in total, be very fast. It is the aim of our next work to improve the above-mentioned algorithms by avoiding the inversion in the initial step. Notice that the iterative inversion according to Proposition 11 seems very promising for fast inversions, as it does not involve the inversion of a full matrix. In further work we will run tests comparing the speed of the above implementations with inversion algorithms in Matlab and LTFAT, with the goal of improving the numerical efficiency.
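All of the iterations below follow the same Neumann-series pattern. Schematically (the precise hypotheses, and the choice of the auxiliary operator A, are those of Propositions 8, 9, and 11 and are not restated here): if A is invertible and ‖A^{-1}(A − M_{m,Φ,Ψ})‖ < 1, then

M_{m,Φ,Ψ}^{-1} = (A − (A − M_{m,Φ,Ψ}))^{-1} = Σ_{k=0}^{∞} (A^{-1}(A − M_{m,Φ,Ψ}))^k A^{-1},

and the codes accumulate the partial sums of this series, with A = S_{(√m_n φ_n)} in Algorithms 1–3, A = S_Φ (and subsequently A = M_{m,Φ,Φ}) in Algorithm 4, and A equal to the identity in Algorithm 5.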

Algorithm 1 Iterative computation of M_{m,Φ,Ψ}^{-1} (M1inv):
[TPsi,M1,M2,M1inv,M2inv,n] = Prop8InvMultOp(c,r,TPhi,TG,m,e)
1: TG ← TG, m, TΦ
2: TΨ = TΦ + TG, M1 = TΦ ∗ diag(m) ∗ TΨ
3: Initialize M1inv = S_{(√m_n φ_n)}^{-1}
4: n ← e, m, TΦ, TG
5: if n > 0
6:   P1 = S_{(√m_n φ_n)}^{-1} ∗ (S_{(√m_n φ_n)} − M1)
7:   Initialize Q1 = S_{(√m_n φ_n)}^{-1}
8:   for i := 1 to n
9:     Q1 = P1 ∗ Q1
10:    M1inv = M1inv + Q1
11:   end
12: end
13: M_{m,Φ,Ψ}^{-1} ← M1inv
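A plain-Matlab sanity check of the iteration in Algorithm 1 on generic synthesis matrices may look as follows. This is only an illustrative sketch, not the LTFAT code from the webpage above: the matrices TPhi, TG and the symbol m are random, the frame operator S of (√m_n φ_n) is inverted explicitly, and the number of iterations n is fixed instead of being derived from the error bound (4).

% Sanity check of the iteration in Algorithm 1 with generic synthesis matrices.
rng(2);
d = 30; N = 45; n = 50;
TPhi = randn(d,N);                        % synthesis matrix of Phi (columns = frame vectors)
TG   = 0.03*randn(d,N);                   % small perturbation
TPsi = TPhi + TG;                         % step 2: Psi = Phi + G
m    = 0.5 + 0.5*rand(N,1);               % positive, semi-normalized symbol
M1   = TPhi*diag(m)*TPsi';                % step 2: multiplier M_{m,Phi,Psi}
S    = TPhi*diag(m)*TPhi';                % frame operator of (sqrt(m_n) phi_n)
Sinv = inv(S);
M1inv = Sinv;                             % step 3
P1 = Sinv*(S - M1);                       % step 6; convergence requires ||P1|| < 1
Q1 = Sinv;                                % step 7
for i = 1:n                               % steps 8-11
  Q1 = P1*Q1;
  M1inv = M1inv + Q1;
end
relerr = norm(M1inv - inv(M1))/norm(inv(M1));
fprintf('||P1|| = %.3f, relative error = %g\n', norm(P1), relerr);

The same loop structure, with S replaced by S_Φ or by the identity matrix, underlies Algorithms 4 and 5; and since Sinv depends only on Φ and m, it can be reused when multipliers with the same Φ and m but different Ψ are inverted, as discussed above.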

Algorithm 2 Iterative computation of M_{m,Φ,Ψ}^{-1} f (M1invf):
[TPsi,M1,M2,M1invf,M2invf,n] = Prop8InvMultf(c,r,TPhi,TG,m,f,e)
1: TG ← TG, m, TΦ
2: TΨ = TΦ + TG, M1 = TΦ ∗ diag(m) ∗ TΨ
3: Initialize M1invf = S_{(√m_n φ_n)}^{-1} f
4: n ← e, m, TΦ, TG
5: if n > 0
6:   P1 = S_{(√m_n φ_n)}^{-1} ∗ (S_{(√m_n φ_n)} − M1)
7:   Initialize q1 = M1invf
8:   for i := 1 to n
9:     q1 = P1 ∗ q1
10:    M1invf = M1invf + q1
11:   end
12: end
13: M_{m,Φ,Ψ}^{-1} f ← M1invf

Algorithm 3 Iterative computation of M_{m,Φ,Ψ}^{-1} (M1inv) for Gabor frames:
[TPhi,TPsi,M1,M2,M1inv,M2inv,n] = Prop8InvMultOpGabor(L,a,M,gPhi,gG,m,e)
1: Φ ← gPhi, a, M
2: TΦ ← Φ, L
3: G ← gG, a, M
4: TG ← G, L
5–17: These steps are like 1–13 of Algorithm 1.

Algorithm 4 Iterative computation of M_{m,Φ,Φ}^{-1} (M0inv) and M_{m,Φ,Ψ}^{-1} (M1inv):
[m,TPsi,M0,M1,M2,M0inv,n0,M1inv,M2inv,n] = Prop9InvMultOp(c,r,TPhi,TG,m,e)
1: m ← m, TΦ
2: M0 = TΦ ∗ diag(m) ∗ TΦ
3: Initialize M0inv = S_Φ^{-1}
4: n0 ← e, m, TΦ
5: if n0 > 0
6:   P0 = S_Φ^{-1} ∗ (S_Φ − M0)
7:   Initialize Q0 = S_Φ^{-1}
8:   for i := 1 to n0
9:     Q0 = P0 ∗ Q0
10:    M0inv = M0inv + Q0
11:   end
12: end
13: M_{m,Φ,Φ}^{-1} ← M0inv
14: TG ← TG, m, TΦ
15: TΨ = TΦ + TG, M1 = TΦ ∗ diag(m) ∗ TΨ
16: Initialize M1inv = M0inv
17: n ← e, m, TΦ, TG
18: if n > 0
19:   P1 = M0inv ∗ (M0 − M1)
20:   Initialize Q1 = M0inv
21:   for i := 1 to n
22:     Q1 = P1 ∗ Q1
23:    M1inv = M1inv + Q1
24:   end
25: end
26: M_{m,Φ,Ψ}^{-1} ← M1inv

Algorithm 5 Iterative computation of M_{m,Φ,Ψ}^{-1} (M1inv):
[m,TPsi,M1,M2,M1inv,M2inv,n] = Prop11InvMultOp(c,r,TPhi,TPsi,m,e)
1: TΨ ← TΨ, TΦ
2: m ← m, TΦ, TΨ
3: Initialize M1inv = eye(r)
4: n ← e, m, TΦ, TG
5: if n > 0
6:   P1 = eye(r) − M1
7:   Initialize Q1 = eye(r)
8:   for i := 1 to n
9:     Q1 = P1 ∗ Q1
10:    M1inv = M1inv + Q1
11:   end
12: end
13: M_{m,Φ,Ψ}^{-1} ← M1inv

Fig. 2 The convergence rate of Algorithm 3, using a base-10 logarithmic scale on the vertical axis and a linear scale on the horizontal axis. The absolute error in each iteration is plotted in red, and the convergence value predicted in Proposition 8 is plotted in blue

Acknowledgements The authors acknowledge support from the Austrian Science Fund (FWF) START—project FLAME (‘Frames and Linear Operators for Acoustical Modeling and Parameter Estimation’; Y 551-N13). They thank Z. Prusa for help with LTFAT.

References 1. S. T. Ali, J.-P. Antoine, and J.-P. Gazeau. Coherent States, Wavelets and Their Generalization. Theoretical and Mathematical Physics. Springer New York, 2014. Second Expanded Edition. 2. M. L. Arias and M. Pacheco. Bessel fusion multipliers. J. Math. Anal. Appl., 348(2):581–588, 2008. 3. P. Balazs. Basic definition and properties of Bessel multipliers. J. Math. Anal. Appl., 325(1):571–585, 2007. 4. P. Balazs. Frames and finite dimensionality: Frame transformation, classification and algorithms. Appl. Math. Sci., 2(41–44):2131–2144, 2008. 5. P. Balazs. Hilbert-Schmidt operators and frames - classification, best approximation by multipliers and algorithms. International Journal of Wavelets, Multiresolution and Information Processing, 6(2):315–330, March 2008. 6. P. Balazs, J.-P. Antoine, and A. Grybos. Weighted and controlled frames: Mutual relationship and first numerical properties. Int. J. Wavelets Multiresolut. Inf. Process., 8(1):109–132, 2010. 7. P. Balazs, D. Bayer, and A. Rahimi. Multipliers for continuous frames in Hilbert spaces. J. Phys. A: Math. Theor., 45(24):244023, 2012. 8. P. Balazs and K. Gröchenig. A guide to localized frames and applications to Galerkin-like representations of operators. In I. Pesenson, H. Mhaskar, A. Mayeli, Q. T. L. Gia, and

D.-X. Zhou, editors, Novel methods in harmonic analysis with applications to numerical analysis and data processing, Applied and Numerical Harmonic Analysis series (ANHA). Birkhauser/Springer, 2017. 9. P. Balazs, N. Holighaus, T. Necciari, and D. Stoeva. Frame theory for signal processing in psychoacoustics. In R. Balan, J. J. Benedetto, W. Czaja, and K. Okoudjou, editors, Excursions in Harmonic Analysis Vol. 5,, pages –. Springer, 2017. 10. P. Balazs, B. Laback, G. Eckel, and W. Deutsch. Time-frequency sparsity by removing perceptually irrelevant components using a simple model of simultaneous masking. IEEE Transactions on Audio, Speech, and Language Processing, 18(1):34–49, 2010. 11. P. Balazs and D. T. Stoeva. Representation of the inverse of a frame multiplier. J. Math. Anal. Appl., 422(2):981–994, 2015. 12. N. K. Bari. Biorthogonal systems and bases in Hilbert space. Uch. Zap. Mosk. Gos. Univ., 148:69–107, 1951. 13. J. Benedetto and G. Pfander. Frame expansions for Gabor multipliers. Applied and Computational Harmonic Analysis (ACHA)., 20(1):26–40, Jan. 2006. 14. P. G. Casazza. The art of frame theory. Taiwanese J. Math., 4(2):129–201, 2000. 15. P. G. Casazza and O. Christensen. Perturbation of operators and applications to frame theory. J. Fourier Anal. Appl., 3(5):543–557, 1997. 16. P. G. Casazza and G. Kutyniok, editors. Finite frames. Theory and applications. Boston, MA: Birkhäuser, 2013. 17. O. Christensen. An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic Analysis. Birkhäuser, Boston, 2016. Second Expanded Edition. 18. O. Christensen and R. Laugesen. Approximately dual frames in Hilbert spaces and applications to Gabor frames. Sampl. theory Signal Image Process., 9(1-2):77–89, 2010. 19. J. B. Conway. A Course in Functional Analysis. Graduate Texts in Mathematics. Springer New York, 2. edition, 1990. 20. N. Cotfas and J.-P. Gazeau. Finite tight frames and some applications. Journal of Physics A: Mathematical and Theoretical, 43(19):193001, 2010. 21. R. J. Duffin and A. C. Schaeffer. A class of nonharmonic Fourier series. Trans. Am. Math. Soc., 72:341–366, 1952. 22. H. G. Feichtinger and K. Nowak. A first survey of Gabor multipliers. Feichtinger, Hans G. (ed.) et al., Advances in Gabor analysis. Basel: Birkhäuser. Applied and Numerical Harmonic Analysis. 99–128 (2003)., 2003. 23. F. Futamura. Frame diagonalization of matrices. Linear Algebra Appl., 436(9):3201–3214, 2012. 24. J.-P. Gazeau. Coherent states in quantum physics. Wiley, Weinheim, 2009. 25. I. Gohberg, S. Goldberg, and M. A. Kaashoek. Basic Classes of Linear Operators. Basel: Birkhäuser, 2003. 26. K. Gröchenig. Representation and approximation of pseudodifferential operators by sums of Gabor multipliers. Appl. Anal., 90(3-4):385–401, 2010. 27. D. Han and D. R. Larson. Frames, Bases and Group Representations. Mem. Amer. Math. Soc., 697:1–94, 2000. 28. G. Matz and F. Hlawatsch. Linear Time-Frequency filters: On-line algorithms and applications, chapter 6 in ’Application in Time-Frequency Signal Processing’, pages 205–271. Electrical Engineering & Applied Signal Processing Series (Book 10). CRC Press, Boca Raton, 2002. 29. T. Necciari, N. Holighaus, P. Balazs, Z. Pr˚uša, P. Majdak, and O. Derrien. Audlet filter banks: A versatile analysis/synthesis framework using auditory frequency scales. Applied Sciences, 8(1), 2018. accepted. 30. T. Necciari, S. Savel, B. Laback, S. Meunier, P. Balazs, R. Kronland-Martinet, and S. Ystad. 
Auditory time-frequency masking for spectrally and temporally maximally-compact stimuli. PLOS ONE, 2016. 31. A. Olivero, B. Torresani, and R. Kronland-Martinet. A class of algorithms for time-frequency multiplier estimation. IEEE Transactions on Audio, Speech, and Language Processing, 21(8):1550–1559, 2013.

32. G. E. Pfander. Gabor frames in finite dimensions. In Finite frames. Theory and applications., pages 193–239. Boston, MA: Birkhäuser, 2013. 33. Z. Pr˚uša, P. L. Søndergaard, N. Holighaus, C. Wiesmeyr, and P. Balazs. The Large TimeFrequency Analysis Toolbox 2.0. In M. Aramaki, O. Derrien, R. Kronland-Martinet, and S. Ystad, editors, Sound, Music, and Motion, Lecture Notes in Computer Science, pages 419– 442. Springer International Publishing, 2014. 34. A. Rahimi. Multipliers of generalized frames in Hilbert spaces. Bulletin of Iranian Mathematical Society, 37(1):63–83, 2011. 35. A. Rahimi and P. Balazs. Multipliers for p-Bessel sequences in Banach spaces. Integral Equations Oper. Theory, 68(2):193–205, 2010. 36. R. Schatten. Norm Ideals of Completely Continuous Operators. Springer Berlin, 1960. 37. P. Soendergaard. Gabor frames by sampling and periodization. Adv. Comput. Math., 27(4):355–373, 2007. 38. P. Soendergaard, B. Torrésani, and P. Balazs. The linear time frequency analysis toolbox. International Journal of Wavelets, Multiresolution and Information Processing, 10(4):1250032, 2012. 39. P. L. Søndergaard. Efficient Algorithms for the Discrete Gabor Transform with a long FIR window. J. Fourier Anal. Appl., 18(3):456–470, 2012. 40. D. T. Stoeva. Characterization of atomic decompositions, Banach frames, Xd-frames, duals and synthesis-pseudo-duals, with application to Hilbert frame theory. arXiv:1108.6282. 41. D. T. Stoeva and P. Balazs. Invertibility of multipliers. Appl. Comput. Harmon. Anal., 33(2):292–299, 2012. 42. D. T. Stoeva and P. Balazs. Canonical forms of unconditionally convergent multipliers. J. Math. Anal. Appl., 399(1):252–259, 2013. 43. D. T. Stoeva and P. Balazs. Detailed characterization of conditions for the unconditional convergence and invertibility of multipliers. Sampl. Theory Signal Image Process., 12(2-3):87– 125, 2013. 44. D. T. Stoeva and P. Balazs. Riesz bases multipliers. In M. Cepedello Boiso, H. Hedenmalm, M. A. Kaashoek, A. Montes-Rodríguez, and S. Treil, editors, Concrete Operators, Spectral Theory, Operators in Harmonic Analysis and Approximation, volume 236 of Operator Theory: Advances and Applications, pages 475–482. Birkhäuser, Springer Basel, 2014. 45. D. T. Stoeva and P. Balazs. On the dual frame induced by an invertible frame multiplier. Sampling Theory in Signal and Image Processing, 15:119–130, 2016. 46. D. T. Stoeva and P. Balazs. Commutative properties of invertible multipliers in relation to representation of their inverses. In Sampling Theory and Applications (SampTA), 2017 International Conference on, pages 288–293. IEEE, 2017. 47. T. Strohmer. Numerical algorithms for discrete Gabor expansions. In Gabor analysis and algorithms. Theory and applications, pages 267–294, 453–488. Boston, MA: Birkhäuser, 1998. 48. D. Wang and G. J. Brown, editors. Computational Auditory Scene Analysis: Principles, Algorithms, and Applications. Wiley-IEEE Press, 2006. 49. K. Zhu. Operator Theory In Function Spaces. Marcel Dekker New York, 1990.

Applied and Numerical Harmonic Analysis (100 volumes)

1. A. I. Saichev and W. A. Woyczyñski: Distributions in the Physical and Engineering Sciences (ISBN: 978-0-8176-3924-2) 2. C. E. D’Attellis and E. M. Fernandez-Berdaguer: Wavelet Theory and Harmonic Analysis in Applied Sciences (ISBN: 978-0-8176-3953-2) 3. H. G. Feichtinger and T. Strohmer: Gabor Analysis and Algorithms (ISBN: 978-0-8176-3959-4) 4. R. Tolimieri and M. An: Time-Frequency Representations (ISBN: 978-0-81763918-1) 5. T. M. Peters and J. C. Williams: The Fourier Transform in Biomedical Engineering (ISBN: 978-0-8176-3941-9) 6. G. T. Herman: Geometry of Digital Spaces (ISBN: 978-0-8176-3897-9) 7. A. Teolis: Computational Signal Processing with Wavelets (ISBN: 978-0-81763909-9) 8. J. Ramanathan: Methods of Applied Fourier Analysis (ISBN: 978-0-81763963-1) 9. J. M. Cooper: Introduction to Partial Differential Equations with MATLAB (ISBN: 978-0-8176-3967-9) 10. Procházka, N. G. Kingsbury, P. J. Payner, and J. Uhlir: Signal Analysis and Prediction (ISBN: 978-0-8176-4042-2) 11. W. Bray and C. Stanojevic: Analysis of Divergence (ISBN: 978-1-4612-7467-4) 12. G. T. Herman and A. Kuba: Discrete Tomography (ISBN: 978-0-8176-4101-6) 13. K. Gröchenig: Foundations of Time-Frequency Analysis (ISBN: 978-0-81764022-4) 14. L. Debnath: Wavelet Transforms and Time-Frequency Signal Analysis (ISBN: 978-0-8176-4104-7) 15. J. J. Benedetto and P. J. S. G. Ferreira: Modern Sampling Theory (ISBN: 978-0-8176-4023-1) 16. D. F. Walnut: An Introduction to Wavelet Analysis (ISBN: 978-0-8176-3962-4)

17. A. Abbate, C. DeCusatis, and P. K. Das: Wavelets and Subbands (ISBN: 978-0-8176-4136-8) 18. O. Bratteli, P. Jorgensen, and B. Treadway: Wavelets Through a Looking Glass (ISBN: 978-0-8176-4280-80 19. H. G. Feichtinger and T. Strohmer: Advances in Gabor Analysis (ISBN: 978-0-8176-4239-6) 20. O. Christensen: An Introduction to Frames and Riesz Bases (ISBN: 978-08176-4295-2) 21. L. Debnath: Wavelets and Signal Processing (ISBN: 978-0-8176-4235-8) 22. G. Bi and Y. Zeng: Transforms and Fast Algorithms for Signal Analysis and Representations (ISBN: 978-0-8176-4279-2) 23. J. H. Davis: Methods of Applied Mathematics with a MATLAB Overview (ISBN: 978-0-8176-4331-7) 24. J. J. Benedetto and A. I. Zayed: Sampling, Wavelets, and Tomography (ISBN: 978-0-8176-4304-1) 25. E. Prestini: The Evolution of Applied Harmonic Analysis (ISBN: 978-0-81764125-2) 26. L. Brandolini, L. Colzani, A. Iosevich, and G. Travaglini: Fourier Analysis and Convexity (ISBN: 978-0-8176-3263-2) 27. W. Freeden and V. Michel: Multiscale Potential Theory (ISBN: 978-0-81764105-4) 28. O. Christensen and K. L. Christensen: Approximation Theory (ISBN: 978-08176-3600-5) 29. O. Calin and D.-C. Chang: Geometric Mechanics on Riemannian Manifolds (ISBN: 978-0-8176-4354-6) 30. J. A. Hogan: Time?Frequency and Time?Scale Methods (ISBN: 978-0-81764276-1) 31. C. Heil: Harmonic Analysis and Applications (ISBN: 978-0-8176-3778-1) 32. K. Borre, D. M. Akos, N. Bertelsen, P. Rinder, and S. H. Jensen: A SoftwareDefined GPS and Galileo Receiver (ISBN: 978-0-8176-4390-4) 33. T. Qian, M. I. Vai, and Y. Xu: Wavelet Analysis and Applications (ISBN: 978-3-7643-7777-9) 34. G. T. Herman and A. Kuba: Advances in Discrete Tomography and Its Applications (ISBN: 978-0-8176-3614-2) 35. M. C. Fu, R. A. Jarrow, J.-Y. Yen, and R. J. Elliott: Advances in Mathematical Finance (ISBN: 978-0-8176-4544-1) 36. O. Christensen: Frames and Bases (ISBN: 978-0-8176-4677-6) 37. P. E. T. Jorgensen, J. D. Merrill, and J. A. Packer: Representations, Wavelets, and Frames (ISBN: 978-0-8176-4682-0) 38. M. An, A. K. Brodzik, and R. Tolimieri: Ideal Sequence Design in TimeFrequency Space (ISBN: 978-0-8176-4737-7) 39. S. G. Krantz: Explorations in Harmonic Analysis (ISBN: 978-0-8176-4668-4) 40. B. Luong: Fourier Analysis on Finite Abelian Groups (ISBN: 978-0-81764915-9)

41. G. S. Chirikjian: Stochastic Models, Information Theory, and Lie Groups, Volume 1 (ISBN: 978-0-8176-4802-2) 42. C. Cabrelli and J. L. Torrea: Recent Developments in Real and Harmonic Analysis (ISBN: 978-0-8176-4531-1) 43. M. V. Wickerhauser: Mathematics for Multimedia (ISBN: 978-0-8176-4879-4) 44. B. Forster, P. Massopust, O. Christensen, K. Gröchenig, D. Labate, P. Vandergheynst, G. Weiss, and Y. Wiaux: Four Short Courses on Harmonic Analysis (ISBN: 978-0-8176-4890-9) 45. O. Christensen: Functions, Spaces, and Expansions (ISBN: 978-0-8176-49791) 46. J. Barral and S. Seuret: Recent Developments in Fractals and Related Fields (ISBN: 978-0-8176-4887-9) 47. O. Calin, D.-C. Chang, and K. Furutani, and C. Iwasaki: Heat Kernels for Elliptic and Sub-elliptic Operators (ISBN: 978-0-8176-4994-4) 48. C. Heil: A Basis Theory Primer (ISBN: 978-0-8176-4686-8) 49. J. R. Klauder: A Modern Approach to Functional Integration (ISBN: 978-08176-4790-2) 50. J. Cohen and A. I. Zayed: Wavelets and Multiscale Analysis (ISBN: 978-08176-8094-7) 51. D. Joyner and J.-L. Kim: Selected Unsolved Problems in Coding Theory (ISBN: 978-0-8176-8255-2) 52. G. S. Chirikjian: Stochastic Models, Information Theory, and Lie Groups, Volume 2 (ISBN: 978-0-8176-4943-2) 53. J. A. Hogan and J. D. Lakey: Duration and Bandwidth Limiting (ISBN: 978-08176-8306-1) 54. G. Kutyniok and D. Labate: Shearlets (ISBN: 978-0-8176-8315-3) 55. P. G. Casazza and P. Kutyniok: Finite Frames (ISBN: 978-0-8176-8372-6) 56. V. Michel: Lectures on Constructive Approximation (ISBN: 978-0-81768402-0) 57. D. Mitrea, I. Mitrea, M. Mitrea, and S. Monniaux: Groupoid Metrization Theory (ISBN: 978-0-8176-8396-2) 58. T. D. Andrews, R. Balan, J. J. Benedetto, W. Czaja, and K. A. Okoudjou: Excursions in Harmonic Analysis, Volume 1 (ISBN: 978-0-8176-8375-7) 59. T. D. Andrews, R. Balan, J. J. Benedetto, W. Czaja, and K. A. Okoudjou: Excursions in Harmonic Analysis, Volume 2 (ISBN: 978-0-8176-8378-8) 60. D. V. Cruz-Uribe and A. Fiorenza: Variable Lebesgue Spaces (ISBN: 978-30348-0547-6) 61. W. Freeden and M. Gutting: Special Functions of Mathematical (Geo-)Physics (ISBN: 978-3-0348-0562-9) 62. A. I. Saichev and W. A. Woyczyñski: Distributions in the Physical and Engineering Sciences, Volume 2: Linear and Nonlinear Dynamics of Continuous Media (ISBN: 978-0-8176-3942-6) 63. S. Foucart and H. Rauhut: A Mathematical Introduction to Compressive Sensing (ISBN: 978-0-8176-4947-0)

64. G. T. Herman and J. Frank: Computational Methods for Three-Dimensional Microscopy Reconstruction (ISBN: 978-1-4614-9520-8) 65. A. Paprotny and M. Thess: Realtime Data Mining: Self-Learning Techniques for Recommendation Engines (ISBN: 978-3-319-01320-6) 66. A. I. Zayed and G. Schmeisser: New Perspectives on Approximation and Sampling Theory: Festschrift in Honor of Paul Butzer’s 85th Birthday (ISBN: 978-3-319-08800-6) 67. R. Balan, M. Begue, J. Benedetto, W. Czaja, and K. A. Okoudjou: Excursions in Harmonic Analysis, Volume 3 (ISBN: 978-3-319-13229-7) 68. H. Boche, R. Calderbank, G. Kutyniok, and J. Vybiral: Compressed Sensing and its Applications (ISBN: 978-3-319-16041-2) 69. S. Dahlke, F. De Mari, P. Grohs, and D. Labate: Harmonic and Applied Analysis: From Groups to Signals (ISBN: 978-3-319-18862-1) 70. A. Aldroubi: New Trends in Applied Harmonic Analysis (ISBN: 978-3-31927871-1) 71. M. Ruzhansky: Methods of Fourier Analysis and Approximation Theory (ISBN: 978-3-319-27465-2) 72. G. Pfander: Sampling Theory, a Renaissance (ISBN: 978-3-319-19748-7) 73. R. Balan, M. Begue, J. Benedetto, W. Czaja, and K. A. Okoudjou: Excursions in Harmonic Analysis, Volume 4 (ISBN: 978-3-319-20187-0) 74. O. Christensen: An Introduction to Frames and Riesz Bases, Second Edition (ISBN: 978-3-319-25611-5) 75. E. Prestini: The Evolution of Applied Harmonic Analysis: Models of the Real World, Second Edition (ISBN: 978-1-4899-7987-2) 76. J. H. Davis: Methods of Applied Mathematics with a Software Overview, Second Edition (ISBN: 978-3-319-43369-1) 77. M. Gilman, E. M. Smith, and S. M. Tsynkov: Transionospheric Synthetic Aperture Imaging (ISBN: 978-3-319-52125-1) 78. S. Chanillo, B. Franchi, G. Lu, C. Perez, and E. T. Sawyer: Harmonic Analysis, Partial Differential Equations and Applications (ISBN: 978-3-319-52741-3) 79. R. Balan, J. Benedetto, W. Czaja, M. Dellatorre, and K. A. Okoudjou: Excursions in Harmonic Analysis, Volume 5 (ISBN: 978-3-319-54710-7) 80. I. Pesenson, Q. T. Le Gia, A. Mayeli, H. Mhaskar, and D. X. Zhou: Frames and Other Bases in Abstract and Function Spaces: Novel Methods in Harmonic Analysis, Volume 1 (ISBN: 978-3-319-55549-2) 81. I. Pesenson, Q. T. Le Gia, A. Mayeli, H. Mhaskar, and D. X. Zhou: Recent Applications of Harmonic Analysis to Function Spaces, Differential Equations, and Data Science: Novel Methods in Harmonic Analysis, Volume 2 (ISBN: 9783-319-55555-3) 82. F. Weisz: Convergence and Summability of Fourier Transforms and Hardy Spaces (ISBN: 978-3-319-56813-3) 83. C. Heil: Metrics, Norms, Inner Products, and Operator Theory (ISBN: 978-3319-65321-1) 84. S. Waldron: An Introduction to Finite Tight Frames: Theory and Applications. (ISBN: 978-0-8176-4814-5)

85. D. Joyner and C. G. Melles: Adventures in Graph Theory: A Bridge to Advanced Mathematics. (ISBN: 978-3-319-68381-2) 86. B. Han: Framelets and Wavelets: Algorithms, Analysis, and Applications (ISBN: 978-3-319-68529-8) 87. H. Boche, G. Caire, R. Calderbank, M. März, G. Kutyniok, and R. Mathar: Compressed Sensing and Its Applications (ISBN: 978-3-319-69801-4) 88. A. I. Saichev and W. A. Woyczyñski: Distributions in the Physical and Engineering Sciences, Volume 3: Random and Fractal Signals and Fields (ISBN: 978-3-319-92584-4) 89. G. Plonka, D. Potts, G. Steidl, and M. Tasche: Numerical Fourier Analysis (9783-030-04305-6) 90. K. Bredies and D. Lorenz: Mathematical Image Processing (ISBN: 978-3-03001457-5) 91. H. G. Feichtinger, P. Boggiatto, E. Cordero, M. de Gosson, F. Nicola, A. Oliaro, and A. Tabacco: Landscapes of Time-Frequency Analysis (ISBN: 978-3-03005209-6) 92. E. Liflyand: Functions of Bounded Variation and Their Fourier Transforms (978-3-030-04428-2) 93. R. Campos: The XFT Quadrature in Discrete Fourier Analysis (978-3-03013422-8) 94. M. Abell, E. Iacob, A. Stokolos, S. Taylor, S. Tikhonov, J. Zhu: Topics in Classical and Modern Analysis: In Memory of Yingkang Hu (978-3-03012276-8) 95. H. Boche, G. Caire, R. Calderbank, G. Kutyniok, R. Mathar, P. Petersen: Compressed Sensing and its Applications: Third International MATHEON Conference 2017 (978-3-319-73073-8) 96. A. Aldroubi, C. Cabrelli, S. Jaffard, U. Molter: New Trends in Applied Harmonic Analysis, Volume II: Harmonic Analysis, Geometric Measure Theory, and Applications (978-3-030-32352-3) 97. S. Dos Santos, M. Maslouhi, K. Okoudjou: Recent Advances in Mathematics and Technology: Proceedings of the First International Conference on Technology, Engineering, and Mathematics, Kenitra, Morocco, March 26-27, 2018 (978-3-030-35201-1) 98. Á. Bényi, K. Okoudjou: Modulation Spaces: With Applications to Pseudodifferential Operators and Nonlinear Schrödinger Equations (978-1-0716-0330-7) 99. P. Boggiato, M. Cappiello, E. Cordero, S. Coriasco, G. Garello, A. Oliaro, J. Seiler: Advances in Microlocal and Time-Frequency Analysis (978-3-03036137-2) 100. S. Casey, K. Okoudjou, M. Robinson, B. Sadler: Sampling: Theory and Applications (978-3-030-36290-4) For an up-to-date list of ANHA titles, please visit http://www.springer.com/ series/4968