Random Signals and Noise: A Mathematical Introduction (Instructor Solution Manual, Solutions) [1 ed.] 9780849375545, 9780849328862, 0849375541

Understanding the nature of random signals and noise is critically important for detecting signals and for reducing and

150 97 874KB

English Pages 61 Year 2006

Report DMCA / Copyright

DOWNLOAD PDF FILE

Recommend Papers

Random Signals and Noise: A Mathematical Introduction  (Instructor Solution Manual, Solutions) [1 ed.]
 9780849375545, 9780849328862, 0849375541

  • 0 0 0
  • Like this paper and download? You can publish your own PDF file online for free in a few minutes! Sign Up
File loading please wait...
Citation preview

SOLUTIONS MANUAL FOR Random Signals and Noise: A Mathematical Introduction

by Shlomo Engelberg

8861.indd 1

9/8/08 3:39:43 PM

8861.indd 2

9/8/08 3:39:43 PM

SOLUTIONS MANUAL FOR Random Signals and Noise: A Mathematical Introduction

by Shlomo Engelberg

Boca Raton London New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

8861.indd 3

9/8/08 3:39:43 PM

CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2009 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Printed in the United States of America on acid-free paper 10 9 8 7 6 5 4 3 2 1 International Standard Book Number-13: 978-0-8493-2886-2 (Softcover) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

8861.indd 4

9/8/08 3:39:44 PM

Solutions Manual

Summary: In this chapter we present complete solution to the exercises set in the text.

Chapter 1 1. Problem 1. As defined in the problem, A−B is composed of the elements in A that are not in B. Thus, the items to be noted are true. Making use of the properties of the probability function, we find that: P (A ∪ B) = P (A) + P (B − A) and that: P (B) = P (B − A) + P (A ∩ B). Combining the two results, we find that: P (A ∪ B) = P (A) + P (B) − P (A ∩ B). 2. Problem 2. (a) It is clear that fX (α) ≥ 0. Thus, we need only check that the integral of the PDF is equal to 1. We find that: Z ∞ Z ∞ fX (α) dα = 0.5 e−|α| dα −∞

−∞ 0

Z

α

= 0.5

Z

e dα + −∞

∞ −α

e

 dα

0

= 0.5(1 + 1) = 1. Thus fX (α) is indeed a PDF. (b) Because fX (α) is even, its expected value must be zero. Additionally, because α2 fX (α) is an even function of α, we find that: Z ∞ Z ∞ α2 fX (α) dα = 2 α2 fX (α) dα −∞

0

1

2

Random Signals and Noise: A Mathematical Introduction Z ∞ = α2 e−α dα 0 Z ∞ by parts = (−α2 e−α |∞ + 2 αe−α dα 0 0 Z ∞ by parts −α ∞ = 2(−αe |0 ) + 2 e−α dα 0

=

2.

2 Thus, E(X 2 ) = 2. As E(X) = 0, we find that σX = 2 and σX = √ 2.

3. Problem 3. The expected value of the random variable is: Z ∞ 2 2 1 √ E(X) = αe−(α−µ) /(2σ ) dα 2πσ −∞ Z ∞ 2 1 u=(α−µ)/σ √ (σu + µ)e−u /2 dα. = 2π −∞ 2

Clearly the piece of the integral associated with ue−u /2 is zero. The remaining integral is just µ times the integral of the PDF of the standard normal RV—and must be equal to µ as advertised. Now let us consider the variance of the RV—let us consider E((X −µ)2 ). We find that: Z ∞ 2 2 1 √ (α − µ)2 e−(α−µ) /(2σ ) dα E((X − µ)2 ) = 2πσ −∞ Z ∞ 2 u=(α−µ)/σ 2 1 u2 e−u /2 dα. = σ √ 2π −∞ As this is just σ 2 times the variance of a standard normal RV, we find that the variance here is σ 2 . 4. Problem 4. (a) Clearly (β − α)2 ≥ 0. Expanding this and rearranging it a bit we find that: β 2 ≥ 2αβ − α2 . (b) Because β 2 ≥ 2αβ − α2 and e−a is a decreasing function of a, the inequality must hold. (c) Z



α

e−β

2

/2

Z



dβ ≤ α

2

e−(2αβ−α

)/2



Solutions Manual

3 The PDF Function β

0 1/2 2

−2

1/2

α

2

−2 0

FIGURE 1.1 The PDF of Problem 6. α2 /2

Z



e−2αβ/2 dβ

=e

2

= eα

/2

α −αβ ∞

e −α α

−α2 α2 /2 e

=e

α −α2

=

e

α

The final step is to plug this into the formula given at the beginning of the problem statement. 5. Problem 5. If two random variables are independent, then their joint PDF must be the product of their marginal PDFs. That is, fXY (α, β) = fX (α)fY (β). The regions in which the joint PDF are non-zero must be the intersection of regions in which both marginal PDFs are non-zero. As these regions are strips in the α, β plains, their intersections are rectangles in that plain. (Note that for our purposes an infinite region all of whose borders are right angles to one another is also considered a rectangle.) 6. Problem 6. Consider the PDF given in Figure 1.1. It is the union of two rectangular regions. Thus, it is at least possible that the two random variables are independent. In order for the random variables to actually be independent it is necessary that fXY (α, β) = fX (α)fY (β) at all points.

4

Random Signals and Noise: A Mathematical Introduction Let us consider the point (−2.5, 2.5). It is clear that fX (−2.5) = 0.5 and fY (2.5) = 0.5. Thus if the random variable are independent, fXY (−2.5, 2.5) = 0.5 · 0.5. However, the actual value of the PDF at that point is 0. Thus, the random variables are not independent. Are the random variables correlated? Let us consider E(XY ). Because the probability is only non-zero when either both α and β are positive or both are negative, it is clear that: Z Z αβfXY (α, β) dαdβ > 0. It is also easy to see that the marginal PDFs of X and Y are even functions. Thus, E(X) = E(Y ) = 0. We find that E(XY ) 6= E(X)E(Y ) and the random variables are correlated. 7. Problem 7. Making use of the definition of the fact that the Xi are zero-mean, the fact that the Xi have a common variance, and the fact that the Xi are mutually uncorrelated, we find that: E(Q) = E(R) = E(S) = 0 and that: 2 2 2 σQ = σR = σS2 = E((X3 + X4 )2 ) = E(X32 + 2X3 X4 + X42 ) = 2σX .

Now let us calculate several important expectations. We find that: E(QR) = E((X1 + X2 )(X2 + X3 )) = E(X1 X2 + X1 X3 + X22 + X2 X3 ) 2 = 0 + 0 + σX +0 2 = σX , and that: E(QS) = E((X1 + X2 )(X3 + X4 )) = E(X1 X3 + X1 X4 + X2 X3 + X2 X4 ) = 0+0+0+0 = 0, and that: E(RS) = E((X2 + X3 )(X3 + X4 )) = E(X2 X3 + X2 X4 + X32 + X3 X4 ) 2 = 0 + 0 + σX +0 2 = σX .

Solutions Manual

5

Making use of the preceding calculations and the definition of the correlation coefficient we find that: ρQR = 1/2,

ρQS = 0,

ρRS = 1/2.

These results are quite reasonable. If the correlation coefficient really measures the degree of “sameness,” then as Q and R are “half the same” and Q and S have no overlap their correlation coefficients ought to be 1/2 and zero respectively. Similarly, as R and S overlap in half their constituent parts the degree of correlation ought to be 1/2. 8. Problem 8. (a) With fX (α) a pulse of unit height that stretches from −1/2 to 1/2, we find that: Z 1/2 ϕX (t) = ejαt dα −1/2

Z

1/2

=

Z

1/2

cos(αt)dα + j −1/2

sin(αt)dα −1/2

1 (sin(t/2) − sin(−t/2)) + 0 t 2 sin(t/2) . = t

=

(How can this argument be made more precise (correct) when t = 0?) (b) We must calculate ϕ0X (t)|t=0 and ϕ00X (t)|t=0 . The easiest way to do do this is to calculate the Taylor series associated with ϕX (t). We find that: 2(t/2 − (t/2)3 /3! + · · ·) t = 1 − t2 /24 + · · · = ϕX (0) + ϕ0X (0)t + ϕ00X (0)t2 /2 + · · · .

ϕX (t) =

By inspection, we find that ϕ0X (0) = 0 and ϕ00X (0) = −1/12. We find that: jE(X) = 0 −E(X 2 ) = −1/12. Thus E(X) = 0, and E(X 2 ) = 1/12. 9. Problem 9. Making use of the definition of the characteristic function, we find that: ϕX (0) = E(ejX0 ) = E(1) = 1.

6

Random Signals and Noise: A Mathematical Introduction 10. Problem 10. Simply note that: CiN =

N! N! N! N , and CN = . −i = (N − i)!i! (N − (N − i))!(N − i)! i!(N − i)!

11. Problem 11. The marginal PMF is defined as: X pX (α) ≡ pXY (α, β). β

Since the joint PMF is non-negative so is the marginal PMF. Let us consider the sum of the marginal PMF. We find that: X XX pX (α) = pXY (α, β) = 1. α

α

β

Thus, the marginal PMF inherits its legitimacy from the joint PMF. 12. Problem 12.

E(g1 (X)g2 (Y ))

=

X

g1 (α)g2 (β)pXY (α, β)

α,β independence

=

X

g1 (α)g2 (β)pX (α)pY (β)

α,β

=

X α

=

g1 (α)pX (α)

X

g2 (β)pY (β)

β

E(g1 (X))E(g2 (Y )).

13. Problem 13. (a) We will consider pX (α). The calculation for pY (β) is identical. X pX (−1) = pXY (−1, β) = 1/4 + 1/4 = 1/2. β

Similarly, we find that pX (1) = 1/2. Additionally pY (−1) = pY (1) = 1/2. (b) In this case we have a simple, finite, set of calculations. Let us consider pXY (−1, −1). We know that this is 1/4. Let us compare this with pX (−1)pY (−1). We find that this too is 1/4. Checking the other three possible values, we find that they too correspond. Thus the RVs are independent. (c) ϕX (t) = E(ejXt ) = ej(−1)t (1/2) + ej1t (1/2) = cos(t). Similarly, ϕY (t) = cos(t).

Solutions Manual

7

(d) As the RVs are independent: ϕX+Y (t) = ϕX (t)ϕY (t) = cos2 (t). (e) The expression given is the definition of: ϕZ (t) = E(ejZt ). (f) The possible values of X + Y are −2, 0, and 2. We find that the characteristic function of X + Y is: ϕX+Y (t) = cos2 (t) = (cos(2t) + 1)/2 = ej(−2)t /4 + ej0t /2 + ej2t /4. Thus, pZ (−2) = pZ (2) = 1/4 and pZ (0) = 1/2. 14. Problem 14. (a) The relative frequency of the measurement “1” is still one. Thus, it is reasonable to say that the probability of a “1” occurring should be one. (b) The random variable X may, at exceedingly infrequent intervals, take values other than 1. The random variable Y must always equal 1. 15. Problem 15. (a) The function is clearly always positive. Its integral over the entire real line is: ∞ Z ∞ Z 1 1 1 ∞ −1 dα = tan (α) = 1. fX (α) dα = 2 π −∞ 1 + α π −∞ −∞ Thus, the integral is one—as it must be. (b) We find that: 1 FX (α) = π

Z

α

−∞

1 1  −1 π dα = tan (α) + . 1 + α2 π 2

(c) Let us consider the integral that defines E(X) carefully. We find that: Z 1 ∞ 1 α dα π −∞ 1 + α2 Z R1 1 1 = lim α dα π R0 ,R1 →∞ −R0 1 + α2 1 R1 = lim ln(|α|)|−R 0 2π R0 ,R1 →∞ 1 = lim ln(R0 ) − ln(R1 ). 2π R0 ,R1 →∞

8

Random Signals and Noise: A Mathematical Introduction Clearly this is not well defined. It can be any number, positive, negative, or zero depending on how R0 and R1 go to infinity. (d) In fact, E(X 2 ) is worse than E(X). Looking at the integrand, we α2 2 find that it— 1+α 2 —tends to one at infinity. Thus E(X ) = ∞. The expectation of the square of the RV does not “exist” either. 16. Problem 16. Defining Q = X − E(X) and R = Y − E(Y ), we find that E(Q) = E(R) = 0. For this case, we already know that the result holds. That is, we know that: −σQ σR ≤ E(QR) ≤ σQ σR . Simply applying the definition of Q and R we find that the result is proved.

Chapter 2 1. Problem 1. Let us consider E((X(t) + X(t + τ0 ))2 ). We find that: E((X(t) + X(t + τ0 ))2 ) = E(X 2 (t)) + 2E(X(t)X(t + τ0 )) + E(X 2 (t + τ0 )) = RXX (0) + 2RXX (τ0 ) + RXX (0) = 0. As we have seen, if the expectation of the square of a random variable is zero, the random variable is zero with probability one. Thus, we find that X(t) + X(t + τ0 ) = 0 with probability one. This is what we wanted to prove. 2. Problem 2. (a) This function could conceivably be an autocorrelation function as it is even and has its unique maximum at τ = 0. (b) As sin(0) = 0 and sin(π/2) = 1 we find that RXX (π/2) > RXX (0) Thus, this function cannot be an autocorrelation function. (c) The function cos(τ ) could be an autocorrelation function. It is even, it has its maxima at τ = 2nπ, n = . . . , −1, 0, 1, . . . and its absolute value is never greater than it value at zero. (Note: As cos(τ ) is periodic, it would have to be the autocorrelation function of a periodic function.) (d) As this function is always positive, takes its maximum value at τ = 0, and is even, it could be an autocorrelation function.

Solutions Manual

9

3. Problem 3. We have proved that if an autocorrelation function attains its maximum value at a point τ0 6= 0, then the autocorrelation function must be periodic with period τ0 . The function under consideration attains its maximum at an infinite number of points other than τ = 0. For example, it achieves its maximum at τ0 = 1/2. As the function is not periodic with period 1/2, the function cannot be the autocorrelation of any stochastic process. 4. Problem 4. Let us calculate the two quantities when t1 = t2 = 0. We find that: E(N 2 (0)) = E(Y 2 ) = 1. Now let us consider N 2 (t). We find that: N 2 (t) = X 2 sin2 (2πt) + 2XY sin(2πt) cos(2πt) + Y 2 cos2 (2πt). Integrating this expression from zero to one leaves use with (X 2 +Y 2 )/2. This is not generally equal to one. We have just shown the the statistical average over all the possible values of N 2 (0) is not generally equal to the time average of N 2 (t) over one period. Thus the stochastic process is not ergodic. (Note that for a periodic stochastic process it is sufficient to consider the average over one period. The average over the entire real line is equal to the average over a single period.) 5. Problem 5. Let us perform the relevant calculations. Note that fΦ (α) = 1, 0 ≤ α ≤ 1, and it equals zero elsewhere. We start by considering the expected value of N (t). Z

1

sin(2π(t − α) dα = 0.

E(N (t)) = 0

Clearly this is equal to: Z

1

Z

0

1

sin(2π(t − Φ)) dt = 0.

N (t) dt = 0

This takes care of the first part of the problem. Let us consider the autocorrelation function. We find that: Z 1 E(N (t1 )N (t2 )) = sin(2π(t1 − α) sin(2π(t2 − α)) dα 0

10

Random Signals and Noise: A Mathematical Introduction Z 1 1 = (cos(2π(t1 − t2 )) − cos(2π(t1 + t2 − 2α))) dα 2 0 Z 1 1 = cos(2π(t1 − t2 )) dα 2 0 1 = cos(2π(t1 − t2 )). 2 (We have made use of the fact that the average of the cosine function over any whole number of periods is zero.) Considering the integral, we find that: Z 1 Z 1 N (t + t1 )N (t + t2 ) dt = (sin(2π(t + t1 − Φ)) sin(2π(t + t2 − Φ)) dt 0

0

Z 1 1 = (cos(2π(t1 − t2 )) − cos(2π(2t + t1 + t2 − 2Φ))) dt 2 0 1 = cos(2π(t1 − t2 )). 2 Thus the autocorrelation function can also be calculated by calculating the appropriate average over one period. This is a kind of ergodicity result.

Chapter 3 1. Problem 1. The sum of the random variable is just N times the average of the random variables. Suppose that Y = N X where X and Y are random variables. Then, E(Y ) = N E(X) and: 2 σY2 = E((Y −E(Y ))2 ) = E((N X−N E(X))2 ) = N 2 E((X−E(X))2 ) = N 2 σX .

Thus, we find that the expected value of the sum of N IID RVs is N E(Xi ) and the variance is: 2 2 σY2 = N 2 (σX /N ) = N σX .

2. Problem 2. Let Xi equal 1 if one scores a basket on the ith attempt, and let it equal 0 if one’s ith attempt is unsuccessful. Let Y = X1 + · · · + X100 . Y is the total number of times that one was successful in hitting the bullseye. Clearly E(X) = 1 · 0.7 + 0 · 0.3 = 0.7. Additionally as Xi is always either 1 or 0, Xi2 = Xi and E(Xi2 ) = E(X √i ) = 0.7. We find that 2 2 2 σX = E(X ) − E (X ) = 0.21. Thus σ = 0.21. i Xi i i

Solutions Manual

11

As √ we have seen, E(Y ) = 100 · E(Xi ) = 70. Also, σXi = 21. Making use of Chebyshev’s inequality we find that:



100 · σXi =

P (60 < Y < 80) = P (|Y − 70| < 10) = 1 − P (|Y − 70| ≥ 10) = 1 − P (|Y − E(Y )| ≥ (10/σY )σY ) σ2 ≥ 1 − Y2 10 = 1 − 0.21 = 0.79. 3. Problem 3. Let Xi equal 1 if one hits a bullseye on the ith attempt, and let it equal 0 if one’s ith attempt is unsuccessful. Let Y = X1 + · · · + X100 . Y is the total number of times that one was successful in hitting the bullseye. Clearly E(X) = 1 · 1/2 + 0 · 1/2 = 1/2. Additionally as Xi is always either 1 or 0, Xi2 = Xi and E(Xi2 ) = E(Xi ) = 1/2. We find that 2 σX = E(Xi2 ) − E 2 (Xi ) = 1/4. Thus σXi = 1/2. i √ As we have seen, E(Y ) = 100 · E(Xi ) = 50. Also, σY = 100 · σXi = 5. Making use of Chebyshev’s inequality we find that: P (Y < 40 or Y > 60) = P (|Y − 50| > 10) = P (|Y − E(Y )| > 2σY ) 1 1 ≤ 2 = . 2 4 In fact we could have gotten a slightly better estimate. Note that because Y always increases by a whole number, saying Y must be less than 40 is the same as saying that it must be less than 39 +  for any small . Similarly Y must actually be greater than 61 − . Thus, P (Y < 40 or Y > 60) = P (Y < 39 +  or Y > 61 − ) = P (|Y − 50| > 11 − ) ≤ P (|Y − 50| ≥ 11 − ) = P (|Y − E(Y )| ≥ (2.2 − 0.2)σY ) 1 1 ≤ = . 2 (2.2 − 0.2) 4.84 − 0.44 + 0.042 As this is true for all  sufficiently small, taking the limit as  → 0, we find that: 1 P (Y < 40 or Y > 60) ≤ . 4.84

12

Random Signals and Noise: A Mathematical Introduction 4. Problem 4. As fX (α) is even: Z



E(X) =

αfX (α) dα = 0. −∞

2 Thus, σX = E(X 2 ).

Now let us consider the problem at hand: Z ∞ P (X ≥ kσX ) = fX (α) dα kσX Z evenness 1 = fX (α) dα 2 |α|≥kσX  2 Z 1 α ≤ fX (α) dα 2 |α|≥kσX kσX Z ∞ 1 ≤ α2 fX (α) dα 2 2k 2 σX −∞ 1 2 = 2 E(X ) 2 2k σX E(X)=0 1 = . 2k 2 5. Problem 5. Consider the sum N −1 X

|RXX (i)|.

i=−(N −1)

We find that: N −1 X i=−(N −1)

|RXX (i)| ≤

N −1 X

D p

i=−(N −1)

= D+2

N −1 X

|i + 1|



i=1 N −1

Z ≤ D+2

0

D i+1 √

D dx x+1

N −1 √ = D + 4 x + 1 0 √ = D + 4( N − 1). √ We√see that the sum grows like N and thus the variance decreases like 1/ N . We conclude that: P (|Y | ≥ ) = P (|Y | ≥ (/σY )σY ))

Solutions Manual

13 σY2 2 √ D + 4( N − 1) ≤ N 2 ≤

Thus, if N is large enough the probability tends to zero. 6. Problem 6. (a) We find that: E(Yi ) = E(c + Ni ) = c + 0 = c σY2 i = E((Yi − E(Yi ))2 ) = E(Ni2 ) = 1. (b) The addition of a constant does not make a set of variables dependent if they were independent. More precisely, if we denote the common CDF of the noise by FN (α), then we find that: FYi (α) = P (Yi ≤ α) = P (Ni +c ≤ α) = P (Ni ≤ α−c) = FN (α−c). Thus the Yi are identically distributed. To show that the random variables are independent, we consider the joint CDF of Yi and Yj when i 6= j. We find that: FYi Yj (α, β) = F(Ni +c)(Nj +c) (α, β) = P (Ni + c ≤ α, Nj + c ≤ β) = FNi Nj (α − c, β − c) = FNi (α − c)FNj (β − c) = FYi (α)FYj (α). Thus, the Yi are independent as well. (c) (E(Y1 ) + · · · E(YN ))/N = c 1 2 σZ = E((Z − E(Z))2 ) = 2 E((Y1 − c + · · · + YN − c)2 ) N 1 = E((N1 + · · · + NN )2 ) N2 2 σN independence = N 1 = . N

E(Z)

=

2 (d) If c = 1 and N = 100, then E(Z) = 1 and σZ = 0.01. E(Y ) is also 2 equal to 1, but σY = 1. From Chebyshev’s inequality we find that:

P (|Z − 1| ≥ 0.3) = P (|Z − 1| ≥ 3 · σZ ) ≤ 1/9.

14

Random Signals and Noise: A Mathematical Introduction (e) Using Chebyshev’s inequality we find that: P (|Yi − 1| ≥ 0.3) = P (|Yi − 1| ≥ 0.3 · σY ) ≤

1 = 11.1. 0.09

This is certainly true. In fact, we can say that the probability is less than or equal to one. We find that the chances of the averaged value being more than 0.3 from the transmitted value are small, while the chances of any particular sample being distant are not necessarily small at all. 7. Problem 7. (a) As the noise is zero-mean, e0 = 0. (b) As the noise is zero-mean, e1 = 1 · 1 + 2 · 2 + (−1) · (−1) + · · · where the pattern repeats M times. Thus, e0 = 6M . (c) The condition that we area asked to check “translates” to “is Y > 3M ?” As we are told to use Chebyshev’s inequality, we are going to need to know σY . For this we will need to consider Y − E(Y ). It is easy to see that whether the signal was sent or not, Y − E(Y ) = N1 + 2N2 − N3 + N4 + · · ·. We find that: E((Y − E(Y ))2 ) = E((N1 + 2N2 + · · · − N3M )2 ) = E(N12 ) + 22 E(N22 ) + (−1)(−1)E(N32 ) + · · · . √ Thus, σY2 = 12M and σY = 2 3M . i. If the signal was not transmitted, then we are trying to determine P (Y > 3M ). As e0 = 0, this is equivalent to considering P (Y − E(Y ) > 3M ). We find that: P (Y − E(Y ) > 3M ) ≤ P (|Y − E(Y )| ≥ 3M ) √ = P (|Y − E(Y )| ≥ ( 3M /2)σY ) 1 ≤ 3M/4 4 = . 3M ii. Proceeding in a fashion similar to that of the previous section, one finds that the estimate of the probability of error is the same when the signal is sent. 8. Problem 8. (a) In the example of §3.3 the RV measures the number times one of two possible events occur. This is precisely the case that leads to a binomially distributed random variable.

Solutions Manual

15

r (b) Giving MATLAB the command:

probability = sum(binopdf([9700:9900],10000,0.98)) causes MATLAB to evaluate the probability. The answer MATLAB gives is 0.99999999999135. (c) The answer MATLAB gives tells us that the chances of not being between 9700 and 9900 are much smaller than Chebyshev’s inequality indicated.

Chapter 4 1. Problem 1. Making use of the the fact that the situation here is identical to that of Problem 2 of Chapter 3, we solve this problem using the notation and some of the calculations we made there. We find that we are interested in the probability: √ 1 − P (|Y − E(Y )| > (10/ 21)σY ) = 1 − P (|Y − E(Y )| > 2.18σY ). Making use of the central limit theorem we find that the probability is approximately 1 − 0.0292 ≈ 97%. Note that in Chatper 3, using Chebyshev’s inequality, we found that the probability was greater than or equal to 79%. The result here is more “accurate” provided that it was reasonable to use the central limit theorem. 2. Problem 2. (a) N = 100, and p = 0.7. (b) Using the command: sum(binopdf([61:79],100,0.7)) we find that the probability is actually 0.9625. We see that the central limit theorem is a little bit too “optimistic.” 3. Problem 3. Let Xi be equal to one if one scores on the ith shot and let it be equal to zero if one does not. As the probability of scoring is 0.7, E(Xi ) = 0.3 · 0 + 0.7 · 1 = 0.7. As Xi2 = Xi , we find that E(Xi2 ) = 0.7 and 2 σX = 0.7 − 0.49 = 0.21. i

16

Random Signals and Noise: A Mathematical Introduction Let Y = X1 + · · · + X100 . As the Xi are IID, we find that E(Y ) = 2 100 · E(Xi ) = 70 and σY2 = 100 · σX = 21. According to the central i limit theorem, Y is (approximately) normally distributed. Putting everything together, we find that Y is approximately N (70, 21). Let us now approximate the probability that Y is between 50 and 70. As always, we will try to work the problem from one involving a non-standard normal distribution to one involving a standard normal distribution. We find that: P (50 ≤ Y ≤ 70) = P (−20 ≤ Y − 70 ≤ 0)   Y − 70 = P −4.36 ≤ √ ≤0 21 = 0.5 − 6.5 × 10−6 ≈ 0.5. Thus the probability is very nearly one half. It would be hard to use Chebyshev’s inequality to evaluate this probability because the region we are asked to evaluate is not symmetric with respect to the expected value. 4. Problem 4. As the situation in this problem is the same as that considered in Problem 3 of Chapter 3, we make use of the results and the notation from the solution of that problem. We are interested in the probability: P (|Y − E(Y )| > 2σY ). According to the central limit theorem, this is (approximately) 0.0456 or 5%. 5. Problem 5. Let:

 Xi =

1 bullseye . 0 otherwise

Clearly, E(Xi ) = 0 · 1/2 + 1 · 1/2 = 1/2. Additionally, as Xi2 = Xi , 2 we find that E(Xi2 ) = 1/2. Thus, σX = 1/4 and σXi = 1/2. Let i √ Y = X1 +· · · X100 . Then E(Y ) = 100·1/2 = 50 and σY = 100σXi = 5. The central limit theorem states that the sum of many IID random variables is approximately normal. Thus, Y is approximately normal with a standard deviation of 5 and a mean of 50. We are interested in the probability: P (15 ≤ Y ≤ 20). Restating that in terms of a random variable normalized to have zero mean and unit variance, we find that:     15 − 50 Y − 50 20 − 50 Y − 50 P ≤ ≤ = P −7 ≤ ≤ −6 . 5 5 5 5

Solutions Manual

17

It is easy to check that (Y −50)/5 has mean zero and standard deviation 1. That is it is approximately a standard normal variable. Using MATLAB (see Problem 6 of this chapter for instructions), one finds that the probability is approximately 9.8531 × 10−10 . This is a tiny number and it would not be reasonable to rely on it. What one can say is that the probability is very small. It would be hard to use Chebyshev’s inequality to solve this problem because the region of interest is not symmetric with respect to the expected value of Y . 6. Problem 6. (a) If the signal was not transmitted, then as the expected value of the noise is zero, we find that e0 = 0. (b) If the signal was transmitted, then: e1 = M (1 · 1 + 2 · 2 + (−1) · (−1)) = 6M. The contribution of the noise in either case is: 1·(N1 +N4 +· · ·)+2·(N2 +N5 +· · ·)−1·(N3 +N6 +· · ·) ≡ 1·N1 +2·N2 −1·N3 . As the random variables that make up N1 , N2 , and N3 are independent, so are N1 , N2 , and N3 . As each of the three random variables are sums of Ni , we find that: N1,2,3 → N (0, 2M ). As multiplication by a constant multiplies expected values and multiplies the variance by the square of the constant, we find that: 1 · N1 → N (0, 2M ) 2 · N1 → N (0, 8M ) (−1) · N1 → N (0, 2M ). As the sum of independent Gaussian random variables is again Gaussian (see §1.11), we find that: 1 · N1 + 2 · N2 − 1 · N3 → N (0, 12M ). When no signal was transmitted we find that: Y → N (0, 12M ). (c) When the signal was transmitted we find that the noise is added to e1 . This shifts the expected value by e1 . Thus, we find that: Y → N (6M, 12M ).

18

Random Signals and Noise: A Mathematical Introduction (d) The criterion given translates into checking to see if Y > 3M . i. If the signal was not transmitted, then the probability that there was an error is:   3M σY P (Y > 3M ) = P Y − E(Y ) > √ 12M ! √ 3M σY = P Y − E(Y ) > 2 ! √ 3M = 1−A . 2 ii. If the signal was transmitted, then the probability that there was an error is: P (Y ≤ 3M ) = P (Y − E(Y ) ≤ −3M )   3M = P Y − E(Y ) ≤ − √ σY 12M ! √ 3M σY = P Y − E(Y ) ≤ − 2 ! √ 3M . =A − 2 7. Problem 7. (a) As the time interval is one second and as E(Q) = µτ , we find that µ = 100. As the variance of Q is equal to its mean, we find that 2 σQ = 100 and σQ = 10. When asked to estimate P (|Q − E(Q)| < 10) we consider 1 − P (|Q − E(Q)| ≥ 10). According to Chebyshev’s inequality this is greater than or equal to zero! This does not help us at all. (b) Using the MATLAB commands poisspdf and sum we calculate the exact value of the probability. It is: >> sum(poisspdf([91:109],100)) ans = 0.6581 8. Problem 8. Summing the series, we find that: ∞ X M =0

pY (M )

=

∞ X (µτ )M −µτ e M!

M =0

Solutions Manual

19

% File: pl_dist.m % This program plots the values of the Poisson and Gaussian % Distributions on the same set of axes. The file Poisson.m % is a short file that calcuates the probability that a variable % with a Poisson distribution will take the value M if mu and % tau are given by the second and third arguments of the % function respectively. for M = 0:200 x(M+1) = M; poi(M+1) = poisson(M,100,1); gau(M+1) = exp(-(M - 100)^2/(2 * 100)) / sqrt(2 * pi * 100); end; plot(x,poi,’r-’,x,gau,’b--’) % The unbroken line is the Poisson distribution. The dashed % line is the Gaussian distribution. FIGURE 1.2 The MATLAB m-file. e−µτ

=

∞ X (µτ )M M!

M =0 definition

= =

−µτ µτ

e 1.

e

9. Problem 9. (a) To answer this problem we made use of the MATLAB m-file of Figure 1.2. The result is the plot that appears in Figure 1.3 (b) Similar to the previous section. (c) Similar to the previous section. (d) We find that under many circumstances the Poisson distribution is well approximated by the Gaussian distribution. 10. Problem 10. We find that: E(Y ) = =

∞ X M =0 ∞ X M =1

M

(µτ )M −µτ e M!

(µτ )M −µτ e (M − 1)!

= µτ e−µτ

∞ X (µτ )M −1 (M − 1)!

M =1

20

Random Signals and Noise: A Mathematical Introduction

A Comparison of the Guassian and Poisson Distributions for µ = σ2 = 100

0.04

0.035

0.03

0.025

0.02

0.015

0.01

0.005

0

0

20

40

60

80

100

120

FIGURE 1.3 A Comparison of the Poisson and Gaussian Distributions.

140

160

180

200

Solutions Manual

21 = µτ e−µτ

∞ X (µτ )N N!

N =0

= µτ. Furthermore: E(Y 2 ) = = = =

∞ X M =0 ∞ X M =1 ∞ X M =1 ∞ X M =2

M2 M

(µτ )M −µτ e M!

(µτ )M −µτ e (M − 1)!

(M − 1 + 1)

(µτ )M −µτ e (M − 1)!

∞ X (µτ )M −µτ (µτ )M −µτ e + e (M − 2)! (M − 1)! M =1

= (µτ )2 e−µτ

∞ X (µτ )M −2 + µτ (M − 2)!

M =2 2

= (µτ ) + µτ. Thus, we find that: σY2 = E(Y 2 ) − E 2 (Y ) = µτ.

Chapter 5 1. Problem 1. We have already seen that the simple average is the least squares estimate of the constant. Thus the estimate is: −0.9 + 1 + 0.65 + 2.0 + 3.0 − 1.1 + 1.2 7 One way to calculate this is to use MATLAB. Giving MATLAB the command: mean([-0.9,1,0.65,2.0,3.0,-1.1,1.2]) we find that the average is 0.8357. 2. Problem 2. We would like to “solve” the equations: A · 0 + B = 0.1

22

Random Signals and Noise: A Mathematical Introduction A · 2 + B = 3.7 A · 4 + B = 6.1 A · 6 + B = 10.1. That is, we would like to “solve” the equations:     01   0.1 2 1 A  3.7       4 1  B =  6.1  . 61 10.1 As this system is overdetermined, we make use of (5.4) and we find that the best estimate for A and B is:  T  −1  T   01 01 01 0.1    2 1   2 1   2 1   3.7  A         =  4 1   4 1   4 1   6.1  B 61 61 61 10.1 Plugging in the values, we find that the best estimate is: A = 1.62,

B = 0.14.

3. Problem 3. We must solve the set of equations: ~ x

A z }| { z }| {     a 5.2 231   b = . 151 0.7 c

Making use of (5.8) we find that the smallest solution of our problem satisfies: ~x = AT (AAT )−1~b. With the help of MATLAB, we find that:     a 3.1815 ~x =  b  =  −0.6593  c 0.8148 4. Problem 4. Let f (~x) = x1 + · · · + xN and let g(~x) = x21 + · · · x2N . Then we find that we must solve the equation:     1 2x1  ..   .   .  = λ  ..  . 1

2xN

Solutions Manual

23

Clearly, the xi must be identical. √ In order for them to satisfy the constraint, we find that xi = ±1/ √N . For√a maximum, one must pick the plus sign. The maximum is N/ N = N . 5. Problem 5. We can phrase the question as, “find the maximum value of (f~, ~g ) under the condition that kf~k = 1” where:     x1 1     . f~ =  ..  , ~g =  ...  . xN

1

From the Cauchy-Schwarz inequality, we know that the maximum of the scalar product is attained when f~ = c~g . That is, all of the elements of f~ must be equal. Additionally, the norm of f~ must be one. There are only two vectors for which both conditions hold. We find that f~ must be one of the vectors:   √1 N

  ±  ...  . √1 N

A quick check shows that the maximum is achieved for:  √1  N

  f~ =  ...  . √1 N

√ √ At this point the value of the dot product is N/ N = N . 6. Problem 6. In this problem: 

 1111 A = 1 2 1 2, 1232

  q r ~b =   , s t



 9.5 and ~b =  17  . 20.5

Using MATLAB and defining A = [1 1 1 1;1 2 1 2; 1 2 3 2], we find that: >> A * A’ ans = 4 6 8

6 10 12

8 12 18

24

Random Signals and Noise: A Mathematical Introduction The inverse is: >> inv(ans) ans = 4.5000 -1.5000 -1.0000

-1.5000 1.0000 0

-1.0000 0 0.5000

Thus, we find that AT (AAT )−1~b is: >> A’ * ans *[9.5 ; 17 ; 20.5] ans = 0.2500 3.7500 1.7500 3.7500 Thus, q = 0.25, r = 3.75, s = 1.75 and t = 3.75 give the smallest solution in the mean-square sense.

Chapter 6 1. Problem 1. (a) As the noise is white, the matched filters are just the coefficients themselves with their orders reversed. Let filter1 be the filter for the first set of coefficients and let filter2 be the filter for the second set of coefficients. Then we find that: filter1 = {0, ..., 0 , 1, ..., 1 } | {z } | {z } N times N times

filter2 = {1, ..., 1 , 0, ..., 0 }. | {z } | {z } N times N times

(b) Let us define the output of the first filter as y1 and the output of the second filter as y2 . Saying that we decide which of the signals was sent by determining which of the outputs is bigger is the same thing as saying that we consider the sign of y1 − y2 .

Solutions Manual

25

Under the assumption that signal1 was sent, it is easy to see that E(y1 − y2 ) = N . Let us consider the standard deviation and the distribution of y1 − y2 . Under the assumption that signal1 was sent, y1 − y2 = N + n0 + · · · + nN −1 − (nN + · · · n2N −1 ). That is, the difference of the outputs is a constant plus a sum of random variables less another sum of random variables. Making use of the central limit theorem as the problem requests, we find that the first sum tends to a Gaussian RV whose expected value is 0 and whose variance is N σn2 = N . Similarly, the second sum tends to an identically distributed Gaussian. As the Gaussian is a symmetric distribution, we may ignore the minus sign—subtracting the RV or adding it have the same effect. We find that: a number

y 1 − y2 =

z}|{ N +N (0, N ) + N (0, N ) = N (N, 2N ).

We are interested in determining P (y1 − y2 < 0). Let A(α) be defined as on p. 69. We find that for the cases N p = 3, 6, 9, and 12 the probability ofperror is (approximately) A( 3/2) = 0.110, √ √ A( 3) = 0.042, A( 9/2) = 0.017, and A( 6) = 0.007 respectively. 2. Problem 2. In this example:     1 0.5 0.1 0.5/17 ~s =  −2  , 0.5 0.1  . N =  0.1 1 0.5/17 0.1 0.5 (a) The coefficients of the matched filter are:   2.8606 ~h0 = N−1~s =  −5.1442  . 2.8606 The matched filter is defined by y = 2.8606x0 −5.1442x1 +2.8606x2 . T (b) Using the formula SNR = ~s ~h0 , we find that the SNR is 16.0096. (One can also solve this using the definition of the SNR. That is sometimes an instructive approach to take.)

(c) From the formula, the probability of error should be less than 1/4. If one can assume that the noise-related random variable is normal, one can, in principle, say much more. 3. Problem 3. One can make use of the programs of Figure 1.4 and Figure 1.5 to do all of the calculations. The SNRs given by these programs are given in

26

Random Signals and Noise: A Mathematical Introduction

% Name: ch6pr3(N) % This function calculates the NxN autocorrelation matrix for problem % 3 of % chapter 6 of the random signals and noise book. function corr = ch5pr3(N) a = exp(-2*[0:(N-1)]/N)); corr = toeplitz(a); FIGURE 1.4 A Routine that Calculates the Autocorrelation Matrix. % Name: snr_hwk(N) % This function calculates the SNR for problem 3 of chapter 6 in the % random % signals and noise book. function val = snr_hwk(N) Noise = ch6pr3(N); % The function ch6pr3 builds the autocorrelation matrix. Noise_Inv = inv(Noise); t = [0:(N-1)]/N; sig = cos(2 * pi * t); % This is the signal given in the problem. for i = 1:N rev_sig(N-i+1) = sig(i); end filt = Noise_Inv * rev_sig’; rev_sig; val = rev_sig * Noise_Inv * rev_sig’; FIGURE 1.5 A Program to Calculate the Filter Coefficients and the Resulting SNRs. TABLE 1.1

The Signal to Noise Ratios for Various Values of N . N 5 10 20 30 40 50

SNR 4.5575 5.8955 6.2785 6.3552 6.3841 6.3985

Solutions Manual

27

table 1.1. We see that the point of diminishing returns is hit at around N = 10. Taking one example, the coefficients of the matched filter for N = 5 are ~h = {1.5460, −1.5206, −1.5206, 0.5808, 1.4398}. 4. Problem 4. A simple calculation shows that:   0.5 0.1 N= . 0.1 0.5 The inverse matrix is just: 1 0.24



0.5 −0.1 −0.1 0.5



Furthermore the eigenvectors are just:     1 1 ~v1 = , and ~v2 = . 1 −1 The eigenvalues are λ1 = 0.4/0.24 = 1.667 and λ2 = 0.6/0.24 = 2.5. We find that the optimal signal satisfies:   1 ~s = c −1 ~h = ~s. In order for the signal to have average power 4, we find and that √ that c = 2. Finally, the optimal filter is: y = −x0 + x1 . 5. Problem 5. Finding N and ~s is easy. We find that:     1 e−1 e−2 e−3 −1  e−1 1 e−1 e−2     ~  1 N = 0.3   e−2 e−1 1 e−1  , s =  −1  . e−3 e−2 e−1 1 1 (a) The matched filter is given by h~0 = N−1~s. Using MATLAB we found that:   −5.2733  7.2132   h~0 =   −7.2132  . 5.2733 T (b) The SNR is given by SNR = ~s h~0 . Using MATLAB we found that the SNR is 24.9729.

28

Random Signals and Noise: A Mathematical Introduction (c) As we have seen, the number of standard deviations from either S to the decision point or from 0 to the decision point (which is S/2) √ is just SNR/2. In our case, this is 2.4986. Assuming that N is approximately normal, the probability of being this many standard deviations away from the mean is approximately 6.2 · 10−3 . 6. Problems 4 and 6. We consider the general problem of finding the optimal signal when one is given a 2 × 2 matrix, N . From the properties of the autocorrelation, we know that:   ab N = . ba As RXX (0) ≥ |RXX (τ )| for all τ (see p. 165), we know that a ≥ |b|. It is easy to calculate the inverse of N . We find that:   1 a −b −1 N = 2 . a − b2 −b a We must now find the eigenvalues and eigenvectors of this matrix. One can do this either by the standard method or by inspection. Looking at the matrix it is clear that the vector:   1 v1 = 1 is an eigenvector of the matrix. The corresponding eigenvalue is 1/(a + b). As the the matrix is symmetric the eigenvectors of the matrix must be orthogonal to one another. Thus, the second eigenvector is:   1 v2 = −1 The corresponding eigenvalue is 1/(a − b). In order to determine which of the two eigenvectors to use, all that we need to know is the sign of b. If b > 0, then we must use v2 . If b < 0, then we must use v1 . We find that in problems 4 and 6 that we must use v2 after scaling it so that it meets the power requirements set in the problems. 7. Problem 7. For white noise, the autocorrelation matrix is: 2 N = σN I.

Clearly: N−1 =

1 2 I. σN

Solutions Manual

29

The optimal filter satisfies: ~h0 = N−1~s = 1 ~s. 2 σN Thus, using this method we also find that for white noise the optimal filter’s coefficients are just the values of the signal. 8. Problem 8. (a) As the diagonal entries of an autocorrelation matrix are equal to RXX (0), they must be larger than or equal to any other entries in the matrix. As that is not true here, this matrix cannot be an autocorrelation matrix. (b) The second matrix can be an autocorrelation matrix. (Consider the autocorrelation function RXX (τ ) = 1/(1 + τ 2 ) with the sampling time Ts = 1.) (c) Autocorrelation matrices must be symmetric. As the third matrix is not symmetric, it cannot be the autocorrelation matrix of any stationary stochastic process. 9. Problem 9. Clearly the matched filter is ~h = {1, 2, 1}. There are (at least) two ways to calculate the signal to noise ratio at the output of the filter. One is to use the formula SNR = k~sk2 /σn2 = 6/1 = 6. A second way is to go back to the definition of the SNR and work directly with the definition. The output of the FIR will be y = 1(1 + n0 ) + 2(2 + n1 ) + 1(1 + n2 ). The signal portion of this is 6 and the noise portion is n0 + 2n1 + n2 . The square of the signal portion is 36, while the expected value of the square of the noise portion is: E((n0 +2n1 +n2 )2 ) = E(n20 +4n21 +n22 +cross terms) = 1+4+1+0 = 6. Thus, the signal to noise ratio—a ratio of “powers”—is 36/6 = 6. This is, of course, consistent with the results the formula gives.

Chapter 7 1. Problem 1.

Z

1/2

(en (t), em (t)) =

en (t)em (t) dt −1/2

30

Random Signals and Noise: A Mathematical Introduction Z 1/2 = e2πj(n−m)t dt. −1/2

Clearly, if m = n this is the integral of the number one over an interval one unit long. The value of the integral is clearly one. If m 6= n, then one finds that: Z 1/2 (en (t), em (t)) = e2πj(n−m)t dt −1/2

Z

1/2

(cos(2πj(n − m)t) + j sin(2πj(n − m)t)) dt.

= −1/2

This integral is clearly zero. 2. Problem 2. First note that: 1 2c = 2 2 (2πz) + c 2πj



1 1 c − c z + −j 2π z + j 2π

 .

This function has poles at z = ±jc/(2π). To calculate the Fourier transform of the function we must calculate the integral: Z ∞ 2c e−2πjf t dt. (2πt)2 + c2 −∞ We will extend this to an integral in the complex plain and then proceed. Let us define the complex variable z = t + jβ. Let us calculate our integral as a contour integral. (We will discuss the contour momentarily.) We want to calculate the integral: I 2c lim e−2πjf z dz R→∞ C (2πz)2 + c2 R where the contour consists of the interval [−R, R] and one of the two semi-circles that sit above or below the interval. The contour is traversed counter-clockwise. Considering the exponential, we find that: |e−2πjf z | = |e−2πjf (t+jβ) | = |e−2πjf t e2πf β | = e2πf β . If f ≥ 0 and we want the exponentials value to remain less than 1, then we must make sure that β ≤ 0. Thus, we consider the contour that consists of the interval and the lower semi-circle. We must calculate the integral:   I 1 1 1 −2πjf z dz lim e c c − z + j 2π 2πj R→∞ CR z + −j 2π

Solutions Manual

31

The first piece of the integrand has no poles in the lower half-plane and thus its contribution to the integral is zero. The second piece has a single pole at z = −jc/(2π). As long as R is sufficient large this pole is inside the domain, and it is the only pole in the domain. Using the method of residues it is clear that for all sufficiently large R the contour integral is equal to −e−2πjf (−jc/(2π)) = −e−cf . It is not hard to show that as R → ∞ the contribution of the semicircle tends to zero. Thus the value given is the value of the integral over the interval. However, because we traverse the integral in the counter-clockwise direction, the value we have found is the negative of the value of the integral. The value of the integral for f ≥ 0, the value of the Fourier transform for f ≥ 0, is e−cf . Proceeding similarly for f < 0 but considering the contour that includes the upper semi-circle, we find that for f < 0 the Fourier transform is −c|f | . ecf . In sum, we find that the Fourier transform of (2πf2c )2 +c2 is e 3. Problem 3. On p. 116 we found that:   2 2 2 1 F √ e−t /2 (f ) = e−2π f . 2π Making use of Property 3 with a = 1/σ we find that:   2 2 2 1 −t2 /(2σ 2 ) F √ e (f ) = e−2π σ f . 2πσ 4. Problem 4. The defining property of the delta function is that when integrated “against” other functions it picks out the value of the function at t = 0. Thus the Fourier transform of the delta function should be: Z ∞ F(δ(t))(f ) = e−2πjf t δ(t) dt = e−2πjf 0 = 1. −∞

5. Problem 5. From Property 1 we see that f 2 F(g(t))(f ) is (up to a constant multiple) the Fourier transform of g 00 (t). Parseval’s equation says that the square of the norm of the Fourier transform of a function is equal to the square of the norm of the function. In the case under consideration, the second derivative of e−|at| is actually the derivative of a function with a jump discontinuity. Such a function should, in some sense, be infinite at the point of the jump—in our case at t = 0. Thus the norm of the second derivative should be infinite and by Parseval’s equation so should the norm of the second derivative’s Fourier transform.

32

Random Signals and Noise: A Mathematical Introduction 6. Problem 6. One must make use of Property 1 and its extension: F(y (n) (t))(f ) = (2πjf )n (y(t))(f ). Combining this with (7.14) we find that: q sup |y (n) (t)| ≤ 2πkF(y (n) (t))(f )kkf F(y (n) (t))(f )k t q = (2π)2n+1 kf n F(y(t))(f )kkf n+1 F(y (n) (t))(f )k. 7. Problem 7. We start by taking the Fourier transform of both sides of the equation: y 0 (t) = −ty(t). Denote the Fourier transform of y(t) by Y (F ). We find that Y (f ) satisfies: Y 0 (f ) ⇒ Y 0 (f ) = −4π 2 f Y (f ). 2πjf Y (f ) = 2πj As this satisfies nearly the same equation as y(t) it ought to have the same basic form as y(t). In fact, it is easy to integrate this equation. We find that: Y (f ) = Y (0)e−2π

2

f2

.

Note, however, that: Z



Y (0) =

y(t) dt = 1. −∞

Thus, Y (f ) = e−2π

2

f2

.

8. Problem 8. (a) Let y(t) be a real function. Then we find that: Z ∞ F(y(−t))(f ) = e−2πjf t y(−t) dt u=−t

= =

realness

−∞ Z −∞

− e2πjf u y(u) du ∞ Z ∞ e2πjf u y(u) du −∞ ∞

Z

=

e−2πjf u y(u) du

−∞

=

F(y(t))(f ).

Solutions Manual

33

(b) On p. 116 we found that the Fourier transform of y(t) = e−t u(t) is 1/(2πjf + 1). As the function under consideration is y(−t), we find that the Fourier transform of et u(−t) is 1/(−2πjf + 1). 9. Problem 9. As we saw on p. 118:  F

sin(πBt) πt



  1 |f | < B/2 (f ) = 1/2 |f | = B/2 .  0 |f | > B/2

Noting that convolution in time is the same as multiplication in frequency and noting that the Fourier transform of sB (t) is one in the entire region in which the Fourier transform of r(t) is non-zero, we find that the multiplication in the frequency domain leaves us with the Fourier transform of r(t). Thus, the time-domain convolution must leave us with r(t). 10. Problem 10. (a) Recall that Π(t) is the box function which is 1 for |t| ≤ 1/2 and 0 elsewhere. We find that the convolution of Π(t) with itself yields: Z Π(t) ∗ Π(t)



Π(t − τ )Π(τ ) dτ

=

−∞ Z 1/2

Π(t − τ ) dτ

= −1/2 symmetry

Z

1/2

Π(τ − t) dτ.

=

−1/2

If |t| > 1, then Π(t − τ ) = 0 throughout the region −1/2 ≤ τ ≤ 1 and the convolution is equal to zero. Suppose that t ≥ 0. Then the convolution is equal to: Z

1/2

Π(t) ∗ Π(t) =

1 dτ = 1 − t. t−1/2

If τ ≤ 0, then the convolution is equal to: Z

1/2+t

Π(t) ∗ Π(t) =

1 dτ = 1 + t. −1/2

This completes the calculation of the convolution.

34

Random Signals and Noise: A Mathematical Introduction 2

(b) As Λ(t) = Π(t) ∗ Π(t), we find that F(Λ(t))(f ) = (F(Π(t))(f )) . On p. 117 we found the Fourier transform of Π(t). From the results there we find that:  2 sin(πf ) F(Λ(t))(f ) = . πf (c) We know that: Z



e2πjf t F(y(t))(f ) df.

y(t) = −∞

Letting t = 0, we find that: Z ∞ Z y(0) = e2πjf 0 F(y(t))(f ) df = −∞



F(y(t))(f ) df..

−∞

(d) Applying the previous result to our function, we find that: 2 Z ∞ sin(πf ) Λ(0) = 1 = df. πf −∞ 11. Problem 11. (a) We find that when n 6= 0: Z an =

1

e−2πjnt f (t) dt

0

Z =

1/2

e−2πjnt dt

0

1/2 e−2πjnt = −2πjn 0 e−πjn − 1 −2πjn  0 n even = . 1 πjn n odd =

For n = 0 then it is clear that a0 = 1/2. (b) From Parseval’s equation we know that: Z 1 ∞ X |f (t)|2 dt = 1/2 = |an |2 . 0

−∞

In our example, ∞ X −∞

|an |2 = (1/2)2 + 2

∞ X 1 1 . 2 π (2n + 1)2 n=0

Solutions Manual

35

Combining our results, we find that: ∞ X

1 π2 = . 2 (2n + 1) 8 n=0 Note that one can calculate the sum of the squares of the reciprocals of the odd integers, Sodd , from the sum of the squares of the reciprocals of the integers, Sint . Note that the sum of the squares of the reciprocals of the integers must be four times the value of the sum of the squares of the reciprocals of the even integers. (Consider (1/22 )Sint .) Thus, the sum of the squares of the reciprocals of the odd integers is just: Sodd = Sint −

Sint 3Sint 3 π2 π2 = = = . 4 4 4 6 8

12. Problem 12. (a) Z F(y(t − τ ))(f )



e−2πjf t y(t − τ ) dt

= u=t−τ

=

−∞ Z ∞

e−2πjf (u+τ ) y(u) du Z ∞ e−2πjf τ e−2πjf u y(u) du −∞

=

−∞

=

−2πjf τ

e

F(y(t))(f ).

(b) In Problem 3 we found that the Fourier transform of: √ is e−2π

2

σ2 f 2

2 2 1 e−t /(2σ ) 2πσ

. Clearly the Fourier transform of: √ 2

2

2 2 1 e−(t−µ) /(2σ ) 2πσ

2

is e2πjf µ e−2π σ f n . The definition of the characteristic function is: Z ∞ jXt ϕX (t) = E(e ) = ejαt fX (α) dα = F(fX (α))(t/2π). −∞

Thus, to calculate the characteristic function we need only plug t/(2π) into the Fourier transform we have calculated. We find that the characteristic function is: ejtµ e−σ

2 2

t /2

.

36

Random Signals and Noise: A Mathematical Introduction

13. Checking the conditions is easy enough. We find that the CauchySchwarz inequality is: sZ Z ∞ sZ ∞ ∞ f (t)g(t) dt ≤ |f (t)|2 dt |g(t)|2 dt. −∞

−∞

−∞

Chapter 8 1. Problem 1. The PSD of the input signal is just the Fourier transform of the input signal’s autocorrelation. Calculating the Fourier transform, we find that: 1 1 SXX (f ) = 2 . (2) c2 (2πf )2 + (1/c2 )2 Denote the output by Y (t). Then the PSD of the output is: SY Y (f )

|H(f )|2 SXX (f ) 1 1 1 = 2 2 2 (2πf c1 ) + 1 c2 (2πf ) + (1/c2 )2 1 1 1 = (c1 c2 )2 (2πf )2 + (1/c2 )2 (2πf )2 + (1/c1 )2   1 1 1 1 partial fractions = − (c1 c2 )2 (1/c2 )2 − (1/c1 )2 (2πf )2 + (1/c1 )2 (2πf )2 + (1/c2 )2   1 1 1 = − . c21 − c22 (2πf )2 + (1/c1 )2 (2πf )2 + (1/c2 )2 =

Calculating the inverse Fourier transform (by inspection) we find that: RY Y (τ ) =

c 1 c2 −|τ |/c2  1 −|τ |/c1 e − e . c21 − c22 2 2

2. Problem 2. As c2 → 0, we find that the autocorrelation of the output tends to: 1 −|τ |/c1 RY Y (τ ) → e . 2c1 Let us consider the PSD of the input. As we saw in (2) SXX (f ) =

1 . (2πf c2 )2 + 1

We find that as c2 → 0: SXX (f ) → 1.

Solutions Manual

37

2 That is, the noise tends to white noise for which σN = 1. In §8.6 we found that passing white noise through a simple low-pass filter like the filter in our problem leads to precisely the answer we just found.

3. Problem 3. As the autocorrelation of the input is: RXX (τ ) =

1 −|τ |/c2 e , 2c2

we find that: SXX (f ) =

2 c12 1 2c2 (2πf )2 +

1 c22

=

1/c22 . (2πf )2 + 1/c22

The PSD of the output is just: SY Y (f ) = |H(f )|2 SXX (f ) =

1/c22 (2πf )2 . 2 2 (2πf ) + 1/c1 (2πf )2 + 1/c22

Expanding this into its partial fraction expansion we find that:   1 c21 c22 SY Y (f ) = 2 2 − . c2 (c1 − c22 ) (2πf )2 + 1/c22 (2πf )2 + 1/c21 Finding the inverse transform “by inspection,” we see that: RY Y (τ ) =

c1 c2 (c1 e−|τ |/c2 2 2c2 (c21 − c22 )

− c2 e−|τ |/c1 ).

As a sanity check, note that as c1 → 0 the filter’s frequency response tends to zero. As c1 → 0 we find that RY Y (τ ) → 0 as well—as it should if the filter is blocking all of the input. Also, if c2 → 0, then the input is tending to white noise. When this happens the output of the highpass filter should also tend to something that is almost white noise. As c2 → 0, we find that: RY Y (τ ) →

e−|τ |/c2 e−|τ |/c1 − . 2c2 2

This is indeed something that is tending to white noise save for a (relatively) small change introduced by the filter. 4. Problem 4. As the PSD of the input is SXX (f ) = 1, the PSD of the output is: (2πf c)2 SY Y (f ) = |H(f )|2 = . ((2πf c)2 + 1)2 Calculating RXX (τ ) is a bit tricky, however.

38

Random Signals and Noise: A Mathematical Introduction Let us start with a known and related Fourier transform pair. We have seen that: 1 1 ↔ e−|τ |/c . (2πf c)2 + 1 2c We know that differentiating with respect to f in the frequency domain is the same as multiplying by −2πjτ in the time domain. Thus, we find the transform pair: −(2πc)2 f −πjτ −|τ |/c ↔ e . 2 2 ((2πf c) + 1) c If we could only add an f to the Fourier transform we would be nearly done. As we know that the Fourier transform of the derivative of a function is 2πjf times the Fourier transform of the function, we move on to the transform pair: (2πjf )

d −πjτ −|τ |/c −(2πc)2 f ↔ e . ((2πf c)2 + 1)2 dτ c

Shifting constants around until we have the form we need, we find that: d τ −|τ |/c 1 (2πf c)2 ↔ e = 2 2 ((2πf c) + 1) dτ 2c 2c



|τ | 1− c



e−|τ |/c .

We find that: 1 RY Y (τ ) = 2c



|τ | 1− c



e−|τ |/c .

5. Problem 5. (a) Let the input to the filter be denoted by X(t). From the general theory we have developed, we know that: RXX (τ ) = 4e−10|τ | . Taking the Fourier transform, we find that: SXX (f ) =

80 . (2πf )2 + 100

The autocorrelation of the output is just |H(f )|2 times this value. Denoting the output by Y (t) we find that: SY Y (f ) =

80 1 . 2 (2πf ) + 100 (2πf )2 + 1

Solutions Manual

39

(b) The autocorrelation of the output is found by using the partial fractions expansion and taking (2πf )2 to be the “variable.” We find that:   1 80 1 − SY Y (f ) = 99 (2πf )2 + 1 (2πf )2 + 100 and that: RY Y (τ ) =

 80  −|τ | e /2 − e−10|τ | /20 . 99

(c) The inverse Fourier transform of H(f ), h(t), is zero when t < 0 and is e−t when t > 0. (d) The output of the filter is given by: Z ∞ Y (t) = h(τ )X(t − τ ) dτ. −∞

Making use of the triangle inequality for integrals, we find that: Z ∞ |Y (t)| ≤ |h(τ )||X(t − τ )| dτ −∞ Z ∞ = |h(τ )|2 dτ −∞ Z ∞ =2 e−τ dτ 0

= 2. Thus the output must always be between 2 and −2. 6. Problem 6. As we have seen, SXX (f ) = 2kT R = 2 · 1.38 × 10−23 · 295 · 106 = 8.14 × 10−15 . As SY Y (f ) = |H(f )|2 SXX (f ), we find that: SY Y (f ) =

106 · 8.14 × 10−15 . (2πf )2 + 106

Finding the inverse Fourier transform of this function is trivial. We find that: RY Y (τ ) = 500e−|1000|τ 8.14 × 10−15 . To find the mean square voltage, evaluate this at τ = 0. This gives: E(Y 2 (t)) = RY Y (0) = 4.07 × 10−12 . The RMS voltage is the square root of this number and is 2.02µV .

40

Random Signals and Noise: A Mathematical Introduction 7. As the problem allows it, we will assume that the resistor does not produce thermal noise. (This is equivalent to assuming that the resistor is being kept at absolute zero.) The noise current, i(t), associated with the current has PSD Sii (f ) = e0 Iavg = e0 . After passing through the resistor the noise voltage is X(t) = 1000i(t). The PSD of the voltage is SXX = 106 e0 . After passing through the filter, the PSD of the filter’s output, Y (t), is: 1012 SY Y (f ) = e0 . (2πf )2 + 104 To calculate RY Y (τ ) we must calculate the inverse Fourier transform of SY Y (f ). We find that: RY Y (τ ) =

1012 −100|τ | e e0 . 200

The mean √ square voltage is just RY Y (0) = 5 × 109 e0 . Thus, the RMS voltage is 5 × 109 e0 . As e0 = 1.6 × 10−19 C we find that the RMS voltage of the output of the filter is 28.3µV . 8. Problem 8. (a) The autocorrelation is: Z

1

Πperiodic (t+α)Πperiodic (t+α+τ ) dα.

RXX (τ ) = E(X(t)X(t+τ )) = 0

It is (relatively) clear that the autocorrelation is a linear function of the distance between the samples. Additionally it is clear that when there is no shift the autocorrelation is 1, when there is a shift of one quarter of a wave, the autocorrelation is zero, and when there is a shift of one half of a wave, the autocorrelation is −1. Thus, the autocorrelation is RXX (τ ) = 1 − 4|τ |, |τ | ≤ 1/2 and repeats periodically over the rest of the real line. (b) The Fourier coefficients are: Z 1/2 cn = RXX (τ )e−2πjnτ dτ −1/2 symmetry

=

1/2

Z 2

RXX (τ ) cos(2πnτ ) dτ 0

by parts

=

= =

2

! 1/2 Z 1/2 sin(2πnτ ) sin(2πnτ ) RXX (τ ) − (−4) dτ 2πn 0 2πn 0

2 (1 − cos(πn)) 2 n2 π  4 π 2 n2 n odd . 0 n even

Solutions Manual

41

(c) Because the sum of all the Fourier coefficients must equal to the value of the function at t = 0, it is clear that the total power is 1. (Additionally, RXX (0) is known to be the expected value of the power.) The power at 1Hz is c1 + c−1 = 8/π 2 . As the total power is one, 8/π 2 is also the fraction of the power located at 1Hz. 9. Problem 9. Property 10 combined with the linearity of the Fourier transform and the fact that F(1)(f ) = δ(f ) gives the result. 10. Problem 10. Given the Fourier transform pair, we know that the integral of the square of the magnitude of one piece of the pair equals the integral of the square of the magnitude of the other piece of the pair. In our case, this shows that: Z



−∞

sin2 (πf τ ) df = (πf τ )2

Z



h2 (t) dt =

−∞

Z

τ /2

−τ /2

1 1 dt = . τ2 τ

11. Problem 11. (a) The PSD is SN N (f ) = 0.1. (b) Let the output of the filter be denoted by Y (t). Then we find that: SY Y (f ) =

106 2 × 103 1 0.1 = 0.1 = 50 . 2 2 6 (2πf /1000) + 1 (2πf ) + 10 (2πf )2 + (103 )2

(c) The autocorrelation is: RY Y (τ ) = 50e−1000|τ | . (d) The total power is E(Y 2 (t)) = RY Y (0) = 50. The power in the given range is: Z

Power in range

= u=2πf /1000

= =

1000

1 0.1 df 2+1 (2πf /1000) −1000 Z 1000 2π 1 0.1 du 2 2π −2π u + 1 100 tan−1 (2π). π

Thus, we find that the fraction of the power in the given range is: Fraction of power =

2 tan−1 (2π) ≈ 0.90. π

42

Random Signals and Noise: A Mathematical Introduction

Chapter 9

1. Problem 1.

(a) The product of two such signals is another signal that takes one of two values at all times and whose values change in a totally random way.

(b) One way of seeing this is to consider the autocorrelation of the product of the signals. We find that the autocorrelation is:
$$R_{R_3R_3}(\tau) = E(R_3(t)R_3(t+\tau)) = E(R_1(t)R_2(t)R_1(t+\tau)R_2(t+\tau)) \overset{\text{independence}}{=} E(R_1(t)R_1(t+\tau))\,E(R_2(t)R_2(t+\tau))$$
$$= R_{R_1R_1}(\tau)R_{R_2R_2}(\tau) = a_1^2e^{-2\mu_1|\tau|}\,a_2^2e^{-2\mu_2|\tau|} = (a_1a_2)^2e^{-2(\mu_1+\mu_2)|\tau|}.$$

As this is a random telegraph signal, its "$\mu$" must be $\mu_1 + \mu_2$.

(c) As $R_1(t)R_2(t)$ is independent of $X(t)$, it acts to "spread" $X(t)$ just like any other random telegraph signal. Because of its large $\mu$, it spreads the signal more than either $R_1(t)$ or $R_2(t)$ alone would.

(d) Suppose one has a sum of spread spectrum signals, $R_1(t)X_1(t) + \cdots + R_N(t)X_N(t)$. For simplicity's sake, assume that all the signals switch between $+1$ and $-1$. When one tries to decode, say, $X_1(t)$, one is left with $X_1(t) + R_1(t)R_2(t)X_2(t) + R_1(t)R_3(t)X_3(t) + \cdots + R_1(t)R_N(t)X_N(t)$. All of the signals other than $X_1(t)$ are spread over a wide range of frequencies. As long as there are not too many such signals, low-pass filtering should remove almost all of their contribution while allowing the desired signal to go through.

2. Problem 2.

(a) It corresponds to the recurrence relation $y_n = y_{n-1} + y_{n-3}$.

(b) i. As the polynomial includes a 1, all the factors must include a 1 as well. The factors must be a first-order and a second-order polynomial. Thus, one of the factors must be $1+z^{-1}$ and the other must be one of $1+z^{-1}+z^{-2}$ or $1+z^{-2}$. The first possibility leads to:
$$(1+z^{-1})(1+z^{-1}+z^{-2}) = 1+z^{-1}+z^{-2}+z^{-1}+z^{-2}+z^{-3} = 1+z^{-3}.$$


FIGURE 1.6 The Shift Register Connections Necessary to Implement the Recurrence Relation.

The second possibility leads to:
$$(1+z^{-1})(1+z^{-2}) = 1+z^{-2}+z^{-1}+z^{-3} = 1+z^{-1}+z^{-2}+z^{-3}.$$
As neither of these is the desired polynomial, the polynomial is irreducible.

ii. If we let $y_{-3}=1$, $y_{-2}=0$, and $y_{-1}=0$, then we find that the sequence generated by the recurrence relation is:
$$\{y_0=1,\,y_1=1,\,y_2=1,\,y_3=0,\,y_4=1,\,y_5=0,\,y_6=0,\,y_7=1,\,y_8=1,\,y_9=1,\ldots\}.$$
The sequence is periodic with period equal to seven. Thus, we have a maximal length solution. Such solutions can only correspond to irreducible polynomials. (A short script that reproduces this sequence appears after part (c).)

(c) Please see Figure 1.6.
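Here is a minimal sketch (not from the text) that iterates the mod-2 recurrence $y_n = y_{n-1} + y_{n-3}$ from the initial conditions above; changing the taps to $y_{n-2} + y_{n-3}$ reproduces the sequence of Problem 3 as well.

% Iterate y_n = y_{n-1} + y_{n-3} (mod 2) from y_{-3}=1, y_{-2}=0, y_{-1}=0.
state = [1 0 0];                        % [y_{n-3}, y_{n-2}, y_{n-1}]
y = zeros(1, 10);
for n = 1:10
    yn = mod(state(3) + state(1), 2);   % taps: y_{n-1} and y_{n-3}
    y(n) = yn;
    state = [state(2:3) yn];            % shift the register
end
disp(y)   % 1 1 1 0 1 0 0 1 1 1 -- period 7, a maximal length sequence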


3. Problem 3.

(a) It corresponds to the recurrence relation $y_n = y_{n-2} + y_{n-3}$.

(b) The first part of this section is identical to the solution of the first part of the previous problem; the polynomial we have is neither of the polynomials found above. Assuming that $y_{-3}=1$, $y_{-2}=0$, and $y_{-1}=0$, the sequence produced here is:
$$\{y_0=1,\,y_1=0,\,y_2=1,\,y_3=1,\,y_4=1,\,y_5=0,\,y_6=0,\,y_7=1,\,y_8=0,\,y_9=1,\ldots\}.$$
The sequence is periodic with period equal to seven. Thus, we have a maximal length solution. Such solutions can only correspond to irreducible polynomials.

(c) Please see Figure 1.7.

FIGURE 1.7 The Shift Register Connections Necessary to Implement the Recurrence Relation.

4. Problem 4. Clearly:
$$(1+z^{-1})(1+z^{-1}) = 1 + (1+1)z^{-1} + z^{-2} = 1 + z^{-2}.$$
As we have factored the polynomial, it is not irreducible. Additionally, if we let $y_{-2}=1$ and $y_{-1}=0$, then we find that the sequence generated by the recurrence relation is:
$$\{y_0=1,\,y_1=0,\,y_2=1,\,y_3=0,\ldots\}.$$
This sequence has period 2. As a maximal length sequence would have period $2^2-1=3$, this sequence is not a maximal length sequence. Because the period a maximal length sequence would have to have, $2^2-1=3$, is prime, every irreducible second-order polynomial is primitive and would produce a maximal length sequence; as our polynomial does not, it cannot be irreducible.
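The mod-2 factorization in Problem 4 can be checked with a one-liner (a sketch, not from the text; conv multiplies polynomial coefficient sequences):

mod(conv([1 1], [1 1]), 2)   % coefficients of (1+z^-1)^2 mod 2: [1 0 1], i.e., 1 + z^-2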


5. Problem 5.

(a) If the number of non-zero coefficients is odd, then when one starts with the all-ones solution, the next term is always the sum of an odd number of ones and is one. If the number of non-zero coefficients is even, then the sum is zero and the all-ones solution is not possible.

(b) We find that the coefficients of $P(z)$ must solve the equations:
$$b_0 = 1,\quad b_1 + b_0 = a_1,\quad\ldots,\quad b_{N-1}+b_{N-2} = a_{N-1},\quad b_{N-1} = a_N.$$
Note that because in our arithmetic $a + a = 0$, this is a telescoping sum. We find that:
$$b_0 = 1,\quad b_1 = 1 + a_1,\quad\ldots,\quad b_{N-1} = 1 + a_1 + \cdots + a_{N-1},\quad b_{N-1} = a_N.$$
Note that $b_{N-1}$ seems to need to take on two values. The only way $b_{N-1}$ can be properly chosen is if those two values are really the same. As $Q(z)$ is assumed to be $N$th order, $a_N = 1$. Thus, we find that the condition for the two values to be the same is that:
$$1 = 1 + a_1 + \cdots + a_{N-1}.$$
This is equivalent to requiring that:
$$a_1 + \cdots + a_{N-1} = 0.$$
This, however, is equivalent to requiring that the number of $a_k$ that are equal to one be even.

(c) We have shown that the polynomial associated with a recurrence relation that supports a constant solution can be factored as $(1+z^{-1})P(z)$ (in our mod-2 arithmetic, $1-z^{-1} = 1+z^{-1}$). Thus, the polynomial is not irreducible.


Chapter 10

1. Problem 1. The Fourier transform of the function $R_{XX}(\tau)$ is $\sin(2\pi f)/(\pi f)$. This function is negative for some frequencies; for example, for $f = 3/4$ we have $\sin(2\pi f) = \sin(3\pi/2) = -1$. Thus the Fourier transform cannot be a PSD, and the function that we "mistakenly" referred to as $R_{XX}(\tau)$ cannot be the autocorrelation of any stochastic process.

2. Problem 2. The code of Figure 1.8 should estimate the PSD of the process. When run, one finds that the plot is almost a straight line. This is what one expects from white noise. Moreover, because rand generates numbers that are uniformly distributed between 0 and 1, after subtracting 1/2 we have noise that is uniformly distributed between $-1/2$ and $1/2$. The variance of this noise should be 1/12. If all of the energy in the noise is to appear in the PSD and the PSD is to be approximately constant, then the value of the PSD must be approximately $(1/12)/10000$. This is, indeed, what we find in the graph.

Note that there is a fine point being glossed over here. If the noise were truly white, its variance would be infinite. One must actually think of the samples here as being drawn from filtered white noise. We think of the filter as removing all noise outside a 10 kHz (two-sided) band. Then the results make a reasonable amount of sense. Additionally, the DFT does not really estimate frequencies near or above the Nyquist frequency, $1/(2T_S)$, well. In this case we see the result we expect anyhow.

3. Problem 3. As we have set up the problem in the form $\vec{h}^T R_{XX}\vec{h}$, and as the autocorrelation matrix $R_{XX}$ is positive-semidefinite, we have shown that the sum is always non-negative.

Chapter 11

1. Problem 1.

(a) It is easy to see that:
$$S_{XX}(f) = \frac{200}{(2\pi f)^2 + 100^2},\qquad S_{NN}(f) = \frac{2000}{(2\pi f)^2 + 1000^2}.$$
The frequency response of the filter is:
$$H(f) = \frac{S_{XX}(f)}{S_{XX}(f)+S_{NN}(f)} \overset{\text{algebra}}{=} \frac{1}{11} + \frac{1}{11}\,\frac{9\cdot 10^5}{2\mu}\,\frac{2\mu}{(2\pi f)^2+\mu^2},\qquad \mu = \sqrt{10^5}\approx 316.2.$$
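The algebra can be double-checked numerically. The sketch below (not from the text) compares the direct Wiener ratio to the simplified closed form, using the PSDs given above:

% Compare the direct Wiener ratio to the simplified closed form.
f = linspace(0, 500, 1000);
w = (2*pi*f).^2;
Sxx = 200 ./ (w + 100^2);
Snn = 2000 ./ (w + 1000^2);
Hdirect = Sxx ./ (Sxx + Snn);
Hclosed = 1/11 + (1/11) * 9e5 ./ (w + 1e5);
max(abs(Hdirect - Hclosed))   % should be at round-off level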


% This code estimates the PSD of random noise.
sum = zeros(size([1:100]));      % First zero the sum.
M = 100; N = 100;                % Then choose how many samples will be ``drawn.''
Ts = 0.0001;                     % Choose the sample time.
T = N * Ts;                      % Calculate the time that each periodogram spans.
X = rand(size([1:M*N])) - 0.5;   % Use the rand command. Remember to subtract off the average.
for i = 1:M
    y = fft(X((i - 1)*N + 1 : i*N));   % Calculate the FFT.
    z = (T / N^2) * abs(y).^2;         % Calculate the normalized magnitude squared.
    sum = sum + z;                     % Sum the periodograms.
end
avg = sum / M;                   % Average the periodograms.
f = [0:(N-1)] / (N * Ts);        % Define the frequencies.
plot(f, avg)                     % Plot the estimated PSD.
axis([0 f(N) 0 0.00002])         % Set up the plot nicely.

FIGURE 1.8 A Program to Estimate the PSD of the Stochastic Process.


FIGURE 1.9 The Frequency Response of the Optimal Non-causal Filter.

(b) The magnitude of the frequency response (for $f \ge 0$) is given in Figure 1.9.

2. Problem 2.

(a) The function $R_{NN}(\tau) = Ce^{-C|\tau|}$ will look like $C$ until $|\tau|$ is very large; for very large $|\tau|$ the autocorrelation is even smaller. Because the autocorrelation is so nearly constant for so long, the noise looks rather like low-level "DC noise."

(b) One finds that $H(f)$ will generally be near 1 and that $h(t)$ tends to an impulse function.

(c) As the noise tends to zero, the optimal filter ought to be the filter that does nothing. This is the filter whose frequency response is 1 and whose impulse response is an impulse function.

3. Problem 3.

(a) When $C$ is equal to 1, the signal and the noise are statistically indistinguishable. We find that for such values the optimal $H(f)$ is $1/2$.

(b) As the signal and the noise are indistinguishable, all that we can really do is try to make the combination have the same "size" as the original signal. This leads us to a filter that changes the size of its input, but nothing else.


4. Problem 4.

(a) As $C\to\infty$ we find that $\gamma^2 \approx 2C^2$ and $\mu^2\to 2$. Considering the constants in (11.14), we find that:
$$\frac{2+2C}{\gamma^2(1+\mu)}\to 0,\qquad \frac{2+2C}{\gamma^2(1+\mu)}\,(C-\mu)\to\frac{1}{1+\sqrt{2}}.$$
Thus, the impulse response of the filter tends to:
$$h(t) = \frac{1}{1+\sqrt{2}}\,e^{-\sqrt{2}\,t},\qquad t\ge 0,$$
and it is zero for negative $t$.

(b) The PSD of the noise in this problem is:
$$S_{NN}(f) = \frac{2C^2}{(2\pi f)^2 + C^2}.$$

When $C\to\infty$ this tends to $S_{NN}(f) = 2$, that is, to pure white noise. Plugging $\sigma_N^2 = 2$ into the formulas developed in §11.5.1, we find that the impulse response of the filter is the same as the filter we just found.

5. Problem 5.

(a) In §8.13 we showed that for such a signal:
$$R_{XX}(\tau) = e^{-2\cdot\frac{1}{2}|\tau|} = e^{-|\tau|}.$$

(b) This is precisely the situation dealt with in §11.5.2. We find that for $t\ge 0$ the impulse response of the filter is:

$$h(t) = \frac{200^2}{200000^2(1+1.4142)}\left(\delta(t) + (1000000 - 1.4142)\,e^{-1.4142\,t}\right).$$

For negative values of $t$ it is zero, of course.

6. Problem 6.

(a) Making use of the definition of $\gamma$, we find that:
$$h(t) = \frac{1}{\sigma_N\sqrt{\sigma_N^2+2}}\,e^{-\sqrt{1+2/\sigma_N^2}\,|t|}.$$
When $\sigma_N^2$ is small we find that:
$$h(t)\approx\frac{1}{\sqrt{2}\,\sigma_N}\,e^{-\sqrt{2}\,|t|/\sigma_N}.$$
A simple application of L'Hôpital's rule shows that this expression tends to zero for all $t\neq 0$.

(b)



$$\int_{-\infty}^{\infty} h(t)\,dt = \frac{1}{\sigma_N^2\gamma}\int_{-\infty}^{\infty} e^{-\gamma|t|}\,dt = \frac{1}{\sigma_N^2\gamma}\cdot\frac{2}{\gamma} = \frac{2}{\sigma_N^2\gamma^2} = \frac{2}{2+\sigma_N^2},$$
where we used $\sigma_N^2\gamma^2 = 2+\sigma_N^2$. Clearly, as $\sigma_N^2\to 0$, this expression tends to 1.
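A numerical sanity check of this limit (a sketch, not from the text, with $h(t)$ as in part (a)):

% Check that the area under h(t) equals 2/(2 + sigma^2) and tends to 1.
for sigma2 = [1 0.1 0.01]
    gamma = sqrt(1 + 2/sigma2);                      % gamma as defined in the text
    h = @(t) (1/(sigma2*gamma)) * exp(-gamma*abs(t));
    area = integral(h, -Inf, Inf);
    fprintf('sigma^2 = %5.2f: area = %7.5f (theory %7.5f)\n', sigma2, area, 2/(2 + sigma2));
end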

7. Problem 7. In this problem, we must minimize the function $E((R(t)-S(t))^2)$ over all possible choices of $\alpha$. We find that the function is:
$$E((R(t)-S(t))^2) = E((\alpha Y(t)-S(t))^2) = E(((\alpha-1)S(t)+\alpha N(t))^2)$$
$$= (\alpha-1)^2E(S^2(t)) + 2(\alpha-1)\alpha E(S(t)N(t)) + \alpha^2E(N^2(t)) = (\alpha-1)^2R_{SS}(0) + \alpha^2R_{NN}(0),$$
where the cross term vanishes because the signal and the noise are uncorrelated and zero mean. Differentiating with respect to $\alpha$ and setting the derivative equal to zero, we find that the critical point satisfies:
$$2(\alpha-1)R_{SS}(0) + 2\alpha R_{NN}(0) = 0.$$
Solving for $\alpha$ we find that:
$$\alpha = \frac{R_{SS}(0)}{R_{SS}(0)+R_{NN}(0)}.$$
As the function we were trying to minimize is a concave-up parabola, this value of $\alpha$ must be its unique minimizer.
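A quick Monte Carlo check of this $\alpha$ (a sketch, not from the text, assuming zero-mean independent Gaussian signal and noise with $R_{SS}(0)=4$ and $R_{NN}(0)=1$):

% Empirical MSE of alpha*Y versus alpha, compared with the optimal alpha = 4/5.
S = 2*randn(1, 1e6);                 % R_SS(0) = 4
N = randn(1, 1e6);                   % R_NN(0) = 1
Y = S + N;
alphas = 0:0.01:1;
mse = arrayfun(@(a) mean((a*Y - S).^2), alphas);
[~, k] = min(mse);
alphas(k)                            % should be close to 4/(4+1) = 0.8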

8. Problem 8. As the PSD is always a real even function, $H(f)$ must be a real even function as well. Using the same logic we used to show that the Fourier transform of a real even function is a real even function, we can show that the inverse Fourier transform of a real even function is a real even function. Thus, the impulse response of the filter is a real even function. The only such functions that are also causal are $h(t)\equiv 0$ and $h(t)=\delta(t)$, which correspond to $H(f)=0$ and $H(f)=1$ respectively. Unless the PSD of the signal or the noise is identically zero, neither of these frequency responses is possible. As by assumption the PSDs of the signal and the noise are not identically zero, we find that the filter is non-causal.

Appendix A

1. Problem 1. Let us suppose that the first statement is true, but the second is not. Then we find that there exist coefficients $a_i$, $i = 1,\ldots,N$, not all of which are zero, such that:
$$a_1\vec{x}_1 + \cdots + a_N\vec{x}_N = \vec{0}.$$


Suppose that $a_i$ is one of the non-zero coefficients. Then we find that:
$$\vec{x}_i = -\sum_{k\neq i}(a_k/a_i)\vec{x}_k.$$
That is, we find that it is possible to express one of the vectors as a linear combination of the remaining vectors. By assumption that is impossible. Thus, if the first statement is true, the second follows.

Let us assume that the second statement is true, but the first is not. That is, we assume that there exists at least one vector that can be written as a linear combination of the other vectors. Without loss of generality we can assume that the vector is $\vec{x}_1$. That is, we assume that:
$$\vec{x}_1 = a_2\vec{x}_2 + \cdots + a_N\vec{x}_N.$$
Rearranging terms, we find that this is equivalent to:
$$\vec{0} = -\vec{x}_1 + a_2\vec{x}_2 + \cdots + a_N\vec{x}_N.$$
However, this cannot be, as we have assumed that the only linear combination of the vectors that gives the zero vector is the "all zeros" linear combination. Thus, if the second statement is true, so is the first one. We have shown that the two statements are equivalent.

2. Problem 2. This problem is an exercise in "definition application." Applying the definition of a linear mapping to $A$ and $B$, we find that:
$$B(A(\alpha\vec{x}+\beta\vec{y})) = B(\alpha A(\vec{x})+\beta A(\vec{y})) = \alpha B(A(\vec{x})) + \beta B(A(\vec{y})).$$

3. Problem 3. Let us consider the effect of matrix multiplication on a linear combination of two vectors, $\vec{x}$ and $\vec{y}$. Making use of (A.1), it is easy to see that the product of the matrix $A$ and the vector $\alpha\vec{x}+\beta\vec{y}$ is:
$$\begin{pmatrix}\sum_{i=1}^N a_{1i}(\alpha x_i+\beta y_i)\\ \sum_{i=1}^N a_{2i}(\alpha x_i+\beta y_i)\\ \vdots\\ \sum_{i=1}^N a_{Mi}(\alpha x_i+\beta y_i)\end{pmatrix} = \begin{pmatrix}\alpha\sum_{i=1}^N a_{1i}x_i\\ \alpha\sum_{i=1}^N a_{2i}x_i\\ \vdots\\ \alpha\sum_{i=1}^N a_{Mi}x_i\end{pmatrix} + \begin{pmatrix}\beta\sum_{i=1}^N a_{1i}y_i\\ \beta\sum_{i=1}^N a_{2i}y_i\\ \vdots\\ \beta\sum_{i=1}^N a_{Mi}y_i\end{pmatrix}.$$
As the right-hand side is $\alpha$ times the effect of $A$ on $\vec{x}$ plus $\beta$ times the effect of $A$ on $\vec{y}$, we see that the mapping defined by multiplication of a vector by a matrix is linear.

4. Problem 4. We find that:

$$\|\vec{x}+\vec{y}\|^2 = (\vec{x}+\vec{y},\vec{x}+\vec{y}) = |(\vec{x}+\vec{y},\vec{x}+\vec{y})| = |(\vec{x},\vec{x})+(\vec{x},\vec{y})+(\vec{y},\vec{x})+(\vec{y},\vec{y})|$$
$$\le (\vec{x},\vec{x}) + |(\vec{x},\vec{y})| + |(\vec{y},\vec{x})| + (\vec{y},\vec{y}) \le \|\vec{x}\|^2 + 2\|\vec{x}\|\|\vec{y}\| + \|\vec{y}\|^2 = (\|\vec{x}\|+\|\vec{y}\|)^2.$$
Taking the square root of both sides gives the desired result.

 10 3 1 | 1 0 0  3 9 1 | 0 1 0 118|001 ⇓   79/8 23/8 0 | 1 0 −1/8  23/8 71/8 0 | 0 1 −1/8  1/8 1/8 1 | 0 0 1/8 ⇓   79/8 23/8 0 | 1 0 −1/8  23/71 1 0 | 0 8/71 −1/71  1/8 1/8 1 | 0 0 1/8 ⇓   (71 · 79 − 232 )/(8 · 71) 0 0 | 1 −23/71 −6/71  23/71 1 0 | 0 8/71 −1/71  1/8 1/8 1 | 0 0 1/8 ⇓   635/71 0 0 | 1 −23/71 −6/71  23/71 1 0 | 0 8/71 −1/71  1/8 1/8 1 | 0 0 1/8 ⇓   1 0 0 | 71/635 −23/635 −6/635  23/71 1 0 | 0 8/71 −1/71  1/8 1/8 1 | 0 0 1/8 ⇓   1 00| 71/635 −23/635 −6/635 0 1 0 | −23/635 (232 + 8 · 635)/(71 · 635) (−635 + 6 · 23)/(71 · 635)  0 1/8 1 | −71/(8 · 635) 23/(8 · 635) (6 + 635)/(635 · 8) ⇓   1 00| 71/635 −23/635 −6/635 0 1 0 | −23/635 79/635 −7/635  0 1/8 1 | −71/(8 · 635) 23/(8 · 635) 641/(635 · 8) ⇓

Solutions Manual

53 



1 0 0 | 71/635 −23/635 −6/635  0 1 0 | −23/635 79/635 −7/635  . 0 0 1 | −6/635 −7/635 648/(635 · 8) 6. Problem 6. The determinant is: det(A) = 1(4·2−(−2)5)−2(3·2−(−2)6)−2(3·5−4·6) = 18−2·18−2(−9) = 0. Thus the matrix is not invertible. 7. Problem 7. First we solve the equation (det(A − λI) = 0. We find that the equation is: (5 − λ)2 − 16 = 0. The solutions as λ = 9, 1. Next we solve the equation A~x = λ~x for the given matrix for each eigenvalue. Considering the first eigenvalue, we find that:      54 x1 x =λ 1 . 45 x2 x2 Taking one of the equations we find that the eigenvector must satisfy: 5x1 + 4x2 = 9x1 . We find that x1 = x2 . Thus the eigenvalue that corresponds to λ = 9 is:   1 ~x = . 1 A similar calculation shows that the other eigenvector is:   1 ~x = . −1 8. Problem 8. The product is:  AB =
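These hand computations are easy to confirm in MATLAB. In the sketch below (not from the text), the Problem 6 matrix is reconstructed from the cofactor expansion shown above, so treat it as an assumption:

% Verify the Appendix A computations numerically.
A5 = [10 3 1; 3 9 1; 1 1 8];
disp(635 * inv(A5))           % should print [71 -23 -6; -23 79 -7; -6 -7 81]

A6 = [1 2 -2; 3 4 -2; 6 5 2]; % matrix reconstructed from the cofactor expansion above
disp(det(A6))                 % should be 0 (up to round-off)

A7 = [5 4; 4 5];
[V, D] = eig(A7)              % eigenvalues 1 and 9, eigenvectors along [1;-1] and [1;1]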

8. Problem 8. The product is:
$$AB = \begin{pmatrix}9 & 12 & 15\\ 6 & 9 & 12\end{pmatrix}.$$
Multiplying the vector by this matrix, we find that the product is:
$$\begin{pmatrix}9\\ 6\end{pmatrix}.$$

9. Problem 9.

(a) The verification is just that, and it is trivial.

(b) The norm is:
$$\|f\| = \sqrt{\int_a^b |f(t)|^2\,dt}.$$

(c) The Cauchy-Schwarz inequality states that:
$$|(f,g)| = \left|\int_a^b f(t)g(t)\,dt\right| \le \|f\|\,\|g\| = \sqrt{\int_a^b |f(t)|^2\,dt}\;\sqrt{\int_a^b |g(t)|^2\,dt}.$$

10. This problem is a simple verification and is very similar to the previous problem.
