Kasturi Vasudevan
Analog Communications Problems and Solutions
Kasturi Vasudevan Electrical Engineering Indian Institute of Technology Kanpur Kanpur, Uttar Pradesh, India
ISBN 978-3-030-50336-9    ISBN 978-3-030-50337-6 (eBook)
https://doi.org/10.1007/978-3-030-50337-6
Jointly published with ANE Books Pvt. Ltd. In addition to this printed edition, there is a local printed edition of this work available via Ane Books in South Asia (India, Pakistan, Sri Lanka, Bangladesh, Nepal and Bhutan) and Africa (all countries in the African subcontinent). ISBN of the Co-Publisher’s edition: 9789386761811 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family
Preface
Analog Communications: Problems and Solutions is suitable for third-year undergraduates taking a first course in communications. Although most present-day communication systems are digital, they continue to use many basic concepts like the Fourier transform, Hilbert transform, modulation, synchronization, signal-to-noise ratio analysis, and so on that have roots in analog communications. The book is richly illustrated with figures and easy to read.

Chapter 1 covers some of the basic concepts like the Fourier and Hilbert transforms, Fourier series, and the canonical representation of bandpass signals. Random variables and random processes are covered in Chap. 2. Amplitude modulation (AM), envelope detection, the Costas loop, the squaring loop, single sideband modulation, vestigial sideband modulation, and quadrature amplitude modulation are covered in Chap. 3. In particular, different implementations of the Costas loop are considered in Chap. 3. Chapter 4 deals with frequency modulation (FM) and various methods of generation and demodulation of FM signals. The figure-of-merit analysis of AM and FM receivers is dealt with in Chap. 5. The principles of quantization are covered in Chap. 6. Finally, topics on digital communications are covered in Chap. 7.

I would like to express my gratitude to some of my instructors at IIT Kharagpur (where I completed my undergraduate degree): Dr. S. L. Maskara (Emeritus faculty), Dr. T. S. Lamba (Emeritus faculty), Dr. R. V. Rajkumar, Dr. S. Shanmugavel, Dr. D. Dutta, and Dr. C. K. Maiti. During the early stages of my career (1991–1992), I was associated with the CAD-VLSI Group, Indian Telephone Industries Ltd., Bangalore. I would like to express my gratitude to Mr. K. S. Raghunathan (formerly a Deputy Chief Engineer at the CAD-VLSI Group) for his supervision of the implementation of a statistical fault analyzer for digital circuits. It was from him that I learnt the concepts of good programming, which I cherish and use to this day. During the course of my master's degree and Ph.D. at IIT Madras, I had the opportunity to learn the fundamental concepts of digital communications from my instructors, Dr. V. G. K. Murthy, Dr. V. V. Rao, Dr. K. Radhakrishna Rao, Dr. Bhaskar Ramamurthi, and Dr. Ashok Jhunjhunwala. It is a pleasure to
acknowledge their teaching. I also gratefully acknowledge the guidance of Dr. K. Giridhar and Dr. Bhaskar Ramamurthi who were jointly my doctoral supervisors. I also wish to thank Dr. Devendra Jalihal for introducing me to the LATEX document processing system, without which this book would not have been complete. Special mention is also due to Dr. Bixio Rimoldi of the Mobile Communications Lab, EPFL Switzerland, and Dr. Raymond Knopp, now with Institute Eurecom, Sophia Antipolis, France, for providing me the opportunity to implement some of the signal processing algorithms in real time for their software radio platform. I would like to thank many of my students for their valuable feedback. I thank my colleagues at IIT Kanpur, in particular Dr. S. C. Srivastava, Dr. V. Sinha (Emeritus faculty), Dr. Govind Sharma, Dr. Pradip Sircar, Dr. R. K. Bansal, Dr. K. S. Venkatesh, Dr. A. K. Chaturvedi, Dr. Y. N. Singh, Dr. Ketan Rajawat, Dr. Abhishek Gupta, and Dr. Rohit Budhiraja for their support and encouragement. I would also like to thank the following people for encouraging me to write this book:
• Dr. Surendra Prasad, IIT Delhi, India
• Dr. P. Y. Kam, NUS Singapore
• Dr. John M. Cioffi, Emeritus faculty, Stanford University, USA
• Dr. Lajos Hanzo, University of Southampton, UK
• Dr. Prakash Narayan, University of Maryland, College Park, USA
• Dr. P. P. Vaidyanathan, Caltech, USA
• Dr. Vincent Poor, Princeton, USA
• Dr. W. C. Lindsey, University of Southern California, USA
• Dr. Bella Bose, Oregon State University, USA
• Dr. S. Pal, former President IETE, India
• Dr. G. Panda, IIT Bhubaneswar, India
• Dr. Arne Svensson, Chalmers University of Technology, Sweden
• Dr. Lev B. Levitin, Boston University, USA
• Dr. Lillikutty Jacob, NIT Calicut, India
• Dr. Khoa N. Le, University of Western Sydney, Australia
• Dr. Hamid Jafarkhani, University of California Irvine, USA
• Dr. Aarne Mämmelä, VTT Technical Research Centre, Finland
• Dr. Behnaam Aazhang, Rice University, USA
• Dr. Thomas Kailath, Emeritus faculty, Stanford University, USA
• Dr. Stephen Boyd, Stanford University, USA
• Dr. Rama Chellappa, University of Maryland, College Park, USA
Thanks are also due to the open source community for providing operating systems like Linux and software like Scilab, LATEX, Xfig, and Gnuplot, without which this book would not have been complete. I also wish to thank Mr. Jai Raj Kapoor and his team for their skill and dedication in bringing out this book.
In spite of my best efforts, some errors might have gone unnoticed. Suggestions for improving the book are welcome. Kanpur, India
Kasturi Vasudevan
Contents
1 Signals and Systems ..... 1
   References ..... 76
2 Random Variables and Random Processes ..... 77
   References ..... 151
3 Amplitude Modulation ..... 153
   References ..... 241
4 Frequency Modulation ..... 243
   References ..... 294
5 Noise in Analog Modulation ..... 295
   Reference ..... 326
6 Pulse Code Modulation ..... 327
   References ..... 356
7 Signaling Through AWGN Channel ..... 357
Index ..... 369
About the Author
Kasturi Vasudevan completed his Bachelor of Technology (Honours) from the Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, India, in 1991, and his M.S. and Ph.D. from the Department of Electrical Engineering, IIT Madras, in 1996 and 2000, respectively. During 1991–1992, he was employed with Indian Telephone Industries Ltd., Bangalore, India. He was a Postdoctoral Fellow at the Mobile Communications Lab, EPFL, Switzerland, and then an Engineer at Texas Instruments, Bangalore. Since July 2001, he has been a faculty member in the Department of Electrical Engineering at IIT Kanpur, where he is now a Professor. His interests lie in the area of communications.
7. The sinusoidal signal A cos(2π f_0 t), with A > 0, is full wave rectified to obtain g_p(t).
(a) Using the complex Fourier series representation of g_p(t) and Parseval's power theorem, compute
S = 9 (2/π)² [1 + 2/3² + · · · + 2/(1 − 4n²)² + · · ·],          (1.35)
where n is an integer greater than or equal to one. (b) The full wave rectified signal is passed through an ideal bandpass filter having a gain of 2 in the passband frequency range of f_0 < |f| < 5 f_0. Compute the power of the output signal.
• Solution: The complex Fourier series representation of a periodic signal g_p(t), having a period T_1, is
g_p(t) = Σ_{n=−∞}^{∞} c_n exp(j 2πnt/T_1),          (1.36)
where
c_n = (1/T_1) ∫_{−T_1/2}^{T_1/2} g_p(t) exp(−j 2πnt/T_1) dt          (1.37)
denotes the complex Fourier series coefficient. In the given problem,
g_p(t) = A |cos(2π f_0 t)|,   T_1 = 1/(2 f_0).          (1.38)
Hence
c_n = (A/T_1) ∫_{−T_1/2}^{T_1/2} |cos(2π f_0 t)| exp(−j 2πnt/T_1) dt
    = (2A/T_1) ∫_{0}^{T_1/2} cos(2π f_0 t) cos(2πnt/T_1) dt
    = (A/T_1) ∫_{0}^{T_1/2} [cos(2π(f_0 − n/T_1)t) + cos(2π(f_0 + n/T_1)t)] dt
    = (A/T_1) [sin(2π(f_0 − n/T_1)T_1/2)/(2π(f_0 − n/T_1)) + sin(2π(f_0 + n/T_1)T_1/2)/(2π(f_0 + n/T_1))]
    = A [cos(nπ)/(2π(f_0 T_1 − n)) + cos(nπ)/(2π(f_0 T_1 + n))]
    = (A(−1)^n/π) [1/(1 − 2n) + 1/(1 + 2n)]
    = 2A(−1)^n / (π(1 − 4n²)).          (1.39)
Observe that in (1.39) c−n = cn .
(1.40)
From Parseval's power theorem, we have
(1/T_1) ∫_{−T_1/2}^{T_1/2} |g_p(t)|² dt = Σ_{n=−∞}^{∞} |c_n|²
    = |c_0|² + 2 Σ_{n=1}^{∞} |c_n|²
    = (2A/π)² [1 + 2/3² + · · · + 2/(1 − 4n²)² + · · ·],          (1.41)
where we have used (1.40). Comparing (1.35) with the last equation of (1.41), we obtain
A = 3.          (1.42)
Now, the left-hand side of (1.41) is equal to
(1/T_1) ∫_{−T_1/2}^{T_1/2} |g_p(t)|² dt = (A²/T_1) ∫_{−T_1/2}^{T_1/2} cos²(2π f_0 t) dt
    = A²/2.          (1.43)
Substituting for A from (1.42), the sum in (1.35) is equal to S = 4.5.
(1.44)
The (periodic) signal at the bandpass filter output is
g_p1(t) = Σ_{n=−2, n≠0}^{2} 2 c_n exp(j 2πnt/T_1),          (1.45)
where we have assumed that the gain of the bandpass filter is 2 and c_n is given by (1.39). The signal power at the bandpass filter output is
P = Σ_{n=−2, n≠0}^{2} 4 |c_n|²
= 3.372.
(1.46)
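The numbers in (1.44) and (1.46) are easy to confirm by summing the series directly. The sketch below is an added check, not part of the original text; it uses Python with NumPy (the book's own numerical work was done with Scilab) and evaluates the coefficients of (1.39) for A = 3.

```python
import numpy as np

A = 3.0                     # amplitude obtained in (1.42)
n = np.arange(1, 100000)    # harmonic indices n >= 1

# Complex Fourier series coefficients of A|cos(2 pi f0 t)|, from (1.39)
c0 = 2.0 * A / np.pi
cn = 2.0 * A * (-1.0) ** n / (np.pi * (1.0 - 4.0 * n ** 2))

# Parseval's sum (1.41): should equal the signal power A^2/2 = S = 4.5
S = c0 ** 2 + 2.0 * np.sum(cn ** 2)
print(S)                    # ~4.5, as in (1.44)

# The bandpass filter with gain 2 passes only the n = +/-1, +/-2 harmonics
P = 8.0 * np.sum(cn[:2] ** 2)
print(P)                    # ~3.372, as in (1.46)
```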
8. (Simon Haykin 1983) Determine the autocorrelation of the Gaussian pulse given by g(t) =
(1/t_0) exp(−π t²/t_0²).          (1.47)
• Solution: We know that
exp(−π t²) ⇌ exp(−π f²)   ⇒   (1/t_0) exp(−π t²) ⇌ (1/t_0) exp(−π f²).          (1.48)
Using time scaling with a = 1/t0 we get g(t) =
|t0 | −π f 2 t02 1 −πt 2 /t02 e e = G( f ). t0 t0
(1.49)
Since G(f) is real-valued, the Fourier transform of the autocorrelation of g(t) is simply G²(f). Thus
G²(f) = exp(−2π f² t_0²) = ψ_g(f)   (say).          (1.50)
Fig. 1.4 X(f)
Once again we use the time scaling property of the Fourier transform, namely
|a| g(at) ⇌ G(f/a),          (1.51)
with 1/a = √2 in (1.49), to get
R_g(t) = (1/(|t_0| √2)) exp(−π t²/(2 t_0²)),          (1.52)
which is the required autocorrelation of the Gaussian pulse. Observe the |t0 | in the denominator of (1.52), since Rg (0) must be positive. 9. (Rodger and William 2002) Assume that the Fourier transform of x(t) has the shape as shown in Fig. 1.4. Determine and plot the spectrum of each of the following signals: (a) x1 (t) = (3/4)x(t) + (1/4)j x(t) ˆ
ˆ e j 2π f0 t (b) x2 (t) = (3/4)x(t) + (3/4)j x(t) (c) x3 (t) = (3/4)x(t) + (1/4)j x(t) ˆ e j 2πW t , where f 0 W and x(t) ˆ denotes the Hilbert transform of x(t). • Solution: We know that x(t) ˆ −j sgn ( f )X ( f ).
(1.53)
Therefore X 1 ( f ) = (3/4)X ( f ) + (1/4)sgn ( f )X ( f ) ⎧ for f > 0 ⎨ X( f ) for f = 0 . = (3/4)A ⎩ (1/2)X ( f ) for f < 0
(1.54)
The plot of X 1 ( f ) is given in Fig. 1.5. To solve part (b) let m(t) = (3/4)x(t) + (3/4)j x(t). ˆ
(1.55)
Fig. 1.5 X 1 ( f )
Fig. 1.6 X 2 ( f )
Then M( f ) = (3/4)X ( f ) + (3/4)sgn ( f )X ( f ) ⎧ ⎨ (3/2)X ( f ) for f > 0 for f = 0 . = (3/4)A ⎩ 0 for f < 0
(1.56)
Then X 2 ( f ) = M( f − f 0 ) and is given by ⎧ ⎨ (3/2)X ( f − f 0 ) for f > f 0 for f = f 0 . X 2 ( f ) = (3/4)A ⎩ 0 for f < f 0
(1.57)
The plot of X 2 ( f ) is shown in Fig. 1.6. To solve part (c) let m(t) = (3/4)x(t) + (1/4)j x(t). ˆ
(1.58)
Then M( f ) = (3/4)X ( f ) + (1/4)sgn( f ) X ( f ) ⎧ for f > 0 ⎨ X( f ) for f = 0 . = (3/4)A ⎩ (1/2)X ( f ) for f < 0
(1.59)
Fig. 1.7 X 3 ( f )
Thus X 3 ( f ) = M( f − W ) and is given by ⎧ for f > W ⎨ X( f − W) for f = W X 3 ( f ) = (3/4)A ⎩ (1/2)X ( f − W ) for f < W
(1.60)
The plot of X 3 ( f ) is given in Fig. 1.7. 10. Consider the input x(t) = rect (t/T ) cos(2π( f 0 + f )t)
(1.61)
for f f 0 . When x(t) is input to a filter with impulse response h(t) = αe−αt cos(2π f 0 t)u(t)
(1.62)
find the output by convolving the complex envelopes. Assume that the impulse response of the filter is of the form: h(t) = 2h c (t) cos(2π f 0 t) − 2h s (t) sin(2π f 0 t).
(1.63)
• Solution: The canonical representation of a bandpass signal g(t) centered at frequency f c is g(t) = gc (t) cos(2π f c t) − gs (t) sin(2π f c t),
(1.64)
where gc (t) and gs (t) extend over the frequency range [−W, W ], where W f c . The complex envelope of g(t) is defined as g(t) ˜ = gc (t) + j gs (t).
(1.65)
The given signal x(t) can be written as x(t) = rect (t/T ) [cos(2π f 0 t) cos(2π f t) − sin(2π f 0 t) sin(2π f t)] , (1.66)
1 Signals and Systems
13
where f f 0 . By inspection, we conclude that the complex envelope of x(t) is given by
x(t) ˜ = rect (t/T ) cos(2π f t) + j sin(2π f t) = rect (t/T )e j 2π f t .
(1.67)
Thus j 2π f 0 t . x(t) = x(t)e ˜
(1.68)
In order to facilitate computation of the filter output using the complex envelopes, it is given that the filter impulse response is to be represented by h(t) = 2h c (t) cos(2π f 0 t) − 2h s (t) sin(2π f 0 t),
(1.69)
where the complex envelope of the filter is ˜ = h c (t) + j h s (t). h(t)
(1.70)
Again by inspection, the complex envelope of the channel is ˜ = 1 αe−αt u(t). h(t) 2
(1.71)
Thus j 2π f 0 t ˜ . h(t) = 2h(t)e
(1.72)
Now, the complex envelope of the output is given by ˜ y˜ (t) = x(t) ˜ h(t) ∞ ˜ − τ ) dτ . = x(τ ˜ )h(t
(1.73)
τ =−∞
˜ ) in the above equation, we get Substituting for x(·) ˜ and h(τ y˜ (t) =
α 2
T /2 τ =−T /2
e j 2π f τ e−α(t−τ ) u(t − τ ) dτ .
(1.74)
Since u(t − τ ) = 0 for t < τ , and since −T /2 < τ < T /2, it is clear that y˜ (t) = 0, and hence y(t) = 0 for t < −T /2. Now, for −T /2 < t < T /2, we have
14
1 Signals and Systems
α 2
y˜ (t) =
t
e j 2π f τ e−α(t−τ ) dτ
τ =−T /2 t
α = e−αt 2
eτ (α+j 2π f ) dτ
τ =−T /2
α e−αt et (α+j 2π f ) − e−T /2(α+j 2π f ) 2(α + j 2π f )
j 2π f t α e − e−α(t+T /2)−j π f T . = 2(α + j 2π f ) =
(1.75)
Let θ1 = tan−1
2π f α
θ2 = π f T r = α2 + (2π f )2 .
(1.76)
Then for −T /2 < t < T /2 the output is y(t) = y˜ (t)e j 2π f0 t α cos(2π( f 0 + f )t − θ1 ) = 2r
− e−α(t+T /2) cos(2π f 0 t − θ1 − θ2 ) .
(1.77)
Similarly, for t > T /2 we have α y˜ (t) = 2 =
α e 2
T /2
e j 2π f τ e−α(t−τ ) dτ
τ =−T /2 T /2 −αt
eτ (α+j 2π f ) dτ
τ =−T /2
α e−αt eT /2(α+j 2π f ) − e−T /2(α+j 2π f ) = 2(α + j 2π f )
−α(t−T /2) j π f T α e = e − e−α(t+T /2) e−j π f T . 2(α + j 2π f ) (1.78) Therefore for t > T /2 the output is y(t) = y˜ (t)e j 2π f0 t α −α(t−T /2) e = cos(2π f 0 t − θ1 + θ2 ) 2r
− e−α(t+T /2) cos(2π f 0 t − θ1 − θ2 ) .
(1.79)
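As a cross-check on the complex-envelope method, the short sketch below (Python with NumPy; the parameter values are illustrative assumptions, not taken from the text) convolves x̃(t) with h̃(t) numerically and compares Re{ỹ(t) e^{j2πf_0t}} against a direct passband convolution of x(t) with h(t). The two agree up to the usual narrowband approximation and the discretization error of the sums.

```python
import numpy as np

# Illustrative parameters (assumed): carrier f0 >> offset dF, filter decay alpha, pulse length T
f0, dF, alpha, T, fs = 100.0, 2.0, 5.0, 1.0, 2000.0
t = np.arange(-T, 3 * T, 1.0 / fs)       # time axis for the input and the output
th = np.arange(0.0, 4 * T, 1.0 / fs)     # time axis for the causal filter

# Passband input (1.61) and impulse response (1.62)
x = (np.abs(t) < T / 2) * np.cos(2 * np.pi * (f0 + dF) * t)
h = alpha * np.exp(-alpha * th) * np.cos(2 * np.pi * f0 * th)

# Complex envelopes (1.67) and (1.71)
x_t = (np.abs(t) < T / 2) * np.exp(1j * 2 * np.pi * dF * t)
h_t = 0.5 * alpha * np.exp(-alpha * th)

# y(t) = Re{ (x_t * h_t)(t) exp(j 2 pi f0 t) }, cf. (1.73) and (1.77)
y_env = np.real(np.convolve(x_t, h_t)[: t.size] / fs * np.exp(1j * 2 * np.pi * f0 * t))
y_dir = np.convolve(x, h)[: t.size] / fs  # direct passband convolution

print(np.max(np.abs(y_env - y_dir)))      # small compared with max|y| ~ alpha/(2r)
```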
Fig. 1.8 Y(f): rectangles of height 4 over 17 ≤ |f| ≤ 23 (the X(f) term), a triangle of height 38.4 over −6 ≤ f ≤ 6, and triangles of height 19.2 over 34 ≤ |f| ≤ 46 (the 0.2 X(f) ⋆ X(f) term)
11. A nonlinear system defined by y(t) = x(t) + 0.2x 2 (t)
(1.80)
has an input signal with a bandpass spectrum given by
X(f) = 4 rect((f − 20)/6) + 4 rect((f + 20)/6).          (1.81)
Sketch the spectrum at the output labeling all the important frequencies and amplitudes. • Solution: The Fourier transform of the output is Y(f) = X(f) + 0.2 X(f) ⋆ X(f).
(1.82)
The spectrum of Y ( f ) can be found out by inspection and is shown in Fig. 1.8. 12. (Simon Haykin 1983) Let R12 (τ ) denote the cross-correlation function of two energy signals g1 (t) and g2 (t). (a) Using Fourier transforms, show that (m+n) R12 (τ )
= (−1)
n
∞ t=−∞
h 1 (t)h ∗2 (t − τ ) dt,
(1.83)
where h 1 (t) = g1(m) (t) h 2 (t) = g2(n) (t) denote the mth and nth derivatives of g1 (t) and g2 (t), respectively.
(1.84)
Fig. 1.9 Pulses g1 (t) and g2 (t)
(b) Use the above relation with m = 1 and n = 0 to evaluate and sketch the cross-correlation function R12 (τ ) of the pulses g1 (t) and g2 (t) shown in Fig. 1.9. • Solution: We know that R12 (t) = g1 (t) g2∗ (−t) G 1 ( f )G ∗2 ( f ) (m+n) ⇒ R12 (t) (j 2π f )m G 1 ( f ) (j 2π f )n G ∗2 ( f ) (m+n) (t) (j 2π f )m G 1 ( f ) (−1)n (−j 2π f )n G ∗2 ( f ). ⇒ R12
(1.85)
Let h 2 (t) = g2(n) (t) (j 2π f )n G 2 ( f )
⇒ h ∗2 (−t) (−j 2π f )n G ∗2 ( f ).
(1.86)
Similarly h 1 (t) = g (m) (t) (j 2π f )m G( f ).
(1.87)
Using the fact that multiplication in the frequency domain is equivalent to convolution in the time domain, we get (m+n) (t) = (−1)n h 1 (t) h ∗2 (−t) R12 ∞ (m+n) n ⇒ R12 (τ ) = (−1) h 1 (t)h ∗2 (t − τ ) dt.
(1.88)
t=−∞
Hence proved. For the signal in Fig. 1.9, using m = 1 and n = 0, we get h 1 (t) = 2δ(t + 3) − 2δ(t − 3) h 2 (t) = g2 (t).
(1.89)
Fig. 1.10 Cross-correlation R12 (τ )
Thus (1) (τ ) R12
=
∞
h 1 (t)h 2 (t − τ ) dt
t=−∞
= 2g2 (−τ − 3) − 2g2 (3 − τ ).
(1.90)
R12^(1)(τ) and R12(τ) are plotted in Fig. 1.10.
13. Let R12(τ) denote the cross-correlation function of two energy signals g1(t) and g2(t).
(a) Using Fourier transforms, show that (m+n) (τ ) R12
= (−1)
n
∞ t=−∞
h 1 (t)h ∗2 (t − τ ) dt,
(1.91)
Fig. 1.11 Pulses g1 (t) and g2 (t)
where h 1 (t) = g1(m) (t) h 2 (t) = g2(n) (t)
(1.92)
denote the mth and nth derivatives of g1 (t) and g2 (t), respectively. (b) Use the above relation with m = 1 and n = 0 to evaluate and sketch the cross-correlation function R12 (τ ) of the pulses g1 (t) and g2 (t) shown in Fig. 1.11. • Solution: We know that R12 (t) = g1 (t) g2∗ (−t) G 1 ( f )G ∗2 ( f ) (m+n) ⇒ R12 (t) (j 2π f )m G 1 ( f ) (j 2π f )n G ∗2 ( f ) (m+n) (t) (j 2π f )m G 1 ( f ) (−1)n (−j 2π f )n G ∗2 ( f ). ⇒ R12
(1.93)
Let h 2 (t) = g2(n) (t) (j 2π f )n G 2 ( f )
⇒ h ∗2 (−t) (−j 2π f )n G ∗2 ( f ).
(1.94)
Similarly h 1 (t) = g (m) (t) (j 2π f )m G( f ).
(1.95)
Using the fact that multiplication in the frequency domain is equivalent to convolution in the time domain, we get (m+n) (t) = (−1)n h 1 (t) h ∗2 (−t) R12 ∞ (m+n) ⇒ R12 (τ ) = (−1)n h 1 (t)h ∗2 (t − τ ) dt.
(1.96)
t=−∞
Hence proved. For the signal in Fig. 1.11, using m = 1 and n = 0, we get
1 Signals and Systems
19
h 1 (t) = δ(t) + δ(t − 1) − 2δ(t − 2) h 2 (t) = g2 (t).
(1.97)
Thus (1) (τ ) R12
=
∞
h 1 (t)h 2 (t − τ ) dt
t=−∞ ∞
=
[δ(t) + δ(t − 1) − 2δ(t − 2)] g2 (t − τ )
t=−∞
= g2 (−τ ) + g2 (1 − τ ) − 2g2 (2 − τ ).
(1.98)
R12^(1)(τ) and R12(τ) are plotted in Fig. 1.12.
Fig. 1.12 Cross-correlation R12 (τ)
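Before moving on, the derivative-based route of (1.96)–(1.98) can be sanity-checked numerically. The sketch below (Python with NumPy, an addition for illustration) assumes the pulse shapes read from Fig. 1.11 — g1(t) equal to 1 on (0, 1) and 2 on (1, 2), and g2(t) equal to 1 on (0, 1) and −1 on (1, 2) — and compares a direct cross-correlation with the result of integrating (1.98).

```python
import numpy as np

# Assumed pulse shapes read from Fig. 1.11 (illustrative, sampled on a fine grid)
dt = 1e-3
t = np.arange(-4.0, 4.0, dt)
g1 = 1.0 * ((t > 0) & (t < 1)) + 2.0 * ((t > 1) & (t < 2))
g2 = 1.0 * ((t > 0) & (t < 1)) - 1.0 * ((t > 1) & (t < 2))

# Direct cross-correlation R12(tau) = integral of g1(t) g2(t - tau) dt
tau = np.arange(-4.0, 4.0, dt)
R12 = np.array([np.sum(g1 * np.interp(t - x, t, g2, left=0.0, right=0.0)) * dt
                for x in tau])

# Derivative route (1.98): R12'(tau) = g2(-tau) + g2(1 - tau) - 2 g2(2 - tau)
g2_of = lambda x: np.interp(x, t, g2, left=0.0, right=0.0)
dR12 = g2_of(-tau) + g2_of(1.0 - tau) - 2.0 * g2_of(2.0 - tau)
R12_from_deriv = np.cumsum(dR12) * dt     # integrate; R12 vanishes at tau = -4

print(np.max(np.abs(R12 - R12_from_deriv)))   # should be small (grid error only)
```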
14. (Simon Haykin 1983) Let x(t) and y(t) be the input and output signals of a linear time-invariant filter. Using Rayleigh’s energy theorem, show that if the filter is stable and the input signal x(t) has finite energy, then the output signal y(t) also has finite energy. • Solution: Let H ( f ) denote the Fourier transform of h(t). We have
∞
H( f ) =
h(t)e−j 2π f t dt
−j 2π f t h(t)e dt ⇒ |H ( f )| = t=−∞ ∞ h(t)e−j 2π f t dt ≤ t=−∞ ∞ |h(t)| dt, ⇒ |H ( f )| ≤ t=−∞ ∞
(1.99)
t=−∞
where we have used Schwarz’s inequality. Since h(t) is stable
∞
|h(t)| dt < ∞
t=−∞
⇒ |H ( f )| < ∞.
(1.100)
Let A = max |H ( f )|.
(1.101)
The energy of the output signal y(t) is
∞
|y(t)| dt = 2
t=−∞
∞ f =−∞
|Y ( f )|2 d f
(1.102)
where we have used the Rayleigh’s energy theorem. Using the fact that Y ( f ) = H ( f )X ( f )
(1.103)
the energy of the output signal is
∞ |H ( f )|2 |X ( f )|2 d f ≤ A2 |X ( f )|2 d f f =−∞ f =−∞ ∞ ∞ 2 2 |y(t)| dt ≤ A |x(t)|2 dt. ⇒ ∞
t=−∞
f =−∞
Since the input signal has finite energy, so does the output signal.
(1.104)
15. (Simon Haykin 1983) Prove the following properties of the complex exponential Fourier series representation, for a real-valued periodic signal g p (t): (a) If the periodic function g p (t) is even, that is, g p (−t) = g p (t), then the Fourier coefficient cn is purely real and an even function of n. (b) If g p (t) is odd, that is, g p (−t) = −g p (t), then cn is purely imaginary and an odd function of n. (c) If g p (t) has half-wave symmetry, that is, g p (t ± T0 /2) = −g p (t), where T0 is the period of g p (t), then cn consists of only odd order terms. • Solution: The Fourier series for any periodic signal g p (t) is given by g p (t) =
∞
cn e j 2πnt/T0
n=−∞
= a0 + 2
∞
an cos(2πnt/T0 ) + bn sin(2πnt/T0 ),
(1.105)
n=1
where ⎧ ⎨ an − j bn for n > 0 for n = 0 cn = a0 ⎩ a−n + j b−n for n < 0.
(1.106)
Note that an and bn are real-valued, since g p (t) is real-valued. To prove the first part, we note that g p (−t) =
∞
cn e−j 2πnt/T0 .
(1.107)
n=−∞
Substituting n = −m in the above equation we get g p (−t) = =
−∞
c−m e j 2πmt/T0
m=∞ ∞
c−m e j 2πmt/T0 .
(1.108)
m=−∞
Since g p (−t) = g p (t), comparing (1.105) and (1.108) we must have c−m = cm (even function of m). Moreover, from (1.106), cm must be purely real. To prove the second part, we note from (1.105) and (1.108) that c−m = −cm (odd function of m) and moreover from (1.106) it is clear that cm must be purely imaginary. To prove the third part, we note that
22
1 Signals and Systems ∞
g p (t ± T0 /2) = =
n=−∞ ∞
cn e j 2πn(t±T0 /2)/T0 cn (−1)n e j 2πnt/T0 .
(1.109)
n=−∞
Since it is given that g p (t ± T0 /2) = −g p (t), comparing (1.105) and (1.109) we conclude that cn = 0 for n = 2m. 16. (Simon Haykin 1983) A signal x(t) of finite energy is applied to a square-law device whose output y(t) is given by y(t) = x 2 (t).
(1.110)
The spectrum of x(t) is limited to the frequency interval −W ≤ f ≤ W . Show that the spectrum of y(t) is limited to −2W ≤ f ≤ 2W . • Solution: We know that multiplication of signals in the time domain is equivalent to convolution in the frequency domain. Let x(t) X ( f ).
(1.111)
Therefore Y( f ) = =
∞
α=−∞ W α=−W
X (α)X ( f − α) dα X (α)X ( f − α) dα,
(1.112)
where we have used the fact that X (α) is bandlimited to −W ≤ α ≤ W . Consequently, we must also have −W ≤ f −α≤ W
(1.113)
so that the product X (α)X ( f − α) is non-zero and contributes to the integral. The plot of X ( f − α) as a function of α and f is shown in Fig. 1.13. We see that the convolution integral is non-zero (for this particular example) only for −2W ≤ f ≤ 2W . The important point to note here is that the spectrum of Y ( f ) cannot exceed −2W ≤ f ≤ 2W . 17. A signal x(t) has the Fourier transform shown in Fig. 1.14. (a) Compute its Hilbert transform x(t). ˆ (b) Compare the area under x(t) ˆ and the value of Xˆ (0). Comment on your answer.
1 Signals and Systems
23
Fig. 1.13 Plot of X (α) and X ( f − α) for different values of f
X(α)
α −W
W
0
X(−2W − α) f = −2W α −3W
−W
W
0
X(2W − α) f = 2W α −3W
−W
W
0
3W
X(−α) f =0 α −W
W
0
Fig. 1.14 Fourier transform of x(t)
X(f ) A −2B
f −A
2B
• Solution: We know that A rect(t/T ) AT sinc( f T ).
(1.114)
Using duality and substituting 2B for T we have A rect
f 2B
2 AB sinc(2Bt).
(1.115)
24
1 Signals and Systems
Now X ( f ) = A rect
f −B 2B
− A rect
f +B 2B
2 AB sinc(2Bt)e j 2π Bt − 2 AB sinc(2Bt)e−j 2π Bt ⇒ X ( f ) j 4 AB sinc(2Bt) sin(2π Bt). (1.116) The Fourier transform of x(t) ˆ is Xˆ ( f ) =
−j A rect 0
f −B 2B
− j A rect
f +B 2B
for f = 0 for f = 0.
(1.117)
From (1.115) we see that the inverse Fourier transform of Xˆ ( f ) is Xˆ ( f ) x(t) ˆ = −j 2 AB sinc(2t B)e j 2π Bt − j 2 AB sinc(2t B)e−j 2π Bt = −j 4 AB sinc(2t B) cos(2π Bt) = −j 4 AB sinc(4t B).
(1.118)
Note that
∞
x(t) ˆ dt = −j A = Xˆ (0) = 0.
(1.119)
t=−∞
This is because the property
∞
g(t) dt = G(0)
(1.120)
t=−∞
is valid only when G( f ) is continuous at f = 0. 18. Consider the system shown in Fig. 1.15. Assume that the current in branch AB is zero. The voltage at point A is v1 (t). (a) Find out the time-domain expression that relates v1 (t) and vi (t).
Fig. 1.15 Block diagram of the system: vi(t) = 3 cos(6πt) V drives a series connection of a 1 Ω resistor and a 2 F capacitor; v1(t) is the capacitor voltage at point A (no current flows into branch AB), and vo(t) = dv1(t)/dt
(b) Using the Fourier transform, find out the relation between V1 ( f ) and Vi ( f ). (c) Find the relation between Vo ( f ) and Vi ( f ). (d) Compute the power dissipated when the output voltage vo (t) is applied across a 1 resistor. • Solution: We know that vi (t) = i(t)R + v1 (t).
(1.121)
However i(t) = C
dv1 (t) . dt
(1.122)
Thus vi (t) = RC
dv1 (t) + v1 (t). dt
(1.123)
Taking the Fourier transform of both sides, we get Vi ( f ) = RCj 2π f V1 ( f ) + V1 ( f ) Vi ( f ) ⇒ V1 ( f ) = . 1 + j 2π f RC
(1.124)
It is given that dv1 (t) dt ⇒ Vo ( f ) = j 2π f V1 ( f ) j 2π f Vi ( f ) . = 1 + j 2π f RC vo (t) =
(1.125)
Now
Vi(f) = (3/2) [δ(f − 3) + δ(f + 3)].          (1.126)
Therefore
Vo(f) = (3/2) [j 6π/(1 + j 12π)] δ(f − 3) + (3/2) [−j 6π/(1 − j 12π)] δ(f + 3).          (1.127)
Taking the inverse Fourier transform we get vo (t) = c1 e j 6πt + c2 e−j 6πt .
(1.128)
Fig. 1.16 Block diagram of the system: vi(t) = 5 sin(4πt) V drives a series connection of a 0.5 H inductor and a 2 Ω resistor; v1(t) is the resistor voltage at point A (no current flows into branch AB), and vo(t) = dv1(t)/dt
Using Parseval's power theorem we get the power of vo(t) as
P = |c1|² + |c2|² = (2 × 9/4) × 36π²/(1 + 144π²) ≈ 9/8 W.
(1.129)
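The value in (1.129) can be confirmed directly from the transfer function in (1.125). The following sketch (Python with NumPy, added here as a check) evaluates it at the input frequency of 3 Hz.

```python
import numpy as np

R, C = 1.0, 2.0                        # resistor and capacitor of Fig. 1.15
f = 3.0                                # vi(t) = 3 cos(6 pi t) has frequency 3 Hz

# Vo(f)/Vi(f) = j 2 pi f / (1 + j 2 pi f R C), from (1.125)
H = 1j * 2 * np.pi * f / (1 + 1j * 2 * np.pi * f * R * C)

c1 = 1.5 * H                           # Vi(f) has lines of weight 3/2 at +/- 3 Hz
P = 2 * abs(c1) ** 2                   # |c1|^2 + |c2|^2 with |c2| = |c1|
print(P, 9 / 8)                        # ~1.125 W, matching (1.129)
```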
19. Consider the system shown in Fig. 1.16. Assume that the current in branch AB is zero. The voltage at point A is v1 (t). (a) (b) (c) (d) (e)
Find out the time-domain expression that relates vi (t) and i(t). Using the Fourier transform, find out the relation between V1 ( f ) and Vi ( f ). Find the relation between Vo ( f ) and Vi ( f ). Compute vo (t). Compute the power dissipated when the output voltage vo (t) is applied across a 1/2 resistor.
• Solution: We know that vi (t) = v1 (t) + Ldi(t)/dt = i(t)R + Ldi(t)/dt.
(1.130)
Taking the Fourier transform of both sides we get Vi ( f ) = R I ( f ) + j 2π f L I ( f ) Vi ( f ) . ⇒ I( f ) = R + j 2π f L
(1.131)
Therefore V1 ( f ) = R I ( f ) Vi ( f ) = 1 + j 2π f L/R Vi ( f ) , = 1 + j π f /2
(1.132)
1 Signals and Systems
27
where we have substituted L = 0.5 H and R = 2 . Now vo (t) = dv1 (t)/dt ⇒ Vo ( f ) = j 2π f V1 ( f ) j 2π f = Vi ( f ). 1 + j π f /2
(1.133)
Since Vi ( f ) =
5 [δ( f − 2) − δ( f + 2)] 2j
(1.134)
we get Vo ( f ) = 5δ( f − 2)
(−2π) 2π − 5δ( f + 2) . 1 + jπ 1 − jπ
(1.135)
Taking the inverse Fourier transform we get vo (t) = c1 e j 4πt + c2 e−j 4πt ,
(1.136)
where c1 =
10π 1 + jπ
= Aej φ 10π c2 = 1 − jπ = Ae−j φ ,
(1.137)
where 10π A= √ 1 + π2 φ = − tan−1 (π).
(1.138)
Substituting (1.137) and (1.138) in (1.136), we obtain vo (t) = 2 A cos(4πt + φ). Using Parseval’s power theorem we get the power of vo (t) as
(1.139)
28
1 Signals and Systems
P = |c1 |2 + |c2 |2 = 2 A2 200π 2 = W. 1 + π2
(1.140)
The power dissipated across the 1/2 resistor is 2P. 20. Consider a complex-valued signal g(t). Let g1 (t) = g ∗ (−t). Let g1(n) (t) denote the nth derivative of g1 (t). Consider another signal g2 (t) = g (n) (t). Is g2∗ (−t) = g1(n) (t)? Justify your answer using Fourier transforms. • Solution: Let G( f ) denote the Fourier transform of g(t). Then we have g1 (t) = g ∗ (−t) G ∗ ( f ) = G 1 ( f )
(say).
(1.141)
Therefore g1(n) (t) (j 2π f )n G 1 ( f ) ⇒ g1(n) (t) (j 2π f )n G ∗ ( f ).
(1.142)
Next consider g2 (t). We have (say) g2 (t) = g (n) (t) (j 2π f )n G( f ) = G 2 ( f ) ∗ ∗ n ∗ ⇒ g2 (−t) G 2 ( f ) = (−j 2π f ) G ( f ).
(1.143)
Comparing (1.142) and (1.143), we see that
g2*(−t) = g1^(n)(t)   when n is even,
g2*(−t) = −g1^(n)(t)   when n is odd.          (1.144)
21. (Simon Haykin 1983) Consider N stages of the RC-lowpass filter as illustrated in Fig. 1.17. (a) Compute the magnitude response |Vo(f)/Vi(f)| of the overall cascade connection. The current drawn by the buffers is zero.
Fig. 1.17 N stages of the RC-lowpass filter
(b) Show that the magnitude response in the vicinity of f = 0 approaches a Gaussian function given by exp(−(N /2)4π 2 f 2 R 2 C 2 ), for large values of N . • Solution: For a single stage, the input and output voltages are related as follows: v1 (t) = i(t)R + v2 (t).
(1.145)
However, i(t) = C
dv2 (t) . dt
(1.146)
Therefore (1.145) can be rewritten as v1 (t) = RC
dv2 (t) + v2 (t). dt
(1.147)
Taking the Fourier transform of both sides, we get V1 ( f ) = (j 2π f RC + 1)V2 ( f ) V2 ( f ) 1 ⇒ = . V1 ( f ) 1 + j 2π f RC
(1.148)
The magnitude response of the overall cascade connection is Vo ( f ) 1 V ( f ) = (1 + 4π 2 f 2 τ 2 ) N /2 , i 0
(1.149)
where τ0 = RC. Note that Vo ( f ) =1 lim f →0 Vi ( f )
(1.150)
and for large values of N Vo ( f ) V (f) → 0 i
for 4π 2 f 2 τ02 > 1.
(1.151)
Therefore, we are interested in the magnitude response in the vicinity of f = 0, for large values of N . We make use of the Maclaurin series expansion of a function f (x) about x = 0 as follows (assuming that all the derivatives exist): f (x) = f (0) +
f (2) (0) 2 f (1) (0) x+ x + ··· 1! 2!
(1.152)
30
1 Signals and Systems
where f
(n)
d n f (x) (0) = . d x n x=0
(1.153)
1 (1 + 4π 2 τ02 x) N /2
(1.154)
In the present context f (x) = with x = f 2 . We have f (0) = 1 4π 2 τ02 −N (1) f (0) = 2 (1 + 4π 2 τ02 x) N /2+1 x=0 −N (4π 2 τ02 ) = 2 2 2 2 4π τ0 N N (2) +1 f (0) = 2 2 (1 + 4π 2 τ02 x) N /2+2 x=0 2 N 2 2 4π τ0 , ≈ 2
(1.155)
where we have assumed that for large N N N +1≈ . 2 2
(1.156)
Generalizing (1.155), we can obtain the nth derivative of the Maclaurin series as N N N + 1 ... +n−1 f (n) (0) = (−1)n 2 2 2 2 2 n 4π τ0 × (1 + 4π 2 τ02 x) N /2+n x=0 n N 4π 2 τ02 , ≈ (−1)n (1.157) 2 where we have used the fact that for any finite n N N N +n−1≈ . 2 2 Thus f (x) in (1.154) can be written as
(1.158)
1 Signals and Systems Fig. 1.18 A periodic waveform
31 gp (t)
(a)
2 t −5T /16
T
5T /16
0
|H(f )|
(b)
4 3 f −5/(2T )
−3/(2T )
0
3/(2T )
5/(2T )
∠H(f )
(c)
π f
−π
y2 y x + x2 − · · · 1! 2! = exp (−x y) ,
f (x) = 1 −
(1.159)
where y = (N /2)4π 2 τ02 = (N /2)4π 2 R 2 C 2 .
(1.160)
Finally, substituting for x we get the desired result which is Gaussian. Observe that, in order to satisfy the condition n N , the nth term of the series in (1.159) must tend to zero for n N . This can happen only when xy < 1 ⇒ (N /2)4π R C f 2 < 1 2
2
2
⇒ |f|
0. When a = 0, the function reduces to a constant g(− f 0 ) whose inverse Fourier transform is ∞ g(− f 0 ) e j 2π f t d f G 1 (t) = f =−∞
= g(− f 0 )δ(t).
(1.179)
The alternate solution is as follows: g(t) G( f ) 1 G( f /a) |a| 1 ⇒ h(t − t0 /a) = g(at − t0 ) H ( f )e−j 2π f t0 /a = G( f /a)e−j 2π f t0 /a . |a| (1.180) ⇒ h(t) = g(at) H ( f ) =
Since p( f ) P(−t) 1 ⇒ g(a f − f 0 ) G(−t/a)ej 2πt f0 /a . |a|
(1.181)
25. (Simon Haykin 1983) Let Rg (t) denote the autocorrelation of an energy signal g(t). (a) Using Fourier transforms, show that Rg(m+n) (τ )
= (−1)
n
∞
t=−∞
g2 (t)g1∗ (t − τ ) dt,
(1.182)
where the asterisk denotes the complex conjugate and g1 (t) = g (n) (t), g2 (t) = g (m) (t), where g (m) (t) denotes the mth derivative of g(t).
(1.183)
1 Signals and Systems
37
Fig. 1.21 Plot of g(t)
g(t) A t
T
2T
3T
4T
(b) Using the above relation, evaluate and sketch the autocorrelation of the signal in Fig. 1.21. You can use m = 1 and n = 0. • Solution: We know that Rg (t) = g(t) g ∗ (−t) G( f )G ∗ ( f ) ⇒ R (m+n) (t) (j 2π f )m G( f ) (j 2π f )n G ∗ ( f ) ⇒ R (m+n) (t) (j 2π f )m G( f ) (−1)n (−j 2π f )n G ∗ ( f ), (1.184) where “” denotes convolution. Let g1 (t) = g (n) (t) (j 2π f )n G( f ) ⇒ g1∗ (−t) (−j 2π f )n G ∗ ( f ).
(1.185)
g2 (t) = g (m) (t) (j 2π f )m G( f ).
(1.186)
Similarly let
Thus, using the fact that multiplication in the frequency domain is equivalent to convolution in the time domain, we get Rg(m+n) (t) = (−1)n g2 (t) g1∗ (−t) ∞ ⇒ Rg(m+n) (τ ) = (−1)n g2 (t)g1∗ (t − τ ) dt.
(1.187)
t=−∞
Hence proved. For the signal in Fig. 1.21, using m = 1 and n = 0 we get g1 (t) = g(t) g2 (t) = g (1) (t) = A (δ(t) − 2δ(t − 2T ) + 2δ(t − 3T ) − δ(t − 4T )) . (1.188)
38
1 Signals and Systems
(1)
Rg (t)
5A2
A2 t
−A2
−5A2
Rg (t)
4A2 T
A2 T t
−4T
−2T
0
T
−A2 T
3T
(1)
Fig. 1.22 Plot of Rg (t) and Rg (t)
Hence Rg(1) (t) = g2 (t) g ∗ (−t).
(1.189)
Using the fact that for any signal x(t) x(t) δ(t) = x(t),
(1.190)
we get Rg(1) (t) = A (g(−t) − 2g(−t + 2T ) + 2g(−t + 3T ) − g(−t + 4T )) . (1.191) Rg(1) (t) and Rg (t) are plotted in Fig. 1.22. 26. Let Rg (t) denote the autocorrelation of an energy signal g(t) shown in Fig. 1.23. (a) Express the first derivative of Rg (t) in terms of the derivative of g(t). (b) Draw the first derivative of Rg (t). Show all the steps.
1 Signals and Systems
39
Fig. 1.23 An energy signal
g(t) 2 1 t −1
0
2
(c) Hence sketch Rg (t). • Solution: We know that Rg (t) = g(t) g(−t) G( f )G ∗ ( f ) ⇒ d Rg (t)/dt (j 2π f G( f )) G ∗ ( f ) ⇒ d Rg (t)/dt = dg(t)/dt g(−t),
(1.192)
where “” denotes convolution. Now dg(t)/dt = δ(t + 1) + δ(t) − 2δ(t − 2).
(1.193)
Therefore, from (1.192), we get d Rg (t)/dt = s(t + 1) + s(t) − 2s(t − 2),
(1.194)
s(t) = g(−t).
(1.195)
where
The function d Rg (t)/dt is plotted in Fig. 1.24. Finally, the autocorrelation of g(t) is presented in Fig. 1.25. 27. (Simon Haykin 1983) Let R12 (τ ) and R21 (τ ) denote the cross-correlation functions of two energy signals g1 (t) and g2 (t) (a) Show that the total area under R12 (τ ) is defined by
∞ τ =−∞
R12 (τ ) dτ =
∞
g1 (t) dt
t=−∞
∞
g2 (t) dt
∗
. (1.196)
t=−∞
(b) Show that ∗ (−τ ). R12 (τ ) = R21
• Solution: We know that
(1.197)
40
1 Signals and Systems
Fig. 1.24 Obtaining the first derivative of the autocorrelation of g(t) in Fig. 1.23
s(t) = g(−t) 2 1 t −2
0
1 s(t + 1) 2 1 t
−3 −2 −1
0
1 s(t − 2)
2 1 t −3 −2 −1
0
1
2
3
dRg (t)/dt 4 3 2 0 1
2
3
t
−3 −2 −1 −2 −3 −4
R12 (τ ) =
∞
t=−∞
g1 (t)g2∗ (t − τ ) dt
= g1 (t) g2∗ (−t) G 1 ( f )G ∗2 ( f ),
(1.198)
where “” denotes convolution and we have used the fact that convolution in the time domain is equivalent to multiplication in the frequency domain. Thus, we have
1 Signals and Systems
41
Fig. 1.25 Obtaining the autocorrelation of g(t) in Fig. 1.23
dRg (t)/dt 4 3 2 0 1
2
3
t
−3 −2 −1 −2 −3 −4 Rg (t) 9
6
2
t −3 −2 −1
∞ τ =−∞
0
1
2
3
R12 (τ ) exp (−j 2π f τ ) dτ = G 1 ( f )G ∗2 ( f ) ∞ R12 (τ ) = G 1 (0)G ∗2 (0) ⇒ τ =−∞ ∞ g1 (t) dt = t=−∞ ∞ ∗ g2 (t) dt . t=−∞
(1.199)
42
1 Signals and Systems
Thus proved. To prove the second part we note that
∞
R21 (τ ) = ⇒
∗ (τ ) R21
=
t=−∞ ∞ t=−∞
g2 (t)g1∗ (t − τ ) dt g2∗ (t)g1 (t − τ ) dt.
(1.200)
Now we substitute t − τ = x to get ∗ (τ ) = R21 ∗ (−τ ) = ⇒ R21
∞
x=−∞ ∞ x=−∞
g2∗ (x + τ )g1 (x) d x. g2∗ (x − τ )g1 (x) d x.
= R12 (τ ).
(1.201)
Thus proved. 28. (Simon Haykin 1983) Consider two periodic signals g p1 (t) and g p2 (t), both of period T0 . Show that the cross-correlation function R12 (τ ) satisfies the Fourier transform pair: R12 (τ )
∞ 1 G 1 (n/T0 )G ∗2 (n/T0 )δ( f − n/T0 ), T02 n=−∞
(1.202)
where G 1 (n/T0 ) and G 2 (n/T0 ) are the Fourier transforms of the generating functions for the periodic functions g p1 (t) and g p2 (t), respectively. • Solution: For periodic signals we know that R12 (τ ) =
1 T0
T0 /2
t=−T0 /2
g p1 (t)g ∗p2 (t − τ ) dt.
(1.203)
Define g1 (t) = g2 (t) =
g p1 (t) for −T0 /2 ≤ t < T0 /2 0 elsewhere g p2 (t) for −T0 /2 ≤ t < T0 /2 0 elsewhere.
(1.204)
Note that g1 (t) and g2 (t) are the generating functions of g p1 (t) and g p2 (t), respectively. Hence, we have the following relationships:
1 Signals and Systems
43 ∞
g p1 (t) = g p2 (t) =
g1 (t − mT0 )
m=−∞ ∞
g2 (t − mT0 ).
(1.205)
m=−∞
Using the first equation of (1.204) and the second equation of (1.205), we get 1 R12 (τ ) = T0 1 ⇒ R12 (τ ) = T0
T0 /2
t=−T0 /2
∞
g1 (t)
g1 (t)
t=−∞
∞
g2∗ (t − τ − mT0 ) dt
m=−∞ ∞
g2∗ (t − τ − mT0 ) dt.
(1.206)
m=−∞
Interchanging the order of the summation and the integration, we get R12 (τ ) =
∞ 1 Rg g (τ + mT0 ). T0 m=−∞ 1 2
(1.207)
Note that (a) R12 (τ ) is periodic with period T0 . (b) The span of Rg1 g2 (τ ) may exceed T0 , hence the summation in (1.207) may result in aliasing (overlapping). Therefore R12 (τ ) =
1 Rg g (τ ) T0 1 2
for −T0 /2 ≤ τ < T0 /2.
(1.208)
Since R12 (τ ) is periodic with period T0 , it can be expressed as a complex Fourier series as follows: ∞
R12 (τ ) =
cn exp (j 2πnτ /T0 ) ,
(1.209)
n=−∞
where cn =
1 T0
T0 /2
τ =−T0 /2
R12 (τ ) exp (−j 2πnτ /T0 ) dτ .
(1.210)
Substituting for R12 (τ ) in the above equation, we get 1 cn = 2 T0
T0 /2
∞
τ =−T0 /2 m=−∞
Rg1 g2 (τ + mT0 ) exp (−j 2πnτ /T0 ) dτ . (1.211)
44
1 Signals and Systems
Substituting for Rg1 g2 (τ ) in the above equation, we get cn =
1 T02
∞
T0 /2
∞
τ =−T0 /2 m=−∞ t=−∞
g1 (t)g2∗ (t − τ − mT0 ) dt
exp (−j 2πnτ /T0 ) dτ .
(1.212)
Substituting t − τ − mT0 = x
(1.213)
and interchanging the order of the integrals, we get cn =
1 T02
∞
g1 (t) dt
t=−∞
∞ m=−∞
t+T0 /2−mT0 x=t−T0 /2−mT0
g2∗ (x)
exp (−j 2πn(t − x − mT0 )/T0 ) d x.
(1.214)
Combining the summation and the second integral and rearranging terms, we get ∞ 1 g1 (t) exp (−j 2πnt/T0 ) dt cn = 2 T t=−∞ 0∞ g2∗ (x) exp (j 2πnx/T0 ) d x x=−∞
= G 1 (n/T0 )G ∗2 (n/T0 ).
(1.215)
Hence R12 (τ ) =
∞ 1 G 1 (n/T0 )G ∗2 (n/T0 ) exp (j 2πnτ /T0 ) T02 n=−∞ ∞ 1 G 1 (n/T0 )G ∗2 (n/T0 )δ( f − n/T0 ). T02 n=−∞
(1.216)
Thus proved. 29. Compute the Fourier transform of d x(t) , dt
(1.217)
where the “hat” denotes the Hilbert transform. Assume that the Fourier transform of x(t) is X ( f ). • Solution: We have
1 Signals and Systems
45
Fig. 1.26 Plot of g(t)
g(t)
1 t −2 −1
1
2
d x(t) j 2π f X ( f ) dt d x(t) ⇒ −j sgn ( f ) (j 2π f ) X ( f ) dt = 2π| f |X ( f ).
(1.218)
30. Prove the following using Schwarz’s inequality:
∞
|g(t)| dt dg(t) ≤ dt dt t=−∞ ∞ 2 d g(t) ≤ dt 2 dt.
|G( f )| ≤
t=−∞ ∞
|j 2π f G( f )| (j 2π f )2 G( f )
(1.219)
t=−∞
Schwarz’s inequality states that
x2 x=x1
f (x) d x ≤
x2
| f (x)| d x.
(1.220)
x=x1
Evaluate the three bounds on |G( f )| for the pulse shown in Fig. 1.26. • Solution: We start from the Fourier transform relation: ∞ g(t)e−j 2π f t dt. G( f ) =
(1.221)
f =−∞
Invoking Schwarz’s inequality, we have |G( f )| ≤ ⇒ |G( f )| ≤ Similarly, we have
∞ f =−∞ ∞ f =−∞
g(t)e−j 2π f t dt |g(t)| dt.
(1.222)
46
1 Signals and Systems
∞
dg(t) −j 2π f t dt e f =−∞ dt ∞ dg(t) ⇒ |j 2π f G( f )| ≤ dt dt f =−∞ j 2π f G( f ) =
(1.223)
and
∞
d 2 g(t) −j 2π f t e dt 2 f =−∞ dt ∞ 2 d g(t) ⇒ (j 2π f )2 G( f ) ≤ dt 2 dt. (j 2π f )2 G( f ) =
(1.224)
f =−∞
The various derivatives of g(t) are shown in Fig. 1.27. For the given pulse
2
|g(t)| dt = 3 2 dg(t) dt dt = 2 t=−2 2+ 2 d g(t) dt 2 dt = 4. − t=−2
(1.225)
t=−2
Fig. 1.27 Various derivatives of g(t)
g(t) 1 t −2 −1
1
2
dg(t) dt
1 1
2
t
−2 −1 −1 d2 g(t) dt2
1 −1
t
1 2
−2 −1
Fig. 1.28 Various bounds for |G(f)|: |G(f)| plotted together with the bounds 3, 2/(2π|f|), and 4/(2πf)²
Therefore, the three bounds are
|G(f)| ≤ 3,   |G(f)| ≤ 2/(2π|f|),   |G(f)| ≤ 4/(2πf)².          (1.226)
These bounds on |G(f)| are shown plotted in Fig. 1.28. It can be shown that
G(f) = [cos(2πf) − cos(4πf)]/(2π² f²).          (1.227)
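The closed form (1.227) and the bounds in (1.226) are easy to confirm numerically; the sketch below (Python with NumPy, added here for illustration) evaluates G(f) for the trapezoidal pulse of Fig. 1.26 directly.

```python
import numpy as np

# Trapezoidal pulse of Fig. 1.26: 1 for |t| <= 1, falling linearly to 0 at |t| = 2
dt = 1e-3
t = np.arange(-2.0, 2.0, dt)
g = np.clip(2.0 - np.abs(t), 0.0, 1.0)

f = np.linspace(0.05, 2.0, 40)
# Numerical Fourier transform G(f) = integral of g(t) exp(-j 2 pi f t) dt
G_num = np.array([np.sum(g * np.exp(-2j * np.pi * fk * t)) * dt for fk in f])
G = (np.cos(2 * np.pi * f) - np.cos(4 * np.pi * f)) / (2 * np.pi ** 2 * f ** 2)
print(np.max(np.abs(G_num - G)))          # small discretization error

# The three bounds of (1.226) hold everywhere (the third is tight at half-integer f)
print(np.all(np.abs(G) <= 3.0),
      np.all(np.abs(G) <= 2.0 / (2 * np.pi * f) + 1e-12),
      np.all(np.abs(G) <= 4.0 / (2 * np.pi * f) ** 2 + 1e-12))
```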
31. (Simon Haykin 1983) A signal that is popularly used in communication systems is the raised cosine pulse. Consider a periodic sequence of these pulses as shown in Fig. 1.29. Compute the first three terms (n = 0, 1, 2) of the (real) Fourier series expansion. • Solution: The (real) Fourier series expansion of a periodic signal g p (t) can be written as follows: g p (t) = a0 + 2
∞ n=1
an cos(2πnt/T0 ) + bn sin(2πnt/T0 ).
(1.228)
Fig. 1.29 Plot of g_p(t): over one period (T_0 = 2), g_p(t) = 1 + cos(2πt) for |t| ≤ 1/2 and 0 for 1/2 < |t| ≤ 1
In the given problem T0 = 2. Moreover, since the given function is even, bn = 0. Thus, we have 1 1 g p (t) dt 2 t=−1 1 1/2 (1 + cos(2πt)) dt = 2 t=−1/2 1 = . 2
a0 =
(1.229)
Similarly 1 1 g p (t) cos(2πnt/2) dt 2 t=−1 1 1/2 (1 + cos(2πt)) cos(nπt) dt = 2 t=−1/2 1 sin(n − 2)π/2 sin(n + 2)π/2 1 sin(nπ/2) + + .(1.230) = nπ 2 (n − 2)π (n + 2)π
an =
Substituting n = 1, 2 in (1.230), we have
a_1 = 4/(3π),   a_2 = 1/4.          (1.231)
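As an added numerical check (Python with NumPy, not part of the original text), the coefficients a_0, a_1, a_2 of (1.229)–(1.231) can be obtained by integrating over one period:

```python
import numpy as np

dt = 1e-5
t = np.arange(-1.0, 1.0, dt)                 # one period, T0 = 2
gp = (1.0 + np.cos(2 * np.pi * t)) * (np.abs(t) <= 0.5)

# a_n = (1/T0) * integral of gp(t) cos(2 pi n t / T0) dt over one period
a = [np.sum(gp * np.cos(2 * np.pi * n * t / 2.0)) * dt / 2.0 for n in (0, 1, 2)]
print(a)                                     # ~[0.5, 4/(3 pi) = 0.4244, 0.25]
```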
32. (Simon Haykin 1983) Any function g(t) can be expressed as the sum of ge (t) and go (t), where 1 [g(t) + g(−t)] 2 1 go (t) = [g(t) − g(−t)] . 2 ge (t) =
(1.232)
(a) Sketch the even and odd parts of g(t) = A rect
1 t . − T 2
(1.233)
(b) Find the Fourier transform of these two parts. • Solution: Note that A rect
t 1 − T 2
= A rect
t − (T /2) , T
(1.234)
which is nothing but rect (t/T ) shifted by T /2. This is illustrated in Fig. 1.30, along with the even and odd parts of g(t). We know that
Fig. 1.30 Plot of g(t) and the corresponding even and odd functions
g(t) A t T
0 ge (t) A/2
t −T
T
0 go (t) A/2
−T
t 0 −A/2
T
50
1 Signals and Systems
A rect
t AT sinc ( f T ) . T
(1.235)
Therefore by inspection, we have ge (t)
A (2T ) sinc ( f (2T )) = AT sinc (2 f T ) . 2
(1.236)
Similarly A A T sinc ( f T ) e−j 2π f T /2 − T sinc ( f T ) ej 2π f T /2 2 2 = −j AT sinc ( f T ) sin(π f T ). (1.237)
go (t)
33. Let g(t) = e−πt . Let 2
y(t) = g(at) δ(t − t0 ),
(1.238)
where “” denotes convolution, a and t0 are real-valued constants. (a) (b) (c) (d)
Write down the Fourier transform of g(t). No derivation is required. Derive the Fourier transform of y(t). Derive the Fourier transform of the autocorrelation of y(t). Find the autocorrelation of y(t).
Show all the steps. Use Fourier transform properties. Derivation of any Fourier transform property is not required. • Solution: We know that g(t) = e−πt e−π f = G( f ). 2
2
(1.239)
Now y(t) = g(at) δ(t − t0 ) ∞ = δ(τ − t0 )g[a(t − τ )] dτ τ =−∞
= g[a(t − t0 )] 2 2 = e−πa (t−t0 ) .
(1.240)
Using the Fourier transform properties of time scaling and time shifting, we have
1 Signals and Systems
51
x1 (t) = g(at) ⇒ x1 (t − t0 ) = g[a(t − t0 )] = y(t) = = =
1 G( f /a) = X 1 ( f ) |a| X 1 ( f )e−j 2π f t0 1 G( f /a)e−j 2π f t0 |a| Y( f ) 1 −π f 2 /a 2 −j 2π f t0 e e . (1.241) |a|
The Fourier transform of the autocorrelation of y(t) is |Y ( f )|2 =
1 −2π f 2 /a 2 e = H( f ) a2
(say).
(1.242)
The autocorrelation of y(t) is the inverse Fourier transform of H ( f ). Therefore √ 2 −π( f √2/a)2 1 e H( f ) = √ |a| 2 |a| 1 (1.243) √ g(ct), |a| 2 where a c= √ . 2
(1.244)
Thus, the autocorrelation of y(t) is h(t) =
1 2 2 √ e−πa t /2 . |a| 2
(1.245)
34. Consider the periodic pulse train in Fig. 1.31a. Here T ≤ T0 /2. (a) Using the Fourier transform property of differentiation in the time domain, compute the complex Fourier series representation of g p (t) in Fig. 1.31a. Hence, also find the Fourier transform of g p (t). (b) If g p (t) is passed through a filter having the magnitude and phase response as depicted in Fig. 1.31b, c, find the output y(t). • Solution: Consider the pulse: g(t) =
g p (t) for −T0 /2 < t < T0 /2 0 elsewhere.
(1.246)
52
1 Signals and Systems
Fig. 1.31 g p (t).g p (t)
gp (t)
(a)
A t −T
T
0
T0
|H(f )|
(b)
1 f −3/(2T0 )
3/(2T0 )
0 ∠H(f )
(c)
π/2 3/(2T0 ) −3/(2T0 )
f
0 −π/2
Clearly A d 2 g(t) = (δ(t + T ) − 2δ(t) + δ(t − T )) . dt 2 T
(1.247)
Hence A (exp(j 2π f T ) − 2 + exp(−j 2π f T )) f 2T AT = 2 2 2 sin2 (π f T ) π f T (1.248) = AT sinc2 ( f T ).
G( f ) = −
4π 2
We also know that g p (t) =
∞
cn exp (j 2πnt/T0 ) ,
(1.249)
n=−∞
where 1 ∞ g(t) exp (−j 2πnt/T0 ) dt T0 t=−∞ 1 G(n/T0 ). = T0
cn =
(1.250)
1 Signals and Systems
53
Substituting for G(n/T0 ) we have g p (t) =
∞ AT sinc2 (nT /T0 ) exp (j 2πnt/T0 ) . T0 n=−∞
(1.251)
Using the fact that exp (j 2π f c t) δ( f − f c )
(1.252)
the Fourier transform of g p (t) is given by ∞ AT n 2 . sinc (nT /T0 )δ f − G p( f ) = T0 n=−∞ T0
(1.253)
The filter output will have the frequency components 0, ±1/T0 . From Fig. 1.31b, c, we note that H (0) = 1 1 H (1/T0 ) = e−j π/3 3 1 j π/3 H (−1/T0 ) = e . 3
(1.254)
Hence AT AT + sinc2 (T /T0 )e j (2πt/T0 −π/3) T0 3T0 AT + sinc2 (T /T0 )e j (−2πt/T0 +π/3) 3T0 AT 2 AT = + sinc2 (T /T0 ) cos(2πt/T0 − π/3). T0 3T0
y(t) =
(1.255)
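The coefficients (AT/T_0) sinc²(nT/T_0) used in (1.251)–(1.255) can also be checked numerically. The sketch below is an addition (Python with NumPy); the values of A, T, and T_0 are illustrative assumptions, not taken from the problem.

```python
import numpy as np

A, T, T0 = 1.0, 0.25, 1.0          # illustrative values satisfying T <= T0/2
dt = 1e-5
t = np.arange(-T0 / 2, T0 / 2, dt)
gp = A * np.clip(1.0 - np.abs(t) / T, 0.0, None)   # one triangular pulse of Fig. 1.31a

# c_n from (1.250) versus the closed form (A T / T0) sinc^2(n T / T0) of (1.251)
for n in range(4):
    cn = np.sum(gp * np.exp(-2j * np.pi * n * t / T0)) * dt / T0
    print(n, cn.real, A * T / T0 * np.sinc(n * T / T0) ** 2)
```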
35. (Simon Haykin 1983) Prove the following Hilbert transform: sin(t) H T 1 − cos(t) . −→ t t
(1.256)
• Solution: We start with the familiar Fourier transform pair: A rect (t/T ) AT sinc ( f T ).
(1.257)
Applying duality and replacing T by B, we get A rect ( f /B) AB sinc (t B).
(1.258)
54
1 Signals and Systems
Substituting A = B = 1, we get rect ( f ) sinc (t) = sin(πt)/(πt).
(1.259)
Time scaling by 1/π, we get π rect (π f ) sin(t)/t = g(t)
(say).
(1.260)
Now, the Hilbert transform of sin(t)/t is best computed in the frequency domain. Thus ˆ f ) = −j sgn ( f )π rect (π f ) G( ∞ ˆ f ) exp (j 2π f t) d f G( ⇒ g(t) ˆ = f =−∞
⇒ g(t) ˆ = −j π + jπ
1/(2π) f =0 0
exp (j 2π f t) d f
−1/(2π)
exp (j 2π f t) d f
exp(j t) − 1 1 − exp(−j t) + jπ j 2πt j 2πt 1 − cos(t) . = t
= −j π
(1.261)
Thus proved. 36. (Simon Haykin 1983) Prove the following Hilbert transform: 1 t HT −→ . 2 1+t 1 + t2
(1.262)
• Solution: We start with the Fourier transform pair: 1 1 exp (−|t|) . 2 1 + (2π f )2
(1.263)
Time scaling by 2π, we get 1 1 exp (−|2πt|) . 2 (2π)(1 + f 2 )
(1.264)
Multiplying both sides by 2π and applying duality, we get π exp (−|2π f |)
1 = g(t) 1 + t2
(say).
(1.265)
1 Signals and Systems
55
Now the Hilbert transform of g(t) in the above equation is best obtained in the frequency domain. Thus g(t) ˆ = −j = −j π
∞ f =−∞ ∞
+ jπ = −j π =
f =0 0
sgn ( f )π exp (−|2π f |) exp (j 2π f t) d f
exp (−2π f (1 − j t)) d f
f =−∞
exp (2π f (1 + j t)) d f
1 −1 + jπ −2π(1 − j t) 2π(1 + j t)
t . 1 + t2
(1.266)
Thus proved. 37. (Simon Haykin 1983) Prove the following Hilbert transform: HT
rect (t) −→ −
1 t − 1/2 . ln π t + 1/2
(1.267)
• Solution: This problem can be easily solved in the time domain. From the basic definition of the Hilbert transform, we have 1 ∞ g(τ ) g(t) ˆ = dτ π τ =−∞ t − τ 1/2 1 1 dτ = π τ =−1/2 t − τ −1 t − 1/2 . (1.268) = ln π t + 1/2 Thus proved. 38. (Simon Haykin 1983) Let g(t) ˆ denote the Hilbert transform of a real-valued energy signal g(t). Show that the cross-correlation functions of g(t) and g(t) ˆ are given by Rggˆ (τ ) = − Rˆ g (τ ) Rgˆ g (τ ) = Rˆ g (τ ),
(1.269)
where Rˆ g (τ ) denotes the Hilbert transform of Rg (τ ). • Solution: We start from the basic definition of the cross-correlation function:
56
1 Signals and Systems
Rg1 g2 (τ ) =
∞
t=−∞
g1 (t)g2∗ (t − τ ) dt
= g1 (τ ) g2∗ (−τ ) ∞ = G 1 ( f )G ∗2 ( f ) exp (j 2π f τ ) d f.
(1.270)
f =−∞
Here g1 (t) = g(t) G( f ) ⇒
g2 (t) ∗ g2 (−t)
= g(t) ˆ −j sgn ( f )G( f ) = gˆ ∗ (−t) = g(−t) ˆ j sgn ( f )G ∗ ( f ),
(1.271)
since g(t) and hence g(t) ˆ are real-valued. Hence Rggˆ (τ ) = j =j
∞
f =−∞ ∞ f =−∞
sgn ( f )G( f )G ∗ ( f ) exp (j 2π f τ ) d f sgn ( f ) |G( f )|2 exp (j 2π f τ ) d f.
(1.272)
However, we know that Rg (τ ) |G( f )|2 ⇒ Rˆ g (τ ) −j sgn ( f ) |G( f )|2 .
(1.273)
From (1.272) and (1.273), it is clear that Rggˆ (τ ) = − Rˆ g (τ ).
(1.274)
The second part can be similarly proved. 39. (Simon Haykin 1983) Consider two bandpass signals g1 (t) and g2 (t) whose pre-envelopes are denoted by g1+ (t) and g2+ (t), respectively. (a) Show that
∞
1 {g1+ (t)} {g2+ (t)} dt = 2 t=−∞
∞
t=−∞
∗ g1+ (t)g2+ (t) dt
.
(1.275) How is this relation modified if g2 (t) is replaced by g2 (−t)? (b) Assuming that g(t) is a narrowband signal with complex envelope g(t) ˜ and carrier frequency f c , use the result of part (a) to show that
∞ t=−∞
g 2 (t) dt =
1 2
∞ t=−∞
|g(t)| ˜ 2 dt.
(1.276)
1 Signals and Systems
57
• Solution: From the basic definition of the pre-envelope, we have g1+ (t) = g1 (t) + j gˆ1 (t), g2+ (t) = g2 (t) + j gˆ2 (t),
(1.277)
where g1 (t) and g2 (t) are real-valued signals. Hence, gˆ1 (t) and gˆ2 (t) are also real-valued. Next, we note that the real part of the integral is equal to the integral of the real part. Hence, the right-hand side of (1.275) becomes 1 2
∞
g1 (t)g2 (t) + gˆ1 (t)gˆ2 (t) dt.
(1.278)
t=−∞
Next, we note that ∞ t=−∞
g1 (t)g2 (t) dt = g1 (t) g2 (−t)|t=0 =
∞ f =−∞
G 1 ( f )G ∗2 ( f ) d f,
(1.279)
where “” denotes convolution. Similarly
∞ t=−∞
gˆ1 (t)gˆ2 (t) dt = gˆ1 (t) gˆ2 (−t)t=0 =
∞ f =−∞
(−j sgn ( f )) G 1 ( f )
(j sgn ( f )) G ∗2 ( f ) d f ∞ = G 1 ( f )G ∗2 ( f ) d f f =−∞ ∞ g1 (t)g2 (t) dt, (1.280) = t=−∞
where we have used the fact that sgn2 ( f ) = 1
for f = 0.
(1.281)
Now, the left-hand side of (1.275) is nothing but
∞
g1 (t)g2 (t) dt.
(1.282)
t=−∞
Thus (1.275) is proved. Now, let us see what happens when g2 (t) is replaced by g2 (−t). Let g3 (t) = g2 (−t) ⇒ g3+ (t) = g3 (t) + j gˆ3 (t) ⇒
∗ g3+ (t)
= g2 (−t) + j gˆ2 (−t) = g3 (t) − j gˆ3 (t) = g2 (−t) − j gˆ2 (−t).
(1.283)
58
1 Signals and Systems
Clearly (1.275) is still valid with g2+ (t) replaced by g3+ (t) as given in (1.283), that is, ∞ ∞ 1 ∗ {g1+ (t)} {g3+ (t)} dt = g1+ (t)g3+ (t) dt . (1.284) 2 t=−∞ t=−∞ To prove part (b) we substitute g1 (t) = g2 (t) = g(t)
(1.285)
ˆ g+ (t) = g(t) + j g(t) = g(t) ˜ exp (j 2π f c t) ,
(1.286)
in (1.275) and note that
where g(t) ˜ is the complex envelope of g(t). Thus, (1.276) follows immediately. 40. Let a narrowband signal be expressed in the form: g(t) = gc (t) cos(2π f c t) − gs (t) sin(2π f c t).
(1.287)
Using G + ( f ) to denote the Fourier transform of the pre-envelope of g(t), express the Fourier transforms of the in-phase component gc (t) and the quadrature component gs (t) in terms of G + ( f ). • Solution: From the basic definition of the pre-envelope gc (t) + j gs (t) = g+ (t) exp (−j 2π f c t) = g1 (t) (say) 1
g1 (t) + g1∗ (t) ⇒ gc (t) = 2 1
g1 (t) − g1∗ (t) . gs (t) = 2j (1.288) Taking the Fourier transform of both sides in the above equation, we get 1
G 1 ( f ) + G ∗1 (− f ) 2 1
Gs ( f ) = G 1 ( f ) − G ∗1 (− f ) . 2j
Gc( f ) =
(1.289)
Note that G 1 ( f ) is equal to
⇒
G1( ∗ G 1 (−
f ) = G +( f + fc ) f ) = G ∗+ (− f + f c ).
(1.290)
1 Signals and Systems
59
Substituting for G 1 ( f ) in (1.289), we get 1
G + ( f + f c ) + G ∗+ (− f + f c ) 2 1
G + ( f + f c ) − G ∗+ (− f + f c ) . Gs ( f ) = 2j
Gc( f ) =
(1.291)
41. (Simon Haykin 1983) The duration of a signal provides a measure for describing the signal as a function of time. The bandwidth of a signal provides a measure for describing its frequency content. There is no unique set of definitions for the duration and bandwidth. However, regardless of the definition we find that their product is always a constant. The choice of a particular set of definitions merely changes the value of this constant. This problem is intended to explore these issues. (a) The root mean square (rms) bandwidth of a lowpass energy signal g(t) is defined by ∞
2 |G( f )|2 d f f =−∞ f ∞ |G( f )|2 d f f =−∞
Wrms =
0.5 .
(1.292)
The corresponding rms duration of the signal is defined by ∞
2 |g(t)|2 dt t=−∞ t ∞ |g(t)|2 dt t=−∞
Trms =
0.5 .
(1.293)
Show that (Heisenberg–Gabor uncertainty principle) Trms Wrms ≥ 1/(4π).
(1.294)
(b) Consider an energy signal g(t) for which |G( f )| is defined for all frequencies from −∞ to ∞ and symmetric about f = 0 with its maximum value at f = 0. The equivalent rectangular bandwidth is defined by ∞ Weq =
f =−∞
|G( f )|2 d f
2|G(0)|2
.
(1.295)
The corresponding equivalent duration is defined by
∞ Teq =
t=−∞ ∞ t=−∞
|g(t)| dt
2
|g(t)|2 dt
.
(1.296)
60
1 Signals and Systems
Show that Teq Weq ≥ 0.5.
(1.297)
• Solution: To prove Trms Wrms ≥ 1/(4π)
(1.298)
we need to use Schwarz’s inequality which states that
∞
t=−∞
∗ g1 (t)g2 (t) + g1 (t)g2∗ (t) dt
2
∞
≤4
|g1 (t)|2 dt
t=−∞ ∞
×
|g2 (t)|2 dt.
t=−∞
(1.299) For the given problem, we substitute g1 (t) = tg(t) dg(t) . g2 (t) = dt
(1.300)
Thus, the right-hand side of Schwarz’s inequality in (1.299) becomes
dg(t) 2 dt dt. t=−∞
(1.301)
dg(t) j 2π f G( f ) = G 3 ( f ). dt
(1.302)
∞
t 2 |g(t)|2 dt
4 t=−∞
∞
Let g3 (t) =
Then, by Rayleigh’s energy theorem
∞
|g3 (t)|2 dt =
t=−∞
∞ f =−∞
|G 3 ( f )|2 d f
∞ dg(t) 2 dt = 4π 2 ⇒ f 2 |G( f )|2 d f. dt t=−∞ f =−∞
∞
(1.303)
Thus, the right-hand side of Schwarz’s inequality in (1.299) becomes 16π
2
∞ t=−∞
t |g(t)| dt 2
2
∞ f =−∞
f 2 |G( f )|2 d f.
(1.304)
1 Signals and Systems
61
Let us now consider the square root of the left-hand side of Schwarz’s inequality in (1.299). We have
∞ dg(t) ∗ dg(t) dg(t) ∗ t g (t) t g ∗ (t) + g(t) dt = dt dt dt t=−∞ t=−∞ ∗ dg (t) + g(t) dt dt ∞ d = g(t)g ∗ (t) dt t dt t=−∞ ∞ d |g(t)|2 dt. t = t=−∞ dt (1.305) ∞
In the above equation, we have used the fact (this can also be proved using Fourier transforms) that
dg(t) dt
∗
=
dg ∗ (t) . dt
(1.306)
Let
g4 (t) = |g(t)|2 G 4 ( f ) dg4 (t) j 2π f G 4 ( f ) = G 5 ( f ) ⇒ g5 (t) = dt j dG 5 ( f ) ⇒ tg5 (t) 2π d f ∞ j dG 5 ( f ) tg5 (t) dt = ⇒ 2π d f f =0 t=−∞ ∞ j d ⇒ tg5 (t) dt = (j 2π f G 4 ( f )) 2π d f f =0
t=−∞
∞
d ⇒ t |g(t)|2 dt = −G 4 (0) t=−∞ dt ∞ ∞ d |g(t)|2 dt. t |g(t)|2 dt = − ⇒ t=−∞ dt t=−∞
(1.307)
Using Rayleigh’s energy theorem, the left-hand side of Schwarz’s inequality in (1.299) becomes
∞ t=−∞
|g(t)|2 dt
∞ f =−∞
|G( f )|2 d f.
(1.308)
62
1 Signals and Systems
Taking the square root of (1.304) and (1.308), we get the desired result in (1.298). To prove Teq Weq ≥ 0.5,
(1.309)
we invoke Schwarz’s inequality which states that
∞
t=−∞
g(t) dt ≤
∞
|g(t)| dt.
(1.310)
t=−∞
Squaring both sides, we obtain
2 ∞ 2 |g(t)| dt g(t) dt ≤ t=−∞ t=−∞ 2 ∞ |g(t)| dt . ⇒ |G(0)|2 ≤ ∞
(1.311)
t=−∞
Using the above inequality and the Rayleigh’s energy theorem which states that ∞ ∞ |g(t)|2 dt = |G( f )|2 d f, (1.312) f =−∞
t=−∞
we get the desired result Teq Weq ≥ 0.5.
(1.313)
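As an aside that is not in the original text, the Gaussian pulse g(t) = exp(−πt²) attains the rms duration-bandwidth bound (1.294) with equality, which a short numerical check (Python with NumPy) confirms:

```python
import numpy as np

dt = 1e-4
t = np.arange(-10.0, 10.0, dt)
g = np.exp(-np.pi * t ** 2)        # Gaussian pulse; its Fourier transform is exp(-pi f^2)

E = np.sum(np.abs(g) ** 2) * dt
Trms = np.sqrt(np.sum(t ** 2 * np.abs(g) ** 2) * dt / E)
Wrms = Trms                        # G(f) has the same shape as g(t), so Wrms = Trms here

print(Trms * Wrms, 1.0 / (4.0 * np.pi))   # both ~0.0796: the bound (1.294) is met with equality
```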
42. x(t) is a bandpass signal whose Fourier transform is shown in Fig. 1.32. h(t) is an LTI system having a bandpass frequency response of the form h(t) = h c (t) cos(2π f c t).
(1.314)
The Fourier transform of h c (t) is given by
Fig. 1.32 A bandpass spectrum
X(f )
4Xc (f − fc ) f
−fc −fc − W
0 −fc + W
fc fc − W
fc + W
1 Signals and Systems
63
Hc ( f ) =
f 2 for | f | < B , 0 otherwise
(1.315)
where B > W and f c B, W . x(t) is input to h(t). (a) Write down the expression of x(t) in terms of xc (t). (b) Write down the expression of H ( f ) in terms of Hc ( f ). (c) Find y(t) (output of h(t)) in terms of xc (t) without performing convolution. • Solution: Note that X ( f ) = 4[X c ( f − f c ) + X c ( f + f c )] 8xc (t) cos(2π f c t) = x(t).
(1.316)
Similarly H( f ) =
Hc ( f − f c ) + Hc ( f + f c ) . 2
(1.317)
Now Y ( f ) = X ( f )H ( f ) = 2[X c ( f − f c )Hc ( f − f c ) + X c ( f + f c )Hc ( f + f c )]. (1.318) Let G c ( f ) = X c ( f )Hc ( f ) 2 f X c ( f ) for | f | < W = 0 otherwise −1 d 2 xc (t) 4π 2 dt 2 = gc (t).
(1.319)
Then y(t) = 4gc (t) cos(2π f c t) −1 d 2 xc (t) cos(2π f c t). = 2 π dt 2
(1.320)
43. Two periodic signals g p1 (t) and g p2 (t) are depicted in Fig. 1.33. Let g p (t) = g p1 (t) + g p2 (t).
(1.321)
64
1 Signals and Systems
Fig. 1.33 Periodic waveforms
gp1 (t) 1 t gp2 (t) 2 1 t 0
1
2
3
4
(a) Sketch g p (t). (b) Compute the power of dg p (t)/dt. (c) Compute the Fourier transform of d 2 g p (t)/dt 2 . Assume that the generating function (of d 2 g p (t)/dt 2 ) extends over [0, T ) where T is the period of d 2 g p (t)/dt 2 . • Solution: The signal g p (t) is illustrated in Fig. 1.34. Let g p3 (t) =
dg p (t) . dt
(1.322)
Then, the power of g p3 (t) is 1 T 2 g (t) dt T t=0 p3 14 = 4 = 3.5.
P=
(1.323)
Let g p4 (t) =
d 2 g p (t) dt 2
(1.324)
and let g4 (t) denote its generating function. Then g4 (t) = 3δ(t) − 5δ(t − 1) + δ(t − 2) + δ(t − 3) 3 − 5e−j 2π f + e−j 4π f + e−j 6π f = G 4 ( f ).
(1.325)
1 Signals and Systems
65
Fig. 1.34 Periodic waveforms
gp1 (t) 1 t gp2 (t) 2 1 t 1
0
2
3
4
gp (t) 3
1 t dgp (t)/dt 3 t −1 −2 d2 gp (t)/dt2
3 1
1
t
−5
Now g p4 (t) =
∞
g4 (t − kT )
k=−∞
G4( f )
∞
e−j 2π f kT
k=−∞ ∞ 1 = G 4 (n/T )δ( f − n/T ), T n=−∞
(1.326)
66
1 Signals and Systems
where T = 4 s. 44. Using Fourier transform properties and/or any other relation, compute
A sinc ( f T ) e−j π f T − A e−j 4π f T 2 d f. j 2π f f =−∞ ∞
(1.327)
Clearly state which Fourier transform property and/or relation is being used. The constant of any integration may be assumed to be zero.
• Solution: We know from Parseval's relation that
$$\int_{f=-\infty}^{\infty}|G(f)|^2\,df = \int_{t=-\infty}^{\infty}|g(t)|^2\,dt, \qquad (1.328)$$
where
$$g(t) \rightleftharpoons G(f) \qquad (1.329)$$
is a Fourier transform pair. Let
$$G(f) = \frac{A\,\mathrm{sinc}(fT)\,e^{-j\pi fT} - A\,e^{-j4\pi fT}}{j2\pi f}. \qquad (1.330)$$
Let
$$G_1(f) = j2\pi f\,G(f) = A\,\mathrm{sinc}(fT)\,e^{-j\pi fT} - A\,e^{-j4\pi fT} \;\rightleftharpoons\; g_1(t) = \frac{dg(t)}{dt} = \frac{A}{T}\,\mathrm{rect}\!\left(\frac{t - T/2}{T}\right) - A\,\delta(t - 2T). \qquad (1.331)$$
Both $g_1(t)$ and $g(t)$ are plotted in Fig. 1.35. Clearly
$$\int_{t=-\infty}^{\infty}|g(t)|^2\,dt = \int_{t=0}^{2T}|g(t)|^2\,dt = \frac{4A^2T}{3}, \qquad (1.332)$$
which is the required answer.
45. Compute the area under $2e^{-2\pi t^2} \star e^{-3\pi t^2}$, where "$\star$" denotes convolution. Clearly state which Fourier transform property and/or relation is being used and show all the steps.
Fig. 1.35 Plot of $g_1(t)$ and $g(t)$
• Solution: We know that
$$e^{-\pi t^2} \rightleftharpoons e^{-\pi f^2}. \qquad (1.333)$$
We also know that if
$$g(t) \rightleftharpoons G(f), \qquad (1.334)$$
then, from the time scaling property of the Fourier transform,
$$g(at) \rightleftharpoons \frac{1}{|a|}G(f/a), \qquad (1.335)$$
where $a \ne 0$. Therefore
$$e^{-2\pi t^2} \rightleftharpoons \frac{1}{\sqrt 2}e^{-\pi f^2/2}, \qquad e^{-3\pi t^2} \rightleftharpoons \frac{1}{\sqrt 3}e^{-\pi f^2/3}. \qquad (1.336)$$
Moreover
$$2e^{-2\pi t^2} \star e^{-3\pi t^2} \rightleftharpoons \frac{2}{\sqrt 6}e^{-\pi f^2/2}\,e^{-\pi f^2/3}. \qquad (1.337)$$
Using
$$\int_{t=-\infty}^{\infty} g(t)\,dt = G(0), \qquad (1.338)$$
the area under $2e^{-2\pi t^2} \star e^{-3\pi t^2}$ is $2/\sqrt 6$.
46. State Rayleigh's energy theorem. No derivation is required.
• Solution: Let $G(f)$ denote the Fourier transform of $g(t)$. Then Rayleigh's energy theorem states that
$$\int_{t=-\infty}^{\infty}|g(t)|^2\,dt = \int_{f=-\infty}^{\infty}|G(f)|^2\,df. \qquad (1.339)$$
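A small numerical illustration of (1.339): for a sampled Gaussian pulse, the energies computed in the time and frequency domains agree. The pulse and grid below are arbitrary choices (a sketch, not part of the text).

```python
import numpy as np

fs = 100.0
t = np.arange(-10.0, 10.0, 1.0 / fs)
g = np.exp(-np.pi * t ** 2)                    # energy = 1/sqrt(2)

E_time = np.sum(np.abs(g) ** 2) / fs           # approx. integral of |g(t)|^2 dt
G = np.fft.fft(g) / fs                         # samples of G(f), up to a linear phase
E_freq = np.sum(np.abs(G) ** 2) * fs / t.size  # approx. integral of |G(f)|^2 df
print(E_time, E_freq, 1 / np.sqrt(2))          # all three nearly equal
```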
47. Let $G(f)$ denote the Fourier transform of $g(t)$. It is given that $G(0)$ is finite and non-zero. Does $g(t)$ contain a dc component? Justify your answer.
• Solution: If $g(t)$ has a dc component, then $G(f)$ contains a Dirac delta function at $f = 0$. However, it is given that $G(0)$ is finite. Therefore, $g(t)$ does not contain any dc component.
48. Consider the following: if CONDITION then STATEMENT. The CONDITION is given to be necessary. Which of the following statement(s) are correct (more than one statement may be correct)?
(a) CONDITION is true implies STATEMENT is always true.
(b) CONDITION is true implies STATEMENT is always false.
(c) CONDITION is true implies STATEMENT may be true or false.
(d) CONDITION is false implies STATEMENT is always true.
(e) CONDITION is false implies STATEMENT is always false.
(f) CONDITION is false implies STATEMENT may be true or false.
• Solution: (c) and (e). For example, the necessary condition for the sum of an infinite series to exist is that the $n$th term must tend to zero as $n \to \infty$. Here:
(a) CONDITION is: the $n$th term goes to zero as $n \to \infty$.
(b) STATEMENT is: the sum of the infinite series exists.
The sum given by
$$S = \sum_{n=1}^{\infty}\frac{1}{n} \qquad (1.340)$$
does not exist even though the $n$th term goes to zero as $n \to \infty$.
49. Consider the following: if CONDITION then STATEMENT. The CONDITION is given to be sufficient. Which of the following statement(s) are correct (more than one statement may be correct)?
(a) CONDITION is true implies STATEMENT is always true.
(b) CONDITION is true implies STATEMENT is always false.
(c) CONDITION is true implies STATEMENT may be true or false.
(d) CONDITION is false implies STATEMENT is always true.
(e) CONDITION is false implies STATEMENT is always false.
(f) CONDITION is false implies STATEMENT may be true or false.
• Solution: (a) and (f). For example, Dirichlet's conditions for the existence of the Fourier transform are sufficient. In other words, if Dirichlet's conditions are satisfied, the Fourier transform is guaranteed to exist. However, if Dirichlet's conditions are not satisfied, the Fourier transform may or may not exist. Here:
(a) CONDITION is: Dirichlet's conditions are satisfied.
(b) STATEMENT is: the Fourier transform exists.
50. State Parseval's power theorem for periodic signals in terms of its complex Fourier series coefficients.
• Solution: Let $g_p(t)$ be a periodic signal with period $T_0$. Then $g_p(t)$ can be represented in the form of a complex Fourier series as follows:
$$g_p(t) = \sum_{n=-\infty}^{\infty} c_n\,e^{j2\pi nt/T_0}. \qquad (1.341)$$
Then Parseval's power theorem states that
$$\frac{1}{T_0}\int_{t=0}^{T_0}|g_p(t)|^2\,dt = \sum_{n=-\infty}^{\infty}|c_n|^2. \qquad (1.342)$$
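The sketch below checks (1.342) numerically for a simple periodic example (a square wave equal to $A$ over the first half period and 0 over the second); the FFT of one period supplies approximate coefficients $c_n$. The waveform and grid are illustrative assumptions.

```python
import numpy as np

A, N = 2.0, 4096
t = np.arange(N) / N                    # one period, T0 = 1
gp = np.where(t < 0.5, A, 0.0)

power_time = np.mean(np.abs(gp) ** 2)   # (1/T0) * integral of |gp(t)|^2 dt
cn = np.fft.fft(gp) / N                 # numerical Fourier series coefficients
power_coeff = np.sum(np.abs(cn) ** 2)
print(power_time, power_coeff, A ** 2 / 2)   # all close to A^2/2
```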
51. State and derive the Poisson sum formula. Hence show that
$$\sum_{n=-\infty}^{\infty} e^{j2\pi fnT_0} = \frac{1}{T_0}\sum_{n=-\infty}^{\infty}\delta\!\left(f - \frac{n}{T_0}\right). \qquad (1.343)$$
• Solution: Let $g_p(t)$ be a periodic signal with period $T_0$. Then $g_p(t)$ can be expressed in the form of a complex Fourier series as follows:
$$g_p(t) = \sum_{n=-\infty}^{\infty} c_n\,e^{j2\pi nt/T_0}, \qquad (1.344)$$
where
$$c_n = \frac{1}{T_0}\int_{t=0}^{T_0} g_p(t)\,e^{-j2\pi nt/T_0}\,dt. \qquad (1.345)$$
Let $g(t)$ be the generating function of $g_p(t)$, that is,
$$g(t) = \begin{cases} g_p(t) & \text{for } 0 \le t < T_0\\ 0 & \text{otherwise.} \end{cases} \qquad (1.346)$$
Note that
$$g_p(t) = \sum_{n=-\infty}^{\infty} g(t - nT_0). \qquad (1.347)$$
The Fourier transform of $g(t)$ is
$$G(f) = \int_{t=-\infty}^{\infty} g(t)\,e^{-j2\pi ft}\,dt = \int_{t=0}^{T_0} g_p(t)\,e^{-j2\pi ft}\,dt. \qquad (1.348)$$
From (1.345) and (1.348), we have
$$c_nT_0 = G(n/T_0). \qquad (1.349)$$
Therefore, from (1.344), (1.347), and (1.349) we have
$$\sum_{n=-\infty}^{\infty} g(t - nT_0) = \frac{1}{T_0}\sum_{n=-\infty}^{\infty} G\!\left(\frac{n}{T_0}\right)e^{j2\pi nt/T_0}, \qquad (1.350)$$
which proves the Poisson sum formula. Taking the Fourier transform of both sides of (1.350) we obtain
$$G(f)\sum_{n=-\infty}^{\infty} e^{-j2\pi fnT_0} = \frac{1}{T_0}\sum_{n=-\infty}^{\infty} G\!\left(\frac{n}{T_0}\right)\delta(f - n/T_0). \qquad (1.351)$$
For the particular case where $G(f) = 1$, we obtain the required result
$$\sum_{n=-\infty}^{\infty} e^{-j2\pi fnT_0} = \frac{1}{T_0}\sum_{n=-\infty}^{\infty}\delta(f - n/T_0). \qquad (1.352)$$
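A direct numerical illustration of (1.350) with $g(t) = e^{-\pi t^2}$, whose Fourier transform is $e^{-\pi f^2}$; both sides are evaluated at a few test points. The period $T_0$ and truncation limits are arbitrary choices.

```python
import numpy as np

T0 = 1.5
n = np.arange(-50, 51)
for t in (-0.7, -0.2, 0.3, 0.6):
    lhs = np.sum(np.exp(-np.pi * (t - n * T0) ** 2))
    rhs = np.sum(np.exp(-np.pi * (n / T0) ** 2) *
                 np.exp(1j * 2 * np.pi * n * t / T0)).real / T0
    print(t, lhs, rhs)                 # the two columns agree
```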
52. Clearly define the unit step function. Derive the Fourier transform of the unit step function.
• Solution: The unit step function is defined as
$$u(t) = \int_{\tau=-\infty}^{t}\delta(\tau)\,d\tau = \begin{cases} 0 & \text{for } t < 0\\ 1/2 & \text{for } t = 0\\ 1 & \text{for } t > 0, \end{cases} \qquad (1.353)$$
where $\delta(\cdot)$ is the Dirac delta function. Now consider
$$g(t) = \begin{cases} \exp(-at) & \text{for } t > 0\\ 0 & \text{for } t = 0\\ -\exp(at) & \text{for } t < 0, \end{cases} \qquad (1.354)$$
where $a > 0$ is a real-valued constant. Note that
$$\lim_{a\to 0} g(t) = \mathrm{sgn}(t), \qquad (1.355)$$
where
$$\mathrm{sgn}(t) = \begin{cases} 1 & \text{for } t > 0\\ 0 & \text{for } t = 0\\ -1 & \text{for } t < 0. \end{cases} \qquad (1.356)$$
Now, the Fourier transform of $g(t)$ is
$$G(f) = \frac{-j4\pi f}{a^2 + (2\pi f)^2}. \qquad (1.357)$$
Taking the limit $a \to 0$ in (1.357), we get from (1.355)
$$\mathrm{sgn}(t) \rightleftharpoons \frac{1}{j\pi f}. \qquad (1.358)$$
We know that 2u(t) − 1 = sgn (t).
(1.359)
Taking the Fourier transform of both sides of (1.359) we get
$$2U(f) - \delta(f) = \frac{1}{j\pi f} \;\Rightarrow\; U(f) = \frac{1}{j2\pi f} + \frac{\delta(f)}{2}, \qquad (1.360)$$
which is the Fourier transform of the unit step function. Note that if
$$h(t) = \begin{cases} \exp(-at) & \text{for } t > 0\\ 1/2 & \text{for } t = 0\\ 0 & \text{for } t < 0, \end{cases} \qquad (1.361)$$
where $a > 0$ is a real-valued constant, then
$$\lim_{a\to 0} h(t) = u(t). \qquad (1.362)$$
Now, the Fourier transform of $h(t)$ in (1.361) is
$$H(f) = \frac{1}{a + j2\pi f}. \qquad (1.363)$$
Taking the limit $a \to 0$ in (1.363), we obtain from (1.362) the Fourier transform of the unit step as
$$u(t) \rightleftharpoons \frac{1}{j2\pi f}, \qquad (1.364)$$
which is different from (1.360). However, we note from (1.364) that the right-hand side is purely imaginary, which implies that the unit step is an odd function, which is incorrect. Therefore, the Fourier transform of the unit step is given by (1.360).
53. Compute
$$\int_{t=-\infty}^{\infty}\mathrm{sinc}^3(3t)\,dt, \qquad (1.365)$$
where
$$\mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t}. \qquad (1.366)$$
• Solution: We know that
$$A\,\mathrm{rect}(t/T) \rightleftharpoons AT\,\mathrm{sinc}(fT). \qquad (1.367)$$
Using duality we get
$$A\,\mathrm{rect}(f/B) \rightleftharpoons AB\,\mathrm{sinc}(tB). \qquad (1.368)$$
Substituting $B = 3$ in (1.368) we get
$$\frac{1}{3}\,\mathrm{rect}(f/3) \rightleftharpoons \mathrm{sinc}(3t). \qquad (1.369)$$
Using the property that multiplication in the time domain corresponds to convolution in the frequency domain, we get
$$G(f) = \frac{1}{27}\bigl(\mathrm{rect}(f/3)\star\mathrm{rect}(f/3)\star\mathrm{rect}(f/3)\bigr) \rightleftharpoons \mathrm{sinc}^3(3t) = g(t), \qquad (1.370)$$
where "$\star$" denotes convolution. Using the property
$$\int_{t=-\infty}^{\infty} g(t)\,dt = G(0), \qquad (1.371)$$
we get
$$\int_{t=-\infty}^{\infty}\mathrm{sinc}^3(3t)\,dt = \frac{1}{27}\Bigl.\Bigl(\mathrm{rect}(f/3)\star 3\bigl(1 - |f|/3\bigr)\mathrm{rect}(f/6)\Bigr)\Bigr|_{f=0} = \frac{1}{9}\int_{\alpha=-3/2}^{3/2}\bigl(1 - |\alpha|/3\bigr)d\alpha = \frac{2}{9}\int_{\alpha=0}^{3/2}\bigl(1 - \alpha/3\bigr)d\alpha = \frac{1}{4}. \qquad (1.372)$$
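A brute-force numerical check of (1.372): the integrand decays like $1/t^3$, so a modest truncation already gives the value $1/4$ to good accuracy. The grid below is an arbitrary choice.

```python
import numpy as np

dt = 1e-3
t = np.arange(-200.0, 200.0, dt)
# np.sinc(x) = sin(pi*x)/(pi*x), matching the definition in (1.366)
print(np.sum(np.sinc(3 * t) ** 3) * dt)    # close to 0.25
```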
54. A signal $g(t)$ has Fourier transform given by
$$G(f) = \sin(2\pi ft_0). \qquad (1.373)$$
Compute $\hat g(t)$.
• Solution: Clearly
$$g(t) = \frac{\delta(t + t_0) - \delta(t - t_0)}{2j}, \qquad (1.374)$$
where $\delta(\cdot)$ is the Dirac delta function. The impulse response of the Hilbert transformer is
$$h(t) = \frac{1}{\pi t}. \qquad (1.375)$$
Therefore
$$\hat g(t) = g(t)\star h(t) = \frac{1}{2j}\bigl(h(t + t_0) - h(t - t_0)\bigr) = \frac{1}{2\pi j}\left(\frac{1}{t + t_0} - \frac{1}{t - t_0}\right) = \frac{-t_0}{\pi j\,(t^2 - t_0^2)}. \qquad (1.376)$$
55. Compute the Hilbert transform of
$$g(t) = \frac{24}{9 + (2\pi t)^2}. \qquad (1.377)$$
• Solution: We start from the Fourier transform pair:
$$a\,e^{-b|t|} \rightleftharpoons \frac{2ab}{b^2 + (2\pi f)^2}. \qquad (1.378)$$
Using duality, we get
$$G(f) = a\,e^{-b|f|} \rightleftharpoons \frac{2ab}{b^2 + (2\pi t)^2} = g(t). \qquad (1.379)$$
Here $a = 4$ and $b = 3$. In this problem, it is best to compute the Hilbert transform from the frequency domain. We have $\hat G(f) = -j\,\mathrm{sgn}(f)G(f)$. Now $\hat g(t)$ is the inverse Fourier transform of $\hat G(f)$, given by
$$\hat g(t) = \int_{f=-\infty}^{\infty}\hat G(f)\,e^{j2\pi ft}\,df = -4j\int_{f=0}^{\infty} e^{-3f}e^{j2\pi ft}\,df + 4j\int_{f=-\infty}^{0} e^{3f}e^{j2\pi ft}\,df = -4j\left[\frac{e^{-f(3 - j2\pi t)}}{-(3 - j2\pi t)}\right]_{f=0}^{\infty} + 4j\left[\frac{e^{f(3 + j2\pi t)}}{3 + j2\pi t}\right]_{f=-\infty}^{0} \qquad (1.380)$$
Fig. 1.36 A signal ($g(t)$, piecewise constant with levels $1/(4T)$ and $1/(2T)$ on $[-T/3, 2T/3)$)
$$= \frac{-4j}{3 - j2\pi t} + \frac{4j}{3 + j2\pi t} = \frac{16\pi t}{9 + (2\pi t)^2}. \qquad (1.381)$$
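The result (1.381) can be cross-checked numerically: `scipy.signal.hilbert` returns the analytic signal $g(t) + j\hat g(t)$ of a sampled waveform, so its imaginary part should approximate $16\pi t/(9 + (2\pi t)^2)$. Truncation and sampling introduce small edge errors, so the comparison below is restricted to the middle of the record; the grid values are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0
t = np.arange(-40.0, 40.0, 1.0 / fs)
g = 24.0 / (9.0 + (2 * np.pi * t) ** 2)
g_hat_claim = 16 * np.pi * t / (9.0 + (2 * np.pi * t) ** 2)

g_hat_num = np.imag(hilbert(g))        # analytic signal = g + j * (Hilbert transform)
mid = np.abs(t) < 20                   # stay away from the window edges
print(np.max(np.abs(g_hat_num[mid] - g_hat_claim[mid])))   # small vs. peak of ~4/3
```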
56. Consider the signal $g(t)$ in Fig. 1.36. Which of the following statement(s) are correct?
(a) $g(t)\star g(t)\big|_{t=-T/3} = \dfrac{5}{48T}$. (1.382)
(b) The Fourier transform of $g(t)\star g(t)$ is real-valued and even.
(c) $g(t)\star g(-t)$ is real-valued and even.
(d) $g(t)\star\delta(3t) = g(t)/3$.
Here "$\star$" denotes convolution.
• Solution: Consider Fig. 1.37. Clearly
$$g(t)\star g(t)\big|_{t=-T/3} = \int_{\tau=-T/3}^{2T/3} g(\tau)\,g(-T/3 - \tau)\,d\tau = \frac{5}{48T}. \qquad (1.383)$$
Note that $g(t)$ is neither even nor odd. Hence, its Fourier transform $G(f)$ is complex-valued. Therefore
$$g(t)\star g(t) \rightleftharpoons G^2(f). \qquad (1.384)$$
Clearly, G 2 ( f ) is also complex valued. Next, since g(t) is real-valued, g(t) g(−t) is the autocorrelation of g(t), and is hence real-valued and even. Finally
Fig. 1.37 A signal (the waveforms $g(\tau)$, $g(\tau - T/3)$, and $g(-\tau - T/3)$ used in the convolution)
$$g(t)\star\delta(3t) = \int_{\tau=-\infty}^{\infty}\delta(3\tau)\,g(t - \tau)\,d\tau = \int_{\alpha=-\infty}^{\infty}\frac{1}{3}\,\delta(\alpha)\,g(t - \alpha/3)\,d\alpha = \frac{g(t)}{3}. \qquad (1.385)$$
References
Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.
Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition, 2002.
Chapter 2
Random Variables and Random Processes
1. Show that the magnitude of the correlation coefficient $|\rho|$ is always less than or equal to unity.
• Solution: For any real number $a$, we have
$$E\bigl[(a(X - m_X) - (Y - m_Y))^2\bigr] \ge 0 \;\Rightarrow\; a^2\sigma_X^2 - 2aE[(X - m_X)(Y - m_Y)] + \sigma_Y^2 \ge 0. \qquad (2.1)$$
Since the above quadratic in $a$ is nonnegative, its discriminant is nonpositive. Hence
$$4E^2[(X - m_X)(Y - m_Y)] - 4\sigma_X^2\sigma_Y^2 \le 0 \;\Rightarrow\; \frac{|E[(X - m_X)(Y - m_Y)]|}{\sigma_X\sigma_Y} \le 1. \qquad (2.2)$$
Hence proved.
2. (Papoulis 1991) Using characteristic functions, show that
$$E[X_1X_2X_3X_4] = C_{12}C_{34} + C_{13}C_{24} + C_{14}C_{23}, \qquad (2.3)$$
where $E[X_iX_j] = C_{ij}$ and the random variables $X_i$ are jointly normal (Gaussian) with zero mean.
• Solution: The joint characteristic function of $X_1$, $X_2$, $X_3$, and $X_4$ is given by
$$E\bigl[e^{\,j(v_1X_1 + \cdots + v_4X_4)}\bigr]. \qquad (2.4)$$
Expanding the exponential in the form of a power series and considering only the fourth power, we have
$$E\bigl[e^{\,j(v_1X_1 + \cdots + v_4X_4)}\bigr] = \cdots + \frac{1}{4!}E\bigl[(v_1X_1 + \cdots + v_4X_4)^4\bigr] + \cdots = \cdots + \frac{24}{4!}E[X_1X_2X_3X_4]\,v_1v_2v_3v_4 + \cdots. \qquad (2.5)$$
Now, let
$$W = v_1X_1 + \cdots + v_4X_4. \qquad (2.6)$$
Then
$$E[W] = 0, \qquad E[W^2] = \sum_{i,j} v_iv_jC_{ij} = \sigma_w^2. \qquad (2.7)$$
Now, the characteristic function of $W$ is
$$E\bigl[e^{\,jW}\bigr] = e^{-\sigma_w^2/2} = e^{-\frac{1}{2}\sum_{i,j}v_iv_jC_{ij}}. \qquad (2.8)$$
Expanding the above exponential once again we have
$$E\bigl[e^{\,jW}\bigr] = \cdots + \frac{1}{2!}\left(\frac{1}{2}\sum_{i,j}v_iv_jC_{ij}\right)^2 + \cdots = \cdots + \frac{8\bigl(C_{12}C_{34} + C_{13}C_{24} + C_{14}C_{23}\bigr)}{8}\,v_1v_2v_3v_4 + \cdots. \qquad (2.9)$$
Equating the coefficients of $v_1v_2v_3v_4$ in (2.5) and (2.9), we get the result.
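The identity (2.3) is easy to check by Monte Carlo simulation for any particular zero-mean Gaussian vector; the covariance matrix below is generated at random and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Amat = rng.normal(size=(4, 4))
C = Amat @ Amat.T                       # a valid (positive semi-definite) covariance
X = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

lhs = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(lhs, rhs)                         # equal up to Monte Carlo error
```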
3. (Haykin 1983) A Gaussian distributed random variable $X$ of zero mean and variance $\sigma_X^2$ is transformed by a half-wave rectifier with the input-output relation given below:
$$Y = \begin{cases} X & \text{if } X \ge 0\\ 0 & \text{if } X < 0. \end{cases} \qquad (2.10)$$
Compute the pdf of Y . • Solution: We start with the basic equation when a random variable X is transformed to a random variable Y through the relation Y = g(X ).
(2.11)
We note that the mapping between $Y$ and $X$ is one-to-one for $X \ge 0$, and an inverse mapping exists for $X \ge 0$:
$$X = g^{-1}(Y). \qquad (2.12)$$
Thus the following relation holds:
$$f_Y(y) = \left.\frac{f_X(x)}{|dy/dx|}\right|_{X = g^{-1}(Y)}. \qquad (2.13)$$
For the given problem
$$\frac{dy}{dx} = \begin{cases} 1 & \text{for } X > 0\\ 0 & \text{for } X < 0. \end{cases} \qquad (2.14)$$
Since X < 0 maps to Y = 0, we must have P(X < 0) = P(Y = 0).
(2.15)
Thus the pdf of $Y$ must have a Dirac delta function at $y = 0$. Let
$$f_Y(y) = \begin{cases} k\,\delta(y) + \dfrac{1}{\sqrt{2\pi\sigma_X^2}}\exp\left(-\dfrac{y^2}{2\sigma_X^2}\right) & \text{for } y \ge 0\\ 0 & \text{for } y < 0, \end{cases} \qquad (2.16)$$
where $k$ is a constant such that the pdf of $Y$ integrates to unity. It is easy to see that
$$\int_{y=-\infty}^{\infty} f_Y(y)\,dy = k + 1/2 = 1 \;\Rightarrow\; k = 1/2, \qquad (2.17)$$
where we have used the fact that
$$\int_{y=-\infty}^{\infty}\delta(y)\,dy = 1. \qquad (2.18)$$
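A quick Monte Carlo sanity check of (2.16) with $k = 1/2$: half of the samples of $Y = \max(X, 0)$ should be exactly zero, and under the derived pdf the mean of $Y$ is $\sigma_X/\sqrt{2\pi}$. The value of $\sigma_X$ and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_x = 1.5
y = np.maximum(rng.normal(0.0, sigma_x, 1_000_000), 0.0)   # half-wave rectifier

print(np.mean(y == 0.0))                          # close to k = 1/2
print(np.mean(y), sigma_x / np.sqrt(2 * np.pi))   # both close to sigma_x/sqrt(2*pi)
```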
4. Let X be a uniformly distributed random variable between 0 and 2π. Compute the pdf of Y = sin(X ). • Solution: Clearly, the mapping between X and Y is not one-to-one. In fact, two distinct values of X give the same value of Y (excepting at the extreme points of y = −1 and y = 1). This is illustrated in Fig. 2.1. Let us consider two points x1 and x2 as shown in the figure. The probability that Y lies in the range y to y + dy is equal to the probability that X lies in the range [x1 , x1 + d x1 ] and [x2 , x2 + d x2 ]. Note that d x2 is actually negative and
Fig. 2.1 The transformation Y = sin(X )
$$dy = \left.\frac{dy}{dx}\right|_{x = x_1} dx_1. \qquad (2.19)$$
Thus we have (since $dx_2$ is negative)
$$f_Y(y)\,dy = f_X(x_1)\,dx_1 - f_X(x_2)\,dx_2 \;\Rightarrow\; f_Y(y) = \left.\frac{1/2\pi}{|dy/dx|}\right|_{x_1 = \sin^{-1}(y)} + \left.\frac{1/2\pi}{|dy/dx|}\right|_{x_2 = \pi - \sin^{-1}(y)} \;\Rightarrow\; f_Y(y) = \left.\frac{1}{\pi\cos(x)}\right|_{x = \sin^{-1}(y)} = \frac{1}{\pi\sqrt{1 - y^2}} \quad\text{for } -1 \le y \le 1. \qquad (2.20)$$
Note that even though $f_Y(\pm 1) = \infty$, the probability that $Y = \pm 1$ is zero, since
$$P(Y = 1) = P(X = \pi/2) = 0, \qquad P(Y = -1) = P(X = 3\pi/2) = 0. \qquad (2.21)$$
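The density (2.20) integrates to the CDF $F_Y(y) = 1/2 + \sin^{-1}(y)/\pi$, which the empirical CDF of simulated samples should match closely. This is a Monte Carlo sketch; the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.sort(np.sin(rng.uniform(0.0, 2 * np.pi, 1_000_000)))

emp = np.arange(1, y.size + 1) / y.size        # empirical CDF
ana = 0.5 + np.arcsin(y) / np.pi               # CDF implied by (2.20)
print(np.max(np.abs(emp - ana)))               # of the order of 1e-3
```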
5. (Haykin 1983) Consider a random process X (t) defined by X (t) = sin(2π Ft)
(2.22)
in which the frequency F is a random variable with the probability density function
$$f_F(f) = \begin{cases} 1/W & \text{for } 0 \le f \le W\\ 0 & \text{elsewhere.} \end{cases} \qquad (2.23)$$
Is $X(t)$ WSS?
• Solution: Let us first compute the mean value of $X(t)$:
$$E[X(t)] = \frac{1}{W}\int_{F=0}^{W}\sin(2\pi Ft)\,dF = \frac{-1}{2\pi Wt}\bigl(\cos(2\pi Wt) - 1\bigr), \qquad (2.24)$$
which is a function of time. Hence X (t) is not WSS. 6. (Haykin 1983) A random process X (t) is defined by X (t) = A cos(2π f c t),
(2.25)
where $A$ is a Gaussian distributed random variable with zero mean and variance $\sigma_A^2$. This random process is applied to an ideal integrator producing an output $Y(t)$ defined by
$$Y(t) = \int_{\tau=0}^{t} X(\tau)\,d\tau. \qquad (2.26)$$
(a) Determine the probability density function of the output $Y(t)$ at a particular time $t_k$.
(b) Determine whether $Y(t)$ is WSS.
(c) Determine whether $Y(t)$ is ergodic in the mean and in the autocorrelation.
• Solution: The random variable $Y(t_k)$ is given by
$$Y(t_k) = \frac{A}{2\pi f_c}\sin(2\pi f_c t_k). \qquad (2.27)$$
Since all the terms in the above equation excepting $A$ are constants for the time instant $t_k$, $Y(t_k)$ is also a Gaussian distributed random variable with mean and variance given by
$$E[Y(t_k)] = \frac{E[A]}{2\pi f_c}\sin(2\pi f_c t_k) = 0, \qquad E[Y^2(t_k)] = \frac{E[A^2]}{4\pi^2 f_c^2}\sin^2(2\pi f_c t_k) = \frac{\sigma_A^2}{4\pi^2 f_c^2}\sin^2(2\pi f_c t_k). \qquad (2.28)$$
Since the variance is a function of time, $Y(t)$ is not WSS. Hence it is not ergodic in the autocorrelation. However, it can be shown that the time-averaged mean is zero. Hence $Y(t)$ is ergodic in the mean.
7. (Haykin 1983) Let $X$ and $Y$ be statistically independent Gaussian random variables with zero mean and unit variance. Define the Gaussian process
$$Z(t) = X\cos(2\pi t) + Y\sin(2\pi t). \qquad (2.29)$$
(a) Determine the joint pdf of the random variables $Z(t_1)$ and $Z(t_2)$.
(b) Is $Z(t)$ WSS?
• Solution: The random process $Z(t)$ has mean and autocorrelation given by
$$E[Z(t)] = E[X]\cos(2\pi t) + E[Y]\sin(2\pi t) = 0,$$
$$E[Z(t_1)Z(t_2)] = E[(X\cos(2\pi t_1) + Y\sin(2\pi t_1))(X\cos(2\pi t_2) + Y\sin(2\pi t_2))] = \cos(2\pi t_1)\cos(2\pi t_2) + \sin(2\pi t_1)\sin(2\pi t_2) = \cos(2\pi(t_1 - t_2)) \;\Rightarrow\; E[Z^2(t)] = 1. \qquad (2.30)$$
In the above relations we have used the fact that
$$E[X] = E[Y] = 0, \qquad E[X^2] = E[Y^2] = 1, \qquad E[XY] = E[X]E[Y] = 0. \qquad (2.31)$$
Since the mean is independent of time and the autocorrelation depends only on the time lag, $Z(t)$ is WSS. Since $Z(t)$ is a linear combination of two Gaussian random variables, it is also Gaussian. Therefore $Z(t_1)$ and $Z(t_2)$ are jointly Gaussian. The joint pdf of two real-valued Gaussian random variables $y_1$ and $y_2$ is given by
$$f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left(-\frac{\sigma_2^2(y_1 - m_1)^2 - 2\sigma_1\sigma_2\rho(y_1 - m_1)(y_2 - m_2) + \sigma_1^2(y_2 - m_2)^2}{2\sigma_1^2\sigma_2^2(1-\rho^2)}\right), \qquad (2.32)$$
where
$$E[y_1] = m_1, \quad E[y_2] = m_2, \quad E[(y_1 - m_1)^2] = \sigma_1^2, \quad E[(y_2 - m_2)^2] = \sigma_2^2, \quad \frac{E[(y_1 - m_1)(y_2 - m_2)]}{\sigma_1\sigma_2} = \rho. \qquad (2.33)$$
Thus, for the given problem
$$p_{Z(t_1),Z(t_2)}(z(t_1), z(t_2)) = \frac{1}{2\pi|\sin(2\pi(t_1 - t_2))|}\exp\left(-\frac{z^2(t_1) - 2\cos(2\pi(t_1 - t_2))\,z(t_1)z(t_2) + z^2(t_2)}{2\sin^2(2\pi(t_1 - t_2))}\right). \qquad (2.34)$$
8. Using ensemble averaging, find the mean and the autocorrelation of the random process given by
$$X(t) = \sum_{k=-\infty}^{\infty} S_k\,p(t - kT - \alpha), \qquad (2.35)$$
where $S_k$ denotes a discrete random variable taking values $\pm A$ with equal probability, $p(\cdot)$ denotes a real-valued waveform, $1/T$ denotes the bit-rate, and $\alpha$ denotes a random timing phase uniformly distributed in $[0, T)$. Assume that $S_k$ is independent of $\alpha$ and $S_k$ is independent of $S_j$ for $k \ne j$.
• Solution: Since $S_k$ and $\alpha$ are independent,
$$E[X(t)] = \sum_{k=-\infty}^{\infty} E[S_k]\,E[p(t - kT - \alpha)] = 0, \qquad (2.36)$$
where for the given binary phase shift keying (BPSK) constellation
$$E[S_k] = (1/2)A + (1/2)(-A) = 0. \qquad (2.37)$$
The autocorrelation of $X(t)$ is
$$R_X(\tau) = E[X(t)X(t-\tau)] = E\left[\sum_{k=-\infty}^{\infty} S_k\,p(t - kT - \alpha)\sum_{j=-\infty}^{\infty} S_j\,p(t - \tau - jT - \alpha)\right] = \sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} E[S_kS_j]\,E[p(t - kT - \alpha)\,p(t - \tau - jT - \alpha)]$$
$$= \sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} A^2\delta_K(k - j)\,\frac{1}{T}\int_{\alpha=0}^{T} p(t - kT - \alpha)\,p(t - \tau - jT - \alpha)\,d\alpha = \frac{A^2}{T}\sum_{k=-\infty}^{\infty}\int_{\alpha=0}^{T} p(t - kT - \alpha)\,p(t - \tau - kT - \alpha)\,d\alpha, \qquad (2.38)$$
where we have assumed that $S_k$ and $\alpha$ are independent and
$$\delta_K(k - j) = \begin{cases} 1 & \text{for } k = j\\ 0 & \text{for } k \ne j \end{cases} \qquad (2.39)$$
is the Kronecker delta function. Let
$$x = t - kT - \alpha. \qquad (2.40)$$
Substituting (2.40) in (2.38) we obtain
$$R_X(\tau) = \frac{A^2}{T}\sum_{k=-\infty}^{\infty}\int_{x=t-kT-T}^{t-kT} p(x)\,p(x - \tau)\,dx. \qquad (2.41)$$
Combining the summation and the integral, (2.41) becomes
$$R_X(\tau) = \frac{A^2}{T}\int_{x=-\infty}^{\infty} p(x)\,p(x - \tau)\,dx = \frac{A^2}{T}R_p(\tau), \qquad (2.42)$$
where $R_p(\cdot)$ is the autocorrelation of $p(\cdot)$.
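The result (2.42) can be illustrated by simulation for a rectangular pulse $p(t)$ of duration $T$, for which $(A^2/T)R_p(\tau) = A^2(1 - |\tau|/T)$ for $|\tau| < T$ and 0 otherwise. The sketch below averages $X(t)X(t-\tau)$ over many independent realizations at a fixed observation instant; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
A, nT = 1.0, 50                     # amplitude; samples per bit (T = nT samples)
nbits, ntrials = 40, 20000
lags = np.arange(0, 2 * nT)         # lags from 0 to 2T (in samples)
t0 = nbits * nT // 2                # observation instant in the middle of the record

acc = np.zeros(lags.size)
for _ in range(ntrials):
    x = np.repeat(rng.choice([-A, A], size=nbits), nT)   # the random binary wave
    x = np.roll(x, rng.integers(0, nT))                  # random timing phase alpha
    acc += x[t0] * x[t0 - lags]
R_hat = acc / ntrials

R_theory = np.where(lags < nT, A ** 2 * (1 - lags / nT), 0.0)
print(np.max(np.abs(R_hat - R_theory)))   # Monte Carlo error, O(1/sqrt(ntrials))
```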
9. (Haykin 1983) The square wave $x(t)$ of constant amplitude $A$, period $T_0$, and delay $t_d$ represents the sample function of a random process $X(t)$. This is illustrated in Fig. 2.2. The delay is random and is described by the pdf
Fig. 2.2 A periodic waveform with a random timing phase (a square wave $x(t)$ of amplitude $A$, period $T_0$, delayed by $t_d$)
$$f_{T_d}(t_d) = \begin{cases} 1/T_0 & \text{for } 0 \le t_d < T_0\\ 0 & \text{otherwise.} \end{cases} \qquad (2.43)$$
(a) Determine the mean and autocorrelation of $X(t)$ using ensemble averaging.
(b) Determine the mean and autocorrelation of $X(t)$ using time averaging.
(c) Is $X(t)$ WSS?
(d) Is $X(t)$ ergodic in the mean and the autocorrelation?
• Solution: Instead of solving this particular problem we try to solve a more general problem. Let p(t) denote an arbitrary pulse shape. Consider a random process X (t) defined by ∞
X (t) =
p(t − kT0 − td ),
(2.44)
k=−∞
where td is a uniformly distributed random variable in the range [0, T0 ). Clearly, in the absence of td , X (t) is no longer a random process and it simply becomes a periodic waveform. We wish to find out the mean and autocorrelation of the random process defined in (2.44). The mean of X (t) is given by E[X (t)] = E
∞
p(t − kT0 − td )
k=−∞
=
1 T0
T0
∞
td =0 k=−∞
p(t − kT0 − td ) dtd .
(2.45)
Interchanging the order of integration and summation and substituting t − kT0 − td = x
(2.46)
∞ t−kT0 1 E[X (t)] = p(x) d x. T0 k=−∞ x=t−kT0 −T0
(2.47)
we get
Combining the summation and the integral we get 1 E[X (t)] = T0
∞
p(x) d x.
(2.48)
x=−∞
For the given problem: E[X (t)] =
A . 2
(2.49)
The autocorrelation can be computed as
∞
E[X (t)X (t − τ )] = E
p(t − kT0 − td )
k=−∞ ∞
⎤
p(t − τ − j T0 − td )⎦
j=−∞
=
∞ ∞ 1 T0 k=−∞ j=−∞ T0 p(t − kT0 − td ) p(t − τ − j T0 − td ) dtd . (2.50) td =0
Substituting t − kT0 − td = x
(2.51)
we get E[X (t)X (t − τ )] =
∞ ∞ 1 T0 k=−∞ j=−∞ t−kT0 p(x) p(x + kT0 − τ − j T0 ) d x. (2.52) x=t−kT0 −T0
Let kT0 − j T0 = mT0 . Substituting for j we get
(2.53)
2 Random Variables and Random Processes
E[X (t)X (t − τ )] =
87
∞ ∞ 1 T0 k=−∞ m=−∞ t−kT0 p(x) p(x + mT0 − τ ) d x.
(2.54)
x=t−kT0 −T0
Now we interchange the order of summation and combine the summation over k and the integral to obtain ∞ ∞ 1 p(x) p(x + mT0 − τ ) d x E[X (t)X (t − τ )] = T0 m=−∞ x=−∞
=
∞ 1 R p (τ − mT0 ) = R X (τ ), T0 m=−∞
(2.55)
where R p (τ ) is the autocorrelation of p(t). Thus, the autocorrelation of X (t) is also periodic with a period T0 , hence X (t) is a cyclostationary random process. This is illustrated in Fig. 2.3c. Since the random process is periodic, the time-averaged mean is given by 1 T0 A = 2
< x(t) > =
td +T0
x(t) dt t=td
(2.56)
independent of td . Comparing (2.49) and (2.56) we find that X (t) is ergodic in the mean. The time-averaged autocorrelation is given by < x(t)x(t − τ ) > =
1 T0
=
1 T0
T0 /2
t=−T0 /2 ∞
x(t)x(t − τ ) dt
Rg (τ − mT0 ),
(2.57)
m=−∞
where Rg (τ ) is the autocorrelation of the generating function of x(t) (the generating function has been discussed earlier in this chapter of Haykin 2nd ed). The generating function can be conveniently taken to be
g(t) =
p(t − td ) for td ≤ t < td + T0 , 0 elsewhere
(2.58)
where p(t) is illustrated in Fig. 2.3a. Let P( f ) be the Fourier transform of p(t). We have
Fig. 2.3 Computing the autocorrelation of a periodic wave with random timing phase (the pulse $p(t)$, its autocorrelation $R_p(\tau)$, and $R_X(\tau)$)
Fig. 2.4 A periodic waveform with a random timing phase
Rg (τ ) = g(t) g(−t) |P( f )|2 p(t) p(−t) = R p (τ ).
(2.59)
Therefore, comparing (2.55) and (2.57) we find that $X(t)$ is ergodic in the autocorrelation.
10. A signal $x(t)$ with period $T_0$ and delay $t_d$ represents the sample function of a random process $X(t)$. This is illustrated in Fig. 2.4. The delay $t_d$ is a random variable which is uniformly distributed in $[0, T_0)$.
(a) Determine the mean and autocorrelation of $X(t)$ using ensemble averaging.
(b) Determine the mean and autocorrelation of $X(t)$ using time averaging.
(c) Is $X(t)$ WSS?
(d) Is $X(t)$ ergodic in the mean and the autocorrelation?
• Solution: Instead of solving this particular problem we try to solve a more general problem. Let p(t) denote an arbitrary pulse shape. Consider a random process X (t) defined by ∞
X (t) =
p(t − kT0 − td ),
(2.60)
k=−∞
where td is a uniformly distributed random variable in the range [0, T0 ). Clearly, in the absence of td , X (t) is no longer a random process and it simply becomes a periodic waveform. We wish to find out the mean and autocorrelation of the random process defined in (2.60). The mean of X (t) is given by E[X (t)] = E
∞
p(t − kT0 − td )
k=−∞
=
1 T0
∞
T0
td =0 k=−∞
p(t − kT0 − td ) dtd .
(2.61)
Interchanging the order of integration and summation and substituting t − kT0 − td = x
(2.62)
∞ t−kT0 1 E[X (t)] = p(x) d x. T0 k=−∞ x=t−kT0 −T0
(2.63)
we get
Combining the summation and the integral we get E[X (t)] =
1 T0
∞
p(x) d x.
(2.64)
x=−∞
For the given problem E[X (t)] =
3A . 8
The autocorrelation can be computed as E[X (t)X (t − τ )] = E
∞ k=−∞
p(t − kT0 − td )
(2.65)
90
2 Random Variables and Random Processes ∞
⎤ p(t − τ − j T0 − td )⎦
j=−∞
=
∞ ∞ 1 T0 k=−∞ j=−∞ T0 p(t − kT0 − td ) p(t − τ − j T0 − td ) dtd(2.66) . td =0
Substituting t − kT0 − td = x
(2.67)
we get E[X (t)X (t − τ )] =
∞ ∞ 1 T0 k=−∞ j=−∞ t−kT0 p(x) p(x + kT0 − τ − j T0 ) d x.(2.68) x=t−kT0 −T0
Let kT0 − j T0 = mT0 .
(2.69)
Substituting for j we get ∞ ∞ 1 E[X (t)X (t − τ )] = T0 k=−∞ m=−∞ t−kT0 p(x) p(x + mT0 − τ ) d x.
(2.70)
x=t−kT0 −T0
Now we interchange the order of summation and combine the summation over k and the integral to obtain E[X (t)X (t − τ )] = =
∞ ∞ 1 p(x) p(x + mT0 − τ ) d x T0 m=−∞ x=−∞ ∞ 1 R p (τ − mT0 ) = R X (τ ), T0 m=−∞
(2.71)
where R p (τ ) is the autocorrelation of p(t). Thus, the autocorrelation of X (t) is also periodic with a period T0 , hence X (t) is a cyclostationary random
2 Random Variables and Random Processes
91 p(t)
(a) 2A A
t T0 /8
T0 /4
Rp (τ )
(b)
5A2 T0 /8 2A2 T0 /8 τ −T0 /4
T0 /4
0 −T0 /8
T0 /8 RX (τ )
(c)
5A2 /8 2A2 /8 τ
−5T0 /4
−T0
−3T0 /4
3T0 /4
0
−T0 /4
T0
5T0 /4
T0 /4 −T0 /8
T0 /8
Fig. 2.5 Computing the autocorrelation of a periodic wave with random timing phase
process. This is illustrated in Fig. 2.5c. Since the random process is periodic, the time-averaged mean is given by 1 T0 3A = 8
< x(t) > =
td +T0
x(t) dt t=td
(2.72)
independent of td . Comparing (2.65) and (2.72) we find that X (t) is ergodic in the mean. The time-averaged autocorrelation is given by
92
2 Random Variables and Random Processes
1 < x(t)x(t − τ ) > = T0 =
1 T0
T0 /2
t=−T0 /2 ∞
x(t)x(t − τ ) dt
Rg (τ − mT0 ),
(2.73)
m=−∞
where Rg (τ ) is the autocorrelation of the generating function of x(t) (the generating function has been discussed earlier in this chapter of Haykin 2nd ed). The generating function can be conveniently taken to be
g(t) =
p(t − td ) for td ≤ t < td + T0 0 elsewhere
(2.74)
where p(t) is illustrated in Fig. 2.5a. Let P( f ) be the Fourier transform of p(t). We have Rg (τ ) = g(t) g(−t) |P( f )|2 p(t) p(−t) = R p (τ ).
(2.75)
Therefore, comparing (2.71) and (2.73) we find that X (t) is ergodic in the autocorrelation. 11. For two jointly Gaussian random variables X and Y with means m X and m Y , variances σ 2X and σY2 and coefficient of correlation ρ, compute the conditional pdf of X given Y . Hence compute the values of E[X |Y ] and var(X |Y ). • Solution: The joint pdf of two real-valued Gaussian random variables X and Y is given by 1
f X, Y (x, y) =
2πσ X σY 1 − ρ2 σY2 (x − m X )2 − 2σ X σY ρ(x − m X )(y − m Y ) + σ 2X (y − m Y )2 . × exp − 2σ 2X σY2 (1 − ρ2 ) (2.76)
We also know that f Y (y) = Therefore
σY
1 √
(y − m Y )2 . exp − 2σY2 2π
(2.77)
2 Random Variables and Random Processes
f X Y (x, y) f Y (y) 1 = σ X 2π(1 − ρ2 ) σ 2 A2 − 2σ X σY ρAB + ρ2 σ 2X B 2 × exp − Y , 2σ 2X σY2 (1 − ρ2 )
93
f X |Y (x|y) =
(2.78)
where we have made the substitution A = x − mX B = y − mY .
(2.79)
Simplifying (2.78) further we get 1 σ X 2π(1 − ρ2 ) (σY (x − m X ) − ρσ X (y − m Y ))2 × exp − 2σ 2X σY2 (1 − ρ2 ) 1 = σ X 2π(1 − ρ2 ) (x − (m X + ρ(σ X /σY )(y − m Y )))2 . × exp − 2σ 2X (1 − ρ2 )
f X |Y (x|y) =
(2.80)
By inspection we conclude from the above equation that X |Y is also a Gaussian random variable with mean and variance given by σX (y − m Y ) σY var (X |Y ) = σ 2X (1 − ρ2 ). E[X |Y ] = m X + ρ
(2.81)
12. If 1 (x − m)2 f X (x) = √ exp − 2σ 2 σ 2π compute (a) E (X − m)2n and (b) E (X − m)2n−1 .
(2.82)
• Solution: Clearly E (X − m)2n−1 = 0
(2.83)
since the integrand is an odd function. To compute E[(X − m)2n ] we use the method of integration by parts. We have
94
2 Random Variables and Random Processes
E (X − m)2n =
1 √ σ 2π
∞
(x − m)2n−1 (x − m)e−(x−m)
2
/(2σ 2 )
d x.
x=−∞
(2.84) The first function is taken as (x − m)2n−1 . The integral of the second function is (x − m)2 (x − m)2 2 d x = −σ exp − . (2.85) (x − m) exp − 2σ 2 2σ 2 x Therefore (2.84) becomes E (X − m)2n ∞ (x − m)2 1 = √ − σ 2 (x − m)2n−1 exp − 2σ 2 σ 2π x=−∞ ∞ (x − m)2 2n−2 2 − (2n − 1)(x − m) (−σ ) exp − dx 2σ 2 x=−∞ (2.86) = (2n − 1)σ 2 E (x − m)2n−2 . Using the above recursion we get E (X − m)2n = 1 · 3 · 5 · · · (2n − 1)σ 2n .
(2.87)
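A Monte Carlo spot-check of (2.87) for $n = 3$ (the sixth central moment); the values of $m$ and $\sigma$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
m, sigma, n = 1.0, 2.0, 3
x = rng.normal(m, sigma, 5_000_000)

estimate = np.mean((x - m) ** (2 * n))
exact = np.prod(np.arange(1, 2 * n, 2)) * sigma ** (2 * n)   # 1*3*5 * sigma^6 = 960
print(estimate, exact)                  # agree up to Monte Carlo error
```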
13. X is a uniformly distributed random variable between zero and one. Find out the transformation Y = g(X ) such that Y is a Rayleigh distributed random variable. You can assume that the mapping between X and Y is one-to-one and Y monotonically increases with X . There should not be any unknown constants in your answer. The Rayleigh pdf is given by y y2 f Y (y) = 2 exp − 2 σ 2σ
for y ≥ 0.
(2.88)
Note that f Y (y) is zero for y < 0. • Solution: Note that
f X (x) =
1 for 0 ≤ x ≤ 1 . 0 elsewhere
(2.89)
Since the mapping is assumed to be one-to-one and monotonically increasing (dy/d x is always positive), we have ⇒
Y y=0
f Y (y) dy = f X (x) d x X f Y (y) dy = f X (x) d x. 0
(2.90)
2 Random Variables and Random Processes
95
Substituting for f Y (y) and f X (x) we get 1 − e−Y
2
/(2σ 2 )
=X
⇒ Y = −2σ 2 ln(1 − X ). 2
(2.91)
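The transformation (2.91) is exactly the inverse-CDF method for generating Rayleigh variates; the sketch below compares the empirical CDF of the generated samples with $1 - e^{-y^2/(2\sigma^2)}$. The sample size and $\sigma$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.7
x = rng.uniform(0.0, 1.0, 1_000_000)
y = np.sort(np.sqrt(-2 * sigma ** 2 * np.log(1.0 - x)))   # the transformation (2.91)

emp = np.arange(1, y.size + 1) / y.size
ana = 1.0 - np.exp(-y ** 2 / (2 * sigma ** 2))            # Rayleigh CDF
print(np.max(np.abs(emp - ana)))                          # of the order of 1e-3
```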
14. Given that Y is a Gaussian distributed random variable with zero mean and variance σ 2 , use the Chernoff bound to compute the probability that Y ≥ δ for δ > 0. • Solution: We know that P[Y ≥ δ] ≤ E eμ(Y −δ) ,
(2.92)
where μ needs to be found out such that the RHS is minimized. Hence we set d μ(Y −δ) E e = 0. dμ
(2.93)
The optimum value of μ = μ0 satisfies E Y eμ0 Y = δ E eμ0 Y ∞ ∞ 1 δ 2 2 2 2 ⇒ √ yeμ0 y e−y /2σ dy = √ eμ0 y e−y /2σ dy σ 2π y=−∞ σ 2π y=−∞ δ (2.94) ⇒ μ0 = 2 . σ Therefore P[Y ≥ δ] ≤ e−μ0 δ E eμ0 Y ⇒ P[Y ≥ δ] ≤ e−δ /σ eδ /(2σ 2 2 ⇒ P[Y ≥ δ] ≤ e−δ /(2σ ) . 2
2
2
2
)
(2.95)
15. If two Gaussian distributed random variables, X and Y are uncorrelated, are they statistically independent? Justify your statement. • Solution: Since X and Y are uncorrelated E[(X − m x )(Y − m Y )] = 0 ⇒ ρ = E[(X − m X )(Y − m Y )]/σ X σY = 0. Therefore the joint pdf is given by
(2.96)
96
2 Random Variables and Random Processes SN (f ) (Watts/Hz) 1
f (Hz) −7
−5 −4
0
4
5
7
Fig. 2.6 Power spectral density of a narrowband noise process
1 1 2 2 2 2 √ e−(x−m X ) /(2σ X ) √ e−(y−m Y ) /(2σY ) σ X 2π σY 2π = f X (x) f Y (y).
f X Y (x, y) =
(2.97)
Hence X and Y are also statistically independent. 16. (Haykin 1983) The power spectral density of a narrowband noise process is shown in Fig. 2.6. The carrier frequency is 5 Hz. (a) Plot the power spectral density of Nc (t) and Ns (t). (b) Plot the cross-spectral density S Nc Ns ( f ). • Solution: We know that
S Nc ( f ) = S Ns ( f ) =
S N ( f − f c ) + S N ( f + f c ) for | f | < f c (2.98) 0 otherwise.
The psd of the in-phase and quadrature components is plotted in Fig. 2.7. We also know that the cross-spectral densities are given by
j [S N ( f + f c ) − S N ( f − f c )] for | f | < f c 0 otherwise. = −S Ns Nc ( f ).
S Nc Ns ( f ) =
(2.99)
The cross-spectral density S Nc Ns ( f ) is plotted in Fig. 2.8. 17. Let a random √ variable X have a uniform pdf over −1 ≤ x ≤ 3. Compute the pdf of Z = |X |. The transformation is plotted in Fig. 2.9. • Solution: Using z and x as dummy variables corresponding to Z and X , respectively, we note that
z = 2
Differentiating both sides we get
x for x > 0 −x for x < 0.
(2.100)
2 Random Variables and Random Processes
97
SN (f ) (Watts/Hz) 1 S2 (f )
S1 (f ) f (Hz)
−7
−5 −4
0
4
5
7 Watts/Hz
S1 (f + fc ) 1 0.5 f (Hz)
0
0 S2 (f − fc ) 1 0.5 f (Hz)
0
SNc (f ) = SNs (f ) 2
0.5 f (Hz)
0
f (Hz) −2 −1
0
1
2
Fig. 2.7 Power spectral density of Nc (t) and Ns (t)
2z dz =
d x for x > 0 −d x for x < 0.
(2.101)
This implies that
dz 1/(2z) for x > 0 = −1/(2z) for x < 0 dx dz 1 . ⇒ = dx 2z
(2.102)
98
2 Random Variables and Random Processes SN (f ) (Watts/Hz) 1
S2 (f )
S1 (f )
f (Hz) −7
−5 −4
0
4
5
7 Watts/Hz
S1 (f + fc ) 1 0.5 f (Hz) f (Hz)
0 0 −0.5 −1
−S2 (f − fc ) SNc Ns (f )/j
0.5 f (Hz)
0 −0.5
f (Hz) −2 −1
0
1
2
Fig. 2.8 Cross-spectral densities of Nc (t) and Ns (t)
Now, in the range −1 ≤ x ≤ 1 corresponding to 0 ≤ z ≤ 1, the mapping from X to Z is surjective (many-to-one) as indicated in Fig. 2.9. Hence f X (x) f X (x) + f Z (z) = |dz/d x| x=−z 2 |dz/d x| x=z 2 1/4 ⇒ f Z (z) = 2 1/(2z) =z for 0 ≤ z ≤ 1. In the range 1 ≤ x ≤ 3 corresponding to 1 ≤ z ≤ Z is one-to-one (injective). Hence
for 0 ≤ z ≤ 1
(2.103)
√ 3 the mapping from X to
2 Random Variables and Random Processes
99
1.8 1.6 1.4
Z
1.2 1 0.8 0.6 0.4 0.2 0
-1
1.5
1
0.5
0
-0.5
2
2.5
3
X
Fig. 2.9 The transformation Z =
√
|X |
f X (x) |dz/d x| x=z 2 1/4 ⇒ f Z (z) = 1/(2z) √ z for 1 ≤ z ≤ 3. = 2 f Z (z) =
(2.104)
To summarize
f Z (z) =
z for 0 ≤ z ≤ √ 1 z for 1 ≤ z ≤ 3. 2
(2.105)
18. Given that FX (x) =
π + 2 tan−1 (x) 2π
for − ∞ < x < ∞
(2.106)
find the pdf of the random variable Z given by
Z=
X 2 for X ≥ 0 −1 for X < 0
(2.107)
100
2 Random Variables and Random Processes
• Solution: Note that FX (−∞) = 0 FX (∞) = 1.
(2.108)
The pdf of X is given by d FX (x) dx 1 = . π(1 + x 2 )
f X (x) =
(2.109)
Note that
dz 2x for x ≥ 0 = 0 for x < 0 dx
√ dz 2 z for x ≥ 0 = ⇒ 0 for x < 0. dx
(2.110)
We also note that P(Z = −1) = P(X < 0) = FX (0) = 1/2.
(2.111)
Therefore f Z (z) must have a delta function at Z = −1. Further, the mapping from X to Z is one-to-one (injective) in the range 0 ≤ x < ∞. Hence, the pdf of Z is given by
f Z (z) =
kδ(z + 1) for z = −1 f X (x)/(dz/d x)|x=√z for 0 ≤ z < ∞,
(2.112)
where k is a constant such that the pdf of Z integrates to unity. Simplifying the above expression we get
f Z (z) =
kδ(z + 1) for z = −1 √ 1/(2π(1 + z) z) for 0 ≤ z < ∞.
(2.113)
Since
∞
f Z (z) dz = P(Z > 0) = P(X > 0) = 1 − FX (0) = 1/2 (2.114)
z=0
we must have k = 1/2. 19. (Papoulis 1991) Let X be a nonnegative continuous random variable and let a be any positive constant. Prove the Markov inequality given by
2 Random Variables and Random Processes
101
Fig. 2.10 Illustration of the Markov’s inequality X/a g(X)
1
X a
Fig. 2.11 The transformation Y = g(X )
Y = g(X) 5 2 X −2
−1
0
1
P(X ≥ a) ≤ m X /a,
2
(2.115)
where m X = E[X ]. • Solution: Consider a function g(X ) given by
g(X ) =
1 for X > a 0 for X < a
(2.116)
as shown in Fig. 2.10. Clearly g(X ) ≤ X/a for X, a ≥ 0 ∞ g(x) f X (x) d x ≤ (x/a) f X (x) d x ⇒ x=0 x=0 ∞ f X (x) d x ≤ m X /a ⇒
∞
x=a
⇒ P(X > a) ≤ m X /a
(proved).
(2.117)
20. Let a random variable X have a uniform pdf over −2 ≤ X ≤ 2. Compute the pdf of Y = g(X ), where
g(X ) =
for |X | ≤ 1 2X 2 3|X | − 1 for 1 ≤ |X | ≤ 2.
(2.118)
• Solution: We first note that the mapping g(x) is many-to-one. This is illustrated in Fig. 2.11. Let y and x denote particular values taken by the RVs Y and X , respectively. Therefore we have
102
2 Random Variables and Random Processes
[ f X (x) + f X (−x)] |d x| = f Y (y) |dy|.
(2.119)
Note that
f X (x) =
1/4 for − 2 < x < 2 0 elsewhere.
(2.120)
Substituting (2.120) in (2.119) we have: 1 d x = f Y (y). 2 dy Next we observe that
√ dx = 1/(2 2y) for 0 < y < 2 dy 1/3 for 2 < y < 5.
(2.121)
(2.122)
Substituting (2.122) in (2.121) we get
f Y (y) =
√ 1/(4 2y) for 0 < y < 2 1/6 for 2 < y < 5.
(2.123)
21. A Gaussian distributed random variable X having zero mean and variance σ 2X is transformed by a square-law device defined by Y = X 2 . (a) Compute the pdf of Y . (b) Compute E[Y ]. • Solution: We first note that the mapping is many-to-one. Thus the probability that Y lies in the range [y, y + dy] is equal to the probability that X lies in the range [x1 , x1 + d x1 ] and [x2 , x2 + d x2 ], as illustrated in Fig. 2.12. Observe that d x2 is negative. Mathematically f Y (y) dy = f X (x1 ) d x1 − f X (x2 ) d x2 f X (x) f X (x) ⇒ f Y (y) = + , |dy/d x| x=√ y |dy/d x| x=−√ y
(2.124)
where dy dy = d x1 d x x=x1 with x1 =
√
√ y and x2 = − y. Since
(2.125)
2 Random Variables and Random Processes
103
4.5 4 3.5 3 Y
2.5 2 1.5
y + dy y
1 0.5 0
x2 + dx2
-2
-1.5
x2
x1 + dx1
x1
-1
-0.5
0 X
0.5
1
1.5
2
Fig. 2.12 The transformation Y = X 2
f X (x) =
1 2 2 √ e−x /(2σ X ) σ X 2π
dy = 2x dx
(2.126)
we have f Y (y) =
1 2 e−y/(2σ X ) √ σ X 2π y
for y ≥ 0.
(2.127)
Finally E[Y ] = E[X 2 ] = σ 2X .
(2.128)
22. Consider two real-valued random variables A and B. (a) Prove that E 2 [AB] ≤ E[A2 ]E[B 2 ]. (b) Now, let X (t) = A and X (t + τ + τ p ) − X (t + τ ) = B. Prove that if R X (τ p ) = R X (0) for τ p = 0, then R X (τ ) is periodic with period τ p . • Solution: Note that E (Ax ± B)2 ≥ 0 ⇒ x 2 E A2 ± 2x E[AB] + E B 2 ≥ 0.
(2.129)
104
2 Random Variables and Random Processes
Thus we get a quadratic equation in x whose discriminant is nonpositive. Therefore E 2 [AB] − E A2 E B 2 ≤ 0
(2.130)
which proves the first part. Substituting A = X (t) B = X (t + τ + τ p ) − X (t + τ )
(2.131)
in we obtain E[AB] = E[X (t)[X (t + τ + τ p ) − X (t + τ )]] = R X (τ + τ p ) − R X (τ ) E A2 = R X (0) E B 2 = 2[R X (0) − R X (τ p )].
(2.132)
Substituting (2.132) in (2.130) we get
R X (τ + τ p ) − R X (τ )
2
≤ 2[R X (0) − R X (τ p )].
(2.133)
Since it is given that R X (0) = R X (τ p ), (2.133) becomes
R X (τ + τ p ) − R X (τ )
2
≤ 0.
(2.134)
However (2.134) cannot be less than zero; it can only be equal to zero. Therefore R X (τ + τ p ) = R X (τ )
(2.135)
which implies that R X (τ ) is periodic with period τ p . 23. Let X and Y be two independent and uniformly distributed random variables over [−a, a]. (a) Compute the pdf of Z = X + Y . (b) If a = π and X , Y , Z denote the phase over [−π, π], recompute the pdf of Z . Assume that X and Y are independent. • Solution: We note that the pdf of the sum of two independent random variables is equal to the convolution of the individual pdfs. Therefore f Z (z) = f X (z) f Y (z) ∞ f X (α) f Y (z − α) dα. = α=−∞
(2.136)
2 Random Variables and Random Processes Fig. 2.13 The pdf of Z = X +Y
105 fX (x) (a)
1/(2a) x −a
a
0
fZ (z) (b)
1/(2a) z −2a
0
2a fZ (z)
(c)
1/(2a) z −π
π
0
fZ (z) (d)
1/(2a) z −π
0
π
For the given problem, f Z (z) is illustrated in Fig. 2.13b. When a = π, we note that the pdf of Z extends over [−2π, 2π]. Since it is given that the phase extends only over [−π, π], the values of Z over [−2π, −π] is identical to [0, π]. Similarly, the values of Z over [π, 2π] is identical to [−π, 0]. Therefore, the modified pdf of Z over [−π, π] is computed as follows: f Z (z)
=
f Z (z) + f Z (z − 2π) for 0 ≤ z ≤ π f Z (z) + f Z (z + 2π) for − π ≤ z ≤ 0.
(2.137)
This is illustrated in Fig. 2.13c. The resulting pdf is shown in Fig. 2.13d. Thus we find that the modified pdf of Z is also uniform in [−π, π]. 24. Let X and Y be two independent RVs with pdfs given by
f X (x) =
1/4 for − 2 ≤ x ≤ 2 0 otherwise
(2.138)
Ae−3y for 0 ≤ y < ∞ 0 otherwise.
(2.139)
and
f Y (y) =
106
2 Random Variables and Random Processes
(a) Find A. (b) Find the pdf of Z = 3X + 4Y . • Solution: We have
⇒A
∞
f Y (y) dy = 1
y=0 ∞
e−3y dy = 1
y=0
⇒ A = 3.
(2.140)
Let U = 3X . Then
fU (u) =
1/12 for − 6 ≤ u ≤ 6 0 otherwise.
(2.141)
Similarly, let V = 4Y . Clearly 0 ≤ v < ∞. Since the mapping between V and Y is one-to-one, we have f V (v) dv = f Y (y) dy f Y (y) ⇒ f V (v) = dv/dy y=v/4 3 ⇒ f V (v) = e−3v/4 for 0 ≤ v < ∞ 4 3 = e−3v/4 S(v), 4
(2.142)
where S(v) denotes the modified unit step function defined by
S(v) =
1 for v ≥ 0 0 for v < 0.
(2.143)
Since X and Y are independent, so are U and V . Moreover, Z = U + V , therefore −6 ≤ z < ∞. Hence f Z (z) = fU (z) f V (z) ∞ = fU (α) f V (z − α) dα α=−∞
1 −3z/4 z e e3α/4 dα for z α=−6 3α/4 = 16 1 −3z/4 6 e dα for z α=−6 e 16
1 −3z/4 3z/4 −9/2 e e − e for z 12 = 1 −3z/4 9/2 e − e−9/2 for z e 12
≤6 ≥6 ≤6 ≥ 6.
(2.144)
2 Random Variables and Random Processes Fig. 2.14 Two filters connected in cascade
107
X(t)
h1 (t)
V (t)
Y (t)
h2 (t)
It can be verified that
∞
f Z (z) dz = 1.
(2.145)
z=−6
25. (Haykin 1983) Consider two linear filters h 1 (t) and h 2 (t) connected in cascade as shown in Fig. 2.14. Let X (t) be a WSS process with autocorrelation R X (τ ). (a) Find the autocorrelation function of Y (t). (b) Find the cross-correlation function RV Y (t). • Solution: Let g(t) = h 1 (t) h 2 (t).
(2.146)
Y (t) = X (t) g(t).
(2.147)
Then
Hence RY (τ ) = E [Y (t)Y (t − τ )] ∞ ∞ g(α)X (t − α) dα g(β)X (t − τ − β) dβ =E α=−∞ β=−∞ ∞ ∞ g(α)g(β)R X (τ + β − α) dα dβ. (2.148) = α=−∞
β=−∞
The cross-correlation is given by RV Y (τ ) = E [V (t)Y (t − τ )] ∞ h 1 (α)X (t − α) dα =E α=−∞ ∞ h 2 (β)V (t − τ − β) dβ × β=−∞ ∞ ∞ h 1 (α)h 2 (β)X (t − α) =E α=−∞ ∞
×
γ=−∞
β=−∞
h 1 (γ)X (t − τ − β − γ) dγ dα dβ
108
2 Random Variables and Random Processes
=
∞
α=−∞
∞ β=−∞
∞
γ=−∞
h 1 (α)h 2 (β)h 1 (γ)
× R X (τ + β + γ − α) dγ dα dβ.
(2.149)
26. (Haykin 1983) Consider a pair of WSS processes X (t) and Y (t). Show that their cross-correlations R X Y (τ ) and RY X (τ ) have the following properties: (a) RY X (τ ) = R X Y (−τ ). (b) |R X Y (τ )| ≤ 21 [R X (0) + RY (0)]. • Solution: We know that RY X (τ ) = E [Y (t)X (t − τ )] .
(2.150)
Now substitute t − τ = α to get RY X (τ ) = E [Y (α + τ )X (α).] .
(2.151)
Substituting τ = −β we get RY X (−β) = E [Y (α − β)X (α)] = R X Y (β).
(2.152)
Thus the first part is proved. To prove the second part we note that E (X (t) ± Y (t − τ ))2 1 E X (t)2 + E Y (t − τ )2 ⇒ 2 1 ⇒ [R X (0) + RY (0)] 2 1 ⇒ [R X (0) + RY (0)] 2
≥0 ≥ ∓E [X (t)Y (t − τ )] ≥ ∓R X Y (τ ) ≥ |R X Y (τ )|.
(2.153)
27. (Haykin 1983) Given that a stationary random process X (t) has the autocorrelation function R X (τ ) and power spectral density S X ( f ) show that (a) The autocorrelation function of d X (t)/dt is equal to minus the second derivative of R X (τ ). (b) The power spectral density of d X (t)/dt is equal to 4π 2 f 2 S X ( f ). (c) If S X ( f ) = 2 rect ( f /W ), compute the power of d X (t)/dt. • Solution: Let Y (t) =
d X (t) . dt
(2.154)
It is clear that Y (t) can be obtained by passing X (t) through an ideal differentiator. We know that the Fourier transform of an ideal differentiator is
2 Random Variables and Random Processes
109
H ( f ) = j 2π f.
(2.155)
SY ( f ) = S X ( f )|H ( f )|2 = 4π 2 f 2 S X ( f ) ∞ SY ( f ) exp (j 2π f τ ) d f ⇒ RY (τ ) = f =−∞ ∞ f 2 S X ( f ) exp (j 2π f τ ) d f. = 4π 2
(2.156)
Hence the psd of Y (t) is
f =−∞
Thus, the second part of the problem is proved. To prove the first part, we note that ∞ S X ( f ) exp (j 2π f τ ) d f R X (τ ) = f =−∞
d R X (τ ) = −4π 2 ⇒ dτ 2 2
∞ f =−∞
f 2 S X ( f ) exp (j 2π f τ ) d f.
(2.157)
Comparing (2.156) and (2.157) we see that RY (τ ) = −
d 2 R X (τ ) . dτ 2
(2.158)
The power of Y (t) is given by
E Y (t) = 2
=
∞
f =−∞ W/2
SY ( f ) d f 8π 2 f 2 d f
f =−W/2
2 = π2 W 3 . 3
(2.159)
28. (Haykin 1983) The psd of a random process X (t) is shown in Fig. 2.15.
Fig. 2.15 Power spectral density of a random process X (t)
SX (f ) δ(f ) 1 f −f0
f0
110
2 Random Variables and Random Processes
(a) (b) (c) (d)
Determine R X (τ ). Determine the dc power in X (t). Determine the ac power in X (t). What sampling-rates will give uncorrelated samples of X (t)? Are the samples statistically independent?
• Solution: The psd of X (t) can be written as |f| . S X ( f ) = δ( f ) + 1 − f0
(2.160)
We know that A rect (t/T0 ) AT0 sinc ( f T0 ) ⇒ A rect (t/T0 ) A rect (−t/T0 ) A2 T02 sinc2 ( f T0 ) |t| 2 A2 T02 sinc2 ( f T0 ). ⇒ A T0 1 − T0
(2.161)
Applying duality |f| A2 f 02 sinc2 (t f 0 ). A2 f 0 1 − f0
(2.162)
Given that A2 f 0 = 1. Hence |f| f 0 sinc2 (t f 0 ). 1− f0
(2.163)
R X (τ ) = 1 + f 0 sinc2 ( f 0 τ ).
(2.164)
Thus
Now, consider a real-valued random process Z (t) given by Z (t) = A + Y (t),
(2.165)
where A is a constant and Y (t) is a zero-mean random process. Clearly E[Z (t)] = A R Z (τ ) = E[Z (t)Z (t − τ )] = A2 + RY (τ ).
(2.166)
Thus we conclude that if a random process has a DC component equal to A, then the autocorrelation has a constant component equal to A2 . Hence from (2.164) we get
2 Random Variables and Random Processes
111
E[X (t)] = ±1 = m X
(say).
(2.167)
Hence the DC power (contributed by the delta function of the psd) is unity. The AC power (contributed by the triangular part of the psd) is f 0 . The covariance of X (t) is K X (τ ) = cov (X (t)X (t − τ )) = E[(X (t) − m X )(X (t − τ ) − m X )] = f 0 sinc2 ( f 0 τ ).
(2.168)
f 0 for τ = 0 0 for τ = n(k/ f 0 ), n, k = 0,
(2.169)
It is clear that
K X (τ ) =
where n and k are positive integers. Thus, when X (t) is sampled at a rate equal to f 0 /k, the samples are uncorrelated. However the samples may not be statistically independent. 29. The random variable X has a uniform distribution over 0 ≤ x ≤ 2. For the random process defined by V (t) = 6e X t compute (a) E[V (t)] (b) E[V (t)V(t − τ )] (c) E V 2 (t) • Solution: Note that E[V (t)] = 6E e X t 6 2 xt = e dx 2 x=0 3 2t e −1 . = t
(2.170)
The autocorrelation is given by E [V (t)V (t − τ )] = 36E e X t e X (t−τ ) 36 2 x(2t−τ ) = e dx 2 x=0 2(2t−τ ) 36 e −1 = 2(2t − τ ) 18 2(2t−τ ) e −1 . = 2t − τ Hence
(2.171)
112
2 Random Variables and Random Processes RY (τ ) 3 2 1 −4T
−3T
−2T
−T
0
τ T
2T
3T
4T
Fig. 2.16 Autocorrelation of a random process Y (t)
E V 2 (t) = E [V (t)V (t − τ )]τ =0 9 4t e −1 . = t
(2.172)
30. √ (Haykin 1983) A random process Y (t) consists of a DC component equal to 3/2 V, a periodic component G(t) having a random timing phase, and a random component X (t). Both G(t) and X (t) have zero mean and are independent of each other. The autocorrelation of Y (t) is shown in Fig. 2.16. Compute (a) The DC power. (b) The average power of G(t). Sketch a sample function of G(t). (c) The average power of X (t). Sketch a sample function of X (t). • Solution: The random process Y (t) can be written as: Y (t) = A + G(t) + X (t)
(2.173)
where G(t) and X (t) have zero mean and are independent of each other. The autocorrelation of Y (t) can be written as RY (τ ) = E[Y (t)Y (t − τ )] = E[(A + G(t) + X (t))(A + G(t − τ ) + X (t − τ ))] = A2 + RG (τ ) + R X (τ ), (2.174) where RG (τ ) and R X (τ ) denote the autocorrelation of G(t) and X (t), respectively, and A2 denotes the DC power. Note that since G(t) is periodic with a period T0 = 2T , we have RG (τ ) = E[G(t)G(t − τ )] = E[G(t)G(t − τ − kT0 )] = RG (τ + kT0 ).
(2.175)
2 Random Variables and Random Processes
113
Therefore RG (τ ) is also √ periodic with a period of kT0 . Moreover, since A is given to be equal to 3/2, we have A2 = 3/2. Therefore the DC power is 3/2. By inspecting Fig. 2.16 we conclude that G(t) and X (t) have the autocorrelation as indicated in Fig. 2.17a, b, respectively. Thus, the power in the periodic component is RG (0) = 0.5 and the power in the random component is R X (0) = 1. Recall that a periodic signal with period T0 and random timing phase td can be expressed as a random process: G(t) =
∞
p(t − kT0 − td ),
(2.176)
k=−∞
where td is a uniformly distributed random variable in the range [0, T0 ) and p(·) is a real-valued waveform. A sample function of G(t) and the corresponding p(t) is shown in Fig. 2.17c. Recall that the autocorrelation of G(t) is RG (τ ) =
∞ 1 R p (τ − mT0 ), T0 m=−∞
(2.177)
where R p (·) denotes the autocorrelation of p(t) in (2.176). Similarly, recall that a random binary wave is given by X (t) =
∞
Sk p(t − kT0 − α),
(2.178)
k=−∞
where Sk denotes a discrete random variable taking values ±A with equal probability, p(·) denotes a real-valued waveform, 1/T0 denotes the bit-rate and α denotes a random timing phase uniformly distributed in [0, T0 ). It is assumed that Sk is independent of α and Sk is independent of S j for k = j. A sample function of X (t) along with the corresponding p(t) is shown in Fig. 2.17d. Recall that the autocorrelation of X (t) is R X (τ ) =
A2 R p (τ ), T0
(2.179)
where R p (·) is the autocorrelation of p(·) in (2.178). In this problem, Sk = ±1. 31. The moving average of a random signal X (t) is defined as Y (t) =
1 T
t+T /2 λ=t−T /2
X (λ) dλ.
(2.180)
114
2 Random Variables and Random Processes RG (τ )
(a)
0.5 −3T
−T
T 0
−2T
3T
τ
2T −0.5 RX (τ )
(b)
1
τ −T (c)
T
0 g(t) √ 0.5
p(t) T
√ 0.5 T T0
t 0
t
0 √ − 0.5
√ − 0.5
T
T td (d)
x(t) √ 2
p(t) √ 2 T T T
t 0
0
t T0
√ − 2 T td
Fig. 2.17 a Autocorrelation of G(t). b Autocorrelation of X (t). c Sample function of G(t) and the corresponding p(t). d Sample function of X (t) and the corresponding p(t)
2 Random Variables and Random Processes
115
(a) The output Y (t) can be written as Y (t) = X (t) h(t).
(2.181)
Identify h(t). (b) Obtain a general expression for the autocorrelation of Y (t) in terms of the autocorrelation of X (t) and the autocorrelation of h(t). (c) Using the above relation, compute RY (τ ). • Solution: The random process Y (t) can be written as Y (t) = X (t) h(t),
(2.182)
t . T
(2.183)
where 1 h(t) = rect T We know that RY (τ ) =
∞
α=−∞
∞ β=−∞
h(α)h(β)R X (τ − α + β) dα dβ.
(2.184)
Let α − β = λ.
(2.185)
Substituting for β we get RY (τ ) =
∞ α=−∞
∞ λ=−∞
h(α)h(α − λ)R X (τ − λ) dα dλ.
(2.186)
Interchanging the order of the integrals we get RY (τ ) = =
∞
λ=−∞ ∞ λ=−∞
R X (τ − λ) dλ.
∞ α=−∞
h(α)h(α − λ) dα
R X (τ − λ)Rh (λ) dλ.
(2.187)
(1/T ) (1 − (|τ |/T )) for |τ | < T 0 elsewhere.
(2.188)
For the given problem
Rh (τ ) = Therefore
116
2 Random Variables and Random Processes
1 RY (τ ) = T
T
λ=−T
|λ| dλ. R X (τ − λ) 1 − T
(2.189)
32. Let a random process X (t) be applied at the input of a filter with impulse response h(t). Let the output process be denoted by Y (t). (a) Show that RY X (τ ) = h(τ ) R X (τ ).
(2.190)
(b) Show that SY X ( f ) = H ( f )S X ( f ). • Solution: Note that Y (t) =
∞ α=−∞
h(α)X (t − α) dα.
(2.191)
Therefore RY X (τ ) = E[Y (t)X (t − τ )] ∞ h(α)X (t − α)X (t − τ ) dα . =E
(2.192)
α=−∞
Interchanging the order of the expectation and the integral,we get RY X (τ ) = =
∞ α=−∞ ∞ α=−∞
h(α)E [X (t − α)X (t − τ )] dα h(α)R X (τ − α) dα
= h(τ ) R X (τ ).
(2.193)
Taking the Fourier transform of both sides and noting that convolution in the time-domain is equivalent to multiplication in the frequency domain, we get SY X ( f ) = H ( f )S X ( f ).
(2.194)
33. Let X (t1 ) and X (t2 ) be two random variables obtained by observing the random process X (t) at times t1 and t2 , respectively. It is given that X (t1 ) and X (t2 ) are uncorrelated. Does this imply that X (t) is WSS? • Solution: Since X (t1 ) and X (t2 ) are uncorrelated we have E[(X (t1 ) − m X (t1 ))(X (t2 ) − m X (t2 ))] = 0 ⇒ R X (t1 , t2 ) = m X (t1 )m X (t2 ), (2.195)
2 Random Variables and Random Processes
117
Fig. 2.18 PSD of N (t)
SN (f ) N0 /2
f −f2
−f1
0
f1
f2 W
where m X (t1 ) and m X (t2 ) are the means at times t1 and t2 , respectively. From the above equation, it is clear that X (t) need not be WSS. 34. The voltage at the output of a noise generator, whose statistics is known to be a WSS Gaussian process, is measured with a DC voltmeter and a true rms (root mean square) meter. The DC voltmeter reads 3 V. The rms meter, when it is AC coupled (DC component is removed), reads 2 V. (a) Find out the pdf of the voltage at any time. (b) What would be the reading of the rms meter if it is DC coupled? • Solution: The DC meter reads the average (mean) value of the voltage. Hence the mean of the voltage is m = 3 V. When the rms meter is AC coupled, it reads the standard deviation (σ). Hence σ = 2 V. Thus the pdf of the voltage takes the form pV (v) =
1 2 2 √ e−(v−m) /(2σ ) . σ 2π
(2.196)
We know that E[V 2 ] = σ 2 + m 2 = 13. Thus, when the rms meter is DC coupled, the reading would be
(2.197) √
13.
35. (Ziemer and Tranter 2002) This problem demonstrates the non-uniqueness of the representation of narrowband noise. Here we show that the in-phase and quadrature components depend on the choice of the carrier frequency f c . A narrowband noise process of the form N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t)
(2.198)
has a psd as indicated in Fig. 2.18, where f 1 , f 2 W . Sketch the psd of Nc (t) and Ns (t) for the following cases: (a) f c = f 1
118
2 Random Variables and Random Processes
(b) f c = f 2 (c) f c = ( f 1 + f 2 )/2 • Solution: It is given that f 2 − f 1 = W,
(2.199)
where f 1 , f 2 W . We know that
S N ( f − f c ) + S N ( f + f c ) for | f | < f c 0 otherwise. = S Ns ( f ).
S Nc ( f ) =
(2.200)
The psd of Nc (t) and Ns (t) for f c = f 1 is indicated in Fig. 2.19. The psd of Nc (t) and Ns (t) for f c = f 2 is shown in Fig. 2.20. The psd of Nc (t) and Ns (t) for f c = ( f 1 + f 2 )/2 is shown in Fig. 2.21. 36. Consider the system used for generating narrowband noise, as shown in Fig. 2.22. The Fourier transform of the bandpass filter is also shown. Assume that the narrowband noise representation is of the form N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(2.201)
The input W (t) is a zero mean, WSS, white Gaussian noise process with a psd equal to 2 × 10−8 W/Hz. (a) Sketch and label the psd of Nc (t) when f c = 22 MHz. Show the steps, starting from S N ( f ). (b) Write down the pdf of the random variable X obtained by observing Nc (t) at t = 2 s. (c) Sketch and label the cross-spectral density S Nc Ns ( f ) when f c = 22 MHz. (d) Derive the expression for the correlation coefficient (ρ(τ )) between Nc (t) and Ns (t − τ ) for f c = 22 MHz. (e) For what carrier frequency f c , is ρ(τ ) = 0 for all values of τ ? Assume the following:
R Nc Ns (τ ) = E[Nc (t)Ns (t − τ )]
R Nc (τ ) = E[Nc (t)Nc (t − τ )].
(2.202)
• Solution: The psd of N (t) is given by S N ( f ) = SW ( f )|H ( f )|2 N0 = |H ( f )|2 . 2
(2.203)
2 Random Variables and Random Processes
119 SN (f ) N0 /2
f −f2
−f1
f1
0
f2
W SN (f − f1 ) N0 /2
f
2f1 −W
f1 + f2
0 SN (f + f1 ) N0 /2
−2f1
f
−f1 − f2
W
0
SNc (f ) = SNs (f ) N0 /2
f −W
0
W
Fig. 2.19 PSD of Nc (t) and Ns (t) for f c = f 1
The psd of Nc (t) is shown in Fig. 2.23. Since W (t) is zero mean, WSS Gaussian random process and H ( f ) is an LTI system, N (t) is also zero mean, WSS Gaussian random process. Thus Nc (t) and Ns (t) are also zero mean, WSS Gaussian random processes. Therefore X = Nc (2) is a Gaussian RV with pdf f X (x) = Here
1 2 2 √ e−(x−m) /(2σ ) . σ 2π
(2.204)
120
2 Random Variables and Random Processes SN (f ) N0 /2
f −f1
−f2
f1
0
f2
W SN (f − f2 ) N0 /2
f1 + f2 W
0
f 2f2
SN (f + f2 ) N0 /2
−f1 − f2 −2f2
f −W
0 SNc (f ) = SNs (f ) N0 /2
f −W
0
W
Fig. 2.20 PSD of Nc (t) and Ns (t) for f c = f 2
m= 0 σ2 =
∞ f =−∞
S Nc ( f ) d f = 0.68.
(2.205)
The cross-spectral density S Nc Ns ( f ) is plotted in Fig. 2.24. The cross-spectral density is of the form S Nc Ns ( f ) = j [A1 rect (( f − f 1 )/B) + A2 rect (( f − f 2 )/B) − A1 rect (( f + f 1 )/B) − A2 rect (( f + f 2 )/B)](2.206) , where
2 Random Variables and Random Processes
121 SN (f ) N0 /2
f −f1
−f2
f2
f1
0
W SN (f − fc ) N0 /2
f −W/2 0 W/2
f1 + fc
f2 + fc
SN (f + fc ) N0 /2
f −f2 − fc −f1 − fc
−W/2 0 W/2 SNc (f ) = SNs (f ) N0 /2
f −W/2 0 W/2
Fig. 2.21 PSD of Nc (t) and Ns (t) for f c = ( f 1 + f 2 )/2 W (t)
N (t)
H(f )
H(f ) 3 2 f (MHz) −24 −23 −22 −21
0
21
22
Fig. 2.22 System used for generating narrowband noise
23
24
122
2 Random Variables and Random Processes SN (f ) (×10−8 ) 18 8 f (MHz) −24 −23 −22 −21
0
21
22
23
24
SN (f − fc ) (×10−8 ) 18 8 f (MHz)
−2 −1
1
0
43
44
45
46
SN (f + fc ) (×10−8 ) 18 8 f (MHz) −45 −44 −43
−1
0
−46
2
1
SNc (f ) (×10−8 ) 26 8 f (MHz) −2 −1
0
1
2
Fig. 2.23 PSD of Nc (t)
f 1 = 0.5 × 106 f 2 = 1.5 × 106 B = 106 A1 = 10 × 10−8 A2 = 8 × 10−8 .
(2.207)
The cross-correlation R Nc Ns (τ ) can be obtained by taking the inverse Fourier transform of S Nc Ns ( f ) and is given by
2 Random Variables and Random Processes
123 SN (f ) (×10−8 ) 18 8 f (MHz)
−24 −23 −22 −21
0
21
22
23
24
SN (f − fc ) (×10−8 ) 18 8 f (MHz)
−2 −1
1
0
43
44
45
46
SN (f + fc ) (×10−8 ) 18 8 f (MHz) −45 −44 −43
−1
0
2
1
SNc Ns (f )/j (×10−8 )
−46
10 8 0 −8 −10 f (MHz) −2 −1
0
1
2
Fig. 2.24 The cross-spectral density S Nc Ns ( f )
R Nc Ns (τ ) = jA1 B sinc (Bt) e j 2π f1 t − e−j 2π f1 t + jA2 B sinc (Bt) e j 2π f2 t − e−j 2π f2 t = −sinc (Bt) [0.2 sin(2π f 1 t) + 0.16 sin(2π f 2 t)] . (2.208) Since Nc (t) and Ns (t − τ ) are both N (0, σ 2 ) we have
124
2 Random Variables and Random Processes
Fig. 2.25 R(τ ) versus τ
R(τ ) 2 1 τ −2T −T
0
T
ρ(τ ) = R Nc Ns (τ )/σ 2 ,
2T
(2.209)
where σ 2 = 0.68. R Nc Ns (τ ) = 0 for all τ when f c = 22.5 MHz. Hence ρ(τ ) = 0 for all τ when f c = 22.5 MHz. 37. Consider the function R(τ ) as illustrated in Fig. 2.25. Is it a valid autocorrelation function? Justify your answer. • Solution: Note that t t + rect R(τ ) = rect 2T 4T 2T sinc (2 f T ) + 4T sinc (4 f T ).
(2.210)
Let 2π f T = θ. Hence R(τ )
sin(2θ) sin(θ) + 4T θ 2θ sin(θ) 2T (1 + 2 cos(θ)) θ 2T
= S(θ).
(2.211)
Clearly, S(θ) is negative for 3π/2 ≤ |θ| < 2π, as illustrated in Fig. 2.26 (sin(θ) is negative and cos(θ) is positive). Therefore S(θ) cannot be the power spectral density, and R(τ ) is not a valid autocorrelation. 38. Consider the function R(τ ) = rect (τ /T ).
(2.212)
Is it a valid autocorrelation function? Justify your answer. • Solution: The Fourier transform of R(τ ) is rect (τ /T ) T sinc ( f T )
(2.213)
2 Random Variables and Random Processes
125
6 5
S(θ)
4 3 2 1 0 -1
-6
-4
-2
0
2
4
6
θ
Fig. 2.26 Fourier transform of R(τ ) given in Fig. 2.25 Fig. 2.27 Power spectral density of n 1 (t)
SN1 (f ) a f −W
W
which can take negative values. Since the power spectral density cannot be negative, R(τ ) is not a valid autocorrelation, even though it is an even function. 39. (Haykin 1983) A pair of noise processes are related by N2 (t) = N1 (t) cos(2π f c t + θ) − N1 (t) sin(2π f c t + θ),
(2.214)
where f c is a constant and θ is a uniformly distributed random variable between 0 and 2π. The noise process N1 (t) is stationary and its psd is shown in Fig. 2.27. Find the psd of N2 (t). • Solution: The autocorrelation of N2 (t) is given by E[N2 (t)N2 (t − τ )] = E [(N1 (t) cos(2π f c t + θ) − N1 (t) sin(2π f c t + θ)) (N1 (t − τ ) cos(2π f c (t − τ ) + θ) − N1 (t − τ ) sin(2π f c (t − τ ) + θ))]
126
2 Random Variables and Random Processes SN2 (f ) a/2 f −fc − W
−fc
fc
fc + W
Fig. 2.28 Power spectral density of n 2 (t)
R N1 (τ ) cos(2π f c τ ) R N1 (τ ) cos(2π f c τ ) + 2 2 R N1 (τ ) sin(2π f c τ ) R N1 (τ ) sin(2π f c τ ) − . + 2 2 = R N1 (τ ) cos(2π f c τ ).
=
(2.215)
Hence S N2 ( f ) =
S N1 ( f − f c ) + S N1 ( f + f c ) . 2
(2.216)
The psd for N2 (t) is illustrated in Fig. 2.28. 40. The output of an oscillator is described by X (t) = A cos(2π f t + θ),
(2.217)
where A is a constant, f and θ are independent random variables. The pdf of f is denoted by f F ( f ) for −∞ < f < ∞ and the pdf of θ is uniformly distributed between 0 and 2π. It is given that f F ( f ) is an even function of f . (a) Find the psd of X (t) in terms of f F ( f ). (b) What happens to the psd of X (t) when f is a constant equal to f c . • Solution: The autocorrelation of X (t) is given by E [X (t)X (t − τ )] = A2 E [cos(2π f t + ) cos(2π f (t − τ ) + )] A2 = E [cos(2π f τ ) + cos(4π f t − 2π f τ + 2)] 2 A2 ∞ = cos(2π f τ ) f F ( f ) d f 2 f =−∞ ∞ 1 fF ( f ) d f + 2π f =−∞ 2π cos(4π f t − 2π f τ + 2) dθ θ=0
2 Random Variables and Random Processes
127
A2 ∞ = cos(2π f τ ) f F ( f ) d f 2 f =−∞ = R X (τ ).
(2.218)
Since R X (τ ) is real-valued, S X ( f ) is an even function. Hence R X (τ ) = =
∞
f =−∞ ∞ f =−∞
S X ( f ) exp (j 2π f τ ) d f S X ( f ) cos (2π f τ ) d f.
(2.219)
Comparing (2.218) and (2.219) we get SX ( f ) =
A2 f F ( f ). 2
(2.220)
When f is a constant, say equal to f c , then A2 cos(2π f c τ ) 2 A2 ⇒ SX ( f ) = [δ( f − f c ) + δ( f + f c )] . 4 R X (τ ) =
(2.221)
41. (Haykin 1983) A real-valued stationary Gaussian process X (t) with mean m X , variance σ 2X and power spectral density S X ( f ) is passed through two real-valued linear time invariant filters h 1 (t) and h 2 (t), yielding the output processes Y (t) and Z (t), respectively. (a) Determine the joint pdf of Y (t) and Z (t − τ ). (b) State the conditions, in terms of the frequency response of h 1 (t) and h 2 (t), that are sufficient to ensure that Y (t) and Z (t − τ ) are statistically independent. • Solution: The mean values of Y (t) and Z (t − τ ) are given by E[Y (t)] = E
∞
= mX
α=−∞
∞
α=−∞
h 1 (α)X (t − α) dα h 1 (α) dα
= m X H1 (0) = mY E[Z (t − τ )] = m X
∞ α=−∞
h 2 (α) dα
= m X H2 (0) = mZ.
(2.222)
128
2 Random Variables and Random Processes
The variance of Y (t) and Z (t − τ ) are given by: E[(Y (t) − m Y )2 ] = E Y 2 (t) − m 2Y ∞ = S X ( f )|H1 ( f )|2 d f − m 2Y f =−∞
= σY2 E[(Z (t − τ ) − m Z )2 ] = E Z 2 (t) − m 2Z ∞ = S X ( f )|H2 ( f )|2 d f − m 2Z f =−∞
= σ 2Z .
(2.223)
The cross covariance between Y (t) and Z (t − τ ) is given by E [(Y (t) − m Y ) (Z (t − τ ) − m Z )] = E[Y (t)Z (t − τ )] − m Y m Z ∞ = E h 1 (α)X (t − α) dα α=−∞
=
− mY m Z ∞ α=−∞
∞
β=−∞
∞ β=−∞
h 2 (β)X (t − τ − β) dβ
h 1 (α)h 2 (β)R X (τ + β − α) dα dβ − m Y m Z
= K Y Z (τ ).
(2.224)
The correlation coefficient between Y (t) and Z (t) is given by ρ=
K Y Z (τ ) . σY σ Z
(2.225)
Now, the joint pdf of two Gaussian random variables y1 and y2 is given by pY1 , Y2 (y1 , y2 ) 1 = 2πσ1 σ2 1 − ρ2 σ 2 (y1 − m 1 )2 − 2σ1 σ2 ρ(y1 − m 1 )(y2 − m 2 ) + σ12 (y2 − m 2 )2 . × exp − 2 2σ12 σ22 (1 − ρ2 ) (2.226) The joint pdf of Y (t) and Z (t − τ ) can be similarly found out by substituting the appropriate values from (2.222)–(2.225). The random variables Y (t) and Z (t − τ ) are uncorrelated if their covariance is zero. This is possible when (a) H1 (0) = 0 or H2 (0) = 0 AND
2 Random Variables and Random Processes
White noise
Bandpass
W (t)
129
Lowpass
N (t)
filter
filter
H1 (f )
H2 (f )
Output N1 (t)
cos(2πfc t) H1 (f )
2B
H2 (f )
1
1 f
−fc
f
fc 2B
Fig. 2.29 System block diagram
(b) The product H1 ( f )H2 ( f ) = 0 for all f . This can be easily shown by computing the Fourier transform of K Y Z (τ ), which is equal to (assuming m Y m Z = 0) K Y Z (τ ) H1 ( f )H2∗ ( f )S X ( f ).
(2.227)
Thus if H1 ( f )H2 ( f ) = 0, then K Y Z (τ ) = 0, implying that Y (t) and Z (t − τ ) are uncorrelated, and being Gaussian, they are statistically independent. 42. (Haykin 1983) Consider a white Gaussian noise process of zero mean and psd N0 /2 which is applied to the input of the system shown in Fig. 2.29. (a) Find the psd of the random process at the output of the system. (b) Find the mean and variance of this output. • Solution: We know that narrow-band noise can be represented by n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t).
(2.228)
We also know that the psd of n c (t) and n s (t) are given by
S Nc ( f ) = S Ns ( f ) =
S N ( f − f c ) + S N ( f + f c ) for − B ≤ f ≤ B (2.229) 0 elsewhere,
where it is assumed that S N ( f ) occupies the frequency band f c − B ≤ | f | ≤ f c + B. For the given problem S N ( f ) is shown in Fig. 2.30a. Thus
R Nc (τ ) S Nc ( f ) =
N0 for − B ≤ f ≤ B 0 elsewhere.
(2.230)
130
2 Random Variables and Random Processes SN1 (f )
SN (f ) 2B N0 /2
N0 /4 f
−fc
f
fc 2B (a)
(b)
Fig. 2.30 a Noise psd at the output of H1 ( f ). b Noise psd at the output of H2 ( f )
The output of H2 ( f ) is clearly H2 ( f )
n(t) cos(2π f c t) −→ n c (t)/2 whose autocorrelation is R Nc (τ ) SN ( f ) n c (t) n c (t − τ ) = c . E 2 2 4 4
(2.231)
(2.232)
The psd of noise at the output of H2 ( f ) is illustrated in Fig. 2.30b. The output noise is zero-mean Gaussian with variance N0 B N0 × 2B = . 4 2
(2.233)
43. (Haykin 1983) Consider a narrowband noise process N (t) with its Hilbert transform denoted by Nˆ (t). Show that the cross-correlation functions of N (t) and Nˆ (t) are given by R N Nˆ (τ ) = − Rˆ N (τ ) R ˆ (τ ) = Rˆ N (τ ), NN
(2.234)
where Rˆ N (τ ) is the Hilbert transform of R N (τ ). • Solution: From the basic definition of the autocorrelation function we have R N Nˆ (τ ) = E N (t) Nˆ (t − τ ) N (α) 1 ∞ dα = E N (t) π α=−∞ t − τ − α ∞ E[N (t)N (α)] 1 dα = π α=−∞ t − τ − α ∞ R N (t − α) 1 dα = π α=−∞ t − τ − α
2 Random Variables and Random Processes
=
1 π
131
∞ β=−∞
R N (β) dβ β−τ
= − Rˆ N (τ ).
(2.235)
The second part can be similarly proved. 44. (Haykin 1983) A random process X (t) characterized by the autocorrelation function R X (τ ) = e−2ν|τ | ,
(2.236)
where ν is a constant, is applied to a first-order RC-lowpass filter. Determine the psd and the autocorrelation of the random process at the filter output. • Solution: The psd of X (t) is the Fourier transform of the autocorrelation function and is given by SX ( f ) =
0
e
2ντ −j 2π f τ
e
τ =−∞
dτ +
∞
e−2ντ e−j 2π f τ dτ
τ =0
2τ (ν−j π f ) 0 1 e τ =−∞ 2(ν − j π f ) −2τ (ν+j π f ) ∞ 1 e − τ =0 2(ν + j π f ) ν = 2 ν + π2 f 2 1/ν = . 1 + π 2 f 2 /ν 2 =
(2.237)
Since the transfer function of the LPF is H( f ) =
1 . 1 + j 2π f RC
(2.238)
Therefore the psd of the output random process is SY ( f ) = S X ( f )|H ( f )|2 ν 1 = 2 × 2 2 ν +π f 1 + (2π f RC)2 A B = 2 + . ν + π2 f 2 1 + (2π f RC)2 Solving for A and B we get
(2.239)
132
2 Random Variables and Random Processes
ν 1 − 4R 2 C 2 ν 2 −4R 2 C 2 ν B= . 1 − 4R 2 C 2 ν 2 A=
(2.240)
The autocorrelation of the output random process is the inverse Fourier transform of (2.239), and is given by A −2ν|τ | B −|τ |/(RC) e e + . ν 2RC
RY (τ ) =
(2.241)
45. Let X and Y be independent random variables with
f X (x) =
f Y (y) =
αe−αx x > 0 0 otherwise βe−β y y > 0 0 otherwise,
(2.242)
where α and β are positive constants. Find the pdf of Z = X + Y when α = β and α = β. • Solution: We know that the pdf of Z is the convolution of the pdfs of X and Y and is given by (for α = β) f Z (z) =
∞
f X (a) f Y (z − a) da
a=−∞ z
= = =
f X (a) f Y (z − a) da
a=0 z
αβe−αa e−β(z−a) da −βz e − e−αz for z > 0 0 for z < 0.
a=0
αβ α−β
(2.243)
When α = β we have z f Z (z) = α2 e−αz da a=0
2 −αz for z > 0 zα e = 0 otherwise.
(2.244)
46. A WSS random process X (t) with psd N0 /2 is passed through a first-order RC highpass filter. Determine the psd and autocorrelation of the filter output. • Solution: The transfer function of the first-order RC highpass filter is
2 Random Variables and Random Processes
133
H( f ) =
j 2π f RC . 1 + j 2π f RC
(2.245)
Let Y (t) denote the random process at the filter output. Therefore the psd of the filter output is (2π f RC)2 N0 × 2 1 + (2π f RC)2 1 N0 . 1− = 2 1 + (2π f RC)2
SY ( f ) =
(2.246)
The autocorrelation of Y (t) is the inverse Fourier transform of SY ( f ). Consider the Fourier transform pair 1 −|t| 1 . e 2 1 + (2π f )2
(2.247)
Using the time scaling property g(t) G( f ) 1 ⇒ g(at) G( f /a) |a|
(2.248)
with a = 1/(RC), (2.247) becomes 1 −|t|/(RC) RC e . 2 1 + (2π f RC)2
(2.249)
Thus from (2.246) and (2.249) we have RY (τ ) =
N0 −|t|/(RC) N0 δ(τ ) − e . 2 4RC
(2.250)
47. Let X be a random variable having pdf f X (x) = ae−b|x| for a, b > 0, −∞ < x < ∞.
(2.251)
(a) Find the relation between a and b. (b) Let Y = 2X 2 − c, where c > 0. Compute P(Y < 0). • Solution: We have
∞
2 x=0
ae−bx = 1 ⇒ 2a = b.
(2.252)
134
2 Random Variables and Random Processes
Now P(Y < 0) = P(2X 2 − c < 0) = P(X 2 < c/2) = P(|X | < c/2) √c/2 =2 ae−bx d x x=0
√ 2a 1 − e−b c/2 = b = 1 − e−b
√
c/2
.
(2.253)
48. A narrowband noise process N (t) has zero mean and autocorrelation function R N (τ ). Its psd S N ( f ) is centered about ± f c . Its quadrature components Nc (t) and Ns (t) are defined by Nc (t) = N (t) cos(2π f c t) + Nˆ (t) sin(2π f c t) Ns (t) = Nˆ (t) cos(2π f c t) − N (t) sin(2π f c t).
(2.254)
Using (2.234) show that R Nc (τ ) = R Ns (τ ) = R N (τ ) cos(2π f c τ ) + Rˆ N (τ ) sin(2π f c τ ) R Nc Ns (τ ) = −R Ns Nc (τ ) = R N (τ ) sin(2π f c τ ) − Rˆ N (τ ) cos(2π f c τ ). (2.255) • Solution: From the basic definition of the autocorrelation R Nc (τ ) = E [Nc (t)Nc (t − τ )] = E N (t) cos(2π f c t) + Nˆ (t) sin(2π f c t)
N (t − τ ) cos(2π f c (t − τ )) + Nˆ (t − τ ) sin(2π f c (t − τ )) =
R N (τ ) [cos(2π f c τ ) + cos(4π f c t − 2π f c τ )] 2 R ˆ (τ ) + NN [sin(4π f c t − 2π f c τ ) − sin(2π f c τ )] 2 R ˆ (τ ) + NN [sin(4π f c t − 2π f c τ ) + sin(2π f c τ )] 2 R ˆ (τ ) + N [cos(2π f c τ ) − cos(4π f c t − 2π f c τ )] . 2
(2.256)
2 Random Variables and Random Processes
135
Now S Nˆ ( f ) = S N ( f ) |−j sgn ( f )|2 = SN ( f ) ⇒ R Nˆ (τ ) = R N (τ ).
(2.257)
In the above equation, we have assumed that S N (0) = 0. Moreover from (2.234) R Nˆ N (τ ) = −R N Nˆ (τ ) = Rˆ N (τ ).
(2.258)
Substituting (2.257) and (2.258) in (2.256) we get R Nc (τ ) = R N (τ ) cos(2π f c τ ) + Rˆ N (τ ) sin(2π f c τ ).
(2.259)
The expression for R Ns (τ ) can be similarly proved. Similarly we have R Nc Ns (τ ) = E [Nc (t)Ns (t − τ )] = E N (t) cos(2π f c t) + Nˆ (t) sin(2π f c t)
Nˆ (t − τ ) cos(2π f c (t − τ )) − N (t − τ ) sin(2π f c (t − τ )) =
R N Nˆ (τ ) [cos(2π f c τ ) + cos(4π f c t − 2π f c τ )] 2 R N (τ ) − [sin(4π f c t − 2π f c τ ) − sin(2π f c τ )] 2 R ˆ (τ ) + N [sin(4π f c t − 2π f c τ ) + sin(2π f c τ )] 2 R ˆ (τ ) − NN [cos(2π f c τ ) − cos(4π f c t − 2π f c τ )] . 2
(2.260)
Once again substituting (2.257) and (2.258) in (2.260) we get R Nc Ns (τ ) = R N (τ ) sin(2π f c τ ) − Rˆ N (τ ) cos(2π f c τ ).
(2.261)
The expression for R Ns Nc (τ ) can be similarly proved. 49. Let X (t) be a stationary process with zero mean, autocorrelation function R X (τ ) and psd S X ( f ). Find a linear filter with impulse response h(t) such that the filter output is X (t) when the input is white noise with psd N0 /2. What is the corresponding condition on the transfer function H ( f )?
136
2 Random Variables and Random Processes
• Solution: Let Y (t) denote the input process. It is given that RY (τ ) =
N0 δ(τ ). 2
(2.262)
The autocorrelation of the output is given by R X (τ ) = E = =
∞
∞
h(α)Y (t − α) dα
α=−∞ ∞
∞ β=−∞
h(β)Y (t − τ − β) dβ
h(α)h(β)RY (τ + β − α) dα dβ
α=−∞ β=−∞ ∞ N0 ∞
h(α)h(β)δ(τ + β − α) dα dβ 2 α=−∞ β=−∞ ∞ N0 = h(α)h(α − τ ) dα 2 α=−∞ N0 = (h(t) h(−t)) . 2
(2.263)
The transfer function must satisfy the following relationship: SX ( f ) =
N0 |H ( f )|2 . 2
(2.264)
50. White Gaussian noise W (t) of zero mean and psd N0 /2 is applied to a firstorder RC-lowpass filter, producing noise N (t). Determine the pdf of the random variable obtained by observing N (t) at time tk . • Solution: Clearly, the first-order RC lowpass filter is stable and linear, hence N (t) is also a Gaussian random process. Hence N (tk ) is a Gaussian distributed random variable with pdf given by p N (tk ) (x) =
1 2 2 √ e−(x−m) /(2σ ) , σ 2π
(2.265)
where m is the mean and σ 2 denotes the variance. Moreover, since W (t) is WSS, N (t) is also WSS. It only remains to compute the mean and variance of N (tk ). The mean value of N (tk ) is computed as follows: ∞ h(τ )W (tk − τ ) dτ N (tk ) = τ =0 ∞ ⇒ m = E [N (tk )] = h(τ )E[W (tk − τ )] dτ τ =0
= 0.
(2.266)
2 Random Variables and Random Processes
137
The variance can be computed from the psd as
N0 σ = E N (tk ) = 2 2
2
∞ f =−∞
|H ( f )|2 d f.
(2.267)
For the given problem H ( f ) is given by 1/(j 2π f C) R + 1/(j 2π f C) 1 = 1 + j 2π f RC 1 ⇒ |H ( f )|2 = . 1 + (2π f RC)2 H( f ) =
(2.268)
Hence ∞ N0 1 −1 N0 tan (2π f RC) f =−∞ = .(2.269) σ 2 = E N 2 (tk ) = 2 2π RC 4RC 51. Let X and Y be two independent random variables. X is uniformly distributed between [−1, 1] and Y is uniformly distributed between [−2, 2]. Find the pdf of Z = max(X, Y ). • Solution: We begin by noting that Z lies in the range [−1, 2]. Next, we compute the cumulative distribution function of Z and proceed to compute the pdf by differentiating the distribution function. To compute the cumulative distribution function, we note that
Z=
X if X > Y Y if Y > X.
(2.270)
The pdfs of X and Y are given by
f X (x) =
f Y (y) =
1/2 for − 1 ≤ x ≤ 1 0 otherwise 1/4 for − 2 ≤ y ≤ 2 0 otherwise.
(2.271)
We also note that since the pdfs of X and Y are different in the range [−1, 1] and [1, 2], we expect the distribution function of Z to be different in these two regions. Using the fact that X and Y are independent and the events X > Y and Y > X are mutually exclusive we have for −1 ≤ z ≤ 1
138
2 Random Variables and Random Processes
P(Z ≤ z) = P(X ≤ z AND X > y) + P(Y ≤ z AND Y > x) x 1 z = dy dx 8 x=−1 y=−2 y 1 z + dy dx 8 y=−1 x=−1 1 2 z + 3z + 2 . = 8
(2.272)
When 1 ≤ z ≤ 2, then Z = Y . Hence the cumulative distribution takes the form 1 1 z dy dx P(Z ≤ z) = P(Z < 1) + 8 y=1 x=−1 1 2 z + 3z + 2 z=1 = 8 1 1 z + dy dx 8 y=1 x=−1 1 = [2z + 4] . (2.273) 8 Thus
f Z (z) =
(1/8) [2z + 3] for − 1 ≤ z ≤ 1 1/4 for 1 ≤ z ≤ 2.
(2.274)
52. Consider a random variable given by Z = a X + bY . Here X and Y are statistically independent Gaussian random variables with mean m X and m Y , respectively. The variance of X and Y is σ 2 . (a) Is Z a Gaussian distributed random variable? (b) Compute the mean and the variance of Z . • Solution: Z is Gaussian distributed because it is a linear combination of two Gaussian random variables. The mean value of Z is m Z = E[Z ] = am X + bm Y . The variance of Z is E (Z − m Z )2 = E Z 2 − m 2Z = a 2 E X 2 + b2 E Y 2 + 2abE[X ]E[Y ] − (am X + bm Y )2
(2.275)
2 Random Variables and Random Processes
139
= a 2 (σ 2 + m 2X ) + b2 (σ 2 + m 2Y ) + 2abm X m Y − (am X + bm Y )2 = σ 2 (a 2 + b2 ).
(2.276)
53. Let X and Y be two independent random variables. X is uniformly distributed between [−1, 1] and Y is uniformly distributed between [−2, 2]. Find the pdf of Z = min(X, Y ). • Solution: We begin by noting that Z lies in the range [−2, 1]. Next, we compute the probability distribution function of Z and proceed to compute the pdf by differentiating the cumulative distribution function with respect to z. To compute the cumulative distribution function, we note that
Z=
X if X < Y Y if Y < X.
(2.277)
The pdfs of X and Y are given by
f X (x) =
f Y (y) =
1/2 for − 1 ≤ x ≤ 1 0 otherwise 1/4 for − 2 ≤ y ≤ 2 0 otherwise.
(2.278)
Moreover, since the pdfs of X and Y are different in the region [−2, −1] and [−1, 1], we also expect the cumulative distribution function of Z to be different in these two regions. Let us first consider the region [−2, −1]. Using the fact that X and Y are independent and the events X < Y and Y < X are mutually exclusive we have for −2 ≤ z ≤ −1 P(Z ≤ z) = P(X ≤ z AND X < y) + P(Y ≤ z AND Y < x).
(2.279)
However we note that for −2 ≤ z ≤ −1 P(X ≤ z AND X < y) = 0
(2.280)
since X is always greater than −1. Hence P(Z ≤ z) = P(Y ≤ z AND Y < x) 1 1 z dy dx = 8 y=−2 x=−1 1 = [2z + 4] . 8
(2.281)
140
2 Random Variables and Random Processes
Similarly we have for −1 ≤ z ≤ 1. P(Z ≤ z) = P(X ≤ z AND X < y) + P(Y ≤ z AND Y < x) 2 1 z dy dx = P(Z < −1) + 8 x=−1 y=x 1 1 z dy dx + 8 y=−1 x=y 1 1 −1 dy dx = 8 y=−2 x=−1 2 1 z dy dx + 8 x=−1 y=x 1 1 z dy dx + 8 y=−1 x=y 1 2 −z + 3z + 6 . = 8
(2.282)
Thus
f Z (z) =
1/4 for − 2 ≤ z ≤ −1 . (1/8) [−2z + 3] for − 1 ≤ z ≤ 1
(2.283)
54. (Haykin 1983) Let X (t) be a stationary Gaussian process with zero mean and variance σ 2 and autocorrelation R X (τ ). This process is applied to an ideal halfwave rectifier. The random process at the rectifier output is denoted by Y (t). (a) Compute the mean value of Y (t). (b) Compute the autocorrelation of Y (t). Make use of the result
∞ u=0
∞
1 − θ cot(θ) (2.284) . uv exp −u 2 − v 2 − 2uv cos(θ) du dv = 4 sin2 (θ) v=0
• Solution: We begin by noting that Y (t) is some function of X (t), which we denote by g(X ). For the given problem
Y (t) = g(X ) =
X (t) if X (t) ≥ 0 0 if X (t) < 0.
(2.285)
We also know that since X (t) is given to be stationary, f X (x) is independent of time, hence
2 Random Variables and Random Processes
E [g(X )] =
141
∞
g(x) f X (x) d x ∞ x2 1 x exp − 2 d x = √ 2σ σ 2π x=0 ∞ σ = √ exp(−z) dz 2π z=0 σ = √ . 2π x=−∞
(2.286)
The autocorrelation of Y (t) is given by E [Y (t)Y (t − τ )] .
(2.287)
For convenience of representation, let X (t) = α X (t − τ ) = β.
(2.288)
To compute the autocorrelation we make use of the result E [g(α, β)] =
∞ α=−∞
∞
β=−∞
g(α, β) f α, β (α, β) dα dβ.
(2.289)
The joint pdf of two real-valued Gaussian random variables Y1 and Y2 is given by f Y1 , Y2 (y1 , y2 ) 1 = 2πσ1 σ2 1 − ρ2 σ 2 (y1 − m 1 )2 − 2σ1 σ2 ρ(y1 − m 1 )(y2 − m 2 ) + σ12 (y2 − m 2 )2 . × exp − 2 2σ12 σ22 (1 − ρ2 ) (2.290) For the given problem, the joint pdf of α and β is given by 1 α2 − 2ραβ + β 2 f α, β (α, β) = , exp − 2σ 2 (1 − ρ2 ) 2πσ 2 1 − ρ2 where we have made use of the fact that
(2.291)
142
2 Random Variables and Random Processes
E[α] = E[β] = 0 E[α2 ] = E[β 2 ] = σ 2 E[αβ] R X (τ ) ρ = ρ(τ ) = = . 2 σ σ2
(2.292)
Thus the autocorrelation of Y (t) can be written as RY (τ ) =
1 2πσ 2 1 − ρ2
∞
α2 − 2ραβ + β 2 dα dβ. αβ exp − 2σ 2 (1 − ρ2 ) β=0 (2.293)
α=0
∞
Let A= B=
1 2σ 2 (1
− ρ2 )
1 2σ 2 (1
− ρ2 )
u = Aα v = −Bβ.
(2.294)
Thus RY (τ ) =
2σ 2 (1 − ρ2 )3/2 π
∞
u=0
∞
v=0
uv exp −(u 2 + 2uvρ + v 2 ) du dv (2.295)
Using (2.284) we get RY (τ ) =
σ 2 1 − ρ2 − ρ cos−1 (ρ) . 2π
(2.296)
55. (Haykin 1983) Let X (t) be a real-valued zero-mean stationary Gaussian process with autocorrelation function R X (τ ). This process is applied to a square-law device. Denote the output process as Y (t). Compute the mean and autocovariance of Y (t). • Solution: It is given that Y (t) = X 2 (t).
(2.297)
Since X (t) is given to be stationary, we have m Y = E X 2 (t) = R X (0). The autocorrelation of Y (t) is given by
(2.298)
2 Random Variables and Random Processes
143
RY (τ ) = E [Y (t)Y (t − τ )] = E X 2 (t)X 2 (t − τ ) .
(2.299)
X (t) = X 1 X (t − τ ) = X 2 .
(2.300)
Let
We know that E[X 1 X 2 X 3 X 4 ] = C12 C34 + C13 C24 + C14 C23 ,
(2.301)
where E[X i X j ] = Ci j and the random variables X i are jointly normal (Gaussian) with zero mean. For the given problem we note that X 3 = X 1 and X 4 = X 2 . Thus RY (τ ) = 2R 2X (τ ) + R 2X (0).
(2.302)
The autocovariance of Y (t) is given by K Y (τ ) = E[(Y (t) − m Y )(Y (t − τ ) − m Y )] = RY (τ ) − m 2Y = 2R 2X (τ ).
(2.303)
56. A random variable X is uniformly distributed in [−1, 4] and another random variable Y is uniformly distributed in [7, 8]. X and Y are statistically independent. Find the pdf of Z = |X | − Y . • Solution: The pdf of X and Y are shown in Fig. 2.31a, b, respectively. Let U = |X | V = −Y.
(2.304)
Then the pdf of U and V are illustrated in Fig. 2.31c, d. Thus Z = U + V,
(2.305)
where U and V are independent. Clearly Z lies in the range [−8, −3]. Moreover, the pdf of Z is given by the convolution of the pdfs of U and V and is equal to f Z (z) =
∞
α=−∞
fU (α) f V (z − α) dα.
(2.306)
144
2 Random Variables and Random Processes fX (x)
(a)
1/5 x −1
0
4 fY (y)
(b)
1 y 7
0
8
fU (u)
(c)
2/5 1/5 u 0
1 fV (v)
4
(d) 1 v −8 −7
0
Fig. 2.31 PDF of various random variables
The various steps in the convolution of U and V are illustrated in Fig. 2.32. We have f Z (−8) = 0 f Z (−7) = 2/5 f Z (−6) = 1/5 f Z (−4) = 1/5 f Z (−3) = 0.
(2.307)
The pdf of Z is illustrated in Fig. 2.33. 57. A random variable X is uniformly distributed in [2, 3] and [−3, −2]. Y is a discrete random variable which takes values −1 and 7 with probabilities 3/5 and 2/5, respectively. X and Y are statistically independent. Find the pdf of Z = X 2 + |Y |. • Solution: Note that f X (x) = 1/2
for 2 ≤ |x| ≤ 3
(2.308)
2 Random Variables and Random Processes Fig. 2.32 Various steps in the convolution of U and V
145 fU (α)
(a)
2/5 1/5 α 0
1 fV (−8 − α)
4
(b) 1 α −1
0 fV (−7 − α)
(c) 1 α 0
1 fV (−6 − α)
(d) 1 α 0
1
2
fV (−4 − α) (e) 1 α 0
3
4
fV (−3 − α) (f) 1 α 0
4
5
Fig. 2.33 PDF of Z
fZ (z) 2/5 1/5 z −8 −7 −6
−4 −3
0
146
2 Random Variables and Random Processes
Fig. 2.34 PDF of U = X 2
0.25
0.2
fU (u)
0.15
0.1
0.05
0
0
2
4
6
8
10
u
and f Y (y) =
2 3 δ(y + 1) + δ(y − 7). 5 5
(2.309)
2 3 δ(v − 1) + δ(v − 7). 5 5
(2.310)
Now, the pdf of V = |Y | is f V (v) = Let U = X 2 . The pdf of U is fU (u) = f X (x)/|du/d x|x=√u + f X (x)/|du/d x|x=−√u 1 = √ for 4 ≤ u ≤ 9 2 u
(2.311)
which is illustrated in Fig. 2.34. Therefore Z = U + V.
(2.312)
Since X and Y are independent, U and V are also independent. Moreover, the pdf of Z is given by the convolution of the pdfs of U and V and is equal to f Z (z) = =
∞
α=−∞
fU (α) f V (z − α) dα
3 2 fU (z − 1) + fU (z − 7). 5 5
The pdf of Z is shown in Fig. 2.35.
(2.313)
2 Random Variables and Random Processes
147
0.14
Fig. 2.35 PDF of U = X 2 + |Y |
0.12
fZ (z)
0.1 0.08 0.06 0.04 0.02 0
0
2
4
6
8
z
10
12
14
16
58. Let X (t) be a zero-mean Gaussian random process with psd N0 /2. X (t) is passed through a filter with impulse response h(t) = exp(−πt 2 ), followed by an ideal differentiator. Let the output process be denoted by Y (t). Determine the pdf of Y (1). • Solution: At the outset, we emphasize that Y (t) can be considered to be a random process as well as a random variable. Recall that a random variable is obtained by observing a random process at a particular instant of time. Note that both h(t) and the ideal differentiator are LTI filters. Hence Y (t) is also a Gaussian random process. Since X (t) is WSS, so is Y (t). Therefore the random variable Y (t) has a Gaussian pdf, independent of t. It only remains to compute the mean and the variance of Y (t). Since X (t) is zero mean, so is Y (t). Therefore E[Y (t)] = 0
(2.314)
independent of t. The variance of Y (t) is computed as follows. Let H ( f ) denote the Fourier transform of h(t). Clearly H ( f ) = exp(−π f 2 ).
(2.315)
The overall frequency response of h(t) in cascade with the ideal differentiator is G( f ) = j 2π f H ( f ).
(2.316)
Therefore the psd of the random process Y (t) is SY ( f ) = (N0 /2)4π 2 f 2 exp(−2π f 2 ).
(2.317)
148
2 Random Variables and Random Processes
The variance of the random variable Y (t) (this is also the power of the random process Y (t)) is σY2 = E |Y (t)|2 ∞ = SY ( f ) d f f =−∞
√ 2 N0 π. = 4
(2.318)
Hence the pdf of the random variable Y (t) is pY (y) =
1 √ exp(−y 2 /(2σY2 )) σY 2π
(2.319)
independent of t, and is hence also the pdf of the random variable Y (1). 59. Let X 1 and X 2 be two independent random variables. X 1 is uniformly distributed between [−1, 1] and X 2 has a triangular pdf between [−3, 3] with a peak at zero. Find the pdf of Z = max(X 1 , X 2 ). • Solution: Let us consider a more general problem given by Z = max(X 1 , . . . , X n ),
(2.320)
where X 1 , . . . , X n are independent random variables. Now P(Z < z) = P((X 1 < z) AND . . . AND (X n < z)) = P(X 1 < z) . . . P(X n < z).
(2.321)
Therefore d P(Z < z) dz d = [P(X 1 < z) . . . P(X n < z)] . dz
f Z (z) =
(2.322)
In the given problem P(Z < z) = P(X 1 < z)P(X 2 < z).
(2.323)
Moreover, it can be seen that Z lies in the range [−1, 3]. In order to compute (2.323), we need to partion the range of Z into three intervals, given by [−1, 0], [0, 1] and [1, 3]. Now
2 Random Variables and Random Processes
149
⎧ ⎨ (z + 1)/2 for − 1 ≤ z ≤ 0 P(X 1 < z) = (z + 1)/2 for 0 ≤ z ≤ 1 , ⎩ 1 for 1 ≤ z ≤ 3
(2.324)
where we have used the fact that P(X 1 < z) =
z x1 =−1 z
f X 1 (x1 ) d x1
1 d x1 2 x1 =−1 z+1 . = 2 =
(2.325)
Similarly, noting that f X 2 (0) = 1/3, we obtain ⎧ 2 ⎨ z /18 + z/3 + 1/2 for − 1 ≤ z ≤ 0 P(X 2 < z) = −z 2 /18 + z/3 + 1/2 for 0 ≤ z ≤ 1 ⎩ 2 −z /18 + z/3 + 1/2 for 1 ≤ z ≤ 3.
(2.326)
Substituting (2.324) and (2.326) in (2.323) and using (2.322) we get ⎧ 2 ⎨ z /12 + 7z/18 + 5/12 for − 1 ≤ z ≤ 0 f Z (z) = −z 2 /12 + 5z/18 + 5/12 for 0 ≤ z ≤ 1 ⎩ −z/9 + 1/3 for 1 ≤ z ≤ 3.
(2.327)
60. Let X 1 and X 2 be two independent random variables. X 1 is uniformly distributed between [−1, 1] and X 2 has a triangular pdf between [−3, 3] with a peak at zero. Find the pdf of Z = min(X 1 , X 2 ). • Solution: Let us consider a more general problem given by Z = min(X 1 , . . . , X n ),
(2.328)
where X 1 , . . . , X n are independent random variables. Now P(Z > z) = P((X 1 > z) AND . . . AND (X n > z)) = P(X 1 > z) . . . P(X n > z). Therefore d P(Z < z) dz d [1 − P(Z > z)] = dz
f Z (z) =
(2.329)
150
2 Random Variables and Random Processes
d [1 − P(X 1 > z) . . . P(X n > z)] dz d = [1 − (1 − P(X 1 < z)) . . . (1 − P(X n < z))] . (2.330) dz =
In the given problem P(Z > z) = P(X 1 > z)P(X 2 > z).
(2.331)
Moreover, it can be seen that Z lies in the range [−3, 1]. In order to compute (2.331), we need to partion the range of Z into three intervals, given by [−3, −1], [−1, 0] and [0, 1]. Now P(X 1 > z) = 1 − P(X 1 < z) ⎧ for − 3 ≤ z ≤ −1 ⎨1 = 1 − (z + 1)/2 for − 1 ≤ z ≤ 0 ⎩ 1 − (z + 1)/2 for 0 ≤ z ≤ 1 ⎧ for − 3 ≤ z ≤ −1 ⎨1 = (1 − z)/2 for − 1 ≤ z ≤ 0 ⎩ (1 − z)/2 for 0 ≤ z ≤ 1
(2.332)
where we have used the fact that P(X 1 < z) =
z x1 =−1 z
f X 1 (x1 ) d x1
1 d x1 2 x1 =−1 z+1 . = 2 =
(2.333)
Similarly, noting that f X 2 (0) = 1/3, we obtain P(X 2 > z) = 1 − P(X 2 < z) ⎧ ⎨ 1 − z 2 /18 − z/3 − 1/2 for − 3 ≤ z ≤ −1 = 1 − z 2 /18 − z/3 − 1/2 for − 1 ≤ z ≤ 0 ⎩ 1 + z 2 /18 − z/3 − 1/2 for 0 ≤ z ≤ 1 ⎧ 2 ⎨ −z /18 − z/3 + 1/2 for − 3 ≤ z ≤ −1 = −z 2 /18 − z/3 + 1/2 for − 1 ≤ z ≤ 0 (2.334) ⎩ 2 z /18 − z/3 + 1/2 for 0 ≤ z ≤ 1.
2 Random Variables and Random Processes
151
Substituting (2.332) and (2.334) in (2.331) and using (2.330) we get ⎧ for − 3 ≤ z ≤ −1 ⎨ z/9 + 1/3 f Z (z) = −z 2 /12 − 5z/18 + 5/12 for − 1 ≤ z ≤ 0 ⎩ 2 z /12 − 7z/18 + 5/12 for 0 ≤ z ≤ 1.
(2.335)
References Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983. A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, third edition, 1991. Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition, 2002.
Chapter 3
Amplitude Modulation
1. (Haykin 1983) Suppose that a non-linear device is available for which the output current i 0 and the input voltage vi are related by: i 0 (t) = a1 vi (t) + a3 vi3 (t),
(3.1)
where a1 and a3 are constants. Explain how this device may be used to provide (a) a product modulator (b) an amplitude modulator. • Solution: Let vi (t) = Ac cos(π f c t) + m(t),
(3.2)
where m(t) occupies the frequency band [−W, +W ]. Then i 0 (t) = a1 (Ac cos(π f c t) + m(t)) +
a3 A3c (3 cos(π f c t) + cos(3π f c t)) 4
3 + a3 A2c (1 + cos(2π f c t)m(t)) + 3a3 Ac cos(π f c t)m 2 (t) 2 + a3 m 3 (t) 3 3 = a1 + a3 A2c m(t) + a3 m 3 (t) + a1 Ac + a3 A3c cos(π f c t) 2 4 3 + 3a3 Ac cos(π f c t)m 2 (t) + a3 A2c m(t) cos(2π f c t) 2 a3 A3c cos(3π f c t). (3.3) + 4 Using the fact that multiplication in the time domain is equivalent to convolution in the frequency domain, we know that m 2 (t) occupies the frequency © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 K. Vasudevan, Analog Communications, https://doi.org/10.1007/978-3-030-50337-6_3
153
154
3 Amplitude Modulation
f −3W
−W
0
W
fc /2
3W
fc
3fc /2
fc /2 + 2W fc − W
Fig. 3.1 Fourier transform of i 0 (t) Ac cos(πfc t)
s(t)
Nonlinear BPF device
m(t)
s(t) =
3 a A2 2 3 c
cos(2πfc t)m(t)
Fig. 3.2 Method of obtaining the DSBSC signal
band [−2W, +2W ] and m 3 (t) occupies [−3W, 3W ]. The Fourier transform of i 0 (t) is illustrated in Fig. 3.1. To extract the DSBSC component at f c using a bandpass filter, the following conditions must be satisfied: f c /2 + 2W < f c − W f c + W < 3 f c /2.
(3.4)
The two inequalities in the above equation can be combined to get f c > 6W.
(3.5)
The procedure for obtaining the DSBSC signal is illustrated in Fig. 3.2. The procedure for obtaining the AM signal using two identical nonlinear devices and bandpass filters is illustrated in Fig. 3.3. The amplitude sensitivity is controlled by A0 . Note that with this method, the amplitude sensitivity and the carrier power can be independently controlled. Moreover, the amplitude sensitivity is independent of a3 , which is not under user control (a3 is device dependent). 2. (Haykin 1983) In this problem we consider the switching modulator shown in Fig. 3.4. Assume that the carrier wave c(t) applied to the diode is large in amplitude compared to |m(t)|, so that the diode acts like an ideal switch, that is
3 Amplitude Modulation
155
Ac cos(πfc t)
s1 (t)
Nonlinear BPF device
m(t)
AM
s1 (t) =
3 a A2 2 3 c
s2 (t) =
3 a A A2 2 3 0 c
Ac cos(πfc t)
cos(2πfc t)m(t) signal cos(2πfc t) Nonlinear BPF s2 (t)
device
A0
Fig. 3.3 Method of obtaining the AM signal Fig. 3.4 Block diagram of a switching modulator
c(t) = Ac cos(2πfc t) + m(t)
−
+ v1 (t)
Rl
v2 (t)
−
v2 (t) =
v1 (t) c(t) > 0 0 c(t) < 0.
(3.6)
Hence we may write v2 (t) = (Ac cos(2π f c t) + m(t)) g p (t),
(3.7)
where g p (t) =
∞ 2 (−1)n−1 1 + cos(2π f c (2n − 1)t). 2 π n=1 2n − 1
Find out the AM signal centered at f c contained in v2 (t).
(3.8)
156
3 Amplitude Modulation
• Solution: We have Ac cos(2π f c t) 2 ∞ Ac (−1)n−1 + (cos(4π(n − 1) f c t) + cos(4πn f c t)) π n=1 2n − 1
v2 (t) =
+
∞ m(t) 2 (−1)n−1 + cos(2π(2n − 1) f c t)m(t). 2 π n=1 2n − 1
(3.9)
Clearly, the desired AM signal is Ac 2 cos(2π f c t) + cos(2π f c t)m(t). 2 π
(3.10)
3. (Haykin 1983) Consider the AM signal s(t) = [1 + 2 cos(2π f m t)] cos(2π f c t).
(3.11)
The AM signal s(t) is applied to an ideal envelope detector, producing output v(t). Determine the real (not complex) Fourier series representation of v(t). • Solution: The output of the ideal envelope detector is given by v(t) = |1 + 2 cos(2π f m t)| ,
(3.12)
which is periodic with a period of 1/ f m . The envelope detector output, v(x) is plotted in Fig. 3.5 where x = 2π f m t.
Fig. 3.5 Output of the ideal envelope detector
(3.13)
3 2.5
v(x)
2 1.5 1 0.5 0
-3
-2
-1
0
x
1
2
3
3 Amplitude Modulation
157
Now 1 + 2 cos(x) < 0
for π − π/3 < x < π + π/3.
(3.14)
This implies that v(x) =
1 + 2 cos(x) for 0 < |x| < π − π/3 −1 − 2 cos(x) for π − π/3 < |x| < π
(3.15)
Since v(x) is an even function, it has a Fourier series representation given by: v(x) = a0 + 2
∞
an cos(nx),
(3.16)
n=1
where a0 =
2 2π
π−π/3 x=0
√ 1 2 3 . = + 3 π
(1 + 2 cos(x)) d x −
2 2π
π
π−π/3
(1 + 2 cos(x)) d x (3.17)
Similarly an =
2 2π
π−π/3
(1 + 2 cos(x)) cos(nx) d x
x=0
π 2 (1 + 2 cos(x)) cos(nx) d x − 2π π−π/3 2 sin(2nπ/3) sin(2(n − 1)π/3) sin(2(n + 1)π/3) + + . = π n n−1 n+1 (3.18)
4. Consider the AM signal s(t) = [1 + 3 sin(2π f m t + α)] sin(2π f c t + θ).
(3.19)
The AM signal s(t) is applied to an ideal envelope detector, producing output v(t). (a) Find v(t). (b) Find α, 0 ≤ α < π, such that the real Fourier series representation of v(t) contains only dc and cosine terms. (c) Determine the real Fourier series representation of v(t) for α obtained in (b).
158
3 Amplitude Modulation
4 3.5 3
v(x)
2.5 2 1.5 1 0.5 0
-6
-4
-2
0
x
2
4
6
Fig. 3.6 Output of the ideal envelope detector for α = π/2
• Solution: The output of the ideal envelope detector is given by v(t) = |1 + 3 sin(2π f m t + α)| sin2 (θ) + cos2 (θ) = |1 + 3 sin(2π f m t + α)|
(3.20)
which is periodic with a period of 1/ f m . The envelope detector output, v(x) is plotted in Fig. 3.6 where x = 2π f m t.
(3.21)
Note that v(x) is periodic with a period 2π. If the real Fourier series representation of v(x) is to contain only dc and cosine terms, we require: α = π/2.
(3.22)
v(x) = |1 + 3 cos(x)| ∞ an cos(nx). = a0 + 2
(3.23)
Therefore
n=1
3 Amplitude Modulation
159
Let v(x1 ) = |1 + 3 cos(x1 )| =0 ⇒ x1 = π − cos−1 (1/3) = 1.911 rad.
(3.24)
Now x1 π 2 2 (1 + 3 cos(x)) d x − (1 + 3 cos(x)) d x 2π x=0 2π x1 4x1 − 2π + 12 sin(x1 ) = 2π = 2.02 (3.25)
a0 =
and an =
2 2π
x1
(1 + 3 cos(x)) cos(nx) d x π 2 (1 + 3 cos(x)) cos(nx) d x − 2π x1 2 sin(nx1 ) 3 sin((n − 1)x1 ) 3 sin((n + 1)x1 ) = + + for n > 1 π n π n−1 π n+1 (3.26) x=0
and a1 =
2 2π
x1
(1 + 3 cos(x)) cos(x) d x π 2 − (1 + 3 cos(x)) cos(x) d x 2π x1 2 3 3 sin(2x1 ) 3 − . = sin(x1 ) + x1 + π π π 2 2 x=0
(3.27)
5. (Haykin 1983) The AM signal s(t) = Ac [1 + ka m(t)] cos(2π f c t)
(3.28)
is applied to the system in Fig. 3.7. Assuming that |ka m(t)| < 1 for all t and m(t) is bandlimited to [−W, W ], and that the carrier frequency f c > 2W , which show that m(t) can be obtained from the square rooter output, v3 (t).
160
3 Amplitude Modulation
s(t)
Squarer
v1 (t)
Lowpass
v2 (t)
Square
filter
v3 (t)
rooter
Fig. 3.7 Nonlinear demodulation of AM signals Fig. 3.8 Spectrum of the message signal
M (f ) 1
−W
0
W
f
• Solution: We have v1 (t) =
A2c [1 + cos(4π f c t)] [1 + ka m(t)]2 . 2
(3.29)
We also have (assuming an ideal LPF) A2c [1 + ka m(t)]2 . 2
(3.30)
Ac v3 (t) = √ [1 + ka m(t)] . 2
(3.31)
v2 (t) = Therefore
6. (Haykin 1983) Consider a message signal m(t) with spectrum shown in Fig. 3.8, with W = 1 kHz. This message is DSB-SC modulated using a carrier of the form Ac cos(2π f c t), producing the signal s(t). The modulated signal is next applied to a coherent detector with carrier A0 cos(2π f c t). Determine the spectrum of the detector output when the carrier is (a) f c = 1.25 kHz (b) f c = 0.75 kHz. Assume that the LPF in the demodulator is ideal with unity gain. What is the lowest carrier frequency for which there is no aliasing (no overlap in the frequency spectrum) in the modulated signal s(t)? • Solution: We have s(t) = Ac cos(2π f c t)m(t) Ac ⇒ S( f ) = [M( f − f c ) + M( f + f c )] . 2
(3.32)
The block diagram of the coherent demodulator is shown in Fig. 3.9. We have
3 Amplitude Modulation
161
1
−W s(t)
W
g(t)
h(t) Ideal LPF
A0 cos(2πfc t)
Fig. 3.9 A coherent demodulator for DSB-SC signals. An ideal LPF is assumed G(f ) Ac A0 /2 Ac A0 /4 f kHz −3.5
−2.5
−1.5
−1
0
1
1.5
2.5
3.5
H(f ) Ac A0 /2
f kHz −1
0
1
Fig. 3.10 Spectrum at the outputs of the multiplier and the LPF for f c = 1.25
g(t) = Ac A0 m(t) cos2 (2π f c t) Ac A0 m(t) = [1 + cos(4π f c t)] 2 Ac A0 Ac A0 M( f ) + ⇒ G( f ) = [M( f − 2 f c ) + M( f + 2 f c )] . 2 4 (3.33) The spectrum of H ( f ) for f c = 1.25 and f c = 0.75 kHz are shown in Figs. 3.10 and 3.11, respectively. For no aliasing the following condition must be satisfied: 2 fc − W > W ⇒ f c > W.
(3.34)
162
3 Amplitude Modulation G(f ) Ac A0 /2 Ac A0 /4 f kHz −2.5
−1.5 −1 −0.5 0
0.5
1.5
1
2.5
H(f ) Ac A0 /2 Ac A0 /4 Ac A0 /8 f kHz −1 −0.5 0
0.5
1
Fig. 3.11 Spectrum at the outputs of the multiplier and the LPF for f c = 0.75
7. (Ziemer and Tranter 2002) An amplitude modulator has output s(t) = A cos(π400t) + B cos(π360t) + B cos(π440t).
(3.35)
The carrier power is 100 W and the power efficiency (ratio of sideband power to total power) is 40%. Compute A, B and the modulation factor μ. • Solution: The general expression for a tone modulated AM signal is: s(t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t) μAc cos(2π( f c − f m )t) = Ac cos(2π f c t) + 2 μAc cos(2π( f c + f m )t). + 2
(3.36)
In the given problem A = Ac μA B= 2 f c = 200 Hz f c − f m = 180 Hz f c + f m = 220 Hz.
(3.37)
3 Amplitude Modulation
163
The carrier power is given by: A2 = 100W 2 √ ⇒ A = 200.
(3.38)
Since the given AM wave is tone modulated, the power efficiency is given by: 2 μ2 = 2 + μ2 5 ⇒ μ = 1.155.
(3.39)
The constant B is given by: B=
μA = 8.165. 2
(3.40)
8. Consider a modulating wave m(t) such that (1 + ka m(t)) > 0 for all t. Assume that the spectrum of m(t) is zero for | f | > W . Let s(t) = Ac [1 + ka m(t)] cos(2π f c t),
(3.41)
where f c > W . The modulated wave s(t) is applied to a series combination of an ideal full-wave rectifier and an ideal lowpass filter. The transfer function of the ideal lowpass filter is: H( f ) =
1 for | f | < W 0 otherwise.
(3.42)
Compute the time-domain output v(t) of the lowpass filter. • Solution: The output of the full-wave rectifier is given by: |s(t)| = Ac [1 + ka m(t)]| cos(2π f c t)|.
(3.43)
2π f c t = x.
(3.44)
Let
Then | cos(x)| has a Fourier series representation: | cos(x)| = a0 + 2
∞ n=1
(an cos(nx) + bn sin(nx)) .
(3.45)
164
3 Amplitude Modulation
Note that bn = 0 since | cos(x)| is an even function. Moreover | cos(x)| is periodic with period π. The coefficients an are given by: 1 π/2 cos(x) d x a0 = π x=−π/2 2 = π 1 π/2 an = cos(x) cos(nx) d x π x=−π/2 0 for n = 2m + 1 = K n for n = 2m
(3.46)
where K n is given by: Kn =
1 1 sin((n − 1)π/2) + sin((n + 1)π/2). (3.47) π(n − 1) π(n + 1)
Thus the output of the LPF is clearly: v(t) = Ac [1 + ka m(t)]a0 =
2 Ac [1 + ka m(t)]. π
(3.48)
9. (Haykin 1983) This problem is related to the 2two-stage approach for generating SSB (frequency discrimination method) signals. A voice signal m(t) occupying the frequency band 0.3–3.4 kHz is to be SSB modulated with only the upper sideband transmitted. Assume the availability of bandpass filters which provide an attenuation of 50 dB in a transition band that is one percent of the center frequency of the bandpass filter, as illustrated in Fig. 3.12a. Assume that the first stage eliminates the lower sideband and the product modulator in the second stage uses a carrier frequency of 11.6 MHz. The message spectrum is shown in Fig. 3.12b and the spectrum of the final SSB signal must be as shown in Fig. 3.12c. Find the range of the carrier frequencies that can be used by the product modulator in the first stage, so that the unwanted sideband is attenuated by no less than 50 dB. • Solution: Firstly, we note that if only a single stage were was employed, the required Q-factor of the BPF would be very high, since Q-factor ≈
centre frequency . transition bandwidth
(3.49)
Secondly, we would like to align the center frequency of the bandpass filter in the center of the message band to be transmitted. The transmitted message band could be the upper or the lower sideband. Let us denote the carrier frequency of the first stage by f 1 and that of the second stage by f 2 . Then, the
3 Amplitude Modulation
165 20 log |H(f )| dB 0
(a) −50 f −fc
fc
M (f )
S(f )
0.01fc
(b)
(c) f
f −3.4
−0.3 0
0.3
0
3.4
Fig. 3.12 a Magnitude response of the bandpass filters. b Message spectrum. c Spectrum of the final SSB signal
center frequency of the BPF at the second stage is given by (see Fig. 3.13): f c2 =
1 [ f2 + f1 + fa + f2 + f1 + fb ] . 2
(3.50)
In the above equation: f 2 = 11600 kHz f a = 0.3 kHz f b = 3.4 kHz.
(3.51)
The transition bandwidth of the second BPF is 0.01 f c2 . The actual transition bandwidth at the output of the first SSB modulator is 2( f 1 + f a ). We require that 2( f 1 + f a ) ≥ 0.01 f c2 ⇒ f 1 (1 − 0.005) ≥ 58 + 0.01 × 0.925 − 0.3 f 1 ≥ 57.999 kHz.
(3.52)
The center frequency of the first BPF is given by: 1 [ f1 + fa + f1 + fb ] 2 = f 1 + 1.85.
f c1 =
(3.53)
166
3 Amplitude Modulation M (f )
f −fb
−fa
fa
0
fb
(a) S1 (f )
f −fc1
−f1
f1
0
fc1
(b) −f1 − fa
f1 + fa S(f )
f −fc2
−f2
0
f2
fc2
(c) −f2 − f1 − fa
f2 + f1 + fa
Fig. 3.13 a Message spectrum. b SSB signal at the output of the first stage. c SSB signal at the output of the second stage
The transition bandwidth of the first BPF is 0.01 f c1 and the actual transition bandwidth of the input message is 2 f a . We require that: 2 f a ≥ 0.01 f c1 ⇒ f 1 ≤ 58.15 kHz.
(3.54)
Thus the range of f 1 is given by 57.999 ≤ f 1 ≤ 58.15 kHz. 10. (Haykin 1983) This problem is related to the 2two-stage approach for generating SSB signals. A voice signal m(t) occupying the frequency band 0.3–3 kHz is to be SSB modulated with only the lower sideband transmitted. Assume the availability of bandpass filters which provide an attenuation of 60 dB in a transition band that is two percent of the center frequency of the bandpass filter, as illustrated in Fig. 3.14a. Assume that the first stage eliminates the upper sideband and the product modulator in the second stage uses a carrier frequency of 1 MHz.
3 Amplitude Modulation
167 20 log |H(f )| dB 0
(a) −60 f −fc
fc 0.02fc
M (f ) (b)
S(f )
(c) f
−3
−0.3 0
0.3
3
f 0
Fig. 3.14 a Magnitude response of the bandpass filters. b Message spectrum. c Spectrum of the final SSB signal
The message spectrum is shown in Fig. 3.14b and the spectrum of the final SSB signal must be as shown in Fig. 3.14c. (a) Find the range of carrier frequencies that can be used by the product modulator in first stage, so that the unwanted sideband is attenuated by no less than 60 dB. (b) Write down the expression of the SSB signal s(t) at the output of the second stage, in terms of m(t). Clearly specify the carrier frequency of s(t) in terms of the carrier frequency of the two product modulators. • Solution: Firstly, we note that if only a single stage were was employed, the required Q-factor of the BPF would be very high, since Q-factor ≈
centre frequency . transition bandwidth
(3.55)
Secondly, we would like to align the center frequency of the bandpass filter in the center of the message band to be transmitted. The transmitted message band could be the upper or the lower sideband. Let us denote the carrier frequency of the first stage by f 1 and that of the second stage by f 2 . Then, the center frequency of the BPF at the second stage is given by (see Fig. 3.15): f c2 = In the above equation:
1 [ f2 + f1 − fb + f2 + f1 − fa ] . 2
(3.56)
168
3 Amplitude Modulation M (f ) (a)
f −fb
−fa
fa
0
fb
S1 (f ) (b)
f −f1
−fc1
−f1 + fa
fc1
0 −f1 + fb f1 − fb
f1 f1 − fa
S(f ) (c)
f −fc2
−f2
f2
0
−f2 − f1 + fb −f2 − f1 + fa
fc2
f2 + f1 − fb f2 + f1 − fa
Fig. 3.15 a Message spectrum. b SSB signal at the output of the first stage. c SSB signal at the output of the second stage
f 2 = 1000 kHz f a = 0.3 kHz f b = 3.0 kHz.
(3.57)
The transition bandwidth of the second BPF is 0.02 f c2 . The actual transition bandwidth at the output of the first SSB modulator is 2( f 1 − f b ). We require that 2( f 1 − f b ) ≥ 0.02 f c2 ⇒ f 1 − f b ≥ 0.01[ f 2 + f 1 − 0.5( f a + f b )] f 1 ≥ 13.1146 kHz. The center frequency of the first BPF is given by:
(3.58)
3 Amplitude Modulation
169
Envelope
Input
Output
BPF detector cos(2πf0 t) Variable frequency oscillator
Fig. 3.16 Receiver for a frequency division multiplexing system
M1 (f )
M2 (f )
fc1
fc2
2W
2W
Fig. 3.17 Input signal to the FDM receiver in Fig. 3.16
1 [ f1 − fa + f1 − fb ] 2 = f 1 − 1.65.
f c1 =
(3.59)
The transition bandwidth of the first BPF is 0.02 f c1 and the actual transition bandwidth of the input message is 2 f a . We require that: 2 f a ≥ 0.02 f c1 ⇒ f 1 ≤ 31.65 kHz.
(3.60)
Thus, the range of f 1 is given by 13.1146 ≤ f 1 ≤ 31.65 kHz. The final SSB signal is given by: ˆ sin(2π f c t), s(t) = m(t) cos(2π f c t) + m(t)
(3.61)
where f c = f 2 + f 1 − f a + f a = f 2 + f 1 . 11. Consider the receiver in Fig. 3.16, for a frequency division multiplexing (FDM) system. An FDM system is similar to the simultaneous radio broadcast by many stations. Assume that the BPF is ideal with a bandwidth of 2W = 10 kHz. (a) Now consider the input signal in Fig. 3.17. Assume that the input signals are amplitude modulated, f 0 = 1.4 MHz, f c1 = 1.1 MHz, and f c2 = 1.7 MHz.
170
3 Amplitude Modulation
Assume that the BPF is centered at 300 kHz. Find out the expression for the signal at the output of the BPF. The frequency f c2 is called the image of f c1 . (b) Now assume that f c1 = 1.1 MHz is the desired carrier and f c2 = 1.7 MHz is to be rejected. Assume that f 0 can only be varied between f c1 and f c2 and that the center frequency of the BPF cannot be less than 200 kHz (this is required for proper envelope detection) and cannot be greater than 300 kHz (so that the Q-factor of the BPF is within practical limits). Find out the permissible range of values f 0 can take. Also find out the corresponding permissible range of values of the center frequency of the BPF. • Solution: Let the input signal be given by: s(t) = A1 [1 + ka1 m 1 (t)] cos(2π f c1 t) + A2 [1 + ka2 m 2 (t)] cos(2π f c2 t). (3.62) The output of the multiplier can be written as: s(t) =
A1 [1 + ka1 m 1 (t)] cos(2π( f 0 − f c1 )t) + cos(2π( f 0 + f c1 )t) 2
A2 [1 + ka2 m 2 (t)] cos(2π( f c2 − f 0 )t) + cos(2π( f c2 + f 0 )t) . + 2 (3.63)
The output of the BPF is: y(t) =
A1 [1 + ka1 m 1 (t)] cos(2π( f 0 − f c1 )t) 2 A2 [1 + ka2 m 2 (t)] cos(2π( f c2 − f 0 )t). + 2
(3.64)
Now consider Fig. 3.18. Denote the two difference (IF) frequencies by x and B − x, where B = f c2 − f c1 . To reject B − x, we must have: B − x − x > 2W ⇒ x < 295.
(3.65)
Thus, the BPF center frequency can vary between 200 and 295 kHz. Correspondingly, f 0 can vary between 1.3 and 1.395 MHz. 12. Consider the Costas loop for AM signals as shown in Fig. 3.19. Let the received signal r (t) be given by: r (t) = Ac [1 + ka m(t)] cos(2π f c t + θ) + w(t),
(3.66)
3 Amplitude Modulation
171 B−x
x
fc1
f0
fc2
B−x
x 2W
Fig. 3.18 Figure to illustrate the condition to be satisfied by the two difference frequencies, so that one of them is rejected by the BPF demodulated signal LPF In-phase arm x1 (t)
Integrator
z1 (T )
cos(2πfc t + α) r(t)
phase
y (T ) VCO
detector sin(2πfc t + α) Integrator x2 (t)
z2 (T ) Quadrature arm
Fig. 3.19 Costas loop for AM signals
where w(t) is zero-mean additive white Gaussian noise (AWGN) with psd N0 /2. The random variable z 1 (T ) is computed as: z 1 (T ) =
1 T
T
0
x1 (t) dt.
(3.67)
The random variable z 2 (T ) is similarly computed. The random variable y (T ) is computed as: y (T ) =
z 2 (T ) . z 1 (T )
(3.68)
172
3 Amplitude Modulation
Assume that: (a) (b) (c) (d) (e) (f)
θ and α are uniformly distributed random variables. α − θ is a constant and close to zero. w(t) and α are statistically independent. m(t) has zero -mean (zero dc). T is large, hence the integrator output is a dc term plus noise. The signal-to-noise ratio (SNR) at the input to the phase detector is high.
Compute the mean and the variance of the random variables z 1 (T ), z 2 (T ) and y (T ). • Solution: We have Ac [1 + ka m(t)] [cos(α − θ) + cos(4π f c t + α + θ)] + a1 (t) 2 Ac x2 (t) = [1 + ka m(t)] [sin(α − θ) + sin(4π f c t + α + θ)] + a2 (t) 2 , (3.69) x1 (t) =
where a1 (t) = w(t) cos(2π f c t + α) a2 (t) = w(t) sin(2π f c t + α).
(3.70)
The autocorrelation of a1 (t) and a2 (t) is given by: Ra1 (τ ) = E[a1 (t)a1 (t − τ )] N0 δ(τ ) = 4 = Ra2 (τ ).
(3.71)
Since w(t) and α are statistically independent E[a1 (t)] = E[a2 (t)] = 0.
(3.72)
The integrator outputs are: Ac cos(α − θ) + b1 2 Ac sin(α − θ) + b2 , z 2 (T ) = 2 z 1 (T ) =
where
(3.73)
3 Amplitude Modulation
173
1 T b1 = a1 (t) dt T t=0 T 1 , b2 = a2 (t) dt. T t=0
(3.74)
Then 1 T E[a1 (t)] dt T t=0 =0 = E[b2 ].
E[b1 ] =
(3.75)
The variance of b1 and b2 are given by: N0 ∞ |H ( f )|2 d f 4 f =−∞ N0 ∞ h 2 (t) dt = 4 t=−∞ T N0 = dt 4T 2 t=0 N0 = 4T = E[b22 ].
E[b12 ] =
(3.76)
In the above equation we have used the Rayleigh’s energy theorem. Thus: Ac cos(α − θ) 2 Ac sin(α − θ) E[z 2 (T )] = 2 N0 var [z 1 (T )] = 4T N0 . var [z 2 (T )] = 4T E[z 1 (T )] =
(3.77)
The phase detector output is: (Ac /2) sin(α − θ) + b2 (Ac /2) cos(α − θ) + b1 2b2 ≈ α−θ+ . Ac
y (T ) =
(3.78)
174
3 Amplitude Modulation M (f ) 1
f (kHz) −3.4
−0.3
0
0.3
3.4
Fig. 3.20 Message spectrum
wWhere we have made the high SNR approximation. Hence, the mean and variance of y (T ) is: E[y (T )]] = α − θ 4 N0 . var [y (T )] = 2 Ac 4T
(3.79)
13. Consider a message signal m(t) whose spectrum extends over 300 Hz to 3.4 kHz, as illustrated in Fig. 3.20. This message is SSB modulated to obtain: s(t) = Ac m(t) cos(2π f c t) + Ac m(t) ˆ sin(2π f c t).
(3.80)
At the receiver, s(t) is demodulated using a carrier of the form cos(2π( f c + f )t). Plot the spectrum of the demodulated signal at the output of the lowpass filter when (a) f = 10 Hz, (b) f = −10 Hz. Assume that the lowpass filter is ideal with unity gain and extends over [−4, 4] kHz, and f c = 20 kHz. Show all the steps required to arrive at the answer. Indicate all the important points on the X Y -axes. • Solution: The multiplier output can be written as: s1 (t) = s(t) cos(2π( f c + f )t)
= Ac m(t) cos(2π f c t) + m(t) ˆ sin(2π f c t) cos(2π( f c + f )t) Ac m(t) [cos(2π f t) + cos(2π(2 f c + f )t)] = 2 Ac m(t) ˆ [sin(2π(2 f c + f )t) − sin(2π f t)] . (3.81) + 2 The output of the lowpass filter is:
3 Amplitude Modulation
175 M1 (f )
(a)
Ac /2
f (kHz) −3.41
−0.31 −0.01 0
0.01
0.31
3.41
M2 (f ) (b)
Ac /2
f (kHz) −3.39
−0.29 −0.01 0
0.01
0.29
3.39
Fig. 3.21 Demodulated message spectrum in the presence of frequency offset
s2 (t) =
Ac
m(t) cos(2π f t) − m(t) ˆ sin(2π f t) . 2
(3.82)
When f is positive, s2 (t) is an SSB signal with carrier frequency f and upper sideband transmitted. This is illustrated in Fig. 3.21a, for f = 10 Hz. When f is negative, s2 (t) is an SSB signal with carrier frequency f and lower sideband transmitted. This is illustrated in Fig. 3.21b with f = −10. 14. Compute the envelope of the following signal: s(t) = Ac [1 + 2 cos(2π f m t)] cos(2π f c t + θ).
(3.83)
• Solution: The signal can be expanded as: s(t) = Ac [1 + 2 cos(2π f m t)] [cos(2π f c t) cos(θ) − sin(2π f c t) sin(θ)] . (3.84) Therefore the envelope is: a(t) = Ac |1 + 2 cos(2π f m t)| cos2 (θ) + sin2 (θ) = Ac |1 + 2 cos(2π f m t)|.
(3.85)
15. For the message signal shown in Fig. 3.22 compute the power efficiency (the ratio of sideband power to total power) in terms of the amplitude sensitivity ka , A1 , A2 , and T . The message is periodic with period T , has zero -mean, and is AM modulated
176
3 Amplitude Modulation
Fig. 3.22 Message signal
m(t) A1 t
0 T
−A2
Fig. 3.23 Message signal
m(t) A1 0
t1
−A2
t T
according to: s(t) = Ac [1 + ka m(t)] cos(2π f c t).
(3.86)
Assume that T 1/ f c . • Solution: Since the message is zero -mean we must have: 1 1 t1 A1 = (T − t1 )A2 2 2 A2 T ⇒ t1 = , A1 + A2
(3.87)
where t1 is shown in Fig. 3.23. For a general AM signal given by s(t) = Ac [1 + ka m(t)] cos(2π f c t),
(3.88)
the power efficiency is: η=
ka2 Pm A2c ka2 Pm /2 = , A2c /2 + A2c ka2 Pm /2 1 + ka2 Pm
(3.89)
where Pm denotes the message power. Here we only need to compute Pm . We have
3 Amplitude Modulation
177
m(t)
AC
LPF
Am(t)
amplifier c(t)
c(t) 1 t
0 −1
−1/fc
−1/(4fc )
0
1/(4fc )
1/fc
Fig. 3.24 Block diagram of a chopper-stabilized dc amplifier
1 Pm = T
T
m 2 (t) dt,
(3.90)
t=0
where m(t) =
for 0 < t < t1 A1 t/t1 A2 (t − T )/(T − t1 ) for t1 < t < T .
(3.91)
Substituting for m(t) in (3.90) we get: A2 Pm = 12 T t1
t1
A22 t dt + T (T − t1 )2 t=0
T
2
A21 t1 A2 (T − t1 ) + 2 3T 3T A1 A2 , = 3
(t − T )2 dt
t=t1
=
(3.92)
where we have substituted for t1 from (3.87). 16. (Haykin 1983) Figure 3.24 shows the block diagram of a chopper-stabilized dc amplifier. It uses a multiplier and an ac amplifier, which shifts the spectrum of the input signal from the vicinity of zero frequency to the vicinity of the carrier frequency ( f c ). The signal at the output of the ac amplifier is then coherently demodulated. The carrier c(t) is a square wave as indicated in the figure.
178
3 Amplitude Modulation
(a) Specify the frequency response of the ac amplifier so that there is no distortion in the LPF output, assuming that m(t) is bandlimited to −W < f < W . Specify also the relation between f c and W . (b) Determine the overall gain of the system ( A), assuming that the ac amplifier has a gain of K and the lowpass filter is ideal having unity gain. • Solution: The square wave c(t) can be written as: c(t) =
∞ 4 (−1)n−1 cos(2π f c (2n − 1)t). π n=1 2n − 1
(3.93)
The ac amplifier can have the frequency response of an ideal bandpass filter, with a gain of K and bandwidth f c − W < | f | < f c + W . Observe that for no aliasing of the spectrum centered at f c , we require: fc + W < 3 fc − W ⇒ W < fc .
(3.94)
Thus, the output of the ac amplifier is: s(t) =
4K m(t) cos(2π f c t). π
(3.95)
The output of the lowpass filter would be: LPF
s(t)c(t) −→
16K 8K m(t) = 2 m(t). 2 2π π
(3.96)
Therefore, the overall gain of the system is 8K /π 2 . 17. (Haykin 1983) Consider the phase discrimination method of generating an SSB signal. Let the message be given by: m(t) = Am cos(2π f m t)
(3.97)
and the SSB signal be given by: ˆ sin(2π f c t). s(t) = m(t) cos(2π f c t) − m(t)
(3.98)
Determine the ratio of the amplitude of the undesired side-frequency component to that of the desired side-frequency component when the modulator deviates from the ideal condition due to the following factors considered one at a time: (a) The Hilbert transformer introduces a phase lag of π/2 − δ instead of π/2. (b) The terms m(t) and m(t) ˆ are given by
3 Amplitude Modulation
179
m(t) = a Am cos(2π f m t) m(t) ˆ = b Am sin(2π f m t).
(3.99)
(c) The carrier signal applied to the product modulators are not in phase quadrature, that is: c1 (t) = cos(2π f c t) c2 (t) = sin(2π f c t + δ).
(3.100)
• Solution: In the first case, we have m(t) ˆ = Am cos(2π f m t − π/2 + δ) = Am sin(2π f m t + δ).
(3.101)
Thus s(t) = Am cos(2π f m t) cos(2π f c t) − Am sin(2π f m t + δ) sin(2π f c t) Am = [cos(2π( f c + f m )t) + cos(2π( f c − f m )t)] 2 Am − [cos(2π( f c − f m )t − δ) − cos(2π( f c + f m )t + δ)] 2 (3.102) The desired side frequency is f c + f m and the undesired side frequency is f c − f m . The phasor diagram for the signal in (3.102) is shown in Fig. 3.25. Clearly, the required ratio is R2 /R1 where R1 = R2 =
(Am /2)2 (1 + cos(δ))2 + (Am sin(δ)/2)2 (Am /2)2 (1 − cos(δ))2 + (Am sin(δ)/2)2 .
(3.103)
For the second case s(t) is given by: s(t) = a Am cos(2π f m t) cos(2π f c t) − b Am sin(2π f m t) sin(2π f c t) a Am = [cos(2π( f c − f m )t) + cos(2π( f c + f m )t)] 2 b Am (3.104) − [cos(2π( f c − f m )t) − cos(2π( f c + f m )t)] . 2 Thus, the ratio of the amplitude of the undesired sideband to the desired sideband is R=
a−b . a+b
(3.105)
180
3 Amplitude Modulation
Am /2 fc + fm
R1
Am /2 δ
Am /2
fc − fm
R2 δ δ
Am /2
Am /2
Fig. 3.25 Phasor diagram for the signal in (3.102)
For the third case s(t) is given by: s(t) = Am cos(2π f m t) cos(2π f c t) − Am sin(2π f m t) sin(2π f c t + δ) Am = [cos(2π( f c − f m )t) + cos(2π( f c + f m )t)] 2 Am − [cos(2π( f c − f m )t + δ) − cos(2π( f c + f m )t + δ)] . 2 (3.106) Again from the phasor diagram in Fig. 3.26, we see that the ratio of the amplitude of the undesired sideband to the desired sideband is R2 /R1 where R1 and R2 are given by (3.103). 18. Let the message be given by: m(t) = Am cos(2π f m t) + Am sin(2π f m t),
(3.107)
and the SSB signal be given by: ˆ [sin(2π f c t) + a sin(4π f c t)] . (3.108) s(t) = m(t) cos(2π f c t) + m(t)
3 Amplitude Modulation
181
Am /2 fc + fm
R1
Am /2 δ
Am /2
δ
fc − fm
Am /2
δ R2 Am /2
Fig. 3.26 Phasor diagram for the signal in (3.106)
Observe that the quadrature carrier contains harmonic distortion. Determine the ratio of the power of the undesired frequency component(s) to that of the desired frequency component(s). • Solution: We have: m(t) = Am cos(2π f m t) + Am sin(2π f m t) √ = Am 2 cos(2π f m t − π/4) √ ⇒ m(t) ˆ = Am 2 cos(2π f m t − π/4 − π/2) √ = Am 2 sin(2π f m t − π/4).
(3.109)
Therefore √ s(t) = Am 2 cos(2π f m t − π/4) cos(2π f c t) √ + Am 2 sin(2π f m t − π/4) [sin(2π f c t) + a sin(4π f c t)] √ = Am 2 cos (2π ( f c − f m ) t − π/4) a Am + √ cos (2π(2 f c − f m )t + π/4) 2 a Am (3.110) − √ cos (2π(2 f c + f m )t − π/4) . 2
182
3 Amplitude Modulation gp (t) A t −T
T
0
gp (t)
T0
m1 (t)
m(t)
LPF
AM
s(t)
modulator
H(f )
2 f
−4/T0
4/T0
Fig. 3.27 Block diagram of an AM system
Clearly, the desired frequency is f c − f m and the undesired frequencies are 2 f c ± f m . Therefore the required power ratio is: R= =
a 2 A2m /2 2 A2m /2 a2 . 2
(3.111)
19. Consider the system shown in Fig. 3.27. The input signal g p (t) is periodic with a period of T0 . Assume that T /T0 = 0.5. The signal g p (t) is passed through an ideal lowpass filter, that has a gain of two in the passband, as indicated in the figure. The LPF output m 1 (t) is further passed through a dc blocking capacitor to yield the message m(t). The capacitor acts as a short for the frequencies of interest. This message is used to generate an AM signal given by s(t) = Ac [1 + ka m(t)] cos(2π f c t).
(3.112)
What is the critical value of ka , beyond which s(t) gets overmodulated? • Solution: Consider the pulse: g(t) =
g p (t) for −T0 /2 < t < T0 /2 0 elsewhere.
(3.113)
Clearly A d 2 g(t) = (δ(t + T ) − 2δ(t) + δ(t − T )) . 2 dt T
(3.114)
3 Amplitude Modulation
183
Hence A (exp(j 2π f T ) − 2 + exp(−j 2π f T )) f 2T AT = 2 2 2 sin2 (π f T ) π f T (3.115) = AT sinc2 ( f T ).
G( f ) = −
4π 2
We also know that g p (t) =
∞
cn exp (j 2πnt/T0 ) ,
(3.116)
n=−∞
where 1 ∞ g(t) exp (−j 2πnt/T0 ) dt T0 t=−∞ 1 G(n/T0 ). = T0
cn =
(3.117)
Substituting for G(n/T0 ) we have g p (t) = =
∞ nT AT e j 2πnt/T0 sinc2 T0 n=−∞ T0 ∞ 1 A 4A + 2 cos(2π(2m − 1)t/T0 ), 2 π m=1 (2m − 1)2
(3.118)
where we have used the fact that T /T0 = 0.5. The output of the dc blocking capacitor is m(t) =
8A 8A cos(2πt/T0 ) + 2 cos(6πt/T0 ), . 2 π 9π
(3.119)
where we have used the fact that the LPF has a gain of two in the frequency range [−4/T0 , 4/T0 ]. The minimum value of m(t) is m min (t) = −
80 A 9π 2
(3.120)
and occurs at nT0 + T0 /2. To prevent overmodulation we must have: 1 + ka m min (t) ≥ 0 9π 2 ⇒ ka ≤ . 80 A
(3.121)
184
3 Amplitude Modulation
Fig. 3.28 Block diagram of a scrambler
M (f )
H(f )
A
2 f (Hz)
−5000
0
f (Hz) −5000
5000
m(t)
0 H(f )
5000 y(t)
2 cos(10000πt)
20. The system shown in Fig. 3.28 is used for scrambling audio signals. The output y(t) is the scrambled version of the input m(t). The spectrum (Fourier transform) of m(t) and the lowpass filter (H ( f )) are as shown in the figure. (a) Draw the spectrum of y(t). Label all important points on the x- and y-axis. (b) The output y(t) corresponds to a particular modulation scheme (i.e., AM, FM, SSB with lower/upper sideband transmitted, VSB, DSB-SC). Which modulation scheme is it? (c) Write down the precise expression for y(t) in terms of m(t), corresponding to the spectrum computed in part (a). (d) Suggest a method for recovering m(t) (not km(t), where k is a constant) from y(t). • Solution: The multiplier output is given by: y1 (t) = 2m(t) cos(2π f c t),
(3.122)
where f c = 5 kHz. The Fourier transform of y1 (t) is Y1 ( f ) = M( f − f c ) + M( f + f c )
(3.123)
The spectrum of y1 (t) and y(t) is shown in Fig. 3.29. From the spectrum of y(t), we can conclude that it is an SSB signal centered at f c , with the lower sideband transmitted. Hence we have ˆ sin(2π f c t), y(t) = km(t) cos(2π f c t) + k m(t)
(3.124)
where k is a constant to be found out. The Fourier transform of y(t) is
3 Amplitude Modulation
185
Fig. 3.29 a Y1 ( f ). b Y ( f ). c Procedure for recovering m(t)
Y1 (f ) A f −2fc
−fc
fc
0
2fc
(a) Y (f ) 2A f −2fc
−fc
0
fc
2fc
(b) 1
−fc y(t)
0 LPF
fc m(t)
cos(2πfc t) (c)
k [M( f − f c ) + M( f + f c )] 2 k
sgn( f − f c )M( f − f c ) − sgn( f + f c )M( f + f c ) = − 2 k k = M( f − f c )[1 − sgn( f − f c )] + M( f + f c )[1 + sgn( f + f c )]. 2 2 (3.125)
Y( f ) =
Comparing the above equation with Fig. 3.29b, we conclude that k = 2. Therefore ˆ sin(2π f c t) y(t) = 2m(t) cos(2π f c t) + 2m(t)
(3.126)
A method for recovering m(t) is shown in Fig. 3.29c. 21. Figure 3.30 shows the block diagram of a DSB-SC modulator. Note that the oscillator generates cos3 (2π f c t). The message spectrum is also shown. (a) Draw and label the spectrum (Fourier transform) of y(t).
186
3 Amplitude Modulation
Fig. 3.30 Block diagram of a DSB-SC modulator
m(t)
y(t)
Filter
m(t) cos(2πfc t)
cos3 (2πfc t) M (f ) 1 f −B
0
B
(b) Sketch the Fourier transform of a filter such that the output signal is m(t) cos(2π f c t), where m(t) is the message. The filter should pass only the required signal and reject all other frequencies. (c) What is the minimum usable value of f c ? • Solution: The multiplier output is given by: y(t) =
1 3 m(t) cos(2π f c t) + m(t) cos(6π f c t). 4 4
(3.127)
Therefore Y( f ) =
1 3 [M( f − f c ) + M( f + f c )] + [M( f − 3 f c ) + M( f + 3 f c )] . 8 8 (3.128)
The spectrum of y(t) and the bandpass filter is shown in Fig. 3.31. We must also have the following relationship: 3 fc − B ≥ fc + B ⇒ f c ≥ B.
(3.129)
Thus the minimum value of f c is B. 22. Consider the receiver of Fig. 3.32. Both bandpass filters are assumed to be ideal with unity gain in the passband. The passband of BPF1 is in the range 500– 1500 kHz. The bandwidth of BPF2 is 10 kHz and it selects only the difference frequency component. The input consists of a series of AM signals of bandwidth 10 kHz and spaced 10 kHz apart, occupying the band 500–1500 kHz as shown in the figure. When the LO frequency is 1505 kHz, the output signal is m 1 (t). (a) Compute the centere frequency of BPF2.
3 Amplitude Modulation
187 M (f ) 1 f −B
Y (f )
B
0
4/3 3/8 1/8 f fc − B
fc + B
fc
3fc − B
3fc + B
3fc
Fig. 3.31 Spectrum of y(t) and the bandpass filter Input signal
BPF 1
BPF 2
Envelope
Output
detector
signal
LO
M1 (f )
M2 (f ) f kHz
500
505
510
515
520
1490
1495
1500
Fig. 3.32 Block diagram of a radio receiver
(b) What should be the LO frequency so that the last message (occupying the band 1490–1500 kHz) is selected. (c) Can the LO frequency be equal to 1 MHz and the center frequency of BPF2 equal to 495 kHz? Justify your answer. • Solution: When the LO frequency is 1505 kHz, the difference frequency for m 1 (t) is 1505 − 505 = 1000 kHz. Therefore, the center frequency of BPF2 must be 1000 kHz. When the LO frequency is 1495 + 1000 = 2495 kHz, the last message is selected. When the LO frequency is 1 MHz and the center frequency of BPF2 is
188
3 Amplitude Modulation
equal to 495 MHz, then the output signal would be the sum of the first message (1000 − 505 = 495 kHz) and the last message (image at 1495 − 1000 = 495 kHz). Hence this combination should not be used. 23. (Haykin 1983) Consider a multiplex system in which four input signals m 0 (t), m 1 (t), m 2 (t), and m 3 (t) are, respectively, multiplied by the carrier waves c0 (t) = cos(2π f a t) + cos(2π f b t) c1 (t) = cos(2π f a t + α1 ) + cos(2π f b t + β1 ) c2 (t) = cos(2π f a t + α2 ) + cos(2π f b t + β2 ) c3 (t) = cos(2π f a t + α3 ) + cos(2π f b t + β3 )
(3.130)
to produce the multiplexed signal s(t) =
3
m i (t)ci (t).
(3.131)
i=0
All the messages are bandlimited to [−W, W ]. At the receiver, the ith message is recovered as follows: LPF
s(t)ci (t) −→ m i (t)
(3.132)
where the LPF is ideal with unity gain in the frequency band [−W, W ]. (a) Compute αi and βi , 0 ≤ i ≤ 3 for this to be feasible. (b) Determine the minimum separation between f a and f b . • Solution: Note that we require: LPF
ci (t)c j (t) −→ 0.5 cos(αi −α j ) + cos(βi −β j ) =
1 for i = j (3.133) 0 for i = j
provided | f b − f a | ≥ 2W fa ≥ W f b ≥ W.
(3.134)
Thus the minimum frequency separation is 2W . Now, in order to ensure orthogonality between carriers we proceed as follows. Let us first consider c0 (t) and c1 (t). Orthogonality is ensured when α1 = β1 = π/2.
(3.135)
3 Amplitude Modulation
189
Other solutions are also possible. Let us now consider c2 (t). We get the following relations: c2 (t) ⊥ c0 (t) ⇒ cos(α2 ) + cos(β2 ) = 0 c2 (t) ⊥ c1 (t) ⇒ cos(α2 − α1 ) + cos(β2 − β1 ) = 0 ⇒ sin(α2 ) + sin(β2 ) = 0.
(3.136)
The two equations in (3.136) imply that if α2 is in the 1st quadrant then β2 must be in the 3rd quadrant. Otherwise, if α2 is in the 2nd quadrant, then β2 must be in the 4th quadrant. Let us take α2 = π/4. Then β2 = 5π/4. Finally we consider c3 (t). We get the relations: c3 (t) ⊥ c0 (t) ⇒ cos(α3 ) + cos(β3 ) = 0 c3 (t) ⊥ c1 (t) ⇒ cos(α3 − α1 ) + cos(β3 − β1 ) = 0 ⇒ sin(α3 ) + sin(β3 ) = 0 c3 (t) ⊥ c2 (t) ⇒ cos(α3 − α2 ) + cos(β3 − β2 ) = 0 ⇒ cos(α3 ) − cos(β3 ) + sin(α3 ) − sin(β3 ) = 0.
(3.137)
The set of equations in (3.137) can be reduced to: cos(α3 ) = − sin(α3 ) cos(β3 ) = − sin(β3 ).
(3.138)
Thus we conclude that α3 and β3 must be an odd multiple of π/4 and must lie in the 2nd/4th quadrant and 4th/2nd quadrant, respectively. If we take α3 = 3π/4 then β3 = 7π/4. Note that there are infinite solutions to αi and βi (1 ≤ i ≤ 3). 24. Consider the system shown in Fig. 3.33a. The signal s(t) is given by:
Complex multiplication
s(t)
y(t) Ideal Hilbert transformer
|M (f )| sˆ(t)
x(t)
Local oscillator fl (a)
Fig. 3.33 Demodulation using a Hilbert transformer
f −W
0 (b)
W
190
3 Amplitude Modulation
s(t) = A1 m 1 (t) cos(2π f c t) − A2 m 2 (t) sin(2π f c t).
(3.139)
Both m 1 (t) and m 2 (t) are real-valued and bandlimited to [−W, W ]. The carrier frequency f c W . The inputs to the complex multiplier are s(t) + j sˆ (t) and x(t) (which could be real or complex-valued and depends on the local oscillator frequency fl ). The multiplier output is the complex-valued signal y(t). Let m(t) = A1 m 1 (t) + j A2 m 2 (t) M( f ).
(3.140)
(a) Find fl and x(t) such that y(t) = m(t). (b) The local oscillator that generates x(t) has a frequency fl = f c + f . Find y(t). Sketch |Y ( f )| when |M( f )| is depicted in Fig. 3.33b. • Solution: We know that sˆ (t) = A1 m 1 (t) cos(2π f c t − π/2) − A2 m 2 (t) sin(2π f c t − π/2) = A1 m 1 (t) sin(2π f c t) + A2 m 2 (t) cos(2π f c t). (3.141) Therefore s(t) + j sˆ (t) = m(t)e j 2π fc t .
(3.142)
If y(t) = m(t), then we must have fl = f c and x(t) = e−j 2π fc t .
(3.143)
x(t) = e−j 2π( fc + f )t .
(3.144)
If
then y(t) = m(t)e−j 2π f t ⇒ Y ( f ) = M( f + f ) ⇒ |Y ( f )| = |M( f + f )|. The spectrum of |Y ( f )| is shown in Fig. 3.34.
(3.145)
3 Amplitude Modulation
191
25. An AM signal is given by: s(t) = Ac [1 + ka m(t)] sin(2π f c t + θ).
(3.146)
The message m(t) is real-valued, does not have a dc component and its onesided bandwidth is W f c . Explain how km(t), where k is a constant, can be recovered from s(t) using a Hilbert transformer and other components. • Solution: When s(t) is passed through a Hilbert transformer, its output is: sˆ (t) = Ac [1 + ka m(t)] sin(2π f c t + θ − π/2) = −Ac [1 + ka m(t)] cos(2π f c t + θ).
(3.147)
s(t) + j sˆ (t) = Ac [1 + ka m(t)] e j (2π fc t+θ−π/2) .
(3.148)
Now
The message m(t) can be recovered using the block diagram in Fig. 3.35. Note that y(t) = Ac [1 + ka m(t)] .
(3.149)
26. Consider the Costas loop shown in Fig. 3.36. The input signal s(t) is s(t) = m(t) cos(2π f c t),
(3.150)
Fig. 3.34 Spectrum of |Y ( f )|
|Y (f )| = |M (f + Δf )|
f −Δf − W
s(t)
Ideal Hilbert transformer
−Δf
−Δf + W
Complex multiplication y(t)
sˆ(t)
DC blocking capacitor e−j (2πfc t+θ−π/2)
Fig. 3.35 Demodulating an AM signal using a Hilbert transformer
Ac ka m(t)
192
3 Amplitude Modulation xI (t) LPF s(t)
2 cos(2πfc t + θ) y VCO
∞ t=−∞
2 sin(2πfc t + θ) LPF
xQ (t)
M (f ) 2 f −W
W
0
Fig. 3.36 Costas loop
where m(t) is a real-valued message signal. The spectrum of m(t) is shown in Fig. 3.36. The LPFs are ideal with a gain of two in the passband [−W, W ]. It is given that f c W . (a) Determine the signals x I (t), x Q (t), and y. (b) For what values of θ will y be equal to zero? • Solution: Clearly 2m(t) cos(2π f c t) cos(2π f c t + θ) = m(t) cos(θ) + 2 f c term 2m(t) cos(2π f c t) sin(2π f c t + θ) = m(t) sin(θ) + 2 f c term. (3.151) Since the gain of the LPF is two in the passband x I (t) = 2m(t) cos(θ) x Q (t) = 2m(t) sin(θ).
(3.152)
Hence y = 2 sin(2θ) = 2 sin(2θ)
=
m 2 (t) dt
t=−∞ ∞
= 4 sin(2θ)
∞
f =−∞ W f =0
16 W sin(2θ) 3
|M( f )|2 d f −2 f +2 W
2 df (3.153)
3 Amplitude Modulation
193
Fig. 3.37 Spectrum of each voice signal
M (f ) 1
f (kHz) −3.3
−0.3 0
0.3
3.3
where we have used the Rayleigh’s energy theorem. It is clear that y = 0 when θ = kπ/2, where k is an integer. 27. It is desired to transmit 400 voice signals using frequency division multiplexing (FDM). The voice signals are SSB modulated with lower sideband transmitted. The spectrum of each of the voice signals is illustrated in Fig. 3.37. (a) What is the minimum carrier spacing required to multiplex all the signals, such that there is no overlap of spectra. (b) Using a single- stage approach and assuming a carrier spacing of 7 kHz and a minimum carrier frequency of 200 kHz, determine the lower and upper frequency limits occupied by the FDM signal containing 400 voice signals. (c) Determine the spectrum (lower and upper frequency limits) occupied by the 100th voice signal. (d) In the two-stage approach, the first stage uses SSB with lower sideband transmitted. In the first stage, the voice signals are grouped into L blocks, with each block containing K voice signals. The carrier frequencies in each block in the first stage is given by 7n kHz, for 1 ≤ n ≤ K . Note that the FDM signal using the two-stage approach must be identical to the singlestage approach in (b). i. Sketch the spectrum of each block at the output of the first stage. ii. Specify the modulation required for each of the blocks in the second stage. iii. How many carriers are required to generate the composite FDM signal? Express your answer in terms of L and K . iv. Determine L and K such that the number of carriers is minimized. v. Give the expression for the carrier frequencies required to modulate each block in the second stage. • Solution: The minimum carrier spacing required is 3 kHz. If the minimum carrier frequency is 200 kHz, then the spectrum of the first voice signal starts at 200 − 3.3 = 196.7 kHz. The carrier for the 400th voice signal is at 200 + 399 × 7 = 2993 kHz.
(3.154)
194
3 Amplitude Modulation
The spectrum of the 400th voice signal ends at 2993 − 0.3 = 2992.7 kHz. Therefore, the spectrum of the FDM signal extends from 196.7 to 2992.7 kHz, that is, 196.7 ≤ | f | ≤ 2992.7 kHz. The carrier for the 100th voice signal is at 200 + 99 × 7 = 893 kHz.
(3.155)
Hence, the spectrum of the 100th voice signal extends from 893 − 3.3 = 889.7 kHz to 893 − 0.3 = 892.7 kHz, that is, 889.7 ≤ | f | ≤ 892.7 kHz. The spectrum of each block at the output of the first stage is shown in Fig. 3.38b. The modulation required for each of the blocks in the second stage is SSB with upper sideband transmitted. For the second approach, let us first evaluate the number of carriers required to obtain each block in the first stage. Clearly, the number of carriers required is K . In the next stage, L carriers are required to translate each block to the appropriate frequency band. Thus, the total number of carriers required is L + K. Now, we need to minimize L + K subject to the constraint L K = 400. The problem can be restated as min K
400 + K. K
(3.156)
Differentiating with respect to K and setting the result to zero, we get the solution as K = 20, L = 20. The carrier frequencies required in the second stage is given by 200 − 7 + 7K (l − 1) for 1 ≤ l ≤ L. The block diagram of the two-stage approach is given in Fig. 3.38a. 28. It is desired to transmit 500 voice signals using frequency division multiplexing (FDM). The voice signals are DSB-SC modulated. The spectrum of each of the voice signals is illustrated in Fig. 3.39. (a) How many carrier signals are required to multiplex all the 500 voice signals? (b) What is the minimum carrier spacing required to multiplex all the signals, such that there is no overlap of spectra. (c) Using a single- stage approach and assuming a carrier spacing of 10 kHz and a minimum carrier frequency of 300 kHz, determine the lower and upper frequency limits occupied by the FDM signal containing 500 voice signals. (d) Determine the spectrum (lower and upper frequency limits) occupied by the 150th voice signal. (e) It is desired to reduce the number of carriers using a two-stage approach. The first stage uses DSB-SC modulation. In the first stage, the voice signals are grouped into L blocks, with each block containing K voice signals. The carrier frequencies in each block in the first stage is given by 10n kHz, for
3 Amplitude Modulation
195 ml, 1 (t)
(a)
sl (t) SSB
Sl (f )
for 1 ≤ l ≤ L
fc = 7 kHz ml, K (t) SSB fc = 7K kHz 1st stage s1 (t) SSB fc = 200 − 7 kHz
Final FDM signal sL (t) SSB fc = 200 − 7 + 7K(L − 1) kHz 2nd stage Sl (f )
(b)
(kHz) f
0 −7K + 3.3 −7K
−14
−10.7 −13.7
−7
−3.7 −6.7
7K − 3.3
10.7 13.7
3.7 6.7
14
7
7K
Fig. 3.38 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage Fig. 3.39 Spectrum of each voice signal
M (f ) 1
f (kHz) −4
0
4
196
3 Amplitude Modulation
1 ≤ n ≤ K . Note that the FDM signal using the two-stage approach must be identical to the single-stage approach in (c). i. Sketch the spectrum of each block at the output of the first stage. ii. Specify the modulation required for each of the blocks in the second stage, such that, the minimum carrier frequency used in the second stage, is closest to 300 kHz. iii. How many carriers are required to generate the composite FDM signal? Express your answer in terms of L and K . iv. Determine L and K such that the number of carriers is minimized. v. Give the expression for the carrier frequencies required to modulate each block in the second stage. • Solution: The number of carriers required to multiplex all the voice signals is 500. The minimum carrier spacing required is 8 kHz. If the minimum carrier frequency is 300 kHz, then the spectrum of the first voice signal starts at 300 − 4 = 296 kHz. The carrier for the 500th voice signal is at 300 + 499 × 10 = 5290 kHz.
(3.157)
The spectrum of the 500th voice signal ends at 5290 + 4 = 5294 kHz. Therefore, the spectrum of the FDM signal extends from 296 to 5294 kHz, that is 296 ≤ | f | ≤ 5294 kHz. The carrier for the 150th voice signal is at 300 + 149 × 10 = 1790 kHz.
(3.158)
Hence the spectrum of the 150th voice signal extends from 1790 − 4 = 1786 kHz to 1790 + 4 = 1794 kHz, that is 1786 ≤ | f | ≤ 1794 kHz. The spectrum of each block at the output of the first stage is shown in Fig. 3.40b. The modulation required for each of the blocks in the second stage is SSB. In order to ensure that the minimum carrier frequency in the second stage is closest to 300 kHz, the upper sideband must be transmitted. Thus, the minimum carrier frequency in the second stage is 300 − 10 = 290 kHz. For the second approach, let us first evaluate the number of carriers required to obtain each block in the first stage. Clearly, the number of carriers required is K . In the next stage, L carriers are required to translate each block to the appropriate frequency band. Thus, the total number of carriers required is L + K. Now, we need to minimize L + K subject to the constraint L K = 500. The problem can be restated as min K
500 + K. K
(3.159)
3 Amplitude Modulation
197
ml, 1 (t)
(a)
sl (t) DSBSC
Sl (f )
for 1 ≤ l ≤ L
fc = 10 kHz ml, K (t)
(c)
L
K
L+K
2
250
252
DSBSC fc = 10K kHz 1st stage s1 (t) SSB fc = 300 − 10 kHz
Final FDM signal
sL (t) SSB
4
125
129
10
50
60
20
25
45
25
20
45
50
10
60
125
4
129
250
2
252
fc = 300 − 10 + 10K(L − 1) kHz 2nd stage (b)
Sl (f )
(kHz) f
0 −10K + 4 −10K − 4
−24
−16
−14
−6
6
10K − 4
16 14
24 10K + 4
Fig. 3.40 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage. c Various possible values of L and K
Differentiating with respect to K and setting the result to zero, we get the solution as K = 22.36, which is not an integer. Hence, we need to obtain the solution manually, as given in Fig. 3.40c. We find that, there are two sets of solutions, L = 20, K = 25, and L = 25, K = 20. The carrier frequencies required in the second stage is given by 300 − 10 + 10K (l − 1) for 1 ≤ l ≤ L. The block diagram of the two-stage approach is given in Fig. 3.40a. 29. Let m(t) be a signal having Fourier transform M( f ), with M(0) = 0. When m(t) is passed through a filter with impulse response h(t), the Fourier transform of the output can be written as
198
3 Amplitude Modulation
⎧ ⎨ M( f ) e−j θ for f > 0 for f = 0 Y( f ) = 0 ⎩ M( f ) e j θ for f < 0
(3.160)
Determine h(t). • Solution: We know that if (this was done in class) y(t) = m(t) cos(θ) + m(t) ˆ sin(θ)
(3.161)
the Fourier transform of y(t) is Y ( f ) = M( f ) cos(θ) − j sgn( f )M( f ) sin(θ) ⎧ ⎨ M( f ) e−j θ for f > 0 for f = 0 = 0 ⎩ M( f ) e j θ for f < 0.
(3.162)
From (3.162) it is clear that h(t) = cos(θ)δ(t) +
sin(θ) . πt
(3.163)
30. Consider the DSB-SC signal s(t) = Ac cos(2π f c t)m(t).
(3.164)
Assume that the energy signal m(t) occupies the frequency band [−W, W ]. Now, s(t) is applied to a square law device given by y(t) = s 2 (t).
(3.165)
The output y(t) is applied to an ideal bandpass filter with a passband transfer function equal to 1/( f ), midband frequency of ±2 f c and bandwidth f . Assume that f → 0. All signals are real-valued. (a) Determine the spectrum of y(t). (b) Find the relation between f c and W for no aliasing in the spectrum of y(t). (c) Find the expression for the signal v(t) at the BPF output. • Solution: We have A2c [1 + cos(4π f c t)] m 2 (t) 2 A2 A2 ⇒ Y ( f ) = c G( f ) + c [G( f − 2 f c ) + G( f + 2 f c )] , (3.166) 2 4 y(t) =
where
3 Amplitude Modulation
199
G( f ) = M( f ) M( f ) M( f ) m(t).
(3.167)
In the above equation “” denotes convolution and G( f ) extends over [−2W, 2W ]. For no aliasing in Y ( f ), we require 2 f c − 2W > 2W ⇒ f c > 2W.
(3.168)
Observe that, as f → 0, the transfer function of the BPF can be expressed as two delta functions at ±2 f c . Therefore, the spectrum at the BPF output is given by: V( f ) =
A2c G(0) [δ( f − 2 f c ) + δ( f + 2 f c )] . 4
(3.169)
Therefore v(t) =
A2c G(0) cos(4π f c t). 2
(3.170)
Now G( f ) = ⇒ G(0) =
∞
M(x)M( f − x) d x
x=−∞ ∞
M(x)M(−x) d x.
(3.171)
x=−∞
Since m(t) is real-valued M(−x) = M ∗ (x) ∞ ⇒ G(0) = |M(x)|2 d x x=−∞
= E,
(3.172)
where we have used the Rayleigh’s energy theorem. Thus (3.170) reduces to: v(t) =
A2c E cos(4π f c t). 2
(3.173)
31. (Haykin 1983) Consider the quadrature -carrier multiplex system shown in Fig. 3.41. The multiplexed signal s(t) is applied to a communication channel of frequency response H ( f ). The channel output is then applied to the receiver input. Here f c denotes the carrier frequency and the message spectra extends over [−W, W ]. Find
200
3 Amplitude Modulation
Fig. 3.41 Quadrature carrier multiplexing system
m1 (t)
Ac cos(2πfc t)
s(t)
y(t) H(f )
Ac sin(2πfc t) m2 (t)
Ac m1 (t) LPF
y(t)
2 cos(2πfc t) 2 sin(2πfc t) Ac m2 (t) LPF
(a) the relation between f c and W , (b) the condition on H ( f ), and (c) the frequency response of the LPF that is necessary for recovery of the message signals m 1 (t) and m 2 (t) at the receiver output. Assume a real-valued channel impulse response and Ac = 1. • Solution: Assuming Ac = 1 we have s(t) = m 1 (t) cos(2π f c t) + m 2 (t) sin(2π f c t) M2 ( f − f c ) − M2 ( f + f c ) M1 ( f − f c ) + M1 ( f + f c ) + ⇒ S( f ) = 2 2j (3.174) Let Y ( f ) = S( f )H ( f ) y(t).
(3.175)
2y(t) cos(2π f c t) Y ( f − f c ) + Y ( f + f c ).
(3.176)
Now at the receiver:
Now
3 Amplitude Modulation
201
Y ( f − fc ) = H ( f − fc ) M1 ( f − 2 f c ) + M1 ( f ) M2 ( f − 2 f c ) − M2 ( f ) + 2 2j Y ( f + fc ) = H ( f + fc ) M1 ( f ) + M1 ( f + 2 f c ) M2 ( f ) − M2 ( f + 2 f c ) + 2 2j (3.177) Since the LPF eliminates frequency components beyond ±W , the output of the upper LPF is given by:
M1 ( f ) M2 ( f ) − H ( f − fc ) G1( f ) = 2 2j M2 ( f ) M1 ( f ) + H ( f + f c ). + 2 2j
(3.178)
From the above equation, it is clear that to recover M1 ( f ) from the upper LPF (G 1 ( f ) = M1 ( f )) we require H ( f − fc ) = H ( f + fc ) ∗
⇒ H ( fc − f ) = H ( f + fc )
for −W ≤ f ≤ W for −W ≤ f ≤ W
(3.179)
where it is assumed that the channel impulse response is real-valued. It can be shown that the above condition is also required for the recovery of M2 ( f ) from the lower LPF. Note that for distortionless recovery of the message signals, the frequency response of both the LPFs must be: HLPF ( f ) = 2/(H ( f − f c ) + H ( f + f c ))
for −W ≤ f ≤ W (.3.180)
Finally, for no aliasing we require f c > W . 32. (Haykin 2001) A particular version of AM stereo uses quadrature multiplexing. Specifically, the carrier Ac cos(2π f c t) is used to modulate the sum signal m 1 (t) = V0 + m l (t) + m r (t),
(3.181)
where V0 is a dc offset included for the purpose of transmitting the carrier component, m l (t) is the left hand audio signal, and m r (t) is the right hand audio signal. The quadrature carrier Ac sin(2π f c t) is used to modulate the difference signal m 2 (t) = m l (t) − m r (t).
(3.182)
202
3 Amplitude Modulation
(a) Show that an envelope detector may be used to recover the sum signal m r (t) + m l (t) from the quadrature multiplexed signal. How would you minimize the signal distortion produced by the envelope detector. (b) Show that a coherent detector can recover the difference m l (t) − m r (t). • Solution: The quadrature -multiplexed signal can be written as: s(t) = Ac m 1 (t) cos(2π f c t) + Ac m 2 (t) sin(2π f c t).
(3.183)
Ideally, the output of the envelope detector is given by: y(t) = Ac m 21 (t) + m 22 (t) m 2 (t) = Ac m 1 (t) 1 + 22 . m 1 (t)
(3.184)
The desired signal is Ac m 1 (t) and the distortion term is D(t) =
1+
m 22 (t) . m 21 (t)
(3.185)
The distortion can be minimized by increasing the dc offset V0 . Note, however, that increasing V0 results in increasing the carrier power, which makes the system more power inefficient. Multiplying s(t) by 2 cos(2π f c t) and passing the output through a lowpass filter yields: z 1 (t) = Ac m 1 (t) = Ac (V0 + m l (t) + m r (t)).
(3.186)
Similarly, multiplying s(t) by 2 sin(2π f c t) and passing the output through a lowpass filter yields: z 2 (t) = Ac m 2 (t) = Ac (m l (t) − m r (t)).
(3.187)
Adding z 1 (t) and z 2 (t) we get z 1 (t) + z 2 (t) = Ac V0 + 2 Ac m l (t).
(3.188)
Subtracting z 2 (t) from z 1 (t) yields: z 1 (t) − z 2 (t) = Ac V0 + 2 Ac m r (t).
(3.189)
3 Amplitude Modulation
203
The message signals m l (t) and m r (t) typically have zero dc, hence Ac V0 can be removed by a dc blocking capacitor. 33. (Haykin 1983) Using the message signal m(t) =
1 , 1 + t2
(3.190)
determine the modulated signal for the following methods of modulation: (a) AM with 50% modulation. Assume a cosine carrier. (b) DSB-SC. Assume a cosine carrier. (c) SSB with upper sideband transmitted. In all cases, the area under the Fourier transform of the modulated signal must be unity. • Solution: Note that the maximum value of m(t) occurs at t = 0. Moreover m(0) = 1 =
∞ f =−∞
, M( f ) d f
(3.191)
where M( f ) is the Fourier transform of m(t). Hence the AM signal with 50% modulation is given by: 0.5 cos(2π f c t). s(t) = Ac1 1 + 1 + t2
(3.192)
The Fourier transform of s(t) is: S( f ) =
Ac1 Ac1 [δ( f − f c ) + δ( f + f c )] + [M( f − f c ) + M( f + f c )] . 2 4 (3.193)
Since
∞ f =−∞
S( f ) d f = 1,
(3.194)
we require Ac1 Ac1 [1 + 1] + [1 + 1] = 1 2 4 ⇒ Ac1 = 2/3,
(3.195)
204
3 Amplitude Modulation
where we have used (3.191). Note that shifting the spectrum of M( f ) does not change the area. The DSB-SC modulated signal is given by: Ac2 cos(2π f c t). 1 + t2
(3.196)
Ac2 [M( f − f c ) + M( f + f c )] . 2
(3.197)
s(t) = The Fourier transform of s(t) is S( f ) =
Again, due to (3.191) and (3.194) we have Ac2 [1 + 1] = 1 2 ⇒ Ac2 = 1.
(3.198)
The Hilbert transform of m(t) is given by: 1 HT t . 2 1+t 1 + t2
(3.199)
Therefore, the SSB modulated signal with upper sideband transmitted is given by: s(t) = Ac3
1 t cos(2π f t) − sin(2π f t) . c c 1 + t2 1 + t2
(3.200)
The spectrum of s(t) is S( f ) =
Ac3 M( f − f c ) 1 + sgn( f − f c ) 2
Ac3 M( f + f c ) 1 − sgn( f + f c ) . + 2
(3.201)
Note that m(t) is real-valued and an even function of time. Hence, M( f ) is also real-valued and an even function of frequency. Therefore from (3.191)
∞ f =0
M( f ) d f = 1/2.
(3.202)
Hence, from (3.194) and (3.202) we again have 2 × 0.5Ac3 2 × 0.5Ac3 + =1 2 2 ⇒ Ac3 = 1.
(3.203)
3 Amplitude Modulation
205 M (f ) 2
f (kHz) −3
−0.2
0
0.2
3
Fig. 3.42 Message spectrum
Note that we can also use the relation ∞ S( f ) d f = s(0) = 1
(3.204)
f =−∞
to obtain Ac1 , Ac2 , and Ac3 . 34. Consider a message signal m(t) whose spectrum extends over 200 Hz to 3 kHz, as illustrated in Fig. 3.42. This message is SSB modulated to obtain: ˆ sin(2π f c t). s(t) = Ac m(t) cos(2π f c t) + Ac m(t)
(3.205)
At the receiver, s(t) is demodulated using a carrier of the form cos(2π( f c + f )t). Plot the spectrum of the demodulated signal at the output of the lowpass filter when (a) f = 20 Hz, (b) f = −10 Hz. Assume that the lowpass filter is ideal with unity gain and extends over [−4, 4] kHz, and f c = 20 kHz. Show all the steps required to arrive at the answer. In the sketch, indicate all the important points on the X Y -axes. • Solution: The multiplier output can be written as: s1 (t) = s(t) cos(2π( f c + f )t)
= Ac m(t) cos(2π f c t) + m(t) ˆ sin(2π f c t) cos(2π( f c + f )t) Ac m(t) [cos(2π f t) + cos(2π(2 f c + f )t)] = 2 Ac m(t) ˆ [sin(2π(2 f c + f )t) − sin(2π f t)] . (3.206) + 2 The output of the lowpass filter is:
206
3 Amplitude Modulation M1 (f ) (a)
Ac
f (kHz) −3.02
−0.22 −0.02 0
0.02
0.22
3.02
M2 (f ) (b)
Ac
f (kHz) −2.99
−0.19 −0.01 0
0.01
0.19
2.99
Fig. 3.43 Demodulated message spectrum in the presence of frequency offset Fig. 3.44 An AM signal transmitted through a series RLC circuit
vi (t)
L
C
vo (t)
R
s2 (t) =
Ac
m(t) cos(2π f t) − m(t) ˆ sin(2π f t) . 2
(3.207)
When f is positive, s2 (t) is an SSB signal with carrier frequency f and upper sideband transmitted. This is illustrated in Fig. 3.43a, for f = 20 Hz. When f is negative, s2 (t) is an SSB signal with carrier frequency f and lower sideband transmitted. This is illustrated in Fig. 3.43b with f = −10 Hz. 35. Consider the series RLC circuit in Fig. 3.44. The resonant frequency of the circuit is 1 MHz and the Q-factor is 100. The input signal vi (t) is given by: vi (t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t), where μ < 1, f c = 1 MHz, f m = 5 kHz. Determine vo (t). Note that the Q-factor of the circuit is defined as
(3.208)
3 Amplitude Modulation
207
Q = 2π f c L/R fc = −3 dB bandwidth
(3.209)
where f c denotes the resonant frequency of the RLC circuit. • Solution: The input signal vi (t) can be written as: vi (t) = Ac cos(2π f c t) +
μAc [cos(2π( f c − f m )t) + cos(2π( f c + f m )t)] 2 (3.210)
The transfer function of the circuit is given by: Vo (ω) R = H (ω) = , Vi (ω) R + j ωL − j/(ωC)
(3.211)
where ω = 2π f . Let us now consider a frequency ω = ωc + ω
(3.212)
where ω ωc = 2π f c . Therefore H (ω) becomes: R R + j (ωc + ω)L − j/((ωc + ω)C) R ≈ R + j ωL + jω/(ωc2 C) 1 = 1 + j ωL/R + jω/(ωc2 RC) 1 = , 1 + j (2ω)L/R
H (ω) =
(3.213)
where we have used the following relationships: ωc L = 1/(ωc C) 1 ≈ 1 − ω/ωc 1 + ω/ωc
when ω ωc .
(3.214)
Since the Q-factor of the filter is high, we can assume that 2ω is the 3-dB bandwidth of the filter. Hence ω =
ωc = 2π × 5000 rad/s, 2Q
which is also equal to the message frequency. Thus
(3.215)
208
3 Amplitude Modulation
1 H (ωc + ω) = √ e−j π/4 2 1 j π/4 H (ωc − ω) = √ e 2 H (ωc ) = 1.
(3.216)
The output signal can be written as: μAc vo (t) = Ac cos(2π f c t) + √ {cos [2π( f c − f m )t + π/4] 2 2 + cos [2π( f c + f m )t − π/4]} μ = Ac 1 + √ cos (2π f m t − π/4) cos (2π f c t) . (3.217) 2 36. Let su (t) denote the SSB wave obtained by transmitting only the upper sideband, that is
ˆ sin(2π f c t) , su (t) = Ac m(t) cos(2π f c t) − m(t)
(3.218)
where m(t) is bandlimited to | f | < W f c . Explain how m(t) can be recovered from su (t) using only multipliers, adders/ subtracters and Hilbert transformers. Lowpass filters should not be used. • Solution: We have
ˆ sin(2π f c t) su (t) = Ac m(t) cos(2π f c t) − m(t)
sˆu (t) = Ac m(t) sin(2π f c t) + m(t) ˆ cos(2π f c t) .
(3.219)
From (3.219), we get m(t) =
1
su (t) cos(2π f c t) + sˆu (t) sin(2π f c t) . Ac
(3.220)
37. (Haykin 1983) A method that is used for carrier recovery in SSB modulation systems involves transmitting two pilot frequencies that are appropriately positioned with respect to the transmitted sideband. This is shown in Fig. 3.45a for the case where the lower sideband is transmitted. Here, the two pilot frequencies are defined by: f1 = fc − W − f f 2 = f c + f,
(3.221)
where f c is the carrier frequency, W is the message bandwidth, and f is chosen such that
3 Amplitude Modulation
209 S(f )
(a) f −f2
−f1
−fc
−fc + W
0
f1
fc
f2
fc − W
(b) s(t)
Narrowband filter centered at f1
Narrowband filter centered at f2
Frequency v1 (t)
Lowpass filter
v3 (t)
divide
v4 (t)
by n + 2
v2 (t)
Output v5 (t)
Narrowband filter centered at fc
Fig. 3.45 Carrier recovery for SSB signals with lower sideband transmitted
n=
W f
(3.222)
where n is an integer. Carrier recovery is accomplished using the scheme shown in Fig. 3.45b. The outputs of the two narrowband filters centered at f 1 and f 2 are given by: v1 (t) = A1 cos(2π f 1 t + φ1 ) v2 (t) = A2 cos(2π f 2 t + φ2 ).
(3.223)
The lowpass filter is designed to select the difference frequency component at the first multiplier output, due to v1 (t) and v2 (t). (a) Show that the output signal in Fig. 3.45b is proportional to the carrier wave Ac cos(2π f c t) if the phase angles satisfy: φ2 =
−φ1 . 1+n
(3.224)
(b) For the case when only the upper sideband is transmitted, the two pilot frequencies are
210
3 Amplitude Modulation
f1 = fc − f f 2 = f c + W + f.
(3.225)
How would you modify the carrier recovery scheme in order to deal with this case. What is the corresponding relation between φ1 and φ2 for the output to be proportional to the carrier signal? • Solution: The output of the lowpass filter can be written as: v3 (t) =
A1 A2 cos(2π( f 2 − f 1 )t + φ2 − φ1 ). 2
(3.226)
Substituting for f 1 and f 2 we get: A1 A2 cos(2π(n + 2)W t/n + φ2 − φ1 ) 2 A1 A2 cos(2π(n + 2)W t/n + φ2 − φ1 + 2πk) = 2
v3 (t) =
,
(3.227)
for 0 ≤ k < n + 2, where k is an integer. However, v3 (t) in (3.227) can be written as: v3 (t) =
A1 A2 cos(2π(n + 2)W (t − t0 )/n) 2
(3.228)
where −2π(n + 2)W t0 = φ2 − φ1 + 2πk n
(3.229)
for some value of t0 . The output of the frequency divider is: A1 A2 cos(2πW (t − t0 )/n) 2 A1 A2 cos(2πW t/n + (φ2 − φ1 )/(n + 2) + 2πk/(n + 2)) = 2 A1 A2 cos(2π f t + (φ2 − φ1 )/(n + 2) + 2πk/(n + 2)). = 2 (3.230)
v4 (t) =
Note that frequency division by n + 2 results in a phase ambiguity of 2πk/(n + 2). We need to choose the phase corresponding to k = 0. How this can be done will be explained later. The output of the second multiplier is given by:
3 Amplitude Modulation
211
v4 (t)v2 (t) =
A1 A22 cos(2π f t + (φ2 − φ1 )/(n + 2)) 2 × cos(2π( f c + f )t + φ2 ).
(3.231)
The output of the narrowband filter centered at f c is v5 (t) =
A1 A22 cos(2π f c t + φ2 − (φ2 − φ1 )/(n + 2)), 4
(3.232)
which is proportional to the carrier frequency when φ2 − (φ2 − φ1 )/(n + 2) = 0 −φ1 ⇒ φ2 = . n+1
(3.233)
Note that φ1 = φ2 = 0 is a trivial solution. One possible method to choose the correct value of k(= 0) could be as follows. Obtain v5 (t) using each value of k, for 0 ≤ k < n + 2, and use it to demodulate the SSB signal. The value of k that maximizes the message energy/power, is selected. For the case where the upper sideband is transmitted, the carrier recovery scheme is shown in Fig. 3.46. Here we again have A1 A2 cos(2π( f 2 − f 1 )t + φ2 − φ1 ) 2 A1 A2 cos(2π(n + 2)W t/n + φ2 − φ1 + 2πk), = 2
v3 (t) =
(3.234)
for 0 ≤ k < n + 2. Using similar arguments, the output of the frequency divider is (for k = 0): A1 A2 cos(2πW t/n + (φ2 − φ1 )/(n + 2)) 2 A1 A2 cos(2π f t + (φ2 − φ1 )/(n + 2)). = 2
v4 (t) =
(3.235)
The output of the second multiplier is given by: v4 (t)v1 (t) =
A21 A2 cos(2π f t + (φ2 − φ1 )/(n + 2)) 2 × cos(2π( f c − f )t + φ1 ).
(3.236)
212
3 Amplitude Modulation S(f )
(a) f −f2 −fc − W
−f1
0
fc + W f2
f1
−fc
fc
(b) s(t)
Narrowband filter centered at f2
Narrowband filter centered at f1
Frequency v2 (t)
Lowpass filter
v3 (t)
divide
v4 (t)
by n + 2
v1 (t)
Output v5 (t)
Narrowband filter centered at fc
Fig. 3.46 Carrier recovery for SSB signals with upper sideband transmitted
The output of the narrowband filter centered at f c is v5 (t) =
A21 A2 cos(2π f c t), 4
(3.237)
provided φ2 − φ1 + φ1 = 0 n+2 ⇒ φ2 = −(n + 1)φ1 .
(3.238)
38. Consider a signal given by: s(t) = m(t) ˆ cos(2π f c t) − m(t) sin(2π f c t),
(3.239)
where m(t) ˆ is the Hilbert transform of m(t). (a) Sketch the spectrum of s(t) when the spectrum of m(t) is shown in Fig. 3.47a. Label all the important points along the axes. (b) It is desired to recover m(t) from s(t) using the scheme shown in Fig. 3.47b. Give the specifications of filter1, filter2, and the value of x.
3 Amplitude Modulation
213 M (f )
(a)
2
f −fb
−fa
fa
0
fb
(b) s(t)
Filter1
m(t)
Filter2
2 cos(2πfc t)
x
Fig. 3.47 a Spectrum of the message signal. b Scheme to recover the message
• Solution: We know that the Fourier transform of m(t) ˆ is given by m(t) ˆ −j sgn ( f )M( f ).
(3.240)
Then −j
sgn ( f − f c )M( f − f c ) 2 + sgn ( f + f c )M( f + f c )
m(t) ˆ cos(2π f c t)
= S1 ( f ),
(3.241)
which is plotted in Fig. 3.48b. Similarly m(t) sin(2π f c t)
−j [M( f − f c ) − M( f + f c )] 2
= S2 ( f )
(3.242)
which is plotted in Fig. 3.48c. Finally, we have S( f ) = S1 ( f ) − S2 ( f ),
(3.243)
which is plotted in Fig. 3.48d. One possible implementation of the receiver is shown in Fig. 3.48e. 39. It is desired to transmit 100 voice signals using frequency division multiplexing (FDM). The voice signals are SSB modulated, with the upper sideband transmitted. The spectrum of each of the voice signals is illustrated in Fig. 3.49.
214
3 Amplitude Modulation M (f ) 2
(a)
f −fb
−fa
fa
0
fb
S1 (f ) j
(b) −fc + fa
fc + fa f
−fc − fa
0
−fc
fc − fa
fc
−j S2 (f ) (c)
j fc − fa
fc + fa f
−fc − fa
0
−fc
−fc + fa
fc
−j S(f )
(d)
−fc + fa
−fc + fb
2j
f 0
−fc −2 j (e)
fc fc − fb
fc − fa
1
−fb
0
s(t)
fb −m(t)
m(t) ˆ LPF 2 cos(2πfc t)
Fig. 3.48 a M( f ). b S1 ( f ). c S2 ( f ). d S( f )
m(t)
HT −1
3 Amplitude Modulation
215
Fig. 3.49 Spectrum of the message signal
M (f ) 1
f (kHz) −3.3
−0.3 0
0.3
3.3
(a) What is the minimum carrier spacing required to multiplex all the signals, such that there is no overlap of spectra. (b) Using a single- stage approach and assuming a carrier spacing of 4 kHz and a minimum carrier frequency of 100 kHz, determine the lower and upper frequency limits occupied by the FDM signal containing 100 voice signals. (c) In the two-stage approach, both stages use SSB with upper sideband transmitted. In the first stage, the voice signals are grouped into L blocks, each containing K voice signals. The carrier frequencies in each block in the first stage is given by 4n kHz, for 0 ≤ n ≤ K − 1. Note that the FDM signal using the two-stage approach must be identical to the single-stage approach in (b). i. Sketch the spectrum of each block in the first stage. ii. How many carriers are required to generate the FDM signal? Express your answer in terms of L and K . iii. Determine L and K such that the number of carriers is minimized. iv. Give the expression for the carrier frequencies required to modulate each block in the 2nd stage. • Solution: The minimum carrier spacing required is 3 kHz. If the minimum carrier frequency is 100 kHz, then the spectrum of the first voice signal starts at 100.3 kHz. The carrier for the 100th voice signal is at 100 + 99 × 4 = 496 kHz.
(3.244)
The spectrum of the 100th voice signal starts at 496.3 kHz and ends at 499.3 kHz. Therefore, the spectrum of the composite FDM signal extends from 100.3 to 499.3 kHz, that is 100.3 ≤ | f | ≤ 499.3 kHz. The spectrum of each block in the first stage is shown in Fig. 3.50b. For the second approach, let us first evaluate the number of carriers required to obtain each block in the first stage. Clearly, the number of carriers required is K − 1, since the first message in each block is not modulated at all. In the next stage, L carriers are required to translate each block to the appropriate frequency band. Thus, the total number of carriers required is L + K − 1. Now, we need to minimize L + K − 1 subject to the constraint L K = 100. The problem can be restated as
216
3 Amplitude Modulation ml, 1 (t)
(a)
ml, 2 (t)
sl (t) SSB
Sl (f )
for 1 ≤ l ≤ L
fc = 4 kHz ml, K (t) SSB fc = 4(K − 1) kHz 1st stage s1 (t) SSB fc = 100 kHz
Final FDM signal sL (t) SSB fc = 100 + 4K(L − 1) kHz 2nd stage Sl (f )
(b)
(kHz) −7.3 −4.3
−3.3
−4(K − 1) −4 −4(K − 1) − 0.3 −4(K − 1) − 3.3
0
−0.3
3.3 0.3
4.3 4
f
7.3
4(K − 1) 4(K − 1) + 0.3 4(K − 1) + 3.3
Fig. 3.50 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage
min K
100 + K − 1. K
(3.245)
Differentiating with respect to K and setting the result to zero, we get the solution as K = 10, L = 10. The carrier frequencies required to modulate each block in the second stage is given by 100 + 4K l for 0 ≤ l ≤ L − 1. The block diagram of the two-stage approach is given in Fig. 3.50a. 40. Consider the modified switching modulator shown in Fig. 3.51. Note that v1 (t) = Ac cos(2π f c t) + m(t) v2 (t) = v1 (t)g p (t),
(3.246)
3 Amplitude Modulation
v1 (t)
217
Switching modulator
v2 (t)
Ideal Bandpass filter
s(t)
A1 cos3 (2πfc t)
Fig. 3.51 Block diagram of a modified switching modulator
where g p (t) =
∞ 2 (−1)n−1 1 + cos(2π f c (2n − 1)t). 2 π n=1 2n − 1
(3.247)
It is desired to obtain an AM signal s(t) centered at 3 f c . (a) Draw the spectrum of the ideal BPF required to obtain s(t). Assume a BPF gain of k. The bandwidth of m(t) extends over [−W, W ]. (b) Write down the expression for s(t), assuming a BPF gain of k. (c) It is given that the carrier power is 10 W with 100% modulation. The absolute maximum value of m(t) is 2 V. Compute A1 and k. • Solution: We have v2 (t) =
Ac cos(2π f c t) 2 ∞ Ac (−1)n−1 + (cos(4π(n − 1) f c t) + cos(4πn f c t)) π n=1 2n − 1 +
∞ m(t) 2 (−1)n−1 + cos(2π(2n − 1) f c t)m(t). (3.248) 2 π n=1 2n − 1
In (3.248), the only term centered at 3 f c is −
2 cos(6π f c t)m(t), 3π
(3.249)
which occurs at n = 2. Similarly A1 cos3 (2π f c t) =
A1 [3 cos(2π f c t) + cos(6π f c t)] , 4
(3.250)
which has a component at f c and 3 f c . Thus it is clear that in order to extract the components at 3 f c , the spectrum of the BPF must be as indicated in Fig. 3.52. The BPF output is given by:
218
3 Amplitude Modulation H(f ) k
−3fc
−2fc
−3fc − W
2fc
−3fc + W
3fc
3fc − W
3fc + W
Fig. 3.52 Spectrum of the BPF
A1 k 2k cos(6π f c t) − m(t) cos(6π f c t) 4 3π 8 k A1 1− m(t) cos(6π f c t). = 4 3π A1
s(t) =
(3.251)
For 100% modulation we require max
8 |m(t)| = 1 3π A1 16 ⇒ =1 3π A1 ⇒ A1 = 1.7.
(3.252)
k 2 A21 = 10 32 ⇒ k = 10.5.
(3.253)
The carrier power is given by
41. Consider an AM signal given by: s(t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t)
|μ| < 1.
(3.254)
Assume that the envelope detector is implemented using a diode and an RC filter, as shown in Fig. 3.53. Determine the upper limit on RC such that the capacitor voltage follows the envelope. Assume that RC 1/ f c , e−x ≈ 1 − x for small values of x and f c to be very large. • Solution: The envelope of s(t) is (see Fig. 3.54) a(t) = Ac |1 + μ cos(2π f m t)| = Ac [1 + μ cos(2π f m t)] ,
(3.255)
3 Amplitude Modulation
219 s(t)
Rs v(t) C
R
Fig. 3.53 Envelope detector using an RC filter
2
a(t0 )
1.5
a(t) v(t)
1 0.5 0 -0.5
s(t)
-1 -1.5 -2
0
0.05
0.1
0.15
0.2
0.25 t
0.3
0.35
0.4
0.45
0.5
Fig. 3.54 Plots of s(t), the envelope a(t) and the capacitor voltage v(t)
since |μ| < 1. Consider any point a(t0 ) on the envelope at time t0 , which coincides with the carrier peak. The capacitor starts discharging from a(t0 ) upto approximately the next carrier peak. The capacitor voltage v(τ ) can be written as: v(τ ) = a(t0 )e−τ /RC τ ≈ a(t0 ) 1 − RC
for 0 < τ < 1/ f c ,
(3.256)
where it is assumed that the new time variable τ = 0 at t0 . The absolute value of the slope of the capacitor voltage at t0 is given by dv(τ ) a(t0 ) . = dτ RC τ =0
(3.257)
220
3 Amplitude Modulation
We require that the absolute value of the slope of the capacitor voltage must exceed that of the envelope at time t0 . The absolute value of the slope of the envelope at t0 is da(t) = μAc (2π f m ) |sin(2π f m t0 )| . dt t=t0
(3.258)
Note that since t0 must coincide with the carrier peak, we must have t0 =
k fc
(3.259)
for integer values of k. From (3.257) and (3.258) we get the required condition as a(t0 ) ≥ μAc (2π f m ) |sin(2π f m t0 )| RC 1 + μ cos(2π f m t0 ) ⇒ RC ≤ 2πμ f m | sin(2π f m t0 )|
(3.260)
for every t0 given by (3.259). However, as f c becomes very large t0 → t. Thus the upper limit of RC can be found by computing the minimum value of the RHS of (3.260). Assuming that sin(2π f m t) is positive, we have d dt
1 + μ cos(2π f m t) =0 2πμ f m sin(2π f m t) ⇒ cos(2π f m t) = −μ.
(3.261)
We get the same result when sin(2π f m t) is negative. Substituting (3.261) in (3.260) we get RC ≤
1 − μ2 . 2πμ f m
(3.262)
Recall that for proper functioning of the envelope detector, we require: RC
1 , W
(3.263)
where W is the one-sided bandwidth of the message. We find that (3.263) is valid when μ is close to unity and f m is replaced by W in (3.262). 42. Consider a VSB signal s(t) obtained by passing s1 (t) = Ac m(t) cos(2π f c t)
(3.264)
3 Amplitude Modulation
221 H(f ) 4
f −fc − W
−fc
−fc + W
0
fc − W
fc
fc + W
Fig. 3.55 Spectrum of the VSB filter
through a VSB filter H ( f ) as illustrated in Fig. 3.55. Assume m(t) to be bandlimited to [−W, W ] and f c W . Let s(t) = sc (t) cos(2π f c t) − ss (t) sin(2π f c t).
(3.265)
(a) Determine the bandwidth of s(t). (b) Determine sc (t) and ss (t). (c) Draw the block diagram of the phase discrimination method of generating the VSB signal s(t) for this particular problem. Sketch the frequency response of Hs ( f )/j. Label all the important points. (d) In this problem, most of the energy in s(t) is concentrated on the upper sideband. How should the block diagram in (c) be modified (without changing Hs ( f )) such that most of the energy is concentrated in the lower sideband. (e) How should Hs ( f ) in (c) be modified so that s(t) becomes an SSB signal with upper sideband transmitted? Sketch the new response of Hs ( f )/j. Label all the important points. Note that in this part only Hs ( f ) is to be modified. There is no need to derive any formula. All symbols have their usual meaning. • Solution: The bandwidth of s(t) is 2W . We know that Ac M( f )H ( f c ) for −W ≤ f ≤ W Sc ( f ) = 0 otherwise.
(3.266)
In the given problem H ( f c ) = 2. Therefore sc (t) = 2 Ac m(t).
(3.267)
Ss ( f ) = Ac M( f )Hs ( f )/2,
(3.268)
Similarly
where
222
3 Amplitude Modulation
Hs ( f ) =
j [H ( f − f c ) − H ( f + f c )] for −W ≤ f ≤ W 0 otherwise.
(3.269)
From Fig. 3.56a–c and (3.269), it is clear that Hs ( f ) =
−j (4 f /W ) for −W ≤ f ≤ W 0 otherwise.
(3.270)
Hs ( f )/j is depicted in Fig. 3.56d. Hence Ss ( f ) = (−Ac /(πW ))j 2π f M( f ).
(3.271)
Taking the inverse Fourier transform of (3.271) we get ss (t) = −
Ac dm(t) . πW dt
(3.272)
The block diagram of the phase discrimination method of generating s(t) is given in Fig. 3.57. If most of the energy is to be concentrated in the lower sideband, z(t) in Fig. 3.57 must be subtracted instead of being added. If s(t) is to be an SSB signal with upper sideband transmitted, then Hs ( f )/j must be as shown in Fig. 3.56e. 43. The signal m 1 (t) = B cos(2π f m t), B > 0, is passed through a first-order RCfilter as shown in Fig. 3.58. It is given that 1/(2π RC) = f m /2. The output m(t) is amplitude modulated to obtain: s(t) = Ac [1 + ka m(t)] cos(2π f c t).
(3.273)
(a) Find m(t). (b) Find the limits of ka for no envelope distortion. (c) Compute the power of s(t). • Solution: The transfer function of the first-order lowpass filter is H( f ) =
1 , 1 + j f / f0
(3.274)
where f 0 is the –3 dB frequency given by: f0 =
1 fm = . 2π RC 2
(3.275)
Clearly m(t) = B|H ( f m )| cos(2π f m t + θ),
(3.276)
3 Amplitude Modulation
223 H(f )
(a)
4
f −fc − W
−fc −fc + W 0
fc − W fc
fc + W
H(f − fc ) (b)
4
f −W
W
0
2fc − W
2fc
2fc + W
H(f + fc ) (c)
4
f −2fc − W
2fc
−2fc + W
−W
W
0
Hs (f )/j (d)
4
W
f
−W −4
Hs (f )/j (e)
4
W −W −4
Fig. 3.56 Spectrum of the VSB filter and Hs ( f )/j
f
224
3 Amplitude Modulation m(t)
2Ac cos(2πfc t)
+ s(t)
Hs (f )
−(Ac /2) sin(2πfc t)
+ −
z(t)
Fig. 3.57 Block diagram for generating s(t) Fig. 3.58 First-order RC filter
m1 (t)
R
m(t)
C
where θ = − tan−1 (2) 1 |H ( f m )| = √ . 5
(3.277)
For no envelope distortion we require |ka |B|H ( f m )| ≤ 1 √ 5 ⇒ |ka | ≤ B
(3.278)
with the constraint that ka = 0. In order to compute the power of s(t) we note that s(t) = Ac cos(2π f c t) B + Ac ka √ cos(2π( f c − f m )t − θ) 2 5 B + Ac ka √ cos(2π( f c + f m )t + θ). 2 5
(3.279)
Thus the power of s(t) is just the sum of the power of the individual sinusoids and is equal to:
3 Amplitude Modulation
225
Fig. 3.59 a Message spectrum. b Recovery of the message
M (f ) 1
(a)
f −fb
−fa
fa
0
fb
(b) m(t)
s(t) H(f )
2 sin(2πfc t)
P=
A2 k 2 B 2 A2c + c a . 2 20
(3.280)
44. An AM signal s1 (t) = Ac [1 + ka m(t)] cos(2π f c t)
(3.281)
is passed through an ideal bandpass filter (BPF) having a frequency response H( f ) =
3 for f c − f b ≤ | f | ≤ f c − f a 0 otherwise.
(3.282)
The message spectrum is shown in Fig. 3.59a. (a) Write down the expression of the signal s(t), at the BPF output, in terms of m(t). (b) It is desired to recover m(t) (not cm(t) where c is a constant) from s(t) using the block diagram in Fig. 3.59b. Give the specifications of H ( f ). Assume ideal filter characteristics. • Solution: Clearly, the signal at the BPF output is SSB modulated with lower sideband transmitted. Therefore, the expression for s(t) is ˆ sin(2π f c t), s(t) = km(t) cos(2π f c t) + k m(t)
(3.283)
where k is a constant that needs to be found out. Now, the Fourier transform of s(t) is S( f ) =
k M( f − f c ) 1 − sgn ( f − f c ) 2
k + M( f + f c ) 1 + sgn ( f + f c ) . 2
(3.284)
226
3 Amplitude Modulation S(f ) 3Ac ka /2 f −fc −fc + fa −fc + fb
fc − fb
0
fc − fa
fc
Fig. 3.60 Spectrum at BPF output Inphase channel a(t)
c(t)
Lowpass filter
m(t)
cos(2πf0 t)
cos(2πfc t)
SSB signal
sin(2πf0 t)
sin(2πfc t)
e(t)
b(t)
Lowpass
d(t)
filter Quadrature channel M (f ) 1 f −fb
−fa fa
fb
Fig. 3.61 Weaver’s method of generating SSB signals transmitting the upper sideband
Comparing S( f ) in (3.284) with the spectrum in Fig. 3.60, we get k=
3Ac ka . 2
(3.285)
In order to recover the message, we need a lowpass filter with unity gain and passband [− f b , f b ] in cascade with an ideal Hilbert transformer followed by a gain of −1/k. These three elements can be combined into a single filter whose frequency response is given by H( f ) =
j (1/k)sgn( f ) for | f | < f b 0 otherwise.
(3.286)
45. (Haykin 1983) Consider the block diagram in Fig. 3.61. The message m(t) is bandlimited to f a ≤ | f | ≤ f b . The auxiliary carrier applied to the first pair of
3 Amplitude Modulation
227
product modulators is given by: f0 =
fa + fb . 2
(3.287)
The lowpass filters are identical and can be assumed to be ideal with unity gain and cutoff frequency equal to ( f b − f a )/2. The carrier frequency f c > ( f b − f a )/2. (a) Plot the spectra of the complex signals a(t) + j b(t), c(t) + j d(t) and the real-valued signal e(t). (b) Write down the expression for e(t) in terms of m(t). (c) How would you modify Fig. 3.61 so that only the lower sideband is transmitted. • Solution: We have a(t) + j b(t) = m(t) exp (j 2π f 0 t) = m 1 (t)
(say).
(3.288)
Thus m 1 (t) M1 ( f ) = M( f − f 0 ).
(3.289)
The plot of M1 ( f ) is given in Fig. 3.62b. Note that m 1 (t) cannot be regarded as a pre-envelope, since it has non-zero negative frequencies. The various edge frequencies in M1 ( f ) are given by: f 1 = ( f b − f a )/2 f 2 = ( f b + 3 f a )/2 f 3 = ( f a + 3 f b )/2.
(3.290)
Now the complex-valued signal m 1 (t) gets convolved with a real-valued lowpass filter (the lowpass filters have been stated to be identical, hence they can be considered to be a single real-valued lowpass filter), resulting in a complex-valued output m 2 (t). The plot of M2 ( f ) is shown in Fig. 3.62c. Let m 2 (t) = m 2, c (t) + j m 2, s (t) = c(t) + j d(t).
(3.291)
Now e(t) is given by e(t) = {m 2 (t) exp (−j 2π f c t)} 1
= m 2 (t) exp (−j 2π f c t) + m ∗2 (t) exp (j 2π f c t) 2 1
M2 ( f + f c ) + M2∗ (− f + f c ) . 2
(3.292)
228
3 Amplitude Modulation
Fig. 3.62 Plot of the signal spectra at various points
(a)
1
M (f ) f
−fa fa
−fb (b)
1
fb
M1 (f ) f
−f1
f1
f2
f3
M2 (f )
(c)
1 f −f1
f1 E(f )
(d)
0.5 f −fc
fc fc + f1 fc − f1
Since M2 ( f ) is real-valued, we have e(t)
1 [M2 ( f + f c ) + M2 (− f + f c )] = E( f ) 2
(say). (3.293)
E( f ) is plotted in Fig. 3.62d. Let f c1 = f c − f 1 − f a .
(3.294)
ˆ sin(2π f c1 t). e(t) = m(t) cos(2π f c1 t) − m(t)
(3.295)
Then
In order to transmit the lower sideband, the first product modulator in the quadrature arm must be fed with − sin(2π f 0 t). 46. (Haykin 2001) The spectrum of a voice signal m(t) is zero outside the interval f a < | f | < f b . To ensure communication privacy, this signal is applied to a
3 Amplitude Modulation
m(t)
v1 (t)
229
v2 (t)
Highpass
v3 (t)
Lowpass
filter
s(t)
filter
cos(2πfc t)
cos(2π(fc + fb )t)
Fig. 3.63 Block diagram of a scrambler
scrambler that consists of the following components in cascade: product modulator, highpass filter, second product modulator, and lowpass filter as illustrated in Fig. 3.63. The carrier wave applied to the first product modulator has frequency equal to f c , whereas that applied to the second product modulator has frequency equal to f b + f c . Both the carriers have unit amplitude. The highpass and lowpass filters are ideal with unity gain and have the same cutoff frequency at f c . Assume that f c > f b . (a) Derive an expression for the scrambler output s(t). (b) Show that the original voice signal m(t) may be recovered from s(t) by using an unscrambler that is identical to the scrambler. • Solution: The block diagram of the system is shown in Fig. 3.63. The output of the first product modulator is given by: V1 ( f ) =
1 [M( f − f c ) + M( f + f c )] . 2
(3.296)
The output of the highpass filter is an SSB signal with the upper sideband transmitted. This is illustrated in Fig. 3.64. Hence: v2 (t) =
1
m(t) cos(2π f c t) − m(t) ˆ sin(2π f c t) . 2
(3.297)
The output of the second product modulator is given by: v3 (t) = v2 (t) cos(2π( f c + f b )t) 1 = [m(t) (cos(2π(2 f c + f b )t) + cos(2π f b t)) 4 − m(t) ˆ (sin(2π(2 f c + f b )t) − sin(2π f b t)) ,
(3.298)
which after lowpass filtering becomes: s(t) =
1
m(t) cos(2π f b t) + m(t) ˆ sin(2π f b t) . 4
(3.299)
230
3 Amplitude Modulation M (f ) 1 f −fb
−fa
0
fa
fb
V1 (f )
1
Highpass filter
0.5 f −fc − fb
−fc
fc
0
fc + fb
V2 (f ) 0.5 f −fc − fb
−fc
fc
0
fc + fb
S(f ) 0.25 f −fb
fb
0
fa − fb
fb − fa S1 (f ) = M (f ) 1/16
−fb
−fa
0
fa
fb
Fig. 3.64 Illustrating the spectra of various signals in the scrambler
3 Amplitude Modulation
231
It is clear that s(t) is an SSB signal with lower sideband transmitted and carrier frequency f b . This is illustrated in Fig. 3.64. If s(t) is fed to the scrambler, the output would be similar to (3.299), that is: s1 (t) =
1
s(t) cos(2π f b t) + sˆ (t) sin(2π f b t) . 4
(3.300)
In other words, we transmit the lower sideband of s(t) with carrier frequency f b . Thus s1 (t) =
1 m(t) 16
(3.301)
which is again shown in Fig. 3.64. 47. (Haykin 1983) The single-tone modulating signal m(t) = Am cos(2π f m t)
(3.302)
is used to generate the VSB signal s(t) =
Am Ac [a cos(2π( f c + f m )t) + (1 − a) cos(2π( f c − f m )t)] (,3.303) 2
where 0 ≤ a ≤ 1 is a constant, representing the attenuation of the upper side frequency. (a) From the canonical representation of a bandpass signal, find the quadrature component of s(t). (b) The VSB signal plus a carrier Ac cos(2π f c t), is passed through an envelope detector. Determine the distortion produced by the quadrature component. (c) What is the value of a for which the distortion is maximum. (d) What is the value of a for which the distortion is minimum. • Solution: The VSB signal s(t) can be simplified to: Am Ac [cos(2π f c t) cos(2π f m t) + (1 − 2a) sin(2π f c t) sin(2π f m t)] . 2 (3.304) By comparing with the canonical representation of a bandpass signal, we see that the quadrature component is: s(t) =
−
Am Ac (1 − 2a) sin(2π f m t). 2
After the addition of the carrier, the modified signal is:
(3.305)
232
3 Amplitude Modulation
g(t)
x(t)
Bandpass
y(t)
filter
Energy
Output
meter
A cos(2πfc t) Variable frequency oscillator
Fig. 3.65 Block diagram of a heterodyne spectrum analyzer
Am cos(2π f m t) s(t) = Ac cos(2π f c t) 1 + 2 Am Ac (1 − 2a) sin(2π f c t) sin(2π f m t). + 2
(3.306)
The envelope is given by: Am 1 + D(t), E(t) = Ac 1 + cos(2π f m t) 2
(3.307)
where D(t) is the distortion defined by: D(t) =
(Am /2)(1 − 2a) sin(2π f m t) 1 + (Am /2) cos(2π f m t)
2 .
(3.308)
Clearly, D(t) is minimum when a = 1/2 and maximum when a = 0, 1. 48. (Haykin 1983) Figure 3.65 shows the block diagram of a heterodyne spectrum analyzer. The oscillator has an amplitude A and operates over the range f 0 to f 0 + W where ± f 0 is the midband frequency of the BPF and g(t) extends over the frequency band [−W, W ]. Assume that the BPF bandwidth f W and f 0 W and that the passband response of the BPF is unity. Determine the value of the energy meter output for an input signal g(t), for a particular value of the oscillator frequency, say f c . Assume that g(t) is a real-valued energy signal. • Solution: Let us denote the output of the product modulator by x(t). Hence x(t) = A cos(2π f c t)g(t) A [G( f − f c ) + G( f + f c )] = X ( f ), 2
(3.309)
where f c denotes the oscillator frequency. The BPF output can be written as:
3 Amplitude Modulation
233
Y(f) = X(f)[rect((f − f0)/Δf) + rect((f + f0)/Δf)]
     = X(f0) rect((f − f0)/Δf) + X(−f0) rect((f + f0)/Δf)
     = (A/2)[G(f0 − fc) rect((f − f0)/Δf) + G(−f0 + fc) rect((f + f0)/Δf)],   (3.310)

where we have assumed that Δf is small enough so that X(f) can be taken to be constant over the bandwidth of the filter. The energy of y(t) is given by

E = ∫_{t=−∞}^{∞} y²(t) dt = ∫_{f=−∞}^{∞} |Y(f)|² df,   (3.311)

where we have used Rayleigh's energy theorem. Hence

E = (A²/2) |G(f0 − fc)|² Δf,   (3.312)

where we have used the fact that g(t) is real-valued, hence

|G(f0 − fc)| = |G(−f0 + fc)|.   (3.313)

Note that as fc varies from f0 to f0 + W, we obtain the magnitude spectrum of g(t).
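A minimal simulation of this sweep is sketched below (not from the text). The test signal, the BPF centre f0, the bandwidth Δf and the oscillator settings are assumed values; the energy-meter reading is compared against (3.312) with |G(f)| = 1/(2W) for the chosen g(t).

import numpy as np

fs = 200e3
t = np.arange(0.0, 0.05, 1.0 / fs)
f0, W, df, A = 50e3, 5e3, 200.0, 1.0          # assumed BPF centre, signal band, BPF width
g = np.sinc(2 * W * (t - 0.025))              # test signal: |G(f)| = 1/(2W) for |f| < W

freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
dfreq = freqs[1] - freqs[0]
E_pred = (A ** 2 / 2) * (1.0 / (2 * W)) ** 2 * df            # (3.312)
for fc in (f0 + 1e3, f0 + 3e3, f0 + 4.5e3):                  # oscillator sweep within (f0, f0 + W)
    X = np.fft.rfft(A * np.cos(2 * np.pi * fc * t) * g) / fs  # approximate X(f) for f >= 0
    band = np.abs(freqs - f0) <= df / 2                       # ideal BPF around +f0
    E = 2.0 * np.sum(np.abs(X[band]) ** 2) * dfreq            # Rayleigh; factor 2 for f < 0
    print(fc, E, E_pred)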
49. An AM signal is generated using a switching modulator as shown in Fig. 3.66. The signal v1(t) = Ac cos(2π fc t) + m(t), where Ac ≫ |m(t)|.

[Fig. 3.66 Generation of AM signal using a switching modulator: v1(t) drives the diode D in series with a load resistor R, producing v2(t); the i–v characteristic of D, with cut-in voltage Vγ, is also shown.]

The diode D has
a cut-in voltage equal to Vγ (< Ac). The i–v characteristics of the diode are also shown.

(a) Assuming that Vγ/Ac = 1/√2, derive the expression for the AM signal centered at fc and explain how it can be obtained. Assume also that fc ≫ W, where W is the two-sided bandwidth of m(t).
(b) Find the minimum value of Ac for no envelope distortion, given that |m(t)| ≤ 1 for all t.

• Solution: We have

v2(t) = v1(t) − Vγ when D is ON, and v2(t) = 0 when D is OFF
⇒ v2(t) = v1(t) − Vγ when v1(t) ≥ Vγ, and v2(t) = 0 when v1(t) < Vγ
⇒ v2(t) = v1(t) − Vγ when Ac cos(2π fc t) ≥ Vγ, and v2(t) = 0 when Ac cos(2π fc t) < Vγ,   (3.314)

where we have assumed that v1(t) ≈ Ac cos(2π fc t). The output voltage v2(t) can also be written as

v2(t) = (v1(t) − Vγ) gp(t),   (3.315)

where

gp(t) = 1 when Ac cos(2π fc t) ≥ Vγ, and gp(t) = 0 when Ac cos(2π fc t) < Vγ,   (3.316)
[Fig. 3.67 Plot of gp(t) and cos(2π fc t) for Vγ/Ac = 1/√2: gp(t) is a rectangular pulse train of width 2T0 per carrier period T.]
as illustrated in Fig. 3.67. Note that

Ac cos(2π fc T0) = Vγ ⇒ cos(2π fc T0) = 1/√2 ⇒ 2π fc T0 = π/4 ⇒ T0/T = 1/8,   (3.317)
where fc = 1/T. The Fourier series expansion for gp(t) is

gp(t) = a0 + 2 Σ_{n=1}^{∞} [an cos(2πnt/T) + bn sin(2πnt/T)],   (3.318)

where

a0 = (1/T) ∫_{−T/2}^{T/2} gp(t) dt = (1/T) ∫_{−T0}^{T0} dt = 1/4.   (3.319)

Since gp(t) is an even function, bn = 0. Now

an = (1/T) ∫_{−T/2}^{T/2} gp(t) cos(2πnt/T) dt = (1/T) ∫_{−T0}^{T0} cos(2πnt/T) dt = (1/(nπ)) sin(nπ/4).   (3.320)

Therefore

gp(t) = 1/4 + 2 Σ_{n=1}^{∞} (1/(nπ)) sin(nπ/4) cos(2πnt/T)
      = 1/4 + (√2/π) cos(2π fc t) + (1/π) cos(4π fc t) + · · ·   (3.321)
Substituting (3.321) in (3.315) and collecting terms corresponding to the AM signal centered at f c , we obtain:
v2(t) = (Ac/4) cos(2π fc t) + (√2/π) m(t) cos(2π fc t) − (√2/π) Vγ cos(2π fc t) + (Ac/(2π)) cos(2π fc t) + · · ·   (3.322)

Using a bandpass filter centered at fc with two-sided bandwidth equal to W, we obtain the desired AM signal as

s(t) = [Ac/4 − √2 Vγ/π + Ac/(2π) + (√2/π) m(t)] cos(2π fc t)
     = Ac [1/4 − 1/π + 1/(2π) + (√2/(π Ac)) m(t)] cos(2π fc t)
     = Ac [1/4 − 1/(2π) + (√2/(π Ac)) m(t)] cos(2π fc t)
     = Ac [0.0908451 + (0.4501582/Ac) m(t)] cos(2π fc t)
     = 0.0908451 Ac [1 + (4.9552301/Ac) m(t)] cos(2π fc t).   (3.323)

For no envelope distortion, the minimum value of Ac is 4.9552301.
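A numerical sketch of the gating-function coefficients (not from the text; the carrier period is normalized to T = 1) reproduces the constants appearing in (3.323):

import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100000, endpoint=False)
gp = (np.cos(2 * np.pi * t / T) >= 1.0 / np.sqrt(2.0)).astype(float)   # (3.316)

a0 = gp.mean()                                       # expect 1/4, see (3.319)
g1 = 2.0 * np.mean(gp * np.cos(2 * np.pi * t / T))   # coefficient of cos(2*pi*fc*t): sqrt(2)/pi
carrier = 0.25 - 1.0 / (2 * np.pi)                   # 0.0908451, the carrier term in (3.323)
print(a0, g1, carrier, g1 / carrier)                 # last value ~ 4.955, the minimum Ac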
50. A signal x(t) = A sin(2000πt) is full-wave rectified and passed through an ideal lowpass filter (LPF) having a bandwidth [−4.5, 4.5] kHz and a gain of 2 in the passband. Let the LPF output be denoted by m(t). Find y(t) = m(t) + j m̂(t) and sketch its Fourier transform. Compute the power of y(t).

• Solution: Note that the period of x(t) is 1 ms. The full-wave rectified sine wave can be represented by its complex Fourier series as

|x(t)| = Σ_{n=−∞}^{∞} cn e^{j 2πnt/T1},   (3.324)

where T1 = 0.5 ms and

cn = (A/T1) ∫_{t=0}^{T1} sin(ω0 t) e^{−j 2πnt/T1} dt,   (3.325)

where ω0 = 2π/T0, with T0 = 2T1 = 1 ms. The above equation can be rewritten as

cn = (A/(2j T1)) ∫_{t=0}^{T1} (e^{j ω0 t} − e^{−j ω0 t}) e^{−j 2πnt/T1} dt
[Fig. 3.68 Fourier transform of y(t): impulses of weight 2c0 at f = 0, 4c1 at f = 1/T1, and 4c2 at f = 2/T1.]
   = (A/(2j T1)) [ e^{j(ω0 − 2πn/T1)t} / (j(ω0 − 2πn/T1)) ]_{t=0}^{T1} − (A/(2j T1)) [ e^{−j(ω0 + 2πn/T1)t} / (−j(ω0 + 2πn/T1)) ]_{t=0}^{T1}
   = 2A / (π(1 − 4n²)).   (3.326)
The output of the LPF is (the gain of the LPF is 2)

m(t) = 2 Σ_{n=−2}^{2} cn e^{j 2πnt/T1} = 2c0 + 4c1 cos(ω1 t) + 4c2 cos(2ω1 t),   (3.327)
where ω1 = 2π/T1 = 2ω0. The output of the Hilbert transformer is

m̂(t) = 4c1 cos(ω1 t − π/2) + 4c2 cos(2ω1 t − π/2) = 4c1 sin(ω1 t) + 4c2 sin(2ω1 t).   (3.328)
Note the absence of c0 in (3.328), since the Hilbert transformer blocks the dc component. Therefore

y(t) = m(t) + j m̂(t) = 2c0 + 4c1 e^{j ω1 t} + 4c2 e^{j 2ω1 t}.   (3.329)
The Fourier transform of y(t) is given by: Y ( f ) = 2c0 δ( f ) + 4c1 δ( f − 1/T1 ) + 4c2 δ( f − 2/T1 ),
(3.330)
which is shown in Fig. 3.68. The power of y(t) is

P = 4c0² + 16c1² + 16c2².   (3.331)
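A short numerical check of (3.326)–(3.331) is given below (not from the text); the amplitude A = 1 is an assumed value.

import numpy as np

A, T1 = 1.0, 0.5e-3
c = lambda n: 2.0 * A / (np.pi * (1.0 - 4.0 * n ** 2))       # (3.326)
P = 4 * c(0) ** 2 + 16 * c(1) ** 2 + 16 * c(2) ** 2           # (3.331)

t = np.linspace(0.0, T1, 10000, endpoint=False)               # one period of omega_1
w1 = 2.0 * np.pi / T1
y = 2 * c(0) + 4 * c(1) * np.exp(1j * w1 * t) + 4 * c(2) * np.exp(2j * w1 * t)   # (3.329)
print(P, np.mean(np.abs(y) ** 2))                             # the two values agree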
51. Explain the principle of operation of the Costas loop for DSB-SC signals given by s(t) = Ac m(t) cos(2π fc t). Draw the block diagram. Clearly identify the phase discriminator. Explain phase ambiguity.

• Solution: See Fig. 3.69. We assume that m(t) is real-valued with energy E and Fourier transform M(f). Clearly

x1(t) = Ac m(t) cos(φ)
x2(t) = Ac m(t) sin(φ),   (3.332)

and

x3(t) = (1/2) Ac² m²(t) sin(2φ).   (3.333)

Note that

m²(t) ⇌ ∫_{α=−∞}^{∞} M(α) M(f − α) dα = G(f).   (3.334)

Therefore

G(0) = ∫_{α=−∞}^{∞} |M(α)|² dα = E,   (3.335)

since M(−α) = M*(α) (m(t) is real-valued), and we have used Rayleigh's energy theorem. We also have

lim_{Δf→0} H1(f) = δ(f).   (3.336)

Hence

X4(f) = X3(f) δ(f) = (1/2) Ac² sin(2φ) E δ(f) ⇒ x4(t) = (1/2) Ac² sin(2φ) E.   (3.337)

Thus, the (dc) control signal x4(t) determines the phase of the voltage-controlled oscillator (VCO). Clearly, x4(t) = 0 when
[Fig. 3.69 Block diagram of the Costas loop: s(t) is multiplied by 2 cos(2π fc t + φ) and 2 sin(2π fc t + φ) from the VCO; each product is lowpass filtered by H(f) (unity gain over [−W, W]) to give x1(t) and x2(t); the phase discriminator forms x3(t) = x1(t) x2(t), and the narrow lowpass filter H1(f) (gain 1/Δf over a bandwidth Δf) produces the control signal x4(t) that drives the VCO.]

[Fig. 3.70 Message signal m(t): equal to a e^{−t} for 0 ≤ t ≤ T/5, then rising linearly from −7 at t = T/5 to 0 at t = T.]
2φ = nπ ⇒ φ = nπ/2.   (3.338)

Therefore, the Costas loop exhibits a 90° phase ambiguity.
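A small simulation of the discriminator characteristic is sketched below (not from the text). It skips the individual lowpass filters and simply time-averages the product of the two arms, which mimics the narrow filter H1(f); the carrier, message, and sampling values are assumed.

import numpy as np

fs, fc, Ac = 1e6, 100e3, 1.0
t = np.arange(0.0, 0.01, 1.0 / fs)
m = np.cos(2 * np.pi * 1e3 * t)                      # arbitrary real message
s = Ac * m * np.cos(2 * np.pi * fc * t)              # DSB-SC input

for phi in np.deg2rad([0.0, 30.0, 45.0, 90.0]):
    x1 = s * 2 * np.cos(2 * np.pi * fc * t + phi)    # in-phase arm (before lowpass filtering)
    x2 = s * 2 * np.sin(2 * np.pi * fc * t + phi)    # quadrature arm
    x4 = np.mean(x1 * x2)                            # time average ~ (Ac^2 Pm / 2) sin(2 phi)
    print(round(np.rad2deg(phi)), x4)                # zero at 0 and 90 degrees, peak at 45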
52. For the message signal shown in Fig. 3.70, compute the power efficiency (the ratio of sideband power to total power) in terms of the amplitude sensitivity ka and T. Find a in terms of T. The message is periodic with period T, has zero mean, and is AM modulated
according to

s(t) = Ac [1 + ka m(t)] cos(2π fc t).   (3.339)

Assume that T ≫ 1/fc.

• Solution: Since the message is zero mean, we must have

∫_{t=0}^{T} m(t) dt = 0
⇒ ∫_{t=0}^{T/5} a e^{−t} dt = (1/2) × 7 × (T − T/5) = 14T/5
⇒ a = 14T / (5(1 − e^{−T/5})).   (3.340)
For a general AM signal given by s(t) = Ac [1 + ka m(t)] cos(2π f c t),
(3.341)
the power efficiency is: η=
ka2 Pm A2c ka2 Pm /2 = A2c /2 + A2c ka2 Pm /2 1 + ka2 Pm
(3.342)
where Pm denotes the message power. Here we only need to compute Pm. We have

Pm = (1/T) ∫_{t=0}^{T} m²(t) dt.   (3.343)

Let

P1 = (1/T) ∫_{t=0}^{T/5} a² e^{−2t} dt = (a²/(2T)) (1 − e^{−2T/5}).   (3.344)

Let

P2 = (1/T) ∫_{t=T/5}^{T} (mt + c)² dt
   = (1/T) ∫_{t=T/5}^{T} (m²t² + 2mct + c²) dt
   = (1/T) [ m²t³/3 + mct² + c²t ]_{t=T/5}^{T}
   = (m²T²/3)(1 − 1/125) + mTc (1 − 1/25) + c² (1 − 1/5)
   = (35/4)² [ 124/(3 × 125) − 24/25 + 4/5 ]
   = 13.0667,   (3.345)

where

m = 35/(4T), c = −mT.   (3.346)
Now Pm = P1 + P2 .
(3.347)
The power efficiency can be obtained by substituting (3.340), (3.344), (3.345), and (3.347) in (3.342).
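As a numerical illustration (not from the text), the steps (3.340)–(3.347) can be evaluated for the assumed values T = 1 s and ka = 0.1, with Pm also checked by direct integration of m²(t):

import numpy as np

T, ka = 1.0, 0.1                                        # assumed values
a = 14 * T / (5 * (1 - np.exp(-T / 5)))                 # (3.340)
P1 = a ** 2 / (2 * T) * (1 - np.exp(-2 * T / 5))        # (3.344)
P2 = 13.0667                                            # (3.345), independent of T
Pm = P1 + P2                                            # (3.347)
eta = ka ** 2 * Pm / (1 + ka ** 2 * Pm)                 # (3.342)

t1 = np.linspace(0.0, T / 5, 20000)                     # exponential segment
t2 = np.linspace(T / 5, T, 20000)                       # linear segment of slope m, intercept c
m_, c_ = 35 / (4 * T), -35 / 4
Pm_num = (np.trapz((a * np.exp(-t1)) ** 2, t1) + np.trapz((m_ * t2 + c_) ** 2, t2)) / T
print(Pm, Pm_num, eta)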
References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.
Simon Haykin. Communication Systems. Wiley Eastern, fourth edition, 2001.
Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition, 2002.
Chapter 4
Frequency Modulation
1. (Haykin 1983) In a frequency-modulated radar, the instantaneous frequency of the transmitted carrier ft(t) is varied as given in Fig. 4.1. The instantaneous frequency of the received echo fr(t) is also shown, where τ is the round-trip delay time. Assuming that f0 τ ≪ 1, determine the number of beat (difference frequency) cycles in one second, in terms of the frequency deviation Δf of the carrier frequency, the delay τ, and the repetition frequency f0. Assume that f0 is an integer.

• Solution: Let the transmitted signal be given by

s(t) = A1 cos(2π ∫_{τ=0}^{t} ft(τ) dτ).   (4.1)

Let the received signal be given by

r(t) = A2 cos(2π ∫_{τ=0}^{t} fr(τ) dτ).   (4.2)

The variation of the beat (difference) frequency ft(t) − fr(t) with time is plotted in Fig. 4.2b. Note that the number of beat cycles over the time duration 1/f0 is given by

N = ∫_{t=t1}^{t1+1/f0} |ft(t) − fr(t)| dt = |area of ABCD| + |area of DEFG|.   (4.3)
Note that

f1 = fc + Δf − f2.   (4.4)
[Fig. 4.1 Variation of the instantaneous frequency with time in an FM radar: ft(t) is a triangular sweep between fc − Δf and fc + Δf with period 1/f0, and fr(t) is the same sweep delayed by τ.]

[Fig. 4.2 Variation of the instantaneous difference frequency with time in an FM radar: (a) the transmitted and received sweeps, with the line XY and the instants t1, …, t5 marked; (b) the difference ft(t) − fr(t), alternating between +f1 and −f1, with the areas ABCD and DEFG indicated over one half period 1/(2f0).]
The equation of the line XY in Fig. 4.2a is y = mt + c,
(4.5)
where

m = 4 f0 Δf.   (4.6)

At t = t1 + τ we have

fc + Δf = 4 f0 Δf (t1 + τ) + c.   (4.7)

At t = t1 we have

f2 = 4 f0 Δf t1 + c = fc + Δf − 4 f0 Δf τ.   (4.8)

Therefore

f1 = fc + Δf − f2 = 4 f0 Δf τ.   (4.9)
Now

N = 4 [ (1/2)(τ/2) f1 + (1/2) f1 (1/(2f0) − τ) ] = τ f1 + (f1/f0)(1 − 2τ f0) ≈ τ f1 + f1/f0.   (4.10)

Since f0 is an integer, the number of cycles in one second is

N f0 = f1 (1 + τ f0) ≈ f1 = 4 f0 Δf τ,   (4.11)

which is proportional to τ and hence to twice the distance between the target and the radar. In other words,

τ = 2x/c,   (4.12)
where x is the distance between the target and the radar and c is the velocity of light. Thus, the FM radar can be used for ranging.

2. (Haykin 1983) The instantaneous frequency of a cosine wave is equal to fc + Δf for |t| < T/2 and fc for |t| > T/2. Determine the spectrum of this signal.

• Solution: The Fourier transform of this signal is given by
S(f) = ∫_{t=−∞}^{−T/2} cos(2π fc t) e^{−j 2π f t} dt + ∫_{t=−T/2}^{T/2} cos(2π(fc + Δf)t) e^{−j 2π f t} dt + ∫_{t=T/2}^{∞} cos(2π fc t) e^{−j 2π f t} dt
     = ∫_{t=−∞}^{∞} cos(2π fc t) e^{−j 2π f t} dt + ∫_{t=−T/2}^{T/2} [cos(2π(fc + Δf)t) − cos(2π fc t)] e^{−j 2π f t} dt
     = (1/2)[δ(f − fc) + δ(f + fc)] + (T/2)[sinc((f − fc − Δf)T) + sinc((f + fc + Δf)T)] − (T/2)[sinc((f − fc)T) + sinc((f + fc)T)].   (4.13)

3. (Haykin 1983) Single sideband modulation may be viewed as a hybrid form of amplitude modulation and frequency modulation. Evaluate the envelope and the instantaneous frequency of an SSB wave, in terms of the message signal and its Hilbert transform, for the two cases:

(a) When only the upper sideband is transmitted.
(b) When only the lower sideband is transmitted.

Assume that the message signal is m(t), the carrier amplitude is Ac/2, and the carrier frequency is fc.

• Solution: The SSB signal can be written as
s(t) = (Ac/2)[m(t) cos(2π fc t) ± m̂(t) sin(2π fc t)].   (4.14)
When the upper sideband is to be transmitted, the minus sign is used, and when the lower sideband is to be transmitted, the plus sign is used. The envelope of s(t) is given by

a(t) = (Ac/2) √(m²(t) + m̂²(t)),   (4.15)

which is independent of whether the upper or lower sideband is transmitted. Note that s(t) can be written as

s(t) = a(t) cos(2π fc t + θ(t)),   (4.16)
where θ(t) denotes the instantaneous phase, which is given by

θ(t) = ± tan⁻¹(m̂(t)/m(t)).   (4.17)
The plus sign in the above equation is used when the upper sideband is transmitted, and the minus sign is used when the lower sideband is transmitted. The total instantaneous phase is given by

θtot(t) = 2π fc t + θ(t).   (4.18)

The total instantaneous frequency is given by

ftot(t) = (1/(2π)) dθtot(t)/dt.   (4.19)
When the upper sideband is transmitted, the total instantaneous frequency is given by

ftot(t) = fc + (1/(2π)) [m̂'(t) m(t) − m'(t) m̂(t)] / [m²(t) + m̂²(t)],   (4.20)

where m'(t) denotes the derivative of m(t) and m̂'(t) denotes the derivative of m̂(t). When the lower sideband is transmitted, the total instantaneous frequency is given by

ftot(t) = fc − (1/(2π)) [m̂'(t) m(t) − m'(t) m̂(t)] / [m²(t) + m̂²(t)].   (4.21)
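These expressions are easy to verify numerically. The sketch below (not from the text) uses a single tone m(t) = cos(2π fm t), for which m̂(t) = sin(2π fm t), so the envelope should be constant at Ac/2 and the upper-sideband instantaneous frequency should equal fc + fm; all parameter values are assumed.

import numpy as np
from scipy.signal import hilbert

fs, fc, fm, Ac = 1e6, 100e3, 2e3, 2.0
t = np.arange(0.0, 0.01, 1.0 / fs)
m = np.cos(2 * np.pi * fm * t)
mh = np.imag(hilbert(m))                                 # Hilbert transform of the message

s = 0.5 * Ac * (m * np.cos(2 * np.pi * fc * t) - mh * np.sin(2 * np.pi * fc * t))  # USB SSB
z = hilbert(s)                                           # analytic signal of the SSB wave
a = np.abs(z)                                            # envelope, compare with (4.15)
f_inst = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)   # instantaneous frequency
print(a[100:-100].mean(), 0.5 * Ac)                      # ~ Ac/2
print(f_inst[100:-100].mean(), fc + fm)                  # ~ fc + fm, consistent with (4.20)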
4. (Haykin 1983) Consider a narrowband FM signal approximately defined by s(t) ≈ Ac cos(2π f c t) − β Ac sin(2π f m t) sin(2π f c t).
(4.22)
(a) Determine the envelope of s(t). What is the ratio of the maximum to the minimum value of this envelope?
(b) Determine the total average power of the narrowband FM signal. Determine the total average power in the sidebands.
(c) Assuming that s(t) in (4.22) can be written as

s(t) = a(t) cos(2π fc t + θ(t)),   (4.23)

expand θ(t) in the form of a Maclaurin series. Assume that β < 0.3. What is the power ratio of the third harmonic to the fundamental component?

• Solution: The envelope is given by
a(t) = Ac √(1 + β² sin²(2π fm t)).   (4.24)

Therefore, the maximum and the minimum values of the envelope are given by

Amax = Ac √(1 + β²), Amin = Ac ⇒ Amax/Amin = √(1 + β²).   (4.25)

The total average power of the narrowband FM signal is equal to

Ptot = Ac²/2 + 2β² Ac²/8.   (4.26)

The total average power in the sidebands is equal to

Pmes = 2β² Ac²/8.   (4.27)
Assuming that β ≪ 1, the narrowband FM signal can be written as

s(t) = a(t) cos(2π fc t + θ(t)),   (4.28)

where θ(t) denotes the instantaneous phase due to the message component and is given by

θ(t) ≈ tan⁻¹(β sin(2π fm t)).   (4.29)

Now, the Maclaurin series expansion of tan⁻¹(x) is (ignoring higher-order terms)

tan⁻¹(x) ≈ x − x³/3.   (4.30)
Thus

θ(t) = β sin(2π fm t) − (1/3) β³ sin³(2π fm t).   (4.31)

Using the fact that

sin³(θ) = [3 sin(θ) − sin(3θ)]/4,   (4.32)

(4.31) becomes
4 Frequency Modulation m(t)
249
Narrowband
Frequency
FM modulator
multiplier n1
BPF
Wideband FM signal fc = 104 MHz
Frequency f0 = 1 MHz
multiplier n2
Fig. 4.3 Armstrong-type FM modulator
θ(t) ≈ β sin(2π f m t) 1 − β 3 (3 sin(2π f m t) − sin(2π(3 f m )t)) 12 β3 β3 sin(2π f m t) + sin(2π(3 f m )t). = β− 4 12
(4.33)
Therefore, the power ratio of the third harmonic to the fundamental is R=
4 β3 × 12 (4β − β 3 )
2 .
(4.34)
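A numerical check of this ratio (not from the text) can be made by taking the FFT of the exact phase θ(t) = tan⁻¹(β sin(2π fm t)) and comparing the measured third-harmonic-to-fundamental power ratio with (4.34); β, fm, and the sampling rate are assumed values.

import numpy as np

beta, fm, fs = 0.3, 1e3, 1e6
t = np.arange(0.0, 0.01, 1.0 / fs)                     # exactly 10 periods of fm
theta = np.arctan(beta * np.sin(2 * np.pi * fm * t))   # exact phase, cf. (4.29)

S = np.abs(np.fft.rfft(theta)) / len(t)
k = int(round(fm * len(t) / fs))                       # FFT bin of the fundamental
R_meas = (S[3 * k] / S[k]) ** 2
R_formula = (4 * beta ** 3 / (12 * (4 * beta - beta ** 3))) ** 2   # (4.34)
print(R_meas, R_formula)              # close; the small gap comes from higher-order terms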
5. (Proakis and Salehi 2005) To generate wideband FM, we can first generate a narrowband FM signal and then use frequency multiplication to spread the signal bandwidth. This is illustrated in Fig. 4.3, which is called the Armstrong-type FM modulator. The narrowband FM signal has a frequency deviation of 1 kHz.

(a) If the frequency of the first oscillator is 1 MHz, determine n1 and n2 that are necessary to generate an FM signal at a carrier frequency of 104 MHz and a maximum frequency deviation of 75 kHz. The BPF allows only the difference frequency component.
(b) If the error in the carrier frequency fc for the wideband FM signal is to be within ±200 Hz, determine the maximum allowable error in the 1 MHz oscillator.
75 = 75. 1
(4.35)
Consequently, the carrier frequency at the output of the first frequency multiplier is 75 MHz. However, the required carrier frequency is 104 MHz. Hence, what we now require is a frequency translation. Therefore, we must have
250
4 Frequency Modulation
Fig. 4.4 Fourier transform of the frequency discriminator for negative frequencies
H(f )/j πaBT −fc − BT /2
f −fc
0
−fc + BT /2
−πaBT
s(t)
y(t)
Envelope
z(t)
H(f ) detector
1 × n 2 − 75 = 104 MHz ⇒ n 2 = 179.
(4.36)
Let us assume that the error in f 0 is x Hz. Hence, the error in the final output carrier frequency is n 2 x − n 1 x = ±200 Hz ⇒ x = ±1.92 Hz.
(4.37)
6. Figure 4.4 shows the Fourier transform of a frequency discriminator for negative frequencies. Here BT denotes the transmission bandwidth of the FM signal and f c denotes the carrier frequency. The block diagram of the system proposed for frequency demodulation is also shown, where s(t) = Ac cos 2π f c t + 2πk f
t τ =0
m(τ ) dτ ,
(4.38)
where m(t) denotes the message signal. (a) Sketch the frequency response of H ( f ) for positive frequencies given that h(t) is real-valued. (b) Find the output z(t). (c) Can the proposed system be used for frequency demodulation? Justify your answer. • Solution: Since h(t) is real-valued we must have H (− f ) = H ∗ ( f ),
(4.39)
4 Frequency Modulation
251 H(f )/j πaBT fc − BT /2
−fc − BT /2 −fc −fc + BT /2
0
f fc
fc + BT /2
−πaBT
Fig. 4.5 Frequency response of the proposed discriminator
which is illustrated in Fig. 4.5. Therefore H( f ) =
j 2πa( f − f c ) for f c − BT /2 < f < f c + BT /2 (4.40) j 2πa( f + f c ) for − f c − BT /2 < f < − f c + BT /2.
In order to compute y(t), we use the complex lowpass equivalent model. We assume that h(t) = 2h c (t) cos(2π f c t) − 2h s (t) sin(2π f c t)
j 2π f c t ˜ = 2h(t)e ,
(4.41)
˜ = h c (t) + j h s (t). h(t)
(4.42)
where
We know that the Fourier transform of the complex envelope of h(t) is given by H˜ ( f ) =
H ( f + f c ) for − BT /2 < f < BT /2 , 0 elsewhere
(4.43)
which reduces to H˜ ( f ) =
j 2πa f for − BT /2 < f < BT /2 . 0 elsewhere
(4.44)
Let us denote the Fourier transform of the complex envelope of the output by Y˜ ( f ). Then
252
4 Frequency Modulation
˜ f) Y˜ ( f ) = H˜ ( f ) S( ˜ f ), = j 2π f a S(
(4.45)
˜ f ) is the Fourier transform of the complex envelope of s(t). Note where S( that t m(τ ) dτ . (4.46) s˜ (t) = Ac exp j 2πk f τ =0
From (4.45) it is clear that y˜ (t) = a
d s˜ (t) dt
= j a Ac 2πk f m(t) exp j 2πk f = 2a Ac πk f m(t) exp j 2πk f
t τ =0 t
τ =0
m(τ ) dτ
m(τ ) dτ + j π/2 . (4.47)
Therefore y(t) = 2πa Ac k f m(t) cos 2π f c t + 2πk f
t
τ =0
m(τ ) dτ + π/2 . (4.48)
The output is z(t) = | y˜ (t)| = 2πa Ac k f |m(t)|.
(4.49)
Thus, the proposed system cannot be used for frequency demodulation. 7. Figure 4.6 shows the Fourier transform of a frequency discriminator for negative frequencies. Here BT denotes the transmission bandwidth of the FM signal and f c denotes the carrier frequency. The block diagram of the system proposed for frequency demodulation is also shown, where s(t) = Ac cos 2π f c t + 2πk f
t τ =0
m(τ ) dτ ,
(4.50)
where m(t) denotes the message signal. (a) Sketch the frequency response of H ( f ) for positive frequencies given that h(t) is real-valued. (b) Find the output z(t). (c) Can the proposed system be used for frequency demodulation? Justify your answer.
4 Frequency Modulation
253
Fig. 4.6 Fourier transform of the frequency discriminator for negative frequencies
H(f )/j −fc
−fc − BT /2
f −fc + BT /2
0
−2πaBT
s(t)
y(t)
Envelope
z(t)
H(f ) detector
• Solution: Since h(t) is real-valued we must have H (− f ) = H ∗ ( f ),
(4.51)
which is illustrated in Fig. 4.7. Therefore
−j 2πa( f − f c − BT /2) for f c − BT /2 < f < f c + BT /2 −j 2πa( f + f c + BT /2) for − f c − BT /2 < f < − f c + BT /2. (4.52) In order to compute y(t), we use the complex lowpass equivalent model. We assume that H( f ) =
h(t) = 2h c (t) cos(2π f c t) − 2h s (t) sin(2π f c t)
j 2π f c t ˜ = 2h(t)e ,
(4.53)
H(f )/j 2πaBT
−fc − BT /2
−fc + BT /2 −fc
f 0
fc − BT /2
−2πaBT
Fig. 4.7 Frequency response of the proposed discriminator
fc
fc + BT /2
254
4 Frequency Modulation
where ˜ = h c (t) + j h s (t). h(t)
(4.54)
We know that the Fourier transform of the complex envelope of h(t) is given by
H ( f + f c ) for − BT /2 < f < BT /2 , 0 elsewhere
(4.55)
−j 2πa( f − BT /2) for − BT /2 < f < BT /2 . 0 elsewhere
(4.56)
H˜ ( f ) = which reduces to H˜ ( f ) =
Let us denote the Fourier transform of the complex envelope of the output by Y˜ ( f ). Then ˜ f) Y˜ ( f ) = H˜ ( f ) S(
˜ f ), = −j 2πa( f − BT /2) S(
(4.57)
˜ f ) is the Fourier transform of the complex envelope of s(t). Note where S( that t m(τ ) dτ . (4.58) s˜ (t) = Ac exp j 2πk f τ =0
From (4.45) it is clear that y˜ (t) = −a
d s˜ (t) + j πa BT s˜ (t) dt
t = −j a Ac 2πk f m(t) exp j 2πk f m(τ ) dτ τ =0 t m(τ ) dτ +j a Ac π BT exp j 2πk f τ =0 t m(τ ) dτ . (4.59) = j πa Ac BT − 2k f m(t) exp j 2πk f τ =0
Therefore y(t) = πa Ac [BT − 2k f m(t)] cos 2π f c t + 2πk f The output is
t
τ =0
m(τ ) dτ + π/2 . (4.60)
4 Frequency Modulation s(t)
255
Delay
−
x(t)
line
Envelope
a(t)
detector +
Fig. 4.8 Delay-line method of demodulating FM signals
z(t) = | y˜ (t)| = πa Ac BT [1 − 2k f m(t)/BT ].
(4.61)
Therefore, the proposed system can be used for frequency demodulation provided 2k f m(t)/BT < 1.
(4.62)
8. (Haykin 1983) Consider the frequency demodulation scheme shown in Fig. 4.8 in which the incoming FM signal is passed through a delay line that produces a delay of T such that 2π f c T = π/2. The delay-line output is subtracted from the incoming FM signal and the resulting output is envelope detected. This demodulator finds wide application in demodulating microwave FM waves. Assuming that s(t) = Ac cos(2π f c t + β sin(2π f m t))
(4.63)
compute a(t) when β < 1 and the delay T is such that 2π f m T 1.
(4.64)
• Solution: The signal x(t) can be written as x(t) = s(t) − s(t − T ) = Ac cos(2π f c t + β sin(2π f m t)) −Ac cos(2π f c (t − T ) + β sin(2π f m (t − T ))) ≈ Ac cos(2π f c t + β sin(2π f m t)) −Ac cos(2π f c t + β sin(2π f m t) − 2π f m T β cos(2π f m t) − π/2) = Ac cos(2π f c t + β sin(2π f m t)) −Ac sin(2π f c t + β sin(2π f m t) − 2π f m T β cos(2π f m t)). (4.65) Let θ(t) = β sin(2π f m t) α(t) = θ(t) − 2π f m T β cos(2π f m t). Then x(t) can be written as
(4.66)
256
4 Frequency Modulation
x(t) = Ac cos(2π f c t + θ(t)) − Ac sin(2π f c t + α(t)) = Ac cos(2π f c t) [cos(θ(t)) − sin(α(t))] −Ac sin(2π f c t) [sin(θ(t)) + cos(α(t))] = a(t) cos(2π f c t + φ(t)),
(4.67)
where tan(φ(t)) =
sin(θ(t)) + cos(α(t)) cos(θ(t)) − sin(α(t))
(4.68)
and the envelope of x(t) is a(t) = Ac 2 + 2 sin(θ(t) − α(t)) √ ≈ Ac 2 [1 + (θ(t) − α(t))/2] √ = Ac 2 [1 + π f m T β cos(2π f m t)] ,
(4.69)
where we have made use of the fact that 2π f m T 1 β < 1 ⇒ θ(t) − α(t) 1 ⇒ sin(θ(t) − α(t)) ≈ θ(t) − α(t).
(4.70)
Note that a(t) > 0 for all t. The message signal cos(2π f m t) can be recovered by passing a(t) through a capacitor. 9. Consider the frequency demodulation scheme shown in Fig. 4.9 in which the incoming FM signal is passed through a delay line that produces a delay of T such that 2π f c T = π/2. The delay-line output is subtracted from the incoming FM signal and the resulting output is envelope detected. This demodulator finds wide application in demodulating microwave FM waves. Assuming that s(t) = Ac cos 2π f c t + 2πk f
t
τ =0
s(t)
Delay
−
x(t)
line
m(τ ) dτ
Envelope detector
+
Fig. 4.9 Delay-line method of demodulating FM signals
(4.71)
a(t)
4 Frequency Modulation
257
compute a(t) when the delay T is such that 2π f T 1
(4.72)
where f denotes the frequency deviation. Assume also that m(t) is constant over any interval T . • Solution: The signal x(t) can be written as x(t) = s(t) − s(t − T ),
(4.73)
where
s(t − T ) = Ac cos 2π f c (t − T ) + 2πk f = Ac cos 2π f c t − π/2 + 2πk f = Ac sin 2π f c t + 2πk f
t−T τ =0
t−T
τ =0 t−T τ =0
m(τ ) dτ m(τ ) dτ
m(τ ) dτ .
(4.74)
Now 2πk f
t−T τ =0
m(τ ) dτ = 2πk f = 2πk f
t τ =0 t τ =0
m(τ ) dτ − 2πk f
t τ =t−T
m(τ ) dτ
m(τ ) dτ − 2πk f T m(t)
= θ(t) − 2πk f T m(t) = α(t)
(say),
(4.75)
where 2πk f
t τ =0
m(τ ) dτ = θ(t).
(4.76)
Therefore x(t) in (4.73) becomes x(t) = Ac cos(2π f c t + θ(t)) − Ac sin(2π f c t + α(t)) = Ac cos(2π f c t) [cos(θ(t)) − sin(α(t))] −Ac sin(2π f c t) [sin(θ(t)) + cos(α(t))] = a(t) cos(2π f c t + φ(t)), where
(4.77)
258
4 Frequency Modulation
tan(φ(t)) =
sin(θ(t)) + cos(α(t)) cos(θ(t)) − sin(α(t))
(4.78)
and the envelope of x(t) is a(t) = Ac 2 + 2 sin(θ(t) − α(t)) √ ≈ Ac 2 [1 + (θ(t) − α(t))/2] √ = Ac 2 1 + πk f T m(t) ,
(4.79)
where we have made use of the fact that f = max |k f m(t)| 2π f T 1 (given) ⇒ |θ(t) − α(t)| 1 ⇒ sin(θ(t) − α(t)) ≈ θ(t) − α(t).
(4.80)
Note that a(t) > 0 for all t. The message signal m(t) can be recovered by passing a(t) through a capacitor. 10. A tone-modulated FM signal of the form: s(t) = Ac sin(2π f c t + β cos(2π f m t))
(4.81)
is passed through an ideal unity gain BPF with center frequency equal to the carrier frequency and bandwidth equal to 3 f m (±1.5 f m on either side of the carrier), yielding the signal z(t). (a) Derive the expression for z(t). (b) Assuming that z(t) is of the form z(t) = a(t) cos(2π f c t + θ(t))
(4.82)
compute a(t) and θ(t). Use the relations x=π 1 Jn (β) = e j (β sin(x)−nx) d x 2π x=−π Jn (β) = (−1)n J−n (β),
(4.83)
where Jn (β) denotes the nth order Bessel function of the first kind and argument β. • Solution: The input FM signal can be written as
4 Frequency Modulation
259
s(t) = Ac cos(2π f c t + β cos(2π f m t) − π/2) = s˜ (t)e j 2π fc t ,
(4.84)
s˜ (t) = Ac e j β cos(2π fm t)−j π/2 = −j Ac e j β cos(2π fm t) ,
(4.85)
where
which is periodic with a period 1/ f m . Hence, s˜ (t) can be expanded in the form of a complex Fourier series given by s˜ (t) =
∞
cn e j 2πn fm t ,
(4.86)
n=−∞
where cn = f m
1/(2 f m )
s˜ (t)e−j 2πn fm t dt
t=−1/(2 f m ) 1/(2 fm )
= −j Ac f m
e j β cos(2π fm t) e−j 2πn fm t dt.
(4.87)
t=−1/(2 f m )
Let 2π f m t = π/2 − x ⇒ 2π f m dt = −d x.
(4.88)
Thus j Ac cn = 2π
−π/2
e j(β sin(x)−n(π/2−x)) d x.
(4.89)
x=3π/2
Noting that the integrand is periodic with respect to x with a period 2π we can interchange the limits and integrate from −π to π to get −j Ac π e j(β sin(x)−n(π/2−x)) d x 2π x=−π Ac −j (n+1)π/2 π e e j(β sin(x)+nx) d x = 2π x=−π
cn =
= Ac J−n (β)e−j (n+1)π/2 . Therefore
(4.90)
260
4 Frequency Modulation ∞
J−n (β)e j (2πn fm t−(n+1)π/2) .
(4.91)
J−n (β) cos(2π f c t + 2πn f m t − (n + 1)π/2).
(4.92)
s˜ (t) = Ac
n=−∞
Thus s(t) = Ac
∞ n=−∞
The output of the BPF is z(t) = Ac J0 (β) cos(2π f c t − π/2) + Ac J−1 (β) cos(2π f c t + 2π f m t − π) +Ac J1 (β) cos(2π f c t − 2π f m t) = Ac J0 (β) sin(2π f c t) + Ac J1 (β) cos(2π f c t + 2π f m t) +Ac J1 (β) cos(2π f c t − 2π f m t) = Ac J0 (β) sin(2π f c t) + 2 Ac J1 (β) cos(2π f c t) cos(2π f m t).
(4.93)
Assuming that z(t) is of the form z(t) = a(t) cos(2π f c t + θ(t)) = a(t) cos(2π f c t) cos(θ(t)) − a(t) sin(2π f c t) sin(θ(t)) (4.94) we have a(t) cos(θ(t)) = 2 Ac J1 (β) cos(2π f m t) a(t) sin(θ(t)) = −Ac J0 (β).
(4.95)
Thus, the envelope of the output is a(t) = Ac (2J1 (β) cos(2π f m t))2 + J02 (β).
(4.96)
The phase is θ(t) = − tan−1
J0 (β) . 2J1 (β) cos(2π f m t)
(4.97)
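A quick numerical confirmation of (4.93) and (4.96) (not from the text): build z(t) from its three spectral lines using scipy.special.jv and compare its envelope, obtained from the analytic signal, with the closed form; the parameter values are assumed.

import numpy as np
from scipy.signal import hilbert
from scipy.special import jv

Ac, beta, fc, fm, fs = 1.0, 1.0, 50e3, 1e3, 1e6
t = np.arange(0.0, 0.01, 1.0 / fs)
J0, J1 = jv(0, beta), jv(1, beta)

z = (Ac * J0 * np.sin(2 * np.pi * fc * t)
     + 2 * Ac * J1 * np.cos(2 * np.pi * fc * t) * np.cos(2 * np.pi * fm * t))     # (4.93)
env = np.abs(hilbert(z))                                                          # measured envelope
env_formula = Ac * np.sqrt((2 * J1 * np.cos(2 * np.pi * fm * t)) ** 2 + J0 ** 2)  # (4.96)
print(np.max(np.abs(env[200:-200] - env_formula[200:-200])))                      # small residual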
11. (Haykin 1983) The bandwidth of an FM signal extends over both sides of the carrier frequency. However, in the single sideband version of FM, it is possible to transmit either the upper or the lower sideband. (a) Assuming that the FM signal is given by s(t) = Ac cos(2π f c t + φ(t))
(4.98)
4 Frequency Modulation
261
explain how we can transmit only the upper sideband. Express your result in terms of complex envelope of s(t) and Hilbert transforms. Assume that for all practical purposes, s(t) is bandlimited to f c − BT /2 < | f | < f c + BT /2 and f c BT . (b) Verify your answer for single-tone FM modulation when φ(t) = β sin(2π f m t).
(4.99)
Use the Fourier series representation for this FM signal given by s(t) = Ac
∞
Jn (β) cos(2π( f c + n f m )t).
(4.100)
n=−∞
• Solution: Recall that in SSB modulation with upper sideband transmitted, the signal is given by s(t) = m(t) cos(2π f c t) − m(t) ˆ sin(2π f c t) j 2π fc t = m(t) + j m(t) ˆ e ,
(4.101)
where m(t) ˆ is the Hilbert transform of the message m(t), which is typically bandlimited between [−W, W ] and f c W . Observe that m(t) in (4.101) can be complex. For the case of the FM signal given by s(t) = Ac cos(2π f c t + φ(t)) = Ac e j (2π fc t+φ(t))
(4.102)
the complex envelope is given by s˜ (t) = Ac e j φ(t) .
(4.103)
Clearly, s˜ (t) is bandlimited to −BT /2 < | f | < BT /2. Using the concept given in (4.101) the required equation for the FM signal with only the upper sideband transmitted is given by s1 (t) =
s˜ (t) + j s˜ (t) e j 2π fc t ,
(4.104)
where s˜ (t) is the Hilbert transform of s˜ (t). Now in the given example s(t) = Ac cos(2π f c t + β sin(2π f m t)) ∞ = Ac Jn (β) cos(2π( f c + n f m )t). n=−∞
(4.105)
262
4 Frequency Modulation
The complex envelope can be written as s˜ (t) = Ac = Ac
∞ n=−∞ ∞
Jn (β)e j 2πn fm t Jn (β) (cos(2πn f m t) + j sin(2πn f m t)) .
(4.106)
n=−∞
The Hilbert transform of s˜ (t) in (4.106) is s˜ (t) = Ac
−1 n=−∞ ∞
+Ac
Jn (β) (cos(2πn f m t + π/2) + j sin(2πn f m t + π/2)) Jn (β) (cos(2πn f m t − π/2) + j sin(2πn f m t − π/2))
n=1
= Ac
−1 n=−∞ ∞
+Ac
Jn (β) (− sin(2πn f m t) + j cos(2πn f m t)) Jn (β) (sin(2πn f m t) − j cos(2πn f m t))
(4.107)
n=1
since the Hilbert transformer removes the dc component (n = 0), introduces a phase shift of π/2 for negative frequencies (n < 0) and a phase shift of −π/2 for positive frequencies (n > 0). Hence s˜ (t) + j s˜ (t) = Ac J0 (β) + 2 Ac
∞
Jn (β)e j 2πn fm t
(4.108)
n=1
and s1 (t) in (4.104) becomes s1 (t) = Ac J0 (β) cos(2π f c t) + 2 Ac
∞
Jn (β) cos(2π( f c + n f m )t). (4.109)
n=1
Thus, we find that only the upper sideband is transmitted, which verifies our result. 12. (Haykin 1983) Consider the message signal as shown in Fig. 4.10, which is used to frequency modulate a carrier. Assume a frequency sensitivity of k f Hz/V and that the FM signal is given by s(t) = Ac cos(2π f c t + φ(t)).
(4.110)
4 Frequency Modulation
263 m(t) (volts)
Fig. 4.10 A periodic square wave corresponding to the message signal
1 t −T0 /4
T0 /4
0 −1 T0 /2
T0 /2
(a) Sketch the waveform corresponding to the total instantaneous frequency f (t) of the FM signal for −T0 /4 ≤ t ≤ 3T0 /4. Label all the important points on the axes. (b) Sketch φ(t) for −T0 /4 ≤ t ≤ 3T0 /4. Label all the important points on the axes. Assume that φ(t) has zero mean. (c) Write down the expression for the complex envelope of s(t) in terms of φ(t). (d) The FM signal s(t) can be written as s(t) =
∞
cn cos(2π f c t + 2nπt/T0 ),
(4.111)
n=−∞
where
β−n β+n + bn sinc , cn = an sinc 2 2
(4.112)
where β = k f T0 . Compute an and bn . Assume the limits of integration (for computing cn ) to be from −T0 /4 to 3T0 /4. • Solution: The total instantaneous frequency is depicted in Fig. 4.11b. Note that φ(t) = 2πk f m(τ ) dτ . (4.113) The variation of φ(t) is shown in Fig. 4.11c where − A + 2πk f
T0 =A 2
⇒ A = πk f
The complex envelope of s(t) is
T0 . 2
(4.114)
264
4 Frequency Modulation m(t)
Fig. 4.11 Plot of the total instantaneous frequency and φ(t)
(a)
1 t −T0 /4
T0 /4
0 −1 T0 /2
T0 /2
f (t) (b)
fc + k f t
fc −T0 /4
T0 /4
0
fc − k f T0 /2
T0 /2
φ(t)
(c) A
t −T0 /4
0
T0 /4
3T0 /4
−A
T0 /2
T0 /2
s˜ (t) = Ac e j φ(t) .
(4.115)
In order to compute the Fourier coefficients cn we note that φ(t) = Therefore
for − T0 /4 ≤ t ≤ T0 /4 2πk f t . 2πk f (−t + T0 /2) for T0 /4 ≤ t ≤ 3T0 /4
(4.116)
4 Frequency Modulation
265
1 cn = T0 Ac T0
=
3T0 /4
s˜ (t)e−j 2πnt/T0 dt
t=−T0 /4 T0 /4
e j 2πt (k f −n/T0 )
t=−T0 /4
Ac + e j πβ T0
3T0 /4
t=T0 /4
e−j 2πt (k f +n/T0 ) .
(4.117)
The first integral in (4.117) evaluates to I=
Ac sinc 2
β−n 2
.
(4.118)
The second integral in (4.117) evaluates to II = Ac
e−j βπ/2 e−j 3nπ/2 − ej βπ/2 e−j nπ/2 . −j 2π(β + n)
(4.119)
Now −
3π nπ 3nπ = − + π n − nπ = − − nπ. 2 2 2
(4.120)
nπ nπ π = − + π n − nπ = − nπ. 2 2 2
(4.121)
Similarly −
Substituting (4.120) and (4.121) in (4.119), we get Ac −j nπ β+n e sinc 2 2 Ac β+n . = (−1)n sinc 2 2
II =
(4.122)
Therefore cn =
Ac sinc 2
β−n 2
+
Ac (−1)n sinc 2
β+n 2
(4.123)
, which implies that Ac 2 Ac bn = (−1)n . 2
an =
(4.124)
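The closed form (4.123) can be verified by integrating the complex envelope numerically. The sketch below (not from the text) uses assumed values Ac = 1, kf = 2.5, and T0 = 1, so that β = kf T0 = 2.5; np.sinc uses the same sinc(x) = sin(πx)/(πx) convention as the text.

import numpy as np

Ac, kf, T0 = 1.0, 2.5, 1.0
beta = kf * T0
t = np.linspace(-T0 / 4, 3 * T0 / 4, 200000, endpoint=False)
phi = np.where(t <= T0 / 4, 2 * np.pi * kf * t, 2 * np.pi * kf * (T0 / 2 - t))   # (4.116)

for n in range(-3, 4):
    cn_num = np.mean(Ac * np.exp(1j * phi) * np.exp(-1j * 2 * np.pi * n * t / T0))
    cn_formula = 0.5 * Ac * (np.sinc((beta - n) / 2) + (-1) ** n * np.sinc((beta + n) / 2))
    print(n, complex(np.round(cn_num, 4)), round(cn_formula, 4))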
266
4 Frequency Modulation
τ fc + Δf ft (t)
fr (t)
fc
fc − Δf 1/f0 t
Fig. 4.12 Variation of the instantaneous frequency with time in an FM radar
13. In a frequency-modulated radar, the instantaneous frequency of the transmitted carrier f t (t) is varied as given in Fig. 4.12. The instantaneous frequency of the received echo fr (t) is also shown, where τ is the round-trip delay time. Assuming that f 0 τ 1 so that cos(2π f 0 τ ) ≈ 1 and sin(2π f 0 τ ) ≈ 2π f 0 τ , determine the number of beat (difference frequency) cycles in one second, in terms of the frequency deviation ( f ) of the carrier frequency, the delay τ , and the repetition frequency f 0 . Assume that f 0 is an integer. Note that in Fig. 4.12 f t (t) = f c + f sin(2π f 0 t).
(4.125)
• Solution: Let the transmitted signal be given by s(t) = A1 cos 2π
t τ =0
f t (τ ) dτ .
(4.126)
Let the received signal be given by r (t) = A2 cos 2π
t
τ =0
fr (τ ) dτ .
(4.127)
The beat (difference) frequency is f t (t) − fr (t). The number of beat cycles over the time duration 1/ f 0 is given by
4 Frequency Modulation
267
N=
1/ f 0
| f t (t) − fr (t)| dt.
(4.128)
t=0
Note that fr (t) = f c + f sin(2π f 0 (t − τ )) = f c + f [sin(2π f 0 t) cos(2π f 0 τ ) − cos(2π f 0 t) sin(2π f 0 τ )] ≈ f c + f [sin(2π f 0 t) − 2π f 0 τ cos(2π f 0 t)] . (4.129) Therefore f t (t) − fr (t) = 2π f 0 τ f cos(2π f 0 t).
(4.130)
Hence (4.128) becomes N = 2π f 0 τ f × 4
1/(4 f 0 )
cos(2π f 0 t) dt t=0
sin(2π f 0 t) 1/(4 f0 ) = 8π f 0 τ f 2π f 0 t=0 = 4τ f.
(4.131)
Since f 0 is an integer, the number of beat cycles per second is N f 0 = 4τ f f 0 .
(4.132)
14. (Haykin 1983) The sinusoidal modulating wave m(t) = Am cos(2π f m t)
(4.133)
is applied to a phase modulator with phase sensitivity k p . The modulated signal s(t) is of the form Ac cos(2π f c t + φ(t)). (a) Determine the spectrum of s(t), assuming that the maximum phase deviation β p = k p Am does not exceed 0.3 rad. (b) Construct a phasor diagram for s(t). • Solution: The PM signal is given by
268
4 Frequency Modulation
Fig. 4.13 Phasor diagram for narrowband phase modulation
fc
Ac βp cos(2πfm t)
cos(2πfc t) Ac sin(2πfc t)
s(t) = Ac cos(2π f c t + k p Am cos(2π f m t)) = Ac cos(2π f c t) cos(k p Am cos(2π f m t)) −Ac sin(2π f c t) sin(k p Am cos(2π f m t)) ≈ Ac cos(2π f c t) − Ac β p sin(2π f c t) cos(2π f m t) Ac β p = Ac cos(2π f c t) − [sin(2π( f c − f m )t) + sin(2π( f c + f m )t)] . 2 (4.134) Hence, the spectrum is given by S( f ) =
Ac [δ( f − f c ) + δ( f + f c )] 2 Ac β p − [δ( f − f c + f m ) − δ( f + f c − f m )] 4j Ac β p − [δ( f − f c − f m ) − δ( f + f c + f m )] . 4j
(4.135)
The phasor diagram is shown in Fig. 4.13. 15. Consider a tone-modulated PM signal of the form s(t) = Ac cos(2π f c t + β p cos(2π f m t)),
(4.136)
where β p = k p Am . This modulated signal is applied to an ideal BPF with unity gain, midband frequency f c , and passband extending from f c − 1.5 f m to f c + 1.5 f m . Determine the envelope, phase, and instantaneous frequency of the modulated signal at the filter output as functions of time. • Solution: The PM signal is given by s(t) = Ac cos(2π f c t + k p Am cos(2π f m t)). The complex envelope is
(4.137)
4 Frequency Modulation
269
s˜ (t) = Ac exp( j k p Am cos(2π f m t)),
(4.138)
which is periodic with period 1/ f m and hence can be represented in the form of a Fourier series as follows: ∞
s˜ (t) =
cn exp( j 2πn f m t).
(4.139)
n=−∞
The coefficients cn are given by cn = Ac f m
1/(2 f m )
t=−1/(2 f m )
exp( j k p Am cos(2π f m t) − j 2πn f m t) dt. (4.140)
Let 2π f m t = π/2 − x.
(4.141)
Then cn =
−Ac 2π
−π/2
exp( j k p Am sin(x) − j (nπ/2 − nx)) d x. (4.142)
x=3π/2
Since the above integrand is periodic with a period of 2π, we can write Ac exp(−j nπ/2) cn = 2π
π
exp( j k p Am sin(x) + j nx)) d x. (4.143)
x=−π
We know that 1 Jn (β) = 2π
π
exp( j β sin(x) − j nx)) d x,
(4.144)
x=−π
where Jn (β) is the nth-order Bessel function of the first kind and argument β. Thus cn = Ac exp(−j nπ/2)J−n (β p ). The transmitted PM signal can be written as
(4.145)
270
4 Frequency Modulation
s(t) = {˜s (t) exp( j 2π f c t)} ∞ exp(−j nπ/2)J−n (β p ) exp(j 2πn f m t) exp( j 2π f c t) = Ac n=−∞
= Ac
∞
J−n (β p ) cos(2π( f c + n f m )t − nπ/2).
(4.146)
n=−∞
The BPF allows only the components at f c , f c − f m and f c + f m . Thus, the BPF output is x(t) = Ac J0 (β p ) cos(2π f c t) +Ac J−1 (β p ) cos(2π( f c + f m )t − π/2) +Ac J1 (β p ) cos(2π( f c − f m )t + π/2) = Ac J0 (β p ) cos(2π f c t) +Ac J−1 (β p ) sin(2π( f c + f m )t) −Ac J1 (β p ) sin(2π( f c − f m )t).
(4.147)
J−1 (β p ) = −J1 (β p ).
(4.148)
However
Hence x(t) = Ac J0 (β p ) cos(2π f c t) −Ac J1 (β p ) sin(2π( f c + f m )t) −Ac J1 (β p ) sin(2π( f c − f m )t) = Ac J0 (β p ) cos(2π f c t) −2 Ac J1 (β p ) sin(2π f c t) cos(2π f m t) = a(t) cos(2π f c t + θi (t)).
(4.149)
The envelope of x(t) is a(t) = Ac J02 (β p ) + 4J12 (β p ) cos2 (2π f m t).
(4.150)
The total instantaneous phase of x(t) is φi (t) = 2π f c t + tan
−1
2J1 (β p ) cos(2π f m t) . J0 (β p )
The total instantaneous frequency of x(t) is
(4.151)
4 Frequency Modulation
271
2J1 (β p ) 1 d tan−1 cos(2π f m t) 2π dt J0 (β p ) 2J1 (β p )/J0 (β p ) = fc − f m sin(2π f m t) 1 + (2J1 (β p )/J0 (β p ))2 cos2 (2π f m t) 2J1 (β p )J0 (β p ) f m sin(2π f m t). = fc − 2 J0 (β p ) + (2J1 (β p ))2 cos2 (2π f m t) (4.152)
f i (t) = f c +
16. (Haykin 1983) A carrier wave is frequency modulated using a sinusoidal signal of frequency f m and amplitude Am . (a) Determine the values of the modulation index β for which the carrier component of the FM signal is reduced to zero. (b) In a certain experiment conducted with f m = 1 kHz and increasing Am starting from zero volts, it is found that the carrier component of the FM signal is reduced to zero for the first time when Am = 2 V. What is the frequency sensitivity of the modulator? What is the value of Am for which the carrier component is reduced to zero for the second time? • Solution: From the table of Bessel functions, we see that J0 (β) is equal to zero for β = 2.44 β = 5.52 β = 8.65 β = 11.8.
(4.153)
k f Am fm β fm ⇒ kf = Am = 1.22 kHz/V.
(4.154)
For tone modulation β=
The value of Am for which the carrier component goes to zero for the second time is equal to β fm kf 5.52 ⇒ Am = 1.22 = 4.52 V. Am =
(4.155)
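The same numbers follow from the zeros of J0, which scipy provides directly. A short sketch (not from the text) is given below; note that scipy returns the more precise first zero 2.405, whereas the rounded table value 2.44 used above gives kf = 1.22 kHz/V and Am = 4.52 V.

import numpy as np
from scipy.special import jn_zeros

fm, Am_first = 1e3, 2.0                  # experiment: first carrier null at Am = 2 V
zeros = jn_zeros(0, 2)                   # first two zeros of J0: about 2.405 and 5.520
kf = zeros[0] * fm / Am_first            # frequency sensitivity (Hz/V)
Am_second = zeros[1] * fm / kf           # amplitude for the second carrier null
print(kf, Am_second)                     # ~1.20 kHz/V and ~4.59 V with the precise zeros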
272
4 Frequency Modulation
17. An FM signal with modulation index β = 2 is transmitted through an ideal bandpass filter with midband frequency f c and bandwidth 7 f m , where f c is the carrier frequency and f m is the frequency of the sinusoidal modulating wave. Determine the spectrum of the filter output. • Solution: We know that the spectrum of the tone-modulated FM signal is given by ∞ Ac S( f ) = Jn (β) [δ( f − f c − n f m ) + δ( f + f c + n f m )] .(4.156) 2 n=−∞
The spectrum at the output of the BPF is given by X( f ) =
3 Ac Jn (2)δ( f − f c − n f m ) + δ( f + f c + n f m ), (4.157) 2 n=−3
which is illustrated in Fig. 4.14. 18. (Haykin 1983) Consider a wideband PM signal produced by a sinusoidal modulating wave m(t) = Am cos(2π f m t), using a modulator with phase sensitivity k p. (a) Show that if the phase deviation of the PM signal is large compared to one radian, the bandwidth of the PM signal varies linearly with the modulation frequency f m . (b) Compare this bandwidth of the wideband PM signal with that of a wideband FM signal produced by m(t) and frequency sensitivity k f .
0.6
0.35
0.35 0.3 0.08
fc − 3fm
fc − fm fc − 2fm
fc
fc + fm
fc + 3fm
−0.08 fc + 2fm
−0.6
Fig. 4.14 Spectrum of the signal at the BPF output for Ac = 2
4 Frequency Modulation
273
• Solution: The PM signal is given by s(t) = Ac cos(2π f c t + k p Am cos(2π f m t)).
(4.158)
The phase deviation is k p Am . The instantaneous frequency due to the message is 1 d k p Am cos(2π f m t) = −k p Am f m sin(2π f m t). 2π dt
(4.159)
The frequency deviation is f 1 = k p Am f m ,
(4.160)
which is directly proportional to f m . Now, the PM signal in (4.158) can be considered to be an FM signal with message given by m 1 (t) = −(k p /k f )Am f m sin(2π f m t).
(4.161)
Therefore, the bandwidth of the PM signal in (4.158) is BT, PM = 2( f 1 + f m ) = 2 f m (k p Am + 1) ≈ 2 f m k p Am ,
(4.162)
which varies linearly with f m . However, in the case of FM, the frequency deviation is f 2 = k f Am ,
(4.163)
which is independent of f m . The bandwidth of the FM signal is BT, FM = 2( f 2 + f m ) = 2(k f Am + f m ).
(4.164)
19. Figure 4.15 shows the block diagram of a system. Here s(t) and h(t) are FM signals, given by
Fig. 4.15 Block diagram of a system
g(t)
Output h(t)
s(t)
274
4 Frequency Modulation
s(t) = cos (2π f c t − π f m t) h(t) = cos (2π f c t + π f m t) .
(4.165)
Assume that g(t) is a real-valued lowpass signal with bandwidth [−W, W ], f m < W , f c W and h(t) is of the form h(t) = 2h c (t) cos(2π f c t) − 2h s (t) sin(2π f c t).
(4.166)
(a) Using the method of complex envelopes, determine the output of h(t). (b) Determine the envelope of the output of h(t). • Solution: Let the signal at the output of the multiplier be denoted by v1 (t). Then v1 (t) = g(t) cos(2π f c t − π f m t) = g(t) [cos(2π f c t) cos(π f m t) + sin(2π f c t) sin(π f m t)] . (4.167) Comparing the above equation with that of the canonical representation of a bandpass signal, we conclude that the complex envelope of v1 (t) is given by v˜1 (t) = g(t)e−j π fm t .
(4.168)
Note that according to the representation in (4.167), v1 (t) is a bandpass signal, with carrier frequency f c . Similarly, the complex envelope of the filter is given by ˜ = 1 e j π fm t , h(t) 2
(4.169)
where we have assumed that
j 2π f c t ˜ . h(t) = 2h(t)e
(4.170)
Thus, the complex envelope of the filter output can be written as ˜ v˜1 (t), y˜ (t) = h(t) where denotes convolution. Therefore
(4.171)
4 Frequency Modulation
275
y˜ (t) =
∞
τ =−∞ ∞
˜ − τ ) dτ v˜1 (τ )h(t
1 g(τ )e−j π fm τ e j π fm (t−τ ) dτ 2 τ =−∞ ∞ 1 g(τ )e−j 2π fm τ dτ = e j π fm t 2 τ =−∞ 1 = e j π fm t G( f m ). 2 =
(4.172)
Therefore, the output of h(t) is y(t) = y˜ (t)e j 2π fc t 1 = G( f m ) cos(2π( f c − f m /2)t). 2
(4.173)
Thus, the envelope of the output is given by | y˜ (t)| =
1 |G( f m )|. 2
(4.174)
Hence, the envelope of the output signal is proportional to the magnitude spectrum of g(t) evaluated at f = fm.

20. (Haykin 1983) Figure 4.16 shows the block diagram of the transmitter and receiver for stereophonic FM. The input signals l(t) and r(t) represent left-hand and right-hand audio signals. The difference signal x1(t) = l(t) − r(t) is DSB-SC modulated as shown in the figure, with fc = 25 kHz. The DSB-SC wave, x2(t) = l(t) + r(t), and the pilot carrier are summed to produce the composite signal m(t). The composite signal m(t) is used to frequency modulate a carrier and the resulting FM signal is transmitted. Assume that f2 = 20 kHz and f1 = 200 Hz.
(4.175)
276
4 Frequency Modulation
l(t)
x1 (t)
+
Transmitter −
cos(4πfc t)
r(t)
Freq
Stereophonic FM signal s(t)
Pilot carrier source
doubler +
m(t)
+
FM
cos(2πfc t)
x2 (t) L(f ) 1
modulator
l(t)
R(f ) 1
r(t)
f −f2
s(t)
−f1
0
f
f2
f1
−f2
m(t)
FM
−f1
0
f2
f1
Receiver
x2 (t) Filter1
demodulator
cos(2πfc t)
y(t) Filter2
Filter3
Device
2 cos(4πfc t) x1 (t) Filter4
Fig. 4.16 Transmitter and receiver for stereophonic FM M (f ) 2 1 −f6 −f5
0.5
−f4 −f3 −2fc
−fc −f2 −f1
0
f1
f3 f2
fc
−0.5 f3 = 2fc − f2
f5 = 2fc + f1
f4 = 2fc − f1
f6 = 2fc + f2
Fig. 4.17 Spectrum of m(t)
f4
f5 2fc
f6 f
4 Frequency Modulation
277
Fig. 4.18 Variation of phase versus time for an FM signal
θ(t) (radians) 2π π t × 10−6 sec 0
2
3
5
6
BT = 2( f + W ),
(4.176)
where W = 2 f c + f 2 = 70 kHz is the maximum frequency content in m(t) and f = 90 kHz is the frequency deviation. Therefore BT = 320 kHz. At the receiver side y(t) = m(t). The device is a squarer. Filter1 is an LPF with unity gain and bandwidth [− f 2 , f 2 ]. Filter2 is a BPF with unity gain, center frequency f c , and bandwidth less than 5 kHz on either side of f c . Filter3 is a BPF with a gain of 4 with center frequency 2 f c and any suitable bandwidth, such that the dc component is eliminated. Filter4 is an LPF with unity gain and bandwidth [− f 2 , f 2 ]. 21. Consider an FM signal given by s(t) = A sin(θ(t)), where θ(t) is a periodic waveform as shown in Fig. 4.18. The message signal has zero mean. Compute the carrier frequency. • Solution: We know that the instantaneous frequency is given by 1 dθ 2π dt = f c + k f m(t),
f (t) =
(4.177)
which is plotted in Fig. 4.19b. Observe that f (t) is also periodic. Since m(t) has zero mean, the mean value of f (t) is f c , where 25 × 2 + 50 × 1 × 104 3 100 × 104 Hz. = 3
fc =
(4.178)
22. A 10 kHz periodic square wave g p (t) is applied to a first-order R L lowpass filter as shown in Fig. 4.20. It is given that R/(2πL) = 10 kHz. The output signal m(t) is FM modulated with frequency deviation equal to 75 kHz. Determine the bandwidth of the FM signal s(t), using Carson’s rule. Ignore those
278
4 Frequency Modulation (a)
θ(t) (radians) 2π π t × 10−6 sec 0
(b)
2
3
5
6
f (t) (Hz) 50 × 104
25 × 104 t × 10−6 sec 0
3
2
6
5
Fig. 4.19 Variation of phase versus time for an FM signal gp (t)
L
gp (t)
1/T = 10 kHz B
t
m(t)
FM modulator
s(t)
R
0 −B T /2
T /2
Fig. 4.20 Periodic signal applied to an R L-lowpass filter
harmonic terms in m(t) whose (absolute value of the) amplitude is less than 1% of the fundamental. • Solution: We know that ∞ 4B (−1)m g p (t) = cos [2π(2m + 1) f 0 t] , π m=0 (2m + 1)
(4.179)
where f 0 = 1/T = 10 kHz. The transfer function of the filter is R R + j ωL 1 = , 1 + j ω/ω0
H (ω) =
(4.180)
4 Frequency Modulation
279
where ω0 = R/L = 2π f 0 (given) is the −3 dB frequency in rad/s. Therefore m(t) =
∞ (−1)m 4B cos [2π(2m + 1) f 0 t + θm ] , Am π m=0 (2m + 1)
(4.181)
where Am = |H ((2m + 1)ω0 )| =
1 1 + (2m + 1)2
(4.182)
and θm = − tan−1 (2m + 1).
(4.183)
We need to ignore those harmonics that satisfy 1 1 4B 1 4B < 0.01 √ 2 π (2m + 1) 1 + (2m + 1) π 2 1 1 ⇒ < 0.00707 for m > 0. (4.184) (2m + 1) 1 + (2m + 1)2 We find that m ≥ 6 satisfies the inequality in (4.184). Therefore, the (onesided) bandwidth of m(t) is (2 × 5 + 1) f 0 = 11 f 0 = 110 kHz. Hence, the bandwidth of s(t) using Carson’s rule is B = 2(75 + 110) = 370 kHz.
(4.185)
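The harmonic count in (4.184) and the bandwidth (4.185) can be reproduced with a few lines of Python (a sketch, not from the text):

import numpy as np

f0, dev = 10e3, 75e3                     # fundamental of gp(t) and FM frequency deviation
fund = 1.0 / np.sqrt(2.0)                # relative amplitude of the m = 0 (fundamental) term
kept = [m for m in range(50)
        if 1.0 / ((2 * m + 1) * np.sqrt(1.0 + (2 * m + 1) ** 2)) >= 0.01 * fund]
W = (2 * max(kept) + 1) * f0             # highest retained harmonic of m(t)
print(kept, W, 2 * (dev + W))            # m = 0..5, W = 110 kHz, Carson bandwidth 370 kHz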
23. A message signal m(t) = Ac cos(2π f m t) with f m = 5 kHz is applied to a frequency modulator. The resulting FM signal has a frequency deviation of 10 kHz. This FM signal is applied to two frequency multipliers in cascade. The first frequency multiplier has a multiplication factor of 2 and the second frequency multiplier has a multiplication factor of 3. Determine the frequency deviation and the modulation index of the FM signal obtained at the second multiplier output. What is the frequency separation between two consecutive spectral components in the spectrum of the output FM signal? • Solution: Let us denote the instantaneous frequency of the input FM signal by f i (t) = f c + f cos(2π f m t),
(4.186)
where f = 10 kHz. The overall multiplication factor of the system is 6, hence the instantaneous frequency of the output FM signal is
280
4 Frequency Modulation FM signal
Frequency multiplier n2
Message
Narrowband
Frequency
FM
multiplier
modulator
n1
f1 = 0.1 MHz
Mixer
f1 = 9.5 MHz
Oscillator
Oscillator
Fig. 4.21 Wideband frequency modulator using the indirect method
f o (t) = 6 f c + 6 f cos(2π f m t).
(4.187)
Thus, the frequency deviation of the output FM signal is 60 kHz. The modulation index is 60/5 = 12. The frequency separation in the spectrum of the output FM signal is unchanged at 5 kHz. 24. (Haykin 1983) Figure 4.21 shows the block diagram of a wideband frequency modulator using the indirect method. Note that a mixer is essentially a multiplier, followed by a bandpass filter which allows only the difference frequency component. This transmitter is used to transmit audio signals in the range 100 Hz to 15 kHz. The narrowband frequency modulator is supplied with a carrier of frequency f 1 = 0.1 MHz. The second oscillator supplies a frequency of 9.5 MHz. The system specifications are as follows: carrier frequency at the transmitter output f c = 100 MHz with frequency deviation, f = 75 kHz. Maximum modulation index at the output of the narrowband frequency modulator is 0.2 rad. (a) Calculate the frequency multiplication ratios n 1 and n 2 . (b) Specify the value of the carrier frequency at the output of the first frequency multiplier. • Solution: It is given that the frequency deviation of the output FM wave is equal to 75 kHz. Note that for a tone-modulated FM signal, the frequency deviation is related to the frequency of the modulating wave as f = β fm .
(4.188)
4 Frequency Modulation
281
The frequency deviation at the output of the narrowband FM modulator is fixed. Thus, the lowest frequency component in the message (100 Hz) produces the maximum modulation index, β = 0.2. Then the frequency deviation at the output of the narrowband FM modulator is Δf_inp = 0.2 × 0.1 kHz = 0.02 kHz.
(4.189)
However, the required frequency deviation of the output FM signal is 75 kHz. Hence, we need to have 75 0.02 = 3750.
n1n2 =
(4.190)
The carrier frequency at the output of the first multiplier is 0.1n 1 MHz. The carrier frequency at the output of the second multiplier is n 2 (9.5 − 0.1n 1 ) = 100 MHz.
(4.191)
Solving for n 1 and n 2 we get n 1 = 75 n 2 = 50.
(4.192)
The carrier frequency at the output of the first frequency multiplier is n 1 f 1 = 7.5 MHz.
(4.193)
25. (Haykin 1983) The equivalent circuit of the frequency-determining network of a VCO is shown in Fig. 4.22. Frequency modulation is produced by applying the modulating signal Vm sin(2π f m t) plus a bias Vb to a varactor diode connected across the parallel combination of a 200-μH inductor (L) and a 100-pF capacitor
Fig. 4.22 Equivalent circuit of the frequency-determining network of a VCO L
C
Ci (t)
282
4 Frequency Modulation
(C). The capacitance in the varactor diode is related to the voltage V (t) applied across its terminals by Ci (t) = 100/ V (t) pF.
(4.194)
The instantaneous frequency of oscillation is given by f i (t) =
1 1 . √ 2π L(C + Ci (t))
(4.195)
The unmodulated frequency of oscillation is 1 MHz. The VCO output is applied to a frequency multiplier to produce an FM signal with carrier frequency of 64 MHz and a modulation index of 5. (a) Determine the magnitude of the bias voltage Vb . (b) Find the amplitude Vm of the modulating wave, given that f m = 10 kHz. Assume that Vb Vm . • Solution: The instantaneous frequency of oscillation is given by f i (t) =
1 1 . √ 2π L(C + Ci (t))
(4.196)
The unmodulated frequency of oscillation is given as 1 MHz. Thus 1 1 √ 2π L(C + C0 ) ⇒ C + C0 = 126.65 pF fc =
⇒ C0 = 26.651 pF ⇒ 100/ Vb = 26.651 ⇒ Vb = 14.078 V.
(4.197)
Since the final carrier frequency is 64 MHz and the frequency multiplication factor is 64. Thus, the modulation index at the VCO output is β=
5 = 0.078. 64
(4.198)
Thus, the FM signal at the VCO output is narrowband. The instantaneous frequency of oscillation at the VCO output is f i (t) =
1 1 . 2π L(C + 100(Vb + Vm sin(2π f m t))−0.5 )
Since Vb Vm the instantaneous frequency can be approximated as
(4.199)
4 Frequency Modulation
f i (t) ≈ =
283
1 1 2π L(C + 100V −0.5 (1 − V /(2V ) sin(2π f t))) m b m b 1 1 . √ 2π L(C + C0 (1 − Vm /(2Vb ) sin(2π f m t)))
(4.200)
Hence, the instantaneous frequency becomes 1 1 √ 2π L(C + C0 − C0 Vm /(2Vb ) sin(2π f m t))) 1 1 = √ 2π L(C + C0 ) 1 √ 1 − C0 Vm /(2Vb (C + C0 )) sin(2π f m t) C0 Vm ≈ fc 1 + sin(2π f m t) . 4Vb (C + C0 )
f i (t) =
(4.201)
The modulation index of the narrowband FM signal is given by β=
C0 Vm f c = 0.078. 4Vb f m (C + C0 )
(4.202)
Substituting f c = 1 MHz, f m = 10 kHz, and Vb = 14.078 V, we get Vm = 0.2087 V.
(4.203)
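The numbers above can be reproduced directly (a sketch, not from the text):

import numpy as np

L, C, fc, fm = 200e-6, 100e-12, 1e6, 10e3
beta = 5.0 / 64.0                              # modulation index at the VCO output

Ctot = 1.0 / (L * (2 * np.pi * fc) ** 2)       # C + C0 for a 1 MHz resonance, ~126.65 pF
C0 = Ctot - C                                  # varactor capacitance at the bias point
Vb = (100e-12 / C0) ** 2                       # from C0 = 100/sqrt(Vb) pF
Vm = 4 * beta * Vb * fm * Ctot / (C0 * fc)     # rearranged (4.202)
print(C0 * 1e12, Vb, Vm)                       # ~26.65 pF, ~14.08 V, ~0.21 V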
26. The equivalent circuit of the frequency-determining network of a VCO is shown in Fig. 4.23. Frequency modulation is produced by applying the modulating signal Vm sin(2π f m t) plus a bias Vb to a varactor diode connected across the parallel combination of a 25-μH inductor (L) and a 200-pF capacitor (C). The capacitance in the varactor diode is related to the voltage V (t) applied across its terminals by Ci (t) = 100/ V (t) pF.
(4.204)
Fig. 4.23 Equivalent circuit of the frequency-determining network of a VCO L
C
Ci (t)
284
4 Frequency Modulation
The instantaneous frequency of oscillation is given by f i (t) =
1 1 . √ 2π L(C + Ci (t))
(4.205)
The unmodulated frequency of oscillation is 2 MHz. The VCO output is applied to a frequency multiplier to produce an FM signal with carrier frequency of 128 MHz and a modulation index of 6. (a) Determine the magnitude of the bias voltage Vb . (b) Find the amplitude Vm of the modulating wave, given that f m = 20 kHz. Assume that Vb Vm . • Solution: The instantaneous frequency of oscillation is given by f i (t) =
1 1 . √ 2π L(C + Ci (t))
(4.206)
The unmodulated frequency of oscillation is given as 2 MHz. Thus 1 1 √ 2π L(C + C0 ) ⇒ C + C0 = 253.303 pF ⇒ C0 = 53.303 pF ⇒ 100/ Vb = 53.303 ⇒ Vb = 3.5196 V. fc =
(4.207)
Since the final carrier frequency is 128 MHz, the frequency multiplication factor is 128/2 = 64. Thus, the modulation index at the VCO output is β=
6 = 0.09375. 64
(4.208)
Thus, the FM signal at the VCO output is narrowband. The instantaneous frequency of oscillation at the VCO output is f i (t) =
1 1 . 2π L(C + 100(Vb + Vm sin(2π f m t))−0.5 )
Since Vb Vm the instantaneous frequency can be approximated as
(4.209)
4 Frequency Modulation
f i (t) ≈ =
285
1 1 2π L(C + 100V −0.5 (1 − V /(2V ) sin(2π f t))) m b m b 1 1 . √ 2π L(C + C0 (1 − Vm /(2Vb ) sin(2π f m t)))
(4.210)
Hence, the instantaneous frequency becomes 1 1 √ 2π L(C + C0 − C0 Vm /(2Vb ) sin(2π f m t))) 1 1 = √ 2π L(C + C0 ) 1 √ 1 − C0 Vm /(2Vb (C + C0 )) sin(2π f m t) C0 Vm ≈ fc 1 + sin(2π f m t) . 4Vb (C + C0 )
f i (t) =
(4.211)
The modulation index of the narrowband FM signal is given by β=
C0 Vm f c = 0.09375. 4Vb f m (C + C0 )
(4.212)
Substituting f c = 2 MHz, f m = 20 kHz, and Vb = 3.5196 V, we get Vm = 0.0627 V.
(4.213)
27. (Haykin 1983) The FM signal s(t) = Ac cos 2π f c t + 2πk f
t
τ =0
m(τ ) dτ
(4.214)
is applied to the system shown in Fig. 4.24. Assume that the resistance R is small compared to the impedance of C for all significant frequency components of s(t) and the envelope detector does not load the filter. Determine the resulting signal at the envelope detector output assuming that k f |m(t)| < f c for all t. • Solution: The transfer function of the highpass filter is R R + 1/(j 2π f C) ≈ j 2π f RC
H( f ) =
provided
(4.215)
286
4 Frequency Modulation C
Fig. 4.24 Frequency demodulation using a highpass filter FM
R
signal
Envelope
Output
detector
signal
s(t)
R
1 . 2π f C
(4.216)
Thus, over the range of frequencies in which (4.216) is valid, the highpass filter acts like an ideal differentiator. Hence, the output of the highpass filter is given by ds(t) dt = −RC Ac 2π f c + 2πk f m(t) sin 2π f c t + 2πk f
x(t) = RC
t τ =0
m(τ ) dτ . (4.217)
The output of the envelope detector is given by y(t) = 2π RC Ac f c + k f m(t) .
(4.218)
28. Consider the FM signal s(t) = Ac cos 2π f c t + 2πk f
t τ =0
m(τ ) dτ ,
(4.219)
where k f is 10 kHz/V and m(t) = 5 cos(ωt) V,
(4.220)
where ω = 20,000 rad/s. Compute the frequency deviation and the modulation index. • Solution: The total instantaneous frequency is f i (t) = f c + k f 5 cos(ωt).
(4.221)
Therefore, the frequency deviation is 5k f = 50 kHz.
(4.222)
4 Frequency Modulation
287
The transmitted FM signal is 10πk f sin(ωt) . s(t) = Ac cos 2π f c t + ω
(4.223)
Hence, the modulation index is β=
10πk f = 5π. ω
(4.224)
29. (Haykin 1983) Suppose that the received signal in an FM system contains some residual amplitude modulation as shown by s(t) = a(t) cos(2π f c t + φ(t)),
(4.225)
where a(t) > 0 and f c is the carrier frequency. The phase φ(t) is related to the modulating signal m(t) by φ(t) = 2πk f
t
τ =0
m(τ ) dτ .
(4.226)
Assume that s(t) is restricted to a frequency band of width BT centered at f c , where BT is the transmission bandwidth of s(t) in the absence of amplitude modulation, that is, when a(t) = Ac . Also assume that a(t) varies slowly compared to φ(t) and f c > k f m(t). Compute the output of the ideal frequency discriminator (ideal differentiator followed by an envelope detector). • Solution: The output of the ideal differentiator is s (t) =
ds(t) = a (t) cos(2π f c t + φ(t)) dt −a(t) sin(2π f c t + φ(t))(2π f c + φ (t)). (4.227)
The envelope of s (t) is given by y(t) =
(a (t))2 + a 2 (t)(2π f c + φ (t))2 .
(4.228)
Since it is given that a(t) varies slowly compared to φ(t), we have |φ (t)| |a (t)|
(4.229)
s (t) ≈ −a(t) sin(2π f c t + φ(t))(2π f c + 2πk f m(t)).
(4.230)
s (t) can be approximated by
The output of the envelope detector is
288
4 Frequency Modulation
y(t) = 2πa(t)( f c + k f m(t)),
(4.231)
where we have assumed that [ f c + k f m(t)] > 0. Thus, we see that there is distortion due to a(t), at the envelope detector output. 30. (Haykin 1983) Let s(t) = a(t) cos(2π f c t + φ(t)),
(4.232)
where a(t) > 0, be applied to a hard limiter whose output z(t) is defined by
$$z(t) = \mathrm{sgn}[s(t)] = \begin{cases} +1 & \text{for } s(t) > 0 \\ -1 & \text{for } s(t) < 0. \end{cases} \tag{4.233}$$
(a) Show that z(t) can be expressed in the form of a Fourier series as follows:
$$z(t) = \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}\cos\left[2\pi f_c t\,(2n+1) + (2n+1)\phi(t)\right]. \tag{4.234}$$
(b) Compute the output when z(t) is applied to an ideal bandpass filter with center frequency $f_c$ and bandwidth $B_T$, where $B_T$ is the transmission bandwidth of s(t) in the absence of amplitude modulation. Assume that $f_c \gg B_T$.
• Solution: Since a(t) > 0, z(t) can be written as
$$z(t) = \mathrm{sgn}[\cos(2\pi f_c t + \phi(t))]. \tag{4.235}$$
Let
$$\alpha(t) = 2\pi f_c t + \phi(t). \tag{4.236}$$
Then z(t) can be rewritten as
$$z(\alpha(t)) = \mathrm{sgn}[\cos(\alpha(t))]. \tag{4.237}$$
Note that z(α(t)) is periodic with respect to α(t), that is, z(α(t) + 2nπ) = z(α(t)).
(4.238)
This is illustrated in Fig. 4.25. Hence, z(t) can be written in the form of a Fourier series with respect to α(t) as follows:
Fig. 4.25 Plot of z(α(t)) and cos(α(t)) versus α(t): z(α(t)) is periodic with respect to α(t)
$$z(\alpha(t)) = 2\sum_{n=1}^{\infty} a_n\cos(n\alpha(t)), \tag{4.239}$$
where
$$a_n = \frac{1}{2\pi}\int_{\alpha(t)=-\pi}^{\pi} z(\alpha(t))\cos(n\alpha(t))\,d\alpha(t) = \frac{1}{\pi}\int_{\alpha(t)=0}^{\pi} z(\alpha(t))\cos(n\alpha(t))\,d\alpha(t). \tag{4.240}$$
For convenience denote α(t) = x. Thus
$$a_n = \frac{1}{\pi}\int_{x=0}^{\pi} z(x)\cos(nx)\,dx = \frac{1}{\pi}\int_{x=0}^{\pi/2}\cos(nx)\,dx - \frac{1}{\pi}\int_{x=\pi/2}^{\pi}\cos(nx)\,dx = \begin{cases} 0 & \text{for } n = 2m \\ \dfrac{2(-1)^m}{\pi(2m+1)} & \text{for } n = 2m+1. \end{cases} \tag{4.241}$$
Thus
$$z(x) = \sum_{m=0}^{\infty}\frac{4(-1)^m}{\pi(2m+1)}\cos((2m+1)x) \;\Rightarrow\; z(\alpha(t)) = \sum_{m=0}^{\infty}\frac{4(-1)^m}{\pi(2m+1)}\cos((2m+1)\alpha(t)) = \sum_{m=0}^{\infty}\frac{4(-1)^m}{\pi(2m+1)}\cos\left((2m+1)2\pi f_c t + (2m+1)\phi(t)\right). \tag{4.242}$$
Observe that the mth harmonic has a carrier frequency at $(2m+1)f_c$ and bandwidth $(2m+1)B_T$, where $B_T$ is the bandwidth of s(t) with amplitude modulation removed, that is, $a(t) = A_c$. If the fundamental component at m = 0 is to be extracted from z(α(t)), we require
$$f_c + \frac{B_T}{2} < (2m+1)f_c - (2m+1)\frac{B_T}{2} \;\Rightarrow\; \frac{B_T}{2}\left(1 + \frac{1}{m}\right) < f_c \quad \text{for } m > 0. \tag{4.243}$$
The left-hand side of the second equation in (4.243) is maximum for m = 1. Thus, in the worst case, we require f c > BT .
(4.244)
Now if z(α(t)) is passed through an ideal bandpass filter with center frequency $f_c$ and bandwidth $B_T$, the output is (assuming (4.244) is satisfied)
$$y(t) = \frac{4}{\pi}\cos(2\pi f_c t + \phi(t)), \tag{4.245}$$
which has no amplitude modulation.
31. The message signal
$$m(t) = \left(\frac{A\sin(tB)}{t}\right)^4 \tag{4.246}$$
is applied to an FM modulator. Compute the two-sided bandwidth (on both sides of the carrier) of the FM signal using Carson’s rule. Assume frequency sensitivity of the modulator to be $k_f$.
• Solution: According to Carson’s rule, the two-sided bandwidth of the FM signal is
$$B_T = 2(\Delta f + W), \tag{4.247}$$
where
$$\Delta f = \max k_f|m(t)| \tag{4.248}$$
is the frequency deviation and W is the one-sided bandwidth of the message. Clearly
$$\Delta f = k_f A^4B^4 \tag{4.249}$$
since the maximum value of m(t) is $A^4B^4$, which occurs at t = 0. Next, we make use of the Fourier transform pair:
$$A\,\mathrm{sinc}(tB) \rightleftharpoons \frac{A}{B}\,\mathrm{rect}(f/B). \tag{4.250}$$
Time scaling by 1/π, we obtain
$$A\,\mathrm{sinc}(tB/\pi) \rightleftharpoons \frac{A\pi}{B}\,\mathrm{rect}(f\pi/B) \;\Rightarrow\; A\,\frac{\sin(tB)}{tB} \rightleftharpoons \frac{A\pi}{B}\,\mathrm{rect}(f\pi/B) \;\Rightarrow\; A\,\frac{\sin(tB)}{t} \rightleftharpoons A\pi\,\mathrm{rect}(f\pi/B), \tag{4.251}$$
which has a one-sided bandwidth equal to B/(2π). Since m(t) is the fourth power of $A\sin(tB)/t$, its spectrum is $A\pi\,\mathrm{rect}(f\pi/B)$ convolved with itself three times, so m(t) has a one-sided bandwidth of $W = 4\times B/(2\pi) = 2B/\pi$. Hence
$$B_T = 2\left(k_f A^4B^4 + \frac{2B}{\pi}\right). \tag{4.252}$$
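The following helper is a minimal sketch (not from the text) that applies Carson's rule (4.247) to this message; the numerical values of A, B and k_f are made up purely for illustration.

```python
import math

def carson_bandwidth(delta_f, w):
    """Carson's rule: two-sided FM bandwidth B_T = 2*(delta_f + W)."""
    return 2.0 * (delta_f + w)

# Hypothetical numbers, only to exercise the formula:
A, B, kf = 0.1, 100.0, 5.0
delta_f = kf * (A * B) ** 4          # peak deviation, since max |m(t)| = A^4 * B^4
W = 2.0 * B / math.pi                # one-sided message bandwidth from (4.252)
print(carson_bandwidth(delta_f, W))
```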
32. Explain the principle of operation of the PLL demodulator for FM signals. Draw the block diagram and clearly state the signal model and assumptions. • Solution: Consider the block diagram in Fig. 4.26. Here s(t) denotes the input FM signal (bandlimited to [ f c − BT /2, f c + BT /2]; f c is the carrier frequency, BT is the bandwidth of s(t)) given by s(t) = Ac sin(2π f c t + φ1 (t)) V,
(4.253)
where
$$\phi_1(t) = 2\pi k_f\int_{\tau=-\infty}^{t} m(\tau)\,d\tau, \tag{4.254}$$
where k f is the frequency sensitivity of the FM modulator in Hz/V and m(·) is the message signal bandlimited to [−W, W ]. The voltage controlled oscillator (VCO) output is
Fig. 4.26 Block diagram of the phase locked loop (PLL) demodulator for FM signals: s(t) and the VCO output r(t) drive a mixer, the LPF removes the sum-frequency term to give e(t), the loop filter h(t) produces v(t), and v(t) controls the VCO
r (t) = Av cos(2π f c t + φ2 (t)) V,
(4.255)
where
$$\phi_2(t) = 2\pi k_v\int_{\tau=-\infty}^{t} v(\tau)\,d\tau, \tag{4.256}$$
where kv is the frequency sensitivity of the VCO in Hz/V and v(·) is the control signal at the VCO input. The lowpass filter eliminates the sum frequency component at the multiplier output and allows only the difference frequency component. Hence e(t) = Ac Av km sin(φ1 (t) − φ2 (t)) V,
(4.257)
where $k_m$ V$^{-1}$ is the multiplier gain (a factor of 1/2 is absorbed in $k_m$). The loop filter output is
$$v(t) = \int_{\tau=-\infty}^{\infty} e(\tau)h(t-\tau)\,d\tau\ \text{V}. \tag{4.258}$$
Note that h(t) is dimensionless. Let
$$\phi_e(t) = \phi_1(t) - \phi_2(t) \;\Rightarrow\; \phi_e(t) = \phi_1(t) - 2\pi k_v\int_{\tau=-\infty}^{t} v(\tau)\,d\tau \;\Rightarrow\; \frac{d\phi_e(t)}{dt} = \frac{d\phi_1(t)}{dt} - 2\pi k_v v(t) \;\Rightarrow\; \frac{d\phi_e(t)}{dt} = \frac{d\phi_1(t)}{dt} - 2\pi K_0\int_{\tau=-\infty}^{\infty}\sin(\phi_e(\tau))h(t-\tau)\,d\tau, \tag{4.259}$$
where
$$K_0 = k_m k_v A_c A_v\ \text{Hz}. \tag{4.260}$$
Now, under steady state
$$\phi_e(t) \ll 1\ \text{rad} \tag{4.261}$$
for all t, therefore
$$\sin(\phi_e(t)) \approx \phi_e(t) \tag{4.262}$$
for all t. Hence, the last equation in (4.259) can be written as
$$\frac{d\phi_e(t)}{dt} = \frac{d\phi_1(t)}{dt} - 2\pi K_0\int_{\tau=-\infty}^{\infty}\phi_e(\tau)h(t-\tau)\,d\tau. \tag{4.263}$$
Taking the Fourier transform of both sides, we get
$$\mathrm{j}2\pi f\,\Phi_e(f) = \mathrm{j}2\pi f\,\Phi_1(f) - 2\pi K_0\,\Phi_e(f)H(f) \;\Rightarrow\; \Phi_e(f) = \frac{\mathrm{j}f\,\Phi_1(f)}{\mathrm{j}f + K_0H(f)} \;\Rightarrow\; \Phi_e(f) = \frac{\Phi_1(f)}{1 + K_0H(f)/(\mathrm{j}f)}. \tag{4.264}$$
Now $\phi_1(t)$ is bandlimited to [−W, W]. It will be shown later that $\phi_2(t)$ is also bandlimited to [−W, W]. Therefore, both $\phi_e(t)$ and h(t) are bandlimited to [−W, W]. If
$$\frac{K_0H(f)}{f} \gg 1 \quad \text{for } |f| < W, \tag{4.265}$$
then the last equation in (4.264) reduces to
$$\Phi_e(f) = \frac{\mathrm{j}f\,\Phi_1(f)}{K_0H(f)}. \tag{4.266}$$
Hence, the Fourier transform of (4.257), after applying (4.262), becomes
$$E(f) = A_cA_vk_m\,\Phi_e(f) \tag{4.267}$$
and
$$V(f) = E(f)H(f) = A_cA_vk_m\,\mathrm{j}f\,\Phi_1(f)/K_0 = (\mathrm{j}f/k_v)\,\Phi_1(f). \tag{4.268}$$
The inverse Fourier transform of (4.268) gives
$$v(t) = \frac{1}{2\pi k_v}\,\frac{d\phi_1(t)}{dt} = \frac{k_f}{k_v}\,m(t). \tag{4.269}$$
From (4.269), it is clear that v(t) and φ2 (t) are bandlimited to [−W, W ].
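A minimal discrete-time sketch of the PLL of Fig. 4.26 is given below, assuming a first-order loop (h(t) = δ(t), so H(f) = 1) and working directly with the phase-domain model (4.257); the parameter values are arbitrary and this is an illustration, not the book's implementation. In lock, the control voltage approaches (k_f/k_v) m(t), as in (4.269).

```python
import numpy as np

fs = 200e3                                   # simulation rate (arbitrary)
t = np.arange(0, 0.05, 1 / fs)
fc, kf, kv, fm = 20e3, 1e3, 10e3, 100.0      # arbitrary demo values
Ac = Av = km = 1.0                           # so K0 = km*kv*Ac*Av = 10 kHz

m = np.sin(2 * np.pi * fm * t)               # message
phi1 = 2 * np.pi * kf * np.cumsum(m) / fs    # FM phase, cf. (4.254)

phi2 = 0.0
v = np.zeros_like(t)
for n in range(1, len(t)):
    # Phase-domain model of mixer + LPF, (4.257); h(t) = delta(t) gives v = e.
    e = Ac * Av * km * np.sin(phi1[n - 1] - phi2)
    v[n] = e
    phi2 += 2 * np.pi * kv * v[n] / fs       # VCO phase integration, (4.256)

# After acquisition, v(t) ~ (kf/kv) * m(t), cf. (4.269).
tail = slice(len(t) // 2, None)
print(np.max(np.abs(v[tail] - (kf / kv) * m[tail])))   # small residual error
```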
References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.
J. G. Proakis and M. Salehi. Fundamentals of Communication Systems. Pearson Education Inc., 2005.
Chapter 5
Noise in Analog Modulation
1. A DSB-SC signal of the form
$$S(t) = A_cM(t)\cos(2\pi f_c t + \Theta) \tag{5.1}$$
is transmitted over a channel that adds additive Gaussian noise with psd shown in Fig. 5.1. The message spectrum extends over [−4, 4] kHz and the carrier frequency is 200 kHz. Assuming that the average power of S(t) is 10 W and coherent detection, determine the output SNR of the receiver. Assume that the IF filter is ideal with unity gain in the passband and zero for other frequencies, and the narrowband representation of noise at the IF filter output is
$$N(t) = N_c(t)\cos(2\pi f_c t) - N_s(t)\sin(2\pi f_c t), \tag{5.2}$$
where $N_c(t)$ and $N_s(t)$ are both independent of Θ.
• Solution: The received signal at the output of the IF filter is
$$X(t) = A_cM(t)\cos(2\pi f_c t + \Theta) + N(t) = A_cM(t)\cos(2\pi f_c t + \Theta) + N_c(t)\cos(2\pi f_c t) - N_s(t)\sin(2\pi f_c t), \tag{5.3}$$
where M(t) denotes the random process corresponding to the message, Θ is uniformly distributed in [0, 2π), and N(t) is a narrowband noise process. The psd of N(t) is illustrated in Fig. 5.2. If the local oscillator (LO) at the receiver supplies $2\cos(2\pi f_c t + \Theta)$, then the LPF output is
$$Y(t) = A_cM(t) + N_c(t)\cos(\Theta) + N_s(t)\sin(\Theta). \tag{5.4}$$
Fig. 5.1 Noise psd $S_W(f)$ over [−400, 400] kHz (peak value $10^{-6}$ W/Hz)

Fig. 5.2 Psd of narrowband noise at the IF filter output, $S_N(f)$: between 0.49 and 0.51 × 10⁻⁶ W/Hz for 196 kHz ≤ |f| ≤ 204 kHz
Hence the noise power at the LPF output is
$$E\left[N_c^2(t)\cos^2(\Theta)\right] + E\left[N_s^2(t)\sin^2(\Theta)\right] = E\left[N^2(t)\right] = \int_{f=-\infty}^{\infty} S_N(f)\,df = 8000\times 10^{-6}\ \text{W}. \tag{5.5}$$
Note that
$$E\left[N_c(t)N_s(t)\right] = 0, \qquad E\left[\sin(\Theta)\cos(\Theta)\right] = 0, \tag{5.6}$$
where we have used the fact that the cross spectral density $S_{N_cN_s}(f)$ is an odd function, therefore $R_{N_cN_s}(0) = 0$. The power of the modulated message signal is
$$A_c^2P/2 = 10\ \text{W}. \tag{5.7}$$
Thus, the power of the demodulated message in (5.4) is $A_c^2P = 20$ W. Hence
$$\text{SNR}_O = 2.5\times 10^3 \equiv 33.98\ \text{dB}. \tag{5.8}$$
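The arithmetic in (5.5)-(5.8) can be reproduced in a few lines; the 0.5 × 10⁻⁶ W/Hz level is the approximate noise psd over the two 8 kHz passbands of Fig. 5.2.

```python
import math

signal_power = 20.0               # demodulated message power A_c^2 * P, W
noise_power = 0.5e-6 * 16e3       # ~0.5e-6 W/Hz over two 8 kHz bands = 8 mW
snr_o = signal_power / noise_power
print(snr_o, 10 * math.log10(snr_o))   # 2500.0, about 33.98 dB
```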
Fig. 5.3 DSB-SC demodulator having a phase error Θ(t): the input $A_cM(t)\cos(2\pi f_c t) + N(t)$ is multiplied by $2\cos(2\pi f_c t + \Theta(t))$ to give V(t), which is lowpass filtered to give Y(t)
2. (Haykin 1983) In a DSB-SC receiver, the sinusoidal wave generated by the local oscillator suffers from a phase error Θ(t) with respect to the input carrier wave $\cos(2\pi f_c t)$, as illustrated in Fig. 5.3. Assuming that Θ(t) is a zero-mean Gaussian process of variance $\sigma_\Theta^2$, and that most of the time |Θ(t)| is small compared to unity, find the mean squared error at the receiver output for DSB-SC modulation. The mean squared error is defined as the expected value of the squared difference between the receiver output for Θ(t) ≠ 0 and the message signal component of the receiver output when Θ(t) = 0. Assume that the message M(t), the carrier phase Θ(t), and noise N(t) are statistically independent of each other. Also assume that N(t) is a zero-mean narrowband noise process with psd $N_0/2$ extending over $f_c - W \le |f| \le f_c + W$ and having the representation
$$N(t) = N_c(t)\cos(2\pi f_c t) - N_s(t)\sin(2\pi f_c t). \tag{5.9}$$
The psd of M(t) extends over [−W, W] with power P. The LPF is ideal with unity gain in [−W, W].
• Solution: The output of the multiplier is
$$V(t) = 2\left[A_cM(t)\cos(2\pi f_c t) + N(t)\right]\cos(2\pi f_c t + \Theta(t)), \tag{5.10}$$
where N(t) is narrowband noise. The output of the lowpass filter is
$$Y(t) = A_cM(t)\cos(\Theta(t)) + N_c(t)\cos(\Theta(t)) + N_s(t)\sin(\Theta(t)). \tag{5.11}$$
When Θ(t) = 0 the signal component is $Y_0(t) = A_cM(t)$. Thus, the mean squared error is
$$E\left[(Y_0(t) - Y(t))^2\right] = E\left[\left(A_cM(t)(1-\cos(\Theta(t))) + N_c(t)\cos(\Theta(t)) + N_s(t)\sin(\Theta(t))\right)^2\right] \tag{5.12}$$
$$= E\left[A_c^2M^2(t)(1-\cos(\Theta(t)))^2\right] + E\left[N_c^2(t)\cos^2(\Theta(t))\right] + E\left[N_s^2(t)\sin^2(\Theta(t))\right]. \tag{5.13}$$
We now use the following relations (assuming |Θ(t)| ≪ 1 for all t):
$$(1-\cos(\Theta(t)))^2 \approx \frac{\Theta^4(t)}{4}, \qquad E\left[N_c^2(t)\right] = 2N_0W, \qquad E\left[N_s^2(t)\right] = 2N_0W, \qquad E\left[M^2(t)\right] = P. \tag{5.14}$$
Thus
$$E\left[(Y_0(t) - Y(t))^2\right] = \frac{A_c^2P}{4}E\left[\Theta^4(t)\right] + 2N_0W\,E\left[\cos^2(\Theta(t)) + \sin^2(\Theta(t))\right] = \frac{3A_c^2P\sigma_\Theta^4}{4} + 2N_0W, \tag{5.15}$$
where we have used the fact that if X is a zero-mean Gaussian random variable with variance $\sigma^2$,
$$E\left[X^{2n}\right] = 1\times 3\times\cdots\times(2n-1)\,\sigma^{2n}. \tag{5.16}$$
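A Monte-Carlo sanity check of (5.15) is sketched below (not part of the original solution). The message, Θ(t) and the noise quadratures are drawn as independent Gaussian samples with the second moments used in (5.14); all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
Ac, P, N0, W, sigma_theta = 2.0, 1.0, 1e-6, 1e3, 0.2   # arbitrary demo values

n = 10**6
theta = rng.normal(0.0, sigma_theta, n)                # phase error Theta
m = rng.normal(0.0, np.sqrt(P), n)                     # message, E[M^2] = P
nc = rng.normal(0.0, np.sqrt(2 * N0 * W), n)           # E[Nc^2] = 2*N0*W
ns = rng.normal(0.0, np.sqrt(2 * N0 * W), n)           # E[Ns^2] = 2*N0*W

err = Ac * m * (1 - np.cos(theta)) + nc * np.cos(theta) + ns * np.sin(theta)
mse_sim = np.mean(err ** 2)
mse_formula = 3 * Ac**2 * P * sigma_theta**4 / 4 + 2 * N0 * W   # (5.15)
print(mse_sim, mse_formula)   # the two values should agree closely
```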
3. (Haykin 1983) Let the message M(t) be transmitted using SSB modulation. The psd of M(t) is SM ( f ) =
a| f | W
0
for | f | < W elsewhere,
(5.17)
where a and W are constants. Let the transmitted signal be of the form: ˆ S(t) = Ac M(t) cos(2π f c t + θ ) + Ac M(t) sin(2π f c t + θ ),
(5.18)
where θ is a uniformly distributed random variable in [0, 2π ) and independent of M(t). White Gaussian noise of zero mean and psd N0 /2 is added to the SSB signal at the receiver input. Compute SNR O assuming coherent detection and an appropriate unity gain IF filter in the receiver front-end. • Solution: The average message power is
P=
W f =−W
SM ( f ) d f
= aW.
(5.19)
The received SSB signal at the output of the IF filter is given by ˆ x(t) = Ac M(t) cos(2π f c t + θ ) + Ac M(t) sin(2π f c t + θ ) +Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(5.20)
Assuming coherent demodulation by 2 cos(2π f c t + θ ), the LPF output is given by Y (t) = Ac M(t) + Nc (t) cos(θ ) + Ns (t) sin(θ ).
(5.21)
The noise power is E (Nc (t) cos(θ ) + Ns (t) sin(θ ))2 = E Nc2 (t) E cos2 (θ ) = +E Ns2 (t) E sin2 (θ ) = E N 2 (t) = N0 W, (5.22) since E Nc2 (t) = E Ns2 (t) = E N 2 (t) = N0 W 2 2 E cos (θ ) + sin (θ ) = 1.
(5.23)
Therefore, the output SNR is SNR O =
a A2c A2c aW = . N0 W N0
(5.24)
4. An unmodulated carrier of amplitude Ac and frequency f c and bandlimited white noise are summed and passed through an ideal envelope detector. Assume that the noise psd to be of height N0 /2 and bandwidth 2W centered at f c . Determine the output SNR when the input carrier-to-noise ratio is high. • Solution: The input to the envelope detector is x(t) = Ac cos(2π f c t) + n(t) = Ac cos(2π f c t) +n c (t) cos(2π f c t) − n s (t) sin(2π f c t).
(5.25)
The output of the envelope detector is y(t) =
(Ac + n c (t))2 + n 2s (t).
(5.26)
It is given that the carrier-to-noise ratio is high. Hence y(t) ≈ Ac + n c (t).
(5.27)
The output signal power is A2c and the output noise power is 2N0 W . Hence SNR O =
A2c . 2N0 W
(5.28)
5. (Haykin 1983) A frequency division multiplexing (FDM) system uses SSB modulation to combine 12 independent voice channels and then uses frequency modulation to transmit the composite signal. Each voice signal has an average power P and occupies the frequency band [−4, 4] kHz. Only the lower sideband is transmitted. The modulated voice signals used for the first stage of modulation are defined by Sk (t) = Ak Mk (t) cos(2π k f 0 t + θ ) + Ak Mˆ k (t) sin(2π k f 0 t + θ ),
(5.29)
for 1 ≤ k ≤ 12, where f 0 = 4 kHz. Note that E[Mk2 (t)] = P. The received signal consists of the transmitted FM signal plus zero-mean white Gaussian noise of psd N0 /2. Assume that the output of the FM receiver is given by Y (t) = k f S(t) + No (t),
(5.30)
where
$$S(t) = \sum_{k=1}^{12} S_k(t). \tag{5.31}$$
The psd of $N_o(t)$ is
$$S_{N_o}(f) = \begin{cases} N_0f^2/A_c^2 & \text{for } |f| < 48\ \text{kHz} \\ 0 & \text{otherwise,} \end{cases} \tag{5.32}$$
where $A_c$ is the amplitude of the transmitted FM signal. Find the relationship between the subcarrier amplitudes $A_k$ so that the modulated voice signals have equal SNRs at the FM receiver output.
• Solution: The power in the kth voice signal is E Sk2 (t) = A2k P.
(5.33)
The overall signal at the output of the first stage of modulation is given by
$$S(t) = \sum_{k=1}^{12} S_k(t). \tag{5.34}$$
The bandwidth of S(t) extends over [−48, 48] kHz. The signal S(t) is given as input to the second stage, which is a frequency modulator. The output of the FM receiver is given by Y (t) = k f S(t) + No (t).
(5.35)
The psd of $N_o(t)$ is
$$S_{N_o}(f) = \begin{cases} N_0f^2/A_c^2 & \text{for } |f| < 48\ \text{kHz} \\ 0 & \text{otherwise,} \end{cases} \tag{5.36}$$
where $A_c$ is the amplitude of the transmitted FM signal. The noise power in the kth received voice band is
$$P_{N_k} = 2\int_{(k-1)B}^{kB} S_{N_o}(f)\,df = \frac{2N_0B^3}{3A_c^2}\left(3k^2 - 3k + 1\right), \tag{5.37}$$
where B = 4 kHz. The power in the kth received voice signal is
$$P_k = k_f^2A_k^2P. \tag{5.38}$$
The output SNR for the kth SSB modulated voice signal is
$$\text{SNR}_{O,k} = \frac{3k_f^2A_k^2PA_c^2}{2N_0B^3(3k^2 - 3k + 1)}. \tag{5.39}$$
We require SNR O, k to be independent of k. Hence the required condition on Ak is A2k = C(3k 2 − 3k + 1), where C is a constant.
(5.40)
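The condition (5.40) can be checked numerically: with $A_k^2 \propto 3k^2 - 3k + 1$, the ratio of (5.38) to (5.37) is the same for every channel. The constants below are arbitrary placeholders.

```python
import numpy as np

N0, Ac, kf, P, B, C = 1.0, 1.0, 1.0, 1.0, 4e3, 1.0            # arbitrary values
k = np.arange(1, 13)

Ak2 = C * (3 * k**2 - 3 * k + 1)                              # (5.40)
P_Nk = 2 * N0 * B**3 * (3 * k**2 - 3 * k + 1) / (3 * Ac**2)   # (5.37)
P_k = kf**2 * Ak2 * P                                         # (5.38)
print(P_k / P_Nk)       # identical SNR for all twelve channels
```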
Fig. 5.4 Multiplying a random process with a sine wave: X(t) is multiplied by $2\cos(2\pi f_c t + \theta)$ to give Y(t), which passes through a lowpass filter H(f) to give Z(t)
6. Consider the system shown in Fig. 5.4. Here, H ( f ) is an ideal LPF with unity gain in the band [−W, W ]. Carefully follow the two procedures outlined below: (a) Let X (t) = M(t) cos(2π f c t + θ ),
(5.41)
where θ is a uniformly distributed random variable in [0, 2π ), M(t) is a random process with psd S M ( f ) in the range [−W, W ]. Assume that M(t) and θ are independent. Compute the autocorrelation and the psd of Y (t). Hence compute the psd of Z (t). (b) Let X (t) be any random process with autocorrelation R X (τ ) and psd S X ( f ). Compute the psd of Z (t) in terms of S X ( f ). Now assuming that X (t) is given by (5.41), compute S X ( f ). Substitute this expression for S X ( f ) into the expression for the psd of Z (t). (c) Explain the result for the psd of Z (t) obtained using procedure (a) and procedure (b). • Solution: In the case of procedure (a) Y (t) = M(t)[1 + cos(4π f c t + 2θ )].
(5.42)
Therefore RY (τ ) = E[Y (t)Y (t − τ )] 1 = R M (τ ) 1 + cos(4π f c τ ) . 2
(5.43)
Hence SY ( f ) = S M ( f ) +
1 [S M ( f − 2 f c ) + S M ( f + 2 f c )] . 4
(5.44)
The psd of Z (t) is S Z ( f ) = SY ( f )|H ( f )|2 = S M ( f ).
(5.45)
Using procedure (b) we have Y (t) = 2X (t) cos(2π f c t + θ ).
(5.46)
Hence RY (τ ) = 2R X (τ ) cos(2π f c τ ) ⇒ SY ( f ) = S X ( f − f c ) + S X ( f + f c ).
(5.47)
Therefore S Z ( f ) = SY ( f )|H ( f )|2 = [S X ( f − f c ) + S X ( f + f c )]|H ( f )|2 .
(5.48)
Now, assuming that X (t) = M(t) cos(2π f c t + θ ),
(5.49)
we get 1 R M (τ ) cos(2π f c τ ) 2 1 ⇒ S X ( f ) = [S M ( f − f c ) + S M ( f + f c )] . 4 R X (τ ) =
(5.50)
Substituting the above value of S X ( f ) into (5.48) we get 1 [S M ( f − 2 f c ) + S M ( f ) + S M ( f ) + S M ( f + 2 f c )] |H ( f )|2 4 1 = S M ( f ). (5.51) 2
SZ ( f ) =
The reason for the difference in the psd using the two procedures is that in the first case we are doing coherent demodulation. However, in the second case coherent demodulation is not assumed. In fact, the local oscillator supplying any arbitrary phase α would have given the result in (5.51). 7. Consider the communication system shown in Fig. 5.5. The message is assumed to be a random process X (t) given by X (t) =
∞ k=−∞
Sk p(t − kT − α),
(5.52)
Fig. 5.5 Block diagram of a communication system: the message X(t) plus white noise W(t) passes through an ideal LPF to give Z(t); the symbols $S_k$ are drawn from the 4-ary constellation {−3, −1, 1, 3}
where 1/T denotes the symbol-rate, α is a random variable uniformly distributed in [0, T ], and p(t) = sinc (t/T ).
(5.53)
The symbols Sk are drawn from a 4-ary constellation as indicated in Fig. 5.5. The symbols are independent, that is E[Sk Sk+n ] = E[Sk ]E[Sk+n ]
for n = 0
(5.54)
and equally likely, that is P(−3) = P(−1) = P(1) = P(3).
(5.55)
Also assume that Sk and α are independent. The term W (t) denotes an additive white noise process with psd N0 /2. The LPF is ideal with unity gain in the bandwidth [−W, W ]. Assume that the LPF does not distort the message. (a) Derive the expression for the autocorrelation and the psd of X (t). (b) Compute the signal-to-noise ratio at the LPF output. (c) What should be the value of W so that the SNR at the LPF output is maximized without distorting the message? • Solution: The autocorrelation of X (t) is given by R X (τ ) = E[X (t)X (t − τ )] ⎡ ⎤ ∞ ∞ = E⎣ Si p(t − i T − α) S j p(t − τ − j T − α)⎦ i=−∞
=
∞ ∞
j=−∞
E[Si S j ]E[ p(t − i T − α) p(t − τ − j T − α)].
i=−∞ j=−∞
(5.56)
Now E[Si S j ] = 5δ K (i − j),
(5.57)
where δ K (i − j) =
1 for i = j 0 for i = j
(5.58)
Similarly E [ p(t − i T − α) p(t − τ − j T − α)] 1 T p(t − i T − α) p(t − τ − j T − α) dα. = T α=0
(5.59)
Let t − iT − α = z
(5.60)
Thus R X (τ ) =
=
∞ ∞ 5 δ K (i − j) T i=−∞ j=−∞ t−i T p(z) p(z + i T − τ − j T ) dz z=t−i T −T ∞ t−i T
5 T
i=−∞ ∞
z=t−i T −T
p(z) p(z − τ ) dz
5 p(z) p(z − τ ) dz T z=−∞ 5 = R pp (τ ), T
=
(5.61)
where R pp (τ ) is the autocorrelation of p(t). The power spectral density of X (t) is given by 5 |P( f )|2 T = 5T rect ( f T ) 5 sinc (t/T ).
SX ( f ) =
(5.62)
Fig. 5.6 (a) Block diagram of a DSB-SC receiver: R(t) passes through an IF filter of gain k over $f_1 = f_c - W \le |f| \le f_2 = f_c + W$, is multiplied by $A_c\cos(2\pi f_c t + \theta)$ to give V(t), and is lowpass filtered over [−W, W] to give Y(t); (b) the message psd $S_M(f)$ in watts/kHz over [−4, 4] kHz (peak value 5)
Therefore
$$R_X(\tau) = 5\,\mathrm{sinc}(\tau/T). \tag{5.63}$$
The power in X (t) is R X (0) =
∞ f =−∞
SX ( f ) d f
= 5.
(5.64)
The noise power at the LPF output is PN =
N0 × 2W = N0 W. 2
(5.65)
Therefore SNR O =
5 . N0 W
(5.66)
Clearly, SNR O is maximized when W = 1/(2T ). 8. Consider the coherent DSB-SC receiver shown in Fig. 5.6a. The signal R(t) is given by
R(t) = S(t) + W (t) = Ac M(t) cos(2π f c t + θ ) + W (t),
(5.67)
where M(t) has the psd as shown in Fig. 5.6b, θ is a uniformly distributed random variable in [0, 2π ), and W (t) is a zero-mean random process with psd SW ( f ) = a f 2 W/kHz
for − ∞ < f < ∞.
(5.68)
The power of S(t) is 160 W. The representation of narrowband noise is N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(5.69)
Assume that M(t), θ, N(t) are independent of each other.
(a) Compute $A_c$.
(b) Derive the expression for the signal power at the receiver output.
(c) Derive the expression for the noise power at the receiver output.
(d) Compute the SNR in dB at the receiver output, if $a = 10^{-3}$, $f_c = 8$ kHz, and W = 4 kHz.
• Solution: It is given that the power of S(t) is 160 W. Thus A2 P E S 2 (t) = A2c E M 2 (t) E cos2 (2π f c t + θ ) = c = 160, (5.70) 2 where
P = E M (t) = 2
4 f =−4
S M ( f ) d f = 20 W.
(5.71)
Therefore Ac = 4.
(5.72)
Y (t) = k Ac M(t) cos(2π f c t + θ ) + N (t).
(5.73)
The IF filter output is
The psd of N (t) is S N ( f ) = ak 2 f 2
for f c − W ≤ | f | ≤ f c + W.
(5.74)
The receiver output is V (t) =
Ac k A2c M(t) Ac + Nc (t) cos(θ ) + Ns (t) sin(θ ). 2 2 2
(5.75)
Fig. 5.7 DSB-SC demodulator having a phase error Θ(t): the input $A_cM(t)\cos(2\pi f_c t) + N(t)$ is multiplied by $2\cos(2\pi f_c t + \Theta(t))$ to give V(t), which is lowpass filtered to give Y(t)
The signal power at the receiver output is k 2 A4c P/4 = 1280k 2 .
(5.76)
The noise power at the receiver output is
$$\frac{A_c^2}{4}\left(E\left[N_c^2(t)\right]E\left[\cos^2(\theta)\right] + E\left[N_s^2(t)\right]E\left[\sin^2(\theta)\right]\right) = \frac{A_c^2}{4}E\left[N^2(t)\right]. \tag{5.77}$$
Now
$$E\left[N^2(t)\right] = 2\int_{f=f_c-W}^{f_c+W} ak^2f^2\,df = \frac{4ak^2}{3}\left(3f_c^2W + W^3\right). \tag{5.78}$$
Therefore, the noise power at the receiver output becomes
$$\frac{ak^2A_c^2}{3}\left(3f_c^2W + W^3\right) = 4.4373k^2. \tag{5.79}$$
The SNR at the receiver output in dB is SNR = 10 log(1280/4.4373) = 24.6 dB.
(5.80)
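A short check of the numbers in (5.76)-(5.80), using the given a = 10⁻³ W/kHz, f_c = 8 kHz, W = 4 kHz, A_c = 4 and P = 20 W (the IF gain k cancels in the ratio):

```python
import math

a, fc, W, Ac, P = 1e-3, 8.0, 4.0, 4.0, 20.0                  # frequencies in kHz
noise = (Ac**2 / 4) * (4 * a / 3) * (3 * fc**2 * W + W**3)   # (5.79) with k = 1
signal = Ac**4 * P / 4                                       # (5.76) with k = 1
print(noise, signal, 10 * math.log10(signal / noise))        # ~4.437, 1280, ~24.6 dB
```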
9. In a DSB-SC receiver, the sinusoidal wave generated by the local oscillator suffers from a phase error (t) with respect to the input carrier wave cos(2π f c t), as illustrated in Fig. 5.7. Assuming that (t) is uniformly distributed in [−δ, δ], where δ 1, find the mean squared error at the receiver output. The mean squared error is defined as the expected value of the squared difference between the receiver output for (t) = 0 and the message signal component of the receiver output when (t) = 0. Assume that the message M(t), the carrier phase (t), and noise N (t) are statistically independent of each other. Also assume that N (t) is a zero-mean narrowband noise process with psd N0 /2 extending over f c − W ≤ | f | ≤ f c + W and having the representation:
N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(5.81)
The psd of M(t) extends over [−W, W ] with power P. The LPF is ideal with unity gain in [−W, W ]. • Solution: The output of the multiplier is V (t) = 2 [Ac M(t) cos(2π f c t) + N (t)] cos(2π f c t + (t)),
(5.82)
where N (t) is narrowband noise. The output of the lowpass filter is Y (t) = Ac M(t) cos((t)) + Nc (t) cos((t)) + Ns (t) sin((t)). (5.83) When (t) = 0 the signal component is Y0 (t) = Ac M(t).
(5.84)
Thus, the mean squared error is E
(Y0 (t) − Y (t))2
= E [(Ac M(t)(1 − cos((t))) + Nc (t) cos((t)) + Ns (t) sin((t)))2 = E A2c M 2 (t)(1 − cos((t)))2 + E Nc2 (t) cos2 ((t)) (5.85) +E Ns2 (t) sin2 ((t)) . We now use the following relations (assuming |(t)| 1 for all t): 4 (t) (1 − cos((t)))2 ≈ 4 E Nc2 (t) = 2N0 W E Ns2 (t) = 2N0 W E M 2 (t) = P.
(5.86)
Thus E
(Y0 (t) − Y (t))2 A2 P = c E 4 (t) + 2N0 W E cos2 ((t)) + sin2 ((t)) 4 A2c Pδ 4 + 2N0 W, = 20
where we have used the fact that
(5.87)
δ 4 1 E (t) = α 4 dα 2δ α=−δ δ4 = . 5
(5.88)
10. (Haykin 1983) Consider a phase modulation (PM) system, with the received signal at the output of the IF filter given by x(t) = Ac cos(2π f c t + k p m(t)) + n(t),
(5.89)
n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t).
(5.90)
where
Assume that the carrier-to-noise ratio of x(t) to be high, the message power to be P, the message bandwidth to extend over [−W, W ], and the transmission bandwidth of the PM signal to be BT . The psd of n(t) is equal to N0 /2 for f c − BT /2 ≤ | f | ≤ f c + BT /2 and zero elsewhere. (a) Find the output SNR. Show all the steps. (b) Determine the figure-of-merit of the system. (c) If the PM system uses a pair of pre-emphasis and de-emphasis filters defined by Hpe ( f ) = 1 + j f / f 0 = 1/Hde ( f )
(5.91)
determine the improvement in the output SNR. • Solution: Let φ(t) = k p m(t) and n(t) = r (t) cos(2π f c t + ψ(t)).
(5.92)
Then the received signal x(t) can be written as (see Fig. 5.8): x(t) = a(t) cos(2π f c t + θ (t)),
(5.93)
a(t)
Fig. 5.8 Phasor diagram for x(t)
r(t) ψ(t) Ac
φ(t) θ(t)
5 Noise in Analog Modulation
311
where θ (t) = φ(t) + tan−1
r (t) sin(ψ(t) − φ(t)) . Ac + r (t) cos(ψ(t) − φ(t))
(5.94)
The phase detector consists of the cascade of a hard limiter, bandpass filter, frequency discriminator, and an integrator. The hard limiter and bandpass filter remove envelope variations in x(t) to yield x1 (t) = cos(2π f c t + θ (t)).
(5.95)
The frequency discriminator (cascade of differentiator, envelope Detector, and dc blocking capacitor) output is x2 (t) = dθ (t)/dt,
(5.96)
2π f c + dθ (t)/dt > 0,
(5.97)
assuming that
for all t. The output of the integrator is x3 (t) = θ (t)
r (t) sin(ψ(t) − φ(t)) Ac = φ(t) + n s (t)/Ac ,
≈ φ(t) +
(5.98)
where we have assumed that Ac r (t) (high carrier-to-noise ratio). It is reasonable to assume that the psd of n s (t) to be identical to n s (t) and is equal to N0 for f c − BT /2 < | f | < f c + BT /2 (5.99) S Ns ( f ) = 0 elsewhere. The noise power at the output of the postdetection (baseband) lowpass filter is 2N0 W/A2c . The output SNR is SNR O =
k 2p P A2c 2N0 W
.
(5.100)
The average power of the PM signal is A2c /2 and the average noise power in the message bandwidth is N0 W . Therefore, the channel signal-to-noise ratio is SNRC =
A2c . 2N0 W
(5.101)
312
5 Noise in Analog Modulation
Therefore, the figure-of-merit of the receiver is SNR O = k 2p P. SNRC
(5.102)
When pre-emphasis and de-emphasis is used, the noise psd at the output of the de-emphasis filter is S No ( f ) =
N0 A2c (1 + ( f / f 0 )2 )
| f | < W.
(5.103)
Therefore, the noise power at the output of the de-emphasis filter is
W f =−W
S No ( f ) d f = 2N0 f 0 tan−1 (W/ f 0 )/A2c .
(5.104)
Therefore, the improvement in the output SNR is D=
W/ f 0 . tan−1 (W/ f 0 )
(5.105)
11. (Haykin 1983) Suppose that the transfer functions of the pre-emphasis and deemphasis filters of an FM system are scaled as follows: jf H pe ( f ) = k 1 + f0 1 1 . Hde ( f ) = k 1 + j f / f0
(5.106)
The scaling factor k is chosen so that the average power of the emphasized signal is the same as the original message M(t). (a) Find the value of k that satisfies this requirement for the case when the psd of the message is SM ( f ) =
1/ 1 + ( f / f 0 )2 for − W ≤ f ≤ W 0 otherwise.
(5.107)
(b) What is the corresponding value of the improvement factor obtained by using this pair of pre-emphasis and de-emphasis filters. • Solution: The message power is
P=
W
SM ( f ) d f −1 W = 2 f 0 tan . f0 f =−W
(5.108)
The message psd at the output of the pre-emphasis filter is ( SM
k 2 (1+( f / f 0 )2 ) 1+( f / f 0 )2
f) = =
for − W ≤ f ≤ W otherwise
0
k 2 for − W ≤ f ≤ W 0 otherwise.
(5.109)
The message power at the output of the pre-emphasis filter is
P =
W f =−W
SM ( f ) d f = k 2 2W.
(5.110)
Solving for k we get k=
f0 tan−1 W
W f0
1/2 .
(5.111)
The improvement factor due to this pair of pre-emphasis and de-emphasis combination is k 2 (W/ f 0 )3 3[(W/ f 0 ) − tan−1 (W/ f 0 )] (W/ f 0 )2 tan−1 (W/ f 0 ) . = 3[(W/ f 0 ) − tan−1 (W/ f 0 )]
D=
(5.112)
12. Consider an AM receiver using a square-law detector whose output is equal to the square of the input as indicated in Fig. 5.9. The AM signal is defined by s(t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t)
s(t)
x2 (t)
x(t) Squarer
y(t)
(5.113)
Square
LPF rooter
n(t)
Fig. 5.9 Square-law detector in the presence of noise
z(t)
314
5 Noise in Analog Modulation
and n(t) is narrowband noise given by n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t).
(5.114)
Assume that (a) The bandwidth of the LPF is large enough so as to reject only those components centered at 2 f c . (b) The channel signal-to-noise ratio at the receiver input is high. (c) The psd of n(t) is flat with a height of N0 /2 in the range f c − W ≤ | f | ≤ fc + W . Compute the output SNR. • Solution: The squarer output is x 2 (t) = s 2 (t) + n 2 (t) + 2s(t)n(t) (1 + cos(4π f c t)) = A2c [1 + μ cos(2π f m t)]2 2 t)) (1 + cos(4π f (1 − cos(4π f c t)) c +n 2c (t) + n 2s (t) 2 2 −2n c (t)n s (t) cos(2π f c t) sin(2π f c t) (1 + cos(4π f c t)) +2 Ac n c (t)[1 + μ cos(2π f m t)] 2 −2 Ac n s (t)[1 + μ cos(2π f m t)] cos(2π f c t) sin(2π f c t). (5.115) The output of the lowpass filter is n 2 (t) n 2s (t) A2c [1 + μ cos(2π f m t)]2 + c + 2 2 2 +Ac n c (t)[1 + μ cos(2π f m t)] A2 ≈ c [1 + μ cos(2π f m t)]2 + Ac n c (t)[1 + μ cos(2π f m t)],(5.116) 2
y(t) =
where we have made the high channel SNR approximation. The output of the square rooter is
A2c [1 + μ cos(2π f m t)]2 + Ac n c (t)[1 + μ cos(2π f m t)] 2 2n c (t) Ac = √ [1 + μ cos(2π f m t)] 1 + Ac (1 + μ cos(2π f m t)) 2
z(t) =
n c (t) Ac ≈ √ [1 + μ cos(2π f m t)] + √ . 2 2
(5.117)
5 Noise in Analog Modulation
315
The signal power is P=
A2c μ2 . 4
(5.118)
PN =
2N0 W . 2
(5.119)
The noise power is
The output SNR is SNR O =
A2c μ2 . 4N0 W
(5.120)
13. (Haykin 1983) Consider the random process X (t) = A + W (t),
(5.121)
where A is a constant and W (t) is a zero-mean WSS random process with psd N0 /2. The signal X (t) is passed through a first-order RC-lowpass filter. Find the expression for the output SNR, with the dc component at the LPF output regarded as the signal of interest. • Solution: The signal component at the output of the lowpass filter is A. Assuming that 1/ f 0 = 2π RC, the noise power at the LPF output is PN =
N0 2
∞ f =−∞
1 N0 . df = 2 1 + ( f / f0 ) 4RC
(5.122)
Therefore the output SNR is: SNR O =
4RC A2 . N0
(5.123)
14. Consider the modified receiver for detecting DSB-SC signals, as illustrated in Fig. 5.10. Note that in this receiver configuration, the IF filter is absent, hence w(t) is zero-mean AWGN with psd N0 /2. The DSB-SC signal s(t) is given by
Fig. 5.10 Modified receiver for DSB-SC signals
y(t)
s(t) + w(t) LPF
cos(2πfc t + θ)
s(t) = Ac m(t) cos(2π f c t + θ ),
(5.124)
where θ is a uniformly distributed random variable in [0, 2π ]. The message m(t) extends over the band [−W, W ] and has a power P. Assume the LPF to be ideal, with unity gain over [−W, W ]. Assume that θ and w(t) are statistically independent. Compute the figure-of-merit of this receiver. Does this mean that the IF filter is redundant? Justify your answer. • Solution: The noise autocorrelation at the output of the multiplier is E[w(t)w(t − τ ) cos(2π f c t + θ ) cos(2π f c (t − τ ) + θ )] cos(2π f c τ ) N0 δ(τ ) = 2 2 N0 δ(τ ). = 4
(5.125)
The average noise power at the LPF output is PN =
N0 N0 W 2W = . 4 2
(5.126)
The signal component at the LPF output is s1 (t) = Ac m(t)/2.
(5.127)
The average signal power at the LPF output is E s12 (t) = A2c E m 2 (t) /4 = A2c P/4.
(5.128)
Therefore, SNR at the LPF output is SNR O =
A2c P . 2N0 W
(5.129)
The average power of the modulated signal is A2 P E s 2 (t) = c . 2
(5.130)
The average noise power in the message bandwidth for baseband transmission is N0 (2W ) = N0 W. PN1 = (5.131) 2 Thus, the channel SNR is
5 Noise in Analog Modulation
317
SNRC =
A2c P . 2N0 W
(5.132)
Hence the figure-of-merit of the receiver is 1. The above result does not imply that the IF filter is redundant. In fact, the main purpose of the IF filter in a superheterodyne receiver is to reject undesired adjacent stations. If we had used a bandpass filter in the RF section for this purpose, the Q-factor of the BPF would have to be very high, since the adjacent stations are spaced very closely. In fact, the required Q-factor of the RF BPF to reject adjacent stations would have been f RF, max (= 1605 kHz) bandwidth of message ( = 10 kHz) = 160.5,
Q≈
(5.133)
which is very high. However, the Q-factor requirement of the IF BPF is f IF (= 455 kHz) bandwidth of message ( = 10 kHz) = 45.5,
Q≈
(5.134)
which is reasonable. Note that the IF filter cannot reject image stations. The image stations are rejected by the RF bandpass filter. The frequency spacing between the image stations is 2 f IF , which is quite large, hence the Q-factor requirement of the RF bandpass filter is not very high. 15. Consider an FM demodulator in the presence of noise. The input signal to the demodulator is given by X (t) = Ac cos 2π f c t + 2π k f
t τ =0
M(τ ) dτ
+ N (t)
(5.135)
where N (t) denotes narrowband noise process with psd as illustrated in Fig. 5.11, and BT is the bandwidth of the FM signal. The psd of the message is SM ( f ) =
a f 2 for | f | < W 0 otherwise.
(5.136)
(a) Write down the expression for the signal at the output of the FM discriminator (cascade of a differentiator, envelope detector, dc blocking capacitor, and a gain of 1/(2π )). No derivation is required. (b) Compute the SNR at the output of the FM demodulator.
318
5 Noise in Analog Modulation SN (f ) N0 /2 f −fc −fc − BT /2
fc
0
fc − BT /2
−fc + BT /2
fc + BT /2
Fig. 5.11 PSD of N (t)
• Solution: The output of the FM discriminator is V (t) = k f M(t) + Nd (t) 1 d Ns (t) = k f M(t) + 2π Ac dt
(5.137)
where Ns (t) has the same statistical properties as Ns (t). Here Ns (t) denotes the quadrature component of N (t), as given by N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(5.138)
Thus the psd of Ns (t) is identical to that of Ns (t) and is given by S Ns ( f ) = =
S N ( f − f c ) + S N ( f + f c ) for | f | < BT /2 0 otherwise N0 /2 for | f | < BT /2 0 otherwise.
(5.139)
The noise psd at the demodulator output is ⎧ 2 ⎨ f S ( f ) for | f | < W S No ( f ) = A2c Ns ⎩ 0 otherwise ⎧ 2 ⎨ f N0 for | f | < W = 2 A2c ⎩ 0 otherwise.
(5.140)
The noise power at the demodulator output is
W f =−W
S No ( f ) d f =
N0 W 3 . 3A2c
(5.141)
5 Noise in Analog Modulation r(t)
319 Lowpass filter H2 (f )
Bandpass filter H1 (f )
mo (t) + no (t)
2 cos(2πfc t)
Fig. 5.12 Demodulation of an AM signal using nonideal filters
The signal power at the demodulator output is
W f =−W
SM ( f ) d f =
2k 2f aW 3 3
.
(5.142)
Therefore, the output SNR is SNR O =
2a A2c k 2f N0
.
(5.143)
16. (Haykin 1983) Consider the AM receiver shown in Fig. 5.12 where r (t) = [1 + m(t)] cos(2π f c t) + w(t),
(5.144)
where w(t) has zero mean and power spectral density N0 /2 and m(t) is WSS with psd S M ( f ). Note that H1 ( f ) and H2 ( f ) in Fig. 5.12 are nonideal filters, h 1 (t) and h 2 (t) are real-valued, and h 1 (t) = 2 h˜ 1 (t) e j 2π fc t .
(5.145)
It is also given that H1 ( f ) has conjugate symmetry about f c , that is H1 ( f c + f ) = H1∗ ( f c − f )
for 0 ≤ f ≤ B.
(5.146)
The two-sided bandwidth of H1 ( f ) about f c is 2B. The one-sided bandwidth of H2 ( f ) about zero frequency is B. The one-sided bandwidth of S M ( f ) may be greater than B. Assume that m(t) and m o (t) do not contain any dc component and any other dc component at the receiver output is removed by a capacitor. As a measure of distortion introduced by H1 ( f ), H2 ( f ) and noise, we define the mean squared error as E = E (m o (t) + n o (t) − m(t))2 .
(5.147)
Note that m o (t), n o (t) and m(t) are real-valued. All symbols have their usual meaning.
(a) Show that E =
∞
f =−∞
|1 − H ( f )|2 S M ( f ) + N0 |H ( f )|2 d f ,
(5.148)
where H ( f ) = H˜ 1 ( f )H2 ( f ).
(5.149)
(b) Given that SM ( f ) =
S0 1 + ( f / f 0 )2
| f | < ∞, f c f 0
(5.150)
and H( f ) =
1 for | f | ≤ B, f c B 0 otherwise
(5.151)
find B such that E is minimized. Ignore the effects of aliasing due to the 2 f c component during demodulation. • Solution: Given that (a) h 1 (t) = 2 h˜ 1 (t) e j 2π fc t .
(5.152)
(b) H1 ( f c + f ) = H1∗ ( f c − f )
for 0 ≤ f ≤ B.
(5.153)
| f | < ∞, f c f 0 .
(5.154)
(c) SM ( f ) =
S0 1 + ( f / f 0 )2
(d) H( f ) =
1 for | f | ≤ B, f c B 0 otherwise.
(5.155)
The Fourier transform of (5.152) is H1 ( f ) = H˜ 1 ( f − f c ) + H˜ 1∗ (− f − f c ),
(5.156)
5 Noise in Analog Modulation
321
where H˜ 1 ( f ) denotes the Fourier transform of the complex envelope, h˜ 1 (t). Substituting (5.156) in (5.153) we obtain H˜ 1 ( f ) = H˜ 1∗ (− f ) ⇒ h˜ 1 (t) = h 1, c (t)
for 0 ≤ f ≤ B (5.157)
that is, h˜ 1 (t) is real-valued. The signal component at the bandpass filter output is y(t) = y˜ (t) e j 2π fc t ,
(5.158)
where y˜ (t) = (1 + m(t)) h˜ 1 (t) = (1 + m(t)) h 1, c (t) = yc (t),
(5.159)
where “” denotes convolution. Substituting (5.159) in (5.158) we get y(t) = yc (t) cos(2π f c t).
(5.160)
The signal component at the lowpass filter output is yc (t) h 2 (t) = (1 + m(t)) h 1, c (t) h 2 (t) = (1 + m(t)) h(t),
(5.161)
where h(t) = h 1, c (t) h 2 (t) ⇒ H ( f ) = H˜ 1 ( f )H2 ( f ).
(5.162)
The dc term (1 h(t)) in (5.161) is removed by a capacitor. Therefore, the message component at the lowpass filter output is m o (t) = m(t) h(t).
(5.163)
Therefore, the message psd at the lowpass filter output is S Mo ( f ) = S M ( f )|H ( f )|2 .
(5.164)
Let us now turn our attention to the noise component. The noise psd at the bandpass filter output is
322
5 Noise in Analog Modulation
SN ( f ) =
N0 |H1 ( f )|2 . 2
(5.165)
The narrowband noise at the bandpass filter output is given by N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t).
(5.166)
The noise component at the output of the lowpass filter is n o (t) = Nc (t) h 2 (t).
(5.167)
The noise psd at the lowpass filter output is S No ( f ) = S Nc ( f )|H2 ( f )|2 ,
(5.168)
where S Nc ( f ) is the psd of Nc (t) and is given by
S N ( f − f c ) + S N ( f + f c ) for | f | < B 0 otherwise 2 (N0 /2) |H1 ( f − f c )| + |H1 ( f + f c )|2 for | f | < B = 0 otherwise ⎧ 2 2 ⎨ (N0 /2) H˜ 1∗ (− f ) + H˜ 1 ( f ) for | f | < B = ⎩ 0 otherwise 2 ˜ N0 H1 ( f ) for | f | < B = (5.169) 0 otherwise,
S Nc ( f ) =
where we have used (5.165), (5.156), and (5.157). Therefore (5.168) becomes 2 S No ( f ) = N0 |H2 ( f )|2 H˜ 1 ( f ) = N0 |H ( f )|2 ,
(5.170)
where we have used (5.162). Now E = E (m o (t) + n o (t) − m(t))2 = E m 2o (t) + n 2o (t) + m 2 (t) − 2m(t)m o (t) ,
(5.171)
where we have assumed that the noise and message are independent, and that the noise has zero mean. Simplifying (5.171) we obtain
5 Noise in Analog Modulation
E =
323
∞
S Mo ( f ) + S N o ( f ) + S M ( f ) − S M ( f )H ( f ) − S M ( f )H ∗ ( f ) d f ∞ |1 − H ( f )|2 S M ( f ) + N0 |H ( f )|2 d f, = f =−∞
(5.172)
f =−∞
where we have used the following relations E m 2o (t) dt = E n 2o (t) dt = E m 2 (t) dt = E [m(t)m o (t) dt] = = = = =
∞
f =−∞ ∞ f =−∞ ∞ f =−∞
S Mo ( f ) d f S No ( f ) d f SM ( f ) d f
∞ E m(t) h(α)m(t − α) dα α=−∞ ∞ h(α)E [m(t) m(t − α)] dα α=−∞ ∞ h(α)R M (α) dα α=−∞ ∞ H ( f )S M ( f ) d f f =−∞ ∞ H ∗ ( f )S M ( f ) d f.
(5.173)
f =−∞
The last two equations in (5.173) are obtained as follows
∞ α=−∞
h(α)R M (α) dα = h(−t) R M (t)|t=0 = h(t) R M (−t)|t=0 ,
(5.174)
which is also equal to the inverse Fourier transform evaluated at t = 0. The second part may be solved by substituting S M ( f ) in (5.154) and H ( f ) in (5.155) into (5.172). Therefore (5.172) becomes
∞
S0 d f + 2N0 B 1 + ( f / f 0 )2 f =B = 2S0 f 0 π/2 − tan−1 (B/ f 0 ) + 2N0 B.
E =2
(5.175)
In order to minimize E , we differentiate with respect to B and set the result to zero. Thus we obtain
324
5 Noise in Analog Modulation 3.2 3 2.8 2.6 2.4
E
2.2 2 1.8 1.6 1.4 1.2 0
1
2
3
4
5
6
7
8
9
10
B (Hz)
Fig. 5.13 Plot of E versus B Fig. 5.14 Block diagram of a system
R
S(t)
W (t)
S1 (t) + W1 (t)
C
dE 1 = −2S0 + 2N0 dB 1 + (B/ f 0 )2 =0 S0 ⇒ B = f0 − 1. N0
(5.176)
The plot of E versus B is shown in Fig. 5.13, for S0 = 1.0 W/Hz, N0 = 0.1 W/Hz, and f 0 = 1.0 Hz. 17. Consider the block diagram in Fig. 5.14. Here S(t) is a narrowband FM signal given by S(t) = Ac cos(2π f c t + θ ) − β Ac sin(2π f c t + θ ) sin(2π f m t + α),
(5.177)
where θ, α are independent random variables, distributed in [0, 2π ). uniformly The psd of W (t) is N0 /2. Compute E S12 (t) E W12 (t) . • Solution: Now S(t) can be written as
S(t) = Ac cos(2π f c t + θ ) β Ac − cos(2π f 1 t + θ − α) 2 β Ac + cos(2π f 2 t + θ + α), 2
(5.178)
where f1 = fc − fm f2 = fc + fm .
(5.179)
The autocorrelation of S(t) is R S (τ ) = E [S(t)S(t − τ )] A2 = c cos(2π f c τ ) 2 β 2 A2c cos(2π f 1 τ ) + 8 β 2 A2c cos(2π f 2 τ ). + 8
(5.180)
The psd of S(t) is the Fourier transform of R S (τ ) in (5.180), and is given by SS ( f ) =
A2c [δ( f − f c ) + δ( f + f c )] 4 β 2 A2c + [δ( f − f 1 ) + δ( f + f 1 )] 16 β 2 A2c + [δ( f − f 2 ) + δ( f + f 2 )] . 16
(5.181)
The psd of S1 (t) is SS1 ( f ) =
A2c |H ( f c )|2 [δ( f − f c ) + δ( f + f c )] 4 β 2 A2c |H ( f 1 )|2 [δ( f − f 1 ) + δ( f + f 1 )] + 16 β 2 A2c |H ( f 2 )|2 [δ( f − f 2 ) + δ( f + f 2 )] , + 16
(5.182)
where 1 1 + (2π f RC)2 ⇒ |H ( f )|2 = |H (− f )|2 . |H ( f )|2 =
(5.183)
The power of S1 (t) is E
S12 (t)
= =
∞
f =−∞ A2c
2
SS1 ( f ) d f
|H ( f c )|2 +
β 2 A2c β 2 A2c |H ( f 1 )|2 + |H ( f 2 )|2 . 8 8 (5.184)
Finally, the power of W1 (t) is E
W12 (t)
= = = = =
1 N0 ∞ df 2 f =−∞ 1 + (2π f RC)2 ∞ 1 N0 dx 4π RC x=−∞ 1 + x 2 ∞ 1 N0 dx 2π RC x=0 1 + x 2 N0 −1 ∞ tan (x) x=0 2π RC N0 . 4RC
(5.185)

Reference

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.
Chapter 6
Pulse Code Modulation
1. A speech signal has a total duration of 10 s. It is sampled at a rate of 8 kHz and then encoded. The SNR$_Q$ must be greater than 40 dB. Calculate the minimum storage capacity required to accommodate this digitized speech signal. Assume that the speech signal has a Gaussian pdf with zero mean and variance $\sigma^2$ and the overload factor is 5. Assume that the quantizer is uniform and of the mid-rise type.
• Solution: We know that
$$\text{SNR}_Q = \frac{12\sigma^2}{\Delta^2}, \tag{6.1}$$
where the step-size is given by
$$\Delta = \frac{2x_{\max}}{2^n}. \tag{6.2}$$
Here $x_{\max} = 5\sigma$ is the maximum input the quantizer can handle and n is the number of bits used to encode any representation level. Substituting for Δ, the SNR$_Q$ becomes
$$\text{SNR}_Q = \frac{12\times 2^{2n}}{100} = 10^4 \;\Rightarrow\; n = 8.17. \tag{6.3}$$
Since n has to be an integer and SNR Q must be greater than 40 dB, we take n = 9. Now, the number of samples obtained in 10 s is 8 × 104 . The number of bits obtained is 9 × 8 × 104 , which is the storage capacity required for the speech signal.
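The same bookkeeping in a few lines of Python (the 40 dB target, the overload factor of 5 and the 8 kHz sampling rate are as given in the problem):

```python
import math

snr_target = 10.0 ** (40 / 10)                 # 40 dB
# SNR_Q = 12 * 2^(2n) / 100, since x_max = 5*sigma, cf. (6.1)-(6.3)
n = math.ceil(0.5 * math.log2(100 * snr_target / 12))
samples = 8000 * 10                            # 8 kHz sampling for 10 s
print(n, n * samples)                          # 9 bits, 720000 bits of storage
```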
2. A PCM system uses a uniform quantizer followed by a 7-bit binary encoder. The bit-rate of the system is 56 Mbps.
(a) What is the maximum message bandwidth for which the system operates satisfactorily?
(b) Determine the SNR$_Q$ when a full-load sinusoidal signal of frequency 1 MHz is applied to the input.
• Solution: The sampling-rate is
$$\frac{56}{7} = 8\ \text{MHz}. \tag{6.4}$$
Hence the maximum message bandwidth is 4 MHz. For a sinusoidal signal of amplitude A, the power is $A^2/2$. Since the sinusoid is stated to be at full-load, $x_{\max} = A$. The step-size is
$$\Delta = \frac{2A}{2^7}. \tag{6.5}$$
The mean-square quantization error is
$$\sigma_Q^2 = \frac{\Delta^2}{12}. \tag{6.6}$$
Hence the SNR$_Q$ is
$$\text{SNR}_Q = \frac{6\times 128^2}{4} \equiv 43.91\ \text{dB}. \tag{6.7}$$
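A one-line check of (6.7) for a full-load sinusoid with an n-bit uniform quantizer (SNR$_Q$ = 1.5 × 2^{2n}):

```python
import math

n = 7
snr_q = 1.5 * 2 ** (2 * n)             # = 6 * (2^n)^2 / 4, cf. (6.7)
print(snr_q, 10 * math.log10(snr_q))   # 24576, about 43.9 dB
```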
3. Twenty-four voice signals are sampled uniformly and then time division multiplexed (TDM). The sampling operation uses flat-top samples with 1 µs duration. The multiplexing operation includes provision for synchronization by adding an extra pulse also of 1 µs duration. The highest frequency component of each voice signal is 3.4 kHz. (a) Assuming a sampling-rate of 8 kHz and uniform spacing between pulses, calculate the spacing (the time gap between the ending of a pulse and the starting of the next pulse) between successive pulses of the multiplexed signal. (b) Repeat your calculation using Nyquist-rate sampling. • Solution: Since the sampling-rate is 8 kHz, the time interval between two consecutive samples of the same message is 106 /8000 = 125 µs. Thus, we can visualize a “frame” of duration 125 µs containing samples of the 24 voice signals plus the extra synchronization pulse. Thus the spacing between the starting points of 2 consecutive pulses is 125/25 = 5 µs. Since the pulsewidth is 1 µs, the spacing between consecutive pulses is 4 µs.
When Nyquist-rate sampling is used, the frame duration is 106 /6800 = 147.059 µs. Therefore the spacing between the starting points of two consecutive pulses is 147.059/25 = 5.88 µs. Hence the spacing between the pulses is 4.88 µs. 4. Twelve different message signals, each with a bandwidth of [−W, W ] where W = 10 kHz, are to be multiplexed and transmitted. Determine the minimum bandwidth required for each method if the modulation/multiplexing method used is (a) SSB, FDM. (b) Pulse amplitude modulation (PAM), TDM with Dirac-delta samples followed by Nyquist pulse shaping. • Solution: Bandwidth required for SSB/FDM is 12 × 10 = 120 kHz. The sampling rate required for each message is 20 kHz. Thus, the output of a PAM transmitter for a single message is 20 ksamples/s. The samples are weighted Dirac-delta functions. The output of the TDM is 240 ksamples/s. Using Nyquist pulse shaping, the transmission bandwidth required is 120 kHz. 5. (Vasudevan 2010) Compute the power spectrum of NRZ unipolar signals as shown in Fig. 6.1. Assume that the symbols are equally likely, statistically independent and WSS. • Solution: Consider the system model in Fig. 6.2. Here the input X (t) is given by
1
A
0
1
1
0
0
0
1 t
0
T
Fig. 6.1 NRZ unipolar signals Fig. 6.2 System model for computing the psd of linearly modulated digital signals
X(t)
A
h(t)
Y (t)
h(t) 0 t T
1 Constellation
330
6 Pulse Code Modulation ∞
X (t) =
Sk δ(t − kT − α),
(6.8)
k=−∞
where Sk denotes a symbol occurring at time kT , drawn from an M-ary PAM constellation, 1/T denotes the symbol-rate and α is a uniformly distributed RV in [0, T ]. Since the symbols are independent and WSS E[Sk S j ] = E[Sk ]E[S j ] = m 2S ,
(6.9)
where m S denotes the mean value of the symbols in the constellation and is equal to mS =
M
P(Si )Si ,
(6.10)
i=1
where Si denotes the ith symbol in the constellation and P(Si ) denotes the probability of occurrence of Si , and M is the number of symbols in the constellation. In the given problem M = 2, P(Si ) = 0.5, and S1 = 0 and S2 = 1, hence m S = 0.5. The output Y (t) is given by ∞
Y (t) =
Sk h(t − kT − α),
(6.11)
k=−∞
where h(t) is the impulse response of the transmit filter. We know that the psd of Y (t) is given by SY ( f ) =
∞ P − m 2S m2 |H ( f )|2 + 2S |H (n/T )|2 δ( f − n/T ), (6.12) T T n=−∞
where H ( f ) is the Fourier transform of h(t) and P=
M
P(Si )Si2 = 0.5
(6.13)
i=1
is the average power of the constellation. Here H ( f ) is equal to H ( f ) = e−j π f T AT sinc ( f T ) ⇒ |H ( f )|2 = A2 T 2 sinc2 ( f T ). Therefore
(6.14)
6 Pulse Code Modulation
331
Table 6.1 The 15-segment μ-law companding characteristic Segment number Step-size Projections of the segment end points onto the horizontal axis 0 1a, 1b 2a, 2b 3a, 3b 4a, 4b 5a, 5b 6a, 6b 7a, 7b
±31 ±95 ±223 ±479 ±991 ±2015 ±4063 ±8159
2 4 8 16 32 64 128 256
SY ( f ) =
Representation levels
0, . . . , ±15 ±16, . . . , ±31 ±32, . . . , ±47 • • •
∞ A2 T 2 sinc2 ( f T ) A2 sinc2 (nT /T )δ( f − n/T ). + 4T 4 k=−∞
Since sinc( f T ) =
1 for f = 0 0 for f = n/T , n = 0
(6.15)
(6.16)
and δ( f − n/T ) is not defined for f = n/T , it is assumed that the product sinc(n)δ( f − n/T ) is zero for n = 0. The reason is as follows. Consider an analogy in the time-domain. If aδ(t) is input to a filter with impulse response h(t), the output is ah(t). This implies that if a = 0, then the output is also zero, which in turn is equivalent to zero input. Thus we conclude that 0δ( f ) is equivalent to zero. Thus SY ( f ) =
A2 A2 T sinc2 ( f T ) + δ( f ). 4 4
(6.17)
6. Consider the 15-segment piecewise linear characteristic used for μ-law companding (μ = 255), shown in Table 6.1. Assume that the input signal lies in the range [−8159, 8159] mV. Compute the representation level corresponding to an input of 500 mV. Note that in Table 6.1 the representation levels are not properly scaled, hence c(xmax ) = xmax . • Solution: From Table 6.1 we see that 500 mV lies in segment 4a. The first representation level corresponding to segment 4a is 64. The step-size for 4a is 32. Hence the end point of the first uniform segment in 4a is 479 + 32 = 511. Thus the representation level is 64.
Fig. 6.3 Piecewise linear approximation of the μ-law characteristic y(x): segment end points at x = α, 3α, 7α, 15α = 1 on the x-axis and at y = 0.25, 0.50, 0.75, 1.0 on the y-axis
7. Consider an 8-segment piecewise linear characteristic which approximates the μ-law for companding. Four segments are in the first quadrant and the other four are in the third quadrant (odd symmetry). The μ-law is given by c(x) =
ln(1 + μx) for 0 ≤ x ≤ 1. ln(1 + μ)
(6.18)
The projections of all segments along the y-axis are spaced uniformly. For the segments in the first quadrant, the projections of the segments along the x-axis are such that the length of the projection is double that of the previous segment. (a) Compute μ. (b) Let e(x) denote the difference between the values obtained by the μ-law and that obtained by the 8-segment approximation. Determine e(x) for the second segment in the first quadrant. • Solution: Let the projection of the first segment in the first quadrant along the x-axis be denoted by α. Then the projection of the second segment in the first quadrant along the x-axis is 2α and so on. The projection of each of the segments along the y-axis is of length 1/4 since c(1) = 1. This is illustrated in Fig. 6.3. Thus α + 2α + 4α + 8α = 1 ⇒ α = 1/15.
(6.19)
Also ln(1 + μα) = ln(1 + μ) ln(1 + μ3α) = ln(1 + μ) ⇒μ= =
1/4 1/2 1/α 15.
(6.20)
Fig. 6.4 Message pdf $f_X(x)$: triangular with peak A at x = 0 and zero at x = ±3; the decision thresholds 0, ±1, ±3 are marked on the x-axis
The expression for the second segment in the first quadrant is
$$y(x) = \frac{15x}{8} + \frac{1}{8} \quad \text{for } 1/15 \le x \le 3/15. \tag{6.21}$$
Hence
$$e(x) = c(x) - y(x) \quad \text{for } 1/15 \le x \le 3/15. \tag{6.22}$$
8. A DPCM system uses a second-order predictor of the form x(n) ˆ = p1 x(n − 1) + p2 x(n − 2).
(6.23)
The autocorrelation of the input is given by: $R_X(0) = 1$, $R_X(1) = 0.8$, $R_X(2) = 0.6$. Compute the optimum forward prediction coefficients and the prediction gain.
• Solution: We know that
$$\begin{bmatrix} R_X(0) & R_X(1) \\ R_X(1) & R_X(0) \end{bmatrix}\begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} R_X(1) \\ R_X(2) \end{bmatrix}. \tag{6.24}$$
Solving for $p_1$ and $p_2$ we get $p_1 = 8/9$, $p_2 = -1/9$. The prediction error is
$$\sigma_E^2 = \sigma_X^2 - p_1R_X(1) - p_2R_X(2) = 3.2/9 = 0.355. \tag{6.25}$$
The prediction gain is
$$\frac{\sigma_X^2}{\sigma_E^2} = \frac{1}{\sigma_E^2} = 2.8125.$$
(6.26)
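The normal equations (6.24) can also be solved numerically as a quick check (a minimal sketch using numpy):

```python
import numpy as np

R = [1.0, 0.8, 0.6]                         # R_X(0), R_X(1), R_X(2)
A = np.array([[R[0], R[1]], [R[1], R[0]]])
b = np.array([R[1], R[2]])
p = np.linalg.solve(A, b)                   # (6.24) -> [8/9, -1/9]
sigma_e2 = R[0] - p @ b                     # (6.25)
print(p, sigma_e2, R[0] / sigma_e2)         # [0.889 -0.111]  0.3556  2.8125
```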
9. Consider the message pdf shown in Fig. 6.4. The decision thresholds of a nonuniform quantizer are at 0, ±1, and ±3. Compute SNR Q at the output of the expander. • Solution: Since the area under the pdf must be unity, we must have A = 1/3. Moreover
Fig. 6.5 Message pdf $f_X(x)$: triangular over [−3, 3] with peak A at x = 0
f X (x) = −x/9 + 1/3.
(6.27)
The representation levels at the output of the expander are at ±0.5 and ±2. Here f X (x) cannot be considered a constant in any decision region, therefore σ 2Q
1
=2
(x − 1/2) f X (x) d x + 2
3
2
x=0
(x − 2)2 f X (x) d x
x=1 1
4x 3 13x 2 x −x 4 + − + 36 27 72 12 4 −x 7x 3 16x 2 +2 + − + 36 27 18 = 0.19444. =2
x=0
3 4x 3 x=1 (6.28)
10. Consider the message pdf shown in Fig. 6.5. (a) Compute the representation levels of the optimum 2-level Lloyd-Max quantizer. (b) What is the corresponding value of σ 2Q ? • Solution: Since the area under the pdf is unity, we must have A = 1/3. Due to symmetry of the pdf, one of the decision thresholds is x1 = 0. The other two decision thresholds are x0 = −3 and x2 = 3. The corresponding representation levels are y1 and y2 which are related by y1 = −y2 . Thus the variance of the quantization error is σ 2Q = 2
3
(x − y2 )2 f X (x) d x,
(6.29)
x=0
which can be minimized by differentiating wrt y2 . Thus 3 y2 = x=0 3 x=0
The numerator of (6.30) is
x f X (x) d x f X (x) d x
.
(6.30)
Fig. 6.6 Block diagram of a system using a prediction filter: w(t) drives an RC lowpass network whose output x(t) is sampled at t = kT to give $x_k$; a 2nd-order prediction filter forms $\hat{x}_k$ and the prediction error is $e_k = x_k - \hat{x}_k$
3
3
x f X (x) d x =
x=0
x(−x/9 + 1/3) d x
x=0
3 = (−x 3 /27 + x 2 /6)x=0 = 1/2.
(6.31)
The denominator of (6.30) is
3
f X (x) d x =
x=0
1 1 1 ×3× = . 2 3 2
(6.32)
Thus y2 = 1. The minimum variance of the quantization error is σ 2Q, min
3
=2 =2
x=0 3
(x − 1)2 f X (x) d x (x − 1)2 (−x/9 + 1/3) d x
x=0
= 0.5.
(6.33)
11. Consider the block diagram in Fig. 6.6. Assume that w(t) is zero-mean AWGN with psd N0 /2 = 0.5 × 10−4 W/Hz. Assume that RC = 10−4 s and T = RC/4 s. Let xˆk = p1 xk−1 + p2 xk−2 .
(6.34)
(a) Compute p1 and p2 so that the prediction error variance, E[ek2 ], is minimized. (b) What is the minimum value of the prediction error variance? • Solution: We know that
SX ( f ) = N0 /2 . 1 + (2π f RC)2
N0 |H ( f )|2 2 (6.35)
Now H( f ) =
1 1 −t/RC e u(t). 1 + j 2π f RC RC
(6.36)
We also use the following relation: |H ( f )|2 h(t) h(−t).
(6.37)
Thus the autocorrelation function of x(t) is N0 (h(τ ) h(−τ )) 2 N0 ∞ = h(t)h(t − τ ) dt 2 t=−∞ ∞ N0 e−t/RC u(t) = 2(RC)2 t=−∞
E[x(t)x(t − τ )] = R X (τ ) =
× e(τ −t)/RC u(t − τ ) dt.
(6.38)
When τ > 0 ∞ N0 e−t/RC e(τ −t)/RC dt E[x(t)x(t − τ )] = 2(RC)2 t=τ τ
N0 exp − . = 4RC RC
(6.39)
When τ < 0 ∞ N0 e−t/RC e(τ −t)/RC dt 2(RC)2 t=0 τ
N0 exp . = 4RC RC
E[x(t)x(t − τ )] =
(6.40)
Thus R X (τ ) is equal to |τ | N0 exp − . E[x(t)x(t − τ )] = R X (τ ) = 4RC RC The autocorrelation of the samples of x(t) is
(6.41)
|mT | N0 exp − 4RC RC 1 ⇒ R X (m) = exp (−|m|/4) 4
E[x(kT )x(kT − mT )] = R X (mT ) =
(6.42) The relevant autocorrelation values are R X (0) = 0.25 R X (1) = 0.25 e−1/4 = 0.1947 R X (2) = 0.25 e−1/2 = 0.1516.
(6.43)
We know that
R X (0) R X (1) R X (1) R X (0)
p1 p2
=
R X (1) . R X (2)
(6.44)
Solving for p1 and p2 we get p1 = 0.7788, p2 = 0. The minimum prediction error is σ 2E = σ 2X − p1 R X (1) − p2 R X (2) = 0.25(1 − e−1/2 ) = 0.09837. (6.45) 12. A delta modulator is designed to operate on speech signals limited to 3.4 kHz. The specifications of the modulator are (a) Sampling-rate is ten times the Nyquist-rate of the speech signal. (b) Step-size = 100 mV. The modulator is tested with a 1 kHz sinusoidal signal. Determine the maximum amplitude of this test signal required to avoid slope overload. • Solution 0.1 f s 2π f c 0.1 × 68 = 2π = 1.08 V.
Am
0,
where K and C0 are constants. Using (6.120) we obtain
(6.130)
6 Pulse Code Modulation
355
C0 = xmax − K ln(xmax ).
(6.131)
Hence the ideal compressor characteristic is c(x) = K ln
x xmax
+ xmax
for x > 0.
(6.132)
Note that when x < 0, (6.119) needs to be applied. The ideal c(x) is not used in practice, since c(x) → −∞ as x → 0+ . 25. The probability density function of a message signal is f X (x) = ae−3|x|
for −∞ < x < ∞.
(6.133)
The message is applied to a 4-representation level Lloyd-Max quantizer. The initial set of representation levels are given by: y1, 0 = −4, y2, 0 = −1, y3, 0 = 1, y4, 0 = 4, where the second subscript denotes the 0th iteration. Compute the next set of decision thresholds and representation levels (in the 1st iteration). • Solution: Since the area under the pdf is unity we must have
∞
ae
−3|x|
dx =
x=−∞
∞
2ae−3x d x
x=0
=1 ⇒ a = 3/2.
(6.134)
Let the decision thresholds in the 1st iteration be given by x0, 1 , . . . , x4, 1 . We have x0, 1 = −∞ y1, 0 + y2, 0 x1, 1 = 2 = −2.5 y2, 0 + y3, 0 x2, 1 = 2 =0 y3, 0 + y4, 0 x3, 1 = 2 = 2.5 x4, 1 = ∞.
(6.135)
The representation levels in the 1st iteration are given by xk, 1 x=x
yk, 1 = xk, 1k−1, 1 x=xk−1, 1
x f X (x) d x f X (x) d x
for 1 ≤ k ≤ 4.
(6.136)
Solving (6.136) we get y1, 1 = −2.83 = −y4, 1 y2, 1 = −0.33194 = −y3, 1 .
(6.137)
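One Lloyd-Max iteration for this pdf can be reproduced numerically. The sketch below assumes the standard Lloyd-Max updates (thresholds at midpoints of the levels, new levels at the conditional means) and uses scipy quadrature; it should return approximately the thresholds ±2.5, 0 and the levels ±2.83, ±0.332 found in (6.135)-(6.137).

```python
import numpy as np
from scipy import integrate

f = lambda x: 1.5 * np.exp(-3.0 * np.abs(x))       # message pdf, (6.133) with a = 3/2

def lloyd_max_step(levels):
    """One iteration: thresholds = midpoints of levels, new levels = centroids."""
    x = [-np.inf] + [0.5 * (levels[i] + levels[i + 1]) for i in range(3)] + [np.inf]
    new_levels = []
    for lo, hi in zip(x[:-1], x[1:]):
        num, _ = integrate.quad(lambda t: t * f(t), lo, hi)
        den, _ = integrate.quad(f, lo, hi)
        new_levels.append(num / den)
    return x[1:-1], new_levels

thresholds, levels = lloyd_max_step([-4.0, -1.0, 1.0, 4.0])
print(thresholds)    # [-2.5, 0.0, 2.5]
print(levels)        # approximately [-2.83, -0.332, 0.332, 2.83]
```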
References

Simon Haykin. Digital Communications. John Wiley & Sons, first edition, 1988.
K. Vasudevan. Digital Communications and Signal Processing, Second edition (CDROM included). Universities Press (India), Hyderabad, www.universitiespress.com, 2010.
Chapter 7
Signaling Through AWGN Channel
1. For the transmit filter with impulse response given in Fig. 7.1, draw the output of the matched filter. Assume that the matched filter has an impulse response p(−t). • Solution: The matched filter output is plotted in Fig. 7.2. 2. Consider a communication system shown in Fig. 7.3. Here, an input bit bk = 1 is represented by the signal x1 (t) and the bit bk = 0 is represented by the signal x2 (t). Note that x1 (t) and x2 (t) are orthogonal, that is
$$\int_{t=0}^{T} x_1(t)x_2(t)\,dt = 0. \tag{7.1}$$
(a) It is given that the impulse response of the transmit filter is
$$p(t) = \begin{cases} 1 & \text{for } 0 \le t \le T/4 \\ 0 & \text{otherwise,} \end{cases} \tag{7.2}$$
where 1/T denotes the bit-rate of the input data stream $b_k$. The “bit manipulator” converts the input bit $b_k = 0$ into a sequence of Dirac-delta functions weighted by symbols from a binary constellation. Similarly for input bit $b_k = 1$. Thus the output of the bit manipulator is given by
$$y_1(t) = \sum_{k=-\infty}^{\infty} a_k\,\delta(t - kT/4), \tag{7.3}$$
where ak denotes symbols from a binary constellation. The final PCM signal y(t) is
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 K. Vasudevan, Analog Communications, https://doi.org/10.1007/978-3-030-50337-6_7
357
358
7 Signaling Through AWGN Channel
y(t) = Σ_{k=−∞}^{∞} ak p(t − kT/4).    (7.4)

[Fig. 7.1 Impulse response of the transmit filter.]

[Fig. 7.2 Matched filter output.]
Note that the symbol-rate of ak is 4/T, that is, four times the input bit-rate. Draw the constellation and write down the sequence of symbols that are generated corresponding to an input bit bk = 0 and an input bit bk = 1.

(b) The received signal is given by

u(t) = y(t) + w(t),    (7.5)
where y(t) is the transmitted signal and w(t) is a sample function of a zero-mean AWGN process with psd N0/2. Compute the mean and variance of z1 and z2, given that 1 (x1(t)) was transmitted in the interval [0, T]. Also compute cov(z1, z2).

(c) Derive the detection rule for the optimal detector.

(d) Derive the average probability of error.
[Fig. 7.3 Block diagram of a communication system. Transmitter: the input bits bk from the quantizer enter a bit manipulator producing y1(t), which is passed through the transmit filter to give the PCM signal y(t). Receiver: u(t) = y(t) + w(t) is applied to the filters x1(−t) and x2(−t), whose outputs are sampled at t = kT to give z1 and z2, which feed the optimum detector. The waveforms x1(t) and x2(t) take the values ±A/2 over [0, T]: x1(t) = A/2 on [0, T/2) and −A/2 on [T/2, T]; x2(t) = A/2 on [0, T/4), −A/2 on [T/4, 3T/4) and A/2 on [3T/4, T].]
• Solution: The impulse response of the transmit filter is

p(t) = 1 for 0 ≤ t ≤ T/4
     = 0 otherwise.    (7.6)
The input bit 0 gets converted to the symbol sequence: {A/2, −A/2, −A/2, A/2}.
(7.7)
The input bit 1 gets converted to the symbol sequence: {A/2, A/2, −A/2, −A/2}.
(7.8)
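To make the mapping concrete, here is a small Python sketch that builds the PCM waveform y(t) of (7.4) from an input bit stream using the symbol sequences (7.7) and (7.8); the amplitude A, the number of samples per T/4 interval, and the function name are illustrative assumptions rather than values from the text.

```python
import numpy as np

# A small illustration of the bit manipulator and transmit filter of Fig. 7.3,
# assuming A = 2 and 8 samples per T/4 interval (illustrative values).
A = 2.0
ns = 8                                        # samples per T/4 interval

seq0 = np.array([+A/2, -A/2, -A/2, +A/2])     # symbols for input bit 0, see (7.7)
seq1 = np.array([+A/2, +A/2, -A/2, -A/2])     # symbols for input bit 1, see (7.8)

def pcm_waveform(bits):
    """Map each input bit to its four-symbol sequence and apply the
    rectangular transmit filter p(t) of duration T/4, as in (7.4)."""
    symbols = np.concatenate([seq1 if b else seq0 for b in bits])
    return np.repeat(symbols, ns)             # rectangular pulse = sample-and-hold

y = pcm_waveform([1, 0, 1, 1])                # waveform for the bit stream 1, 0, 1, 1
print(y[:4 * ns])                             # samples spanning the first bit (x1(t))
```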
Given that 1 (x1 (t)) has been transmitted in the interval [0, T ], the received signal can be written as u(t) = x1 (t) + w(t).
(7.9)
The output of the upper MF is

z1 = u(t) ⋆ x1(−t)|_{t=0}
   = ∫_{τ=−∞}^{∞} u(τ) x1(−t + τ) dτ |_{t=0}
   = ∫_{t=0}^{T} u(t) x1(t) dt
   = A^2 T/4 + w1,    (7.10)
where

w1 = ∫_{t=0}^{T} w(t) x1(t) dt.    (7.11)
Similarly

z2 = u(t) ⋆ x2(−t)|_{t=0} = ∫_{t=0}^{T} u(t) x2(t) dt = w2,    (7.12)
where we have used the fact that x1(t) and x2(t) are orthogonal and

w2 = ∫_{t=0}^{T} w(t) x2(t) dt.    (7.13)
Since w(t) is zero mean, w1 and w2 are also zero mean. Since w1 and w2 are obtained by passing the Gaussian process w(t) through LTI filters (with impulse responses x1(−t) and x2(−t)), they are Gaussian distributed RVs. Moreover
E[w1 w2] = E[ ∫_{t=0}^{T} w(t) x1(t) dt ∫_{τ=0}^{T} w(τ) x2(τ) dτ ]
         = ∫_{t=0}^{T} ∫_{τ=0}^{T} x1(t) x2(τ) (N0/2) δ(t − τ) dt dτ
         = ∫_{t=0}^{T} x1(t) x2(t) (N0/2) dt
         = 0.    (7.14)
Thus w1 and w2 are uncorrelated, and being Gaussian, they are also statistically independent. This implies that z1 and z2 are also statistically independent. The variance of w1 and w2 is

E[w1^2] = E[ ∫_{t=0}^{T} w(t) x1(t) dt ∫_{τ=0}^{T} w(τ) x1(τ) dτ ]
        = ∫_{t=0}^{T} ∫_{τ=0}^{T} x1(t) x1(τ) (N0/2) δ(t − τ) dt dτ
        = ∫_{t=0}^{T} x1^2(t) (N0/2) dt
        = N0 A^2 T/8
        = E[w2^2] = var(z1) = var(z2) = σ^2 (say).    (7.15)
The mean values of z1 and z2 are (given that 1 was transmitted)

E[z1] = A^2 T/4 = m1,1 (say)
E[z2] = 0 = m2,1 (say),    (7.16)
where m1,1 and m2,1 denote the means of z1 and z2, respectively, when 1 is transmitted. Similarly it can be shown that when 0 is transmitted

E[z1] = 0 = m1,0 (say)
E[z2] = A^2 T/4 = m2,0 (say).    (7.17)
Let

z = [z1  z2]^T.    (7.18)
The optimum (MAP) detector computes the a posteriori probabilities

P(j|z)    for j = 0, 1    (7.19)

and decides in favor of that bit for which the probability is maximum. Using Bayes’ rule we have

P(j|z) = fZ(z|j) P(j) / fZ(z),    (7.20)
where P(j) denotes the probability that bit j (j = 0, 1) was transmitted. Since P(j) = 0.5 and fZ(z) are independent of j, they can be ignored, and computing the maximum of P(j|z) is equivalent to computing the maximum of fZ(z|j). This is the maximum likelihood (ML) rule

max_j fZ(z|j)    for j = 0, 1.    (7.21)
Since z1 and z2 are statistically independent, (7.21) can be written as

max_j fZ1(z1|j) fZ2(z2|j)
⇒ max_j (1/(2πσ^2)) exp( −[(z1 − m1,j)^2 + (z2 − m2,j)^2] / (2σ^2) ).    (7.22)
Taking the natural logarithm and ignoring constants, (7.22) reduces to

min_j [ (z1 − m1,j)^2 + (z2 − m2,j)^2 ]    for j = 0, 1.    (7.23)
To compute the average probability of error we first compute the probability of detecting 0 when 1 is transmitted (P(0|1)). This happens when

(z1 − m1,0)^2 + (z2 − m2,0)^2 < (z1 − m1,1)^2 + (z2 − m2,1)^2.    (7.24)

Let

e1 = m1,1 − m1,0
e2 = m2,1 − m2,0
e1^2 + e2^2 = d^2.    (7.25)
Thus the detector makes an error when

2 e1 w1 + 2 e2 w2 < −d^2.    (7.26)

Let

Z = 2 e1 w1 + 2 e2 w2.    (7.27)
Then

E[Z] = 2 e1 E[w1] + 2 e2 E[w2] = 0
E[Z^2] = 4 e1^2 σ^2 + 4 e2^2 σ^2 = 4 d^2 σ^2 = σZ^2 (say).    (7.28)
Thus the probability of detecting 0 when 1 is transmitted is

P(Z < −d^2) = ∫_{z=−∞}^{−d^2} (1/(σZ √(2π))) exp(−z^2/(2σZ^2)) dz
            = (1/2) erfc( d^2/(σZ √2) )
            = (1/2) erfc( √(d^2/(8σ^2)) ).    (7.29)
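The result in (7.29) is easy to sanity-check by simulation. The Python sketch below draws z1 and z2 according to (7.15)–(7.16), applies the minimum-distance rule (7.23), and compares the simulated error rate with (7.29); the numerical values of A, T and N0 are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.special import erfc

# Monte Carlo sanity check of (7.29); A, T and N0 are illustrative values.
rng = np.random.default_rng(1)
A, T, N0 = 1.0, 1.0, 0.1
E = A**2 * T / 4                      # A^2 T / 4, the nonzero mean in (7.16)
sigma2 = N0 * A**2 * T / 8            # variance of w1 and w2 from (7.15)

m1 = np.array([E, 0.0])               # means (m1,1, m2,1) when bit 1 is sent
m0 = np.array([0.0, E])               # means (m1,0, m2,0) when bit 0 is sent

n_trials = 200_000
z = m1 + rng.normal(scale=np.sqrt(sigma2), size=(n_trials, 2))   # bit 1 sent

# Minimum-distance (ML) rule (7.23): decide 0 if z is closer to m0 than to m1.
errors = np.sum((z - m0)**2, axis=1) < np.sum((z - m1)**2, axis=1)
p_sim = np.mean(errors)

d2 = np.sum((m1 - m0)**2)             # d^2 = e1^2 + e2^2 from (7.25)
p_theory = 0.5 * erfc(np.sqrt(d2 / (8 * sigma2)))
print(p_sim, p_theory)                # the two values should be close
```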
3. Consider the passband PAM system in Fig. 7.4. The bits 1 and 0 from the quantizer are equally likely. The signal b(t) is given by

b(t) = Σ_{k=−∞}^{∞} ak δ(t − kT),    (7.30)
where ak denotes a symbol at time kT taken from the binary constellation shown in Fig. 7.4 and δ(t) is the Dirac-delta function. The mapping of the bits to symbols is also shown. The transmit filter is a pulse corresponding to the root-raised cosine (RRC) spectrum. The roll-off factor of the raised cosine (RC) spectrum
[Fig. 7.4 Block diagram of a passband PAM system. Constellation: bit 0 → A0 = −d, bit 1 → A1 = d. Transmitter: the input bit stream from the quantizer enters a bit manipulator producing b(t), which is passed through the transmit filter p(t) to give y(t) and then multiplied by cos(2πfc t + θ) to give s(t). Receiver: s1(t) is multiplied by cos(2πfc t + θ) to give y1(t), filtered by the matched filter p(−t), sampled at t = kT to give r(kT), and applied to the optimum detector, which produces the output bit stream.]
(from which the root-raised cosine spectrum is obtained) is α = 0.5. Assume that the bit-rate 1/T = 1 kbps, the energy of p(t) = 2, θ is a uniformly distributed random variable in [0, 2π] and θ and w(t) are statistically independent. The received signal is given by s1 (t) = s(t) + w(t),
(7.31)
where w(t) is zero-mean AWGN with psd N0/2.

(a) Compute the two-sided bandwidth (bandwidth on either side of the carrier) of the psd of the transmitted signal s(t).

(b) Derive the expression for r(kT).

(c) Derive the ML detection rule. Start from the MAP rule

max_j P(Aj|rk)    for j = 0, 1,    (7.32)
where for brevity rk = r(kT).

(d) Assume that the transmitted bit at time kT is 1. Compute the probability of making an erroneous decision, in terms of d and N0.

• Solution: The two-sided bandwidth is

B = 2 × 500 × (1 + α) = 1500 Hz.    (7.33)
The signal at the multiplier output is given by

s1(t) cos(2πfc t + θ) = (y(t)/2) (1 + cos(4πfc t + 2θ)) + w1(t),    (7.34)

where

w1(t) = w(t) cos(2πfc t + θ).    (7.35)
The autocorrelation of w1(t) is

E[w1(t) w1(t − τ)] = E[w(t) w(t − τ)] E[cos(2πfc t + θ) cos(2πfc(t − τ) + θ)]
                   = (N0/2) δ(τ) (1/2) cos(2πfc τ)
                   = (N0/4) δ(τ).    (7.36)

Since the MF eliminates the signal at 2fc, the effective signal at the output of the multiplier is
y1(t) = (1/2) Σ_{i=−∞}^{∞} ai p(t − iT) + w1(t).    (7.37)
The MF output is

r(t) = y1(t) ⋆ p(−t)
     = (1/2) Σ_{i=−∞}^{∞} ai g(t − iT) + z(t),    (7.38)
where ⋆ denotes convolution and

g(t) = p(t) ⋆ p(−t)
z(t) = w1(t) ⋆ p(−t).    (7.39)
Note that g(t) is a pulse corresponding to the raised cosine spectrum, hence it satisfies the Nyquist criterion for zero intersymbol interference (ISI). The MF output sampled at time kT is

r(kT) = (1/2) ak g(0) + z(kT) = ak + z(kT).    (7.40)
The above equation can be written more concisely as r k = ak + z k .
(7.41)
Note that zk is a zero-mean Gaussian random variable with variance

E[zk^2] = (N0/4) g(0) = N0/2 = σ^2 (say).    (7.42)
At time kT given that ak = A j was transmitted, the mean value of rk is E[rk |A j ] = A j .
(7.43)
Note that

E[rk] = Σ_{j=0}^{1} E[rk|Aj] P(Aj) = 0.    (7.44)
In other words, the unconditional mean is zero. At time kT , the MAP detector computes the probabilities P(A j |rk ) for j = 0, 1 and decides in favor of that symbol for which the probability is maximum. Using Bayes’ rule, the MAP detector can be re-written as
max_j fRk|Aj(rk|Aj) P(Aj) / fRk(rk)    for j = 0, 1,    (7.45)
where fRk|Aj(rk|Aj) is the conditional pdf of rk given that Aj was transmitted. Since the symbols are equally likely, P(Aj) = 1/2 is independent of j. Moreover fRk(rk) is also independent of j. Thus the MAP detector reduces to an ML detector given by

max_j fRk|Aj(rk|Aj)    for j = 0, 1.    (7.46)
Substituting for the conditional pdf we get

max_j (1/(σ√(2π))) e^{−(rk − Aj)^2/(2σ^2)}    for j = 0, 1.    (7.47)
Ignoring constants and noting that maximizing e^x is equivalent to maximizing x, the ML detection rule simplifies to

max_j −(rk − Aj)^2    for j = 0, 1.    (7.48)
Since maximizing −x is equivalent to minimizing x, the ML detection rule can be rewritten as

min_j (rk − Aj)^2    for j = 0, 1.    (7.49)
Given that 1 was transmitted at time kT, the ML detector makes an error when

(rk − A0)^2 < (rk − A1)^2
⇒ (zk + d1)^2 < zk^2
⇒ zk^2 + d1^2 + 2 zk d1 < zk^2
⇒ 2 zk d1 < −d1^2
⇒ zk < −d1/2,    (7.50)
where

d1 = A1 − A0 = 2d.    (7.51)

Thus the probability of detecting 0 when 1 is transmitted is
[Fig. 7.5 Noise samples at the output of a root-raised cosine filter: w(t) is passed through p(−t) to give z(t), which is sampled at t = kT to give z(kT).]
P(zk < −d1/2) = ∫_{zk=−∞}^{−d1/2} (1/(σ√(2π))) e^{−zk^2/(2σ^2)} dzk
              = (1/2) erfc( √(d1^2/(8σ^2)) )
              = (1/2) erfc( √(d^2/N0) ).    (7.52)
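As with problem 2, (7.52) can be sanity-checked by simulating the discrete-time model rk = ak + zk of (7.41) with σ^2 = N0/2 from (7.42); the Python sketch below uses illustrative values of d and N0 that are not taken from the text.

```python
import numpy as np
from scipy.special import erfc

# Quick check of (7.52) for the binary constellation {-d, +d}, using the
# discrete-time model r_k = a_k + z_k with sigma^2 = N0/2; d and N0 are
# illustrative values.
rng = np.random.default_rng(2)
d, N0 = 1.0, 0.5
sigma = np.sqrt(N0 / 2)

n_trials = 500_000
a = d * rng.choice([-1.0, 1.0], size=n_trials)     # equally likely symbols
r = a + rng.normal(scale=sigma, size=n_trials)     # matched-filter output samples

# The minimum-distance rule (7.49) reduces to a sign decision here.
a_hat = np.where(r >= 0, d, -d)
p_sim = np.mean(a_hat != a)
p_theory = 0.5 * erfc(np.sqrt(d**2 / N0))
print(p_sim, p_theory)                             # the two values should be close
```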
5. Consider the block diagram in Fig. 7.5. Here w(t) is a sample function of a zero-mean AWGN process with psd N0/2. Let p(t) denote the pulse corresponding to the root-raised cosine spectrum. Let P(f) denote the Fourier transform of p(t) and let the energy of p(t) be 2. Assume that w(t) is WSS. Compute E[z(kT) z(kT − mT)].

• Solution: The psd of z(t) is

SZ(f) = (N0/2) |P(f)|^2,    (7.53)
where |P(f)|^2 is the raised-cosine spectrum. Note that since w(t) is WSS, z(t) is also WSS. This implies that the autocorrelation of z(t) is

E[z(t) z(t − τ)] = RZ(τ) = (N0/2) g(τ),    (7.54)
where g(τ) is the inverse Fourier transform of the raised cosine spectrum. Therefore

E[z(kT) z(kT − mT)] = RZ(mT) = (N0/2) g(mT)
                    = N0 for m = 0
                    = 0 otherwise.    (7.55)
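A rough numerical illustration of this whitening property is sketched below in Python: white noise is filtered by a root-raised-cosine response built in the frequency domain, and the T-spaced samples are shown to be (nearly) uncorrelated. The oversampling factor, FFT length, and the unit-area normalization of the raised-cosine spectrum (so that g(0) = 1 here rather than 2 as in the text) are assumptions made for the sketch, so only the relative behaviour of the lags should be read off.

```python
import numpy as np

# Rough numerical illustration of (7.55): white noise filtered by a
# root-raised-cosine (RRC) response gives T-spaced samples that are (nearly)
# uncorrelated. Assumptions: T = 1, alpha = 0.5, N0 = 2, unit-area RC spectrum.
L = 8                 # samples per symbol interval T
N = 1 << 16           # number of noise samples
alpha = 0.5
N0 = 2.0

f = np.fft.fftfreq(N, d=1.0 / L)     # frequency in cycles per T
RC = np.zeros(N)
flat = np.abs(f) <= (1 - alpha) / 2
roll = (np.abs(f) > (1 - alpha) / 2) & (np.abs(f) < (1 + alpha) / 2)
RC[flat] = 1.0
RC[roll] = 0.5 * (1 + np.cos(np.pi / alpha * (np.abs(f[roll]) - (1 - alpha) / 2)))
P = np.sqrt(RC)                      # RRC response, so |P(f)|^2 is the RC spectrum

rng = np.random.default_rng(3)
w = rng.normal(scale=np.sqrt(N0 / 2 * L), size=N)   # discrete white noise, psd N0/2
z = np.fft.ifft(np.fft.fft(w) * P).real             # noise at the RRC filter output
zk = z[::L]                                         # samples spaced by T

for m in range(4):                   # sample autocorrelation at lags 0, 1, 2, 3
    r = np.mean(zk[m:] * zk[:len(zk) - m])
    print(m, round(r, 3))            # roughly (N0/2)*g(0) at m = 0, about 0 otherwise
```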