English | 382 pages | 2020
Mathuranathan Viswanathan
Wireless Communication Systems in Matlab Second Edition June 2020
Copyright © 2020 Mathuranathan Viswanathan. All rights reserved.
Wireless Communication Systems in Matlab, Second Edition
Copyright ©2020 Mathuranathan Viswanathan
Cover design ©2020 Mathuranathan Viswanathan
Cover art ©2020 Mathuranathan Viswanathan
Previous editions (black & white, color) ©2018, ©2019
All rights reserved.

MATLAB is a registered trademark of The MathWorks, Inc.

No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the author, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to [email protected]

https://www.gaussianwaves.com

ISBN: 9798648350779 (paperback, color print)
ISBN: 9798648523210 (paperback, black & white print)

The author has used his best endeavors to ensure that the URLs for external websites referred to in this book are correct and active at the time of publishing. However, the author bears no responsibility for the referred websites, and gives no guarantee that they will remain live or that the content is or will remain appropriate.
Dedicated to Advaith
Preface
Introduction

There exist many textbooks that provide an in-depth treatment of various topics in wireless communication systems. Most of them emphasize different theoretical aspects of the design and performance analysis of wireless communication systems. Only a handful of books provide insight into how these techniques can be modeled and simulated. Predominantly, such books rely on the sophisticated built-in functions or toolboxes already available in software like Matlab. These built-in functions or toolboxes hide a lot of background computation from the user, thereby making it difficult, especially for a learner, to understand how certain techniques are actually implemented inside those functions. In this book, I intend to show how the theoretical aspects of a wireless communication system can be translated into simulation models, using elementary matrix operations in Matlab. Most of the simulation models shown in this book do not use any of the built-in communication toolbox functions. This gives a practicing engineer the opportunity to understand the basic implementation aspects of modeling the various building blocks of a wireless system.
Intended audience

I intend the book to be used primarily by undergraduate and graduate students in the electrical engineering discipline who wish to learn the basic implementation aspects of a wireless system. I assume that the reader has a fair understanding of the fundamentals of programming in Matlab. Readers may consult authoritative textbooks and documentation that cover those topics.
Organization of topics

The theoretical aspects of wireless communication systems are kept brief. For each topic discussed, a short theoretical background is provided along with implementation details in the form of Matlab scripts. The Matlab scripts carry inline comments intended to help the reader follow the flow of implementation. As for the topics covered, only the basic aspects of a wireless communication system are treated. The waveform simulation technique and the complex equivalent baseband simulation model are presented on a case-by-case basis. Performance simulations of well known digital modulation techniques, including OFDM, are also provided.
Chapters 1 and 2 introduce some of the basic signal processing concepts that will be used throughout this book. Concepts covered in chapter 1 include: signal generation techniques for well known test signals, interpreting FFT results and extracting magnitude/phase information using FFT, power spectral density, computation of power and energy of a signal, various methods to compute the convolution of two signals, the analytic signal and its applications, and FIR/IIR filters. Chapter 2 covers various random variables for simulating probabilistic systems. The following random variables are covered: uniform, Bernoulli, binomial, exponential, Poisson process, Gaussian, Chi-squared, non-central Chi-squared, Chi, Rayleigh, Ricean and Nakagami. Chapters 3 and 4 cover the concepts of channel capacity and coding theory. Topics like Shannon’s coding theorem, capacity calculation and Shannon’s limit are covered in chapter 3. Chapter 4 includes discussions on error control schemes, linear block codes, and hard and soft decision decoding. Simulation of Hamming codes and their performance with hard and soft decision decoding is also given in this chapter. Chapters 5 and 6 are dedicated to digital modulations that form the heart of any wireless communication system. Complex baseband equivalent models for techniques such as M-ary PAM, M-ary PSK, M-ary QAM and M-ary FSK modulations are developed in these chapters. Chapters 7, 8 and 9 discuss the aspects of intersymbol interference, essentials of equalizers and receiver impairments. Chapter 7 covers various pulse shaping techniques such as rectangular, sinc, raised-cosine (RC) and square-root raised-cosine (SRRC) pulse shaping. The method to construct an eye diagram - an important tool to visualize intersymbol interference, the implementation of a matched filter system, partial response signaling models and precoding techniques are also discussed in Chapter 7.
Chapter 8 contains simulation models for commonly encountered linear equalizers such as the zero-forcing (ZF) and minimum mean squared error (MMSE) equalizers, and the least mean square (LMS) algorithm for adaptive equalizers. A sample performance simulation of BPSK modulation with ZF and MMSE equalizers is given in this chapter. Chapter 9 covers the modeling of receiver impairments, estimation and compensation for such impairments, and a sample performance simulation for such a case. Chapters 10 and 11 are dedicated to large-scale and small-scale models for wireless communication systems. Topics such as the Friis free space propagation model, log distance path loss model, two ray ground reflection model, modeling diffraction loss using the single knife-edge diffraction model and the Hata Okumura empirical model are covered in Chapter 10. Chapter 11 covers various basic aspects of small-scale models for multipath effects. The topics covered include statistical characteristics of multipath channels like the scattering function, power delay profile, Doppler power spectrum, Rayleigh and Rice processes, and models for frequency flat and frequency selective multipath channels. Chapter 12, newly added in this second edition of the book, is dedicated to diversity techniques for multiple antenna systems, especially spatial diversity schemes. Receive diversity methods like maximum ratio combining, equal gain combining and selection combining are discussed. Alamouti space-time coding is taken as an example of a transmit diversity scheme. Chapters 13 and 14 cover two important techniques used in multiuser and multitone communication systems: spread spectrum and OFDM, respectively. Chapter 13 is dedicated to the basics of code sequences in a spread spectrum system, and the modeling and simulation of direct sequence spread spectrum and frequency hopping spread spectrum. Performance simulation models for cyclic prefixed OFDM are given in Chapter 14.
Reference texts are cited in square brackets within the text and the references are provided at the end of each chapter.
Code documentation

Code download and documentation for the Matlab scripts shown in this book are available at the following URL: https://www.gaussianwaves.com/wireless_comm_sys_matlab_second_edition/ Download and unzip the scripts to a location on your computer. Then add the unzipped directory, along with all its sub-directories, to the Matlab search path.
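As a quick sketch of that setup step (the folder path below is only a placeholder, not the actual download location), the unzipped directory can be added to the search path as follows:

```matlab
% Add the unzipped code directory and all its sub-directories
% to the Matlab search path. The folder name is illustrative:
% replace it with the actual location of the unzipped scripts.
codeDir = 'C:\sim_models\wireless_comm_sys_matlab';
addpath(genpath(codeDir));   % genpath expands all sub-directories
savepath;                    % optional: persist the path across sessions
```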
The scripts are thoroughly checked for integrity and they will execute without any error. If you face any issues during execution or spot any mistake in this book, do not hesitate to provide feedback or contact me via the email provided below.
Acknowledgments

Finally, it is a pleasure to acknowledge the help I received while writing this book. I thank my wife Varsha Mathuranathan for getting the manuscript edited so quickly and for her support during the review process that greatly helped improve the manuscript. I also thank the numerous reviewers and readers for their generous comments that helped improve the contents of this book.

Singapore, May 2020
Mathuranathan Viswanathan [email protected]
Contents
Part I Fundamental Concepts

1 Essentials of Signal Processing  3
  1.1 Generating standard test signals  3
    1.1.1 Sinusoidal signals  3
    1.1.2 Square wave  4
    1.1.3 Rectangular pulse  5
    1.1.4 Gaussian pulse  6
    1.1.5 Chirp signal  7
  1.2 Interpreting FFT results - complex DFT, frequency bins and FFTShift  9
    1.2.1 Real and complex DFT  10
    1.2.2 Fast Fourier Transform (FFT)  12
    1.2.3 Interpreting the FFT results  13
    1.2.4 FFTShift  15
    1.2.5 IFFTShift  17
    1.2.6 Some observations on FFTShift and IFFTShift  18
  1.3 Obtaining magnitude and phase information from FFT  18
    1.3.1 Discrete-time domain representation  18
    1.3.2 Representing the signal in frequency domain using FFT  19
    1.3.3 Reconstructing the time domain signal from the frequency domain samples  22
    1.3.4 Plotting frequency response  23
  1.4 Power spectral density  23
  1.5 Power and energy of a signal  25
    1.5.1 Energy of a signal  25
    1.5.2 Power of a signal  26
    1.5.3 Classification of signals  27
    1.5.4 Computation of power of a signal - simulation and verification  27
  1.6 Polynomials, convolution and Toeplitz matrices  29
    1.6.1 Polynomial functions  30
    1.6.2 Representing single variable polynomial functions  30
    1.6.3 Multiplication of polynomials and linear convolution  30
    1.6.4 Toeplitz matrix and convolution  31
  1.7 Methods to compute convolution  33
    1.7.1 Method 1: Brute-force method  33
    1.7.2 Method 2: Using Toeplitz matrix  33
    1.7.3 Method 3: Using FFT to compute convolution  34
    1.7.4 Miscellaneous methods  35
  1.8 Analytic signal and its applications  36
    1.8.1 Analytic signal and Fourier transform  36
    1.8.2 Applications of analytic signal  40
  1.9 Choosing a filter: FIR or IIR: understanding the design perspective  45
    1.9.1 Design specification  45
    1.9.2 General considerations in design  46
  References  51

2 Random Variables - Simulating Probabilistic Systems  53
  2.1 Introduction  53
  2.2 Plotting the estimated PDF  53
  2.3 Univariate random variables  56
    2.3.1 Uniform random variable  56
    2.3.2 Bernoulli random variable  59
    2.3.3 Binomial random variable  59
    2.3.4 Exponential random variable  61
    2.3.5 Poisson process  62
    2.3.6 Gaussian random variable  64
    2.3.7 Chi-squared random variable  65
    2.3.8 Non-central Chi-squared random variable  68
    2.3.9 Chi distributed random variable  70
    2.3.10 Rayleigh random variable  72
    2.3.11 Ricean random variable  72
    2.3.12 Nakagami-m distributed random variable  73
  2.4 Central limit theorem - a demonstration  74
  2.5 Generating correlated random variables  77
    2.5.1 Generating two sequences of correlated random variables  77
    2.5.2 Generating multiple sequences of correlated random variables using Cholesky decomposition  78
  2.6 Generating correlated Gaussian sequences  80
    2.6.1 Spectral factorization method  81
    2.6.2 Auto-Regressive (AR) model  85
  References  89
Part II Channel Capacity and Coding Theory

3 Channel Capacity  93
  3.1 Introduction  93
  3.2 Shannon’s noisy channel coding theorem  93
  3.3 Unconstrained capacity for bandlimited AWGN channel  94
  3.4 Shannon’s limit on spectral efficiency  95
  3.5 Shannon’s limit on power efficiency  96
  3.6 Generic capacity equation for discrete memoryless channel (DMC)  99
    3.6.1 Capacity over binary symmetric channel (BSC)  102
    3.6.2 Capacity over binary erasure channel (BEC)  104
  3.7 Constrained capacity of discrete input continuous output memoryless AWGN channel  105
  3.8 Ergodic capacity over a fading channel  108
  References  110

4 Linear Block Coding  113
  4.1 Introduction to error control coding  113
    4.1.1 Error control schemes  114
    4.1.2 Channel coding – metrics  115
  4.2 Overview of block codes  117
    4.2.1 Error-detection and error-correction capability  118
    4.2.2 Decoders for block codes  119
    4.2.3 Classification of block codes  119
  4.3 Theory of linear block codes  119
  4.4 Optimum soft-decision decoding of linear block codes for AWGN channel  122
  4.5 Sub-optimal hard-decision decoding of linear block codes for AWGN channel  124
    4.5.1 Standard array decoder  125
    4.5.2 Syndrome decoding  128
  4.6 Some classes of linear block codes  132
    4.6.1 Repetition codes  132
    4.6.2 Hamming codes  132
    4.6.3 Maximum-length codes  134
    4.6.4 Hadamard codes  135
  4.7 Performance simulation of soft and hard decision decoding of Hamming codes  136
  References  139
Part III Digital Modulations

5 Digital Modulators and Demodulators: Complex Baseband Equivalent Models  143
  5.1 Passband and complex baseband equivalent model  143
    5.1.1 Complex baseband representation of modulated signal  143
  5.2 Complex baseband representation of channel response  144
  5.3 Modulators for amplitude and phase modulations  145
    5.3.1 Pulse Amplitude Modulation (M-PAM)  146
    5.3.2 Phase Shift Keying Modulation (M-PSK)  146
    5.3.3 Quadrature Amplitude Modulation (M-QAM)  148
  5.4 Demodulators for amplitude and phase modulations  152
    5.4.1 M-PAM detection  152
    5.4.2 M-PSK detection  153
    5.4.3 M-QAM detection  153
    5.4.4 Optimum detector on IQ plane using minimum Euclidean distance  154
  5.5 M-ary FSK modulation and detection  155
    5.5.1 Modulator for M orthogonal signals  155
    5.5.2 M-FSK detection  157
  References  158

6 Performance of Digital Modulations over Wireless Channels  159
  6.1 AWGN channel  159
    6.1.1 Signal to noise ratio (SNR) definitions  159
    6.1.2 AWGN channel model  160
    6.1.3 Theoretical symbol error rates  162
    6.1.4 Unified simulation model for performance simulation  163
  6.2 Fading channels  166
    6.2.1 Linear time invariant channel model and FIR filters  166
    6.2.2 Simulation model for detection in flat fading channel  167
    6.2.3 Rayleigh flat-fading channel  169
    6.2.4 Ricean flat-fading channel  172
  References  176

Part IV Intersymbol Interference and Equalizers

7 Pulse Shaping, Matched Filtering and Partial Response Signaling  179
  7.1 Introduction  179
  7.2 Nyquist criterion for zero ISI  180
  7.3 Discrete-time model for a system with pulse shaping and matched filtering  181
    7.3.1 Rectangular pulse shaping  182
    7.3.2 Sinc pulse shaping  183
    7.3.3 Raised-cosine pulse shaping  185
    7.3.4 Square-root raised-cosine pulse shaping  187
  7.4 Eye diagram  190
  7.5 Implementing a matched filter system with SRRC filtering  190
    7.5.1 Plotting the eye diagram  195
    7.5.2 Performance simulation  196
  7.6 Partial response (PR) signaling models  197
    7.6.1 Impulse response and frequency response of PR signaling schemes  199
  7.7 Precoding  200
    7.7.1 Implementing a modulo-M precoder  202
    7.7.2 Simulation and results  203
  References  203
8 Linear Equalizers  205
  8.1 Introduction  205
  8.2 Linear equalizers  206
  8.3 Symbol-spaced linear equalizer channel model  208
  8.4 Zero-forcing equalizer  209
    8.4.1 Least squares solution  211
    8.4.2 Noise enhancement  212
    8.4.3 Design and simulation of zero-forcing equalizer  212
    8.4.4 Drawbacks of zero-forcing equalizer  217
  8.5 Minimum mean square error (MMSE) equalizer  218
    8.5.1 Alternative solution  220
    8.5.2 Design and simulation of MMSE equalizer  221
  8.6 Equalizer delay optimization  224
  8.7 BPSK modulation with zero-forcing and MMSE equalizers  225
  8.8 Adaptive equalizer: Least mean square (LMS) algorithm  229
  References  231
9 Receiver Impairments and Compensation  233
  9.1 Introduction  233
  9.2 DC offsets and compensation  235
  9.3 IQ imbalance model  235
  9.4 IQ imbalance estimation and compensation  236
    9.4.1 Blind estimation and compensation  236
    9.4.2 Pilot based estimation and compensation  238
  9.5 Visualizing the effect of receiver impairments  239
  9.6 Performance of M-QAM modulation with receiver impairments  240
  References  243
Part V Wireless Channel Models

10 Large-scale Propagation Models 247
 10.1 Introduction 247
 10.2 Friis free space propagation model 248
 10.3 Log distance path loss model 251
 10.4 Two ray ground reflection model 253
 10.5 Modeling diffraction loss 256
  10.5.1 Single knife-edge diffraction model 256
  10.5.2 Fresnel zones 258
 10.6 Hata Okumura model for outdoor propagation 260
 References 262
11 Small-scale Models for Multipath Effects 263
 11.1 Introduction 263
 11.2 Statistical characteristics of multipath channels 264
  11.2.1 Multipath channel models 265
  11.2.2 Scattering function 266
  11.2.3 Power delay profile 268
  11.2.4 Doppler power spectrum 274
  11.2.5 Classification of small-scale fading 275
 11.3 Rayleigh and Rice processes 276
  11.3.1 Probability density function of amplitude 276
  11.3.2 Probability density function of frequency 278
 11.4 Modeling frequency flat channel 281
 11.5 Modeling frequency selective channel 286
  11.5.1 Method of equal distances (MED) to model specified power delay profiles 287
  11.5.2 Simulating a frequency selective channel using TDL model 290
 References 292
12 Multiple Antenna Systems - Spatial Diversity 295
 12.1 Introduction 295
  12.1.1 Diversity techniques 295
  12.1.2 Single input single output (SISO) channel model 296
  12.1.3 Multiple antenna systems channel model 297
 12.2 Diversity and spatial multiplexing 298
  12.2.1 Classification with respect to antenna configuration 298
  12.2.2 Two flavors of multiple antenna systems 298
  12.2.3 Spatial diversity 298
  12.2.4 Spatial multiplexing 300
 12.3 Receive diversity 301
  12.3.1 Received signal model 302
  12.3.2 Maximum ratio combining (MRC) 303
  12.3.3 Equal gain combining (EGC) 305
  12.3.4 Selection combining (SC) 306
  12.3.5 Performance simulation 307
  12.3.6 Array gain and diversity gain 308
 12.4 Transmit diversity 310
  12.4.1 Transmit beamforming with CSIT 310
  12.4.2 Alamouti code for CSIT unknown 312
 References 315
Part VI Multiuser and Multitone Communication Systems

13 Spread Spectrum Techniques 319
 13.1 Introduction 319
 13.2 Code sequences 319
  13.2.1 Sequence correlations 320
  13.2.2 Maximum-length sequences (m-sequences) 321
  13.2.3 Gold codes 324
 13.3 Direct sequence spread spectrum 328
  13.3.1 Simulation of DSSS system 329
  13.3.2 Performance of direct sequence spread spectrum over AWGN channel 334
  13.3.3 Performance of direct sequence spread spectrum in the presence of a Jammer 335
 13.4 Frequency hopping spread spectrum 339
  13.4.1 Simulation model 339
  13.4.2 Binary frequency shift keying (BFSK) 339
  13.4.3 Allocation of frequency channels 344
  13.4.4 Frequency hopping generator 345
  13.4.5 Fast and slow frequency hopping 347
  13.4.6 Simulation code for BFSK-FHSS 348
 References 351
14 Orthogonal Frequency Division Multiplexing (OFDM) 353
 14.1 Introduction 353
 14.2 Understanding the role of cyclic prefix in a CP-OFDM system 353
  14.2.1 Circular convolution and designing a simple frequency domain equalizer 354
  14.2.2 Demonstrating the role of cyclic prefix 354
  14.2.3 Verifying DFT property 357
 14.3 Discrete-time implementation of baseband CP-OFDM 357
 14.4 Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on AWGN channel 360
 14.5 Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on frequency selective Rayleigh channel 362
 References 364
Index 365
Part I
Fundamental Concepts
Chapter 1
Essentials of Signal Processing
Abstract This chapter introduces some of the basic signal processing concepts that will be used throughout this book. The goal is to enable the reader to appreciate these concepts and apply them in building a basic communication system. The topics covered include: techniques for generating well-known test signals such as the rectangular pulse, sine wave, square wave, chirp signal and Gaussian pulse; interpreting FFT results and extracting magnitude/phase information using the FFT; computing the power and energy of a signal; various methods to compute the convolution of two signals; the analytic signal and its applications; and FIR/IIR filters.
1.1 Generating standard test signals In experimental modeling and simulation, simple test inputs such as sinusoids, rectangular pulses, Gaussian pulses, and chirp signals are widely used. These test signals act as stimuli for the simulation model, and the response of the model to these stimuli is of great interest in design verification.
1.1.1 Sinusoidal signals To generate a sine wave, the first step is to fix its frequency f. For example, suppose we wish to generate a f = 10 Hz sine wave whose minimum and maximum amplitudes are −1V and +1V respectively. Given the frequency of the sine wave, the next step is to determine the sampling rate. For baseband signals, sampling is straightforward. By the Nyquist-Shannon sampling theorem, for faithful reproduction of a continuous signal in the discrete domain, one has to sample the signal at a rate fs higher than at least twice the maximum frequency fm contained in the signal (strictly, twice the one-sided bandwidth occupied by a real signal; for a baseband signal, the bandwidth (0 to fm) and the maximum frequency fm in a given band are equivalent). Matlab processes everything digitally. To obtain a smooth, continuous-looking sine wave, the sampling rate must be far higher than the prescribed minimum of twice the frequency f dictated by the Nyquist-Shannon theorem. However, a higher oversampling rate requires more memory for signal storage, so it is advisable to keep the oversampling factor at an acceptable value. An oversampling factor of 30 is chosen in the following code snippet in order to plot a smooth, continuous-like sine wave; thus the sampling rate becomes fs = 30 × f = 30 × 10 = 300 Hz. If a phase shift is desired for the sine wave, specify it too. The resulting plot from the code snippet shown next is given in Figure 1.1.
Program 1.1: sinusoidal_signal.m: Simulate a sinusoidal signal with given sampling rate

f=10; %frequency of sine wave
overSampRate=30; %oversampling rate
fs=overSampRate*f; %sampling frequency
phase = 1/3*pi; %desired phase shift in radians
nCyl = 5; %to generate five cycles of sine wave
t=0:1/fs:nCyl*1/f-1/fs; %time base
g=sin(2*pi*f*t+phase); %replace with cos if a cosine wave is desired
plot(t,g); title(['Sine Wave f=', num2str(f), 'Hz']);
Fig. 1.1: A 10Hz sinusoidal wave with 5 cycles and phase shift 1/3π radians
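The same construction can be cross-checked outside Matlab. The following minimal Python sketch (not from the book; variable names simply mirror Program 1.1) builds the identical time base and verifies the sample count and the 30-sample period implied by the oversampling factor:

```python
import math

# Python cross-check of Program 1.1 (hypothetical sketch, not the book's code)
f = 10                 # frequency of sine wave in Hz
overSampRate = 30      # oversampling rate
fs = overSampRate * f  # sampling frequency = 300 Hz
phase = math.pi / 3    # desired phase shift in radians
nCyl = 5               # number of cycles to generate

n_samples = nCyl * fs // f              # t = 0 : 1/fs : nCyl/f - 1/fs
t = [n / fs for n in range(n_samples)]
g = [math.sin(2 * math.pi * f * ti + phase) for ti in t]
```

With fs = 30 f, one period of the sine spans exactly fs/f = 30 samples, so the sequence g repeats every 30 samples and contains 150 samples for five cycles.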
1.1.2 Square wave The most logical way of transmitting information across a communication channel is through a stream of square pulses – a distinct pulse for ‘0’ and another for ‘1’. Digital signals are graphically represented as square waves with a certain symbol/bit period. Square waves are also used universally in switching circuits, as clock signals synchronizing various blocks of digital circuits, as reference clocks for a given system domain, and so on. A square wave manifests itself as a wide range of harmonics in the frequency domain and can therefore cause electromagnetic interference. Square waves are periodic and contain only odd harmonics when expanded as a Fourier series (whereas signals like sawtooth waves and other real-world signals contain harmonics at all integer multiples of the fundamental). Since a square wave literally expands to an infinite number of odd harmonic terms in the frequency domain, approximating a square wave with a finite number of terms of its Fourier series expansion is another area of interest. Truncating the series gives rise to the Gibbs phenomenon, which manifests as a ringing effect at the corners of the square wave in the time domain.
1.1 Generating standard test signals
5
True square waves are a special class of rectangular waves with 50% duty cycle. Varying the duty cycle of a rectangular wave leads to pulse width modulation, where the information is conveyed by changing the duty cycle of each transmitted rectangular wave. A true square wave can be simply generated by applying the signum function to a periodic sine function:

g(t) = sgn( sin(2π f t) )    (1.1)

where f is the desired frequency of the square wave and the signum function is defined as

sgn(x) = −1 if x < 0;  0 if x = 0;  1 if x > 0

Program 1.2: square_wave.m: Generate a square wave with given sampling rate

f=10; %frequency of sine wave in Hz
overSampRate=30; %oversampling rate
fs=overSampRate*f; %sampling frequency
nCyl = 5; %to generate five cycles of square wave
t=0:1/fs:nCyl*1/f-1/fs; %time base
g = sign(sin(2*pi*f*t));
plot(t,g); title(['Square Wave f=', num2str(f), 'Hz']);
Fig. 1.2: A 10Hz square wave with 5 cycles and 50% duty cycle
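The sgn-of-sine construction of equation 1.1 can be sketched in plain Python as a cross-check (the helper sgn is a hypothetical stand-in for Matlab's sign; this is not the book's code):

```python
import math

def sgn(x):
    # signum function of equation (1.1): returns -1, 0 or +1
    return (x > 0) - (x < 0)

f, overSampRate, nCyl = 10, 30, 5   # same parameters as Program 1.2
fs = overSampRate * f
t = [n / fs for n in range(nCyl * fs // f)]
g = [sgn(math.sin(2 * math.pi * f * ti)) for ti in t]
```

Every sample of g is −1, 0 or +1; the value alternates sign every half period, i.e. every fs/(2f) = 15 samples.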
1.1.3 Rectangular pulse An isolated rectangular pulse of amplitude A and duration T is represented mathematically as

g(t) = A · rect(t/T)    (1.3)

where the rectangular function is defined as

rect(t) = 1 if |t| < 1/2;  1/2 if |t| = 1/2;  0 if |t| > 1/2    (1.4)
The following code simulates a rectangular pulse with the desired pulse width; the resulting plot is shown in Figure 1.3.

Program 1.3: rectangular_pulse.m: Generating a rectangular pulse with desired pulse width

fs=500; %sampling frequency
T=0.2; %width of the rectangular pulse in seconds
t=-0.5:1/fs:0.5; %time base
g=(t > -T/2) .* (t < T/2); %generate the rectangular pulse

>> X(1)
1.1762e-14 (approximately equal to zero)

Note that the indices of the raw FFT output are integers from 1 → N. We need to process them to convert these integers to frequencies; that is where the sampling frequency counts. Each point/bin in the FFT output array is spaced by the frequency resolution ∆f, which is calculated as

∆f = fs/N    (1.23)

where fs is the sampling frequency and N is the FFT size that is considered. Thus, for our example, each point in the array is spaced by the frequency resolution

∆f = fs/N = (32 × fc)/N = 320/256 = 1.25 Hz    (1.24)
The 10 Hz cosine signal will register a spike at the 8th sample (10/1.25 = 8) - located at index 9 in Figure 1.9.

>> abs(X(8:10)) %display samples 7 to 9
ans = 0.0000 128.0000 0.0000

Therefore, using the frequency resolution, the entire frequency axis can be computed as

Program 1.9: Computing the frequency axis for plotting

%calculate frequency bins with FFT
df=fs/N; %frequency resolution
sampleIndex = 0:N-1; %raw index for FFT plot
f=sampleIndex*df; %x-axis index converted to frequencies

Now, plot the absolute value of the FFT against frequencies - the resulting plot is shown in the Figure 1.9.

Program 1.10: Plotting the frequency domain view of a signal

subplot(3,1,2); stem(sampleIndex,abs(X)); %sample values on x-axis
title('X[k]'); xlabel('k'); ylabel('|X(k)|');
subplot(3,1,3); stem(f,abs(X)); %x-axis represent frequencies
title('X[f]'); xlabel('frequencies (f)'); ylabel('|X(f)|');
After the frequency axis is properly transformed with respect to the sampling frequency, we note that the cosine signal has registered a spike at 10 Hz. In addition, it has also registered a spike at the 256 − 8 = 248th sample, which belongs to the negative frequency portion. Since we know the nature of the signal, we can optionally ignore the negative frequencies. The sample at the Nyquist frequency (fs/2) marks the boundary between the positive and negative frequencies.

>> nyquistIndex=N/2+1;
>> X(nyquistIndex-2:nyquistIndex+2).'
ans =
1.0e-13 *
-0.2428 + 0.0404i
-0.1897 + 0.0999i
-0.3784
-0.1897 - 0.0999i
-0.2428 - 0.0404i

Note that the complex numbers surrounding the Nyquist index are complex conjugates and are present at positive and negative frequencies respectively.
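These bin locations can be reproduced with a naive DFT in a few lines of Python (a cross-check written for this discussion, not the book's Matlab code; the O(N²) loop is acceptable for N = 256):

```python
import cmath, math

def dft(x):
    # naive N-point DFT: X[k] = sum_n x[n] * exp(-j*2*pi*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

fc, fs, N = 10, 320, 256                 # values used in the text
x = [math.cos(2 * math.pi * fc * n / fs) for n in range(N)]
X = dft(x)

df = fs / N                              # frequency resolution, 1.25 Hz
k = round(fc / df)                       # 0-based bin of the 10 Hz tone
```

The spike appears at bin k = 8 with magnitude N/2 = 128, and its mirror at bin N − k = 248, matching Figure 1.9; the DC bin is numerically zero.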
Fig. 1.9: FFT magnitude response plotted against - sample index (top) and computed frequencies (bottom)
1.2.4 FFTShift From Figure 1.9, we see that the frequency axis starts with DC, followed by the positive frequency terms, which are in turn followed by the negative frequency terms. To introduce proper order on the x-axis, one can use Matlab's fftshift function, which arranges the frequencies in the order: negative frequencies → DC → positive frequencies. The fftshift function needs to be used carefully when N is odd. For even N, the original order returned by the FFT is as follows (note: here, all indices correspond to Matlab’s indices)
16
1 Essentials of Signal Processing
• X[1] represents the DC frequency component
• X[2] to X[N/2] terms are positive frequency components
• X[N/2 + 1] is the Nyquist frequency (Fs/2) that is common to both positive and negative frequencies. We will consider it as part of the negative frequencies to have the same equivalence with the fftshift function
• X[N/2 + 1] to X[N] terms are considered as negative frequency components

FFTshift shifts the DC component to the center of the spectrum. It is important to remember that the Nyquist frequency at the (N/2+1)th Matlab index is common to both the positive and negative frequency sides. The fftshift command puts the Nyquist frequency on the negative frequency side. This is captured in Figure 1.10.
Fig. 1.10: Role of FFTShift in ordering the frequencies (assumption: N is even)

Therefore, when N is even, the ordered frequency axis is set as

f = ∆f × i = (fs/N) × i,  where i = −N/2, · · · , −1, 0, 1, · · · , N/2 − 1    (1.25)

When N is odd, the ordered frequency axis should be set as

f = ∆f × i = (fs/N) × i,  where i = −(N−1)/2, · · · , −1, 0, 1, · · · , (N−1)/2    (1.26)
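For even N, the reordering and the frequency axis of equation 1.25 can be sketched in a few lines of Python (a language-agnostic illustration assuming even N; not the book's code):

```python
def fftshift_even(X):
    # reorder an even-length FFT output as:
    # negative frequencies -> DC -> positive frequencies
    # (the Nyquist bin at index N/2 lands on the negative-frequency side)
    N = len(X)
    return X[N // 2:] + X[:N // 2]

def ordered_freq_axis_even(N, fs):
    # equation (1.25): f = (fs/N) * i for i = -N/2 ... N/2 - 1
    df = fs / N
    return [df * i for i in range(-N // 2, N // 2)]
```

For example, an 8-point spectrum sampled at fs = 16 Hz has ∆f = 2 Hz and the ordered axis runs from −8 Hz up to +6 Hz.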
The following code snippet computes the fftshift using both the manual method and Matlab's built-in command. The results are plotted by superimposing them on each other. The plot in Figure 1.11 shows that the manual method and the fftshift method are in good agreement. Comparing the bottom plots of Figure 1.9 and Figure 1.11, we see that the ordered frequency axis is more meaningful to interpret.

Program 1.11: Plotting magnitude response using FFTShift

%two-sided FFT with negative frequencies on left and positive
%frequencies on right. Following works only if N is even,
%for odd N see equation above
X1 = [(X(N/2+1:N)) X(1:N/2)]; %order frequencies without using fftShift
X2 = fftshift(X);%order frequencies by using fftshift
df=fs/N; %frequency resolution
sampleIndex = -N/2:N/2-1; %raw index for FFT plot
f=sampleIndex*df; %x-axis index converted to frequencies
%plot ordered spectrum using the two methods
figure;subplot(2,1,1);stem(sampleIndex,abs(X1));hold on;
stem(sampleIndex,abs(X2),'r') %sample index on x-axis
title('Frequency Domain'); xlabel('k'); ylabel('|X(k)|');
subplot(2,1,2);stem(f,abs(X1)); hold on; stem(f,abs(X2),'r')
xlabel('frequencies (f)'); ylabel('|X(f)|');%frequencies on x-axis
Fig. 1.11: Magnitude response of FFT result after applying FFTShift : plotted against sample index (top) and against computed frequencies (bottom)
1.2.5 IFFTShift One can undo the effect of fftshift by employing the ifftshift function. The ifftshift function restores the raw frequency order. If the FFT output is ordered using the fftshift function, then one must restore the frequency components back to the original order before taking the IFFT - the inverse fast Fourier transform. The following statements are equivalent.

X = fft(x,N) %compute X[k]
x = ifft(X,N) %compute x[n]

X = fftshift(fft(x,N)); %take FFT and rearrange frequency order
x = ifft(ifftshift(X),N)% restore raw freq order and then take IFFT
1.2.6 Some observations on FFTShift and IFFTShift When N is odd, and for an arbitrary sequence, the fftshift and ifftshift functions will produce different results. However, when they are used in tandem, the original sequence is restored.

>> x=[0,1,2,3,4,5,6,7,8]
0 1 2 3 4 5 6 7 8
>> fftshift(x)
5 6 7 8 0 1 2 3 4
>> ifftshift(x)
4 5 6 7 8 0 1 2 3
>> ifftshift(fftshift(x))
0 1 2 3 4 5 6 7 8
>> fftshift(ifftshift(x))
0 1 2 3 4 5 6 7 8
When N is even, and for an arbitrary sequence, the fftshift and ifftshift functions will produce the same result. When they are used in tandem, the original sequence is restored.

>> x=[0,1,2,3,4,5,6,7]
0 1 2 3 4 5 6 7
>> fftshift(x)
4 5 6 7 0 1 2 3
>> ifftshift(x)
4 5 6 7 0 1 2 3
>> ifftshift(fftshift(x))
0 1 2 3 4 5 6 7
>> fftshift(ifftshift(x))
0 1 2 3 4 5 6 7
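The index arithmetic behind these observations can be captured in a short Python sketch (not the book's code): fftshift moves the first ceil(N/2) elements to the end, while ifftshift moves the first floor(N/2); the two differ only when N is odd.

```python
def fftshift(x):
    # move the first ceil(N/2) elements of the list to the end
    N = len(x)
    return x[(N + 1) // 2:] + x[:(N + 1) // 2]

def ifftshift(x):
    # move the first floor(N/2) elements to the end (undoes fftshift)
    N = len(x)
    return x[N // 2:] + x[:N // 2]

x_odd = list(range(9))    # [0,...,8], the odd-length example above
x_even = list(range(8))   # [0,...,7], the even-length example above
```

Applied to the two example sequences, these reproduce the Matlab transcripts exactly: the odd-length results differ between the two functions yet compose to the identity, and the even-length results coincide.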
1.3 Obtaining magnitude and phase information from FFT For the discussion here, let's take an arbitrary cosine function of the form x(t) = A cos(2π fc t + φ) and proceed step by step as follows:

• Represent the signal x(t) in a computer (as a discrete-time signal x[n]) and plot it (time domain)
• Represent the signal in the frequency domain using the FFT (X[k])
• Extract magnitude and phase information from the FFT result
• Reconstruct the time domain signal from the frequency domain samples
1.3.1 Discrete-time domain representation Consider a cosine signal of amplitude A = 0.5, frequency fc = 10 Hz and phase φ = π/6 radians (or 30°):

x(t) = 0.5 cos(2π10t + π/6)    (1.27)
In order to represent the continuous time signal x(t) in computer memory (Figure 1.12), we need to sample the signal at a sufficiently high rate (according to the Nyquist sampling theorem). I have chosen an oversampling factor of 32, so that the sampling frequency is fs = 32 × fc, which gives 640 samples for a 2 second duration of the waveform record.

Program 1.12: Representing and storing a signal in computer memory

A = 0.5; %amplitude of the cosine wave
fc=10;%frequency of the cosine wave
phase=30; %desired phase shift of the cosine in degrees
fs=32*fc;%sampling frequency with oversampling factor 32
t=0:1/fs:2-1/fs;%2 seconds duration
phi = phase*pi/180; %convert phase shift in degrees in radians
x=A*cos(2*pi*fc*t+phi);%time domain signal with phase shift
figure; plot(t,x); %plot the signal
Fig. 1.12: Finite record of a cosine signal
1.3.2 Representing the signal in frequency domain using FFT Let's represent the signal in the frequency domain using the FFT function. The FFT function computes the N-point complex DFT. The length of the transformation N should cover the signal of interest; otherwise we will lose some valuable information in the conversion to the frequency domain. However, we can choose a reasonable length if we know about the nature of the signal. For example, the cosine signal of our interest is periodic in nature and is of length 640 samples (for a 2 second duration signal). We can simply use a lower number N = 256 for computing the FFT. In this case, only the first 256 time domain samples will be considered for taking the FFT. We do not need to worry about losing valuable information, as those 256 samples contain a sufficient number of cycles to extract the frequency of the signal.

Program 1.13: Represent the signal in frequency domain using FFT

N=256; %FFT size
X = 1/N*fftshift(fft(x,N));%N-point complex DFT
In the code above, fftshift is used only for obtaining a nice double-sided frequency spectrum that delineates negative frequencies and positive frequencies in order. This transformation is not necessary. A scaling factor 1/N was used to account for the difference between the FFT implementation in Matlab and the text definition of complex DFT as given in equation 1.20.
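The effect of the 1/N scaling can be verified numerically. The small Python cross-check below (a naive DFT standing in for Matlab's fft; not the book's code) shows that the scaled double-sided spectrum of the A = 0.5 cosine peaks at A/2 = 0.25 at ±10 Hz:

```python
import cmath, math

def dft(x):
    # naive DFT standing in for Matlab's fft() in this small example
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

A, fc = 0.5, 10                      # parameters of Program 1.12
fs = 32 * fc
phi = math.radians(30)
x = [A * math.cos(2 * math.pi * fc * n / fs + phi) for n in range(640)]

N = 256                              # FFT size of Program 1.13
X = [Xk / N for Xk in dft(x[:N])]    # 1/N-scaled complex DFT
k = round(fc * N / fs)               # bin of the 10 Hz component
```

Because fc·N/fs = 8 is an integer, the tone falls exactly on a bin and there is no spectral leakage; the peak magnitude A/2 = 0.25 appears at bins k = 8 and N − k = 248.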
1.3.2.1 Extract magnitude of frequency components (magnitude spectrum) The FFT function computes the complex DFT and hence results in a sequence of complex numbers of the form Xre + jXim. The magnitude spectrum is computed as

|X[k]| = √(Xre² + Xim²)    (1.28)

For obtaining a double-sided plot, the ordered frequency axis, obtained using fftshift, is computed based on the sampling frequency, and the magnitude spectrum is plotted (Figure 1.13).

Program 1.14: Extract magnitude information from the FFT result

df=fs/N; %frequency resolution
sampleIndex = -N/2:N/2-1; %ordered index for FFT plot
f=sampleIndex*df; %x-axis index converted to ordered frequencies
stem(f,abs(X)); %magnitudes vs frequencies
xlabel('f (Hz)'); ylabel('|X(k)|');
Fig. 1.13: Extracted magnitude information from the FFT result
1.3.2.2 Extract phase of frequency components (phase spectrum) Extracting the correct phase spectrum is a tricky business. I will show you why it is so. The phase of the spectral components is computed as

∠X[k] = tan⁻¹(Xim/Xre)    (1.29)

Equation 1.29 looks naive, but one should be careful when computing inverse tangents using computers. The obvious choice for implementation seems to be the atan function in Matlab. However, usage of
atan function will prove disastrous unless additional precautions are taken. The atan function computes the inverse tangent over two quadrants only, i.e., it will return values only in the [−π/2, π/2] interval. Therefore, the phase needs to be unwrapped properly. We can simply fix this issue by computing the inverse tangent over all four quadrants using the atan2(Xim, Xre) function. Let's compute and plot the phase information using the atan2 function and see how the phase spectrum looks.

Program 1.15: Extracting phase information from the FFT result using atan2 function

phase=atan2(imag(X),real(X))*180/pi; %phase information
plot(f,phase); %phase vs frequencies
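The four-quadrant atan2 computation, together with the tolerance-threshold masking applied in Program 1.16 below, can be cross-checked in Python (a naive-DFT sketch written for this discussion, not the book's Matlab code). The 10 Hz bin recovers the 30° phase, while near-zero noise bins are masked before calling atan2:

```python
import cmath, math

def dft(x):
    # naive DFT standing in for Matlab's fft()
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

A, fc, fs, N = 0.5, 10, 320, 256
phi = math.pi / 6                        # 30 degrees
x = [A * math.cos(2 * math.pi * fc * n / fs + phi) for n in range(N)]
X = [Xk / N for Xk in dft(x)]

# mask bins that are only floating-point noise before taking atan2
threshold = max(abs(Xk) for Xk in X) / 10000
X2 = [Xk if abs(Xk) >= threshold else 0j for Xk in X]
phase = [math.atan2(Xk.imag, Xk.real) for Xk in X2]
```

Without the masking step, atan2 on the tiny residual bins would return arbitrary angles; with it, only the ±10 Hz bins carry phase (±π/6), and every other bin reports zero.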
Fig. 1.14: Extracted phase information from the FFT result - phase spectrum is noisy

The phase spectrum in Figure 1.14 is completely noisy, which is unexpected. The phase spectrum is noisy due to the fact that the inverse tangents are computed from the ratio of the imaginary part to the real part of the FFT result. Even a small floating-point rounding error will be amplified and manifest incorrectly as useful phase information [5]. To understand this, print the first few samples from the FFT result and observe that they are not absolute zeros (they are very small numbers, on the order of 10⁻¹⁶). Computing inverse tangents of such numbers will give incorrect results.

>> X(1:5)
ans =
1.0e-16 *
-0.7286
-0.3637 - 0.2501i
-0.4809 - 0.1579i
-0.3602 - 0.5579i
0.0261 - 0.495i

>> atan2(imag(X(1:5)),real(X(1:5)))
ans = 3.1416 -2.5391 -2.8244 -2.1441 -1.5181

The solution is to define a tolerance threshold and ignore all the computed phase values that are below the threshold.

Program 1.16: Extracting phase information from FFT - using a tolerance threshold

X2=X;%store the FFT results in another array
%detect noise (very small numbers (eps)) and ignore them
threshold = max(abs(X))/10000; %tolerance threshold
X2(abs(X)<threshold) = 0; %mask out values below the threshold

1.8 Analytic signal and its applications

The spectrum Z(f) of the analytic signal is obtained by suppressing the negative frequency components of the spectrum X(f) of the real-valued signal:

Z(f) = 2X(f) for f > 0;  X(f) for f = 0;  0 for f < 0    (1.70)

The corresponding spectrum of the resulting analytic signal is shown in Figure 1.20(b).
Since the spectrum of the analytic signal is one-sided, the analytic signal will be complex valued in the time domain; hence the analytic signal can be represented in terms of real and imaginary components as z(t) = zr(t) + jzi(t). Since the spectral content is preserved in an analytic signal, it turns out that the real part of the analytic signal in the time domain is essentially the original real-valued signal itself (zr(t) = x(t)). Then, what takes the place of the imaginary part? What accompanies x(t) in the imaginary part of the resulting analytic signal? Summarizing the question as equations:

z(t) = zr(t) + jzi(t)    (1.71)

zr(t) = x(t),  zi(t) = ?    (1.72)
It is interesting to note that the Hilbert transform [15] can be used to find a companion function (the imaginary part in equation 1.72) to a real-valued signal, such that the real signal can be analytically extended from the real axis to the upper half of the complex plane. Denoting the Hilbert transform as HT{·}, the analytic signal is given by

z(t) = zr(t) + jzi(t) = x(t) + jHT{x(t)}    (1.73)

One important property of an analytic signal is that its real and imaginary components are orthogonal:

∫_{−∞}^{∞} zr(t) zi(t) dt = 0    (1.74)
From these discussions, we can see that an analytic signal z(t) for a real-valued signal x(t) can be constructed using two approaches.

• Frequency domain approach: the one-sided spectrum of z(t) is formed from the two-sided spectrum of the real-valued signal x(t) by applying equation 1.70.
• Time domain approach: using the Hilbert transform approach given in equation 1.73.
1.8.1.2 Discrete-time analytic signal

Consider a continuous real-valued signal x(t) that gets sampled at intervals of T seconds, resulting in N real-valued discrete samples x[n], i.e., x[n] = x(nT). The spectrum of the continuous signal is shown in Figure 1.21(a). The spectrum of x[n] that results from the process of periodic sampling is given in Figure 1.21(b). The spectrum of the discrete-time signal x[n] can be obtained by the discrete-time Fourier transform (DTFT):

  X(f) = T ∑_{n=0}^{N-1} x[n] e^{-j2π f nT}   (1.75)
At this point, we would like to construct a discrete-time analytic signal z[n] from the real-valued sampled signal x[n]. We wish the analytic signal to be complex valued, z[n] = zr[n] + j zi[n], and to satisfy the following two desired properties.
• The real part of the analytic signal should be the same as the original real-valued signal:

  zr[n] = x[n]   (1.76)

• The real and imaginary parts of the analytic signal should satisfy the property of orthogonality:

  ∑_{n=0}^{N-1} zr[n] zi[n] = 0   (1.77)
1 Essentials of Signal Processing
Fig. 1.21: (a) Spectrum of continuous signal x(t), (b) spectrum of x[n] resulting from periodic sampling, and (c) periodic one-sided spectrum of the analytic signal z[n].
In the frequency domain approach for the continuous-time case, we saw that an analytic signal is constructed by suppressing the negative frequency components from the spectrum of the real signal. We cannot do the same for our periodically sampled signal x[n]: the periodic mirroring nature of the spectrum prevents one from suppressing the negative components; if we did so, it would suppress the entire spectrum. One solution to this problem is to set the negative half of each spectral period to zero. The resulting spectrum of the analytic signal is shown in Figure 1.21(c). Given a record of samples x[n] of even length N, the procedure to construct the analytic signal z[n] is as follows [16].
• Compute the N-point DTFT of x[n] using the FFT.
• The N-point periodic one-sided analytic signal spectrum is computed by the following transform:

  Z[m] = X[0]     for m = 0
         2X[m]    for 1 ≤ m ≤ N/2 − 1     (1.78)
         X[N/2]   for m = N/2
         0        for N/2 + 1 ≤ m ≤ N − 1

• Finally, the analytic signal z[n] is obtained by taking the inverse DTFT of Z[m]:
  z[n] = (1/(NT)) ∑_{m=0}^{N-1} Z[m] exp(j2πmn/N)   (1.79)
This method satisfies both the desired properties listed in equations 1.76 and 1.77. The given procedure can be coded in Matlab using the FFT function. Given a record of N real-valued samples x[n], the corresponding analytic signal z[n] can be constructed as given next.

Program 1.27: analytic signal.m: Generating an analytic signal for a given real-valued signal

function z = analytic_signal(x)
%Generate analytic signal using frequency domain approach
x = x(:); %serialize
N = length(x);
X = fft(x,N);
z = ifft([X(1); 2*X(2:N/2); X(N/2+1); zeros(N/2-1,1)],N);
end

To test this function, we create a 0.5 second record of a real-valued sine signal and pass it as an argument to the function. The resulting analytic signal is constructed and its orthogonal components are plotted in Figure 1.22. From this plot, we can see that the real part of the analytic signal is identical to the original signal and the imaginary part of the analytic signal is the −90° phase-shifted version of the original signal. We note that the analytic signal's imaginary part is the Hilbert transform of its real part.

Program 1.28: test analytic signal.m: Check and investigate components of an analytic signal

%Test routine to check analytic_signal function
t=0:0.001:0.5-0.001;
x = sin(2*pi*10*t); %real-valued, f = 10 Hz
subplot(2,1,1); plot(t,x); %plot the original signal
title('x[n] - real-valued signal');
xlabel('n'); ylabel('x[n]');

z = analytic_signal(x); %construct analytic signal
subplot(2,1,2);
plot(t, real(z), 'k'); hold on;
plot(t, imag(z), 'r');
title('Components of Analytic signal');
xlabel('n'); ylabel('z_r[n] and z_i[n]');
legend('Real(z[n])','Imag(z[n])');
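As a cross-check outside Matlab, the same frequency-domain construction can be sketched in NumPy. The snippet below mirrors Program 1.27 and verifies the two desired properties (equations 1.76 and 1.77) numerically; the function and variable names are illustrative:

```python
import numpy as np

def analytic_signal(x):
    """Frequency-domain analytic signal for an even-length real input:
    keep the DC and Nyquist bins, double the positive-frequency bins,
    zero the negative-frequency bins (mirrors Program 1.27)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    Z = np.zeros(N, dtype=complex)
    Z[0] = X[0]                     # m = 0
    Z[1:N // 2] = 2 * X[1:N // 2]   # 1 <= m <= N/2 - 1
    Z[N // 2] = X[N // 2]           # m = N/2
    return np.fft.ifft(Z)           # negative half stays zero

t = np.arange(0, 0.5, 0.001)
x = np.sin(2 * np.pi * 10 * t)      # real-valued sine, f = 10 Hz
z = analytic_signal(x)
print(np.allclose(z.real, x))                  # property (1.76): True
print(abs(np.sum(z.real * z.imag)) < 1e-9)     # property (1.77): True
```

For this sine input the imaginary part comes out as −cos(2π10t), i.e. the −90° phase-shifted version of the input, exactly as Figure 1.22 shows.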
1.8.1.3 Hilbert Transform using Fourier transform

Matlab's signal processing toolbox has an inbuilt function, hilbert, to compute the analytic signal. We should note that the inbuilt hilbert function in Matlab returns the analytic signal z[n], not the Hilbert transform of the signal x[n]. To get the Hilbert transform, we simply take the imaginary part of the analytic signal. Since we have written our own function to compute the analytic signal, getting the Hilbert transform of a real-valued signal goes like this:

x_hilbert = imag(analytic_signal(x))
Fig. 1.22: Components of analytic signal for a real-valued sine function
1.8.2 Applications of analytic signal

The Hilbert transform and the concept of the analytic signal have several applications. Two of them are detailed next.
1.8.2.1 Extracting instantaneous amplitude, phase, frequency

The concept of instantaneous amplitude/phase/frequency is fundamental to information communication and appears in many signal processing applications. We know that a monochromatic signal of the form x(t) = a cos(ωt + φ) cannot carry any information. To carry information, the signal needs to be modulated. Take for example the case of amplitude modulation, in which a positive real-valued signal m(t) modulates a carrier cos(ωc t). That is, the amplitude modulation is effected by multiplying the information-bearing signal m(t) with the carrier signal cos(ωc t):

  x(t) = m(t) cos(ωc t)   (1.80)

Here, ωc is the angular frequency of the signal measured in radians/sec and is related to the temporal frequency fc as ωc = 2π fc. The term m(t) is also called the instantaneous amplitude. Similarly, in the case of phase or frequency modulations, the concept of instantaneous phase or instantaneous frequency is required for describing the modulated signal:

  x(t) = a cos(φ(t))   (1.81)

Here, a is a constant amplitude factor that effects no change in the envelope of the signal, and φ(t) is the instantaneous phase which varies according to the information. The instantaneous angular frequency is expressed as the derivative of the instantaneous phase.
  ω(t) = d φ(t)/dt          (1.82)
  f(t) = (1/2π) d φ(t)/dt   (1.83)

Generalizing these concepts, if a signal is expressed as

  x(t) = a(t) cos(φ(t))   (1.84)

• The instantaneous amplitude or the envelope of the signal is given by a(t).
• The instantaneous phase is given by φ(t).
• The instantaneous angular frequency is derived as ω(t) = d φ(t)/dt.
• The instantaneous temporal frequency is derived as f(t) = (1/2π) d φ(t)/dt.
Let's consider the following problem statement: an amplitude modulated signal is formed by multiplying a sinusoidal information signal and a linear frequency chirp. The information content is expressed as a(t) = 1 + 0.7 sin(2π3t) and the linear frequency chirp is made to vary from 20 Hz to 80 Hz. Given the modulated signal, extract the instantaneous amplitude (envelope), instantaneous phase and the instantaneous frequency.

The solution is as follows. We note that the modulated signal is a real-valued signal. We also take note of the fact that amplitude, phase and frequency can be easily computed if the signal is expressed in complex form. Which transform should we use, such that we can convert a real signal to the complex plane without altering the required properties? Answer: apply the Hilbert transform and form the analytic signal on the complex plane. Figure 1.23 illustrates this concept.
Fig. 1.23: Converting a real-valued signal to complex plane using Hilbert Transform

Express the real-valued modulated signal x(t) as an analytic signal:

  z(t) = zr(t) + j zi(t) = x(t) + j HT{x(t)}   (1.85)

where HT{·} represents the Hilbert transform operation. Now, the required parameters are very easy to obtain.
• The instantaneous amplitude is computed in the complex plane as

  a(t) = |z(t)| = sqrt( zr²(t) + zi²(t) )   (1.86)

• The instantaneous phase is computed in the complex plane as

  φ(t) = ∠z(t) = arctan( zi(t) / zr(t) )   (1.87)

• The instantaneous temporal frequency is computed in the complex plane as

  f(t) = (1/2π) d φ(t)/dt   (1.88)
Once we know the instantaneous phase, the carrier can be regenerated as cos[φ(t)]. The regenerated carrier is often referred to as the temporal fine structure (TFS) in acoustic signal processing [17]. The following Matlab code demonstrates the extraction procedure and the resulting plots are shown in Figure 1.24.

Program 1.29: extract envelope phase.m: Envelope and instantaneous phase extraction from analytic signal

%Demonstrate extraction of instantaneous amplitude and phase from
%the analytic signal constructed from a real-valued modulated signal.
fs = 600; %sampling frequency in Hz
t = 0:1/fs:1-1/fs; %time base
a_t = 1.0 + 0.7 * sin(2.0*pi*3.0*t); %information signal
c_t = chirp(t,20,t(end),80); %chirp carrier
x = a_t .* c_t; %modulated signal

subplot(2,1,1); plot(x); hold on; %plot the modulated signal

z = analytic_signal(x); %form the analytic signal
inst_amplitude = abs(z); %envelope extraction
inst_phase = unwrap(angle(z)); %inst phase
inst_freq = diff(inst_phase)/(2*pi)*fs; %inst frequency

%Regenerate the carrier from the instantaneous phase
regenerated_carrier = cos(inst_phase);

plot(inst_amplitude,'r'); %overlay the extracted envelope
title('Modulated signal & extracted envelope');
xlabel('n'); ylabel('x(t) & |z(t)|');
subplot(2,1,2); plot(cos(inst_phase));
title('Extracted carrier or TFS');
xlabel('n'); ylabel('cos[\omega(t)]');
Fig. 1.24: Amplitude modulation using a chirp signal and extraction of envelope and TFS
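For a language-neutral cross-check of this envelope extraction, here is a NumPy sketch of the same procedure. The hand-rolled linear chirp stands in for Matlab's chirp function, and all names are illustrative:

```python
import numpy as np

def analytic_signal(x):
    # frequency-domain analytic signal (even-length input assumed)
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 600                                       # sampling frequency in Hz
t = np.arange(0, 1, 1 / fs)
a_t = 1.0 + 0.7 * np.sin(2 * np.pi * 3.0 * t)  # information signal (envelope)
f0, f1 = 20.0, 80.0                            # chirp sweep limits in Hz
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
x = a_t * np.cos(phase)                        # modulated signal

z = analytic_signal(x)
inst_amplitude = np.abs(z)                     # extracted envelope
inst_phase = np.unwrap(np.angle(z))            # instantaneous phase
inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs  # instantaneous frequency
```

Away from the record edges (where the FFT's implicit periodicity causes small artifacts), the extracted envelope tracks a(t) closely and the instantaneous frequency sweeps from about 20 Hz to about 80 Hz.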
1.8.2.2 Phase demodulation using Hilbert transform

In communication systems, different types of modulations are available. They are mainly categorized as amplitude modulation and phase/frequency modulation. In amplitude modulation, the information is encoded as variations in the amplitude of a carrier signal. Demodulation of an amplitude modulated signal involves extraction of the envelope of the modulated signal (see section 1.8.2.1). In phase modulation, the information is encoded as variations in the phase of the carrier signal. In its generic form, a phase modulated signal is an information-bearing sinusoidal signal modulating another sinusoidal carrier signal:

  x(t) = A cos( 2π fc t + β + α sin(2π fm t + θ) )   (1.89)

where m(t) = α sin(2π fm t + θ) represents the information-bearing modulating signal, with the following parameters:
• α - amplitude of the modulating sinusoidal signal.
• fm - frequency of the modulating sinusoidal signal.
• θ - phase offset of the modulating sinusoidal signal.
The carrier signal has the following parameters:
• A - amplitude of the carrier.
• fc - frequency of the carrier, with fc >> fm.
• β - phase offset of the carrier.
The phase modulated signal shown in equation 1.89 can be simply expressed as

  x(t) = A cos(φ(t))   (1.90)
Here, φ(t) is the instantaneous phase that varies according to the information signal m(t). A phase modulated signal of the form x(t) can be demodulated by forming an analytic signal (by applying the Hilbert transform) and then extracting the instantaneous phase. Extraction of the instantaneous phase of a signal was discussed in section 1.8.2.1.

We note that the instantaneous phase, φ(t) = 2π fc t + β + α sin(2π fm t + θ), contains a component linear in time and proportional to 2π fc t. To obtain the information-bearing modulating signal, this linear offset needs to be subtracted from the instantaneous phase. If the carrier frequency is known at the receiver, this can be done easily. If not, the carrier term 2π fc t + β needs to be estimated using a linear fit of the unwrapped instantaneous phase. The following Matlab code demonstrates all these methods. The resulting plots are shown in Figures 1.25 and 1.26.

Program 1.30: phase demod app.m: Demodulate a phase modulated signal using Hilbert Transform

%Demonstrate simple Phase Demodulation using Hilbert transform
clearvars; clc;
fc = 210; %carrier frequency
fm = 10; %frequency of modulating signal
alpha = 1; %amplitude of modulating signal
theta = pi/4; %phase offset of modulating signal
beta = pi/5; %constant carrier phase offset
receiverKnowsCarrier= 'False'; %Set True if receiver knows carrier frequency & phase offset
fs = 8*fc; %sampling frequency
duration = 0.5; %duration of the signal
t = 0:1/fs:duration-1/fs; %time base
%Phase Modulation
m_t = alpha*sin(2*pi*fm*t + theta); %modulating signal
x = cos(2*pi*fc*t + beta + m_t); %modulated signal

figure(); subplot(2,1,1); plot(t,m_t) %plot modulating signal
title('Modulating signal'); xlabel('t'); ylabel('m(t)')
subplot(2,1,2); plot(t,x) %plot modulated signal
title('Modulated signal'); xlabel('t'); ylabel('x(t)')

%Add AWGN noise to the transmitted signal
nMean = 0; nSigma = 0.1; %noise mean and sigma
n = nMean + nSigma*randn(size(t)); %awgn noise
r = x + n; %noisy received signal

%Demodulation of the noisy Phase Modulated signal
z = hilbert(r); %form the analytic signal from the received vector
inst_phase = unwrap(angle(z)); %instantaneous phase

%If receiver knows the carrier freq/phase perfectly
if strcmpi(receiverKnowsCarrier,'True')
    offsetTerm = 2*pi*fc*t+beta;
else %else, estimate the subtraction term
    p = polyfit(t,inst_phase,1); %linearly fit the instantaneous phase
    %re-evaluate the offset term using the fitted values
    estimated = polyval(p,t);
    offsetTerm = estimated;
end

demodulated = inst_phase - offsetTerm;
figure(); plot(t,demodulated); %demodulated signal
title('Demodulated signal'); xlabel('n'); ylabel('\hat{m(t)}');
Fig. 1.25: Phase modulation - modulating signal and modulated (transmitted) signal
Fig. 1.26: Demodulated signal from the noisy received-signal and when the carrier is unknown at the receiver
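The demodulation logic of Program 1.30 can be cross-checked in NumPy as well (noise omitted for clarity; the analytic-signal helper and variable names are illustrative). Note how a first-order polynomial fit recovers the unknown carrier ramp:

```python
import numpy as np

def analytic_signal(x):
    # frequency-domain analytic signal (even-length input assumed)
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

fc, fm = 210.0, 10.0                 # carrier / modulating frequencies (Hz)
alpha, theta, beta = 1.0, np.pi / 4, np.pi / 5
fs = 8 * 210                         # sampling frequency
t = np.arange(0, 0.5, 1 / fs)
m_t = alpha * np.sin(2 * np.pi * fm * t + theta)  # message
x = np.cos(2 * np.pi * fc * t + beta + m_t)       # phase-modulated signal

z = analytic_signal(x)
inst_phase = np.unwrap(np.angle(z))

# receiver knows fc and beta: subtract the carrier term directly
demod_known = inst_phase - (2 * np.pi * fc * t + beta)
# receiver does not know fc: estimate the linear ramp with a 1st-order fit
p = np.polyfit(t, inst_phase, 1)
demod_blind = inst_phase - np.polyval(p, t)
```

With the carrier known, the recovered signal matches m(t) almost exactly; the blind polyfit version recovers the same waveform up to a small residual tilt, as in Figure 1.26.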
1.9 Choosing a filter : FIR or IIR : understanding the design perspective

Choosing the best filter for implementing a signal processing block is a challenge often faced by communication systems development engineers. From a transfer function standpoint, there exist two different types of linear time invariant (LTI) filters, finite impulse response (FIR) and infinite impulse response (IIR), and a myriad of design techniques for designing them. The mere fact that so many techniques exist for designing a filter suggests that there is no single optimal filter design. One has to weigh the pros and cons of choosing a filter design by considering the factors discussed here.
1.9.1 Design specification

A filter design starts with a specification. We may have a simple specification that just calls for removing an unwanted frequency component, or it can be a complete design specification with various parameters: the amount of ripple allowed in the passband, stopband attenuation, transition width, etc. The design specification usually calls for satisfying one or more of the following:
• desired magnitude response - |Hspec(ω)|
• desired phase response - ∠Hspec(ω)
• tolerance specifications - that specify how much the filter response is allowed to vary when compared with the ideal response. Examples include how much ripple is allowed in the passband, stopband, etc.

Given the specifications above, the goal of a filter design process is to choose the parameters M, N, {bk} and {ak}, such that the transfer function of the filter

  H(z) = ( ∑_{k=0}^{M} bk z^{-k} ) / ( 1 + ∑_{k=1}^{N} ak z^{-k} ) = ∏_i (z − zi) / ∏_j (z − pj)   (1.91)

yields the desired response: H(ω) ≈ Hspec(ω). In other words, the design process also involves choosing the number and location of the zeros {zi} and poles {pj} in the pole-zero plot.

In Matlab, given the transfer function of a filter, the filtering process can be simulated using the filter command. The command y=filter(b,a,x) filters the input data x using the transfer function defined by the numerator coefficients {bk} and denominator coefficients {ak}.
Two types of filters can manifest from the transfer function given in equation 1.91.
• When N = 0, there is no feedback in the filter structure (Figure 1.27(a)) and no poles in the pole-zero plot (in fact, all the poles sit at the origin). The impulse response of such a filter dies out (becomes zero) beyond a certain point in time, and it is classified as a finite impulse response (FIR) filter. It can provide a linear phase characteristic in the passband.
• When N > 0, the filter structure is characterized by the presence of feedback elements (Figure 1.27(b)). Due to the feedback elements, the impulse response of the filter may not become zero beyond a certain point but continue indefinitely, hence the name infinite impulse response (IIR) filter.
• Caution: in most cases the presence of feedback elements produces an infinite impulse response, but not always. There are some exceptional cases where a feedback structure results in a finite impulse response. For example, a moving average filter has a finite impulse response, yet its output can be described using a recursive formula, which results in a structure with feedback elements.
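The moving-average caution in the last bullet can be verified directly. The sketch below (NumPy, names illustrative) implements y[n] = y[n-1] + (x[n] - x[n-M])/M, a structure with feedback, and shows that its impulse response is nevertheless finite:

```python
import numpy as np

def moving_average_recursive(x, M):
    """M-point moving average in recursive (feedback) form:
    y[n] = y[n-1] + (x[n] - x[n-M]) / M."""
    xp = np.concatenate([np.zeros(M), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    acc = 0.0
    for n in range(len(x)):
        acc += (xp[n + M] - xp[n]) / M   # feedback on the running sum
        y[n] = acc
    return y

# impulse response: finite, despite the recursive structure
impulse = np.zeros(10)
impulse[0] = 1.0
h = moving_average_recursive(impulse, M=4)
print(h)   # 0.25 for the first 4 taps, exactly 0 afterwards
```

The delayed input term x[n-M] cancels the feedback exactly after M samples, which is why the pole introduced by the recursion is cancelled by a zero and the response stays finite.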
Fig. 1.27: FIR and IIR filters can be realized as direct-form structures. (a) Structure with feed-forward elements only - typical for FIR designs. (b) Structure with feed-back paths - generally results in IIR filters. Other structures like lattice implementations are also available.
1.9.2 General considerations in design

As specified earlier, the choice of filter and the design process depend on the design specification, the application, and the performance issues associated with them. However, the following general considerations apply in practical design.
Minimizing number of computations

In order to minimize the memory required for storing the filter coefficients {ak}, {bk}, and to minimize the number of computations, ideally we would like N + M to be as small as possible. For the same specification, IIR filters result in a much lower order than their FIR counterparts. Therefore, IIR filters are efficient when viewed from this standpoint.
Need for real-time processing

The given application may require processing of input samples in real-time, or the input samples may exist in a recorded state (examples: video/audio playback, image processing applications, audio compression). From this perspective, we have two types of filter systems.
• Causal filter
– The filter output depends on present and past input samples, not on future samples. The output may also depend on past output samples, as in IIR filters. Strictly no future samples.
– Such filters are very much suited for real-time applications.
• Non-causal filter
– There are many practical cases where a non-causal filter is required. Typically, such an application warrants some form of post-processing, where the entire data stream is already stored in memory.
– In such cases, a filter can be designed that takes in all types of input samples for processing: present, past and future. These filters are classified as non-causal filters.
– Non-causal filters have much simpler design methods.

It can often be seen in many signal processing texts that causal filters are practically realizable. That does not mean non-causal filters are not practically implementable; in fact, both types of filters are implementable and you can see them in many systems today. The question you must ask is: does your application require real-time processing, or processing of pre-recorded samples? If the application requires real-time processing, causal filters must be used. Otherwise, non-causal filters can be used.
Consequences of causal filter

If the application requires real-time processing, causal filters are the only choice for implementation. The following consequences must be considered if causality is desired.

Ideal filters with finite bands of zero response (example: brick-wall filters) cannot be implemented using a causal filter structure. A direct consequence of a causal filter is that its response cannot be ideal; hence, we must design the filter to provide a close approximation to the desired response. If a tolerance specification is given, it has to be met. For example, in the case of designing a low pass filter with given passband frequency (ωP) and stopband frequency (ωS), additional tolerance specifications like the allowable passband ripple factor (δP) and stopband ripple factor (δS) need to be considered for the design (Figure 1.28). Therefore, practical filter design involves choosing ωP, ωS, δP and δS, and then designing the filter with N, M, {ak} and {bk} that satisfies all the given requirements/responses. Often, iterative procedures may be required to satisfy all of the above (example: the Parks-McClellan algorithm used for designing optimal causal FIR filters [18]).

For a causal filter, the real part HR(ω) and the imaginary part HI(ω) of the frequency response form a Hilbert transform pair [19]. Therefore, for a causal filter, the magnitude and phase responses become interdependent.
Fig. 1.28: A sample filter design specification
Stability

A causal LTI digital filter is bounded-input bounded-output (BIBO) stable if and only if its impulse response h[n] is absolutely summable:

  ∑_{n=−∞}^{∞} |h[n]| < ∞   (1.92)
The impulse responses of FIR filters are always bounded, hence they are inherently stable. On the other hand, an IIR filter may become unstable if not designed properly. Consider an IIR filter implemented using a floating point processor that has enough accuracy to represent all the coefficients in the following transfer function:

  H1(z) = 1 / (1 − 1.845 z^{-1} + 0.850586 z^{-2})   (1.93)
The corresponding impulse response h1[n] is plotted in Figure 1.29(a). The plot shows that the impulse response decays rapidly to zero as n increases. For this case, the sum in equation 1.92 is finite, hence this IIR filter is stable. Suppose we were to implement the same filter in a fixed point processor and were forced to round off the coefficients to two digits after the decimal point; the transfer function would then become

  H2(z) = 1 / (1 − 1.85 z^{-1} + 0.85 z^{-2})   (1.94)
The corresponding impulse response h2[n], plotted in Figure 1.29(b), shows that the impulse response increases rapidly towards a constant value as n increases. For this case, the sum in equation 1.92 approaches infinity, hence the implemented IIR filter is unstable. Therefore, it is imperative that an IIR filter implementation be tested for stability. To analyze the stability of the filter, the infinite sum in equation 1.92 needs to be computed, and it is often difficult to compute this sum. Analysis of the pole-zero plot is an alternative solution to this problem. To have a stable causal filter,
Fig. 1.29: Impact of poorly implemented IIR filter on stability. (a) Stable IIR filter, (b) The same IIR filter becomes unstable due to rounding effects.
the poles of the transfer function should rest strictly inside the unit circle on the pole-zero plot. The pole-zero plots for the given transfer functions H1(z) and H2(z) are shown in Figure 1.30. For the transfer function H1(z), all the poles lie within the unit circle (the region of stability), hence it is a stable IIR filter. On the other hand, for the transfer function H2(z), one pole lies exactly on the unit circle (i.e., just outside the region of stability), hence it is an unstable IIR filter.
Fig. 1.30: Impact of poorly implemented IIR filter on stability. (a) Stable causal IIR filter since all poles rest inside the unit circle, (b) Due to rounding effects, one of the poles rests on the unit circle, making it unstable.
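The pole locations for H1(z) and H2(z) can be checked numerically. A small NumPy sketch using np.roots on the two denominator polynomials:

```python
import numpy as np

# denominator coefficients of H1(z) (full precision) and H2(z) (rounded)
a1 = [1.0, -1.845, 0.850586]
a2 = [1.0, -1.85, 0.85]

poles1 = np.roots(a1)   # poles of H1(z)
poles2 = np.roots(a2)   # poles of H2(z)
print(np.abs(poles1))   # both moduli below 1: inside the unit circle, stable
print(np.abs(poles2))   # one modulus equals 1: on the unit circle, unstable
```

The rounded denominator factors as (1 − z^{-1})(1 − 0.85 z^{-1}), which places one pole exactly at z = 1; this is the pole that Figure 1.30(b) shows sitting on the unit circle.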
Linear phase requirement

In many signal processing applications, a digital filter should not alter the angular relationship between the real and imaginary components of a signal, especially in the passband. In other words, the phase relationship between the signal components should be preserved in the filter's passband. If not, we have phase distortion.
Fig. 1.31: An FIR filter showing linear phase characteristic in the passband
Phase distortion is a concern in many signal processing applications. For example, in phase modulations like GMSK [20], the entire demodulation process hinges on the phase relationship between the inphase and quadrature components of the incoming signal. If we have a phase-distorting filter in the demodulation chain, the entire detection process goes for a toss. Hence, we have to pay attention to the phase characteristics of such filters. To have no phase distortion when processing a signal through a filter, every spectral component of the signal inside the passband should be delayed by the same amount of time delay measured in samples. In other words, the phase response φ(ω) in the passband should be a linear function (straight line) of frequency (except for the phase wraps at the band edges). A filter that satisfies this property is called a linear phase filter. FIR filters can provide a perfect linear phase characteristic in the passband region (Figure 1.31) and hence avoid phase distortion. All IIR filters exhibit non-linear phase characteristics. If a real-time application warrants zero phase distortion, FIR filters are the immediate choice for the design.

It is intuitive to see that the phase response of a generalized linear phase filter should follow the relationship φ(ω) = −mω + c, where m is the slope and c is the intercept of the linear relationship between frequency and phase response in the passband (Figure 1.31). The phase delay and group delay are the important filter characteristics considered for ascertaining phase distortion, and they relate to the intercept c and the slope m of the phase response in the passband. Linear phase filters are characterized by a constant group delay. Any deviation of the group delay from this constant value inside the passband indicates a certain degree of non-linearity in the phase, and hence causes phase distortion.
Phase delay is the time delay experienced by each spectral component of the input signal. For a filter with frequency response H(ω), the phase delay response τp(ω) is defined in terms of the phase response φ(ω) = ∠H(ω) as

  τp(ω) = − φ(ω)/ω   (1.95)

Group delay is the delay experienced by a group of spectral components within a narrow frequency interval about ω [21]. The group delay response τg(ω) is defined as the negative derivative of the phase response with respect to ω:

  τg(ω) = − d φ(ω)/dω   (1.96)

For the generalized linear phase filter with φ(ω) = −mω + c, the group delay and phase delay are given by

  τg(ω) = − d φ(ω)/dω = m        (1.97)
  τp(ω) = − φ(ω)/ω = m − c/ω     (1.98)
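Equations 1.96 and 1.97 can be verified numerically for a symmetric (hence linear-phase) FIR filter, whose group delay should be the constant (N − 1)/2 samples. The coefficients below are an illustrative choice, not from the book:

```python
import numpy as np

b = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric FIR, N = 5 taps
n = np.arange(len(b))
w = np.linspace(0.05, 1.0, 200)           # rad/sample, away from w = 0
# evaluate H(w) = sum_k b[k] e^{-j w k} on the frequency grid
H = np.array([np.sum(b * np.exp(-1j * wk * n)) for wk in w])
phase = np.unwrap(np.angle(H))
group_delay = -np.diff(phase) / np.diff(w)   # tau_g = -d(phi)/d(omega)
print(group_delay[0])   # constant 2.0 samples = (N - 1) / 2
```

For this symmetric filter, H(ω) = e^{-j2ω}(3 + 4cos ω + 2cos 2ω); the bracketed factor is real and positive over the chosen band, so the phase is exactly −2ω and the computed group delay is the constant slope m = 2.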
Summary of design choices
• IIR filters are efficient: they can provide a similar magnitude response with fewer coefficients, or lower sidelobes for the same number of coefficients.
• For a linear phase requirement, FIR filters are the immediate choice for the design.
• FIR filters are inherently stable. IIR filters are susceptible to finite word-length effects of fixed point arithmetic, and hence the design has to be rigorously tested for stability.
• IIR filters provide less average delay compared to their equivalent FIR counterparts. If the filter has to be used in a feedback path in a system, the amount of filter delay is very critical as it affects the stability of the overall system.
• Given a specification, an IIR design can be easily deduced based on closed-form expressions. However, satisfying the design requirements using an FIR design generally requires iterative procedures.
References

1. Mathuranathan Viswanathan, Digital Modulations using Matlab: Build Simulation Models from Scratch, ISBN: 9781521493885, pp. 85-93, June 2017.
2. James W. Cooley and John W. Tukey, An Algorithm for the Machine Calculation of Complex Fourier Series, Mathematics of Computation, Vol. 19, No. 90, pp. 297-301, April 1965.
3. Good I. J, The interaction algorithm and practical Fourier analysis, Journal of the Royal Statistical Society, Series B, Vol. 20, No. 2, pp. 361-372, 1958.
4. Thomas L. H, Using a computer to solve problems in physics, Applications of Digital Computers, Boston, Ginn, 1963.
5. Matlab documentation on Floating-Point numbers, https://www.mathworks.com/help/matlab/matlab_prog/floating-point-numbers.html
6. Hanspeter Schmid, How to use the FFT and Matlab's pwelch function for signal and noise simulations and measurements, Institute of Microelectronics, University of Applied Sciences Northwestern Switzerland, August 2012.
7. Sanjay Lall, Norm and Vector spaces, Information Systems Laboratory, Stanford University. http://floatium.stanford.edu/engr207c/lectures/norms_2008_10_07_01.pdf
8. Reddi, S. S., Eigen Vector properties of Toeplitz matrices and their application to spectral analysis of time series, Signal Processing, Vol. 7, North-Holland, 1984, pp. 46-56.
9. Robert M. Gray, Toeplitz and circulant matrices - an overview, Department of Electrical Engineering, Stanford University, Stanford 94305, USA. https://ee.stanford.edu/~gray/toeplitz.pdf
10. Matlab documentation help on Toeplitz command. https://www.mathworks.com/help/matlab/ref/toeplitz.html
11. Richard E. Blahut, Fast Algorithms for Signal Processing, Cambridge University Press, 1st edition, August 2010.
12. D. Gabor, Theory of communications, Journal of the Inst. Electr. Eng., vol. 93, pt. III, pp. 429-457, 1946. See definition of complex signal on p. 432.
13. J. A. Ville, Theorie et application de la notion du signal analytique, Cables et Transmission, vol. 2, pp. 61-74, 1948.
14. S. M. Kay, Maximum entropy spectral estimation using the analytical signal, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26, pp. 467-469, October 1978.
15. Poularikas A. D, Handbook of Formulas and Tables for Signal Processing, ISBN: 9781420049701, CRC Press, 1999.
16. S. L. Marple, Computing the discrete-time analytic signal via FFT, Conference Record of the Thirty-First Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1997, pp. 1322-1325, vol. 2.
17. Moon, Il Joon, and Sung Hwa Hong, What Is Temporal Fine Structure and Why Is It Important?, Korean Journal of Audiology 18.1 (2014): 1-7.
18. J. H. McClellan, T. W. Parks, and L. R. Rabiner, A Computer Program for Designing Optimum FIR Linear Phase Digital Filters, IEEE Transactions on Audio and Electroacoustics, Vol. AU-21, No. 6, pp. 506-526, December 1973.
19. Frank R. Kschischang, The Hilbert Transform, Department of Electrical and Computer Engineering, University of Toronto, October 22, 2006.
20. Thierry Turletti, GMSK in a nutshell, Telemedia Networks and Systems Group, Laboratory for Computer Science, Massachusetts Institute of Technology, April 1996.
21. Julius O. Smith III, Introduction to digital filters - with audio applications, Center for Computer Research in Music and Acoustics (CCRMA), Department of Music, Stanford University.
Chapter 2
Random Variables - Simulating Probabilistic Systems
Abstract In this chapter, the basic methods for generating various random variables and random sequences are presented. As the subject of generation of random variables is quite involved, we will be using inbuilt functions in Matlab to achieve the stated goal.
2.1 Introduction

A simulation program written in computer software attempts to mimic a real-world example based on a model. In fact, the simulation accuracy is determined by the precision of the underlying model. Mathematically, probabilities of given events are determined by performing a well-defined experiment and counting the outcomes. Suppose we were to determine the probability of heads in a coin tossing experiment: we would toss the coin n times, count the number of heads obtained (nH), and calculate the probability of obtaining heads as pH = nH/n. For real-world problems, it is not possible to determine such probabilities by physically performing experiments a large number of times. With the advent of today's computer processing capabilities, these tasks are simplified by software like Matlab. Using such software, we only need to generate random numbers to deal with such problems. Basic methods for generating various random variables are presented here. For an in-depth understanding of probability and random processes, please consult an authoritative text.
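The coin-tossing estimate pH = nH/n described above can be sketched as a small Monte Carlo simulation (shown here in NumPy; the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000                            # number of tosses
tosses = rng.integers(0, 2, size=n)    # 1 = heads, 0 = tails
n_heads = int(np.sum(tosses))
p_heads = n_heads / n                  # relative-frequency estimate of P(heads)
print(p_heads)                         # close to the true value 0.5
```

By the law of large numbers, the estimate converges to 0.5 as n grows; with n = 100000 tosses the typical deviation is on the order of 0.5/sqrt(n), i.e. a fraction of a percent.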
2.2 Plotting the estimated PDF Generating random variables with the required probability distribution characteristics is of paramount importance in simulating a communication system. Let us see how to generate a simple random variable, estimate and plot its probability density function (PDF) from the generated data, and then match it with the intended theoretical PDF. The normal random variable is considered here for illustration. Other types of random variables like uniform, Bernoulli, binomial, Chi-squared and Nakagami-m are illustrated in the next section.
Step 1: Create the random variable A survey of commonly used fundamental methods to generate a given random variable is given in [1]. For this demonstration, we will consider the normal random variable with the following parameters: µ - mean and σ - standard deviation. First, generate a vector of normally distributed random numbers of sufficient length (say
100000) with some valid values for µ and σ. There is more than one way to generate this. Three of them are given below.

• Method 1: Using the in-built random function (requires statistics toolbox)

mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
R = random('Normal',mu,sigma,L,1);%method 1

• Method 2: Using randn function that generates normally distributed random numbers having µ = 0 and σ = 1

mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
R = randn(L,1)*sigma + mu; %method 2

• Method 3: Box-Muller transformation [2] method using rand function that generates uniformly distributed random numbers

mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
U1 = rand(L,1); %uniformly distributed random numbers U(0,1)
U2 = rand(L,1); %uniformly distributed random numbers U(0,1)
Z = sqrt(-2*log(U1)).*cos(2*pi*U2);%Standard Normal distribution
R = Z*sigma+mu;%Normal distribution with mean and sigma
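Method 3 does not depend on any toolbox and is easy to verify outside Matlab. The following is a standard-library Python sketch of the same Box-Muller transform (the small shift of U1 away from zero is my own guard against log(0), not part of the original listing):

```python
import math
import random
import statistics

def box_muller(mu, sigma, L, seed=1):
    """Generate L normally distributed samples from uniform draws (Box-Muller)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(L):
        u1 = 1.0 - rng.random()  # shift [0,1) to (0,1] so log(u1) is defined
        u2 = rng.random()
        z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        samples.append(mu + sigma * z)  # scale and shift to N(mu, sigma^2)
    return samples

R = box_muller(mu=0.0, sigma=1.0, L=100000)
print(statistics.mean(R), statistics.stdev(R))  # close to 0 and 1
```

The sample mean and standard deviation of the generated vector should match the requested µ and σ, which is a quick sanity check before plotting any histogram.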
Step 2: Plot the estimated histogram Typically, if we have a vector of random numbers drawn from a distribution, we can estimate the PDF using the histogram tool. Matlab supports two in-built functions to compute and plot histograms:

• hist - introduced before R2006a
• histogram - introduced in R2014b

Matlab's help page notes that the hist function is not recommended for several reasons, inconsistency being one among them; histogram is the recommended function. Estimate and plot the normalized histogram using the recommended histogram function, and for verification, overlay the theoretical PDF for the intended distribution. When using the histogram function to plot the estimated PDF from the generated random data, use the pdf option for normalization. Do not use the value probability for the normalization option, as it will not match the theoretical PDF curve. The output for the following code snippet is given in Figure 2.1.

histogram(R,'Normalization','pdf'); %plot estimated pdf from data
X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory = pdf('Normal',X,mu,sigma); %theoretical pdf
hold on; plot(X,fx_theory,'r'); %plot computed theoretical PDF
title('Probability Density Function'); xlabel('values - x');
ylabel('pdf - f(x)'); axis tight; legend('simulated','theory');
Fig. 2.1: Estimated PDF (using histogram function) and the theoretical PDF
However, if you have a Matlab version released before R2014b, use the hist function to get the histogram frequency counts (f) and the bin centers (x). Using these data, normalize the frequency counts by the overall area under the histogram. The trapz function in Matlab computes the approximate area under the simulated PDF curve. We know that the area under any PDF curve integrates to unity; thus, dividing the histogram by the total area obtained from trapz brings the total area under the normalized PDF curve to unity. This step is essential only if we want to match the theoretical and simulated PDFs. This method can be applied to any random variable. Plot the normalized histogram (Figure 2.2) and overlay the theoretical PDF for the chosen parameters.

numBins=50; %choose appropriately
%use hist function and get unnormalized values
[f,x]=hist(R,numBins);
figure; plot(x,f/trapz(x,f),'b-*');%normalized histogram from data
X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory = pdf('Normal',X,mu,sigma); %theoretical pdf
hold on; plot(X,fx_theory,'r'); %plot computed theoretical PDF
title('Probability Density Function'); xlabel('values - x');
ylabel('pdf - f(x)'); axis tight; legend('simulated','theory');
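The area normalization step above is easy to reproduce from first principles. The following is a standard-library Python sketch of the same idea: bin the data, approximate the area under the counts with the trapezoidal rule, and divide by that area so the estimated PDF integrates to one (the binning helpers are my own, standing in for hist and trapz):

```python
import random

def trapz(y, x):
    """Trapezoidal approximation of the area under y(x)."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(y) - 1))

def normalized_histogram(data, num_bins=50):
    """Bin centers and counts scaled so the estimated PDF has unit area."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for v in data:
        idx = min(int((v - lo) / width), num_bins - 1)  # clamp v == hi
        counts[idx] += 1
    centers = [lo + (i + 0.5) * width for i in range(num_bins)]
    area = trapz(counts, centers)
    return centers, [c / area for c in counts]

rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(100000)]
x, f = normalized_histogram(data)
print(trapz(f, x))  # area is 1 after normalization
```

Dividing by the trapezoidal area is exactly what f/trapz(x,f) does in the Matlab snippet.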
Step 3: Theoretical PDF The code snippets given in step 2 already include the command to plot the theoretical PDF by using the pdf function (requires statistics toolbox) in Matlab. If you do not have access to this function, you could use the following equation for computing the theoretical PDF. The code snippet for that purpose is given next.

f_X(x) = 1/√(2πσ²) · exp(−(x − µ)²/(2σ²))    (2.1)
Fig. 2.2: Estimated PDF (using hist function) and the theoretical PDF
X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory = 1/sqrt(2*pi*sigma^2)*exp(-0.5*(X-mu).^2./sigma^2);
plot(X,fx_theory,'k'); %plot computed theoretical PDF
2.3 Univariate random variables This section focuses on some of the most frequently encountered univariate random variables in communication systems design. A basic installation of Matlab provides access to two fundamental random number generators: the uniform random number generator (rand) and the standard normal random number generator (randn). They are fundamental in the sense that all other random variables, such as Bernoulli, binomial, Chi, Chi-squared, Rayleigh, Ricean and Nakagami-m, can be generated by transforming them.
2.3.1 Uniform random variable Uniform random variables are used to model scenarios where the expected outcomes are equiprobable. For example, in a communication system design, the set of all possible source symbols is considered equally probable and is therefore modeled as a uniform random variable. A continuous uniform random variable, denoted as X ∼ U(a, b), takes continuous values within a given interval (a, b) with equal probability. Therefore, the PDF of such a random variable is a constant over the given interval:

f_X(x) = 1/(b − a) when a < x < b, and 0 otherwise    (2.2)

In Matlab, the rand function generates continuous uniform random numbers in the interval (0, 1); the probability of occurrence of all the numbers
in the interval is the same. The command rand(n,m) generates a matrix of size n × m. To generate random numbers in the interval (a, b), one can use the following expression.

a + (b-a)*rand(n,m); %Here nxm is the size of the output matrix

To test whether the numbers generated by the continuous uniform distribution are indeed uniform in the interval (a, b), one has to generate a very large number of values using the rand function and then plot the histogram. Figure 2.3 shows that the simulated PDF and the theoretical PDF are in agreement with each other.

Program 2.1: uniform_continuous_rv.m: Continuous uniform random variable and verifying its PDF

a=2;b=10; %open interval (2,10)
X=a+(b-a)*rand(1,1000000);%simulate uniform RV
[p,edges]=histcounts(X,'Normalization','pdf');%estimated PDF
outcomes = 0.5*(edges(1:end-1) + edges(2:end));%possible outcomes
g=1/(b-a)*ones(1,length(outcomes)); %Theoretical PDF
bar(outcomes,p);hold on;plot(outcomes,g,'r-');
title('Probability Density Function');legend('simulated','theory');
xlabel('Possible outcomes');ylabel('Probability of outcomes');
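The a + (b-a)*rand scaling can be sanity-checked against the known moments of U(a, b). The following is a standard-library Python sketch of the same transformation (the function name and tolerances are mine):

```python
import random
import statistics

def uniform_ab(a, b, n, seed=1):
    """Map U(0,1) draws into the interval (a, b), mirroring a+(b-a)*rand."""
    rng = random.Random(seed)
    return [a + (b - a) * rng.random() for _ in range(n)]

X = uniform_ab(2, 10, 1000000)
# Theory for U(2,10): mean = (a+b)/2 = 6, variance = (b-a)^2/12 = 64/12
print(statistics.mean(X), statistics.variance(X))
```

Both the sample mean and sample variance should land very close to the theoretical values for a vector this long.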
Fig. 2.3: Continuous uniform random variable - histogram and theoretical PDF

On the other hand, a discrete uniform random variable generates n discrete values that are equally probable. The underlying discrete uniform distribution is denoted as X ∼ U(S), where S = {s1, s2, s3, ..., sn} ∈ Z is a finite set of discrete elements that are equally probable, as described by the probability mass function (PMF)

f_X(x) = 1/n when x ∈ {s1, s2, ..., sn}, and 0 otherwise    (2.3)

There exist several methods to generate discrete uniform random numbers, and two of them are discussed here. The straightforward method is to use the randi function in Matlab that can generate discrete uniform
numbers in the integer set 1 : n. The second method is to use the rand function and apply ceil to map the result to discrete values. For example, the command to generate 100 uniformly distributed discrete numbers from the set S = {1, 2, 3, 4, ..., n} is

X=ceil(n*rand(1,100));

The uniformity test for discrete uniform random numbers is very similar to the code shown for the continuous uniform random variable case. The only difference here is the normalization term: the histogram values should not be normalized by the total area under the histogram curve as in the case of continuous random variables. Rather, the histogram should be normalized by the total number of occurrences in all the bins. We cannot normalize based on the area under the curve, since the bin values are not dense enough (the bins are far from each other) for a proper calculation of the total area. The code snippet is given next. The resulting plot (Figure 2.4) shows a good match between the simulated and theoretical PMFs.

Program 2.2: uniform_discrete_rv.m: Simulate a discrete uniform random variable and verify its PMF

X=randi(6,100000,1); %Simulate throws of dice,S={1,2,3,4,5,6}
[pmf,edges]=histcounts(X,'Normalization','pdf');%estimated PMF
outcomes = 0.5*(edges(1:end-1) + edges(2:end));%S={1,2,3,4,5,6}
g=1/6*ones(1,6); %Theoretical PMF
bar(outcomes,pmf);hold on;stem(outcomes,g,'r-');
title('Probability Mass Function');legend('simulated','theory');
xlabel('Possible outcomes');ylabel('Probability of outcomes');
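The count-based normalization for a PMF (divide by the total number of trials, not by an area) can be sketched in a few lines of standard-library Python. The max(1, ...) guard below is my own addition to cover the vanishingly rare case where the uniform draw is exactly zero:

```python
import math
import random
from collections import Counter

def discrete_uniform_pmf(n_faces, n_trials, seed=1):
    """Estimate a PMF by normalizing bin counts by the total number of trials."""
    rng = random.Random(seed)
    draws = (max(1, math.ceil(n_faces * rng.random()))  # ceil(n*rand) trick
             for _ in range(n_trials))
    counts = Counter(draws)
    # normalize by the total count, not by the area under the curve
    return {face: counts[face] / n_trials for face in range(1, n_faces + 1)}

pmf = discrete_uniform_pmf(6, 120000)
print(pmf)                # each value close to 1/6
print(sum(pmf.values()))  # sums to 1 by construction
```

Because every trial falls into exactly one bin, dividing by the trial count guarantees the estimated probabilities sum to one, which is the discrete analogue of unit area.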
Fig. 2.4: Discrete uniform random variable - histogram and theoretical PMFs
2.3.2 Bernoulli random variable Bernoulli random variable is a discrete random variable with two outcomes - success and failure - with probabilities p and (1 − p). It is a good model for binary data generators and also for modeling bit error patterns in the received binary data when a communication channel introduces random errors. To generate a Bernoulli random variable X, in which the probability of success P(X = 1) = p for some p ∈ (0, 1), the discrete inverse transform method [3] can be applied on the continuous uniform random variable U(0, 1) using the steps below.

1. Generate a uniform random number U in the interval (0, 1)
2. If U < p, set X = 1, else set X = 0

Program 2.3: bernoulliRV.m: Generating Bernoulli random number with success probability p

function X = bernoulliRV(L,p)
%Generate Bernoulli random number with success probability p
%L is the length of the sequence generated
U = rand(1,L); %continuous uniform random numbers in (0,1)
X = (U < p);
end

N(t>=Ti(i))=i; %counting process
end
subplot(1,2,1);plot(t, N); xlabel('time'); ylabel('N(t)');
title('Poisson process');legend('N(t)','Events'); hold on;
stem(Ti,zeros(size(Ti))+0.5,'^');%mark events
axis([0 Ti(10) 0 10]); %zoom to first 10 events
subplot(1,2,2);histogram(Xi,'Normalization','pdf');
title('PDF of interarrival times');
xlabel('Interarrival time');ylabel('frequency');
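The plotting fragment above belongs to a Poisson counting process demonstration: exponential interarrival times Xi are accumulated into event times Ti, and the counting process N(t) records how many events have occurred by time t. The following is a standard-library Python sketch of that construction (the rate value and function name are illustrative assumptions; only the variable names Xi and Ti mirror the fragment):

```python
import math
import random
import statistics

def poisson_process(rate, n_events, seed=1):
    """Interarrival times Xi ~ Exp(rate) and cumulative event times Ti."""
    rng = random.Random(seed)
    # inverse transform for the exponential: -ln(U)/rate, U in (0,1]
    Xi = [-math.log(1.0 - rng.random()) / rate for _ in range(n_events)]
    Ti = []
    t = 0.0
    for x in Xi:
        t += x          # event times are the running sum of interarrivals
        Ti.append(t)
    return Xi, Ti

Xi, Ti = poisson_process(rate=2.0, n_events=100000)
print(statistics.mean(Xi))  # close to 1/rate = 0.5
```

The histogram of Xi reproduces the exponential PDF of the interarrival times, matching the right-hand panel of Figure 2.8.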
Fig. 2.8: Poisson counting process and PDF of interarrival times
2.3.6 Gaussian random variable The Gaussian random variable plays an important role in communication systems design. Channel noise, when modeled as a Gaussian random variable, is the worst channel impairment, resulting in the smallest channel capacity for a given fixed noise power. Thus, the error control codes designed under this most adverse environment will give superior and satisfactory performance in real environments [4]. The PDF f_X(x) of a Gaussian random variable X follows the Gaussian distribution (also called the normal distribution), denoted by N(µ, σ²)

f_X(x) = 1/(σ√(2π)) · exp(−(x − µ)²/(2σ²))    (2.10)

Here, µ is the mean and σ is the standard deviation of the distribution. In Matlab, the randn function is used to generate continuous random numbers drawn from the standard normal distribution N(µ = 0, σ² = 1). To generate 100 pseudo-random numbers drawn from a normal distribution with mean a and standard deviation b, use the following expression.

a + b*randn(100,1)

Let's generate 10,000 random numbers drawn from a Gaussian distribution (with µ = 1, σ = 2) and test whether the probability density curve follows the characteristic bell-shaped curve.

Program 2.11: gaussianrv_test.m: Simulating a Gaussian random variable and verifying its PDF

[f,x]=hist(1+2.*randn(10000,1),100); %Simulated PDF
bar(x,f/trapz(x,f));hold on;
g=1/sqrt(2*pi*4)*exp(-0.5*(x-1).^2/(4)); %Theoretical PDF
plot(x,g,'r');hold off;
title('Probability Density Function');legend('simulated','theory');
xlabel('Possible outcomes');ylabel('Probability of outcomes');

The resulting histogram, plotted in Figure 2.9, shows that the generated numbers follow the probability density curve of the Gaussian distribution with µ = 1 and σ = 2.
Fig. 2.9: Gaussian random variable - histogram and theoretical PDF curve
2.3.7 Chi-squared random variable A random variable is always associated with a probability distribution. When a random variable undergoes a mathematical transformation, the underlying probability distribution no longer remains the same. Consider a random variable Z ∼ N(µ = 0, σ² = 1) whose PDF follows a standard normal distribution. Now, if the random variable is squared (a mathematical transformation), then the PDF of Z² is no longer a standard normal distribution. The newly transformed distribution is called the Chi-squared distribution with 1 degree of freedom. The PDFs of Z and Z² are shown in Figure 2.10.

Fig. 2.10: Transformation of a random variable

The mean of the random variable Z is E(Z) = 0, and for the transformed variable Z², the mean is given by E(Z²) = 1. Similarly, the variance of the random variable Z is var(Z) = 1, whereas the variance of the transformed random variable Z² is var(Z²) = 2. In addition to the mean and variance, the shape of the distribution is also changed: the distribution of the transformed variable Z² is no longer symmetric. In fact, the distribution is skewed to one side. Furthermore, the random variable Z² can take only positive values, whereas the random variable Z can take negative values too (note the x-axis of both plots in Figure 2.10). Since the new transformation is based on only one parameter (Z), the degree
of freedom for this transformation is 1. Therefore, the transformed random variable Z² follows the Chi-squared distribution with 1 degree of freedom. Suppose Z1, Z2, ..., Zk are independent random variables that follow the standard normal distribution, Zi ∼ N(0, 1); then the transformation

χ²_k = Z1² + Z2² + ... + Zk²    (2.11)

is a Chi-squared distribution with k degrees of freedom. Figure 2.11 illustrates the transformation relationship between the Chi-squared distributions with 1 and 2 degrees of freedom and the normal distribution. In the same manner, the transformation can be extended to k degrees of freedom.
Fig. 2.11: Constructing a Chi-Squared random variable with 2 degrees of freedom

Equation 2.11 is derived from k random variables that follow the standard normal distribution, whose mean is zero. Therefore, the transformation is called the central Chi-squared distribution. If the underlying k random variables follow a normal distribution with non-zero mean, then the transformation is called the non-central Chi-squared distribution. In channel modeling, the central Chi-squared distribution is related to Rayleigh fading and the non-central Chi-squared distribution is related to Ricean fading. The Chi-squared distribution is also used in hypothesis testing and in estimating variances. Mathematically, the PDF of the central Chi-squared distribution with k degrees of freedom is given by

f_{χ²_k}(x) = 1/(2^(k/2) Γ(k/2)) · x^(k/2 − 1) · e^(−x/2);  for x ≥ 0    (2.12)
where Γ(·) is the Gamma function. The mean and variance of such a centrally distributed Chi-squared random variable are k and 2k respectively. A Chi-squared random variable can be easily simulated. The following function transforms random numbers generated from a standard normal distribution and produces a sequence of random numbers that are Chi-squared distributed.

Program 2.12: chisquaredRV.m: Function for generating central Chi-Squared random numbers

function Y= chisquaredRV(k,N)
%Generate Chi-Squared Distributed univariate random numbers
% k=degrees of freedom, N=Number of samples
Y=0;
for j=1:k, %Sum of square of k independent standard normal rvs
    Y= Y + randn(1,N).^2;
end
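The sum-of-squares construction can be checked against the stated moments (mean k, variance 2k). The following is a standard-library Python sketch of the same generator (function name and tolerances are mine):

```python
import random
import statistics

def chi_squared(k, n, seed=1):
    """n draws of a central chi-squared variable: sum of k squared N(0,1) draws."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(n)]

Y = chi_squared(k=3, n=200000)
# Theory for k degrees of freedom: mean = k, variance = 2k
print(statistics.mean(Y), statistics.variance(Y))
```

For k = 3 the sample mean and variance should be close to 3 and 6, matching the moments quoted in the text.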
The following Matlab code tests the customized function for generating central Chi-squared distributed random variables with degrees of freedom k = 1, 2, 3, 4 and 5. Finally, the PDFs of the simulated variables are plotted in Figure 2.12.

Program 2.13: chisquared_test.m: Simulating a central Chi-Squared random variable and plotting its PDF

%Test and Demonstrate Central Chi Squared Distribution
clearvars; clc;
k=[1,2,3,4,5]; N = 10e5;%degrees of freedom & Number of Samples
lineColors=['r','g','b','k','m']; %line color arguments
legendString =cell(1,5); %cell array of string for printing legend
%Generating Chi-Square Distribution with k degrees of freedom
for i=1:length(k), %outer loop - runs for each degrees of freedom
    Y= chisquaredRV(k(i),N);
    %Computing Histogram for each degree of freedom
    [Q,X] = hist(Y,1000);
    plot(X,Q/trapz(X,Q),lineColors(i)); %normalized histogram
    hold on;
    legendString{i}=strcat(' k=',num2str(i));
end
legend(legendString);xlabel('Parameter - x');ylabel('f_X(x)');
title('Central Chi Squared RV - PDF');axis([0 8 0 0.5]);
Fig. 2.12: Simulated PDFs of central Chi-Squared random variables
2.3.8 Non-central Chi-Squared random variable If squares of k independent standard normal random variables are added, the result follows the central Chi-squared distribution with k degrees of freedom. Instead, if squares of k independent normal random variables with non-zero means are added, the result follows the non-central Chi-squared distribution. The non-central Chi-squared distribution is related to the Ricean distribution, whereas the central Chi-squared distribution is related to the Rayleigh distribution. The non-central Chi-squared distribution is a generalization of the Chi-squared distribution. It is defined by two parameters: the degrees of freedom (k) and the non-centrality parameter (λ). The degrees of freedom specifies the number of independent random variables we square and sum up to make the Chi-squared variable. The non-centrality parameter, given in equation 2.13, is the sum of squares of the means of each independent underlying normal random variable.

λ = Σ_{i=1..k} µi²    (2.13)

The PDF f_{χ²_k}(x, λ) of the non-central Chi-squared distribution having k degrees of freedom and non-centrality parameter λ is given by

f_{χ²_k}(x, λ) = Σ_{n=0..∞} [e^(−λ/2) (λ/2)^n / n!] · f_{Y_{k+2n}}(x)    (2.14)

Here, the random variable Y_{k+2n} is central Chi-squared distributed with k + 2n degrees of freedom. The factor e^(−λ/2) (λ/2)^n / n! gives the probabilities of the Poisson distribution. Thus, the PDF of the non-central Chi-squared distribution can be viewed as a weighted sum of Chi-squared probabilities, where the weights are equal to the probabilities of the Poisson distribution. The procedure for generating samples from a non-central Chi-squared random variable is as follows.

1. For a given degree of freedom k, let the k normal random variables be X1, X2, ..., Xk with variances σ1², σ2², ..., σk² and means µ1, µ2, ..., µk respectively.
2. The goal is to add squares of these k independent normal random variables with variances set to one and means satisfying the condition set by equation 2.13.
3. Set µ1 = √λ and µ2, µ3, ..., µk = 0.
4. Generate k − 1 standard normal random variables ∼ N(µ = 0, σ² = 1) and one normal random variable with µ1 = √λ and σ1² = 1.
5. Squaring and summing up all the k random variables gives the non-central Chi-squared random variable. The PDF of the generated samples can be plotted using the histogram method described in section 2.2.

Given the degrees of freedom k and the non-centrality parameter λ, the following function generates a sequence of random numbers that are non-central Chi-squared distributed. Refer to [5] for the formula for computing the values of the cumulative non-central Chi-squared distribution for more than 10,000 degrees of freedom.

Program 2.14: nonCentralChiSquaredRV.m: Generate non-central Chi-Squared random numbers

function X = nonCentralChiSquaredRV(k,lambda,N)
%Generate non-central Chi-squared distributed random numbers
%k - degrees of freedom, lambda - non-centrality parameter
%N - number of samples
X=(sqrt(lambda)+randn(1,N)).^2;%one normal RV nonzero mean & var=1
for i=1:k-1, %k-1 normal RVs with zero mean & var=1
    X = X + randn(1,N).^2;
end
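The five-step procedure above can be checked against the known moments of the non-central Chi-squared distribution, E[X] = k + λ and var(X) = 2(k + 2λ). A standard-library Python sketch of the same steps (function name and tolerances are mine; the moment formulas are standard results, not from the text):

```python
import math
import random
import statistics

def noncentral_chi_squared(k, lam, n, seed=1):
    """One normal with mean sqrt(lambda) plus k-1 zero-mean normals, all squared."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        y = (math.sqrt(lam) + rng.gauss(0, 1)) ** 2   # mu1 = sqrt(lambda)
        y += sum(rng.gauss(0, 1) ** 2 for _ in range(k - 1))  # mu2..muk = 0
        out.append(y)
    return out

X = noncentral_chi_squared(k=2, lam=3, n=200000)
# Known moments: mean = k + lambda = 5, variance = 2*(k + 2*lambda) = 16
print(statistics.mean(X), statistics.variance(X))
```

Concentrating the entire non-centrality into µ1 is valid because only the sum of squared means (equation 2.13) matters, not how it is split across the k variables.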
The following snippet of code generates and plots the PDFs of non-central Chi-squared random variables for two different degrees of freedom, k = 2 and k = 4, each with three different non-centrality parameters λ = 1, 2, 3. Results of the simulation are shown in Figure 2.13.

Program 2.15: nonCentralChiSquared_test.m: Simulating a non-central Chi-Squared random variable

%Demonstrate PDF of non-central Chi-squared Distribution
k=2:2:4; %degrees of freedoms to simulate
lambda = [1,2,3]; %non-centrality parameters to simulate
N = 1000000; %Number of samples to generate
lineColors=['r','g','b','c','m','k'];color=1;
figure; hold on;
for i=k,
    for j=lambda,
        X = nonCentralChiSquaredRV(i,j,N);
        [f1,x1] = hist(X,500);%Estimated PDFs - histogram with 500 bins
        plot(x1,f1/trapz(x1,f1),lineColors(color));
        color=color+1;
    end
end
legend('k=2 \lambda=1','k=2 \lambda=2','k=2 \lambda=3',...
    'k=4 \lambda=1','k=4 \lambda=2','k=4 \lambda=3');
title('Non-central Chi-Squared RV - PDF');
xlabel('Parameter - x');ylabel('f_X(x)');axis([0 8 0 0.3]);
Fig. 2.13: Simulated PDFs of non-central Chi-Squared random variables
2.3.9 Chi distributed random variable If a square root operation is applied to a Chi-squared random variable, it gives a Chi random variable. This process is shown in Figure 2.14 for 2 degrees of freedom. A Chi-distributed random variable with two degrees of freedom is indeed a Rayleigh distributed random variable. As with the Chi-squared distribution, the Chi distribution can also be classified as central Chi distribution or non-central Chi distribution, depending on whether the underlying normal distribution has zero mean or non-zero mean. A Chi random variable can be transformed into a Nakagami random variable (see section 2.3.12).
Fig. 2.14: Constructing a Chi distributed random variable with 2 degrees of freedom

Equation 2.15 describes the PDF of the central Chi distribution with k degrees of freedom, where Γ(·) is the Gamma function. The mean and variance are shown in equations 2.16 and 2.17 respectively.

f_{χ_k}(x) = 2^(1 − k/2) · x^(k−1) · e^(−x²/2) / Γ(k/2)    (2.15)

µ = √2 · Γ((k+1)/2) / Γ(k/2)    (2.16)

σ² = k − µ²    (2.17)
The following function transforms random numbers generated from a standard normal distribution (randn function) and produces a sequence of random numbers that follows the central Chi distribution.

Program 2.16: chiRV.m: Function for generating central-Chi distributed random numbers

function Y= chiRV(k,N)
%Central Chi distributed univariate random numbers
% k=degrees of freedom, N=Number of samples
Y=0;
for j=1:k, %Sum of square of k independent standard normal Random Variables
    Y= Y + randn(1,N).^2;
end
Y=sqrt(Y); %Taking Square-root
end

The following Matlab code tests the customized function for generating central Chi distributed random variables with degrees of freedom k = 1, 2, 3, 4 and 5. Finally, the PDFs of the simulated variables are plotted in Figure 2.15.

Program 2.17: chi_test.m: Simulating a central Chi distributed random variable and plotting its PDF

%Test and Demonstrate Central Chi Distribution
clearvars; clc;
k=[1,2,3,4,5];N = 10e5; %degrees of freedom & Number of Samples
lineColors=['r','g','b','c','m','y','k']; %line color arguments
legendString =cell(1,5); %cell array of string for printing legend
figure;hold on;
%Generating Chi Distribution with k degrees of freedom
for i=1:length(k), %outer loop - runs for each degrees of freedom
    Y= chiRV(k(i),N);
    [Q,X] = hist(Y,1000);%Histogram for each degree of freedom
    plot(X,Q/trapz(X,Q),lineColors(i));
    legendString{i}=strcat(' k=',num2str(i));
end
legend(legendString);
title('Central Chi Distributed RV - PDF','fontsize',12);
xlabel('Parameter - x');ylabel('f_X(x)');axis([0 4 0 1]);
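The square-root transformation can be verified directly against the theoretical mean of equation 2.16. The following is a standard-library Python sketch (function name and tolerances are mine; math.gamma plays the role of Γ):

```python
import math
import random
import statistics

def chi_rv(k, n, seed=1):
    """n draws of a central chi variable: sqrt of a chi-squared draw."""
    rng = random.Random(seed)
    return [math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(k)))
            for _ in range(n)]

k = 2
Y = chi_rv(k, 200000)
# Equation 2.16: mean = sqrt(2) * Gamma((k+1)/2) / Gamma(k/2)
mean_theory = math.sqrt(2) * math.gamma((k + 1) / 2) / math.gamma(k / 2)
print(statistics.mean(Y), mean_theory)  # both near sqrt(pi/2) for k = 2
```

For k = 2 the theoretical mean evaluates to √(π/2) ≈ 1.2533, and equation 2.17 gives the variance as k − µ², both of which the sample moments should reproduce.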
Fig. 2.15: Simulated PDFs of Chi distributed random variables
2.3.10 Rayleigh random variable The Rayleigh random variable is useful in modeling multipath fading, which is discussed in detail in chapter 11. The existence of a large number of scattered components in a multipath environment paves the way for applying the central limit theorem to the real and imaginary components of the multipath signals that arrive at the receiver front end. If the number of scattered paths is large, the in-phase and quadrature components can each be approximated as a Gaussian random variable, so the faded signal becomes a complex Gaussian random variable whose envelope is a Rayleigh random variable. Let us express the complex received signal as h = hr + j·hi. Applying the central limit theorem, the real and imaginary components of the faded signal can be modeled as zero-mean Gaussian random variables N(0, σ²) with mean µ = 0 and variance σ². Thus, the real and imaginary components of the fading individually follow the Gaussian PDF

f_X(y) = 1/(σ√(2π)) · exp(−(y − µ)²/(2σ²))    (2.18)

Expressing the fading signal in polar coordinates h = hr + j·hi = r·e^(jθ) and assuming a uniform distribution for the phase θ, the probability density function (PDF) of the envelope follows the Rayleigh distribution (shown in Figure 2.16)

f_R(|h|) = f_R(r) = (r/σh²) · exp(−r²/(2σh²))    (2.19)

Rayleigh distributed fading samples can be simply generated by using the randn function in Matlab as follows.
Program 2.18: rayleigh_pdf.m: Generating Rayleigh samples and plotting its PDF

%Simulating rayleigh variable and plotting PDF
s = 1.5; %sigma of individual gaussian rvs(real & imaginary parts)
h = s*(randn(1,10e4) + 1i * randn(1,10e4));%zero-mean gaussian rvs
[val,bin] = hist(abs(h),100); %PDF of envelope using hist function
plot(bin,val/trapz(bin,val),'r','LineWidth',2);%plot normalized pdf
title('Envelope pdf - Rayleigh distribution');
xlabel('|h|');ylabel('f(|h|)');axis tight;
%to plot theoretical pdf
r = 0:0.1:10; %rayleigh rv
pdf = (r/s^2).*exp(-r.^2/(2*s^2));%theoretical pdf
hold on; plot(r,pdf,'k','LineWidth',2); grid on;
legend('simulated','theoretical');
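The envelope construction can be cross-checked against the known Rayleigh mean, E[|h|] = σ√(π/2). A standard-library Python sketch of the same two-Gaussian construction (function name and tolerance are mine; the mean formula is a standard Rayleigh result, not stated in the text):

```python
import math
import random
import statistics

def rayleigh_envelope(sigma, n, seed=1):
    """|h| where h = hr + j*hi with i.i.d. zero-mean Gaussian parts."""
    rng = random.Random(seed)
    return [math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
            for _ in range(n)]

s = 1.5
r = rayleigh_envelope(s, 200000)
# Known result: E[|h|] = sigma * sqrt(pi/2), about 1.88 for sigma = 1.5
print(statistics.mean(r))
```

math.hypot computes the magnitude of the complex sample directly, mirroring abs(h) in the Matlab listing.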
2.3.11 Ricean random variable The Rayleigh variable is well suited for modeling a multipath channel where a dominant line-of-sight (LOS) path between the transmitter and the receiver is absent. If a line-of-sight path does exist, the envelope distribution is no longer Rayleigh, but Ricean. In such a case, the random multipath components get added to the strongest LOS component. The technique to generate a Ricean random variable and its PDF characteristics are discussed in detail in chapter 11, section 11.3.1.
Fig. 2.16: Rayleigh distributed envelope, generated from two zero-mean Gaussian random variables with σ = 1.5
2.3.12 Nakagami-m distributed random variable The Nakagami distribution, or Nakagami-m distribution, is related to the Gamma function. The Nakagami distribution is characterized by two parameters - the shape parameter (m) and the spread parameter (ω). The PDF of the Nakagami distribution is given in equation 2.20, with the mean and variance in equations 2.21 and 2.22 respectively.

f(y; m, ω) = (2m^m / (Γ(m)·ω^m)) · y^(2m−1) · exp(−(m/ω)·y²)    (2.20)

µ = (Γ(m + 0.5) / Γ(m)) · √(ω/m)    (2.21)

σ² = ω·[1 − (1/m)·(Γ(m + 0.5)/Γ(m))²]    (2.22)
The Nakagami distribution is useful in modeling wireless links exhibiting multipath fading. Rayleigh fading is a special case of Nakagami fading when the shape parameter m is set to 1. When m = 0.5, it is equivalent to a single-sided Gaussian distribution. The Nakagami fading model was initially proposed because it matches empirical results for short-wave ionospheric propagation. It also describes the amplitude of the received signal after maximum ratio combining (MRC) (refer to section 12.3.2). It is also used to model interference from multiple sources, since the sum of multiple independent and identically distributed (i.i.d) Rayleigh distributed random variables has a Nakagami distributed amplitude. Nakagami and Ricean fading behave similarly near their mean value. A Nakagami distributed random variable can be generated using the gamma function as well as from a Chi distributed random variable. In the following transformation, a Nakagami random variable (Y) is shown to be generated from a Chi random variable (χ).
Y = √(ω/(2m)) · χ(2m)    (2.23)
Using equation 2.23, the following steps are formulated to simulate a Nakagami-m random variable for a given shape parameter (m) and spread parameter (ω).

1. Simulate a Chi random variable with degrees of freedom k set to 2m
2. Multiply the result with √(ω/(2m))

A wrapper function is written around the chiRV function (see section 2.3.9) for generating a Nakagami distributed random variable.

Program 2.19: nakagamiRV.m: Function to generate Nakagami-m distributed random numbers

function Y= nakagamiRV(m,w,N)
%Generate univariate Nakagami distributed random numbers
% m=parameter to control shape, w=parameter to control spread
% N=number of samples to generate
Y=sqrt(w/(2*m))*chiRV(2*m,N);
end

The following code tests the above customized function for generating a Nakagami random variable for various values of the shape parameter (m) and spread parameter (ω). The PDFs of the simulated Nakagami random variable are plotted in Figure 2.17.

Program 2.20: nakagami_test.m: Simulate a Nakagami distributed random variable and plot its PDF

clearvars; clc;
m=[1.8 1 0.75 0.5]; w=0.2; %shape and spread parameters to test
N = 1000000; %Number of Samples
lineColors=['r','g','b','c','m','y','k']; %line color arguments
legendString =cell(1,4);
figure; hold on;
for i=1:length(m), %loop - runs for each shape parameter
    Y = nakagamiRV(m(i),w,N);
    [Q,X] = hist(Y,1000);%histogram
    plot(X,Q/trapz(X,Q),lineColors(i));%plot estimated PDF
    legendString{i}=strcat('m=',num2str(m(i)),',\omega=',num2str(w));
end
legend(legendString);title('Nakagami-m - PDF ');
xlabel('Parameter - y');ylabel('f_Y(y)');axis([0 2 0 2.6]);
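The two-step recipe of equation 2.23 can be checked against the theoretical mean of equation 2.21. The following is a standard-library Python sketch (function name and tolerance are mine; note that this Chi-based route works directly only when 2m is an integer, since the sketch builds the Chi draw from 2m squared normals, whereas the gamma-based generator handles arbitrary m):

```python
import math
import random
import statistics

def nakagami_rv(m, w, n, seed=1):
    """Scale a chi draw with 2m degrees of freedom by sqrt(w/(2m))."""
    rng = random.Random(seed)
    k = int(round(2 * m))          # 2m assumed integer for this construction
    scale = math.sqrt(w / (2 * m))  # sqrt(omega/(2m)) from equation 2.23
    return [scale * math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(k)))
            for _ in range(n)]

m, w = 1.0, 0.2
Y = nakagami_rv(m, w, 200000)
# Equation 2.21: mean = Gamma(m+0.5)/Gamma(m) * sqrt(w/m)
mean_theory = math.gamma(m + 0.5) / math.gamma(m) * math.sqrt(w / m)
print(statistics.mean(Y), mean_theory)
```

With m = 1 this reduces to a scaled Rayleigh variable, consistent with Rayleigh fading being the m = 1 special case mentioned above.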
2.4 Central limit theorem - a demonstration The central limit theorem states that the sum of independent and identically distributed (i.i.d) random variables (with finite mean and variance) approaches the normal distribution as the sample size N → ∞. In simpler terms, the theorem states that under certain general conditions, the sum of independent observations that follow the same underlying distribution approximates the normal distribution, and the approximation steadily improves as the number of observations increases. The underlying distribution of the independent observations can be anything - binomial, Poisson, exponential, Chi-squared, etc.
Fig. 2.17: Simulated PDFs of Nakagami distributed random variables
The central limit theorem is applied in a vast range of applications including (but not limited to) signal processing, channel modeling, random processes, population statistics, engineering research, predicting confidence intervals, hypothesis testing, etc. One such application in signal processing is deriving the response of a cascaded series of low-pass filters by applying the central limit theorem. In the article titled the central limit theorem and low-pass filters, the author has illustrated how the response of a cascaded series of low-pass filters approaches a Gaussian shape as the number of filters in the series increases [6]. In digital communication, the effect of noise on a communication channel is modeled as additive white Gaussian noise. This follows from the fact that the noise from many physical channels can be considered approximately Gaussian. For example, the random movement of electrons in semiconductor devices gives rise to shot noise whose effect can be approximated by a Gaussian distribution by applying the central limit theorem. The following codes illustrate how the central limit theorem comes into play when the number of observations is increased, for two separate experiments: rolling N unbiased dice and tossing N coins. The following function, written in Matlab, generates and sums N i.i.d discrete uniform random variables, each drawing uniform random numbers from the set (1, k). In the case of the dice rolling experiment, k is set to 6, thus simulating a random pick from the sample space S = {1, 2, 3, 4, 5, 6} with equal probability. For the coin tossing experiment, k is set to 2, thus simulating the sample space S = {1, 2} representing head or tail events with equal probability.
Program 2.21: sumiidUniformRV.m: Generate N i.i.d uniformly distributed discrete random variables

function [Y]=sumiidUniformRV(N,k,nSamp)
% Generate N i.i.d uniformly distributed discrete random variables
% each with nSamp samples in the interval 1:k and sum them up
Y=0;
for i=1:N,
    Y=Y+ceil(k*rand(1,nSamp));
end
end

The next section of code plots the histogram of the sum of N i.i.d random variables for both experiments. It is evident from the plots 2.18 and 2.19 that as the number of observations (N) is increased, the combined sum of all the N i.i.d random variables approximates a Gaussian distribution.
2 Random Variables - Simulating Probabilistic Systems
Program 2.22: CLT test.m: Demonstrating Central Limit Theorem

%---------Central limit theorem using N dice-----------------------
number_of_dice = [1 2 5 10 20 50]; %number of i.i.d RVs
k=6; %Max number on a dice - to produce uniform rv [1,...,6]
nSamp=100000; figure; j=1; %nSamp - number of samples to generate
for N=number_of_dice,
    Y=sumiidUniformRV(N,k,nSamp);
    subplot(3,2,j); histogram(Y,'normalization','pdf');
    title(['N = ',num2str(N),' dice']);
    xlabel('possible outcomes'); ylabel('probability');
    pause; j=j+1;
end
%----Central limit theorem using N coins---------------------------
number_of_coins = [1 2 5 10 50 100]; %number of i.i.d RVs
k=2; %Generating uniform random numbers from [Head=1,Tail=2]
nSamp=10000; figure; j=1; %nSamp - number of samples to generate
for N=number_of_coins,
    Y=sumiidUniformRV(N,k,nSamp);
    subplot(3,2,j); histogram(Y,'normalization','pdf');
    title(['N = ',num2str(N),' coins']);
    xlabel('possible outcomes'); ylabel('probability');
    pause; j=j+1;
end
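The histograms above approximate the exact distribution of the sum, which can also be computed by repeatedly convolving the single-die PMF with itself - this makes the CLT scaling (mean Nμ, variance Nσ²) visible without any sampling. A pure-Python sketch (standard library only; the convolve helper is illustrative):

```python
def convolve(p, q):
    """Discrete convolution of two PMFs given as lists over supports starting at 0."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

die = [1.0 / 6] * 6          # P(die shows 1..6), stored at offsets 0..5
N = 10                       # number of dice
pmf = [1.0]                  # PMF of an empty sum
for _ in range(N):
    pmf = convolve(pmf, die)
# the sum's support is N..6N; pmf[i] = P(sum = N + i)
mean = sum((N + i) * p for i, p in enumerate(pmf))
var = sum((N + i) ** 2 * p for i, p in enumerate(pmf)) - mean ** 2
print(mean, var)  # CLT scaling: mean = 3.5*N, variance = N*35/12
```

For a single die, μ = 3.5 and σ² = 35/12, so the exact moments of the 10-dice sum are 35 and 350/12 ≈ 29.17, consistent with the Gaussian the histograms converge toward.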
Fig. 2.18: Demonstrating Central Limit Theorem by rolling N unbiased dice
Fig. 2.19: Demonstrating Central Limit Theorem by tossing N unbiased coins
2.5 Generating correlated random variables

2.5.1 Generating two sequences of correlated random variables

Generating two sequences of correlated random numbers, given the correlation coefficient ρ, is implemented in two steps. The first step is to generate two uncorrelated random sequences from an underlying distribution. Normally distributed random sequences are considered here.

Program 2.23: Step 1: Generate two uncorrelated random sequences

x1=randn(1,100); %Normal random numbers sequence 1
x2=randn(1,100); %Normal random numbers sequence 2
subplot(1,2,1); plot(x1,x2,'r*');
title('Uncorrelated RVs X_1 and X_2');
xlabel('X_1'); ylabel('X_2');

In the second step, the required correlated sequence is generated as

  Z = ρ X1 + √(1 − ρ²) X2
Program 2.24: Step 2: Generate correlated random sequence z

rho=0.9;
z=rho*x1+sqrt(1-rho^2)*x2; %generate correlated sequence
subplot(1,2,2); plot(x1,z,'r*');
title(['Correlated RVs X_1 and Z , \rho=',num2str(rho)]);
xlabel('X_1'); ylabel('Z');

The resulting sequence Z will have ρ correlation with respect to X1. The results are plotted in Figure 2.20.
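The same two-step construction can be cross-checked numerically in plain Python (standard library only; the hand-written corr helper computes the sample correlation coefficient):

```python
import math
import random

rng = random.Random(7)
n = 100000
x1 = [rng.gauss(0.0, 1.0) for _ in range(n)]   # uncorrelated sequence 1
x2 = [rng.gauss(0.0, 1.0) for _ in range(n)]   # uncorrelated sequence 2
rho = 0.9
z = [rho * a + math.sqrt(1.0 - rho ** 2) * b for a, b in zip(x1, x2)]

def corr(u, v):
    """Sample correlation coefficient of two equal-length sequences."""
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = math.sqrt(sum((a - mu) ** 2 for a in u))
    dv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv)

print(corr(x1, z))  # should land close to rho = 0.9
```

The construction works because scaling X2 by √(1 − ρ²) keeps the variance of Z at unity while the ρX1 term contributes exactly ρ correlation with X1.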
Fig. 2.20: Scatter plots of the uncorrelated RVs X1 and X2 (left) and the correlated RVs X1 and Z with ρ = 0.9 (right)
2.5.2 Generating multiple sequences of correlated random variables using Cholesky decomposition Generation of multiple sequences of correlated random variables, given a correlation matrix is discussed here.
Correlation matrix

A correlation matrix, denoted as C, defines the correlation among N variables. It is a symmetric N × N matrix with the (i,j)th element equal to the correlation coefficient r_ij between the ith and the jth variable. The diagonal elements (correlations of variables with themselves) are always equal to 1. As an example, let's say we would like to generate three sets of random sequences X, Y, Z with the following correlation relationships.
• Correlation coefficient between X and Y is 0.5
• Correlation coefficient between X and Z is 0.3
• Obviously the variable X correlates with itself 100% - i.e, the correlation coefficient is 1
Putting all these relationships in a compact matrix form gives the correlation matrix. We take an arbitrary correlation value (ρ = 0.3) for the relationship between Y and Z.

           X    Y    Z
        [  1   0.5  0.3 ]  X
    C = [ 0.5   1   0.3 ]  Y        (2.24)
        [ 0.3  0.3   1  ]  Z
Now, the task is to generate three sets of random numbers X, Y and Z that follow the relationships above. The problem can be addressed in many ways. One of the methods is to use Cholesky decomposition.
Cholesky decomposition method

Cholesky decomposition [7] is the matrix equivalent of taking the square root of a given matrix. As with scalar values, a real square root is only possible if the given number is positive (imaginary roots exist otherwise). Similarly, if a matrix needs to be decomposed into a square-root equivalent, the matrix needs to be positive definite. The method discussed here seeks to decompose the given correlation matrix using Cholesky decomposition:

  C = UᵀU = LLᵀ    (2.25)

where U and L are upper and lower triangular matrices. We will consider the upper triangular matrix here. Equivalently, the lower triangular matrix can also be used, in which case the order of the output needs to be reversed.

  C = UᵀU    (2.26)

For this decomposition to work, the correlation matrix should be positive definite. The correlated random sequences Rc = [X, Y, Z] (where X, Y, Z are column vectors) that follow the above relationship can be generated by multiplying the uncorrelated random numbers R with U:

  Rc = R U    (2.27)

For the simulation, generate three sequences of uncorrelated random numbers R - each drawn from a normal distribution. For the three-variable case, the R matrix will be of size k × 3, where k is the number of samples we wish to generate, and we allocate the k samples in three columns, one for each of the variables X, Y and Z. Multiply this matrix with the Cholesky decomposed upper triangular version of the correlation matrix. The scatter plots of the simulated random sequences are plotted in Figure 2.21.

Program 2.25: correlated rv multiple.m: Generating multiple sequences of correlated random variables

C=[ 1 0.5 0.3;
    0.5 1 0.3;
    0.3 0.3 1 ;]; %Correlation matrix
U=chol(C); %Cholesky decomposition
R=randn(10000,3); %Random data in three columns each for X,Y and Z
Rc=R*U; %Correlated matrix Rc=[X Y Z], RVs X,Y,Z are column vectors
%Rest of the section deals with verification and plotting
%Verify Correlation-Coeffs of generated vectors
coeffMatrixXX=corrcoef(Rc(:,1),Rc(:,1));
coeffMatrixXY=corrcoef(Rc(:,1),Rc(:,2));
coeffMatrixXZ=corrcoef(Rc(:,1),Rc(:,3));
%Extract the required correlation coefficients
coeffXX=coeffMatrixXX(1,2) %Correlation Coeff for XX
coeffXY=coeffMatrixXY(1,2) %Correlation Coeff for XY
coeffXZ=coeffMatrixXZ(1,2) %Correlation Coeff for XZ
%Scatterplots
subplot(1,3,1);plot(Rc(:,1),Rc(:,1),'b.');
title(['X and X, \rho=' num2str(coeffXX)]);xlabel('X');ylabel('X');
subplot(1,3,2);plot(Rc(:,1),Rc(:,2),'r.');
title(['X and Y, \rho=' num2str(coeffXY)]);xlabel('X');ylabel('Y');
subplot(1,3,3);plot(Rc(:,1),Rc(:,3),'k.');
title(['X and Z,\rho=' num2str(coeffXZ)]);xlabel('X');ylabel('Z');
Fig. 2.21: Scatter plots for generated random variables X,Y and Z
2.6 Generating correlated Gaussian sequences

Speaking of Gaussian random sequences such as Gaussian noise, we generally think that the power spectral density (PSD) of such Gaussian sequences is flat. We should understand that the PSD of a Gaussian sequence need not be flat. This brings out the difference between white and colored random sequences, as captured in Figure 2.22.

A white noise sequence is defined as any random sequence whose PSD is constant across all frequencies. Gaussian white noise is a Gaussian random sequence whose amplitude is Gaussian distributed and whose PSD is a constant. Viewed in another way, a constant PSD in the frequency domain implies that the average auto-correlation function in the time domain is an impulse function (Dirac delta function). That is, the amplitude of noise at any given time instant is correlated only with itself. Therefore, such sequences are also referred to as uncorrelated random sequences. White Gaussian noise processes are completely characterized by their mean and variance.

A colored noise sequence is simply a non-white random sequence, whose PSD varies with frequency. For a colored noise, the amplitude of noise at any given time instant is correlated with the amplitude of noise occurring at other instants of time. Hence, colored noise sequences have an auto-correlation function other than the impulse function. Such sequences are also referred to as correlated random sequences. Colored Gaussian noise processes are completely characterized by their mean and the shape of the power spectral density (or, equivalently, the shape of the auto-correlation function). In mobile channel model simulations, it is often required to generate correlated Gaussian random sequences with a specified mean and power spectral density (like the Jakes PSD or Gaussian PSD given in section 11.3.2).
An uncorrelated Gaussian sequence can be transformed into a correlated sequence through filtering or linear transformation that preserves the Gaussian distribution property of the amplitudes, but alters only the correlation property (equivalently, the power spectral density). We shall see two methods to generate colored Gaussian noise for a given mean and PSD shape: the spectral factorization method and the auto-regressive (AR) model.
Motivation

Let's say we observe a real world signal y[n] that has an arbitrary spectrum Y(f). We would like to describe the long sequence of y[n] using very few parameters, as in applications like linear predictive coding (LPC). The modeling approach described here tries to answer the following two questions:
Fig. 2.22: Power spectral densities of white noise and colored noise
• Is it possible to model the first order (mean/variance) and second order (correlations, spectrum) statistics of the signal just by shaping a white noise spectrum using a transfer function? (see Figure 2.22).
• Does this produce the same statistics (spectrum, correlations, mean and variance) for a white noise input?
If the answer is yes to the above two questions, we can simply set the modeled parameters of the system and excite the system with white noise, to produce the desired real world signal. This reduces the amount of data we wish to transmit in a communication system application. This approach can be used to transform an uncorrelated white Gaussian noise sequence into a colored Gaussian noise sequence with desired spectral properties.
Linear time invariant (LTI) system model

In the given model, the random signal y[n] is observed. Given the observed signal y[n], the goal here is to find a model that best describes the spectral properties of y[n] under the following assumptions:
• The sequence y[n] is WSS (wide sense stationary) and ergodic.
• The input sequence x[n] to the LTI system is white noise, whose amplitudes follow a Gaussian distribution with zero mean and variance σx², with a flat power spectral density.
• The LTI system is BIBO (bounded input bounded output) stable.
2.6.1 Spectral factorization method

In the spectral factorization method, a filter is designed using the desired frequency domain characteristics (like the PSD) to transform an uncorrelated Gaussian sequence x[n] into a correlated sequence y[n]. In the model shown in Figure 2.23, the input to the LTI system is white noise whose amplitude follows a Gaussian distribution with zero mean and variance σx², and the power spectral density (PSD) of the white noise x[n] is a constant across all frequencies:

  Sxx(f) = σx²,  ∀f    (2.28)

The white noise sequence x[n] drives the LTI system with frequency response H(f), producing the signal of interest y[n]. The PSD of the output process is therefore

  Syy(f) = Sxx(f) |H(f)|² = σx² |H(f)|²    (2.29)

If the desired power spectral density Syy(f) of the colored noise sequence y[n] is given, assuming σx² = 1, the impulse response h[n] of the LTI filter can be found by taking the inverse Fourier transform of the frequency response H(f)
  H(f) = √(Syy(f))    (2.30)
Once, the impulse response h[n] of the filter is obtained, the colored noise sequence can be produced by driving the filter with a zero-mean white noise sequence of unit variance.
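The time-domain counterpart of equation 2.29 is Ryy[l] = σx² ∑_k h[k] h[k+l]: filtering white noise imprints the filter's deterministic autocorrelation on the output. A pure-Python sketch with an arbitrary 2-tap filter (illustrative only - not the Jakes filter of the next example; standard library only):

```python
import random

rng = random.Random(3)
n = 200000
x = [rng.gauss(0.0, 1.0) for _ in range(n)]    # white noise, sigma_x^2 = 1
h = [0.5, 0.5]                                 # toy 2-tap shaping filter
# 'valid' part of the convolution y[n] = sum_k h[k] x[n-k]
y = [h[0] * x[i] + h[1] * x[i - 1] for i in range(1, n)]

def autocorr(v, lag):
    """Biased sample autocorrelation estimate at the given lag."""
    m = len(v) - lag
    return sum(v[i] * v[i + lag] for i in range(m)) / m

# theory: Ryy[l] = sigma_x^2 * sum_k h[k]*h[k+l] -> 0.5, 0.25, 0 for l = 0,1,2
print(autocorr(y, 0), autocorr(y, 1), autocorr(y, 2))
```

The output is no longer "uncorrelated": adjacent samples share half of their energy through the filter memory, which is exactly the coloring effect the spectral factorization method exploits.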
Fig. 2.23: Relationship among various power spectral densities in a filtering process
2.6.1.1 Example: Generating colored noise with Jakes PSD

For example, we wish to generate a Gaussian noise sequence whose power spectral density follows the normalized Jakes power spectral density (see section 11.3.2) given by

  Syy(f) = 1 / ( π fmax √(1 − (f/fmax)²) ),   |f| ≤ fmax    (2.31)
Applying the spectral factorization method, the frequency response of the desired filter is

  H(f) = √(Syy(f)) = (π fmax)^(−1/2) [1 − (f/fmax)²]^(−1/4)    (2.32)
The impulse response of the filter is [8]

  h[n] = C fmax x^(−1/4) J_{1/4}(x),   x = 2π fmax |nTs|    (2.33)

where J_{1/4}(·) is the fractional Bessel function of the first kind, Ts is the sampling interval for implementing the digital filter and C is a constant. The impulse response of the filter can be normalized by dividing h[n] by C fmax:

  hnorm[n] = x^(−1/4) J_{1/4}(x),   x = 2π fmax |nTs|    (2.34)

The filter h[n] can be implemented as a finite impulse response (FIR) filter structure. However, the FIR implementation requires that the impulse response h[n] be truncated to a reasonable length. Such truncation leads to ringing effects due to the Gibbs phenomenon. To avoid distortions due to truncation, the filter impulse response h[n] is usually windowed using a window function such as the Hamming window

  hw[n] = hnorm[n] wH[n]    (2.35)

where the Hamming window is defined as

  wH[n] = 0.54 − 0.46 cos(2πn/N)    (2.36)

The following function implements a windowed Jakes filter using the aforementioned equations. The impulse response and the spectral characteristics of the filter are plotted in Figure 2.24.
Program 2.26: Jakes filter.m: Jakes filter using spectral factorization method

function [hw] = Jakes_filter(f_max,Ts,N)
%FIR channel shaping filter with Jakes doppler spectrum
%S(f) = 1/ [sqrt(1-(f/f_max)^2)*pi*f_max]
%Use impulse response obtained from spectral factorization
%Input parameters are
%f_max = maximum doppler frequency in Hz
%Ts = sampling interval in seconds
%N = desired filter order
%Returns the windowed impulse response of the FIR filter
%Example: fd=10; Ts=0.01; N=512; Jakes_filter(fd,Ts,N)
L=N/2; n=1:L;
J_pos=besselj(0.25,2*pi*f_max*n*Ts)./(n.^0.25);
J_neg = fliplr(J_pos);
J_0 = 1.468813*(f_max*Ts)^0.25;
J = [J_neg J_0 J_pos]; %Jakes filter
%Shaping filter smoothed using Hamming window
n=0:N; hamm = 0.54-0.46*cos(2*pi*n/N);%Hamming window
hw = J.*hamm;%multiply Jakes filter with the Hamming window
hw = hw./sqrt(sum(abs(hw).^2));%Normalized impulse response
%Plot impulse response and spectrum
figure; subplot(1,2,1); plot(hw); axis tight;
title('Windowed impulse response');
xlabel('n'); ylabel('h_w[n]');
%use the function given in section 1.3.4 to plot the spectrum
[fftVals,freqVals]=freqDomainView(hw,1/Ts,'double');%section 1.3.4
H_w = (Ts/length(hw))*abs(fftVals).^2;%PSD H(f)
subplot(1,2,2); plot(freqVals,H_w); grid on; xlim([-20 20]);
title('Jakes Spectrum');xlabel('f');ylabel('|H_{w}(f)|^2');
end

White noise can be transformed into a colored noise sequence with the Jakes PSD by processing the white noise through the implemented filter. The following script illustrates this concept by transforming a white noise sequence into a colored noise sequence. The simulated noise samples and their PSD are plotted in Figure 2.25.
Program 2.27: colored noise example.m: Simulating colored noise with Jakes PSD

%-- Colored noise generation with Jakes PSD --
fd=10; %maximum Doppler frequency
Fs=100;%sampling frequency
N=512;%order of Jakes filter
Ts=1/Fs;%sampling interval
[h] = Jakes_filter(fd,Ts,512);%design Jakes filter
%--Exciting the filter with white noise--
x=randn(1,10000); %Generate white noise of 10,000 samples
y = conv(x,h,'valid');%convolve with Jakes filter
%--Plot time domain and freq domain view of colored noise y--
subplot(1,2,1); plot(log10(abs(y))); axis tight; %Noise envelope
xlabel('n'); ylabel('y[n]');title('Noise samples')
[fftVals,freqVals]=freqDomainView(y,1/Ts,'double');
S_yy = (Ts/length(y))*abs(fftVals).^2;
subplot(1,2,2); plot(freqVals,S_yy);%PSD of generated noise
title('Noise spectrum');xlabel('f');ylabel('|S_{yy}(f)|^2');
Fig. 2.24: Impulse response and spectrum of the windowed Jakes filter (fmax = 10 Hz, Ts = 0.01 s, N = 512)
Fig. 2.25: Simulated colored noise samples and its power spectral density (fmax = 10 Hz)
2.6.2 Auto-Regressive (AR) model

An uncorrelated Gaussian random sequence x[n] can be transformed into a correlated Gaussian random sequence y[n] using an AR time-series model. If a time series random sequence is assumed to follow an auto-regressive AR(N) model of the form

  y[n] + a1 y[n−1] + a2 y[n−2] + ··· + aN y[n−N] = x[n]    (2.37)
where x[n] is the uncorrelated Gaussian sequence of zero mean and variance σx², the natural tendency is to estimate the model parameters a1, a2, ..., aN. The least squares method can be applied here to find the model parameters, but the computations become cumbersome as the order N increases. Fortunately, the AR model coefficients can be solved for using Yule-Walker equations. Yule-Walker equations relate the auto-regressive model parameters a1, a2, ..., aN to the auto-correlation Ryy[k] of the random process y[n]. Finding the model parameters using Yule-Walker equations is a two step process:
• Given y[n], estimate the auto-correlation of the process Ryy[n]. If Ryy[n] is already specified as a function, utilize it as it is (see the auto-correlation equations for the Jakes spectrum or Doppler spectrum in section 11.3.2).
• Solve the Yule-Walker equations to find the model parameters ak (k = 0, 1, ..., N) and the noise variance σx².
The AR time-series model in equation 2.37 can be written in compact form as

  ∑_{k=0}^{N} ak y[n−k] = x[n],   a0 = 1    (2.38)
Multiplying equation 2.38 by y[n−l] on both sides and taking the expectation E{·},

  ∑_{k=0}^{N} ak E{y[n−k] y[n−l]} = E{x[n] y[n−l]},   a0 = 1    (2.39)

Now, we can readily identify the auto-correlation Ryy(·) and cross-correlation Rxy(·) terms as

  E{y[n−k] y[n−l]} = Ryy(l−k),   E{x[n] y[n−l]} = Rxy(l)    (2.40)

  ∑_{k=0}^{N} ak Ryy[l−k] = Rxy[l],   a0 = 1    (2.41)
The next step is to compute the cross-correlation term Rxy(l) and relate it to the auto-correlation term Ryy(l−k). The term y[n−l] can also be obtained from equation 2.37 as

  y[n−l] = − ∑_{k=1}^{N} ak y[n−k−l] + x[n−l]    (2.42)

We note that the past output samples are uncorrelated with the present input: E{y[n−k−l] x[n]} = 0 for k ≥ 1, l ≥ 0. Also, the auto-correlation of the uncorrelated sequence x[n] is zero at all lags except at lag 0, where its value is equal to σx². These two properties are used in the following steps. Restricting the lags only to zero and positive values,
  Rxy(l) = E{x[n] y[n−l]}                                             (2.43)
         = E{x[n] ( − ∑_{k=1}^{N} ak y[n−k−l] + x[n−l] )}             (2.44)
         = E{ − ∑_{k=1}^{N} ak y[n−k−l] x[n] } + E{x[n−l] x[n]}       (2.45)
         = E{x[n−l] x[n]} = { 0,    l > 0
                            { σx²,  l = 0                             (2.46)
Substituting equation 2.46 in equation 2.41,

  ∑_{k=0}^{N} ak Ryy[l−k] = { 0,    l > 0
                            { σx²,  l = 0 ,   a0 = 1    (2.47)
Here, there are two cases to be solved: l > 0 and l = 0. For the l > 0 case, equation 2.47 becomes

  ∑_{k=1}^{N} ak Ryy[l−k] = −Ryy[l],   l > 0    (2.48)
Note that the lower limit of the summation has changed from k = 0 to k = 1. Now, equation 2.48 written in matrix form gives the Yule-Walker equations, which comprise a set of N linear equations in N unknown parameters:

  [ Ryy[0]      Ryy[−1]    ···  Ryy[1−N] ] [ a1 ]       [ Ryy[1] ]
  [ Ryy[1]      Ryy[0]     ···  Ryy[2−N] ] [ a2 ]  = −  [ Ryy[2] ]    (2.49)
  [   ⋮            ⋮        ⋱      ⋮     ] [  ⋮ ]       [   ⋮    ]
  [ Ryy[N−1]    Ryy[N−2]   ···  Ryy[0]   ] [ aN ]       [ Ryy[N] ]

Representing equation 2.49 in compact form,

  Ryy a = −ryy    (2.50)

The AR model parameters a can be found by solving

  a = −Ryy⁻¹ ryy    (2.51)
After solving for the model parameters ak, the noise variance σx² can be found by applying the estimated values of ak in equation 2.47 by setting l = 0. Matlab's aryule command efficiently solves the Yule-Walker equations using the Levinson-Durbin algorithm [7][9]. Once the model parameters ak are obtained, the AR model can be implemented as an infinite impulse response (IIR) filter of the form

  H(z) = 1 / ( 1 + ∑_{k=1}^{N} ak z^(−k) )    (2.52)
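The Yule-Walker solution can be illustrated end-to-end for a small AR(2) model in plain Python: compute the theoretical autocorrelation implied by assumed parameters (a1, a2 chosen arbitrarily for this demo), then solve the 2×2 system of equation 2.49 by hand and confirm the parameters come back:

```python
def ar2_autocorr(a1, a2, max_lag):
    """Theoretical normalized autocorrelation r[l] of the AR(2) process
    y[n] + a1*y[n-1] + a2*y[n-2] = x[n], from the Yule-Walker recursion:
    r[1] = -a1/(1+a2) and r[l] = -a1*r[l-1] - a2*r[l-2] for l >= 2."""
    r = [1.0, -a1 / (1.0 + a2)]
    for l in range(2, max_lag + 1):
        r.append(-a1 * r[l - 1] - a2 * r[l - 2])
    return r

a1, a2 = -0.6, 0.2                 # 'true' model parameters for the demo
r = ar2_autocorr(a1, a2, 2)
# Yule-Walker system (2.49) for N = 2:  [r0 r1; r1 r0] [c1; c2] = -[r1; r2]
det = r[0] * r[0] - r[1] * r[1]
c1 = -(r[0] * r[1] - r[1] * r[2]) / det
c2 = -(r[0] * r[2] - r[1] * r[1]) / det
print(c1, c2)  # solving the equations recovers a1 and a2
```

In practice one would feed estimated autocorrelations (or a specified Doppler autocorrelation) into the same system, which is exactly what aryule automates via the Levinson-Durbin recursion.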
To solve the Yule-Walker equations, one has to know the model order N. Although the Yule-Walker equations can be used to find the model parameters, they cannot give any insight into the model order N. Some of the methods to ascertain the model order are listed next for completeness; the treatment of such techniques is beyond the scope of this book.
• The autocorrelation function (ACF) and partial autocorrelation function (PACF) may give insight into the model order of the underlying process.
• The Akaike information criterion (AIC) associates the number of model parameters with the goodness of fit. It also includes a penalty factor to avoid over-fitting (increasing the model order always improves the fit, so the fitting process has to be stopped somewhere) [10].
• The Bayesian information criterion (BIC) is a method very similar to AIC [11].
• The cross validation method, in which the given time series is broken into subsets. One subset of data is taken at a time and the model parameters are estimated. The estimated model parameters are cross validated with data from the remaining subsets [12].
2.6.2.1 Example: 1/f^α power law noise generation

The power law 1/f^α in the power spectrum characterizes the fluctuating observables in many natural systems. Many natural systems exhibit 1/f^α noise, which is a stochastic process with a power spectral density having a power exponent that can take values −2 ≤ α ≤ 2. Simply put, 1/f^α noise is a colored noise with a power spectral density of 1/f^α over its entire frequency range:

  S(f) = 1 / |f|^α    (2.53)
The 1/f^α noise can be classified into different types based on the value of α:
• Violet noise - α = −2, the power spectral density is proportional to f².
• Blue noise - α = −1, the power spectral density is proportional to f.
• White noise - α = 0, the power spectral density is flat across the whole spectrum.
• Pink noise - α = 1, the power spectral density is proportional to 1/f, i.e., it decreases by 3 dB per octave with increase in frequency.
• Brownian noise - α = 2, the power spectral density is proportional to 1/f², therefore it decreases by 6 dB per octave with increase in frequency.
The power law noise can be generated by sequencing a zero-mean white noise through an auto-regressive (AR) filter of order N:

  ∑_{k=0}^{N} ak y[n−k] = x[n]    (2.54)

where x[n] is a zero-mean white noise process. Referring to the AR generation method described in [13], the coefficients of the AR filter can be generated recursively as

  a0 = 1,   ak = (k − 1 − α/2) · a_{k−1} / k    (2.55)
which can be implemented as an infinite impulse response (IIR) filter using the filter transfer function described in equation 2.52. The following script implements this method and the sample results are plotted in Figure 2.26. The script uses the plotWelchPSD function described in section 1.4 of chapter 1.

Program 2.28: power law noise.m: Generate power law colored noises and plot their spectral density

alpha = 1; %valid values for alpha: -2 <= alpha <= 2
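Independently of the plotting script, the coefficient recursion of equation 2.55 can be verified in a few lines of plain Python (illustrative helper name). For α = 2 the recursion collapses to a = [1, −1, 0, 0, ...], i.e. H(z) = 1/(1 − z⁻¹), a pure integrator - which is the classical way Brownian noise is generated from white noise:

```python
def powerlaw_ar_coeffs(alpha, n_coeffs):
    """AR coefficients of equation 2.55: a0 = 1, ak = (k-1-alpha/2)*a(k-1)/k."""
    a = [1.0]
    for k in range(1, n_coeffs):
        a.append((k - 1 - alpha / 2.0) * a[k - 1] / k)
    return a

brown = powerlaw_ar_coeffs(2, 5)   # Brownian noise: a = [1, -1, 0, 0, ...]
pink = powerlaw_ar_coeffs(1, 4)    # pink noise: slowly decaying taps
print(brown)
print(pink)                        # [1.0, -0.5, -0.125, -0.0625]
```

For fractional α the coefficients decay slowly and never terminate, so in practice the IIR filter is truncated at some order N, exactly as in the AR implementation above.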
x = rand(1,k) > 0.5; %randomized input message to block encoder
c = mod(x*G,2) %codeword generated by the block encoder
%--------Channel----------------------
d = 1; %d-bit error pattern to test with
%all possible combinations of d-bit error patterns
[E,nRows,nCols] = error_pattern_combos(d,n);
%randomly select a d-bit error pattern from the matrix of
%all d-bit error patterns
index = ceil(rand * size(E,1));
e = E(index,:) %randomly selected error pattern
r = mod(c + e,2) %received vector containing errors
%--------Receiver with syndrome table decoder---------------------
s = mod(r*H.',2) %find the syndrome
s_dec = bi2de(s,'left-msb') %sorted syndromes (first column in table)
%read corresponding error pattern from second column
errorEst=syndromeTable(s_dec+1,2);%point at index s_dec and return e
errorEst = cell2mat(errorEst) %to convert cell to matrix
recoveredCW = mod(r - errorEst,2) %recovered codeword
if (recoveredCW==c), disp('Decoder success');
else disp('Decoder error'); end
4 Linear Block Coding
4.6 Some classes of linear block codes Some linear block codes that are frequently used in practice are briefly described in this section.
4.6.1 Repetition codes

A binary repetition code, denoted as an (n, 1) code, is a linear block code having two codewords of length n: the all-zero codeword and the all-one codeword. It has a code rate of R = 1/n and a minimum distance dmin = n. The generator matrix and the parity-check matrix are of size 1 × n and (n − 1) × n respectively:

  G = [ 1 1 1 ··· 1 ]

      [ 1 0 0 ··· 0 1 ]
      [ 0 1 0 ··· 0 1 ]
  H = [ 0 0 1 ··· 0 1 ]    (4.35)
      [ ⋮ ⋮      ⋱   ⋮ ]
      [ 0 0 0 ··· 1 1 ]
4.6.2 Hamming codes

Linear binary Hamming codes fall under the category of linear block codes that can correct single bit errors. For every integer p ≥ 3 (the number of parity bits), there is a (2^p − 1, 2^p − p − 1) Hamming code. Here, 2^p − 1 is the number of symbols in the encoded codeword and 2^p − p − 1 is the number of information symbols the encoder can accept at a time. All such Hamming codes have a minimum Hamming distance dmin = 3 and thus they can correct any single bit error and detect any two bit errors in the received vector. The characteristics of a generic (n, k) Hamming code are given below:

  Codeword length:               n = 2^p − 1
  Number of information symbols: k = 2^p − p − 1
  Number of parity symbols:      n − k = p
  Minimum distance:              dmin = 3
  Error correcting capability:   t = 1          (4.36)
With the simplest configuration, p = 3, we get the most basic (7, 4) binary Hamming code. The block encoder accepts blocks of 4 bits of information, adds 3 parity bits to each such block and produces 7-bit wide Hamming coded blocks. Hamming codes can be implemented in systematic or non-systematic form. A systematic linear block code can be converted to non-systematic form by elementary matrix transformations. A systematic Hamming code is described next. Let a codeword belonging to the (7, 4) Hamming code be represented by [D1 D2 D3 D4 P1 P2 P3], where D represents information bits and P represents parity bits at the respective bit positions. Given the information bits, the parity bits are calculated from three linearly independent equations using modulo-2 additions:

  P1 = D1 ⊕ D2 ⊕ D3
  P2 = D2 ⊕ D3 ⊕ D4
  P3 = D1 ⊕ D3 ⊕ D4    (4.37)
Figure 4.8 illustrates the general process of computing the parity bits for (7, 4) Hamming code.
Fig. 4.8: Computing the parity bits for the (7,4) Hamming code

At the transmitter side, a Hamming encoder implements a generator matrix G. It is easier to construct the generator matrix from the linear equations listed in equation 4.37. The linear equations show that the information bit D1 influences the calculation of parities at P1 and P3. Similarly, the information bit D2 influences P1 and P2, D3 influences P1, P2 & P3, and D4 influences P2 & P3.

       D1 D2 D3 D4 P1 P2 P3
      [ 1  0  0  0  1  0  1 ]
  G = [ 0  1  0  0  1  1  0 ]    (4.38)
      [ 0  0  1  0  1  1  1 ]
      [ 0  0  0  1  0  1  1 ]
The encoder accepts a 4-bit message block m, multiplies it with the generator matrix G and generates 7-bit codewords c. Note that all the operations (addition, multiplication, etc.) are in the modulo-2 domain:

  c = mG    (4.39)
Given a generator matrix, the Matlab code snippet for generating a codebook containing all possible codewords (c ∈ C) is given next. The resulting codebook C can be used as a look-up table (LUT) when implementing the encoder. This implementation avoids repeated multiplication of the input blocks and the generator matrix. The list of all possible codewords for the generator matrix given in equation 4.38 is listed in Table 4.4.

Program 4.9: Generating all possible codewords from Generator matrix

%program to generate all possible codewords for (7,4) Hamming code
G=[ 1 0 0 0 1 0 1;
    0 1 0 0 1 1 0;
    0 0 1 0 1 1 1;
    0 0 0 1 0 1 1];%generator matrix - (7,4) Hamming code
m=de2bi(0:1:15,'left-msb');%list of all numbers from 0 to 2^k-1
codebook=mod(m*G,2) %list of all possible codewords

The three check equations shown in equation 4.37 can be expressed collectively as a parity check matrix H, which can be used at the receiver side for error-detection and error-correction.
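The codebook generation can be mirrored in plain Python to cross-check individual entries of Table 4.4 (illustrative encode helper; XOR implements the modulo-2 arithmetic of c = mG):

```python
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 0, 1, 1]]   # generator matrix of equation 4.38

def encode(msg, G):
    """c = mG over GF(2): XOR together the rows of G picked by the 1-bits."""
    c = [0] * len(G[0])
    for bit, row in zip(msg, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

for m in [(1, 0, 1, 1), (0, 1, 1, 0)]:
    # both results match the corresponding rows of Table 4.4
    print(''.join(map(str, m)), '->', ''.join(map(str, encode(m, G))))
```

Because G is systematic, the first four bits of every codeword reproduce the message, and only the three parity positions are computed.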
Table 4.4: All possible codewords for (7,4) Hamming code

  Message m | Codeword c
  0000      | 0000000
  0001      | 0001011
  0010      | 0010111
  0011      | 0011100
  0100      | 0100110
  0101      | 0101101
  0110      | 0110001
  0111      | 0111010
  1000      | 1000101
  1001      | 1001110
  1010      | 1010010
  1011      | 1011001
  1100      | 1100011
  1101      | 1101000
  1110      | 1110100
  1111      | 1111111

       D1 D2 D3 D4 P1 P2 P3
      [ 1  1  1  0  1  0  0 ]
  H = [ 0  1  1  1  0  1  0 ]    (4.40)
      [ 1  0  1  1  0  0  1 ]
Using the function gen2parmat given in section 4.3, the parity-check matrix shown in equation 4.40 can be verified. The syndrome table for the Hamming code can be constructed using the getSyndromeTable function given in section 4.5.2. The resulting syndrome table is tabulated in Table 4.5. The error correction theory discussed in the previous section can be applied to Hamming codes, and the next section details a sample implementation to demonstrate the error correction capability of Hamming codes.

Program 4.10: Constructing parity-check matrix and syndrome table

G=[ 1 0 0 0 1 0 1; %generator matrix for (7,4) Hamming code
    0 1 0 0 1 1 0;
    0 0 1 0 1 1 1;
    0 0 0 1 0 1 1];
H = gen2parmat(G) %find the H matrix
%Syndrome table for G, index sorted according to syndromes s=rH^T
syndromeTable = getSyndromeTable(G)
4.6.3 Maximum-length codes

Maximum-length codes are the duals of Hamming codes. They are denoted as (2^m − 1, m) codes with m ≥ 3 and minimum distance d_min = 2^(m−1). The generator matrix of a maximum-length code is the parity-check matrix of the corresponding Hamming code, and vice versa.
4.6 Some classes of linear block codes
Table 4.5: Syndrome table for (7,4) Hamming code

Syndrome s = rH^T   Error pattern e
000                 0000000
001                 0000001
010                 0000010
011                 0001000
100                 0000100
101                 1000000
110                 0100000
111                 0010000
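Using Table 4.5 amounts to matching the computed syndrome against the columns of H: for a single bit error, the syndrome equals the column of H at the error position. A Python sketch of this single-error correction (names are illustrative, not the book's):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (equation 4.40)
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def correct_single_error(r, H):
    """Correct at most one bit error in received word r via syndrome lookup."""
    s = r.dot(H.T) % 2              # syndrome s = r H^T
    if not s.any():
        return r                    # zero syndrome: no detectable error
    for pos in range(H.shape[1]):   # syndrome matches a column of H
        if np.array_equal(H[:, pos], s):
            r = r.copy()
            r[pos] ^= 1             # flip the erroneous bit
            return r
    return r

c = np.array([1, 0, 0, 0, 1, 0, 1])  # a valid codeword (row 1 of G)
r = c.copy()
r[2] ^= 1                            # inject a single bit error
assert np.array_equal(correct_single_error(r, H), c)
```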
4.6.4 Hadamard codes

In Hadamard codes, the rows of a Hadamard matrix are selected as codewords. By definition, a Hadamard matrix of order n is an n × n matrix M_n with elements 1 and −1 such that M_n M_n^T = n I_n, where I_n is the identity matrix of order n. To generate linear Hadamard block codes of length n = 2^m, we are primarily interested in generating Hadamard matrices whose order n is a power of 2 [6]. The Hadamard matrix of order n = 2 is given by

    M_2 = [ 1   1
            1  −1 ]        (4.41)

The order 2^m Hadamard matrix is constructed as
    M_{2^m} = [ M_{2^(m−1)}   M_{2^(m−1)}
                M_{2^(m−1)}  −M_{2^(m−1)} ]        (4.42)
Hadamard matrices can also be generated using the Kronecker product. If A is a p × q matrix and B is an r × s matrix, then the Kronecker product A ⊗ B is the pr × qs matrix given by

    A ⊗ B = [ a_11 B   a_12 B   ···   a_1q B
              a_21 B   a_22 B   ···   a_2q B
                ⋮        ⋮       ⋱       ⋮
              a_p1 B   a_p2 B   ···   a_pq B ]        (4.43)
If M_n1 and M_n2 are Hadamard matrices of orders n1 and n2 respectively, then the Kronecker product M_{n1·n2} = M_n1 ⊗ M_n2 is a Hadamard matrix of order n1·n2. It follows that a Hadamard matrix of order 2^m can be constructed by an m-fold repeated Kronecker product of M_2:

    M_{2^m} = M_2 ⊗ M_2 ⊗ ··· ⊗ M_2 = M_2 ⊗ M_{2^(m−1)}        (4.44)
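Equation 4.44 maps directly onto numpy's kron; a Python sketch of the same recursive construction (the book's Matlab implementation, Program 4.11, follows):

```python
import numpy as np

def hadamard_matrix(m):
    """Hadamard matrix of order 2^m via M_{2^m} = M_2 kron M_{2^(m-1)}."""
    M2 = np.array([[1, 1],
                   [1, -1]])
    M = M2
    for _ in range(m - 1):
        M = np.kron(M2, M)  # repeated Kronecker product (equation 4.44)
    return M

M8 = hadamard_matrix(3)
# By definition a Hadamard matrix of order n satisfies M M^T = n I_n
```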
The Matlab function, shown next, implements the construction of an n = 2^m order Hadamard matrix based on Kronecker products.

Program 4.11: hadamard_matrix.m: Constructing Hadamard matrix using Kronecker products
function [M] = hadamard_matrix(m)
%Construct Hadamard matrix of order 2^m, m is positive integer
if m

7 Pulse Shaping, Matched Filtering and Partial Response Signaling

3 eyes per plot %L can be integral multiple of oversampling factor. Valid data
%starts from 2*filtDelay after matched filtering, plots 100 traces

Next, we assume that the receiver has perfect knowledge of the symbol timing instants; therefore, we will not implement a symbol timing synchronization subsystem in the receiver. At the receiver, the matched filter output is passed through a downsampler that samples the filter output at the correct timing instants. The sampling instants are influenced by the delay of the FIR filters (the SRRC filters in the Tx and Rx). For a symmetric FIR filter of length L_fir, the filter delay is δ = L_fir/2. Since the communication link contains two filters, the total filter delay is 2δ. Therefore, the first valid sample occurs at the (2δ + 1)-th position in the matched filter's output vector (the +1 accounts for the fact that Matlab array indices start from 1).

Fig. 7.11: Received signal with AWGN (left) and the output of the matched filter (right)

The downsampler that follows starts to sample the signal from this position and returns every L-th sample. The downsampled output, shown in Figure 7.12, is then passed through a demodulator that decides on the symbols using an optimum detection technique and remaps them back to the intended message symbols.

Program 7.15: Symbol rate sampler and demodulation
%------Symbol rate Sampler-----
uCap = vCap(2*filtDelay+1:L:end-(2*filtDelay))/L;
%downsample by L from 2*filtDelay+1 position, normalize by L,
%as the matched filter result is scaled by L
figure; stem(real(uCap)); hold on;
title('After symbol rate sampler $\hat{u}$(n)',...
    'Interpreter','Latex');
dCap = demodulate(MOD_TYPE,M,uCap); %demodulation
Fig. 7.12: Downsampling - output of symbol rate sampler
7.5 Implementing a matched filter system with SRRC filtering
7.5.1 Plotting the eye diagram

The technique of computing and plotting an eye diagram was given in section 7.4. We will use the same technique to plot the eye at the output of the matched filter in the receiver. The eye diagram can be plotted using the following statement (refer section 7.4 for the function definition).

Program 7.16: Generating eye plot at the matched filter output
figure; plotEyeDiagram(vCap,L,3*L,2*filtDelay,100);

Here, the first argument vCap denotes the output of the matched filter and the second argument is the oversampling rate of the system L. The third argument is the number of samples we wish to see in one trace; it is typically set to an integer multiple of the oversampling factor. Here it is set to 3L, so we will see visual information pertaining to three symbol instants. The fourth argument is the offset from which we would like to begin plotting the eye diagram. Typically, we plot from the position 2δ, which accounts for the filter delays in the communication link. Finally, the fifth argument is the total number of traces we wish to plot. The sample eye plots shown in Figure 7.13 are generated for the no-noise case (Eb/N0 = 100 dB). From the eye plots, we can see that there is no ISI at the symbol sampling instants (t/Tsym = 0, 1, 2, 3).
Fig. 7.13: Eye plot at the matched filter output just before the symbol sampler at the receiver
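The trace extraction underlying such an eye plot is just a reshape: starting at an offset that absorbs the filter delays, cut the signal into consecutive segments of equal length and overlay them. A Python sketch (plotEyeDiagram is the book's Matlab function; the names below are illustrative):

```python
import numpy as np

def eye_traces(x, nSamples, offset, nTraces):
    """Slice signal x into nTraces rows of nSamples points each,
    starting at `offset` (e.g. 2*filtDelay). Plotting the rows on a
    common time axis yields the eye diagram."""
    x = np.asarray(x)[offset:offset + nTraces * nSamples]
    return x.reshape(nTraces, nSamples)  # one trace per row

sig = np.arange(100, dtype=float)  # stand-in for the matched filter output
traces = eye_traces(sig, nSamples=12, offset=8, nTraces=5)
```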
7.5.2 Performance simulation

The following code puts all the aforementioned code snippets together to simulate the performance of an MPAM communication system with SRRC transmit and matched filters (refer Figure 7.8). It reuses some of the functions for modulation and demodulation from chapter 5. The functions for adding AWGN noise and for computing the theoretical error rate performance were described in chapter 6. The resulting performance plot will match the plots given in Figure 6.4 in chapter 6.

Program 7.17: mpam_srrc_matched_filtering.m: Performance simulation of an MPAM modulation based communication system with SRRC transmit and matched filters
%Performance simulation of MPAM with SRRC pulse shaping
clearvars; clc;
N = 10^5; %Number of symbols to transmit
MOD_TYPE = 'PAM'; %modulation type is 'PAM'
M = 4; %modulation level for the chosen modulation MOD_TYPE
EbN0dB = -4:2:10; %SNRs for generating AWGN channel noise
%----Pulse shaping filter definitions----
beta = 0.3; %roll-off factor for SRRC filter
Nsym = 8; %SRRC filter span in symbol durations
L = 4; %Oversampling factor (L samples per symbol period)
[p,t,filtDelay] = srrcFunction(beta,L,Nsym); %SRRC filter
SER = zeros(1,length(EbN0dB)); %Symbol Error Rate place holders
snr = 10*log10(log2(M))+EbN0dB; %Converting given Eb/N0 dB to SNR
for i=1:length(snr) %simulate the system for each given SNR
    %-------- Transmitter --------
    d = ceil(M.*rand(1,N)); %random numbers from 1 to M for input to PAM
    u = modulate(MOD_TYPE,M,d); %modulation
    v = [u; zeros(L-1,length(u))]; %insert L-1 zeros between symbols
    v = v(:).'; %Convert to a single stream, output is at sampling rate
    s = conv(v,p,'full'); %Convolve modulated symbols with p[n]
    %-------- Channel --------
    r = add_awgn_noise(s,snr(i),L); %add AWGN noise for given SNR, r=s+w
    %-------- Receiver --------
    vCap = conv(r,p,'full'); %convolve received signal with SRRC filter
    uCap = vCap(2*filtDelay+1:L:end-(2*filtDelay))/L; %downsample by L
    %from 2*filtDelay+1 position, matched filter output is scaled by L
    dCap = demodulate(MOD_TYPE,M,uCap); %demodulation
    SER(i) = sum(dCap ~= d)/N; %Calculate symbol error rate
end
SER_theory = ser_awgn(EbN0dB,MOD_TYPE,M); %theoretical SER for PAM
semilogy(EbN0dB,SER,'r*'); hold on;
semilogy(EbN0dB,SER_theory,'k-');
7.6 Partial response (PR) signaling models

Consider the generic baseband communication system model and its equivalent representation, shown in Figure 7.14, where the various blocks in the system are represented as filters. To have no ISI at the symbol sampling instants, the equivalent filter H(f) should satisfy Nyquist's first criterion.
Fig. 7.14: A generic communication system model and its equivalent representation

If the system is ideal and noiseless, it can be characterized by samples of the desired impulse response h(t). Let's represent the N non-zero sample values of the desired impulse response, taken at symbol sampling spacing Tsym, as {q_n}, for n = 0, 1, 2, ..., N − 1. The partial response signaling model, illustrated in Figure 7.15, is expressed as a cascaded combination of a tapped delay line filter with tap coefficients set to {q_n} and a filter with frequency response G(f). The filter Q(f) forces the desired sample values. The filter G(f), on the other hand, bandlimits the system response while preserving the sample values from the filter Q(f). The choice of coefficients for the filter Q(f) and the different choices of G(f) satisfying Nyquist's first criterion result in different impulse responses h(t), but render identical sample values b_n in Figure 7.15 [3]. To have a system with the minimum possible bandwidth, the filter G(f) is chosen as

    G(f) = Tsym,  |f| ≤ 1/(2Tsym)
         = 0,     elsewhere                (7.20)

The inverse Fourier transform of G(f) results in a sinc pulse. The corresponding overall impulse response of the system h(t) is given by
    h(t) = ∑_{n=0}^{N−1} q_n · sin[π(t − nTsym)/Tsym] / [π(t − nTsym)/Tsym]        (7.21)
If the bandwidth requirement can be relaxed, other ISI-free pulse shapers like the raised cosine filter can be considered for G(f).
Fig. 7.15: A generic partial response (PR) signaling model
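Equation 7.21 can be checked numerically: at the symbol instants t = kTsym all the sinc terms vanish except the one at n = k, so h reduces to the tap values q_n. A Python sketch (the book works in Matlab; numpy's sinc is sin(πx)/(πx)):

```python
import numpy as np

def pr_impulse_response(q, t, Tsym=1.0):
    """Overall PR impulse response of equation 7.21: superposition of
    sinc pulses weighted by the desired samples q_n."""
    h = np.zeros_like(t, dtype=float)
    for n, qn in enumerate(q):
        h += qn * np.sinc((t - n * Tsym) / Tsym)  # sin(pi x)/(pi x)
    return h

q = [1, 1]                    # PR1 duobinary: q0 = 1, q1 = 1
t = np.arange(-4, 5, 1.0)     # symbol sampling instants only
h = pr_impulse_response(q, t) # equals q_n at t = nTsym, 0 elsewhere
```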
Given the nature of real-world channels, it is not always desirable to satisfy Nyquist's first criterion. For example, the channel in magnetic recording exhibits spectral nulls at certain frequencies, which define the channel's upper frequency limit. In such cases, it is very difficult to satisfy Nyquist's first criterion. An alternative viable solution is to allow a controlled amount of ISI between the adjacent samples at the output of the equivalent filter Q(f) shown in Figure 7.15. This deliberate injection of a controlled amount of ISI is called partial response (PR) signaling or correlative coding. Several classes of PR signaling schemes and their corresponding transfer functions Q(D) (where D is the delay operator) are shown in Table 7.1. The unit delay D equals a delay of one symbol duration (Tsym) in a continuous-time system.

Table 7.1: Partial response signaling schemes

Q(D)                                       Classification   Remarks
1 + D                                      PR1              Duobinary signaling
1 − D                                      PR1              Dicode signaling
(1 + D)^2 = 1 + 2D + D^2                   PR2
(1 + D)(2 − D) = 2 + D − D^2               PR3
(1 + D)(1 − D) = 1 − D^2                   PR4              Modified Duobinary signaling
(1 + D)^2 (1 − D) = 1 + D − D^2 − D^3      EPR4             Enhanced Class 4
(1 + D)^3 (1 − D) = 1 + 2D − 2D^3 − D^4    E2PR4            Enhanced EPR4
(1 + D)^2 (1 − D)^2 = 1 − 2D^2 + D^4       PR5
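The factored forms in Table 7.1 are easy to verify, since multiplying polynomials in D is a convolution of their coefficient sequences. A quick Python check (coefficients listed in ascending powers of D):

```python
import numpy as np

onep = [1, 1]    # (1 + D)
onem = [1, -1]   # (1 - D)

# Expand the factored transfer functions of Table 7.1 by convolution
pr4 = np.convolve(onep, onem)                      # (1+D)(1-D)   = 1 - D^2
epr4 = np.convolve(np.convolve(onep, onep), onem)  # (1+D)^2(1-D) = 1+D-D^2-D^3
e2pr4 = np.convolve(epr4, onep)                    # (1+D)^3(1-D)
```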
7.6.1 Impulse response and frequency response of PR signaling schemes

Consider a minimum bandwidth system in which the filter H(f) is represented as a cascaded combination of a partial response filter Q(f) and a minimum bandwidth filter G(f). Since G(f) is a brick-wall filter, the frequency response of the whole system H(f) is equivalent to the frequency response of the FIR filter Q(f), whose transfer function, for various partial response schemes, is listed in Table 7.1. The following function generates the overall partial response signal for a given transfer function Q(D). The function records the impulse response of the Q(f) filter by sending an impulse through it. These samples are computed at the symbol sampling instants. In order to visualize the pulse shaping functions and to compute the frequency response, the impulse response of Q(D) is oversampled by a factor L. This converts the samples from the symbol rate domain to the sampling rate domain. The oversampled impulse response of the Q(D) filter is convolved with a sinc filter g(t) that satisfies the Nyquist first criterion. This results in the overall response b(t) of the equivalent filter H(f) (refer Figure 7.15).
Program 7.18: PRSignaling.m: Function to generate PR signals at symbol sampling instants
function [b,t]=PRSignaling(Q,L,Nsym)
%Generate the impulse response of a partial response system given,
%Q - partial response polynomial for Q(D)
%L - oversampling factor (Tsym/Ts)
%Nsym - filter span in symbol durations
%Returns the impulse response b(t) that spans the
%discrete-time base t=-Nsym:1/L:Nsym
%excite the Q(f) filter with an impulse to get the PR response
qn = filter(Q,1,[0 0 0 0 0 1 0 0 0 0 0 0]);
%Partial response filter Q(D) q(t) and its upsampled version
q=[qn; zeros(L-1,length(qn))];%Insert L-1 zeros between symbols
q=q(:).';%Convert to a single stream, output is at sampling rate
%convolve q(t) with Nyquist criterion satisfying sinc filter g(t)
%Note: any filter that satisfies Nyquist first criterion can be used
Tsym=1; %g(t) generated for 1 symbol duration
t=-(Nsym/2):1/L:(Nsym/2);%discrete-time base for 1 symbol duration
g = sin(pi*t/Tsym)./(pi*t/Tsym); g(isnan(g)==1)=1; %sinc function
b = conv(g,q,'same');%convolve q(t) and g(t)
end

The Matlab code to simulate both the impulse response and the frequency response of various PR signaling schemes is given next. The simulated results are plotted in Figure 7.16.

Program 7.19: test_PRSignaling.m: PR signaling schemes and their frequency domain responses
%Impulse response and frequency response of PR signaling schemes
clearvars; clc;
L = 50; %oversampling factor Tsym/Ts
Nsym = 8; %PRS filter span in symbols
QD_arr = cell(8,1); %place holder for 8 different PR co-effs Q(D)
QD_arr{1}=[1 1]; %PR Class I Duobinary scheme
QD_arr{2}=[1 -1]; %PR Class I Dicode channel scheme
QD_arr{3}=[1 2 1]; %PR Class II
QD_arr{4}=[2 1 -1]; %PR Class III
QD_arr{5}=[1 0 -1]; %PR Class IV (Modified Duobinary)
QD_arr{6}=[1 1 -1 -1]; %EPR4 (Enhanced Class IV)
QD_arr{7}=[1 2 0 -2 -1]; %E2PR4 (Enhanced EPR4)
QD_arr{8}=[1 0 -2 0 1]; %PR Class V
A=1; %filter co-effs in Z-domain (denominator) for any FIR type filter
titles={'PR1 Duobinary','PR1 Dicode','PR Class II','PR Class III'...
    ,'PR4 Modified Duobinary','EPR4','E2PR4','PR Class V'};
for i=1:8 %loop for each PR scheme
    %--------Impulse response--------
    Q = QD_arr{i}; %tap values for Q(f) filter
    %construct PR signals as cascaded combination of Q(f) and G(f)
    [b,t]=PRSignaling(Q,L,Nsym); %impulse response of selected PR scheme
    subplot(1,2,1); plot(t,b); hold on;
    stem(t(1:L:end),b(1:L:end),'r'); %Highlight symbol sample instants
    grid on; title([titles{i} ' - b(t)']);
    xlabel('t/T_{sym}'); ylabel('b(t)'); axis tight; hold off;
    %--------Frequency response--------
    [H,W]=freqz(Q,A,1024,'whole'); %Frequency response
    H=[H(length(H)/2+1:end); H(1:length(H)/2)]; %for double-sided plot
    response=abs(H);
    norm_response=response/max(response); %Normalize response
    norm_frequency= W/max(W)-0.5; %Normalized frequency -0.5 to 0.5
    subplot(1,2,2); plot(norm_frequency,norm_response,'b');
    grid on; axis tight;
    title([titles{i} ' - Frequency response Q(D)'])
    xlabel('f/F_{sym}'); ylabel('|Q(D)|')
    pause %pause for user response
end
Fig. 7.16: Impulse response and frequency response of various partial response (PR) signaling schemes

7.7 Precoding

Intersymbol interference (ISI) is a common problem in telecommunication systems, such as terrestrial television broadcasting, digital data communication systems, and cellular mobile communication systems. Dispersive effects in high-speed data transmission and multipath fading are the main causes of ISI. To maximize capacity, the transmission bandwidth must be extended to the entire usable bandwidth of the channel, and that also leads to ISI. To mitigate the effect of ISI, equalization techniques (refer chapter 8) can be applied at the receiver side. Under the assumption of correct decisions, a zero-forcing decision feedback equalizer (ZF-DFE) completely removes the ISI and leaves the white noise uncolored. It was also shown that ZF-DFE, in combination with powerful coding techniques, allows transmission to approach the channel capacity [4]. DFE is adaptive and works well in the presence of spectral nulls, and is hence suitable for various PR channels that have spectral nulls. However, DFE suffers from error propagation and is not flexible enough to incorporate powerful channel coding techniques such as trellis-coded modulation (TCM) and low-density parity-check (LDPC) codes. These problems can be practically mitigated by employing precoding techniques at the transmitter side. Precoding eliminates error propagation effects at the source if the channel state information is known precisely at the transmitter. Additionally, precoding at the transmitter allows coding techniques to be incorporated in the same way as for channels without ISI.

In this text, a partial response (PR) signaling system is taken as an example to demonstrate the concept of precoding. In a PR signaling scheme, a filter Q(z) is used at the transmitter to introduce a controlled amount of ISI into the signal. The introduced ISI can be compensated for at the receiver by employing an inverse filter [Q(z)]^-1. In the case of PR1 signaling, the filters would be

    Q(z) = 1 + z^-1 (FIR filter in the Tx)  ⇒  [Q(z)]^-1 = 1/(1 + z^-1) (IIR filter in the Rx)        (7.22)
Generally, the filter Q(z) is chosen to be of FIR type and therefore its inverse at the receiver will be of IIR type. If the received signal is affected by noise, the use of an IIR filter at the receiver is prone to error propagation. Therefore, instead of compensating for the ISI at the receiver, a precoder can be implemented at the transmitter as shown in Figure 7.17. Since the precoder is of IIR type, its output can become unbounded. For example, let's filter a binary data sequence d_n = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] through the precoder used for the PR1 signaling scheme, [Q(z)]^-1 = (1 + z^-1)^-1.

>> d=[1,0,1,0,1,0,1,0,1,0]
>> filter(1,[1 1],d)
ans =
     1    -1     2    -2     3    -3     4    -4     5    -5
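The Matlab call filter(1,[1 1],d) realizes the IIR recursion y[k] = d[k] − y[k−1]; a plain-Python sketch of the same filter reproduces the unbounded growth:

```python
def iir_precoder(d):
    """PR1 precoder without modulo reduction: y[k] = d[k] - y[k-1],
    i.e. the IIR filter 1/(1 + z^-1) applied to the data sequence."""
    y, prev = [], 0
    for dk in d:
        prev = dk - prev
        y.append(prev)
    return y

d = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(iir_precoder(d))  # → [1, -1, 2, -2, 3, -3, 4, -4, 5, -5]
```

The alternating input drives the output magnitude up without bound, which is exactly why the modulo-M operation of the next section is needed.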
Fig. 7.17: A pre-equalization system incorporating a modulo-M precoder
The result indicates that the output grows unbounded, so some additional measure has to be taken to limit it. Assuming an M-ary signaling scheme like MPAM is used for transmission, the unbounded output of the precoder can be bounded by incorporating a modulo-M operation.
7.7.1 Implementing a modulo-M precoder

For M-ary signaling schemes where the data can take M discrete values, the precoder incorporates a modulo-M operation in its filter structure as shown in Figure 7.17. Thus, in the absence of noise, the original data sequence can be recovered at the receiver by interpreting the received symbols in the modulo-M domain [3] [5]. Consider a partial response system that can be described by a monic polynomial, meaning that the leading coefficient of the polynomial is 1 (i.e., q0 = 1 in the following equation).

    Q(z) = 1 + q1 z^-1 + q2 z^-2 + ... + q_{N-1} z^(1-N)        (7.23)
The inverse filter (IIR type) can be readily obtained as

    [Q(z)]^-1 = 1 / (1 + q1 z^-1 + q2 z^-2 + ... + q_{N-1} z^(1-N))        (7.24)
As explained earlier, a modulo-M operation needs to be incorporated in the inverse filter to bound the output to M signaling levels. The implementation of a modulo-M precoder, described in reference [6], is shown in Figure 7.18.
Fig. 7.18: Inverse filter (precoder) of a partial response (PR) system
7.7.2 Simulation and results

The following snippet of Matlab code simulates the discrete-time equivalent model, shown in Figure 7.17, for an M-ary PR1 signaling system; a sample result for an input sequence of length 10 is tabulated in Table 7.2.

Program 7.20: PR1_precoded_system.m: Discrete-time equivalent partial response class 1 signaling model
%M-ary Partial Response Signaling (Discrete time equivalent model)
%with mod-M precoding. This is an ideal model with no noise added.
M=4; %Choose number of signaling levels for M-ary system
N=100000; %Number of symbols to transmit
a=randi([0,M-1],N,1); %random input stream of M-ary symbols
Q=[1 1]; %q0=1,q1=1, PR Class 1 Scheme (Duobinary coding)
x=zeros(size(a)); %output of precoder
D=zeros(length(Q),1); %memory elements in the precoder
%Precoder (Inverse filter of Q(z)) with Modulo-M operation
for k=1:length(a)
    x(k)=mod(a(k)-sum(D(2:end).*Q(2:end).'),M); %feedback sum
    D(2)=x(k);
    if length(D)>2
        D(3:end)=D(2:end-1); %shift one symbol for next cycle
    end
end
disp(x); %display precoded output
bn=filter(Q,1,x); %Sampled received sequence; if desired, add noise here
acap=mod(bn,M); %modulo-M reduction at receiver
errors=sum(acap~=a) %Number of errors in the recovered sequence
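The same discrete-time chain can be sketched in Python. For Q(z) = 1 + z^-1 the precoder recursion collapses to x[k] = (a[k] − x[k−1]) mod M, the PR1 channel output is b[k] = x[k] + x[k−1], and the receiver recovers the data as b[k] mod M (a hedged sketch, not the book's exact code):

```python
import numpy as np

def pr1_precoded_chain(a, M):
    """Mod-M precoded PR1 (duobinary) chain, noiseless:
    precoder -> partial response filter -> modulo-M receiver."""
    x, prev = [], 0
    for ak in a:
        prev = (ak - prev) % M      # x[k] = (a[k] - x[k-1]) mod M
        x.append(prev)
    b = np.convolve(x, [1, 1])[:len(x)]  # b[k] = x[k] + x[k-1]
    a_hat = (b % M).tolist()             # modulo-M reduction at receiver
    return x, b.tolist(), a_hat

a = [2, 3, 1, 2, 0, 1, 0, 1, 0, 1]       # the input column of Table 7.2
x, b, a_hat = pr1_precoded_chain(a, M=4)
```

Running this reproduces the x_n, b_n and â_n rows of Table 7.2, with â_n = a_n as expected in the absence of noise.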
Table 7.2: Results from the simulated partial response (PR) system model

Variable  Description              n=0  n=1  n=2  n=3  n=4  n=5  n=6  n=7  n=8  n=9
a_n       input                     2    3    1    2    0    1    0    1    0    1
x_n       precoder output           2    1    0    2    2    3    1    0    0    1
b_n       partial response output   2    3    1    2    4    5    4    1    0    1
â_n       receiver output           2    3    1    2    0    1    0    1    0    1
References

1. Clay S. Turner, Raised cosine and root raised cosine formulae, Wireless Systems Engineering, Inc., May 29, 2007.
2. Michael Joost, Theory of root-raised cosine filter, Research and Development, 47829 Krefeld, Germany, EU, Dec 2010. https://www.michael-joost.de/rrcfilter.pdf
3. Peter Kabal and Subbarayan Pasupathy, Partial-response signaling, IEEE Transactions on Communications, Vol. 23, No. 9, pp. 921-934, September 1975.
4. R. Price, Nonlinear feedback equalized PAM versus capacity for noisy filter channels, in Proceedings of the Int. Conference on Communications (ICC '72), 1972, pp. 22.12-22.17.
5. E. R. Kretzmer, Generalization of a technique for binary data communication, IEEE Transactions on Communications, Vol. 14, Issue 1, pp. 67-68, Feb 1966.
6. M. Tomlinson, New automatic equalizer employing modulo arithmetic, Electronics Letters, Vol. 7, pp. 138-139, March 1971.
Chapter 8
Linear Equalizers
Abstract In communication systems that are dominated by intersymbol interference, equalizers play an important role in combating the unwanted distortion effects induced by the channel. Linear equalizers are based on linear time-invariant filters that are simple to analyze using conventional signal processing theory. This chapter focuses on the design and implementation of three important types of linear equalizers, namely the zero-forcing equalizer, the minimum mean squared error equalizer and the least mean square adaptive equalizer.
8.1 Introduction

Transmission of signals through band-limited channels is vulnerable to the time-dispersive effects of the channel, which can manifest as intersymbol interference (ISI). Three major approaches to deal with ISI were introduced in the previous chapter and are reproduced here for reference.
• Nyquist first criterion: Force the ISI effect to zero by signal design - pulse shaping techniques like sinc filtering, raised-cosine filtering and square-root raised-cosine filtering.
• Partial response signaling: Introduce a controlled amount of ISI at the transmit side and prepare to deal with it at the receiver.
• Design algorithms to counter ISI: Learn to live with the presence of ISI and design robust algorithms at the receiver - the Viterbi algorithm (maximum likelihood sequence estimation), equalizers, etc.

This chapter focuses on the third item, particularly on the technique of employing an equalizer at the receiver. An equalizer is a digital filter at the receiver that mitigates the distortion effects of intersymbol interference (ISI) introduced by a time-dispersive channel. If the channel is unknown and time-varying, optimum design of the transmit and receive filters is not possible. However, one can neutralize (equalize) the channel effects if the impulse response of the channel is made known at the receiver. The problem of recovering the input signal in a time-dispersive channel boils down to finding the inverse of the channel and using it to neutralize the channel effects. However, finding the inverse of the channel impulse response may be difficult for the following reasons:
• If the channel response is zero at any frequency, the inverse of the channel response is undefined at that frequency.
• Generally, the receiver does not know the channel response, and extra circuitry (channel estimation) may be needed to estimate it.
• The channel can vary in real time and therefore the equalization needs to be adaptive.
Additionally, even if the channel response is of finite length, the equalizer can have an infinite impulse response. Usually, from a stability perspective, the equalizer response has to be truncated to a finite length. As a result, perfect equalization is difficult to achieve and does not always provide the best performance.
Classification of equalizer structures

Based on implementation structure, equalizers are categorized as follows:
• Maximum likelihood sequence estimation (MLSE) equalizer
  – Provides the most optimal performance.
  – Computational complexity can be extremely high for practical applications.
• Decision feedback equalizer (DFE)
  – A nonlinear equalizer that feeds previous symbol decisions back, using them in place of known training symbols, to cancel their trailing ISI.
  – Provides better performance but suffers from the serious drawback of error propagation in the event of erroneous decisions.
  – To provide robustness, it is often combined with training-sequence-based feed-forward equalization.
• Linear equalizer
  – The most suboptimal equalizer, but with the lowest complexity.
8.2 Linear equalizers

The MLSE equalizers provide the best performance among all the equalizer categories. For long channel responses, however, MLSE equalizers become too complex; hence, from a practical implementation point of view, other suboptimal filters are preferred. The simplest of these is the linear equalizer, which offers the lowest complexity. Linear equalizers are particularly effective for those channels where the ISI is not severe. Commonly, equalizers are implemented as a digital filter structure having a number of taps with complex tap coefficients (also referred to as tap weights).
Choice of adaptation of equalizer tap weights

Based on the adaptation of the tap coefficients, linear equalizers can be classified into two types.
• Preset equalizers: If the channel is considered to be time-invariant during the data transmission interval, the equalizer weights are calculated at the beginning of the transmission session and are fixed for the rest of the session.
• Adaptive equalizers: For time-variant channels, the coefficients are computed using an adaptive algorithm to track the changing channel conditions, hence the name. There exist numerous choices of algorithms for calculating and updating the adaptive weights - least mean squares, recursive least squares, conventional Kalman, square-root Kalman and fast Kalman - to name a few.

Choice of sampling time

The choice of sampling time with respect to the symbol time period affects the design and performance of equalizers. As a consequence, two types of equalization techniques exist.
• Symbol-spaced linear equalizer: Shown in Figure 8.1, where the downsampler in front of the equalizer operates at one sample per symbol time (Tsym). These equalizers are very sensitive to the sampling time, potentially introducing destructive aliasing effects, and therefore the performance may not be optimal.
• Fractionally-spaced linear equalizer: Here, the idea is to reduce the spacing between adjacent equalizer taps to a fraction of the symbol interval. Equivalently, the downsampler that precedes the equalizer operates at a rate higher than the symbol rate (say, a tap spacing in time of kTsym/M for some integers k and M). Finally, the equalizer output is downsampled to the symbol rate before delivering the output symbols to the detector. This is illustrated in Figure 8.2. Though they require more computation, fractionally-spaced equalizers simplify the rest of the demodulator design [1] and are able to compensate for any arbitrary sampling time phase [2].

This chapter deals only with symbol-spaced linear equalizers with preset tap weights and with adaptive weights computed using the least mean squares algorithm. The treatment of fractionally-spaced equalizers is beyond the scope of this book.
Fig. 8.1: Continuous-time channel model with symbol-spaced linear equalizer

Fig. 8.2: Continuous-time channel model with fractionally-spaced linear equalizer
Choice of criterion for tap weights optimization

The equalizer tap weights are usually chosen based on some optimality criterion, two of which are given below.
• Peak distortion criterion: Aims at minimizing the maximum possible distortion of the signal, caused by ISI, at the output of the equalizer. This is equivalent to the zero-forcing criterion and forms the basis for zero-forcing equalizers [3].
• Mean square error criterion: Attempts to minimize the mean square value of an error term computed from the difference between the equalizer output and the transmitted information symbol. This forms the basis for the linear minimum mean square error (LMMSE) equalizer and the least mean square (LMS) algorithm.
8.3 Symbol-spaced linear equalizer channel model

Consider the continuous-time channel model in Figure 8.1. In this generic representation, the transmit pulse shaping filter and the receive filter are represented by the filters HT(f) and HR(f) respectively. The transmit filter may or may not be a square-root raised cosine filter. However, we assume that the receive filter is a square-root raised cosine filter; therefore, the sampled noise n[k] = hR(t) ∗ n(t)|_{t=kTsym} is white Gaussian. In general, the magnitude response of the channel |Hc(f)| is not constant over the channel bandwidth, that is, the impulse response of the channel hc(t) is not ideal. Therefore, linear distortions become unavoidable. We designate the combined effect of the transmit filter, the channel and the receive filter as the overall channel, whose impulse response h(t) is given by

    h(t) = hT(t) ∗ hc(t) ∗ hR(t)        (8.1)
The received signal after the symbol rate sampler is given by

    r[k] = ∑_{i=−∞}^{∞} h[i] a[k−i] + n[k]        (8.2)
The resulting simplified discrete-time channel model is given in Figure 8.3, where a[k] represents the information symbols transmitted through a channel having an arbitrary impulse response h[k], r[k] are the received symbols at the receiver, and n[k] is white Gaussian noise.

    r[k] = ∑_{i=−∞}^{∞} h[i] a[k−i] + n[k]        (8.3)
Usually, in practice, the channel impulse response can be truncated to some finite length L. Under the assumption of causality of the transmit filter, channel and receive filter, we can safely hold that h[i] = 0 for i < 0. If L is sufficiently large, h[i] ≈ 0 holds for i ≥ L. Hence, the received signal can be rewritten as

    r[k] = ∑_{i=0}^{L−1} h[i] a[k−i] + n[k]        (8.4)
Fig. 8.3: Discrete-time channel model for symbol-spaced linear equalization

The equalizer coefficients are represented by w[k]. Symbol-by-symbol decisions are made on the equalizer output d[k], and the transmitted information symbols are estimated as \hat{a}[k].
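The discrete-time model of equations 8.3 and 8.4 is straightforward to exercise numerically. Although the book's code is in Matlab, the following Python/NumPy sketch (channel taps and block sizes are illustrative, not taken from the book) builds r[k] as a finite convolution of the symbols with a truncated CIR plus white Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

h = np.array([0.8, 1.0, 0.4, 0.1])    # truncated CIR of length L = 4 (illustrative)
a = rng.choice([-1.0, 1.0], size=10)  # BPSK information symbols a[k]
n = 0.01 * rng.normal(size=len(a) + len(h) - 1)  # white Gaussian noise n[k]

# Eq. 8.4: r[k] = sum_{i=0}^{L-1} h[i] a[k-i] + n[k], realized as a convolution
r = np.convolve(h, a) + n

# The same sum written out explicitly, to emphasize the ISI structure:
q = np.array([sum(h[i] * a[k - i] for i in range(len(h)) if 0 <= k - i < len(a))
              for k in range(len(a) + len(h) - 1)])
print(np.allclose(r - n, q))  # both forms agree
```

Each received sample mixes the current symbol with L − 1 earlier symbols, which is exactly the inter-symbol interference the equalizer must undo.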
Figure 8.4 shows the equivalent frequency domain representation of the channel model, where the transfer function of the equalizer can be computed using the Z-transform as

W(z) = \sum_{k=-\infty}^{\infty} w[k]\, z^{-k}    (8.5)
A linear equalizer is the simplest type of equalizer: it attempts to invert the channel transfer function H(z) and filters the received symbols with the inverted response. Subsequently, using a symbol-by-symbol decision device, the filtered equalizer outputs are used to estimate the information symbols. The structure of a linear equalizer can be of FIR or IIR type. The equalizer tap weights are calculated based on an optimization criterion. We will be focusing on the zero-forcing (ZF) and minimum mean square error (MMSE) criteria.
Fig. 8.4: Equivalent channel model in frequency domain with linear equalizer
8.4 Zero-forcing equalizer

A zero-forcing (ZF) equalizer is so named because it forces the residual ISI in the equalizer output d[k] to zero. An optimum zero-forcing equalizer achieves perfect equalization by forcing the residual ISI to zero at all sampling instants kT except k = 0. This goal can be achieved if we allow for an equalizer with an infinite impulse response (IIR) structure. In most practical applications, the channel transfer function H(z) can be approximated by a finite impulse response (FIR) filter, and hence its perfect equalizing counterpart W(z) will be an IIR filter:

W(z) = \frac{1}{H(z)}    (8.6)

As a result, the overall transfer function, denoted by Q(z), is given by

Q(z) = H(z)\, W(z) = 1    (8.7)
Equivalently, in the time domain, the zero-forcing solution forces the overall response to zero at all positions except the position k0, where its value equals one (a unit impulse):

q[k] = h[k] \ast w[k] = \delta[k - k_0] = \begin{cases} 1, & k = k_0 \\ 0, & k \neq k_0 \end{cases}    (8.8)

For practical implementation, due to their inherent stability and better immunity against finite word-length effects, FIR filters are preferred over IIR filters (refer chapter 1 section 1.9 for more details).
Implementing a zero-forcing equalizer as an FIR filter would mean imposing causality and a length constraint on the equalizer filter, whose transfer function takes the form

W(z) = \sum_{k=0}^{N-1} w[k]\, z^{-k}    (8.9)
Given the channel impulse response of length L and the equalizer filter of length N, the zero-forcing solution in equation 8.8 can be expressed as follows (refer chapter 1 section 1.6).

q[k] = h[k] \ast w[k] = \sum_{i=-\infty}^{\infty} h[i]\, w[k-i] = \delta[k - k_0], \quad k = 0, \cdots, L + N - 2    (8.10)
In simple terms, with the zero-forcing solution, the overall impulse response takes the following form

q[k] = h[k] \ast w[k] = \delta[k - k_0] = [0, 0, \cdots, 0, 1, 0, \cdots, 0, 0]    (8.11)

where q[k] = 1 at position k0. The convolution sum of a finite length sequence h of length L and a causal finite length sequence w of length N can be expressed in matrix form as

\begin{bmatrix} h[0] & h[-1] & \cdots & h[-(N-1)] \\ h[1] & h[0] & \cdots & h[-(N-2)] \\ h[2] & h[1] & \cdots & h[-(N-3)] \\ \vdots & \vdots & \ddots & \vdots \\ h[L+N-2] & h[L+N-3] & \cdots & h[L-1] \end{bmatrix} \begin{bmatrix} w[0] \\ w[1] \\ w[2] \\ \vdots \\ w[N-1] \end{bmatrix} = \begin{bmatrix} q[0] \\ q[1] \\ q[2] \\ \vdots \\ q[L+N-2] \end{bmatrix}    (8.12)

Obviously, the zero-forcing solution in equation 8.8 can be expressed in matrix form as

\begin{bmatrix} h[0] & h[-1] & \cdots & h[-(N-1)] \\ h[1] & h[0] & \cdots & h[-(N-2)] \\ h[2] & h[1] & \cdots & h[-(N-3)] \\ \vdots & \vdots & \ddots & \vdots \\ h[L+N-2] & h[L+N-3] & \cdots & h[L-1] \end{bmatrix} \begin{bmatrix} w[0] \\ w[1] \\ w[2] \\ \vdots \\ w[N-1] \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad \Longleftrightarrow \quad \mathbf{H}\,\mathbf{w} = \boldsymbol{\delta}_{k_0}    (8.13)

If we assume causality of the channel impulse response, h[k] = 0 for k < 0, and if L is chosen sufficiently large, h[k] ≈ 0 holds for k ≥ L. Then the zero-forcing solution simplifies to

\begin{bmatrix} h[0] & 0 & \cdots & 0 \\ h[1] & h[0] & \ddots & \vdots \\ \vdots & h[1] & \ddots & 0 \\ h[L-1] & \vdots & \ddots & h[0] \\ 0 & h[L-1] & & h[1] \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & h[L-1] \end{bmatrix} \begin{bmatrix} w[0] \\ w[1] \\ \vdots \\ w[N-1] \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}    (8.14)

which can be expressed in terms of a Toeplitz matrix (see chapter 1 section 1.7), H = T(h)
\mathbf{T}(h)\,\mathbf{w} = \mathbf{H}\,\mathbf{w} = \boldsymbol{\delta}_{k_0}    (8.15)
If the zero-forcing equalizer is assumed to perfectly compensate the channel impulse response, the output of the equalizer will be a unit impulse delayed by a certain number of symbols due to the equalizer delay (k0). In matrix form, this condition is represented as

\mathbf{H} \cdot \mathbf{w} = \boldsymbol{\delta}_{k_0}    (8.16)
where H is the rectangular channel matrix defined in equations 8.13 and 8.15, w is a column vector containing the equalizer tap weights, and δ_{k0} is a column vector representing the unit impulse, with all elements equal to zero except for the unity element at the k0-th position. The position of the unity element in δ_{k0} determines the equalizer delay k0. The equalizer delay allows for end-to-end system delay; without it, it would be difficult to compensate the effective channel delay using a causal FIR equalizer.
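The channel matrix H in equations 8.13 through 8.16 is just a convolution (Toeplitz) matrix. A small Python/NumPy sketch of such a construction — the helper name conv_matrix is hypothetical, standing in for the book's Matlab convMatrix function:

```python
import numpy as np

def conv_matrix(h, N):
    """Build the (L+N-1) x N Toeplitz matrix H = T(h) of eq. 8.14,
    so that H @ w equals the linear convolution h * w."""
    h = np.asarray(h, dtype=float)
    L = len(h)
    H = np.zeros((L + N - 1, N))
    for i in range(N):          # column i holds h delayed by i samples
        H[i:i + L, i] = h
    return H

h = np.array([1.0, 0.5, 0.2])          # toy causal channel (L = 3)
w = np.array([0.9, -0.4, 0.1, 0.05])   # toy equalizer taps (N = 4)
H = conv_matrix(h, len(w))
print(np.allclose(H @ w, np.convolve(h, w)))  # matrix product == convolution
```

Because H @ w reproduces the convolution h ∗ w exactly, solving H w = δ_{k0} is the matrix form of the zero-forcing condition 8.8.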
8.4.1 Least squares solution

The method of least squares can be applied to solve an overdetermined system of linear equations of the form Hw = δ_{k0}, that is, a system in which H is a rectangular (L + N − 1) × N matrix with more equations than unknowns, (L + N − 1) > N. In general, for the overdetermined (L + N − 1) × N system Hw = δ_{k0}, the solutions w minimizing the squared Euclidean distance \|Hw - \delta_{k_0}\|^2 are given by the square N × N system

\mathbf{H}^H \mathbf{H}\, \mathbf{w} = \mathbf{H}^H \boldsymbol{\delta}_{k_0}    (8.17)

where H^H represents the Hermitian transpose (conjugate transpose) of the channel matrix H. Moreover, when the columns of H are linearly independent, H^H H is Hermitian and invertible, and so the solution for the zero-forcing equalizer coefficients w is unique and given by

\mathbf{w} = \left(\mathbf{H}^H \mathbf{H}\right)^{-1} \mathbf{H}^H \boldsymbol{\delta}_{k_0} = \mathbf{H}^{\dagger} \boldsymbol{\delta}_{k_0}    (8.18)
The expression H^{\dagger} = (H^H H)^{-1} H^H is called the pseudo-inverse matrix or Moore-Penrose generalized matrix inverse. To solve for the equalizer taps using equation 8.18, the channel matrix H has to be estimated. The mean square error (MSE) between the equalized samples and the ideal samples is calculated as [5]

\xi_N = 1 - \sum_{i=0}^{k_0} w_N(i)\, h_c^*(k_0 - i)    (8.19)
The equations for the zero-forcing equalizer have the same form as those of the MMSE equalizer (see equations 8.40 and 8.41), except for the noise term. Therefore, the MSE of the ZF equalizer can be written as [6][7]

\xi_{min} = \sigma_a^2 \left(1 - \boldsymbol{\delta}_{k_0}^T \mathbf{H}\, \mathbf{H}^{\dagger} \boldsymbol{\delta}_{k_0}\right)    (8.20)

The optimal delay that minimizes the MSE is simply the index of the maximum diagonal element of the matrix H H^{\dagger}:

\text{optimum delay} = k_{opt} = \arg\max_{index}\; \mathrm{diag}\left(\mathbf{H}\,\mathbf{H}^{\dagger}\right)    (8.21)
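Equations 8.18, 8.20 and 8.21 translate directly into a least-squares design. A Python/NumPy sketch follows (the toy channel values are illustrative, and conv_matrix is a hypothetical helper mirroring the book's Matlab convMatrix):

```python
import numpy as np

def conv_matrix(h, N):
    L = len(h)
    H = np.zeros((L + N - 1, N))
    for i in range(N):
        H[i:i + L, i] = h
    return H

def zf_equalizer(h, N, delay=None):
    """Least-squares zero-forcing design (eqs. 8.18, 8.20, 8.21).
    Returns taps w, the design MSE and the delay actually used."""
    H = conv_matrix(np.asarray(h, float), N)
    Hp = np.linalg.pinv(H)                       # Moore-Penrose pseudo-inverse
    opt_delay = int(np.argmax(np.diag(H @ Hp)))  # eq. 8.21
    k0 = opt_delay if delay is None else delay
    d = np.zeros(H.shape[0]); d[k0] = 1.0        # delta vector of eq. 8.13
    w = Hp @ d                                   # eq. 8.18
    mse = 1.0 - d @ (H @ (Hp @ d))               # eq. 8.20 with sigma_a^2 = 1
    return w, mse, k0

h = np.array([1.0, 0.3, 0.1])   # mildly dispersive toy channel (minimum phase)
w, mse, k0 = zf_equalizer(h, N=20)
q = np.convolve(h, w)           # overall response q = h * w, approx. a unit impulse
print(k0, round(q[k0], 4), round(mse, 6))
```

For a well-behaved channel like this one, the overall response q[k] is very close to δ[k − k0] and the design MSE is essentially zero; the interesting (and problematic) cases appear when the channel has spectral nulls, as discussed in section 8.4.4.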
8.4.2 Noise enhancement

From the given discrete-time channel model (Figure 8.3), the received sequence r can be computed as the convolution sum of the input sequence a of length N and the channel impulse response h of length L

r[k] = h[k] \ast a[k] + n[k] = \sum_{i=-\infty}^{\infty} h[i]\, a[k-i] + n[k], \quad k = 0, 1, \cdots, L + N - 2    (8.22)
Following derivations similar to equations 8.10 and 8.12, the convolution sum can be expressed in matrix form as

\begin{bmatrix} r[0] \\ r[1] \\ r[2] \\ \vdots \\ r[L+N-2] \end{bmatrix} = \begin{bmatrix} h[0] & h[-1] & \cdots & h[-(N-1)] \\ h[1] & h[0] & \cdots & h[-(N-2)] \\ h[2] & h[1] & \cdots & h[-(N-3)] \\ \vdots & \vdots & \ddots & \vdots \\ h[L+N-2] & h[L+N-3] & \cdots & h[L-1] \end{bmatrix} \begin{bmatrix} a[0] \\ a[1] \\ a[2] \\ \vdots \\ a[N-1] \end{bmatrix} + \begin{bmatrix} n[0] \\ n[1] \\ n[2] \\ \vdots \\ n[L+N-2] \end{bmatrix}    (8.23)

\mathbf{r} = \mathbf{H}\,\mathbf{a} + \mathbf{n}    (8.24)

If a linear zero-forcing equalizer is used, obtaining the equalizer output is equivalent to applying the pseudo-inverse matrix H^{\dagger} to the vector of the received signal sequence r:

\mathbf{H}^{\dagger} \mathbf{r} = \mathbf{H}^{\dagger} \mathbf{H}\,\mathbf{a} + \mathbf{H}^{\dagger} \mathbf{n} = \mathbf{a} + \tilde{\mathbf{n}}    (8.25)
This renders the additive white Gaussian noise colored, thereby complicating optimal detection.
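The coloring and amplification of the noise by H† can be quantified: for white input noise of unit variance, the equalized noise has variance ‖w‖² and lag-one correlation Σᵢ w[i]w[i+1]. A Python/NumPy sketch with a toy channel chosen to have a deep spectral notch (all values illustrative; conv_matrix is the hypothetical helper used earlier):

```python
import numpy as np

def conv_matrix(h, N):
    L = len(h)
    H = np.zeros((L + N - 1, N))
    for i in range(N):
        H[i:i + L, i] = h
    return H

# Toy channel with a deep spectral notch at DC: |H(e^{j0})| = |1 - 0.95| = 0.05
h = np.array([1.0, -0.95])
N = 30
H = conv_matrix(h, N)
Hp = np.linalg.pinv(H)                   # H-dagger
k0 = int(np.argmax(np.diag(H @ Hp)))     # delay-optimized ZF (eq. 8.21)
d = np.zeros(H.shape[0]); d[k0] = 1.0
w = Hp @ d                               # zero-forcing taps

# Unit-variance white noise at the equalizer input leaves with variance
# ||w||^2 (noise enhancement); a nonzero lag-1 correlation shows it is colored.
noise_gain = w @ w
lag1 = (w[:-1] @ w[1:]) / noise_gain
print(round(noise_gain, 2), round(lag1, 2))
```

The notch forces the equalizer taps to approximate the slowly decaying inverse of the channel, so the output noise power grows by roughly an order of magnitude and neighboring noise samples become strongly correlated.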
8.4.3 Design and simulation of zero-forcing equalizer

The simulation model for the ZF equalizer is shown in Figure 8.3. The simulation model involves the overall continuous-time channel impulse response h(t) with discrete-time equivalent h[k] and a discrete-time zero-forcing symbol-spaced equalizer w[k]. The goal is to design a zero-forcing equalizer of length N that could compensate for the distortion effects introduced by the channel of length L. As an example for simulation, the sampling frequency of the overall system is assumed as Fs = 100 Hz with 5 samples per symbol (nSamp = 5), and a channel model with the following overall channel impulse response at baud rate is also assumed.

h(t) = \frac{1}{1 + \left(t/T_{sym}\right)^2}    (8.26)

Program 8.1: Channel model

%System parameters
nSamp=5; %Number of samples per symbol determines baud rate Tsym
Fs=100; %Sampling Frequency of the system
Ts=1/Fs; %Sampling time
Tsym=nSamp*Ts; %symbol time period
%Define transfer function of the channel
k=6; %define limits for computing channel response
N0 = 0.001; %Standard deviation of AWGN channel noise
t = -k*Tsym:Ts:k*Tsym; %time base defined till +/-kTsym
h_t = 1./(1+(t/Tsym).^2); %channel model, replace with your own model
h_t = h_t + N0*randn(1,length(h_t)); %add noise to the channel response
h_k = h_t(1:nSamp:end); %downsampling to represent symbol rate sampler
t_inst=t(1:nSamp:end); %symbol sampling instants

Next, plot the channel impulse response of the above model; the resulting plot is given in Figure 8.5.

Program 8.2: Plot channel impulse response

figure; plot(t,h_t); hold on; %channel response at all sampling instants
stem(t_inst,h_k,'r'); %channel response at symbol sampling instants
legend('continuous-time model','discrete-time model');
title('Channel impulse response');
xlabel('Time (s)'); ylabel('Amplitude');
Fig. 8.5: Channel impulse response

Since the channel response, plotted in Figure 8.5, is of length L = 13, a zero-forcing filter of length N > L is desired. Let us fix the length of the zero-forcing filter to N = 14. The zero-forcing equalizer design follows equation 8.18, and the equalizer delay determines the position of the solitary '1' in the δ_{k0} vector that represents the unit impulse. Given the channel impulse response and the desired equalizer length, the following function designs a zero-forcing equalizer using equations 8.18, 8.20 and 8.21.

Program 8.3: zf_equalizer.m: Function to design ZF equalizer and to equalize a signal

function [w,err,optDelay]=zf_equalizer(h,N,delay)
% Delay optimized zero forcing equalizer.
% [w,err,optDelay]=zf_equalizer(h,N,delay) designs a ZERO FORCING equalizer
% w for given channel impulse response h, the desired length of the equalizer
% N and equalizer delay (delay). Also returns the equalizer error (err), the
% best optimal delay (optDelay) that could work best for the designed equalizer
%
% [w,err,optDelay]=zf_equalizer(h,N) designs a DELAY OPTIMIZED ZERO FORCING
% equalizer w for given channel impulse response h, the desired length of the
% equalizer N. Also returns equalizer error (err), the best optimal delay (optDelay)
h=h(:); %Channel matrix to solve simultaneous equations
L=length(h); %length of CIR
H=convMatrix(h,N); %(L+N-1)xN matrix - see Chapter 1
%section title - Methods to compute convolution
%compute optimum delay based on MSE
Hp = inv(H'*H)*H'; %Moore-Penrose Pseudo inverse
[~,optDelay] = max(diag(H*Hp));
optDelay=optDelay-1; %since Matlab index starts from 1
k0=optDelay;
if nargin==3,
    if delay >=(L+N-1), error('Too large delay'); end
    k0=delay;
end
d=zeros(N+L-1,1); d(k0+1)=1; %optimized position of equalizer delay
w=Hp*d; %Least Squares solution
err=1-H(k0+1,:)*w; %equalizer error (MSE) - reference [5]
MSE=(1-d'*H*Hp*d); %MSE and err are equivalent
end
Let us design a zero-forcing equalizer with N = 14 taps for the given channel response in Figure 8.5 and the equalizer delay k0 set to 11. The designed equalizer is tested by sending the channel impulse response as the input, r[k] = h[k]. The equalizer is expected to compensate for it completely, as indicated in Figure 8.6. The overall system response is computed as the convolution of the channel impulse response and the response of the zero-forcing equalizer.

Program 8.4: Design and test the zero-forcing equalizer

%Equalizer Design Parameters
N = 14; %Desired number of taps for equalizer filter
delay = 11;
%design zero-forcing equalizer for given channel and get tap weights
%and filter the input through the equalizer
[w,error,k0]=zf_equalizer(h_k,N,delay); %find equalizer co-effs for given CIR
%[w,error,k0]=zf_equalizer(h_k,N); %Try this delay optimized equalizer
r_k=h_k; %Test the equalizer with the sampled channel response as input
d_k=conv(w,r_k); %filter input through the eq
h_sys=conv(w,h_k); %overall effect of channel and equalizer
disp(['ZF Equalizer Design: N=', num2str(N), ...
    ' Delay=',num2str(delay), ' Error=', num2str(error)]);
disp('ZF equalizer weights:'); disp(w);
Compute and plot the frequency response of the channel, the equalizer and the overall system. The resulting plot, given in Figure 8.6, shows how the zero-forcing equalizer has compensated the channel response by inverting it.

Program 8.5: Frequency responses of channel, equalizer and overall system

[H_F,Omega_1]=freqz(h_k); %frequency response of channel
[W,Omega_2]=freqz(w); %frequency response of equalizer
[H_sys,Omega_3]=freqz(h_sys); %frequency response of overall system
figure;
plot(Omega_1/pi,20*log10(abs(H_F)/max(abs(H_F))),'g'); hold on;
plot(Omega_2/pi,20*log10(abs(W)/max(abs(W))),'r');
plot(Omega_3/pi,20*log10(abs(H_sys)/max(abs(H_sys))),'k');
legend('channel','ZF equalizer','overall system');
title('Frequency response'); ylabel('Magnitude(dB)');
xlabel('Normalized frequency(x \pi rad/sample)');
Fig. 8.6: Zero-forcing equalizer compensates for the channel response

The response of the filter in the time domain is plotted in Figure 8.7. The plot indicates that the equalizer has perfectly compensated the input when r[k] = h[k].

Program 8.6: Plot the equalizer input and output

figure; %Plot equalizer input and output
subplot(2,1,1); stem(0:1:length(r_k)-1,r_k);
title('Equalizer input'); xlabel('Samples'); ylabel('Amplitude');
subplot(2,1,2); stem(0:1:length(d_k)-1,d_k);
title(['Equalizer output- N=', num2str(N), ...
    ' Delay=',num2str(delay), ' Error=', num2str(error)]);
xlabel('Samples'); ylabel('Amplitude');

The calculated zero-forcing design error depends on the equalizer delay. For an equalizer designed with 14 taps and a delay of 11, the MSE error = 7.04 × 10⁻⁴ for a noiseless input. On the other hand, the error = 0.9985 for delay = 1. Thus the equalizer delay plays a vital role in the zero-forcing equalizer design. Knowledge of the delay introduced by the equalizer is essential to determine the position of the first valid sample at the output of the equalizer (see sections 8.6 and 8.7). The complete simulation code, which includes all the code snippets discussed above, is given next.

Program 8.7: zf_equalizer_test.m: Simulation of zero-forcing equalizer

clearvars; clc;
%System parameters
nSamp=5; %Number of samples per symbol determines baud rate Tsym
Fs=100; %Sampling Frequency of the system
Ts=1/Fs; %Sampling time
Tsym=nSamp*Ts; %symbol time period
%Define transfer function of the channel
k=6; %define limits for computing channel response
N0 = 0.001; %Standard deviation of AWGN channel noise
t = -k*Tsym:Ts:k*Tsym; %time base defined till +/-kTsym
h_t = 1./(1+(t/Tsym).^2); %channel model, replace with your own model
h_t = h_t + N0*randn(1,length(h_t)); %add noise to the channel response
h_k = h_t(1:nSamp:end); %downsampling to represent symbol rate sampler
t_inst=t(1:nSamp:end); %symbol sampling instants
figure; plot(t,h_t); hold on; %channel response at all sampling instants
stem(t_inst,h_k,'r'); %channel response at symbol sampling instants
legend('continuous-time model','discrete-time model');
title('Channel impulse response');
xlabel('Time (s)'); ylabel('Amplitude');
%Equalizer Design Parameters
N = 14; %Desired number of taps for equalizer filter
delay = 11;
%design zero-forcing equalizer for given channel and get tap weights
%and filter the input through the equalizer
[w,error,k0]=zf_equalizer(h_k,N,delay); %find equalizer co-effs for given CIR
%[w,error,k0]=zf_equalizer(h_k,N); %Try this delay optimized equalizer
r_k=h_k; %Test the equalizer with the sampled channel response as input
d_k=conv(w,r_k); %filter input through the eq
h_sys=conv(w,h_k); %overall effect of channel and equalizer
disp(['ZF equalizer design: N=', num2str(N), ...
    ' Delay=',num2str(delay), ' error=', num2str(error)]);
disp('ZF equalizer weights:'); disp(w);
%Frequency response of channel, equalizer & overall system
[H_F,Omega_1]=freqz(h_k); %frequency response of channel
[W,Omega_2]=freqz(w); %frequency response of equalizer
[H_sys,Omega_3]=freqz(h_sys); %frequency response of overall system
figure;
plot(Omega_1/pi,20*log10(abs(H_F)/max(abs(H_F))),'g'); hold on;
plot(Omega_2/pi,20*log10(abs(W)/max(abs(W))),'r');
plot(Omega_3/pi,20*log10(abs(H_sys)/max(abs(H_sys))),'k');
legend('channel','ZF equalizer','overall system');
title('Frequency response'); ylabel('Magnitude(dB)');
xlabel('Normalized frequency(x \pi rad/sample)');
figure; %Plot equalizer input and output (time-domain response)
subplot(2,1,1); stem(0:1:length(r_k)-1,r_k);
title('Equalizer input'); xlabel('Samples'); ylabel('Amplitude');
subplot(2,1,2); stem(0:1:length(d_k)-1,d_k);
title(['Equalizer output- N=', num2str(N), ...
    ' Delay=',num2str(delay), ' Error=', num2str(error)]);
xlabel('Samples'); ylabel('Amplitude');
Fig. 8.7: Input and output samples from the designed zero-forcing equalizer
8.4.4 Drawbacks of zero-forcing equalizer

Since the response of a zero-forcing equalizer is the inverse of the channel response (refer Figure 8.6), the zero-forcing equalizer applies a large gain at those frequencies where the channel has severe attenuation or spectral nulls. As a result, the large gain also amplifies the additive noise at those frequencies, and hence zero-forcing equalization suffers from noise enhancement. Another issue is that the additive white noise (which can be added as part of the channel model) can become colored, thereby complicating optimal detection. The simulated results in Figure 8.8 (obtained by increasing the standard deviation of the additive noise)
show the influence of additive channel noise on the equalizer output. These unwanted effects can be mitigated by applying the minimum mean square error criterion.
Fig. 8.8: Effect of additive noise on zero-forcing equalizer output
8.5 Minimum mean square error (MMSE) equalizer

The zero-forcing equalizer suffers from the ill-effects of noise enhancement. An improved performance over the zero-forcing equalizer can be achieved by considering the noise term in the design of the equalizer. An MMSE filter design achieves this by accounting for a trade-off between noise enhancement and interference suppression.
Fig. 8.9: Filter design model for MMSE symbol-spaced linear equalizer
Minimization of error variance and bias is the goal of the MMSE filter design problem; in other words, it tries to minimize the mean square error. In this design problem, we wish to design an FIR MMSE equalizer of length N having the following filter transfer function.

W(z) = \sum_{k=0}^{N-1} w[k]\, z^{-k}    (8.27)
With reference to the generic discrete-time channel model given in Figure 8.9, an MMSE equalizer tries to minimize the variance and bias of the following error signal

e[k] = d[k] - \hat{a}[k - k_0]    (8.28)
To account for the delay due to the channel and the equalizer, the discrete-time model (Figure 8.9) includes the delay k0 at the decision device output. We note that the error signal depends on the estimated information symbols \hat{a}[k − k0]. Since there is a possibility of erroneous decisions at the output of the decision device, it is difficult to account for those errors. Hence, for filter optimization during the training phase, we assume \hat{a}[k − k0] = a[k − k0], so that

e[k] = d[k] - a[k - k_0]    (8.29)

The error signal can be rewritten to include the effect of an N-length FIR equalizer on the received symbols
e[k] = d[k] - a[k - k_0] = \sum_{i=0}^{N-1} w[i]\, r[k-i] - a[k - k_0]    (8.30)
Given the received sequence or noisy measurement r, the goal is to design an equalizer filter w that would recover the information symbols a. This is a classic Wiener filter problem. The Wiener filter design approach requires that we find the filter coefficients w that minimize the mean square error ξ. This criterion can be concisely stated as

\mathbf{w} = \arg\min\; \xi = \arg\min\; E\left\{ \left| e[k] \right|^2 \right\}    (8.31)

where E[·] is the expectation operator. For the complex signal case, the error signal given in equation 8.30 can be rewritten as
e[k] = d[k] - a[k - k_0] = \sum_{i=0}^{N-1} w^*[i]\, r[k-i] - a[k - k_0]
     = \begin{bmatrix} w_0^* & w_1^* & \cdots & w_{N-1}^* \end{bmatrix} \begin{bmatrix} r_k \\ r_{k-1} \\ \vdots \\ r_{k-(N-1)} \end{bmatrix} - a[k - k_0]
     = \mathbf{w}^H \mathbf{r} - a[k - k_0]    (8.32)
where w = [w_0, w_1, \cdots, w_{N-1}]^H are the complex filter coefficients, with the superscript H denoting the Hermitian transpose, and r = [r_k, r_{k-1}, \cdots, r_{k-(N-1)}]^T denotes the complex received sequence. Using equation 8.32, the mean square error (MSE) can be derived as
\xi = E\left\{ |e[k]|^2 \right\} = E\left\{ e[k]\, e^*[k] \right\}
    = E\left\{ \left(\mathbf{w}^H \mathbf{r} - a[k-k_0]\right) \left(\mathbf{w}^H \mathbf{r} - a[k-k_0]\right)^H \right\}
    = \mathbf{w}^H \underbrace{E\{\mathbf{r}\mathbf{r}^H\}}_{\mathbf{R}_{rr}} \mathbf{w} - \mathbf{w}^H \underbrace{E\{\mathbf{r}\, a^*[k-k_0]\}}_{\mathbf{R}_{ra}} - E\{a[k-k_0]\,\mathbf{r}^H\}\, \mathbf{w} + E\left\{ |a[k-k_0]|^2 \right\}
    = \mathbf{w}^H \mathbf{R}_{rr} \mathbf{w} - \mathbf{w}^H \mathbf{R}_{ra} - \mathbf{R}_{ra}^H \mathbf{w} + E\left\{ |a[k-k_0]|^2 \right\}    (8.33)
with the autocorrelation matrix R_{rr} = E\{r \cdot r^H\} and the cross-correlation vector R_{ra} = E\{r\, a^*[k − k_0]\}. To find the complex tap weights w that provide the minimum MSE, the gradient of ξ is set to zero. The gradient of the mean square error is given by

\frac{\partial \xi}{\partial \mathbf{w}^*} = \frac{\partial}{\partial \mathbf{w}^*}\left(\mathbf{w}^H \mathbf{R}_{rr} \mathbf{w}\right) - \frac{\partial}{\partial \mathbf{w}^*}\left(\mathbf{w}^H \mathbf{R}_{ra}\right)    (8.34)
    \qquad - \frac{\partial}{\partial \mathbf{w}^*}\left(\mathbf{R}_{ra}^H \mathbf{w}\right) + \frac{\partial}{\partial \mathbf{w}^*} E\left\{|a[k-k_0]|^2\right\}    (8.35)
    = \mathbf{R}_{rr} \mathbf{w} - \mathbf{R}_{ra} - 0 + 0    (8.36)
Equating the gradient of the MSE to zero, the Wiener solution for the complex filter coefficients of the MMSE filter is given by

\mathbf{w} = \mathbf{R}_{rr}^{-1} \mathbf{R}_{ra}    (8.37)

This equation is called the Wiener-Hopf equation, with the autocorrelation matrix R_{rr} and the cross-correlation vector R_{ra} given by

\mathbf{R}_{rr} = E\{\mathbf{r}\mathbf{r}^H\} = \begin{bmatrix} E\{r_k r_k^*\} & E\{r_k r_{k-1}^*\} & \cdots & E\{r_k r_{k-(N-1)}^*\} \\ E\{r_{k-1} r_k^*\} & E\{r_{k-1} r_{k-1}^*\} & \cdots & E\{r_{k-1} r_{k-(N-1)}^*\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{r_{k-(N-1)} r_k^*\} & E\{r_{k-(N-1)} r_{k-1}^*\} & \cdots & E\{r_{k-(N-1)} r_{k-(N-1)}^*\} \end{bmatrix}    (8.38)

\mathbf{R}_{ra} = E\{\mathbf{r}\, a^*[k-k_0]\} = \begin{bmatrix} E\{r_k a^*_{k-k_0}\} & E\{r_{k-1} a^*_{k-k_0}\} & \cdots & E\{r_{k-(N-1)} a^*_{k-k_0}\} \end{bmatrix}^T    (8.39)
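Equations 8.37 through 8.39 can be exercised numerically by replacing the expectations with sample averages over a training sequence. A Python/NumPy sketch for the real-valued BPSK case (so Hermitian transposes reduce to ordinary transposes; the channel, lengths and delay are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
h = np.array([1.0, 0.4, 0.2])   # toy minimum-phase channel
N, k0 = 11, 5                   # equalizer length and decision delay (assumed)
sigma_n = 0.05

a = rng.choice([-1.0, 1.0], size=20000)  # BPSK training symbols
r = np.convolve(h, a)[:len(a)] + sigma_n * rng.normal(size=len(a))

# Sample estimates of R_rr = E{r r^H} and R_ra = E{r a*[k-k0]} (eqs. 8.38, 8.39)
K = len(a)
R_rr = np.zeros((N, N)); R_ra = np.zeros(N)
for k in range(N - 1, K):
    rv = r[k - np.arange(N)]    # [r_k, r_{k-1}, ..., r_{k-N+1}]
    R_rr += np.outer(rv, rv)
    R_ra += rv * a[k - k0]
R_rr /= (K - N + 1); R_ra /= (K - N + 1)

w = np.linalg.solve(R_rr, R_ra)  # Wiener-Hopf solution, eq. 8.37
d = np.array([w @ r[k - np.arange(N)] for k in range(N - 1, K)])
ber = np.mean(np.sign(d) != a[N - 1 - k0 : K - k0])
print(ber)
```

With this mild channel and low noise, the data-driven Wiener solution recovers essentially all training symbols; in practice this is the estimation step behind adaptive algorithms such as LMS.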
8.5.1 Alternative solution

Another form of the equation to solve for the MMSE equalizer taps [6][7], which allows for end-to-end system delay, is given by

\mathbf{w} = \left( \mathbf{H}^H \mathbf{H} + \frac{\sigma_n^2}{\sigma_a^2} \mathbf{I} \right)^{-1} \mathbf{H}^H \boldsymbol{\delta}_{k_0}    (8.40)

Here, H is the channel matrix as defined in equation 8.15, H^H its Hermitian transpose, I is an identity matrix, the ratio σ_n²/σ_a² is the inverse of the signal-to-noise ratio (SNR), and δ_{k_0} = [⋯ , 0, 1, 0, ⋯]^T is a column vector with a 1 at the k0-th position. This equation is very similar to equation 8.18 of the zero-forcing equalizer, except for the signal-to-noise ratio term. Therefore, for a noiseless channel, the solution is identical for the MMSE
equalizers and the zero-forcing equalizers. This equation allows for end-to-end system delay. Without the equalizer delay (k0), it is difficult to compensate for the effective channel delay using a causal FIR equalizer. The solution for the MMSE equalizer depends on the availability and accuracy of the channel matrix H, the knowledge of the variances σ_a² and σ_n² (which is often the hardest challenge), and the invertibility of the matrix H^H H + (σ_n²/σ_a²) I. In this section, it is assumed that the receiver possesses the knowledge of the channel matrix H and the variances σ_a², σ_n², and our goal is to design a delay-optimized MMSE equalizer. The minimum MSE is given by [6][7]

\xi_{min} = \sigma_a^2 \left( 1 - \boldsymbol{\delta}_{k_0}^T \mathbf{H} \left[ \mathbf{H}^H \mathbf{H} + \frac{\sigma_n^2}{\sigma_a^2} \mathbf{I} \right]^{-1} \mathbf{H}^H \boldsymbol{\delta}_{k_0} \right)    (8.41)

The optimal delay that minimizes the MSE is simply the index of the maximum diagonal element of the matrix H [H^H H + (σ_n²/σ_a²) I]^{-1} H^H:

\text{optimum delay} = k_{opt} = \arg\max_{index}\; \mathrm{diag}\left( \mathbf{H} \left[ \mathbf{H}^H \mathbf{H} + \frac{\sigma_n^2}{\sigma_a^2} \mathbf{I} \right]^{-1} \mathbf{H}^H \right)    (8.42)
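A Python/NumPy sketch of the matrix solution in equations 8.40 through 8.42 (real-valued case; σ_a² = 1 is assumed, the test channel is the one used later in section 8.6, and conv_matrix is the hypothetical helper used earlier):

```python
import numpy as np

def conv_matrix(h, N):
    L = len(h)
    H = np.zeros((L + N - 1, N))
    for i in range(N):
        H[i:i + L, i] = h
    return H

def mmse_equalizer(h, snr_db, N):
    """Delay-optimized MMSE design per eqs. 8.40-8.42, with sigma_a^2 = 1."""
    H = conv_matrix(np.asarray(h, float), N)
    gamma = 10 ** (-snr_db / 10)               # sigma_n^2 / sigma_a^2
    A = np.linalg.inv(H.T @ H + gamma * np.eye(N)) @ H.T
    k0 = int(np.argmax(np.diag(H @ A)))        # eq. 8.42
    d = np.zeros(H.shape[0]); d[k0] = 1.0
    w = A @ d                                  # eq. 8.40
    mse = 1.0 - d @ (H @ A @ d)                # eq. 8.41
    return w, mse, k0

h = [-0.1, -0.3, 0.4, 1.0, 0.4, 0.3, -0.1]    # test channel from section 8.6
for snr in (5, 15, 30):
    w, mse, k0 = mmse_equalizer(h, snr, N=15)
    print(snr, k0, round(mse, 4))
```

As the SNR grows, the regularization term γI shrinks, the design approaches the zero-forcing solution, and the minimum MSE of equation 8.41 decreases monotonically.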
8.5.2 Design and simulation of MMSE equalizer

A Matlab function for the design of a delay-optimized MMSE equalizer is given next. This function is based on equations 8.40, 8.41 and 8.42.

Program 8.8: mmse_equalizer.m: Function to design a delay-optimized MMSE equalizer

function [w,mse,optDelay]=mmse_equalizer(h,snr,N,delay)
% Delay optimized MMSE Equalizer.
% [w,mse]=mmse_equalizer(h,snr,N,delay) designs an MMSE equalizer w
% for given channel impulse response h, input 'snr' (dB), the desired length
% of the equalizer N and equalizer delay (delay). It also returns the mean
% square error (mse) and the optimal delay (optDelay) of the designed equalizer.
%
% [w,mse,optDelay]=mmse_equalizer(h,snr,N) designs a DELAY OPTIMIZED MMSE
% equalizer w for given channel impulse response h, input 'snr' (dB) and
% the desired length of the equalizer N. Also returns the mean square error (mse)
% and the optimal delay (optDelay) of the designed equalizer.
h=h(:); %channel matrix to solve for simultaneous equations
L=length(h); %length of CIR
H=convMatrix(h,N); %(L+N-1)xN matrix - see Chapter 1
%section title - Methods to compute convolution
gamma = 10^(-snr/10); %inverse of SNR
%compute optimum delay
[~,optDelay] = max(diag(H*inv(H'*H+gamma*eye(N))*H'));
optDelay=optDelay-1; %since Matlab index starts from 1
k0=optDelay;
if nargin==4,
    if delay >=(L+N-1), error('Too large delay'); end
    k0=delay;
end
d=zeros(N+L-1,1); d(k0+1)=1; %optimized position of equalizer delay
w=inv(H'*H+gamma*eye(N))*H'*d; %Least Squares solution
mse=(1-d'*H*inv(H'*H+gamma*eye(N))*H'*d); %assume var(a)=1
end

The simulation methodology to test the zero-forcing equalizer was discussed in the previous section. The same methodology is applied here to test the MMSE equalizer. Here, the same channel model with ISI length L = 13 is used, with slightly increased channel noise (N0 = 0.1). The channel response at the symbol sampling instants is shown in Figure 8.10.
Fig. 8.10: A noisy channel with CIR of length L = 13

Next, a delay-optimized MMSE equalizer of length N = 14 is designed and used for compensating the distortion introduced by the channel. Figure 8.11 shows the MMSE equalizer in action, beautifully compensating the noisy distorted data. The complete simulation code is given next.
Program 8.9: mmse_equalizer_test.m: Simulation of MMSE equalizer

clearvars; clc;
%System parameters
nSamp=5; %Number of samples per symbol determines baud rate Tsym
Fs=100; %Sampling Frequency of the system
Ts=1/Fs; %Sampling period
Tsym=nSamp*Ts;
%Define transfer function of the channel
k=6; %define limits for computing channel response
N0 = 0.1; %Standard deviation of AWGN noise
t = -k*Tsym:Ts:k*Tsym; %time base defined till +/-kTsym
h_t = 1./(1+(t/Tsym).^2); %channel model, replace with your own model
h_t = h_t + N0*randn(1,length(h_t)); %add noise to the channel response
h_k= h_t(1:nSamp:end); %downsampling to represent symbol rate sampler
t_k=t(1:nSamp:end); %symbol sampling instants
figure; plot(t,h_t); hold on; %channel response at all sampling instants
stem(t_k,h_k,'r'); %channel response at symbol sampling instants
legend('continuous-time model','discrete-time model');
title('Channel impulse response');
xlabel('Time (s)'); ylabel('Amplitude');
%Equalizer Design Parameters
nTaps = 14; %Desired number of taps for equalizer filter
%design DELAY OPTIMIZED MMSE eq. for given channel, get tap weights
%and filter the input through the equalizer
noiseVariance = N0^2; %noise variance
snr = 10*log10(1/N0); %convert to SNR (assume var(signal) = 1)
[w,mse,optDelay]=mmse_equalizer(h_k,snr,nTaps); %find eq. co-effs
r_k=h_k; %Test the equalizer with the channel response as input
d_k=conv(w,r_k); %filter input through the equalizer
h_sys=conv(w,h_k); %overall effect of channel and equalizer
disp(['MMSE equalizer design: N=', num2str(nTaps),' delay=',num2str(optDelay)])
disp('MMSE equalizer weights:'); disp(w)
figure;
subplot(1,2,1); stem(0:1:length(r_k)-1,r_k);
xlabel('Samples'); ylabel('Amplitude'); title('Equalizer input');
subplot(1,2,2); stem(0:1:length(d_k)-1,d_k);
xlabel('Samples'); ylabel('Amplitude');
title(['Equalizer output- N=', num2str(nTaps),' delay=',num2str(optDelay)]);
Fig. 8.11: MMSE equalizer of length N = 14 and optimized delay k0 = 13
8.6 Equalizer delay optimization

For preset equalizers, the tap length N and the decision delay k0 are fixed. The chosen tap length and equalizer delay significantly affect the performance of the entire communication link. This gives rise to the need for optimizing the tap length and the decision delay. In this section, only the optimization of the equalizer delay, for both the zero-forcing and the MMSE equalizers, is simulated. Readers may refer to [8] for more details on this topic.

The equalizer delay is crucial to the performance of the communication link, since it determines which transmitted symbol is being detected at the current instant. Therefore, it directly affects the detector block that follows the equalizer. The Matlab functions shown in the previous sections for designing the zero-forcing equalizer (zf_equalizer.m) and the MMSE equalizer (mmse_equalizer.m) already include the code that calculates the optimum decision delay. The optimization is based on equations 8.21 and 8.42.

The following code snippet demonstrates the need for delay optimization of an MMSE equalizer for each chosen equalizer tap length N = [5, 10, 15, 20, 25, 30]. For this simulation, the channel impulse response is chosen as h[k] = [−0.1, −0.3, 0.4, 1, 0.4, 0.3, −0.1]. The equalizer delay is swept for each tap length, and the mean squared error is plotted for visual comparison (Figure 8.12). For a given tap length N, the equalizer with optimum delay is the one that gives the minimum MSE. The same methodology can be applied to test the delay optimization of the zero-forcing equalizer.
Program 8.10: mmse_equalizer_delay_opti.m: Delay optimization of MMSE equalizer for fixed N

%The test methodology follows the example given in
%Yu Gong et al., Adaptive MMSE equalizer with optimum tap-length and
%decision delay, SSPD 2010, and tries to obtain a plot similar to
%Figure 3 of that paper
h=[-0.1 -0.3 0.4 1 0.4 0.3 -0.1]; %test channel
SNR=10; %Signal-to-noise ratio at the equalizer input in dB
Ns=5:5:30; %sweep number of equalizer taps from 5 to 30
maxDelay=Ns(end)+length(h)-2; %max delay cannot exceed this value
mse=zeros(length(Ns),maxDelay);
optimalDelay=zeros(length(Ns),1);
i=1; j=1;
for N=Ns, %sweep number of equalizer taps
    for delays=0:1:N+length(h)-2; %sweep delays
        %compute MSE and optimal delay for each combination
        [~,mse(i,j),optimalDelay(i,1)]=mmse_equalizer(h,SNR,N,delays);
        j=j+1;
    end
    i=i+1; j=1;
end
plot(0:1:N+length(h)-2,log10(mse.')) %plot mse in log scale
title('MSE Vs eq. delay for given channel and equalizer lengths');
xlabel('Equalizer delay'); ylabel('log_{10}[mse]');
%display optimal delays for each selected filter length N. This will
%correspond with the bottom of the buckets displayed in the plot
disp('Optimal Delays for each N value ->'); disp(optimalDelay);
Fig. 8.12: MSE Vs equalizer delay for different values of equalizer tap lengths N
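The sweep in Program 8.10 can be cross-checked against the closed-form optimum of equation 8.42: the delay that minimizes the swept MSE must coincide with the argmax of the diagonal. A Python/NumPy sketch of this check (mmse_equalizer here is a hypothetical Python counterpart of the book's Matlab function; same test channel):

```python
import numpy as np

def conv_matrix(h, N):
    L = len(h)
    H = np.zeros((L + N - 1, N))
    for i in range(N):
        H[i:i + L, i] = h
    return H

def mmse_equalizer(h, snr_db, N, delay=None):
    H = conv_matrix(np.asarray(h, float), N)
    gamma = 10 ** (-snr_db / 10)
    A = np.linalg.inv(H.T @ H + gamma * np.eye(N)) @ H.T
    opt = int(np.argmax(np.diag(H @ A)))       # eq. 8.42
    k0 = opt if delay is None else delay
    d = np.zeros(H.shape[0]); d[k0] = 1.0
    w = A @ d
    mse = 1.0 - d @ (H @ A @ d)                # eq. 8.41
    return w, mse, k0

h = [-0.1, -0.3, 0.4, 1.0, 0.4, 0.3, -0.1]
snr, N = 10, 15
mses = [mmse_equalizer(h, snr, N, delay=k)[1] for k in range(N + len(h) - 1)]
_, _, opt = mmse_equalizer(h, snr, N)
print(opt, int(np.argmin(mses)))  # both report the same optimum delay
```

Since mse(k) = 1 − diag(HA)[k], the swept minimum and the diagonal maximum are the same index by construction, which is why the Matlab functions can return the optimum delay without an explicit sweep.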
8.7 BPSK modulation with zero-forcing and MMSE equalizers

Having designed and tested the zero-forcing and MMSE FIR linear equalizers, the next obvious task is to test their performance over a communication link. The performance of a linear equalizer depends entirely on the characteristics of the channel over which the communication happens. As an example, three distinct channels [4], with the characteristics shown in Figure 8.13, are put to the test.

In the simulation model, given in Figure 8.14, a string of random data, modulated by BPSK, is individually passed through the discrete channels given in Figure 8.13. In addition to filtering the BPSK modulated
Fig. 8.13: Discrete time characteristics of three channels
data through those channels, the transmitted data is also mixed with additive white Gaussian noise (as dictated by the given Eb/N0 value). The received data is then passed through the delay-optimized versions of the zero-forcing equalizer and the MMSE equalizer, independently over two different paths. The outputs from the equalizers are passed through a threshold detector that gives an estimate of the transmitted information symbols. The detected symbols are then compared with the original data sequence and the bit error rate is calculated. The simulation is performed for different Eb/N0 values and the resulting performance curves are plotted in Figure 8.15. The time domain and frequency domain characteristics of the three channels A, B and C are plotted in Figure 8.13.

The simulated performance results in Figure 8.15 also include the performance of the communication link over an ISI-free channel (no inter-symbol interference). The performance of both equalizers is best over channel A, which has the typical response of a good telephone channel (the channel response is gradual throughout the band, with no spectral nulls). The MMSE equalizer performs better than the ZF equalizer for channel C, which has the worst spectral characteristic. The next best performance of the MMSE equalizer is achieved for channel B. Channel B still has a spectral null, but it is not as severe as that of channel C. The zero-forcing filters perform the worst over both channels B and C. This is because the design of the zero-forcing filter inherently ignores the channel noise in the design equation for the computation of tap weights. It can be concluded that the linear equalizers yield good performance over well-behaved channels that do not exhibit spectral nulls. On the other hand, they are inadequate for fully compensating inter-symbol interference over channels that exhibit spectral nulls, which is often the reality. Decision feedback equalizers offer an effective solution for this problem.
Non-linear equalizers with reduced complexity are also a focus of research aimed at overcoming the aforementioned limitations of linear equalizers.
Fig. 8.14: Simulation model for BPSK modulation with zero-forcing and MMSE equalizers
Fig. 8.15: Error rate performance of MMSE and zero-forcing FIR equalizers of length N = 31
The complete simulation code is given next. The simulation code reuses the add_awgn_noise.m function defined in section 6.1 of chapter 6 and the iqOptDetector.m function given in section 5.4.4 of chapter 5.
Program 8.11: bpsk_isi_channels_equalizers.m: Performance of FIR linear equalizers over ISI channels

% Demonstration of Eb/N0 Vs SER for baseband BPSK modulation scheme
% over different ISI channels with MMSE and ZF equalizers
clear all; clc;
%---------Input Fields-----------------------
N=1e5; %Number of bits to transmit
EbN0dB = 0:2:30; % Eb/N0 range in dB for simulation
M=2; %2-PSK
h_c=[0.04 -0.05 0.07 -0.21 -0.5 0.72 0.36 0.21 0.03 0.07];%Channel A
h_c=[0.407 0.815 0.407]; %uncomment this for Channel B
h_c=[0.227 0.460 0.688 0.460 0.227]; %uncomment this for Channel C
nTaps = 31; %Desired number of taps for equalizer filter
SER_zf = zeros(length(EbN0dB),1); SER_mmse = zeros(length(EbN0dB),1);
%-----------------Transmitter---------------------
d= randi([0,1],1,N); %Uniformly distributed random source bits
ref=cos(((M:-1:1)-1)/M*2*pi); %BPSK ideal constellation
s = ref(d+1); %BPSK Mapping
x = conv(s,h_c); %apply channel effect on transmitted symbols

for i=1:length(EbN0dB),
    %----------------Channel--------------------
    r=add_awgn_noise(x,EbN0dB(i)); %add AWGN noise, r = x+n
    %---------------Receiver--------------------
    %DELAY OPTIMIZED MMSE equalizer
    [h_mmse,MSE,optDelay]=mmse_equalizer(h_c,EbN0dB(i),nTaps); %MMSE eq.
    y_mmse=conv(h_mmse,r); %filter the received signal through MMSE eq.
    y_mmse=y_mmse(optDelay+1:optDelay+N); %samples from optDelay position
    %DELAY OPTIMIZED ZF equalizer
    [h_zf,error,optDelay]=zf_equalizer(h_c,nTaps); %design ZF equalizer
    y_zf=conv(h_zf,r); %filter the received signal through ZF equalizer
    y_zf = y_zf(optDelay+1:optDelay+N); %samples from optDelay position
    %Optimum Detection in the receiver - Euclidean distance Method
    %[estimatedTxSymbols,dcap]= iqOptDetector(y_zf,ref); %See chapter 5
    %Or a simple threshold at zero will serve the purpose
    dcap_mmse=(y_mmse>=0); %detected bits from MMSE equalized output
    dcap_zf=(y_zf>=0); %detected bits from ZF equalized output
    SER_mmse(i)=sum((d~=dcap_mmse))/N; %SER when filtered thro MMSE eq.
    SER_zf(i)=sum((d~=dcap_zf))/N; %SER when filtered thro ZF equalizer
end
theoreticalSER = 0.5*erfc(sqrt(10.^(EbN0dB/10))); %BPSK theory SER
figure; semilogy(EbN0dB,SER_zf,'g'); hold on;
semilogy(EbN0dB,SER_mmse,'r','LineWidth',1.5);
semilogy(EbN0dB,theoreticalSER,'k'); ylim([1e-4,1]);
title('Probability of Symbol Error for BPSK signals');
xlabel('E_b/N_0 (dB)');ylabel('Probability of Symbol Error - P_s');
legend('ZF Equalizer','MMSE Equalizer','No interference');grid on;
[H_c,W]=freqz(h_c); %compute and plot channel characteristics
figure;subplot(1,2,1);stem(h_c); %time domain
subplot(1,2,2);plot(W,20*log10(abs(H_c)/max(abs(H_c)))); %freq domain
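The spectral-null behavior of the three test channels can be checked independently of the plots. The following pure-Python sketch (a quick numeric aside, not from the book) evaluates the magnitude response |H(e^jω)| of each channel impulse response on a dense frequency grid:

```python
import math, cmath

# Evaluate |H(e^{jw})| for the three test channels on a 512-point grid
# over [0, pi]. Channels B and C should dip essentially to zero
# (spectral nulls), consistent with the discussion of Figure 8.13.
channels = {
    'A': [0.04, -0.05, 0.07, -0.21, -0.5, 0.72, 0.36, 0.21, 0.03, 0.07],
    'B': [0.407, 0.815, 0.407],
    'C': [0.227, 0.460, 0.688, 0.460, 0.227],
}
min_mag = {}
for name, h in channels.items():
    mags = [abs(sum(hk * cmath.exp(-1j * math.pi * i / 511 * k)
                    for k, hk in enumerate(h))) for i in range(512)]
    min_mag[name] = min(mags)
    print(name, round(min_mag[name], 3))
```

Channels B and C produce minima close to zero, which is exactly why the zero-forcing equalizer (whose taps grow as the inverse of the channel response) struggles over them.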
8.8 Adaptive equalizer: Least mean square (LMS) algorithm

To solve for the MMSE equalizer coefficients, the Wiener-Hopf solution (equation 8.37) requires the estimation of the autocorrelation matrix R_rr, the cross-correlation vector R_ra and the matrix inversion R_rr^-1. This method can be computationally intensive and may render a potentially unstable filter. Another solution is to use an iterative approach in which the filter weights w are updated iteratively in a direction towards the optimum Wiener solution (R_rr^-1 R_ra). This is achieved by iteratively moving the filter weights w in the direction of the negative gradient

    w_{n+1} = w_n - µ∇ξ[n]                                          (8.43)

where w_n is the vector of current filter tap weights, w_{n+1} is the vector of filter tap weights for the next update, µ is the step size and ∇ξ[n] denotes the gradient of the mean square error as derived in equation 8.36.
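The update of equation 8.43 can be illustrated on a toy problem (an illustration of the idea, not the book's code). For a quadratic MSE surface ξ(w) = σ² − 2pᵀw + wᵀRw, the gradient is ∇ξ = 2(Rw − p) and the minimum is the Wiener solution R⁻¹p; steepest descent converges to it for a small enough step size. The R and p values below are assumed for illustration:

```python
# Steepest descent on a 2-tap quadratic MSE surface (illustrative,
# assumed values): xi(w) = sigma^2 - 2 p'w + w'R w, grad = 2(R w - p).
R = [[2.0, 0.5], [0.5, 1.0]]   # assumed autocorrelation matrix
p = [1.0, 0.3]                 # assumed cross-correlation vector

w, mu = [0.0, 0.0], 0.05
for _ in range(2000):
    grad = [2 * (R[i][0] * w[0] + R[i][1] * w[1] - p[i]) for i in range(2)]
    w = [w[i] - mu * grad[i] for i in range(2)]      # equation 8.43

det = R[0][0] * R[1][1] - R[0][1] * R[1][0]          # Wiener solution R^-1 p
w_opt = [(R[1][1] * p[0] - R[0][1] * p[1]) / det,
         (R[0][0] * p[1] - R[1][0] * p[0]) / det]
print([round(x, 4) for x in w], [round(x, 4) for x in w_opt])
# both print [0.4857, 0.0571]: the iterates reach the Wiener solution
```

Convergence requires µ < 1/λ_max, where λ_max is the largest eigenvalue of R; this is the same step-size constraint that governs the LMS algorithm described next.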
Fig. 8.16: Adaptive filter structure using LMS update

The least mean square (LMS) algorithm is derived from the gradient descent algorithm, which is an iterative algorithm that searches for the minimum error condition with the cost function equal to the mean square error at the output of the equalizer filter. The algorithm is not realizable until we compute R_rr = E{r r^H} and R_ra = E{r a*[k]}, which involve an expectation operator. Therefore, for real-time applications, the instantaneous squared error is preferred instead of the mean squared error. This is achieved by replacing R_rr = E{r r^H} and R_ra = E{r a*[k]} with the instantaneous estimates R_rr = r r^H and R_ra = r a*[k], respectively.

The channel model utilizing the LMS algorithm for adapting the FIR linear equalizer is shown in Figure 8.16. The LMS algorithm computes the filter tap updates using the error between the reference sequence and the filter output. The channel model can be operated in two modes: training mode and decision directed mode. In training mode, the transmitted signal a[k] acts as the reference signal for error generation. In decision directed mode (engaged when the actual data transmission happens), the output of the decision device â[k] acts as the reference signal for error generation. With reference to the channel model in Figure 8.16, the recursive equations for computing the error term and the filter coefficient updates of the LMS algorithm in training mode are given by

    e = a[k] - w_n^H r                                              (8.44)
    w_{n+1} = w_n + µ e r                                           (8.45)

where w_n = [w_0, w_1, ..., w_{N-1}]^H are the current filter coefficients at instant n, with superscript H denoting Hermitian transpose, and r = [r_k, r_{k-1}, ..., r_{k-N+1}]^T denotes the complex received sequence. For decision directed mode, replace a[k] with â[k] in the LMS equations. The summary of the LMS algorithm for designing a filter of length N and its Matlab implementation are given next.

Algorithm 1: Least mean square (LMS) algorithm
Input: N: filter length, µ: step size, r: received signal sequence, a: reference sequence
Output: w: final filter coefficients, e: estimation error
1 Initialization: w_n = zeros(N,1)
2 for k = N:length(r) do
3   Compute r = [r_k, r_{k-1}, ..., r_{k-N+1}]^T
4   Compute error term e = a[k] - w_n^H r
5   Compute filter tap update w_{n+1} = w_n + µ e r
6 return w
Program 8.12: lms.m: Implementing LMS algorithm

function w=lms(N,mu,r,a)
%Function implementing LMS update equations
% N = desired length of the filter
% mu = step size for the LMS update
% r = received/input signal
% a = reference signal
%
%Returns:
% w = optimized filter coefficients
w=zeros(1,N); %initialize filter taps to zeros
for k=N:length(r)
    r_vector = r(k:-1:k-N+1);
    e = a(k)-w*r_vector';
    w = w + mu*e*r_vector;
end
end
To verify the implemented function, a simple test can be conducted. The adaptive filter should be able to identify the response of a short FIR filter whose impulse response is known prior to the test.

Program 8.13: lms_test.m: Verifying the LMS algorithm

%System identification using LMS algorithm
clearvars; clc;
N = 5; %length of the desired filter
mu=0.1; %step size for LMS algorithm
r=randn(1,10000); %random input signal
h=randn(1,N)+1i*randn(1,N); %random complex system
a=conv(h,r); %reference signal
w=lms(N,mu,r,a); %designed filter using input signal and reference
disp('System impulse response (h):'); disp(h)
disp('LMS adapted filter (w): '); disp(w);

A random input signal r[k] and a random complex system impulse response h[n] are generated. They are convolved to produce the noiseless reference signal a[k]. Pretending that h[n] is unknown, using only the input signal r[k] and the reference signal a[k], the LMS algorithm is invoked to design an FIR filter of length N = 5. The resulting optimized filter coefficients w[n] should match the impulse response h[n]. Results from a sample run are given next.

System impulse response (h):
-0.5726 + 0.0607i  0.0361 - 0.5948i  -1.1136 - 0.121i  0.5275 - 0.4212i  1.7004 + 1.8307i
LMS adapted filter (w):
-0.5726 + 0.0607i  0.0361 - 0.5948i  -1.1136 - 0.121i  0.5275 - 0.4212i  1.7004 + 1.8307i
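The same system-identification experiment can be cross-checked outside Matlab. The following pure-Python sketch (an illustrative port of the update equations 8.44-8.45, not the book's code) uses a real-valued input, so the conjugation implicit in `w*r_vector'` is a no-op:

```python
import random

def lms(N, mu, r, a):
    # LMS tap adaptation mirroring Program 8.12 (real-valued input r,
    # complex reference a); returns the adapted tap list w.
    w = [0j] * N
    for k in range(N - 1, len(r)):
        rv = r[k - N + 1:k + 1][::-1]                   # [r_k, ..., r_{k-N+1}]
        e = a[k] - sum(wi * x for wi, x in zip(w, rv))  # error term (eq. 8.44)
        w = [wi + mu * e * x for wi, x in zip(w, rv)]   # tap update (eq. 8.45)
    return w

random.seed(1)
N, mu = 5, 0.1
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
r = [random.gauss(0, 1) for _ in range(10000)]          # real input signal
a = [sum(h[i] * r[k - i] for i in range(N) if k - i >= 0)
     for k in range(len(r))]                            # noiseless conv(h, r)
w = lms(N, mu, r, a)
err = max(abs(wi - hi) for wi, hi in zip(w, h))
print(err < 1e-6)   # True: the adapted taps match the unknown system
```

Because the reference is noiseless, the tap error decays geometrically and the adapted filter matches h[n] to within numerical precision, just as in the Matlab run above.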
References

1. J. R. Treichler, I. Fijalkow, and C. R. Johnson, Jr., Fractionally-spaced equalizers: How long should they really be?, IEEE Signal Processing Mag., vol. 13, pp. 65-81, May 1996.
2. John R. Barry, Edward A. Lee, and David G. Messerschmitt, Digital Communication, 3rd ed., Springer, 2004.
3. R. W. Lucky, Automatic equalization for digital communication, Bell System Technical Journal, vol. 44, 1965.
4. J. G. Proakis, Digital Communications, 3rd ed., McGraw-Hill, 1995.
5. Monson H. Hayes, Statistical Digital Signal Processing and Modeling, chapter 4.4.5, Application: FIR least squares inverse filter, Wiley, 1st edition, April 1996.
6. C. R. Johnson, Jr. et al., On fractionally-spaced equalizer design for digital microwave radio channels, Conference Record of the Twenty-Ninth Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 290-294, November 1995.
7. Phil Schniter, MMSE Equalizer Design, March 6, 2008, http://www2.ece.ohio-state.edu/~schniter/ee501/handouts/mmse_eq.pdf
8. Yu Gong et al., Adaptive MMSE Equalizer with Optimum Tap-length and Decision Delay, Sensor Signal Processing for Defence (SSPD 2010), 29-30 Sept. 2010.
Chapter 9
Receiver Impairments and Compensation
Abstract IQ signal processing is widely used in today's communication receivers. All IQ processing receiver structures suffer from problems due to amplitude and phase mismatches between their I and Q branches. IQ imbalances are commonplace in analog front-end circuitry and result in interference from the image signal on top of the desired signal. The focus of this chapter is to introduce the reader to some of the basic models of receiver impairments like phase imbalances, gain imbalances and DC offsets. Well known compensation techniques for DC offsets and IQ imbalances are also discussed, with code implementation and results.
9.1 Introduction

Direct conversion image rejection receivers [1] with IQ processing are quite popular in modern RF receiver front-end designs. Direct conversion receivers are preferred over conventional superheterodyne receivers because they do not require separate image filtering [2] [3]. In a direct conversion receiver, the received signal is amplified and filtered at baseband rather than at some intermediate frequency [2]. A direct conversion receiver, shown in Figure 9.1, will completely reject the image bands only if the following two conditions are met: 1) the local oscillator tuned to the desired RF frequency must produce the cosine and sine signals with a phase difference of exactly 90°, 2) the gain and phase responses of the I and Q branches must match perfectly. In reality, analog components in the RF front-end are not perfect and hence it is impossible to satisfy these requirements. Therefore, complete rejection of the image bands during RF-IQ conversion is unattainable.

The IQ imbalance results from non-ideal RF front-end components, due to power imbalance and/or non-orthogonality of the I,Q branches caused by imperfect local oscillator outputs. In Figure 9.1, the power imbalance on the IQ branches is captured by the gain parameter g on the quadrature branch. The phase error between the local oscillator outputs is captured by the parameter φ. The effect of IQ imbalance is quite disastrous for the higher order modulations that form the basis for many modern day communication systems like IEEE 802.11 WLAN, UMTS, LTE, etc. Furthermore, the RF front-end may also introduce DC offsets in the IQ branches, leading to more performance degradation. In order to avoid expensive RF front-end components, signal processing algorithms for compensating the IQ imbalance and DC offsets are a necessity.
In this section, we begin by constructing a model, shown in Figure 9.2, to represent the effect of the following RF receiver impairments:
• Phase imbalance and cross-talk on the I,Q branches caused by local oscillator phase mismatch - φ.
• Gain imbalance on the I,Q branches - g.
• DC offsets in the I and Q branches - dc_i, dc_q.
In this model, the complex signal r = r_i + jr_q denotes the perfect unimpaired signal to the receiver and z = z_i + jz_q represents the signal after the introduction of IQ imbalance and DC offset effects in the receiver
Fig. 9.1: Direct conversion image rejection receiver
Fig. 9.2: RF receiver impairments model
front-end. The following RF receiver impairment model, implemented in Matlab, consists of two steps. First it introduces IQ imbalance in the incoming complex signal and then adds DC offsets to the in-phase and quadrature branches. The details of the individual sub-functions for the IQ imbalance model and the DC offset model are described in the subsequent sections. The DC offset impairment model and its compensation are discussed first, followed by the IQ impairment modeling and compensation.

Program 9.1: receiver_impairments.m: Function to add RF receiver impairments

function z=receiver_impairments(r,g,phi,dc_i,dc_q)
%Function to add receiver impairments to the IQ branches
%[z]=receiver_impairments(r,g,phi,dc_i,dc_q) introduces DC and IQ
% imbalances between the inphase and quadrature components of the
% complex baseband signal r. The model parameter g represents the gain
% mismatch between the IQ branches of the receiver and the parameter
% 'phi' represents the phase error of the local oscillator (in degrees).
% DC biases associated with each I,Q path are represented by dc_i and dc_q.
k = iq_imbalance(r,g,phi); %Add IQ imbalance
z = dc_impairment(k,dc_i,dc_q); %Add DC impairment
9.2 DC offsets and compensation

The RF impairment model in Figure 9.2 contains two types of impairments, namely IQ imbalance and DC offset impairment. DC offsets dc_i and dc_q on the I and Q branches are simply modelled as additive factors on the incoming signal.

Program 9.2: dc_impairment.m: Model for introducing DC offsets in the IQ branches

function [y]=dc_impairment(x,dc_i,dc_q)
%Function to create DC impairments in a complex baseband model
% [y]=dc_impairment(x,dc_i,dc_q) introduces DC imbalance
% between the inphase and quadrature components of the complex
% baseband signal x. The DC biases associated with each I,Q path
% are represented by the parameters dc_i and dc_q
y = x + (dc_i+1i*dc_q);

Correspondingly, the DC offsets on the branches are removed by subtracting the mean of the signal on the I,Q branches from the incoming signal.

Program 9.3: dc_compensation.m: Function to estimate and remove DC offsets in the IQ branches

function [v]=dc_compensation(z)
%Function to estimate and remove DC impairments in the IQ branch
% v=dc_compensation(z) removes the estimated DC impairment
iDCest=mean(real(z)); %estimated DC on I branch
qDCest=mean(imag(z)); %estimated DC on Q branch
v=z-(iDCest+1i*qDCest); %remove estimated DCs
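The premise behind dc_compensation.m — that the sample mean of each branch converges to the injected DC offset when the underlying signal is zero-mean — is easy to confirm numerically. A quick pure-Python check (not from the book; the offset values are assumed for illustration):

```python
import random

# Estimate additive DC offsets on the I and Q branches via the sample
# mean, the same idea used by dc_compensation.m. Offsets are assumed.
random.seed(0)
dc_i, dc_q = 0.5, -0.3
zi = [random.gauss(0, 1) + dc_i for _ in range(100000)]   # impaired I branch
zq = [random.gauss(0, 1) + dc_q for _ in range(100000)]   # impaired Q branch
est_i = sum(zi) / len(zi)   # estimated DC on I branch
est_q = sum(zq) / len(zq)   # estimated DC on Q branch
print(round(est_i, 2), round(est_q, 2))   # close to the injected 0.5 and -0.3
```

The estimation error shrinks as 1/sqrt(L) with the number of samples L, so longer observation windows yield better DC estimates.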
9.3 IQ imbalance model

Two IQ imbalance models, namely the single-branch IQ imbalance model and the double-branch IQ imbalance model, are used in practice. These two models differ in the way the IQ mismatch in the inphase and quadrature arms is envisaged. In the single-branch model, the IQ mismatch is modeled as amplitude and phase errors in only one of the branches (say the Q branch). In contrast, in the double-branch model, the IQ mismatch is modeled as amplitude and phase errors in both the I and Q branches. The estimation and compensation algorithms for these models have to be adjusted accordingly. In this text, only the single-branch IQ imbalance model and its corresponding compensation algorithms are described.

With reference to Figure 9.2, let r = r_i + jr_q denote the perfect unimpaired signal and z = z_i + jz_q represent the signal with IQ imbalance. In this single-branch IQ imbalance model, the gain mismatch is represented by a gain term g in the Q branch only. The difference 1 - g is a measure of the amplitude deviation of the Q branch from the perfectly balanced condition. The presence of the phase error φ in the local oscillator outputs manifests as cross-talk between the I and Q branches. In essence, together with the gain imbalance g and the phase error φ, the impaired signals on the I,Q branches are represented as

    [z_i[k]]   [      1                0        ] [r_i[k]]
    [z_q[k]] = [ -g.sin(φ_rad)    g.cos(φ_rad)  ] [r_q[k]]           (9.1)

For the given gain mismatch g and phase error φ, the following Matlab function introduces IQ imbalance in a complex baseband signal.
Program 9.4: iq_imbalance.m: IQ imbalance model

function [z]= iq_imbalance(r,g,phi)
%Function to create IQ imbalance impairment in a complex baseband
% [z]=iq_imbalance(r,g,phi) introduces IQ imbalance between the
% inphase and quadrature components of the complex baseband
% signal r. The model parameter g represents the gain mismatch
% between the IQ branches of the receiver and 'phi' represents
% the phase error of the local oscillator (in degrees).
Ri=real(r); Rq=imag(r);
Zi= Ri; %I branch
Zq= g*(-sin(phi/180*pi)*Ri + cos(phi/180*pi)*Rq); %Q branch crosstalk
z=Zi+1i*Zq;
end
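Since equation 9.1 is a 2x2 linear map, it is exactly invertible whenever g ≠ 0 and φ ≠ ±90°, which is what makes compensation possible in the first place. A small pure-Python check (not from the book; the gain/phase values are assumed for illustration):

```python
import math

g, phi = 0.8, 12.0                                   # assumed gain/phase errors
c, s = math.cos(math.radians(phi)), math.sin(math.radians(phi))

def impair(ri, rq):
    # forward map of equation 9.1 (single-branch model)
    return ri, g * (-s * ri + c * rq)

def undo(zi, zq):
    # closed-form inverse of the 2x2 matrix in equation 9.1
    return zi, (zq / g + s * zi) / c

zi, zq = impair(0.7, -1.2)          # distort an arbitrary test sample
ci, cq = undo(zi, zq)
print(round(ci, 6), round(cq, 6))   # prints 0.7 -1.2: original recovered
```

In practice g and φ are unknown, so the compensation schemes of the next section must first estimate them (or equivalent correction coefficients) from the received signal itself.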
9.4 IQ imbalance estimation and compensation

Several IQ imbalance compensation schemes are available. This section uses two of them, based on the works of [4] and [5]. The first algorithm [4] is a blind estimation and compensation algorithm, and the one described in [5] is a pilot based estimation and compensation algorithm. Both techniques are derived for the single-branch IQ imbalance model.
9.4.1 Blind estimation and compensation

The blind technique is a simple, low complexity algorithm that is based solely on the statistical properties of the incoming complex signal. It does not require additional processing overhead like a preamble or training symbols for the IQ imbalance estimation.
Fig. 9.3: Blind estimation and compensation for IQ imbalance
The blind compensation scheme is shown in Figure 9.3. Let z = z_i + jz_q represent the IQ imbalance-impaired complex baseband signal that needs to be compensated. First, the complex baseband signal z is used to estimate three imbalance parameters θ1, θ2 and θ3.

    θ1 = -E{sgn(z_i).z_q}
    θ2 = E{|z_i|}
    θ3 = E{|z_q|}                                                   (9.2)

where sgn(x) is the signum function

    sgn(x) = -1 for x < 0;  0 for x = 0;  1 for x > 0               (9.3)

The compensator coefficients c1 and c2 are then calculated from the estimates of θ1, θ2 and θ3.

    c1 = θ1/θ2,    c2 = sqrt((θ3² - θ1²)/θ2²)                       (9.4)

Finally, the inphase and quadrature components of the compensated signal y = y_i + jy_q are calculated as

    y_i = z_i,    y_q = (c1.z_i + z_q)/c2

Program 9.5: blind_iq_compensation.m: Blind IQ imbalance estimation and compensation

function y=blind_iq_compensation(z)
%Function to estimate and compensate IQ impairments for the single-
%branch IQ impairment model
% y=blind_iq_compensation(z) estimates and compensates IQ imbalance
I=real(z); Q=imag(z);
theta1=(-1)*mean(sign(I).*Q);
theta2=mean(abs(I));
theta3=mean(abs(Q));
c1=theta1/theta2;
c2=sqrt((theta3^2-theta1^2)/theta2^2);
yI = I;
yQ = (c1*I+Q)/c2;
y= (yI +1i*yQ);
end
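The closed-form effect of the blind scheme can be verified numerically. Under the single-branch model of equation 9.1 with independent, equal-power Gaussian I/Q data, the estimators of equation 9.2 tend to θ1 = g·sin(φ)·E|z_i|, θ2 = E|z_i| and θ3 = g·E|z_i|, so that c1 → g·sin(φ), c2 → g·cos(φ) and the compensated y_q approaches the original r_q. A pure-Python sketch of this check (not the book's code; imbalance values assumed):

```python
import math, random

random.seed(2)
g, phi = 0.8, 15.0                                   # assumed imbalance values
c, s = math.cos(math.radians(phi)), math.sin(math.radians(phi))
n = 100000
ri = [random.gauss(0, 1) for _ in range(n)]          # true I samples
rq = [random.gauss(0, 1) for _ in range(n)]          # true Q samples
zi = ri                                              # single-branch: I untouched
zq = [g * (-s * a + c * b) for a, b in zip(ri, rq)]  # impaired Q (eq. 9.1)

sgn = lambda x: (x > 0) - (x < 0)
t1 = -sum(sgn(a) * b for a, b in zip(zi, zq)) / n    # theta1 (eq. 9.2)
t2 = sum(abs(a) for a in zi) / n                     # theta2
t3 = sum(abs(b) for b in zq) / n                     # theta3
c1, c2 = t1 / t2, math.sqrt((t3**2 - t1**2) / t2**2) # eq. 9.4
yq = [(c1 * a + b) / c2 for a, b in zip(zi, zq)]     # compensated Q branch

resid = sum(abs(y - b) for y, b in zip(yq, rq)) / n  # mean deviation from rq
print(round(c1, 2), round(c2, 2), resid < 0.05)      # c1 ~ g*sin(phi), c2 ~ g*cos(phi)
```

The residual deviation is limited only by the finite-sample accuracy of the three estimated moments.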
9.4.2 Pilot based estimation and compensation

Pilot based estimation algorithms are widely used to estimate various channel properties. The transmitter transmits a pilot sequence and the IQ imbalance is estimated from the received sequence at the baseband signal processor. The pilot estimation technique is well suited for wireless applications like WLAN, UMTS and LTE, where a known pilot sequence is transmitted as part of the data communication session. The low complexity algorithm proposed in [5] uses a preamble of length L = 64 in order to estimate the gain imbalance (K_est) and the phase error (P_est).

    K_est = sqrt( Σ_{k=1..L} z_q²[k] / Σ_{k=1..L} z_i²[k] )         (9.5)

    P_est = Σ_{k=1..L} (z_i[k].z_q[k]) / Σ_{k=1..L} z_i²[k]         (9.6)

where the complex signal z = z_i + jz_q represents the impaired version of the long preamble as given in the IEEE 802.11a specification [6].

Program 9.6: pilot_iq_imb_est.m: IQ imbalance estimation using pilot transmission

function [Kest,Pest]=pilot_iq_imb_est(g,phi,dc_i,dc_q)
%Length 64 - Long Preamble as defined in the IEEE 802.11a
preamble_freqDomain = [0,0,0,0,0,0,1,1,-1,-1,1,1,-1,1,...
    -1,1,1,1,1,1,1,-1,-1,1,1,-1,1,-1,1,1,1,1,...
    0,1,-1,-1,1,1,-1,1,-1,1,-1,-1,-1,-1,-1,1,1,...
    -1,-1,1,-1,1,-1,1,1,1,1,0,0,0,0,0]; %freq. domain representation
preamble=ifft(preamble_freqDomain,64); %time domain representation
%send known preamble through DC & IQ imbalance model and estimate it
r=receiver_impairments(preamble,g,phi,dc_i,dc_q);
z=dc_compensation(r); %remove DC imb. before IQ imbalance estimation
%IQ imbalance estimation
I=real(z); Q=imag(z);
Kest = sqrt(sum((Q.*Q))./sum(I.*I)); %estimate gain imbalance
Pest = sum(I.*Q)./sum(I.*I); %estimate phase mismatch
end

Let d = d_i + jd_q be the impaired version of the complex signal received during the normal data transmission interval. Once the parameters given in equations 9.5 and 9.6 are estimated during the preamble transmission, the IQ compensation during the normal data transmission is as follows:

    w_i = d_i,    w_q = (d_q - P_est.d_i) / (K_est.sqrt(1 - P_est²))    (9.7)
Program 9.7: iqImb_compensation.m: Compensation of IQ imbalance during data transmission interval

function y=iqImb_compensation(d,Kest,Pest)
%Function to compensate IQ imbalance during the data transmission
% y=iqImb_compensation(d,Kest,Pest) compensates the IQ imbalance
% present at the received complex signal d at the baseband
% processor. The IQ compensation is performed using the gain
% imbalance (Kest) and phase error (Pest) parameters that are
% estimated during the preamble transmission.
I=real(d); Q=imag(d);
wi= I;
wq = (Q - Pest*I)/sqrt(1-Pest^2)/Kest;
y = wi + 1i*wq;
end
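Under the single-branch model of equation 9.1, if the pilot is idealized as uncorrelated, equal-power I/Q samples, the estimators reduce to K_est ≈ g and P_est ≈ −g·sin(φ), and applying equation 9.7 then restores the Q branch up to a small residual scale factor cos(φ)/sqrt(1 − g²sin²(φ)). The following pure-Python sketch (not the book's code, which uses the 802.11a long preamble) checks this numerically:

```python
import math, random

random.seed(3)
g, phi = 0.9, 8.0                                    # assumed imbalance values
c, s = math.cos(math.radians(phi)), math.sin(math.radians(phi))
n = 100000
ri = [random.gauss(0, 1) for _ in range(n)]          # idealized pilot, I branch
rq = [random.gauss(0, 1) for _ in range(n)]          # idealized pilot, Q branch
zi = ri
zq = [g * (-s * a + c * b) for a, b in zip(ri, rq)]  # impaired Q (eq. 9.1)

Kest = math.sqrt(sum(b * b for b in zq) / sum(a * a for a in zi))   # eq. 9.5
Pest = sum(a * b for a, b in zip(zi, zq)) / sum(a * a for a in zi)  # eq. 9.6
print(round(Kest, 2), round(Pest, 2))                # Kest ~ g, Pest ~ -g*sin(phi)

den = math.sqrt(1 - Pest**2) * Kest                  # eq. 9.7 applied to the
wq = [(b - Pest * a) / den for a, b in zip(zi, zq)]  # same imbalanced samples
resid = max(abs(w - b) for w, b in zip(wq, rq))
print(resid < 0.1)                                   # compensated Q tracks rq closely
```

For small-to-moderate imbalances the residual scale factor stays within a fraction of a percent, which is why the scheme holds up well in the symbol-error-rate simulations of section 9.6.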
9.5 Visualizing the effect of receiver impairments

With the receiver impairments model and compensation techniques in place, let us visualize their effects in the complex plane. For instance, consider a higher order modulation like 64-QAM. The receiver impairments like IQ imbalance and DC offsets are added to the QAM modulated signal. The unimpaired sequence and the sequence affected by receiver impairments are plotted on the complex plane as shown in Figure 9.4. The following code re-uses the mqam_modulator function that was already defined in section 5.3.3 of chapter 5.

Program 9.8: test_rf_impairments.m: Visualize receiver impairments in a complex plane

clearvars; clc;
M=64; %M-QAM modulation order
N=1000; %Number of random symbols to generate
d=ceil(M.*rand(N,1)); %random data symbol generation
s = mqam_modulator(M,d); %M-QAM modulated symbols (s)
g_1=0.8; phi_1=0; dc_i_1=0; dc_q_1=0; %gain mismatch only
g_2=1; phi_2=12; dc_i_2=0; dc_q_2=0; %phase mismatch only
g_3=1; phi_3=0; dc_i_3=0.5; dc_q_3=0.5; %DC offsets only
g_4=0.8; phi_4=12; dc_i_4=0.5; dc_q_4=0.5; %All impairments
r1=receiver_impairments(s,g_1,phi_1,dc_i_1,dc_q_1);
r2=receiver_impairments(s,g_2,phi_2,dc_i_2,dc_q_2);
r3=receiver_impairments(s,g_3,phi_3,dc_i_3,dc_q_3);
r4=receiver_impairments(s,g_4,phi_4,dc_i_4,dc_q_4);
subplot(2,2,1);plot(real(s),imag(s),'b.');hold on;
plot(real(r1),imag(r1),'r.'); title('IQ Gain mismatch only')
subplot(2,2,2);plot(real(s),imag(s),'b.');hold on;
plot(real(r2),imag(r2),'r.'); title('IQ Phase mismatch only')
subplot(2,2,3);plot(real(s),imag(s),'b.');hold on;
plot(real(r3),imag(r3),'r.'); title('DC offsets only')
subplot(2,2,4);plot(real(s),imag(s),'b.');hold on;
plot(real(r4),imag(r4),'r.');title('IQ impairments & DC offsets');
Fig. 9.4: Constellation plots for unimpaired (blue) and impaired (red) versions of 64-QAM modulated symbols: (a) Gain imbalance g = 0.8 (b) Phase mismatch φ = 12◦ (c) DC offsets dci = 0.5, dcq = 0.5 (d) With all impairments g = 0.8, φ = 12◦ , dci = 0.5, dcq = 0.5
9.6 Performance of M-QAM modulation with receiver impairments

In any physical layer of a wireless communication system that uses OFDM with higher order M-QAM modulation, the effect of receiver impairments is of great concern. The performance of a higher order M-QAM system under different conditions of receiver impairments is simulated here.
Fig. 9.5: Simulation model for M-QAM system with receiver impairments Figure 9.5 shows the functional blocks used in this performance simulation. A received M-QAM signal is impaired by IQ imbalance (both gain imbalance g and phase imbalance φ ) and DC offsets (dci , dcq ) on
the IQ arms. First, the DC offset is estimated and compensated. Next, the IQ compensation is performed separately using blind IQ compensation and pilot based IQ compensation. The symbol error rates are finally computed for the following four cases: (a) received signal with no compensation, (b) with DC compensation alone, (c) with DC compensation and blind IQ compensation and (d) with DC compensation and pilot based IQ compensation.

The complete simulation code for this performance simulation is given next. The code re-uses many functions defined in chapter 5 and chapter 6. The functions to perform M-QAM modulation and coherent detection were already defined in sections 5.3.3 and 5.4.3. The AWGN noise model and the code to compute theoretical symbol error rates are given in sections 6.1.2 and 6.1.3 respectively.

Figure 9.6 illustrates the results obtained for the following values of receiver impairments: g = 0.9, φ = 8◦, dci = 1.9, dcq = 1.7. For this condition, the blind IQ compensation and pilot based IQ compensation techniques are on par. Figure 9.7 illustrates the results obtained for g = 0.9, φ = 30◦, dci = 1.9, dcq = 1.7, where the phase mismatch is drastically increased compared to the previous case. For this condition, the performance with pilot based IQ compensation is better than that of the blind compensation technique. Additionally, the constellation plots of the impaired signal and the compensated signal are plotted for various combinations of DC offsets, gain imbalances and phase imbalances, as shown in Figure 9.8.
Program 9.9: mqam_iq_imbalance_awgn.m: Performance simulation of M-QAM modulation with receiver impairments and compensation

%Eb/N0 Vs SER for M-QAM modulation with receiver impairments
clear all;clc;
%---------Input Fields-----------------------
N=100000; %Number of input symbols
EbN0dB = -4:5:24; %Define EbN0dB range for simulation
M=64; %M-QAM modulation order
g=0.9; phi=8; dc_i=1.9; dc_q=1.7; %receiver impairments
%----------------------------------------------
k=log2(M); %Bits per symbol
EsN0dB = 10*log10(k)+EbN0dB; %Converting Eb/N0 to Es/N0
SER1 = zeros(length(EsN0dB),1); %Symbol Error rates (No compensation)
SER2 = SER1; %Symbol Error rates (DC compensation only)
SER3 = SER1; %Symbol Error rates (DC comp & Blind IQ compensation)
SER4 = SER1; %Symbol Error rates (DC comp & Pilot IQ compensation)
d=ceil(M.*rand(1,N)); %random data symbols drawn from [1,2,..,M]
[s,ref]=mqam_modulator(M,d); %MQAM symbols & reference constellation
%See section 5.3.3 on chapter 'Digital Modulators and Demodulators
%- Complex Baseband Equivalent Models' for function definition
for i=1:length(EsN0dB),
    r = add_awgn_noise(s,EsN0dB(i)); %see section 6.1 on chapter 6
    z=receiver_impairments(r,g,phi,dc_i,dc_q); %add impairments
    v=dc_compensation(z); %DC compensation
    y3=blind_iq_compensation(v); %blind IQ compensation
    [Kest,Pest]=pilot_iq_imb_est(g,phi,dc_i,dc_q); %Pilot based estimation
    y4=iqImb_compensation(v,Kest,Pest); %IQ comp. using estimated values
    %Enable this section - if you want to plot constellation diagram
    %figure(1);plot(real(z),imag(z),'rO'); hold on;
    %plot(real(y4),imag(y4),'b*'); hold off;
    %title(['Eb/N0=',num2str(EbN0dB(i)),' (dB)']);pause;
    %-------IQ Detectors - defined in section 5.4.4 chapter 5-------
    [estTxSymbols_1,dcap_1]= iqOptDetector(z,ref); %No compensation
    [estTxSymbols_2,dcap_2]= iqOptDetector(v,ref); %DC compensation only
    [estTxSymbols_3,dcap_3]= iqOptDetector(y3,ref); %DC & blind IQ comp.
    [estTxSymbols_4,dcap_4]= iqOptDetector(y4,ref); %DC & pilot IQ comp.
    %------ Symbol Error Rate Computation------
    SER1(i)=sum((d~=dcap_1))/N; SER2(i)=sum((d~=dcap_2))/N;
    SER3(i)=sum((d~=dcap_3))/N; SER4(i)=sum((d~=dcap_4))/N;
end
theoreticalSER = ser_awgn(EbN0dB,'MQAM',M); %theoretical SER
figure(2);semilogy(EbN0dB,SER1,'r*-'); hold on; %Plot results
semilogy(EbN0dB,SER2,'bO-'); semilogy(EbN0dB,SER3,'g^-');
semilogy(EbN0dB,SER4,'m*-'); semilogy(EbN0dB,theoreticalSER,'k');
legend('No compensation','DC comp only',...
    'Sim- DC & blind iq comp','Sim- DC & pilot iq comp','Theoretical');
xlabel('E_b/N_0 (dB)');ylabel('Symbol Error Rate (Ps)');
title('Probability of Symbol Error 64-QAM signals');
Fig. 9.6: Performance of 64-QAM with receiver impairments: g = 0.9, φ = 8◦ , dci = 1.9, dcq = 1.7
Fig. 9.7: Performance of 64-QAM with receiver impairments: g = 0.9, φ = 30◦ , dci = 1.9, dcq = 1.7
Fig. 9.8: Constellation plots at Eb /N0 = 21dB for various combinations of receiver impairments: (a) Gain imbalance only - g = 0.7 (b) Phase mismatch only - φ = 25◦ (c) Gain imbalance and phase mismatch g = 0.7, φ = 25◦ (d) All impairments - g = 0.7, φ = 25◦ , dci = 1.5, dcq = 1.5
Part V
Wireless Channel Models
Chapter 10
Large-scale Propagation Models
Abstract Radio propagation models play an important role in designing a communication system for real world applications. Propagation models are instrumental in predicting the behavior of a communication system over different environments. This chapter is aimed at providing the ideas behind the simulation of some of the subtopics in large-scale propagation models, such as the free space path loss model, the two-ray ground reflection model, the diffraction loss model and the Hata-Okumura model.
10.1 Introduction

Communication over a wireless network requires radio transmission, which is usually depicted as the physical layer in network stack diagrams. The physical layer defines how the data bits are transferred to and from the physical medium of the system. In a wireless communication system, such as a wireless LAN, radio waves form the link between the physical layer of a transmitter and a receiver. In this chapter, the focus is on simulation models for the physical behavior of radio waves while they are in transit.

Radio waves are electromagnetic radiation. The branch of physics that describes the fundamental aspects of radiation is called electrodynamics. Designing wireless equipment for interaction with an environment involves the application of electrodynamics. For example, the design of an antenna that produces radio waves requires a solid understanding of radiation physics.

Let's take a simple example. The most fundamental property of radio waves is that they travel in all directions. A dipole antenna, the simplest and most widely used antenna, can be built with two conducting rods. When the conducting rods are driven with current from the transmitter, they produce radiation that travels in all directions (the strength of the radiation will not be uniform in all directions). By applying the field equations of electrodynamics, it can be deduced that the strength of the radiation field decreases as 1/d in the far field, where d is the distance from the antenna at which the measurement is taken. Using this result, the received power level at a given distance can be calculated and incorporated in the channel model.

Radio propagation models are broadly classified into large-scale and small-scale models. Large-scale effects typically occur over hundreds to thousands of meters in distance.
Small-scale effects are localized and occur temporally (on the order of a few seconds) or spatially (on the order of a few meters). This chapter is dedicated to the simulation of some of the large-scale models; the small-scale simulation models are discussed in the next chapter. The important questions in large-scale modeling are: how does the signal from a transmitter reach the receiver in the first place, and what is the relative power of the received signal with respect to the transmitted power level? Many scenarios can occur at large scale. For example, the transmitter and the receiver could be in line-of-sight in an environment surrounded by buildings, trees and other objects. As a result, the receiver may receive a direct attenuated signal (also called the line-of-sight (LOS) signal) from the transmitter and indirect
signals (or non-line-of-sight (NLOS) signals) due to other physical effects like reflection, refraction, diffraction and scattering. The direct and indirect signals could also interfere with each other. Some of the large-scale models are briefly described here.

The free-space propagation model is the simplest large-scale model, quite useful in satellite and microwave link modeling. It models a single unobstructed path between the transmitter and the receiver. Applying the fact that the strength of a radiation field decreases as 1/d in the far field, we arrive at the Friis free space equation, which tells us the amount of power received relative to the power transmitted (refer equation 10.1). The log distance propagation model is an extension to the Friis free space propagation model. It incorporates a path-loss exponent that is used to predict the relative received power in a wide range of environments. In the absence of a line-of-sight signal, other physical phenomena like reflection, diffraction, etc., must be relied upon for the modeling. Reflection involves a change in direction of the signal wavefront when it bounces off an object with different optical properties. The plane-earth loss model is another simple propagation model that considers the interaction between the line-of-sight signal and the reflected signal. Diffraction is another phenomenon in radiation physics that makes it possible for a radiated wave to bend around the edges of obstacles. In the knife-edge diffraction model, the path between the transmitter and the receiver is blocked by a single sharp ridge. Approximate mathematical expressions for calculating the loss due to diffraction for the case of multiple ridges were also proposed by many researchers [1] [2] [3] [4]. Of the several available large-scale models, five are selected here for simulation:

• Friis free space propagation model: to model propagation path loss
• Log distance path loss model: incorporates both path loss and shadowing effect
• Two ray ground reflection model, also called the plane-earth loss model
• Single knife-edge diffraction model: incorporates diffraction loss
• Hata-Okumura model: an empirical model
10.2 Friis free space propagation model

The Friis free space propagation model is used to model the LOS path loss incurred in a free space environment, devoid of any objects that create absorption, diffraction, reflections, or any other characteristic-altering phenomenon to a radiated wave. It is valid only in the far field region of the transmitting antenna [5] and is based on the inverse square law of distance, which states that the received power at a particular distance from the transmitter decays by a factor of the square of the distance. The Friis equation for received power is given by

P_r(d) = P_t \frac{G_t G_r \lambda^2}{(4\pi d)^2 L} \qquad (10.1)
where P_r(d) is the received signal power in Watts, expressed as a function of the separation distance (d meters) between the transmitter and the receiver, P_t is the power of the transmitted signal in Watts, G_t and G_r are the gains of the transmitter and receiver antennas relative to an isotropic radiator with unit gain, λ is the wavelength of the carrier in meters, and L represents other losses not associated with the propagation loss. The parameter L may include system losses like loss at the antenna, transmission line attenuation, loss at various filters, etc. The factor L is usually greater than or equal to 1, with L = 1 for no such system losses.

The Friis equation can be modified to accommodate different environments, on the reasoning that the received signal decreases as the nth power of distance, where the parameter n is the path-loss exponent (PLE) that takes constant values depending on the environment that is modeled (see Table 10.1 for various empirical values for PLE).

P_r(d) = P_t \frac{G_t G_r \lambda^2}{(4\pi)^2 d^n L} \qquad (10.2)
Table 10.1: Path loss exponent for various environments

Environment                          Path Loss Exponent (n)
Free space                           2
Urban area cellular radio            2.7 to 3.5
Shadowed urban cellular radio        3 to 5
Inside a building - line-of-sight    1.6 to 1.8
Obstructed in building               4 to 6
Obstructed in factory                2 to 3
The propagation path loss in free space, denoted as P_L, is the loss incurred by the transmitted signal during propagation. It is expressed as the signal loss between the feed points of two isotropic antennas in free space.

P_L(\mathrm{dB}) = -10 \log_{10}\left[\frac{\lambda^2}{(4\pi d)^2}\right] = +20 \log_{10}\left(\frac{4\pi d}{\lambda}\right) \qquad (10.3)

The propagation of an electromagnetic signal through free space is unaffected by its frequency of transmission and hence has no dependency on the wavelength λ. However, the variable λ exists in the path loss equation to account for the effective aperture of the receiving antenna, which is an indicator of the antenna's ability to collect power. If the link between the transmitting and receiving antennas is something other than free space, penetration/absorption losses are also considered in the path loss calculation. Material penetrations are fairly dependent on frequency. Incorporation of penetration losses requires detailed analysis.

Usually, the transmitted power and the received power are specified in terms of dBm (power in decibels with respect to 1 mW) and the antenna gains in dBi (gain in decibels with respect to an isotropic antenna). Therefore, it is often convenient to work in log scale instead of linear scale. The alternative form of the Friis equation in log scale is given by

\left[P_r(d)\right]_{dBm} = [P_t]_{dBm} + [G_t]_{dBi} + [G_r]_{dBi} + 20\log_{10}(\lambda) - 20\log_{10}(4\pi d) - 10\log_{10}(L) \qquad (10.4)

The following function implements a generic Friis equation that includes the path loss exponent n, whose possible values are listed in Table 10.1.
Program 10.1: FriisModel.m: Function implementing Friis propagation model

function [Pr_dBm,PL_dB] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n)
%Pt_dBm = Transmitted power in dBm
%Gt_dBi = Gain of the Transmitted antenna in dBi
%Gr_dBi = Gain of the Receiver antenna in dBi
%f = frequency of transmitted signal in Hertz
%d = array of distances at which the loss needs to be calculated
%L = Other System Losses, No Loss case L=1
%n = path loss exponent (n=2 for free space)
%Pr_dBm = Received power in dBm
%PL_dB = constant path loss factor (including antenna gains)
lambda=3*10^8/f; %Wavelength in meters
PL_dB = Gt_dBi + Gr_dBi + 20*log10(lambda/(4*pi))-10*n*log10(d)-10*log10(L);
Pr_dBm = Pt_dBm + PL_dB; %Received power in dBm at d meters
end

For example, consider a WiFi (IEEE 802.11n standard) transmission-reception system operating in the f = 2.4 GHz or f = 5 GHz band with 0 dBm (1 mW) output power from the transmitter. The gain of the transmitter
antenna is 1 dBi and that of the receiving antenna is 1 dBi. It is assumed that there is no system loss, therefore L = 1. The following Matlab code uses the Friis equation and plots the received power in dBm for a range of distances (Figure 10.1). From the plot, the received power decreases by 6 dB for every doubling of the distance.

Program 10.2: Friis_model_test.m: Friis free space propagation model

%Matlab code to simulate Friis Free space equation
%-----------Input section------------------------
Pt_dBm=0; %Input - Transmitted power in dBm
Gt_dBi=1; %Gain of the Transmitted antenna in dBi
Gr_dBi=1; %Gain of the Receiver antenna in dBi
d =2.^[0,1,2,3,4,5]; %Array of input distances in meters
L=1; %Other System Losses, No Loss case L=1
n=2; %Path loss exponent for Free space
f=2.4e9; %Transmitted signal frequency in Hertz (2.4G WiFi)
%----------------------------------------------------
[Pr1_dBm,PL1_dB] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n);
semilogx(d,Pr1_dBm,'b-o');hold on;
f=5e9; %Transmitted signal frequency in Hertz (5G WiFi)
[Pr2_dBm,PL2_dB] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n);
semilogx(d,Pr2_dBm,'r-o');grid on;title('Free space path loss');
xlabel('Distance (m)');ylabel('Received power (dBm)')
Fig. 10.1: Received power using Friis model for WiFi transmission at f = 2.4 GHz and f = 5 GHz
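The 6 dB-per-doubling behavior follows directly from equation 10.4 with n = 2, since 20 log10(2) ≈ 6.02 dB. As a quick numerical cross-check outside Matlab, the sketch below re-implements the log-scale Friis equation in plain Python; the function name friis_rx_dbm is only illustrative and is not part of the book's code.

```python
import math

def friis_rx_dbm(pt_dbm, gt_dbi, gr_dbi, f_hz, d_m, loss=1.0, n=2):
    # Log-scale Friis equation (10.4), generalized with path-loss exponent n
    lam = 3e8 / f_hz  # carrier wavelength in meters
    return (pt_dbm + gt_dbi + gr_dbi + 20*math.log10(lam)
            - 10*n*math.log10(d_m) - 20*math.log10(4*math.pi)
            - 10*math.log10(loss))

# 2.4 GHz WiFi link: 0 dBm Tx power, 1 dBi antennas, no system losses
p10 = friis_rx_dbm(0, 1, 1, 2.4e9, 10)   # received power at 10 m
p20 = friis_rx_dbm(0, 1, 1, 2.4e9, 20)   # received power at 20 m
print(round(p10 - p20, 2))  # 6.02 dB lost per doubling of distance
```

Doubling the distance with n = 2 costs exactly 20 log10(2) ≈ 6.02 dB, matching the slope seen in Figure 10.1.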
10.3 Log distance path loss model

The log distance path loss model is an extension to the Friis free space model. It is used to predict the propagation loss for a wide range of environments, whereas the Friis free space model is restricted to an unobstructed clear path between the transmitter and the receiver. The model encompasses random shadowing effects due to signal blockage by hills, trees, buildings etc. It is also referred to as the log-normal shadowing model.

In the far field region of the transmitter, for distances beyond d_f, if P_L(d_0) is the path loss at a distance d_0 meters from the transmitter, then the path loss at an arbitrary distance d > d_0 is given by

\left[P_L(d)\right]_{dB} = \left[P_L(d_0)\right]_{dB} + 10\, n \log_{10}\left(\frac{d}{d_0}\right) + \chi, \qquad d_f \le d_0 \le d \qquad (10.5)

where P_L(d) is the path loss at an arbitrary distance d meters and n is the path loss exponent that depends on the type of environment, as given in Table 10.1. Also, χ is a zero-mean Gaussian distributed random variable with standard deviation σ expressed in dB, used only when there is a shadowing effect. The reference path loss P_L(d_0), measured at the close-in reference distance d_0, is obtained by using the Friis path loss equation (equation 10.2) or by field measurements at d_0. Typically, d_0 = 1 m to 10 m for a microcell and d_0 = 1 km for a large cell.

The path-loss exponent (PLE) values given in Table 10.1 are for reference only. They may or may not fit the actual environment we are trying to model. Usually, the PLE is considered to be known a priori, but mostly that is not the case. Care must be taken to estimate the PLE for the given environment before design and modeling. The PLE is estimated by equating the observed (empirical) values over several time instants to the established theoretical values. Refer to [6] for literature on PLE estimation in large wireless networks.
Program 10.3: logNormalShadowing.m: Function to model log-normal shadowing

function [PL,Pr_dBm] = logNormalShadowing(Pt_dBm,Gt_dBi,Gr_dBi,f,d0,d,L,sigma,n)
%Pt_dBm = Transmitted power in dBm
%Gt_dBi = Gain of the Transmitted antenna in dBi
%Gr_dBi = Gain of the Receiver antenna in dBi
%f = frequency of transmitted signal in Hertz
%d0 = reference distance of receiver from the transmitter in meters
%d = array of distances at which the loss needs to be calculated
%L = Other System Losses, for no Loss case L=1
%sigma = Standard deviation of log Normal distribution (in dB)
%n = path loss exponent
%Pr_dBm = Received power in dBm
%PL = path loss due to log normal shadowing
lambda=3*10^8/f; %Wavelength in meters
K = 20*log10(lambda/(4*pi)) - 10*n*log10(d0) - 10*log10(L);%path-loss factor
X = sigma*randn(1,numel(d)); %normal random variable
PL = Gt_dBi + Gr_dBi + K -10*n*log10(d/d0) - X;%PL(d) including antenna gains
Pr_dBm = Pt_dBm + PL; %Received power in dBm at d meters
end

The function to implement log-normal shadowing is given above and the test code is given next. Figure 10.2 shows the received signal power for the case of no shadowing and the case where shadowing exists. The results are generated for an environment with PLE n = 2, frequency of transmission f = 2.4 GHz, reference distance d_0 = 1 m and standard deviation of the log-normal shadowing σ = 2 dB. The results clearly show that log-normal shadowing introduces randomness in the received signal power, which brings the model closer to reality.
Program 10.4: log_distance_model_test.m: Simulate log-normal shadowing for a range of distances

Pt_dBm=0; %Input transmitted power in dBm
Gt_dBi=1; %Gain of the Transmitted antenna in dBi
Gr_dBi=1; %Gain of the Receiver antenna in dBi
f=2.4e9; %Transmitted signal frequency in Hertz
d0=1; %assume reference distance = 1m
d=100*(1:0.2:100); %Array of distances to simulate
L=1; %Other System Losses, No Loss case L=1
sigma=2;%Standard deviation of log Normal distribution (in dB)
n=2; % path loss exponent
%Log normal shadowing (with shadowing effect)
[PL_shadow,Pr_shadow] = logNormalShadowing(Pt_dBm,Gt_dBi,Gr_dBi,f,d0,d,L,sigma,n);
figure;plot(d,Pr_shadow,'b');hold on;
%Friis transmission (no shadowing effect)
[Pr_Friss,PL_Friss] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n);
plot(d,Pr_Friss,'r');grid on;
xlabel('Distance (m)'); ylabel('P_r (dBm)');
title('Log Normal Shadowing Model');legend('Log normal shadowing','Friss model');
Fig. 10.2: Simulated results for log normal shadowing path loss model
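Equation 10.5 is straightforward to sanity-check numerically. The short Python sketch below (the function name log_distance_pl_db is hypothetical, not from the book) evaluates the log distance model with and without the shadowing term χ:

```python
import math, random

def log_distance_pl_db(d, d0, pl_d0_db, n, sigma_db, rng=random):
    # Equation (10.5): PL(d) = PL(d0) + 10*n*log10(d/d0) + chi
    chi = rng.gauss(0.0, sigma_db) if sigma_db > 0 else 0.0
    return pl_d0_db + 10*n*math.log10(d/d0) + chi

# deterministic part: with PL(d0)=40 dB at d0=1 m and n=2,
# the loss at 100 m is 40 + 10*2*log10(100) = 80 dB
pl_det = log_distance_pl_db(100.0, 1.0, 40.0, 2, 0.0)
print(pl_det)  # 80.0

# shadowed samples scatter around the deterministic value (sigma = 2 dB)
random.seed(1)
samples = [log_distance_pl_db(100.0, 1.0, 40.0, 2, 2.0) for _ in range(2000)]
mean_pl = sum(samples)/len(samples)
```

The sample mean of the shadowed losses stays close to the 80 dB deterministic value while individual samples fluctuate, just as in Figure 10.2.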
10.4 Two ray ground reflection model

The Friis propagation model considers only the line-of-sight (LOS) path between the transmitter and the receiver. The expression for the received power becomes more involved if the effect of reflections from the earth's surface has to be incorporated in the modeling. In addition to the line-of-sight path, a single reflected path is added in the two-ray ground reflection model, as illustrated in Figure 10.3. This model takes into account the phenomenon of reflection from the ground and the antenna heights above the ground. The ground surface is characterized by the reflection coefficient R, which depends on the material properties of the surface and the type of wave polarization. The transmitter and receiver antennas are of heights h_t and h_r respectively and are separated by a distance of d meters.
Fig. 10.3: Two ray ground reflection model
The received signal consists of two components: the LOS ray that travels through free space from the transmitter and a reflected ray from the ground surface. The distances traveled by the LOS ray and the reflected ray are given by

d_{los} = \sqrt{d^2 + (h_t - h_r)^2} \qquad (10.6)

d_{ref} = \sqrt{d^2 + (h_t + h_r)^2} \qquad (10.7)

Depending on the phase difference φ between the LOS ray and the reflected ray, the received signal may suffer constructive or destructive interference. Hence, this model is also called the two-ray interference model.

\phi = \frac{2\pi \left(d_{ref} - d_{los}\right)}{\lambda} \qquad (10.8)

where λ is the wavelength of the radiating wave, which can be calculated from the transmission frequency. Under the large-scale assumption, the power of the received signal can be expressed as

P_r = P_t \left(\frac{\lambda}{4\pi}\right)^2 \left| \frac{\sqrt{G_{los}}}{d_{los}} + R\, \frac{\sqrt{G_{ref}}\, e^{-j\phi}}{d_{ref}} \right|^2 \qquad (10.9)

where \sqrt{G_{los}} = \sqrt{G_a}\sqrt{G_b} is the product of the antenna field patterns along the LOS direction and \sqrt{G_{ref}} = \sqrt{G_c}\sqrt{G_d} is the product of the antenna field patterns along the reflected path.

The following piece of code implements equation 10.9 and plots the received power (P_r) against the separation distance (d). The resulting plot for f = 900 MHz, R = −1, h_t = 50 m, h_r = 2 m, G_los = G_ref = 1 is
shown in Figure 10.4. In this plot, the transmitter power is normalized such that the plot starts at 0 dBm. The plot also contains approximations of the received power over three regions.

Program 10.5: twoRayModel.m: Two ray ground reflection model simulation

f=900e6; %frequency of transmission (Hz)
R = -1; %reflection coefficient
Pt = 1; %Transmitted power in mW
Glos=1;%product of tx,rx antenna patterns in LOS direction
Gref=1;%product of tx,rx antenna patterns in reflection direction
ht = 50;%height of tx antenna (m)
hr = 2;%height of rx antenna (m)
d=1:0.1:10^5;%separation distance between the tx-rx antennas(m)
L = 1; %no system losses
%Two ray ground reflection model
d_los= sqrt((ht-hr)^2+d.^2);%distance along LOS path
d_ref= sqrt((ht+hr)^2+d.^2);%distance along reflected path
lambda = 3*10^8/f; %wavelength of the propagating wave
phi = 2*pi*(d_ref-d_los)/lambda;%phase difference between the paths
s = lambda/(4*pi)*(sqrt(Glos)./d_los + R*sqrt(Gref)./d_ref.*exp(1i*phi));
Pr = Pt*abs(s).^2;%received power
Pr_norm = Pr/Pr(1);%normalized received power to start from 0 dBm
semilogx(d,10*log10(Pr)); hold on; ylim([-160 -55]);
title('Two ray ground reflection model');
xlabel('log_{10}(d)'); ylabel('Normalized Received power (in dB)');
%Approximate models in three different regions
dc=4*ht*hr/lambda; %critical distance
d1 = 1:0.1:ht; %region 1 -- d < ht
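As an independent check of equation 10.9, the Python sketch below (the function two_ray_rx_power is illustrative only) evaluates the two-ray model beyond the critical distance d_c = 4 h_t h_r / λ, where the received power is known to fall off as d⁻⁴, i.e., roughly 12 dB per doubling of distance:

```python
import cmath, math

def two_ray_rx_power(pt, d, f, ht, hr, R=-1.0, g_los=1.0, g_ref=1.0):
    # Equation (10.9): LOS ray plus a ground-reflected ray
    lam = 3e8 / f
    d_los = math.sqrt(d*d + (ht - hr)**2)
    d_ref = math.sqrt(d*d + (ht + hr)**2)
    phi = 2*math.pi*(d_ref - d_los)/lam          # phase difference (10.8)
    s = lam/(4*math.pi) * (math.sqrt(g_los)/d_los
                           + R*math.sqrt(g_ref)*cmath.exp(-1j*phi)/d_ref)
    return pt * abs(s)**2

f, ht, hr = 900e6, 50.0, 2.0
dc = 4*ht*hr/(3e8/f)                   # critical distance, about 1200 m
p1 = two_ray_rx_power(1.0, 10*dc, f, ht, hr)
p2 = two_ray_rx_power(1.0, 20*dc, f, ht, hr)
print(round(10*math.log10(p1/p2), 1))  # close to 12 dB: the d^-4 region
```

Well beyond d_c the two rays nearly cancel, leaving P_r ≈ P_t (h_t h_r / d²)², so each doubling of d costs about 10 log10(16) ≈ 12 dB.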
If a sinusoidal signal (depicted by a single spectral line in the frequency domain) is transmitted through a fading channel, the power spectrum at the receiver will show the spectral spreading according to the Jakes PSD plot shown in Figure 11.8. The range of frequencies where the power spectrum is non-zero defines the amount of Doppler spread. The corresponding auto-correlation function for the Jakes PSD is given by

r_{\mu_i\mu_i}(\tau) = r_{\mu_q\mu_q}(\tau) = \sigma_0^2\, J_0(2\pi f_{max}\tau) \qquad (11.38)

where J_0(\cdot) denotes the 0-th order Bessel function of the first kind. The Jakes PSD and its auto-correlation function are plotted in Figure 11.8. Program 11.8 contains the Matlab script used to obtain those plots.

Another important Doppler PSD shape is the frequency-shifted Gaussian power spectral density [8]

S_{\mu_i\mu_i}(f) = S_{\mu_q\mu_q}(f) = \frac{\sigma_0^2}{f_c}\sqrt{\frac{\ln 2}{\pi}}\; e^{-\ln 2 \left(\frac{f - f_s}{f_c}\right)^2} \qquad (11.39)

where f_s and f_c are the frequency shift and the 3-dB cutoff frequency, respectively. The 3-dB cutoff frequency is related to the maximum Doppler frequency as f_c = \sqrt{\ln 2}\, f_{max}. Frequency-shifted Gaussian PSDs are utilized for frequency-selective channel modeling, typically in GSM communication systems. The corresponding auto-correlation function for the Gaussian PSD is given by

r_{\mu_i\mu_i}(\tau) = r_{\mu_q\mu_q}(\tau) = \sigma_0^2\, e^{-\left(\frac{\pi f_c \tau}{\sqrt{\ln 2}}\right)^2} e^{\,j2\pi f_s \tau} \qquad (11.40)
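The relation f_c = √(ln 2) f_max can be verified directly: at an offset of f_c from the shift frequency f_s, the Gaussian PSD of equation 11.39 drops to exactly half its peak value. A short Python check (gaussian_psd is an illustrative helper, not a book function):

```python
import math

def gaussian_psd(f, f_max, f_s=0.0, sigma0_sq=1.0):
    # Frequency-shifted Gaussian Doppler PSD, equation (11.39)
    f_c = math.sqrt(math.log(2)) * f_max      # 3-dB cutoff frequency
    return (sigma0_sq/f_c) * math.sqrt(math.log(2)/math.pi) \
           * math.exp(-math.log(2)*((f - f_s)/f_c)**2)

f_max = 91.0
f_c = math.sqrt(math.log(2)) * f_max
ratio = gaussian_psd(f_c, f_max) / gaussian_psd(0.0, f_max)
print(round(ratio, 3))  # 0.5 -> the PSD is 3 dB down at f = f_c
```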
11 Small-scale Models for Multipath Effects
Fig. 11.8: Jakes PSD and its autocorrelation function for fmax = 91 Hz, σ0² = 1

Aeronautical channels are modeled with the frequency-nonshifted version of the Gaussian PSD, obtained by setting f_s = 0 in equations 11.39 and 11.40. The Gaussian PSD and its auto-correlation function are plotted in Figure 11.9. Program 11.8 contains the Matlab script used to obtain those plots.

Program 11.8: doppler_psd_acf.m: Computing the Jakes/Gaussian Doppler PSD and the corresponding auto-correlation function

function [S,f,r,tau] = doppler_psd_acf(TYPE,T_s,f_max,sigma_0_2,N)
%Jakes or Gaussian PSD and their auto-correlation functions (ACF)
%TYPE='JAKES' or 'GAUSSIAN'
%T_s = sampling interval in seconds
%f_max = maximum doppler frequency in Hz
%sigma_0_2 = variance of gaussian processes (sigma_0^2)
%N = number of samples in ACF plot
%EXAMPLE: doppler_psd_acf('JAKES',0.001,91,1,1024)
tau = T_s*(-N/2:N/2); %time intervals for ACF computation
if strcmpi(TYPE,'Jakes'),
    r=sigma_0_2*besselj(0,2*pi*f_max*tau);%JAKES ACF
elseif strcmpi(TYPE,'Gaussian'),
    f_c = sqrt(log(2))*f_max; %3-dB cut-off frequency
    r=sigma_0_2*exp(-(pi*f_c/sqrt(log(2)).*tau).^2);%Gaussian ACF
else
    error('Invalid PSD TYPE specified')
end
%Power spectral density using FFT of ACF
[S,f] = freqDomainView(r,1/T_s,'double');%chapter 1, section 1.3.4
subplot(1,2,1); plot(f,abs(S)); xlim([-200,200]);
title([TYPE,' PSD']);xlabel('f(Hz)');ylabel('S_{\mu_i\mu_i}(f)');
subplot(1,2,2); plot(tau,r);axis tight; xlim([-0.1,0.1]);
title('Auto-correlation fn.');xlabel('\tau(s)');ylabel('r_{\mu_i\mu_i}(\tau)');
end
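The total area under the Jakes PSD must equal the process variance σ0². The Python sketch below uses the standard Jakes spectrum S(f) = σ0²/(π f_max √(1 − (f/f_max)²)) for |f| < f_max (assumed here, since the PSD formula itself is not reproduced in this excerpt; it is the Fourier pair of the Bessel ACF in equation 11.38) and checks this numerically with the substitution f = f_max sin θ, which removes the band-edge singularity:

```python
import math

def jakes_psd(f, f_max, sigma0_sq=1.0):
    # Standard Jakes Doppler PSD; zero outside |f| < f_max
    if abs(f) >= f_max:
        return 0.0
    return sigma0_sq / (math.pi * f_max * math.sqrt(1.0 - (f/f_max)**2))

# integrate S(f) over (-f_max, f_max) via f = f_max*sin(theta):
# S(f) df becomes (sigma0^2/pi) d(theta), so the total is exactly sigma0^2
f_max, N = 91.0, 20000
thetas = (-math.pi/2 + (k + 0.5)*math.pi/N for k in range(N))
total = sum(jakes_psd(f_max*math.sin(t), f_max) * f_max*math.cos(t)
            for t in thetas) * math.pi/N
print(round(total, 6))  # 1.0 = sigma0^2, the power of mu_i(t)
```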
Fig. 11.9: Frequency-nonshifted Gaussian PSD and its autocorrelation function for fmax = 91 Hz, σ0² = 1
11.4 Modeling frequency flat channel

In the previous section of this chapter, Rayleigh and Rice processes were described using two independent real-valued colored Gaussian random processes. With reference to mobile radio channels, the Rayleigh and Rice processes are appropriate models to describe the amplitude fluctuations of the received signal in the equivalent complex baseband model. Accordingly, the mobile radio channel models can be termed Rayleigh or Rice channels. Rayleigh and Rice channel models are classified as frequency non-selective fading channel models. On the other hand, frequency selective channel models involve the use of a finite impulse response (FIR) filter with L complex coefficients that vary in time. Therefore, such a filter would require 2L real-valued colored Gaussian random processes. From these examples, it is already evident that efficient generation of colored Gaussian random processes is the key to modeling both frequency non-selective and frequency selective fading channels.

In the literature, one often finds two fundamental methods for modeling a channel: stochastic and deterministic models. Stochastic models possess some inherent randomness, such that the same set of parameter values and initial conditions will lead to an ensemble of different outputs. Since stochastic models utilize probability density functions in one form or another, they need to be well grounded in statistical theory. The parameters required for a stochastic model are typically estimated by applying statistical inference to the observed data. While stochastic models have the ability to capture and describe patterns as observed in nature, they usually do not integrate physical concepts. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. Obviously, the natural world is full of stochasticity.
But stochastic models are considerably more complicated, and deterministic models are generally easier to analyze.

In stochastic modeling, the two fundamental approaches to model a colored Gaussian random process are the filter method and the Rice method [8]. In the filter method, shown in Figure 11.10(a), a linear time-invariant filter with transfer function H_i(f) is driven with a zero-mean unit-variance white Gaussian noise sequence v_i(t). The transfer function H_i(f) can be fitted to the desired power spectral density of the colored output \mu_i(t) as H_i(f) = \sqrt{S_{\mu_i\mu_i}(f)}. An example of this approach is given in chapter 2 section 2.6.1. The Rice method, referred to in Figure 11.10(b), is based on a weighted summation of an infinite number of harmonic waves with random phases and equidistant frequencies. In this method, the colored Gaussian random process \mu_i(t) is described as

\mu_i(t) = \lim_{N_i \to \infty} \sum_{n=1}^{N_i} c_{i,n} \cos\left(2\pi f_{i,n}t + \theta_{i,n}\right) \qquad (11.41)
where the weights c_{i,n} are given by

c_{i,n} = 2\sqrt{\Delta f_i\, S_{\mu_i\mu_i}(f_{i,n})} \qquad (11.42)

the random phases \theta_{i,n} are random variables uniformly distributed in the interval (0, 2π] and the frequency spacing \Delta f_i is chosen in such a way that the n equidistant frequencies cover the whole frequency range.
Fig. 11.10: Stochastic models for colored Gaussian random process (a) filter method (b) Rice method [8]

Though the filter method and the Rice method yield identical stochastic processes, neither is exactly realizable. The filter method is not exactly realizable because of the assumption that the filter should be ideal and because the input white Gaussian noise cannot be realized exactly. The Rice method is also impossible to implement, since an infinite number of harmonic terms is not realizable in hardware. Therefore, the requirement for an ideal model needs to be relaxed and traded for realizability. The Rice method or the filter method can be approximated and made realizable as long as the statistics of the output do not deviate from the desired ideal Gaussian random process. Such a realizable Rice method is given here for simulation purposes.
Fig. 11.11: Deterministic Rice model for colored Gaussian random process μ_i(t) (θ_{i,n} are constant phases) [8]

The stochastic Rice method can be equivalently converted to a deterministic realizable model by using only a finite number of harmonic functions N_i and by taking the phase terms θ_{i,n} from a random generator whose samples are uniformly distributed in the interval (0, 2π].

\mu_i(t) = \sum_{n=1}^{N_i} c_{i,n} \cos\left(2\pi f_{i,n}t + \theta_{i,n}\right) \qquad (11.43)
Since the phase terms are chosen as outcomes of a uniformly distributed random variable, they are no longer random variables but constant quantities. The equivalent deterministic model to generate the colored Gaussian random sequence μ_i(t), using a finite number of harmonic functions and constant phase terms, is shown in Figure 11.11.

Figure 11.12 shows a deterministic model for generating a generic Rice process as required by equation 11.29. This simulation model is essentially an extension of Figure 11.11 and it incorporates equations 11.25, 11.26 and 11.29. By appropriately choosing the model parameters c_{i,n}, f_{i,n} and θ_{i,n}, the time-variant fading behaviour caused by the Doppler effect can be simulated. From now on, the model parameters are named as follows: c_{i,n} are called Doppler coefficients, f_{i,n} are termed discrete Doppler frequencies, and θ_{i,n} are the Doppler phases. For the purpose of computer simulations, the model can be translated to its discrete-time equivalent by setting t = kT_s, where T_s is the sampling interval and k is an integer. In computer simulations, the model parameters c_{i,n}, f_{i,n}, and θ_{i,n} are computed for n = 1, 2, · · · , N_i and kept constant for the whole duration of the simulation.

Fig. 11.12: Deterministic simulation model using Rice method

Of the several methods that are available to compute the model parameters, the method of exact Doppler spread (MEDS), outlined in [8], is chosen here for simulation. This method can be used to simulate a frequency non-selective Rice or Rayleigh fading channel. It is well suited for generating the underlying colored Gaussian random samples with the Jakes power spectral density and the Gaussian power spectral density (see section 11.3.2). Using the MEDS method, the Rice process with Jakes PSD can be generated by choosing samples from a uniform random generator in the interval (0, 2π] for θ_{i,n} and by setting the following values for the other model parameters:

c_{i,n} = \sigma_0 \sqrt{\frac{2}{N_i}}, \qquad f_{i,n} = f_{max} \sin\left[\frac{\pi}{2N_i}\left(n - \frac{1}{2}\right)\right] \qquad (11.44)

for all n = 1, 2, · · · , N_i and i = 1, 2. Similarly, the Rice process with Gaussian PSD can be generated by choosing samples from a uniform random generator in the interval (0, 2π] for θ_{i,n} and by setting the following values for the other model parameters.
c_{i,n} = \sigma_0 \sqrt{\frac{2}{N_i}}, \qquad f_{i,n} = \frac{f_c}{\sqrt{\ln 2}}\, \mathrm{erf}^{-1}\left(\frac{2n-1}{2N_i}\right) \qquad (11.45)
for all n = 1, 2, · · · , N_i and i = 1, 2. The parameter f_c denotes the 3-dB cutoff frequency of the Gaussian PSD (see equation 11.39). The aforementioned equations pertaining to the MEDS method for generating the parameters of the fading model can be coded in Matlab as follows.

Program 11.9: param_MEDS.m: Generate parameters for deterministic model using MEDS method

function [c_i_n,f_i_n,theta_i_n] = param_MEDS(DOPPLER_PSD,N_i,f_max,f_c,sigma_0)
%Generate parameters for Sum of Sinusoids (SOS) simulator using the Method of
%Exact Doppler Spread (MEDS)
%DOPPLER_PSD: Defines type of PSD used for Doppler spreading
%   'JAKES' - Jakes Power Spectral Density
%   'GAUSS' - Gaussian Power Spectral Density
%N_i = Number of harmonic functions
%f_max = Maximum Doppler frequency
%f_c = 3-dB cutoff frequency (used for the Gaussian PSD)
%sigma_0 = average power of the real deterministic Gaussian process mu_i(t)
%Generate Doppler coefficients c_i and Doppler frequencies f_i using MEDS
n=(1:N_i).';
if strcmpi(DOPPLER_PSD,'JAKES'),
    c_i_n = sigma_0*sqrt(2/N_i)*ones(size(n));
    f_i_n = f_max*sin(pi/(2*N_i)*(n-1/2));
elseif strcmpi(DOPPLER_PSD,'GAUSS'),
    c_i_n = sigma_0*sqrt(2/N_i)*ones(size(n));
    f_i_n=f_c/sqrt(log(2))*erfinv((2*n-1)/(2*N_i));
else
    error('Invalid DOPPLER_PSD specified');
end
theta_i_n=2*pi*rand(N_i,1); %randomized Doppler phases
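A useful property of the MEDS parameters in equation 11.44 is that the time-averaged power of each branch equals σ0², since Σ c²_{i,n}/2 = N_i · (σ0² · 2/N_i)/2 = σ0². A small Python check (meds_jakes is an illustrative re-implementation of the Jakes branch of the parameter generator, not the book's code):

```python
import math

def meds_jakes(N_i, f_max, sigma_0=1.0):
    # MEDS parameters for the Jakes PSD, equation (11.44)
    c = [sigma_0*math.sqrt(2.0/N_i) for _ in range(N_i)]
    f = [f_max*math.sin(math.pi/(2*N_i)*(n - 0.5)) for n in range(1, N_i + 1)]
    return c, f

c, f = meds_jakes(20, 91.0)
print(round(sum(cn*cn for cn in c)/2, 6))  # 1.0 = sigma_0^2 per branch
# discrete Doppler frequencies stay inside (0, f_max) and increase with n
assert all(0 < fn < 91.0 for fn in f) and f == sorted(f)
```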
rho,f_rho,theta_rho,PLOT) %T_sim = Simulation time, T_s = Sampling interval %DOPPLER_PSD: Defines type of PSD used for Doppler spreading % 'JAKES' - Jakes Power Spectral Density % 'GAUSS' - Gaussian Power Spectral Density %N_i = Number of harmonic functions (Ni=N1=N2 for simplicity) %f_max = Maximum Doppler frequency, f_c = carrier frequency %sigma_0 = average power of the real deterministic Gaussian process mu_i(t) %----LOS component parameters--%rho = amplitude of LOS component m(t), %f_rho = Doppler frequency of LOS component m(t) %theta_rho = phase of LOS component m(t) %PLOT = 1 plots different graphs %Example: Rice_method(2,0.0001,'JAKES',200,100,1000,1,1,5,0.5,1)
11.4 Modeling frequency flat channel
285
%Generate c_i, f_i and theta_i_n using equations for MEDS method %independently for both inphase and quadrature arm [c_1,f_1,theta_1] = param_MEDS(DOPPLER_PSD,N_i,f_max,f_c,sigma_0); [c_2,f_2,theta_2] = param_MEDS(DOPPLER_PSD,N_i+1,f_max,f_c,sigma_0); no_samples = ceil(T_sim/T_s); %#samples generated for simulation t=(0:no_samples-1)*T_s; %discrete-time instants mu_i = 0; mu_q = 0; for n=1:N_i,%computation for the two branches mu_i = mu_i + c_1(n)*cos(2*pi*f_1(n)*t+theta_1(n)); mu_q = mu_q + c_2(n)*cos(2*pi*f_2(n)*t+theta_2(n)); end %Generate LOS components and add it to the gaussian processes m_i = rho*cos(2*pi*f_rho*t+ theta_rho); m_q = rho*sin(2*pi*f_rho*t+ theta_rho); xi_t = abs(mu_i+m_i+1i*(mu_q+m_q)); if PLOT, subplot(1,2,1); plot(t,20*log10(xi_t)); xlabel('time (s)'), ylabel('amplitude');title('Fading samples'); %use the function given in section 1.3.4 to plot the spectrum [S_mu_f,F] = freqDomainView(mu_i,1/T_s,'double'); subplot(1,2,2); plot(F,abs(S_mu_f)); axis tight; xlim([-f_max-50,f_max+50]);title('PSD of \mu_i(t)'); xlabel('frequency'); ylabel('S_{\mu_i\mu_i(f)}'); end
Fig. 11.13: Frequency non-selective fading samples ξ (t) and the PSD of underlying process µi (t) generated using Rice method for Ni = 200, fmax = 100 Hz, fc = 1000 Hz, σ0 = 1, ρ = 1, fρ = 5 Hz, θρ = 0.5
11 Small-scale Models for Multipath Effects
The discrete-time simulation model discussed so far is good for waveform level simulations. For details on the complex baseband equivalent simulation model, readers are referred to chapter 6, section 6.2.
11.5 Modeling frequency selective channel

A tapped-delay-line (TDL) filter with N taps can be used to simulate a multipath frequency selective fading channel. Frequency selective channels are characterized by the dispersive, time-varying nature of the channel. Simulating a frequency selective channel requires N > 1. In contrast, if N = 1, the model reduces to a frequency-flat fading channel in which all the multipath signals arrive at the receiver at the same time. Let an(t) be the path attenuation associated with the received power Pn(t) and propagation delay τn(t) of the nth path. In continuous time, the complex path attenuation is given by

ãn(t) = an(t) exp[−j2π fc τn(t)]   (11.46)

The complex channel response is given by

h(t, τ) = ∑_{n=0}^{N−1} ãn(t) δ(τ − τn(t))   (11.47)
As a special case, in the absence of any movement or other changes in the transmission channel, the channel can remain fairly time invariant (fixed with respect to the instantaneous time t) even though multipath is present. The complex channel response for the time-invariant case is given by

h(τ) = ∑_{n=0}^{N−1} ãn δ(τ − τn)   (11.48)
A sample power delay profile plot for a fixed, discrete, three-ray model with its corresponding implementation using a tapped-delay-line filter is shown in Figure 11.14. Here, both the path attenuations and the propagation delays are fixed.
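Equation 11.48 maps directly to a discrete-time FIR filtering operation. As a minimal sketch (the 3-ray gains and symbol-spaced delays below are assumed illustrative values, not taken from any standard), a fixed TDL channel can be applied to an input sequence with Matlab's filter function:

```matlab
%Sketch: fixed 3-ray TDL channel as in equation 11.48 (illustrative values)
a_n = [0.8, 0.5, 0.3]; %fixed path attenuations a_0, a_1, a_2 (assumed)
x = [1 zeros(1,9)];    %impulse input to expose the channel response
y = filter(a_n, 1, x); %convolve input with symbol-spaced TDL taps
disp(y(1:3));          %first three outputs reproduce the tap gains
```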
Fig. 11.14: A 3-ray multipath time invariant channel and its equivalent TDL model (path attenuations an and propagation delays τn are fixed)

The next level of modeling involves the introduction of randomness in the above model, thereby rendering the channel response time variant. If the path attenuations are drawn from a complex Gaussian random variable, then at any given time t, the absolute value of the impulse response h(t, τ) is
• Rayleigh distributed, if the mean of the distribution E[h(t, τ)] = 0
• Ricean distributed, if the mean of the distribution E[h(t, τ)] ≠ 0

Respectively, these two scenarios model the absence or presence of a line-of-sight (LOS) path between the transmitter and the receiver (see Figure 11.15). The first propagation delay τ0 has no effect on the model behavior and hence the delay element τ0 can be removed from the TDL model.
Fig. 11.15: A 3-ray multipath time variant channel and its equivalent TDL model (an - path attenuations, Rn - complex random variables representing random variations in path gains, τn - propagation delays)

Similarly, the propagation delays can also be randomized, resulting in a more realistic but extremely complex model to implement.
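The Rayleigh/Ricean distinction above can be checked numerically. A minimal sketch (the LOS amplitude and the number of realizations are assumed values): draw complex Gaussian tap gains with zero and non-zero mean and compare the envelope statistics.

```matlab
%Sketch: zero-mean complex Gaussian tap -> Rayleigh envelope;
%non-zero mean (LOS component) -> Ricean envelope (illustrative values)
L = 100000;                                  %number of channel realizations
h_nlos = (randn(L,1)+1i*randn(L,1))/sqrt(2); %E[h]=0, unit average power
h_los  = 1 + h_nlos;                         %E[h]=1 (assumed LOS amplitude)
disp(mean(abs(h_nlos)));                     %approx sqrt(pi)/2 = 0.8862 (Rayleigh mean)
disp(mean(abs(h_los)));                      %larger mean envelope (Ricean)
```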
11.5.1 Method of equal distances (MED) to model specified power delay profiles

Specifications of continuous multipath power delay profile models can be found in many standards. For example, the COST-207 model in the GSM specification incorporates continuous domain equations for modeling the multipath PDP [9]. Such continuous-time power-delay-profile models can be simulated using a discrete-time tapped delay line (TDL) filter with N taps with variable tap gains. Power-delay-profile specifications with arbitrary time delays warrant non-uniformly spaced tapped-delay-line filters and hence they are not suitable for practical simulations. For ease of implementation, a given PDP model with arbitrary time delays can be converted to a tractable uniformly spaced statistical model by a combination of interpolation/approximation/uniform-sampling of the given power-delay-profile. Given the order N, the problem boils down to determining the tap spacing Δτ and the path gains an, in such a way that the simulated channel closely follows the specified multipath PDP characteristics. A survey of methods to solve this problem can be found in [10]. The method of equal distances (MED) is one such method, where the continuous domain PDP is sampled at regular intervals (spaced Δτ apart) to obtain the discrete-time PDP model. In general, the continuous-time domain PDP equation, denoted as p(τ), is defined for an interval of propagation delays, say [0, τmax], outside which the PDP is zero. To simulate this model using TDL filters, the given continuous domain equation needs to be sampled at N − 1 regular intervals (see Figure 11.16(a)), where the tap spacing Δτ is defined as

Δτ = τmax / (N − 1)   (11.49)
The N discrete multipath delays τn are obtained as integer multiples of the constant quantity Δτ:

τn = n·Δτ,   n = 0, 1, · · · , N − 1   (11.50)

Having determined the tap spacing Δτ using equation 11.49, the next task is to find the path gains an at the corresponding sampled delays τn. For this, the given interval I = [0, τmax] is partitioned into N sub-intervals In according to the following equation

In = [0, Δτ/2)                     for n = 0
In = [τn − Δτ/2, τn + Δτ/2)        for n = 1, 2, · · · , N − 2        (11.51)
In = [τn − Δτ/2, τmax]             for n = N − 1

Then, the path gains an are computed as the square root of the average power delay profile in each sub-interval:

an = √( ∫_{τ∈In} p(τ) dτ )   (11.52)

Simulation example

The COST-207 model in the GSM standard defines the following continuous-time PDP for typical urban (TU) areas:

p(τ) = e^{−τ} / (1 − e^{−7}),   0 ≤ τ ≤ 7
p(τ) = 0,                       otherwise        (11.53)

The following code implements the MED method for representing the given continuous PDP equation using a 12-tap discrete TDL filter. Figure 11.16(a) shows the continuous PDP and the simulated discrete multipath power delay profile, and their corresponding frequency correlation functions are plotted in Figure 11.16(b). The channel impulse response of the simulated TDL filter is plotted in Figure 11.17. Table 11.4 shows the parameters of the 12-tap TDL filter, obtained through the simulation. The sampling frequency of the discrete filtering system is fs = 1/Δτ = 1/(0.63636 × 10−6) = 1.5714 MHz.

Table 11.4: Design parameters for the 12-tap TDL filter to approximate the COST-207 (TU) model

Tap number n    Tap weights a[n]    Tap delays τ[n] (µs)
0               0.52228             0
1               0.58549             0.63636
2               0.42593             1.2727
3               0.30985             1.9091
4               0.22541             2.5455
5               0.16398             3.1818
6               0.11929             3.8182
7               0.086778            4.4545
8               0.063129            5.0909
9               0.045924            5.7273
10              0.033408            6.3636
11              0.018491            7
Program 11.11: pdp_model.m: TDL implementation of specified power delay profile

close all; clearvars; clc;
N=12; %number of desired taps in the TDL model
%------COST-207 model - continuous PDP model-----------------
Ts = 0.1;%distance between each point in the continuous PDP (0.1us)
tau=0:Ts:7;%range of propagation delays as given in COST-207 model
pdp_continuous=1/(1-exp(-7))*exp(-tau);%COST-207(TU) continuous PDP
figure; subplot(1,2,1);
plot(tau,10*log10(pdp_continuous)); hold on;%continuous PDP
%-----COST-207 model - discrete (sampled) PDP for TDL implementation
%sample the PDP at regular intervals and use it for TDL filter model
tauMax=max(tau); dTau=tauMax/(N-1);%step size for sampling the PDP (us)
tau=0:dTau:tauMax;%re-sampled delays
pdp_discrete = 1/(1-exp(-7))*exp(-tau); %sampled PDP (discrete)
%------Compute path gains for N-tap TDL filter--------------
p_n=zeros(1,N);%multipath PDP of TDL filter, initialized to all zeros
numFun=@(x) exp(-x)/(1-exp(-7));%given PDP equation for COST-207(TU)
p_n(1)=integral(numFun,0,dTau/2);%integration in 1st interval
for i=1:N-2,%integration for next N-2 intervals
    tau_i = i*dTau;
    p_n(i+1)=integral(numFun,tau_i-dTau/2,tau_i+dTau/2);
end
p_n(N)=integral(numFun,tauMax-dTau/2,tauMax);%last interval integral
a_n=sqrt(p_n); %computed path gains of TDL filter
disp('Tap delays tau[n] (microsecs):'); disp(num2str(tau));
disp('Tap weights a[n] : '); disp(num2str(a_n));
disp('Sampling interval (delta tau) microsecs'); disp(num2str(dTau));
disp(['Sampling frequency required (MHz): ', num2str(1/dTau)]);
plotObj=stem(tau,10*log10(p_n),'r--');%plot discrete PDP
set(plotObj,'basevalue',-40); %to show the stem plot bottom up
title('Multipath PDP'); xlabel('Delay \tau(\mu s)'); ylabel('PDP(dB)');
legend('Ref. model','TDL model');
%--Frequency Correlation Function (FCF) - Fourier Transform of PDP--
nfft = 2^16;%FFT size
FCF_cont=fft(pdp_continuous,nfft)/length(pdp_continuous);%cont PDP
FCF_cont=abs(FCF_cont)/max(abs(FCF_cont));%normalize
FCF_tdl = fft(p_n,nfft)/length(p_n);%FT of discrete PDP
FCF_tdl = abs(FCF_tdl)/max(abs(FCF_tdl)); %normalize
f=1/Ts/2*linspace(0,1,nfft/2+1);%compute frequency axis based on Ts
f2=1/dTau/2*linspace(0,1,nfft/2+1);%freq. axis based on dTau
%single-sided FCF of continuous PDP
subplot(1,2,2); plot(f,FCF_cont(1:nfft/2+1),'b'); hold on;
%single-sided FCF of N-tap sampled PDP
plot(f2,FCF_tdl(1:nfft/2+1),'r--');
title('Frequency Correlation Function (FCF)');
xlabel('Frequency \nu (Hz)'); ylabel('|FCF|');
legend('Ref. model','TDL model');
%---------Response of the modeled filter--------
%It is assumed that the incoming signal's sampling rate is dTau
x=[zeros(1,2) 1 zeros(1,15)];%impulse input to TDL
y=filter(a_n,1,x);%filter the impulse through the designed filter
figure; stem(y,'r');%observe multipath effects of the given model
title(['Channel response of ', num2str(N), '-tap TDL model']);
xlabel('samples'); ylabel('received signal amplitude');
Fig. 11.16: (a) Multipath PDP for a COST-207 TU channel and its sampled version for TDL implementation, (b) absolute value of frequency correlation function (FCF) for reference and TDL models.
11.5.2 Simulating a frequency selective channel using TDL model

The TDL model for a multipath channel was described in section 11.5.1, where the need for uniform tap spacing (a symbol-spaced TDL model) was also emphasized. The method of equal distances, described in section 11.5.1, can be employed to convert continuous-time PDP specifications to uniformly spaced TDL models. This section is devoted to the simulation of a frequency selective channel, given uniformly spaced PDP values. A frequency selective fading channel can be modeled using a TDL filter with the number of taps N > 1. A 3-ray multipath time variant channel and its corresponding TDL model were given in Figure 11.15. As an example, a 5-ray multipath channel and its equivalent N = 5 tap TDL filter model are considered here. If
Fig. 11.17: Channel impulse response of the simulated TDL filter for COST-207 TU model
we assume a fixed PDP, it implies that the tap gains a[n] of the TDL model do not vary. We will also assume a block fading process, which implies that the fading process is approximately constant over a given transmission interval. For block fading, the tap coefficients [R0, R1, R2, R3, R4] are random variables, and for each channel realization a new set of random variables is generated. Let's take the following specifications for the multipath model:

• Number of taps: N = 5
• Propagation delays (ns): τ[n] = [0, 50, 100, 150, 200]
• List of received powers (dBm): Pi = [0, −5, −10, −15, −20]
• Number of channel realizations: L = 5000
• Sampling frequency of the discrete system: fs = 1/Δτ = 1/(50 × 10−9) = 20 MHz
The discrete PDP of the channel and its corresponding TDL model with uniform tap spacing are shown in Figure 11.18. The random variables [R0, R1, R2, R3, R4] are commonly drawn from distributions like Rayleigh, Ricean, Nakagami-m, etc. For the example here, we will draw them from the Rayleigh distribution. That makes it a frequency selective Rayleigh block fading channel.

Fig. 11.18: TDL model for a frequency selective Rayleigh block fading channel with specified PDP
The following code generates the taps of the TDL model for 5000 channel realizations as per the channel specifications given above. The code uses two normalization factors: 1/√2 to normalize the Rayleigh fading variables and 1/√(∑PDP) to normalize the output power of the TDL filter to unity. The Rayleigh fading variables are generated using two normal random processes (Z1, Z2) in quadrature as R = Z1 + jZ2. The mean and variance of both normal random variables are 0 and 1 respectively. When the two normal random variables are added in quadrature, the power (variance) of the resulting variable R is 2, which is equivalent to an amplitude of √2. Thus, the Rayleigh random variables are scaled down by 1/√2 to have unit amplitude, which is equivalent to unit output power. Applying the same analogy to the second normalization factor, the taps are scaled down by the factor 1/√(∑PDP). This makes the overall channel path gain unity. Therefore, for an input signal with power 1, the average output power of the channel is 1. The following simulation results show that the average power of the taps and the overall path gain of the channel are as expected.

• Average tap powers: [−2.6149, −7.6214, −12.6481, −17.6512, −22.8034]
• Overall path gain of the channel: |h| = 1.01

Program 11.12: freq_selective_TDL_model.m: Simulate frequency selective Rayleigh block fading channel

%Freq. sel. Rayleigh block fading TDL model (uniformly spaced taps)
PDP_dBm = [0 -5 -10 -15 -20]; %Power delay profile values in dBm
L=5000; %number of channel realizations
N = length(PDP_dBm); %number of channel taps
PDP = 10.^(PDP_dBm/10); %PDP values in linear scale
a = sqrt(PDP); %path gains (tap coefficients a[n])
%Rayleigh random variables R0,R1,R2,...,R{N-1} with unit avg power
%for each channel realization (block type fading)
R=1/sqrt(2)*(randn(L,N) + 1i*randn(L,N));%Rayleigh random variables
taps= repmat(a,L,1).*R; %combined tap weights = a[n]*R[n]
%Normalize taps for output with unit average power
normTaps = 1/sqrt(sum(PDP))*taps;
disp('Average normalized power of each tap');
average_power = 20*log10(mean(abs(normTaps),1))
disp('Overall path gain of the channel');
h_abs = sum(mean(abs(normTaps).^2,1))
References

1. Jean-Paul M. G. Linnartz et al., Wireless Communication Reference Website, 1996-2010. http://www.wirelesscommunication.nl/reference/chaptr03/fading/scatter.htm
2. P. A. Bello, Characterization of randomly time-variant linear channels, IEEE Trans. Comm. Syst., vol. 11, no. 4, pp. 360-393, Dec. 1963.
3. P. G. Andermo and G. Larsson, Code division testbed, CODIT, 2nd International Conference on Universal Personal Communications, Gateway to the 21st Century, vol. 1, pp. 397-401, Oct. 1993.
4. Huseyin Arslan, Cognitive Radio, Software Defined Radio, and Adaptive Wireless Systems, p. 238, Springer, Dordrecht, Netherlands, 2007.
5. Dave Pearce, Getting Started with Communication Engineering, lecture notes, part 8, https://www-users.york.ac.uk/~dajp1/Introductions/
6. M. G. Lafin, Draft final report on RF channel characterization, Records of Joint Technical Committee on Wireless Access: Air Interface Standards, Dec. 06, 1993.
7. C. Tepedelenlioglu, A. Abdi, and G. B. Giannakis, The Ricean K factor: Estimation and performance analysis, IEEE Trans. Wireless Commun., vol. 2, no. 4, pp. 799-810, Jul. 2003.
8. M. Patzold, U. Killat, F. Laue and Yingchun Li, On the statistical properties of deterministic simulation models for mobile fading channels, IEEE Trans. Veh. Technol., vol. 47, no. 1, pp. 254-269, Feb. 1998.
9. COST 207, Digital land mobile radio communications, Office for Official Publications of the European Communities, Final Report, Luxembourg, 1989.
10. M. Paetzold, A. Szczepanski, N. Youssef, Methods for modeling of specified and measured multipath power-delay profiles, IEEE Trans. Veh. Technol., vol. 51, no. 5, pp. 978-988, Sep. 2002.
Chapter 12
Multiple Antenna Systems - Spatial Diversity
Abstract Regarded as a breakthrough in wireless communication system design, multiple antenna systems fuel the ever increasing data rate requirements of advanced technologies like UMTS, LTE, WLAN, etc. Multiple antenna systems come in different flavors that are aimed at improving the reliability and the spectral efficiency of the overall system. The goal of this chapter is to cover various aspects of multiple antenna systems, specifically spatial diversity.
12.1 Introduction

12.1.1 Diversity techniques

Diversity techniques are employed to make a communication system robust and reliable even over varying channel conditions. Diversity techniques exploit the channel variations rather than mitigating them. They combat fading and interference by presenting the receiver with multiple uncorrelated copies of the same information bearing signal. Essentially, diversity techniques are aimed at creating uncorrelated random channels, thereby creating uncorrelated copies of the same signal at the receiver front end. Combining techniques are employed at the receiver to exploit the multipath propagation characteristics of the channel (see chapter 11). Broadly speaking, diversity is categorized as

• Time diversity: Multiple copies of the information are sent in different time slots. The time slots are designed in such a way that the delay between the signal replicas is greater than the coherence time (Tc) of the channel. This condition creates uncorrelated channels over those time slots. Coding and interleaving are equivalent techniques that break the channel memory into multiple chunks, thereby spreading and minimizing the effect of deep fades. This technique consumes extra transmission time.

• Frequency diversity: Signal replicas are sent across different frequency bands that are separated by at least the coherence bandwidth (Bc) of the channel, thereby creating uncorrelated channels for the transmission. In this method, the level of fading experienced by each frequency band is different, and at least one of the frequency bands may experience the lowest level of fading and consequently carry the strongest signal. This technique requires extra bandwidth. Examples: orthogonal frequency-division multiplexing (OFDM) and spread spectrum.

• Multiuser diversity: Employs adaptive modulation and user scheduling techniques to improve the performance of a multiuser system. In such systems, the channel quality information for each user is utilized by a scheduler to select the number of users, coding and modulation type, so that an objective function (throughput and fairness of scheduling) is optimized. Examples: orthogonal frequency-division multiple access (OFDMA) and the access scheme used in wireless systems such as IEEE 802.16e (Mobile WiMAX).
• Spatial diversity (antenna diversity): Aimed at creating uncorrelated propagation paths for a signal, spatial diversity is effected by the use of multiple antennas at the transmitter and/or the receiver. Employing multiple antennas at the transmitter is called transmit diversity, and multiple antennas at the receiver is called receive diversity. Diversity combining techniques like selection combining (SC), feedback or scanning combining, and maximum ratio combining (MRC) can be employed by the receiver to exploit the multipath effects. Spatial diversity techniques can also be used to increase the data rate (spatial multiplexing) rather than improve the reliability of the channel. Examples: MIMO, beamforming and space-time coding (STC).

• Polarization diversity (antenna diversity): Multiple copies of the same signal are transmitted and received by antennas of different polarization. Used to mitigate polarization mismatches between transmit and receive antennas.

• Pattern diversity (antenna diversity): Multiple versions of the signal are transmitted via two or more antennas with different radiation patterns. The antennas are spaced such that they collectively act as a single entity aimed at providing more directional gain compared to an omni-directional antenna.

• Adaptive arrays (antenna diversity): The ability to control the radiation pattern of a single antenna with active elements or an array of antennas. The radiation pattern is adapted based on the existing channel conditions.

The aforementioned list of diversity techniques is just a broad classification. There exist more advanced diversity techniques, such as cooperative diversity, that can be a combination of one or more techniques listed above. The following text concentrates on spatial diversity techniques, especially the transmit and receive diversity schemes.
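The gain from the combining schemes mentioned above can be illustrated numerically. A minimal sketch (Rayleigh branches with assumed unit average SNR, not from the text): selection combining picks the instantaneous best branch SNR, while maximum ratio combining sums the branch SNRs.

```matlab
%Sketch: output SNR of selection combining (SC) vs maximum ratio
%combining (MRC) over N Rayleigh branches (assumed unit average SNR)
N = 2; L = 100000;                         %branches, realizations
h = (randn(L,N)+1i*randn(L,N))/sqrt(2);    %i.i.d. Rayleigh channel gains
gamma = abs(h).^2;                         %per-branch instantaneous SNRs
gamma_sc  = max(gamma,[],2);               %SC: best branch only
gamma_mrc = sum(gamma,2);                  %MRC: coherent sum of branches
disp([mean(gamma_sc) mean(gamma_mrc)]);    %approx 1.5 and 2 for N = 2
```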
12.1.2 Single input single output (SISO) channel model

Conventional radio communication systems contain one antenna at the transmitter and one at the receiver (Figure 12.1). In multiple antenna systems jargon, this is called a single input single output (SISO) system. The capacity of a continuous AWGN channel that is bandwidth limited to B Hz and average received power constrained to P Watts is given by the well-known Shannon-Hartley theorem for a continuous input continuous output memoryless AWGN channel (CCMC) [7]:

C_awgn(P, B) = B × log2(1 + SNR)   (12.1)

Fig. 12.1: (a) SISO channel; (b) SISO model from channel perspective
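Equation 12.1 is straightforward to evaluate. A minimal sketch (the bandwidth and SNR values below are assumed for illustration):

```matlab
%Sketch: Shannon-Hartley capacity of an AWGN SISO channel (eq. 12.1)
B = 1e6;                           %bandwidth in Hz (assumed value)
SNR_dB = 20; SNR = 10^(SNR_dB/10); %linear SNR
C = B*log2(1 + SNR);               %capacity in bits/s
disp(C/1e6);                       %approx 6.66 Mbps at 20 dB SNR
```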
12.1.3 Multiple antenna systems channel model

Here, the system configuration [1][2][3][4][5][6] typically contains M antennas at the transmitter and N antennas at the receiver front end, as illustrated in Figure 12.2. This is a multiple input multiple output (MIMO) channel model. Each receiver antenna receives not only the direct signal intended for it, but also a fraction of the signal from the other propagation paths. Thus, the channel response is expressed as a transmission matrix H. The direct path formed between antenna 1 at the transmitter and antenna 1 at the receiver is represented by the channel response h11. The channel response of the path formed between antenna 1 at the transmitter and antenna 2 at the receiver is expressed as h21, and so on. Thus, the channel matrix is of dimension N × M. The received vector y is expressed in terms of the channel transmission matrix H, the input vector x and the noise vector n as

y = Hx + n   (12.2)

where the various quantities are

y = [y1, y2, · · · , yN]^T,  x = [x1, x2, · · · , xM]^T,  n = [n1, n2, · · · , nN]^T   (12.3)

and H is the N × M matrix whose (i, j)th entry hij is the channel response between transmit antenna j and receive antenna i.

For an asymmetrical antenna configuration (M ≠ N), the number of data streams (or the number of uncoupled equivalent channels) in the system is always less than or equal to the minimum of the number of transmitter and receiver antennas, min(M, N).
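A single use of the MIMO channel in equation 12.2 can be sketched in a few lines of Matlab (the antenna counts and noise power below are assumed values):

```matlab
%Sketch: one realization of the MIMO channel model y = H*x + n (eq. 12.2)
M = 2; N = 3;                                %Tx and Rx antennas (assumed)
H = (randn(N,M)+1i*randn(N,M))/sqrt(2);      %i.i.d. Rayleigh channel matrix
x = sign(randn(M,1));                        %BPSK symbols on each Tx antenna
n = sqrt(0.01/2)*(randn(N,1)+1i*randn(N,1)); %complex AWGN (assumed power)
y = H*x + n;                                 %received vector of length N
disp(size(y));                               %N x 1
```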
Fig. 12.2: (a) MIMO channel; (b) MIMO model from channel perspective
12.2 Diversity and spatial multiplexing

The wireless communication environment is very hostile. The signal transmitted over a wireless communication link is susceptible to fading (severe fluctuations in signal level), co-channel interference, dispersion effects in time and frequency, path loss, etc. On top of these woes, the limited availability of bandwidth presents a significant challenge to a designer attempting to provide higher spectral efficiency and higher quality of link availability at low cost. Multiple antenna systems are the current trend in many wireless technologies and are essential to their performance. They improve the spectral efficiency and offer high quality links when compared to traditional single input single output (SISO) systems. Many theoretical studies [9][10] and communication system design experiments [11][12][13] on multiple antenna systems have demonstrated a great improvement in the performance of such systems.
12.2.1 Classification with respect to antenna configuration

In multiple antenna systems jargon, communication systems are broadly categorized into four categories with respect to the number of antennas at the transmitter (Tx) and the receiver (Rx), as listed below.

• SISO - single input single output system - 1 Tx antenna, 1 Rx antenna
• SIMO - single input multiple output system - 1 Tx antenna, N Rx antennas (N > 1)
• MISO - multiple input single output system - M Tx antennas, 1 Rx antenna (M > 1)
• MIMO - multiple input multiple output system - M Tx antennas, N Rx antennas (M, N > 1)
12.2.2 Two flavors of multiple antenna systems

Apart from the antenna configurations, there are two flavors of multiple antenna systems with respect to how data is transmitted across a given channel. The existence of multiple antennas in a system means the presence of different propagation (spatial) paths. This presents two design choices. The first choice is to send the same data across the different propagation paths; the second is to place different portions of the data on separate propagation paths. The first choice improves the reliability of the communication system and is referred to as spatial diversity. Fading can be mitigated by employing receive and transmit diversity [16][17], thereby improving the reliability of the transmission link. Improved coverage can be obtained by employing coherent combining techniques that increase the signal to noise ratio of the system. The second choice, referred to as spatial multiplexing, increases the data rate of the system for the given bandwidth, thereby improving the spectral efficiency. Spatial multiplexing techniques [14], such as the Bell Laboratories Layered Space-Time (BLAST) architecture [15], yield increased data rates in wireless communication links.
12.2.3 Spatial diversity

In mobile wireless systems, the envelope of the received signal is composed of a superposition of slow and fast fading components. Slow fading is caused by the coarse variations in the terrain between the mobile and base stations. Fast fading is caused by the interference between the multipath waves scattered from the surrounding scatterers. The effects of rapid and deep fading are combated using spatial diversity techniques.
In spatial diversity techniques, the same information is sent across independent uncorrelated channels to combat fading. When multiple copies of the same data are sent across independently fading channels, the amount of fade suffered by each copy of the data will be different. This guarantees that at least one of the copies will suffer less fading compared to the rest. At the receiver, the signals from the independent propagation paths are combined, and hence the chance of properly receiving the transmitted data increases. In effect, this improves the reliability of the entire system and also reduces the co-channel interference significantly. This technique is referred to as inducing spatial diversity in the communication system.

With reference to spatial dimensions, a wireless channel has an interesting property called the spatial correlation function. Spatial correlation gives the cross-correlation of the signal envelopes of multiple antennas as a function of the spatial separation between the antennas. The decorrelation distance is the antenna spacing at which the magnitude of the correlation becomes close to zero. This implies that if the transmitter/receiver has multiple antennas separated by the decorrelation distance, the signals propagating on the fading paths will be statistically uncorrelated. Therefore, in spatial diversity techniques, independent uncorrelated fading paths are realized by using multiple antennas spaced apart by the decorrelation distance.

Consider a SISO system where a data stream [1, 0, 1, 1, 1] is transmitted through a channel with deep fades (Figure 12.3). Due to the variations in the channel quality, the data stream may get completely lost or severely corrupted. Such rapid channel variations can be combated by adding independent uncorrelated fading channels. Independent fading channels are created by increasing the number of transmitter antennas or receiver antennas or both.
The SISO antenna configuration will not provide any diversity as there is no parallel link. Thus, the diversity is indicated as 0 in Figure 12.3.

Fig. 12.3: SISO channel provides no diversity

Instead of transmitting with a single antenna and receiving with a single antenna (as in SISO), let's increase the number of receiving antennas by one, with the antennas separated by the decorrelation distance. In this single input multiple output (SIMO) antenna system, two copies of the same data are put on two different channels having independent fading characteristics. Even if one of the links fails to deliver the data, the chances of proper delivery of the data across the other link are very high. Thus, the additional fading channel increases the reliability of the overall transmission. This improvement in reliability translates into a performance improvement, measured as diversity gain. For a system with M transmitter antennas and N receiver antennas, the maximum number of diversity paths is M × N. In the SIMO configuration shown in Figure 12.4, the total number of diversity paths created is 1 × 2 = 2.
Fig. 12.4: 1x2 SIMO channel model and its diversity
In this way, more diversity paths can be created by adding multiple antennas at transmitter or receiver or both. Figure 12.5 illustrates a 2 × 2 MIMO system with number of diversity paths equal to 2 × 2 = 4.
12.2.4 Spatial multiplexing

For power and bandwidth limited systems, the use of multiple antennas at the transmitter and/or receiver has opened up another dimension - space - that can be exploited in a similar way as time and frequency. Similar to frequency division and time division multiplexing, multiple antenna systems present the possibility of multiplexing signals in space. In multiple antenna systems, fading can be made beneficial by increasing the degrees of freedom available for communication [10]. If the path gains of the individual channels between the transmit and receive antennas fade independently, the probability of having a well-conditioned channel matrix is high, in which case parallel spatial channels are created. By transmitting independent information on the parallel spatial channels, the data rate of transmission is increased. This effect is called spatial multiplexing. It can be compared to the orthogonal frequency-division multiplexing (OFDM) technique, where different frequency sub-channels carry different parts of the modulated data. But in spatial multiplexing, if the scattering by the environment is rich enough, several independent spatial sub-channels are created in the same allocated bandwidth. Thus the multiplexing gain comes at no additional cost on bandwidth or power.

Denoting the signal to noise ratio (SNR) as γ, it is well known that the capacity of an AWGN SISO channel scales as log(γ) in the high SNR regime. Contrarily, for a single user MIMO channel with M inputs and N outputs, the capacity scales as min(M, N) log(γ). From this, the natural definition of the spatial degrees of freedom η follows as

η ≜ lim_{γ→∞} C(γ)/log(γ)   (12.4)
Fig. 12.5: 2x2 MIMO channel model and its diversity
where C(γ) is the sum capacity at SNR γ for the MIMO system. Put differently, the degrees of freedom η represent the maximum multiplexing gain of the generalized MIMO system.
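The min(M, N) log(γ) capacity scaling can be checked numerically. The sketch below (written in Python/NumPy rather than the book's Matlab, purely as an illustration) estimates the slope of the ergodic capacity of an i.i.d. Rayleigh 2 × 2 MIMO channel against log2(γ) at two high-SNR points; the slope approaches min(M, N) = 2, i.e., two spatial degrees of freedom.

```python
import numpy as np

def ergodic_capacity(M, N, snr, trials=2000, seed=None):
    """Ergodic capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO channel
    with equal power allocation across the M transmit antennas."""
    rng = np.random.default_rng(seed)
    cap = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((N, M)) + 1j*rng.standard_normal((N, M)))/np.sqrt(2)
        cap += np.log2(np.real(np.linalg.det(np.eye(N) + (snr/M)*H @ H.conj().T)))
    return cap/trials

snr1, snr2 = 1e4, 1e6                      # two points deep in the high-SNR regime
c1 = ergodic_capacity(2, 2, snr1, seed=1)  # same seed -> same channel realizations
c2 = ergodic_capacity(2, 2, snr2, seed=1)
slope = (c2 - c1)/(np.log2(snr2) - np.log2(snr1))  # approaches min(M, N) = 2
print(slope)
```

The slope is estimated from the same set of channel realizations at both SNRs, so the constant terms cancel and the estimate is close to 2 even with a modest number of trials.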
12.3 Receive diversity

A SIMO channel configuration is characterized by 1 transmit antenna and multiple receive antennas (Figure 12.6). The SIMO configuration provides receive diversity, where the same information is received across independent fading channels to combat fading. When multiple copies of the same data are received across independently fading channels, the amount of fade suffered by each copy of the data will be different. This guarantees that at least one of the copies will suffer less fading compared to the rest. Thus, the chance of properly receiving the transmitted data increases. In effect, this improves the reliability of the entire system. The channels are independent and identically distributed (i.i.d) and hence the error events across those independent channels are also independent. The signal to noise ratios (SNR) of the channels are also i.i.d and every channel path has the same average SNR. This is the fundamental assumption of the SIMO model incorporating receive diversity.
Fig. 12.6: Single input multiple output (SIMO) channel model shown with receiver processing
12.3.1 Received signal model

Assuming MPSK transmission and the use of N receive antennas in the receiver, the received signal striking the kth antenna is given by [18]

r_k(t) = a_k A_c \cos\left(2\pi f_c (t - \tau_k) + \theta_m + \phi_k\right) + e_k(t), \quad 0 \le t < T_{sym}, \; k = 1, 2, \cdots, N \quad\quad (12.5)

where A_c is the amplitude of the transmitted signal, T_{sym} is the symbol duration, f_c is the frequency of the RF carrier and θ_m is the signal phase carrying the information. The kth fading channel is characterized by a_k = |h_k|, the fading magnitude; φ_k = ∠h_k, the uniformly distributed random phase, unchanged over the symbol period; and τ_k, the delay introduced by the channel path. The received signal is also corrupted by AWGN noise e_k(t) of mean zero and variance σ²_{e_k} = N_0/2. In diversity reception, the first step is to compensate for the different time delays introduced by the individual channel paths. This is achieved by performing time alignment among all the N received signals with respect to the largest delay τ_max = max(τ_1, τ_2, ..., τ_N). After time alignment, the received signal r_k(t) is processed by an IQ receiver, whose outputs on the I and Q branches are combined to form a single complex output

r_k = a_k e^{j\phi_k} \sqrt{E_s}\, e^{j\theta_m} + n_k \quad\quad (12.6)

where h_k = a_k e^{jφ_k} is the complex impulse response of the kth channel path, E_s is the transmit symbol energy and n_k ∼ CN(0, N_0) accounts for the combined complex noise from the independent I/Q channels in the receiver. Representing the complex equivalent baseband modulated signal as s = \sqrt{E_s} e^{jθ_m}, the received signal on each channel path simplifies to

r_k = h_k s + n_k \quad\quad (12.7)

In an additive white Gaussian noise (AWGN) channel, the signal-to-noise ratio (SNR) for a given channel condition is a constant. But in the case of fading channels, the SNR is no longer a constant, since the signal fluctuates when passed through a fading channel. Therefore, for a fading channel, the SNR has a random variable component built into it.
Hence, we don’t call it just SNR; instead it is called the instantaneous SNR, which depends on the current conditions of the channel (or equivalently, the value of the random variable at that instant). The instantaneous SNR for the kth channel path is proportional to the squared magnitude of the channel response at that instant

\gamma_k = \frac{E\{|s|^2\}}{\sigma_{n_k}^2} |h_k|^2 = \frac{E_s}{N_0} a_k^2 \quad\quad (12.8)
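The distinction between the instantaneous SNR of (12.8) and its expected value can be seen with a short numerical sketch (Python/NumPy rather than Matlab, purely as an illustration; the value of Es/N0 is an arbitrary assumption). For CN(0, 1) channel gains, E{|h|²} = 1, so the average of the instantaneous SNRs converges to Es/N0 even though each realization fluctuates widely.

```python
import numpy as np

rng = np.random.default_rng(7)
EsN0 = 10.0  # assumed symbol SNR Es/N0 (linear scale)

# CN(0,1) channel gains: |h| is Rayleigh distributed with E{|h|^2} = 1
h = (rng.standard_normal(200_000) + 1j*rng.standard_normal(200_000))/np.sqrt(2)

gamma_inst = EsN0*np.abs(h)**2   # instantaneous SNR per realization (Eq. 12.8)
Gamma = gamma_inst.mean()        # sample average SNR, close to Es/N0
print(Gamma)
```

The sample average lands close to the configured Es/N0, while individual gamma_inst values range from deep fades near zero to several times the mean.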
Since the SNR of a fading channel is a random variable, we can also talk about its expected (average) value, which is called the average SNR. The average SNR for the kth path is given by

\Gamma_k = \frac{E\{|s|^2\}}{\sigma_{n_k}^2} E\{|h_k|^2\} = \frac{E_s}{N_0} E\{|h_k|^2\} \quad\quad (12.9)

For the case of N receive antennas (Figure 12.6), the N independent received signal copies are

r_1 = h_1 s + n_1 \quad\quad (12.10)
r_2 = h_2 s + n_2 \quad\quad (12.11)
\vdots \quad\quad (12.12)
r_N = h_N s + n_N \quad\quad (12.13)

In matrix form,

\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix} = \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_N \end{bmatrix} s + \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_N \end{bmatrix} \quad\quad (12.14)
Writing in compact form, the received signal is

\mathbf{r} = \mathbf{h} s + \mathbf{n} \quad\quad (12.15)
where r, h and n are the vectors containing the received signal samples, the channel state information and the noise components for each channel path. Given the N noisy received signals, the combiner in the receiver linearly combines them with weighting factors w = [w_1, w_2, ..., w_N]^T and produces the combined signal

y = \sum_{k=1}^{N} w_k^* r_k = \mathbf{w}^H \mathbf{r} = \mathbf{w}^H \mathbf{h} s + \mathbf{w}^H \mathbf{n} \quad\quad (12.16)

where the superscript H denotes Hermitian transpose. The choice of the weighting factors w depends on the type of combining technique used. Selection combining, maximum ratio combining and equal gain combining are the well known combining schemes.
12.3.2 Maximum ratio combining (MRC)

Maximum ratio combining works on the signal in the spatial domain and is very similar to what a matched filter in the frequency domain does to the incoming signal: MRC maximizes the inner product of the weights and the signal vector. With reference to Figure 12.7, the maximum ratio combining technique uses all the N received signal elements; it weights them and combines the weighted signals so that the SNR at the combiner output, γ_MRC, is maximized. The SNR is maximized when the receiver perfectly knows the channel gains and phases. The MRC architecture performs four operations: time alignment of the signals on the branches, co-phasing, channel gain matching and combining.
Requiring the knowledge of the channel gain and phase estimates, the weights are chosen as

w_k^{MRC} = \hat{h}_k = |\hat{h}_k| e^{j\angle \hat{h}_k}, \quad k = 1, \cdots, N \quad\quad (12.17)

The output of the MRC is [19]

y = \sum_{k=1}^{N} w_k^* r_k = \sum_{k=1}^{N} \hat{h}_k^* (h_k s + n_k) \quad\quad (12.18)
\;\;\; = \left( h_1 \hat{h}_1^* + h_2 \hat{h}_2^* + \cdots + h_N \hat{h}_N^* \right) s + noise \quad\quad (12.19)

The decision variable for demodulation is obtained by power normalization

\hat{s} = \frac{y}{|\hat{h}_1|^2 + |\hat{h}_2|^2 + \cdots + |\hat{h}_N|^2} = \frac{\sum_{k=1}^{N} \hat{h}_k^* r_k}{\sum_{k=1}^{N} |\hat{h}_k|^2} \quad\quad (12.20)

For the case of perfect channel estimation, the instantaneous SNR after MRC processing is

\gamma_{MRC} = \frac{E_s}{N_0} \left( |h_1|^2 + |h_2|^2 + \cdots + |h_N|^2 \right) \quad\quad (12.21)

Fig. 12.7: Maximum ratio combining with perfect fading channel estimation
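Equations (12.18)-(12.21) can be sanity-checked with a short sketch (Python/NumPy rather than the book's Matlab, purely illustrative; the branch count and noise density are arbitrary assumptions). With perfect channel estimates and no noise, the normalized combiner output returns the transmitted symbol exactly, and the post-combining SNR equals the sum of the branch SNRs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
s = np.exp(1j*np.pi/4)                      # unit-energy PSK symbol (Es = 1)
h = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)

r = h*s                                      # noiseless branch observations r_k = h_k s
w = h                                        # MRC weights w_k = h_k (perfect estimates)
y = np.vdot(w, r)                            # sum_k w_k^* r_k  (Eqs. 12.18-12.19)
s_hat = y/np.sum(np.abs(h)**2)               # power normalization (Eq. 12.20)

N0 = 0.1                                     # assumed noise spectral density
gamma_branches = np.abs(h)**2/N0             # per-branch instantaneous SNR (Es = 1)
# Post-combining SNR: signal power |w^H h|^2 over noise power ||w||^2 N0
gamma_mrc = np.abs(np.vdot(h, h))**2/(np.sum(np.abs(h)**2)*N0)
print(np.allclose(s_hat, s), np.isclose(gamma_mrc, gamma_branches.sum()))
```

Note that gamma_mrc is computed from the combiner's signal and noise powers, not assumed, and still collapses to the sum of the branch SNRs as (12.21) predicts.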
12.3.3 Equal gain combining (EGC)

In the MRC architecture, detection is performed on a linear combination of co-phased signals weighted with the fading gain of each channel path. It is the optimal technique that maximizes the SNR at the receiver side. However, the main disadvantage of MRC is its complexity: it requires perfect knowledge of the fading gains and phases.
Fig. 12.8: Equal gain combining with perfect fading channel phase estimation

In equal gain combining, shown in Figure 12.8, detection is performed on a linear combination of co-phased signals that are equally weighted. For the case of MPSK transmission, the combiner for the EGC scheme does not require gain estimation of the fading channels and hence it offers relatively lower implementation complexity. EGC provides comparable performance with respect to the MRC scheme. For example, the performance of EGC is only about 1 dB inferior to that of MRC in the Rayleigh flat-fading case. Though EGC is sub-optimal, it provides a practical solution. The weights of the EGC scheme, which account for co-phasing with the fading channel gain weights set to unity for all N paths, are given by

w_k^{EGC} = e^{j\angle \hat{h}_k}, \quad k = 1, \cdots, N \quad\quad (12.22)
If we assume perfect estimation of the phases, ∠ĥ_k = ∠h_k, the output of the EGC combiner is given by [19]

y = \sum_{k=1}^{N} w_k^* r_k = \sum_{k=1}^{N} w_k^* (h_k s + n_k) = \sum_{k=1}^{N} e^{-j\angle \hat{h}_k} \left( |h_k| e^{j\angle h_k} s + n_k \right) \quad\quad (12.23)
\;\;\; = \sum_{k=1}^{N} \left( |h_k| s + e^{-j\angle \hat{h}_k} n_k \right) = \left( |h_1| + |h_2| + \cdots + |h_N| \right) s + noise \quad\quad (12.24)
For MPSK modulations, the output of the equal gain combiner (y) is sufficient for demodulation; therefore, channel gain estimation is not needed in this case. However, for other modulations like QAM, the decision
variable for demodulation is obtained by the following normalization step

\hat{s} = \frac{y}{|\hat{h}_1| + |\hat{h}_2| + \cdots + |\hat{h}_N|} = \frac{\sum_{k=1}^{N} e^{-j\angle \hat{h}_k} r_k}{\sum_{k=1}^{N} |\hat{h}_k|} \quad\quad (12.25)
The instantaneous SNR after EGC processing is

\gamma_{EGC} = \frac{E_s}{N_0} \left( |h_1| + |h_2| + \cdots + |h_N| \right)^2 \quad\quad (12.26)
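The co-phasing step of (12.23)-(12.25) can be verified with a short sketch (Python/NumPy rather than Matlab, purely illustrative; the branch count is an arbitrary assumption). With perfect phase estimates and no noise, the combiner output collapses to the sum of the channel magnitudes times the symbol, and the normalized output recovers the symbol.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
s = np.exp(1j*2*np.pi/3)                     # unit-energy PSK symbol
h = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)

r = h*s                                       # noiseless branch observations
w_conj = np.exp(-1j*np.angle(h))              # EGC co-phasing, unit gains (Eq. 12.22)
y = np.sum(w_conj*r)                          # combiner output (Eq. 12.23)

expected = np.sum(np.abs(h))*s                # (|h_1| + ... + |h_N|) s  (Eq. 12.24)
s_hat = y/np.sum(np.abs(h))                   # normalization step (Eq. 12.25)
print(np.allclose(y, expected), np.allclose(s_hat, s))
```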
12.3.4 Selection combining (SC)

In selection combining, the received signal from the antenna that experiences the highest SNR (i.e., the strongest of the N received signals) is chosen for processing at the receiver (Figure 12.9).

w_k^{SC} = \begin{cases} 1, & k = \arg\max_i \gamma_i \\ 0, & otherwise \end{cases} \quad\quad (12.27)

Assuming the same channel noise across all branches, the branch SNRs are dominated by |h_i| - the gains of the individual fading channels. For this case, the weight of the path that has the highest |ĥ_i| estimate is set as

w_k = 1, \quad k = \arg\max_i |\hat{h}_i| \quad\quad (12.28)

which is equivalent to switching to the path that offers the maximum signal-to-noise ratio (SNR). The output after selecting the best path i is

y = \sum_{k=1}^{N} w_k^* r_k = w_i^* r_i = r_i \quad\quad (12.29)

Using the knowledge of the channel impulse response of the selected best path ĥ_i, the detection in a flat-fading channel (refer section 6.2.2) is given by

\hat{s} = \frac{\hat{h}_i^*}{|\hat{h}_i|^2} y = \frac{\hat{h}_i^*}{|\hat{h}_i|^2} r_i \quad\quad (12.30)

The instantaneous SNR at the combiner output is the maximum among the instantaneous SNRs of all the received branches

\gamma_{SC} = \max_i \gamma_i \quad\quad (12.31)
Selection combining performs detection based on only the one signal having the highest SNR; therefore it is the scheme that offers the least complexity. However, by selecting only one signal among the received branches, it does not fully exploit the available diversity. To take advantage of the scheme's simplicity, selection combining is usually followed by a non-coherent detection technique that does not require an expensive and complex carrier recovery circuit.
Fig. 12.9: Selection combining receiver processing
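The selection and single-branch detection steps of (12.28)-(12.30) can be sketched as follows (Python/NumPy rather than Matlab, purely illustrative; the branch count is an arbitrary assumption). Only the branch with the largest channel magnitude is processed, and equalizing it with its own channel estimate recovers the symbol in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
s = np.exp(-1j*np.pi/6)                       # unit-energy PSK symbol
h = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)

r = h*s                                        # noiseless branch observations
i = np.argmax(np.abs(h))                       # best branch index (Eq. 12.28)
y = r[i]                                       # selected output (Eq. 12.29)
s_hat = np.conj(h[i])/np.abs(h[i])**2 * y      # single-branch detection (Eq. 12.30)
print(np.allclose(s_hat, s))
```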
12.3.5 Performance simulation

As an example, the symbol error rate performance of a BPSK modulated transmission over a slow Rayleigh flat-fading SIMO channel is simulated here for several values of the number of receive antennas. The performance of the MRC, EGC and SC receive diversity schemes is compared. Perfect estimation of the fading channel state information is assumed. The Matlab code, shown next, utilizes the modulate-demodulate functions defined in chapter 5. The addition of Gaussian white noise needs to be multidimensional: the method discussed in chapter 6 section 6.1 is extended here for computing and adding the required amount of noise across the N branches of signals. The simulated symbol error rates are plotted in Figure 12.10.

Program 12.1: receive diversity.m: Receive diversity error rate performance simulation

%Eb/N0 Vs SER for BPSK over Rayleigh flat-fading with receive diversity
clearvars; clc;
%---------Input Fields------------------------
nSym = 10e5; %Number of symbols to transmit
N = [1,2,20]; %number of diversity branches
EbN0dBs = -20:2:36; %Eb/N0 range in dB for simulation
M = 2; %M-ary PSK modulation
MOD_TYPE = 'PSK'; %Modulation type
k = log2(M); EsN0dBs = 10*log10(k)+EbN0dBs; %EsN0dB calculation
figure;
for nRx = N %simulate for each # of receive branches
    ser_MRC = zeros(1,numel(EsN0dBs)); %simulated symbol error rates
    ser_EGC = zeros(1,numel(EsN0dBs));
    ser_SC = zeros(1,numel(EsN0dBs)); q = 1;
    %---------- Transmitter ----------
    d = ceil(M*rand(1,nSym)); %uniform random symbols 1:M
    s = modulate(MOD_TYPE,M,d); %Modulation
    s_diversity = kron(ones(nRx,1),s); %nRx signal branches
    for EsN0dB = EsN0dBs
        h = sqrt(1/2)*(randn(nRx,nSym)+1j*randn(nRx,nSym)); %Rayleigh flat-fading
        signal = h.*s_diversity; %effect of channel on the modulated signal
        gamma = 10.^(EsN0dB/10); %for AWGN noise addition
        P = sum(abs(signal).^2,2)./nSym; %calculate power in each branch of signal
        N0 = P/gamma; %required noise spectral density for each branch
        %Scale each row of noise with the calculated noise spectral density
        noise = (randn(size(signal))+1j*randn(size(signal))).*sqrt(N0/2);
        r = signal + noise; %received signal branches
        %MRC processing assuming perfect channel estimates
        s_MRC = sum(conj(h).*r,1)./sum(abs(h).^2,1); %detector decisions
        d_MRC = demodulate(MOD_TYPE,M,s_MRC); %demodulation decisions
        %EGC processing assuming perfect channel estimates
        h_phases = exp(-1j*angle(h)); %estimated channel phases
        s_EGC = sum(h_phases.*r,1)./sum(abs(h),1); %detector decisions
        d_EGC = demodulate(MOD_TYPE,M,s_EGC); %demodulation decisions
        %SC processing assuming perfect channel estimates
        [h_max,idx] = max(abs(h),[],1); %max |h| values along all branches
        h_best = h(sub2ind(size(h),idx,1:size(h,2))); %best path's h estimate
        y = r(sub2ind(size(r),idx,1:size(r,2))); %selected output
        s_SC = y.*conj(h_best)./abs(h_best).^2; %detector decisions
        d_SC = demodulate(MOD_TYPE,M,s_SC); %demodulation decisions
        ser_MRC(q) = sum(d_MRC ~= d)/nSym; %Error rates computation
        ser_EGC(q) = sum(d_EGC ~= d)/nSym;
        ser_SC(q) = sum(d_SC ~= d)/nSym; q = q+1;
    end
    semilogy(EbN0dBs,ser_MRC,'b-','lineWidth',1.5); hold on;
    semilogy(EbN0dBs,ser_EGC,'r--','lineWidth',1.5);
    semilogy(EbN0dBs,ser_SC,'g-.','lineWidth',1.5);
end
xlim([-20,36]); ylim([0.00001,1.1]);
xlabel('Eb/N0 (dB)'); ylabel('Symbol Error Rate (P_s)');
title('Receive diversity schemes in Rayleigh flat-fading');
12.3.6 Array gain and diversity gain Compared to the SNR of just a single element of an antenna array, the combining of signals from every antenna increases the SNR at the receiver. This increase in average SNR of combined signals over the average SNR of just a single antenna element is called array gain. Array gain is applicable to both AWGN and fading channels. A system with array gain in a fading channel, achieved with multiple antennas, provides better performance
Fig. 12.10: Performance of BPSK with receive diversity schemes (MRC, EGC, SC) in Rayleigh flat-fading, for N = 1 (no diversity), N = 2 and N = 20 branches
than a system without diversity in AWGN channel. The array gain, illustrated in Figure 12.11, is measured as the SNR difference between a system with no diversity versus the system with diversity.
Fig. 12.11: Array gain (shift in Eb/N0 at a specific error rate) and diversity gain (increase in slope) illustrated for selection combining
Diversity gain results from averaging over uncorrelated diversity paths. Diversity gain is maximal when the different received antenna signals are completely uncorrelated and minimal when the signals are completely correlated [20]. Diversity gain is a function of the number of independent signal paths, the fading distributions, and the combining technique. As illustrated in Figure 12.11, diversity gain is measured as the increase in the slope of the average error probability curve due to diversity. Array gain and diversity gain could also be measured in a similar way on outage probability charts.
12.4 Transmit diversity

When the receiver is equipped with multiple antennas, the quality of the transmission can be improved by using receive diversity techniques. This method is especially useful for uplink transmission (from mobile terminal to base station), since the base station is fitted with multiple antennas. However, equipping the mobile terminal with multiple antennas is not practical due to limitations in size and complexity. Therefore, it is practically difficult to improve the performance of the downlink (from base station to mobile terminal) using receive diversity techniques. In this case, the performance of the downlink can be improved by using multiple transmit antennas at the base station. This gives us two choices, depending on the availability of channel state information at the transmitter.

• Transmit beamforming - If the channel state information at the transmitter (CSIT) is known, transmit beamforming techniques such as pre-maximum-ratio-combining (P-MRC), pre-equal-gain-combining (P-EGC) and antenna selection (AS) can be used to achieve array gain as well as diversity gain.
• Transmit diversity - If the channel state information at the transmitter is not available, transmit diversity techniques like the Alamouti scheme can be used to achieve diversity gain.
12.4.1 Transmit beamforming with CSIT

In the transmit beamforming scheme, depicted in Figure 12.12, we consider a multiple input single output (MISO) channel with N transmit antennas and 1 receive antenna. The transmitted signal vector x is formed by weighting the information symbol s with a transmit beamforming vector w = [w_1, w_2, ..., w_N]^T that obeys a sum-power constraint.

\mathbf{x} = \mathbf{w} s \quad\quad (12.32)

The fading channel for the N independent fading paths is given by the vector h = [h_1, h_2, ..., h_N]^T. Then, the received signal is

r = \mathbf{h}^T \mathbf{w} s + n \quad\quad (12.33)

where n ∼ CN(0, σ_n²) denotes complex Gaussian additive noise. The instantaneous SNR at the receiver is given by

\gamma = \frac{\left| \sum_{i=1}^{N} h_i w_i \right|^2}{\sigma_n^2} \quad\quad (12.34)

It is desirable to choose w - the weights at the transmitter - to maximize the SNR γ, subject to the sum-power constraint ||w||² ≤ P. Therefore, the design of w requires the knowledge of h at the transmitter. In time division duplex transmission links, the transmitter obtains the knowledge of h by exploiting a phenomenon called channel reciprocity. In transmission links employing the frequency division duplex method, the channel state information at the transmitter is fed back from the receiver over one of the frequency channels. If CSIT is known, the following transmit beamforming schemes are available for selecting the weights w at the transmitter.
Fig. 12.12: Transmit beamforming
• Pre-maximum ratio combining (P-MRC) - The power across the transmit antennas is weighted according to the instantaneous channel gains. Any phase shift seen in the channels is also accounted for during transmission.

w_k^{P\text{-}MRC} = \sqrt{\frac{P}{\sum_{k=1}^{N} |h_k|^2}} \cdot h_k^*, \quad k = 1, \cdots, N \quad\quad (12.35)

• Pre-equal gain combining (P-EGC) - The power is split equally across all the transmit antennas. Any phase shift in the corresponding channel is also compensated for, such that the signals from all the transmit antennas add up coherently at the receiver.

w_k^{P\text{-}EGC} = \sqrt{\frac{P}{N}} e^{-j\phi_k}, \quad k = 1, \cdots, N \quad\quad (12.36)

• Antenna selection (AS) - The antenna with the largest instantaneous SNR is selected for transmission.

w_k^{AS} = \begin{cases} \sqrt{P}, & k = \arg\max_i \gamma_i \\ 0, & otherwise \end{cases} \quad\quad (12.37)

Equations 12.35, 12.36 and 12.37 for computing the weights of the P-MRC, P-EGC and AS transmit beamforming schemes are similar to those of the MRC, EGC and SC receive diversity schemes (equations 12.17, 12.22, 12.27). Since the structure of the transmit beamforming schemes is similar to that of the receive diversity schemes, the details of the simulation code are omitted and left for the readers to try themselves.
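The P-MRC choice of weights can be checked with a short sketch (Python/NumPy rather than Matlab, purely illustrative; the power budget and noise variance are arbitrary assumptions). The weights of (12.35) meet the sum-power constraint with equality, and the resulting received SNR of (12.34) equals (P/σ²) times the sum of the channel gains - the transmit-side analogue of the MRC result.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
P = 1.0                                        # total transmit power budget
sigma2 = 0.05                                  # receiver noise variance
h = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)

# P-MRC beamforming weights (Eq. 12.35)
w = np.sqrt(P/np.sum(np.abs(h)**2)) * np.conj(h)

power_used = np.sum(np.abs(w)**2)              # equals P: sum-power constraint met
gamma = np.abs(np.sum(h*w))**2/sigma2          # received SNR (Eq. 12.34)
gamma_expected = P*np.sum(np.abs(h)**2)/sigma2 # (P/sigma^2) * sum |h_k|^2
print(np.isclose(power_used, P), np.isclose(gamma, gamma_expected))
```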
12.4.2 Alamouti code for CSIT unknown

S. M. Alamouti published a paper [16] describing a simple transmit diversity scheme that employs two antennas at the transmitter and one antenna at the receiver. The technique does not require any feedback from the receiver (i.e., the channel state information is unknown at the transmitter), nor does it require any bandwidth expansion, and its computational complexity is similar to the MRC receive diversity technique with one transmit and two receive antennas.
Fig. 12.13: Alamouti diversity scheme for MPSK signal transmission
In this technique (Figure 12.13), at a given symbol transmission period T_sym, two signals are simultaneously transmitted from the two transmit antennas. The signal from transmit antenna 1 is denoted as s_1 and the signal from transmit antenna 2 is s_2. At the next symbol transmission period, -s_2^* is transmitted from antenna 1 and s_1^* is transmitted from antenna 2, where * denotes the complex conjugate operation. Therefore, the transmit signals at symbol times n = 1, 2 are

\mathbf{x}_1 = \begin{bmatrix} s_1 \\ s_2 \end{bmatrix} \qquad \mathbf{x}_2 = \begin{bmatrix} -s_2^* \\ s_1^* \end{bmatrix} \quad\quad (12.38)

The equation is extended similarly for symbol periods n = 3, 4, n = 5, 6 and so on. We note that the Alamouti encoding is done in space and time, hence it belongs to the class of space-time codes. The encoding can also be done in space and frequency; such codes are called space-frequency codes. The code rate of the Alamouti space-time code is R = 1, since two complex valued symbols are transmitted in two consecutive time slots. It is common to write the Alamouti encoding scheme in terms of a generator matrix whose columns are orthogonal.

G_2 = \begin{bmatrix} \mathbf{s}^1 & \mathbf{s}^2 \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix}, \quad (\mathbf{s}^1)^H \cdot \mathbf{s}^2 = 0 \quad\quad (12.39)

Due to the orthogonality of the columns, the generator matrix has the following property

G_2^H G_2 = \begin{bmatrix} |s_1|^2 + |s_2|^2 & 0 \\ 0 & |s_1|^2 + |s_2|^2 \end{bmatrix} = \left( |s_1|^2 + |s_2|^2 \right) I_2 \quad\quad (12.40)
where I2 is a 2 × 2 identity matrix. For the purpose of having an uncomplicated maximum likelihood decoding system at the receiver, we assume that the fading channel coefficients along the two transmission paths remain constant for a block of two successive symbol periods.
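The orthogonality property (12.40) is easy to verify numerically for arbitrary symbols. The sketch below (Python/NumPy rather than Matlab, purely illustrative) builds the generator matrix of (12.39) and confirms that G₂ᴴG₂ is the scaled identity.

```python
import numpy as np

rng = np.random.default_rng(4)
s1, s2 = rng.standard_normal(2) + 1j*rng.standard_normal(2)  # two arbitrary complex symbols

# Alamouti generator matrix (Eq. 12.39)
G2 = np.array([[s1, s2],
               [-np.conj(s2), np.conj(s1)]])

GHG = G2.conj().T @ G2
target = (abs(s1)**2 + abs(s2)**2)*np.eye(2)   # (|s1|^2 + |s2|^2) I_2  (Eq. 12.40)
print(np.allclose(GHG, target))
```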
h_1(t) = h_1(t + T_{sym}) = h_1 = |h_1| e^{j\angle h_1} = \alpha_1 e^{j\phi_1} \quad\quad (12.41)
h_2(t) = h_2(t + T_{sym}) = h_2 = |h_2| e^{j\angle h_2} = \alpha_2 e^{j\phi_2} \quad\quad (12.42)
Due to these reasons, the Alamouti coding scheme is classified as an orthogonal space-time block code. The signals received at the first and second time instants, after passing through the fading channel and the additive white Gaussian noise (AWGN), are

r_1 = r(t) = h_1 s_1 + h_2 s_2 + n_1 \quad\quad (12.43)
r_2 = r(t + T_{sym}) = -h_1 s_2^* + h_2 s_1^* + n_2 \quad\quad (12.44)
where n_1 and n_2 are complex variables capturing the effect of noise and interference. Expressed in matrix form,

\begin{bmatrix} r_1 \\ r_2 \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \end{bmatrix} \quad\quad (12.45)

The equation can also be represented in the following equivalent form

\begin{bmatrix} r_1 \\ r_2^* \end{bmatrix} = \begin{bmatrix} h_1 & h_2 \\ h_2^* & -h_1^* \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2^* \end{bmatrix} \quad\quad (12.46)

which may be written in compact form as

\mathbf{r} = \mathbf{H} \mathbf{s} + \mathbf{n} \quad\quad (12.47)
At the receiver, the output of the combiner is obtained by multiplying the above equation by Ĥ^H - the Hermitian transpose (conjugate transpose of a matrix) of the estimate of the channel matrix H. If we assume perfect channel estimation at the receiver, Ĥ ≈ H, the output of the combiner is given by

\mathbf{y} = \mathbf{H}^H \mathbf{r} = \mathbf{H}^H \mathbf{H} \mathbf{s} + \mathbf{H}^H \mathbf{n} \quad\quad (12.48)

Therefore, the 2 × 1 signal vector at the combiner output is expressed as

\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_1^* & h_2 \\ h_2^* & -h_1 \end{bmatrix} \begin{bmatrix} r_1 \\ r_2^* \end{bmatrix} \quad\quad (12.49)
= \begin{bmatrix} h_1^* & h_2 \\ h_2^* & -h_1 \end{bmatrix} \begin{bmatrix} h_1 & h_2 \\ h_2^* & -h_1^* \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \end{bmatrix} + \begin{bmatrix} \tilde{n}_1 \\ \tilde{n}_2 \end{bmatrix} \quad\quad (12.50)
= \begin{bmatrix} |h_1|^2 + |h_2|^2 & 0 \\ 0 & |h_1|^2 + |h_2|^2 \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \end{bmatrix} + \begin{bmatrix} \tilde{n}_1 \\ \tilde{n}_2 \end{bmatrix} \quad\quad (12.51)

where \tilde{\mathbf{n}} = \mathbf{H}^H \mathbf{n} is the noise vector after combiner processing. Therefore, the combiner outputs are

y_1 = \left( |h_1|^2 + |h_2|^2 \right) s_1 + \tilde{n}_1 \quad\quad (12.52)
y_2 = \left( |h_1|^2 + |h_2|^2 \right) s_2 + \tilde{n}_2 \quad\quad (12.53)
For equal energy signal constellations like MPSK, the sufficient decision variables for demodulation are

\hat{s}_1 = \frac{y_1}{|h_1|^2 + |h_2|^2} \quad\quad (12.54)
\hat{s}_2 = \frac{y_2}{|h_1|^2 + |h_2|^2} \quad\quad (12.55)
As an example, the error rate performance of a BPSK modulated transmission over a slow Rayleigh flat-fading channel is simulated here for three cases: no diversity, the 2x1 Alamouti transmit diversity scheme and the 1x2 MRC receive diversity scheme. Perfect estimation of the fading channel state information is assumed. The Matlab code, shown next, utilizes the modulate-demodulate functions defined in chapter 5. The addition of Gaussian white noise needs to be multidimensional: the method discussed in chapter 6 section 6.1 is extended here for computing and adding the required amount of noise across the two branches of signals. Figure 12.14 shows the bit error rate performance of uncoded coherent BPSK for the no diversity case, the MRC receive diversity scheme with a 1 transmit-2 receive antenna configuration and the Alamouti transmit scheme with a 2 transmit-1 receive antenna configuration. The performance of the Alamouti scheme is 3 dB worse than that of the MRC scheme. This is because the simulations assume that each transmit antenna radiates symbols with half the energy, so that the total combined energy radiated is the same as that of a single transmit antenna. In the Alamouti scheme, if each of the transmit antennas radiated the same energy as the single antenna in the 1 transmit-2 receive configuration used for the MRC scheme, their performance curves would match.
Fig. 12.14: Performance of BPSK with Alamouti transmit diversity (2Tx, 1Rx) and MRC receive diversity (1Tx, 2Rx), compared to no diversity (1Tx, 1Rx)
Program 12.2: Alamouti.m: Alamouti transmit diversity error rate performance simulation

%Eb/N0 Vs SER for BPSK over Rayleigh flat-fading with transmit diversity
clearvars; clc;
%---------Input Fields------------------------
nSym = 10e5; %Number of symbols to transmit
EbN0dBs = -20:2:36; %Eb/N0 range in dB for simulation
M = 2; %M-ary PSK modulation
MOD_TYPE = 'PSK'; %Modulation type
k = log2(M); EsN0dBs = 10*log10(k)+EbN0dBs; %EsN0dB calculation
ser_sim = zeros(1,numel(EsN0dBs)); q = 1; %simulated symbol error rates
for EsN0dB = EsN0dBs
    %---------- Transmitter ----------
    d = ceil(M*rand(nSym,1)); %uniform random symbols 1:M
    s = modulate(MOD_TYPE,M,d); %Modulation
    ss = kron(reshape(s,2,[]),ones(1,2)); %shape as 2xnSym vector
    h = sqrt(1/2)*(randn(2,nSym/2)+1j*randn(2,nSym/2)); %channel coeffs
    H = kron(h,ones(1,2)); %shape as 2xnSym vector
    H(:,2:2:end) = conj(flipud(H(:,2:2:end)));
    H(2,2:2:end) = -1*H(2,2:2:end); %Alamouti coded channel coeffs
    signal = sum(H.*ss,1); %effect of channel on the modulated signal
    gamma = 10.^(EsN0dB/10); %for AWGN noise addition
    P = sum(abs(signal).^2,2)./nSym; %calculate power in each branch of signal
    N0 = P/gamma; %required noise spectral density for each branch
    %Scale each row of noise with the calculated noise spectral density
    noise = (randn(size(signal))+1j*randn(size(signal))).*sqrt(N0/2);
    r = signal + noise; %received signal
    %Receiver processing
    rVec = kron(reshape(r,2,[]),ones(1,2)); %2xnSym format
    Hest = H; %perfect channel estimation
    Hest(1,1:2:end) = conj(Hest(1,1:2:end));
    Hest(2,2:2:end) = conj(Hest(2,2:2:end)); %Hermitian transposed Hest matrix
    y = sum(Hest.*rVec,1); %combiner output
    sCap = y./sum(conj(Hest).*Hest,1); %decision vector for demod
    dCap = demodulate(MOD_TYPE,M,sCap).'; %demodulation
    ser_sim(q) = sum(dCap ~= d)/nSym; q = q+1; %error rate calculation
end
figure;
semilogy(EbN0dBs,ser_sim,'r','lineWidth',1.5); hold on; %plot simulated error rates
title('2x1 Transmit diversity - Alamouti coding'); grid on;
xlabel('EbN0 (dB)'); ylabel('Symbol error rate (Ps)');
References

1. M. Torabi, D. Haccoun, Performance analysis of joint user scheduling and antenna selection over MIMO fading channels, IEEE Signal Processing Letters 18(4), 235-238 (2011).
2. L. Jin, X. Gu, Z. Hu, Low-complexity scheduling strategy for wireless multiuser multiple-input multiple-output downlink system, IET Communications 5(7), 990-995 (2011).
3. B. Makki, T. Eriksson, Efficient channel quality feedback signaling using transform coding and bit allocation, Vehicular Technology Conference, VTC (Ottawa, ON, 2010), pp. 1-5.
4. T. Eriksson, T. Ottosson, Compression of feedback for adaptive transmission and scheduling, Proc. IEEE 95(12), 2314-2321 (2007).
5. G. Dimic, N. D. Sidiropoulos, On downlink beamforming with greedy user selection: performance analysis and a simple new algorithm, IEEE Trans. Signal Processing 53(10), 3857-3868 (2005).
6. V. K. N. Lau, Proportional fair space-time scheduling for wireless communications, IEEE Trans. Communications 53(8), 1353-1360 (2005).
7. J. G. Proakis, Digital Communications, McGraw-Hill International Editions, 3rd ed., 1995.
8. M. Torlak, T. M. Duman, MIMO communication theory, algorithms, and prototyping, 20th Signal Processing and Communications Applications Conference (SIU), pp. 1-2, 18-20 April 2012.
9. I. E. Telatar, Capacity of multi-antenna Gaussian channels, European Transactions on Telecommunications, vol. 10, pp. 585-595, Nov./Dec. 1999.
10. G. J. Foschini, Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas, Bell Labs Tech. J., vol. 1, no. 2, pp. 41-59, 1996.
11. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, Space-time block codes from orthogonal designs, IEEE Trans. Inform. Theory, vol. 45, pp. 1456-1467, July 1999.
12. G. Foschini, G. Golden, R. Valenzuela, and P. Wolniansky, Simplified processing for high spectral efficiency wireless communication employing multi-element arrays, IEEE J. Select. Areas Commun., vol. 17, pp. 1841-1852, Nov. 1999.
13. R. Heath, Jr. and A. Paulraj, Switching between multiplexing and diversity based on constellation distance, in Proc. Allerton Conf. Communication, Control and Computing, Oct. 2000.
14. A. Paulraj and T. Kailath, Increasing capacity in wireless broadcast systems using distributed transmission/directional reception (DTDR), US Patent No. 5,345,599, issued 1993.
15. G. J. Foschini, Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas, Bell Laboratories Technical Journal, pp. 41-59, October 1996.
16. S. M. Alamouti, A simple transmit diversity technique for wireless communications, IEEE Journal on Selected Areas in Communications, vol. 16, no. 8, pp. 1451-1458, October 1998.
17. V. Tarokh, N. Seshadri, A. R. Calderbank, Space-time codes for high data rate wireless communication: performance criterion and code construction, IEEE Trans. on Information Theory, vol. 44, no. 2, pp. 744-765, March 1998.
18. Keith Q. T. Zhang, Wireless Communications: Principles, Theory and Methodology, ISBN 978-1119978671, Wiley-Blackwell, December 2015.
19. Jinghua Jin, Diversity receiver design and channel statistic estimation in fading channels, Retrospective Theses and Dissertations, 1268 (2006). https://lib.dr.iastate.edu/rtd/1268
20. P. Mattheijssen et al., Antenna-pattern diversity versus space diversity for use at handhelds, IEEE Transactions on Vehicular Technology, vol. 53, no. 4, July 2004.
Part VI
Multiuser and Multitone Communication Systems
Chapter 13
Spread Spectrum Techniques
Abstract Spread spectrum systems, originally developed for military applications, are extremely resistant to unauthorized detection, jamming, interference and noise. They convert a narrowband signal to a wideband signal by means of spreading. Standards like WiFi and Bluetooth use spread spectrum technology, namely direct sequence spread spectrum and frequency hopping spread spectrum respectively. Spread spectrum techniques rely on the use of one or another form of coding sequences, like m-sequences, Gold codes, Kasami codes, etc. The study of the properties of these codes is important in the design and implementation of a given spread spectrum technique. In this chapter, the generation of spreading sequences and their main properties are covered, followed by simulation models for direct sequence spread spectrum and frequency hopping spread spectrum systems.
13.1 Introduction

A spread-spectrum signal is a signal that occupies a bandwidth much larger than necessary, to the extent that the occupied bandwidth is independent of the bandwidth of the input data. With this technique, a narrowband signal such as a sequence of zeros and ones is spread across a given frequency spectrum, resulting in a broader or wideband signal. Spread spectrum was originally intended for military applications and it offers two main benefits. First, a wideband signal is less susceptible to intentional blocking (jamming) and unintentional blocking (interference or noise) than a narrowband signal. Second, a wideband signal can be perceived as part of the noise and remain undetected. The two most popular spread spectrum techniques widely used in commercial applications are direct sequence spread spectrum (DSSS) and frequency hopping spread spectrum (FHSS). Bluetooth, cordless phones and other fixed broadband wireless access techniques use FHSS; WiFi uses DSSS. Given that both techniques occupy the same frequency band, the co-existence of Bluetooth and WiFi devices is an interesting issue. Both FHSS and DSSS devices perceive each other as noise, i.e., the WiFi and Bluetooth devices see each other as mutual interferers. All spread spectrum techniques make use of some form of spreading or code sequences.
13.2 Code sequences

Be it DSSS or FHSS, the key element in any spread spectrum technique is the use of code sequences. In DSSS, a narrowband signal representing the input data is XOR'ed with a code sequence of much higher bit rate, rendering a wideband signal. In FHSS, the transmitter/receiver pair agrees to and hops among a given set of frequency channels according to a seemingly random code sequence. The most popular code sequences used in spread spectrum applications are
• Maximum-length sequences (m-sequences)
• Gold codes
• Walsh-Hadamard sequences
• Kasami sequences
• Orthogonal codes
• Variable length orthogonal codes
These codes are mainly used in spread spectrum systems for protection against intentional blocking (jamming) as well as unintentional blocking (noise or interference), to provide privacy by protecting signals against eavesdropping, and to enable multiple users to share the same resource. The design and selection of a particular code sequence for a given application largely depends on its properties. As an example, we will consider the generation of m-sequences and Gold codes, along with a demonstration of their code properties.
13.2.1 Sequence correlations

Given the choice of numerous spreading codes like m-sequences, Gold codes, Walsh codes etc., the problem of selecting a spreading sequence for a given application reduces to the selection of such codes based on good discrete-time periodic cross-correlation and auto-correlation properties. The cross-correlation of two discrete sequences x and y, normalized with respect to the sequence length N, is given by

Rxy(k) = (1/N) [#agreements − #disagreements, when comparing one full period of sequence x with a k-position cyclic shift of the sequence y]    (13.1)
Similarly, the auto-correlation of a discrete sequence refers to the degree of agreement between the given sequence and its phase shifted replica. To avoid problems due to false synchronization, it is important to select a spreading code with an excellent auto-correlation property. The periodic autocorrelation function of an m-sequence approaches the ideal case of random noise when the sequence length N is made very large. Hence, m-sequences are commonly employed in practice when good autocorrelation functions are required. For applications like CDMA, codes with good cross-correlation properties are desired. The following function computes the non-normalized sequence correlation between two given discrete sequences of the same length. It can also be used to compute auto-correlation when the first two input arguments are set to the same sequence. This function is used in sections 13.2.2 and 13.2.3 for plotting the sequence correlation properties of m-sequences and Gold code sequences.

Program 13.1: sequence_correlation.m: Computing sequence correlation between two sequences

function Rxy=sequence_correlation(x,y,k1,k2)
%Compute correlation between two binary sequences x and y
%k1 and k2 define the limits of the range of desired shifts
x=x(:);y=y(:); %serialize
if length(x) ~= length(y),
    error('Sequences x and y should be of same length');
end
L=length(x); rangeOfKs = k1:1:k2; %range of cyclic shifts
Rxy=zeros(1,length(rangeOfKs));
%determine initial shift for sequence x
if k1~=0, %do not shift x if k1 is 0
    start=mod(L+k1,L)+1;%+1 for matlab index
    final=mod(L+k1-1,L)+1;%+1 for matlab index
    x=[x(start:end);x(1:final)];%initial shifted sequence
end
q=floor(length(x)/length(y)); %assumes x is always larger
r=rem(length(x),length(y)); %remainder part
y=[repmat(y,q,1); y(1:r)];
for i=1:length(rangeOfKs),
    agreements= sum(x==y);%number of agreements
    disagreements = sum(x~=y); %number of disagreements
    x=[x(2:end);x(1)]; %shift left once for each iteration
    Rxy(i)=agreements-disagreements; %sequence correlation
end,end
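As a quick sanity check of the function above, consider the un-normalized autocorrelation of the length-7 m-sequence 1110010 (the sequence itself is just an illustrative choice, not from the listings): it takes the value 7 at zero shift and −1 at every other shift.

```matlab
x = [1 1 1 0 0 1 0]; %one period of a length-7 m-sequence
Rxx = sequence_correlation(x,x,0,7) %un-normalized autocorrelation
%expected: Rxx = [7 -1 -1 -1 -1 -1 -1 7] for shifts k = 0,1,...,7
```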
13.2.2 Maximum-length sequences (m-sequences)

Maximum-length sequences (also called m-sequences or pseudo-random noise (PN) sequences) are constructed based on Galois field theory, which is an extensive topic in itself. A detailed treatment of Galois field theory can be found in [1] and [2]. Maximum-length sequences are generated using linear feedback shift register (LFSR) structures that implement linear recursion. There are two types of LFSR structures available for implementation: 1) Galois LFSR and 2) Fibonacci LFSR. The Galois LFSR structure is a high speed implementation structure, since it has a shorter clock-to-clock delay path compared to its Fibonacci equivalent. What follows in this discussion is the implementation of an m-sequence generator based on the Galois LFSR architecture. The basic Galois LFSR architecture for an L-th order generating polynomial in GF(2) is given in Figure 13.1. The integer value L denotes the number of delay elements in the LFSR architecture and g0, g1, ..., gL represent the coefficients of the generator polynomial. The generator polynomial of the given LFSR is

g(x) = g0 + g1 x + g2 x^2 + · · · + gL−1 x^(L−1) + gL x^L   (mod 2)    (13.2)

where g0, g1, ..., gL ∈ GF(2), i.e., they take only binary values. The first and last coefficients are usually unity: g0 = gL = 1.
Fig. 13.1: A basic LFSR architecture

For generating an m-sequence, the characteristic polynomial that dictates the feedback coefficients should be a primitive polynomial. Table 13.1 lists some of the primitive polynomials of degree up to L = 12. More comprehensive tables of primitive polynomials can be found in [3].
Table 13.1: Primitive polynomials up to degree L = 12

Degree (L)   Sequence length (N = 2^L − 1)   Primitive polynomial
1            1                               x + 1
2            3                               x^2 + x + 1
3            7                               x^3 + x + 1
4            15                              x^4 + x + 1
5            31                              x^5 + x^2 + 1
6            63                              x^6 + x + 1
7            127                             x^7 + x + 1
8            255                             x^8 + x^7 + x^2 + x + 1
9            511                             x^9 + x^4 + 1
10           1023                            x^10 + x^3 + 1
11           2047                            x^11 + x^2 + 1
12           4095                            x^12 + x^6 + x^4 + x + 1
For implementation in Matlab, the LFSR structure can be coded in a straightforward manner that involves at least two for loops. By exploiting the linear recursion property of the LFSR and its equivalent matrix model, the LFSR can instead be implemented using only one for loop, as shown in the Matlab function given next. The function implements the LFSR structure (Figure 13.1) by using the following equivalent matrix equation:

x(n + 1) = A x(n),   n ≥ 0    (13.3)

with the initial state of the shift registers represented by the vector x(0), and x(n) is an L-dimensional vector given by

x(n) = [x0(n) x1(n) · · · xL−1(n)]^T    (13.4)

The matrix A is of dimension L × L, given by

        | 0 0 0 · · · 0   g0   |
        | 1 0 0 · · · 0   g1   |
    A = | 0 1 0 · · · 0   g2   |    (13.5)
        | 0 0 1 · · · 0   g3   |
        | . . .       .   .    |
        | 0 0 0 · · · 1   gL−1 |

Program 13.2: lfsr.m: Implementation of matrix form of LFSR for m-sequence generation

function [y,states] = lfsr(G,X)
%Galois LFSR for m-sequence generation
% G - polynomial (primitive for m-sequence) arranged as vector
% X - initial state of the delay elements arranged as vector
% y - output of LFSR
% states - gives the states through which the LFSR has sequenced.
%This is particularly helpful in frequency hopping applications
%The function outputs the m-sequence for a single period
%Sample call:
%5th order LFSR polynomial: x^5+x^2+1 => g5=1,g4=0,g3=0,g2=1,g1=0,g0=1
%with initial state [0 0 0 0 1]: lfsr([1 0 0 1 0 1],[0 0 0 0 1])
g=G(:); x=X(:); %serialize G and X as column vectors
if length(g)-length(x)~=1,
    error('Length of initial seed X0 should be equal to the number of delay elements (length(g)-1)');
end
%LFSR state-transition matrix construction
L = length(g)-1; %order of polynomial
A0 = [zeros(1,L-1); eye(L-1)]; %A-matrix construction
g=g(1:end-1);
A = [A0 g]; %LFSR state-transition matrix
N = 2^L-1; %period of maximal length sequence
y = zeros(1,N);%array to store output
states=zeros(1,N);%LFSR states(useful for Frequency Hopping)
for i=1:N, %repeat for each clock period
    states(i)=bin2dec(char(x.'+'0'));%convert LFSR states to a number
    y(i) = x(end); %output the last bit
    x = mod(A*x,2); %LFSR equation
end

Correlation properties of m-sequences are demonstrated here, using the sequence_correlation function defined in section 13.2.1. Let's generate an m-sequence of period N = 31 using the 5th order characteristic polynomial g(x) = x^5 + x^2 + 1. The coefficients of the characteristic polynomial are G = [g5, g4, g3, g2, g1, g0] = [1, 0, 0, 1, 0, 1] and let's start the generator with the initial seed X0 = [0, 0, 0, 0, 1]. Typically, the autocorrelation function of an m-sequence is two valued. The normalized autocorrelation of an m-sequence of length N takes the two values [1, −1/N]. This is demonstrated in Figure 13.2. We observe that the autocorrelation of an m-sequence carries some similarities with that of a random sequence. If the length of the m-sequence is increased, the out-of-peak correlation −1/N reduces further and thereby the peaks become more distinct. This property makes m-sequences suitable for synchronization and for the detection of information in single-user direct sequence spread spectrum systems.

Program 13.3: Generate an m-sequence and compute its auto-correlation

>>y=lfsr([1 0 0 1 0 1],[0 0 0 0 1]) %g(x)=x^5+x^2+1
>>N=31;%Period of the sequence N=2^L-1
>>Ryy=1/N*sequence_correlation(y,y,-35,35)%normalized autocorr.
>>plot(-35:1:35,Ryy) %Plot autocorrelation

To demonstrate the cross-correlation properties of two different m-sequences, consider two different primitive polynomials g1(x) = x^5 + x^4 + x^2 + x + 1 and g2(x) = x^5 + x^4 + x^3 + x + 1 that generate two different m-sequences of length N = 31. The normalized cross-correlation of the aforementioned m-sequences, shown in Figure 13.3, is given by the following script.

Program 13.4: Generate two m-sequences and compute their cross-correlation

>>x=lfsr([1 1 0 1 1 1],[0 0 0 0 1]) %g_1(x)=x^5+x^4+x^2+x+1
>>y=lfsr([1 1 1 0 1 1],[0 0 0 0 1]) %g_2(x)=x^5+x^4+x^3+x+1
>>N=31; %Period of the sequence N=2^L-1
>>Rxy=1/N*sequence_correlation(x,y,0,31)%cross-correlation
>>plot(0:1:31,Rxy)%Plot cross-correlation

The cross-correlation plot contains high peaks at certain lags (as high as 35%), and hence these m-sequences cause multiple access interference (MAI), leading to severe performance degradation. Hence, m-sequences are not suitable for orthogonalization of users in multi-user spread spectrum systems like CDMA.
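As an aside, the states output of the lfsr function can also be sanity-checked. For a primitive polynomial, the Galois LFSR must cycle through every non-zero state exactly once per period, the very property exploited later for frequency hopping. This snippet is an illustrative check, not one of the book's numbered programs:

```matlab
[y,states]=lfsr([1 0 0 1 0 1],[0 0 0 0 1]); %g(x)=x^5+x^2+1
isequal(sort(states),1:31) %true: all 31 non-zero 5-bit states visited once
```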
Fig. 13.2: Normalized autocorrelation of m-sequence generated using the polynomial g(x) = x^5 + x^2 + 1
Fig. 13.3: Normalized cross-correlation of two m-sequences generated using the polynomials g1(x) = x^5 + x^4 + x^2 + x + 1 and g2(x) = x^5 + x^4 + x^3 + x + 1
13.2.3 Gold codes

In applications like code division multiple access (CDMA) and satellite navigation, a large number of codes with good cross-correlation properties is a necessity. Code sequences that have bounded small cross-correlations are useful when multiple devices are broadcasting in the same frequency range. Gold codes, named after Robert Gold [4], are suited for this application, since a large number of codes with controlled correlation can be generated by a simple time shift of a preferred pair of m-sequences. Gold sequences belong to the category of product codes, where a preferred pair of m-sequences of the same length are XOR'ed (modulo-2 addition) to produce a Gold sequence. The two m-sequences must maintain the same phase relationship till all the additions are performed. A slight change of phase even in one of the m-sequences produces a different Gold sequence altogether. Gold codes are non-maximal and therefore they have poorer auto-correlation properties when compared to those of the underlying m-sequences. A typical implementation of a Gold code generator is shown in Figure 13.4.

Fig. 13.4: Hardware architecture for generating Gold codes

In this implementation, two linear feedback shift registers (LFSR), each of length L, are configured to generate two different m-sequences. The initial values of the LFSRs are controlled by two different seed values that set the LFSRs to non-zero states. The phase of the m-sequence loaded into an LFSR can be controlled by shifting the initial seed, thereby providing an option to generate a new Gold sequence. There exist numerous choices of Gold sequences based on the m-sequence configuration of the LFSRs and what is initially loaded into the two LFSRs. Not all the Gold sequences possess good correlation properties. In
order to have a Gold code sequence with three valued peak cross-correlation magnitudes that are both bounded and uniform, the m-sequences generated by the two LFSRs need to be of the preferred type.

Table 13.2: Polynomials (feedback connections) for generating preferred pairs of m-sequences

Length of    Gold code           Preferred pairs                           Three valued cross-
LFSR (L)     period (N = 2^L−1)  polynomial 1         polynomial 2         correlation (un-normalized)
5            31                  [2,3,4,5]            [2,5]                −9, −1, 7
6            63                  [1,2,5,6]            [1,6]                −17, −1, 15
7            127                 [1,2,3,7]            [3,7]                −17, −1, 15
7            127                 [1,3,7]              [1,7]                −17, −1, 15
7            127                 [1,2,3,4,5,7]        [1,2,3,7]            −17, −1, 15
9            511                 [3,4,6,9]            [4,9]                −33, −1, 31
9            511                 [1,4,8,9]            [3,4,6,9]            −33, −1, 31
10           1023                [1,4,6,7,9,10]       [1,5,8,10]           −65, −1, 63
10           1023                [3,4,5,6,7,8,9,10]   [1,4,6,7,9,10]       −65, −1, 63
10           1023                [1,2,4,6,7,10]       [1,3,4,5,6,7,8,10]   −65, −1, 63
11           2047                [2,5,8,11]           [2,11]               −65, −1, 63
Algebraic details on selecting the LFSR polynomials for generating a preferred pair of m-sequences are given in Gold's work [4]. A search for preferred pairs is therefore not warranted; instead, the configurations given in Table 13.2 can be used for generating Gold codes that provide optimal correlation properties (three valued correlation, as mentioned in reference [1]). The following points must be noted when choosing the length of the LFSR (L) for generating a Gold code.
• Three valued cross-correlation values occur when L is odd or when mod(L, 4) = 2.
• The peak cross-correlation magnitude achieves the minimum possible value only when L is odd (L ∈ {5, 7, 9, 11, · · · }). Therefore, choosing odd values for L is the best choice.
• When mod(L, 4) = 2, the polynomials listed in the table are still of preferred type (they produce three valued cross-correlation), but the lower bound is weaker when compared to the lower bound when L is odd.

A sample implementation for generating a Gold code of period N = 127 using the preferred pair polynomials [7, 3, 2, 1] and [7, 3] is shown in Figure 13.5. In this implementation, each register in the LFSR is realized as a D flip-flop, all connected in cascaded fashion and operating at a given synchronous clock. Changing the initial seed values for the shift registers produces a different set of Gold codes. Generation of Gold codes is coded in the following function.
Fig. 13.5: Gold sequence generator for the preferred pair [1,2,3,7] and [3,7]
Program 13.5: gold_code_generator.m: Software implementation of Gold code generator

function [y] = gold_code_generator(G1,G2,X1,X2)
%Implementation of Gold code generator
%G1-preferred polynomial 1 for LFSR1 arranged as [g0 g1 g2 ... gL-1]
%G2-preferred polynomial 2 for LFSR2 arranged as [g0 g1 g2 ... gL-1]
%X1-initial seed for LFSR1 [x0 x1 x2 ... xL-1]
%X2-initial seed for LFSR2 [x0 x1 x2 ... xL-1]
%y-Gold code
%The function outputs the Gold code for a single period
%Sample call:
%7th order preferred polynomials [1,2,3,7] and [3,7]
%(polynomials: 1+x+x^2+x^3+x^7 and 1+x^3+x^7)
%i.e, G1=[1,1,1,1,0,0,0,1] and G2=[1,0,0,1,0,0,0,1]
%with initial states X1=[0,0,0,0,0,0,0,1], X2=[0,0,0,0,0,0,0,1]:
%gold_code_generator(G1,G2,X1,X2)
g1=G1(:); x1=X1(:); %serialize G1 and X1 matrices
g2=G2(:); x2=X2(:); %serialize G2 and X2 matrices
if length(g1)~=length(g2) || length(x1)~=length(x2), %error if either pair mismatches
    error('Length mismatch between G1 & G2 or X1 & X2');
end
%LFSR state-transition matrix construction
L = length(g1)-1; %order of polynomial
A0 = [zeros(1,L-1); eye(L-1)]; %A-matrix construction
g1=g1(1:end-1);g2=g2(1:end-1);
A1 = [A0 g1]; %LFSR1 state-transition matrix
A2 = [A0 g2]; %LFSR2 state-transition matrix
N = 2^L-1; %period of maximal length sequence
y = zeros(1,N);%array to store output
for i=1:N, %repeat for each clock period
    y(i)= mod(x1(end)+x2(end),2);%XOR of outputs of LFSR1 & LFSR2
    x1 = mod(A1*x1,2); %LFSR equation
    x2 = mod(A2*x2,2); %LFSR equation
end

Figure 13.3 shows the periodic cross-correlation of two m-sequences that are not a preferred pair. Let's take a preferred pair from Table 13.2 for N = 31, having the feedback connections [2, 3, 4, 5] and [2, 5]. The correlation of the two sequences can be computed using the function given in section 13.2.1 of this chapter. The cross-correlation of the chosen preferred pair, shown in Figure 13.6, is generated as follows.

Program 13.6: Generate two preferred pair m-sequences and compute cross-correlation

G1=[1 1 1 1 0 1]; G2=[1 0 0 1 0 1]; %feedback connections
X1=[0 0 0 0 1]; X2=[0 0 0 0 1]; %initial states of LFSRs
y1=lfsr(G1,X1); y2=lfsr(G2,X2); N=31; %m-sequence 1 and 2
Ry1y2= 1/N*sequence_correlation(y1,y2,0,31);%cross-correlation
plot(0:1:31,Ry1y2)%plot correlation
Fig. 13.6: Normalized cross-correlation of preferred pair m-sequences using the feedback connections [2,3,4,5] and [2,5]

The auto-correlation of the Gold code sequence, plotted in Figure 13.7, can be obtained using the following script. The autocorrelation and cross-correlation plots reveal that the Gold code sequence does not possess
the excellent auto-correlation property of the individual m-sequences, but it possesses better cross-correlation properties in comparison with the individual m-sequences.

Program 13.7: Generate Gold code and compute auto-correlation

N=31;%period of Gold code
G1=[1 1 1 1 0 1]; G2=[1 0 0 1 0 1]; %feedback connections
X1=[0 0 0 0 1]; X2=[0 0 0 0 1]; %initial states of LFSRs
y=gold_code_generator(G1,G2,X1,X2);%Generate Gold code
Ryy= 1/N*sequence_correlation(y,y,0,31);%auto-correlation
plot(0:1:31,Ryy);%plot auto-correlation
Fig. 13.7: Autocorrelation of Gold code sequence generated using the preferred pair feedback connections [2,3,4,5] and [2,5]
13.3 Direct sequence spread spectrum

Direct sequence spread spectrum (DSSS) is one of several ways of generating a spread spectrum signal. Figure 13.8 shows the block diagram of a DSSS transmitter. A binary input sequence d ∈ {0, 1} is converted to a waveform of bit duration Tb. The data rate of the input is Rb = 1/Tb. The resulting waveform is then multiplied with a pseudo-random spreading sequence waveform having the rate Rc = 1/Tc that is much larger than the input data rate, i.e., Rc >> Rb. The bit duration Tc of the spreading sequence is called the chip duration and Rc = 1/Tc is called the chip rate. The process of multiplying a low rate input data sequence with a very high rate pseudo-random sequence results in a waveform that occupies a bandwidth much larger than the bandwidth of the input data. This conforms with the definition of a spread-spectrum signal given in the beginning of this chapter. For transmission, the signal after spreading is modulated using BPSK modulation. The power spectral density of the resulting signal s(t) will have a main lobe with nulls that are 1/Tc away on either side of the carrier frequency. Hence, the null-to-null bandwidth of the DSSS signal is 2/Tc. Figure 13.9 shows the block diagram of a DSSS receiver. The receiver down-converts the transmitted signal using a locally generated carrier. The down-converted signal is then passed through a low-pass filter that is designed to remove the unwanted spectral components generated by the down-conversion process. The resulting baseband spread-spectrum signal is then multiplied by the same spreading sequence waveform
Fig. 13.8: Direct sequence spread spectrum transmitter using BPSK modulation
(synchronized with the transmitter). The de-spread signal is then sampled and the data bits are estimated after passing them through a threshold detector.
Fig. 13.9: Direct sequence spread spectrum receiver for BPSK modulated carrier

The designer is free to choose the pulse shape p(t) of the spreading sequence, but should make sure the correlator/matched filter at the receiver uses the same pulse shape p(t) in the successive intervals of Tb.
13.3.1 Simulation of DSSS system

Discrete time equivalent models for the DSSS transmitter and receiver are helpful in software based simulation of a complete DSSS system. This section aims to implement the software model for a simple DSSS transmitter/receiver. Once the model is implemented, the transmitter and receiver models will be put to the test by evaluating their performance over an AWGN channel. If the performance measurement over the AWGN channel
conforms with the expected theoretical results, we will have a good custom-made working model for a DSSS transmitter/receiver which can be extended further to include other aspects of a complete spread spectrum system.
Fig. 13.10: Discrete time equivalent model for simulating DSSS transmitter
13.3.1.1 Generate PRBS signal

The ability to generate a PRBS signal, such as an m-sequence or Gold code sequence, is an essential part of the simulation model. The following function implements a generic PRBS generator that can generate either m-sequences or Gold code sequences. This function is used in this section and in section 13.3.3 as well, where a DSSS jammer is implemented. The function leverages the functions given in sections 13.2.2 and 13.2.3 that individually generate m-sequences and Gold code sequences.

Program 13.8: generatePRBS.m: Generating PRBS sequence - from m-sequence or Gold code

function [prbs] = generatePRBS(prbsType,G1,G2,X1,X2)
%Generate PRBS sequence - choose from either msequence or gold code
%prbsType - type of PRBS generator - 'MSEQUENCE' or 'GOLD'
% If prbsType == 'MSEQUENCE' G1 is the generator poly for LFSR
% and X1 its seed. G2 and X2 are not used
% If prbsType == 'GOLD' G1,G2 are the generator polynomials
% for LFSR1/LFSR2 and X1,X2 are their initial seeds.
%G1,G2 - Generator polynomials for PRBS generation
%X1,X2 - Initial states of LFSRs
%The PRBS generators result in 1 period of PRBS,
%repeat it as needed to suit the data length
if strcmpi(prbsType,'MSEQUENCE'),
    prbs=lfsr(G1,X1);%use only one poly and initial state vector
elseif strcmpi(prbsType,'GOLD'),
    prbs=gold_code_generator(G1,G2,X1,X2);%full length Gold sequence
else %Gold codes as default
    G1=[1 1 1 1 0 1]; G2=[1 0 0 1 0 1]; %LFSR polynomials
    X1=[0 0 0 0 1]; X2=[0 0 0 0 1]; %initial states of LFSRs
    prbs = gold_code_generator(G1,G2,X1,X2);
end

In the simulation model, the input data and the PRBS sequence must be matched properly with regard to their lengths. So it is necessary to have a function that can repeat a given sequence of arbitrary length to match the length of another sequence, say the PRBS sequence. The following function is drafted for that purpose. This function is also used in section 13.3.3 where a DSSS jammer is implemented.

Program 13.9: repeatSequence.m: Repeat given sequence to match another sequence of arbitrary length

function [y]= repeatSequence(x,N)
%Repeat a given sequence x of arbitrary length to match the given
%length N. This function is useful to repeat any sequence
%(say PRBS) to match the length of another sequence
x=x(:); %serialize
xLen = length(x); %length of sequence x
%truncate or extend sequence x to suit the given length N
if xLen >= N, %truncate x when its length is greater than or equal to N
    y = x(1:N);
else
    temp = repmat(x,fix(N/xLen),1); %repeat sequence an integer number of times
    residue = mod(N,xLen); %remainder when dividing N by xLen
    if residue ~=0, %append the remainder part
        temp = [temp; x(1:residue)];
    end
    y = temp; %repeated sequence matching length N
end
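A quick illustration of the function above (the numbers are an arbitrary illustrative choice, not from the text): repeating a 3-chip pattern to length 7 copies the pattern twice and appends its first chip.

```matlab
y = repeatSequence([1 0 1].',7); %extend a 3-chip pattern to 7 chips
y.' %expected: 1 0 1 1 0 1 1
```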
13.3.1.2 DSSS transmitter

The discrete-time equivalent model for a BPSK DSSS transmitter is given in Figure 13.10 and its corresponding Matlab implementation is given next. The function accepts a stream of binary input (d) which needs to be spread using a pseudo-random binary sequence (PRBS). The function allows choosing between two types of PRBS generators - m-sequence or Gold code - and the user is expected to supply the correct generator polynomials and initial LFSR seeds for the chosen PRBS type. Refer to sections 13.2.2 and 13.2.3 for details on the implementation of the m-sequence and Gold code generators. The Matlab function also calls two generic functions, namely generatePRBS and repeatSequence, which were defined in section 13.3.1.1. We should recognize that the length of the input data d and the period of the chosen PRBS may be different. As a first task, the function generates a repeating PRBS pattern based on three factors - the length of the input data, the required data rate Rb and the required PRBS chip rate Rc. Next, in order to represent the data and the PRBS as smooth waveforms, an oversampling factor L is used. The oversampling factor is determined by the highest frequency used in the transmitter, which turns out to be the carrier frequency (fc >> Rc >> Rb). For convenience, we wish to have an integral number of carrier cycles inside one PRBS chip. For the performance simulation of the Tx-Rx chain, fc is chosen to be twice the chip rate, and setting L = 8 is sufficient to represent two cycles of the carrier waveform inside one PRBS chip. If the user wishes to increase the carrier frequency further, then the oversampling factor L needs to be increased accordingly for good waveform representation.
If the PRBS sequence is oversampled by a factor of L, then the data path needs to be oversampled by L·Rc/Rb. This is done to match the input lengths at the XOR gate that performs the spreading operation. The baseband DSSS signal that results from the spreading operation is finally multiplied by a cosine carrier at fc and the BPSK modulated DSSS signal is transmitted.

Program 13.10: dsss_transmitter.m: Function implementing a DSSS transmitter

function [s_t,carrier_ref,prbs_ref]=dsss_transmitter(d,prbsType,G1,...
    G2,X1,X2,Rb,Rc,L)
%Direct Sequence Spread Spectrum (DSSS) transmitter - returns the DSSS
%waveform (s_t), the reference carrier and the PRBS reference waveform
%for use in synchronization at the receiver
%d - input binary data stream
%prbsType - type of PRBS generator - 'MSEQUENCE' or 'GOLD'
% If prbsType == 'MSEQUENCE' G1 is the generator poly for LFSR
% and X1 its seed. G2 and X2 are not used
% If prbsType == 'GOLD' G1,G2 are the generator polynomials
% for LFSR1/LFSR2 and X1,X2 are their initial seeds.
%G1,G2 - Generator polynomials for PRBS generation
%X1,X2 - Initial states of LFSRs
%Rb - data rate (bps) for the data d
%Rc - chip-rate (Rc >> Rb AND Rc is an integral multiple of Rb)
%L - oversampling factor for waveform generation
prbs = generatePRBS(prbsType,G1,G2,X1,X2);
prbs=prbs(:); d=d(:); %serialize
dataLen= length(d)*(Rc/Rb);%required PRBS length to cover the data
prbs_ref= repeatSequence(prbs,dataLen);%repeat PRBS to match data
d_t = kron(d,ones(L*Rc/Rb,1)); %data waveform
prbs_t = kron(prbs_ref,ones(L,1)); %spreading sequence waveform
sbb_t = 2*xor(d_t,prbs_t)-1; %XOR data and PRBS, convert to bipolar
n=(0:1:length(sbb_t)-1).';
carrier_ref=cos(2*pi*2*n/L);
s_t = sbb_t.*carrier_ref; %modulation, 2 cycles per chip
figure(1); %Plot waveforms
subplot(3,1,1); plot(d_t); title('data sequence'); hold on;
subplot(3,1,2); plot(prbs_t); title('PRBS sequence');
subplot(3,1,3); plot(s_t); title('DS-SS signal (baseband)');
end

The following Matlab code provides an outline of how to use the function given above. The implemented DSSS transmitter is tested with random data and a Gold code generator. The resulting waveforms at various stages of the transmitter are plotted in Figure 13.11.

Program 13.11: Code snippet to plot DSSS transmitter waveforms

Rb=100; Rc=1000; L=32; %data rate, chip rate and oversampling factor
prbsType='GOLD'; %PRBS type is set to Gold code
G1=[1 1 1 1 0 0 0 1]; G2=[1 0 0 1 0 0 0 1];%LFSR polynomials
X1=[0 0 0 0 0 0 1]; X2=[0 0 0 0 0 0 1]; %initial states of LFSRs
d = rand(1,10) >= 0.5; %10 bits of random data
[s_t,carrier_ref,prbs_ref]=dsss_transmitter(d,prbsType,G1,G2,X1,X2,Rb,Rc,L);
Fig. 13.11: BPSK DSSS transmitter waveforms using Gold code as spreading code
13.3.1.3 DSSS receiver

Figure 13.12 depicts the discrete time equivalent of a BPSK DSSS receiver. The receiver, implemented as a Matlab function, is given next. The receiver consists of a coherent detector for BPSK demodulation followed by the de-spreading operation. In the coherent detection technique, the receiver should know the frequency and phase of the carrier, synchronized with that of the transmitter. This is usually done by means of carrier recovery circuits like a PLL or Costas loop. For simplicity, we assume that the carrier phase recovery has been done and therefore we directly use the generated reference frequency at the receiver. After coherent detection, the demodulated symbols are XOR'ed with the spreading sequence that is synchronized with that of the transmitter. At this stage, the receiver should know the spreading sequence used at the transmitter and must be synchronized with it. If not, the spread spectrum signal will appear noisy to the receiver, and this demonstrates a cornerstone property of spread spectrum - the characteristic of low probability of interception.
Fig. 13.12: Discrete time equivalent model for simulating DSSS receiver
Program 13.12: dsss_receiver.m: Function implementing a DSSS receiver

function d_cap = dsss_receiver(r_t,carrier_ref,prbs_ref,Rb,Rc,L)
%Direct Sequence Spread Spectrum (DSSS) Receiver (Rx)
%r_t - received DS-SS signal from the transmitter (Tx)
%carrier_ref - reference carrier (synchronized with transmitter)
%prbs_ref - reference PRBS signal(synchronized with transmitter)
%Rb - data rate (bps) for the data d
%Rc - chip-rate (Rc >> Rb AND Rc is an integral multiple of Rb)
%L - oversampling factor used for waveform generation at the Tx
%The function first demodulates the received signal using the
%reference carrier and then XORs the result with the reference
%PRBS. Finally it returns the recovered data.
%------------BPSK Demodulation----------
v_t = r_t.*carrier_ref;
x_t=conv(v_t,ones(1,L)); %integrate for Tc duration
y = x_t(L:L:end);%sample at every Lth sample (i.e, every Tc instant)
z = ( y > 0 ).'; %Hard decision (gives demodulated bits)
%-----------De-Spreading----------------
y = xor(z,prbs_ref.');%reverse the spreading process using PRBS ref.
d_cap = y(Rc/Rb:Rc/Rb:end); %sample at every (Rc/Rb)th symbol
13.3.2 Performance of direct sequence spread spectrum over AWGN channel

To test the transmitter and receiver models presented in the previous section, we assemble them as a communication link over an AWGN channel. The following Matlab code demonstrates the performance of the DSSS transmitter and receiver over a range of signal-to-noise ratios of an AWGN channel. The simulated performance will match that of conventional BPSK [5]. Refer to Figure 6.4 for the performance of conventional BPSK over AWGN.

Program 13.13: dsss_tx_rx_chain.m: Performance of DSSS system over AWGN channel

%----PRBS definition----
prbsType='MSEQUENCE'; %PRBS type
G1=[1 0 0 1 0 1];%LFSR polynomial
X1=[0 0 0 0 1];%initial seed for LFSR
G2=0; X2=0;%G2,X2 are zero for m-sequence (only one LFSR used)
%--- Input data, data rate, chip rate-----
N=10e5; %number of data bits to transmit
d= rand(1,N) >= 0.5; %random data
Rb=2e3; %data rate (bps) for the data d
Rc=6e3; %chip-rate (Rc >> Rb AND Rc is an integral multiple of Rb)
L=8; %oversampling factor for waveform generation
SNR_dB = -4:3:20; %signal to noise ratios (dB)
BER = zeros(1,length(SNR_dB)); %placeholder for BER values
for i=1:length(SNR_dB),
    [s_t,carrier_ref,prbs_ref]=dsss_transmitter(d,prbsType,G1,G2,X1,X2,Rb,Rc,L);%Tx
    %-----Compute and add AWGN noise to the transmitted signal----
    Esym=L*sum(abs(s_t).^2)/(length(s_t));%Calculate symbol energy
    N0=Esym/(10^(SNR_dB(i)/10)); %Find the noise spectral density
    n_t = sqrt(N0/2)*randn(length(s_t),1); %computed noise
    r_t = s_t + n_t; %received signal
    dCap = dsss_receiver(r_t,carrier_ref,prbs_ref,Rb,Rc,L); %Receiver
    BER(i) = sum(dCap~=d)/length(d); %Bit Error Rate
end
theoreticalBER = 0.5*erfc(sqrt(10.^(SNR_dB/10)));
figure; semilogy(SNR_dB, BER,'k*'); hold on;
semilogy(SNR_dB, theoreticalBER,'r');
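As a spot check on the theoretical curve computed in the listing above, the BPSK bit error probability can be evaluated at a sample point; for instance, at Eb/N0 = 8 dB:

```latex
P_b \;=\; \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)
    \;=\; \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{10^{8/10}}\right)
    \;\approx\; \frac{1}{2}\,\mathrm{erfc}(2.51)
    \;\approx\; 1.9\times10^{-4}
```

The simulated DSSS points should land on this curve, since despreading with a synchronized PRBS reference restores the underlying BPSK performance.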
13.3.3 Performance of direct sequence spread spectrum in the presence of a jammer

Direct sequence spread spectrum systems were originally meant for providing low probability of interception and for anti-jamming applications. Many commercial applications measure the performance of such systems in the presence of a jammer. DSSS systems spread the baseband data signal over a larger bandwidth, which provides resistance to jammers and hence offers low probability of interception. The amount of bandwidth spread is decided by a factor called the processing gain, denoted as G_p. It is defined as the ratio of the RF bandwidth of the transmitted signal to the unspread baseband bandwidth of the information signal. For a DSSS system with spreading that is independent of the data, the processing gain is the ratio of the chip rate of the spreading sequence to the data rate of the information sequence.

G_p = chip rate / data rate = R_c / R_b    (13.6)
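For the parameter sets used in the simulations of this chapter, the processing gain works out as follows (the dB conversions are an added illustration):

```latex
\text{Program 13.13: } G_p = \frac{R_c}{R_b} = \frac{6\,\text{kHz}}{2\,\text{kHz}} = 3
\;\;(\approx 4.8\,\text{dB}),
\qquad
\text{Program 13.15: } G_p = 31
\;\;(10\log_{10} 31 \approx 14.9\,\text{dB})
```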
The effectiveness of DSSS in the presence of interference can be demonstrated using different types of signal jammers, and this forms the objective of this section. The simulation block diagram for this purpose is given in Figure 13.13. The model in Figure 13.13 is a baseband simulation, meaning that all the processing, including the addition of the jamming signal, happens at baseband. The PRBS generating functions for m-sequences and Gold codes, given in sections 13.2 and 13.3.1.1, are leveraged in the simulation code. It should be emphasized that the discrete-time simulation of the given model, for high processing gain, is impractical due to the huge number of samples involved and has the potential to cause memory issues at runtime. Hence, it is advised to keep the processing gain (G_p) at a reasonable value.

The simulation model presented in this section is very similar to the one given in the previous section of this chapter. The only difference is the addition of the jammer block. The jammer block can be modeled to generate different types of jamming (interfering) signals. A signal is considered an interference only if all the following conditions are simultaneously satisfied.

• The interference occurs within the demodulation bandwidth and at the same time as the demodulation.
• The interfering signal is strong enough to cause data corruption. In the case of a jammer, this is quantified in terms of the jammer-to-signal ratio (JSR), which is the ratio of the jammer power to the signal power.

The different types of jamming on a DSSS system include broadband noise jamming, partial-band noise jamming, pulsed jamming and tone jamming. Tone jamming is further classified into single tone jamming and multi-tone jamming. A single tone jammer can be formed by a sinusoidal interference. Of all the jammer types, the single tone jammer is the easiest to generate.
The discrete time equation to generate a single tone jammer is given by

J[n] = √(2P_j) sin(2π f_j n + θ_j)    (13.7)
Fig. 13.13: Simulation model for BPSK DSSS system with tone jamming
where P_j is the jammer power for the required JSR, f_j is the frequency offset of the jammer from the center frequency and θ_j is the phase offset of the jammer, which can be a uniform random variable between 0 and 2π. The Matlab implementation of the single tone jammer is given next.

Program 13.14: tone_jammer.m: Single tone jammer

function [J,JSRmeas] = tone_jammer(JSR_dB,Fj,Thetaj,Esig,L)
%Generates a single tone jammer (J) for the following inputs
%JSR_dB - required Jammer to Signal Ratio for generating the jammer
%Fj - Jammer frequency offset from center frequency (-0.5 < Fj < 0.5)
%Thetaj - phase of the jammer tone (0 to 2*pi)
%Esig - transmitted signal power to which jammer needs to be added
%L - length of the transmitter signal vector
%The output JSRmeas is the measured JSR from the generated samples
JSR = 10.^(JSR_dB/10); %Jammer-to-Signal ratio in linear scale
Pj = JSR*Esig; %required Jammer power for the given signal power
n = (0:1:L-1).'; %indices for generating jammer samples
J = sqrt(2*Pj)*sin(2*pi*Fj*n+Thetaj); %Single Tone Jammer
Ej = sum(abs(J).^2)/L; %computed jammer energy from generated samples
JSRmeas = 10*log10(Ej/Esig); %measured JSR
disp(['Measured JSR from jammer & signal: ',num2str(JSRmeas),'dB']);

A multitone jammer inserts K equal power tones uniformly distributed over the spread spectrum bandwidth. For multiple tone jamming, equation 13.7 can be extended as follows

J[n] = ∑_{i=1}^{K} √(2P_j/K) sin(2π f_ji n + θ_ji)    (13.8)
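A quick usage sketch for the function above (the numeric values here are illustrative, not taken from the text): request a 0 dB JSR tone against a unit-power signal and check that the measured JSR printed by the function stays close to the request.

```matlab
% Illustrative call to tone_jammer (values chosen for this sketch only)
Esig = 1;        %signal power (unit power assumed for this sketch)
Lsig = 10000;    %length of the transmit vector
[J, JSRmeas] = tone_jammer(0, 0.01, pi/4, Esig, Lsig);
%JSRmeas should be near the requested 0 dB; the small residual
%deviation comes from the finite length and phase of the tone
```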
The primary performance measure for a spread spectrum system is the probability of bit error vs. the signal-to-noise ratio (SNR) and the jammer-to-signal ratio (JSR). The following code implements the simulation model given in Figure 13.13 for the single tone jamming case, and the corresponding performance curves are shown in Figure 13.14. Note: the Matlab code calls two generic functions, namely generatePRBS and repeatSequence, that are defined in section 13.3.1.1.

Program 13.15: dsss_tx_rx_chain_with_jammer.m: Performance simulation of DSSS in the presence of a single tone jammer and AWGN noise

%Perf. of BPSK DSSS over AWGN channel in presence of a jammer
clearvars; clc;
%--- Input parameters------
Gp = 31; %processing gain
N=10e5; %number of data bits to transmit
EbN0dB = 0:2:14; %Eb/N0 ratios (dB)
JSR_dB = [-100, -10,-5, 0, 2]; %Jammer to Signal ratios (dB)
Fj = 0.0001; %Jamming tone - normalized frequency (-0.5 < F < 0.5)
Thetaj = rand(1,1)*2*pi; %random jammer phase (0 to 2*pi radians)
%----PRBS definition (refer previous sections of this chapter)---
prbsType='MSEQUENCE'; %PRBS type, period of the PRBS should match Gp
G1=[1 0 0 1 1 1]; %LFSR polynomial
X1=[0 0 0 0 1]; %initial seed for LFSR
G2=0; %G2,X2 set to zero for m-sequence (only one LFSR used)
X2=0;
%---------------Transmitter------------------
d= rand(1,N) >=0.5; %Random binary information source
dp =2*d-1; %converted to polar format (+/- 1)
dr = kron(dp(:),ones(Gp,1)); %repeat d[n] - Gp times
L = N*Gp; %length of the data after the repetition
prbs = generatePRBS(prbsType,G1,G2,X1,X2); %1 period of PRBS
prbs = 2*prbs-1; %convert PRBS to +/- 1 values
prbs=prbs(:); %serialize
c=repeatSequence(prbs,L); %repeat PRBS sequence to match data length
s = dr.*c; %multiply information source and PRBS sequence
%Calculate signal power (used to generate jamming signal)
Eb=Gp*sum(abs(s).^2)/L;
plotColors =['r','b','k','g','m'];
for k=1:length(JSR_dB), %loop for each given JSR
    %Generate single tone jammer for each given JSR
    J = tone_jammer(JSR_dB(k),Fj,Thetaj,Eb,L); %generate a tone jammer
    BER = zeros(1,length(EbN0dB)); %placeholder for BER values
    for i=1:length(EbN0dB), %loop for each given SNR
        %-----------AWGN noise addition---------
        N0=Eb/(10^(EbN0dB(i)/10)); %Find the noise spectral density
        w = sqrt(N0/2)*randn(length(s),1); %computed white noise
        r = s + J + w; %received signal
        %------------Receiver--------------
        yr = r.*c; %multiply received signal with PRBS reference
        y=conv(yr,ones(1,Gp)); %correlation type receiver
        dCap = y(Gp:Gp:end) > 0; %threshold detection
        BER(i) = sum(dCap.'~=d)/N; %Bit Error Rate
        semilogy(EbN0dB, BER, [plotColors(k) '-*']); hold on; %plot
    end
end
%theoretical BER (when no Jammer)
theoreticalBER = 0.5*erfc(sqrt(10.^(EbN0dB/10)));
%reference performance for BPSK modulation
semilogy(EbN0dB, theoreticalBER,'k-O');
xlabel('Eb/N0 (dB)'); ylabel('Bit Error Rate (Pb)');
Fig. 13.14: Performance of BPSK DSSS having G p = 31 in presence of a tone jammer and AWGN noise
13.4 Frequency hopping spread spectrum

Frequency hopping spread spectrum (FHSS) is a type of wireless transmission that rapidly switches the data transmission over a number of frequency channels. By using many frequency channels in the same transmission band, the signal is spread over the entire band, making it a wideband signal. Effectively, in FHSS, the signal transmission hops from one frequency channel to another according to a seemingly random pattern called the hopping sequence. For proper operation of a frequency-hopping system, the hopping sequence must be agreed upon and followed by both the transmitter and the receiver through a process called synchronization. An FHSS system has several advantages.

• It is secure against eavesdropping. Unlike a fixed frequency system, where the eavesdropper needs to know only one frequency, FHSS requires the eavesdropper to know the exact sequence of pseudorandom frequencies, and with precise timing.
• Since the signal transmission stays only briefly at a given frequency, the likelihood of collision with another device operating at the same frequency is greatly reduced. This property provides strong resistance to narrowband interference on a single channel.
• More users can share the same band by allowing the users to hop onto different frequency channels at different times, thereby reducing the probability of interference.
• The system is resistant to multipath fading due to its inherent frequency diversity. Since the frequency of the transmission changes, so does the wavelength of the transmission. This changes the relationship between the direct and reflected waves that cause destructive/constructive interference.
13.4.1 Simulation model

In reality, an FHSS signal occupies the RF spectrum. For example, the 863 MHz - 870 MHz band for short range devices (SRD680 band) can be used to test an FHSS/DSSS system [6]. Due to the very high sampling rate and, hence, the huge memory requirement, it is not viable to simulate the FHSS system for the specified RF band. On the other hand, baseband implementations offer an excellent opportunity to design, simulate and test a system prior to translating it to the given RF band. It is worthwhile to note that practical RF transceivers are of the direct conversion type, which offers easier integration of baseband architectures with low power consumption and low manufacturing costs [7]. FHSS is commonly used with MFSK modulation, and BFSK-FHSS is the simplest configuration. The simulation model for the BFSK-FHSS system is given in Figure 13.15. We will assume non-coherent detection for demodulating the received BFSK signal. BFSK and MFSK modulations and their simulation models are elaborated in reference [8]. As we are interested in a BFSK-FHSS system, only the relevant portions of the BFSK simulation model from reference [8] are given here.
13.4.2 Binary frequency shift keying (BFSK)

Frequency shift keying (FSK) is a frequency modulation scheme in which the digital signal is modulated as discrete variations in the frequency of the carrier signal. The simplest FSK configuration, called binary FSK (BFSK), uses two discrete frequencies to represent the binary data: one frequency for representing binary 0 and another frequency for representing binary 1. When extended to more frequencies, it is called M-ary FSK or simply MFSK. Waveform simulation of the BFSK modulator-demodulator is discussed in this section.
Fig. 13.15: Simulation model for BFSK-FHSS system
In binary FSK, the binary data is transmitted using two waveforms of frequencies f1 and f2, which are slightly offset from the carrier center frequency fc.

f1 = fc + ∆f/2,    f2 = fc − ∆f/2    (13.9)

Sometimes, the frequency separation ∆f is normalized to the bit duration Tb using the modulation index

h = ∆f Tb    (13.10)

The BFSK waveform is generated as

S_BFSK(t) = A cos[2π f1 t + φ1] = A cos[2π(fc + ∆f/2)t + φ1],  0 ≤ t ≤ Tb,  for binary 1
S_BFSK(t) = A cos[2π f2 t + φ2] = A cos[2π(fc − ∆f/2)t + φ2],  0 ≤ t ≤ Tb,  for binary 0    (13.11)

Here, φ1 and φ2 are the initial phases of the carrier at t = 0. If φ1 = φ2, the modulation is called coherent BFSK; otherwise it is called non-coherent BFSK. From equations 13.10 and 13.11, the binary FSK scheme is represented as

S_BFSK(t) = A cos[2π fc t + (πh/Tb)t + φ1],  0 ≤ t ≤ Tb,  for binary 1
S_BFSK(t) = A cos[2π fc t − (πh/Tb)t + φ2],  0 ≤ t ≤ Tb,  for binary 0    (13.12)
The basis functions of BFSK can be made orthogonal in signal space by proper selection of the modulation index h or the frequency separation ∆f. Often, the frequencies f1 and f2 are chosen to be orthogonal. The choice of non-coherent or coherent FSK generation affects the orthogonality between the chosen frequencies. To maintain orthogonality between the frequencies f1 and f2, the frequency separation for the non-coherent case (φ1 ≠ φ2) is given by

∆f = f1 − f2 = n/Tb,  n = 1, 2, 3, ...    (13.13)

Hence, the minimum frequency separation for non-coherent FSK is obtained when n = 1. When expressed in terms of the modulation index, using equation 13.10, the minimum frequency separation for obtaining orthogonal tones in the non-coherent FSK case occurs when h = 1.

(∆f)min = 1/Tb  ⇒  hmin = 1    (13.14)
Similarly, the frequency separation for coherent FSK (φ1 = φ2) is given by

∆f = f1 − f2 = n/(2Tb),  n = 1, 2, 3, ...    (13.15)

and the minimum frequency separation for coherently detected FSK is obtained when n = 1. When expressed in terms of the modulation index, using equation 13.10, the minimum frequency separation required for obtaining orthogonal tones in the coherent FSK case occurs when h = 0.5 (which corresponds to minimum shift keying (MSK) should the phase change be continuous).

(∆f)min = 1/(2Tb)  ⇒  hmin = 0.5    (13.16)
13.4.2.1 BFSK modulator

The discrete-time equivalent model for generating an orthogonal BFSK signal is shown in Figure 13.16. Here, the discrete-time sampling frequency fs is chosen sufficiently high compared to the carrier frequency fc. The number of discrete samples in a bit period, L, is chosen such that each bit period is represented by enough samples to accommodate a few cycles of the frequencies f1 and f2. The frequency separation ∆f between f1 and f2 is computed from equation 13.10.
Fig. 13.16: Discrete-time equivalent model for BFSK modulation
The function bfsk_mod implements the discrete-time equivalent model for BFSK generation shown in Figure 13.16. It supports generating both the coherent and non-coherent forms of the BFSK signal. If coherent BFSK is chosen, the function returns the generated initial random phase φ, in addition to the generated BFSK signal. This phase information is crucial only for coherent detection at the receiver.

Program 13.16: bfsk_mod.m: Generating coherent and non-coherent discrete-time BFSK signals

function [s,t,phase,at] = bfsk_mod(a,Fc,Fd,L,Fs,fsk_type)
%Function to modulate an incoming binary stream using BFSK
%a - input binary data stream (0's and 1's) to modulate
%Fc - center frequency of the carrier in Hertz
%Fd - frequency separation measured from Fc
%L - number of samples in 1-bit period
%Fs - Sampling frequency for discrete-time simulation
%fsk_type - 'COHERENT' (default) or 'NONCOHERENT' FSK generation
%at each bit period when generating the carriers
%s - BFSK modulated signal
%t - generated time base for the modulated signal
%phase - initial phase generated by modulator, applicable only for
%coherent FSK. It can be used when using coherent detection at Rx
%at - data waveform for the input data
phase=0; at = kron(a,ones(1,L)); %data to waveform
t = (0:1:length(at)-1)/Fs; %time base
if strcmpi(fsk_type,'NONCOHERENT'),
    c1 = cos(2*pi*(Fc+Fd/2)*t+2*pi*rand); %carrier 1 with random phase
    c2 = cos(2*pi*(Fc-Fd/2)*t+2*pi*rand); %carrier 2 with random phase
else
    phase=2*pi*rand; %random phase from uniform distribution [0,2pi)
    c1 = cos(2*pi*(Fc+Fd/2)*t+phase); %carrier 1 with common phase
    c2 = cos(2*pi*(Fc-Fd/2)*t+phase); %carrier 2 with common phase
end
s = at.*c1 +(-at+1).*c2; %BFSK signal (MUX selection)
doPlot=0;
if doPlot,
    figure;subplot(2,1,1);plot(t,at);subplot(2,1,2);plot(t,s);
end;
13.4.2.2 Non-coherent BFSK demodulator

A non-coherent BFSK demodulator does not care about the initial phase information used at the transmitter, and hence avoids the need for carrier phase recovery at the receiver. As a result, the non-coherent demodulator is easier to build and therefore offers a better solution for practical applications. The discrete-time equivalent model for a correlator/square-law based non-coherent demodulator is shown in Figure 13.17. It can be used to demodulate both coherent and non-coherent FSK signals.
Fig. 13.17: Discrete-time equivalent model for square-law based noncoherent demodulation of coherent/noncoherent FSK signal
Program 13.17: bfsk_noncoherent_demod.m: Square-law based non-coherent demodulator

function a_cap = bfsk_noncoherent_demod(r,Fc,Fd,L,Fs)
%Non-coherent demodulation of BFSK modulated signal
%r - BFSK modulated signal at the receiver
%Fc - center frequency of the carrier in Hertz
%Fd - frequency separation measured from Fc
%L - number of samples in 1-bit period
%Fs - Sampling frequency for discrete-time simulation
%a_cap - data bits after demodulation
t = (0:1:length(r)-1)/Fs; %time base
F1 = (Fc+Fd/2); F2 = (Fc-Fd/2);
%define four basis functions
p1c = cos(2*pi*F1*t); p2c = cos(2*pi*F2*t);
p1s = -sin(2*pi*F1*t); p2s = -sin(2*pi*F2*t);
%multiply and integrate from 0 to Tb
r1c = conv(r.*p1c,ones(1,L)); r2c = conv(r.*p2c,ones(1,L));
r1s = conv(r.*p1s,ones(1,L)); r2s = conv(r.*p2s,ones(1,L));
%sample at every sampling instant
r1c = r1c(L:L:end); r2c = r2c(L:L:end);
r1s = r1s(L:L:end); r2s = r2s(L:L:end);
x = r1c.^2 + r1s.^2; y = r2c.^2 + r2s.^2; %square and add
a_cap=(x-y)>0; %compare and decide
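As a quick sanity check (a sketch, not one of the book's listings), the modulator and demodulator above can be connected back-to-back without noise; using the parameter values of section 13.4.3 (Rb = 20 kbps, fc = 50 kHz, h = 1, fs = 10 MHz), all bits should be recovered:

```matlab
% Hedged sketch: noiseless BFSK loopback using bfsk_mod.m and
% bfsk_noncoherent_demod.m (both must be on the Matlab path)
Rb = 20e3; Fs = 10e6; Fc = 50e3; h = 1;
L = Fs/Rb;            %samples per bit period
Fd = h*Rb;            %frequency separation Fd = h/Tb
d = rand(1,50) > 0.5; %random test bits
[s,~,~,~] = bfsk_mod(d,Fc,Fd,L,Fs,'NONCOHERENT');
d_cap = bfsk_noncoherent_demod(s,Fc,Fd,L,Fs);
disp(sum(d ~= d_cap)); %bits in error - expected 0 in the absence of noise
```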
Now that we have the simulation models for BFSK modulator and demodulator, the next step is to model the other aspects of the FHSS system.
13.4.3 Allocation of frequency channels

In the FHSS simulation model, a binary data source of bit rate Rb = 20 Kbps is modulated using the BFSK technique as described in section 13.4.2. The BFSK technique maps the ones and zeros to two distinct frequency tones. Assuming non-coherent BFSK with modulation index h = 1 and center frequency fc = 50 KHz, from equation 13.10, the two BFSK frequency tones are separated by ∆f = h × Rb = 20 KHz. From equation 13.9, the two distinct BFSK tones are calculated as fH = fc + ∆f/2 = 60 KHz and fL = fc − ∆f/2 = 40 KHz.

Let's assume we need to design an FHSS system to operate in the 150 KHz − 1650 KHz band. Since frequency hopping uses a series of frequency channels in a given band, we need to define the number of frequency channels and the bandwidth of each such channel. The bandwidth of each frequency channel in FHSS is dictated by the bandwidth of the underlying modulation, BFSK in this case. Each channel in the frequency hopping system needs to have a bandwidth that is at least equal to the transmission bandwidth of the underlying BFSK signal. Applying Carson's rule, the transmission bandwidth Bch of the BFSK signal sm(t) is calculated as 60 KHz.

Bch = (fH − fL) + 2B    (13.17)
    = (60 × 10³ − 40 × 10³) + 2Rb    (B = Rb, for rectangular pulse shape)    (13.18)
    = 20 × 10³ + 2 × 20 × 10³ = 60 KHz    (13.19)
Assuming a guard band of Bg = 40 KHz between adjacent frequency channels, we can accommodate N = 15 hopping frequency channels in the 150 KHz − 1650 KHz band.

N = (1650 − 150) × 10³ / (Bch + Bg) = 1500 × 10³ / [(60 + 40) × 10³] = 15    (13.20)

Fig. 13.18: Frequency allocation scheme for FHSS system
13.4.4 Frequency hopping generator

Given the design parameters Bch = 60 KHz, Bg = 40 KHz and N = 15, one possible frequency channel allocation scheme for FHSS is shown in Figure 13.18. In this scheme, the center frequencies of the 15 frequency channels are chosen as [200 KHz, 300 KHz, 400 KHz, ..., 1600 KHz]. The center frequencies of the channels are separated by fspace = 100 KHz. The transmitted signal is made to hop among these different channels according to a pseudo-random sequence called the frequency hopping sequence. The frequency hopping sequences are usually predefined in various standards. For the simulation in this text, a PRBS generator is used to generate the frequency hopping sequence.

It is worth noting that the transmitted signal stays in a particular frequency channel for a time duration called the dwell time, denoted as Th. After the expiry of the dwell time, the transmitted signal hops to a different frequency channel as dictated by the pseudo-random frequency hopping sequence. The block diagram and the look-up table implementation model for a pseudo-random frequency hopping sequence synthesizer are shown in Figure 13.19. The PRBS generator modeled for the direct sequence spread spectrum (DSSS) simulation model (refer section 13.3.1) can be utilized for generating the pseudo-random hopping sequence, but with a small tweak. In the case of DSSS signal generation, the PRBS sequence is the output of an LFSR. Whereas, in the case of FH, the PRBS sequence is derived from the sequential states of the LFSR.
Fig. 13.19: (a) Block diagram of frequency hop synthesizer used in FHSS and (b) its look-up table implementation for discrete-time model

For an m-sequence generated by an N-stage LFSR, there are 2^N − 1 possible states that could generate 2^N − 1 hopping frequencies. Each state of the LFSR represents a specific channel frequency in the frequency hopping pattern. When the LFSR sequences through its various states in a pseudo-random manner, it selects a frequency waveform formed by adding a frequency offset (i − 1) × fspace to the base frequency Fbase, where i is an integer representing the pseudo-random state of the PRBS generator. The frequency waveform for each channel is
defined only for the Th duration, or Lh samples in the discrete-time domain. Table 13.3 lists a sample look-up table for a PRBS based frequency hopping pattern generator for N = 15 with fbase = 200 KHz and fspace = 100 KHz.

Table 13.3: Hopping sequence using LFSR with generator polynomial G=[1 0 0 1 1] and seed [0 0 0 1]

Sequence No. | LFSR State | Channel number i | Frequency fi = fbase + (i-1) x fspace
     1       |    0001    |        1         |    200 KHz
     2       |    1001    |        9         |   1000 KHz
     3       |    1101    |       13         |   1400 KHz
     4       |    1111    |       15         |   1600 KHz
     5       |    1110    |       14         |   1500 KHz
     6       |    0111    |        7         |    800 KHz
     7       |    1010    |       10         |   1100 KHz
     8       |    0101    |        5         |    600 KHz
     9       |    1011    |       11         |   1200 KHz
    10       |    1100    |       12         |   1300 KHz
    11       |    0110    |        6         |    700 KHz
    12       |    0011    |        3         |    400 KHz
    13       |    1000    |        8         |    900 KHz
    14       |    0100    |        4         |    500 KHz
    15       |    0010    |        2         |    300 KHz
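The mapping in Table 13.3 can be cross-checked in a few lines (a sketch; it assumes lfsr.m from section 13.2.2 returns the decimal LFSR states in STATES, the same assumption relied upon by Program 13.19):

```matlab
% Hedged sketch: regenerate the Frequency column of Table 13.3
G = [1 0 0 1 1]; X = [0 0 0 1]; %generator polynomial and seed
[~, STATES] = lfsr(G, X);       %decimal LFSR states (assumption)
fbase = 200e3; fspace = 100e3;  %base frequency and channel spacing
f = fbase + (STATES - 1)*fspace; %e.g. state 9 maps to 1000 kHz
disp(f(1:4)); %per Table 13.3: 200, 1000, 1400, 1600 kHz
```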
The following code generates the look-up table for the frequency synthesizer. The user needs to specify the base frequency fbase, the frequency spacing between the frequency channels fspace, the sampling frequency fs, the number of discrete-time samples Lh corresponding to the dwell time Th, and the total number of frequency channels N.

Program 13.18: gen_FH_code_table.m: Lookup table generation for frequency synthesizer

function freqTable = gen_FH_code_table(Fbase,Fspace,Fs,Lh,N)
%Generate frequency translation table for Frequency Hopping (FH)
%Fbase - base frequency (Hz) of the hop
%Fspace - channel spacing (Hz) of the hop
%Fs - sampling frequency (Hz)
%Lh - num of discrete time samples in each hop period
%N - num of total hopping frequencies required (full period of LFSR)
%Return the frequency translation table
t = (0:1:Lh-1)/Fs; %time base for each hopping period
freqTable = zeros(N,Lh); %Table to store N different freq waveforms
for i=1:N, %generate frequency translation table for all N states
    Fi=Fbase+(i-1)*Fspace;
    freqTable(i,:) = cos(2*pi*Fi*t);
end

The following function implements the frequency synthesizer for the given inputs - the m-sequence LFSR polynomial G, the LFSR initial state X, the number of total hops required nHops (which depends on the length of the incoming sequence), the number of discrete-time samples Lh in a dwell time, the base channel frequency fbase and the separation between the frequency channels fspace. It leverages the lfsr.m function given in section 13.2.2 for generating the LFSR states. The repeatSequence.m function given in section 13.3.1.1 is also used in this function, for repeating the LFSR states depending on the total number of hops required.
Program 13.19: hopping_chip_waveform.m: Hopping frequency synthesizer

function [c] = hopping_chip_waveform(G,X,nHops,Lh,Fbase,Fspace,Fs)
%Generate Frequency Hopping chip sequence based on m-sequence LFSR
%G,X - Generator poly. and initial seed for underlying m-sequence
%nHops - total number of hops needed
%Lh - number of discrete time samples in each hop period
%Fbase - base frequency (Hz) of the hop
%Fspace - channel spacing (Hz) of the hop
%Fs - sampling frequency (Hz)
[prbs,STATES] = lfsr(G,X); %initialize LFSR
N = length(prbs); %PRBS period
LFSRStates = repeatSequence(STATES.',nHops); %repeat LFSR states depending on nHops
freqTable = gen_FH_code_table(Fbase,Fspace,Fs,Lh,N); %freq translation
c = zeros(nHops,Lh); %placeholder for the hopping sequence waveform
for i=1:nHops, %for each hop choose one freq wave based on LFSR state
    LFSRState = LFSRStates(i); %given LFSR state
    c(i,:) = freqTable(LFSRState,:); %choose corresponding freq. wave
end
c=c.'; c=c(:); %transpose and serialize as a single waveform vector
13.4.5 Fast and slow frequency hopping

In frequency hopping, the transmitted signal occupies a specific frequency channel for a constant time duration, referred to as the dwell time. After the expiry of the dwell time, the transmitted signal hops to another frequency channel as dictated by the pseudo-random frequency hopping pattern. Based on the relationship between the instantaneous rate of change of hops and the symbol rate of the baseband modulated information, the FHSS technique is classified into two types: fast frequency hopping spread spectrum (fast-FHSS) and slow frequency hopping spread spectrum (slow-FHSS). When the hopping rate is faster than the symbol rate, the frequency hopping scheme is referred to as fast frequency hopping. Otherwise, it is classified as slow frequency hopping. In other words, the transmitted signal hops to multiple channels within a symbol duration in the case of fast-FHSS. Whereas, in slow-FHSS, the transmitted signal stays in a given frequency channel for multiple symbol durations. Let the symbol duration of the baseband modulated data be Ts and the dwell time of the transmitted signal in a frequency channel be Th; then:

Ts ≥ Th for fast hopping
Ts < Th for slow hopping    (13.21)

Figure 13.20 illustrates the concept of fast frequency hopping of a BFSK modulated signal, obtained by setting Th = Ts/4, where the transmitted signal changes to four different frequency channels within a symbol duration. Figure 13.21 illustrates the concept of slow frequency hopping of a BFSK modulated signal, obtained by setting Th = 4Ts, where the transmitted signal hops to a new frequency channel only after every four modulated symbols. For BFSK modulation, the bit duration and symbol duration are equivalent: Tb = Ts.
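With the numbers used in Program 13.20 (Rb = 20 kbps, fs = 10 MHz), these dwell-time choices translate into discrete-time sample counts as follows:

```latex
L = T_b f_s = \frac{10\times10^{6}}{20\times10^{3}} = 500 \text{ samples/bit};
\quad \text{fast hop } (T_h = T_b/4): L_h = 125;
\quad \text{slow hop } (T_h = 4T_b): L_h = 2000
```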
Fig. 13.20: Fast frequency hopping spread spectrum - BFSK-FHSS
Fig. 13.21: Slow frequency hopping spread spectrum - BFSK-FHSS
13.4.6 Simulation code for BFSK-FHSS

The following code implements the BFSK-FHSS simulation model given in Figure 13.15, with all the related concepts and blocks discussed so far. The code can be customized to simulate a fast-FHSS or slow-FHSS system by appropriately choosing the relationship between the dwell time and the symbol time. The simulated waveforms at various points of the transmitter-receiver chain are shown in Figures 13.22 and 13.23.
Note that the simulation code utilizes several functions defined in various sections: bfsk_mod.m defined in section 13.4.2.1, bfsk_noncoherent_demod.m defined in section 13.4.2.2, and hopping_chip_waveform.m and gen_FH_code_table.m defined in section 13.4.4. Please carefully review these sections in order to incorporate all the required functions to run this code snippet.

Program 13.20: FHSS_BFSK.m: Simulation of BFSK-FHSS system

%***************Source*********************
nBits = 60; %number of source bits to transmit
Rb = 20e3; %bit rate of source information in bps
%**********BFSK definitions*****************
fsk_type = 'NONCOHERENT'; %BFSK generation type at the transmitter
h = 1; %modulation index (0.5=coherent BFSK/1=non-coherent BFSK)
Fc = 50e3; %center frequency of BFSK
%**********Frequency Allocation*************
Fbase = 200e3; %The center frequency of the first channel
Fspace = 100e3; %freq. separation between adjacent hopping channels
Fs = 10e6; %sufficiently high sampling frequency for discretization
%*********Frequency Hopper definition*******
G = [1 0 0 1 1]; X=[0 0 0 1]; %LFSR generator poly and seed
hopType = 'FAST_HOP'; %FAST_HOP or SLOW_HOP for frequency hopping
%--------Derived Parameters------
Tb = 1/Rb; %bit duration of each information bit
L = Tb*Fs; %num of discrete-time samples in each bit
Fd = h/Tb; %frequency separation of BFSK frequencies
%Adjust num of samples in a hop duration based on hop type selected
if strcmp(hopType,'FAST_HOP'), %hop duration less than bit duration
    Lh = L/4; %4 hops in a bit period
    nHops = 4*nBits; %total number of Hops during the transmission
else %default set to SLOW_HOP: hop duration more than bit duration
    Lh = L*4; %4 bits worth of samples in a hop period
    nHops = nBits/4; %total number of Hops during the transmission
end
%-----Simulate the individual blocks----------------------
d = rand(1,nBits) > 0.5; %random information bits
[s_m,t,phase,dt]=bfsk_mod(d,Fc,Fd,L,Fs,fsk_type); %BFSK modulation
c = hopping_chip_waveform(G,X,nHops,Lh,Fbase,Fspace,Fs); %Hopping wfm
s = s_m.*c.'; %mix BFSK waveform with hopping frequency waveform
n = 0; %Left to the reader - modify for AWGN noise (see prev chapters)
r = s+n; %received signal with noise
v = r.*c.'; %mix received signal with synchronized hopping freq wave
d_cap = bfsk_noncoherent_demod(v,Fc,Fd,L,Fs); %BFSK demod
bie = sum(d ~= d_cap); %Calculate bits in error
disp(['Bits in Error: ', num2str(bie)]);
%--------Plot waveforms at various stages of tx/rx-------
figure;
subplot(2,1,1);plot(t,dt);title('Source bits - d(t)');
subplot(2,1,2);plot(t,s_m);title('BFSK modulated - s_m(t)')
figure;
subplot(3,1,1);plot(t,s_m);title('BFSK modulated - s_m(t)')
subplot(3,1,2);plot(t,c);title('Hopping waveform at Tx - c(t)')
subplot(3,1,3);plot(t,s);title('FHSS signal - s(t)')
figure;
subplot(3,1,1);plot(t,r);title('Received signal - r(t)')
subplot(3,1,2);plot(t,c);title('Synced hopping waveform at Rx - c(t)')
subplot(3,1,3);plot(t,v);title('Signal after mixing with hop pattern - v(t)')
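The listing above deliberately leaves the noise term as n = 0. One possible way to fill it in, following the Eb/N0 convention already used in Program 13.15 (a sketch, not the author's listing; the Eb/N0 value is illustrative):

```matlab
% Hedged sketch: replace the n = 0 placeholder with AWGN at a chosen Eb/N0
EbN0dB = 10;                      %illustrative Eb/N0 value (dB)
Eb = L*sum(abs(s).^2)/length(s);  %energy per bit of the FHSS signal s
N0 = Eb/(10^(EbN0dB/10));         %noise spectral density
n = sqrt(N0/2)*randn(size(s));    %white Gaussian noise samples
r = s + n;                        %received signal with noise
```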
Fig. 13.22: Fast FHSS simulation - transmitted side signals at various points
Fig. 13.23: Fast FHSS simulation - receiver side signals at various points
Chapter 14
Orthogonal Frequency Division Multiplexing (OFDM)
Abstract This chapter is an introduction to orthogonal frequency-division multiplexing (OFDM), a multicarrier modulation technique. The focus is on describing the signal processing blocks required for simulating a basic discrete-time cyclic-prefixed OFDM communication system.
14.1 Introduction

Modulations like BPSK, QPSK and QAM are single carrier modulation techniques, in which the information is modulated over a single carrier. OFDM is a multicarrier modulation technique that divides the available spectrum into a large number of parallel narrowband subchannels. The data stream in each subchannel is modulated using MPSK/MQAM modulation. The modulated symbols in all the subchannels are collectively referred to as an OFDM symbol. In a frequency selective environment, each narrowband subchannel in an OFDM symbol experiences flat fading, and hence complex equalizers are not needed at the receiver. To provide high spectral efficiency, the frequency responses of the subchannels are made overlapping and orthogonal, hence the name orthogonal frequency division multiplexing. With a relatively easy implementation, OFDM provides an efficient way to deal with multipath propagation effects. OFDM transmission is robust against narrowband interference. Sensitivity to frequency/phase offsets, and a high peak-to-average power ratio (PAPR) that lowers the efficiency of the RF amplifier, are some of the demerits of an OFDM system.

A complete OFDM system, proposed by Weinstein and Ebert [1], generates a multicarrier signal using the discrete Fourier transform (DFT), with guard intervals between the OFDM symbols to improve the performance in a multipath channel. However, the system could not maintain perfect orthogonality among the subcarriers. This problem was solved by adding a cyclic prefix in each OFDM block [2]. An OFDM signal with cyclic prefix is classified as cyclic-prefixed OFDM (CP-OFDM). There are several other flavors of OFDM such as zero padded OFDM (ZP-OFDM) and time-domain synchronous OFDM (TD-OFDM). The system studied in this chapter is a CP-OFDM system.
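The orthogonality of the overlapping subcarriers can be checked numerically: over one OFDM symbol of N samples, two complex subcarriers at different integer frequency indices have zero inner product. A small illustrative check in Python (not part of the book's Matlab code base):

```python
import cmath

def subcarrier(k, N):
    """One OFDM-symbol worth of samples of the k-th complex subcarrier."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    """Inner product <a,b> = sum of a[n] * conj(b[n])."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

N = 64
e_same = inner(subcarrier(3, N), subcarrier(3, N))    # energy N
e_diff = inner(subcarrier(3, N), subcarrier(10, N))   # ~0: orthogonal
```

The inner product of a subcarrier with itself equals the symbol length N, while the inner product between distinct integer-indexed subcarriers vanishes, which is exactly why the DFT can separate them at the receiver.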
14.2 Understanding the role of cyclic prefix in a CP-OFDM system

A CP-OFDM transceiver architecture is typically implemented using inverse discrete Fourier transform (IDFT) and discrete Fourier transform (DFT) blocks (refer Figure 14.3). In an OFDM transmitter, the modulated symbols are assigned to individual subcarriers and sent to an IDFT block. The output of the IDFT block, generally viewed as time domain samples, results in an OFDM symbol. Such OFDM symbols are then transmitted across a channel with a certain channel impulse response (CIR). On the other hand, the receiver applies DFT over the received OFDM symbols for further demodulation of the information in the individual subcarriers. In reality, a multipath channel or any channel in nature acts as a linear filter on the transmitted OFDM symbols. Mathematically, the transmitted OFDM symbol (denoted as s[n]) gets linearly convolved with the CIR h[n] and gets corrupted with additive white Gaussian noise, designated as w[n]. Denoting linear convolution as '∗', the received signal in discrete-time can be represented as

r[n] = h[n] ∗ s[n] + w[n]    (∗ → linear convolution)    (14.1)
The idea behind using OFDM is to combat frequency selective fading, where different frequency components of a transmitted signal can undergo different levels of fading. OFDM divides the available spectrum into small chunks called subchannels. Within a subchannel, the fading experienced by the individual modulated symbol can be considered flat. This opens up the possibility of using a simple frequency domain equalizer to neutralize the channel effects in the individual subchannels.
14.2.1 Circular convolution and designing a simple frequency domain equalizer

From one of the DFT properties, we know that the circular convolution of two sequences in the time domain is equivalent to the multiplication of their individual responses in the frequency domain. Let s[n] and h[n] be two sequences of length N with their DFTs denoted as S[k] and H[k] respectively. Denoting circular convolution as '⊛',

h[n] ⊛ s[n] ≡ H[k]S[k]    (14.2)

If we ignore channel noise in the OFDM transmission, from equation 14.1, the received signal is written as

r[n] = h[n] ∗ s[n]    (∗ → linear convolution)    (14.3)

We can note that the channel performs a linear convolution operation on the transmitted signal. Instead, if the channel performed circular convolution (which is not the case in nature), the equation would have been

r[n] = h[n] ⊛ s[n]    (⊛ → circular convolution)    (14.4)

By applying the DFT property given in equation 14.2,

r[n] = h[n] ⊛ s[n]  ≡  R[k] = H[k]S[k]    (14.5)

As a consequence, the channel effects can be neutralized at the receiver using a simple frequency domain equalizer (actually, this is a zero-forcing equalizer when viewed in the time domain) that just inverts the estimated channel response and multiplies it with the frequency response of the received signal, to obtain the estimates of the OFDM symbols in the subcarriers as

Ŝ[k] = R[k] / H[k]    (14.6)
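The property in equation 14.5 can be verified numerically with a direct DFT and a direct circular convolution. A short self-contained sketch in Python (a pure-Python DFT, adequate for small N; illustrative only, the book's Matlab demonstration follows in section 14.2.2):

```python
import cmath, random

def dft(x):
    """Direct N-point DFT of a sequence x."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def cconv(h, s):
    """N-point circular convolution, h zero-padded to len(s)."""
    N = len(s)
    hp = list(h) + [0.0] * (N - len(h))
    return [sum(hp[m] * s[(n - m) % N] for m in range(N)) for n in range(N)]

random.seed(0)
s = [random.gauss(0, 1) for _ in range(8)]
h = [random.gauss(0, 1) for _ in range(3)]
lhs = dft(cconv(h, s))                               # DFT of h (circ) s
H = dft(h + [0.0] * (len(s) - len(h)))               # H[k]
S = dft(s)                                           # S[k]
rhs = [Hk * Sk for Hk, Sk in zip(H, S)]              # H[k]S[k]
err = max(abs(a - b) for a, b in zip(lhs, rhs))
```

The maximum difference between the two sides is at the level of floating-point round-off, confirming equation 14.5 term by term.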
14.2.2 Demonstrating the role of cyclic prefix

The simple frequency domain equalizer shown in equation 14.6 is possible only if the channel performs circular convolution. But in nature, all channels perform linear convolution. The linear convolution can be converted into circular convolution by adding a cyclic prefix (CP) in the OFDM architecture. The addition of CP makes the linear convolution imparted by the channel appear as circular convolution to the DFT process at the receiver [3].

Let's understand this by demonstration. To simplify things, we will create two randomized vectors for s[n] and h[n]. s[n] is of length 8, the channel impulse response h[n] is of length 3, and we intend to use N = 8-point DFT/IDFT wherever applicable.

N = 8; %period of DFT
s = randn(1,8);
h = randn(1,3);

>> s =
   -0.0155    2.5770    1.9238   -0.0629   -0.8105    0.6727   -1.5924   -0.8007
>> h =
   -0.4878   -1.5351    0.2355
Now, convolve the vectors s and h linearly and circularly. The outputs are plotted in Figure 14.1. We note that the linear convolution and the circular convolution do not yield identical results.

lin_s_h = conv(h,s) %linear convolution of h and s
cir_s_h = cconv(h,s,N) %circular convolution of h and s with period N

lin_s_h =
    0.0076   -1.2332   -4.898   -2.316    0.945    0.901   -0.446    2.993    0.854   -0.188
cir_s_h =
    0.8618   -1.4217   -4.898   -2.316    0.945    0.901   -0.446    2.993
Fig. 14.1: Difference between linear convolution and circular convolution

Let's append a cyclic prefix to the signal s by copying the last Ncp symbols from s and pasting them to its front. Since the channel delay is 3 symbols (CIR of length 3), we need to add at least 2 CP symbols.

Ncp = 2; %number of symbols to copy and paste for CP
s_cp = [s(end-Ncp+1:end) s]; %copy last Ncp syms from s, add as prefix
s_cp =
   -1.5924   -0.8007   -0.0155    2.5770    1.9238   -0.0629   -0.8105    0.6727   -1.5924   -0.8007
Let us assume that we are sending the cyclic-prefixed OFDM symbol s_cp through a channel that performs linear filtering.

lin_scp_h = conv(h,s_cp) %linear conv. of CP-OFDM symbol s_cp and CIR h

lin_scp_h =
    0.777    2.835    0.8618   -1.4217   -4.898   -2.316    0.945    0.901   -0.446    2.993    0.854   -0.189
Compare the outputs due to cir_s_h = s ⊛ h and lin_scp_h = s_cp ∗ h. We can immediately recognize that the middle section of lin_scp_h is exactly the same as the cir_s_h vector. This is shown in figure 14.2.

cir_s_h =
    0.8618   -1.4217   -4.898   -2.316    0.945    0.901   -0.446    2.993
lin_scp_h =
    0.777    2.835    0.8618   -1.4217   -4.898   -2.316    0.945    0.901   -0.446    2.993    0.854   -0.189
Fig. 14.2: Equivalence of circular convolution and linear convolution with cyclic prefixed signal

We have just tricked the channel to perform circular convolution by adding a cyclic extension to the OFDM symbol. At the receiver, we are only interested in the middle section of lin_scp_h, which is the channel output. The first block in the OFDM receiver removes the additional symbols from the front and back of the channel output. The resultant vector is the received symbol r after the removal of the cyclic prefix in the receiver.

r = lin_scp_h(Ncp+1:N+Ncp) %cut from index Ncp+1 to N+Ncp
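The same demonstration condenses into a few lines of language-neutral code, for readers who want to reproduce it outside Matlab: the middle N samples of the linear convolution of the CP-extended symbol equal the N-point circular convolution of the original symbol with the CIR. A Python sketch (random test vectors, not the book's numbers):

```python
import random

def lconv(h, s):
    """Linear convolution, like Matlab's conv(h,s)."""
    y = [0.0] * (len(h) + len(s) - 1)
    for i, hv in enumerate(h):
        for j, sv in enumerate(s):
            y[i + j] += hv * sv
    return y

def cconv(h, s):
    """N-point circular convolution (N = len(s)), like cconv(h,s,N)."""
    N = len(s)
    hp = list(h) + [0.0] * (N - len(h))
    return [sum(hp[m] * s[(n - m) % N] for m in range(N)) for n in range(N)]

random.seed(1)
N, Ncp = 8, 2
s = [random.gauss(0, 1) for _ in range(N)]
h = [random.gauss(0, 1) for _ in range(3)]
s_cp = s[-Ncp:] + s                  # add cyclic prefix
r = lconv(h, s_cp)[Ncp:Ncp + N]      # channel filtering, then CP removal
err = max(abs(a - b) for a, b in zip(r, cconv(h, s)))
```

The agreement is exact up to floating-point rounding, which is the entire point of the cyclic prefix.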
14.2.3 Verifying DFT property

The DFT property given in equation 14.5 can be re-written as

r[n] = IDFT{R[k]} = IDFT{H[k]S[k]} = IDFT{DFT{h[n]} · DFT{s[n]}} = h[n] ⊛ s[n]    (14.7)

To verify this in Matlab, take N-point DFTs of the received signal and CIR. Then, we can see that the IDFT of the product of the DFTs of s and h will be equal to the N-point circular convolution of s and h.

R = fft(r,N); %frequency response of received signal
H = fft(h,N); %frequency response of CIR
S = fft(s,N); %frequency response of OFDM signal (non CP)
r1 = ifft(S.*H); %IDFT of product of individual DFTs
display(['IFFT(DFT(H)*DFT(S)) : ', num2str(r1)])
display(['cconv(s,h)          : ', num2str(r)])

IFFT(DFT(H)*DFT(S)) : 0.8618 -1.4217 -4.898 -2.316 0.945 0.901 -0.446 2.993
cconv(s,h)          : 0.8618 -1.4217 -4.898 -2.316 0.945 0.901 -0.446 2.993
14.3 Discrete-time implementation of baseband CP-OFDM

The schematic diagram of a simplified cyclic-prefixed OFDM (CP-OFDM) data transmission system is shown in Figure 14.3. The basic parameter that describes an OFDM system is the number of subchannels (N) required to send the data. The number of subchannels is typically set to a power of 2, such as {64, 256, 512, 1024, ...}. The sizes of the inverse discrete Fourier transform (IDFT) and discrete Fourier transform (DFT) blocks need to be set accordingly. The transmission begins by converting the source information stream into N parallel subchannels. For convenience, the information stream d is already represented as symbols from the set {1, 2, ..., M}. The data symbol in each subchannel is modulated using the chosen modulation technique such as BPSK, QPSK or MQAM (refer chapter 5 for their models). Since this is a baseband discrete-time model, where the signals are represented at symbol sampling instants, the information symbol on each parallel stream is assumed to be modulating a single orthogonal carrier. At this juncture, the modulated symbols X on the parallel streams can be visualized as coming from different orthogonal subchannels in the frequency domain. The components of the orthogonal subchannels in the frequency domain are converted to the time domain using the IDFT operation.

The following generic function implements the modulation mapper (constellation mapping) shown in Figure 14.3. The function supports MPSK modulation for M ∈ {2, 4, 8, ...} and MQAM modulation with square constellations: M ∈ {4, 16, 64, 256, 1024, ...}. It is built over the mpsk_modulator.m and mqam_modulator.m functions given in sections 5.3.2 and 5.3.3 of chapter 5.
Fig. 14.3: Discrete-time simulation model for OFDM transmission and reception
Program 14.1: modulation_mapper.m: Implementing the modulation mapper for MPSK and MQAM

function [X,ref] = modulation_mapper(MOD_TYPE,M,d)
%Modulation mapper for OFDM transmitter
% MOD_TYPE - 'MPSK' or 'MQAM' modulation
% M - modulation order; for BPSK M=2, QPSK M=4, 256-QAM M=256 etc..,
% d - data symbols to be modulated drawn from the set {1,2,...,M}
%returns
% X - modulated symbols
% ref - ideal constellation points that could be used by IQ detector
if strcmpi(MOD_TYPE,'MPSK')
    [X,ref] = mpsk_modulator(M,d); %MPSK modulation
elseif strcmpi(MOD_TYPE,'MQAM')
    [X,ref] = mqam_modulator(M,d); %MQAM modulation
else
    error('Invalid Modulation specified');
end
end
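For the MPSK branch, the mapping amounts to placing the symbol d ∈ {1, ..., M} on the unit circle. A hedged Python equivalent follows; the exact phase offset and bit-to-symbol (Gray) mapping used by mpsk_modulator.m are defined in chapter 5, so this sketch simply assumes a zero phase offset and natural mapping:

```python
import cmath

def mpsk_mapper(M, d):
    """Map symbols d (each in 1..M) to unit-energy MPSK points.
    Assumes zero phase offset and natural (non-Gray) mapping."""
    return [cmath.exp(2j * cmath.pi * (di - 1) / M) for di in d]

X = mpsk_mapper(4, [1, 2, 3, 4])   # QPSK: points at 0, 90, 180, 270 degrees
```

Every mapped point has unit magnitude, so the average symbol energy is 1 regardless of M, which keeps the Es/N0 bookkeeping in the later simulations consistent.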
OFDM signal is a composite signal that contains information from all the subchannels. Since the modulated symbols X are visualized to be in the frequency domain, they are converted to the time domain using IDFT. In the receiver, the corresponding inverse operation is performed by the DFT block. The IDFT and DFT blocks in the schematic can also be interchanged, with no impact on the transmission. In a time-dispersive channel, the orthogonality of the subcarriers cannot be maintained in a perfect state due to delay distortion. This problem is addressed by adding a cyclic extension (also called cyclic prefix) to the OFDM symbol [2]. A cyclic extension is added by copying the last Ncp symbols from the vector x and pasting them to its front, as shown in Figure 14.4. Cyclic extension of the OFDM symbol converts the linear convolution channel to a channel performing cyclic convolution, and this ensures orthogonality of the subcarriers in a time-dispersive channel. It also completely eliminates subcarrier interference as long as the impulse response of the channel is shorter than the cyclic prefix. At the receiver, the added cyclic prefix is simply removed from the received OFDM symbol.
Fig. 14.4: Adding a cyclic prefix in CP-OFDM
Program 14.2: add_cyclic_prefix.m: Function to add cyclic prefix of length Ncp symbols

function s = add_cyclic_prefix(x,Ncp)
%function to add cyclic prefix to the generated OFDM symbol x that
%is generated at the output of the IDFT block
% x - ofdm symbol without CP (output of IDFT block)
% Ncp - num. of samples at x's end that will be copied to its beginning
% s - returns the cyclic prefixed OFDM symbol
s = [x(end-Ncp+1:end) x]; %Cyclic prefixed OFDM symbol
end
Program 14.3: remove_cyclic_prefix.m: Function to remove the cyclic prefix from the OFDM symbol

function y = remove_cyclic_prefix(r,Ncp,N)
%function to remove cyclic prefix from the received OFDM symbol r
% r - received ofdm symbol with CP
% Ncp - num. of samples at beginning of r that need to be removed
% N - number of samples in a single OFDM symbol
% y - returns the OFDM symbol without cyclic prefix
y = r(Ncp+1:N+Ncp); %cut from index Ncp+1 to N+Ncp
end
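The two helpers are exact inverses around a channel-free path; a quick round-trip check in Python with the same slicing logic as the Matlab functions above (0-indexed rather than 1-indexed):

```python
def add_cyclic_prefix(x, Ncp):
    # copy the last Ncp samples of x to its front
    return x[-Ncp:] + x

def remove_cyclic_prefix(r, Ncp, N):
    # keep samples Ncp .. Ncp+N-1 (0-indexed)
    return r[Ncp:N + Ncp]

x = list(range(8))                 # stand-in OFDM symbol, N = 8
s = add_cyclic_prefix(x, 2)        # length N + Ncp = 10
y = remove_cyclic_prefix(s, 2, 8)  # recovers the original symbol
```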
On the receiver side, the demapper for demodulating MPSK and MQAM can be implemented by using a simple IQ detector that uses the minimum Euclidean distance metric for demodulation (discussion and function definitions in section 5.4.4 of chapter 5).
14.4 Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on AWGN channel

The following code puts together all the functional blocks of an OFDM transmission system, that were described in section 14.3, to simulate the performance of a CP-OFDM system over an AWGN channel. The code supports two types of underlying modulations for OFDM - MPSK or MQAM. It generates random data symbols, modulates them using the chosen modulation type, converts the modulated symbols to the time domain using the IDFT operation and adds a cyclic prefix to form an OFDM symbol. The resulting OFDM symbols are then added with an AWGN noise vector that corresponds to the specified Eb/N0 value. The AWGN noise model is described in section 6.1 of chapter 6. On the receiver side, the cyclic prefix is removed from the received OFDM symbol, DFT is performed and then the symbols are sent through a demapper for getting an estimate of the source symbols. The demapper is implemented by using a simple IQ detector that uses the minimum Euclidean distance metric for demodulation (discussion and function definitions in section 5.4.4 of chapter 5). Finally, the symbol error rates are computed and compared against the theoretical symbol error rate curves for the respective modulations over AWGN. The theoretical equations and the function to calculate symbol error rates for MPSK and MQAM modulations are given in section 6.1.3. Figure 14.5 shows that the simulated symbol error rate points concur well with the theoretical symbol error rate curves.
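The listing below relies on the add_awgn_noise function from chapter 6. For reference, a minimal stand-alone equivalent for complex baseband signals can be sketched as follows; this version scales the noise to the measured signal power, and omits the oversampling handling that the chapter 6 function also provides:

```python
import math, random

def add_awgn_noise(s, EsN0dB):
    """Add complex AWGN so that the measured Es/N0 equals EsN0dB.
    s is a list of complex baseband samples (one sample per symbol)."""
    gamma = 10 ** (EsN0dB / 10)                   # Es/N0 as a linear ratio
    P = sum(abs(v) ** 2 for v in s) / len(s)      # measured signal power
    N0 = P / gamma                                # noise spectral density
    sigma = math.sqrt(N0 / 2)                     # per-dimension std. dev.
    return [v + complex(random.gauss(0, sigma), random.gauss(0, sigma))
            for v in s]

random.seed(42)
s = [1 + 0j] * 20000                              # unit-power test signal
r = add_awgn_noise(s, 10)                         # Es/N0 = 10 dB
noise_power = sum(abs(rv - sv) ** 2 for rv, sv in zip(r, s)) / len(s)
```

At Es/N0 = 10 dB on a unit-power signal, the measured noise power comes out close to 0.1, which is the sanity check to run before trusting any simulated error-rate curve.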
Program 14.4: ofdm_on_awgn.m: OFDM transmission and reception on AWGN channel

clearvars; clc;
%--------Simulation parameters----------------
nSym = 10^4; %Number of OFDM Symbols to transmit
EbN0dB = 0:2:20; %bit to noise ratio
MOD_TYPE = 'MPSK'; %modulation type - 'MPSK' or 'MQAM'
M = 64; %choose modulation order for the chosen MOD_TYPE
N = 64; %FFT size or total number of subcarriers (used + unused)
Ncp = 16; %number of symbols in the cyclic prefix
%--------Derived Parameters--------------------
k = log2(M); %number of bits per modulated symbol
EsN0dB = 10*log10(k)+EbN0dB; %convert to symbol energy to noise ratio
errors = zeros(1,length(EsN0dB)); %to store symbol errors

for i = 1:length(EsN0dB), %Monte Carlo Simulation
    for j = 1:nSym
        %-----------------Transmitter--------------------
        d = ceil(M.*rand(1,N)); %uniformly distributed random syms from 1:M
        [X,ref] = modulation_mapper(MOD_TYPE,M,d);
        x = ifft(X,N); %IDFT
        s = add_cyclic_prefix(x,Ncp); %add CP
        %-----------------Channel------------------------
        r = add_awgn_noise(s,EsN0dB(i)); %add AWGN noise, r = s+n
        %-----------------Receiver-----------------------
        y = remove_cyclic_prefix(r,Ncp,N); %remove CP
        Y = fft(y,N); %DFT
        [~,dcap] = iqOptDetector(Y,ref); %demapper using IQ detector
        %----------------Error counter-------------------
        numErrors = sum(d~=dcap); %Count number of symbol errors
        errors(i) = errors(i)+numErrors; %accumulate symbol errors
    end
end
simulatedSER = errors/(nSym*N);
theoreticalSER = ser_awgn(EbN0dB,MOD_TYPE,M);
%Plot theoretical curves and simulated SER points
plot(EbN0dB,log10(simulatedSER),'ro'); hold on;
plot(EbN0dB,log10(theoreticalSER),'r-'); grid on;
title(['Performance of ',num2str(M),'-',MOD_TYPE,' OFDM over AWGN channel']);
xlabel('Eb/N0 (dB)'); ylabel('Symbol Error Rate');
legend('simulated','theoretical');
Fig. 14.5: Symbol Error Rate Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on AWGN channel
14.5 Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on frequency selective Rayleigh channel

Now we study the effect of using OFDM over a frequency selective channel. The block diagram for simulating a discrete-time baseband CP-OFDM system over a frequency selective channel is shown in Figure 14.6.
Fig. 14.6: OFDM transmitter and receiver with a simple frequency domain equalizer

In this model, the channel impulse response h corresponds to that of a frequency selective channel, and H is its corresponding frequency response. The channel noise n is considered to be additive white Gaussian. Denoting circular convolution as '⊛', the OFDM transmission system can be summarized as

Y = DFT( IDFT(X) ⊛ h + n ) = DFT( IDFT(X) ⊛ h ) + z

where X is the input to the IDFT block in the transmitter, Y is the output of the DFT block in the receiver, and z = DFT(n) represents uncorrelated Gaussian noise. Since the DFT of two circularly convolved signals is equivalent to the product of their individual DFTs, the equation can be further simplified as

Y = X · DFT(h) + z = X · H + z    (14.8)

If we know the frequency response of the channel, the process of equalization in the frequency domain is as simple as dividing the received signal by the channel frequency response.

V = Y / H    (14.9)

A frequency selective Rayleigh channel model is chosen for implementing the simulation code. The following code puts together all the functional blocks of an OFDM transmission system, that were described in section 14.3, to simulate the performance of an OFDM system over a frequency selective Rayleigh fading channel. The code supports two types of underlying modulations for OFDM: MPSK or MQAM. The code generates random data symbols, modulates them using the chosen modulation type, converts the modulated symbols to the time domain using the IDFT operation and adds a cyclic prefix to form an OFDM symbol. The OFDM symbols are then filtered through a frequency selective Rayleigh channel, which can be modeled using a TDL filter with the number of taps L > 1 (refer section 11.5.2 of chapter 11). The filtered OFDM symbols are added with AWGN noise that corresponds to the specified Eb/N0 value. The AWGN noise model is described in section 6.1 of chapter 6.

On the receiver side, the cyclic prefix is removed from the received OFDM symbol, DFT is performed, followed by a perfect frequency domain equalization as described by equation 14.9. The resulting equalized symbols are sent through a demapper for getting an estimate of the source symbols. The demapper is implemented as a simple IQ detector that uses the minimum Euclidean distance metric for demodulation (discussion and function definitions in section 5.4.4 of chapter 5). Finally, the symbol error rates are computed and compared
against the theoretical symbol error rate curves for a flat-fading Rayleigh channel. The theoretical equations and the function to calculate symbol error rates for MPSK and MQAM modulations are given in section 6.2.3.1. Figure 14.7 shows that the performance of a perfectly designed OFDM system over a frequency selective channel is equivalent to the performance over a flat-fading channel, which justifies the objective of using OFDM.

Program 14.5: ofdm_on_freq_sel_chan.m: OFDM on frequency selective Rayleigh fading channel

clearvars; clc;
%--------Simulation parameters----------------
nSym = 10^4; %Number of OFDM Symbols to transmit
EbN0dB = 0:2:20; %bit to noise ratio
MOD_TYPE = 'MQAM'; %modulation type - 'MPSK' or 'MQAM'
M = 4; %choose modulation order for the chosen MOD_TYPE
N = 64; %FFT size or total number of subcarriers (used + unused)
Ncp = 16; %number of symbols in the cyclic prefix
L = 10; %Number of taps for the frequency selective channel model
%--------Derived Parameters--------------------
k = log2(M); %number of bits per modulated symbol
EsN0dB = 10*log10(k*N/(N+Ncp))+EbN0dB; %account for CP overhead
errors = zeros(1,length(EsN0dB)); %to store symbol errors

for i = 1:length(EsN0dB), %Monte Carlo Simulation
    for j = 1:nSym
        %-----------------Transmitter--------------------
        d = ceil(M.*rand(1,N)); %uniformly distributed random syms from 1:M
        [X,ref] = modulation_mapper(MOD_TYPE,M,d);
        x = ifft(X,N); %IDFT
        s = add_cyclic_prefix(x,Ncp); %add CP
        %-----------------Channel------------------------
        h = 1/sqrt(2)*(randn(1,L)+1i*randn(1,L)); %CIR
        H = fft(h,N); %channel frequency response
        hs = conv(h,s); %filter the OFDM sym through freq. sel. channel
        r = add_awgn_noise(hs,EsN0dB(i)); %add AWGN noise
        %-----------------Receiver-----------------------
        y = remove_cyclic_prefix(r,Ncp,N); %remove CP
        Y = fft(y,N); %DFT
        V = Y./H; %equalization
        [~,dcap] = iqOptDetector(V,ref); %demapper using IQ detector
        %----------------Error counter-------------------
        numErrors = sum(d~=dcap); %Count number of symbol errors
        errors(i) = errors(i)+numErrors; %accumulate symbol errors
    end
end
simulatedSER = errors/(nSym*N); %symbol error rate from simulation
%theoretical SER for Rayleigh flat-fading
theoreticalSER = ser_rayleigh(EbN0dB,MOD_TYPE,M);
semilogy(EbN0dB,simulatedSER,'ko'); hold on;
semilogy(EbN0dB,theoreticalSER,'k-'); grid on;
title(['Performance of ',num2str(M),'-',MOD_TYPE,' OFDM over Freq Selective Rayleigh channel']);
xlabel('Eb/N0 (dB)'); ylabel('Symbol Error Rate');
legend('simulated','theoretical');
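The essence of the receiver chain in Program 14.5 — IDFT, cyclic prefix, a random multipath channel, DFT, one-tap division by H — can be distilled into a noiseless end-to-end check. A self-contained Python sketch with direct DFT/IDFT and a small N, purely for illustration:

```python
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def lconv(h, s):
    y = [0j] * (len(h) + len(s) - 1)
    for i, hv in enumerate(h):
        for j, sv in enumerate(s):
            y[i + j] += hv * sv
    return y

random.seed(7)
N, Ncp, L = 8, 3, 3
X = [random.choice([1+1j, 1-1j, -1+1j, -1-1j]) for _ in range(N)]  # QPSK
x = idft(X)                                  # to time domain
s = x[-Ncp:] + x                             # add cyclic prefix
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(L)]
y = lconv(h, s)[Ncp:Ncp + N]                 # channel, then CP removal
H = dft(h + [0j] * (N - L))                  # channel frequency response
V = [Yk / Hk for Yk, Hk in zip(dft(y), H)]   # one-tap equalizer, V = Y./H
err = max(abs(v - xk) for v, xk in zip(V, X))
```

With a noiseless channel, the equalized symbols match the transmitted QPSK symbols to numerical precision; this is exactly the property Program 14.5 exploits before noise is added.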
Fig. 14.7: Symbol Error Rate Performance of MPSK-CP-OFDM and MQAM-CP-OFDM on frequency selective Rayleigh channel with AWGN noise
References

1. S. B. Weinstein and P. M. Ebert, Data transmission by frequency-division multiplexing using the discrete Fourier transform, IEEE Transactions on Communication Technology, vol. 19, no. 5, October 1971.
2. A. Peled and A. Ruiz, Frequency domain data transmission using reduced computational complexity algorithms, in Proc. IEEE ICASSP-80, vol. 5, pp. 964-967, April 1980.
3. H. Sari, G. Karam, and I. Jeanclaude, Transmission techniques for digital terrestrial TV broadcasting, IEEE Communications Magazine, vol. 33, pp. 100-109, Feb. 1995.
Index
adaptive equalizer, 206, 229 additive white Gaussian noise (AWGN), 159, 271 Alamouti space-time coding, 312 amplitude distortion, 180 analytic signal, 36 angle of arrival (AoA), 172, 263 angular frequency, 40 antenna selection (AS), 310 array gain, 308 auto-correlation, 320 auto-correlation function, 265, 278 auto-regressive (AR) model, 85 autocorrelation matrix, 220 automatic repeat requests (ARQ), 114 bandwidth efficiency, 95 Bayes’ theorem, 101 binary erasure channel (BEC), 104 binary frequency shift keying (BFSK), 339 binary symmetric channel (BSC), 102, 125 bit error probability (BEP), 115 bit error rate (BER), 115 block fading, 167 bounded input bounded output (BIBO), 48 bounded-distance decoder, 119 causal filter, 47 central limit theorem, 72, 74 channel capacity, 93, 102 constellation constrained capacity, 106 ergodic capacity, 108 Shannon power efficiency limit, 97 ultimate Shannon limit, 97 channel impulse response (CIR), 167 channel models linear time invariant (LTI), 166 multiple input multiple output (MIMO), 297 multiple input single output system (MISO), 298 single input and single output (SISO), 296 single input multiple output (SIMO), 298 uncorrelated scattering (US), 265 wide sense stationary (WSS), 265 wide sense stationary uncorrelated scattering (WSSUS), 266
Cholesky decomposition, 79 circular convolution, 32, 34, 354 coding gain, 117 coherence bandwidth, 266, 270, 271, 295 coherence time, 266, 275, 295 coherent BFSK, 156 coherent detection, 154 colored noise, 80, 276 blue noise, 87 Brownian noise, 87 pink noise, 87 violet noise, 87 complete decoder, 119 complex baseband equivalent model, 144, 276 complex baseband models channel models AWGN channel, 159 Rayleigh flat fading, 169 Ricean flat fading, 172 demodulators, 152 IQ Detector, 154 M-FSK, 157 MPAM, 152 MPSK, 153 MQAM, 153 modulators, 145 MFSK, 155 MPAM, 146 MPSK, 146 MQAM, 148 complex envelope, 144 complex Fresnel integral, 256 conditional entropy, 102 constellation constrained capacity, 106 continuous time fourier transform (CTFT), 36 convolution, 29 circular, 32, 34 linear, 31 correlation matrix, 78 correlative coding, 198 critical distance, 254 cross-correlation, 320 cross-over probability, 102 crosscorrelation vector, 220
365
366 cyclic prefix, 270 cyclic prefix (CP), 355 cyclic-prefixed OFDM (CP-OFDM), 353 dc compensation, 235 decoder bounded-distance decoder, 119 complete decoder, 119 hard-decision decoding, 124 maximum likelihood decoder (MLD), 123 nearest-neighbor decoding, 118 soft-decision decoding, 122 standard array decoder, 125 syndrome decoding, 128 decoding error, 118 decorrelation distance, 299 degrees of freedom, 300 delay distortion, 180, 359 delay spread, 268, 272 diffraction, 248 diffraction loss, 256 dipole antenna, 247 direct sequence spread spectrum (DSSS), 319, 328 discrete memoryless channel (DMC), 99, 123 discrete-time fourier transform (DTFT), 37 diversity adaptive arrays, 296 frequency diversity, 295 multiuser diversity, 295 pattern diversity, 296 polarization diversity, 296 receive diversity, 296 spatial diversity, 296, 298 time diversity, 295 transmit diversity, 296 diversity gain, 299, 310 Doppler coefficients, 283 Doppler effect, 263 Doppler frequencies, 266, 283 Doppler frequency shift, 263 Doppler phases, 283 Doppler power spectral density, 264, 266 energy of a signal, 25 entropy, 100 conditional entropy, 102 mutual information, 101 envelope, 41 envelope detector, 157 equal gain combining (EGC), 305 equalizer, 205 adaptive, 206 decision feedback, 206 fractionally-spaced, 207 linear, 206 maximum likelihood sequence estimation (MLSE), 206 preset, 206 symbol-spaced, 206 erasure probability, 104 ergodic capacity, 108 error detection, 117
Index error probability, 102 Euclidean distance, 118, 211 Euclidean norm, 26 Eucliden norm, 168 eye diagram, 190 fading, 172, 264 fading rate, 274 fast fading, 275 Fast Fourier Transform, 12, 34 FFTshift, 15 IFFTShift, 17 finite impulse response (FIR), 45, 167, 209 flat fading channel, 166, 167 forward error correcting (FEC), 114 fractionally-spaced equalizer, 207 frequency correlation function (FCF), 271 frequency hopping sequence, 345 frequency hopping spread spectrum (FHSS), 319, 339 fast-FHSS, 347 slow-FHSS, 347 frequency non-selective channel, 264 frequency selective channel, 166, 264, 362 frequency shift keying (FSK), 339 frequency-non-selective channel, 167 frequency-shifted Gaussian power spectral density, 279 Fresnel zones, 258 Fresnel-Krichhoff diffraction parameter, 256 Friis propagation model, 248 generator matrix, 120 Gibbs phenomenon, 4, 82, 185 Gold codes, 324 group delay, 50, 179 Hadamard codes, 135 Hadamard matrix, 135 Hamming code, 132 Hamming distance, 118, 124 hard-decision, 124 hard-decision decoding, 124 Hata Okumura model, 260 Hermitian transpose, 211, 219, 230, 303, 313 Hilbert transform, 37 histogram, 55 infinite impulse response (IIR), 45, 87 instantaneous amplitude, 40 instantaneous phase, 40 intersymbol interference (ISI), 166, 179, 180, 270 inverse square law of distance, 248 IQ detection, 154 IQ imbalance model, 235 Jakes power spectral density, 82, 266, 278 jammer-to-signal ratio (JSR), 335 knife-edge diffraction model, 248, 256 Kronecker product, 135 law of large numbers, 59 least mean square (LMS) algorithm, 229 least squares (LS), 211
line of sight (LOS) path, 172
line-of-sight (LOS), 72, 247, 287
linear convolution, 31, 144
linear equalizers, 205
linear feedback shift registers (LFSR), 321, 324
linear phase filter, 50
linear time invariant (LTI) channel, 166
log distance path loss model, 251
log normal shadowing, 251
log-likelihood function, 123
M-ary frequency shift keying (MFSK), 339
matched filter, 181
maximum delay spread, 270
maximum excess delay, 270
maximum likelihood decoder (MLD), 123
maximum ratio combining (MRC), 303
maximum-length codes, 134
maximum-length sequences (m-sequences), 321
mean delay, 268
mean square error (MSE), 211, 219
mean square error criterion, 207
method of equal distances (MED), 287
method of exact Doppler spread (MEDS), 283
minimum distance, 118
minimum shift keying (MSK), 341
MMSE equalizer, 218
modulation order, 146
moment generating function (MGF), 169, 173
monic polynomial, 202
Moore-Penrose generalized matrix inverse, 211
multipath intensity profile, 266
multipath propagation, 263
mutual information, 101
non line of sight (NLOS) path, 172
non-causal filter, 47
non-coherent detection, 154
non-line-of-sight (NLOS), 248
non-systematic encoding, 119
norm, 29
    2-norm, 25, 29
    L2 norm, 25
    p-norm, 29
Nyquist criterion for zero ISI, 180, 181, 192
Nyquist sampling theorem, 3
orthogonal frequency-division multiplexing (OFDM), 353
    cyclic-prefixed OFDM (CP-OFDM), 353
    time-domain synchronous OFDM (TD-OFDM), 353
    zero padded OFDM (ZP-OFDM), 353
parity-check matrix, 120
parity-check theorem, 120
partial response (PR) signaling, 197
passband model, 143
path-loss exponent (PLE), 248, 251
peak distortion criterion, 207
periodogram, 24
phase delay, 50
phase distortion, 50, 180
pilot based estimation, 238
Poisson process, 62
polynomial functions, 30
power delay profile (PDP), 266, 268, 271
power efficiency, 96
power of a signal, 26
power spectral density (PSD), 23
pre-equal-gain-combining (P-EGC), 310
pre-maximum-ratio-combining (P-MRC), 310
precoding, 200
preferred pair m-sequences, 324
preset equalizers, 206
primitive polynomials, 321
probability density function (PDF), 53, 72
probability mass function (PMF), 57
process intensity, 62
processing gain, 335
propagation delay, 265
propagation path loss, 249
pseudo-inverse matrix, 211
pulse shaping
    raised-cosine pulse, 185
    rectangular pulse, 182
    sinc pulse, 183, 197
    square-root raised-cosine, 187
raised-cosine pulse, 185
random variable
    Bernoulli, 59
    binomial, 59
    Chi, 70
        central Chi, 70
        non-central Chi, 70
    Chi-squared, 65
        central Chi-squared, 66
        non-central Chi-squared, 66
    complex Gaussian, 72
    exponential, 61
    Gaussian, 64, 251
    independent and identically distributed (i.i.d), 63, 73, 74, 301
    Nakagami-m, 73
    Rayleigh, 70, 72
    Ricean, 68, 72, 276
    uniform, 56
rate parameter, 61
Rayleigh flat-fading, 169
receive diversity
    equal gain combining (EGC), 305
    maximum ratio combining (MRC), 303
    selection combining (SC), 306
rectangular pulse, 182
reflection, 248
repetition codes, 132
RF impairments model, 233
Ricean flat-fading, 172
Ricean K factor, 173, 277
RMS delay spread, 268
scattering function, 266
selection combining (SC), 306
Shannon power efficiency limit, 96, 97
Shannon spectral efficiency limit, 95
Shannon theorem, 94
Shannon’s capacity limit, 94
signal
    energy signal, 27
    power signal, 27
signal-to-noise ratio (SNR), 94
    average SNR, 303
    instantaneous SNR, 302
signum function, 5
sinc pulse, 183, 197
slow fading, 275
soft-decision decoding, 122
soft-decisions, 122
spaced-frequency correlation function, 266, 270, 271
spaced-time correlation function, 266, 275
spatial correlation, 299
spatial diversity, 298
spatial multiplexing, 298, 300
spectral efficiency, 95
spread spectrum, 319
    direct sequence spread spectrum (DSSS), 319, 328
    frequency hopping spread spectrum (FHSS), 319, 339
        fast-FHSS, 347
        slow-FHSS, 347
spreading sequences, 319
    Gold codes, 324
    maximum-length sequences (m-sequences), 321
    preferred pair m-sequences, 324
    pseudo random (PN) sequences, 321
square-root raised-cosine filter, 192
square-root raised-cosine pulse, 187
standard array decoder, 125
symbol-spaced equalizer, 206
syndrome decoding, 128
systematic encoding, 119
tapped delay line (TDL) filter, 167
temporal fine structure (TFS), 42
temporal frequency, 40
test signals, 3
    chirp signal, 7
    Gaussian pulse, 6
    rectangular pulse, 5
    sinusoidal, 3
    square wave, 4
time-domain synchronous OFDM (TD-OFDM), 353
time-nonselective fading, 275
time-selective fading, 275
Toeplitz matrix, 32, 210
transmit beamforming, 310
    antenna selection (AS), 310
    pre-equal-gain-combining (P-EGC), 310
    pre-maximum-ratio-combining (P-MRC), 310
transmit diversity, 310
    Alamouti space-time coding, 312
two ray ground reflection model, 253
two ray interference model, 253
ultimate Shannon limit, 97
uncorrelated scattering (US) channel model, 265
wide sense stationary (WSS) channel model, 265
wide sense stationary uncorrelated scattering (WSSUS) channel model, 266
Wiener filter, 219
Wiener-Hopf equation, 220
Wiener-Khinchin theorem, 23
Yule Walker equations, 85
zero padded OFDM (ZP-OFDM), 353
zero-forcing equalizer, 209
zero-mean channel, 271