Signal Processing and Linear Systems [2 ed.] 0190299045, 9780190299040

(2021 version with updated problems.) This second edition contains much of the content of Linear Systems and Signals, Third Edition.


English Pages 1140 [1155] Year 2021


Table of contents :
Signal Processing and Linear Systems
Contents
Preface
Background
B.1 Complex Numbers
B.1.1 A Historical Note
B.1.2 Algebra of Complex Numbers
B.2 Sinusoids
B.2.1 Addition of Sinusoids
B.2.2 Sinusoids in Terms of Exponentials
B.3 Sketching Signals
B.3.1 Monotonic Exponentials
B.3.2 The Exponentially Varying Sinusoid
B.4 Cramer's Rule
B.5 Partial Fraction Expansion
B.5.1 Method of Clearing Fractions
B.5.2 The Heaviside "Cover-Up" Method
B.5.3 Repeated Factors of Q(x)
B.5.4 A Combination of Heaviside "Cover-Up" and Clearing Fractions
B.5.5 Improper F(x) with m = n
B.5.6 Modified Partial Fractions
B.6 Vectors and Matrices
B.6.1 Some Definitions and Properties
B.6.2 Matrix Algebra
B.7 MATLAB: Elementary Operations
B.7.1 MATLAB Overview
B.7.2 Calculator Operations
B.7.3 Vector Operations
B.7.4 Simple Plotting
B.7.5 Element-by-Element Operations
B.7.6 Matrix Operations
B.7.7 Partial Fraction Expansions
B.8 Appendix: Useful Mathematical Formulas
B.8.1 Some Useful Constants
B.8.2 Complex Numbers
B.8.3 Sums
B.8.4 Taylor and Maclaurin Series
B.8.5 Power Series
B.8.6 Trigonometric Functions
B.8.7 Common Derivative Formulas
B.8.8 Indefinite Integrals
B.8.9 L'Hopital's Rule
B.8.10 Solution of Quadratic and Cubic Equations
Problems
Chapter 1: Signals and Systems
1.1 Size of a Signal
1.1.1 Signal Energy
1.1.2 Signal Power
1.2 Some Useful Signal Operations
1.2.1 Time Shifting
1.2.2 Time Scaling
1.2.3 Time Reversal
1.2.4 Combined Operations
1.3 Classification of Signals
1.3.1 Continuous-Time and Discrete-Time Signals
1.3.2 Analog and Digital Signals
1.3.3 Periodic and Aperiodic Signals
1.3.4 Energy and Power Signals
1.3.5 Deterministic and Random Signals
1.4 Some Useful Signal Models
1.4.1 The Unit Step Function u(t)
1.4.2 The Unit Impulse Function delta(t)
1.4.3 The Exponential Function e^st
1.5 Even and Odd Functions
1.5.1 Some Properties of Even and Odd Functions
1.5.2 Even and Odd Components of a Signal
1.6 Systems
1.7 Classification of Systems
1.7.1 Linear and Nonlinear Systems
1.7.2 Time-Invariant and Time-Varying Systems
1.7.3 Instantaneous and Dynamic Systems
1.7.4 Causal and Noncausal Systems
1.7.5 Continuous-Time and Discrete-Time Systems
1.7.6 Analog and Digital Systems
1.7.7 Invertible and Noninvertible Systems
1.7.8 Stable and Unstable Systems
1.8 System Model: Input-Output Description
1.8.1 Electrical Systems
1.8.2 Mechanical Systems
1.8.3 Electromechanical Systems
1.9 Internal and External Descriptions of a System
1.10 Internal Description: The State Space Description
1.11 MATLAB: Working with Functions
1.11.1 Anonymous Functions
1.11.2 Relational Operators and the Unit Step Function
1.11.3 Visualizing Operations on the Independent Variable
1.11.4 Numerical Integration and Estimating Signal Energy
1.12 Summary
Problems
Chapter 2: Time-Domain Analysis of Continuous-Time Systems
2.1 Introduction
2.2 System Response to Internal Conditions: The Zero-Input Response
2.2.1 Some Insights into the Zero-Input Behavior of a System
2.3 The Unit Impulse Response h(t)
2.4 System Response to External Input: The Zero-State Response
2.4.1 The Convolution Integral
2.4.2 Graphical Understanding of Convolution Operation
2.4.3 Interconnected Systems
2.4.4 A Very Special Function for LTIC Systems: The Everlasting Exponential e^st
2.4.5 Total Response
2.5 System Stability
2.5.1 External (BIBO) Stability
2.5.2 Internal (Asymptotic) Stability
2.5.3 Relationship Between BIBO and Asymptotic Stability
2.6 Intuitive Insights into System Behavior
2.6.1 Dependence of System Behavior on Characteristic Modes
2.6.2 Response Time of a System: The System Time Constant
2.6.3 Time Constant and Rise Time of a System
2.6.4 Time Constant and Filtering
2.6.5 Time Constant and Rate of Information Transmission
2.6.7 The Resonance Phenomenon
2.7 MATLAB: M-Files
2.7.1 Script M-Files
2.7.2 Function M-Files
2.7.3 For-Loops
2.7.4 Graphical Understanding of Convolution
2.8 Appendix: Determining the Impulse Response
2.9 Summary
Problems
Chapter 3: Signal Representation By Fourier Series
3.1 Signals as Vectors
3.1.1 Component of a Vector
3.1.2 Component of a Signal
3.1.3 Extension to Complex Signals
3.2 Signal Comparison: Correlation
3.2.1 Application to Signal Detection
3.2.2 Correlation Functions
3.3 Signal Representation by an Orthogonal Signal Set
3.3.1 Orthogonal Vector Space
3.3.2 Orthogonal Signal Space
3.4 Trigonometric Fourier Series
3.4.1 The Effect of Symmetry
3.4.2 Determining the Fundamental Frequency and Period
3.5 Existence and Convergence of the Fourier Series
3.5.1 Convergence of a Series
3.5.2 The Role of Amplitude and Phase Spectra in Waveshaping
3.6 Exponential Fourier Series
3.6.1 Exponential Fourier Spectra
3.6.2 Parseval's Theorem
3.6.3 Making Life Easier: Fourier Series Properties
3.7 LTIC System Response to Periodic Inputs
3.8 Numerical Computation of D_n
3.9 MATLAB: Fourier Series Applications
3.9.1 Periodic Functions and the Gibbs Phenomenon
3.9.2 Optimization and Phase Spectra
3.10 Summary
Problems
Chapter 4: Continuous-Time Signal Analysis: The Fourier Transform
4.1 Aperiodic Signal Representation by the Fourier Integral
4.1.1 Physical Appreciation of the Fourier Transform
4.1.2 LTIC System Response Using the Fourier Transform
4.2 Transforms of Some Useful Functions
4.3 Some Properties of the Fourier Transform
Time-Frequency Duality in the Transform Operations
Linearity
Conjugation and Conjugate Symmetry
Duality
The Scaling Property
Reciprocity of Signal Duration and Its Bandwidth
The Time-Shifting Property
The Frequency-Shifting Property
4.4 Signal Transmission Through LTIC Systems
4.5 Ideal and Practical Filters
4.6 Signal Energy
4.7 Application to Communications: Amplitude Modulation
4.7.1 Double-Sideband, Suppressed Carrier (DSB-SC) Modulation
4.7.2 Amplitude Modulation (AM)
4.7.3 Single-Sideband Modulation (SSB)
4.8 Angle Modulation
4.8.1 The Concept of Instantaneous Frequency
4.8.2 Bandwidth of Angle-Modulated Signals
4.8.3 Generation and Demodulation of Angle-Modulated Signals
4.8.4 Frequency-Division Multiplexing
4.9 Data Truncation: Window Functions
4.9.1 Using Windows in Filter Design
4.10 MATLAB: Fourier Transform Topics
4.10.1 The Sinc Function and the Scaling Property
4.10.2 Parseval's Theorem and Essential Bandwidth
4.10.3 Spectral Sampling
4.10.4 Kaiser Window Functions
4.11 Summary
Problems
Chapter 5: Sampling
5.1 The Sampling Theorem
5.1.1 Practical Sampling
5.2 Signal Reconstruction
5.2.1 Practical Difficulties in Signal Reconstruction
5.2.2 Some Applications of the Sampling Theorem
5.3 Analog-to-Digital (A/D) Conversion
5.4 Dual of Time Sampling: Spectral Sampling
5.5 Numerical Computation of the Fourier Transform: The Discrete Fourier Transform
5.5.1 Some Properties of the DFT
5.5.2 Some Applications of the DFT
5.6 The Fast Fourier Transform (FFT)
5.7 MATLAB: The Discrete Fourier Transform
5.7.1 Computing the Discrete Fourier Transform
5.7.2 Improving the Picture with Zero Padding
5.7.3 Quantization
5.8 Summary
Problems
Chapter 6: Continuous-Time System Analysis Using the Laplace Transform
6.1 The Laplace Transform
6.1.1 An Intuitive Understanding of the Laplace Transform
6.1.2 Analytical Development of the Bilateral Laplace Transform
6.1.3 Finding the Inverse Transform
6.2 Some Properties of the Laplace Transform
6.2.1 Time Shifting
6.2.2 Frequency Shifting
6.2.3 The Time-Differentiation Property
6.2.4 The Time-Integration Property
6.2.5 The Scaling Property
6.2.6 Time Convolution and Frequency Convolution
6.3 Solution of Differential and Integro-Differential Equations
6.3.1 Comments on Initial Conditions at 0- and at 0+
6.3.2 Zero-State Response
6.3.3 Stability
6.4 Analysis of Electrical Networks: The Transformed Network
6.4.1 Analysis of Active Circuits
6.5 Block Diagrams
6.6 System Realization
6.6.1 Direct Form I Realization
6.6.2 Direct Form II Realization
6.6.3 Cascade and Parallel Realization
6.6.4 Transposed Realization
6.6.5 Using Operational Amplifiers for System Realization
6.7 Application to Feedback and Controls
6.7.1 Analysis of a Simple Control System
6.7.2 Analysis of a Second-Order System
6.7.3 Root Locus
6.7.4 Steady-State Errors
6.7.5 Compensation
6.7.6 Stability Considerations
6.8 The Bilateral Laplace Transform
6.8.1 Properties of Bilateral Laplace Transform
6.8.2 Using the Bilateral Transform for Linear System Analysis
6.9 Summary
Problems
Chapter 7: Frequency Response and Analog Filters
7.1 Frequency Response of an LTIC system
7.1.1 Steady-State Response to Causal Sinusoidal Inputs
7.2 Bode Plots
7.2.1 Constant (k * a1 * a2) / (b1 * b3)
7.2.2 Pole (or Zero) at the origin
7.2.3 First-Order Pole (or Zero)
7.2.4 Second-Order Pole (or Zero)
7.2.5 The Transfer Function from the Frequency Response
7.3 Control System Design Using Frequency Response
7.3.1 Relative Stability: Gain and Phase Margins
7.3.2 Transient Performance in Terms of Frequency Response
7.4 Filter Design by Placement of Poles and Zeros of H(s)
7.4.1 Dependence of Frequency Response on Poles and Zeros of H(s)
7.4.2 Lowpass Filters
7.4.3 Bandpass Filters
7.4.4 Notch (Bandstop) Filters
7.4.5 Practical Filters and Their Specifications
7.5 Butterworth Filters
7.6 Chebyshev Filters
7.6.1 Inverse Chebyshev Filters
7.6.2 Elliptic Filters
7.7 Frequency Transformations
7.7.1 Highpass Filters
7.7.2 Bandpass Filters
7.7.3 Bandstop Filters
7.8 Filters to Satisfy Distortionless Transmission Conditions
7.9 MATLAB: Continuous-Time Filters
7.9.1 Frequency Response and Polynomial Evaluation
7.9.2 Butterworth Filters and the Find Command
7.9.3 Using Cascaded Second-Order Sections for Butterworth Filter Realization
7.10 Summary
Problems
Chapter 8: Discrete-Time Signals and Systems
8.1 Introduction
8.1.1 Size of a Discrete-Time Signal
8.2 Useful Signal Operations
8.3 Some Useful Discrete-Time Signal Models
8.3.1 Discrete-Time Impulse Function delta[n]
8.3.2 Discrete-Time Unit Step Function u[n]
8.3.3 Discrete-Time Exponential y^n
8.3.4 Discrete-Time Complex Exponential e^jΩn
8.3.5 Discrete-Time Sinusoid cos(Ωn + θ)
8.4 Aliasing and Sampling Rate
8.5 Examples of Discrete-Time Systems
8.6 MATLAB: Representing, Manipulating, and Plotting Discrete-Time Signals
8.6.1 Discrete-Time Functions and Stem Plots
8.7 Summary
Problems
Chapter 9: Time-Domain Analysis of Discrete-Time Systems
9.1 Classification of Discrete-Time Systems
9.2 Discrete-Time System Equations
9.2.1 Recursive (Iterative) Solution of Difference Equation
9.3 System Response to Internal Conditions: The Zero-Input Response
9.4 The Unit Impulse Response h[n]
9.4.1 The Closed-Form Solution of h[n]
9.5 System Response to External Input: The Zero-State Response
9.5.1 Graphical Procedure for the Convolution Sum
9.5.2 Interconnected Systems
9.5.3 Total Response
9.6 System Stability
9.6.1 External (BIBO) Stability
9.6.2 Internal (Asymptotic) Stability
9.6.3 Relationship between BIBO and Asymptotic Stability
9.7 Intuitive Insights into System Behavior
9.8 MATLAB: Discrete-Time Systems
9.8.1 System Responses Through Filtering
9.8.2 A Custom Filter Function
9.8.3 Discrete-Time Convolution
9.9 Appendix: Impulse Response for a Special Case
9.10 Summary
Problems
Chapter 10: Fourier Analysis of Discrete-Time Signals
10.1 Periodic Signal Representation by Discrete-Time Fourier Series
10.1.1 Fourier Spectra of a Periodic Signal x[n]
10.2 Aperiodic Signal Representation by Fourier Integral
10.3 Properties of the DTFT
10.4 DTFT Connection with the CTFT
10.5 LTI Discrete-Time System Analysis by DTFT
10.5.1 Distortionless Transmission
10.5.2 Ideal and Practical Filters
10.6 Signal Processing by the DFT and FFT
10.6.1 Computation of the Discrete-Time Fourier Series (DTFS)
10.6.2 Computation of the DTFT and Its Inverse
10.6.3 Discrete-Time Filtering (Convolution) Using the DFT
10.6.4 Block Convolution
10.7 Generalization of the DTFT to the z-Transform
10.8 MATLAB: Working with the DTFS and the DTFT
10.8.1 Computing the Discrete-Time Fourier Series
10.8.2 Measuring Code Performance
10.9 Summary
Problems
Chapter 11: Discrete-Time System Analysis Using the z-Transform
11.1 The z-Transform
11.1.1 Inverse Transform by Partial Fraction Expansion and Tables
11.1.2 Inverse z-Transform by Power Series Expansion
11.2 Some Properties of the z-Transform
11.2.1 Time-Shifting Properties
11.2.2 z-Domain Scaling Property (Multiplication by y^n)
11.2.3 z-Domain Differentiation Property (Multiplication by n)
11.2.4 Time-Reversal Property
11.2.5 Convolution Property
11.3 z-Transform Solution of Linear Difference Equations
11.3.1 Zero-State Response of LTID Systems: The Transfer Function
11.3.2 Stability
11.3.3 Inverse Systems
11.4 System Realization
11.5 Connecting the Laplace and z-Transforms
11.6 Sampled-Data (Hybrid) Systems
11.7 The Bilateral z-Transform
11.7.1 Properties of the Bilateral z-Transform
11.7.2 Using the Bilateral z-Transform for Analysis of LTID Systems
11.8 Summary
Problems
Chapter 12: Frequency Response and Digital Filters
12.1 Frequency Response of Discrete-Time Systems
12.1.1 The Periodic Nature of Frequency Response
12.2 Frequency Response From Pole-Zero Locations
12.3 Digital Filters
12.4 Filter Design Criteria
12.4.1 Time-Domain Equivalence Criterion
12.4.2 Frequency-Domain Equivalence Criterion
12.5 Recursive Filter Design by the Time-Domain Criterion: The Impulse Invariance Method
12.6 Recursive Filter Design by the Frequency-Domain Criterion: The Bilinear Transformation Method
12.6.1 Bilinear Transformation Method with Prewarping
12.7 Nonrecursive Filters
12.7.1 Symmetry Conditions for Linear-Phase Response
12.8 Nonrecursive Filter Design
12.8.1 Time-Domain Equivalence Method of FIR Filter Design
12.8.2 Nonrecursive Filter Design by the Frequency-Domain Criterion: The Frequency Sampling Method
12.9 MATLAB: Designing High-Order Filters
12.9.1 IIR Filter Design Using the Bilinear Transform
12.9.2 FIR Filter Design Using Frequency Sampling
12.10 Summary
Problems
Chapter 13: State-Space Analysis
13.1 Mathematical Preliminaries
13.1.1 Derivatives and Integrals of a Matrix
13.1.2 The Characteristic Equation of a Matrix: The Cayley-Hamilton Theorem
13.1.3 Computation of an Exponential and a Power of a Matrix
13.2 Introduction to State Space
13.3 A Systematic Procedure to Determine State Equations
13.3.1 Electrical Circuits
13.3.2 State Equations from a Transfer Function
13.4 Solution of State Equations
13.4.1 Laplace Transform Solution of State Equations
13.5 Linear Transformation of a State Vector
13.5.1 Diagonalization of Matrix A
13.6 Controllability and Observability
13.6.1 Inadequacy of the Transfer Function Description of a System
13.7 State-Space Analysis of Discrete-Time Systems
13.7.1 Solution in State Space
13.8 MATLAB: Toolboxes and State-Space Analysis
13.8.1 z-Transform Solutions to Discrete-Time, State-Space Systems
13.8.2 Transfer Functions from State-Space Representations
13.8.3 Controllability and Observability of Discrete-Time Systems
13.9 Summary
Problems
Index
A - B
B - C
C - D
D - F
F - I
I - M
M - N
N - R
R - S
S - T
U - Z


Not for Profit. All for Education. Oxford University Press USA is a not-for-profit publisher dedicated to offering the highest quality textbooks at the best possible prices. We believe that it is important to provide everyone with access to superior textbooks at affordable prices. Oxford University Press textbooks are 30%-70% less expensive than comparable books from commercial publishers. The press is a department of the University of Oxford, and our publishing proudly serves the university's mission: promoting excellence in research, scholarship, and education around the globe. We do not publish in order to generate revenue: we generate revenue in order to publish and also to fund scholarships, provide start-up grants to early-stage researchers, and refurbish libraries. What does this mean to you? It means that Oxford University Press USA published this book to best support your studies while also being mindful of your wallet. Not for Profit. All for Education.

As a not-for-profit publisher, Oxford University Press USA is uniquely situated to offer the highest quality scholarship at the best possible prices.

OXFORD UNIVERSITY PRESS

SIGNAL PROCESSING AND LINEAR SYSTEMS

THE OXFORD SERIES IN ELECTRICAL AND COMPUTER ENGINEERING Adel S. Sedra, Series Editor Allen and Holberg, CMOS Analog Circuit Design, 3rd edition Bobrow, Elementary Linear Circuit Analysis, 2nd edition Bobrow, Fundamentals of Electrical Engineering, 2nd edition Campbell, Fabrication Engineering at the Micro- and Nanoscale, 4th edition Chen, Digital Signal Processing Chen, Linear System Theory and Design, 4th edition Chen, Signals and Systems, 3rd edition Comer, Digital Logic and State Machine Design, 3rd edition Comer, Microprocessor-Based System Design Cooper and McGillem, Probabilistic Methods of Signal and System Analysis, 3rd edition Dimitrijev, Principles of Semiconductor Devices, 2nd edition Dimitrijev, Understanding Semiconductor Devices Fortney, Principles of Electronics: Analog & Digital Franco, Electric Circuits Fundamentals Ghausi, Electronic Devices and Circuits: Discrete and Integrated Guru and Hiziroglu, Electric Machinery and Transformers, 3rd edition Houts, Signal Analysis in Linear Systems Jones, Introduction to Optical Fiber Communication Systems Krein, Elements of Power Electronics, 2nd edition Kuo, Digital Control Systems, 3rd edition Lathi and Green, Linear Systems and Signals, 3rd edition Lathi and Ding, Modern Digital and Analog Communication Systems, 5th edition Lathi, Signal Processing and Linear Systems, 2nd edition Martin, Digital Integrated Circuit Design Miner, Lines and Electromagnetic Fields for Engineers Mitra, Signals and Systems Parhami, Computer Architecture Parhami, Computer Arithmetic, 2nd edition Roberts and Sedra, SPICE, 2nd edition Roberts, Taenzler, and Burns, An Introduction to Mixed-Signal IC Test and Measurement, 2nd edition Roulston, An Introduction to the Physics of Semiconductor Devices Sadiku, Elements of Electromagnetics, 7th edition Santina, Stubberud, and Hostetter, Digital Control System Design, 2nd edition Sarma, Introduction to Electrical Engineering Schaumann, Xiao, and Van Valkenburg, Design of Analog Filters, 3rd edition Schwarz and Oldham, Electrical Engineering: An Introduction, 2nd edition Sedra and Smith, Microelectronic Circuits, 8th edition Stefani, Shahian, Savant, and Hostetter, Design of Feedback Control Systems, 4th edition Tsividis, Operation and Modeling of the MOS Transistor, 3rd edition Van Valkenburg, Analog Filter Design Warner and Grung, Semiconductor Device Electronics Wolovich, Automatic Control Systems Yariv and Yeh, Photonics: Optical Electronics in Modern Communications, 6th edition Zak, Systems and Control

SIGNAL PROCESSING AND LINEAR SYSTEMS SECOND EDITION

B. P. Lathi and R. A. Green

New York Oxford OXFORD UNIVERSITY PRESS 2018

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trademark of Oxford University Press in the UK and certain other countries. Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.

© 2021 by Oxford University Press For titles covered by Section 112 of the US Higher Education Opportunity Act, please visit www.oup.com/us/he for the latest information about pricing and alternate formats. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer. Library of Congress Cataloging-in-Publication Data Names: Lathi, B. P. (Bhagwandas Pannalal), author. | Green, R. A. (Roger A.), author. Title: Signal processing and linear systems / B.P. Lathi and R.A. Green. Description: Second edition. | New York: Oxford University Press, 2021. | Series: The Oxford series in electrical and computer engineering | Includes index. Subjects: LCSH: Signal processing. | Linear systems. | System analysis.

Printed by LSC Communications, United States of America

CONTENTS

PREFACE x

B BACKGROUND B.1 Complex Numbers 1 B.2 Sinusoids 16 B.3 Sketching Signals 20 B.4 Cramer's Rule 22 B.5 Partial Fraction Expansion 25 B.6 Vectors and Matrices 35 B.7 MATLAB: Elementary Operations 42 B.8 Appendix: Useful Mathematical Formulas 54 References 58 Problems 59

1 SIGNALS AND SYSTEMS 1.1 Size of a Signal 64 1.2 Some Useful Signal Operations 71 1.3 Classification of Signals 78 1.4 Some Useful Signal Models 82 1.5 Even and Odd Functions 92 1.6 Systems 95 1.7 Classification of Systems 97 1.8 System Model: Input-Output Description 111 1.9 Internal and External Descriptions of a System 119 1.10 Internal Description: The State-Space Description 121 1.11 MATLAB: Working with Functions 126 1.12 Summary 133 References 135 Problems 136


2 TIME-DOMAIN ANALYSIS OF CONTINUOUS-TIME SYSTEMS 2.1 Introduction 150 2.2 System Response to Internal Conditions: The Zero-Input Response 151 2.3 The Unit Impulse Response h(t) 163 2.4 System Response to External Input: The Zero-State Response 168 2.5 System Stability 196 2.6 Intuitive Insights into System Behavior 203 2.7 MATLAB: M-Files 212 2.8 Appendix: Determining the Impulse Response 220 2.9 Summary 221 References 223 Problems 223

3 SIGNAL REPRESENTATION BY FOURIER SERIES 3.1 Signals as Vectors 237 3.2 Signal Comparison: Correlation 243 3.3 Signal Representation by an Orthogonal Signal Set 250 3.4 Trigonometric Fourier Series 261 3.5 Existence and Convergence of the Fourier Series 277 3.6 Exponential Fourier Series 286 3.7 LTIC System Response to Periodic Inputs 303 3.8 Numerical Computation of Dn 307 3.9 MATLAB: Fourier Series Applications 309 3.10 Summary 316 References 317 Problems 318

4 CONTINUOUS-TIME SIGNAL ANALYSIS: THE FOURIER TRANSFORM 4.1 Aperiodic Signal Representation by the Fourier Integral 330 4.2 Transforms of Some Useful Functions 340 4.3 Some Properties of the Fourier Transform 352 4.4 Signal Transmission Through LTIC Systems 372 4.5 Ideal and Practical Filters 381 4.6 Signal Energy 384 4.7 Application to Communications: Amplitude Modulation 388 4.8 Angle Modulation 401 4.9 Data Truncation: Window Functions 414 4.10 MATLAB: Fourier Transform Topics 420 4.11 Summary 425 References 427 Problems 427

5 SAMPLING 5.1 The Sampling Theorem 440 5.2 Signal Reconstruction 449 5.3 Analog-to-Digital (A/D) Conversion 463 5.4 Dual of Time Sampling: Spectral Sampling 466 5.5 Numerical Computation of the Fourier Transform: The Discrete Fourier Transform 469 5.6 The Fast Fourier Transform (FFT) 488 5.7 MATLAB: The Discrete Fourier Transform 491 5.8 Summary 498 References 499 Problems 499

6 CONTINUOUS-TIME SYSTEM ANALYSIS USING THE LAPLACE TRANSFORM 6.1 The Laplace Transform 509 6.2 Some Properties of the Laplace Transform 532 6.3 Solution of Differential and Integro-Differential Equations 544 6.4 Analysis of Electrical Networks: The Transformed Network 557 6.5 Block Diagrams 570 6.6 System Realization 572 6.7 Application to Feedback and Controls 588 6.8 The Bilateral Laplace Transform 612 6.9 Summary 622 References 624 Problems 624

7 FREQUENCY RESPONSE AND ANALOG FILTERS 7.1 Frequency Response of an LTIC System 638 7.2 Bode Plots 646 7.3 Control System Design Using Frequency Response 662 7.4 Filter Design by Placement of Poles and Zeros of H(s) 667 7.5 Butterworth Filters 677 7.6 Chebyshev Filters 688 7.7 Frequency Transformations 700 7.8 Filters to Satisfy Distortionless Transmission Conditions 714 7.9 MATLAB: Continuous-Time Filters 716 7.10 Summary 721 References 722 Problems 722

8 DISCRETE-TIME SIGNALS AND SYSTEMS 8.1 Introduction 730 8.2 Useful Signal Operations 733 8.3 Some Useful Discrete-Time Signal Models 738 8.4 Aliasing and Sampling Rate 753 8.5 Examples of Discrete-Time Systems 756 8.6 MATLAB: Representing, Manipulating, and Plotting Discrete-Time Signals 765 8.7 Summary 766 Problems 768

9 TIME-DOMAIN ANALYSIS OF DISCRETE-TIME SYSTEMS 9.1 Classification of Discrete-Time Systems 774 9.2 Discrete-Time System Equations 777 9.3 System Response to Internal Conditions: The Zero-Input Response 782 9.4 The Unit Impulse Response h[n] 789 9.5 System Response to External Input: The Zero-State Response 793 9.6 System Stability 811 9.7 Intuitive Insights into System Behavior 817 9.8 MATLAB: Discrete-Time Systems 819 9.9 Appendix: Impulse Response for a Special Case 823 9.10 Summary 824 Problems 825

10 FOURIER ANALYSIS OF DISCRETE-TIME SIGNALS 10.1 Periodic Signal Representation by Discrete-Time Fourier Series 838 10.2 Aperiodic Signal Representation by Fourier Integral 847 10.3 Properties of the DTFT 859 10.4 DTFT Connection with the CTFT 869 10.5 LTI Discrete-Time System Analysis by DTFT 872 10.6 Signal Processing by the DFT and FFT 877 10.7 Generalization of the DTFT to the z-Transform 899 10.8 MATLAB: Working with the DTFS and the DTFT 902 10.9 Summary 905 Reference 906 Problems 906


11 DISCRETE-TIME SYSTEM ANALYSIS USING THE z-TRANSFORM 11.1 The z-Transform 918 11.2 Some Properties of the z-Transform 931 11.3 z-Transform Solution of Linear Difference Equations 939 11.4 System Realization 950 11.5 Connecting the Laplace and z-Transforms 956 11.6 Sampled-Data (Hybrid) Systems 959 11.7 The Bilateral z-Transform 966 11.8 Summary 975 Reference 975 Problems 976

12 FREQUENCY RESPONSE AND DIGITAL FILTERS 12.1 Frequency Response of Discrete-Time Systems 986 12.2 Frequency Response from Pole-Zero Locations 993 12.3 Digital Filters 1001 12.4 Filter Design Criteria 1003 12.5 Recursive Filter Design by the Time-Domain Criterion: The Impulse Invariance Method 1006 12.6 Recursive Filter Design by the Frequency-Domain Criterion: The Bilinear Transformation Method 1012 12.7 Nonrecursive Filters 1027 12.8 Nonrecursive Filter Design 1031 12.9 MATLAB: Designing High-Order Filters 1047 12.10 Summary 1053 Reference 1054 Problems 1054

13 STATE-SPACE ANALYSIS 13.1 Mathematical Preliminaries 1065 13.2 Introduction to State Space 1069 13.3 A Systematic Procedure to Determine State Equations 1072 13.4 Solution of State Equations 1082 13.5 Linear Transformation of a State Vector 1095 13.6 Controllability and Observability 1103 13.7 State-Space Analysis of Discrete-Time Systems 1109 13.8 MATLAB: Toolboxes and State-Space Analysis 1117 13.9 Summary 1125 References 1125 Problems 1126 INDEX 1130


PREFACE

This book, Signal Processing and Linear Systems, presents a comprehensive treatment of signals and linear systems at an introductory level, most suitable for junior and senior college/university students in electrical engineering. This second edition contains most of the material from the third edition of our book Linear Systems and Signals (2018), with added chapters on analog and digital filters and digital signal processing. Additional applications to communications and controls are also included. The sequence of topics in this book is somewhat different from that in the Linear Systems and Signals book. Here, the Laplace transform follows Fourier, whereas in the 2018 book, the sequence is the exact opposite. This book, like its 2018 sibling, contains enough material on discrete-time systems so that it can be used not only for a traditional course in Signals and Systems, but also for an introductory course in Digital Signal Processing. One perceptive author has said: "The function of a teacher is not so much to cover the topics of study as to uncover them for students." The same can be said of a textbook. It is in this spirit that our textbooks emphasize a physical appreciation of concepts through heuristic reasoning† and the use of metaphors, analogies, and creative explanations. Such an approach is much different from a purely deductive technique that uses mere mathematical manipulation of symbols. There is a temptation to treat engineering subjects as a branch of applied mathematics. Such an approach is a perfect match to the public image of engineering as a dry and dull discipline. It ignores the physical meaning behind various derivations and deprives a student of not only an intuitive understanding but also the enjoyable experience of logically uncovering the subject matter. In this book, we use mathematics not so much to prove axiomatic theory as to support and enhance physical and intuitive understanding.
Wherever possible, theoretical results are interpreted heuristically and enhanced by carefully chosen examples and analogies. This second edition, which closely follows the organization of the first edition, has been refined in many ways. Discussions are streamlined, with material added or trimmed as needed. Equation, example, and section labeling is simplified and improved. Computer examples are fully updated to reflect the most current version of MATLAB, and new sections are included throughout the text that illustrate the use of MATLAB as a useful tool to investigate concepts and solve problems. Hundreds of additional problems provide new opportunities to learn and understand topics.

† Heuristic (from the Greek heuriskein, meaning "to invent, discover"): a method of education in which students are trained to find out things for themselves. The word "eureka" (I have found it) is the first-person singular perfect active indicative of heuriskein. We hope that this book provides every reader with many opportunities for their own eureka moments.


NOTABLE FEATURES

The notable features of this book include the following:

1. Intuitive and heuristic understanding of the concepts and physical meaning of mathematical results are emphasized throughout. Such an approach not only leads to deeper appreciation and easier comprehension of the concepts, but also makes learning enjoyable for students.
2. Often, students lack an adequate background in basic material such as complex numbers, sinusoids, quick sketching of functions, Cramer's rule, partial fraction expansion, and matrix algebra. We include a background chapter that addresses these basic and pervasive topics in electrical engineering. The response by students has been unanimously enthusiastic.
3. There are hundreds of worked-out examples in addition to exercises (usually with answers) for students to test their understanding. Also, there are over 900 end-of-chapter problems of varying difficulty.
4. Modern electrical engineering practice requires the use of computer calculation and simulation, most often the software package MATLAB. Thus, we integrate MATLAB into many of the worked examples throughout the book. Additionally, most chapters conclude with a section devoted to learning and using MATLAB in the context and support of book topics. Problem sets also contain numerous computer problems.
5. The discrete-time and continuous-time systems may be treated in sequence, or they may be integrated by using a parallel approach.
6. The summary at the end of each chapter will prove helpful to students in summing up essential developments in the chapter.
7. There are several historical notes to enhance students' interest in the subject. This information introduces students to the historical background that influenced the development of electrical engineering.
8. Unlike Linear Systems and Signals, this book provides extensive applications in the areas of communication, controls, and filtering.

ORGANIZATION AND USE

The book opens with a chapter titled "Background," which deals with the mathematical background material that the reader is expected to have already mastered. It includes topics such as complex numbers, sinusoids, sketching signals, Cramer's rule, partial fraction expansion, and matrix algebra. The next seven chapters address continuous-time signals and systems, followed by five chapters treating discrete-time signals and systems. The last chapter introduces state-space analysis. There are MATLAB examples dispersed throughout the book. The book can be readily tailored for a variety of courses of 30 to 90 lecture hours. It can also be used as a text for a first undergraduate course in Digital Signal Processing (DSP).

The organization of the book permits a great deal of flexibility in teaching continuous-time and discrete-time concepts. The natural sequence of chapters is meant for a sequential approach in which continuous-time analysis is covered first, followed by discrete-time analysis. It is also possible to integrate (interweave) continuous-time and discrete-time analysis by using an appropriate sequence of chapters.


MATLAB

MATLAB is a sophisticated language that serves as a powerful tool to better understand engineering topics, including control theory, filter design, and, of course, linear systems and signals. MATLAB's flexible programming structure promotes rapid development and analysis, and its outstanding visualization capabilities provide unique insight into system behavior and signal character.

As with any language, learning MATLAB is incremental and requires practice. This book provides two levels of exposure to MATLAB. First, MATLAB is integrated into many examples throughout the text to reinforce concepts and perform various computations. These examples utilize standard MATLAB functions as well as functions from the control system, signal-processing, and symbolic math toolboxes. MATLAB has many more toolboxes available, but these three are commonly available in most engineering departments. A second and deeper level of exposure to MATLAB is achieved by concluding chapters with a separate MATLAB section. Taken together, these sections provide a self-contained introduction to the MATLAB environment that allows even novice users to quickly gain MATLAB proficiency and competence. These sessions provide detailed instruction on how to use MATLAB to solve problems in linear systems and signals. Except for the very last chapter, care has been taken to generally avoid the use of toolbox functions in the MATLAB sessions. Rather, readers are shown the process of developing their own code. In this way, those readers without toolbox access are not at a disadvantage. All of this book's MATLAB code is available for download at the OUP Companion Website.

CREDITS AND ACKNOWLEDGMENTS

Portrait of Gerolamo Cardano, courtesy of Wellcome Collection. Portrait of Karl Friedrich Gauss, courtesy of Gauß-Gesellschaft Göttingen e.V. (Foto: A. Wittmann). Photo of Albert Michelson, courtesy of Smithsonian Institution Libraries. Portrait of Josiah Willard Gibbs, courtesy of F. B. Carpenter. Portrait of Jean-Baptiste-Joseph Fourier, courtesy of Smithsonian Institution Libraries. Portrait of Napoleon, © iStock/GeorgiosArt. Portrait of Pierre-Simon de Laplace, courtesy of Wellcome Collection, CC BY. Photo of Oliver Heaviside, courtesy of Smithsonian Institution Libraries.

Many individuals have helped us in the preparation of this book, as well as its earlier editions. We are grateful to each and every one of them for their helpful suggestions and comments. Book writing is an obsessively time-consuming activity, which causes much hardship for an author's family. We both are grateful to our families for their enormous but invisible sacrifices.

B. P. Lathi R. A. Green


BACKGROUND

The topics discussed in this chapter are not entirely new to students taking this course. You have already studied many of these topics in earlier courses or are expected to know them from your previous training. Even so, this background material deserves a review because it is so pervasive in the area of signals and systems. Investing a little time in such a review will pay big dividends later. Furthermore, this material is useful not only for this course but also for several courses that follow. It will also be helpful later, as reference material in your professional career.

B.1 COMPLEX NUMBERS

Complex numbers are an extension of ordinary numbers and are an integral part of the modern number system. Complex numbers, particularly imaginary numbers, sometimes seem mysterious and unreal. This feeling of unreality derives from their unfamiliarity and novelty rather than their supposed nonexistence! Mathematicians blundered in calling these numbers "imaginary," for the term immediately prejudices perception. Had these numbers been called by some other name, they would have become demystified long ago, just as irrational numbers or negative numbers were. Many futile attempts have been made to ascribe some physical meaning to imaginary numbers. However, this effort is needless. In mathematics we assign symbols and operations any meaning we wish as long as internal consistency is maintained. The history of mathematics is full of entities that were unfamiliar and held in abhorrence until familiarity made them acceptable. This fact will become clear from the following historical note.

B.1.1 A Historical Note Among early people the number system consisted only of natural numbers (positive integers) needed to express the number of children, cattle, and quivers of arrows. These people had no need for fractions. Whoever heard of two and one-half children or three and one-fourth cows! However, with the advent of agriculture, people needed to measure continuously varying quantities, such as the length of a field and the weight of a quantity of butter. The number system, therefore, was extended to include fractions. The ancient Egyptians and Babylonians knew how 1


to handle fractions, but Pythagoras discovered that some numbers (like the diagonal of a unit square) could not be expressed as a whole number or a fraction. Pythagoras, a number mystic, who regarded numbers as the essence and principle of all things in the universe, was so appalled at his discovery that he swore his followers to secrecy and imposed a death penalty for divulging this secret [1]. These numbers, however, were included in the number system by the time of Descartes, and they are now known as irrational numbers.

Until recently, negative numbers were not a part of the number system. The concept of negative numbers must have appeared absurd to early man. However, the medieval Hindus had a clear understanding of the significance of positive and negative numbers [2, 3]. They were also the first to recognize the existence of absolute negative quantities [4]. The works of Bhaskar (1114-1185) on arithmetic (Lilavati) and algebra (Bijaganit) not only use the decimal system but also give rules for dealing with negative quantities. Bhaskar recognized that positive numbers have two square roots [5]. Much later, in Europe, the men who developed the banking system that arose in Florence and Venice during the late Renaissance (fifteenth century) are credited with introducing a crude form of negative numbers. The seemingly absurd subtraction of 7 from 5 seemed reasonable when bankers began to allow their clients to draw seven gold ducats while their deposit stood at five. All that was necessary for this purpose was to write the difference, 2, on the debit side of a ledger [6].

Thus, the number system was once again broadened (generalized) to include negative numbers. The acceptance of negative numbers made it possible to solve equations such as x + 5 = 0, which had no solution before. Yet for equations such as x² + 1 = 0, leading to x² = −1, the solution could not be found in the real number system.
It was therefore necessary to define a completely new kind of number with its square equal to −1. During the time of Descartes and Newton, imaginary (or complex) numbers came to be accepted as part of the number system, but they were still regarded as algebraic fiction. The Swiss mathematician Leonhard Euler introduced the notation i (for imaginary) around 1777 to represent √−1. Electrical engineers use the notation j instead of i to avoid confusion with the notation i often used for electrical current. Thus,

j² = −1  and  √−1 = ±j

This notation allows us to determine the square root of any negative number. For example,

√−4 = √4 × √−1 = ±2j

When imaginary numbers are included in the number system, the resulting numbers are called complex numbers.

ORIGINS OF COMPLEX NUMBERS

Ironically (and contrary to popular belief), it was not the solution of a quadratic equation, such as x² + 1 = 0, but a cubic equation with real roots that made imaginary numbers plausible and acceptable to early mathematicians. They could dismiss √−1 as pure nonsense when it appeared as a solution to x² + 1 = 0 because this equation has no real solution. But in 1545, Gerolamo Cardano of Milan published Ars Magna (The Great Art), the most important algebraic work of the Renaissance. In this book, he gave a method of solving a general cubic equation in which a root


of a negative number appeared in an intermediate step. According to his method, the solution to a third-order equation† x³ + ax + b = 0 is given by

x = ∛( −b/2 + √(b²/4 + a³/27) ) + ∛( −b/2 − √(b²/4 + a³/27) )

For example, to find a solution of x³ + 6x − 20 = 0, we substitute a = 6, b = −20 in the foregoing equation to obtain

x = ∛(10 + √108) + ∛(10 − √108) = ∛20.392 − ∛0.392 = 2

We can readily verify that 2 is indeed a solution of x³ + 6x − 20 = 0. But when Cardano tried to solve the equation x³ − 15x − 4 = 0 by this formula, his solution was

x = ∛(2 + √−121) + ∛(2 − √−121)

What was Cardano to make of this equation in the year 1545? In those days, negative numbers were themselves suspect, and a square root of a negative number was doubly preposterous! Today, we know that

(2 ± j)³ = 2 ± j11 = 2 ± √−121

Therefore, Cardano's formula gives

x = (2 + j) + (2 − j) = 4

We can readily verify that x = 4 is indeed a solution of x³ − 15x − 4 = 0. Cardano tried to explain halfheartedly the presence of √−121 but ultimately dismissed the whole enterprise as being "as subtle as it is useless." A generation later, however, Raphael Bombelli (1526-1573), after examining Cardano's results, proposed acceptance of imaginary numbers as a necessary vehicle that would transport the mathematician from the real cubic equation to its real solution. In other words, although we begin and end with real numbers, we seem compelled to move into an unfamiliar world of imaginaries to complete our journey. To mathematicians of the day, this proposal seemed incredibly strange [7]. Yet they could not dismiss the idea of imaginary numbers so easily because this concept yielded the real solution of an equation. It took two more centuries

† This equation is known as the depressed cubic equation. A general cubic equation

y³ + py² + qy + r = 0

can always be reduced to a depressed cubic form by substituting y = x- (p/3). Therefore, any general cubic equation can be solved if we know the solution to the depressed cubic. The depressed cubic was independently solved, first by Scipione del Ferro (1465-1526) and then by Niccolo Fontana (1499-1557). The latter is better known in the history of mathematics as Tartaglia ("Stammerer"). Cardano learned the secret of the depressed cubic solution from Tartaglia. He then showed that by using the substitution y = x- (p/3), a general cubic is reduced to a depressed cubic.


for the full importance of complex numbers to become evident in the works of Euler, Gauss, and Cauchy. Still, Bombelli deserves credit for recognizing that such numbers have a role to play in algebra [7].

Gerolamo Cardano and Karl Friedrich Gauss

In 1799 the German mathematician Karl Friedrich Gauss, at the ripe age of 22, proved the fundamental theorem of algebra, namely that every algebraic equation in one unknown has a root in the form of a complex number. He showed that every equation of the nth order has exactly n solutions (roots), no more and no less. Gauss was also one of the first to give a coherent account of complex numbers and to interpret them as points in a complex plane. It is he who introduced the term complex numbers and paved the way for their general and systematic use. The number system was once again broadened or generalized to include imaginary numbers. Ordinary (or real) numbers became a special case of generalized (or complex) numbers.

The utility of complex numbers can be understood readily by an analogy with two neighboring countries X and Y, as illustrated in Fig. B.1. If we want to travel from City a to City b (both in Country X), the shortest route is through Country Y, although the journey begins and ends in Country X. We may, if we desire, perform this journey by an alternate route that lies exclusively in X, but this alternate route is longer. In mathematics we have a similar situation with real numbers (Country X) and complex numbers (Country Y). Most real-world problems start with real numbers, and the final results must also be in real numbers. But the derivation of results is considerably simplified by using complex numbers as an intermediary. It is also possible to solve any real-world problem by an alternate method, using real numbers exclusively, but such procedures would increase the work needlessly.

Figure B.1 Use of complex numbers can reduce the work.

B.1.2 Algebra of Complex Numbers

A complex number (a, b) or a + jb can be represented graphically by a point whose Cartesian coordinates are (a, b) in a complex plane (Fig. B.2). Let us denote this complex number by z so that

z = a + jb    (B.1)

This representation is the Cartesian (or rectangular) form of complex number z. The numbers a and b (the abscissa and the ordinate) of z are the real part and the imaginary part, respectively, of z. They are also expressed as

Re z = a  and  Im z = b

Note that in this plane all real numbers lie on the horizontal axis, and all imaginary numbers lie on the vertical axis. Complex numbers may also be expressed in terms of polar coordinates. If (r, θ) are the polar coordinates of a point z = a + jb (see Fig. B.2), then

a = r cos θ  and  b = r sin θ

Consequently,

z = a + jb = r cos θ + j r sin θ = r(cos θ + j sin θ)    (B.2)

Figure B.2 Representation of a number in the complex plane.


Euler's formula states that

e^{jθ} = cos θ + j sin θ    (B.3)

To prove Euler's formula, we use a Maclaurin series to expand e^{jθ}, cos θ, and sin θ:

e^{jθ} = 1 + jθ + (jθ)²/2! + (jθ)³/3! + (jθ)⁴/4! + (jθ)⁵/5! + (jθ)⁶/6! + ···
       = 1 + jθ − θ²/2! − jθ³/3! + θ⁴/4! + jθ⁵/5! − θ⁶/6! − ···

cos θ = 1 − θ²/2! + θ⁴/4! − θ⁶/6! + θ⁸/8! − ···

sin θ = θ − θ³/3! + θ⁵/5! − θ⁷/7! + ···

Clearly, it follows that e^{jθ} = cos θ + j sin θ. Using Eq. (B.3) in Eq. (B.2) yields

z = r e^{jθ}    (B.4)

This representation is the polar form of complex number z. Summarizing, a complex number can be expressed in rectangular form a + jb or polar form r e^{jθ} with

a = r cos θ, b = r sin θ  and  r = √(a² + b²), θ = tan⁻¹(b/a)    (B.5)

Observe that r is the distance of the point z from the origin. For this reason, r is also called the magnitude (or absolute value) of z and is denoted by |z|. Similarly, θ is called the angle of z and is denoted by ∠z. Therefore, we can also write the polar form of Eq. (B.4) as

z = |z| e^{j∠z},  where |z| = r and ∠z = θ

Using polar form, we see that the reciprocal of a complex number is given by

1/z = 1/(r e^{jθ}) = (1/r) e^{−jθ}
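Although the book's computer sessions use MATLAB, the rectangular and polar relationships of Eqs. (B.3) through (B.5) are just as easy to experiment with in Python's standard cmath module. The following sketch is our own illustration, with the sample values r = 2 and θ = π/3 chosen arbitrarily:

```python
import cmath
import math

# Build z = r*e^{j*theta} as in Eq. (B.4); r and theta are sample values.
r, theta = 2.0, math.pi / 3
z = r * cmath.exp(1j * theta)            # polar form -> complex number

# Cartesian form a + jb recovers Eq. (B.5)
a, b = z.real, z.imag
assert math.isclose(a, r * math.cos(theta))
assert math.isclose(b, r * math.sin(theta))

# Going back: |z| and the angle of z recover r and theta
assert math.isclose(abs(z), r)
assert math.isclose(cmath.phase(z), theta)

# Euler's formula itself, Eq. (B.3)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert cmath.isclose(lhs, rhs)
```

Here abs and cmath.phase play the roles of MATLAB's abs and angle.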

CONJUGATE OF A COMPLEX NUMBER

We define z*, the conjugate of z = a + jb, as

z* = a − jb    (B.6)

The graphical representations of a number z and its conjugate z* are depicted in Fig. B.2. Observe that z* is a mirror image of z about the horizontal axis. To find the conjugate of any number, we need only replace j with −j in that number (which is the same as changing the sign of its angle).

The sum of a complex number and its conjugate is a real number equal to twice the real part of the number:

z + z* = (a + jb) + (a − jb) = 2a = 2 Re z

Thus, we see that the real part of complex number z can be computed as

Re z = (z + z*)/2    (B.7)

Similarly, the imaginary part of complex number z can be computed as

Im z = (z − z*)/2j    (B.8)

The product of a complex number z and its conjugate is a real number |z|², the square of the magnitude of the number:

z z* = (a + jb)(a − jb) = a² + b² = |z|²    (B.9)

UNDERSTANDING SOME USEFUL IDENTITIES

In a complex plane, r e^{jθ} represents a point at a distance r from the origin and at an angle θ with the horizontal axis, as shown in Fig. B.3a. For example, the number −1 is at a unit distance from the origin and has an angle π or −π (more generally, π plus any integer multiple of 2π), as seen from Fig. B.3b. Therefore,

−1 = e^{j(π+2πn)},  n integer

The number 1, on the other hand, is also at a unit distance from the origin, but has an angle 0 (more generally, 0 plus any integer multiple of 2π). Therefore,

1 = e^{j2πn},  n integer    (B.10)

The number j is at a unit distance from the origin and its angle is π/2 (more generally, π/2 plus any integer multiple of 2π), as seen from Fig. B.3b. Therefore,

j = e^{j(π/2+2πn)},  n integer

Similarly,

−j = e^{j(−π/2+2πn)},  n integer

Notice that the angle of any complex number is only known within an integer multiple of 2π.

Figure B.3 Understanding some useful identities in terms of r e^{jθ}.
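Both the conjugate relations of Eqs. (B.7) through (B.9) and the identities above are easy to confirm numerically. The following Python sketch is our own illustration; the sample point z = 3 + j4 and the integers n are arbitrary choices:

```python
import cmath
import math

# Conjugate relations, Eqs. (B.7)-(B.9), at the sample point z = 3 + 4j
z = 3 + 4j
zc = z.conjugate()
assert (z + zc) / 2 == 3.0             # Re z, Eq. (B.7)
assert (z - zc) / 2j == 4.0            # Im z, Eq. (B.8)
assert z * zc == abs(z) ** 2 == 25.0   # z z* = |z|^2, Eq. (B.9)

# Useful identities: -1, 1, and j as unit-magnitude exponentials,
# for several integer multiples of 2*pi added to the angle
for n in (0, 1, -2):
    assert cmath.isclose(cmath.exp(1j * (math.pi + 2 * math.pi * n)), -1)
    assert cmath.isclose(cmath.exp(1j * (2 * math.pi * n)), 1)
    assert cmath.isclose(cmath.exp(1j * (math.pi / 2 + 2 * math.pi * n)), 1j)
```

Note that Python's cmath.phase, like MATLAB's angle, always returns the principal value of the angle, in (−π, π].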

This discussion shows the usefulness of the graphic picture of r e^{jθ}. This picture is also helpful in several other applications. For example, to determine the limit of e^{(α+jω)t} as t → ∞, we note that

e^{(α+jω)t} = e^{αt} e^{jωt}

Now the magnitude of e^{jωt} is unity regardless of the value of ω or t because |e^{jωt}| = 1. Therefore, e^{αt} determines the behavior of e^{(α+jω)t} as t → ∞, and the limit is 0 for α < 0.

In MATLAB, a conversion from polar to Cartesian form can be verified directly. For example, for 2e^{jπ/3}:

>> real(2*exp(1j*pi/3))
ans = 1.0000
>> imag(2*exp(1j*pi/3))
ans = 1.7321

Since MATLAB defaults to Cartesian form, we could have verified the entire result in one step.

>> 2*exp(1j*pi/3)
ans = 1.0000 + 1.7321i

One can also use the pol2cart command to convert polar to Cartesian coordinates.

Figure B.5 From polar to Cartesian form.

ARITHMETICAL OPERATIONS, POWERS, AND ROOTS OF COMPLEX NUMBERS

To conveniently perform addition and subtraction, complex numbers should be expressed in Cartesian form. Thus, if

z1 = 3 + j4 = 5e^{j53.1°}  and  z2 = 2 + j3 = √13 e^{j56.3°}

then

z1 + z2 = (3 + j4) + (2 + j3) = 5 + j7

If z1 and z2 are given in polar form, we would need to convert them into Cartesian form for the purpose of adding (or subtracting). Multiplication and division, however, can be carried out in either Cartesian or polar form, although the latter proves to be much more convenient. This is because if z1 and z2 are expressed in polar form as

z1 = r1 e^{jθ1}  and  z2 = r2 e^{jθ2}

then

z1 z2 = r1 r2 e^{j(θ1+θ2)}  and  z1/z2 = (r1/r2) e^{j(θ1−θ2)}

Moreover,

z^n = r^n e^{jnθ}  and  z^{1/n} = r^{1/n} e^{jθ/n}    (B.11)

This shows that the operations of multiplication, division, powers, and roots can be carried out with remarkable ease when the numbers are in polar form. Strictly speaking, there are n values for z^{1/n} (the nth root of z). To find all the n roots, we reexamine Eq. (B.11):

z^{1/n} = [r e^{j(θ+2πk)}]^{1/n} = r^{1/n} e^{j(θ+2πk)/n},  k = 0, 1, 2, ..., n−1    (B.12)

The value of z^{1/n} given in Eq. (B.11) is the principal value of z^{1/n}, obtained by taking the nth root of the principal value of z, which corresponds to the case k = 0 in Eq. (B.12).
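A short Python sketch (our own, not from the text) makes the n-valued nature of z^{1/n} concrete by generating all the roots from Eq. (B.12) and verifying each one:

```python
import cmath
import math

def nth_roots(z, n):
    """All n values of z**(1/n), following Eq. (B.12):
    r**(1/n) * exp(j*(theta + 2*pi*k)/n) for k = 0, 1, ..., n-1."""
    r, theta = abs(z), cmath.phase(z)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

# The three cube roots of 8: each candidate w must satisfy w**3 = 8
for w in nth_roots(8, 3):
    assert cmath.isclose(w ** 3, 8)
```

The k = 0 entry of the returned list is the principal value described in the text.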

EXAMPLE B.3 Multiplication and Division of Complex Numbers

Using both polar and Cartesian forms, determine z1 z2 and z1/z2 for the numbers

z1 = 3 + j4 = 5e^{j53.1°}  and  z2 = 2 + j3 = √13 e^{j56.3°}

Multiplication: Cartesian Form

z1 z2 = (3 + j4)(2 + j3) = (6 − 12) + j(8 + 9) = −6 + j17

Multiplication: Polar Form

z1 z2 = (5e^{j53.1°})(√13 e^{j56.3°}) = 5√13 e^{j109.4°}

Division: Cartesian Form

z1/z2 = (3 + j4)/(2 + j3)

To eliminate the complex number in the denominator, we multiply both the numerator and the denominator of the right-hand side by 2 − j3, the denominator's conjugate. This yields

z1/z2 = (3 + j4)(2 − j3)/((2 + j3)(2 − j3)) = (18 − j1)/(2² + 3²) = (18 − j1)/13 = 18/13 − j(1/13)

Division: Polar Form

z1/z2 = 5e^{j53.1°}/(√13 e^{j56.3°}) = (5/√13) e^{j(53.1° − 56.3°)} = (5/√13) e^{−j3.2°}

It is clear from this example that multiplication and division are easier to accomplish in polar form than in Cartesian form. These results are also easily verified using MATLAB. To provide one example, let us use Cartesian forms in MATLAB to verify that z1 z2 = −6 + j17.

>> z1 = 3+4j; z2 = 2+3j;
>> z1*z2
ans = -6.0000 + 17.0000i

As a second example, let us use polar forms in MATLAB to verify that z1/z2 = 1.3868e^{−j3.2°}. Since MATLAB generally expects angles to be represented in the natural units of radians, we must use appropriate conversion factors in moving between degrees and radians (and vice versa).

>> z1 = 5*exp(1j*53.1*pi/180); z2 = sqrt(13)*exp(1j*56.3*pi/180);
>> abs(z1/z2)
ans = 1.3868
>> angle(z1/z2)*180/pi
ans = -3.2000

EXAMPLE B.4 Working with Complex Numbers

For z1 = 2e^{jπ/4} and z2 = 8e^{jπ/3}, find the following: (a) 2z1 − z2, (b) 1/z1, (c) z1/z2², and (d) ∛z2.

(a) Since subtraction cannot be performed directly in polar form, we convert z1 and z2 to Cartesian form:

z1 = 2e^{jπ/4} = 2(cos π/4 + j sin π/4) = √2 + j√2
z2 = 8e^{jπ/3} = 8(cos π/3 + j sin π/3) = 4 + j4√3

Therefore,

2z1 − z2 = 2(√2 + j√2) − (4 + j4√3) = (2√2 − 4) + j(2√2 − 4√3) = −1.17 − j4.10

(b)

1/z1 = 1/(2e^{jπ/4}) = (1/2)e^{−jπ/4}

(c)

z1/z2² = 2e^{jπ/4}/(8e^{jπ/3})² = 2e^{jπ/4}/(64e^{j2π/3}) = (1/32)e^{j(π/4 − 2π/3)} = (1/32)e^{−j(5π/12)}
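The arithmetic of this example can be double-checked numerically. The following Python sketch (our own, not from the text) mirrors parts (a) through (c):

```python
import cmath
import math

# The example's numbers: z1 = 2e^{j*pi/4} and z2 = 8e^{j*pi/3}
z1 = 2 * cmath.exp(1j * math.pi / 4)
z2 = 8 * cmath.exp(1j * math.pi / 3)

# (a) 2*z1 - z2 = (2*sqrt(2) - 4) + j(2*sqrt(2) - 4*sqrt(3))
a = 2 * z1 - z2
assert cmath.isclose(a, complex(2 * math.sqrt(2) - 4,
                                2 * math.sqrt(2) - 4 * math.sqrt(3)))

# (b) 1/z1 = (1/2)e^{-j*pi/4}
assert cmath.isclose(1 / z1, 0.5 * cmath.exp(-1j * math.pi / 4))

# (c) z1/z2**2 = (1/32)e^{-j*5*pi/12}
assert cmath.isclose(z1 / z2 ** 2,
                     (1 / 32) * cmath.exp(-1j * 5 * math.pi / 12))
```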

(d) There are three cube roots of 8e^{jπ/3}. From Eq. (B.12),

z2^{1/3} = (8e^{jπ/3})^{1/3} = 2e^{j(π/3+2πk)/3} = 2e^{jπ/9}, 2e^{j7π/9}, 2e^{j13π/9}  for k = 0, 1, 2

B.2 SINUSOIDS

Express the following signals in terms of a single sinusoid: (a) x(t) = cos ω0t − √3 sin ω0t and (b) x(t) = −3 cos ω0t + 4 sin ω0t.

(a) In this case, a = 1 and b = −√3. Using Eq. (B.17) yields

C = √(1² + (−√3)²) = 2  and  θ = tan⁻¹(√3/1) = 60°

Therefore,

x(t) = 2 cos(ω0t + 60°)

We can verify this result by drawing phasors corresponding to the two sinusoids. The sinusoid cos ω0t is represented by a phasor of unit length at a zero angle with the horizontal. The phasor sin ω0t is represented by a unit phasor at an angle of −90° with the horizontal. Therefore, −√3 sin ω0t is represented by a phasor of length √3 at 90° with the horizontal, as depicted in Fig. B.8a. The two phasors added yield a phasor of length 2 at 60° with the horizontal (also shown in Fig. B.8a).

Figure B.8 Phasor addition of sinusoids.

Alternately, we note that a − jb = 1 + j√3 = 2e^{jπ/3}. Hence, C = 2 and θ = π/3. Observe that a phase shift of ±π amounts to multiplication by −1. Therefore, x(t) can also be expressed alternatively as

x(t) = −2 cos(ω0t + 60° ± 180°) = −2 cos(ω0t − 120°) = −2 cos(ω0t + 240°)

In practice, the principal value, that is, −120°, is preferred.

(b) In this case, a = −3 and b = 4. Using Eq. (B.17) yields

C = √((−3)² + 4²) = 5  and  θ = tan⁻¹(−4/−3) = −126.9°

Therefore,

x(t) = 5 cos(ω0t − 126.9°)

This result is readily verified in the phasor diagram in Fig. B.8b. Alternately, a − jb = −3 − j4 = 5e^{−j126.9°}, a fact readily confirmed using MATLAB.

>> C = abs(-3-4j)
C = 5
>> theta = angle(-3-4j)*180/pi
theta = -126.8699

Hence, C = 5 and θ = −126.8699°.
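The conversion a cos ω0t + b sin ω0t = C cos(ω0t + θ) via the phasor a − jb can be packaged in a few lines of Python as well. This is our own sketch; the helper name single_sinusoid is ours:

```python
import cmath
import math

def single_sinusoid(a, b):
    """Return (C, theta) such that a*cos(w*t) + b*sin(w*t) = C*cos(w*t + theta),
    using the phasor a - jb as in the text."""
    z = complex(a, -b)
    return abs(z), cmath.phase(z)

# Part (b) of the example: -3*cos(wt) + 4*sin(wt) = 5*cos(wt - 126.87 deg)
C, theta = single_sinusoid(-3, 4)
assert C == 5.0
assert math.isclose(math.degrees(theta), -126.8699, abs_tol=1e-4)

# Spot-check the identity itself at a few time points
w = 2.0
for t in (0.0, 0.3, 1.7):
    lhs = -3 * math.cos(w * t) + 4 * math.sin(w * t)
    rhs = C * math.cos(w * t + theta)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
```

Because cmath.phase returns the principal value, the result matches the book's preference for angles in (−180°, 180°].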

We can also perform the reverse operation, expressing C cos(ω0t + θ) in terms of cos ω0t and sin ω0t, by again using the trigonometric identity

C cos(ω0t + θ) = C cos θ cos ω0t − C sin θ sin ω0t

B.5.3 Repeated Factors of Q(x)

If F(x) has a repeated factor (x − λ)^r in its denominator, the coefficients of its partial fraction expansion satisfy

(x − λ)^r F(x) = a0 + a1(x − λ) + ··· + a_{r−1}(x − λ)^{r−1} + (x − λ)^r [ k1/(x − α1) + k2/(x − α2) + ··· + kn/(x − αn) ]    (B.29)

If we let x = λ on both sides of Eq. (B.29), we obtain

a0 = (x − λ)^r F(x) |_{x=λ}

Therefore, a0 is obtained by concealing the factor (x − λ)^r in F(x) and letting x = λ in the remaining expression (the Heaviside "cover-up" method). If we take the derivative (with respect to x) of both sides of Eq. (B.29), the right-hand side is a1 plus terms containing a factor (x − λ) in their numerators. Letting x = λ on both sides of this equation, we obtain

d/dx [ (x − λ)^r F(x) ] |_{x=λ} = a1

Thus, a1 is obtained by concealing the factor (x − λ)^r in F(x), taking the derivative of the remaining expression, and then letting x = λ. Continuing in this manner, we find

aj = (1/j!) dʲ/dxʲ [ (x − λ)^r F(x) ] |_{x=λ}    (B.30)

Observe that (x − λ)^r F(x) is obtained from F(x) by omitting the factor (x − λ)^r from its denominator. Therefore, the coefficient aj is obtained by concealing the factor (x − λ)^r in F(x), taking the jth derivative of the remaining expression, and then letting x = λ (while dividing by j!).

EXAMPLE B.10

Expand F(x) into partial fractions if

F(x) = (4x³ + 16x² + 23x + 13) / ((x + 1)³(x + 2))

The partial fractions are

F(x) = a0/(x + 1)³ + a1/(x + 1)² + a2/(x + 1) + k/(x + 2)

The coefficient k is obtained by concealing the factor (x + 2) in F(x) and then substituting x = −2 in the remaining expression:

k = (4x³ + 16x² + 23x + 13)/(x + 1)³ |_{x=−2} = 1

To find a0, we conceal the factor (x + 1)³ in F(x) and let x = −1 in the remaining expression:

a0 = (4x³ + 16x² + 23x + 13)/(x + 2) |_{x=−1} = 2

To find a1, we conceal the factor (x + 1)³ in F(x), take the derivative of the remaining expression, and then let x = −1:

a1 = d/dx [ (4x³ + 16x² + 23x + 13)/(x + 2) ] |_{x=−1} = 1

Similarly,

a2 = (1/2!) d²/dx² [ (4x³ + 16x² + 23x + 13)/(x + 2) ] |_{x=−1} = 3

Therefore,

F(x) = 2/(x + 1)³ + 1/(x + 1)² + 3/(x + 1) + 1/(x + 2)
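A quick numeric check of this expansion is easy to write; the sketch below is our own, using Python's exact rational arithmetic so that the comparison is equality rather than floating-point approximation:

```python
from fractions import Fraction

# Exact check of F(x) = 2/(x+1)^3 + 1/(x+1)^2 + 3/(x+1) + 1/(x+2)
# at rational sample points away from the poles x = -1 and x = -2.
def F1(x):
    return (4*x**3 + 16*x**2 + 23*x + 13) / ((x + 1)**3 * (x + 2))

def pf1(x):
    return 2/(x + 1)**3 + 1/(x + 1)**2 + 3/(x + 1) + 1/(x + 2)

for x in [Fraction(0), Fraction(1), Fraction(-3), Fraction(5, 2)]:
    assert F1(x) == pf1(x)
```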

B.5.4 A Combination of Heaviside "Cover-Up" and Clearing Fractions

For multiple roots, especially of higher order, the Heaviside expansion method, which requires repeated differentiation, can become cumbersome. For a function that contains several repeated and unrepeated roots, a hybrid of the two procedures proves to be the best. The simpler coefficients are determined by the Heaviside method, and the remaining coefficients are found by clearing fractions or shortcuts, thus incorporating the best of the two methods. We demonstrate this procedure by solving Ex. B.10 once again by this method.

In Ex. B.10, coefficients k and a0 are relatively simple to determine by the Heaviside expansion method. These values were found to be k = 1 and a0 = 2. Therefore,

(4x³ + 16x² + 23x + 13)/((x + 1)³(x + 2)) = 2/(x + 1)³ + a1/(x + 1)² + a2/(x + 1) + 1/(x + 2)

We now multiply both sides of this equation by (x + 1)³(x + 2) to clear the fractions. This yields

4x³ + 16x² + 23x + 13 = 2(x + 2) + a1(x + 1)(x + 2) + a2(x + 1)²(x + 2) + (x + 1)³
                      = (1 + a2)x³ + (a1 + 4a2 + 3)x² + (5 + 3a1 + 5a2)x + (4 + 2a1 + 2a2 + 1)

Equating coefficients of the third and second powers of x on both sides, we obtain

4 = 1 + a2  and  16 = a1 + 4a2 + 3

which give a2 = 3 and a1 = 1. We may stop here if we wish because the two desired coefficients, a1 and a2, are now determined. However, equating the coefficients of the two remaining powers of x yields a convenient check on the answer. Equating the coefficients of the x¹ and x⁰ terms, we obtain

23 = 5 + 3a1 + 5a2
13 = 4 + 2a1 + 2a2 + 1

These equations are satisfied by the values a1 = 1 and a2 = 3, found earlier, providing an additional check for our answers. Therefore,

F(x) = 2/(x + 1)³ + 1/(x + 1)² + 3/(x + 1) + 1/(x + 2)

which agrees with the earlier result.

A COMBINATION OF HEAVISIDE "COVER-UP" AND SHORTCUTS

In Ex. B.10, after determining the coefficients a0 = 2 and k = 1 by the Heaviside method as before, we have

(4x³ + 16x² + 23x + 13)/((x + 1)³(x + 2)) = 2/(x + 1)³ + a1/(x + 1)² + a2/(x + 1) + 1/(x + 2)

There are only two unknown coefficients, a1 and a2. If we multiply both sides of this equation by x and then let x → ∞, we can eliminate a1. This yields

4 = a2 + 1, so that a2 = 3

Therefore,

(4x³ + 16x² + 23x + 13)/((x + 1)³(x + 2)) = 2/(x + 1)³ + a1/(x + 1)² + 3/(x + 1) + 1/(x + 2)

There is now only one unknown, a1, which can be readily found by setting x equal to any convenient value, say, x = 0. This yields

13/2 = 2 + a1 + 3 + 1/2, so that a1 = 1

which agrees with our earlier answer. There are other possible shortcuts. For example, we can compute a0 (coefficient of the highest power of the repeated root), subtract this term from both sides, and then repeat the procedure.

B.5.5 Improper F(x) with m = n

A general method of handling an improper function is indicated in the beginning of this section. However, for the special case when the numerator and denominator polynomials of F(x) have the same degree (m = n), the procedure is the same as that for a proper function. We can show that for

F(x) = (bn xⁿ + b_{n−1}x^{n−1} + ··· + b0)/(xⁿ + a_{n−1}x^{n−1} + ··· + a0) = bn + k1/(x − λ1) + k2/(x − λ2) + ··· + kn/(x − λn)

the coefficients k1, k2, ..., kn are computed as if F(x) were proper. Thus,

kr = (x − λr) F(x) |_{x=λr}

For quadratic or repeated factors, the appropriate procedures discussed in Secs. B.5.2 or B.5.3 should be used as if F(x) were proper. In other words, when m = n, the only difference between the proper and improper case is the appearance of an extra constant bn in the latter. Otherwise, the procedure remains the same. The proof is left as an exercise for the reader.

Expand F(x) into partial fractions if

F(x) = (3x² + 9x − 20)/(x² + x − 6) = (3x² + 9x − 20)/((x − 2)(x + 3))

Here, m = n = 2 with bn = b2 = 3. Therefore,

F(x) = 3 + k1/(x − 2) + k2/(x + 3)

in which

k1 = (3x² + 9x − 20)/(x + 3) |_{x=2} = (12 + 18 − 20)/(2 + 3) = 10/5 = 2

and

k2 = (3x² + 9x − 20)/(x − 2) |_{x=−3} = (27 − 27 − 20)/(−3 − 2) = −20/−5 = 4

Therefore,

F(x) = (3x² + 9x − 20)/((x − 2)(x + 3)) = 3 + 2/(x − 2) + 4/(x + 3)
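Again, a quick exact check of the result (our own Python sketch, with rational arithmetic and sample points away from the poles x = 2 and x = −3):

```python
from fractions import Fraction

# Spot-check F(x) = 3 + 2/(x-2) + 4/(x+3) against the original ratio
def F2(x):
    return (3*x**2 + 9*x - 20) / ((x - 2) * (x + 3))

def pf2(x):
    return 3 + 2/(x - 2) + 4/(x + 3)

for x in [Fraction(0), Fraction(1), Fraction(10), Fraction(-1, 2)]:
    assert F2(x) == pf2(x)
```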

B.5.6 Modified Partial Fractions

In finding the inverse z-transform (Ch. 11), we require partial fractions of the form kx/(x − λi)^r rather than k/(x − λi)^r. This can be achieved by expanding F(x)/x into partial fractions. Consider, for example,

F(x) = (5x² + 20x + 18)/((x + 2)(x + 3)²)

Dividing both sides by x yields

F(x)/x = (5x² + 20x + 18)/(x(x + 2)(x + 3)²)

Expansion of the right-hand side into partial fractions as usual yields

F(x)/x = a1/x + a2/(x + 2) + a3/(x + 3) + a4/(x + 3)²

Using the procedure discussed earlier, we find a1 = 1, a2 = 1, a3 = −2, and a4 = 1. Therefore,

F(x)/x = 1/x + 1/(x + 2) − 2/(x + 3) + 1/(x + 3)²

Now multiplying both sides by x yields

F(x) = 1 + x/(x + 2) − 2x/(x + 3) + x/(x + 3)²

This expresses F(x) as the sum of partial fractions having the form kx/(x − λi)^r.
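As before, the modified expansion can be checked exactly (our own sketch, avoiding the poles x = −2 and x = −3):

```python
from fractions import Fraction

# Check F(x) = 1 + x/(x+2) - 2x/(x+3) + x/(x+3)^2 against the original ratio
def F3(x):
    return (5*x**2 + 20*x + 18) / ((x + 2) * (x + 3)**2)

def pf3(x):
    return 1 + x/(x + 2) - 2*x/(x + 3) + x/(x + 3)**2

for x in [Fraction(1), Fraction(-1), Fraction(7, 3), Fraction(10)]:
    assert F3(x) == pf3(x)
```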

B.6 VECTORS AND MATRICES

An entity specified by n numbers in a certain order (ordered n-tuple) is an n-dimensional vector. Thus, an ordered n-tuple (x1, x2, ..., xn) represents an n-dimensional vector x. A vector may be represented as a row (row vector):

x = [x1 x2 ··· xn]

or as a column (column vector):

x = [x1
     x2
     ⋮
     xn]

' 36

CHAPTERB BACKGROUND

. . . as the transformation of one vector into another. Simultaneous linear equations can be viewed · . r equations Consider, for example, them simultaneous linea y, =a11x1+a,2x2+·. -+a1n Xn y2 = a21X1 + a22x2 + · · · + 02nfn

(B.31)

If we define two column vectors x and y as

and

then Eq. (B.31) may be viewed as the relationship or the function that transforms vector x into vector y. Such a transformation is called a linear transfonnation of vectors. To perform a linear transformation, we need to define the array of coefficients a ii appearing in Eq. (B.31). This array is called a matrix and is denoted by A for convenience:

A=

[

a 11 a21

a�,1

Om2

A matrix with m rows and n columns is called a matrix of order (m,n) or an (m x n) matrix. For the special case of m = n, the matrix is called a square matrix of order n. It should be stressed at this point that a matrix is not a number such as a determinant, but an array of numbers arranged in a particular order. It is convenient to abbreviate the representation of matrix A with the form (aij)111x11, implying a matrix of order m x n with aij as its ijth element. In practice, when the order m x n is understood or need not be specified, the notation can be abbreviated to (a;j), Note that the first index i of aij indicates the row and the second index j indicates the column of the element aij in matrix A. Equation (B.31) may now be expressed in a matrix form as Yi

    [y1]   [a11  a12  ···  a1n][x1]
    [y2]   [a21  a22  ···  a2n][x2]
    [ :] = [ :     :          :][ :]
    [ym]   [am1  am2  ···  amn][xn]

or

y=Ax

(B.32)

At this point, we have not defined the multiplication of a matrix by a vector. The quantity Ax is not meaningful until such an operation has been defined.


B.6.1 Some Definitions and Properties

A square matrix whose elements are zero everywhere except on the main diagonal is a diagonal matrix. An example of a diagonal matrix is

    [2  0  0]
    [0  1  0]
    [0  0  5]

A diagonal matrix with unity for all its diagonal elements is called an identity matrix or a unit matrix, denoted by I. This is a square matrix:

        [1  0  0  ···  0]
        [0  1  0  ···  0]
    I = [:   :   :      :]
        [0  0  0  ···  1]

The order of the unit matrix is sometimes indicated by a subscript. Thus, In represents the n × n unit matrix (or identity matrix). However, we shall omit the subscript since the order is easily understood from context. A matrix having all its elements zero is a zero matrix. A square matrix A is a symmetric matrix if aij = aji (symmetry about the main diagonal). Two matrices of the same order are said to be equal if they are equal element by element. Thus, if

    A = (aij)m×n    and    B = (bij)m×n

then A = B only if aij = bij for all i and j. If the rows and columns of an m × n matrix A are interchanged so that the elements in the ith row now become the elements of the ith column (for i = 1, 2, ..., m), the resulting matrix is called the transpose of A and is denoted by A^T. It is evident that A^T is an n × m matrix. For example, if

        [2  1]
    A = [3  2]
        [1  3]

then

    A^T = [2  3  1]
          [1  2  3]

Using the abbreviated notation, if A = (aij)m×n, then A^T = (aji)n×m. Intuitively, further notice that (A^T)^T = A.
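Although the examples in this chapter use MATLAB, the transpose is easy to experiment with in any language. The following pure-Python sketch (an illustration, not from the text) stores a matrix as a list of rows and confirms that (A^T)^T = A for the 3 × 2 example above.

```python
def transpose(A):
    """Return the transpose of a matrix stored as a list of rows."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[2, 1],
     [3, 2],
     [1, 3]]            # the 3 x 2 example matrix

At = transpose(A)        # [[2, 3, 1], [1, 2, 3]], a 2 x 3 matrix
Att = transpose(At)      # transposing twice recovers A
print(At)
print(Att == A)          # True
```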

B.6.2 Matrix Algebra We shall now define matrix operations, such as addition, subtraction, multiplication, and division of matrices. The definitions should be formulated so that they are useful in the manipulation of matrices.

ADDITION OF MATRICES

For two matrices A and B, both of the same order (m × n),

    A = (aij)m×n    and    B = (bij)m×n

we define the sum A + B as

            [(a11 + b11)  (a12 + b12)  ···  (a1n + b1n)]
    A + B = [(a21 + b21)  (a22 + b22)  ···  (a2n + b2n)]
            [     :                               :     ]
            [(am1 + bm1)  (am2 + bm2)  ···  (amn + bmn)]

or

    A + B = (aij + bij)m×n

Note that two matrices can be added only if they are of the same order.

MULTIPLICATION OF A MATRIX BY A SCALAR

We multiply a matrix A by a scalar c by multiplying every element of A by c:

    cA = (c aij)m×n = Ac

Thus, we also observe that the scalar c and the matrix A commute: cA = Ac.

MATRIX MULTIPLICATION

We define the product

    AB = C

in which cij, the element of C in the ith row and jth column, is found by adding the products of the elements of A in the ith row multiplied by the corresponding elements of B in the jth column. Thus,

    cij = ai1 b1j + ai2 b2j + ··· + ain bnj = Σ_{k=1}^{n} aik bkj        (B.33)


This result is expressed as follows:

    A     ·  B     =  C
    (m×n)    (n×p)    (m×p)

where cij is formed from the ith row of A and the jth column of B.

Note carefully that if this procedure is to work, the number of columns of A must be equal to the number of rows of B. In other words, AB, the product of matrices A and B, is defined only if the number of columns of A is equal to the number of rows of B. If this condition is not satisfied, the product AB is not defined and is meaningless. When the number of columns of A is equal to the number of rows of B, matrix A is said to be conformable to matrix B for the product AB. Observe that if A is an m × n matrix and B is an n × p matrix, A and B are conformable for the product, and C is an m × p matrix. We demonstrate the use of the rule in Eq. (B.33) with the following examples:

    [2  3][1  3]   [8  9]            [1  3][2  1]   [5  4]
    [1  1][2  1] = [3  4]    and     [2  1][1  1] = [5  3]

In both cases, the two matrices are conformable. However, a product such as

    [1  3  1][2  3]
    [2  1  1][1  1]

is not defined, because a 2 × 3 matrix is not conformable to a 2 × 2 matrix. It is evident that, in general,

    AB ≠ BA

Indeed, AB may exist and BA may not exist, or vice versa. We shall see later that for some special matrices, AB = BA. When this is true, matrices A and B are said to commute. We re-emphasize that, in general, matrices do not commute. In the matrix product AB, matrix A is said to be postmultiplied by B or matrix B is said to be premultiplied by A. We may also verify the following relationships:

    (A + B)C = AC + BC
    C(A + B) = CA + CB
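Because the rule in Eq. (B.33) is purely mechanical, it can be implemented in a few lines of any language. The following pure-Python sketch (an illustration, not from the text) multiplies two matrices, rejects non-conformable operands, and confirms on a 2 × 2 example that AB ≠ BA in general.

```python
def matmul(A, B):
    """Multiply A (m x n) by B (n x p) using c_ij = sum_k a_ik * b_kj."""
    if len(A[0]) != len(B):
        raise ValueError("matrices are not conformable")
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 3], [1, 1]]
B = [[1, 3], [2, 1]]
print(matmul(A, B))   # [[8, 9], [3, 4]]
print(matmul(B, A))   # [[5, 6], [5, 7]] -- a different product
```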

We can verify that any matrix A premultiplied or postmultiplied by the identity matrix I remains unchanged:

    AI = IA = A

Of course, we must make sure that the order of I is such that the matrices are conformable for the corresponding product. We give here, without proof, another important property of matrices:

    |AB| = |A||B|

where |A| and |B| represent determinants of matrices A and B.

MULTIPLICATION OF A MATRIX BY A VECTOR

Consider Eq. (B.32), which represents Eq. (B.31). The right-hand side of Eq. (B.32) is a product of the m × n matrix A and a vector x. If, for the time being, we treat the vector x as if it were an n × 1 matrix, then the product Ax, according to the matrix multiplication rule, yields the right-hand side of Eq. (B.31). Thus, we may multiply a matrix by a vector by treating the vector as if it were an n × 1 matrix. Note that the constraint of conformability still applies. Thus, in this case, xA is not defined and is meaningless.

MATRIX INVERSION

To define the inverse of a matrix, let us consider the set of equations represented by Eq. (B.32) when m = n:

    y1 = a11 x1 + a12 x2 + ··· + a1n xn
    y2 = a21 x1 + a22 x2 + ··· + a2n xn
         ...
    yn = an1 x1 + an2 x2 + ··· + ann xn                    (B.34)

We can solve this set of equations for x1, x2, ..., xn in terms of y1, y2, ..., yn by using Cramer's rule [see Eq. (B.21)]. This yields

    [x1]        [|D11|  |D21|  ···  |Dn1|][y1]
    [x2]     1  [|D12|  |D22|  ···  |Dn2|][y2]
    [ :] = ---- [  :      :           :  ][ :]            (B.35)
    [xn]    |A| [|D1n|  |D2n|  ···  |Dnn|][yn]

in which |A| is the determinant of the matrix A and |Dij| is the cofactor of element aij in the matrix A. The cofactor of element aij is given by (−1)^(i+j) times the determinant of the (n − 1) × (n − 1) matrix that is obtained when the ith row and the jth column in matrix A are deleted. We can express Eq. (B.34) in compact matrix form as

    y = Ax                    (B.36)

We now define A⁻¹, the inverse of a square matrix A, with the property

    A⁻¹A = I    (unit matrix)

Then, premultiplying both sides of Eq. (B.36) by A⁻¹, we obtain

    A⁻¹y = A⁻¹Ax = Ix = x

or

    x = A⁻¹y                    (B.37)

A comparison of Eq. (B.37) with Eq. (B.35) shows that

           1  [|D11|  |D21|  ···  |Dn1|]
    A⁻¹ = --- [|D12|  |D22|  ···  |Dn2|]
          |A| [  :      :           :  ]
              [|D1n|  |D2n|  ···  |Dnn|]

One of the conditions necessary for a unique solution of Eq. (B.34) is that the number of equations must equal the number of unknowns. This implies that the matrix A must be a square matrix. In addition, we observe from the solution as given in Eq. (B.35) that if the solution is to exist, |A| ≠ 0.† Therefore, the inverse exists only for a square matrix and only under the condition that the determinant of the matrix be nonzero. A matrix whose determinant is nonzero is a nonsingular matrix. Thus, an inverse exists only for a nonsingular, square matrix. Since A⁻¹A = I = AA⁻¹, we further note that the matrices A and A⁻¹ commute.‡ The operation of matrix division can be accomplished through matrix inversion.

Let us find A⁻¹ if

    A = [2  1  1]
        [1  2  3]
        [3  2  1]

Here,

    |D11| = −4,    |D12| = 8,     |D13| = −4
    |D21| = 1,     |D22| = −1,    |D23| = −1
    |D31| = 1,     |D32| = −5,    |D33| = 3
† These two conditions imply that the number of equations is equal to the number of unknowns and that all the equations are independent.
‡ To prove AA⁻¹ = I, notice first that we define A⁻¹A = I. Thus, IA = AI = A(A⁻¹A) = (AA⁻¹)A. Subtracting (AA⁻¹)A, we see that IA − (AA⁻¹)A = 0, or (I − AA⁻¹)A = 0. This requires AA⁻¹ = I.

and |A| = −4. Therefore,

            1  [−4   1   1]   [ 1   −1/4  −1/4]
    A⁻¹ = ---- [ 8  −1  −5] = [−2    1/4   5/4]
           −4  [−4  −1   3]   [ 1    1/4  −3/4]
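The cofactor construction of the inverse is easy to check numerically. The following pure-Python sketch (an illustration, not from the text) computes determinants by cofactor expansion and reproduces the worked 3 × 3 example, using entry (i, j) of the inverse equal to |Dji|/|A|.

```python
def minor(M, i, j):
    """Delete row i and column j of a 3 x 3 matrix."""
    return [[M[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def det3(M):
    # cofactor expansion along the first row
    return sum((-1)**j * M[0][j] * det2(minor(M, 0, j)) for j in range(3))

def inv3(M):
    D = det3(M)
    if D == 0:
        raise ValueError("singular matrix has no inverse")
    # entry (i, j) of the inverse is the cofactor |D_ji| divided by |A|
    return [[(-1)**(i + j) * det2(minor(M, j, i)) / D for j in range(3)]
            for i in range(3)]

A = [[2, 1, 1],
     [1, 2, 3],
     [3, 2, 1]]
print(det3(A))    # -4
print(inv3(A))    # [[1.0, -0.25, -0.25], [-2.0, 0.25, 1.25], [1.0, 0.25, -0.75]]
```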

B.7 MATLAB: ELEMENTARY OPERATIONS

B.7.1 MATLAB Overview

Although MATLAB® (a registered trademark of The MathWorks, Inc.) is easy to use, it can be intimidating to new users. Over the years, MATLAB has evolved into a sophisticated computational package with thousands of functions and thousands of pages of documentation. This section provides a brief introduction to the software environment. When MATLAB is first launched, its command window appears. When MATLAB is ready to accept an instruction or input, a command prompt (>>) is displayed in the command window. Nearly all MATLAB activity is initiated at the command prompt. Entering instructions at the command prompt generally results in the creation of an object or objects. Many classes of objects are possible, including functions and strings, but usually objects are just data. Objects are placed in what is called the MATLAB workspace. If not visible, the workspace can be viewed in a separate window by typing workspace at the command prompt. The workspace provides important information about each object, including the object's name, size, and class. Another way to view the workspace is the whos command. When whos is typed at the command prompt, a summary of the workspace is printed in the command window. The who command is a short version of whos that reports only the names of workspace objects. Several functions exist to remove unnecessary data and help free system resources. To remove specific variables from the workspace, the clear command is typed, followed by the names of the variables to be removed. Just typing clear removes all objects from the workspace. Additionally, the clc command clears the command window, and the clf command clears the current figure window. Often, important data and objects created in one session need to be saved for future use. The save command, followed by the desired filename, saves the entire workspace to a file, which has the .mat extension.
It is also possible to selectively save objects by typing save followed by the filename and then the names of the objects to be saved. The load command followed by the filename is used to load the data and objects contained in a MATLAB data file (.mat file). Although MATLAB does not automatically save workspace data from one session to the next, lines entered at the command prompt are recorded in the command history. Previous command lines can be viewed, copied, and executed directly from the command history window. From the command window, pressing the up or down arrow key scrolls through previous commands and redisplays them at the command prompt. Typing the first few characters and then pressing the arrow keys scrolls through the previous commands that start with the same characters. The arrow keys allow command sequences to be repeated without retyping. Perhaps the most important and useful command for new users is help. To learn more about a function, simply type help followed by the function name. Helpful text is then displayed in


the command window. The obvious shortcoming of help is that the function name must first be known. This is especially limiting for MATLAB beginners. Fortunately, help screens often conclude by referencing related or similar functions. These references are an excellent way to learn new MATLAB commands. Typing help help, for example, displays detailed information on the help command itself and also provides reference to relevant functions, such as the lookfor command. The lookfor command helps locate MATLAB functions based on a keyword search. Simply type lookfor followed by a single keyword, and MATLAB searches for functions that contain that keyword. MATLAB also has comprehensive HTML-based help. The HTML help is accessed by using MATLAB's integrated help browser, which also functions as a standard web browser. The HTML help facility includes a function and topic index as well as full text-searching capabilities. Since HTML documents can contain graphics and special characters, HTML help can provide more information than the command-line help. After a little practice, it is easy to find information in MATLAB. When MATLAB graphics are created, the print command can save figures in a common file format such as postscript, encapsulated postscript, JPEG, or TIFF. The format of displayed data, such as the number of digits displayed, is selected by using the format command. MATLAB help provides the necessary details for both these functions. When a MATLAB session is complete, the exit command terminates MATLAB.

B.7.2 Calculator Operations

MATLAB can function as a simple calculator, working as easily with complex numbers as with real numbers. Scalar addition, subtraction, multiplication, division, and exponentiation are accomplished using the traditional operator symbols +, -, *, /, and ^. Since MATLAB predefines i = j = √−1, a complex constant is readily created using Cartesian coordinates. For example,

>> z = -3-4j
z = -3.0000 - 4.0000i

assigns the complex constant −3 − j4 to the variable z. The real and imaginary components of z are extracted by using the real and imag operators. In MATLAB, the input to a function is placed parenthetically following the function name.

>> z_real = real(z); z_imag = imag(z);

When a command is terminated with a semicolon, the statement is evaluated but the results are not displayed to the screen. This feature is useful when one is computing intermediate results, and it allows multiple instructions on a single line. Although not displayed, the results z_real = -3 and z_imag = -4 are calculated and available for additional operations such as computing |z|. There are many ways to compute the modulus, or magnitude, of a complex quantity. Trigonometry confirms that z = −3 − j4, which corresponds to a 3-4-5 triangle, has modulus |z| = |−3 − j4| = √((−3)² + (−4)²) = 5. The MATLAB sqrt command provides one way to compute the required square root.

>> z_mag = sqrt(z_real^2 + z_imag^2)
z_mag = 5

In MATLAB, most commands, including sqrt, accept inputs in a variety of forms, including constants, variables, functions, expressions, and combinations thereof. The same result is also obtained by computing |z| = √(z z*). In this case, complex conjugation is performed by using the conj command.

>> z_mag = sqrt(z*conj(z))
z_mag = 5

More simply, MATLAB computes absolute values directly by using the abs command.

>> z_mag = abs(z)
z_mag = 5

In addition to magnitude, polar notation requires phase information. The angle command provides the angle of a complex number.

>> z_rad = angle(z)
z_rad = -2.2143

MATLAB expects and returns angles in radian measure. Angles expressed in degrees require an appropriate conversion factor.

>> z_deg = angle(z)*180/pi
z_deg = -126.8699

Notice, MATLAB predefines the variable pi = π. It is also possible to obtain the angle of z using a two-argument arctangent function, atan2.

>> z_rad = atan2(z_imag,z_real)
z_rad = -2.2143

Unlike a single-argument arctangent function, the two-argument arctangent function ensures that the angle reflects the proper quadrant. MATLAB supports a full complement of trigonometric functions: standard trigonometric functions cos, sin, tan; reciprocal trigonometric functions sec, csc, cot; inverse trigonometric functions acos, asin, atan, asec, acsc, acot; and hyperbolic variations cosh, sinh, tanh, sech, csch, coth, acosh, asinh, atanh, asech, acsch, and acoth. Of course, MATLAB comfortably supports complex arguments for any trigonometric function. As with the angle command, MATLAB trigonometric functions utilize units of radians. The concept of trigonometric functions with complex-valued arguments is rather intriguing. The results can contradict what is often taught in introductory mathematics courses. For example, a common claim is that |cos(x)| ≤ 1. While this is true for real x, it is not necessarily true for complex x. This is readily verified by example using MATLAB and the cos function.

>> cos(1j)
ans = 1.5431

Problem B.1-19 investigates these ideas further. Similarly, the claim that it is impossible to take the logarithm of a negative number is false. For example, the principal value of ln(−1) is jπ, a fact easily verified by means of Euler's equation. In MATLAB, base-10 and base-e logarithms are computed by using the log10 and log commands, respectively.

>> log(-1)
ans = 0 + 3.1416i
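Both of these surprising results can be reproduced outside MATLAB as well; Python's standard cmath module behaves the same way (a quick illustration, not from the text): cos(j1) = cosh(1) > 1, and the principal logarithm of −1 is jπ.

```python
import cmath

c = cmath.cos(1j)        # a purely real result equal to cosh(1)
print(round(c.real, 4))  # 1.5431

log_m1 = cmath.log(-1)   # principal value: 0 + (pi)j
print(log_m1)
```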

B.7.3 Vector Operations

The power of MATLAB becomes apparent when vector arguments replace scalar arguments. Rather than computing one value at a time, a single expression computes many values. Typically, vectors are classified as row vectors or column vectors. For now, we consider the creation of row vectors with evenly spaced, real elements. To create such a vector, the notation a:b:c is used, where a is the initial value, b designates the step size, and c is the termination value. For example, 0:2:11 creates the length-6 vector of even-valued integers ranging from 0 to 10.

>> k = 0:2:11
k =
     0     2     4     6     8    10

In this case, the termination value does not appear as an element of the vector. Negative and noninteger step sizes are also permissible.

>> k = 11:-10/3:0
k =
   11.0000    7.6667    4.3333    1.0000

If a step size is not specified, a value of 1 is assumed.

>> k = 0:11
k =
     0     1     2     3     4     5     6     7     8     9    10    11

Vector notation provides the basis for solving a wide variety of problems. For example, consider finding the three cube roots of minus one, w³ = −1 = e^{jπ}.

B.7.4 Simple Plotting

>> plot(real(w),imag(w),'o');
>> xlabel('Re(w)'); ylabel('Im(w)'); axis equal

The axis equal command ensures that the scale used for the horizontal axis is equal to the scale used for the vertical axis. Without axis equal, the plot would appear elliptical rather than circular. Figure B.13 illustrates that the 100 unique roots of w^100 = −1 lie equally spaced on the unit circle, a fact not easily discerned from the raw numerical data. MATLAB also includes many specialized plotting functions. For example, MATLAB commands semilogx, semilogy, and loglog operate like the plot command but use base-10 logarithmic scales for the horizontal axis, vertical axis, and the horizontal and vertical axes,

Figure B.13 Unique roots of w^100 = −1.

respectively. Monochrome and color images can be displayed by using the image command, and contour plots are easily created with the contour command. Furthermore, a variety of three-dimensional plotting routines are available, such as plot3, contour3, mesh, and surf. Information about these instructions, including examples and related functions, is available from MATLAB help.
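The roots-of-unity computations above are easy to check outside MATLAB. A pure-Python sketch (an illustration, not from the text) computes the n solutions of w^n = −1 directly from the polar form w_k = e^{jπ(2k+1)/n} and confirms that each lies on the unit circle.

```python
import cmath

def roots_of_minus_one(n):
    """Return the n solutions of w**n = -1 as complex numbers."""
    return [cmath.exp(1j * cmath.pi * (2 * k + 1) / n) for k in range(n)]

w3 = roots_of_minus_one(3)    # the three cube roots of -1
for w in w3:
    # each root satisfies w**3 = -1 and has unit magnitude
    print(w, abs(w**3 + 1) < 1e-9, abs(abs(w) - 1) < 1e-12)
```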

B.7.5 Element-by-Element Operations

Suppose a new function h(t) is desired that forces an exponential envelope on the sinusoid f(t), h(t) = f(t)g(t), where g(t) = e^{−10t}. First, row vector g(t) is created.

>> g = exp(-10*t);

Given MATLAB's vector representation of g(t) and f(t), computing h(t) requires some form of vector multiplication. There are three standard ways to multiply vectors: inner product, outer product, and element-by-element product. As a matrix-oriented language, MATLAB defines the standard multiplication operator * according to the rules of matrix algebra: the multiplicand must be conformable to the multiplier. A 1 × N row vector times an N × 1 column vector results in the scalar-valued inner product. An N × 1 column vector times a 1 × M row vector results in the outer product, which is an N × M matrix. Matrix algebra prohibits multiplication of two row vectors or multiplication of two column vectors. Thus, the * operator is not used to perform element-by-element multiplication.†

Element-by-element operations require vectors to have the same dimensions. An error occurs if element-by-element operations are attempted between row and column vectors. In such cases, one vector must first be transposed to ensure both vector operands have the same dimensions. In MATLAB, most element-by-element operations are preceded by a period. For example, element-by-element multiplication, division, and exponentiation are accomplished using .*, ./, and .^, respectively. Vector addition and subtraction are intrinsically element-by-element operations and require no period. Intuitively, we know h(t) should be the same size as both g(t) and f(t). Thus, h(t) is computed using element-by-element multiplication.

>> h = f.*g;

The plot command accommodates multiple curves and also allows modification of line properties. This facilitates side-by-side comparison of different functions, such as h(t) and f(t). Line characteristics are specified by using options that follow each vector pair and are enclosed in single quotes.

>> plot(t,f,'-k',t,h,':k');
>> xlabel('t'); ylabel('Amplitude');
>> legend('f(t)','h(t)');

Here, '-k' instructs MATLAB to plot f(t) using a solid black line, while ':k' instructs MATLAB to use a dotted black line to plot h(t). A legend and axis labels complete the plot, as shown in

† While grossly inefficient, element-by-element multiplication can be accomplished by extracting the main diagonal from the outer product of two N-length vectors.

Figure B.14 Graphical comparison of f(t) and h(t).

Fig. B.14. It is also possible, although more cumbersome, to use pull-down menus to modify line properties and to add labels and legends directly in the figure window.

B.7.6 Matrix Operations

Many applications require more than row vectors with evenly spaced elements; row vectors, column vectors, and matrices with arbitrary elements are typically needed. MATLAB provides several functions to generate common, useful matrices. Given integers m, n, and vector x, the function eye(m) creates the m × m identity matrix; the function ones(m,n) creates the m × n matrix of all ones; the function zeros(m,n) creates the m × n matrix of all zeros; and the function diag(x) uses vector x to create a diagonal matrix. The creation of general matrices and vectors, however, requires each individual element to be specified. Vectors and matrices can be input spreadsheet style by using MATLAB's array editor. This graphical approach is rather cumbersome and is not often used. A more direct method is preferable. Consider a simple row vector r,

    r = [1  0  0]

The MATLAB notation a:b:c cannot create this row vector. Rather, square brackets are used to create r.

>> r = [1 0 0]
r =
     1     0     0

Square brackets enclose elements of the vector, and spaces or commas are used to separate row elements. Next, consider the 3 × 2 matrix A,

    A = [2  3]
        [4  5]
        [0  6]

Matrix A can be viewed as a three-high stack of two-element row vectors. With a semicolon to separate rows, square brackets are used to create the matrix.

>> A = [2 3;4 5;0 6]
A =
     2     3
     4     5
     0     6

Each row vector needs to have the same length to create a sensible matrix. In addition to enclosing string arguments, a single quote performs the complex conjugate transpose operation. In this way, row vectors become column vectors and vice versa. For example, a column vector c is easily created by transposing row vector r.

>> c = r'
c =
     1
     0
     0

Since vector r is real, the complex-conjugate transpose is just the transpose. Had r been complex, the simple transpose could have been accomplished by either r.' or (conj(r))'. More formally, square brackets are referred to as a concatenation operator. A concatenation combines or connects smaller pieces into a larger whole. Concatenations can involve simple numbers, such as the six-element concatenation used to create the 3 × 2 matrix A. It is also possible to concatenate larger objects, such as vectors and matrices. For example, vector c and matrix A can be concatenated to form a 3 × 3 matrix B.

>> B = [c A]
B =
     1     2     3
     0     4     5
     0     0     6

Errors will occur if the component dimensions do not sensibly match; a 2 × 2 matrix would not be concatenated with a 3 × 3 matrix, for example. Elements of a matrix are indexed much like vectors, except two indices are typically used to specify row and column.† Element (1,2) of matrix B, for example, is 2.

>> B(1,2)
ans = 2

Indices can likewise be vectors. For example, vector indices allow us to extract the elements common to the first two rows and last two columns of matrix B.

>> B(1:2,2:3)
ans =
     2     3
     4     5

† Matrix elements can also be accessed by means of a single index, which enumerates along columns. Formally, the element from row m and column n of an M × N matrix may be obtained with a single index (n − 1)M + m. For example, element (1,2) of matrix B is accessed by using the index (2 − 1)3 + 1 = 4. That is, B(4) yields 2.

One indexing technique is particularly useful and deserves special attention. A colon can be used to specify all elements along a specified dimension. For example, B(2,:) selects all column elements along the second row of B.

>> B(2,:)
ans =
     0     4     5

Now that we understand basic vector and matrix creation, we turn our attention to using these tools on real problems. Consider solving a set of three linear simultaneous equations in three unknowns.

    x1 − 2x2 + 3x3 = 1
    −√3 x1 + x2 − √5 x3 = π
    3x1 − √7 x2 + x3 = e

This system of equations is represented in matrix form according to Ax = y, where

        [  1   −2    3 ]        [1]
    A = [−√3    1  −√5 ]    y = [π]
        [  3  −√7    1 ]        [e]

Although Cramer's rule can be used to solve Ax = y, it is more convenient to solve by multiplying both sides by the matrix inverse of A. That is, x = A⁻¹Ax = A⁻¹y. Solving for x by hand or by calculator would be tedious at best, so MATLAB is used. We first create A and y.

>> A = [1 -2 3;-sqrt(3) 1 -sqrt(5);3 -sqrt(7) 1]; y = [1;pi;exp(1)];

The vector solution is found by using MATLAB's inv function.

>> x = inv(A)*y
x =
   -1.9999
   -3.8998
   -1.5999

It is also possible to use MATLAB's left divide operator x = A\y to find the same solution. The left divide is generally more computationally efficient than matrix inverses. As with matrix multiplication, left division requires that the two arguments be conformable. Of course, Cramer's rule can be used to compute individual solutions, such as x1, by using vector indexing, concatenation, and MATLAB's det command to compute determinants.

>> x1 = det([y,A(:,2:3)])/det(A)
x1 = -1.9999

Another nice application of matrices is the simultaneous creation of a family of curves. Consider h_α(t) = e^{−αt} sin(2π10t + π/6) over 0 ≤ t ≤ 0.2. Figure B.14 shows h_α(t) for α = 0 and α = 10. Let's investigate the family of curves h_α(t) for α = [0, 1, ..., 10].

An inefficient way to solve this problem is to create h_α(t) for each α of interest. This requires 11 individual cases. Instead, a matrix approach allows all 11 curves to be computed simultaneously. First, a vector is created that contains the desired values of α.

>> alpha = (0:10);

By using a sampling interval of 1 millisecond, Δt = 0.001, a time vector is also created.

>> t = (0:0.001:0.2)';

The result is a length-201 column vector. By replicating the time vector for each of the 11 curves required, a time matrix T is created. This replication can be accomplished by using an outer product between t and a 1 × 11 vector of ones.†

>> T = t*ones(1,11);

The result is a 201 × 11 matrix that has identical columns. Right multiplying T by a diagonal matrix created from α, the columns of T can be individually scaled and the final result computed.

>> H = exp(-T*diag(alpha)).*sin(2*pi*10*T+pi/6);

Here, H is a 201 × 11 matrix, where each column corresponds to a different value of α. That is, H = [h0, h1, ..., h10], where the hα are column vectors. As shown in Fig. B.15, the 11 desired curves are simultaneously displayed by using MATLAB's plot command, which allows matrix arguments.

>> plot(t,H); xlabel('t'); ylabel('h(t)');

This example illustrates an important technique called vectorization, which increases execution efficiency for interpretive languages such as MATLAB. Algorithm vectorization uses matrix and

Figure B.15 h_α(t) for α = [0, 1, ..., 10].

† The repmat command provides a more flexible method to replicate or tile objects. Equivalently, T = repmat(t,1,11).


vector operations to avoid manual repetition and loop structures. It takes practice and effort to become proficient at vectorization, but the worthwhile result is efficient, compact code.‡

B.7.7 Partial Fraction Expansions

There are a wide variety of techniques and shortcuts to compute the partial fraction expansion of rational function F(x) = B(x)/A(x), but few are simpler than the MATLAB residue command. The basic form of this command is

>> [R,P,K] = residue(B,A)

The two input vectors B and A specify the polynomial coefficients of the numerator and denominator, respectively. These vectors are ordered in descending powers of the independent variable. Three vectors are output. The vector R contains the coefficients of each partial fraction, and vector P contains the corresponding roots of each partial fraction. For a root repeated r times, the r partial fractions are ordered in ascending powers. When the rational function is not proper, the vector K contains the direct terms, which are ordered in descending powers of the independent variable. To demonstrate the power of the residue command, consider finding the partial fraction expansion of

    F(x) = (x⁵ + π) / ((x + √2)(x − √2)³) = (x⁵ + π) / (x⁴ − √8 x³ + √32 x − 4)

By hand, the partial fraction expansion of F(x) is difficult to compute. MATLAB, however, makes short work of the expansion.

>> [R,P,K] = residue([1 0 0 0 0 pi],[1 -sqrt(8) 0 sqrt(32) -4]);
>> R = R.', P = P.', K
R =
    7.8888    5.9713    3.1107    0.1112
P =
    1.4142    1.4142    1.4142   -1.4142
K =
    1.0000    2.8284

Written in standard form, the partial fraction expansion of F(x) is

    F(x) = x + 2.8284 + 7.8888/(x − √2) + 5.9713/(x − √2)² + 3.1107/(x − √2)³ + 0.1112/(x + √2)

The signal-processing toolbox function residuez is similar to the residue command and offers more convenient expansion of certain rational functions, such as those commonly encountered in the study of discrete-time systems. Additional information about the residue and residuez commands is available from MATLAB's help facilities.
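The residue output can be spot-checked by hand with the cover-up method of Sec. B.5. A pure-Python sketch (an illustration, not from the text): the residue at the simple pole x = −√2 is g(−√2) with g(x) = (x⁵ + π)/(x − √2)³, and the coefficient of the (x − √2)³ term is h(√2) with h(x) = (x⁵ + π)/(x + √2).

```python
import math

s2 = math.sqrt(2)

# residue at the simple pole x = -sqrt(2): cover up the (x + sqrt(2)) factor
r_simple = ((-s2)**5 + math.pi) / (-s2 - s2)**3

# coefficient of 1/(x - sqrt(2))^3: cover up the (x - sqrt(2))^3 factor
a3 = (s2**5 + math.pi) / (s2 + s2)

print(round(r_simple, 4))   # 0.1112
print(round(a3, 4))         # 3.1107
```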

‡ The benefits of vectorization are less pronounced in recent versions of MATLAB.


B.8 APPENDIX: USEFUL MATHEMATICAL FORMULAS We conclude this chapter with a selection of useful mathematical facts.

B.8.1 Some Useful Constants

    π ≈ 3.1415926535
    e ≈ 2.7182818284
    1/e ≈ 0.3678794411
    log10 2 ≈ 0.30103
    log10 3 ≈ 0.47712

B.8.2 Complex Numbers

    e^{±jπ/2} = ±j
    e^{±jnπ} = 1 (n even),  −1 (n odd)
    e^{±jθ} = cos θ ± j sin θ
    a + jb = re^{jθ},    r = √(a² + b²),  θ = tan⁻¹(b/a)
    (re^{jθ})^k = r^k e^{jkθ}
    (r1 e^{jθ1})(r2 e^{jθ2}) = r1 r2 e^{j(θ1 + θ2)}
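These identities are convenient to verify numerically; Python's standard cmath module provides polar and rect for exactly the Cartesian-polar conversions shown above (an illustration, not from the text).

```python
import cmath

# e^{j*pi/2} = j
assert abs(cmath.exp(1j * cmath.pi / 2) - 1j) < 1e-12

# a + jb = r e^{j*theta}, with r = sqrt(a^2 + b^2) and theta = atan2(b, a)
a, b = -3.0, -4.0
r, theta = cmath.polar(complex(a, b))   # r = 5, theta in the third quadrant
z = cmath.rect(r, theta)                # back to Cartesian form
print(r, theta)
```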

B.8.3 Sums

    Σ_{k=m}^{n} r^k = (r^m − r^{n+1})/(1 − r),    r ≠ 1

    Σ_{k=0}^{n} k = n(n + 1)/2

    Σ_{k=0}^{n} k² = n(n + 1)(2n + 1)/6

    Σ_{k=0}^{n} k² r^k = r[(1 + r)(1 − r^n) − 2n(1 − r)r^n − n²(1 − r)²r^n]/(1 − r)³,    r ≠ 1
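Closed-form sums such as these are easy to misremember, so a quick numerical check is worthwhile. The following pure-Python sketch (an illustration, not from the text) compares each closed form against a brute-force sum.

```python
def geom(m, n, r):
    """Closed form of sum_{k=m}^{n} r**k, for r != 1."""
    return (r**m - r**(n + 1)) / (1 - r)

n, r = 8, 0.5

# geometric sum from k = 2 to n
assert abs(geom(2, n, r) - sum(r**k for k in range(2, n + 1))) < 1e-12

# sum of k and sum of k^2 for k = 0..n
assert sum(range(n + 1)) == n * (n + 1) // 2
assert sum(k * k for k in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# sum of k^2 r^k for k = 0..n
lhs = sum(k * k * r**k for k in range(n + 1))
rhs = r * ((1 + r) * (1 - r**n) - 2 * n * (1 - r) * r**n
           - n * n * (1 - r)**2 * r**n) / (1 - r)**3
print(abs(lhs - rhs) < 1e-9)   # True
```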


B.8.4 Taylor and Maclaurin Series

    f(x) = f(a) + (x − a)/1! f′(a) + (x − a)²/2! f″(a) + ··· = Σ_{k=0}^{∞} (x − a)^k/k! f^{(k)}(a)

    f(x) = f(0) + x/1! f′(0) + x²/2! f″(0) + ··· = Σ_{k=0}^{∞} x^k/k! f^{(k)}(0)

B.8.5 Power Series

    e^x = 1 + x + x²/2! + x³/3! + ··· + x^n/n! + ···

    sin x = x − x³/3! + x⁵/5! − x⁷/7! + ···

    cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + x⁸/8! − ···

    tan x = x + x³/3 + 2x⁵/15 + 17x⁷/315 + ···,    |x| < π/2

    tanh x = x − x³/3 + 2x⁵/15 − 17x⁷/315 + ···
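Truncating these series gives practical approximations near x = 0. The following pure-Python sketch (an illustration, not from the text) compares partial sums of the sine series with math.sin.

```python
import math

def sin_series(x, terms):
    """Partial sum of sin x = x - x^3/3! + x^5/5! - ... with the given number of terms."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        total += sign * x**(2 * k + 1) / math.factorial(2 * k + 1)
        sign = -sign
    return total

x = 0.5
approx = sin_series(x, 4)     # x - x^3/3! + x^5/5! - x^7/7!
print(approx, math.sin(x))    # agree to about 1e-8 for this x
```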

>> [r,p,k] = residue(b,a)
r =
   0 + 2.0000i
   0 - 2.0000i
p =
    3
   -3
k =
   0 + 1.0000i

B.7-10  Let N = [n7, n6, n5, ..., n2, n1] represent the seven digits of your phone number. Construct a rational function according to

    H_N(s) = (n7 s² + n6 s + n5) / (n4 s³ + n3 s² + n2 s + n1)

Use MATLAB's residue command to compute the partial fraction expansion of H_N(s).

B.7-11

When plotted in the complex plane for -TC � w � ,r, the function f(w) = cos(w) + jO. l sin(2w) results in a so-called Lissajous figure that resembles a two-bladed pro­ peller. (a) In MATLAB, create two row vectors fr and fi corresponding to the real and imaginary portions off(w), respectively, over a suitable number N samples of w. Plot the real portion against the imaginary portion and verify the figure resembles a propeller. (b) Let complex constant w = x + jy be represented in vector form

Consider the 2 × 2 rotational matrix R:

R = [cos θ   −sin θ
     sin θ    cos θ]

Show that Rw rotates vector w by θ radians.
(c) Create a rotational matrix R corresponding to 10° and multiply it by the 2 × N matrix f = [fr; fi]. Plot the result to verify that the "propeller" has indeed rotated counterclockwise.
(d) Given the matrix R determined in part (c), what is the effect of performing RRf? How about RRRf? Generalize the result.
(e) Investigate the behavior of multiplying f(ω) by the function e^{jθ}.
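As a sketch of the idea behind parts (b)-(d), the following Python snippet applies the rotation matrix R to a point and checks that repeated 10° rotations compose into one larger rotation. The points and angles are illustrative choices, not values from the problem:

```python
import math

# R = [[cos, -sin], [sin, cos]] rotates a point (x, y) by theta radians.
def rotate(point, theta):
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by pi/2 should give (0, 1): a counterclockwise quarter turn.
x, y = rotate((1.0, 0.0), math.pi / 2)
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12

# Applying a 10-degree rotation three times equals one 30-degree rotation,
# the generalization asked for in part (d): R^n rotates by n*theta.
p = (0.6, -0.8)
q = rotate(rotate(rotate(p, math.radians(10)), math.radians(10)), math.radians(10))
r_ = rotate(p, math.radians(30))
assert abs(q[0] - r_[0]) < 1e-12 and abs(q[1] - r_[1]) < 1e-12
print("rotation checks pass")
```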


SIGNALS AND SYSTEMS

In this chapter we shall discuss basic aspects of signals. We shall also introduce fundamental concepts and qualitative explanations of the hows and whys of systems theory, thus building a solid foundation for understanding the quantitative analysis in the remainder of the book. For simplicity, the focus of this chapter is on continuous-time signals and systems. Chapter 8 presents the same ideas for discrete-time signals and systems.

SIGNALS

A signal is a set of data or information. Examples include a telephone or a television signal, monthly sales of a corporation, or daily closing prices of a stock market (e.g., the Dow Jones averages). In all these examples, the signals are functions of the independent variable time. This is not always the case, however. When an electrical charge is distributed over a body, for instance, the signal is the charge density, a function of space rather than time. In this book we deal almost exclusively with signals that are functions of time. The discussion, however, applies equally well to other independent variables.

SYSTEMS

Signals may be processed further by systems, which may modify them or extract additional information from them. For example, an anti-aircraft gun operator may want to know the future location of a hostile moving target that is being tracked by his radar. Knowing the radar signal, he knows the past location and velocity of the target. By properly processing the radar signal (the input), he can approximately estimate the future location of the target. Thus, a system is an entity that processes a set of signals (inputs) to yield another set of signals (outputs). A system may be made up of physical components, as in electrical, mechanical, or hydraulic systems (hardware realization), or it may be an algorithm that computes an output from an input signal (software realization).

1.1 SIZE OF A SIGNAL

The size of any entity is a number that indicates the largeness or strength of that entity. Generally speaking, the signal amplitude varies with time. How can a signal that exists over a certain time interval with varying amplitude be measured by one number that will indicate the signal size or signal strength? Such a measure must consider not only the signal amplitude, but also its duration. For instance, if we are to devise a single number V as a measure of the size of a human being, we must consider not only his or her width (girth), but also the height. If we make a simplifying assumption that the shape of a person is a cylinder of variable radius r (which varies with the height h), then one possible measure of the size of a person of height H is the person's volume V, given by

V = π ∫₀^H r²(h) dh

1.1.1 Signal Energy

Arguing in this manner, we may consider the area under a signal x(t) as a possible measure of its size, because it takes account not only of the amplitude but also of the duration. However, this will be a defective measure because even for a large signal x(t), its positive and negative areas could cancel each other, indicating a signal of small size. This difficulty can be corrected by defining the signal size as the area under |x(t)|², which is always positive. We call this measure the signal energy Ex, defined as

Ex = ∫_{−∞}^{∞} |x(t)|² dt     (1.1)

This definition simplifies for a real-valued signal x(t) to Ex = ∫_{−∞}^{∞} x²(t) dt. There are also other possible measures of signal size, such as the area under |x(t)|. The energy measure, however, is not only more tractable mathematically but is also more meaningful (as shown later) in the sense that it is indicative of the energy that can be extracted from the signal.
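Eq. (1.1) is straightforward to approximate numerically. The sketch below (Python rather than the book's MATLAB) uses a midpoint Riemann sum on the assumed example signal x(t) = e^{−t}u(t), whose exact energy is 1/2:

```python
import math

# Riemann-sum check of Eq. (1.1) for the illustrative signal x(t) = e^{-t} u(t).
def energy(x, t0, t1, n=100_000):
    dt = (t1 - t0) / n                       # midpoint-rule approximation
    return sum(abs(x(t0 + (k + 0.5) * dt)) ** 2 for k in range(n)) * dt

# Integrate over [0, 20]; the tail beyond t = 20 is negligible (e^{-40}).
Ex = energy(lambda t: math.exp(-t) if t >= 0 else 0.0, 0.0, 20.0)
assert abs(Ex - 0.5) < 1e-6
print(f"Ex ~ {Ex:.6f} (exact 0.5)")
```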

1.1.2 Signal Power

Signal energy must be finite for it to be a meaningful measure of signal size. A necessary condition for the energy to be finite is that the signal amplitude → 0 as |t| → ∞ (Fig. 1.1a). Otherwise, the integral in Eq. (1.1) will not converge. When the amplitude of x(t) does not → 0 as |t| → ∞ (Fig. 1.1b), the signal energy is infinite. A more meaningful measure of the signal size in such a case would be the time average of the energy, if it exists. This measure is called the power of the signal. For a signal x(t), we define its power Px as

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt     (1.2)

This definition simplifies for a real-valued signal x(t) to Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt. Observe that the signal power Px is the time average (mean) of the signal magnitude squared, that is, the mean-square value of |x(t)|. Indeed, the square root of Px is the familiar rms (root-mean-square) value of x(t). Generally, the mean of an entity averaged over a large time interval approaching infinity exists if the entity either is periodic or has a statistical regularity. If such a condition is not satisfied, the average may not exist. For instance, a ramp signal x(t) = t increases indefinitely as |t| → ∞, and neither the energy nor the power exists for this signal. However, the unit step function, which is neither periodic nor statistically regular, does have a finite power.


Figure 1.1 Examples of signals: (a) a signal with finite energy and (b) a signal with finite power.

When x(t) is periodic, |x(t)|² is also periodic. Hence, the power of x(t) can be computed from Eq. (1.2) by averaging |x(t)|² over one period.

Comments. The signal energy as defined in Eq. (1.1) does not indicate the actual energy (in the conventional sense) of the signal because the signal energy depends not only on the signal, but also on the load. It can, however, be interpreted as the energy dissipated in a normalized load of a 1 ohm resistor if a voltage x(t) were to be applied across the 1 ohm resistor [or if a current x(t) were to be passed through the 1 ohm resistor]. The measure of "energy" is therefore indicative of the energy capability of the signal, not the actual energy. For this reason the concepts of conservation of energy should not be applied to this "signal energy." A parallel observation applies to "signal power" defined in Eq. (1.2). These measures are but convenient indicators of the signal size, which prove useful in many applications. For instance, if we approximate a signal x(t) by another signal g(t), the error in the approximation is e(t) = x(t) − g(t). The energy (or power) of e(t) is a convenient indicator of the goodness of the approximation. It provides us with a quantitative measure of determining the closeness of the approximation. In communication systems, during transmission over a channel, message signals are corrupted by unwanted signals (noise). The quality of the received signal is judged by the relative sizes of the desired signal and the unwanted signal (noise). In this case the ratio of the message signal and noise signal powers (signal-to-noise power ratio) is a good indication of the received signal quality.

Units of Energy and Power. Equation (1.1) is not correct dimensionally. This is because here we are using the term energy not in its conventional sense, but to indicate the signal size. The same observation applies to Eq. (1.2) for power. The units of energy and power, as defined here, depend on the nature of the signal x(t). If x(t) is a voltage signal, its energy Ex has units of volts squared-seconds (V²·s), and its power Px has units of volts squared (V²). If x(t) is a current signal, these units will be amperes squared-seconds (A²·s) and amperes squared (A²), respectively.


EXAMPLE 1.1

Determine the suitable measures of the signals in Fig. 1.2.

Figure 1.2 Signals for Ex. 1.1.

In Fig. 1.2a, the signal amplitude → 0 as |t| → ∞. Therefore, the suitable measure for this signal is its energy Ex, given by Eq. (1.1).

In Fig. 1.2b, the signal magnitude does not → 0 as |t| → ∞. However, it is periodic, and therefore its power exists. We can use Eq. (1.2) to determine its power. We can simplify the procedure for periodic signals by observing that a periodic signal repeats regularly each period (2 seconds in this case). Therefore, averaging |x(t)|² over an infinitely large interval is identical to averaging this quantity over one period (2 seconds in this case). Doing so yields Px = 1/3.

Recall that the signal power is the square of its rms value. Therefore, the rms value of this signal is 1/√3.
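Assuming, for illustration, that the periodic signal of Fig. 1.2b is x(t) = t over −1 ≤ t < 1 (a waveform consistent with the stated rms value of 1/√3), the one-period average can be checked numerically:

```python
# Midpoint-rule average of x(t)^2 over one period (T0 = 2 s) for the assumed
# waveform x(t) = t on -1 <= t < 1. Expected: Px = 1/3, rms = 1/sqrt(3).
n = 100_000
dt = 2.0 / n
Px = sum((-1.0 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt / 2.0
rms = Px ** 0.5
assert abs(Px - 1.0 / 3.0) < 1e-9
assert abs(rms - 1.0 / 3.0 ** 0.5) < 1e-9
print(f"Px ~ {Px:.6f}, rms ~ {rms:.6f}")
```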


EXAMPLE 1.2 Determining Power and RMS Value

Determine the power and the rms value of
(a) x(t) = C cos(ω₀t + θ)
(b) x(t) = C₁ cos(ω₁t + θ₁) + C₂ cos(ω₂t + θ₂)
(c) x(t) = De^{jω₀t}

(a) This is a periodic signal with period T₀ = 2π/ω₀. The suitable measure of this signal is its power. Because it is a periodic signal, we may compute its power by averaging its energy over one period T₀ = 2π/ω₀. However, for the sake of demonstration, we shall use Eq. (1.2) to solve this problem by averaging over an infinitely large time interval.

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} C² cos²(ω₀t + θ) dt = lim_{T→∞} (C²/2T) ∫_{−T/2}^{T/2} [1 + cos(2ω₀t + 2θ)] dt
   = lim_{T→∞} (C²/2T) ∫_{−T/2}^{T/2} dt + lim_{T→∞} (C²/2T) ∫_{−T/2}^{T/2} cos(2ω₀t + 2θ) dt

The first term on the right-hand side is equal to C²/2. The second term, however, is zero because the integral appearing in this term represents the area under a sinusoid over a very large time interval T with T → ∞. This area is at most equal to the area of half a cycle because of cancellations of the positive and negative areas of a sinusoid. The second term is this area multiplied by C²/2T with T → ∞. Clearly, this term is zero, and

Px = C²/2

This shows that a sinusoid of amplitude C has a power C²/2 regardless of the value of its frequency ω₀ (ω₀ ≠ 0) and phase θ. The rms value is C/√2. If the signal frequency is zero (dc or a constant signal of amplitude C), the reader can show that the power is C².

(b) In Ch. 3 we shall show that a sum of two sinusoids may or may not be periodic, depending on whether the ratio ω₁/ω₂ is a rational number. Therefore, the period of this signal is not known. Hence, its power will be determined by averaging its energy over T seconds with T → ∞. Thus,

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [C₁ cos(ω₁t + θ₁) + C₂ cos(ω₂t + θ₂)]² dt
   = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} C₁² cos²(ω₁t + θ₁) dt + lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} C₂² cos²(ω₂t + θ₂) dt
     + lim_{T→∞} (2C₁C₂/T) ∫_{−T/2}^{T/2} cos(ω₁t + θ₁) cos(ω₂t + θ₂) dt


The first and second integrals on the right-hand side are the powers of the two sinusoids, which are C₁²/2 and C₂²/2, as found in part (a). The third term, the product of two sinusoids, can be expressed as a sum of the two sinusoids cos[(ω₁ + ω₂)t + (θ₁ + θ₂)] and cos[(ω₁ − ω₂)t + (θ₁ − θ₂)]. Now, arguing as in part (a), we see that the third term is zero. Hence, we have†

Px = (C₁² + C₂²)/2

and the rms value is √[(C₁² + C₂²)/2]. We can readily extend this result to a sum of any number of sinusoids with distinct frequencies. Thus, if

x(t) = Σ_{n=1}^{∞} Cₙ cos(ωₙt + θₙ)

assuming that no two of the sinusoids have identical frequencies and ωₙ ≠ 0, then

Px = ½ Σ_{n=1}^{∞} Cₙ²

If x(t) also has a dc term, as

x(t) = C₀ + Σ_{n=1}^{∞} Cₙ cos(ωₙt + θₙ)

then

Px = C₀² + ½ Σ_{n=1}^{∞} Cₙ²     (1.3)

(c) In this case the signal is complex, and we use Eq. (1.2) to compute the power:

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |De^{jω₀t}|² dt

Recall that |e^{jω₀t}| = 1 so that |De^{jω₀t}|² = |D|², and

Px = |D|²     (1.4)

The rms value is |D|.

† This is true only if ω₁ ≠ ω₂. If ω₁ = ω₂, the integrand of the third term contains a constant cos(θ₁ − θ₂), and the third term → C₁C₂ cos(θ₁ − θ₂) as T → ∞.
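The conclusions of parts (a) and (c) can be spot-checked numerically; a Python sketch with arbitrarily chosen C, ω₀, θ, and D:

```python
import math

# Part (a): average C^2 cos^2(w0 t + theta) over one period -> C^2 / 2.
C, w0, theta = 3.0, 2 * math.pi * 5, 0.7
T0 = 2 * math.pi / w0
n = 100_000
dt = T0 / n
Px_a = sum((C * math.cos(w0 * (k + 0.5) * dt + theta)) ** 2 for k in range(n)) * dt / T0
assert abs(Px_a - C * C / 2) < 1e-6
assert abs(math.sqrt(Px_a) - C / math.sqrt(2)) < 1e-6     # rms = C / sqrt(2)

# Part (c): |D e^{j w0 t}|^2 = |D|^2 at every instant, so Px = |D|^2.
D = 2 - 1j
Px_c = sum(abs(D * complex(math.cos(w0 * t), math.sin(w0 * t))) ** 2
           for t in [0.1, 0.2, 0.3]) / 3
assert abs(Px_c - abs(D) ** 2) < 1e-12
print("Px(a) = C^2/2 and Px(c) = |D|^2 confirmed")
```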


Comment. In part (b) of Ex. 1.2, we have shown that the power of the sum of two sinusoids is equal to the sum of the powers of the sinusoids. It may appear that the power of x₁(t) + x₂(t) is Px₁ + Px₂. Unfortunately, this conclusion is not true in general. It is true only under a certain condition (orthogonality), discussed later (Sec. 3.1.3).
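Both Eq. (1.3) and the equal-frequency exception noted in the footnote to Ex. 1.2b can be confirmed numerically; a Python sketch with illustrative amplitudes and phases:

```python
import math

def power(x, T, n=50_000):
    """Approximate Px = (1/T) * integral over one period T by a midpoint sum."""
    dt = T / n
    return sum(x((k + 0.5) * dt) ** 2 for k in range(n)) * dt / T

C0, C1, C2 = 1.0, 2.0, 3.0
w1, w2 = 2 * math.pi, 6 * math.pi          # distinct frequencies, common period T = 1
x = lambda t: C0 + C1 * math.cos(w1 * t + 0.5) + C2 * math.cos(w2 * t - 1.0)
assert abs(power(x, 1.0) - (C0**2 + (C1**2 + C2**2) / 2)) < 1e-6   # Eq. (1.3)

# Equal frequencies: powers do NOT simply add (footnote to Ex. 1.2b).
th1, th2 = 0.4, 1.3
y = lambda t: C1 * math.cos(w1 * t + th1) + C2 * math.cos(w1 * t + th2)
expected = (C1**2 + C2**2 + 2 * C1 * C2 * math.cos(th1 - th2)) / 2
assert abs(power(y, 1.0) - expected) < 1e-6
print("Eq. (1.3) and the equal-frequency exception verified")
```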

DRILL 1.1 Computing Energy, Power, and RMS Value

Show that the energies of the signals in Figs. 1.3a, 1.3b, 1.3c, and 1.3d are 4, 1, 4/3, and 4/3, respectively. Observe that doubling a signal quadruples the energy, and time shifting a signal has no effect on the energy. Show also that the power of the signal in Fig. 1.3e is 0.4323. What is the rms value of the signal in Fig. 1.3e?

Figure 1.3 Signals for Drill 1.1.

DRILL 1.2 Redo Ex. 1.2a to find the power of a sinusoid C cos(ω₀t + θ) by averaging the signal energy over one period T₀ = 2π/ω₀ (rather than averaging over the infinitely large interval). Show also that the power of a dc signal x(t) = C₀ is C₀², and its rms value is C₀.

† If ω₁ = ω₂, the power of x(t) = C₁ cos(ω₁t + θ₁) + C₂ cos(ω₂t + θ₂) is [C₁² + C₂² + 2C₁C₂ cos(θ₁ − θ₂)]/2, which is not equal to the Ex. 1.2b result of (C₁² + C₂²)/2.


1.2 SOME USEFUL SIGNAL OPERATIONS

We discuss here three useful signal operations: shifting, scaling, and inversion. Since the independent variable in our signal description is time, these operations are discussed as time shifting, time scaling, and time reversal (inversion). However, this discussion is valid for functions having independent variables other than time (e.g., frequency or distance).

1.2.1 Time Shifting

Consider a signal x(t) (Fig. 1.4a) and the same signal delayed by T seconds (Fig. 1.4b), which we shall denote by x(t − T).

1.2.4 Combined Operations

Certain complex operations require simultaneous use of more than one of the operations just described. The most general operation involving all three operations is x(at − b), which is realized in two possible sequences of operation:
1. Time-shift x(t) by b to obtain x(t − b). Now time-scale the shifted signal x(t − b) by a [i.e., replace t with at] to obtain x(at − b).
2. Time-scale x(t) by a to obtain x(at). Now time-shift x(at) by b/a [i.e., replace t with t − (b/a)] to obtain x[a(t − b/a)] = x(at − b).
In either case, if a is negative, time scaling involves time reversal. For example, the signal x(2t − 6) can be obtained in two ways. We can delay x(t) by 6 to obtain x(t − 6), and then time-compress this signal by factor 2 (replace t with 2t) to obtain x(2t − 6).

Alternately, we can first time-compress x(t) by factor 2 to obtain x(2t), then delay this signal by 3 (replace t with t − 3) to obtain x(2t − 6).
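The equivalence of the two operation orders can be demonstrated on a concrete signal; the unit pulse below is an assumed example, not a figure from the text:

```python
# Both operation orders from Sec. 1.2.4 produce the same x(2t - 6),
# using the illustrative pulse x(t) = 1 for 0 <= t < 1, else 0.
def x(t):
    return 1.0 if 0 <= t < 1 else 0.0

def shift_then_scale(t):     # x(t - 6) first, then replace t with 2t
    return x(2 * t - 6)

def scale_then_shift(t):     # x(2t) first, then replace t with t - 3
    return x(2 * (t - 3))

for t in [2.9, 3.0, 3.25, 3.49, 3.5, 4.0]:
    assert shift_then_scale(t) == scale_then_shift(t)
# The pulse, originally occupying 0 <= t < 1, now occupies 3 <= t < 3.5:
# delayed (its center moved right) and compressed (half as wide).
assert shift_then_scale(3.25) == 1.0 and shift_then_scale(2.9) == 0.0
print("x(2t - 6): both orders agree")
```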

1.3 CLASSIFICATION OF SIGNALS

Classification helps us better understand and utilize the items around us. Cars, for example, are classified as sports, offroad, family, and so forth. Knowing you have a sports car is useful in deciding whether to drive on a highway or on a dirt road. Knowing you want to drive up a mountain, you would probably choose an offroad vehicle over a family sedan. Similarly, there are several classes of signals. Some signal classes are more suitable for certain applications than others. Further, different signal classes often require different mathematical tools. Here, we shall consider only the following classes of signals, which are suitable for the scope of this book:
1. Continuous-time and discrete-time signals
2. Analog and digital signals
3. Periodic and aperiodic signals
4. Energy and power signals
5. Deterministic and probabilistic signals

1.3.1 Continuous-Time and Discrete-Time Signals

A signal that is specified for every value of time t is a continuous-time signal. Since the signal is known for every value of time, precise event localization is possible. The tidal height data displayed in Fig. 1.10a is an example of a continuous-time signal, and signal features such as daily tides as well as the effects of a massive tsunami are easy to locate. A signal that is specified only at discrete values of time is a discrete-time signal. Ordinarily, the independent variable for discrete-time signals is denoted by the integer n. For discrete-time signals, events are localized within the sampling period. The two decades of quarterly United States Gross National Product (GNP) data displayed in Fig. 1.10b is an example of a discrete-time signal, and features such as the modest 2001 and massive 2008 recessions are easily visible.

1.3.2 Analog and Digital Signals

The concept of continuous time is often confused with that of analog. The two are not the same. The same is true of the concepts of discrete time and digital. A signal whose amplitude can take on any value in a continuous range is an analog signal. This means that an analog signal amplitude can take on an infinite number of values. A digital signal, on the other hand, is one whose amplitude can take on only a finite number of values. Signals associated with a digital computer are digital because they take on only two values (binary signals). A digital signal whose amplitudes can take on M values is an M-ary signal, of which binary (M = 2) is a special case. The terms continuous time and discrete time qualify the nature of a signal along the time (horizontal) axis. The terms analog and digital, on the other hand, qualify the nature of the signal amplitude (vertical axis). Figure 1.11 shows examples of signals of various types. It is clear that analog is not necessarily continuous-time and digital need not be discrete-time. Figure 1.11c shows an example of an analog discrete-time signal. An analog signal can be converted into a digital signal [analog-to-digital (A/D) conversion] through quantization (rounding off), as explained in Sec. 5.3.


Figure 1.10 (a) Continuous-time and (b) discrete-time signals.

1.3.3 Periodic and Aperiodic Signals

A signal x(t) is said to be periodic if for some positive constant T₀

x(t) = x(t + T₀)   for all t     (1.7)

The smallest value of T₀ that satisfies the periodicity condition of Eq. (1.7) is the fundamental period of x(t). The signals in Figs. 1.2b and 1.3e are periodic signals with periods 2 and 1, respectively. A signal is aperiodic if it is not periodic. Signals in Figs. 1.2a, 1.3a, 1.3b, 1.3c, and 1.3d are all aperiodic. By definition, a periodic signal x(t) remains unchanged when time-shifted by one period. For this reason, a periodic signal must start at t = −∞: if it started at some finite instant, say, t = 0, the time-shifted signal x(t + T₀) would start at t = −T₀ and x(t + T₀) would not be the same as x(t). Therefore, a periodic signal, by definition, must start at t = −∞ and continue forever, as illustrated in Fig. 1.12. Another important property of a periodic signal x(t) is that x(t) can be generated by periodic extension of any segment of x(t) of duration T₀ (the period). As a result, we can generate x(t) from any segment of x(t) having a duration of one period by placing this segment and the reproduction thereof end to end ad infinitum on either side. Figure 1.13 shows a periodic signal x(t) of period T₀ = 6. The shaded portion of Fig. 1.13a shows a segment of x(t) starting at t = −1 and having a duration of one period (6 seconds). This segment, when repeated forever in either direction, results in the periodic signal x(t). Figure 1.13b shows another shaded segment of x(t) of duration


Figure 1.11 Examples of signals: (a) analog, continuous-time, (b) digital, continuous-time, (c) analog, discrete-time, and (d) digital, discrete-time.

Figure 1.12 A periodic signal of period T₀.

T₀ starting at t = 0. Again, we see that this segment, when repeated forever on either side, results in x(t). The reader can verify that this construction is possible with any segment of x(t) starting at any instant as long as the segment duration is one period. An additional useful property of a periodic signal x(t) of period T₀ is that the area under x(t) over any interval of duration T₀ is the same; that is, for any real numbers a and b,

∫_a^{a+T₀} x(t) dt = ∫_b^{b+T₀} x(t) dt


Figure 1.13 Generation of a periodic signal by periodic extension of its segment of one-period duration.

This result follows from the fact that a periodic signal takes the same values at intervals of T₀. Hence, the values over any segment of duration T₀ are repeated in any other interval of the same duration. For convenience, the area under x(t) over any interval of duration T₀ will be denoted by

∫_{T₀} x(t) dt
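The equal-area property is simple to verify numerically for an assumed periodic signal (here a two-harmonic waveform with T₀ = 2 plus a dc offset):

```python
import math

# For a periodic x(t) (period T0 = 2), the area over ANY length-T0 interval
# is the same; the waveform below is an illustrative assumption.
def area(x, a, T0, n=50_000):
    dt = T0 / n
    return sum(x(a + (k + 0.5) * dt) for k in range(n)) * dt

T0 = 2.0
x = lambda t: math.cos(math.pi * t) + 0.25 * math.sin(2 * math.pi * t) + 0.5
A1 = area(x, -0.7, T0)        # start the interval at t = -0.7 ...
A2 = area(x, 3.1, T0)         # ... or at t = 3.1: same area either way
assert abs(A1 - A2) < 1e-9
assert abs(A1 - 0.5 * T0) < 1e-6     # only the dc term contributes
print(f"area over one period ~ {A1:.6f}, independent of the starting point")
```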

It is helpful to label signals that start at t = −∞ and continue forever as everlasting signals. Thus, an everlasting signal exists over the entire interval −∞ < t < ∞. The signals in Figs. 1.1b and 1.2b are examples of everlasting signals. Clearly, a periodic signal, by definition, is an everlasting signal. A signal that does not start before t = 0 is a causal signal. In other words, x(t) is a causal signal if x(t) = 0 for t < 0.

Practical systems with M > N will magnify the high-frequency components of noise† through differentiation. It is entirely possible for noise to be magnified so much that it swamps the desired system output even if the noise signal at the system's input is tolerably small. Hence, practical systems generally use M ≤ N. For the rest of this text, we assume implicitly that M ≤ N. For the sake of generality, we shall assume M = N in Eq. (2.1). In Ch. 1, we demonstrated that a system described by Eq. (2.2) is linear. Therefore, its response can be expressed as the sum of two components: the zero-input response and the zero-state response (decomposition property).‡ Therefore,

total response = zero-input response + zero-state response

The zero-input response is the system output when the input x(t) = 0, and thus it is the result of internal system conditions (such as energy storages, initial conditions) alone. It is independent of the external input x(t). In contrast, the zero-state response is the system output to the external input x(t) when the system is in zero state, meaning the absence of all internal energy storages; that is, all initial conditions are zero.

2.2 SYSTEM RESPONSE TO INTERNAL CONDITIONS: THE ZERO-INPUT RESPONSE

The zero-input response y₀(t) is the solution of Eq. (2.2) when the input x(t) = 0 so that

Q(D)y₀(t) = 0

† Noise is any undesirable signal, natural or manufactured, that interferes with the desired signals in the system. Some of the sources of noise are the electromagnetic radiation from stars, the random motion of electrons in system components, interference from nearby radio and television stations, transients produced by automobile ignition systems, and fluorescent lighting.
‡ We can verify readily that the system described by Eq. (2.2) has the decomposition property. If y₀(t) is the zero-input response, then, by definition,

Q(D)y₀(t) = 0

If y(t) is the zero-state response, then y(t) is the solution of

Q(D)y(t) = P(D)x(t)

subject to zero initial conditions (zero state). Adding these two equations, we have

Q(D)[y₀(t) + y(t)] = P(D)x(t)

Clearly, y₀(t) + y(t) is the general solution of Eq. (2.2).

CHAPTER 2 TIME-DOMAIN ANALYSIS OF CONTINUOUS-TIME SYSTEMS

or

(D^N + a₁D^{N−1} + ··· + a_{N−1}D + a_N)y₀(t) = 0     (2.3)

A solution to this equation can be obtained systematically [1]. However, we will take a shortcut by using heuristic reasoning. Equation (2.3) shows that a linear combination of y₀(t) and its N successive derivatives is zero, not at some values of t, but for all t. Such a result is possible if and only if y₀(t) and all its N successive derivatives are of the same form. Otherwise, their sum can never add to zero for all values of t. We know that only an exponential function e^{λt} has this property. So, let us assume that

y₀(t) = ce^{λt}

is a solution to Eq. (2.3). Then

Dy₀(t) = cλe^{λt},   D²y₀(t) = cλ²e^{λt},   ...,   D^N y₀(t) = cλ^N e^{λt}

Substituting these results in Eq. (2.3), we obtain

c(λ^N + a₁λ^{N−1} + ··· + a_{N−1}λ + a_N)e^{λt} = 0

For a nontrivial solution of this equation,

λ^N + a₁λ^{N−1} + ··· + a_{N−1}λ + a_N = 0     (2.4)

This result means that ce^{λt} is indeed a solution of Eq. (2.3), provided λ satisfies Eq. (2.4). Note that the polynomial in Eq. (2.4) is identical to the polynomial Q(D) in Eq. (2.3), with λ replacing D. Therefore, Eq. (2.4) can be expressed as

Q(λ) = 0

Expressing Q(λ) in factorized form, we obtain

Q(λ) = (λ − λ₁)(λ − λ₂)···(λ − λ_N) = 0     (2.5)

Clearly, λ has N solutions: λ₁, λ₂, ..., λ_N, assuming that all λᵢ are distinct. Consequently, Eq. (2.3) has N possible solutions: c₁e^{λ₁t}, c₂e^{λ₂t}, ..., c_Ne^{λ_Nt}, with c₁, c₂, ..., c_N as arbitrary constants.

We can readily show that a general solution is given by the sum of these N solutions,§ so that

y₀(t) = c₁e^{λ₁t} + c₂e^{λ₂t} + ··· + c_Ne^{λ_Nt}     (2.6)

where c₁, c₂, ..., c_N are arbitrary constants determined by N constraints (the auxiliary conditions) on the solution. Observe that the polynomial Q(λ), which is characteristic of the system, has nothing to do with the input. For this reason the polynomial Q(λ) is called the characteristic polynomial of the system. The equation Q(λ) = 0 is called the characteristic equation of the system. Equation (2.5) clearly indicates that λ₁, λ₂, ..., λ_N are the roots of the characteristic equation; consequently, they are called the characteristic roots of the system. The terms characteristic values, eigenvalues, and natural frequencies are also used for characteristic roots.¶ The exponentials e^{λᵢt} (i = 1, 2, ..., N) in the zero-input response are the characteristic modes (also known as natural modes or simply as modes) of the system. There is a characteristic mode for each characteristic root of the system, and the zero-input response is a linear combination of the characteristic modes of the system.

An LTIC system's characteristic modes comprise its single most important attribute. Characteristic modes not only determine the zero-input response but also play an important role in determining the zero-state response. In other words, the entire behavior of a system is dictated primarily by its characteristic modes. In the rest of this chapter, we shall see the pervasive presence of characteristic modes in every aspect of system behavior.

REPEATED ROOTS

The solution of Eq. (2.3) as given in Eq. (2.6) assumes that the N characteristic roots λ₁, λ₂, ..., λ_N are distinct. If there are repeated roots (same root occurring more than once), the form of the solution is modified slightly. By direct substitution, we can show that the solution of the equation

(D − λ)²y₀(t) = 0

is given by

y₀(t) = (c₁ + c₂t)e^{λt}

§ To prove this assertion, assume that y₁(t), y₂(t), ..., y_N(t) are all solutions of Eq. (2.3). Then

Q(D)y₁(t) = 0
Q(D)y₂(t) = 0
⋮
Q(D)y_N(t) = 0

Multiplying these equations by c₁, c₂, ..., c_N, respectively, and adding them together yield

Q(D)[c₁y₁(t) + c₂y₂(t) + ··· + c_Ny_N(t)] = 0

This result shows that c₁y₁(t) + c₂y₂(t) + ··· + c_Ny_N(t) is also a solution of the homogeneous equation [Eq. (2.3)].
¶ Eigenvalue is German for "characteristic value."


In this case, the root λ repeats twice. Observe that the characteristic modes in this case are e^{λt} and te^{λt}. Continuing this pattern, we can show that for the differential equation

(D − λ)^r y₀(t) = 0

the characteristic modes are e^{λt}, te^{λt}, t²e^{λt}, ..., t^{r−1}e^{λt}, and that the solution is

y₀(t) = (c₁ + c₂t + ··· + c_r t^{r−1})e^{λt}

Consequently, for a system with the characteristic polynomial

Q(λ) = (λ − λ₁)^r (λ − λ_{r+1})···(λ − λ_N)

the characteristic modes are e^{λ₁t}, te^{λ₁t}, ..., t^{r−1}e^{λ₁t}, e^{λ_{r+1}t}, ..., e^{λ_Nt}, and the solution is

y₀(t) = (c₁ + c₂t + ··· + c_r t^{r−1})e^{λ₁t} + c_{r+1}e^{λ_{r+1}t} + ··· + c_Ne^{λ_Nt}
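The recipe of Eq. (2.6) can be sketched in code for the distinct-root case: pick the characteristic roots, solve for c₁ and c₂ from the initial conditions, and confirm that the result satisfies the homogeneous equation. The second-order system and initial conditions below are illustrative assumptions:

```python
import math

# Zero-input response of (D^2 + 3D + 2) y0(t) = 0 via characteristic modes.
# Characteristic equation lambda^2 + 3*lambda + 2 = 0 -> roots -1, -2 (distinct),
# so y0(t) = c1 e^{-t} + c2 e^{-2t}.
l1, l2 = -1.0, -2.0
y0_0, dy0_0 = 0.0, -5.0            # assumed y0(0) and y0'(0)

# Solve  c1 + c2 = y0(0),  l1 c1 + l2 c2 = y0'(0)  (a 2x2 system, by hand):
c1 = (dy0_0 - l2 * y0_0) / (l1 - l2)
c2 = y0_0 - c1

y0  = lambda t: c1 * math.exp(l1 * t) + c2 * math.exp(l2 * t)
dy  = lambda t: c1 * l1 * math.exp(l1 * t) + c2 * l2 * math.exp(l2 * t)
d2y = lambda t: c1 * l1**2 * math.exp(l1 * t) + c2 * l2**2 * math.exp(l2 * t)

for t in [0.0, 0.5, 1.0, 2.0]:
    assert abs(d2y(t) + 3 * dy(t) + 2 * y0(t)) < 1e-9   # satisfies the ODE
assert abs(y0(0) - y0_0) < 1e-12 and abs(dy(0) - dy0_0) < 1e-12
print(f"c1 = {c1}, c2 = {c2}")
```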

COMPLEX ROOTS

The procedure for handling complex roots is the same as that for real roots. For complex roots, the usual procedure leads to complex characteristic modes and the complex form of solution. However, it is possible to avoid the complex form altogether by selecting a real form of solution, as described next.

For a real system, complex roots must occur in pairs of conjugates if the coefficients of the characteristic polynomial Q(λ) are to be real. Therefore, if α + jβ is a characteristic root, α − jβ must also be a characteristic root. The zero-input response corresponding to this pair of complex conjugate roots is

y₀(t) = c₁e^{(α+jβ)t} + c₂e^{(α−jβ)t}     (2.7)

For a real system, the response y₀(t) must also be real. This is possible only if c₁ and c₂ are conjugates. Let

c₁ = (c/2)e^{jθ}   and   c₂ = (c/2)e^{−jθ}

This yields

y₀(t) = (c/2)e^{jθ}e^{(α+jβ)t} + (c/2)e^{−jθ}e^{(α−jβ)t} = (c/2)e^{αt}[e^{j(βt+θ)} + e^{−j(βt+θ)}] = ce^{αt} cos(βt + θ)     (2.8)

For the system whose characteristic polynomial is λ² + 4λ + 40, we have λ² + 4λ + 40 = (λ + 2 − j6)(λ + 2 + j6). The characteristic roots are −2 ± j6.† The solution can be written either in the complex form [Eq. (2.7)] or in the real form [Eq. (2.8)]. The complex form is y₀(t) = c₁e^{λ₁t} + c₂e^{λ₂t}, where λ₁ = −2 + j6 and λ₂ = −2 − j6. Since α = −2 and β = 6, the real-form solution is [see Eq. (2.8)]

y₀(t) = ce^{−2t} cos(6t + θ)

Differentiating this expression, we obtain

ẏ₀(t) = −2ce^{−2t} cos(6t + θ) − 6ce^{−2t} sin(6t + θ)

To determine the constants c and θ, we set t = 0 in the equations for y₀(t) and ẏ₀(t) and substitute the initial conditions y₀(0) = 2 and ẏ₀(0) = 16.78, yielding

2 = c cos θ
16.78 = −2c cos θ − 6c sin θ

Solution of these two simultaneous equations in two unknowns c cos θ and c sin θ yields

c cos θ = 2   and   c sin θ = −3.463

Squaring and then adding these two equations yield

c² = (2)² + (−3.463)² ≈ 16  ⟹  c = 4

Next, dividing c sin θ = −3.463 by c cos θ = 2 yields

tan θ = −3.463/2   and   θ = tan⁻¹(−3.463/2) = −π/3

Therefore,

y₀(t) = 4e^{−2t} cos(6t − π/3)

For the plot of y₀(t), refer again to Fig. B.11c.
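The final result can be verified numerically with finite differences (a sanity-check sketch, not part of the text's solution):

```python
import math

# Check that y0(t) = 4 e^{-2t} cos(6t - pi/3) satisfies
# (D^2 + 4D + 40) y0(t) = 0 with y0(0) = 2 and y0'(0) ~ 16.78.
def y0(t):
    return 4 * math.exp(-2 * t) * math.cos(6 * t - math.pi / 3)

def deriv(f, t, h=1e-5):              # central differences, adequate here
    return (f(t + h) - f(t - h)) / (2 * h)

def deriv2(f, t, h=1e-5):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

assert abs(y0(0) - 2.0) < 1e-12            # 4 cos(-pi/3) = 2
assert abs(deriv(y0, 0) - 16.78) < 1e-2    # matches the given initial slope
for t in [0.0, 0.3, 1.0]:
    assert abs(deriv2(y0, t) + 4 * deriv(y0, t) + 40 * y0(t)) < 1e-3
print("y0(t) = 4 e^{-2t} cos(6t - pi/3) verified")
```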

† The complex conjugate roots of a second-order polynomial can be determined by using the formula in Sec. B.8.10 or by expressing the polynomial as a sum of two squares. The latter can be accomplished by completing the square with the first two terms, as follows:

λ² + 4λ + 40 = (λ² + 4λ + 4) + 36 = (λ + 2)² + (6)² = (λ + 2 − j6)(λ + 2 + j6)


EXAMPLE 2.2 Using MATLAB to Find Polynomial Roots

Find the roots λ₁ and λ₂ of the polynomial λ² + 4λ + k for three values of k: (a) k = 3, (b) k = 4, and (c) k = 40.

(a)
>> r = roots([1 4 3]).'
   r = -3  -1

For k = 3, the polynomial roots are therefore λ₁ = −3 and λ₂ = −1.

(b)
>> r = roots([1 4 4]).'
   r = -2  -2

For k = 4, the polynomial roots are therefore λ₁ = λ₂ = −2.

(c)
>> r = roots([1 4 40]).'
   r = -2.00+6.00i  -2.00-6.00i

For k = 40, the polynomial roots are therefore λ₁ = −2 + j6 and λ₂ = −2 − j6.
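The same roots can be found outside MATLAB; a Python sketch using the quadratic formula via the standard cmath module:

```python
import cmath

# Roots of lambda^2 + 4*lambda + k for k = 3, 4, 40, as in Ex. 2.2.
def quad_roots(b, c):                 # roots of x^2 + b x + c
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

assert quad_roots(4, 3) == (-1, -3)         # k = 3: distinct real roots
assert quad_roots(4, 4) == (-2, -2)         # k = 4: repeated root
r1, r2 = quad_roots(4, 40)                  # k = 40: complex conjugates -2 +- j6
assert abs(r1 - (-2 + 6j)) < 1e-12 and abs(r2 - (-2 - 6j)) < 1e-12
print("roots agree with the MATLAB results")
```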

EXAMPLE 2.3 Using MATLAB to Find the Zero-Input Response

Consider an LTIC system specified by the differential equation

(D² + 4D + k)y(t) = (3D + 5)x(t)

Using initial conditions y₀(0) = 3 and ẏ₀(0) = −7, apply MATLAB's dsolve command to determine the zero-input response when (a) k = 3, (b) k = 4, and (c) k = 40.

(a)
>> y_0 = dsolve('D2y+4*Dy+3*y=0','y(0)=3','Dy(0)=-7','t')
   y_0 = 1/exp(t) + 2/exp(3*t)

For k = 3, the zero-input response is therefore y₀(t) = e^{−t} + 2e^{−3t}.

(b)
>> y_0 = dsolve('D2y+4*Dy+4*y=0','y(0)=3','Dy(0)=-7','t')
   y_0 = 3/exp(2*t) - t/exp(2*t)


For k = 4, the zero-input response is therefore y₀(t) = 3e^{−2t} − te^{−2t}.

(c)
>> y_0 = dsolve('D2y+4*Dy+40*y=0','y(0)=3','Dy(0)=-7','t')
   y_0 = (3*cos(6*t))/exp(2*t) - sin(6*t)/(6*exp(2*t))

For k = 40, the zero-input response is therefore y₀(t) = 3e^{−2t} cos(6t) − (1/6)e^{−2t} sin(6t).
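As a cross-check on all three dsolve results, each y₀(t) should satisfy the given initial conditions y₀(0) = 3 and ẏ₀(0) = −7; a Python spot check using a central-difference derivative:

```python
import math

# The three zero-input responses of Ex. 2.3, transcribed from the dsolve output.
sols = [
    lambda t: math.exp(-t) + 2 * math.exp(-3 * t),                      # k = 3
    lambda t: 3 * math.exp(-2 * t) - t * math.exp(-2 * t),              # k = 4
    lambda t: math.exp(-2 * t) * (3 * math.cos(6 * t) - math.sin(6 * t) / 6),  # k = 40
]
h = 1e-6
for y in sols:
    assert abs(y(0) - 3.0) < 1e-9                        # y0(0) = 3
    assert abs((y(h) - y(-h)) / (2 * h) - (-7.0)) < 1e-4  # y0'(0) = -7
print("all three zero-input responses meet the initial conditions")
```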

DRILL 2.1 Finding the Zero-Input Response of a First-Order System

Find the zero-input response of an LTIC system described by (D + 5)y(t) = x(t) if the initial condition is y(0) = 5.

DRILL 2.2 Finding the Zero-Input Response of a Second-Order System

PRACTICAL INITIAL CONDITIONS AND THE MEANING OF 0⁻ AND 0⁺

In Ex. 2.1, the initial conditions y₀(0) and ẏ₀(0) were supplied. In practical problems, we must derive such conditions from the physical situation. For instance, in an RLC circuit, we may be given the conditions (initial capacitor voltages, initial inductor currents, etc.). From this information, we need to derive y₀(0), ẏ₀(0), ... for the desired variable, as demonstrated in the next example. In much of our discussion, the input is assumed to start at t = 0, unless otherwise mentioned. Hence, t = 0 is the reference point. The conditions immediately before t = 0 (just before the input is applied) are the conditions at t = 0⁻, and those immediately after t = 0 (just after the input is applied) are the conditions at t = 0⁺ (compare this with the historical time frames BCE and CE).

2.2 System Response to Internal Conditions: The Zero-Input Response


In practice, we are likely to know the initial conditions at t = 0− rather than at t = 0+. The two sets of conditions are generally different, although in some cases they may be identical. The total response y(t) consists of two components: the zero-input response y0(t) [the response due to the initial conditions alone with x(t) = 0] and the zero-state response [the response due to the input alone with all initial conditions zero]. At t = 0−, the total response y(t) consists solely of the zero-input response y0(t) because the input has not started yet. Hence, the initial conditions on y(t) are identical to those of y0(t). Thus, y(0−) = y0(0−), ẏ(0−) = ẏ0(0−), and so on. Moreover, y0(t) is the response due to initial conditions alone and does not depend on the input x(t). Hence, application of the input at t = 0 does not affect y0(t). This means the initial conditions on y0(t) at t = 0− and 0+ are identical; that is, y0(0−), ẏ0(0−), ... are identical to y0(0+), ẏ0(0+), ..., respectively. It is clear that for y0(t), there is no distinction between the initial conditions at t = 0−, 0, and 0+: they are all the same. But this is not the case with the total response y(t), which consists of both the zero-input and zero-state responses. Thus, in general, y(0−) ≠ y(0+), ẏ(0−) ≠ ẏ(0+), and so on.

EXAMPLE 2.4

A voltage x(t) = 10e^(−3t)u(t) is applied at the input of the RLC circuit illustrated in Fig. 2.1a. Find the loop current y(t) for t ≥ 0 if the initial inductor current is zero [y(0−) = 0] and the initial capacitor voltage is 5 volts [vC(0−) = 5]. The differential (loop) equation relating y(t) to x(t) was derived in Eq. (1.29) as

(D^2 + 3D + 2)y(t) = Dx(t)

The zero-state component of y(t) resulting from the input x(t), assuming that all initial conditions are zero, that is, y(0−) = vC(0−) = 0, will be obtained later in Ex. 2.9. In this example we shall find the zero-input response y0(t). For this purpose, we need two initial conditions, y0(0) and ẏ0(0). These conditions can be derived from the given initial conditions, y(0−) = 0 and vC(0−) = 5, as follows. Recall that y0(t) is the loop current when the input terminals are shorted so that the input x(t) = 0 (zero input), as depicted in Fig. 2.1b. We now compute y0(0) and ẏ0(0), the values of the loop current and its derivative at t = 0, from the initial values of the inductor current and the capacitor voltage. Remember that the inductor current cannot change instantaneously in the absence of an impulsive voltage. Similarly, the capacitor voltage cannot change instantaneously in the absence of an impulsive current. Therefore, when the input terminals are shorted at t = 0, the inductor current is still zero and the capacitor voltage is still 5 volts. Thus,

y0(0) = 0



[Figure 2.1 Circuits for Ex. 2.4: (a) the RLC circuit (3 Ω, 1 H, 1/2 F) with input x(t) and capacitor voltage vC(t); (b) the same circuit with the input terminals shorted.]

To determine ẏ0(0), we use the loop equation for the circuit in Fig. 2.1b. Because the voltage across the inductor is L(dy0/dt), or ẏ0(t) with L = 1, this equation can be written as follows:

ẏ0(t) + 3y0(t) + vC(t) = 0

Setting t = 0, we obtain

ẏ0(0) + 3y0(0) + vC(0) = 0

But y0(0) = 0 and vC(0) = 5. Consequently,

ẏ0(0) = −5

Therefore, the desired initial conditions are

y0(0) = 0  and  ẏ0(0) = −5

Thus, the problem reduces to finding y0(t), the zero-input component of y(t) of the system specified by the equation (D^2 + 3D + 2)y(t) = Dx(t), when the initial conditions are y0(0) = 0 and ẏ0(0) = −5. We have already solved this problem in Ex. 2.1a, where we found

y0(t) = −5e^(−t) + 5e^(−2t)    t ≥ 0

This is the zero-input component of the loop current y(t). It is interesting to find the initial conditions at t = 0− and 0+ for the total response y(t). Let us compare y(0−) and ẏ(0−) with y(0+) and ẏ(0+). The two pairs can be compared by writing the loop equation for the circuit in Fig. 2.1a at t = 0− and t = 0+. The only difference between the two situations is that at t = 0−, the input x(t) = 0, whereas at t = 0+, the input x(t) = 10 [because x(t) = 10e^(−3t)]. Hence, the two loop equations are

ẏ(0−) + 3y(0−) + vC(0−) = 0
ẏ(0+) + 3y(0+) + vC(0+) = 10

The loop current y(0+) = y(0−) = 0 because it cannot change instantaneously in the absence of impulsive voltage. The same is true of the capacitor voltage. Hence, vC(0+) = vC(0−) = 5. Substituting these values in the foregoing equations, we obtain ẏ(0−) = −5 and ẏ(0+) = 5. Thus,

ẏ(0−) = −5  and  ẏ(0+) = 5    (2.10)

DRILL 2.3 Zero-Input Response of an RC Circuit

In the circuit in Fig. 2.1a, the inductance L = 0 and the initial capacitor voltage vC(0) = 30 volts. Show that the zero-input component of the loop current is given by y0(t) = −10e^(−2t/3) for t ≥ 0.
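The zero-input results of Ex. 2.4 and Drill 2.3 can be verified symbolically. This sketch (our own code, not the book's) assumes the Fig. 2.1 element values R = 3 Ω and C = 1/2 F, so that with L = 0 the zero-input loop current obeys 3ẏ0 + 2y0 = 0 with y0(0) = −vC(0)/3 = −10.

```python
# Verification sketch (our own code) for Ex. 2.4 and Drill 2.3.
import sympy as sp

t = sp.symbols('t')

# Ex. 2.4: y0(t) = -5e^(-t) + 5e^(-2t) must satisfy (D^2 + 3D + 2)y0 = 0
# with y0(0) = 0 and y0'(0) = -5.
y0 = -5*sp.exp(-t) + 5*sp.exp(-2*t)
assert sp.simplify(y0.diff(t, 2) + 3*y0.diff(t) + 2*y0) == 0
assert y0.subs(t, 0) == 0 and y0.diff(t).subs(t, 0) == -5

# Drill 2.3 (L = 0): y0(t) = -10e^(-2t/3) must satisfy 3y0' + 2y0 = 0.
y_rc = -10*sp.exp(-2*t/3)
assert sp.simplify(3*y_rc.diff(t) + 2*y_rc) == 0
assert y_rc.subs(t, 0) == -10
print("zero-input responses verified")
```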

INDEPENDENCE OF THE ZERO-INPUT AND ZERO-STATE RESPONSES

In Ex. 2.4, we computed the zero-input component without using the input x(t). The zero-state response can be computed from knowledge of the input x(t) alone; the initial conditions are assumed to be zero (system in zero state). The two components of the system response (the zero-input and zero-state responses) are independent of each other. The two worlds of zero-input response and zero-state response coexist side by side, neither one knowing or caring what the other is doing. For each component, the other is totally irrelevant.

ROLE OF AUXILIARY CONDITIONS IN SOLUTION OF DIFFERENTIAL EQUATIONS

The solution of a differential equation requires additional pieces of information (the auxiliary conditions). Why? We now show heuristically why a differential equation does not, in general, have a unique solution unless some additional constraints (or conditions) on the solution are known. The differentiation operation is not invertible unless one piece of information about y(t) is given. To get back y(t) from dy/dt, we must know one piece of information, such as y(0). Thus, differentiation is an irreversible (noninvertible) operation during which certain information is lost. To invert this operation, one piece of information about y(t) must be provided to restore the original y(t). Using a similar argument, we can show that, given d^2y/dt^2, we are able to determine y(t) uniquely only if two additional pieces of information (constraints) about y(t) are given. In general, to determine y(t) uniquely from its Nth derivative, we need N additional pieces of information (constraints) about y(t). These constraints are also called auxiliary conditions. When these conditions are given at t = 0, they are called initial conditions.
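The point is easy to see with a computer algebra system. In this sketch (our own code), solving a second-order equation without auxiliary conditions leaves two arbitrary constants; supplying two initial conditions removes them.

```python
# Sketch (our own code): N auxiliary conditions pin down an Nth-order solution.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 0)

general = sp.dsolve(ode, y(t))   # two free constants C1, C2: no unique solution
unique = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): -5})
print(general.rhs)               # contains C1 and C2
print(unique.rhs)                # y0(t) = -5e^(-t) + 5e^(-2t)
```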

2.2.1 Some Insights into the Zero-Input Behavior of a System

By definition, the zero-input response is the system response to its internal conditions, assuming that its input is zero. Understanding this phenomenon provides interesting insight into system behavior. If a system is disturbed momentarily from its rest position and if the disturbance is then removed, the system will not come back to rest instantaneously. In general, it will come back to rest over a period of time and only through a special type of motion that is characteristic of the system.† For example, if we press on an automobile fender momentarily and then release it at t = 0, there is no external force on the automobile for t > 0.‡ The auto body will eventually come back to its rest (equilibrium) position, but not through any arbitrary motion. It must do so by using only a form of response that is sustainable by the system on its own without any external source, since the input is zero. Only characteristic modes satisfy this condition. The system uses a proper combination of characteristic modes to come back to the rest position while satisfying appropriate boundary (or initial) conditions. If the shock absorbers of the automobile are in good condition (high damping coefficient), the characteristic modes will be monotonically decaying exponentials, and the auto body will come to rest rapidly without oscillation. In contrast, for poor shock absorbers (low damping coefficients), the characteristic modes will be exponentially decaying sinusoids, and the body will come to rest through oscillatory motion. When a series RC circuit with an initial charge on the capacitor is shorted, the capacitor will start to discharge exponentially through the resistor. This response of the RC circuit is caused entirely by its internal conditions and is sustained by the system without the aid of any external input. The exponential current waveform is therefore the characteristic mode of the RC circuit. Mathematically, we know that any combination of characteristic modes can be sustained by the system alone without requiring an external input. This fact can be readily verified for the series RL circuit shown in Fig. 2.2.
The loop equation for this system is

(D + 2)y(t) = x(t)

It has a single characteristic root λ = −2, and the characteristic mode is e^(−2t). We now verify that a loop current y(t) = ce^(−2t) can be sustained through this circuit without any input voltage. The input voltage x(t) required to drive a loop current y(t) = ce^(−2t) is given by

x(t) = L dy(t)/dt + Ry(t)

Substituting y(t) = ce^(−2t), L = 1, and R = 2 yields x(t) = −2ce^(−2t) + 2ce^(−2t) = 0.

[Figure 2.2 An RL circuit (L = 1 H, R = 2 Ω) driven by x(t): modes always get a free ride.]

† This assumes that the system will eventually come back to its original rest (or equilibrium) position.

‡ We ignore the force of gravity, which merely causes a constant displacement of the auto body without affecting the other motion.


Clearly, the loop current y(t) = ce^(−2t) is sustained by the RL circuit on its own, without the necessity of an external input.

THE RESONANCE PHENOMENON

We have seen that any signal consisting of a system's characteristic mode is sustained by the system on its own; the system offers no obstacle to such signals. Imagine what would happen if we were to drive the system with an external input that is one of its characteristic modes. This would be like pouring gasoline on a fire in a dry forest or hiring a child to eat ice cream. A child would gladly do the job without pay. Think what would happen if he were paid by the amount of ice cream he ate! He would work overtime. He would work day and night, until he became sick. The same thing happens with a system driven by an input of the form of a characteristic mode. The system response grows without limit, until it burns out.† We call this behavior the resonance phenomenon. An intelligent discussion of this important phenomenon requires an understanding of the zero-state response; for this reason, we postpone this topic until Sec. 2.6.7.

2.3 THE UNIT IMPULSE RESPONSE h(t)

In Ch. 1, we explained how a system response to an input x(t) may be found by breaking this input into narrow rectangular pulses, as illustrated earlier in Fig. 1.27a, and then summing the system response to all the components. The rectangular pulses become impulses in the limit as their widths approach zero. Therefore, the system response is the sum of its responses to various impulse components. This discussion shows that if we know the system response to an impulse input, we can determine the system response to an arbitrary input x(t). We now discuss a method of determining h(t), the unit impulse response of an LTIC system described by the Nth-order differential equation [Eq. (2.1)]:

d^N y(t)/dt^N + a1 d^(N−1) y(t)/dt^(N−1) + ··· + a_(N−1) dy(t)/dt + a_N y(t)
    = b_(N−M) d^M x(t)/dt^M + b_(N−M+1) d^(M−1) x(t)/dt^(M−1) + ··· + b_(N−1) dx(t)/dt + b_N x(t)

Recall that noise considerations restrict practical systems to M ≤ N. Under this constraint, the most general case is M = N. Therefore, Eq. (2.1) can be expressed as

(D^N + a1 D^(N−1) + ··· + a_(N−1) D + a_N) y(t) = (b0 D^N + b1 D^(N−1) + ··· + b_(N−1) D + b_N) x(t)    (2.11)

Before deriving the general expression for the unit impulse response h(t), it is illuminating to understand qualitatively the nature of h(t). The impulse response h(t) is the system response to an impulse input δ(t) applied at t = 0 with all the initial conditions zero at t = 0−. An impulse input δ(t) is like lightning, which strikes instantaneously and then vanishes. But in its wake, in that single moment, objects that have been struck are rearranged. Similarly, an impulse input δ(t) appears momentarily at t = 0, and then it is gone forever. But in that moment it generates energy storages; that is, it creates nonzero initial conditions instantaneously within the system at t = 0+. Although the impulse input δ(t) vanishes for t > 0 so that the system has no input after the impulse has been applied, the system will still have a response generated by these newly created initial conditions. The impulse response h(t), therefore, must consist of the system's characteristic modes for t ≥ 0+. As a result,

h(t) = characteristic mode terms    t ≥ 0+

This response is valid for t > 0. But what happens at t = 0? At a single moment t = 0, there can at most be an impulse,‡ so the form of the complete response h(t) is

h(t) = A0 δ(t) + characteristic mode terms    t ≥ 0    (2.12)

† In practice, a system in resonance is more likely to go into saturation because of high amplitude levels.

Setting x(t) = δ(t) and y(t) = h(t) in Eq. (2.11) yields

(D^N + a1 D^(N−1) + ··· + a_(N−1) D + a_N) h(t) = (b0 D^N + b1 D^(N−1) + ··· + b_(N−1) D + b_N) δ(t)

In this equation, we substitute h(t) from Eq. (2.12) and compare the coefficients of similar impulsive terms on both sides. The highest order of the derivative of the impulse on both sides is N, with coefficient A0 on the left-hand side and b0 on the right-hand side. The two values must be matched. Therefore, A0 = b0 and

h(t) = b0 δ(t) + characteristic modes    (2.13)

In Eq. (2.11), if M < N, then b0 = 0. Hence, the impulse term b0 δ(t) exists only if M = N. The unknown coefficients of the N characteristic modes in h(t) in Eq. (2.13) can be determined by using the technique of impulse matching, as explained in the following example.

Find the impulse response h(t) for a system specified by

(D^2 + 5D + 6)y(t) = (D + 1)x(t)    (2.14)

In this case, b0 = 0. Hence, h(t) consists of only the characteristic modes. The characteristic polynomial is λ^2 + 5λ + 6 = (λ + 2)(λ + 3). The roots are −2 and −3. Hence, the impulse response consists of the characteristic modes e^(−2t) and e^(−3t).

‡ It might be possible for the derivatives of δ(t) to appear at the origin. However, if M ≤ N, it is impossible for h(t) to have any derivatives of δ(t). This conclusion follows from Eq. (2.11) with x(t) = δ(t) and y(t) = h(t). The coefficients of the impulse and all its derivatives must be matched on both sides of this equation. If h(t) contains δ^(1)(t), the first derivative of δ(t), the left-hand side of Eq. (2.11) will contain a term δ^(N+1)(t). But the highest-order derivative term on the right-hand side is δ^(N)(t), which is a contradiction.

MATLAB's dsolve command can likewise compute an impulse response. For the system (D^2 + 3D + 2)y(t) = Dx(t):

>> y_n = dsolve('D2y+3*Dy+2*y=0','y(0)=0','Dy(0)=1','t'); h = diff(y_n)
   h = 2/exp(2*t) - 1/exp(t)

Therefore, h(t) = (2e^(−2t) − e^(−t))u(t).
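This impulse response can be cross-checked numerically with SciPy (our own sketch, not from the text): scipy.signal.impulse computes h(t) from the transfer function H(s) = s/(s^2 + 3s + 2), which encodes the same differential equation.

```python
# Sketch (our own code): numerical impulse response of (D^2 + 3D + 2)y = Dx.
import numpy as np
from scipy import signal

sys = signal.lti([1, 0], [1, 3, 2])   # H(s) = s / (s^2 + 3s + 2)
t, h = signal.impulse(sys, T=np.linspace(0, 5, 501))

h_expected = 2*np.exp(-2*t) - np.exp(-t)   # (2e^(-2t) - e^(-t))u(t)
print(np.max(np.abs(h - h_expected)))      # small numerical error
```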

DRILL Finding the Impulse Response

Determine the unit impulse response of LTIC systems described by the following equations:
(a) (D + 2)y(t) = (3D + 5)x(t)
(b) D(D + 2)y(t) = (D + 4)x(t)
(c) (D^2 + 2D + 1)y(t) = Dx(t)

ANSWERS
(a) 3δ(t) − e^(−2t)u(t)
(b) (2 − e^(−2t))u(t)
(c) (1 − t)e^(−t)u(t)
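Answers (b) and (c) can be cross-checked by taking the one-sided Laplace transform of each claimed h(t) and comparing it with the system's transfer function (our own sketch; the chapter itself uses time-domain impulse matching instead).

```python
# Sketch (our own code): verify impulse responses via Laplace transforms.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
L = lambda f: sp.laplace_transform(f, t, s, noconds=True)

# (b) D(D + 2)y = (D + 4)x  ->  H(s) = (s + 4)/(s(s + 2))
assert sp.simplify(L(2 - sp.exp(-2*t)) - (s + 4)/(s*(s + 2))) == 0

# (c) (D^2 + 2D + 1)y = Dx  ->  H(s) = s/(s + 1)^2
assert sp.simplify(L((1 - t)*sp.exp(-t)) - s/(s + 1)**2) == 0
print("impulse responses (b) and (c) verified")
```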

SYSTEM RESPONSE TO DELAYED IMPULSE

If h(t) is the response of an LTIC system to the input δ(t), then h(t − T) is the response of this same system to the input δ(t − T). This conclusion follows from the time-invariance property of LTIC systems. Thus, by knowing the unit impulse response h(t), we can determine the system response to a delayed impulse δ(t − T). Next, we put this result to good use in finding an LTIC system's zero-state response.

2.4 SYSTEM RESPONSE TO EXTERNAL INPUT: THE ZERO-STATE RESPONSE

This section is devoted to the determination of the zero-state response of an LTIC system. This is the system response y(t) to an input x(t) when the system is in the zero state, that is, when all initial conditions are zero. We shall assume that the systems discussed in this section are in the zero state unless mentioned otherwise. Under these conditions, the zero-state response will be the total response of the system. We shall use the superposition property for finding the system response to an arbitrary input x(t). Let us define a basic pulse p(t) of unit height and width Δτ, starting at t = 0 as illustrated in Fig. 2.3a. Figure 2.3b shows an input x(t) as a sum of narrow rectangular pulses. The pulse starting at t = nΔτ in Fig. 2.3b has a height x(nΔτ) and can be expressed as x(nΔτ)p(t − nΔτ). Now, x(t) is the sum of all such pulses. Hence,

x(t) = lim_(Δτ→0) Σ_τ x(nΔτ) p(t − nΔτ) = lim_(Δτ→0) Σ_τ [x(nΔτ)/Δτ] p(t − nΔτ) Δτ

The term [x(nΔτ)/Δτ] p(t − nΔτ) represents a pulse p(t − nΔτ) with height x(nΔτ)/Δτ. As Δτ → 0, the height of this strip → ∞, but its area remains x(nΔτ). Hence, this strip approaches an impulse x(nΔτ)δ(t − nΔτ) as Δτ → 0 (Fig. 2.3e). Therefore,

x(t) = lim_(Δτ→0) Σ_τ x(nΔτ) δ(t − nΔτ) Δτ    (2.22)

To find the response for this input x(t), we consider the input and the corresponding output pairs, as shown in Figs. 2.3c-2.3f and also shown by directed arrow notation as follows:

input ⟹ output
δ(t) ⟹ h(t)
δ(t − nΔτ) ⟹ h(t − nΔτ)
[x(nΔτ)Δτ] δ(t − nΔτ) ⟹ [x(nΔτ)Δτ] h(t − nΔτ)
lim_(Δτ→0) Σ_τ x(nΔτ) δ(t − nΔτ) Δτ ⟹ lim_(Δτ→0) Σ_τ x(nΔτ) h(t − nΔτ) Δτ
[the left side is x(t); see Eq. (2.22)]    [the right side is y(t)]

[Figure 2.3 Finding the system response to an arbitrary input x(t): (a) the pulse p(t); (b) x(t) as a sum of narrow pulses; (c)-(e) impulse inputs and their responses; (f) the response y(t).]

Therefore,†

y(t) = lim_(Δτ→0) Σ_τ x(nΔτ) h(t − nΔτ) Δτ = ∫_(−∞)^(∞) x(τ) h(t − τ) dτ    (2.23)

This is the result we seek. We have obtained the system response y(t) to an arbitrary input x(t) in terms of the unit impulse response h(t). Knowing h(t), we can determine the response y(t) to any input. Observe once again the all-pervasive nature of the system's characteristic modes. The system response to any input is determined by the impulse response, which, in turn, is made up of characteristic modes of the system. It is important to keep in mind the assumptions used in deriving Eq. (2.23). We assumed a linear time-invariant (LTI) system. Linearity allowed us to use the principle of superposition, and time invariance made it possible to express the system's response to δ(t − nΔτ) as h(t − nΔτ).

2.4.1 The Convolution Integral

The zero-state response y(t) obtained in Eq. (2.23) is given by an integral that occurs frequently in the physical sciences, engineering, and mathematics. For this reason, this integral is given a special name: the convolution integral. The convolution integral of two functions x1(t) and x2(t) is denoted symbolically by x1(t) * x2(t) and is defined as

x1(t) * x2(t) = ∫_(−∞)^(∞) x1(τ) x2(t − τ) dτ    (2.24)
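Equation (2.24) can be approximated numerically by sampling both signals and scaling a discrete convolution by the sample spacing (our own sketch; the signals e^(−t)u(t) and e^(−2t)u(t) are chosen only as an example with a known closed form).

```python
# Sketch (our own code): Riemann-sum approximation of the convolution integral.
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
x1 = np.exp(-t)     # e^(-t)u(t), sampled for t >= 0
x2 = np.exp(-2*t)   # e^(-2t)u(t)

c = np.convolve(x1, x2)[:t.size] * dt   # discrete sum scaled by dt
c_exact = np.exp(-t) - np.exp(-2*t)     # known closed form
print(np.max(np.abs(c - c_exact)))      # error shrinks as dt shrinks
```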

Some important properties of the convolution integral follow.

THE COMMUTATIVE PROPERTY

The convolution operation is commutative; that is, x1(t) * x2(t) = x2(t) * x1(t). This property can be proved by a change of variable. In Eq. (2.24), if we let z = t − τ so that τ = t − z and dτ = −dz, we obtain

x1(t) * x2(t) = −∫_(∞)^(−∞) x2(z) x1(t − z) dz = ∫_(−∞)^(∞) x2(z) x1(t − z) dz = x2(t) * x1(t)    (2.25)

† In deriving this result, we have assumed a time-invariant system. If the system is time-varying, then the system response to the input δ(t − nΔτ) cannot be expressed as h(t − nΔτ) but instead has the form h(t, nΔτ). Use of this form modifies Eq. (2.23) to

y(t) = ∫_(−∞)^(∞) x(τ) h(t, τ) dτ

where h(t, τ) is the system response at instant t to a unit impulse input located at τ.


THE DISTRIBUTIVE PROPERTY

According to the distributive property,

x1(t) * [x2(t) + x3(t)] = x1(t) * x2(t) + x1(t) * x3(t)    (2.26)

THE ASSOCIATIVE PROPERTY

According to the associative property,

x1(t) * [x2(t) * x3(t)] = [x1(t) * x2(t)] * x3(t)    (2.27)

The proofs of Eqs. (2.26) and (2.27) follow directly from the definition of the convolution integral. They are left as an exercise for the reader.

THE SHIFT PROPERTY

If x1(t) * x2(t) = c(t), then

x1(t) * x2(t − T) = x1(t − T) * x2(t) = c(t − T)

More generally, we see that

x1(t − T1) * x2(t − T2) = c(t − T1 − T2)    (2.28)

Proof. We are given x1(t) * x2(t) = ∫_(−∞)^(∞) x1(τ) x2(t − τ) dτ = c(t). Therefore,

x1(t) * x2(t − T) = ∫_(−∞)^(∞) x1(τ) x2(t − T − τ) dτ = c(t − T)

The equally simple proof of Eq. (2.28) follows a similar approach.

CONVOLUTION WITH AN IMPULSE

Convolution of a function x(t) with a unit impulse results in the function x(t) itself. By the definition of convolution,

x(t) * δ(t) = ∫_(−∞)^(∞) x(τ) δ(t − τ) dτ

Because δ(t − τ) is an impulse located at τ = t, according to the sampling property of the impulse [Eq. (1.11)], the integral here is just the value of x(τ) at τ = t, that is, x(t). Therefore,

x(t) * δ(t) = x(t)

Actually, this result was derived earlier [Eq. (2.22)].
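A discrete illustration of the sampling property (our own sketch): a sampled unit impulse is a single sample of height 1/dt (unit area), and convolving with it returns the original samples.

```python
# Sketch (our own code): x * δ = x, demonstrated with sampled signals.
import numpy as np

dt = 1e-2
t = np.arange(0, 2, dt)
x = np.cos(3*t)

delta = np.zeros_like(t)
delta[0] = 1/dt   # unit-area approximation of δ(t)

out = np.convolve(x, delta)[:t.size] * dt
print(np.max(np.abs(out - x)))   # essentially zero
```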


THE WIDTH PROPERTY

If the durations (widths) of x1(t) and x2(t) are finite, given by T1 and T2, respectively, then the duration (width) of x1(t) * x2(t) is T1 + T2 (Fig. 2.4). The proof of this property follows readily from the graphical considerations discussed later in Sec. 2.4.2.

[Figure 2.4 Width property of convolution.]

ZERO-STATE RESPONSE AND CAUSALITY

The (zero-state) response y(t) of an LTIC system is

y(t) = x(t) * h(t) = ∫_(−∞)^(∞) x(τ) h(t − τ) dτ    (2.29)

In deriving Eq. (2.29), we assumed the system to be linear and time-invariant. There were no other restrictions on either the system or the input signal x(t). In practice, however, most systems are causal, so their response cannot begin before the input. Furthermore, most inputs are also causal, which means they start at t = 0. Causality restrictions on both signals and systems further simplify the limits of integration in Eq. (2.29). By definition, the response of a causal system cannot begin before its input begins. Consequently, a causal system's response to a unit impulse δ(t) (which is located at t = 0) cannot begin before t = 0. Therefore, a causal system's unit impulse response h(t) is a causal signal.

It is important to remember that the integration in Eq. (2.29) is performed with respect to τ (not t). If the input x(t) is causal, then x(τ) = 0 for τ < 0, as illustrated in Fig. 2.5a. Similarly, if h(t) is causal, h(t − τ) = 0 for t − τ < 0, that is, for τ > t, as depicted in Fig. 2.5a. Therefore, the product x(τ)h(t − τ) = 0 everywhere except over the nonshaded interval 0 ≤ τ ≤ t shown in Fig. 2.5a (assuming t ≥ 0). Observe that if t is negative, x(τ)h(t − τ) = 0 for all τ, as shown in Fig. 2.5b. Therefore, Eq. (2.29) reduces to

y(t) = x(t) * h(t) = ∫_(0−)^(t) x(τ) h(t − τ) dτ    t ≥ 0    (2.30)

with y(t) = 0 for t < 0. The lower limit of integration in Eq. (2.30) is taken as 0− to avoid the difficulty in integration that can arise if x(t) contains an impulse at the origin. This result shows that if x(t) and h(t) are both causal, the response y(t) is also causal.

[Figure 2.5 Limits of the convolution integral for causal x(t) and h(t): (a) t ≥ 0; (b) t < 0.]

Convolution integrals for many commonly encountered pairs of signals are listed in Table 2.1. For instance, pair 4 (with λ1 = −1 and λ2 = −2) gives e^(−t)u(t) * e^(−2t)u(t) to be (e^(−t) − e^(−2t))u(t). The following example demonstrates the utility of this table.

Use Table 2.1 to compute the loop current y(t) of the RLC circuit in Ex. 2.4 for the input x(t) = 10e^(−3t)u(t) when all the initial conditions are zero. The loop equation for this circuit [see Ex. 1.16 or Eq. (1.29)] is

(D^2 + 3D + 2)y(t) = Dx(t)

The impulse response h(t) for this system, as obtained in Ex. 2.6, is

h(t) = (2e^(−2t) − e^(−t))u(t)

The input is x(t) = 10e^(−3t)u(t), and the response y(t) is

y(t) = x(t) * h(t) = 10e^(−3t)u(t) * [2e^(−2t) − e^(−t)]u(t)

Using the distributive property of the convolution [Eq. (2.26)], we obtain

y(t) = 10e^(−3t)u(t) * 2e^(−2t)u(t) − 10e^(−3t)u(t) * e^(−t)u(t)
     = 20[e^(−3t)u(t) * e^(−2t)u(t)] − 10[e^(−3t)u(t) * e^(−t)u(t)]

Now using pair 4 in Table 2.1 yields

y(t) = [20/(−3 − (−2))][e^(−3t) − e^(−2t)]u(t) − [10/(−3 − (−1))][e^(−3t) − e^(−t)]u(t)
     = −20(e^(−3t) − e^(−2t))u(t) + 5(e^(−3t) − e^(−t))u(t)
     = (−5e^(−t) + 20e^(−2t) − 15e^(−3t))u(t)
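The same result follows by evaluating the convolution integral directly. This SymPy sketch (our own code, not the book's) integrates x(τ)h(t − τ) over the causal limits 0 ≤ τ ≤ t.

```python
# Sketch (our own code): direct evaluation of the convolution integral.
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
x = 10*sp.exp(-3*tau)
h = 2*sp.exp(-2*(t - tau)) - sp.exp(-(t - tau))

y = sp.integrate(x*h, (tau, 0, t))   # causal limits, valid for t >= 0
expected = -5*sp.exp(-t) + 20*sp.exp(-2*t) - 15*sp.exp(-3*t)
print(sp.simplify(y - expected))     # 0
```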

TABLE 2.1 Select Convolution Integrals

No.  x1(t)               x2(t)                      x1(t) * x2(t)
1    x(t)                δ(t − T)                   x(t − T)
2    e^(λt)u(t)          u(t)                       [(e^(λt) − 1)/λ] u(t)
3    u(t)                u(t)                       t u(t)
4    e^(λ1 t)u(t)        e^(λ2 t)u(t)               [(e^(λ1 t) − e^(λ2 t))/(λ1 − λ2)] u(t),  λ1 ≠ λ2
5    e^(λt)u(t)          e^(λt)u(t)                 t e^(λt) u(t)
6    t e^(λt)u(t)        e^(λt)u(t)                 (1/2) t^2 e^(λt) u(t)
7    t^N u(t)            e^(λt)u(t)                 [N!/λ^(N+1)] [e^(λt) − Σ_(k=0)^(N) (λ^k t^k)/k!] u(t)
8    t^M u(t)            t^N u(t)                   [M!N!/(M+N+1)!] t^(M+N+1) u(t)
9    t^M e^(λt)u(t)      t^N e^(λt)u(t)             [M!N!/(M+N+1)!] t^(M+N+1) e^(λt) u(t)
10   t^M e^(λ1 t)u(t)    t^N e^(λ2 t)u(t)           Σ_(k=0)^(M) [(−1)^k M!(N+k)! t^(M−k) e^(λ1 t)]/[k!(M−k)!(λ1 − λ2)^(N+k+1)] u(t)
                                                    + Σ_(k=0)^(N) [(−1)^k N!(M+k)! t^(N−k) e^(λ2 t)]/[k!(N−k)!(λ2 − λ1)^(M+k+1)] u(t)
12   e^(λt)u(t)          e^(−αt) cos(βt + θ)u(t)    [cos(θ − φ) e^(λt) − e^(−αt) cos(βt + θ − φ)] / sqrt((α + λ)^2 + β^2) u(t),
                                                    φ = tan^(−1)[−β/(α + λ)]
13   e^(λ1 t)u(t)        e^(λ2 t)u(−t)              [e^(λ1 t)u(t) + e^(λ2 t)u(−t)]/(λ2 − λ1),  Re λ2 > Re λ1

DRILL Convolution by Tables

DRILL Zero-State Response by Convolution Table

For an LTIC system with impulse response h(t) = e^(−2t)u(t), determine the zero-state response. [Hint: Use pair 12 from Table 2.1.]

ANSWER: … (3t + 33.68°)]u(t)

RESPONSE TO COMPLEX INPUTS

The LTIC system response discussed so far applies to general input signals, real or complex. However, if the system is real, that is, if h(t) is real, then we shall show that the real part of the input generates the real part of the output, and a similar conclusion applies to the imaginary part. If the input is x(t) = x_r(t) + jx_i(t), where x_r(t) and x_i(t) are the real and imaginary parts of x(t), then for real h(t),

y(t) = h(t) * [x_r(t) + jx_i(t)] = h(t) * x_r(t) + j[h(t) * x_i(t)] = y_r(t) + jy_i(t)

where y_r(t) and y_i(t) are the real and the imaginary parts of y(t). Using the right-directed-arrow notation to indicate a pair of the input and the corresponding output, the foregoing result can be expressed as follows. If

x(t) = x_r(t) + jx_i(t) ⟹ y(t) = y_r(t) + jy_i(t)

then

x_r(t) ⟹ y_r(t)  and  x_i(t) ⟹ y_i(t)    (2.31)

MULTIPLE INPUTS

Multiple inputs to LTI systems can be treated by applying the superposition principle. Each input is considered separately, with all other inputs assumed to be zero. The sum of all these individual system responses constitutes the total system output when all the inputs are applied simultaneously.
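A numerical sketch of the complex-input result (our own code): convolving a complex input with a real h and taking the real part matches convolving the input's real part alone.

```python
# Sketch (our own code): for real h, Re{x} produces Re{y}.
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)
h = np.exp(-t)     # a real impulse response e^(-t)u(t)
x = np.exp(2j*t)   # complex input; its real part is cos(2t)

conv = lambda a, b: np.convolve(a, b)[:t.size] * dt
y = conv(x, h)            # response to the complex input
y_r = conv(x.real, h)     # response to the real part alone
print(np.max(np.abs(y.real - y_r)))   # essentially zero
```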


2.4.2 Graphical Understanding of Convolution Operation

The convolution operation can be grasped readily through a graphical interpretation of the convolution integral. Such an understanding is helpful in evaluating the convolution integral of more complex signals. In addition, graphical convolution allows us to grasp visually or mentally the convolution integral's result, which can be of great help in sampling, filtering, and many other problems. Finally, many signals have no exact mathematical description, so they can be described only graphically. If two such signals are to be convolved, we have no choice but to perform their convolution graphically. We shall now explain the convolution operation by convolving the signals x(t) and g(t), illustrated in Figs. 2.7a and 2.7b, respectively. If c(t) is the convolution of x(t) with g(t), then

c(t) = ∫_(−∞)^(∞) x(τ) g(t − τ) dτ

One of the crucial points to remember here is that this integration is performed with respect to τ, so that t is just a parameter (like a constant). This consideration is especially important when we sketch the graphical representations of the functions x(τ) and g(t − τ). Both these functions should be sketched as functions of τ, not of t. The function x(τ) is identical to x(t), with τ replacing t (Fig. 2.7c). Therefore, x(t) and x(τ) will have the same graphical representations. Similar remarks apply to g(t) and g(τ) (Fig. 2.7d). To appreciate what g(t − τ) looks like, let us start with the function g(τ) (Fig. 2.7d). Time reversal of this function (reflection about the vertical axis τ = 0) yields g(−τ) (Fig. 2.7e). Let us denote this function by φ(τ):

φ(τ) = g(−τ)

Now φ(τ) shifted by t seconds is φ(τ − t), given by

φ(τ − t) = g[−(τ − t)] = g(t − τ)

Therefore, we first time-reverse g(τ) to obtain g(−τ) and then time-shift g(−τ) by t to obtain g(t − τ). For positive t, the shift is to the right (Fig. 2.7f); for negative t, the shift is to the left (Figs. 2.7g and 2.7h).

The preceding discussion gives us a graphical interpretation of the functions x(τ) and g(t − τ). The convolution c(t) is the area under the product of these two functions. Thus, to compute c(t) at some positive instant t = t1, we first obtain g(−τ) by inverting g(τ) about the vertical axis. Next, we right-shift or delay g(−τ) by t1 to obtain g(t1 − τ) (Fig. 2.7f), and then we multiply this function by x(τ), giving us the product x(τ)g(t1 − τ) (shaded portion in Fig. 2.7f). The area A1 under this product is c(t1), the value of c(t) at t = t1. We can therefore plot c(t1) = A1 on a curve describing c(t), as shown in Fig. 2.7i. The area under the product x(τ)g(−τ) in Fig. 2.7e is c(0), the value of the convolution for t = 0 (at the origin). A similar procedure is followed in computing the value of c(t) at t = t2, where t2 is negative (Fig. 2.7g). In this case, the function g(−τ) is shifted by a negative amount (that is, left-shifted) to obtain g(t2 − τ). Multiplication of this function with x(τ) yields the product x(τ)g(t2 − τ). The area under this product is c(t2) = A2, giving us another point on the curve c(t) at t = t2 (Fig. 2.7i). This procedure can be repeated for all values of t, from −∞ to ∞. The result will be a curve describing c(t) for all time t. Note that when t < −3, x(τ) and g(t − τ) do not overlap (see Fig. 2.7h); therefore, c(t) = 0 for t < −3.
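The flip-shift-multiply-integrate procedure translates directly into code. This sketch (our own; the signals are examples, not those of the figure) evaluates c(t1) as the area under x(τ)g(t1 − τ).

```python
# Sketch (our own code): graphical convolution evaluated at a few instants.
import numpy as np

dtau = 1e-4
tau = np.arange(-5, 5, dtau)

x = lambda a: np.exp(-a) * (a >= 0)     # example x(t) = e^(-t)u(t)
g = lambda a: np.exp(-2*a) * (a >= 0)   # example g(t) = e^(-2t)u(t)

def c_at(t1):
    # flip g, shift by t1, multiply by x, and take the area of the product
    return np.sum(x(tau) * g(t1 - tau)) * dtau

for t1 in (0.5, 1.0, 2.0):
    print(t1, c_at(t1), np.exp(-t1) - np.exp(-2*t1))   # numeric vs exact
```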

[Figure 2.7 Graphical explanation of the convolution c(t) = x(t) * g(t): panels (a)-(i) show x(t), g(t), x(τ), g(τ), g(−τ), g(t1 − τ) for t1 > 0, g(t2 − τ) for t2 < 0, the nonoverlapping case, and the resulting c(t).]

2.5 SYSTEM STABILITY

[Figure 2.19 Composite system for Ex. 2.13.]

The composite system impulse response h(t) is given by

h(t) = e^t u(t) − 2[(e^t − e^(−t))/2] u(t) = e^(−t) u(t)

If the composite cascade system were to be enclosed in a black box with only the input and the output terminals accessible, any measurement from these external terminals would show that the impulse response of the system is e^(−t)u(t), without any hint of the dangerously unstable system harbored within. The composite system is BIBO-stable because its impulse response, e^(−t)u(t), is absolutely integrable. Observe, however, that the subsystem S2 has a characteristic root 1, which lies in the RHP. Hence, S2 is asymptotically unstable. Eventually, S2 will burn out (or saturate) because of the unbounded characteristic response generated by intended or unintended initial conditions, no matter how small. We shall show in Ex. 13.12 that this composite system is observable, but not controllable. If the positions of S1 and S2 were interchanged (S2 followed by S1), the system would still be BIBO-stable, but asymptotically unstable. In this case, the analysis in Ex. 13.12 shows that the composite system is controllable, but not observable. This example shows that BIBO stability does not always imply asymptotic stability. However, asymptotic stability always implies BIBO stability.

Fortunately, uncontrollable and/or unobservable systems are not commonly observed in practice. Henceforth, in determining system stability, we shall assume that unless otherwise mentioned, the internal and the external descriptions of a system are equivalent, implying that the system is controllable and observable.

EXAMPLE 2.14 Investigating Asymptotic and BIBO Stability

Investigate the asymptotic and the BIBO stability of LTIC systems described by the following equations, assuming that the equations are internal system descriptions:
(a) (D + 1)(D^2 + 4D + 8)y(t) = (D − 3)x(t)
(b) (D − 1)(D^2 + 4D + 8)y(t) = (D + 2)x(t)
(c) (D + 2)(D^2 + 4)y(t) = (D^2 + D + 1)x(t)
(d) (D + 1)(D^2 + 4)^2 y(t) = (D^2 + 2D + 8)x(t)

CHAPTER 2 TIME-DOMAIN ANALYSIS OF CONTINUOUS-TIME SYSTEMS

The characteristic polynomials of these systems are

(a) (λ + 1)(λ^2 + 4λ + 8) = (λ + 1)(λ + 2 - j2)(λ + 2 + j2)
(b) (λ - 1)(λ^2 + 4λ + 8) = (λ - 1)(λ + 2 - j2)(λ + 2 + j2)
(c) (λ + 2)(λ^2 + 4) = (λ + 2)(λ - j2)(λ + j2)
(d) (λ + 1)(λ^2 + 4)^2 = (λ + 1)(λ - j2)^2 (λ + j2)^2

Consequently, the characteristic roots of the systems are (see Fig. 2.20)

(a) -1, -2 ± j2
(b) 1, -2 ± j2
(c) -2, ±j2
(d) -1, ±j2, ±j2

System (a) is asymptotically stable (all roots in the LHP), system (b) is unstable (one root in the RHP), system (c) is marginally stable (unrepeated roots on the imaginary axis and no roots in the RHP), and system (d) is unstable (repeated roots on the imaginary axis). BIBO stability is readily determined from the asymptotic stability: system (a) is BIBO-stable, system (b) is BIBO-unstable, system (c) is BIBO-unstable, and system (d) is BIBO-unstable. We have assumed that these systems are controllable and observable.
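The root-location tests of this example are mechanical enough to automate. Below is a minimal Python sketch (an illustration, not from the text) that classifies an LTIC system from a list of (root, multiplicity) pairs, assuming, as in the example, that the system is controllable and observable:

```python
def classify(roots):
    """Classify an LTIC system from its characteristic roots.

    roots: list of (root, multiplicity) pairs, with root a complex number.
    Assumes the internal and external descriptions are equivalent.
    """
    tol = 1e-9
    if any(r.real > tol for r, m in roots):
        return "unstable"                    # at least one root in the RHP
    on_axis = [(r, m) for r, m in roots if abs(r.real) <= tol]
    if any(m > 1 for r, m in on_axis):
        return "unstable"                    # repeated imaginary-axis roots
    if on_axis:
        return "marginally stable"           # only simple imaginary-axis roots
    return "asymptotically stable"           # all roots strictly in the LHP

# Systems (a)-(d) of Ex. 2.14:
print(classify([(-1+0j, 1), (-2+2j, 1), (-2-2j, 1)]))  # asymptotically stable
print(classify([( 1+0j, 1), (-2+2j, 1), (-2-2j, 1)]))  # unstable
print(classify([(-2+0j, 1), (2j, 1), (-2j, 1)]))       # marginally stable
print(classify([(-1+0j, 1), (2j, 2), (-2j, 2)]))       # unstable
```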

Figure 2.20 Characteristic root locations for the systems of Ex. 2.14.

For each case, plot the characteristic roots and investigate the asymptotic and BIBO stability. Assume the equations reflect internal descriptions.

(a) D(D + 2) y(t) = 3 x(t)
(b) D^2 (D + 3) y(t) = (D + 5) x(t)
(c) (D + 1)(D + 2) y(t) = (2D + 3) x(t)
(d) (D^2 + 1)(D^2 + 9) y(t) = (D^2 + 2D + 4) x(t)
(e) (D + 1)(D^2 - 4D + 9) y(t) = (D + 7) x(t)


IMPLICATIONS OF STABILITY

All practical signal-processing systems must be asymptotically stable. Unstable systems are useless from the viewpoint of signal processing because any set of intended or unintended initial conditions leads to an unbounded response that either destroys the system or (more likely) leads it to some saturation conditions that change the nature of the system. Even if the discernible initial conditions are zero, stray voltages or thermal noise signals generated within the system will act as initial conditions. Because of exponential growth of a mode or modes in unstable systems, a stray signal, no matter how small, will eventually cause an unbounded output.

Marginally stable systems, though BIBO-unstable, do have one important application in the oscillator, which is a system that generates a signal on its own without the application of an external input. Consequently, the oscillator output is a zero-input response. If such a response is to be a sinusoid of frequency ω0, the system should be marginally stable with characteristic roots at ±jω0. Thus, to design an oscillator of frequency ω0, we should pick a system with the characteristic polynomial (λ - jω0)(λ + jω0) = λ^2 + ω0^2. A system described by the differential equation

(D^2 + ω0^2) y(t) = x(t)

will do the job. However, practical oscillators are invariably realized using nonlinear systems.
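A quick numerical check of this design: integrating (D^2 + ω0^2) y = 0 from nonzero initial conditions should yield a sustained, undamped sinusoid of frequency ω0. A Python sketch (illustrative, not from the text) using a classical RK4 integrator on the state [y, y']:

```python
import math

def oscillator_zir(w0, y0, v0, t_end, dt=1e-4):
    """Zero-input response of y'' + w0^2 y = 0 via classical RK4."""
    def deriv(y, v):
        return v, -w0 * w0 * y
    y, v = y0, v0
    for _ in range(int(round(t_end / dt))):
        k1y, k1v = deriv(y, v)
        k2y, k2v = deriv(y + 0.5 * dt * k1y, v + 0.5 * dt * k1v)
        k3y, k3v = deriv(y + 0.5 * dt * k2y, v + 0.5 * dt * k2v)
        k4y, k4v = deriv(y + dt * k3y, v + dt * k3v)
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

# Roots at +/- j*w0: with y(0) = 1, y'(0) = 0 the response is cos(w0 t),
# which neither decays nor grows (marginal stability sustains it).
w0 = 2.0
print(oscillator_zir(w0, 1.0, 0.0, 3.0))   # ~cos(6)
```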

2.6 INTUITIVE INSIGHTS INTO SYSTEM BEHAVIOR

This section attempts to provide an understanding of what determines system behavior. Because of its intuitive nature, the discussion is more or less qualitative. We shall now show that the most important attributes of a system are its characteristic roots or characteristic modes because they determine not only the zero-input response but also the entire behavior of the system.

2.6.1 Dependence of System Behavior on Characteristic Modes Recall that the zero-input response of a system consists of the system's characteristic modes. For a stable system, these characteristic modes decay exponentially and eventually vanish. This behavior may give the impression that these modes do not substantially affect system behavior in general and system response in particular. This impression is totally wrong! We shall now see that the system's characteristic modes leave their imprint on every aspect of the system behavior. We may compare the system's characteristic modes (or roots) to a seed that eventually dissolves in


the ground; however, the plant that springs from it is totally determined by the seed. The imprint of the seed exists on every cell of the plant. To understand this interesting phenomenon, recall that the characteristic modes of a system are very special to that system because it can sustain these signals without the application of an external input. In other words, the system offers a free ride and ready access to these signals. Now imagine what would happen if we actually drove the system with an input having the form of a characteristic mode! We would expect the system to respond strongly (this is, in fact, the resonance phenomenon discussed later in this section). If the input is not exactly a characteristic mode but is close to such a mode, we would still expect the system response to be strong. However, if the input is very different from any of the characteristic modes, we would expect the system to respond poorly. We shall now show that these intuitive deductions are indeed true.


We devise a measure of similarity of signals later (in Ch. 3). Here, we shall take a simpler approach. Let us restrict the system's inputs to exponentials of the form e^{ζt}, where ζ is generally a complex number. The similarity of two exponential signals e^{ζt} and e^{λt} will then be measured by the closeness of ζ and λ. If the difference ζ - λ is small, the signals are similar; if ζ - λ is large, the signals are dissimilar.

Now consider a first-order system with a single characteristic mode e^{λt} and the input e^{ζt}. The impulse response of this system is then given by Ae^{λt}, where the exact value of A is not important for this qualitative discussion. The system response y(t) is given by

y(t) = h(t) * x(t) = Ae^{λt} u(t) * e^{ζt} u(t)

From the convolution table (Table 2.1), we obtain

y(t) = [A / (ζ - λ)] [e^{ζt} - e^{λt}] u(t)    (2.46)


Clearly, if the input e^{ζt} is similar to e^{λt}, then ζ - λ is small and the system response is large. The closer the input x(t) to the characteristic mode, the stronger the system response. In contrast, if the input is very different from the natural mode, ζ - λ is large and the system responds poorly. This is precisely what we set out to prove.

We have proved the foregoing assertion for a single-mode (first-order) system. It can be generalized to an Nth-order system, which has N characteristic modes. The impulse response h(t) of such a system is a linear combination of its N modes. Therefore, if x(t) is similar to any one of the modes, the corresponding response will be high; if it is similar to none of the modes, the response will be small. Clearly, the characteristic modes are very influential in determining system response to a given input.

It would be tempting to conclude on the basis of Eq. (2.46) that if the input is identical to the characteristic mode, so that ζ = λ, then the response goes to infinity. Remember, however, that if ζ = λ, the numerator on the right-hand side of Eq. (2.46) also goes to zero. We shall study this interesting behavior (resonance phenomenon) later in this section. We now show that mere inspection of the impulse response h(t) (which is composed of characteristic modes) reveals a great deal about the system behavior.
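Equation (2.46) can be explored numerically. The Python sketch below (illustrative; the parameter values are arbitrary choices, not from the text) evaluates the peak of |y(t)| for an input exponent ζ close to and far from the mode λ = -1:

```python
import math

def response_peak(zeta, lam, A=1.0, t_end=10.0, n=10_000):
    """Peak of |y(t)| from Eq. (2.46): y = A/(zeta-lam) * (e^{zeta t} - e^{lam t})."""
    peak = 0.0
    for k in range(n + 1):
        t = t_end * k / n
        y = A / (zeta - lam) * (math.exp(zeta * t) - math.exp(lam * t))
        peak = max(peak, abs(y))
    return peak

lam = -1.0                        # characteristic mode e^{-t}
print(response_peak(-1.01, lam))  # input close to the mode: strong response (~0.37)
print(response_peak(-20.0, lam))  # input far from the mode: weak response (~0.04)
```

As ζ approaches λ, y(t) approaches A t e^{λt}, consistent with the resonance discussion later in this section.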


2.6.2 Response Time of a System: The System Time Constant

Like human beings, systems have a certain response time. In other words, when an input (stimulus) is applied to a system, a certain amount of time elapses before the system fully responds to that input. This time lag or response time is called the system time constant. As we shall see, a system's time constant is equal to the width of its impulse response h(t).

An input δ(t) to a system is instantaneous (zero duration), but its response h(t) has a duration Th. Therefore, the system requires a time Th to respond fully to this input, and we are justified in viewing Th as the system's response time or time constant. We arrive at the same conclusion via another argument. The output is a convolution of the input with h(t). If an input is a pulse of width Tx, then the output pulse width is Tx + Th according to the width property of convolution. This conclusion shows that the system requires Th seconds to respond fully to any input. The system time constant indicates how fast the system is. A system with a smaller time constant is a faster system that responds quickly to an input. A system with a relatively large time constant is a sluggish system that cannot respond well to rapidly varying signals.

Strictly speaking, the duration of the impulse response h(t) is infinite because the characteristic modes approach zero asymptotically as t → ∞. However, beyond some value of t, h(t) becomes negligible. It is therefore necessary to use some suitable measure of the impulse response's effective width. There is no single satisfactory definition of effective signal duration (or width) applicable to every situation. For the situation depicted in Fig. 2.21, a reasonable definition of the duration of h(t) would be Th, the width of a rectangular pulse that has an area identical to that of h(t) and a height identical to that of h(t) at some suitable instant t = t0. In Fig. 2.21, t0 is chosen as the instant at which h(t) is maximum. According to this definition,†

Th h(t0) = ∫_{-∞}^{∞} h(t) dt

† This definition is satisfactory when h(t) is a single, mostly positive (or mostly negative) pulse. Such systems are lowpass systems. This definition should not be applied indiscriminately to all systems.


Figure 2.21 Effective duration of an impulse response.

or

Th = [∫_{-∞}^{∞} h(t) dt] / h(t0)    (2.47)

Now if a system has a single mode

h(t) = Ae^{λt} u(t)

with λ negative and real, then h(t) is maximum at t = 0 with value h(0) = A. Therefore, according to Eq. (2.47),

Th = (1/A) ∫_0^{∞} Ae^{λt} dt = -1/λ

Thus, the time constant in this case is simply the (negative of the) reciprocal of the system's characteristic root. For the multimode case, h(t) is a weighted sum of the system's characteristic modes, and Th is a weighted average of the time constants associated with the N modes of the system.
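A numerical check of Eq. (2.47) for a single-mode system, in Python (illustrative; the values A = 3 and λ = -2 are arbitrary choices, not from the text):

```python
import math

def time_constant(h, t0, t_end=20.0, n=200_000):
    """Eq. (2.47): Th = (area under h) / h(t0), by the trapezoidal rule."""
    dt = t_end / n
    area = 0.5 * (h(0.0) + h(t_end))
    for k in range(1, n):
        area += h(k * dt)
    area *= dt
    return area / h(t0)

lam = -2.0
h = lambda t: 3.0 * math.exp(lam * t)   # single mode A e^{lam t}, peak at t0 = 0
print(time_constant(h, 0.0))            # ~0.5 = -1/lam
```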

2.6.3 Time Constant and Rise Time of a System

Rise time of a system, defined as the time required for the unit step response to rise from 10% to 90% of its steady-state value, is an indication of the speed of response.† The system time constant may also be viewed from the perspective of rise time. The unit step response y(t) of a system is the convolution of u(t) with h(t). Let the impulse response h(t) be a rectangular pulse of width Th, as shown in Fig. 2.22. This assumption simplifies the discussion, yet gives satisfactory results for qualitative discussion. The result of this convolution is illustrated in Fig. 2.22. Note that the output does not rise from zero to a final value instantaneously as the input rises; instead, the output takes Th seconds to accomplish this. Hence, the rise time Tr of the system is equal to the system time constant Th:

Tr = Th

This result and Fig. 2.22 show clearly that a system generally does not respond to an input instantaneously. Instead, it takes time Th for the system to respond fully.

† Because of varying definitions of rise time, the reader may find different results in the literature. The qualitative and intuitive nature of this discussion should always be kept in mind.

2.6 Intuitive Insights into System Behavior x(t)

lt(t)

207

y(t)

*

,-

0

,-

r, ,

0

r-

Figure 2.22 Rise time of a system.
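The convolution of Fig. 2.22 is easy to reproduce numerically. A Python sketch (illustrative, not from the text) convolving a sampled unit step with a rectangular impulse response of width Th = 0.5 s; for this idealized h(t), the output ramps linearly, so the full rise takes exactly Th seconds:

```python
def convolve(x, h, dt):
    """Discrete approximation of (x * h)(t): y[n] = sum_k x[k] h[n-k] dt."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj * dt
    return y

dt, Th, T = 0.01, 0.5, 3.0
N = int(T / dt)
u = [1.0] * N                                              # unit step samples
h = [1.0 / Th if k * dt < Th else 0.0 for k in range(N)]   # rectangular h(t)
y = convolve(u, h, dt)

# The output climbs from 0 to 1 over ~Th seconds, then holds its final value.
t_rise_end = next(k for k, yk in enumerate(y) if yk >= 0.999) * dt
print(t_rise_end)   # ~0.5 = Th (within one sample)
```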

2.6.4 Time Constant and Filtering

A larger time constant implies a sluggish system because the system takes longer to respond fully to an input. Such a system cannot respond effectively to rapid variations in the input. In contrast, a smaller time constant indicates that a system is capable of responding to rapid variations in the input. Thus, there is a direct connection between a system's time constant and its filtering properties.

A high-frequency sinusoid varies rapidly with time. A system with a large time constant will not be able to respond well to this input. Therefore, such a system will suppress rapidly varying (high-frequency) sinusoids and other high-frequency signals, thereby acting as a lowpass filter (a filter allowing the transmission of low-frequency signals only). We shall now show that a system


Figure 2.23 Time constant and filtering.


with a time constant Th acts as a lowpass filter having a cutoff frequency of fc = 1/Th hertz, so that sinusoids with frequencies below fc Hz are transmitted reasonably well, while those with frequencies above fc Hz are suppressed.

To demonstrate this fact, let us determine the system response to a sinusoidal input x(t) by convolving this input with the effective impulse response h(t) in Fig. 2.23a. Figures 2.23b and 2.23c show the process of convolution of h(t) with sinusoidal inputs of two different frequencies. The sinusoid in Fig. 2.23b has a relatively high frequency, while the frequency of the sinusoid in Fig. 2.23c is low. Recall that the convolution of x(t) and h(t) is equal to the area under the product x(τ)h(t - τ). This area is shown shaded in Figs. 2.23b and 2.23c for the two cases. For the high-frequency sinusoid, it is clear from Fig. 2.23b that the area under x(τ)h(t - τ) is very small because its positive and negative areas nearly cancel each other out. In this case the output y(t) remains periodic but has a rather small amplitude. This happens when the period of the sinusoid is much smaller than the system time constant Th. In contrast, for the low-frequency sinusoid, the period of the sinusoid is larger than Th, rendering the partial cancellation of area under x(τ)h(t - τ) less effective. Consequently, the output y(t) is much larger, as depicted in Fig. 2.23c.

Between these two possible extremes in system behavior, a transition point occurs when the period of the sinusoid is equal to the system time constant Th. The frequency at which this transition occurs is known as the cutoff frequency fc of the system. Because Th is the period of the cutoff frequency fc,

fc = 1/Th

The frequency fc is also known as the bandwidth of the system because the system transmits or passes sinusoidal components with frequencies below fc while attenuating components with frequencies above fc. Of course, the transition in system behavior is gradual. There is no dramatic change in system behavior at fc = 1/Th. Moreover, these results are based on an idealized (rectangular pulse) impulse response; in practice these results will vary somewhat, depending on the exact shape of h(t). Remember that the "feel" of general system behavior is more important than exact system response for this qualitative discussion. Since the system time constant is equal to its rise time, Tr = Th, we have

fc = 1/Tr    (2.48)

Thus, a system's bandwidth is inversely proportional to its rise time. Although Eq. (2.48) was derived for an idealized (rectangular) impulse response, its implications are valid for lowpass LTIC systems in general. For a general case, we can show that [1]

fc = k/Tr

where the exact value of k depends on the nature of h(t). An experienced engineer often can estimate quickly the bandwidth of an unknown system by simply observing the system response to a step input on an oscilloscope.
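For the idealized rectangular h(t) of width Th and height 1/Th, the convolution with cos 2πft can be carried out in closed form: the steady-state output amplitude works out to |sin(πf Th)/(πf Th)|, which is near 1 well below fc = 1/Th and near 0 well above it. A Python sketch (illustrative; the closed form applies only to this rectangular h):

```python
import math

def sine_response_amplitude(f, Th):
    """Steady-state output amplitude when cos(2*pi*f*t) drives a system whose
    impulse response is a rectangular pulse of width Th and height 1/Th."""
    x = math.pi * f * Th
    return 1.0 if x == 0 else abs(math.sin(x) / x)

Th = 0.1                                   # time constant -> cutoff fc = 1/Th = 10 Hz
print(sine_response_amplitude(1.0, Th))    # low frequency: passed (~0.98)
print(sine_response_amplitude(100.0, Th))  # high frequency: suppressed (~0)
```

The transition is gradual, as the text notes; there is no sharp cutoff at fc.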


2.6.5 Time Constant and Pulse Dispersion (Spreading)

In general, the transmission of a pulse through a system causes pulse dispersion (or spreading). Therefore, the output pulse is generally wider than the input pulse. This system behavior can have serious consequences in communication systems in which information is transmitted by pulse amplitudes. Dispersion (or spreading) causes interference or overlap with neighboring pulses, thereby distorting pulse amplitudes and introducing errors in the received information.

Earlier we saw that if an input x(t) is a pulse of width Tx, then Ty, the width of the output y(t), is

Ty = Tx + Th

This result shows that an input pulse spreads out (disperses) as it passes through a system. Since Th is also the system's time constant or rise time, the amount of spread in the pulse is equal to the time constant (or rise time) of the system.
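The width property Ty = Tx + Th is easy to verify with a discrete convolution. A Python sketch (illustrative; the pulse widths are arbitrary choices, not from the text):

```python
def convolve(x, h, dt):
    """Discrete approximation of (x * h)(t)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj * dt
    return y

def width(sig, dt, tol=1e-12):
    """Duration of a sampled signal: span of its nonzero samples."""
    nz = [k for k, v in enumerate(sig) if abs(v) > tol]
    return (nz[-1] - nz[0] + 1) * dt

dt = 0.01
Tx, Th = 1.0, 0.5
x = [1.0] * int(Tx / dt)    # input pulse of width Tx
h = [2.0] * int(Th / dt)    # impulse response of width Th
y = convolve(x, h, dt)
# The output width is Tx + Th (to within one sample):
print(width(x, dt), width(h, dt), width(y, dt))
```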

2.6.6 Time Constant and Rate of Information Transmission

In pulse communications systems, which convey information through pulse amplitudes, the rate of information transmission is proportional to the rate of pulse transmission. We shall demonstrate that to avoid the destruction of information caused by dispersion of pulses during their transmission through the channel (transmission medium), the rate of information transmission should not exceed the bandwidth of the communications channel.

Since an input pulse spreads out by Th seconds, the consecutive pulses should be spaced Th seconds apart to avoid interference between pulses. Thus, the rate of pulse transmission should not exceed 1/Th pulses/second. But 1/Th = fc, the channel's bandwidth, so that we can transmit pulses through a communications channel at a rate of fc pulses per second and still avoid significant interference between the pulses. The rate of information transmission is therefore proportional to the channel's bandwidth (or to the reciprocal of its time constant).†

The discussion of Secs. 2.6.2 through 2.6.6 shows that the system time constant determines much of a system's behavior: its filtering characteristics, rise time, pulse dispersion, and so on. In turn, the time constant is determined by the system's characteristic roots. Clearly, the characteristic roots and their relative amounts in the impulse response h(t) determine the behavior of a system.

Find the time constant Th, rise time Tr, and cutoff frequency fc for a lowpass system that has impulse response h(t) = te^{-t} u(t). Determine the maximum rate that pulses of 1 second

† Theoretically, a channel of bandwidth fc can transmit correctly up to 2fc pulse amplitudes per second [5]. Our derivation here, being very simple and qualitative, yields only half the theoretical limit. In practice it is not easy to attain the upper theoretical limit.

duration can be transmitted through the system so that interference is essentially avoided between adjacent pulses at the system output.

The system impulse response h(t) = te^{-t} u(t), which looks similar to the impulse response of Fig. 2.21, has a peak value of e^{-1} = 0.3679 at time t0 = 1. According to Eq. (2.47) and using integration by parts, the system time constant is therefore

Th = [∫_0^∞ te^{-t} dt] / h(t0) = 1 / e^{-1} = e = 2.7183

Thus,

Th = 2.7183 s,  Tr = Th = 2.7183 s,  and  fc = 1/Th = 0.3679 Hz

Due to its lowpass nature, this system will spread an input pulse of 1 second to an output with width

Ty = Tx + Th = 1 + 2.7183 = 3.7183 s

To avoid interference between pulses at the output, the pulse transmission rate should be no more than the reciprocal of the output pulse width. That is,

maximum pulse transmission rate = 1/3.7183 = 0.2689 pulses/s

By narrowing the input pulses, the pulse transmission rate could increase up to fc = 0.3679 pulses/s.
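The numbers in this example can be reproduced with a short Python computation (illustrative, not from the text):

```python
import math

# h(t) = t e^{-t} u(t): peak value e^{-1} at t0 = 1, total area 1.
h = lambda t: t * math.exp(-t)
t0, n, T = 1.0, 200_000, 40.0

dt = T / n
area = sum(h(k * dt) for k in range(n + 1)) * dt   # ~ integral over [0, inf) = 1
Th = area / h(t0)                                  # Eq. (2.47): ~ e = 2.7183 s
fc = 1.0 / Th                                      # ~ 0.3679 Hz

Ty = 1.0 + Th                                      # a 1 s pulse spreads to Ty
rate = 1.0 / Ty                                    # ~ 0.2689 pulses/s
print(Th, fc, rate)
```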

2.6.7 The Resonance Phenomenon

Finally, we come to the fascinating phenomenon of resonance. As we have already mentioned several times, this phenomenon is observed when the input signal is identical or is very close to a characteristic mode of the system. For the sake of simplicity and clarity, we consider a first-order system having only a single mode, e^{λt}. Let the impulse response of this system be†

h(t) = Ae^{λt}

and let the input be

x(t) = e^{(λ + ε)t}

The system response y(t) is then given by

y(t) = Ae^{λt} * e^{(λ + ε)t}

† For convenience, we omit multiplying x(t) and h(t) by u(t). Throughout this discussion, we assume that they are causal.


From the convolution table, we obtain

y(t) = A [ (e^{(λ + ε)t} - e^{λt}) / ε ]    (2.49)

Now, as ε → 0, both the numerator and the denominator of the term in the brackets approach zero. Applying L'Hopital's rule to this term yields

lim_{ε → 0} y(t) = At e^{λt}

Clearly, the response does not go to infinity as ε → 0, but it acquires a factor t, which approaches ∞ as t → ∞. If λ has a negative real part (so that it lies in the LHP), e^{λt} decays faster than t grows, and y(t) → 0 as t → ∞. The resonance phenomenon in this case is present, but its manifestation is aborted by the signal's own exponential decay.

This discussion shows that resonance is a cumulative phenomenon, not instantaneous. It builds up linearly with t.† When the mode decays exponentially, the signal decays too fast for resonance to counteract the decay; as a result, the signal vanishes before resonance has a chance to build it up. However, if the mode were to decay at a rate less than 1/t, we should see the resonance phenomenon clearly. This specific condition would be possible if Re λ ≥ 0. For instance, when Re λ = 0 so that λ lies on the imaginary axis of the complex plane (λ = jω), the output becomes

y(t) = At e^{jωt}

Here, the response does go to infinity linearly with t.

For a real system, if λ = jω is a root, λ* = -jω must also be a root; the impulse response is of the form Ae^{jωt} + Ae^{-jωt} = 2A cos ωt. The response of this system to the input cos ωt is 2A cos ωt * cos ωt. The reader can show that this convolution contains a term of the form At cos ωt. The resonance phenomenon is clearly visible. The system response to its characteristic mode increases linearly with time, eventually reaching ∞, as indicated in Fig. 2.24. Recall that when λ = jω, the system is marginally stable. As we have indicated, the full effect of resonance cannot be seen for an asymptotically stable system; only in a marginally stable system does the resonance phenomenon boost the system's response to infinity when the system's input

Figure 2.24 Buildup of system response in resonance.

† If the characteristic root in question repeats r times, the resonance effect increases as t^{r-1}. However, t^{r-1} e^{λt} → 0 as t → ∞ for any value of r, provided Re λ < 0 (λ in the LHP).
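The linear buildup At cos ωt can be seen directly by numerically convolving cos ωt with itself; the closed form of this convolution is (t/2) cos ωt + sin(ωt)/(2ω), and the t-term is the resonance. A Python sketch (illustrative, not from the text):

```python
import math

def self_convolve_cos(w, t, n=20_000):
    """(cos w t) * (cos w t) evaluated at time t, by trapezoidal integration."""
    dt = t / n
    s = 0.0
    for k in range(n + 1):
        tau = k * dt
        wgt = 0.5 if k in (0, n) else 1.0
        s += wgt * math.cos(w * tau) * math.cos(w * (t - tau))
    return s * dt

w = 2 * math.pi
# At integer t, cos(w t) = 1 and sin(w t) = 0, so the value is ~t/2:
for t in (5.0, 10.0, 20.0):
    print(t, self_convolve_cos(w, t))
```

The output grows in proportion to t, the linear buildup sketched in Fig. 2.24.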

is a characteristic mode. But even in an asymptotically stable system, we see a manifestation of resonance if its characteristic roots are close to the imaginary axis so that Re λ is a small, negative value. We can show that when the characteristic roots of a system are -α ± jω0, then the system response to the input e^{jω0 t} or the sinusoid cos ω0 t

y_n^{(k)}(t) represents the kth derivative of y_n(t). The right-hand side contains a single impulse term, δ(t). This is possible only if y_n^{(N-1)}(t) has a unit jump discontinuity at t = 0, so that y_n^{(N)}(t) = δ(t). Moreover, the lower-order terms cannot have any jump discontinuity, because this would mean the presence of the derivatives of δ(t). Therefore y_n(0) = ẏ_n(0) = ··· = y_n^{(N-2)}(0) = 0 (no discontinuity at t = 0), and the N initial conditions on y_n(t) are

y_n^{(N-1)}(0) = 1  and  y_n(0) = ẏ_n(0) = ··· = y_n^{(N-2)}(0) = 0    (2.53)

This discussion means that y_n(t) is the zero-input response of the system S subject to the initial conditions of Eq. (2.53). We now show that for the same input x(t) to both systems, S and S0, their respective outputs y(t) and w(t) are related by

y(t) = P(D) w(t)    (2.54)

To prove this result, we operate on both sides of Eq. (2.52) by P(D) to obtain

Q(D)P(D) w(t) = P(D) x(t)

Comparison of this equation with Eq. (2.2) leads immediately to Eq. (2.54). Now if the input x(t) = δ(t), the output of S0 is y_n(t), and the output of S, according to Eq. (2.54), is P(D)y_n(t). This output is h(t), the unit impulse response of S. Note, however, that because it is an impulse response of a causal system S0, the function y_n(t) is causal. To incorporate this fact, we must represent this function as y_n(t)u(t). Now it follows that h(t), the unit impulse response of the system S, is given by

h(t) = P(D)[y_n(t)u(t)]    (2.55)

where y_n(t) is a linear combination of the characteristic modes of the system subject to the initial conditions (2.53).

The right-hand side of Eq. (2.55) is a linear combination of the derivatives of y_n(t)u(t). Evaluating these derivatives is clumsy and inconvenient because of the presence of u(t): the derivatives will generate an impulse and its derivatives at the origin. Fortunately, when M ≤ N [Eq. (2.11)], we can avoid this difficulty by using the observation in Eq. (2.51), which asserts that at t = 0 (the origin), h(t) = b0 δ(t). Therefore, we need not bother to find h(t) at the origin. This simplification means that instead of deriving P(D)[y_n(t)u(t)], we can derive P(D)y_n(t) and add to it the term b0 δ(t), so that

h(t) = b0 δ(t) + P(D)y_n(t),   t ≥ 0

or

h(t) = b0 δ(t) + [P(D)y_n(t)]u(t)

This expression is valid when M ≤ N [the form given in Eq. (2.11)]. When M > N, Eq. (2.55) should be used.
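As a concrete check of this recipe (the system below is our own example, not the text's): for (D^2 + 3D + 2)y(t) = D x(t), the mode combination y_n(t) = e^{-t} - e^{-2t} satisfies the special ICs of Eq. (2.53) (y_n(0) = 0, ẏ_n(0) = 1); since M < N, b0 = 0, and h(t) = [dy_n/dt] u(t). A Python sketch:

```python
import math

# System: (D^2 + 3D + 2) y = D x, so Q(lam) = (lam + 1)(lam + 2), P(D) = D.
# Step 1: zero-input response with the ICs of Eq. (2.53): y_n(0) = 0,
# y_n'(0) = 1 gives c1 = 1, c2 = -1 in c1 e^{-t} + c2 e^{-2t}.
yn = lambda t: math.exp(-t) - math.exp(-2 * t)
# Step 2: h(t) = b0*delta(t) + [P(D) y_n(t)] u(t); here M < N so b0 = 0,
# and P(D) y_n = dy_n/dt:
h = lambda t: -math.exp(-t) + 2 * math.exp(-2 * t)

# Sanity check: for t > 0, h should match the numerical derivative of y_n.
t = 0.7
num = (yn(t + 1e-6) - yn(t - 1e-6)) / 2e-6
print(h(t), num)   # the two agree
```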

2.9 SUMMARY

This chapter discusses time-domain analysis of LTIC systems. The total response of a linear system is a sum of the zero-input response and zero-state response. The zero-input response is the system


response generated only by the internal conditions (initial conditions) of the system, assuming that the external input is zero; hence the adjective "zero-input." The zero-state response is the system response generated by the external input, assuming that all initial conditions are zero, that is, when the system is in zero state.

Every system can sustain certain forms of response on its own with no external input (zero input). These forms are intrinsic characteristics of the system; that is, they do not depend on any external input. For this reason they are called characteristic modes of the system. Needless to say, the zero-input response is made up of characteristic modes chosen in a combination required to satisfy the initial conditions of the system. For an Nth-order system, there are N distinct modes.

The unit impulse function is an idealized mathematical model of a signal that cannot be generated in practice.† Nevertheless, introduction of such a signal as an intermediary is very helpful in analysis of signals and systems. The unit impulse response of a system is a combination of the characteristic modes of the system‡ because the impulse δ(t) = 0 for t > 0. Therefore, the system response for t > 0 must necessarily be a zero-input response, which, as seen earlier, is a combination of characteristic modes.

The zero-state response (response due to external input) of a linear system can be obtained by breaking the input into simpler components and then adding the responses to all the components. In this chapter we represent an arbitrary input x(t) as a sum of narrow rectangular pulses [staircase approximation of x(t)]. In the limit as the pulse width → 0, the rectangular pulse components approach impulses. Knowing the impulse response of the system, we can find the system response to all the impulse components and add them to yield the system response to the input x(t). The sum of the responses to the impulse components is in the form of an integral, known as the convolution integral. The system response is obtained as the convolution of the input x(t) with the system's impulse response h(t). Therefore, the knowledge of the system's impulse response allows us to determine the system response to any arbitrary input.

LTIC systems have a very special relationship to the everlasting exponential signal e^{st} because the response of an LTIC system to such an input signal is the same signal within a multiplicative constant. The response of an LTIC system to the everlasting exponential input e^{st} is H(s)e^{st}, where H(s) is the transfer function of the system.

If every bounded input results in a bounded output, the system is stable in the bounded-input/bounded-output (BIBO) sense. An LTIC system is BIBO-stable if and only if its impulse response is absolutely integrable. Otherwise, it is BIBO-unstable. BIBO stability is a stability seen from the external terminals of the system. Hence, it is also called external stability or zero-state stability.

In contrast, internal stability (or zero-input stability) examines the system stability from inside. When some initial conditions are applied to a system in zero state, then, if the system eventually returns to zero state, the system is said to be stable in the asymptotic or Lyapunov sense. If the system's response increases without bound, it is unstable. If the system does not go to zero state and the response does not increase indefinitely, the system is marginally stable. The internal stability criterion, in terms of the location of a system's characteristic roots, can be summarized as follows:

† However, it can be closely approximated by a narrow pulse of unit area having a width that is much smaller than the time constant of an LTIC system in which it is used.
‡ There is the possibility of an impulse in addition to the characteristic modes.


1. An LTIC system is asymptotically stable if, and only if, all the characteristic roots are in the LHP. The roots may be repeated or unrepeated.
2. An LTIC system is unstable if, and only if, either one or both of the following conditions exist: (i) at least one root is in the RHP; (ii) there are repeated roots on the imaginary axis.
3. An LTIC system is marginally stable if, and only if, there are no roots in the RHP and there are some unrepeated roots on the imaginary axis.

It is possible for a system to be externally (BIBO) stable but internally unstable. When a system is controllable and observable, its external and internal descriptions are equivalent. Hence, external (BIBO) and internal (asymptotic) stabilities are equivalent and provide the same information. Such a BIBO-stable system is also asymptotically stable, and vice versa. Similarly, a BIBO-unstable system is either a marginally stable or an asymptotically unstable system.

The characteristic behavior of a system is extremely important because it determines not only the system response to internal conditions (zero-input behavior), but also the system response to external inputs (zero-state behavior) and the system stability. The system response to external inputs is determined by the impulse response, which itself is made up of characteristic modes. The width of the impulse response is called the time constant of the system, which indicates how fast the system can respond to an input. The time constant plays an important role in determining such diverse system behaviors as the response time and filtering properties of the system, dispersion of pulses, and the rate of pulse transmission through the system.

REFERENCES

1. Lathi, B. P., Signals and Systems, Berkeley-Cambridge Press, Carmichael, CA, 1987.
2. Nahin, P. J., "Behind the Laplace transform," IEEE Spectrum, vol. 28, no. 3, p. 60, March 1991, doi: 10.1109/6.67288.
3. Mason, S. J., Electronic Circuits, Signals, and Systems, Wiley, New York, 1960.
4. Kailath, T., Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
5. Lathi, B. P. and Ding, Z., Modern Digital and Analog Communication Systems, 4th ed., Oxford University Press, New York, 2009.

PROBLEMS

2.2-1 Determine the constants c1, c2, λ1, and λ2 for each of the following second-order systems, which have zero-input responses of the form y_zir(t) = c1 e^{λ1 t} + c2 e^{λ2 t}.
(a) ÿ(t) + 2ẏ(t) + 5y(t) = ẋ(t) - 5x(t) with ẏ_zir(0) = 2 and y_zir(0) = 0.
(b) ÿ(t) + 2ẏ(t) + 5y(t) = ẋ(t) - 5x(t) with y_zir(0) = 4 and ẏ_zir(0) = -1.
(c) ÿ(t) + 2ẏ(t) = x(t) with y_zir(0) = 1 and ẏ_zir(0) = 2.
(d) (D^2 + 2D + 10){y(t)} = (D^5 - D){x(t)} with y_zir(0) = ẏ_zir(0) = 1.
(e) (D^2 + ...D + ...){y(t)} = (D + 2){x(t)} with y_zir(0) = 3 and ÿ_zir(0) = -8. [Caution: The second IC is given in terms of the second derivative, not the first derivative.]
(f) ...ÿ(t) + 4ẏ(t) + ...y(t) = 2x(t) - 4...x(t) with y_zir(0) = 3 and ÿ_zir(0) = -15. [Caution: The second IC is given in terms of the second derivative, not the first derivative.]

2.2-2 Consider a linear time-invariant system with input x(t) and output y(t) that is described by the differential equation

(D + 1)(D - 1)²{y(t)} = (D⁵ - 1){x(t)}

Furthermore, assume y(0) = ẏ(0) = ÿ(0) = 1.


2.2-3

CHAPTER 2

TIME-DOMAIN ANALYSIS OF CONTINUOUS-TIME SYSTEMS

(a) What is the order of this system? (b) What are the characteristic roots of this system? (c) Determine the zero-input response y_zir(t). Simplify your answer.

2.2-9

A real LTIC system with input x(t) and output y(t) is described by the following constant-coefficient linear differential equation:

(D³ + 9D){y(t)} = (2D³ + 1){x(t)}

(a) What is the characteristic equation of this system? (b) What are the characteristic modes of this system? (c) Assuming y_zir(0) = 4, ẏ_zir(0) = -18, and ÿ_zir(0) = 0, determine this system's zero-input response y_zir(t). Simplify y_zir(t) to include only real terms (i.e., no j's should appear in your answer).

2.2-10 2D²(D + 1)y(t) = (D + 2)x(t) with y0(0-) = 2, ẏ0(0-) = -1, and ÿ0(0-) = 5.

2.2-11 A system is described by a constant-coefficient linear differential equation and has zero-input response given by y0(t) = 2e^(-t) + 3.
(a) Is it possible for the system's characteristic equation to be λ + 1 = 0? Justify your answer.
(b) Is it possible for the system's characteristic equation to be √3(λ² + λ) = 0? Justify your answer.
(c) Is it possible for the system's characteristic equation to be λ(λ + 1)² = 0? Justify your answer.

2.2-12 Consider the circuit of Fig. P2.2-12. Using operator notation, this system can be described as (D + a1){y(t)} = (b0·D + b1){x(t)}.
(a) Determine the constants a1, b0, and b1 in terms of the system components R, Rf, and C.
(b) Assume that R = 300 kΩ, Rf = 1.2 MΩ, and C = 5 µF. What is the zero-input response y0(t) of this system, assuming vC(0) = 1 V?

(a) Find the characteristic polynomial, characteristic equation, characteristic roots, and characteristic modes of this system. (b) Find y0(t), the zero-input component of the response y(t) for t ≥ 0, if the initial conditions are y0(0-) = 2 and ẏ0(0-) = -1.

2.2-5 Repeat Prob. 2.2-4 for

(D + 1)(D² + 5D + 6)y(t) = Dx(t)

with y0(0-) = 4, ẏ0(0-) = 3, and ÿ0(0-) = -1.

2.2-4 An LTIC system is specified by the equation

(D² + 5D + 6)y(t) = (D + 1)x(t)

Repeat Prob. 2.2-4 for

(D² + 4D + 4)y(t) = Dx(t) and y0(0-) = 3, ẏ0(0-) = -4.

2.2-6 Repeat Prob. 2.2-4 for

D(D + 1)y(t) = (D + 2)x(t)

and y0(0-) = ẏ0(0-) = 1.

2.2-7 Repeat Prob. 2.2-4 for

(D² + 9)y(t) = (3D + 2)x(t)

with y0(0-) = 0, ẏ0(0-) = 6.

Figure P2.2-12

2.2-8 Repeat Prob. 2.2-4 for

(D² + 4D + 13)y(t) = 4(D + 2)x(t)

with y0(0-) = 5, ẏ0(0-) = 15.98.

2.3-1

Detennine the characteristic equation, char acter­ istic modes, and impulse response h(t) for e ach of the following real LTIC systems. Since the systems are real, express each h(t) using onlY r real terms (i.e., no j's should appear in you answers).


Problems

2.3-2
(a) (D² + 1){y(t)} = 2D{x(t)}
(b) (D³ + D){y(t)} = (2D³ + 1){x(t)}
(c) ÿ(t) + 2ẏ(t) + 5y(t) = 8x(t)

(c) Determine the impulse response h(t) of this system.

2.4-1

F ind the unit impulse response of a system specified by the equation

(D² + 4D + 3)y(t) = (D + 5)x(t)

2.3-3 Repeat Prob. 2.3-2 for

(D² + 5D + 6)y(t) = (D² + 7D + 11)x(t)

2.3-4 Repeat Prob. 2.3-2 for the first-order allpass filter specified by the equation

(D + 1)y(t) = -(D - 1)x(t)

2.4-2 Consider signals h(t) = u(t + 3) - 2u(t + 1) + u(t - 1) and x(t) = cos(t)[u(t - π/2) - u(t - 3π/2)]. Let y(t) = x(t) * h(t).
(a) Determine the last time t_last that y(t) is nonzero. That is, find the smallest value t_last such that y(t) = 0 for all t > t_last.
(b) Determine the approximate time t_max where y(t) is a maximum.

2.3-5 Find the unit impulse response of an LTIC system specified by the equation

(D² + 6D + 9)y(t) = (2D + 9)x(t)

2.3-6 Determine and plot the unit impulse response h(t) of the op-amp circuit of Fig. P2.2-12, assuming that R = 300 kΩ, Rf = 1.2 MΩ, and C = 5 µF.

2.3-7

A causal LTIC system with input x(t) and output y(t) is described by the constant-coefficient integral equation

y(t) + ∫3y(t)dt + ∫∫2y(t)dt = ∫∫x(t)dt - ∫∫∫x(t)dt

(a) Express this system as a constant-coefficient linear differential equation in standard operator form. (b) Determine the characteristic modes of this system.

2.4-3 Consider signals h(t) = -u(t + 2) + 3u(t - 1) - 2u(t - [?]) and x(t) = sin(t)[u(t + 2π) - u(t + π)]. Determine the approximate time t_min where y(t) = x(t) * h(t) is a minimum. Note, the minimum value of y(t) ≠ 0!

Let f(t) = h1(t) * h2(t), where h1(t) and h2(t) are shown in Fig. P2.4-1. In the following, use the graphical convolution procedure where you flip and shift h1(t).
(a) Plot h1(τ) and h2(t - τ) as functions of τ. Clearly label the plots, including necessary function parameterizations.
(b) Determine the (piecewise) regions of f(t) and set up the corresponding integrals that describe f(t) in those regions. Do not evaluate the integrals, only set them up!
(c) Determine f(1), which is f(t) evaluated at t = 1. Provide a number, not a formula.

2.4-4 An LTIC system has impulse response h(t) = 3u(t - 2). For input x(t) shown in Fig. P2.4-4, use the graphical convolution procedure to determine y_zsr(t) = h(t) * x(t). Accurately sketch y_zsr(t). When solving for y_zsr(t), flip and shift x(t) and explicitly show all integration steps, even if apparently trivial!

Figure P2.4-1

Figure P2.4-4

2.4-6 Repeat Prob. 2.4-5 using the signals of Fig. P2.4-6, rather than those of Fig. P2.4-5. Be careful! Figure P2.4-6 shows h(t - 1), not h(t).

Figure P2.4-6

2.4-5

Suppose an LTIC system has impulse response h(t) and input x(t) = u(t). Figure P2.4-5 shows x(t) and h(t + 1), respectively. Be careful! Figure P2.4-5 shows h(t + 1), not h(t). Use the graphical convolution procedure to determine y_zsr(t) = x(t) * h(t). Accurately sketch y_zsr(t). When solving for y_zsr(t), flip and shift x(t) and explicitly show all integration steps.

Figure P2.4-5

2.4-7 Suppose an LTIC system has impulse response h(t) and input x(t), both shown in Fig. P2.4-7.
(a) Is system h(t) causal? Mathematically justify your answer.
(b) Use the graphical convolution procedure to determine y_zsr(t) = x(t) * h(t). Accurately sketch y_zsr(t). When solving for y_zsr(t), flip and shift x(t) and explicitly show all integration steps.

Figure P2.4-7

2.4-8 An LTIC system has impulse response h(t), as shown in Fig. P2.4-8. Let t have units of seconds. Let the input be x(t) = u(-t - 2) and designate the output as y_zsr(t) = x(t) * h(t).
(a) Use the graphical convolution procedure where h(t) is flipped and shifted to determine y_zsr(t). Accurately plot your result.

Figure P2.4-8

Problems (b) Use the graphical convolution procedure where x(t) is flipped and shifted to deter­ mine Y1s,(t). Accurately plot your result.

2.4-9 If c(t) = x(t) * g(t), then show that A_c = A_x·A_g, where A_x, A_g, and A_c are the areas under x(t), g(t), and c(t), respectively. Verify this area property of convolution in Exs. 2.10 and 2.12.

2.4-10 If x(t) * g(t) = c(t), then show that x(at) * g(at) = (1/|a|)c(at). This time-scaling property of convolution states that if both x(t) and g(t) are time-scaled by a, their convolution is also time-scaled by a (and multiplied by 1/|a|).

2.4-11 Show that the convolution of an odd and an even function is an odd function and the convolution of two odd or two even functions is an even function. [Hint: Use the time-scaling property of convolution in Prob. 2.4-10.]

2.4-12
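The area property of Prob. 2.4-9 is easy to verify numerically before proving it; in the sketch below the two signals are arbitrary one-sided exponentials of my own choosing:

```python
import numpy as np

# Numeric check of the area property: if c = x * g, then
# area(c) = area(x) * area(g). Signals are arbitrary picks.
dt = 1e-3
t = np.arange(0, 10, dt)
x = np.exp(-t)                    # area ~ 1
g = np.exp(-2 * t)                # area ~ 1/2
c = np.convolve(x, g) * dt        # full convolution on the grid
area_x, area_g, area_c = x.sum() * dt, g.sum() * dt, c.sum() * dt
print(area_c, area_x * area_g)    # nearly equal
```

The identity holds exactly for the discrete sums because every product x[i]·g[j] appears exactly once in the full convolution output.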

2.4-18 Repeat Prob. 2.4-16 for h(t) = (1 - 2t)e^(-2t)u(t) and input x(t) = u(t).

2.4-19 Repeat Prob. 2.4-16 for h(t) = 4e^(-2t)·cos(3t)·u(t) and each of the following inputs x(t): (a) u(t) (b) e^(-t)u(t)

2.4-20 Repeat Prob. 2.4-16 for h(t) = e^(-t)u(t) and each of the following inputs x(t): (a) e^(-2t)u(t) (b) e^(-2(t-3))u(t) (c) e^(-2t)u(t - 3) (d) the gate pulse depicted in Fig. P2.4-20; also provide a sketch of y(t).

Suppose an LTIC system has impulse response h(t) = (1 - t)[u(t) - u(t - 1)] and input x(t) = u(-t - 1) + u(t - 1). Use the graphical convolution procedure to determine y_zsr(t) = x(t) * h(t). Accurately sketch y_zsr(t). When solving for y_zsr(t), flip and shift h(t), explicitly show all integration steps, and simplify your answer.

2.4-13 Using direct integration, find e^(-at)u(t) * e^(-bt)u(t).

2.4-14 Using direct integration, find u(t) * u(t), e^(-at)u(t) * e^(-at)u(t), and tu(t) * u(t).

2.4-15 Using direct integration, find sin(t)u(t) * u(t) and cos(t)u(t) * u(t).

2.4-16 The unit impulse response of an LTIC system is

Figure P2.4-20

2.4-21

h(t) = e^(-t)u(t)

Find this system's (zero-state) response y(t) if the input x(t) is: (a) u(t) (b) e^(-t)u(t) (c) e^(-2t)u(t) (d) sin(3t)u(t). Use the convolution table (Table 2.1) to find your answers.

2.4-17 Repeat Prob. 2.4-16 for h(t) = [?] and if the input x(t) is: (a) u(t) (b) e^(-t)u(t) (c) e^(-2t)u(t)
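For exponential pairs such as those in Probs. 2.4-13 and 2.4-16, direct integration gives the closed form e^(-at)u(t) * e^(-bt)u(t) = [(e^(-at) - e^(-bt))/(b - a)]u(t) for a ≠ b. A numerical sketch (the decay rates a and b are arbitrary choices) confirms it:

```python
import numpy as np

# Compare a Riemann-sum convolution against the closed form
# e^(-at)u(t) * e^(-bt)u(t) = (e^(-at) - e^(-bt))/(b - a), a != b.
a, b = 1.0, 2.0
dt = 1e-3
t = np.arange(0, 10, dt)
y_num = np.convolve(np.exp(-a * t), np.exp(-b * t))[:t.size] * dt
y_closed = (np.exp(-a * t) - np.exp(-b * t)) / (b - a)
print(np.max(np.abs(y_num - y_closed)))   # small discretization error
```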


A first-order allpass filter impulse response is given by h(t) = -δ(t) + 2e^(-t)u(t).
(a) Find the zero-state response of this filter for the input e^t·u(-t).
(b) Sketch the input and the corresponding zero-state response.

2.4-22

Figure P2.4-22 shows the input x(t) and the impulse response h(t) for an LTIC system. Let the output be y(t).
(a) By inspection of x(t) and h(t), find y(-1), y(0), y(1), y(2), y(3), y(4), y(5), and y(6). Thus, by merely examining x(t) and h(t), you are required to see what the result of convolution yields at t = -1, 0, 1, 2, 3, 4, 5, and 6.
(b) Find the system response to the input x(t).

Figure P2.4-22

2.4-23

The zero-state response of an LTIC system to an input x(t) = 2e^(-2t)u(t) is y(t) = [4e^(-2t) + 6e^(-3t)]u(t). Find the impulse response of the system. [Hint: We have not yet developed a method of finding h(t) from the knowledge of the input and the corresponding output. Knowing the form of x(t) and y(t), you will have to make the best guess of the general form of h(t).]

2.4-24

Sketch the functions x(t) = 1/(t² + 1) and u(t). Now find x(t) * u(t) and sketch the result.

2.4-25

Figure P2.4-25 shows x(t) and g(t). Find and sketch c(t) = x(t) * g(t).

Figure P2.4-25

2.4-26 Find and sketch c(t) = x(t) * g(t) for the functions depicted in Fig. P2.4-26.

2.4-27 Find and sketch c(t) = x1(t) * x2(t) for the pairs of functions illustrated in Fig. P2.4-27.

2.4-28 Use Eq. (2.37) to find the convolution of x(t) and w(t), shown in Fig. P2.4-28.

Figure P2.4-26

2.4-29 Determine H(s), the transfer function of an ideal time delay of T seconds. Find your answer by two methods: using Eq. (2.39) and using Eq. (2.40).

2.4-30 Determine y(t) = x(t) * h(t) for the signals depicted in Fig. P2.4-30.

2.4-31 Two linear time-invariant systems, each with impulse response h(t), are connected in cascade. Refer to Fig. P2.4-31. Given input x(t) = u(t), determine y(1). That is, determine the step response at time t = 1 for the cascaded system shown.

2.4-32 Consider the electric circuit shown in Fig. P2.4-32.
(a) Determine the differential equation that relates the input x(t) to output y(t). Recall that i_C(t) = C·dv_C(t)/dt and v_L(t) = L·di_L(t)/dt.
(b) Find the characteristic equation for this circuit, and express the root(s) of the characteristic equation in terms of L and C.
(c) Determine the zero-input response given an initial capacitor voltage of 1 volt and an initial inductor current of zero amps. That is, find y0(t) given vC(0) = 1 V and

Figure P2.4-27 (panels (a)-(h))

Figure P2.4-28 (panels (a) and (b))

i_L(0) = 0 A. [Hint: The coefficient(s) in y0(t) are independent of L and C.]
(d) Plot y0(t) for t ≥ 0. Does the zero-input response, which is caused solely by initial conditions, ever "die out"?
(e) Determine the total response y(t) to the input x(t) = e^(-t)u(t). Assume an initial inductor current of i_L(0-) = 0 A, an initial capacitor voltage of v_C(0-) = 1 V, L = 1 H, and C = 1 F.

Figure P2.4-30

Figure P2.4-31

Figure P2.4-32

Figure P2.4-33 (panels (a) and (b))

2.4-33

Two LTIC systems have impulse response functions given by h1(t) = (1 - t)[u(t) - u(t - 1)] and h2(t) = t[u(t + 2) - u(t - 2)].
(a) Carefully sketch the functions h1(t) and h2(t).
(b) Assume that the two systems are connected in parallel, as shown in Fig. P2.4-33a. Carefully plot the equivalent impulse response function, h_p(t).
(c) Assume that the two systems are connected in cascade, as shown in Fig. P2.4-33b. Carefully plot the equivalent impulse response function, h_s(t).

2.4-34

Consider the circuit shown in Fig. P2.4-34. (a) Find the output y(t) given an initial capac­ itor voltage of y(O) = 2 volts and an input x(t) = u(t). (b) Given an input x(t ) = u(t - 1), determine the initial capacitor voltage y(O) so that the output y(t) is 0.5 volt at t = 2 seconds.

Figure P2.4-34

2.4-35 An analog signal is given by x(t) = t[u(t) - u(t - 1)], as shown in Fig. P2.4-35. Determine and plot y(t) = x(t) * x(2t).

2.4-37 An LTI system has step response given by g(t) = e^(-t)u(t) - e^(-2t)u(t). Determine the output y(t) of this system given an input x(t) = δ(t - π) - cos(√3)u(t).

2.4-38

2.4-39

Figure P2.4-35

2.4-36 Consider the electric circuit shown in Fig. P2.4-36.
(a) Determine the differential equation that relates the input current x(t) to output current y(t). Recall that v_L(t) = L·di_L(t)/dt.
(b) Find the characteristic equation for this circuit, and express the root(s) of the characteristic equation in terms of L1, L2, and R.
(c) Determine the zero-input response given initial inductor currents of 1 ampere each. That is, find y0(t) given i_L1(0) = i_L2(0) = 1 A.

2.4-38 The periodic signal x(t) shown in Fig. P2.4-38 is input to a system with impulse response function h(t) = t[u(t) - u(t - 1.5)], also shown in Fig. P2.4-38. Use convolution to determine the output y(t) of this system. Plot y(t) over (-3 ≤ t ≤ 3).

2.4-39 Consider the electric circuit shown in Fig. P2.4-39.
(a) Determine the differential equation relating input x(t) to output y(t).
(b) Determine the output y(t) in response to the input x(t) = 4te^(-3t/2)u(t). Assume component values of R = 1 Ω, C1 = 1 F, and C2 = 2 F, and initial capacitor voltages of V_C1 = 2 V and V_C2 = 1 V.

Figure P2.4-39

2.4-40

2.4-41

Figure P2.4-36

Figure P2.4-38

A cardiovascular researcher is attempting to model the human heart. He has recorded ventricular pressure, which he believes corresponds to the heart's impulse response function h(t), as shown in Fig. P2.4-41. Comment on the function

An LTIC system has impulse response h(t) = 3e^(-|t|).
(a) Is the system causal? Mathematically justify your answer.
(b) Determine the zero-state response of this system if the input is x(t) = u(2 - t).

2.4-44

h(t) shown in Fig. P2.4-41. Can you establish any system properties, such as causality or stability? Do the data suggest any reason to suspect that the measurement is not a true impulse response?

Consider the circuit shown in Fig. P2.4-44. This circuit functions as an integrator. Assume ideal op-amp behavior and recall that i_C(t) = C·dv_C(t)/dt.

Figure P2.4-44

(a) Determine the differential equation that relates the input x(t) to the output y(t).
(b) This circuit does not behave well at dc. Demonstrate this by computing the zero-state response y(t) for a unit step input x(t) = u(t).

Figure P2.4-41

2.4-42 Consider an integrator system, y(t) = ∫_{-∞}^{t} x(τ)dτ.
(a) What is the unit impulse response h_i(t) of this system?
(b) If two such integrators are put in parallel, what is the resulting impulse response h_p(t)?

2.4-45

(c) If two such integrators are put in series, what is the resulting impulse response h_s(t)?

2.4-43 The autocorrelation of a function x(t) is given by r_xx(t) = ∫_{-∞}^{∞} x(τ)x(τ - t)dτ. This equation is computed in a manner nearly identical to convolution.
(a) Show r_xx(t) = x(t) * x(-t).
(b) Determine and plot r_xx(t) for the signal x(t) depicted in Fig. P2.4-43. [Hint: r_xx(t) = r_xx(-t).]
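The autocorrelation of Prob. 2.4-43 can be explored numerically; the sketch below uses a unit-height rectangular pulse as an arbitrary stand-in for x(t) and checks the evenness property r_xx(t) = r_xx(-t):

```python
import numpy as np

# Numeric sketch of r_xx(t) = x(t) * x(-t) on a grid.
dt = 1e-3
t = np.arange(-2, 2, dt)
x = ((t >= 0) & (t < 1)).astype(float)      # x(t) = u(t) - u(t-1)
r = np.correlate(x, x, mode="full") * dt    # autocorrelation samples
lags = (np.arange(r.size) - (x.size - 1)) * dt
# r is even and peaks at zero lag with value equal to the
# energy of x(t) (here, 1).
```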

Figure P2.4-43

Derive the result in Eq. (2.37) in another way. As mentioned in Ch. 1 (Fig. 1.27b), it is possible to express an input in terms of its step components, as shown in Fig. P2.4-45. Find the system response as a sum of the responses to the step components of the input.

Figure P2.4-45

2.4-46 Show that an LTIC system's response to an everlasting sinusoid cos(ω0·t) is given by

y(t) = |H(jω0)| cos[ω0·t + ∠H(jω0)]

where H(jω) = ∫_{-∞}^{∞} h(t)·e^(-jωt)·dt

assuming the integral on the right-hand side exists.

2.4-47 A line charge is located along the x axis with a charge density Q(x) coulombs per meter. Show that the electric field E(x) produced by this line charge at a point x is given by E(x)

t). Is this system BIBO-stable? Mathematically justify your answer. 2.5-4

Consider an LTIC system with unit impulse response h(t) = [?]·u(t - T).
(a) Determine, if possible, the value(s) of T for which this system is causal.
(b) Determine, if possible, the value(s) of T for which this system is BIBO-stable. Justify all answers mathematically.

2.5-5

You are given the choice of a system that is guaranteed internally stable or a system that is guaranteed externally stable. Which do you choose? Why?

2.5-6

For a certain LTIC system, the impulse response

= Q(x) * h(x)

where h(x) = 1/(4πεx²). [Hint: The charge over an interval Δτ located at τ = nΔτ is Q(nΔτ)Δτ. Also, by Coulomb's law, the electric field E(r) at a distance r from a charge of q coulombs is given by E(r) = q/(4πεr²).]

2.4-48 A system is called complex if a real-valued input can produce a complex-valued output. Suppose a linear time-invariant complex system has impulse response h(t) = j[u(-t + 2) -

2.5-7

In Sec. 2.5, we demonstrated that for an LTIC system, the condition of Eq. (2.45) is sufficient for BIBO stability. Show that this is also a necessary condition for BIBO stability in such systems. In other words, show that if Eq. (2.45) is not satisfied, then there exists a bounded input that produces an unbounded output. [Hint: Assume that a system exists for which h(t) violates Eq. (2.45) and yet produces an output that is bounded for every bounded input. Establish the contradiction in this statement by considering an input x(t) defined by x(t1 - τ) = 1 when h(τ) ≥ 0 and x(t1 - τ) = -1 when h(τ) < 0, where t1 is some fixed instant.]

2.5-8

An analog LTIC system with impulse response function h(t) = u(t + 2) - u(t - 2) is presented with an input x(t) = t(u(t) - u(t- 2)). (a) Determine and plot the system output y(t) = x(t) * h(t). (b) Is this system stable? Is this system causal? Justify your answers.

2.5-9

A system has an impulse response function shaped like a rectangular pulse, h(t) = u(t) - u(t -1). Is the system stable? Is the system causal?

2.5-10

A continuous-time LTI system has impulse response function h(t) = Σ_{i=0}^{∞} (0.5)^i·δ(t - i).

(a) Is this system causal? Explain.
(b) Use convolution to determine the zero-state response y1(t) of this system in response to the unit-duration pulse x1(t) = u(t) - u(t - 1).
(c) Using the result from part (b), determine the zero-state response y2(t) in response to x2(t) = 2u(t - 1) - u(t - 2) - u(t - 3).

h(t) = u(t)

(a) Determine the characteristic root(s) of this system. (b) Is this system asymptotically or marginally stable, or is it unstable? (c) Is this system BIBO-stable? (d) What can this system be used for?

u(-t)].

2.5-1 Explain, with reasons, whether the LTIC systems described by the following equations are (i) stable or unstable in the BIBO sense; (ii) asymptotically stable, unstable, or marginally stable. Assume that the systems are controllable and observable.
(a) (D² + 8D + 12)y(t) = (D - 1)x(t)
(b) D(D² + 3D + 2)y(t) = (D + 5)x(t)
(c) D²(D² + 2)y(t) = x(t)
(d) (D + 1)(D² - 6D + 5)y(t) = (3D + 1)x(t)

2.5-2 Repeat Prob. 2.5-1 for the following:
(a) (D + 1)(D² + 2D + 5)²y(t) = x(t)
(b) (D + 1)(D² + 9)y(t) = (2D + 9)x(t)
(c) (D + 1)(D² + 9)²y(t) = (2D + 9)x(t)
(d) (D² + 1)(D² + 4)(D² + 9)y(t) = 3Dx(t)

2.5-3 Consider an LTIC system with unit impulse response h(t) = [e^t·cos(t/2) + (1/2)·sin(πt)]·u(2 -


2.6-1

(a) Is the system causal? Prove your answer. (b) Is the system stable? Prove your answer.

Data at a rate of 1 million pulses per second are to be transmitted over a certain communications channel. The unit step response g(t) for this channel is shown in Fig. P2.6-1.
(a) Can this channel transmit data at the required rate? Explain your answer.
(b) Can an audio signal consisting of components with frequencies up to 15 kHz be transmitted over this channel with reasonable fidelity?

(a) Determine the width (duration) of the received pulse. (b) Find the maximum rate at which these pulses can be transmitted over this channel without interference between the successive pulses. 2.6-5

A first-order LTIC system has a characteristic root λ = -10⁴.
(a) Determine T_r, the rise time of its unit step input response.
(b) Determine the bandwidth of this system.
(c) Determine the rate at which the information pulses can be transmitted through this system.

2.6-6

A lowpass system with a 6 MHz cutoff frequency needs to transmit data pulses that are [?] ns wide. Determine a suitable transmission rate F_rate (pulses/s) for this system.

2.6-7

Sketch an impulse response h(t) of a noncausal lowpass system that has an approximate cutoff frequency of 5 kHz. Since many solutions are possible, be sure to properly justify your answer.

2.6-8

Two LTIC transmission channels are available: the first has impulse response h1(t) = u(t) - u(t - 1) and the second has impulse response h2(t) = δ(t) + 0.5δ(t - 1) + 0.25δ(t - 2). Explain which channel is better suited for the transmission of high-speed digital data (pulses).

2.6-9

Consider a linear time-invariant system with impulse response h(t) shown in Fig. P2.6-9. Outside the interval shown, h(t) = 0.
(a) What is the rise time T_r of this system? Remember, rise time is the time between the application of a unit step and the moment at which the system has "fully" responded.
(b) Suppose h(t) represents the response of a communication channel. What conditions might cause the channel to have such an impulse response? What is the maximum average number of pulses per unit time that can be transmitted without causing interference? Justify your answer.
(c) Determine the system output y(t) = x(t) * h(t) for x(t) = [u(t - 2) - u(t)]. Accurately sketch y(t) over (0 ≤ t ≤ 10).

Figure P2.6-1

2.6-2 Determine a frequency ω that will cause the input x(t) = cos(ωt) to produce a strong response when applied to the system described by (D² + 2D + 13/4){y(t)} = x(t). Carefully explain your choice.

2.6-3 Figure P2.6-3 shows the impulse response h(t) of a lowpass LTIC system. Determine the peak amplitude A and time constant T_h so that the rectangular impulse response ĥ(t) is an appropriate approximation of h(t). The two graphs of Fig. P2.6-3 are not necessarily drawn to the same scale.
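For a second-order system like that of Prob. 2.6-2, a frequency sweep of |H(jω)| locates the input frequency of strongest response. A sketch (the grid choices are mine):

```python
import numpy as np

# Sweep |H(jw)| = 1/|(jw)^2 + 2(jw) + 13/4| for the system of
# Prob. 2.6-2 to locate the frequency of strongest response.
w = np.linspace(0.01, 5, 5000)
H_mag = 1.0 / np.abs((1j * w) ** 2 + 2 * (1j * w) + 13.0 / 4.0)
w_peak = w[np.argmax(H_mag)]
# The characteristic roots are -1 +/- j1.5; damping pulls the
# magnitude peak below the damped frequency 1.5 rad/s, to
# sqrt(13/4 - 2) = sqrt(5)/2 ~ 1.118 rad/s.
print(w_peak)
```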

Figure P2.6-3

2.6-4

A certain communication channel has a band­ width of 10 kHz. A pulse of 0.5 ms duration is transmitted over this channel.


roots by hand and then verify your answers using MATLAB's roots command. (b) From Eq. (2.17), computing h(t) requires a signal y_n(t) = Σ_k c_k·e^(λ_k·t). First, determine a matrix representation of the system of equations needed to solve for the four coefficients c_k. Second, write MATLAB code that computes the length-4 column vector of coefficients c_k.

2.7-2 Define x(t) = 2u(t + [?]) - 2u(t). Further, define the periodic signal h1(t) as h1(t) = t for 0 ≤ t < 1 and h1(t) = h1(t + 1) otherwise. Determine the

Figure P2.6-9

2.6-10 A lowpass LTIC system has impulse response

Figure P2.7-3

2.7-3 Consider the circuit shown in Fig. P2.7-3. Assume ideal op-amp behavior and recall that i_C(t) = C·dv_C(t)/dt. Without a feedback resistor R_f, the circuit functions as an integrator and is unstable, particularly

Figure P2.7-4

at dc. A feedback resistor R_f corrects this problem and results in a stable circuit that functions as a "lossy" integrator.
(a) Determine the differential equation that relates the input x(t) to the output y(t). What is the corresponding characteristic equation?
(b) To demonstrate that this "lossy" integrator is well behaved at dc, determine the zero-state response y(t) given a unit step input x(t) = u(t).
(c) Investigate the effect of 10% resistor and 25% capacitor tolerances on the system's characteristic root(s).

2.7-4

Consider the electric circuit shown in Fig. P2.7-4. Let C1 = C2 = 10 µF, R1 = R2 = 100 kΩ, and R3 = 50 kΩ.

(a) Determine the corresponding differential equation describing this circuit. Is the circuit BIBO-stable?
(b) Determine the zero-input response y0(t) if the output of each op amp initially reads 1 volt.
(c) Determine the zero-state response y(t) to a step input x(t) = u(t).
(d) Investigate the effect of 10% resistor and 25% capacitor tolerances on the system's

characteristic roots.

2.7-5

Input x(t) = 3[u(t) - u(t - 1)] + 2[u(t - 2) - u(t - 3)] is applied to a lowpass LTIC system with impulse response h(t) = (4 - t)[u(t) - u(t - 2)] to produce output y(t) = x(t) * h(t). Modify program CH2MP4.m in Sec. 2.7.4 to perform the graphical convolution procedure to produce a plot of y(t).

CHAPTER 3: SIGNAL REPRESENTATION BY FOURIER SERIES

This chapter is important for a basic understanding of signal representation and signal comparison. In Ch. 2, we expressed an arbitrary input x(t) as a sum of its impulse components. The LTI system (zero-state) response to input x(t) was obtained by summing the system's responses to all these components in the form of the convolution integral. There are infinite possible ways of expressing an input x(t) in terms of other signals. For this reason, the problem of signal representation in terms of a set of signals is very important in the study of signals and systems. This chapter addresses the issue of representing a signal as a sum of its components. The problem is similar to that of representing a vector in terms of its components.

3.1 SIGNALS AS VECTORS

There is a perfect analogy between signals and vectors; the analogy is so strong that the term analogy understates the reality. Signals are not just like vectors. Signals are vectors! A vector can be represented as a sum of its components in a variety of ways, depending on the choice of coordinate system. A signal can also be represented as a sum of its components in a variety of ways. Let us begin with some basic vector concepts and then apply these concepts to signals.

3.1.1 Component of a Vector

A vector is specified by its magnitude and its direction. We shall denote all vectors by boldface. For example, x is a certain vector with magnitude or length |x|. For the two vectors x and y shown in Fig. 3.1, we define their dot (inner or scalar) product as

x · y = |x||y| cos θ

where θ is the angle between these vectors. Using this definition, we can express |x|, the length of a vector x, as

|x|² = x · x


Figure 3.1 Component (projection) of a vector along another vector.

Let the component of x along y be cy, as depicted in Fig. 3.1. Geometrically, the component of x along y is the projection of x on y and is obtained by drawing a perpendicular from the tip of x to the vector y, as illustrated in Fig. 3.1. What is the mathematical significance of a component of a vector along another vector? As seen from Fig. 3.1, the vector x can be expressed in terms of vector y as

x = cy + e

However, this is not the only way to express x in terms of y. From Fig. 3.2, which shows two of the infinite other possibilities, we have

x = c1·y + e1   and   x = c2·y + e2

In each of these three representations, x is represented in terms of y plus another vector called the error vector. If we approximate x by cy,

x ≈ cy

the error in the approximation is the vector e = x - cy. Similarly, the errors in the approximations in these drawings are e1 (Fig. 3.2a) and e2 (Fig. 3.2b). What is unique about the approximation in Fig. 3.1 is that the error vector is the smallest. We can now define mathematically the component of a vector x along vector y to be cy, where c is chosen to minimize the length of the error vector e = x - cy.

Now, the length of the component of x along y is |x| cos θ. But it is also c|y|, as seen from Fig. 3.1. Therefore,

c|y| = |x| cos θ

Multiplying both sides by |y| yields

c|y|² = |x||y| cos θ = x · y

Therefore,

c = (x · y)/(y · y) = (1/|y|²)(x · y)    (3.1)

From Fig. 3.1, it is apparent that when x and y are perpendicular, or orthogonal, then x has a zero component along y; consequently, c = 0. Keeping an eye on Eq. (3.1), we therefore define x and y to be orthogonal if the inner (scalar or dot) product of the two vectors is zero, that is, if

x · y = 0
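Equation (3.1) and the orthogonality of the error vector can be illustrated numerically; the vectors below are arbitrary examples:

```python
import numpy as np

# Eq. (3.1): the component of x along y is cy with
# c = (x . y)/(y . y); the error e = x - cy is orthogonal to y.
x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])
c = np.dot(x, y) / np.dot(y, y)
e = x - c * y
print(c)                 # component of x along y
print(np.dot(e, y))      # ~0: error vector is orthogonal to y
```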

Figure 3.2 Approximation of a vector in terms of another vector.

3.1.2 Component of a Signal

The concept of a vector component and orthogonality can be extended to signals. Consider the problem of approximating a real signal x(t) in terms of another real signal y(t) over an interval (t1, t2):

x(t) ≈ c·y(t)

The error e(t) in this approximation is

e(t) = x(t) - c·y(t) for t1 < t < t2, and e(t) = 0 otherwise.

We now select a criterion for the "best approximation." We know that the signal energy is one possible measure of a signal's size. For the best approximation, we shall use the criterion that minimizes the size, or energy, of the error signal e(t) over the interval (t1, t2). This energy E_e is given by

E_e = ∫_{t1}^{t2} e²(t) dt = ∫_{t1}^{t2} [x(t) - c·y(t)]² dt

Note that the right-hand side is a definite integral with t as the dummy variable. Hence, E_e is a function of the parameter c (not t), and E_e is minimum for some choice of c. To minimize E_e, a necessary condition is

dE_e/dc = 0

or

d/dc [ ∫_{t1}^{t2} [x(t) - c·y(t)]² dt ] = 0

Expanding the squared term inside the integral, we obtain

d/dc [ ∫_{t1}^{t2} x²(t) dt ] - d/dc [ 2c ∫_{t1}^{t2} x(t)y(t) dt ] + d/dc [ c² ∫_{t1}^{t2} y²(t) dt ] = 0

from which we get

-2 ∫_{t1}^{t2} x(t)y(t) dt + 2c ∫_{t1}^{t2} y²(t) dt = 0

and therefore

c = [ ∫_{t1}^{t2} x(t)y(t) dt ] / [ ∫_{t1}^{t2} y²(t) dt ] = (1/E_y) ∫_{t1}^{t2} x(t)y(t) dt    (3.2)

We observe a remarkable similarity between the behavior of vectors and signals, as indicated by Eqs. (3.1) and (3.2). It is evident from these two parallel expressions that the area under the product of two signals corresponds to the inner (scalar or dot) product of two vectors. In fact, the area under the product of x(t) and y(t) is called the inner product of x(t) and y(t) and is denoted by ⟨x, y⟩. The energy of a signal is the inner product of a signal with itself, and corresponds to the square of the vector's length (which is the inner product of the vector with itself).

To summarize our discussion, if a signal x(t) is approximated by another signal y(t) as

    x(t) ≈ cy(t)

then the optimum value of c that minimizes the energy of the error signal in this approximation is given by Eq. (3.2). Taking our clue from vectors, we say that a signal x(t) contains a component cy(t), where c is given by Eq. (3.2). Note that in vector terminology, cy(t) is the projection of x(t) on y(t). Continuing with the analogy, we say that if the component of a signal x(t) of the form y(t) is zero (i.e., c = 0), the signals x(t) and y(t) are orthogonal over the interval (t₁, t₂). Therefore, we define the real signals x(t) and y(t) to be orthogonal over the interval (t₁, t₂) if

    ∫[t₁ to t₂] x(t)y(t) dt = 0        (3.3)
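As a quick numerical check of Eq. (3.2), the Python sketch below is offered as an illustration (it is not part of the original text). It assumes the square signal of Fig. 3.3 equals +1 on (0, π) and −1 on (π, 2π), and computes the optimum c for y(t) = sin t; the classical answer is c = 4/π.

```python
import numpy as np

# Assumed square signal of Fig. 3.3: x(t) = +1 on (0, pi), -1 on (pi, 2*pi)
t = np.linspace(0, 2 * np.pi, 200000, endpoint=False)
dt = t[1] - t[0]
x = np.where(t < np.pi, 1.0, -1.0)
y = np.sin(t)

# Eq. (3.2): c = (1/Ey) * integral of x(t) y(t) over the interval
Ey = np.sum(y * y) * dt          # energy of sin t over one period = pi
c = np.sum(x * y) * dt / Ey      # optimum coefficient

print(c)                          # close to 4/pi ≈ 1.2732
```

The same number drops out analytically: ∫[0 to 2π] x(t) sin t dt = 4 and E_y = π, so c = 4/π.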

For the square signal x(t) shown in Fig. 3.3, find the component in x(t) of the form sin t. In other words, approximate x(t) in terms of sin t:

    x(t) ≈ c sin t

[Figure 3.3]

Figure 3.11 Using MATLAB to plot the Fourier series coefficients of Ex. 3.5.

PERIODICITY OF THE TRIGONOMETRIC FOURIER SERIES

We have shown how an arbitrary signal x(t) may be expressed as a trigonometric Fourier series over any interval of T₀ seconds. In Ex. 3.5, for instance, we represented e^{−t/2} over the interval from 0 to π/2 only. The Fourier series found in Eq. (3.37) is equal to e^{−t/2} over this interval alone; outside this interval, the series is not necessarily equal to e^{−t/2}. It would be interesting to find out what happens to the Fourier series outside this interval. We now show that the trigonometric Fourier series is a periodic function of period T₀ (the period of the fundamental). Let us denote the trigonometric Fourier series on the right-hand side of Eq. (3.35) by [...]

>> A = 1; D = @(n) -A*4j*sin(n*pi/2)./(n.^2*pi^2);
>> T0 = 2; omega0 = 2*pi/T0; t = (-T0:.001:T0);

Next, we set the dc portion of the signal.

>> D0 = 0; x10 = D0*ones(size(t));

To add the desired 10 harmonics, we enter a loop over 1 ≤ n ≤ 10 and add in the D_n and D_{−n} terms. Although x(t) should be real, small round-off errors cause the reconstruction to be complex. These small imaginary parts are removed using the real command.

>> for n = 1:10,
>>     x10 = x10+real(D(n)*exp(1j*omega0*n*t)+D(-n)*exp(-1j*omega0*n*t));
>> end

Lastly, we plot the resulting truncated Fourier series synthesis of x(t).

>> plot(t,x10,'k'); xlabel('t'); ylabel('x_{10}(t)');

Since the synthesized waveform shown in Fig. 3.27 closely matches the original waveform in Fig. 3.12, we have high confidence that the computed D_n are correct.

Figure 3.27 A 10-harmonic truncated Fourier series of x(t).


3.7 LTIC SYSTEM RESPONSE TO PERIODIC INPUTS

A periodic signal can be expressed as a sum of everlasting exponentials (or sinusoids). We also know how to find the response of an LTIC system to an everlasting exponential. From this information, we can readily determine the response of an LTIC system to periodic inputs. A periodic signal x(t) with period T₀ can be expressed as an exponential Fourier series

    x(t) = Σ[n = −∞ to ∞] D_n e^{jnω₀t},        ω₀ = 2π/T₀

In Sec. 2.4.4, we showed that the response of an LTIC system with transfer function H(s) to an everlasting exponential input e^{jωt} is the everlasting exponential H(jω)e^{jωt}. This input-output pair can be displayed as†

    e^{jωt} (input)  ⟹  H(jω)e^{jωt} (output)

Therefore, from the linearity property,

    x(t) = Σ[n = −∞ to ∞] D_n e^{jnω₀t} (input)  ⟹  y(t) = Σ[n = −∞ to ∞] D_n H(jnω₀) e^{jnω₀t} (response)        (3.56)

The response y(t) is obtained in the form of an exponential Fourier series and is therefore a periodic signal of the same period as that of the input. We shall demonstrate the utility of these results with the following example.

A full-wave rectifier (Fig. 3.28a) is used to obtain a dc signal from a sinusoid sin t. The rectified signal x(t), depicted in Fig. 3.25, is applied to the input of a lowpass RC filter, which suppresses the time-varying component and yields a dc component with some residual ripple. Find the filter output y(t). Find also the dc output and the rms value of the ripple voltage.

† This result applies only to asymptotically stable systems, because when s = jω, the integral on the right-hand side of Eq. (2.39) does not converge for unstable systems. Moreover, for marginally stable systems, that integral does not converge in the ordinary sense, and H(jω) cannot be obtained from H(s) by replacing s with jω.

Figure 3.28 (a) Full-wave rectifier with a lowpass filter and (b) its output.

First, we shall find the Fourier series for the rectified signal x(t), whose period is T₀ = π. Consequently, ω₀ = 2, and

    x(t) = Σ[n = −∞ to ∞] D_n e^{j2nt}

where

    D_n = (1/π) ∫[0 to π] sin t e^{−j2nt} dt = 2/(π(1 − 4n²))        (3.57)

Therefore,

    x(t) = Σ[n = −∞ to ∞] 2/(π(1 − 4n²)) e^{j2nt}

Next, we find the transfer function of the RC filter in Fig. 3.28a. This filter is identical to the RC circuit in Ex. 1.17 (Fig. 1.35), for which the differential equation relating the output (capacitor voltage) to the input x(t) was found to be [Eq. (1.31)]

    (3D + 1)y(t) = x(t)


The transfer function H(s) for this system is found from Eq. (2.41) as

    H(s) = 1/(3s + 1)

and

    H(jω) = 1/(3jω + 1)        (3.58)

From Eq. (3.56), the filter output y(t) can be expressed as (with ω₀ = 2)

    y(t) = Σ[n = −∞ to ∞] D_n H(jnω₀) e^{jnω₀t} = Σ[n = −∞ to ∞] D_n H(j2n) e^{j2nt}

Substituting D_n and H(j2n) from Eqs. (3.57) and (3.58) into the foregoing equation, we obtain

    y(t) = Σ[n = −∞ to ∞] 2/(π(1 − 4n²)(j6n + 1)) e^{j2nt}

Note that the output y(t) is also a periodic signal, given by the exponential Fourier series on the right-hand side. The output is shown in Fig. 3.28b. The output Fourier series coefficient corresponding to n = 0 is the dc component of the output, given by 2/π. The remaining terms in the Fourier series constitute the unwanted component called the ripple. We can determine the rms value of the ripple voltage by using Eq. (3.54) to find the power of the ripple component. The power of the ripple is the power of all the components except the dc (n = 0). Note that D̂_n, the exponential Fourier coefficient for the output y(t), is

    D̂_n = 2/(π(1 − 4n²)(j6n + 1))

Therefore, from Eq. (3.55), we have

    P_ripple = 2 Σ[n = 1 to ∞] |D̂_n|² = 2 Σ[n = 1 to ∞] |2/(π(1 − 4n²)(j6n + 1))|² = (8/π²) Σ[n = 1 to ∞] 1/[(1 − 4n²)²(36n² + 1)]

Numerical computation of the right-hand side yields P_ripple = 0.0025, and the ripple rms value is √P_ripple = 0.05. This shows that the rms ripple voltage is 5% of the amplitude of the input sinusoid.
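The quoted numbers are easy to verify by summing the series directly; the short Python sketch below is an illustration, not part of the original text.

```python
import numpy as np

# P_ripple = (8/pi^2) * sum over n >= 1 of 1 / ((1 - 4n^2)^2 (36n^2 + 1))
n = np.arange(1, 100001)
P_ripple = (8 / np.pi**2) * np.sum(1.0 / ((1 - 4.0 * n**2)**2 * (36.0 * n**2 + 1)))
rms = np.sqrt(P_ripple)

print(P_ripple)   # about 0.0025
print(rms)        # about 0.05, i.e., 5% of the input amplitude
```

The series converges very quickly; the n = 1 term alone accounts for almost all of the ripple power.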

WHY USE EXPONENTIALS?

During the development of the Fourier series, it is natural to ask what is so special about exponential signals. If x(t) can be represented in terms of hundreds of different orthogonal sets, why do we exclusively use the exponential (or trigonometric) set for the representation of signals or LTI systems? It so happens that the exponential signal is an eigenfunction of LTI systems.


In other words, for an LTI system, only an exponential input e^{st} yields a response that is also an exponential of the same form, given by H(s)e^{st}. The same is true of the trigonometric set. This fact makes the use of exponential signals natural for LTI systems in the sense that system analysis using exponentials as the basis signals is greatly simplified.

Furthermore, the exponential form is generally preferred over the trigonometric form. The exponential Fourier series is just another way of representing the trigonometric Fourier series (or vice versa). The two forms carry identical information, no more, no less. The reasons for preferring the exponential form have already been mentioned: this form is more compact, and the expression for deriving the exponential coefficients is also more compact than those in the trigonometric series. Furthermore, the LTIC system response to exponential signals is also simpler (more compact) than the system response to sinusoids. In addition, the exponential form proves to be much easier than the trigonometric form to manipulate mathematically and otherwise handle in the area of signals as well as systems. Moreover, exponential representation proves much more convenient for the analysis of complex x(t). For these reasons, in our future discussion we shall use the exponential form exclusively.

A minor disadvantage of the exponential form is that it cannot be visualized as easily as sinusoids. For intuitive and qualitative understanding, the sinusoids have the edge over exponentials. Fortunately, this difficulty can be overcome readily because of the close connection between exponential and trigonometric spectra. For the purpose of mathematical analysis, we shall continue to use exponential signals and spectra; but to understand the physical situation intuitively or qualitatively, we shall speak in terms of sinusoids and trigonometric spectra.
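The eigenfunction property is easy to check numerically. The Python sketch below (an illustration, not from the text) passes the everlasting exponential e^{jωt} through the RC filter of the earlier example, whose impulse response for H(s) = 1/(3s + 1) is h(t) = (1/3)e^{−t/3}u(t), and confirms that the output gain agrees with H(jω) = 1/(3jω + 1).

```python
import numpy as np

# Impulse response of H(s) = 1/(3s + 1): h(t) = (1/3) e^{-t/3} u(t)
dtau = 1e-4
tau = np.arange(0, 60, dtau)           # long enough for e^{-t/3} to die out
h = (1 / 3) * np.exp(-tau / 3)

omega = 2.0                            # test at the fundamental of the rectifier example
# The response to e^{j omega t} is [integral of h(tau) e^{-j omega tau} d tau] * e^{j omega t},
# so the gain is the integral below, which should equal H(j omega) = 1/(3j omega + 1).
H_num = np.sum(h * np.exp(-1j * omega * tau)) * dtau
H_exact = 1 / (3j * omega + 1)

print(abs(H_num - H_exact))            # tiny: the exponential is an eigenfunction
```

The same check works at any ω; only the complex gain H(jω) changes, never the waveform's shape.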
Thus, although all mathematical manipulation will be in terms of exponential spectra, we shall speak of exponentials and sinusoids interchangeably when we discuss intuitive and qualitative insights in attempting to arrive at an understanding of physical situations. This is an important point; readers should make an extra effort to familiarize themselves with the two forms of spectra, their relationships, and their convertibility.

DUAL PERSONALITY OF A SIGNAL

The discussion so far has shown that a periodic signal has a dual personality: the time domain and the frequency domain. It can be described by its waveform or by its Fourier spectra. The time- and frequency-domain descriptions provide complementary insights into a signal. For an in-depth perspective, we need to understand both these identities. It is important to learn to think of a signal from both perspectives. In the next chapter, we shall see that aperiodic signals also have this dual personality. Moreover, we shall show that even LTI systems have this dual personality, which offers complementary insights into system behavior.

LIMITATIONS OF THE FOURIER SERIES METHOD OF ANALYSIS

We have developed here a method of representing a periodic signal as a weighted sum of everlasting exponentials whose frequencies lie along the jω axis in the s plane. This representation (the Fourier series) is valuable in many applications. However, as a tool for analyzing linear systems, it has serious limitations and consequently has limited utility, for the following reasons:

1. The Fourier series can be used only for periodic inputs. All practical inputs are aperiodic (remember that a periodic signal starts at t = −∞).
2. The Fourier methods can be applied readily only to BIBO-stable (or asymptotically stable) systems. They cannot handle unstable or even marginally stable systems.


The first limitation can be overcome by representing aperiodic signals in terms of everlasting exponentials. This representation can be achieved through the Fourier integral, which may be considered an extension of the Fourier series. We shall therefore use the Fourier series as a stepping-stone to the Fourier integral developed in the next chapter. The second limitation can be overcome by using exponentials e^{st}, where s is not restricted to the imaginary axis but is free to take on complex values. This generalization leads to the Laplace integral, discussed in Ch. 6 (the Laplace transform).

3.8 NUMERICAL COMPUTATION OF D_n

We can compute D_n numerically by using the DFT (the discrete Fourier transform, discussed in Sec. 5.5), which uses the samples of a periodic signal x(t) over one period. The sampling interval is T seconds. Hence, there are N₀ = T₀/T samples in one period T₀. To find the relationship between D_n and the samples of x(t), consider Eq. (3.46) and write

    D_n = (1/T₀) ∫[T₀] x(t) e^{−jnω₀t} dt
        = lim[T→0] (1/(N₀T)) Σ[k = 0 to N₀−1] x(kT) e^{−jnω₀kT} T
        = lim[T→0] (1/N₀) Σ[k = 0 to N₀−1] x(kT) e^{−jnΩ₀k}        (3.59)

where x(kT) is the kth sample of x(t) and

    N₀ = T₀/T        and        Ω₀ = ω₀T = 2π/N₀

In practice, it is impossible to make T → 0 in computing the right-hand side of Eq. (3.59). We can make T small, but not zero, for that would cause the amount of data to increase without limit. Thus, we shall ignore the limit on T in Eq. (3.59) with the implicit understanding that T is reasonably small. A nonzero T will result in some computational error, which is inevitable in any numerical evaluation of an integral. The error resulting from nonzero T is called the aliasing error, which is discussed in more detail in Ch. 5. Thus, we can express Eq. (3.59) as

    D_n ≈ (1/N₀) Σ[k = 0 to N₀−1] x(kT) e^{−jnΩ₀k}        (3.60)

Since Ω₀N₀ = 2π, we know that e^{j(n+N₀)Ω₀k} = e^{jnΩ₀k}, and it follows that

    D_{n+N₀} = D_n

The periodicity property D_{n+N₀} = D_n means that beyond n = N₀/2, the coefficients represent the values for negative n. For instance, when N₀ = 32, D₁₇ = D₋₁₅, D₁₈ = D₋₁₄, …, D₃₁ = D₋₁. The cycle repeats again from n = 32 on. We can use the efficient FFT (the fast Fourier transform, discussed in Sec. 5.6) to compute the right-hand side of Eq. (3.60). We shall use MATLAB to implement the FFT algorithm. For this purpose, we need samples of x(t) over one period starting at t = 0. In this algorithm, it is also preferable (although not necessary) that N₀ be a power of 2 (i.e., N₀ = 2^m, where m is an integer).
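As an illustration of Eq. (3.60) (sketched in Python rather than the text's MATLAB), the coefficients of the rectified sine from the earlier example can be computed with an FFT and compared with the exact values D_n = 2/(π(1 − 4n²)):

```python
import numpy as np

# One period (T0 = pi) of x(t) = |sin t|, sampled at N0 points starting at t = 0
N0 = 512                              # a power of 2, as preferred for the FFT
k = np.arange(N0)
x = np.abs(np.sin(k * np.pi / N0))

# Eq. (3.60): D_n ≈ (1/N0) * sum of x(kT) e^{-j n Omega0 k}, which is fft(x)/N0
D = np.fft.fft(x) / N0

for n in range(4):                    # compare the first few coefficients
    exact = 2 / (np.pi * (1 - 4 * n**2))
    print(n, D[n].real, exact)        # the aliasing error is negligible here
```

Because the exact coefficients decay as 1/n², the aliasing error with N₀ = 512 is far below plotting resolution.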

EXAMPLE 3.17 Numerical Computation of Fourier Spectra

Numerically compute and then plot the exponential Fourier spectra for the periodic signal in Fig. 3.10b (Ex. 3.5).

The samples of x(t) start at t = 0, and the last (N₀th) sample is at t = T₀ − T. At the points of discontinuity, the sample value is taken as the average of the values of the function on the two sides of the discontinuity. Thus, the sample at t = 0 is not 1 but (e^{−π/2} + 1)/2 ≈ 0.604. To determine N₀, we require that D_n be negligible for n ≥ N₀/2. [...]

>> clf; subplot(1,2,1); stem(n,abs(fftshift(D_n)),'.k');
>> axis([-5 5 0 .6]); xlabel('n'); ylabel('|D_n|');
>> subplot(1,2,2); stem(n,angle(fftshift(D_n)),'.k');
>> axis([-5 5 -2 2]); xlabel('n'); ylabel('\angle D_n [rad]');

As shown in Fig. 3.29, the resulting approximation is visually indistinguishable from the true Fourier series spectra shown in Fig. 3.20 or Fig. 3.21.

Figure 3.29 Numerical approximation of exponential Fourier series spectra using the DFT.


3.9 MATLAB: FOURIER SERIES APPLICATIONS

Computational packages such as MATLAB simplify the Fourier-based analysis, design, and synthesis of periodic signals. MATLAB permits rapid and sophisticated calculations, which promote practical application and intuitive understanding of the Fourier series.

3.9.1 Periodic Functions and the Gibbs Phenomenon

It is sufficient to define any T₀-periodic function over the interval 0 ≤ t < T₀. For example, consider the 2π-periodic function given by

    x(t) = t/A    for 0 ≤ t < A
    x(t) = 1      for A ≤ t < π
    x(t) = 0      for π ≤ t < 2π

with x(t + 2π) = x(t).

>> A = pi/2; [x_20,t] = CH3MP1(A,20);
>> plot(t,x_20,'k',t,x(t,A),'k:');
>> axis([-pi/4,2*pi+pi/4,-0.1,1.1]); xlabel('t'); ylabel('x_{20}(t)');

As expected, the falling edge is accompanied by the overshoot that is characteristic of the Gibbs phenomenon. Increasing N to 100, as shown in Fig. 3.31, improves the approximation but does not reduce the overshoot.

>> [x_100,t] = CH3MP1(A,100);
>> plot(t,x_100,'k',t,x(t,A),'k:');
>> axis([-pi/4,2*pi+pi/4,-0.1,1.1]); xlabel('t'); ylabel('x_{100}(t)');

Reducing A to π/64 produces a curious result. For N = 20, both the rising and falling edges are accompanied by roughly 9% overshoot, as shown in Fig. 3.32. As the number of terms is increased, overshoot persists only in the vicinity of jump discontinuities. For x_N(t), increasing N

Figure 3.30 Comparison of x_20(t) and x(t) when A = π/2.


Figure 3.31 Comparison of x_100(t) and x(t) when A = π/2.

Figure 3.32 Comparison of x_20(t) and x(t) when A = π/64.

decreases the overshoot near the rising edge but not near the falling edge. Remember that it is a true jump discontinuity that causes the Gibbs phenomenon. A continuous signal, no matter how sharply it rises, can always be represented by a Fourier series at every point within any small error by increasing N. This is not the case when a true jump discontinuity is present. Figure 3.33 illustrates this behavior using N = 100.
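The roughly 9% overshoot at a true jump is universal and does not shrink as terms are added; only the width of the overshoot region shrinks. The Python sketch below (an illustration, using a unit-height square pulse whose Fourier series is 1/2 + Σ over odd n of (2/nπ) sin nt) demonstrates this:

```python
import numpy as np

def partial_sum(t, N):
    """Partial Fourier sum of the unit square pulse: 1 on (0, pi), 0 on (pi, 2*pi)."""
    s = 0.5 * np.ones_like(t)
    for n in range(1, N + 1, 2):       # only odd harmonics are present
        s += (2 / (n * np.pi)) * np.sin(n * t)
    return s

t = np.linspace(1e-4, 1.0, 200000)     # fine grid just to the right of the jump at t = 0
for N in (19, 99, 399):
    overshoot = partial_sum(t, N).max() - 1
    print(N, overshoot)                 # hovers near 0.09, about 9% of the unit jump
```

Adding harmonics narrows the lobe that overshoots but leaves its height essentially fixed, which is exactly the behavior described above.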

3.9.2 Optimization and Phase Spectra

Although magnitude spectra typically receive the most attention, phase spectra are critically important in some applications. Consider the problem of characterizing the frequency response of an unknown system. By applying sinusoids one at a time, the frequency response is empirically measured one point at a time. This process is tedious at best. Applying a superposition of many sinusoids, however, allows simultaneous measurement of many points of the frequency response. Such measurements can be taken by a spectrum analyzer equipped with a transfer-function mode or by applying Fourier analysis techniques, which are discussed in later chapters.

[Figure 3.33]

Consider a test signal m(t) formed by summing N = 20 unit-amplitude cosines at harmonics of 100 Hz, each with phase θ_n:

>> m = @(theta,t,omega) sum(cos(omega*t+theta*ones(size(t))));
>> N = 20; omega = 2*pi*100*[1:N]'; theta = zeros(size(omega));
>> t = linspace(-0.01,0.01,10000);
>> plot(t,m(theta,t,omega),'k'); xlabel('t [sec]'); ylabel('m(t) [volts]');

Figure 3.34 Test signal m(t) with θ_n = 0.

As shown in Fig. 3.34, θ_n = 0 causes each sinusoid to add constructively. The resulting 20 volt peak can saturate system components, such as operational amplifiers operating with ±12 volt rails. To improve signal performance, the maximum amplitude of m(t) over t needs to be reduced. One way to reduce max over t of |m(t)| is to reduce M, the strength of each component. Unfortunately, this approach reduces the system's signal-to-noise ratio and ultimately degrades measurement quality. Therefore, reducing M is not a smart decision. The phases θ_n, however, can be adjusted to reduce the maximum of |m(t)| while preserving signal power. In fact, since θ_n = 0 maximizes the peak of |m(t)|, just about any other choice of θ_n will improve the situation. Even a random choice should improve performance.

As with any computer, MATLAB cannot generate truly random numbers. Rather, it generates pseudo-random numbers: deterministic sequences that appear to be random. The particular sequence of numbers that is realized depends entirely on the initial state of the pseudo-random number generator. Setting the generator's initial state to a known value allows a "random" experiment with reproducible results. The command rng(0) initializes the state of the pseudo-random number generator to a known condition of zero, and the MATLAB command rand(a,b) generates an a-by-b matrix of pseudo-random numbers that are uniformly distributed over the interval (0, 1). Radian phases occupy the wider interval (0, 2π), so the results from rand need to be appropriately scaled.

>> rng(0); theta_rand0 = 2*pi*rand(N,1);

Next, we recompute and plot m(t) using the randomly chosen θ_n.

>> m_rand0 = m(theta_rand0,t,omega);
>> plot(t,m_rand0,'k'); axis([-0.01,0.01,-10,10]);
>> xlabel('t [sec]'); ylabel('m(t) [volts]');
>> set(gca,'ytick',[min(m_rand0),max(m_rand0)]); grid on;

For a vector input, the min and max commands return the minimum and maximum values of the vector. Using these values to set y-axis tick marks makes it easy to identify the extreme values of m(t). As seen from Fig. 3.35, the maximum amplitude is now 7.6307, which is significantly smaller than the maximum of 20 when θ_n = 0.
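The zero-phase versus random-phase comparison is easy to reproduce outside MATLAB. The Python sketch below is an illustration; its random phases will not match MATLAB's rng(0) stream, so the random-phase peak differs from 7.6307.

```python
import numpy as np

# m(t): sum of N unit cosines at harmonics of 100 Hz with phases theta_n
N = 20
omega = 2 * np.pi * 100 * np.arange(1, N + 1)
t = np.linspace(-0.01, 0.01, 10001)     # odd point count so t = 0 is on the grid

def peak(theta):
    m = np.sum(np.cos(np.outer(omega, t) + theta[:, None]), axis=0)
    return np.max(np.abs(m))

peak_zero = peak(np.zeros(N))           # all cosines hit +1 at t = 0, so the peak is N = 20
rng = np.random.default_rng(0)
peak_rand = peak(2 * np.pi * rng.random(N))

print(peak_zero)                        # 20.0
print(peak_rand)                        # noticeably smaller than 20
```

Both signals carry the same power (the magnitude spectrum is unchanged); only the phase assignment moves the peak.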

Figure 3.35 Test signal m(t) with random θ_n found by using rng(0).

Figure 3.36 Test signal m(t) with random θ_n found by using rng(5).

Randomly chosen phases suffer a fatal fault: there is little guarantee of optimal performance. For example, repeating the experiment with rng(5) produces a maximum magnitude of 8.2399 volts, as shown in Fig. 3.36. This value is significantly higher than the previous maximum of 7.6307 volts. Clearly, it is better to replace a random solution with an optimal one. What constitutes "optimal"? Many choices exist, but the desired signal criteria naturally suggest that optimal phases minimize the maximum magnitude of m(t) over all t. To find these optimal phases, MATLAB's fminsearch command is useful. First, the function to be minimized, called the objective function, is defined.

>> maxmagm = @(theta,t,omega) max(abs(sum( ...
>>     cos(omega*t+theta*ones(size(t))))));
The anonymous function argument order is important; fminsearch uses the first input argument as the variable of minimization. To minimize over θ, as desired, θ must be the first argument of the objective function maxmagm. Next, the time vector is shortened to include only one period of m(t).

>> t = linspace(0,0.01,401);

Figure 3.37 Test signal m(t) with optimized phases.

A full period ensures that all values of m(t) are considered; the short length of t helps ensure that functions execute quickly. A random initial value of θ is chosen to begin the search.

>> rng(0); theta_init = 2*pi*rand(N,1);
>> theta_opt = fminsearch(maxmagm,theta_init,[],t,omega);

Notice that fminsearch finds the minimizer of maxmagm over θ by using the initial value theta_init. Most numerical minimization techniques are capable of finding only local minima, and fminsearch is no exception. As a result, fminsearch does not always produce a unique solution. The empty square brackets indicate that no special options are requested, and the remaining ordered arguments are secondary inputs for the objective function. Full format details for fminsearch are available from MATLAB's help facilities. Figure 3.37 shows the phase-optimized test signal. The maximum magnitude is reduced to 5.3632 volts, a significant improvement over the original peak of 20 volts.

Although the signals shown in Figs. 3.34 through 3.37 look different, they all possess the same magnitude spectra; the signals differ only in their phase spectra. It is interesting to investigate the similarities and differences of these signals in ways other than graphs and mathematics. For example, is there an audible difference between the signals? For computers equipped with sound capability, the MATLAB sound command can be used to find out.

>> Fs = 8000; t = [0:1/Fs:2];        % Two seconds at 8 kHz sampling rate
>> sound(m(theta,t,omega)/20,Fs);    % Play (scaled) m(t) made with 0 phases

Since the sound command clips magnitudes that exceed 1, the input vector is scaled by 1/20 to avoid clipping and the resulting sound distortion. The signals using the other phase assignments are created and played in a similar fashion. How well does the human ear discern differences in phase spectra? If you are like most people, you will not be able to discern any differences in how these waveforms sound.
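For readers without MATLAB, the phase-optimization experiment above can be approximated in Python: scipy's fmin is a Nelder-Mead simplex search comparable to fminsearch. This is only a sketch under those assumptions; the random start and search path differ from MATLAB's, so the final peak will not equal 5.3632 volts.

```python
import numpy as np
from scipy.optimize import fmin

N = 20
omega = 2 * np.pi * 100 * np.arange(1, N + 1)
t = np.linspace(0, 0.01, 401)           # one full period of m(t)

def maxmagm(theta):
    # Objective: the peak magnitude of m(t) for the given phase vector
    m = np.sum(np.cos(np.outer(omega, t) + theta[:, None]), axis=0)
    return np.max(np.abs(m))

rng = np.random.default_rng(0)
theta_init = 2 * np.pi * rng.random(N)
theta_opt = fmin(maxmagm, theta_init, disp=False)   # Nelder-Mead, like fminsearch

print(maxmagm(theta_init), maxmagm(theta_opt))      # the optimized peak is never larger
```

Like fminsearch, this finds only a local minimum; different starting phases generally yield different (but similarly reduced) peaks.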


3.10 SUMMARY

This chapter discusses the foundations of signal representation in terms of its components. There is a perfect analogy between vectors and signals; the analogy is so strong that the term analogy understates the reality. Signals are not just like vectors. Signals are vectors. The inner or scalar product of two (real) signals is the area under the product of the two signals. If this inner or scalar product is zero, then the signals are said to be orthogonal. A signal x(t) has a component cy(t), where c is the inner product of x(t) and y(t) divided by E_y, the energy of y(t).

A good measure of the similarity of two signals x(t) and y(t) is the correlation coefficient ρ, which is equal to the inner product of x(t) and y(t) divided by √(E_x E_y). It can be shown that −1 ≤ ρ ≤ 1. The maximum similarity (ρ = 1) occurs only when the two signals have the same waveform within a (positive) multiplicative constant, that is, when x(t) = Ky(t). The maximum dissimilarity (ρ = −1) occurs only when x(t) = −Ky(t). Zero similarity (ρ = 0) occurs when the signals are orthogonal. In binary communication, where we are required to distinguish between two known waveforms in the presence of noise and distortion, selecting the two waveforms with maximum dissimilarity (ρ = −1) provides maximum distinguishability.

Just as a vector can be represented by the sum of its orthogonal components in a complete orthogonal vector space, a signal can also be represented by the sum of its orthogonal components in a complete orthogonal signal space. Such a representation is known as the generalized Fourier series representation. A vector can be represented by its orthogonal components in many different ways, depending on the coordinate system used. Similarly, a signal can be represented in terms of different orthogonal signal sets, of which the trigonometric and the exponential signal sets are two examples.
We have shown that the trigonometric and exponential Fourier series are periodic with a period equal to that of the fundamental in the set. In this chapter, we have shown how a periodic signal can be represented as a sum of (everlasting) sinusoids or exponentials. If the frequency of a periodic signal is ω₀, then it can be expressed as a sum of the sinusoid of frequency ω₀ and its harmonics (the trigonometric Fourier series). We can reconstruct the periodic signal from a knowledge of the amplitudes and phases of these sinusoidal components (the amplitude and phase spectra).

If a periodic signal x(t) has even symmetry, its Fourier series contains only cosine terms (including dc). In contrast, if x(t) has odd symmetry, its Fourier series contains only sine terms. If x(t) has neither type of symmetry, its Fourier series contains both sine and cosine terms. At points of discontinuity, the Fourier series for x(t) converges to the mean of the values of x(t) on either side of the discontinuity. For signals with discontinuities, the Fourier series converges in the mean and exhibits the Gibbs phenomenon at the points of discontinuity. The amplitude spectrum of the Fourier series for a periodic signal x(t) with jump discontinuities decays slowly (as 1/n) with frequency, and we need a large number of terms in the Fourier series to approximate x(t) within a given error. In contrast, the amplitude spectrum of a smoother periodic signal decays faster with frequency, and we require fewer terms in the series to approximate x(t) within a given error.

A sinusoid can be expressed in terms of exponentials. Therefore, the Fourier series of a periodic signal can also be expressed as a sum of exponentials (the exponential Fourier series). The exponential form of the Fourier series and the expressions for the series coefficients are more compact than those of the trigonometric Fourier series. Also, the response of LTIC systems to an


exponential input is much simpler than that for a sinusoidal input. Moreover, the exponential form of representation lends itself better to mathematical manipulations than does the trigonometric form. This includes the establishment of useful Fourier series properties that simplify work and help provide a more intuitive understanding of signals. For these reasons, the exponential form of the series is preferred in modern practice in the areas of signals and systems.

The plots of the amplitudes and angles of the various exponential components of the Fourier series as functions of frequency are the exponential Fourier spectra (amplitude and angle spectra) of the signal. Because a sinusoid cos ω₀t can be represented as a sum of two exponentials, e^{jω₀t} and e^{−jω₀t}, the frequencies in the exponential spectra range from ω = −∞ to ∞. By definition, the frequency of a signal is always a positive quantity. The presence of a spectral component of a negative frequency −nω₀ merely indicates that the Fourier series contains terms of the form e^{−jnω₀t}. The spectra of the trigonometric and exponential Fourier series are closely related, and one can be found by inspection of the other.

In Sec. 3.7, we discuss a method of finding the response of an LTIC system to a periodic input signal. The periodic input is expressed as an exponential Fourier series, which consists of everlasting exponentials of the form e^{jnω₀t}. We also know that the response of an LTIC system to an everlasting exponential e^{jnω₀t} is H(jnω₀)e^{jnω₀t}. The system response is the sum of the system's responses to all the exponential components in the Fourier series for the input. The response is, therefore, also an exponential Fourier series. Thus, the response is also a periodic signal of the same period as that of the input.
The Fourier series coefficients C_n or D_n may be computed numerically using the discrete Fourier transform (DFT), which can be implemented by an efficient FFT (fast Fourier transform) algorithm. This method uses N₀ uniform samples of x(t) over one period starting at t = 0.


PROBLEMS

3.1-1 Derive Eq. (3.1) in an alternate way by observing that e = (x − cy) and |e|² = (x − cy)·(x − cy) = |x|² + c²|y|² − 2c x·y. [Hint: Find the value of c that minimizes |e|².]

3.1-2 A signal x(t) is approximated in terms of a signal y(t) over an interval (t₁, t₂): x(t) ≈ cy(t), where c is chosen to minimize the error energy.
(a) Show that y(t) and the error e(t) = x(t) − cy(t) are orthogonal over the interval (t₁, t₂).
(b) If possible, explain the result in terms of a signal-vector analogy.
(c) Verify this result for the square signal x(t) in Fig. 3.3 and its approximation in terms of the signal sin t.

3.1-3 (a) For the signals x(t) and y(t) depicted in Fig. P3.1-3, find the component of the form y(t) contained in x(t). In other words, find the optimum value of c in the approximation x(t) ≈ cy(t) so that the error signal energy is minimum.
(b) Find the error signal e(t) and its energy E_e. Show that the error signal is orthogonal to y(t), and that E_x = c²E_y + E_e. Explain this result in terms of vectors.

[Figure P3.1-3: (a) x(t); (b) y(t).]

3.1-4 For the signals x(t) and y(t) shown in Fig. P3.1-3, find the component of the form x(t) contained in y(t). In other words, find the optimum value of c in the approximation y(t) ≈ cx(t) so that the error signal energy is minimum. What is the error signal energy?

3.1-5 Repeat Prob. 3.1-3 if y(t) is instead defined as y(t) = sin(2πt)[u(t) − u(t − 1)].

3.1-6 If x(t) and y(t) are orthogonal, then show that the energy of the signal x(t) + y(t) is identical to the energy of the signal x(t) − y(t) and is given by E_x + E_y. Explain this result by using the vector analogy. In general, show that for orthogonal signals x(t) and y(t) and for any pair of arbitrary real constants c₁ and c₂, the energies of c₁x(t) + c₂y(t) and c₁x(t) − c₂y(t) are both given by c₁²E_x + c₂²E_y.

3.1-7 In Ex. 3.4, we represented the function in Fig. 3.9 by Legendre polynomials.
(a) Use the results in Ex. 3.4 to represent the signal g(t) in Fig. P3.1-7 by Legendre polynomials.
(b) Compute the error energy for the approximations having one and two (nonzero) terms.

[Figure P3.1-7: x(t), a signal defined over (−π, π) with minimum value −1.]

3.1-8 For the four-dimensional real space ℝ⁴, the so-called Walsh basis is given by ... [1, −1, −1, 1], and ... Evaluate the three-term sum to determine the four elements of the vector.

3.1-9 A function can be expanded in terms of many different types of basis functions, not just the complex exponentials of Fourier analysis. For example, Legendre polynomials are explored in Ex. 3.4. Another possible set of basis functions are the Laguerre polynomials L_k(t), which have support on the interval [0, ∞). The Laguerre expansion of x(t) is given by x(t) = Σ_{k=0}^∞ c_k L_k(t), where L₀(t) = 1, L₁(t) = (1 − t), ..., L_k(t) = (e^t/k!) (d^k/dt^k)(t^k e^{−t}). As with a Fourier series, we can truncate the expansion using any number of terms we choose. For the Laguerre expansion, we define orthogonality a little differently as

∫₀^∞ e^{−t} L_j(t) L_k(t) dt = 1 for j = k (and 0 for j ≠ k)

Notice the presence of the e^{−t} term in the integral. Using this definition, Laguerre polynomials are orthonormal.
(a) Show that L₀(t) is normal. That is, show that ∫₀^∞ e^{−t} L₀(t)L₀(t) dt = 1.
(b) Show that L₁(t) is normal. That is, show that ∫₀^∞ e^{−t} L₁(t)L₁(t) dt = 1.
(c) Show that L₀(t) is orthogonal to L₁(t).
(d) Compute the coefficient c₀ that produces the best approximation x₀(t) = c₀L₀(t) of the function x(t) = e^{−t}u(t).
(e) For the best estimate x₁(t) = c₀L₀(t) + c₁L₁(t) of the function x(t) = e^{−t}u(t), the coefficient c₀ remains unchanged from part (d) and c₁ = ¼. Confirm that c₁ = ¼ and explain why the coefficient c₀ does not change.
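The claims in the Laguerre problem above are easy to check numerically. The sketch below (Python/NumPy rather than the book's MATLAB, with the first two Laguerre polynomials hard-coded from the problem statement) verifies the e^{−t}-weighted orthonormality of L₀ and L₁ and the coefficients c₀ = 1/2 and c₁ = 1/4 for x(t) = e^{−t}u(t).

```python
import numpy as np

# First two Laguerre polynomials from the problem statement.
L0 = lambda t: np.ones_like(t)
L1 = lambda t: 1.0 - t

# Weighted inner product <f, g> = integral_0^inf e^{-t} f(t) g(t) dt,
# approximated by a Riemann sum on a truncated grid (tail is negligible).
t = np.linspace(0.0, 50.0, 200001)
dt = t[1] - t[0]
w = np.exp(-t)
inner = lambda f, g: np.sum(w * f(t) * g(t)) * dt

print(inner(L0, L0))   # ~1 : L0 is normal
print(inner(L1, L1))   # ~1 : L1 is normal
print(inner(L0, L1))   # ~0 : L0 and L1 are orthogonal

# Expansion coefficients of x(t) = e^{-t} u(t): c_k = <x, L_k>.
x = lambda t: np.exp(-t)
c0 = inner(x, L0)      # ~1/2
c1 = inner(x, L1)      # ~1/4
print(c0, c1)
```

The analytic values follow from ∫₀^∞ tⁿe^{−at} dt = n!/a^{n+1}; the numerical sums agree to within the grid's discretization error.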

3.2-1 Find the correlation coefficient ρ of the signal y(t) and each of the four pulses x₁(t), x₂(t), x₃(t), and x₄(t) depicted in Fig. P3.2-1. Which pair of pulses would you select for binary communication in order to provide maximum margin against noise along the transmission path?

3.3-1 Let x₁(t) and x₂(t) be two signals that are orthonormal (mutually orthogonal and of unit energies) over an interval from t₁ to t₂.

[Figure P3.2-1: y(t) = −sin(2πt) and the pulses x₁(t), x₂(t), x₃(t), and x₄(t).]

thus obtained is identical to that found in part (a).
(c) Show that, in general, time compression of a periodic signal by a factor a expands its Fourier spectra along the ω axis by the same factor a. In other words, C₀, Cₙ, and θₙ remain unchanged, but the fundamental frequency is increased by the factor a, thus expanding the spectrum. Similarly, time expansion of a periodic signal by a factor a compresses its Fourier spectra along the ω axis by the factor a.

Consider a signal f(t) where f(t) = c₁x₁(t) + c₂x₂(t). This signal can be represented by a two-dimensional vector f = (c₁, c₂).
(a) Determine the vector representation of the following six signals in this two-dimensional vector space: f_a(t) = 2x₁(t) − x₂(t), f_b(t) = −x₁(t) + 2x₂(t), f_c(t) = −x₂(t), f_d(t) = x₁(t) + 2x₂(t), f_e(t) = 2x₁(t) + x₂(t), and f_f(t) = 3x₁(t).
(b) Point out pairs of mutually orthogonal vectors among these six vectors. Verify that the pairs of signals corresponding to these orthogonal vectors are also orthogonal.

3.4-1 Sketch the signal x(t) = t² for all t and find the trigonometric Fourier series

sgn(t) = 2u(t) − 1

Using Eqs. (4.21) and (4.24) and the linearity property, we obtain

sgn(t) ⟺ 2/(jω)

Table 4.1 provides many common Fourier transform pairs.

4.2 Transforms of Some Useful Functions

TABLE 4.1 Select Fourier Transform Pairs

No.  x(t)                      X(ω)
1    e^{−at}u(t)               1/(a + jω),  a > 0
2    e^{at}u(−t)               1/(a − jω),  a > 0
3    e^{−a|t|}                 2a/(a² + ω²),  a > 0
4    t e^{−at}u(t)             1/(a + jω)²,  a > 0
5    tⁿ e^{−at}u(t)            n!/(a + jω)^{n+1},  a > 0
6    δ(t)                      1
7    1                         2πδ(ω)
8    e^{jω₀t}                  2πδ(ω − ω₀)
9    cos ω₀t                   π[δ(ω − ω₀) + δ(ω + ω₀)]
10   sin ω₀t                   jπ[δ(ω + ω₀) − δ(ω − ω₀)]
11   u(t)                      πδ(ω) + 1/(jω)
12   sgn t                     2/(jω)
13   cos ω₀t u(t)              (π/2)[δ(ω − ω₀) + δ(ω + ω₀)] + jω/(ω₀² − ω²)
14   sin ω₀t u(t)              (π/2j)[δ(ω − ω₀) − δ(ω + ω₀)] + ω₀/(ω₀² − ω²)
15   e^{−at} sin ω₀t u(t)      ω₀/[(a + jω)² + ω₀²],  a > 0
16   e^{−at} cos ω₀t u(t)      (a + jω)/[(a + jω)² + ω₀²],  a > 0
17   rect(t/τ)                 τ sinc(ωτ/2)
18   (W/π) sinc(Wt)            rect(ω/2W)
19   Δ(t/τ)                    (τ/2) sinc²(ωτ/4)
20   (W/2π) sinc²(Wt/2)        Δ(ω/2W)
21   Σₙ δ(t − nT)              ω₀ Σₙ δ(ω − nω₀),  ω₀ = 2π/T
22   e^{−t²/2σ²}               σ√(2π) e^{−σ²ω²/2}

† We have assumed a > 1, although the argument still holds if a < 1. In the latter case, compression becomes expansion by the factor 1/a, and vice versa.
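Any entry of Table 4.1 whose signal is absolutely integrable can be spot-checked by numerical evaluation of the defining integral X(ω) = ∫x(t)e^{−jωt} dt. The Python/NumPy sketch below (a verification aid, not part of the book's development; the choices a = 2 and τ = 1 are arbitrary) checks pair 1 and pair 17 at a few frequencies.

```python
import numpy as np

# Approximate X(w) = integral x(t) e^{-jwt} dt by a Riemann sum on a grid.
t = np.linspace(-25.0, 25.0, 1000001)
dt = t[1] - t[0]
ft = lambda x, w: np.sum(x * np.exp(-1j * w * t)) * dt

# Pair 1: e^{-at}u(t) <-> 1/(a + jw), here with a = 2.
a = 2.0
x1 = np.exp(-a * t) * (t >= 0)
for w in (0.0, 1.0, 5.0):
    print(w, abs(ft(x1, w) - 1.0 / (a + 1j * w)))   # small numerical error

# Pair 17: rect(t/tau) <-> tau*sinc(w*tau/2), with sinc(x) = sin(x)/x.
tau = 1.0
x2 = (np.abs(t) <= tau / 2).astype(float)
w = 3.0
exact = tau * np.sin(w * tau / 2) / (w * tau / 2)
print(abs(ft(x2, w) - exact))                       # small numerical error
```

Pairs involving impulses or non-integrable signals (rows 7–14, 21) cannot be checked this way; they require the generalized-function arguments given in the text.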

4.3 Some Properties of the Fourier Transform

[Figure 4.20 The scaling property of the Fourier transform.]

RECIPROCITY OF SIGNAL DURATION AND ITS BANDWIDTH
The scaling property implies that if x(t) is wider, its spectrum is narrower, and vice versa. Doubling the signal duration halves its bandwidth, and vice versa. This suggests that the bandwidth of a signal is inversely proportional to the signal duration or width (in seconds).† We have already verified this fact for the gate pulse, where we found that the bandwidth of a gate pulse of width τ seconds is 1/τ Hz. More discussion of this interesting topic can be found in the literature [2].

By letting a = −1 in Eq. (4.26), we obtain the inversion (or reflection) property of time and frequency:

x(−t) ⟺ X(−ω)    (4.27)

EXAMPLE 4.12 Fourier Transform Reflection Property

Using the reflection property of the Fourier transform and Table 4.1, find the Fourier transforms of e^{at}u(−t) and e^{−a|t|}.

Application of Eq. (4.27) to pair 1 of Table 4.1 yields

e^{at}u(−t) ⟺ 1/(a − jω)

Also, e^{−a|t|} = e^{−at}u(t) + e^{at}u(−t).

† When a signal has infinite duration, we must consider its effective or equivalent duration. There is no unique definition of effective signal duration. One possible definition is given in Eq. (2.47).


CHAPTER 4 CONTINUOUS-TIME SIGNAL ANALYSIS: THE FOURIER TRANSFORM

Therefore,

e^{−a|t|} ⟺ 1/(a + jω) + 1/(a − jω) = 2a/(a² + ω²)

dx(t)/dt ⟺ jωX(ω)

Repeated application of this property yields

dⁿx(t)/dtⁿ ⟺ (jω)ⁿX(ω)

The time-integration property [Eq. (4.37)] has already been proved in Ex. 4.16. Table 4.2 summarizes the most important properties of the Fourier transform.


EXAMPLE 4.17 Fourier Transform Time-Differentiation Property

Use the time-differentiation property to find the Fourier transform of the triangle pulse Δ(t/τ) illustrated in Fig. 4.27a. Verify the correctness of the spectrum by using it to synthesize a periodic replication of the original time-domain signal with τ = 1.

[Figure 4.27 Finding the Fourier transform of a piecewise-linear signal using the time-differentiation property: (a) x(t) = Δ(t/τ); (b) dx/dt; (c) d²x/dt²; (d) X(ω) = (τ/2) sinc²(ωτ/4).]

To find the Fourier transform of this pulse, we differentiate the pulse successively, as illustrated in Figs. 4.27b and 4.27c. Because dx/dt is piecewise constant, its derivative, d²x/dt², is zero everywhere except at the jump discontinuities. The derivative dx/dt has a positive jump of 2/τ at t = ±τ/2, and a negative jump of 4/τ at t = 0. Recall that the derivative of a signal at a jump discontinuity is an impulse at that point of strength equal to the amount of the jump. Hence, d²x/dt², the derivative of dx/dt, consists of a sequence of impulses, as depicted in Fig. 4.27c; that is,


d²x(t)/dt² = (2/τ)[δ(t + τ/2) − 2δ(t) + δ(t − τ/2)]

From the time-differentiation property [Eq. (4.36)],

d²x(t)/dt² ⟺ (jω)²X(ω) = −ω²X(ω)

Also, from the time-shifting property [Eq. (4.29)],

δ(t − t₀) ⟺ e^{−jωt₀}

Combining these results, we obtain

−ω²X(ω) = (2/τ)[e^{jωτ/2} − 2 + e^{−jωτ/2}] = (4/τ)[cos(ωτ/2) − 1] = −(8/τ) sin²(ωτ/4)

and

X(ω) = (8/ω²τ) sin²(ωτ/4) = (τ/2)[sin(ωτ/4)/(ωτ/4)]² = (τ/2) sinc²(ωτ/4)
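As an independent check (a Python/NumPy sketch, separate from the Fourier series verification the text develops next), we can integrate the triangle pulse directly and compare against (τ/2) sinc²(ωτ/4) at a few frequencies.

```python
import numpy as np

tau = 1.0
t = np.linspace(-tau / 2, tau / 2, 200001)
dt = t[1] - t[0]
x = 1.0 - 2.0 * np.abs(t) / tau          # triangle pulse Delta(t/tau)

# Compare the direct integral of x(t)e^{-jwt} with (tau/2)*sinc^2(w*tau/4),
# where sinc(x) = sin(x)/x, as used in this book.
errs = []
for w in (1.0, 4.0, 10.0):
    num = np.sum(x * np.exp(-1j * w * t)) * dt
    s = np.sin(w * tau / 4) / (w * tau / 4)
    errs.append(abs(num - tau / 2 * s**2))
    print(w, errs[-1])                   # small numerical error
```

Agreement at several frequencies is strong evidence that the impulse bookkeeping in the derivation above was done correctly.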

The spectrum X(ω) is depicted in Fig. 4.27d. This procedure of finding the Fourier transform can be applied to any function x(t) made up of straight-line segments with x(t) → 0 as |t| → ∞. The second derivative of such a signal yields a sequence of impulses whose Fourier transform can be found by inspection. This example suggests a numerical method of finding the Fourier transform of an arbitrary signal x(t) by approximating the signal by straight-line segments.

SYNTHESIZING A PERIODIC REPLICATION TO VERIFY SPECTRUM CORRECTNESS

While a signal's spectrum X(ω) provides useful insight into signal character, it can be difficult to look at X(ω) and know that it is correct for a particular signal x(t). Is it obvious, for example, that X(ω) = (τ/2) sinc²(ωτ/4) is really the spectrum of a τ-duration triangle function? Or is it possible that a mathematical error was made in the determination of X(ω)? It is difficult to be certain by simple inspection of the spectrum. The same uncertainties exist when we are looking at a periodic signal's Fourier series spectrum. In the Fourier series case, we can verify the correctness of a signal's spectrum by synthesizing x(t) with a truncated Fourier series; the synthesized signal will match the original only if the computed spectrum is correct. This is exactly the approach that was taken in Ex. 3.15. And since a truncated Fourier series involves a simple sum, tools like MATLAB make waveform synthesis relatively simple, at least in the case of the Fourier series. In the case of the Fourier transform, however, synthesis of x(t) using Eq. (4.10) requires integration, a task not well suited to numerical packages such as MATLAB. All is not lost, however. Consider Eq. (4.5). By scaling and sampling the spectrum X(ω) of an aperiodic signal x(t), we obtain the Fourier series coefficients of a signal that is the periodic replication of x(t).


Similar to Ex. 3.15, we can then synthesize a periodic replication of x(t) with a truncated Fourier series to verify spectrum correctness. Let us demonstrate the idea for the current example with τ = 1. To begin, we represent X(ω) = (τ/2) sinc²(ωτ/4) using an anonymous function in MATLAB. Since MATLAB computes sinc(x) as sin(πx)/(πx), we must scale the input by 1/π to match the notation of sinc in this book.

>> tau = 1; X = @(omega) tau/2*(sinc(omega*tau/(4*pi))).^2;

For our periodic replication, let us pick T0 = 2, which is comfortably wide enough to accommodate our (τ = 1)-width function without overlap. We use Eq. (4.5) to define the needed Fourier series coefficients Dn.

>> T0 = 2; omega0 = 2*pi/T0; D = @(n) X(n*omega0)/T0;

Let us use 25 harmonics to synthesize the periodic replication x25(t) of our triangular signal x(t). To begin waveform synthesis, we set the dc portion of the signal.

>> t = (-T0:.001:T0); x25 = D(0)*ones(size(t));

To add the desired 25 harmonics, we enter a loop for 1 ≤ n ≤ 25 and add in the Dn and D−n terms. Although the result should be real, small round-off errors cause the reconstruction to be complex. These small imaginary parts are removed by using the real command.

>> for n = 1:25,
>>     x25 = x25+real(D(n)*exp(1j*omega0*n*t)+D(-n)*exp(-1j*omega0*n*t));
>> end

Lastly, we plot the resulting truncated Fourier series synthesis of x(t).

>> plot(t,x25,'k');
>> xlabel('t'); ylabel('x_{25}(t)'); axis([-2 2 -.1 1.1]);

Since the synthesized waveform shown in Fig. 4.28 closely matches a 2-periodic replication of the triangle wave in Fig. 4.27a, we have high confidence that both the computed Dn and, by extension, the Fourier spectrum X(ω) are correct.
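The same verification can be scripted outside MATLAB. The sketch below is a Python/NumPy translation of the procedure above (the sinc is written out explicitly rather than relying on a library version); it synthesizes x₂₅(t) and checks its value at t = 0 against the triangle's known peak of 1. With only 25 harmonics the match is close but not exact.

```python
import numpy as np

tau, T0 = 1.0, 2.0
omega0 = 2 * np.pi / T0

def X(w):
    # X(w) = (tau/2)*sinc^2(w*tau/4), with sinc(x) = sin(x)/x and sinc(0) = 1.
    denom = np.where(w == 0, 1.0, w * tau / 4)
    s = np.where(w == 0, 1.0, np.sin(w * tau / 4) / denom)
    return tau / 2 * s**2

D = lambda n: X(n * omega0) / T0            # Eq. (4.5): D_n = X(n*omega0)/T0

t = np.arange(-T0, T0, 0.001)
x25 = D(0) * np.ones_like(t)
for n in range(1, 26):                       # add harmonics n = 1..25
    x25 += np.real(D(n) * np.exp(1j * omega0 * n * t)
                   + D(-n) * np.exp(-1j * omega0 * n * t))

peak = x25[np.argmin(np.abs(t))]             # synthesized value at t = 0
print(peak)                                  # close to the triangle's peak of 1
```

The small shortfall at the peak is the truncation error of stopping at 25 harmonics; it shrinks as more terms are added, since the coefficients decay as 1/n².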

[Figure 4.28 Synthesizing a 2-periodic replication of x(t) using a truncated Fourier series.]


DRILL 4.9 Fourier Transform Time-Differentiation Property

Use the time-differentiation property to find the Fourier transform of rect(t/τ).

4.4 SIGNAL TRANSMISSION THROUGH LTIC SYSTEMS

If x(t) and y(t) are the input and output of an LTIC system with impulse response h(t), then, as demonstrated in Eq. (4.35),

Y(ω) = H(ω)X(ω)

This equation does not apply to (asymptotically) unstable systems because h(t) for such systems is not Fourier transformable. It applies to BIBO-stable as well as most of the marginally stable systems.† Similarly, this equation does not apply if x(t) is not Fourier transformable. In Ch. 6, we shall see that the Laplace transform, which is a generalized Fourier transform, is more versatile and capable of analyzing all kinds of LTIC systems, whether stable, unstable, or marginally stable. The Laplace transform can also handle exponentially growing inputs. In comparison to the Laplace transform, the Fourier transform in system analysis is not just clumsier, but also very restrictive. Hence, the Laplace transform is preferable to the Fourier transform in LTIC system analysis. We shall not belabor the application of the Fourier transform to LTIC system analysis. We consider just one example here.

Use the Fourier transform to find the zero-state response of a stable LTIC system with transfer function

H(s) = 1/(s + 2)

and the input x(t) = e^{−t}u(t). Stability implies that the region of convergence of H(s) includes the ω axis. In this case,

X(ω) = 1/(jω + 1)

† For marginally stable systems, if the input x(t) contains a finite-amplitude sinusoid of the system's natural frequency, which leads to resonance, the output is not Fourier transformable. The equation does, however, apply to marginally stable systems if the input does not contain a finite-amplitude sinusoid of the system's natural frequency.


Moreover, because the system is stable, the frequency response H(jω) = H(ω). Hence,

H(ω) = H(s)|_{s=jω} = 1/(jω + 2)

Therefore,

Y(ω) = H(ω)X(ω) = 1/[(jω + 2)(jω + 1)]

Expanding the right-hand side in partial fractions yields

Y(ω) = 1/(jω + 1) − 1/(jω + 2)

and

y(t) = (e^{−t} − e^{−2t})u(t)

DRILL: Determine the zero-state response of this system to the input e^t u(−t). [Hint: Use pair 2 of Table 4.1 to find the Fourier transform of e^t u(−t).]
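The result y(t) = (e^{−t} − e^{−2t})u(t) is easy to sanity-check numerically: convolving x(t) = e^{−t}u(t) with the impulse response h(t) = e^{−2t}u(t) must reproduce it. A Python/NumPy sketch (the step size and time horizon are arbitrary choices):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t)               # input x(t) = e^{-t} u(t)
h = np.exp(-2 * t)           # impulse response h(t) = e^{-2t} u(t)

# Zero-state response via numerical convolution y = x * h.
y = np.convolve(x, h)[:len(t)] * dt

y_exact = np.exp(-t) - np.exp(-2 * t)
err = np.max(np.abs(y - y_exact))
print(err)                   # small numerical error
```

The discrete convolution approximates the convolution integral with an error on the order of the step size dt.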

HEURISTIC UNDERSTANDING OF LINEAR SYSTEM RESPONSE

In finding the linear system response to an arbitrary input, the time-domain method uses the convolution integral and the frequency-domain method uses the Fourier integral. Despite the apparent dissimilarities of the two methods, their philosophies are amazingly similar. In the time-domain case, we express the input x(t) as a sum of its impulse components; in the frequency-domain case, the input is expressed as a sum of everlasting exponentials (or sinusoids). In the former case, the response y(t), obtained by summing the system's responses to the impulse components, results in the convolution integral; in the latter case, the response, obtained by summing the system's responses to the everlasting exponential components, results in the Fourier integral. These ideas can be expressed mathematically as follows:

1. For the time-domain case,

δ(t) ⟹ h(t)   (the system response to δ(t) is the impulse response h(t))

x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ   (expresses x(t) as a sum of impulse components)

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ   (expresses y(t) as a sum of responses to the impulse components of the input x(t))

2. For the frequency-domain case,

e^{jωt} ⟹ H(ω)e^{jωt}   (the system response to e^{jωt} is H(ω)e^{jωt})

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω)e^{jωt} dω   (expresses x(t) as a sum of everlasting exponential components)

y(t) = (1/2π) ∫_{−∞}^{∞} X(ω)H(ω)e^{jωt} dω   (expresses y(t) as a sum of responses to the exponential components of the input x(t))

The frequency-domain view sees a system in terms of its frequency response (the system response to various sinusoidal components). It views a signal as a sum of various sinusoidal components. Transmission of an input signal through a (linear) system is viewed as transmission of the various sinusoidal components of the input through the system. It was not by coincidence that we used the impulse function in time-domain analysis and the exponential e^{jωt} in studying the frequency domain. The two functions happen to be duals of each other. Thus, the Fourier transform of an impulse δ(t − τ) is e^{−jωτ}, and the Fourier transform of e^{jω₀t} is an impulse 2πδ(ω − ω₀). This time-frequency duality is a constant theme in the Fourier transform and linear systems.

4.4.1 Signal Distortion During Transmission

For a system with frequency response H(ω), if X(ω) and Y(ω) are the spectra of the input and the output signals, respectively, then

Y(ω) = X(ω)H(ω)    (4.38)

The transmission of the input signal x(t) through the system changes it into the output signal y(t). Equation (4.38) shows the nature of this change or modification. Here, X(ω) and Y(ω) are the spectra of the input and the output, respectively. Therefore, H(ω) is the spectral response of the system. The output spectrum is the input spectrum multiplied by the spectral response of the system. Equation (4.38), which clearly brings out the spectral shaping (or modification) of the signal by the system, can be expressed in polar form as

|Y(ω)|e^{j∠Y(ω)} = |X(ω)||H(ω)|e^{j[∠X(ω) + ∠H(ω)]}

Therefore,

|Y(ω)| = |X(ω)||H(ω)|   and   ∠Y(ω) = ∠X(ω) + ∠H(ω)

During transmission, the input signal amplitude spectrum |X(ω)| is changed to |X(ω)||H(ω)|. Similarly, the input signal phase spectrum ∠X(ω) is changed to ∠X(ω) + ∠H(ω). An input signal spectral component of frequency ω is modified in amplitude by a factor |H(ω)| and is shifted in phase by an angle ∠H(ω). Clearly, |H(ω)| is the amplitude response, and ∠H(ω) is the phase response of the system. The plots of |H(ω)| and ∠H(ω) as functions of ω show at a glance how the system modifies the amplitudes and phases of various sinusoidal inputs. This is why H(ω) is also called the frequency response of the system. During transmission through the system, some frequency components may be boosted in amplitude, while others may be attenuated.


The relative phases of the various components also change. In general, the output waveform will be different from the input waveform.

DISTORTIONLESS TRANSMISSION

In several applications, such as signal amplification or message signal transmission over a communication channel, we require that the output waveform be a replica of the input waveform. In such cases, we need to minimize the distortion caused by the amplifier or the communication channel. It is, therefore, of practical interest to determine the characteristics of a system that allows a signal to pass without distortion (distortionless transmission).

Transmission is said to be distortionless if the input and the output have identical waveshapes within a multiplicative constant. A delayed output that retains the input waveform is also considered to be distortionless. Thus, in distortionless transmission, the input x(t) and the output y(t) satisfy the condition

y(t) = G₀x(t − t_d)

The Fourier transform of this equation yields

Y(ω) = G₀X(ω)e^{−jωt_d}

But

Y(ω) = X(ω)H(ω)

Therefore,

H(ω) = G₀e^{−jωt_d}

This is the frequency response required of a system for distortionless transmission. From this equation, it follows that

|H(ω)| = G₀   and   ∠H(ω) = −ωt_d    (4.39)

This result shows that for distortionless transmission, the amplitude response |H(ω)| must be a constant, and the phase response ∠H(ω) must be a linear function of ω with slope −t_d, where t_d is the delay of the output with respect to the input (Fig. 4.29).

MEASURE OF TIME-DELAY VARIATION WITH FREQUENCY

The gain |H(ω)| = G₀ means that every spectral component is multiplied by a constant G₀. Also, as seen in connection with Fig. 4.22, a linear phase ∠H(ω) = −ωt_d means that every spectral component is delayed by t_d seconds. This results in an output equal to G₀ times the input delayed

[Figure 4.29 LTIC system frequency response for distortionless transmission.]


by t_d seconds. Because each spectral component is attenuated by the same factor (G₀) and delayed by exactly the same amount (t_d), the output signal is an exact replica of the input (except for the attenuating factor G₀ and delay t_d). For distortionless transmission, we require a linear phase characteristic. The phase is not only a linear function of ω, it should also pass through the origin ω = 0. In practice, many systems have a phase characteristic that may be only approximately linear. A convenient way of judging phase linearity is to plot the slope of ∠H(ω) as a function of frequency. This slope, which is constant for an ideal linear phase (ILP) system, is a function of ω in the general case and can be expressed as

t_g(ω) = −(d/dω) ∠H(ω)    (4.40)
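Equation (4.40) is straightforward to evaluate numerically. As an illustration (a Python/NumPy sketch using a simple first-order system not taken from the text), for H(ω) = 1/(jω + 1) the phase is ∠H(ω) = −tan⁻¹(ω), so the group delay should be t_g(ω) = 1/(1 + ω²): not constant, meaning this system delays different frequency components by different amounts.

```python
import numpy as np

# Group delay t_g(w) = -d/dw angle(H(w)), estimated by finite differences.
w = np.linspace(-5.0, 5.0, 10001)
H = 1.0 / (1j * w + 1.0)              # first-order lowpass, for illustration

phase = np.unwrap(np.angle(H))
tg = -np.gradient(phase, w)           # numerical version of Eq. (4.40)

tg_exact = 1.0 / (1.0 + w**2)         # analytic group delay, -d/dw(-arctan w)
print(np.max(np.abs(tg - tg_exact)))  # small numerical error
```

The `np.unwrap` step matters for systems whose phase exceeds ±π: without it, the 2π jumps in the principal-value phase would appear as spurious impulses in the computed delay.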

If t_g(ω) is constant, all the components are delayed by the same time interval t_g. But if the slope is not constant, the time delay t_g varies with frequency. This variation means that different frequency components undergo different amounts of time delay, and consequently, the output waveform will not be a replica of the input waveform. As we shall see, t_g(ω) plays an important role in bandpass systems and is called the group delay or envelope delay. Observe that constant t_d [Eq. (4.39)] implies constant t_g. Note that ∠H(ω) = φ₀ − ωt_g over a band 2W centered at frequency ω_c. Over this band, we can describe H(ω) as†

H(ω) ≈ G₀e^{j(φ₀ − ωt_g)}    (4.41)

The phase of H(ω) in Eq. (4.41), shown dotted in Fig. 4.30b, is linear but does not pass through the origin. Consider a modulated input signal z(t) = x(t) cos ω_c t. This is a bandpass signal whose spectrum is centered at ω = ω_c. The signal cos ω_c t is the carrier, and the signal x(t), which is a lowpass signal of bandwidth W (see Fig. 4.25), is the envelope of z(t).‡ We shall now show that the transmission of z(t) through H(ω) results in distortionless transmission of the envelope x(t). However, the carrier phase changes by φ₀.

[Figure 4.39 An AM signal (a) for two values of A (b, c) and the respective envelopes (d, e).]

EXAMPLE 4.23 Amplitude Modulation Sketch ...

... w(t) = 0 for |t| > T/2). It is also possible to use a window in which the weight assigned to the data within the window may not be constant. In a triangular window w_T(t), for example, the weight assigned to the data decreases linearly over the window width (shown later in Fig. 4.53b).

Consider a signal x(t) and a window function w(t). If x(t) ⟺ X(ω) and w(t) ⟺ W(ω), and if the windowed function x_w(t) ⟺ X_w(ω), then

x_w(t) = x(t)w(t)   and   X_w(ω) = (1/2π) X(ω) * W(ω)

According to the width property of convolution, it follows that the width of X_w(ω) equals the sum of the widths of X(ω) and W(ω). Thus, truncation of a signal increases its bandwidth by the amount of the bandwidth of w(t). Clearly, the truncation of a signal causes its spectrum to spread (or smear) by the amount of the bandwidth of w(t). Recall that the signal bandwidth is inversely proportional to the signal duration (width). Hence, the wider the window, the smaller its bandwidth, and the smaller the spectral spreading. This result is predictable because a wider window means that we are accepting more data (a closer approximation), which should cause smaller distortion (smaller spectral spreading). A smaller window width (a poorer approximation) causes more spectral spreading (more distortion). In addition, since W(ω) is not strictly bandlimited and its spectrum → 0 only asymptotically, the spectrum of X_w(ω) → 0 asymptotically also, at the same rate as that of W(ω), even if X(ω) is, in fact, strictly bandlimited. Thus, windowing causes the spectrum of X(ω)


to spread into the band where it is supposed to be zero. This effect is called leakage. The following example clarifies these twin effects of spectral spreading and leakage.

Let us consider x(t) = cos ω₀t and a rectangular window w_R(t) = rect(t/T), illustrated in Fig. 4.51b. The reason for selecting a sinusoid for x(t) is that its spectrum consists of spectral lines of zero width (Fig. 4.51a). Hence, this choice makes the effects of spectral spreading and leakage easily discernible. The spectrum of the truncated signal x_w(t) is the convolution of the two impulses of X(ω) with the sinc spectrum of the window function. Because the convolution of any function with an impulse is the function itself (shifted to the location of the impulse), the resulting spectrum of the truncated signal is 1/2π times the two sinc pulses at ±ω₀, as depicted in Fig. 4.51c (also see Fig. 4.26). Comparison of the spectra X(ω) and X_w(ω) reveals the effects of truncation. These are:

1. The spectral lines of X(ω) have zero width. But the truncated signal is spread out by 2π/T about each spectral line. The amount of spread is equal to the width of the mainlobe of the window spectrum. One effect of this spectral spreading (or smearing) is that if x(t) has two spectral components of frequencies differing by less than 4π/T rad/s (2/T Hz), they will be indistinguishable in the truncated signal. The result is loss of spectral resolution. We would like the spectral spreading [mainlobe width of W(ω)] to be as small as possible.

2. In addition to the mainlobe spreading, the truncated signal has sidelobes, which decay slowly with frequency. The spectrum of x(t) is zero everywhere except at ±ω₀. On the other hand, the truncated signal spectrum X_w(ω) is zero nowhere because of the sidelobes. These sidelobes decay asymptotically as 1/ω. Thus, the truncation causes spectral leakage into the band where the spectrum of the signal x(t) is zero. The peak sidelobe magnitude is 0.217 times the mainlobe magnitude (13.3 dB below the peak mainlobe magnitude). Also, the sidelobes decay at a rate of 1/ω, which is −6 dB/octave (or −20 dB/decade). This is the sidelobe rolloff rate. We want smaller sidelobes with a faster rate of decay (a high rolloff rate). Figure 4.51d, which plots |W_R(ω)| as a function of ω, clearly shows the mainlobe and sidelobe features, with the first sidelobe amplitude 13.3 dB below the mainlobe amplitude and the sidelobes decaying at a rate of −6 dB/octave (or −20 dB/decade).

So far, we have discussed the effect of signal truncation (truncation in the time domain) on the signal spectrum. Because of the time-frequency duality, the effect of spectral truncation (truncation in the frequency domain) on the signal shape is similar.

REMEDIES FOR SIDE EFFECTS OF TRUNCATION

For better results, we must try to minimize the twin side effects of truncation: spectral spreading (mainlobe width) and leakage (sidelobes). Let us consider each of these ills.

1. The spectral spread (mainlobe width) of the truncated signal is equal to the bandwidth of the window function w(t). We know that the signal bandwidth is inversely proportional to the signal width (duration). Hence, to reduce the spectral spread (mainlobe width), we need to increase the window width.

2. To improve the leakage behavior, we must search for the cause of the slow decay of the sidelobes. In Ch. 3, we saw that the Fourier spectrum decays as 1/ω for a signal with a jump discontinuity, decays as 1/ω² for a continuous signal whose first derivative is


[Figure 4.51 Windowing and its effects: (a) x(t) = cos ω₀t and its line spectrum; (b) the rectangular window; (c) the spectrum of the truncated signal, with peak sidelobe 0.217 of the mainlobe; (d) |W_R(ω)| in dB, showing the mainlobe, the −13.3 dB first sidelobe, and the −20 dB/decade rolloff.]


discontinuous, and so on.† Smoothness of a signal is measured by the number of continuous derivatives it possesses. The smoother the signal, the faster the decay of its spectrum. Thus, we can achieve a given leakage behavior by selecting a suitably smooth (tapered) window.

3. For a given window width, the remedies for the two effects are incompatible. If we try to improve one, the other deteriorates. For instance, among all the windows of a given width, the rectangular window has the smallest spectral spread (mainlobe width), but its sidelobes have a high level and decay slowly. A tapered (smooth) window of the same width has smaller and faster-decaying sidelobes, but a wider mainlobe.‡ We can, however, compensate for the increased mainlobe width by widening the window. Thus, we can remedy both side effects of truncation by selecting a suitably smooth window of sufficient width.

There are several well-known tapered-window functions, such as the Bartlett (triangular), Hanning (von Hann), Hamming, Blackman, and Kaiser windows, which truncate the data gradually. These windows offer different trade-offs with respect to spectral spread (mainlobe width), peak sidelobe magnitude, and leakage rolloff rate, as indicated in Table 4.3 [8, 9].

TABLE 4.3 Some Window Functions and Their Characteristics

No.  Window w(t)                                          Mainlobe Width       Rolloff Rate (dB/oct)  Peak Sidelobe Level (dB)
1    Rectangular: rect(t/T)                               4π/T                 −6                     −13.3
2    Bartlett: Δ(2t/T)                                    8π/T                 −12                    −26.5
3    Hanning: 0.5[1 + cos(2πt/T)]                         8π/T                 −18                    −31.5
4    Hamming: 0.54 + 0.46 cos(2πt/T)                      8π/T                 −6                     −42.7
5    Blackman: 0.42 + 0.5 cos(2πt/T) + 0.08 cos(4πt/T)    12π/T                −18                    −58.1
6    Kaiser: I₀(α√(1 − 4t²/T²))/I₀(α),  0 ≤ α ≤ 10        11.2π/T (α = 8.168)  −6                     −59.9
† This result was demonstrated for periodic signals. However, it applies to aperiodic signals also. This is because we showed in the beginning of this chapter that if x_{T₀}(t) is a periodic signal formed by periodic extension of an aperiodic signal x(t), then the spectrum of x_{T₀}(t) is (1/T₀ times) the samples of X(ω). Thus, what is true of the decay rate of the spectrum of x_{T₀}(t) is also true of the rate of decay of X(ω).
‡ A tapered window yields a wider mainlobe because the effective width of a tapered window is smaller than that of the rectangular window; see Sec. 2.6.2 [Eq. (2.47)] for the definition of effective width. Therefore, from the reciprocity of the signal width and its bandwidth, it follows that the rectangular window mainlobe is narrower than that of a tapered window.


[Figure 4.52 (a) Hanning and (b) Hamming windows.]

Observe that all windows are symmetrical about the origin (i.e., they are even functions of t). Because of this feature, W(ω) is a real function of ω; that is, ∠W(ω) is either 0 or π. Hence, the phase function of the truncated signal has a minimal amount of distortion.

Figure 4.52 shows two well-known tapered-window functions, the von Hann (or Hanning) window w_Han(x) and the Hamming window w_Ham(x). We have intentionally used the independent variable x because windowing can be performed in the time domain as well as in the frequency domain, so x could be t or ω, depending on the application.

There are hundreds of windows, all with different characteristics, and the choice depends on the particular application. The rectangular window has the narrowest mainlobe. The Bartlett (triangular) window (also called the Fejér or Cesàro window) is inferior in all respects to the Hanning window; for this reason, it is rarely used in practice. Hanning is preferred over Hamming in spectral analysis because it has faster sidelobe decay. For filtering applications, on the other hand, the Hamming window is chosen because it has the smallest sidelobe magnitude for a given mainlobe width. The Hamming window is the most widely used general-purpose window. The Kaiser window, which uses I₀(α), the modified zero-order Bessel function, is more versatile and adjustable. Selecting a proper value of α (0 ≤ α ≤ 10) allows the designer to tailor the window to suit a particular application. The parameter α controls the mainlobe-sidelobe trade-off. When α = 0, the Kaiser window is the rectangular window. For α = 5.4414, it is the Hamming window, and when α = 8.885, it is the Blackman window. As α increases, the mainlobe width increases and the sidelobe level decreases.
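The trade-offs in Table 4.3 can be observed directly. The Python/NumPy sketch below (illustrative; window length and FFT size are arbitrary choices) compares the rectangular and Hamming windows via a zero-padded FFT: the rectangular window's peak sidelobe sits about 13.3 dB below its mainlobe, while the Hamming window's sits far lower, at the cost of a wider mainlobe.

```python
import numpy as np

def peak_sidelobe_db(win, nfft=1 << 16):
    # Magnitude spectrum, zero-padded for fine frequency resolution.
    W = np.abs(np.fft.rfft(win, nfft))
    W /= W[0]                        # normalize the mainlobe peak to 1
    # Walk down the mainlobe; the first local minimum marks the sidelobes.
    k = 1
    while W[k + 1] < W[k]:
        k += 1
    return 20 * np.log10(W[k:].max())

N = 256
rect = np.ones(N)
ham = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))

print(peak_sidelobe_db(rect))   # about -13.3 dB, per row 1 of Table 4.3
print(peak_sidelobe_db(ham))    # roughly -43 dB, per row 4 of Table 4.3
```

The same function applied to Hanning, Blackman, or Kaiser samples reproduces the remaining rows of the table to within the resolution of the FFT grid.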

4.9.1 Using Windows in Filter Design

We shall design an ideal lowpass filter of bandwidth W rad/s, with frequency response H(ω), as shown in Fig. 4.53e or 4.53f. For this filter, the impulse response h(t) = (W/π) sinc(Wt) (Fig. 4.53c) is noncausal and, therefore, unrealizable. Truncation of h(t) by a suitable window (Fig. 4.53a) makes it realizable, although the resulting filter is now an approximation to the desired ideal filter.† We shall use a rectangular window w_R(t) and a triangular (Bartlett) window w_T(t) to truncate h(t), and then examine the resulting filters. The truncated impulse responses

† In addition to truncation, we need to delay the truncated function by T/2 to render it causal. However, the time delay only adds a linear phase to the spectrum without changing the amplitude spectrum. Thus, to simplify our discussion, we shall ignore the delay.

Figure 4.53 Window-based filter design.

h_R(t) = h(t)w_R(t) and h_T(t) = h(t)w_T(t) are depicted in Fig. 4.53d. Hence, the windowed filter frequency response is the convolution of H(ω) with the Fourier transform of the window, as illustrated in Figs. 4.53e and 4.53f. We make the following observations:

1. The windowed filter spectra show spectral spreading at the edges, and instead of a sudden switch there is a gradual transition from the passband to the stopband of the filter. The transition band is smaller (2π/T rad/s) for the rectangular case than for the triangular case (4π/T rad/s).

2. Although H(ω) is bandlimited, the windowed filters are not. But the stopband behavior of the triangular case is superior to that of the rectangular case. For the rectangular window, the leakage in the stopband decreases slowly (as 1/ω) in comparison to that of the triangular window (as 1/ω²). Moreover, the rectangular case has a higher peak sidelobe amplitude than that of the triangular window.

4.10 MATLAB: FOURIER TRANSFORM TOPICS

MATLAB is useful for investigating a variety of Fourier transform topics. In this section, a rectangular pulse is used to investigate the scaling property, Parseval's theorem, essential bandwidth, and spectral sampling. Kaiser window functions are also investigated.

4.10.1 The Sinc Function and the Scaling Property

As shown in Ex. 4.2, the Fourier transform of x(t) = rect(t/τ) is X(ω) = τ sinc(ωτ/2). To represent X(ω) in MATLAB, a sinc function is first required. As an alternative to the signal processing toolbox function sinc, which computes sinc(x) as sin(πx)/(πx), we create our own function that follows the conventions of this book and defines sinc(x) = sin(x)/x.

function [y] = CH4MP1(x)
% CH4MP1.m : Chapter 4, MATLAB Program 1
% Function M-file computes the sinc function, y = sin(x)/x.
y(x==0) = 1;
y(x~=0) = sin(x(x~=0))./x(x~=0);

The computational simplicity of sinc(x) = sin(x)/x is somewhat deceptive: sin(0)/0 results in a divide-by-zero error. Thus, program CH4MP1 assigns sinc(0) = 1 and computes the remaining values according to the definition. Notice that CH4MP1 cannot be directly replaced by an anonymous function. Anonymous functions cannot have multiple lines or contain certain commands such as =, if, or for. M-files, however, can be used to define an anonymous function. For example, we can represent X(ω) as an anonymous function that is defined in terms of CH4MP1.

>> X = @(omega,tau) tau*CH4MP1(omega*tau/2);
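For readers working outside MATLAB, the same convention can be sketched in Python (a sketch, not from the text; note that NumPy's np.sinc uses sin(πx)/(πx), so the book's sinc(x) = sin(x)/x must be coded explicitly):

```python
import numpy as np

def sinc(x):
    """Book convention: sinc(x) = sin(x)/x, with sinc(0) = 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    y = np.ones_like(x)
    nz = x != 0
    y[nz] = np.sin(x[nz]) / x[nz]
    return y

def X(omega, tau):
    """Fourier transform of rect(t/tau): X(omega) = tau*sinc(omega*tau/2)."""
    return tau * sinc(omega * tau / 2)
```

As with CH4MP1, the zero sample is assigned explicitly before the remaining values are computed from the definition.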

Once we have defined X(ω), it is simple to investigate the effects of scaling the pulse width τ. Consider the three cases τ = 1.0, τ = 0.5, and τ = 2.0.

>> omega = linspace(-4*pi,4*pi,1001);

Figure 4.54 Spectra X(ω) = τ sinc(ωτ/2) for τ = 1.0, τ = 0.5, and τ = 2.0.

>> plot(omega,X(omega,1),'k-',omega,X(omega,0.5),'k-.',omega,X(omega,2),'k--');
>> grid; axis tight; xlabel('\omega'); ylabel('X(\omega)');
>> legend('\tau = 1','\tau = 0.5','\tau = 2.0','Location','Best');

Figure 4.54 confirms the reciprocal relationship between signal duration and spectral bandwidth: time compression causes spectral expansion, and time expansion causes spectral compression. Additionally, spectral amplitudes are directly related to signal energy. As a signal is compressed, signal energy and thus spectral magnitude decrease. The opposite effect occurs when the signal is expanded.
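The reciprocity visible in Fig. 4.54 can also be confirmed numerically. A self-contained Python sketch (an independent illustration) locates the first spectral null of X(ω) = τ sinc(ωτ/2), which should fall at ω = 2π/τ:

```python
import numpy as np

def X(omega, tau):
    # Fourier transform of rect(t/tau), book sinc convention sin(x)/x.
    arg = omega * tau / 2
    return tau * np.where(arg == 0, 1.0, np.sin(arg) / np.where(arg == 0, 1.0, arg))

omega = np.linspace(1e-6, 8 * np.pi, 200001)

def first_null(tau):
    """Smallest omega > 0 where X(omega, tau) changes sign (expected: 2*pi/tau)."""
    vals = X(omega, tau)
    k = np.nonzero(np.diff(np.sign(vals)) != 0)[0][0]
    return omega[k]

nulls = {tau: first_null(tau) for tau in (0.5, 1.0, 2.0)}
```

Halving τ doubles the position of the first null (spectral expansion), while doubling τ halves it.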

4.10.2 Parseval's Theorem and Essential Bandwidth

Parseval's theorem concisely relates energy between the time domain and the frequency domain:

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω
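As a quick independent check of this relation (a Python sketch, not from the text), consider the unit-amplitude pulse x(t) = rect(t/τ), whose time-domain energy is simply τ:

```python
import numpy as np

tau = 1.0
E_time = tau  # energy of a unit-amplitude pulse of width tau

# Frequency-domain energy: (1/2pi) * integral of |tau*sinc(w*tau/2)|^2 dw,
# approximated by a Riemann sum over a wide but finite band.
w = np.linspace(-2000.0, 2000.0, 2_000_001)
arg = w * tau / 2
Xw = tau * np.where(arg == 0, 1.0, np.sin(arg) / np.where(arg == 0, 1.0, arg))
E_freq = np.sum(np.abs(Xw) ** 2) * (w[1] - w[0]) / (2 * np.pi)
```

The truncated integral misses only the slowly decaying 1/ω² tails, so E_freq agrees with E_time to a few parts in ten thousand.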

This too is easily verified with MATLAB. For example, a unit amplitude pulse x(t) with duration τ has energy E_x = τ. Thus,

∫_{−∞}^{∞} |X(ω)|² dω = 2πτ

Letting τ = 1, the energy of X(ω) is computed by using the quad function.

>> X_squared = @(omega,tau) (tau*CH4MP1(omega*tau/2)).^2;
>> quad(X_squared,-1e6,1e6,[],[],1)
ans = 6.2817

Although not perfect, the result of the numerical integration is consistent with the expected value of 2π ≈ 6.2832. For quad, the first argument is the function to be integrated, the next two arguments are the limits of integration, the empty square brackets indicate default values for special options, and the last argument is the secondary input τ for the anonymous function X_squared. Full format details for quad are available from MATLAB's help facilities. A more interesting problem involves computing a signal's essential bandwidth. Consider, for example, finding the essential bandwidth W, in radians per second, that contains fraction β of the energy of the square pulse x(t). That is, we want to find W such that

(1/2π) ∫_{−W}^{W} |X(ω)|² dω = βE_x

Program CH4MP2 uses a guess-and-check method to find W.
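The same search is easy to sketch in Python using bisection (an independent sketch, not the book's CH4MP2; the integration grid size is an arbitrary choice):

```python
import numpy as np

def band_energy(W, tau, npts=200001):
    """(1/2pi) * integral over [-W, W] of |tau*sinc(w*tau/2)|^2 dw."""
    w = np.linspace(-W, W, npts)
    arg = w * tau / 2
    X = tau * np.where(arg == 0, 1.0, np.sin(arg) / np.where(arg == 0, 1.0, arg))
    return np.sum(np.abs(X) ** 2) * (w[1] - w[0]) / (2 * np.pi)

def essential_bandwidth(tau, beta, tol=1e-4):
    """Find W (rad/s) holding fraction beta of the pulse energy E_x = tau."""
    lo, hi = 0.0, 2 * np.pi / tau
    while band_energy(hi, tau) < beta * tau:  # grow the bracket if needed
        hi *= 2
    while hi - lo > tol:                      # bisect: energy grows with W
        mid = 0.5 * (lo + hi)
        if band_energy(mid, tau) < beta * tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

W90 = essential_bandwidth(1.0, 0.90)  # 90% bandwidth of a unit-width pulse
```

For β = 0.90 and τ = 1, the result lands just below 2π rad/s, consistent with the familiar fact that the mainlobe (|ω| ≤ 2π/τ) holds about 90.3% of the pulse energy.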

function [W,E_W] = CH4MP2(tau,beta,tol)
% CH4MP2.m : Chapter 4, MATLAB Program 2
% Function M-file computes essential bandwidth W for square pulse.
% INPUTS:  tau = pulse width
%          beta = fraction of signal energy desired in W
%          tol = tolerance of relative energy error
% OUTPUTS: W = essential bandwidth [rad/s]
%          E_W = energy contained in bandwidth W
W = 0; step = 2*pi/tau;        % Initial guess and step values
X_squared = @(omega,tau) (tau*CH4MP1(omega*tau/2)).^2;
E = beta*tau;                  % Desired energy in W
relerr = (E-0)/E;              % Initial relative error is 100 percent
while (abs(relerr) > tol),
    if (relerr > 0),           % W too small, so ...
        W = W + step;          % ... increase W by step
    elseif (relerr < 0),       % W too large, so ...
        step = step/2;         % ... halve step and ...
        W = W - step;          % ... decrease W
    end
    E_W = quad(X_squared,-W,W,[],[],tau)/(2*pi);
    relerr = (E - E_W)/E;
end

4.10.3 Spectral Sampling

Spectral sampling is also readily investigated. Recall that if x_{T0}(t) is formed by periodic extension of the square pulse x(t) = rect(t/τ), its Fourier series coefficients are (1/T0 times) samples of X(ω), namely D_n = (τ/T0) sinc(nπτ/T0).

>> D = @(n,tau,T_0) tau/T_0*CH4MP1(n*pi*tau/T_0);
>> tau = pi; T_0 = 2*pi; n = [0:10];
>> stem(n,D(n,tau,T_0),'k.'); xlabel('n'); ylabel('D_n'); axis tight;

The results, shown in Fig. 4.55, agree with Fig. 3.14b. Doubling the period to T0 = 4π effectively doubles the density of spectral samples and halves the spectral amplitude, as shown in Fig. 4.56. As T0 increases, the spectral sampling becomes progressively finer while the amplitude becomes infinitesimal. An evolution of the Fourier series toward the Fourier integral is seen by allowing the period T0 to become large. Figure 4.57 shows the result for T0 = 40π. If T0 = τ, the signal x_{T0}(t) is a constant and the spectrum should concentrate energy at dc. In this case, the sinc function is sampled at the zero crossings and D_n = 0 for all n not equal to 0.
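The spectral-sampling computation ports directly; a Python sketch (an independent illustration using the book's sinc convention):

```python
import numpy as np

def D(n, tau, T0):
    """Fourier coefficients of the periodized pulse: D_n = (tau/T0)*sinc(n*pi*tau/T0)."""
    arg = n * np.pi * tau / T0
    return (tau / T0) * np.where(arg == 0, 1.0, np.sin(arg) / np.where(arg == 0, 1.0, arg))

n = np.arange(0, 11)
Dn_2pi = D(n, np.pi, 2 * np.pi)  # tau = pi, T0 = 2*pi (cf. Fig. 4.55)
Dn_4pi = D(n, np.pi, 4 * np.pi)  # doubling T0 halves the amplitudes (cf. Fig. 4.56)
```

D_0 drops from 0.5 to 0.25 when T0 doubles, and for T0 = 2π every even-indexed coefficient falls on a zero crossing of the sinc envelope.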

Figure 4.55 Fourier spectra for τ = π and T0 = 2π.

Figure 4.56 Fourier spectra for τ = π and T0 = 4π.

Figure 4.57 Fourier spectra for τ = π and T0 = 40π.

Only the sample corresponding to n = 0 is nonzero, indicating a dc signal, as expected. It is a simple matter to modify the previous code to verify this case.

4.10.4 Kaiser Window Functions

A window function is useful only if it can be easily computed and applied to a signal. The Kaiser window, for example, is flexible but appears rather intimidating:

w_K(t) = I0(α√(1 − (2t/T)²)) / I0(α)  for |t| ≤ T/2,  and 0 otherwise

Fortunately, the bark of a Kaiser window is worse than its bite! The function I0(x), a zero-order modified Bessel function of the first kind, can be computed according to

I0(x) = Σ_{k=0}^{∞} [ (x/2)^k / k! ]²

or, more simply, by using the MATLAB function besseli(0,x). In fact, MATLAB supports a wide range of Bessel functions, including Bessel functions of the first and second kinds (besselj and bessely), modified Bessel functions of the first and second kinds (besseli and besselk), Hankel functions (besselh), and Airy functions (airy). Program CH4MP3 computes Kaiser windows at times t by using parameters T and alpha.

function [w_K] = CH4MP3(t,T,alpha)
% CH4MP3.m : Chapter 4, MATLAB Program 3
% Function M-file computes a width-T Kaiser window using parameter alpha.
% Alpha can also be a string identifier: 'rectangular', 'Hamming', or
% 'Blackman'.
% INPUTS:  t = independent variable of the window function
%          T = window width
%          alpha = Kaiser parameter or string identifier
% OUTPUTS: w_K = Kaiser window function
if strncmpi(alpha,'rectangular',1), alpha = 0;

Figure 4.58 Special-case, unit duration Kaiser windows.

elseif strncmpi(alpha,'Hamming',3), alpha = 5.4414;
elseif strncmpi(alpha,'Blackman',1), alpha = 8.885;
elseif isa(alpha,'char')
    disp('Unrecognized string identifier.'); return
end
w_K = zeros(size(t));
i = find(abs(t)<=T/2);
w_K(i) = besseli(0,alpha*sqrt(1-(2*t(i)/T).^2))/besseli(0,alpha);

>> t = [-0.6:.001:0.6]; T = 1;
>> plot(t,CH4MP3(t,T,'r'),'k-',t,CH4MP3(t,T,'ham'),'k-.',t,CH4MP3(t,T,'b'),'k--');
>> axis([-0.6 0.6 -.1 1.1]); xlabel('t'); ylabel('w_K(t)');
>> legend('Rectangular','Hamming','Blackman','Location','EastOutside');
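A Python counterpart is straightforward, since NumPy supplies both the Bessel function np.i0 and a sampled Kaiser window np.kaiser whose shape parameter β plays the role of α (a sketch for comparison, not the book's code):

```python
import numpy as np

def kaiser_window(t, T, alpha):
    """w_K(t) = I0(alpha*sqrt(1-(2t/T)^2))/I0(alpha) for |t| <= T/2, else 0."""
    t = np.asarray(t, dtype=float)
    w = np.zeros_like(t)
    inside = np.abs(t) <= T / 2
    w[inside] = np.i0(alpha * np.sqrt(1 - (2 * t[inside] / T) ** 2)) / np.i0(alpha)
    return w

t = np.linspace(-0.5, 0.5, 11)
w_ham = kaiser_window(t, 1.0, 5.4414)  # alpha = 5.4414: Hamming-like case
ref = np.kaiser(11, 5.4414)            # NumPy's sampled Kaiser window
```

On a matching sample grid the two agree to machine precision, and α = 0 reproduces the rectangular window since I0(0) = 1.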

4.11 SUMMARY

In Ch. 3, we represented periodic signals as a sum of (everlasting) sinusoids or exponentials (Fourier series). In this chapter, we extended this result to aperiodic signals, which are represented

by the Fourier integral (instead of the Fourier series). An aperiodic signal x(t) may be regarded as a periodic signal with period T0 → ∞ so that the Fourier integral is basically a Fourier series with a fundamental frequency approaching zero. Therefore, for aperiodic signals, the Fourier spectra are continuous. This continuity means that a signal is represented as a sum of sinusoids (or exponentials) of all frequencies over a continuous frequency interval. The Fourier transform X(ω), therefore, is the spectral density (per unit bandwidth in hertz).

An ever-present aspect of the Fourier transform is the duality between time and frequency, which also implies duality between the signal x(t) and its transform X(ω). This duality arises because of near-symmetrical equations for direct and inverse Fourier transforms. The duality principle has far-reaching consequences and yields many valuable insights into signal analysis.

The scaling property of the Fourier transform leads to the conclusion that the signal bandwidth is inversely proportional to signal duration (signal width). Time shifting of a signal does not change its amplitude spectrum, but it does add a linear phase component to its spectrum. Multiplication of a signal by an exponential e^{jω0t} shifts the spectrum to the right by ω0. In practice, spectral shifting is achieved by multiplying a signal by a sinusoid such as cos ω0t (rather than the exponential e^{jω0t}). This process is known as amplitude modulation. Multiplication of two signals results in convolution of their spectra, whereas convolution of two signals results in multiplication of their spectra.

For an LTIC system with the frequency response H(ω), the input and output spectra X(ω) and Y(ω) are related by the equation Y(ω) = X(ω)H(ω). This is valid only for asymptotically stable systems.
It also applies to marginally stable systems if the input does not contain a finite-amplitude sinusoid of the natural frequency of the system. For asymptotically unstable systems, the frequency response H(ω) does not exist. For distortionless transmission of a signal through an LTIC system, the amplitude response |H(ω)| of the system must be constant, and the phase response ∠H(ω) should be a linear function of ω over a band of interest. Ideal filters, which allow distortionless transmission of a certain band of frequencies and suppress all the remaining frequencies, are physically unrealizable (noncausal). In fact, it is impossible to build a physical system with zero gain [H(ω) = 0] over a finite band of frequencies. Such systems (which include ideal filters) can be realized only with infinite time delay in the response.

The energy of a signal x(t) is equal to 1/2π times the area under |X(ω)|² (Parseval's theorem). The energy contributed by spectral components within a band Δf (in hertz) is given by |X(ω)|² Δf. Therefore, |X(ω)|² is the energy spectral density per unit bandwidth (in hertz). The energy spectral density |X(ω)|² of a signal x(t) is the Fourier transform of the autocorrelation function ψ_x(t) of the same signal x(t). Thus, a signal's autocorrelation function has a direct link to its spectral information.

The process of modulation shifts the signal spectrum to different frequencies. Modulation is used for many reasons: to transmit several messages simultaneously over the same channel for the sake of utilizing the channel's high bandwidth, to effectively radiate power over a radio link, to shift a signal spectrum at higher frequencies to overcome the difficulties associated with signal processing at lower frequencies, and to effect the exchange of transmission bandwidth and transmission power required to transmit data at a certain rate. Broadly speaking, there are two types of modulation, amplitude and angle modulation.
Each class has several subclasses. Amplitude modulation bandwidth is generally fixed. The bandwidth in angle modulation, however, is controllable. The higher the bandwidth, the more immune is the scheme to noise.

In practice, we often need to truncate data. Truncating is like viewing data through a window, which permits only certain portions of the data to be seen and hides (suppresses) the remainder. Abrupt truncation of data amounts to a rectangular window, which assigns a unit weight to data seen from the window and zero weight to the remaining data. Tapered windows, on the other hand, reduce the weight gradually from 1 to 0. Data truncation can cause some unsuspected problems. For example, in the computation of the Fourier transform, windowing (data truncation) causes spectral spreading (spectral smearing) that is characteristic of the window function used. A rectangular window results in the least spreading, but it does so at the cost of a high and oscillatory spectral leakage outside the signal band, which decays slowly as 1/ω. In comparison to a rectangular window, tapered windows, in general, have larger spectral spreading (smearing), but the spectral leakage is smaller and decays faster with frequency. If we try to reduce spectral leakage by using a smoother window, the spectral spreading increases. Fortunately, spectral spreading can be reduced by increasing the window width. Therefore, we can achieve a given combination of spectral spread (transition bandwidth) and leakage characteristics by choosing a suitable tapered window function of a sufficiently long width T.

REFERENCES
1. Churchill, R. V., and Brown, J. W., Fourier Series and Boundary Value Problems, 3rd ed., McGraw-Hill, New York, 1978.
2. Bracewell, R. N., Fourier Transform and Its Applications, rev. 2nd ed., McGraw-Hill, New York, 1986.
3. Guillemin, E. A., Theory of Linear Physical Systems, Wiley, New York, 1963.
4. Lathi, B. P., and Ding, Z., Modern Digital and Analog Communication Systems, 4th ed., Oxford University Press, New York, 2009.
5. Carson, J., "Notes on Theory of Modulation," Proc. IRE, vol. 10, pp. 57-64, February 1922.
6. Carson, J., "The Reduction of Atmospheric Disturbances," Proc. IRE, vol. 16, pp. 966-975, July 1928.
7. Armstrong, E. H., "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation," Proc. IRE, vol. 24, pp. 689-740, May 1936.
8. Hamming, R. W., Digital Filters, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 1983.
9. Harris, F. J., "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform," Proceedings of the IEEE, vol. 66, no. 1, pp. 51-83, January 1978.

PROBLEMS

4.1-1 Suppose signal x(t) = t²[u(t) − u(t − 2)] has Fourier transform X(ω). Define a 3-periodic replication of x(t) as y(t) = Σ_{n=−∞}^{∞} x(t − 3n). Determine Y_k, the Fourier series of y(t), in terms of the Fourier transform X(ω).

4.1-2 Show that for a real x(t), Eq. (4.10) can be expressed as

x(t) = (1/π) ∫_0^∞ |X(ω)| cos[ωt + ∠X(ω)] dω

This is the trigonometric form of the Fourier integral. Compare this with the compact trigonometric Fourier series.

4.1-3 Show that if x(t) is an even function of t, then

X(ω) = 2 ∫_0^∞ x(t) cos ωt dt

and if x(t) is an odd function of t, then

X(ω) = −2j ∫_0^∞ x(t) sin ωt dt

Hence, prove that if x(t) is a real and even function of t, then X(ω) is a real and even function of ω. In addition, if x(t) is a real and odd function of t, then X(ω) is an imaginary and odd function of ω.

4.1-4 A signal x(t) can be expressed as the sum of even and odd components (see Sec. 1.5.2):
(a) If x(t) ⟺ X(ω), show that for real x(t), x_e(t) ⟺ Re[X(ω)] and x_o(t) ⟺ j Im[X(ω)].
(b) Verify these results by finding the Fourier transforms of the even and odd components of the following signals: (i) u(t) and (ii) e^{−at} u(t).

4.1-5 Using Eq. (4.9), find the Fourier transforms of the signals x(t) in Fig. P4.1-5.

4.1-6 Using Eq. (4.9), find the Fourier transforms of the signals depicted in Fig. P4.1-6.

4.1-7 Use Eq. (4.10) to find the inverse Fourier transforms of the spectra in Fig. P4.1-7.

4.1-8 Use Eq. (4.10) to find the inverse Fourier transforms of the spectra in Fig. P4.1-8.

4.1-9 If x(t) ⟺ X(ω), then show that

X(0) = ∫_{−∞}^{∞} x(t) dt   and   ∫_{−∞}^{∞} X(ω) dω = 2π x(0)

Also show that

∫_{−∞}^{∞} sinc(x) dx = π   and   ∫_{−∞}^{∞} sinc²(x) dx = π

Figure P4.1-5

Figure P4.1-6

Figure P4.1-7

Figure P4.1-8

4.2-1 Sketch the following functions: (a) rect(t/2) (b) Δ(3ω/100) (c) rect((t − 10)/8) (d) sinc(πω/5) (e) sinc((ω/5) − 2π) (f) sinc(t/5) rect(t/10π)

4.2-2 Using Eq. (4.9), show that the Fourier transform of rect(t − 5) is sinc(ω/2)e^{−j5ω}. Sketch the resulting amplitude and phase spectra.

4.2-3 Using Eq. (4.10), show that the inverse Fourier transform of rect((ω − 10)/2π) is sinc(πt)e^{j10t}.

4.2-4 Find the inverse Fourier transform of X(ω) for the spectra illustrated in Fig. P4.2-4. [Hint: X(ω) = |X(ω)|e^{j∠X(ω)}. This problem illustrates how different phase spectra (both with the same amplitude spectrum) represent entirely different signals.]

Figure P4.2-4

4.2-5 (a) Can you find the Fourier transform of e^{at}u(t) when a > 1 by setting s = jω in the Laplace transform of e^{at}u(t)? Explain. (b) Find the Laplace transform of x(t) shown in Fig. P4.2-5. Can you find the Fourier transform of x(t) by setting s = jω in its Laplace transform? Explain. Verify your answer by finding the Fourier and the Laplace transforms of x(t).

Figure P4.2-5

4.3-1 Apply the duality property to the appropriate pair in Table 4.1 to show that
(a) (1/2)[δ(t) + j/πt] ⟺ u(ω)
(b) δ(t + T) + δ(t − T) ⟺ 2 cos Tω
(c) δ(t + T) − δ(t − T) ⟺ 2j sin Tω


4.3-2 A signal x(t) has Fourier transform X(ω). Determine the Fourier transform Y(ω) in terms of X(ω) for each of the following signals y(t):
(a) y(t) = (1/2)x(−2t + 3)
(b) y(t) = e^{j2πt} x(−3t − 6)

4.3-3 A signal x(t) has Fourier transform X(ω). Determine the inverse Fourier transform y(t) in terms of x(t) for each of the following spectra Y(ω):
(a) Y(ω) = (1/3)e^{−j2ω/3} X(−ω/3)
(b) Y(ω) = (1/2)e^{j2ω} X(ω/3 − 2)

4.3-4 The Fourier transform of the triangular pulse x(t) in Fig. P4.3-4 is expressed as

X(ω) = (1/ω²)(e^{jω} − jωe^{jω} − 1)

Use this information, and the time-shifting and time-scaling properties, to find the Fourier transforms of the signals x_i(t) (i = 1, 2, 3, 4, 5) shown in Fig. P4.3-4.

Figure P4.3-4

4.3-5 Using only the time-shifting property and Table 4.1, find the Fourier transforms of the signals depicted in Fig. P4.3-5.

Figure P4.3-5


Figure P4.3-7

4.3-6 Consider the fact that the τ-duration triangle function Δ(ω/τ) has inverse Fourier transform (τ/4π) sinc²(τt/4). Use the duality property to determine the Fourier transform Y(ω) of the signal y(t) = Δ(t).

(d) For y_d(t) = x²(t), sketch Y_d(ω). (e) For y_e(t) = 1 − x²(t), sketch Y_e(ω).

4.3-12

Use the time-convolution property to prove pairs 2, 4, 13, and 14 in Table 2.1 (assume λ < 0 in pair 2, λ1 and λ2 < 0 in pair 4, λ1 < 0 and λ2 > 0 in pair 13, and λ1 and λ2 > 0 in pair 14). These restrictions are placed because of the Fourier transformability issue for the signals concerned. For pair 2, you need to apply the result in Eq. (1.10).

4.3-13

A signal x(t) is bandlimited to B Hz. Show that the signal x^n(t) is bandlimited to nB Hz.

4.3-14

Find the Fourier transform of the signal in Fig. P4.3-5a by three different methods: (a) By direct integration using Eq. (4.9) (b) Using only pair 17 (Table 4.1) and the time-shifting property (c) Using the time-differentiation and time-shifting properties, along with the fact that δ(t) ⟺ 1

4.3-7 Use the time-shifting property to show that if x(t) ⟺ X(ω), then

x(t + T) + x(t − T) ⟺ 2X(ω) cos Tω

This is the dual of Eq. (4.32). Use this result and Table 4.1 to find the Fourier transforms of the signals shown in Fig. P4.3-7.

4.3-8 Prove the following results, which are duals of each other:

x(t) sin ω0t ⟺ (1/2j)[X(ω − ω0) − X(ω + ω0)]
(1/2j)[x(t + T) − x(t − T)] ⟺ X(ω) sin Tω

Use the latter result and Table 4.1 to find the Fourier transform of the signal in Fig. P4.3-8.

4.3-15 (a) Prove the frequency-differentiation property (dual of the time-differentiation property):

−jt x(t) ⟺ (d/dω) X(ω)

(b) Use this property and pair 1 (Table 4.1) to determine the Fourier transform of te^{−at} u(t).

Figure P4.3-8

4.3-9 The signals in Fig. P4.3-9 are modulated signals with carrier cos 10t. Find the Fourier transforms of these signals by using the appropriate properties of the Fourier transform and Table 4.1. Sketch the amplitude and phase spectra for Figs. P4.3-9a and P4.3-9b.

4.3-10 Use the frequency-shifting property and Table 4.1 to find the inverse Fourier transform of the spectra depicted in Fig. P4.3-10.

4.3-11 Let X(ω) = rect(ω) be the Fourier transform of a signal x(t).
(a) For y_a(t) = x(t) * x(t), sketch Y_a(ω).
(b) For y_b(t) = x(t) * x(t/2), sketch Y_b(ω).
(c) For y_c(t) = 2x(t), sketch Y_c(ω).

4.3-16 Adapt the method of Ex. 4.17 and use the frequency-differentiation property (see Prob. 4.3-15) and other properties to find the inverse Fourier transform x(t) of the triangular spectrum X(ω) = Δ(ω/2).

4.3-17 Adapt the method of Ex. 4.17 and use the frequency-differentiation property (see Prob. 4.3-15) and other properties to find the inverse Fourier transform x(t) of the spectrum X(ω) = π rect(ω/π).

4.4-1 For a stable LTIC system with transfer function

H(s) = 1/(s + 1)

Figure P4.3-9

Figure P4.3-10

find the (zero-state) response if the input x(t) is (a) e^{−2t}u(t) (b) e^{−t}u(t) (c) e^{t}u(−t) (d) u(t)

4.4-2

(c) Sketch the system's magnitude response |H(ω)| over −10π ≤ ω ≤ 10π. (d) Is the system h(t) distortionless? Explain. (e) Determine y(t).

4.4-4

A stable LTIC system is specified by the frequency response

H(ω) = −1/(jω − 2)

Find the impulse response of this system and show that this is a noncausal system. Find the (zero-state) response of this system if the input x(t) is (a) e^{−t}u(t) (b) e^{t}u(−t)

4.4-3 A periodic signal x(t) = 1 + 2 cos(5πt) + 3 sin(8πt) is applied to an LTIC system with impulse response h(t) = 8 sinc(4πt) cos(2πt) to produce the output y(t) = x(t) * h(t). (a) Determine ω0, the fundamental radian frequency of x(t). (b) Determine X(ω), the Fourier transform of x(t).

A periodic delta train x(t) = Σ_{n=−∞}^{∞} δ(t − πn) is applied to an LTIC system with impulse response h(t) = sin(3t) sinc²(t) to produce the zero-state output y(t) = x(t) * h(t).

(a) Determine ω0, the fundamental radian frequency of x(t). (b) Determine X(ω), the Fourier transform of x(t).

(c) Sketch the system's magnitude response |H(ω)| over −10 ≤ ω ≤ 10. (d) Is the system h(t) distortionless? Explain. (e) Determine y(t).

4.4-5

Signals x1(t) = 10⁴ rect(10⁴t) and x2(t) = δ(t) are applied at the inputs of the ideal lowpass filters H1(ω) = rect(ω/40,000π) and H2(ω) = rect(ω/20,000π) (Fig. P4.4-5). The outputs y1(t) and y2(t) of these filters are multiplied to obtain the signal y(t) = y1(t)y2(t). (a) Sketch X1(ω) and X2(ω).

(b) Sketch H1(ω) and H2(ω). (c) Sketch Y1(ω) and Y2(ω). (d) Find the bandwidths of y1(t), y2(t), and y(t).

Figure P4.4-5

4.4-6 A lowpass system time constant is often defined as the width of its unit impulse response h(t) (see Sec. 2.6.2). An input pulse p(t) to this system acts like an impulse of strength equal to the area of p(t) if the width of p(t) is much smaller than the system time constant, and provided p(t) is a lowpass pulse, implying that its spectrum is concentrated at low frequencies. Verify this behavior by considering a system whose unit impulse response is h(t) = rect(t/10⁻³). The input pulse is a triangle pulse p(t) = Δ(t/10⁻⁶). Show that the system response to this pulse is very nearly the system response to the input Aδ(t), where A is the area under the pulse p(t).

4.4-7 A lowpass system time constant is often defined as the width of its unit impulse response h(t) (see Sec. 2.6.2). An input pulse p(t) to this system passes practically without distortion if the width of p(t) is much greater than the system time constant, and provided p(t) is a lowpass pulse, implying that its spectrum is concentrated at low frequencies. Verify this behavior by considering a system whose unit impulse response is h(t) = rect(t/10⁻³). The input pulse is a triangle pulse p(t) = Δ(t). Show that the system output to this pulse is very nearly kp(t), where k is the system gain to a dc signal, that is, k = H(0).

4.4-8 A causal signal h(t) has a Fourier transform H(ω). If R(ω) and X(ω) are the real and the imaginary parts of H(ω), that is, H(ω) = R(ω) + jX(ω), then show that

R(ω) = (1/π) ∫_{−∞}^{∞} X(y)/(ω − y) dy   and   X(ω) = −(1/π) ∫_{−∞}^{∞} R(y)/(ω − y) dy

assuming that h(t) has no impulse at the origin. This pair of integrals defines the Hilbert transform. [Hint: Let h_e(t) and h_o(t) be the even and odd components of h(t). Use the results in Prob. 4.1-4. See Fig. 1.24 for the relationship between h_e(t) and h_o(t).] This problem states one of the important properties of causal systems: the real and imaginary parts of the frequency response of a causal system are related. If one specifies the real part, the imaginary part cannot be specified independently. The imaginary part is predetermined by the real part, and vice versa. This result also leads to the conclusion that the magnitude and angle of H(ω) are related, provided all the poles and zeros of H(ω) lie in the LHP.

4.5-1 Consider a filter with the frequency response

H(ω) = e^{−(kω² + jωt0)}

Show that this filter is physically unrealizable by using the time-domain criterion [noncausal h(t)] and the frequency-domain (Paley–Wiener) criterion. Can this filter be made approximately realizable by choosing t0 sufficiently large? Use your own (reasonable) criterion of approximate realizability to determine t0. [Hint: Use pair 22 in Table 4.1.]

4.5-2 Show that a filter with frequency response

H(ω) = [2(10⁵)/(ω² + 10¹⁰)] e^{−jωt0}

is unrealizable. Can this filter be made approximately realizable by choosing a sufficiently large t0? Use your own (reasonable) criterion of approximate realizability to determine t0.

4.5-3 Determine whether the filters with the following frequency response H(ω) are physically realizable. If they are not realizable, can they be realized approximately by allowing a finite time delay in the response? (a) 10⁻⁶ sinc(10⁻⁶ω) (b) 10⁻⁴ Δ(ω/40,000π) (c) 2π δ(ω)

4.5-4 Consider signal x1(t), its Fourier transform X1(f), and several other signals, as shown in

Fig. P4.5-4. Notice that the spectra are drawn as a function of hertzian frequency f rather than radian frequency ω. (a) Accurately sketch X2(f), the Fourier transform of x2(t). (b) Accurately sketch x3(t), the inverse Fourier transform of X3(f). (c) The signal x4(t) = x1(t) + x2(t) is passed through an ideal lowpass filter with 3 Hz cutoff to produce output y4(t). Accurately sketch y4(t).

4.6-2 Show that the energy of a Gaussian pulse

x(t) = (1/(σ√2π)) e^{−t²/2σ²}

is 1/(2σ√π). Verify this result by using Parseval's theorem to derive the energy from X(ω). [Hint: See pair 22 in Table 4.1. Use the fact that ∫_{−∞}^{∞} e^{−x²/2} dx = √2π.]

4.6-3 Use Parseval's theorem of Eq. (4.45) to show that

∫_{−∞}^{∞} sinc²(kx) dx = π/k

4.6-4 A lowpass signal x(t) is applied to a squaring device. The squarer output x²(t) is applied to a lowpass filter of bandwidth Δf (in hertz) (Fig. P4.6-4). Show that if Δf is very small (Δf → 0), then the filter output is a dc signal y(t) ≈ 2E_x Δf. [Hint: If x²(t) ⟺ A(ω), then show that Y(ω) ≈ [4πA(0)Δf]δ(ω) as Δf → 0. Now, show that A(0) = E_x.]

4.6-5 Generalize Parseval's theorem to show that for real, Fourier-transformable signals x1(t) and x2(t):

∫_{−∞}^{∞} x1(t)x2(t) dt = (1/2π) ∫_{−∞}^{∞} X1(ω)X2(−ω) dω

4.6-6


Show that

∫_{−∞}^{∞} sinc(Wt − mπ) sinc(Wt − nπ) dt = 0 for m ≠ n, and π/W for m = n

[Hint: Recognize that

sinc(Wt − kπ) = sinc[W(t − kπ/W)] ⟺ (π/W) rect(ω/2W) e^{−jkπω/W}

Use this fact and the result in Prob. 4.6-5.]

Figure P4.5-4

4.6-1 Define x(t) = (1/2π) sinc(t/2) with Fourier transform X(ω) = rect(ω). Use Parseval's theorem to determine ∫_{−∞}^{∞} sinc²(t − 2) dt.

4.6-7 (a) What does it mean to compute the 95% essential bandwidth B of a signal x(t) with Fourier transform X(ω)?
(b) Determine the 95% essential bandwidth B of a signal with spectrum X(ω) = rect(ω).

(c) Determine the 95% essential bandwidth B of a signal with spectrum X(ω) = Δ(ω).

Figure P4.6-4

4.6-8 Using a 95% energy criterion, determine the essential bandwidth B of a signal that has a Fourier transform given by X(ω) = e^{−|ω|}.

4.6-9 For the signal

x(t) = 2a/(t² + a²)

determine the essential bandwidth B (in hertz) of x(t) such that the energy contained in the spectral components of x(t) of frequencies below B Hz is 99% of the signal energy E_x.

4.7-1 For each of the following baseband signals (i) m(t) = cos 1000t, (ii) m(t) = 2 cos 1000t + cos 2000t, and (iii) m(t) = cos 1000t cos 3000t:
(a) Sketch the spectrum of m(t).
(b) Sketch the spectrum of the DSB-SC signal m(t) cos 10,000t.
(c) Identify the upper sideband (USB) and the lower sideband (LSB) spectra.
(d) Identify the frequencies in the baseband, and the corresponding frequencies in the DSB-SC, USB, and LSB spectra. Explain the nature of frequency shifting in each case.

4.7-2 A message signal m(t) with spectrum M(ω) = Δ(ω/(2π · 10⁴)) is to be transmitted using a communication system. Assume

4.7-3

m(t)

a

-21rB

0

(a )

Figure P4.7-3

2TrB w -

all single-sideband systems have suppressed carriers. (a) What is the hertzian bandwidth of m(1)? (b) Sketch the spectrum of the transmitted signal if the communication system is DSB-SC with we = 2rr I 00,000. (c) Sketch the spectrum of the transmitted signaJ if the communication system is AM with We = 2rr I 00,000 and a mod­ ulation index of µ, = I. What is the corresponding carrier amplitude A? (d) Sketch the spectrum of the transmitted signal if the communication system is USB with We = 2rr 100,000. (e) Sketch the spectrum of the transmitted signal if the communication system is LSB with We = 2rr 100,000. (f) Suppose we want to transmit m(t) on each of an FDM system's four channels: DSB-SC at carrier w1, AM (µ, = l ) at carrier w2, USB at carrier w3, and LSB at carrier w4. Determine carrier frequencies w1 < wi < WJ < w4 so that the FDM spectrum begins at a frequency of 100,000 Hz with 5,000 Hz deadbands separating adjacent messages. What is the end hertzian frequency of the FDM signal? You are asked to design a DSB-SC modulator to generate a modulated signal km(r) cos a>cf, where m(r) is a signal bandlimited to B Hz (Fig. P4.7-3a). Figure P4.7-3b shows a

km(t) cos We i ___ Bandpass ____ filter C

b

cos3 wet

(Carrier) (b)

436

CHAPTER 4 CONTINUOUS-TI ME SIGNAL ANALYSIS: THE FOURIER TRANSF ORM Show that this scheme can generate an amplitude-modulated signal kcos Wet. Deter­ mine the value of k. Show that the same scheme can also be used for demodu lation provided the bandpass filter in Fig. P4.7-4� is replaced by a lowpass (or baseband) filter.

DSB-SC modulator available in the stock­ room. The bandpass filter is tuned to We and has a bandwidth of 28 Hz. The carrier generator available generates not cos we t, but cos3 We i. (a) Explain whether you would be able to generate the desired signal using only this equipment. 1f so, what is the value of k? (b) Determine the signal spectra at points b and c, and indjcate the frequency bands occupied by these spectra. (c) What is the minimum usable value of We ? (d) Would this scheme work if the carrier generator output were cos 2 2B or Ws > 4rr B). The result is X(w), consisting of repetitions of X(w) with a finjte bandgap between successive cycles, as illustrated in Fig. 5.7c. Now, we can recover X((/)) from X(w) using a lowpass filter with a gradual cutoff characteristic, shown dotted in Fig. 5.7c. But even in this case, if the unwanted spectrum is to be suppressed, the filter gain must be zero beyond some frequency (see Fig. 5.7c). According to the Paley-Wiener criterion [Eq. (4.43)], itis impossible to realize even thls filter. The only advantage in this case is that the required filter can be closely approximated with a smaller time delay. All this means that it is impossible in practice to recover a bandlimited signal x(t) exactly from its samples, even if the sampling rate is higber than the Nyquist rate. However, as the sampling rate increases, the recovered signal approaches the desired signal more closely. THE TREACHERY OF ALIASING There is another fundamental practical difficulty in reconstructing a signal from its samples. The sampling theorem was proved on the assumption that the signal x(t) is bandlimited. All practical signals are timelimited; that is, they are of finite duration or width. We can demonstrate (see Prob. 5.2-20) that a signal cannot be timelimited and bandlimited simultaneously. If a signal is timelimited, it cannot be bandlirnjted, and vice versa (but it can be simultaneously nontimelimited and nonbandlirnited). 
Clearly, all practical signals, which are necessarily timelimited, are nonbandlimited, as shown in Fig. 5.8a; they have infinite bandwidth, and the spectrum X̄(ω) consists of overlapping cycles of X(ω) repeating every fs Hz (the sampling frequency), as illustrated in Fig. 5.8b.† Because of the infinite bandwidth in this case, the spectral overlap is

† Figure 5.8b shows that, of the infinite number of repeating cycles, only the neighboring spectral cycles overlap. This is a somewhat simplified picture. In reality, all the cycles overlap and interact with every other cycle because of the infinite width of all practical signal spectra. Fortunately, all practical spectra also must decay at higher frequencies. This results in an insignificant amount of interference from cycles other than the immediate neighbors. When such an assumption is not justified, aliasing computations become a little more involved.

5.2 Signal Reconstruction

Figure 5.7 (a) Signal reconstruction from its samples: a sampler followed by an ideal lowpass filter with cutoff B Hz. (b) Spectrum of a signal sampled at the Nyquist rate. (c) Spectrum of a signal sampled above the Nyquist rate.

unavoidable, regardless of the sampling rate. Sampling at a higher rate reduces but does not eliminate overlapping between repeating spectral cycles. Because of the overlapping tails, X̄(ω) no longer has complete information about X(ω), and it is no longer possible, even theoretically, to recover x(t) exactly from the sampled signal x̄(t). If the sampled signal is passed through an ideal lowpass filter of cutoff frequency fs/2 Hz, the output is not X(ω) but Xₐ(ω) (Fig. 5.8c), which is a version of X(ω) distorted as a result of two separate causes:

1. The loss of the tail of X(ω) beyond |f| > fs/2 Hz.
2. The reappearance of this tail inverted or folded onto the spectrum.

Note that the spectra cross at frequency fs/2 = 1/2T Hz. This frequency is called the folding frequency. The spectrum may be viewed as if the lost tail is folding back onto itself at the folding frequency. For instance, a component of frequency (fs/2) + fz shows up as, or "impersonates," a component of lower frequency (fs/2) − fz in the reconstructed signal. Thus, the components of frequencies above fs/2 reappear as components of frequencies below fs/2. This tail inversion, known as spectral folding or aliasing, is shown shaded
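The impersonation just described is easy to verify numerically. The following Python sketch (an illustration with made-up numbers, not part of the text's MATLAB examples) samples one tone at (fs/2) + fz and another at (fs/2) − fz and confirms that the two sample sequences coincide:

```python
import math

fs = 100.0          # sampling rate, Hz (illustrative choice)
fz = 10.0           # offset above/below the folding frequency fs/2
f_hi = fs / 2 + fz  # 60 Hz component, above the folding frequency
f_lo = fs / 2 - fz  # 40 Hz "impersonator" below it

# Samples of the two tones taken at t = n/fs
hi = [math.cos(2 * math.pi * f_hi * n / fs) for n in range(32)]
lo = [math.cos(2 * math.pi * f_lo * n / fs) for n in range(32)]

# The sample sequences are indistinguishable: f_hi aliases to f_lo
max_diff = max(abs(a - b) for a, b in zip(hi, lo))
print(max_diff)  # ~0 (within floating-point roundoff)
```

The identity behind this is cos[2π(fs/2 + fz)n/fs] = (−1)ⁿ cos(2πfz n/fs) = cos[2π(fs/2 − fz)n/fs], so no processing of the samples can tell the two tones apart.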

CHAPTER 5 SAMPLING

Figure 5.8 Aliasing effect. (a) Spectrum of a practical signal x(t). (b) Spectrum of sampled x(t). (c) Reconstructed signal spectrum. (d) Sampling scheme using an anti-aliasing filter. (e) Sampled signal spectrum (dotted) and the reconstructed signal spectrum (solid) when an anti-aliasing filter is used.

in Fig. 5.8b and also in Fig. 5.8c. In the process of aliasing, not only are we losing all the components of frequencies above the folding frequency fs/2 Hz, but these very components reappear (aliased) as lower-frequency components, as shown in Figs. 5.8b and 5.8c. Such aliasing destroys the integrity of the frequency components below the folding frequency fs/2, as depicted in Fig. 5.8c.

The aliasing problem is analogous to that of an army with a platoon that has secretly defected to the enemy side. The platoon is, however, ostensibly loyal to the army. The army is in double jeopardy. First, the army has lost this platoon as a fighting force. In addition, during actual fighting, the army will have to contend with sabotage by the defectors and will have to find another loyal platoon to neutralize the defectors. Thus, the army has lost two platoons in nonproductive activity.

DEFECTORS ELIMINATED: THE ANTI-ALIASING FILTER

If you were the commander of the betrayed army, the solution to the problem would be obvious. As soon as the commander got wind of the defection, he would incapacitate, by whatever means, the defecting platoon before the fighting begins. This way he loses only one (the defecting) platoon. This is a partial solution to the double jeopardy of betrayal and sabotage, a solution that partly rectifies the problem and cuts the losses to half.

We follow exactly the same procedure. The potential defectors are all the frequency components beyond the folding frequency fs/2 = 1/2T Hz. We should eliminate (suppress) these components from x(t) before sampling x(t). Such suppression of higher frequencies can be accomplished by an ideal lowpass filter of cutoff fs/2 Hz, as shown in Fig. 5.8d. This is called the anti-aliasing filter. Figure 5.8d also shows that anti-aliasing filtering is performed before sampling. Figure 5.8e shows the sampled signal spectrum (dotted) and the reconstructed signal X_aa(ω) when an anti-aliasing scheme is used.
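The platoon-incapacitating strategy can be sketched numerically. In the Python fragment below (an illustration with arbitrary tone frequencies; the "ideal" anti-aliasing filter is modeled analytically by simply deleting the out-of-band term before sampling), a 70 Hz component of an unfiltered signal masquerades, after sampling at 100 Hz, as a 30 Hz component:

```python
import math

fs = 100.0                  # sampling rate (Hz); folding frequency fs/2 = 50 Hz
f_low, f_high = 10.0, 70.0  # f_high lies above fs/2 and aliases to fs - 70 = 30 Hz

def x(t, antialias):
    """Two-tone test signal. The ideal anti-aliasing filter is modeled
    analytically: it removes the component above fs/2 before sampling."""
    s = math.cos(2 * math.pi * f_low * t)
    if not antialias:
        s += math.cos(2 * math.pi * f_high * t)
    return s

raw = [x(k / fs, antialias=False) for k in range(64)]
filt = [x(k / fs, antialias=True) for k in range(64)]

# Without the filter, the 70 Hz tone is indistinguishable in the samples from
# a 30 Hz tone: the defector impersonates a loyal low-frequency component.
alias = [math.cos(2 * math.pi * f_low * k / fs)
         + math.cos(2 * math.pi * (fs - f_high) * k / fs) for k in range(64)]
print(max(abs(a - b) for a, b in zip(raw, alias)))   # ~0
# With the filter, only the legitimate 10 Hz component survives in `filt`.
```

With pre-filtering, the 70 Hz component is lost, but the 10 Hz component below the folding frequency is left uncorrupted, which is exactly the half-loss bargain described above.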
An anti-aliasing filter essentially bandlimits the signal x(t) to fs/2 Hz. This way, we lose only the components beyond the folding frequency fs/2 Hz. These suppressed components now cannot reappear to corrupt the components of frequencies below the folding frequency. Clearly, use of an anti-aliasing filter results in a reconstructed signal spectrum X_aa(ω) that matches X(ω) for frequencies below fs/2 Hz.


Digit  Binary equivalent
0      0000
1      0001
2      0010
3      0011
4      0100
5      0101
6      0110
7      0111
8      1000
9      1001
10     1010
11     1011
12     1100
13     1101
14     1110
15     1111

Figure 5.14 Analog-to-digital (A/D) conversion of a signal: (a) quantizing and (b) pulse coding.

5.3 Analog-to-Digital (A/D) Conversion

A typical audio signal's bandwidth is about 15 kHz, but subjective tests show that signal articulation (intelligibility) is not affected if all the components above 3400 Hz are suppressed [3]. Since the objective in telephone communication is intelligibility rather than high fidelity, the components above 3400 Hz are eliminated by a lowpass filter.† The resulting signal is then sampled at a rate of 8000 samples/s (8 kHz). This rate is intentionally kept higher than the Nyquist sampling rate of 6.8 kHz to avoid the unrealizable filters required for signal reconstruction. Each sample is finally quantized into 256 levels (L = 256), which requires a group of eight binary pulses to encode each sample (2⁸ = 256). Thus, a digitized telephone signal consists of data amounting to 8 × 8000 = 64,000 bit/s, requiring 64,000 binary pulses per second for its transmission.

The compact disc (CD), a high-fidelity application of A/D conversion, requires an audio signal bandwidth of 20 kHz. Although the Nyquist sampling rate is only 40 kHz, an actual sampling rate of 44.1 kHz is used for the reason mentioned earlier. The signal is quantized into a rather large number of levels (L = 65,536) to reduce quantizing error. The binary-coded samples are then recorded on the CD.

A HISTORICAL NOTE

The binary system of representing any number by using 1s and 0s was invented in India by Pingala (ca. 200 BCE). It was again worked out independently in the West by Gottfried Wilhelm Leibniz (1646-1716). He felt a spiritual significance in this discovery, reasoning that 1, representing unity, was clearly a symbol for God, while 0 represented nothingness. He reasoned that if all numbers can be represented merely by the use of 1 and 0, this surely proves that God created the universe out of nothing!
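The telephone and CD figures quoted above follow from one small calculation: bits per sample times samples per second. A Python sketch (an illustration; the CD figure is per channel and assumes the 16-bit, 44.1 kHz parameters quoted above):

```python
import math

def pcm_bit_rate(fs_hz, levels):
    """Bits per second for PCM: sampling rate times bits per sample,
    where the bit count is the smallest integer with 2**bits >= levels."""
    bits = math.ceil(math.log2(levels))
    return fs_hz * bits

# Telephone speech: 8 kHz sampling, L = 256 levels -> 8 bits -> 64 kbit/s
print(pcm_bit_rate(8000, 256))      # 64000
# CD audio (one channel): 44.1 kHz sampling, L = 65,536 levels -> 16 bits
print(pcm_bit_rate(44100, 65536))   # 705600
```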

EXAMPLE 5.5 ADC Bit Number and Bit Rate

A signal x(t) bandlimited to 3 kHz is sampled at a rate 33⅓% higher than the Nyquist rate. The maximum acceptable error in the sample amplitude (the maximum error due to quantization) is 0.5% of the peak amplitude V. The quantized samples are binary-coded. Find the required sampling rate, the number of bits required to encode each sample, and the bit rate of the resulting PCM signal.

The Nyquist sampling rate is f_Nyq = 2 × 3000 = 6000 Hz (samples/s). The actual sampling rate is fs = 6000 × (1⅓) = 8000 Hz. The quantization step is Δ, and the maximum quantization error is ±Δ/2, where Δ = 2V/L. The maximum error due to quantization, Δ/2, should be no greater than 0.5% of the signal peak amplitude V. Therefore,

† Components below 300 Hz may also be suppressed without affecting the articulation.

Δ/2 = V/L ≤ (0.5/100)V  ⇒  L ≥ 200

For binary coding, L must be a power of 2. Hence, the next higher value of L that is a power of 2 is L = 256. Because log₂ 256 = 8, we need 8 bits to encode each sample. Therefore, the bit rate of the PCM signal is

8000 × 8 = 64,000 bits/s
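The steps of this example can be scripted in a few lines of Python (an illustrative sketch; the variable names are ours, not the text's):

```python
import math

f_nyq = 2 * 3000                   # Nyquist rate for a 3 kHz signal: 6000 Hz
fs = round(f_nyq * (1 + 1 / 3))    # sampled 33 1/3 % above Nyquist -> 8000 Hz
L_min = math.ceil(1 / 0.005)       # delta/2 = V/L <= 0.005 V  =>  L >= 200
bits = math.ceil(math.log2(L_min)) # next power of 2 is 2**8 = 256 levels
bit_rate = fs * bits
print(fs, L_min, bits, bit_rate)   # 8000 200 8 64000
```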

DRILL 5.5 Bit Number and Bit Rate for ASCII

The American Standard Code for Information Interchange (ASCII) has 128 characters, which are binary-coded. A certain computer generates 100,000 characters per second. Show that

(a) 7 bits (binary digits) are required to encode each character.

(b) 700,000 bits/s are required to transmit the computer output.
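Both parts of the drill follow from the same ceil(log₂ L) calculation used in Example 5.5, as a quick Python check shows:

```python
import math

n_chars = 128      # size of the ASCII character set
chars_per_sec = 100_000

bits_per_char = math.ceil(math.log2(n_chars))  # 2**7 = 128 -> 7 bits
bit_rate = bits_per_char * chars_per_sec
print(bits_per_char, bit_rate)   # 7 700000
```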

5.4 DUAL OF TIME SAMPLING: SPECTRAL SAMPLING

As in other cases, the sampling theorem has its dual. In Sec. 5.1, we discussed the time-sampling theorem and showed that a signal bandlimited to B Hz can be reconstructed from the signal samples taken at a rate of fs > 2B samples/s. Note that the signal spectrum exists over the frequency range (in hertz) of −B to B. Therefore, 2B is the spectral width (not the bandwidth, which is B) of the signal. This fact means that a signal x(t) can be reconstructed from samples taken at a rate fs greater than the spectral width of X(ω) in hertz (fs > 2B).

We now prove the dual of the time-sampling theorem: the spectral sampling theorem, which applies to timelimited signals (the dual of bandlimited signals). A timelimited signal x(t) exists only over a finite interval of τ seconds, as shown in Fig. 5.15a. Generally, a timelimited signal is characterized by x(t) = 0 for t < T₁ and t > T₂ (assuming T₂ > T₁). The signal width or duration is τ = T₂ − T₁ seconds. The spectral sampling theorem states that the spectrum X(ω) of a signal x(t) timelimited to a duration of τ seconds can be reconstructed from the samples of X(ω) taken at a rate R samples/Hz, where R > τ (the signal width or duration in seconds). Figure 5.15a shows a timelimited signal x(t) and its Fourier transform X(ω). Although X(ω) is in general complex, it is adequate for our line of reasoning to show X(ω) as a real function.

Figure 5.15 Periodic repetition of a signal amounts to sampling its spectrum.

We now construct x_{T₀}(t), a periodic signal formed by repeating x(t) every T₀ seconds (T₀ > τ), as depicted in Fig. 5.15b. This periodic signal can be expressed by the exponential Fourier series

x_{T₀}(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t},   ω₀ = 2π/T₀

where (assuming T₀ > τ)

Dₙ = (1/T₀) ∫₀^{T₀} x(t) e^{−jnω₀t} dt = (1/T₀) ∫₀^{τ} x(t) e^{−jnω₀t} dt   (5.8)

From Eq. (5.8), it follows that

Dₙ = (1/T₀) X(nω₀)

This result indicates that the coefficients of the Fourier series for x_{T₀}(t) are (1/T₀) times the sample values of the spectrum X(ω) taken at intervals of ω₀. This means that the spectrum of the periodic signal x_{T₀}(t) is the sampled spectrum X̄(ω), as illustrated in Fig. 5.15b. Now, as long as T₀ > τ, the successive cycles of x(t) appearing in x_{T₀}(t) do not overlap, and x(t) can be recovered from x_{T₀}(t). Such recovery implies indirectly that X(ω) can be reconstructed from its samples. These samples are separated by the fundamental frequency f₀ = 1/T₀ Hz of the periodic signal x_{T₀}(t). Hence, the condition for recovery is T₀ > τ; that is,

f₀ < 1/τ Hz

Therefore, to be able to reconstruct the spectrum X(ω) from the samples of X(ω), the samples should be taken at frequency intervals f₀ < 1/τ Hz. If R is the sampling rate (samples/Hz), then

R = 1/f₀ > τ  samples/Hz
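The key relation Dₙ = (1/T₀)X(nω₀) can be verified numerically. The sketch below (Python; a unit-amplitude pulse of width τ = 1 repeated every T₀ = 2 is an assumed example, not the book's) computes the Fourier series coefficients of the periodic repetition by direct integration over one period and compares them with samples of the pulse's Fourier transform:

```python
import cmath, math

tau, T0 = 1.0, 2.0        # pulse width and repetition period (T0 > tau)
w0 = 2 * math.pi / T0     # fundamental frequency of the repetition

def X(w):
    """Fourier transform of the pulse x(t) = 1 on [0, tau), 0 elsewhere."""
    return tau if w == 0 else (1 - cmath.exp(-1j * w * tau)) / (1j * w)

# Fourier series coefficients D_n of the T0-periodic repetition, computed by
# numerical integration of x(t) e^{-j n w0 t} over one period.
dt = T0 / 20_000
for n in range(-3, 4):
    Dn = sum(cmath.exp(-1j * n * w0 * k * dt) * dt
             for k in range(int(round(tau / dt)))) / T0
    assert abs(Dn - X(n * w0) / T0) < 1e-3   # D_n = (1/T0) X(n w0)
print("D_n = (1/T0) X(n w0) verified for n = -3..3")
```

In other words, repeating the pulse every T₀ seconds has sampled its spectrum at intervals of ω₀ = 2π/T₀, exactly as Fig. 5.15 depicts.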

SPECTRAL INTERPOLATION

Consider a signal timelimited to τ seconds and centered at t = T_c. We now show that the spectrum X(ω) of x(t) can be reconstructed from the samples of X(ω). For this case, using the dual of the approach employed to derive the signal interpolation formula in Eq. (5.6), we obtain the spectral interpolation formula†

X(ω) = Σₙ X(nω₀) sinc(ωT₀/2 − nπ) e^{−j(ω−nω₀)T_c},   ω₀ = 2π/T₀,  T₀ > τ   (5.9)

For the case in Fig. 5.15, T_c = T₀/2. If the pulse x(t) were to be centered at the origin, then T_c = 0, and the exponential term at the extreme right in Eq. (5.9) would vanish. In such a case, Eq. (5.9) would be the exact dual of Eq. (5.6).

EXAMPLE 5.6 Spectral Sampling and Interpolation

The spectrum X(ω) of a unit-duration signal x(t), centered at the origin, is sampled at intervals of 1 Hz or 2π rad/s (the Nyquist rate). The samples are

X(0) = 1  and  X(±2πn) = 0,  n = 1, 2, 3, ...

Find x(t).

We use the interpolation formula of Eq. (5.9) (with T_c = 0) to construct X(ω) from its samples. Since all but one of the Nyquist samples are zero, only one term (corresponding to n = 0) in the summation on the right-hand side of Eq. (5.9) survives. Thus, with X(0) = 1 and τ = T₀ = 1, we obtain

X(ω) = sinc(ω/2)  and  x(t) = rect(t)

For a signal of unit duration, this is the only spectrum with the sample values X(0) = 1 and X(2πn) = 0 (n ≠ 0). No other spectrum satisfies these conditions.

† This can be obtained by observing that the Fourier transform of x_{T₀}(t) is 2π Σₙ Dₙ δ(ω − nω₀) [see Eq. (4.23)]. We can recover x(t) from x_{T₀}(t) by multiplying the latter with rect((t − T_c)/T₀), whose Fourier transform is T₀ sinc(ωT₀/2) e^{−jωT_c}. Hence, X(ω) is 1/2π times the convolution of these two Fourier transforms, which yields Eq. (5.9).
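A quick check of the example's conclusion: the transform of rect(t) is indeed sinc(ω/2), and its Nyquist-rate samples (spacing 2π rad/s, i.e., 1 Hz) vanish everywhere except at ω = 0. A Python sketch:

```python
import math

def X(w):
    """Fourier transform of rect(t) (unit pulse on |t| < 1/2): sinc(w/2)."""
    return 1.0 if w == 0 else math.sin(w / 2) / (w / 2)

# Samples of sinc(w/2) taken every 2*pi rad/s (1 Hz), as in Example 5.6
samples = [X(2 * math.pi * n) for n in range(-5, 6)]
print([round(s, 12) for s in samples])  # zero everywhere except n = 0 -> 1
```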

5.5 NUMERICAL COMPUTATION OF THE FOURIER TRANSFORM: THE DISCRETE FOURIER TRANSFORM

Numerical computation of the Fourier transform of x(t) requires sample values of x(t), because a digital computer can work only with discrete data (a sequence of numbers). Moreover, a computer can compute X(ω) only at some discrete values of ω [samples of X(ω)]. We therefore need to relate the samples of X(ω) to samples of x(t). This task can be accomplished by using the results of the two sampling theorems developed in Secs. 5.1 and 5.4.

We begin with a timelimited signal x(t) (Fig. 5.16a) and its spectrum X(ω) (Fig. 5.16b). Since x(t) is timelimited, X(ω) is nonbandlimited. For convenience, we shall show all spectra as functions of the frequency variable f (in hertz) rather than ω. According to the sampling theorem, the spectrum X̄(ω) of the sampled signal x̄(t) consists of X(ω) repeating every fs Hz, where fs = 1/T, as depicted in Fig. 5.16d.† In the next step, the sampled signal in Fig. 5.16c is repeated periodically every T₀ seconds, as illustrated in Fig. 5.16e. According to the spectral sampling theorem, such an operation results in sampling the spectrum at a rate of T₀ samples/Hz. This sampling rate means that the samples are spaced at f₀ = 1/T₀ Hz, as depicted in Fig. 5.16f.

The foregoing discussion shows that when a signal x(t) is sampled and then periodically repeated, the corresponding spectrum is also sampled and periodically repeated. Our goal is to relate the samples of x(t) to the samples of X(ω).

NUMBER OF SAMPLES

One interesting observation from Figs. 5.16e and 5.16f is that N₀, the number of samples of the signal in Fig. 5.16e in one period T₀, is identical to N₀′, the number of samples of the spectrum in Fig. 5.16f in one period fs. To see this, we notice that

N₀ = T₀/T  and  N₀′ = fs/f₀

Using the relations fs = 1/T and f₀ = 1/T₀, we see that

N₀′ = fs/f₀ = T₀/T = N₀   (5.10)

ALIASING AND LEAKAGE IN NUMERICAL COMPUTATION

Figure 5.16f shows the presence of aliasing in the samples of the spectrum X(ω). This aliasing error can be reduced as much as desired by increasing the sampling frequency fs (decreasing the sampling interval T = 1/fs). The aliasing can never be eliminated for timelimited x(t), however, because its spectrum X(ω) is nonbandlimited. Had we started with a signal having a bandlimited spectrum X(ω), there would be no aliasing in the spectrum in Fig. 5.16f. Unfortunately, such a signal is nontimelimited, and its repetition (in Fig. 5.16e) would result in signal overlapping (aliasing in the time domain). In this case, we would have to contend with errors in the signal samples. In other words, in computing the direct or inverse Fourier transform numerically, we can reduce the error as much as we wish, but the error can never be eliminated. This is true

† There is a multiplying constant 1/T for the spectrum in Fig. 5.16d [see Eq. (5.2)], but this is irrelevant to our discussion here.

Figure 5.16 Relationship between samples of x(t) and X(ω).

of numerical computation of the direct and inverse Fourier transforms, regardless of the method used. For example, if we determine the Fourier transform by direct numerical integration, using Eq. (4.9), there will be an error because the interval of integration Δt can never be made zero. Similar remarks apply to numerical computation of the inverse transform. Therefore, we should always keep in mind the nature of this error in our results.

In our discussion (Fig. 5.16), we assumed x(t) to be a timelimited signal. If x(t) were not timelimited, we would need to timelimit it, because numerical computations can work only with finite data. Furthermore, this data truncation causes error because of spectral spreading (smearing) and leakage, as discussed in Sec. 4.9. The leakage also causes aliasing. Leakage can be reduced by using a tapered window for signal truncation, but this choice increases spectral spreading or smearing. Spectral spreading can be reduced by increasing the window width (i.e., more data), which increases T₀ and reduces f₀ (increases spectral or frequency resolution).

PICKET FENCE EFFECT

The numerical computation method yields only the uniform sample values of X(ω). The major peaks or valleys of X(ω) can lie between two samples and may remain hidden, giving a false picture of reality. Viewing samples is like viewing the signal and its spectrum through a "picket fence" with upright posts that are very wide and placed close together. What is hidden behind the pickets is much more than what we can see. Such misleading results can be avoided by using a sufficiently large N₀, the number of samples, to increase resolution. We can also use zero padding (discussed later) or the spectral interpolation formula [Eq. (5.9)] to determine the values of X(ω) between samples.

POINTS OF DISCONTINUITY

If x(t) or X(ω) has a jump discontinuity at a sampling point, the sample value should be taken as the average of the values on the two sides of the discontinuity, because the Fourier representation at a point of discontinuity converges to the average value.

DERIVATION OF THE DISCRETE FOURIER TRANSFORM (DFT)

If x(nT) and X(rω₀) are the nth and rth samples of x(t) and X(ω), respectively, then we define new variables x_n and X_r as

x_n = T x(nT) = (T₀/N₀) x(nT)

and

X_r = X(rω₀)   (5.11)

where

ω₀ = 2πf₀ = 2π/T₀

We shall now show that x_n and X_r are related by the following equations:

X_r = Σ_{n=0}^{N₀−1} x_n e^{−jrΩ₀n}   (5.12)

and

x_n = (1/N₀) Σ_{r=0}^{N₀−1} X_r e^{jrΩ₀n}   (5.13)

where

Ω₀ = ω₀T = 2π/N₀

These equations define the direct and the inverse discrete Fourier transforms, with X_r the direct discrete Fourier transform (DFT) of x_n, and x_n the inverse discrete Fourier transform (IDFT) of X_r. The notation x_n ⟺ X_r is also used to indicate that x_n and X_r are a DFT pair. Remember that x_n is T₀/N₀ times the nth sample of x(t), and X_r is the rth sample of X(ω). Knowing the sample values of x(t), we can use the DFT to compute the sample values of X(ω), and vice versa. Note, however, that x_n is a function of n (n = 0, 1, 2, ..., N₀ − 1) rather than of t, and that X_r is a function of r (r = 0, 1, 2, ..., N₀ − 1) rather than of ω. Moreover, both x_n and X_r are periodic sequences of period N₀ (Figs. 5.16e, 5.16f). Such sequences are called N₀-periodic sequences.

The proof of the DFT relationships in Eqs. (5.12) and (5.13) follows directly from the results of the sampling theorem. The sampled signal x̄(t) (Fig. 5.16c) can be expressed as

x̄(t) = Σ_{n=0}^{N₀−1} x(nT) δ(t − nT)

Since δ(t − nT) ⟺ e^{−jnωT}, applying the Fourier transform yields

X̄(ω) = Σ_{n=0}^{N₀−1} x(nT) e^{−jnωT}

But from Fig. 5.16f [or Eq. (5.2)], it is clear that over the interval |ω| ≤ ωs/2, X̄(ω), the Fourier transform of x̄(t), is X(ω)/T, assuming negligible aliasing. Hence,

X(ω) = T X̄(ω) = T Σ_{n=0}^{N₀−1} x(nT) e^{−jnωT}

† In Eqs. (5.12) and (5.13), the summation is performed from 0 to N₀ − 1. It is shown in Sec. 10.1.1 [Eqs. (10.5) and (10.6)] that the summation may be performed over any successive N₀ values of n or r.

and

X_r = X(rω₀) = T Σ_{n=0}^{N₀−1} x(nT) e^{−jnrω₀T}   (5.14)

If we let ω₀T = Ω₀, then from Eq. (5.10),

Ω₀ = ω₀T = 2πf₀T = 2π/N₀

Also, from Eq. (5.11), T x(nT) = x_n. Therefore, Eq. (5.14) becomes

X_r = Σ_{n=0}^{N₀−1} x_n e^{−jrΩ₀n},   Ω₀ = 2π/N₀

This is Eq. (5.12), which we set out to prove. The inverse transform relationship of Eq. (5.13) can be derived by using a similar procedure with the roles of t and ω reversed, but here we shall use a more direct proof. To prove Eq. (5.13), we multiply both sides of Eq. (5.12) by e^{jmΩ₀r} and sum over r:

Σ_{r=0}^{N₀−1} X_r e^{jmΩ₀r} = Σ_{r=0}^{N₀−1} [ Σ_{n=0}^{N₀−1} x_n e^{−jrΩ₀n} ] e^{jmΩ₀r}

By interchanging the order of summation on the right-hand side, we have

Σ_{r=0}^{N₀−1} X_r e^{jmΩ₀r} = Σ_{n=0}^{N₀−1} x_n [ Σ_{r=0}^{N₀−1} e^{j(m−n)Ω₀r} ]

As the footnote below readily shows, the inner sum on the right-hand side is zero for n ≠ m and is N₀ when n = m.† Thus, the outer sum has only one nonzero term, the one with n = m, and it equals N₀ x_n|_{n=m} = N₀ x_m. Therefore,

x_m = (1/N₀) Σ_{r=0}^{N₀−1} X_r e^{jmΩ₀r},   Ω₀ = 2π/N₀

which is Eq. (5.13). Because X_r is N₀-periodic, we need to determine its values over any one period. It is customary to determine X_r over the range (0, N₀ − 1), rather than over the range (−N₀/2, (N₀/2) − 1).‡

† We show that

Σ_{n=0}^{N₀−1} e^{jkΩ₀n} = N₀ for k = 0, ±N₀, ±2N₀, ...; and 0 otherwise   (5.15)

Recall that Ω₀N₀ = 2π, so e^{jkΩ₀n} = 1 when k = 0, ±N₀, ±2N₀, .... Hence, the sum on the left-hand side of Eq. (5.15) is N₀ for these values of k. To compute the sum for other values of k, we note that it is a geometric progression with common ratio α = e^{jkΩ₀}. Therefore (see Sec. B.8.3),

Σ_{n=0}^{N₀−1} e^{jkΩ₀n} = (e^{jkΩ₀N₀} − 1)/(e^{jkΩ₀} − 1) = 0,   since e^{jkΩ₀N₀} = e^{j2πk} = 1

CHOICE OF T AND T₀
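Equations (5.12) and (5.13) can be implemented directly, as in the O(N₀²) Python sketch below (practical work uses the FFT of Sec. 5.6 instead). Applying the IDFT to the DFT of an arbitrary sequence recovers it exactly, which is what the orthogonality result of Eq. (5.15) guarantees:

```python
import cmath, math

def dft(x):
    """Direct DFT, Eq. (5.12): X_r = sum_n x_n e^{-j r Omega0 n}."""
    N = len(x)
    W = 2 * math.pi / N                       # Omega0 = 2*pi/N0
    return [sum(x[n] * cmath.exp(-1j * r * W * n) for n in range(N))
            for r in range(N)]

def idft(X):
    """Inverse DFT, Eq. (5.13): x_n = (1/N0) sum_r X_r e^{j r Omega0 n}."""
    N = len(X)
    W = 2 * math.pi / N
    return [sum(X[r] * cmath.exp(1j * r * W * n) for r in range(N)) / N
            for n in range(N)]

x = [0.5, 1.0, -0.25, 0.0, 2.0, -1.0, 0.0, 0.75]   # arbitrary test sequence
x_back = idft(dft(x))
print(max(abs(a - b) for a, b in zip(x, x_back)))   # ~0: IDFT inverts DFT
```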

In DFT computation, we first need to select suitable values for N₀ and T or T₀. For this purpose, we begin by deciding on B, the essential bandwidth (in hertz) of the signal. The sampling frequency fs must be at least 2B; that is,

fs = 1/T ≥ 2B

Moreover, since the sampling interval is T = 1/fs,

T ≤ 1/2B   (5.16)

Once we pick B, we can choose T according to Eq. (5.16). Also,

f₀ = 1/T₀   (5.17)

where f₀ is the frequency resolution [separation between samples of X(ω)]. Hence, if f₀ is given, we can pick T₀ according to Eq. (5.17). Knowing T₀ and T, we determine N₀ from

N₀ = T₀/T

ZERO PADDING

Recall that observing X_r is like observing the spectrum X(ω) through a picket fence. If the frequency sampling interval f₀ is not sufficiently small, we could miss some significant details and obtain a misleading picture. To obtain a higher number of samples, we need to reduce f₀. Because f₀ = 1/T₀, a higher number of samples requires us to increase the value of T₀, the period of repetition for x(t). This option increases N₀, the number of samples of x(t), by adding dummy samples of value 0. This addition of dummy samples is known as zero padding. Thus, zero padding increases the number of samples and may help in getting a better idea of the spectrum X(ω) from its samples X_r. To continue with our picket fence analogy, zero padding is like using more, and narrower, pickets.

‡ The DFT of Eq. (5.12) and the IDFT of Eq. (5.13) represent a transform in their own right, and they are exact. There is no approximation. However, x_n and X_r, thus obtained, are only approximations to the actual samples of a signal x(t) and of its Fourier transform X(ω).

ZERO PADDING DOES NOT IMPROVE ACCURACY OR RESOLUTION

Actually, we are not observing X(ω) through a picket fence. We are observing a distorted version of X(ω) resulting from the truncation of x(t). Hence, we should keep in mind that even if the fence were transparent, we would see a reality distorted by aliasing. Seeing through the picket fence just gives us an imperfect view of the imperfectly represented reality. Zero padding only allows us to look at more samples of that imperfect reality. It can never reduce the imperfection in what is behind the fence. The imperfection, which is caused by aliasing, can be lessened only by reducing the sampling interval T. Observe that reducing T also increases N₀, the number of samples, and is like increasing the number of pickets while reducing their width. But in this case, the reality behind the fence is also better dressed and we see more of it.
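The picket-fence claim can be made concrete: if a sequence is zero-padded to twice its length, every other sample of the padded DFT is exactly an original DFT sample; the padding only fills in values between the old pickets. A Python sketch with arbitrary test data:

```python
import cmath, math

def dft(x):
    """Direct DFT of Eq. (5.12)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * r * n / N) for n in range(N))
            for r in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 0.25, 0.0, -0.5, 1.5]
X = dft(x)
X_pad = dft(x + [0.0] * len(x))    # double T0 by appending zero samples

# Every other sample of the padded spectrum is an original sample: zero
# padding only interpolates between the old picket-fence samples.
err = max(abs(X_pad[2 * r] - X[r]) for r in range(len(x)))
print(err)   # ~0
```

The interpolated in-between values carry no new information about x(t); only a smaller sampling interval T (more pickets and a better-dressed reality) reduces the underlying aliasing error.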

EXAMPLE 5.7 Number of Samples and Frequency Resolution

A signal x(t) has a duration of 2 ms and an essential bandwidth of 10 kHz. It is desirable to have a frequency resolution of 100 Hz in the DFT (f₀ = 100). Determine N₀.

To have f₀ = 100 Hz, the effective signal duration T₀ must be

T₀ = 1/f₀ = 1/100 = 10 ms

Since the signal duration is only 2 ms, we need zero padding over 8 ms. Also, B = 10,000 Hz. Hence, fs = 2B = 20,000 Hz and T = 1/fs = 50 µs. Furthermore,

N₀ = fs/f₀ = 20,000/100 = 200

The fast Fourier transform (FFT) algorithm (discussed later; see Sec. 5.6) is used to compute the DFT, where it proves convenient (although not necessary) to select N₀ as a power of 2; that is, N₀ = 2ⁿ (n an integer). Let us choose N₀ = 256. Increasing N₀ from 200 to 256 can be used to reduce aliasing error (by reducing T), to improve resolution (by increasing T₀ using zero padding), or a combination of both.

Reducing Aliasing Error. We maintain the same T₀ so that f₀ = 100. Hence,

fs = N₀f₀ = 256 × 100 = 25,600 Hz  and  T = 1/fs ≈ 39 µs

Thus, increasing N₀ from 200 to 256 permits us to reduce the sampling interval T from 50 µs to 39 µs while maintaining the same frequency resolution (f₀ = 100).

Improving Resolution. Here, we maintain the same T = 50 µs, which yields

T₀ = N₀T = 256(50 × 10⁻⁶) = 12.8 ms  and  f₀ = 1/T₀ = 78.125 Hz

Thus, increasing N₀ from 200 to 256 can improve the frequency resolution from 100 to 78.125 Hz while maintaining the same aliasing error (T = 50 µs).

Combination of Reducing Aliasing Error and Improving Resolution. To simultaneously reduce aliasing error and improve resolution, we could choose T ≈ 45 µs and T₀ ≈ 11.5 ms so that f₀ = 86.96 Hz. Many other combinations exist as well.
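The bookkeeping of this example is easily scripted (a Python sketch; the variable names are ours):

```python
duration = 2e-3       # signal duration: 2 ms
B = 10_000            # essential bandwidth: 10 kHz
f0 = 100              # desired frequency resolution: 100 Hz

T0 = 1 / f0           # required effective duration: 10 ms
padding = T0 - duration   # zero padding needed: 8 ms
fs = 2 * B            # minimum sampling rate: 20 kHz
T = 1 / fs            # sampling interval: 50 us
N0 = round(T0 / T)    # number of samples

print(N0)             # 200
```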

EXAMPLE 5.8 DFT to Compute the Fourier Transform of an Exponential

Use the DFT to compute (samples of) the Fourier transform of e^{−2t} u(t). Plot the resulting Fourier spectra.

We first determine T and T_0. The Fourier transform of e^{−2t} u(t) is 1/(jω + 2). This lowpass signal is not bandlimited. In Sec. 4.6, we used the energy criterion to compute the essential bandwidth of a signal. Here, we shall present a simpler, but workable, alternative to the energy criterion. The essential bandwidth of a signal will be taken as the frequency at which |X(ω)| drops to 1% of its peak value (see the footnote on p. 387). In this case, the peak value occurs at ω = 0, where |X(0)| = 0.5. Observe that

|X(ω)| = 1/√(ω² + 4) ≈ 1/ω,    ω ≫ 2

Also, 1% of the peak value is 0.01 × 0.5 = 0.005. Hence, the essential bandwidth B is at ω = 2πB, where

|X(ω)| ≈ 1/(2πB) = 0.005  ⟹  B = 100/π Hz

and from Eq. (5.16),

T = 1/(2B) = π/200 = 0.015708 s

Had we used the 1% energy criterion to determine the essential bandwidth, following the procedure in Ex. 4.20, we would have obtained B = 20.26 Hz, which is somewhat smaller than the value just obtained by using the 1% amplitude criterion.

The second issue is to determine T_0. Because the signal is not timelimited, we have to truncate it at T_0 such that x(T_0) ≪ 1. A reasonable choice would be T_0 = 4 because x(4) = e⁻⁸ = 0.000335 ≪ 1. The result is N_0 = T_0/T = 254.6, which is not a power of 2. Hence, we choose T_0 = 4 and T = 0.015625 = 1/64, yielding N_0 = 256, which is a power of 2.

Note that there is a great deal of flexibility in determining T and T_0, depending on the accuracy desired and the computational capacity available. We could just as well have chosen T = 0.03125, yielding N_0 = 128, although this choice would have given a slightly higher aliasing error.
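The parameter selection above is easy to reproduce numerically. The following Python sketch (an illustrative translation; the book's own sessions use MATLAB) recomputes B, T, and N_0 from the 1% amplitude criterion:

```python
import math

# 1% amplitude criterion for x(t) = exp(-2t)u(t), where |X(w)| ~ 1/w for w >> 2
peak = 0.5                      # |X(0)| = 1/2
w_ess = 1 / (0.01 * peak)       # solve 1/w = 0.005  ->  w = 200 rad/s
B = w_ess / (2 * math.pi)       # essential bandwidth B = 100/pi Hz

T = 1 / (2 * B)                 # Eq. (5.16): T = pi/200 = 0.015708 s
T0 = 4                          # truncation point: x(4) = e^-8 << 1
print(T0 / T)                   # about 254.6, not a power of 2

# shrink T slightly so that N0 becomes a power of 2
N0 = 256
T = T0 / N0                     # 1/64 = 0.015625 s
```

The final adjustment (fixing N_0 = 256 and recomputing T) mirrors the trade-off discussed in the text: a slightly smaller T costs nothing and makes N_0 a power of 2 for the FFT.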

5.5 Numerical Computation of the Fourier Transform: The Discrete Fourier Transform


Because the signal has a jump discontinuity at t = 0, the first sample (at t = 0) is 0.5, the average of the values on the two sides of the discontinuity. We compute X_r (the DFT) from the samples of e^{−2t} u(t) according to Eq. (5.12). Note that X_r is the rth sample of X(ω), and these samples are spaced at f_0 = 1/T_0 = 0.25 Hz (ω_0 = π/2 rad/s). Because X_r is N_0 periodic, X_r = X_{r+256}, so that X_{256} = X_0. Hence, we need to plot X_r over the range r = 0 to 255 (not 256). Moreover, because of this periodicity, X_{−r} = X_{−r+256}, and the values of X_r over the range r = −127 to −1 are identical to those over the range r = 129 to 255. Thus, X_{−127} = X_{129}, X_{−126} = X_{130}, ..., X_{−1} = X_{255}. In addition, because of the property of conjugate symmetry of the Fourier transform, X_{−r} = X_r*, it follows that X_{−1} = X_1*, X_{−2} = X_2*, ..., X_{−128} = X_{128}*. Thus, we need X_r only over the range r = 0 to N_0/2 (128 in this case).

Figure 5.17 shows the computed plots of |X_r| and ∠X_r. The exact spectra are depicted by continuous curves for comparison. Note the nearly perfect agreement between the two sets of spectra. We have depicted the plot of only the first 28 points rather than all 128 points, which would have made the figure very crowded, resulting in loss of clarity. The points are at intervals of 1/T_0 = 1/4 Hz or ω_0 = 1.5708 rad/s. The 28 samples, therefore, exhibit the plots over the range ω = 0 to ω = 28(1.5708) ≈ 44 rad/s or 7 Hz.
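These claims are simple to verify numerically. The NumPy sketch below (an illustrative translation; the book works in MATLAB) forms the samples per Eq. (5.12), computes the DFT, and checks both the agreement with the exact spectrum X(ω) = 1/(jω + 2) and the conjugate symmetry X_{−r} = X_r*:

```python
import numpy as np

T0, N0 = 4, 256
T = T0 / N0
t = np.arange(N0) * T
x = T * np.exp(-2 * t)                  # samples scaled by T, per Eq. (5.12)
x[0] /= 2                               # midpoint value at the t = 0 jump
Xr = np.fft.fft(x)

# exact spectrum at the DFT frequencies w_r = r * 2*pi/T0
r = np.arange(N0 // 2)
exact = 1 / (1j * r * 2 * np.pi / T0 + 2)
max_err = np.max(np.abs(Xr[:N0 // 2] - exact))

# conjugate symmetry X_{N0-r} = X_r* (real input)
sym_err = np.max(np.abs(Xr[1:] - np.conj(Xr[-1:0:-1])))
```

The residual `max_err` reflects the small aliasing and truncation errors discussed in the text; the symmetry error is at machine-precision level.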

Figure 5.17 Discrete Fourier transform of an exponential signal e^{−2t} u(t). [FFT values of |X(ω)| and ∠X(ω) shown against the exact spectra.]

In this example, we knew X(ω) beforehand; hence, we could make intelligent choices for B (or the sampling frequency f_s). In practice, we generally do not know X(ω) beforehand. In fact, that is the very thing we are trying to determine. In such a case, we must make an intelligent guess for B or f_s from circumstantial evidence. We should then continue reducing the value of T and recomputing the transform until the result stabilizes within the desired number of significant digits.

USING MATLAB TO COMPUTE AND PLOT THE RESULTS

Let us now use MATLAB to confirm the results of this example. First, parameters are defined and MATLAB's fft command is used to compute the DFT.

>> T_0 = 4; N_0 = 256; T = T_0/N_0;
>> t = (0:T:T*(N_0-1))'; x = T*exp(-2*t); x(1) = x(1)/2;
>> X_r = fft(x); r = [-N_0/2:N_0/2-1]'; omega_r = r*2*pi/T_0;

The true Fourier transform is also computed for comparison.

>> omega = linspace(-pi/T,pi/T,5001); X = 1./(j*omega+2);

For clarity, we display the spectrum over a restricted frequency range.

>> subplot(1,2,1); stem(omega_r,fftshift(abs(X_r)),'k.');
>> line(omega,abs(X),'color',[0 0 0]);
>> axis([-0.01 44 -0.01 0.51]); xlabel('\omega'); ylabel('|X(\omega)|');
>> subplot(1,2,2); stem(omega_r,fftshift(angle(X_r)),'k.');
>> line(omega,angle(X),'color',[0 0 0]); axis([-0.01 44 -pi/2-0.01 0.01]);
>> xlabel('\omega'); ylabel('\angle X(\omega)');

The results, shown in Fig. 5.18, match the earlier results shown in Fig. 5.17.

Figure 5.18 MATLAB-computed DFT of an exponential signal e^{−2t} u(t).


EXAMPLE 5.9 DFT to Compute the Fourier Transform of a Rectangular Pulse

Use the DFT to compute the Fourier transform of 8 rect(t). This gate function and its Fourier transform are illustrated in Figs. 5.19a and 5.19b.

To determine the value of the sampling interval T, we must first decide on the essential bandwidth B. In Fig. 5.19b, we see that X(ω) decays rather slowly with ω. Hence, the essential bandwidth B is rather large. For instance, at B = 15.5 Hz (97.39 rad/s), X(ω) = −0.1643, which is about 2% of the peak at X(0). Hence, the essential bandwidth is well above 16 Hz if we use the 1% of the peak amplitude criterion for computing the essential bandwidth. However, we shall deliberately take B = 4 for two reasons: to show the effect of aliasing, and because the use of B > 4 would give an enormous number of samples, which could not be conveniently displayed on the page without losing sight of the essentials. Thus, we shall intentionally accept approximation to graphically clarify the concepts of the DFT.

The choice of B = 4 results in the sampling interval T = 1/(2B) = 1/8. Looking again at the spectrum in Fig. 5.19b, we see that the choice of the frequency resolution f_0 = 1/4 Hz is reasonable. Such a choice gives us four samples in each lobe of X(ω). In this case, T_0 = 1/f_0 = 4 seconds and N_0 = T_0/T = 32. The duration of x(t) is only 1 second. We must repeat it every 4 seconds (T_0 = 4), as depicted in Fig. 5.19c, and take samples every 1/8 second. This choice yields 32 samples (N_0 = 32). Also,

x_n = T x(nT) = (1/8) x(nT)

The result, shown in Fig. 5.23, matches the earlier result shown in Fig. 5.22d. Recall, this DFT-based approach shows the samples y_n of the filter output y(t) (sampled in this case at a rate T = 1/8) over 0 ≤ n ≤ N_0 − 1 = 31 when the input pulse x(t) is periodically replicated to form samples x_n (see Fig. 5.19c).

Figure 5.23 Using MATLAB and the DFT to determine filter output.


5.6 THE FAST FOURIER TRANSFORM (FFT)

The number of computations required in performing the DFT was dramatically reduced by an algorithm developed by Cooley and Tukey in 1965 [5]. This algorithm, known as the fast Fourier transform (FFT), reduces the number of computations from something on the order of N_0² to N_0 log N_0. To compute one sample X_r from Eq. (5.12), we require N_0 complex multiplications and N_0 − 1 complex additions. To compute N_0 such values (X_r for r = 0, 1, ..., N_0 − 1), we require a total of N_0² complex multiplications and N_0(N_0 − 1) complex additions. For a large N_0, these computations can be prohibitively time-consuming, even for a high-speed computer. The FFT algorithm is what made the use of the Fourier transform accessible for digital signal processing.

HOW DOES THE FFT REDUCE THE NUMBER OF COMPUTATIONS?

It is easy to understand the magic of the FFT. The secret is in the linearity of the Fourier transform and also of the DFT. Because of linearity, we can compute the Fourier transform of a signal x(t) as a sum of the Fourier transforms of segments of x(t) of shorter duration. The same principle applies to the computation of the DFT. Consider a signal of length N_0 = 16 samples. As seen earlier, DFT computation of this sequence requires N_0² = 256 multiplications and N_0(N_0 − 1) = 240 additions. We can split this sequence into two shorter sequences, each of length 8. To compute the DFT of each of these segments, we need 64 multiplications and 56 additions. Thus, we need a total of 128 multiplications and 112 additions. Suppose we split the original sequence into four segments of length 4 each. To compute the DFT of each segment, we require 16 multiplications and 12 additions. Hence, we need a total of 64 multiplications and 48 additions. If we split the sequence into eight segments of length 2 each, we need 4 multiplications and 2 additions for each segment, resulting in a total of 32 multiplications and 16 additions. Thus, we have been able to reduce the number of multiplications from 256 to 32 and the number of additions from 240 to 16. Moreover, some of these multiplications turn out to be multiplications by 1 or −1. All this fantastic economy in the number of computations is realized by the FFT without any approximation! The values obtained by the FFT are identical to those obtained by the DFT. In this example, we considered a relatively small value of N_0 = 16. The reduction in the number of computations is much more dramatic for higher values of N_0.

The FFT algorithm is simplified if we choose N_0 to be a power of 2, although such a choice is not essential. For convenience, we define

W_{N_0} = e^{−j2π/N_0}

so that

X_r = Σ_{n=0}^{N_0−1} x_n W_{N_0}^{nr},    0 ≤ r ≤ N_0 − 1    (5.21)

and

x_n = (1/N_0) Σ_{r=0}^{N_0−1} X_r W_{N_0}^{−nr},    0 ≤ n ≤ N_0 − 1    (5.22)

Although there are many variations of the Cooley–Tukey algorithm, these can be grouped into two basic types: decimation in time and decimation in frequency.
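As a check on Eq. (5.21), a direct O(N_0²) evaluation can be compared against a library FFT; the two agree to machine precision, illustrating that the FFT involves no approximation. A NumPy sketch (illustrative only; the book's sessions use MATLAB):

```python
import numpy as np

def dft_direct(x):
    """Brute-force evaluation of Eq. (5.21): N0 multiplies per output sample."""
    N0 = len(x)
    n = np.arange(N0)
    W = np.exp(-2j * np.pi / N0)              # W_{N0}
    return np.array([np.sum(x * W**(n * r)) for r in range(N0)])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
X_slow = dft_direct(x)
X_fast = np.fft.fft(x)                        # same values, far fewer operations
```

The direct loop performs the N_0² = 256 multiplications counted in the text; the FFT reaches the same numbers with on the order of N_0 log N_0 operations.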


THE DECIMATION-IN-TIME ALGORITHM

Here, we divide the N_0-point data sequence x_n into two (N_0/2)-point sequences consisting of even- and odd-numbered samples, respectively, as follows:

x_0, x_2, x_4, ..., x_{N_0−2}    (sequence g_n)
x_1, x_3, x_5, ..., x_{N_0−1}    (sequence h_n)

Then, from Eq. (5.21),

X_r = Σ_{n=0}^{(N_0/2)−1} x_{2n} W_{N_0}^{2nr} + Σ_{n=0}^{(N_0/2)−1} x_{2n+1} W_{N_0}^{(2n+1)r}

Also, since W_{N_0}² = W_{N_0/2}, we have

X_r = Σ_{n=0}^{(N_0/2)−1} x_{2n} W_{N_0/2}^{nr} + W_{N_0}^{r} Σ_{n=0}^{(N_0/2)−1} x_{2n+1} W_{N_0/2}^{nr}
    = G_r + W_{N_0}^{r} H_r,    0 ≤ r ≤ N_0 − 1    (5.23)

where G_r and H_r are the (N_0/2)-point DFTs of the even- and odd-numbered sequences, g_n and h_n, respectively. Also, G_r and H_r, being (N_0/2)-point DFTs, are (N_0/2) periodic. Hence,

G_{r+(N_0/2)} = G_r    and    H_{r+(N_0/2)} = H_r    (5.24)

Moreover,

W_{N_0}^{r+(N_0/2)} = −W_{N_0}^{r}    (5.25)

From Eqs. (5.23), (5.24), and (5.25), we obtain

X_{r+(N_0/2)} = G_r − W_{N_0}^{r} H_r    (5.26)

This property can be used to reduce the number of computations. We can compute the first N_0/2 points (0 ≤ r ≤ (N_0/2) − 1) of X_r by using Eq. (5.23) and the last N_0/2 points by using Eq. (5.26).
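Equations (5.23) and (5.26) translate directly into a recursive radix-2 FFT: compute G_r and H_r from the half-length sequences, then combine with the twiddle factors W_{N_0}^r. The sketch below (illustrative Python, assuming N_0 is a power of 2; not the book's code) follows exactly this structure:

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT per Eqs. (5.23) and (5.26).
    Assumes len(x) is a power of 2."""
    x = np.asarray(x, dtype=complex)
    N0 = len(x)
    if N0 == 1:
        return x
    G = fft_dit(x[0::2])                      # DFT of even-numbered samples g_n
    H = fft_dit(x[1::2])                      # DFT of odd-numbered samples h_n
    W = np.exp(-2j * np.pi * np.arange(N0 // 2) / N0)   # twiddle factors W_{N0}^r
    return np.concatenate([G + W * H,         # X_r,            Eq. (5.23)
                           G - W * H])        # X_{r + N0/2},   Eq. (5.26)

x = np.random.default_rng(1).standard_normal(32)
```

Each combination step reuses G_r and H_r for two output points, which is precisely where the computational savings counted earlier come from.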

>> stem(f-1/(T*2),fftshift(angle(X)),'k.');
>> axis([-25 25 -1.1*pi 1.1*pi]); xlabel('f [Hz]'); ylabel('\angle X(f)');

Figure 5.26 |X(f)| computed over (0 ≤ f < 50) by using fft.

5.7 MATLAB: The Discrete Fourier Transform

Figure 5.27 |X(f)| displayed over (−25 ≤ f < 25) by using fftshift.

Figure 5.28 ∠X(f) displayed over (−25 ≤ f < 25).

Since the signal is real, the phase spectrum necessarily has odd symmetry. Additionally, the phase at ± 10 Hz is zero, as expected for a zero-phase cosine function. More interesting, however, are the phase values found at the remaining frequencies. Does a simple cosine really have such complicated phase characteristics? The answer, of course, is no. The magnitude plot of Fig. 5.27 helps identify the problem: there is zero content at frequencies other than± 10 Hz. Phase computations are not reliable at points where the magnitude response is zero. One way to remedy this problem is to assign a phase of zero when the magnitude response is near or at zero.
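That remedy is simple to implement: zero the phase wherever the magnitude falls below a small threshold relative to its peak. A NumPy sketch of the idea (illustrative; the book's sessions use MATLAB):

```python
import numpy as np

T, N0 = 1 / 50, 50
n = np.arange(N0)
X = np.fft.fft(T * np.cos(2 * np.pi * 10 * n * T))   # 10 Hz falls exactly on a bin

phase = np.angle(X)
# suppress meaningless phase where the magnitude is (numerically) zero
phase[np.abs(X) < 1e-8 * np.abs(X).max()] = 0
```

After cleaning, the phase is essentially zero at every frequency, as expected for a zero-phase cosine; only the ±10 Hz bins carried any real spectral content to begin with.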

5.7.2 Improving the Picture with Zero Padding

DFT magnitude and phase plots paint a picture of a signal's spectrum. At times, however, the picture can be somewhat misleading. Given a sampling frequency f_s = 50 Hz and a sampling interval T = 1/f_s, consider the signal

y(t) = e^{j2π(10+1/3)t}

This complex-valued, periodic signal contains a single positive frequency at 10⅓ Hz. Let us compute the signal's DFT using 50 samples.

>> y = T*exp(j*2*pi*(10+1/3)*n*T); Y = fft(y);
>> stem(f-25,fftshift(abs(Y)),'k.');
>> axis([-25 25 -0.05 1.05]); xlabel('f [Hz]'); ylabel('|Y(f)|');


Figure 5.29 |Y(f)| using 50 data points.

Figure 5.30 |Y_zp(f)| over 5 ≤ f ≤ 15 using 50 data points padded with 550 zeros.

In this case, the vector y contains a noninteger number of cycles. Figure 5.29 shows the significant frequency leakage that results. Also notice that since y[n] is not real, the DFT is not conjugate symmetric.

In this example, the discrete DFT frequencies do not include the actual 10⅓ Hz frequency of the signal. Thus, it is difficult to determine the signal's frequency from Fig. 5.29. To improve the picture, the signal is zero-padded to 12 times its original length.

>> y_zp = [y,zeros(1,11*length(y))]; Y_zp = fft(y_zp);
>> f_zp = (0:12*N_0-1)/(T*12*N_0);
>> stem(f_zp-25,fftshift(abs(Y_zp)),'k.');
>> axis([5 15 -0.05 1.05]); xlabel('f [Hz]'); ylabel('|Y_{zp}(f)|');

Figure 5.30, zoomed in to 5 ≤ f ≤ 15, correctly shows the peak frequency at 10⅓ Hz and better represents the signal's spectrum. It is important to keep in mind that zero padding does not increase the resolution or accuracy of the DFT. To return to the picket fence analogy, zero padding increases the number of pickets in our fence but cannot change what is behind the fence. More formally, the characteristics of the sinc function, such as main beam width and sidelobe levels, depend on the fixed width of the pulse, not on the number of zeros that follow. Adding zeros cannot change the characteristics of the sinc function and thus cannot change the resolution or accuracy of the DFT. Adding zeros simply allows the sinc function to be sampled more finely.
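The same experiment is easy to reproduce in NumPy (an illustrative translation; the book's session uses MATLAB): without padding the largest bin sits at 10 Hz, while padding by a factor of 12 spaces the bins 1/12 Hz apart, placing one essentially on the true 10⅓ Hz frequency.

```python
import numpy as np

fs, N0 = 50, 50
T = 1 / fs
n = np.arange(N0)
y = T * np.exp(1j * 2 * np.pi * (10 + 1 / 3) * n * T)

# unpadded: bins every 1 Hz, so the true frequency falls between bins
f = n / (T * N0)
peak_raw = f[np.argmax(np.abs(np.fft.fft(y)))]

# padded to 12x the length: bins every 1/12 Hz
y_zp = np.concatenate([y, np.zeros(11 * N0)])
f_zp = np.arange(12 * N0) / (T * 12 * N0)
peak_zp = f_zp[np.argmax(np.abs(np.fft.fft(y_zp)))]
```

The padded transform is simply a finer sampling of the same underlying sinc-shaped spectrum; the apparent improvement is interpolation, not added resolution.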


5.7.3 Quantization

A B-bit analog-to-digital converter (ADC) samples an analog signal and quantizes amplitudes by using 2^B discrete levels. This quantization results in signal distortion that is particularly noticeable for small B. Typically, quantization is classified as symmetric or asymmetric and as either rounding or truncating. Let us investigate rounding-type quantizers.

The quantized output x_q of an asymmetric rounding converter is given as†

x_q = (x_max/2^{B−1}) ⌊ x 2^{B−1}/x_max + 1/2 ⌋

The quantized output x_q of a symmetric rounding converter is given as

x_q = (x_max/2^{B−1}) ( ⌊ x 2^{B−1}/x_max ⌋ + 1/2 )

Program CH5MP1 quantizes a signal using one of these two rounding quantizer rules and also ensures no more than 2^B output levels.

function [xq] = CH5MP1(x,xmax,B,method)
% CH5MP1.m : Chapter 5, MATLAB Program 1
% Function M-file quantizes x over (-xmax,xmax) using 2^B levels.
% Uses rounding rule, supports symmetric and asymmetric quantization
% INPUTS:  x = input signal
%          xmax = maximum magnitude of signal to be quantized
%          B = number of quantization bits
%          method = default 'sym' for symmetrical, 'asym' for asymmetrical
% OUTPUTS: xq = quantized signal
if (nargin<3),
    disp('Not enough inputs.'); return
elseif (nargin==3),
    method = 'sym';
elseif (nargin>4),
    disp('Too many inputs.'); return
end
% Limit amplitude to xmax
x(abs(x)>xmax) = xmax*sign(x(abs(x)>xmax));
switch lower(method)
    case 'asym'
        xq = xmax/(2^(B-1))*floor(x*2^(B-1)/xmax+1/2);
        % Ensure only 2^B levels
        xq(xq>=xmax) = xmax*(1-2^(1-B));
    case 'sym'
        xq = xmax/(2^(B-1))*(floor(x*2^(B-1)/xmax)+1/2);
        % Ensure only 2^B levels
        xq(xq>=xmax) = xmax*(1-2^(1-B)/2);
    otherwise
        disp('Unrecognized quantization method.'); return
end

† Large values of x may return quantized values x_q outside the 2^B allowable levels. In such cases, x_q should be clamped to the nearest permitted level.
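For readers working outside MATLAB, the same two rounding rules can be ported as follows. This is a hypothetical NumPy translation of CH5MP1's logic (the function name `quantize` and its interface are our own, not from the book):

```python
import numpy as np

def quantize(x, xmax, B, method='sym'):
    """Rounding quantizer with 2**B levels over (-xmax, xmax).
    Mirrors the symmetric/asymmetric rules of CH5MP1 (illustrative port)."""
    x = np.clip(np.asarray(x, dtype=float), -xmax, xmax)   # limit amplitude
    step = xmax / 2**(B - 1)
    if method == 'asym':
        xq = step * np.floor(x / step + 0.5)
        xq[xq >= xmax] = xmax * (1 - 2**(1 - B))           # keep within 2**B levels
    elif method == 'sym':
        xq = step * (np.floor(x / step) + 0.5)
        xq[xq >= xmax] = xmax * (1 - 2**(1 - B) / 2)       # keep within 2**B levels
    else:
        raise ValueError('Unrecognized quantization method.')
    return xq
```

With B = 3 over (−10, 10), the symmetric rule yields eight levels with none at zero, while the asymmetric rule includes zero as a level, matching the transfer characteristics discussed below.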

Several MATLAB commands require discussion. First, the nargin function returns the number of input arguments. In this program, nargin is used to ensure that a correct number of inputs is supplied. If the number of inputs supplied is incorrect, an error message is displayed and the function terminates. If only three input arguments are detected, the quantization type is not explicitly specified and the program assigns the default symmetric method. As with many high-level languages such as C, MATLAB supports general switch/case structures†:

switch switch_expr, case case_expr, statements; otherwise, statements; end

CH5MP1 switches among cases of the string method. In this way, method-specific parameters are easily set. The command lower is used to convert a string to all lowercase characters. In this way, strings such as SYM, Sym, and sym are all indistinguishable. Similar to lower, the MATLAB command upper converts a string to all uppercase.

The floor command rounds input values to the nearest integer toward minus infinity. Mathematically, it computes ⌊·⌋. To accommodate different types of rounding, MATLAB supplies three other rounding commands: ceil, round, and fix. The ceil command rounds input values to the nearest integer toward infinity (⌈·⌉); the round command rounds input values toward the nearest integer; the fix command rounds input values to the nearest integer toward zero. For example, if x = [-0.5 0.5];, floor(x) yields [-1 0], ceil(x) yields [0 1], round(x) yields [-1 1], and fix(x) yields [0 0]. Finally, CH5MP1 checks and, if necessary, corrects large values of x_q that may be outside the allowable 2^B levels.

To verify operation, CH5MP1 is used to determine the transfer characteristics of a symmetric 3-bit quantizer operating over (−10, 10).

>> x = (-10:.0001:10); xsq = CH5MP1(x,10,3,'sym');
>> plot(x,xsq,'k'); axis([-10 10 -10.5 10.5]); grid on;
>> xlabel('Quantizer input'); ylabel('Quantizer output');

Figure 5.31 shows the results. Clearly, the quantized output is limited to 2^B = 8 levels. Zero is not a quantization level for symmetric quantizers, so half of the levels occur above zero and half of the levels occur below zero. In fact, symmetric quantizers get their name from the symmetry in quantization levels above and below zero.

By changing the method in CH5MP1 from 'sym' to 'asym', we obtain the transfer characteristics of an asymmetric 3-bit quantizer, as shown in Fig. 5.32. Again, the quantized output is limited to 2^B = 8 levels, and zero is now one of the included levels. With zero as a quantization

† A functionally equivalent structure can be written by using if, elseif, and else statements.


Figure 5.31 Transfer characteristics of a symmetric 3-bit quantizer.

Figure 5.32 Transfer characteristics of an asymmetric 3-bit quantizer.

level, we need one fewer quantization level above zero than there are levels below. Not surprisingly, asymmetric quantizers get their name from the asymmetry in quantization levels above and below zero.

There is no doubt that quantization can change a signal. It follows that the spectrum of a quantized signal can also change. While these changes are difficult to characterize mathematically, they are easy to investigate by using MATLAB. Consider a 1 Hz cosine sampled at f_s = 50 Hz over 1 second.

>> T = 1/50; N_0 = 50; n = (0:N_0-1); f = (0:N_0-1)/(T*N_0);
>> x = cos(2*pi*n*T); X = fft(x);

Upon quantizing by means of a 2-bit asymmetric rounding quantizer, both the signal and spectrum are substantially changed.

>> xaq = CH5MP1(x,1,2,'asym'); Xaq = fft(xaq);
>> subplot(2,2,1); stem(n,x,'k.');
>> axis([0 49 -1.1 1.1]); xlabel('n'); ylabel('x[n]');
>> subplot(2,2,2); stem(f-25,fftshift(abs(X)),'k.');
>> axis([-25 25 -1 26]); xlabel('f'); ylabel('|X(f)|');


Re s > 0    (6.14)

For the unilateral Laplace transform, there is a unique inverse transform of X(s); consequently, there is no need to specify the ROC explicitly. For this reason, we shall generally ignore any mention of the ROC for unilateral transforms. Recall, also, that in the unilateral Laplace transform it is understood that every signal x(t) is zero for t < 0, and it is appropriate to indicate this fact by multiplying the signal by u(t).

DRILL 6.1 Bilateral Laplace Transform of Gate Functions By direct integration, find the Laplace transform X(s) and the region of convergence of X(s) for the gate functions shown in Fig. 6.3.

ANSWERS

(a) (1/s)(1 − e^{−2s}) for all s
(b) (1/s)(1 − e^{−2s})e^{−2s} for all s

Figure 6.3 Gate functions for Drill 6.1.


CHAPTER 6 CONTINUOUS-TIME SYSTEM ANALYSIS USING THE LAPLACE TRANSFORM

CONNECTION TO THE FOURIER TRANSFORM

The definition of the Laplace transform is identical to that of the Fourier transform with jω replaced by s. It is reasonable to expect X(s), the Laplace transform of x(t), to be the same as X(ω), the Fourier transform of x(t), with jω replaced by s. For example, we found the Fourier transform of e^{−at} u(t) to be 1/(jω + a). Replacing jω with s in the Fourier transform results in 1/(s + a), which is the Laplace transform as seen from Eq. (6.10).

Unfortunately, this procedure is not valid for all x(t). We may use it only if the region of convergence for X(s) includes the imaginary (jω) axis. For instance, the Fourier transform of the unit step function is πδ(ω) + (1/jω). The corresponding Laplace transform is 1/s, and its region of convergence, which is Re s > 0, does not include the imaginary axis. In this case, the connection between the Fourier and Laplace transforms is not so simple. The reason for this complication is related to the convergence of the Fourier integral, where the path of integration is restricted to the imaginary axis. Because of this restriction, the Fourier integral for the step function does not converge in the ordinary sense as Ex. ?? demonstrates. We had to use a generalized function (impulse) for convergence. The Laplace integral for u(t), in contrast, converges in the ordinary sense, but only for Re s > 0, a region forbidden to the Fourier transform. Another interesting fact is that although the Laplace transform is a generalization of the Fourier transform, there are signals (e.g., periodic signals) for which the Laplace transform does not exist, although the Fourier transform exists (but not in the ordinary sense).

6.1.3 Finding the Inverse Transform

Finding the inverse Laplace transform by using Eq. (6.5) requires integration in the complex plane, a subject beyond the scope of this book (but see, e.g., [2]). For our purpose, we can find the inverse transforms from Table 6.1. All we need is to express X(s) as a sum of simpler functions of the forms listed in the table. Most of the transforms X(s) of practical interest are rational functions, that is, ratios of polynomials in s. Such functions can be expressed as a sum of simpler functions by using partial fraction expansion (see Sec. B.5). Values of s for which X(s) = 0 are called the zeros of X(s); the values of s for which X(s) → ∞ are called the poles of X(s). If X(s) is a rational function of the form P(s)/Q(s), the roots of P(s) are the zeros and the roots of Q(s) are the poles of X(s).

Find the inverse unilateral Laplace transforms of

(a) (7s − 6)/(s² − s − 6)
(b) (2s² + 5)/(s² + 3s + 2)
(c) 6(s + 34)/[s(s² + 10s + 34)]
(d) (8s + 10)/[(s + 1)(s + 2)³]

In no case is the inverse transform of these functions directly available in Table 6.1. Rather, we need to expand these functions into partial fractions, as discussed in Sec. B.5.1. Today, it is very easy to find partial fractions via software such as MATLAB. However, just as the availability of a calculator does not obviate the need for learning the mechanics of arithmetical operations (addition, multiplication, etc.), the widespread availability of computers does not eliminate the need to learn the mechanics of partial fraction expansion. (a)

(a)

X(s) = (7s − 6)/[(s + 2)(s − 3)] = k₁/(s + 2) + k₂/(s − 3)

To determine k₁, corresponding to the term (s + 2), we cover up (conceal) the term (s + 2) in X(s) and substitute s = −2 (the value of s that makes s + 2 = 0) in the remaining expression (see Sec. B.5.2):

k₁ = (7s − 6)/(s − 3) |_{s=−2} = (−14 − 6)/(−2 − 3) = 4

Similarly, to determine k₂ corresponding to the term (s − 3), we cover up the term (s − 3) in X(s) and substitute s = 3 in the remaining expression:

k₂ = (7s − 6)/(s + 2) |_{s=3} = (21 − 6)/(3 + 2) = 3

Therefore,

X(s) = (7s − 6)/[(s + 2)(s − 3)] = 4/(s + 2) + 3/(s − 3)    (6.15)
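The cover-up computations are simple enough to confirm by direct evaluation; both residues and the resulting expansion can be checked at an arbitrary point. A quick Python check (illustrative, not from the book):

```python
# cover-up residues for X(s) = (7s - 6)/((s + 2)(s - 3))
def X(s):
    return (7 * s - 6) / ((s + 2) * (s - 3))

k1 = (7 * (-2) - 6) / (-2 - 3)      # cover up (s + 2), evaluate at s = -2
k2 = (7 * 3 - 6) / (3 + 2)          # cover up (s - 3), evaluate at s = 3

# Eq. (6.15) must hold at any s away from the poles
s = 1.7
residual = abs(X(s) - (k1 / (s + 2) + k2 / (s - 3)))
```

The residual is zero to machine precision, which is exactly the spot-check strategy the text recommends next.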

CHECKING THE ANSWER

It is easy to make a mistake in partial fraction computations. Fortunately, it is simple to check the answer by recognizing that X(s) and its partial fractions must be equal for every value of s if the partial fractions are correct. Let us verify this assertion in Eq. (6.15) for some convenient value, say, s = 0. Substitution of s = 0 in Eq. (6.15) yields†

1 = 2 − 1 = 1

We can now be sure of our answer with a high margin of confidence. Using pair 5 of Table 6.1 in Eq. (6.15), we obtain

x(t) = L⁻¹{4/(s + 2) + 3/(s − 3)} = (4e^{−2t} + 3e^{3t}) u(t)

† Because X(s) = ∞ at its poles, we should avoid the pole values (−2 and 3 in the present case) for checking. The answers may check even if the partial fractions are wrong. This situation can occur when two or more errors cancel their effects. But the chances of this problem arising for randomly selected values of s are extremely small.

(b)

X(s) = (2s² + 5)/(s² + 3s + 2) = (2s² + 5)/[(s + 1)(s + 2)]

Observe that X(s) is an improper function with M = N. In such a case, we can express X(s) as a sum of the coefficient of the highest power in the numerator plus partial fractions corresponding to the poles of X(s) (see Sec. B.5.5). In the present case, the coefficient of the highest power in the numerator is 2. Therefore,

X(s) = 2 + k₁/(s + 1) + k₂/(s + 2)

where

k₁ = (2s² + 5)/(s + 2) |_{s=−1} = 7    and    k₂ = (2s² + 5)/(s + 1) |_{s=−2} = −13

Therefore,

X(s) = 2 + 7/(s + 1) − 13/(s + 2)

From Table 6.1, pairs 1 and 5, we obtain

x(t) = 2δ(t) + (7e^{−t} − 13e^{−2t}) u(t)

(c)

X(s) = 6(s + 34)/[s(s² + 10s + 34)] = 6(s + 34)/[s(s + 5 − j3)(s + 5 + j3)]
     = k₁/s + k₂/(s + 5 − j3) + k₂*/(s + 5 + j3)

Note that the coefficients (k₂ and k₂*) of the conjugate terms must also be conjugate (see Sec. B.5). Now

k₁ = 6(s + 34)/(s² + 10s + 34) |_{s=0} = 6(34)/34 = 6

k₂ = 6(s + 34)/[s(s + 5 + j3)] |_{s=−5+j3} = (29 + j3)/(−3 − j5) = −3 + j4

and

k₂* = −3 − j4

To use pair 10b of Table 6.1, we need to express k₂ and k₂* in polar form:

−3 + j4 = √(3² + 4²) e^{j tan⁻¹(4/−3)} = 5 e^{j tan⁻¹(4/−3)}

Observe that tan⁻¹(4/−3) ≠ tan⁻¹(−4/3). This fact is evident in Fig. 6.4. For further discussion of this topic, see Ex. B.1.

Figure 6.4 Visualizing tan⁻¹(−4/3) ≠ tan⁻¹(4/−3).

From Fig. 6.4, we observe that

k₂ = −3 + j4 = 5e^{j126.9°}

so k₂* = 5e^{−j126.9°}. Therefore,

X(s) = 6/s + 5e^{j126.9°}/(s + 5 − j3) + 5e^{−j126.9°}/(s + 5 + j3)

From Table 6.1 (pairs 2 and 10b), we obtain

x(t) = [6 + 10e^{−5t} cos(3t + 126.9°)] u(t)

ALTERNATIVE METHOD USING QUADRATIC FACTORS

The foregoing procedure involves considerable manipulation of complex numbers. Pair 10c (Table 6.1) indicates that the inverse transform of quadratic terms (with complex-conjugate poles) can be found directly, without first finding first-order partial fractions. We discussed such a procedure in Sec. B.5.2. For this purpose, we shall express X(s) as

X(s) = 6(s + 34)/[s(s² + 10s + 34)] = k₁/s + (As + B)/(s² + 10s + 34)

We have already determined that k₁ = 6 by the (Heaviside) "cover-up" method. Therefore,

6(s + 34)/[s(s² + 10s + 34)] = 6/s + (As + B)/(s² + 10s + 34)

Clearing the fractions by multiplying both sides by s(s² + 10s + 34) yields

6(s + 34) = (6 + A)s² + (60 + B)s + 204

Now, equating the coefficients of s² and s on both sides yields

A = −6    and    B = −54

and

X(s) = 6/s + (−6s − 54)/(s² + 10s + 34)

We now use pairs 2 and 10c to find the inverse Laplace transform. The parameters for pair 10c are A = −6, B = −54, a = 5, c = 34, b = √(c − a²) = 3, and

r = √[(A²c + B² − 2ABa)/(c − a²)] = 10,    θ = tan⁻¹[(Aa − B)/(A√(c − a²))] = 126.9°

Therefore,

x(t) = [6 + 10e^{−5t} cos(3t + 126.9°)] u(t)

which agrees with the earlier result.

SHORTCUTS

The partial fractions with quadratic terms also can be obtained by using shortcuts. We have

X(s) = 6(s + 34)/[s(s² + 10s + 34)] = 6/s + (As + B)/(s² + 10s + 34)

We can determine A by eliminating B on the right-hand side. This step can be accomplished by multiplying both sides of the equation for X(s) by s and then letting s → ∞. This procedure yields

0 = 6 + A  ⟹  A = −6

Therefore,

6(s + 34)/[s(s² + 10s + 34)] = 6/s + (−6s + B)/(s² + 10s + 34)

To find B, we let s take on any convenient value, say, s = 1, in this equation to obtain

210/45 = 6 + (B − 6)/45  ⟹  B = −54
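Both shortcut results are easy to confirm by evaluating the two sides of the expansion at a few arbitrary points (avoiding the poles). A quick Python check (illustrative, not from the book):

```python
# verify 6(s + 34)/(s(s^2 + 10s + 34)) = 6/s + (A s + B)/(s^2 + 10s + 34)
A, B = -6, -54

def lhs(s):
    return 6 * (s + 34) / (s * (s**2 + 10 * s + 34))

def rhs(s):
    return 6 / s + (A * s + B) / (s**2 + 10 * s + 34)

max_diff = max(abs(lhs(s) - rhs(s)) for s in (1.0, 2.5, -0.7, 11.0))
```

Because the expansion is an identity in s, agreement at several unrelated points gives high confidence that A and B are correct.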

a result that agrees with the answer found earlier.

(d)

X(s) = (8s + 10)/[(s + 1)(s + 2)³] = k₁/(s + 1) + a₀/(s + 2)³ + a₁/(s + 2)² + a₂/(s + 2)

where

k₁ = (8s + 10)/(s + 2)³ |_{s=−1} = 2

a₀ = (8s + 10)/(s + 1) |_{s=−2} = 6

a₁ = { d/ds [(8s + 10)/(s + 1)] } |_{s=−2} = −2

a₂ = (1/2!) { d²/ds² [(8s + 10)/(s + 1)] } |_{s=−2} = −2

Therefore,

X(s) = 2/(s + 1) + 6/(s + 2)³ − 2/(s + 2)² − 2/(s + 2)

and

x(t) = [2e^{−t} + (3t² − 2t − 2)e^{−2t}] u(t)

ALTERNATIVE METHOD: A HYBRID OF HEAVISIDE AND CLEARING FRACTIONS

In this method, the simpler coefficients k₁ and a₀ are determined by the Heaviside "cover-up" procedure, as discussed earlier. To determine the remaining coefficients, we use the clearing-fraction method. Using the values k₁ = 2 and a₀ = 6 obtained earlier by the Heaviside "cover-up" method, we have

(8s + 10)/[(s + 1)(s + 2)³] = 2/(s + 1) + 6/(s + 2)³ + a₁/(s + 2)² + a₂/(s + 2)

We now clear fractions by multiplying both sides of the equation by (s + 1)(s + 2)³. This procedure yields†

8s + 10 = 2(s + 2)³ + 6(s + 1) + a₁(s + 1)(s + 2) + a₂(s + 1)(s + 2)²
        = (2 + a₂)s³ + (12 + a₁ + 5a₂)s² + (30 + 3a₁ + 8a₂)s + (22 + 2a₁ + 4a₂)

Equating coefficients of s³ and s² on both sides, we obtain

0 = 2 + a₂  ⟹  a₂ = −2
0 = 12 + a₁ + 5a₂ = 2 + a₁  ⟹  a₁ = −2

We can stop here if we wish, since the two desired coefficients a₁ and a₂ have already been found. However, equating the coefficients of s¹ and s⁰ serves as a check on our answers. This step yields

† We could have cleared fractions without finding k₁ and a₀. This alternative, however, proves more laborious because it increases the number of unknowns to 4. By predetermining k₁ and a₀, we reduce the unknowns to 2. Moreover, this method provides a convenient check on the solution. This hybrid procedure achieves the best of both methods.

8 = 30 + 3a₁ + 8a₂
10 = 22 + 2a₁ + 4a₂

Substitution of a₁ = a₂ = −2, obtained earlier, satisfies these equations. This step confirms the correctness of our answers.

ANOTHER ALTERNATIVE: A HYBRID OF HEAVISIDE AND SHORTCUTS

In this method, the simpler coefficients k₁ and a₀ are determined by the Heaviside "cover-up" procedure, as discussed earlier. The usual shortcuts are then used to determine the remaining coefficients. Using the values k₁ = 2 and a₀ = 6, determined earlier by the Heaviside method, we have

(8s + 10)/[(s + 1)(s + 2)³] = 2/(s + 1) + 6/(s + 2)³ + a₁/(s + 2)² + a₂/(s + 2)

There are two unknowns, a₁ and a₂. If we multiply both sides by s and then let s → ∞, we eliminate a₁. This procedure yields

0 = 2 + a₂  ⟹  a₂ = −2

Therefore,

(8s + 10)/[(s + 1)(s + 2)³] = 2/(s + 1) + 6/(s + 2)³ + a₁/(s + 2)² − 2/(s + 2)

There is now only one unknown, a₁. This value can be determined readily by setting s equal to any convenient value, say, s = 0. This step yields

10/8 = 2 + 6/8 + a₁/4 − 1  ⟹  a₁ = −2

Using the MATLAB residue command, determine the inverse Laplace transform of each of the following functions:

(a) X_a(s) = (2s² + 5)/(s² + 3s + 2)
(b) X_b(s) = (2s² + 7s + 4)/[(s + 1)(s + 2)²]
(c) X_c(s) = (8s² + 21s + 19)/[(s + 2)(s² + s + 7)]

7 6.1

The Laplace Transform

527

In each case, we use the MATLAB residuecommand to perform the necessary partial fraction expansions. The inverse Laplace transform follows using Table 6.1. (a)

>> >>

num = [2 0 5]; den = [1 3 2]; [r, p, k] = residue(num,den) r = -13 p = k

=

7

-2 -1 2

Therefore, Xa (s) = -13/(s + 2) + 7/(s +I)+ 2 and Xa (t) = (- I 3e-21 + 1e-1)u(t) + 28(t). (b) Here, we use the conv command as a method to carry out polynomial multiplication

and find the coefficients of the (expanded) denominator polynomial.

>> num = [2 7 4]; den = conv([1 1],conv([1 2],[1 2]));
>> [r, p, k] = residue(num,den)
r =
     3
     2
    -1
p =
    -2
    -2
    -1
k =
    []

Therefore, Xb(s) = 3/(s + 2) + 2/(s + 2)^2 - 1/(s + 1) and xb(t) = (3e^{-2t} + 2te^{-2t} - e^{-t})u(t).

(c) In this case, a few calculations are needed beyond the results of the residue command so that pair 10b of Table 6.1 can be utilized.

>> num = [8 21 19]; den = conv([1 2],[1 1 7]);
>> [r, p, k] = residue(num,den)
r =
   3.5000 - 0.4811i
   3.5000 + 0.4811i
   1.0000
p =
  -0.5000 + 2.5981i
  -0.5000 - 2.5981i
  -2.0000
k =
    []
>> ang = angle(r), mag = abs(r)
ang =
   -0.1366
    0.1366
    0
mag =
    3.5329
    3.5329
    1.0000

Thus,

Xc(s) = 1/(s + 2) + 3.5329e^{-j0.1366}/(s + 0.5 - j2.5981) + 3.5329e^{j0.1366}/(s + 0.5 + j2.5981)

and, using pair 10b of Table 6.1,

xc(t) = [e^{-2t} + 7.0659e^{-0.5t} cos(2.5981t - 0.1366)]u(t)

Note that the cosine amplitude is twice the residue magnitude, 2(3.5329) = 7.0659, which is consistent with the initial value xc(0+) = lim s→∞ sXc(s) = 8.
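For readers without MATLAB, SciPy's residue performs the same computation. This Python sketch reproduces part (a); note that SciPy's ordering of the returned residues and poles may differ from MATLAB's.

```python
from scipy.signal import residue

# X_a(s) = (2s^2 + 5)/(s^2 + 3s + 2); k holds the direct (polynomial) term
r, p, k = residue([2, 0, 5], [1, 3, 2])
print(r, p, k)  # residues -13 and 7 at poles -2 and -1, plus the constant 2
```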

EXAMPLE 6.5 Symbolic Laplace and Inverse Laplace Transforms with MATLAB

Using MATLAB's symbolic math toolbox, determine the following:

(a) the direct unilateral Laplace transform of xa(t) = sin(at) + cos(bt)
(b) the inverse unilateral Laplace transform of Xb(s) = as^2/(s^2 + b^2)

(a) Here, we use the syms command to symbolically define our variables and expression for xa(t), and then we use the laplace command to compute the (unilateral) Laplace transform.

>> syms a b t; x_a = sin(a*t)+cos(b*t);
>> X_a = laplace(x_a)
X_a = a/(a^2 + s^2) + s/(b^2 + s^2)

Therefore, Xa(s) = a/(s^2 + a^2) + s/(s^2 + b^2). It is also easy to use MATLAB to determine Xa(s) in standard rational form.

>> X_a = collect(X_a)
X_a = (a^2*s + a*b^2 + a*s^2 + s^3)/(s^4 + (a^2 + b^2)*s^2 + a^2*b^2)

Thus, we also see that Xa(s) = (s^3 + as^2 + a^2 s + ab^2) / [s^4 + (a^2 + b^2)s^2 + a^2 b^2].

(b) A similar approach is taken for the inverse Laplace transform, except that the ilaplace command is used rather than the laplace command.

>> syms a b s; X_b = (a*s^2)/(s^2+b^2);
>> x_b = ilaplace(X_b)
x_b = a*dirac(t) - a*b*sin(b*t)

Therefore, xb(t) = aδ(t) - ab sin(bt)u(t).
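The same symbolic computation can be sketched in Python with SymPy (offered here as a convenience for readers without the MATLAB symbolic toolbox; part (a) only):

```python
from sympy import symbols, sin, cos, laplace_transform

t = symbols('t', positive=True)
s, a, b = symbols('s a b', positive=True)

# Part (a): unilateral Laplace transform of sin(at) + cos(bt)
Xa = laplace_transform(sin(a*t) + cos(b*t), t, s, noconds=True)
print(Xa)  # matches a/(s^2 + a^2) + s/(s^2 + b^2)
```

The noconds=True option suppresses the region-of-convergence condition that SymPy otherwise returns alongside the transform.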


DRILL 6.2 Laplace Transform

Show that the Laplace transform of 10e^{-3t} cos(4t + 53.13°) is (6s - 14)/(s^2 + 6s + 25). Use Table 6.1.

DRILL 6.3 Inverse Laplace Transform

Find the inverse Laplace transform of the following:

(a) (s + 17) / (s^2 + 4s - 5)
(b) (3s - 5) / [(s + 1)(s^2 + 2s + 5)]
(c) (16s + 43) / [(s - 2)(s + 3)^2]

ANSWERS

(a) (3e^t - 2e^{-5t})u(t)
(b) [-2e^{-t} + 2.5e^{-t} cos(2t - 36.87°)]u(t)
(c) [3e^{2t} + (t - 3)e^{-3t}]u(t)
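Each drill answer can be verified by transforming it forward again and comparing with the given X(s). This SymPy sketch (a check added here, not part of the text) does so for part (c):

```python
from sympy import symbols, exp, laplace_transform, simplify

t, s = symbols('t s')

# Claimed inverse for (c); transform it back and compare with X(s)
y = 3*exp(2*t) + (t - 3)*exp(-3*t)
Y = laplace_transform(y, t, s, noconds=True)
X = (16*s + 43) / ((s - 2)*(s + 3)**2)
print(simplify(Y - X))  # 0 confirms the answer
```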

A HISTORICAL NOTE: MARQUIS PIERRE-SIMON DE LAPLACE (1749-1827)

The Laplace transform is named after the great French mathematician and astronomer Laplace, who first presented the transform and its applications to differential equations in a paper published in 1779. Laplace developed the foundations of potential theory and made important contributions to special functions, probability theory, astronomy, and celestial mechanics. In his Exposition du systeme du monde (1796), Laplace formulated a nebular hypothesis of cosmic origin and tried to explain the universe as a pure mechanism. In his Traite de mecanique celeste (celestial mechanics), which completed the work of Newton, Laplace used mathematics and physics to subject the solar system and all heavenly bodies to the laws of motion and the principle of gravitation. Newton had been unable to explain the irregularities of some heavenly bodies; in desperation, he concluded that God himself must intervene now and then to prevent such catastrophes as Jupiter eventually falling into the sun (and the moon into the earth), as predicted by Newton's calculations. Laplace proposed to show that these irregularities would correct themselves periodically and that a little patience (in Jupiter's case, 929 years) would see everything returning automatically to order; thus, there was no reason why the solar and the stellar systems could not continue to operate by the laws of Newton and Laplace to the end of time [3]. Laplace presented a copy of Mecanique celeste to Napoleon, who, after reading the book, took Laplace to task for not including God in his scheme: "You have written this huge book on the system of the world without once mentioning the author of the universe." "Sire," Laplace retorted, "I had no need of that hypothesis." Napoleon was not amused, and when he reported this reply to


another great mathematician-astronomer, Louis de Lagrange, the latter remarked, "Ah, but that is a fine hypothesis. It explains so many things" [4]. Napoleon, following his policy of honoring and promoting scientists, made Laplace the minister of the interior. To Napoleon's dismay, however, the new appointee attempted to bring "the spirit of infinitesimals" into administration, and so Laplace was transferred hastily to the Senate.

Pierre-Simon de Laplace and Oliver Heaviside

OLIVER HEAVISIDE (1850-1925)

Although Laplace published his transform method to solve differential equations in 1779, the method did not catch on until a century later. It was rediscovered independently in a rather awkward form by an eccentric British engineer, Oliver Heaviside (1850-1925), one of the tragic figures in the history of science and engineering. Despite his prolific contributions to electrical engineering, he was severely criticized during his lifetime and was neglected later to the point that hardly a textbook today mentions his name or credits him with contributions. Nevertheless, his studies had a major impact on many aspects of modern electrical engineering. It was Heaviside who made transatlantic communication possible by inventing cable loading, but few mention him as a pioneer or an innovator in telephony. It was Heaviside who suggested the use of inductive cable loading, but the credit is given to M. Pupin, who was not even responsible for building the first loading coil.† In addition, Heaviside was [5]:

• The first to find a solution to the distortionless transmission line.
• The innovator of lowpass filters.

† Heaviside developed the theory for cable loading, George Campbell built the first loading coil, and the telephone circuits using Campbell's coils were in operation before Pupin published his paper. In the legal fight over the patent, however, Pupin won the battle: he was a shrewd self-promoter, and Campbell had poor legal support.

• The first to write Maxwell's equations in modern form.
• The codiscoverer of rate energy transfer by an electromagnetic field.
• An early champion of the now-common phasor analysis.
• An important contributor to the development of vector analysis. In fact, he essentially created the subject independently of Gibbs [6].
• An originator of the use of operational mathematics used to solve linear integro-differential equations, which eventually led to rediscovery of the ignored Laplace transform.
• The first to theorize (along with Kennelly of Harvard) that a conducting layer (the Kennelly-Heaviside layer) of atmosphere exists, which allows radio waves to follow the earth's curvature instead of traveling off into space in a straight line.
• The first to posit that an electrical charge would increase in mass as its velocity increases, an anticipation of an aspect of Einstein's special theory of relativity [7]. He also forecast the possibility of superconductivity.

Heaviside was a self-made, self-educated man. Although his formal education ended with elementary school, he eventually became a pragmatically successful mathematical physicist. He began his career as a telegrapher, but increasing deafness forced him to abandon this career at the age of 24. He then devoted himself to the study of electricity. His creative work was disdained by many professional mathematicians because of his lack of formal education and his unorthodox methods. Heaviside had the misfortune to be criticized both by mathematicians, who faulted him for lack of rigor, and by men of practice, who faulted him for using too much mathematics and thereby confusing students. Many mathematicians, trying to find solutions to the distortionless transmission line, failed because no rigorous tools were available at the time. Heaviside succeeded because he used mathematics not with rigor, but with insight and intuition. Using his much maligned operational method, Heaviside successfully attacked problems that rigid mathematicians could not solve, problems such as the flow of heat in a body of spatially varying conductivity. Heaviside brilliantly used this method in 1895 to demonstrate a fatal flaw in Lord Kelvin's determination of the geological age of the earth by secular cooling; he used the same flow-of-heat theory as for his cable analysis. Yet the mathematicians of the Royal Society remained unmoved and were not the least impressed by the fact that Heaviside had found the answer to problems no one else could solve. Many mathematicians who examined his work dismissed it with contempt, asserting that his methods were either complete nonsense or a rehash of known ideas [5]. Sir William Preece, the chief engineer of the British Post Office, a savage critic of Heaviside, ridiculed Heaviside's work as too theoretical and, therefore, leading to faulty conclusions.
Heaviside's work on transmission lines and loading was dismissed by the British Post Office and might have remained hidden, had not Lord Kelvin himself publicly expressed admiration for it [5]. Heaviside's operational calculus may be formally inaccurate, but, in fact, it anticipated the operational methods developed in more recent years [8]. Although his method was not fully understood, it provided correct results. When Heaviside was attacked for the vague meaning of his operational calculus, his pragmatic reply was, "Shall I refuse my dinner because I do not fully understand the process of digestion?" Heaviside lived as a bachelor hermit, often in near-squalid conditions, and died largely unnoticed, in poverty. His life demonstrates the persistent arrogance and snobbishness of the intellectual establishment, which does not respect creativity unless it is presented in the strict language of the establishment.


6.2 SOME PROPERTIES OF THE LAPLACE TRANSFORM

Because it is a generalized form of the Fourier transform, we expect the Laplace transform to have properties similar to those of the Fourier transform. However, we are discussing here mainly the properties of the unilateral Laplace transform, and they differ somewhat from those of the Fourier transform (which is a bilateral transform). Properties of the Laplace transform are useful not only in the derivation of the Laplace transform of functions but also in the solutions of linear integro-differential equations. A glance at Eqs. (6.5) and (6.6) shows that there is a certain measure of symmetry in going from x(t) to X(s), and vice versa. This symmetry or duality is also carried over to the properties of the Laplace transform. This fact will be evident in the following development. We are already familiar with two properties: linearity [Eq. (6.7)] and the uniqueness property of the Laplace transform discussed earlier.

6.2.1 Time Shifting

The time-shifting property states that if

x(t) ⟺ X(s)

then for t0 ≥ 0

x(t - t0) ⟺ X(s)e^{-st0}    (6.16)

Observe that x(t) starts at t = 0, and, therefore, x(t - t0) starts at t = t0. This fact is implicit, but is not explicitly indicated in Eq. (6.16). This often leads to inadvertent errors. To avoid such a pitfall, we should restate the property as follows. If

x(t)u(t) ⟺ X(s)

then

x(t - t0)u(t - t0) ⟺ X(s)e^{-st0}

Proof.

L[x(t - t0)u(t - t0)] = ∫_0^∞ x(t - t0)u(t - t0)e^{-st} dt

Setting t - t0 = τ, we obtain

L[x(t - t0)u(t - t0)] = ∫_{-t0}^∞ x(τ)u(τ)e^{-s(τ + t0)} dτ

Because u(τ) = 0 for τ < 0 and u(τ) = 1 for τ ≥ 0, the limits of integration can be taken from 0 to ∞. Thus,

L[x(t - t0)u(t - t0)] = ∫_0^∞ x(τ)e^{-s(τ + t0)} dτ
                      = e^{-st0} ∫_0^∞ x(τ)e^{-sτ} dτ
                      = X(s)e^{-st0}


Note that x(t - t0)u(t - t0) is the signal x(t)u(t) delayed by t0 seconds. The time-shifting property states that delaying a signal by t0 seconds amounts to multiplying its transform by e^{-st0}. This property of the unilateral Laplace transform holds only for positive t0 because if t0 were negative, the signal x(t - t0)u(t - t0) may not be causal. We can readily verify this property in Drill 6.1. If the signal in Fig. 6.3a is x(t)u(t), then the signal in Fig. 6.3b is x(t - 2)u(t - 2). The Laplace transform for the pulse in Fig. 6.3a is (1/s)(1 - e^{-2s}). Therefore, the Laplace transform for the pulse in Fig. 6.3b is (1/s)(1 - e^{-2s})e^{-2s}. The time-shifting property proves very convenient in finding the Laplace transform of functions with different descriptions over different intervals, as the following example demonstrates.
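The property is also easy to confirm numerically. The sketch below (plain Python/SciPy, chosen here for reproducibility; the signal e^{-t}u(t) and the delay t0 = 3 are arbitrary illustrations) evaluates both sides of the pair at a few real values of s:

```python
import numpy as np
from scipy.integrate import quad

t0 = 3.0  # an arbitrary positive delay

def X(s):
    # Laplace transform of x(t) = e^{-t} u(t), computed numerically
    return quad(lambda t: np.exp(-t) * np.exp(-s*t), 0, np.inf)[0]

def X_shifted(s):
    # Laplace transform of the delayed signal x(t - t0) u(t - t0)
    return quad(lambda t: np.exp(-(t - t0)) * np.exp(-s*t), t0, np.inf)[0]

for s in (0.5, 1.0, 2.0):
    print(X_shifted(s), np.exp(-s*t0) * X(s))  # the two columns agree
```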

EXAMPLE 6.6 Laplace Transform and the Time-Shifting Property

Find the Laplace transform of x(t) depicted in Fig. 6.5a.


Figure 6.5 Finding a piecewise representation of a signal x(t).

Describing mathematically a function such as the one in Fig. 6.5a is discussed in Sec. 1.4. The function x(t) in Fig. 6.5a can be described as a sum of the two components shown in Fig. 6.5b. The equation for the first component is t - 1 over 1 ≤ t ≤ 2, so that this component can be described by (t - 1)[u(t - 1) - u(t - 2)]. The second component can be described by u(t - 2) - u(t - 4). Therefore,

x(t) = (t - 1)[u(t - 1) - u(t - 2)] + [u(t - 2) - u(t - 4)]
     = (t - 1)u(t - 1) - (t - 1)u(t - 2) + u(t - 2) - u(t - 4)    (6.17)

The first term on the right-hand side is the signal tu(t) delayed by 1 second. Also, the third and fourth terms are the signal u(t) delayed by 2 and 4 seconds, respectively. The second term, however, cannot be interpreted as a delayed version of any entry in Table 6.1. For this reason, we rearrange it as

(t - 1)u(t - 2) = (t - 2 + 1)u(t - 2) = (t - 2)u(t - 2) + u(t - 2)

We have now expressed the second term in the desired form as tu(t) delayed by 2 seconds plus u(t) delayed by 2 seconds. With this result, Eq. (6.17) can be expressed as

x(t) = (t - 1)u(t - 1) - (t - 2)u(t - 2) - u(t - 4)

Application of the time-shifting property to tu(t) ⟺ 1/s^2 yields

(t - 1)u(t - 1) ⟺ (1/s^2)e^{-s}   and   (t - 2)u(t - 2) ⟺ (1/s^2)e^{-2s}

Also,

u(t) ⟺ 1/s   and   u(t - 4) ⟺ (1/s)e^{-4s}

Therefore,

X(s) = (1/s^2)e^{-s} - (1/s^2)e^{-2s} - (1/s)e^{-4s}
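The result can be spot-checked by integrating the piecewise signal directly. The following Python sketch (an independent numerical check, not from the text) compares the defining integral with the closed form at a few values of s:

```python
import numpy as np
from scipy.integrate import quad

def x(t):
    # The signal of Fig. 6.5a: a unit ramp on [1, 2], then a unit level on [2, 4]
    if 1.0 <= t <= 2.0:
        return t - 1.0
    if 2.0 < t < 4.0:
        return 1.0
    return 0.0

def X_numeric(s):
    # The signal is zero beyond t = 4, so integrating to 5 suffices
    return quad(lambda t: x(t) * np.exp(-s*t), 0, 5, points=[1, 2, 4])[0]

def X_closed(s):
    return np.exp(-s)/s**2 - np.exp(-2*s)/s**2 - np.exp(-4*s)/s

for s in (0.5, 1.0, 2.0):
    print(X_numeric(s), X_closed(s))  # the two columns agree
```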

EXAMPLE 6.7 Inverse Laplace Transform and the Time-Shifting Property

Find the inverse Laplace transform of

X(s) = (s + 3 + 5e^{-2s}) / [(s + 1)(s + 2)]

Observe the exponential term e^{-2s} in the numerator of X(s), indicating time delay. In such a case, we should separate X(s) into terms with and without a delay factor, as

X(s) = (s + 3)/[(s + 1)(s + 2)] + 5e^{-2s}/[(s + 1)(s + 2)]

where

X1(s) = (s + 3)/[(s + 1)(s + 2)] = 2/(s + 1) - 1/(s + 2)
X2(s) = 5/[(s + 1)(s + 2)] = 5/(s + 1) - 5/(s + 2)

Therefore,

x1(t) = (2e^{-t} - e^{-2t})u(t)
x2(t) = 5(e^{-t} - e^{-2t})u(t)

Also, because

X(s) = X1(s) + X2(s)e^{-2s}

we can write

x(t) = x1(t) + x2(t - 2)
     = (2e^{-t} - e^{-2t})u(t) + 5[e^{-(t-2)} - e^{-2(t-2)}]u(t - 2)
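This answer, too, can be verified by a numerical forward transform. The Python/SciPy sketch below (a check added here; the sample values of s are arbitrary) transforms the claimed x(t) and compares with X(s):

```python
import numpy as np
from scipy.integrate import quad

def x(t):
    # The claimed inverse transform, for t >= 0
    y = 2*np.exp(-t) - np.exp(-2*t)
    if t >= 2:
        y += 5*(np.exp(-(t - 2)) - np.exp(-2*(t - 2)))
    return y

def X_numeric(s):
    # Truncate at t = 60, where the integrand is negligible for s > 0
    return quad(lambda t: x(t) * np.exp(-s*t), 0, 60, points=[2])[0]

def X_closed(s):
    return (s + 3 + 5*np.exp(-2*s)) / ((s + 1)*(s + 2))

for s in (0.5, 1.0, 2.0):
    print(X_numeric(s), X_closed(s))  # the two columns agree
```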

6.3 SOLUTION OF DIFFERENTIAL AND INTEGRO-DIFFERENTIAL EQUATIONS

The time-differentiation property of the Laplace transform has set the stage for solving linear differential (or integro-differential) equations with constant coefficients. Because d^k y/dt^k ⟺ s^k Y(s), the Laplace transform of a differential equation is an algebraic equation that can be readily solved for Y(s). Next we take the inverse Laplace transform of Y(s) to find the desired solution y(t). The following examples demonstrate the Laplace transform procedure for solving linear differential equations with constant coefficients.

EXAMPLE 6.12 Laplace Transform to Solve a Second-Order Linear Differential Equation

Solve the second-order linear differential equation

(D^2 + 5D + 6)y(t) = (D + 1)x(t)

for the initial conditions y(0-) = 2 and ẏ(0-) = 1 and the input x(t) = e^{-4t}u(t).

The equation is

d^2y(t)/dt^2 + 5 dy(t)/dt + 6y(t) = dx(t)/dt + x(t)    (6.26)

Let y(t) ⟺ Y(s). Then from Eq. (6.19),

dy(t)/dt ⟺ sY(s) - y(0-) = sY(s) - 2

and

d^2y(t)/dt^2 ⟺ s^2 Y(s) - sy(0-) - ẏ(0-) = s^2 Y(s) - 2s - 1

Moreover, for x(t) = e^{-4t}u(t),

X(s) = 1/(s + 4)   and   dx(t)/dt ⟺ sX(s) - x(0-) = s/(s + 4) - 0 = s/(s + 4)

Taking the Laplace transform of Eq. (6.26), we obtain

[s^2 Y(s) - 2s - 1] + 5[sY(s) - 2] + 6Y(s) = s/(s + 4) + 1/(s + 4)

Collecting all the terms of Y(s) and the remaining terms separately on the left-hand side, we obtain

(s^2 + 5s + 6)Y(s) - (2s + 11) = (s + 1)/(s + 4)    (6.27)

Therefore,

(s^2 + 5s + 6)Y(s) = (2s + 11) + (s + 1)/(s + 4) = (2s^2 + 20s + 45)/(s + 4)

and

Y(s) = (2s^2 + 20s + 45) / [(s^2 + 5s + 6)(s + 4)] = (2s^2 + 20s + 45) / [(s + 2)(s + 3)(s + 4)]

Expanding the right-hand side into partial fractions yields

Y(s) = (13/2)/(s + 2) - 3/(s + 3) - (3/2)/(s + 4)

The inverse Laplace transform of this equation yields

y(t) = (13/2 e^{-2t} - 3e^{-3t} - 3/2 e^{-4t})u(t)    (6.28)

Example 6.12 demonstrates the ease with which the Laplace transform can solve linear differential equations with constant coefficients. The method is general and can solve a linear differential equation with constant coefficients of any order.

ZERO-INPUT AND ZERO-STATE COMPONENTS OF RESPONSE

The Laplace transform method gives the total response, which includes zero-input and zero-state components. It is possible to separate the two components if we so desire. The initial condition terms in the response give rise to the zero-input response. For instance, in Ex. 6.12, the terms attributable to initial conditions y(0-) = 2 and ẏ(0-) = 1 in Eq. (6.27) generate the zero-input response. These initial condition terms are -(2s + 11), as seen in Eq. (6.27). The terms on the right-hand side are exclusively due to the input. Equation (6.27) is reproduced below with the proper labeling of the terms:

(s^2 + 5s + 6)Y(s) - (2s + 11) = (s + 1)/(s + 4)

so that

(s^2 + 5s + 6)Y(s) = (2s + 11) + (s + 1)/(s + 4)
                     [initial condition terms]  [input terms]

Therefore,

Y(s) = (2s + 11)/(s^2 + 5s + 6) + (s + 1)/[(s + 4)(s^2 + 5s + 6)]
       [zero-input response]     [zero-state response]

Taking the inverse transform of this equation yields

y(t) = (7e^{-2t} - 5e^{-3t})u(t) + (-1/2 e^{-2t} + 2e^{-3t} - 3/2 e^{-4t})u(t)
       [ZIR]                      [ZSR]

6.3.1 Comments on Initial Conditions at 0- and at 0+

The initial conditions in Ex. 6.12 are y(0-) = 2 and ẏ(0-) = 1. If we let t = 0 in the total response in Eq. (6.28), we find y(0) = 2 and ẏ(0) = 2, which is at odds with the given initial conditions. Why? Because the initial conditions are given at t = 0- (just before the input is applied), when only the zero-input response is present. The zero-state response is the result of the input x(t) applied at t = 0. Hence, this component does not exist at t = 0-. Consequently, the initial conditions at t = 0- are satisfied by the zero-input response, not by the total response. We can readily verify in this example that the zero-input response does indeed satisfy the given initial conditions at t = 0-. It is the total response that satisfies the initial conditions at t = 0+, which are generally different from the initial conditions at 0-.

There also exists an L+ version of the Laplace transform, which uses the initial conditions at t = 0+ rather than at 0- (as in our present L- version). The L+ version, which was in vogue until the early 1960s, is identical to the L- version except the limits of the Laplace integral [Eq. (6.11)] are from 0+ to ∞. Hence, by definition, the origin t = 0 is excluded from the domain. This version, still used in some math books, has some serious difficulties. For instance, the Laplace transform of δ(t) is zero because δ(t) = 0 for t ≥ 0+. Moreover, this approach is rather clumsy in the theoretical study of linear systems because the response obtained cannot be separated into zero-input and zero-state components. As we know, the zero-state component represents the system response as an explicit function of the input, and without knowing this component, it is not possible to assess the effect of the input on the system response in a general way. The L+ version can separate the response in terms of the natural and the forced components, which are not as interesting as the zero-input and the zero-state components. Note that we can always determine the natural and the forced components from the zero-input and the zero-state components [e.g., Eq. (2.44) from Eq. (2.43)], but the converse is not true. Because of these and some other problems, electrical engineers (wisely) started discarding the L+ version in the early 1960s.

It is interesting to note the time-domain duals of these two Laplace versions. The classical method is the dual of the L+ method, and the convolution (zero-input/zero-state) method is the dual of the L- method. The first pair uses the initial conditions at 0+, and the second pair uses those at t = 0-. The first pair (the classical method and the L+ version) is awkward in the theoretical study of linear system analysis. It was no coincidence that the L- version was adopted immediately after the introduction to the electrical engineering community of state-space analysis (which uses zero-input/zero-state separation of the output).

EXAMPLE 6.13 Laplace Transform to Solve an Electric Circuit

In the circuit of Fig. 6.8a, the switch is in the closed position for a long time before t = 0, when it is opened instantaneously. Find the inductor current y(t) for t ≥ 0.

Figure 6.8 Analysis of a network with a switching action.

When the switch is in the closed position (for a long time), the inductor current is 2 amperes and the capacitor voltage is 10 volts. When the switch is opened, the circuit is equivalent to that depicted in Fig. 6.8b, with the initial inductor current y(0-) = 2 and the initial capacitor voltage vC(0-) = 10. The input voltage is 10 volts, starting at t = 0, and, therefore, can be represented by 10u(t).

The loop equation of the circuit in Fig. 6.8b is

dy(t)/dt + 2y(t) + 5 ∫_{-∞}^{t} y(τ) dτ = 10u(t)    (6.29)

If

y(t) ⟺ Y(s)

then

dy(t)/dt ⟺ sY(s) - y(0-) = sY(s) - 2

and [see Eq. (6.20)]

∫_{-∞}^{t} y(τ) dτ ⟺ Y(s)/s + (1/s) ∫_{-∞}^{0-} y(τ) dτ

Because y(t) is the capacitor current, the integral ∫_{-∞}^{0-} y(τ) dτ is qC(0-), the capacitor charge at t = 0-, which is given by C times the capacitor voltage at t = 0-. Therefore,

∫_{-∞}^{0-} y(τ) dτ = qC(0-) = C vC(0-) = (1/5)(10) = 2

and

∫_{-∞}^{t} y(τ) dτ ⟺ Y(s)/s + 2/s

Using these results, the Laplace transform of Eq. (6.29) is

sY(s) - 2 + 2Y(s) + 5Y(s)/s + 10/s = 10/s

or

(s + 2 + 5/s)Y(s) = 2

and

Y(s) = 2s/(s^2 + 2s + 5)

To find the inverse Laplace transform of Y(s), we use pair 10c (Table 6.1) with values A = 2, B = 0, a = 1, and c = 5. This yields

y(t) = √5 e^{-t} cos(2t + 26.6°)u(t)

This response is shown in Fig. 6.8c.
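The closed-form inductor current can be checked against Y(s) numerically. This Python/SciPy sketch (added here as an independent check; the 26.6° angle is exactly tan^{-1}(1/2)) transforms y(t) forward and compares with 2s/(s^2 + 2s + 5):

```python
import numpy as np
from scipy.integrate import quad

def y(t):
    # y(t) = sqrt(5) e^{-t} cos(2t + 26.6 deg), with the exact angle atan(1/2)
    return np.sqrt(5) * np.exp(-t) * np.cos(2*t + np.arctan(0.5))

def Y_numeric(s):
    # e^{-t} decay makes truncation at t = 40 negligible
    return quad(lambda t: y(t) * np.exp(-s*t), 0, 40)[0]

def Y_closed(s):
    return 2*s / (s**2 + 2*s + 5)

for s in (1.0, 2.0, 3.0):
    print(Y_numeric(s), Y_closed(s))  # the two columns agree
```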

Comment. In our discussion so far, we have multiplied input signals by u(t), implying that the signals are zero prior to t = 0. This is needlessly restrictive. These signals can have any arbitrary value prior to t = 0. As long as the initial conditions at t = 0 are specified, we need only the knowledge of the input for t ≥ 0 to compute the response for t ≥ 0. Some authors use the notation 1(t) to denote a function that is equal to u(t) for t ≥ 0 and that has arbitrary value for negative t. We have abstained from this usage to avoid needless confusion caused by the introduction of a new function, which is very similar to u(t).


6.3.2 Zero-State Response

Consider an Nth-order LTIC system specified by the equation

Q(D)y(t) = P(D)x(t)    (6.30)

We shall now find the general expression for the zero-state response of an LTIC system. The zero-state response y(t), by definition, is the system response to an input when the system is initially relaxed (in zero state). Therefore, y(t) satisfies Eq. (6.30) with zero initial conditions

y(0-) = ẏ(0-) = ··· = y^{(N-1)}(0-) = 0

Moreover, the input x(t) is causal so that

x(0-) = ẋ(0-) = ··· = x^{(M-1)}(0-) = 0

Let

y(t) ⟺ Y(s)   and   x(t) ⟺ X(s)

Because of zero initial conditions, the Laplace transform of Eq. (6.30) yields

Q(s)Y(s) = P(s)X(s)

or

Y(s) = [b0 s^M + b1 s^{M-1} + ··· + b_{M-1} s + b_M] / [s^N + a1 s^{N-1} + ··· + a_{N-1} s + a_N] X(s) = [P(s)/Q(s)] X(s)

But we have shown in Eq. (6.22) that Y(s) = H(s)X(s). Consequently,

H(s) = P(s)/Q(s)    (6.31)

This is the transfer function of a linear differential system specified in Eq. (6.30). The same result has been derived earlier in Eq. (2.41) using an alternate (time-domain) approach. We have shown that Y(s), the Laplace transform of the zero-state response y(t), is the product of X(s) and H(s), where X(s) is the Laplace transform of the input x(t) and H(s) is the system transfer function [relating the particular output y(t) to the input x(t)].
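As an illustration of Eq. (6.31), a transfer function can be simulated directly. The Python/SciPy sketch below (one hedged alternative to MATLAB; the particular H(s) is borrowed from the examples in this section) compares the simulated impulse response of H(s) = (s + 1)/(s^2 + 5s + 6) with its partial-fraction inverse, h(t) = (-e^{-2t} + 2e^{-3t})u(t):

```python
import numpy as np
from scipy.signal import lti, impulse

# H(s) = (s + 1)/(s^2 + 5s + 6); partial fractions give -1/(s+2) + 2/(s+3)
sys = lti([1, 1], [1, 5, 6])
t, h = impulse(sys, T=np.linspace(0, 5, 501))

h_exact = -np.exp(-2*t) + 2*np.exp(-3*t)
print(np.max(np.abs(h - h_exact)))  # small numerical error
```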


INTUITIVE INTERPRETATION OF THE LAPLACE TRANSFORM

So far we have treated the Laplace transform as a machine that converts linear integro-differential equations into algebraic equations. There is no physical understanding of how this is accomplished or what it means. We now discuss a more intuitive interpretation and meaning of the Laplace transform. In Ch. 2, Eq. (2.38), we showed that the LTI system response to an everlasting exponential e^{st} is H(s)e^{st}. If we could express every signal as a linear combination of everlasting exponentials of the form e^{st}, we could readily obtain the system response to any input. For example, if

x(t) = Σ_{k=1}^{K} X(s_k) e^{s_k t}

the response of an LTIC system to such input x(t) is given by

y(t) = Σ_{k=1}^{K} X(s_k) H(s_k) e^{s_k t}

Unfortunately, the class of signals that can be expressed in this form is very small. However, we can express almost all signals of practical utility as a sum of everlasting exponentials over a continuum of frequencies. This is precisely what the Laplace transform in Eq. (6.5) does:

x(t) = (1/2πj) ∫_{c-j∞}^{c+j∞} X(s) e^{st} ds    (6.32)

Invoking the linearity property of the Laplace transform, we can find the system response y(t) to input x(t) in Eq. (6.32) as†

y(t) = (1/2πj) ∫_{c'-j∞}^{c'+j∞} X(s) H(s) e^{st} ds = L^{-1}[X(s)H(s)]    (6.33)

Clearly,

Y(s) = X(s)H(s)

We can now represent the transformed version of the system, as depicted in Fig. 6.9a. The input X(s) is the Laplace transform of x(t), and the output Y(s) is the Laplace transform of (the zero-state response) y(t). The system is described by the transfer function H(s). The output Y(s) is the product X(s)H(s). Recall that s is the complex frequency of e^{st}. This explains why the Laplace transform method is also called the frequency-domain method. Note that X(s), Y(s), and H(s) are the frequency-domain representations of x(t), y(t), and h(t), respectively. We may view the boxes marked L and L^{-1} in Fig. 6.9a as the interfaces that convert the time-domain entities into the corresponding frequency-domain entities, and vice versa. All real-life signals begin in the time domain, and the final answers must also be in the time domain. First, we convert the time-domain input(s) into the frequency-domain counterparts. The problem itself is solved in the frequency domain, resulting in the answer Y(s), also in the frequency domain. Finally, we convert

† Recall that H(s) has its own region of validity. Hence, the limits of integration for the integral in Eq. (6.32) are modified in Eq. (6.33) to accommodate the region of existence (validity) of X(s) as well as H(s).

Figure 6.9 Alternate interpretation of the Laplace transform.

Y(s) to y(t). Solving the problem is relatively simpler in the frequency domain than in the time domain. Henceforth, we shall omit the explicit representation of the interface boxes L and L^{-1}, representing signals and systems in the frequency domain, as shown in Fig. 6.9b.

EXAMPLE 6.14 Laplace Transform to Find the Zero-State Response

Find the response y(t) of an LTIC system described by the equation

d^2y(t)/dt^2 + 5 dy(t)/dt + 6y(t) = dx(t)/dt + x(t)

if the input x(t) = 3e^{-5t}u(t) and all the initial conditions are zero; that is, the system is in the zero state.

The system equation is

(D^2 + 5D + 6)y(t) = (D + 1)x(t)
   [Q(D)]              [P(D)]

Therefore,

H(s) = P(s)/Q(s) = (s + 1)/(s^2 + 5s + 6)

Also,

X(s) = L[3e^{-5t}u(t)] = 3/(s + 5)

and

Y(s) = X(s)H(s) = 3(s + 1) / [(s + 5)(s^2 + 5s + 6)]
     = 3(s + 1) / [(s + 5)(s + 2)(s + 3)]
     = -2/(s + 5) - 1/(s + 2) + 3/(s + 3)

The inverse Laplace transform of this equation is

y(t) = (-2e^{-5t} - e^{-2t} + 3e^{-3t})u(t)
553

Solution of Differential and Integro-Differential Equations

6.3

EXAMPLE 6.15 Laplace Transform to Find System Transfer Functions

Show that the transfer function of:

(a) an ideal delay of T seconds is e^{-sT}
(b) an ideal differentiator is s
(c) an ideal integrator is 1/s

(a) Ideal Delay. For an ideal delay of T seconds, the input x(t) and output y(t) are related by

y(t) = x(t - T)   and   Y(s) = X(s)e^{-sT}   [see Eq. (6.16)]

Therefore,

H(s) = Y(s)/X(s) = e^{-sT}    (6.34)

(b) Ideal Differentiator. For an ideal differentiator, the input x(t) and the output y(t) are related by

y(t) = dx(t)/dt

The Laplace transform of this equation yields

Y(s) = sX(s)   [x(0-) = 0 for a causal signal]

and

H(s) = Y(s)/X(s) = s    (6.35)

(c) Ideal Integrator. For an ideal integrator with zero initial state, that is, y(0-) = 0,

y(t) = ∫_0^t x(τ) dτ   and   Y(s) = (1/s)X(s)

Therefore,

H(s) = 1/s    (6.36)


DRILL 6.4 Transfer Function of an LTIC System

For an LTIC system with transfer function H(s) = (s + 5)/(s^2 + 4s + 3):

(a) Describe the differential equation relating the input x(t) and output y(t).
(b) Find the system response y(t) to the input x(t) = e^{-2t}u(t) if the system is initially in zero state.

ANSWERS

(a) d^2y(t)/dt^2 + 4 dy(t)/dt + 3y(t) = dx(t)/dt + 5x(t)
(b) y(t) = (2e^{-t} - 3e^{-2t} + e^{-3t})u(t)

6.3.3 Stability Equation (6.31) shows that the denominator of H(s) is Q(s), which is apparently identical to the characteristic polynomial Q(A) defined in Ch. 2. Does this mean that the denominator of H(s) is the characteristic polynomial of the system? This may or may not be the case, since if P(s) and Q(s) in Eq. (6.31) have any common factors, they cancel out, and the effective denominatorof H(s) is not necessarily equal to Q(s). Recall also that the system transfer function H(s), like h(t), is defined in terms of measurements at the external terminals. Consequently, H(s) and h(t) are both external descriptions of the system. In contrast, the characteristic polynomial Q(s) is an internal description. Clearly, we can determine only external stability, that is, BIBO stability, from H(s). If all the poles of H(s) are in the LHP, all the terms in h(t) are decaying exponentials, and h(t) is absolutely integrable [see Eq. (2.45)]. t Consequently, the system is BIBO-stable. Otherwise, the system is BIBO-unstable.

Beware of right-half-plane poles! Values of s for which H(s) is ∞ are the poles of H(s). Thus, the poles of H(s) are the values of s for which the denominator of H(s) is zero.


So far, we have assumed that H(s) is a proper function, that is, M ≤ N. We now show that if H(s) is improper, that is, if M > N, the system is BIBO-unstable. In such a case, using long division, we obtain H(s) = R(s) + H'(s), where R(s) is an (M − N)th-order polynomial and H'(s) is a proper transfer function. For example,

H(s) = (s^3 + 4s^2 + 4s + 5)/(s^2 + 3s + 2) = s + (s^2 + 2s + 5)/(s^2 + 3s + 2)

As shown in Eq. (6.35), the term s is the transfer function of an ideal differentiator. If we apply a step function (bounded input) to this system, the output will contain an impulse (unbounded output). Clearly, the system is BIBO-unstable. Moreover, such a system greatly amplifies noise because differentiation enhances higher frequencies, which generally predominate in a noise signal. These are two good reasons to avoid improper systems (M > N). In our future discussion, we shall implicitly assume that the systems are proper, unless stated otherwise.

If P(s) and Q(s) do not have common factors, then the denominator of H(s) is identical to Q(s), the characteristic polynomial of the system. In this case, we can determine internal stability by using the criterion described in Sec. 2.5. Thus, if P(s) and Q(s) have no common factors, the asymptotic stability criterion in Sec. 2.5 can be restated in terms of the poles of the transfer function of a system, as follows:

1. An LTIC system is asymptotically stable if and only if all the poles of its transfer function H(s) are in the LHP. The poles may be simple or repeated.
2. An LTIC system is unstable if and only if either one or both of the following conditions exist: (i) at least one pole of H(s) is in the RHP; (ii) there are repeated poles of H(s) on the imaginary axis.
3. An LTIC system is marginally stable if and only if there are no poles of H(s) in the RHP and some unrepeated poles on the imaginary axis.
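The three criteria can be expressed as a short decision routine. This sketch (our own helper, written in plain Python) assumes the poles are available as complex numbers and coincide with the characteristic roots, i.e., no pole-zero cancellation has occurred:

```python
def classify_lti(poles, tol=1e-9):
    """Classify an LTIC system from the poles of H(s), assuming the poles
    coincide with the characteristic roots (no pole-zero cancellation)."""
    rhp = any(p.real > tol for p in poles)                 # any RHP pole?
    imag_axis = [p for p in poles if abs(p.real) <= tol]   # imaginary-axis poles
    repeated_imag = any(
        sum(1 for q in imag_axis if abs(q - p) <= tol) > 1 for p in imag_axis
    )
    if rhp or repeated_imag:
        return "unstable"
    if imag_axis:
        return "marginally stable"
    return "asymptotically stable"

print(classify_lti([-1 + 2j, -1 - 2j]))   # asymptotically stable
print(classify_lti([0j, -3 + 0j]))        # marginally stable (simple pole on axis)
print(classify_lti([0j, 0j]))             # unstable (repeated pole on axis)
print(classify_lti([2 + 0j]))             # unstable (RHP pole)
```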

The locations of zeros of H(s) have no role in determining the system stability.

Figure 6.10a shows a cascade connection of two LTIC systems: S1 followed by S2. The transfer functions of these systems are H1(s) = 1/(s − 1) and H2(s) = (s − 1)/(s + 1), respectively. Determine the BIBO and asymptotic stability of the composite (cascade) system.

Figure 6.10 Distinction between BIBO and asymptotic stability.


If the impulse responses of S1 and S2 are h1(t) and h2(t), respectively, then the impulse response of the cascade system is h(t) = h1(t) * h2(t). Hence, H(s) = H1(s)H2(s). In the present case,

H(s) = [1/(s − 1)][(s − 1)/(s + 1)] = 1/(s + 1)

The pole of S1 at s = 1 cancels with the zero at s = 1 of S2. This results in a composite system having a single pole at s = −1. If the composite cascade system were enclosed inside a black box with only the input and the output terminals accessible, any measurement from these external terminals would show that the transfer function of the system is 1/(s + 1), without any hint that the box is housing an unstable system (Fig. 6.10b). The impulse response of the cascade system is h(t) = e^(-t)u(t), which is absolutely integrable. Consequently, the system is BIBO-stable. To determine the asymptotic stability, we note that S1 has one characteristic root at 1, and S2 has one root at −1. Recall that the two systems are independent (one does not load the other), and the characteristic modes generated in each subsystem are independent of the other. Clearly, the mode e^t will not be eliminated by the presence of S2. Hence, the composite system has two characteristic roots, located at ±1, and the system is asymptotically unstable, though BIBO-stable. Interchanging the positions of S1 and S2 makes no difference in this conclusion. This example shows that BIBO stability can be misleading. If a system is asymptotically unstable, it will destroy itself (or, more likely, lead to a saturation condition) because of unchecked growth of the response due to intended or unintended stray initial conditions. BIBO stability is not going to save the system. Control systems are often compensated to realize certain desirable characteristics. One should never try to stabilize an unstable system by canceling its RHP pole(s) with RHP zero(s). Such a misguided attempt will fail, not because of the practical impossibility of exact cancellation, but for the more fundamental reason just explained.
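The hidden cancellation is easy to exhibit numerically. In the sketch below (plain Python; polymul and polyval are our own helpers), the composite numerator and denominator of H1(s)H2(s) both vanish at s = 1, revealing the common factor (s − 1) that conceals the unstable internal mode:

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyval(c, s):
    """Evaluate a polynomial (highest power first) at s by Horner's rule."""
    v = 0.0
    for ci in c:
        v = v * s + ci
    return v

# H1(s) = 1/(s - 1), H2(s) = (s - 1)/(s + 1)
num = polymul([1.0], [1.0, -1.0])        # composite numerator: s - 1
den = polymul([1.0, -1.0], [1.0, 1.0])   # composite denominator: s^2 - 1

# Both vanish at s = 1: the RHP pole of S1 is hidden by cancellation, so the
# external (BIBO) behavior looks like 1/(s + 1) even though the internal
# mode e^t is still present.
print(polyval(num, 1.0), polyval(den, 1.0))   # 0.0 0.0
```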

DRILL 6.9 BIBO and Asymptotic Stability

6.3.4 Inverse Systems

If H(s) is the transfer function of a system S, then S_i, its inverse system, has a transfer function H_i(s) given by

H_i(s) = 1/H(s)

This follows from the fact that the cascade of S with its inverse system S_i is an identity system, with impulse response δ(t), implying H(s)H_i(s) = 1. For example, an ideal integrator and its inverse, an ideal differentiator, have transfer functions 1/s and s, respectively, leading to H(s)H_i(s) = 1.
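A discrete-time sketch illustrates the inverse relationship: integrating a signal (running sum) and then differentiating the result (first difference) returns the original samples. The scheme pairing below (left Riemann sum followed by forward difference) is chosen so the round trip is exact up to floating-point roundoff; it is an illustration, not a statement about the text's continuous-time operators.

```python
import math

dt = 0.01
x = [math.sin(0.5 * k * dt) for k in range(1000)]   # sample signal

# running integral y(t) = integral of x (left Riemann sum)
y, acc = [], 0.0
for v in x:
    y.append(acc)
    acc += v * dt

# forward difference dy/dt recovers the original samples
x_rec = [(y[k + 1] - y[k]) / dt for k in range(len(y) - 1)]
err = max(abs(a - b) for a, b in zip(x, x_rec))
print(err)   # tiny: roundoff only
```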

6.4 ANALYSIS OF ELECTRICAL NETWORKS: THE TRANSFORMED NETWORK

Example 6.12 shows how electrical networks may be analyzed by writing the integro-differential equation(s) of the system and then solving these equations by the Laplace transform. We now show that it is also possible to analyze electrical networks directly, without having to write the integro-differential equations. This procedure is considerably simpler because it permits us to treat an electrical network as if it were a resistive network. For this purpose, we need to represent a network in the "frequency domain," where all the voltages and currents are represented by their Laplace transforms. For the sake of simplicity, let us first discuss the case with zero initial conditions. If v(t) and i(t) are the voltage across and the current through an inductor of L henries, then

v(t) = L di(t)/dt

The Laplace transform of this equation (assuming zero initial current) is

V(s) = LsI(s)

Similarly, for a capacitor of C farads, the voltage-current relationship is i(t) = C(dv/dt), and its Laplace transform, assuming zero initial capacitor voltage, yields I(s) = CsV(s); that is,

V(s) = (1/Cs)I(s)

For a resistor of R ohms, the voltage-current relationship is v(t) = Ri(t), and its Laplace transform is

V(s) = RI(s)

Thus, in the "frequency domain," the voltage-current relationships of an inductor and a capacitor are algebraic; these elements behave like resistors of "resistance" Ls and 1/Cs, respectively. The generalized "resistance" of an element is called its impedance and is given by the ratio V(s)/I(s) for the element (under zero initial conditions). The impedances of a resistor of R ohms, an inductor of L henries, and a capacitor of C farads are R, Ls, and 1/Cs, respectively.

Also, the interconnection constraints (Kirchhoff's laws) remain valid for voltages and currents in the frequency domain. To demonstrate this point, let v_j(t) (j = 1, 2, ..., k) be the voltages across k elements in a loop and let i_j(t) (j = 1, 2, ..., m) be the currents entering a node. Then

∑_{j=1}^{k} v_j(t) = 0    and    ∑_{j=1}^{m} i_j(t) = 0

Now if

v_j(t) ⟺ V_j(s)    and    i_j(t) ⟺ I_j(s)

then

∑_{j=1}^{k} V_j(s) = 0    and    ∑_{j=1}^{m} I_j(s) = 0

This result shows that if we represent all the voltages and currents in an electrical network by their Laplace transforms, we can treat the network as if it consisted of the "resistances" R, Ls, and 1/Cs corresponding to a resistor R, an inductor L, and a capacitor C, respectively. The system equations (loop or node) are now algebraic. Moreover, the simplification techniques that have been developed for resistive circuits (equivalent series and parallel impedances, voltage and current divider rules, Thevenin and Norton theorems) can be applied to general electrical networks. The following examples demonstrate these concepts.

EXAMPLE 6.17 Transform Analysis of a Simple Circuit

Find the loop current i(t) in the circuit shown in Fig. 6.11a if all the initial conditions are zero.

Figure 6.11 (a) A circuit and (b) its transformed version.

In the first step, we represent the circuit in the frequency domain, as illustrated in Fig. 6.11b. All the voltages and currents are represented by their Laplace transforms. The voltage 10u(t) is represented by 10/s, and the (unknown) current i(t) is represented by its Laplace transform I(s). All the circuit elements are represented by their respective impedances: the inductor of 1 henry is represented by s, the capacitor of 1/2 farad is represented by 2/s, and the resistor of 3 ohms is represented by 3. We now consider the frequency-domain representation of voltages and currents. The voltage across any element is I(s) times its impedance. Therefore, the total voltage drop in the loop is I(s) times the total loop impedance, and it must be equal to V(s), the transform of the input voltage. The total impedance in the loop is

Z(s) = s + 3 + 2/s = (s^2 + 3s + 2)/s


The input "voltage" is V(s) = 10/s. Therefore, the "loop current" I(s) is

I(s) = V(s)/Z(s) = (10/s)/[(s^2 + 3s + 2)/s] = 10/(s^2 + 3s + 2) = 10/[(s + 1)(s + 2)] = 10/(s + 1) − 10/(s + 2)

The inverse transform of this equation yields the desired result:

i(t) = 10(e^(-t) − e^(-2t))u(t)
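The expansion can be confirmed by the cover-up rule in two lines of plain Python (our own check, not part of the example):

```python
# I(s) = 10/((s + 1)(s + 2)): cover-up residues at the two poles
r1 = 10.0 / (-1.0 + 2.0)   # residue at s = -1
r2 = 10.0 / (-2.0 + 1.0)   # residue at s = -2
print(r1, r2)              # 10.0 -10.0  =>  i(t) = 10(e^(-t) - e^(-2t))u(t)
```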

INITIAL CONDITION GENERATORS

The discussion in which we assumed zero initial conditions can be readily extended to the case of nonzero initial conditions because the initial condition in a capacitor or an inductor can be represented by an equivalent source. We now show that a capacitor C with an initial voltage v(0-) (Fig. 6.12a) can be represented in the frequency domain by an uncharged capacitor of impedance 1/Cs in series with a voltage source of value v(0-)/s (Fig. 6.12b) or as the same uncharged capacitor in parallel with a current source of value Cv(0-) (Fig. 6.12c). Similarly, an inductor L with an initial current i(0-) (Fig. 6.12d) can be represented in the frequency domain by an inductor of impedance Ls in series with a voltage source of value Li(0-) (Fig. 6.12e) or by the same inductor in parallel with a current source of value i(0-)/s (Fig. 6.12f). To prove this point, consider the terminal relationship of the capacitor in Fig. 6.12a:

i(t) = C dv(t)/dt

The Laplace transform of this equation yields

I(s) = C[sV(s) − v(0-)]

This equation can be rearranged as

V(s) = (1/Cs)I(s) + v(0-)/s    (6.37)

Observe that V(s) is the voltage (in the frequency domain) across the charged capacitor and I(s)/Cs is the voltage across the same capacitor without any charge. Therefore, the charged capacitor can be represented by the uncharged capacitor in series with a voltage source of value v(0-)/s, as depicted in Fig. 6.12b. Equation (6.37) can also be rearranged as

V(s) = (1/Cs)[I(s) + Cv(0-)]

Figure 6.12 Initial-condition generators for a capacitor and an inductor.

This equation shows that the charged capacitor voltage V(s) is equal to the uncharged capacitor voltage caused by a current I(s) + Cv(0-). This result is reflected precisely in Fig. 6.12c, where the current through the uncharged capacitor is I(s) + Cv(0-).†

For the inductor in Fig. 6.12d, the terminal equation is

v(t) = L di(t)/dt

and

V(s) = L[sI(s) − i(0-)] = LsI(s) − Li(0-)    (6.38)

† In the time domain, a charged capacitor C with initial voltage v(0-) can be represented as the same capacitor uncharged, in series with a voltage source v(0-)u(t) or in parallel with a current source Cv(0-)δ(t). Similarly, an inductor L with initial current i(0-) can be represented by the same inductor with zero initial current in series with a voltage source Li(0-)δ(t) or with a parallel current source i(0-)u(t).


This expression is consistent with Fig. 6.12e. We can rearrange Eq. (6.38) as

V(s) = Ls[I(s) − i(0-)/s]

This expression is consistent with Fig. 6.12f.

Let us rework Ex. 6.13 using these concepts. Figure 6.13a shows the circuit in Fig. 6.8b with the initial conditions y(0-) = 2 and vC(0-) = 10. Figure 6.13b shows the frequency-domain representation (transformed circuit) of the circuit in Fig. 6.13a. The resistor is represented by its impedance 2; the inductor with initial current of 2 amperes is represented according to the arrangement in Fig. 6.12e, with a series voltage source Ly(0-) = 2. The capacitor with an initial voltage of 10 volts is represented according to the arrangement in Fig. 6.12b, with a series voltage source v(0-)/s = 10/s. Note that the impedance of the inductor is s and that of the capacitor is 5/s. The input of 10u(t) is represented by its Laplace transform 10/s. The total voltage in the loop is (10/s) + 2 − (10/s) = 2, and the loop impedance is s + 2 + 5/s. Therefore,

Y(s) = 2/(s + 2 + 5/s) = 2s/(s^2 + 2s + 5)

which confirms our earlier result in Ex. 6.13.

Figure 6.13 A circuit and its transformed version with initial condition generators.

EXAMPLE 6.18 Transformed Analysis of a Circuit with a Switch

The switch in the circuit of Fig. 6.14a is in the closed position for a long time before t = 0, when it is opened instantaneously. Find the currents y1(t) and y2(t) for t ≥ 0.

Figure 6.14 Using initial condition generators and Thevenin equivalent representation.

Inspection of this circuit shows that when the switch is closed and the steady-state conditions are reached, the capacitor voltage vC = 16 volts and the inductor current y2 = 4 amperes. Therefore, when the switch is opened (at t = 0), the initial conditions are vC(0-) = 16 and y2(0-) = 4. Figure 6.14b shows the transformed version of the circuit in Fig. 6.14a. We have used equivalent sources to account for the initial conditions. The initial capacitor voltage of 16 volts is represented by a series voltage of 16/s, and the initial inductor current of 4 amperes is represented by a source of value Ly2(0-) = 2. From Fig. 6.14b, the loop equations can be written directly in the frequency domain as

Application of Cramer's rule to this equation yields

Y1(s) = 24(s + 2)/(s^2 + 7s + 12) = 24(s + 2)/[(s + 3)(s + 4)] = −24/(s + 3) + 48/(s + 4)

Therefore,

y1(t) = (−24e^(-3t) + 48e^(-4t))u(t)

Similarly, we obtain

Y2(s) = 16/(s + 3) − 12/(s + 4)

and

y2(t) = (16e^(-3t) − 12e^(-4t))u(t)

We also could have used Thevenin's theorem to compute Y1(s) and Y2(s) by replacing the circuit to the right of the capacitor (right of terminals ab) with its Thevenin equivalent, as shown in Fig. 6.14c. Figure 6.14b shows that the Thevenin impedance Z(s) and the Thevenin source V(s) are

Z(s) = [(1/5)(s/2 + 1)] / (1/5 + s/2 + 1) = (s + 2)/(5s + 12)

V(s) = −2(1/5) / (1/5 + s/2 + 1) = −4/(5s + 12)

According to Fig. 6.14c, the current Y1(s) is given by

Y1(s) = [4/s − V(s)] / [1/s + Z(s)] = 24(s + 2)/(s^2 + 7s + 12)

which confirms the earlier result. We may determine Y2(s) in a similar manner.

The switch in the circuit in Fig. 6.15a is at position a for a long time before t = 0, when it is moved instantaneously to position b. Determine the current y1(t) and the output voltage vo(t) for t ≥ 0.

The element values are R1 = 2, R2 = R3 = 1, M = 1, L1 = 2, and L2 = 3.

Figure 6.15 Solution of a coupled inductive network by the transformed circuit method.

Just before switching, the values of the loop currents are 2 and 1, respectively; that is, y1(0-) = 2 and y2(0-) = 1. The equivalent circuits for two types of inductive coupling are illustrated in Figs. 6.15b and 6.15c. For our situation, the circuit in Fig. 6.15c applies. Figure 6.15d shows the transformed version of the circuit in Fig. 6.15a after switching. Note that the inductors L1 + M, L2 + M, and −M are 3, 4, and −1 henries, with impedances 3s, 4s, and −s, respectively. The initial-condition voltages in the three branches are (L1 + M)y1(0-) = 6, (L2 + M)y2(0-) = 4, and −M[y1(0-) − y2(0-)] = −1, respectively. The two loop equations of the circuit are

(2s + 3)Y1(s) + (s − 1)Y2(s) = 10/s + 5
(s − 1)Y1(s) + (3s + 2)Y2(s) = 5

or

[ 2s + 3    s − 1  ] [ Y1(s) ]   [ (5s + 10)/s ]
[ s − 1    3s + 2  ] [ Y2(s) ] = [      5      ]

Solving for Y1(s), we obtain

Y1(s) = (2s^2 + 9s + 4)/[s(s^2 + 3s + 1)] = 4/s − 1/(s + 0.382) − 1/(s + 2.618)

Therefore,

y1(t) = (4 − e^(-0.382t) − e^(-2.618t))u(t)

Similarly,

Y2(s) = (s^2 + 2s + 2)/[s(s^2 + 3s + 1)] = 2/s − 1.618/(s + 0.382) + 0.618/(s + 2.618)

and

y2(t) = (2 − 1.618e^(-0.382t) + 0.618e^(-2.618t))u(t)

The output voltage is therefore

vo(t) = R3 y2(t) = (2 − 1.618e^(-0.382t) + 0.618e^(-2.618t))u(t)

DRILL
For the RLC circuit in Fig. 6.16, the input is switched on at t = 0. The initial conditions are y(0-) = 2 amperes and vC(0-) = 50 volts. Find the loop current y(t) for t > 0.

ANSWER
y(t) = 10√2 e^(-t) cos(2t + 81.8°)u(t)
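Returning to the coupled-network example, the partial fractions of Y2(s) can be verified numerically. This sketch (plain Python) evaluates the cover-up residues at the exact poles 0 and (−3 ± √5)/2 ≈ −0.382, −2.618:

```python
import math

# Y2(s) = (s^2 + 2s + 2)/(s(s^2 + 3s + 1)); poles at 0 and the roots of s^2 + 3s + 1
p1 = (-3 + math.sqrt(5)) / 2      # approx. -0.382
p2 = (-3 - math.sqrt(5)) / 2      # approx. -2.618

num = lambda s: s * s + 2 * s + 2
res0 = num(0.0) / ((0.0 - p1) * (0.0 - p2))   # residue of the pole at s = 0
res1 = num(p1) / (p1 * (p1 - p2))             # residue at s = p1
res2 = num(p2) / (p2 * (p2 - p1))             # residue at s = p2
print(round(res0, 3), round(res1, 3), round(res2, 3))   # 2.0 -1.618 0.618
```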


Figure 6.16 Circuit for the drill problem: a 24 V source switched in at t = 0 into a series loop with a 2 Ω resistor, 1 H inductor, and 0.2 F capacitor.

6.4.1 Analysis of Active Circuits

Although we have considered examples of only passive networks so far, the circuit analysis procedure using the Laplace transform is also applicable to active circuits. All that is needed is to replace the active elements with their mathematical models (or equivalent circuits) and proceed as before. The operational amplifier (depicted by the triangular symbol in Fig. 6.17a) is a well-known element in modern electronic circuits. The terminals with the positive and the negative signs correspond to the noninverting and inverting terminals, respectively. This means that the polarity of the output voltage v2 is the same as that of the input voltage at the terminal marked by the positive sign (noninverting). The opposite is true for the inverting terminal, marked by the negative sign.



Figure 6.17 Operational amplifier and its equivalent circuit.

Figure 6.17b shows the model (equivalent circuit) of the operational amplifier (op amp) in Fig. 6.17a. A typical op amp has a very large gain A: the output voltage is v2 = −Avx, where A is typically 10^5 to 10^6. The input impedance is very high, on the order of 10^12 ohms, and the output impedance is very low (50 to 100 ohms). For most applications, we are justified in assuming the gain A and the input impedance to be infinite and the output impedance to be zero. For this reason, we see an ideal voltage source at the output.

Consider now the operational amplifier with resistors Ra and Rb connected, as shown in Fig. 6.17c. This configuration is known as the noninverting amplifier. Observe that the input polarities in this configuration are inverted in comparison to those in Fig. 6.17a. We now show that the output voltage v2 and the input voltage v1 in this case are related by

v2(t) = Kv1(t)    where K = 1 + Rb/Ra

First, we recognize that because the input impedance and the gain of the operational amplifier approach infinity, the input current ix and the input voltage vx in Fig. 6.17c are infinitesimal and may be taken as zero. The dependent source in this case is Avx instead of −Avx because of the input polarity inversion. The dependent source Avx (see Fig. 6.17b) at the output will generate current io, as illustrated in Fig. 6.17c. Now

v2(t) = (Ra + Rb) io(t)

and also

v1(t) = Ra io(t)

Therefore,

v2(t)/v1(t) = (Ra + Rb)/Ra = 1 + Rb/Ra = K

or

v2(t) = Kv1(t)

The equivalent circuit of the noninverting amplifier is depicted in Fig. 6.17d.

The circuit in Fig. 6.18a is called the Sallen-Key circuit, which is frequently used in filter design. Find the transfer function H(s) relating the output voltage vo(t) to the input voltage vi(t).


Figure 6.18 (a) Sallen-Key circuit and (b) its equivalent.

We are required to find

H(s) = Vo(s)/Vi(s)

assuming all initial conditions to be zero. Figure 6.18b shows the transformed version of the circuit in Fig. 6.18a. The noninverting amplifier is replaced by its equivalent circuit. All the voltages are replaced by their Laplace transforms, and all the circuit elements are shown by their impedances. All the initial conditions are assumed to be zero, as required for determining H(s). We shall use node analysis to derive the result. There are two unknown node voltages, Va(s) and Vb(s), requiring two node equations.


At node a, the current in R1 (leaving the node) is [Va(s) − Vi(s)]/R1. Similarly, the current in R2 (leaving the node) is [Va(s) − Vb(s)]/R2, and the current in capacitor C1 (leaving the node) is [Va(s) − Vo(s)]C1s = [Va(s) − KVb(s)]C1s. The sum of all three currents is zero. Therefore,

[Va(s) − Vi(s)]/R1 + [Va(s) − Vb(s)]/R2 + [Va(s) − KVb(s)]C1s = 0

or

(1/R1 + 1/R2 + C1s)Va(s) − (1/R2 + KC1s)Vb(s) = Vi(s)/R1

Similarly, the node equation at node b yields

[Vb(s) − Va(s)]/R2 + Vb(s)C2s = 0

or

−(1/R2)Va(s) + (1/R2 + C2s)Vb(s) = 0

The two node equations in the two unknown node voltages Va(s) and Vb(s) can be expressed in matrix form as

[ G1 + G2 + C1s    −(G2 + KC1s) ] [ Va(s) ]   [ G1 Vi(s) ]
[     −G2            G2 + C2s   ] [ Vb(s) ] = [     0     ]

where G1 = 1/R1 and G2 = 1/R2.

Application of Cramer's rule yields

Vb(s)/Vi(s) = G1G2 / {C1C2 s^2 + [G1C2 + G2C2 + G2C1(1 − K)]s + G1G2}

Dividing the numerator and denominator by C1C2, we obtain

Vb(s)/Vi(s) = ω0^2 / (s^2 + 2αs + ω0^2)

where

ω0^2 = G1G2/(C1C2)    and    2α = [G1C2 + G2C2 + G2C1(1 − K)]/(C1C2)

Now Vo(s) = KVb(s). Therefore,

H(s) = Vo(s)/Vi(s) = K ω0^2 / (s^2 + 2αs + ω0^2)
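To make the result concrete, the sketch below plugs in illustrative component values (equal resistors and capacitors with K = 1.586; these numbers are our choice, not from the text) and computes ω0 and 2α. With these values, 2α ≈ √2 ω0, the classic second-order Butterworth shape, and the DC gain H(0) equals K.

```python
# Assumed (illustrative) component values
R1 = R2 = 10e3        # ohms
C1 = C2 = 10e-9       # farads
K = 1.586             # noninverting gain 1 + Rb/Ra

G1, G2 = 1 / R1, 1 / R2
w0 = (G1 * G2 / (C1 * C2)) ** 0.5
two_alpha = (G1 * C2 + G2 * C2 + G2 * C1 * (1 - K)) / (C1 * C2)

# H(s) = K*w0^2/(s^2 + 2*alpha*s + w0^2), so the DC gain H(0) is K
dc_gain = K * w0 ** 2 / w0 ** 2
print(w0, two_alpha / w0, dc_gain)   # 10000.0, about 1.414, 1.586
```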


6.5 BLOCK DIAGRAMS

Large systems may consist of an enormous number of components or elements. As anyone who has seen the circuit diagram of a radio or a television receiver can appreciate, analyzing such systems all at once could be next to impossible. In such cases, it is convenient to represent a system by suitably interconnected subsystems, each of which can be readily analyzed. Each subsystem can be characterized in terms of its input-output relationships. A linear system can be characterized by its transfer function H(s). Figure 6.19a shows a block diagram of a system with a transfer function H(s) and its input and output X(s) and Y(s), respectively. Subsystems may be interconnected by using cascade, parallel, and feedback interconnections (Figs. 6.19b, 6.19c, 6.19d), the three elementary types. When transfer functions appear in cascade, as depicted in Fig. 6.19b, then, as shown earlier, the transfer function of the overall system is the product of the two transfer functions. This result can also be proved by observing that in Fig. 6.19b

Y(s)/X(s) = [W(s)/X(s)][Y(s)/W(s)] = H1(s)H2(s)

Figure 6.19 Elementary connections of blocks and their equivalents.

We can extend this result to any number of transfer functions in cascade. It follows from this discussion that subsystems in cascade can be interchanged without affecting the overall transfer function. This commutation property of LTI systems follows directly from the commutative (and associative) property of convolution, which we proved in Sec. 2.4.3. Every possible ordering of the subsystems yields the same overall transfer function. However, there may be practical consequences (such as sensitivity to parameter variations) affecting the behavior of different orderings.

Similarly, when two transfer functions H1(s) and H2(s) appear in parallel, as illustrated in Fig. 6.19c, the overall transfer function is given by H1(s) + H2(s), the sum of the two transfer functions. The proof is trivial. This result can be extended to any number of systems in parallel.

When the output is fed back to the input, as shown in Fig. 6.19d, the overall transfer function Y(s)/X(s) can be computed as follows. The inputs to the adder are X(s) and −H(s)Y(s). Therefore, E(s), the output of the adder, is

E(s) = X(s) − H(s)Y(s)

But

Y(s) = G(s)E(s) = G(s)[X(s) − H(s)Y(s)]

Therefore,

Y(s)[1 + G(s)H(s)] = G(s)X(s)

so that

Y(s)/X(s) = G(s)/[1 + G(s)H(s)]    (6.39)

Therefore, the feedback loop can be replaced by a single block with the transfer function shown in Eq. (6.39) (see Fig. 6.19d). In deriving these equations, we implicitly assume that when the output of one subsystem is connected to the input of another subsystem, the latter does not load the former. For example, the transfer function H1(s) in Fig. 6.19b is computed by assuming that the second subsystem H2(s) is not connected. This is the same as assuming that H2(s) does not load H1(s). In other words, the input-output relationship of H1(s) will remain unchanged regardless of whether H2(s) is connected. Many modern circuits use op amps with high input impedances, so this assumption is justified. When such an assumption is not valid, H1(s) must be computed under operating conditions [i.e., with H2(s) connected].

Consider the feedback system of Fig. 6.19d with G(s) = K/(s(s + 8)) and H(s) = 1. Use MATLAB to determine the transfer function for each of the following cases: (a) K = 7, (b) K = 16, and (c) K = 80.


We solve these cases using the control system toolbox function feedback.

(a)
>> H = tf(1,1); K = 7; G = tf([0 0 K],[1 8 0]); TFa = feedback(G,H)

TFa =

          7
   -------------
   s^2 + 8 s + 7

Thus, Ha(s) = 7/(s^2 + 8s + 7).

(b)
>> H = tf(1,1); K = 16; G = tf([0 0 K],[1 8 0]); TFb = feedback(G,H)

TFb =

          16
   --------------
   s^2 + 8 s + 16

Thus, Hb(s) = 16/(s^2 + 8s + 16).

(c)
>> H = tf(1,1); K = 80; G = tf([0 0 K],[1 8 0]); TFc = feedback(G,H)

TFc =

          80
   --------------
   s^2 + 8 s + 80

Thus, Hc(s) = 80/(s^2 + 8s + 80).
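The same closed-loop computation can be done without the toolbox by polynomial arithmetic on the coefficient lists. The sketch below (plain Python; the helper names are ours) computes G/(1 + GH) from Eq. (6.39):

```python
def feedback_tf(num_g, den_g, num_h, den_h):
    """Closed-loop transfer function G/(1 + GH) as coefficient lists
    (highest power first); a plain-Python stand-in for MATLAB's feedback."""
    def polymul(a, b):
        out = [0.0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    def polyadd(a, b):
        n = max(len(a), len(b))
        a = [0.0] * (n - len(a)) + list(a)
        b = [0.0] * (n - len(b)) + list(b)
        return [x + y for x, y in zip(a, b)]

    num = polymul(num_g, den_h)
    den = polyadd(polymul(den_g, den_h), polymul(num_g, num_h))
    return num, den

for K in (7, 16, 80):
    num, den = feedback_tf([K], [1, 8, 0], [1], [1])
    print(num, den)    # e.g. [7.0] [1.0, 8.0, 7.0] for K = 7
```

This reproduces the three denominators s^2 + 8s + K found above.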

6.6 SYSTEM REALIZATION

We now develop a systematic method for the realization (or implementation) of an arbitrary Nth-order transfer function. The most general transfer function with M = N is given by

H(s) = (b0 s^N + b1 s^(N-1) + ··· + b_(N-1) s + b_N)/(s^N + a1 s^(N-1) + ··· + a_(N-1) s + a_N)    (6.40)

Since realization is basically a synthesis problem, there is no unique way of realizing a system. A given transfer function can be realized in many different ways. A transfer function H(s) can be realized by using integrators or differentiators along with adders and multipliers. We avoid use of differentiators for practical reasons discussed in Secs. 2.1 and 6.3.3. Hence, in our implementation, we shall use integrators along with scalar multipliers and adders. We are already familiar with representation of all these elements except the integrator. The integrator can be represented by a box with an integral sign (time-domain representation, Fig. 6.20a) or by a box with transfer function 1/s (frequency-domain representation, Fig. 6.20b).


Figure 6.20 (a) Time-domain and (b) frequency-domain representations of an integrator.

6.6.1 Direct Form I Realization

Rather than realize the general Nth-order system described by Eq. (6.40), we begin with the specific case of the following third-order system and then extend the results to the Nth-order case:

H(s) = (b0 s^3 + b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)

We can express H(s) as

H(s) = (b0 + b1/s + b2/s^2 + b3/s^3) × 1/(1 + a1/s + a2/s^2 + a3/s^3) = H1(s)H2(s)

We can realize H(s) as a cascade of transfer function H1(s) followed by H2(s), as depicted in Fig. 6.21a, where the output of H1(s) is denoted by W(s). Because of the commutative property of LTI system transfer functions in cascade, we can also realize H(s) as a cascade of H2(s) followed by H1(s), as illustrated in Fig. 6.21b, where the (intermediate) output of H2(s) is denoted by V(s).

The output of H1(s) in Fig. 6.21a is given by W(s) = H1(s)X(s). Hence,

W(s) = (b0 + b1/s + b2/s^2 + b3/s^3)X(s)    (6.41)

Also, the output Y(s) and the input W(s) of H2(s) in Fig. 6.21a are related by Y(s) = H2(s)W(s). Hence,

Y(s) = W(s)/(1 + a1/s + a2/s^2 + a3/s^3)    (6.42)

We shall first realize H1(s). Equation (6.41) shows that the output W(s) can be synthesized by adding the input b0X(s) to b1(X(s)/s), b2(X(s)/s^2), and b3(X(s)/s^3). Because the transfer function

Figure 6.21 Realization of a transfer function in two steps.

Figure 6.22 Direct form I realization of an LTIC system: (a) third-order and (b) Nth-order.

of an integrator is 1/s, the signals X(s)/s, X(s)/s^2, and X(s)/s^3 can be obtained by successive integration of the input x(t). The left-half section of Fig. 6.22a shows how W(s) can be synthesized from X(s), according to Eq. (6.41). Hence, this section represents a realization of H1(s).

To complete the picture, we shall realize H2(s), which is specified by Eq. (6.42). We can rearrange Eq. (6.42) as

Y(s) = W(s) − (a1/s + a2/s^2 + a3/s^3)Y(s)    (6.43)

Hence, to obtain Y(s), we subtract a1 Y(s)/s, a2 Y(s)/s^2, and a3 Y(s)/s^3 from W(s). We have already obtained W(s) from the first step [output of H1(s)]. To obtain the signals Y(s)/s, Y(s)/s^2, and Y(s)/s^3, we assume that we already have the desired output Y(s). Successive integration of Y(s) yields the needed signals Y(s)/s, Y(s)/s^2, and Y(s)/s^3. We now synthesize the final output Y(s) according to Eq. (6.43), as seen in the right-half section of Fig. 6.22a.† The left-half section in Fig. 6.22a represents H1(s), and the right-half is H2(s). We can generalize this procedure, known as the direct form I (DFI) realization, for any value of N. This procedure requires 2N integrators to realize an Nth-order transfer function, as shown in Fig. 6.22b.

† It may seem odd that we first assumed the existence of Y(s), integrated it successively, and then in turn generated Y(s) from W(s) and the three successive integrals of signal Y(s). This procedure poses a dilemma similar to "Which came first, the chicken or the egg?" The problem is satisfactorily resolved by writing the expression for Y(s) at the output of the right-hand adder (at the top) in Fig. 6.22a and verifying that this expression is indeed the same as Eq. (6.42).

6.6.2 Direct Form II Realization

In the direct form I, we realize H(s) by implementing H1(s) followed by H2(s), as shown in Fig. 6.21a. We can also realize H(s), as shown in Fig. 6.21b, where H2(s) is followed by H1(s). This procedure is known as the direct form II realization. Figure 6.23a shows direct form II

Figure 6.23 Direct form II realization of an Nth-order LTIC system.

realization, where we have interchanged the sections representing H1(s) and H2(s) in Fig. 6.22b. The output of H2(s) in this case is denoted by V(s).† An interesting observation in Fig. 6.23a is that the input signal to both chains of integrators is V(s). Clearly, the outputs of the integrators in the left-side chain are identical to the corresponding outputs of the right-side integrator chain, thus making the right-side chain redundant. We can eliminate this chain and obtain the required signals from the left-side chain, as shown in Fig. 6.23b. This implementation halves the number of integrators to N and, thus, is more efficient in hardware utilization than either Fig. 6.22b or Fig. 6.23a. This is the direct form II (DFII) realization.

An Nth-order differential equation with N = M has the property that its implementation requires a minimum of N integrators. A realization is canonic if the number of integrators used in the realization is equal to the order of the transfer function realized. Thus, a canonic realization has no redundant integrators. The DFII form in Fig. 6.23b is a canonic realization, and is also called the direct canonic form. Note that the DFI is noncanonic. The direct form I realization (Fig. 6.22b) implements zeros first [the left-half section, represented by H1(s)] followed by realization of poles [the right-half section, represented by H2(s)] of H(s). In contrast, the canonic direct form implements poles first, followed by zeros. Although both realizations result in the same transfer function, they generally behave differently from the viewpoint of sensitivity to parameter variations.

†The reader can show that the equations relating X(s), V(s), and Y(s) in Fig. 6.23a are

V(s) = X(s)/(s^N + a_1 s^(N−1) + ··· + a_(N−1) s + a_N)

and

Y(s) = (b_0 s^N + b_1 s^(N−1) + ··· + b_(N−1) s + b_N) V(s)


CHAPTER 6 CONTINUOUS-TIME SYSTEM ANALYSIS USING THE LAPLACE TRANSFORM

EXAMPLE 6.22 Canonic Direct Form Realizations

Find the canonic direct form realization of the following transfer functions:

(a) 5/(s + 7)
(b) s/(s + 7)
(c) (s + 5)/(s + 7)
(d) (4s + 28)/(s² + 6s + 5)

All four of these transfer functions are special cases of H(s) in Eq. (6.40).

(a) The transfer function 5/(s + 7) is of the first order (N = 1); therefore, we need only one integrator for its realization. The feedback and feedforward coefficients are

a₁ = 7 and b₀ = 0, b₁ = 5

The realization is depicted in Fig. 6.24a. Because N = 1, there is a single feedback connection from the output of the integrator to the input adder with coefficient a₁ = 7. For N = 1, there are, in general, N + 1 = 2 feedforward connections. However, in this case, b₀ = 0, and there is only one feedforward connection, with coefficient b₁ = 5, from the output of the integrator to the output adder. Because there is only one input signal to the output adder, we can do away with the adder, as shown in Fig. 6.24a.

(b) H(s) = s/(s + 7)

In this first-order transfer function, b₁ = 0. The realization is shown in Fig. 6.24b. Because there is only one signal to be added at the output adder, we can discard the adder.

(c) H(s) = (s + 5)/(s + 7)

The realization appears in Fig. 6.24c. Here, H(s) is a first-order transfer function with a₁ = 7 and b₀ = 1, b₁ = 5. There is a single feedback connection (with coefficient 7) from the integrator output to the input adder. There are two feedforward connections (Fig. 6.24c).†

†When M = N (as in this case), H(s) can also be realized in another way by recognizing that

H(s) = 1 − 2/(s + 7)

We can then realize H(s) as a parallel combination of two transfer functions, as indicated by this equation.

(d) H(s) = (4s + 28)/(s² + 6s + 5)

This is a second-order system with b₀ = 0, b₁ = 4, b₂ = 28, a₁ = 6, and a₂ = 5. Figure 6.24d shows a realization with two feedback connections and two feedforward connections.


Figure 6.24 Realizations of H(s) for Ex. 6.22.

DRILL 6.11 Canonic Direct Realization

Give the canonic direct realization of

H(s) = 2s/(s² + 6s + 25)

6.6.3 Cascade and Parallel Realizations

An Nth-order transfer function H(s) can be expressed as a product or a sum of N first-order transfer functions. Accordingly, we can also realize H(s) as a cascade (series) or parallel form of these N


Figure 6.25 Realization of (4s + 28)/((s + 1)(s + 5)): (a) cascade form and (b) parallel form.

first-order transfer functions. Consider, for instance, the transfer function in part (d) of Ex. 6.22:

H(s) = (4s + 28)/(s² + 6s + 5)

We can express H(s) in product form as

H(s) = (4s + 28)/((s + 1)(s + 5)) = [(4s + 28)/(s + 1)] · [1/(s + 5)] = H₁(s) · H₂(s)

We can also express H(s) as a sum of partial fractions as

H(s) = (4s + 28)/((s + 1)(s + 5)) = 6/(s + 1) − 2/(s + 5) = H₃(s) + H₄(s)

with H₃(s) = 6/(s + 1) and H₄(s) = −2/(s + 5).

These equations give us the option of realizing H(s) as a cascade of H₁(s) and H₂(s), as shown in Fig. 6.25a, or as a parallel connection of H₃(s) and H₄(s), as depicted in Fig. 6.25b. Each of the first-order transfer functions in Fig. 6.25 can be implemented by using the canonic direct realizations discussed earlier. This discussion by no means exhausts all the possibilities. In the cascade form alone, there are different ways of grouping the factors in the numerator and the denominator of H(s), and each grouping can be realized in DFI or canonic direct form. Accordingly, several cascade forms are possible. In Sec. 6.6.4, we shall discuss yet another form that essentially doubles the number of realizations discussed so far. From a practical viewpoint, parallel and cascade forms are preferable because parallel and certain cascade forms are numerically less sensitive than the canonic direct form to small parameter variations in the system. Qualitatively, this difference can be explained by the fact that in a canonic realization all the coefficients interact with each other, and a change in any coefficient will be magnified through its repeated influence from feedback and feedforward connections. In a parallel realization, in contrast, a change in a coefficient will affect only a localized segment; the case with a cascade realization is similar. In the examples of cascade and parallel realization, we have separated H(s) into first-order factors. For H(s) of higher order, we could group H(s) into factors, not all of which are necessarily of the first order. For example, if H(s) is a third-order transfer function, we could realize this function as a cascade (or a parallel) combination of a first-order and a second-order factor.
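A quick numerical sanity check (an illustrative Python sketch, not from the text) that the cascade and parallel decompositions reproduce H(s) = (4s + 28)/((s + 1)(s + 5)) at arbitrary test points:

```python
def H(s):
    return (4*s + 28) / ((s + 1) * (s + 5))

def H_cascade(s):            # H1(s) * H2(s)
    return ((4*s + 28) / (s + 1)) * (1 / (s + 5))

def H_parallel(s):           # H3(s) + H4(s) = 6/(s+1) - 2/(s+5)
    return 6 / (s + 1) - 2 / (s + 5)

pts = [1j, 2.0, -0.5 + 4j, 3 - 1j]
err_cascade = max(abs(H(s) - H_cascade(s)) for s in pts)
err_parallel = max(abs(H(s) - H_parallel(s)) for s in pts)
```

Agreement at a handful of complex points (away from the poles) is a convenient way to catch partial-fraction slips before drawing the realization.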


REALIZATION OF COMPLEX CONJUGATE POLES

The complex poles in H(s) should be realized as a second-order (quadratic) factor because we cannot implement multiplication by complex numbers. Consider, for example,

H(s) = (10s + 50)/((s + 3)(s² + 4s + 13))
     = (10s + 50)/((s + 3)(s + 2 − j3)(s + 2 + j3))
     = 2/(s + 3) − (1 + j2)/(s + 2 − j3) − (1 − j2)/(s + 2 + j3)

We cannot realize first-order transfer functions individually with the poles −2 ± j3 because they require multiplication by complex numbers in the feedback and the feedforward paths. Therefore, we need to combine the conjugate poles and realize them as a second-order transfer function.† In the present example, we can create a cascade realization from H(s) expressed in product form as

H(s) = [10/(s + 3)] · [(s + 5)/(s² + 4s + 13)]

Similarly, we can create a parallel realization from H(s) expressed in sum form as

H(s) = 2/(s + 3) − (2s − 8)/(s² + 4s + 13)

REALIZATION OF REPEATED POLES

When repeated poles occur, the procedure for canonic and cascade realization is exactly the same as before. For a parallel realization, however, the procedure requires special handling, as explained in Ex. 6.23.

EXAMPLE 6.23 Parallel Realization with Repeated Poles

Determine the parallel realization of

H(s) = (7s² + 37s + 51)/((s + 2)(s + 3)²) = 5/(s + 2) + 2/(s + 3) − 3/(s + 3)²

This third-order transfer function should require no more than three integrators. But if we try to realize each of the three partial fractions separately, we require four integrators because of the one second-order term. This difficulty can be avoided by observing that the terms 1/(s + 3) and

†It is possible to realize complex conjugate poles indirectly by using a cascade of two first-order transfer functions and feedback. A transfer function with poles −a ± jb can be realized by using a cascade of two identical first-order transfer functions, each having a pole at −a (see Prob. 6.6-15).

1/(s + 3)² can be realized with a cascade of two subsystems, each having a transfer function 1/(s + 3), as shown in Fig. 6.26. Each of the three first-order transfer functions in Fig. 6.26 may now be realized as in Fig. 6.24.

Figure 6.26 Parallel realization of (7s² + 37s + 51)/((s + 2)(s + 3)²).

6.6.4 Transposed Realization

Two realizations are said to be equivalent if they have the same transfer function. A simple way to generate an equivalent realization from a given realization is to use its transpose. To generate the transpose of any realization, we change the given realization as follows:

1. Reverse all the arrow directions without changing the scalar multiplier values.
2. Replace pickoff nodes by adders and vice versa.
3. Replace the input X(s) with the output Y(s) and vice versa.

Figure 6.27a shows the transposed version of the canonic direct form realization in Fig. 6.23b, found according to the rules just listed. Figure 6.27b is Fig. 6.27a reoriented in the conventional form so that the input X(s) appears at the left and the output Y(s) appears at the right. Observe that this realization is also canonic. Rather than prove the theorem on equivalence of the transposed realizations, we shall verify that the transfer function of the realization in Fig. 6.27b is identical to that in Eq. (6.40).


Figure 6.27 Realization of an Nth-order LTI transfer function in the transposed form.

Figure 6.27b shows that Y(s) is being fed back through N paths. The fed-back signal appearing at the input of the top adder is

−(a₁ s⁻¹ + a₂ s⁻² + ··· + a_N s⁻ᴺ) Y(s)

The signal X(s), fed to the top adder through N + 1 forward paths, contributes

(b₀ + b₁ s⁻¹ + ··· + b_N s⁻ᴺ) X(s)

The output Y(s) is equal to the sum of these two signals (feedforward and feedback). Hence,

Y(s) = −(a₁ s⁻¹ + a₂ s⁻² + ··· + a_N s⁻ᴺ) Y(s) + (b₀ + b₁ s⁻¹ + ··· + b_N s⁻ᴺ) X(s)

Transporting all the Y(s) terms to the left side and multiplying throughout by sᴺ, we obtain

(sᴺ + a₁ sᴺ⁻¹ + ··· + a_(N−1) s + a_N) Y(s) = (b₀ sᴺ + b₁ sᴺ⁻¹ + ··· + b_(N−1) s + b_N) X(s)

Consequently,

H(s) = Y(s)/X(s) = (b₀ sᴺ + b₁ sᴺ⁻¹ + ··· + b_(N−1) s + b_N)/(sᴺ + a₁ sᴺ⁻¹ + ··· + a_(N−1) s + a_N)


Hence, the transfer function H(s) is identical to that in Eq. (6.40). We have essentially doubled the number of possible realizations. Every realization that was found earlier has a transpose. Note that the transpose of a transpose results in the same realization.
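For a single-input, single-output state-space model, transposition amounts to A → Aᵀ with B and C exchanging roles; H(s) = C(sI − A)⁻¹B + D is unchanged because a scalar equals its own transpose. An illustrative Python check (the companion-form matrices below are an assumption standing in for the DFII structure, not taken from the text):

```python
def tf_eval(s, A, B, C, D):
    """C (sI - A)^{-1} B + D for a 2-state model, via the 2x2 inverse."""
    m00, m01 = s - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], s - A[1][1]
    det = m00 * m11 - m01 * m10
    x0 = (m11 * B[0] - m01 * B[1]) / det
    x1 = (-m10 * B[0] + m00 * B[1]) / det
    return C[0] * x0 + C[1] * x1 + D

# Companion-form model of (4s + 28)/(s^2 + 6s + 5)
A = [[-6.0, -5.0], [1.0, 0.0]]
B = [1.0, 0.0]
C = [4.0, 28.0]
D = 0.0

At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # A transposed
Bt, Ct = list(C), list(B)                       # B and C swap roles

err = max(abs(tf_eval(s, A, B, C, D) - tf_eval(s, At, Bt, Ct, D))
          for s in [1j, 2 + 1j, -0.3 + 2j])
```

The identical transfer function for the transposed matrices mirrors the block-diagram argument just given.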

EXAMPLE 6.24 Transposed Realizations

Find the transposed canonic direct realizations for parts (c) and (d) of Ex. 6.22 (Figs. 6.24c and 6.24d). The transfer functions are:

(a) (s + 5)/(s + 7)
(b) (4s + 28)/(s² + 6s + 5)

Both these realizations are special cases of the one in Fig. 6.27b.

(a) In this case, N = 1 with a₁ = 7, b₀ = 1, b₁ = 5. The desired realization can be obtained by transposing Fig. 6.24c. However, we already have the general model of the transposed realization in Fig. 6.27b. The desired solution is a special case of Fig. 6.27b with N = 1 and a₁ = 7, b₀ = 1, b₁ = 5, as shown in Fig. 6.28a.

(b) In this case, N = 2 with b₀ = 0, b₁ = 4, b₂ = 28, a₁ = 6, a₂ = 5. Using the model of Fig. 6.27b, we obtain the desired realization, as shown in Fig. 6.28b.


Figure 6.28 Transposed canonic direct form realizations of (a) (s + 5)/(s + 7) and (b) (4s + 28)/(s² + 6s + 5).


DRILL 6.13 Transposed Realizations

Find the transposed DFI and transposed canonic direct (TDFII) realizations of H(s) in Drill 6.11.

6.6.5 Using Operational Amplifiers for System Realization In this section, we discuss practical implementation of the realizations described in Sec. 6.6.4. Earlier we saw that the basic elements required for the synthesis of an LTIC system (or a given transfer function) are (scalar) multipliers, integrators, and adders. All these elements can be realized by operational amplifier (op-amp) circuits. OPERATIONAL AMPLIFIER CIRCUITS

Figure 6.29 shows an op-amp circuit in the frequency domain (the transformed circuit). Because the input impedance of the op amp is infinite (very high), all the current I(s) flows in the feedback path, as illustrated. Moreover, Vx(s), the voltage at the input of the op amp, is zero (very small) because of the infinite (very large) gain of the op amp. Therefore, for all practical purposes,

Y(s) = −I(s) Zf(s)

Moreover, because Vx ≈ 0,

I(s) = X(s)/Z(s)

Substitution of the second equation in the first yields

Y(s) = −[Zf(s)/Z(s)] X(s)

Therefore, the op-amp circuit in Fig. 6.29 has the transfer function

H(s) = −Zf(s)/Z(s)

By properly choosing Z(s) and Zf(s), we can obtain a variety of transfer functions, as the following development shows.


Figure 6.29 A basic inverting configuration op-amp circuit.

THE SCALAR MULTIPLIER

If we use a resistor Rf in the feedback and a resistor R at the input (Fig. 6.30a), then Zf(s) = Rf, Z(s) = R, and

H(s) = −Rf/R

The system acts as a scalar multiplier (or an amplifier) with a negative gain Rf/R. A positive gain can be obtained by using two such multipliers in cascade or by using a single noninverting amplifier, as depicted in Fig. 6.17c. Figure 6.30a also shows the compact symbol used in circuit diagrams for a scalar multiplier.

THE INTEGRATOR

If we use a capacitor C in the feedback and a resistor R at the input (Fig. 6.30b), then Zf(s) = 1/Cs, Z(s) = R, and

H(s) = −(1/RC)(1/s)

The system acts as an ideal integrator with a gain −1/RC. Figure 6.30b also shows the compact symbol used in circuit diagrams for an integrator.


Figure 6.30 (a) Op-amp inverting amplifier. (b) Integrator.


Figure 6.31 Op-amp summing and amplifying circuit.

THE ADDER

Consider now the circuit in Fig. 6.31a with r inputs X₁(s), X₂(s), ..., X_r(s). As usual, the input voltage Vx(s) ≈ 0 because the op-amp gain → ∞. Moreover, the current going into the op amp is very small (≈ 0) because the input impedance → ∞. Therefore, the total current in the feedback resistor Rf is I₁(s) + I₂(s) + ··· + I_r(s). Moreover, because Vx(s) = 0,

I_j(s) = X_j(s)/R_j,  j = 1, 2, ..., r

Also,

Y(s) = −Rf [I₁(s) + I₂(s) + ··· + I_r(s)] = k₁X₁(s) + k₂X₂(s) + ··· + k_rX_r(s)

where

k_j = −Rf/R_j

Clearly, the circuit in Fig. 6.31 serves as an adder and an amplifier with any desired gain for each of the input signals. Figure 6.31b shows the compact symbol used in circuit diagrams for an adder with r inputs.

EXAMPLE 6.25 Op-Amp Realization

Use op-amp circuits to realize the canonic direct form of the transfer function

H(s) = (2s + 5)/(s² + 4s + 10)



Figure 6.32 Op-amp realization of a second-order transfer function (2s + 5)/(s² + 4s + 10).


The basic canonic realization is shown in Fig. 6.32a. The same realization with horizontal reorientation is shown in Fig. 6.32b. Signals at various points are also indicated in the realization. For convenience, we denote the output of the last integrator by W(s). Consequently, the signals at the inputs of the two integrators are sW(s) and s²W(s), as shown in Figs. 6.32a and 6.32b. Op-amp elements (multipliers, integrators, and adders) change the polarity of the output signals. To incorporate this fact, we modify the canonic realization in Fig. 6.32b to that depicted in Fig. 6.32c. In Fig. 6.32b, the successive outputs of the adder and the integrators are s²W(s), sW(s), and W(s), respectively. Because of polarity reversals in op-amp circuits, these outputs are −s²W(s), sW(s), and −W(s), respectively, in Fig. 6.32c. This polarity reversal requires corresponding modifications in the signs of the feedback and feedforward gains. According to Fig. 6.32b,

s²W(s) = X(s) − 4sW(s) − 10W(s)

Therefore,

−s²W(s) = −X(s) + 4sW(s) + 10W(s)

Because the adder gains are always negative (see Fig. 6.31b), we rewrite the foregoing equation as

−s²W(s) = −1[X(s)] − 4[−sW(s)] − 10[−W(s)]

Figure 6.32c shows the implementation of this equation. The hardware realization appears in Fig. 6.32d. Both integrators have unity gain, which requires RC = 1. We have used R = 100 kΩ and C = 10 µF. The gain of 10 in the outer feedback path is obtained in the adder by choosing the feedback resistor of the adder to be 100 kΩ and an input resistor of 10 kΩ. Similarly, the gain of 4 in the inner feedback path is obtained by using the corresponding input resistor of 25 kΩ. The gains of 2 and 5, required in the feedforward connections, are obtained by using a feedback resistor of 100 kΩ and input resistors of 50 and 20 kΩ, respectively.† The op-amp realization in Fig. 6.32 is not necessarily the one that uses the fewest op amps. This example is given just to illustrate a systematic procedure for designing an op-amp circuit for an arbitrary transfer function. There are more efficient circuits (such as Sallen-Key or biquad) that use fewer op amps to realize a second-order transfer function.

†It is possible to avoid the two inverting op amps (with gain −1) in Fig. 6.32d by adding signal sW(s) to the input and output adders directly, using the noninverting amplifier configuration in Fig. 6.17d.

DRILL 6.14 Transfer Functions of Op-Amp Circuits

Show that the transfer functions of the op-amp circuits in Figs. 6.33a and 6.33b are H₁(s) and H₂(s), respectively, where

a = 1/(Rf Cf) and b = 1/(RC)



Figure 6.33 Op-amp circuits for Drill 6.14.

6.7 APPLICATION TO FEEDBACK AND CONTROLS

Generally, systems are designed to produce a desired output y(t) for a given input x(t). Using the given performance criteria, we can design a system, as shown in Fig. 6.34a. Ideally, such an open-loop system should yield the desired output. In practice, however, the system characteristics change with time, as a result of aging or replacement of some components, or because of changes in the operating environment. Such variations cause changes in the output for the same input. Clearly, this is undesirable in precision systems. A possible solution to this problem is to add a signal component to the input that is not a predetermined function of time but will change to counteract the effects of changing system characteristics and the environment. In short, we must provide a correction at the system input to account for the undesired changes just mentioned. Yet since these changes are generally unpredictable, it is not clear how to preprogram appropriate corrections to the input. However, the difference between the actual output and the desired output gives an indication of the suitable correction to be applied to the system input. It may be possible to counteract the variations by feeding the output (or some function of the output) back to the input.


Figure 6.34 (a) Open-loop and (b) closed-loop (feedback) systems.


We unconsciously apply this principle in daily life. Consider an example of marketing a certain product. The optimum price of the product is the value that maximizes the profit of a merchant. The output in this case is the profit, and the input is the price of the item. The output (profit) can be controlled (within limits) by varying the input (price). The merchant may price the product too high initially, in which case, he will sell too few items, reducing the profit. Using feedback of the profit (output), he adjusts the price (input) to maximize his profit. If there is a sudden or unexpected change in the business environment, such as a strike-imposed shutdown of a large factory in town, the demand for the item goes down, thus reducing his output (profit). He adjusts his input (reduces the price) using the feedback of the output (profit) in a way that will optimize his profit in the changed circumstances. If the town suddenly becomes more prosperous because a new factory opens, he will increase the price to maximize the profit. Thus, by continuous feedback of the output to the input, he realizes his goal of maximum profit (optimum output) in any given circumstances. We observe thousands of examples of feedback systems around us in everyday life. Most social, economic, educational, and political processes are, in fact, feedback processes. A block diagram of such a system, called the feedback or closed-loop system, is shown in Fig. 6.34b. A feedback system can address the problems arising because of unwanted disturbances such as random-noise signals in electronic systems, a gust of wind affecting a tracking antenna, a meteorite hitting a spacecraft, and the rolling motion of anti-aircraft gun platforms mounted on ships or moving tanks. Feedback may also be used to reduce nonlinearities in a system or to control its rise time (or bandwidth).
Feedback is used to achieve, with a given system, the desired objective within a given tolerance, despite partial ignorance of the system and the environment. A feedback system, thus, has an ability for supervision and self-correction in the face of changes in the system parameters and external disturbances (changes in the environment). Consider the feedback amplifier in Fig. 6.35. Let the forward amplifier gain be G = 10,000. One-hundredth of the output is fed back to the input (H = 0.01). The gain T of the feedback amplifier is obtained by [see Eq. (6.39)]

T = G/(1 + GH) = 10,000/(1 + 100) = 99.01

Suppose that because of aging or replacement of some transistors, the gain G of the forward amplifier changes from 10,000 to 20,000. The new gain of the feedback amplifier is given by

T = G/(1 + GH) = 20,000/(1 + 200) = 99.50

Surprisingly, a 100% variation in the forward gain G causes only a 0.5% variation in the feedback amplifier gain T. Such reduced sensitivity to parameter variations is a must in precision amplifiers. In this example, we reduced the sensitivity of gain to parameter variations at the cost of forward gain, which is reduced from 10,000 to 99. There is no dearth of forward gain (obtained by cascading stages). But low sensitivity is extremely precious in precision systems. Now, consider what happens when we add (instead of subtract) the signal fed back to the input. Such addition means the sign on the feedback connection is + instead of − (which is the same as changing the sign of H in Fig. 6.35). Consequently,

T = G/(1 − GH)



Figure 6.35 Effects of negative and positive feedback.

If we let G = 10,000 as before and H = 0.9 × 10⁻⁴, then

T = 10,000/[1 − 0.9(10⁴)(10⁻⁴)] = 100,000

Suppose that because of aging or replacement of some transistors, the gain of the forward amplifier changes to 11,000. The new gain of the feedback amplifier is

T = 11,000/[1 − 0.9(11,000)(10⁻⁴)] = 1,100,000

Observe that in this case, a mere 10% increase in the forward gain G caused a 1000% increase in the gain T (from 100,000 to 1,100,000). Clearly, the amplifier is very sensitive to parameter variations. This behavior is exactly the opposite of what was observed earlier, when the signal fed back was subtracted from the input. What is the difference between the two situations? Crudely speaking, the former case is called negative feedback and the latter is positive feedback. Positive feedback increases the system gain but tends to make the system more sensitive to parameter variations. It can also lead to instability. In our example, if G were to be 11,111, then GH = 1, T = ∞, and the system would become unstable because the signal fed back would be exactly equal to the input signal itself. Hence, once a signal has been applied, no matter how small and how short in duration, it comes back to reinforce the input undiminished, which further passes to the output, and is fed back again and again. In essence, the signal perpetuates itself forever. This perpetuation, even when the input ceases to exist, is precisely the symptom of instability. Generally speaking, a feedback system cannot be described in black-and-white terms, such as positive or negative. Usually, H is a frequency-dependent component, more accurately represented by H(s); hence, it varies with frequency. Consequently, what was negative feedback at lower frequencies can turn into positive feedback at higher frequencies and may give rise to instability. This is one of the serious aspects of feedback systems, which warrants a designer's careful attention.
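Both scenarios reduce to one-line gain formulas; an illustrative Python sketch reproducing the numbers quoted above:

```python
def T_negative(G, H):
    """Closed-loop gain with negative feedback."""
    return G / (1 + G * H)

def T_positive(G, H):
    """Closed-loop gain with positive feedback."""
    return G / (1 - G * H)

# Negative feedback: doubling G moves T by only about 0.5%
t1 = T_negative(10_000, 0.01)     # ~99.01
t2 = T_negative(20_000, 0.01)     # ~99.50

# Positive feedback: a 10% increase in G multiplies T elevenfold
t3 = T_positive(10_000, 0.9e-4)   # 100,000
t4 = T_positive(11_000, 0.9e-4)   # 1,100,000
```

As G approaches 1/H ≈ 11,111 in the positive-feedback case, the denominator 1 − GH approaches zero and T blows up, which is the instability described above.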

6.7.1 Analysis of a Simple Control System

Figure 6.36a represents an automatic position control system, which can be used to control the angular position of a heavy object (e.g., a tracking antenna, an anti-aircraft gun mount, or the position of a ship). The input θi is the desired angular position of the object, which can be set at any given value. The actual angular position θo of the object (the output) is measured by a potentiometer whose wiper is mounted on the output shaft. The difference between the input θi (set at the desired output position) and the output θo (actual position) is amplified; the amplifier output, which is proportional to θi − θo, is applied to the motor input. If θi − θo = 0 (the output


being equal to the desired angle), there is no input to the motor, and the motor stops. But if θo ≠ θi, there will be a nonzero input to the motor, which will turn the shaft until θo = θi. It is evident that by setting the input potentiometer at a desired position in this system, we can control the angular position of a heavy remote object. The block diagram of this system is shown in Fig. 6.36b. The amplifier gain is K, where K is adjustable. Let the motor (with load) transfer function that relates the output angle θo to the motor input voltage be G(s) [for a starting point, see Eq. (1.32)]. This feedback arrangement is identical to that in Fig. 6.19d with H(s) = 1. Hence, T(s), the (closed-loop) system transfer function relating the output θo to the input θi, is

Θo(s)/Θi(s) = T(s) = KG(s)/[1 + KG(s)]

From this equation, we shall investigate the behavior of the automatic position control system in Fig. 6.36a for a step and a ramp input.

STEP INPUT

If we desire to change the angular position of the object instantaneously, we need to apply a step input. We may then want to know how long the system takes to position itself at the desired angle, whether it reaches the desired angle, and whether it reaches the desired position smoothly (monotonically) or oscillates about the final position. If the system oscillates, we may want to know how long it takes for the oscillations to settle down. All these questions can be readily answered by finding the output θo(t) when the input θi(t) = u(t). A step input implies an instantaneous change in the angle. This input would be one of the most difficult to follow; if the system can perform well for this input, it is likely to give a good account of itself under most other expected situations. This is why we test control systems for a step input. For the step input θi(t) = u(t), Θi(s) = 1/s and

Θo(s) = (1/s) T(s) = KG(s)/(s[1 + KG(s)])

Let the motor (with load) transfer function relating the load angle θo(t) to the motor input voltage be G(s) = 1/(s(s + 8)). This yields

Θo(s) = (1/s) · [K/(s(s + 8))]/[1 + K/(s(s + 8))] = K/(s(s² + 8s + K))

Let us investigate the system behavior for three different values of gain K. For K = 7,

Θo(s) = 7/(s(s² + 8s + 7)) = 7/(s(s + 1)(s + 7)) = 1/s − (7/6)/(s + 1) + (1/6)/(s + 7)

and

θo(t) = [1 − (7/6)e^(−t) + (1/6)e^(−7t)]u(t)


This response, illustrated in Fig. 6.36c, shows that the system reaches the desired angle, but at a rather leisurely pace. To speed up the response, let us increase the gain to, say, 80. For K = 80,

Θo(s) = 80/(s(s² + 8s + 80)) = 80/(s(s + 4 − j8)(s + 4 + j8))
      = 1/s + (√5/4)e^(j153°)/(s + 4 − j8) + (√5/4)e^(−j153°)/(s + 4 + j8)

and

θo(t) = [1 + (√5/2)e^(−4t) cos(8t + 153°)]u(t)

This response, also depicted in Fig. 6.36c, achieves the goal of reaching the final position at a faster rate than in the earlier case (K = 7). Unfortunately, the improvement is achieved at the cost of ringing (oscillations) with high overshoot. In the present case, the percent overshoot (PO) is 21%. The response reaches its peak value at peak time tp = 0.393 second. The rise time, defined as the time required for the response to rise from 10% to 90% of its steady-state value, indicates the speed of response.† In the present case, tr = 0.175 second. The steady-state value of the response is unity, so that the steady-state error is zero. Theoretically, it takes infinite time for the response to reach the desired value of unity. In practice, however, we may consider the response to have settled to the final value if it closely approaches the final value. A widely accepted measure of closeness is within 2% of the final value. The time required for the response to reach and stay within 2% of the final value is called the settling time ts.‡ In Fig. 6.36c, we find ts ≈ 1 second (when K = 80). A good system has a small overshoot, small tr and ts, and a small steady-state error. A large overshoot, as in the present case, may be unacceptable in many applications. Let us try to determine K (the gain) that yields the fastest response without oscillations. Complex characteristic roots lead to oscillations; to avoid oscillations, the characteristic roots should be real. In the present case, the characteristic polynomial is s² + 8s + K. For K > 16, the characteristic roots are complex; for K < 16, the roots are real. The fastest response without oscillations is obtained by choosing K = 16. We now consider this case. For K = 16,
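The quoted figures (PO ≈ 21%, tp ≈ 0.393 s) can be recovered by sampling the K = 80 closed-form response; the Python sketch below is only an illustrative check of that expression:

```python
import math

# K = 80 closed-loop step response (poles at -4 +/- j8):
#   theta_o(t) = 1 + (sqrt(5)/2) e^{-4t} cos(8t + 153.4 deg)
phi = math.pi - math.atan2(1, 2)   # 153.43 degrees, in radians

def theta_o(t):
    return 1 + (math.sqrt(5) / 2) * math.exp(-4 * t) * math.cos(8 * t + phi)

grid = [i * 1e-4 for i in range(1, 30001)]   # sample 0 < t <= 3 s
t_peak = max(grid, key=theta_o)              # time of the first (largest) peak
peak = theta_o(t_peak)
overshoot_pct = 100 * (peak - 1)             # about 21%
```

Analytically, the peak occurs at tp = π/ωd = π/8 ≈ 0.3927 s, and the overshoot equals 100·e^(−π/2) ≈ 20.8%, consistent with the values quoted above.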

Θo(s) = 16/(s(s² + 8s + 16)) = 16/(s(s + 4)²) = 1/s − 1/(s + 4) − 4/(s + 4)²

and

θo(t) = [1 − (4t + 1)e^(−4t)]u(t)

This response also appears in Fig. 6.36c. The system with K > 16 is said to be underdamped (oscillatory response), whereas the system with K < 16 is said to be overdamped. For K = 16, the system is said to be critically damped. There is a trade-off between undesirable overshoot and rise time. Reducing overshoot leads to higher rise time (a sluggish system). In practice, a small overshoot, which is still faster than

†Delay time td, defined as the time required for the response to reach 50% of its steady-state value, is another indication of speed. For the present case, td = 0.141 second.
‡Typical percentage values used are 2 to 5% for ts.


Figure 6.36 (a) An automatic position control system. (b) Its block diagram. (c) The unit step response. (d) The unit ramp response.

the critical damping, may be acceptable. Note that percent overshoot PO and peak time tp are meaningless for the overdamped or critically damped cases. In addition to adjusting gain K, we may need to augment the system with some type of compensator if the specifications on overshoot and the speed of response are too stringent.

RAMP INPUT

If the anti-aircraft gun in Fig. 6.36a is tracking an enemy plane moving with a uniform velocity, the gun-position angle must increase linearly with t. Hence, the input in this case is a ramp; that is, θi(t) = tu(t). Let us find the response of the system to this input when K = 80. In this case,

Θi(s) = 1/s², and

Θo(s) = 80/(s²(s² + 8s + 80)) = −0.1/s + 1/s² + 0.1(s − 2)/(s² + 8s + 80)

Use of Table 6.1 yields

θo(t) = [−0.1 + t + (1/8)e^(−4t) cos(8t + 36.87°)]u(t)

This response, sketched in Fig. 6.36d, shows that there is a steady-state error of 0.1 radian. In many cases, such a small steady-state error may be tolerable. If, however, a zero steady-state error to a ramp input is required, this system in its present form is unsatisfactory. We must add some form of compensator to the system.

EXAMPLE 6.26 Feedback System Responses with MATLAB

Using the feedback system of Fig. 6.19d with G(s) = K/(s(s + 8)) and H(s) = 1, determine the step response for each of the following cases: (a) K = 7, (b) K = 16, and (c) K = 80. Additionally, find the unit ramp response when (d) K = 80. Example 6.21 computes the transfer functions of these feedback systems in a simple way. In this example, the conv command is used to demonstrate polynomial multiplication of the two denominator factors of G(s). Step responses, shown in Fig. 6.37, are computed by using the step command.

(a–c)

>> H = tf(1,1); K = 7;  G = tf([K],conv([1 0],[1 8])); Ha = feedback(G,H);
>> H = tf(1,1); K = 16; G = tf([K],conv([1 0],[1 8])); Hb = feedback(G,H);
>> H = tf(1,1); K = 80; G = tf([K],conv([1 0],[1 8])); Hc = feedback(G,H);
>> clf; step(Ha,'k-',Hb,'k--',Hc,'k-.');
>> legend('K = 7','K = 16','K = 80','Location','best');

6.7 Application to Feedback and Controls

Figure 6.37 Step responses for Ex. 6.26.

(d) The unit ramp response is equivalent to the integral of the unit step response. We can obtain the ramp response by taking the step response of the system in cascade with an integrator. To help highlight waveform detail, we compute the ramp response over the short time interval 0 ≤ t ≤ 1.5, as shown in Fig. 6.38.

>> t = 0:.001:1.5; Hd = series(Hc,tf([1],[1 0]));
>> step(Hd,'k-',t); title('Unit Ramp Response');


Figure 6.38 Ramp response for Ex. 6.26 with K = 80.

DESIGN SPECIFICATIONS

Now the reader has some idea of the various specifications a control system might require. Generally, a control system is designed to meet given transient specifications, steady-state error specifications, and sensitivity specifications. Transient specifications include percent overshoot (PO), rise time (t_r), and settling time (t_s) of the response to a step input. Steady-state error is the difference between the desired response and the actual response to a test input in steady state. The system should also satisfy specified sensitivity specifications to various system parameter variations or to certain disturbances. Above all, the system must remain stable under operating conditions.

6.7.2 Analysis of a Second-Order System

Transient response depends upon the location of poles and zeros of the transfer function T(s). In the general case, however, there is no quick way of predicting transient response parameters (PO, t_r, t_s) from the poles and zeros of T(s). However, for a second-order system with no zeros, there is a direct relationship between the pole locations and the transient response. In such a case, the pole locations can be immediately determined from the knowledge of the transient parameter specifications. As we shall see, the study of the second-order system can be used to study many higher-order systems. For this reason, we shall now study the behavior of a second-order system in detail. Let us consider a second-order transfer function T(s), given by

T(s) = ω_n²/(s² + 2ζω_n s + ω_n²)    (6.44)

The poles of T(s) are −ζω_n ± jω_n√(1 − ζ²), as depicted in Fig. 6.39. These are complex when the damping ratio ζ < 1 (underdamped case), and are real for ζ = 1 (critically damped) and ζ > 1 (overdamped). Smaller ζ means smaller damping, leading to higher overshoot and faster response.
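The closed-form pole locations can be verified directly against the denominator polynomial. The short Python check below (an illustrative sketch, with sample values ζ = 0.5 and ω_n = 8 chosen to match the design discussed in this section) confirms that −ζω_n ± jω_n√(1 − ζ²) are indeed the roots of s² + 2ζω_n s + ω_n²:

```python
import numpy as np

zeta, wn = 0.5, 8.0                         # sample damping ratio and natural frequency
poles = np.roots([1, 2 * zeta * wn, wn**2])

# Closed-form locations: -zeta*wn +/- j*wn*sqrt(1 - zeta^2)
expected = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
upper = max(poles, key=lambda p: p.imag)    # pole in the upper half-plane
print(np.round(upper, 4), np.round(expected, 4))
```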

Figure 6.39 Pole locations of the second-order system in the s plane.

For K > 16, the poles become complex with values −4 ± j√(K − 16) (underdamping). Since the real part of the poles is −4 for all K > 16, the path of the poles is vertical, as illustrated in Fig. 6.43. One pole moves up and the other (its conjugate) moves down along the vertical line passing through −4. We can label the values of K for several points along these paths, as depicted in Fig. 6.43. Each of these paths represents the locus of a pole of T(s), or the locus of a root of the characteristic equation of T(s), as K is varied from 0 to ∞. For this reason, this set of paths is called the root locus. The root locus gives us information as to how the poles of the closed-loop transfer function T(s) move as the gain K is varied from 0 to ∞. In our design problem, we must choose a value of K such that the poles of T(s) lie in the shaded region illustrated in Fig. 6.43. This figure shows that the system will meet the given specifications [Eq. (6.48)] for 25 < K ≤ 64. For K = 64, for instance, we have

PO = 16%,   t_r = 0.2 seconds,   t_s = 4/4 = 1 second
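These design-point numbers can be reproduced from the closed-loop denominator s² + 8s + K. A small Python check (assuming the usual second-order approximations t_s ≈ 4/(ζω_n) and PO = 100·e^{−πζ/√(1−ζ²)}):

```python
import math

K = 64.0
wn = math.sqrt(K)          # s^2 + 8s + K  ->  wn^2 = K
zeta = 8.0 / (2.0 * wn)    # 2*zeta*wn = 8

ts = 4.0 / (zeta * wn)                                           # settling time
PO = 100.0 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))  # percent overshoot
print(zeta, ts, round(PO, 1))  # 0.5 1.0 16.3
```

For K = 64 this gives ζ = 0.5, t_s = 1 second, and PO ≈ 16%, matching the values quoted above.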


Figure 6.42 Contours of second-order system pole locations for constant PO, constant t_r, and constant t_s.


Figure 6.43 Designing a second-order system to meet a given set of transient specifications.

HIGHER-ORDER SYSTEMS

Our discussion, so far, has been limited to second-order T(s) only. If T(s) has additional poles that are far away to the left of the jω axis, they have only a negligible effect on the transient behavior of the system. The reason is that the time constants of such poles are considerably smaller when compared to the time constant of the complex-conjugate poles near the jω axis. Consequently, the exponentials arising because of poles far away from the jω axis die quickly compared to those arising because of poles located near the jω axis. In addition, the coefficients of the former terms are much smaller than unity. Hence, they are also very small to begin with and decay rapidly. The poles near the jω axis are called the dominant poles. A criterion commonly used is that any pole that is five times as far from the jω axis as the dominant poles contributes negligibly to the step response, and the transient behavior of a higher-order system is often reduced to that of a second-order system. In addition, a closely placed pole-zero pair (called a dipole) contributes negligibly to transient behavior. For this reason, many of the pole-zero configurations, in practice, reduce to two or three poles with one or two zeros. Designers utilize computer-generated charts to track the transient behavior of these systems for several such pole-zero combinations, which may be used to design most higher-order systems.
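The dominant-pole idea is easy to test numerically. The sketch below (a hypothetical third-order example, not from the text) compares the step response of a system whose extra real pole is ten times farther from the jω axis than the dominant pair against its second-order approximation:

```python
import numpy as np
from scipy import signal

# Dominant pair at -1 +/- j1; extra real pole at s = -10 (10x farther from the jw axis)
H2 = signal.lti([2], [1, 2, 2])                        # second-order approximation
H3 = signal.lti([20], np.polymul([1, 2, 2], [1, 10]))  # full third-order system

t = np.linspace(0, 8, 801)
_, y2 = signal.step(H2, T=t)
_, y3 = signal.step(H3, T=t)

# The far pole changes the unit step response only slightly
print(round(np.max(np.abs(y3 - y2)), 3))
```

Both systems have unity dc gain; the maximum deviation between the two step responses is only a few percent of the final value, which is why the second-order analysis carries over to many higher-order designs.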

6.7.3 Root Locus

The example in Sec. 6.7.2 gives a good idea of the utility of the root locus in the design of control systems. Surprisingly, the root locus can be sketched quickly using certain basic rules provided by W. R. Evans in 1948.† With the ready availability of computers, root loci can be produced easily. Nevertheless, understanding these rules can be a great help in developing the intuition needed for design. Here, we shall present the rules but omit the proofs for some of them.

Figure 6.44 A feedback system with a variable gain K.

We begin with a feedback system depicted in Fig. 6.44, which is identical to Fig. 6.19d, except for the explicit representation of a variable gain K. The system in Fig. 6.36a is a special case with H(s) = 1. For the system in Fig. 6.44,

T(s) = KG(s)/(1 + KG(s)H(s))    (6.49)

The characteristic equation of this system is‡

1 + KG(s)H(s) = 0

We shall consider the paths of the roots of 1 + KG(s)H(s) = 0 as K varies from 0 to ∞. In the parlance of control theory, KG(s)H(s) is referred to as the open-loop transfer function.§ The rules for sketching the root locus are as follows:

1. Root loci begin (K = 0) at the open-loop poles and terminate at the open-loop zeros (K = ∞). This fact means that the number of loci is exactly n, the order of the open-loop transfer function. Let G(s)H(s) = N(s)/D(s), where N(s) and D(s) are polynomials of powers m and n, respectively. Hence, 1 + KG(s)H(s) = 0 implies D(s) + KN(s) = 0. Therefore, D(s) = 0 when K = 0. In this case, the roots are poles of G(s)H(s); that is, the open-loop poles. Similarly, when K → ∞, D(s) + KN(s) = 0 implies N(s) = 0. Hence, the roots are the open-loop zeros. For the system in Fig. 6.36a, the open-loop transfer function is K/s(s + 8). The open-loop poles are 0 and −8, and the zeros [where K/s(s + 8) = 0] are both ∞. We can verify from Fig. 6.43 that the root loci do begin at 0 and −8 and terminate at ∞.

2. A real-axis segment is a part of the root locus if the sum of the real-axis poles and zeros of G(s)H(s) that lie to the right of the segment is odd. Moreover, the root loci are symmetric about the real axis.

†This procedure was developed as early as 1868 in Maxwell's paper "On Governors."
‡This characteristic equation is also valid when the gain K is in the feedback path [lumped with H(s)] rather than in the forward path. The equation applies as long as the gain K is in the loop at any point. Hence, the root locus rules discussed here apply to all such cases.
§This terminology is more clearly understandable viewed from another common feedback control configuration, where H(s) is present in the feedforward loop rather than the feedback loop, as done in this text.


We can readily verify in Fig. 6.43 that the real-axis segment to the right of −8 has only one pole (and no zeros). Hence, this segment is a part of the root locus.

3. The n − m root loci terminate at ∞ at angles kπ/(n − m) for k = 1, 3, 5, .... Note that, according to rule 1, m loci terminate on the open-loop zeros, and the remaining n − m loci terminate at ∞ according to this rule. In Fig. 6.43, we verify that n − m = 2 loci terminate at ∞ at angles kπ/2 for k = 1 and 3. Now we shall make an interesting observation. If a transfer function G(s) has m (finite) zeros and n poles, then lim_{s→∞} G(s) = s^m/s^n = 1/s^{n−m}. Hence, G(s) has n − m zeros at ∞. This fact shows that although G(s) has only m finite zeros, there are additional n − m zeros at ∞. According to rule 1, m loci terminate on m finite zeros and the remaining n − m loci terminate at ∞, which are also zeros of G(s). This result means all loci begin on open-loop poles and terminate on open-loop zeros.

4. The centroid of the asymptotes (the point where the asymptotes converge) of the (n − m) loci that terminate at ∞ is

σ = [(p1 + p2 + ··· + pn) − (z1 + z2 + ··· + zm)]/(n − m)

where p1, p2, ..., pn are the poles and z1, z2, ..., zm are the zeros, respectively, of the open-loop transfer function. Figure 6.43 verifies that the centroid of the loci is [(−8 + 0) − 0]/2 = −4.

5. There are additional rules that allow us to compute the points where the loci intersect and where they cross the jω axis to enter the right-half plane. These rules allow us to draw a quick and rough sketch of the root loci. But the ready availability of computers and programs makes it much easier to draw actual loci. The first four rules are still very helpful for a quick sketching of the root loci. Understanding these rules can be helpful in the design of control systems, as demonstrated later. They aid in determining what modifications should be made (or what kind of compensator to add) to the open-loop transfer function in order to meet given design specifications.
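Rules 3 and 4 are simple enough to code directly. The helper below (an illustrative Python sketch, not a library routine) computes the asymptote centroid and angles from the open-loop poles and zeros; for the system of Fig. 6.36a it reproduces σ = −4 with asymptote angles 90° and 270°:

```python
def asymptotes(poles, zeros):
    """Centroid and angles (degrees) of the n - m root locus asymptotes."""
    n, m = len(poles), len(zeros)
    sigma = (sum(poles) - sum(zeros)) / (n - m)
    angles = [180.0 * k / (n - m) for k in range(1, 2 * (n - m), 2)]
    return sigma, angles

# Open-loop K/(s(s+8)): two poles, no finite zeros
print(asymptotes([0.0, -8.0], []))        # (-4.0, [90.0, 270.0])
# Three poles at 0, -2, -4 (used in the next example)
print(asymptotes([0.0, -2.0, -4.0], []))  # (-2.0, [60.0, 180.0, 300.0])
```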

EXAMPLE 6.27 Root Locus of a Feedback System

Using the four rules for root loci, sketch the root locus for a system with open-loop transfer function

KG(s)H(s) = K/(s(s + 2)(s + 4))

Rule 1: For this G(s)H(s), n = 3. Hence, there are three root loci, which begin at the poles of G(s)H(s); that is, at 0, −2, and −4.


Rule 2: There are odd numbers of poles to the right of the real-axis segments s < −4 and −2 < s < 0. Hence, these segments are part of the root locus. In other words, the entire real axis in the left-half plane, except the segment between −2 and −4, is a part of the root locus.

Rule 3: In this case, n − m = 3. Hence, all three loci terminate at ∞ along asymptotes at angles kπ/3 for k = 1, 3, and 5. Thus, the asymptote angles are 60°, 180°, and 300°.

Rule 4: The centroid (where all three asymptotes converge) is (0 − 2 − 4)/3 = −2. We draw three asymptotes starting at −2 at angles 60°, 180°, and 300°, as illustrated in Fig. 6.45. These three straight-line asymptotes suffice to give an idea about the root locus. The actual root loci are also shown as continuous curves in Fig. 6.45. Two of the asymptotes cross over to the RHP, which shows that for some range of K, the system becomes unstable.
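The gain at which the two loci cross into the RHP can be read from the characteristic equation s³ + 6s² + 8s + K = 0: at K = 48 it factors as (s + 6)(s² + 8), so the crossing occurs at s = ±j√8 ≈ ±j2.83 (this value reappears in Sec. 6.7.6). A quick numeric check in Python:

```python
import numpy as np

# Characteristic polynomial of 1 + K/(s(s+2)(s+4)) = 0 at the critical gain K = 48
roots = np.roots([1, 6, 8, 48])
print(np.round(roots, 3))  # one real root at -6 plus a pair on the imaginary axis
```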


Figure 6.45 A third-order feedback system and its root locus.

MATLAB's control system toolbox provides a simple tool to generate a system's root locus plot. The result, shown in Fig. 6.46, matches our earlier result of Fig. 6.45b.

>> num = [0 0 0 1]; den = conv(conv([1 0],[1 2]),[1 4]);
>> H = tf(num,den); rlocusplot(H,'k-');


Figure 6.46 MATLAB root locus plot for Ex. 6.27.

6.7.4 Steady-State Errors

Steady-state specifications impose additional constraints on the closed-loop transfer function T(s). We define the error as the difference between the desired output [reference x(t)] and the actual output y(t). Thus, e(t) = x(t) − y(t), and

E(s) = X(s) − Y(s) = X(s)[1 − Y(s)/X(s)] = X(s)[1 − T(s)]    (6.50)

The steady-state error e_ss is the value of e(t) as t → ∞. This value can be readily obtained from the final-value theorem [Eq. (6.25)] as

e_ss = lim_{s→0} sE(s) = lim_{s→0} sX(s)[1 − T(s)]    (6.51)

Let us look at steady-state error for a variety of inputs.


1. For the unit step input, X(s) = 1/s, and the steady-state error e_s is given by

e_s = lim_{s→0} [1 − T(s)] = 1 − T(0)    (6.52)

If T(0) = 1, the steady-state error to a unit step input is zero.

2. For a unit ramp input, X(s) = 1/s², and e_r, the steady-state error, is given by

e_r = lim_{s→0} [1 − T(s)]/s    (6.53)

If T(0) ≠ 1, e_r = ∞. Hence, for a finite steady-state error to a ramp input, a necessary condition is T(0) = 1, implying zero steady-state error to a step input. Assuming T(0) = 1 and applying L'Hôpital's rule to Eq. (6.53), we have

e_r = lim_{s→0} [−T′(s)] = −T′(0)

where T′(s) denotes dT/ds.

3. Using a similar argument, we can show that for a parabolic input x(t) = (t²/2)u(t), X(s) = 1/s³, and, assuming T(0) = 1 and T′(0) = 0, the steady-state error is

e_p = −T″(0)/2    (6.54)
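These limit formulas are easy to verify symbolically. For the tracking system of Fig. 6.36a with K = 80, T(s) = 80/(s² + 8s + 80), and the ramp error e_r = −T′(0) recovers the 0.1 radian found earlier. A SymPy sketch (assuming SymPy is installed):

```python
import sympy as sp

s = sp.symbols('s')
T = 80 / (s**2 + 8*s + 80)

es = 1 - T.subs(s, 0)           # step error, Eq. (6.52): 1 - T(0)
er = -sp.diff(T, s).subs(s, 0)  # ramp error, -T'(0), valid since T(0) = 1
print(es, er)  # 0 1/10
```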

Many systems in practice have unity feedback, as depicted in Fig. 6.47. In such a case, the steady-state error analysis is greatly simplified. Let us define the positional error constant K_p, velocity error constant K_v, and acceleration error constant K_a as

K_p = lim_{s→0} [KG(s)],   K_v = lim_{s→0} s[KG(s)],   K_a = lim_{s→0} s²[KG(s)]    (6.55)

Because T(s) = KG(s)/(1 + KG(s)), from Eq. (6.50), we obtain

E(s) = X(s)/(1 + KG(s))

The steady-state errors are given by

e_s = lim_{s→0} s(1/s)/(1 + KG(s)) = 1/(1 + lim_{s→0} [KG(s)]) = 1/(1 + K_p)
e_r = lim_{s→0} s(1/s²)/(1 + KG(s)) = 1/lim_{s→0} s[KG(s)] = 1/K_v
e_p = lim_{s→0} s(1/s³)/(1 + KG(s)) = 1/lim_{s→0} s²[KG(s)] = 1/K_a    (6.56)

Figure 6.47 A unity feedback system with variable gain K.


For the system in Fig. 6.36a,

G(s) = 1/(s(s + 8))

Hence, from Eq. (6.55),

K_p = ∞,   K_v = K/8,   K_a = 0    (6.57)

Substitution of these values into Eq. (6.56) yields

e_s = 0,   e_r = 8/K,   e_p = ∞
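The same numbers follow mechanically from the limits in Eq. (6.55). A SymPy check for KG(s) = K/(s(s + 8)) (an illustrative sketch, assuming SymPy is installed):

```python
import sympy as sp

s, K = sp.symbols('s K', positive=True)
KG = K / (s * (s + 8))

Kp = sp.limit(KG, s, 0)          # positional error constant
Kv = sp.limit(s * KG, s, 0)      # velocity error constant
Ka = sp.limit(s**2 * KG, s, 0)   # acceleration error constant
print(Kp, Kv, Ka)  # oo K/8 0
```

With K = 80 this gives K_v = 10 and hence e_r = 1/K_v = 0.1, agreeing with the ramp-input example.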

A system where G(s) has one pole at the origin (as in the present case) is designated as a type 1 system. Such a system can track the position of an object with zero (steady-state) error (e_s = 0), and yields a constant error in tracking an object moving with constant velocity (e_r = a constant). But the type 1 system is not suitable for tracking a constant-acceleration object. If G(s) has no poles at the origin, then K_p is finite and K_v = K_a = 0. Thus, for

G(s) = (s + 2)/((s + 1)(s + 10))

K_p = K/5 and K_v = K_a = 0. Hence, e_s = 5/(5 + K) and e_r = e_p = ∞. Such systems are designated as type 0 systems. These systems have finite e_s, but infinite e_r and e_p. These systems may be acceptable for step inputs (position control), but not for ramp or parabolic inputs (tracking velocity or acceleration). If G(s) has two poles at the origin, the system is designated as a type 2 system. In this case, K_p = K_v = ∞, and K_a is finite. Hence, e_s = e_r = 0 and e_p is finite. In general, if G(s) has q poles at the origin, it is a type q system. Clearly, for a unity feedback system, increasing the number of poles at the origin in G(s) improves the steady-state performance. However, this procedure increases n and reduces the magnitude of σ, the centroid of the root locus asymptotes. This shifts the root locus toward the jω axis, with consequent deterioration in the transient performance and the system stability. It should be remembered that the results in Eqs. (6.55) and (6.56) apply only to unity feedback systems (Fig. 6.47). Steady-state error specifications in this case are translated in terms of constraints on the open-loop transfer function KG(s). In contrast, the results in Eqs. (6.52) through (6.54) apply to unity as well as nonunity feedback systems, and are more general. Steady-state-error specifications in this case are translated in terms of constraints on the closed-loop transfer function T(s).

The unity feedback system in Fig. 6.36a is a type 1 system. We have designed this system earlier to meet the transient specifications of Eq. (6.48). Let us further specify that the system meet the following steady-state specifications:

e_s = 0   and   e_r ≤ 0.15    (6.58)
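The type-0 numbers quoted above can be confirmed the same way. For KG(s) = K(s + 2)/((s + 1)(s + 10)), K_p = 2K/10 = K/5, giving e_s = 1/(1 + K_p) = 5/(5 + K), while K_v = 0 makes the ramp error infinite (SymPy sketch):

```python
import sympy as sp

s, K = sp.symbols('s K', positive=True)
KG = K * (s + 2) / ((s + 1) * (s + 10))

Kp = sp.limit(KG, s, 0)           # K/5
es = sp.simplify(1 / (1 + Kp))    # 5/(5 + K)
Kv = sp.limit(s * KG, s, 0)       # 0, so e_r = 1/Kv is infinite
print(Kp, es, Kv)
```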

For this case, we already found e_s = 0, e_r = 8/K, and e_p = ∞. The specification e_s = 0 is automatically met. To satisfy e_r = 8/K ≤ 0.15, we require K ≥ 53.33.

Since a lag compensator G_c(s) = (s + α)/(s + β) has α > β, the magnitude of σ, the root locus centroid, is reduced. This reduction causes the root locus to shift toward the jω axis, with the consequent deterioration of the transient performance. This side effect of a lag compensator can be made negligible by choosing α and β such that α − β is very small, but the ratio α/β is high. Such a pair of pole and zero act like a dipole and have only a negligible effect on the transient behavior of the system. The root locus also changes very little. We can realize such a dipole by placing both the pole and the zero of G_c(s) close to the origin (α and β → 0). For instance, if we select α = 0.1 and β = 0.01, the centroid will be shifted by only a negligible amount, (α − β)/(n − m) = 0.09/(n − m).

Figure 6.50 A lag compensator, with transfer function G_c(s) = (s + α)/(s + β).

However, since α/β = 10, all the error constants are increased by a factor of 10. Thus, we can have our cake and eat it, too! We can improve the transient and steady-state performance simultaneously by using a combination of lead and lag networks.

6.7.6 Stability Considerations

In practice, we rarely use positive feedback because, as explained earlier, such systems are prone to instability and are very sensitive to changes in the system parameters or environment. Would negative feedback make a system stable and less sensitive to unwanted changes? Not necessarily! The reason is that if a feedback were truly negative, the system would be stable. But a system that has negative feedback at one frequency may have positive feedback at some other frequency because of phase shift in the transmission path. In other words, a feedback system generally cannot be described in black-and-white terms such as having positive or negative feedback. Let us clarify this statement by an example. Consider the case G(s)H(s) = 1/(s(s + 2)(s + 4)). The root locus of this system appears in Fig. 6.45b. This system shows negative feedback at lower frequencies. But because of phase shift at higher frequencies, the feedback becomes positive. Consider the loop gain G(s)H(s) at a frequency ω = 2.83 (at s = j2.83):

G(jω)H(jω) = 1/(jω(jω + 2)(jω + 4))

At ω = 2.83,

G(j2.83)H(j2.83) = 1/(j2.83(j2.83 + 2)(j2.83 + 4)) = (1/48)e^{-j180°} = −1/48

Recall that the overall gain (transfer function) T(s) is

T(s) = KG(s)/(1 + KG(s)H(s))

At frequency s = j2.83 (ω = 2.83), the gain is

T(j2.83) = KG(j2.83)/(1 − K/48)

As long as K remains below 48, the system is stable, but for K = 48, the system gain goes to ∞, and the system becomes unstable. The feedback, which was negative below ω = 2.83 (because the phase shift has not reached −180°), becomes positive. If there is enough gain (K = 48) at this frequency, the signal fed back is equal to the input signal, and the signal perpetuates itself forever. In other words, the signal starts generating (oscillating) at this frequency, which is precisely the instability. Note that the system remains unstable for all values of K > 48. This is clear from the root locus in Fig. 6.45b, which shows that the two branches cross over to the RHP for K > 48. The crossing point is s = j2.83. This discussion shows that the same system, which has negative feedback at lower frequency, may have positive feedback at higher frequency. For this reason, feedback systems are quite prone to instability, and the designer has to pay a great deal of attention to this aspect. Root locus does indicate the region of stability.
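The −1/48 value can be verified exactly: at ω² = 8, (jω + 2)(jω + 4) = −ω² + 6jω + 8 = 6jω, so jω(jω + 2)(jω + 4) = −6ω² = −48. The same result numerically (a Python sketch):

```python
import numpy as np

w = np.sqrt(8.0)                     # omega = 2.828...
s = 1j * w
GH = 1.0 / (s * (s + 2) * (s + 4))   # loop gain G(jw)H(jw)

print(round(GH.real, 6), round(abs(GH), 6))  # -0.020833 0.020833, i.e., -1/48
```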

6.8 THE BILATERAL LAPLACE TRANSFORM

Situations involving noncausal signals and/or systems cannot be handled by the (unilateral) Laplace transform discussed so far. These cases can be analyzed by the bilateral (or two-sided) Laplace transform defined by

X(s) = ∫_{−∞}^{∞} x(t)e^{-st} dt

and x(t) can be obtained from X(s) by the inverse transformation

x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s)e^{st} ds

Observe that the unilateral Laplace transform discussed so far is a special case of the bilateral Laplace transform, where the signals are restricted to the causal type. Basically, the two transforms are the same. For this reason, we use the same notation for the bilateral Laplace transform. Earlier we showed that the Laplace transforms of e^{-at}u(t) and of −e^{-at}u(−t) are identical. The only difference is in their regions of convergence (ROC). The ROC for the former is Re s > −a; that for the latter is Re s < −a, as illustrated in Fig. 6.2. Clearly, the inverse Laplace transform of X(s) is not unique unless the ROC is specified. If we restrict all our signals to the causal type, however, this ambiguity does not arise. The inverse transform of 1/(s + a) is e^{-at}u(t). Thus, in the unilateral Laplace transform, we can ignore the ROC in determining the inverse transform of X(s).

We now show that any bilateral transform can be expressed in terms of two unilateral transforms. It is, therefore, possible to evaluate bilateral transforms from a table of unilateral transforms. Consider the function x(t) appearing in Fig. 6.51a. We separate x(t) into two components, x1(t) and x2(t), representing the positive-time (causal) component and the negative-time (anticausal) component of x(t), respectively (Figs. 6.51b and 6.51c):

x1(t) = x(t)u(t)   and   x2(t) = x(t)u(−t)



Figure 6.51 Expressing a signal as a sum of causal and anticausal components.

The bilateral Laplace transform of x(t) is given by

X(s) = ∫_{−∞}^{∞} x(t)e^{-st} dt = X1(s) + X2(s)    (6.59)

where X1(s) is the Laplace transform of the causal component x1(t), and X2(s) is the Laplace transform of the anticausal component x2(t). Consider X2(s), given by

X2(s) = ∫_{−∞}^{0⁻} x2(t)e^{-st} dt


Therefore,

X2(−s) = ∫_{0⁺}^{∞} x2(−t)e^{-st} dt

If x(t) has any impulse or its derivative(s) at the origin, they are included in x1(t). Consequently, x2(t) = 0 at the origin; that is, x2(0) = 0. Hence, the lower limit on the integration in the preceding equation can be taken as 0⁻ instead of 0⁺. Therefore,

X2(−s) = ∫_{0⁻}^{∞} x2(−t)e^{-st} dt

Because x2(−t) is causal (Fig. 6.51d), X2(−s) can be found from the unilateral transform table. Changing the sign of s in X2(−s) yields X2(s). To summarize, the bilateral transform X(s) in Eq. (6.59) can be computed from the unilateral transforms in two steps:

1. Split x(t) into its causal and anticausal components, x1(t) and x2(t), respectively.
2. Since the signals x1(t) and x2(−t) are both causal, take the (unilateral) Laplace transform of x1(t) and add to it the (unilateral) Laplace transform of x2(−t), with s replaced by −s. This procedure gives the (bilateral) Laplace transform of x(t).

Since x1(t) and x2(−t) are both causal, X1(s) and X2(−s) are both unilateral Laplace transforms. Let σ_c1 and σ_c2 be the abscissas of convergence of X1(s) and X2(−s), respectively. This statement implies that X1(s) exists for all s with Re s > σ_c1, and X2(−s) exists for all s with Re s > σ_c2. Therefore, X2(s) exists for all s with Re s < −σ_c2.† Therefore, X(s) = X1(s) + X2(s) exists for all s such that

σ_c1 < Re s < −σ_c2

The regions of convergence of X1(s), X2(s), and X(s) are shown in Fig. 6.52. Because X(s) is finite for all values of s lying in the strip of convergence (σ_c1 < Re s < −σ_c2), poles of X(s) must lie outside this strip. The poles of X(s) arising from the causal component x1(t) lie to the left of the strip (region) of convergence, and those arising from its anticausal component x2(t) lie to its right (see Fig. 6.52). This fact is of crucial importance in finding the inverse bilateral transform.

This result can be generalized to left-sided and right-sided signals. We define a signal x(t) as a right-sided signal if x(t) = 0 for t < T1 for some finite positive or negative number T1. A causal signal is always a right-sided signal, but the converse is not necessarily true. A signal is said to be left-sided if it is zero for t > T2 for some finite, positive, or negative number T2. An anticausal signal is always a left-sided signal, but the converse is not necessarily true. A two-sided signal is of infinite duration on both positive and negative sides of t and is neither right-sided nor left-sided.

We can show that the conclusions for ROC for causal signals also hold for right-sided signals, and those for anticausal signals hold for left-sided signals. In other words, if x(t) is causal or right-sided, the poles of X(s) lie to the left of the ROC, and if x(t) is anticausal or left-sided, the poles of X(s) lie to the right of the ROC.

†For instance, if x(t) exists for all t > 10, then x(−t), its time-inverted form, exists for t < −10.

Figure 6.52 Regions of convergence for causal, anticausal, and combined signals.

To prove this generalization, we observe that a right-sided signal can be expressed as x(t) + x1(t), where x(t) is a causal signal and x1(t) is some finite-duration signal. The ROC of any finite-duration signal is the entire s plane (no finite poles). Hence, the ROC of the right-sided signal x(t) + x1(t) is the region common to the ROCs of x(t) and x1(t), which is the same as the ROC for x(t). This proves the generalization for right-sided signals. We can use a similar argument to generalize the result for left-sided signals.

Let us find the bilateral Laplace transform of

x(t) = e^{bt}u(−t) + e^{at}u(t)    (6.60)

We already know the Laplace transform of the causal component:

e^{at}u(t) ⟺ 1/(s − a),   Re s > a    (6.61)

For the anticausal component, x2(t) = e^{bt}u(−t), we have x2(−t) = e^{-bt}u(t), so that

X2(−s) = 1/(s + b),   Re s > −b

Changing the sign of s yields

X2(s) = 1/(−s + b) = −1/(s − b),   Re s < b

Therefore,

X(s) = 1/(s − a) − 1/(s − b),   a < Re s < b

Problems

6.4-11 Consider the circuit of Fig. P6.4-11.
(a) Using transform-domain techniques, determine the system's standard-form transfer function H(s).
(b) Using transform-domain techniques and letting R = L = 1, determine the circuit's zero-state response y_zsr(t) to the input x(t) = e^{-2t}u(t − 1).
(c) Using transform-domain techniques and letting R = 2L = 1, determine the circuit's

Figure P6.4-11

6.4-12 Show that the transfer function that relates the output voltage y(t) to the input voltage x(t) for the op-amp circuit in Fig. P6.4-12a is given by


Figure P6.4-12

H(s) = Kα/(s + α),   where K = 1 + R1/Rn and α = 1/RC

and that the transfer function for the circuit in Fig. P6.4-12b is given by

H(s) = Ks/(s + α)

6.4-13 For the second-order op-amp circuit in Fig. P6.4-13, show that the transfer function H(s) relating the output voltage y(t) to the input voltage x(t) is given by

H(s) = −s²/(s² + 8s + 12)

6.4-14 Consider the op-amp circuit of Fig. P6.4-14.
(a) Determine the standard-form transfer function H(s) of this system.
(b) Determine the standard-form constant-coefficient linear differential equation description of this circuit.

Figure P6.4-13

Figure P6.4-14

Figure P6.4-15

Figure P6.4-16

(c) Using transform-domain techniques, determine the circuit's zero-state response y_zsr(t) to the input x(t) = e^{2t}u(t + 1).
(d) Using transform-domain techniques, determine the circuit's zero-input response y_zir(t) if the t = 0⁻ capacitor voltage (first op-amp output voltage) is 3 volts.

6.4-15 We desire the op-amp circuit of Fig. P6.4-15 to behave as ẏ(t) − 1.5y(t) = −3ẋ(t) + 0.75x(t).
(a) Determine resistors R1, R2, and R3 so that the circuit's input-output behavior follows the desired differential equation ẏ(t) − 1.5y(t) = −3ẋ(t) + 0.75x(t).
(b) Using transform-domain techniques, determine the circuit's zero-input response y_zir(t) if the t = 0 capacitor voltage (first op-amp output voltage) is 2 volts.
(c) Using transform-domain techniques, determine the impulse response h(t) of this circuit.
(d) Determine the circuit's zero-state response y_zsr(t) to the input x(t) = u(t − 2).

6.4-16 We desire the op-amp circuit of Fig. P6.4-16 to behave according to a specified second-order constant-coefficient differential equation relating y(t) and x(t).
(a) Determine the resistors R1, R2, and R3 to produce the desired behavior.
(b) Using transform-domain techniques, determine the circuit's zero-input response y_zir(t) if the t = 0 capacitor voltages (first two op-amp outputs) are each 1 volt.

6.4-17 (a) Using the initial and final value theorems, find the initial and final values of the zero-state response of a system with the transfer function

H(s) = (6s² + 3s + 10)/(2s² + 6s + 5)

and input x(t) = u(t).
(b) Repeat part (a) for the input x(t) = e^{-t}u(t).


Figure P6.5-2

(c) Find y(0⁺) and y(∞) if Y(s) = (s² + 5s + 6)/(s² + 3s + 2).
(d) Find y(0⁺) and y(∞) if Y(s) = (s³ + 4s² + 10s + 7)/(s² + 2s + 3).

distortion as much as possible by using the system that is the inverse of the channel model. For simplicity, let us assume that a signal is propagated by two paths whose time delays differ by τ seconds. The channel over the intended path has a delay of T seconds and unity gain. The signal over the unintended path has a delay of T + τ seconds and gain a. Such a channel can be modeled as shown in Fig. P6.5-3. Find the inverse system transfer function to correct the delay distortion, and show that the inverse system can be realized by a feedback system. The inverse system should be causal to be realizable. [Hint: We want to correct only the distortion caused by the relative delay τ seconds. For distortionless transmission, the signal may be delayed. What is important is to maintain the shape of x(t). Thus, a received signal of the form cx(t − T) is considered to be distortionless.]

6.5-1 Consider two LTIC systems, the first with transfer function H1(s) and the second with transfer function H2(s).

A bounded-amplitude signal x(t) has bilateral Laplace transform X(s) given by

X(s) = 2s/((s − 1)(s + 1))

(a) Determine the corresponding region of convergence.
(b) Determine the time-domain signal x(t).

FREQUENCY RESPONSE AND ANALOG FILTERS

Filtering is an important area of signal processing. We have already discussed ideal filters in Ch. 4. In this chapter, we shall discuss practical filter characteristics and their design. Filtering characteristics of a system are indicated by its response to sinusoids of various frequencies varying from 0 to ∞. Such characteristics are called the frequency response of the filter. Let us start with determining the frequency response of an LTIC system. Recall that for h(t), we use the notation H(ω) for its Fourier transform and H(s) for its Laplace transform. Also, when the system is causal and asymptotically stable, all the poles of H(s) lie in the LHP. Hence, the region of convergence for H(s) includes the ω axis, and we can obtain the Fourier transform H(ω) by substituting s = jω in the corresponding Laplace transform H(s) (see p. 520). Therefore, H(jω) and H(ω) represent the same entity when the system is asymptotically stable. In this and later chapters, we shall often find it convenient to use the notation H(jω) instead of H(ω).

7.1 FREQUENCY RESPONSE OF AN LTIC SYSTEM

In this section, we find the system response to sinusoidal inputs. In Sec. 2.4.4, we showed that an LTIC system response to an everlasting exponential input x(t) = e^{st} is also an everlasting exponential H(s)e^{st}. As before, we use an arrow directed from the input to the output to represent an input-output pair:

e^{st} ⟹ H(s)e^{st}   (7.1)

Setting s = jω in this relationship yields

e^{jωt} ⟹ H(jω)e^{jωt}   (7.2)

Noting that cos ωt is the real part of e^{jωt}, the use of Eq. (2.31) yields

cos ωt ⟹ Re[H(jω)e^{jωt}]   (7.3)

We can express H(jω) in polar form as

H(jω) = |H(jω)|e^{j∠H(jω)}


With this result, Eq. (7.3) becomes

cos ωt ⟹ |H(jω)| cos[ωt + ∠H(jω)]

In other words, the system response y(t) to a sinusoidal input cos ωt is given by

y(t) = |H(jω)| cos[ωt + ∠H(jω)]

Using a similar argument, we can show that the system response to a sinusoid cos(ωt + θ) is

y(t) = |H(jω)| cos[ωt + θ + ∠H(jω)]   (7.4)

This result is valid only for BIBO-stable systems. The frequency response is meaningless for BIBO-unstable systems. This follows from the fact that the frequency response in Eq. (7.2) is obtained by setting s = jω in Eq. (7.1). But, as shown in Sec. 2.4.4 [Eqs. (2.38) and (2.39)], Eq. (7.1) applies only for the values of s for which H(s) exists. For BIBO-unstable systems, the ROC for H(s) does not include the ω axis, where s = jω [see Eq. (6.14)]. This means that H(s) when s = jω is meaningless for BIBO-unstable systems.†

Equation (7.4) shows that for a sinusoidal input of radian frequency ω, the system response is also a sinusoid of the same frequency ω. The amplitude of the output sinusoid is |H(jω)| times the input amplitude, and the phase of the output sinusoid is shifted by ∠H(jω) with respect to the input phase (see later Fig. 7.1 in Ex. 7.1). For instance, a certain system with |H(j10)| = 3 and ∠H(j10) = −30° amplifies a sinusoid of frequency ω = 10 by a factor of 3 and delays its phase by 30°. The system response to an input 5cos(10t + 50°) is 3 × 5cos(10t + 50° − 30°) = 15cos(10t + 20°).

Clearly, |H(jω)| is the amplitude gain of the system, and a plot of |H(jω)| versus ω shows the amplitude gain as a function of frequency ω. We shall call |H(jω)| the amplitude response. It also goes under the name magnitude response.* Similarly, ∠H(jω) is the phase response, and a plot of ∠H(jω) versus ω shows how the system modifies or changes the phase of the input sinusoid. Plots of the magnitude response |H(jω)| and phase response ∠H(jω) show at a glance how a system responds to sinusoids of various frequencies. Observe that H(jω) carries the information of both |H(jω)| and ∠H(jω) and is therefore termed the frequency response of the system. Clearly, the frequency response of a system represents its filtering characteristics.

† This may also be argued as follows.
For BIBO-unstable systems, the zero-input response contains nondecaying natural mode terms of the form cos ω₀t or e^{at} cos ω₀t (a > 0). Hence, the response of such a system to a sinusoid cos ωt will contain not just the sinusoid of frequency ω, but also nondecaying natural modes, rendering the concept of frequency response meaningless.

* Strictly speaking, |H(ω)| is the magnitude response. There is a fine distinction between amplitude and magnitude. Amplitude A can be positive or negative. In contrast, the magnitude |A| is always nonnegative. We refrain from relying on this useful distinction between amplitude and magnitude in the interest of avoiding proliferation of essentially similar entities. This is also why we shall use the "amplitude" (instead of "magnitude") spectrum for |H(ω)|.


EXAMPLE 7.1 Frequency Response

Find the frequency response (amplitude and phase responses) of a system whose transfer function is

H(s) = (s + 0.1)/(s + 5)

Also, find the system response y(t) if the input x(t) is
(a) cos 2t
(b) cos(10t − 50°)

In this case,

H(jω) = (jω + 0.1)/(jω + 5)

Therefore,

|H(jω)| = √(ω² + 0.01)/√(ω² + 25)

and

∠H(jω) = tan⁻¹(ω/0.1) − tan⁻¹(ω/5)

Both the amplitude and the phase response are depicted in Fig. 7.1a as functions of ω. These plots furnish complete information about the frequency response of the system to sinusoidal inputs.

(a) For the input x(t) = cos 2t, ω = 2 and

|H(j2)| = √(4 + 0.01)/√(4 + 25) = 0.372
∠H(j2) = tan⁻¹(2/0.1) − tan⁻¹(2/5) = 65.3°

We also could have read these values directly from the frequency response plots in Fig. 7.1a corresponding to ω = 2. This result means that for a sinusoidal input with frequency ω = 2, the amplitude gain of the system is 0.372, and the phase shift is 65.3°. In other words, the output amplitude is 0.372 times the input amplitude, and the phase of the output is shifted with respect to that of the input by 65.3°. Therefore, the system response to the input cos 2t is

y(t) = 0.372cos(2t + 65.3°)

The input cos 2t and the corresponding system response 0.372cos(2t + 65.3°) are illustrated in Fig. 7.1b.


Figure 7.1 Responses for the system of Ex. 7.1: (a) amplitude and phase responses; (b) the input cos 2t and the corresponding response y(t) = 0.372cos(2t + 65.3°).

(b) For the input cos(10t − 50°), instead of computing the values |H(jω)| and ∠H(jω) as in part (a), we shall read them directly from the frequency response plots in Fig. 7.1a corresponding to ω = 10. These are

|H(j10)| = 0.894 and ∠H(j10) = 26°

Therefore, for a sinusoidal input of frequency ω = 10, the output sinusoid amplitude is 0.894 times the input amplitude, and the output sinusoid is shifted with respect to the input sinusoid by 26°. Therefore, the system response y(t) to an input cos(10t − 50°) is

y(t) = 0.894cos(10t − 50° + 26°) = 0.894cos(10t − 24°)

If the input were sin(10t − 50°), the response would be 0.894sin(10t − 50° + 26°) = 0.894sin(10t − 24°).
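The gains and phase shifts used in this example are easy to verify numerically. The following Python/NumPy sketch (a stand-in for the MATLAB used elsewhere in this chapter; the helper `freq_response` is our own, not from the text) evaluates H(jω) = (jω + 0.1)/(jω + 5) at ω = 2 and ω = 10:

```python
import numpy as np

def freq_response(num, den, w):
    """Evaluate H(jw) = num(jw)/den(jw) from polynomial coefficients
    (highest power first) at the radian frequencies in w."""
    jw = 1j * np.atleast_1d(w)
    return np.polyval(num, jw) / np.polyval(den, jw)

# H(s) = (s + 0.1)/(s + 5), as in Ex. 7.1, evaluated at w = 2 and w = 10
H = freq_response([1, 0.1], [1, 5], [2.0, 10.0])

gain = np.abs(H)                 # amplitude response: approx. 0.372 and 0.894
phase = np.degrees(np.angle(H))  # phase response: approx. 65.3 and 26.0 degrees
```

This reproduces the values read from Fig. 7.1a in parts (a) and (b).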


The frequency response plots in Fig. 7.1a show that the system has highpass filtering characteristics; it responds well to sinusoids of higher frequencies (ω well above 5) and suppresses sinusoids of lower frequencies (ω well below 5).

PLOTTING FREQUENCY RESPONSE WITH MATLAB
It is simple to use MATLAB to create magnitude and phase response plots. Here, we consider two methods. In the first method, we use an anonymous function to define the transfer function H(s) and then obtain the frequency response plots by substituting jω for s.

>> H = @(s) (s+0.1)./(s+5);
>> omega = 0:.01:20;
>> subplot(1,2,1); plot(omega,abs(H(1j*omega)),'k-');
>> subplot(1,2,2); plot(omega,angle(H(1j*omega))*180/pi,'k-');

In the second method, we define vectors that contain the numerator and denominator coefficients of H(s) and then use the freqs command to compute the frequency response.

>> B = [1 0.1]; A = [1 5]; omega = 0:.01:20;
>> H = freqs(B,A,omega);
>> subplot(1,2,1); plot(omega,abs(H),'k-');
>> subplot(1,2,2); plot(omega,angle(H)*180/pi,'k-');

Both approaches generate plots that match Fig. 7.1a.
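For readers working outside MATLAB, SciPy's `signal.freqs` plays the same role as the MATLAB `freqs` call above. A minimal Python sketch (assuming SciPy is available; the variable names are ours):

```python
import numpy as np
from scipy import signal

# H(s) = (s + 0.1)/(s + 5): same coefficient vectors as the MATLAB freqs call
B, A = [1, 0.1], [1, 5]
omega = np.arange(0.01, 20, 0.01)
w, H = signal.freqs(B, A, worN=omega)

mag = np.abs(H)                      # amplitude response
phase_deg = np.degrees(np.angle(H))  # phase response in degrees
```

Passing `w`, `mag`, and `phase_deg` to matplotlib reproduces the two panels of Fig. 7.1a.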

EXAMPLE 7.2 Frequency Responses of an Ideal Delay, Differentiator, and Integrator

Find and sketch the frequency responses (magnitude and phase) for (a) an ideal delay of T seconds, (b) an ideal differentiator, and (c) an ideal integrator.

(a) Ideal delay of T seconds. The transfer function of an ideal delay is [see Eq. (6.34)]

H(s) = e^{−sT}

Therefore,

H(jω) = e^{−jωT}

Consequently,

|H(jω)| = 1 and ∠H(jω) = −ωT

These amplitude and phase responses are shown in Fig. 7.2a. The amplitude response is constant (unity) for all frequencies. The phase shift increases linearly with frequency with a slope of −T. This result can be explained physically by recognizing that if a sinusoid cos ωt is passed through an ideal delay of T seconds, the output is cos ω(t − T). The output sinusoid


amplitude is the same as that of the input for all values of ω. Therefore, the amplitude response (gain) is unity for all frequencies. Moreover, the output cos ω(t − T) = cos(ωt − ωT) has a phase shift −ωT with respect to the input cos ωt. Therefore, the phase response is linearly proportional to the frequency ω with a slope −T.

(b) An ideal differentiator. The transfer function of an ideal differentiator is [see Eq. (6.35)]

H(s) = s

Therefore,

H(jω) = jω = ωe^{jπ/2}

Consequently,

|H(jω)| = ω and ∠H(jω) = π/2

These amplitude and phase responses are depicted in Fig. 7.2b. The amplitude response increases linearly with frequency, and the phase response is constant (π/2) for all frequencies. This result can be explained physically by recognizing that if a sinusoid cos ωt is passed through an ideal differentiator, the output is −ω sin ωt = ω cos[ωt + (π/2)]. Therefore, the output sinusoid amplitude is ω times the input amplitude; that is, the amplitude response (gain) increases linearly with frequency ω. Moreover, the output sinusoid undergoes a phase shift π/2 with respect to the input cos ωt. Therefore, the phase response is constant (π/2) with frequency.

In an ideal differentiator, the amplitude response (gain) is proportional to frequency [|H(jω)| = ω] so that the higher-frequency components are enhanced (see Fig. 7.2b). All practical signals are contaminated with noise, which, by its nature, is a broadband (rapidly varying) signal containing components of very high frequencies. A differentiator can increase the noise disproportionately to the point of drowning out the desired signal. This is why ideal differentiators are avoided in practice.

(c) An ideal integrator. The transfer function of an ideal integrator is [see Eq. (6.36)]

H(s) = 1/s

Therefore,

H(jω) = 1/(jω) = (1/ω)e^{−jπ/2}

Consequently,

|H(jω)| = 1/ω and ∠H(jω) = −π/2

These amplitude and phase responses are illustrated in Fig. 7.2c. The amplitude response is inversely proportional to frequency, and the phase shift is constant (−π/2) with frequency. This result can be explained physically by recognizing that if a sinusoid cos ωt is passed through an ideal integrator, the output is (1/ω) sin ωt = (1/ω) cos[ωt − (π/2)]. Therefore, the amplitude response is inversely proportional to ω, and the phase response is constant (−π/2)

with frequency.† Because its gain is 1/ω, the ideal integrator suppresses higher-frequency components but enhances lower-frequency components with ω < 1. Consequently, noise signals (if they do not contain an appreciable amount of very-low-frequency components) are suppressed (smoothed out) by an integrator.

Figure 7.2 Frequency response of an ideal (a) delay, (b) differentiator, and (c) integrator.

† A puzzling aspect of this result is that in deriving the transfer function of the integrator in Eq. (6.36), we assumed that the input starts at t = 0. In contrast, in deriving its frequency response, we assume that the everlasting exponential input e^{jωt} starts at t = −∞. There appears to be a fundamental contradiction between the everlasting input, which starts at t = −∞, and the integrator, which opens its gates only at t = 0. Of what use is the everlasting input, since the integrator starts integrating at t = 0? The answer is that the integrator gates are always open, and integration begins whenever the input starts. We restricted the input to start at t = 0 in deriving Eq. (6.36) because we were finding the transfer function using the unilateral transform, where the inputs begin at t = 0. So the integrator starting to integrate at t = 0 is a restriction imposed by the limitations of the unilateral transform method, not by the integrator itself. If we were to find the integrator transfer function using Eq. (2.40), where there is no such restriction on the input, we would still find the transfer function of an integrator as 1/s. Similarly, even if we were to use the bilateral Laplace transform, where t starts at −∞, we would find the transfer function of an integrator to be 1/s. The transfer function of a system is a property of the system and does not depend on the method used to find it.
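The three ideal responses of Ex. 7.2 are easy to check numerically. This Python sketch (our own illustration; the delay T = 0.1 s is an arbitrary choice) evaluates H(jω) for the delay, differentiator, and integrator on a grid of frequencies:

```python
import numpy as np

w = np.array([0.5, 1.0, 2.0, 10.0])  # sample radian frequencies (all > 0)
T = 0.1                              # arbitrary delay for the ideal-delay case

H_delay = np.exp(-1j * w * T)  # ideal delay:          |H| = 1,   angle = -wT
H_diff = 1j * w                # ideal differentiator: |H| = w,   angle = +pi/2
H_int = 1.0 / (1j * w)         # ideal integrator:     |H| = 1/w, angle = -pi/2
```

The magnitudes and angles of these arrays match the formulas derived above for every ω on the grid.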


DRILL 7.1 Sinusoidal Response of an LTIC System

Find the response of an LTIC system specified by

d²y(t)/dt² + 3 dy(t)/dt + 2y(t) = dx(t)/dt + 5x(t)

if the input is a sinusoid 20 sin(3t + 35°).
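A quick numerical check of this drill (a sketch of ours, not part of the text): the system's transfer function is H(s) = (s + 5)/(s² + 3s + 2), and for the input 20 sin(3t + 35°) we evaluate H(j3):

```python
import numpy as np

w = 3.0
jw = 1j * w
H = (jw + 5) / (jw**2 + 3 * jw + 2)  # H(s) = (s + 5)/(s^2 + 3s + 2) at s = j3

amp_out = 20 * np.abs(H)                  # output amplitude, about 10.23
phase_out = 35 + np.degrees(np.angle(H))  # output phase, about -61.9 degrees
# So y(t) is approximately 10.23 sin(3t - 61.9 degrees)
```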

7.1.1 Steady-State Response to Causal Sinusoidal Inputs

So far we have discussed the LTIC system response to everlasting sinusoidal inputs (starting at t = −∞). In practice, we are more interested in causal sinusoidal inputs (sinusoids starting at t = 0). Consider the input e^{jωt}u(t), which starts at t = 0 rather than at t = −∞. In this case, X(s) = 1/(s − jω). Moreover, according to Eq. (6.31), H(s) = P(s)/Q(s), where Q(s) is the characteristic polynomial given by Q(s) = (s − λ₁)(s − λ₂)···(s − λ_N).† Hence,

Y(s) = X(s)H(s) = P(s)/[(s − λ₁)(s − λ₂)···(s − λ_N)(s − jω)]

In the partial fraction expansion of the right-hand side, let the coefficients corresponding to the N terms (s − λ₁), (s − λ₂), ..., (s − λ_N) be k₁, k₂, ..., k_N. The coefficient corresponding to the last term (s − jω) is P(s)/Q(s)|_{s=jω} = H(jω). Hence,

Y(s) = Σ_{i=1}^{N} kᵢ/(s − λᵢ) + H(jω)/(s − jω)

and

y(t) = Σ_{i=1}^{N} kᵢe^{λᵢt}u(t) + H(jω)e^{jωt}u(t)

where the sum is the transient component y_tr(t) and the last term is the steady-state component y_ss(t).

For an asymptotically stable system, the characteristic mode terms e^{λᵢt} decay with time and, therefore, constitute the so-called transient component of the response. The last term H(jω)e^{jωt} persists forever and is the steady-state component of the response, given by

y_ss(t) = H(jω)e^{jωt}u(t)

This result also explains why an everlasting exponential input e^{jωt} results in the total response H(jω)e^{jωt} for BIBO-stable systems. Because the input started at t = −∞, at any finite time the decaying

† For simplicity, we have assumed nonrepeating characteristic roots. The procedure is readily modified for repeated roots, and the same conclusion results.

transient component has long vanished, leaving only the steady-state component. Hence, the total response appears to be H(jω)e^{jωt}. From the argument that led to Eq. (7.4), it follows that for a causal sinusoidal input cos ωt, the steady-state response y_ss(t) is given by

y_ss(t) = |H(jω)| cos[ωt + ∠H(jω)]u(t)

In summary, |H(jω)| cos[ωt + ∠H(jω)] is the total response to the everlasting sinusoid cos ωt. In contrast, it is the steady-state response to the same input applied at t = 0.
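The transient/steady-state split can be seen numerically. Assuming SciPy is available, the sketch below (an illustration of ours, using the simple system H(s) = 1/(s + 1)) simulates the causal input cos(2t)u(t) and checks that, once the e^{−t} transient dies out, the output equals |H(j2)| cos[2t + ∠H(j2)]:

```python
import numpy as np
from scipy import signal

sys = signal.lti([1], [1, 1])  # H(s) = 1/(s + 1), asymptotically stable
t = np.linspace(0, 20, 4001)
u = np.cos(2 * t)              # causal sinusoid cos(2t)u(t)
_, y, _ = signal.lsim(sys, U=u, T=t)

H2 = 1 / (1j * 2 + 1)          # H(j2)
y_ss = np.abs(H2) * np.cos(2 * t + np.angle(H2))  # predicted steady state

# After the transient decays (t > 10), y(t) should match y_ss(t)
tail_err = np.max(np.abs(y[t > 10] - y_ss[t > 10]))
```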

7.2 BODE PLOTS

Sketching frequency response plots (|H(jω)| and ∠H(jω) versus ω) is considerably facilitated by the use of logarithmic scales. The amplitude and phase response plots as functions of ω on a logarithmic scale are known as Bode plots. By using the asymptotic behavior of the amplitude and phase responses, we can sketch these plots with remarkable ease, even for higher-order transfer functions. Let us consider a system with the transfer function

H(s) = K(s + a₁)(s + a₂)/[s(s + b₁)(s² + b₂s + b₃)]   (7.5)

where the second-order factor (s² + b₂s + b₃) is assumed to have complex conjugate roots.† We shall rearrange Eq. (7.5) in the form

H(s) = (Ka₁a₂/b₁b₃) · (1 + s/a₁)(1 + s/a₂)/[s(1 + s/b₁)(1 + b₂s/b₃ + s²/b₃)]

and

H(jω) = (Ka₁a₂/b₁b₃) · (1 + jω/a₁)(1 + jω/a₂)/[jω(1 + jω/b₁)(1 + jωb₂/b₃ + (jω)²/b₃)]   (7.6)

This equation shows that H(jω) is a complex function of ω. The amplitude response |H(jω)| and the phase response ∠H(jω) are given by

|H(jω)| = |Ka₁a₂/b₁b₃| · |1 + jω/a₁| |1 + jω/a₂| / (|jω| |1 + jω/b₁| |1 + jωb₂/b₃ + (jω)²/b₃|)

and

∠H(jω) = ∠(Ka₁a₂/b₁b₃) + ∠(1 + jω/a₁) + ∠(1 + jω/a₂) − ∠jω − ∠(1 + jω/b₁) − ∠[1 + jωb₂/b₃ + (jω)²/b₃]   (7.7)

† Coefficients a₁, a₂ and b₁, b₂, b₃ used in this section are not to be confused with those used in the representation of Nth-order LTIC system equations given earlier [Eqs. (2.1) or (6.30)].

From Eq. (7.7), we see that the phase function consists of the addition of terms of four kinds: (i) the phase of a constant, (ii) the phase of jω, which is 90° for all values of ω, (iii) the phase of the first-order term of the form 1 + jω/a, and (iv) the phase of the second-order term 1 + jωb₂/b₃ + (jω)²/b₃.

We can plot these basic phase functions for ω in the range 0 to ∞ and then, using these plots, construct the phase function of any transfer function by properly adding these basic responses. Note that if a particular term is in the numerator, its phase is added, but if the term is in the denominator, its phase is subtracted. This makes it easy to plot the phase function ∠H(jω) as a function of ω.

Computation of |H(jω)|, unlike that of the phase function, involves the multiplication and division of various terms. This is a formidable task, especially when we have to plot this function for the entire range of ω (0 to ∞). We know that a log operation converts multiplication and division to addition and subtraction. So, instead of plotting |H(jω)|, why not plot log|H(jω)| to simplify our task? We can take advantage of the fact that logarithmic units are desirable in several applications where the variables considered have a very large range of variation. This is particularly true in frequency response plots, where we may have to plot the frequency response over a range from a very low frequency, near 0, to a very high frequency, in the range of 10¹⁰ or higher. A plot on a linear frequency scale for such a large range would bury much of the useful information at lower frequencies. Also, the amplitude response may have a very large dynamic range, from a low of 10⁻⁶ to a high of 10⁶. A linear plot would be unsuitable for such a situation. Therefore, logarithmic plots not only simplify our task of plotting, but, fortunately, they are also desirable in this situation.

There is another important reason for using a logarithmic scale. The Weber-Fechner law (first observed by Weber in 1834) states that human senses (sight, touch, hearing, etc.) generally respond in a logarithmic way. For instance, when we hear sound at two different power levels, we judge one sound to be twice as loud when the ratio of the two sound powers is 10. Human senses respond to equal ratios of power, not equal increments in power [1]. This is clearly a logarithmic response.†

The logarithmic unit is the decibel and is equal to 20 times the logarithm of the quantity (log to the base 10). Therefore, 20 log₁₀|H(jω)| is simply the log amplitude in decibels (dB).‡ Thus, instead of plotting |H(jω)|, we shall plot 20 log₁₀|H(jω)| as a function of ω. These plots

† Observe that the frequencies of musical notes are spaced logarithmically (not linearly). The octave is a ratio of 2: the frequencies of the same note in successive octaves have a ratio of 2. On the Western musical scale, there are 12 distinct notes in each octave. The frequency of each note is about 6% higher than the frequency of the preceding note. Thus, successive notes are separated not by some constant frequency, but by a constant ratio of 1.06.

‡ Originally, the unit bel (after the inventor of the telephone, Alexander Graham Bell) was introduced to represent the power ratio as log₁₀ P₂/P₁ bels. A tenth of this unit is a decibel, as in 10 log₁₀ P₂/P₁ decibels.
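Converting amplitude ratios to decibels is a one-line computation; a small Python sketch (our own illustration):

```python
import math

def to_db(amplitude_ratio):
    """Amplitude (not power) ratio expressed in decibels."""
    return 20 * math.log10(amplitude_ratio)

gains_db = to_db(10), to_db(2), to_db(0.1)
# A gain of 10 is 20 dB, a gain of 2 is about 6.02 dB, a gain of 0.1 is -20 dB
```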


(log amplitude and phase) are called Bode plots. For the transfer function in Eq. (7.6), the log amplitude is

20 log|H(jω)| = 20 log|Ka₁a₂/b₁b₃| + 20 log|1 + jω/a₁| + 20 log|1 + jω/a₂| − 20 log|jω| − 20 log|1 + jω/b₁| − 20 log|1 + jωb₂/b₃ + (jω)²/b₃|   (7.8)

The term 20 log(Ka₁a₂/b₁b₃) is a constant. We observe that the log amplitude is a sum of four basic kinds of terms corresponding to a constant, a pole or zero at the origin (20 log|jω|), a first-order pole or zero (20 log|1 + jω/a|), and complex-conjugate poles or zeros (20 log|1 + jωb₂/b₃ + (jω)²/b₃|). We can sketch these four basic terms as functions of ω and use them to construct the log-amplitude plot of any desired transfer function. Let us discuss each of the terms.

7.2.1 Constant Ka₁a₂/b₁b₃

The log amplitude of the constant Ka₁a₂/b₁b₃ term is also a constant, 20 log|Ka₁a₂/b₁b₃|. The phase contribution from this term is zero for a positive value and π for a negative value of the constant (complex constants can have different phases).

7.2.2 Pole (or Zero) at the Origin

LOG MAGNITUDE
A pole at the origin gives rise to the term −20 log|jω|, which can be expressed as

−20 log|jω| = −20 log ω

This function can be plotted as a function of ω. However, we can effect further simplification by using the logarithmic scale for the variable ω itself. Let us define a new variable u such that

u = log ω

Hence,

−20 log ω = −20u

The log-amplitude function −20u is plotted as a function of u in Fig. 7.3a. This is a straight line with a slope of −20. It crosses the u axis at u = 0. The ω scale (u = log ω) also appears in Fig. 7.3a. Semilog graphs can be conveniently used for plotting, and we can directly plot ω on semilog paper. A ratio of 10 is a decade, and a ratio of 2 is known as an octave. Furthermore, a decade along the

† Since the power ratio of two signals is proportional to the amplitude ratio squared, or |H(jω)|², we have 10 log₁₀ P₂/P₁ = 10 log₁₀ |H(jω)|² = 20 log₁₀ |H(jω)| dB.


Figure 7.3 (a) Amplitude and (b) phase responses of a pole or a zero at the origin.

ω scale is equivalent to 1 unit along the u scale. We can also show that a ratio of 2 (an octave) along the ω scale equals 0.3010 (which is log₁₀ 2) along the u scale.† Note that equal increments in u are equivalent to equal ratios on the ω scale. Thus, 1 unit along the u scale is the same as one decade along the ω scale. This means that the amplitude plot has a

† This point can be shown as follows. Let ω₁ and ω₂ along the ω scale correspond to u₁ and u₂ along the u scale so that log ω₁ = u₁ and log ω₂ = u₂. Then

u₂ − u₁ = log₁₀ ω₂ − log₁₀ ω₁ = log₁₀(ω₂/ω₁)

Thus, if ω₂/ω₁ = 10 (which is a decade), then u₂ − u₁ = log₁₀ 10 = 1; and if ω₂/ω₁ = 2 (which is an octave), then u₂ − u₁ = log₁₀ 2 = 0.3010.


slope of −20 dB/decade or −20(0.3010) = −6.02 dB/octave (commonly stated as −6 dB/octave). Moreover, the amplitude plot crosses the ω axis at ω = 1, since u = log₁₀ ω = 0 when ω = 1.

For the case of a zero at the origin, the log-amplitude term is 20 log ω. This is a straight line passing through ω = 1 and having a slope of 20 dB/decade (or 6 dB/octave). This plot is a mirror image, about the ω axis, of the plot for a pole at the origin and is shown dashed in Fig. 7.3a.

PHASE

The phase function corresponding to the pole at the origin is −∠jω [see Eq. (7.7)]. Thus,

∠H(jω) = −∠jω = −90°

The phase is constant (−90°) for all values of ω, as depicted in Fig. 7.3b. For a zero at the origin, the phase is ∠jω = 90°. This is a mirror image of the phase plot for a pole at the origin and is shown dashed in Fig. 7.3b.

7.2.3 First-Order Pole (or Zero)

THE LOG MAGNITUDE
The log amplitude of a first-order pole at −a is −20 log|1 + jω/a|. Let us investigate the asymptotic behavior of this function for extreme values of ω (ω ≪ a and ω ≫ a). [...]

7.2.4 Second-Order Pole (or Zero)

[...] The two asymptotes are zero for ω < ωₙ and −40 log(ω/ωₙ) for ω > ωₙ. The second asymptote is a straight line with a slope of −40 dB/decade (or −12 dB/octave) when plotted against the log ω scale. It begins at ω = ωₙ [see Eq. (7.11)]. The asymptotes are depicted in Fig. 7.6a. The exact log amplitude is given by [see Eq. (7.10)]

log amplitude = −20 log{[1 − (ω/ωₙ)²]² + 4ζ²(ω/ωₙ)²}^{1/2}   (7.12)

The log amplitude in this case involves the parameter ζ, resulting in a different plot for each value of ζ. For complex-conjugate poles,† ζ < 1. Hence, we must sketch a family of curves for a number of values of ζ.

† For ζ ≥ 1, the two poles in the second-order factor are no longer complex but real, and each of these two real poles can be dealt with as a separate first-order factor.
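Equation (7.12) also quantifies how far the exact curve sits from the asymptotes. The Python sketch below (our own check) evaluates the exact log amplitude for ζ = 0.1 at ω = 0.1ωₙ, ωₙ, and 10ωₙ and subtracts the asymptotic value; the error at ω = ωₙ is −20 log₁₀(2ζ) ≈ 14 dB, which is why small-ζ poles need the large corrections read from Fig. 7.7a:

```python
import numpy as np

def second_order_log_amp(w, wn, zeta):
    """Exact log amplitude (dB) of a second-order pole pair, Eq. (7.12)."""
    r = w / wn
    return -20 * np.log10(np.sqrt((1 - r**2) ** 2 + 4 * zeta**2 * r**2))

wn, zeta = 10.0, 0.1
w = np.array([0.1 * wn, wn, 10 * wn])

exact = second_order_log_amp(w, wn, zeta)
asym = np.where(w < wn, 0.0, -40 * np.log10(w / wn))  # the two straight-line asymptotes
correction = exact - asym  # roughly +0.09, +13.98, +0.09 dB
```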

[Figure 7.6: (a) amplitude and (b) phase responses of a second-order pole for various values of ζ.]


Figure 7.7 Errors in the asymptotic approximation of a second-order pole.

PHASE

The phase function for second-order poles, as apparent in Eq. (7.9), is

∠H(jω) = −tan⁻¹[2ζ(ω/ωₙ)/(1 − (ω/ωₙ)²)]   (7.13)

For ω ≪ ωₙ, ∠H(jω) ≈ 0

For ω ≫ ωₙ, ∠H(jω) ≈ −180°

Hence, the phase → −180° as ω → ∞. As in the case of amplitude, we also have a family of phase plots for various values of ζ, as illustrated in Fig. 7.6b. A convenient asymptote for the phase of complex-conjugate poles is a step function that is 0° for ω < ωₙ and −180° for ω > ωₙ. Error plots for such an asymptote are shown in Fig. 7.7 for various values of ζ. The exact phase is the asymptotic value plus the error. For complex-conjugate zeros, the amplitude and phase plots are mirror images of those for complex-conjugate poles. We shall demonstrate the application of these techniques with two examples.

EXAMPLE 7.3 Bode Plots for a Second-Order Transfer Function

Sketch Bode plots for the transfer function

H(s) = 20s(s + 100)/[(s + 2)(s + 10)]

MAGNITUDE PLOT
First, we write the transfer function in normalized form:

H(jω) = 100 · jω(1 + jω/100)/[(1 + jω/2)(1 + jω/10)]

Here, the constant term is 100; that is, 40 dB (20 log 100 = 40). This term can be added to the plot by simply relabeling the horizontal axis (from which the asymptotes begin) as the 40 dB line (see Fig. 7.8a). Such a step implies shifting the horizontal axis upward by 40 dB. This is precisely what is desired.



Figure 7.8 (a) Amplitude and (b) phase responses of a second-order system with real poles.

In addition, we have two first-order poles at -2 and -10, one zero at the origin, and one zero at -100.

Step 1. For each of these terms, we draw an asymptotic plot as follows (shown in Fig. 7.8a by dashed lines):

(a) For the zero at the origin, draw a straight line with a slope of 20 dB/decade passing through ω = 1.
(b) For the pole at −2, draw a straight line with a slope of −20 dB/decade (for ω > 2) beginning at the corner frequency ω = 2.
(c) For the pole at −10, draw a straight line with a slope of −20 dB/decade beginning at the corner frequency ω = 10.
(d) For the zero at −100, draw a straight line with a slope of 20 dB/decade beginning at the corner frequency ω = 100.

Step 2. Add all the asymptotes, as depicted in Fig. 7.8a by solid line segments.

Step 3. Apply the following corrections (see Fig. 7.5a):
(a) The correction at ω = 1 because of the corner frequency at ω = 2 is −1 dB. The correction at ω = 1 because of the corner frequencies at ω = 10 and ω = 100 is quite small (see Fig. 7.5a) and may be ignored. Hence, the net correction at ω = 1 is −1 dB.
(b) The correction at ω = 2 because of the corner frequency at ω = 2 is −3 dB, and the correction because of the corner frequency at ω = 10 is −0.17 dB. The correction because of the corner frequency ω = 100 can be safely ignored. Hence, the net correction at ω = 2 is −3.17 dB.
(c) The correction at ω = 10 because of the corner frequency at ω = 10 is −3 dB, and the correction because of the corner frequency at ω = 2 is −0.17 dB. The correction because of ω = 100 can be ignored. Hence, the net correction at ω = 10 is −3.17 dB.
(d) The correction at ω = 100 because of the corner frequency at ω = 100 is 3 dB, and the corrections because of the other corner frequencies may be ignored.
(e) In addition to the corrections at corner frequencies, we may consider corrections at intermediate points for more accurate plots. For instance, the corrections at ω = 4 because of the corner frequencies at ω = 2 and 10 are −1 and about −0.65 dB, totaling −1.65 dB. In the same way, the corrections at ω = 5 because of the corner frequencies at ω = 2 and 10 are −0.65 and −1 dB, totaling −1.65 dB.
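The corrections quoted in Step 3 can be double-checked from the exact first-order term −20 log|1 + jω/a|. A short Python sketch (ours, not from the text):

```python
import numpy as np

def first_order_correction(w, a):
    """Exact-minus-asymptote error (dB) of a first-order pole factor at w."""
    exact = -20 * np.log10(abs(1 + 1j * w / a))
    asym = 0.0 if w < a else -20 * np.log10(w / a)
    return exact - asym

# Net correction at w = 2: about -3 dB from the pole at 2 plus -0.17 dB
# from the pole at 10 (the zero at 100 contributes negligibly)
net_at_2 = first_order_correction(2, 2) + first_order_correction(2, 10)
```

This gives about −3.18 dB, matching the −3.17 dB estimate of Step 3(b) to within rounding.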

With these corrections, the resulting amplitude plot is illustrated in Fig. 7.8a.

PHASE PLOT
We draw the asymptotes corresponding to each of the four factors:
(a) The zero at the origin causes a 90° phase shift.
(b) The pole at s = −2 has an asymptote with a zero value for −∞ < ω < 0.2 and a slope of −45°/decade beginning at ω = 0.2 and going up to ω = 20. The asymptotic value for ω > 20 is −90°.
(c) The pole at s = −10 has an asymptote with a zero value for −∞ < ω < 1 and a slope of −45°/decade beginning at ω = 1 and going up to ω = 100. The asymptotic value for ω > 100 is −90°.


(d) The zero at s = −100 has an asymptote with a zero value for −∞ < ω < 10 and a slope of 45°/decade beginning at ω = 10 and going up to ω = 1000. The asymptotic value for ω > 1000 is 90°.

All the asymptotes are added, as shown in Fig. 7.8b. The appropriate corrections are applied from Fig. 7.5b, and the exact phase plot is depicted in Fig. 7.8b.

EXAMPLE 7.4 Bode Plots for a Second-Order Transfer Function with Complex Poles

Sketch the amplitude and phase responses (Bode plots) for the transfer function

H(s) = 10(s + 100)/(s² + 2s + 100) = 10(1 + s/100)/(1 + s/50 + s²/100)

MAGNITUDE PLOT

Here, the constant term is 10: that is, 20 dB (20 log 10 = 20). To add this term, we simply label the horizontal axis (from which the asymptotes begin) as the 20 dB line, as before (see Fig. 7.9a). In addition, w e have a real zero at s = -100 and a pair of complex conjugate poles. When we express the second-order factor in standard form, s2 + 2s + 100 = S2 + 2s WnS + W� we have Wn

= IO

s =0.1

and

Step 1. Draw an asymptote of -40 dB/decade ( -12 dB/octave) starting at w = 10 for the complex conjugate poles, and draw another asymptote of 20 dB/decade starting at w = 100 for the (real) zero. Step 2. Add both asymptotes. Step 3. Apply the correction at w = l 00, where the correction because of the corner frequency w=100 is 3 dB. The correction because of the comer frequencyw = l 0, as seen from Fig. 7. 7a for = 0.1, can be safely ignored. Next, the correction at w = 10 because of the comer frequencyw = 10 is 13.90 dB (see Fig. 7.7a for = 0.1). The correction because of the real zero at -100 can be safely ignored at w = l 0. We may find corrections at a few more points. The resulting plot is illustrated in Fig. 7. 9a.


CHAPTER 7 FREQUENCY RESPONSE AND ANALOG FILTERS

[Figure 7.9: (a) amplitude response and (b) phase response for Ex. 7.4, showing the exact plots together with their asymptotic approximations.]

PHASE PLOT

The complex conjugate poles (ωn = 10) have a phase asymptote with a zero value for ω < 1, a slope of -90°/decade from ω = 1 to ω = 100, and a value of -180° for ω > 100. The zero at s = -100 has a phase asymptote with a zero value for ω < 10 and a slope of 45°/decade from ω = 10 to ω = 1000; for ω > 1000, the asymptote is 90°. The two asymptotes add to give the sawtooth shown in Fig. 7.9b. We now apply the corrections from Figs. 7.5b and 7.7b to obtain the exact plot.


BODE PLOTS WITH MATLAB

Bode plots make it relatively simple to hand-draw straight-line approximations to a system's magnitude and phase responses. To produce exact Bode plots, we turn to MATLAB and its bode command.

>> bode(tf([10 1000],[1 2 100]),'k-');

The resulting MATLAB plots, shown in Fig. 7.10, match the plots shown in Fig. 7.9.
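An equivalent plot can be produced in Python; this SciPy sketch (an assumption — the book itself uses MATLAB) builds the same transfer function and computes its Bode data:

```python
import numpy as np
from scipy import signal

# H(s) = (10s + 1000)/(s^2 + 2s + 100), the system of Ex. 7.4
sys = signal.TransferFunction([10, 1000], [1, 2, 100])

w = np.logspace(0, 4, 401)                 # 1 to 10^4 rad/s
w, mag_db, phase_deg = signal.bode(sys, w=w)
# mag_db and phase_deg can be plotted with matplotlib's semilogx
# to reproduce Fig. 7.10-style magnitude and phase plots
```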


Figure 7.10 MATLAB-generated Bode plots for Ex. 7.4.

Comment. These two examples demonstrate that actual frequency response plots are very close to asymptotic plots, which are so easy to construct. Thus, by mere inspection of H(s) and its poles and zeros, one can rapidly construct a mental image of the frequency response of a system. This is the principal virtue of Bode plots.

POLES AND ZEROS IN THE RIGHT HALF-PLANE

In our discussion so far, we have assumed the poles and zeros of the transfer function to be in the left half-plane. What if some of the poles and/or zeros of H(s) lie in the RHP? If there is a pole in the RHP, the system is unstable. Such systems are useless for any signal-processing application. For this reason, we shall consider only the case of the RHP zero. The term corresponding to an RHP zero at s = a is (s/a) - 1, and the corresponding frequency response is (jω/a) - 1. The amplitude response is

|(jω/a) - 1| = (ω²/a² + 1)^(1/2)

This shows that the amplitude response of an RHP zero at s = a is identical to that of an LHP zero at s = -a. Therefore, the log amplitude plots remain unchanged whether the zeros are in the LHP or the RHP. However, the phase corresponding to the RHP zero at s = a is

∠(jω/a - 1) = ∠-(1 - jω/a) = π + tan⁻¹(-ω/a) = π - tan⁻¹(ω/a)

whereas the phase corresponding to the LHP zero at s = -a is tan⁻¹(ω/a). The complex-conjugate zeros in the RHP give rise to a term s² - 2ζωn s + ωn², which is identical to the term s² + 2ζωn s + ωn² with a sign change in ζ. Hence, from Eqs. (7.12) and (7.13), it follows that the amplitudes are identical, but the phases are of opposite signs for the two terms.

Systems whose poles and zeros are restricted to the LHP are classified as minimum phase systems. Minimum phase systems are particularly desirable because the system and its inverse are both stable.
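This equality of amplitude responses (and the phase difference) is easy to confirm numerically; a small NumPy sketch, not from the text, with an assumed zero location a = 2:

```python
import numpy as np

a = 2.0                          # assumed zero location for illustration
w = np.array([0.5, 2.0, 8.0])

rhp = 1j * w / a - 1             # frequency response of the RHP zero term (s/a) - 1
lhp = 1j * w / a + 1             # frequency response of the LHP zero term (s/a) + 1

same_mag = np.allclose(np.abs(rhp), np.abs(lhp))   # amplitudes are identical
rhp_phase = np.angle(rhp)        # equals pi - arctan(w/a)
lhp_phase = np.angle(lhp)        # equals arctan(w/a)
```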

7.2.5 The Transfer Function from the Frequency Response

In the preceding section, we were given the transfer function of a system. From a knowledge of the transfer function, we developed techniques for determining the system response to sinusoidal inputs. We can also reverse the procedure to determine the transfer function of a minimum phase system from the system's response to sinusoids. This application has significant practical utility. If we are given a system in a black box with only the input and output terminals available, the transfer function has to be determined by experimental measurements at the input and output terminals. The frequency response to sinusoidal inputs is one of the possibilities that is very attractive because the measurements involved are so simple. One needs only to apply a sinusoidal signal at the input and observe the output. We find the amplitude gain |H(jω)| and the output phase shift ∠H(jω) (with respect to the input sinusoid) for various values of ω over the entire range from 0 to ∞. This information yields the frequency response plots (Bode plots) when plotted against log ω. From these plots, we determine the appropriate asymptotes by taking advantage of the fact that the slopes of all asymptotes must be multiples of ±20 dB/decade if the transfer function is a rational function (a function that is a ratio of two polynomials in s). From the asymptotes, the corner frequencies are obtained. Corner frequencies determine the poles and zeros of the transfer function. Because of the ambiguity about the location of zeros, since LHP and RHP zeros (zeros at s = ±a) have identical magnitudes, this procedure works only for minimum phase systems.

7.3 CONTROL SYSTEM DESIGN USING FREQUENCY RESPONSE

Figure 7.11a shows a basic closed-loop system, whose open-loop transfer function is KG(s)H(s) (see Sec. 6.7.3). According to Eq. (6.49), the closed-loop transfer function is

Figure 7.11 Gain and phase margins of a system with open-loop transfer function 24/[s(s + 2)(s + 4)]: (a) block diagram; (b) Bode plots, showing gain crossover frequency ωg = 1.9, phase margin 22.5°, and phase crossover frequency ωp = 2.83; (c) Nyquist plot.

T(s) = KG(s) / (1 + KG(s)H(s))

The time-domain method of control system design discussed in Sec. 6.7 works only when the transfer function of the plant (the system to be controlled) is known and is a rational function (ratio of two polynomials in s). The input-output description of practical systems is often unknown and is more likely to be nonrational. A system containing an ideal time delay (dead time) is an example of a nonrational system. In such cases, we can determine the frequency response of the open-loop system empirically and use these data to design the (closed-loop) system. In this section, we shall discuss a feedback system design procedure based on the frequency response description of a system. However, the frequency response design method is not as convenient as the time-domain design method from the viewpoint of transient and steady-state error specifications. Consequently, the time-domain method in Sec. 6.7 and the frequency response method should be considered as complementary rather than as alternatives or as rivals.

Frequency response information can be presented in various forms, of which the Bode plot is one. The same information can be presented by the Nyquist plot (also known as the polar plot) or by the Nichols plot (also known as the log-magnitude versus angle plot). Here, we shall discuss the techniques using Bode and Nyquist plots only. Figure 7.11b shows Bode plots for the open-loop transfer function K/[s(s + 2)(s + 4)] when K = 24. The same information is presented in polar form in the corresponding Nyquist plot of Fig. 7.11c. For example, at ω = 1, |H(jω)| = 2.6 and ∠H(jω) = -130.6°. For the polar plot, we plot a point at a distance of 2.6 units and at an angle of -130.6° from the horizontal axis. This point is labeled as ω = 1 for identification (see Fig. 7.11c). We plot such points for several frequencies in the range from ω = 0 to ∞ and draw a smooth curve through them to obtain the Nyquist plot.
As we next discuss, Bode or Nyquist (or Nichols) plots of the open-loop transfer function allow us to readily investigate stability aspects of the closed-loop system.

7.3.1 Relative Stability: Gain and Phase Margins

For the system in Fig. 7.11a, the characteristic equation is 1 + KG(s)H(s) = 0, and the characteristic roots are the roots of KG(s)H(s) = -1. The system becomes unstable when the root loci cross over to the RHP. This crossing occurs on the imaginary axis, where s = jω (see Fig. 6.45). Hence, the verge of instability (marginal stability) occurs when

KG(jω)H(jω) = -1 = 1·e^(±jπ)

Correspondingly, at the verge of instability, the magnitude and angle of the open-loop gain KG(jω)H(jω) are

|KG(jω)H(jω)| = 1   and   ∠KG(jω)H(jω) = ±π

† Although we do not show it here, the same information is also found in a Nichols plot. For example, at ω = 1, the log magnitude is 20 log 2.6 = 8.3 dB, and the phase at ω = 1 is -130.6°. We plot a point with log magnitude 8.3 dB against phase -130.6° and label it as ω = 1 for identification. We do this for several values of ω from 0 to ∞ and draw a curve through these points to obtain the Nichols plot.


Thus, on the verge of instability, the open-loop transfer function has unity gain and a phase of ±π. In order to understand the significance of these conditions, let us consider the system in Fig. 7.11a, with open-loop transfer function K/[s(s + 2)(s + 4)]. The root locus for this system is illustrated in Fig. 6.45. The loci cross over to the RHP for K > 48. For K < 48, the system is stable. Let us consider the case K = 24. The Bode plot for the K = 24 case is depicted in Fig. 7.11b. Let the frequency where the angle plot crosses -180° be ωp (the phase crossover frequency). Observe that at ωp, the gain is 0.5, or -6 dB. This shows that the gain K will have to double (to the value 48) to have unity gain, which is the verge of instability. For this reason, we say that the system has a gain margin of 2 (6 dB). On the other hand, if ωg is the frequency where the gain is unity, or 0 dB (the gain crossover frequency), then the corresponding open-loop phase is -157.5°. The phase will have to decrease from this value to -180° before the system becomes unstable. Thus, the system has a phase margin of 22.5°. Clearly, the gain and phase margins are measures of the relative stability of the system.

Figure 7.11c shows that the Nyquist plot of KG(s)H(s) crosses the real axis at -0.5 for K = 24. If we double K to a value of 48, the magnitude of every point doubles (but the phase is unchanged). This step expands the Nyquist plot by a factor of 2. Hence, for K = 48, the Nyquist plot lies on the real axis at -1; that is, KG(jω)H(jω) = -1, and the system becomes unstable. For K > 48, the plot crosses and goes beyond the point -1. For unstable systems, the critical point -1 lies inside the curve; that is, the curve encircles the critical point -1. When the Nyquist plot of an open-loop transfer function encircles the critical point -1, the corresponding closed-loop system becomes unstable.
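These margins can also be computed rather than read from the plots. The following NumPy sketch (an addition, not from the text) finds the phase crossover frequency and gain margin for K = 24; the crossover works out analytically to √8 ≈ 2.83 rad/s:

```python
import numpy as np

K = 24.0

def loop_gain(w):
    # open-loop frequency response KG(jw)H(jw) = 24/[jw (jw+2)(jw+4)]
    s = 1j * w
    return K / (s * (s + 2) * (s + 4))

def phase(w):
    # unwrapped open-loop phase: -90 deg from 1/s plus the two pole contributions
    return -np.pi / 2 - np.arctan(w / 2) - np.arctan(w / 4)

lo, hi = 1.0, 10.0                 # phase(lo) > -pi > phase(hi)
for _ in range(60):                # bisection for the phase crossover phase(w) = -pi
    mid = 0.5 * (lo + hi)
    if phase(mid) > -np.pi:
        lo = mid
    else:
        hi = mid

wp = 0.5 * (lo + hi)               # phase crossover frequency, sqrt(8) = 2.8284
gm = 1 / abs(loop_gain(wp))        # gain margin, 2 (i.e., 6 dB)
```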
This statement, roughly speaking, is the well-known Nyquist criterion in simplified form.† For the Nyquist plot in Fig. 7.11c (for K = 24), the gain will have to double before the system becomes unstable. Thus, the gain margin is 2 (6 dB) in this case. In general, if the Nyquist plot crosses the negative real axis at -gm, then the gain margin is 1/gm. Similarly, if -π + θm is the angle at which the Nyquist plot crosses the unit circle, the phase margin is θm. In the present case, θm = 22.5°.

In order to protect a system from instability because of variations in system parameters (or in the environment), the system should be designed with reasonable gain and phase margins. Small margins indicate that the poles of the closed-loop system are in the LHP but too close to the imaginary axis. The transient response of such systems will have a large percent overshoot (PO). On the other hand, very large (positive) gain and phase margins may indicate a sluggish system. Generally, a gain margin higher than 6 dB and a phase margin of about 30° to 60° are considered desirable. Design specifications for transient performance are often given in terms of gain and phase margins.

7.3.2 Transient Performance in Terms of Frequency Response

For a second-order system in Eq. (6.44), we saw the dependence of the transient response (PO, tr, td, and ts) on dominant pole locations. Using this knowledge, we developed in Sec. 6.7 a procedure for designing a control system for a specified transient performance. In order to develop a procedure based on the system's frequency response (rather than its transfer function), we must know the relationship between the frequency response and the transient response of the system

† The Nyquist criterion states as follows: a closed curve Cs in the s plane enclosing Z zeros and P poles of an open-loop transfer function W(s) maps into a closed curve Cw in the W plane encircling the origin of the W plane Z - P times, in the same direction as that of Cs.


Figure 7.12 Frequency response of a second-order all-pole system.

in Eq. (6.44). Figure 7.12 shows the frequency response of a second-order system in Eq. (6.44). The peak frequency response Mp (the maximum value of the amplitude response), which occurs at frequency ωp, indicates the relative stability of the system. A higher peak response generally indicates a smaller ζ (see Fig. 7.6a), which implies poles that are closer to the imaginary axis and hence less relative stability. A higher Mp also means a higher PO. Generally acceptable values of Mp, in practice, range from 1.1 to 1.5. The 3 dB bandwidth ωb of the frequency response indicates the speed of the system. We can show that ωb and tr are inversely proportional. Hence, a higher ωb indicates a smaller tr (faster response). For the second-order system in Eq. (6.44), we have

T(jω) = ωn² / [(jω)² + 2ζωn(jω) + ωn²]

To find Mp, we let d|T(jω)|/dω = 0. From the solution of this equation, we find

ωp = ωn √(1 - 2ζ²)   and   Mp = 1 / (2ζ √(1 - ζ²)),   ζ ≤ 0.707   (7.14)

These equations show that we can determine ζ and ωn from Mp and ωp. Knowledge of ζ and ωn allows us to determine the transient parameters, such as PO, tr, and ts, as seen from Eqs. (6.45), (6.46), and (6.47). Conversely, if we are given certain transient specifications PO, tr, and ts, we can determine the required Mp and ωp. Thus, the problem now reduces to designing a system that has a certain Mp and ωp for the closed-loop frequency response. In practice, we know the open-loop system frequency response. So, the ultimate problem reduces to relating the frequency response of the closed-loop system to that of the open-loop system. To do this, we shall consider the case of a unity feedback system, where the feedback transfer function is H(s) = 1.† In this case, the closed-loop transfer function and frequency responses are

† The results for a unity feedback system can be extended to nonunity feedback systems.
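As an aside, the peak relations of Eq. (7.14) are easy to verify numerically. This NumPy sketch (an addition, with assumed values ζ = 0.2 and ωn = 10) locates the peak of |T(jω)| by brute force and compares it with the formulas:

```python
import numpy as np

zeta, wn = 0.2, 10.0                               # assumed example values
w = np.linspace(0.01, 3 * wn, 300001)
T = wn**2 / ((1j * w)**2 + 2 * zeta * wn * (1j * w) + wn**2)

k = np.argmax(np.abs(T))
wp_num, Mp_num = w[k], np.abs(T)[k]                # peak found numerically

wp_eq = wn * np.sqrt(1 - 2 * zeta**2)              # Eq. (7.14)
Mp_eq = 1 / (2 * zeta * np.sqrt(1 - zeta**2))      # Eq. (7.14)
```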


T(s) = KG(s) / (1 + KG(s))

Let

T(jω) = M e^(jα(ω))   and   KG(jω) = x(ω) + jy(ω)

Therefore,

M e^(jα(ω)) = (x + jy) / (1 + x + jy)

Straightforward manipulation of this equation yields

(x + M²/(M² - 1))² + y² = (M/(M² - 1))²

This is the equation of a circle centered at (-M²/(M² - 1), 0) and of radius M/|M² - 1| in the KG(jω) plane. Figure 7.13a shows circles for various values of M. Because M is the closed-loop system amplitude response, these circles are the contours of constant amplitude response of the closed-loop system. For example, the point A = -2 - j1.85 lies on the circle M = 1.3. This means that, at a frequency where the open-loop transfer function is KG(jω) = -2 - j1.85, the corresponding closed-loop amplitude response is 1.3.† To obtain the closed-loop frequency response, we superimpose on these contours the Nyquist plot of the open-loop transfer function KG(jω). For each point of KG(jω), we can determine the corresponding value of M, the closed-loop amplitude response. From similar contours for constant α (the closed-loop phase response), we can determine the closed-loop phase response. Thus, the complete closed-loop frequency response can be obtained from this plot.

We are primarily interested in finding Mp, the peak value of M, and ωp, the frequency where it occurs. Figure 7.13b indicates how these values may be determined. The circle to which the Nyquist plot is tangent corresponds to Mp, and the frequency at which the Nyquist plot is tangent to this circle is ωp. For the system whose Nyquist plot appears in Fig. 7.13b, Mp = 1.6 and ωp = 2. From these values, we can estimate ζ and ωn and determine the transient parameters PO, tr, and ts.

In designing systems, we first determine the Mp and ωp required to meet the given transient specifications. The Nyquist plot in conjunction with the M circles suggests how these values of Mp and ωp may be realized. In many cases, a mere change in gain K of the open-loop transfer function will suffice. Increasing K expands the Nyquist plot and changes the values of Mp and ωp correspondingly. If this is not enough, we should consider some form of compensation such as lag and/or lead networks.
Using a computer, one can quickly observe the effect of a particular form of compensation on Mp and ωp.
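The M-circle arithmetic is easy to check numerically; a NumPy sketch for the point A = -2 - j1.85 used above (an addition, not from the text):

```python
import numpy as np

KG = -2 - 1.85j                       # the point A on the M = 1.3 contour
M = abs(KG / (1 + KG))                # closed-loop amplitude response, ~1.3

# The point must lie exactly on the M-circle of its own M value:
x, y = KG.real, KG.imag
center = -M**2 / (M**2 - 1)           # circle center on the real axis
radius = M / abs(M**2 - 1)
residual = abs(np.hypot(x - center, y) - radius)   # ~0
```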

7.4 FILTER DESIGN BY PLACEMENT OF POLES AND ZEROS OF H(s)

In this section, we explore the strong dependence of frequency response on the location of poles and zeros of H(s). This dependence points to a simple intuitive procedure for filter design. Systematic filter design procedures to meet given specifications are discussed in later sections.

† We can find similar contours for constant α (the closed-loop phase response).


Figure 7.13 Relationship between open-loop and closed-loop frequency responses.

Figure 7.14 Vector representations of (a) complex numbers and (b) factors of H(s).

7.4.1 Dependence of Frequency Response on Poles and Zeros of H(s)

The frequency response of a system is basically information about the filtering capability of the system. A system transfer function can be expressed as

H(s) = b0 (s - z1)(s - z2)···(s - zN) / [(s - λ1)(s - λ2)···(s - λN)]

where z1, z2, ..., zN and λ1, λ2, ..., λN are the zeros and poles, respectively, of H(s). Now the value of the transfer function H(s) at some frequency s = p is

H(s)|s=p = b0 (p - z1)(p - z2)···(p - zN) / [(p - λ1)(p - λ2)···(p - λN)]   (7.15)

This equation consists of factors of the form p - zi and p - λi. The factor p - zi is a complex number represented by a vector drawn from the point zi to the point p in the complex plane, as illustrated in Fig. 7.14a. The length of this line segment is |p - zi|, the magnitude of p - zi. The angle of this directed line segment (with the horizontal axis) is ∠(p - zi). To compute H(s) at s = p, we draw line segments from all poles and zeros of H(s) to the point p, as shown in Fig. 7.14b. The vector connecting a zero zi to the point p is p - zi. Let the length of this vector be ri, and let its angle with the horizontal axis be φi. Then p - zi = ri e^(jφi). Similarly, the vector connecting a pole λi to the point p is p - λi = di e^(jθi), where di and θi are the length and the angle (with the horizontal axis), respectively, of the vector p - λi. Now from Eq. (7.15) it follows that

H(s)|s=p = b0 (r1 e^(jφ1))(r2 e^(jφ2))···(rN e^(jφN)) / [(d1 e^(jθ1))(d2 e^(jθ2))···(dN e^(jθN))]
         = b0 [r1 r2 ··· rN / (d1 d2 ··· dN)] e^(j[(φ1 + φ2 + ··· + φN) - (θ1 + θ2 + ··· + θN)])

Therefore,

|H(s)|s=p = b0 (r1 r2 ··· rN) / (d1 d2 ··· dN)
          = b0 × (product of distances of zeros to p) / (product of distances of poles to p)   (7.16)
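Eq. (7.16) can be exercised numerically. This NumPy sketch (an addition; the transfer function here is an assumed example) compares the product-of-distances evaluation with direct evaluation:

```python
import numpy as np

# Assumed example: H(s) = (s + 1)/((s + 2)(s + 3)), so b0 = 1
b0 = 1.0
zeros = np.array([-1.0])
poles = np.array([-2.0, -3.0])

p = 1j * 4.0                                      # evaluate at s = j4
# Eq. (7.16): product of distances to zeros over product of distances to poles
mag_products = b0 * np.prod(np.abs(p - zeros)) / np.prod(np.abs(p - poles))
# direct evaluation for comparison
mag_direct = abs(b0 * (p + 1) / ((p + 2) * (p + 3)))
```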

and

∠H(s)|s=p = (φ1 + φ2 + ··· + φN) - (θ1 + θ2 + ··· + θN)

[...]

Step 1: Determine the order N

>> omegap = 10; Ghatp = -2; omegas = 20; Ghats = -20;
>> N = ceil(log((10^(-Ghats/10)-1)/(10^(-Ghatp/10)-1))/ ...
   (2*log(omegas/omegap)))
N = 4

Step 2: Determine the half-power cutoff frequency ωc

Next, the range of possible ωc is determined using Eqs. (7.29) and (7.30).

>> omegac = [omegap/(10^(-Ghatp/10)-1).^(1/(2*N)), ...
   omegas/(10^(-Ghats/10)-1).^(1/(2*N))]
omegac = 10.6934 11.2610

Choosing ωc = 10.6934 rad/s, which exactly meets passband specifications and exceeds stopband specifications, provides no margin for error on the passband side. Similarly, choosing ωc = 11.2610 rad/s, which just meets stopband requirements and surpasses passband requirements, provides no margin for error on the stopband side. As in Ex. 7.7, let us choose the first case, which exactly meets passband specifications.

>> omegac = omegac(1)
omegac = 10.6934
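The same order and cutoff computations translate directly to Python; a NumPy sketch (an addition) of the two MATLAB steps above:

```python
import numpy as np

wp_, ws_ = 10.0, 20.0        # passband and stopband edge frequencies
Gp, Gs = -2.0, -20.0         # gains in dB at those edges

# Butterworth filter order
N = int(np.ceil(np.log10((10**(-Gs / 10) - 1) / (10**(-Gp / 10) - 1))
                / (2 * np.log10(ws_ / wp_))))

wc_pass = wp_ / (10**(-Gp / 10) - 1)**(1 / (2 * N))   # exactly meets passband spec
wc_stop = ws_ / (10**(-Gs / 10) - 1)**(1 / (2 * N))   # exactly meets stopband spec
```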

Step 3: Determine the normalized transfer function ℋ(s)

By computing the Butterworth pole locations [see Eq. (7.22)] and then converting these poles to expanded polynomial form, MATLAB conveniently computes the desired transfer function coefficients.

>> k = 1:N; pk = exp(1j*pi/(2*N)*(2*k+N-1)); denHnorm = real(poly(pk))
denHnorm = 1.0000 2.6131 3.4142 2.6131 1.0000

Using this result with Eq. (7.23) confirms the normalized transfer function as

ℋ(s) = 1 / (s⁴ + 2.6131s³ + 3.4142s² + 2.6131s + 1)

Step 4: Determine the final transfer function H(s)

The desired transfer function with ωc = 10.693 is obtained by replacing s with s/10.693 in the normalized transfer function ℋ(s). This is equivalent to scaling the poles by ωc and the numerator of H(s) by ωc^N.


>> numH = omegac^N, denH = real(poly(pk*omegac))
numH = 13075.60
denH = 1.00 27.94 390.41 3195.26 13075.60

This confirms the transfer function as

H(s) = 13075.60 / (s⁴ + 27.94s³ + 390.41s² + 3195.26s + 13075.60)
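A quick numerical check (a Python sketch, an addition) confirms that this transfer function behaves as designed at the band edges:

```python
import numpy as np

numH = [13075.60]
denH = [1.0, 27.94, 390.41, 3195.26, 13075.60]

def H(w):
    # evaluate H(jw) from the numerator and denominator polynomials
    return np.polyval(numH, 1j * w) / np.polyval(denH, 1j * w)

gain_pass = 20 * np.log10(abs(H(10.0)))   # -2 dB: passband spec met exactly
gain_stop = 20 * np.log10(abs(H(20.0)))   # about -21.8 dB: stopband spec exceeded
```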

The corresponding magnitude response, plotted using MATLAB, is shown in Fig. 7.26. Following our design choice, notice that the filter exactly meets passband requirements and exceeds stopband requirements.

>> omega = linspace(0,25,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 10 20])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1])
>> axis([0 25 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');


Figure 7.26 Magnitude response of the Butterworth lowpass filter for Ex. 7.8.

AN ALTERNATE APPROACH

We can also design this filter using the built-in MATLAB functions buttord and butter. We first use buttord to determine the filter order and ωc.

>> [N,omegac] = buttord(omegap,omegas,-Ghatp,-Ghats,'s')
N = 4
omegac = 11.26096

Notice that, in using buttord, we cannot specify whether to exactly meet passband specifications, exactly meet stopband specifications, or something in between. Indeed, as we can tell from the value of ωc computed, the buttord command automatically decides to design the filter to exactly meet stopband (rather than passband) requirements. This lack of choice is a limitation of using the canned function buttord.


Next, we use the butter command to determine the filter transfer function.

>> [numH,denH] = butter(N,omegac,'s')
numH = 0 0 0 0 16080.61
denH = 1.00 29.43 432.95 3731.53 16080.61

The corresponding transfer function is thus

H(s) = 16080.61 / (s⁴ + 29.43s³ + 432.95s² + 3731.53s + 16080.61)
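SciPy provides analogous canned functions. The following Python sketch is an assumption (SciPy's choice of the returned natural frequency Wn may differ from MATLAB's), but the filter order and the 3 dB property at Wn are easy to verify:

```python
import numpy as np
from scipy import signal

# analog design: passband edge 10, stopband edge 20, 2 dB and 20 dB
N, Wn = signal.buttord(10, 20, 2, 20, analog=True)
b, a = signal.butter(N, Wn, btype='low', analog=True)

# Any Butterworth filter is 3 dB down (gain 1/sqrt(2)) at its natural frequency Wn.
mag_at_Wn = abs(np.polyval(b, 1j * Wn) / np.polyval(a, 1j * Wn))
```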

As shown in Fig. 7.27, the corresponding magnitude response exactly meets stopband specifications and exceeds passband specifications, as expected.

>> omega = linspace(0,25,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 10 20])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1])
>> axis([0 25 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');


Figure 7.27 Alternate magnitude response for Ex. 7.8.



7.6 CHEBYSHEV FILTERS

The amplitude response of a normalized Chebyshev lowpass filter is given by

|ℋ(jω)| = 1 / √(1 + ε² CN²(ω))   (7.31)

where CN(ω), the Nth-order Chebyshev polynomial, is given by

CN(ω) = cos(N cos⁻¹ ω)   or   CN(ω) = cosh(N cosh⁻¹ ω)   (7.32)

The first form is convenient for computing CN(ω) for |ω| < 1, while the second form is convenient for computing CN(ω) for |ω| > 1. We can show (see [3]) that CN(ω) is also expressible in polynomial form, as shown in Table 7.3 for N = 0 to 10.†

The normalized Chebyshev lowpass amplitude response [Eq. (7.31)] is depicted in Fig. 7.28 for N = 6 and N = 7. We make the following general observations:

1. The Chebyshev amplitude response has ripples in the passband and is smooth (monotonic) in the stopband. The passband is 0 ≤ ω ≤ 1, and there is a total of N maxima and minima over the passband 0 ≤ ω ≤ 1.

2. From Table 7.3, we observe that CN²(0) is 0 for odd N and 1 for even N.

TABLE 7.3 Chebyshev Polynomials

N    CN(ω)
0    1
1    ω
2    2ω² - 1
3    4ω³ - 3ω
4    8ω⁴ - 8ω² + 1
5    16ω⁵ - 20ω³ + 5ω
6    32ω⁶ - 48ω⁴ + 18ω² - 1
7    64ω⁷ - 112ω⁵ + 56ω³ - 7ω
8    128ω⁸ - 256ω⁶ + 160ω⁴ - 32ω² + 1
9    256ω⁹ - 576ω⁷ + 432ω⁵ - 120ω³ + 9ω
10   512ω¹⁰ - 1280ω⁸ + 1120ω⁶ - 400ω⁴ + 50ω² - 1

† The Chebyshev polynomial CN(ω) has the property [3]

CN(ω) = 2ω CN-1(ω) - CN-2(ω)   N ≥ 2

Thus, knowing that C0(ω) = 1 and C1(ω) = ω, we can (recursively) construct CN(ω) for any value of N. For example, C2(ω) = 2ωC1(ω) - C0(ω) = 2ω² - 1, and so on.


Figure 7.28 Amplitude responses of normalized sixth- and seventh-order lowpass Chebyshev filters.

Therefore, the dc gain is

|ℋ(0)| = 1 for N odd,   1/√(1 + ε²) for N even   (7.33)

3. The parameter ε controls the passband ripple. In the passband, the ratio r of the maximum gain to the minimum gain is r = √(1 + ε²). This ratio, specified in decibels, is r̂ = 10 log(1 + ε²), so that

ε² = 10^(r̂/10) - 1   (7.34)

Because all the ripples in the passband are of equal height, Chebyshev polynomials CN(ω) are known as equal-ripple functions.

4. Ripple is present only over the passband 0 ≤ ω ≤ 1. At ω = 1, the amplitude response is 1/√(1 + ε²) = 1/r. For ω > 1, the gain decreases monotonically.

5. For Chebyshev filters, the ripple parameter r̂ [dB] takes the place of Ĝp (the minimum gain in the passband). For example, r̂ = 2 dB specifies that gain variations of more than 2 dB cannot be tolerated in the passband. In a Butterworth filter, Ĝp = -2 dB means the same thing.

6. If we reduce the ripple, the passband behavior improves, but it does so at the cost of stopband behavior. As r̂ is decreased (ε is reduced), the gain in the stopband increases, and vice versa. Hence, there is a tradeoff between the allowable passband ripple and the desired attenuation in the stopband. Note that the extreme case ε = 0 yields zero ripple, but the filter now becomes an allpass filter, as seen from Eq. (7.31) by letting ε = 0.


7. Finally, the Chebyshev filter has a sharper cutoff (smaller transition band) than the same-order Butterworth filter, but this is achieved at the expense of inferior passband behavior (rippling).†

DETERMINATION OF CHEBYSHEV FILTER ORDER N

For a normalized Chebyshev filter, the gain Ĝ in dB [see Eq. (7.31)] is

Ĝ = -10 log[1 + ε² CN²(ω)]

The gain is Ĝs at ωs. Therefore,

Ĝs = -10 log[1 + ε² CN²(ωs)]

or

CN(ωs) = [ (10^(-Ĝs/10) - 1) / ε² ]^(1/2)

The use of Eqs. (7.32) and (7.34) in the above equation yields

N = [1/cosh⁻¹(ωs)] cosh⁻¹ [ (10^(-Ĝs/10) - 1) / (10^(r̂/10) - 1) ]^(1/2)

Note that these equations are for normalized filters, where ωp = 1. For a general case, we replace ωs with ωs/ωp to obtain

N = [1/cosh⁻¹(ωs/ωp)] cosh⁻¹ [ (10^(-Ĝs/10) - 1) / (10^(r̂/10) - 1) ]^(1/2)   (7.35)

CHEBYSHEV POLE LOCATIONS AND NORMALIZED TRANSFER FUNCTION

We could follow the procedure of the Butterworth filter to obtain the pole locations of the Chebyshev filter. The procedure is straightforward but tedious and does not yield any special insight into our development. Butterworth filter poles lie on a semicircle. We can show that the poles of an Nth-order normalized Chebyshev filter lie on a semiellipse of major and minor semiaxes cosh x and sinh x, respectively, where (see [3])

x = (1/N) sinh⁻¹(1/ε)   (7.36)

† We can show (see [2]) that at higher (stopband) frequencies, the Chebyshev filter gain is smaller than the comparable Butterworth filter gain by about 6(N - 1) dB.


The Chebyshev filter poles are

pk = -sin[(2k - 1)π/(2N)] sinh x + j cos[(2k - 1)π/(2N)] cosh x,   k = 1, 2, ..., N   (7.37)

The geometrical construction for determining the pole locations is depicted in Fig. 7.29 for N = 3. A similar procedure applies to any N; it consists of drawing two semicircles of radii a = sinh x and b = cosh x. We now draw radial lines along the corresponding Butterworth angles and locate the Nth-order Butterworth poles (shown by crosses) on the two circles. The location of the kth Chebyshev pole is the intersection of the horizontal projection and the vertical projection from the corresponding kth Butterworth poles on the outer and inner circles, respectively.

The transfer function ℋ(s) of a normalized Nth-order lowpass Chebyshev filter is

ℋ(s) = KN / [(s - p1)(s - p2)···(s - pN)] = KN / CN'(s) = KN / (s^N + aN-1 s^(N-1) + ··· + a1 s + a0)   (7.38)

The constant KN is selected following Eq. (7.33) to provide the proper dc gain:

KN = a0 for N odd,   a0/√(1 + ε²) for N even   (7.39)

The design procedure is simplified by ready-made tables of the polynomial CN'(s) in Eq. (7.38). Table 7.4 lists the coefficients a0, a1, a2, ..., aN-1 of the polynomial CN'(s) in Eq. (7.38) for r̂ = 0.5, 1, 2, and 3 dB ripples, corresponding to the values of ε = 0.3493, 0.5088, 0.7648, and 0.9976, respectively. Computer tools, such as MATLAB, make Chebyshev filter design even simpler, as the next example demonstrates.

Figure 7.29 Poles of a normalized third-order lowpass Chebyshev filter and their connection to Butterworth pole locations.

TABLE 7.4

Coefficients of Normalized Chebyshev Denominator Polynomials CN'(s) = s^N + aN-1 s^(N-1) + ··· + a1 s + a0

(For each N, the coefficients are listed in the order aN-1, aN-2, ..., a1, a0.)

0.1 dB of ripple (r̂ = 0.1):
N = 1: 6.552203
N = 2: 2.372356, 3.314037
N = 3: 1.938811, 2.629495, 1.638051
N = 4: 1.803773, 2.626798, 2.025501, 0.828509
N = 5: 1.743963, 2.770704, 2.396959, 1.435558, 0.409513
N = 6: 1.712166, 2.965756, 2.779050, 2.047841, 0.901760, 0.207127
N = 7: 1.693224, 3.183504, 3.169246, 2.705144, 1.482934, 0.561786, 0.102378

0.5 dB of ripple (r̂ = 0.5):
N = 1: 2.862775
N = 2: 1.425625, 1.516203
N = 3: 1.252913, 1.534895, 0.715694
N = 4: 1.197386, 1.716866, 1.025455, 0.379051
N = 5: 1.172491, 1.937367, 1.309575, 0.752518, 0.178923
N = 6: 1.159176, 2.171845, 1.589764, 1.171861, 0.432367, 0.094763
N = 7: 1.151218, 2.412651, 1.869408, 1.647903, 0.755651, 0.282072, 0.044731

1 dB of ripple (r̂ = 1):
N = 1: 1.965227
N = 2: 1.097734, 1.102510
N = 3: 0.988341, 1.238409, 0.491307
N = 4: 0.952811, 1.453925, 0.742619, 0.275628
N = 5: 0.936820, 1.688816, 0.974396, 0.580534, 0.122827
N = 6: 0.928251, 1.930825, 1.202140, 0.939346, 0.307081, 0.068907
N = 7: 0.923123, 2.176078, 1.428794, 1.357545, 0.548620, 0.213671, 0.030707

2 dB of ripple (r̂ = 2):
N = 1: 1.307560
N = 2: 0.803816, 0.823060
N = 3: 0.737822, 1.022190, 0.326890
N = 4: 0.716215, 1.256482, 0.516798, 0.205765
N = 5: 0.706461, 1.499543, 0.693477, 0.459349, 0.081723
N = 6: 0.701226, 1.745859, 0.867015, 0.771462, 0.210271, 0.051441
N = 7: 0.698091, 1.993665, 1.039546, 1.144597, 0.382638, 0.166126, 0.020431

3 dB of ripple (r̂ = 3):
N = 1: 1.002377
N = 2: 0.644900, 0.707948
N = 3: 0.597240, 0.928348, 0.250594
N = 4: 0.581580, 1.169118, 0.404768, 0.176987
N = 5: 0.574500, 1.415025, 0.548937, 0.407966, 0.062649
N = 6: 0.570698, 1.662848, 0.690610, 0.699098, 0.163430, 0.044247
N = 7: 0.568420, 1.911551, 0.831441, 1.051845, 0.300017, 0.146153, 0.015662


EXAMPLE 7.9 Chebyshev Lowpass Filter Design

Design a Chebyshev lowpass filter to satisfy the criteria shown in Fig. 7.30 and summarized as: r̂ = 2 dB of ripple over the passband 0 ≤ ω ≤ 10 (ωp = 10) and Ĝs ≤ -20 dB for ω > 16.5 (ωs = 16.5).


Figure 7.30 Amplitude response of the Chebyshev lowpass filter for Ex. 7.9.

Observe that the specifications are the same as those in Ex. 7.7, except for the transition band. Here, the transition band is from 10 to 16.5, whereas in Ex. 7.7 it is 10 to 20. Despite this more stringent requirement, we shall find that the Chebyshev design requires a lower-order filter than the Butterworth filter found in Ex. 7.7.

Step 1: Determine the order N

According to Eq. (7.35), we have

N = [1/cosh⁻¹(1.65)] cosh⁻¹ [ (10² - 1) / (10^0.2 - 1) ]^(1/2) = 2.999

Because N must be an integer, we select N = 3. Observe that even with more stringent requirements, the Chebyshev filter requires only N = 3. The passband behavior of the Butterworth filter, however, is superior (maximally flat at w = 0) compared to that of the Chebyshev, which has rippled passband characteristics. Our calculation of N is easily confirmed in MATIAB.

>> omegap = 10; rhat = 2; omegas = 16.5; Ghats = -20;
>> N = ceil(acosh(sqrt((10^(-Ghats/10)-1)/(10^(rhat/10)-1)))/acosh(omegas/omegap))
N = 3
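As a language-neutral cross-check, Eq. (7.35) can also be evaluated with nothing but Python's standard math module (the function name cheb1_order is ours, not part of the text or of MATLAB):

```python
import math

def cheb1_order(wp, ws, rhat_db, Ghat_s_db):
    # Eq. (7.35): N = acosh(sqrt((10^(-Gs/10) - 1)/(10^(r/10) - 1))) / acosh(ws/wp)
    num = math.acosh(math.sqrt((10**(-Ghat_s_db/10) - 1) /
                               (10**(rhat_db/10) - 1)))
    return num / math.acosh(ws / wp)

N_exact = cheb1_order(wp=10, ws=16.5, rhat_db=2, Ghat_s_db=-20)  # ~2.999
N = math.ceil(N_exact)  # round up to the next integer order: 3
```

The exact value falls just below 3, which is why the more stringent transition band still yields a third-order design.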


CHAPTER 7 FREQUENCY RESPONSE AND ANALOG FILTERS

Step 2: Determine the normalized transfer function ℋ(s)
We may use Table 7.4 to determine ℋ(s). For N = 3 and r̂ = 2 dB, we read the coefficients of the denominator polynomial of ℋ(s) as a0 = 0.3269, a1 = 1.0222, and a2 = 0.7378. Also, from Eq. (7.39), for odd N the numerator is given by the constant KN = a0 = 0.3269. Therefore,

ℋ(s) = 0.3269 / (s^3 + 0.7378s^2 + 1.0222s + 0.3269)     (7.40)

Because there are infinitely many possible combinations of N and r̂, Table 7.4 can list values of the denominator coefficients only for values of r̂ in discrete increments [3]. For values of N and r̂ not listed in the table, we can compute the pole locations from Eq. (7.37). For the sake of demonstration, we now recompute ℋ(s) using this method. In this case, the value of ε is [see Eq. (7.34)]

ε = sqrt(10^(r̂/10) - 1) = sqrt(10^0.2 - 1) = 0.7647

From Eq. (7.36),

x = (1/N) sinh^-1(1/ε) = (1/3) sinh^-1(1.3077) = 0.3610

Now from Eq. (7.37), we have p1 = -0.1844 + j0.9231, p2 = -0.3689, and p3 = -0.1844 - j0.9231. Therefore,

ℋ(s) = KN / [(s + 0.3689)(s + 0.1844 + j0.9231)(s + 0.1844 - j0.9231)]
     = 0.3269 / (s^3 + 0.7378s^2 + 1.0222s + 0.3269)

This confirms our earlier result of Eq. (7.40). Of course, all of these calculations are easily performed in MATLAB as well.

>> epsilon = sqrt(10^(rhat/10)-1); x = asinh(1/epsilon)/N; k = 1:N;
>> pk = -sin((2*k-1)*pi/(2*N))*sinh(x)+1j*cos((2*k-1)*pi/(2*N))*cosh(x);
>> denHnorm = real(poly(pk))
denHnorm = 1.0000    0.7378    1.0222    0.3269
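The pole recipe of Eqs. (7.36) and (7.37) is equally easy to mirror in plain Python as a cross-check (the helper poly_from_roots simply expands the product of (s - p_k) factors; both function names are ours):

```python
import math

def cheb1_poles(N, rhat_db):
    # Eqs. (7.34), (7.36), (7.37): epsilon, x, then the N left-half-plane poles
    eps = math.sqrt(10**(rhat_db/10) - 1)
    x = math.asinh(1/eps) / N
    poles = []
    for k in range(1, N + 1):
        th = (2*k - 1)*math.pi/(2*N)
        poles.append(complex(-math.sin(th)*math.sinh(x),
                             math.cos(th)*math.cosh(x)))
    return poles

def poly_from_roots(roots):
    # Expand prod(s - r) into coefficients, highest power first (like MATLAB's poly)
    c = [1 + 0j]
    for r in roots:
        c = [c[0]] + [c[i] - r*c[i-1] for i in range(1, len(c))] + [-r*c[-1]]
    return c

pk = cheb1_poles(3, 2)
den = [coef.real for coef in poly_from_roots(pk)]  # ~[1, 0.7378, 1.0222, 0.3269]
```

The real parts of the expanded coefficients reproduce the Table 7.4 row used above.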

Step 3: Determine the final transfer function H(s)
Recall that ωp = 1 for the normalized transfer function. For ωp = 10, the desired transfer function H(s) can be obtained from the normalized transfer function ℋ(s) by replacing s with s/ωp = s/10. Therefore,

H(s) = 0.3269 / [(s/10 + 0.3689)(s/10 + 0.1844 + j0.9231)(s/10 + 0.1844 - j0.9231)]
     = 326.9 / (s^3 + 7.378s^2 + 102.22s + 326.9)

These same coefficients for H(s) are also easily computed using MATLAB.



>> numH = omegap^N*denHnorm(end), denH = real(poly(pk*omegap))
numH = 326.8901
denH = 1.0000    7.3782    102.2190    326.8901

Next, we turn our attention to the amplitude response of the filter. In the present case, r̂ = 2 dB means that [see Eq. (7.34)]

ε^2 = 10^(r̂/10) - 1 = 10^0.2 - 1 = 0.5849

The frequency response is [see Eq. (7.31) and Table 7.3]

|ℋ(jω)| = 1 / sqrt(1 + 0.5849(4ω^3 - 3ω)^2)

This is the normalized filter amplitude response. The actual filter response |H(jω)| is obtained by replacing ω with ω/ωp, that is, with ω/10, in ℋ(jω). Thus,

|H(jω)| = 1 / sqrt(1 + 0.5849[4(ω/10)^3 - 3(ω/10)]^2)
        = 10^3 / sqrt(9.3584ω^6 - 1403.76ω^4 + 52641ω^2 + 10^6)
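This closed-form response is easy to sanity-check numerically. A short Python sketch (H_mag is our own helper name, not from the text):

```python
import math

def H_mag(w, wp=10.0, eps_sq=0.5849):
    # |H(jw)| = 1/sqrt(1 + eps^2 * C3(w/wp)^2), with C3(x) = 4x^3 - 3x
    x = w/wp
    c3 = 4*x**3 - 3*x
    return 1/math.sqrt(1 + eps_sq*c3**2)

# At the passband edge w = 10 the gain is 10^(-2/20) ~ 0.7943 (exactly -2 dB);
# at w = 16.5 the gain does not exceed 0.1 (-20 dB), so both specs are met.
```

Evaluating H_mag at 10 and 16.5 confirms that the design exactly meets the passband ripple and satisfies the stopband requirement.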

Observe that despite more stringent specifications than those in Ex. 7.7, the Chebyshev filter requires N = 3, compared with the Butterworth filter in Ex. 7.7, which requires N = 4. Figure 7.30 shows the amplitude response. We can easily generate the filter's magnitude response using MATLAB. The result, shown in Fig. 7.31, clearly confirms the response computed earlier and shown in Fig. 7.30.

>> omega = linspace(0,20,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 omegap omegas])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(-rhat/20) 1])
>> axis([0 20 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');

AN ALTERNATE APPROACH

We can also design this filter using the built-in MATLAB functions cheb1ord and cheby1. We first use cheb1ord to determine filter order and passband frequency ωp.

>> [N,omegap] = cheb1ord(omegap,omegas,rhat,-Ghats,'s')
N = 3
omegap = 10

Next, we use the cheby1 command to determine the filter transfer function.

>> [numH,denH] = cheby1(N,rhat,omegap,'s')
numH = 0    0    0    326.8901
denH = 1.0000    7.3782    102.2190    326.8901

This result agrees with our earlier calculations.

Figure 7.31 Magnitude response of the Chebyshev lowpass filter for Ex. 7.9.

DRILL 7.4 Chebyshev Lowpass Filter Design
Determine the order N and the transfer function H(s) of a Chebyshev filter that meets the following specifications: r̂ = 2 dB, Ĝs = -20 dB, ωp = 10 rad/s, and ωs = 28 rad/s.

7.6.1 Inverse Chebyshev Filters

The passband behavior of Chebyshev filters exhibits ripples, and the stopband is smooth. Generally, passband behavior is more important, and we would prefer that the passband have a smooth response. However, ripples can be tolerated in the stopband as long as they meet given specifications. The inverse Chebyshev filter does exactly that. Both Butterworth and Chebyshev filters have finite poles and no finite zeros; an inverse Chebyshev filter has both finite poles and finite zeros. It exhibits a maximally flat passband response and an equal-ripple stopband response. The inverse Chebyshev response can be obtained from the Chebyshev in two steps as follows. Let ℋc(ω) be the Chebyshev amplitude response given in Eq. (7.31). In the first step, we subtract |ℋc(ω)|^2 from 1 to obtain a highpass filter characteristic whose stopband (from 0 to 1) has ripples and whose passband (from 1 to ∞) is smooth. In the second step, we interchange the stopband and passband by a frequency transformation in which ω is replaced by 1/ω. This step inverts the passband from the range 1 to ∞ to the range 0 to 1, and the stopband now goes from 1 to ∞. Moreover, the passband is now smooth and the stopband has ripples. This is precisely the inverse Chebyshev amplitude response, given as

|ℋi(ω)|^2 = 1 - |ℋc(1/ω)|^2 = ε^2 CN^2(1/ω) / (1 + ε^2 CN^2(1/ω))

where CN(ω) are the Nth-order Chebyshev polynomials listed in Table 7.3.
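The two-step construction can be verified numerically for N = 3. A Python sketch (function names are ours; ε here is chosen so the maximum stopband gain is -20 dB):

```python
import math

def C3(x):
    # Third-order Chebyshev polynomial, C3(x) = 4x^3 - 3x (valid for all x)
    return 4*x**3 - 3*x

def inv_cheb_gain(w, eps):
    # |Hi(w)|^2 = eps^2*C3(1/w)^2 / (1 + eps^2*C3(1/w)^2)
    c = C3(1.0/w)
    return math.sqrt(eps**2*c**2 / (1 + eps**2*c**2))

eps = 1/math.sqrt(99)  # eps^2/(1+eps^2) = 0.01, i.e., a -20 dB stopband ripple
# The gain approaches 1 deep in the passband (w << 1), and because |C3| <= 1 on
# [0, 1], the gain never exceeds 0.1 anywhere in the stopband w >= 1.
```

This exhibits exactly the maximally flat passband and equiripple stopband described above.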


Inverse Chebyshev filters are preferable to Chebyshev filters in many ways. For example, the passband behavior, especially for small ω, is better for the inverse Chebyshev than for the Chebyshev or even for the Butterworth filter of the same order. An inverse Chebyshev filter also has the smallest transition band of the three filters. Moreover, the phase function (or time-delay) characteristic of an inverse Chebyshev filter is better than that of a Chebyshev filter [2]. Both Chebyshev and inverse Chebyshev filters require the same order N to meet a given set of specifications [3]. Although an inverse Chebyshev realization requires more elements, and thus is less economical, than a Chebyshev filter, it requires fewer elements than a Butterworth filter of comparable performance. Rather than give the complete development of inverse Chebyshev filters, we shall demonstrate inverse Chebyshev filter design using functions from MATLAB's Signal Processing Toolbox.

EXAMPLE 7.10 Inverse Chebyshev Lowpass Filter Design
Design an inverse Chebyshev lowpass filter to satisfy the criteria of Ex. 7.9, summarized as: r̂ ≤ 2 dB over a passband 0 ≤ ω ≤ 10 (ωp = 10) and Ĝs ≤ -20 dB for ω > 16.5 (ωs = 16.5).
We design this filter using the MATLAB functions cheb2ord and cheby2. We first use cheb2ord to determine filter order and stopband frequency ωs.

>> omegap = 10; rhat = 2; omegas = 16.5; Ghats = -20;
>> [N,omegas] = cheb2ord(omegap,omegas,rhat,-Ghats,'s')
N = 3
omegas = 16.4972

Next, we use the cheby2 command to determine the filter transfer function.

>> [numH,denH] = cheby2(N,-Ghats,omegas,'s')
numH = 0    4.97    0    1804.97
denH = 1.00    23.18    256.40    1804.97

The filter's transfer function is thus

H(s) = (4.97s^2 + 1804.97) / (s^3 + 23.18s^2 + 256.4s + 1804.97)

We next use MATLAB to plot the filter's magnitude response. As shown in Fig. 7.32, the filter displays the characteristics of an inverse Chebyshev filter (monotonic passband, equiripple stopband) and also meets design specifications.

>> omega = linspace(0,20,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 omegap omegas])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(-rhat/20) 1])
>> axis([0 20 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');


Figure 7.32 Magnitude response of the inverse Chebyshev lowpass filter for Ex. 7.10.

7.6.2 Elliptic Filters

Recall our discussion in Sec. 7.4 that placing a zero on the imaginary axis (at s = jω) causes the gain |H(jω)| to go to zero (infinite attenuation). We can realize a sharper cutoff characteristic by placing a zero (or zeros) near ω = ωs. Butterworth and Chebyshev filters do not make use of zeros in ℋ(s); both inverse Chebyshev and elliptic filters do. This is part of the reason for their superior response characteristics. A Chebyshev filter has a smaller transition band compared to that of a Butterworth filter because a Chebyshev filter allows rippling in the passband (or stopband). If we allow ripple in both the passband and the stopband, we can achieve a further reduction in the transition band. Such is the case with elliptic (or Cauer) filters, whose normalized amplitude response is given by

|ℋ(jω)| = 1 / sqrt(1 + ε^2 RN^2(ω, ξ))

where RN(ω, ξ) is an Nth-order elliptic rational function with discrimination factor ξ.† Using the properties of RN (see [2]), we see that an elliptic filter's equiripple passband has magnitude limits of 1 and 1/sqrt(1 + ε^2). The parameter ε controls the passband ripple, and the gain at the (normalized) passband frequency (ωp = 1) is 1/sqrt(1 + ε^2). Similarly, the filter's equiripple stopband has magnitude limits of 1/sqrt(1 + ε^2 ξ^2) and 0. Thus, the discrimination factor ξ is directly related to the filter's maximum stopband gain. Like Chebyshev filters, elliptic filters are normalized in terms of the passband edge ωp.

† Elliptic rational functions are sometimes referred to as Chebyshev rational functions, a name also given to an entirely different class of functions as well. We use the former convention to minimize potential confusion. Additionally, some sources express RN in terms of a selectivity factor rather than the discrimination factor ξ; the discrimination factor provides more insight for filter design applications.


If we can tolerate both passband and stopband ripple, an elliptic filter is the most efficient type of filter. For a given transition band, it provides the largest ratio of passband gain to stopband gain; for a given ratio of passband to stopband gain, it requires the smallest transition band. In compensation, however, we must accept ripple in both the passband and the stopband. In addition, because of the zeros of ℋ(s), an elliptic filter response decays at a slower rate at frequencies higher than ωs. For instance, the amplitude response of a third-order elliptic filter decays at a rate of only -6 dB/octave at very high frequencies. This is because the filter has two zeros and three poles. The two zeros increase the amplitude response at a rate of 12 dB/octave, and the three poles reduce it at a rate of -18 dB/octave, giving a net decay rate of -6 dB/octave. Third-order Butterworth and Chebyshev filters have no zeros in ℋ(s), so their amplitude responses decay at a rate of -18 dB/octave. However, the rate of decay of the amplitude response is seldom important as long as we meet our specification of a given Ĝs at ωs. Calculation of the pole-zero locations of elliptic filters is much more complicated than for Butterworth or even Chebyshev filters. Fortunately, this task is greatly simplified by computer programs, such as MATLAB, as the next example demonstrates.†

EXAMPLE 7.11 Elliptic Lowpass Filter Design
Design an elliptic lowpass filter to satisfy the criteria of Ex. 7.9, summarized as: r̂ ≤ 2 dB over a passband 0 ≤ ω ≤ 10 (ωp = 10) and Ĝs ≤ -20 dB for ω > 16.5 (ωs = 16.5).
We design this filter using the MATLAB functions ellipord and ellip. We first use ellipord to determine filter order and passband frequency ωp.

>> omegap = 10; rhat = 2; omegas = 16.5; Ghats = -20;
>> [N,omegap] = ellipord(omegap,omegas,rhat,-Ghats,'s')
N = 3
omegap = 10

Next, we use the ellip command to determine the filter transfer function.

>> [numH,denH] = ellip(N,rhat,-Ghats,omegap,'s')
numH = 0    2.7882    0    481.1613
denH = 1.0000    7.2610    106.9988    481.1613

The filter's transfer function is thus

H(s) = (2.7882s^2 + 481.1613) / (s^3 + 7.2610s^2 + 106.9988s + 481.1613)

† Extensive ready-made design tables, found in resources such as [4], are still available for those with a sense of nostalgia and old-school determination.

We next use MATLAB to plot the filter's magnitude response. As shown in Fig. 7.33, the filter displays the characteristics of an elliptic filter (equiripple passband and stopband) and also meets design specifications.

>> omega = linspace(0,20,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 omegap omegas])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(-rhat/20) 1])
>> axis([0 20 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');

Figure 7.33 Magnitude response of the elliptic lowpass filter for Ex. 7.11.

7.7 FREQUENCY TRANSFORMATIONS

Earlier we saw how a lowpass filter transfer function of arbitrary specifications can be obtained from a normalized lowpass filter using frequency scaling. Using certain frequency transformations, we can obtain the transfer functions of highpass, bandpass, and bandstop filters from a basic lowpass filter (the prototype filter) design. For example, a highpass filter transfer function can be obtained from a prototype lowpass filter transfer function (with normalized passband frequency) by replacing s with ωp/s. Similar transformations allow us to design bandpass and bandstop filters from appropriate lowpass prototype filters. The prototype filter may be of any kind, such as Butterworth, Chebyshev, or elliptic. We first design a suitable prototype lowpass filter ℋp(s). In the next step, we replace s with a proper transformation T(s) to obtain the desired highpass, bandpass, or bandstop filter.

7.7.1 Highpass Filters

Figure 7.34a shows the amplitude response of a typical highpass filter. The appropriate prototype response required for the design of this highpass filter is depicted in Fig. 7.34b. We



Figure 7.34 Frequency transformation for highpass filters.

must first determine this prototype filter transfer function ℋp(s) with passband 0 ≤ ω ≤ 1 and stopband ω > ωp/ωs. The desired transfer function of the highpass filter to satisfy the specifications in Fig. 7.34a is then obtained by replacing s with T(s) in ℋp(s), where

T(s) = ωp/s     (7.41)

EXAMPLE 7.12 Chebyshev Highpass Filter Design
Design a Chebyshev highpass filter to satisfy the criteria shown in Fig. 7.35 and summarized as: ωs = 100, ωp = 165, Gs = 0.1 (Ĝs = -20 dB), and Gp = 0.794 (Ĝp = -2 dB).

Step 1: Determine the prototype lowpass filter ℋp(s)
The prototype lowpass filter has ωp = 1 and ωs = 165/100 = 1.65. This means the prototype filter in Fig. 7.34b has a passband 0 ≤ ω < 1 and a stopband ω ≥ 1.65, as shown in Fig. 7.35b. Also, Gp = 0.794 (Ĝp = -2 dB) and Gs = 0.1 (Ĝs = -20 dB). We already designed a Chebyshev filter with these specifications in Ex. 7.9. The transfer function of this filter is [Eq. (7.40)]

ℋp(s) = 0.3269 / (s^3 + 0.7378s^2 + 1.0222s + 0.3269)

The amplitude response of this prototype filter is depicted in Fig. 7.35b.

Step 2: Determine the highpass filter H(s)
The desired highpass filter transfer function H(s) is obtained from ℋp(s) by replacing s with T(s) = ωp/s = 165/s. Therefore,


Figure 7.35 Chebyshev highpass filter design for Ex. 7.12.

H(s) = 0.3269 / [(165/s)^3 + 0.7378(165/s)^2 + 1.0222(165/s) + 0.3269]
     = s^3 / (s^3 + 515.94s^2 + 61445.75s + 13742005)

The amplitude response |H(jω)| for this filter is illustrated in Fig. 7.35a.

HIGHPASS CHEBYSHEV FILTER DESIGN WITH MATLAB

We can also carry out this design using the MATLAB functions cheb1ord and cheby1. We first use cheb1ord to determine filter order and passband frequency ωp.

>> omegap = 165; Ghatp = -2; omegas = 100; Ghats = -20;
>> [N,omegap] = cheb1ord(omegap,omegas,-Ghatp,-Ghats,'s')
N = 3
omegap = 165

Next, we use the cheby1 command to determine the filter transfer function.

>> [numH,denH] = cheby1(N,-Ghatp,omegap,'high','s')
numH = 1.00    0    0    0
denH = 1.00    515.96    61449.38    13742005.16


The filter's transfer function is thus

H(s) = s^3 / (s^3 + 515.96s^2 + 61449.38s + 13742005.16)

As expected, the MATLAB result matches our earlier hand calculations. We next use MATLAB to plot the filter's magnitude response. The result, shown in Fig. 7.36, matches our earlier result of Fig. 7.35a.

>> omega = linspace(0,800,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on; set(gca,'xtick',[0 omegas omegap 300:200:800])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1])
>> axis([0 800 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');

Figure 7.36 Magnitude response of the Chebyshev highpass filter for Ex. 7.12.

It is a simple matter to design different filter types for the same specifications in MATLAB. Rather than use cheb1ord and cheby1, we could use buttord and butter to design a Butterworth filter, cheb2ord and cheby2 to design an inverse Chebyshev filter, and ellipord and ellip to design an elliptic filter.

7.7.2 Bandpass Filters

Figure 7.37a shows the amplitude response of a typical bandpass filter. To design such a filter, we first find ℋp(s), the transfer function of a prototype lowpass filter (with normalized passband frequency), to meet the specifications in Fig. 7.37b, where ωs is given by the smaller of

(ωp1ωp2 - ωs1^2) / (ωs1(ωp2 - ωp1))   and   (ωs2^2 - ωp1ωp2) / (ωs2(ωp2 - ωp1))     (7.42)

Figure 7.37 Frequency transformation for bandpass filters.

Now, the desired transfer function of the bandpass filter to satisfy the specifications in Fig. 7.37a is obtained from ℋp(s) by replacing s with T(s), where

T(s) = (s^2 + ωp1ωp2) / ((ωp2 - ωp1)s)     (7.43)

EXAMPLE 7.13 Chebyshev Bandpass Filter Design
Design a Chebyshev bandpass filter with the amplitude response specifications shown in Fig. 7.38a with ωp1 = 1000, ωp2 = 2000, ωs1 = 450, ωs2 = 4000, Gs = 0.1 (Ĝs = -20 dB), and Gp = 0.891 (Ĝp = -1 dB). Observe that for a Chebyshev filter, Ĝp = -1 dB is equivalent to r̂ = 1 dB.
The solution is executed in two steps. In the first step, we determine the lowpass prototype filter transfer function ℋp(s). In the second step, the desired bandpass filter transfer function is obtained from ℋp(s) by substituting s with T(s), the lowpass-to-bandpass transformation in Eq. (7.43).

Step 1: Determine the prototype lowpass filter ℋp(s)
To begin, we find the stopband frequency ωs of the prototype. From Eq. (7.42), ωs is the smaller of

((1000)(2000) - (450)^2) / (450(2000 - 1000)) = 3.99   and   ((4000)^2 - (1000)(2000)) / (4000(2000 - 1000)) = 3.5

Thus, ωs = 3.5. We now need to design a prototype lowpass filter for Fig. 7.37b with Ĝp = -1 dB, Ĝs = -20 dB, ωp = 1, and ωs = 3.5, as illustrated in Fig. 7.38b. The Chebyshev filter order N required to meet these specifications is obtained from Eq. (7.35) as
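The prototype stopband edge of Eq. (7.42) is a one-liner to verify in Python (a sketch with our own variable names):

```python
wp1, wp2, ws1, ws2 = 1000, 2000, 450, 4000

# Eq. (7.42): the prototype stopband edge is the smaller of the two candidates
cand1 = (wp1*wp2 - ws1**2) / (ws1*(wp2 - wp1))   # lower transition band: ~3.994
cand2 = (ws2**2 - wp1*wp2) / (ws2*(wp2 - wp1))   # upper transition band: 3.5
ws = min(cand1, cand2)
```

Taking the smaller candidate guarantees that both transition bands of the final bandpass filter are satisfied by the one prototype.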

Figure 7.38 Chebyshev bandpass filter design for Ex. 7.13.

N = (1/cosh^-1(3.5)) cosh^-1[(10^2 - 1)/(10^0.1 - 1)]^(1/2) = 1.904

Rounding up to the nearest integer, we see that the required prototype order is N = 2. We could obtain the prototype transfer function by computing its poles using Eq. (7.37) with N = 2 and r̂ = 1 (ε = 0.5088). However, since Table 7.4 lists the denominator polynomial for r̂ = 1 and N = 2, we need not perform the computations and may use the ready-made transfer function directly as

ℋp(s) = 0.9826 / (s^2 + 1.0977s + 1.1025)     (7.44)

Here, we used Eq. (7.39) to find the numerator KN = a0/sqrt(1 + ε^2) = 1.1025/sqrt(1.2589) = 0.9826. The amplitude response of this prototype filter is depicted in Fig. 7.38b.

Step 2: Determine the bandpass filter H(s)
Finally, the desired bandpass filter transfer function H(s) is obtained from ℋp(s) by replacing s with T(s), where [see Eq. (7.43)]

T(s) = (s^2 + 2(10)^6) / 1000s

Replacing s with T(s) in the right-hand side of Eq. (7.44) yields the final bandpass transfer function

H(s) = 0.9826(10)^6 s^2 / (s^4 + 1097.7s^3 + 5.1025(10)^6 s^2 + 2.195(10)^9 s + 4(10)^12)

The amplitude response |H(jω)| of this filter is shown in Fig. 7.38a.

BANDPASS CHEBYSHEV FILTER DESIGN WITH MATLAB
We can readily verify this design using the MATLAB functions cheb1ord and cheby1. We first use cheb1ord to determine filter order and passband frequencies ωp.

>> omegap = [1000,2000]; omegas = [450,4000]; Ghatp = -1; Ghats = -20;
>> [N,omegap] = cheb1ord(omegap,omegas,-Ghatp,-Ghats,'s')
N = 2
omegap = 1000.00    2000.00

Next, we use the cheby1 command to determine the filter transfer function. Since ωp is specified as a two-element vector, the design defaults to a bandpass filter.

>> [numH,denH] = cheby1(N,-Ghatp,omegap,'s')
numH = 0    0    982613.36    0    0
denH = 1.00    1097.73    5102510.33    2195468657.13    4000000000000

The filter's transfer function is thus

H(s) = 982613.36s^2 / (s^4 + 1097.73s^3 + 5102510.33s^2 + 2195468657.13s + 4000000000000)

This MATLAB result matches our earlier hand calculations. We next use MATLAB to plot the filter's magnitude response. The result, shown in Fig. 7.39, matches our earlier result of Fig. 7.38a.

>> omega = linspace(0,8000,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on;
>> set(gca,'xtick',[0 450 1000 2000 4000 6000 8000])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1])
>> axis([0 8000 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');

Figure 7.39 Magnitude response of the Chebyshev bandpass filter for Ex. 7.13.


We may use a similar procedure for Butterworth bandpass filters. Compared to the Chebyshev case, a Butterworth bandpass filter design involves two additional steps. First, we need to compute the cutoff frequency ωc of the prototype filter. For a Chebyshev filter, the critical frequency happens to be the frequency where the gain is Gp; this frequency is ω = 1 in the prototype filter. For Butterworth, on the other hand, the critical frequency is the half-power (or 3 dB cutoff) frequency ωc, which is not necessarily the frequency where the gain is Gp. To find the transfer function of the Butterworth prototype filter, it is essential to know ωc. Once we know ωc, the prototype filter transfer function is obtained by replacing s with s/ωc in the normalized transfer function ℋ(s). This step is unnecessary in the Chebyshev filter design. Our next example demonstrates the procedure for Butterworth bandpass filter design.

EXAMPLE 7.14 Butterworth Bandpass Filter Design
Design a Butterworth bandpass filter with the amplitude response specifications illustrated in Fig. 7.40a with ωp1 = 1000, ωp2 = 2000, ωs1 = 450, ωs2 = 4000, Gp = 0.7586 (Ĝp = -2.4 dB), and Gs = 0.1 (Ĝs = -20 dB).

Figure 7.40 Butterworth bandpass filter design for Ex. 7.14.

As in Ex. 7.13, the solution is executed in two steps. In the first step, we determine the lowpass prototype filter transfer function ℋp(s). In the second step, the desired bandpass filter transfer function is obtained from ℋp(s) by substituting s with T(s), the lowpass-to-bandpass transformation in Eq. (7.43).

Step 1: Determine the prototype lowpass filter ℋp(s)
Our goal here is the design of the prototype lowpass Butterworth filter (see Ex. 7.7). First, we find ωs. For the prototype lowpass filter transfer function ℋp(s) with the amplitude response shown in Fig. 7.40b, the frequency ωs is found [using Eq. (7.42)] to be the smaller of

((1000)(2000) - (450)^2) / (450(2000 - 1000)) = 3.99   and   ((4000)^2 - (1000)(2000)) / (4000(2000 - 1000)) = 3.5

Thus, ωs = 3.5, as depicted in Fig. 7.40b. Next, we determine the filter order N. For a prototype lowpass filter in Fig. 7.37b, Ĝp = -2.4 dB, Ĝs = -20 dB, ωp = 1, and ωs = 3.5. Hence, according to Eq. (7.28), the Butterworth filter order N required to meet these specifications is

N = (1/(2 log 3.5)) log[(10^2 - 1)/(10^0.24 - 1)] = 1.955

Since filter order must be an integer, we round this result up to N = 2. Next (a step not necessary for a Chebyshev design), we determine the half-power (3 dB) cutoff frequency ωc for the prototype filter. Use of Eq. (7.29) yields

ωc = 1/(10^0.24 - 1)^(1/4) = 1.0790
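Both the order formula of Eq. (7.28) and the cutoff formula of Eq. (7.29) are easy to check in Python (a sketch; the function names are ours):

```python
import math

def butter_order(wp, ws, Ghat_p_db, Ghat_s_db):
    # Eq. (7.28): Butterworth order from passband/stopband gains in dB
    ratio = (10**(-Ghat_s_db/10) - 1) / (10**(-Ghat_p_db/10) - 1)
    return math.log10(ratio) / (2*math.log10(ws/wp))

def butter_cutoff(wp, Ghat_p_db, N):
    # Eq. (7.29): half-power frequency placed so the gain at wp is exactly Ghat_p
    return wp / (10**(-Ghat_p_db/10) - 1)**(1/(2*N))

N_exact = butter_order(1, 3.5, -2.4, -20)   # ~1.955, so N = 2
wc = butter_cutoff(1, -2.4, 2)              # ~1.0790
```

With N rounded up to 2, the cutoff is recomputed from the passband edge so the design exactly meets the passband specification.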

We now determine the normalized transfer function ℋ(s). According to Table 7.1, the required second-order lowpass Butterworth transfer function is

ℋ(s) = 1 / (s^2 + sqrt(2)s + 1)

This is the transfer function of a filter that is normalized to the half-power frequency (meaning ωc = 1). Finally, the prototype filter transfer function ℋp(s) is obtained by substituting s with s/ωc = s/1.0790 in the normalized transfer function ℋ(s) as

ℋp(s) = (1.0790)^2 / (s^2 + sqrt(2)(1.0790)s + (1.0790)^2) = 1.1642 / (s^2 + 1.5259s + 1.1642)     (7.45)

Essentially, this step converts the filter ℋ(s), which is normalized to ωc = 1, to the filter ℋp(s), which is normalized to ωp = 1. The amplitude response of this prototype filter is illustrated in Fig. 7.40b.

Step 2: Determine the bandpass filter H(s)
Finally, the desired bandpass filter transfer function H(s) is obtained from ℋp(s) by replacing s with T(s), where [see Eq. (7.43)]

T(s) = (s^2 + 2(10)^6) / 1000s

Making this substitution in the right-hand side of Eq. (7.45) yields the final bandpass transfer function

H(s) = 1.1642(10)^6 s^2 / (s^4 + 1526s^3 + 5.1642(10)^6 s^2 + 3.0518(10)^9 s + 4(10)^12)

The amplitude response |H(jω)| of this filter is shown in Fig. 7.40a.

BUTTERWORTH BANDPASS FILTER DESIGN WITH MATLAB

Once again, we can easily use MATLAB to design a filter to meet the design specifications. We first use buttord to determine filter order and half-power frequencies ωc.

>> omegap = [1000 2000]; omegas = [450 4000]; Ghatp = -2.4; Ghats = -20;
>> [N,omegac] = buttord(omegap,omegas,-Ghatp,-Ghats,'s')
N = 2
omegac = 964.35    2073.93

Next, we use the butter command to determine the filter transfer function. Since ωc is specified as a two-element vector, the design defaults to a bandpass filter.

>> [numH,denH] = butter(N,omegac,'s')
numH = 0    0    1231171.32    0    0
denH = 1.00    1569.19    5231171.32    3138370690.27    4000000000000

The filter's transfer function is thus

H(s) = 1231171.32s^2 / (s^4 + 1569.19s^3 + 5231171.32s^2 + 3138370690.27s + 4000000000000)

This MATLAB result is slightly different from our hand calculations. The reason for the difference is that our hand calculations produced a bandpass filter that exactly meets passband specifications (and exceeds stopband specifications), while MATLAB generates a filter that exactly meets stopband specifications (and exceeds passband specifications). We next use MATLAB to plot the filter's magnitude response. The result, shown in Fig. 7.41, is nearly identical to Fig. 7.40a. While both solutions meet design specifications, the design of Fig. 7.41 exactly meets the (upper, most stringent) stopband specifications (and exceeds passband specifications), while the design of Fig. 7.40a exactly meets passband specifications (and exceeds stopband specifications).

>> omega = linspace(0,8000,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on;
>> set(gca,'xtick',[0 450 1000 2000 4000 6000 8000])
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1])
>> axis([0 8000 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');


Figure 7.41 Magnitude response of the Butterworth bandpass filter for Ex. 7.14.

7.7.3 Bandstop Filters

Figure 7.42a shows the amplitude response of a typical bandstop filter. To design such a filter, we first find ℋp(s), the transfer function of a prototype lowpass filter, to meet the specifications in Fig. 7.42b, where ωs is given by the smaller of

(ωp2 - ωp1)ωs1 / (ωp1ωp2 - ωs1^2)   and   (ωp2 - ωp1)ωs2 / (ωs2^2 - ωp1ωp2)     (7.46)

The desired transfer function of the bandstop filter to satisfy the specifications in Fig. 7.42a is obtained from ℋp(s) by replacing s with T(s), where

T(s) = (ωp2 - ωp1)s / (s^2 + ωp1ωp2)     (7.47)

Figure 7.42 Frequency transformation for bandstop filters.


EXAMPLE 7.15 Butterworth Bandstop Filter Design
Design a Butterworth bandstop filter with the amplitude response specifications illustrated in Fig. 7.43a with ωp1 = 60, ωp2 = 260, ωs1 = 100, ωs2 = 150, Gp = 0.7776 (Ĝp = -2.2 dB), and Gs = 0.1 (Ĝs = -20 dB). Here, our bandstop design solution follows a two-step procedure similar to that for the Butterworth bandpass filter design of Ex. 7.14. In the first step, we determine the lowpass prototype filter transfer function ℋp(s). In the second step, we use the lowpass-to-bandstop transformation in Eq. (7.47) to obtain the desired bandstop filter transfer function H(s).

Step 1: Determine the prototype lowpass filter ℋp(s)
Our goal here is the design of the prototype lowpass Butterworth filter (see Ex. 7.7). First, we find ωs. For the prototype lowpass filter transfer function ℋp(s) with the amplitude response shown in Fig. 7.43b, the frequency ωs is found [using Eq. (7.46)] to be the smaller of

(100)(260 - 60) / ((260)(60) - 100^2) = 3.57   and   150(260 - 60) / (150^2 - (260)(60)) = 4.347
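The bandstop candidates of Eq. (7.46) can be checked with the same kind of Python one-liner used for the bandpass case (variable names are ours):

```python
wp1, wp2, ws1, ws2 = 60, 260, 100, 150

# Eq. (7.46): the prototype stopband edge is the smaller of the two candidates
cand1 = (wp2 - wp1)*ws1 / (wp1*wp2 - ws1**2)   # ~3.571
cand2 = (wp2 - wp1)*ws2 / (ws2**2 - wp1*wp2)   # ~4.348
ws = min(cand1, cand2)
```

Note the inverted structure relative to Eq. (7.42): for a bandstop design, the stopband edges appear in the numerator of the transformation.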

Thus, ωs = 3.57, as depicted in Fig. 7.43b. Next, we determine the filter order N. For the prototype lowpass filter in Fig. 7.43b, Ĝp = -2.2 dB, Ĝs = -20 dB, ωp = 1, and ωs = 3.57. According to Eq. (7.28), the Butterworth filter order N required to meet these specifications is

Figure 7.43 Butterworth bandstop filter design for Ex. 7.15.

N = (1/(2 log 3.57)) log[(10^2 - 1)/(10^0.22 - 1)] = 1.969

Since filter order must be an integer, we round this result up to N = 2. The half-power frequency ωc for the prototype Butterworth filter, using Eq. (7.29) with ωp = 1, is

ωc = ωp / (10^(-Ĝp/10) - 1)^(1/(2N)) = 1/(10^0.22 - 1)^(1/4) = 1.1096

We now determine the normalized transfer function H(s). According to Table 7.1, the required second-order lowpass Butterworth transfer function is

    H(s) = 1 / (s² + √2 s + 1)

This is the transfer function of a filter that is normalized to the half-power frequency (meaning ωc = 1). Finally, the prototype filter transfer function Hp(s) is obtained by substituting s with s/ωc = s/1.1096 in the normalized transfer function H(s) as

    Hp(s) = 1 / [(s/ωc)² + √2 (s/ωc) + 1] = 1.2312 / (s² + 1.5692s + 1.2312)        (7.48)

Essentially, this step converts the filter H(s), which is normalized to ωc = 1, to the filter Hp(s), which is normalized to ωp = 1. The amplitude response of this prototype filter is illustrated in Fig. 7.43b.

Step 2: Determine the bandstop filter H(s)

Finally, the desired bandstop filter transfer function H(s) is obtained from Hp(s) by replacing s with T(s), where [see Eq. (7.47)]

    T(s) = 200s / (s² + 15,600)

Making this substitution in the right-hand side of Eq. (7.48) yields the final bandstop transfer function

    H(s) = 1.2312 / [T(s)² + 1.5692 T(s) + 1.2312]
         = (s² + 15,600)² / [s⁴ + 254.9s³ + 63,690.9s² + (3.977)10⁶ s + (2.433)10⁸]

The amplitude response |H(jω)| of this filter is shown in Fig. 7.43a.
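Before turning to MATLAB, the substitution can be verified numerically. The following Python sketch (ours, not from the text) evaluates Hp(T(jω)) directly and checks the gains at the band edges:

```python
# Hp(s) = 1.2312 / (s^2 + 1.5692 s + 1.2312),  T(s) = 200 s / (s^2 + 15600)
def Hp(s):
    return 1.2312 / (s * s + 1.5692 * s + 1.2312)

def H_bandstop(w):
    s = 1j * w
    T = 200 * s / (s * s + 15600)   # lowpass-to-bandstop substitution
    return Hp(T)

# Passband edges: gain is ~0.776 (-2.2 dB), exactly meeting the passband spec
print(abs(H_bandstop(60)), abs(H_bandstop(260)))
# Stopband edges: gain stays at or below 0.1 (-20 dB)
print(abs(H_bandstop(100)), abs(H_bandstop(150)))
```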

BUTTERWORTH BANDSTOP FILTER DESIGN WITH MATLAB

We now use MATLAB to design a filter to meet the Butterworth bandstop design specifications. We first use buttord to determine the filter order and half-power frequencies ωc.


>> omegap = [60 260]; omegas = [100 150]; Ghatp = -2.2; Ghats = -20;
>> [N,omegac] = buttord(omegap,omegas,-Ghatp,-Ghats,'s')
   N = 2
   omegac = 66.8077  224.5249

Next, we use the butter command to determine the filter transfer function. In addition to ωc being specified as a two-element vector, we must also specify that a bandstop (rather than bandpass) filter is desired.

>> [numH,denH] = butter(N,omegac,'stop','s')
   numH = 1.00  0  30000.00  0  225000022.66
   denH = 1.00  223.05  54874.69  3345685.76  225000022.66

The filter's transfer function is thus

    H(s) = (s⁴ + 30,000s² + 225,000,022.66) / (s⁴ + 223.05s³ + 54,874.69s² + 3,345,685.76s + 225,000,022.66)

As in Ex. 7.14, this MATLAB result is slightly different from our hand calculations. The reason for the difference is that our hand calculations produced a bandstop filter that exactly meets passband specifications (and exceeds stopband specifications), while MATLAB generates a filter that exactly meets stopband specifications (and exceeds passband specifications). We next use MATLAB to plot the filter's magnitude response. The result, shown in Fig. 7.44, is nearly identical to Fig. 7.43a. While both solutions meet design specifications, the design of Fig. 7.44 exactly meets stopband specifications (and exceeds passband specifications), while the design of Fig. 7.43a exactly meets passband specifications (and exceeds stopband specifications).

>> omega = linspace(0,500,1001);
>> H = polyval(numH,1j*omega)./polyval(denH,1j*omega);
>> plot(omega,abs(H),'k-'); grid on;
>> set(gca,'xtick',[0 60 100 150 260 500]);
>> set(gca,'ytick',[0 10^(Ghats/20) 10^(Ghatp/20) 1]);
>> axis([0 500 0 1.05]); xlabel('\omega'); ylabel('|H(j\omega)|');
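The claim that the MATLAB design exactly meets the stopband specifications can be checked by evaluating its transfer function at the stopband edges. This Python sketch (ours, not from the text) uses a Horner evaluation equivalent to MATLAB's polyval:

```python
# Coefficients of the MATLAB-designed bandstop H(s), highest power first
numH = [1.0, 0.0, 30000.0, 0.0, 225000022.66]
denH = [1.0, 223.05, 54874.69, 3345685.76, 225000022.66]

def polyval(coeffs, s):
    """Horner evaluation, highest-order coefficient first (as in MATLAB)."""
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def gain(w):
    s = 1j * w
    return abs(polyval(numH, s) / polyval(denH, s))

# Stopband edges hit the spec exactly (gain 0.1 at both 100 and 150 rad/s),
# while the passband edges exceed the 0.7762 requirement.
print(gain(100), gain(150))
print(gain(60), gain(260))
```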


Figure 7.44 Magnitude response of the Butterworth bandstop filter for Ex. 7.15.


7.8 FILTERS TO SATISFY DISTORTIONLESS TRANSMISSION CONDITIONS

The purpose of a filter is to suppress unwanted frequency components and to transmit the desired frequency components without distortion. In Sec. 4.4, we saw that this requires the filter amplitude response to be constant and the phase response to be a linear function of ω over the passband. The filters discussed so far have stressed the constancy of the amplitude response; the linearity of the phase response has been ignored. As we saw earlier, the human ear is sensitive to amplitude distortion but somewhat insensitive to phase distortion. For this reason, audio filters are designed primarily for constant amplitude response, and the phase response is only a secondary consideration. We also saw earlier that the human eye is sensitive to phase distortion and relatively insensitive to amplitude distortion. Therefore, in video applications we cannot ignore phase distortion, and we need to design filters primarily for phase linearity. In pulse communication, both amplitude and phase distortion are important for correct information transmission, so filters with a constant amplitude response and a linear phase response are required. We shall briefly discuss some aspects and approaches to the design of such filters. More discussion appears in the literature (see [2]).

We showed [see Eq. (4.40)] that the group delay tg resulting from the transmission of a signal through a filter is the negative of the slope of the filter phase response ∠H(jω); that is,

    tg(ω) = -(d/dω) ∠H(jω)

If the slope of ∠H(jω) is constant over the desired band (i.e., if ∠H(jω) is linear with ω), all the components are delayed by the same time interval td (i.e., tg(ω) = td). In this case, the output is a replica of the input, assuming that all components are attenuated equally; that is, |H(jω)| = constant over the passband.
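The group delay definition lends itself to a simple numerical check. The sketch below (ours, not from the text) estimates tg by a central difference of the phase, using the normalized second-order Butterworth filter as an illustrative H(s):

```python
import cmath

# Illustrative filter: normalized second-order Butterworth
def H(s):
    return 1 / (s * s + 2**0.5 * s + 1)

def group_delay(w, dw=1e-6):
    # tg(w) = -d/dw arg H(jw), estimated by a central difference
    ph_hi = cmath.phase(H(1j * (w + dw)))
    ph_lo = cmath.phase(H(1j * (w - dw)))
    return -(ph_hi - ph_lo) / (2 * dw)

# For this filter tg(0) = sqrt(2); the delay is NOT constant across the band
# (e.g., tg(2) is much smaller), i.e., the phase is not exactly linear.
print(group_delay(0.0), group_delay(2.0))
```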
If the slope of the phase response is not constant, the group delay tg varies with frequency. This variation means that different frequency components undergo different amounts of time delay, and consequently, the output waveform will not be a replica of the input waveform even if the amplitude response is constant over the passband. A good way of judging phase linearity is to plot tg as a function of frequency. For a distortionless system, tg (the negative slope of ∠H(jω)) should be constant over the band of interest. This is in addition to the requirement of constancy of the amplitude response. Generally speaking, the two requirements of distortionless transmission conflict. The more we approach the ideal amplitude response, the further we deviate from the ideal phase response. The sharper the cutoff characteristic (the smaller the transition band), the more nonlinear is the phase response near the transition band. We can verify this fact from Fig. 7.45, which shows the group delay characteristics of various-order Butterworth and Chebyshev filters. The Chebyshev filter, which has a sharper cutoff than that of the Butterworth, shows considerably more variation in group delay across frequency as compared to that of the Butterworth. For applications where phase linearity is also important, there are two possible approaches:

1. If tg = constant (phase linearity) is the primary requirement, we design a filter for which tg is maximally flat around ω = 0 and accept the resulting amplitude response, which may


Figure 7.45 Group delay characteristics of various-order Butterworth and Chebyshev filters.

Figure 7.46 Roots of |H(jω)|² for N = 10 and ωc = 3000(2π).

>> poles = denroots(find(real(denroots)<0));

The resulting filter meets the attenuation specification (>40 dB at 5 kHz).

>> A = poly(poles); A = A/A(end); B = 1;
>> f = linspace(0,6000,501); omega = 2*pi*f;
>> Hmag = abs(polyval(B,1j*omega)./polyval(A,1j*omega));
>> plot(f,Hmag,'k-');
>> axis([0 6000 -0.05 1.05]); xlabel('f [Hz]'); ylabel('|H(j2\pi f)|');

7.9.3 Using Cascaded Second-Order Sections for Butterworth Filter Realization

For our Butterworth filter to be useful, we must be able to implement it. Since the transfer function H(s) is known, the differential equation is also known. Therefore, it is possible to try to implement the design by using op-amp integrators, summers, and scalar multipliers. Unfortunately, this approach will not work well. To understand why, consider the denominator coefficients a0 = 1.766 × 10⁻⁴³ and a10 = 1. The smallest coefficient is 43 orders of magnitude smaller than the largest coefficient! It is practically impossible to accurately realize such a broad range in scale values. To understand this, skeptics should try to find realistic resistors such that


Figure 7.47 Magnitude response |H(j2πf)| of a 10th-order Butterworth filter.

Rf/R = 1.766 × 10⁻⁴³. Additionally, small component variations will cause large changes in actual pole location. A better approach is to cascade five second-order sections, where each section implements one complex-conjugate pair of poles. By pairing poles in complex-conjugate pairs, each of the resulting second-order sections has real coefficients. With this approach, the smallest coefficients are only about nine orders of magnitude smaller than the largest coefficients. Furthermore, pole placement is typically less sensitive to component variations for cascaded structures. The Sallen-Key circuit shown in Fig. 7.48 provides a good way to realize a pair of complex-conjugate poles.† The transfer function of this circuit is

    H(s) = ω0² / (s² + (ω0/Q)s + ω0²)

Geometrically, ω0 is the distance from the origin to the poles and Q = 1/(2 cos ψ), where ψ is the angle between the negative real axis and the pole. Termed the "quality factor" of a circuit, Q provides a measure of the peakedness of the response. High-Q filters have poles close to the ω axis, which boost the magnitude response near those frequencies.
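These geometric relations are easy to confirm numerically. The following Python sketch (ours, not from the text; it uses representative values ω0 = 6000π and Q = 3.1962) solves for the poles of s² + (ω0/Q)s + ω0² and recovers ω0 and Q from the pole location:

```python
import cmath, math

# Poles of s^2 + (w0/Q) s + w0^2: check that |pole| = w0 and Q = 1/(2 cos psi),
# where psi is the angle between the negative real axis and the pole.
w0, Q = 3000 * 2 * math.pi, 3.1962
b, c = w0 / Q, w0**2
disc = cmath.sqrt(b * b - 4 * c)        # complex for Q > 0.5
pole = (-b + disc) / 2                  # upper-half-plane pole

radius = abs(pole)                      # distance from origin -> w0
psi = math.pi - cmath.phase(pole)       # angle from the negative real axis
print(radius / w0, 1 / (2 * math.cos(psi)))   # ~1 and ~Q
```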


Figure 7.48 Sallen-Key filter stage.

† A more general version of the Sallen-Key circuit has a resistor Ra from the negative terminal to ground and a resistor Rb between the negative terminal and the output. In Fig. 7.48, Ra = ∞ and Rb = 0.


Although many ways exist to determine suitable component values, a simple method is to assign R1 a realistic value and then let R2 = R1, C1 = 2Q/(ω0·R1), and C2 = 1/(2Q·ω0·R2). Butterworth poles are a distance ωc from the origin, so ω0 = ωc. For our 10th-order Butterworth filter, the angles ψ are regularly spaced at 9, 27, 45, 63, and 81 degrees. MATLAB program CH7MP1 automates the task of computing component values and magnitude responses for each stage.

% CH7MP1.m : Chapter 7, MATLAB Program 1
% Script M-file computes Sallen-Key component values and magnitude
% responses for each of the five cascaded second-order filter sections.
omega0 = 3000*2*pi; f = linspace(0,6000,501);
psi = [9 27 45 63 81]*pi/180;    % Butterworth pole angles
Hmag_SK = zeros(5,501);          % Pre-allocate array for magnitude responses
for stage = 1:5,
    Q = 1/(2*cos(psi(stage)));   % Compute Q for current stage
    % Compute and display filter components to the screen:
    disp(['Stage ',num2str(stage),...
          ' (Q = ',num2str(Q),...
          '): R1 = R2 = ',num2str(56000),...
          ', C1 = ',num2str(2*Q/(omega0*56000)),...
          ', C2 = ',num2str(1/(2*Q*omega0*56000))]);
    B = omega0^2; A = [1 omega0/Q omega0^2];
    Hmag_SK(stage,:) = abs(polyval(B,1j*2*pi*f)./polyval(A,1j*2*pi*f));
end
plot(f,Hmag_SK,'k',f,prod(Hmag_SK),'k:')
xlabel('f [Hz]'); ylabel('Magnitude Response')

The disp command displays a character string to the screen. Character strings must be enclosed in single quotation marks. The num2str command converts numbers to character strings and facilitates the formatted display of information. The prod command multiplies along the columns of a matrix; it computes the total magnitude response as the product of the magnitude responses of the five stages. Executing the program produces the following output:

>> CH7MP1
Stage 1 (Q = 0.50623): R1 = R2 = 56000, C1 = 9.5916e-10, C2 = 9.3569e-10
Stage 2 (Q = 0.56116): R1 = R2 = 56000, C1 = 1.0632e-09, C2 = 8.441e-10
Stage 3 (Q = 0.70711): R1 = R2 = 56000, C1 = 1.3398e-09, C2 = 6.6988e-10
Stage 4 (Q = 1.1013):  R1 = R2 = 56000, C1 = 2.0867e-09, C2 = 4.3009e-10
Stage 5 (Q = 3.1962):  R1 = R2 = 56000, C1 = 6.0559e-09, C2 = 1.482e-10
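The printed component values can be reproduced from the design formulas alone. This Python sketch (ours) mirrors the computation in CH7MP1:

```python
import math

# Recompute the Sallen-Key component values printed by CH7MP1.
w0 = 3000 * 2 * math.pi          # half-power frequency (rad/s)
R = 56000.0                      # chosen R1 = R2
for psi_deg in (9, 27, 45, 63, 81):           # Butterworth pole angles, N = 10
    Q = 1 / (2 * math.cos(math.radians(psi_deg)))
    C1 = 2 * Q / (w0 * R)
    C2 = 1 / (2 * Q * w0 * R)
    print(f"psi = {psi_deg:2d} deg: Q = {Q:.5f}, C1 = {C1:.4e}, C2 = {C2:.4e}")
```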

Since all the component values are practical, this filter is possible to implement. Figure 7.49 displays the magnitude responses for all five stages (solid lines). The total response (dotted line)


Figure 7.49 Magnitude responses for Sallen-Key filter stages.

confirms a 10th-order Butterworth response. Stage 5, which has the largest Q and implements the pair of conjugate poles nearest the ω axis, has the most peaked response. Stage 1, which has the smallest Q and implements the pair of conjugate poles furthest from the ω axis, has the least peaked response. In practice, it is best to order high-Q stages last; this reduces the risk that the high gains will saturate the filter hardware.

7.10 SUMMARY

The response of an LTIC system with transfer function H(s) to an everlasting sinusoid of frequency ω is also an everlasting sinusoid of the same frequency. The output amplitude is |H(jω)| times the input amplitude, and the output sinusoid is shifted in phase with respect to the input sinusoid by ∠H(jω) radians. The plot of |H(jω)| versus ω indicates the amplitude gain of sinusoids of various frequencies and is called the amplitude response of the system. The plot of ∠H(jω) versus ω indicates the phase shift of sinusoids of various frequencies and is called the phase response.

Plotting of frequency response is remarkably simplified by using logarithmic units for amplitude as well as frequency. Such plots are known as Bode plots. The use of logarithmic units makes it possible to add (rather than multiply) the amplitude responses of four basic types of factors that occur in transfer functions: (1) a constant, (2) a pole or a zero at the origin, (3) a first-order pole or zero, and (4) complex-conjugate poles or zeros. For phase plots, we use linear units for phase and logarithmic units for the frequency. Like the amplitude components, the phases corresponding to the basic types of factors add. The asymptotic properties of the amplitude and phase responses allow their plotting with remarkable ease even for transfer functions of high order.

The frequency response of a system is determined by the locations in the complex plane of the poles and zeros of its transfer function. We can design frequency-selective filters by proper

placement of its transfer function poles and zeros. Placing a pole (a zero) near a frequency jω0 enhances (suppresses) the frequency response at ω = ω0.

8.2 USEFUL SIGNAL OPERATIONS

Signal operations for shifting and scaling, as discussed for continuous-time signals, also apply with some modifications to discrete-time signals.

SHIFTING

Consider a signal x[n] (Fig. 8.4a) and the same signal delayed (right-shifted) by 5 units (Fig. 8.4b), which we shall denote by xs[n].† Using the argument employed for a similar operation in continuous-time signals (Sec. 1.2), we obtain

    xs[n] = x[n - 5]

† The terms "delay" and "advance" are meaningful only when the independent variable is time. For other independent variables, such as frequency or distance, it is more appropriate to refer to the "right shift" and "left shift" of a sequence.


CHAPTER 8 DISCRETE-TIME SIGNALS AND SYSTEMS


Figure 8.4 Shifting and time reversal of a signal.

Therefore, to shift a sequence by M units (M an integer), we replace n with n - M. Thus, x[n - M] represents x[n] shifted by M units. If M is positive, the shift is to the right (delay). If M is negative, the shift is to the left (advance). Accordingly, x[n - 5] is x[n] delayed (right-shifted) by 5 units, and x[n + 5] is x[n] advanced (left-shifted) by 5 units.
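The shift rule is easy to verify numerically. In this Python sketch (ours, not from the text), the Fig. 8.4a signal is modeled as a function and delayed by 5 units:

```python
# Model the Fig. 8.4a signal x[n] = (0.9)^n for 3 <= n <= 10, zero otherwise,
# and delay it by 5 units: xs[n] = x[n - 5].
def x(n):
    return 0.9**n if 3 <= n <= 10 else 0.0

def xs(n):            # right shift (delay) by 5 units
    return x(n - 5)

# The delayed signal is nonzero for 8 <= n <= 15, with the same sample values
print(x(3), xs(8))      # equal
print(xs(7), xs(16))    # both zero (outside the shifted support)
```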


DRILL 8.2 Left-Shift Operation

Show that x[n] in Fig. 8.4a left-shifted by 3 units can be expressed as xl[n] = (0.9)^(n+3) for 0 ≤ n ≤ 7, and zero otherwise. Sketch the shifted signal.

DRILL 8.3 Right-Shift Operation

Show that x[-k - n] can be obtained from x[n] by first right-shifting x[n] by k units and then time-reversing this shifted signal.

TIME REVERSAL

To time-reverse x[n] in Fig. 8.4a, we rotate x[n] about the vertical axis to obtain the time-reversed signal xr[n] shown in Fig. 8.4c. Using the argument employed for a similar operation in continuous-time signals (Sec. 1.2), we obtain

    xr[n] = x[-n]

Therefore, to time-reverse a signal, we replace n with -n so that x[-n] is the time-reversed x[n]. For example, if x[n] = (0.9)^n for 3 ≤ n ≤ 10, then xr[n] = (0.9)^(-n) for 3 ≤ -n ≤ 10, that is, -10 ≤ n ≤ -3, as shown in Fig. 8.4c. The origin n = 0 is the anchor point, which remains unchanged under the time-reversal operation because at n = 0, x[n] = x[-n] = x[0]. Note that while the reversal of x[n] about the vertical axis is x[-n], the reversal of x[n] about the horizontal axis is -x[n].

EXAMPLE 8.2 Time Reversal and Shifting

In the convolution operation, discussed later, we need to find the function x[k - n] from x[n]. This can be done in two steps: (i) time-reverse the signal x[n] to obtain x[-n]; (ii) now, right-shift x[-n] by k. Recall that right-shifting is accomplished by replacing n with n - k. Hence, right-shifting x[-n] by k units yields x[-(n - k)] = x[k - n]. Figure 8.4d shows x[5 - n] obtained this way. We first time-reverse x[n] to obtain x[-n] in Fig. 8.4c. Next, we shift x[-n] by k = 5 to obtain x[k - n] = x[5 - n], as shown in Fig. 8.4d.

In this particular example, the order of the two operations employed is interchangeable. We can first left-shift x[n] to obtain x[n + 5]. Next, we time-reverse x[n + 5] to obtain x[-n + 5] = x[5 - n]. The reader is encouraged to verify that this procedure yields the same result, as in Fig. 8.4d.
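A short Python check (ours, not from the text) confirms that the two orders of operations produce the same signal x[5 - n]:

```python
# Verify the two orders of operations in Ex. 8.2 agree.
def x(n):
    return 0.9**n if 3 <= n <= 10 else 0.0   # signal of Fig. 8.4a

def via_reverse_then_shift(n):
    # time-reverse to x[-n], then right-shift by 5: x[-(n-5)] = x[5-n]
    return x(-(n - 5))

def via_shift_then_reverse(n):
    # left-shift to x[n+5], then time-reverse: x[(-n)+5] = x[5-n]
    return x(-n + 5)

assert all(via_reverse_then_shift(n) == via_shift_then_reverse(n)
           for n in range(-20, 21))
print("both orders give x[5 - n]")   # nonzero for -5 <= n <= 2, as in Fig. 8.4d
```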


DRILL 8.4 Time Reversal

Sketch the signal x[n] = e^(-0.5n) for -3 ≤ n ≤ 2, and zero otherwise. Sketch the corresponding time-reversed signal and show that it can be expressed as xr[n] = e^(0.5n) for -2 ≤ n ≤ 3.


SAMPLING RATE ALTERATION: DOWNSAMPLING, UPSAMPLING, AND INTERPOLATION

Alteration of the sampling rate is somewhat similar to time scaling in continuous-time signals. Compressing a signal x[n] by a factor M yields xd[n] given by

    xd[n] = x[Mn]

Because of the restriction that discrete-time signals are defined only for integer values of the argument, we must restrict M to integer values. The values of x[Mn] at n = 0, 1, 2, 3, ... are x[0], x[M], x[2M], x[3M], .... This means x[Mn] selects every Mth sample of x[n] and deletes all the samples in between. It reduces the number of samples by factor M. If x[n] is obtained by sampling a continuous-time signal, this operation implies reducing the sampling rate by factor M. For this reason, this operation is commonly called downsampling. Figure 8.5a shows a signal x[n] and Fig. 8.5b shows the signal x[2n], which is obtained by deleting odd-numbered samples of x[n].† In the continuous-time case, time compression merely speeds up the signal without loss of any data. In contrast, downsampling x[n] generally causes loss of data. Under certain conditions (for example, if x[n] is the result of oversampling some continuous-time signal), xd[n] may still retain the complete information about x[n].

An interpolated signal is generated in two steps; first, we expand x[n] by an integer factor L to obtain the expanded signal xe[n], as

    xe[n] = { x[n/L]   n = 0, ±L, ±2L, ...
            { 0        otherwise                     (8.2)

To understand this expression, consider a simple case of expanding x[n] by a factor of 2 (L = 2). When n is odd, n/2 is a noninteger, and xe[n] = 0. That is, xe[1] = xe[3] = xe[5] = ... are all zero, as depicted in Fig. 8.5c. Moreover, n/2 is an integer for even n, and the values of xe[n] = x[n/2] for n = 0, 2, 4, 6, ... are x[0], x[1], x[2], x[3], ..., as shown in Fig. 8.5c. In general, for n = 0, 1, 2, ..., xe[n] is given by the sequence

    x[0], 0, 0, ..., 0, 0, x[1], 0, 0, ..., 0, 0, x[2], 0, 0, ..., 0, 0, ...

with L - 1 zeros between each pair of samples of x[n].

Thus, the sampling rate of xe[n] is L times that of x[n]. Hence, this operation is commonly called upsampling. The upsampled signal xe[n] contains all the data of x[n], although in an expanded form.

† Odd-numbered samples of x[n] can be retained (and even-numbered samples deleted) by using the transformation xd[n] = x[2n + 1].
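Downsampling and upsampling are one-liners on a list of samples. This Python sketch (ours, with a made-up signal) shows both operations:

```python
# Downsampling by M keeps every Mth sample; upsampling by L inserts L-1 zeros.
x = [1, 2, 3, 4, 5, 6]

M = 2
xd = [x[M * n] for n in range(len(x) // M)]   # x[2n]: every other sample

L = 2
xe = []
for sample in x:                 # expand: place x[n/L] at multiples of L
    xe.append(sample)
    xe.extend([0] * (L - 1))

print(xd)   # [1, 3, 5]
print(xe)   # [1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0]
```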


Figure 8.5 Compression (downsampling) and expansion (upsampling, interpolation) of a signal.

In the expanded signal in Fig. 8.5c, the missing (zero-valued) odd-numbered samples can be reconstructed from the nonzero-valued samples by using some suitable interpolation formula. Figure 8.5d shows such an interpolated signal xi[n], where the missing samples are constructed by using an interpolating filter. The optimum interpolating filter is usually an ideal lowpass filter, which is realizable only approximately. In practice, we may use an interpolation that is nonoptimum but realizable. The process of filtering to interpolate the zero-valued samples is called interpolation. Since the interpolated data are computed from the existing data, interpolation does

not result in a gain of information. While further discussion of interpolation is beyond our scope, Drill 8.5 and, later, Prob. 9.8-8 introduce the idea of linear interpolation.

DRILL 8.5 Expansion and Interpolation

A signal x[n] is expanded by a factor of 2 to obtain the signal x[n/2]. The odd-numbered samples (n odd) in this signal have zero value. Show that the linearly interpolated odd-numbered samples are given by xi[n] = (1/2){x[n - 1] + x[n + 1]}.
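The linear-interpolation formula of this drill can be tried directly. This Python sketch (ours, with made-up sample values) expands a short signal by 2 and fills in the zeros:

```python
# Linear interpolation of the zero-valued samples after expansion by L = 2:
# xi[n] = (xe[n-1] + xe[n+1]) / 2 at odd n.
x = [2.0, 4.0, 8.0]
xe = [2.0, 0.0, 4.0, 0.0, 8.0]           # expanded by 2

xi = list(xe)
for n in range(1, len(xe) - 1, 2):        # odd-indexed (zero-valued) samples
    xi[n] = 0.5 * (xe[n - 1] + xe[n + 1])

print(xi)   # [2.0, 3.0, 4.0, 6.0, 8.0]
```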

8.3 SOME USEFUL DISCRETE-TIME SIGNAL MODELS We now discuss some important discrete-time signal models that are encountered frequently in the study of discrete-time signals and systems.

8.3.1 Discrete-Time Impulse Function δ[n]

The discrete-time counterpart of the continuous-time impulse function δ(t) is δ[n], the Kronecker delta function, defined by

    δ[n] = { 1   n = 0
           { 0   n ≠ 0

This function, also called the unit impulse sequence, is shown in Fig. 8.6a. The shifted impulse sequence δ[n - m] is depicted in Fig. 8.6b. Unlike its continuous-time counterpart δ(t) (the Dirac delta), the Kronecker delta is a very simple function, requiring no special esoteric knowledge of distribution theory.


Figure 8.6 Discrete-time impulse function: (a) unit impulse sequence and (b) shifted impulse sequence.



Figure 8.7 (a) A discrete-time unit step function u[n] and (b) its application.

8.3.2 Discrete-Time Unit Step Function u[n]

The discrete-time counterpart of the unit step function u(t) is u[n] (Fig. 8.7a), defined by

    u[n] = { 1   n ≥ 0
           { 0   n < 0

If we want a signal to start at n = 0 (so that it has a zero value for all n < 0), we need only multiply the signal by u[n].

EXAMPLE 8.3 Describing Signals with Unit Step and Unit Impulse Functions

Describe the signal x[n] shown in Fig. 8.7b by a single expression valid for all n.

There are many different ways of viewing x[n]. Although each way of viewing yields a different expression, they are all equivalent. We shall consider here just one possible expression. The signal x[n] can be broken into three components: (1) a ramp component x1[n] from n = 0 to 4, (2) a scaled step component x2[n] from n = 5 to 10, and (3) an impulse component x3[n] represented by the negative spike at n = 8. Let us consider each one separately. We express x1[n] = n(u[n] - u[n - 5]) to account for the signal from n = 0 to 4. Assuming that the spike at n = 8 does not exist, we can express x2[n] = 4(u[n - 5] - u[n - 11]) to account for the signal from n = 5 to 10. Once these two components have been added, the only part that is unaccounted for is a spike of amplitude -2 at n = 8, which can be represented by

x3[n] = -2δ[n - 8]. Hence,

    x[n] = x1[n] + x2[n] + x3[n]
         = n(u[n] - u[n - 5]) + 4(u[n - 5] - u[n - 11]) - 2δ[n - 8]     for all n

We stress again that the expression is valid for all values of n. The reader can find several other equivalent expressions for x[n]. For example, one may consider a scaled step function from n = 0 to 10, subtract a ramp over the range n = 0 to 3, and subtract the spike. You can also play with breaking n into different ranges for your expression.
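The single-expression description can be verified sample by sample. This Python sketch (ours, not from the text) implements u[n], δ[n], and the expression of Ex. 8.3:

```python
# x[n] = n(u[n] - u[n-5]) + 4(u[n-5] - u[n-11]) - 2*delta[n-8]
def u(n):
    return 1 if n >= 0 else 0

def delta(n):
    return 1 if n == 0 else 0

def x(n):
    return n * (u(n) - u(n - 5)) + 4 * (u(n - 5) - u(n - 11)) - 2 * delta(n - 8)

# Ramp 0..4 for n = 0..4, value 4 for n = 5..10 except 4 - 2 = 2 at n = 8,
# and zero everywhere else.
print([x(n) for n in range(-2, 13)])
```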

8.3.3 Discrete-Time Exponential γ^n

A continuous-time exponential e^(λt) can be expressed in an alternate form as

    e^(λt) = γ^t      (γ = e^λ or λ = ln γ)

For example, e^(-0.3t) = (0.7408)^t because e^(-0.3) = 0.7408. Conversely, 4^t = e^(1.386t) because e^(1.386) = 4, that is, ln 4 = 1.386. In the study of continuous-time signals and systems, we prefer the form e^(λt) rather than γ^t. In contrast, the exponential form γ^n is preferable in the study of discrete-time signals and systems, as will become apparent later. The discrete-time exponential γ^n can also be expressed by using a natural base, as

    γ^n = e^(λn)      (γ = e^λ or λ = ln γ)

Because of unfamiliarity with exponentials with bases other than e, exponentials of the form γ^n may seem inconvenient and confusing at first. The reader is urged to plot some exponentials to acquire a sense of these functions. Also observe that γ^(-n) = (1/γ)^n.

DRILL 8.6 Equivalent Forms of DT Exponentials

(a) Show that (i) (0.25)^(-n) = 4^n, (ii) 4^(-n) = (0.25)^n, (iii) e^(2t) = (7.389)^t, (iv) e^(-2t) = (0.1353)^t = (7.389)^(-t), (v) e^(3n) = (20.086)^n, and (vi) e^(-1.5n) = (0.2231)^n = (4.4817)^(-n).
(b) Show that (i) 2^n = e^(0.693n), (ii) (0.5)^n = e^(-0.693n), and (iii) (0.8)^(-n) = e^(0.2231n).
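The equivalences in this drill all follow from λ = ln γ; a brief Python check (ours):

```python
import math

# gamma^n and e^(lambda n) are the same signal when lambda = ln(gamma).
for gamma in (0.25, 4.0, 2.0, 0.5, 0.8):
    lam = math.log(gamma)
    for n in range(5):
        assert math.isclose(gamma**n, math.exp(lam * n), rel_tol=1e-9)

# e.g. 2^n = e^(0.693 n), since ln 2 = 0.693...
print(math.log(2))
```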

Nature of γ^n. The signal e^(λn) grows exponentially with n if Re λ > 0 (λ in the RHP), and decays exponentially if Re λ < 0 (λ in the LHP). It is constant or oscillates with constant amplitude if Re λ = 0 (λ on the imaginary axis). Clearly, the location of λ in the complex plane indicates whether the signal e^(λn) will grow exponentially, decay exponentially, or oscillate with constant amplitude (Fig. 8.8a). A constant signal (λ = 0) is also an oscillation with zero frequency. We now find a similar criterion for determining the nature of γ^n from the location of γ in the complex plane.



Figure 8.8 The λ plane, the γ plane, and their mapping.

Figure 8.8a shows a complex plane (λ plane). Consider a signal e^(jΩn). In this case, λ = jΩ lies on the imaginary axis (Fig. 8.8a), and the signal is therefore a constant-amplitude oscillation. This signal e^(jΩn) can be expressed as γ^n, where γ = e^(jΩ). Because the magnitude of e^(jΩ) is unity, |γ| = 1. Hence, when λ lies on the imaginary axis, the corresponding γ lies on a circle of unit radius, centered at the origin (the unit circle illustrated in Fig. 8.8b). Therefore, a signal γ^n oscillates with constant amplitude if γ lies on the unit circle. Thus, the imaginary axis in the λ plane maps into the unit circle in the γ plane.

Next consider the signal e^(λn), where λ lies in the left half-plane in Fig. 8.8a. This means λ = a + jb, where a is negative (a < 0). In this case, the signal decays exponentially. This signal can be expressed as γ^n, where

    γ = e^λ = e^(a+jb) = e^a e^(jb)   and   |γ| = |e^a| |e^(jb)| = e^a   because |e^(jb)| = 1

Also, a is negative (a < 0). Hence, |γ| = e^a < 1. This result means that the corresponding γ lies inside the unit circle. Therefore, a signal γ^n decays exponentially if γ lies within the unit circle (Fig. 8.8b). If, in the preceding case, we select a to be positive (λ in the right half-plane), then |γ| > 1, and γ lies outside the unit circle. Therefore, a signal γ^n grows exponentially if γ lies outside the unit circle (Fig. 8.8b).

To summarize, the imaginary axis in the λ plane maps into the unit circle in the γ plane. The left half of the λ plane maps into the inside of the unit circle, and the right half of the λ plane maps into the outside of the unit circle in the γ plane, as depicted in Fig. 8.8. Plots of (0.8)^n and (-0.8)^n appear in Figs. 8.9a and 8.9b, respectively. Plots of (0.5)^n and (1.1)^n appear in Figs. 8.9c and 8.9d, respectively. These plots verify our earlier conclusions about the location of γ and the nature of signal growth. Observe that a signal (-|γ|)^n alternates sign


Figure 8.9 Discrete-time exponentials γ^n.

successively (it is positive for even values of n and negative for odd values of n, as depicted in Fig. 8.9b). Also, the exponential (0.5)^n decays faster than (0.8)^n because 0.5 is closer to the origin than 0.8. The exponential (0.5)^n can also be expressed as 2^(-n) because (0.5)^(-1) = 2.
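The mapping γ = e^λ and the unit-circle criterion can be checked numerically. This Python sketch (ours, not from the text) tests one λ from each region of the λ plane:

```python
import cmath, math

# gamma = e^lambda maps the lambda plane onto the gamma plane:
#   Re(lambda) < 0  ->  |gamma| < 1 (inside unit circle, decaying gamma^n)
#   Re(lambda) = 0  ->  |gamma| = 1 (on unit circle, constant amplitude)
#   Re(lambda) > 0  ->  |gamma| > 1 (outside unit circle, growing gamma^n)
for lam in (-0.5 + 2j, 0 + 1j, 0.3 - 1j):
    gamma = cmath.exp(lam)
    print(lam, abs(gamma))                       # |gamma| = e^(Re lambda)
    assert math.isclose(abs(gamma), math.exp(lam.real))
```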

DRILL 8.7 Sketching DT Exponentials

Sketch the following signals: (a) (1)^n, (b) (-1)^n, (c) (0.5)^n, (d) (-0.5)^n, ..., and (g) (-2)^n. Express these exponentials as γ^n, and plot γ in the complex plane. Verify that γ^n decays exponentially with n if γ lies inside the unit circle and that it grows exponentially with n if γ is outside the unit circle. If γ is on the unit circle, γ^n is constant or oscillates with constant amplitude.

Accurately hand-sketching DT signals can be tedious and difficult. As the next example shows, MATLAB is particularly well suited to plot DT signals, including exponentials.


EXAMPLE 8.4 Plotting DT Exponentials with MATLAB

Use MATLAB to plot the following discrete-time signals over (0 ≤ n ≤ 8): (a) xa[n] = (0.8)^n, (b) xb[n] = (-0.8)^n, (c) xc[n] = (0.5)^n, and (d) xd[n] = (1.1)^n.

To begin, we use anonymous functions to represent each of the four signals. Next, we plot these functions over the desired range of n. The results, shown in Fig. 8.10, match the earlier Fig. 8.9 plots of the same signals.

>> n = (0:8);
>> x_a = @(n) (0.8).^n; x_b = @(n) (-0.8).^n;
>> x_c = @(n) (0.5).^n; x_d = @(n) (1.1).^n;
>> subplot(2,2,1); stem(n,x_a(n),'k.'); ylabel('x_a[n]'); xlabel('n');
>> subplot(2,2,2); stem(n,x_b(n),'k.'); ylabel('x_b[n]'); xlabel('n');
>> subplot(2,2,3); stem(n,x_c(n),'k.'); ylabel('x_c[n]'); xlabel('n');
>> subplot(2,2,4); stem(n,x_d(n),'k.'); ylabel('x_d[n]'); xlabel('n');

Figure 8.10 MATLAB plots of the discrete-time exponentials in Ex. 8.4.
8.3.4 Discrete-Time Complex Exponential e^(jΩn)

For Ω > 0, e^(jΩn) = e^(j|Ω|n) moves counterclockwise along the unit circle by an angle |Ω| for each unit increase in n, as shown in Fig. 8.11a. For Ω < 0, e^(jΩn) = e^(-j|Ω|n) moves clockwise along the unit circle by an angle |Ω| for each unit increase in n, as shown in Fig. 8.11b. In either case, the locus of e^(jΩn) may be viewed as a phasor rotating stepwise at a uniform rate of |Ω| radians per unit sample interval. The sign of Ω specifies the direction of rotation, while |Ω| establishes the rate of rotation, or frequency, of e^(jΩn). Using Euler's formula, we can express the complex exponential e^(jΩn) in terms of sinusoids as

    e^(jΩn) = cos Ωn + j sin Ωn   and   e^(-jΩn) = cos Ωn - j sin Ωn        (8.3)

These equations show that the frequency of both e^(jΩn) and e^(-jΩn) is Ω (radians/sample). Therefore, the frequency of e^(jΩn) is |Ω|. Because of Eq. (8.3), exponentials and sinusoids have similar properties and peculiarities. Discrete-time sinusoids will be considered next.

8.3.5 Discrete-Time Sinusoid cos(Ωn + θ)

A general discrete-time sinusoid can be expressed as C cos(Ωn + θ), where C is the amplitude and θ is the phase in radians. Also, Ωn is an angle in radians. Hence, the dimensions of the frequency Ω are radians per sample. This sinusoid may also be expressed as

    C cos(Ωn + θ) = C cos(2πFn + θ)

8.3 Some Useful Discrete-Time Signal Models


Figure 8.12 A discrete-time sinusoid cos(πn/12 + π/4).

where F = Ω/2π. Therefore, the dimensions of the discrete-time frequency F are (radians/2π) per sample, which is equal to cycles per sample. This means that if N_0 is the period (samples/cycle) of the sinusoid, then the frequency of the sinusoid is F = 1/N_0 (cycles/sample). Figure 8.12 shows a discrete-time sinusoid cos(πn/12 + π/4). For this case, the frequency is Ω = π/12 radians/sample. The sinusoid is readily plotted with MATLAB:

>> n = (-30:30); x = @(n) cos(n*pi/12+pi/4);
>> clf; stem(n,x(n),'k'); ylabel('x[n]'); xlabel('n');

Figure 8.13 Sinusoid plot for Ex. 8.5.


CHAPTER 8 DISCRETE-TIME SIGNALS AND SYSTEMS

A SAMPLED CONTINUOUS-TIME SINUSOID YIELDS A DISCRETE-TIME SINUSOID

A continuous-time sinusoid cos ωt sampled every T seconds yields a discrete-time sequence whose nth element (at t = nT) is cos ωnT. Thus, the sampled signal x[n] is given by

x[n] = cos ωnT = cos Ωn,  where Ω = ωT

Thus, a continuous-time sinusoid cos ωt sampled every T seconds yields a discrete-time sinusoid cos Ωn, where Ω = ωT. Superficially, it may appear that a discrete-time sinusoid is a continuous-time sinusoid's cousin in a striped suit. As we shall see, however, some of the properties of discrete-time sinusoids are very different from those of continuous-time sinusoids. In the continuous-time case, the period of a sinusoid can take on any value: integral, fractional, or even irrational. A discrete-time signal, in contrast, is specified only at integral values of n. Therefore, the period must be an integer (in terms of n) or an integral multiple of T (in terms of the variable t).

SOME PECULIARITIES OF DISCRETE-TIME SINUSOIDS

There are two unexpected properties of discrete-time sinusoids that distinguish them from their continuous-time relatives.

1. A continuous-time sinusoid is always periodic regardless of the value of its frequency ω. But a discrete-time sinusoid cos Ωn is periodic only if Ω is 2π times some rational number (Ω/2π is a rational number).
2. A continuous-time sinusoid cos ωt has a unique waveform for each value of ω. In contrast, a sinusoid cos Ωn does not have a unique waveform for each value of Ω. In fact, discrete-time sinusoids with frequencies separated by multiples of 2π are identical. Thus, a sinusoid cos Ωn = cos (Ω + 2π)n = cos (Ω + 4π)n = ....

We now examine each of these peculiarities.

NOT ALL DISCRETE-TIME SINUSOIDS ARE PERIODIC

A discrete-time signal x[n] is said to be N_0-periodic if

x[n] = x[n + N_0]     (8.4)

for some positive integer N_0. The smallest value of N_0 that satisfies Eq. (8.4) is the period of x[n]. Figure 8.14 shows an example of a periodic signal of period N_0 = 6. Observe that each period contains six samples (or values). If we consider that the first cycle starts at n = 0, the last sample (or value) in this cycle occurs at n = N_0 − 1 = 5 (not at n = N_0 = 6). Note also that, by definition, a periodic signal must begin at n = −∞ (an everlasting signal) for the reasons discussed in Sec. 1.3.3. If a signal cos Ωn is N_0-periodic, then

cos Ωn = cos Ω(n + N_0) = cos (Ωn + ΩN_0)

This result is possible only if ΩN_0 is an integral multiple of 2π; that is,

ΩN_0 = 2πm,  m integer


Figure 8.14 A discrete-time periodic signal.

or

Ω/2π = m/N_0

Because both m and N_0 are integers, this equation implies that the sinusoid cos Ωn is periodic only if Ω/2π is a rational number. In this case, the period N_0 is given by

N_0 = m(2π/Ω)     (8.5)

To compute N_0, we must choose the smallest value of m that will make m(2π/Ω) an integer. For example, if Ω = 4π/17, then 2π/Ω = 17/2, and the smallest value of m that will make m(17/2) an integer is 2. In this case, we therefore see that

N_0 = m(2π/Ω) = 2(17/2) = 17
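The computation in Eq. (8.5) amounts to reducing Ω/2π to lowest terms: for Ω = 2π(p/q) in reduced form, the denominator q is the period N_0 and the numerator p is the m of Eq. (8.5). A small Python sketch using exact fractions (an illustration, not from the text) makes this concrete:

```python
from fractions import Fraction

def dt_sinusoid_period(omega_over_2pi):
    """Return (N0, m) for cos(Omega*n) with Omega = 2*pi*omega_over_2pi.

    The sinusoid is periodic iff Omega/2pi is rational; then Omega*N0 = 2*pi*m
    with N0 = reduced denominator and m = reduced numerator.
    """
    r = Fraction(omega_over_2pi)          # reduce p/q to lowest terms
    return r.denominator, r.numerator

# Omega = 4*pi/17  =>  Omega/2pi = 2/17  =>  N0 = 17 with m = 2, as in the text.
N0, m = dt_sinusoid_period(Fraction(2, 17))
assert (N0, m) == (17, 2)

# Omega = pi/4  =>  Omega/2pi = 1/8  =>  N0 = 8 samples per period.
assert dt_sinusoid_period(Fraction(1, 8)) == (8, 1)
```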

Using a similar argument, we can show that this discussion also applies to a discrete-time exponential e^{jΩn}. Thus, a discrete-time exponential e^{jΩn} is periodic only if Ω/2π is a rational number.

PHYSICAL EXPLANATION OF THE PERIODICITY RELATIONSHIP

Qualitatively, the periodicity relationship [see Eq. (8.5)] can be explained by recognizing that a discrete-time sinusoid cos Ωn can be obtained by sampling a continuous-time sinusoid cos Ωt at unit time interval T = 1; that is, cos Ωt sampled at t = 0, 1, 2, 3, .... This fact means cos Ωt is the envelope of cos Ωn. Since the period of cos Ωt is 2π/Ω, there are 2π/Ω samples (elements) of cos Ωn in one cycle of its envelope. This number may or may not be an integer. Figure 8.15 shows three sinusoids: cos(πn/4), cos(4πn/17), and cos(0.8n). Figure 8.15a shows cos(πn/4), for which there are exactly eight samples in each cycle of its envelope (2π/Ω = 8). Thus, cos(πn/4) repeats every cycle of its envelope. Clearly, cos(πn/4) is periodic with period 8. On the other hand, Fig. 8.15b, which shows cos(4πn/17), has an average of 17/2 = 8.5 samples (not an integral number) in one cycle of its envelope. Therefore, the second cycle of the envelope will not be identical to the first cycle. But there are 17 samples (an integral number) in two cycles of its envelope. Hence, the pattern becomes repetitive every two cycles of its envelope.

For |γ| > 1, the amplitude grows exponentially.

8.4 ALIASING AND SAMPLING RATE

The nonuniqueness of discrete-time sinusoids and the periodic repetition of the same waveforms at intervals of 2π may seem innocuous, but in reality it leads to a serious problem for processing continuous-time signals by digital filters. A continuous-time sinusoid cos ωt sampled every T seconds (t = nT) results in a discrete-time sinusoid cos ωnT, which is cos Ωn with Ω = ωT. The discrete-time sinusoids cos Ωn have unique waveforms only for values of frequencies in the range Ω < π or ωT < π. Therefore, samples of continuous-time sinusoids of two (or more) different frequencies can generate the same discrete-time signal, as shown in Fig. 8.19. This phenomenon is known as aliasing because, through sampling, two entirely different analog sinusoids take on the same "discrete-time" identity.

Figure 8.19 shows samples of two sinusoids cos 12πt and cos 2πt taken every 0.2 second. The corresponding discrete-time frequencies (Ω = ωT = 0.2ω) yield cos 2.4πn and cos 0.4πn. The apparent frequency of 2.4π is 0.4π, identical to the discrete-time frequency corresponding to the lower sinusoid. This shows that the samples of both these continuous-time sinusoids at 0.2-second intervals are identical, as verified from Fig. 8.19.
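The identical-samples claim is easy to check numerically. This Python sketch (an illustration, not part of the text) samples cos 12πt and cos 2πt at T = 0.2 s and confirms that the two sample sequences coincide:

```python
import numpy as np

T = 0.2                       # sampling interval (fs = 5 Hz)
n = np.arange(0, 50)          # sample indices
t = n * T

x1 = np.cos(12 * np.pi * t)   # 6 Hz sinusoid -> Omega = 2.4*pi rad/sample
x2 = np.cos(2 * np.pi * t)    # 1 Hz sinusoid -> Omega = 0.4*pi rad/sample

# cos(2.4*pi*n) = cos(2*pi*n + 0.4*pi*n) = cos(0.4*pi*n): both
# continuous-time sinusoids produce exactly the same samples (aliasing).
assert np.allclose(x1, x2)
```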



Figure 8.19 Demonstration of the aliasing effect.

Aliasing causes ambiguity in digital signal processing, which makes it impossible to determine the true frequency of the sampled signal. Consider, for instance, digitally processing a continuous-time signal that contains two distinct components of frequencies ω1 and ω2. The samples of these components appear as discrete-time sinusoids of frequencies Ω1 = ω1T and Ω2 = ω2T. If Ω1 and Ω2 happen to differ by an integer multiple of 2π (if ω2 − ω1 = 2kπ/T), the two frequencies will be read as the same (lower of the two) frequency by the digital processor. As a result, the higher-frequency component ω2 not only is lost for good (by losing its identity to ω1), but it also reincarnates as a component of frequency ω1, thus distorting the true amplitude of the original component of frequency ω1. Hence, the resulting processed signal will be distorted. Clearly, aliasing is highly undesirable and should be avoided.

To avoid aliasing, the frequencies of the continuous-time sinusoids to be processed should be kept within the fundamental band ωT ≤ π, or ω ≤ π/T. Under this condition, the question of ambiguity or aliasing does not arise because any continuous-time sinusoid of frequency in this range has a unique waveform when it is sampled. Therefore, if ωh is the highest frequency to be processed, then, to avoid aliasing,

ωh < π/T

If fh is the highest frequency in hertz, then fh < 1/(2T), or

fs > 2fh
A discrete-time exponential γ^n grows exponentially if |γ| > 1 (γ outside the unit circle) and decays exponentially if |γ| < 1 (γ within the unit circle). If γ lies on the unit circle (i.e., |γ| = 1), the exponential is either a constant or oscillates with a constant envelope.

Discrete-time sinusoids have two properties not shared by their continuous-time cousins. First, a discrete-time sinusoid cos Ωn is periodic only if Ω/2π is a rational number. Second, discrete-time sinusoids whose frequencies Ω differ by an integral multiple of 2π are identical. Consequently, a discrete-time sinusoid of any frequency Ω is identical to some discrete-time sinusoid whose frequency (called the apparent frequency Ωa) lies in the interval −π to π. Notice that |Ωa| is, at most, π, which reflects the highest rate of oscillation for a discrete-time sinusoid. This peculiarity of nonuniqueness of waveforms in discrete-time sinusoids of different frequencies has far-reaching consequences in discrete-time signal processing.

Sampling a continuous-time sinusoid cos (ωt + θ) at uniform intervals of T seconds results in a discrete-time sinusoid cos (Ωn + θ), where Ω = ωT. A continuous-time sinusoid of frequency f Hz must be sampled at a rate no less than 2f Hz. Otherwise, the resulting sinusoid is aliased; that is, it appears as a sampled version of a sinusoid of lower frequency.

Discrete-time systems may be used to process discrete-time signals, or to process continuous-time signals using appropriate interfaces at the input and output. At the input, the continuous-time input signal is converted into a discrete-time signal through sampling. The resulting discrete-time signal is processed by a discrete-time system yielding a discrete-time output, which is then converted into a continuous-time output. Discrete-time systems are characterized by difference equations and can be realized by using scalar multipliers, summers, and time delays. These operations can be readily performed by digital computers. As discussed in Sec. 8.5, discrete-time systems possess many advantages over continuous-time systems, which helps explain the ever-growing popularity of discrete-time systems.
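The folding of any frequency Ω into an apparent frequency in (−π, π], summarized above, can be sketched in a few lines. This Python helper (hypothetical, not from the text) computes the apparent frequency, in hertz, of a sampled sinusoid:

```python
import numpy as np

def apparent_frequency(f_hz, fs_hz):
    """Apparent (aliased) frequency, in Hz, of cos(2*pi*f*t) sampled at fs.

    Folds Omega = 2*pi*f/fs into (-pi, pi] and maps back to hertz; the
    result is reported as a magnitude, |Omega_a| * fs / (2*pi).
    """
    omega = 2 * np.pi * f_hz / fs_hz                  # discrete-time frequency
    omega_a = (omega + np.pi) % (2 * np.pi) - np.pi   # fold into (-pi, pi]
    return abs(omega_a) * fs_hz / (2 * np.pi)

# The 6 Hz sinusoid of Fig. 8.19 sampled at fs = 5 Hz appears as 1 Hz.
assert np.isclose(apparent_frequency(6, 5), 1.0)
# A frequency already inside the fundamental band is unchanged.
assert np.isclose(apparent_frequency(2, 5), 2.0)
```

Such a helper is handy for checking answers to the sampling problems that follow.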

PROBLEMS

8.1-1 Find the energy of the signals depicted in Fig. P8.1-1.

8.1-2 Find the power of the signals illustrated in Fig. P8.1-2.

8.1-3 Show that the power of a signal x[n] = Ve^{jΩ_0 n} is P_x = |V|².

8.1-4

e^λ and γ in the complex plane. Verify that an exponential γ^n is growing if γ lies outside the unit circle (or if λ lies in the RHP), is decaying if γ lies within the unit circle (or if λ lies in the LHP), and has a constant amplitude if γ lies on the unit circle (or if λ lies on the imaginary axis).


8.3-11 Show that cos(0.6πn + (π/6)) + √3 cos(1.4πn + (π/3)) = 2 cos(0.6πn − (π/6)).

8.3-12 Express the following exponentials in the form e^{j(Ωn+θ)}, where −π ≤ Ω < π:
(a) e^{j4πn}
(b) e^{−j1.95n}
(c) e^{−j10.7πn}
Repeat the problem if Ω is required to be in the range 0 ≤ Ω < 2π.

8.4-1 A continuous-time sinusoid cos(ω_0 t) is sampled at a rate f_s = 100 Hz. The sampled signal is found to be cos(0.6πn). If there is more than one possible value for ω_0, find the general expression for ω_0, and determine the three smallest values of |ω_0|.

8.4-2 Samples of a continuous-time sinusoid cos(100πt) are found to be cos(πn). Find the sampling frequency f_s. Explain whether there is only one possible value for f_s. If there is more than one possible value, find the general expression for the sampling frequency, and determine the three largest possible values.

8.4-3 A discrete-time processor uses a sampling interval T = 0.5 µs. What is the highest frequency of a signal that can be processed with this processor without aliasing? If a signal of frequency 2 MHz is sampled by this processor, what is the (aliased) frequency of the resulting sampled signal?

8.4-4 Continuous-time sinusoids 10 cos(11πt + (π/6)) and 5 cos(29πt − (π/6)) are sampled using a sampling interval of T = 0.1 s. Express the resulting discrete-time sinusoids in terms of their apparent frequencies.

8.4-5 Consider a signal x(t) = 10 cos(2000πt) + √2 sin(3000πt) + 2 cos(5000πt + (π/4)).
(a) Assuming that x(t) is sampled at a rate of 4000 Hz, find the resulting sampled signal x[n], expressed in terms of apparent frequencies. Does this sampling rate cause any aliasing? Explain.
(b) Determine the maximum sampling interval T that can be used to sample the signal x(t) without aliasing.

8.4-6 A sampler with sampling interval T = 0.1 ms (10^{−4} s) samples continuous-time sinusoids of the following frequencies:
(a) f = 10 kHz
(b) f = 8500 Hz
(c) f = 32 kHz
(d) f = 1500 Hz
(e) f = 12.5 kHz
(f) f = 9600 Hz
Determine the apparent frequency of the sampled signal in each case.

8.5-1 A cash register output y[n] represents the total cost of n items rung up by a cashier. The input x[n] is the cost of the nth item.
(a) Write the difference equation relating y[n] to x[n].
(b) Realize this system using a time-delay element.

8.5-2 Let p[n] be the population of a certain country at the beginning of the nth year. The birth and death rates of the population during any year are 3.3% and 1.3%, respectively. If i[n] is the total number of immigrants entering the country during the nth year, write the difference equation relating p[n + 1], p[n], and i[n]. Assume that the immigrants enter the country throughout the year at a uniform rate.

8.5-3 A moving average is used to detect a trend of a rapidly fluctuating variable, such as the stock market average. A variable may fluctuate (up and down) daily, masking its long-term (secular) trend. We can discern the long-term trend by smoothing or averaging the past N values of the variable. For the stock market average, we may consider a 5-day moving average y[n] to be the mean of the past 5 days' market closing values x[n], x[n − 1], ..., x[n − 4].
(a) Write the difference equation relating y[n] to the input x[n].
(b) Use time-delay elements to realize the 5-day moving-average filter.

8.5-4 The digital integrator in Ex. 8.13 is specified by

y[n] − y[n − 1] = Tx[n]

If an input u[n] is applied to such an integrator, show that the output is (n + 1)Tu[n], which approaches the desired ramp nTu[n] as T → 0.


Figure P8.5-6

8.5-5 Approximate the following second-order differential equation with a difference equation:

d²y(t)/dt² + a_1 (dy(t)/dt) + a_0 y(t) = x(t)

8.5-6 The voltage at the nth node of a resistive ladder in Fig. P8.5-6 is v[n], (n = 0, 1, 2, ..., N). Show that v[n] satisfies the second-order difference equation

v[n + 2] − Av[n + 1] + v[n] = 0,  where A = 2 + (1/a)

[Hint: Consider the node equation at the nth node with voltage v[n].]

8.5-7 Consider the discrete-time function x[n] = e^{−n/5} cos(πn/5)u[n]. Section 8.6 uses anonymous functions in describing DT signals:

>> x = @(n) exp(-n/5).*cos(pi*n/5).*(n>=0);

While this anonymous function operates correctly for a downsampling operation such as x[2n], it does not operate correctly for an upsampling operation such as x[n/2]. Modify the anonymous function x so that it also correctly accommodates upsampling operations. Test your code by computing and plotting x[n/2] over (−10 ≤ n ≤ 10).

8.6-1 Determine whether each of the following statements is true or false. If the statement is false, demonstrate by proof or example why the statement is false. If the statement is true, explain why.
(a) A discrete-time signal with finite power cannot be an energy signal.
(b) A discrete-time signal with infinite energy must be a power signal.
(c) If an energy signal x[n] has energy E, then the energy of x[an] is E/|a|.

8.6-2 Suppose a vector x exists in the MATLAB workspace, corresponding to a finite-duration DT signal x[n].
(a) Write a MATLAB function that, when passed vector x, computes and returns Ex, the energy of x[n].
(b) Write a MATLAB function that, when passed vector x, computes and returns Px, the power of x[n]. Assume that x[n] is periodic and that vector x contains data for an integer number of periods of x[n].

TIME-DOMAIN ANALYSIS OF DISCRETE-TIME SYSTEMS

In this chapter, we explore the time-domain analysis of linear, time-invariant, discrete-time (LTID) systems. We show how to compute the zero-input response, determine the unit impulse response, and use convolution to evaluate the zero-state response. The procedure is parallel to that for continuous-time systems, with minor differences.

9.1 CLASSIFICATION OF DISCRETE-TIME SYSTEMS

Before examining the nature of discrete-time system equations, let us review the concepts of linearity, time invariance (or shift invariance), and causality, which apply to discrete-time systems much as they do to continuous-time systems.

LINEARITY AND TIME INVARIANCE

For discrete-time systems, the definition of linearity is identical to that for continuous-time systems, as given in Eq. (1.22). Referring to the examples from Sec. 8.5, we can show that the systems in Exs. 8.10, 8.11, 8.12, and 8.13 are all linear.

Time invariance (or shift invariance) for discrete-time systems is also defined in a way similar to that for continuous-time systems. Systems whose parameters do not change with time (with n) are time-invariant or shift-invariant (also constant-parameter) systems. For such a system, if the input is delayed by k units or samples, the output is the same as before but delayed by k samples (assuming the initial conditions also are delayed by k). The systems in Exs. 8.10, 8.11, 8.12, and 8.13 are time-invariant because the coefficients in the system equations are constants (independent of n). If these coefficients were functions of n (time), then the systems would be linear time-varying systems. Consider, for example, a system described by

For this system, let a signal x_1[n] yield the output y_1[n], and another input x_2[n] yield the output y_2[n]. Then


and


If we let x_2[n] = x_1[n − N_0], then

Clearly, this is a time-varying-parameter system.

CAUSAL AND NONCAUSAL SYSTEMS

A causal (also known as a physical or nonanticipative) system is one for which the output at any instant n = k depends only on the value of the input x[n] for n ≤ k. In other words, the value of the output at the present instant depends only on the past and present values of the input x[n], not on its future values. The systems in Exs. 8.10, 8.11, 8.12, and 8.13 are all causal. We shall soon introduce the unit impulse response h[n] of a discrete-time system, and we shall see that causal systems require causal h[n].

INVERTIBLE AND NONINVERTIBLE SYSTEMS

A discrete-time system S is invertible if an inverse system Si exists such that the cascade of S and Si results in an identity system. An identity system is defined as one whose output is identical to the input. In other words, for an invertible system, the input can be uniquely determined from the corresponding output. For every input, there is a unique output. When a signal is processed through such a system, its input can be reconstructed from the corresponding output. There is no loss of information when a signal is processed through an invertible system.

A cascade of a unit delay with a unit advance results in an identity system because the output of such a cascaded system is identical to the input. Clearly, the inverse of an ideal unit delay is an ideal unit advance, which is a noncausal (and unrealizable) system. In contrast, a compressor y[n] = x[Mn] is not invertible because this operation loses all but every Mth sample of the input, and, generally, the input cannot be reconstructed. Similarly, operations such as y[n] = cos x[n] or y[n] = |x[n]| are not invertible.

DRILL 9.1 Invertibility

Show that a system specified by equation y[n] = ax[n] + b is invertible but that the system y[n] = |x[n]|² is noninvertible.
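The drill can be checked numerically. In this Python sketch (an illustration of the drill, not its formal proof), the affine system y = ax + b is inverted exactly by x = (y − b)/a, while y = |x|² maps two different inputs to the same output:

```python
# Affine system y[n] = a*x[n] + b: invertible for a != 0,
# since x[n] = (y[n] - b)/a recovers the input exactly.
a, b = 2.0, 3.0
x = [1.0, -4.0, 0.5]
y = [a * v + b for v in x]
x_rec = [(w - b) / a for w in y]
assert x_rec == x

# y[n] = |x[n]|^2: noninvertible, because x[n] and -x[n] collide.
x1, x2 = [1.0, -2.0], [-1.0, 2.0]
y1 = [v * v for v in x1]
y2 = [v * v for v in x2]
assert x1 != x2 and y1 == y2
```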

STABLE AND UNSTABLE SYSTEMS

The concept of stability is similar to that in continuous-time systems. Stability can be internal or external. If every bounded input applied at the input terminal results in a bounded output, the system is said to be stable externally. External stability can be ascertained by measurements at the external terminals of the system. This type of stability is also known as stability in the BIBO (bounded-input/bounded-output) sense. Both internal and external stability are discussed in greater detail in Sec. 9.6.

MEMORYLESS SYSTEMS AND SYSTEMS WITH MEMORY

The concepts of memoryless (or instantaneous) systems and those with memory (or dynamic) are identical to the corresponding concepts of the continuous-time case. A system is memoryless if its response at any instant n depends at most on the input at the same instant n. The output at any instant of a system with memory generally depends on the past, present, and future values of the input. For example, y[n] = sin x[n] is an example of an instantaneous system, and y[n] − y[n − 1] = x[n] is an example of a dynamic system or a system with memory.

EXAMPLE 9.1 Investigating DT System Properties

Consider a DT system described as y[n + 1] = x[n + 1]x[n]. Determine whether the system is (a) linear, (b) time-invariant, (c) causal, (d) invertible, (e) BIBO-stable, and (f) memoryless.

Let us delay the input-output equation by 1 to obtain the equivalent but more convenient representation of y[n] = x[n]x[n − 1].

(a) Linearity requires both homogeneity and additivity. Let us first investigate homogeneity. Assuming x[n] ⇒ y[n], we see that

ax[n] ⇒ (ax[n])(ax[n − 1]) = a²y[n] ≠ ay[n]

Thus, the system does not satisfy the homogeneity property. The system also does not satisfy the additivity property. Assuming x_1[n] ⇒ y_1[n] and x_2[n] ⇒ y_2[n], we see that input x[n] = x_1[n] + x_2[n] produces output y[n] as

y[n] = (x_1[n] + x_2[n])(x_1[n − 1] + x_2[n − 1])
     = x_1[n]x_1[n − 1] + x_2[n]x_2[n − 1] + x_1[n]x_2[n − 1] + x_2[n]x_1[n − 1]
     = y_1[n] + y_2[n] + x_1[n]x_2[n − 1] + x_2[n]x_1[n − 1]
     ≠ y_1[n] + y_2[n]

Clearly, additivity is not satisfied. Since the system satisfies neither the homogeneity nor the additivity property, we conclude that the system is not linear.

(b) To be time-invariant, a shift in any input should cause a corresponding shift in the respective output. Assume that x[n] ⇒ y[n]. Applying a delayed version of this input to the system yields

x[n − N] ⇒ x[n − N]x[n − 1 − N] = x[(n − N)]x[(n − N) − 1] = y[n − N]

Since shifting an input causes a corresponding shift in the output, we conclude that the system is time-invariant.

(c) To be causal, an output value cannot depend on any future input values. The output y at time n depends on the input x at present and past times n and n − 1. Since the current output does not depend on future input values, the system is causal.


(d) For a system to be invertible, every input must generate a unique output, which allows exact recovery of the input from the output. Consider two inputs to this system: x_1[n] = 1 and x_2[n] = −1. Both inputs generate the same output y_1[n] = y_2[n] = 1. Since unique inputs do not always generate unique outputs, we conclude that the system is not invertible.

(e) To be BIBO-stable, any bounded input must generate a bounded output. A bounded input satisfies |x[n]| ≤ Mx < ∞ for all n. Given this condition, the system output magnitude behaves as

|y[n]| = |x[n]x[n − 1]| = |x[n]| |x[n − 1]| ≤ Mx² < ∞

Since any bounded input is guaranteed to produce a bounded output, it follows that the system is BIBO-stable.

(f) To be memoryless, a system's output can depend only on the strength of the current input. Since the output y at time n depends on the input x not only at present time n but also at past time n − 1, we see that the system is not memoryless.
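The conclusions of Ex. 9.1 can be spot-checked numerically. This Python sketch (illustrative only) simulates y[n] = x[n]x[n − 1] and confirms the failure of homogeneity and the boundedness of the output:

```python
import numpy as np

def system(x):
    """y[n] = x[n] * x[n-1], taking x[-1] = 0 for the first sample."""
    xs = np.concatenate(([0.0], x))
    return xs[1:] * xs[:-1]

rng = np.random.default_rng(0)
x = rng.standard_normal(20)
a = 3.0

# Homogeneity fails: scaling the input by a scales the output by a^2, not a.
assert np.allclose(system(a * x), a**2 * system(x))

# BIBO stability: |x[n]| <= Mx implies |y[n]| <= Mx^2.
Mx = np.max(np.abs(x))
assert np.max(np.abs(system(x))) <= Mx**2 + 1e-12
```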

9.2 DISCRETE-TIME SYSTEM EQUATIONS

In this section, we discuss the time-domain analysis of LTID (linear, time-invariant, discrete-time) systems. With minor differences, the procedure is parallel to that for continuous-time systems.

DIFFERENCE EQUATIONS

Equations (8.10), (8.12), (8.15), and (8.20) are examples of difference equations. Equations (8.10), (8.15), and (8.20) are first-order difference equations, and Eq. (8.12) is a second-order difference equation. All these equations are linear, with constant (not time-varying) coefficients.† Before giving a general form of an Nth-order linear difference equation, we recall that a difference equation can be written in two forms: the first form uses delay terms such as y[n − 1], y[n − 2], x[n − 1], x[n − 2], and so on; and the alternate form uses advance terms such as y[n + 1], y[n + 2], and so on. Although the delay form is more natural, we shall often prefer the advance form, not just for general notational convenience, but also for the resulting notational uniformity with the operator form for differential equations. This facilitates the commonality of the solutions and concepts for continuous-time and discrete-time systems. We start here with a general difference equation, written in advance form as

y[n + N] + a_1 y[n + N − 1] + ··· + a_{N−1} y[n + 1] + a_N y[n] = b_{N−M} x[n + M] + b_{N−M+1} x[n + M − 1] + ··· + b_{N−1} x[n + 1] + b_N x[n]     (9.1)

This is a linear difference equation whose order is max(N, M). We have assumed the coefficient of y[n + N] to be unity (a_0 = 1) without loss of generality. If a_0 ≠ 1, we can divide the equation throughout by a_0 to normalize the equation to have a_0 = 1.

† Equations such as (8.10), (8.12), (8.15), and (8.20) are considered to be linear according to the classical definition of linearity. Some authors label such equations as incrementally linear. We prefer the classical definition. It is just a matter of individual choice and makes no difference in the final results.

CAUSALITY CONDITION

For a causal system, the output cannot depend on future input values. This means that when the system equation is in the advance form of Eq. (9.1), causality requires M ≤ N. If M were greater than N, then y[n + N], the output at n + N, would depend on x[n + M], which is the input at the later instant n + M. For a general causal case, M = N, and Eq. (9.1) can be expressed as

y[n + N] + a_1 y[n + N − 1] + ··· + a_{N−1} y[n + 1] + a_N y[n] = b_0 x[n + N] + b_1 x[n + N − 1] + ··· + b_{N−1} x[n + 1] + b_N x[n]     (9.2)

where some of the coefficients on either side can be zero. In this Nth-order equation, a_0, the coefficient of y[n + N], is normalized to unity. Equation (9.2) is valid for all values of n. Therefore, it is still valid if we replace n by n − N throughout the equation [see Eqs. (8.10) and (8.11)]. Such replacement yields the delay-form alternative:

y[n] + a_1 y[n − 1] + ··· + a_{N−1} y[n − N + 1] + a_N y[n − N] = b_0 x[n] + b_1 x[n − 1] + ··· + b_{N−1} x[n − N + 1] + b_N x[n − N]     (9.3)

9.2.1 Recursive (Iterative) Solution of Difference Equation

Equation (9.3) can be expressed as

y[n] = −a_1 y[n − 1] − a_2 y[n − 2] − ··· − a_N y[n − N] + b_0 x[n] + b_1 x[n − 1] + ··· + b_N x[n − N]     (9.4)

In Eq. (9.4), y[n] is computed from 2N + 1 pieces of information: the preceding N values of the output, y[n − 1], y[n − 2], ..., y[n − N]; the preceding N values of the input, x[n − 1], x[n − 2], ..., x[n − N]; and the present value of the input x[n]. Initially, to compute y[0], the N initial conditions y[−1], y[−2], ..., y[−N] serve as the preceding N output values. Hence, knowing the N initial conditions and the input, we can determine recursively the entire output y[0], y[1], y[2], y[3], ..., one value at a time. For instance, to find y[0], we set n = 0 in Eq. (9.4). The left-hand side is y[0], and the right-hand side is expressed in terms of the N initial conditions y[−1], y[−2], ..., y[−N] and the input x[0] if x[n] is causal (because of causality, the other input terms x[−n] = 0). Similarly, knowing y[0] and the input, we can compute y[1] by setting n = 1 in Eq. (9.4). Knowing y[0] and y[1], we find y[2], and so on. Thus, we can use this recursive procedure to find the complete response y[0], y[1], y[2], .... For this reason, this equation is classed as a recursive form. This method basically reflects the manner in which a computer would solve a recursive difference equation, given the input and initial conditions. Equation (9.4) [or Eq. (9.3)] is nonrecursive if all the N coefficients a_i = 0 (i = 1, 2, ..., N). In this case, y[n] is computed only from the input values, without using any previous outputs. Generally speaking, the recursive procedure applies only to equations in the recursive form. The recursive (iterative) procedure is demonstrated by the following examples.
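The recursion of Eq. (9.4) is exactly what a computer program implements. Below is a minimal Python sketch of a general recursive solver (an illustration, not from the text); as a check, it reproduces the first values of Ex. 9.2, which solves y[n] − 0.5y[n − 1] = n²u[n] with y[−1] = 16.

```python
def solve_recursive(a, b, y_init, x, n_out):
    """Iterate Eq. (9.4): y[n] = -sum a[i]*y[n-i] + sum b[i]*x[n-i].

    a: [a1, ..., aN]; b: [b0, b1, ..., bN]; y_init: [y[-1], ..., y[-N]];
    x: causal input as a callable x(n); returns [y[0], ..., y[n_out-1]].
    """
    N = len(a)
    mem = list(y_init)                    # output memory, most recent first
    out = []
    for n in range(n_out):
        acc = -sum(a[i] * mem[i] for i in range(N))
        acc += sum(b[i] * (x(n - i) if n - i >= 0 else 0)
                   for i in range(len(b)))
        out.append(acc)
        mem = [acc] + mem[:-1]            # shift the output memory
    return out

# Ex. 9.2: y[n] - 0.5*y[n-1] = x[n], x[n] = n^2 u[n], y[-1] = 16.
vals = solve_recursive(a=[-0.5], b=[1], y_init=[16], x=lambda n: n**2, n_out=5)
assert vals == [8.0, 5.0, 6.5, 12.25, 22.125]
```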


EXAMPLE 9.2 Iterative Solution to a First-Order Difference Equation

Solve iteratively

y[n] − 0.5y[n − 1] = x[n]

with initial condition y[−1] = 16 and causal input x[n] = n²u[n]. This equation can be expressed as

y[n] = 0.5y[n − 1] + x[n]     (9.5)

If we set n = 0 in Eq. (9.5), we obtain

y[0] = 0.5y[−1] + x[0] = 0.5(16) + 0 = 8

Now, setting n = 1 in Eq. (9.5) and using the value y[0] = 8 (computed in the first step) and x[1] = (1)² = 1, we obtain

y[1] = 0.5(8) + (1)² = 5

Next, setting n = 2 in Eq. (9.5) and using the value y[1] = 5 (computed in the previous step) and x[2] = (2)², we obtain

y[2] = 0.5(5) + (2)² = 6.5

Continuing in this way iteratively, we obtain

y[3] = 0.5(6.5) + (3)² = 12.25
y[4] = 0.5(12.25) + (4)² = 22.125

The output y[n] is depicted in Fig. 9.1.

Figure 9.1 Iterative solution of a difference equation.

We now present one more example of iterative solution, this time for a second-order equation. The iterative method can be applied to a difference equation in delay form or advance form. In Ex. 9.2, we considered the former. Let us now apply the iterative method to the advance form.

EXAMPLE 9.3 Iterative Solution to a Second-Order Difference Equation

Solve iteratively

y[n + 2] − y[n + 1] + 0.24y[n] = x[n + 2] − 2x[n + 1]

with initial conditions y[−1] = 2, y[−2] = 1 and a causal input x[n] = nu[n]. The system equation can be expressed as

y[n + 2] = y[n + 1] − 0.24y[n] + x[n + 2] − 2x[n + 1]     (9.6)

Setting n = −2 in Eq. (9.6) and then substituting y[−1] = 2, y[−2] = 1, x[0] = x[−1] = 0, we obtain

y[0] = 2 − 0.24(1) + 0 − 0 = 1.76

Setting n = −1 in Eq. (9.6) and then substituting y[0] = 1.76, y[−1] = 2, x[1] = 1, x[0] = 0, we obtain

y[1] = 1.76 − 0.24(2) + 1 − 0 = 2.28

Setting n = 0 in Eq. (9.6) and then substituting y[0] = 1.76, y[1] = 2.28, x[2] = 2, and x[1] = 1 yields

y[2] = 2.28 − 0.24(1.76) + 2 − 2(1) = 1.8576

and so on. With MATLAB, we can readily verify and extend these recursive calculations.

>> n = -2:4;
>> y = [1,2,zeros(1,length(n)-2)];
>> x = [0,0,n(3:end)];
>> for k = 1:length(n)-2, y(k+2) = y(k+1)-0.24*y(k)+x(k+2)-2*x(k+1); end
>> n,y

n = -2      -1      0       1       2       3       4
y = 1.0000  2.0000  1.7600  2.2800  1.8576  0.3104  -2.1354

Note carefully the recursive nature of the computations. From the N initial conditions (and the input), we obtained y[0] first. Then, using this value of y[0] and the preceding N − 1 initial conditions (along with the input), we find y[1]. Next, using y[0], y[1] along with the past N − 2 initial conditions and input, we obtained y[2], and so on. This method is general and can be applied to a recursive difference equation of any order. It is interesting that the hardware realization of Eq. (9.5) depicted in Fig. 8.21 (with a = 0.5) generates the solution precisely in this (iterative) fashion.

DRILL 9.2 Iterative Solution to a Difference Equation

Using the iterative method, find the first three terms of y[n] for

y[n + 1] − 2y[n] = x[n]

The initial condition is y[−1] = 10 and the input x[n] = 2 starting at n = 0.

ANSWER

y[0] = 20, y[1] = 42, and y[2] = 86

We shall see in the future that the solution of a difference equation obtained in this direct (iterative) way is useful in many situations. Despite the many uses of this method, a closed-form solution of a difference equation is far more useful in the study of system behavior and its dependence on the input and various system parameters. For this reason, we shall develop a systematic procedure to analyze discrete-time systems along lines similar to those used for continuous-time systems.

OPERATOR NOTATION

In difference equations, it is convenient to use operator notation similar to that used in differential equations for the sake of compactness. In continuous-time systems, we used the operator D to denote the operation of differentiation. For discrete-time systems, we shall use the operator E to denote the operation of advancing a sequence by one time unit. Thus,

E x[n] = x[n+1]
E^2 x[n] = x[n+2]

Let us use this advance operator notation to represent several systems investigated earlier. The first-order difference equation of a savings account is [see Eq. (8.11)]

y[n+1] - ay[n] = x[n+1]

Using the operator notation, we can express this equation as

E y[n] - a y[n] = E x[n]

or

(E - a)y[n] = E x[n]

Similarly, the second-order book sales estimate described by Eq. (8.13),

y[n+2] + (1/4)y[n+1] + (1/16)y[n] = x[n+2]

can be expressed in operator notation as

(E^2 + (1/4)E + 1/16)y[n] = E^2 x[n]

The general Nth-order advance-form difference equation of Eq. (9.2) can be expressed as

(E^N + a_1 E^{N-1} + ... + a_{N-1} E + a_N)y[n] = (b_0 E^N + b_1 E^{N-1} + ... + b_{N-1} E + b_N)x[n]

or

Q[E] y[n] = P[E] x[n]    (9.7)

where Q[E] and P[E] are Nth-order polynomial operators

Q[E] = E^N + a_1 E^{N-1} + ... + a_{N-1} E + a_N
P[E] = b_0 E^N + b_1 E^{N-1} + ... + b_{N-1} E + b_N

RESPONSE OF LINEAR DISCRETE-TIME SYSTEMS

Following the procedure used for continuous-time systems, we can show that Eq. (9.7) is a linear equation (with constant coefficients). A system described by such an equation is a linear, time-invariant, discrete-time (LTID) system. We can verify, as in the case of LTIC systems (see the footnote on p. 151), that the general solution of Eq. (9.7) consists of zero-input and zero-state components.

9.3 SYSTEM RESPONSE TO INTERNAL CONDITIONS: THE ZERO-INPUT RESPONSE

The zero-input response y_0[n] is the solution of Eq. (9.7) with x[n] = 0; that is,

Q[E] y_0[n] = 0    (9.8)

Although we can solve this equation systematically, even a cursory examination points to the solution. This equation states that a linear combination of y_0[n] and advanced y_0[n] is zero, not for some values of n, but for all n. Such a situation is possible if and only if y_0[n] and advanced y_0[n] have the same form. Only an exponential function γ^n has this property, as the following equation indicates:

E^k {γ^n} = γ^{n+k} = γ^k γ^n

This expression shows that γ^n advanced by k units is a constant (γ^k) times γ^n. Therefore, the solution of Eq. (9.8) must be of the form†

y_0[n] = c γ^n    (9.9)

† A signal of the form n^m γ^n also satisfies this requirement under certain conditions (repeated roots), discussed later.


To determine c and γ, we substitute this solution in Eq. (9.8). Since E^k y_0[n] = y_0[n+k] = cγ^{n+k}, this produces

c Q[γ] γ^n = 0

For a nontrivial solution of this equation,

Q[γ] = 0    (9.10)

Our solution cγ^n [Eq. (9.9)] is correct, provided γ satisfies Eq. (9.10). Now, Q[γ] is an Nth-order polynomial and can be expressed in the factored form (assuming all distinct roots):

Q[γ] = (γ - γ_1)(γ - γ_2) ... (γ - γ_N) = 0

Clearly, γ has N solutions γ_1, γ_2, ..., γ_N and, therefore, Eq. (9.8) also has N solutions c_1 γ_1^n, c_2 γ_2^n, ..., c_N γ_N^n. In such a case, we have shown that the general solution is a linear combination of the N solutions (see the footnote on p. 153). Thus,

y_0[n] = c_1 γ_1^n + c_2 γ_2^n + ... + c_N γ_N^n

where γ_1, γ_2, ..., γ_N are the roots of Eq. (9.10) and c_1, c_2, ..., c_N are arbitrary constants determined from N auxiliary conditions, generally given in the form of initial conditions. The polynomial Q[γ] is called the characteristic polynomial of the system, and Q[γ] = 0 [Eq. (9.10)] is the characteristic equation of the system. Moreover, γ_1, γ_2, ..., γ_N, the roots of the characteristic equation, are called characteristic roots or characteristic values (also eigenvalues) of the system. The exponentials γ_i^n (i = 1, 2, ..., N) are the characteristic modes or natural modes of the system. A characteristic mode corresponds to each characteristic root of the system, and the zero-input response is a linear combination of the characteristic modes of the system.

EXAMPLE 9.4 Zero-Input Response of a Second-Order System with Real Roots

The LTID system described by the difference equation

y[n+2] - 0.6y[n+1] - 0.16y[n] = 5x[n+2]

has input x[n] = 4^{-n} u[n] and initial conditions y[-1] = 0 and y[-2] = 25/4. Determine the zero-input response y_0[n]. The zero-state response of this system is considered later, in Ex. 9.13.

The system equation in operator notation is

(E^2 - 0.6E - 0.16)y[n] = 5E^2 x[n]

The characteristic polynomial is

γ^2 - 0.6γ - 0.16 = (γ + 0.2)(γ - 0.8)

The characteristic equation is

(γ + 0.2)(γ - 0.8) = 0

The characteristic roots are γ_1 = -0.2 and γ_2 = 0.8. The zero-input response is

y_0[n] = c_1(-0.2)^n + c_2(0.8)^n    (9.11)

To determine the arbitrary constants c_1 and c_2, we set n = -1 and -2 in Eq. (9.11), then substitute y_0[-1] = 0 and y_0[-2] = 25/4 to obtain†

-5c_1 + (5/4)c_2 = 0
25c_1 + (25/16)c_2 = 25/4

Solving these two simultaneous equations yields c_1 = 1/5 and c_2 = 4/5. Therefore,

y_0[n] = (1/5)(-0.2)^n + (4/5)(0.8)^n    n >= 0

The reader can verify this solution by computing the first few terms using the iterative method (see Exs. 9.2 and 9.3).

† The initial conditions y[-1] and y[-2] are the conditions given on the total response. But because the input does not start until n = 0, the zero-state response is zero for n < 0. Hence, at n = -1 and -2, the total response consists of only the zero-input component so that y[-1] = y_0[-1] and y[-2] = y_0[-2].
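As a quick numerical check (a sketch in Python; the book does this by hand or in MATLAB), the closed form can be compared against the zero-input recursion y[n] = 0.6y[n-1] + 0.16y[n-2] started from the given initial conditions:

```python
# Ex. 9.4 check: y0[n] = (1/5)(-0.2)^n + (4/5)(0.8)^n should reproduce the
# zero-input recursion y[n] = 0.6 y[n-1] + 0.16 y[n-2] with y0[-2] = 25/4,
# y0[-1] = 0.
closed = lambda n: 0.2 * (-0.2) ** n + 0.8 * 0.8 ** n

y_prev2, y_prev1 = 25 / 4, 0.0          # y0[-2], y0[-1]
for n in range(10):                      # iterate y0[0] ... y0[9]
    y_n = 0.6 * y_prev1 + 0.16 * y_prev2
    assert abs(y_n - closed(n)) < 1e-9, n
    y_prev2, y_prev1 = y_prev1, y_n
print("closed form matches the zero-input recursion")
```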

DRILL 9.3 Zero-Input Response of First-Order Systems

Find the zero-input response for the systems described by the given first-order equations with initial condition y[-1] = 10. Verify the solutions by computing the first three terms iteratively.


DRILL 9.4 Zero-Input Response of a Second-Order System with Real Roots

Find the zero-input response of a system described by the equation

y[n] + 0.3y[n-1] - 0.1y[n-2] = x[n] + 2x[n-1]

The initial conditions are y_0[-1] = 1 and y_0[-2] = 33. Verify the solution by computing the first three terms iteratively.

ANSWER
y_0[n] = (0.2)^n + 2(-0.5)^n

Section 9.2.1 introduced the method of recursion to solve difference equations. As the next example illustrates, the zero-input response can likewise be found through recursion. Since it does not provide a closed-form solution, recursion is generally not the preferred method of solving difference equations.

EXAMPLE 9.5 Zero-Input Response by Recursion

Applying the initial conditions y[-1] = 2 and y[-2] = 1, use MATLAB to iteratively compute and then plot the zero-input response for the system described by (E^2 - 1.56E + 0.81)y[n] = (E + 3)x[n].

>> n = (-2:20)';
>> y = [1;2;zeros(length(n)-2,1)];
>> for k = 1:length(n)-2, y(k+2) = 1.56*y(k+1)-0.81*y(k); end;
>> stem(n,y,'k.'); xlabel('n'); ylabel('y[n]');
>> axis([-2 20 -1.5 2.5]);

Figure 9.2 Zero-input response for Ex. 9.5.

REPEATED ROOTS

So far we have assumed the system to have N distinct characteristic roots γ_1, γ_2, ..., γ_N with corresponding characteristic modes γ_1^n, γ_2^n, ..., γ_N^n. If two or more roots coincide (repeated roots), the form of the characteristic modes is modified. Direct substitution shows that for a root γ repeated r times (root of multiplicity r), the corresponding characteristic modes are γ^n, nγ^n, n^2 γ^n, ..., n^{r-1} γ^n. Thus, if the characteristic equation of a system is

Q[γ] = (γ - γ_1)^r (γ - γ_{r+1})(γ - γ_{r+2}) ... (γ - γ_N)

then the zero-input response of the system is

y_0[n] = (c_1 + c_2 n + c_3 n^2 + ... + c_r n^{r-1}) γ_1^n + c_{r+1} γ_{r+1}^n + c_{r+2} γ_{r+2}^n + ... + c_N γ_N^n

EXAMPLE 9.6 Zero-Input Response of a Second-Order System with Repeated Roots

Consider a second-order difference equation with repeated roots:

(E^2 + 6E + 9)y[n] = (2E^2 + 6E)x[n]

Determine the zero-input response y_0[n] if the initial conditions are y_0[-1] = -1/3 and y_0[-2] = -2/9.

The characteristic polynomial is γ^2 + 6γ + 9 = (γ + 3)^2, and we have a repeated characteristic root at γ = -3. The characteristic modes are (-3)^n and n(-3)^n. Hence, the zero-input response is

y_0[n] = (c_1 + c_2 n)(-3)^n

Although we can determine the constants c_1 and c_2 from the initial conditions following a procedure similar to Ex. 9.4, we instead use MATLAB to perform the needed calculations.

>> c = inv([(-3)^(-1) -1*(-3)^(-1); (-3)^(-2) -2*(-3)^(-2)])*[-1/3;-2/9]
c =
     4
     3

Thus, the zero-input response is

y_0[n] = (4 + 3n)(-3)^n
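The 2x2 solve in the MATLAB snippet above needs no toolbox; a small Python sketch using Cramer's rule (Sec. B.4) on the same two initial-condition equations:

```python
# Ex. 9.6 check: solve for c1, c2 in y0[n] = (c1 + c2*n)(-3)^n from
# y0[-1] = -1/3 and y0[-2] = -2/9.
# Row n: c1*(-3)^n + c2*n*(-3)^n = y0[n], evaluated at n = -1 and n = -2.
a11, a12 = (-3.0) ** -1, -1 * (-3.0) ** -1
a21, a22 = (-3.0) ** -2, -2 * (-3.0) ** -2
b1, b2 = -1 / 3, -2 / 9

det = a11 * a22 - a12 * a21
c1 = (b1 * a22 - a12 * b2) / det     # Cramer's rule
c2 = (a11 * b2 - b1 * a21) / det
print(c1, c2)   # c1 ≈ 4, c2 ≈ 3, so y0[n] = (4 + 3n)(-3)^n
```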

COMPLEX ROOTS

As in the case of continuous-time systems, the complex roots of a discrete-time system will occur in pairs of conjugates if the system equation coefficients are real. Complex roots can be treated exactly as we would treat real roots. However, as in the case of continuous-time systems, we can also use the real form of solution as an alternative.

First, we express the complex conjugate roots γ and γ* in polar form. If |γ| is the magnitude and β is the angle of γ, then

γ = |γ| e^{jβ}    and    γ* = |γ| e^{-jβ}

The zero-input response is given by

y_0[n] = c_1 γ^n + c_2 (γ*)^n = c_1 |γ|^n e^{jβn} + c_2 |γ|^n e^{-jβn}

For a real system, c_1 and c_2 must be conjugates so that y_0[n] is a real function of n. Let

c_1 = (c/2) e^{jθ}    and    c_2 = (c/2) e^{-jθ}

Then

y_0[n] = c |γ|^n cos(βn + θ)    (9.12)

where c and θ are arbitrary constants determined from the auxiliary conditions. This is the solution in real form, which avoids dealing with complex numbers.

EXAMPLE 9.7 Zero-Input Response of a Second-Order System with Complex Roots

Consider a second-order difference equation with complex-conjugate roots:

(E^2 - 1.56E + 0.81)y[n] = (E + 3)x[n]

Determine the zero-input response y_0[n] if the initial conditions are y_0[-1] = 2 and y_0[-2] = 1.

The characteristic polynomial is (γ^2 - 1.56γ + 0.81) = (γ - 0.78 - j0.45)(γ - 0.78 + j0.45). The characteristic roots are 0.78 ± j0.45; that is, 0.9e^{±j(π/6)}. We could immediately write the solution as

y_0[n] = c(0.9)^n e^{jπn/6} + c*(0.9)^n e^{-jπn/6}

Setting n = -1 and -2 and using the initial conditions y_0[-1] = 2 and y_0[-2] = 1, we find

c = 1.1550 - j0.2025 = 1.1726e^{-j0.1735}    and    c* = 1.1550 + j0.2025 = 1.1726e^{j0.1735}

>> gamma = roots([1 -1.56 0.81]);
>> c = inv([gamma(1)^(-1) gamma(2)^(-1); ...
            gamma(1)^(-2) gamma(2)^(-2)])*[2;1]
c =
   1.1550 - 0.2025i
   1.1550 + 0.2025i

Alternately, we could also find the unknown coefficients by using the real form of the solution, as given in Eq. (9.12). In the present case, the roots are 0.9e^{±j(π/6)}. Hence, |γ| = 0.9 and β = π/6, and the zero-input response, according to Eq. (9.12), is given by

y_0[n] = c(0.9)^n cos((π/6)n + θ)

To determine the constants c and θ, we set n = -1 and -2 in this equation and substitute the initial conditions y_0[-1] = 2 and y_0[-2] = 1 to obtain

2 = (c/0.9) cos(-π/6 + θ) = (c/0.9)[(√3/2) cos θ + (1/2) sin θ]
1 = (c/0.81) cos(-π/3 + θ) = (c/0.81)[(1/2) cos θ + (√3/2) sin θ]

or

(√3/1.8) c cos θ + (1/1.8) c sin θ = 2
(1/1.62) c cos θ + (√3/1.62) c sin θ = 1

These are two simultaneous equations in two unknowns c cos θ and c sin θ. Solution of these equations yields

c cos θ = 2.308
c sin θ = -0.397

Dividing c sin θ by c cos θ yields

tan θ = -0.397/2.308 = -0.172
θ = tan^{-1}(-0.172) = -0.17 rad

Substituting θ = -0.17 radian in c cos θ = 2.308 yields c = 2.34 and

y_0[n] = 2.34(0.9)^n cos((π/6)n - 0.17)    n >= 0

Observe that here we have used radian units for both β and θ. We also could have used the degree unit, although this practice is not recommended. The important consideration is to be consistent and to use the same units for both β and θ.
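Because the constants c = 2.34 and θ = -0.17 are rounded to two or three figures, a numerical check must use a loose tolerance. A Python sketch comparing the real-form answer with the zero-input recursion y[n] = 1.56y[n-1] - 0.81y[n-2]:

```python
import math

# Ex. 9.7 check: y0[n] = 2.34(0.9)^n cos(pi*n/6 - 0.17) versus the zero-input
# recursion started from y0[-2] = 1, y0[-1] = 2. The quoted constants are
# rounded, so agreement is only to about two decimal places.
closed = lambda n: 2.34 * 0.9 ** n * math.cos(math.pi * n / 6 - 0.17)

y2, y1 = 1.0, 2.0          # y0[-2], y0[-1]
for n in range(8):
    yn = 1.56 * y1 - 0.81 * y2
    assert abs(yn - closed(n)) < 0.02, (n, yn, closed(n))
    y2, y1 = y1, yn
print("rounded real-form solution tracks the recursion")
```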



DRILL 9.5 Zero-Input Response of a Second-Order System with Complex Roots

Find the zero-input response of a system described by the equation

y[n] + 4y[n-2] = 2x[n]

The initial conditions are y_0[-1] = -1/(2√2) and y_0[-2] = 1/(4√2). Verify the solution by computing the first three terms iteratively.

ANSWER
y_0[n] = (2)^n cos((π/2)n - 3π/4)

9.4 THE UNIT IMPULSE RESPONSE h[n]

Consider an Nth-order system specified by the equation

(E^N + a_1 E^{N-1} + ... + a_{N-1} E + a_N)y[n] = (b_0 E^N + b_1 E^{N-1} + ... + b_{N-1} E + b_N)x[n]

or

Q[E] y[n] = P[E] x[n]

The unit impulse response h[n] is the solution of this equation for the input δ[n] with all the initial conditions zero; that is,

Q[E] h[n] = P[E] δ[n]    (9.13)

subject to initial conditions

h[-1] = h[-2] = ... = h[-N] = 0

Equation (9.13) can be solved to determine h[n] iteratively or in a closed form. The following example demonstrates the iterative solution.

EXAMPLE 9.8 Iterative Determination of the Impulse Response

Iteratively compute the first two values of the impulse response h[n] of a system described by the equation

y[n] - 0.6y[n-1] - 0.16y[n-2] = 5x[n]

To determine the unit impulse response, we substitute δ[n] for x[n] and h[n] for y[n] in the system's difference equation to obtain

h[n] - 0.6h[n-1] - 0.16h[n-2] = 5δ[n]

subject to zero initial state; that is, h[-1] = h[-2] = 0.

Setting n = 0 in this equation yields

h[0] - 0.6(0) - 0.16(0) = 5(1)    ==>    h[0] = 5

Setting n = 1 in the same equation and using h[0] = 5, we obtain

h[1] - 0.6(5) - 0.16(0) = 5(0)    ==>    h[1] = 3

Continuing this way, we can determine any number of terms of h[n]. Unfortunately, such a solution does not yield a closed-form expression for h[n]. Nevertheless, determining a few values of h[n] can be useful in determining the closed-form solution, as the following development shows.
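The iteration is one line per sample in code; a Python sketch of this computation:

```python
# Ex. 9.8: iterate h[n] = 0.6 h[n-1] + 0.16 h[n-2] + 5*delta[n] with zero
# initial state h[-1] = h[-2] = 0.
delta = lambda n: 1.0 if n == 0 else 0.0

h2, h1 = 0.0, 0.0          # h[-2], h[-1]
h = []
for n in range(6):
    hn = 0.6 * h1 + 0.16 * h2 + 5 * delta(n)
    h.append(hn)
    h2, h1 = h1, hn
print(h[:2])   # [5.0, 3.0]
```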

9.4.1 The Closed-Form Solution of h[n]

Recall that h[n] is the system response to input δ[n], which is zero for n > 0. We know that when the input is zero, only the characteristic modes can be sustained by the system. Therefore, h[n] must be made up of characteristic modes for n > 0. At n = 0, it may have some nonzero value A_0 so that a general form of h[n] can be expressed as†

h[n] = A_0 δ[n] + y_c[n] u[n]    (9.14)

where y_c[n] is a linear combination of the characteristic modes. We now substitute Eq. (9.14) in Eq. (9.13) to obtain Q[E](A_0 δ[n] + y_c[n]u[n]) = P[E]δ[n]. Because y_c[n] is made up of characteristic modes, Q[E]y_c[n]u[n] = 0, and we obtain A_0 Q[E]δ[n] = P[E]δ[n], that is,

A_0(δ[n+N] + a_1 δ[n+N-1] + ... + a_N δ[n]) = b_0 δ[n+N] + ... + b_N δ[n]

Setting n = 0 in this equation and using the fact that δ[m] = 0 for all m ≠ 0, and δ[0] = 1, we obtain

A_0 a_N = b_N    ==>    A_0 = b_N/a_N    (9.15)

Hence,‡

h[n] = (b_N/a_N) δ[n] + y_c[n] u[n]    (9.16)

The N unknown coefficients in y_c[n] (on the right-hand side) can be determined from a knowledge of N values of h[n]. Fortunately, it is a straightforward task to determine values of h[n] iteratively,

† We assume that the term y_c[n] consists of characteristic modes for n > 0 only. To reflect this behavior, the characteristic terms should be expressed in the form γ_j^n u[n-1]. But because u[n-1] = u[n] - δ[n], c_j γ_j^n u[n-1] = c_j γ_j^n u[n] - c_j δ[n], and y_c[n] can be expressed in terms of exponentials γ_j^n u[n] (which start at n = 0), plus an impulse at n = 0.

‡ If a_N = 0, then A_0 cannot be determined by Eq. (9.15). In such a case, we show in Sec. 9.9 that h[n] is of the form A_0 δ[n] + A_1 δ[n-1] + y_c[n]u[n]. We have here N + 2 unknowns, which can be determined from N + 2 values h[0], h[1], ..., h[N+1] found iteratively.

as demonstrated in Ex. 9.8. We compute N values h[0], h[1], h[2], ..., h[N-1] iteratively. Now, setting n = 0, 1, 2, ..., N-1 in Eq. (9.16), we can determine the N unknowns in y_c[n]. This point will become clear in the following example.

EXAMPLE 9.9 Closed-Form Determination of the Impulse Response

Determine the unit impulse response h[n] for the system in Ex. 9.8 specified by the equation

y[n] - 0.6y[n-1] - 0.16y[n-2] = 5x[n]

This equation can be expressed in the advance form as

y[n+2] - 0.6y[n+1] - 0.16y[n] = 5x[n+2]

or in advance operator form as

(E^2 - 0.6E - 0.16)y[n] = 5E^2 x[n]

The characteristic polynomial is

γ^2 - 0.6γ - 0.16 = (γ + 0.2)(γ - 0.8)

The characteristic modes are (-0.2)^n and (0.8)^n. Therefore,

h[n] = (b_N/a_N) δ[n] + [c_1(-0.2)^n + c_2(0.8)^n] u[n]

Inspecting the system difference equation, we see that a_N = -0.16 and b_N = 0. Therefore, according to Eq. (9.16),

h[n] = [c_1(-0.2)^n + c_2(0.8)^n] u[n]

To determine c_1 and c_2, we need to find two values of h[n] iteratively. From Ex. 9.8, we know that h[0] = 5 and h[1] = 3. Setting n = 0 and 1 in our expression for h[n] and using the fact that h[0] = 5 and h[1] = 3, we obtain

c_1 + c_2 = 5
-0.2c_1 + 0.8c_2 = 3

Solving yields c_1 = 1 and c_2 = 4. Therefore,

h[n] = [(-0.2)^n + 4(0.8)^n] u[n]
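A Python sketch confirming that this closed form agrees with the recursion of Ex. 9.8 for as many terms as we care to check:

```python
# Ex. 9.9 check: closed-form h[n] = [(-0.2)^n + 4(0.8)^n] u[n] against the
# recursion h[n] = 0.6 h[n-1] + 0.16 h[n-2] + 5*delta[n].
closed = lambda n: (-0.2) ** n + 4 * 0.8 ** n

h2, h1 = 0.0, 0.0          # zero initial state
for n in range(12):
    hn = 0.6 * h1 + 0.16 * h2 + (5.0 if n == 0 else 0.0)
    assert abs(hn - closed(n)) < 1e-9, n
    h2, h1 = h1, hn
print("closed-form impulse response verified against the recursion")
```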


DRILL 9.6 Closed-Form Determination of the Impulse Response

Find h[n], the unit impulse response of the LTID systems specified by the following equations:

(a) y[n+1] - y[n] = x[n]
(b) y[n] - 5y[n-1] + 6y[n-2] = 8x[n-1] - 19x[n-2]
(c) y[n+2] - 4y[n+1] + 4y[n] = 2x[n+2] - 2x[n+1]
(d) y[n] = x[n] - 2x[n-1]

ANSWERS
(a) h[n] = u[n-1]
(b) h[n] = -(19/6)δ[n] + [(3/2)(2)^n + (5/3)(3)^n] u[n]
(c) h[n] = (2 + n)2^n u[n]
(d) h[n] = δ[n] - 2δ[n-1]

EXAMPLE 9.10 Filtering Perspective of the Unit Impulse Response

Use the MATLAB filter command to solve Ex. 9.9.

There are several ways to find the impulse response using MATLAB. In this method, we first specify the unit impulse function, which will serve as our input. Vectors a and b are created to specify the system. The filter command is then used to determine the impulse response. In fact, this method can be used to determine the zero-state response for any input.

>> n = (0:19);
>> delta = @(n) 1.0.*(n==0);
>> a = [1 -0.6 -0.16]; b = [5 0 0];
>> h = filter(b,a,delta(n));
>> stem(n,h,'k.'); xlabel('n'); ylabel('h[n]');

Figure 9.3 Impulse response for Ex. 9.10.



Comment. Although it is relatively simple to determine the impulse response h[n] by using the procedure in this section, in Chs. 11 and 12, we shall discuss much simpler methods that utilize the z-transform.

9.5 SYSTEM RESPONSE TO EXTERNAL INPUT: THE ZERO-STATE RESPONSE

The zero-state response y[n] is the system response to an input x[n] when the system is in the zero state. In this section, we shall assume that systems are in the zero state unless mentioned otherwise so that the zero-state response will be the total response of the system. Here, we follow the procedure parallel to that used in the continuous-time case by expressing an arbitrary input x[n] as a sum of impulse components. A signal x[n] in Fig. 9.4a can be expressed as a sum of impulse components, such as those depicted in Figs. 9.4b-9.4f. The component of x[n] at n = m is x[m]δ[n-m], and x[n] is the sum of all these components summed from m = -∞ to ∞. Therefore,

x[n] = x[0]δ[n] + x[1]δ[n-1] + x[2]δ[n-2] + ... + x[-1]δ[n+1] + x[-2]δ[n+2] + ...
     = Σ_{m=-∞}^{∞} x[m]δ[n-m]    (9.17)

For a linear system, if we know the system response to impulse δ[n], we can obtain the system response to any arbitrary input by summing the system response to various impulse components. Let h[n] be the system response to impulse input δ[n]. We shall use the notation

x[n] ==> y[n]

to indicate the input and the corresponding response of the system. Thus, if

δ[n] ==> h[n]

then because of time invariance

δ[n-m] ==> h[n-m]

and because of linearity

x[m]δ[n-m] ==> x[m]h[n-m]

and again because of linearity

Σ_{m=-∞}^{∞} x[m]δ[n-m] ==> Σ_{m=-∞}^{∞} x[m]h[n-m]

In other words, the (zero-state) response y[n] is given by

y[n] = Σ_{m=-∞}^{∞} x[m]h[n-m]    (9.18)

Figure 9.4 Representation of a signal x[n] in terms of its impulse components.

The summation on the right-hand side of Eq. (9.18) is known as the convolution sum of x[n] and h[n], denoted x[n] * h[n]. Therefore, if x[n] and h[n] are both causal, the product x[m]h[n-m] = 0 for m < 0 and for m > n, and it is nonzero only for the range 0 <= m <= n. Therefore, Eq. (9.18) in this case reduces to

y[n] = Σ_{m=0}^{n} x[m]h[n-m]    (9.20)

We shall evaluate the convolution sum first by an analytical method and later with graphical aid.

EXAMPLE 9.11 Convolution of Causal Signals

Determine c[n] = x[n] * g[n] for

x[n] = (0.8)^n u[n]    and    g[n] = (0.3)^n u[n]

We have

c[n] = Σ_{m=-∞}^{∞} x[m]g[n-m]

Note that

x[m] = (0.8)^m u[m]    and    g[n-m] = (0.3)^{n-m} u[n-m]

Both x[n] and g[n] are causal. Therefore [see Eq. (9.20)],

c[n] = Σ_{m=0}^{n} x[m]g[n-m] = Σ_{m=0}^{n} (0.8)^m u[m] (0.3)^{n-m} u[n-m]

In this summation, m lies between 0 and n (0 <= m <= n). Therefore, if n >= 0, then both m and n - m >= 0 so that u[m] = u[n-m] = 1. If n < 0, m is negative because m lies between 0 and n, and u[m] = 0. Therefore,

c[n] = Σ_{m=0}^{n} (0.8)^m (0.3)^{n-m},  n >= 0,    and    c[n] = 0,  n < 0

or

c[n] = (0.3)^n Σ_{m=0}^{n} (0.8/0.3)^m u[n]

This is a geometric progression with common ratio (0.8/0.3). From Sec. B.8.3, we have

c[n] = (0.3)^n [(0.8/0.3)^{n+1} - 1]/[(0.8/0.3) - 1] u[n]
     = [(0.8)^{n+1} - (0.3)^{n+1}]/(0.8 - 0.3) u[n]
     = 2[(0.8)^{n+1} - (0.3)^{n+1}] u[n]
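The closed form can be checked against a brute-force evaluation of Eq. (9.20); a Python sketch:

```python
# Ex. 9.11 check: direct convolution sum of (0.8)^n u[n] and (0.3)^n u[n]
# versus the closed form c[n] = 2[(0.8)^{n+1} - (0.3)^{n+1}] u[n].
def conv_at(n):
    # Eq. (9.20) for causal signals: c[n] = sum_{m=0}^{n} (0.8)^m (0.3)^{n-m}
    return sum(0.8 ** m * 0.3 ** (n - m) for m in range(n + 1))

for n in range(10):
    closed = 2 * (0.8 ** (n + 1) - 0.3 ** (n + 1))
    assert abs(conv_at(n) - closed) < 1e-12, n
print("closed-form convolution verified")
```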

EXAMPLE 9.12 Filtering Perspective of the Zero-State Response

Use the MATLAB filter command to compute and sketch the zero-state response for the system described by (E^2 + 0.5E - 1)y[n] = (2E^2 + 6E)x[n] and the input x[n] = 4^{-n} u[n].

We solve this problem using the same approach as Ex. 9.10. Although the input is bounded and quickly decays to zero, the system itself is unstable and an unbounded output results.

>> n = (0:11); x = @(n) 4.^(-n).*(n>=0);
>> a = [1 0.5 -1]; b = [2 6 0];
>> y = filter(b,a,x(n));
>> clf; stem(n,y,'k.'); xlabel('n'); ylabel('y[n]');
>> axis([-0.5 11.5 -20 25]);

">:

0 -20



T .. T • T 2



r

6

4

,,

l 8

" 10

n

Figure 9.5 Zero-state response for Ex. 9.12.

DRILL 9.7 Convolution of Causal Signals

Show that (0.8)^n u[n] * u[n] = 5[1 - (0.8)^{n+1}] u[n].

CONVOLUTION SUM FROM A TABLE

Just as in the continuous-time case, we have prepared a table (Table 9.1) from which convolution sums may be determined directly for a variety of signal pairs. For example, the convolution in Ex. 9.11 can be read directly from this table (pair 4) as

(0.8)^n u[n] * (0.3)^n u[n] = [(0.8)^{n+1} - (0.3)^{n+1}]/(0.8 - 0.3) u[n] = 2[(0.8)^{n+1} - (0.3)^{n+1}] u[n]

We shall demonstrate the use of the convolution table in the following example.

TABLE 9.1 Select Convolution Sums

No.  x1[n]                      x2[n]          x1[n] * x2[n] = x2[n] * x1[n]

1    δ[n-k]                     x[n]           x[n-k]
2    γ^n u[n]                   u[n]           [(1 - γ^{n+1})/(1 - γ)] u[n]
3    u[n]                       u[n]           (n + 1) u[n]
4    γ1^n u[n]                  γ2^n u[n]      [(γ1^{n+1} - γ2^{n+1})/(γ1 - γ2)] u[n],  γ1 ≠ γ2
5    u[n]                       n u[n]         [n(n + 1)/2] u[n]
6    γ^n u[n]                   n u[n]         {[γ(γ^n - 1) + n(1 - γ)]/(1 - γ)^2} u[n]
7    n u[n]                     n u[n]         (1/6) n(n - 1)(n + 1) u[n]
8    γ^n u[n]                   γ^n u[n]       (n + 1) γ^n u[n]
9    n γ1^n u[n]                γ2^n u[n]      {[γ1 γ2^{n+1} - (n + 1) γ1^{n+1} γ2 + n γ1^{n+2}]/(γ2 - γ1)^2} u[n]
10   |γ1|^n cos(βn + θ) u[n]    |γ2|^n u[n]    (1/R)[|γ1|^{n+1} cos((n + 1)β + θ - φ) - |γ2|^{n+1} cos(θ - φ)] u[n],
                                               where R = [|γ1|^2 + |γ2|^2 - 2|γ1||γ2| cos β]^{1/2}
                                               and φ = tan^{-1}[|γ1| sin β / (|γ1| cos β - |γ2|)]
11   γ1^n u[-(n + 1)]           γ2^n u[n]      [1/(γ1 - γ2)][γ1^{n+1} u[-(n + 1)] + γ2^{n+1} u[n]],  |γ1| > |γ2|
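Entries of such a table are easy to spot-check by brute-force convolution; a Python sketch verifying pairs 3, 4, and 8 (the values γ1 = 0.7 and γ2 = -0.4 are arbitrary choices for illustration):

```python
# Spot-check of Table 9.1 entries by direct evaluation of the convolution sum
# for causal sequences.
def conv(x, g, N):
    # returns [c[0], ..., c[N-1]] with c[n] = sum_{m=0}^{n} x[m] g[n-m]
    return [sum(x(m) * g(n - m) for m in range(n + 1)) for n in range(N)]

N, g1, g2 = 8, 0.7, -0.4
u = lambda n: 1.0

# pair 3: u[n] * u[n] = (n + 1) u[n]
assert all(abs(c - (n + 1)) < 1e-12 for n, c in enumerate(conv(u, u, N)))

# pair 4: g1^n u[n] * g2^n u[n] = (g1^{n+1} - g2^{n+1})/(g1 - g2) u[n]
c4 = conv(lambda m: g1 ** m, lambda m: g2 ** m, N)
assert all(abs(c - (g1 ** (n + 1) - g2 ** (n + 1)) / (g1 - g2)) < 1e-12
           for n, c in enumerate(c4))

# pair 8: g1^n u[n] * g1^n u[n] = (n + 1) g1^n u[n]
c8 = conv(lambda m: g1 ** m, lambda m: g1 ** m, N)
assert all(abs(c - (n + 1) * g1 ** n) < 1e-12 for n, c in enumerate(c8))
print("table pairs 3, 4, and 8 check out")
```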


EXAMPLE 9.13 Convolution by Tables

Using Table 9.1, find the (zero-state) response y[n] of an LTID system described by the equation

y[n+2] - 0.6y[n+1] - 0.16y[n] = 5x[n+2]

if the input x[n] = 4^{-n} u[n].

The input can be expressed as x[n] = 4^{-n} u[n] = (1/4)^n u[n] = (0.25)^n u[n]. The unit impulse response of this system, obtained in Ex. 9.9, is

h[n] = [(-0.2)^n + 4(0.8)^n] u[n]

Therefore,

y[n] = x[n] * h[n]
     = (0.25)^n u[n] * [(-0.2)^n u[n] + 4(0.8)^n u[n]]
     = (0.25)^n u[n] * (-0.2)^n u[n] + (0.25)^n u[n] * 4(0.8)^n u[n]

We use pair 4 (Table 9.1) to find the foregoing convolution sums.

y[n] = [ ((0.25)^{n+1} - (-0.2)^{n+1})/(0.25 - (-0.2)) + 4((0.25)^{n+1} - (0.8)^{n+1})/(0.25 - 0.8) ] u[n]
     = (2.22[(0.25)^{n+1} - (-0.2)^{n+1}] - 7.27[(0.25)^{n+1} - (0.8)^{n+1}]) u[n]
     = [-5.05(0.25)^{n+1} - 2.22(-0.2)^{n+1} + 7.27(0.8)^{n+1}] u[n]

Recognizing that γ^{n+1} = γ·γ^n, we can express y[n] as

y[n] = [-1.26(0.25)^n + 0.444(-0.2)^n + 5.81(0.8)^n] u[n]
     = [-1.26(4)^{-n} + 0.444(-0.2)^n + 5.81(0.8)^n] u[n]
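Since the coefficients above are rounded to three figures, a direct-sum check (loose tolerance) confirms the result; a Python sketch:

```python
# Ex. 9.13 check: direct convolution of x[n] = (0.25)^n u[n] with
# h[n] = [(-0.2)^n + 4(0.8)^n] u[n], compared with the quoted result
# y[n] ≈ -1.26(0.25)^n + 0.444(-0.2)^n + 5.81(0.8)^n (rounded coefficients).
x = lambda m: 0.25 ** m
h = lambda m: (-0.2) ** m + 4 * 0.8 ** m

for n in range(8):
    direct = sum(x(m) * h(n - m) for m in range(n + 1))
    quoted = -1.26 * 0.25 ** n + 0.444 * (-0.2) ** n + 5.81 * 0.8 ** n
    assert abs(direct - quoted) < 0.02, (n, direct, quoted)
print("tabulated result matches the direct convolution sum")
```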

DRILL 9.8 Convolution by Tables

Use Table 9.1 to show that

(a) (0.8)^{n+1} u[n] * u[n] = 4[1 - 0.8(0.8)^n] u[n]
(b) n 3^{-n} u[n] * (0.2)^n u[n] = (15/4)[(0.2)^n - (1 - (2/3)n) 3^{-n}] u[n]
(c) e^{-n} u[n] * 2^{-n} u[n] = [2/(2 - e)][e^{-n} - (e/2) 2^{-n}] u[n]

RESPONSE TO COMPLEX INPUTS

As in the case of real continuous-time systems, we can show that for an LTID system with real h[n], if the input and the output are expressed in terms of their real and imaginary parts, then the real part of the input generates the real part of the response and the imaginary part of the input generates the imaginary part. Thus, if

x[n] = x_r[n] + j x_i[n]    and    y[n] = y_r[n] + j y_i[n]

using the right-directed arrow to indicate the input-output pair, we can show that

x_r[n] ==> y_r[n]    and    x_i[n] ==> y_i[n]    (9.21)

The proof is similar to that used to derive Eq. (2.31) for LTIC systems.

MULTIPLE INPUTS

Multiple inputs to LTI systems can be treated by applying the superposition principle. Each input is considered separately, with all other inputs assumed to be zero. The sum of all these individual system responses constitutes the total system output when all the inputs are applied simultaneously.

DRILL 9.9 Response to Multiple Inputs

Show that the system described by y[n] - 0.6y[n-1] - 0.16y[n-2] = 5x[n] responds to the input x[n] = δ[n] + (4)^{-n} u[n] with the zero-state response y[n] = [-1.26(4)^{-n} + 1.444(-0.2)^n + 9.81(0.8)^n] u[n].

9.5.1 Graphical Procedure for the Convolution Sum

The steps in evaluating the convolution sum are parallel to those followed in evaluating the convolution integral. The convolution sum of causal signals x[n] and g[n] is given by

c[n] = Σ_{m=0}^{n} x[m]g[n-m]

We first plot x[m] and g[n-m] as functions of m (not n), because the summation is over m. Functions x[m] and g[m] are the same as x[n] and g[n], plotted, respectively, as functions of m (see Fig. 9.6). The convolution operation can be performed as follows:

1. Invert g[m] about the vertical axis (m = 0) to obtain g[-m] (Fig. 9.6d). Figure 9.6e shows both x[m] and g[-m].
2. Shift g[-m] by n units to obtain g[n-m]. For n > 0, the shift is to the right (delay); for n < 0, the shift is to the left (advance). Figure 9.6f shows g[n-m] for n > 0; for n < 0, see Fig. 9.6g.
3. Next we multiply x[m] and g[n-m] and add all the products to obtain c[n]. The procedure is repeated for each value of n over the range -∞ to ∞.

We shall demonstrate by an example the graphical procedure for finding the convolution sum. Although both the functions in this example are causal, this procedure is applicable to the general case.

EXAMPLE 9.14 Graphical Procedure for the Convolution Sum

Find c[n] = x[n] * g[n], where x[n] and g[n] are depicted in Figs. 9.6a and 9.6b, respectively.

We are given

x[n] = (0.8)^n    and    g[n] = (0.3)^n

Therefore,

x[m] = (0.8)^m    and    g[n-m] = (0.3)^{n-m}

Figure 9.6f shows the general situation for n >= 0. The two functions x[m] and g[n-m] overlap over the interval 0 <= m <= n. Therefore,

c[n] = Σ_{m=0}^{n} x[m]g[n-m]
     = Σ_{m=0}^{n} (0.8)^m (0.3)^{n-m}
     = (0.3)^n Σ_{m=0}^{n} (0.8/0.3)^m
     = 2[(0.8)^{n+1} - (0.3)^{n+1}]    (see Sec. B.8.3)

For n < 0, there is no overlap between x[m] and g[n-m], as shown in Fig. 9.6g, so that

c[n] = 0,    n < 0

n >> >> >>

x = [0 1 2 3 2 1]; g = [1 1 1 1 1 1]; n = (0:1:length(x)+length(g)-2); c = conv(x,g); stem(n,c,'k. '); xlabel('n'); ylabel('c[n]'); axis([-0.5 10.5 0 10]); 10 0

I

,. 0

0

,. 0

0

T

4

2

6

8

T

lO

n

Figure 9.9 Convolution result for Ex. 9.16.
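MATLAB's conv implements the same finite convolution sum that can be written in a few lines of any language; a Python sketch reproducing this computation:

```python
# Python analogue of the MATLAB conv computation above: convolve the finite
# sequences x[n] and g[n] directly from the convolution-sum definition.
def conv(x, g):
    # full convolution, length len(x) + len(g) - 1 (as MATLAB's conv returns)
    c = [0] * (len(x) + len(g) - 1)
    for m, xm in enumerate(x):
        for k, gk in enumerate(g):
            c[m + k] += xm * gk
    return c

x = [0, 1, 2, 3, 2, 1]
g = [1, 1, 1, 1, 1, 1]
print(conv(x, g))   # [0, 1, 3, 6, 8, 9, 9, 8, 6, 3, 1]
```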

9.5.2 Interconnected Systems

As with the continuous-time case, we can determine the impulse response of systems connected in parallel (Fig. 9.10a) and cascade (Figs. 9.10b, 9.10c). We can use arguments identical to those used for the continuous-time systems in Sec. 2.4.3 to show that if two LTID systems S_1 and S_2 with impulse responses h_1[n] and h_2[n], respectively, are connected in parallel, the composite parallel system impulse response is h_1[n] + h_2[n]. Similarly, if these systems are connected in cascade, the impulse response of the composite system is h_1[n] * h_2[n]. Moreover, because h_1[n] * h_2[n] = h_2[n] * h_1[n], linear systems commute. Their orders can be interchanged without affecting the composite system behavior.

INVERSE SYSTEMS

If the two systems in cascade are the inverse of each other, with impulse responses h[n] and h_i[n], respectively, then the impulse response of the cascade of these systems is h[n] * h_i[n]. But, the cascade of a system with its inverse is an identity system, whose output is the same as the input. Hence, the unit impulse response of an identity system is δ[n]. Consequently,

h[n] * h_i[n] = δ[n]

As an example, we show that an accumulator system and a backward difference system are the inverse of each other. An accumulator system is specified by†

y[n] = Σ_{k=-∞}^{n} x[k]    (9.22)

† Equations (9.22) and (9.23) are identical to Eqs. (8.17) and (8.15), respectively, with T = 1.


Figure 9.10 Interconnected systems.

The backward difference system is specified by

y[n] = x[n] - x[n-1]    (9.23)

From Eq. (9.22), we find h_acc[n], the impulse response of the accumulator, as

h_acc[n] = Σ_{k=-∞}^{n} δ[k] = u[n]

Similarly, from Eq. (9.23), h_bdr[n], the impulse response of the backward difference system, is given by

h_bdr[n] = δ[n] - δ[n-1]

We can verify that

h_acc[n] * h_bdr[n] = u[n] * (δ[n] - δ[n-1]) = u[n] - u[n-1] = δ[n]
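The inverse relationship is easy to confirm numerically; a Python sketch cascading the two systems on an arbitrary test signal (the signal values are illustrative only):

```python
# Check that the accumulator and the backward difference system are inverses:
# cascading them in either order returns the original input.
def accumulate(x):
    # y[n] = sum_{k<=n} x[k] for a causal finite record
    out, s = [], 0.0
    for v in x:
        s += v
        out.append(s)
    return out

def backward_diff(x):
    # y[n] = x[n] - x[n-1], with x[-1] taken as 0
    return [v - (x[i - 1] if i > 0 else 0.0) for i, v in enumerate(x)]

x = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0]
assert backward_diff(accumulate(x)) == x
assert accumulate(backward_diff(x)) == x
print("accumulator and backward difference are inverse systems")
```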


Roughly speaking, a discrete-time accumulator is analogous to a continuous-time integrator, and a backward difference system is analogous to a differentiator. We have already encountered examples of these systems in Exs. 8.12 and 8.13 (digital differentiator and integrator).

SYSTEM RESPONSE TO Σ_{k=-∞}^{n} x[k]

Figure 9.10d shows a cascade of two LTID systems: a system S with impulse response h[n], followed by an accumulator. Figure 9.10e shows a cascade of the same two systems in reverse order: an accumulator followed by S. In Fig. 9.10d, if the input x[n] to S results in the output y[n], then the output of the system in Fig. 9.10d is the sum Σ y[k]. In Fig. 9.10e, the output of the accumulator is the sum Σ x[k]. Because the output of the system in Fig. 9.10e is identical to that of the system in Fig. 9.10d, it follows that

if x[n] ==> y[n],    then    Σ_{k=-∞}^{n} x[k] ==> Σ_{k=-∞}^{n} y[k]

If we let x[n] = δ[n] and y[n] = h[n], we find that g[n], the unit step response of an LTID system with impulse response h[n], is given by

g[n] = Σ_{k=-∞}^{n} h[k]    (9.24)

The reader can readily prove the inverse relationship

h[n] = g[n] - g[n-1]

A VERY SPECIAL FUNCTION FOR LTID SYSTEMS: THE EVERLASTING EXPONENTIAL z^n

In Sec. 2.4.4, we showed that there exists one signal for which the response of an LTIC system is the same as the input within a multiplicative constant. The response of an LTIC system to an everlasting exponential input e^{st} is H(s)e^{st}, where H(s) is the system transfer function. We now show that for an LTID system, the same role is played by an everlasting exponential z^n. The system response y[n] in this case is given by

y[n] = h[n] * z^n = Σ_{m=-∞}^{∞} h[m] z^{n-m} = z^n Σ_{m=-∞}^{∞} h[m] z^{-m}

For causal h[n], the limits on the sum on the right-hand side would range from 0 to ∞. In any case, this sum is a function of z. Assuming that this sum converges, let us denote it by H[z]. Thus,

y[n] = H[z] z^n    (9.25)

where

H[z] = Σ_{m=-∞}^{∞} h[m] z^{-m}    (9.26)

Equation (9.25) is valid only for values of z for which the sum on the right-hand side of Eq. (9.26) exists (converges). Note that H[z] is a constant for a given z. Thus, the input and the output are the same (within a multiplicative constant) for the everlasting exponential input z^n. H[z], which is called the transfer function of the system, is a function of the complex variable z. An alternate definition of the transfer function H[z] of an LTID system from Eq. (9.25) is

H[z] = (output signal / input signal) |_{input = everlasting exponential z^n}    (9.27)

The transfer function is defined for, and is meaningful to, LTID systems only. It does not exist for nonlinear or time-varying systems in general. We repeat again that in this discussion we are talking of the everlasting exponential, which starts at n = −∞, not the causal exponential z^n u[n], which starts at n = 0. For a system specified by Eq. (9.7), the transfer function is given by

H[z] = P[z] / Q[z]    (9.28)

This follows readily by considering an everlasting input x[n] = z^n. According to Eq. (9.27), the output is y[n] = H[z]z^n. Substitution of this x[n] and y[n] in Eq. (9.7) yields

H[z] {Q[E]z^n} = P[E]z^n

Moreover,

P[E]z^n = P[z]z^n  and  Q[E]z^n = Q[z]z^n

Hence,

H[z] {Q[z]z^n} = P[z]z^n

Consequently,

H[z] = P[z] / Q[z]
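Equations (9.25) and (9.26) lend themselves to a quick numerical check. The Python sketch below uses a hypothetical first-order system y[n] − 0.5y[n − 1] = x[n], chosen purely for illustration, whose impulse response is h[n] = (0.5)^n u[n] so that H[z] = z/(z − 0.5):

```python
import numpy as np

# Hypothetical system y[n] - 0.5 y[n-1] = x[n]: h[n] = (0.5)^n u[n],
# so Eq. (9.26) gives the closed form H[z] = z/(z - 0.5).
z = 2.0
m = np.arange(60)                        # truncate the convergent sum
H_sum = np.sum(0.5 ** m * z ** (-m))     # H[z] = sum of h[m] z^{-m}
H_closed = z / (z - 0.5)

# Response to the everlasting exponential z^n is H[z] z^n, Eq. (9.25)
n = np.arange(5)
y = np.array([np.sum(0.5 ** m * z ** (k - m)) for k in n])

print(np.isclose(H_sum, H_closed), np.allclose(y, H_closed * z ** n))
```

The truncation at 60 terms is harmless here because |0.5/z| < 1 makes the tail negligibly small.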

DRILL 9.12 DT System Transfer Function

Show that the transfer function of the digital differentiator in Ex. 8.12 (big shaded block in Fig. 8.21b) is given by H[z] = (z − 1)/Tz, and that the transfer function of a unit delay, specified by y[n] = x[n − 1], is given by H[z] = 1/z.


9.5.3 Total Response

The total response of an LTID system can be expressed as a sum of the zero-input and zero-state responses:

total response = ∑_{j=1}^{N} c_j γ_j^n + x[n] * h[n]
                      (ZIR)              (ZSR)

In this expression, the zero-input response should be appropriately modified for the case of repeated roots. We have developed procedures to determine these two components. From the system equation, we find the characteristic roots and characteristic modes. The zero-input response is a linear combination of the characteristic modes. From the system equation, we also determine h[n], the impulse response, as discussed in Sec. 9.4. Knowing h[n] and the input x[n], we find the zero-state response as the convolution of x[n] and h[n]. The arbitrary constants c_1, c_2, …, c_N in the zero-input response are determined from the N initial conditions. For the system described by the equation

y[n + 2] − 0.6y[n + 1] − 0.16y[n] = 5x[n + 2]

with initial conditions y[−1] = 0, y[−2] = 25/4 and input x[n] = (4)^{−n} u[n], we have determined the two components of the response in Exs. 9.4 and 9.13, respectively. From the results in these examples, the total response for n ≥ 0 is

total response = [0.2(−0.2)^n + 0.8(0.8)^n] + [0.444(−0.2)^n + 5.81(0.8)^n − 1.26(4)^{−n}]    (9.29)
                           ZIR                                  ZSR

NATURAL AND FORCED RESPONSE

The characteristic modes of this system are (−0.2)^n and (0.8)^n. The zero-input response is made up of characteristic modes exclusively, as expected, but the characteristic modes also appear in the zero-state response. When all the characteristic mode terms in the total response are lumped together, the resulting component is the natural response. The remaining part of the total response that is made up of noncharacteristic modes is the forced response. For the present case, Eq. (9.29) yields

total response = [0.644(−0.2)^n + 6.61(0.8)^n] + [−1.26(4)^{−n}]    n ≥ 0
                        natural response           forced response

Just like differential equations, the classical solution to difference equations includes the natural and forced responses, a decomposition that lacks the engineering intuition and utility afforded by the zero-input and zero-state responses. The classical approach cannot separate the responses arising from internal conditions and external input. While the natural and forced solutions can be obtained from the zero-input and zero-state responses, the converse is not true. Further, the classical method is unable to express the system response to an input x[n] as an explicit function of x[n]. In fact, the classical method is restricted to a certain class of inputs and cannot handle arbitrary inputs, as can the method to determine the zero-state response. For these (and other) reasons, we do not further detail the classical approach and its direct calculation of the forced and natural responses.
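Equation (9.29) can be verified by brute-force recursion. The Python sketch below iterates the delay form y[n] = 0.6y[n − 1] + 0.16y[n − 2] + 5x[n] and compares against the closed form; the exact fractions 4/9, 64/11, and 125/99 correspond to the rounded coefficients 0.444, 5.81, and 1.26:

```python
import numpy as np

n_max = 20
n = np.arange(n_max)
x = 0.25 ** n                        # x[n] = (4)^{-n} u[n]

# Iterate the delay-form recursion y[n] = 0.6 y[n-1] + 0.16 y[n-2] + 5 x[n]
y = np.zeros(n_max)
y_m1, y_m2 = 0.0, 25 / 4             # initial conditions y[-1] and y[-2]
for k in range(n_max):
    y[k] = 0.6 * y_m1 + 0.16 * y_m2 + 5 * x[k]
    y_m1, y_m2 = y[k], y_m1

# Closed form of Eq. (9.29): ZIR = 0.2(-0.2)^n + 0.8(0.8)^n and
# ZSR = (4/9)(-0.2)^n + (64/11)(0.8)^n - (125/99)(4)^{-n}
y_closed = (0.2 + 4/9) * (-0.2) ** n + (0.8 + 64/11) * 0.8 ** n - (125/99) * 0.25 ** n

print(np.allclose(y, y_closed))      # recursion and closed form agree
```

At n = 0, for example, the recursion gives 0.16(25/4) + 5 = 6, matching the closed-form sum of all five terms.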


9.6 SYSTEM STABILITY

The concepts and criteria for the BIBO (external) stability and internal (asymptotic) stability for discrete-time systems are identical to those corresponding to continuous-time systems. The comments in Sec. 2.5 for LTIC systems concerning the distinction between external and internal stability are also valid for LTID systems. Let us begin with external (BIBO) stability.

9.6.1 External (BIBO) Stability

Recall that

y[n] = h[n] * x[n] = ∑_{m=−∞}^{∞} h[m] x[n − m]

and

|y[n]| = |∑_{m=−∞}^{∞} h[m] x[n − m]| ≤ ∑_{m=−∞}^{∞} |h[m]| |x[n − m]|

If x[n] is bounded, then |x[n − m]| < K_1 < ∞, and

|y[n]| ≤ K_1 ∑_{m=−∞}^{∞} |h[m]|

Clearly, the output is bounded if the summation on the right-hand side is bounded; that is, if

∑_{n=−∞}^{∞} |h[n]| < K_2 < ∞    (9.30)

This is a sufficient condition for BIBO stability. We can show that this is also a necessary condition (see Prob. 9.6-1). Therefore, if the impulse response h[n] of an LTID system is absolutely summable, the system is (BIBO) stable. Otherwise, it is unstable. All the comments about the nature of external and internal stability in Ch. 2 apply to the discrete-time case. We shall not elaborate on them further.
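Absolute summability is easy to probe numerically by watching whether the partial sums of |h[n]| settle as the horizon grows. The two impulse responses in this Python sketch are hypothetical, chosen only to contrast the two behaviors:

```python
import numpy as np

def abs_sum(h_fn, N):
    """Partial sum of |h[n]| over 0 <= n < N."""
    n = np.arange(N)
    return np.sum(np.abs(h_fn(n)))

h_stable = lambda n: 0.5 ** n                  # absolutely summable (sum -> 2)
h_unstable = lambda n: np.cos(np.pi * n / 3)   # periodic, not absolutely summable

# Stable case: partial sums converge; unstable case: they keep growing with N.
print(abs_sum(h_stable, 100), abs_sum(h_stable, 1000))       # both near 2
print(abs_sum(h_unstable, 1000) > abs_sum(h_unstable, 100))  # still growing
```

A bounded partial sum at one horizon proves nothing by itself, but a sum that keeps growing as N increases is the numerical signature of a BIBO-unstable impulse response.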

9.6.2 Internal (Asymptotic) Stability

For LTID systems, as in the case of LTIC systems, internal stability, called asymptotic stability or stability in the sense of Lyapunov (also the zero-input stability), is defined in terms of the zero-input response of a system. For an LTID system specified by a difference equation in the form of Eq. (9.2) [or Eq. (9.7)], the zero-input response consists of the characteristic modes of the system. The mode corresponding to a characteristic root γ is γ^n. To be more general, let γ be complex so that

γ = |γ| e^{jβ}  and  γ^n = |γ|^n e^{jβn}


Since the magnitude of e^{jβn} is always unity regardless of the value of n, the magnitude of γ^n is |γ|^n. Therefore,

if |γ| < 1, then γ^n → 0 as n → ∞
if |γ| > 1, then |γ^n| → ∞ as n → ∞
if |γ| = 1, then |γ^n| = 1 for all n

The characteristic modes corresponding to characteristic roots at various locations in the complex plane appear in Fig. 9.11. These results can be grasped more effectively in terms of the location of characteristic roots in the complex plane. Figure 9.12 shows a circle of unit radius, centered at the origin in a complex plane. Our discussion shows that if all characteristic roots of the system lie inside the unit circle, |γ_i| < 1 for all i and the system is asymptotically stable. On the other hand, even if one characteristic root lies outside the unit circle, the system is unstable. If none of the characteristic roots lie outside the unit circle, but some simple (unrepeated) roots lie on the circle itself, the system is marginally stable. If two or more characteristic roots coincide on the unit circle (repeated roots), the system is unstable. The reason is that for repeated roots, the zero-input response is of the form n^{r−1} γ^n, and if |γ| = 1, then |n^{r−1} γ^n| = n^{r−1} → ∞ as n → ∞.† Note, however, that repeated roots inside the unit circle do not cause instability. To summarize:

1. An LTID system is asymptotically stable if, and only if, all the characteristic roots are inside the unit circle. The roots may be simple or repeated.
2. An LTID system is unstable if, and only if, either one or both of the following conditions exist: (i) at least one root is outside the unit circle; (ii) there are repeated roots on the unit circle.
3. An LTID system is marginally stable if and only if there are no roots outside the unit circle and there are some unrepeated roots on the unit circle.

9.6.3 Relationship between BIBO and Asymptotic Stability

For LTID systems, the relation between the two types of stability is similar to that in LTIC systems. For a system specified by Eq. (9.2), we can readily show that if a characteristic root γ_k

† If the development of discrete-time systems is parallel to that of continuous-time systems, we wonder why the parallel breaks down here. Why, for instance, are LHP and RHP not the regions demarcating stability and instability? The reason lies in the form of the characteristic modes. In continuous-time systems, we chose the form of characteristic mode as e^{λ_i t}. In discrete-time systems, for computational convenience, we choose the form to be γ_i^n. Had we chosen this form to be e^{λ_i n}, where γ_i = e^{λ_i}, then the LHP and RHP (for the location of λ_i) again would demarcate stability and instability. The reason is that if γ = e^λ, |γ| = 1 implies |e^λ| = 1, and therefore λ = jω. This shows that the unit circle in the γ plane maps into the imaginary axis in the λ plane.

Figure 9.11 Characteristic root locations and the corresponding characteristic modes.




Figure 9.12 Characteristic root locations and system stability. [The figure shows the unit circle in the complex plane, with the circle itself labeled "Marginally stable" and the exterior labeled "Unstable."]

is inside the unit circle, the corresponding mode γ_k^n is absolutely summable. In contrast, if γ_k lies outside the unit circle, or on the unit circle, γ_k^n is not absolutely summable.† This means that an asymptotically stable system is BIBO-stable. Moreover, a marginally stable or asymptotically unstable system is BIBO-unstable. The converse is not necessarily true. The stability picture portrayed by the external description is of questionable value. BIBO (external) stability cannot ensure internal (asymptotic) stability, as the following example shows.

EXAMPLE 9.17 A BIBO-Stable but Asymptotically Unstable System

An LTID system consists of two subsystems S_1 and S_2 in cascade (Fig. 9.13). The impulse responses of these systems are h_1[n] and h_2[n], respectively, given by

h_1[n] = 4δ[n] − 3(0.5)^n u[n]  and  h_2[n] = (2)^n u[n]

Investigate the BIBO and asymptotic stability of the composite system.

† This conclusion follows from the fact that (see Sec. B.8.3)

∑_{n=0}^{∞} |γ_k|^n = 1 / (1 − |γ_k|),  |γ_k| < 1

Moreover, if |γ| ≥ 1, the sum diverges and goes to ∞. These conclusions are valid also for the modes of the form n γ_k^n.



Figure 9.13 Composite system for Ex. 9.17.

The composite system impulse response h[n] is given by

h[n] = h_1[n] * h_2[n] = h_2[n] * h_1[n]
     = 2^n u[n] * (4δ[n] − 3(0.5)^n u[n])
     = 4(2)^n u[n] − 3 [ (2^{n+1} − (0.5)^{n+1}) / (2 − 0.5) ] u[n]
     = (0.5)^n u[n]

If the composite cascade system were to be enclosed in a black box with only the input and the output terminals accessible, any measurement from these external terminals would show that the impulse response of the system is (0.5)^n u[n], without any hint of the unstable system sheltered inside the composite system. The composite system is BIBO-stable because its impulse response (0.5)^n u[n] is absolutely summable. However, the system S_2 is asymptotically unstable because its characteristic root, 2, lies outside the unit circle. This system will eventually burn out (or saturate) because of the unbounded characteristic response generated by intended or unintended initial conditions, no matter how small. The system is asymptotically unstable, though BIBO-stable. This example shows that BIBO stability does not necessarily ensure asymptotic stability when a system is uncontrollable, unobservable, or both. The internal and the external descriptions of a system are equivalent only when the system is controllable and observable. In such a case, BIBO stability means the system is asymptotically stable, and vice versa.
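The cancellation that hides S_2's unstable mode can be reproduced numerically. The Python sketch below convolves truncated versions of h_1[n] and h_2[n]; for causal signals, only the first N samples of a truncated convolution are valid:

```python
import numpy as np

N = 15
n = np.arange(N)
h1 = 4.0 * (n == 0) - 3 * 0.5 ** n   # h1[n] = 4 delta[n] - 3(0.5)^n u[n]
h2 = 2.0 ** n                        # h2[n] = (2)^n u[n], the unstable subsystem

# For causal signals, the first N samples of the truncated convolution are valid.
h = np.convolve(h1, h2)[:N]
print(np.allclose(h, 0.5 ** n))      # composite h[n] = (0.5)^n u[n], as derived
```

Even though intermediate terms grow like 2^n, they cancel exactly, leaving the decaying composite response.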

Fortunately, uncontrollable or unobservable systems are not common in practice. Henceforth, in determining system stability, we shall assume that unless otherwise mentioned, the internal and the external descriptions of the system are equivalent, implying that the system is controllable and observable.




EXAMPLE 9.18 Investigating Asymptotic and BIBO Stability

Determine the internal and external stability of systems specified by the following equations. In each case, plot the characteristic roots in the complex plane.
(a) y[n + 2] + 2.5y[n + 1] + y[n] = x[n + 1] − 2x[n]
(b) y[n] − y[n − 1] + 0.21y[n − 2] = 2x[n − 1] + 3x[n − 2]
(c) y[n + 3] + 2y[n + 2] + (3/2)y[n + 1] + (1/2)y[n] = x[n + 1]
(d) (E² − E + 1)² y[n] = (3E + 1)x[n]

(a) The characteristic polynomial is

γ² + 2.5γ + 1 = (γ + 0.5)(γ + 2)

The characteristic roots are −0.5 and −2. Because |−2| > 1 (−2 lies outside the unit circle), the system is BIBO-unstable and also asymptotically unstable (Fig. 9.14a).

(b) The characteristic polynomial is

γ² − γ + 0.21 = (γ − 0.3)(γ − 0.7)

The characteristic roots are 0.3 and 0.7, both of which lie inside the unit circle. The system is BIBO-stable and asymptotically stable (Fig. 9.14b).

Figure 9.14 Characteristic root locations for the system of Ex. 9.18.


(c) The characteristic polynomial is

γ³ + 2γ² + (3/2)γ + (1/2) = (γ + 1)(γ² + γ + 0.5)

The characteristic roots are −1 and −0.5 ± j0.5 (Fig. 9.14c). One of the characteristic roots is on the unit circle and the remaining two roots are inside the unit circle. The system is BIBO-unstable but marginally stable.

(d) The characteristic polynomial is

(γ² − γ + 1)²

The characteristic roots are (1/2) ± j(√3/2) = 1·e^{±jπ/3} repeated twice, and they lie on the unit circle (Fig. 9.14d). The system is BIBO-unstable and asymptotically unstable.

DRILL 9.13 Assessing Stability by Characteristic Roots

Using the complex plane, locate the characteristic roots of the following systems, and use the characteristic root locations to determine external and internal stability of each system.
(a) (E + 1)(E² + 6E + 25)y[n] = 3Ex[n]
(b) (E − 1)²(E + 0.5)y[n] = (E² + 2E + 3)x[n]

ANSWERS
Both systems are BIBO-unstable and asymptotically unstable.

9.7 INTUITIVE INSIGHTS INTO SYSTEM BEHAVIOR

The intuitive insights into the behavior of continuous-time systems and their qualitative proofs, discussed in Sec. 2.6, also apply to discrete-time systems. For this reason, we shall merely mention here without discussion some of the insights presented in Sec. 2.6. The system's entire (zero-input and zero-state) behavior is strongly influenced by the characteristic roots (or modes) of the system. The system responds strongly to input signals similar to its characteristic modes and poorly to inputs very different from its characteristic modes. In fact, when the input is a characteristic mode of the system, the response goes to infinity, provided the mode is a nondecaying signal. This is the resonance phenomenon. The width of an impulse response h[n] indicates the response time (time required to respond fully to an input) of the system. It is the time constant of the system.† Discrete-time pulses are generally dispersed when passed through a discrete-time system. The amount of dispersion (or spreading out) is equal to the system

† This part of the discussion applies to systems with impulse response h[n] that is a mostly positive (or mostly negative) pulse.

time constant (or width of h[n]). The system time constant also determines the rate at which the system can transmit information. A smaller time constant corresponds to a higher rate of information transmission, and vice versa. We keep in mind that concepts such as time constant and pulse dispersion only coarsely illustrate system behavior. Let us illustrate these ideas with an example.

Determine the time constant, rise time, pulse dispersion, and filter characteristics of a lowpass DT system with impulse response h[n] = 2(0.6)^n u[n].

Since h[n] resembles a single, mostly positive pulse, we know that the DT system is lowpass. Similar to the CT case shown in Sec. 2.6, we can determine the time constant T_h as the width of a rectangle that approximates h[n]. This rectangle possesses the same peak height and total sum (area) as does h[n]. The peak of h[n] is 2, and the total sum (area) is

∑_{n=0}^{∞} 2(0.6)^n = 2 / (1 − 0.6) = 5

Since the width of a DT signal is 1 less than its length, we see that the time constant T_h (rectangle width) is

T_h = rectangle width = area/height − 1 = 5/2 − 1 = 1.5 samples

Since time constant, rise time, and pulse dispersion are all given by the same value, we see that

time constant = rise time = pulse dispersion = T_h = 1.5 samples

The approximate cutoff frequency of our DT system can be determined as the frequency of a DT sinusoid whose period equals the length of the rectangle approximation to h[n]. That is,

cutoff frequency = 1 / (T_h + 1) = 2/5 cycles/sample

Equivalently, we can express the cutoff frequency as 4π/5 radians/sample. Notice that T_h is not an integer and thus lacks a clear physical meaning for our DT system. How, for example, can it take 1.5 samples for our DT system to fully respond to an input? We can put our minds at ease by remembering the approximate nature of T_h, which is meant to provide only a rough understanding of system behavior.


9.8 MATLAB: DISCRETE-TIME SYSTEMS

Many special MATLAB functions are available to perform the operations of discrete-time systems, including the filter and conv commands. In this section, we investigate and apply these and other commands.

9.8.1 System Responses Through Filtering

MATLAB's filter command provides an efficient way to evaluate the system response of a constant coefficient linear difference equation represented in delay form as

∑_{k=0}^{N} a_k y[n − k] = ∑_{k=0}^{N} b_k x[n − k]    (9.31)

In the simplest form, filter requires three input arguments: a length-(N + 1) vector of feedforward coefficients [b_0, b_1, …, b_N], a length-(N + 1) vector of feedback coefficients [a_0, a_1, …, a_N], and an input vector.† Since no initial conditions are specified, the output corresponds to the system's zero-state response. To serve as an example, consider a system described by y[n] − y[n − 1] + y[n − 2] = x[n]. When x[n] = δ[n], the zero-state response is equal to the impulse response h[n], which we compute over (0 ≤ n ≤ 30).

>> b = [1 0 0]; a = [1 -1 1];
>> n = (0:30)'; delta = @(n) 1.0.*(n==0);
>> h = filter(b,a,delta(n)); stem(n,h,'k.');
>> axis([-.5 30.5 -1.1 1.1]); xlabel('n'); ylabel('h[n]');

As shown in Fig. 9.15, h[n] appears to be (N_0 = 6)-periodic for n ≥ 0. Since periodic signals are not absolutely summable, ∑|h[n]| is not finite and the system is not BIBO-stable.

Figure 9.15 h[n] for y[n] − y[n − 1] + y[n − 2] = x[n].

† It is important to pay close attention to the inevitable notational differences found throughout engineering documents. In MATLAB help documents, coefficient subscripts begin at 1 rather than 0 to better conform with MATLAB indexing conventions. That is, MATLAB labels a_0 as a(1), b_0 as b(1), and so forth.

Furthermore, the sinusoidal input x[n] = cos(2πn/6)u[n], which is (N_0 = 6)-periodic for n ≥ 0, should generate a resonant zero-state response.

>> x = @(n) cos(2*pi*n/6).*(n>=0);
>> y = filter(b,a,x(n));
>> stem(n,y,'k.'); xlabel('n'); ylabel('y[n]');

The response's linear envelope, shown in Fig. 9.16, confirms a resonant response. The characteristic equation of the system is γ² − γ + 1, which has roots γ = e^{±jπ/3}. Since the input x[n] = cos(2πn/6)u[n] = (1/2)(e^{jπn/3} + e^{−jπn/3})u[n] coincides with the characteristic roots, a resonant response is guaranteed.

By adding initial conditions, the filter command can also compute a system's zero-input response and total response. Continuing the preceding example, consider finding the zero-input response for y[−1] = 1 and y[−2] = 2 over (0 ≤ n ≤ 30).

>> z_i = filtic(b,a,[1 2]);
>> y_0 = filter(b,a,zeros(size(n)),z_i);
>> stem(n,y_0,'k.'); xlabel('n'); ylabel('y_{0}[n]');
>> axis([-0.5 30.5 -2.1 2.1]);

There are many physical ways to implement a particular equation. MATLAB implements Eq. (9.31) by using the popular direct form II transposed structure.† Consequently, initial conditions must be compatible with this implementation structure. The signal-processing toolbox function filtic converts the traditional y[−1], y[−2], …, y[−N] initial conditions for use with the filter command. An input of zero is created with the zeros command. The dimensions of this zero input are made to match the vector n by using the size command. Finally, _{ } forces subscript text in the graphics window, and ^{ } forces superscript text. The results are shown in Fig. 9.17. Given y[−1] = 1 and y[−2] = 2 and an input x[n] = cos(2πn/6)u[n], the total response is easy to obtain with the filter command.

>> y_total = filter(b,a,x(n),z_i);

Figure 9.16 Resonant zero-state response y[n] for x[n] = cos(2πn/6)u[n].

† Implementation structures, such as direct form II transposed, are discussed in Ch. 6.


Figure 9.17 Zero-input response y_0[n] for y[−1] = 1 and y[−2] = 2.

Summing the zero-state and zero-input responses gives the same result. Computing the total absolute error provides a check.

>> sum(abs(y_total-(y + y_0)))
ans = 1.8430e-014

Within computer round-off, both methods return the same sequence.

9.8.2 A Custom Filter Function

The filtic command is available only if the signal-processing toolbox is installed. To accommodate installations without the signal-processing toolbox and to help develop your MATLAB skills, consider writing a function similar in syntax to filter that directly uses the ICs y[−1], y[−2], …, y[−N]. Normalizing a_0 = 1 and solving Eq. (9.31) for y[n] yields

y[n] = ∑_{k=0}^{N} b_k x[n − k] − ∑_{k=1}^{N} a_k y[n − k]

This recursive form provides a good basis for our custom filter function.

function [y] = CH9MP1(b,a,x,yi)
% CH9MP1.m : Chapter 9, MATLAB Program 1
% Function M-file filters data x to create y
% INPUTS:   b = vector of feedforward coefficients
%           a = vector of feedback coefficients
%           x = input data vector
%           yi = vector of initial conditions [y[-1], y[-2], ...]
% OUTPUTS:  y = vector of filtered output data
yi = flipud(yi(:));              % Properly format IC's.
y = [yi;zeros(length(x),1)];     % Preinitialize y, beginning with IC's.
x = [zeros(length(yi),1);x(:)];  % Append x with zeros to match size of y.
b = b/a(1); a = a/a(1);          % Normalize coefficients.
for n = length(yi)+1:length(y),
    for nb = 0:length(b)-1,
        y(n) = y(n) + b(nb+1)*x(n-nb);  % Feedforward terms.
    end
    for na = 1:length(a)-1,
        y(n) = y(n) - a(na+1)*y(n-na);  % Feedback terms.
    end
end

Most instructions in CH9MP1 have been discussed; now we turn to the flipud instruction. The flip up-down command flipud reverses the order of elements in a column vector. Although not used here, the flip left-right command fliplr reverses the order of elements in a row vector. Note that typing help filename displays the first contiguous set of comment lines in an M-file. Thus, it is good programming practice to document M-files, as in CH9MP1, with an initial block of clear comment lines. As an exercise, the reader should verify that CH9MP1 correctly computes the impulse response h[n], the zero-state response y[n], the zero-input response y_0[n], and the total response y[n] + y_0[n]. For example, the total response is computed by typing

>> y_total = CH9MP1(b,a,x(n),[1 2])
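For readers working outside MATLAB, the same recursion takes only a few lines in any language. The following Python sketch is a rough analog of CH9MP1 (the function name dt_filter is a hypothetical choice); it reproduces the zero-input response of y[n] − y[n − 1] + y[n − 2] = x[n] found earlier with filtic and filter:

```python
def dt_filter(b, a, x, yi):
    """Recursively evaluate sum_k a[k] y[n-k] = sum_k b[k] x[n-k],
    given initial conditions yi = [y[-1], y[-2], ...]."""
    b = [bk / a[0] for bk in b]                # normalize so that a[0] = 1
    a = [ak / a[0] for ak in a]
    ni = len(yi)
    y = list(reversed(yi)) + [0.0] * len(x)    # prepend y[-ni], ..., y[-1]
    xp = [0.0] * ni + list(x)                  # pad x so indices line up with y
    for n in range(ni, len(y)):
        y[n] = sum(b[k] * xp[n - k] for k in range(len(b))) \
             - sum(a[k] * y[n - k] for k in range(1, len(a)))
    return y[ni:]                              # discard the prepended ICs

# Zero-input response of y[n] - y[n-1] + y[n-2] = x[n] with y[-1] = 1, y[-2] = 2
y0 = dt_filter([1, 0, 0], [1, -1, 1], [0] * 12, [1, 2])
print(y0[:6])    # [-1.0, -2.0, -1.0, 1.0, 2.0, 1.0]
```

As with the filter/filtic result, the zero-input response is (N_0 = 6)-periodic, consistent with the characteristic roots e^{±jπ/3} on the unit circle.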

9.8.3 Discrete-Time Convolution

Convolution of two finite-duration discrete-time signals is accomplished by using the conv command. For example, the discrete-time convolution of two length-4 rectangular pulses, g[n] = (u[n] − u[n − 4]) * (u[n] − u[n − 4]), is a length-(4 + 4 − 1 = 7) triangle. Representing u[n] − u[n − 4] by the vector [1, 1, 1, 1], the convolution is computed by

>> conv([1 1 1 1],[1 1 1 1])
ans = 1 2 3 4 3 2 1

Notice that (u[n + 4] − u[n]) * (u[n] − u[n − 4]) is also computed by conv([1 1 1 1],[1 1 1 1]) and obviously yields the same result. The difference between these two cases is the regions of support: (0 ≤ n ≤ 6) for the first and (−4 ≤ n ≤ 2) for the second. Although the conv command does not compute the region of support, it is relatively easy to obtain. If vector w begins at n = n_w and vector v begins at n = n_v, then conv(w,v) begins at n = n_w + n_v. In general, the conv command cannot properly convolve infinite-duration signals. This is not too surprising, since computers themselves cannot store an infinite-duration signal. For special cases, however, conv can correctly compute a portion of such convolution problems. Consider the common case of convolving two causal signals. By passing the first N samples of each, conv returns a length-(2N − 1) sequence. The first N samples of this sequence are valid; the remaining N − 1 samples are not. To illustrate this point, reconsider the zero-state response y[n] over (0 ≤ n ≤ 30) for the system y[n] − y[n − 1] + y[n − 2] = x[n] given input x[n] = cos(2πn/6)u[n]. The results obtained by using a filtering approach are shown in Fig. 9.16.
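The same bookkeeping works with NumPy's convolve, which, like conv, returns sample values but not their region of support. A Python sketch of the two length-4 pulse cases:

```python
import numpy as np

w = np.array([1, 1, 1, 1])    # u[n] - u[n-4], first sample at n_w = 0
v = np.array([1, 1, 1, 1])    # u[n+4] - u[n], first sample at n_v = -4

g = np.convolve(w, v)         # length 4 + 4 - 1 = 7 triangle
n0 = 0 + (-4)                 # conv(w, v) begins at n_w + n_v

print(g.tolist())                      # [1, 2, 3, 4, 3, 2, 1]
print(list(range(n0, n0 + len(g))))    # support indices -4 through 2
```

The sample values are identical in both cases; only the starting index n_w + n_v distinguishes the two regions of support.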


Figure 9.18 y[n] for x[n] = cos(2πn/6)u[n] computed with conv.

The response can also be computed using convolution according to y[n] = h[n] * x[n]. The impulse response of this system is†

h[n] = [cos(πn/3) + (1/√3) sin(πn/3)] u[n]

Both h[n] and x[n] are causal and have infinite duration, so conv can be used to obtain a portion of the convolution.

>> u = @(n) 1.0.*(n>=0);
>> h = @(n) (cos(pi*n/3)+sin(pi*n/3)/sqrt(3)).*u(n);
>> y = conv(h(n),x(n)); stem((0:60),y,'k.'); xlabel('n'); ylabel('y[n]');

The conv output is fully displayed in Fig. 9.18. As expected, the results are correct over (0 ≤ n ≤ 30). The remaining values are clearly incorrect; the output envelope should continue to grow, not decay. Normally, these incorrect values are not displayed.

>> stem(n,y(1:31),'k.'); xlabel('n'); ylabel('y[n]');

The resulting plot is identical to Fig. 9.16.

9.9 APPENDIX: IMPULSE RESPONSE FOR A SPECIAL CASE

When a_N = 0, A_0 = b_N/a_N becomes indeterminate, and the procedure needs to be modified slightly. When a_N = 0, Q[E] can be expressed as E Q̂[E], and Eq. (9.13) can be expressed as

E Q̂[E] h[n] = P[E] δ[n] = P[E] {E δ[n − 1]} = E P[E] δ[n − 1]

Hence,

Q̂[E] h[n] = P[E] δ[n − 1]

t Techniques to analytically determine h[n) are presented in Chs. 11 and 12.

In this case the input vanishes not for n ≥ 1, but for n ≥ 2. Therefore, the response consists not only of the zero-input term and an impulse A_0 δ[n] (at n = 0), but also of an impulse A_1 δ[n − 1] (at n = 1). Therefore,

h[n] = A_0 δ[n] + A_1 δ[n − 1] + y_c[n] u[n]

We can determine the unknowns A_0, A_1, and the N − 1 coefficients in y_c[n] from the N + 1 initial values h[0], h[1], …, h[N], determined as usual from the iterative solution of the equation Q̂[E]h[n] = P[E]δ[n − 1].† Similarly, if a_N = a_{N−1} = 0, we need to use the form h[n] = A_0 δ[n] + A_1 δ[n − 1] + A_2 δ[n − 2] + y_c[n] u[n]. The N + 1 unknown constants are determined from the N + 1 values h[0], h[1], …, h[N], determined iteratively, and so on.
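The modified procedure can be illustrated with a hypothetical second-order system with a_N = 0, say (E² + E)h[n] = (E² + 2E + 3)δ[n] — an invented example, not one from the text. Iterating the delay form in Python:

```python
# Hypothetical system with a_N = 0: (E^2 + E) h[n] = (E^2 + 2E + 3) delta[n],
# i.e., in delay form, h[n] + h[n-1] = delta[n] + 2 delta[n-1] + 3 delta[n-2].
def delta(k):
    return 1.0 if k == 0 else 0.0

h = {-1: 0.0}                        # causality: h[n] = 0 for n < 0
for n in range(8):
    h[n] = delta(n) + 2 * delta(n - 1) + 3 * delta(n - 2) - h[n - 1]

print([h[n] for n in range(6)])      # [1.0, 1.0, 2.0, -2.0, 2.0, -2.0]

# Matching h[n] = A0 delta[n] + A1 delta[n-1] + yc[n] u[n] with yc[n] = 2(-1)^n
# gives A0 = -1 and A1 = 3, using the iterated values h[0], h[1], h[2].
```

From n = 2 onward the iterated values settle onto the characteristic mode 2(−1)^n, leaving the two impulse coefficients A_0 and A_1 to absorb the discrepancies at n = 0 and n = 1.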

9.10 SUMMARY

This chapter discusses time-domain analysis of LTID (linear, time-invariant, discrete-time) systems. The analysis is parallel to that of LTIC systems, with some minor differences. Discrete-time systems are described by difference equations. For an Nth-order system, N auxiliary conditions must be specified for a unique solution. Characteristic modes are discrete-time exponentials of the form γ^n corresponding to an unrepeated root γ, and of the form n^i γ^n corresponding to a repeated root γ. The unit impulse function δ[n] is a sequence of a single number of unit value at n = 0. The unit impulse response h[n] of a discrete-time system is a linear combination of its characteristic modes.*

The zero-state response (response due to external input) of a linear system is obtained by breaking the input into impulse components and then adding the system responses to all the impulse components. The sum of the system responses to the impulse components is in the form of a sum, known as the convolution sum, whose structure and properties are similar to the convolution integral. The system response is obtained as the convolution sum of the input x[n] with the system's impulse response h[n]. Therefore, knowledge of the system's impulse response allows us to determine the system response to any arbitrary input.

LTID systems have a very special relationship to the everlasting exponential signal z^n because the response of an LTID system to such an input signal is the same signal within a multiplicative constant. The response of an LTID system to the everlasting exponential input z^n is H[z]z^n, where H[z] is the transfer function of the system.

The external stability criterion, the bounded-input/bounded-output (BIBO) stability criterion, states that a system is stable if and only if every bounded input produces a bounded output. Otherwise, the system is unstable.

The internal stability criterion can be stated in terms of the location of characteristic roots of the system as follows:

1. An LTID system is asymptotically stable if and only if all the characteristic roots are inside the unit circle. The roots may be repeated or unrepeated.

† Q̂[γ] is now an (N − 1)-order polynomial. Hence, there are only N − 1 unknowns in y_c[n].

*There is a possibility of an impulse o[n] in addition lo characteristic modes.

Problems

825

2. A n LTID system is unstable if and only if either one or both of the following conditions exist: (i) at least one root is outside the unit circle; (ii) there are repeated roots on the unit circle. 3. An LTID system is marginally stable if and only if there are no roots outside the unit circle and some unrepeated roots on the unit circle. An asymptotically stable system is always BIBO-stable. The converse is not necessarily true.

PROBLEMS

9.1-1 Letting ↓ identify n = 0, define the nonzero values of signal g[n] in vector form as

(d) Determine, if possible, whether the system is causal.

[1, 2, 3, 4, 5, 4, 3, 2, 1]. The impulse response of an LTID system is defined in terms of g[n] as h[n] = g[−2n − 1].
(a) Express the nonzero values of h[n] in vector form, taking care to identify the n = 0 point.
(b) Write a constant-coefficient linear difference equation (input x[n] and output y[n]) that has impulse response h[n].
(c) Show that the system is both linear and time-invariant.
(d) Determine, if possible, whether the system is BIBO-stable.
(e) Determine, if possible, whether the system is memoryless.
(f) Determine, if possible, whether the system is causal.

9.1-3

Determine whether each of the following statements is true or false. If the statement is false, demonstrate by proof or example why the statement is false. If the statement is true, explain why.
(a) The system described by y[n] = (n + 1)x[n] is causal.
(b) The system described by y[n − 1] = x[n] is causal.

9.1-4

A linear time-invariant system produces output y_1[n] in response to input x_1[n], as shown in Fig. P9.1-4. Determine and sketch the output y_2[n] that results when input x_2[n] is applied to the same system.

9.1-2  An LTID system has an impulse response function h[n] = u[-(5 - n)/3].
       (a) Using an accurate sketch or vector representation, graphically depict h[n].
       (b) Determine, if possible, whether the system is BIBO-stable.
       (c) Determine, if possible, whether the system is memoryless.
       (d) Determine, if possible, whether the system is causal.

9.1-5  A system is described by

       y[n] = (1/2) Σ_{k=-∞}^{∞} x[k] (δ[n - k] + δ[n + k])

       (a) Explain what this system does.
       (b) Is the system BIBO-stable? Justify your answer.
       (c) Is the system linear? Justify your answer.
       (d) Is the system memoryless? Justify your answer.

Figure P9.1-4

[Figure: resistive ladder with node voltages v[n], v[n + 1], ..., v[N] and resistors R and aR; see Fig. P8.5-6 and Prob. 9.3-7.]

CHAPTER 9  TIME-DOMAIN ANALYSIS OF DISCRETE-TIME SYSTEMS

       (e) Is the system causal? Justify your answer.
       (f) Is the system time-invariant? Justify your answer.

9.1-6  A discrete-time system is given by

       y[n + 1] = x[n]/x[n + 1]

       (a) Is the system BIBO-stable? Justify your answer.
       (b) Is the system memoryless? Justify your answer.
       (c) Is the system causal? Justify your answer.

9.1-7  Consider the input-output relationships of two similar discrete-time systems:

       y1[n] = sin(πn/2 + 1) x[n]   and   y2[n] = sin(πn/2) x[n]

       Explain why x[n] can be recovered from y1[n], yet x[n] cannot be recovered from y2[n].

9.1-8  Explain why the continuous-time system y(t) = x(2t) is always invertible, yet the corresponding discrete-time system y[n] = x[2n] is not invertible.

9.1-9  Consider a system that multiplies a given input by a ramp function, r[n]. That is, y[n] = x[n] r[n].
       (a) Is the system BIBO-stable? Justify your answer.
       (b) Is the system linear? Justify your answer.
       (c) Is the system memoryless? Justify your answer.
       (d) Is the system causal? Justify your answer.
       (e) Is the system time-invariant? Justify your answer.

9.1-10 A jet-powered car is filmed using a camera operating at 60 frames per second. Let variable n designate the film frame, where n = 0 corresponds to engine ignition (film before ignition is discarded). By analyzing each frame of the film, it is possible to determine the car position x[n], measured in meters, from the original starting position x[0] = 0.
       From physics, we know that velocity is the time derivative of position:

       v(t) = d/dt x(t)

       Furthermore, we know that acceleration is the time derivative of velocity:

       a(t) = d/dt v(t)

       We can estimate the car velocity from the film data by using a simple difference equation v[n] = k(x[n] - x[n - 1]).
       (a) Determine the appropriate constant k to ensure v[n] has units of meters per second.
       (b) Determine a standard-form constant-coefficient difference equation that outputs an estimate of acceleration, a[n], using an input of position, x[n]. Identify the advantages and shortcomings of estimating acceleration a(t) with a[n]. What is the impulse response h[n] for this system?

9.2-1  An LTID system is described by the constant-coefficient linear difference equation 2y[n] + 2y[n - 1] = x[n - 1].
       (a) Express this system in standard advance operator form.
       (b) Using recursion, determine the first five values of the system impulse response h[n].
       (c) Using recursion, determine the first five values of the system zero-state response to input x[n] = 2u[n].
       (d) Using recursion, determine for (0 ≤ n ≤ 4) the system zero-input response if y[-1] = 1.

9.2-2  Solve recursively (first three terms only):
       (a) y[n + 1] - 0.5y[n] = 0, with y[-1] = 10
       (b) y[n + 1] + 2y[n] = x[n + 1], with x[n] = e^{-n} u[n] and y[-1] = 0
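The recursive procedure asked for in these problems is mechanical: rewrite the equation so the most advanced output term stands alone, then iterate forward from the auxiliary conditions. A minimal Python sketch for a first-order case (the coefficients below are illustrative, not taken from any particular problem; the book's own examples use MATLAB):

```python
def recurse(a1, x, y_init, n_max):
    """Iterate y[n] = -a1*y[n-1] + x[n] for n = 0..n_max.

    a1     : coefficient in y[n] + a1*y[n-1] = x[n]
    x      : function mapping n to the input x[n]
    y_init : the auxiliary condition y[-1]
    """
    y_prev, out = y_init, []
    for n in range(n_max + 1):
        y_n = -a1 * y_prev + x(n)   # solve for the newest output sample
        out.append(y_n)
        y_prev = y_n
    return out

# Zero-input response of y[n] + 0.5*y[n-1] = x[n] with y[-1] = 4:
print(recurse(0.5, lambda n: 0.0, 4.0, 3))   # [-2.0, 1.0, -0.5, 0.25]
```

Higher-order equations iterate the same way, carrying as many past output samples as the order of the system.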

9.2-3  Solve the following equation recursively (first three terms only):

       y[n] - 0.6y[n - 1] - 0.16y[n - 2] = 0

       with y[-1] = -25 and y[-2] = 0.

9.2-4  Solve recursively the Ch. 8 second-order difference Eq. (8.13) for a sales estimate (first three terms only), assuming y[-1] = y[-2] = 0 and x[n] = 100u[n].

9.2-5  Solve the following equation recursively (first three terms only):

       y[n + 2] + 3y[n + 1] + 2y[n] = x[n + 2] + 3x[n + 1] + 3x[n]

       with x[n] = (3)^n u[n], y[-1] = 3, and y[-2] = 2.

9.2-6  Repeat Prob. 9.2-5 for

       y[n] + 2y[n - 1] + y[n - 2] = 2x[n] - x[n - 1]

       with x[n] = (3)^{-n} u[n], y[-1] = 2, and y[-2] = 3.

9.3-1  Given y0[-1] = 3 and y0[-2] = -1, determine the closed-form expression of the zero-input response y0[n] of an LTID system described by the equation

       y[n] + (1/6)y[n - 1] - (1/3)y[n - 2] = x[n] + (2/3)x[n - 2]

9.3-2  Solve y[n + 2] + 3y[n + 1] + 2y[n] = 0 if y[-1] = 0 and y[-2] = 1.

9.3-3  Solve y[n + 2] + 2y[n + 1] + y[n] = 0 if y[-1] = 1 and y[-2] = 1.

9.3-4  Solve y[n + 2] - 2y[n + 1] + 2y[n] = 0 if y[-1] = 1 and y[-2] = 0.

9.3-5  For the general Nth-order difference Eq. (9.3), letting a1 = a2 = ··· = aN = 0 results in a general causal Nth-order LTI nonrecursive difference equation. Show that the characteristic roots for this system are zero; hence, the zero-input response is zero. Consequently, the total response consists of the zero-state component only.

9.3-6  Leonardo Pisano Fibonacci, a famous thirteenth-century mathematician, generated the sequence of integers {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...} while addressing, oddly enough, a problem involving rabbit reproduction. An element of the Fibonacci sequence is the sum of the previous two.
       (a) Find the constant-coefficient difference equation whose zero-input response f[n] with auxiliary conditions f[1] = 0 and f[2] = 1 is a Fibonacci sequence. Given f[n] is the system output, what is the system input?
       (b) What are the characteristic roots of this system? Is the system stable?
       (c) Designating 0 and 1 as the first and second Fibonacci numbers, determine the 50th Fibonacci number. Determine the 1000th Fibonacci number.

9.3-7  Find v[n], the voltage at the nth node of the Ch. 8 resistive ladder depicted in Fig. P8.5-6, if V = 100 volts and a = 2. [Hint 1: Consider the node equation at the nth node with voltage v[n]. Hint 2: See Prob. 8.5-6 for the equation for v[n]. The auxiliary conditions are v[0] = 100 and v[N] = 0.]

9.3-8  Consider the discrete-time system y[n] + y[n - 1] + 0.25y[n - 2] = √3 x[n - 8]. Find the zero-input response, y0[n], if y0[-1] = 1 and y0[1] = 1.

9.3-9  Provide a standard-form polynomial Q(x) such that Q(E){y[n]} = x[n] corresponds to a marginally stable third-order LTID system and Q(D){y(t)} = x(t) corresponds to a stable third-order LTIC system.

9.4-1  Find the unit impulse response h[n] of the systems specified by the following equations:
       (a) y[n + 1] + 2y[n] = x[n]
       (b) y[n] + 2y[n - 1] = x[n]

9.4-2  Determine the unit impulse response h[n] of the following systems. In each case, use recursion to verify the n = 3 value of the closed-form expression of h[n].
       (a) (E^2 + 1){y[n]} = (E + 0.5){x[n]}
       (b) y[n] - y[n - 1] + 0.25y[n - 2] = x[n]
       (c) y[n] - (1/4)y[n - 1] - (1/8)y[n - 2] = (1/2)x[n - 2]
       (d) y[n] + (1/4)y[n - 1] - (1/8)y[n - 2] = (1/3)x[n]
       (e) y[n] + (1/4)y[n - 2] = x[n]
       (f) (E^2 - (1/4)){y[n]} = (E^2 + 1){x[n]}
       (g) (E - 1)(E + (1/4)){y[n]} = E{3x[n]}
       (h) (E - (1/2))^2 {y[n]} = x[n]

9.4-3  Consider a DT system with input x[n] and output y[n] described by the difference equation

       4y[n + 1] + y[n - 1] = 8x[n + 1] + 8x[n]

       (a) What is the order of this system?
       (b) Determine the characteristic mode(s) of the system.
       (c) Determine a closed-form expression for the system's impulse response h[n].

9.4-4  Repeat Prob. 9.4-3 for a system described by the difference equation

       y[n + 3] - (1/10)y[n + 2] - (3/10)y[n + 1] = 2x[n + 1]

9.4-5  Repeat Prob. 9.4-1 for

       (E^2 - 6E + 9){y[n]} = E{x[n]}

9.4-6  Repeat Prob. 9.4-1 for

       y[n] - 6y[n - 1] + 25y[n - 2] = 2x[n] - 4x[n - 1]

9.4-7  (a) For the general Nth-order difference Eq. (9.3), letting a1 = a2 = ··· = aN = 0 results in a general causal Nth-order LTI nonrecursive difference equation

       y[n] = Σ_{i=0}^{N} b_i x[n - i]

       Observe that the impulse response has only a finite (N) number of nonzero elements. For this reason, such systems are called finite-impulse response (FIR) systems. For a general recursive case [Eq. (9.7)], the impulse response has an infinite number of nonzero elements, and such systems are called infinite-impulse response (IIR) systems. Find the impulse response h[n] for this system. [Hint: The characteristic equation for this case is γ^N = 0. Hence, all the characteristic roots are zero. In this case, yc[n] = 0, and the approach in Sec. 9.4 does not work. Use a direct method to find h[n] by realizing that h[n] is the response to a unit impulse input.]
       (b) Find the impulse response of a nonrecursive LTID system described by the equation

       y[n] = 3x[n] - 5x[n - 1] - 2x[n - 3]

9.5-1  The convolution y[n] = (3^{-n} u[n + 5]) * (3^n u[-n - 2]) can be represented in a closed form involving constants C1, C2, γ1, γ2, and N. Using the graphical convolution procedure, determine the constants C1, C2, γ1, γ2, and N.

9.5-2  Let x[n] = (0.5)^n (u[n + 4] - u[n - 4]) be input into an LTID system with an impulse response given by

       h[n] = { 2,  (n mod 6) < 4 and n ≥ 0
              { 0,  otherwise

       Recall, (n mod p) is the remainder of the division n/p. The system is described according to the difference equation y[n] - y[n - 6] = 2x[n] + 2x[n - 1] + 2x[n - 2] + 2x[n - 3].
       (a) Determine the six characteristic roots (γ1 through γ6) of the system.
       (b) Determine the value of y[10], the zero-state output of system h[n] in response to x[n] at time n = 10. Express your result in decimal form to at least three decimal places (e.g., y[10] = 3.142).

9.5-3  Use the graphical convolution procedure to determine the following:
       (a) ya[n] = u[n] * (u[n - 5] - u[n - 9] + (0.5)^{(n-8)} u[n - 9])
       (b) yb[n] = (1/2)^{|n|} * u[-n + 5]
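As the FIR discussion above observes, driving a nonrecursive equation with a unit impulse simply reads out its coefficient sequence: h[n] = b_n. A small Python sketch of that idea (coefficients here are illustrative, not from any particular problem; the book's own examples use MATLAB):

```python
def fir_output(b, x):
    """Evaluate the nonrecursive system y[n] = sum_i b[i] * x[n - i]
    for finite-length sequences starting at n = 0."""
    return [sum(b[i] * x[n - i] for i in range(len(b)) if 0 <= n - i < len(x))
            for n in range(len(x) + len(b) - 1)]

delta = [1, 0, 0, 0, 0, 0]      # unit impulse
b = [4, 0, -1, 2]               # illustrative FIR coefficients
# The impulse response reproduces the coefficient list:
print(fir_output(b, delta)[:4])  # [4, 0, -1, 2]
```

Only finitely many samples of h[n] are nonzero, which is exactly why such systems are called finite-impulse response.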

9.5-4  An LTID system has impulse response h[n] = (0.5)^{(n+3)} (u[n] - u[n + 6]). A 6-periodic DT input signal x[n] is given by

       x[n] = { 1,  n = 0, ±3, ±6, ±9, ±12, ...
              { 2,  n = 1, 1 ± 6, 1 ± 12, ...
              { 3,  n = 2, 2 ± 6, 2 ± 12, ...
              { 0,  otherwise

       (a) Is system h[n] causal? Mathematically justify your answer.
       (b) Determine the value of y[12], the zero-state output of system h[n] in response to x[n] at time n = 12. Express your result in decimal form to at least three decimal places (e.g., y[12] = 1.234).

9.5-5  Find the (zero-state) response y[n] of an LTID system whose unit impulse response is

       h[n] = (-2)^n u[n - 1]

       and the input is x[n] = e^{-n} u[n + 1]. Find your answer by computing the convolution sum and also by using Table 9.1.

9.5-6  Find the (zero-state) response y[n] of an LTID system if the input is x[n] = 3^{n-1} u[n + 2], and

       h[n] = (1/2)[δ[n - 2] - (-2)^{n+1} u[n - 3]]

9.5-7  Find the (zero-state) response y[n] of an LTID system if the input x[n] = (3)^{n+2} u[n + 1].

9.5-8  Find the (zero-state) response y[n] of an LTID system if the input x[n] = (3)^{-n+2} u[n + 3], and

       h[n] = 3(n - 2)(2)^{n-3} u[n - 4]

9.5-9  Find the (zero-state) response y[n] of an LTID system if its input x[n] = (2)^n u[n - 1], and

       h[n] = (3)^n cos(πn/3 - 0.5) u[n]

       Find your answer using only Table 9.1.

9.5-10 Consider an LTID system ("system 1") described by (E - (1/2)){y[n]} = x[n].
       (a) Determine the impulse response h1[n] for system 1. Simplify your answer.
       (b) Determine the step response s[n] for system 1 (the step response is the output in response to a unit step input). Simplify your answer.
       (c) Determine the impulse response hcascade[n] of system 1 cascaded with an LTID system with impulse response h2[n] = -3u[n - 13]. Simplify your answer.

9.5-11 Derive the results in entries 1, 2, and 3 in Table 9.1. [Hint: You may need to use the information in Sec. B.8.3.]

9.5-12 Derive the results in entries 4, 5, and 6 in Table 9.1. [Hint: You may need to use the information in Sec. B.8.3.]

9.5-13 Derive the results in entries 7 and 8 in Table 9.1. [Hint: You may need to use the information in Sec. B.8.3.]

9.5-14 Derive the results in entries 9 and 11 in Table 9.1. [Hint: You may need to use the information in Sec. B.8.3.]

9.5-15 Find the total response of a system specified by the equation

       y[n + 1] + 2y[n] = x[n + 1]

       if y[-1] = 10 and the input x[n] = e^{-n} u[n].

9.5-16 Find an LTID system's (zero-state) response if its impulse response h[n] = (0.5)^n u[n] and the input x[n] is
       (a) 2^n u[n]
       (b) 2^{n-3} u[n]
       (c) 2^n u[n - 2]
       [Hint: You may need to use the convolution shift property of Eq. (9.19).]

9.5-17 For a system specified by the equation

       y[n] = x[n] - 2x[n - 1]

       find the system response to input x[n] = u[n]. What is the order of the system? What type of system (recursive or nonrecursive) is this? Is knowledge of initial condition(s) necessary to find the system response? Explain.

9.5-18 (a) A discrete-time LTI system is shown in Fig. P9.5-18. Express the overall impulse response of the system, h[n], in terms of h1[n], h2[n], h3[n], h4[n], and h5[n].
       (b) Two LTID systems in cascade have impulse responses h1[n] and h2[n], respectively. Show that if h1[n] = (0.9)^n u[n] - 0.5(0.9)^{n-1} u[n - 1] and h2[n] = (0.5)^n u[n] - 0.9(0.5)^{n-1} u[n - 1], the cascade system is an identity system.
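Many of the zero-state response problems above come down to a direct evaluation of the convolution sum y[n] = Σ_k x[k] h[n - k]. A self-contained Python sketch for finite-length causal sequences (illustrative data; the book's own examples use MATLAB, where `conv` does this):

```python
def convolve(x, h):
    """Direct evaluation of the convolution sum y[n] = sum_k x[k]*h[n-k]
    for finite-length sequences, both assumed to start at n = 0."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Truncated exponentials, e.g. x[n] = 0.5^n and h[n] = 2^n for 0 <= n <= 3:
x = [0.5 ** n for n in range(4)]
h = [2.0 ** n for n in range(4)]
print(convolve(x, h)[:3])   # [1.0, 2.5, 5.25]
```

For infinite-duration signals the same sum is evaluated in closed form using Table 9.1 rather than term by term.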

Figure P9.5-18 (block diagram with input x[n], output y[n], and subsystems h1[n] through h5[n])

9.5-19 (a) Show that for a causal system, Eq. (9.24) can also be expressed as

       g[n] = Σ_{k=0}^{n} h[n - k]

       (b) How would the expressions in part (a) change if the system is not causal?

9.5-20 An LTID system with input x[n] and output y[n] has impulse response h[n] = 2(u[n + 2] - u[n - 3]).
       (a) Write a constant-coefficient linear difference equation that has the given impulse response. [Hint: First express h[n] in terms of delta functions δ[n].]
       (b) Using graphical convolution, determine the zero-state output of this system in response to the anticausal input x[n] = 2^n u[-n]. A simplified closed-form solution is required.

9.5-21 Consider three LTID systems: system 1 has impulse response h1[n] = [2, -3, 4], system 2 has impulse response h2[n] = [0, 0, -6, -9, 3] (↓ identifying the n = 0 values), and system 3 is an identity system (output equals input).
       (a) Determine the overall impulse response h[n] if system 1 is connected in cascade with a parallel connection of systems 2 and 3.
       (b) For input x[n] = u[-n], determine the zero-state response yzsr[n] of system 2.

9.5-22 In the Ch. 8 savings account problem described in Ex. 8.10, a woman deposits $500 at the beginning of every month, starting at n = 0, with the exception of n = 4, when instead of depositing $500 she withdraws $1000. Find y[n] if the interest rate is 1% per month (r = 0.01).

9.5-23 To pay off a loan of M dollars in N payments using a fixed monthly payment of P dollars, show that

       P = rM / (1 - (1 + r)^{-N})

       where r is the interest rate per dollar per month. [Hint: This problem can be modeled by Ch. 8 Eq. (8.10) with payments of P dollars starting at n = 1. The problem can be approached in two ways. First, consider the loan as the initial condition y0[0] = -M and the input x[n] = Pu[n - 1]. The loan balance is the sum of the zero-input component (due to the initial condition) and the zero-state component h[n] * x[n]. Second, consider the loan as an input -M at n = 0 along with the input due to payments. The loan balance is now exclusively a zero-state component h[n] * x[n]. Because the loan is paid off in N payments, set y[N] = 0.]

9.5-24 A man receives an automobile loan of $10,000 from a bank at the interest rate of 1.5% per month. His monthly payment is $500, with the first payment due one month after he receives the loan. Compute the number of payments required to pay off the loan. Note that the last payment may not be exactly $500. [Hint: Follow the procedure in Prob. 9.5-23 to determine the balance y[n]. To determine N, the number of payments, set y[N] = 0. In general, N will not be an integer. The number of payments K is the largest integer ≤ N. The residual payment is |y[K]|.]

9.5-25 Letting ↓ identify the n = 0 values, use the sliding-tape method to determine the following:
       (a) ya = [2, 3, -2, -3] * [-10, 0, -5]
       (b) yb = [2, -1, 3, -2] * [-1, -4, 1, -2]
       (c) yc = [0, 0, 3, 2, 1, 2, 3] * [2, 3, -2, 1]
       (d) yd = [5, 0, 0, -2, 8] * [-1, 1, 3, 3, -2, 3]
       (e) ye = ([1, -1] * [1, -1]) * ([1, -1] * [1, -1])
       (f) yf = ([2, -1] * [1, -2]) * ([1, -2] * [2, -1])
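When checking sliding-tape results like those above, the only bookkeeping beyond the convolution sum itself is the time origin: if x starts at index nx0 and h starts at nh0, the product sequence x * h starts at nx0 + nh0. A Python sketch (the n = 0 positions chosen below are assumed for illustration, since the arrow placements do not survive in this scan; the book's examples use MATLAB):

```python
def convolve_with_origin(x, nx0, h, nh0):
    """Convolve two finite sequences and track the time index of the
    first output element: x starting at nx0 times h starting at nh0
    yields y starting at nx0 + nh0."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y, nx0 + nh0

# Assumed origins: x starts at n = -2, h starts at n = 0.
y, n0 = convolve_with_origin([2, 3, -2, -3], -2, [-10, 0, -5], 0)
print(y, "starting at n =", n0)   # [-20, -30, 10, 15, 10, 15] starting at n = -2
```

This start-index rule is what lets a vector answer be reported together with the time index of its leftmost element.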

9.5-26 Outside the values shown, assume all signals are zero. Let ↓ identify the n = 0 values and consider DT signals x[n] and h[n] whose nonzero values are given as x[n] = [1, 2, 3, 4, 5] and h[n] = [-2, -1, 1, 2]. Use DT convolution (any method) to determine y[n] = (2x[n - 30]) * (-3h[n - 10]). Express your result in vector notation, making sure to indicate the time index of the leftmost (nonzero) element.

9.5-27 Using the sliding-tape algorithm, show that
       (a) u[n] * u[n] = (n + 1)u[n]
       (b) (u[n] - u[n - m]) * u[n] = (n + 1)u[n] - (n - m + 1)u[n - m]

9.5-28 Using the sliding-tape algorithm, find x[n] * g[n] for the signals shown in Fig. P9.5-28.

9.5-29 Repeat Prob. 9.5-28 for the signals shown in Fig. P9.5-29.

9.5-30 Repeat Prob. 9.5-28 for the signals shown in Fig. P9.5-30.

9.5-31 Letting ↓ identify n = 0, define the nonzero values of signal x[n] as [1, 2, 2]. Similarly, define the nonzero values of signal y[n] as [3, 4, 6, 6, 11, 2, -2]. Using the sliding-tape algorithm as the basis for your work, determine the signal h[n] so that y[n] = x[n] * h[n].

9.5-32 The convolution sum in Eq. (9.20) can be expressed in matrix form as y = Hx, where y is a column vector containing y[0], y[1], ..., y[n]; x is a column vector containing x[0], x[1], ..., x[n]; and H is a lower triangular matrix defined as

       H = [ h[0]    0        ···  0
             h[1]    h[0]     ···  0
             ···
             h[n]    h[n-1]   ···  h[0] ]

       Knowing h[n] and the output y[n], we can determine the input x[n] according to x = H^{-1}y. This operation is the reverse of convolution and is known as deconvolution. Moreover, knowing x[n] and y[n], we can determine h[n]. This can be done by expressing the foregoing matrix equation as n + 1 simultaneous equations in terms of n + 1 unknowns h[0], h[1], ..., h[n]. These equations can readily be solved iteratively. Thus, we can synthesize a system that yields a certain output y[n] for a given input x[n].
       (a) Design a system (i.e., determine h[n]) that will yield the output sequence (8, 12, 14, 15, 15.5, 15.75, ...) for the input sequence (1, 1, 1, 1, 1, 1, ...).
       (b) For a system with the impulse response sequence (1, 2, 4, ...), the output sequence was (1, 7/3, 43/9, ...). Determine the input sequence.

9.5-33 The sliding-tape method is conceptually quite valuable in understanding the convolution mechanism. With it, we can verify that DT convolution of Eq. (9.20) can be performed from an array using the sets x[0], x[1], x[2], ... and h[0], h[1], h[2], ..., as depicted in Fig. P9.5-33. The (i, j)th element (the element in the ith row and jth column) is given by x[i]h[j]. We add the elements of the array along diagonals to produce y[n] = x[n] * h[n]. For example, if we sum the elements corresponding to the first diagonal of the array, we obtain y[0]. Similarly, if we sum along the second diagonal, we obtain y[1], and so on. Draw the array for the signals x[n] and h[n] in Ex. 9.15, and find x[n] * h[n].

9.5-34 Using Eq. (9.27), show that the transfer function of a unit delay is H[z] = 1/z.

9.5-35 A second-order LTID system has zero-input response

       y0[n] = Σ_{k=0}^{∞} [2 + (1/2)^k] δ[n - k]

       (a) Determine the characteristic equation of this system, a0γ^2 + a1γ + a2 = 0.
       (b) Find a bounded, causal input with infinite duration that would cause a strong response from this system. Justify your choice.
       (c) Find a bounded, causal input with infinite duration that would cause a weak response from this system. Justify your choice.
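The iterative deconvolution described in Prob. 9.5-32 is just forward substitution on the lower-triangular system: each new unknown is fixed by one new equation. A Python sketch, here solving y = x * h for h (neutral illustrative data, not the data of any problem; the book's examples use MATLAB):

```python
def deconvolve(y, x):
    """Recover h from y = x * h by forward substitution on the
    lower-triangular system y = Hx of Prob. 9.5-32, rearranged to
    solve for h. Requires x[0] != 0; sequences start at n = 0."""
    n_h = len(y) - len(x) + 1
    h = []
    for n in range(n_h):
        # Subtract the contribution of already-known h[k], k < n.
        acc = sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
        h.append((y[n] - acc) / x[0])
    return h

# If x = [1, 1] and y = x * h = [2, 2, 1, 1], then h = [2, 0, 1]:
print(deconvolve([2, 2, 1, 1], [1, 1]))   # [2.0, 0.0, 1.0]
```

Swapping the roles of x and h in the same routine recovers the input from the output, which is the deconvolution direction x = H^{-1}y described in the problem.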

Figure P9.5-28 (signals x[n] and g[n])

Figure P9.5-29 (signals x[n] and g[n])

Figure P9.5-30 (a) and (b) (signal pairs x[n] and g[n])

9.5-36

An LTID filter has an impulse response function given by h1[n] = δ[n + 2] - δ[n - 2]. A second LTID system has an impulse response function given by h2[n] = n(u[n + 4] - u[n - 4]).
       (a) Carefully sketch the functions h1[n] and h2[n] over (-10 ≤ n ≤ 10).
       (b) Assume that the two systems are connected in parallel, as shown in Fig. P9.5-36a. Determine the impulse response hp[n] for the parallel system in terms of h1[n] and h2[n]. Sketch hp[n] over (-10 ≤ n ≤ 10).
       (c) Assume that the two systems are connected in cascade, as shown in Fig. P9.5-36b. Determine the impulse response hs[n] for the cascade system in terms of h1[n] and h2[n]. Sketch hs[n] over (-10 ≤ n ≤ 10).

9.5-37 This problem investigates an interesting application of discrete-time convolution: the expansion of certain polynomial expressions.
       (a) By hand, expand (z^3 + z^2 + z + 1)^2. Compare the coefficients to [1, 1, 1, 1] * [1, 1, 1, 1].

Figure P9.5-33 (the convolution array; summing along successive diagonals yields y[0], y[1], y[2], ...):

              h[0]       h[1]       h[2]       h[3]    ···
       x[0]   x[0]h[0]   x[0]h[1]   x[0]h[2]   x[0]h[3]
       x[1]   x[1]h[0]   x[1]h[1]   x[1]h[2]   x[1]h[3]
       x[2]   x[2]h[0]   x[2]h[1]   x[2]h[2]   x[2]h[3]
       x[3]   x[3]h[0]   x[3]h[1]   x[3]h[2]   x[3]h[3]
       ···

Figure P9.5-36 (a) parallel connection of the two systems, input x[n] and output yp[n]; (b) cascade connection, input x[n] and output ys[n]

Figure P9.5-36 (b) Fonnulate a relationship between discrete­ time convolution and the expansion of constant-coefficient polynomial expres­ sions. (c) Use convolution to expand (z-4 - 2z-3 + 3,-2)4. (d) Use convolution to expand (z5 + 2z4 + 3z2 + 5)2 cz-4 - 5c2 + 13). 9

.s.33 Joe likes coffee, and he drinks his coffee according to a very particular routine. He begins by adding 2 teaspoons of sugar to his mug, which he then fills to the brim with hot coffee. He drinks 2/3 of the mug's contents, adds another 2 teaspoons of sugar, and tops the mug off with steaming hot coffee. This refill procedure continues, sometimes for many, many rups of coffee. Joe has noted that his coffee tends to taste sweeter with the number of refills.

Let independent variable n designate the coffee refill number. In this way, 11 = 0 indicates the first cup of coffee, n = l is the first refill, and so forth. Let x[n] represent the sugar (measured in teaspoons) added into the system (a coffee mug) on refill n. Let y[n] designate the amount of sugar (again, teaspoons) contained in the mug on refill 11. (a) The sugar (teaspoons) in Joe's coffee can be represented using a standard second-order constant coefficient differ­ ence equation y[n] + a1y[n - l ] + a2y[n 2] = box(n] + b i x[n - l] + b-ix[11 - 2]. Determine the constants a 1, a2, bo, bi, and b2. (b) Determinex[11J, the driving function to this system. (c) Solve the difference equation for y[n]. This requires finding the total solution. Joe always starts with a clean mug from the

834

CHAPTER 9

TIME-DOMAIN ANALYSIS OF DISCRETE-TIME SYSTEMS conditions are zero and x[n] = u[n). The subscript R is only used to emphasize a recursive solution. (c) Define yc[n] = x[n] * h[n]. Using x[n] = u[n] and h[n] from part (a), computeyc[4J. The subscript C is only used to emphasize a convolution solution. (d) In this chapter, both recursion and con"0· lution are presented as potential meth'l.ls to compute the zero-state response (ZSR) of a discrete-time system. Comparing parts (b) and (c) , we see that YR[4] :/= yc[4]. Why are the two results not the same? Which method, if any, yields the correct ZSR value?

dishwasher, so y[-1] (the sugar content before the first cup) is zero. (d) Determine the steady-state value of y[n]. That is, what is y[11] as n ➔ oo? If possible, suggest a way of modifying x[11] so that the sugar content of Joe's coffee remains a constant for all nonnegative n.

9.5-39

A system is called complex if a real-valued input can produce a complex-valued output. Consider a causal complex system described by a first-order constant coefficient linear dif­ ference equation:

(jE+0.5)y[n] = (-SE)x[n] (a) Determine the impulse response function h[n] for this system. (b) Given inputx[n] =u[n-5] and initial con­ dition yo[- I]= j, determine the system's total output y[n] for n � 0.

9.5-40

An LTID system has impulse response function h[n] = n(u[n - 2] - u[n+ 2]). (a) Carefully sketch the function h[n] over (- 5 .5 n .5 5). (b) Determine the difference equation repre­ sentation of this system, using y[n] to designate the output and x[nJ to designate the input.

9.5-41

Consider three discrete-time signals: x[nJ, y[n], and z[n]. Denoting convolution as *• identify the expression(s) that is(are) equivalent to x[n](y[n] * z[n]): (a) (x[n] * y[n])z[nJ (b) (x[n]y[n]) * (x[n]z[n]) (c) (x[n]y[n]) * z[n) (d) none of the above Justify your answer!

9.5-42

A causal system with inputx[n) and outputy[n] is described by

y[n]-ny[n - l] =x[n] (a) By recursion, detennine the first six nonzero values of h[n], the response to x[n] = o[n]. Do you think this system is BIBO-stable? Why? (b) Compute YR[4] recursively from YR[n] nyR[n - l] = x[n], assuming all initial
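The polynomial-expansion connection explored in Prob. 9.5-37 follows because multiplying two polynomials convolves their coefficient sequences. A Python sketch of that correspondence (the book's own examples use MATLAB, where `conv` serves the same purpose):

```python
def poly_mult(a, b):
    """Multiply two polynomials given as coefficient lists (highest
    power first). Coefficient-wise, this is exactly DT convolution."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (z^3 + z^2 + z + 1)^2 = z^6 + 2z^5 + 3z^4 + 4z^3 + 3z^2 + 2z + 1:
print(poly_mult([1, 1, 1, 1], [1, 1, 1, 1]))   # [1, 2, 3, 4, 3, 2, 1]
```

Negative powers of z shift where the coefficient list starts but leave the convolution itself unchanged, which is why the same routine handles expressions like those in parts (c) and (d).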

9.6-1  In Sec. 9.6.1, we showed that for BIBO stability in an LTID system, it is sufficient for its impulse response h[n] to satisfy Eq. (9.30). Show that this is also a necessary condition for the system to be BIBO-stable. In other words, show that if Eq. (9.30) is not satisfied, there exists a bounded input that produces an unbounded output. [Hint: Assume that a system exists for which h[n] violates Eq. (9.30), yet its output is bounded for every bounded input. Establish the contradiction in this statement by considering an input x[n] defined by x[n1 - m] = 1 when h[m] > 0 and x[n1 - m] = -1 when h[m] < 0, where n1 is some fixed integer.]

9.6-2  Each of the following equations specifies an LTID system. Determine whether each of these systems is BIBO-stable or -unstable. Determine also whether each is asymptotically stable, unstable, or marginally stable.
       (a) y[n + 2] + 0.6y[n + 1] - 0.16y[n] = x[n + 1] - 2x[n]
       (b) y[n] + 3y[n - 1] + 2y[n - 2] = x[n - 1] + 2x[n - 2]
       (c) (E - 1)^2 (E + (1/2)) y[n] = x[n]
       (d) y[n] + 2y[n - 1] + 0.96y[n - 2] = x[n]
       (e) y[n] + y[n - 1] - 2y[n - 2] = x[n] + 2x[n - 1]
       (f) (E^2 - 1)(E^2 + 1) y[n] = x[n]

9.6-3  Consider two LTID systems in cascade, as illustrated in Fig. 9.13. The impulse response of system S1 is h1[n] = 2^n u[n], and the impulse response of system S2 is h2[n] = δ[n] - 2δ[n - 1]. Is the cascaded system asymptotically stable or unstable? Determine the BIBO stability of the composite system.

9.6-4  Figure P9.6-4 locates the characteristic roots of 10 causal LTID systems, labeled A through J. Each system has only two roots and is described using operator notation as Q(E)y[n] = P(E)x[n]. All plots are drawn to scale, with the unit circle shown for reference. For each of the following parts, identify all the answers that are correct.
       (a) Identify all systems that are unstable.
       (b) Assuming all systems have P(E) = E^2, identify all systems that are real. Recall that a real system always generates a real-valued response to a real-valued input.
       (c) Identify all systems that support oscillatory natural modes.
       (d) Identify all systems that have at least one mode whose envelope decays at a rate of 2^{-n}.
       (e) Identify all systems that have only one mode.

9.6-5  A discrete-time LTI system has an impulse response given by

       h[n] = δ[n] + (1/2)^n u[n]

       (a) Is the system stable? Is the system causal? Justify your answers.
       (b) Plot the signal x[n] = u[n - 3] - u[n + 3].
       (c) Determine the system's zero-state response y[n] to the input x[n] = u[n - 3] - u[n + 3]. Plot y[n] over (-10 ≤ n ≤ 10).

9.6-6  An LTID system has an impulse response given by

       h[n] = (1/2)^n u[n - 1]

       (a) Is the system causal? Justify your answer.
       (b) Compute Σ_{n=-∞}^{∞} |h[n]|. Is this system BIBO-stable?
       (c) Compute the energy and power of the input signal x[n] = 3u[n - 5].
       (d) Using input x[n] = 3u[n - 5], determine the zero-state response of this system at time n = 10. That is, determine yzs[10].

9.6-7  Show that a marginally stable system is BIBO-unstable. Verify your result by considering a system with characteristic roots on the unit circle, and show that for an input of the form of the natural mode (which is bounded), the response is unbounded.

9.7-1  Determine a constant-coefficient linear difference equation that describes a system for which the input x[n] = 2(1/2)^n u[-n - 4] causes resonance.

9.7-2  If one exists, determine a real input x[n] that will cause resonance in the causal LTID system described by (E^2 + 1){y[n]} = (E + 0.5){x[n]}. If no such input exists, explain why not.

9.7-3  Consider two lowpass LTID systems, one with infinite-duration impulse response h1[n] = (0.5)^n u[n] and the other with finite-duration impulse response h2[n] = 2(u[n] - u[n - 4]). Which system (1, 2, both, or neither) would more efficiently transmit a binary communication signal? Carefully justify your result.

Figure P9.6-4
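The absolute-summability condition of Eq. (9.30) that Prob. 9.6-1 builds on can be probed numerically with partial sums. A Python sketch (partial sums only suggest convergence; they do not prove it, and the book's own examples use MATLAB):

```python
def bibo_bound(h_fn, n_terms=200):
    """Partial sum of sum_n |h[n]| (the quantity in Eq. (9.30)) for a
    causal system; h_fn maps n to h[n]."""
    return sum(abs(h_fn(n)) for n in range(n_terms))

# h[n] = (0.5)^n u[n]: the partial sums approach 2, consistent with
# BIBO stability (geometric series with ratio 1/2).
print(round(bibo_bound(lambda n: 0.5 ** n), 6))   # 2.0
```

For a marginally stable mode such as h[n] = u[n], the partial sum grows without bound as n_terms increases, which is the behavior the necessity proof in Prob. 9.6-1 exploits.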

9.8-1  Write a MATLAB program that recursively computes and then plots the solution to y[n] - y[n - 1] + y[n - 2] = x[n] for (0 ≤ n ≤ 100), given x[n] = δ[n] + u[n - 50] and y[-2] = y[-1] = 2.

9.8-2  Write MATLAB code to compute and plot the DT convolutions of Prob. 9.5-25.

9.8-3  An indecisive student contemplates whether he should stay home or take his final exam, which is being held 2 miles away. Starting at home, the student travels half the distance to the exam location before changing his mind. The student turns around and travels half the distance between his current location and his home before changing his mind again. This process of changing direction and traveling half the remaining distance continues until the student either reaches a destination or dies from exhaustion.
       (a) Determine a suitable difference equation description of this system.
       (b) Use MATLAB to simulate the difference equation in part (a). Where does the student end up as n → ∞? How does your answer change if the student goes two-thirds of the way each time, rather than halfway?
       (c) Determine a closed-form solution to the equation in part (a). Use this solution to verify the results in part (b).

9.8-4  The cross-correlation function between x[n] and y[n] is given as

       rxy[k] = Σ_{n=-∞}^{∞} x[n] y[n - k]

       Notice that rxy[k] is quite similar to the convolution sum. The independent variable k corresponds to the relative shift between the two inputs.
       (a) Express rxy[k] in terms of convolution. Is rxy[k] = ryx[k]?
       (b) Cross-correlation is said to indicate similarity between two signals. Do you agree? Why or why not?
       (c) If x[n] and y[n] are both finite duration, MATLAB's conv command is well suited to compute rxy[k]. Write a MATLAB function that computes the cross-correlation function using the conv command. Four vectors are passed to the function (x, y, nx, and ny) corresponding to the inputs x[n], y[n], and their respective time vectors. Notice that x and y are not necessarily the same length. Two outputs should be created (rxy and k) corresponding to rxy[k] and its shift vector.
       (d) Test your code from part (c) using x[n] = u[n - 5] - u[n - 10] over (0 ≤ n = nx ≤ 20) and y[n] = u[-n - 15] - u[-n - 10] + δ[n - 2] over (-20 ≤ n = ny ≤ 10). Plot the result rxy as a function of the shift vector k. What shift k gives the largest magnitude of rxy[k]? Does this make sense?

9.8-5  A causal N-point max filter assigns y[n] to the maximum of {x[n], ..., x[n - (N - 1)]}.
       (a) Write a MATLAB function that performs N-point max filtering on a length-M input vector x. The two function inputs are vector x and scalar N. To create the length-M output vector y, initially pad the input vector with N - 1 zeros. The MATLAB command max may be helpful.
       (b) Test your filter and MATLAB code by filtering a length-45 input defined as x[n] = cos(πn/5) + δ[n - 30] - δ[n - 35]. Separately plot the results for N = 4, N = 8, and N = 12. Comment on the filter behavior.
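The convolution connection asked for in Prob. 9.8-4(a) is that cross-correlating x with y is the same as convolving x with a time-reversed y, provided the shift axis is tracked. A Python sketch for finite sequences (both assumed to start at n = 0 for simplicity; the problem's MATLAB version uses conv with explicit time vectors):

```python
def xcorr(x, y):
    """Cross-correlation r_xy[k] = sum_n x[n] y[n-k] for finite
    sequences starting at n = 0, computed as the convolution of x[n]
    with y[-n]. Returns (r, kmin), where kmin is the shift of r[0]."""
    yr = y[::-1]                      # y[-n], now starting at n = -(len(y)-1)
    r = [0] * (len(x) + len(yr) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(yr):
            r[i + j] += xi * yj
    return r, -(len(y) - 1)

r, kmin = xcorr([1, 2, 3], [1, 1])
print(r, "starting at k =", kmin)   # [1, 3, 5, 3] starting at k = -1
```

Note that rxy[k] is generally not equal to ryx[k]; reversing the roles of x and y flips the correlation about k = 0, which is one of the points part (a) is probing.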

9.8-6 A causal N-point min filter assigns y[n] to the minimum of (x[n], ... ,x[n- (N-1))}. (a) Write a MATLAB function that performs N-point min filtering on a length-M input vector x. The two function inputs are vec­ tor x and scalar N. To create the length-M output vector y, initially pad the input vector with N - I zeros. The MATLAB command min may be helpful. (b) Test your filter and MATLAB code by fil­ tering a length-45 input defined as x[n] = cos (rrn/5) + 8[11 - 30] - c5[n - 35). Sepa­ rately plot the results for N =4,N = 8, and N = 12. Comment on the filter behavior. 9.8-7

A causal N-point median filter assigns y[n] to the median of {x[n], ..., x[n − (N − 1)]}. The median is found by sorting the sequence {x[n], ..., x[n − (N − 1)]} and choosing the middle value (odd N) or the average of the two middle values (even N).

(a) Write a MATLAB function that performs N-point median filtering on a length-M input vector x. The two function inputs are vector x and scalar N. To create the length-M output vector y, initially pad the input vector with N − 1 zeros. The MATLAB command sort or median may be helpful. (b) Test your filter and MATLAB code by filtering a length-45 input defined as x[n] = cos(πn/5) + δ[n − 30] − δ[n − 35]. Separately plot the results for N = 4, N = 8, and N = 12. Comment on the filter behavior.

9.8-8

Recall that y[n] = x[n/N] represents an upsample-by-N operation. An interpolation filter replaces the inserted zeros with more realistic values. A linear interpolation filter has impulse response

h[n] = Σ_{k=−(N−1)}^{N−1} (1 − |k|/N) δ[n − k]

(a) Determine a constant-coefficient difference equation that has impulse response h[n]. (b) The impulse response h[n] is noncausal. What is the smallest time shift necessary to make the filter causal? What is the effect of this shift on the behavior of the filter? (c) Write a MATLAB function that will compute the parameters necessary to implement an interpolation filter using MATLAB's filter command. That is, your function should output filter vectors b and a given an input scalar N. (d) Test your filter and MATLAB code. To do this, create x[n] = cos(n) for (0 ≤ n ≤ 9). Upsample x[n] by N = 10 to create a new signal x_up[n]. Design the corresponding N = 10 linear interpolation filter, filter x_up[n] to produce y[n], and plot the results.
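The upsample-then-interpolate idea of part (d) can be sketched in Python/NumPy (our illustration, using the causal, shifted version of h[n] from part (b); the problem itself asks for MATLAB's filter command):

```python
import numpy as np

N = 2                                # upsampling factor (kept small for illustration)
x = np.array([0.0, 2.0])             # original samples
xup = np.zeros(N * len(x))           # upsample: insert N-1 zeros between samples
xup[::N] = x
# causal linear-interpolation impulse response: h[n] = 1 - |n-(N-1)|/N, length 2N-1
n = np.arange(2 * N - 1)
h = 1 - np.abs(n - (N - 1)) / N
y = np.convolve(xup, h)              # interpolated output (delayed by N-1 samples)
```

For N = 2 the filter is the triangle [0.5, 1, 0.5], and the output places the midpoint value 1 between the samples 0 and 2, as linear interpolation should.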

9.8-9

A causal N-point moving-average filter has impulse response h[n] = (u[n] − u[n − N])/N.

(a) Determine a constant-coefficient difference equation that has impulse response h[n].

(b) Write a MATLAB function that will compute the parameters necessary to implement an N-point moving-average filter using MATLAB's filter command. That is, your function should output filter vectors b and a given a scalar input N. (c) Test your filter and MATLAB code by filtering a length-45 input defined as x[n] = cos(πn/5) + δ[n − 30] − δ[n − 35]. Separately plot the results for N = 4, N = 8, and N = 12. Comment on the filter behavior. (d) Problem 9.8-8 introduces linear interpolation filters, for use following an upsample-by-N operation. Within a scale factor, show that a cascade of two N-point moving-average filters is equivalent to the linear interpolation filter. What is the scale factor difference? Test this idea with MATLAB. Create x[n] = cos(n) for (0 ≤ n ≤ 9). Upsample x[n] by N = 10 to create a new signal x_up[n]. Design an N = 10 moving-average filter. Filter x_up[n] twice and scale to produce y[n]. Plot the results. Does the output from the cascaded pair of moving-average filters linearly interpolate the upsampled data?
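The cascade claim in part (d) is easy to check at the impulse-response level: convolving the length-N rectangle with itself gives a triangle. A Python/NumPy sketch (our illustration; the problem asks for MATLAB):

```python
import numpy as np

N = 4
h_ma = np.ones(N) / N                     # N-point moving-average impulse response
h_cascade = np.convolve(h_ma, h_ma)       # cascade of two moving-average filters
n = np.arange(2 * N - 1)
h_interp = 1 - np.abs(n - (N - 1)) / N    # causal linear-interpolation filter (Prob. 9.8-8)
```

The cascade equals the linear interpolation filter divided by N, i.e., the scale factor asked for in part (d) is N.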

FOURIER ANALYSIS OF DISCRETE-TIME SIGNALS

In Chs. 3, 4, and 6, we studied the ways of representing a continuous-time signal as a sum of sinusoids or exponentials. In this chapter, we shall discuss a similar development for discrete-time signals. Our approach is parallel to that used for continuous-time signals. We first represent a periodic x[n] as a Fourier series formed by a discrete-time exponential (or sinusoid) and its harmonics. Later we extend this representation to an aperiodic signal x[n] by considering x[n] as a limiting case of a periodic signal with the period approaching infinity.

10.1 PERIODIC SIGNAL REPRESENTATION BY DISCRETE-TIME FOURIER SERIES

A continuous-time periodic signal of period T₀ can be represented as a trigonometric Fourier series consisting of a sinusoid of the fundamental frequency ω₀ = 2π/T₀ and all its harmonics. The exponential form of the Fourier series consists of the exponentials e^{j0t} (= 1), e^{±jω₀t}, e^{±j2ω₀t}, e^{±j3ω₀t}, .... A discrete-time periodic signal can be represented by a discrete-time Fourier series using a parallel development. Recall that a periodic signal x[n] with period N₀ is characterized by the fact that

x[n] = x[n + N₀]

The smallest value of N₀ for which this equation holds is the fundamental period. The fundamental frequency is Ω₀ = 2π/N₀ rad/sample. An N₀-periodic signal x[n] can be represented by a discrete-time Fourier series made up of sinusoids of fundamental frequency Ω₀ = 2π/N₀ and its harmonics. As in the continuous-time case, we may use a trigonometric or an exponential form of the Fourier series. Because of its compactness and ease of mathematical manipulation, the exponential form is preferable to the trigonometric. For this reason, we shall bypass the trigonometric form and go directly to the exponential form of the discrete-time Fourier series. The exponential Fourier series consists of the exponentials e^{j0n}, e^{±jΩ₀n}, e^{±j2Ω₀n}, ..., e^{±jrΩ₀n}, ..., and so on. There would be an infinite number of harmonics, except for the property proved in Sec. 8.3.5 that discrete-time exponentials whose frequencies are separated by 2π (or integer multiples of 2π) are identical because

e^{j(Ω ± 2πm)n} = e^{jΩn} e^{±j2πmn} = e^{jΩn},   m integer

The consequence of this result is that the rth harmonic is identical to the (r + N₀)th harmonic. To demonstrate this, let g_r denote the rth harmonic e^{jrΩ₀n}. Then

g_{r+N₀} = e^{j(r+N₀)Ω₀n} = e^{jrΩ₀n} e^{jN₀Ω₀n} = e^{jrΩ₀n} = g_r

and

g_r = g_{r+N₀} = g_{r+2N₀} = ⋯ = g_{r+mN₀},   m integer

Thus, the first harmonic is identical to the (N₀ + 1)th harmonic, the second harmonic is identical to the (N₀ + 2)th harmonic, and so on. In other words, there are only N₀ independent harmonics, and their frequencies range over an interval 2π (because the harmonics are separated by Ω₀ = 2π/N₀). This means that, unlike the continuous-time counterpart, the discrete-time Fourier series has only a finite number (N₀) of terms. This result is consistent with our observation in Sec. 8.3.5 that all discrete-time signals are bandlimited to a band from −π to π. Because the harmonics are separated by Ω₀ = 2π/N₀, there can be only N₀ harmonics in this band. We also saw that this band can be taken from 0 to 2π or any other contiguous band of width 2π. This means we may choose the N₀ independent harmonics e^{jrΩ₀n} over 0 ≤ r ≤ N₀ − 1, or over −1 ≤ r ≤ N₀ − 2, or over 1 ≤ r ≤ N₀, or over any other suitable choice for that matter. Every one of these sets will have the same harmonics, although in different order. Let us consider the first choice, which corresponds to exponentials e^{jrΩ₀n} for r = 0, 1, 2, ..., N₀ − 1. The Fourier series for an N₀-periodic signal x[n] consists of only these N₀ harmonics and can be expressed as

x[n] = Σ_{r=0}^{N₀−1} D_r e^{jrΩ₀n},   Ω₀ = 2π/N₀

To compute the coefficients D_r, we multiply both sides by e^{−jmΩ₀n} and sum over n from n = 0 to N₀ − 1:

Σ_{n=0}^{N₀−1} x[n] e^{−jmΩ₀n} = Σ_{n=0}^{N₀−1} Σ_{r=0}^{N₀−1} D_r e^{j(r−m)Ω₀n}    (10.1)

The right-hand sum, after interchanging the order of summation, results in

Σ_{r=0}^{N₀−1} D_r [ Σ_{n=0}^{N₀−1} e^{j(r−m)Ω₀n} ]

The inner sum, according to Eq. (5.15) in Sec. 5.5, is zero for all values of r ≠ m. It is nonzero with a value N₀ only when r = m. This fact means the outside sum has only one term, D_m N₀ (corresponding to r = m). Therefore, the right-hand side of Eq. (10.1) is equal to D_m N₀, and

Σ_{n=0}^{N₀−1} x[n] e^{−jmΩ₀n} = D_m N₀

and

D_m = (1/N₀) Σ_{n=0}^{N₀−1} x[n] e^{−jmΩ₀n}

We now have a discrete-time Fourier series (DTFS) representation of an N₀-periodic signal x[n] as

x[n] = Σ_{r=0}^{N₀−1} D_r e^{jrΩ₀n}    (10.2)

where

D_r = (1/N₀) Σ_{n=0}^{N₀−1} x[n] e^{−jrΩ₀n},   Ω₀ = 2π/N₀    (10.3)

Observe that the DTFS Eqs. (10.2) and (10.3) are identical (within a scaling constant) to the DFT Eqs. (5.13) and (5.12).† Therefore, we can use the efficient FFT algorithm to compute the DTFS coefficients.
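To illustrate this FFT route to the DTFS, here is a Python/NumPy sketch (our illustration; the text's listings use MATLAB) for the sinusoid sin 0.1πn of Ex. 10.1, whose only nonzero coefficients should be D₁ = 1/(2j) and D₋₁ = D₁₉ = −1/(2j):

```python
import numpy as np

N0 = 20
n = np.arange(N0)
x = np.sin(0.1 * np.pi * n)    # one period of sin(0.1*pi*n); N0 = 20
D = np.fft.fft(x) / N0         # DTFS coefficients D_r, r = 0, ..., N0-1
```

Dividing the FFT by N₀ is exactly the scaling constant mentioned above.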

10.1.1 Fourier Spectra of a Periodic Signal x[n]

The Fourier series consists of N₀ components

D₀, D₁ e^{jΩ₀n}, D₂ e^{j2Ω₀n}, ..., D_{N₀−1} e^{j(N₀−1)Ω₀n}

The frequencies of these components are 0, Ω₀, 2Ω₀, ..., (N₀ − 1)Ω₀, where Ω₀ = 2π/N₀. The amount of the rth harmonic is D_r. We can plot this amount D_r (the Fourier coefficient) as a function of index r or frequency Ω. Such a plot, called the Fourier spectrum of x[n], gives us, at a glance, a graphical picture of the amounts of the various harmonics of x[n]. In general, the Fourier coefficients D_r are complex, and they can be represented in the polar form as

D_r = |D_r| e^{j∠D_r}

The plot of |D_r| versus Ω is called the amplitude spectrum, and that of ∠D_r versus Ω is called the angle (or phase) spectrum. These two plots together are the frequency spectra of x[n]. Knowing these spectra, we can reconstruct or synthesize x[n] according to Eq. (10.2). Therefore, the Fourier (or frequency) spectra, which are an alternative way of describing a periodic signal x[n], are in every way equivalent (in terms of the information) to the plot of x[n] as a function of n. The Fourier spectra of a signal constitute the frequency-domain description of x[n], in contrast to the time-domain description, where x[n] is specified as a function of index n (representing time). The results are very similar to the representation of a continuous-time periodic signal by an exponential Fourier series except that, generally, the continuous-time signal spectrum has infinite bandwidth and consists of an infinite number of exponential components (harmonics). The spectrum of the discrete-time periodic signal, in contrast, is bandlimited and has at most N₀ components.

† If we let x[n] = N₀x_n and D_r = X_r, Eqs. (10.2) and (10.3) are identical to Eqs. (5.13) and (5.12), respectively.

PERIODIC EXTENSION OF FOURIER SPECTRUM

We now show that if φ[r] is an N₀-periodic function of r, then

Σ_{r=0}^{N₀−1} φ[r] = Σ_{r=(N₀)} φ[r]    (10.4)

where r = (N₀) indicates summation over any N₀ consecutive values of r. Because φ[r] is N₀-periodic, the same values repeat with period N₀. Hence, the sum of any set of N₀ consecutive values of φ[r] must be the same no matter the value of r at which we start summing. Basically, it represents the sum over one cycle. To apply this result to the DTFS, we observe that e^{−jrΩ₀n} is N₀-periodic in r because

e^{−j(r+N₀)Ω₀n} = e^{−jrΩ₀n} e^{−jN₀Ω₀n} = e^{−jrΩ₀n} e^{−j2πn} = e^{−jrΩ₀n}

Therefore, if x[n] is N₀-periodic, x[n]e^{−jrΩ₀n} is also N₀-periodic. Hence, from Eq. (10.3), it follows that D_r is also N₀-periodic, as is D_r e^{jrΩ₀n}. Now, because of Eq. (10.4), we can express Eqs. (10.2) and (10.3) as

x[n] = Σ_{r=(N₀)} D_r e^{jrΩ₀n}    (10.5)

and

D_r = (1/N₀) Σ_{n=(N₀)} x[n] e^{−jrΩ₀n}    (10.6)

If we plot D_r for all values of r (rather than only 0 ≤ r ≤ N₀ − 1), then the spectrum D_r is N₀-periodic. Moreover, Eq. (10.5) shows that x[n] can be synthesized not only by the N₀ exponentials corresponding to 0 ≤ r ≤ N₀ − 1, but also by any successive N₀ exponentials in this spectrum, starting at any value of r (positive or negative). For this reason, it is customary to show the spectrum D_r for all values of r (not just over the interval 0 ≤ r ≤ N₀ − 1). Yet we must remember that to synthesize x[n] from this spectrum, we need to add only N₀ consecutive components. Along the Ω scale, D_r repeats every 2π intervals, and along the r scale, D_r repeats at intervals of N₀. Equations (10.5) and (10.6) show that both x[n] and its spectrum D_r are N₀-periodic and both have exactly the same number of components (N₀) over one period. Equation (10.6) shows that D_r is complex in general, and D_{−r} is the conjugate of D_r if x[n] is real. Thus,

|D_{−r}| = |D_r|   and   ∠D_{−r} = −∠D_r

so that the amplitude spectrum |D_r| is an even function, and ∠D_r is an odd function of r (or Ω). All these concepts will be clarified by the examples to follow. The first example is rather trivial and serves mainly to familiarize the reader with the basic concepts of the DTFS.
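Equation (10.5)'s claim, that any N₀ consecutive harmonics synthesize the same x[n], is easy to verify numerically. A Python/NumPy sketch (our illustration; the text works in MATLAB), exploiting the fact that D_r is N₀-periodic:

```python
import numpy as np

N0 = 8
n = np.arange(N0)
rng = np.random.default_rng(0)
x = rng.standard_normal(N0)          # one period of an arbitrary real signal
D = np.fft.fft(x) / N0               # DTFS coefficients over 0 <= r <= N0-1
Omega0 = 2 * np.pi / N0

def synthesize(r_values):
    """Synthesize x[n] from the N0 harmonics indexed by r_values (Eq. 10.5).
    Since D_r is N0-periodic, index r mod N0 picks the right coefficient."""
    return sum(D[r % N0] * np.exp(1j * r * Omega0 * n) for r in r_values)

x1 = synthesize(range(0, N0))        # r = 0, ..., N0-1
x2 = synthesize(range(-3, N0 - 3))   # any other N0 consecutive values of r
```

Both choices reproduce the original samples, because e^{j(r+N₀)Ω₀n} = e^{jrΩ₀n} for integer n.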

EXAMPLE 10.1 Discrete-Time Fourier Series of a Sinusoid

Find the discrete-time Fourier series (DTFS) for x[n] = sin 0.1πn (Fig. 10.1a). Sketch the amplitude and phase spectra.

In this case, the sinusoid sin 0.1πn is periodic because Ω/2π = 1/20 is a rational number, and the period N₀ is [see Eq. (8.5)]

Figure 10.1 Discrete-time sinusoid sin 0.1πn and its Fourier spectra.

N₀ = m(2π/Ω) = m(2π/0.1π) = 20m

The smallest value of m that makes 20m an integer is m = 1. Therefore, the period N₀ = 20, so that Ω₀ = 2π/N₀ = 0.1π, and from Eq. (10.5),

x[n] = Σ_{r=(20)} D_r e^{jr(0.1π)n}

where the sum is performed over any 20 consecutive values of r. We shall select the range −10 ≤ r < 10 (values of r from −10 to 9). This choice corresponds to synthesizing x[n] using the spectral components in the fundamental frequency range (−π < Ω ≤ π).

DRILL 10.1 DTFS over an Alternative Range

Find the DTFS for x[n] = sin 0.1πn by selecting the 20 spectral components over a range of r for which Ω > −3π rather than over the fundamental band. Show that this Fourier series is equivalent to that in Eq. (10.7).


DRILL 10.2 Discrete-Time Fourier Series of a Sum of Sinusoids

Find the DTFS for x[n] = 4 cos 0.2πn + 6 sin 0.5πn over the interval 0 ≤ r ≤ 19. Use Eq. (10.3) to compute D_r.

ANSWERS

N₀ = 20 and x[n] = 2e^{j0.2πn} + (3e^{−jπ/2})e^{j0.5πn} + (3e^{jπ/2})e^{j1.5πn} + 2e^{j1.8πn}


DRILL 10.3 Fundamental Period of Discrete-Time Sinusoids

Find the fundamental periods N₀, if any, for: (a) sin(3πn/4) and (b) cos 1.3n.

ANSWERS

(a) N₀ = 8, (b) N₀ does not exist because the sinusoid is not periodic.

EXAMPLE 10.2 Discrete-Time Fourier Series of a Periodic Gate Function

Compute and plot the discrete-time Fourier series for the periodic sampled gate function shown in Fig. 10.2a.

Figure 10.2 (a) Periodic sampled gate pulse and (b) its Fourier spectrum.

In this case, N₀ = 32 and Ω₀ = 2π/32 = π/16. Therefore,

x[n] = Σ_{r=(32)} D_r e^{jr(π/16)n}

where

D_r = (1/32) Σ_{n=(32)} x[n] e^{−jr(π/16)n}

For our convenience, we shall choose the interval −16 ≤ n ≤ 15 for this summation, although any other interval of the same width (32 points) would give the same result.†

D_r = (1/32) Σ_{n=−16}^{15} x[n] e^{−jr(π/16)n}

Now, x[n] = 1 for −4 ≤ n ≤ 4 and is zero for all other values of n. Therefore,

D_r = (1/32) Σ_{n=−4}^{4} e^{−jr(π/16)n}    (10.8)

This is a geometric progression with a common ratio e^{−j(π/16)r}. Therefore (see Sec. B.8.3),‡

D_r = (1/32) [e^{−j(5πr/16)} − e^{j(4πr/16)}] / [e^{−j(πr/16)} − 1]
    = (1/32) e^{−j(0.5πr/16)} [e^{−j(4.5πr/16)} − e^{j(4.5πr/16)}] / (e^{−j(0.5πr/16)} [e^{−j(0.5πr/16)} − e^{j(0.5πr/16)}])
    = (1/32) sin(4.5πr/16) / sin(0.5πr/16)
    = (1/32) sin(4.5rΩ₀) / sin(0.5rΩ₀),   Ω₀ = π/16    (10.9)

This spectrum (with its periodic extension) is depicted in Fig. 10.2b.

DISCRETE-TIME FOURIER SERIES USING MATLAB

Let us confirm our results by using MATLAB to directly compute the DTFS according to Eq. (10.3).

† In this example, we have used the same equations as those for the DFT in Ex. 5.9, within a scaling constant. In the present example, the values of x[n] at n = ±4 are taken as 1 (full value), whereas in Ex. 5.9 these values are 0.5 (half the value). This is the reason for the slight difference in spectra in Figs. 10.2b and 5.19d. Unlike continuous-time signals, discontinuity is a meaningless concept in discrete-time signals.
‡ Strictly speaking, the geometric progression sum formula applies only if the common ratio e^{−j(π/16)r} ≠ 1. When r = 0, this ratio is unity. Hence, Eq. (10.9) is valid for values of r ≠ 0. For the case r = 0, the sum in Eq. (10.8) is given by

(1/32) Σ_{n=−4}^{4} x[n] = 9/32

Fortunately, the value of D₀, as computed from Eq. (10.9), also happens to be 9/32. Hence, Eq. (10.9) is valid for all r.


>> N_0 = 32; n = (0:N_0-1); Omega_0 = 2*pi/N_0;
>> x_n = [ones(1,5) zeros(1,23) ones(1,4)];
>> for r = 0:N_0-1,
>>     X_r(r+1) = sum(x_n.*exp(-j*r*Omega_0*n))/N_0;
>> end
>> r = n; stem(r,real(X_r),'k.');
>> xlabel('r'); ylabel('X_r'); axis([0 31 -.1 0.3]);

The MATLAB result, shown in Fig. 10.3, matches Fig. 10.2b.


Figure 10.3 MATLAB-computed DTFS spectra for periodic sampled gate pulse of Ex. 10.2.

Alternatively, scaling the FFT by N₀ produces the exact same result (Fig. 10.3).

>> X_r = fft(x_n)/N_0;
>> stem(r,real(X_r),'k.'); xlabel('r'); ylabel('X_r'); axis([0 31 -.1 0.3]);
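The same confirmation can be sketched in Python/NumPy (our illustration, not from the text): the closed form of Eq. (10.9) should match fft(x)/N₀ at every r.

```python
import numpy as np

N0 = 32
# gate of Ex. 10.2: x[n] = 1 for -4 <= n <= 4 (mod 32), zero otherwise
x = np.concatenate((np.ones(5), np.zeros(23), np.ones(4)))
D_fft = np.fft.fft(x) / N0                 # DTFS coefficients via the FFT

r = np.arange(N0)
with np.errstate(invalid='ignore'):        # r = 0 gives 0/0 in the formula
    D_eq = np.sin(4.5 * np.pi * r / 16) / (32 * np.sin(0.5 * np.pi * r / 16))
D_eq[0] = 9 / 32                           # limiting value at r = 0 (see footnote)
```

The coefficients come out real, as they must for this even signal.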

10.2 APERIODIC SIGNAL REPRESENTATION BY FOURIER INTEGRAL

In Sec. 10.1, we succeeded in representing periodic signals as a sum of (everlasting) exponentials. In this section, we extend this representation to aperiodic signals. The procedure is conceptually identical to that used in Ch. 4 for continuous-time signals. Applying a limiting process, we now show that an aperiodic signal x[n] can be expressed as a continuous sum (integral) of everlasting exponentials. To represent an aperiodic signal x[n] such as the one illustrated in Fig. 10.4a by everlasting exponential signals, let us construct a new periodic signal x_{N₀}[n] formed by repeating the signal x[n] every N₀ units, as shown in Fig. 10.4b. The period N₀ is made large enough to avoid overlap between the repeating cycles (N₀ ≥ 2N + 1). The periodic signal x_{N₀}[n] can be represented by an exponential Fourier series. If we let N₀ → ∞, the signal x[n] repeats after an infinite interval, and therefore,

lim_{N₀→∞} x_{N₀}[n] = x[n]


Figure 10.4 Generation of a periodic signal by periodic extension of a signal x[n].

Thus, the Fourier series representing x_{N₀}[n] will also represent x[n] in the limit N₀ → ∞. The exponential Fourier series for x_{N₀}[n] is given by

x_{N₀}[n] = Σ_{r=(N₀)} D_r e^{jrΩ₀n},   Ω₀ = 2π/N₀    (10.10)

where

D_r = (1/N₀) Σ_{n=−∞}^{∞} x[n] e^{−jrΩ₀n}    (10.11)

The limits for the sum on the right-hand side of Eq. (10.11) should be from −N to N. But because x[n] = 0 for |n| > N, it does not matter if the limits are taken from −∞ to ∞. It is interesting to see how the nature of the spectrum changes as N₀ increases. To understand this behavior, let us define X(Ω), a continuous function of Ω, as

X(Ω) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}    (10.12)

From this definition and Eq. (10.11), we have

D_r = (1/N₀) X(rΩ₀)    (10.13)

This result shows that the Fourier coefficients D_r are 1/N₀ times the samples of X(Ω) taken every Ω₀ rad/s.† Therefore, (1/N₀)X(Ω) is the envelope for the coefficients D_r. We now let N₀ → ∞ by

† For the sake of simplicity, we assume D_r and therefore X(Ω) to be real. The argument, however, is also valid for complex D_r [or X(Ω)].


doubling N₀ repeatedly. Doubling N₀ halves the fundamental frequency Ω₀, with the result that the spacing between successive spectral components (harmonics) is halved, and there are now twice as many components (samples) in the spectrum. At the same time, by doubling N₀, the envelope of the coefficients D_r is halved, as seen from Eq. (10.13). If we continue this process of doubling N₀ repeatedly, the number of components doubles in each step; the spectrum progressively becomes denser, while its magnitude D_r becomes smaller. Note, however, that the relative shape of the envelope remains the same [proportional to X(Ω) in Eq. (10.12)]. In the limit, as N₀ → ∞, the fundamental frequency Ω₀ → 0, and D_r → 0. The separation between successive harmonics, which is Ω₀, is approaching zero (infinitesimal), and the spectrum becomes so dense that it appears to be continuous. But as the number of harmonics increases indefinitely, the harmonic amplitudes D_r become vanishingly small (infinitesimal). We discussed an identical situation in Sec. 4.1. We follow the procedure in Sec. 4.1 and let N₀ → ∞. According to Eq. (10.12),

X(rΩ₀) = Σ_{n=−∞}^{∞} x[n] e^{−jrΩ₀n}

Using Eq. (10.13), we can express Eq. (10.10) as

x_{N₀}[n] = Σ_{r=(N₀)} (1/N₀) X(rΩ₀) e^{jrΩ₀n} = Σ_{r=(N₀)} X(rΩ₀) e^{jrΩ₀n} (Ω₀/2π)

In the limit as N₀ → ∞, Ω₀ → 0 and x_{N₀}[n] → x[n]. Therefore,

x[n] = lim_{Ω₀→0} Σ_{r=(N₀)} [X(rΩ₀)Ω₀/2π] e^{jrΩ₀n}    (10.14)

Because Ω₀ is infinitesimal, it will be appropriate to replace Ω₀ with an infinitesimal notation ΔΩ:

ΔΩ = 2π/N₀    (10.15)

Equation (10.14) can be expressed as

x[n] = lim_{ΔΩ→0} (1/2π) Σ_{r=(N₀)} X(rΔΩ) e^{jrΔΩn} ΔΩ    (10.16)

The range r = (N₀) implies an interval of N₀ harmonics, which is N₀ΔΩ = 2π according to Eq. (10.15). In the limit, the right-hand side of Eq. (10.16) becomes the integral

x[n] = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ    (10.17)

where ∫_{2π} indicates integration over any continuous interval of 2π. The spectrum X(Ω) is given by [Eq. (10.12)]

X(Ω) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}    (10.18)


The integral on the right-hand side of Eq. (10.17) is called the Fourier integral. We have now succeeded in representing an aperiodic signal x[n] by a Fourier integral (rather than a Fourier series). This integral is basically a Fourier series (in the limit) with fundamental frequency ΔΩ → 0, as seen in Eq. (10.16). The amount of the exponential e^{jrΔΩn} is X(rΔΩ)ΔΩ/2π. Thus, the function X(Ω) given by Eq. (10.18) acts as a spectral function, which indicates the relative amounts of the various exponential components of x[n]. We call X(Ω) the (direct) discrete-time Fourier transform (DTFT) of x[n], and x[n] the inverse discrete-time Fourier transform (IDTFT) of X(Ω). This nomenclature can be represented as

X(Ω) = DTFT{x[n]}   and   x[n] = IDTFT{X(Ω)}

The same information is conveyed by the statement that x[n] and X(Ω) are a (discrete-time) Fourier transform pair. Symbolically, this is expressed as

x[n] ⟺ X(Ω)

The Fourier transform X(Ω) is the frequency-domain description of x[n].
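As a numerical sanity check of the analysis equation (10.18), here is a Python/NumPy sketch (our illustration; the text's examples use MATLAB) for x[n] = γⁿu[n] with |γ| < 1, whose DTFT is e^{jΩ}/(e^{jΩ} − γ) (pair 2 of Table 10.1). Since the terms decay geometrically, a truncated sum converges rapidly:

```python
import numpy as np

gamma, Omega = 0.5, 1.0
n = np.arange(200)                   # truncate the sum; terms decay as |gamma|^n
# Eq. (10.18) for x[n] = gamma^n u[n]
X_sum = np.sum(gamma**n * np.exp(-1j * Omega * n))
# closed form, pair 2 of Table 10.1
X_closed = np.exp(1j * Omega) / (np.exp(1j * Omega) - gamma)
```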

10.2.1 Nature of Fourier Spectra

We now discuss several important features of the discrete-time Fourier transform and the spectra associated with it.

FOURIER SPECTRA ARE CONTINUOUS FUNCTIONS OF Ω

Although x[n] is a discrete-time signal, its DTFT X(Ω) is a continuous function of Ω for the simple reason that Ω is a continuous variable, which can take any value over a continuous interval from −∞ to ∞.

FOURIER SPECTRA ARE PERIODIC FUNCTIONS OF Ω WITH PERIOD 2π

From Eq. (10.18), it follows that

X(Ω + 2π) = Σ_{n=−∞}^{∞} x[n] e^{−j(Ω+2π)n} = Σ_{n=−∞}^{∞} x[n] e^{−jΩn} e^{−j2πn} = X(Ω)

Clearly, the spectrum X(Ω) is a continuous, periodic function of Ω with period 2π. We must remember, however, that to synthesize x[n], we need to use the spectrum over a frequency interval of only 2π, starting at any value of Ω [see Eq. (10.17)]. As a matter of convenience, we shall choose this interval to be the fundamental frequency range (−π, π). It is, therefore, not necessary to show discrete-time-signal spectra beyond the fundamental range, although we often do so. The reason for the periodic behavior of X(Ω) was discussed in Sec. 8.3.5, where we showed that, in a basic sense, the discrete-time frequency Ω is bandlimited to |Ω| ≤ π. However, discrete-time sinusoids with frequencies separated by an integer multiple of 2π are identical. This is why the spectrum is 2π-periodic.

CONJUGATE SYMMETRY OF X(Ω)

From Eq. (10.18), we obtain the DTFT of x*[n] as

DTFT{x*[n]} = Σ_{n=−∞}^{∞} x*[n] e^{−jΩn} = X*(−Ω)

In other words,

x*[n] ⟺ X*(−Ω)    (10.19)

For real x[n], Eq. (10.19) reduces to x[n] ⟺ X*(−Ω), which implies that for real x[n]

X(−Ω) = X*(Ω)

Therefore, for real x[n], X(Ω) and X(−Ω) are conjugates. Since X(Ω) is generally complex, we have both amplitude and angle (or phase) spectra

X(Ω) = |X(Ω)| e^{j∠X(Ω)}

Because of the conjugate symmetry of X(Ω), it follows that for real x[n],

|X(Ω)| = |X(−Ω)|   and   ∠X(Ω) = −∠X(−Ω)

Therefore, the amplitude spectrum |X(Ω)| is an even function of Ω and the phase spectrum ∠X(Ω) is an odd function of Ω for real x[n].

PHYSICAL APPRECIATION OF THE DISCRETE-TIME FOURIER TRANSFORM

In understanding any aspect of the Fourier transform, we should remember that Fourier representation is a way of expressing a signal x[n] as a sum of everlasting exponentials (or sinusoids). The Fourier spectrum of a signal indicates the relative amplitudes and phases of the exponentials (or sinusoids) required to synthesize x[n]. A detailed explanation of the nature of such sums over a continuum of frequencies is provided in Sec. 4.1.1.

EXISTENCE OF THE DTFT

Because |e^{−jΩn}| = 1, it follows from Eq. (10.18) that the existence of X(Ω) is guaranteed if x[n] is absolutely summable; that is,

Σ_{n=−∞}^{∞} |x[n]| < ∞    (10.20)

This shows that the condition of absolute summability is a sufficient condition for the existence of the DTFT representation. This condition also guarantees its uniform convergence. The inequality

TABLE 10.1 Select Discrete-Time Fourier Transform Pairs

No.   x[n]                  X(Ω)
1     δ[n − k]              e^{−jkΩ}                          integer k
2     γⁿ u[n]               e^{jΩ}/(e^{jΩ} − γ)               |γ| < 1
3     −γⁿ u[−(n + 1)]       e^{jΩ}/(e^{jΩ} − γ)               |γ| > 1
4     γ^{|n|}               (1 − γ²)/(1 − 2γ cos Ω + γ²)      |γ| < 1
5     n γⁿ u[n]             γe^{jΩ}/(e^{jΩ} − γ)²             |γ| < 1
6

>> Omega = linspace(0,2*pi,1000);
>> X = sin(4.5*Omega)./sin(0.5*Omega);
>> X(mod(Omega,2*pi)==0) = 4.5/0.5;
>> N_0 = 64; M = 9;
>> x = [ones(1,(M+1)/2) zeros(1,N_0-M) ones(1,(M-1)/2)];
>> Xr = fft(x); Omega_0 = 2*pi/N_0; r = 0:N_0-1;
>> plot(Omega,abs(X),'k-',Omega_0*r,abs(Xr),'k.'); axis([0 2*pi 0 9.5]);
>> xlabel('\Omega'); ylabel('|X(\Omega)|');

As shown in Fig. 10.8, the FFT samples align exactly with our analytical DTFT result.

Figure 10.8 Using the FFT to verify the DTFT for Ex. 10.5.

EXAMPLE 10.6 Inverse DTFT of a Rectangular Spectrum

Find the inverse DTFT of the rectangular pulse spectrum described over the fundamental band (|Ω| ≤ π) by X(Ω) = rect(Ω/2Ω_c) for Ω_c ≤ π. Because of the periodicity property, X(Ω) repeats at intervals of 2π, as shown in Fig. 10.9a.


Figure 10.9 Periodic gate spectrum and its inverse discrete-time Fourier transform.

According to Eq. (10.17),

x[n] = (1/2π) ∫_{−π}^{π} X(Ω) e^{jΩn} dΩ = (1/2π) ∫_{−Ω_c}^{Ω_c} e^{jΩn} dΩ
     = (1/(j2πn)) e^{jΩn} |_{Ω=−Ω_c}^{Ω_c} = sin(Ω_c n)/(πn) = (Ω_c/π) sinc(Ω_c n)

The signal x[n] is depicted in Fig. 10.9b (for the case Ω_c = π/4).
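The integral in Eq. (10.17) can also be evaluated numerically to confirm the sinc result. A Python/NumPy sketch (our illustration, using a simple trapezoid rule over the band where X(Ω) = 1):

```python
import numpy as np

Omega_c = np.pi / 4
Omega = np.linspace(-Omega_c, Omega_c, 4001)  # X(Omega) = 1 on this band, 0 elsewhere
dO = Omega[1] - Omega[0]

def idtft_rect(n):
    """Numerically evaluate Eq. (10.17) for the rectangular spectrum (trapezoid rule)."""
    f = np.exp(1j * Omega * n)
    return (np.sum((f[1:] + f[:-1]) / 2) * dO / (2 * np.pi)).real

n = np.arange(1, 6)
x_num = np.array([idtft_rect(k) for k in n])
x_closed = np.sin(Omega_c * n) / (np.pi * n)  # i.e., (Omega_c/pi) sinc(Omega_c n)
```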



10.3 PROPERTIES OF THE DTFT

A close connection exists between the DTFT and the CTFT (continuous-time Fourier transform). For this reason, which Sec. 10.4 discusses, the properties of the DTFT are very similar to those of the CTFT, as the following discussion shows.

LINEARITY OF THE DTFT

If x₁[n] ⟺ X₁(Ω) and x₂[n] ⟺ X₂(Ω), then

a₁x₁[n] + a₂x₂[n] ⟺ a₁X₁(Ω) + a₂X₂(Ω)

The proof is trivial. The result can be extended to any finite sums.

CONJUGATE SYMMETRY OF X(Ω)

In Eq. (10.19), we proved the conjugation property

x*[n] ⟺ X*(−Ω)    (10.26)

We also showed that as a consequence of this, when x[n] is real, X(Ω) and X(−Ω) are conjugates, that is,

X(−Ω) = X*(Ω)

This is the conjugate symmetry property. Since X(Ω) is generally complex, we have both amplitude and angle (or phase) spectra. Hence, for real x[n], it follows that

|X(Ω)| = |X(−Ω)|   and   ∠X(Ω) = −∠X(−Ω)

Therefore, for real x[n], the amplitude spectrum |X(Ω)| is an even function of Ω and the phase spectrum ∠X(Ω) is an odd function of Ω.

TIME AND FREQUENCY REVERSAL

Also called the reflection property, the time and frequency reversal property states that

x[−n] ⟺ X(−Ω)    (10.27)

Demonstration of this property is straightforward. From Eq. (10.18), the DTFT of x[−n] is

DTFT{x[−n]} = Σ_{n=−∞}^{∞} x[−n] e^{−jΩn} = Σ_{m=−∞}^{∞} x[m] e^{jΩm} = X(−Ω)


EXAMPLE 10.7 Using the Time-Frequency Reversal Property

Use the time-frequency reversal property of Eq. (10.27) and pair 2 in Table 10.1 to derive pair 4 in Table 10.1.

Pair 2 states that

γⁿ u[n] ⟺ e^{jΩ}/(e^{jΩ} − γ),   |γ| < 1

Hence, from Eq. (10.27),

γ^{−n} u[−n] ⟺ e^{−jΩ}/(e^{−jΩ} − γ),   |γ| < 1

Moreover, γ^{|n|} can be expressed as a sum of γⁿu[n] and γ^{−n}u[−n], except that the impulse at n = 0 is counted twice (once from each of the two exponentials). Hence,

γ^{|n|} = γⁿ u[n] + γ^{−n} u[−n] − δ[n]

Combining these results and invoking the linearity property, we can write

DTFT{γ^{|n|}} = e^{jΩ}/(e^{jΩ} − γ) + e^{−jΩ}/(e^{−jΩ} − γ) − 1 = (1 − γ²)/(1 − 2γ cos Ω + γ²),   |γ| < 1

which agrees with pair 4 in Table 10.1.
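A quick numerical check of this derivation (a Python/NumPy sketch, our illustration): sum the two-sided series for γ^{|n|} directly per Eq. (10.18) and compare with the closed form of pair 4.

```python
import numpy as np

gamma, Omega = 0.6, 0.7
n = np.arange(-300, 301)                  # truncate the two-sided sum
X_sum = np.sum(gamma**np.abs(n) * np.exp(-1j * Omega * n))
X_closed = (1 - gamma**2) / (1 - 2 * gamma * np.cos(Omega) + gamma**2)
```

The transform comes out real, as expected for the even signal γ^{|n|}.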


MULTIPLICATION BY n: FREQUENCY DIFFERENTIATION

n x[n] ⟺ j dX(Ω)/dΩ    (10.28)

The result follows immediately by differentiating both sides of Eq. (10.18) with respect to Ω.


EXAMPLE 10.8 Using the Frequency-Differentiation Property

Use the frequency-differentiation property of Eq. (10.28) and pair 2 in Table 10.1 to derive pair 5 in Table 10.1.

Pair 2 states that

γⁿ u[n] ⟺ e^{jΩ}/(e^{jΩ} − γ),   |γ| < 1

Hence, from Eq. (10.28),

n γⁿ u[n] ⟺ j (d/dΩ){e^{jΩ}/(e^{jΩ} − γ)} = γe^{jΩ}/(e^{jΩ} − γ)²,   |γ| < 1

which agrees with pair 5 in Table 10.1.
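This result, too, can be checked numerically (a Python/NumPy sketch, our illustration): the one-sided series for n γⁿ u[n] converges quickly for |γ| < 1.

```python
import numpy as np

gamma, Omega = 0.5, 0.9
n = np.arange(500)                        # truncate the sum; n*gamma^n decays fast
X_sum = np.sum(n * gamma**n * np.exp(-1j * Omega * n))  # DTFT of n*gamma^n*u[n]
ejO = np.exp(1j * Omega)
X_closed = gamma * ejO / (ejO - gamma)**2 # pair 5, Table 10.1
```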

TIME-SHIFTING PROPERTY

If x[n] ⟺ X(Ω), then

x[n − k] ⟺ X(Ω) e^{−jkΩ}   for integer k    (10.29)

This property can be proved by direct substitution in the equation defining the direct transform. From Eq. (10.18), we obtain

DTFT{x[n − k]} = Σ_{n=−∞}^{∞} x[n − k] e^{−jΩn} = Σ_{m=−∞}^{∞} x[m] e^{−jΩ(m+k)} = X(Ω) e^{−jkΩ}

For the length-3 signal of Ex. 10.14, x[n] = 3δ[n] + 2δ[n − 1] + 3δ[n − 2],

X(Ω) = 3 + 2e^{−jΩ} + 3e^{−j2Ω} = e^{−jΩ}(2 + 6 cos Ω)

so that |X(Ω)| = |2 + 6 cos Ω| and

∠X(Ω) = −Ω for 2 + 6 cos Ω ≥ 0,   ∠X(Ω) = −Ω − π for 2 + 6 cos Ω < 0

Figure 10.20b shows |X(Ω)| and ∠X(Ω) (dotted). Observe that the DFT values exactly equal the DTFT at the sampled frequencies; there is no approximation. This is always true of the DFT of a finite-length signal x[n]. However, if x[n] is obtained by truncating or windowing a longer sequence, we shall see that the DFT gives only approximate sample values of the DTFT. The DFT in this example has too few points to give a reasonable picture of the DTFT. The peak of the DTFT appearing between the second and third samples (between r = 1 and 2), for instance, is missed by the DFT. The two valleys of the DTFT are also missed. We definitely need more points in the DFT for an acceptable resolution. This goal is accomplished by zero padding, explained next.

ZERO PADDING AND THE PICKET FENCE EFFECT

The DFT yields samples of the DTFT spaced at frequency intervals of Ω₀ = 2π/N₀. In this way, the DFT size N₀ determines the frequency resolution Ω₀. Seeing the DTFT through the DFT is like viewing X(Ω) through a picket fence. Only the spectral components at the sampled frequencies

(which are integral multiples of Ω₀) are visible. All the remaining frequency components are hidden, as though behind the pickets of a fence. If the DFT has too few points, then peaks, valleys, and other details of X(Ω) that exist between the DFT points (sampled frequencies) will not be seen, thus giving an erroneous view of the spectrum X(Ω). This is precisely the case in Ex. 10.14. Actually, using the interpolation formula, it is possible to compute any number of values of the DTFT from the DFT. But having to use the interpolation formula really defeats the purpose of the DFT. We therefore seek to reduce Ω₀ so that the number of samples is increased for a better view of the DTFT. Because Ω₀ = 2π/N₀, we can reduce Ω₀ by increasing N₀, the length of x[n]. But how do we increase the length of a signal when its length is fixed? Simple! Just append a sufficient number of zero-valued samples to x[n]. This procedure, referred to as zero padding, is depicted in Fig. 10.19c. Recall that N₀ is the period of x_{N₀}[n], which is formed by periodic repetition of x[n]. By appending a sufficient number of zeros to x[n], as illustrated in Fig. 10.19c, we can increase the period N₀ as much as we wish, thereby increasing the number of points of the DFT. Zero padding allows us to obtain more samples of X(Ω) for clarity. Furthermore, this process does not change the underlying signal x[n], which means that zero padding does not affect the underlying spectrum X(Ω). Returning to our picket fence analogy, zero padding increases the number of pickets in our fence (N₀), but it cannot change what is behind the fence (X(Ω)). To demonstrate the idea, we next rework Ex. 10.14 using zero padding to double the number of samples of the DTFT.

EXAMPLE 10.15 6-Point DFT of a Length-3 Signal

Using zero padding, find the 6-point DFT of the length-3 signal of Ex. 10.14, x[n] = 3δ[n] + 2δ[n−1] + 3δ[n−2]. Compare the 6-point DFT X_r with the corresponding DTFT spectrum X(Ω).

To compute a 6-point DFT from our length-3 signal, we must pad x[n] with three zeros, as shown in Fig. 10.21a. In this way, we treat x[n] as a 6-point signal with x[0] = 3, x[1] = 2, x[2] = 3, and x[3] = x[4] = x[5] = 0. By padding x[n] with three zeros, N₀ increases to 6, and Ω₀ decreases to 2π/6 = π/3. From Eq. (10.47), we obtain

    X_r = Σ_{n=0}^{5} x[n]e^{−jr(π/3)n} = 3 + 2e^{−jrπ/3} + 3e^{−jr2π/3}

Therefore,

    X_0 = 3 + 2 + 3 = 8
    X_1 = 3 + 2e^{−jπ/3} + 3e^{−j2π/3} = 3 + (1 − j√3) + (−3/2 − j3√3/2) = 5/2 − j5√3/2 = 5e^{−jπ/3}

10.6 Signal Processing by the DFT and FFT

Figure 10.21 Computing the 6-point DFT of a zero-padded length-3 signal: (a) the zero-padded signal x[n]; (b) |X_r| and ∠X_r (stems) overlaid on the DTFT magnitude |X(Ω)| and angle ∠X(Ω).

In the same way, we find

    X_2 = 3 + 2e^{−j2π/3} + 3e^{−j4π/3} = e^{jπ/3}
    X_3 = 3 + 2e^{−jπ} + 3e^{−j2π} = 4
    X_4 = 3 + 2e^{−j4π/3} + 3e^{−j8π/3} = e^{−jπ/3}
    X_5 = 3 + 2e^{−j5π/3} + 3e^{−j10π/3} = 5e^{jπ/3}

The magnitudes and angles of the resulting X_r are thus

    r       0     1      2     3     4      5
    |X_r|   8     5      1     4     1      5
    ∠X_r    0   −π/3    π/3    0   −π/3    π/3

Observe that X_5 = X_1* and X_4 = X_2*, as expected from the conjugate symmetry property. Figure 10.21b shows plots of |X_r| and ∠X_r. Observe that we now have a 6-point DFT, which provides 6 samples of the DTFT spaced at a frequency interval of π/3 (in contrast to the 2π/3 spacing in Ex. 10.14). The samples corresponding to r = 0, 2, and 4 in Ex. 10.15 are identical to the samples corresponding to r = 0, 1, and 2 in Ex. 10.14. The DFT spectrum in Fig. 10.21b contains the three samples appearing in Fig. 10.20b plus 3 more samples in between. Clearly, zero padding gives us a better view of the DTFT. But even in this case, the valleys of X(Ω) are still largely missed by this 6-point DFT.
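The hand computations above can be spot-checked with a few lines of pure Python (a standalone sketch, not part of the text's MATLAB workflow):

```python
import cmath, math

x = [3, 2, 3, 0, 0, 0]          # the length-3 signal padded to N0 = 6
X = [sum(x[n] * cmath.exp(-1j * r * (math.pi / 3) * n) for n in range(6))
     for r in range(6)]

assert abs(X[0] - 8) < 1e-9                          # X_0 = 8
assert abs(X[3] - 4) < 1e-9                          # X_3 = 4
assert abs(abs(X[1]) - 5) < 1e-9                     # |X_1| = 5
assert abs(cmath.phase(X[1]) + math.pi / 3) < 1e-9   # angle(X_1) = -pi/3
assert abs(X[5] - X[1].conjugate()) < 1e-9           # X_5 = X_1* (conjugate
assert abs(X[4] - X[2].conjugate()) < 1e-9           # symmetry, x[n] real)
```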


EXAMPLE 10.16 32-Point DFT of a Length-3 Signal Using MATLAB

Use MATLAB and zero padding to compute and plot the 32-point DFT of the length-3 signal of Ex. 10.14, x[n] = 3δ[n] + 2δ[n−1] + 3δ[n−2]. Compare the 32-point DFT X_r with the corresponding DTFT spectrum X(Ω).

Because the signal length is 3, we need to pad 29 zeros to the signal. MATLAB is used to avoid tedious hand calculations. To plot the DTFT spectrum X(Ω) on the same plots as X_r, we normalize the frequency variable as Ω/Ω₀. As shown in Fig. 10.22, the plots of |X_r| and ∠X_r exactly follow the DTFT spectrum X(Ω). The underlying DTFT is clearly exposed by the dense sampling of the 32-point DFT.

>> N0 = 32; x = [3 2 3 zeros(1,N0-3)]; Omega0 = 2*pi/N0; r = 0:N0-1;
>> Xr = fft(x);
>> Omega = linspace(0,2*pi,1001);
>> X = @(Omega) exp(-1j*Omega).*(2+6*cos(Omega));
>> subplot(121); stem(r,abs(Xr),'k.'); xlabel('r'); ylabel('|X_r|');
>> axis([0 N0 0 8.5]); line(Omega/Omega0,abs(X(Omega)),'color',[0 0 0]);
>> subplot(122); stem(r,angle(Xr),'k.'); xlabel('r'); ylabel('\angle X_r');
>> axis([0 N0 -pi pi]); line(Omega/Omega0,angle(X(Omega)),'color',[0 0 0]);

Figure 10.32 shows a peak magnitude of 0.5 at ±50 Hz. This result is consistent with Euler's representation

    cos(2π50nT) = (1/2)e^{j2π50nT} + (1/2)e^{−j2π50nT}

Lacking the 1/N₀ scale factor, the DFT would have a peak amplitude 100 times larger. The inverse DTFS is obtained by scaling the inverse DFT by N₀.

>> x = real(ifft(X)*N0);
>> stem(n,x,'k.'); axis([0 99 -1.1 1.1]); xlabel('n'); ylabel('x[n]');

Figure 10.33 confirms that the sinusoid x[n] is properly recovered. Although the result is theoretically real, computer round-off errors produce a small imaginary component, which the real command removes. Although MATLAB's fft command provides an efficient method to compute the DTFS, other important computational methods exist. A matrix-based approach is one popular way to implement Eq. (10.3). Although not as efficient as an FFT-based algorithm, matrix-based

Figure 10.32 DTFS computed by scaling the DFT.

10.8 MATLAB: Working with the DTFS and the DTFT

Figure 10.33 Inverse DTFS computed by scaling the inverse DFT.

approaches provide insight into the DTFS and serve as an excellent model for solving similarly structured problems. To begin, define W_{N₀} = e^{jΩ₀}, which is a constant for a given N₀. Substituting W_{N₀} into Eq. (10.3) yields

    D_r = (1/N₀) Σ_{n=0}^{N₀−1} x[n] W_{N₀}^{−nr}

An inner product of two vectors computes D_r:

    D_r = (1/N₀) [1  W_{N₀}^{−r}  W_{N₀}^{−2r}  ···  W_{N₀}^{−(N₀−1)r}] [x[0]  x[1]  x[2]  ···  x[N₀−1]]ᵀ

Stacking the results for all r yields a matrix equation: the vector [D₀ D₁ D₂ ··· D_{N₀−1}]ᵀ equals 1/N₀ times the N₀-by-N₀ matrix whose (r, n) entry is W_{N₀}^{−nr}, applied to the vector [x[0] x[1] x[2] ··· x[N₀−1]]ᵀ. In matrix notation, this equation is compactly written as

    D = (1/N₀) W_{N₀} x

Since it is also used to compute the DFT, the matrix W_{N₀} is often called a DFT matrix. Let us create an anonymous function to compute the N₀-by-N₀ DFT matrix W_{N₀}. Although not used here, the signal-processing toolbox function dftmtx computes the same DFT matrix, although in a less obvious but more efficient fashion.

>> W = @(N0) (exp(-1j*2*pi/N0)).^((0:N0-1)'*(0:N0-1));
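For readers without MATLAB, the same construction can be sketched in pure Python (the helper names `dft_matrix` and `dtfs` are ours, not from the text):

```python
import cmath, math

def dft_matrix(N0):
    """N0-by-N0 matrix whose (r, n) entry is W^(-rn), W = exp(j*2*pi/N0)."""
    Winv = cmath.exp(-2j * cmath.pi / N0)       # W_{N0}^{-1}
    return [[Winv ** (r * n) for n in range(N0)] for r in range(N0)]

def dtfs(x):
    """DTFS coefficients D_r via the matrix equation D = (1/N0) W x."""
    N0 = len(x)
    M = dft_matrix(N0)
    return [sum(M[r][n] * x[n] for n in range(N0)) / N0 for r in range(N0)]

# Sanity check: cos(2*pi*n/8) has DTFS peaks D_1 = D_7 = 0.5,
# mirroring the 0.5 peaks at +/-50 Hz seen in Fig. 10.32.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
D = dtfs(x)
assert abs(D[1] - 0.5) < 1e-9 and abs(D[7] - 0.5) < 1e-9
```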


While less efficient than FFT-based methods, the matrix approach correctly computes the DTFS.

>> X = W(N0)*x/N0;
>> stem(f-1/(2*T),fftshift(abs(X)),'k.');
>> axis([-500 500 -0.05 0.55]); xlabel('f [Hz]'); ylabel('|X(f)|');

The resulting plot is indistinguishable from Fig. 10.32. Problem 10.8-1 investigates a matrix-based approach to compute Eq. (10.2), the inverse DTFS.

10.8.2 Measuring Code Performance

Writing efficient code is important, particularly if the code is frequently used, requires complicated operations, involves large data sets, or operates in real time. MATLAB provides several tools for assessing code performance. When properly used, the profile function provides detailed statistics that help assess code performance; MATLAB help thoroughly describes the use of this sophisticated command. A simpler method of assessing code efficiency is to measure execution time and compare it with a reference. The MATLAB command tic starts a stopwatch timer, and the command toc reads the timer. Sandwiching instructions between tic and toc returns the elapsed time. For example, the execution time of the 100-point matrix-based DTFS computation is

>> tic; W(N0)*x/N0; toc
Elapsed time is 0.002385 seconds.

Different machines operate at different speeds with different operating systems and with different background tasks. Therefore, elapsed-time measurements can vary considerably from machine to machine and from execution to execution. For relatively simple and short events like the present case, execution times can be so brief that MATLAB may report unreliable times or fail to register an elapsed time at all. To increase the elapsed time and therefore the accuracy of the time measurement, a loop is used to repeat the calculation.

>> tic; for i=1:100, W(N0)*x/N0; end; toc
Elapsed time is 0.179059 seconds.

This elapsed time suggests that each 100-point DTFS calculation takes a little under 2 milliseconds. What exactly does this mean, however? Elapsed time is only meaningful relative to some reference. Let us see what difference occurs by precomputing the DFT matrix, rather than repeatedly using our anonymous function.

>> W100 = W(100); tic; for i=1:100, W100*x/N0; end; toc
Elapsed time is 0.001265 seconds.

Amazingly, this small change makes a hundredfold change in our computational efficiency! Clearly, it is much better to precompute the DFT matrix. To provide another example, consider the time it takes to compute the same DTFS using the FFT-based approach.

>> tic; for i=1:100, fft(x)/N0; end; toc
Elapsed time is 0.000419 seconds.
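The same experiment translates to any language; a Python sketch using the stdlib `time.perf_counter` (our analogue of tic/toc) shows the precompute-versus-rebuild comparison without MATLAB. Actual timings will vary by machine, so the code only asserts that both routes give identical answers:

```python
import cmath, time

N0 = 100
x = list(range(N0))

def dft_matrix(N0):
    Winv = cmath.exp(-2j * cmath.pi / N0)
    return [[Winv ** (r * n) for n in range(N0)] for r in range(N0)]

def apply_matrix(M, x):
    # D = (1/N0) * M @ x, written with plain lists
    return [sum(row[n] * x[n] for n in range(len(x))) / len(x) for row in M]

t0 = time.perf_counter()                 # rebuild the matrix on every pass
for _ in range(10):
    slow = apply_matrix(dft_matrix(N0), x)
t_rebuild = time.perf_counter() - t0

M = dft_matrix(N0)                       # precompute once, reuse
t0 = time.perf_counter()
for _ in range(10):
    fast = apply_matrix(M, x)
t_precomputed = time.perf_counter() - t0

# Identical results; only the cost differs (precomputing is typically faster).
assert all(abs(a - b) < 1e-6 for a, b in zip(slow, fast))
```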


With this as a reference, our fastest matrix-based computations appear to be several times slower than the FFT-based computations. This difference becomes more dramatic as N₀ is increased. Since the two methods provide identical results, there is little incentive to use the slower matrix-based approach, and the FFT-based algorithm is generally preferred. Even so, the FFT can exhibit curious behavior: adding a few data points, even the artificial samples introduced by zero padding, can dramatically increase or decrease execution times. The tic and toc commands illustrate this strange result. Consider computing the DTFS of 1015 random data points 100 times.

>> x1 = rand(1015,1); tic; for i=1:100; fft(x1)/1015; end; T1 = toc
T1 = 0.0038

Next, pad the sequence with four zeros.

>> x2 = [x1;zeros(4,1)]; tic; for i=1:100; fft(x2)/1019; end; T2 = toc
T2 = 0.0087

The ratio of the two elapsed times indicates that adding four points to an already long sequence increases the computation time by a factor of 2. Next, the sequence is zero-padded to a length of N₀ = 1024.

>> x3 = [x2;zeros(5,1)]; tic; for i=1:100; fft(x3)/1024; end; T3 = toc
T3 = 0.0017

In this case, the added data decrease the original execution time by a factor of 2 and the second execution time by a factor of 5! These results are particularly surprising when it is realized that the lengths of x1, x2, and x3 differ by less than 1%. As it turns out, the efficiency of the fft command depends on the factorability of N₀. With the factor command, 1015 = (5)(7)(29), 1019 is prime, and 1024 = (2)¹⁰. The most factorable length, 1024, results in the fastest execution, while the least factorable length, 1019, results in the slowest execution. To ensure the greatest factorability and fastest operation, vector lengths are ideally a power of 2.
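The power-of-2 preference comes from radix-2 decimation-in-time splitting, which halves the problem only when the length is even at every stage. A minimal recursive sketch (pure Python; the naive `dft` serves as the reference, and both helpers are ours) makes the structure explicit:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 FFT; requires len(x) to be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    assert N % 2 == 0, "radix-2 splitting needs an even length"
    E = fft_radix2(x[0::2])            # DFT of even-indexed samples
    O = fft_radix2(x[1::2])            # DFT of odd-indexed samples
    out = [0] * N
    for r in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * r / N) * O[r]   # twiddle factor
        out[r] = E[r] + t
        out[r + N // 2] = E[r] - t
    return out

def dft(x):                            # naive O(N^2) reference
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * r * n / N)
                for n in range(N)) for r in range(N)]

x = [3, 2, 3, 0, 0, 0, 0, 0]           # length 8 = 2^3
assert all(abs(a - b) < 1e-9 for a, b in zip(fft_radix2(x), dft(x)))
```

For lengths with large prime factors (such as 1019), this halving is impossible, which is why practical FFT libraries fall back to slower strategies there.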

10.9 SUMMARY

This chapter deals with the analysis and processing of discrete-time signals. For analysis, our approach parallels that used for continuous-time signals. We first represent a periodic x[n] as a Fourier series formed by a discrete-time exponential and its harmonics. Later we extend this representation to an aperiodic signal x[n] by considering x[n] to be a limiting case of a periodic signal with period approaching infinity. Periodic signals are represented by the discrete-time Fourier series (DTFS); aperiodic signals are represented by the discrete-time Fourier transform (DTFT). The development, although similar to that for continuous-time signals, also reveals some significant differences. The basic difference between the two cases arises because a continuous-time exponential e^{jωt} has a unique waveform for every value of ω in the range −∞ to ∞. In contrast, a discrete-time exponential e^{jΩn} has a unique waveform only for values of Ω in a continuous interval of 2π. Therefore, if Ω₀ is the fundamental frequency, then at most 2π/Ω₀ exponentials in the Fourier series are independent. Consequently, the discrete-time exponential Fourier series has only N₀ = 2π/Ω₀ terms.

The discrete-time Fourier transform (DTFT) of an aperiodic signal is a continuous function of Ω and is periodic with period 2π. We can synthesize x[n] from spectral components of X(Ω) in any band of width 2π. In a basic sense, the DTFT has a finite spectral width of 2π, which makes it bandlimited to π radians. Linear, time-invariant, discrete-time (LTID) systems can be analyzed by means of the DTFT if the input signals are DTFT-transformable and if the system is stable. Analysis of unstable (or marginally stable) systems and/or exponentially growing inputs can be handled by the z-transform, which is a generalized DTFT. The relationship of the DTFT to the z-transform is similar to that of the Fourier transform to the Laplace transform.

Figure P10.2-7

Figure P10.2-8

Figure P10.2-9

Figure P10.2-10

10.2-15

Are the following frequency-domain signals valid DTFTs? Answer yes or no, and justify your answers.
(a) X(Ω) = Ω + π
(b) X(Ω) = j + π
(c) X(Ω) = sin(10Ω)
(d) X(Ω) = sin(Ω/10)
(e) X(Ω) = δ(Ω)

10.2-16 Find the inverse DTFT for the spectrum depicted in Fig. P10.2-16.

10.3-1 Using only pairs 2 and 5 (Table 10.1) and the time-shifting property of Eq. (10.29), find the DTFT of the following signals, assuming |a| < 1:
(a) u[n] − u[n−9]
(b) a^{n−m}u[n−m]
(c) a^{n−3}(u[n] − u[n−10])
(d) a^{n−m}u[n]
(e) a^n u[n−m]
(f) (n−m)a^{n−m}u[n−m]
(g) (n−m)a^n u[n]
(h) na^{n−m}u[n−m]

Figure P10.2-11

Figure P10.2-12

Figure P10.2-13

10.3-5 Using only pair 2 (Table 10.1) and properties of the DTFT, find the DTFT of the following signals, assuming |a| < 1.

10.3-6

10.3-7

    γ^n u[n] ⟺ z/(z − γ),   |z| > |γ|   (11.5)

11.1 The z-Transform

Observe that X[z] exists only for |z| > |γ|. For |z| < |γ|, the sum in Eq. (11.4) does not converge; it goes to infinity. Therefore, the ROC of X[z] is the shaded region outside the circle of radius |γ|, centered at the origin, in the z plane, as depicted in Fig. 11.1b. Later, in Eq. (11.35), we show that the z-transform of another signal, −γ^n u[−(n+1)], is also z/(z − γ). However, the ROC in this case is |z| < |γ|. Clearly, the inverse z-transform of z/(z − γ) is not unique. However, if we restrict the inverse transform to be causal, then the inverse transform is unique, namely, γ^n u[n].

The ROC is required for evaluating x[n] from X[z] according to Eq. (11.2). The integral in Eq. (11.2) is a contour integral, implying integration in a counterclockwise direction along a closed path centered at the origin and satisfying the condition |z| > |γ|. Thus, any circular path centered at the origin and with a radius greater than |γ| (Fig. 11.1b) will suffice. We can show that the integral in Eq. (11.2) along any such path (with a radius greater than |γ|) yields the same result, namely, x[n].† Such integration in the complex plane requires a background in the theory of functions of complex variables. We can avoid this integration by compiling a table of z-transforms (Table 11.1), where z-transform pairs are tabulated for a variety of signals. To find the inverse z-transform of, say, z/(z − γ), instead of using the complex integration in Eq. (11.2), we consult the table and find the inverse z-transform of z/(z − γ) as γ^n u[n]. Because of the uniqueness property of the unilateral z-transform, there is only one inverse for each X[z]. Although the table given here is rather short, it comprises the functions of most practical interest. The situation of the z-transform regarding the uniqueness of the inverse transform is parallel to that of the Laplace transform. For the bilateral case, the inverse z-transform is not unique unless the ROC is specified. For the unilateral case, the inverse transform is unique, and the region of convergence need not be specified to determine the inverse z-transform. For this reason, we shall ignore the ROC in the unilateral z-transform Table 11.1.

EXISTENCE OF THE z-TRANSFORM

By definition,

    X[z] = Σ_{n=0}^{∞} x[n]z^{−n} = Σ_{n=0}^{∞} x[n]/z^n

The existence of the z-transform is guaranteed if

    |X[z]| ≤ Σ_{n=0}^{∞} |x[n]|/|z|^n < ∞

for some |z|. Any signal x[n] that grows no faster than an exponential signal r₀^n satisfies this condition. Thus, if

    |x[n]| ≤ r₀^n  for some r₀,   (11.6)

† Indeed, the path need not even be circular. It can have any shape, as long as it encloses the pole(s) of X[z] and the path of integration is counterclockwise.

CHAPTER 11 DISCRETE-TIME SYSTEM ANALYSIS USING THE Z-TRANSFORM

TABLE 11.1 Select (Unilateral) z-Transform Pairs

    No.   x[n]                                          X[z]
    1     δ[n−k]                                        z^{−k}
    2     u[n]                                          z/(z−1)
    3     nu[n]                                         z/(z−1)²
    4     n²u[n]                                        z(z+1)/(z−1)³
    5     n³u[n]                                        z(z²+4z+1)/(z−1)⁴
    6     γ^n u[n]                                      z/(z−γ)
    7     γ^{n−1} u[n−1]                                1/(z−γ)
    8     nγ^n u[n]                                     γz/(z−γ)²
    9     n²γ^n u[n]                                    γz(z+γ)/(z−γ)³
    10    [n(n−1)(n−2)···(n−m+1)/(γ^m m!)] γ^n u[n]     z/(z−γ)^{m+1}
    11a   |γ|^n cos(βn) u[n]                            z(z − |γ|cos β)/(z² − (2|γ|cos β)z + |γ|²)
    11b   |γ|^n sin(βn) u[n]                            z|γ| sin β/(z² − (2|γ|cos β)z + |γ|²)
    12a   r|γ|^n cos(βn + θ) u[n]                       rz[z cos θ − |γ|cos(β − θ)]/(z² − (2|γ|cos β)z + |γ|²)
    12b   r|γ|^n cos(βn + θ) u[n]                       (0.5re^{jθ})z/(z − γ) + (0.5re^{−jθ})z/(z − γ*)
    12c   r|γ|^n cos(βn + θ) u[n]                       z(Az + B)/(z² + 2az + |γ|²),
          where r = √[(A²|γ|² + B² − 2ABa)/(|γ|² − a²)], β = cos⁻¹(−a/|γ|), θ = tan⁻¹[(Aa − B)/(A√(|γ|² − a²))]

11.1 The z-Transform

923

then

IX[z]I �

ooL ( n=O

r:

)n

11

=

l

lzl > ro

ro lzl

1- -

Therefore, X[z] exists for lzl > ro. Almost alI practical signals satisfy Eq. ( 11.6) and are therefore 2 z-transformable. Some signal models (e.g., y 11 ) grow faster than the exponential signal 'o (for any ,0) and do not satisfy Eq. (11.6) and therefore are not z-transformable. Fortunately, such signals are of little practical or theoretical interest. Even such signals over a finite interval are z-transforrnable.
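The condition in Eq. (11.6) is easy to probe numerically. For x[n] = 2^n (so r₀ = 2), partial sums of the defining series settle toward z/(z − 2) when |z| > 2 and blow up when |z| < 2 (a pure-Python sketch):

```python
# Partial sums of sum_n x[n] z^{-n} for x[n] = 2^n, i.e. r0 = 2.
def partial_X(z, N):
    return sum((2.0 ** n) * z ** (-n) for n in range(N))

z = 3.0                                  # |z| > r0: the sum converges ...
assert abs(partial_X(z, 200) - z / (z - 2)) < 1e-9   # ... to z/(z-2)

z = 1.5                                  # |z| < r0: partial sums blow up
assert partial_X(z, 200) > 1e20
```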

EXAMPLE 11.2 Unilateral z-Transform

Find the z-transforms of: (a) δ[n], (b) u[n], (c) cos(βn)u[n], and (d) the signal shown in Fig. 11.2.

Figure 11.2 Signal for Ex. 11.2d.

Recall that by definition

    X[z] = Σ_{n=0}^{∞} x[n]z^{−n} = x[0] + x[1]/z + x[2]/z² + x[3]/z³ + ···

(a) For x[n] = δ[n], x[0] = 1 and x[1] = x[2] = x[3] = ··· = 0. Therefore,

    δ[n] ⟺ 1   for all z

(b) For x[n] = u[n], x[0] = x[1] = x[2] = ··· = 1. Therefore,

    X[z] = 1 + 1/z + 1/z² + 1/z³ + ···   (11.7)

This geometric sum simplifies [see Sec. B.8.3] to

    X[z] = 1/(1 − 1/z),   |1/z| < 1, that is, |z| > 1

Therefore,

    u[n] ⟺ z/(z − 1),   |z| > 1

(c) Recall that cos βn = (e^{jβn} + e^{−jβn})/2. Moreover, according to Eq. (11.5),

    e^{±jβn}u[n] ⟺ z/(z − e^{±jβ}),   |z| > |e^{±jβ}| = 1

Therefore,

    X[z] = (1/2)[z/(z − e^{jβ}) + z/(z − e^{−jβ})] = z(z − cos β)/(z² − 2z cos β + 1),   |z| > 1

(d) Here, x[0] = x[1] = x[2] = x[3] = x[4] = 1 and x[5] = x[6] = ··· = 0. Therefore, according to Eq. (11.7),

    X[z] = 1 + 1/z + 1/z² + 1/z³ + 1/z⁴ = (z⁴ + z³ + z² + z + 1)/z⁴   for all z ≠ 0

We can also express this result in more compact form by summing the geometric progression on the right-hand side of the foregoing equation. From the result in Sec. B.8.3 with r = 1/z, m = 0, and n = 4, we obtain

    X[z] = [1 − (1/z)⁵]/(1 − 1/z) = z(1 − z^{−5})/(z − 1)
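Both closed forms in part (d), and the part (c) pair, can be spot-checked numerically (a pure-Python sketch, not part of the text):

```python
import math

# Part (d): 1 + z^-1 + z^-2 + z^-3 + z^-4  versus  z(1 - z^-5)/(z - 1).
for z in [2.0, -3.0, 0.5, 1.7]:
    direct = sum(z ** (-n) for n in range(5))
    compact = z * (1 - z ** (-5)) / (z - 1)
    assert abs(direct - compact) < 1e-9

# Part (c): sum of cos(beta*n) z^-n versus z(z - cos b)/(z^2 - 2z cos b + 1),
# inside the ROC |z| > 1 (the tail of the truncated sum is negligible).
beta, z = 0.7, 1.5
series = sum(math.cos(beta * n) * z ** (-n) for n in range(400))
closed = z * (z - math.cos(beta)) / (z * z - 2 * z * math.cos(beta) + 1)
assert abs(series - closed) < 1e-9
```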

ANSWERS

(a) X[z] = (z⁵ + z⁴ + z³ + z² + z + 1)/z⁹, or [z/(z − 1)](z^{−4} − z^{−10})

Figure 11.3 Signal for Drill 11.1a.

11.1.1 Inverse Transform by Partial Fraction Expansion and Tables

As in the Laplace transform, we shall avoid the integration in the complex plane required to find the inverse z-transform [Eq. (11.2)] by using the (unilateral) transform table (Table 11.1). Many of the transforms X[z] of practical interest are rational functions (ratios of polynomials in z), which can be expressed as a sum of partial fractions, whose inverse transforms can be readily found in a transform table. The partial fraction method works because for every transformable x[n] defined for n ≥ 0, there is a corresponding unique X[z] defined for |z| > r₀ (where r₀ is some constant), and vice versa.

EXAMPLE 11.3 Inverse z-Transform by Partial Fraction Expansion

Find the inverse z-transforms of
(a) (8z − 19)/[(z − 2)(z − 3)]
(b) z(2z² − 11z + 12)/[(z − 1)(z − 2)³]
(c) 2z(3z + 17)/[(z − 1)(z² − 6z + 25)]

(a) Expanding X[z] into partial fractions yields

    X[z] = (8z − 19)/[(z − 2)(z − 3)] = 3/(z − 2) + 5/(z − 3)

From Table 11.1, pair 7, we obtain

    x[n] = [3(2)^{n−1} + 5(3)^{n−1}]u[n−1]   (11.8)

If we expand rational X[z] into partial fractions directly, we shall always obtain an answer that is multiplied by u[n−1] because of the nature of pair 7 in Table 11.1. This form is rather awkward as well as inconvenient. We prefer the form that contains u[n] rather than u[n−1]. A glance at Table 11.1 shows that the z-transform of every signal that is multiplied by u[n] has a factor z in the numerator. This observation suggests that we expand X[z] into modified partial fractions, where each term has a factor z in the numerator. This goal can be accomplished by expanding X[z]/z into partial fractions and then multiplying both sides by z. We shall demonstrate this procedure by reworking part (a). For this case,

    X[z]/z = (8z − 19)/[z(z − 2)(z − 3)] = (−19/6)/z + (3/2)/(z − 2) + (5/3)/(z − 3)

Multiplying both sides by z yields

    X[z] = −19/6 + (3/2)[z/(z − 2)] + (5/3)[z/(z − 3)]

From pairs 1 and 6 in Table 11.1, it follows that

    x[n] = −(19/6)δ[n] + [(3/2)(2)^n + (5/3)(3)^n]u[n]   (11.9)
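Equations (11.8) and (11.9) look different but describe the same sequence; a quick pure-Python check (a sketch, with our own helper names) confirms they agree for the first several n:

```python
# Form (11.8), from direct partial fractions of X[z]:
def x_a(n):
    return (3 * 2 ** (n - 1) + 5 * 3 ** (n - 1)) if n >= 1 else 0

# Form (11.9), from the modified partial fractions of X[z]/z:
def x_b(n):
    d = 1 if n == 0 else 0                       # delta[n]
    return -19 / 6 * d + (3 / 2) * 2 ** n + (5 / 3) * 3 ** n

for n in range(12):
    assert abs(x_a(n) - x_b(n)) < 1e-9
```

At n = 0 the delta term of (11.9) exactly cancels the other two terms, reproducing the zero that u[n−1] enforces in (11.8).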

The reader can verify that this answer is equivalent to that in Eq. (11.8) by computing x[n] in both cases for n = 0, 1, 2, 3, ..., and comparing the results. The form in Eq. (11.9) is more convenient than that in Eq. (11.8). For this reason, we shall always expand X[z]/z rather than X[z] into partial fractions and then multiply both sides by z to obtain modified partial fractions of X[z], which have a factor z in the numerator.

(b)

    X[z] = z(2z² − 11z + 12)/[(z − 1)(z − 2)³]

and

    X[z]/z = (2z² − 11z + 12)/[(z − 1)(z − 2)³] = k/(z − 1) + a₀/(z − 2)³ + a₁/(z − 2)² + a₂/(z − 2)

where the Heaviside "cover-up" method gives k = −3 and a₀ = −2. Therefore,

    X[z]/z = (2z² − 11z + 12)/[(z − 1)(z − 2)³] = −3/(z − 1) − 2/(z − 2)³ + a₁/(z − 2)² + a₂/(z − 2)   (11.10)

We can determine a₁ and a₂ by clearing fractions, or we may use a shortcut. For example, to determine a₂, we multiply both sides of Eq. (11.10) by z and let z → ∞. This yields

    0 = −3 − 0 + 0 + a₂  ⟹  a₂ = 3

This result leaves only one unknown, a₁, which is readily determined by letting z take any convenient value, say, z = 0, on both sides of Eq. (11.10). This produces

    12/8 = 3 + 1/4 + a₁/4 − 3/2

which yields a₁ = −1. Therefore,

    X[z]/z = −3/(z − 1) − 2/(z − 2)³ − 1/(z − 2)² + 3/(z − 2)

and

    X[z] = −3[z/(z − 1)] − 2[z/(z − 2)³] − z/(z − 2)² + 3[z/(z − 2)]

Now the use of Table 11.1, pairs 6 and 10, yields

    x[n] = [−3 − 2(n(n − 1)/8)(2)^n − (n/2)(2)^n + 3(2)^n]u[n] = −[3 + (1/4)(n² + n − 12)2^n]u[n]

(c) Complex Poles.

    X[z] = 2z(3z + 17)/[(z − 1)(z² − 6z + 25)] = 2z(3z + 17)/[(z − 1)(z − 3 − j4)(z − 3 + j4)]

The poles of X[z] are 1, 3 + j4, and 3 − j4. Whenever there are complex-conjugate poles, the problem can be worked out in two ways. In the first method, we expand X[z] into (modified) first-order partial fractions. In the second method, rather than obtaining one factor corresponding to each complex-conjugate pole, we obtain a quadratic factor corresponding to each pair of complex-conjugate poles. This procedure is explained next.

METHOD OF FIRST-ORDER FACTORS

    X[z]/z = 2(3z + 17)/[(z − 1)(z² − 6z + 25)] = 2(3z + 17)/[(z − 1)(z − 3 − j4)(z − 3 + j4)]

We find the partial fractions of X[z]/z using the Heaviside "cover-up" method:

    X[z]/z = 2/(z − 1) + (1.6e^{−j2.246})/(z − 3 − j4) + (1.6e^{j2.246})/(z − 3 + j4)

and

    X[z] = 2[z/(z − 1)] + (1.6e^{−j2.246})[z/(z − 3 − j4)] + (1.6e^{j2.246})[z/(z − 3 + j4)]

The inverse transform of the first term on the right-hand side is 2u[n]. The inverse transform of the remaining two terms (complex-conjugate poles) can be obtained from pair 12b (Table 11.1) by identifying r/2 = 1.6, θ = −2.246 rad, and γ = 3 + j4 = 5e^{j0.927}, so that |γ| = 5 and β = 0.927. Therefore,

    x[n] = [2 + 3.2(5)^n cos(0.927n − 2.246)]u[n]

METHOD OF QUADRATIC FACTORS

    X[z]/z = 2(3z + 17)/[(z − 1)(z² − 6z + 25)] = 2/(z − 1) + (Az + B)/(z² − 6z + 25)

Multiplying both sides by z and letting z → ∞, we find

    0 = 2 + A  ⟹  A = −2

and

    2(3z + 17)/[(z − 1)(z² − 6z + 25)] = 2/(z − 1) + (−2z + B)/(z² − 6z + 25)

To find B, we let z take any convenient value, say, z = 0. This step yields

    −34/25 = −2 + B/25  ⟹  B = 16

Therefore,

    X[z]/z = 2/(z − 1) + (−2z + 16)/(z² − 6z + 25)

and

    X[z] = 2z/(z − 1) + z(−2z + 16)/(z² − 6z + 25)

We now use pair 12c, where we identify A = −2, B = 16, |γ| = 5, and a = −3. Therefore,

    r = √[(100 + 256 − 192)/(25 − 9)] = 3.2,   β = cos⁻¹(3/5) = 0.927 rad

and

    θ = tan⁻¹[(−10)/(−8)] = −2.246 rad

so that

    x[n] = [2 + 3.2(5)^n cos(0.927n − 2.246)]u[n]
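Both methods can be cross-checked numerically. The sketch below (pure Python; the `residue` helper is ours) recomputes the residues of X[z]/z at each pole and compares the resulting x[n] with the text's closed form. The constants 1.6, 3.2, 0.927, and 2.246 in the text are rounded, so the final comparison allows proportional slack:

```python
import cmath, math

# Poles of X[z] = 2z(3z + 17)/((z - 1)(z^2 - 6z + 25))
poles = [1 + 0j, 3 + 4j, 3 - 4j]

def residue(k):
    """Residue of X[z]/z at poles[k], via the remaining factors."""
    zk = poles[k]
    r = 2 * (3 * zk + 17)
    for j, pj in enumerate(poles):
        if j != k:
            r /= zk - pj
    return r

res = [residue(k) for k in range(3)]
assert abs(res[0] - 2) < 1e-9                    # the 2/(z - 1) term
assert abs(abs(res[1]) - 1.6) < 1e-3             # magnitude ~ 1.6
assert abs(cmath.phase(res[1]) + 2.246) < 1e-3   # angle ~ -2.246 rad

# x[n] = sum_k res_k * pole_k^n; compare with the rounded closed form.
for n in range(6):
    exact = sum(rk * pk ** n for rk, pk in zip(res, poles)).real
    book = 2 + 3.2 * 5 ** n * math.cos(0.927 * n - 2.246)
    assert abs(exact - book) < 0.02 * 5 ** n     # slack for the rounding
```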

PARTIAL FRACTION EXPANSION BY MATLAB

Partial fraction expansions, including modified partial fraction expansions, are easily performed using MATLAB's residue command. To demonstrate, let us perform the modified partial fraction expansion for part (a). All that is required is to specify the numerator and denominator polynomial coefficients of X[z]/z. Notice that an extra root at z = 0 is added to the roots of X[z] since this is a modified partial fraction expansion, where we expand X[z]/z rather than X[z]. We use the format rat command so that MATLAB reports the results in rational form (ratios of integers rather than decimal numbers).

>> num = [8 -19]; den = poly([2,3,0]); format rat;
>> [r,p,k] = residue(num,den)
r = 5/3
    3/2
    -19/6
p = 3
    2
    0
k = []

The residues r provide the numerator coefficients for the roots p; thus,

    X[z]/z = (5/3)/(z − 3) + (3/2)/(z − 2) + (−19/6)/z

This result matches our earlier hand calculations. Additional details on the residue command are available using MATLAB help facilities.

DRILL 11.2 Inverse z-Transform by Partial Fraction Expansion

Find the inverse z-transforms of the following functions:
(a) z(2z − 1)/[(z − 1)(z + 0.5)]
(b) 1/[(z − 1)(z + 0.5)]
(c) 9/[(z + 2)(z − 0.5)²]
(d) 5z(z − 1)/(z² − 1.6z + 0.8)

[Hint: √0.8 = 2/√5.]

ANSWERS
(a) [2/3 + (4/3)(−0.5)^n]u[n]
(b) −2δ[n] + [2/3 + (4/3)(−0.5)^n]u[n]
(c) 18δ[n] − [0.72(−2)^n + 17.28(0.5)^n − 14.4n(0.5)^n]u[n]
(d) 5.59(2/√5)^n cos(0.464n + 0.464)u[n]


11.1.2 Inverse z-Transform by Power Series Expansion

By definition,

    X[z] = Σ_{n=0}^{∞} x[n]z^{−n} = x[0] + x[1]z^{−1} + x[2]z^{−2} + x[3]z^{−3} + ···

This result is a power series in z^{−1}. Therefore, if we can expand X[z] into a power series in z^{−1}, the coefficients of this power series can be identified as x[0], x[1], x[2], x[3], .... A rational X[z] can be expanded into a power series in z^{−1} by dividing its numerator by the denominator. Consider, for example,

    X[z] = z²(7z − 2)/[(z − 0.2)(z − 0.5)(z − 1)] = (7z³ − 2z²)/(z³ − 1.7z² + 0.8z − 0.1)

To obtain a series expansion in powers of z^{−1}, we divide the numerator by the denominator. The successive quotient terms and remainders are

    quotient term 7:          remainder 9.9z² − 5.6z + 0.7
    quotient term 9.9z^{−1}:  remainder 11.23z − 7.22 + 0.99z^{−1}
    quotient term 11.23z^{−2}: remainder 11.87 − 7.99z^{−1} + 1.12z^{−2}
    ···

Thus,

    X[z] = z²(7z − 2)/[(z − 0.2)(z − 0.5)(z − 1)] = 7 + 9.9z^{−1} + 11.23z^{−2} + 11.87z^{−3} + ···

Therefore, x[0] = 7, x[1] = 9.9, x[2] = 11.23, x[3] = 11.87, .... Although this procedure yields x[n] directly, it does not provide a closed-form solution. For this reason, it is not very useful unless we want to know only the first few terms of the sequence x[n].
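The long division is mechanical enough to automate. The sketch below (pure Python; the `divide` helper is ours, not from the text) reproduces the coefficients above:

```python
# Long division of X[z] = (7z^3 - 2z^2)/(z^3 - 1.7z^2 + 0.8z - 0.1)
# as a power series in z^-1; the coefficients are x[0], x[1], x[2], ...
num = [7.0, -2.0, 0.0, 0.0]            # 7z^3 - 2z^2 + 0z + 0
den = [1.0, -1.7, 0.8, -0.1]           # z^3 - 1.7z^2 + 0.8z - 0.1

def divide(num, den, N):
    """First N power-series coefficients of num/den in powers of z^-1."""
    rem, coeffs = num[:], []
    for _ in range(N):
        q = rem[0] / den[0]
        coeffs.append(q)
        # subtract q*den from the remainder, then shift up one power of z
        rem = [rem[i + 1] - q * den[i + 1] for i in range(len(den) - 1)] + [0.0]
    return coeffs

x = divide(num, den, 4)
expected = [7.0, 9.9, 11.23, 11.871]   # the text's 11.87 is rounded
assert all(abs(a - b) < 1e-6 for a, b in zip(x, expected))
```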

RELATIONSHIP BETWEEN h[n] AND H[z]

For an LTID system, if h[n] is its unit impulse response, then from Eq. (9.26), where we defined H[z], the system transfer function, we write

    H[z] = Σ_{n=−∞}^{∞} h[n]z^{−n}   (11.11)

For causal systems, the limits on the sum are from n = 0 to ∞. This equation shows that the transfer function H[z] is the z-transform of the impulse response h[n] of an LTID system; that is,

    h[n] ⟺ H[z]

This important result relates the time-domain specification h[n] of a system to H[z], the frequency-domain specification of a system. The result is parallel to that for LTIC systems.

DRILL 11.4 Impulse Response by Inverse z-Transform

Redo Drill 9.6 by taking the inverse z-transform of H[z].

11.2 SOME PROPERTIES OF THE z-TRANSFORM

The z-transform properties are useful in the derivation of z-transforms of many functions and also in the solution of linear difference equations with constant coefficients. Here, we consider a few important properties of the z-transform. In our discussion, the variable n appearing in signals such as x[n] and y[n] may or may not stand for time. However, in most applications of interest, n is proportional to time. For this reason, we shall loosely refer to the variable n as time.

11.2.1 Time-Shifting Properties

In the following discussion of the shift property, we deal with the shifted signals x[n]u[n], x[n−k]u[n−k], x[n−k]u[n], and x[n+k]u[n]. Unless we physically understand the meaning of such shifts, our understanding of the shift property remains mechanical rather than intuitive or heuristic. For this reason, using a hypothetical signal x[n], we have illustrated various shifted signals for k = 1 in Fig. 11.4.

RIGHT SHIFT (DELAY)

If

    x[n]u[n] ⟺ X[z]

then

    x[n − 1]u[n − 1] ⟺ (1/z)X[z]   (11.12)

Figure 11.4 A signal x[n] and its shifted versions: (a) x[n]; (b) x[n]u[n]; (c) x[n−1]u[n−1]; (d) x[n−1]u[n]; (e) x[n+1]u[n].

In general,

    x[n − m]u[n − m] ⟺ (1/z^m)X[z]   (11.13)

Moreover,

    x[n − 1]u[n] ⟺ (1/z)X[z] + x[−1]   (11.14)

Repeated application of this property yields

    x[n − 2]u[n] ⟺ (1/z)[(1/z)X[z] + x[−1]] + x[−2] = (1/z²)X[z] + (1/z)x[−1] + x[−2]

In general, for integer values of m,

    x[n − m]u[n] ⟺ z^{−m}X[z] + z^{−m} Σ_{n=1}^{m} x[−n]z^n   (11.15)

A look at Eqs. (11.12) and (11.14) shows that they are identical except for the extra term x[−1] in Eq. (11.14). We see from Figs. 11.4c and 11.4d that x[n−1]u[n] is the same as x[n−1]u[n−1] plus x[−1]δ[n]. Hence, the difference between their transforms is x[−1].

Proof. For integer values of m,

    Z{x[n − m]u[n − m]} = Σ_{n=0}^{∞} x[n − m]u[n − m]z^{−n}

Recall that x[n − m]u[n − m] = 0 for n < m, so the limits on the summation on the right-hand side can be taken from n = m to ∞. Therefore,

    Z{x[n − m]u[n − m]} = Σ_{n=m}^{∞} x[n − m]z^{−n} = Σ_{r=0}^{∞} x[r]z^{−(r+m)} = (1/z^m) Σ_{r=0}^{∞} x[r]z^{−r} = (1/z^m)X[z]
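Property (11.15) lends itself to a numerical sanity check. Take x[n] = 0.5^n defined for all n (so x[−1] = 2 and x[−2] = 4), m = 2, and evaluate at z = 2, inside the ROC. A pure-Python sketch (truncated sums; the truncation error here is negligible):

```python
# Check Eq. (11.15) for x[n] = 0.5^n (all n), m = 2, at z = 2.
z, m, N = 2.0, 2, 60

def x(n):
    return 0.5 ** n

Xz = sum(x(n) * z ** (-n) for n in range(N))              # X[z], truncated
lhs = sum(x(n - m) * z ** (-n) for n in range(N))         # Z{x[n-2]u[n]}
rhs = z ** (-m) * Xz + z ** (-m) * sum(x(-n) * z ** n for n in range(1, m + 1))
assert abs(lhs - rhs) < 1e-9
```

The z^{−m} Σ x[−n]z^n correction is exactly what accounts for the samples x[−1] and x[−2] that the right shift drags into n ≥ 0.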

To prove Eq. (11.15), we have

    Z{x[n − m]u[n]} = Σ_{n=0}^{∞} x[n − m]z^{−n} = Σ_{r=−m}^{∞} x[r]z^{−(r+m)} = z^{−m} Σ_{n=1}^{m} x[−n]z^n + z^{−m}X[z]

LEFT SHIFT (ADVANCE)

If

    x[n]u[n] ⟺ X[z]

then

    x[n + 1]u[n] ⟺ zX[z] − zx[0]

Repeated application of this property yields

    x[n + 2]u[n] ⟺ z[zX[z] − zx[0] − x[1]] = z²X[z] − z²x[0] − zx[1]

and for integer values of m,

    x[n + m]u[n] ⟺ z^m X[z] − z^m Σ_{n=0}^{m−1} x[n]z^{−n}

Proof. By definition,

    Z{x[n + m]u[n]} = Σ_{n=0}^{∞} x[n + m]z^{−n} = Σ_{r=m}^{∞} x[r]z^{−(r−m)} = z^m X[z] − z^m Σ_{n=0}^{m−1} x[n]z^{−n}

For x(t) = e^{λt}, the samples at t = (k + µ)T are

    x[(k + µ)T] = e^{λµT}e^{λkT}u[kT]   (11.32)

The corresponding z-transfer function is

    H[z, µ] = e^{λµT} [z/(z − e^{λT})] = ze^{λµT}/(z − e^{λT})   (11.33)

In this manner, we can prepare a table of modified z-transforms. When we use H[z, µ] instead of H[z] in our analysis, we obtain the response at instants t = (k + µ)T. By using different values of µ in the range 0 to 1, we can obtain the complete response y(t).

EXAMPLE 11.11 Modified z-Transform to Compute Sampled-Data System Output

Find the output y(t) for all t in Ex. 11.10.

In Ex. 11.10, we found the response y[n] only at the sampling instants. To find the output values between sampling instants, we use the modified z-transform. The procedure is the same as before, except that we use the modified z-transform for the continuous-time systems and signals. For the system G(s) = 1/(s + 4) with T = 0.5, the modified z-transform [Eq. (11.33) with λ = −4 and T = 0.5] is

    H[z, µ] = e^{λµT} [z/(z − e^{λT})] = e^{−2µ} [z/(z − 0.1353)]

Moreover, to find the modified z-transform corresponding to x(t) = u(t) [λ = 0 in Eq. (11.32)], we have X[z, µ] = z/(z − 1). Substituting these expressions into those found in Ex. 11.10, we obtain

    Y[z, µ] = e^{−2µ} [z/(z − 1) − 0.583z/(z − 0.394) + 0.083z/(z + 0.174)]

The inverse (modified) z-transform of this equation, determined using Eqs. (11.32) and (11.33), is

    y[(n + µ)T] = e^{−2µ}[1 − 0.583(0.394)^n + 0.083(−0.174)^n]u[n]

The complete response is shown in Fig. 11.17.


DESIGN OF SAMPLED-DATA SYSTEMS

As with continuous-time control systems, sampled-data systems are designed to meet certain transient (PO, tr, ts, etc.) and steady-state specifications. The design procedure follows lines similar to those used for continuous-time systems. We begin with a general second-order system. The relationships between closed-loop pole locations and the corresponding transient parameters PO, tr, ts, ... are determined. Hence, for given transient specifications, an acceptable region in the z plane, where the dominant poles of the closed-loop transfer function T[z] should lie, is determined. Next, we sketch the root locus for the system. The rules for sketching the root locus are the same as those for continuous-time systems. If the root locus passes through the acceptable region, the transient specifications can be met by simple adjustment of the gain K. If not, we must use a compensator, which will steer the root locus into the acceptable region.

11.7 THE BILATERAL z-TRANSFORM

Situations involving noncausal signals or systems cannot be handled by the (unilateral) z-transform discussed so far. Such cases can be analyzed by the bilateral (or two-sided) z-transform defined in Eq. (11.1) as

X[z] = Σ_{n=−∞}^{∞} x[n]z^{−n}

As in Eq. (11.2), the inverse z-transform is given by

x[n] = (1/2πj) ∮ X[z]z^{n−1} dz

These equations define the bilateral z-transform. Earlier, we showed that

γⁿu[n] ⟺ z/(z − γ)    |z| > |γ|    (11.34)

In contrast, the z-transform of the signal −γⁿu[−(n + 1)], illustrated in Fig. 11.18a, is

Z{−γⁿu[−(n + 1)]} = − Σ_{n=−∞}^{−1} γⁿz^{−n} = − Σ_{n=1}^{∞} (z/γ)ⁿ
= −[ (z/γ) + (z/γ)² + (z/γ)³ + ⋯ ]
= 1 − [ 1 + (z/γ) + (z/γ)² + (z/γ)³ + ⋯ ]
= 1 − 1/(1 − z/γ)
= z/(z − γ)

Therefore,

−γⁿu[−(n + 1)] ⟺ z/(z − γ)    |z| < |γ|    (11.35)

If z = α is the smallest-magnitude nonzero pole for an anticausal sequence, its ROC is |z| < |α|.

REGION OF CONVERGENCE FOR LEFT-SIDED AND RIGHT-SIDED SEQUENCES

Let us first consider a finite-duration sequence x_f[n], defined as a sequence that is nonzero for N₁ ≤ n ≤ N₂, where both N₁ and N₂ are finite numbers and N₂ > N₁. Also,

X_f[z] = Σ_{n=N₁}^{N₂} x_f[n]z^{−n}

For example, if N₁ = −2 and N₂ = 1, then

X_f[z] = x_f[−2]z² + x_f[−1]z + x_f[0] + x_f[1]/z

Assuming all the elements in x_f[n] are finite, we observe that X_f[z] has two poles at z = ∞ because of the terms x_f[−2]z² + x_f[−1]z, and one pole at z = 0 because of the term x_f[1]/z. Thus, a finite-duration sequence could have poles at z = 0 and z = ∞. Observe that X_f[z] converges for all values of z except possibly z = 0 and z = ∞. This means that the ROC of a general signal x[n] + x_f[n] is the same as the ROC of x[n], with the possible exception of z = 0 and z = ∞.


A right-sided sequence is zero for n < N₁ for some finite N₁, and a left-sided sequence is zero for n > N₂ for some finite N₂. A causal sequence is always a right-sided sequence, but the converse is not necessarily true. An anticausal sequence is always a left-sided sequence, but the converse is not necessarily true. A two-sided sequence is of infinite duration and is neither right-sided nor left-sided. A right-sided sequence x_r[n] can be expressed as x_r[n] = x_c[n] + x_f[n], where x_c[n] is a causal signal and x_f[n] is a finite-duration signal. Therefore, the ROC for x_r[n] is the same as the ROC for x_c[n] except possibly z = ∞. If z = β is the largest-magnitude pole for a right-sided sequence x_r[n], its ROC is |β| < |z| ≤ ∞. Similarly, a left-sided sequence can be expressed as x_l[n] = x_a[n] + x_f[n], where x_a[n] is an anticausal sequence and x_f[n] is a finite-duration signal. Therefore, the ROC for x_l[n] is the same as the ROC for x_a[n] except possibly z = 0. Thus, if z = α is the smallest-magnitude nonzero pole for a left-sided sequence, its ROC is 0 ≤ |z| < |α|.

EXAMPLE 11.12 Bilateral z-Transform

Determine the bilateral z-transform of

x[n] = (0.9)ⁿu[n] + (1.2)ⁿu[−(n + 1)] = x₁[n] + x₂[n]

From the results in Eqs. (11.34) and (11.35), we have

X₁[z] = z/(z − 0.9)    |z| > 0.9
X₂[z] = −z/(z − 1.2)    |z| < 1.2

The common region where both X₁[z] and X₂[z] converge is 0.9 < |z| < 1.2 (Fig. 11.19b). Hence,

X[z] = X₁[z] + X₂[z] = z/(z − 0.9) − z/(z − 1.2) = −0.3z/[(z − 0.9)(z − 1.2)]    0.9 < |z| < 1.2

The sequence x[n] and the ROC of X[z] are depicted in Fig. 11.19.
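The result of Ex. 11.12 can be confirmed numerically at any point inside the annular ROC. In the sketch below, the test point z and the truncation depths are arbitrary choices; the two truncated bilateral sums should together match the closed-form transform.

```python
# Numerical confirmation of Ex. 11.12: at a point with 0.9 < |z| < 1.2,
# the bilateral sum of x[n] = 0.9^n u[n] + 1.2^n u[-(n+1)] equals
# -0.3z / ((z - 0.9)(z - 1.2)).

z = 1.05 * complex(0.8, 0.6)        # |z| = 1.05, inside the annular ROC

causal = sum(0.9**n * z**(-n) for n in range(400))          # n >= 0 part
anticausal = sum(1.2**n * z**(-n) for n in range(-400, 0))  # n <= -1 part
closed = -0.3 * z / ((z - 0.9) * (z - 1.2))

ok = abs(causal + anticausal - closed) < 1e-8
print(ok)   # True
```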


Figure 11.19 (a) Signal x[n] and (b) ROC of X[z].

EXAMPLE 11.13 Inverse Bilateral z-Transform

Find the inverse bilateral z-transform of

X[z] = −z(z + 0.4)/[(z − 0.8)(z − 2)]

if the ROC is (a) |z| > 2, (b) |z| < 0.8, and (c) 0.8 < |z| < 2.

(a) Here

X[z]/z = −(z + 0.4)/[(z − 0.8)(z − 2)] = 1/(z − 0.8) − 2/(z − 2)

and

X[z] = z/(z − 0.8) − 2z/(z − 2)

Since the ROC is |z| > 2, both terms correspond to causal sequences and

x[n] = [(0.8)ⁿ − 2(2)ⁿ]u[n]

This sequence appears in Fig. 11.20a.

(b) In this case, |z| < 0.8, which is less than the magnitudes of both poles. Hence, both terms correspond to anticausal sequences, and

x[n] = [−(0.8)ⁿ + 2(2)ⁿ]u[−(n + 1)]

This sequence appears in Fig. 11.20b.

(c) In this case, 0.8 < |z| < 2; the part of X[z] corresponding to the pole at 0.8 is a causal sequence, and the part corresponding to the pole at 2 is an anticausal sequence:

x[n] = (0.8)ⁿu[n] + 2(2)ⁿu[−(n + 1)]

This sequence appears in Fig. 11.20c.


Figure 11.20 Three possible inverse transforms of X[z].
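Case (c) of Ex. 11.13 can be verified directly: summing the proposed two-sided sequence at a point inside the annulus 0.8 < |z| < 2 should reproduce X[z]. The test point and truncation depth below are arbitrary choices.

```python
# Check of Ex. 11.13(c): the bilateral z-transform of
# x[n] = (0.8)^n u[n] + 2(2)^n u[-(n+1)] equals -z(z+0.4)/((z-0.8)(z-2))
# for 0.8 < |z| < 2.

z = 1.1 + 0.9j      # |z| ~ 1.42, inside the annulus

causal = sum(0.8**n * z**(-n) for n in range(400))             # (0.8)^n u[n]
anticausal = sum(2 * 2**n * z**(-n) for n in range(-400, 0))   # 2(2)^n u[-(n+1)]
closed = -z * (z + 0.4) / ((z - 0.8) * (z - 2))

ok = abs(causal + anticausal - closed) < 1e-8
print(ok)   # True
```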

DRILL 11.18 Inverse Bilateral z-Transform

Find the inverse bilateral z-transform of

X[z] = z/[z² + (5/6)z + 1/6]

if the ROC is 1/3 < |z| < 1/2.

ANSWER

x[n] = 6(−1/3)ⁿu[n] + 6(−1/2)ⁿu[−(n + 1)]


INVERSE TRANSFORM BY EXPANSION OF X[z] IN POWER SERIES OF z

We have

X[z] = Σₙ x[n]z^{−n}

For an anticausal sequence, which exists only for n ≤ −1, this equation becomes

X[z] = x[−1]z + x[−2]z² + x[−3]z³ + ⋯

We can find the inverse z-transform of X[z] by dividing the numerator polynomial by the denominator polynomial, both in ascending powers of z, to obtain a polynomial in ascending powers of z. Thus, to find the inverse transform of z/(z − 0.5) (when the ROC is |z| < 0.5), we divide z by −0.5 + z to obtain −2z − 4z² − 8z³ − ⋯. Hence, x[−1] = −2, x[−2] = −4, x[−3] = −8, and so on.
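The ascending-power long division described above is mechanical enough to automate. The helper below is an illustrative sketch (its name and interface are not from the text): it divides two polynomials given as ascending-power coefficient lists and reproduces the −2, −4, −8 coefficients of the worked example.

```python
def ascending_div(num, den, nterms):
    """Divide polynomials given as ascending-power coefficient lists,
    producing nterms series coefficients c0 + c1 z + c2 z^2 + ..."""
    num = list(num) + [0.0] * nterms     # working copy of the remainder
    out = []
    for k in range(nterms):
        c = num[k] / den[0]              # next quotient coefficient
        out.append(c)
        for j, d in enumerate(den):      # subtract c * den * z^k
            if k + j < len(num):
                num[k + j] -= c * d
    return out

# X[z] = z/(z - 0.5) with ROC |z| < 0.5: divide z by -0.5 + z
coeffs = ascending_div([0, 1], [-0.5, 1], 4)
print(coeffs[1:4])   # [-2.0, -4.0, -8.0] -> x[-1] = -2, x[-2] = -4, x[-3] = -8
```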

11.7.1 Properties of the Bilateral z-Transform

The properties of the bilateral z-transform are similar to those of the unilateral transform. We shall merely state the properties here, without proofs, for xᵢ[n] ⟺ Xᵢ[z].

LINEARITY

a₁x₁[n] + a₂x₂[n] ⟺ a₁X₁[z] + a₂X₂[z]

The ROC for a₁X₁[z] + a₂X₂[z] is the region common to (the intersection of) the ROCs for X₁[z] and X₂[z].

SHIFT

x[n − m] ⟺ (1/z^m)X[z]    m is a positive or negative integer

The ROC for X[z]/z^m is the ROC for X[z] except for the addition or deletion of z = 0 or z = ∞ caused by the factor 1/z^m.

CONVOLUTION

x₁[n] * x₂[n] ⟺ X₁[z]X₂[z]

The ROC for X₁[z]X₂[z] is the region common to (the intersection of) the ROCs for X₁[z] and X₂[z].
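The convolution property can be checked exactly with finite-duration sequences, for which ROC issues do not arise. The two short test sequences and the evaluation point below are arbitrary choices.

```python
# Numerical check of the convolution property: Z{x1 * x2} = X1[z] X2[z].

x1 = [1.0, -2.0, 3.0]
x2 = [4.0, 0.5, -1.0, 2.0]

# finite convolution y = x1 * x2
y = [0.0] * (len(x1) + len(x2) - 1)
for i, a in enumerate(x1):
    for j, b in enumerate(x2):
        y[i + j] += a * b

z = 0.9 + 0.4j
Z = lambda x: sum(c * z**(-n) for n, c in enumerate(x))   # finite z-transform

ok = abs(Z(y) - Z(x1) * Z(x2)) < 1e-12
print(ok)   # True
```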

MULTIPLICATION BY γⁿ

γⁿx[n] ⟺ X[z/γ]

If the ROC for X[z] is |γ₁| < |z| < |γ₂|, then the ROC for X[z/γ] is |γγ₁| < |z| < |γγ₂|; that is, the ROC is scaled by the factor |γ|.

COMPLEX CONJUGATION

x*[n] ⟺ X*[z*]

The ROC for X*[z*] is the same as the ROC for X[z].

11.7.2 Using the Bilateral z-Transform for Analysis of LTID Systems

Because the bilateral z-transform can handle noncausal signals, we can use this transform to analyze noncausal linear systems. The zero-state response y[n] is given by

y[n] = Z⁻¹{X[z]H[z]}

provided X[z]H[z] exists. The ROC of X[z]H[z] is the region in which both X[z] and H[z] exist, which means that the region is the common part of the ROCs of both X[z] and H[z].

EXAMPLE 11.14 Zero-State Response by Bilateral z-Transform

For a causal system specified by the transfer function

H[z] = z/(z − 0.5)

find the zero-state response to input

x[n] = (0.8)ⁿu[n] + 2(2)ⁿu[−(n + 1)]

Here

X[z] = z/(z − 0.8) − 2z/(z − 2) = −z(z + 0.4)/[(z − 0.8)(z − 2)]

The ROC corresponding to the causal term is |z| > 0.8, and that corresponding to the anticausal term is |z| < 2. Hence, the ROC for X[z] is the common region, given by 0.8 < |z| < 2, and

X[z] = −z(z + 0.4)/[(z − 0.8)(z − 2)]    0.8 < |z| < 2

Therefore,

Y[z] = X[z]H[z] = −z²(z + 0.4)/[(z − 0.5)(z − 0.8)(z − 2)]

Since the system is causal, the ROC of H[z] is |z| > 0.5. The ROC of X[z] is 0.8 < |z| < 2. The common region of convergence for X[z] and H[z] is 0.8 < |z| < 2. Therefore,

Y[z] = −z²(z + 0.4)/[(z − 0.5)(z − 0.8)(z − 2)]    0.8 < |z| < 2

Expanding Y[z] into modified partial fractions yields

Y[z] = −z/(z − 0.5) + (8/3)·z/(z − 0.8) − (8/3)·z/(z − 2)    0.8 < |z| < 2

Since the ROC extends outward from the pole at 0.8, both poles at 0.5 and 0.8 correspond to causal sequences. The ROC extends inward from the pole at 2. Hence, the pole at 2 corresponds to an anticausal sequence. Therefore,

y[n] = [−(0.5)ⁿ + (8/3)(0.8)ⁿ]u[n] + (8/3)(2)ⁿu[−(n + 1)]
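The partial-fraction expansion used in Ex. 11.14 can be spot-checked by evaluating both forms at an arbitrary non-pole point:

```python
# Check of the partial-fraction expansion in Ex. 11.14:
# -z^2 (z+0.4) / ((z-0.5)(z-0.8)(z-2))
#     = -z/(z-0.5) + (8/3) z/(z-0.8) - (8/3) z/(z-2)

z = 1.3 - 0.6j                      # arbitrary point that is not a pole
lhs = -z**2 * (z + 0.4) / ((z - 0.5) * (z - 0.8) * (z - 2))
rhs = -z/(z - 0.5) + (8/3) * z/(z - 0.8) - (8/3) * z/(z - 2)

ok = abs(lhs - rhs) < 1e-10
print(ok)   # True
```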

EXAMPLE 11.15 Zero-State Response for an Input with No z-Transform

For the system in Ex. 11.14, find the zero-state response to input

x[n] = (0.8)ⁿu[n] + (0.6)ⁿu[−(n + 1)] = x₁[n] + x₂[n]


The z-transforms of the causal and anticausal components x₁[n] and x₂[n] of the input are

X₁[z] = z/(z − 0.8)    |z| > 0.8
X₂[z] = −z/(z − 0.6)    |z| < 0.6

Observe that a common ROC for X₁[z] and X₂[z] does not exist. Therefore, X[z] does not exist. In such a case, we take advantage of the superposition principle and find y₁[n] and y₂[n], the system responses to x₁[n] and x₂[n], separately. The desired response y[n] is the sum of y₁[n] and y₂[n]. Now

H[z] = z/(z − 0.5)    |z| > 0.5

Y₁[z] = X₁[z]H[z] = z²/[(z − 0.5)(z − 0.8)]    |z| > 0.8
Y₂[z] = X₂[z]H[z] = −z²/[(z − 0.5)(z − 0.6)]    0.5 < |z| < 0.6

Expanding into modified partial fractions yields

Y₁[z] = −(5/3)·z/(z − 0.5) + (8/3)·z/(z − 0.8)    |z| > 0.8
Y₂[z] = 5·z/(z − 0.5) − 6·z/(z − 0.6)    0.5 < |z| < 0.6

Therefore,

y₁[n] = [−(5/3)(0.5)ⁿ + (8/3)(0.8)ⁿ]u[n]
y₂[n] = 5(0.5)ⁿu[n] + 6(0.6)ⁿu[−(n + 1)]

and

y[n] = y₁[n] + y₂[n] = [(10/3)(0.5)ⁿ + (8/3)(0.8)ⁿ]u[n] + 6(0.6)ⁿu[−(n + 1)]
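The causal part of Ex. 11.15 can be verified by brute-force convolution: with h[n] = (0.5)ⁿu[n] and x₁[n] = (0.8)ⁿu[n], the convolution sum should match the closed-form y₁[n] for every n ≥ 0.

```python
# Check of y1[n] in Ex. 11.15 by direct convolution.

def y1_direct(n):
    """Convolution sum of x1[m] = 0.8^m with h[n-m] = 0.5^(n-m), m = 0..n."""
    return sum(0.8**m * 0.5**(n - m) for m in range(n + 1))

def y1_closed(n):
    """Closed form from the partial-fraction expansion."""
    return -(5/3) * 0.5**n + (8/3) * 0.8**n

ok = all(abs(y1_direct(n) - y1_closed(n)) < 1e-12 for n in range(25))
print(ok)   # True
```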

DRILL 11.19 Zero-State Response by Bilateral z-Transform

Find the zero-state response to input

x[n] = (3/4)ⁿu[n] + 5(3)ⁿu[−(n + 1)]


11.8 SUMMARY

In this chapter, we discussed the analysis of linear, time-invariant, discrete-time (LTID) systems by means of the z-transform. The z-transform is an extension of the DTFT with the frequency variable jΩ generalized to σ + jΩ. Such an extension allows us to synthesize discrete-time signals by using exponentially growing (discrete-time) sinusoids. The relationship of the z-transform to the DTFT is identical to that of the Laplace transform to the Fourier transform. Because of the generalization of the frequency variable, we can analyze all kinds of LTID systems and also handle exponentially growing inputs.

The z-transform changes the difference equations of LTID systems into algebraic equations. Therefore, solving these difference equations reduces to solving algebraic equations.

The transfer function H[z] of an LTID system is equal to the ratio of the z-transform of the output to the z-transform of the input when all initial conditions are zero. Therefore, if X[z] is the z-transform of the input x[n] and Y[z] is the z-transform of the corresponding output y[n] (when all initial conditions are zero), then Y[z] = H[z]X[z]. For an LTID system specified by the difference equation Q[E]y[n] = P[E]x[n], the transfer function H[z] = P[z]/Q[z]. Moreover, H[z] is the z-transform of the system impulse response h[n]. We showed in Ch. 9 that the system response to an everlasting exponential zⁿ is H[z]zⁿ.

LTID systems can be realized by scalar multipliers, adders, and time delays. A given transfer function can be synthesized in many different ways. We discussed canonical, transposed canonical, cascade, and parallel forms of realization. The realization procedure is identical to that for continuous-time systems with 1/s (integrator) replaced by 1/z (unit delay).

In Sec. 11.5, we showed that discrete-time systems can be analyzed by the Laplace transform as if they were continuous-time systems.
In fact, we showed that the z-transform is the Laplace transform with a change in variable.

In practice, we often have to deal with hybrid systems consisting of discrete-time and continuous-time subsystems. Feedback hybrid systems are also called sampled-data systems. In such systems, we can relate the samples of the output to those of the input. However, the output is generally a continuous-time signal. The output values during the successive sampling intervals can be found by using the modified z-transform.

The majority of input signals and practical systems are causal. Consequently, we are required to deal with causal signals most of the time. Restricting all signals to the causal type greatly simplifies z-transform analysis; the ROC of a signal becomes irrelevant to the analysis process. This special case of the z-transform (which is restricted to causal signals) is called the unilateral z-transform. Much of the chapter deals with this transform. Section 11.7 discusses the general variety of the z-transform (the bilateral z-transform), which can handle causal and noncausal signals and systems. In the bilateral transform, the inverse transform of X[z] is not unique but depends on the ROC of X[z]. Thus, the ROC plays a crucial role in the bilateral z-transform.


11.1-1 Using the definition, compute the z-transform of x[n] = (−1)ⁿ(u[n] − u[n − 8]). Sketch the poles and zeros of X[z] in the z plane. No calculator is needed to do this problem!

11.1-2 Determine the unilateral z-transform X[z] of the signal x[n] shown in Fig. P11.1-2. As the picture suggests, x[n] = −3 for all n ≥ 9 and x[n] = 0 for all n < 3.

Figure P11.1-2

11.1-3 (a) A causal signal has a z-transform given by X[z] = z/(z³ − 1). Determine the time-domain signal x[n] and sketch x[n] over −4 ≤ n ≤ 11. [Hint: No complex arithmetic is needed to solve this problem!]
(b) Consider the causal semiperiodic signal y[n] shown in Fig. P11.1-3. Notice, y[n] continually repeats the sequence [1, 2, 3] for n ≥ 0. Determine the unilateral z-transform Y[z] of this signal. If possible, express your result as a rational function in standard form.

Figure P11.1-3

11.1-4 Using the definition of the z-transform, find the z-transform and the ROC for each of the following signals:
(a) u[n − m]
(b) γⁿ sin(πn) u[n]
(c) γⁿ cos(πn) u[n]
(d) γⁿ sin(πn/2) u[n]
(e) γⁿ cos(πn/2) u[n]
(f) Σ_{k=0}^{∞} 2^{2k} δ[n − 2k]
(g) γ^{n−1} u[n − 1]
(h) nγⁿ u[n]
(i) n u[n]
(j) (γⁿ/n!) u[n]
(k) [2^{n−1} − (−2)^{n−1}] u[n]
(l) [(ln α)ⁿ/n!] u[n]

11.1-5 Showing all work, evaluate Σ_{n=0}^{∞} n(−3/2)^{−n}.

11.1-6 Using only the z-transforms of Table 11.1, determine the z-transform of each of the following signals:
(a) u[n] − u[n − 2]
(b) γ^{n−2} u[n − 2]
(c) 2^{n+1} u[n − 1] + e^{n−1} u[n]
(d) [2^{−n} cos(πn/3)] u[n − 1]
(e) nγⁿ u[n − 1]
(f) n(n − 1)(n − 2) 2^{n−3} u[n − m] for m = 0, 1, 2, 3
(g) (−1)ⁿ n u[n]
(h) Σ_{k=0}^{∞} k δ[n − 2k + 1]

11.1-7 Find the inverse unilateral z-transform of each of the following:
(a) z(z − 4)/(z² − 5z + 6)
(b) (e^{−2} − 2)z/[(z − e^{−2})(z − 2)]
(c) (z − 4)/(z² − 5z + 6)
(d) (z − 1)²/z³
(e) z(2z + 3)/[(z − 1)(z² − 5z + 6)]
(f) z(−5z + 22)/[(z + 1)(z − 2)²]
(g) z(1.4z + 0.08)/[(z − 0.2)(z − 0.8)²]
(h) z(z − 2)/(z² − z + 1)
(i) (2z² − 0.3z + 0.25)/(z² + 0.6z + 0.25)
(j) 2z(3z − 23)/[(z − 1)(z² − 6z + 25)]
(k) z(3.83z + 11.34)/[(z − 2)(z² − 5z + 25)]
(l) z(−2z² + 8z − 7)/[(z − 1)(z − 2)³]

11.1-8 (a) Expanding X[z] as a power series in z^{−1}, find the first three terms of x[n] if
X[z] = (2z³ + 13z² + z)/(z³ + 7z² + 2z + 1)
(b) Extend the procedure used in part (a) to find the first four terms of x[n] if
X[z] = (2z⁴ + 16z³ + 17z² + 3z)/(z³ + 7z² + 2z + 1)

11.1-9 A right-sided signal x[n] has z-transform given by X[z] = (1 + 2z − 3z³ + 4z⁴)/(z − 1). Using a power series expansion of X[z], determine x[n] over −5 ≤ n ≤ 5.

11.1-10 Find x[n] by expanding
X[z] = γz/(z − γ)²
as a power series in z^{−1}.

11.1-11 (a) In Table 11.1, if the numerator and the denominator powers of X[z] are M and N, respectively, explain why in some cases N − M = 0, while in others N − M = 1 or N − M = m (m any positive integer).
(b) Without actually finding the z-transform, state what N − M is for X[z] corresponding to x[n] = γⁿ u[n − 4].

11.2-1 For the discrete-time signal shown in Fig. P11.2-1 (x[n] = 1 for 0 ≤ n ≤ m − 1), show that
X[z] = (1 − z^{−m})/(1 − z^{−1})
Find your answer by using the definition in Eq. (11.1) and by using Table 11.1 and an appropriate property of the z-transform.

Figure P11.2-1

11.2-2 Determine the unilateral z-transform of the signal x[n] = (1 − n) cos(π(n − 1)/4) u[n − 1].

11.2-3 Suppose a DT signal x[n] = 2(u[n − 10] − u[n − 6]) has a transform X(z). Define Y(z) = … X(2z). Using graphic plot or vector notation, determine the corresponding signal y[n].

11.2-4 Suppose a DT signal x[n] = 3(u[n] − u[n − 5]) has a transform X(z). Define Y(z) = 2z⁻⁴ X(…). Using graphic plot or vector notation, determine the corresponding signal y[n].

11.2-5 Find the z-transform of the signal illustrated in Fig. P11.2-5. Solve this problem in two ways, as in Exs. 11.2d and 11.4. Verify that the two answers are equivalent.

Figure P11.2-5

11.2-6 Using z-transform techniques and properties (no time-domain convolution sum!), determine the convolution y[n] = (1/2)ⁿu[n − 3] * u[n − 2].

… y[n] = (1/4)x[n − 1] + (1/2)x[n − 2].
(a) Determine the (standard-form) system transfer function H(z) and sketch the system pole-zero plot.
(b) Using transform-domain techniques, determine y_zir[n] given y[−1] = 2 and y[−2] = −2.

11.3-14 Solve
y[n] + 2y[n − 1] + 2y[n − 2] = x[n − 1] + 2x[n − 2]
with y[0] = 0, y[1] = 1, and x[n] = eⁿu[n].

11.3-15 A system with impulse response h[n] = 2(1/3)ⁿu[n − 1] produces an output y[n] = (−2)ⁿu[n − 1]. Determine the corresponding input x[n].

11.3-16 A professor recently received an unexpected $10 (a futile bribe attached to a test). Being the savvy investor that she is, the professor decides to invest the $10 in a savings account that earns 0.5% interest compounded monthly (6.17% APY). Furthermore, she decides to supplement this initial investment with an additional $5 deposit made every month, beginning the month immediately following her initial investment.
(a) Model the professor's savings account as a constant-coefficient linear difference equation. Designate y[n] as the account balance at month n, where n = 0 corresponds to the first month that interest is awarded (and that her $5 deposits begin).
(b) Determine a closed-form solution for y[n]. That is, you should express y[n] as a function only of n.
(c) If we consider the professor's bank account as a system, what is the system impulse response h[n]? What is the system transfer function H[z]?

11.3-17 Sally deposits $100 into her savings account on the first day of every month except for each December, when she uses her money to buy holiday gifts. Define b[m] as the balance in Sally's account on the first day of month m. Assume Sally opens her account in January (m = 0), continues making monthly payments forever (except each December!), and that her monthly interest rate is 1%. Sally's account balance satisfies a simple difference equation b[m] = (1.01)b[m − 1] + p[m], where p[m] designates Sally's monthly deposits. Determine a closed-form expression for b[m] that is only a function of the month m.

11.3-18 For each impulse response, determine the number of system poles, whether the poles are real or complex, and whether the system is BIBO-stable.
(a) h₁[n] = (−1 + (0.5)ⁿ)u[n]
(b) h₂[n] = (j)ⁿ(u[n] − u[n − 10])

11.3-19 Find the following sums:
(a) Σ_{k=0}^{n} k²
(b) …
[Hint: Consider a system whose output y[n] is the desired sum. Examine the relationship between y[n] and y[n − 1]. Note also that y[0] = 0.]

11.3-20 Find the following sum: …
[Hint: See the hint for Prob. 11.3-19.]

11.3-21 Find the following sum: …
[Hint: See the hint for Prob. 11.3-19.]

11.3-22 Redo Prob. 11.3-19 using the result in Prob. 11.2-13a.

11.3-23 Redo Prob. 11.3-20 using the result in Prob. 11.2-13a.

11.3-24 Redo Prob. 11.3-21 using the result in Prob. 11.2-13a.

11.3-25 (a) Find the zero-state response of an LTID system with transfer function
H[z] = z/[(z + 0.2)(z − 0.8)]
and the input x[n] = eⁿu[n].
(b) Write the difference equation relating the output y[n] to input x[n].

11.3-26 Repeat Prob. 11.3-25 for x[n] = u[n] and
H[z] = (2z + 3)/[(z − 2)(z − 3)]

11.3-27 A system has an impulse response given by
h[n] = [(1/√5)^{n+1} + (…)^{n+1}]u[n]
(a) Determine the impulse response of the inverse system h⁻¹[n].
(b) Is the inverse stable? Is the inverse causal?
(c) Your boss asks you to implement h⁻¹[n] to the best of your ability. Describe your realizable design, taking care to identify any deficiencies.

11.3-28 Repeat Prob. 11.3-25 for
H[z] = 6(5z − 1)/(6z² − 5z + 1)
and the input x[n] is
(a) (4)⁻ⁿu[n]
(b) (4)^{−(n−2)}u[n − 2]
(c) (4)⁻ⁿ…

11.4-1 … This system can be implemented according to Fig. P11.4-1.

Figure P11.4-1

CHAPTER 12 FREQUENCY RESPONSE AND DIGITAL FILTERS

12.1 Frequency Response of Discrete-Time Systems

For the everlasting exponential input e^{jΩn}, the system response is

e^{jΩn} ⟹ H[e^{jΩ}]e^{jΩn}    (12.2)

Noting that cos Ωn is the real part of e^{jΩn}, the use of Eq. (9.21) yields

cos Ωn ⟹ Re{H[e^{jΩ}]e^{jΩn}}    (12.3)

Expressing H[e^{jΩ}] in the polar form

H[e^{jΩ}] = |H[e^{jΩ}]| e^{j∠H[e^{jΩ}]}

Eq. (12.3) can be expressed as

cos Ωn ⟹ |H[e^{jΩ}]| cos(Ωn + ∠H[e^{jΩ}])


In other words, the system response y[n] to a sinusoidal input cos Ωn is given by

y[n] = |H[e^{jΩ}]| cos(Ωn + ∠H[e^{jΩ}])

Following the same argument, the system response to a sinusoid cos(Ωn + θ) is

y[n] = |H[e^{jΩ}]| cos(Ωn + θ + ∠H[e^{jΩ}])    (12.4)

This result is valid only for BIBO-stable or asymptotically stable systems. The frequency response is meaningless for BIBO-unstable systems (which include marginally stable and asymptotically unstable systems). This follows from the fact that the frequency response in Eq. (12.2) is obtained by setting z = e^{jΩ} in Eq. (12.1). But, as shown in Sec. 9.5.2 [Eqs. (9.25) and (9.26)], the relationship of Eq. (12.1) applies only for values of z for which H[z] exists. For BIBO-unstable systems, the ROC for H[z] does not include the unit circle, where z = e^{jΩ}. This means, for BIBO-unstable systems, that H[z] is meaningless when z = e^{jΩ}.†

This important result shows that the response of an asymptotically or BIBO-stable LTID system to a discrete-time sinusoidal input of frequency Ω is also a discrete-time sinusoid of the same frequency. The amplitude of the output sinusoid is |H[e^{jΩ}]| times the input amplitude, and the phase of the output sinusoid is shifted by ∠H[e^{jΩ}] with respect to the input phase. Clearly, |H[e^{jΩ}]| is the amplitude gain, and a plot of |H[e^{jΩ}]| versus Ω is the amplitude response of the discrete-time system. Similarly, ∠H[e^{jΩ}] is the phase response of the system, and a plot of ∠H[e^{jΩ}] versus Ω shows how the system modifies or shifts the phase of the input sinusoid. Note that H[e^{jΩ}] incorporates the information of both amplitude and phase responses and therefore is called the frequency response of the system.

STEADY-STATE RESPONSE TO CAUSAL SINUSOIDAL INPUT

As in the case of continuous-time systems, we can show that the response of an LTID system to a causal sinusoidal input cos(Ωn)u[n] is y[n] in Eq. (12.4), plus a natural component consisting of the characteristic modes (see Prob. 12.1-9).
For a stable system, all the modes decay exponentially, and only the sinusoidal component in Eq. (12.4) persists. For this reason, this component is called the sinusoidal steady-state response of the system. Thus, y_ss[n], the steady-state response of a system to a causal sinusoidal input cos(Ωn)u[n], is

y_ss[n] = |H[e^{jΩ}]| cos(Ωn + ∠H[e^{jΩ}]) u[n]
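The approach to steady state can be watched in a simulation. The sketch below uses an arbitrary stable first-order example, y[n] = 0.8y[n − 1] + x[n] (so H[z] = z/(z − 0.8); the system and the frequency Ω = 0.4 are illustrative choices, not from the text). After the 0.8ⁿ mode decays, the output should match |H| cos(Ωn + ∠H).

```python
# Sinusoidal steady state of y[n] = 0.8 y[n-1] + x[n] driven by
# the causal input x[n] = cos(0.4 n) u[n].
import cmath, math

W = 0.4
H = cmath.exp(1j * W) / (cmath.exp(1j * W) - 0.8)   # H[e^{jW}]
gain, phase = abs(H), cmath.phase(H)

y = 0.0
for n in range(200):                 # iterate well past the transient
    y = 0.8 * y + math.cos(W * n)    # zero initial condition y[-1] = 0

n = 199
yss = gain * math.cos(W * n + phase)   # predicted steady-state sample
ok = abs(y - yss) < 1e-6
print(ok)   # True: the transient has died out
```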

SYSTEM RESPONSE TO SAMPLED CONTINUOUS-TIME SINUSOIDS

So far we have considered the response of a discrete-time system to a discrete-time sinusoid cos Ωn (or exponential e^{jΩn}). In practice, the input may be a sampled continuous-time sinusoid cos ωt (or an exponential e^{jωt}). When a sinusoid cos ωt is sampled with sampling interval T, the resulting signal is a discrete-time sinusoid cos ωnT, obtained by setting t = nT in cos ωt. Therefore, all the results developed in this section apply if we substitute ωT for Ω:

Ω = ωT    (12.5)

†This may also be argued as follows. For BIBO-unstable systems, the zero-input response contains nondecaying natural mode terms of the form cos Ω₀n or |γ|ⁿ cos Ω₀n (|γ| > 1). Hence, the response of such a system to a sinusoid cos Ωn will contain not just the sinusoid of frequency Ω but also nondecaying natural modes, rendering the concept of frequency response meaningless. Alternatively, we can argue that when z = e^{jΩ}, a BIBO-unstable system violates the dominance condition |γᵢ| < |e^{jΩ}| for all i, where γᵢ represents the ith characteristic root of the system.

EXAMPLE 12.1 Sinusoidal Response of a Difference Equation System

For a system specified by the equation

y[n + 1] − 0.8y[n] = x[n + 1]

find the system response to the inputs

(a) 1ⁿ = 1
(b) cos[(π/6)n − 0.2]

(c) a sampled sinusoid cos 1500t with sampling interval T = 0.001

The system equation can be expressed as

(E − 0.8)y[n] = Ex[n]

Therefore, the transfer function of the system is

H[z] = z/(z − 0.8) = 1/(1 − 0.8z^{−1})

The frequency response is

H[e^{jΩ}] = 1/(1 − 0.8e^{−jΩ}) = 1/[(1 − 0.8 cos Ω) + j0.8 sin Ω]

Therefore,

|H[e^{jΩ}]| = 1/√[(1 − 0.8 cos Ω)² + (0.8 sin Ω)²] = 1/√(1.64 − 1.6 cos Ω)    (12.6)

and

∠H[e^{jΩ}] = −tan⁻¹[ 0.8 sin Ω / (1 − 0.8 cos Ω) ]    (12.7)
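Equations (12.6) and (12.7) can be cross-checked against a direct evaluation of H[e^{jΩ}] at an arbitrary frequency (Ω = π/6 below is just a test value):

```python
# Check of Eqs. (12.6) and (12.7) for H[e^{jW}] = 1/(1 - 0.8 e^{-jW}).
import cmath, math

W = math.pi / 6
H = 1 / (1 - 0.8 * cmath.exp(-1j * W))          # direct evaluation

mag = 1 / math.sqrt(1.64 - 1.6 * math.cos(W))                 # Eq. (12.6)
ang = -math.atan2(0.8 * math.sin(W), 1 - 0.8 * math.cos(W))   # Eq. (12.7)

mag_ok = abs(abs(H) - mag) < 1e-12
ang_ok = abs(cmath.phase(H) - ang) < 1e-12
print(mag_ok and ang_ok)   # True
```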

The amplitude response |H[e^{jΩ}]| can also be obtained by observing that |H|² = HH*. Since our system is real, H[e^{jΩ}]* = H[e^{−jΩ}], and we therefore see that

|H[e^{jΩ}]|² = H[e^{jΩ}]H[e^{−jΩ}]    (12.8)

Substituting for H[e^{jΩ}], it follows that

|H[e^{jΩ}]|² = [1/(1 − 0.8e^{jΩ})][1/(1 − 0.8e^{−jΩ})] = 1/(1.64 − 1.6 cos Ω)

which matches the result found earlier. Figure 12.1 shows plots of the amplitude and phase responses as functions of Ω. We now compute the amplitude and the phase response for the various inputs.


Figure 12.1 Frequency response of the LTID system of Ex. 12.1.

(a) Since 1ⁿ = (e^{jΩ})ⁿ with Ω = 0, the amplitude response is H[e^{j0}]. From Eq. (12.6), we obtain

H[e^{j0}] = 1/√(1.64 − 1.6 cos 0) = 1/√0.04 = 5

Since h[n] = 0 for n > N, the duration of h[n] is


finite for a nonrecursive filter. For this reason, nonrecursive filters are also known as finite impulse response (FIR) filters. Nonrecursive filters are a special case of recursive filters in which all the recursive coefficients a₁, a₂, ..., a_N are zero. An Nth-order nonrecursive filter transfer function is given by (a₁ = a₂ = ⋯ = a_N = 0)

H[z] = (b₀zᴺ + b₁zᴺ⁻¹ + ⋯ + b_N)/zᴺ = b₀ + b₁z⁻¹ + ⋯ + b_N z⁻ᴺ

The inverse z-transform of this equation yields

h[n] = b₀δ[n] + b₁δ[n − 1] + ⋯ + b_N δ[n − N]

Observe that h[n] = 0 for n > N. Because nonrecursive filters are a special case of recursive filters, we expect the performance of recursive filters to be superior. This expectation is true in the sense that a given amplitude response can be achieved by a recursive filter of an order smaller than that required for the corresponding nonrecursive filter. However, nonrecursive filters have the advantage that they can be designed to have linear-phase characteristics. Recursive filters can realize linear phase only approximately.
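The linear-phase property of symmetric FIR filters is easy to demonstrate. For the arbitrary symmetric example h[n] = [1, 2, 1] (chosen for illustration, not from the text), H(e^{jΩ}) = e^{−jΩ}(2 + 2 cos Ω), so the phase is exactly −Ω wherever the real factor 2 + 2 cos Ω is positive.

```python
# Linear phase of a symmetric FIR filter h[n] = [1, 2, 1].
import cmath

h = [1.0, 2.0, 1.0]

def freq_resp(h, W):
    """Frequency response: sum of h[n] e^{-jWn}."""
    return sum(c * cmath.exp(-1j * W * n) for n, c in enumerate(h))

linear = all(
    abs(cmath.phase(freq_resp(h, W)) - (-W)) < 1e-12
    for W in [0.1, 0.5, 1.0, 1.5]      # frequencies where 2 + 2 cos W > 0
)
print(linear)   # True: phase is exactly -W
```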

12.4 FILTER DESIGN CRITERIA

A digital filter processes discrete-time signals to yield a discrete-time output. Digital filters can also process analog signals by converting them into discrete-time signals. If the input is a continuous-time signal x(t), it is converted into a discrete-time signal x[n] = x(nT) by a C/D (continuous-time to discrete-time) converter. The signal x[n] is now processed by a "digital" (meaning discrete-time) system with transfer function H[z]. The output y[n] of H[z] is then converted into an "analog" (meaning continuous-time) signal y(t). The system in Fig. 12.9a, therefore, acts as a continuous-time (or "analog") system. Our objective is to determine the "digital" (discrete-time) processor H[z] that will make the system in Fig. 12.9a equivalent to a desired "analog" (continuous-time) system with transfer function H_a(s), shown in Fig. 12.9b. We may strive to make the two systems behave similarly in the time domain or in the frequency domain. Accordingly, we have two different design procedures. Let us now determine the equivalence criterion of the two systems in the time domain and in the frequency domain.

12.4.1 Time-Domain Equivalence Criterion

By time-domain equivalence, we mean that for the same input x(t), the output y(t) of the system in Fig. 12.9a is equal to the output y(t) of the system in Fig. 12.9b. Therefore, y(nT), the samples of the output in Fig. 12.9b, are identical to y[n], the output of H[z] in Fig. 12.9a.



Figure 12.9 Analog filter realization with a digital filter.

The output y(t) of the system in Fig. 12.9b is

y(t) = ∫_{−∞}^{∞} x(τ)h_a(t − τ) dτ = lim_{Δτ→0} Σ_{m=−∞}^{∞} x(mΔτ)h_a(t − mΔτ)Δτ

For our purpose, it is convenient to use the notation T for Δτ. Assuming T (the sampling interval) to be small enough, such a change of notation yields

y(t) ≈ T Σ_{m=−∞}^{∞} x(mT)h_a(t − mT)

The response at the nth sampling instant is y(nT), obtained by setting t = nT in this equation:

y(nT) = T Σ_{m=−∞}^{∞} x(mT)h_a[(n − m)T]    (12.11)

In Fig. 12.9a, the input to H[z] is x(nT) = x[n]. If h[n] is the unit impulse response of H[z], then y[n], the output of H[z], is given by

y[n] = Σ_{m=−∞}^{∞} x[m]h[n − m]    (12.12)

If the two systems are to be equivalent, y(nT) in Eq. (12.11) must be equal to y[n] in Eq. (12.12). Therefore,

h[n] = Th_a(nT)    (12.13)


This is the time-domain criterion for equivalence of the two systems.† According to this criterion, h[n], the unit impulse response of H[z] in Fig. 12.9a, should be T times the samples of h_a(t), the unit impulse response of the system in Fig. 12.9b. This is known as the impulse invariance criterion of filter design.

12.4.2 Frequency-Domain Equivalence Criterion

In Sec. 2.4.4 [Eq. (2.38)], we proved that for an analog system with transfer function H_a(s), the system response y(t) to the everlasting exponential input x(t) = e^{st} is also an everlasting exponential

y(t) = H_a(s)e^{st}    (12.14)

Similarly, in Eq. (9.25), we showed that for a discrete-time system with transfer function H[z], the system response y[n] to an everlasting exponential input x[n] = zⁿ is also an everlasting exponential H[z]zⁿ; that is,

y[n] = H[z]zⁿ

If the systems in Figs. 12.9a and 12.9b are equivalent, then the response of both systems to an everlasting exponential input x(t) = e^{st} should be the same. A continuous-time signal x(t) = e^{st} sampled every T seconds results in a discrete-time signal

x[n] = e^{snT} = zⁿ    with z = e^{sT}

This discrete-time exponential zⁿ is applied at the input of H[z] in Fig. 12.9a, whose response is

y[n] = H[e^{sT}]e^{snT}    (12.15)

Now, for the system in Fig. 12.9b, y(nT), the nth sample of the output y(t) in Eq. (12.14), is

y(nT) = H_a(s)e^{snT}    (12.16)

If the two systems are to be equivalent, a necessary condition is that y[n] in Eq. (12.15) must be equal to y(nT) in Eq. (12.16). This condition means that

H[e^{sT}] = H_a(s)    (12.17)

This is the frequency-domain criterion for equivalence of the two systems. It should be remembered, however, that with this criterion we are ensuring only that the digital filter's response matches exactly that of the desired analog filter at the sampling instants. If we want the two responses to match at every value of t, we must have T → 0. Therefore,

lim_{T→0} H[e^{sT}] = H_a(s)    (12.18)

† Because T is a constant, some authors ignore the factor T, which yields the simplified criterion h[n] = h_a(nT). Ignoring T merely scales the amplitude response of the resulting filter.

CHAPTER 12 FREQUENCY RESPONSE AND DIGITAL FILTERS

A PRACTICAL DIFFICULTY
Both of these criteria for filter design require the condition T → 0 for realizing a digital filter equivalent to a given analog filter. However, this condition is impossible in practice because it necessitates an infinite sampling rate, resulting in an infinite data rate. In practice, we must choose a small but nonzero T to achieve a compromise between the two conflicting requirements, namely, closeness of approximation and system cost. This approximation, however, does not mean that the system in Fig. 12.9a is inferior to that in Fig. 12.9b, because often H_a(s) itself is an approximation to what we are seeking. For example, in lowpass filter design we strive to design a system with ideal lowpass characteristics. Failing that, however, we settle for some approximation such as Butterworth lowpass transfer functions. In fact, it is entirely possible that H[z], which is an approximation of H_a(s), may be a better approximation of the desired characteristics than is H_a(s) itself.

12.5 RECURSIVE FILTER DESIGN BY THE TIME-DOMAIN CRITERION: THE IMPULSE INVARIANCE METHOD

The time-domain design criterion for the equivalence of the systems in Figs. 12.9a and 12.9b is [see Eq. (12.13)]

h[n] = lim_{T→0} T h_a(nT)    (12.19)

where h[n] is the unit impulse response of H[z], h_a(t) is the unit impulse response of H_a(s), and T is the sampling interval in Fig. 12.9a. As indicated earlier, it is impractical to let T → 0. In practice, T is chosen to be small but nonzero. We have already discussed the effect of aliasing and the consequent distortion in the frequency response caused by nonzero T. Assuming that we have selected a suitable value of T, we can ignore the condition T → 0, and Eq. (12.19) can be expressed as

h[n] = T h_a(nT)    (12.20)

The z-transform of this equation yields

H[z] = T Z{h_a(nT)}    (12.21)

This result yields the desired transfer function H[z]. Let us consider a first-order transfer function

H_a(s) = c/(s - λ)

Taking the inverse Laplace transform of H_a(s), the impulse response of this filter is

h_a(t) = c e^{λt} u(t)

The corresponding digital filter unit impulse response is given by Eq. (12.20) as

h[n] = T h_a(nT) = Tc e^{λnT}

Figure 12.10 Impulse responses for analog and digital systems in the impulse invariance method of filter design.

Figures 12.10a and 12.10b show h_a(t) and h[n]. According to Eq. (12.21), H[z] is T times the z-transform of h_a(nT). Thus,

H[z] = Tcz/(z - e^{λT})    (12.22)
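As a quick numerical check (a Python sketch with arbitrarily chosen values c = 1, λ = -2, and T = 0.1, none of which come from the text), the impulse-invariance samples T c e^{λnT} coincide with the impulse response of the recursion that realizes H[z] = Tcz/(z - e^{λT}):

```python
import math

# Hypothetical first-order example (values assumed for illustration):
# H_a(s) = c/(s - lam) with c = 1, lam = -2, sampling interval T = 0.1.
c, lam, T = 1.0, -2.0, 0.1

# Impulse-invariance criterion, Eq. (12.20): h[n] = T*h_a(nT) = T*c*e^{lam*n*T}
h = [T * c * math.exp(lam * n * T) for n in range(5)]

# Eq. (12.22): H[z] = T*c*z/(z - e^{lam*T}), realized by the recursion
# y[n] = e^{lam*T} * y[n-1] + T*c*x[n]; its impulse response should equal h[n].
a = math.exp(lam * T)
y, prev = [], 0.0
for n in range(5):
    x = 1.0 if n == 0 else 0.0          # unit impulse input
    prev = a * prev + T * c * x
    y.append(prev)

print(all(abs(hn - yn) < 1e-12 for hn, yn in zip(h, y)))   # prints True
```

The recursion view makes plain why the method is called impulse invariant: the digital filter's impulse response is (T times) a sampled copy of the analog one.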

The procedure of finding H[z] can be systematized for any Nth-order system. First we express an Nth-order analog transfer function H_a(s) as a sum of partial fractions as†

H_a(s) = Σ_{i=1}^{N} c_i/(s - λ_i)

Then the corresponding H[z] is given by

H[z] = T Σ_{i=1}^{N} c_i z/(z - e^{λ_i T})

This transfer function can be readily realized, as explained in Sec. 11.4. Table 12.1 lists several pairs of H_a(s) and their corresponding H[z]. For instance, to realize a digital integrator, we examine its H_a(s) = 1/s. From Table 12.1, corresponding to H_a(s) = 1/s (pair 2), we find H[z] = Tz/(z - 1). This is exactly the result we obtained in Ex. 8.13 using another approach. Note that the frequency response H_a(jω) of a practical analog filter cannot be bandlimited. Consequently, all these realizations are approximate.

CHOOSING THE SAMPLING INTERVAL T

The impulse-invariance criterion (12.13) was derived under the assumption that T → 0. Such an assumption is neither practical nor necessary for satisfactory design. The avoidance of aliasing is the most important consideration in the choice of T. In Eq. (8.7), we showed that for a sampling interval of T seconds, the highest frequency that can be sampled without aliasing is 1/2T Hz or π/T radians per second. This implies that H_a(jω), the frequency response of the analog filter in

† Assuming H_a(s) has simple poles. For repeated poles, the form changes accordingly. Entry 6 in Table 12.1 is suitable for repeated poles.

TABLE 12.1 Select Impulse Invariance Pairs

No. | H_a(s)                     | h[n] = T h_a(nT)          | H[z]
1   | K  [h_a(t) = Kδ(t)]        | TKδ[n]                    | TK
2   | 1/s                        | T u[n]                    | Tz/(z - 1)
3   | 1/s^2                      | T^2 n                     | T^2 z/(z - 1)^2
4   | 1/s^3                      | T^3 n^2/2                 | T^3 z(z + 1)/[2(z - 1)^3]
5   | 1/(s - λ)                  | T e^{λnT}                 | Tz/(z - e^{λT})
6   | 1/(s - λ)^2                | T^2 n e^{λnT}             | T^2 e^{λT} z/(z - e^{λT})^2
7   | (As + B)/(s^2 + 2as + c)   | Tr e^{-anT} cos(bnT + θ)  | Trz[z cos θ - e^{-aT} cos(bT - θ)]/[z^2 - (2e^{-aT} cos bT)z + e^{-2aT}]

For pair 7: b = √(c - a^2), r = √[(A^2 c + B^2 - 2ABa)/(c - a^2)], and θ = tan^{-1}[(Aa - B)/(A√(c - a^2))].

Fig. 12.9b, should not have spectral components beyond frequency π/T radians per second. In other words, to avoid aliasing, the frequency response of the system H_a(s) must be bandlimited to π/T radians per second. As shown in Ch. 4, the frequency response of a realizable LTIC system cannot be bandlimited; that is, the response generally exists for all frequencies up to ∞. Therefore, it is impossible to digitally realize an LTIC system exactly without aliasing. The saving grace is that the frequency response of every realizable LTIC system decays with frequency. This allows for a compromise in digitally realizing an LTIC system with an acceptable level of aliasing. The smaller the value of T, the smaller the aliasing, and the better the approximation. Since it is impossible to make |H_a(jω)| zero, we are satisfied with making it negligible beyond the frequency π/T. As a rule of thumb [1], we choose T such that |H_a(jω)| at the frequency ω = π/T is less than a certain fraction (often taken as 1%) of the peak value of |H_a(jω)|. This ensures that aliasing is negligible. The peak |H_a(jω)| usually occurs at ω = 0 for lowpass filters and at the band-center frequency ω_c for bandpass filters.

EXAMPLE 12.4 Butterworth Filter Design by the Impulse-Invariance Method
Design a digital filter to realize a first-order lowpass Butterworth filter with the transfer function

H_a(s) = ω_c/(s + ω_c),  ω_c = 10^5    (12.23)


For this filter, we find the corresponding H[z] according to Eq. (12.22) (or pair 5 in Table 12.1) as

H[z] = Tω_c z/(z - e^{-ω_c T})    (12.24)

Next, we select the value of T by means of the criterion according to which the gain at ω = π/T drops to 1% of the maximum filter gain. However, this choice results in such a good design that aliasing is imperceptible. The resulting amplitude response is so close to the desired response that we can hardly notice the aliasing effect in our plot. For the sake of demonstrating the aliasing effect, we shall deliberately select a 10% criterion (instead of 1%).
In this case, |H_a(jω)|_max = 1, which occurs at ω = 0. Use of the 10% criterion leads to |H_a(jπ/T)| = 0.1. Observe that

|H_a(jω)| ≈ ω_c/ω,  ω >> ω_c

Hence,

|H_a(jπ/T)| ≈ ω_c/(π/T) = 0.1  ⟹  π/T = 10ω_c = 10^6

Thus, the 10% criterion yields T = 10^{-6}π. The 1% criterion would have given T = 10^{-7}π. Substitution of T = 10^{-6}π in Eq. (12.24) yields

H[z] = 0.3142z/(z - 0.7304)    (12.25)
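The numbers in Eq. (12.25) are easy to reproduce. This Python sketch (an illustration only; the book's own computations use MATLAB) evaluates Tω_c and e^{-ω_c T} for ω_c = 10^5 and T = 10^{-6}π:

```python
import math

# Assumes H_a(s) = omega_c/(s + omega_c) with omega_c = 1e5 and the 10% criterion.
omega_c = 1e5
T = 1e-6 * math.pi                 # from pi/T = 10*omega_c = 1e6

gain = T * omega_c                 # numerator coefficient in Eq. (12.24)
pole = math.exp(-omega_c * T)      # pole location e^{-omega_c*T}

print(round(gain, 4), round(pole, 4))   # 0.3142 0.7304
```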

A canonical realization of this filter is shown in Fig. 12.11a. To find the frequency response of this digital filter, we rewrite H[z] as

H[z] = 0.3142/(1 - 0.7304z^{-1})

Therefore,

H[e^{jωT}] = 0.3142/(1 - 0.7304e^{-jωT}) = 0.3142/[(1 - 0.7304 cos ωT) + j0.7304 sin ωT]

The corresponding magnitude response is

|H[e^{jωT}]| = 0.3142/√[(1 - 0.7304 cos ωT)^2 + (0.7304 sin ωT)^2]
             = 0.3142/√(1.533 - 1.4608 cos ωT)    (12.26)

Figure 12.11 An example of filter design by the impulse invariance method: (a) filter realization, (b) amplitude response, and (c) phase response.

and the phase response is

∠H[e^{jωT}] = -tan^{-1}[0.7304 sin ωT/(1 - 0.7304 cos ωT)]    (12.27)
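The closed forms of Eqs. (12.26) and (12.27) are easy to sanity-check numerically. This Python sketch (an illustration, not part of the book's MATLAB workflow) compares them against a direct evaluation of H[e^{jωT}]:

```python
import cmath, math

T = 1e-6 * math.pi

def H(omega):
    # Direct evaluation: H[e^{jwT}] = 0.3142/(1 - 0.7304 e^{-jwT})
    return 0.3142 / (1 - 0.7304 * cmath.exp(-1j * omega * T))

def mag(omega):     # Eq. (12.26)
    return 0.3142 / math.sqrt(1.533 - 1.4608 * math.cos(omega * T))

def phase(omega):   # Eq. (12.27)
    return -math.atan2(0.7304 * math.sin(omega * T),
                       1 - 0.7304 * math.cos(omega * T))

w = 2e5             # a test frequency well below pi/T = 1e6 rad/s
print(abs(abs(H(w)) - mag(w)) < 1e-3)          # 1.533 is rounded from 1.5334
print(abs(cmath.phase(H(w)) - phase(w)) < 1e-9)
```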

This frequency response differs from the desired response H_a(jω). We can confirm the design using MATLAB's impinvar function:

>> omegac = 10^5; Ba = omegac; Aa = [1 omegac]; Fs = 10^6/pi;
>> [B,A] = impinvar(Ba,Aa,Fs)
B = 0.3142
A = 1.0000   -0.7304

This confirms our earlier result of Eq. (12.25) that the digital filter transfer function is

H[z] = 0.3142z/(z - 0.7304)

DRILL 12.4 Filter Design by the Impulse-Invariance Method
Design a digital filter to realize the analog transfer function

H_a(s) = 20/(s + 20)

ANSWER
H[z] = 20Tz/(z - e^{-20T})  with  T = π/2000

LIMITATIONS OF THE IMPULSE INVARIANCE METHOD
The impulse invariance method is handicapped by aliasing. Consequently, this method can be used to design filters where H_a(jω) becomes negligible beyond some frequency B Hz. This condition restricts the procedure to lowpass and bandpass filters. The impulse invariance method cannot be used for true highpass or bandstop filters. Moreover, to reduce aliasing effects, the sampling rate has to be very high, which makes its implementation costly. In general, the frequency-domain method discussed in the next section is superior to the impulse invariance method.

12.6 RECURSIVE FILTER DESIGN BY THE FREQUENCY-DOMAIN CRITERION: THE BILINEAR TRANSFORMATION METHOD

The bilinear transformation method discussed in this section is preferable to the impulse invariance method in filtering problems where the gains are constant over certain bands (piecewise-constant amplitude response). This condition exists in lowpass, bandpass, highpass, and bandstop filters. Moreover, this method requires a lower sampling rate compared to the impulse invariance method because of the absence of aliasing errors in the filter design process. In addition, the filter rolloff characteristics are sharper with this method compared to those obtained using the impulse invariance method. The absence of aliasing is the result of the one-to-one mapping from the s plane to the z plane inherent in this method.
The frequency-domain design criterion is [see Eq. (12.18)]

lim_{T→0} H[e^{sT}] = H_a(s)    (12.28)

Let us consider the following power series for the hyperbolic tangent (see Sec. B.8.5):

tanh(sT/2) = (e^{sT/2} - e^{-sT/2})/(e^{sT/2} + e^{-sT/2})
           = sT/2 - (1/3)(sT/2)^3 + (2/15)(sT/2)^5 - ···

For small T (T → 0), we can ignore the higher-order terms in the infinite series on the right-hand side to yield

lim_{T→0} (e^{sT/2} - e^{-sT/2})/(e^{sT/2} + e^{-sT/2}) = sT/2

Therefore, as T → 0,

s = (2/T)(e^{sT/2} - e^{-sT/2})/(e^{sT/2} + e^{-sT/2}) = (2/T)(e^{sT} - 1)/(e^{sT} + 1)

Equation (12.28) now can be expressed as

H[e^{sT}] = H_a((2/T)(e^{sT} - 1)/(e^{sT} + 1))    (12.29)


From this result, it follows that

H[z] = H_a((2/T)(z - 1)/(z + 1)) = H_a(s)|_{s = (2/T)(z-1)/(z+1)}

Therefore, we can obtain H[z] from H_a(s) by using the transformation†

s = (2/T)(z - 1)/(z + 1)    (12.30)

This transformation is known as the bilinear transformation.

CHOICE OF T IN THE BILINEAR TRANSFORMATION

Because of the absence of aliasing in the bilinear transformation method, the value of the sampling interval T can be much smaller compared to the impulse invariance method. By the absence of aliasing, we mean only an absence of aliasing in transforming the analog filter into the digital filter. Signal aliasing, which limits the highest usable frequency, is still present. Thus, if the highest frequency to be processed is f_h Hz, then to avoid signal aliasing, we must still follow [see Eq. (8.7)]

1/T > 2f_h

>> [B,A] = bilinear(Ba,Aa,Fs)
B = 0.1358    0.1358
A = 1.0000   -0.7285

This confirms our earlier result that the digital filter transfer function is

H[z] = 0.1358(z + 1)/(z - 0.7285)
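As an illustration of Eq. (12.30), the following Python sketch substitutes s = (2/T)(z - 1)/(z + 1) into a first-order lowpass H_a(s) = ω_c/(s + ω_c); the values of ω_c and T are assumptions chosen to mirror the earlier first-order example, not the book's derivation:

```python
import cmath, math

# Minimal sketch (not the book's worked example): apply the bilinear
# transformation to H_a(s) = omega_c/(s + omega_c).
omega_c, T = 1e5, 1e-6 * math.pi   # assumed values

def Hd(z):
    s = (2 / T) * (z - 1) / (z + 1)       # Eq. (12.30)
    return omega_c / (s + omega_c)

# Doing the same substitution by hand gives H[z] = b(z+1)/(z - a) with:
b = omega_c * T / (2 + omega_c * T)
a = (2 - omega_c * T) / (2 + omega_c * T)

z = cmath.exp(0.7j)                        # any unit-circle point except z = -1
print(abs(Hd(z) - b * (z + 1) / (z - a)) < 1e-9)
print(round(b, 4), round(a, 4))            # 0.1358 0.7285
```

These coefficients match the 0.1358(z + 1)/(z - 0.7285) transfer function quoted above, which suggests the same ω_c and T were used there.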

FREQUENCY WARPING INHERENT IN THE BILINEAR TRANSFORMATION
Figure 12.12 shows that |H[e^{jωT}]| ≈ |H_a(jω)| for small ω. For large values of ω, the error increases. Moreover, |H[e^{jωT}]| = 0 at ω = π/T. In fact, it appears as if the entire frequency band (0 to ∞) in |H_a(jω)| is compressed within the range (0, π/T) in H[e^{jωT}]. Such warping of the frequency scale is peculiar to this transformation. To understand this behavior, consider Eq. (12.29) with s = jω:

H[e^{jωT}] = H_a((2/T)(e^{jωT} - 1)/(e^{jωT} + 1))
           = H_a((2/T)(e^{jωT/2} - e^{-jωT/2})/(e^{jωT/2} + e^{-jωT/2}))
           = H_a(j(2/T) tan(ωT/2))

Therefore, the response of the resulting digital filter at some frequency ω_d is

H[e^{jω_d T}] = H_a(jω_a)

where

ω_a = (2/T) tan(ω_d T/2)    (12.32)

Thus, in the resulting digital filter, the behavior of the desired response H_a(jω) at some frequency ω_a appears not at ω_a but at frequency ω_d, where [from Eq. (12.32)]

ω_d = (2/T) tan^{-1}(ω_a T/2)

Figure 12.13a shows the plot of ω_d as a function of ω_a. For small ω_a, the curve in Fig. 12.13a is practically linear, so ω_d ≈ ω_a. At higher values of ω_a, there is considerable divergence between the values of ω_a and ω_d. Thus, the digital filter imitates the desired analog filter at low frequencies, but at higher frequencies there is considerable distortion. Using this method, if we are trying to synthesize a filter to realize H_a(jω) depicted in Fig. 12.13b, the resulting digital filter frequency response will be warped, as illustrated in Fig. 12.13c. The analog filter behavior in the entire range

Figure 12.13 Frequency warping in the bilinear transformation: (a) mapping relationship of analog and digital frequencies, (b) analog response, and (c) corresponding digital response.

of ω_a from 0 to ∞ is compressed in the digital filter in the range of ω_d from 0 to π/T. This is as if a promising 20-year-old man, who, after learning that he has only a year to live, tries to crowd his last year with every possible adventure, passion, and sensation that a normal human being would have experienced in an entire lifetime. This compression and frequency-warping effect is a peculiarity of the bilinear transformation.
There are two ways of overcoming frequency warping. The first is to reduce T (increase the sampling rate) so that the signal bandwidth is kept well below π/T and ω_a ≈ ω_d over the desired frequency band. This step is easy to execute, but it requires a higher sampling rate (lower T)


than necessary. The second approach, known as prewarping, solves the problem without unduly reducing T.
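The warping relation of Eq. (12.32) is easy to tabulate. The following Python sketch (with an assumed T = π/10^4, so π/T = 10^4 rad/s) shows how the analog frequency ω_a that actually appears at digital frequency ω_d runs away from ω_d as ω_d approaches π/T:

```python
import math

T = math.pi / 1e4                  # assumed sampling interval

def omega_a(omega_d):
    # Eq. (12.32): the analog frequency whose behavior appears at omega_d
    return (2 / T) * math.tan(omega_d * T / 2)

for wd in [100, 1000, 5000, 9000]:
    print(wd, round(omega_a(wd), 1))
```

The map is nearly linear at low frequencies (ω_a ≈ ω_d), but ω_a grows without bound as ω_d → π/T, which is exactly the compression sketched in Fig. 12.13.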

12.6.1 Bilinear Transformation Method with Prewarping
In prewarping, we start not with the desired H_a(jω) but with a prewarped H_a(jω) in such a way that the warping of the bilinear transformation will compensate for the prewarping exactly. The idea here is to begin with a distorted analog filter (prewarping) so that the distortion caused by the bilinear transformation will be canceled by the built-in (prewarping) distortion. The idea is similar to the one used in prestressed concrete, in which a concrete beam is precompressed initially. When loaded, the beam experiences tension, which is canceled by the built-in compression.
Usually, prewarping is done at certain critical frequencies rather than over the entire band. The final filter behavior is exactly equal to the desired behavior at these selected frequencies. Such a filter is adequate for most filtering problems if we choose the critical frequencies properly. If we require a filter to have gains G_1, G_2, ..., G_m at the critical frequencies ω_1, ω_2, ..., ω_m, respectively, then we must start with an analog filter H'(jω) that has gains G_1, G_2, ..., G_m at frequencies ω_1', ω_2', ..., ω_m', where ω_i' = (2/T) tan(ω_i T/2).
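As a numerical illustration (a Python sketch; the frequencies and T = π/10^4 are taken from the bandpass design below), the prewarped critical frequencies ω' = (2/T) tan(ωT/2) lie slightly above the desired ones, and the bilinear transformation's warping maps them back exactly:

```python
import math

T = math.pi / 1e4

def prewarp(omega):
    # Critical frequency fed to the analog design so that the bilinear
    # transformation's warping lands it back exactly at omega.
    return (2 / T) * math.tan(omega * T / 2)

for omega in [450, 1000, 2000, 4000]:      # critical frequencies, rad/s
    print(omega, round(prewarp(omega), 1))

# The warping map omega_d = (2/T)atan(omega_a*T/2) undoes the prewarp:
print(abs((2 / T) * math.atan(prewarp(1000) * T / 2) - 1000) < 1e-9)
```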

>> omegap = [1000 2000]/10^4; omegas = [450 4000]/10^4;
>> Ghatp = -2.1; Ghats = -20; T = pi/10000;
>> [N,omegac] = buttord(omegap,omegas,-Ghatp,-Ghats);
>> [B,A] = butter(N,omegac)
B = 0.0296        0   -0.0593        0    0.0296
A = 1.0000   -3.1190    3.9259   -2.3539    0.5760

Thus,

H[z] = (0.0296z^4 - 0.0593z^2 + 0.0296)/(z^4 - 3.119z^3 + 3.9259z^2 - 2.3539z + 0.576)

As expected, this result exactly matches our earlier design.


DRILL 12.6 Bilinear Transformation with Prewarping: Bandstop Filter
Using the bilinear transformation with prewarping, design a Chebyshev bandstop digital filter with ω_{s1} = 1000, ω_{s2} = 2000, ω_{p1} = 450, ω_{p2} = 4000, G_p = 0.7852 (-2.1 dB), and G_s = 0.1 (-20 dB). Use T = π/10^4.

ANSWER
H[z] = 0.3762(z^4 - 3.6086z^3 + 5.2555z^2 - 3.6086z + 1)/(z^4 - 2.2523z^3 + 2.0563z^2 - 1.2053z + 0.4197)

12.7 NONRECURSIVE FILTERS

Recursive filters are very sensitive to coefficient accuracy. Inaccuracies in their implementation, especially too short a word length, may change their behavior drastically and even make them unstable. Moreover, recursive filter designs are well established only for amplitude responses that are piecewise constant, such as lowpass, bandpass, highpass, and bandstop filters. In contrast, a nonrecursive filter can be designed to have an arbitrarily shaped frequency response. In addition, nonrecursive filters can be designed to have a linear-phase response. On the other hand, if a recursive filter can be found to do the job of a nonrecursive filter, the recursive filter is of lower order; that is, it is faster (with less processing delay) and requires less memory. If processing delay is not critical, a nonrecursive filter is the obvious choice. Nonrecursive filters also have an important place in applications where a linear-phase response is desirable.
We shall review the concept of nonrecursive systems briefly. As discussed in Sec. 12.3, nonrecursive filters may be viewed as recursive filters where all the feedback or recursive coefficients are zero; that is, when

a_1 = a_2 = ··· = a_N = 0

The transfer function of the resulting Nth-order nonrecursive filter is

H[z] = b_0 + b_1 z^{-1} + b_2 z^{-2} + ··· + b_N z^{-N}    (12.38)

Now, by definition, H[z] is the z-transform of h[n]:

H[z] = Σ_{n=0}^{∞} h[n]z^{-n}
     = h[0] + h[1]z^{-1} + h[2]z^{-2} + ··· + h[N]z^{-N} + h[N+1]z^{-(N+1)} + ···    (12.39)

Comparison of Eq. (12.39) with Eq. (12.38) shows that h[n] = 0 for n > N, and

H[z] = h[0] + h[1]z^{-1} + h[2]z^{-2} + ··· + h[N-1]z^{-(N-1)} + h[N]z^{-N}
     = (h[0]z^N + h[1]z^{N-1} + ··· + h[N-1]z + h[N])/z^N    (12.40)

where

h[n] = b_n for 0 ≤ n ≤ N, and h[n] = 0 otherwise    (12.41)

The impulse response h[n] has a finite length of (N + 1) elements. Hence, these filters are finite impulse response (FIR) filters. We shall use the terms nonrecursive and FIR interchangeably. Similarly, the terms recursive and IIR (infinite impulse response) will be used interchangeably in our future discussion. The impulse response h[n] of an FIR filter can be expressed as

h[n] = h[0]δ[n] + h[1]δ[n - 1] + ··· + h[N]δ[n - N]    (12.42)

The frequency response of this filter is obtained using Eq. (12.40) as

H[e^{jωT}] = h[0] + h[1]e^{-jωT} + ··· + h[N]e^{-jNωT} = Σ_{n=0}^{N} h[n]e^{-jnωT}    (12.43)

FIR FILTER REALIZATION
The nonrecursive (FIR) filter in Eq. (12.38) is a special case of a general filter with all feedback (or recursive) coefficients zero. Therefore, the realization of this filter is the same as that of the Nth-order recursive filter with all the feedback connections omitted. The typical canonical FIR filter realization is easily obtained by setting a_1 = ··· = a_N = 0 in Fig. 11.8c. It is easy to verify from this figure that for the input δ[n], the output is h[n] given in Eq. (12.42).
An FIR filter realization, such as obtained by setting a_1 = ··· = a_N = 0 in Fig. 11.8c, is a tapped delay line with successive taps at unit-delay (T-second) intervals. Such a filter is known as a transversal filter. Tapped analog delays are available commercially in integrated-circuit form. In these circuits, the time delay is implemented by using charge-transfer devices, which sample the input signal every T seconds (unit delay) and transfer the successive values of the samples to N storage cells. The stored signal at the kth tap is the input signal delayed by k time units (kT seconds). The sampling interval can be varied electronically over a wide range. Time delay can also be obtained by using shift registers.
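The tapped-delay-line structure can be sketched in a few lines of Python (an illustration of the transversal idea, not a circuit-level model): each output sample is a weighted sum of the current input and its N most recent delayed values.

```python
from collections import deque

# Transversal (tapped-delay-line) filter: y[n] = sum_k h[k] * x[n-k]
def transversal(h, x):
    taps = deque([0.0] * len(h), maxlen=len(h))   # the delay line, newest first
    out = []
    for sample in x:
        taps.appendleft(sample)                    # shift the delay line
        out.append(sum(c * t for c, t in zip(h, taps)))
    return out

# Feeding a unit impulse reproduces the tap weights h[n], as in Eq. (12.42)
h = [1.0, 2.0, 3.0]
print(transversal(h, [1, 0, 0, 0, 0]))             # [1.0, 2.0, 3.0, 0.0, 0.0]
```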

12.7.1 Symmetry Conditions for Linear-Phase Response
Consider an Nth-order nonrecursive (FIR) filter described by the transfer function H[z] [Eq. (12.38) or Eq. (12.40)] and the corresponding impulse response h[n] [Eq. (12.42)]. We now show that if h[n] is either symmetric (Fig. 12.17a) or antisymmetric (Fig. 12.17b) about its center point, the filter's phase response is a linear function of ω (or, equivalently, a linear function of Ω). We consider a case where N is even. To avoid too much abstractness, we choose some convenient value for N,

Figure 12.17 Symmetry conditions for linear-phase frequency response in nonrecursive filters.

say, N = 4, to demonstrate our point. It will then be easier to understand the generalization to the Nth-order case. For N = 4, the impulse response in Eq. (12.42) reduces to

h[n] = h[0]δ[n] + h[1]δ[n - 1] + h[2]δ[n - 2] + h[3]δ[n - 3] + h[4]δ[n - 4]

The transfer function H[z] in Eq. (12.40) reduces to

H[z] = h[0] + h[1]z^{-1} + h[2]z^{-2} + h[3]z^{-3} + h[4]z^{-4}
     = z^{-2}(h[0]z^2 + h[1]z + h[2] + h[3]z^{-1} + h[4]z^{-2})

Therefore, the frequency response is

H[e^{jωT}] = e^{-j2ωT}(h[0]e^{j2ωT} + h[1]e^{jωT} + h[2] + h[3]e^{-jωT} + h[4]e^{-j2ωT})

If h[n] is symmetric about its center point (n = 2 in this case), then

h[0] = h[4],  h[1] = h[3]

and the frequency response reduces to

H[e^{jωT}] = e^{-j2ωT}(h[0][e^{j2ωT} + e^{-j2ωT}] + h[2] + h[1][e^{jωT} + e^{-jωT}])
           = e^{-j2ωT}(h[2] + 2h[1] cos ωT + 2h[0] cos 2ωT)

The quantity inside the parentheses is real; it may be positive over some bands of frequencies and negative over other bands. This quantity represents the amplitude response |H[e^{jωT}]|.†

† Strictly speaking, |H[e^{jωT}]| cannot be negative. Recall, however, that the only restriction on amplitude is that it cannot be complex. It has to be real; it can be positive or negative. We could have used some other notation such as A(ω) to denote the amplitude response, but this would create too many related functions, causing possible confusion. Another alternative is to incorporate the negative sign of the amplitude in the phase response, which will be increased (or decreased) by π over the band where the amplitude response is negative. This alternative will still maintain the phase linearity.

The phase response is given by

∠H[e^{jωT}] = -2ωT

The phase response is a linear function of ω. The group delay is the negative of the slope of ∠H[e^{jωT}] with respect to ω, which is 2T seconds in this case [see Eq. (4.40)].
If h[n] is antisymmetric about its center point, then the antisymmetry about the center point requires that h[n] = 0 at the center point (see Fig. 12.17b).† Thus, in this case,

h[0] = -h[4],  h[1] = -h[3],  h[2] = 0

and the frequency response reduces to

H[e^{jωT}] = e^{-j2ωT}(h[0](e^{j2ωT} - e^{-j2ωT}) + h[1](e^{jωT} - e^{-jωT}))
           = 2je^{-j2ωT}(h[1] sin ωT + h[0] sin 2ωT)
           = 2e^{j(π/2 - 2ωT)}(h[1] sin ωT + h[0] sin 2ωT)

Thus, the phase response in this case is

∠H[e^{jωT}] = π/2 - 2ωT

The phase response in this case is also a linear function of ω. The system has a group delay (the negative slope of ∠H[e^{jωT}] with respect to ω) of 2T seconds (2 units), the same as in the symmetric case. The only difference is that the phase response has a constant term π/2. We can obtain similar results for odd values of N (see Prob. 12.7-1). This result can be generalized for the Nth-order case to show that the phase response is linear, and the group delay is NT/2 seconds (or N/2 units).
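The symmetry argument is easy to confirm numerically. This Python sketch (with an arbitrary symmetric h[n] of order N = 4, chosen only for illustration; W stands for ωT) factors H[e^{jW}] into a real amplitude times e^{-j2W}:

```python
import cmath

h = [1.0, 2.0, 0.5, 2.0, 1.0]      # h[0]=h[4], h[1]=h[3]: midpoint symmetric

def H(W):                           # W = omega*T
    return sum(hn * cmath.exp(-1j * W * n) for n, hn in enumerate(h))

for W in [0.1, 0.3, 0.6]:
    amp = (H(W) * cmath.exp(2j * W)).real    # h[2] + 2h[1]cos W + 2h[0]cos 2W
    # Removing the real amplitude leaves exactly the linear phase -2W:
    print(abs(H(W) - amp * cmath.exp(-2j * W)) < 1e-12)
```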

EXAMPLE 12.10 Transfer Function of FIR Comb Filter
Determine the transfer function and the frequency response of a sixth-order FIR comb filter whose impulse response is given by

h[n] = δ[n] - δ[n - 6]

This impulse response is illustrated in Fig. 12.18a. Its canonical realization is depicted in Fig. 12.18b.

† Antisymmetry requires that h[n] = -h[-n] at the center point also. This condition is possible only if h[n] = 0 at this point.

Figure 12.18 Impulse response, realization, and frequency response of an FIR comb filter.

Observe that h[n] is antisymmetric about n = 3. Also,

H[z] = Σ_{n=-∞}^{∞} h[n]z^{-n} = 1 - z^{-6} = (z^6 - 1)/z^6

The frequency response is given by

H[e^{jΩ}] = 1 - e^{-j6Ω} = e^{-j3Ω}(e^{j3Ω} - e^{-j3Ω}) = 2je^{-j3Ω} sin 3Ω = 2e^{j(π/2 - 3Ω)} sin 3Ω

The term sin 3Ω can be positive as well as negative. Therefore,

|H[e^{jΩ}]| = 2|sin 3Ω|

and ∠H[e^{jΩ}] is as indicated in Fig. 12.18d. The amplitude response, illustrated in Fig. 12.18c, is shaped like a comb with periodic nulls. Using the same argument, the reader can show that an Nth-order comb filter transfer function is

H[z] = (z^N - 1)/z^N

The corresponding Nth-order comb filter frequency response is

H[e^{jΩ}] = 1 - e^{-jNΩ} = 2e^{j(π/2 - NΩ/2)} sin(NΩ/2)
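A quick Python check of the sixth-order comb response (an illustration only):

```python
import cmath, math

def H(W):                                   # H[e^{jW}] = 1 - e^{-j6W}
    return 1 - cmath.exp(-6j * W)

# Periodic nulls wherever sin 3W = 0, i.e., at W = k*pi/3
print(all(abs(H(k * math.pi / 3)) < 1e-12 for k in range(7)))

# Magnitude matches 2|sin 3W| at an arbitrary frequency
W = 0.4
print(abs(abs(H(W)) - 2 * abs(math.sin(3 * W))) < 1e-12)
```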

12.8 NONRECURSIVE FILTER DESIGN

As in the case of recursive (IIR) filters, nonrecursive filters can be designed by using time-domain and frequency-domain equivalence criteria. In the time-domain equivalence criterion, the digital filter impulse response is made identical, or closely matching, to the samples of the desired (analog) filter impulse response. In the frequency-domain equivalence criterion, the digital filter frequency response samples at uniform frequency intervals are matched to the desired analog filter frequency response samples. This method is also known as the frequency sampling or the spectral sampling method.

12.8.1 Time-Domain Equivalence Method of FIR Filter Design
The time-domain equivalence method (also known as the Fourier series method when a rectangular window is used) of FIR filter design is identical to that for IIR filters discussed in Sec. 12.5, except that the FIR filter impulse response must be causal and of finite duration. Therefore, the desired impulse response must be shifted and truncated. Truncating the impulse response abruptly will result in an oscillatory frequency response because of the Gibbs phenomenon discussed in Sec. 3.5.2. In some filtering applications, the oscillatory frequency response (which decays slowly as 1/ω for a rectangular window) in the stopband may not be acceptable. By using a tapered window function for truncation, the oscillatory behavior can be reduced or even eliminated at the cost of an increased transition band, as discussed in Sec. 4.9. Note that the impulse response of an Nth-order FIR filter has N + 1 samples. Hence, to truncate h[n] for an Nth-order filter, we must use an (N + 1)-point window. Several window functions and their tradeoffs appear in Table 12.2.

TABLE 12.2 Common Discrete-Time Window Functions and Their Approximate Characteristics

No. | Window w[n] for 0 ≤ n ≤ N, 0 otherwise                        | Main Lobe Width | Rolloff Rate [dB/dec] | Peak Side Lobe Level [dB]
1.  | Rectangular: 1                                                | 4π/(N + 1)      | -20                   | -13.3
2.  | Triangular (Bartlett): 1 - |2n - N|/N                         | 8π/N            | -40                   | -26.5
3.  | Hann: (1/2)[1 - cos(2πn/N)]                                   | 8π/(N + 1)      | -60                   | -31.5
4.  | Hamming: 0.54 - 0.46 cos(2πn/N)                               | 8π/(N + 1)      | -20                   | -42.7
5.  | Blackman: 0.42 - 0.5 cos(2πn/N) + 0.08 cos(4πn/N)             | 12π/(N + 1)     | -60                   | -58.1
6.  | Kaiser: I_0(α√(1 - [2n/N - 1]^2))/I_0(α)                      | varies with α   | -20                   | varies with α
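The entries of Table 12.2 translate directly into code. Here is a Python sketch of two of the windows (length N + 1, as used for an Nth-order filter; note that conventions using N versus N + 1 in the cosine argument vary across references):

```python
import math

def hamming(N):
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / N) for n in range(N + 1)]

def hann(N):
    return [0.5 * (1 - math.cos(2 * math.pi * n / N)) for n in range(N + 1)]

w = hamming(6)
print([round(v, 2) for v in w])    # [0.08, 0.31, 0.77, 1.0, 0.77, 0.31, 0.08]
```

Both windows are symmetric about n = N/2 and taper toward the ends, which is what reduces the Gibbs oscillations at the cost of a wider transition band.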


DESIGN PROCEDURE

Much of the discussion so far has been rather general. We shall now give some concrete examples of such filter design. Because we want the reader to be focused on the procedure, we shall intentionally choose a small value for N (the filter order) to avoid getting distracted by a jungle of data. The procedure, however, is general and it can be applied to any value of N. The steps in the time-domain equivalence design method are:

1. Determine the filter impulse response h[n]: In the first step, we find the impulse response h[n] of the desired filter. According to the time-domain equivalence criterion in Eq. (12.13),

h[n] = T h_a(nT)    (12.44)

Here, h_a(t) is the impulse response of the analog filter H_a(s). Therefore, h_a(t) is the inverse Laplace transform of H_a(s) or the inverse Fourier transform of H_a(jω). Recall that a digital filter frequency response is periodic, with the first period in the frequency range -π/T ≤ ω < π/T. Hence, the best we could hope is to realize the equivalence of H_a(jω) within this range. For this reason, the limits of integration are taken from -π/T to π/T. Thus,

h_a(t) = (1/2π) ∫_{-π/T}^{π/T} H_a(jω)e^{jωt} dω    (12.45)

Therefore, according to Eq. (12.44),

h[n] = T h_a(nT) = (T/2π) ∫_{-π/T}^{π/T} H_a(jω)e^{jωnT} dω    (12.46)

2. Delay and window: For linear-phase filters, we generally start with zero-phase filters, for which H_a(jω) is either real or imaginary. The impulse response h_a(t) is either an even or odd function of t (see Prob. 4.1-3). In either case, h_a(t) is centered at t = 0 and has infinite duration in general. But h[n] must have only a finite duration, and it must start at n = 0 (causal) for filter realizability. Consequently, the h[n] found in step 1 needs to come from a shifted and truncated version of h_a(t). Specifically, we require

h[n] = T h_a([n - N/2]T) w[n] = ((T/2π) ∫_{-π/T}^{π/T} H_a(jω)e^{jω(n - N/2)T} dω) w[n]    (12.47)

where w[n] is a finite-duration symmetric window function over 0 ≤ n ≤ N. This produces an h[n] that is finite in duration as well as causal. The delay necessary to ensure a causal (realizable) system is what also produces the desired linear-phase frequency response.
Straight truncation of data amounts to using a rectangular window, which has unit weight over the window width and zero weight for all other n. We saw that although such a window gives the smallest transition band, it results in a slowly decaying oscillatory frequency response in the stopband. This behavior can be corrected by using a tapered window of a suitable width.

3. Filter frequency response and realization: Knowing h[0], h[1], h[2], ..., h[N], we determine H[z] using Eq. (12.40) and the frequency response H[e^{jωT}] using Eq. (12.43). We now realize the truncated h[n] using the structure in Fig. 11.8c with a_1 = ··· = a_N = 0.

OPTIMALITY OF THE RECTANGULAR WINDOW

If the truncating window is rectangular, the procedure outlined here is optimum in the sense that the energy of the error (difference) between the desired frequency response H_a(jω) and the realized frequency response H[e^{jωT}] is the minimum for a given N. This conclusion follows from the fact that the resulting filter frequency response is given by

H[e^{jωT}] = Σ_{n=0}^{N} h[n]e^{-jnωT}

This frequency response is an approximation of the desired frequency response H_a(jω) because of the finite length of h[n]. Thus,

H_a(jω) ≈ H[e^{jωT}] = Σ_{n=0}^{N} h[n]e^{-jnωT}    (12.48)

How do we select h[n] for the best approximation in the sense of minimizing the energy of the error H_a(jω) - H[e^{jωT}]? We have already solved this problem in Sec. 3.3.2. The preceding equation shows that the right-hand side is the finite-term exponential Fourier series for H_a(jω) with period 2π/T. As seen from Eq. (12.46), the h[n] are the Fourier coefficients, which explains why the rectangular window method is also called the Fourier series method. According to the finality property (see p. 252), a finite Fourier series is the optimum (in the sense of minimizing the error energy) for a given N. In other words, for a given number of terms (N + 1), any choice for h[n] other than the Fourier coefficients in Eq. (12.48) will lead to higher error energy. For windows other than rectangular, this minimum mean-square-error optimality deteriorates, and the filter is somewhat suboptimal.

Using rectangular and Hamming windows, design N = 6 FIR filters to approximate an audio lowpass filter with cutoff frequency f_c = 20 kHz. Plot the magnitude response of each filter. Set the sampling frequency f_s equal to four times the cutoff frequency f_c.


In this case, the impulse response length (and thus the window length) is N + 1 = 7, and the sampling interval T is computed as

fs = 4fc = 80,000 Hz  ⟹  T = 1/fs = 12.5 × 10⁻⁶ s

Recall that a continuous-time frequency ω appears as a discrete-time frequency Ω = ωT. Thus, the desired CT cutoff frequency ωc = 2πfc = π/2T appears at Ωc = π/2, which is halfway to the highest (non-aliased) digital frequency Ω = π. Although we could now use Eq. (12.47) to find the desired impulse responses, we shall derive the results in steps. This approach, although more lengthy, will help clarify the basic design concepts. Let us consider the rectangular window (Fourier series method) first. Using pair 18 of Table 4.1, the impulse response of the desired ideal (zero-phase) lowpass filter is

ha(t) = (1/2T) sinc(πt/2T)

Next, we delay ha(t) by NT/2 = 3T to obtain

ha(t − 3T) = (1/2T) sinc(π(t − 3T)/2T)

This response is now scaled by T, truncated (windowed), and sampled by setting t = nT to obtain the desired FIR impulse response

hrec[n] = T ha(nT − 3T) wrec[n] = (1/2) sinc(π(n − 3)/2) wrec[n]

This result also follows directly from Eq. (12.47).
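As a quick numerical check (an illustrative NumPy sketch, not the book's MATLAB code), the seven taps evaluate to:

```python
import numpy as np

# h_rec[n] = (1/2) sinc(pi(n-3)/2) for n = 0,...,6. NumPy's sinc is normalized,
# sinc(x) = sin(pi x)/(pi x), so the argument here is (n-3)/2.
n = np.arange(7)
hrec = 0.5 * np.sinc((n - 3) / 2)
print(np.round(hrec, 4))   # -0.1061, 0, 0.3183, 0.5, 0.3183, 0, -0.1061
```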

Figure 12.19 Discrete-time LPF impulse responses: (a) ideal, (b) shifted ideal, (c) rectangular windowed, and (d) Hamming windowed.


To visualize these three steps, consider Fig. 12.19. Using the impulse invariance method, the ideal but noncausal DT impulse response is T ha(nT) = 0.5 sinc(πn/2) (Fig. 12.19a). To obtain a causal and finite-duration solution, we first delay the CT response by NT/2 = 3T (Fig. 12.19b) and then truncate it with a length-7 rectangular window (Fig. 12.19c). In this case, the noncausal filter is made realizable at a cost of a 3T-second delay. Over the window duration 0 ≤ n ≤ 6, the samples hrec[n] exactly match the underlying CT impulse response. Notice, too, that hrec[n] is midpoint symmetric and will thus have a linear-phase response. Using Eq. (12.40) and computing each value of hrec[n], the transfer function is

Hrec[z] = −(1/3π) + (1/π)z⁻² + (1/2)z⁻³ + (1/π)z⁻⁴ − (1/3π)z⁻⁶

Setting z = e^{jΩ}, the frequency response is

Hrec[e^{jΩ}] = −(1/3π) + (1/π)e^{−j2Ω} + (1/2)e^{−j3Ω} + (1/π)e^{−j4Ω} − (1/3π)e^{−j6Ω}

Factoring out e^{−j3Ω} and simplifying using trigonometric identities yield

Hrec[e^{jΩ}] = [1/2 + (2/π) cos(Ω) − (2/3π) cos(3Ω)] e^{−j3Ω}

The right-hand term e^{−j3Ω} confirms the filter's linear-phase characteristics and represents three samples (3T seconds) of delay. Computed using MATLAB, Fig. 12.20 shows the resulting magnitude response |Hrec[e^{jΩ}]| as well as the ideal lowpass response (shaded) for reference. The magnitude response exhibits oscillatory behavior that decays rather slowly over the stopband. Although increasing N improves the frequency response, the oscillatory nature persists due to the Gibbs phenomenon.

Figure 12.20 Window method LPF magnitude responses: rectangular (solid) and Hamming (dashed).

We can readily confirm this example's calculations and plots using MATLAB. Since MATLAB computes sinc(x) as sin(πx)/(πx), we create a function mysinc that scales the MATLAB sinc input by 1/π to match the notation of the sinc function in this book. To produce the filter designed using the rectangular window, we execute the following code:


>> mysinc = @(x) sinc(x/pi);
>> Omega = linspace(-pi,pi,1001);
>> N = 6; fc = 20000; T = 1/(4*fc); n = [0:N];
>> ha = @(t) 1/(2*T)*mysinc(pi*t/(2*T));
>> wrec = @(n) 1.0*((n>=0)&(n<=N));
>> hrec = T*ha(n*T-N/2*T).*wrec(n);
>> Hrec = polyval(hrec,exp(1j*Omega)).*exp(-1j*N*Omega);
>> plot(Omega,abs(Hrec),'.k');

The resulting calculations and plots match the results shown in Figs. 12.19 and 12.20. The side lobes of the rectangular window are rather large (−13 dB), which results in a filter with poor stopband attenuation. The Hamming window, which has smaller side lobes (−43 dB), produces a filter with better stopband attenuation but a wider transition band. Using a Hamming window rather than a rectangular window, the desired impulse response is

hham[n] = T ha(nT − 3T) wham[n]

Using Table 12.2 to define wham[n], MATLAB readily computes hham[n], which is shown in Fig. 12.19d.

>> wham = @(n) (0.54-0.46*cos(2*pi*n/N)).*((n>=0)&(n<=N));
>> hham = T*ha(n*T-N/2*T).*wham(n)
hham = -0.0085  -0.0000  0.2451  0.5000  0.2451  0.0000  -0.0085
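The same taps can be reproduced in NumPy (an illustrative equivalent of the MATLAB computation):

```python
import numpy as np

# Hamming-windowed taps: h_ham[n] = 0.5 sinc(pi(n-3)/2) * w_ham[n], with N = 6.
n = np.arange(7)
wham = 0.54 - 0.46 * np.cos(2 * np.pi * n / 6)
hham = 0.5 * np.sinc((n - 3) / 2) * wham
print(np.round(hham, 4))   # -0.0085, 0, 0.2451, 0.5, 0.2451, 0, -0.0085
```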

Unlike the result obtained using the rectangular window, hham[n] does not exactly match the underlying CT impulse response. The largest differences occur at the edges, where the taper of the Hamming window is greatest. Using Eq. (12.43) and the computed values of hham[n], the frequency response is

Hham[e^{jΩ}] = −0.0085 + 0.2451e^{−j2Ω} + 0.5e^{−j3Ω} + 0.2451e^{−j4Ω} − 0.0085e^{−j6Ω}

The magnitude response |Hham[e^{jΩ}]| is shown dashed in Fig. 12.20.

>> Hham = polyval(hham,exp(1j*Omega)).*exp(-1j*N*Omega);
>> plot(Omega,abs(Hham),'.k');

Observe that the transition band corresponding to the Hamming window is wider than that corresponding to the rectangular window. Both hrec[n] and hham[n] are symmetric about n = 3 and, hence, have linear phase responses.

Comments: Referring to Fig. 12.20, the ideal (unwindowed) filter transitions from passband to stopband abruptly, resulting in a zero-width transition band. In the windowed filters, on the other hand, the spectrum spreads out, which results in a gradual transition from the passband to the stopband. For the rectangular window case, the distance (spectral spread) between the two peaks on either side of the transition region approximately equals 4π/N₀T, the main lobe width of the rectangular window spectrum. Clearly, increasing the window width decreases the


transition width. This result is intuitive because a wider window means that we are accepting more data (a closer approximation), which should cause smaller distortion (smaller spectral spreading). A smaller window length (a poorer approximation) causes more spectral spreading (more distortion). The windowed filters also possess passband and stopband ripple. These characteristics are a result of the side lobes of the window spectra. When the ideal impulse response is windowed, its spectrum is convolved with the window's spectrum, and the window's side lobes leak into the passband and stopband. To keep this spectral leakage small, the window side lobes must be small and decay rapidly. It is the type of window, not its length, that largely determines side lobe levels and decay rates.
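The quoted side-lobe figures (−13 dB rectangular, −43 dB Hamming) are the familiar long-window values; for very short windows the Hamming advantage is smaller, though still substantial. The NumPy sketch below is illustrative (the FFT size and peak search are arbitrary choices, not from the text):

```python
import numpy as np

def peak_sidelobe_db(w):
    # Zero-padded spectrum; walk down the main lobe to its first null, then
    # report the largest remaining peak relative to the main-lobe maximum.
    W = np.abs(np.fft.fft(w, 1 << 16))
    W = W / W.max()
    k = 1
    while W[k + 1] < W[k]:
        k += 1
    return 20 * np.log10(W[k:len(W) // 2 + 1].max())

L = 7
rect_db = peak_sidelobe_db(np.ones(L))
ham_db = peak_sidelobe_db(0.54 - 0.46 * np.cos(2 * np.pi * np.arange(L) / (L - 1)))
print(rect_db > ham_db)   # True: the Hamming side lobes are lower
```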

DRILL 12.7 Triangular Window Lowpass Filter Design

Using an N = 8 triangular (Bartlett) window, show that the transfer function of a lowpass filter with Ωc = π/2 is …

EXAMPLE 12.12 Window Method Differentiator Design

Using rectangular and Hamming windows, design N = 10 FIR filters to realize a digital differentiator. Plot the magnitude response of each filter.

The transfer function of an ideal continuous-time differentiator is Ha(s) = s, and its frequency response is Ha(jω) = jω. Restricting our attention to the frequencies of interest to our digital differentiator, the desired impulse response is found from Eq. (12.47) as

h[n] = ( (T/2π) ∫_{−π/T}^{π/T} jω e^{jω(n−N/2)T} dω ) w[n]

Letting x = (n − N/2)T, we have

h[n] = ( (T/2π) ∫_{−π/T}^{π/T} jω e^{jωx} dω ) w[n]
     = [ (1/x) cos(πx/T) − (T/πx²) sin(πx/T) ] w[n]

For integer n, observe that sin(πx/T) = sin(π[n − N/2]) = 0 when N is even. Similarly, cos(πx/T) = cos(π[n − N/2]) = 0 when N is odd. Hence, for n ≠ N/2,


h[n] = { cos(π[n − N/2]) / ((n − N/2)T) w[n],      N even
       { −sin(π[n − N/2]) / (π(n − N/2)²T) w[n],   N odd      (12.49)

When N is even, h[n] = 0 at the midpoint n = N/2.
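Equation (12.49) can be spot-checked by evaluating the defining integral numerically; the NumPy sketch below is illustrative (T = 1 and the grid density are arbitrary choices):

```python
import numpy as np

# For even N, compare h[n] = cos(pi(n-N/2))/((n-N/2)T) with direct numerical
# evaluation of (T/2pi) * int_{-pi/T}^{pi/T} jw e^{jw(n-N/2)T} dw.
N, T = 10, 1.0
w = np.linspace(-np.pi / T, np.pi / T, 200001)
dw = w[1] - w[0]
for n in range(N + 1):
    if n == N // 2:
        continue                     # midpoint handled separately (h = 0)
    x = (n - N / 2) * T
    numeric = (T / (2 * np.pi)) * np.sum(1j * w * np.exp(1j * w * x)) * dw
    closed = np.cos(np.pi * (n - N / 2)) / ((n - N / 2) * T)
    assert abs(numeric - closed) < 1e-3
```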

For this problem, we have N = 10, which is an even order. We use MATLAB to compute Th[n] using Eq. (12.49).

>> N = 10; n = 0:N; Omega = linspace(-pi,pi,1001);
>> wrec = @(n) 1.0*((n>=0)&(n<=N));
>> wham = @(n) (0.54-0.46*cos(2*pi*n/N)).*((n>=0)&(n<=N));
>> Threc = cos(pi*(n-N/2))./(n-N/2).*wrec(n)
Threc = 0.200 -0.250 0.333 -0.500 1.000 Inf -1.000 0.500 -0.333 0.250 -0.200
>> Thham = cos(pi*(n-N/2))./(n-N/2).*wham(n)
Thham = 0.016 -0.042 0.133 -0.341 0.912 Inf -0.912 0.341 -0.133 0.042 -0.016
>> Threc(n==N/2) = 0; Thham(n==N/2) = 0;

In each of the two cases, the n = N/2 = 5 point is initially computed incorrectly as ∞. It is a simple matter to adjust this point to the correct value of 0 (final line of code). Figures 12.21a and 12.21b show the (scaled) impulse responses Threc[n] and Thham[n], and Fig. 12.21c compares their corresponding magnitude responses (rectangular solid, Hamming dashed) to an ideal differentiator (shaded). Notice once again that the rectangular window produces oscillatory behavior (the Gibbs phenomenon). The tapered Hamming window produces a much smoother response.

>> subplot(221); stem(n,Threc,'k.'); subplot(222); stem(n,Thham,'k.');
>> THrec = polyval(Threc,exp(1j*Omega)).*exp(-1j*N*Omega);
>> THham = polyval(Thham,exp(1j*Omega)).*exp(-1j*N*Omega);
>> subplot(212); plot(Omega,abs(THrec),Omega,abs(THham),'--k');

Although we do not display the phase responses of these filters here, since the impulse responses are odd length and antisymmetric, both possess linear-phase characteristics. Except for occasional jumps of ±π and a linear-phase component of −5Ω (corresponding to a time delay of 5T seconds), both filters exactly match the phase response of an ideal differentiator. Notice that the Hamming-windowed filter is more accurate for low frequencies, but the rectangular-windowed filter operates over a broader frequency range. The taper of the Hamming window effectively shrinks the passband of the differentiator. This is generally true for all tapered windows. To compensate for this shrinkage, we usually start with a passband somewhat larger (typically, 25% larger) than the design passband. For example, to ensure that


our Hamming-windowed differentiator works in the 20-kHz audio range, we might select T such that

2π × 20,000 ≤ (2/3)(π/T)  ⟹  T ≤ 16.67 µs

Figure 12.21 N = 10 digital differentiators: (a) Threc[n], (b) Thham[n], and (c) T|Hrec[e^{jΩ}]| (solid) and T|Hham[e^{jΩ}]| (dashed).

DRILL 12.8 Differentiator Design with Odd Order

Although the mathematical reasons are outside the scope of this book, odd-order digital differentiators have superior performance, particularly at high frequencies, than do digital differentiators of similar complexity but even order. Repeat Ex. 12.12 for N = 9 and verify that, despite being lower order (lower complexity), the resulting digital differentiators have comparable performance to the N = 10 digital differentiators.

DRILL 12.9 Another Window Method Design

Using a rectangular window, design an N = 6 FIR filter to realize the triangular response Ha(jω) = Δ(ωT/π). Determine the transfer function H[z] and plot the filter's magnitude response.

ANSWER
H[z] = 0.0225 + 0.1023z⁻¹ + 0.2026z⁻² + 0.25z⁻³ + 0.2026z⁻⁴ + 0.1023z⁻⁵ + 0.0225z⁻⁶


12.8.2 Nonrecursive Filter Design by the Frequency-Domain Criterion: The Frequency Sampling Method

The frequency-domain criterion is [see Eq. (12.17)]

H[e^{sT}] = Ha(s)

In this case, we shall realize this equality for purely sinusoidal frequencies; that is, for s = jω:

H[e^{jωT}] = Ha(jω)    (12.50)

For an Nth-order FIR filter, there are N₀ = N + 1 elements in h[n], and we can hope to force the two frequency spectra in Eq. (12.50) to be equal only at N₀ points. Because the spectral width is 2π/T, we choose these frequencies ω₀ = 2π/N₀T rad/s apart; that is,

ω₀ = 2π/N₀T

We require that

H[e^{jrω₀T}] = Ha(jrω₀),    r = 0, 1, 2, ..., N₀ − 1    (12.51)

Note that these are the samples of the periodic extension of Ha(jω) (or H[e^{jωT}]). Because we force the frequency response of the filter to be equal to the desired frequency response at N₀ equidistant frequencies in the spectrum, this method is known as the frequency sampling or the spectral sampling method. In order to find the filter transfer function, we first determine the filter impulse response h[n]. Thus, our problem is to determine the filter impulse response from knowledge of the N₀ uniform samples of the periodic extension of the filter frequency response H[e^{jωT}]. But H[e^{jωT}] is the DTFT of h[n] [see Eq. (10.58)]. Hence, as shown in Sec. 10.6 [Eqs. (10.46) and (10.47)], h[n] and H[e^{jrω₀T}] (the N₀ uniform samples of H[e^{jωT}]) are a DFT pair with Ω₀ = ω₀T. Hence, the desired h[n] is the IDFT of these samples, given by

h[n] = (1/N₀) Σ_{r=0}^{N₀−1} H[e^{jrω₀T}] e^{jrΩ₀n},    n = 0, 1, 2, ..., N₀ − 1    (12.52)

Note that the H[e^{jrω₀T}] = Ha(jrω₀) are known [Eq. (12.51)]. We can use the IFFT to compute the N₀ values of h[n]. From these values of h[n], we can determine the filter transfer function H[z] as

H[z] = Σ_{n=0}^{N₀−1} h[n] z⁻ⁿ    (12.53)

LINEAR PHASE (CONSTANT DELAY) FILTERS

We desire that Ha(jω) = H[e^{jωT}]. The filter featured in Eqs. (12.52) and (12.53) satisfies this condition only at N₀ values of ω. If we specify ideal filter characteristics (brick wall, zero phase) at these points, the filter frequency response between samples can deviate considerably. As we shall see in later examples, a filter delay of (N₀ − 1)/2 samples generally produces superior results, including the desirable filter characteristic of linear phase. It is a sensible adjustment. It is not reasonable to expect a realizable (causal) filter with zero delay (zero phase); rather, we expect practical filters to have a delay that increases with filter complexity (order). To achieve this delay and consequent improvement in overall filter performance, we modify our filter design procedure slightly. First, we start with a zero-phase (or a constant-phase ±π/2) response (usually by specifying the desired response using ideal filter characteristics). To produce a delay of (N₀ − 1)/2 units in h[n], we multiply H[e^{jωT}] by e^{−j(N₀−1)ωT/2}. Notice, the added filter delay does not alter the filter amplitude response, but the phase response changes by −(N₀ − 1)ωT/2, which is a linear function of ω. In our modified procedure to realize a frequency response H[e^{jωT}], we thus begin with H[e^{jωT}]e^{−j(N₀−1)ωT/2} and find the IDFT of its N₀ samples. The resulting IDFT at n = 0, 1, 2, 3, ..., N₀ − 1 is the desired (and improved) impulse response. Note that ω₀T = 2π/N₀, and the N₀ uniform samples of H[e^{jωT}]e^{−j(N₀−1)ωT/2} are

Hr = H[e^{jrω₀T}] e^{−jπr(N₀−1)/N₀}    (12.54)

Recall that the N₀ samples Hr are the uniform samples of a periodic extension of H[e^{jωT}]e^{−j(N₀−1)ωT/2}. Hence, Eq. (12.54) applies to samples of the frequency range 0 ≤ ω ≤ π/T. The remaining samples are obtained by using the conjugate symmetry property Hr = H*_{N₀−r}. The desired h[n] is the IDFT of Hr; that is,

h[n] = (1/N₀) Σ_{r=0}^{N₀−1} Hr e^{j2πrn/N₀},    n = 0, 1, 2, ..., N₀ − 1    (12.55)

As before, the filter transfer function H[z] is determined using Eq. (12.53). Let us now demonstrate the procedure through an example.
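In NumPy terms, the delayed-sample recipe just described can be sketched as follows (illustrative only; it uses the N = 6 lowpass specifications with cutoff π/2 that are worked by hand next):

```python
import numpy as np

# Sample the ideal response, attach the linear-phase factor of Eq. (12.54)
# for every r, then take the IDFT per Eq. (12.55).
N = 6
N0 = N + 1
r = np.arange(N0)
Omega0 = 2 * np.pi / N0
Ha = 1.0 * ((r * Omega0 <= np.pi / 2) | (r * Omega0 >= 2 * np.pi - np.pi / 2))
Hr = Ha * np.exp(-1j * np.pi * r * (N0 - 1) / N0)
h = np.real(np.fft.ifft(Hr))        # Eq. (12.55); ifft supplies the 1/N0 factor
print(np.round(h, 4))
```

Because the sample magnitudes are even and the phase factor is applied at every r, the samples come out conjugate symmetric automatically, and the IDFT is real and midpoint symmetric (linear phase).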

EXAMPLE 12.13 Lowpass FIR Filter Design by Frequency Sampling

Using the frequency sampling method, design a sixth-order linear-phase FIR lowpass filter of cutoff frequency π/2T rad/s. The frequency response H[e^{jωT}] of an ideal lowpass filter is shown (shaded) in Fig. 12.22.



Figure 12.22 Lowpass filter design using the frequency sampling method.

For this design,

N₀ = N + 1 = 7,  (N₀ − 1)/2 = 3,  ω₀ = 2π/N₀T = 2π/7T,  Ω₀ = 2π/7

Using Eq. (12.54), the samples of Hr are Hr = Ha(jrω₀)e^{−j6πr/7}. Thus,

H₀ = 1,  H₁ = e^{−j6π/7},  H₂ = H₃ = 0

The remaining three samples are determined using the conjugation property of the DFT, Hr = H*_{N₀−r}. Thus,

H₄ = H₃* = 0,  H₅ = H₂* = 0,  H₆ = H₁* = e^{j6π/7}

According to Eq. (12.55), the desired h[n] is the IDFT of Hr; that is,

h[n] = (1/7) Σ_{r=0}^{6} Hr e^{j2πrn/7},    n = 0, 1, 2, 3, ..., 6

We may compute this IDFT by using the IFFT algorithm or by straightforward substitution of the values of Hr in the above equation:

h[0] = (1/7) Σ_{r=0}^{6} Hr = (1/7)[1 + e^{−j6π/7} + e^{j6π/7}] = (1/7)(1 + 2 cos(6π/7)) = −0.1146

h[1] = (1/7) Σ_{r=0}^{6} Hr e^{j2πr/7} = (1/7)[1 + e^{−j4π/7} + e^{j4π/7}] = (1/7)(1 + 2 cos(4π/7)) = 0.0793

h[2] = (1/7) Σ_{r=0}^{6} Hr e^{j4πr/7} = (1/7)[1 + e^{−j2π/7} + e^{j2π/7}] = (1/7)(1 + 2 cos(2π/7)) = 0.3210

h[3] = (1/7) Σ_{r=0}^{6} Hr e^{j6πr/7} = (1/7)[1 + 1 + 1] = 0.4286


Similarly, we can show that

h[4] = 0.3210,  h[5] = 0.0793,  and  h[6] = −0.1146
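As a numerical cross-check (illustrative, not from the text), the DFT of these taps returns magnitudes of 1 at the three passband sample points and 0 at the other four, which is exactly the frequency-sampling property:

```python
import numpy as np

# DFT of the designed taps; magnitudes must match |Hr| = 1, 1, 0, 0, 0, 0, 1.
h = np.array([-0.1146, 0.0793, 0.3210, 0.4286, 0.3210, 0.0793, -0.1146])
n = np.arange(7)
Hr = np.array([h @ np.exp(-1j * 2 * np.pi * r * n / 7) for r in range(7)])
print(np.round(np.abs(Hr), 3))   # 1, 1, 0, 0, 0, 0, 1 (to rounding of the taps)
```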

Observe that h[n] is symmetric about its center point n = 3, as expected. Compare these values with those found by the impulse invariance method in Ex. 12.11 for the rectangular window case. Although the two sets of values are different, they are comparable. What is the difference in the two filters? The impulse invariance filter optimizes the design with respect to all frequencies. It minimizes the mean-squared value of the difference between the desired and the realized frequency response. The frequency sampling method, in contrast, realizes a filter whose frequency response matches exactly the desired frequency response at N₀ uniformly spaced frequencies. The mean-squared error in this design will generally be higher than that in the impulse invariance method. The filter transfer function is

H[z] = −0.1146 + 0.0793z⁻¹ + 0.3210z⁻² + 0.4286z⁻³ + 0.3210z⁻⁴ + 0.0793z⁻⁵ − 0.1146z⁻⁶

The filter frequency response is

H[e^{jωT}] = −0.1146 + 0.0793e^{−jωT} + 0.3210e^{−j2ωT} + 0.4286e^{−j3ωT} + 0.3210e^{−j4ωT} + 0.0793e^{−j5ωT} − 0.1146e^{−j6ωT}
           = e^{−j3ωT}[0.4286 + 0.6420 cos ωT + 0.1586 cos 2ωT − 0.2291 cos 3ωT]

Figure 12.22 shows that the realized filter's magnitude response exactly matches the desired response at the N₀ sample points. The time delay adds a linear phase −3ωT to the filter characteristic. We can also use MATLAB to readily compute the desired impulse response h[n].

>> N = 6; N0 = N+1; r = 0:N; Omega0 = 2*pi/N0;
>> Hr = 1.0*((r*Omega0<=pi/2)|(r*Omega0>=2*pi-pi/2)).*exp(-1j*N/2*r*Omega0);
>> h = real(ifft(Hr))
h = -0.1146  0.0793  0.3210  0.4286  0.3210  0.0793  -0.1146
>> Omega = linspace(0,2*pi,201);
>> H = polyval(h,exp(1j*Omega)).*exp(-1j*N*Omega);
>> stem(r*Omega0,abs(Hr),'k.');
>> line(Omega,abs(H),'color',[0 0 0]);
>> xlabel('\Omega'); ylabel('|H[e^{j\Omega}]|');

This resulting MATLAB plot confirms the magnitude response in Fig. 12.22.


AN ALTERNATE METHOD USING FREQUENCY SAMPLING FILTERS

We now show an alternative approach to the frequency sampling method, which uses an N₀-order comb filter in cascade with a parallel bank of N₀ first-order filters. This structure forms a frequency sampling filter. The transfer function H[z] of the filter is first obtained by taking the z-transform of h[n] in Eq. (12.55):

H[z] = Σ_{n=0}^{N₀−1} h[n] z⁻ⁿ = (1/N₀) Σ_{r=0}^{N₀−1} Hr Σ_{n=0}^{N₀−1} (e^{jrΩ₀} z⁻¹)ⁿ

The second sum on the right-hand side is a geometric series. From Sec. B.8.3, we therefore have

Σ_{n=0}^{N₀−1} (e^{jrΩ₀} z⁻¹)ⁿ = (1 − z^{−N₀}) / (1 − e^{jrΩ₀} z⁻¹)

Hence,

H[z] = [(z^{N₀} − 1)/(N₀ z^{N₀})] Σ_{r=0}^{N₀−1} z Hr / (z − e^{j2πr/N₀})    (12.56)

where the first factor is H₁[z] and the sum is H₂[z].

Observe that we do not need to perform IDFT (or IFFT) computations to obtain the desired filter transfer function. All we need are the values of the frequency samples Hr, which are given. Equation (12.56) shows that the desired filter is realized as a cascade of two filters with transfer functions H₁[z] and H₂[z]. The first filter, with transfer function H₁[z], is an N₀-order comb filter (see Ex. 12.10). The second filter, with transfer function H₂[z], is a parallel combination of N₀ first-order filters whose poles lie on the unit circle at e^{j2πr/N₀} (r = 0, 1, 2, ..., N₀ − 1). Especially for lowpass and bandpass filters, notice that many coefficients Hr appearing in H₂[z] are zero. For example, the lowpass filter of Ex. 12.13 has four out of seven coefficients that are zero. Thus, in practice the final filter is usually much simpler than it appears in Eq. (12.56). As a result, this method may require fewer computations (multiplications and additions) than a filter obtained by the direct method (using the IDFT).
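The equivalence in Eq. (12.56) is easy to confirm numerically. The sketch below is illustrative (it uses the Ex. 12.13 samples and an arbitrary test point):

```python
import numpy as np

# Compare the direct FIR form sum h[n] z^{-n} with the comb/parallel-bank form.
N0 = 7
r = np.arange(N0)
Hr = np.zeros(N0, dtype=complex)
Hr[0], Hr[1], Hr[6] = 1, np.exp(-1j * 6 * np.pi / 7), np.exp(1j * 6 * np.pi / 7)
h = np.real(np.fft.ifft(Hr))

z = 0.9 * np.exp(1.3j)                      # any point that is not a pole
H_fir = np.sum(h * z ** (-r))
H1 = (z ** N0 - 1) / (N0 * z ** N0)         # comb filter
H2 = np.sum(z * Hr / (z - np.exp(1j * 2 * np.pi * r / N0)))   # parallel bank
assert np.isclose(H_fir, H1 * H2)
```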


The poles of the frequency sampling filter are complex in general because they lie on the unit circle. Therefore, the realization of H₂[z] should combine conjugate poles to form a parallel connection of quadratic transfer functions. All these points will be clarified by designing the filter in Ex. 12.13 by this method.

EXAMPLE 12.14 Lowpass FIR Filter Design by a Frequency Sampling Filter

Redo Ex. 12.13 using a frequency sampling filter.

In this case, N = 6 and N₀ = N + 1 = 7. Also,

H₀ = 1,  H₁ = e^{−j6π/7},  H₆ = e^{j6π/7}

and all the remaining Hr = 0. Substituting these values into the transfer function H[z] of the desired filter in Eq. (12.56), we obtain

H[z] = [(z⁷ − 1)/7z⁷] [ z/(z − 1) + z e^{−j6π/7}/(z − e^{j2π/7}) + z e^{j6π/7}/(z − e^{−j2π/7}) ]

where the first factor is H₁[z] and the bracketed sum is H₂[z].

We combine the last two complex-conjugate pole terms on the right-hand side to obtain

H[z] = [(z⁷ − 1)/7z⁷] [ z/(z − 1) + (2z cos(6π/7) − 2 cos(8π/7)) z / (z² − 2 cos(2π/7) z + 1) ]
     = [(z⁷ − 1)/7z⁷] [ z/(z − 1) − 1.802 z(z − 1)/(z² − 1.247z + 1) ]

We can realize this filter by placing the comb filter H₁[z] in cascade with H₂[z], which consists of a first-order filter and a second-order filter in parallel.

POLE-ZERO CANCELLATION IN FREQUENCY SAMPLING FILTERS

With frequency sampling filters, we make an intriguing observation: a nonrecursive (FIR) filter is realized by a cascade of H₁[z] and H₂[z], yet H₂[z] is recursive (IIR). This strange fact should alert us to the possibility of something interesting going on in this filter. For a nonrecursive filter, there can be no poles other than those at the origin [see Eq. (12.38)]. In the frequency sampling filter of Eq. (12.56), however, H₂[z] has N₀ poles at e^{jrΩ₀} (r = 0, 1, 2, ..., N₀ − 1). All these poles lie on the unit circle at equally spaced points. These poles simply cannot be in a truly


nonrecursive filter. They must somehow get canceled along the way somewhere. This is precisely what happens. The zeros of H₁[z] are exactly where the poles of H₂[z] are because the roots of z^{N₀} − 1 = 0 are z = e^{j2πr/N₀}, r = 0, 1, 2, ..., N₀ − 1.

Thus, the poles of H₂[z] are canceled by the zeros of H₁[z], rendering the final filter nonrecursive. Pole-zero cancellation in this filter is a potential cause for mischief because such a perfect cancellation assumes exact realization of both H₁[z] and H₂[z]. Such a realization requires the use of infinite-precision arithmetic, which is a practical impossibility because of quantization effects (finite word lengths). Imperfect cancellation of poles and zeros means there will still be poles on the unit circle, and the filter will not have a finite impulse response. More serious, however, is the fact that the resulting system will be marginally stable. Such a system provides no damping of the round-off noise that is introduced in filter computations. In fact, such noise tends to increase with time and may render the filter useless. We can partially mitigate this problem by moving both the poles (of H₂[z]) and zeros (of H₁[z]) to a circle of radius r = 1 − ε, where ε is a small positive number → 0. This artifice makes the overall system asymptotically stable.

SPECTRAL SAMPLING WITH WINDOWING

The frequency sampling method can be modified to take advantage of windowing. We first design a frequency sampling filter using a value N₀′ that is much higher than the design value N₀. The result is a filter that matches the desired frequency response at a very large number (N₀′) of points. Then we use a suitable N₀-point window to truncate the N₀′-point impulse response. This procedure yields the final design of a desired order.

12.9 MATLAB:

DESIGNING HIGH-ORDER FILTERS

Continued hardware advancements have only solidified the popularity of discrete-time filters. Unlike their continuous-time counterparts, the performance of discrete-time filters is not affected by component variations, temperature, humidity, or age. Furthermore, digital hardware is easily reprogrammed, which allows convenient change of device function. For example, certain digital hearing aids are individually programmed to match the required response of a user. Digital filters also allow for the implementation of high-order designs that are simply not practical using analog filters. Programs such as MATLAB greatly facilitate the design of high-order digital filters. In this section, we use MATLAB to design higher-order digital filters. We start with an IIR filter design using the bilinear transform and conclude with several examples of FIR design using the frequency sampling method.

12.9.1 IIR Filter Design Using the Bilinear Transform

Transformation of a continuous-time filter to a discrete-time filter begins with the desired continuous-time transfer function

Ha(s) = Y(s)/X(s) = B(s)/A(s) = ( Σ_{k=0}^{M} b_{k+N−M} s^{M−k} ) / ( Σ_{k=0}^{N} a_k s^{N−k} )

As a matter of convenience, Ha(s) is represented in factored form as

Ha(s) = (b_{N−M}/a₀) Π_{k=1}^{M}(s − z_k) / Π_{k=1}^{N}(s − p_k)    (12.57)

where z_k and p_k are the system zeros and poles, respectively. A mapping rule converts the rational function Ha(s) to a rational function H[z]. Section 12.6 introduced the bilinear transform, which maps s to z according to s = 2(1 − z⁻¹)/T(1 + z⁻¹). Making this substitution into Eq. (12.57) yields

H[z] = (b_{N−M}/a₀) [ Π_{k=1}^{M}(2/T − z_k) / Π_{k=1}^{N}(2/T − p_k) ] (z + 1)^{N−M} Π_{k=1}^{M}( z − (1 + z_kT/2)/(1 − z_kT/2) ) / Π_{k=1}^{N}( z − (1 + p_kT/2)/(1 − p_kT/2) )    (12.58)

In addition to the M zeros at (1 + z_kT/2)/(1 − z_kT/2) and N poles at (1 + p_kT/2)/(1 − p_kT/2), there are N − M zeros at −1. Since practical continuous-time filters require M ≤ N for stability, the number of added zeros is thankfully always nonnegative. If the continuous-time filter Ha(s) does not prewarp critical frequencies, it is still possible to prewarp one critical CT frequency ωc to match a desired digital frequency Ωc by setting the parameter T used in Eq. (12.58) according to

T = (2/ωc) tan(Ωc/2)    (12.59)

Let us illustrate the process with an example.
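As a quick numeric sketch of this mapping (illustrative; the pole value is an arbitrary choice, not from the text):

```python
import numpy as np

# Bilinear mapping with prewarping: T chosen by Eq. (12.59) so that the CT
# frequency wc lands exactly at the digital frequency Omegac.
wc, Omegac = 1.0, np.pi / 3
T = 2 / wc * np.tan(Omegac / 2)             # Eq. (12.59)

p = -1.0 + 0.5j                             # an illustrative stable CT pole
p_digital = (1 + p * T / 2) / (1 - p * T / 2)
assert abs(p_digital) < 1                   # LHP poles map inside the unit circle

z = (1 + 1j * wc * T / 2) / (1 - 1j * wc * T / 2)   # image of s = j*wc
assert np.isclose(abs(z), 1) and np.isclose(np.angle(z), Omegac)
```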

EXAMPLE 12.15 High-Order Butterworth LPF Design via MATLAB

Applying the bilinear transform to a normalized (ωc = 1) analog prototype, use MATLAB to design a 20th-order Butterworth lowpass filter with cutoff frequency Ωc = π/3. Create the filter's pole-zero and magnitude response plots.

To begin, we establish a few design variables and use Eq. (7.22) to compute the poles of the analog Butterworth prototype, which is an all-pole filter (M = 0).

>> omegac = 1; Omegac = pi/3; N = 20; M = 0;
>> k = 1:N; pk = exp(1j*pi*(2*k+N-1)/(2*N));

Next, we adjust T so the normalized cutoff frequency ωc = 1 transforms to the desired digital cutoff frequency Ωc = π/3.

>> T = 2/omegac*tan(Omegac/2);



Using Eq. (12.58), we determine the gain, zeros, and poles of our digital filter H[z].

>> gain = 1/prod(2/T-pk);
>> zeros = -1*ones(1,N); poles = (1+pk*T/2)./(1-pk*T/2);

Next, we determine the polynomial coefficients of H[z] and compute the digital filter's frequency response H[ejn].

>> B = gain*poly(zeros); A = poly(poles);
>> Omega = linspace(0,pi,1001);
>> H = polyval(B,exp(1j*Omega))./polyval(A,exp(1j*Omega));

Due to the large order of the filter, we do not print out the coefficient vectors B and A; instead, we rely on pole-zero and magnitude response plots to verify the filter design.

>> subplot(121); circle = exp(1j*(0:.01:2*pi));
>> plot(real(zeros),imag(zeros),'ko',real(poles),imag(poles),'kx');
>> line(real(circle),imag(circle),'color',[0 0 0],'linestyle',':');
>> axis equal; axis([-1.1 1.1 -1.1 1.1]);
>> subplot(122); plot(Omega,abs(H),'k-'); axis([0 pi 0 1.1]);
>> xlabel('\Omega'); ylabel('|H[e^{j\Omega}]|');
>> set(gca,'xtick',0:pi/3:pi); grid on;

As Fig. 12.23 shows, the resulting pole-zero and magnitude response plots confirm the final design is correct.
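An independent check of the design is possible as well. The NumPy sketch below is illustrative: it rebuilds the filter and evaluates H[z] in factored form, which is better conditioned than expanding 20th-order polynomials, and confirms the half-power point at Ωc = π/3.

```python
import numpy as np

# Rebuild the 20th-order design and verify |H| at the cutoff Omegac = pi/3.
N, wc, Omegac = 20, 1.0, np.pi / 3
k = np.arange(1, N + 1)
pk = np.exp(1j * np.pi * (2 * k + N - 1) / (2 * N))   # Butterworth prototype poles
T = 2 / wc * np.tan(Omegac / 2)
gain = 1 / np.prod(2 / T - pk)
poles = (1 + pk * T / 2) / (1 - pk * T / 2)

H = lambda z: gain * (z + 1) ** N / np.prod(z - poles)
print(abs(H(np.exp(1j * Omegac))))   # 0.7071..., the half-power gain
assert np.isclose(abs(H(np.exp(1j * Omegac))), 1 / np.sqrt(2))
assert np.isclose(abs(H(1.0 + 0j)), 1.0)   # unity DC gain
```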

Figure 12.23 Pole-zero and magnitude response plots of a 20th-order Butterworth lowpass filter.


12.9.2 FIR Filter Design Using Frequency Sampling

Next, let us design several high-order FIR filters using the frequency sampling method described in Sec. 12.8.2. We shall look at the difference between a lowpass filter designed in an attempt to have no delay (zero phase) compared to one that allows a delay of half the filter length (linear phase). We also consider a design example of a filter that does not conform to the standard classification of lowpass, highpass, bandpass, or bandstop.

EXAMPLE 12.16 Effect of Delay on FIR LPF Performance

Repeat Ex. 12.13 using N = 30 for two cases: specifying frequency samples with zero phase (attempting a filter with no delay) and specifying frequency samples that allow a delay of half the filter length (linear phase). Plot the impulse response and magnitude response of each filter.

We can easily achieve this design with minor modification of the code in Ex. 12.13. In the first case, we change the order to N = 30, remove the linear-phase component, and add plotting routines for the impulse response.

>> N = 30; N0 = N+1; r = 0:N; Omega0 = 2*pi/N0;
>> Hr = 1.0*((r*Omega0<=pi/2)|(r*Omega0>=2*pi-pi/2));
...

>> N = 50; N0 = N+1; r = 0:N; Omega0 = 2*pi/N0;
>> Hr = (2-r*Omega0).*(r*Omega0<=2) ...


>> line(Omega,abs(H),'color',[0 0 0]);
>> xlabel('\Omega'); ylabel('|H[e^{j\Omega}]|');
>> axis([0 2*pi 0 2.1]); set(gca,'xtick',[0 2 pi 2*pi-2 2*pi]);

As shown in Fig. 12.26, the magnitude response closely follows the desired triangular shape. Since the impulse response is symmetric about its midpoint, we know that the filter has a linear-phase response. Since the edges of h[n] are very close to zero, it is reasonable to expect that a lower-order filter would also produce acceptable results.

12.10 SUMMARY

The response of an LTID system with transfer function H[z] to an everlasting sinusoid of frequency Ω is also an everlasting sinusoid of the same frequency. The output amplitude is |H[e^{jΩ}]| times the input amplitude, and the output sinusoid is shifted in phase with respect to the input sinusoid by ∠H[e^{jΩ}] radians. A plot of |H[e^{jΩ}]| versus Ω indicates the amplitude gain of sinusoids of various frequencies and is called the amplitude response of the system. A plot of ∠H[e^{jΩ}] versus Ω indicates the phase shift of sinusoids of various frequencies and is called the phase response. The frequency response of an LTID system is a periodic function of Ω with period 2π. This periodicity is the result of the fact that discrete-time sinusoids with frequencies differing by an integral multiple of 2π are identical.

The frequency response of a system is determined by the locations in the complex plane of the poles and zeros of its transfer function. We can design frequency-selective filters by proper placement of the transfer function's poles and zeros. Placing a pole (or a zero) near the point e^{jΩ₀} in the complex plane enhances (or suppresses) the frequency response at the frequency Ω = Ω₀. Using this concept, a proper combination of poles and zeros at suitable locations can yield desired filter characteristics.

Digital filters are classified into recursive and nonrecursive filters. The duration of the impulse response of a recursive filter is infinite; that of a nonrecursive filter is finite. For this reason, recursive filters are also called infinite impulse response (IIR) filters, and nonrecursive filters are called finite impulse response (FIR) filters. Using A/D and D/A interfaces, digital filters can process analog signals. Procedures for designing a digital filter that behaves like a given analog filter are discussed. A digital filter can approximate the behavior of a given analog filter in either the time domain or the frequency domain.
This leads to two different design procedures, one using a time-domain equivalence criterion and the other a frequency-domain equivalence criterion. For recursive or IIR filters, the time-domain equivalence criterion yields the impulse invariance method, and the frequency-domain equivalence criterion yields the bilinear transformation method. The impulse invariance method is handicapped by aliasing and cannot be used for highpass and bandstop filters. The bilinear transformation method, which is generally superior to the impulse invariance method, suffers from a frequency warping effect. However, this effect can be neutralized by prewarping. For nonrecursive or FIR filters, the time-domain equivalence criterion leads to the method of windowing (Fourier series method), and the frequency-domain equivalence criterion leads to the method of frequency sampling. Because nonrecursive filters are a special case of recursive

1054

CHAPTER 12 FREQUENCY RESPONSE AND DIGITAL FILTERS

filters, we expect the performance of recursive filters to be superior. This statement is true in the sense that a given amplitude response can be achieved by a recursive filter of an order smaller than that required for the corresponding nonrecursive filter. However, nonrecursive filters have the advantage that they can realize arbitrarily shaped amplitude responses and can possess linear-phase characteristics. Recursive filters are good for piecewise constant amplitude responses, and they can realize linear phase only approximately. A linear-phase characteristic in nonrecursive filters requires that the impulse response h[n] be either symmetric or antisymmetric about its center point.
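The last claim can be checked numerically. This is my own illustration (Python/NumPy assumed, with a made-up length-5 symmetric h[n]): for an FIR impulse response symmetric about its center point M/2, the frequency response equals e^{−jΩM/2} times a purely real function of Ω, i.e., the phase is linear.

```python
import numpy as np

# Hypothetical length-5 symmetric FIR impulse response (center M/2 = 2)
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
M = len(h) - 1

W = np.linspace(-np.pi, np.pi, 201)
n = np.arange(len(h))
# H(e^{jW}) = sum_n h[n] e^{-jWn}
Hw = (h[None, :] * np.exp(-1j * np.outer(W, n))).sum(axis=1)

# Remove the linear-phase factor e^{-jW M/2}; the remainder must be purely real
R = Hw * np.exp(1j * W * M / 2)
assert np.allclose(R.imag, 0.0, atol=1e-10)
```

An antisymmetric h[n] gives the same linear-phase factor times a purely imaginary function, which is why both symmetry types yield linear phase.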

Reference

1. Mitra, S. K. Digital Signal Processing: A Computer-Based Approach, 2nd ed. McGraw-Hill, New York, 2001.

PROBLEMS

12.1-1 A CT sinusoid x(t) = cos(ωt) is sampled at a greater-than-Nyquist rate fs = 1000 Hz to produce a DT sinusoid x[n] = cos(Ωn). Determine the analog frequency ω if (a) Ω = f, (b) Ω = 2f, (c) Ω = ~.

12.1-5 Find the amplitude and the phase response of the filters shown in Fig. P12.1-5. [Hint:

12.1-2 Find the amplitude and phase response of the digital filters depicted in Fig. P12.1-2.

12.1-3

(a) Plot the magnitude response |H[e^{jΩ}]| over −2π ≤ Ω ≤ 2π. (b) Plot the phase response ∠H[e^{jΩ}] over −2π ≤ Ω ≤ 2π. (c) Determine the system output y[n] in response to the periodic input x[n].

A causal LTID system H[z]

Express H[e^{jΩ}] as e^{−j2.5Ω} Ha[e^{jΩ}].]

12.1-6

21 = 16(zcz-p )(z+ )

1

4

has a periodic input x[n] that toggles between the values 1 and 2. That is, x[n] = [..., 1, 2, 1, 2, 1, 2, 1, ...], where x[0] = 1.

12.1-4 A causal LTID system H[z] =

-7cn - k] k=O

12.1-7

(a) Input-output relationships of two filters

are described by (i) y[n] = −0.9y[n − 1] + x[n] and (ii) y[n] = 0.9y[n − 1] + x[n]. For each case, find the transfer function, the amplitude response, and the phase response. Sketch the amplitude response, and state the type (highpass, lowpass, etc.) of each filter. (b) Find the response of each of these filters to a sinusoid x[n] = cos Ωn for Ω = 0.01π and 0.99π. In general, show that

Figure P12.1-2

Figure P12.1-5

the gain (amplitude response) of filter (i) at frequency Ω0 is the same as the gain of filter (ii) at frequency π − Ω0.

12.1-8 For an LTID system specified by the equation

y[n + 1] − 0.8y[n] = x[n + 1] + 0.8x[n]

(a) Find the amplitude and the phase response.
(b) Find the system response y[n] for the input x[n] = cos(0.5n − π/3).

12.1-9 For an asymptotically stable LTID system, show that the steady-state response to input e^{jΩn}u[n] is H[e^{jΩ}]e^{jΩn}u[n]. The steady-state response is that part of the response that does not decay with time and persists forever.

12.1-10 (a) A digital filter has the sampling interval T = 50 µs. Determine the highest frequency that can be processed by this filter without aliasing.



Figure P12.2-1

(b) If the highest frequency to be processed is 50 kHz, determine the minimum value of the sampling frequency fs and the maximum value of the sampling interval T that can be used.

12.1-11

Consider the discrete-time system represented by

y[n] = Σ_{k=0}^{∞} (0.5)^k x[n − k]

(a) Determine and plot the magnitude response |H[e^{jΩ}]| of the system.
(b) Determine and plot the phase response ∠H[e^{jΩ}] of the system.
(c) Find an efficient block representation that implements this system.

12.2-1 Pole-zero configurations of certain filters are shown in Fig. P12.2-1. Sketch roughly the amplitude response of these filters.

12.2-2 Figure P12.2-2 displays the pole-zero plot of a second-order real, causal LTID system that has H[−1] = −1.

Figure P12.2-2

(a) Determine the five constants b0, b1, b2, a1, and a2 that specify the transfer function H[z] = (b0 z² + b1 z + b2)/(z² + a1 z + a2).
(b) Using the techniques of Sec. 12.2, accurately hand-sketch the system magnitude response |H[e^{jΩ}]| over the range −2π ≤ Ω ≤ 0.
(c) Determine the output y[n] of this system if the input is x[n] = sin(⋯).

12.2-3 Repeat Prob. 12.2-2 if the zero at z = 1 is moved to z = −1 and H[1] = −1 is specified rather than H[−1] = −1.

12.2-4 Figure P12.2-4 displays the pole-zero plot of a second-order real, causal LTID system that has H[−1] = 1.

Figure P12.2-4

(a) Determine the five constants k, b1, b2, a1, and a2 that specify the transfer function H[z] = k(z² + b1 z + b2)/(z² + a1 z + a2).
(b) Using the techniques of Sec. 12.2, accurately hand-sketch the system magnitude response |H[e^{jΩ}]| over the range −π ≤ Ω ≤ π.
(c) A signal x(t) = cos(100πt) + cos(500πt) is sampled at a greater-than-Nyquist rate fs Hz and then input into the above LTID system to produce DT output y[n] = β cos(Ω0 n + θ). Determine fs and Ω0. You do not need to find the constants β and θ.
(d) Is the impulse response h[n] of this system absolutely summable? Justify your answer.


12.2-5 Figure P12.2-5 displays the pole-zero plot of a second-order real, causal LTID system that has H[1] = −1.

Figure P12.2-5

(a) Determine the five constants k, b1, b2, a1, and a2 that specify the transfer function H[z] = k(z² + b1 z + b2)/(z² + a1 z + a2).
(b) Using the techniques of Sec. 12.2, accurately hand-sketch the system magnitude response |H[e^{jΩ}]| over the range −π ≤ Ω ≤ π.
(c) A signal x(t) = cos(2πft) is sampled at a rate fs = 1 kHz and then input into the above LTID system to produce DT output y[n]. Determine, if possible, the frequency or frequencies f that will produce zero output, y[n] = 0.

12.2-6 The system y[n] − y[n − 1] = x[n] − x[n − 1] is an all-pass system that has zero phase response. Is there any difference between this system and the system y[n] = x[n]? Justify your answer.

12.2-7 Figure P12.2-7 displays the pole-zero plot of a second-order real, causal LTID system that has a repeated zero and H[1] = 4. The solid circle is the unit circle.

Figure P12.2-7

(a) Determine the five constants k, b1, b2, a1, and a2 that specify the transfer function H[z] = k(z² + b1 z + b2)/(z² + a1 z + a2).
(b) Using the techniques of Sec. 12.2, accurately hand-sketch the system magnitude response |H[e^{jΩ}]| over the range −π ≤ Ω ≤ π.
(c) Determine the steady-state output yss[n] of this system if the input is x[n] = cos(⋯)u[n].
(d) State whether this system is LP, HP, BP, BS, or something else. If the digital system operates at fs = 8 kHz, what is the approximate Hertzian cutoff frequency (or frequencies) of this system?

12.2-8 The magnitude and phase responses of a real, stable, LTI system are shown in Fig. P12.2-8.
(a) What type of system is this: lowpass, highpass, bandpass, or bandstop?
(b) What is the output of this system in response to x1[n] = 2 sin(⋯)?
(c) What is the output of this system in response to ⋯?

12.2-9 Consider an LTID system with system function H[z] = b0 (z² + 1)/(z² − 9/16).



Figure P12.2-8

processed is 20 kHz (Fh = 20,000). [Hint: See Ex. 12.3. The zeros should be at e^{±jωT} for ω corresponding to 5000 Hz, and the poles are at ae^{±jωT} with a < 1. Leave your answer in terms of a. Realize this filter using the canonical form. Find the amplitude response of the filter.]

(a) Determine the constant b0 so that the system frequency response at Ω = π is −1.
(b) Accurately sketch the system poles and zeros.
(c) Using the locations of the system poles and zeros, sketch |H[e^{jΩ}]| over 0 ≤ Ω ≤ 2π.
(d) Determine the response y[n] to the input x[n] = (−1 + j) + j^n + (1 − j) sin(πn + 1).
(e) Draw an appropriate block diagram representation of this system.

12.2-10

Do Prob. 12.9-3 by graphical procedure. Make the sketches approximately, without using MATLAB.

12.2-11

(a) Realize a digital filter whose transfer function is given by

H[z] = K (z + 1)/(z − a)

(b) Sketch the amplitude response of this filter, assuming |a| < 1.
(c) The amplitude response of this lowpass filter is maximum at Ω = 0. The 3 dB bandwidth is the frequency at which the amplitude response drops to 0.707 (or 1/√2) times its maximum value. Determine the 3 dB bandwidth of this filter when a = 0.2.

12.2-12 Design a digital notch filter to reject frequency 5000 Hz completely and to have a sharp recovery on either side of 5000 Hz to a gain of unity. The highest frequency to be

12.2-13 Consider the desired DT system magnitude response |H[e^{jΩ}]| in Fig. P12.2-13.

Figure P12.2-13

(a) Is the filter LP, HP, BP, BS, or something else? Explain.
(b) Sketch the pole-zero plot of a second-order system that behaves as a reasonable approximation of Fig. P12.2-13. What is the coefficient b0?
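The 3 dB bandwidth asked for in Prob. 12.2-11(c) can be checked numerically. This is my own sketch (Python/NumPy assumed), taking a = 0.2 and K = 1 (K only scales the response, so it does not affect the bandwidth), and searching for the frequency where |H| drops to 1/√2 of its Ω = 0 maximum.

```python
import numpy as np

a, K = 0.2, 1.0

def Hmag(W):
    z = np.exp(1j * W)
    return np.abs(K * (z + 1) / (z - a))

W = np.linspace(0, np.pi, 200001)
mag = Hmag(W)                   # monotonically decreasing on [0, pi]
target = mag[0] / np.sqrt(2)    # 3 dB below the maximum at W = 0

# First frequency at which the response has fallen to 1/sqrt(2) of maximum
W3dB = W[np.argmax(mag <= target)]
print(W3dB)
```

Solving |H|² = |H(0)|²/2 in closed form gives cos(Ω3dB) = 5/13 for a = 0.2, so the numerical search should land near arccos(5/13) ≈ 1.18 rad.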

12.2-14

Show that a first-order LTID system with a pole at z = r and a zero at z = 1/r (r ≥ 1) is an allpass filter. In other words, show that the amplitude response |H[e^{jΩ}]| of a system with the transfer function

H[z] = (z − 1/r)/(z − r)

is constant with frequency. This is a first-order allpass filter. [Hint: Show that the ratio of the distances of any point on the unit circle from the zero (at z = 1/r) and the pole (at z = r) is a constant 1/r.] Generalize this result to show that an LTID system with two poles at z = re^{±jθ} and two zeros at z = (1/r)e^{±jθ} (r ≥ 1) is an allpass filter. In other words, show that the amplitude response of a system with the transfer function

H[z] = [z² − (2cosθ/r)z + 1/r²] / [z² − (2r cosθ)z + r²],   r ≥ 1

is constant with frequency.
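The first-order allpass claim is easy to confirm numerically; this small check is my own illustration (Python/NumPy assumed), verifying that |H[e^{jΩ}]| = 1/r at every frequency for H[z] = (z − 1/r)/(z − r).

```python
import numpy as np

r = 2.0                               # any r >= 1
W = np.linspace(-np.pi, np.pi, 1001)
z = np.exp(1j * W)

Hmag = np.abs((z - 1 / r) / (z - r))

# |e^{jW} - 1/r| / |e^{jW} - r| = 1/r at every frequency: allpass
assert np.allclose(Hmag, 1 / r)
```

The algebra behind it: |e^{jΩ} − 1/r|² = (1/r²)(r² − 2r cos Ω + 1) = (1/r²)|e^{jΩ} − r|², exactly the constant-ratio property the hint describes.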

12.2-15 (a) If h1[n] and h2[n], the impulse responses of two LTID systems, are related by h2[n] = (−1)^n h1[n], then show that

H2[e^{jΩ}] = H1[e^{j(Ω ± π)}]

How is the frequency response spectrum H2[e^{jΩ}] related to H1[e^{jΩ}]?
(b) If H1[z] represents an ideal lowpass filter with cutoff frequency Ωc, sketch H2[e^{jΩ}]. What type of filter is H2[e^{jΩ}]?

12.2-16 Mappings such as the bilinear transformation are useful in the conversion of continuous-time filters to discrete-time filters. Another useful type of transformation is one that converts a discrete-time filter into a different type of discrete-time filter. Consider a transformation that replaces z with −z.
(a) Show that this transformation converts lowpass filters into highpass filters and highpass filters into lowpass filters.
(b) If the original filter is an FIR filter with impulse response h[n], what is the impulse response of the transformed filter?

12.4-1 A lowpass digital filter with a sampling interval T = 50 µs has a cutoff frequency of fc = 10 kHz.
(a) What is the corresponding digital cutoff frequency Ωc?
(b) If the system running the filter now operates at T = 25 µs, determine the cutoff frequencies fc and Ωc.
(c) If the system running the filter now operates at T = 100 µs, determine the cutoff frequencies fc and Ωc.

12.5-1 In Ch. 8, we used another approximation to find a digital system to realize an analog system. We showed that an analog system specified by Eq. (8.19) can be realized by using the digital system specified by Eq. (8.20). Compare that solution with the one resulting from the impulse-invariance method. Show that one result is a close approximation of the other and that the approximation improves as T → 0.

12.5-2 A CT system has impulse response ha(t) = e^{−t}u(t). Draw the DFII realization of the corresponding DT system designed by the impulse-invariance method with T = 0.1.

12.5-3 (a) Using the impulse-invariance criterion, design a digital filter to realize an analog filter with transfer function

Ha(s) = (7s + 20) / [2(s² + 7s + 10)]

(b) Show a canonical and a parallel realization of the filter. Use a 1% criterion for the choice of T.
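The impulse-invariance idea in Prob. 12.5-3 can be sketched numerically. This is my own illustration (it assumes SciPy's cont2discrete with the 'impulse' method is available): discretize Ha(s) and check that the stable analog poles at s = −2, −5 map to z = e^{sT} inside the unit circle.

```python
import numpy as np
from scipy.signal import cont2discrete

# Analog filter from Prob. 12.5-3: Ha(s) = (7s + 20) / (2(s^2 + 7s + 10))
num = [7, 20]
den = [2, 14, 20]          # 2(s^2 + 7s + 10); poles at s = -2, -5
T = 0.05                   # sampling interval (illustrative choice)

numd, dend, _ = cont2discrete((num, den), T, method='impulse')

# Impulse invariance maps each CT pole s_k to a DT pole e^{s_k T}
poles = np.roots(np.atleast_1d(np.squeeze(dend)))
assert np.all(np.abs(poles) < 1)
assert np.allclose(np.sort(np.abs(poles)),
                   np.sort([np.exp(-5 * T), np.exp(-2 * T)]))
```

This is also a numerical demonstration of the stability claim in Prob. 12.5-9: left-half-plane poles always land inside the unit circle under e^{sT}.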

12.5-4

Use the impulse-invariance criterion to design a digital filter to realize the second-order analog Butterworth filter with transfer function

Use a 1% criterion for the choice of T.

12.5-5

Design a digital integrator using the impulse-invariance method. Find and give a rough sketch of the amplitude response, and compare it with that of the ideal integrator. If this integrator is used primarily for integrating audio signals (whose bandwidth is 20 kHz), determine a suitable value for T.

12.5-6

An oscillator by definition is a source (no input) that generates a sinusoid of a certain frequency ω0. Therefore, an oscillator is a system whose zero-input response is a sinusoid of the desired frequency. Find the

transfer function of a digital oscillator to oscillate at 10 kHz by the methods described in parts (a) and (b). In both methods, select T so that there are 10 samples in each cycle of the sinusoid.
(a) Choose H[z] directly so that its zero-input response is a discrete-time sinusoid of frequency Ω = ωT corresponding to 10 kHz.
(b) Choose Ha(s) whose zero-input response is an analog sinusoid of 10 kHz. Now use the impulse-invariance method to determine H[z].
(c) Show a canonical realization of the oscillator.
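The idea of part (a) can be sketched numerically (my own illustration, Python/NumPy assumed): with 10 samples per cycle, Ω0 = 2π/10, and a system with poles on the unit circle at e^{±jΩ0} has the zero-input recursion y[n] = 2cos(Ω0) y[n−1] − y[n−2], whose solution is a sustained sinusoid.

```python
import numpy as np

W0 = 2 * np.pi / 10          # 10 samples per cycle of the sinusoid

# Zero-input response of y[n] = 2cos(W0) y[n-1] - y[n-2]
# (characteristic roots at e^{+-jW0}, i.e., on the unit circle)
N = 50
y = np.zeros(N)
y[0], y[1] = 1.0, np.cos(W0)          # initial conditions select cos(W0 n)
for n in range(2, N):
    y[n] = 2 * np.cos(W0) * y[n - 1] - y[n - 2]

assert np.allclose(y, np.cos(W0 * np.arange(N)))
```

The recursion follows from the identity cos(Ω0(n−1+1)) + cos(Ω0(n−1−1)) = 2cos(Ω0)cos(Ω0(n−1)); different initial conditions produce a sinusoid of the same frequency with a different amplitude and phase.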

12.5-7

A variant of the impulse-invariance method is the step-invariance method of digital filter

(b) Show that this transformation maps ω to Ω = 2 arctan(ωT/2).

12.6-2 Consider the design of a digital differentiator H[z].
(a) Determine H[z] using the bilinear transform.
(b) Show a realization of this filter.
(c) Find and sketch the magnitude response of this filter, and compare it with that of an ideal differentiator.
(d) If this filter is used primarily for processing audio signals (voice and music) up to 20 kHz, determine a suitable value for T.

12.6-3

Repeat Prob. 12.6-2 for a digital integrator.

12.6-4

Using the bilinear transform with prewarping, design a digital lowpass Butterworth filter that satisfies Gp = −2 dB, Gs = −11 dB, ωp = 100π rad/s, and ωs = 200π rad/s. The highest significant frequency is 250 Hz. If possible, it is desirable to oversatisfy the requirement of Gs.
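A numerical sketch of this design (my own, assuming SciPy; buttord/butter apply the bilinear transformation with prewarping internally when a sampling rate is supplied): ωp = 100π rad/s is 50 Hz, ωs = 200π rad/s is 100 Hz, and with the highest significant frequency 250 Hz we may take fs = 500 Hz.

```python
import numpy as np
from scipy import signal

fs = 500.0                       # 2 x 250 Hz
fp, fstop = 50.0, 100.0          # 100*pi and 200*pi rad/s, in Hz
Gp, Gs = 2.0, 11.0               # passband/stopband attenuations in dB

N, Wn = signal.buttord(fp, fstop, Gp, Gs, fs=fs)
b, a = signal.butter(N, Wn, fs=fs)

# Verify the specifications at the two edge frequencies
w, H = signal.freqz(b, a, worN=[fp, fstop], fs=fs)
gain_dB = 20 * np.log10(np.abs(H))
assert gain_dB[0] >= -Gp - 1e-6      # passband edge: no worse than -2 dB
assert gain_dB[1] <= -Gs + 1e-6      # stopband edge: at least -11 dB
```

Since the order is rounded up to an integer, one of the two specifications is normally oversatisfied, which is exactly what the problem statement invites for Gs.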

12.6-5

Repeat Prob. 12.6-4 for a Chebyshev filter.

synthesis. In this method, for a given Ha(s), we design H[z] in Fig. 12.9a such that y(nT) in Fig. 12.9b is identical to y[n] in Fig. 12.9a when x(t) = u(t).
(a) Show that, in general,

H[z] = [(z − 1)/z] Z{ [L^{−1}{Ha(s)/s}]_{t=kT} }

(b) Use this method to design H[z] for

Ha(s) = ωc/(s + ωc)

(c) Use the step-invariance method to synthesize a discrete-time integrator and compare its amplitude response with that of the ideal integrator.

12.6-6 Using the bilinear transform with prewarping, design a digital highpass Butterworth filter that satisfies Gp = −2 dB, Gs = −10 dB, ωp = 150π rad/s, and ωs = 100π rad/s. The highest significant frequency is 200 Hz.

12.5-8

Use the ramp-invariance method to synthesize a discrete-time differentiator and integrator. In this method, for a given Ha(s), we design H[z] such that y(nT) in Fig. 12.9b is identical to y[n] in Fig. 12.9a when x(t) = tu(t).

12.5-9

12.6-7

Repeat Prob. 12.6-6 using an inverse Chebyshev filter.

12.6-8

Using the bilinear transform with prewarping, design a digital bandpass Butterworth filter that satisfies Gp = −2 dB, Gs = −12 dB, ωp1 = 120 rad/s, ωp2 = 300 rad/s, ωs1 = 45 rad/s, and ωs2 = 450 rad/s. The highest significant frequency is 500 Hz.

12.6-9

Repeat Prob. 12.6-8 for a Chebyshev filter.

12.6-10

Using the bilinear transform with prewarping, design a digital bandstop Butterworth filter that satisfies Gp = −1 dB, Gs = −22 dB, ωp1 = 40 rad/s, ωp2 = 195 rad/s, ωs1 = 80 rad/s, and ωs2 = 120 rad/s. The highest significant frequency is 200 Hz.

12.6-11

Repeat Prob. 12.6-10 for a Chebyshev filter.

12.6-12

The bilinear transform, given by Eq. (12.30), is actually a one-to-one mapping relationship between the s plane and the z plane.

In an impulse-invariance design, show that if Ha(s) is a transfer function of a stable system, the corresponding H[z] is also a transfer function of a stable system.

12.6-1 The bilinear transformation is defined by the rule s = 2(1 − z^{−1})/[T(1 + z^{−1})]. (a) Show that this transformation maps the ω axis in the s plane to the unit circle z = e^{jΩ} in the z plane.

(a) Show that the bilinear transform maps the ω axis in the s plane to the unit circle in the z plane.
(b) Show that the bilinear transform maps the LHP and the RHP of the s plane to the interior and the exterior, respectively, of the unit circle in the z plane.
(c) Show that the bilinear transform maps a stable continuous-time system Ha(s) to a stable discrete-time system H[z].
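Parts (a) and (b) are easy to check numerically; this sketch is my own (Python/NumPy assumed). Inverting the bilinear rule gives z = (1 + sT/2)/(1 − sT/2): points s = jω land on the unit circle, and LHP points land inside it.

```python
import numpy as np

T = 0.1

def z_of_s(s):
    # Inverse of the bilinear rule s = (2/T)(1 - z^{-1})/(1 + z^{-1})
    return (1 + s * T / 2) / (1 - s * T / 2)

w = np.linspace(-1000, 1000, 2001)

# (a) The imaginary axis s = jw maps onto the unit circle |z| = 1
assert np.allclose(np.abs(z_of_s(1j * w)), 1.0)

# (b) LHP maps inside, RHP maps outside the unit circle
assert abs(z_of_s(-3 + 2j)) < 1 and abs(z_of_s(3 + 2j)) > 1
```

Part (c) follows directly from (b): stable poles of Ha(s) lie in the LHP, so their images, the poles of H[z], lie inside the unit circle.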

12.7-1 (a) Show that the frequency response of a third-order (N = 3) FIR filter with a midpoint symmetric impulse response h[n] is given by

H[e^{jΩ}] = 2e^{−j1.5Ω} (h[0] cos(3Ω/2) + h[1] cos(Ω/2))

(b) Show that the frequency response of a third-order (N = 3) FIR filter with a midpoint antisymmetric impulse response h[n] is given by

H[e^{jΩ}] = 2je^{−j1.5Ω} (h[0] sin(3Ω/2) + h[1] sin(Ω/2))

12.8-4 Consider the length-7 (N = 6) lowpass FIR filter of Ex. 12.11. Redesign this filter using a Hann window, and plot the resulting magnitude response |H[e^{jΩ}]|. How does this filter compare with those of Ex. 12.11?

12.8-5 Consider the N = 10 FIR differentiators of Ex. 12.12. Redesign this filter using a Blackman window, and plot the resulting magnitude response. How does this solution compare with those of Ex. 12.12?

12.8-6 Using the frequency sampling method (with linear phase), design an N = 10 digital differentiator. Compare both h[n] and H[e^{jΩ}] of the resulting design with the N = 10 designs of Ex. 12.12.

12.9-1 Use MATLAB to generate pole-zero plots for the causal systems with the following transfer functions:
(a) Ha[z] = (2z² + 1)/(z⁴ + 0.4096)

12.8-1 Using the Fourier series method, design an N = 14 FIR filter to approximate an ideal lowpass filter with a cutoff frequency of 20 kHz. Let the sampling frequency be fs = 200 kHz.

12.8-2 Repeat Prob. 12.8-1 using the window design method and a Hamming window.

12.8-3 Using the Fourier series method, design an N = 10 FIR filter to approximate the ideal bandpass filter shown in Fig. P12.8-3. Let T = 2 × 10⁻³.

12.8-7 (a) Sketch |Ha(jω)| and ∠Ha(jω). (b) Using rectangular and Hamming windows, apply the window method to design N = 14 FIR filters to approximate Ha(jω). Plot the resulting magnitude and phase responses, and compare them with the ideal.
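For Probs. 12.8-1 and 12.8-2, a windowed design can be sketched with SciPy (my own illustration; firwin windows the ideal-lowpass Fourier series coefficients): N = 14 means 15 taps, with a 20 kHz cutoff at fs = 200 kHz.

```python
import numpy as np
from scipy import signal

fs = 200e3
fc = 20e3
numtaps = 15                      # N = 14 -> filter length N + 1

h = signal.firwin(numtaps, fc, window='hamming', fs=fs)

# Linear phase: the impulse response is symmetric about its center point
assert np.allclose(h, h[::-1])
assert len(h) == numtaps
```

Replacing window='hamming' with 'boxcar', 'hann', or 'blackman' reproduces the other window choices these problems ask about; the symmetry (and hence linear phase) holds for all of them.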

Another form of comb filter is given by the transfer function

=

= -jsgn(w)TT

= 2ei..,,

f(λi) = β0 + β1 λi + β2 λi² + ⋯ + β_{n−1} λi^{n−1},   i = 1, 2, ..., n   (13.11)

Since A also satisfies Eq. (13.9), we may advance a similar argument to show that if f(A) is a function of a square matrix A expressed as an infinite power series in A, then

f(A) = a0 I + a1 A + a2 A² + ⋯ = Σ_{i=0}^{∞} ai A^i

and, as argued earlier, the right-hand side can be expressed using terms of power less than or equal to n − 1,

f(A) = β0 I + β1 A + β2 A² + ⋯ + β_{n−1} A^{n−1} = Σ_{i=0}^{n−1} βi A^i   (13.12)

in which the coefficients βi are found from Eq. (13.11). If some of the eigenvalues are repeated (multiple roots), the results are somewhat modified. We shall demonstrate the utility of this result with the following two examples.

13.1.3 Computation of an Exponential and a Power of a Matrix

Let us compute e^{At} defined by

e^{At} = I + At + (A²t²)/2! + (A³t³)/3! + ⋯

From Eq. (13.12), we can express

e^{At} = Σ_{i=0}^{n−1} βi A^i

in which the βi are given by Eq. (13.11), with f(λi) = e^{λi t}.

EXAMPLE 13.1 Computing the Exponential of a Matrix

Compute e^{At} for the case

A = [ 0   1 ; −2   −3 ]


The characteristic equation is

|λI − A| = | λ   −1 ; 2   λ+3 | = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0

Hence, the eigenvalues are λ1 = −1, λ2 = −2, and

e^{At} = β0 I + β1 A

in which

β0 = 2e^{−t} − e^{−2t}   and   β1 = e^{−t} − e^{−2t}

so that

e^{At} = [ 2e^{−t} − e^{−2t}    e^{−t} − e^{−2t} ; −2(e^{−t} − e^{−2t})    −e^{−t} + 2e^{−2t} ]   (13.13)
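The β computation above can be sketched numerically. This is my own illustration (SciPy's expm is assumed as the reference): solve β0 + β1 λi = e^{λi t} for the two eigenvalues and compare β0 I + β1 A with a direct matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam = np.array([-1.0, -2.0])       # eigenvalues of A
t = 0.5

# Solve [1 lam1; 1 lam2] [b0; b1] = [e^{lam1 t}; e^{lam2 t}]  (Eq. 13.11)
V = np.vstack([np.ones(2), lam]).T
b0, b1 = np.linalg.solve(V, np.exp(lam * t))

eAt = b0 * np.eye(2) + b1 * A      # Eq. (13.12) with n = 2
assert np.allclose(eAt, expm(A * t))

# Closed form from the example: b0 = 2e^{-t} - e^{-2t}, b1 = e^{-t} - e^{-2t}
assert np.isclose(b0, 2 * np.exp(-t) - np.exp(-2 * t))
assert np.isclose(b1, np.exp(-t) - np.exp(-2 * t))
```

The same Vandermonde system, with f(λi) = λi^k in place of e^{λi t}, computes A^k as described next.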

COMPUTATION OF A^k

As Eq. (13.12) indicates, we can express A^k as

A^k = β0 I + β1 A + ⋯ + β_{n−1} A^{n−1}

in which the βi are given by Eq. (13.11) with f(λi) = λi^k. For a complete example of the computation of A^k by this method, see Ex. 13.13.

13.2 INTRODUCTION TO STATE SPACE

From the discussion in Ch. 1, we know that to determine a system's response(s) at any instant t, we need to know the system's inputs during its entire past, from −∞ to t. If the inputs are known only for t > t0, we can still determine the system output(s) for any t > t0, provided we know certain initial conditions in the system at t = t0. These initial conditions collectively are called the initial state of the system (at t = t0). The state variables q1(t), q2(t), ..., qN(t) are the minimum number of system variables whose initial values at any instant t0 are sufficient to determine the behavior of the system for all time t ≥ t0 when the input(s) to the system is known for t ≥ t0. This statement implies that an output of a system at any instant is determined completely from a knowledge of the values of the system state and the input at that instant. Initial conditions of a system can be specified in many different ways. Consequently, the system state can also be specified in many different ways. This means that state variables are not unique.


CHAPTER 13 STATE-SPACE ANALYSIS

This discussion is also valid for multiple-input, multiple-output (MIMO) systems, where every possible system output at any instant t is determined completely from knowledge of the system state and the input(s) at the instant t. These ideas should become clear from the following example of an RLC circuit.

EXAMPLE 13.2 State-Space Description and Output Equations of an RLC Circuit

Find a state-space description of the RLC circuit shown in Fig. 13.1. Verify that all possible system outputs at some instant t can be determined from knowledge of the system state and the input at that instant t.

Figure 13.1 Circuit for Ex. 13.2.

It is known that inductor currents and capacitor voltages in an RLC circuit can be used as one possible choice of state variables. For this reason, we shall choose q1 (the capacitor voltage) and q2 (the inductor current) as our state variables. The node equation at the intermediate node is

i3 = i1 − i2 − q2

but i3 = 0.2q̇1, i1 = 2(x − q1), and i2 = 3q1. Hence,

0.2q̇1 = 2(x − q1) − 3q1 − q2

or

q̇1 = −25q1 − 5q2 + 10x

This is the first state equation. To obtain the second state equation, we sum the voltages in the extreme right loop formed by C, L, and the 2 Ω resistor so that they are equal to zero:


−q1 + q̇2 + 2q2 = 0

or

q̇2 = q1 − 2q2

Thus, the two state equations are

q̇1 = −25q1 − 5q2 + 10x
q̇2 = q1 − 2q2

Every possible output can now be expressed as a linear combination of q1, q2, and x. From Fig. 13.1, we have

v1 = x − q1
i1 = 2(x − q1)
v2 = q1
i2 = 3q1
i3 = i1 − i2 − q2 = 2(x − q1) − 3q1 − q2 = −5q1 − q2 + 2x
i4 = q2
v4 = 2i4 = 2q2
v3 = q1 − v4 = q1 − 2q2

This set of equations is known as the output equation of the system. It is clear from this set that every possible output at some instant t can be determined from knowledge of q1(t), q2(t), and x(t), the system state, and the input at the instant t. Once we have solved the state equations to obtain q1(t) and q2(t), we can determine every possible output for any given input x(t).

For continuous-time systems, the state equations are N simultaneous first-order differential equations in N state variables q1, q2, ..., qN of the form

q̇i = gi(q1, q2, ..., qN, x1, x2, ..., xj),   i = 1, 2, ..., N

where x1, x2, ..., xj are the j system inputs. For a linear system, these equations reduce to a simpler linear form

q̇i = ai1 q1 + ai2 q2 + ⋯ + aiN qN + bi1 x1 + ⋯ + bij xj,   i = 1, 2, ..., N   (13.14)

If there are k outputs y1, y2, ..., yk, the k output equations are of the form

ym = cm1 q1 + cm2 q2 + ⋯ + cmN qN + dm1 x1 + ⋯ + dmj xj,   m = 1, 2, ..., k   (13.15)

The N simultaneous first-order state equations are also known as the normal-form equations.


These equations can be written more conveniently in matrix form:

[q̇1]   [a11 a12 ⋯ a1N] [q1]   [b11 b12 ⋯ b1j] [x1]
[q̇2] = [a21 a22 ⋯ a2N] [q2] + [b21 b22 ⋯ b2j] [x2]
[ ⋮ ]   [ ⋮          ⋮ ] [ ⋮ ]   [ ⋮          ⋮ ] [ ⋮ ]
[q̇N]   [aN1 aN2 ⋯ aNN] [qN]   [bN1 bN2 ⋯ bNj] [xj]
  q̇            A          q            B           x

and

[y1]   [c11 c12 ⋯ c1N] [q1]   [d11 d12 ⋯ d1j] [x1]
[y2] = [c21 c22 ⋯ c2N] [q2] + [d21 d22 ⋯ d2j] [x2]
[ ⋮ ]   [ ⋮          ⋮ ] [ ⋮ ]   [ ⋮          ⋮ ] [ ⋮ ]
[yk]   [ck1 ck2 ⋯ ckN] [qN]   [dk1 dk2 ⋯ dkj] [xj]
  y            C          q            D           x

or

q̇ = Aq + Bx   (13.16)

and

y = Cq + Dx   (13.17)

Equation (13.16) is the state equation and Eq. (13.17) is the output equation; q, y, and x are the state vector, the output vector, and the input vector, respectively. For discrete-time systems, the state equations are N simultaneous first-order difference equations. Discrete-time systems are discussed in Sec. 13.7.
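Equations (13.16) and (13.17) can be exercised numerically. The sketch below is my own (Python/SciPy rather than the book's MATLAB): it builds controllable-canonical A, B, C, D matrices for H(s) = (2s + 10)/(s³ + 8s² + 19s + 12), the transfer function of Ex. 13.5, and recovers that transfer function from the state-space description.

```python
import numpy as np
from scipy.signal import ss2tf

# Controllable-canonical matrices for H(s) = (2s+10)/(s^3+8s^2+19s+12)
A = np.array([[-8.0, -19.0, -12.0],
              [ 1.0,   0.0,   0.0],
              [ 0.0,   1.0,   0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 2.0, 10.0]])
D = np.array([[0.0]])

# C (sI - A)^{-1} B + D reproduces the original numerator and denominator
num, den = ss2tf(A, B, C, D)

assert np.allclose(den, [1.0, 8.0, 19.0, 12.0])
assert np.allclose(np.ravel(num), [0.0, 0.0, 2.0, 10.0])
```

The first row of A carries the negated denominator coefficients and C carries the numerator coefficients, which is exactly the structure the direct-form realization discussed below produces.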

13.3 A SYSTEMATIC PROCEDURE TO DETERMINE STATE EQUATIONS

We shall discuss here a systematic procedure to determine the state-space description of linear time-invariant systems. In particular, we shall consider systems of two types: (1) RLC networks and (2) systems specified by block diagrams or Nth-order transfer functions.

13.3.1 Electrical Circuits

The method used in Ex. 13.2 proves effective in most of the simple cases. The steps are as follows:

1. Choose all independent capacitor voltages and inductor currents to be the state variables.
2. Choose a set of loop currents; express the state variables and their first derivatives in terms of these loop currents.
3. Write loop equations, and eliminate all variables other than state variables (and their first derivatives) from the equations derived in steps 2 and 3.


EXAMPLE 13.3 State Equations of an RLC Circuit

Write the state equations for the network shown in Fig. 13.2.

Figure 13.2 Circuit for Ex. 13.3.

Step 1. There is one inductor and one capacitor in the network. Therefore, we shall choose the inductor current q1 and the capacitor voltage q2 as the state variables.

Step 2. The relationship between the loop currents and the state variables can be written by inspection:

q1 = i2   (13.18)
(1/2)q̇2 = i2 − i3   (13.19)

Step 3. The loop equations are

4i1 − 2i2 = x   (13.20)
2(i2 − i1) + q̇1 + q2 = 0   (13.21)
−q2 + 3i3 = 0   (13.22)

Now we eliminate i1, i2, and i3 from the state and loop equations as follows. From Eq. (13.21), we have

q̇1 = 2(i1 − i2) − q2

We can eliminate i1 and i2 from this equation by using Eqs. (13.18) and (13.20) to obtain

q̇1 = −q1 − q2 + (1/2)x

The substitution of Eqs. (13.18) and (13.22) in Eq. (13.19) yields

q̇2 = 2q1 − (2/3)q2


These are the desired state equations. We can express them in matrix form as

[q̇1]   [−1    −1 ] [q1]   [1/2]
[q̇2] = [ 2  −2/3] [q2] + [ 0 ] x   (13.23)

The derivation of state equations from loop equations is facilitated considerably by choosing loops in such a way that only one loop current passes through each of the inductors or capacitors.

AN ALTERNATIVE PROCEDURE

We can also determine the state equations by the following procedure:

1. Choose all independent capacitor voltages and inductor currents to be the state variables.
2. Replace each capacitor by a voltage source equal to the capacitor voltage, and replace each inductor by a current source equal to the inductor current. This step will transform the RLC network into a network consisting only of resistors, current sources, and voltage sources.
3. Find the current through each capacitor and equate it to Cq̇i, where qi is the capacitor voltage. Similarly, find the voltage across each inductor and equate it to Lq̇j, where qj is the inductor current.

EXAMPLE 13.4 Alternate Procedure to Determine State Equations

Use the three-step alternative procedure just outlined to write the state equations for the network in Fig. 13.2.

In the network in Fig. 13.2, we replace the inductor by a current source of current q1 and the capacitor by a voltage source of voltage q2, as shown in Fig. 13.3. The resulting network consists of four resistors, two voltage sources, and one current source.

Figure 13.3 Equivalent circuit of the network in Fig. 13.2.

13.3 A Systematic Procedure to Determine State Equations

1075

We can determine the voltage vL across the inductor and the current iC through the capacitor by using the principle of superposition. This step can be accomplished by inspection. For example, vL has three components arising from three sources. To compute the component due to x, we assume that q1 = 0 (open circuit) and q2 = 0 (short circuit). Under these conditions, the entire network to the right of the 2 Ω resistor is opened, and the component of vL due to x is the voltage across the 2 Ω resistor. This voltage is clearly (1/2)x. Similarly, to find the component of vL due to q1, we short x and q2. The source q1 sees an equivalent resistor of 1 Ω across it, and hence vL = −q1. Continuing the process, we find that the component of vL due to q2 is −q2. Hence,

vL = q̇1 = −q1 − q2 + (1/2)x

Using the same procedure, we find

iC = (1/2)q̇2 = q1 − (1/3)q2

These equations are identical to the state equations [Eq. (13.23)] obtained earlier.†

13.3.2 State Equations from a Transfer Function

It is relatively easy to determine the state equations of a system specified by its transfer function.* Consider, for example, a first-order system with the transfer function

H(s) = 1/(s + a)

The system realization appears in Fig. 13.4. The integrator output q serves as a natural state variable since, in practical realization, initial conditions are placed on the integrator output. The integrator input is naturally q̇. From Fig. 13.4, we have

†This procedure requires modification if the system contains all-capacitor and voltage-source tie sets or all-inductor and current-source cut sets. In the case of all-capacitor and voltage-source tie sets, all capacitor voltages cannot be independent. One capacitor voltage can be expressed in terms of the remaining capacitor voltages and the voltage source(s) in that tie set. Consequently, one of the capacitor voltages should not be used as a state variable, and that capacitor should not be replaced by a voltage source. Similarly, in all-inductor and current-source cut sets, one inductor should not be replaced by a current source. If there are all-capacitor tie sets or all-inductor cut sets only, no further complications occur. In all-capacitor voltage-source tie sets and/or all-inductor current-source cut sets, we have additional difficulties in that terms involving derivatives of the input may occur. This problem can be solved by redefining the state variables. The final state variables will not be capacitor voltages and inductor currents.

*We implicitly assume that the system is controllable and observable. This implies that there are no pole-zero cancellations in the transfer function. If such cancellations are present, the state variable description represents only the part of the system that is controllable and observable (the part of the system that is coupled to the input and the output). In other words, the internal description represented by the state equations is no better than the external description represented by the input-output equation.


Figure 13.4 Block realization of H(s) = 1/(s + a).

q̇ = −aq + x   and   y = q

In Sec. 6.6, we saw that a given transfer function can be realized in several ways. Consequently, we should be able to obtain different state-space descriptions of the same system by using different realizations. This assertion will be clarified by the following example.

EXAMPLE 13.5 State-Space Description from a Transfer Function

Consider a system specified by the transfer function

H(s) = (2s + 10)/(s³ + 8s² + 19s + 12)   (direct form)
     = [2/(s + 1)] [(s + 5)/(s + 3)] [1/(s + 4)]   (cascade)
     = (4/3)/(s + 1) − 2/(s + 3) + (2/3)/(s + 4)   (parallel)

The procedure developed in Sec. 6.6 allows us to realize H(s) as, among others, direct form II (DFII), transpose DFII (TDFII), cascade, and parallel. These realizations are depicted in Fig. 13.5. Determine state-space descriptions for each of these realizations. As mentioned earlier, the output of each integrator serves as a natural state variable.

Direct Form II and Its Transpose
Here, we shall realize the system using the canonical form (direct form II and its transpose) discussed in Sec. 6.6. If we choose the state variables to be the three integrator outputs q1, q2, and q3, then, according to Fig. 13.5a,

    q̇1 = q2
    q̇2 = q3
    q̇3 = -12q1 - 19q2 - 8q3 + x

Also, the output y is given by

    y = 10q1 + 2q2


>> num = [2 10]; den = [1 8 19 12];
>> [A,B,C,D] = tf2ss(num,den)
A =
    -8   -19   -12
     1     0     0
     0     1     0
B =
     1
     0
     0
C =
     0     2    10
D =
     0

MATLAB's convention for labeling state variables q1, q2, ..., qN in a block diagram, such as shown in Fig. 13.5a, is reversed. That is, MATLAB labels q1 as qN, q2 as qN-1, and so on. Keeping this in mind, we see that MATLAB indeed confirms our earlier results. It is also possible to determine the transfer function from the state-space representation using the ss2tf and tf commands:

>> [num,den] = ss2tf(A,B,C,D); H = tf(num,den)
H =
            2 s + 10
  ----------------------------
  s^3 + 8 s^2 + 19 s + 12
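For readers working outside MATLAB, the same conversion can be sketched with SciPy's signal module. This is an assumption of this note rather than part of the text (the book itself uses MATLAB), but SciPy's controller canonical form happens to match the MATLAB result for this example:

```python
import numpy as np
from scipy import signal

# Transfer function H(s) = (2s + 10) / (s^3 + 8s^2 + 19s + 12)
num = [2, 10]
den = [1, 8, 19, 12]

# Convert to a state-space (controller canonical) realization
A, B, C, D = signal.tf2ss(num, den)

# Round-trip back to the transfer function to confirm the realization
num2, den2 = signal.ss2tf(A, B, C, D)
```

Here too the state labeling is reversed relative to Fig. 13.5a, exactly as described for MATLAB above.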

Transpose Direct Form II
We can also realize H(s) by using the transpose of the DFII form, as shown in Fig. 13.5b. If we label the outputs of the three integrators as the state variables v1, v2, and v3, then, according to Fig. 13.5b,

    v̇1 = -12v3 + 10x
    v̇2 = v1 - 19v3 + 2x
    v̇3 = v2 - 8v3


and the output y is given by

    y = v3

The matrix form of these state and output equations becomes

    [v̇1]   [0  0  -12] [v1]   [10]
    [v̇2] = [1  0  -19] [v2] + [ 2] x
    [v̇3]   [0  1   -8] [v3]   [ 0]
              Â                  B̂

and

    y = [0  0  1] [v1; v2; v3]
           Ĉ

Observe closely the relationship between the state-space descriptions of H(s) by means of the DFII and TDFII realizations. The A matrices in these two cases are the transpose of each other; also, the B of one is the transpose of the C of the other, and vice versa. Hence,

    Â = Aᵀ,    B̂ = Cᵀ,    and    Ĉ = Bᵀ

This is no coincidence. This duality relation is generally true [1].

Cascade Realization
The three integrator outputs w1, w2, and w3 in Fig. 13.5c are the state variables. Writing equations for the summer outputs yields

    ẇ1 = -w1 + x,    ẇ2 = 2w1 - 3w2,    and    ẇ3 = ẇ2 + 5w2 - 4w3

Since ẇ2 = 2w1 - 3w2, we see that ẇ3 = 2w1 + 2w2 - 4w3. From Fig. 13.5c, we further see that y = w3. Put into matrix form, the state and output equations are therefore

    [ẇ1]   [-1   0   0] [w1]   [1]
    [ẇ2] = [ 2  -3   0] [w2] + [0] x
    [ẇ3]   [ 2   2  -4] [w3]   [0]

and

    y = [0  0  1] [w1; w2; w3]

Parallel Realization (Diagonal Representation)
The three integrator outputs z1, z2, and z3 in Fig. 13.5d are the state variables. The state equations are

    ż1 = -z1 + x
    ż2 = -3z2 + x
    ż3 = -4z3 + x


and the output equation is

    y = (4/3)z1 - 2z2 + (2/3)z3

In matrix form, these equations are

    [ż1]   [-1   0   0] [z1]   [1]
    [ż2] = [ 0  -3   0] [z2] + [1] x,        y = [4/3  -2  2/3] [z1; z2; z3]
    [ż3]   [ 0   0  -4] [z3]   [1]
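The claim that these different matrix triples all describe the same system can be checked numerically. The sketch below uses Python/SciPy rather than the MATLAB of the text (an assumption of this note); it recovers the transfer function from each realization derived above and confirms that all three agree:

```python
import numpy as np
from scipy import signal

# DFII: q1' = q2, q2' = q3, q3' = -12q1 - 19q2 - 8q3 + x, y = 10q1 + 2q2
A1 = [[0, 1, 0], [0, 0, 1], [-12, -19, -8]]
B1 = [[0], [0], [1]]
C1 = [[10, 2, 0]]

# Cascade realization matrices derived above
A2 = [[-1, 0, 0], [2, -3, 0], [2, 2, -4]]
B2 = [[1], [0], [0]]
C2 = [[0, 0, 1]]

# Parallel (diagonal) realization matrices derived above
A3 = np.diag([-1.0, -3.0, -4.0])
B3 = [[1], [1], [1]]
C3 = [[4/3, -2, 2/3]]

# Convert each state-space description back to a transfer function
tfs = [signal.ss2tf(A, B, C, [[0]])
       for A, B, C in [(A1, B1, C1), (A2, B2, C2), (A3, B3, C3)]]
```

All three conversions should return the numerator 2s + 10 and denominator s^3 + 8s^2 + 19s + 12.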

A GENERAL CASE
It is clear that a system has several state-space descriptions. Notable among these are the variables obtained from the DFII, its transpose, and the diagonalized variables (in the parallel realization). State equations in these forms can be written immediately by inspection of the transfer function. Consider the general Nth-order transfer function

    H(s) = (b0 s^N + b1 s^{N-1} + ··· + b_{N-1} s + bN)/(s^N + a1 s^{N-1} + ··· + a_{N-1} s + aN)     (13.24)

         = (b0 s^N + b1 s^{N-1} + ··· + b_{N-1} s + bN)/((s - λ1)(s - λ2)···(s - λN))

         = b0 + k1/(s - λ1) + k2/(s - λ2) + ··· + kN/(s - λN)                                          (13.25)

The realizations of H(s) found by using direct form II [Eq. (13.24)] and the parallel form [Eq. (13.25)] appear in Figs. 13.6a and 13.6b, respectively. The N integrator outputs q1, q2, ..., qN in Fig. 13.6a are the state variables. By inspection of this figure, we obtain

    q̇1 = q2
    q̇2 = q3
     ⋮
    q̇_{N-1} = qN
    q̇N = -aN q1 - a_{N-1} q2 - ··· - a2 q_{N-1} - a1 qN + x

and output y is

    y = bN q1 + b_{N-1} q2 + ··· + b1 qN + b0 q̇N

We can eliminate q̇N in this output equation by using the last state equation to yield

    y = (bN - b0 aN) q1 + (b_{N-1} - b0 a_{N-1}) q2 + ··· + (b1 - b0 a1) qN + b0 x
      = b̂N q1 + b̂_{N-1} q2 + ··· + b̂1 qN + b0 x

where b̂i = bi - b0 ai.

Figure 13.6 Realizations of the Nth-order transfer function H(s): (a) direct form II and (b) parallel form.

For the parallel realization of Fig. 13.6b, the N integrator outputs z1, z2, ..., zN are the state variables, and by inspection

    [ż1]       [λ1  0   ⋯  0     0 ] [z1]       [1]
    [ż2]       [0   λ2  ⋯  0     0 ] [z2]       [1]
    [ ⋮ ]   =  [⋮    ⋮       ⋮    ⋮ ] [ ⋮ ]   +  [⋮] x       (13.26)
    [ż_{N-1}]  [0   0   ⋯  λ_{N-1} 0] [z_{N-1}]  [1]
    [żN]       [0   0   ⋯  0     λN] [zN]       [1]

and

    y = [k1  k2  ⋯  k_{N-1}  kN] [z1; z2; ...; z_{N-1}; zN] + b0 x

Observe that the diagonalized form of the state matrix [Eq. (13.26)] has the transfer function poles as its diagonal elements. The presence of repeated poles in H(s) will modify the procedure slightly. The handling of these cases is discussed in Sec. 6.6. It is clear from the foregoing discussion that a state-space description is not unique. For any realization of H(s) obtained from integrators, scalar multipliers, and adders, a corresponding state-space description exists. Since there are uncountable possible realizations of H(s), there are uncountable possible state-space descriptions. The advantages and drawbacks of various types of realization were discussed in Sec. 6.6.

13.4 SOLUTION OF STATE EQUATIONS

The state equations of a linear system are N simultaneous linear differential equations of the first order. We studied the techniques of solving linear differential equations in Chs. 2 and 6. The same techniques can be applied to state equations without any modification. However, it is more convenient to carry out the solution in the framework of matrix notation. These equations can be solved in both the time and frequency domains (Laplace transform). The latter is relatively easier to deal with than the time-domain solution. For this reason, we shall first consider the Laplace transform solution.


13.4.1 Laplace Transform Solution of State Equations

The ith state equation [Eq. (13.14)] is of the form

    q̇i = ai1 q1 + ai2 q2 + ··· + aiN qN + bi1 x1 + bi2 x2 + ··· + bij xj     (13.27)

We shall take the Laplace transform of this equation. Let

    qi(t) ⟺ Qi(s)

so that

    q̇i(t) ⟺ sQi(s) - qi(0)

Also, let

    xi(t) ⟺ Xi(s)

The Laplace transform of Eq. (13.27) yields

    sQi(s) - qi(0) = ai1 Q1(s) + ai2 Q2(s) + ··· + aiN QN(s) + bi1 X1(s) + bi2 X2(s) + ··· + bij Xj(s)

Taking the Laplace transforms of all N state equations, we obtain

      [Q1(s)]   [q1(0)]   [a11  a12  ⋯  a1N] [Q1(s)]   [b11  b12  ⋯  b1j] [X1(s)]
    s [Q2(s)] - [q2(0)] = [a21  a22  ⋯  a2N] [Q2(s)] + [b21  b22  ⋯  b2j] [X2(s)]
      [  ⋮  ]   [  ⋮  ]   [ ⋮            ⋮ ] [  ⋮  ]   [ ⋮            ⋮ ] [  ⋮  ]
      [QN(s)]   [qN(0)]   [aN1  aN2  ⋯  aNN] [QN(s)]   [bN1  bN2  ⋯  bNj] [Xj(s)]
       Q(s)      q(0)              A           Q(s)              B          X(s)

Defining the vectors and matrices as indicated, we have

    sQ(s) - q(0) = AQ(s) + BX(s)

or

    sQ(s) - AQ(s) = q(0) + BX(s)

and

    (sI - A)Q(s) = q(0) + BX(s)

where I is the N x N identity matrix. Solving for Q(s), we have

    Q(s) = (sI - A)^{-1}[q(0) + BX(s)]
         = Φ(s)[q(0) + BX(s)]                    (13.28)

where

    Φ(s) = (sI - A)^{-1}


Thus, from Eq. (13.28),

    Q(s) = Φ(s)q(0) + Φ(s)BX(s)

and

    q(t) = ℒ⁻¹[Φ(s)]q(0)  +  ℒ⁻¹[Φ(s)BX(s)]      (13.29)
           zero-input         zero-state
           response           response

Equation (13.29) gives the desired solution. Observe the two components of the solution. The first component yields q(t) when the input x(t) = 0. Hence, the first component is the zero-input response. In a similar manner, we see that the second component is the zero-state response.

EXAMPLE 13.6 Laplace Transform Solution to State Equations

Using the Laplace transform, find the state vector q(t) for the system whose state equation is given by

    q̇ = Aq + Bx,    where    A = [-12  2/3; -36  -1]    and    B = [1/3; 1]

with x(t) = u(t) and initial conditions q1(0) = 2 and q2(0) = 1.

From Eq. (13.28), we have

    Q(s) = Φ(s)[q(0) + BX(s)]

Let us first find Φ(s). We have

    sI - A = s[1  0; 0  1] - [-12  2/3; -36  -1] = [s+12  -2/3; 36  s+1]

and

    Φ(s) = (sI - A)^{-1} = 1/((s+4)(s+9)) [s+1  2/3; -36  s+12]

Now, q(0) is given as

    q(0) = [2; 1]

Also, X(s) = 1/s, and

    BX(s) = [1/3; 1](1/s) = [1/(3s); 1/s]

Therefore,

    q(0) + BX(s) = [2 + 1/(3s); 1 + 1/s] = [(6s+1)/(3s); (s+1)/s]


and

    Q(s) = Φ(s)[q(0) + BX(s)]
         = 1/((s+4)(s+9)) [s+1  2/3; -36  s+12] [(6s+1)/(3s); (s+1)/s]
         = [(2s^2 + 3s + 1)/(s(s+4)(s+9));  (s - 59)/((s+4)(s+9))]
         = [(1/36)/s - (21/20)/(s+4) + (136/45)/(s+9);  -(63/5)/(s+4) + (68/5)/(s+9)]

The inverse Laplace transform of this equation yields

    q1(t) = [1/36 - (21/20)e^{-4t} + (136/45)e^{-9t}] u(t)
    q2(t) = [-(63/5)e^{-4t} + (68/5)e^{-9t}] u(t)

This result is readily confirmed using MATLAB and its symbolic toolbox.

>> syms s
>> A = [-12 2/3; -36 -1]; B = [1/3; 1]; q0 = [2; 1]; X = 1/s;
>> q = ilaplace(inv(s*eye(2)-A)*(q0+B*X))
q =
(136*exp(-9*t))/45 - (21*exp(-4*t))/20 + 1/36
          (68*exp(-9*t))/5 - (63*exp(-4*t))/5

To create a plot of the state vector, we use MATLAB's subs command to substitute the symbolic variable t with a vector of desired values.

>> t = (0:.01:2); q = subs(q); q1 = q(1,:); q2 = q(2,:);
>> plot(t,q1,'k',t,q2,'k--');
>> xlabel('t'); ylabel('Amplitude'); legend('q_1(t)','q_2(t)','Location','SE');

The resulting plot is shown in Fig. 13.7.

Figure 13.7 State vector plot for Ex. 13.6.
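The closed-form result of Ex. 13.6 can also be cross-checked without any symbolic toolbox. The sketch below uses Python/SciPy in place of MATLAB (an assumption of this note); it integrates q̇ = Aq + Bx numerically for x(t) = u(t) and compares against the expressions obtained above:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-12.0, 2/3], [-36.0, -1.0]])
B = np.array([1/3, 1.0])

# q' = A q + B u(t); for t >= 0 the input u(t) is simply 1
sol = solve_ivp(lambda t, q: A @ q + B, (0, 2), [2.0, 1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

def q_exact(t):
    # Closed-form state vector from Ex. 13.6 (valid for t >= 0)
    q1 = 1/36 - (21/20)*np.exp(-4*t) + (136/45)*np.exp(-9*t)
    q2 = -(63/5)*np.exp(-4*t) + (68/5)*np.exp(-9*t)
    return np.array([q1, q2])

# Largest deviation between the numerical and closed-form solutions
err = max(np.abs(sol.sol(t) - q_exact(t)).max() for t in [0.1, 0.5, 1.0, 2.0])
```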


THE OUTPUT
The output equation is given by

    y = Cq + Dx

and

    Y(s) = CQ(s) + DX(s)

Upon substituting Eq. (13.28) into this equation, we have

    Y(s) = C{Φ(s)[q(0) + BX(s)]} + DX(s)
         = CΦ(s)q(0) + [CΦ(s)B + D]X(s)          (13.30)
           zero-input    zero-state
           response      response

The zero-state response [i.e., the response Y(s) when q(0) = 0] is given by

    Y(s) = [CΦ(s)B + D]X(s)

Note that the transfer function of a system is defined under the zero-state condition [see Eq. (6.23)]. The matrix CΦ(s)B + D is the transfer function matrix H(s) of the system, which relates the responses y1, y2, ..., yk to the inputs x1, x2, ..., xj:

    H(s) = CΦ(s)B + D                            (13.31)

and the zero-state response is

    Y(s) = H(s)X(s)

The matrix H(s) is a k x j matrix (k is the number of outputs and j is the number of inputs). The ijth element Hij(s) of H(s) is the transfer function that relates the output yi(t) to the input xj(t).

EXAMPLE 13.7 Transfer Function Matrix from State-Space Description

Let us consider a system with the state and output equations

    [q̇1; q̇2] = [0  1; -2  -3][q1; q2] + [1  0; 1  1][x1; x2]

    [y1; y2; y3] = [1  0; 1  1; 0  2][q1; q2] + [0  0; 1  0; 0  1][x1; x2]      (13.32)

Find the transfer function matrix H(s) for this system.

Here,

    Φ(s) = (sI - A)^{-1} = [s  -1; 2  s+3]^{-1} = 1/((s+1)(s+2)) [s+3  1; -2  s]

and

    H(s) = CΦ(s)B + D
         = [ (s+4)/((s+1)(s+2))       1/((s+1)(s+2))
             (s+4)/(s+2)              1/(s+2)
             2(s-2)/((s+1)(s+2))      (s^2+5s+2)/((s+1)(s+2)) ]                 (13.34)

and the zero-state response is

    Y(s) = H(s)X(s)

Remember that the ijth element of the transfer function matrix in Eq. (13.34) represents the transfer function that relates the output yi(t) to the input xj(t). For instance, the transfer function that relates the output y3 to the input x2 is H32(s), where

    H32(s) = (s^2 + 5s + 2)/((s+1)(s+2))

We can readily verify the transfer function matrix using MATLAB and its symbolic toolbox functions.

>> A = [0 1; -2 -3]; B = [1 0; 1 1]; C = [1 0; 1 1; 0 2]; D = [0 0; 1 0; 0 1];
>> syms s; H = collect(simplify(C*inv(s*eye(2)-A)*B+D))
H =
[  (s + 4)/(s^2 + 3*s + 2),                1/(s^2 + 3*s + 2)]
[           (s + 4)/(s + 2),                       1/(s + 2)]
[(2*s - 4)/(s^2 + 3*s + 2), (s^2 + 5*s + 2)/(s^2 + 3*s + 2)]

Transfer functions relating particular inputs to particular outputs, such as H32(s), can be obtained using the ss2tf and tf functions.

>> [num,den] = ss2tf(A,B,C,D,2); H_32 = tf(num(3,:),den)
H_32 =
     s^2 + 5 s + 2
  -----------------
    s^2 + 3 s + 2
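The entries of H(s) can also be spot-checked numerically by evaluating H(s) = CΦ(s)B + D at sample values of s. The following Python/NumPy sketch (an alternative to the MATLAB above, assumed for illustration) uses the matrices of Eq. (13.32):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
C = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def H(s):
    # Transfer function matrix H(s) = C (sI - A)^{-1} B + D
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

s0 = 3.0
H0 = H(s0)

# Compare the (3,2) entry against (s^2 + 5s + 2) / ((s+1)(s+2))
expected_32 = (s0**2 + 5*s0 + 2) / ((s0 + 1) * (s0 + 2))
```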


CHARACTERISTIC ROOTS (EIGENVALUES) OF A MATRIX

It is interesting to observe that the denominator of every transfer function in Eq. (13.34) is (s + 1)(s + 2) except for H21(s) and H22(s), where the factor (s + 1) is canceled. This is no coincidence. We see that the denominator of every element of Φ(s) is |sI - A| because Φ(s) = (sI - A)^{-1}, and the inverse of a matrix has its determinant in the denominator. Since C, B, and D are matrices with constant elements, we see from Eq. (13.31) that the denominator of Φ(s) will also be the denominator of H(s). Hence, the denominator of every element of H(s) is |sI - A|, except for the possible cancellation of the common factors mentioned earlier. In other words, the zeros of the polynomial |sI - A| are also the poles of all transfer functions of the system. Therefore, the zeros of the polynomial |sI - A| are the characteristic roots of the system. Hence, the characteristic roots of the system are the roots of the equation

    |sI - A| = 0                                 (13.35)

Since |sI - A| is an Nth-order polynomial in s with N zeros λ1, λ2, ..., λN, we can write Eq. (13.35) as

    |sI - A| = s^N + a1 s^{N-1} + ··· + a_{N-1} s + aN = (s - λ1)(s - λ2)···(s - λN) = 0

For the system in Ex. 13.7,

    |sI - A| = |[s  0; 0  s] - [0  1; -2  -3]| = |s  -1; 2  s+3|
             = s^2 + 3s + 2 = (s + 1)(s + 2)

Hence,

    λ1 = -1    and    λ2 = -2

Equation (13.35) is known as the characteristic equation of the matrix A, and λ1, λ2, ..., λN are the characteristic roots of A. The term eigenvalue, meaning "characteristic value" in German, is also commonly used in the literature. Thus, we have shown that the characteristic roots of a system are the eigenvalues (characteristic values) of the matrix A. At this point, the reader will recall that if λ1, λ2, ..., λN are the poles of the transfer function, then the zero-input response is of the form

    c1 e^{λ1 t} + c2 e^{λ2 t} + ··· + cN e^{λN t}                (13.36)

This fact is also obvious from Eq. (13.30). The denominator of every element of the zero-input response matrix CΦ(s)q(0) is |sI - A| = (s - λ1)(s - λ2)···(s - λN). Therefore, the partial fraction expansion and the subsequent inverse Laplace transform will yield a zero-input component of the form in Eq. (13.36).
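The equivalence between the characteristic roots and the eigenvalues of A is easy to confirm numerically. A NumPy sketch (assumed here as a MATLAB substitute) using the A matrix of Ex. 13.7:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Characteristic polynomial |sI - A| = s^2 + 3s + 2
char_poly = np.poly(A)

# Its zeros and the eigenvalues of A should coincide
roots = np.roots(char_poly)
eigs = np.linalg.eigvals(A)
```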

13.4.2 Time-Domain Solution of State Equations

The state equation is

    q̇ = Aq + Bx                                  (13.37)


We now show that the solution of the vector differential Eq. (13.37) is

    q(t) = e^{At} q(0) + ∫₀ᵗ e^{A(t-τ)} B x(τ) dτ

Before proceeding further, we must define the matrix exponential e^{At}. An exponential of a matrix is defined by an infinite series identical to that used in defining an exponential of a scalar. We shall define

    e^{At} = I + At + (A²t²)/2! + (A³t³)/3! + ···          (13.38)

For example, if

    A = [0  1; 2  1]

then

    At = [0  t; 2t  t]

and

    (A²t²)/2! = [0  1; 2  1][0  1; 2  1](t²/2) = [2  1; 2  3](t²/2) = [t²  t²/2; t²  3t²/2]

and so on. We can show that the infinite series in Eq. (13.38) is absolutely and uniformly convergent for all values of t. Consequently, it can be differentiated or integrated term by term. Thus, to find (d/dt)e^{At}, we differentiate the series on the right-hand side of Eq. (13.38) term by term:

    (d/dt)e^{At} = A + A²t + (A³t²)/2! + ··· = A[I + At + (A²t²)/2! + ···]

Hence,

    (d/dt)e^{At} = Ae^{At} = e^{At}A

Also note that from Eq. (13.38), it follows that

    e⁰ = I

where I is just the identity matrix. If we premultiply or postmultiply the infinite series for e^{At} [Eq. (13.38)] by the infinite series for e^{-At}, we find that

    (e^{-At})(e^{At}) = (e^{At})(e^{-At}) = I            (13.39)
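These defining properties of the matrix exponential can be checked numerically with SciPy's `expm` (a sketch assumed for illustration, using the 2 x 2 matrix from the series example above):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [2.0, 1.0]])
t = 0.3

E = expm(A * t)

# Partial sums of the series in Eq. (13.38) converge to expm(A t)
S, term = np.eye(2), np.eye(2)
for k in range(1, 25):
    term = term @ (A * t) / k   # next term A^k t^k / k!
    S = S + term

# Property checks: e^0 = I and e^{-At} e^{At} = I
I2 = expm(A * 0.0)
P = expm(-A * t) @ E
```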


In Sec. 13.1.1, we showed that

    (d/dt)(UV) = (dU/dt)V + U(dV/dt)

Using this relationship, we observe that

    (d/dt)[e^{-At}q] = ((d/dt)e^{-At})q + e^{-At}q̇ = e^{-At}(q̇ - Aq)

Substituting the state Eq. (13.37) reduces the right-hand side to e^{-At}Bx. Integrating both sides from 0 to t and then premultiplying by e^{At} yields

    q(t) = e^{At}q(0) + ∫₀ᵗ e^{A(t-τ)} B x(τ) dτ

which is the solution claimed at the outset; its two terms are, respectively, the zero-input and zero-state components [Eqs. (13.41) and (13.42)]. When the eigenvalues λ1, λ2, ..., λN of A are distinct, e^{At} can be computed by the Cayley-Hamilton method as e^{At} = β0 I + β1 A + ··· + β_{N-1} A^{N-1}, where the coefficients βi are found from the N scalar equations β0 + β1 λk + ··· + β_{N-1} λk^{N-1} = e^{λk t}. We can also determine e^{At} by comparing Eqs. (13.41) and (13.29). It is clear that

    e^{At} = ℒ⁻¹[Φ(s)] = ℒ⁻¹[(sI - A)^{-1}]            (13.44)

Thus, e^{At} and Φ(s) are a Laplace transform pair. To be consistent with Laplace transform notation, e^{At} is often denoted by φ(t), the state transition matrix (STM):

    e^{At} = φ(t)

EXAMPLE 13.8 Time-Domain Method to Solve State Equations

Use the time-domain method to solve Ex. 13.6.

For this case, the characteristic roots are given by

    |sI - A| = |s+12  -2/3; 36  s+1| = s² + 13s + 36 = (s+4)(s+9) = 0


The roots are λ1 = -4 and λ2 = -9, so

    β0 - 4β1 = e^{-4t}
    β0 - 9β1 = e^{-9t}

Solving yields β0 = (9e^{-4t} - 4e^{-9t})/5 and β1 = (e^{-4t} - e^{-9t})/5, and

    e^{At} = β0 I + β1 A
           = (1/5)[ -3e^{-4t} + 8e^{-9t}        (2/3)(e^{-4t} - e^{-9t})
                    -36(e^{-4t} - e^{-9t})       8e^{-4t} - 3e^{-9t}    ]

The zero-input response is given by [see Eq. (13.41)]

    e^{At}q(0)u(t) = [ (-(16/15)e^{-4t} + (46/15)e^{-9t}) u(t)
                       (-(64/5)e^{-4t} + (69/5)e^{-9t}) u(t)  ]

Note here the presence of u(t), indicating that the response begins at t = 0. The zero-state component is e^{At} * Bx [see Eq. (13.42)], where

    Bx = [1/3; 1]u(t) = [(1/3)u(t); u(t)]

and

    e^{At} * Bx(t) = [ (1/36 + (1/60)e^{-4t} - (2/45)e^{-9t}) u(t)
                       (1/5)(e^{-4t} - e^{-9t}) u(t)            ]

The sum of the zero-input and zero-state components agrees with the total response found in Ex. 13.6.
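The Cayley-Hamilton result e^{At} = β0 I + β1 A can be cross-checked against a direct computation of the matrix exponential. A Python/SciPy sketch (assumed here; the book uses hand computation and MATLAB) with the A of Exs. 13.6 and 13.8:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-12.0, 2/3], [-36.0, -1.0]])

def eAt_closed(t):
    # beta_0 and beta_1 obtained from e^{-4t} and e^{-9t} (roots -4 and -9)
    b0 = (9*np.exp(-4*t) - 4*np.exp(-9*t)) / 5
    b1 = (np.exp(-4*t) - np.exp(-9*t)) / 5
    return b0 * np.eye(2) + b1 * A

t = 0.2
diff = np.abs(eAt_closed(t) - expm(A * t)).max()
```

Because A has distinct eigenvalues, the closed form is exact, so the difference should be at round-off level.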

Substituting φ(t), δ(t), C, D, and B [Eq. (13.32)] into Eq. (13.45), we have

    h(t) = [ 3e^{-t} - 2e^{-2t}       e^{-t} - e^{-2t}
             2e^{-2t}                 e^{-2t}
             -6e^{-t} + 8e^{-2t}      -2e^{-t} + 4e^{-2t} ] u(t)  +  [0  0; 1  0; 0  1] δ(t)

As the reader can verify, the Laplace transform of this equation yields the transfer function matrix H(s) in Eq. (13.34).

13.5 LINEAR TRANSFORMATION OF A STATE VECTOR

In Sec. 13.2, we saw that the state of a system can be specified in several ways. The sets of all possible state variables are related; in other words, if we are given one set of state variables, we should be able to relate it to any other set. We are particularly interested in a linear type of relationship. Let q1, q2, ..., qN and w1, w2, ..., wN be two different sets of state variables specifying the same system. Let these sets be related by linear equations as

    w1 = p11 q1 + p12 q2 + ··· + p1N qN
    w2 = p21 q1 + p22 q2 + ··· + p2N qN
     ⋮
    wN = pN1 q1 + pN2 q2 + ··· + pNN qN

or

    [w1]   [p11  p12  ⋯  p1N] [q1]
    [w2] = [p21  p22  ⋯  p2N] [q2]
    [ ⋮ ]   [ ⋮              ] [ ⋮ ]
    [wN]   [pN1  pN2  ⋯  pNN] [qN]
      w             P           q

Defining the vector w and matrix P as just shown, we obtain the compact matrix representation

    w = Pq                                       (13.46)

and

    q = P⁻¹w

Thus, the state vector q is transformed into another state vector w through the linear transformation in Eq. (13.46).


If we know w, we can determine q from q = P⁻¹w, provided P⁻¹ exists. This is equivalent to saying that P is a nonsingular matrix† (|P| ≠ 0). Thus, if P is a nonsingular matrix, the vector w defined by Eq. (13.46) is also a state vector. Consider the state equation of a system

    q̇ = Aq + Bx

If

    w = Pq

then

    q = P⁻¹w    and    q̇ = P⁻¹ẇ

Hence, the state equation now becomes

    P⁻¹ẇ = AP⁻¹w + Bx

or

    ẇ = (PAP⁻¹)w + PBx = Âw + B̂x                 (13.47)

where

    Â = PAP⁻¹    and    B̂ = PB                    (13.48)

Equation (13.47) is a state equation for the same system, but now it is expressed in terms of the state vector w. The output equation is also modified. Let the original output equation be

    y = Cq + Dx

In terms of the new state variable w, this equation becomes

    y = C(P⁻¹w) + Dx = Ĉw + Dx

where

    Ĉ = CP⁻¹                                     (13.49)

† This condition is equivalent to saying that all N equations in Eq. (13.46) are linearly independent; that is, none of the N equations can be expressed as a linear combination of the remaining equations.


EXAMPLE 13.10 Linear Transformation of the State Vector

The state equations of a certain system are given by

    [q̇1; q̇2] = [0  1; -2  -3][q1; q2] + [1; 2]x

Find the state equations for this system when the new state variables w1 and w2 are given as

    w1 = q1 + q2
    w2 = q1 - q2                                 (13.50)

that is, w = Pq with P = [1  1; 1  -1].

According to Eq. (13.47), the state equation for the state variable w is given by

    ẇ = Âw + B̂x

where [see Eqs. (13.48) and (13.49)]

    Â = PAP⁻¹ = [1  1; 1  -1][0  1; -2  -3][1/2  1/2; 1/2  -1/2] = [-2  0; 3  -1]

and

    B̂ = PB = [1  1; 1  -1][1; 2] = [3; -1]

Therefore,

    [ẇ1; ẇ2] = [-2  0; 3  -1][w1; w2] + [3; -1]x(t)

This is the desired state equation. The solution of this equation requires knowledge of the initial state w(0). This can be obtained from the given initial state q(0) by using Eq. (13.50).

We can obtain the same result with less effort using MATLAB.

>> A = [0 1; -2 -3]; B = [1; 2];
>> P = [1 1; 1 -1];
>> Ahat = P*A*inv(P), Bhat = P*B
Ahat =
    -2     0
     3    -1
Bhat =
     3
    -1
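The same computation in Python/NumPy (a sketch mirroring the MATLAB commands above, assumed as an alternative environment):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [2.0]])
P = np.array([[1.0, 1.0], [1.0, -1.0]])

# Transformed state matrices per Eq. (13.48)
Ahat = P @ A @ np.linalg.inv(P)
Bhat = P @ B
```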


INVARIANCE OF EIGENVALUES

We have seen that the poles of all possible transfer functions of a system are the eigenvalues of the matrix A. If we transform a state vector from q to w, the variables w1, w2, ..., wN are linear combinations of q1, q2, ..., qN and therefore may be considered to be outputs. Hence, the poles of the transfer functions relating w1, w2, ..., wN to the various inputs must also be the eigenvalues of matrix A. On the other hand, the system is also specified by Eq. (13.47). This means that the poles of the transfer functions must be the eigenvalues of Â. Therefore, the eigenvalues of matrix A remain unchanged for the linear transformation of variables represented by Eq. (13.46), and the eigenvalues of matrix A and matrix Â (Â = PAP⁻¹) are identical, implying that the characteristic equations of A and Â are also identical.

This result also can be proved alternately as follows. Consider the matrix P(sI - A)P⁻¹. We have

    P(sI - A)P⁻¹ = sPP⁻¹ - PAP⁻¹ = sI - Â

Taking the determinants of both sides, we obtain

    |P| |sI - A| |P⁻¹| = |sI - Â|

The determinants |P| and |P⁻¹| are reciprocals of each other. Hence,

    |sI - A| = |sI - Â|

This is the desired result. We have shown that the characteristic equations of A and Â are identical. Hence, the eigenvalues of A and Â are identical. In Ex. 13.10, matrix A is given as

    A = [0  1; -2  -3]

The characteristic equation is

    |sI - A| = |s  -1; 2  s+3| = s² + 3s + 2 = 0

Also,

    Â = [-2  0; 3  -1]

and

    |sI - Â| = |s+2  0; -3  s+1| = s² + 3s + 2 = 0

This result verifies that the characteristic equations of A and Â are identical.
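Numerically, the invariance is immediate. A NumPy sketch (assumed here as a MATLAB substitute) with the matrices of Ex. 13.10:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = np.array([[1.0, 1.0], [1.0, -1.0]])
Ahat = P @ A @ np.linalg.inv(P)

# Characteristic polynomials of A and Ahat coincide
pA = np.poly(A)
pAhat = np.poly(Ahat)
```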


13.5.1 Diagonalization of Matrix A

For several reasons, it is desirable to make matrix A diagonal. If A is not diagonal, we can transform the state variables such that the resulting matrix Â is diagonal.† One can show that for any diagonal matrix A, the diagonal elements of this matrix must necessarily be λ1, λ2, ..., λN (the eigenvalues) of the matrix. Consider the diagonal matrix A:

    A = [a1  0   ⋯  0
         0   a2  ⋯  0
         ⋮           ⋮
         0   0   ⋯  aN]

The characteristic equation is given by

    |sI - A| = |(s-a1)  0       ⋯  0
                0       (s-a2)  ⋯  0
                ⋮                   ⋮
                0       0       ⋯  (s-aN)| = 0

or

    (s - a1)(s - a2)···(s - aN) = 0

The nonzero (diagonal) elements of a diagonal matrix are therefore its eigenvalues λ1, λ2, ..., λN. We shall denote the diagonal matrix by a special symbol, Λ:

    Λ = [λ1  0   ⋯  0
         0   λ2  ⋯  0
         ⋮           ⋮
         0   0   ⋯  λN]                          (13.51)

Let us now consider the transformation of the state vector q such that the resulting matrix Â is the diagonal matrix Λ.

Consider the system

    q̇ = Aq + Bx

We shall assume that λ1, λ2, ..., λN, the eigenvalues of A, are distinct (no repeated roots). Let us transform the state vector q into the new state vector z, using the transformation

    z = Pq                                       (13.52)

Then, after the development of Eq. (13.47), we have

    ż = PAP⁻¹z + PBx

We desire the transformation to be such that PAP⁻¹ is the diagonal matrix Λ given by Eq. (13.51), or

    ż = Λz + B̂x                                  (13.53)

Hence,

    PAP⁻¹ = Λ

or

    PA = ΛP                                      (13.54)

We know A and Λ. Equation (13.54) therefore can be solved to determine P.

† In this discussion, we assume distinct eigenvalues. If the eigenvalues are not distinct, we can reduce the matrix to a modified diagonalized (Jordan) form.

EXAMPLE 13.11 Diagonal Form of the State Equations

Find the diagonalized form of the state equations for the system in Ex. 13.10.

In this case,

    A = [0  1; -2  -3]

We found λ1 = -1 and λ2 = -2. Hence,

    Λ = [-1  0; 0  -2]

and Eq. (13.54) becomes

    [-1  0; 0  -2][p11  p12; p21  p22] = [p11  p12; p21  p22][0  1; -2  -3]

Equating the four elements on the two sides, we obtain

    -p11 = -2p12
    -p12 = p11 - 3p12
    -2p21 = -2p22
    -2p22 = p21 - 3p22

The reader will immediately recognize that the first two equations are identical and that the last two equations are identical. Hence, two equations may be discarded, leaving us with only two equations [p11 = 2p12 and p21 = p22] and four unknowns. This observation means that there is no unique solution. There is, in fact, an infinite number of solutions. We can assign any value to p11 and p21 to yield one possible solution.† If p11 = k1 and p21 = k2, then we have p12 = k1/2 and p22 = k2:

    P = [k1  k1/2; k2  k2]

† If, however, we want the state equations in diagonalized form, as in Eq. (13.26), where all the elements of the B̂ matrix are unity, there is a unique solution. The reason is that the equation B̂ = PB, where all the elements of B̂ are unity, imposes additional constraints. In the present example, this condition will yield p11 = 1/2, p12 = 1/4, p21 = 1/3, and p22 = 1/3.

We may assign any values to k1 and k2. For convenience, let k1 = 2 and k2 = 1. This substitution yields

    P = [2  1; 1  1]

The transformed variables [Eq. (13.52)] are

    [z1; z2] = [2  1; 1  1][q1; q2],    that is,    z1 = 2q1 + q2    and    z2 = q1 + q2

This expression relates the new state variables z1 and z2 to the original state variables q1 and q2. The system equation with z as the state vector is given by [see Eq. (13.53)]

    ż = Λz + B̂x

where

    B̂ = PB = [2  1; 1  1][1; 2] = [4; 3]

Hence,

    [ż1; ż2] = [-1  0; 0  -2][z1; z2] + [4; 3]x          (13.55)

or

    ż1 = -z1 + 4x
    ż2 = -2z2 + 3x

Note the distinctive nature of these state equations. Each state equation involves only one variable and therefore can be solved by itself. A general state equation has the derivative of one state variable equal to a linear combination of all state variables. Such is not the case with the diagonalized matrix Λ. Each state variable zi is chosen so that it is uncoupled from the rest of the variables; hence, a system with N eigenvalues is split into N decoupled systems, each with an equation of the form

    żi = λi zi + (input terms)

This fact also can be readily seen from Fig. 13.8a, which is a realization of the system represented by Eq. (13.55). In contrast, consider the original state equations [see Ex. 13.10]

    q̇1 = q2 + x(t)
    q̇2 = -2q1 - 3q2 + 2x(t)

A realization for these equations is shown in Fig. 13.8b. It can be seen from Fig. 13.8a that the states z1 and z2 are decoupled, whereas the states q1 and q2 (Fig. 13.8b) are coupled. It should be remembered that Figs. 13.8a and 13.8b are realizations of the same system.†

† Here, we have only a simulated state equation; the outputs are not shown. The outputs are linear combinations of the state variables (and inputs). Hence, the output equation can be easily incorporated into these diagrams.


Figure 13.8 Two realizations of the second-order system.

MATRIX DIAGONALIZATION VIA MATLAB

The key to diagonalizing matrix A is to determine a matrix P that satisfies PA = ΛP [Eq. (13.54)], where Λ is a diagonal matrix of the eigenvalues of A. This problem is directly related to the classic eigenvalue problem, stated as

    AV = VΛ

where V is a matrix of eigenvectors for A. If we can find V, we can take its inverse to determine P. That is, P = V⁻¹. This relationship is more fully developed in Sec. 13.8. MATLAB's built-in function eig can determine the eigenvectors of a matrix and, therefore, can help us determine a suitable matrix P. Let us demonstrate this approach for the current case.

>> A = [0 1; -2 -3]; B = [1; 2];
>> [V, Lambda] = eig(A);
>> P = inv(V), Lambda, Bhat = P*B
P =
    2.8284    1.4142
    2.2361    2.2361
Lambda =
    -1     0
     0    -2
Bhat =
    5.6569
    6.7082


Therefore,

    z = [z1; z2] = [2.8284  1.4142; 2.2361  2.2361][q1; q2] = Pq

and

    ż = [ż1; ż2] = [-1  0; 0  -2][z1; z2] + [5.6569; 6.7082]x(t) = Λz + B̂x

Recall that neither P nor B̂ is unique, which explains why the MATLAB output does not need to match our previous solution. Still, the MATLAB results do their job and successfully diagonalize matrix A.

13.6 CONTROLLABILITY AND OBSERVABILITY

Consider a diagonalized state-space description of a system

    ż = Λz + B̂x    and    y = Ĉz + Dx            (13.56)

We shall assume that all N eigenvalues λ1, λ2, ..., λN are distinct. The state equations in Eq. (13.56) are of the form

    żm = λm zm + b̂m1 x1 + b̂m2 x2 + ··· + b̂mj xj,        m = 1, 2, ..., N

If b̂m1, b̂m2, ..., b̂mj (the mth row in matrix B̂) are all zero, then

    żm = λm zm

and the variable zm is uncontrollable because zm is not coupled to any of the inputs. Moreover, zm is decoupled from all the remaining (N - 1) state variables because of the diagonalized nature of the variables. Hence, there is no direct or indirect coupling of zm with any of the inputs, and the system is uncontrollable. In contrast, if at least one element in the mth row of B̂ is nonzero, zm is coupled to at least one input and is therefore controllable. Thus, a system with a diagonalized state [Eq. (13.56)] is completely controllable if and only if the matrix B̂ has no row of zero elements. The outputs [see Eq. (13.56)] are of the form

    yi = ĉi1 z1 + ĉi2 z2 + ··· + ĉiN zN + Σ_{m=1}^{j} dim xm,        i = 1, 2, ..., k

If ĉim = 0, then the state zm will not appear in the expression for yi. Since all the states are decoupled because of the diagonalized nature of the equations, the state zm cannot be observed directly or indirectly (through other states) at the output yi. Hence, the mth mode e^{λm t} will not be observed at the output yi. If ĉ1m, ĉ2m, ..., ĉkm (the mth column in matrix Ĉ) are all zero, the state zm will not


be observable at any of the k outputs, and the state zm is unobservable. In contrast, if at least one element in the mth column of Ĉ is nonzero, zm is observable at least at one output. Thus, a system with diagonalized equations of the form in Eq. (13.56) is completely observable if and only if the matrix Ĉ has no column of zero elements. In this discussion, we assumed distinct eigenvalues; for repeated eigenvalues, the modified criteria can be found in the literature [1, 2]. If the state-space description is not in diagonalized form, it may be converted into diagonalized form using the procedure in Ex. 13.11. It is also possible to test for controllability and observability even if the state-space description is in undiagonalized form [1, 2].

EXAMPLE 13.12 Controllability and Observability

Investigate the controllability and observability of the systems in Fig. 13.9.


Figure 13.9 Systems for Ex. 13.12.

In both cases, the state variables are identified as the two integrator outputs, q1 and q2. The state equations for the system in Fig. 13.9a are

    q̇1 = q1 + x
    q̇2 = q1 - q2

and

    y = q1 - 2q2                                 (13.57)


Hence,

    A = [1  0; 1  -1],    B = [1; 0],    C = [1  -2]

and

    |sI - A| = |s-1  0; -1  s+1| = (s - 1)(s + 1)

Therefore,

    λ1 = 1    and    λ2 = -1

and

    Λ = [1  0; 0  -1]

We shall now use the procedure in Sec. 13.5.1 to diagonalize this system. According to Eq. (13.54), we have

    [1  0; 0  -1][p11  p12; p21  p22] = [p11  p12; p21  p22][1  0; 1  -1]

The solution of this equation yields p12 = 0 and p22 = -2p21. Choosing p11 = 1 and p21 = 1, we have

    P = [1  0; 1  -2]    and    B̂ = PB = [1; 1]

All the rows of B̂ are nonzero. Hence, the system is controllable. Also,

    y = Cq = CP⁻¹z = Ĉz

and

    Ĉ = CP⁻¹ = [0  1]

The first column of Ĉ is zero. Hence, the mode z1 (corresponding to λ1 = 1) is unobservable. The system is therefore controllable but not observable. We come to the same conclusion by realizing the system with the diagonalized state variables z1 and z2, whose state equations are

    ż = Λz + B̂x
    y = Ĉz


Using our previous calculations, we have

    ż1 = z1 + x
    ż2 = -z2 + x

and

    y = z2

Figure 13.10a shows a realization of these equations. It is clear that each of the two modes is controllable, but the first mode (corresponding to λ1 = 1) is not observable at the output.


Figure 13.10 Equivalents of the systems in Fig. 13.9.

The state equations for the system in Fig. 13.9b are

    q̇1 = -q1 + x
    q̇2 = q̇1 - q1 + q2 = -2q1 + q2 + x

and

    y = q2                                       (13.58)

Hence,

    A = [-1  0; -2  1],    B = [1; 1],    C = [0  1]

and

    |sI - A| = |s+1  0; 2  s-1| = (s + 1)(s - 1)

so that λ1 = 1, λ2 = -1, and

    Λ = [1  0; 0  -1]

Diagonalizing the matrix, we have

    [1  0; 0  -1][p11  p12; p21  p22] = [p11  p12; p21  p22][-1  0; -2  1]

The solution of this equation yields p11 = -p12 and p22 = 0. Choosing p11 = -1 and p21 = 1, we obtain

    P = [-1  1; 1  0],    B̂ = PB = [0; 1],    and    Ĉ = CP⁻¹ = [1  1]

The first row of B̂ is zero. Hence, the mode corresponding to λ1 = 1 is not controllable. However, since none of the columns of Ĉ vanish, both modes are observable at the output. Hence, the system is observable but not controllable. We reach the same conclusion by realizing the system with the diagonalized state variables z1 and z2. The two state equations are

    ż = Λz + B̂x
    y = Ĉz

Using our previous calculations, we have

    ż1 = z1
    ż2 = -z2 + x

and thus,

    y = z1 + z2

Figure 13.10b shows a realization of these equations. Clearly, each of the two modes is observable at the output, but the mode corresponding to λ1 = 1 is not controllable.


USING MATLAB TO DETERMINE CONTROLLABILITY AND OBSERVABILITY

As demonstrated in Ex. 13.11, we can use MATLAB's eig function to determine the matrix P that will diagonalize A. We can then use P to determine B̂ and Ĉ, from which we can determine the controllability and observability of a system. Let us demonstrate the process for the two present systems. First, let us use MATLAB to compute B̂ and Ĉ for the system in Fig. 13.9a.

>> A = [1 0; 1 -1]; B = [1; 0]; C = [1 -2];
>> [V, Lambda] = eig(A); P = inv(V); Bhat = P*B, Chat = C*inv(P)
Bhat =
   -0.5000
    1.1180
Chat =
    -2     0

Since au the rows of Bare nonzero, the system is controllable. However, one column of Cis zero, so one mode is unobservable. Next, let us use MATLAB to compute Band Cfor the system in Fig. 13.9b. >> >>

A= (-1 0;-2 1]; B = [1; 1]; C = [O 1]; [V, Lambda]= eig(A); P=inv(V); Bhat= P•B, Chat= C•inv(P) Bhat = 0 1.4142 Chat= 1.0000 0.7071

One of the rows of Bis zero, so one mode is uncontrollable. Since all of the columns of Care nonzero, the system is observable. As expected, the MATLAB results confirm our earlier conclusions regarding the controllability and observability of the systems of Fig. 13.9.
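The diagonalization test is easy to replicate outside MATLAB. Below is a minimal pure-Python sketch (hypothetical helper names, limited to 2x2 systems with real, distinct eigenvalues); because the eigenvectors are left unnormalized and the mode ordering may differ from MATLAB's, the entries of B̂ and Ĉ differ by scale factors, but the zero/nonzero pattern that decides controllability and observability is the same.

```python
# Diagonalization-based mode check for 2x2 state-space systems.
# Hypothetical helper names; assumes real, distinct eigenvalues.

def eig2(A):
    """Eigenvalues and (unnormalized) eigenvectors of a 2x2 matrix."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    lams = [(tr - disc) / 2, (tr + disc) / 2]
    vecs = []
    for lam in lams:
        if abs(b) > 1e-9:              # first row of (A - lam*I)v = 0
            vecs.append((b, lam - a))
        elif abs(lam - a) > 1e-9:      # forces v1 = 0
            vecs.append((0.0, 1.0))
        elif abs(c) > 1e-9:            # use the second row instead
            vecs.append((lam - d, c))
        else:
            vecs.append((1.0, 0.0))
    return lams, vecs

def modes(A, B, C):
    """Return (Bhat, Chat): zero rows of Bhat flag uncontrollable modes,
    zero columns of Chat flag unobservable modes (same order as eig2)."""
    _, vecs = eig2(A)
    V = [[vecs[0][0], vecs[1][0]],
         [vecs[0][1], vecs[1][1]]]                # eigenvectors as columns
    detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    P = [[ V[1][1] / detV, -V[0][1] / detV],      # P = inv(V)
         [-V[1][0] / detV,  V[0][0] / detV]]
    Bhat = [P[0][0] * B[0] + P[0][1] * B[1],
            P[1][0] * B[0] + P[1][1] * B[1]]      # Bhat = P*B
    Chat = [C[0] * V[0][0] + C[1] * V[1][0],
            C[0] * V[0][1] + C[1] * V[1][1]]      # Chat = C*inv(P) = C*V
    return Bhat, Chat
```

Applied to Fig. 13.9a (A = [1 0; 1 -1], B = [1; 0], C = [1 -2]) every row of B̂ is nonzero but one column of Ĉ vanishes; for Fig. 13.9b the pattern reverses, matching the MATLAB runs above.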

13.6.1 Inadequacy of the Transfer Function Description of a System

Example 13.12 demonstrates the inadequacy of the transfer function to describe an LTI system in general. The systems in Figs. 13.9a and 13.9b both have the same transfer function

H(s) = 1/(s + 1)

Yet the two systems are very different. Their true nature is revealed in Figs. 13.10a and 13.10b, respectively. Both systems are unstable, but their transfer function H(s) = 1/(s + 1) does not give any hint of it. Moreover, the systems are very different from the viewpoint of controllability and observability. The system in Fig. 13.9a is controllable but not observable, whereas the system in Fig. 13.9b is observable but not controllable.

The transfer function description of a system looks at a system only from the input and output terminals. Consequently, the transfer function description can specify only the part of the system that is coupled to the input and the output terminals. From Figs. 13.10a and 13.10b, we see that in both cases only a part of the system with transfer function H(s) = 1/(s + 1) is coupled to the input and the output terminals. This is why both systems have the same transfer function H(s) = 1/(s + 1). The state variable description [Eqs. (13.57) and (13.58)], on the other hand, contains all the information about these systems to describe them completely. The reason is that the state variable description is an internal description, not the external description obtained from the system behavior at external terminals.

Apparently, the transfer function fails to describe these systems completely because the transfer functions of these systems have a common factor s - 1 in the numerator and denominator; this common factor is canceled out in the systems in Fig. 13.9, with a consequent loss of the information. Such a situation occurs when a system is uncontrollable and/or unobservable. If a system is both controllable and observable (which is the case with most practical systems), the transfer function describes the system completely. In such a case, the internal and external descriptions are equivalent.

13.7 STATE-SPACE ANALYSIS OF DISCRETE-TIME SYSTEMS

We have shown that an Nth-order differential equation can be expressed in terms of N first-order differential equations. In the following analogous procedure, we show that a general Nth-order difference equation can be expressed in terms of N first-order difference equations. Consider the z-transfer function

H[z] = (b0·zᴺ + b1·zᴺ⁻¹ + ··· + bN−1·z + bN) / (zᴺ + a1·zᴺ⁻¹ + ··· + aN−1·z + aN)

The input x[n] and the output y[n] of this system are related by the difference equation

y[n + N] + a1·y[n + N - 1] + ··· + aN·y[n] = b0·x[n + N] + b1·x[n + N - 1] + ··· + bN·x[n]

The DFII realization of this equation is illustrated in Fig. 13.11. Signals appearing at the outputs of the N delay elements are denoted by q1[n], q2[n], ..., qN[n]. The input of the first delay is qN[n + 1]. We can now write N equations, one at the input of each delay:

q1[n + 1] = q2[n]
q2[n + 1] = q3[n]
...
qN−1[n + 1] = qN[n]
qN[n + 1] = -aN·q1[n] - aN−1·q2[n] - ··· - a1·qN[n] + x[n]    (13.59)

and

y[n] = bN·q1[n] + bN−1·q2[n] + ··· + b1·qN[n] + b0·qN+1[n]

We can eliminate qN+1[n] = qN[n + 1] from this equation by using the last equation in Eq. (13.59) to yield

y[n] = (bN - b0·aN)q1[n] + (bN−1 - b0·aN−1)q2[n] + ··· + (b1 - b0·a1)qN[n] + b0·x[n]
     = b̂N·q1[n] + b̂N−1·q2[n] + ··· + b̂1·qN[n] + b0·x[n]    (13.60)

Figure 13.11 Direct form II realization of an Nth-order, discrete-time system.

where b̂i = bi - b0·ai. Equation (13.59) shows N first-order difference equations in N variables q1[n], q2[n], ..., qN[n]. These variables should immediately be recognized as state variables, since the specification of the initial values of these variables in Fig. 13.11 will uniquely determine the response y[n] for a given x[n]. Thus, Eq. (13.59) represents the state equations, and Eq. (13.60) is the output equation. In matrix form, we can write these equations as

q[n + 1] = Aq[n] + Bx[n]    (13.61)

with

A = [0 1 0 ··· 0 0; 0 0 1 ··· 0 0; ··· ; 0 0 0 ··· 0 1; -aN -aN−1 -aN−2 ··· -a2 -a1],    B = [0; 0; ··· ; 0; 1]

and

y[n] = Cq[n] + Dx[n]    (13.62)

with

C = [b̂N b̂N−1 ··· b̂1],    D = b0

In general,

q[n + 1] = Aq[n] + Bx[n]
y[n] = Cq[n] + Dx[n]

Here, we have represented a discrete-time system with state equations for the DFII form. There are several other possible representations, as discussed in Sec. 13.3. We may, for example, use the cascade, parallel, or transposed DFII forms to realize the system, or we may use some linear transformation of the state vector to realize other forms. In all cases, the output of each delay element qualifies as a state variable. We then write the equation at the input of each delay element. The N equations thus obtained are the N state equations.

13.7.1 Solution in State Space

Consider the state equation

q[n + 1] = Aq[n] + Bx[n]

From this equation, it follows that

q[n] = Aq[n - 1] + Bx[n - 1]

and

q[n - 1] = Aq[n - 2] + Bx[n - 2]
q[n - 2] = Aq[n - 3] + Bx[n - 3]
...
q[1] = Aq[0] + Bx[0]

Substituting the expression for q[n - 1] into that for q[n], we obtain

q[n] = A²q[n - 2] + ABx[n - 2] + Bx[n - 1]

Substituting the expression for q[n - 2] in this equation, we obtain

q[n] = A³q[n - 3] + A²Bx[n - 3] + ABx[n - 2] + Bx[n - 1]

Continuing in this way, we obtain

q[n] = Aⁿq[0] + Aⁿ⁻¹Bx[0] + Aⁿ⁻²Bx[1] + ··· + Bx[n - 1]
     = Aⁿq[0] + Σ_{m=0}^{n-1} Aⁿ⁻¹⁻ᵐBx[m]

The upper limit of this summation is nonnegative; hence n ≥ 1, and the summation is recognized as the convolution sum


Consequently,

q[n] = Aⁿq[0] + Aⁿ⁻¹u[n - 1] * Bx[n]    (13.63)

where the first term is the zero-input component and the second is the zero-state component, and

y[n] = Cq[n] + Dx[n]
     = CAⁿq[0] + Σ_{m=0}^{n-1} CAⁿ⁻¹⁻ᵐBx[m] + Dx[n]    (13.64)

In Sec. 13.1.3, we showed that

Aⁿ = β0·I + β1·A + β2·A² + ··· + βN−1·Aᴺ⁻¹    (13.65)

where (assuming N distinct eigenvalues of A)

[β0; β1; ··· ; βN−1] = [1 λ1 λ1² ··· λ1ᴺ⁻¹; 1 λ2 λ2² ··· λ2ᴺ⁻¹; ··· ; 1 λN λN² ··· λNᴺ⁻¹]⁻¹ [λ1ⁿ; λ2ⁿ; ··· ; λNⁿ]    (13.66)

and λ1, λ2, ..., λN are the N eigenvalues of A. We can also determine Aⁿ from the z-transform formula Aⁿ = Z⁻¹[(I - z⁻¹A)⁻¹], which will be derived later in Eq. (13.70).
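In the 2x2 case, Eq. (13.66) reduces to solving a two-equation Vandermonde system for β0 and β1, after which Eq. (13.65) gives Aⁿ = β0·I + β1·A. A small sketch (hypothetical function name; real, distinct eigenvalues assumed):

```python
# A^n via Eqs. (13.65)-(13.66) for a 2x2 matrix with distinct real eigenvalues.
def matrix_power_2x2(A, n):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5            # assumes distinct real roots
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    beta1 = (l1**n - l2**n) / (l1 - l2)          # solves the Vandermonde system
    beta0 = l1**n - beta1 * l1
    return [[beta0 + beta1 * a, beta1 * b],      # beta0*I + beta1*A
            [beta1 * c, beta0 + beta1 * d]]
```

Applied to a matrix with eigenvalues 1/2 and 1/3, such as the A of the next example, this reproduces the closed-form entries of Eq. (13.67) term by term.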

EXAMPLE 13.13 State-Space Analysis of a Discrete-Time System

Give a state-space description of the system in Fig. 13.12. Find the output y[n] if the input x[n] = u[n] and the initial conditions are q1[0] = 2 and q2[0] = 3.

Figure 13.12 System for Ex. 13.13.

Recognizing that q2[n] = q1[n + 1], the state equations are [see Eqs. (13.61) and (13.62)]

[q1[n + 1]; q2[n + 1]] = [0 1; -1/6 5/6][q1[n]; q2[n]] + [0; 1]x[n]

and

y[n] = [-1 5][q1[n]; q2[n]]

To find the solution [Eq. (13.64)], we must first determine Aⁿ. The characteristic equation of A is

|λI - A| = |λ, -1; 1/6, λ - 5/6| = λ² - (5/6)λ + 1/6 = (λ - 1/3)(λ - 1/2) = 0

Hence, λ1 = 1/3 and λ2 = 1/2 are the eigenvalues of A, and [see Eq. (13.65)]

Aⁿ = β0·I + β1·A

where [see Eq. (13.66)]

[β0; β1] = [1 1/3; 1 1/2]⁻¹ [(1/3)ⁿ; (1/2)ⁿ] = [3(3)⁻ⁿ - 2(2)⁻ⁿ; -6(3)⁻ⁿ + 6(2)⁻ⁿ]

and

Aⁿ = [3(3)⁻ⁿ - 2(2)⁻ⁿ][1 0; 0 1] + [-6(3)⁻ⁿ + 6(2)⁻ⁿ][0 1; -1/6 5/6]
   = [3(3)⁻ⁿ - 2(2)⁻ⁿ, -6(3)⁻ⁿ + 6(2)⁻ⁿ; (3)⁻ⁿ - (2)⁻ⁿ, -2(3)⁻ⁿ + 3(2)⁻ⁿ]    (13.67)

We can now determine the state vector q[n] from Eq. (13.63). Since we are interested in the output y[n], we shall use Eq. (13.64) directly. Note that

CAⁿ = [2(3)⁻ⁿ - 3(2)⁻ⁿ, -4(3)⁻ⁿ + 9(2)⁻ⁿ]

and the zero-input response is CAⁿq[0], with

q[0] = [2; 3]

Hence, the zero-input response is

CAⁿq[0] = 21(2)⁻ⁿ - 8(3)⁻ⁿ

The zero-state component is given by the convolution sum of CAⁿ⁻¹u[n - 1] and Bx[n]. We can use the shifting property of the convolution sum [Eq. (9.19)] to obtain the zero-state component by finding the convolution sum of CAⁿu[n] and Bx[n] and then replacing n with n - 1 in the result. We use this procedure because the convolution sums are listed in Table 9.1 for functions of the type x[n]u[n], rather than x[n]u[n - 1].

CAⁿu[n] * Bx[n] = [2(3)⁻ⁿ - 3(2)⁻ⁿ, -4(3)⁻ⁿ + 9(2)⁻ⁿ] * [0; u[n]]
                = -4(3)⁻ⁿ * u[n] + 9(2)⁻ⁿ * u[n]

Using Table 9.1 (pair 4), we obtain

CAⁿu[n] * Bx[n] = -4[(1 - 3⁻⁽ⁿ⁺¹⁾)/(1 - 1/3)]u[n] + 9[(1 - 2⁻⁽ⁿ⁺¹⁾)/(1 - 1/2)]u[n]
                = [12 + 6(3)⁻⁽ⁿ⁺¹⁾ - 18(2)⁻⁽ⁿ⁺¹⁾]u[n]

Replacing n with n - 1 gives the zero-state component [12 + 6(3)⁻ⁿ - 18(2)⁻ⁿ]u[n - 1], and adding the zero-input component yields the total response

y[n] = [21(2)⁻ⁿ - 8(3)⁻ⁿ] + [12 + 6(3)⁻ⁿ - 18(2)⁻ⁿ]
     = 12 - 2(1/3)ⁿ + 3(1/2)ⁿ,    n ≥ 0    (13.68)

We can verify this result by simulating the system in MATLAB.

>> A = [0 1; -1/6 5/6]; B = [0; 1]; C = [-1 5]; D = 0;
>> N = 25; n = (0:N); x = ones(1,N+1); q0 = [2;3];
>> sys = ss(A,B,C,D,-1);      % discrete-time state-space model
>> [y,q] = lsim(sys,x,n,q0);  % simulate output and state vector
>> clf; stem(n,y,'k.');
>> xlabel('n'); ylabel('y[n]'); axis([-.5 25.5 11.5 13.5]);

The MATLAB results, shown in Fig. 13.13, exactly align with the analytical solution derived earlier. Also notice that the zero-input and zero-state responses can be separately obtained using the same code and respectively setting either x or q0 to zero.

Figure 13.13 Graphical solution to Ex. 13.13 by MATLAB simulation.

13.7.2 The z-Transform Solution

The z-transform of Eq. (13.61) is given by

zQ[z] - zq[0] = AQ[z] + BX[z]

Therefore,

(zI - A)Q[z] = zq[0] + BX[z]

and

Q[z] = (zI - A)⁻¹zq[0] + (zI - A)⁻¹BX[z]
     = (I - z⁻¹A)⁻¹q[0] + (zI - A)⁻¹BX[z]

Hence,

q[n] = Z⁻¹[(I - z⁻¹A)⁻¹]q[0] + Z⁻¹[(zI - A)⁻¹BX[z]]    (13.69)

where the first term is the zero-input response and the second is the zero-state response. A comparison of Eq. (13.69) with Eq. (13.63) shows that

Aⁿ = Z⁻¹[(I - z⁻¹A)⁻¹]    (13.70)

The output equation is given by

Y[z] = CQ[z] + DX[z]
     = C[(I - z⁻¹A)⁻¹q[0] + (zI - A)⁻¹BX[z]] + DX[z]
     = C(I - z⁻¹A)⁻¹q[0] + [C(zI - A)⁻¹B + D]X[z]
     = C(I - z⁻¹A)⁻¹q[0] + H[z]X[z]    (13.71)

where the first term is the zero-input response, the second is the zero-state response, and

H[z] = C(zI - A)⁻¹B + D    (13.72)

Note that H[z] is the transfer function matrix of the system, and Hij[z], the ijth element of H[z], is the transfer function relating the output yi[n] to the input xj[n]. If we define h[n] as

h[n] = Z⁻¹[H[z]]

then h[n] represents the unit impulse response matrix of the system. Thus, hij[n], the ijth element of h[n], represents the zero-state response yi[n] when the input xj[n] = δ[n] and all other inputs are zero.

EXAMPLE 13.14 z-Transform Solution to State Equations

Use the z-transform to find the response y[n] for the system in Ex. 13.13.

According to Eq. (13.71), with q[0] = [2; 3] and X[z] = z/(z - 1),

Y[z] = C(I - z⁻¹A)⁻¹q[0] + C(zI - A)⁻¹BX[z]
     = (13z² - 3z)/(z² - (5/6)z + 1/6) + (5z - 1)z/[(z - 1)(z² - (5/6)z + 1/6)]
     = [-8z/(z - 1/3) + 21z/(z - 1/2)] + [12z/(z - 1) + 6z/(z - 1/3) - 18z/(z - 1/2)]

Therefore,

y[n] = [-8(1/3)ⁿ + 21(1/2)ⁿ] + [12 + 6(1/3)ⁿ - 18(1/2)ⁿ]
     = 12 - 2(1/3)ⁿ + 3(1/2)ⁿ,    n ≥ 0

where the first bracketed term is the zero-input response and the second is the zero-state response, in agreement with Ex. 13.13.

LINEAR TRANSFORMATION, CONTROLLABILITY, AND OBSERVABILITY

The procedure for linear transformation is parallel to that in the continuous-time case (Sec. 13.5). If w is the transformed-state vector given by

w = Pq

then

w[n + 1] = PAP⁻¹w[n] + PBx[n]

and

y[n] = CP⁻¹w[n] + Dx[n]

Controllability and observability may be investigated by diagonalizing the matrix, as explained in Sec. 13.5.1.

13.8 MATLAB: TOOLBOXES AND STATE-SPACE ANALYSIS

The preceding MATLAB sections provide a comprehensive introduction to the basic MATLAB environment. However, MATLAB also offers a wide range of toolboxes that perform specialized tasks. Once installed, toolbox functions operate no differently from ordinary MATLAB functions. Although toolboxes are purchased at extra cost, they save time and offer the convenience of predefined functions. It would take significant effort to duplicate a toolbox's functionality by using custom user-defined programs. Three toolboxes are particularly appropriate in the study of signals and systems: the control system toolbox, the signal-processing toolbox, and the symbolic math toolbox. Functions from these toolboxes have been utilized throughout the text in the MATLAB examples as well as certain end-of-chapter problems. This section provides a more formal introduction to a selection of functions, both standard and toolbox, that are appropriate for state-space problems.

13.8.1 z-Transform Solutions to Discrete-Time, State-Space Systems

As with continuous-time systems, it is often more convenient to solve discrete-time systems in the transform domain rather than in the time domain. As given in Ex. 13.13, consider the state-space description of the system shown in Fig. 13.12.

[q1[n + 1]; q2[n + 1]] = [0 1; -1/6 5/6][q1[n]; q2[n]] + [0; 1]x[n]

and

y[n] = [-1 5][q1[n]; q2[n]]

We are interested in the output y[n] in response to the input x[n] = u[n] with initial conditions q1[0] = 2 and q2[0] = 3. To describe this system, the state matrices A, B, C, and D are first defined.

>> A = [0 1; -1/6 5/6]; B = [0; 1]; C = [-1 5]; D = 0;

Additionally, the vector of initial conditions is defined.

>> q_0 = [2;3];

In the transform domain, the solution to the state equation is

Q[z] = (I - z⁻¹A)⁻¹q[0] + (zI - A)⁻¹BX[z]    (13.73)

The solution is separated into two parts: the zero-input response and the zero-state response. MATLAB's symbolic toolbox makes possible a symbolic representation of Eq. (13.73). First, a symbolic variable z needs to be defined.

>> z = sym('z');

The sym command is used to construct symbolic variables, objects, and numbers. Typing whos confirms that z is indeed a symbolic object. The syms command is a shorthand command for constructing symbolic objects. For example, syms z s is equivalent to the two instructions z = sym('z'); and s = sym('s');. Next, a symbolic expression for X[z] needs to be constructed for the unit step input, x[n] = u[n]. The z-transform is computed by means of the ztrans command.

>> X = ztrans(sym('1'))
X = z/(z-1)

Several comments are in order. First, the ztrans command assumes a causal signal. For n ≥ 0, u[n] has a constant value of 1. Second, the argument of ztrans needs to be a symbolic expression, even if the expression is a constant. Thus, a symbolic one sym('1') is required. Also note that continuous-time systems use Laplace transforms rather than z-transforms. In such cases, the laplace command replaces the ztrans command. Construction of Q[z] is now trivial.

>> Q = inv(eye(2)-z^(-1)*A)*q_0 + inv(z*eye(2)-A)*B*X
Q =
(18*z)/(6*z^2 - 5*z + 1) + (2*z*(6*z - 5))/(6*z^2 - 5*z + 1) + (6*z)/((z - 1)*(6*z^2 - 5*z + 1))
(18*z^2)/(6*z^2 - 5*z + 1) - (2*z)/(6*z^2 - 5*z + 1) + (6*z^2)/((z - 1)*(6*z^2 - 5*z + 1))

Unfortunately, not all MATLAB functions work with symbolic objects. Still, the symbolic toolbox overloads many standard MATLAB functions, such as inv, to work with symbolic objects. Recall that overloaded functions have identical names but different behavior; proper function selection is typically determined by context. The expression Q is somewhat unwieldy. The simplify command uses various algebraic techniques to simplify the result.

>> Q = simplify(Q)
Q =
-(2*z*(-6*z^2 + 2*z + 1))/(6*z^3 - 11*z^2 + 6*z - 1)
 (2*z*(9*z^2 - 7*z + 1))/(6*z^3 - 11*z^2 + 6*z - 1)

The resulting expression is mathematically equivalent to the original but notationally more compact. Since D = 0, the output Y[z] is given by Y[z] = CQ[z].

>> Y = simplify(C*Q)
Y = (6*z*(13*z^2 - 11*z + 2))/(6*z^3 - 11*z^2 + 6*z - 1)

The corresponding time-domain expression is obtained by using the inverse z-transform command iztrans.

>> y = iztrans(Y)
y = 3*(1/2)^n - 2*(1/3)^n + 12

Like ztrans, the iztrans command assumes a causal signal, so the result implies multiplication by a unit step. That is, the system output is y[n] = (3(1/2)ⁿ - 2(1/3)ⁿ + 12)u[n], which is equivalent to Eq. (13.68) derived in Ex. 13.13. Continuous-time systems use inverse Laplace transforms rather than inverse z-transforms. In such cases, the ilaplace command therefore replaces the iztrans command. Following a similar procedure, it is a simple matter to compute the zero-input response yzir[n]:

>> y_zir = iztrans(simplify(C*inv(eye(2)-z^(-1)*A)*q_0))
y_zir = 21*(1/2)^n - 8*(1/3)^n

The zero-state response is given by

>> y_zsr = y - y_zir
y_zsr = 6*(1/3)^n - 18*(1/2)^n + 12

Typing iztrans(simplify(C*inv(z*eye(2)-A)*B*X)) produces the same result. MATLAB plotting functions, such as plot and stem, do not directly support symbolic expressions. By using the subs command, however, it is easy to replace a symbolic variable with a vector of desired values.

>> n = (0:25);
>> stem(n,subs(y,n),'k.');
>> xlabel('n'); ylabel('y[n]'); axis([-.5 25.5 11.5 13.5]);

Figure 13.14 shows the results, which are equivalent to the results obtained in Ex. 13.13. Although there are plotting commands in the symbolic math toolbox, such as ezplot, that plot symbolic expressions, these plotting routines lack the flexibility needed to satisfactorily plot discrete-time functions.

Figure 13.14 Output y[n] computed by using the symbolic math toolbox.


13.8.2 Transfer Functions from State-Space Representations

A system's transfer function provides a wealth of useful information. From Eq. (13.72), the transfer function for the system described in Ex. 13.13 is

>> H = collect(simplify(C*inv(z*eye(2)-A)*B+D))
H = (30*z - 6)/(6*z^2 - 5*z + 1)

It is also possible to determine the numerator and denominator transfer function coefficients from a state-space model by using the signal-processing toolbox function ss2tf.

>> [num,den] = ss2tf(A,B,C,D)
num =
         0    5.0000   -1.0000
den =
    1.0000   -0.8333    0.1667

The denominator of H[z] provides the characteristic polynomial

γ² - (5/6)γ + 1/6

Equivalently, the characteristic polynomial is the determinant of (zI - A).

>> syms gamma; char_poly = subs(det(z*eye(2)-A),z,gamma)
char_poly = gamma^2 - (5*gamma)/6 + 1/6

Here, the subs command replaces the symbolic variable z with the desired symbolic variable gamma. The roots command does not accommodate symbolic expressions. Thus, the sym2poly command converts the symbolic expression into a polynomial coefficient vector suitable for the roots command.

>> roots(sym2poly(char_poly))
ans =
    0.5000
    0.3333

Taking the inverse z-transform of H[z] yields the impulse response h[n].

>> h = iztrans(H)
h = 18*(1/2)^n - 12*(1/3)^n - 6*kroneckerDelta(n, 0)

As suggested by the characteristic roots, the characteristic modes of the system are (1/2)ⁿ and (1/3)ⁿ. Notice that the symbolic math toolbox represents δ[n] as kroneckerDelta(n, 0). In general, δ[n - a] is represented as kroneckerDelta(n - a, 0). This notation is frequently encountered. Consider, for example, delaying the input x[n] = u[n] by 2, x[n - 2] = u[n - 2]. In the transform domain, this is equivalent to z⁻²X[z]. Taking the inverse z-transform of z⁻²X[z] yields

>> iztrans(z^(-2)*X)
ans = 1 - kroneckerDelta(n, 0) - kroneckerDelta(n - 1, 0)

That is, MATLAB represents the delayed unit step u[n - 2] as (1 - δ[n] - δ[n - 1])u[n]. The transfer function also permits convenient calculation of the zero-state response.

>> y_zsr = iztrans(H*X)
y_zsr = 6*(1/3)^n - 18*(1/2)^n + 12

The result agrees with previous calculations.
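The same zero-state response can be cross-checked as a convolution sum, y_zsr[n] = Σ_{m=0}^{n} h[m]x[n - m]; with x[n] = u[n] this is just a running sum of h[m]. A quick numerical sketch using the two closed forms obtained above:

```python
# Cross-check: convolving h[n] with a unit step reproduces y_zsr[n].
def h(n):
    return 18 * (1/2)**n - 12 * (1/3)**n - (6 if n == 0 else 0)

def y_zsr(n):
    return 6 * (1/3)**n - 18 * (1/2)**n + 12

for n in range(25):
    running_sum = sum(h(m) for m in range(n + 1))  # x[n-m] = 1 for a unit step
    assert abs(running_sum - y_zsr(n)) < 1e-9
```

The assertion passing for every n confirms that the impulse response and the zero-state response computed by the symbolic toolbox are mutually consistent.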

13.8.3 Controllability and Observability of Discrete-Time Systems

In their controllability and observability, discrete-time systems are analogous to continuous-time systems. For example, consider the LTID system described by the constant coefficient difference equation

y[n] + (5/6)y[n - 1] + (1/6)y[n - 2] = x[n] + (1/2)x[n - 1]

Figure 13.15 illustrates the direct form II (DFII) realization of this system. The system input is x[n], the system output is y[n], and the outputs of the delay blocks are designated as state variables q1[n] and q2[n]. The corresponding state and output equations (see Prob. 13.8-1) are

Q[n + 1] = [q1[n + 1]; q2[n + 1]] = [0 1; -1/6 -5/6][q1[n]; q2[n]] + [0; 1]x[n] = AQ[n] + Bx[n]

and

y[n] = [-1/6 -1/3][q1[n]; q2[n]] + 1·x[n] = CQ[n] + Dx[n]

To describe this system in MATLAB, the state matrices A, B, C, and D are first defined. >>

A = [O 1 ;- 1/6 -5/6]; B

=

[O; 1]; C

=

[-1/6 -1/3]; D

= 1;

To assess the controllability and observability of this system, the state matrix A needs to be diagonalized.† As shown in Eq. (13.54), this requires a transformation matrix P such that

PA = ΛP    (13.74)

Figure 13.15 Direct form II realization of y[n] + (5/6)y[n - 1] + (1/6)y[n - 2] = x[n] + (1/2)x[n - 1].

† This approach requires that the state matrix A have unique eigenvalues. Systems with repeated roots require that state matrix A be transformed into a modified diagonal form, also called the Jordan form. The MATLAB function jordan is used in these cases.

where Λ is a diagonal matrix containing the unique eigenvalues of A. Recall, the transformation matrix P is not unique. To determine a matrix P, it is helpful to review the eigenvalue problem. Mathematically, an eigendecomposition of A is expressed as

AV = VΛ

where V is a matrix of eigenvectors and Λ is a diagonal matrix of eigenvalues. Pre- and post-multiplying both sides of this equation by V⁻¹ yields

V⁻¹AVV⁻¹ = V⁻¹VΛV⁻¹

Simplification yields

V⁻¹A = ΛV⁻¹    (13.75)

Comparing Eqs. (13.74) and (13.75), we see that a suitable transformation matrix P is given by an inverse eigenvector matrix V⁻¹. The eig command is used to verify that A has the required distinct eigenvalues as well as compute the needed eigenvector matrix V.

>> [V,Lambda] = eig(A)
V =
    0.9487   -0.8944
   -0.3162    0.4472
Lambda =
   -0.3333         0
         0   -0.5000

Since the diagonal elements of Lambda are all unique, a transformation matrix P is given by

>> P = inv(V);

The transformed state matrices A= PAP- 1, B= PB, and C= cp- 1 are easily computed by using transformation matrix P. Notice that matrix Dis unaffected by state variable transformations. >>

Ahat Ahat

=

P*A*inv(P), Bhat= P*B, Chat = C*inv(P)

=

- 0.3333 0.0000

-0.0000 -0.5000

Bhat= 6.3246 6.7082

Chat= -0.0527

-0.0000

Th~ proper operation of Pis verified by the correct diagonalization of A , A= A . Since no row of B is zero, the system is controllable. Since, however, at least one column of C is zero, the system is not observable. These characteristics are no coincidence. The DFII realization, which is more descriptively called the controller canonical form, is always controllable but not always observable.

Figure 13.16 Transposed direct form II realization of y[n] + (5/6)y[n - 1] + (1/6)y[n - 2] = x[n] + (1/2)x[n - 1].

As a second example, consider the same system realized using the transposed direct form II structure (TDFII), as shown in Fig. 13.16. The system input is x[n], the system output is y[n], and the outputs of the delay blocks are designated as state variables v1[n] and v2[n]. The corresponding state and output equations (see Prob. 13.8-2) are

V[n + 1] = [v1[n + 1]; v2[n + 1]] = [0 -1/6; 1 -5/6][v1[n]; v2[n]] + [-1/6; -1/3]x[n] = AV[n] + Bx[n]

and

y[n] = [0 1][v1[n]; v2[n]] + 1·x[n] = CV[n] + Dx[n]

To describe this system in MATLAB, the state matrices A, B, C, and D are defined.

>> A = [0 -1/6; 1 -5/6]; B = [-1/6; -1/3]; C = [0 1]; D = 1;

To diagonalize A, a transformation matrix P is created.

>> [V,Lambda] = eig(A)
V =
    0.4472    0.3162
    0.8944    0.9487
Lambda =
   -0.3333         0
         0   -0.5000

The characteristic modes of a system do not depend on implementation, so the eigenvalues of the DFII and TDFII realizations are the same. However, the eigenvectors of the two realizations are quite different. Since the transformation matrix P depends on the eigenvectors, different realizations can possess different observability and controllability characteristics. Using transformation matrix P, the transformed state matrices Â = PAP⁻¹, B̂ = PB, and Ĉ = CP⁻¹ are computed.

>> P = inv(V);
>> Ahat = P*A*inv(P), Bhat = P*B, Chat = C*inv(P)
Ahat =
   -0.3333    0.0000
    0.0000   -0.5000
Bhat =
   -0.3727
   -0.0000
Chat =
    0.8944    0.9487

Again, the proper operation of P is verified by the correct diagonalization of A, Â = Λ. Since no column of Ĉ is zero, the system is observable. However, at least one row of B̂ is zero, and therefore the system is not controllable. The TDFII realization, which is more descriptively called the observer canonical form, is always observable but not always controllable. It is interesting to note that the properties of controllability and observability are influenced by the particular realization of a system.
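A standard alternative to the diagonalization route used here (and what the control system toolbox functions ctrb and obsv construct) is the rank test: a second-order realization is controllable iff [B AB] is nonsingular and observable iff [C; CA] is nonsingular. A sketch in exact rational arithmetic for the two realizations above:

```python
# Rank tests for the DFII and TDFII realizations of
# y[n] + (5/6)y[n-1] + (1/6)y[n-2] = x[n] + (1/2)x[n-1].
from fractions import Fraction as F

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def controllable(A, B):
    AB = [A[0][0]*B[0] + A[0][1]*B[1], A[1][0]*B[0] + A[1][1]*B[1]]
    return det2([[B[0], AB[0]], [B[1], AB[1]]]) != 0   # rank of [B AB]

def observable(A, C):
    CA = [C[0]*A[0][0] + C[1]*A[1][0], C[0]*A[0][1] + C[1]*A[1][1]]
    return det2([C, CA]) != 0                          # rank of [C; CA]

A_dfii  = [[F(0), F(1)],     [F(-1, 6), F(-5, 6)]]    # controller canonical
A_tdfii = [[F(0), F(-1, 6)], [F(1),     F(-5, 6)]]    # observer canonical
```

With B = [0; 1] and C = [-1/6 -1/3], the DFII form tests controllable but not observable; with B = [-1/6; -1/3] and C = [0 1], the TDFII form tests observable but not controllable, matching the diagonalization results above.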

13.8.4 Matrix Exponentiation and the Matrix Exponential

Matrix exponentiation is important to many problems, including the solution of discrete-time state-space equations. Equation (13.63), for example, shows that the state response requires matrix exponentiation, Aⁿ. For a square A and specific n, MATLAB happily returns Aⁿ by using the ^ operator. From the system in Ex. 13.13 and n = 3, we have

>> A = [0 1; -1/6 5/6]; n = 3; A^n
ans =
   -0.1389    0.5278
   -0.0880    0.3009

The same result is also obtained by typing A*A*A. Often, it is useful to solve Aⁿ symbolically. Noting Aⁿ = Z⁻¹[(I - z⁻¹A)⁻¹], the symbolic toolbox can produce a symbolic expression for Aⁿ.

>> syms z n; An = simplify(iztrans(inv(eye(2)-z^(-1)*A)))
An =
[ 3*(1/3)^n - 2*(1/2)^n, 6*(1/2)^n - 6*(1/3)^n]
[ (1/3)^n - (1/2)^n, 3*(1/2)^n - 2*(1/3)^n]

Notice that this result is identical to Eq. (13.67), derived earlier. Substituting the case n = 3 into An provides a result that is identical to the one elicited by the previous A^n command.

>> double(subs(An,n,3))
ans =
   -0.1389    0.5278
   -0.0880    0.3009

For continuous-time systems, the matrix exponential e^{At} is commonly encountered. The expm command can compute the matrix exponential symbolically. Using the system from Ex. 13.8 yields

>> syms t; A = [-12 2/3; -36 -1]; eAt = simplify(expm(A*t))
eAt =
[ -(exp(-9*t)*(3*exp(5*t) - 8))/5, (2*exp(-9*t)*(exp(5*t) - 1))/15]
[ -(36*exp(-9*t)*(exp(5*t) - 1))/5, (exp(-9*t)*(8*exp(5*t) - 3))/5]

This result is identical to the result computed in Ex. 13.8. Similar to the discrete-time case, an identical result is obtained by typing syms s; simplify(ilaplace(inv(s*eye(2)-A))). For a specific t, the matrix exponential is also easy to compute, either through substitution or direct computation. Consider the case t = 3.

>> double(subs(eAt,t,3))
ans =
   1.0e-04 *
   -0.0369    0.0082
   -0.4424    0.0983

The command expm(A*3) produces the same result.

13.9 SUMMARY

An Nth-order system can be described in terms of N key variables: the state variables of the system. The state variables are not unique; rather, they can be selected in a variety of ways. Every possible system output can be expressed as a linear combination of the state variables and the inputs. Therefore, the state variables describe the entire system, not merely the relationship between certain input(s) and output(s). For this reason, the state variable description is an internal description of the system. Such a description is therefore the most general system description, and it contains the information of the external descriptions, such as the impulse response and the transfer function. The state variable description can also be extended to time-varying parameter systems and nonlinear systems. An external description of a system may not characterize the system completely.

The state equations of a system can be written directly from knowledge of the system structure, from the system equations, or from the block diagram representation of the system. State equations consist of a set of N first-order differential equations and can be solved by time-domain or frequency-domain (transform) methods. Suitable procedures exist to transform one given set of state variables into another. Because a set of state variables is not unique, we can have an infinite variety of state-space descriptions of the same system. The use of an appropriate transformation allows us to see clearly which of the system states are controllable and which are observable.



PROBLEMS

13.2-1 Convert each of the following second-order differential equations into a set of two first-order differential equations (state equations). State which of the sets represent nonlinear equations.
(a) ÿ + 10ẏ + 2y = x
(b) ÿ + 2e^y·ẏ + log y = x
(c) ÿ + φ1(y)ẏ + φ2(y)y = x

13.3-1 Write the state equations for the RLC network in Fig. P13.3-1.

13.3-2 Write the state and output equations for the network in Fig. P13.3-2.

13.3-3 Write the state and output equations for the network in Fig. P13.3-3.

13.3-4 Write the state and output equations for the electrical network in Fig. P13.3-4.

13.3-5 Write the state and output equations for the network in Fig. P13.3-5.

13.3-6 Write the state and output equations of the system shown in Fig. P13.3-6.

13.3-7 Write the state and output equations of the system shown in Fig. P13.3-7.

Figure P13.3-1
Figure P13.3-3
Figure P13.3-4
Figure P13.3-5
Figure P13.3-6

Figure P13.3-2
Figure P13.3-7 (parallel connection of branches 1/(s - λ1), 1/(s - λ2), 1/(s - λ3), and 1/(s - λ4) with states q1, q2, q3, q4 and outputs y1, y2)

13.3-8 For a system specified by the transfer function

H(s) = (3s + 10)/(s² + 7s + 12)

write sets of state equations for DFII and its transpose, cascade, and parallel forms. Also write the corresponding output equations.

13.3-9 Repeat Prob. 13.3-8 for
(a) H(s) = 4s/((s + 1)(s + 2)²)
(b) H(s) = (s³ + 7s² + 12s)/((s + 1)³(s + 2))

13.4-1 Find the state vector q(t) by using the Laplace transform method if

q̇ = Aq + Bx,    A = [0 -1; ...],    q(0) = [...],    x(t) = 0

13.4-2 Repeat Prob. 13.4-1 for

A = [-5 ...; ...],    B = [...],    x(t) = sin(100t)u(t)

13.4-3 Repeat Prob. 13.4-1 for

A = [...],    q(0) = [...],    x(t) = u(t)

13.4-4 Repeat Prob. 13.4-1 for

A = [-2 0; ...],    q(0) = [...]

13.4-5 Use the Laplace transform method to find the response y for

q̇ = Aq + Bx(t)
y = Cq + Dx(t)

where

A = [...],    B = [...],    C = [0 1],    D = 0

D =O

1128

CHAPTER 13 STATE-SPACE ANALYSIS and x(t) = u(I)

13.4-6 Repeat Prob. 13.4-5 for C = [1 1], D = 1, and x(t) = u(t). [A, B, and q(0) are not recoverable from this scan.]

13.4-7 The transfer function H(s) in Prob. 13.3-8 is realized as a cascade of H1(s) followed by H2(s) [the individual transfer functions are not recoverable from this scan]. Let the outputs of these subsystems be state variables q1 and q2, respectively. Write the state equations and the output equation for this system, and verify that H(s) = CΦ(s)B + D.

13.4-8 Find the transfer function matrix H(s) for the system in Prob. 13.4-5.

13.4-9 Find the transfer function matrix H(s) for the system in Prob. 13.4-6.

13.4-10 Find the transfer function matrix H(s) for the system

q̇ = Aq + Bx
y = Cq + Dx

[The matrices A, B, C, and D are not recoverable from this scan.]

13.4-11 Repeat Prob. 13.4-1 using the time-domain method.

13.4-12 Repeat Prob. 13.4-2 using the time-domain method.

13.4-13 Repeat Prob. 13.4-3 using the time-domain method.

13.4-14 Repeat Prob. 13.4-4 using the time-domain method.

13.4-15 Repeat Prob. 13.4-5 using the time-domain method.

13.4-16 Repeat Prob. 13.4-6 using the time-domain method.

13.4-17 Find the unit impulse response matrix h(t) for the system in Prob. 13.4-7 using Eq. (13.45).

13.4-18 Find the unit impulse response matrix h(t) for the system in Prob. 13.4-6.

13.4-19 Find the unit impulse response matrix h(t) for the system in Prob. 13.4-10.

13.5-1 The state equations of a certain system are given as

q̇1 = q2 + 2x
q̇2 = −q1 − q2 + x

Define a new state vector w [the transformation is not recoverable from this scan]. Find the state equations of the system with w as the state vector. Determine the characteristic roots (eigenvalues) of the matrix A in the original and the transformed state equations.

13.5-2 The state equations of a certain system are

q̇1 = q2
q̇2 = −2q1 − 3q2 + 2x

(a) Determine a new state vector w (in terms of vector q) such that the resulting state equations are in diagonalized form.
(b) For the output y given by y = Cq + Dx, where

C = [1 1; −1 2]

determine the output y in terms of the new state vector w.
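For Prob. 13.5-2, the equations q̇1 = q2 and q̇2 = −2q1 − 3q2 + 2x give A = [0 1; −2 −3], whose characteristic equation λ^2 + 3λ + 2 = 0 has roots λ = −1 and −2. Choosing w = P^{-1}q, with the eigenvectors of A as the columns of P, diagonalizes the state equations. A small pure-Python check of that similarity transform (the eigenpairs are computed by hand):

```python
# State matrix of Prob. 13.5-2 (q1' = q2, q2' = -2 q1 - 3 q2 + 2x)
A = [[0.0, 1.0],
     [-2.0, -3.0]]

# Hand-computed eigenpairs: lambda = -1 with eigenvector [1, -1],
# lambda = -2 with eigenvector [1, -2].
P = [[1.0, 1.0],
     [-1.0, -2.0]]        # eigenvectors as columns
Pinv = [[2.0, 1.0],
        [-1.0, -1.0]]     # inverse of P (det P = -1)

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# In the new state vector w = P^{-1} q, the state matrix is P^{-1} A P,
# which should be diagonal with the eigenvalues -1 and -2 on the diagonal.
Lam = matmul(Pinv, matmul(A, P))
print(Lam)
```

The same transformation applied to B and C then yields the diagonalized state and output equations asked for in parts (a) and (b).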

13.5-3 Given a system q̇ = Aq + Bx, determine a new state vector w such that the state equations are diagonalized. [The system matrices are not recoverable from this scan.]

13.5-4 The state equations of a certain system are given in diagonalized form. The output equation is given by

y = [1 1 1]q

Determine the output y for x(t) = u(t). [The diagonal state matrix (its entries include −3) and the initial state q(0) are not recoverable from this scan.]

13.6-1 Write the state equations for the systems depicted in Fig. P13.6-1. Determine a new state vector w such that the resulting state equations are in diagonalized form. Write the output y in terms of w. Determine in each case whether the system is controllable and observable.

[Figure P13.6-1: systems built from blocks of the form 1/(s + a) and (s + a)/(s + b), with input x and output y (exact interconnections not recoverable from this scan).]

13.7-1 An LTI discrete-time system is specified by

q[n + 1] = Aq[n] + Bx[n]
y[n] = Cq[n] + Dx[n]

with x[n] = u[n]. [A, B, C, D, and q(0) are not fully recoverable from this scan.]
(a) Find the output y[n] using the time-domain method.
(b) Find the output y[n] using the frequency-domain method.

13.7-2 An LTI discrete-time system is specified by the difference equation

y[n + 2] + y[n + 1] + 0.16y[n] = x[n + 1] + 0.32x[n]

(a) Show the DFII, its transpose, cascade, and parallel realizations of this system.
(b) Write the state and the output equations from these realizations, using the output of each delay element as a state variable.

13.7-3 Repeat Prob. 13.7-2 for

y[n + 2] + y[n + 1] − 6y[n] = 2x[n + 2] + x[n + 1]
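The difference equation of Prob. 13.7-2 also illustrates the time-domain (recursive) method of Prob. 13.7-1. In delay form it reads y[n] = −y[n − 1] − 0.16y[n − 2] + x[n − 1] + 0.32x[n − 2]. A pure-Python iteration for x[n] = u[n], assuming zero initial conditions (an assumption — the book's initial conditions are not legible here), settles to (1 + 0.32)/(1 + 1 + 0.16) = 1.32/2.16, since the characteristic roots −0.2 and −0.8 of z^2 + z + 0.16 lie inside the unit circle:

```python
def step_response(n_max):
    """Iterate y[n] = -y[n-1] - 0.16 y[n-2] + x[n-1] + 0.32 x[n-2]
    (delay form of y[n+2] + y[n+1] + 0.16 y[n] = x[n+1] + 0.32 x[n])
    for x[n] = u[n], assuming zero initial conditions."""
    x = lambda n: 1.0 if n >= 0 else 0.0
    y = {-2: 0.0, -1: 0.0}
    for n in range(n_max + 1):
        y[n] = -y[n - 1] - 0.16 * y[n - 2] + x(n - 1) + 0.32 * x(n - 2)
    return y

y = step_response(60)
# Both characteristic roots are stable, so y[n] -> 1.32/2.16.
print(y[60])
```

The frequency-domain method of part (b) would instead evaluate Y(z) = H(z)X(z) with H(z) = (z + 0.32)/(z^2 + z + 0.16) and invert by partial fractions; the steady-state value above is H(1).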

13.8-1

Verify the state and output equations for the LTID system shown in Fig. 13.15.

13.8-2

Verify the state and output equations for the LTID system shown in Fig. 13.16.
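Problems 13.8-1 and 13.8-2 ask you to verify that a realization's state and output equations reproduce the system. That verification can be scripted. As an illustration (this uses a standard controller-canonical realization of the Prob. 13.7-2 equation, not the book's Figs. 13.15/13.16), iterate q[n + 1] = Aq[n] + Bx[n], y[n] = Cq[n] + Dx[n] and confirm it matches the direct recursion:

```python
# Controller-canonical realization of H(z) = (z + 0.32)/(z^2 + z + 0.16),
# i.e. of y[n+2] + y[n+1] + 0.16 y[n] = x[n+1] + 0.32 x[n] (Prob. 13.7-2).
A = [[0.0, 1.0],
     [-0.16, -1.0]]
B = [0.0, 1.0]
C = [0.32, 1.0]
D = 0.0

def simulate(x_seq):
    """Iterate q[n+1] = A q[n] + B x[n], y[n] = C q[n] + D x[n], q[0] = 0."""
    q, out = [0.0, 0.0], []
    for xn in x_seq:
        out.append(C[0] * q[0] + C[1] * q[1] + D * xn)
        q = [A[0][0] * q[0] + A[0][1] * q[1] + B[0] * xn,
             A[1][0] * q[0] + A[1][1] * q[1] + B[1] * xn]
    return out

def recursion(x_seq):
    """Direct recursion: y[n] = -y[n-1] - 0.16 y[n-2] + x[n-1] + 0.32 x[n-2]."""
    x = lambda n: x_seq[n] if n >= 0 else 0.0
    y = {-2: 0.0, -1: 0.0}
    for n in range(len(x_seq)):
        y[n] = -y[n - 1] - 0.16 * y[n - 2] + x(n - 1) + 0.32 * x(n - 2)
    return [y[n] for n in range(len(x_seq))]

u = [1.0] * 30
ss, dr = simulate(u), recursion(u)
print(max(abs(a - b) for a, b in zip(ss, dr)))
```

The two output sequences coincide sample for sample, which is exactly the check the 13.8-x problems ask you to carry out by hand against the figures.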

INDEX

The page numbers in italic refer to both Figures and Tables Abscissa of convergence, 518 Absolute value, 6 Acceleration error constant, 606 Acceleration error constant Ka, 606 Accumulator system, 762, 949, 950 Active circuits, Laplace transform and analysis of, 566-569 A/D conversion. See Analog-to-digital conversion Adders, 585 Additive property, 97 Advance form, 757 Algebra of complex numbers, 5-15 matrix, 37-42, 1065-1069 Aliasing, 452-453, 454, 455 general condition for, in sinusoids, 457-459 leakage in numerical computation and, 469, 471 sampling rate and, 753-756 verification of, in sinusoids, 456-457 Aliasing error, 307 Allpass filter, 715-716 AM. See Amplitude modulation American Standard Code for Information Interchange (ASCII), 466 Amplitude, 16 in discrete-time sinusoids, 744 waveshaping role of, 279-284 Amplitude modulation (AM), 362-365, 393-397, 412 Amplitude response, 639 Amplitude spectrum, 264, 358 Analog filters. See Continuous-time system analysis Analog signals, 78, 80 Analog systems, 109 Analog-to-digital conversion (A/D conversion), 463-466, 464, 495


Angle modulation, 401-414 Angles computing with electronic calculators, 8-11 generalized, 402 principal value of, 9 Angular acceleration, 116 Angular position, 116 Angular velocity, 116 Anonymous functions, 126-128 Anti-aliasing filter, 455, 755 Anti-causal exponential, DTFT of, 855 Anti-causal signal, 81 Antipodal scheme, 247 Aperiodic signals, 79-81, 330-340, 847-858 Apparent frequency, 457-460, 750-752, 756 Armstrong, Edwin H., 412 Ars Magna (The Great Art) (Cardano), 2 ASCII. See American Standard Code for Information Interchange Associative property, convolution integration and, 171 Asymmetric quantizers, 497 Asymptotic stability, 198, 555-556, 811-812 Audio signals, 365, 376, 397, 465 Autocorrelation, 249, 387-388 Automatic position control system, 590-596 Auxiliary conditions, 161 Backward difference system, 762, 806-808, 950 Bandlimited interpolation, 451-452 Bandlimited signals, 408, 440, 449, 451-452, 465, 466 Bandpass filters, 673, 703, 997-999, 1023 Butterworth, 707-710, 1024-1026

Chebyshev, 704-706 Bandpass systems, 377-380, 875 Bandstop filters, 673-675, 710-713, 999-1001, 1023-1026 Bandwidth of angle-modulated signals, 408-410 essential, 387, 421-422 of rectangular pulse, 344 signal duration reciprocity with, 357 of signals, 293, 408-410 Bartlett (triangle) window, 417-418 Baseband signal, 388, 390-392, 398-399, 401, 408-409, 411, 413-414 Basis functions, 253 Basis signals, 253, 257, 306, 509 Basis vectors, 251 Beat effect, 393 Bessel-Thomson filters, 715 Bhaskara, 2 BIBO. See Bounded-input/bounded-output BIBO stability, 196-203 Bilateral Laplace transform, 509, 510, 512, 612-615 inverse, 616-617 linear system analysis with, 619-620 properties of, 618-619 Bilateral z-transform, 919, 920-921, 923-924, 966-974 Bilinear transformation method, 1012-1026, 1047-1049 Binary communication, 247 Black, H. S., 443 Black box, 95, 119, 120, 201, 556, 662, 815 Blackman window, 417-418, 425 Block convolution, 894-899 Block diagrams, 570, 570-572 Bode plots, 646-647 first-order pole, 650-653

with MATLAB, 661 pole at origin, 648-650 second-order pole, 653-656, 661-662 for second-order transfer functions, 656-661 transfer function from frequency response, 662 Bombelli, Raphael, 3-4 Bonaparte, Napoleon, 259-260, 529-530 Bounded input, 110, 775 Bounded-input/bounded-output (BIBO), 110, 555-556, 775, 811-812, 814-817, 986-987 Bounded output, 110, 775 Break frequency, 650 Butterfly structure, 489, 491 Butterworth filters, 673, 677-687 bandpass, 707-710, 1024-1026 bandstop, 711-713 bilinear transformation method for designing, 1013-1015 cascaded second-order sections for, 718-721 higher-order lowpass, 1048-1049 impulse-invariance method for designing, 1008-1011 in MATLAB, 717-718 Butterworth lowpass filters, 683-687, 1006, 1017-1020 Canonical realizations, of transfer function, 952-954 Canonic direct form, 952 Cardano, Gerolamo, 2-3, 4 Carrier, 362 Carrier frequency, 402, 403 Cartesian form, 5, 8-14 Cascaded second-order sections, 718-721 Cascade realization, 577-579, 955, 1079 Cascade systems, 190-192, 201, 555-556 Causal exponential bilateral z-transform of, 920-921 DTFT of, 852, 854 Fourier transform of, 334-335 Laplace transform and ROC of, 513-514 Causality condition, 778 Causal signal, 81, 172 Causal systems, 104-107, 172, 775 Cayley-Hamilton theorem, 1066-1068 Champollion, Jean-François, 260 Characteristic equation, 153, 783, 1066-1068, 1088, 1099 Characteristic function, 193

Characteristic modes, 162, 163, 203-205, 783 Characteristic polynomial, 153-156, 164, 166, 202-203, 220, 554-555, 592, 645, 783-784, 786-787, 791 Characteristic roots, 153, 783, 812, 813, 1088 Characteristic value, 1066, 1088 Chebyshev filters, 673, 677, 688-700, 704-706 Chebyshev highpass filters, 701-703, 1021-1023 Chebyshev lowpass filters, 693-696 Circular convolution, 483-484, 866,

889-890 Circular shifting property, 483 Clearing fractions, 26-27, 32-33, 525-526 Clipped sinusoids, 298-299 Closed-loop feedback, 588, 589 Closed-loop frequency response, 667 Coherent detection, 392 Column vector, 35 Communications industry, 412 Commutation, of matrices, 39 Commutative property, convolution integration and, 170 Compact trigonometric Fourier series, 263-267, 269-274, 288 Compensation, 608-611 Complete set, 251, 253 Complex conjugate poles, 653-654, 927 phase of, 656 realization of, 579 Complex-conjugate zeros, 654, 656 Complex factors of Q(x), 28-29 Complex frequency plane, 90 Complex numbers algebra of, 5-15 arithmetical operations, powers, and roots of, 11-15 conjugate of, 6-7 defining, 1 historical background for, 1-4 logarithms of, 15 mathematical formulas for, 54 standard forms of, 14-15 Complex roots, 154, 786-788 Complex signals, 95, 242-243 Conformable, 39 Conjugate of complex numbers, 6-7 Conjugate symmetry, 95, 334, 354 DFT and, 482-483 of X(Ω), 851, 859 Conjugation property, 334, 354, 859


Constant Ka1a2/b1b2, 648 Constant-parameter systems, 101, 774 Continuous-time Fourier transform (CTFT), 859, 869-872 Continuous-time signals, 78, 79, 107 Continuous-time sinusoids, sampled, 746 Continuous-time system analysis block diagrams and, 570-571 differential and integro-differential equation solutions, 544-556 of electrical networks, 557-569 feedback and controls applications of, 588-611 sampled-data systems and, 959-966 system realization, 572-587 Continuous-time systems, 107-108, 764. See also Linear, time-invariant, continuous-time systems Continuous wall of poles, 673 Controllability/observability, 123-124, 1103-1109, 1117, 1121-1124 Control systems design specifications, 596 frequency response and design of, 662-667 higher-order, 601 second-order, 596-599 simple, 590-596 Converge in the mean, 278 Convergence, of Fourier series, 277-286, 335 Convolution, 189-190 block, 894-899 of causal signals, 796-797 circular, 483-484, 889-890 correlation and, 249 discrete-time, 822-823, 889-894 fast, 890 Fourier transform and, 365-367 graphical understanding of, 178, 179, 180-187, 217-219 linear, 485, 889-890 by tables, 175, 177, 797-799 z-transform and, 938-939 Convolution integral, 170-177, 176 Convolution sum, 795, 797, 798, 798, 800-805 Convolved functions, width of, 187 Corner frequency, 650 Correlation convolution and, 249 functions, 248-249 signal comparison, 243-249 signal detection and, 246-248 Correlation coefficient, 244-246


Coupled inductive networks, transformed analysis of, 563-565 Cramer's rule, 22-25, 569 Critically damped, 592 Crosscorrelation function, 248 CTFT. See Continuous-time Fourier transform Cubic equations, 58 Custom filter functions, 821-822 Cutoff filter, 455 Cutoff frequency, 208, 678 Cycles per sample, 745 Damping coefficient, 115, 162 Damping ratio, 596 Dashpots linear, 115 torsional, 116 Data truncation, 414-420 Decade, 648 Decibels, 647 Decimation-in-frequency algorithm, 491 Decimation-in-time algorithm, 489, 491 Decomposition property, 99, 100 Delayed impulse, 168 Delay form, 757 Demodulation, 391-392, 396-397, 411-412 Depressed cubic equation, 3 Derivative formulas, 56 Design specifications, 596 Detection, 391. See also Demodulation correlation and, 246-248 envelope, 396-397 synchronous or coherent, 392 threshold, 247 Deterministic signals, 82 DFII realization. See Direct form II realization DFI realization. See Direct form I realization DFT. See Discrete Fourier transform Diagonalization, 1099-1103 Diagonal matrix, 37 Difference equations, 777 differential equations and, 763-764 of linear, 939-950 order of, 764 recursive and nonrecursive forms of, 763 recursive (iterative) solution of, 778-781 sinusoidal response of system of, 988-991

Differential equations auxiliary conditions and solutions of, 161 difference equations and, 763-764 Laplace transform and solutions of, 544-546 natural and forced solutions, 196 Differentiation property, z-transform and, 936-937 Differentiators digital, 760-762 ideal, 643 window method for design of, 1038-1039 Digital computers, 730, 880 Digital differentiators, 760-762 Digital filters, 731, 764, 1001-1003. See also Discrete-time systems; Recursive filters Digital integrators, 762 Digital signal processing, 764-765, 877 Digital signals, 78, 80, 461, 462 Digital systems, 109 Dirac delta function, 344-346 Dirac delta train, 348 Direct canonic form, 575-577 Direct form II realization (DFII realization), 574-577, 952, 1076-1080, 1081, 1109, 1110 Direct form I realization (DFI realization), 573-574, 574, 952 Direct realization, 950 Dirichlet conditions, 261, 279 Discontinuous functions, Fourier synthesis of, 283-284 Discrete Fourier transform (DFT), 307, 469-487 applications of, 484-487 derivation of, 471-474 interpreting, 881 in MATLAB, 491-498, 886 N0 for, 886-887 properties of, 482-484, 880 signal processing by, 877-899 6-point, 884-885 3-point, 882-883 truncated signals and, 888-889 Discrete-time complex exponential e^(jΩn), 743-744 Discrete-time convolution or filtering, 822-823, 889-894 Discrete-time exponential γ^n, 740-743 Discrete-time Fourier series (DTFS), 838-847, 877-878, 899-905

Discrete-time Fourier transform (DTFT), 850, 853, 878-880 CTFT connection with, 869-872 of decimated and interpolated signals, 870-872 interpreting, 881 LTI discrete-time system analysis by, 872-877 in MATLAB, 857, 902-905 properties of, 859-869, 868 Discrete-time functions, 765-766 Discrete-time impulse function δ[n], 738, 739-740 Discrete-time signals, 78, 79, 107, 730 energy and power of, 732-733 in MATLAB, 765-766 models of, 738-753 operations for, 733-738 size of, 731-732 Discrete-time sinusoid cos(Ωn + θ), 744-751 Discrete-time systems, 107-108, 730, 756-765 classification of, 774-777 equations for, 777-782 frequency response of, 986-992 in MATLAB, 819-823 properties of, 776-777 sampled-data systems and, 959-966 state-space analysis of, 1109-1117 unit impulse response, 789-793 ZIR, 782-788 ZSR, 793-810 Discrete-time unit step function u[n], 739-740 Discrete-time window functions, 1032

Distortion, 374-376 Distortionless transmission, 375, 376 bandpass, 379-380 DTFT and, 874-875 filters for, 714-716 Distributive property, convolution integration and, 171 Dn, numerical computation of, 307 Dominant poles, 601 Double-sideband, suppressed-carrier modulation (DSB-SC), 388-393, 389 Downsampling, 736-738 DSB-SC. See Double-sideband, suppressed-carrier modulation DTFS. See Discrete-time Fourier series DTFT. See Discrete-time Fourier transform

Duality property of Fourier transform, 354-355, 374 Dual of time sampling, 466-468

Eigenfunction, 193, 305 Eigenvalue, 153, 1066, 1067, 1088, 1098, 1103 Eigenvector, 1066 Electrical circuits, state equations for, 1072-1075 Electrical networks, analysis of, 557-569 Electrical systems, input-output description, 111-114 Electromechanical systems, input-output description, 118 Elliptic filters, 698-700 Elliptic rational function, 698 Energy of discrete-time signals, 732-733 error, 253 of error signal, 253-254 estimating in MATLAB, 131-133 signal, 384-388, 868-869 sum of orthogonal signals, 243 Energy signals, 81-82, 732 Energy spectral density, 385, 387-388 Envelope, 377 Envelope delay, 376, 378, 875 Envelope detector, 396-397 Equality in the mean, 335 Equalizer, 715 Equal-ripple functions, 689 Equilibrium states, 196, 198 Equivalent realizations, 580 Error energy, 253 Errors aliasing, 307 steady-state, 605-608 Error signal, energy of, 253-254 Error vector, 238 Essential bandwidth, 387, 421-422 Euler, Leonhard, 2 Euler's equality, 286 Euler's formula, 6, 90 Even and odd functions, 92-95 Everlasting exponential function e^st, 189, 193-195 Everlasting exponential function z^n, 808-809 Everlasting signals, 81 Exponential Fourier series, 286-302, 288, 838 Exponential Fourier spectra, 289-297 Exponential function e^st, 89-91

Exponentially varying discrete-time sinusoid γ^n cos(Ωn + θ), 752-753 Exponentially varying sinusoids, 22 Exponential modulation. See Angle modulation Exponential-order signals, 509 Exponentials Fourier transform of, 476-478 matrix, 1124-1125 monotonic, 20-21 sinusoids in terms of, 20 time constants of, 20 Exponential signals, 305-306 External description, 119-120 External input, system response to, 168-196 External (BIBO) stability, 196-197, 775 Fast convolution, 890 Fast Fourier transform (FFT), 307, 475, 488-491, 877-899 FDM. See Frequency-division multiplexing Feedback and controls, 588-612 compensation, 608-611 root locus, 601-605 stability and, 611-612 steady-state errors, 605-608 FFT. See Fast Fourier transform Filter design, pole and zeros of H(s) and design of, 667-677 Filtering DFT and, 485-487 discrete-time, using DFT, 889-894 discrete-time system response through, 819-821 ideal and practical filters, 381-384 interpolation function sinc(x) and, 341-342 time constant and, 207-208 using windows in, 418, 419, 420 ZSR and, 799 Filtering function, 341 Filters. See also specific filters anti-aliasing, 455, 755 design criteria for, 1003-1006 for distortionless transmission, 714-716 higher-order, 1047-1052 ideal, 381-384, 875-877 nonrecursive, 1002-1003 practical, 381-384, 676-677, 875-877 recursive, 1001-1002 Finality property, 252, 1034 Final value theorem, 542-543


Finite impulse response filters (FIR filters), 954-955, 1003, 1028 with arbitrary magnitude response, 1051-1052 comb, 1030-1031 delay and, 1049-1051 frequency sampling method in design of, 1040-1047 lowpass, 1042-1046 time-domain equivalence method for design of, 1032-1039 Finite-memory systems, 104 First-order factors, inverse z-transform and, 927-928 First-order hold filter, 449 FM. See Frequency modulation Folding frequency, 453 Forced response, 196, 810 Forced solution, 196 Fourier, Jean-Baptiste-Joseph, 259-261 Fourier integral, 850 aperiodic signal representation by, 330-340 Fourier series, 261, 277-286, 1032, 1034. See also Trigonometric Fourier series exponential, 286-302 Legendre, 257-259 limitations of method of analysis, 306-307 MATLAB applications, 309-315 properties, 300, 300-302 square wave synthesis by truncated, 281-282 Fourier spectra, 264 exponential, 289-297 of impulse train, 293-295 MATLAB plotting of, 267, 290 nature of, 850-858 numerical computation of, 308 periodic extension of, 841 of periodic signal x[n], 840-841 sampled signal and, 440-441, 441 Fourier transform, 334-339. See also Continuous-time Fourier transform; Discrete-time Fourier transform determining ZSR with, 372-373 of Dirac delta function, 344-345 duality property of, 354-355 frequency-shifting property of, 362-365 Laplace transform and, 520 linearity of, 336 LTIC system response using, 339-340 in MATLAB, 420-425, 478 numerical computation of, 469-487


Fourier transform (continued) of periodic signals, 347-348 properties of, 352-372, 368 of rectangular pulse, 343-344, 479-482 scaling property of, 356, 357 of sign function, 350 of sinusoids, 347 time differentiation and integration and, 367-371 time-frequency duality in operations of, 352-353 time-shifting property of, 358-361 of unit step function u(t), 349-350 Frequency, 16 apparent, 750, 752, 756 break, 650 carrier, 402, 403 corner, 650 cutoff, 208 fundamental, 838-847 gain crossover, 665 half-power, 678 Hertzian, 16 instantaneous, 402-405 phase crossover, 665 radian, 16 Frequency convolution bilateral Laplace transform and, 619 DTFT and, 866-867 Laplace transform and, 540-541 Frequency differentiation, 860, 861 Frequency-division multiplexing (FDM), 365, 412-414 Frequency-domain analysis, 383-384 Frequency-domain description, 264, 840 Frequency-domain equivalence criterion, 1005, 1012-1026, 1040-1047 Frequency-domain method, 551 Frequency modulation (FM), 402-408, 412 Frequency resolution, 474, 886-887 Frequency response, 374 Bode plots, 646-662 control system design using, 662-667 of discrete-time systems, 986-992 of LTIC systems, 638-646 in MATLAB, 716-717, 991 periodic nature of, 992 pole and zeros of H(s) and, 669-671 from pole-zero locations, 993-1001 transfer function from, 662 transient performance in terms of, 665-667 Frequency reversal, 859

Frequency sampling method, 1032, 1040-1047, 1049 Frequency scaling, 682 Frequency shifting, 362-365, 482-483, 536, 618, 863-865 Frequency spectra, 264 Frequency warping, 1015-1017 Full-wave rectifier, 303-305 Function M-files, 214-215 Fundamental, 261, 276-277 Fundamental band, 457, 749 Fundamental frequency, 838-847 Fundamental period, 733, 838-847 Gain, controlling with pole and zero placement, 994-995 Gain crossover frequency, 665 Gain enhancement, by pole, 670 Gain margin, 665 Gain suppression, by zero, 670-671 Gauss, Karl Friedrich, 4 GCF. See Greatest common factor Generalized angle, 402 Generalized Fourier series, 257 Generalized function, 88 Generalized linear phase (GLP), 378 Gibbs, Josiah Willard, 283, 285-286 Gibbs phenomenon, 283-286, 309-311, 1032, 1039 GLP. See Generalized linear phase Greatest common factor (GCF), 276 Group delay, 376, 377-378, 875 Half-power frequency, 678 Half-wave symmetry, 275 Hamming window, 417-418, 1037-1039 Hanning window, 417-418 Harmonically related frequencies, 276 Harmonic distortion, 298-299 Heaviside, Oliver, 261, 530-531 Heaviside "cover-up" method, 27-30, 32-33, 525-526 Hertzian bandwidth, 409 Hertzian frequency, 16 Higher-order filters, 1047-1052 Higher-order systems, poles and, 601 Highpass filters, 700-703, 995 Homogeneity property, 97 Ideal delay, 642-643 Ideal differentiator, 643 Ideal filters, 381-384, 676, 875-877 Ideal integrator, 643-644 Ideal interpolation, 450-451, 451

Ideal linear phase (ILP), 378 Identity matrix, 37 Identity systems, 109, 775 IDTFT. See Inverse discrete-time Fourier transform IIR filters. See Infinite impulse response filters ILP. See Ideal linear phase Imaginary numbers, 1, 2 Impedance, 557 Improper functions, 25, 33-34 Impulse invariance criterion, 1005 Impulse invariance method, 1006-1012 Impulse matching, 164-167 Impulse response, 163-168, 205, 220-221, 791-792, 823-824, 1033 Impulse response matrix, 1094 Impulses, 171, 189 Impulse train, Fourier series spectra of, 293-295 Incrementally linear equations, 777 Indefinite integrals, 57 Inductive cable loading, 530 Inertia, moment of, 116 Infinite impulse response filters (IIR filters), 1002, 1047-1049 Information transmission rate, time constants and, 209-210 Initial conditions, 158-159, 161, 547, 559-561, 1069 Initial state, 1069 Initial value theorem, 542-543 Inner product, 240 Input. See also External input; Multiple-input, multiple-output; Single-input, single-output bounded, 110, 775 noncausal, 619-622 ramp, 593, 594-595 step, 591-592, 593, 594-595 Input-output description, 111-118 Inputs, 64 Instantaneous frequency, 402-405 Instantaneous systems, 103-104 Instantaneous velocity, 402 Integral control, 610 Integrator, in operational amplifier circuits, 584 Integro-differential equations, Laplace transform and solutions of, 544-546 Interconnected systems, 806-809 Internal conditions continuous-time system response to, 151-163

discrete-time system response to, 782-788 Internal description, 119-125, 1064 Internal stability, 198, 775, 811-812 Internal (asymptotic) stability, 198, 811-812, 814-817 Interpolation, 449, 450, 450-451, 451, 468, 736-738 Interpolation formula, 443, 451 Interpolation function sinc(x), 341-342 Inverse bilateral z-transform, 969-970 Inverse Chebyshev filters, 696-698 Inverse discrete-time Fourier transform (IDTFT), 472, 878-881 Inverse Fourier transform, 333 of Dirac delta function, 345-346 Inverse Laplace transform, 520-529, 1006, 1094-1095 Inverse systems, 556-557, 806-808, 949-950 Inverse z-transform, 921, 925-930 Inversion, of matrices, 40-42 Inversion property of time and frequency, 357 Invertible systems, 109-110, 775 Irrational numbers, 2 Jump discontinuities, 268, 281, 282, 378, 471 Kaiser window functions, 417-418, 424-425 Key variables, 120 Kirchhoff laws, 111, 213, 557 Kronecker delta function, 451-452 Lag compensator, 610, 611 Lagrange, Louis de, 530 Laplace, Pierre-Simon de (Marquis), 529-530 Laplace transform, 509-531, 511. See also Bilateral Laplace transform active circuit analysis with, 566-569 differential and integro-differential equation solutions and, 544-546 electrical network analysis and, 557-569 electric circuit solutions and, 548-549 Fourier transform and, 520 frequency convolution and, 540-541 frequency shifting and, 536 initial and final values and, 542-543 initial conditions and, 547

intuitive interpretation of, 551-552 inverse, 520-529, 1006, 1094-1095 properties of, 532-544, 544 stability and, 554-556 state equation solutions with, 1083-1088 system transfer functions and, 553 zero-input and zero-state components of response, 546 ZSR and, 541, 546, 551-552 z-transform and, 918, 956-959 Lead compensator, 609, 609, 610 Left half-plane (LHP), 91 Left shift, 71, 130, 134, 178, 181, 733-735, 934, 939-942 Left-sided sequence, 968 Left-sided signal, 614 Legendre Fourier series, 257-259 Leibniz, Gottfried Wilhelm, 465 L'Hôpital's rule, 58 LHP. See Left half-plane Linear, time-invariant, continuous-time systems (LTIC systems), 150 everlasting exponential function e^st and, 193-195 Fourier transform for response of, 339-340 frequency response of, 638-646 impulse response determination for, 220-221 periodic input response of, 303-307 signal transmission through, 372-380 Linear, time-invariant, discrete-time systems (LTID systems), 774 DTFT analysis of, 872-877 equations for, 777-782 external input response, 793-810 interconnected, 806-809 internal conditions responses, 782-788 stability, 811-817 system realization, 950-956, 951 unit impulse response, 788-793 ZSR of, 944-948 Linear convolution, 485, 889-890 Linear dashpot, 115 Linear difference equations, z-transform solution of, 939-950 Linear differential systems, 150 Linear discrete-time systems, 782 Linearity, 97-98 bilateral Laplace transform and, 618 DFT and, 482 of discrete-time systems, 774-775 DTFT and, 859 Fourier transform and, 354


Laplace transform and, 513 z-transform and, 919 Linear phase, 358-359, 376, 861-862, 874 Linear phase filters, 1041-1042 Linear-phase response, symmetry conditions for, 1028-1030 Linear springs, 114 Linear systems, 97-101 bilateral Laplace transform in analysis of, 619-620 heuristic understanding of response of, 373-374 total response of, 195-196 Linear time-invariant systems (LTI systems), 150, 305-306 Linear time-varying systems, 102 Linear transformation, 36, 1095-1103, 1117 Log amplitude, 648, 650, 653 Logarithms, of complex numbers, 15 Log magnitude, 648-650, 653-654 Louis XVIII (King), 260 Lower sideband (LSB), 389 Lowpass DT systems, 818 Lowpass filters, 672-673 Butterworth, 683-687, 1006, 1048-1049 Chebyshev, 693-696 elliptic, 698-700 FIR, 1042-1046 frequency response of, 995 window method for design of, 1034-1037 LSB. See Lower sideband LTIC systems. See Linear, time-invariant, continuous-time systems LTID systems. See Linear, time-invariant, discrete-time systems LTI systems. See Linear time-invariant systems Lyapunov sense, 198, 222, 811 Maclaurin series, 55 Magnitude, 6 Magnitude response, 639, 1036 Marginal stability, 198 Mass, 114 rotational, 116 MATLAB anonymous functions, 126-128 bilinear transformation in, 1015, 1047-1049 Bode plots with, 661


MATLAB (continued) Butterworth bandpass filters with, 709-710 Butterworth bandstop filters, 712-713 Butterworth filters in, 717-721 Butterworth lowpass filters with, 685-687 calculator operations, 43-45 cascaded second-order sections, 718-721 Chebyshev bandpass filters with, 704-706 code performance measurement, 904-905 continuous-time filters, 716-721 controllability and observability with, 1108, 1121-1124 control system toolbox, 604-605 convolution of two finite-duration signals with, 805-806 custom filter functions, 821-822 DFT in, 491-498, 886 discrete-time convolution, 822-823 discrete-time exponentials with, 743 discrete-time signals in, 765-766 discrete-time sinusoids with, 745 discrete-time systems in, 819-823 DTFS with, 846-847, 902-905 DTFT with, 857, 902-905 element-by-element operations, 48-49 Find command, 717-718 for-loops, 215-217 Fourier series applications, 309-315 Fourier series spectra plotting with, 267, 290 Fourier transform, 420-425, 478 frequency response in, 642, 716-717, 991 graphical understanding of convolution and, 217-219 higher-order filter design in, 1047-1052 highpass Chebyshev filters with, 702-703 impulse invariance in, 1011 impulse response, 167 inverse Chebyshev filters with, 697-698 inverse Laplace transform with, 526-528 Kaiser window functions, 424-425 magnitude response with, 1036 matrix diagonalization in, 1102-1103 matrix exponentiation and matrix exponential, 1124-1125 matrix operations, 49-53

M-files, 212- 220 multiple magnitude response curves in, 998-999 normalized Butterworth filters, 68 1-682 numerical computation of Fourier spectra. 308 numerical integration and signal energy estimation, 131 - 133 overview of, 42-43 Parseval's theorem and, 421-422 panial fraction expansion in, 53, 929 polynomial evaluation in, 716-717 polynomial roots, 157 relational operators and unit step function, 128-130 simple plolling, 46-48 Sine function, 420-421 spectral sampling in, 423-424 square wave synthesis by truncated Fourierseries.281-282 state-space analysis with, 11 I7-1125 symbolic Laplace transform with, 528 synthesizing periodic replication, 370-371 transfer functions from state-space representations, I 120-1121 transfer functions of feedback systems with, 571-572 unit impulse response in, 792 vector operations, 45-46 visualizing operations on independent variable, I30-131 working with functions, 126-133 ZIR, 157-158 Matrices, 35-36 Cayley-Hamilton theorem and, 1066-1068 characteristic equation of. 1066-1068 characteristic roots of, l088 computing exponential and power of, 1068-1069 derivatives and integrals of, 1065-1066 impulse response, 1094 inversion of, 40-42 MATLAB operations, 49-53 state transition, l091, l095 transfer function, l086-1087 vector multiplication of, 40 Matrix algebra, 37-42, l065-1069 Matrix exponentiation, 1124-1125 Maximum stopband gain, 677 Mean-square value, 65 Mechanical systems, input-output description, 114-1 18

Memoryless systems, 103, 776 Method of residues. See Heaviside "cover-up" method M-files, 212-215 Michelson, Albert, 285 MIMO. See Multiple-input, multiple-output Minimum passband gain, 677 Modes, 153 Modified partial fractions, 35, 926 Modified z-transform, 959, 960, 965 Modulating signal, 362 Modulation. See also specific types defining, 388 Modulation index, 394 Modulation property, 863-865 Moment of inertia, 116 Monotonic exponentials, 20-21 Multiple-input, multiple-output (MIMO), 98, 125, 1064, 1070 N0 periodic, 733 N0-periodic sequences, 472 Narrowband angle-modulated signal (NBEM), 410-411 Natural binary code (NBC), 463 Natural frequencies, 153 Natural modes, 153 Natural response, 196, 810 Natural solution, 196 NBC. See Natural binary code NBEM. See Narrowband angle-modulated signal Negative feedback, 590 Negative frequencies, 291 Negative numbers, 2 Neper frequency, 90 Neutral equilibrium, 196, 198 Nichols plot, 664 Nonanticipative systems, 775 Noncausal input, 619-622 Noncausal signal, 81 Noncausal systems, 104-107, 621, 775 Noninvertible systems, 109-110, 775 Noninverting amplifier, 567 Nonlinear systems, 97-101 Nonrecursive filters, 1002-1003, 1027-1047 Nonrecursive form, of difference equation, 763 Normalized Butterworth filter, 678-682 Normalized set, 251 Notch (bandstop) filters, 673-675 Numerical computation, leakage in, 469, 471

Nyquist criterion, 665 Nyquist interval, 442 Nyquist plot, 664, 665, 667 Nyquist rate, 442, 456, 869 Nyquist samples, 442, 456 Octave, 648 Open-loop feedback, 588 Open-loop transfer function, 667 Operational amplifier, 566, 566-567, 583-587, 584, 585 Operator notation, 781-782 Orthogonal signal set, signal representation by, 250-261 Orthogonal signal space, 251-259 Orthogonal vector space, 250-251 Orthonormal set, 251 Output equation, 122, 1064, 1070-1071, 1093-1094 Outputs, 64 Overdamped, 592 Overlap and add method, 895, 895-897 Overlap and save method, 888, 897-899 Paley-Wiener criterion, 382, 452 PAM. See Pulse-amplitude modulation Parallel realization, 577-580, 955, 1079-1080 Parseval's theorem, 254, 297, 385, 386-387, 421-422, 867-869 Partial fraction expansion, 25 combination of Heaviside "cover-up" and clearing fractions, 32-33 Heaviside "cover-up" method, 27-30 improper F(x), 33-34 inverse z-transform by, 925-929 MATLAB operations, 53, 929 method of clearing fractions, 26-27 modified partial fractions, 35 repeated factors of Q(x), 30-32 Passband, 676 PCM. See Pulse-code modulation Percent overshoot (PO), 592, 665 Periodic convolution, 866 Periodic extension of Fourier spectrum, 841 Periodic functions, 309-311 Periodic gate function, DTFS of, 845-847 Periodic inputs, LTIC system response to, 303-307 Periodicity, 276-277, 747-748, 992 Periodic replication, synthesizing, 370-371 Periodic signals, 79-81, 80, 81

  discrete-time Fourier series representation of, 838-847
  Fourier spectra of, 840-841
  Fourier transform of, 347-348
Phase angle, 360
Phase crossover frequency, 665
Phase-locked loop (PLL), 411
Phase margin, 665
Phase modulation (PM), 402-408
Phase-plane analysis, 125, 1065
Phase response, 639, 671
Phase shifts, 359
Phase spectra, 264, 279-284, 311-315, 358, 360-361
Phasors, 18, 19
Physical systems, 775
Picket fence effect, 471, 883-884
Pickoff node, 757-758
Piecewise function, 85
Piecewise polynomial periodic functions, 301
Pingala, 465
PLL. See Phase-locked loop
PM. See Phase modulation
PO. See Percent overshoot
Pointwise convergent series, 278
Polar form, 8-11
Pole-zero cancellation, 1046, 1075
Pole-zero placement, 993-1001
Polynomial evaluation, in MATLAB, 716-717
Positional error constant Kp, 606
Positive feedback, 590
Postmultiplied matrices, 39
Power series, 55, 930
Power signals, 81-82, 732
PPM. See Pulse-position modulation
Practical filters, 381-384, 676-677, 875-877
Practical sampling, 445-448, 446
Preece, William, 531
Preemphasis filter, 402
Premultiplied matrices, 39
Prewarping, 1017-1026
Principal value of angles, 9, 12, 14, 15, 360-361
Proper functions, 25, 27
Pulse-amplitude modulation (PAM), 460, 461
Pulse-code modulation (PCM), 460
Pulse dispersion or spreading, time constants and, 209
Pulse-position modulation (PPM), 460


Pulse-width modulation (PWM), 460, 461
Pythagoras, 2
Quadratic equations, 58
Quadratic factors, 29, 523-524, 928
Quantization, 463, 495-498
Radian frequency, 16, 90
Radians per sample, 743, 744
Ramp functions, 241
Ramp input, 593, 594-595
Ramp signal, 257
Random signals, 82
Rational functions, 25, 26, 520
RC circuit, 113-114, 161, 162
Real time, 105
Rectangular pulse
  DTFT of, 856-857
  Fourier transform of, 343-344, 479-482
Rectangular spectrum, inverse DTFT of, 857-858
Rectangular window, 1033, 1034, 1038
Recursive filters, 1001-1002, 1006-1026
Recursive form, of difference equation, 763
Recursive system, 1002
Reflected unit step functions, 85-86
Reflection property, 357-358, 859-860
Region of convergence (ROC), 193, 510, 513-515, 517, 614, 920-921, 967-968
Region of existence, 514, 920
Relational operators, 128-130
Relative stability, 664-665
Repeated factors of Q(x), 30-32
Repeated poles, realization of, 579
Repeated roots, 153-154, 786
Resonance phenomenon, 163, 210-212
Right half-plane (RHP), 91
Right shift, 71-72, 131, 134, 178, 181, 733-735, 931-936, 939-942
Right-sided sequence, 968
Right-sided signal, 614
Rise time, 206, 592
RLC circuit, 111-112, 114, 125, 1070-1071, 1073-1074
RMS. See Root-mean-square
ROC. See Region of convergence
Rolloff rate, 415
Root locus, 599, 601-605
Root-mean-square (RMS), 65, 68-70
Rotational mass, 116



Rotational systems, 116-118
Row vector, 35
Sallen-Key circuit, 567-569
Sampled-data systems, 959-966
Samples, 469, 470, 475-476
  cycles per, 745
  radians per, 743, 744
Sampling, 445-448, 446, 456
Sampling instants, 965
Sampling intervals, 755, 1007-1008
Sampling property of unit impulse, 87
Sampling rate
  aliasing and, 753-756
  altering, 736-738
Sampling theorem, 440-448, 4~63, 472, 755
Scalar multiplier, 584
Scaling property, 97, 733
  of Fourier transform, 356, 357
  of Laplace transform, 540
  MATLAB sinc function and, 420-421
  z-transform and, 936
Schwarz inequality, 244
Script M-files, 213-214
Second-order control systems, analysis of, 596-599
Second-order poles, 653-662
Second-order zeros, 654
Shifting, 733-735
Shift invariance, 774
Shift property, convolution integral and, 171
Sidelobes, 415
Sifting property of unit impulse, 87
Signal comparison, correlation, 243-249
Signal detection, correlation and, 246-248
Signal distortion, during transmission, 374-376
Signal duration, bandwidth reciprocity with, 357
Signal energy, 65-70, 384-388, 868-869
Signal flow graph, 489, 491
Signal power, 65-70
Signal reconstruction, 449-460
Signal representation, by orthogonal signal set, 250-261
Signals
  bandwidth of, 293, 408-410
  classification of, 78-82
  combined operations on, 77-78
  components of, 239-241
  defining, 64
  dual personality of, 306

  estimating energy in MATLAB, 131-133
  even and odd components, 93-95
  size of, 64-70
  time reversal, 76-77
  time scaling, 73-75
  time shifting, 71-73
  useful models for, 82-91
  as vectors, 237-243
Signal transmission, through LTIC systems, 372-380
Signal truncation, 888-889
Sign function, Fourier transform of, 350
Simple interpolation, 449, 450
Simplified impulse matching, 165-167
sinc(x), 341
Sine waves, approximation of, 240-241, 254-256
Single-input, single-output (SISO), 98, 125, 1064
Single-sideband modulation (SSB), 397-401
Singularity functions, 89
Sinusoids, 16-17
  addition of, 18-20
  aliasing in, general condition for, 457-459
  aliasing in, verification of, 456-457
  clipped, 298-299
  compression and expansion of, 76
  discrete-time, 744-751
  DTFS of, 841-844
  exponentially varying, 22
  Fourier transform of, 347
  harmonic distortion of clipped, 298-299
  in terms of exponentials, 20
SISO. See Single-input, single-output
Sliding-tape method, 803-805
Space, 64
Spectral compression, 356
Spectral decay, 284
Spectral density, 338
Spectral folding, 453
Spectral interpolation, 468
Spectral leakage, 415
Spectral sampling, 423-424, 466-468, 1032, 1041, 1047
Spectral shifting, 363-364
Spectral spreading, 414, 415, 420, 471
Spectrum of x(t), 335
Square matrix, 36
Square waves
  compact trigonometric Fourier series of, 272-274

  complex exponential approximation of, 243
  set of harmonic sine waves, approximation by, 254-256
  sine-wave approximation of, 240-241
SSB. See Single-sideband modulation
Stability
  feedback and control and, 611-612
  transfer function and, 949
  z-transform and, 949
Stable equilibrium, 196
Stable systems, 110, 775
Standard forms of complex numbers, 14-15
State equation, 122, 1064
  determining, 1072-1082
  diagonalization, 1099-1103
  Laplace transform solution of, 1083-1088
  solutions to, 1082-1095
  time-domain solution of, 1088-1095
State equations, from transfer function, 1075-1082
State space, 1069-1072
  controllability and observability and, 1103-1109
  discrete-time system analysis with, 1109-1117
  MATLAB for analysis with, 1117-1125
  transfer function and description in, 1076-1080
State-space description, 121-125
State transition matrix (STM), 1091, 1094-1095
State variables, 121, 1064, 1069
State vectors, linear transformation of, 1095-1103
Steady-state component, 645
Steady-state errors, 592, 605-608
Steady-state response, to causal sinusoidal inputs, 645-646, 987
Stem plots, 765-766
Step input, 591-592, 593, 594-595
Stiffness, 114
STM. See State transition matrix
Stopband, 676
Strip of convergence. See Region of convergence
Subcarriers, 412
Superposition, 98, 100
Suppressed carrier, 391, 399
Symbolic Laplace transform, 528
Symmetric matrix, 37
Symmetric quantizers, 496

Symmetry
  conjugate, 354
  in exponential Fourier series, 295
  half-wave, 275
  linear-phase response conditions and, 1028-1030
  trigonometric Fourier series and, 274-275
Synchronous demodulation, 394
Synchronous detection, 392
System behavior
  characteristic modes and, 203-205
  filtering, 207-208
  intuitive insights into, 817-818
  pulse dispersion or spreading, 209
  rate of information transmission, 209-210
  resonance phenomenon and, 210-212
  response time, 205-206
  rise time, 206

System memory, 104, 776
System realization, 572-587, 950-956, 951, 954-956, 1079-1080. See also Direct form II realization; Direct form I realization
System responses
  to external inputs, 168-196
  to internal conditions, 161-163
  to sampled continuous-time sinusoids, 987-988
  unit impulse response, 163-168
Systems, 95
  classification of, 97-110
  data for computing response of, 96-97
  defining, 64
  input-output description of, 111-118
  internal and external descriptions of, 119-125, 1064
  with memory, 776
  memory in, 104
System stability
  external (BIBO), 196-197
  implications of, 203
  internal (asymptotic), 198
  of LTID systems, 811-817
  relationship between BIBO and asymptotic, 198-203
System time constant, 205-206
System transfer functions
  inverse systems and, 556-557
  Laplace transform for finding, 553
Tables, convolution by, 175, 177, 797-799
Tapered-window functions, 417-418

Taylor series, 55
TDM. See Time-division multiplexing
Third-order feedback systems, 603-605
Threshold detector, 247
Time, 64
Time advance, 73
Time compression, 76, 356, 540, 736
Time constants
  of exponentials, 20
  filtering and, 207-208
  information transmission rate and, 209-210
  pulse dispersion or spreading and, 209
  rise time and, 206
  system, 205-206
Time convolution
  bilateral Laplace transform and, 619
  DTFT and, 866
  Laplace transform and, 540-541
Time delay, 73
Time differentiation
  bilateral Laplace transform and, 618
  Fourier transform and, 367-371
  Laplace transform and, 537-539
Time-division multiplexing (TDM), 412, 461
Time-domain analysis, 150, 383-384
Time-domain description, 264
Time-domain equivalence criterion, 1003-1012, 1032-1039
Time-domain solution of state equations, 1088-1095
Time expansion, 540
Time-frequency duality, in Fourier transform operations, 352-353, 374
Time integration
  bilateral Laplace transform and, 618
  Fourier transform and, 367-371
  Laplace transform and, 539-540
Time invariance, of discrete-time systems, 774-775
Time-invariant systems, 101-103, 774
Time reversal, 76-77, 735, 859
  bilateral Laplace transform and, 619
  z-transform and, 937-938
Time scaling, 73-75, 618
Time shifting, 71-73, 735
  DFT and, 482-483
  DTFT and, 861
  Fourier transform and, 358-361
  Laplace transform and, 532-535
  z-transform and, 931-935
Time-varying systems, 101-103
Torque, 116-117
Torsional dashpots, 116


Torsional springs, 116
Total response, 195-196
Transfer function, 193
  equivalent realizations and, 580
  of FIR comb filter, 1030-1031
  from frequency response, 662
  inadequacy of description with, 1108-1109
  of LTID systems, 809
  in MATLAB, 571-572, 1120-1121
  realization of, 573, 573-574, 952-954, 956
  second order, 656-661
  stability and, 949
  state equations from, 1075-1082
  state-space description from, 1076-1080, 1086-1087
  of unit delay, 948
  z-transform and, 944-945
Transfer function matrix, 1086-1087
Transient component, 645
Transient performance, in frequency response terms, 665-667
Translational systems, 114-116
Transmission
  distortionless, 375, 376, 379-380, 714-716, 874-875
  signal distortion during, 374-376
  time constants and rate of, 209-210
Transpose, 37
Transposed realization, 580-582
Transversal filter, 1028
Triangle function, 84-85
Triangle wave, compact trigonometric Fourier series of, 269-270
Trigonometric Fourier series, 261-277, 288, 838
  amplitude and phase spectra role in waveshaping, 279-284
  compact, 263-267, 269-274
  convergence at jump discontinuities, 268
  exponential Fourier series and, 291-293
  Fourier spectrum, 264
  fundamental frequency and period, 276-277
  periodicity of, 267-268
  symmetry and, 274-275
Trigonometric identities, 55-56
Trigonometric set, 262
Truncated signals, 888-889
Tukey-Cooley algorithm, 488
Two-sided sequence, 968
Two-sided signal, 614



Uncontrollable systems, 120
Underdamped, 592
Uniformly convergent series, 278
Unilateral Laplace transform, 515, 516, 517
Unilateral z-transform, 919-920
Uniqueness property, of unilateral Laplace transform, 517
Unit circle, 741
Unit delay, transfer function of, 948
Unit gate function, 340-341
Unit impulse function δ(t), 86-89
Unit impulse response h[n], of discrete-time systems, 789-793
Unit impulse response h(t), 163-168, 172
  of filters, 382
Unit matrix, 37
Unit step function u(t), 82-86, 83, 128-130, 349-350
Unit triangle function, 341
Unobservable systems, 120
Unstable equilibrium, 196
Unstable systems, 110, 775
Upper sideband (USB), 389
Upsampling, 736-738
USB. See Upper sideband
VCO. See Voltage-controlled oscillator

Vectors, 35-36
  components of, 237-239
  error, 238
  MATLAB operations, 45-46
  matrix multiplication by, 40
  signals as, 237-243
  state, 1095-1103
Velocity error constant Kv, 606

Vestigial sideband (VSB), 401
Voltage-controlled oscillator (VCO), 411
VSB. See Vestigial sideband
Wall of poles, 672-673
Weber-Fechner law, 647
Wideband angle-modulated signal (WBEM), 410-411
Wideband angle modulation, 410
Width property
  convolution integral and, 172
  of convolved functions, 187
Window functions, 414-420, 417
  discrete-time, 1032
Windowing, 1047
Window method
  for differentiator design, 1038-1039
  for lowpass filter design, 1034-1037
Young, Thomas, 260
z-Domain differentiation property, 936-937
z-Domain scaling property, 936
Zero-input response (ZIR), 98, 151-163, 203
  complex roots, 154
  of discrete-time systems, 782-788
  Laplace transform and, 546
  repeated roots, 153-154
  total response and, 195-196
Zero-input stability, 199
Zero matrix, 37
Zero-order hold filter (ZOH filter), 449
Zero padding, 474, 475, 493-494, 883-884

Zero-state response (ZSR), 98, 168-196
  bilateral z-transform and, 972-974
  causality and, 172-173
  convolution integral for, 170-177, 176
  of discrete-time systems, 793-810
  filtering and, 799
  Fourier transform for determining, 372-373
  interconnected systems and, 190-193, 806-809
  Laplace transform and, 541, 546, 550-553
  of LTID systems, 944-948
  total response and, 195-196
  z-transform and, 944-948
Zero-state stability, 199
ZIR. See Zero-input response
ZOH filter. See Zero-order hold filter
ZSR. See Zero-state response
z-transform, 899-901, 918-925, 922, 1006
  bilateral, 920-921, 923-924, 966-974
  existence of, 921, 923
  inverse, 925-929
  inverse systems, 949-950
  Laplace transform and, 918, 956-959
  linear difference equation solutions with, 939-950
  MATLAB state-space analysis and, 1117-1119
  modified, 959, 965
  properties of, 931-939, 940
  stability and, 949
  state-space analysis of discrete-time systems and, 1115-1116
  transfer function and, 944-945

A comprehensive introductory treatment of signals and linear systems with material on discrete-time systems that can be used in both a traditional course in Signals and Systems and in an introductory course in Digital Signal Processing.

Praise from the Experts:

"This is an excellent textbook for a junior-level course in System Analysis." - Eshan Sheybani, Virginia State University

"This book has strong continuous signals and systems related content." - Mohammed Yeasin, University of Memphis

"The book's greatest strength is the experience and insight that the author obviously has in continuous time systems and the subtleties and history of the methods that are used to analyze and design them." - Peter Mathys, University of Colorado

New to This Edition
• Updated MATLAB material
• Improved navigation/titling of examples and drills
• Approximately half of the problems are new or revised

For additional information see also: www.oup.com/us/he

About the Author
B.P. Lathi is Professor Emeritus of Electrical Engineering at California State University, Sacramento. Dr. Lathi is renowned for his excellent writing across the upper-level electrical engineering curriculum.

Roger Green is an Associate Professor of Electrical Engineering at North Dakota State University. He has published numerous scholarly articles and given presentations on MATLAB, Signal Processing, and Fourier Analysis as a member of both the IEEE and ASEE. Along with four colleagues, he is the proud owner of a patent for a Vector Calibration System, designed to identify vector mismatch between a plurality of signal paths and frequencies.

Oxford University Press

www.oup.com/us/he
Cover Photo: Slot entrance into the Wave, Paria Canyon, Utah, Vermilion Cliffs NM, by Oksana Perkins, courtesy of Shutterstock.
Cover Design: OUP Design

ISBN 978-0-19-029904-0