Continuous and Discrete Signals and Systems
ISBN 9780131732384 / 0131732382 · 525 pages · 1990

CONTINUOUS AND DISCRETE SIGNALS AND SYSTEMS

Prentice-Hall International Editions

SAMIR S. SOLIMAN MANDYAM D. SRINATH

CONTINUOUS AND DISCRETE SIGNALS AND SYSTEMS

SAMIR S. SOLIMAN Southern Methodist University Department of Electrical Engineering

MANDYAM D. SRINATH Southern Methodist University Department of Electrical Engineering

Prentice-Hall International, Inc.

This edition may be sold only in those countries to which it is consigned by Prentice-Hall International. It is not to be re-exported and it is not for sale in the U.S.A., Mexico, or Canada.

© 1990 by PRENTICE-HALL, INC. A Division of Simon & Schuster, Englewood Cliffs, N.J. 07632

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

ISBN 0-13-173338-3

Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro
Prentice-Hall, Inc., Englewood Cliffs, New Jersey

To our families

Contents

PREFACE xvii

1 SIGNAL REPRESENTATION 1

1.1 Introduction 1
1.2 Continuous-Time vs. Discrete-Time Signals 2
1.3 Periodic vs. Nonperiodic Signals 4
1.4 Energy and Power Signals 7
1.5 Transformations of the Independent Variable 10
    1.5.1 The Shifting Operation, 10
    1.5.2 The Reflection Operation, 13
    1.5.3 The Time-Scaling Operation, 17
1.6 Elementary Signals 20
    1.6.1 The Unit Step Function, 20
    1.6.2 The Ramp Function, 22
    1.6.3 The Sampling Function, 22
    1.6.4 The Unit Impulse Function, 24
    1.6.5 Derivatives of the Impulse Function, 31
1.7 Orthogonal Representations of Signals 33
1.8 Other Types of Signals 38
1.9 Summary 39
1.10 Checklist of Important Terms 41
1.11 Problems 41

2 CONTINUOUS-TIME SYSTEMS 51

2.1 Introduction 51
2.2 Classification of Continuous-Time Systems 52
    2.2.1 Linear and Nonlinear Systems, 52
    2.2.2 Time-Varying and Time-Invariant Systems, 56
    2.2.3 Systems With and Without Memory, 58
    2.2.4 Causal Systems, 59
    2.2.5 Invertibility and Inverse Systems, 61
    2.2.6 Stable Systems, 62
2.3 Linear Time-Invariant Systems 63
    2.3.1 The Convolution Integral, 63
    2.3.2 Graphical Interpretation of Convolution, 69
2.4 Properties of Linear Time-Invariant Systems 75
    2.4.1 Memoryless LTI Systems, 76
    2.4.2 Causal LTI Systems, 76
    2.4.3 Invertible LTI Systems, 77
    2.4.4 Stable LTI Systems, 77
2.5 Systems Described by Differential Equations 78
    2.5.1 Linear Constant-Coefficient Differential Equations, 78
    2.5.2 Basic System Components, 80
    2.5.3 Simulation Diagrams for Continuous-Time Systems, 81
    2.5.4 Finding the Impulse Response, 85
2.6 State-Variable Representation 87
    2.6.1 State Equations, 87
    2.6.2 Time-Domain Solution of the State Equations, 89
    2.6.3 State Equations in First Canonical Form, 97
    2.6.4 State Equations in Second Canonical Form, 98
    2.6.5 Stability Considerations, 102
2.7 Summary 104
2.8 Checklist of Important Terms 106
2.9 Problems 106

3 FOURIER SERIES 117

3.1 Introduction 117
3.2 Exponential Fourier Series 118
3.3 Dirichlet Conditions 128
3.4 Properties of Fourier Series 131
    3.4.1 Least-Squares Approximation Property, 131
    3.4.2 Effects of Symmetry, 133
    3.4.3 Linearity, 136
    3.4.4 Product of Two Signals, 136
    3.4.5 Convolution of Two Signals, 137
    3.4.6 Parseval's Theorem, 139
    3.4.7 Shift in Time, 139
    3.4.8 Integration of Periodic Signals, 141
3.5 Systems with Periodic Inputs 141
3.6 Gibbs Phenomenon 149
3.7 Summary 151
3.8 Checklist of Important Terms 155
3.9 Problems 155
3.10 Computer Problems 166

4 THE FOURIER TRANSFORM 167

4.1 Introduction 167
4.2 The Continuous-Time Fourier Transform 168
    4.2.1 Development of the Fourier Transform, 168
    4.2.2 Existence of the Fourier Transform, 170
    4.2.3 Examples of the Continuous-Time Fourier Transform, 171
4.3 Properties of the Fourier Transform 178
    4.3.1 Linearity, 178
    4.3.2 Symmetry, 179
    4.3.3 Time Shifting, 180
    4.3.4 Time Scaling, 180
    4.3.5 Differentiation, 182
    4.3.6 Energy of Aperiodic Signals, 184
    4.3.7 The Convolution, 186
    4.3.8 Duality, 190
    4.3.9 Modulation, 191
4.4 Applications of the Fourier Transform 196
    4.4.1 Amplitude Modulation, 196
    4.4.2 Multiplexing, 198
    4.4.3 The Sampling Theorem, 200
    4.4.4 Signal Filtering, 205
4.5 Duration-Bandwidth Relationships 209
    4.5.1 Definitions of Duration and Bandwidth, 209
    4.5.2 The Uncertainty Principle, 214
4.6 Summary 216
4.7 Checklist of Important Terms 218
4.8 Problems 218

5 THE LAPLACE TRANSFORM 229

5.1 Introduction 229
5.2 The Bilateral Laplace Transform 230
5.3 The Unilateral Laplace Transform 233
5.4 Bilateral Transforms Using Unilateral Transforms 235
5.5 Properties of the Unilateral Laplace Transform 237
    5.5.1 Linearity, 237
    5.5.2 Time Shifting, 237
    5.5.3 Shifting in the s Domain, 238
    5.5.4 Time Scaling, 239
    5.5.5 Differentiation in the Time Domain, 239
    5.5.6 Integration in the Time Domain, 242
    5.5.7 Differentiation in the s Domain, 244
    5.5.8 Modulation, 244
    5.5.9 Convolution, 245
    5.5.10 Initial-Value Theorem, 248
    5.5.11 Final-Value Theorem, 249
5.6 The Inverse Laplace Transform 252
5.7 Simulation Diagrams for Continuous-Time Systems 255
5.8 Applications of the Laplace Transform 261
    5.8.1 Solution of Differential Equations, 261
    5.8.2 Application to RLC Circuit Analysis, 262
    5.8.3 Application to Control, 264
5.9 State Equations and the Laplace Transform 268
5.10 Stability in the s Domain 271
5.11 Summary 273
5.12 Checklist of Important Terms 275
5.13 Problems 275

6 DISCRETE-TIME SYSTEMS 285

6.1 Introduction 285
6.2 Elementary Discrete-Time Signals 286
    6.2.1 Discrete Impulse and Step Functions, 286
    6.2.2 Exponential Sequences, 288
    6.2.3 Scaling of Discrete-Time Signals, 291
6.3 Discrete-Time Systems 293
    6.3.1 System Impulse Response and the Convolution Sum, 293
6.4 Periodic Convolution 300
6.5 Difference Equation Representation of Discrete-Time Systems 303
    6.5.1 Homogeneous Solution of the Difference Equation, 304
    6.5.2 The Particular Solution, 306
    6.5.3 Determination of the Impulse Response, 309
6.6 Simulation Diagrams for Discrete-Time Systems 311
6.7 State-Variable Representation of Discrete-Time Systems 315
    6.7.1 Solution of State-Space Equations, 319
    6.7.2 Impulse Response of Systems Described by State Equations, 321
6.8 Stability of Discrete-Time Systems 322
6.9 Summary 323
6.10 Checklist of Important Terms 324
6.11 Problems 325

7 FOURIER ANALYSIS OF DISCRETE-TIME SYSTEMS 333

7.1 Introduction 333
7.2 Fourier-Series Representation of Discrete-Time Periodic Signals 335
7.3 The Discrete-Time Fourier Transform 343
7.4 Properties of the Discrete-Time Fourier Transform 348
    7.4.1 Periodicity, 348
    7.4.2 Linearity, 348
    7.4.3 Time and Frequency Shifting, 348
    7.4.4 Differentiation in Frequency, 348
    7.4.5 Convolution, 349
    7.4.6 Modulation, 352
    7.4.7 Fourier Transform of Discrete-Time Periodic Sequences, 352
7.5 Fourier Transform of Sampled Continuous-Time Signals 354
7.6 Summary 357
7.7 Checklist of Important Terms 359
7.8 Problems 359

8 THE Z-TRANSFORM 364

8.1 Introduction 364
8.2 The Z-Transform 365
8.3 Convergence of the Z-Transform 367
8.4 Z-Transform Properties 371
    8.4.1 Linearity, 373
    8.4.2 Time Shifting, 373
    8.4.3 Frequency Scaling, 375
    8.4.4 Differentiation with Respect to z, 376
    8.4.5 Initial Value, 377
    8.4.6 Final Value, 377
    8.4.7 Convolution, 377
8.5 The Inverse Z-Transform 380
    8.5.1 Inversion by a Power-Series Expansion, 381
    8.5.2 Inversion by Partial-Fraction Expansion, 382
8.6 Z-Transfer Function of Causal Discrete-Time Systems 385
8.7 Z-Transform Analysis of State-Variable Systems 387
    8.7.1 Frequency-Domain Solution of the State Equations, 392
8.8 Relation Between the Z-Transform and the Laplace Transform 395
8.9 Summary 396
8.10 Checklist of Important Terms 398
8.11 Problems 398

9 THE DISCRETE FOURIER TRANSFORM 403

9.1 Introduction 403
9.2 The Discrete Fourier Transform and Its Inverse 405
9.3 DFT Properties 406
    9.3.1 Linearity, 406
    9.3.2 Time Shifting, 406
    9.3.3 Alternate Inversion Formula, 407
    9.3.4 Time Convolution, 407
    9.3.5 Relation to the Z-Transform, 409
9.4 Linear Convolution Using the DFT 409
9.5 Fast Fourier Transforms 411
    9.5.1 The Decimation-in-Time (DIT) Algorithm, 412
    9.5.2 The Decimation-in-Frequency (DIF) Algorithm, 416
9.6 Spectral Estimation of Analog Signals Using the DFT 419
9.7 Summary 425
9.8 Checklist of Important Terms 426
9.9 Problems 427

10 DESIGN OF ANALOG AND DIGITAL FILTERS 430

10.1 Introduction 430
10.2 Frequency Transformations 433
10.3 Design of Analog Filters 436
    10.3.1 The Butterworth Filter, 436
    10.3.2 The Chebyshev Filter, 441
10.4 Digital Filters 445
    10.4.1 Design of IIR Digital Filters Using Impulse Invariance, 446
    10.4.2 IIR Design Using the Bilinear Transformation, 450
    10.4.3 FIR Filter Design, 452
    10.4.4 Computer-Aided Design of Digital Filters, 457
10.5 Summary 458
10.6 Checklist of Important Terms 459
10.7 Problems 459

APPENDIX A COMPLEX NUMBERS 461

A.1 Definition 461
A.2 Arithmetical Operations 463
    A.2.1 Addition and Subtraction, 463
    A.2.2 Multiplication, 463
    A.2.3 Division, 464
A.3 Powers and Roots of Complex Numbers 465
A.4 Inequalities 466

APPENDIX B MATHEMATICAL RELATIONS 467

B.1 Trigonometric Identities 467
B.2 Exponential and Logarithmic Functions 468
B.3 Special Functions 469
    B.3.1 Gamma Functions, 469
    B.3.2 Incomplete Gamma Functions, 469
    B.3.3 Beta Functions, 470
B.4 Power-Series Expansion 470
B.5 Sums of Powers of Natural Numbers 470
    B.5.1 Sums of Binomial Coefficients, 471
    B.5.2 Series Exponentials, 471
B.6 Definite Integrals 472
B.7 Indefinite Integrals 473

APPENDIX C ELEMENTARY MATRIX THEORY 476

C.1 Basic Definition 476
C.2 Basic Operations 477
    C.2.1 Matrix Addition, 477
    C.2.2 Differentiation and Integration, 477
    C.2.3 Matrix Multiplication, 477
C.3 Special Matrices 478
C.4 The Inverse of a Matrix 480
C.5 Eigenvalues and Eigenvectors 481
C.6 Functions of a Matrix 482
    C.6.1 The Cayley-Hamilton Theorem, 483

APPENDIX D PARTIAL FRACTIONS 486

D.1 Case I: Nonrepeated Linear Factors 487
D.2 Case II: Repeated Linear Factors 488
D.3 Case III: Nonrepeated Irreducible Second-Degree Factors 490
D.4 Case IV: Repeated Irreducible Second-Degree Factors 491

BIBLIOGRAPHY 492

INDEX 495

Preface

This book provides an introductory but comprehensive treatment of the subject of continuous- and discrete-time signals and systems. The aim of building complex systems that perform sophisticated tasks imposes on engineering students a need to enhance their knowledge of signals and systems so that they are able to use effectively the rich variety of analysis and synthesis techniques that are available. This may explain why the subject of signals and systems is a core course in the electrical engineering curriculum in most schools. In writing this book, we have tried to present the most widely used techniques of signal and system analysis in a form appropriate for instruction at the junior or senior level in electrical engineering. The concepts and techniques that form the core of the book are of fundamental importance and should also prove useful to engineers wishing to update or extend their understanding of signals and systems through self-study.

The book is divided into two parts. In the first part, a comprehensive treatment of continuous-time signals and systems is presented. In the second part, the results are extended to discrete-time signals and systems. In our experience, we have found that covering both continuous-time and discrete-time systems together frequently confuses students: they often are not clear as to whether a particular concept or technique applies to continuous-time or discrete-time systems, or both, and as a result they often use solution techniques that simply do not apply to the problem at hand. Furthermore, most physics, mathematics, and basic engineering courses preceding this course deal exclusively with continuous-time signals and systems. Consequently, students are able to follow the development of the theory and analysis of continuous-time systems without difficulty. Once they have become familiar with this material, which is covered in the first five chapters, students should be ready to handle discrete-time signals and systems.

The book is organized so that all the chapters are distinct but closely related, with smooth transitions between chapters, thereby providing considerable flexibility in course design. By appropriate choice of material, the book can be used as a text in several courses, such as transform theory (Chapters 1, 3, 4, 5, 7, and 8), continuous-time signals and systems (Chapters 1, 2, 3, 4, and 5), discrete-time signals and systems (Chapters 6, 7, 8, and 9), and signals and systems: continuous and discrete (Chapters 1, 2, 3, 4, 6, 7, and 8). We have been using the book at Southern Methodist University for a one-semester course covering both continuous-time and discrete-time systems, and it has proved successful.

Normally, a signals and systems course is taught in the third year of a four-year undergraduate curriculum. Although the book is designed to be self-contained, a knowledge of calculus through integration of trigonometric functions, as well as some knowledge of differential equations, is presumed. A prior exposure to matrix algebra as well as a course in circuit analysis is preferable but not necessary. These prerequisite skills should be mastered by all electrical engineering students by their junior year. No prior experience with system analysis is required. While we use mathematics extensively, we have done so not rigorously, but in an engineering context, as a means to enhance physical and intuitive understanding of the material. Whenever possible, theoretical results are interpreted heuristically and supported by intuitive arguments.

As with all subjects involving problem solving, we feel that it is imperative that a student see many solved problems related to the material covered. We have included a large number of examples that are worked out in detail to illustrate concepts and to show the student the application of the theory developed in the text.
In order to make the student aware of the wide range of applications of the principles that are covered, applications with practical significance are mentioned. These applications are selected to illustrate key concepts, stimulate interest, and bring out connections with other branches of electrical engineering.

It is well recognized that the student does not fully understand a subject of this nature unless he or she is given the opportunity to work out problems in using and applying the basic tools that are developed in each chapter. This not only reinforces the understanding of the subject matter but, in some cases, allows for the extension of various concepts discussed in the text. In certain cases, even new material is introduced via the problem sets. Consequently, over 257 end-of-chapter problems have been included. These problems are of various types. Some are straightforward applications of the basic ideas presented in the chapters and are included to ensure that the student understands the material fully. Some are moderately difficult, and other problems require that the student apply the theory he or she learned in the chapter to problems of practical importance.

The relative amount of "design" work in various courses is always a concern for the electrical engineering faculty. The inclusion in this text of analog- and digital-filter design, as well as other design-related material, is in direct response to that concern.

At the end of each chapter, we have included an item-by-item summary of all the important concepts and formulas covered in that chapter, as well as a checklist of all important terms discussed. This list serves as a reminder to the student of material that deserves special attention.

Throughout the book, the emphasis is on linear time-invariant systems. The focus in Chapter 1 is on signals. This material, which is basic to the remainder of the book, considers the mathematical representation of signals. In this chapter, we cover a variety of subjects such as periodic signals, energy and power signals, transformations of the independent variable, elementary signals, and the orthogonal representation of arbitrary signals.

Chapter 2 is devoted to the time-domain characterization of continuous-time (CT) linear time-invariant (LTIV) systems. The chapter starts with the classification of continuous-time systems and then introduces the impulse-response characterization of LTIV systems and the convolution integral. This is followed by a discussion of systems characterized by linear constant-coefficient differential equations. Simulation diagrams for such systems are presented and used as a stepping-stone to introduce the state-variable concept. The chapter concludes with a discussion of stability.

To this point, the focus is on the time-domain description of signals and systems. Starting with Chapter 3, we consider frequency-domain descriptions. The Fourier-series representation of periodic signals and their properties are presented. The concept of line spectra for describing the frequency content of such signals is given. The response of linear systems to periodic inputs is discussed. The chapter concludes with a discussion of the Gibbs phenomenon.

Chapter 4 begins with the development of the Fourier transform. Conditions under which the Fourier transform exists are presented and its properties discussed. Applications of the Fourier transform in areas such as amplitude modulation, multiplexing, sampling, and signal filtering are considered. The use of the transfer function in determining the response of LTIV systems is discussed. The Nyquist sampling theorem is derived from the impulse-modulation model for sampling. Several definitions of bandwidth are introduced and duration-bandwidth relationships discussed.

Chapter 5 deals with the Laplace transform. Both unilateral and bilateral Laplace transforms are defined. Properties of the Laplace transform are derived, and examples are given to demonstrate how these properties are used to evaluate new Laplace transform pairs or to find the inverse Laplace transform. The concept of the transfer function is introduced, and other applications of the Laplace transform, such as the solution of differential equations, circuit analysis, and control systems, are presented. The state-variable representation of systems in the frequency domain and the solution of the state equations using Laplace transforms are discussed.

The treatment of continuous-time signals and systems ends with Chapter 5, and a course emphasizing only CT material can be ended at this point. By the end of this chapter, the reader should have acquired a good understanding of continuous-time signals and systems and should be ready for the second half of the book, in which discrete-time signals and systems are covered.

We start our consideration of discrete-time systems in Chapter 6 with a discussion of elementary discrete-time signals. The impulse-response characterization of discrete-time systems is presented and the convolution sum for determining the response to

arbitrary inputs is derived. The difference-equation representation of discrete-time systems and their solution is given. As in CT systems, simulation diagrams are discussed as a means of obtaining the state-variable representation of discrete-time systems.

Chapter 7 considers the Fourier analysis of discrete-time signals. The Fourier series for periodic sequences and the Fourier transform for arbitrary signals are derived. The similarities and differences between these and their continuous-time counterparts are brought out, and their properties and applications discussed. The relation between the continuous-time and discrete-time Fourier transforms of sampled analog signals is derived and used to obtain the impulse-modulation model for sampling that is considered in Chapter 4.

Chapter 8 discusses the Z-transform of discrete-time signals. The development follows closely that of Chapter 5 for the Laplace transform. Properties of the Z-transform are derived and their application in the analysis of discrete-time systems developed. The solution of difference equations and the analysis of state-variable systems using the Z-transform are also discussed. Finally, the relation between the Laplace and the Z-transforms of sampled signals is derived, and the mapping of the s-plane onto the z-plane is discussed.

Chapter 9 introduces the discrete Fourier transform (DFT) for analyzing finite-length sequences. The properties of the DFT are derived and the differences with the other transforms discussed in the book are noted. The application of the DFT to linear system analysis and to spectral estimation of analog signals is discussed. Two popular fast Fourier transform (FFT) algorithms for the efficient computation of the DFT are presented.

The final chapter, Chapter 10, considers some techniques for the design of analog and digital filters. Techniques for the design of two low-pass analog filters, namely, the Butterworth and the Chebyshev filters, are given. The impulse-invariance and bilinear techniques for designing digital IIR filters are derived. Design of FIR digital filters using window functions is also discussed. The chapter concludes with a very brief overview of computer-aided techniques.

In addition, four appendices are included. They should prove useful as a readily available source for some of the background material in complex variables and matrix algebra necessary for the course. A somewhat extensive list of frequently used formulas is also included.

We wish to acknowledge the many people who have helped us in writing this book, especially the students on whom much of this material was classroom tested, and the reviewers whose comments were very useful. We wish to thank Dyan Muratalla, who typed a substantial part of the manuscript. Finally, we would like to thank our wives and families for their patience during the completion of this book.

S. Soliman
M. D. Srinath
The impulse invariance and bilinear techniques for designing digital IIR filters are derived. Design of FIR digital filters using window functions is also discussed. The chapter concludes with a very brief overview of computer-aided techniques. In addition, four appendices are included. They should prove useful as a readily available source for some of the background material in complex variables and matrix algebra necessary for the course. A somewhat extensive list of frequently used formu­ las is also included. We wish to acknowledge the many people who have helped us in writing this book, especially the students on whom much of this material was classroom tested, and the reviewers whose comments were very useful. We wish to thank Dyan Muratalla, who typed a substantial part of the manuscript. Finally, we would like to thank our wives and families for their patience during the completion of this book. S. Soliman M. D. Srinath

CONTINUOUS AND DISCRETE SIGNALS AND SYSTEMS

Chapter 1

Signal Representations

1.1 INTRODUCTION

Signals are detectable physical quantities or variables by which messages or information can be transmitted. A wide variety of signals are of practical importance in describing physical phenomena. Examples include the human voice, television pictures, teletype data, and atmospheric temperature. Electrical signals are the most easily measured and the most simply represented type of signals. Therefore, many engineers prefer to transform physical variables to electrical signals. For example, many physical quantities, such as temperature, humidity, speech, wind speed, and light intensity, can be transformed, using transducers, to time-varying current or voltage signals. Electrical engineers deal with signals that have a broad range of shapes, amplitudes, time durations, and perhaps other physical properties. For example, a radar-system designer analyzes high-energy microwave pulses, a communication-system engineer who is concerned with signal detection and signal design analyzes information-carrying signals, a power engineer deals with high-voltage signals, and a computer engineer deals with millions of pulses per second.

Mathematically, signals are represented as functions of one or more independent variables. For example, time-varying current or voltage signals are functions of one variable (time). The vibration of a rectangular membrane can be represented as a function of two spatial variables (x and y coordinates), and the electric field intensity can be looked at as a function of two variables (time and space). Finally, an image signal can be regarded as a function of two variables (x and y coordinates). In this introductory course on signals and systems, we focus attention on signals involving one independent variable, which we consider to be time, although it can be different in some specific applications.

We begin this chapter with an introduction to two classes of signals that we are concerned with throughout the text, namely, continuous-time and discrete-time signals.


Then in Section 1.3, we define periodic signals. Section 1.4 deals with the issue of power and energy signals. A number of transformations of the independent variable are discussed in Section 1.5. In Section 1.6, we introduce several important elementary signals that not only occur frequently in applications, but also serve as a basis to represent other signals. Section 1.7 is devoted to the orthogonal representation of signals. Orthogonal-series representation of signals is of significant importance in many applications such as Fourier series, sampling functions, and representations of discrete-time signals. These specific applications are so important that they are studied in some detail throughout the text. Other types of signals that are of importance to engineers are mentioned in Section 1.8.

1.2 CONTINUOUS-TIME VS. DISCRETE-TIME SIGNALS

One way to classify signals is according to the nature of the independent variable. If the independent variable is continuous, the corresponding signal is called a continuous-time signal and is defined for a continuum of values of the independent variable. A telephone or a radio signal as a function of time or an atmospheric pressure as a function of altitude are examples of continuous-time signals; see Figure 1.2.1.

Figure 1.2.1 Examples of continuous-time signals.

A continuous-time signal x(t) is said to be discontinuous (in amplitude) at t = t₁ if x(t₁⁺) ≠ x(t₁⁻), where t₁ − t₁⁻ and t₁⁺ − t₁ are infinitesimal positive numbers. Signal x(t) is continuous at t = t₁ if x(t₁) = x(t₁⁻) = x(t₁⁺). If signal x(t) is continuous at all points t, we say that x(t) is continuous. There are many continuous-time signals of interest that are not continuous at all points t. An example is the rectangular pulse function rect(t/τ) defined by

    rect(t/τ) = 1 for |t| < τ/2, and 0 otherwise    (1.2.1)


A continuous-time signal x(t) is said to be piecewise continuous if it is continuous at all t except at a finite or countably infinite number of points. For such functions, we define the value of the signal at a point of discontinuity t₁ as

    x(t₁) = ½[x(t₁⁺) + x(t₁⁻)]    (1.2.2)

The rectangular pulse function rect(t/τ) shown in Figure 1.2.2 is an example of a piecewise continuous signal. Another example is the pulse train shown in Figure 1.2.3. This signal is continuous at all t except at t = 0, ±1, ±2, .... It follows from the definition that finite jumps are the only discontinuities that a piecewise continuous signal can have. Finite jumps are known as ordinary discontinuities. Furthermore, it is clear that the class of piecewise continuous signals includes every continuous signal.

Figure 1.2.2 A rectangular pulse signal.
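The rectangular pulse and the midpoint convention of Eq. (1.2.2) are easy to state in code. The sketch below is illustrative only; the function name and the calling convention (pulse width passed as a second argument) are our own choices, not the text's.

```python
def rect(t, tau):
    """Rectangular pulse of width tau centered at t = 0.

    At the two jump points t = +/- tau/2 we return the midpoint value
    0.5, following the convention of Eq. (1.2.2) for piecewise
    continuous signals: x(t1) = (x(t1+) + x(t1-)) / 2.
    """
    if abs(t) < tau / 2:
        return 1.0
    elif abs(t) == tau / 2:
        return 0.5   # ordinary (finite-jump) discontinuity
    else:
        return 0.0

# Inside the pulse, at an edge, and outside:
print(rect(0.0, 2.0), rect(1.0, 2.0), rect(3.0, 2.0))  # 1.0 0.5 0.0
```

The exact floating-point comparison at the edge is acceptable here because the discontinuity points are given exactly; a numerical application would test within a tolerance instead.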

If the independent variable takes on only discrete values t = kTs, where Ts is a fixed positive real number and k ranges over the set of integers (i.e., k = 0, ±1, ±2, etc.), the corresponding signal x(kTs) is called a discrete-time signal. Discrete-time signals arise naturally in many areas of business, economics, science, and engineering. Examples of discrete-time signals are the amount of a loan payment in the kth month, the weekly Dow Jones stock index, and the output of an information source that produces one of the digits 1, 2, ..., M every Ts seconds. We consider discrete-time signals in more detail in Chapter 6.

Figure 1.2.3 A pulse train.
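A discrete-time signal x(kTs) can be produced by evaluating a continuous-time expression only at the instants t = kTs. A minimal sketch, in which the choice of x(t) = sin 2πt and Ts = 0.1 s is ours and purely illustrative:

```python
import math

Ts = 0.1   # sampling interval in seconds (an assumed value)

def x(t):
    # A continuous-time signal, defined for every real t.
    return math.sin(2 * math.pi * t)

# The discrete-time signal exists only at t = k*Ts, k an integer.
samples = [x(k * Ts) for k in range(-2, 3)]
print([round(s, 4) for s in samples])  # [-0.9511, -0.5878, 0.0, 0.5878, 0.9511]
```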


1.3 PERIODIC VS. NONPERIODIC SIGNALS

Any continuous-time signal that satisfies the condition

    x(t) = x(t + nT),    n = 1, 2, 3, ...    (1.3.1)

where T is a constant known as the fundamental period, is classified as a periodic signal. A signal x(t) that is not periodic is referred to as an aperiodic signal. Familiar examples of periodic signals are the sinusoidal functions. A real-valued sinusoidal signal can be expressed mathematically by a time-varying function of the form

    x(t) = A sin(ω₀t + φ)    (1.3.2)

where

    A  = amplitude
    ω₀ = radian frequency in rad/s
    φ  = initial phase angle with respect to the time origin in rad

This sinusoidal signal is periodic with fundamental period T = 2π/ω₀ for all values of ω₀. The sinusoidal time function described in Equation (1.3.2) is usually referred to as a sine wave. Examples of physical phenomena that approximately produce sinusoidal signals are the voltage output of an electrical alternator and the vertical displacement of a mass attached to a spring, under the assumption that the spring has negligible mass and no damping. The pulse train shown in Figure 1.2.3 is another example of a periodic signal, with fundamental period T = 2. Notice that if x(t) is periodic with fundamental period T, then x(t) is also periodic with period 2T, 3T, 4T, .... The fundamental radian frequency of the periodic signal x(t) is related to the fundamental period by the relationship

    ω₀ = 2π/T    (1.3.3)
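The defining property (1.3.1) and the period-frequency relation (1.3.3) can be checked numerically for a concrete sinusoid. In the sketch below, the parameter values A = 2, ω₀ = 3 rad/s, and φ = 0.5 rad are arbitrary choices of ours, not values taken from the text:

```python
import math

A, w0, phi = 2.0, 3.0, 0.5     # assumed amplitude, frequency, phase
T = 2 * math.pi / w0           # fundamental period, Eq. (1.3.3)

def x(t):
    return A * math.sin(w0 * t + phi)   # Eq. (1.3.2)

# x(t) = x(t + n*T) for n = 1, 2, 3, ..., as required by Eq. (1.3.1).
for n in range(1, 4):
    assert abs(x(0.8 + n * T) - x(0.8)) < 1e-9
print("periodic with T =", round(T, 4))
```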

Engineers and most mathematicians refer to the sinusoidal signal with radian frequency ωₖ = kω₀ as the kth harmonic. For example, the signal shown in Figure 1.2.3 has a fundamental radian frequency ω₀ = π, a second harmonic radian frequency ω₂ = 2π, and a third harmonic radian frequency ω₃ = 3π. Figure 1.3.1 shows the first, second, and third harmonics of signal x(t) in Eq. (1.3.2) for specific values of A, ω₀, and φ. As can be seen from the figure, the waveforms corresponding to each harmonic are distinct. In theory, we can associate an infinite number of distinct harmonic signals with a given sinusoidal waveform. Periodic signals occur frequently in physical problems. In this section, we discuss the mathematical representation of periodic signals. In Chapter 3, we show how to represent any periodic signal in terms of simple periodic signals, such as sine and cosine.

Example 1.3.1

Harmonically related continuous-time exponentials are sets of complex exponentials with fundamental frequencies that are all multiples of a single positive frequency ω₀. Mathematically,


Figure 1.3.1 Harmonically related sinusoids.

    φₖ(t) = exp[jkω₀t],    k = 0, ±1, ±2, ...    (1.3.4)

We show that for k ≠ 0, φₖ(t) is periodic with fundamental period 2π/|kω₀| or fundamental frequency |kω₀|. In order for signal φₖ(t) to be periodic with period T > 0, we must have

    exp[jkω₀(t + T)] = exp[jkω₀t]

or, equivalently,

    exp[jkω₀T] = 1

so that kω₀T must be an integer multiple of 2π, and the smallest such T is 2π/|kω₀|. Note that since a signal that is periodic with period T is also periodic with period lT for any positive integer l, all signals φₖ(t) have a common period of 2π/ω₀. ■
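The conclusion of Example 1.3.1, that every φₖ(t) = exp(jkω₀t) repeats after the common period 2π/ω₀, is easy to confirm numerically. In this sketch, the value ω₀ = π and the test instant 0.37 s are arbitrary choices of ours:

```python
import cmath
import math

w0 = math.pi                  # assumed fundamental frequency
T = 2 * math.pi / w0          # common period 2*pi/w0 (= 2 here)

def phi(k, t):
    # k-th harmonically related complex exponential, Eq. (1.3.4)
    return cmath.exp(1j * k * w0 * t)

# Each harmonic returns to its value after one common period.
for k in range(-3, 4):
    assert abs(phi(k, 0.37 + T) - phi(k, 0.37)) < 1e-9
print("all harmonics share period", T)
```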

The sum of two periodic signals may or may not be periodic. Consider the two periodic signals x(t) and y(t) with fundamental periods T₁ and T₂, respectively. We investigate under what conditions the sum

    z(t) = ax(t) + by(t)

is periodic and, if it is periodic, what its fundamental period is. Since x(t) is periodic with period T₁, then

    x(t) = x(t + kT₁)

Similarly,

    y(t) = y(t + lT₂)

where k and l are integers such that

    z(t) = ax(t + kT₁) + by(t + lT₂)

In order for z(t) to be periodic with period T, one needs

    ax(t + T) + by(t + T) = ax(t + kT₁) + by(t + lT₂)

We therefore must have

    T = kT₁ = lT₂

Signal Representations    Chapter 1

or, equivalently,

T1/T2 = l/k

In other words, the sum of two periodic signals is periodic only if the ratio of their respective periods can be expressed as a rational number.
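When T1 and T2 are given as exact rational numbers, the common period T = kT1 = lT2 is their least common multiple. The Python sketch below is illustrative only; it applies solely to rational periods (a signal such as sin 13t, with irrational period 2π/13, is outside its scope):

```python
from fractions import Fraction
from math import gcd, lcm

def common_period(t1: Fraction, t2: Fraction) -> Fraction:
    """Fundamental period of a sum of signals with rational periods t1, t2.

    For fractions in lowest terms, lcm(a/b, c/d) = lcm(a, c) / gcd(b, d).
    """
    return Fraction(lcm(t1.numerator, t2.numerator),
                    gcd(t1.denominator, t2.denominator))

# Periods of sin(5*pi*t) and cos(4*pi*t), i.e., T1 = 2/5 and T2 = 1/2
T = common_period(Fraction(2, 5), Fraction(1, 2))
print(T)  # → 2
```

Note that `math.lcm` requires Python 3.9 or later.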

Example 1.3.2

We wish to determine which of the following signals are periodic.

(a) x1(t) = sin 5πt
(b) x2(t) = cos 4πt
(c) x3(t) = sin 13t
(d) x4(t) = 2x1(t) - x2(t)
(e) x5(t) = x1(t) + 3x3(t)

For these signals, x1(t) is periodic with period T1 = 2/5, x2(t) is periodic with period T2 = 1/2, x3(t) is periodic with fundamental period T3 = 2π/13, and x4(t) is periodic with period T4 = 2, but x5(t) is not periodic because there are no integers l and k such that T5 = kT1 = lT3. In other words, the ratio of the periods of x1(t) and x3(t) is not a rational number. ■

Note that if x(t) and y(t) have the same period T, then z(t) = x(t) + y(t) is periodic with period T; i.e., linear operations (addition in this case) do not affect the periodicity of the resulting signal. Nonlinear operations on periodic signals (such as multiplication) produce periodic signals with different fundamental periods. The following example demonstrates this fact.

Example 1.3.3

Let x(t) = cos ω1t and y(t) = cos ω2t, and consider the signal z(t) = x(t)y(t). Signal x(t) is periodic with period 2π/ω1, and signal y(t) is periodic with period 2π/ω2. The fact that z(t) = x(t)y(t) has two components, one with radian frequency ω2 - ω1 and the other with radian frequency ω2 + ω1, can be seen by rewriting the product of the two cosine signals as

cos ω1t cos ω2t = (1/2)[cos(ω2 - ω1)t + cos(ω2 + ω1)t]

If ω1 = ω2 = ω, then z(t) will have a constant term (1/2) and a second-harmonic term ((1/2) cos 2ωt). In general, nonlinear operations on periodic signals can produce higher-order harmonics. ■
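The product-to-sum identity used above is easy to confirm numerically. In the following Python check, the two radian frequencies and the sample times are arbitrary illustrative values, not taken from the text:

```python
import math

w1, w2 = 3.0, 5.0                       # arbitrary radian frequencies
for t in (0.0, 0.37, 1.2, 2.5):
    lhs = math.cos(w1 * t) * math.cos(w2 * t)
    rhs = 0.5 * (math.cos((w2 - w1) * t) + math.cos((w2 + w1) * t))
    # cos(w1 t) cos(w2 t) = (1/2)[cos((w2 - w1) t) + cos((w2 + w1) t)]
    assert abs(lhs - rhs) < 1e-12
print("product-to-sum identity verified")
```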



Since a periodic signal is a signal of infinite duration that starts at t = -∞ and goes on to t = ∞, all practical signals are nonperiodic. Nevertheless, the study of the system response to periodic inputs is essential (as we see in Chapter 4) in the process of developing the system response to all practical inputs.


1.4 ENERGY AND POWER SIGNALS

Let x(t) be a real-valued signal. If x(t) represents the voltage across a resistance R, it produces a current i(t) = x(t)/R. The instantaneous power of the signal is Ri²(t) = x²(t)/R, and the energy expended during the incremental interval dt is x²(t)/R dt. In general, we do not necessarily know whether x(t) is a voltage or a current signal, and, in order to normalize power, we assume that R = 1 ohm. Hence, the instantaneous power associated with the signal x(t) is x²(t). The signal energy over a time interval of length 2L is defined as

E_2L = ∫_{-L}^{L} |x(t)|² dt        (1.4.1)

and the total energy in the signal over the range t ∈ (-∞, ∞) can be defined as

E = lim_{L→∞} ∫_{-L}^{L} |x(t)|² dt        (1.4.2)

The average power can then be defined as

P = lim_{L→∞} (1/2L) ∫_{-L}^{L} |x(t)|² dt        (1.4.3)

Although we have used electrical signals to develop Equations (1.4.2) and (1.4.3), these equations define the energy and power, respectively, of any arbitrary signal x(t). When the integral in Equation (1.4.2) exists and yields 0 < E < ∞, the signal x(t) is said to be an energy signal. Inspection of Equation (1.4.3) reveals that energy signals have zero power. On the other hand, if the limit in Equation (1.4.3) exists and yields 0 < P < ∞, then x(t) is a power signal. Power signals have infinite energy. As stated earlier, periodic signals are assumed to exist for all time from -∞ to +∞ and, therefore, have infinite energy. If these periodic signals have finite average power (which they do in most cases), then they are power signals. In contrast, bounded finite-duration signals are energy signals.
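This classification can be explored numerically by truncating the integrals at a finite L, as in Equation (1.4.1). The Python sketch below is a rough Riemann-sum illustration; the two test signals, the step size, and the window length are arbitrary choices, not from the text. It shows an energy signal's windowed power shrinking with L while a power signal's power stays finite:

```python
import math

def energy_and_power(x, L, dt=1e-3):
    # Riemann-sum approximations of Equations (1.4.1) and (1.4.3)
    n = round(L / dt)
    e = sum(abs(x(k * dt)) ** 2 for k in range(-n, n)) * dt
    return e, e / (2 * L)

pulse = lambda t: math.exp(-abs(t))   # an energy signal: E = 1, P -> 0
steady = lambda t: 1.0                # a power signal: P = 1, E -> infinity

e1, p1 = energy_and_power(pulse, 50.0)
e2, p2 = energy_and_power(steady, 50.0)
# e1 stays near 1 while p1 shrinks as L grows;
# e2 grows like 2L while p2 stays near 1
```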

Example 1.4.1

In this example, we show that for a periodic signal with period T, the average power is given by

P = (1/T) ∫_0^T |x(t)|² dt        (1.4.4)

If x(t) is periodic with period T, then the integral in Equation (1.4.3) is the same over any interval of length T. Allowing the limit to be taken in a manner such that 2L is an integral multiple of the period, 2L = mT, the total energy of x(t) over an interval of length 2L is m times the energy over one period. The average power is then

P = lim_{m→∞} (m/mT) ∫_0^T |x(t)|² dt


= (1/T) ∫_0^T |x(t)|² dt        ■
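For a periodic signal, the windowed power of Equation (1.4.3) settles to the one-period average of Equation (1.4.4) as the window covers more periods. The Python sketch below illustrates this; the square-wave levels, the period, and the step size are arbitrary illustrative choices, not from the text:

```python
def windowed_power(x, L, dt=1e-3):
    # Equation (1.4.3) with the limit truncated at a finite L
    n = round(L / dt)
    return sum(abs(x(k * dt)) ** 2 for k in range(-n, n)) * dt / (2 * L)

T = 2.0
square = lambda t: 1.0 if (t % T) < T / 2 else -0.5   # illustrative periodic signal

# One-period average power, Equation (1.4.4)
n = round(T / 1e-3)
P_one = sum(abs(square(k * 1e-3)) ** 2 for k in range(n)) * 1e-3 / T

# A window spanning many whole periods reproduces the one-period average
assert abs(windowed_power(square, 10 * T) - P_one) < 5e-3
```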

Example 1.4.2

Consider the signals in Figure 1.4.1. We wish to classify each signal as a power or an energy signal by calculating the power and energy in each case.

[Figure 1.4.1    Signals for Example 1.4.2.]

The signal in Figure 1.4.1(a) is aperiodic, with total energy

E = ∫_0^∞ A² exp[-2t] dt = A²/2

which is finite. The average power is

P = lim_{L→∞} (1/2L) ∫_0^L A² exp[-2t] dt = lim_{L→∞} (A²/4L)(1 - exp[-2L]) = 0

Therefore, the signal in Figure 1.4.1(a) is an energy signal with energy A²/2 and zero average power. The signal in Figure 1.4.1(b) is periodic with period T. By using Equation (1.4.4), the average power of x2(t) is

P = (1/T) ∫_0^T |x2(t)|² dt = 2A²τ/T

Therefore, x2(t) is a power signal with infinite energy and average power 2A²τ/T. ■

Example 1.4.3

Consider the sinusoidal signal

x(t) = A sin(ω0t + φ)

This sinusoidal signal is periodic with period

T = 2π/ω0

The average power of this signal is

P = (1/T) ∫_0^T A² sin²(ω0t + φ) dt

  = (A²ω0/2π) ∫_0^{2π/ω0} [1/2 - (1/2) cos(2ω0t + 2φ)] dt

  = A²/2

The last step follows because the signal cos(2ω0t + 2φ) is periodic with period T/2, and the area under a cosine signal over any interval of length lT is always zero, where l is a positive integer. (You should have no trouble confirming this result if you draw two complete periods of cos(2ω0t + 2φ).) ■
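The A²/2 result is independent of ω0 and φ, which can be checked numerically. In the Python sketch below, the amplitude, frequency, and phase are arbitrary illustrative values; averaging x²(t) over exactly one full period makes the estimate essentially exact:

```python
import math

A, w0, phi = 3.0, 2 * math.pi * 5, 0.7   # arbitrary illustrative values
T = 2 * math.pi / w0
N = 10_000                                # samples per period
dt = T / N
# Average of x^2(t) over one full period, Equation (1.4.4)
P = sum((A * math.sin(w0 * k * dt + phi)) ** 2 for k in range(N)) * dt / T
assert abs(P - A**2 / 2) < 1e-9
```

Sampling over a whole period is what makes this so accurate: the cos(2ω0t + 2φ) term averages to zero exactly over equally spaced points spanning one period.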

Example 1.4.4

Consider the two nonperiodic signals shown in Figure 1.4.2. These two signals are examples of energy signals. The rectangular pulse shown in Figure 1.4.2(a) is strictly time-limited, since x1(t) is identically zero outside the pulse duration. The other signal is asymptotically time-limited in the sense that x2(t) → 0 as t → ±∞. Such signals may also be described loosely as "pulses." In either case, the average power equals zero. The energy of signal x1(t) is

E1 = lim_{L→∞} ∫_{-L}^{L} x1²(t) dt = ∫_{-τ/2}^{τ/2} A² dt = A²τ

For x2(t),

E2 = lim_{L→∞} ∫_{-L}^{L} A² exp[-2a|t|] dt


= lim_{L→∞} (A²/a)(1 - exp[-2aL]) = A²/a

Since E1 and E2 are finite, x1(t) and x2(t) are energy signals. Almost all time-limited signals of practical interest are energy signals. ■

[Figure 1.4.2    Signals for Example 1.4.4.]
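Both closed-form energies from Example 1.4.4 can be cross-checked numerically. In the Python sketch below, the values of A, τ, and a are arbitrary illustrative choices, and the integrals are crude Riemann sums over a window wide enough for the exponential tail to be negligible:

```python
import math

A, tau, a = 2.0, 1.5, 0.8   # arbitrary illustrative parameter values

def energy(x, L=30.0, dt=1e-3):
    # Truncated version of Equation (1.4.2) as a Riemann sum
    n = round(L / dt)
    return sum(abs(x(k * dt)) ** 2 for k in range(-n, n)) * dt

rect = lambda t: A if -tau / 2 <= t <= tau / 2 else 0.0   # x1(t)
decaying = lambda t: A * math.exp(-a * abs(t))            # x2(t)

assert abs(energy(rect) - A**2 * tau) < 1e-2       # E1 = A^2 * tau
assert abs(energy(decaying) - A**2 / a) < 1e-2     # E2 = A^2 / a
```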

1.5 TRANSFORMATIONS OF THE INDEPENDENT VARIABLE

There are a number of important operations that are often performed on signals. Most of these operations involve transformations of the independent variable. It is important that the reader know how to perform these operations and understand the physical meaning of each operation. The three operations we discuss in this section are shifting, reflecting, and time scaling.

1.5.1 The Shifting Operation

The signal x(t - t0) represents a time-shifted version of x(t); see Figure 1.5.1. The shift in time is t0. If t0 > 0, then the signal is delayed by t0 seconds. Physically, t0 cannot take on negative values, but from the analytical viewpoint, x(t - t0) with t0 < 0 represents an advanced replica of x(t). Signals that are related in this fashion arise in applications such as radar, sonar, communication systems, and seismic signal processing.

Example 1.5.1

Consider the signal x(t) shown in Figure 1.5.2. We want to plot x(t - 2) and x(t + 3). It can easily be seen that

x(t) =
    t + 1,
    1,
    -t + 3,
    0,