RF and Baseband Techniques for Software Defined Radio

For a listing of recent titles in the Mobile Communications Series, turn to the back of this book.

RF and Baseband Techniques for Software Defined Radio Peter B. Kenington

artechhouse.com

Library of Congress Cataloging-in-Publication Data
Kenington, Peter B.
  RF and baseband techniques for software defined radio / Peter B. Kenington.
    p. cm. — (Artech House mobile communications series)
  Includes bibliographical references and index.
  ISBN 1-58053-793-6 (alk. paper)
  1. Software radio. I. Title. II. Series.
  TK5103.4875.K46 2005
  621.3845—dc22        2005045271

British Library Cataloguing in Publication Data
Kenington, Peter B.
  RF and baseband techniques for software defined radio. — (Artech House mobile communications series)
  1. Software radio  2. Radio circuits—Design
  I. Title
  621.3'8412
  ISBN-10: 1-58053-793-6

Cover design by Yekaterina Ratner

© 2005 ARTECH HOUSE, INC.
685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

International Standard Book Number: 1-58053-793-6

10 9 8 7 6 5 4 3 2 1

Contents

Preface
    Scope of This Book
    Organisation of the Text
Acknowledgements

CHAPTER 1  Introduction
    1.1 What Is a Software-Defined Radio?
    1.2 The Requirement for Software-Defined Radio
        1.2.1 Introduction
        1.2.2 Legacy Systems
    1.3 The Benefits of Multi-standard Terminals
        1.3.1 Economies of Scale
        1.3.2 Global Roaming
        1.3.3 Service Upgrading
        1.3.4 Adaptive Modulation and Coding
    1.4 Operational Requirements
        1.4.1 Key Requirements
        1.4.2 Reconfiguration Mechanisms
    1.5 Business Models for Software-Defined Radio
        1.5.1 Introduction
        1.5.2 Base-Station Model
        1.5.3 Impact of OBSAI and CPRI™
        1.5.4 Handset Model
    1.6 New Base-Station and Network Architectures
        1.6.1 Separation of Digital and RF
        1.6.2 Tower-Top Mounting
        1.6.3 BTS Hoteling
    1.7 Smart Antenna Systems
        1.7.1 Introduction
        1.7.2 Smart Antenna System Architectures
        1.7.3 Power Consumption Issues
        1.7.4 Calibration Issues
    1.8 Projects and Sources of Information on Software Defined Radio
        1.8.1 SDR Forum
        1.8.2 World Wide Research Forum (WWRF)
        1.8.3 European Projects
    References

CHAPTER 2  Basic Architecture of a Software Defined Radio
    2.1 Software Defined Radio Architectures
    2.2 Ideal Software Defined Radio Architecture
    2.3 Required Hardware Specifications
    2.4 Digital Aspects of a Software Defined Radio
        2.4.1 Digital Hardware
        2.4.2 Alternative Digital Processing Options for BTS Applications
        2.4.3 Alternative Digital Processing Options for Handset Applications
    2.5 Current Technology Limitations
        2.5.1 A/D Signal-to-Noise Ratio and Power Consumption
        2.5.2 Derivation of Minimum Power Consumption
        2.5.3 Power Consumption Examples
        2.5.4 ADC Performance Trends
    2.6 Impact of Superconducting Technologies on Future SDR Systems
    References

CHAPTER 3  Flexible RF Receiver Architectures
    3.1 Introduction
    3.2 Receiver Architecture Options
        3.2.1 Single-Carrier Designs
        3.2.2 Multi-Carrier Receiver Designs
        3.2.3 Zero IF Receiver Architectures
        3.2.4 Use of a Six-Port Network in a Direct-Conversion Receiver
    3.3 Implementation of a Digital Receiver
        3.3.1 Introduction
        3.3.2 Frequency Conversion Using Undersampling
        3.3.3 Achieving Processing Gain Using Oversampling
        3.3.4 Elimination of Receiver Spurious Products
        3.3.5 Noise Figure
        3.3.6 Receiver Sensitivity
        3.3.7 Blocking and Intercept Point
        3.3.8 Converter Performance Limitations
        3.3.9 ADC Spurious Signals
        3.3.10 Use of Dither to Reduce ADC Spurii
        3.3.11 Alternative SFDR Improvement Techniques
        3.3.12 Impact of Input Signal Modulation on Unwanted Spectral Products
        3.3.13 Aperture Error
        3.3.14 Impact of Clock Jitter on ADC Performance
        3.3.15 Impact of Synthesiser Phase Noise on SDR Receiver Performance
        3.3.16 Converter Noise Figure
    3.4 Influence of Phase Noise on EVM for a Linear Transceiver
        3.4.1 Introduction
        3.4.2 SVE Calculation Without Phase Noise Disturbance
        3.4.3 Approximation of a Local Oscillator Phase Noise Characteristic
        3.4.4 Incorporation of the LO Phase Noise into the EVM Calculation
        3.4.5 Example Results
        3.4.6 EVM Performance of a Multi-Stage System
    3.5 Relationship Between EVM, PCDE, and ρ
    References

CHAPTER 4  Multi-Band and General Coverage Systems
    4.1 Introduction
    4.2 Multi-Band Flexible Receiver Design
    4.3 The Problem of the Diplexer
        4.3.1 RF Transmit/Receive Switch
        4.3.2 Switched Diplexers
        4.3.3 Diplexer Elimination by Cancellation
    4.4 Achieving Image Rejection
        4.4.1 Introduction
        4.4.2 Use of a High IF
        4.4.3 Image-Reject Mixing
    4.5 Dynamic Range Enhancement
        4.5.1 Feedback Techniques
        4.5.2 Feedforward Techniques
        4.5.3 Cascaded Non-Linearity Techniques
        4.5.4 Use of Diplexer Elimination, Image-Reject Mixing, and High Dynamic Range Techniques in a Receiver
    References

CHAPTER 5  Flexible Transmitters and PAs
    5.1 Introduction
    5.2 Differences in PA Requirements for Base Stations and Handsets
        5.2.1 Comparison of Requirements
        5.2.2 Linearisation and Operational Bandwidths
    5.3 Linear Upconversion Architectures
        5.3.1 Analogue Quadrature Upconversion
        5.3.2 Quadrature Upconversion with Interpolation
        5.3.3 Interpolated Bandpass Upconversion
        5.3.4 Digital IF Upconversion
        5.3.5 Multi-Carrier Upconversion
        5.3.6 Weaver Upconversion
        5.3.7 Non-Ideal Performance of High-Speed DACs
        5.3.8 Linear Transmitter Utilising an RF DAC
        5.3.9 Use of Frequency Multiplication in a Linear Upconverter
    5.4 Constant-Envelope Upconversion Architectures
        5.4.1 PLL-Based Reference or Divider Modulated Transmitter
        5.4.2 PLL-Based Directly-Modulated VCO Transmitter
        5.4.3 PLL-Based Input Reference Modulated Transmitter
        5.4.4 Use of a Direct-Digital Synthesizer to Modulate a PLL-Based Transmitter
        5.4.5 A PLL-Based Transmitter Utilising Modulated Fractional-N Synthesis
    5.5 Broadband Quadrature Techniques
        5.5.1 Introduction to Quadrature Techniques
        5.5.2 Active All-Pass Filter
        5.5.3 Use of Highpass and Lowpass Filters
        5.5.4 Polyphase Filtering
        5.5.5 Broadband Passive All-Pass Networks
        5.5.6 Multi-Zero Networks
        5.5.7 Tunable Broadband Phase Splitter
        5.5.8 Lange Coupler
        5.5.9 Multiplier-Divider Techniques
    References

CHAPTER 6  Linearisation and RF Synthesis Techniques Applied to SDR Transmitters
    6.1 Introduction
    6.2 Power Amplifier Linearisation Techniques
        6.2.1 Predistortion
        6.2.2 Analogue Predistortion
        6.2.3 Feedforward
        6.2.4 Basic Operation
        6.2.5 Power Efficiency
        6.2.6 Maintaining Feedforward System Performance
        6.2.7 Performance Stabilisation Techniques
        6.2.8 Relative Merits of the Feedforward Technique
    6.3 Transmitter Linearisation Techniques
        6.3.1 Digital Predistortion
        6.3.2 Relative Merits of Predistortion Techniques
        6.3.3 Feedback Techniques
        6.3.4 RF Feedback
        6.3.5 Envelope Feedback
        6.3.6 Polar Loop
        6.3.7 Cartesian Loop
    6.4 RF Synthesis Techniques
        6.4.1 Polar RF Synthesis Transmitter
        6.4.3 Sigma-Delta Techniques
    6.5 Power Efficiency
    6.6 Summary of the Relative Merits of Various Linear Amplifier and Transmitter Techniques
    References

Appendix A  90° Phase-Shift Networks
    A.1 General Structure
    Reference

Appendix B  Phase Noise in RF Oscillators
    B.1 Leeson's Equation
        B.1.1 SSB Phase Noise Characteristic of a Basic Oscillator
        B.1.2 Leeson's Equation
    References

Acronyms and Abbreviations

About the Author

Index

Preface

Scope of This Book

Software defined radio (SDR) is an emerging form of radio architecture, which encompasses a wide range of design techniques in order to realize a truly flexible, and potentially future-proof, transceiver system. As a field it is very broad, encompassing systems design; RF, IF, and baseband analogue hardware design; digital hardware design; and software engineering. Covering all of these topics in sufficient detail to become a design reference would be a huge task; this book therefore focuses on the systems design and RF, IF, and baseband analogue hardware design aspects of SDR. It also includes an introduction to some of the digital hardware options for the baseband signal-processing element, although it does not attempt to provide detailed design information in this area.

The emphasis on analogue hardware and digital conversion technologies stems from the current reality that the antenna plus analogue-to-digital converter (ADC) architecture often envisaged for future radio receivers (and similarly for the transmitter) is far from becoming a reality in the main cellular communications bands. Indeed, there are some very significant challenges to overcome before this could ever be considered a realistic architecture for cellular applications, and these are highlighted in this book.

Many of the techniques described in the book are still at the research stage and are presented as ideas for further development. Some of these techniques may never become a reality and may be superseded by alternative technologies; however, their presence in this book will hopefully stimulate such alternative ideas and further the development of the exciting field of SDR.

Organisation of the Text

The text is divided into six chapters covering the hardware aspects of software-defined radio design. The chapters may be summarised as follows:

1. Introduction.
2. Basic Architecture of a Software-Defined Radio. The ideal software-defined radio architecture is introduced and the practical limitations of current technology are highlighted, which make this architecture currently unrealizable. Some of the digital aspects of a software-defined radio are also discussed, along with appropriate implementation technologies. Finally, some issues with high-speed ADC performance and power consumption are highlighted.
3. Flexible RF Receiver Architectures. A range of techniques is covered which may be used to realise a practical receiver RF section for a software-defined radio. Detailed examples are given, where appropriate, of the calculations involved at various stages of the design.
4. Multi-band and General Coverage Systems. Techniques covered in this chapter include dynamic range enhancement methods, image removal architectures, and duplexer elimination techniques, among others.
5. Flexible Transmitters and PAs. A crucial and difficult area in a software-defined radio is a flexible, linear transmitter. This chapter highlights the various techniques available to solve this problem and also covers techniques appropriate for providing the broadband quadrature signals required in many transmitter (and receiver) architectures.
6. Linearisation and RF Synthesis Techniques Applied to SDR Transmitters. The final chapter discusses the various linearised power amplifier and transmitter architectures appropriate for use in an SDR system. It compares them based upon complexity, cost, suitability for integration, and power efficiency.

Acknowledgements

The material presented in this book represents a distillation of many years of research, design, and development work, carried out primarily at the University of Bristol and at Wireless Systems International Ltd. I would like to thank my former colleagues at both establishments for their assistance, encouragement, and criticism, without which many of the ideas presented here would never have reached fruition. In particular, I would like to thank Ross Wilkinson, Andy Bateman, Bill Whitmarsh, Jim Marvill, Dave Bennett, Kieran Parsons, Shixiang Chen, Sue Gillard, John Bishop, Tony Smithson, and Jonathan Rogers for their help and enthusiasm throughout the many projects we worked on together. I would also like to thank my wife, Gay, for her understanding during the many long evenings and weekends taken to prepare the manuscript.

CHAPTER 1

Introduction

1.1 What Is a Software Defined Radio?

The term software radio has become associated with a large number of different technologies and no standard definition exists. The term is usually used to refer to a radio transceiver in which the key parameters are defined in software and in which fundamental aspects of the radio's operation can be reconfigured by the upgrading of that software. A number of associated terms have also been used in the context of programmable or reconfigurable mobile systems [1]:

• Software defined radio (SDR): This is the term adopted by the SDR Forum—an international body looking at the standards aspects of software radio [2].
• Multi-standard terminal (MST): This type of terminal is not necessarily a software defined radio in the context of this book, although it may be implemented in that way. It simply refers to a terminal which is capable of operation on a number of differing air interface standards. This type of terminal will provide either wider international roaming than would a single-standard device, or a necessary, smooth upgrade path from a legacy system to a new standard, for example, the transition from the Global System for Mobile communications (GSM) to wideband code-division multiple access (WCDMA).
• Reconfigurable radio: This term is used to encompass both software and firmware reconfiguration [e.g., through the use of programmable logic devices, such as field programmable gate arrays (FPGAs)]. Both forms of reconfiguration are likely to be necessary in any cost- and power-efficient software radio implementation.
• Flexible architecture radio (FAR): This is a wider definition still than those above. It indicates that all aspects of the radio system are flexible, not just the baseband/digital section. A true FAR should allow parameters such as the number and type of up/downconversion stages to be altered by software, as well as, for example, IF filter bandwidths and even the RF frequency band of operation. This is clearly a utopian goal for software radio.

Further variations on the above themes are also in use; however, they all fall into one or other of the above categories. These categorisations will be used, where relevant, in this book.


1.2 The Requirement for Software Defined Radio

1.2.1 Introduction

Software defined radio is an enabling technology in many areas of communications. It has been, or is being, examined for a very wide range of applications, including:

1. Military communications—the U.S. SpeakEasy programme [3]. This programme formed some of the initial basis for the SDR Forum [2]. The benefits of a software defined radio system in a military context are obvious. A radio which can change not only its scrambling or encryption codes on an ad hoc basis, but also its modulation format, channel bandwidth, data rate, and voice codec type is clearly an exciting operational prospect. An adaptable radio of this type could both foil an enemy's attempts at eavesdropping and be configured to match operational requirements or conditions (e.g., propagation characteristics). Such a system clearly has huge potential benefits in the theatre of war.
2. Civilian mobile communications. In the competitive world of civilian communications, any system that allows an operator or service provider to offer enhanced benefits or services relative to competing operators clearly has huge potential. The costs incurred in evolving a complete network to a new standard, by means of hardware replacement, are enormous. It is estimated, for example, that the cost of building the new 3G network in Europe, assuming that all of the expected licenses are deployed, will be upwards of $200 billion [4]. If the existing GSM infrastructure hardware had been designed on software defined radio principles (and if it had possessed sufficient processing power when it was installed in the early 1990s—a very big if), then the cost of deploying 3G would be a tiny fraction of this figure. The key issue is, of course, whether sufficient digital processing can be built in at deployment to allow for unknown service or air interface upgrades. This issue is explored further later when examining the business case for software defined radio.
3. Introduction of new technologies in legacy frequency bands. There are a number of examples of this form of application for software defined radio, ranging from U.S. specialised mobile radio (SMR) deployments [5], through European data systems [6], to international seismic survey operations [7]. These will also be examined in more detail later.

1.2.2 Legacy Systems

Software-defined radio technology is often an enabling technique when it is desirable to upgrade from an existing, well-established system to a newer, higher-capacity network. A good example of this is in the U.S. SMR bands, where frequency modulation (FM) is (or, increasingly, was) widely deployed for use by taxis, security firms, and so forth. The traditional options for upgrading such networks were:

1. Deploy an entire new network using the existing sites and frequency bands (but with a new technology, e.g., modulation format), furnish all of the users with new mobiles and handhelds, and have a hard switch-off of the old network on a particular day. There are numerous, obvious disadvantages with this approach:
   • Installation of new mobiles alongside existing ones is very difficult and not popular with users.
   • Installation of a complete new network in a relatively short time requires a huge capital investment—a phased replacement is much more palatable from a financial perspective.
   • Some network problems are almost inevitable when deploying a new technology and then placing a heavy load on it at switch-on. If these are significant, the users will be highly dissatisfied—again, a phased deployment will minimise such disruption.
2. Obtain a new frequency band for the new technology, deploy a new network, which in this case needs to have similar coverage to the old network, but not necessarily similar capacity (initially), and gradually migrate users to the new system. This is clearly a preferable option, assuming that the new frequency band obtained has adequate propagation characteristics. The key issue with this approach is that obtaining a new frequency band, in an already overcrowded spectrum, is both difficult and very expensive. In addition, both bands must be serviced from the same customer base (at least initially) and this can put a huge financial burden on the existing customers, which may drive them away, or on the company's investors if they choose to subsidize the system in the short term.

Adopting a software defined radio based approach to the terminals in such a system allows the gradual deployment of both the new network (in the existing frequency allocation) and the new mobiles. For example, in a typical FM SMR scenario, one of the five allocated SMR channels could be replaced by a new deployment, followed by a second deployment some time later, as mobile deployment grew, and so on until the entire network had been renewed. Strictly speaking, a software defined radio terminal is not essential for this to take place (a dual-mode terminal would suffice); however, it is likely that if a significant technological change is involved in transitioning from the old system to the new, then a software defined radio based terminal will be the most cost effective (and smallest) solution.

1.3 The Benefits of Multi-standard Terminals

A multi-standard terminal (MST) is a subscriber unit that is capable of operation with a variety of different mobile radio standards. Although it is not strictly necessary for such a terminal to be implemented using software defined radio techniques, it is likely that this approach is the most economic in many cases. Some of the key benefits of a reconfigurable MST are outlined in the following sections.

1.3.1 Economies of Scale

Even if terminal adaptation over the air or via third-party software were not possible, or were not permitted by, for example, regulatory bodies, the production benefits of a software-defined radio approach could well justify its existence. The wide range of new and existing standards in the cellular and mobile marketplace has resulted in the adoption of a diverse range of subscriber terminal (and base-station) architectures for the different systems deployed around the globe. The ability to develop and manufacture a single reconfigurable terminal, which can be configured at the final stage of manufacture to tailor it to a particular market, clearly presents immense benefits to equipment manufacturers. With the design, components used, and hardware manufacturing processes all being identical for all terminals worldwide, the economies of scale would be huge. This has the potential to offset the additional hardware costs which would be inevitable in the realisation of such a generic device.

1.3.2 Global Roaming

The present proliferation of mobile standards and the gradual migration to third-generation systems mean that a large number of different network technologies will exist globally for some time to come. Indeed, even in the case of 3G systems, where a concerted effort was made by international standards bodies to ensure that a single global standard was produced, there are still significant regional differences, in particular between the U.S. and European offerings (and also, potentially, China). With this background, it is clearly desirable to produce a terminal which is capable of operation on both legacy systems and the various competing 3G standards. Indeed, it could be argued that this is the only way in which 3G systems will be accepted by users, since the huge cost of a full-coverage network roll-out will discourage many operators from providing the same levels of coverage (at least initially) as their existing 2G systems enjoy. A user is unlikely to trade in his or her 2G terminal for one with perhaps better services, but significantly poorer basic voice coverage. This is indeed what is happening in virtually all current 3G deployments. A software defined radio architecture represents a very attractive solution to this problem.

1.3.3 Service Upgrading

A powerful benefit of a software defined radio terminal, from the perspective of the network operator, is the ability to download new services to the terminal after it has been purchased and is operational on the network. At present, significant service upgrades require the purchase of a new terminal, with the required software built in, and this clearly discourages the adoption of these new services for a period. The launch of General Packet Radio Service (GPRS) data services on the GSM network is a good example of this. With an SDR handset architecture, services could be downloaded overnight, when the network is quiet, or from a Web site in the same manner as personal computer (PC) software upgrades are distributed. There are clearly a number of logistical issues with this benefit (e.g., what to do about phones which are turned off at the time of the upgrade, or what happens if a particular phone crashes with the new software—software which the phone user may not have wanted—perhaps just prior to the phone being needed for an emergency call, and so forth). Many of these problems have been solved by the PC industry and hence it is likely that this benefit will be realised in some manner with software defined radios.

1.3.4 Adaptive Modulation and Coding

The ability to adapt key transmission parameters to the prevailing channel or traffic conditions is a further key benefit of a software defined radio. It is possible, for example, to reduce the complexity of the modulation format, such as from 16-QAM (quadrature amplitude modulation) to quadrature phase-shift keying (QPSK) when channel conditions become poor, thereby improving noise immunity and decoding margin. It is also possible to adapt the channel coding scheme to better cope with particular types of interference, rather than Gaussian noise, when moving from, say, a rural cell to an urban one. Many parameters may be adapted dynamically, for example, burst structure, modulation type, data rate, channel and source coding, multiple-access schemes, and so forth.
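As a concrete illustration of the adaptation described above, the following sketch shows one simple way a threshold-based modulation and coding decision could be expressed in software. The SNR thresholds, modulation formats, and code rates in the table are assumptions made purely for illustration and are not drawn from any particular air interface standard.

```python
# Illustrative sketch only: threshold-based adaptive modulation and coding
# selection, as might run in a terminal's or base station's control software.
# The thresholds and code rates below are assumed, not taken from a standard.

MOD_COD_TABLE = [
    # (minimum SNR in dB, modulation, code rate), highest throughput first
    (18.0, "16-QAM", 3 / 4),
    (12.0, "16-QAM", 1 / 2),
    (7.0,  "QPSK",   3 / 4),
    (0.0,  "QPSK",   1 / 2),   # most robust fallback entry
]

def select_mod_cod(snr_db: float) -> tuple:
    """Return the highest-throughput (modulation, code rate) whose SNR
    threshold is met; fall back to the most robust entry otherwise."""
    for threshold, modulation, code_rate in MOD_COD_TABLE:
        if snr_db >= threshold:
            return modulation, code_rate
    return MOD_COD_TABLE[-1][1], MOD_COD_TABLE[-1][2]

if __name__ == "__main__":
    for snr in (22.0, 14.5, 8.0, 3.0):
        mod, rate = select_mod_cod(snr)
        print(f"SNR {snr:5.1f} dB -> {mod} at rate {rate:.2f}")
```

In a real system the decision would, of course, be driven by measured link-quality reports and constrained by what the air interface signalling permits; the sketch simply makes the reconfiguration decision itself explicit.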

1.4 Operational Requirements

1.4.1 Key Requirements

The operational characteristics of an ideal multi-standard terminal include the following operations.

1.4.1.1 Software-Definable Operation

As outlined earlier, the key to many of the advantages of a multi-standard terminal lies in its ability to be reconfigured: during manufacture, prior to purchase, following purchase (e.g., after-market software), in operation (e.g., adaptation of coding or modulation), or preferably all four. This impacts primarily upon the digital and baseband sections of the terminal and will require the use of reprogrammable hardware as well as programmable digital signal processors in a power- and cost-effective implementation.

1.4.1.2 Multi-Band Operation

The ability to process signals corresponding to a wide range of frequency bands and channel bandwidths is a critical feature of an MST. This will impact heavily on the radio frequency segments of the terminal, and it is this area which is arguably the main technology limitation on software defined radio implementation at present (although processor power consumption and cost are still both major issues for SDR).

1.4.1.3 Multi-Mode Operation

Many multi-mode software defined radios already exist, although they are often not promoted as such (since the other features and benefits of software defined radio techniques are not exploited). The ability to change mode and, consequently, modulation, coding, burst structure, compression algorithms, and signalling protocols is clearly an essential feature of an MST.

1.4.2 Reconfiguration Mechanisms

There are two main reconfiguration mechanisms that are currently favoured for software defined radio, with each having a number of variants. The basic mechanisms are over-air download and manual upgrade.

1.4.2.1 Over-Air Download

With this mechanism, a range of options is possible, from the updating of a small number of parameters (e.g., filter coefficients), through the adding of a new service (e.g., e-mail), to a full upgrade of the terminal software. There are two main issues to be overcome: how to ensure that all mobiles in a network have been upgraded before making use of the new parameters (e.g., a tighter channel spacing, following an upgrade of filter coefficients), and how to deal with the potentially large amount of data which would need to be distributed to each phone (e.g., for a complete upgrade of the whole handset software). Although not huge by comparison with a typical PC application, for example, this would still constitute a significant amount of network traffic. It is currently thought that the only realistic option to address this latter problem is to restrict over-air downloads to the updating of coefficients or the enabling of already-existing software services. In this case, it is assumed that the handset already contains the software required for the new service (for example, the e-mail service mentioned earlier), and all that needs to take place over the air interface is for the service to be enabled and an e-mail address assigned to the phone. This would only require a small amount of network traffic and hence would represent a realistic proposition for the network operator and a realistic cost for the user. The higher data rates available in third generation (3G) systems may make broadcast updating of handset software, and not just system parameters, a realistic option.

The other main issue, that of ensuring that all mobiles have been upgraded prior to a major service change (e.g., a change of channel spacing or modulation format), is significantly more problematic. The only realistic option is for a process similar to that of Internet software upgrades to be adopted. In this case, new software can be automatically downloaded once the service provider has flagged that a new download is available. Both the old and the new systems would have to operate simultaneously until more than a certain (high) percentage of users had upgraded. Support for the old service would then cease.

1.4.2.2 Manual Download

This option is again similar to its personal computer counterpart. If the users wish to add a new service, they would simply purchase a disk or CD containing the required software and load it into their phone from a PC (or have it installed by their phone vendor). This could apply to anything from a single new application (e.g., e-mail) through to an entire new suite of software, including an operating system. Elements of this model are beginning to emerge (along with its Internet download counterpart) in the sale of ring tones for some 2G and 3G phones.

1.4.2.3 Handset Operating Systems

For these models to work successfully, it is likely that some form of handset operating system will need to emerge, in a manner similar to the Windows and Linux operating systems for PCs. Such systems are already in use, with Windows Mobile and Symbian being the two main examples available currently; however, these are, at present, not specifically tailored to the needs of an SDR handset or to digital signal processing applications in general. Other options exist, however, such as OSE [8]; this operating system is already being used in base stations for this type of application. If and when this happens to the required degree, it paves the way for third-party application providers to greatly expand the capability of the humble mobile telephone, and this is probably essential for the success of third generation systems.

1.5 Business Models for Software Defined Radio

1.5.1 Introduction

The advent of software defined radio has already revolutionised the business model for mobile communications (see the discussion of legacy systems in Section 1.2.2). There are two further business issues which need to be addressed:

1. The business model for infrastructure procurement—the changes which are now possible to traditional outsourcing models;
2. The financial penalties and benefits in adopting a software defined radio approach to terminal design. In other words, in a fiercely price-competitive handset market, why pay a premium for the additional technology necessary for a software defined radio–based terminal (and how large is that premium)?

1.5.2 Base-Station Model

The architecture of most wireless base stations has moved from an intrinsically modulation-specific architecture to a largely software-defined architecture. This change, in addition to the recent moves toward standardisation of the internal base-station transceiver system (BTS) digital interfaces, in the Open Base-Station Architecture Initiative (OBSAI) and Common Public Radio Interface (CPRI™) initiatives (see Section 1.5.3), radically alters the BTS procurement and business model options. The interface between the waveform generation and waveform transmission functions is now, in many cases, digital, and it is increasingly common for an original equipment manufacturer (OEM) to outsource both the baseband digital card hardware and the high-power RF transceiver hardware. This leaves the OEM free to concentrate on the complex application layer software and services provision areas, which are their key differentiators in many applications.

The ideal BTS model, from an OEM's perspective, would involve a small number of standard (not necessarily standardised) building blocks, which could be cascaded in order to form a complete hardware solution. This has not been possible in the past, due to the application-specific and vendor-specific nature of the components involved. The advent of SDR techniques, however, is increasingly leading to this model being adopted. An outline of the modulation generation and transmission elements of this type of base station is shown in Figure 1.1. It is now possible to define each of the main elements (digital signal processing, linearised transmitter, and diplexer) as modules (from a hardware perspective). Of these items, many are already outsourced by OEMs, for example, the digital and/or digital signal processor (DSP) card, the diplexer, and also the PA element of the transmitter. The final step, from an SDR business perspective, is to outsource the upconverter and associated synthesizer as part of an overall linearised transmitter solution. There are now a number of BTS OEMs heading down this route in order to simplify the hardware and supply-chain aspects of their base-station infrastructure solutions. This trend will continue and spread to the OEMs not already adopting this model, since the OEMs increasingly need to concentrate their efforts on their areas of core competence, in order to provide product differentiation. These areas are typically in software and services provision (e.g., the quality of the switch), as they are capable of generating additional profit or valuable service differentiation for the network operator. OEMs are therefore increasingly keen to offload the RF hardware elements of the system, as these occupy skilled (and expensive) engineering effort and carry substantial development risk, yet yield minimal benefit in the form of saleable features.

[Figure 1.1 Digital input/RF output transmitter employed in a software defined radio base station: baseband input(s), DSP, upconverter, linearisation, and RF PA (together forming the linearised transmitter), followed by the diplexer and a connection to the receiver.]

1.5.2.1 New BTS Business Models Enabled by SDR

The adoption of an SDR architecture for a BTS moves the interface to the digital domain and leads to the concept of an RF black box containing all of the RF aspects of both the transmitter and receiver(s), plus, increasingly, the diplexer, although this is still a separate component in many designs. Such a system is illustrated in Figure 1.2 with a Cartesian interface; alternatively, a digital IF or digital polar interface could also be used.

[Figure 1.2 Contents of an RF black box SDR system: I/Q digital input, digital lineariser, DAC, upconverter, and PA (with lineariser feedback) feeding the diplexer on the transmit side; LNA, downconverter, ADC, and digital downconverter providing the I/Q digital output on the receive side.]

An argument often levelled against the adoption of this approach is that too much of the base station is being subcontracted. It is common practice, at present, for the PA module to be bought as a subsystem from an outside supplier, and this is a widely accepted business model within the BTS OEM community. Putting this into context, Figure 1.3(a) shows a generic RF upconverter and PA combination, and it would appear, at first glance, that subcontracting the design of the PA does not represent a significant amount of the overall system. If, however, the PA is linearised, which is a common requirement in code-division multiple access (CDMA), orthogonal frequency division multiplexing (OFDM), and π/4-DQPSK (differential quadrature phase-shift keying) systems, the complexity embedded within that block becomes significant and it is also a much bigger component of the overall transmitter cost and size. This is highlighted in Figure 1.3(b), in which the relative size of the power amplifier (PA) and RF signal processing elements is scaled based upon cost, although this figure would also look similar if the scaling were based upon physical size.

[Figure 1.3 (a) Conventional block diagram of the RF section of a linear transmitter and (b) representation of this scaled by cost.]

The conclusion from this illustration is clear: subcontracting the remaining elements of the RF section of an SDR transmitter will have a relatively minor impact on unit cost (and, typically, a positive impact on overall BTS cost). It will free up valuable RF engineering resource for other tasks and will have no impact on the product differentiation capability of the OEM—few, if any, base-station sales are based upon the incorporation of a novel upconverter within the BTS design.

1.5.2.2 Upgrade Versus Replacement in a BTS Context

Upgradability is often a difficult concept to sell in a high-technology system. It seems to be a straightforward argument on the surface: Technology advances so quickly that systems rapidly become obsolete; why not, therefore, design them so that they can be upgraded rather than discarded? In practice, the cost of upgrading versus the cost of replacement does not usually make this argument compelling. Take the PC market as an example. Very few people upgrade processors, motherboards, and so forth, as it is not usually economic. By the time a user has upgraded the motherboard so that it is capable of working with a new, faster processor, added a new graphics card to work with the desired new applications, added or replaced the hard drive (as the new software assumes a greater disk capacity and hence uses more disk space), and so forth, the cost is usually similar to or greater than the cost of a new machine. In addition, the upgraded machine is unlikely to have a warranty and hence the user alone is responsible for any compatibility problems.

In the case of a BTS, some of the compatibility and warranty issues can be alleviated, since it is likely that the upgrade hardware will be provided by the original system vendor, who will therefore be responsible for maintaining system integrity (although OBSAI and CPRI™ may change this paradigm). The issue of cost versus replacement is still just as valid, however, and here there are some significant differences when compared to the PC model described earlier. The radio parts of a BTS consist of two main sections:

1. The radio frequency and analogue electronics, including the analogue baseband processing (e.g., anti-alias filtering), IF components, local oscillators, low-noise amplification, power amplification, and so forth. These elements are mostly housed within the transceiver unit (sometimes known as the TRx), with the RF power amplifier and diplexer often being separate components.
2. The digital signal processing hardware, firmware, and software. This often appears as one or more separate cards (e.g., one per RF carrier) plugged into a card frame with a common bus. This part of the BTS contains the DSP devices, application-specific integrated circuits (ASICs), FPGAs, memory, and clock oscillators necessary to generate the modulation, coding, framing, and so forth required for the system or systems which it is designed to support.

The interface between these two elements can be analogue, in which case the necessary digital-to-analogue converters (DACs) and analogue-to-digital converters (ADCs) can be found on the digital signal processing cards, or digital, in which case they will be found in the TRx unit(s). It is increasingly common for the interface to adopt the latter, digital format, with both OBSAI and CPRI™ adopting this architecture.

There is a significant difference in the rate and type of development between these two parts of a BTS system. For example, a new design of RF power amplifier in an existing frequency band (e.g., 900 MHz) might well use the same RF power devices as an existing design from, say, 2 years previously. Even if new devices are available, they are unlikely to differ in any significant regard, of import to software-defined radio, from their predecessors. They are unlikely, for example, to cover a significantly greater bandwidth (they are usually band-specific anyway) or to have a significantly improved intermodulation distortion (IMD) performance. The old design is therefore not, strictly speaking, out of date—it will still work well with a new modulation format. The same is true of most aspects of the RF part of the system; new parts are created which are cheaper, smaller, and more efficient, but functionally they are little or no more capable than their predecessors, from a reconfiguration viewpoint. It is therefore realistic to design the RF elements of the system to be future-proof in some meaningful way, without fear that future improvements in technology will render them unusable. Linearised power amplifiers are a good example of this. A mid-1990s feedforward-based power amplifier in, say, the 800-MHz band is still a fully flexible, reconfigurable component today and it still covers the correct (allocated) bandwidth. A newer design of amplifier will certainly have advantages over its earlier counterpart (most notably in the areas of cost, size, and efficiency); however, the older design will still perform adequately in a software-defined radio context.

In the case of the digital parts of the system, however, this is certainly not the case. Moore's law [9] indicates that an 18-month-old processor (e.g., a DSP) will have half the processing power of a recent part; extrapolating this over a realistic replacement life cycle of perhaps 5 to 7 years (or more) leads to a processing power difference of around 16 times. Trying to future-proof this part of the system would therefore be very difficult and would involve the use of a very large number of state-of-the-art components and considerable expense. It is unlikely that this cost could be justified based upon a future-proofing argument alone, and it is difficult to see any other good reason for adopting this approach.

The most sensible option for future-proofing may therefore be to invest significant design effort in future-proofing the RF and analogue baseband elements of the system (probably, but not necessarily, including the ADCs and DACs) and making the baseband digital cards in the form of current-technology throwaway items, much like the motherboard in a PC. The cost of upgrading a digital card (or a number of digital cards) in the future will be modest in comparison to the cost of upgrading the whole BTS and much lower than the cost of upgrading the RF elements; the RF power amplifier alone is likely to account for 50% or more of the cost of a third generation BTS installation (excluding site costs) within the next few years.
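As a quick check on the Moore's-law figure quoted above, the short calculation below works out the processing-power ratio implied by a doubling every 18 months over a 5- to 7-year replacement cycle. It is a sketch of the arithmetic only, using the doubling period and life-cycle range stated in the text.

```python
# Rough arithmetic behind the "around 16 times" figure quoted above:
# processing power doubling every 18 months (Moore's law), compounded
# over a 5- to 7-year BTS replacement life cycle.

DOUBLING_PERIOD_YEARS = 1.5

for life_cycle_years in (5, 6, 7):
    doublings = life_cycle_years / DOUBLING_PERIOD_YEARS
    ratio = 2 ** doublings
    print(f"{life_cycle_years} years -> {doublings:.1f} doublings "
          f"-> ~{ratio:.0f}x processing power")

# Result: 5 years -> ~10x, 6 years -> ~16x, 7 years -> ~25x, which is
# consistent with the "around 16 times" figure for a 5- to 7-year cycle.
```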

1.5.3 Impact of OBSAI and CPRI™

OBSAI [10] and CPRI [11] are industry-led standardisation activities aimed at opening up the interfaces within a BTS. They are intended to provide an open marketplace into which third-party equipment vendors will be able to provide high-volume BTS subsystems for a range of OEM customers, thereby reducing costs for an individual OEM customer. Most major OEMs now belong to one or other of these organisations, with initial drafts of the standards already having been published and product developments being under way. The standards cover:

• Baseband module to RF module high-speed data interface [transmitting the I-Q data representing the waveform(s) to be transmitted];
• Low-speed data for control, operation, administration, maintenance, and provisioning (OAM&P), and so forth;
• Clock/timing distribution;
• Interface to remote RF heads.

In addition, OBSAI is currently going further and specifying aspects of the module mechanics, power supply, testing, and so forth. Both of these standards activities are based around an SDR-friendly baseband in-phase and quadrature (I-Q) interface. The use of SDR hardware architectures is therefore highly appropriate for both of these standards, and they have the potential to bring the economies-of-scale benefits of SDR to the BTS marketplace. This arises for a number of reasons:

• The baseband, crest-factor reduction, DAC, and upconversion architectures are reusable across a range of frequency bands and air interface standards, typically with very minor changes [e.g., a synthesizer voltage-controlled oscillator (VCO)].
• Likewise, the downconversion, analogue-to-digital (A/D) conversion, and baseband receive architectures are also reusable across a range of frequency bands and air interface standards with similarly minor changes.
• Software for the protocols associated with the above interfaces is typically reusable across all platforms.

These initiatives could therefore be viewed as good for the adoption of SDR techniques within mainstream base-station designs.
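To make the idea of a baseband I-Q interface slightly more concrete, the sketch below shows one plausible way complex baseband samples could be quantised and packed as interleaved 16-bit I/Q words for transport over such a digital link. It is purely illustrative: the word width, scaling, and ordering are assumptions chosen for the example and do not represent the actual OBSAI or CPRI frame formats.

```python
# Illustrative only: packing complex baseband samples as interleaved 16-bit
# I/Q words, the kind of payload a digital baseband-to-RF interface carries.
# Word width, scaling, and byte order are assumptions, not the OBSAI/CPRI
# frame formats themselves.

import struct

def pack_iq(samples, full_scale=1.0):
    """Quantise each complex sample to signed 16-bit I and Q values and pack
    them little-endian in the order I0, Q0, I1, Q1, ..."""
    words = []
    for s in samples:
        i = max(-32768, min(32767, round(s.real / full_scale * 32767)))
        q = max(-32768, min(32767, round(s.imag / full_scale * 32767)))
        words.extend((i, q))
    return struct.pack(f"<{len(words)}h", *words)

if __name__ == "__main__":
    baseband = [0.5 + 0.25j, -0.1 + 0.9j, 1.0 - 1.0j]
    payload = pack_iq(baseband)
    print(len(payload), "bytes:", payload.hex())
```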

1.5.4 Handset Model

There are three categories of customer for software defined radio technology, and each has its own potential series of benefits which will be reaped if it is successful:

1. Equipment manufacturers. The benefit to equipment manufacturers is largely in terms of economies of scale: the adoption of SDR ideally allows a manufacturer to manufacture only a single handset product (at least in terms of its electronics). If this occurs, then the benefits to that manufacturer are potentially huge. Only a single product needs to be supported, in terms of hardware, thus considerably simplifying almost all aspects of manufacturing, thereby:
   • Lowering inventory requirements;
   • Reducing the number of suppliers;
   • Increasing ordering volumes;
   • Reducing documentation/support/spares requirements, and so forth;
   • Reducing test equipment and test-set design requirements.
   This will also, therefore, have a significant impact upon cost. It will also, of course, reduce the need for design teams for different air interface platforms and hence may even lead to staff reductions in these areas.
2. Network operator. Competition between network operators is largely on the basis of cost and quality of service; anything that can give an operator an edge in quality, without resulting in a significant increase in cost, is clearly of interest to them. SDR terminals allow the operator the potential of field upgrades in order to correct problems or add new services and features. The correction of problems is actually a very attractive area, as a handset recall to fix a software bug is an extremely expensive undertaking. The ability to add new service innovations instantly, rather than having to wait for old handsets to become obsolete, is also of great interest. A good example of where this would have been useful to a network operator is in the addition of frequency-hopping to the GSM network. One of the early, popular handsets did not implement that feature and hence the introduction of frequency-hopping was delayed on many networks until this handset could be deemed obsolete.
3. Consumer. The consumer's buying decision is influenced by many factors, and technology/features is only one of them. Many other factors, such as size, weight, battery life, case styling, and even brand credibility, are of equal or greater importance to many users. The advent of software defined radio does, however, bring some powerful benefits and these may prove to be key selling features to some customers. Such features include genuine global roaming and full upgradability (in the same way that PC software can be upgraded). This latter feature will lead to a new PC-like after-market software industry. Global roaming, on the other hand, will only be an advantage to the international business traveller; however, this market segment is typically wealthy and a heavy airtime user, and is therefore a very attractive customer to a network operator; operators are likely to take disproportionate steps to acquire this customer's business (e.g., greater than average handset subsidies). This makes the affordable adoption of SDR technology more likely for this (top) end of the market, therefore providing a way into the market for this technology via the business traveller route.

1.6 New Base-Station and Network Architectures

The use of PA linearisation and, in particular, digital-input linearised transmitters enables a number of new base-station topologies to be realised. These topologies result in a number of significant advantages for both the base-station manufacturer and the network operator, particularly in the areas of power consumption and cost.

In a conventional base station, the baseband and RF sections of the transceiver are usually physically close to each other, and in many cases in the same box. The power amplifier(s) are also generally close, typically being located in the same rack (if not the same box, in the case of single-channel PAs). The RF power is not, therefore, being generated close to its intended point of use (the antenna), and a significant amount is wasted in getting it there.

There are three alternative topologies which are enabled by the use of linear power amplifiers: the separation of digital and RF sections within a base station, tower-top mounting (not new in itself, but linear PAs add some new possibilities which may overcome current concerns), and orphaned RF networks (also known as BTS hoteling).

Separation of Digital and RF

The advent of OBSAI and CPRI™, together with that of the digital-input transceiver systems upon which they both rely, means that the historic placement of both the digital and RF sections of a base station in close physical proximity is no longer a requirement. The baseband section can now be effectively stand-alone, since the RF transceiver is simply a linear processing device, which will faithfully reproduce the input signal described to in digital form. It is therefore possible to physically separate the baseband and RF sections of the base station by an almost arbitrary distance, particularly if an optical transport medium is utilised between the two. The transceiver may therefore be mounted at a convenient location close to the antenna, for example, on the side of a building or at the top of a mast, with a consequent reduction in RF power requirements (due to the virtual elimination of high-power RF cable losses) and a lowering of both purchase and running costs. An example of the type of application where the separation of the digital and RF aspects of a base station may prove useful is in a (larger) urban base site. Here, space may be rented in expensive office accommodation, close to the top of a building, for example. The only item, however, which needs to be placed in that location is the RF section; the remainder of the system could easily be placed in lower-cost basement space, with, for example, a fibre-optic link between the two. Alternatively, a single location could be used to house the base-station digital and network interfacing hardware for a number of base sites, required, for example, to provide coverage in a large building (e.g., a shopping mall or an airport). The individual base sites would then merely consist of a number of RF sections [or remote RF heads (RRH)], with each comprising an RF black box (see Section 1.6.3.2) and an antenna. The RF black box itself would incorporate a digital or optical input, RF transmitter and an RF input, digital (or optical) output receiver, together with a diplexer (as appropriate) and any local power supply functionality (DC, or AC mains). It is often easier to source mains power locally rather than to attempt to distribute low-voltage DC along with the digital or optical signals, although cables do exist to facilitate this (e.g., Amarra [12]).

1.6 New Base-Station and Network Architectures

1.6.2

15

Tower-Top Mounting

A second new topology is to mount the RF transceiver at the top of the mast containing the transmit and receive antennas, as illustrated in Figure 1.4(a). Some installations of this type already exist; however, the approach adopted has been to send low-power RF signals up the mast, to be amplified at the top. This approach has a number of benefits over the more traditional approach of mounting the amplifier(s) in a cabin at the base of the mast, since it eliminates high-power RF cable losses. These can account for up to half of the power generated in the PA, and this in turn can lead to a doubling of the effective PA efficiency (based on dc power input to the PA versus RF power transmitted from the antenna). The main issues with adopting this approach are: 1. Maintenance. The failure of a unit at the top of a mast will result in an expensive operation to effect a repair or replacement. The reliability required from such (outdoor) units is therefore high, and network operators are largely skeptical that it can be met. There are grounds for optimism, however, as confidence in active electronics at the masthead has been boosted by the installation and good reliability of tower-mounted amplifiers (TMAs) in 3G systems. In addition, the lower RF output powers required from tower-mounted units mean that they require fewer active devices and will run cooler (thereby mitigating two major potential sources of unreliability). A further consideration with regard to reliability is that of failures in the high-power RF cable, for example, due to water ingress. While not the most common cause of base-station failure, it is nevertheless a significant issue

RF Black Box

RF Black Box

RF Black Box

Optical fibre

Mast

RF Black Box

Optical fibre

~ ~

~10 km

~ ~

~ ~

Mast

Base station Base-station cabin

Base station

Base station

Network

Network

(a)

(b)

Base station

Central basestation hub

Figure 1.4 Use of a digital-to-RF black-box to (a) enhance an existing base-station concept and (b) enable a new centralised hub architecture.

16

Introduction

and the replacement of this cable with, for example, fibre-optic cable, should greatly reduce the incidence of failures in this part of the system. 2. Weight. Mounting a number of high-power RF transceivers at the top of a mast will add significantly to the loading on the mast and could lead to the requirement for an upgraded or replacement mast. This clearly adds to the cost of the installation and will remove some of the cost benefits of this approach (but probably not all, if designed-in at the outset). It is, however, worth bearing in mind that the weight of the cables going up the mast to feed the RRH unit(s) will be greatly reduced (optical fibre, for example, is much lighter than low-loss, high-power coaxial feeder cable). The weight issue is therefore one which requires careful analysis for each individual case and may end up as an advantage in many installations. 3. Delay. If a public data network, or a significant number of routers or switches, is used for distribution of the digital signals, this will result in a significant delay being inserted into the cell. This will have the effect of reducing the maximum cell radius and hence its overall coverage. In the case of providing transmission utilising a public distribution network, this delay may well be unknown (and could vary from day to day with routing changes which are out of the control of the network operator). This latter issue may well exclude the use of public networks for distributed base-station applications. As long as a public network is not used (nor a large number of routers/switches), an acceptable cell radius can usually be obtained for most air interface systems, even after transmission has taken place over many tens of kilometers.

1.6.3 1.6.3.1

BTS Hoteling Introduction

The concept of BTS hoteling is illustrated in Figure 1.4(b). This is a new innovation in network deployment in which the majority of the components of a traditional base station are housed in a central location (the hub). This hub can be placed at a convenient, low-cost location, for example, in the basement of a downtown building or in an out-of-town industrial park. This leaves a minimum of components that are required to be housed at the cell site. The concept is similar to that of its Internet counterpart (i.e., Internet hoteling). All of the network components, interface elements, and so forth, as well as the baseband signal generation, modulation, demodulation, coding, and framing functionality are housed at the central base-station hub. The hub interfaces directly to the relevant telecommunications network and derives all subscriber calls from there. It also generates and receives the modulated data samples required for transmission to/from the remote RF head. The hub therefore contains all of the intelligence of the base station and appropriate measures can be taken to ensure its continued operation (e.g., having a permanent staff for maintenance, utilising N+1 redundancy with automatic switching and so forth). Neither of these provisions would be economic for a single base-site, however they may well be justified at a large BTS hoteling hub.

1.6.3.2 Remote RF Head

The contents of the remote RF head (or RF black box of Figure 1.4) are similar to those shown in Figure 1.2 and described in Section 1.5.2.1. The key difference here is that a digital interface is added which is capable of longer transmission distances than that required in a typical base-station application (where only a few tens of centimeters of backplane or shielded cable are required). This interface is typically optical for longer distances (see Figure 1.5), although twisted pair (e.g., CAT5) or coaxial cables may also be used for in-building or shorter outdoor applications.

1.6.3.3 Advantages of BTS Hoteling

BTS hoteling is beginning to be deployed in trial networks by a number of operators. They are interested in this technology as it yields a number of significant benefits:

1. Simplification of maintenance/upgrades. Since the majority of the base-station equipment for a number of sites will be housed in one location, only one maintenance visit is required to cover all of these sites. At present, all sites must be visited individually.

2. Reduction/elimination of base-site huts and cabins. Aside from the capital and maintenance cost of these buildings, they also add to planning difficulties, due to the acoustic noise of the air conditioning systems presently required by many of them (ignoring the aesthetic and health concerns of local residents, which can also impact upon planning decisions in many markets). Enlarging a hut to house newer (e.g., 3G) equipment, in addition to the existing 2G racks, generally involves a renegotiation with the site landlord, and this can prove expensive (in some cases, around $75,000 in legal fees alone!).

3. Reduced power consumption. Placing the RF elements at the top of the tower eliminates the cable losses inherent even in high-power coaxial cable. These losses are typically around 2 dB (but can be much higher), equating to a 37% reduction in power consumption, assuming that power requirements scale with output power for the PA chosen (a short check of this arithmetic follows the list below).

Figure 1.5 Internal structure of the RF black box concept.

4. Lower deployment costs. In addition to the cost benefits of a lower power PA, the fact that the BTS now no longer requires a cabinet at the bottom of the tower will significantly reduce its build cost (and probably site acquisition costs as well). Air conditioning also needs only to be provided at a single location, namely, the BTS hub.

5. Lower operating costs. The much greater (effective) efficiencies referred to above, together with the removal of the need for air conditioning at a large number of remote sites, lead to a significant reduction in operational costs. This will amount to many millions of dollars per annum for a typical European 3G network.

6. Higher reliability. The removal of one BTS failure mechanism (the high-power coaxial cable) and the placement of much of the BTS hardware in a benign air-conditioned environment will lead to an improvement in overall system reliability. Also, the potential to use N+1 redundancy for the BTS elements contained in the hub site means that any failures that do occur will have a minimal impact upon the smooth running of the network.

7. Ease of maintenance. Placing the majority of the BTS hardware in one location allows central maintenance to be undertaken and possibly even 24-hour manning. This will significantly reduce the time between a failure occurring and its repair. If a failure occurs in the RF section, this should take no longer to repair than at present for, for example, a cable failure (reputed to be more common than PA failure in most networks).

8. Ease of network expansion. In an existing network, adding a new site typically requires the acquisition of space for both an antenna and a ground cabinet. In a city centre, the former may be relatively readily available (e.g., lampposts and traffic lights), with the latter being the bigger problem (due to restrictions on street furniture). If the only electronics to be deployed are those required for the RF and this is now small (city centre deployments tend to be low power), minimal or zero street furniture deployments may be possible.
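The 37% figure in item 3 (and the "up to half" figure quoted earlier for feeder losses) is simple decibel arithmetic; the fragment below is an illustrative check only, assuming that the PA's dc input power scales directly with the RF output power it must produce.

```python
def feeder_saving(loss_db):
    """Fraction of PA output power saved by removing a feeder of the given loss,
    assuming dc input power scales with the required PA output power."""
    return 1 - 10 ** (-loss_db / 10)

print(round(feeder_saving(2.0), 3))   # ~0.369 -> the ~37% figure quoted above
print(round(feeder_saving(3.0), 3))   # ~0.499 -> a 3-dB feeder wastes about half the PA power
```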

1.7 Smart Antenna Systems

1.7.1 Introduction

A smart antenna system can bring significant benefits to both 2G and 3G networks, principally in the areas of improved interference cancellation and enhanced system capacity. These benefits do, however, come at a price, with the requirements for multiple RF power amplifiers, armoured feed cables, and calibration systems adding significantly to the cost of this type of system. Although adaptive antenna systems have been proposed and researched for some time, they have yet to achieve widespread acceptance, largely because of their associated deployment and equipment costs. The advent of software defined radio architectures is, however, a key development in enabling smart antenna base stations to be realised utilising baseband beamforming. This type of architecture assists in mitigating some of the cost, size, cabling, and calibration issues discussed earlier, bringing smart antenna systems closer to acceptance by network operators. It is still not clear, however, whether such systems will ever achieve a satisfactory business case.

1.7.2 Smart Antenna System Architectures

A beam-steering smart antenna system operates by feeding gain- and phase-weighted versions of the same signal(s) to an array of antenna elements spaced, typically, half a wavelength apart. The gain and phase weightings imposed on the signals fed to each antenna element determine the direction of the beam and the beam pattern (i.e., the position of the sidelobes and nulls). All of these features may be controlled by the smart antenna system, allowing the wanted signals to be targeted by the main lobe and any unwanted interferers minimised by steering a null (or multiple nulls) in their direction(s).
Although some smart antenna deployments exist today, these are mainly retro-fits to existing base stations (mostly in the United States) and utilise high-power gain/phase controllers to implement the beam-steering functionality. In the future, however, new deployments are likely to be designed-in as a part of the base station and these will almost certainly employ baseband beamforming in place of the high-power Butler matrix [13] beamformers used at present. In a baseband beamforming architecture, the gain and phase weightings required to steer the beam and its associated nulls are formed within the digital signal processing functionality found at baseband. All that is then required of the transmitter and multicarrier power amplifier (MCPA) is to faithfully reproduce these signals as high-power RF channels, without significant distortion (either linear, i.e., gain/phase errors/ripple, or non-linear, i.e., IMD). The transmitter units must therefore be highly linear (generating very little IM distortion) and have a flat frequency response.
There are two main methods of realising a baseband beamforming smart antenna system:

1. Utilise conventional (e.g., feedforward) RF-input/output multicarrier power amplifiers (MCPAs) in conjunction with a traditional transmitter/upconverter. A system designed in this way is illustrated in Figure 1.6. In this configuration, the power amplifiers are mounted in the cabinet at the base of the mast and are cooled by fans and an overall air conditioning system for the cabinet itself. The relative inefficiency of this type of amplifier and the requirement to overcome feeder losses in transmitting the signals to the top of the mast both lead to the requirement for large amounts of heat to be dissipated.

2. Utilise a digital-input SDR transmitter of the type described earlier in this chapter (see Sections 1.5.2.1 and 1.6.3.2). The architecture for this solution is shown in Figure 1.7. The base-station cabinet no longer contains the RF elements of the transmit chain, as these have now been moved to the masthead, along with the MCPA.
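As a concrete illustration of the weighting operation described above, the short sketch below forms phase-only steering weights for a uniform linear array and applies them to a block of baseband samples, producing one stream per antenna element (and hence per DAC/upconverter/PA chain). It is a minimal, idealised example, assuming half-wavelength element spacing and ignoring amplitude tapering, null steering, and calibration errors; the function names are illustrative only.

```python
import numpy as np

def steering_weights(n_elements, steer_deg, spacing_wavelengths=0.5):
    """Phase-only weights that point the main beam of a uniform linear array
    toward steer_deg (measured from broadside)."""
    n = np.arange(n_elements)
    phase = -2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(steer_deg))
    return np.exp(phase) / np.sqrt(n_elements)   # normalised to unit total power

def apply_weights(baseband, weights):
    """Produce one weighted copy of the baseband signal per antenna element;
    each row then feeds one DAC/upconverter/PA chain."""
    return np.outer(weights, baseband)

weights = steering_weights(n_elements=4, steer_deg=20.0)
samples = np.exp(2j * np.pi * 0.01 * np.arange(1000))   # dummy single-tone baseband signal
per_element = apply_weights(samples, weights)            # shape (4, 1000)
```

In a real system, these per-element sample streams would form the digital inputs to the SDR transmitters discussed above.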

1.7.3 Power Consumption Issues

It is worth examining the power consumed in adopting each of these approaches. Consider, for example, a four-element adaptive array (arguably the smallest feasible deployment) and a three-carrier WCDMA configuration. Assume that each carrier must be transmitted (from the BTS cabinet) at 10W (i.e., 15W at the output of the PA to allow for losses in the diplexer, connectors, and so forth) and that allowance is made for the fact that the power from each antenna element sums to form the wanted beam (i.e., each PA needs only to generate one-fourth of the total required power).

Figure 1.6 Downlink smart antenna system based upon conventional multi-carrier power amplifiers. Note that only one of the calibration feedback paths is shown, for simplicity; one is required for each antenna element.

In the case of the architecture shown in Figure 1.6, the total transmitted power (from all of the MCPAs) is 45W, with the efficiency level being typically around 10% (or less) for existing feedforward systems. This results in a total dissipated power of 450W, most of which must be removed by air conditioning.

Figure 1.7 Downlink smart antenna base-station configuration employing digital power amplification.

In the case of the architecture shown in Figure 1.7, and assuming that the feeder cable used in the above example has 2 dB of loss (typical for a 30-m cable run), the total MCPA output power required at the top of the mast, for equivalent coverage, has reduced to 28W. The efficiency of the MCPA will also have increased to over 15% (based on the use of digital predistortion in the SDR transmitter), hence resulting in a total system power dissipation of less than 190W. This represents a saving of 260W (close to 60%) over the architecture shown in Figure 1.6. This is clearly a very significant saving and is conservative (as an overall BTS system saving), since most smart antenna systems will use more than four elements and two or three sectors.
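The figures quoted above follow from straightforward arithmetic; the fragment below reproduces them under the stated assumptions (three carriers at 15W each from the MCPAs, 2 dB of feeder loss, roughly 10% efficiency for the feedforward MCPAs and 15% for the digitally predistorted masthead units). It is purely an illustrative check, not part of the original analysis.

```python
CABINET_PA_POWER_W = 3 * 15.0        # three carriers at 15W each from the MCPAs
FEEDER_LOSS_DB = 2.0
EFF_FEEDFORWARD = 0.10               # conventional feedforward MCPA
EFF_MASTHEAD = 0.15                  # digitally predistorted SDR transmitter

# Conventional architecture (Figure 1.6): PAs at the bottom of the mast.
dc_conventional = CABINET_PA_POWER_W / EFF_FEEDFORWARD                   # ~450 W

# Masthead architecture (Figure 1.7): the feeder loss no longer has to be overcome.
masthead_pa_power = CABINET_PA_POWER_W / (10 ** (FEEDER_LOSS_DB / 10))   # ~28.4 W
dc_masthead = masthead_pa_power / EFF_MASTHEAD                           # ~189 W

saving = dc_conventional - dc_masthead
print(round(masthead_pa_power, 1), round(dc_masthead), round(saving))    # 28.4 189 261
```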

1.7.4 Calibration Issues

A further issue, which separates the two architectures discussed earlier, is that of the additional cabling required to maintain calibration for the complete system. A smart antenna system relies on the faithful translation to RF of the gain and phase weights applied to the signals at baseband. Furthermore, it relies on each of the antenna elements receiving signals of the correct gain and phase weighting relative to each other. This latter requirement typically necessitates periodic calibration of each of the RF transmit/power amplification systems (of which there are four in the example shown in Figure 1.6). This will result in yet more cabling, as shown in that figure, in order to accommodate the feedback signals required from each antenna element.
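One simple way to picture what the per-element feedback paths are used for is as a periodic gain/phase alignment: the complex gain through each transmit chain is measured via its feedback path and the baseband weights are then pre-corrected so that all branches match a chosen reference. The sketch below illustrates only that correction step; it is a hypothetical, simplified model and not a description of any specific product's calibration scheme.

```python
import numpy as np

def calibration_correction(measured_gains, reference_index=0):
    """Given the complex gain measured through each transmit/PA chain (via its
    feedback path), return per-branch correction factors relative to a reference."""
    measured_gains = np.asarray(measured_gains, dtype=complex)
    return measured_gains[reference_index] / measured_gains

# Example: branch 1 is 1 dB high and 10 degrees out of phase relative to branch 0.
gains = np.array([1.0, 10 ** (1 / 20) * np.exp(1j * np.deg2rad(10)), 0.95, 1.02])
corrections = calibration_correction(gains)
calibrated = gains * corrections      # all branches now match the reference gain
```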


With an SDR transmitter-based solution, it is possible to lock each of the high-power transmitter units together (with one acting as the master), hence preserving their relative gain and phase properties without repeated calibration. This saves both RF signal processing hardware and multiple RF cables, both of which add to cost and, potentially, reduce reliability.

1.8 Projects and Sources of Information on Software Defined Radio

1.8.1 SDR Forum

The SDR Forum [2] is an industry-based standards and promotional organisation which is “dedicated to supporting the development, deployment, and use of open architectures for advanced wireless systems.” It therefore aims to:

• Accelerate the proliferation of enabling software definable technologies necessary for the introduction of advanced devices and services for the wireless Internet;
• Develop uniform requirements and standards for SDR technologies to extend the capabilities of current and evolving wireless networks.

It consists of three work committees and these are outlined in the following paragraphs.

1.8.1.1 Regulatory Committee

The charter for the Regulatory Committee states that its purpose is: “To promote the development of a global regulatory framework supporting software download and reconfiguration mechanisms and technologies for SDR-enabled equipment and services.”

1.8.1.2 Markets Committee

The charter for the Markets Committee states that it exists to “raise industry awareness through public relations activities, including published articles, press releases and representation at industry trade shows; to increase and maintain Forum membership; and to collect and analyse market data on all industry segments.”

1.8.1.3 Technical Committee

The charter for the Technical Committee states that it exists to “promote the advancement of software-defined radios by using focused working groups to develop open architecture specifications of hardware and software structures.” Within the Technical Committee there are three working groups:

1. Download/Handheld: Charter—To promote the use of software-defined radio technology in handheld terminals, providing dynamic reconfiguration under severe constraints on size, weight, and power.
2. Base Station/Smart Antennas: Charter—To promote the use of software-defined radio and reconfigurable adaptive processing technology in wireless base stations worldwide for terrestrial, satellite, mobile, and fixed services.
3. Mobile: Charter—To promote the use of software defined radio technology in commercial and military applications where station mobility, dynamic networking, and functional flexibility are required. Identify and maintain the collection of recommended wireless, network and application interface standards to meet these objectives. Develop and promulgate new standards as necessary.

1.8.2 World Wide Research Forum (WWRF)

Working group 6 of this forum is undertaking activities dealing with reconfigurability and is taking inputs from (amongst others) the U.K. Mobile Virtual Centre of Excellence (MVCE) [14].

1.8.3 European Projects

Over the last few years, a number of European projects have also been focused on various aspects of software defined radio technology, including:

• SLATS [15];
• FIRST [16];
• SUNBEAM [17];
• CAST [18];
• MOBIVAS [19];
• WINDFLEX [20];
• WINE [21];
• PASTORAL [22];
• DRIVE [23];
• MuMoR [24];
• SODERA [25, 26];
• SCOUT [27];
• TRUST [28, 29].

Note that this list is in roughly chronological order and is not exhaustive. Each of these projects has examined slightly different aspects of SDR, incorporating both techniques (such as software download and dynamic reconfiguration) and technology (such as multi-band transmitter and receiver architectures). While none of the above projects was intended to generate standards or guidelines (unlike the SDR Forum, discussed earlier), they have generated technology in the software-defined radio area and all have informed standards in their respective fields, generally through European Telecommunications Standards Institute (ETSI) submissions.


References

[1] Kenington, P. B., “Linearised Transmitters: An Enabling Technology for Software Defined Radio”, IEEE Communications Magazine, Vol. 40, No. 2, February 2002, pp. 156–162.
[2] http://www.sdrforum.org.
[3] Lackey, R. J., and D. W. Upmal, “SPEAKEasy: The Military Software Radio”, IEEE Communications Magazine, Vol. 33, May 1995, pp. 56–61.
[4] DaSilva, J. S., “It Is Dangerous to Put Limits on Wireless”, 3GIS Conference, Athens, Greece, July 2–3, 2001; http://www.cordis.lu/ist/ka4/mobile/pubar/past/ec_pres_2001.htm.
[5] Kenington, P. B., “Dynamic Channel Multicarrier Architecture (DC/MA): An Ideal Solution for Asia”, Proc. of 2nd Annual Asia Pacific Public & Private Trunked Mobile Radio Conference ’96, Singapore, August 26–28, 1996.
[6] “Radio Equipment and Systems (RES); Land Mobile Service; Technical Characteristics and Test Conditions for Radio Equipment Intended for Transmission of Data (and Speech) and Having an External Connector”, ETSI, European Telecommunication Standard (ETS) 300 113, 1999.
[7] http://www.fairfield.com/Boxhome.html.
[8] http://www.ose.com.
[9] Moore, G. E., “Cramming More Components onto Integrated Circuits”, Electronics, Vol. 38, No. 8, April 19, 1965.
[10] http://www.obsai.org.
[11] http://www.cpri.info.
[12] http://www.andrew.com.
[13] Butler, J., and R. Lowe, “Beam-Forming Matrix Simplifies Design of Electronically Scanned Antennas”, Electronic Design, April 12, 1961, pp. 170–173.
[14] Georganopoulos, N., et al., “Terminal-Centric View of Software Reconfigurable System Architecture and Enabling Components and Technologies”, IEEE Communications Magazine, Vol. 42, No. 5, May 2004, pp. 100–110.
[15] http://www.csem.ch/slats/project.html.
[16] http://www.cordis.lu/infowin/acts/rus/projects/ac005.htm.
[17] http://www.cordis.lu/infowin/acts/rus/projects/ac347.htm.
[18] http://www.cast5.freeserve.co.uk/.
[19] http://www.ccrle.nec.de/Projects/mobivas.htm.
[20] http://labreti.ing.uniroma1.it/windflex/.
[21] http://www.vtt.fi/ele/projects/wine/.
[22] http://pastoral.telecomitalialab.com/.
[23] http://www.ist-drive.org/index2.html.
[24] http://www.ee.surrey.ac.uk/CCSR/IST/Mumor/.
[25] http://dbs.cordis.lu/fep-cgi/srchidadb?ACTION=D&SESSION=113462004-4-8&DOC=1&TBL=EN_PROJ&RCN=EP_RCN_A:57124&CALLER=PROJ_FP5 (or search for project number IST-1999-11243 in the Cordis database).
[26] http://www.bbw.admin.ch/html/pages/abstracts/html/fp/fp5/5is99.0336-1.html.
[27] http://www.ist-scout.org.
[28] http://dbs.cordis.lu/fep-cgi/srchidadb?ACTION=D&SESSION=122992004-4-8&DOC=121&TBL=EN_PROJ&RCN=EP_RPG:IST-1999-12070&CALLER=PROJ_IST (or search for project number IST-1999-12070 in the Cordis database).
[29] http://www4.in.tum.de/~scout/trust_webpage_src/trust_frameset.html.

CHAPTER 2

Basic Architecture of a Software Defined Radio

2.1 Software Defined Radio Architectures

A software defined radio (SDR) is a form of transceiver in which ideally all aspects of its operation are determined using versatile, general-purpose hardware whose configuration is under software control. This is often thought of in terms of baseband DSPs, hence the term software radio, which is often used to describe this type of system; however, FPGAs, ASICs (containing a re-programmable element, e.g., an embedded processor), massively parallel processor arrays, and other techniques are also applicable. The more general terms flexible architecture radio (FAR) and software defined radio (SDR) are therefore becoming increasingly adopted.
Although not strictly necessary in order to be termed software defined, this type of radio is also commonly assumed to be broadband (multi-band or multi-frequency in operation). This assumption is made as one of the principal applications of this type of transceiver is perceived to be that of replacing the numerous handsets currently required to guarantee cellular (and in the future, satellite) operation worldwide. Even with the GSM system having achieved a certain degree of ubiquity worldwide, it is still not possible to utilise a single handset in all countries (with cellular coverage) worldwide. Furthermore, the many competing standards (GSM, CDMA, WCDMA, AMPS, D-AMPS, PDC) all have differing characteristics, tariffs, and so forth, and hence a multi-mode, multi-band transceiver, covering all of these systems, would certainly be a useful device.
The concept of a multi-band or general coverage terminal is, strictly speaking, an extension of the basic software defined radio concept into that of a broadband flexible architecture radio, since the basic reprogrammability and adaptability aspects of operation do not depend upon multi-band coverage. It would be possible, for example, to construct a useful software defined radio which operated in the 800-/900-MHz area of spectrum and which could adapt between AMPS, GSM, D-AMPS, PDC, and CDMA. It is now normal, however, for a handset to have multi-frequency operation and hence the extension of this principle to a software defined radio is a natural one. The international business traveller market is still seen as both large and lucrative, particularly in terms of call charges, hence making this type of handset attractive to both manufacturers and network providers.
There are many issues which must be addressed in determining if a software-defined radio is realistic and also to what extent it is flexible. For example, it is possible to create a single-band software defined radio with a narrowband channel restriction relatively easily [1]. Coping with wider channel bandwidths and operating in multiple bands in differing parts of the spectrum is much more difficult, but nevertheless essential, for a combined GSM/PCS/WCDMA handset, for example.
What this chapter aims to do is to examine the simplest possible architecture for an SDR and then to demonstrate why this will not be feasible, for most applications, for some time to come (if ever, in some cases). The remainder of this book will then go on to describe the more complex, but more realistic, architectures in use today (or potentially usable in the near future).

2.2 Ideal Software Defined Radio Architecture

An ideal software defined radio is shown in Figure 2.1; note that the A/D converter is assumed to have a built-in anti-alias filter and that the D/A is assumed to have a built-in reconstruction filter. The ideal software defined radio has the following features:

Figure 2.1 Ideal software defined radio architecture.

• The modulation scheme, channelisation, protocols, and equalisation for transmit and receive are all determined in software within the digital processing subsystem. This is shown containing a DSP in Figure 2.1; however, as was highlighted earlier, there exists a variety of applicable signal processing hardware solutions for this element.
• The ideal circulator is used to separate the transmit and receive path signals, without the usual frequency restrictions placed upon this function when using filter-based solutions (e.g., a conventional diplexer). This component relies on ideal (perfect) matching between itself and the antenna and power amplifier impedances and so is unrealistic in practice, based upon typical transmit/receive isolation requirements. Since the primary alternative (a diplexer) is very much a fixed-frequency component within a radio, its elimination is a key element in a multi-band or even multi-standard radio. Some potential techniques for solving this problem are proposed elsewhere in this book. Note that the circulator would also have to be very broadband, which most current designs are not.
• The linear (or linearised) power amplifier ensures an ideal transfer of the RF modulation from the DAC to a high-power signal suitable for transmission, with low (ideally no) adjacent channel emissions. Note that this function could also be provided by an RF synthesis technique, in which case the DAC and power amplifier functions would effectively be combined into a single high-power RF synthesis block.
• Anti-alias and reconstruction filtering is clearly required in this architecture (but is not shown in Figure 2.1). It should, however, be relatively straightforward to implement, assuming that the ADC and DAC have sampling rates of many gigahertz. Current transmit, receive, and duplex filtering can achieve excellent roll-off rates in both handportable and (especially) base-station designs. The main change would be in transforming them from bandpass (where relevant) to lowpass designs.

2.3 Required Hardware Specifications

The ideal hardware architecture, shown in Figure 2.1, imposes some difficult specifications upon each of the elements in the system. It is worth examining each of these specifications in detail, in order to judge the likelihood of technological advancement over the coming years making them a realistic proposition. This will then provide a backdrop to the techniques presented in the remainder of this book and their applicability to particular standards or systems.
In order to derive these specifications, it is necessary to make some assumptions about the types of modulation scheme (and in some cases, multiple access scheme) which the radio is likely to need to accommodate. If these assumptions are based on current and currently proposed schemes for cellular and PMR systems worldwide, the specifications shown in Table 2.1 could be chosen. Note that many other variants of these requirements are possible and that Table 2.1 represents only one (hopefully realistic) collection of values.

Table 2.1 Basic Specifications for a Handportable Software Defined Radio
• Frequency coverage: 100 MHz to 2.2 GHz. This would cover most PMR, cellular, PCN/PCS, mobile satellite, and UMTS bands worldwide.
• Receiver dynamic range: 0 dBm to −120 dBm (based on a 25-kHz equivalent channel bandwidth). This must not only cope with fading and inband interferers, but any signals in the above frequency range.
• Transmit power output: 1W. This is reducing as time progresses and health fears increase, but most systems still require this power level (many PMR systems require more).
• Transmit adjacent channel power: −75 dBc. This figure is slightly in excess of most known specifications in this area (e.g., TETRA [2]).
• Transmit power control range: 70 dB. Most CDMA systems, for example, require a large power control range.
• Transmit power ramping range: 75 dB. DECT [3] requires 68 dB and is probably the toughest current requirement in this area.
• Channel bandwidth: 5 MHz. Based on the 3GPP WCDMA standard for UMTS [4].
• Receiver image rejection: 60 dB. Based on an interpretation of the TETRA [2] specifications.
Source: [1].

The specifications outlined in Table 2.1 highlight some key difficulties in realising a transceiver capable of meeting these requirements, over a broad coverage range. These may be summarised as follows:

1. Antenna: A frequency range of almost 5 octaves is required, together with a realistic gain/loss figure around 0 dBi. Combine this with the usual (handset) requirements of small size, near-omnidirectional coverage pattern (typically, excluding the user's head), and low cost, and the physical realisation of this component becomes extremely challenging.
2. Circulator or duplexer: This is discussed in more detail later in this book; however, it needs high isolation and a broadband coverage range. In the case of a conventional, filter-based duplexer, this latter requirement is essentially impossible to achieve with current technologies.
3. A/D converter: The sampling rate of this converter, if it is to Nyquist sample directly at RF, would need to be at least 4.4 GHz and, in reality, much more (to allow for a realistic anti-alias filter roll-off and real-world converter performance). If, however, the converter is permitted to undersample, the required sampling rate drops dramatically. The required sampling rate could fall to 20 MSPS (based on two-times Nyquist bandpass sampling), assuming that the RF filtering and ADC analogue input were up to the task (a significant challenge in the former case). This would lead to an input bandwidth requirement extending to 2.2 GHz and a resolution of around 20 bits, from the receiver dynamic range requirement (note that even a 5-MHz receiver bandwidth, based on the above 20-MSPS sampling, can be swamped by one or more narrowband carriers when operating in, for example, the GSM bands; the full receiver dynamic range is therefore, ideally, required from the ADC). Even this is an extremely exacting specification, particularly with current technology, and hence the alternative architectures, covered in Chapters 3 through 6, are required to allow a realisable A/D converter to be used. Note that if a synthesiser and conventional downconversion are employed (in place of bandpass sampling), this resolution is available at very low cost in the form of digital audio converters. Up to 200 kHz of channel bandwidth can be accommodated in this way, relatively easily and cheaply (based on I/Q downconversion prior to the A/D converters). At the time of this writing, 16-bit converters are becoming available with an appropriate sample rate and the trend is for converters to have increasing analogue bandwidths (>1 GHz is now emerging as a specification in a number of parts); it therefore seems likely that this requirement will be realisable within the next 5 to 10 years (although power consumption is likely to be a concern; see Section 2.5.1). A short worked example of these converter figures follows this list.
4. D/A converter: This component is currently realisable, although with a relatively high power consumption, again assuming that conventional upconversion is employed and that power control is employed either prior to or within the linear power amplifier. A resolution of 12 to 14 bits at 20 MSPS would be required. IF output devices are also now increasingly common and the available IFs are increasing as technology improves. Current devices are capable of operation at an IF in the hundreds of megahertz region; however, here again, this will improve over the coming few years to the point where RF output frequencies (e.g., 800/900 MHz and 1.9/2.1 GHz) will become a reality, at a realistic cost.
5. Receiver anti-alias filtering: Based on the two-times Nyquist sampling converters discussed above, an attenuation of 60 dB is required around 18 MHz from the channel edge. This would be extremely difficult, if not impossible, to achieve in a bandpass filter capable of tuning from 100 MHz to 2.2 GHz. With the architecture proposed in Figure 2.1, this component presents a serious challenge and strongly indicates that a synthesiser-based downconversion mechanism would almost certainly need to be employed in a software defined radio for the foreseeable future. Improvements in sampling rates (for a given converter resolution) will, however, allow this requirement to be relaxed and may enable some limited forms of SDR to be realised without such high-performance filtering needing to be included.
6. DSP (DSP processors and equivalent technologies): Technology in this area is progressing very rapidly and the primary issue at present is that of power consumption (for handset operation). Combinations of reconfigurable hardware (e.g., FPGAs) and fully software programmable processors are likely to yield the best performance in terms of power consumption, although other, newer architectures are also strong challengers in this area (e.g., massively parallel arrays). These technologies are discussed in Section 2.4.
7. RF power amplifier: Considerable research has been directed at the linearisation of power amplifiers in recent years and a number of candidate techniques exist (see Chapter 6). Many narrowband systems have employed the Cartesian loop technique, achieving up to −70-dBc intermodulation product levels. For broader bandwidth systems, RF predistortion, digital predistortion, and feedforward techniques have also been used. At present, digital predistortion is a realisable solution and fits well with the architecture of a software defined radio. In particular, it is now increasingly employed in base-station equipment [5]. In the future, however, RF synthesis architectures, such as envelope restoration and sigma-delta techniques, may well also see widespread application.
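The converter figures quoted in item 3 follow from simple rules of thumb; the short script below reproduces them. It is only an illustrative sketch, using the ideal-quantiser approximation SNR = 6.02N + 1.76 dB together with the Table 2.1 values (120-dB dynamic range, 5-MHz channel, 2.2-GHz upper frequency) and two-times Nyquist sampling; the names are arbitrary.

```python
import math

DYNAMIC_RANGE_DB = 120.0    # 0 dBm to -120 dBm from Table 2.1
CHANNEL_BW_HZ = 5e6         # WCDMA channel bandwidth from Table 2.1
TOP_FREQ_HZ = 2.2e9         # upper edge of the required frequency coverage

def adc_bits(dynamic_range_db):
    """Bits needed for a given dynamic range, from SNR = 6.02*N + 1.76 dB
    (ideal quantiser approximation)."""
    return math.ceil((dynamic_range_db - 1.76) / 6.02)

# Direct Nyquist sampling of the full band: at least twice the top frequency.
direct_rf_rate = 2 * TOP_FREQ_HZ                # 4.4 GSPS
# Bandpass (under-)sampling of one channel at two-times Nyquist.
undersample_rate = 2 * (2 * CHANNEL_BW_HZ)      # 20 MSPS

print(adc_bits(DYNAMIC_RANGE_DB))   # -> 20 bits
print(direct_rf_rate / 1e9)         # -> 4.4 (GSPS)
print(undersample_rate / 1e6)       # -> 20.0 (MSPS)
```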


The specifications outlined in Table 2.1 and the components required to realise them are clearly not available with current technology and may not be achievable, in many cases, for a considerable period (if ever). It is therefore necessary to examine other architectures and/or restrictions in the specifications contained in Table 2.1, in order for software defined radio to become a reality in the short or medium term. Such architectures are dealt with in detail in the remaining chapters of this book.

2.4 Digital Aspects of a Software Defined Radio

2.4.1 Digital Hardware

There exists a range of solutions to the digital processing problem for a software defined radio, each with its own characteristics and application areas. The digital processing area is, in many respects, as challenging as the analogue processing described in detail in this book and the intention of this section is merely to highlight the options and their main characteristics. The two biggest issues at present are the power consumption and cost of the various options. In a base-station application, these are less of an issue (but are still a significant challenge); however, they are perhaps the main inhibitor to the widespread use of software defined radio in handsets and other portable devices. The arguments for and against (largely against) the provision of large amounts of reconfigurable processing in a base station (as a future-proofing method) have already been covered in Chapter 1. The use of reconfigurability as a method of providing upgrading, improvement, or backwards compatibility (i.e., a smooth transition from a legacy system) is, however, a strong argument for flexible processing and SDR concepts. It is in this context that the processing options outlined in the following will be discussed.
Cost is also a multi-faceted issue. Most designs judge cost based almost exclusively on the cost of the target device used for the code (be it a processor or an FPGA). In the case of a very high-volume application (e.g., a handset), this might be a reasonable approach, although even here it could be somewhat shortsighted. In the case of a base-station design, however, there are many other considerations that will determine the overall cost of a design (particularly if lifetime cost is considered and not just purchase cost). As a summary, the factors that influence the cost of the digital elements of an SDR BTS include:

• Direct cost of the processing device itself.
• Costs involved in the associated ancillary and interfacing devices (e.g., memory, clock circuitry, and so forth).
• Non-recurring expense (NRE). This is most obviously associated with ASIC or application specific signal processor (ASSP) designs and includes mask-set costs, fabrication, and so forth. These costs are rising dramatically as feature sizes reduce and are therefore making the break-even volume (compared to, say, FPGAs) much higher as time progresses.
• Tools/training investment. Changing from one digital technology to another (e.g., from DSPs to FPGAs) may well involve a significant change of design personnel, or at the very least a degree of retraining. This will have an associated cost and also an opportunity cost as the time to market will be increased (see the following). Even changing from one manufacturer's processors to another may involve a loss of productivity while the development team familiarizes itself with a new feature set and the new tricks required to get the best out of a particular device.
• Cooling. The cost of cooling can undergo step changes as the form of cooling required changes. The most obvious example is in going from convection cooling to forced-air cooling, with the cost of the fans now needing to be added to the bill of materials. Additional power consumption will also add to the cost of the power supply, although with modern switched-mode designs, this is usually small. It is, however, a much bigger issue in handset designs due to the increased requirements it places upon the battery and the user acceptance issues of large batteries or reduced talk times.
• Development time/resource. This is becoming an increasingly important aspect of cost, as product life cycles, even of base-station designs, reduce as each new design appears. The volume of units sold of a particular design is then lower and the cost of producing that design becomes an ever-larger proportion of its selling price. Techniques or architectures which allow these designs to be generated quickly, or significant portions of designs to be reused between evolutionary models in a range (as well as across models in a given range), are clearly attractive, even if the devices upon which they are based are not the lowest-cost components available.
• Flexibility. This is a benefit in terms of time to market for new products and hence a benefit in terms of opportunity cost. If full flexibility could be provided for the same cost as a fixed solution (e.g., a single-application ASIC), then it would be a simple decision to adopt a flexible approach. This is almost never true and hence a full business case must be developed for flexibility, in a given marketplace, and each opportunity judged on its merits.

2.4.1.1 Digital Signal Processors (DSPs)

DSPs were arguably the original enabling technology for software defined radio (other than perhaps in military circles where cost is less important). They have the advantage of complete flexibility, wide applicability, and a wide availability of skilled practitioners in their software. They are also high-volume devices and hence the benefits of economies of scale may be realised across a large number of applications in a wide range of industries (not just wireless communications). This, in general, makes up for their lack of optimisation for a given specific project or niche application area, and allows them to be a realistic option for early prototyping and initial production volumes of a new design, as well as for the final volume product, in some cases. They are best suited to the less computationally intensive forms of signal processing, rather than very high-speed front-end applications. They are often utilised for involved, off-line processing of data which has been acquired and undergone initial processing/storage by a different type of device (e.g., an FPGA or an ASIC).


They are, however, well supported and also tend to come in backwards-compatible families, which allow development to take place on a state-of-the-art (SOTA) device, with the final application device being lower cost. This generally occurs for two reasons:

1. The SOTA device being used in development will not be SOTA by the end of the development cycle and hence will generally have reduced in cost. The volume of usage of the device will also have increased, which will also help to reduce its cost.
2. Developers tend to pick a device for their development systems which is definitely large enough to meet the requirement in question. It is often the case that once development is nearing completion, the design will have been optimized such that it may be executed in a lesser member of the same device family. This will have an associated cost benefit. In larger systems, it may be the number of devices that can be reduced; however, this will still result in a lower overall cost.

2.4.1.2 Field-Programmable Gate Arrays (FPGAs)

FPGAs have undergone a revolution in recent years, both in performance and cost. From humble beginnings as simple, flexible glue logic in complex digital designs, they are now a credible processing platform in their own right and able to rival ASIC solutions in many areas (and act as a low-cost prototyping mechanism for ASIC designs). They have also undergone a revolution in volume pricing, which means that they are no longer consigned to the prototype and initial volume parts of the product life cycle, but can now be used throughout volume production, in some applications. It is also possible to convert from an FPGA to a quasi-ASIC, with a high degree of confidence of success and a relatively low NRE (and hence break-even volume). FPGAs are therefore challenging and displacing ASICs in traditional ASIC application areas. Furthermore, they provide much more flexibility than can be cost-effectively built into an ASIC, thereby fitting the requirements of SDR very well. In common with DSPs, they also tend to come in families, thereby, again, allowing an initial design to take place on a large (potentially overspecified) device with the final device being chosen to just fit the processing requirement. It is also possible (but not necessarily economic) to add IP processor cores into an FPGA (or an FPGA-derived ASIC). This makes possible a single-chip solution in some applications and this may be important for size or reliability reasons (with the improved reliability coming from the reduction in devices and soldered joints).

2.4.1.3 ASICs

The main issue with utilising ASICs [or, more correctly, application-specific signal processors (ASSPs)] within an SDR system lies in their lack of flexibility (or, conversely, the cost of adding flexibility). There are many methods by which flexibility may be introduced within an ASSP, and these include:

• Provision of multiple toolbox functions with flexible input parameters. An example would be a QAM modulator with an input variable to configure it from 16-QAM to 256-QAM (a brief sketch of such a parameterised function follows this list).
• Provision of hardware for all current modulation formats, coding schemes, and so forth in a single (large!) ASSP, with the ability to select between the different paths. This is not strictly flexible in the generic sense; however, it is flexible in its range of functionality: the user will not care how he is provided with service over a range of standards, just that he obtains service at a low cost. The major disadvantage with this option is that it is not really future-proof, unless the system designer has an extraordinary insight into the future trend in mobile communications (and can convince his or her management that he or she is right).
• A combination of one or both of the above with some programmable DSP functionality (e.g., using an embedded DSP core). The key here is in providing enough DSP power to be useful and provide a degree of future-proofing, without designing essentially a DSP device: it would almost certainly be lower cost to buy an off-the-shelf DSP device from a volume vendor.
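As an illustration of the first option, the fragment below sketches a "toolbox" symbol mapper whose modulation order is a runtime parameter. It is a plain, generic Gray-coded square-QAM mapper written for clarity, not a description of any particular ASSP's implementation; in a real device the same behaviour would be fixed in hardware with the order supplied as a configuration input.

```python
import numpy as np

def qam_map(bits, order):
    """Gray-coded square-QAM mapper; 'order' selects 4-, 16-, 64- or 256-QAM."""
    k = int(np.log2(order))              # bits per symbol
    m = k // 2                           # bits per I (or Q) rail
    bits = np.asarray(bits, dtype=int).reshape(-1, k)

    def rail_levels(bit_cols):
        gray = bit_cols.dot(1 << np.arange(m - 1, -1, -1))   # bits -> Gray index
        binary = gray.copy()
        shift = gray >> 1
        while np.any(shift):                                  # Gray -> binary
            binary ^= shift
            shift >>= 1
        return 2 * binary - (2 ** m - 1)                      # odd-integer PAM levels

    i_lvl = rail_levels(bits[:, :m])
    q_lvl = rail_levels(bits[:, m:])
    return (i_lvl + 1j * q_lvl) / np.sqrt(2 * (order - 1) / 3)   # unit mean symbol energy

symbols_16 = qam_map(np.random.randint(0, 2, 4000), order=16)
symbols_256 = qam_map(np.random.randint(0, 2, 4000), order=256)
```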



Development and fabrication costs are also a major consideration in choosing an ASSP route. For example, the break-even costs in going from a 180-nm to a 90-nm feature size increase by a huge factor (between 10 and 100 times). This has a dramatic effect on the business case for an ASSP development.

2.4.2 Alternative Digital Processing Options for BTS Applications

2.4.2.1 Enhanced FPGAs

This type of digital processing option, also known as a configurable computing machine (CCM) [6], essentially adds some application-specific functional blocks or architectural constructs to a standard FPGA device, as a method of providing tailoring or optimisation for a specific market segment (e.g., wireless). There are a number of options within this category and these are mostly tailored (in design goals, at least) toward handset applications. These are summarised in Section 2.4.3.

2.4.2.2 Programmable Application-Specific Standard Product (P-ASSP)

This type of processor consists of a general-purpose core that is supplemented by a range of functionally optimized coprocessors or kernels. These latter elements are optimized for specific signal processing functions, such as equalisation, and allow the commonly used wireless signal processing functions to be implemented in a more optimal manner than would be the case by utilising purely a general-purpose DSP. An example of this type of device is that of the Wireless Systems Processor designed by Morphics [7]. In this processor, multiple devices process different aspects of the signal-processing task in parallel, with dedicated processing elements being targeted at particular parts of the problem. This provides an optimised, yet flexible, architecture and hence is a good compromise solution. It does, however, rely on the IC system designer accurately predicting which functional elements will be required in a range of current and future applications. This is clearly a difficult thing to predict and hence the flexibility of this device will be limited for future applications and the degree of future-proofing it affords is likely to be small. With the design life cycle for a BTS becoming ever shorter, however, this may not be a major issue in practice.

2.4.2.3 Massively Parallel Processor Arrays

A massively parallel array is a processor array consisting of a large number of processors connected by very high-speed on-chip interconnect. Each processor has a comparatively modest processing capability on its own (compared to, say, a single dedicated DSP chip) and is assigned a portion of the overall signal-processing problem. The idea behind this approach is that the available processing power (and hence silicon) is used most efficiently, thereby extracting the optimum performance from a given unit of cost or power consumption. This process does, however, rely on the interchip communication overhead not becoming a significant use of processor resource. It also relies on fast interprocessor communication and on a good mapping of the signal-processing problem across the array of processors. This mapping process is usually undertaken by a specialist tool and hence requires the digital processing engineers to undergo retraining in a very different way of performing their signal processing designs. This can extend the time to market for an initial design and is also risky, since this type of processor is currently only manufactured by small companies (e.g., picoChip [8]). If such a company were to fail, a manufacturer could be left with a large amount of (expensive, non-standard) IP that would then have to be rewritten for an alternative signal processing solution. This problem is common (to varying degrees) to most of the newer processing technologies discussed here.

2.4.2.4 Reconfigurable Compute Fabric

The Reconfigurable Compute Fabric (RCF) device from Freescale Semiconductor is an attempt to provide the benefits of a programmable signal-processing solution, at a cost level and power consumption close to that of an ASIC-based (or ASSP-based) solution. A single device combines a number of RCF cores (six in an MRC6011) into a single computing node. It claims a peak performance of 24 Giga complex calculations per second, for I and Q signals, at 8-bit resolution [9]. The device has a power consumption of around 3W and is therefore only suitable for infrastructure applications (at present). Each RCF core contains the following functions:

1. RISC processor with instruction and data cache;
2. Reconfigurable computing array of 16 cells, each containing: a pipelined multiply-accumulate (MAC) unit; arithmetic, logic, and conditioning units; and a special-purpose complex correlation unit;
3. Large input/output buffers;
4. Single and burst transfer DMA controller.


The device claims that the RISC processor is optimised for efficient C-code compilation—an important element in ensuring that designs are portable and low-risk (from a device availability perspective).

2.4.3 Alternative Digital Processing Options for Handset Applications

Research and development activity is underway, both in academia and in a number of start-up companies, examining ways of achieving reconfigurability and flexibility in baseband processing, while maintaining a low power consumption. This section outlines a number of these activities, both academic and commercial, as it is not clear which (if any) will become successful solutions for SDR handset applications.

2.4.3.1 Garp

The Garp architecture was designed by the University of California at Berkeley as a reconfigurable accelerator for use with general-purpose processors. It aims to solve the problems of long reconfiguration times and low data bandwidths, which have proved to be a deterrent for designers wishing to utilise reconfigurable computing techniques. The Garp architecture, shown in Figure 2.2, combines both a standard processor and a reconfigurable hardware array, with reconfiguration costing only a few cycles of overhead. It has direct access to memory from the reconfigurable core itself, with the standard processor being capable of operating at 1 million instructions per second (1 MIP)—although the overall device operates at a clock speed of only 100 MHz. The reconfigurable hardware within Garp consists of combinatorial logic blocks and programmable wiring (similar to FPGAs), with explicit move instructions from the processor being required to move data between the processor and hardware array. Garp also features a high-level compiler, which can extract C-code instructions and automatically implement sections of code with a high degree of instruction-level parallelism (ILP).

Figure 2.2 Garp processor architecture [10].

Although Garp was not designed explicitly for SDR applications, it contains many of the features that are desirable in SDR applications (e.g., direct memory access, hardware-to-processor transfer via memory, thereby keeping I/O bandwidths low). The main features which would need to be added to Garp, for an SDR application, revolve around the DSP and communications functionality for which the processor would need to be tailored.

2.4.3.2 Algorithm-Specific Instruction Set Processor (ASIP)

This hardware architecture moves away from the general-purpose reconfigurable device idea, realising that a more efficient approach is to utilise prior knowledge of the various standards in order to tailor a hardware accelerator. This accelerator can then perform the highly computationally intensive tasks, required in a specific application, alongside a general-purpose DSP [11]. The architecture of this device is shown in Figure 2.3.

Figure 2.3 Architecture of the ASIP hardware accelerator [6].

The ASIP hardware accelerator itself consists of a number of processing elements, with each being designed to execute a specific class of algorithm (e.g., linear transformations, orthogonal transformations, and so forth). The accelerator shares a common bus with the DSP and any RAM and I/O modules present. These elements may be utilised more than once to compute a particular result (e.g., two passes through an 8-tap FIR to obtain a 16-tap FIR), with the intermediate results being stored in RAM. The configuration RAM provides both read addresses for the processing element data RAM and configuration instructions for the processing element itself. A finite state machine provides the write addresses to the processing elements. The ASIP can therefore operate stand-alone, executing loops as required, and can use an interrupt and direct memory access to supply results back to the DSP.
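The "two passes through an 8-tap FIR" reuse mentioned above can be illustrated in a few lines: a 16-tap filter is split into two 8-tap halves, the second half is run on a delayed copy of the input, and the partial outputs (the intermediate results held in RAM) are summed. The sketch below is a purely illustrative software model of that decomposition, not the ASIP's actual micro-architecture.

```python
import numpy as np

def fir(x, taps):
    """Simple direct-form FIR: y[n] = sum_k taps[k] * x[n-k]."""
    return np.convolve(x, taps)[: len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
h = rng.standard_normal(16)            # the full 16-tap response

# Pass 1: first 8 taps on the input as-is.
partial_1 = fir(x, h[:8])
# Pass 2: last 8 taps on the input delayed by 8 samples.
delayed = np.concatenate([np.zeros(8), x[:-8]])
partial_2 = fir(delayed, h[8:])

y_two_pass = partial_1 + partial_2     # summed intermediate results
y_direct = fir(x, h)                   # reference 16-tap filter
assert np.allclose(y_two_pass, y_direct)
```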

2.4.3.3 Field Programmable Function Array (FPFA)

The FPFA structure [12] forms part of a reconfigurable hardware platform, which also consists of FPGA and general-purpose processor (ARM core) elements. The FPGA elements are intended for bit-wise functions (e.g., P-N code generation) and the general-purpose processor for control functions (e.g., if/then or while/do loops). The FPFA itself is intended for use in repetitive calculations within loops and for computationally intensive DSP tasks—particularly those involving a regular structure. It consists of a number of processor tiles, as shown in Figure 2.4, each of which houses a number of simple processing elements, complete with its own instruction stream. This allows a large number of tasks to run in parallel and also improves the overall chip clock speed and energy consumption.

Figure 2.4 Architecture of an FPFA [12].

The basic architecture of a processor tile is outlined in Figure 2.5. Each processor tile consists of a number of reconfigurable ALUs (five in the case of Figure 2.5), with local memory, a control unit and a configuration unit. The ALUs are intended to execute the inner loops contained within a particular application and load their operands from neighbouring ALU outputs, local registers, or values stored in a look-up table. Reconfiguration of the tile is enabled by storing the ALU configuration in local memory.

Figure 2.5 Internal architecture of a processor tile within a field programmable function array [12].

In some respects this reconfigurable hardware concept is similar to that of the massively parallel processor array discussed in Section 2.4.2.3.

A similar mapping of the computational problem onto the processor array must also take place, with a similar compromise between granularity (in terms of processor size/capability) and communications overhead needing to be struck.

2.4.3.4 Raw

The Raw processor from Massachusetts Institute of Technology (MIT) [6] consists of 16 identical programmable tiles (see Figures 2.6 and 2.7), each of which connects only to its four neighbours and consists of the following elements:

• A static communication router;
• Two dynamic communication routers;
• An eight-stage, in-order, MIPS-style processor;
• A four-stage pipelined floating point unit;
• A 32-kB data cache;
• 96 kB of software-managed instruction cache.

Figure 2.6

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Raw Tile

Architecture of MIT’s Raw microprocessor [6].

Figure 2.7 Internal architecture of a Raw processor tile [6].

The primary benefit of the Raw device architecture is that it can achieve ASIC-like levels of latency from a reprogrammable processor. This is possible because the interconnection between computational units within the microprocessor is exposed to the instruction set. The programmer therefore knows where an instruction will be physically executed and how many transfers between tiles are needed in order for it to be carried out. This permits the compiler to control the transfer of data between tiles, in a similar manner to that which occurs in an ASIC, and to enable tasks to run in parallel on different tiles. Through run-time reconfiguration, enabled by the dynamic control of the system's computational resources by software, the Raw processor thus provides some of the desirable characteristics required by an SDR application, although power consumption also needs to be kept to a minimum for handset applications.

2.4.3.5 Stallion

The Stallion processor from Virginia Tech utilises stream-based processing in order to realise a flexible, high-throughput, low-power configurable computing machine. The concept of stream-based processing involves having a common port for both processing and data packets, with a packet header indicating the type of packet being sent and the module to which subsequent packets should be routed.


Although not specifically designed for low-power wireless communications devices, simulations have shown that reasonable power consumption may be obtained from this architecture [13].

2.4.3.6 Adaptive Computing Machine (ACM)

This device, developed by Quicksilver Technology [14], is shown in Figure 2.8. It is an attempt to provide the computing flexibility of a DSP with the power consumption of an ASIC and is targeted primarily at handset applications, where power consumption is of paramount importance. It could be considered as an enhanced FPGA, in that it has the ability to utilise on-board programmable logic to optimise the gate requirement needed to fulfil a particular signal processing function; unlike an FPGA, however, it can reconfigure this functionality very rapidly. It also has the ability to create a custom data path that will exactly fit the optimum instruction sequence required to implement a given algorithm. This data path design can be stored in software and quickly downloaded, with the result that regular hardware optimisation can occur, thereby cutting the number of execution cycles required for a given task.

Each node in Figure 2.8(a) consists of a number of computational units, each having its own local memory and a configuration memory element. A node is self-sufficient and can execute algorithms that are downloaded in the form of binary files. The nodes are connected through a Matrix Interconnect Network (MIN), which carries data, control information, and the binary algorithm files. Each ACM is also market-specific, with the collection of node types on a given piece of silicon being determined by the needs of a particular market segment. Such a tailored approach avoids the one-size-fits-all philosophy of some processors and leads to an improved prospect of achieving appropriate power consumption levels for a given handheld application.

2.4.3.7 FastMATH™ Processor

The FastMATH™ Adaptive Signal Processor™ from Intrinsity [15] consists of the following three elements:

1. A 2.5-GHz matrix and parallel vector maths unit. This unit provides high-speed parallel data computation for matrix and vector mathematics data types, which are commonly used in adaptive algorithms for wireless applications.
2. A 2.5-GHz MIPS32 processing core. This is a high-performance, industry-standard processor, with widely available design toolsets. It can be used for algorithm adaptation, control, and general processing.
3. A high-speed I/O, using dual RapidIO ports. This primarily enables the partitioning of complex designs across multiple processors—a feature more appropriate for BTS designs than for handheld applications.

Together these elements enable the processor to provide very high-speed reconfigurable processing, which is tailored toward the types of algorithms commonly found in SDR and wireless communications applications.


Figure 2.8 Architecture of the Adaptive Computing Machine: (a) 4-node cluster and (b) Adapt2400 ACM architecture [14].

It is clear from the above discussion that a wide range of options exists for solving the hardware-processing problem in a software defined radio. This range is continually increasing and some of these options will undoubtedly find mainstream acceptance in one form or another.

2.5 Current Technology Limitations

2.5.1 A/D Signal-to-Noise Ratio and Power Consumption

2.5.1.1 Background

An ideal software-defined radio receiver is often considered as an A/D converter connected directly to an antenna, as is shown in Figure 2.1. This section considers this ideal approach and analyses it in terms of the power consumption of the A/D converter. The aim of the analysis is to find the minimum possible value for this power consumption and then to determine from this what levels of performance can realistically be expected from this component in the short and medium term. Since the power consumption of the ADC is an important parameter in determining the overall power consumption of a handset design, for example, it clearly needs to be relatively small for a generic SDR handset to become a realistic proposition. This section will also examine how close state-of-the-art (monolithic IC) ADCs have come to the relevant ideal level of power consumption over the last 12 years or so. From this it can be surmised how quickly technological advances are likely to bring devices close to this ideal value, and hence when suitably high-performance, low-power converters may become a reality.

2.5.1.2 A/D Performance

The specifications for an A/D converter which would be appropriate for use in the ideal software-defined radio shown in Figure 2.1, and summarised in Table 2.1, are given in Table 2.2. Note that these specifications are a compromise relative to those in Table 2.1, yet are still extremely challenging. Although the high sample rate specified in Table 2.2 could be used to effectively realise an extra bit of resolution, it is assumed here that oversampling is necessary to relax the analogue anti-alias and/or IF filtering requirements. If this were not allowed for, the requirements on these components would be onerous, particularly when taking account of the gain and phase flatness requirements of most digital schemes (to meet EVM requirements, for example). The specifications detailed in Table 2.2 are clearly very exacting and are somewhat in excess of those achievable with present technology. They do, however, illustrate the level of ADC performance required for use in a software-defined radio receiver, of the type shown in Figure 2.1, for an acceptable level of RF performance (i.e., a reasonable resistance to blocking, coupled with an adequate sensitivity).

Table 2.2 Specifications for an Ideal Software Defined Radio ADC

Resolution: 20 bits (121.76-dB dynamic range, assuming a perfect converter; this value results from assuming that a >100-dB signal range is required (e.g., from −20 to −120 dBm) and that a 12-dB signal-to-noise ratio is required at minimum sensitivity)
Sample rate: 40 Msps (based on 4× Nyquist sampling of a single UMTS WCDMA carrier, with alias downconversion)
RF input bandwidth: DC/100 MHz−2.2 GHz (to cover PMR, cellular, PCN/PCS, UMTS, mobile satellite)
Spurious-free dynamic range: >121.76 dB (assumed not to be the limiting factor in receiver sensitivity)

2.5.1.3 Generic A/D Converter

A generic A/D converter consists of the elements shown in Figure 2.9 [16]. It contains four main elements:

Figure 2.9 Generic form of an A/D converter for wideband digitisation at IF or RF.



• An anti-alias filter, to remove input signal frequencies which would otherwise alias into the wanted signal band upon being digitised;
• A sample-and-hold circuit, to maintain the input signal to the quantiser at a constant level during quantisation;
• A quantiser, to convert the (now constant-level) analogue voltage into a digital word;
• A digital buffer.





The quantisation function may take place in a variety of ways, including flash, successive approximation, sigma-delta, bandpass sigma-delta, and subranging. The analysis presented in this section is independent of the implementation technology and is based upon the power requirements of the sample-and-hold element alone. It is therefore an extremely optimistic analysis and results in the calculation of a power consumption value well below that which could ever be approached in practice. It is presented as a way of highlighting a fundamental point about the form of A/D conversion currently used in virtually all converters, irrespective of the type of quantiser they employ. Note that although the anti-alias filter of Figure 2.9 is shown as a bandpass element (consistent with the requirements outlined in Table 2.2), it may be a lowpass design in many applications (in particular, if undersampling is not used).

2.5.2 Derivation of Minimum Power Consumption

The analysis presented in this section is based on [17] and assumes that the A/D converter itself consumes no power; the only power supplied to it is therefore in the form of its input signal. The resulting power consumption, calculated using this technique, is therefore a very optimistic minimum possible value for the power consumption of an ADC. In a practical ADC, the conversion circuitry, digital output circuitry, and supervisory functions will all consume significant power, hence adding materially to the values calculated here. The analysis only serves, therefore, to illustrate the theoretical minimum power that a converter could possibly consume, and hence provides a power limit below which it is not possible to go (other than with a radically different architecture—see Section 2.6). The gap between the ideal power consumption presented here and that achievable by current (state-of-the-art) devices is over four orders of magnitude, thus indicating that technology is still some way from the ideal in this area. The figures presented do, however, serve to indicate that the ideal software defined radio architecture, discussed above, may never be a practical reality for a handportable (i.e., battery powered) design.

2.5.2.1 Assumptions

The analysis is fundamentally based upon the use of a sample-and-hold device within the converter and hence may not be valid for all converter types (e.g., flash converters of the type reported in [18]). Given the complexity of flash converters for multi-bit designs, however, it is unlikely that flash devices with 18 or more bits of resolution will be available in the foreseeable future. The converter's power consumption is assumed to come from the input signal, rather than from an external DC power supply, as it is this signal which is being used to charge the capacitance in the sample-and-hold device. In general, the input signal will be buffered within the ADC and hence it is this buffer that would, in practice, supply the power (deriving its power, in turn, from the power supply to the converter circuit). Note, however, that not all high-speed converters utilise buffers, as these will often contribute to unwanted offsets; an example of this type of buffer-less converter is described in [19].

2.5.2.2 Analysis

The dynamic range of an A/D converter is determined by a combination of the peak signal voltage which it can convert and the resolution (and hence quantisation noise) of the conversion process. The quantisation noise power must be equal to, or below, the thermal noise power present within the converter bandwidth, at the input to the converter. If this is not the case, some of the available resolution will be wasted. Once the noise floor is determined, the minimum possible peak input signal level (i.e., the minimum possible full-scale voltage) for the converter then follows (see Figure 2.10). From these two levels, it is possible to calculate the power consumption of the converter based on the minimum charging current of the converter input capacitance; the capacitance value itself is, in turn, based upon the thermal noise floor (kT/C_i, where k is Boltzmann's constant = 1.38 × 10^−23 J/K, T is the device temperature in kelvin, and C_i is the input capacitance of the converter in farads).

Figure 2.10 Dynamic range of an A/D converter.

The signal to quantisation noise ratio (dynamic range) of an A/D converter is given by:

    D_C = 6n + 1.76 dB                                              (2.1)

where n is the resolution (number of bits) of the converter. The converter noise floor must appear at a level of at least D_C decibels below the full-scale input voltage level, V_fs, in order for the converter resolution to be fully realised (see Figure 2.10). Hence:

    D_C = 20 log10(V_fs / e_nq)                                     (2.2)

where e_nq is the noise voltage level of the quantisation noise floor. Combining (2.1) and (2.2) gives:

    e_nq = V_fs / 10^((6n + 1.76)/20)                               (2.3)

The mean-square quantisation noise voltage is therefore:

    e_nq^2 = V_fs^2 / 10^((6n + 1.76)/10)                           (2.4)

For the converter to be able to fully utilise this dynamic range, the quantisation noise level must be greater than or equal to the thermal noise floor of the converter. This thermal noise floor is given by [20]:

    e_nt^2 = kT / C_i                                               (2.5)

Equating the two noise floors yields [from (2.4) and (2.5)]:

    e_nq^2 = e_nt^2 = V_fs^2 / 10^((6n + 1.76)/10) = kT / C_i       (2.6)

Hence:

    C_i = kT · 10^((6n + 1.76)/10) / V_fs^2                         (2.7)

For the converter to accurately convert the input voltage it is presented with, this input capacitance must be capable of being charged to the full-scale voltage of the converter within the converter sampling interval (and preferably well within this interval). A charge of Q_i coulombs must therefore be transferred to the input capacitance within the sampling interval, t_s, giving:

    Q_i = I_i · t_s                                                 (2.8)

where I_i is the converter input current and

    Q_i = C_i · V_fs                                                (2.9)

Hence:

    C_i = I_i · t_s / V_fs                                          (2.10)

Combining (2.7) and (2.10) gives:

    I_i = (kT / (t_s · V_fs)) · 10^((6n + 1.76)/10)                 (2.11)

Finally, the power consumed in this process may be ascertained from:

    P_i = I_i · V_fs                                                (2.12)

giving, from (2.11):

    P_i = (kT / t_s) · 10^((6n + 1.76)/10) watts                    (2.13)

It is important to note that this power consumption is independent of V_fs, the full-scale voltage of the converter. In many systems, the quantisation noise may be reduced, artificially, to a level below that of the thermal noise floor (e.g., by decimation); as a result, it is the thermal noise floor which is the ultimate limit on system performance. It is also possible, however, to use this analysis to assess systems which employ a sampling rate too low to allow decimation. Results for both are included in the following.

2.5.2.3 Factor of Merit for Converter Efficiency

The current generation of A/D converters is very far from meeting the theoretical limit discussed above (by over four orders of magnitude, as has already been noted), thereby indicating that technology has a long way to go before the power consumption values derived above can be approached. Such values do, however, serve to indicate that the ideal software defined radio architecture, shown in Figure 2.1, may never become a practical reality for a handportable (i.e., battery powered) device, without a revolution in A/D converter technology. Such a revolution may come from the use of Josephson junctions and superconducting technology (see Section 2.6), although this technology itself has many unresolved issues, even for base-station designs, and hence also may never allow the ADC performance values outlined above to be realised in a handset design.


The power consumption performance of an ADC can be quantified by the energy per conversion and per unit of resolution, calculated as follows. Combining (2.5) and (2.10) gives:

    (S/N)^2 = V_fs^2 / e_nt^2 = V_fs · I_i · t_s / kT = P_i / (kT · f_s)     (2.14)

Hence, for a non-ideal converter in which the conversion energy, E_CR, exceeds the ideal value of kT:

    E_CR = P_i / (f_s · (S/N)^2) joules                                      (2.15)

where f_s is the sampling frequency and (S/N) is the signal to noise ratio of the converter. The ratio E_CR/kT represents an excess power consumption factor for an ADC. It can therefore be used to define a power consumption factor of merit for an A/D converter:

    M = E_CR / kT                                                            (2.16)

An ideal converter would have a factor of merit of unity.

2.5.3 Power Consumption Examples

Taking the sample-rate shown in Table 2.2, the corresponding power consumption figures, for a range of values of ADC resolution, are shown in Figure 2.11. In generating this figure, it was assumed that a 3-dB margin would be necessary between the quantisation noise floor and the thermal noise floor for the converter to operate over its full, usable, dynamic range (as defined by its resolution). This is an optimistic assumption, with a more realistic value being discussed later. An alternative way of viewing this is to think of it as a 3-dB noise figure for the converter, assuming that decimation is employed to reduce the quantisation noise floor. The results shown in Figure 2.11 were derived from (2.13) with the sampling frequency set to 40 MSPS. This graph shows that for the 20-bit resolution converter chosen in Table 2.2, the theoretical minimum power consumption would be around 500 mW when operating at 40 MSPS. This is a high value for potential application in a handset design, particularly when considering the many other items which must also consume significant power in such an application (e.g., the fast DSP processor(s), memory, linear transmitter, and so forth); it would probably be considered excessive in most designs. Selecting a resolution above this value would obviously increase the power consumption still further and this would certainly be unacceptable for handset applications. It may also be considered excessive for base-station use (particularly the consumption of a 24-bit device) due to the package cooling problems that would result and the consequent reliability issues.

Figure 2.11 Minimum theoretical power consumption for an A/D converter, operating at 40 MSPS, for various values of resolution.
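The minimum-power expression in (2.13) is straightforward to evaluate numerically. The short sketch below (Python; it assumes a device temperature of 290K and the 3-dB noise-floor margin used in deriving Figure 2.11, neither of which is stated explicitly in the text) reproduces the figures quoted in this section; the function name and structure are purely illustrative.

    k = 1.38e-23   # Boltzmann's constant (J/K)
    T = 290.0      # assumed device temperature (K)

    def min_adc_power(n_bits, f_s, margin_db=3.0):
        """Theoretical minimum ADC power from (2.13), with an additional margin
        (in dB) between the quantisation and thermal noise floors."""
        dynamic_range_db = 6 * n_bits + 1.76          # (2.1)
        return k * T * f_s * 10 ** ((dynamic_range_db + margin_db) / 10)

    print(min_adc_power(20, 40e6))    # ~0.48 W: the "around 500 mW" figure for the Table 2.2 converter
    print(min_adc_power(14, 125e6))   # ~3.8e-4 W: the ideal figure quoted below for a
                                      # 14-bit, 125-MSPS converter

Scaling the 20-bit case to a 5-GSPS sample rate simply multiplies the result by the ratio of sample rates, giving a value in the tens of watts, consistent with the direct-RF-sampling discussion that follows.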

In Figure 2.12 the impact of sample rate upon converter power consumption is highlighted, for a range of values of converter resolution. This clarifies what could be expected to happen if direct sampling of the RF waveform (as opposed to alias downconversion) were employed. Direct sampling of the RF waveform has the advantage of making the anti-alias filter design more realistic; this would result in a system that is significantly closer to the ideal scenario. Taking the 20-bit resolution discussed in Table 2.2, but now assuming a sample rate of around 5 GSPS to accommodate direct Nyquist sampling of the whole of the frequency range discussed in that table (DC to 2.2 GHz), results in a power consumption of around 50W. This is clearly excessive for a handset application and would also prove problematic from a packaging perspective if aimed at base-station applications. Even reducing the sample rate by a factor of 10 leaves the resulting power consumption too high for any modern handset application, while providing direct RF sampling of only HF and VHF waveforms.

This discussion has been based around the theoretical minimum power consumption values derived from this analysis and, as has already been noted, these are hopelessly optimistic figures when compared to current converter designs. As an example, consider the ADS5500 from Texas Instruments. It is a 125-Msps, 14-bit design and consumes 750 mW, which is relatively good for such (at present) high performance. Comparing this with the theoretical minimum power consumption for this specification yields a figure of 3.77 × 10^−4 W, which is around three orders of magnitude smaller. This device is close to the current state of the art in terms of power consumption versus speed and resolution; it therefore gives a good indication of how far above the ideal power consumption level current devices are operating.

Figure 2.12 Minimum theoretical power consumption for an A/D converter over a range of sampling rates.

The earlier figures and discussion were deliberately optimistic in two areas:

1. The margin required between the thermal and quantisation noise floors was set at 3 dB—a more realistic value would be 10 dB (giving the converter a theoretical noise figure of 10 log(1.1) = 0.4 dB, compared to a noise figure of 1.8 dB for the assumption used in deriving Figure 2.12) and this is assumed in Figure 2.13. This will allow a greater margin for added noise from, for example, practical input buffer stages.
2. The whole of the sample time was allowed for input (sample-and-hold) capacitance charging—this is again optimistic, as the whole point of a sample and hold is to provide a steady voltage to allow an accurate conversion process to take place. A more realistic assumption might be to allow 10% of the sample time for the input capacitance to charge—the effect of this is shown in Figure 2.14.

Figure 2.13 Minimum theoretical power consumption for an A/D converter over a range of sampling rates, assuming that a 10-dB margin is required between the thermal and quantisation noise floors.

Comparing Figure 2.13 with Figure 2.12, again using the example of the 20-bit converter highlighted in Table 2.2, it can be seen that the power consumption increases from 500 mW to almost 2.5W, at a sample rate of 40 MSPS. This represents a 7-dB increase in power consumption, which is directly equivalent to the 7-dB change in the noise-floor differential. The relaxation in noise performance for the converter therefore results in a substantial increase in its power consumption, leading to the conclusion that it is well worthwhile attempting to minimise the differential between the two noise floors whenever possible.

Figure 2.14 shows the impact of reducing the time allowed for capacitor charging to one tenth of the sample time (with the remainder then being available for the conversion operation itself). Comparing this with Figure 2.12 shows that, again, the increase in power consumed is in direct proportion to the change (in this case, the change in time), as the power required for the 20-bit, 40-MSPS converter has increased from around 500 mW to around 5W. This represents a substantial increase in power consumed and indicates that the other aspects of conversion (i.e., all aspects excluding the sample-and-hold operation) should occupy the minimum amount of time possible, in a good design.

These discussions have assumed Nyquist sampling, with no account taken of the ability to trade sampling rate for resolution. It is, of course, possible to achieve a higher effective resolution using oversampling techniques and thereby gain an effective increase in the number of bits of available A/D resolution. The additional number of (effective) bits realised by this approach, N_eb, is given by:

    N_eb = 10 log10(F_OS / 2) / 6                                   (2.17)

where F_OS is the oversampling factor, given by:

    F_OS = f_s / B                                                  (2.18)

and B is the RF channel bandwidth. It would therefore be possible, for example, to utilise a 200-MSPS, 16-bit converter (not yet available, but potentially available in the near future) to sample a GSM-EDGE waveform (200-kHz bandwidth) and achieve an equivalent performance to a 20.5-bit converter operating at its Nyquist rate. The increase in sampling rate does, however, lead to a proportionate increase in power consumption [from (2.13)] and this negates the power consumption benefits of the extra resolution. However, this method may well enable high-resolution converters to be realised, at useable effective sampling rates, in the near term; conventional methods of achieving a similar resolution will probably take much longer (see Section 2.5.4.3).
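The oversampling trade described by (2.17) and (2.18) is easy to check numerically; the following sketch is an illustrative calculation only and simply reproduces the GSM-EDGE example just given.

    import math

    def effective_extra_bits(f_s, channel_bw):
        """Extra effective bits gained from oversampling, per (2.17) and (2.18)."""
        f_os = f_s / channel_bw                  # oversampling factor
        return 10 * math.log10(f_os / 2) / 6

    # 200-MSPS converter sampling a 200-kHz GSM-EDGE channel
    extra = effective_extra_bits(200e6, 200e3)
    print(extra, 16 + extra)   # ~4.5 extra bits, i.e. roughly 20.5 bits effective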

Figure 2.14 Minimum theoretical power consumption for an A/D converter over a range of sampling rates, assuming that only 10% of the sample time is available for charging of the input capacitance.

2.5.3.1 Factor of Merit

The concept of a factor of merit was introduced in (2.16) and it is possible to use this to compare the performance of a range of current and past state-of-the-art ADC devices (i.e., devices that represented the state of the art at introduction). Table 2.3 details this comparison for converters available between approximately 1992 and 2004 (many of the earlier converters are now no longer in production). It is evident from this table that state-of-the-art devices, both past and present, are many orders of magnitude above the theoretical performance level. This further underlines the fact that suitable high-speed, high-resolution converters, appropriate for use in the ideal software defined radio architecture, are somewhat of a challenge, based on current techniques and understanding.

Table 2.3 Factor of Merit for a Selection of High-Speed A/D Converter ICs (from 1992 to 2004)

Company              Part Number   Sample Rate   Resolution   Power Cons.   E_CR (×10^−15         M
                                   (MSPS)        (bits)       (mW)          J/unit of SNR)        (dimensionless)
Maxim                MAX1427       80            15           1970          0.62                  1.55E+05
Texas Instruments    ADS5500       125           14           750           0.60                  1.50E+05
ADI                  AD6645        105           14           1500          0.90                  2.25E+05
ADI                  AD9245        80            14           410           0.23                  0.579E+05
Texas Instruments    ADS5422       65            14           1200          1.16                  2.91E+05
Texas Instruments    ADS5421       40            14           900           0.71                  1.78E+05
Burr Brown*          ADS800        40            12           390           0.58                  1.40E+05
ADI                  AD9042        50            12           600           0.72                  1.73E+05
Comlinear**          CLC949        20            12           300           0.89                  2.16E+05
ADI                  AD9220        10            12           250           1.49                  3.60E+05
Burr Brown           ADS802        10            12           250           1.49                  3.60E+05
Analogic             ADC3120       20            14           5000          0.93                  2.25E+05
Harris               HI5808        10            12           300           1.79                  4.32E+05
Harris               HI5810        20            10           150           7.15                  1.73E+06
ADI                  AD9023        20            12           1500          4.47                  1.08E+06
Comlinear            CLC938        30            12           6570          13.05                 3.15E+06
ADI                  AD9020        60            10           3400          54.04                 1.31E+07
ADI                  AD9014        10            14           12800         4.77                  1.15E+06

* Now part of Texas Instruments. ** Now part of National Semiconductor.
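As an illustration of how the table entries are obtained, the short sketch below (Python) reproduces the E_CR and M values for the ADS5500 row using (2.15) and (2.16). The ~70-dB SNR and 290-K temperature used are assumptions (representative of the device's datasheet performance) rather than values stated in this text.

    k = 1.38e-23   # Boltzmann's constant (J/K)
    T = 290.0      # assumed temperature (K)

    def adc_figure_of_merit(power_w, f_s, snr_db):
        """Energy per conversion per unit of SNR, (2.15), and factor of merit M, (2.16)."""
        snr_power_ratio = 10 ** (snr_db / 10)        # (S/N)^2 expressed as a power ratio
        e_cr = power_w / (f_s * snr_power_ratio)
        return e_cr, e_cr / (k * T)

    # ADS5500: 750 mW at 125 MSPS; ~70-dB SNR assumed
    e_cr, m = adc_figure_of_merit(0.75, 125e6, 70.0)
    print(e_cr, m)   # ~0.6e-15 J and M ~ 1.5e5, matching the table row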

2.5.4 ADC Performance Trends

2.5.4.1 Power Consumption

As has been noted above, all existing converters are many orders of magnitude above the theoretical minimum power consumption for their resolution and conversion speed. Power consumption does, of course, decrease over time (for a given performance level), as process evolution and design optimisation help to bring down the excess power consumed. This process will continue until the theoretical minimum limit is approached, with excess power consumption levels in the order of 10 to 100 times, perhaps. Examining the decrease in the energy spent per conversion per unit of resolution (E_CR) for commercial high-speed ADCs during the last 12 years, it is evident that the factor of merit has decreased almost consistently by a factor of 10 every 5 years.

Technologies exist that may overcome the minimum power consumption limitation outlined above. It is possible, for example, to make an ADC with a sample-and-hold circuit which does not need to reset between samples; such a device is described in [21]. The same article does, however, also indicate that the sample-and-hold circuit consumes only a fraction of the power of the complete ADC, at around 10–15%. This indicates that the other technology areas contained within an ADC must make significant progress before the sample-and-hold circuit dominates power consumption.

2.5.4.2 Analogue Bandwidth

The analogue bandwidth of a converter is determined by the gate length of the active devices used in its sample-and-hold circuit. It is given by:

    B = k_a / L^2  Hz                                               (2.19)

where L is the gate length of the active device and ka is a constant. Over recent years, this gate length has reduced by a factor of 2 every 4 years, leading to an increase in the analogue bandwidth for an ADC of 4 times over the same 4-year period. Typical analogue bandwidths for, say, 14-bit converters are currently in the low hundreds of megahertz. It is reasonable, therefore, to predict that they will have reached the 2.2-GHz specification of Table 2.2 before the end of the current decade (i.e., before 2010).

2.5.4.3 Sample Rate and Resolution

Considerable research is being directed toward improved converter performance and the large wireless marketplace for this type of technology will ensure that this focus continues. Examination of the state of the art for monolithic converter devices over the past decade or so indicates that, for a given resolution, the available sample rate increases by roughly a factor of 10 for each decade. This is an indication of a form of Moore's law [22] for ADCs, although it is not quite as straightforward as is the case with processors. With processors, computational power (MIPS) is the main judgement criterion; in the case of an ADC, both resolution and sample rate are of (arguably equal) importance. Research and development are therefore devoted to both aspects of performance and the emphasis of this R&D can change with time and the perceived range of applications. On this basis, it can be predicted that a 16-bit, 5-GSPS ADC will be available within the next 20 years (i.e., before 2024). Such a part would be capable of sampling all existing PMR, cellular, PCS, and WCDMA RF waveforms directly, with a useful dynamic range.

Alternatively, it could be stated that, for a given sampling rate, ADC resolution improves at a rate of roughly 5 bits per decade. This means that the specification outlined in Table 2.2 will be met within the next decade (i.e., before 2014). Combining both of the above, it could be predicted that a 20-bit, 5-GSPS ADC will be available in around 25 years (i.e., before 2030). Such a converter would be capable of direct RF sampling of all existing PMR, cellular, PCS, and WCDMA standards, without the need for analogue filtering (other than anti-alias filtering), with sufficient dynamic range for most civil radio applications. Of course, all of these existing standards will have been replaced (probably a number of times) within that period; however, the frequency bands may well remain allocated to civil personal communications, thereby allowing the new standards to benefit from direct RF sampling.

The only major issue is that of power consumption. The reasons behind this issue have already been covered above and it may well prove to be the major problem in realising the above goals, particularly for portable devices.

2.6 Impact of Superconducting Technologies on Future SDR Systems

The adoption of a superconducting technology brings the potential for a major shift in the architecture and capability of a software-defined radio. This type of technology will only be credible for base-station applications for the foreseeable future; however, it has some quite remarkable potential benefits in this application area. The earlier discussions on technology development in silicon ADCs, and the predictions made about when particular levels of performance will be reached, did not take account of the application of Josephson junctions and superconducting quantum interference device (SQUID) architectures to the field of ADC design [23]. If such techniques are taken into account, the predictions made above may well be wildly pessimistic (in terms of pure technological ability, taking no account of cost). For example, a sample rate of 20 GSPS was reported in 2001 [24] for a delta-sigma ADC, with rates currently in the range of 40 GSPS being claimed [25]. This latter part has a claimed specification which includes an SNR of >57 dB over a 20-MHz bandwidth at 2.23 GHz (based around a bandpass sigma-delta architecture). These figures are not yet adequate for the levels of SDR performance outlined in this chapter; however, the potential is there for this technology to meet the requirements outlined here, somewhat ahead of any silicon solution (if indeed silicon products ever manage to meet such exacting requirements). The main advantages claimed for a superconducting SDR solution include:

• Very high-speed digital logic (~50 times faster than silicon LSI);
• Very low power dissipation (10,000 times lower than for conventional semiconductor technologies). This figure does not take account of the power consumption of the cryocooler and vacuum pump (if required);
• Very high accuracy (5 parts per billion accuracy at 10V);
• Very high SFDR for both ADCs and DACs (due to the fundamentally different way that quantisation takes place in a superconducting converter);
• Very high sensitivity (claimed to be 60 dB better than a conventional semiconductor front end);
• Very low noise (the system is essentially thermal noise-free);
• Ideal digital interconnects within an LSI chip (no R-C delay, hence speed-of-light transmission);
• Large feature size and hence low mask costs, simple fabrication, and so forth. As an example, the 40-GSPS ADC discussed above was fabricated using 3-µm lithography (current silicon ADCs are typically fabricated using 0.25-µm lithography or less).

There are, however, a number of obvious disadvantages (at least at present):

• Requirement for cryocoolers and (often) vacuum pumps. These are expensive items (particularly the cryocooler) and are also mechanical devices, leading to a reliability-cost compromise. Many studies, including some in which the author has had a peripheral involvement, have shown that cryocoolers can be made to be extremely reliable (MTBFs of many tens of years); however, these have all tended to be expensive (often military) devices and certainly higher cost than the wireless marketplace has traditionally accepted.
• Packaging costs. The requirement to maintain a temperature close to absolute zero (4.2–5K, in some cases [26]) or around 70 or 80K (for high-temperature superconductors [27]) leads to a requirement for both airtight seals and good thermal insulation. Both of these are expensive to achieve.
• Size. This has also, traditionally, been an issue; however, the size of the mechanical components involved has come down significantly in recent years, to the point where it is now close to being comparable with some larger base-station installations. Size may therefore become less of an issue for superconducting wireless solutions.

Essentially, the above disadvantages can be summarized as a major cost issue. At present, superconducting solutions are a very long way from the cost of equivalent (at least in terms of functionality) conventional solutions. If this issue can be overcome, however, they have the potential to be an excellent fit with the requirements of all parts of an SDR base station (with the exception of the RF power amplifier): DSP, digital upconversion and downconversion, ADC and DAC, and LNA.

References

[1] Kenington, P. B., "Emerging Technologies for Software Radios", IEE Electronics and Communications Engineering Journal, Vol. 11, No. 2, April 1999, pp. 69–83.
[2] Trans-European Trunked Radio (TETRA): Conformance Testing Specification, Part 1: Radio, ETS 300 394-1, March 1996.
[3] Digital European Cordless Telecommunications (DECT) Common Interface, Part 2: Physical Layer, ETS 300 175-2, October 1992.
[4] "Submission of Proposed Radio Technologies: The ETSI UMTS Terrestrial Radio Access (UTRA) ITU-R RTT Candidate Submission", ETSI SMG2, submitted January 29, 1998; http://www.itu.ch/imt/.
[5] Kenington, P. B., "Linearised Transmitters—An Enabling Technology for Software-Defined Radio", IEEE Communications Magazine, Vol. 40, No. 2, February 2002, pp. 156–162.
[6] Srikanteswara, S., et al., "An Overview of Configurable Computing Machines for Software Radio Handsets", IEEE Communications Magazine, Vol. 41, No. 7, July 2003, pp. 134–141.
[7] Zhang, N., and R. W. Brodersen, "Architectural Evaluation of Flexible Digital Signal Processing for Wireless Receivers", Proc. of 34th Asilomar Conference on Signals, Systems and Computers, Vol. 1, October 29–November 1, 2000, pp. 78–83.
[8] Baines, R., and D. Pulley, "A Total Cost Approach to Evaluating Different Reconfigurable Architectures for Baseband Processing in Wireless Receivers", IEEE Communications Magazine, January 2003, pp. 105–113.
[9] Freescale Semiconductor, "MRC6011 Reconfigurable Compute Fabric", Product Brief MRC6011PB, Rev. 1, October 2004; http://www.freescale.com.
[10] Hauser, J. R., and J. Wawrzynek, "Garp: A MIPS Processor with a Reconfigurable Coprocessor", Proc. of 5th Annual IEEE Symposium on FPGAs for Custom Computing Machines, April 16–18, 1997, pp. 12–21.
[11] Brakensiek, J., et al., "Software Radio Approach for Re-Configurable Multi-Standard Radios", 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Vol. 1, September 15–18, 2002, pp. 110–114.
[12] Heysters, P. M., et al., "A Reconfigurable Function Array Architecture for 3G and 4G Wireless Terminals", Proc. of 2002 World Wireless Congress, San Francisco, CA, May 2002, pp. 399–404.
[13] Srikanteswara, S., et al., "Soft Radio Implementations for 3G and Future High Data Rate Systems", IEEE Global Telecommunications Conference 2001, GLOBECOM '01, Vol. 6, November 25–29, 2001, pp. 3,370–3,374.
[14] Plunkett, B., and J. Watson, "Adapt2400 ACM: Architecture Overview", Quicksilver Technology; http://www.quicksilvertech.com.
[15] http://www.intrinsity.com.
[16] Wepman, J. A., "Analog-to-Digital Converters and Their Applications in Radio Receivers", IEEE Communications Magazine, Vol. 33, No. 5, May 1995, pp. 39–45.
[17] Kenington, P. B., and L. Astier, "Power Consumption of A/D Converters for Software Radio Applications", IEEE Trans. on Vehicular Technology, Vol. 49, No. 2, March 2000, pp. 643–650.
[18] Reyhani, H., and P. Quinlan, "A 5V 6-b 80 Ms/s BiCMOS Flash ADC", IEEE Journal of Solid-State Circuits, Vol. 29, No. 8, August 1994, pp. 873–878.
[19] Yuan, J., and C. Svensson, "A 10-bit 5-MS/s Successive Approximation ADC Cell Used in a 70-MS/s ADC Array in 1.2-µm CMOS", IEEE Journal of Solid-State Circuits, Vol. 29, No. 8, August 1994, pp. 866–872.
[20] Smith, J., Modern Communication Circuits, New York: McGraw-Hill, 1986, Chapter 3.
[21] Kim, K. Y., N. Kusayanagi, and A. A. Abidi, "A 10-b, 100-MS/s CMOS A/D Converter", IEEE Journal of Solid-State Circuits, Vol. 32, No. 3, August 1997, pp. 302–311.
[22] Moore, G. E., "Cramming More Components onto Integrated Circuits", Electronics, Vol. 38, No. 8, April 19, 1965.
[23] Lee, G. S., and D. A. Petersen, "Superconductive A/D Converters", Proceedings of the IEEE, Vol. 77, No. 8, August 1989, pp. 1,264–1,273.
[24] Mukhanov, O. A., et al., "High-Resolution ADC Operation Up to 19.6 GHz Clock Frequency", Supercond. Sci. Technol., Vol. 14, 2001, pp. 1,066–1,070.
[25] HYPRES Inc., "Benefits of Superconducting Microelectronics—Quantum Leap Increase in Performance and Decrease in Cost: Commercial Wireless Base Stations", February 2004; http://www.hypres.com.
[26] ter Brake, H. J. M., "Cryogenic Systems for Superconducting Devices", in H. Weinstock, (ed.), Applications of Superconductivity, Boston, MA: Kluwer, 2000.
[27] Kenington, P. B., et al., "Transposer Systems for Digital Terrestrial Television", IEE Electronics and Communications Engineering Journal, February 2001, pp. 17–32.

CHAPTER 3
Flexible RF Receiver Architectures

3.1 Introduction

The concept of flexibility in a receiver breaks down into two main areas: that of flexibility in the modulation format, coding, and framing and that of flexibility in terms of RF frequency (i.e., the ability to cover multiple bands, or provide general coverage, which is defined as covering all bands between a declared minimum and maximum frequency). This latter area, frequency flexibility, is certainly the more challenging of the two and is a concept which is the subject of much research. The former area has been much more widely addressed and most commercial communications receiver designs employ many of its basic principles, even if they do not aim to provide a wide choice of modulation formats. Both concepts are covered in this chapter and Chapter 4, with a range of ideas being presented to enable the provision of frequency flexibility. Many of these have not yet been implemented in commercial designs and are still the subject of ongoing research; however, they are presented here as a set of basic concepts for further development.

3.2 Receiver Architecture Options

3.2.1 Single-Carrier Designs

3.2.1.1 Analogue Quadrature Receiver Design

In contrast with the area of transmitter design, where the requirement for linearity is a relatively recent phenomenon, receivers have needed to preserve signal amplitude information and, in particular, dynamic range in almost all designs over the years. Thus, most receivers, even those for FM and other constant-envelope systems, are inherently linear for the majority of their RF and IF paths. The translation from a conventional receiver design to a linear receiver design is therefore more straightforward than for an equivalent transmitter and may only involve alterations to the detection and (possibly) AGC stages.

A simplified, single-band flexible receiver architecture is shown in Figure 3.1. Its flexibility stems from the use of a DSP as the baseband demodulation function; it can thereby demodulate any modulation format within its processing and data conversion bandwidth.

Figure 3.1 Simplified linear receiver architecture.

Note that if a variety of different modulation formats are to be received by the same radio architecture, then the desired channel bandwidths must be carefully considered. For example, if the receiver is to handle both GSM (200-kHz bandwidth) and PDC (30-kHz bandwidth), then the IF filter, anti-alias filters, and A/D input bandwidth (and hence sampling rate) must be chosen based on the wider of the two bandwidths (more than 200 kHz for the IF filter and more than 100 kHz each for the anti-alias filters and A/D converter input bandwidths). If this is done, then it is possible for the IF chain and A/D converters to experience a wide dynamic range of signals when in PDC mode, since at least six PDC channels could appear in the IF bandwidth. This leads to a requirement for a greater instantaneous dynamic range in the IF and A/D converters than might otherwise be necessary. This problem may be overcome by the use of flexible baseband filtering employing, for example, switched-capacitor techniques, although such techniques bring their own problems (e.g., noise). Note that the flexibility of the processing in the DSP allows many of the conventional receiver functions to be implemented in that part of the system. Examples include:

• Detection/demodulation of the modulation format;
• Fast AGC (e.g., by feedforward techniques);
• AFC, either by the use of internal (to the DSP) oscillators for frequency translation, or by pulling of the external frequency standard (not shown in Figure 3.1);
• Companding for analogue voice;
• Deinterleaving and decoding/error correction of data.




3.2.1.2 Digital IF Receiver Architecture

An alternative, single-band flexible receiver architecture is shown in Figure 3.2. Here the quadrature downconversion function is contained within the DSP, and this has the advantage that perfect quadrature accuracy can be obtained, without the presence of DC offsets. This is usually performed by ensuring that the final IF (labelled baseband IF in Figure 3.2) is at a frequency sufficiently high that some channel selection can be performed, but sufficiently low that a sensible A/D and DSP processing bandwidth results. This compromise is currently around the 10−50-MHz region, but continues to increase as A/D converter technology advances. The minimum frequency is determined by the requirement that at least a single channel must

Figure 3.2 Digital IF-based linear receiver architecture.

be capable of being Nyquist sampled at the A/D converter (10 MHz being the minimum requirement, approximately, for 3GPP WCDMA). Some allowance should, of course, be made for frequency drift of the receiver local oscillator in selecting this frequency, if frequency correction is to be performed within the DSP. For wideband systems (e.g., CDMA), this is generally negligible. Allowance must also be made for the roll-off of the IF filter and hence the potential for adjacent-channel energy to enter the front-end. This will also force the baseband IF higher in frequency.

3.2.1.3 Digital Processing for Digital IF Reception

Having sampled the baseband IF in the above architecture, a digital IF is created. This must typically be mixed down to form a complex baseband signal (i.e., to form baseband I and Q components). This can be performed as shown (conceptually) in Figure 3.3. The digital IF signal, created by sampling the analogue baseband IF signal at a rate fs, is mixed with a quadrature (numerical) oscillator running at exactly fs/4. This can be achieved by multiplying the digital IF samples by the periodic sequences [1, 0, −1, 0] for the real channel and [0, −1, 0, 1] for the imaginary channel. The resulting baseband I and Q streams are then filtered by separate finite impulse response (FIR) low-pass filters, to form the required baseband digital signals. These can then be passed for subsequent processing (e.g., demodulation), as required. In practice, it is more computationally efficient to remove the samples multiplied by zero from the subsequent FIR filtering process. It is therefore possible to

Figure 3.3 Conceptual process of digital quadrature demodulation.
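As a concrete illustration of the fs/4 quadrature downconversion and FIR filtering just described, the following sketch (an illustrative Python fragment, not taken from the text; the filter length, cutoff, and test tone are arbitrary choices) generates a digital IF tone, applies the [1, 0, −1, 0] and [0, −1, 0, 1] mixing sequences, and lowpass filters the results to produce baseband I and Q samples.

    import numpy as np

    fs = 40e6                       # assumed sample rate (Hz)
    n = np.arange(4096)
    f_if = fs / 4 + 25e3            # test tone offset 25 kHz from the fs/4 digital IF
    dig_if = np.cos(2 * np.pi * f_if / fs * n)

    # fs/4 quadrature mixing sequences: cos -> [1, 0, -1, 0], -sin -> [0, -1, 0, 1]
    mix_i = np.tile([1, 0, -1, 0], len(n) // 4)
    mix_q = np.tile([0, -1, 0, 1], len(n) // 4)

    # Simple lowpass FIR (64-tap windowed sinc, ~1-MHz cutoff) for channel filtering
    taps = np.sinc(np.arange(-32, 32) * 2 * 1e6 / fs) * np.hamming(64)
    taps /= taps.sum()

    i_bb = np.convolve(dig_if * mix_i, taps, mode="same")
    q_bb = np.convolve(dig_if * mix_q, taps, mode="same")
    # i_bb + j*q_bb is now a complex baseband tone at roughly 25 kHz

Note that half of the mixed samples are zero, which is exactly the redundancy exploited by the merged mixer/filter structure discussed next.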


devise an architecture in which the complex mixing and FIR filtering processes are merged [1]. This is shown in Figure 3.4. In this architecture, two identical filters are shown in which the mixer is realised in a manner such that, for the real part, only even-order filter coefficients are used, and for the imaginary part, only odd-order coefficients are necessary. The required signs for the coefficients, together with the use of double clock delays, are both also shown in Figure 3.4. Note that this structure dictates that the number of FIR filter coefficients must be a multiple of 4.

3.2.2 Multi-Carrier Receiver Designs

The multi-carrier receiver concept is an extension of the digital IF receiver and is shown in Figure 3.5. In this case, multiple quadrature downconversions are performed in the digital domain using separate numerically controlled oscillators (NCOs). Channel selectivity is provided using digital lowpass filtering on the resulting I and Q baseband signals; as a consequence, the selectivity achieved can be very good. This approach to a multi-carrier receiver problem, such as a cellular BTS, has the significant advantage of a considerable saving in RF hardware over an approach involving a number of separate receivers. In the case of a military application, such as a surveillance receiver, it allows a large number of channels to be monitored simultaneously at a relatively modest cost and with a small device. A multiple-receiver design would quickly become unwieldy in this case.

3.2.3 Zero IF Receiver Architectures

A single downconversion receiver was first proposed by Colebrook in 1924 [2], only 6 years after Armstrong introduced the superheterodyne concept [3]. Colebrook also coined the term homodyne to describe his single-downconversion concept,

Figure 3.4 Combined mixing and FIR filtering process for conversion from a digital IF to complex baseband. (From: [1]. © 2005 IEEE. Reprinted with permission.)

Figure 3.5 Multi-carrier receiver architecture, based on a digital IF.


although this differs from modern direct conversion receivers, in that a true homodyne receiver derives its LO (local oscillator) directly from, for example, the transmitter, or from self-oscillation of the active device, and does not use a separate oscillator. Most recent single-conversion receivers, for SDR or other demanding communications applications, utilise a separate LO synthesiser and tune this in order to receive the desired channel(s). A single downconversion receiver, which is amenable to both single and multi-carrier operation, is shown in Figure 3.6. Here a direct-conversion or zero-IF solution is employed, with quadrature downconversion taking the RF signal directly to baseband. This is potentially a very attractive option for the following reasons:

• Channel selection. The use of digital filters allows for the implementation of far better channel selection filters than could be implemented in hardware at IF. In particular, tight specification linear-phase filtering is possible, which causes minimal disturbance to digital modulation schemes.
• It is a simple architecture and hence potentially very low cost.
• The image frequency is in-band and hence the required image-rejection, based on the gain and phase balance of the I/Q demodulator, is considerably reduced. Around 30−35 dB is acceptable for most systems.
• Only a single local oscillator signal is required.
• No IF filter is required, hence saving cost and space and increasing the likelihood of achieving a single-chip solution.

It does, however, have a number of fundamental problems, which largely explain its restricted use to date:

• A high accuracy quadrature network is required, which must be broadband and require no tuning or setting-up. This is now possible to a limited degree with a number of integrated parts.
• A DC offset appears at the centre of the baseband channel in I and Q and is usually quite high in level with respect to the weaker signals which the receiver may be required to demodulate. This is therefore a serious limitation on the

Figure 3.6 Zero-IF receiver architecture.


sensitivity of the receiver and proves very difficult to eliminate with most modulation formats, since they generally have a significant level of energy at this point in their spectra.
• Radiation. As the local oscillator appears on the wanted channel frequency and there is very little isolation between it and the antenna, significant levels of the LO signal can be rebroadcast. This is one effect contributing to the DC offsets mentioned in Section 3.2.3.3.
• Noise. The use of a baseband IF results in problems with low-frequency noise appearing at the centre of the channel (1/f noise); this must be insignificant with respect to the signal; otherwise, it will have a detrimental effect on overall sensitivity.
• Second-order intermodulation. Second-order (or second harmonic) distortion in the LNA or mixers can result in significant levels of second-order distortion appearing at (and around) DC.

These issues have been highlighted and investigated by a number of authors (e.g., [4–8]), and will be discussed further in the following sections.

3.2.3.1 Quadrature Mismatch

The effect of quadrature mismatch on receiver performance can be described in a number of ways, depending upon the modulation format (or carrier format in a multi-carrier receiver) in question. In the case of an SSB or AM system, for example, it will impact upon the signal to noise ratio and user acceptance of the demodulated speech signal. A large quadrature imbalance will result in a significant in-band image and since this falls on top of the wanted signal and has, typically, similar (audio) characteristics, its practical impact can be severe. In general, a poor SNR, where the unwanted component is white noise, is much more acceptable and understandable than one in which the unwanted component is another speech signal or similar interferer.

In the case of a digital modulation format, quadrature mismatch will typically impact upon error vector magnitude (EVM) and thereby the detectability of the signal. Section 3.4.2 details the calculation of the impact of quadrature errors on EVM. In the case of a receiver, the noise already present on the received signal will cause a further EVM degradation and hence quadrature errors can be viewed as having an impact (ultimately) upon receiver sensitivity.
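Although the text quantifies quadrature errors via EVM (Section 3.4.2), a quick feel for their severity can be obtained from the standard image-rejection expression for a gain imbalance and phase error. The sketch below is a generic calculation (not a formula taken from this book) relating those errors to the level of the unwanted in-band image.

    import math

    def image_rejection_db(gain_error_db, phase_error_deg):
        """Image rejection ratio of a quadrature downconverter with the given
        gain and phase imbalance (standard textbook expression)."""
        g = 10 ** (gain_error_db / 20)            # amplitude imbalance as a ratio
        phi = math.radians(phase_error_deg)
        image = (g**2 - 2 * g * math.cos(phi) + 1) / (g**2 + 2 * g * math.cos(phi) + 1)
        return -10 * math.log10(image)

    print(image_rejection_db(0.5, 2.0))   # ~29 dB: close to the 30-35 dB noted above as
                                          # acceptable for a zero-IF receiver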

3.2.3.2 Quadrature Mismatch Compensation

There exists a range of mechanisms for overcoming quadrature errors and the selection of an appropriate method (or none) is usually a compromise between implementational complexity and cost. These methods have counterparts for transmitter (quadrature upconverter) compensation and are dealt with in more detail in Chapter 5. A brief introduction only will be provided here. In an analogue quadrature downconverter with two notionally equal-level outputs, a small gain and phase error will inevitably exist between them. This error will


have two elements: a static (i.e., frequency invariant) component and a frequency-varying (ripple) component. Both will generate (if uncompensated) an unwanted in-band image signal or a signal vector error, depending upon how the problem is viewed. In the case of the static component, it is possible to compensate for this error by predistorting the I and Q signals, either internally within the DSP or externally in analogue hardware. In either case, the form of compensation required is shown in Figure 3.7.

The required compensation may be achieved by modifying the I and Q baseband signals in the DSP following the downconverter, in the manner shown in Figure 3.7. A small fraction of the I channel signal is added to the Q channel output, and by alteration of the variables KI1, KI2, KQ1 and KQ2, any amount of gain and phase mismatch may be accommodated. This process can be a manual calibration, undertaken upon manufacture, for example, or automated using a control scheme. Note that this latter option is more difficult to realise than its upconverter counterpart, since the system has no knowledge of its input signal. Some form of known sounding signal is therefore usually necessary and the generation and upconversion of this signal usually negates many of the simplicity and cost benefits of the direct-conversion approach.

Any frequency-varying (ripple) component of the gain and phase imbalance will have a similar impact to that described earlier (i.e., it will also contribute to the unwanted image or degradation of the signal vector error). In this case, however, the impact will typically be an order of magnitude or more lower than that of the uncompensated static errors (and often much more for an IC implementation). This is due to the amount of ripple typically being of a much smaller magnitude than the static errors. The other key difference is that it is much more difficult to compensate (either manually or automatically) for the effects of ripple and it is usually not necessary (or economic) in most systems.
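A minimal sketch of the static correction of Figure 3.7 is given below (illustrative Python only). The impairment model, the coefficient choices and the example gain/phase values are assumptions made for the purpose of the demonstration, not values taken from this text; they simply show one common way of deriving the cross-coupling coefficients from a known gain error and phase error.

    import numpy as np

    def apply_iq_correction(i_m, q_m, k_i1, k_i2, k_q1, k_q2):
        """Static quadrature correction in the style of Figure 3.7: cross-coupled
        gain terms applied to the measured (impaired) I and Q baseband signals."""
        i_c = k_i1 * i_m + k_q2 * q_m
        q_c = k_q1 * q_m + k_i2 * i_m
        return i_c, q_c

    # Example: downconverter with 0.5-dB gain error and 2 degrees of phase error on Q
    g = 10 ** (0.5 / 20)
    phi = np.radians(2.0)

    t = np.arange(1000) / 1e6
    i_ideal, q_ideal = np.cos(2 * np.pi * 10e3 * t), np.sin(2 * np.pi * 10e3 * t)
    i_meas = i_ideal
    q_meas = g * (q_ideal * np.cos(phi) - i_ideal * np.sin(phi))   # assumed impairment model

    # One way of choosing the coefficients for this impairment model
    i_c, q_c = apply_iq_correction(i_meas, q_meas,
                                   k_i1=1.0, k_i2=np.tan(phi),
                                   k_q1=1.0 / (g * np.cos(phi)), k_q2=0.0)
    print(np.max(np.abs(q_c - q_ideal)))   # residual error is at numerical precision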

Figure 3.7 Gain and phase error compensation in a quadrature downconverter.


Where a known digital format (or, potentially, a range of known digital formats) is being received, it is possible to utilise a pilot sequence, embedded within the modulation, to provide quadrature error correction. One form of this technique is outlined in [6], for an OFDM system, in which a suitably chosen pilot sequence is applied over two OFDM symbols. This provides a very rapid correction of the I and Q error, as complete correction takes place within the two pilot symbols. The use of only two pilot symbols is also a small overhead in a multi-carrier OFDM environment. The main disadvantage with this approach is that it relies on the service provider transmitting suitable pilot symbols for the receiver to utilise in its quadrature-correction algorithm. Such symbols are available in some mobile communications standards and hence the use of pilot-symbol insertion techniques is possible in these cases.

3.2.3.3 DC Offset Issues

The effect of DC offsets on the baseband I and Q signals is to shift the origin of the signal constellation (see Section 3.2.3.5). This can lead to a degradation in bit error rate, since the demodulation algorithm in the receiver will effectively be looking for constellation points in the wrong place. It can also lead to saturation of the baseband A/D converters (or amplifiers) and hence an effective drop in dynamic range of the receiver. With most digital signals, it is not possible to filter the DC offsets (e.g., using a highpass baseband filter in each of the I and Q channels) without also removing some of the wanted signal energy. The DC offset must therefore be removed by alternative means, or prevented from occurring in the first place. Some options for both of these approaches are discussed next. There exists a range of sources for the DC offsets occurring within a direct-conversion receiver. These may be broken down into sources of static DC error and sources of dynamic DC error. Static DC errors are generally caused by LO leakage and self-mixing occurring within the receiver itself; dynamic DC errors are caused by inadequate compensation of time-varying effects within the receiver's environment. Examples of the latter are:

a. Local reflections of the receiver LO, which is reradiated from the receiver antenna (see the following) and then picked up and downconverted by the receiver.
b. Rapid increases in signal strength, such as those caused by Rayleigh fading, which are not tracked sufficiently quickly by the receiver AGC. The receiver is therefore effectively overloaded for a short period and the second-order component (and other even-order components) of the resulting non-linearity will cause a DC signal to be generated.

Static DC Error

Figure 3.8 shows the main potential leakage paths, and hence, indirectly, sources of DC offsets in a direct-conversion receiver. These may be summarised as follows:

1. Leakage within or around the downconversion mixers, for example due to the imperfect LO-RF isolation within the mixer. The level of DC offset generated by this mechanism is typically fairly constant across the operating frequency band, unless this band is very large.
2. Local reflections of the receiver LO (as "a" in Section 3.2.3.3).
3. Direct leakage from the LO to the receiver input. This can be caused by either radiation of the LO from the case of the unit, which is then picked up by the receive antenna, or by radiation across the receiver PCB. The level of DC offset generated by this mechanism typically varies, sometimes markedly, across the operating frequency band. This is due to the frequency-varying phase-shift (i.e., delay) through the various components between the filter input and the mixer RF inputs. At some frequencies, this phase shift will be 90°, thereby generating a (theoretically) zero DC voltage at the mixer output (for most mixer types). At other frequencies, the phase shift will be close to 0°, thereby generating a maximum in the mixer output DC voltage.
4. Leakage of the LO to the LNA input, typically through radiation from/to PCB tracks. Again, the level of DC offset generated by this mechanism typically varies across the operating frequency band.
5. Leakage of the LO to the LNA output, again typically through radiation from/to PCB tracks. Here again, the level of DC offset generated by this mechanism also typically varies across the operating frequency band.

Figure 3.8 LO leakage paths in a direct conversion receiver.

Whilst it is possible to minimise these sources, by careful layout and screening of the components, it is not usually possible to eliminate them entirely.

3.2.3.4 DC Offset Compensation

There exists a range of options to help alleviate DC-offset issues in a zero IF receiver, which are principally caused by self-mixing of the LO signal received at various points within the analogue hardware, as outlined earlier.

Frequency Modification

One common method of solving local oscillator leakage problems is to remove the frequency of the oscillator from the receive frequency range of interest. This is a common technique in handset designs that do not utilise a direct-conversion approach and is, arguably, even more beneficial in designs which do utilise this type of architecture. The idea behind this approach is to ensure that the local oscillator


VCO is not operating on (or near to) the desired receive channel (or band). It is, frequently, radiation from the VCO itself and its tuned circuits that leads to the largest leakage component. If the VCO is not operating on the receive channel frequency, therefore, this source of leakage, and hence DC offset, is eliminated. There will still, obviously, be the potential for the required LO signal, at the desired receive frequency (which must be created at some point), to leak to an undesired location. However, this final frequency generation step can take place physically very close to the point at which the LO signal is required, thereby minimising leakage effects. Figure 3.9 shows four options which can achieve the goal outlined above. Figure 3.9(a-c) [7, 8] shows various methods of multiplying, dividing, or prescaling the synthesiser (VCO) frequency, thereby ensuring that it is far removed from the desired receive frequency. Clearly these processes have the potential to generate a significant number of spurs, particularly if their output signals are allowed to leak to any part of the original synthesiser; however, they are relatively simple and low-cost methods of generating the desired result. In the case of Figure 3.9(d), a second (fixed) oscillator is added to offset the main synthesiser from the desired receive frequency. While this clearly appears to be an additional cost, it can be arranged to be a cost-reduction (or at least cost-neutral) strategy in a transceiver application. In such a configuration, the main synthesiser can be designed to cover the transmit frequency range (assuming a direct-conversion or direct-modulation form of transmitter), with the offset oscillator configured to cover the (duplex) split between the transmit and receive bands. Since this split is typically a constant value, irrespective of channel number, this offset oscillator can be a fixed frequency device. The two oscillators are then mixed together and filtered, very close (physically) to the LO input of the quadrature downconverter. Note that the filter can be a lowpass, highpass, or bandpass device, as appropriate.

Capacitive Coupling

Although capacitive coupling of the baseband signals (Figure 3.10) will remove some of the wanted signal energy in many systems, this may be acceptable in cases where significant energy is not present around the centre of the signal. This is true in the case of CDMA and WCDMA, where capacitive coupling may be employed without significant degradation of the signal-to-noise ratio [8]. In the case of signals which, when downconverted, possess significant energy at or close to DC, capacitive coupling is not an option. GSM is an example of such a signal and alternative techniques must be used for its GMSK modulation format.

DC Calibration

In cases where capacitive coupling cannot be used, it is possible to perform a DC calibration of the system and inject an appropriate DC (or carrier) level into the system, in order to cancel the DC offset. The measurement process for this system is typically performed in the digital part of the receiver, although analogue sample-and-hold devices could also be used, prior to the analogue to digital converters. The measurements upon which the DC injection level is based can be taken during an idle slot, or when the receiver is not actively receiving a signal (i.e., when the receiver is standing by or roaming). Alternatively, a long-term average of the

Figure 3.9 LO generation options to reduce leakage of the LO signal at the receiver input frequency. Utilising (a) a divider; (b) a frequency multiplier; (c) a prescaler; and (d) an offset oscillator.


received signal can be taken (e.g., over many seconds or minutes) and the result of this used as the DC value to be subtracted. An architecture that shows the use of DC calibration and removal is illustrated in Figure 3.11. The operation of the calibration scheme is straightforward and is based upon the digital sample and hold processes following the A/D converters. Both of these operate on either a long-term average of the A/D output or on an average taken over the duration of a vacant slot (or similar idle period). The results of the averaging processes are held and fed to the (low-speed) D/A converters. These then subtract the required value of DC from the I and Q channel outputs of the mixers. Clearly the subtraction process could equally well be carried out digitally; however, this will utilise (waste) valuable bits on the A/D converters, since these must always convert the unwanted DC component of the baseband signals, prior to subtraction. It is typically lower cost to utilise low-speed DACs and to convert the existing ADC buffer amplifiers into summing amplifiers than it is to pay for extra bits on a high-speed ADC. Frequently these ADCs are already state-of-the-art regarding speed and resolution; hence, adding extra bits may not be an option at any (sensible) price. The main drawback of a DC-calibration scheme is that it is unable to adequately compensate for dynamic DC offsets (unless the dynamic effects are slow and the calibration update rate is rapid). This type of offset must either be removed by capacitive coupling (described earlier) or by the use of a continuous-time feedback control scheme. An example of this type of scheme is the servo control loop and this is described next.
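The idle-slot averaging process can be summarised in a few lines of Python; the class below is a minimal sketch (the class name and the smoothing factor are illustrative assumptions) of the digital sample-and-hold behaviour feeding the low-speed correction DACs of Figure 3.11.

import numpy as np

class DCCalibrator:
    # Accumulates an estimate of the residual DC offset seen at each ADC during idle
    # periods and updates the correction value written to the I and Q DACs.
    def __init__(self, alpha=0.25):
        self.alpha = alpha       # smoothing of successive idle-slot estimates
        self.dac_i = 0.0
        self.dac_q = 0.0

    def update(self, idle_i, idle_q):
        # idle_i / idle_q: ADC captures taken during a vacant slot, with the current
        # DAC corrections already applied, so only the residual offset is measured.
        self.dac_i += self.alpha * float(np.mean(idle_i))
        self.dac_q += self.alpha * float(np.mean(idle_q))
        return self.dac_i, self.dac_q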

Servo Control Loops

Figure 3.12 shows how the system of Figure 3.11 can be modified to provide real-time servo control for the DC-offset removal process. The sample-and-hold processing of Figure 3.11 has now been replaced by an integrator. The action of this integrator will be to ramp in the direction of the DC offset (i.e., increasing positive output number for a positive input number and vice versa) until the DAC output is sufficiently large to subtract the channel offset. The two loops, for the I and Q channels, operate independently, since the DC values in each channel will be, to a degree at least, independent. Clearly other forms of controller are possible (e.g., integral and proportional), with potential benefits in dynamic operation over the simple integral controller described here.
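A discrete-time model of one channel of the integral servo loop of Figure 3.12 is sketched below in Python; the loop gain value is an arbitrary assumption. The integrator ramps until the modelled DAC output cancels the channel offset, and the loop gain sets the effective highpass corner discussed above.

import numpy as np

def dc_servo(samples, loop_gain=1e-3):
    # First-order integral control loop: 'dac' models the correction DAC driving the
    # summing amplifier; its value is the integral of the ADC output.
    dac = 0.0
    out = np.empty_like(samples, dtype=float)
    for n, x in enumerate(samples):
        y = x - dac           # summing amplifier output, as seen by the ADC
        out[n] = y
        dac += loop_gain * y  # digital integrator
    return out

# Example: a signal with a 0.2-V static offset is driven towards zero mean.
sig = 0.2 + 0.05 * np.random.randn(20000)
print(np.mean(dc_servo(sig)[-1000:]))   # close to zero once the loop has settled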

Figure 3.10 Capacitive coupling applied in the I and Q paths to remove unwanted DC offsets.

Figure 3.11 DC calibration used to remove offsets in a direct conversion receiver.

The main drawback with this technique is that the finite loop bandwidth of the system will inevitably result in some degradation of the receiver signal-to-noise ratio, due to the removal of some signal energy around DC. In this respect, the technique has a similar drawback to that of AC coupling, discussed earlier, although the effective coupling capacitor value obtained can be far higher than any sensible, physically small capacitor could achieve.

3.2.3.5 Combination of DC Offsets and Gain/Phase Error

Figure 3.13 illustrates the impact of DC-offsets, gain/phase errors, and both effects simultaneously on a 64-QAM constellation when received by a zero-IF receiver.

Figure 3.12 Use of a servo-type control loop to remove DC offsets from the I-and-Q baseband outputs of a direct-conversion receiver.

Figure 3.13 Illustration of the effect of DC offsets and I/Q imbalance on a 64-QAM constellation: (a) original constellation; (b) impact of DC offsets (Q-channel only); (c) impact of I/Q imbalance (gain and phase errors); and (d) combined effect of DC (Q-channel only) and I/Q errors.

A DC offset alone will shift the constellation from being centred on the origin in the I/Q plane [Figure 3.13(b)], making detection of the individual symbols more difficult. An error in both gain and phase [Figure 3.13(c)] will distort the constellation (making it wider and thinner or taller and narrower) and rotate it about the origin in the I/Q plane. Again, this can result in a lower signal-to-noise ratio, or in the extreme, errors in the detection process. Combining both effects [Figure 3.13(d)] further increases the distance of a given constellation point from its expected location. Without suitable compensation, it is clearly possible for significant errors to result, hence illustrating the need for techniques such as those discussed in this section to be applied to a direct-conversion receiver. Note that the DC offsets and gain/phase errors illustrated in Figure 3.13 are deliberately severe and generally much higher than would be encountered in most


practical systems (certainly at high signal strengths). This was intentional, since more realistic error levels would be much more difficult to detect by eye.

3.2.3.6 Impact of Quadrature Mismatch on an OFDM Signal

In an OFDM system, quadrature mismatch errors cause intercarrier interference (ICI) from the subcarrier located at the mirror-image frequency of the subcarrier in question [9]. The conjugate of the data transmitted on the kth subcarrier therefore interferes with the data contained on the (Ns − k)th subcarrier (and vice versa), where Ns is the number of subcarriers contained within the OFDM system. In the case of an OFDM system, quadrature mismatch may be estimated and corrected within the demodulation processing, as there exists a linear relationship between the data present on a given subcarrier and the interfering frequency-mirror subcarrier. An adaptive equaliser, containing two taps, may therefore be used to jointly cancel both the effects of the channel and of the I/Q mismatch.
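The mirror-subcarrier relationship can be written as Y[k] = μX[k] + νX*[Ns − k], where μ and ν are set by the gain and phase errors. The Python sketch below illustrates the resulting per-mirror-pair two-tap correction; for clarity it uses the true μ and ν rather than estimates obtained from pilot symbols, and the particular receive imbalance model chosen is an assumption made for illustration.

import numpy as np

N = 64
rng = np.random.default_rng(0)
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)  # QPSK subcarriers
x = np.fft.ifft(X) * N                      # one OFDM symbol (cyclic prefix omitted)

g, phi = 0.1, np.deg2rad(5.0)               # 10% gain and 5-degree phase imbalance
mu = (1 + (1 + g) * np.exp(1j * phi)) / 2   # receive imbalance model: y = mu*x + nu*conj(x)
nu = (1 - (1 + g) * np.exp(-1j * phi)) / 2
y = mu * x + nu * np.conj(x)

Y = np.fft.fft(y) / N                       # each bin contains mu*X[k] + nu*conj(X[-k])
mirror = (-np.arange(N)) % N
det = np.abs(mu) ** 2 - np.abs(nu) ** 2
X_hat = (np.conj(mu) * Y - nu * np.conj(Y[mirror])) / det   # joint two-tap correction
print(np.max(np.abs(X_hat - X)))            # ~1e-16: the mirror-image ICI is removed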

3.2.3.7 1/f Noise

1/f, or flicker, noise is inherent in most semiconductor devices and its origins are not well understood. Indeed, it was sometimes referred to as semiconductor noise in the early years of semiconductor production because of its dominant effect in these early devices. The term 1/f noise comes from its power spectral density, which is given by:

S_n(f) = \frac{k_n}{f^{\beta}} \ \ \mathrm{V^2/Hz}    (3.1)

where k_n is a constant (equal to the power spectral density at 1 Hz), f is frequency, and \beta is in the range 0.8 to 1.4 [10]. Typically, \beta is approximated as unity, giving:

S_n(f) = \frac{k_n}{f} \ \ \mathrm{V^2/Hz}    (3.2)

The mean square noise voltage in the frequency range f1 to f2 is therefore:

\overline{e_f^2} = \int_{f_1}^{f_2} \frac{k_n}{f}\, df = k_n \ln(f_2/f_1) \ \ \mathrm{V^2}    (3.3)

In a receiver, it is possible to define a frequency, f_\alpha, at which the flicker noise equals the cascaded receiver thermal noise floor. This is illustrated in Figure 3.14 and varies depending upon the semiconductor process and device technology used. For example, in a BiCMOS process it is typically in the region of 48 kHz, whereas for a MOSFET device, it may be around 1 MHz [11]. In a direct conversion receiver, the IF is at baseband and stretches down to DC; 1/f noise is therefore clearly a potential problem in the downconversion mixers and also in any baseband amplification. The noise floor at the mixer output, including the effects of 1/f noise, may be calculated [based on (3.3)] as:

n(t) = n_0 \left[ (f_2 - f_1) + f_\alpha \ln(f_2/f_1) \right] \ \ \mathrm{V^2}    (3.4)

where n_0 is the input-referred noise floor at the downconverter, the signal passband of the baseband spectrum is defined by f_1 and f_2, and f_\alpha is as defined previously. It is obvious from the above that narrowband signals, with significant energy around DC (when downconverted), are most susceptible to this type of noise. SSB signals in military systems, for example, along with GSM and GSM-EDGE, are therefore potentially susceptible. 1/f noise is much less of an issue for CDMA and WCDMA signals, due to their having relatively little signal energy close to DC.
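A short numerical illustration of (3.4) is given below in Python; the 48-kHz corner frequency is the BiCMOS figure quoted above, while the 100-Hz lower band edge and the unit noise density are simply assumed for the example.

import numpy as np

def noise_power(n0, f1, f2, f_alpha):
    # Evaluate (3.4): thermal noise plus 1/f noise integrated over f1..f2 (V^2).
    return n0 * ((f2 - f1) + f_alpha * np.log(f2 / f1))

n0, f1, f_alpha = 1.0, 100.0, 48e3
for name, f2 in [("GSM-like, 100 kHz", 100e3), ("CDMA-like, 615 kHz", 615e3)]:
    total = noise_power(n0, f1, f2, f_alpha)
    thermal = n0 * (f2 - f1)
    print(name, round(10 * np.log10(total / thermal), 1), "dB degradation")

The narrower (GSM-like) bandwidth suffers the larger degradation, consistent with the discussion above.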

3.2.3.8 Second-Order Distortion Requirements

Second-order distortion in a direct-conversion receiver can cause blocking or jamming signals (whether intentional or not) to degrade the receiver signal-to-noise

Figure 3.14 Impact of 1/f noise emanating from a frequency mixer in a zero IF receiver. (From: [8]. © 2005 IEEE. Reprinted with permission.)

ratio. The mechanism by which this occurs will be outlined in the following paragraphs. A transfer characteristic containing a second-order non-linearity may be expressed as:

V_{out}(t) = K_1 V_{in}(t) + K_2 V_{in}^2(t)    (3.5)

Figure 3.15 illustrates an example characteristic for the case where K1 = 10 and K2 = 2 and demonstrates the effect of such a characteristic on a pure sinusoid in both the time and frequency domains. The larger the coefficient of the second-order term (K2), the more curved the transfer characteristic will appear and hence the greater the distortion of the input waveshape. Note that in the frequency domain a second signal component has now appeared at twice the original frequency (2f1) and this gives rise to the term second harmonic distortion, used to describe the form of non-linear distortion introduced by the second-order term. Note further that a DC term also results from the second-order term in the transfer characteristic. Examination of the amplitude of the second harmonic component indicates that it will increase in proportion to the square of the input signal (and also in proportion to the constant, K2). The amplitude of the fundamental frequency component, however, will only increase in proportion to the fundamental gain, K1. As a result, it is evident that the amplitude of the second harmonic will increase at a greater rate than that of the fundamental component. A point can thus be envisaged where the fundamental and second harmonic components are of equal level; the signal level at which this would occur is termed the second-order intercept point, usually expressed as a power in dBm. This may be quoted as either an input or an output intercept point, with the former being most commonly found in receiver front-end specifications; these are often designated IIP2 and OIP2, respectively. The characteristics of the fundamental and second harmonic amplitude levels, with varying input level, are shown in Figure 3.16 for the transfer characteristic illustrated previously (K1 = 10 and K2 = 2). The latter parts of the two characteristics are shown dotted since the input and output levels required to obtain these parts of

Figure 3.15 Transfer characteristic (a) and effect on a sinusoid in the time domain (b) and frequency domain (c) of an amplifier with transfer characteristic V_out(t) = 10V_in(t) + 2V_in^2(t).

the characteristics in practice would be impossible, without destroying the device. In this example, the second-order (input) intercept point may be quoted as approximately 5V, corresponding to the output signal level where the two characteristics cross, divided by the linear gain (K1=10).

Figure 3.16 Illustration of the second-order intercept point of a non-linear receiver.

Note that intercept point values are more commonly quoted in dBm. For the above example, assuming a 50-Ω system, the second order input intercept point is therefore +27 dBm. If a sinusoidal input signal, Vin(t) = A cos(ωt), is fed into a receiver with a second-order non-linearity described by (3.5), then the resulting (distorted) output signal will be:

V_{out}(t) = A K_1 \cos(\omega t) + K_2 [A\cos(\omega t)]^2 = A K_1 \cos(\omega t) + \frac{A^2 K_2}{2} + \frac{A^2 K_2}{2}\cos(2\omega t)    (3.6)

This signal has, in addition to the wanted, linearly amplified signal, a component at DC and a component at double the input frequency (the second harmonic). If this input signal is an unwanted CW jammer, then the DC term it generates can potentially exacerbate the DC offset problems discussed above. If it is a modulated signal, then it will generate a spectrum around DC and this will appear as unwanted noise or interference to the wanted receive signal. This in turn will lower the receiver SNR. Both of these effects are illustrated in Figure 3.17. A second effect of a finite IP2 in a direct conversion receiver is that of downconverting the leakage of the transmitter output signal which leaks through or around the duplex filter. Clearly this is only an issue in a full-duplex, frequency-division duplex, system, with the mechanism being that illustrated in Figure 3.18. The transmitter output signal is similar, in virtually all respects, to a blocker as described earlier and unwanted downconversion occurs by the same means. In this case, however, there are two means of alleviating the problem: improving IP2 (as before) or increasing the transmit-receive isolation, using better filtering or circuit layout. The question of which, if either, of these two issues is the dominant requirement on receiver IP2 performance, is not straightforward. In a hostile radio environment,

Figure 3.17 Impact of second-order distortion on CW and modulated jammer signals.

such as one in which multiple, uncoordinated services occupy the same band (e.g., the situation which existed with CDMA and AMPS at 800 MHz in the United States), or on a battlefield, where the jammers are likely to be intentional and to have originated from the enemy, then the first scenario is likely to be dominant. In a more benign radio environment, where terminal cost or size are more of an issue (e.g., consumer walkie-talkies), then sacrifices may need to be made in duplexer performance or circuit layout, making the second scenario more likely. In the case of a WCDMA handset application, it has been suggested that Tx-Rx leakage is the dominant requirement [12]. In either case, the method for calculating the required IP2 level is the same, with the exception that a jammer received power level is substituted for the transmit interferer power level discussed next.

Figure 3.18 Effect of second order distortion in a direct conversion receiver upon transmit signals leaking to the receiver input.


The permitted noise power at the input to the receiver is given by:

P_{n,rx} = P_{s,min} + G_s - S_{QPSK} + G_C - M \ \ \mathrm{dBm}    (3.7)

where:
P_{n,rx} is the permitted receiver noise power (noise floor);
P_{s,min} is the reference sensitivity required to meet a 10^{-3} BER;
G_s is the spreading gain (= 10 log(128)) for WCDMA;
S_{QPSK} is the signal-to-noise ratio required to achieve a BER of 10^{-3} for an uncoded bitstream;
G_C is the expected coding gain;
M is a margin to allow for IP2 degradation and other performance/implementation limitations in the system (assumed to be 0.5 dB, below).

If the coding is assumed to be 1/3-rate convolutional, with constraint length 9, its gain will be 9 dB. Equation (3.7) therefore yields:

P_{n,rx} = -117 + 21 - 10 + 9 - 0.5 = -97.5 \ \mathrm{dBm}    (3.8)

The resulting receiver noise figure is therefore:

F_{rx} = P_{n,rx} - 10\log(BW) - 10\log(kT) = -97.5 - 10\log(3.84 \times 10^6) - (-173.83) = 10.5 \ \mathrm{dB}    (3.9)

This is quite a straightforward requirement to meet with current handset chip technologies. If it is assumed that the transmitter is utilising a large number of active channels on the uplink (say, >10), then it is possible to show that this will result in a maximum filtering benefit from the baseband RRC filters of 4.2 dB [12, 13]. This results from the fact that such a large number of active channels effectively appear as a Gaussian noise-like signal. The effects of a smaller number of active channels on the uplink will be considered later. The assumption, stated earlier, that the allowable degradation (or implementation margin, M) is 0.5 dB, results in a requirement for the second-order distortion components to be at least 10 dB below the receiver noise floor (assuming that all of the implementation margin is taken up by the second-order distortion). The second-order distortion level must therefore be at a maximum level of −107.5 dBm. If it is assumed that the transmit output power is +21 dBm (class 4 WCDMA handset), then it is reasonable to assume a worst-case Tx-Rx leakage level of −30 dBm (i.e., 51 dB of isolation in the duplex filter and associated circuit layout). The required receiver dynamic range is then:

P_{DR} = -30 - (-107.5) = 77.5 \ \mathrm{dB}    (3.10)


The required second-order intercept point is then:

P_{IP2} = P_{leakage} + P_{DR} = -30 + 77.5 = +47.5 \ \mathrm{dBm}    (3.11)

where Pleakage is the power of the transmit signal leaking to the receiver input and PDR is given in (3.10). The above calculation is also illustrated in Figure 3.19. The IP2 figure presented in (3.11) is quite a tough requirement for a handset, but does stem from a set of worst-case assumptions (many active channels, only 0.5 dB of implementation margin, and so forth). The noise figure calculated in (3.9) is quite easy to achieve in most handset designs and could be tightened up to leave a greater margin for second-order distortion (i.e., a greater implementation margin, M). If the number of active transmit channels is only assumed to be 2, simulations presented in [12] show that a further 9.3 dB of relaxation in IP2 can be allowed. The resulting IP2 requirement is then a much more reasonable +38.2 dBm.
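The budget of (3.7) to (3.11) can be reproduced with a few lines of Python; the numerical values are those assumed in the text, and the final line uses the equivalent form IIP2 = 2·Pleakage − PIM2.

import numpy as np

p_s_min, g_s, s_qpsk, g_c, m = -117.0, 21.0, 10.0, 9.0, 0.5
p_n_rx = p_s_min + g_s - s_qpsk + g_c - m                 # (3.8): -97.5 dBm
nf_rx = p_n_rx - 10 * np.log10(3.84e6) - (-173.83)        # (3.9): ~10.5 dB
p_im2_max = p_n_rx - 10.0                                 # distortion 10 dB below the noise floor
p_leak = 21.0 - 51.0                                      # +21 dBm Tx output, 51 dB Tx-Rx isolation
iip2 = 2 * p_leak - p_im2_max                             # (3.11): +47.5 dBm
print(p_n_rx, round(nf_rx, 1), iip2)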

3.2.3.9 Gain Control Requirements

The lack of channel filtering, prior to baseband, in a direct-conversion receiver, makes the application and design of the receiver AGC system more critical than in

Figure 3.19 Derivation of the IP2 requirement in a direct-conversion receiver.


super-heterodyne receivers. The various locations typically used to apply gain control are summarised in Figure 3.20. The precise locations chosen, and the range of control applied at each point, will depend upon the application and, in particular, on whether the receiver is designed for single-carrier (e.g., handset) or multi-carrier (e.g., base-station) use. The AGC system is designed primarily to make optimum use of the ADC dynamic range available and to prevent saturation of any of the gain or mixing stages. In a handset (typically, integrated) application, AGC may well be applied at each of the locations shown in Figure 3.20 and possibly also to the downconversion mixers themselves [14]. Essentially the task of the AGC in this case is to maintain the maximum signal level at each location, while preventing saturation of the signal peaks. Such saturation, if it occurs, would significantly degrade the BER (or SNR) of the system. In the case of a multi-carrier BTS design, it is likely that higher dynamic range components can be used and hence incorporating AGC at each location shown in Figure 3.20 is unlikely to be necessary. In this case, however, the setting of the AGC and the dynamic range required from each of the components requires much more careful consideration. Not only must the signal statistics (e.g., fading) be considered for one channel, but the instantaneous dynamic range, for the composite spectrum of all channels to be received simultaneously, must be addressed. In this case, AGC is typically only provided at the LNA (or elsewhere in the RF path), with the remaining components designed to have a high dynamic range. This AGC may also be switched rather than continuous in nature, with a threshold (or thresholds) and appropriate hysteresis levels chosen, based upon the anticipated signal environment.

3.2.3.10 Multi-Mode Issues with Direct Conversion Receivers

In a multi-mode receiver and, in particular, for handset applications, the ADC dynamic range must be considered carefully. The main issue occurs where the air interface standards to be received differ significantly in channel bandwidth, thereby requiring differing baseband channel-selection filter bandwidths. While it is possible to implement a number of channel selection filters and to switch between these for

Figure 3.20 Possible locations for the gain control elements in a direct-conversion receiver.


the different modes, it is more common to utilise a single filter bandwidth, designed for the widest band signal of interest. In this case, when a narrower band signal is received, multiple carriers may be incident on the ADC and the desired carrier may well not be the strongest of these. This would be true, for example, when a receiver is designed for both WCDMA and GSM. In this case, approximately 19 GSM carriers can pass through the baseband filter bandwidth, assuming that it is designed to just pass the 3.84-MHz WCDMA spectrum. In this scenario, any of the 18 unwanted GSM carriers could be significantly stronger than the wanted carrier, thereby necessitating the AGC level to be set for that strong carrier (to prevent saturation of any of the components). The ADC alone must therefore deal with the strong, but unwanted, carrier and still resolve the weaker, but wanted, carrier. The required dynamic range may therefore be very high in a mobile fading environment. Providing higher dynamic range in a handset application usually increases the power consumption of the system and hence multi-mode operation can result in a significant power penalty being imposed upon the resulting product. This is in addition to any extra signal processing required to support multiple modes, although this can usually be powered down when not required.

3.2.3.11 Baseband Filter Implementation for an Integrated Direct-Conversion Receiver

The baseband filters employed in a direct-conversion receiver need to have a large dynamic range, as discussed earlier, in order to cope with both strong jammer and weak wanted signals. The noise floor of these filters (and associated baseband amplification) must be sufficiently low that it does not significantly impact upon the overall cascaded noise figure of the receiver system. There are three main filter technologies employed in this part of a DCR system: switched capacitor, active RC, and gmC. These three options have various relative merits in the key specification areas of tuning capability, noise performance, and strong signal handling (i.e., dynamic range) [8]. These relative merits are summarised in Table 3.1. Filter tuning may be performed on power-up, in any system, or by utilising vacant time slots in TDMA or TDD systems.

Table 3.1 Relative Merits of Various Baseband Filter Technologies for Use in Integrated Direct-Conversion Receivers

Filter Type          Tuning Capability   Noise Performance         Strong Signal Handling
Switched capacitor   Excellent           Poor (~20 nV/√Hz)         Good
Active RC            Moderate            Moderate (~6 nV/√Hz)      Excellent
gmC                  Good                Excellent (~2–4 nV/√Hz)   Poor

3.2.3.12 Direct Conversion Receiver Employing Both Baseband and Digital IFs

The architecture shown in Figure 3.21 [14] is a novel variation on the direct-conversion format discussed above. It is arguable whether it should be described as direct conversion at all, since it uses, effectively, two baseband IFs and a digital IF. The first

Figure 3.21 Direct-conversion receiver architecture employing both an analogue baseband IF and a digital IF. (From: [14]. © 2005 IEEE. Reprinted with permission.)

downconversion process, at least, is, however, direct and hence it is probably most appropriately discussed here. Referring to Figure 3.21, the input signal is converted directly to baseband, following bandpass filtering and low-noise amplification. The resulting quadrature baseband signals are then lowpass filtered to define the channel or subband of interest, prior to quadrature upconversion to a suitable (low) IF. This IF is chosen to be appropriate for a low-cost/power consumption A/D converter and is already band-limited by the action of the baseband lowpass filters following the quadrature downconverter. The resulting digital IF can then be quadrature downconverted, with high accuracy, by the digital downconverter within the digital receiver processing block. The primary benefit of this architecture is that it allows the selection of an appropriate IF for use with low power consumption ADCs, in handportable terminals. It also requires only a single ADC, where a conventional direct-conversion receiver requires two. There are clearly potential issues with quadrature errors in the analogue downconversion and upconversion processes in this scheme; however, these are likely to be minimised by an integrated (ASSP) implementation of the technique, due to the high degree of component matching which can be achieved using this method of fabrication. The additional complexity and power consumption of the analogue parts of this architecture may also be justifiable, due to the overall reduction in power consumption afforded by the optimal choice of ADC and sampling rate.

3.2.4 Use of a Six-Port Network in a Direct-Conversion Receiver

A novel, alternative form of direct conversion receiver involves the use of a six-port network (SPN). This type of network was first proposed for use in vector network analysers [15, 16] and has since been suggested for use in very high frequency receivers [17]. It is this latter application which is of potential interest in a software defined radio application, since this form of receiver utilises broadband incoherent detection and passive RF components, thereby making it, to a large extent, modulation format and channel bandwidth agnostic. It is also potentially capable of


operation over many decades of frequency. For example, [18] demonstrates the use of a 6-port network for reflection coefficient measurement over the range 2 to 2,200 MHz. This range would cover virtually all military and civilian portable communications bands in a single receiver. The format of a basic 6-port discriminator (SPD) is shown in Figure 3.22. As is evident from this figure, the basic discriminator consists entirely of passive couplers or hybrids and diode-based detectors. It is therefore both simple and inherently broadband (within the bandwidth limitations of a quadrature hybrid, in the case of Figure 3.22, although other implementations are possible). The six-port discriminator, together with its associated A/D converters, replaces the quadrature downconverter required in a conventional direct conversion receiver. The format of the complete receiver is shown in Figure 3.23. It does not suffer from the image or adjacent channel issues inherent in conventional superheterodyne receivers, hence allowing the required (analogue) channel filtering to be relaxed in comparison to those systems. It is reported to be much less susceptible to I/Q errors (gain and phase imbalance in the quadrature hybrids) and also has a superior or equivalent immunity to DC-offsets in the detection system (mixers in a conventional direct-conversion receiver, diode detectors in the case of an SPD). The potential for a superior immunity to DC-offsets arises from the fact that, in an integrated implementation of the SPD (including detectors), it is likely that the DC drift in the detectors would occur in the same direction for each of them. In this case, although the offsets would not necessarily track perfectly, the difference in the DC levels from each detector will be smaller than if the drift was random for each detector. The SPN technique operates by detecting the relative amplitude, phase, and frequency (i.e., the frequency offset) of the received signal relative to the local oscillator signal. To do this accurately, the system must be calibrated; however, this is reported to be possible utilising the received signal itself [17]. Calibration must also be undertaken at each frequency of interest, for optimum results. An automated calibration scheme is proposed in [19]. The DSP unit performs the necessary calculations in order to provide a demodulated signal output and also to provide signal correction, based on the results of the calibration process. The main drawback of the technique (other than its requirement for calibration) is that it requires four high-speed analogue-to-digital converters in place of the two

Figure 3.22 Basic 6-port discriminator.


Figure 3.23 Digital receiver employing a 6-port discriminator.

required in a conventional direct-conversion receiver. The use of diode-based detectors is likely, however, to result in better strong-signal handling characteristics and hence this type of receiver may find favour in military SDR applications, where cost is less of an issue. It is also easier to use at higher frequencies (e.g., millimeter-wave and above), where conventional mixers are more difficult to fabricate.

3.3 Implementation of a Digital Receiver

3.3.1 Introduction

There are a number of unique aspects of a digital radio implementation which allow a wider choice of options in a receiver design. These options include the use of oversampling to achieve a lower noise floor than the chosen converter resolution would normally allow and the use of undersampling as a method of downconversion. These techniques, together with a range of new mechanisms which can add to both spurious and noise specifications, make the design of a digital receiver somewhat different to its analogue counterpart. This section will cover the major aspects of a digital receiver design, suitable for use in a software defined radio application.

3.3.2 Frequency Conversion Using Undersampling

Undersampling is the act of sampling a signal at a rate well below the Nyquist rate for that signal (i.e., well below twice the signal frequency). If the signal frequency is, for example, 100 MHz, then the minimum required sample frequency, the Nyquist sample rate, is 200 MHz (although practical converters would require this to be at least 250 MHz). This signal would be undersampled by employing a sample rate of 10 MSPS (SPS = samples per second), although a practical converter will usually require some overhead on this value (i.e., adequate performance will probably only be achieved for bandwidths less than about 0.4fS). It is therefore possible to oversample and undersample simultaneously, since oversampling is defined with respect to the signal bandwidth and undersampling with respect to its absolute frequency. The SNR gain which can be achieved in the digital domain results from the fact that the available noise power is now spread over a wider range of frequencies. The actual amount of noise (or integrated noise) over the whole bandwidth is unchanged, however it is spread more widely and hence the spectral noise density reduces. It is possible to take advantage of this reduction using digital filtering; the noise within the bandwidth of the digital filter will be lower than the integrated noise of the original signal, hence providing an effective improvement in SNR. It is important to note that anti-alias filtering is important in preventing SNR degradation. It is common to think that an IF filter, which removes any image or spurious products which could fall in-band after sampling, is all that is required in order to create an uncorrupted spectrum. While this is true for unwanted interfering signals, it is not necessarily true for noise. It is possible to design a receiver frequency plan with a relatively wide IF filter (to save cost or reduce size) which adequately suppresses images, but which still passes noise in the second Nyquist zone. This noise will be aliased into the wanted band (first Nyquist zone) and will reduce the SNR by 3 dB. A good anti-alias filter can help to reduce this figure. Assuming that aliased noise is not an issue, the converter noise floor is given by:

N_C = 1.8 + 6.02N + 10\log(f_s/2) \ \ \mathrm{dBc/Hz}    (3.15)

where N is the converter resolution (number of bits). Thus for every doubling of the sample rate, the converter noise power spectral density reduces by 3 dB. If a digital filter is employed to remove the unwanted noise surrounding the wanted signal, the processing gain achieved by oversampling can be found from:

G_{SNR} = 10\log\left(\frac{f_s}{2 B_{IF}}\right) \ \mathrm{dB}    (3.16)

where B_{IF} is the bandwidth of the wanted signal.

This equation assumes that the filter perfectly fits the wanted signal and is 'brick wall' in nature. Both of these assumptions can be quite close to reality in the case of a digital filter (unlike their analogue counterparts).

3.3.4 Elimination of Receiver Spurious Products

It is possible, in many designs, to carefully plan the sample rate and the IF spectral position, to ensure that converter and buffer amplifier harmonics do not cause interference to the wanted signals. All converters will create harmonics and the level of these increases the closer the input signal appears to the top of its dynamic range. While these harmonics are unavoidable, careful frequency planning can ensure that they are not problematic. Again, the application of near-perfect digital filtering can


help to eliminate these unwanted signals and ensure that the maximum possible bandwidth is available for wanted transmissions. These techniques can be applied alongside oversampling and this process extends further the areas of the spectrum in which harmonics may fall, without compromising receiver performance. Note that the other main consequence of amplifier and converter non-linearity, namely, intermodulation distortion, must also be considered carefully, as this is an in-band distortion. It may be possible to eliminate the more problematic effects of this by means of digital filtering (i.e., signals appearing around, rather than on top of, the carriers); however, this becomes more difficult as the number of signals increases. As an example of frequency planning, consider the following scenario. A converter with a maximum sample rate of 80 MSPS is to be employed to sample a signal with a bandwidth of 10 MHz. It is possible to determine a suitable IF centre frequency which allows the second and third harmonics to be placed out of band, and this is illustrated in Figure 3.24. The IF needs to be placed between 10 MHz and 20 MHz (i.e., the IF centre frequency needs to be 15 MHz). In this case, the second harmonic will fall between 20 MHz and 40 MHz and the third harmonic between 30 MHz and 60 MHz. The top 20 MHz of the third harmonic exceeds fS/2 and hence will wrap around and sit on top of the top 10 MHz of the second harmonic and the lower 10 MHz of the third harmonic. In both cases, this is not a problem, as it does not impinge upon the wanted band. Note that this example takes no account of the required digital filter roll-off, nor of the practical analogue sampling bandwidth limitations of a real converter (typically 0.4fS, not the ideal 0.5fS assumed here). These would both reduce the IF bandwidth which could be employed at this sample rate. An alternative frequency planning technique is to utilise undersampling and move the filtering burden from the digital domain to the analogue domain. The key

Figure 3.24 Frequency planning to ensure converter harmonics do not intrude on the wanted receive band.


advantage of this technique is that it allows the whole of the Nyquist bandwidth of the converter to be used for the wanted signal—none of it is wasted in frequency planning, as was the case in the earlier example. This technique is illustrated in Figure 3.25, where an example is chosen placing the IF in the third Nyquist zone. Since harmonic distortion is largely caused in the analogue parts of the converter (buffer amplifiers and analogue input circuitry for the sampling process), it is placed well away from the wanted band by this technique. Intermodulation distortion, however, must again be considered carefully, as it will not be alleviated in the same manner and could cause significant, unwanted interference. If the conversion process itself still generates harmonics, these can be dealt with by the first technique, although this is at the expense of useable converter bandwidth. In the example shown in Figure 3.25, an 80-MSPS converter is again chosen, however the IF input is now full-band [i.e., it covers the maximum possible (theoretical) sampling capability of the converter, 40 MHz]. The IF input to the system is now in the third Nyquist zone, from fS to 3fS/2 (80–120 MHz) and this places the second harmonic between 160 MHz and 240 MHz, giving the analogue IF filter 40 MHz (between 120 and 160 MHz) within which to roll off to an acceptable level. This is well within the capability of, for example, surface acoustic wave (SAW) filter devices. The third harmonic falls between 240 and 360 MHz and hence is of little concern.
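The folding behaviour described in the two examples above can be checked with a small Python helper; the function simply maps any frequency to the first Nyquist zone of the 80-MSPS converter.

import numpy as np

def folded(f, fs=80e6):
    # Frequency to which a tone at f appears after sampling at fs.
    f = f % fs
    return f if f <= fs / 2 else fs - f

# First example: wanted IF band 10-20 MHz.
band = np.linspace(10e6, 20e6, 201)
for mult in (2, 3):
    h = np.array([folded(mult * f) for f in band])
    print(f"{mult}x harmonic of 10-20 MHz folds to {h.min()/1e6:.0f}-{h.max()/1e6:.0f} MHz")

# Second example: a third-Nyquist-zone IF at 80-120 MHz aliases onto 0-40 MHz.
print(folded(100e6) / 1e6, "MHz")   # the 100-MHz band centre appears at 20 MHz

In both cases the folded harmonics stay clear of the wanted band, as described above.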

3.3.5 Noise Figure

3.3.5.1 Overall System Noise Figure

The noise factor of a system element (e.g., an amplifier or mixer) is defined as the ratio of the signal-to-noise ratio present at the input of the system to that at its output, that is,

Figure 3.25 Using alias downconversion to ensure that converter harmonics do not intrude on the wanted receive band.


F = \frac{SNR_{in}}{SNR_{out}}    (3.17)

Note that the input and output signal-to-noise ratios are specified in linear units (not decibels). If the input and output signal-to-noise ratios are identical, then the system element has added no noise and its noise factor is unity. All practical elements will add at least some noise and hence F > 1. The noise factor of a cascaded system (e.g., a receiver) is given by:

F_{RX} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots + \frac{F_n - 1}{G_1 G_2 \cdots G_{n-1}}    (3.18)

where the various noise factors and gains are defined in Figure 3.26. This equation assumes that all elements of the system are perfectly matched and hence that maximum power transfer occurs. The noise figure of the system can then be found from:

NF_{RX} = 10\log(F_{RX})    (3.19)
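Equations (3.18) and (3.19) translate directly into a short Python routine; the example stage line-up in the final line is purely illustrative.

import numpy as np

def cascaded_nf(stages):
    # stages: list of (gain_dB, nf_dB) tuples, ordered from the antenna onwards.
    f_total, gain_lin = 0.0, 1.0
    for i, (g_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total += f if i == 0 else (f - 1) / gain_lin   # (3.18)
        gain_lin *= 10 ** (g_db / 10)
    return 10 * np.log10(f_total)                        # (3.19)

# Example: LNA (15 dB gain, 1.5 dB NF), mixer plus filter (-8 dB, 8 dB NF), IF amp (30 dB, 4 dB NF)
print(round(cascaded_nf([(15, 1.5), (-8, 8), (30, 4)]), 2))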

In order to determine the receiver performance which can be achieved from a given system, it is necessary to know the available noise power and the detection bandwidth (i.e., the bandwidth of the narrowest-band part of the system, typically the detector for the modulation). The available noise power from the source (typically an antenna in the case of a receiver) is given by:

P_N = kTB    (3.20)

where k is Boltzmann's constant (1.38062 × 10^{-23} J/K), T is the temperature of the source (in Kelvin), and B is the system bandwidth. At room temperature (290K), in a normalised system (1-Hz bandwidth), this equates to approximately −174 dBm/Hz. Having obtained the cascaded noise figure for the receiver, it is now possible to determine the noise power obtained at its output, from:

P_{N,out} = P_{N,dB} + NF_{RX} + G_{dB}    (3.21)

where PN,dB is the available noise power (in decibels) and GdB is the system gain (again in decibels). Thus for a receiver with a noise figure of 10 dB and a gain of 40 dB, the output noise power at room temperature would be −124 dBm/Hz. This can be converted to a noise power by adding the logarithm of the detection bandwidth:

Figure 3.26 Cascaded noise figure calculation.


P_{N,tot,dBm} = P_{N,dB} + NF_{RX} + G_{dB} + 10\log(B_d)    (3.22)

where Bd is the detector bandwidth. In the above example, with a detection bandwidth of 200 kHz, the total output noise power, PN,tot,dBm, is then −71 dBm. When considering passive elements, including some mixers (e.g., most diode-ring based designs) in these equations, their noise figure is equal to their loss and a cascade of lossy elements may be treated as a single element with a loss equal to the combined loss of the individual elements. A common example is a mixer followed by a filter: the total loss of the two elements should be used.

3.3.5.2 ADC Signal-to-Noise Ratio and Effective Number of Bits (ENOBs)

The signal-to-noise ratio of an ideal N-bit analogue to digital converter is given by:

SNR_{ideal} = 6.02N + 1.76 \ \mathrm{dB}    (3.23)

From this it is possible to derive the effective number of bits for a practical ADC:

ENOB = \frac{SNR_{measured} - 1.76}{6.02}    (3.24)

The ENOB value for a converter is a much more useful specification than is the architectural (designed) number of bits. The ENOB value for a 12-bit, high sample-rate ADC will typically be around 10.5, indicating that the available SNR will be around 65 dB, not the almost 74-dB value which could be assumed from its 12-bit hardware design.
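The two relationships above are trivially coded; the example values reproduce the 12-bit figures quoted in the text.

def ideal_snr(n_bits):                 # (3.23)
    return 6.02 * n_bits + 1.76

def enob(snr_measured_db):             # (3.24)
    return (snr_measured_db - 1.76) / 6.02

print(ideal_snr(12), round(enob(65), 1))   # ~74 dB ideal, ~10.5 effective bits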

3.3.5.3 Inclusion of ADC Noise

As an ADC is a voltage-driven device, typically with a relatively high input impedance, it is usually simpler to deal with it in terms of input-referred noise voltages, rather than assigning it a noise figure. There are therefore three stages to computing the overall noise floor for the complete receiver, including the converter and its quantisation noise:

1. Calculate the noise power from the RF and IF parts of the system (as described above) and convert this to a noise voltage. This conversion should be performed based upon the input impedance of the converter (typically not 50Ω). The noise voltage squared, from the RF and IF parts of the system, is given by:

V_{N,IF}^2 = P_{N,tot} R_{ADC}    (3.25)

where PN,tot is the RF/IF output noise power (in watts) and RADC is the input impedance of the analogue-to-digital converter in ohms. Since the converter impedance is normally high (perhaps 1,000Ω), it is typical to lower this impedance using a shunt resistance (say, 200Ω) and then to match the 50Ω output of the RF/IF part of the system to this using a 1:4


impedance transformation. This is illustrated in Figure 3.27. Note that the transformer also serves a second useful function, namely that of converting the single-ended 50-Ω RF/IF subsystem into a differential (balanced) form, as is typically required by an A/D converter. Taking this example, a total noise power of −71 dBm equates to 7.943 × 10^{-11} W. The 1:4 impedance transformation in matching this to 200Ω produces a voltage gain of two times. The rms noise voltage developed across the A/D input terminals is therefore:

V_{N,IF} = 2\sqrt{P_{N,tot} R_{ADC}} = 2\sqrt{7.943 \times 10^{-11} \times 200} = 252 \ \mu\mathrm{V}    (3.26)

2. Calculate the ADC input-referred noise (including the effect of quantisation noise). The ADC input-referred noise is given by:

V_{N,ADC} = V_{FS,rms} \times 10^{-SNR_{ADC}/20}    (3.27)

where VFS,rms is the rms value of the full-scale voltage capability of the ADC and SNRADC is the quoted converter signal-to-noise ratio at full scale. Taking a typical example for a high-performance 14-bit converter, the full-scale voltage is 2.048-V peak-to-peak and the signal-to-noise ratio at this input level is 72 dB. The rms noise voltage contribution of the converter is therefore:

V_{N,ADC} = \frac{2.048}{2\sqrt{2}} \times 10^{-72/20} = 182 \ \mu\mathrm{V}    (3.28)

3. Sum the RF/IF and ADC noise voltage contributions (square-root of a sum-of-squares). The two noise contributions may now be combined using:

V_{N,RX} = \sqrt{V_{N,IF}^2 + V_{N,ADC}^2} = 311 \ \mu\mathrm{V \ rms}    (3.29)

Figure 3.27 Impedance matching for a receive A/D converter.


This result provides the total noise present at the ADC, including all RF, IF, analogue baseband, and quantisation noise sources.
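The three steps above can be followed numerically in Python; the values are those of the worked example (−71 dBm RF/IF noise, 200-Ω ADC input after the 1:4 transformer, 2.048-V full scale, 72-dB converter SNR).

import numpy as np

p_n_tot = 10 ** (-71.0 / 10) * 1e-3                      # -71 dBm expressed in watts
v_n_if = 2 * np.sqrt(p_n_tot * 200)                      # (3.26): the 1:4 transformer doubles the voltage
v_n_adc = (2.048 / (2 * np.sqrt(2))) * 10 ** (-72 / 20)  # (3.28)
v_n_rx = np.sqrt(v_n_if ** 2 + v_n_adc ** 2)             # (3.29)
print([round(v * 1e6) for v in (v_n_if, v_n_adc, v_n_rx)])   # ~[252, 182, 311] microvolts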

3.3.6 Receiver Sensitivity

Now that the receiver noise voltage at the ADC input is known, it is possible to calculate the overall receiver sensitivity. There are two different scenarios here: single carrier reception and multi-carrier reception.

3.3.6.1 Single-Carrier Reception

The signal-to-noise ratio for the receiver is given by:

SNR_{RX} = 20\log\left(\frac{V_S}{V_{N,RX}}\right) + G_{SNR} \ \ \mathrm{dB}    (3.30)

where VS is the ADC wanted input signal voltage (2-V pk-to-pk in the following example), VN,RX is given by (3.29) and GSNR by (3.16). In the latter case, the signal bandwidth, fS, is that of the single carrier being received. Taking the above example, and assuming that the receiver is only required to process a single channel with a bandwidth of 200 kHz (e.g., in a handset application), the receiver signal-to-noise ratio is then:

SNRRX = 20 log(VS/VN,RX) + GSNR = 20 log(0.707/(311 × 10^−6)) + 6.99 = 74.12 dB    (3.31)

This assumes that digital filtering is used to select only the wanted channel, hence realising the processing gain in signal-to-noise ratio. This is valid since all of the noise present at the input to the ADC (i.e., RF/IF noise and ADC-generated noise) will be filtered in the digital domain. This signal-to-noise ratio can then be used to calculate the receiver’s sensitivity, i.e., the minimum received signal power from which a useable signal can be extracted. For a digital modulation format, the minimum carrier-to-noise ratio required for an acceptable bit-error rate (BER) is typically around 10 dB. Based on the earlier example, the received signal can therefore drop by 64.12 dB while still allowing the receiver to generate an acceptable BER. The ADC full-scale input power is +4 dBm (2-V pk-to-pk into 200 ohms); hence, the signal power at the input to the ADC which just achieves an acceptable BER is +4 − 64.12 = −60.12 dBm. If the gain of the RF and IF stages totals 40 dB (as set previously), then the overall receive sensitivity would be −100.12 dBm.
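As a cross-check on the arithmetic above, the following short Python sketch works through the same single-carrier example (311-µV rms total receiver noise, a 2-V pk-pk ADC input, a 6.99-dB processing gain, a 10-dB minimum carrier-to-noise ratio, and 40 dB of RF/IF gain). The figures are those of the example, not general design values.

```python
import math

v_n_rx = 311e-6                      # total receiver noise at the ADC input (V rms), from (3.29)
v_s_rms = 2.0 / (2 * math.sqrt(2))   # 2-V pk-pk full-scale signal, rms
g_snr_db = 6.99                      # processing gain for the 200-kHz channel, from (3.16)

# (3.30)/(3.31): receiver signal-to-noise ratio at ADC full scale
snr_rx_db = 20 * math.log10(v_s_rms / v_n_rx) + g_snr_db

# Sensitivity: allow the signal to fall until only the minimum C/N remains
min_cnr_db = 10.0          # minimum carrier-to-noise ratio for acceptable BER
adc_fs_dbm = 4.0           # ADC full-scale input power (2-V pk-pk into 200 ohms)
rf_if_gain_db = 40.0       # total RF/IF gain ahead of the ADC

min_adc_input_dbm = adc_fs_dbm - (snr_rx_db - min_cnr_db)
sensitivity_dbm = min_adc_input_dbm - rf_if_gain_db

print(f"SNR at full scale:        {snr_rx_db:.2f} dB")           # ~74.12 dB
print(f"Minimum ADC input level:  {min_adc_input_dbm:.2f} dBm")  # ~-60.12 dBm
print(f"Receiver sensitivity:     {sensitivity_dbm:.2f} dBm")    # ~-100.12 dBm
```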

3.3.6.2  Multi-Carrier Reception

Multi-carrier reception is frequently required in base-station applications and may also be required in some remote/portable applications (e.g., for broadcast OFDM). The major difference in this case is that an amount of headroom is required to cope with the fact that the signals could sum in phase and hence produce large peaks.


The theoretical peak voltage level for n carriers, all of equal amplitude, is given by:

Vpk,n = n·Vpk    (3.32)

where Vpk is the peak voltage of any single carrier. Clearly, this indicates that a large headroom may be required in a multi-carrier base-station application (e.g., 15.6 dB for six carriers). This could significantly compromise the received signal-to-noise ratio and hence the overall system sensitivity. In practice, however, each of the carriers typically comes from an independent source, since the clocks of the transmitting stations (e.g., handsets) are unlikely to be synchronised with each other. The statistical likelihood of the carriers aligning in phase is therefore very low and hence a more realistic headroom assumption may be made. A more typical allowance is 3 dB, particularly given that the carriers are also likely to be identical in power. An exception to this is in CDMA systems; however, even here, the high peak-to-average ratio ensures that the likelihood of peaks both aligning in phase and coinciding in time is small. In the event that the signals do align, the converter will clip and this will cause a momentary overflow condition to occur. This will, however, be very brief (and infrequent) and the system should recover quickly.
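To put numbers on the worst-case headroom implied by (3.32), the short sketch below evaluates the peak of n equal-amplitude carriers relative to a single carrier; the six-carrier case reproduces the 15.6-dB figure quoted above.

```python
import math

def coherent_headroom_db(n_carriers: int) -> float:
    """Worst-case peak of n equal-amplitude carriers, relative to one carrier (dB)."""
    # (3.32): Vpk,n = n * Vpk, so the voltage ratio is simply n
    return 20 * math.log10(n_carriers)

for n in (2, 4, 6, 8, 16):
    print(f"{n:2d} carriers: {coherent_headroom_db(n):5.1f} dB of headroom")
# 6 carriers -> 15.6 dB, as quoted in the text
```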

3.3.7  Blocking and Intercept Point

3.3.7.1  Cascaded Intercept Point

In a similar manner to that described above for noise figure, it is possible to calculate the effective third-order intercept point of a cascade of elements (amplifiers, mixers, and so forth) from the intercept points of the individual elements. This can be used to determine an approximate value for the intercept point of a complete receiver front-end, as illustrated in Figure 3.28. The third-order intercept point of the complete signal processing chain is given by:

IP3tot = 1 / (1/IP31 + G1/IP32 + G1G2/IP33 + … + G1G2…Gn−1/IP3n)    (3.33)

where each of the IP3 and gain terms is expressed in linear units (not decibels), that is:

Gn = 10^(Gn,dB/10), IP3n = 10^(IP3n,dBm/10)    (3.34)

It is important not to forget about the intercept point units used initially, as these will usually be specified in dBm, hence producing an intercept point in milliwatts in linear units. Note also that the above intercept points are at the input to each stage and that the overall result is therefore an input intercept point. This is what is typically specified for a receiver system (an output intercept point is more typically used for a transmitter or power amplifier).

Figure 3.28  Cascaded intercept point calculation.

Converting this input intercept point into an IMD level is achieved using:

PIMD,dB = 2(P1,dBm − P3rd,dBm)    (3.35)

where P1,dBm is the power of one tone in a two-tone test, P3rd,dBm is the third-order intercept point expressed in dBm, and PIMD,dB is the relative power of the third-order (i.e., largest) IMD products in dBc. In other words:

P3rd,dBm = 10 log(IP3tot)    (3.36)

Note that (3.33) assumes that the intermodulation products add in phase (i.e., a voltage addition). If this is not the case, such as for third-order products appearing far from the carrier (e.g., those around the third harmonic), when significant AM-PM distortion is present, or where the block non-linearities possess significant memory (with a non-uniform distribution over frequency), then the above becomes (for a two-stage system):

1/OIP3tot² = 1/(G2·OIP31)² + 1/OIP32²    (3.37)

This equation is more likely to be appropriate for relatively non-linear power amplifier systems (hence it is expressed as an output intercept point and is based upon output intercept point values for the individual blocks). The former equation is more appropriate for the relatively well-behaved non-linearities present in receiver systems. Note that where a non-linearity has significant AM-PM and/or memory, the concept of an intercept point is not necessarily helpful, as it will not readily prove to be an accurate predictor of IMD performance. The concept of adding IM powers in a cascaded system is therefore of relatively limited use in practice, with (3.33) being by far the most commonly used equation.

3.3.7.2  IMD Level in a Receiver Design

It is typical to design the RF parts of the receiver system to achieve an IMD level equal to the noise floor at the input to the ADC. In this way, neither parameter dominates and the system could not be said to have been over-designed in either area. Like most rules of thumb, however, there are exceptions. In this case, for example, systems in which the signal can be integrated in order to resolve the required information should be designed with an IMD performance below the expected noise floor. This is necessary since integration will not reduce the level of the IMD and hence this will ultimately dominate. In the case of the example used earlier, the maximum input-referred IMD level should be −110.12 dBm, measured in a 200-kHz bandwidth and assuming the same 10-dB minimum SNR. If the maximum useable RF received signal strength is required to be −25 dBm, the resulting input intercept point must be:

P3rd,dBm = P1,dBm − PIMD,dB/2 = −25 − (−110.12 − (−25))/2 = 17.56 dBm    (3.38)

This value can then be used in (3.36) and subsequently (3.33) in order to find the intercept point requirements of the various system blocks.
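The sketch below implements (3.38) for the example above and then applies the cascade formula of (3.33)/(3.34) to a two-stage front end. The LNA and mixer gain/IP3 values in the second part are hypothetical, chosen purely to illustrate the calculation and not taken from the text.

```python
import math

def cascaded_iip3_dbm(stages):
    """(3.33)/(3.34): cascade input IP3 for [(gain_dB, iip3_dBm), ...], first stage first."""
    inv_ip3 = 0.0
    cumulative_gain = 1.0                           # linear gain preceding the current stage
    for gain_db, iip3_dbm in stages:
        iip3_mw = 10 ** (iip3_dbm / 10)             # linear IP3 (milliwatts, since units are dBm)
        inv_ip3 += cumulative_gain / iip3_mw
        cumulative_gain *= 10 ** (gain_db / 10)
    return 10 * math.log10(1.0 / inv_ip3)

# (3.38): required receiver input IP3 for the example in the text
p1_dbm = -25.0                                      # maximum wanted/interfering tone level
imd_floor_dbm = -110.12                             # maximum tolerable input-referred IMD level
p_imd_dbc = imd_floor_dbm - p1_dbm                  # relative IMD level (dBc)
required_iip3_dbm = p1_dbm - p_imd_dbc / 2
print(f"Required input IP3: {required_iip3_dbm:.2f} dBm")        # ~17.56 dBm

# Hypothetical two-stage front end (illustration only): LNA then mixer
example_chain = [(15.0, 10.0), (10.0, 20.0)]        # (gain dB, input IP3 dBm) per stage
print(f"Cascaded input IP3 of example chain: {cascaded_iip3_dbm(example_chain):.2f} dBm")
```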

3.3.8  Converter Performance Limitations

It is instructive at this point to examine, briefly, the main assessment criteria for analogue-to-digital converter performance. These criteria are also valid for assessing digital-to-analogue converter performance, with similar definitions applying to that type of component. Both ADC and DAC components have imperfections which limit their practical performance. While many of these issues are present in all converters (both DC/low-speed and high-speed/high-bandwidth), they are usually most evident in the demanding high-speed applications associated with software defined radio. Even within a purpose-designed high-speed converter, many aspects of its dynamic performance degrade with increasing clock frequency (but still within its specified limit) and with increasing input signal frequency, for a given clock frequency. Even the usage recommendations for a good converter (ADC or DAC) will only countenance its use with an input frequency up to 0.4fs (where fs is the frequency of the sampling clock); in other words, up to 80% of its Nyquist frequency.

3.3.8.1  SINAD

SINAD is used as a specification in receiver systems in general, as well as specifically for A/D and D/A converters within transceiver systems. It stands for signal-to-interference, noise, and distortion (i.e., the ratio of the wanted signal to any form of undesirable contamination). In the case of data converters, interference is not typically an issue and hence SINAD becomes the ratio of the wanted signal to the unwanted noise and distortion. For an ideal converter, SINAD is related only to the number of bits present in the converter:

SINAD = 1.76 + 6.02N dB    (3.39)


where N is the number of bits. For an ideal 12-bit converter, the SINAD is 74 dB; however, a typical high-speed (100-MHz sampling) A/D converter, for example, may manage a SINAD of only 60 dB. Equation (3.39) above may be rewritten as:

ENOB = (SINAD − 1.76)/6.02    (3.40)

where SINAD in this case is the actual measured value from a given converter. N has now become the effective number of bits (ENOB) for the converter. For example, the above 100-MHz sampling ADC, with a SINAD of 60 dB, would have an ENOB value of 9.67 (somewhat less than the 12 bits it physically produces).
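The conversion between SINAD and ENOB in (3.39) and (3.40) is trivial to automate; the short sketch below reproduces the 60-dB example.

```python
def sinad_ideal_db(n_bits: int) -> float:
    # (3.39): SINAD of an ideal N-bit converter
    return 1.76 + 6.02 * n_bits

def enob(sinad_db: float) -> float:
    # (3.40): effective number of bits from a measured SINAD
    return (sinad_db - 1.76) / 6.02

print(f"Ideal 12-bit SINAD:  {sinad_ideal_db(12):.1f} dB")   # ~74 dB
print(f"ENOB at 60-dB SINAD: {enob(60.0):.2f} bits")         # ~9.67 bits
```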

3.3.8.2  Signal-to-Noise Ratio (SNR)

In the case of an ideal converter, SINAD and SNR are interchangeable, since an ideal converter would add no distortion and the only source of noise would be that resulting from quantisation. In reality, however, the SINAD figure for a converter will always be poorer than its SNR, due to the presence of distortion in the analogue parts of the system. SNR alone will specify the noise floor of the system, including distortion, analogue noise, and quantisation effects, and the latter two can typically be improved by the use of oversampling, as discussed in Section 3.3.3. SNR may be defined as:

SNR = 10 log(signal power at FSD / power of (total) residual error) dB    (3.41)

where FSD is the full-scale deflection (maximum input) of the converter and the power of the total residual error is the power present in the full Nyquist band of the converter, other than the wanted signal power. This includes analogue noise, quantisation noise, spurious signals, harmonic and intermodulation distortion, and so forth.

3.3.8.3  Spurious-Free Dynamic Range (SFDR)

Spurious-free dynamic range is defined as the ratio of the rms voltage of an input sinewave to the rms value of the largest spur, measured using an FFT in the frequency domain. The spur need not be a harmonic component of the original sinewave and could arise as a result of interaction between the sampling clock and the input waveform. SFDR is normally specified in decibels. SFDR may be defined as:

SFDR = 20 log(rms signal voltage at FSD / rms voltage of the largest spurious product) dB    (3.42)

or, in power terms:

SFDR = 10 log(signal power at FSD / power of the largest spurious product) dB    (3.43)

SFDR is useful in assessing how well a given ADC will perform when attempting to detect a weak signal in the presence of a strong interferer. It is therefore an important specification for an ADC that is intended for use in a radio receiver. The SFDR and SNR of an ADC will almost never be equal in a practical converter. The SNR value incorporates analogue noise and quantisation effects, in addition to distortion power, and is measured in the entire Nyquist band; it will therefore result in a value which is typically much poorer than that of the SFDR. SFDR includes only the power of the highest spurious product and hence will almost always yield a higher value. SFDR is a useful measure of ADC performance in cases where the signal bandwidth of interest is less than the full Nyquist band. As has already been outlined in Section 3.3.3, digital filtering may be utilised to improve SNR in this case and hence SFDR becomes a more meaningful method of assessing ADC performance. In particular, a spurious product may fall in the digital filter passband and hence will not be improved by the filtering process; the SFDR specification for the converter will therefore predict the converter performance in this case. There are a number of techniques for improving the SFDR performance of a converter and these will be discussed later in Section 3.3.10.

In the case of an ideal ADC, the maximum value of SFDR occurs at the converter’s maximum input level (i.e., full-scale deflection, FSD). In a practical converter, however, the maximum value of SFDR often occurs at an operating point well below full scale (i.e., several decibels below FSD). This occurs due to the fact that, as the input signal level approaches full scale, the converter’s transfer characteristic becomes increasingly non-linear and the advantage of the additional signal power is more than outweighed by the resulting added distortion. Furthermore, inevitable fluctuations in the input signal level in a practical circuit have a much more pronounced effect when operating close to full scale. This is due to the fact that small increases in the input level will result in clipping and this will lead to a significant increase in distortion. For these reasons, it is important to operate ADCs a few decibels backed off from their full-scale value, in order to achieve optimum performance.

3.3.9  ADC Spurious Signals

ADC spurii are typically a far greater limitation on receiver sensitivity than straightforward noise and signal-to-noise ratio. These spurii are caused by either non-linearity in the conversion process (traditionally specified as differential non-linearity, DNL, or integral non-linearity, INL) or slew-rate limitations, which are generally introduced in the sample-and-hold device on the front end of the converter. Sample-and-hold device technology has improved to the extent that this is no longer typically a major issue in modern high-speed converters operating in the first Nyquist zone, particularly when they are not being utilised at full scale (as is usually the case in a radio receiver application). Non-linearities in the conversion process itself are therefore the major problem and the evaluation of these problems in radio receivers must extend beyond the traditional DNL and INL specifications, in order to be useful.

The key issue in a receiver ADC is the location of the non-linearity within the ADC transfer characteristic. A typical receiver design will not operate the ADC close to its full-scale limit, other than in very strong signal conditions. A relatively severe non-linearity, say, +1 LSB, occurring at this point is therefore much less of an issue than a smaller error, say, +0.25 LSB, half way along the transfer characteristic. The reason for this is simply that the converter will spend much more of its time with the input waveform repeatedly transitioning this middle part of the characteristic; a non-linearity here is therefore a much more significant problem. These issues lead to two different methods of specifying non-linearity in converters: static performance measures and dynamic performance measures. Note that the same issues are common to both ADCs and DACs and hence the following discussions are equally applicable to both. Static performance measures, including INL and DNL, are useful to provide an overall indication of converter non-linearity. It is the dynamic measures, such as SINAD and SFDR, however, which provide the most useful indication of ADC performance in a receiver application.

The impact of ADC spurious signals can be severe and it is this, and not noise performance, which is frequently the limitation on receiver dynamic range. Continuing the above example, and assuming that the A/D converter has spurious products at −76 dB relative to full scale (i.e., an SFDR of 76 dBFS), this equates to a spurious level of −72 dBm (for a full-scale input of +4 dBm, i.e., 2-V pk-to-pk into 200 ohms). If an acceptable carrier-to-interference (C/I) ratio is 18 dB (note that this is typically greater than the required carrier-to-noise ratio for most systems), then the minimum acceptable ADC signal level is −54 dBm. Allowing for the 40 dB of RF/IF gain, this gives a receive sensitivity of −94 dBm, which compares to the noise-based receive sensitivity of −100.12 dBm. In other words, noise is clearly not the limiting factor in this example and a better-designed converter, with an improved SFDR specification relative to its (good) SNR specification, would allow this SNR to be fully utilised. In this example, the SFDR specification would have to improve to 83 dBFS or better for converter SNR to become the limiting factor.

3.3.9.1  DNL in a Dynamic Environment

As was indicated above, DNL errors occurring in the middle of an ADC transfer characteristic are much more significant in a receiver application than those occurring close to full scale. The dynamics of radio signal reception are such that operation close to full scale is likely also to result in clipping of the signal and this will, in most cases, produce much more in the way of spurious products than will the DNL error. A basic illustration of DNL occurring at different parts of an ADC transfer characteristic is shown in Figure 3.29. The impact of this upon an input waveform is illustrated in Figure 3.30 for the case where the input signal amplitude is < ±0.375fS.

Figure 3.29  DNL occurring in different parts of an ADC transfer characteristic: (a) DNL at mid-scale; and (b) DNL at ±0.375fS.

To understand why DNL errors in high-speed communications converters commonly occur in the middle portion of the transfer characteristic, it is necessary to understand a little of their construction. The use of traditional flash converter techniques (see Section 3.3.9.2), for a high-resolution converter, would require a very large number of high-speed comparators to be fabricated and these would occupy a large silicon area (hence making the resulting device very expensive). High-speed communications converters therefore typically employ multi-stage techniques, with portions of the converter being reused, thereby saving significant silicon area. This reuse, however, means that DNL errors present in the reused portion of the converter are repeatedly encountered throughout the voltage range of the input signal. Even though these errors may be small (say, 0.25 LSB or less), they can still have a significant impact upon the practical SFDR seen in a radio reception environment.

Figure 3.30  Effect of DNL upon an input waveform with an amplitude of < ±0.375fS.

                          >110*    >98      >110*    >110*
IM3 suppression (dB)      >87      >91      >97      >97
Image rejection (dB)      >50**    >50**    >50**    >50**
Input voltage range       1V rms, differential
Total silicon area        0.55 mm²

*Result limited by measurement noise floor. **Measurement limited by test setup.


In this case, the probability of any code (the Ith code) appearing at the output of the converter is given by [23]:

PI = (1/π)·[sin⁻¹(Vpk(I − 2^(N−1))/(Apk·2^N)) − sin⁻¹(Vpk(I − 1 − 2^(N−1))/(Apk·2^N))]    (3.44)

where N is the converter resolution (bits) and I is the code under consideration. The probability of the converter operating in a given top percentile of its range, based upon a full-scale sinewave input, is given by: PT =

V   100  cos −1 1 − T     200  π 

(3.45)

where VT is the upper percentile in question. Conversely, for operation in the middle of the converter range, the probability is given by:

PM = (100/π)·sin⁻¹(VM/200)    (3.46)

where VM is the mid-percentile in question. If these two equations are evaluated for a percentile value of 0.1%, the corresponding values for the two probabilities are PT = 1% and PM = 0.016%. It is clear from this that the converter is much more likely to be generating a code close to full scale than it is to be generating a code around mid-scale. The conclusion here is that DNL problems with mid-scale codes are likely to be less of an issue than those with full-scale codes. However, as discussed earlier, a converter in a radio receiver is likely to operate for most of the time significantly backed off from full scale. For example, a converter operating at 25-dB backoff will only be utilising approximately 5.6% of the available codes (assuming that the converter has a large number of bits). For a 14-bit converter, assuming that it has four bad codes in the middle of its range, the percentage of the time during which these codes are utilised, for varying degrees of backoff, is presented in Table 3.5. It is evident from this table that the higher the degree of backoff, the more frequently the bad codes are exercised and hence the greater the amount of distortion generated. Bad codes occurring close to full scale, by contrast, would not be utilised at all in most cases, and hence would contribute nothing to the overall distortion characteristic. The alternative method of evaluating this problem is to assume that the input signal to the receiver is noise-like, with either a Gaussian distribution (e.g., for CDMA) or a Rayleigh distribution (e.g., for a narrowband single-carrier receiver operating in a mobile environment, a good example being GSM). The earlier arguments still apply, although the transitions through bad codes lower in the ADC transfer characteristic may be more frequent, due to the high peak-to-mean ratio of this type of signal.
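A short calculation confirms both the percentile probabilities quoted above and the bad-code utilisation figures of Table 3.5 (shown below). The sketch evaluates (3.45) and (3.46) for a 0.1% slice of the range, then computes the fraction of exercised codes that are bad for a 14-bit converter with four mid-scale bad codes, assuming (as the table does) that the signal is confined to the backed-off portion of the range.

```python
import math

# (3.45)/(3.46): probability of operating in a 0.1% slice at the top and middle of the range
v_slice = 0.1  # percentile width (%)
p_top = (100 / math.pi) * math.acos(1 - v_slice / 200)
p_mid = (100 / math.pi) * math.asin(v_slice / 200)
print(f"Top slice: {p_top:.2f}%   Mid slice: {p_mid:.3f}%")   # ~1% and ~0.016%

# Table 3.5: utilisation of four mid-scale bad codes for a 14-bit converter vs backoff
n_bits = 14
bad_codes = 4
for backoff_db in (10, 20, 30, 40, 50, 60):
    range_used = 10 ** (-backoff_db / 20)            # fraction of the code range exercised
    codes_used = range_used * 2 ** n_bits            # number of codes exercised
    bad_fraction = 100 * bad_codes / codes_used      # bad codes as a % of codes exercised
    print(f"{backoff_db:2d} dB backoff: {100 * range_used:6.2f}% of range, "
          f"{bad_fraction:6.3f}% bad-code utilisation")
```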

Table 3.5  Effect of Bad Codes with Backoff for an A/D Converter

Degree of Backoff from    Utilisation of           Utilisation of
Full Scale (dB)           Converter Range (%)      Bad Codes (%)
10                        31.6                     0.077
20                        10                       0.244
30                        3.16                     0.772
40                        1.00                     2.44
50                        0.316                    7.72
60                        0.1                      24.4

3.3.9.4  Impact of DNL on a High-Speed Converter

Section 3.3.9.2 described a typical converter architecture for a high-speed communications converter. It is clear from the operational description of this converter that the second ADC is used many times over the full-scale range of the complete converter. Any DNL errors in this converter will therefore appear many times within the full-scale range of the complete converter (2^n times, in fact, where n is the number of bits in the first converter). Thus, in a 12-bit converter, consisting of a 6-bit first converter and a 7-bit second converter, the DNL errors in the second converter will repeat 2^6 = 64 times over the range of the complete 12-bit ADC. The impact of this on the signal-to-noise ratio for the converter (assuming that it is the dominant effect) is given by:

SNRDNL,dB = −20 log((1 + ε)/2^N)    (3.47)

where ε is the average DNL of the converter and N is its resolution (number of bits). This equation is illustrated in Figure 3.33, for a range of values of DNL error and converter resolution. It can be seen from this figure that the signal-to-noise ratio degrades from its ideal value (based on quantisation noise, at 0 LSBs of DNL error) to a lower value as the DNL error increases. The effect is gradual, but very important, as it reduces the effective number of bits (ENOB) of the converter. For example, a 12-bit converter with 3 LSBs of DNL error effectively becomes a 10-bit converter in terms of the signal-to-noise ratio.
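The degradation described by (3.47) is easy to tabulate; the sketch below evaluates it for the 12-bit, 3-LSB example and compares the result with the ideal quantisation-limited SNR of 10- and 12-bit converters.

```python
import math

def snr_dnl_db(n_bits: int, dnl_lsb: float) -> float:
    # (3.47): SNR limited by an average DNL error (in LSBs), assuming DNL dominates
    return -20 * math.log10((1 + dnl_lsb) / 2 ** n_bits)

def snr_ideal_db(n_bits: int) -> float:
    # quantisation-noise-limited SNR of an ideal converter
    return 6.02 * n_bits + 1.76

print(f"12-bit, 3-LSB DNL: {snr_dnl_db(12, 3.0):.1f} dB")   # ~60 dB
print(f"Ideal 12-bit:      {snr_ideal_db(12):.1f} dB")      # ~74 dB
print(f"Ideal 10-bit:      {snr_ideal_db(10):.1f} dB")      # ~62 dB
```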

Figure 3.33  Effect of DNL errors upon signal-to-noise ratio for an ADC.

3.3.9.5  INL Error

Integral non-linearity (INL) is the deviation from an ideal straight line in the static input-output transfer characteristic of a converter. The overall shape of this characteristic can be analysed to yield the harmonic and intermodulation characteristics for the converter. It may also be possible to use this information either to predistort or postdistort the converter characteristic in order to minimise the impact of the INL error upon system performance. This is most easily undertaken in the digital domain, where predistortion of a DAC or postdistortion of an ADC is possible. The aim of both techniques is to yield an overall cascaded linear transfer characteristic for the system (composed of two complementary non-linear characteristics). This technique is illustrated in Figure 3.34.

3.3.10  Use of Dither to Reduce ADC Spurii

The dynamic problems with DNL highlighted earlier can lead to a significant number of spurs appearing in the output spectrum of a communications ADC. While it is not possible to reduce the overall energy of the distortion caused by DNL errors, it can be distributed across the frequency band of the converter, thereby eliminating discrete spurii and replacing them with a slight overall increase in noise floor. This increase in noise floor is usually much more acceptable in a radio receiver application than discrete spurii, since a badly located spurious signal can severely compromise a receiver’s dynamic range performance. The method of achieving this is called dither and it involves the addition of a noise source to the ADC input signal. The result of this is to effectively randomise the DNL errors in the ADC characteristic, thereby spreading the converter spurii and creating a uniform noise floor. A range of methods exists to achieve this and two of these are illustrated in Figures 3.35 and 3.36.

Figure 3.34  INL correction of a data converter: (a) predistortion of a DAC; and (b) postdistortion of an ADC.

The method shown in Figure 3.35 places the dither signal entirely within the ADC operating bandwidth, thereby decreasing the SNR of the resulting system. This is overcome by digital subtraction of the pseudorandom binary sequence (PRBS) from the ADC output signal, thereby removing the added noise (but leaving the additional noise created by the spreading of the spurious signals). This method is useful when either large amounts of dither are required to overcome the (large) DNL errors within the ADC, or the entire input frequency range of the ADC is required for wanted signals.

Figure 3.35  Wideband dither approach employing subtraction to remove in-band noise.

If there is a small amount of unused input frequency range (say, a few hundred kilohertz), then it is possible to arrange for the dither signal to appear in this range. This has the advantage that the dither signal does not require subsequent subtraction, hence making the additional circuitry required to implement dithering very simple. Furthermore, it does not require any additional DSP overhead, thereby enabling the design to be purely analogue. There are two obvious locations for the out-of-band noise signal in a receiver application: around DC (shown in Figure 3.36) and around fS/2. Either or both of these locations are commonly unused in receiver designs, since they can cause problems in the remainder of the system (e.g., swamping the ADC with carrier leakage at DC). They are therefore ideal locations for placement of the dither spectrum.


The addition of the dither signal effectively randomises the DNL errors within the converter dynamic range. This eliminates the repeated nature of the DNL codes, discussed in Section 3.3.9.4, and enables them to resemble a more uniform distribution. A given input voltage level will not, therefore, result in a specific good or bad code, but instead will produce a random distribution of both good and bad codes. The amount of dither required for a particular converter is usually best arrived at experimentally. Alternatively, manufacturers' data sheets or application notes will sometimes indicate the optimum number or range of codes of dither for their products, or an equivalent dither power. Some manufacturers are even beginning to incorporate SFDR optimisation circuits within their converters.

It is important to note that dither will only improve SFDR up to the point where track-and-hold device errors dominate. Dither is therefore typically of value at lower frequencies, where track-and-hold errors will be small. What constitutes lower frequencies will depend upon the converter in question. Many modern communications converters are designed with track-and-hold devices that operate well beyond the first Nyquist zone; indeed, such converters are typically intended for alias downconversion applications. In this case, lower frequencies may actually refer to many times the Nyquist frequency, and hence track-and-hold device errors may not be an issue in practice.

The broadband Gaussian noise source shown in Figure 3.36 can be realised in a number of ways. Perhaps the simplest method is to use a noise diode, followed by a fixed or variable gain amplifier. An outline of this solution is provided in Figure 3.37. In this case, D1 is the noise diode, with R2 and C (primarily) forming a highpass filter for the noise, prior to amplification and level control. The highpass filter is largely concerned with DC removal, but may also be used for rudimentary noise shaping.

Figure 3.36  A simple out-of-band dither technique to reduce the effect of ADC spurii.

Figure 3.37  Simple analogue noise generator to provide dither for an ADC.

3.3.11  Alternative SFDR Improvement Techniques

Although dither is a commonly used technique for improving SFDR, a number of other options exist. These options typically involve processing of the digital information, following digitisation by the ADC. They include:

1. Phase-plane compensation [24];
2. State-variable compensation [25];
3. Projection filtering [26].

3.3.12  Impact of Input Signal Modulation on Unwanted Spectral Products

The increasing application of wideband modulation formats, such as spread-spectrum and OFDM, has an impact upon the way spurious products can be treated in an ADC. The harmonics generated by an ADC are harmonics of its input signal; as a result, they will occupy a bandwidth related to that input signal. For a second harmonic, the input signal bandwidth is doubled, for a third harmonic it is trebled, and so forth. Since these wideband signals are effectively noise-like, their harmonics will also be noise-like and of a wider bandwidth. It is therefore more appropriate to treat them as a noise signal and examine their impact as a degradation in SNR performance (often referred to as SINAD: signal to interference, noise, and distortion). The same is also true for intermodulation products and ADC spurii (e.g., created by DNL errors).

3.3.13  Aperture Error

The term aperture error refers to the imperfections present in the actual sampling process itself. In an ideal converter, the sampling process is instantaneous, with no signal leakage occurring during its operation. In a practical converter, however, the sampling process takes a finite time and hence the input signal is not convolved with an ideal impulse function; it is, instead, convolved with a one-sided triangular impulse function (see Figure 3.38). This results in a sampling error which is dependent upon the frequency of the input signal; at low input frequencies, the imperfect impulse function mentioned earlier is sufficiently narrow to appear perfect, and hence sampling occurs normally. At high input frequencies, the input signal leakage referred to earlier occurs, yielding (potentially) a different output code for the same input voltage as was applied in the low-frequency case. In practice, such errors have a similar impact to those of slew-rate limitations and are indistinguishable in the output spectrum. They are also generally small and are therefore masked by slew-rate limitations in most converters.

Figure 3.38  (a) Ideal and (b) practical sampling impulse functions for an ADC.

3.3.14  Impact of Clock Jitter on ADC Performance

One of the principal factors which can degrade SNR, from that based on quantisation alone, is clock jitter. The result of jitter is an uncertainty (error) in the voltage at the sample point, as illustrated in Figure 3.39. Although this figure refers to an ADC, a similar mechanism occurs in a DAC. Jitter in a clock oscillator is the time-domain representation of phase noise in the frequency domain. In common with low phase-noise requirements in conventional high-frequency oscillators, the best low-jitter sources are quartz crystal based. These are typically capable of better than 0.2 ps of jitter. The other main source of clock jitter is the buffer between the clock source and the ADC or DAC. The use of an AC-coupled, differential ECL/PECL buffer is one good solution, as it has very fast outputs, matched in terms of slew rate. The impact of clock jitter is related to the slew rate of the input signal. The greater the slew rate, the greater the error voltage for a given amount of jitter; the greatest slew rate occurs at the zero crossing of the highest-frequency sinewave of interest (as shown in Figure 3.39). A perfect clock would sample at the ideal point shown, with the impact of jitter being to cause the actual sampling point to shift either side of this point (randomly) by an amount determined by the jitter. The resulting voltage error is given by:

Verr = SR·tjitter    (3.48)

Figure 3.39  Effect of clock jitter on the conversion process for an analogue-to-digital converter.

where SR is the maximum slew rate of the input sinusoid and tjitter is the timing uncertainty of the clock. In the case of a sinusoidal input signal:

vs(t) = VM sin(2πfin·t)    (3.49)

The slew rate can be found from:

dvs(t)/dt = 2πfin·VM·cos(2πfin·t)    (3.50)

This reaches its maximum when t = 0, giving:

dvs(t)/dt|max = 2πfin·VM    (3.51)

The impact of clock jitter upon the SNR of the converter is therefore:

SNRj,dB = −20 log(2πfin·tj,rms)    (3.52)

where fin is the maximum input frequency of interest (in hertz) and tj,rms is the rms value of the jitter (in seconds). This equation assumes that jitter is the limiting factor and not quantisation noise. If the SNR derived from jitter alone is below the theoretical SNR from quantisation noise, then (3.47) will take precedence. Equation (3.52) is illustrated in Figure 3.40 for a range of values of jitter and input frequency. Note that this figure only provides an accurate idea of SNR in the case where aperture uncertainty is the dominant source of SNR degradation (i.e., it dominates both thermal noise and DNL). The greatest impact of jitter occurs in an IF sampling (alias downconversion) system. In this case, the input frequency to be used in (3.52) is potentially high (hundreds of megahertz, perhaps), hence making the jitter of the sampling clock a critical parameter.

Figure 3.40  SNR degradation caused by clock jitter for a selection of input frequencies.

As a typical example, a 100-MHz analogue input signal, being sampled using a clock with a jitter of 0.8-ps rms, would yield a maximum possible signal-to-noise ratio of 66 dB. This is equivalent to an ENOB of 11 bits, and hence an expensive 14-bit converter, for example, would be wasted in this scenario. For a 14-bit converter to be potentially useable (to its full dynamic range capability), the clock jitter would have to reduce to 0.1 ps. This is a very challenging figure to meet in a practical clock source at present. The theoretical phase noise performance of an RF oscillator is governed by Leeson's equation [27] and further details of this model, and its implications for oscillator circuit design, are provided in Appendix B.
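The jitter-limited SNR of (3.52), and the corresponding ENOB, can be checked with a few lines of Python; the values below reproduce the 100-MHz, 0.8-ps example.

```python
import math

def jitter_limited_snr_db(f_in_hz: float, t_jitter_rms_s: float) -> float:
    # (3.52): SNR limit imposed by sampling-clock jitter alone
    return -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_rms_s)

def enob_from_snr(snr_db: float) -> float:
    # (3.40) applied to an SNR value
    return (snr_db - 1.76) / 6.02

snr = jitter_limited_snr_db(100e6, 0.8e-12)
print(f"100 MHz input, 0.8 ps rms jitter: SNR = {snr:.1f} dB, ENOB = {enob_from_snr(snr):.1f}")
# ~66 dB and ~10.7 bits, i.e., roughly 11 effective bits
```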

3.3.14.1  Combined Noise Performance

Clock jitter (or aperture uncertainty) alone is not the only limitation on the noise performance of a converter. As has already been discussed, DNL and thermal noise both contribute to the overall noise and hence all three effects must be analysed together. This results in an overall signal-to-noise ratio given by [28]:

SNRtot,dB = 1.76 − 20 log{[(2πfin·tj,rms)² + ((1 + ε)/2^N)² + (2√2·Vn,rms/2^N)²]^(1/2)}    (3.53)

where:
fin is the maximum input frequency of interest (in hertz).
tj,rms is the rms value of the jitter (in seconds).


ε is the average DNL of the converter (in LSBs).
Vn,rms is the thermal noise (in LSBs).
N is the number of bits for the converter.

Evaluating each of the three terms in the above equation separately allows an assessment to be made as to which is dominant (if any). Clearly it is potentially extravagant to specify a clock jitter such that DNL and/or thermal noise dominates to a significant degree. Figure 3.41 illustrates the impact of clock jitter on the signal-to-noise ratio available from a range of ADCs [based on (3.53)]. The parameters are selected to be typical of a current digital-IF receiver, using a current-generation ADC, namely, a DNL error of 0.5 LSB and an input frequency of 20 MHz. In this example, the effect of thermal noise is neglected, although clearly this would degrade the figures presented in a practical application. It is evident from Figure 3.41 that an extremely low-jitter clock is required if the full performance from a high-speed, high-resolution converter is to be realised. For example, a 16-bit converter requires the jitter present on the clock to be better than 0.1-ps rms, in order to avoid significant degradation of the signal-to-noise ratio. If the clock jitter is increased to 1 ps, the SNR performance of an ideal 16-bit converter becomes almost identical to that available from an ideal 14-bit converter; the additional resolution available from the 16-bit device is therefore wasted.

Turning now to a potential future requirement and looking at the case of an RF-sampling ADC, with an input frequency of 2 GHz (and the same DNL of 0.5 LSB), the jitter requirement changes dramatically, as is illustrated in Figure 3.42. In this case, even a 12-bit ADC will require a clock jitter of 0.01-ps rms, or better, in order to achieve its full (theoretical) performance.

Figure 3.41  Impact of clock jitter on ADC signal-to-noise ratio for a range of values of converter resolution. DNL error = 0.5 LSB; input frequency = 20 MHz. Thermal noise contribution is assumed to be negligible.


Figure 3.42  Impact of clock jitter on ADC signal-to-noise ratio for a range of values of converter resolution. DNL error = 0.5 LSB; input frequency = 2 GHz. Thermal noise contribution is assumed to be negligible.

For the 16-bit example used earlier, the required jitter must now improve to 1 fs, an extremely difficult value to achieve, particularly given the high clock frequency (>4 GHz) which would be required for Nyquist sampling of the high input frequency. To gauge how realistic these specifications are in practice, it is worth examining the jitter specifications for current, high-quality crystal oscillators. A brief examination of commercially available parts indicates that for a 1-GHz output frequency (still far from the more than 4 GHz required), a clock jitter of 13 ps is realistic. This is a long way from the 1-fs requirement discussed earlier, thus illustrating the extreme technological challenges which must be overcome before high-resolution RF sampling techniques can become a reality. This issue is, of course, in addition to the considerable challenge of designing and fabricating the converter itself, a feat which is also some way from being achieved (as was discussed in Chapter 2).
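Equation (3.53) is straightforward to evaluate term by term, which makes it easy to see whether jitter, DNL, or thermal noise dominates at a given design point. The sketch below uses (3.53) as given above, with the parameters quoted for Figures 3.41 and 3.42 (0.5-LSB DNL, thermal noise neglected); it is intended only as an illustration of the calculation, not as a statement of any particular converter's performance.

```python
import math

def combined_snr_db(n_bits, f_in_hz, t_jitter_s, dnl_lsb=0.0, v_thermal_lsb=0.0):
    """(3.53): SNR including jitter, DNL, and thermal noise contributions."""
    jitter_term = 2 * math.pi * f_in_hz * t_jitter_s
    dnl_term = (1 + dnl_lsb) / 2 ** n_bits
    thermal_term = 2 * math.sqrt(2) * v_thermal_lsb / 2 ** n_bits
    rss = math.sqrt(jitter_term ** 2 + dnl_term ** 2 + thermal_term ** 2)
    return 1.76 - 20 * math.log10(rss)

# Digital-IF case (Figure 3.41): 20-MHz input, 0.5-LSB DNL, thermal noise neglected
for t_j in (0.1e-12, 1e-12, 10e-12):
    print(f"16-bit, 20 MHz, {t_j * 1e12:4.1f} ps jitter: "
          f"{combined_snr_db(16, 20e6, t_j, dnl_lsb=0.5):.1f} dB")

# RF-sampling case (Figure 3.42): 2-GHz input
print(f"12-bit, 2 GHz, 0.01 ps jitter: "
      f"{combined_snr_db(12, 2e9, 0.01e-12, dnl_lsb=0.5):.1f} dB")
```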

3.3.14.2  Impact of Clock Oscillator Phase Noise on Jitter

The phase noise present on a clock oscillator will clearly impact the amount of jitter it suffers. There is a direct relationship between the two parameters and this is given by [29]:

σpn²(τ) = (2T0²/π²) ∫[0, foffset] L(f)·sin²(πfτ) df    (3.54)

where:
σpn(τ) is the rms jitter (in seconds).


T0 is the period of the clock oscillator (in seconds).
τ ≅ NT0 is the time after the Nth period.
foffset is the maximum offset frequency of interest (in hertz).
L(f) is the phase noise spectral density relative to the carrier at an offset f (usually expressed in dBc/Hz).

This equation is typically evaluated by numerical integration, based on a plot of phase noise taken from the clock oscillator. It is usually beneficial to calculate jitter on an oscillator in this manner, since jitter measurement equipment is usually much less sensitive than phase noise measurement equipment. It is therefore possible to arrive at a jitter figure for a very high performance oscillator, via (3.54), when it is not possible to measure it directly.
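The numerical integration mentioned above is easy to carry out from a tabulated phase-noise plot. The sketch below converts a piecewise-linear (in log-frequency/dB terms) SSB phase-noise profile into rms jitter using (3.54); the profile values and the choice of τ = T0 are hypothetical, chosen only to illustrate the procedure.

```python
import math

def l_linear(dbc_per_hz: float) -> float:
    # convert an SSB phase-noise value in dBc/Hz to a linear power ratio per hertz
    return 10 ** (dbc_per_hz / 10)

def rms_jitter_s(profile, t0_s, tau_s, points_per_segment=2000):
    """
    Evaluate (3.54) by trapezoidal numerical integration.
    profile: list of (offset_Hz, phase_noise_dBc_per_Hz) points, ascending in offset,
             interpolated linearly on a log-frequency / dB scale between points.
    """
    total = 0.0
    for (f1, l1), (f2, l2) in zip(profile[:-1], profile[1:]):
        prev_f = prev_val = None
        for i in range(points_per_segment + 1):
            # logarithmic spacing in frequency across the segment
            f = f1 * (f2 / f1) ** (i / points_per_segment)
            l_db = l1 + (l2 - l1) * (math.log10(f / f1) / math.log10(f2 / f1))
            val = l_linear(l_db) * math.sin(math.pi * f * tau_s) ** 2
            if prev_f is not None:
                total += 0.5 * (val + prev_val) * (f - prev_f)
            prev_f, prev_val = f, val
    return math.sqrt(2 * t0_s ** 2 / math.pi ** 2 * total)

# Hypothetical 100-MHz clock phase-noise profile (offset in Hz, dBc/Hz)
profile = [(100, -90), (1e3, -110), (10e3, -130), (100e3, -145), (1e6, -150), (10e6, -150)]
t0 = 1 / 100e6           # clock period
print(f"rms jitter (tau = T0): {rms_jitter_s(profile, t0, t0) * 1e15:.1f} fs")
```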

3.3.14.3  Impact of Clock Oscillator Spurs on Jitter

Figure 3.43  Typical phase-noise characteristic for a clock oscillator, incorporating spurious components.

A practical clock oscillator may well have one or more spurs forming a part of its phase-noise characteristic. These spurs can occur anywhere in the spectrum surrounding the oscillator and may arise from a variety of causes, such as modulation appearing on the oscillator supply, breakthrough from other oscillators or signals within the unit, and so forth. An illustration of the typical form of an oscillator characteristic is shown in Figure 3.43, with the spur in this case occurring where the close-in phase noise characteristic meets the broadband noise floor of the oscillator. Clearly these spurs are an undesired component and they will have an impact upon the jitter resulting from the clock oscillator. It is possible to calculate the impact of spurs upon jitter using [29]:

σs²(τ) = (T0²/π²) Σm L(fm)·sin²(πfm·τ)    (3.55)

where:
σs(τ) is the rms jitter (in seconds).

T0 is the period of the clock oscillator (in seconds).
τ ≅ NT0 is the time after the Nth period.
L(fm) is the spurious amplitude relative to the carrier (usually expressed in dBc).
fm is the offset from the oscillator (centre) frequency (in hertz) at which the spurious component occurs.
m is the number of spurious products. Note that the spurious products are not assumed to be symmetrical and hence the impact of those above and below the carrier must be assessed separately (i.e., m = 2 for a pair of spurious products, one above and one below the carrier).

For a typical spur level of −75 dBc, at an offset of up to 1% of the centre frequency, the impact upon clock jitter is negligible (many orders of magnitude less than the impact of phase noise) [30]. Even higher spurs, or a greater number of spurs, are likely to have little impact, unless very low jitter (say, less than 0.1 ps) is being considered.

3.3.14.4  Combined Thermal and Quantisation Noise

The ADC noise voltage may also be computed, for any signal level within the ADC input range (i.e., not just at full scale), using (3.56):

Vn,tq = √(10^−3·Zin × 10^((FSdBm − SNRdBc − SdBFS)/10))    (3.56)

where Zin is the input impedance (Ω), FSdBm is the converter full-scale power (in dBm), SNRdBc is the measured signal-to-noise ratio at the chosen input level (in decibels, relative to the signal level), and SdBFS is the chosen input signal level (in decibels, relative to the converter's full-scale input capability). The converter full-scale power, FSdBm, may be calculated using:

FSdBm = 10 log(10^3·VFS,rms²/Zin)    (3.57)

where VFS,rms is the converter full-scale voltage input, expressed in volts rms.

3.3.15  Impact of Synthesiser Phase Noise on SDR Receiver Performance

In many respects, synthesiser phase noise is similar to clock jitter in its impact upon receiver performance (i.e., it imposes a degradation on the signal-to-noise ratio).


The key difference is that phase noise has a non-uniform distribution around the oscillator frequency. It is typically severe close to this frequency and reduces as frequency separation increases. Clock jitter, on the other hand, results in a uniform distribution (and hence a uniform SNR degradation). The impact of synthesiser phase noise on EVM performance is considered elsewhere in this chapter and hence this section will concentrate on SNR issues. The local oscillator is mixed (multiplied) with the input signal in the downconversion mixer; this results in a convolution of the input signal and LO spectra in the frequency domain. The result of this is that both adjacent signals and parts of the wanted signal are mixed with the phase noise and this leads to an increased in-channel noise floor. This process is known as reciprocal mixing. Continuing the earlier example of a handset application with a receiver channel bandwidth of 200 kHz, and assuming that the design goal with respect to phase noise is that it should be similar to, or lower than, the thermal noise power from the antenna (at 290K):

NAnt,dBm = kT = −174 dBm/Hz    (3.58)

Assuming that the maximum wanted signal power is as stated previously (−25 dBm) and that this signal is noise-like, with a uniform distribution over its 200-kHz bandwidth, then the signal power per unit bandwidth is −78 dBm/Hz. The maximum permissible phase noise power at an offset of 200 kHz (first adjacent channel), assuming a uniform phase-noise distribution across the 200-kHz signal bandwidth, is then:

PN = −174 − (−78) = −96 dBm/Hz    (3.59)

This latter assumption is clearly somewhat optimistic, as most phase-noise profiles show a decreasing phase noise power with increased separation from the carrier centre frequency. This result (and method) could, however, be used as either a worst-case figure (to specify the phase noise requirement at the channel edge closest to the receiver centre frequency, in the case of reciprocal mixing) or to inform an estimate for the offset-channel (unwanted channel) centre frequency. A much better method is to consider the effect of the phase-noise on error vector magnitude (EVM), as will be outlined later in this chapter. This method allows the designer to assess the impact of phase noise on the actual signal quality and hence demodulated error performance and should be used in preference to the above, rather simplistic method whenever possible.

3.3.16  Converter Noise Figure

In a cascaded system analysis, it is sometimes useful to be able to calculate the noise figure of the ADC, as the final element in the receive chain. A key difference with an ADC is that the noise figure varies with a range of ADC operating parameters (e.g., sample rate, input impedance, and so forth) and is not a constant, as it is with most traditional RF/IF signal processing elements. Noise figure for an ADC is given by [28]:

NFADC,dB = 10 log(VADC,rms²/(Zin·10^−3)) − SNRADC − 10 log(fs/2B) − 10 log(kTB/10^−3)    (3.60)

where:
VADC,rms is the rms value of the ADC input voltage range.
fs is the sampling frequency (in hertz).
Zin is the converter input impedance.
SNRADC is the signal-to-noise ratio of the converter.
B is the bandwidth (in hertz).
T is the system temperature (in kelvin).
k is Boltzmann's constant (1.38 × 10^−23 J/K).

Take, for example, a converter with a maximum input voltage of 1-V pk into an input impedance of 200Ω. If this converter operates with a signal-to-noise ratio of 76 dB, at a sample rate of 100 MSPS (at room temperature, 290K), then its noise figure (in a 1-Hz bandwidth) will be 25 dB. A plot of the range of noise figures available from this example converter for different sampling frequencies is provided in Figure 3.44. Although this plot is mathematically correct, it should be borne in mind that, in the case of a real converter, the converter SNR is likely to improve at lower sampling rates. This is not taken into account in Figure 3.44, where a constant 76 dB is assumed for the SNR. It can be seen from this plot that lowering the sampling rate has a dramatic effect on noise figure; this is likely to be only partly compensated by any SNR improvements at these sample rates.

Figure 3.44  Noise figure (in a 1-Hz bandwidth) of an ADC, for a range of sample rates and an intrinsic SNR of 76 dB.
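The 25-dB figure quoted above follows directly from (3.60); the short sketch below evaluates it for the stated converter (1-V pk full scale into 200Ω, 76-dB SNR, 100 MSPS, 290K, 1-Hz bandwidth).

```python
import math

def adc_noise_figure_db(v_fs_pk, z_in, snr_db, f_s, bandwidth_hz=1.0, temp_k=290.0):
    """(3.60): noise figure of an ADC from its full-scale level, SNR, and sample rate."""
    k_boltzmann = 1.38e-23
    v_rms = v_fs_pk / math.sqrt(2)                                # rms of a full-scale sinewave
    fs_power_dbm = 10 * math.log10(v_rms ** 2 / (z_in * 1e-3))    # (3.57)
    processing_gain_db = 10 * math.log10(f_s / (2 * bandwidth_hz))
    ktb_dbm = 10 * math.log10(k_boltzmann * temp_k * bandwidth_hz / 1e-3)
    return fs_power_dbm - snr_db - processing_gain_db - ktb_dbm

nf = adc_noise_figure_db(v_fs_pk=1.0, z_in=200.0, snr_db=76.0, f_s=100e6)
print(f"ADC noise figure: {nf:.1f} dB")   # ~25 dB for the example in the text
```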

3.4  Influence of Phase Noise on EVM for a Linear Transceiver

Signal vector error (SVE) and its consequent error vector magnitude (EVM) are commonly used parameters in specifying the degree of corruption a data constellation point undergoes in various parts of a transmitter or receiver architecture. In this way various effects may be taken into account, in terms of what is important to the processes of detection of the wanted data (i.e., the deviation of the data point from its anticipated position). The two primary corruption mechanisms in most transmitter and receiver systems are the vector errors present in the quadrature modulator and demodulator and the phase noise present on the local oscillator (LO) of each. PA non-linearity can also be an issue, but adjacent-channel and other similar requirements usually dictate that the PA's linearity is sufficiently good that it has a relatively negligible impact upon EVM. Analysis of the impact of quadrature modulator errors is relatively straightforward; however, incorporation of the impact of LO phase noise is usually undertaken only as a part of complex system simulation. This section presents a simple, deterministic method of analysing the effect of both gain/phase errors and LO phase noise on the error vector magnitude of a transmitter or receiver. Some practical results are also presented, to illustrate the accuracy of the predictions achieved.

3.4.1  Introduction

Error vector magnitude is now a commonly quoted specification for both transmitter and receiver performance evaluation (e.g., TETRA [31] and UMTS [32]). Typical figures are in the range of 5 to 17.5% for most mobile radio systems (e.g., TETRA [33] and UMTS [32]) and various test instruments now incorporate the measurement of this parameter as a standard feature. There are potentially a large number of factors in the design of the transmitter or receiver which can contribute to the final measured value; however, in practice, in a well-designed system, most are usually negligible. Examples include problems with the receiver detection process (normally performed digitally), transmitter non-linearities (see earlier comments), synthesiser frequency errors (normally tracked out), and errors in modulation generation (normally performed digitally with very little resultant error). There are two sources of SVE that are generally non-negligible in a system design and affect both the transmitter and receiver. The first is the gain and phase imbalance present in the quadrature modulator in the transmitter [34] and the corresponding quadrature demodulator in the receiver [35] (assuming that both are performed by some analogue means, for example, as shown in Figure 3.45). These errors result from imperfect matching between the two mixers (or analogue multipliers) and an imperfect 90° split in the local oscillator path. Together these result in a distortion of the vector in the I/Q plane, as illustrated in Figure 3.46.

Figure 3.45  Quadrature (a) modulator and (b) demodulator using analogue hardware.

Figure 3.46  Illustration of signal vector error in the I/Q plane.

Although these errors may be corrected digitally [36], this would typically involve a degree of individual testing during production and this is generally undesirable. Figures presented in [35] indicate that a phase error of 1° can be achieved as a typical specification for an IC implementation, although currently available commercial devices [34] specify 0.2 dB of amplitude error and 3° of phase error. The effect of quadrature modulator and demodulator errors on the adaptive predistortion method of RF power amplifier linearisation has been studied in some detail [37]. This study indicates the detrimental effect of such errors on predistorter performance and also highlights some methods for overcoming them. An earlier study highlights the detrimental effect of quadrature errors on the spectral characteristics of a power amplifier [38]; quadrature errors are therefore clearly undesirable in many areas.

The second major contributor to SVE is phase noise present on the local oscillator. This results in a random rotation of the signal vector around the I/Q plane, with a mean error determined by the synthesiser characteristics and the characteristics of the detection and tracking filtering present in the receiver. The detail of this will be covered later in this section. Phase noise is present on all signal sources and although it is technologically possible to reduce it to a degree whereby it would have a negligible effect on EVM (over and above that generated by the I/Q modulation and demodulation process), this is not usually economic in mobile and handportable radio designs. It must therefore be incorporated into any study on EVM in a particular design.

The traditional method of studying the effects on EVM of phase noise and modulator/demodulator errors is by means of a block-level simulation of the complete transmitter or receiver system (or both). This is a relatively complex and costly process and relies on the availability of sophisticated simulation tools of sufficient accuracy, and of operators skilled in the use of these tools and in the interpretation of the subsequent results. Care must be taken to ensure that sampling rates are chosen appropriately and that blocks are being used as intended and not beyond their capabilities. The effect of phase noise on a received carrier, as a result of mobile propagation effects, has been studied in detail [37]; however, this analysis is long and complex and does not lend itself easily to adaptation as a design tool for the system designer of a transmitter or receiver. The purpose of this section is to present a simple, deterministic technique for analysing the combined effect of phase noise and gain/phase errors on EVM for both transmitter and receiver systems. The technique could be written into a simulation, if desired, or used as a stand-alone tool to allow the required phase-noise and gain/phase balance parameters to be determined at the system design stage. Verification of the accuracy of the model used is provided by means of practical measurements on a π/4-DQPSK system, utilising various gain and phase errors and with differing phase-noise characteristics.

SVE Calculation Without Phase Noise Disturbance

With reference to Figure 3.46, the magnitude of the signal error vector may be determined using the cosine rule as:

EV = √(R² + M² − 2RM·cos(φe))    (3.61)

where R is the magnitude of the reference (ideal) vector, M is the magnitude of the measured (actual) vector, and φe is the phase error between them. The measured vector magnitude, M, is composed of the reference vector magnitude, R, plus a component resulting from the gain error present in the system, Ge:

M = R + Ge    (3.62)

If the reference vector magnitude is set to unity, then the resultant error vector magnitude (in percent), EVM, may be found from:

EVM = 100·√(1 + M² − 2M·cos(φe))    (3.63)

The EVM may be plotted as a family of curves over a range of values for the gain and phase errors. Two examples are shown in Figures 3.47 and 3.48, with Figure 3.47 representing a general overview for a wide range of errors and Figure 3.48 showing a detailed view of the range of errors generally encountered in most commercial quadrature modulator and demodulator subsystems, whether integrated circuit or hybrid based.


Figure 3.47 Error vector magnitude for a wide range of gain and phase errors.

Figure 3.48 Error vector magnitude for a range of gain and phase errors typically found in commercial quadrature modulators and demodulators.

A typical specification, for example, is a gain error of 0.3 dB and a phase error of 3°; Figure 3.48 indicates that this combination results in an error vector magnitude of 9%. A given component is unlikely to be at both extremes simultaneously and hence may well be acceptable (on a statistical basis) with this specification, despite the relatively high EVM figure. A more acceptable figure for many systems would be around 6% and this can be guaranteed with, for example, gain and phase error values of 0.2 dB and 2°, respectively.
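The curves of Figures 3.47 and 3.48 may be reproduced numerically from (3.61) to (3.63). The short Python sketch below is purely illustrative: the conversion of the quoted dB gain error to the measured vector magnitude is assumed here to be M = 10^(G/10), since that convention reproduces the 9% and 6% figures quoted above, and the function name is arbitrary.

import math

def evm_percent(gain_error_db, phase_error_deg):
    # EVM (percent) from quadrature gain and phase errors, per (3.61)-(3.63),
    # with a unity reference vector (R = 1).
    m = 10 ** (gain_error_db / 10.0)        # assumed dB-to-ratio conversion for M
    phi = math.radians(phase_error_deg)     # phase error, phi_e, in radians
    return 100.0 * math.sqrt(1.0 + m * m - 2.0 * m * math.cos(phi))

print(round(evm_percent(0.3, 3.0), 1))      # approximately 9 (percent)
print(round(evm_percent(0.2, 2.0), 1))      # approximately 6 (percent)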


As was stated earlier, it is possible to predistort the input signal vectors in order to compensate for the errors present in an upconverter [40]; however, this usually requires some form of calibration on a per-unit basis and this is generally undesirable in a production environment, unless absolutely necessary.

3.4.3 Approximation of a Local Oscillator Phase Noise Characteristic

In order to analyse the effect of local oscillator phase noise on EVM, it is necessary to be able to satisfactorily approximate the SSB phase noise characteristic. These characteristics may be measured using a phase noise measurement apparatus, predicted from the design equations for a phase-locked loop frequency synthesiser, for example, or obtained from manufacturer’s data sheets. The method outlined next allows any of these sources to be used, hence giving the technique wide application. The basis of the technique is to employ a piecewise linear approximation to the logarithmic plot of the SSB phase noise characteristic, as this is the form most commonly used and quoted on data sheets. It may also be predicted with ease from a synthesiser design [41, 42] or from an oscillator design [43]. The form of the characteristic is essentially a series of components of the form:

Φα = K / f^nα    (3.64)

where α is an integer indicating the segment number of the linear segment in question and nα determines the slope of that segment. The complete characteristic is therefore a summation of these segments, so arranged to ensure that they join end to end and form a quasi-continuous characteristic. This may be accomplished using the Heaviside step function as follows (based on four segments):

ΦSSB(f) = [K / f^n1]·H(f1 − f − δ)
        + [K / (10^((n1 − n2)·log(f1)) · f^n2)]·H(f2 − f − δ)·H(f − f1)
        + [K / (10^((n1 − n2)·log(f1) + (n2 − n3)·log(f2)) · f^n3)]·H(f3 − f − δ)·H(f − f2)
        + [K / (10^((n1 − n2)·log(f1) + (n2 − n3)·log(f2) + (n3 − n4)·log(f3)) · f^n4)]·H(f4 − f − δ)·H(f − f3)    (3.65)

where f is the frequency offset from the carrier frequency, K is a scaling constant (to place the characteristic at the correct dBc value), f1 to f4 are the breakpoints of the segments (in hertz), n1 to n4 determine the gradients of the segments, and δ is a small frequency offset to ensure that the Heaviside functions do not coincide at the breakpoints and thereby create a spurious value. An illustration of the use of this technique is provided by Figure 3.49. This shows an SSB phase noise characteristic from an 850-MHz synthesiser with a narrow loop bandwidth. The measured data points are shown (dotted line) along with the piecewise approximation (solid line). It can be seen that if the frequency response peak occurring at low frequency is ignored, then the piecewise approximation is a


Figure 3.49 Piecewise approximation to a practical 850-MHz, single-loop frequency synthesiser. Dotted line: measured characteristic; solid line: piecewise approximation.

very close fit to the measured characteristic. The values used in (3.65) in this case are shown in Table 3.6.

Table 3.6 Parameters Used in (3.65) to Provide a Piecewise Approximation to the 850-MHz Synthesiser

Parameter    Value
n1           0
n2           2.4
n3           1.55
n4           0.3
f1           90 Hz
f2           10 kHz
f3           150 kHz
f4           1 MHz
δ            0.001 Hz
K            10^−5

3.4.4 Incorporation of the LO Phase Noise into the EVM Calculation

The basis of the method is to calculate the root mean square (rms) value of the phase jitter from a carrier corrupted by DSB phase noise, within a given measurement bandwidth, and then to combine this error with that already present due to the phase error from the quadrature upconverter or downconverter. This additional phase error will then add to the overall EVM for the system. The measurement bandwidth will be determined by an equivalent brick-wall filter, corresponding to the detector bandwidth for the system and modulation format in question.


As an example, a TETRA system may be approximated using an 18-kHz bandwidth for a perfect filter. The method assumes that no AM noise is present on the local oscillator; this is a reasonable assumption in a well-designed system. It also takes no account of synthesiser spurs, which fall within the measurement bandwidth; these should again be negligible in a well-designed system. Consider a perfectly clean carrier with a power level, C, and a superposed single noise sideband in a 1-Hz bandwidth at a certain offset frequency from that carrier. If the long-term mean value of the sideband power is No, then it can be shown that the phase modulation index, θ, is given by [44]:

θ ≈ √(No/C)    (3.66)

An LO spectrum will normally consist of two equal sidebands, hence giving:

θ ≈ √(2No/C)    (3.67)

Hence the rms phase deviation (jitter) per √Hz (of DSB noise), assuming the above approximation to be true, is:

φo = √(2Nop/C)  rads rms per √Hz    (3.68)

where Nop is the single-sideband phase noise density per hertz of RF bandwidth at a given offset frequency from the carrier. The mean-square phase jitter at a given offset frequency from the carrier is therefore given by:

φo² = 2Nop/C  rads² per Hz    (3.69)

To analyse the effect of all of the phase jitter, and hence the complete phase-noise corruption of the carrier, it is simply necessary to integrate the above over the frequency range of interest:

φ² = ∫₀ᵇ [2Nop(f)/C] df  rads²    (3.70)

In the case of the SSB phase noise characteristic given by (3.65), this becomes:

φ² = ∫₀ᵇ ΦSSB(f) df  rads²    (3.71)

and hence the equivalent mean phase deviation is:

φ = √(φ²) = √( ∫₀ᵇ ΦSSB(f) df )  rads    (3.72)

The use of segmented 1/f-based approximations to the phase noise characteristic makes the evaluation of this integral relatively straightforward, as it is simply the summation of a series of definite integrals, one for each segment, up to the maximum bandwidth of interest. Segments beyond this maximum may be ignored completely, as they will have no effect on the resulting EVM.
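For illustration, the piecewise model of (3.65) and the jitter integral of (3.70) to (3.72) can be evaluated with a few lines of Python (standard library only). The sketch below is not taken from the text: the segment constants are simply chosen so that adjacent segments join at the breakpoints, the parameter values are those of Table 3.6, the 18-kHz upper limit is the TETRA-like brick-wall bandwidth quoted earlier in this section, the 10-Hz lower limit anticipates the value used for the predicted results in Section 3.4.5.2, and the final conversion to an EVM contribution as 100·φ is a small-angle assumption.

import math

def segment_constants(k, n, f_breaks):
    # Multiplying constant for each 1/f^n segment, chosen so that the
    # piecewise characteristic of (3.65) is continuous at each breakpoint.
    consts = [k]
    for i in range(1, len(n)):
        consts.append(consts[i - 1] * f_breaks[i - 1] ** (n[i] - n[i - 1]))
    return consts

def rms_phase_jitter(k, n, f_breaks, f_low, f_high):
    # Sum of definite integrals, one per segment, as in (3.71)-(3.72).
    # f_high must not exceed the last breakpoint; the small offset delta of
    # (3.65) is only needed by the Heaviside formulation and plays no part here.
    consts = segment_constants(k, n, f_breaks)
    edges = [f_low] + [f for f in f_breaks if f_low < f < f_high] + [f_high]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        seg = next(i for i, fb in enumerate(f_breaks) if hi <= fb)
        a, m = consts[seg], n[seg]
        if abs(m - 1.0) < 1e-9:
            total += a * math.log(hi / lo)
        else:
            total += a * (hi ** (1.0 - m) - lo ** (1.0 - m)) / (1.0 - m)
    return math.sqrt(total)                 # rads rms, per (3.72)

# Table 3.6 parameters for the 850-MHz synthesiser
n = [0.0, 2.4, 1.55, 0.3]
f_breaks = [90.0, 10e3, 150e3, 1e6]
phi = rms_phase_jitter(1e-5, n, f_breaks, f_low=10.0, f_high=18e3)
print(phi, 100.0 * phi)   # rms phase jitter (rads) and its approximate EVM contribution (%)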

3.4.5 Example Results

The results presented here are based on the comparison of measured and predicted performance for a range of signal source characteristics, four of which are presented here. The modulation format used was unfiltered QPSK, which was simulated in a manner which eliminated, as far as was practicable, upconversion gain and phase errors. The measurement of EVM was taken from a commercial vector signal analyzer, which provides both graphical output of EVM versus time (symbol) and a numerical value averaged over a number of symbols (user definable and set at 2,048 for the tests shown here). The unfiltered QPSK modulation was simulated by means of a frequency offset between the synthesiser under test and the measurement frequency of the vector signal analyser. This was set to 5 kHz for the tests shown below and this corresponds to a data rate of 20 k-symbols/sec. Since no quadrature upconverter is present in the system, the errors recorded can only be due to amplitude and phase noise present on the synthesiser. The level of amplitude noise was measured on the vector signal analyser and found to be negligibly small for each of the cases under consideration.

3.4.5.1 Measured Results

The four test signals shown here were chosen to represent a range of EVM values, from almost zero up to approximately 13%. This covers most of the range normally specified in the majority of mobile radio systems in existence at present and is therefore a reasonable range over which to demonstrate the validity of the model. At the top end of the range, the value of EVM varies significantly between different samples of 2,048 symbols, and exhibited a range from 8.49% at the minimum extreme up to 17.42% at the maximum. The results at this end of the measurement range are therefore less accurate than at the other values. In all cases, 20 measurements were taken, each of 2,048 symbols, and an average value of EVM was computed. These results appear in Table 3.8. The results presented in Figures 3.50 to 3.53 represent a snapshot of this process, with only a single sample of 2,048 symbols being represented in the numbers in the top right-hand corner. Figure 3.50 shows four results from the first local oscillator test source, resulting from measurements made on the vector signal analyser. The top left-hand corner plot shows the instantaneous EVM value at each symbol point, showing the variation over a number of symbols. This is then averaged to produce the value shown in the top right-hand corner (0.2% rms), along with other numerical information relating to the error vector.


Figure 3.50 SVE results using the first LO test signal: (a) error vector magnitude, vertical scale 0 to 20%, horizontal scale 0 to 2,048 symbols; (b) numerical readout of average and peak error vector statistics; (c) QPSK constellation in the I/Q plane; and (d) instantaneous phase deviation of constellation points, vertical scale −10° to +10°, horizontal scale 0 to 2,048 symbols.

Figure 3.51 SVE results using the second LO test signal: (a) error vector magnitude, vertical scale 0 to 20%, horizontal scale 0 to 2,048 symbols; (b) numerical readout of average and peak error vector statistics; (c) QPSK constellation in the I/Q plane; and (d) instantaneous phase deviation of constellation points, vertical scale −10° to +10°, horizontal scale 0 to 2,048 symbols.


Figure 3.52 SVE results using the third LO test signal: (a) error vector magnitude, vertical scale 0 to 50%, horizontal scale 0 to 2,048 symbols; (b) numerical readout of average and peak error vector statistics; (c) QPSK constellation in the I/Q plane; and (d) instantaneous phase deviation of constellation points, vertical scale −25° to +25°, horizontal scale 0 to 2,048 symbols.

Figure 3.53 SVE results using the fourth LO test signal: (a) error vector magnitude, vertical scale 0 to 50%, horizontal scale 0 to 2,048 symbols; (b) numerical readout of average and peak error vector statistics; (c) QPSK constellation in the I/Q plane; and (d) instantaneous phase deviation of constellation points, vertical scale −25° to +25°, horizontal scale 0 to 2,048 symbols.


The bottom left-hand corner plot shows the constellation and transitions between the four points in the QPSK signal. Finally, the bottom right-hand corner plot shows the phase deviation (jitter) present on the individual symbols, over a number of symbols (and hence time). It is clear from the low level of the response in each of these plots that this first source has a very high spectral purity and is, indeed, better than the reference oscillator in the vector signal analyser (which is itself very good). Figure 3.51 shows the results from a more typical local oscillator source for a mobile radio system. The average EVM in this case (over 20 × 2,048 symbols) is 4.80% and the effect of the noise can clearly be seen on each of the traces. The constellation diagram shows thicker transitions between points and a blurring of the points themselves, while both the phase deviation and EVM traces show distinctive peaks. Figure 3.52 shows an LO source toward the middle of what would normally be considered acceptable in most systems (and would prove unacceptable in some). The clearest distinction may be seen in the constellation diagram, with very poorly defined points and thick transitions being the obvious hallmarks of a noisy signal. Finally, Figure 3.53 shows a system operating toward the upper end of what would normally be considered acceptable in most mobile radio specifications. Here, the constellation points are very indistinct and the phase deviation plot, in particular, demonstrates the presence of a significant degree of corruption of the signal phase.

3.4.5.2 Comparison with Predicted Performance

As an illustration of the effect of the modelled behaviour of phase noise, in addition to gain and phase errors in an upconverter, consider the example of Figure 3.49; the predicted results, from this local oscillator, are shown in Figure 3.54. These should be compared to Figure 3.48, in which no phase noise was assumed to be present. In order to demonstrate the validity of the model, a range of local oscillator signal characteristics are modelled and the corresponding practical results measured as outlined above. A summary of the model parameters derived for the four LO signals used is given in Table 3.7. The measured characteristics, together with the relevant piecewise approximations, are shown in Figures 3.55 to 3.58. It can be seen that the model closely approximates the measured characteristic in all cases, hence eliminating this as a major source of error in the comparison. The model was used to predict the EVM in each case and this can be compared to the average measured performance; the results are presented in Table 3.8. The predicted results are based on a lower limit of integration in (3.72) of 10 Hz, since the measurement system will track (using an estimation technique) phase deviation rates at or below this value. It can be seen from this that the accuracy of the model is generally very good, if a little pessimistic in some cases (~10% overestimate in two cases). Measurement uncertainties at the highest value of EVM will lead to a poorer accuracy at this extreme. These uncertainties are due to the wide variation in EVM experienced at these high values of phase noise and the consequent requirement for a large number of values to be averaged in order to yield a consistent result. It is likely that in a practical design, which could tolerate these high levels of EVM, other processes (such as amplifier non-linearity) would be dominant.


Figure 3.54 Error vector magnitude for a range of gain and phase errors, incorporating the effect of a local oscillator with a phase noise characteristic represented by Figure 3.49.

Table 3.7 Parameters Used in (3.65) to Provide a Piecewise Approximation to the Four Local Oscillator Test Signals

Parameter   LO 1       LO 2       LO 3       LO 4
n1          0.3        0          0.4        0
n2          3.2        1.95       1.85       1.8
n3          2.35       3.1        4          4.15
n4          0          1          1.1        1.2
f1          100 Hz     60 Hz      45 Hz      50 Hz
f2          1 kHz      4 kHz      4 kHz      3.5 kHz
f3          3 kHz      20 kHz     20 kHz     20 kHz
f4          1 MHz      1 MHz      1 MHz      1 MHz
δ           0.001 Hz   0.001 Hz   0.001 Hz   0.001 Hz
K           10^−8.1    10^−4.8    10^−3.8    10^−4.0

The results are based on an assumed perfect upconversion process (zero gain and phase error), since there was no upconverter present (or necessary) in the method used for simulating the QPSK. If upconverter errors are incorporated, a new version of Figure 3.48 can be generated, incorporating the EVM contribution (in terms of phase error) for the local oscillator. An example of this is shown in Figure 3.54.

3.4.6 EVM Performance of a Multi-Stage System

The derivation and method outlined earlier deal only with a single-stage system, and also assume that the system is frequency and phase flat (any deviation from this ideal will also add to the output EVM). In the case of a multi-stage network (again


Figure 3.55 SSB phase-noise characteristic of a high-quality multi-loop signal generator (first local oscillator test signal). Dotted line: measured characteristic; solid line: piecewise approximation. Note that the specified and modelled characteristics are superimposed (almost perfectly), and hence only the modelled characteristic can be seen.

Figure 3.56 SSB phase noise characteristic of the second local oscillator test signal. Dotted line: measured characteristic; solid line: piecewise approximation.

frequency and phase flat), the EVM values for the individual stages must be added in an rms sense; that is:

EVMtot = √(EVM1² + EVM2² + … + EVMn²)    (3.73)


Figure 3.57 SSB phase noise characteristic of the third local oscillator test signal. Dotted line: measured characteristic; solid line: piecewise approximation.

Figure 3.58 SSB phase noise characteristic of the fourth local oscillator test signal. Dotted line: measured characteristic; solid line: piecewise approximation.

where EVMtot is the total EVM at the output of the system, and EVM1 to EVMn are the EVM values for the various stages in the system (e.g., local oscillators, if they are the dominant sources of EVM at each stage). This method must also be used (in a modified form) in order to assess the amount of EVM added by a network or system (i.e., to remove the effects of the source EVM). In this case, (3.73) becomes:

EVMDUT = √(EVMMeas² − EVMSource²)    (3.74)

where EVMMeas is the measured EVM (displayed on the test instrument), EVMSource is the EVM measured from the source alone, and EVMDUT is the EVM resulting from the device under test.

Table 3.8 Comparison Between Measured and Predicted EVM Performance

LO Test Signal Source   Measured EVM Performance (%)   Predicted EVM Performance (%)
LO 1                    0.2                            0.1
LO 2                    4.80                           5.47
LO 3                    9.10                           9.17
LO 4                    12.22                          13.42
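As a small numerical illustration of (3.73) and (3.74) (the stage values of 3.0% and 5.0% below are arbitrary, and the function names are not from the text):

import math

def evm_total(stage_evms):
    # (3.73): rms combination of per-stage EVM contributions
    return math.sqrt(sum(e * e for e in stage_evms))

def evm_dut(evm_measured, evm_source):
    # (3.74): remove the source contribution from a measured result
    return math.sqrt(evm_measured ** 2 - evm_source ** 2)

print(round(evm_total([4.8, 3.0]), 2))   # two stages of 4.8% and 3.0% combine to about 5.66%
print(round(evm_dut(5.0, 0.2), 2))       # a 0.2% source barely alters a 5.0% measurement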

3.5 Relationship Between EVM, PCDE, and ρ

Peak code domain error (PCDE) [45] is used as a specification, similar to EVM, for WCDMA systems. The advantage of a peak code domain error requirement, over relying solely on EVM, is to ensure that modulation errors are spread evenly over the codes. This prevents errors from disproportionately impacting a few codes, for which performance would be degraded. This phenomenon cannot be detected using an error vector magnitude test, as EVM is measured before despreading. PCDE is specified in the WCDMA specifications published by the Third Generation Partnership Project (3GPP). If the error is evenly distributed across the codes, the PCDE is given by:

PCDE = 10·log₁₀(EVM²/S)    (3.75)

where EVM is the error vector magnitude and S is the spreading factor. For example, if EVM is 12.5%, with a spreading factor of 256, the resulting peak code domain error is −42.14dB. Since the error is assumed to be evenly distributed, this is effectively the mean code domain error and not strictly PCDE. Rho (ρ) is the ratio of correlated power to total transmitted power for a CDMA signal (i.e., the degree of correlation between a perfect reference signal and the actual signal generated by the transmitter). It is specified in the IS-95 and CDMA2000 standards. This correlated power is derived following the removal of frequency, phase, and time offsets and subsequently performing a cross-correlation between the corrected, measured signal and the ideal reference. Any of the transmitted energy that does not correlate appears as added noise, and this may interfere with other users of the system. Rho (ρ) can therefore be written as:


ρ = PC / PT    (3.76)

PT = PC + PE

where PC is the power which correlates with the ideal reference signal, PT is the total transmitted power, and PE is the error power. For IS-97 ([46], superseded by [47]), the value of ρ for the transmitter must not be less than 0.912, indicating that up to 8.8% of the transmitted power can be wasted and appear as a potential interferer to the other channels within the system. There are a number of relationships between ρ, EVM, (mean) code domain power (ρi), spreading factor, and (mean) code domain error:

ρ = 1 − [(S − 1)/S]·EVM² ≈ 1 − EVM²    (3.77)

ρi = 10·log₁₀[EVM²/(S − 1)] ≈ 10·log₁₀[(1 − ρ)/S]    (3.78)

CDE = 10·log₁₀(EVM²/S) ≈ 10·log₁₀[(1 − ρ)/S]    (3.79)

Note that EVM is expressed, here, as a pure ratio, not a percentage (i.e., an EVM of 10% would be entered as 0.1 in the above equations). Note that (3.79) is similar to (3.75), dealing with the mean code domain error and not its peak value. In the 3GPP standard, peak code domain error, PCDE, is the specified parameter. As a rule of thumb, the PCDE is 5–7 dB above the mean code domain error. There is unfortunately no corresponding rule of thumb in 3GPP2, since it does not have a well-defined test model; hence, the result is dependent upon the code selected. Note also that in 3GPP2, ρ is only defined for the pilot and not for a fully (or even partially) loaded system. It is typically measured with all of the other codes turned off, with the pilot therefore taking only 20% of the total system power (i.e., 7 dB of backoff). Furthermore, the peak-to-mean ratio of the pilot is only 6 dB, not the 9.5 dB of the system in normal operation, making the ρ specification relatively easy to meet in most cases.
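The relationships above are easily checked numerically; the short Python sketch below (function names illustrative only) reproduces the −42.14-dB worked example of (3.75) and evaluates ρ from (3.77) for the same EVM and spreading factor.

import math

def pcde_db(evm_ratio, spreading_factor):
    # (3.75): peak code domain error (dB) for an evenly distributed error
    return 10.0 * math.log10(evm_ratio ** 2 / spreading_factor)

def rho_from_evm(evm_ratio, spreading_factor):
    # (3.77): correlated-power ratio; EVM entered as a ratio, not a percentage
    s = spreading_factor
    return 1.0 - ((s - 1.0) / s) * evm_ratio ** 2

print(round(pcde_db(0.125, 256), 2))       # -42.14 dB, as in the example above
print(round(rho_from_evm(0.125, 256), 4))  # about 0.9844, comfortably above the 0.912 of IS-97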

References

[1] Wiesler, A., and F. K. Jondral, A Software Radio for Second- and Third-Generation Mobile Systems, IEEE Trans. on Vehicular Technology, Vol. 51, No. 4, July 2002, pp. 738–748. [2] Colebrook, F. M., Homodyne, Wireless World and Radio Review, No. 13, 1924, p. 774. [3] Lessing, L., Man of High Fidelity: Edwin Howard Armstrong, A Biography, New York: Bantam Books, 1969. [4] Fernandez-Duran, A., et al., Zero-IF Receiver Architecture for Multistandard Compatible Radio Systems: Girafe Project, IEEE Vehicular Technology Conference, Vol. 2, May 1996, pp. 1,052−1,056.


Flexible RF Receiver Architectures [5] Lawton, M. C., and J. D. Waters, The Design of Flexible Receivers for Communicating Appliances, IEEE Vehicular Technology Conference, Vol. 2, May 1996, pp. 1,060−1,064. [6] Kirkland, W. R., and K. H. Teo, I/Q Distortion Correction for OFDM Direct Conversion Receiver, IEE Electronics Letters, Vol. 39, No. 1, January 9, 2003, pp. 131–133. [7] Itoh, K., et al., Even Harmonic Direct Conversion IC for Mobile Handsets: Design Challenges and Solutions, IEEE RFIC Symposium Digest, June 1999, pp. 53–56. [8] Loke, A., and F. Ali, Direct Conversion Radio for Digital Mobile PhonesDesign Issues, Status, and Trends, IEEE Trans. on Microwave Theory and Techniques, Vol. 50, No. 11, November 2002, pp. 2,422−2,435. [9] Lang, S., R. M. Rao, and B. Daneshrad, Design and Development of a 5.25GHz Software Defined Wireless OFDM Communication Platform, IEEE Radio Communications, Vol. 1, No. 2, June 2004, pp. S6–S12. [Note: contained within IEEE Communications Magazine, Vol. 42, No. 6, June 2004]. [10] Wolf, D., 1/f Noise: Noise in Physical Systems, Proc. of 5th International Conference on Noise, Bad Nauheim, Germany, 1978, pp. 122–133. [11] Razavi, B., Design Considerations for Direct Conversion Receivers, IEEE Trans. on Circuits and Systems II, Vol. 44, June 1997, pp. 428–435. [12] Minnis, B. J., and P. A. Moore, Estimating the IP2 Requirement for a Zero-IF UMTS Receiver, Microwave Engineering Europe, July 2002, pp. 31–36. [13] Davenport, W. B., and W. L. Root, Introduction to the Theory of Random Signals and Noise, New York: IEEE Press, 1987. [14] Tsurumi, H., and Y. Suzuki, Broadband RF Stage Architecture for Software Defined Radio in Handheld Terminal Applications, IEEE Communications Magazine, February 1999, pp. 90–95. [15] Engen, G. F., The Six-Port Reflectometer: An Alternative Network Analyzer, IEEE Trans. on Microwave Theory and Techniques, Vol. 25, No. 12, December 1977, pp. 1,075−1,080. [16] Engen, G. F., An Improved Circuit for Implementing the Six-Port Technique of Microwave Measurements, IEEE Trans. on Microwave Theory and Techniques, Vol. 25, No. 12, December 1977, pp. 1,080−1,083. [17] Li, J., R. G. Bosisio, and K. Wu, Computer and Measurement Simulation of a New Digital Receiver Operating Directly at Millimeter-Wave Frequencies, IEEE Trans. on Microwave Theory and Techniques, Vol. 43, No. 12, December 1995, pp. 2,766−2,772. [18] Hesselbarth, J., F. Wiedmann, and B. Huyart, Two New Six-Port Reflectometers Covering Very Large Bandwidths, IEEE Trans. on Instrumentation and Measurement, Vol. 46, August 1997, pp. 966–969. [19] Nevaux, G., B. Huyart, and G. J. Rodriguez-Guisantes, Wide-Band RF Receiver Using the Five-Port Technology, IEEE Trans. on Vehicular Technology, Vol. 53, No. 5, September 2004, pp. 1,441–1,451. [20] Schreier, R., et al., A Flexible 10-300MHz Receiver IC Employing a Bandpass Sigma-Delta ADC, Proceedings of the IEEE International Microwave Symposium, Phoenix, AZ, 2001. [21] Dagher, E. H., et al., A 2-GHz Analog-to-Digital Delta-Sigma Modulator for CDMA Receivers with 79-dB Signal-to-Noise Ratio in 1.23MHz Bandwidth, IEEE Journal of Solid-State Circuits, Vol. 39, No. 11, November 2004, pp. 1,819–1,828. [22] van Veldhoven, R. H. M., A Triple-Mode Continuous-Time Σ∆ Modulator with Switched-Capacitor Feedback DAC for a GSM-EDGE/CDMA2000/UMTS Receiver, IEEE Journal of Solid-State Circuits, Vol. 38, No. 12, December 2003, pp. 2,069–2,076. [23] Brannon, B., Overcoming Converter Nonlinearities with Dither, analogue Devices Application Note No. 
AN-410, analogue Devices Inc, One Technology Way, Norwood, MA, 1995. [24] Spencer, N. W., Comparison of State-of-the-Art Analog-to-Digital Converters, Massachusetts Institute of Technology, Lincoln Laboratory, Lexington, MA, Project Report AST-4, March 1988.


[25] Irons, F. H., and T. A. Rebold, Characterization of High-Frequency Analog-to-Digital Converters for Spectral Analysis Applications, Massachusetts Institute of Technology, Lincoln Laboratory, Lexington, MA, Project Report AST-2, November 1986. [26] Thao, N. T., and M. Vetterli, Optimal MSE Signal Reconstruction in Oversamples A/D Conversion Using Convexity, Proc. of ICASSP 92, Vol. 4, 1992, pp. 165–168. [27] Robins, W. P., Phase Noise in Signal Sources, London, England: Peter Peregrinus Ltd., 1982, p. 53. [28] Tuttlebee, W., (ed.), Software Defined Radio-Enabling Technologies, New York: John Wiley & Sons, Chapter 4. [29] Yang, K., and S. Lee, Examine the Effects of Random Noise on Jitter, Microwaves & RF, September 2004, pp. 76–86. [30] Design a Low-Jitter Clock for High-Speed Data Converters, Maxim Application Note No. 800, November 20, 2001; http://www.maxim-ic.com/appnotes.cfm/ appnote_number/800. [31] Trans-European Trunked Radio (TETRA); Voice plus Data (V + D); Part 2: Air Interface (AI), ETS 300 392-2, clause 6, European Telecommunications Standards Institute. [32] 3GPP Technical Specification Group, Radio Access Network: UTRA (BS) FDD; Radio Transmission and Reception, 3G TS 25.104. [33] Trans-European Trunked Radio (TETRA); Conformance testing specification; Part 1: Radio, ETS 300 394-1, European Telecommunications Standards Institute. [34] RF Micro Devices 1995 Designers Handbook, RF Micro Devices Inc., 1995, pp. 107−149. [35] Hull, C. D., J. L. Tham, and R. R. Chu, A Direct-Conversion Receiver for 900 MHz (ISM Band) Spread-Spectrum Digital Cordless Telephone, IEEE Journal of Solid-State Circuits, Vol. 31, No. 12, December 1996, pp. 1,955−1,963. [36] Hilborn, D. S., S. P. Stapleton, and J. K. Cavers, An Adaptive Direct-Conversion Transmitter, IEEE Trans. on Vehicular Technology, Vol. 43, No. 2, May 1994, pp. 223−233. [37] Cavers, J. K., The Effect of Quadrature Modulator and Demodulator Errors on Adaptive Digital Predistorters for Amplifier linearisation, IEEE Trans. on Vehicular Technology, Vol. 46, No. 2, May 1997, pp. 456−466. [38] Faulkener, M., and T. Mattsson, Spectral Sensitivity of Power Amplifiers to Quadrature Modulator Misalignment, IEEE Trans. on Vehicular Technology, Vol. 41, November 1992, pp. 516−525. [39] Adachi, F., and M. Sawahashi, Error Rate Analysis of MDPSK/CPSK with Diversity Reception Under Very Slow Rayleigh Fading and Cochannel Interference, IEEE Trans. on Vehicular Technology, Vol. 43, No. 2, May 1994, pp. 252−263. [40] Faulkener, M., T. Mattson, and W. Yates, Automatic Adjustment of Quadrature Modulators, IEE Electronics Letters, Vol. 27, No. 3, 1991, pp. 214−216. [41] Mini Circuits, VCO Designers Handbook, Scientific Components, Brooklyn, NY, 1996. [42] Robins, W. P., Phase Noise in Signal Sources: Theory and Applications, London, England: Peter Peregrinus Ltd., 1982, Chapters 7 and 8. [43] Smith, J., Modern Communication Circuits, New York: McGraw-Hill, 1986, Chapter 10. [44] Robins, W. P., Phase Noise in Signal Sources: Theory and Applications, London, England: Peter Peregrinus Ltd., 1982, Chapter 3. [45] 3GPP Technical Specification Group, Radio Access Network, TS 25.141 V3.2.0, Base Station Conformance Testing, 2000. [46] Telecommunications Industry Association (USA), TIA/EIA/IS-97-A (CDMA): Recommended Minimum Performance Standards for Base Station Supporting Dual-Mode Wideband Spread Spectrum Cellular Mobile Station, July 1996. 
[47] Telecommunications Industry Association (USA), TIA-97-E: Recommended Minimum Performance Standards for cdma2000® Spread Spectrum Base Stations (ANSI/TIA-97-E-2003), February 2003.

CHAPTER 4

Multi-Band and General Coverage Systems

4.1 Introduction

Current radio receiver designs are, in general, inherently narrowband and can only achieve general (or broadband) coverage by the switching or alteration of narrowband elements. Certain designs, such as those used in many scanning receivers, do not attempt to overcome some of the fundamental receiver problems, such as blocking and image rejection, but rely on the user being able to eliminate interference by positioning of the set, or some other mechanism such as the use of a directional antenna. Where this is not possible, the user must tolerate the problem and the restriction in frequency usage which ensues, as the price of achieving wideband coverage. The aim of the ideas presented in this section is to propose systems and techniques for the elimination of many or all of the fundamental problems which prevent the truly universal radio receiver from becoming a reality. The ideas presented are not fully developed solutions, currently in production, but more a series of proposals as to how some of these fundamental issues might be addressed. There are three basic problems which need to be solved:

1. The diplexer filter required in a full-duplex transceiver must be specifically and carefully designed for its intended band of operation. This filter is usually either a helical component or formed from a dielectric (such as ceramic) and hence is almost impossible to tune in any sensible fashion over a reasonable range of frequencies. A multiple-band transceiver would therefore require a number of diplexer filters and this would very quickly become prohibitive, both in terms of cost and size.

2. The front-end preselect filter (also known as the band-select or cover filter), utilised to reject the image signal and other particularly strong out-of-band signals, must also be either tunable or eliminated in order to allow multi-band coverage. Electronic tuning of this filter is a more realistic proposition than that of the diplexer filter mentioned earlier; however, the change in technologies (from, perhaps, lumped-element to dielectric-based) across, say, 100 MHz to 2 GHz, would make this difficult, if not impossible. The alternative to the use of such a filter would require the front-end amplifier [or low-noise amplifier (LNA)] to be able to handle the full dynamic range of signals within the broad coverage range. This may include TV transmissions of many kilowatts and microcellular transmissions of a


few milliwatts, and hence a very high dynamic range amplifier is required. Such an amplifier could be created by backing off a high-power linear amplifier (of, say, 10W), but this is unrealistic in a hand portable radio. It is therefore necessary to utilise a more conventional low-noise amplifier and eliminate its distortion when dealing with high input signal strengths.

3. A further consequence of eliminating the front-end filter is that the image signal is no longer suppressed and hence has the potential to interfere directly with the wanted signal in the receiver. This image signal must therefore be suppressed by some other mechanism which does not involve filtering at the input signal frequency.

4.2 Multi-Band Flexible Receiver Design

As was hinted earlier, the addition of wide channel bandwidths and, in particular, multiple operating bands significantly increases the difficulty of producing a flexible receiver design. The widening of the channel bandwidth has the following consequences:

• The number of narrowband carriers which can enter the IF and baseband chains is significantly increased, thus increasing the potential dynamic range required in these parts of the system. In going from an IF of, say, 200 kHz (for GSM) to 4 or 5 MHz (for UMTS in Europe), the number of 25-kHz channels (e.g., for TETRA) that could enter the IF increases from 8 to 200.

• The sampling rate and dynamic range required of the A/D converter also both increase significantly. This may well make the A/D an unrealisable part using current technology (or indeed, following medium-term advances in current technology).

In going from a single-band to multiple bands, the receiver faces a number of further problems:

• RF preselection filtering becomes difficult or impossible, since the filter must now be tuned to each band of interest. Alternatively, a bank of switched filters may be employed, but this can quickly become unwieldy for a truly flexible system. This latter technique has been used in a number of military systems in the past.

• The channel synthesiser must tune over a far wider range than for a single-band system.

• The diplexer in a full-duplex transceiver must have a variable frequency of operation and a variable transmit/receive frequency split. Since the diplexer is currently realised in ceramic, SAW, or helical resonators in most portable systems, this is clearly impossible with current techniques. Again, the main obvious alternative is the use of multiple units, with switching to determine which is in use at a given point in time. As before, this can quickly become unwieldy.


It is worth examining the consequences of eliminating the inflexible components mentioned above on the overall receiver performance, since the only option is to design without these components and utilise alternative means of solving the resulting problems (if possible). If the front-end preselect and diplexer filters are removed, then three main problems result:

1. All image rejection from this filter is lost, thus leaving the receiver prone to signals appearing at its image frequency.

2. All radio signals within the bandwidth capability of the antenna will impinge upon the front-end low-noise amplifier in the receiver. This amplifier will therefore require a very high dynamic range to prevent overload from strong, unwanted signals (e.g., broadcast TV transmissions in a handportable communications receiver).

3. Without a diplexer, the full power of the transmitter output signals may impinge upon the receiver input (depending upon what is used to replace the diplexer). The receiver must therefore be able to cope with these signals, or else utilise an alternative method of eliminating them.

One possible approach to solving these problems is shown in Figure 4.1. At first glance, the only major difference between this figure and Figure 3.1 is that the front-end filter has been removed. The consequences of this act have, however, been incorporated in the labelling of the various system components. The front-end amplifier is now required to have a high dynamic range and this will have implications for either its power consumption (if conventional techniques are used) or complexity (if a linearisation technique is used) or possibly both. The mixer must now incorporate the image-rejection capability originally provided by the front-end filter; hence, some form of image-reject mixer will be necessary. It too will experience the full dynamic range of the input signals and hence must be able to cope with this without introducing undue levels of distortion and hence possibly blocking weak, wanted signals. The other significant difference is the introduction of variable-bandwidth anti-alias filters prior to the A/D converters. These can then effectively perform the channel-selection filtering in the receiver and hence significantly reduce the dynamic range required of the A/D converters. The only remaining dynamic range

Figure 4.1 Possible universal receiver architecture.


requirement would then be that necessary to cope with fast-fading of the wanted channel, slow fading having been eliminated by the AGC on the IF amplifier (assuming that it has sufficient dynamic range to cope). This filtering may be implemented using switched-capacitor techniques, for example, with the clock frequency (and hence filter bandwidth) being under software control. Note that the use of anti-alias filtering as a channel filter is only possible in a single-channel application (e.g., a portable receiver). An architecture more akin to that of the right-hand side of Figure 3.5 would be required in a multi-carrier base-station application. It is clear from this discussion that a number of these components have yet to be realised, although research is currently underway to solve these problems, as they are potentially key to the practical realisation of a multi-band flexible architecture radio.

4.3 The Problem of the Diplexer

The diplexer filter in a mobile radio transceiver has, for many years, been the sole method of achieving the necessary removal of the transmitter output signal from the receiver input, in a full-duplex radio. This component is normally essential in order to realise the benefits of a standard telephone conversation in an FDD system (i.e., to be able to speak and listen simultaneously). In addition, it has been a feature of many TDD and TDMA systems, due to the requirement for the transmit and receive frames to overlap, when a long turnaround time for the transmit/receive signals is present (e.g., when the user is close to the edge of a cell). The use of a diplexer (or duplexer¹) filter has a number of significant disadvantages that must either be tolerated or circumscribed in order to enjoy its benefits. These may be summarised as follows:

• Size: Their physical construction is such that they are often bulky, and even in handportables they can consume a relatively significant amount of space.

• Construction: Their function, and hence their required form of construction, means that it is unlikely that they will be successfully integrated along with the silicon components within a transceiver (in the short or medium term). They are therefore a barrier to achieving a single-chip, full-duplex radio.

• Frequency spacing: The operation of current diplexer filters dictates that a significant frequency spacing between transmit and receive bands is required. This split is, for example, 90 MHz for 1,800-MHz cellular equipment and any attempt to reduce it would result in a significant increase in size for the diplexer.

• Spectrum inefficiency: The use of a diplexer requires a frequency split between transmit and receive bands. The proposed technique to eliminate the diplexer should mean that this split could be eliminated, thus allowing both transmission paths to operate on the same frequency (known as on-frequency duplex). This in turn could lead to a doubling of the number of channels available in a

1. The names diplexer and duplexer are used interchangeably and refer to the same component.


given bandwidth. Note that the required performance from the technique increases markedly when attempting to achieve on-frequency duplex, as the performance requirement changes from one of eliminating overload in the receiver path to one of suppressing the transmit signal to a level below the minimum required receive sensitivity, by more than the cochannel protection ratio of the modulation format in question. This is an extremely tough requirement in most systems. One example of a problem in the transmit-receive frequency split occurs in the 220-MHz SMR band in the United States. The issue is that of a small split between transmit and receive bands within the 220-MHz allocation; a given pair of transmit and receive channels is only separated by 1 MHz and this is a very small percentage of the frequency of operation. As a comparison, take the 1,800-MHz DCS band: Here the split is 90 MHz, which is around 5% of the operating frequency. At 220 MHz, the split is only 0.45% of the operating frequency. It is this small percentage, which dictates the specification required of the diplexer filters in order to allow full duplex operation. Creating a filter with a suitably high rejection over such a narrow frequency band, at VHF, would result in a very large and expensive item (prohibitively so), if indeed it is realisable at all. Such filters would be nonsense in a handportable and prohibitively expensive and unacceptably large in a mobile. A radically new approach to this problem is therefore required. If it is assumed that a 2.5-W output power (+34 dBm) is required from the mobile and that the receiver is well designed and hence has a dynamic range of 80 dB, then the maximum level of transmit signal permitted in the receiver front end is −40 dBm (for an overall receiver sensitivity of −120 dBm). The rejection required therefore is +34−(−40) dB = 74 dB. This level of rejection must mainly be provided by some form of cancellation without sapping significant additional power from the supply or adding unreasonable levels of complexity. The preceding discussion concentrates on conventional diplexer issues; there are, however, a number of issues which arise when considering a flexible architecture radio. In particular, the requirement for flexibility, in a multi-mode radio capable of operating with a number of radio systems (even in the same area of spectrum), introduces new duplexer issues which must be addressed. The problems occur since the different systems may use different multiple access schemes [e.g., frequency-division duplex (FDD), time-division duplex (TDD), time-division multiple access (TDMA), or code-division multiple access (CDMA)] and may have different transmit/receive frequency splits (or none at all in the case of TDD systems). The required transmit/receive isolation for continuous-time, full duplex transmission (i.e., not TDD and not TDMA with non-simultaneous transmit/receive timeslots) is based on the transmit power level and required receive sensitivity, along with the receive A/D dynamic range and the selectivity of the receiver (digital) filtering. Consider the example of a handset full-duplex transceiver, with a 1-W (+30-dBm) maximum output power capability and a minimum receive sensitivity of −110 dBm (for a given modulation bandwidth). If it is assumed that a 10-dB C/I


ratio is the minimum for adequate demodulation of the chosen modulation scheme, then the minimum isolation which must be provided by the duplexer, for on-frequency duplex, is:

ZI,OFD = PTx − (Pmin − DC/I) = +30 − (−110 − 10) = 150 dB    (4.1)

This is an extremely stringent requirement and would prove almost impossible to meet by any known and economic technique. If, however, a duplex frequency split is now introduced, the situation becomes more realistic. Consider the earlier example, but now with a duplex frequency split introduced, such that the receive IF digital filtering can reduce the unwanted residual transmitter signal appearing in the front-end received signal, to a negligible level. This makes no assumption about any analogue IF filtering, which may well ease the burden on, for example, ADC dynamic range, as the general case of a fully flexible receiver architecture is assumed here. There are now two isolation considerations which must both be met; however, each is potentially much less stringent than that considered earlier. The first consideration is overload of the receiver and hence its front-end and receiver strong signal handling capability. This breaks down into the analogue part (LNA, mixers, and so forth) and its IMD performance and the A/D converter and its dynamic range. In this case (split-frequency duplex), the required isolation may be derived as follows. In the limiting case, the IMD power generated by the receiver non-linearity (or clipping) must not exceed the specified minimum sensitivity plus the required C/I for the modulation format in question. In practice a margin of at least 3 dB would be desirable; however, the simplified analysis below assumes no margin, hence:

PIMD = Pmin − DC/I    (4.2)

The IMD power resulting from the front-end non-linearity, based upon the simple assumption of a two-tone test and a purely third-order non-linearity, is given by:

PIMD = 2(PTone − PIP3)    (4.3)

The tone power in this case is provided by the unwanted leakage of the transmit signal into the receive signal path, hence:

PTone = PTx − ZI,SFD1    (4.4)

Combining (4.2) through (4.4) gives:

ZI,SFD1 = PTx − (Pmin − DC/I)/2 − PIP3    (4.5)


where PTx is the transmitter output power, Pmin is the minimum specified receive signal power, DC/I is the minimum required carrier-to-interference ratio for the modulation format in question, and PIP3 is the third-order intercept point of the receiver front-end analogue components. If this example is used and a receiver input intercept point of +30 dBm is assumed (a reasonable upper limit for a linearised receiver front-end in a handset), the required isolation reduces to 60 dB. This is still a very high value, but may not be completely beyond the bounds of possibility for a future isolation technology. Note that (4.5) assumes that an adequate A/D converter dynamic range is available, where this dynamic range is given by the difference between the unwanted (residual) transmitter output signal impinging upon the receiver and the maximum permitted interferer level [given by (4.2)]. Strictly speaking, this is the required spurious-free dynamic range (SFDR) rather than the signal-to-noise ratio (although this may also be important, depending upon the degree of averaging and/or filtering which can be employed in the digital domain, to extract the wanted signal). It is given by:

DA/D = (PTx − ZI,SFD1) − (Pmin − DC/I) = 90 dB    (4.6)

This is again high, but not out of the question, particularly in a narrowband system. As ADC technology improves in the future, it will become increasingly realistic, even in broadband (and hence high sample-rate) systems. The second requirement is that the transmitter noise floor must not mask the received signal. This results in the following isolation requirement:

ZI,SFD2 = NTx − (Pmin − DC/I)    (4.7)

where NTx is the transmitter output noise power (in the receiver bandwidth). A typical figure for this noise power is around −75 dBm, based upon the minimum received power levels used above (and hence channel bandwidths). With this figure, the required isolation is 45 dB, making the first consideration (on receiver linearity) dominant in this case; a short numerical sketch of these isolation requirements follows the list below. There are a number of partial or complete solutions to the isolation problem:

1. Tx/Rx switch. It is possible to implement a purely switch-based duplex facility, and this has many advantages. First, it can be made very broadband (multi-octave, if necessary) since filtering is not necessary. Second, it places no restrictions on the system duplex frequency split, since no frequency-selective components need be involved. Finally, it will allow on-frequency duplex (i.e., TDD) for the same reason. It may be implemented using simple PIN-diode switch technology and is therefore low cost, although transmit-receive isolation is an issue and it may be necessary to disable the transmitter while in receive mode to ensure that the transmitter noise floor does not de-sense the receiver.

2. Switched diplexer. Recent advances in integrated diplexer techniques have led to the possibility of implementing a switched diplexer, in which


the transmit and receive paths can be switched between two (or more) paths. This type of system is discussed in more detail in Section 4.3.2. It has a number of disadvantages, including band-limiting (i.e., it is not commensurate with an ideal general-coverage SDR), relatively high loss (typically) due to losses in the switches, and limited power handling (again, due to switch-related issues, such as saturation and IMD).

3. Circulator. A second solution is to use a circulator, as shown in Figure 4.2. The main drawbacks of this approach lie in the frequency range limitations of most high-isolation parts and the achievable isolation from low-cost, small-sized components, suitable for handset applications. Typical isolation values for these parts, even band-specific items, are in the range of 10 to 25 dB. This is adequate for their current, primary application in protection of the transmitter from the wide range of antenna VSWR conditions. However, even the higher value is not adequate for the duplex function under consideration here and it is difficult to envisage that an improved design could achieve the required figures without additional help from filtering or some other method of isolation enhancement.

4. Duplexer elimination schemes. This heading covers some new methods of achieving transmit/receive isolation. It is possible, for example, to use cancellation-based techniques in order to remove the transmit signal from the receive signal path, although these techniques themselves have a number of disadvantages. They are complex and have difficulties coping with external reflections. They also generally require complex antenna arrangements, which are not currently compatible with small handset designs. Research is, however, being performed in this area and a solution may be developed in the future. Further details of the basic concept are provided in Section 4.3.3.
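Returning to the numbers above, the isolation budget of (4.1) and (4.5) to (4.7) amounts to a few lines of arithmetic. The Python sketch below simply restates that arithmetic; the parameter values are those of the handset example in the text and the function names are illustrative only.

def isolation_on_frequency(p_tx, p_min, c_over_i):
    # (4.1): on-frequency duplex -- leakage must sit below the minimum
    # wanted signal by at least the required C/I (all values in dBm/dB)
    return p_tx - (p_min - c_over_i)

def isolation_split_frequency(p_tx, p_min, c_over_i, p_ip3):
    # (4.5): split-frequency duplex, limited by front-end third-order IMD
    return p_tx - (p_min - c_over_i) / 2.0 - p_ip3

def adc_dynamic_range(p_tx, isolation, p_min, c_over_i):
    # (4.6): SFDR required between the residual transmit leakage and the
    # maximum permitted interferer level
    return (p_tx - isolation) - (p_min - c_over_i)

def isolation_tx_noise(n_tx, p_min, c_over_i):
    # (4.7): isolation needed so the transmit noise floor does not mask
    # the wanted received signal
    return n_tx - (p_min - c_over_i)

print(isolation_on_frequency(30, -110, 10))        # 150 dB
z = isolation_split_frequency(30, -110, 10, 30)
print(z)                                           # 60.0 dB
print(adc_dynamic_range(30, z, -110, 10))          # 90.0 dB
print(isolation_tx_noise(-75, -110, 10))           # 45 dB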

4.3.1 RF Transmit/Receive Switch

Although it is possible to use coaxial relays for transmit/receive switching, virtually all low and medium power systems now utilise PIN diodes or FETs for this purpose.

Figure 4.2 Use of a circulator to provide transmit/receive isolation.


The basic configuration of the switch simply connects the common terminal to the antenna, with the two changeover contacts being connected to the transmitter output and receiver input, respectively. This is shown in Figure 4.3. Clearly the four main performance criteria for the switch are:

1. Isolation. A high degree of isolation is required to prevent the transmit signals from overloading the receiver front end. The amount of isolation required is similar to that calculated above. This is by no means a trivial requirement, even for a switch, and it is usually achieved in conjunction with disabling the transmitter during the receive portions of the communication interchange. Typical switch isolation values range from 20 dB to 60 dB, depending upon the frequency of operation, the performance of the diode(s) used, and the complexity of the switch.

2. Linearity. This is particularly important for the transmit path, as a poor linearity performance could significantly degrade the demanding linearity specifications, which are often required from modern (linearised) transmitters. Linearity performance is usually related to the carrier lifetime of the diode itself, together with the resistance or attenuation being demanded from it. A long carrier lifetime generally results in low distortion, with a short carrier lifetime only resulting in low distortion at extremes of bias (either on or off). In the case of a PIN switch, where the diode bias is either at a high value or zero (or possibly a small reverse-bias), carrier lifetime becomes less of an issue and shorter lifetime diodes can usually be applied.

3. Power-handling capability. The power-handling capability of the switch is usually set by either the PIN diode’s breakdown voltage or its power dissipation capability, with the latter usually being the limiting factor. For example, consider a PIN diode used in series mode, with an on-resistance of 1Ω, operating in a 50Ω system. If the maximum dissipation of the diode is 2W, then the maximum power handling of the switch will be approximately 100W (a short numerical sketch of this estimate follows this list).

4. Loss. While loss is an issue in power handling and device dissipation, as outlined above, it can also be a contributor to receiver noise figure. A low through-loss is therefore desirable for the receive path, as well as for the transmit path.
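A minimal sketch of the dissipation-limited power-handling estimate used in criterion 3 above (it ignores the small voltage drop across the diode itself and assumes the diode carries the full through current):

def series_diode_power_handling(max_dissipation_w, r_on_ohms, z0_ohms=50.0):
    # The through current satisfies roughly I^2 = P / Z0, so the diode
    # dissipates P * (R_on / Z0); inverting gives the maximum through power.
    return max_dissipation_w * z0_ohms / r_on_ohms

print(series_diode_power_handling(2.0, 1.0))   # about 100 W, as quoted above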

Figure 4.3 Use of a SPDT PIN-diode switch for transmit/receive changeover.


The simplest configuration commonly used for PIN-diode based transmit/receive switching is shown in Figure 4.4. In this circuit, L1 is an RF choke, C1–C3 are DC blocking capacitors, and D1 and D2 are the PIN diodes. The 50Ω, quarter-wave line can be constructed from any suitable transmission-line medium (e.g., coaxial cable or microstrip line). This configuration offers a number of advantages over the alternative of utilising purely series diodes:

1. It requires only a single bias control line, thereby simplifying the control circuitry.

2. Bias is only required while in transmit mode, resulting in a low-power receive mode.

3. Both diodes are biased when the transceiver is in transmit mode. This is an advantage, since PIN diodes usually generate most (harmonic) distortion when in their off state, due to modulation of the diode's capacitance, or self-bias resulting from rectification of the transmitted signal. Since they are in their on state while the system is in transmit mode, this situation is avoided.

Operation of the switch is straightforward. In transmit mode, a bias current is applied and both diodes appear in series (at DC) and hence turn on (low resistance). In this mode, the transmitter output is connected to the antenna and the receiver input is shorted to ground. The short appearing at the receiver input is transformed by the quarter-wave transmission-line to an open circuit at the antenna port. This provides a high impedance to the transmission of signals from the transmitter output (which have arrived at the antenna port), preventing them from entering the receiver.

In receive mode, the bias current is removed and both diodes turn off (high resistance). This disconnects the transmitter from the antenna port and removes the short circuit at the input to the receiver. This, in turn, allows the normal transmission of signals arriving at the antenna port, through the quarter-wave line, and into the receiver input.

Figure 4.4 Simple series-shunt transmit/receive switch using PIN diodes.


Note that the bias circuitry should possess a high impedance to RF signals (at the desired operating frequency) in both transmit and receive modes. This will prevent the generation of intermodulation and harmonic distortion and also prevent unwanted loading of the transmitter output, when in transmit mode.

Note also that for lower frequency operation, it is possible to replace the quarter-wave line by a lumped-element equivalent. This will usually be smaller and have a lower loss, resulting in a better receiver noise figure. A simple, three-element lumped-element quarter-wave line is shown in Figure 4.5. Its component values can be found from:

C_in = C_out = 1/(2πf·Z_0)  (4.8)

and

L_S = Z_0/(2πf)  (4.9)

Figure 4.5 Lumped-element equivalent of a quarter-wave line for use in transmit/receive switches.

When carefully constructed, this type of switch is capable of more than 30 dB of transmit/receive isolation at frequencies up to 2 GHz. Transmit and receive through losses (when in transmit and receive modes, respectively) are typically in the region of 0.4 to 0.6 dB.

An alternative configuration is shown in Figure 4.6, effectively doubling-up the arrangement shown in Figure 4.4. In this case, two series diodes are used in the transmit path, thereby increasing isolation in receive mode, due to the halving of the effective reverse-bias capacitance of either diode (assuming that both are identical). Similarly, in the receive path, two diodes and two quarter-wave transmission-lines are employed, and these provide enhanced isolation in this part of the circuit (theoretically, it should more than double). The drawback of this arrangement is, however, an increase in both the transmitter and receiver path losses. This results in an increased transmitter output power requirement (for equivalent power at the antenna) and an increased receiver noise figure. Note that the configuration shown in Figure 4.6 requires a negative bias current for transmit mode and a zero (or positive) bias current for receive mode. This can be altered to a positive bias requirement (as used in Figure 4.4) by reversing the direction of all diodes.

A final option is shown in Figure 4.7. Again four diodes are used, in this case as separate series-shunt switches for both the transmitter output and the receiver input.

Figure 4.6 Improved-isolation series-shunt transmit/receive switch.

Figure 4.7 A high-isolation SPDT transmit/receive switch.

This configuration has the disadvantage of requiring two bias control lines, but can provide good isolation without the need for a quarter-wave transmission line (or lumped-element equivalent).

4.3.2 Switched Diplexers

These have been discussed briefly above and involve the fabrication of a number of diplex filter elements for the transmit and receive paths. A two-way or multi-way changeover switch is then employed to select the required filter pair for a given transmit and receive band allocation.

One example of this type of system is shown in Figure 4.8 and is described in the literature [1]. It utilises GaAs PHEMT switches, as these were reported to have a number of advantages over PIN diodes at the required frequencies of operation. These advantages included low-current, low-voltage, single-supply operation, together with having no requirement to resonate out parasitics. The switches provided more than 20 dB of isolation (excluding filter isolation), together with an insertion loss of less than 1 dB for the overall diplexer circuit.

Figure 4.8 Switched antenna diplexer: (a) block diagram of the diplexer module; (b) diplexer circuit schematic. (From: [1] © 2005 IEEE. Reprinted with permission.)


There are a number of disadvantages of this arrangement. First, the GaAs switches will have some non-linearity and this will impact upon the adjacent channel performance of the transmitter(s) used in the system, for non-constant envelope modulation formats (the design reported in [1] was primarily intended for GSM/DCS and linearity performance was not discussed). Second, the technique will only work in a number of discrete bands, and these must typically be quite widely spaced from each other. It is not, therefore, possible to use this approach for a general coverage system, offering full flexibility.

An alternative fabrication technology to that described earlier involves the use of SAW resonators to form the filter elements and PIN diodes to switch the required elements in and out of circuit. Such a system is also described in the literature [2] and, in this case, is employed to operate in two pairs of Tx/Rx bands, which are close to each other (both are within the 800-/900-MHz area of the spectrum). The diplexer was designed to operate in a handset application for the Japanese cdmaOne system, which has receive band allocations from 832 to 846 MHz and 860 to 870 MHz and transmit band allocations from 887 to 901 MHz and 915 to 925 MHz. There is therefore a 14-MHz gap between the two different pairs of transmit and receive frequency allocations and a 55-MHz duplex split.

The duplexer was reported to have a transmit path loss of less than 2 dB and a receive path loss of less than 3.3 dB. It had over 50 dB of image attenuation and a transmit-receive isolation of 35 dB, from the transmit filter characteristic, and 52 dB, from the receive filter characteristic. It also had acceptable distortion characteristics from the PIN diodes. The main disadvantage of the technique is in its increased losses over those described earlier, although the comparison cannot be made directly.

4.3.3 Diplexer Elimination by Cancellation

A more radical idea for the elimination of the diplexer in a full-duplex software defined radio system is the controlled cancellation of the unwanted transmitter output signal as it appears in the receive signal path. The form of the solution to this problem involves the removal of the transmitter output signal from the receiver input by anti-phase cancellation in a precisely controlled manner. A general block diagram illustration of this approach is shown in Figure 4.9.

The signal from the receive antenna (which may be coincident with, or part of, the transmit antenna) will contain a significant degree of unwanted coupling of the transmitter output signal. This coupling (or its effects) must be removed in order to prevent overloading of the front-end components within the receiver section. This can be achieved by taking a sample of the transmitter output signal utilising a coupler and, after suitable processing, subtracting this signal from the receiver input signal, using a subtracter (which could also be a coupler). Since the receiver input signal will contain both the wanted signal and the unwanted coupling, this subtraction process will remove the unwanted coupling to a high degree (assuming a negligible amount of multipath coupling between the antennas).

The control circuitry uses samples of the receive signal after cancellation and typically the transmit signal, in order to ensure intelligent and rapid operation of the control elements (e.g., a variable attenuator and phase-shifter, as shown in Figure 4.9, or a vector modulator); the goal is both to obtain and maintain optimum cancellation. The control process can take place in real time or utilise a periodic updating mechanism.

Figure 4.9 Removal of the transmit signal from the receive path by anti-phase cancellation.

4.3.3.1 Implementation Options

There are many potential methods of realising this system and only a restricted number may be included here. In the case of a linear radio² incorporating a quadrature input transmitter (e.g., a Cartesian loop), one possible configuration is shown in Figure 4.10. The received signal will contain a significant quantity of the energy from the transmitted signal due to the coupling between the two antennas. These antennas may be separate structures, a composite item, or, in the extreme, a single antenna with a circulator, isolator, attenuator, or coupler (or similar device) used to perform the transmit and receive path separation.

A sample of the transmit signal from the quadrature input transmitter is processed by a phase-shifter and a variable attenuator, before being fed to one input of a subtracter. The received signal forms the other input to the subtracter and the result of the subtraction process is fed to the receiver front end.

2. A linear radio refers to one in which the baseband signal information is transmitted by either or both of amplitude and phase modulation of a carrier. Such radios may be used for the transmission of SSB, AM, FM, 16-QAM, GMSK, QPSK, CDMA, and almost any other recognised form of modulation.

Figure 4.10 One potential configuration of the diplexer elimination technique when applied to a quadrature-input transmitter.

If the variable phase and attenuator elements are correctly adjusted, the signal appearing at the input to the receiver front end will contain predominantly the wanted receive signal; the unwanted transmitter output signal will have been substantially eliminated. The remainder of the receiver processing (mixing, amplification, and detection) can then operate as in any other standard receiver configuration.

A key element of the system is the control of the variable phase and attenuation (or gain) elements in order to achieve and maintain optimum cancellation of the unwanted transmitter output signal from the receive signal path. For this purpose a control circuit utilising, for example, a digital signal processor (DSP) is configured to provide the required parameter optimisation for both of the control elements, based on the measurement of an error signal, relative to a reference signal, derived from the transmitter.

In the case of Figure 4.10, the reference signal is formed from the baseband (or audio) inputs to the transmitter. In this case they are supplied in analogue form, although they could, advantageously, be supplied digitally, where the transmitter input is supplied in that manner. These form one set of inputs to the controller. The other set of inputs is formed from a coherent quadrature downconversion of a sample of the received signal, after processing by the front-end components.


The oscillator used for this downconversion process may be the same as that used for upconversion in the transmitter; this is the case illustrated in Figure 4.10. Note that the downconverted signal could be supplied at a digital IF and the quadrature conversion could be undertaken digitally within the controller. Thus two sets of inputs are supplied to the controller which is sufficient to enable it to provide optimum control of the system in order to maximise the cancellation of the transmitter output signals in the receive signal path.

The detailed realisation of the controller may be achieved in many ways and the option chosen depends upon the precise form of its reference and/or error signal inputs. In the system shown in Figure 4.10, the function of the controller is primarily to adjust the variable attenuator and phase-shifter in order to minimise the level of the error signals at its input and to maintain that state as conditions change [such as the movement of persons or objects in the vicinity of the antenna(s)]. The purpose of the reference signals in this case is to provide a coherent reference with which to perform this minimisation. Clearly, the reference signals could be omitted and an energy minimisation performed on one or more of the error signals.

An alternative configuration is shown in Figure 4.11, in which the separate transmit and receive antennas are replaced by a single antenna, and a circulator is used to provide the basic transmit/receive separation. This configuration has many advantages over that of Figure 4.10 since a single antenna is generally much more acceptable to users of both handportable and mobile equipment.
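Whichever physical configuration is used, the controller's task of minimising the residual transmit energy can be illustrated in complex baseband. The sketch below is not the book's controller; it is a minimal LMS-style adaptation of a single complex weight (whose magnitude and phase correspond to the attenuator and phase-shifter settings of Figure 4.9), and the signal names, coupling value, and step size are illustrative assumptions.

```python
import numpy as np

def cancel_tx_leakage(rx, tx_ref, mu=0.05):
    """Minimal LMS-style sketch: adapt one complex weight w so that
    w * tx_ref tracks the transmitter leakage in rx; the subtracter
    output is the residual rx - w * tx_ref."""
    w = 0.0 + 0.0j
    out = np.empty_like(rx)
    for n in range(len(rx)):
        e = rx[n] - w * tx_ref[n]            # residual after cancellation
        w += mu * e * np.conj(tx_ref[n])     # steepest-descent weight update
        out[n] = e
    return out, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)
    wanted = 0.01 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
    coupling = 0.3 * np.exp(1j * 1.2)        # unknown antenna coupling (gain and phase)
    rx = wanted + coupling * tx
    _, w = cancel_tx_leakage(rx, tx)
    print(abs(w - coupling))                 # converges towards the true coupling
```

In a practical system the same kind of update would be applied, either continuously or periodically, to the analogue control elements, as noted above.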

Figure 4.11 Alternative configuration involving a single transmit/receive antenna and a circulator.


The action of the circulator is to permit a radio frequency signal, within its operating frequency range, to travel in only one direction (illustrated by the arrows in Figure 4.11). The transmitter output signal is therefore prevented from appearing at the receiver input. This embodiment also illustrates the use of a vector modulator in place of the gain and phase controllers shown previously. The type of vector modulator used is not critical and could be modified by, for example, replacing the multipliers shown in Figure 4.11 by variable attenuators.

A similar result can, theoretically, be achieved by the use of a directional coupler in the transmit signal path. The coupler is arranged such that the transmit signal passes to the antenna relatively unimpeded and the unidirectional coupled port feeds the receiver input. The directivity of the coupler ensures that a significantly reduced level of the transmitter output signal appears at the input to the receiver. This configuration is shown in Figure 4.12.

A disadvantage of this latter technique is that the sensitivity of the receiver is compromised by the coupling factor of the coupler; the overall receive sensitivity obtained is therefore lower than that of which the receiver alone is capable.

Figure 4.12 Further alternative configuration involving a single transmit/receive antenna and a directional coupler.


A further problem is that the directivity of the coupler is dependent upon its terminating impedances and is likely to be poor when connected to an antenna (particularly a mobile or portable antenna). It is not possible to utilise an isolator to alleviate this problem, as its unidirectional nature would inhibit the (wanted) receive signals from the antenna.

An enhancement of the above concepts is to utilise a multi-path cancellation scheme. In the event that a sufficient degree of cancellation cannot be provided by a single subtraction process, a number of additional subtraction processes may be included. There is clearly a limitation to this process where the gain/phase flatness of the signals involved is the restriction in cancellation performance. This is often due to the influence of the antenna return loss characteristic, which can have a substantial slope or ripple with respect to frequency, particularly in the case of a handset antenna.

4.3.3.2 Use of an Auxiliary Transmitter

One method of overcoming the above issue is to generate a separate cancellation signal, at an appropriate power level, and use this in place of the transmit signal sample [3]. This approach allows the power of digital signal processing to be applied in monitoring the received signal and optimising the digitally generated transmit signal, by means of adaptive filtering. Thus, a perfect cancellation vector may be generated, at all desired frequencies within the transmit signal bandwidth, and adaptively controlled to ensure ideal cancellation in a changing antenna environment. The required system arrangement for this technique is shown in Figure 4.13.

A clear disadvantage of this technique is the added complexity and cost of a second transmit chain. This transmitter will, however, be of lower power than the primary transmitter (at least 10 dB lower, and more if the antenna match is good or the coupling network has some useful isolation).

Figure 4.13 Use of an auxiliary transmitter in an active transmit-signal cancellation system. (From: [3]. © 2005 IEEE. Reprinted with permission.)


If this transmit chain can be fabricated as a part of the main transmitter IC (in a handset application), and it does not require a separate PA device (i.e., a good antenna match or isolation is provided), then the added cost should be minimal. This technique is ideally suited to applications where the transmit channel bandwidth is broad, such as in CDMA, WCDMA, or OFDM. The earlier technique would certainly suffer a poorer level of performance in this case and would probably, therefore, not be practical.

4.4 Achieving Image Rejection

4.4.1 Introduction

The presence of an image is an issue in all receiver systems. In single-band receivers it can be dealt with relatively easily by the use of suitable RF and IF filtering and a sensible design for the downconversion frequency plan. In multi-band or general-coverage receiver systems, the image problem becomes much more acute and alternatives to traditional IF filtering must be found to ensure good receiver performance. This section discusses the two primary options for solving this problem and describes solutions to some of the inherent practical issues in each case.

Arguably, a third option is the use of a direct-conversion receiver architecture. In this case, the image appears in-band and hence is well controlled, in so much as it is an image of the wanted signal and not of a signal at an unknown level relative to the wanted signal. The special case of a direct-conversion receiver is discussed in Chapter 3.

4.4.2 Use of a High IF

One possible technique for overcoming the problems of image rejection in a wideband coverage receiver is the use of a very high first intermediate frequency. This places the image at a frequency well outside of the potential band of interest and hence enables its removal either by explicit filtering or by the implicit filtering of the antenna (much less reliable).

As an example, consider a receiver required to receive any signal in the frequency range 400 MHz to 2.5 GHz (thus covering virtually all of the mobile communications bands). The first IF could be chosen as, for example, 3.5 GHz, as this would allow a reasonably high degree of filtering to be achieved before the lower end of the image frequency band, which starts at 7.4 GHz. This frequency plan is illustrated in Figure 4.14.

This arrangement has a number of disadvantages:

1. The synthesised first local oscillator has to operate at a very high frequency (3.9–6 GHz in the example shown in Figure 4.14) and hence its phase noise can be poor for a low-cost and low-power device.

2. The second local oscillator, which still requires low phase noise, also needs to operate around 3.5 GHz (depending upon the digital IF frequency chosen). Again this is potentially an expensive device, with the likelihood of it also having a relatively high power consumption.

Figure 4.14 Use of a high IF to achieve good image-rejection in a multi-band receiver.

3. The high dynamic range present at the front end needs to be preserved up to and including the second mixer [i.e., it needs to be preserved in the front end, first mixer, first IF amplifier(s), and second mixer]. This is due to the (assumed) range of channel bandwidths required to be processed by this front end (e.g., from 3.84-MHz WCDMA to 30-kHz AMPS); the first IF filter and amplifiers must be able to process the wider required bandwidth and this means having to deal with a large number of narrower-band carriers when operating in that part of the spectrum. This is quite an onerous requirement and may be difficult to meet in practice. The high dynamic range techniques mentioned elsewhere in this chapter, particularly involving the incorporation of the mixer, could potentially be employed to good effect here. However, this would add cost and complexity.

4. Obtaining a good IF filter response at 3.5 GHz is not straightforward. This is balanced somewhat by the relatively relaxed requirements on this device in terms of roll-off. It has, for example, around 400 MHz within which it must achieve sufficient attenuation to reject the local oscillator signal.

An alternative frequency plan is shown in Figure 4.15. In this case, the IF is now placed at a higher frequency, allowing the first local oscillator (LO) to run at a lower frequency. This will ease the requirements on this LO which, being synthesised, has the more difficult design issues regarding phase-noise, spurious, and so forth. This option does, however, make the IF filter design more difficult and (potentially) expensive.

It is evident that the use of a high IF is not without its problems, although it does solve the image-rejection issue. It is predominantly the cost and difficulty of overcoming these issues which have led to alternative options being considered. The main alternative is image-reject mixing and this is described next.
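Before moving on, the frequency plan arithmetic behind Figure 4.14 can be summarised in a short sketch (illustrative only, assuming a high-side LO so that LO = RF + IF and the image band lies at RF + 2·IF).

```python
def high_side_plan(rf_low_hz, rf_high_hz, if_hz):
    """High-IF plan arithmetic for Figure 4.14: with a high-side LO,
    LO = RF + IF and the image band sits at RF + 2*IF."""
    lo = (rf_low_hz + if_hz, rf_high_hz + if_hz)
    image = (rf_low_hz + 2.0 * if_hz, rf_high_hz + 2.0 * if_hz)
    return lo, image

if __name__ == "__main__":
    lo, img = high_side_plan(400e6, 2.5e9, 3.5e9)     # 400 MHz-2.5 GHz coverage, 3.5-GHz IF
    print("LO   : %.1f to %.1f GHz" % (lo[0] / 1e9, lo[1] / 1e9))    # 3.9 to 6.0
    print("Image: %.1f to %.1f GHz" % (img[0] / 1e9, img[1] / 1e9))  # 7.4 to 9.5
```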

4.4.3 Image-Reject Mixing

The basic concept of image-reject mixing is not new and has been described in numerous papers (e.g., [4]).

Figure 4.15 Alternative frequency plan utilising a lower local oscillator frequency range.

The essential idea is to make use of the fact that the image frequency will reflect about the frequency origin when downconverted by the local oscillator, and hence will suffer a 180° phase reversal. This phase reversal can be used to distinguish the image frequency from the wanted RF signal and hence to enable cancellation of the unwanted image.

The basic configuration of an image-reject mixer is shown in Figure 4.16. Note that the local oscillator and RF input signals can be interchanged without loss of functionality, but that the arrangement shown below has the advantage that it is simpler to produce the required broadband 90° splitter for a constant-level, high-strength signal (i.e., the local oscillator signal). Some of the broadband quadrature techniques described elsewhere in this book could be used for this purpose. The in-phase splitter may easily be realised for broadband operation by either transformer or resistive splitting techniques; the latter having the disadvantage of a 3-dB additional degradation in the mixer noise figure. The quadrature combiner on the right-hand side of the diagram operates at the IF frequency and therefore need only be a (relatively) narrowband component. Transformer, microstrip, coupled-transmission-line, or lumped-element techniques may therefore be used in this component, depending upon the IF frequency chosen.

If this circuit is used as part of a high dynamic range general coverage receiver system, then the two mixers will require a good dynamic range to avoid blocking. This can be achieved with, for example, high-level diode or FET ring mixers (e.g., level 17 devices), or by utilising some of the linearisation techniques described in Section 4.5.

Figure 4.16 Basic configuration of an image-reject mixer.


4.4.3.1 Alternative Forms of Image-Reject Mixer

The image-reject mixer shown in Figure 4.16 is essentially a form of the Hartley IR mixer [5]. The original form of this mixer is shown in Figure 4.17. In this case, the required IF quadrature is provided by the RC and CR networks, which are designed to operate at their 3-dB points, at the desired IF. This quadrature method is described in more detail in Chapter 5.

An alternative configuration, more commonly used in upconversion (and, indeed, Cartesian loop transmitters [6]), is shown in Figure 4.18. This technique, known as the Weaver method, utilises a second pair of mixers to provide the required output quadrature. The clear disadvantage of this architecture is the requirement for a second local oscillator; hence, it is not often used in practice. The realisation of this latter quadrature mixing stage in the digital domain, within a receiver, may alleviate this cost.

Figure 4.17 A Hartley image-reject mixer.

Figure 4.18 A Weaver image-reject mixer.


This does, of course, assume that the cost of the relatively high-speed ADCs required to provide the analogue-to-digital conversion does not exceed the cost of the mixers and LO.

4.4.3.2 Enhancement of the Standard Image-Reject Mixer

The basic image-reject mixer described above relies on accurate gain and phase matching of the upper and lower paths to achieve a high degree of cancellation of the unwanted image signal. In a production component, with a reasonable temperature specification, it is possible to achieve an image rejection of around 20–30 dB typically. This falls far short of the more than 60 dB required for it to be suitable, as the sole method of achieving image rejection, for use in a general-purpose receiver.

A possible enhancement is therefore to control the gain and phase balance within the image-reject mixer, using an automatic control mechanism, in order to achieve and maintain a high degree of image cancellation. There are a number of mechanisms by which this may be accomplished and each will be described next. Note that the illustrated method in each case employs I and Q detection and polar control; however, the control functions could be as easily implemented in vector form (e.g., using a vector modulator) with similar performance and operation.
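The 20–30 dB figure quoted above can be related to the achievable gain and phase matching using the standard textbook expression for the image-rejection ratio of a quadrature mixer. This expression is not derived in this chapter and the sketch below is included only as an illustrative cross-check; the example imbalance values are assumptions.

```python
import math

def image_rejection_db(gain_error_db: float, phase_error_deg: float) -> float:
    """Standard textbook estimate of image rejection for a quadrature
    image-reject mixer with the given gain and phase imbalance."""
    g = 10.0 ** (gain_error_db / 20.0)              # amplitude ratio of the two paths
    phi = math.radians(phase_error_deg)
    irr = (1 + 2 * g * math.cos(phi) + g * g) / (1 - 2 * g * math.cos(phi) + g * g)
    return 10.0 * math.log10(irr)

if __name__ == "__main__":
    print(image_rejection_db(0.5, 3.0))    # ~28 dB: a typical uncalibrated production part
    print(image_rejection_db(0.01, 0.1))   # ~60 dB: the matching a control loop must hold
```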

Control Based on Local Oscillator Nulling

Figure 4.19 shows a basic block diagram of this approach. The technique involves the injection of a small amount of DC into the RF ports of the mixers, thereby allowing some local oscillator feedthrough. If the system is perfectly balanced, then the feedthrough should be cancelled at the output of the system (i.e., no local oscillator signal should be present). The control mechanism can therefore detect the residual local oscillator signal (in I and Q components) using either the input local oscillator (thus creating a DC error signal) or an offset (but tracking) oscillator, which will create an audio frequency error signal suitable for processing in a DSP. This latter option will overcome the problems of DC offsets in the I/Q detection mixers and subsequent analogue processing.

The gain and phase control components can appear in a number of locations within the basic circuit, including prior to the mixer in the lower path or the equivalent positions in the upper path. The same control circuitry and control signals could be used in all of the appropriate positions. The advantage of putting the gain and phase control components after the mixers (either one) is that they are then only required to operate at the intermediate frequency and hence need only operate over a narrow bandwidth. This will make them both lower cost and easier to realise in a practical system.

An alternative to utilising the LO directly is to utilise an offset frequency close to the LO, in order to generate an audio-frequency IF, suitable for processing within a low-cost DSP. One option for this arrangement is shown in Figure 4.20. The key benefit of this approach is that it eliminates DC offset issues for the downconversion mixers, analogue integrators, and ADCs (where used).

Figure 4.19 Control of an image-reject mixer by utilising the local oscillator as a sounding signal.

Figure 4.20 Control of an image-reject mixer by utilising the local oscillator as a sounding signal and an offset oscillator for detection.

This system utilises a separate, locked local oscillator to downconvert the input and output versions of the LO signal to, say, a few kilohertz or tens of kilohertz. This oscillator would track the main local oscillator approximately (it could operate with a larger step size, for example, to lower cost) such that it maintains an audio-frequency separation from the LO. For example, if the main LO was synthesised in 5-kHz steps, the tracking LO could be synthesised in 25-kHz steps, thus yielding a maximum difference frequency of 25 kHz (since a 0-Hz difference frequency is undesirable, due to the problem of DC offsets mentioned earlier).

Control Based on Sounding Tone Injection

The basis of this technique is to inject a sounding tone into the RF input to the image-reject mixer (in addition to the off-air signals) and utilise this to set up the gain and phase controllers. This signal can be located either close to the wanted RF signal(s) or close to the unwanted image signal(s). If the sounding signal is injected close to the image frequency, the control circuitry will act to null out the sounding signal at the IF output port of the complete image-reject (IR) mixer. If it is injected close to the wanted transmission, the control circuitry will act to null the sounding signal at the image output of the IR mixer.

In this description, the sounding signal could take the form of a CW carrier, a spread-spectrum signal, a swept tone, or a switched (TDM) signal. The positioning of the signal (in terms of its frequency) may vary over a wide range, with the wanted transmission and image frequencies being able to appear on either side of the local oscillator frequency, depending upon the implementation of the IR mixer. For example, interchanging the role of the IF out and image out terminals in the above diagrams would have this effect. In the system as described, with, for example, the image frequency set to be higher than the wanted transmission, then an image-like sounding signal could appear anywhere (in frequency terms) above the LO frequency. Similarly, the transmission-like sounding signal could appear anywhere below the LO signal.

There are a number of advantages in placing the sounding signal close to, but not at, the image frequency:

1. Correlation between the image signal and sounding signal is avoided; this would otherwise have the potential to provide erroneous control information.

2. Any uncancelled sounding signal components appearing at the IF output will be offset from the wanted IF signal and hence will not get through the IF filter (if the system is designed appropriately).

3. Close placement of the sounding tone to the image frequency will still ensure good correlation between the image null point and the null point of the sounding signal. This will, in turn, ensure that a good overall level of image rejection can be achieved and maintained, despite the frequency difference between the two.

Optimum placement of the sounding signal is probably one or two channels away from the image frequency (either side).


The main disadvantage of this option is the additional hardware required to generate and subsequently downconvert the sounding signal. Variations in this technique are discussed in the literature, utilising periodic calibration [7] or one-time only calibration [8]. In the latter case, digital storage of the calibration coefficients is used to remove the need for periodic recalibration. Drift in performance will still occur, however, and this must be characterised. It is clearly less of an issue in the well-matched environment of an integrated circuit receiver, but would probably yield unacceptable performance in a discrete solution (with periodic recalibration being required).

Control Based on Direct Multiplication

A further alternative control scheme is proposed in [9] and shown in Figure 4.21. In this scheme, error signals are generated directly from the signals within the IR mixer and these are used to control a variable gain and delay element (the latter being formed using a filter). It is similar, in some respects, to the first scheme described in this section, since it utilises the local oscillator signals as a basis for assessing the amplitude and phase errors present in the system. The phase and amplitude error signals, respectively, resulting from the error signal generation processing shown in Figure 4.21(a), are given by:

V_∆θ = −(1/2)·A²·θ·(1 + A_RF²/16)  (4.10)

and

V_∆A = −(1/2)·A·∆A·(1 + A_RF²/8)  (4.11)

where A = A1A2 is the product of the two LO signal amplitudes, θ = θ1 + θ2 is the sum of the phase angles of the two local oscillator signals, ∆A = ∆A1A2 + ∆A2A1 (where ∆A1 and ∆A2 are the amplitude errors of the two local oscillators), and A_RF is the amplitude of the RF input signal to the IR mixer system. If the RF input signal is small (or disconnected for calibration), the above equations reduce to:

V_∆θ = −(1/2)·A²·θ  (4.12)

and

V_∆A = −(1/2)·A·∆A  (4.13)

These two signals therefore provide independent steering information for correction of the amplitude and phase errors present in the system. Figure 4.21(b) shows how these signals are utilised to correct the gain and phase imbalance initially present in the system.
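A direct numerical evaluation of (4.10)–(4.13) is sketched below; the parameter values in the example are arbitrary illustrations, not figures taken from [9].

```python
def error_voltages(a1, a2, theta1, theta2, d_a1, d_a2, a_rf=0.0):
    """Evaluate the phase and amplitude error signals of (4.10) and (4.11).
    a1, a2: LO amplitudes; theta1, theta2: LO phase errors (radians);
    d_a1, d_a2: LO amplitude errors; a_rf: RF input amplitude."""
    a = a1 * a2                        # A = A1*A2
    theta = theta1 + theta2            # total phase error
    d_a = d_a1 * a2 + d_a2 * a1        # dA = dA1*A2 + dA2*A1
    v_dtheta = -0.5 * a ** 2 * theta * (1.0 + a_rf ** 2 / 16.0)
    v_da = -0.5 * a * d_a * (1.0 + a_rf ** 2 / 8.0)
    return v_dtheta, v_da

if __name__ == "__main__":
    # With a_rf ~ 0 (RF input small or disconnected), the results match (4.12) and (4.13).
    print(error_voltages(1.0, 1.0, 0.01, 0.005, 0.002, 0.001))
```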

Figure 4.21 Multiplication-based control for a Weaver image-reject downconverter: (a) error signal generation; and (b) control system based on the generated error signals. (After: [9].)


The quadrature local oscillator signals, together with the gain and phase error signal voltages, form the inputs to a variable gain/phase element block. The integral of the phase error voltage is used to vary the pole and zero locations of an integrated filter (by varying gmO), while the integral of the gain error voltage is used to vary the gain of the filter (by varying gm). In this way, the relative amplitude and relative phase of the two IF signals feeding the IF output summer/subtracter is varied, in order to maximise the degree of image rejection achieved.

The prototype device, described in [9], was reported to have an image rejection of 26 dB, prior to calibration/correction, and 59 dB afterwards (based on a 1.8-GHz RF input frequency and a 1.4-GHz image frequency). One point to note, however, is that once the approximation used to derive (4.12) and (4.13) is no longer valid, the resulting calibration error leads to a significant drop in image-rejection performance. Consequently, if large RF input signals are expected on a regular basis, periodic, rather than continuous, calibration should be employed.

4.4.3.3 Application of Polyphase Filtering in an Image-Rejection Mixer

The general concept and use of polyphase filtering is described in Chapter 5, for application in broadband quadrature networks in transmitters and upconverters. It is, however, equally possible to utilise such filters in receivers (notably integrated circuit, single-chip receivers) and, in particular, as the IF quadrature network in an image-reject mixer [10, 11]. Their broadband properties are useful in this application, where a low IF is required relative to the signal bandwidth under consideration.

In a typical integrated circuit configuration for this low-IF application, two matched bandpass filters would be incorporated within the IR mixer (occupying the positions of the lowpass filters shown in Figure 4.17). These IF filters are not intended to provide image rejection, merely to eliminate all other unwanted signals which pass through the (wide) front-end cover filter. A polyphase filter can be used to replace these two bandpass filters and to provide both the required bandpass response and the required IF quadrature. The format of such a filter is shown in Figure 4.22, while its inclusion within the context of an IR mixer is illustrated in Figure 4.23.


Note that all components with a suffix 1 will ideally match those with a suffix 2 in these figures. A key advantage of an IR mixer, when utilising a polyphase filter in this context, is that its bandpass characteristic is symmetrical about the IF centre frequency—it therefore causes no degradation to the received eye. A conventional low-frequency bandpass filter will typically have an asymmetric response about its design centre frequency, thereby distorting the received data eye (partially closing the eye in the corresponding eye diagram).

A second key advantage of polyphase filtering in this application is that the degree of component matching required in the two sections of the filter (upper path and lower path) is much less stringent than that of the two conventional IF filters as described above. In other words, a higher degree of image-rejection may be achieved for a given degree of component matching. The degree of image-rejection which can be achieved is given by:

S_IR = 1/(1 + 4·(R_fb/R)²) = 1/(1 + 16·Q²) ≅ (1/(4Q))²  (4.14)

where Q is the quality factor of the filter, R_fb (= R_fb1 = R_fb2) is the value of the feedback resistors (perfect matching assumed), and R (= R1 = R2) is the value of the cross-coupling resistors (again, perfect matching is assumed).

Figure 4.22 An active polyphase filter [12].

Figure 4.23 Application of a polyphase filter in an IR mixer.

This is, of course, the image leakage in an ideal implementation, with no component mismatch. The impact of component mismatch will be to degrade the amount of image-rejection calculated using (4.14); however, the degree of matching required for, say, 60 dB of image rejection is still a factor of four lower than that which would be required if conventional IF filters were to be used instead [11]. In this case, a mismatch of 0.4% would be adequate, compared with a mismatch of 0.1% for conventional IF filters.
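The ideal leakage of (4.14) is easily evaluated. The sketch below uses arbitrary illustrative resistor values, and the Q = R_fb/(2R) relationship it quotes is the one implied by (4.14); neither is taken from [11] or [12].

```python
import math

def polyphase_image_leakage_db(r_fb_ohms: float, r_ohms: float) -> float:
    """Ideal image leakage of the active polyphase filter, from (4.14),
    assuming perfectly matched components (Rfb1 = Rfb2, R1 = R2)."""
    s_ir = 1.0 / (1.0 + 4.0 * (r_fb_ohms / r_ohms) ** 2)
    return 10.0 * math.log10(s_ir)

if __name__ == "__main__":
    # Illustrative values only: Rfb/R = 25, i.e. Q = Rfb/(2R) = 12.5.
    print(polyphase_image_leakage_db(25e3, 1e3))   # about -34 dB of residual image
```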

4.5 Dynamic Range Enhancement

The receiver in a software defined radio, particularly one operating over multiple bands, is likely to encounter a wider input dynamic range than a single-mode receiver. This is due to the fact that an ideal multi-band software defined radio will have little or no front-end filtering, to save both the size and cost which would be added by the use of multiple front-end filters for the multiple bands to be received. The front end is therefore likely to encounter a wide range of both potentially wanted and certainly unwanted carriers and must process these linearly until at least the point where the wanted channel can be selected. Failure to do this will result in significant blocking problems for the receiver and/or a significant EVM degradation for the wanted channel.

This section presents a range of linearity enhancement techniques which are appropriate for receiver front-end designs. Some of these are only applicable to the LNA, whereas others can improve the linearity of the complete RF/IF signal processing subsystem. There are also differences in bandwidth applicability, with some techniques providing very good performance over a narrow (single-channel) bandwidth and others providing perhaps less dramatic performance, but over a broad (multi-carrier or even multi-band) frequency range.

Many of the techniques presented are analogous to their high-power linearisation counterparts; however, the criteria for use in a receiver front end are significantly different.


Specifically, noise performance is not typically a major priority in a high-power design; however, it is clearly critical in a receiver front end. Indeed, some techniques (such as standard feedforward applied around an LNA/mixer) do not actually improve overall dynamic range; they merely shift where the useable dynamic range appears in terms of signal power. In other words, they degrade the front-end noise figure by the same amount as they improve its intercept point. A much cheaper equivalent, if a shift in dynamic range is the desired goal, is to insert an attenuator prior to the LNA.

Note that most of the techniques presented here will not compensate for genuine clipping of the signal. If front-end overload is occurring to such a degree that either the LNA or first mixer is driven to this level, then non-linearity and consequent distortion/blocking are inevitable. Feedforward can provide some benefit in this area; however, this is at the expense of a higher-power error amplifier, with a consequent likely degradation in noise figure and thereby overall system noise figure. Feedback techniques can actually increase the output distortion at the point where clipping occurs, since they will attempt to generate an infinite compensation signal that can have a broad spectral characteristic (see, for example, [6], Chapter 4).
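The point about shifting, rather than enlarging, the dynamic range can be illustrated with the standard spurious-free dynamic range relation (a textbook formula, not one derived in this chapter); the component values below are hypothetical.

```python
import math

def sfdr_db(iip3_dbm: float, nf_db: float, bw_hz: float) -> float:
    """Standard two-tone spurious-free dynamic range estimate:
    SFDR = (2/3) * (IIP3 - noise floor), noise floor = -174 + NF + 10*log10(B)."""
    noise_floor_dbm = -174.0 + nf_db + 10.0 * math.log10(bw_hz)
    return (2.0 / 3.0) * (iip3_dbm - noise_floor_dbm)

if __name__ == "__main__":
    base = sfdr_db(0.0, 3.0, 1e6)       # hypothetical front end: IIP3 = 0 dBm, NF = 3 dB
    shifted = sfdr_db(10.0, 13.0, 1e6)  # IIP3 up 10 dB, but NF also up 10 dB
    print(base, shifted)                # identical: the dynamic range has only moved
```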

4.5.1 Feedback Techniques

Feedback is a commonly used technique in RF power amplifier linearisation applications, in its various guises: RF feedback, modulation feedback, Cartesian loop, and so forth. Some of these techniques are also applicable in receiver applications, particularly for linearisation of the front-end LNA. Some new feedback variants are also emerging which are configured specifically with a receiver in mind and act to linearise the first mixer and/or the LNA. The relative merits of all of these techniques will be discussed in this section.

The techniques to be discussed in this section concentrate on those which can be implemented at a macro level, that is, taking mixers, amplifiers, and filters as separate blocks (in separate packages if necessary), rather than techniques which can only be implemented at an integrated circuit design level. It is assumed, therefore, that the building blocks from which these systems are constructed are already selected as being state-of-the-art devices. There is typically little point (from a cost or size perspective) in using these techniques to improve a poorly performing mixer, for example, to the point of matching an existing state-of-the-art, stand-alone device. The existing device will almost certainly be smaller and lower cost.

The use of feedback as a mechanism for receiver linearisation has a number of potential advantages:

1. It is often capable of large linearity improvements, as long as it is operated within its gain-bandwidth-delay product limit.

2. It is often (but not always) a simple technique to implement and hence is small and low cost, both of these criteria being essential in commercial SDR applications.

3. It can be used to linearise both the LNA and the front-end mixer, if properly configured.


4. It can generally be constructed in such a manner as to have minimal impact upon system noise figure; this is clearly an advantage in a receiver application.

4.5.1.1 Conventional RF Feedback

Conventional RF feedback may be applied externally to an existing LNA and can achieve relatively good wideband performance due to the inherently low delay of low-power LNAs (particularly integrated, e.g., MMIC, designs). The main issue in the application of such feedback is that it lowers the gain of the LNA stage by an amount equal to the degree of feedback applied. In most cases, this lowering of gain, and its consequent impact upon overall receiver noise figure, is not worth the improvement in intercept point which it enables.

A better technique, if a sacrifice in noise figure is permissible, is to utilise a higher power LNA, based around a medium or even a high-power device (in, for example, military systems), and bias it for good linearity. The overall performance achieved, measured as an improvement in dynamic range, is likely to be rather better than that obtained by linearising a lower power device or MMIC.

4.5.1.2 IF/RF Feedback with Vector Subtraction

This technique (outlined in Figure 4.24) contains elements of both feedback and feedforward, although it is essentially a feedback technique in terms of its distortion cancellation methodology. Its detailed operation is illustrated by the various spectra (amplitude versus frequency plots) shown at a number of points within the figure.

Figure 4.24 Feedback-based mixer linearisation technique using IF/RF feedback. (From: [13]. © 2001 IEE. Reprinted with permission.)


The nonlinear downconverting mixer is fed with a combination of the wanted (input) signal and an error signal derived from the system output. The purpose of the error signal is to act as a predistorting signal for the mixer, with the feedback mechanism operating continuously in real-time (unlike that of, say, conventional predistortion). The error signal results from an upconversion of the IF output of the system, using the same local oscillator as that of the original downconversion process. This reupconverted signal must be filtered to remove image products and this filter will contribute to the overall loop delay (perhaps significantly). The resulting RF error signal should now be comparable with the RF input signal (although containing unwanted distortion). Finally, a copy of the input signal is subtracted from the RF error signal, with an appropriate gain and phase weighting to ensure near-ideal vector subtraction. The resulting error signal resembles those typically seen in a feedforward system. It is then gain and phase weighted and amplified and added to the original input signal, to form the RF input to the downconversion mixer, as described earlier.

The degree of gain and phase matching required in the main-signal cancellation process, to achieve good performance, is reported to be quite high. A match of 0.1 dB and 0.1° was used to generate the results outlined below. To achieve and maintain these levels of matching in a practical solution would require an automatic control system, in very much the same way as that of the error-generating loop in a feedforward system. Note that because this is a feedback-based process in the system described here, the cancellation performance will inevitably degrade with increasing bandwidth (even with perfect gain and phase matching) and that this will introduce a fundamental bandwidth limitation into the system.

The main advantage of the technique lies in its potentially high linearity improvement capability. The technique was reported to be capable of some 20–25 dB of IMD improvement for both a two-tone test and a π/4-DQPSK carrier. This represents a useful improvement in receiver intercept point. This linearity improvement was achieved with a minimal degradation in overall noise figure of 0.2 dB. This is a clear advantage over both feedforward and predistortion techniques.

The main disadvantages of the technique lie in its relative complexity, particularly when a control scheme is included to maintain the performance of the vector cancellation part of the system, and its inherent gain-bandwidth limitation. The implementation described in the literature [13] is very narrowband, operating over tens or low hundreds of kilohertz, and is therefore essentially a single-carrier technique, suitable primarily for improving the signal quality of a single received (strong) signal. While it is undoubtedly possible to extend this bandwidth, particularly if the technique was to be implemented in an ASIC, it is unlikely ever to be suitable for multi-band front-end operation, as it stands.

4.5.2 Feedforward Techniques

4.5.2.1 Feedforward Linearisation of an LNA

Feedforward [6] can be employed as an LNA linearisation technique as well as being used in high-power linearisation systems. In the case of a low-noise system, the feedforward process will cancel the noise generated in the main amplifier, which


can therefore be a relatively high-power (good intercept point), relatively high noise-figure circuit. The critical elements, from a noise perspective, then become the reference-path components, which must be low loss, and the error amplifier, which must have a good noise figure. The error amplifier is, however, relieved of the burden of achieving a high intercept point, and hence can be a more conventional LNA.

The noise added by the main amplifier can be thought of as an additional signal, which is not also present on the reference path, and hence will appear as part of the error signal. It will therefore be corrected as a part of the natural operation of the feedforward process and, assuming a perfect gain/phase balance for the overall system, will be eliminated in the output of the complete feedforward amplifier. The feedforward process therefore not only eliminates distortion added by the main power amplifier, but also noise and indeed any other spurious signals present at the output of the main amplifier which are not also present in the reference path. This is a very powerful and useful benefit of a feedforward system, as it allows relatively low-noise amplifiers to be constructed with extremely high third-order intercept points, hence resulting in a very high dynamic range system. The configuration of a feedforward system for use as a low-noise, high-intercept point amplifier is shown in Figure 4.25. Note that the two blocks labelled compensation circuit refer to the gain/phase controllers or vector modulators used to achieve optimum cancellation of the main signal energy in the error signal and of the error signal energy (main amplifier noise and distortion) in the output signal.

The noise figure of a feedforward amplifier is determined by the elements of the system which are not included within the correction process (i.e., those elements of the loop for which correction is sought). In other words, noise added in the reference path, or by the error amplifier and associated components, is not corrected for by the feedforward process and will be added to the output signal at the level it appears at the output of the error amplifier, less the coupling factor of the output coupler. This may be summarised with reference to Figure 4.26 as follows. The total loss up to the error amplifier input is:

L_T = L_C1 + L_TD + L_S + L_CC  (dB)  (4.15)

Since the components introducing this total loss may be assumed to be matched to the characteristic impedance of the system (50Ω), it can be shown that the resulting noise power is kTB (watts), where k is Boltzmann's constant (1.38 × 10⁻²³ J/K), T is the system temperature in Kelvin, and B is the bandwidth of interest (hertz) [14].

Figure 4.25 Configuration of a feedforward system for optimum noise performance.

4.5 Dynamic Range Enhancement

175

Main amplifier

Coupling factor = C C2 (dB) Matched 50Ω system up to this point, hence noise power = kTB

Time delay

Input coupler

Error amplifier

Subtracter

τ

Input Loss = L C1 (dB)

Figure 4.26

Output coupler

Compensation circuit

Loss = L TD (dB)

Loss = L S (dB)

Loss = L CC (dB)

Gain = G A2 (dB) Noise figure = FA2(dB)

Noise figure of a feedforward system.

is the system temperature in Kelvin, and B is the bandwidth of interest (hertz) [14]. The system input noise is therefore: N in = kTB ( W )

(4.16)

The noise power at the output of the complete feedforward system may therefore be derived:

N_out = G_err F_err N_in = kTB × 10^(F_A2/10) × 10^((G_A2 − C_C2)/10) = kTB × 10^((F_A2 + G_A2 − C_C2)/10) (W)   (4.17)

where F_err is the error amplifier noise factor (note that noise figure = 10 log10(noise factor)) and G_err is the gain of the error amplifier, as seen at the output of the feedforward amplifier (i.e., incorporating the output coupler loss). The system noise factor is therefore:

F = N_out / (G_T N_in)   (4.18)

where G_T is the total gain of the reference and error paths in the feedforward system and is given by:

G_T = 10^((−L_T + G_A2 − C_C2)/10)   (4.19)

Hence, the system noise factor is:

F = [kTB × 10^((F_A2 + G_A2 − C_C2)/10)] / [10^((−L_T + G_A2 − C_C2)/10) × kTB]   (4.20)

Simplifying gives:


F = 10^((L_T + F_A2)/10)   (4.21)

Hence the system noise figure is given by:

F_dB = 10 log10(F) = L_T + F_A2 (dB)   (4.22)

The noise figure of the feedforward amplifier, in the case where perfect nulling of the main amplifier distortion and spurious signals is assumed, is therefore determined purely by the losses in the reference path and the noise figure of the error amplifier. Note that the input splitter and the subtracter in Figure 4.26 are both shown as directional couplers, configured to provide minimum loss to the reference signal. This is the optimum configuration for minimum noise figure, although it does require a higher main amplifier gain than would a system based on 3-dB hybrid splitters.

The use of feedforward is unlikely ever to achieve the noise figure performance of the very best LNAs, simply because there will always be a finite loss in the input coupler, subtracter, and reference-path delay line. These will add to the (potentially state-of-the-art) noise figure of the error amplifier, resulting in a compromised noise figure. A properly designed feedforward system can, however, provide a useful dynamic range enhancement over that of a conventional LNA, as it can extend the upper-end intercept point by more than it reduces the lower-end noise performance (by increasing the noise figure). Designs have therefore been undertaken in, for example, military applications, where a very high intercept point is advantageous to reduce or eliminate the impact of jammers.
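As a worked illustration of (4.15) and (4.22), the short Python sketch below computes the noise figure of a feedforward LNA from assumed reference-path losses and error amplifier noise figure; the numerical values are examples only and are not taken from the text.

# Illustrative calculation of feedforward LNA noise figure using (4.15) and (4.22).
# All component values below are assumed example figures.

def feedforward_noise_figure(l_c1, l_td, l_s, l_cc, f_a2):
    """Return the system noise figure (dB) of a feedforward LNA.

    l_c1 : input coupler loss (dB)
    l_td : reference-path time-delay loss (dB)
    l_s  : subtracter loss (dB)
    l_cc : compensation-circuit loss (dB)
    f_a2 : error amplifier noise figure (dB)
    """
    l_t = l_c1 + l_td + l_s + l_cc      # total reference-path loss, (4.15)
    return l_t + f_a2                   # system noise figure, (4.22)

if __name__ == "__main__":
    # Example: modest reference-path losses and a good error amplifier.
    nf = feedforward_noise_figure(l_c1=0.3, l_td=0.8, l_s=0.4, l_cc=0.5, f_a2=1.2)
    print(f"Feedforward LNA noise figure: {nf:.1f} dB")   # about 3.2 dB in this example

Even with these fairly benign assumed losses, the result is noticeably worse than a state-of-the-art standalone LNA, which is consistent with the discussion above.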

4.5.2.2 Feedforward Linearisation of a Cascaded Front End

The various architectures shown in Figure 4.27 are options for utilising feedforward to linearise a front end and first mixer, the critical elements as far as strong-signal handling is concerned in a typical receiver (an IF filter may be used to protect subsequent stages). Note that the IF delay line could advantageously be implemented as an IF filter (e.g., using a SAW or ceramic device). This would then, in conjunction with the use of a filter preceding or succeeding the error amplifier, ensure that the cancellation process in the output coupler was a relatively narrowband subtraction. Such a subtraction could achieve good performance with relative ease.

There is, however, a fundamental flaw with all of these architectures: they do not achieve an overall benefit in terms of the achievable dynamic range of the front end. While each is capable of producing an increase in the input intercept point of the front end, it will also yield an identical (or greater) increase in noise figure, such that the overall dynamic range achieved remains unchanged. A much simpler (and cheaper) way of achieving the same end is to insert an attenuator in front of the cascade of an LNA and mixer, where the LNA is based upon the error amplifier originally intended for use in the feedforward system. The reason for this is that there is always one mixer which must handle the full dynamic range of the signal and is not subject to IMD/noise correction by the feedforward process [for example, the mixer in the reference path of Figure 4.27(a)].

Figure 4.27 Various options for incorporating both the LNA and mixer within a feedforward-based front end: (a) Option 1; (b) Option 2; and (c) Option 3.

If it were possible to make a suitable mixer to meet the required specification for this device (for a given application), it would be possible to use this same device (in conjunction with an attenuator and/or LNA) to achieve the same result, without feedforward correction.
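A simple spurious-free dynamic range calculation illustrates this point. The Python sketch below uses the standard two-tone SFDR relationship and assumed example figures (not taken from the text) to show that equal increases in input intercept point and noise figure leave the dynamic range unchanged.

import math

def sfdr_db(iip3_dbm, nf_db, bandwidth_hz):
    """Two-tone spurious-free dynamic range of a receiver front end.

    Uses the standard relationship SFDR = (2/3) * (IIP3 - noise floor),
    with the input-referred noise floor -174 dBm/Hz + NF + 10*log10(B).
    """
    noise_floor_dbm = -174.0 + nf_db + 10.0 * math.log10(bandwidth_hz)
    return (2.0 / 3.0) * (iip3_dbm - noise_floor_dbm)

if __name__ == "__main__":
    bw = 200e3                                           # example channel bandwidth (assumed)
    base = sfdr_db(iip3_dbm=0.0, nf_db=3.0, bandwidth_hz=bw)
    # Feedforward correction of the cascade: +15 dB on IIP3, but also +15 dB on NF.
    corrected = sfdr_db(iip3_dbm=15.0, nf_db=18.0, bandwidth_hz=bw)
    print(f"SFDR without correction:                 {base:.1f} dB")
    print(f"SFDR with equal IIP3 and NF increase:    {corrected:.1f} dB")  # identical

The two printed figures are identical, which is the essence of the fundamental flaw described above.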

4.5.3 Cascaded Non-Linearity Techniques

A much better linearisation technique for receiver front ends is that of employing a cascaded non-linearity. There are three locations in which this could be employed, and these are summarised in Figure 4.28. These configurations are essentially forms of predistortion or postdistortion and operate in exactly the same manner (see Chapter 6). The form of non-linearity required in these systems can be very simple (e.g., third order only), since front-end non-linearities tend to be relatively simple and well behaved (unlike those of most high-power RF amplifiers). The benefits of utilising more involved forms of non-linearity are normally outweighed by the complexity and cost disadvantages of their implementation.

Figure 4.28(a) shows a conventional predistortion configuration, applied to a receiver front end. Again, non-linearity present in subsequent IF stages is ignored (at least from the perspective of strong-signal handling/blocking), as the IF filter (following the first mixer) will afford protection. The main disadvantage of this technique lies in the loss inherent in the predistorter and its consequent noise figure. This will add to the overall noise figure of the receiver front end. Even if a low-noise, active predistorter is used, the front-end noise figure will still be higher than that of the LNA alone. Although it is potentially a useable architecture, there are better options.

Figure 4.28(b) shows a cascaded non-linearity in the form of a predistorter/postdistorter configuration, applied to a receiver front end. In this case, the non-linearity acts as a postdistorter to the LNA and as a predistorter to the mixer. In this position it should have a minimal effect on front-end noise figure, while providing a linearity improvement for the overall system similar to that of the previous configuration. Results from a system built using this configuration (by the author) yielded an intercept point improvement of around 10 dB at 1.8 GHz, although maintenance of this performance would either require good temperature tracking of the various non-linearities, or a control scheme. The noise figure degradation, when employing the technique, was negligible.

Finally, Figure 4.28(c) shows a cascaded non-linearity in the form of a postdistorter configuration for an LNA and mixer combination. Whilst this option can provide correction for the non-linearity experienced by the wanted signal(s), it is unable to aid in improving blocking and other issues caused by the non-linear processing of unwanted or out-of-band signals (e.g., cross-modulation). It is therefore of limited use and provides no real benefits over the previous configuration. It could also act as a predistorter for an IF amplifier possessing poor linearity; however, it is usually better to design appropriately linear IF stages in the first place and not to rely upon a lineariser.
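The effectiveness of a simple third-order-only correction for a well-behaved front end can be illustrated numerically. The Python sketch below cascades a cubic predistorter with an idealised memoryless front end and compares the two-tone third-order IMD at the output; the amplifier model, coefficient and tone frequencies are all assumed for illustration and do not represent any of the specific configurations in Figure 4.28.

import numpy as np

# Two-tone illustration of a cubic predistorter cascaded with a weakly
# non-linear front end (all coefficients and frequencies are assumed values).
fs, n = 1_048_576.0, 2 ** 16
t = np.arange(n) / fs
f1, f2 = 100e3, 110e3                 # two-tone test frequencies (exact FFT bins)
x = 0.5 * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))

a3 = 0.05                             # assumed third-order coefficient

def front_end(v):
    """Memoryless front end with a weak compressive third-order term."""
    return v - a3 * v ** 3

def predistorter(v):
    """Simple cubic predistorter: the first-order inverse of the front end."""
    return v + a3 * v ** 3

def imd3_dbc(y):
    """Level of the 2*f1 - f2 product relative to the carrier at f1."""
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    level = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return 20 * np.log10(level(2 * f1 - f2) / level(f1))

print(f"IMD3 without correction:      {imd3_dbc(front_end(x)):.1f} dBc")
print(f"IMD3 with cubic predistorter: {imd3_dbc(front_end(predistorter(x))):.1f} dBc")

The residual IMD after correction is set by the higher-order products created in the cascade, which is why a simple cubic term is usually adequate for a gentle front-end characteristic but not for a heavily compressed power amplifier.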


Figure 4.28 Options for the placement of a cascaded non-linearity in order to improve front-end dynamic range: (a) predistortion; (b) pre/postdistortion; and (c) postdistortion.

4.5.4 Use of Diplexer Elimination, Image-Reject Mixing, and High Dynamic Range Techniques in a Receiver

The concepts which have been described in this section may be incorporated together to form the universal receiver concept mentioned in the introduction to Chapter 3. One embodiment of this arrangement is shown in Figure 4.29.

Figure 4.29 Complete universal receiver concept (comprising a linearised RF power amplifier fed from the transmit baseband and upconversion circuitry, a universal diplexer, a high-dynamic-range (linearised) receive LNA, a controlled image-reject mixer feeding the downconversion and receiver baseband circuitry, and a wide-coverage synthesiser).

There are various valid configurations, based on the above concept, with some of the system's components being intertwined. For example, the high-dynamic-range front-end amplifier is likely to form part of the diplexer elimination circuit configuration. Similarly, the image-reject mixer could be linearised as part of the front-end amplifier (as described earlier for a conventional mixer).

References

[1] Lucero, R., et al., "Design of an LTCC Switch Diplexer Front End Module for GSM/DCS/PCS Applications," Proc. of IEEE International Microwave Symposium, Phoenix, AZ, May 2001.
[2] Hikita, M., et al., "New Low-Distortion Band-Switching Techniques for SAW Antenna Duplexers Used in Ultra-Wide-Band Cellular Phone," IEEE Trans. on Microwave Theory and Techniques, Vol. 52, No. 1, pp. 38–45, January 2001.
[3] Schacherbauer, W., et al., "A Flexible Multiband Front-End for Software Radios Using High IF and Active Interference Cancellation," IEEE International Microwave Symposium, Phoenix, AZ, May 2001, pp. 1085–1088.
[4] Joswick, W., "Uses and Applications of I & Q Networks," Microwaves and RF, 1994.
[5] Hartley, R., "Modulation System," U.S. Patent No. 1,666,206, April 1928.
[6] Kenington, P. B., High-Linearity RF Amplifier Design, Artech House, 2000.
[7] Montemayor, R., and B. Razavi, "A Self-Calibrating 900-MHz CMOS Image-Reject Receiver," Proc. of 26th European Solid-State Circuits Conference, Stockholm, Sweden, September 19–21, 2000, pp. 292–295.
[8] Der, L., and B. Razavi, "A 2-GHz CMOS Image-Reject Receiver with Sign-Sign LMS Calibration," IEEE International Solid-State Circuits Conference Digest of Technical Papers, February 2001, pp. 294–295.
[9] Elmala, M. A. I., and S. H. K. Embabi, "Calibration of Phase and Gain Mismatches in Weaver Image-Reject Receiver," IEEE Journal of Solid-State Circuits, Vol. 39, No. 2, February 2004, pp. 283–289.
[10] Crols, J., and M. Steyaert, "A Single-Chip 900-MHz CMOS Receiver Front-End with a High-Performance, Low-IF Topology," IEEE Journal of Solid-State Circuits, Vol. 30, No. 12, December 1995, pp. 1483–1492.
[11] Hornak, T., "Using Polyphase Filters as Image Attenuators," RF Design, June 2001, pp. 26–34.
[12] Voorman, J. O., "Asymmetric Polyphase Filter," U.S. Patent No. 4,914,408, June 12, 1989.
[13] Nesimoglu, T., et al., "Linearised Mixer Using Frequency Retranslation," IEE Electronics Letters, Vol. 37, No. 25, December 6, 2001, pp. 1493–1494.
[14] Fish, P. J., Electronic Noise and Low Noise Design, New York: Macmillan Press, 1993, Chapter 4.

CHAPTER 5

Flexible Transmitters and PAs

5.1 Introduction

Arguably the most important element of any software defined radio system, whether in a base station or handset, is the linear or linearised transmitter. Receiver systems have always required a high degree of linearity, as they must possess a good strong-signal handling capability, in addition to good low-noise performance. In the case of transmitters, however, a high degree of linearity is a relatively recent requirement, arising predominantly from the widespread adoption of cellular networks. Transmitters used in this type of application require a much greater degree of linearity (i.e., a much lower level of distortion) than even single-sideband (SSB) linear transmitters used in the past (e.g., for military applications). This is due to the near-far effect present in cellular systems ([1], Chapter 1), which results in transmitter non-linearities causing significant interference to users of adjacent channels, thereby limiting system capacity. This limitation affects both uplink and downlink, depending upon which transmitter has the non-linearity problem: if it is the handset transmitter, the uplink capacity of a nearby cell will be impacted; if it is a BTS transmitter problem, the downlink capacity of a nearby cell will be impacted. Even with the high-linearity transmitters available today, many city-centre systems are currently interference limited (in terms of capacity) rather than noise limited. High-linearity transmitters are therefore an enabling technology for many cellular systems, irrespective of the use (or otherwise) of a software defined radio-based architecture in their realisation.

In the case of a generic software defined radio system, a high-linearity transmitter is essential for any design that must be capable of operation on an envelope-varying modulation format. In practice, this means virtually all software defined radio systems must adopt one or other of the high-linearity amplifier or transmitter technologies highlighted in Chapter 6 and covered in detail in [1]. This follows from the fact that most modern modulation formats incorporate some degree of envelope variation, the only significant exception at present being GSM and its derivatives (DCS and PCS).

The basic architecture of a software defined radio transmitter revolves around the creation of a baseband version of the desired RF spectrum, followed by a linear path translating that spectrum to a high-power RF signal. The frequency translation (upconversion) and power amplification processes, involved in creating the high-power RF signal, must therefore fall into one of the following categories:

1. Inherently linear processing. The main mechanism by which this is ensured is typically the use of backoff of all stages from their 1-dB compression points. This has the obvious advantage of simplicity, in terms of design, but is typically highly inefficient (particularly in the case of the power amplifier) and costly, since it is required to significantly overrate all of the components involved.

2. Linearisation of the RF PA. With this option, a linearisation technique, such as those described in Chapter 6, is applied to the RF power amplifier, with inherently linear processing used for the upconversion system. This significantly reduces the size and cost of the transmitter, relative to that of option 1, but still requires the upconverter to be overrated.

3. Linearisation of the complete transmitter. Linearisation techniques exist which are capable of linearising the complete transmitter from its baseband input to its high-power RF output. This form of solution allows the upconversion processing to be more non-linear, hence requiring less backoff, and thus to be potentially cheaper. Again, a number of these techniques will be described in Chapter 6.

4. RF synthesis techniques. The final option relies on the processing of constant-envelope waveforms throughout the upconversion and power amplification hardware, with the desired envelope-varying RF waveform being synthesized by combining these waveforms at the output. Examples of this type of system are described in Chapter 6.

5.2 Differences in PA Requirements for Base Stations and Handsets

A number of differences exist in the requirements placed upon an RF power amplifier, depending upon whether it is to be deployed in an SDR base station or handset (other than the obvious difference in output power requirement). These will depend to a degree upon the range of modulation formats to be supported (i.e., how generic the system is designed to be), as well as potentially dictating the range of transmitter architectures and/or linearisation techniques that are applicable.

5.2.1 Comparison of Requirements

The primary constraints upon a base-station or handset linear PA or transmitter may be summarized as follows:

1. Output power. The output power of a base-station PA is typically much greater than that of a handset, both in terms of overall mean power and on a power-per-carrier basis. In some micro- and pico-cell applications, the power levels of the two may be similar on a power-per-carrier basis, but typically the number of carriers involved dictates that the base-station PA is of a much higher power overall.

2. Size. This is the most obvious difference: a handset will clearly have much less space within which the PA must be accommodated. In cellular base stations, a rack format is still common, although even here size is becoming a major issue, particularly for micro and pico BTS applications.


3. Efficiency. The battery life of a handset is a key selling feature (or, conversely, the battery size/weight for a given talk-time), so efficiency is clearly of major concern here. Efficiency is, however, of arguably similar importance in the case of the base station, due to issues of size, cooling, and running costs. An increase in base-station PA efficiency from, say, 10% to 15%, when multiplied across a complete 3G network, can result in savings of many millions of dollars per annum in electricity costs alone. These savings will be compounded in practice by additional savings in cooling and power supply costs. In addition, removing the need for air conditioning systems and the reduction and/or removal of the need for cooling fans will significantly improve BTS reliability. In a typical BTS, it is the air conditioning unit which is the single least-reliable element, and failure of this subsystem frequently results in failure (due to overheating) of other subsystems (notably the RF power amplifier or transmitter).

5.2.2 Linearisation and Operational Bandwidths

It is useful, at this point, to draw the distinction between the linearising and operational bandwidths of a linear transmitter. An amplifier or transmitter is capable of performing linearisation over a certain channel or multichannel bandwidth, which is determined by the bandwidth of the feedback loop (in, for example, a Cartesian loop transmitter) or the gain/phase matching of the system components (in, for example, feedforward or predistortion amplifiers). This is termed the linearising bandwidth and will obviously depend on parameters such as the gain employed in the feedback loop, as well as its bandwidth. A practical limit on this bandwidth is in the region of a few hundred kilohertz (for a Cartesian feedback transmitter with a high level of loop gain and a standard RF power module) or a few tens of megahertz (for a feedforward or predistortion system). In the case of a feedback system, higher linearisation bandwidths are possible when using very low delay power amplifiers. Examples include the integrated-circuit PAs found in handsets and broadband MMIC amplifiers, both of which can have delays of less than 1 ns.

The operational bandwidth is defined by the circuit components, that is, the bandwidth of the power amplifier chain, the quadrature bandwidth of the local oscillators, and the phase-shift network, as applicable. It is the bandwidth within which the linearising bandwidth can appear while still maintaining acceptable performance; this may be several tens of megahertz for a typical feedback design and may be over 100 MHz for a feedforward or predistortion system. For example, a typical feedback transmitter operating in a mobile radio system utilising DAMPS modulation would have a linearising bandwidth of 30 kHz and an operational bandwidth of 30 MHz. In other words, the 30-kHz linear channel could appear anywhere in the 30-MHz spectrum allocation and, more importantly, could be reallocated simply by reprogramming the channel synthesiser.

Note that the linearising bandwidth is defined here in terms of the wanted channel bandwidth. The lineariser will clearly suppress all significant IMD products caused as a result of the signals appearing within its linearising bandwidth. In the example above, significant IMD products could be generated over a bandwidth of a few hundred kilohertz as a result of the signals falling within the 30-kHz linearising bandwidth. This wider bandwidth is sometimes referred to as the linearisation bandwidth.

5.3 Linear Upconversion Architectures

5.3.1 Analogue Quadrature Upconversion

One of the first upconversion architectures to be employed in a software defined radio transmitter, quadrature analogue upconversion is still widely deployed today. Although both the phasing and Weaver methods of upconversion can be supported, it is the Weaver method which is most commonly employed (see Section 5.3.6), as it requires the minimum possible bandwidth (and hence sampling rate) for the D/A converters. The basic configuration of this technique is shown in Figure 5.1.

Figure 5.1 Quadrature upconversion in a linear transmitter employing an analogue upconverter.

A quadrature signal is generated by the DSP; this fits well with many digital modulation formats, as these are typically generated in a quadrature format. The I and Q channel signals feed D/A converters which, ideally, only need to operate at a sampling rate equal to the channel bandwidth (i.e., half of the Nyquist rate for the channel bandwidth). This is due to the fact that the I and Q signals themselves only occupy half of the channel bandwidth, the full bandwidth being created only at the summed RF output.

The outputs of each of the D/A converters feed anti-alias lowpass filters. If the sampling rate is chosen to be the minimum necessary to fulfill the requirements of Nyquist, then these filters must be ideal brick-wall types. In practice, a sampling rate somewhat higher than that required by Nyquist is usually used and this is increasingly in the form of an interpolating DAC. Such a DAC operates internally at a much higher sampling rate than that of a conventional DAC and includes interpolation filtering, thereby ensuring that the alias products appear far from the wanted channel(s). These products can therefore be filtered easily by conventional analogue lowpass filters.

The quadrature mixers and local oscillator quadrature splitter can be fabricated using discrete mixers and a 90º hybrid splitter. It is, however, more common for these functions to be provided in a single integrated component in most software defined radio applications. Integration into a single component has the advantage of good gain and phase matching (ripple) between the two paths and good temperature stability for these parameters. It also generally allows a lower-power local oscillator signal to be employed and this is advantageous in a handset application in reducing both spurii and power consumption.

The local oscillator is generated by one of two main methods. The first is a frequency multiplication and division architecture, which internally generates twice the local oscillator frequency and then divides this by two, generating quadrature local oscillator signals in the process. The second is the use of a broadband 90º phase-shift filter (a polyphase filter). This latter option has the advantage of being a linear process, thereby generating far less energy at the second harmonic of the local oscillator and hence producing a cleaner output spectrum. It also allows the variations in performance at a range of local oscillator levels to be assessed, and an optimum level to be selected to fulfill a particular requirement (e.g., gain and phase balance, LO feedthrough, and IMD level).

5.3.1.1 Issues and Mitigations for an Analogue Quadrature Upconversion Architecture

I/Q Gain and Phase Imbalance

In any analogue system with two notionally equal-level outputs there will inevitably be a small gain and phase error between them. This error will have two elements: a static (i.e., frequency-invariant) component and a frequency-varying (ripple) component. Both will generate (if uncompensated) an unwanted in-band image signal, falling on top of the wanted signal, but lower in level.

In the case of the static component, it is possible to compensate for this error by predistorting the I and Q signals, either internally within the DSP or externally in analogue hardware. In either case, the form of compensation required is shown in Figure 5.2.

Figure 5.2 Compensation for quadrature upconverter errors (cross-coupled weights K_I1, K_I2, K_Q1 and K_Q2 applied to the quadrature input signals I and Q to produce the compensated outputs I' and Q').

The required compensation may be achieved by modifying the I and Q baseband signals supplied from the DSP in the manner shown in Figure 5.2. A small fraction of the I-channel signal is added to the Q-channel output and, by alteration of the variables K_X1 and K_X2 (where X = I or Q), any amount of gain and phase mismatch may be accommodated. This is directly analogous to the method discussed in Chapter 3 for receiver I/Q mismatch compensation.
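The following Python sketch illustrates the form of correction shown in Figure 5.2: a cross-coupled weighting of the I and Q samples, equivalent to the K coefficients, removes the unwanted image. The imbalance values, and the slightly imperfect estimates used to derive the weights, are assumed for illustration; in practice the weights would be adapted via the feedback path described next.

import numpy as np

# Sketch of static I/Q gain/phase error and its cross-coupled compensation
# (the imbalance values and their estimates below are assumed for illustration).
fs, n = 1_048_576.0, 2 ** 15
t = np.arange(n) / fs
fm = fs / 16                              # single complex baseband test tone

i = np.cos(2 * np.pi * fm * t)            # wanted envelope: exp(+j*2*pi*fm*t)
q = np.sin(2 * np.pi * fm * t)

g, phi = 1.05, np.radians(3.0)            # true modulator gain and phase errors

def modulator(i_in, q_in):
    """Complex envelope produced by an imperfect quadrature upconverter."""
    return (i_in - g * q_in * np.sin(phi)) + 1j * (g * q_in * np.cos(phi))

def compensate(i_in, q_in, g_est, phi_est):
    """Cross-coupled pre-correction of I and Q (the K coefficients of Figure 5.2)."""
    return i_in + q_in * np.tan(phi_est), q_in / (g_est * np.cos(phi_est))

def image_rejection_db(z):
    """Ratio of the wanted tone (+fm) to its image (-fm) in the complex spectrum."""
    spec = np.abs(np.fft.fft(z))
    freqs = np.fft.fftfreq(n, 1 / fs)
    level = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return 20 * np.log10(level(+fm) / level(-fm))

print(f"Uncompensated image rejection: {image_rejection_db(modulator(i, q)):.1f} dB")
# Slightly imperfect estimates, as might be produced by an adaptive feedback loop.
ic, qc = compensate(i, q, g_est=1.049, phi_est=np.radians(2.9))
print(f"Compensated image rejection:   {image_rejection_db(modulator(ic, qc)):.1f} dB")

Even the small residual estimation error assumed here improves the image rejection from roughly 29 dB to around 60 dB, which is why automatic adaptation of the weights is attractive.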

It is possible to automate the compensation process by providing a feedback reference path and thereby generate an error signal to correct the mismatch. Clearly this feedback path must have a more accurate quadrature gain/phase balance than the upconverter, in order to improve its performance. It is unlikely that this will be possible using analogue hardware (if it were, the required design changes should be incorporated into the upconverter), and hence a digital solution is required. The basic form of this solution is shown in Figure 5.3.

A number of alternative solutions have been proposed, based upon the provision of an analogue (and hence imperfect) quadrature demodulator in place of the digital IF and quadrature demodulator shown in Figure 5.3. These are detailed in the literature [2–6]. Most of these methods rely on the provision of a training sequence within the data, or a training signal (e.g., tones). Training sequences are usually undesirable in any transmission system, as they effectively waste data and hence reduce the capacity of the link. They are sometimes provided for other purposes, however, such as to allow the receiver to synchronise, and in this case a dual use of the training sequence can be envisaged. A further drawback of these schemes is that they are typically slow and may also rely upon a near-perfect knowledge of the characteristics of the detector used (often an envelope detector). It is clearly better, if at all possible, to use a digital IF and an ideal digital quadrature demodulator.

The use of a digital IF in the feedback path allows the feedback-path quadrature processing to be performed digitally and hence be perfect from the viewpoint of I/Q errors.

Figure 5.3 Automation of the quadrature error compensation process (the feedback path uses an off-channel local oscillator, an A/D converter and a digital IF of the order of tens of megahertz, with an NCO-based digital quadrature demodulator providing ideal quadrature reference signals).

It is therefore possible to envisage a number of algorithms which can take advantage of this perfect reference in order to compensate for I/Q gain and phase imbalance in the analogue quadrature upconverter. Compensation would take the form of a weighted summation of some of the I-channel output signal into the Q-channel and/or vice versa, as shown in Figure 5.2. This would ideally take place digitally, prior to the DACs, although this does imply a small loss of dynamic range, due to the compensation headroom required.

The A/D converter required in the feedback path of Figure 5.3 must be capable of sampling fast enough to deal with an IF input. It will therefore need to sample at a minimum of twice the sample rate of the I/Q DACs (ignoring interpolation), and usually somewhat higher than this. The distortion performance of this converter (and, indeed, its dynamic range) should not be critical if a suitable algorithm is used, assuming that only I/Q gain and phase errors are to be compensated. More typically, this feedback path is also used for distortion measurement in some form of linearisation technique, and in this instance distortion performance becomes much more critical (see Section 6.3.1.4).

Image suppression is arguably most critical in a multi-carrier system in which the carrier distribution may be non-symmetrical about the centre frequency of the band, or allocation, of interest. A four-carrier WCDMA system is a good example of this, as shown in Figure 5.4. It is possible, although not necessarily desirable, to have a bandwidth occupancy of the type shown in this figure and this leads to an image signal appearing on an unoccupied channel. This image will therefore appear as an adjacent channel signal and hence will be required to meet adjacent channel power levels (e.g., those specified by 3GPP [7, 8]). If, on the other hand, the spectrum allocation was fully occupied (all four channels used), then the image specification would be determined by signal quality requirements, such as signal vector error (SVE) [sometimes known as error vector magnitude (EVM)].

Figure 5.4 Non-symmetrical carrier distribution for a four-carrier WCDMA allocation showing unwanted in-band image products.


The frequency-varying (ripple) component of the gain and phase imbalance will have a similar impact to that described above (i.e., it will also contribute to the unwanted image). In this case, however, the impact will typically be an order of magnitude or more lower than that of the uncompensated static errors (and often much more for an integrated circuit implementation). This is due to the amount of ripple typically being much smaller than the static errors, in most quadrature upconverter designs. The other key difference is that it is much more difficult to compensate (either manually or automatically) for the effects of ripple, and it is usually not necessary (or economic) to do so in most systems.

An alternative or additional solution to providing controlled image suppression (as described earlier) is to detect the overall bandwidth of the input signals (e.g., using an FFT algorithm) and to retune the LO signal(s) to ensure that the spectrum is always symmetrically distributed around the upconverter LO. This then ensures that the in-band image spectrum always falls directly on top of the carriers, thereby relaxing the image suppression requirement. The limitation is now set either by EVM requirements (e.g., −34 to −40 dBc for 2% to 1% EVM) or by the use of asymmetric carriers (e.g., one at full power and one backed-off). The only circumstance in which this frequency symmetry is not valid is in, for example, a four-carrier WCDMA system in which carriers 1, 2, and 4 are turned on and carrier 3 is turned off (as shown in Figure 5.4). This is an uncommon scenario (globally), as most allocations are of 1, 2, or 3 carriers, and hence this may be an acceptable limitation.

LO Leakage Suppression

Local oscillator leakage in the output spectrum occurs at the centre frequency of the upconversion process and hence is typically in-band (as shown in Figure 5.4). It cannot, therefore, be removed by filtering, which is the traditional method of elimination in conventional upconverters. Carrier leakage has three main mechanisms:

1. Imperfect isolation between the LO and RF ports in the mixers. This can occur both in discrete FET or diode-ring based mixer implementations and in IC mixers. It can only be improved by better design of the mixers themselves and is a typical selection parameter for mixers to be used in direct upconversion applications.

2. Unwanted DC generated within the mixer appearing on the IF port and causing leakage of the LO signal through to the RF port. Unwanted DC within the mixer is typically generated by non-linear self-mixing of the IF or local oscillator signals. Self-mixing of either of these signals (i.e., multiplying the signal with itself) will result in one or more harmonics of the signal (depending upon the order of non-linearity involved), plus a baseband component incorporating DC. It is this DC component, when it appears in the IF portion of the mixer, which causes unwanted LO leakage.

3. DC offsets appearing at the mixer input. These can be generated by the IF input circuitry of a quadrature upconverter chip (e.g., DC offsets from an input amplifier) or by DC offsets at the output of the I and Q DACs or filters, which are connected to the IF inputs. These can be eliminated by AC coupling, assuming that a gap exists in the centre of the desired output spectrum (as is the case in Figure 5.4). This is illustrated in Figure 5.5. If a gap does not exist, as is likely to be the case in systems employing an odd number of adjacent carriers, then a calibration technique (described next) is required. Note that the roll-off of the AC-coupling process will be 6 dB/octave or 20 dB/decade (for a single-pole filter, i.e., a coupling capacitor), but that this is based on octaves at baseband. In other words, if the 3-dB frequency is chosen to be 1 kHz, 40 dB of attenuation will, theoretically, be provided at 10 Hz. This will then create a notch at the centre of the RF band with a 3-dB bandwidth of 2 kHz and a 40-dB bandwidth of 20 Hz.

The effect of all three of the sources of carrier leakage is the same and hence a single method may be used for their elimination. Since DC leakage from the DACs into the mixer IF inputs can generate LO leakage, injection of an appropriate amount (and sign) of DC can be used to eliminate it. It will eliminate the effect of all sources of leakage, since they can be viewed as a vector summation and the deliberately injected DC is designed to cancel the resultant (and not the individual sources). The hardware architecture required to achieve this is conveniently the same as that required to remove static gain and phase errors (shown in Figure 5.3). As LO leakage is an error in the forward path of a feedback system which is not replicated in the feedback path (as the feedback demodulator is implemented digitally), it is possible to eliminate it, ideally by DC injection into the mixers from the forward-path DACs. If the loss of dynamic range from these DACs is unacceptable, separate, low-performance DACs may be used, as shown in Figure 5.6.
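A minimal numerical sketch of the DC-injection approach is given below; the leakage vector, test signal and feedback-path noise level are all assumed values, and the simple averaging used to estimate the leakage stands in for the digital feedback demodulator of Figure 5.3.

import numpy as np

# Sketch of LO leakage cancellation by DC injection (all values assumed).
# At complex baseband, carrier leakage appears as a constant (DC) vector added
# to the wanted envelope; averaging the feedback signal estimates that vector,
# and injecting its negative on the I/Q DAC outputs cancels the resultant.
n = 4096
t = np.arange(n)
envelope = np.exp(2j * np.pi * t * 8 / n)          # wanted test envelope (zero mean)
leak = 0.02 * np.exp(1j * np.radians(40.0))        # assumed net leakage vector (about -34 dBc)

def modulator(iq, dc_inject=0.0):
    """Output envelope of the upconverter: signal + leakage + injected DC."""
    return iq + leak + dc_inject

def leakage_dbc(y):
    """Carrier (DC) level relative to the mean signal power."""
    dc = np.mean(y)
    return 10 * np.log10(np.abs(dc) ** 2 / np.mean(np.abs(y - dc) ** 2))

uncorrected = modulator(envelope)
rng = np.random.default_rng(1)
feedback = uncorrected + 0.001 * (rng.normal(size=n) + 1j * rng.normal(size=n))
dc_estimate = np.mean(feedback)                    # measured via the feedback path in practice
corrected = modulator(envelope, dc_inject=-dc_estimate)

print(f"LO leakage before correction:  {leakage_dbc(uncorrected):.1f} dBc")
print(f"LO leakage after DC injection: {leakage_dbc(corrected):.1f} dBc")

Because the injected DC cancels the resultant vector, the same correction removes leakage regardless of which of the three mechanisms produced it.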

Figure 5.5 Incorporation of AC-coupling between the I/Q DAC outputs and the mixer IF inputs in a quadrature upconverter (the response of the AC-coupling filter at the output of both baseband DACs creates a narrow notch at the centre of the allocated band).

Figure 5.6 Use of separate DACs for LO leakage suppression (separate low-frequency I-channel and Q-channel DACs provide the DC compensation).

Out-of-Channel/Band DAC or Upconverter Noise Floor

Many specifications (e.g., 3GPP [7]) place stringent requirements on out-of-channel and out-of-band emissions. In most systems, the close-to-carrier emissions are dominated by distortion in the transmit power amplifier; however, this situation changes far from the wanted carrier (many tens of megahertz, typically). This is illustrated in Figure 5.7.

Far from the wanted carrier, noise from the DAC and the upconverter becomes dominant and results in a relatively flat noise spectrum (unlike IMD). Clearly, if the spectrum being transmitted consists of a number of relatively narrowband, widely spaced carriers (e.g., from a GSM multi-carrier transmitter), then DAC and upconverter noise may well be visible between the carriers and may dominate the noise in this area. The situation shown in Figure 5.7 is primarily illustrative of the case of a single wideband non-constant envelope carrier (e.g., CDMA) or a number of closely spaced carriers (either narrowband or wideband).

Figure 5.7 IMD and noise contributions to out-of-channel emissions.

Mitigation in this case must, in general, revolve around the specification and optimum use of both the DAC and the upconverter. In general, a discrete passive upconverter (e.g., one based on diode-ring mixers) will have a better noise performance than an integrated circuit implementation. It will also, of course, have a generally poorer gain and phase balance and an insertion loss; all of these parameters must be traded off in a given design. The only filtering help which is available (and then only for the DAC noise) is obtained from the anti-alias filters. In systems where the DACs are not used to provide linearity improvement (e.g., by predistortion), these filters may produce a modest amount of assistance. In the case where the DACs are being used to provide linearity improvement, it is likely that the anti-alias filters will need to have a bandwidth such that significant roll-off is not available to help meet the nearer of the out-of-band specifications.

LO Phase Noise

The amount of phase noise present on the upconversion LO is important in achieving both good adjacent channel performance and good EVM performance from any software defined radio transmitter. Poor phase-noise performance may be intrinsic to the design of the oscillator, in which case the only solution is an improved design, or it may be due to modulation of the VCO by the high-power output from the transmitter. This latter problem is most common in systems using an on-frequency VCO, such as the direct upconversion system being considered here. There are two main solutions to this problem:

1. Improved screening of the transmitter output (and any noise/RF it induces on supply lines) from the VCO and its supply lines;

2. Realisation of the on-frequency LO from a mix of two off-frequency LOs. This approach ensures that neither of the VCOs appears on-channel and this greatly reduces the potential for interference from the transmitter output signal.

Note that the clocks feeding the data converters are also critical in this regard and these must have very low jitter. Since these are commonly derived from the same reference as the LO synthesiser(s), it is important to maintain the purity of this reference.
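As a rough, hedged illustration of the link between LO phase noise and modulation quality, the Python sketch below integrates an assumed single-sideband phase-noise profile and applies the small-angle approximation (rms phase jitter in radians approximates the resulting rms EVM contribution); the profile values are examples only.

import numpy as np

# Rough estimate of the EVM contribution of LO phase noise (profile assumed).
offsets_hz = np.array([1e3, 1e4, 1e5, 1e6])            # assumed offset frequencies
l_dbc_hz = np.array([-80.0, -90.0, -105.0, -125.0])    # assumed SSB phase noise profile

f = np.logspace(3, 6, 2000)                            # integrate 1 kHz to 1 MHz offsets
l_lin = 10 ** (np.interp(np.log10(f), np.log10(offsets_hz), l_dbc_hz) / 10)

# Trapezoidal integration over both sidebands gives the phase variance (rad^2).
phase_var = 2.0 * np.sum(0.5 * (l_lin[1:] + l_lin[:-1]) * np.diff(f))
jitter_rad = np.sqrt(phase_var)

print(f"Integrated phase jitter: {np.degrees(jitter_rad):.2f} deg rms")
print(f"Approximate EVM contribution: {100 * jitter_rad:.2f} %")

The integration limits would, in practice, be set by the carrier-tracking and symbol-rate bandwidths of the receiver, so the figure above should be treated only as an order-of-magnitude indication.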


EVM Performance

Transmitter EVM performance is determined by a number of factors:

1. LO leakage;
2. I/Q error or image suppression;
3. LO phase noise.

Assuming that these factors are reduced to the degree required to meet the other system requirements (e.g., adjacent channel performance), they should be easily good enough not to compromise EVM performance in most systems [likewise for peak code domain error (PCDE)].

5.3.2 Quadrature Upconversion with Interpolation

This scheme, shown in Figure 5.8, is similar to the basic quadrature upconversion scheme described earlier (and is identical if interpolating DACs are used in that scheme). Interpolation is important (and arguably essential) in ensuring that the alias products produced by the DACs are sufficiently separated from the wanted channel(s) that they can be attenuated to an acceptable level by the analogue, lowpass, anti-alias filters. These filters will typically have a demanding flatness specification, particularly if they form part of a linearisation scheme, such as predistortion, or an RF synthesis technique, such as LINC [9, 10]. Requiring a broad, flat passband restricts the type of filter which can be employed to, for example, Bessel, Butterworth, or very low ripple Chebyshev designs. All of these designs require a very high order of filter to be employed in order to achieve a reasonable roll-off in the stopband and there are practical restrictions on the maximum order that can be achieved. All of these considerations lead to a desire to place the alias bands as far as is practicable from the wanted channels; oversampling coupled with interpolation is a relatively simple method of achieving this.

Figure 5.8 Quadrature upconversion in a linear transmitter employing an analogue upconverter and digital interpolation filtering.

The technique of interpolation operates by increasing the effective sample rate of the input waveform, by synthesizing additional samples in between the existing samples.

These new samples are based upon a weighted average of the original samples. The DAC now needs to operate at this new sample rate, which may typically be 4 or 8 times the original (e.g., Nyquist) sample rate. This clearly places far greater demands upon the DAC, but it does allow the bulk of the signal processing (i.e., everything prior to the interpolation process) to operate at the minimum possible sample rate.

Figure 5.9 illustrates this process in the time domain for a single sinusoidal input signal at an original sampling rate of 5 samples per cycle [Figure 5.9(a), i.e., well within the Nyquist limit] and an oversampling rate of 4 [Figure 5.9(b)]. It can be seen that each of the original samples has been replaced by 4 new samples, with the sample time consequently reduced to one quarter of that of the original sampling process. The effect of this on the DAC output can be seen in Figure 5.9(c, d) for the original and oversampled rates, respectively. It is clear that in Figure 5.9(d) a much more recognizable facsimile of a sinewave is generated and it follows that this will therefore result in a cleaner output spectrum.

Figure 5.9 Time-domain view of the effect of interpolation on a sinewave input signal: (a) 5 samples per cycle; (b) 20 samples per cycle; (c) unfiltered DAC output from (a); and (d) unfiltered DAC output from (b).

Figure 5.10 illustrates the effect of the various processes shown in Figure 5.9, in the frequency domain. In Figure 5.10(a), the complete spectrum of the non-interpolated DAC output can be seen (up to the fourth Nyquist zone). The anti-alias filter required in this case would need to have an adequate roll-off by the first image product, which is a tight specification.

Figure 5.10 Frequency-domain view of the effect of interpolation on a sinewave input signal: sampled output spectrum (a) before and (b) after interpolation; and DAC output (c) from (a) and (d) from (b), showing the effect of the sin(x)/x response.

If interpolation is now employed, the situation improves to that shown in Figure 5.10(b); the interpolation filter is now able to greatly attenuate the images found in the first, second, and third Nyquist zones, leaving only that present in the fourth zone. Given the large frequency separation between this product and the wanted fundamental, it is a fairly straightforward matter to design an anti-alias filter to eliminate it. The job of this anti-alias filter is made even simpler by virtue of the effect of the sin(x)/x response of the DAC itself [illustrated in Figure 5.10(c) for the non-interpolated DAC and Figure 5.10(d) for the interpolated DAC]. In both cases, the DAC response helps to attenuate the alias products; however, in the non-interpolated case, the attenuation of the first image is small. This means that a tight anti-alias filter response is still required and little advantage is gained from the DAC roll-off. The use of interpolation, however, allows the DAC response to have a beneficial effect, with a useful level of attenuation being provided in the fourth Nyquist zone.

The internal structure of an interpolating DAC is shown in Figure 5.11. The input data is fed to a latch that holds the data for access by the interpolation routine and filter. The interpolation processing is clocked at the relevant multiple of the input data rate (i.e., four times, in the example given earlier) and this is also the clock rate for the DAC itself. The internal data rate between the interpolation process and the DAC core is then very high; however, as this takes place on-chip, it is not an issue.

Figure 5.11 Structure of an interpolating DAC.

The interface data rate between the DSP and the (interpolating) DAC chip now returns to the minimum possible rate (usually Nyquist plus an implementation margin). Typical DAC devices are capable of good performance at up to 80% of the Nyquist limit. Clock multiplication may be provided on-chip in the form of a frequency multiplier or phase-locked loop (PLL), or it may need to be provided externally by the user.

It is, of course, possible to implement the interpolation processing externally, within the signal processing which generates the input data. It is then necessary, however, to utilise a DAC with a very fast input interface [e.g., low-voltage differential signalling (LVDS)]. Obtaining a suitable DAC, with an appropriate sample-rate capability, may be both costly and difficult and this may make a discrete implementation of the technique unattractive. The main problem, at present, with providing high-speed DACs is generally not in the DAC core itself; it is more typically the digital interface that proves to be the bottleneck.

Interpolating DACs are usually more expensive than their non-interpolating counterparts (due to the large silicon area occupied by the interpolation filter). It may therefore be cheaper to use a non-interpolating DAC and implement the interpolation filter on the DSP device (space permitting) or as part of an ASIC, if one is being used for the signal processing operations.
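A minimal Python sketch of the interpolation process itself (zero-stuffing by a factor of 4 followed by a lowpass interpolation filter) is given below; the windowed-sinc filter and all rates are chosen purely for illustration and do not describe any particular device.

import numpy as np

# Sketch of x4 interpolation: zero-stuffing to the higher rate followed by a
# lowpass interpolation filter (rates and tone frequency are assumed values).
fs_in, ratio = 10e6, 4
fs_out = ratio * fs_in
n = 4096
x = np.cos(2 * np.pi * 1e6 * np.arange(n) / fs_in)     # 1-MHz tone at the original rate

up = np.zeros(n * ratio)                               # zero-stuffing creates images of
up[::ratio] = x                                        # the tone around k*fs_in

taps = 127                                             # windowed-sinc lowpass with its
k = np.arange(taps) - (taps - 1) / 2                   # cut-off at the original Nyquist
h = np.sinc(k / ratio) * np.hamming(taps)
y = np.convolve(up, h, mode="same")

spec = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(len(y)))) + 1e-12)
freqs = np.fft.rfftfreq(len(y), 1 / fs_out)
wanted = spec[np.argmin(np.abs(freqs - 1e6))]
for f_img in (fs_in - 1e6, fs_in + 1e6):               # first pair of alias images
    image = spec[np.argmin(np.abs(freqs - f_img))]
    print(f"Image at {f_img / 1e6:4.1f} MHz: {image - wanted:6.1f} dB relative to the tone")

With this illustrative filter the first image pair is suppressed by roughly 50 dB, leaving only the far-removed higher-zone products for the analogue anti-alias filter to deal with.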

5.3.3 Interpolated Bandpass Upconversion

Interpolated bandpass upconversion (Figure 5.12) is similar to that described above, except that the interpolation filter now selects one of the higher images. It can therefore be viewed as analogous to alias downconversion when using an ADC. This architecture has the advantage that local oscillator leakage is now no longer a part of the wanted output spectrum and hence can be eliminated more easily (e.g., using an analogue highpass filter). It does, however, place greater demands upon the analogue performance of the DAC, and the DAC's sin(x)/x response may introduce an unacceptable amplitude slope across the bandwidth of interest. This is unlikely to be a problem for a single narrowband carrier, but may be more significant for a multi-carrier CDMA transmitter.

Figure 5.12 Analogue quadrature upconversion employing bandpass interpolation.
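The amplitude slope referred to above follows directly from the DAC's zero-order-hold sin(x)/x response; the short Python sketch below evaluates it across an example band taken from the second Nyquist zone (the sample rate and band edges are assumed values).

import numpy as np

# Zero-order-hold (sin x)/x response of a DAC, evaluated across a band selected
# from the second Nyquist zone (sample rate and band edges are example values).
fs = 200e6                                  # assumed DAC update rate

def zoh_gain_db(f):
    """Amplitude response of the DAC's zero-order hold at frequency f (Hz)."""
    return 20 * np.log10(np.abs(np.sinc(f / fs)))

band = np.linspace(120e6, 140e6, 5)         # image band selected by the bandpass filter
for f in band:
    print(f"{f / 1e6:6.1f} MHz: {zoh_gain_db(f):6.2f} dB")

slope = zoh_gain_db(band[-1]) - zoh_gain_db(band[0])
print(f"Amplitude slope across the band: {slope:.2f} dB")

In this example the droop across a 20-MHz allocation is of the order of a few decibels, which is negligible for a single narrow carrier but would need equalising in a multi-carrier CDMA application.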

5.3.4 Digital IF Upconversion

It is now possible, with modern DACs, to obtain an output at a useable IF frequency (i.e., many tens of megahertz). This brings with it the option of performing the required quadrature upconversion processing in the digital domain, thereby achieving near-perfect image rejection and local oscillator suppression. The architecture required to do this is shown in Figure 5.13.

The implementation shown in Figure 5.13 still employs interpolation filtering and this provides the same benefits as previously obtained with the analogue upconversion architecture of Figure 5.8. The outputs of the interpolation processes now feed a digital quadrature upconverter, which utilises a numerically controlled oscillator (NCO) as the local oscillator signal. The use of an NCO permits frequency hopping to take place digitally, if desired, and this can typically provide a much shorter hop time than with an analogue PLL. If this approach is chosen, however, it is important to note that the analogue IF filter must now be widened to cover the whole bandwidth over which hopping may occur (typically the whole frequency allocation). It is therefore no longer able to remove close-in DAC spurs and these must be sufficiently low to meet the required system specification unaided.
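A minimal Python sketch of the digital quadrature upconversion stage is shown below; the sample rate, IF and test signal are assumed values, and the NCO is represented simply by a sampled phase ramp.

import numpy as np

# Minimal sketch of digital quadrature upconversion to an IF using an NCO.
fs = 100e6                                  # digital IF / DAC sample rate (assumed)
f_if = 20e6                                 # digital IF set by the NCO (assumed)
n = 8192
t = np.arange(n) / fs

# Example complex baseband signal: a 1-MHz offset tone, standing in for the
# interpolated I/Q channel data.
i = np.cos(2 * np.pi * 1e6 * t)
q = np.sin(2 * np.pi * 1e6 * t)

nco = 2 * np.pi * f_if * t                  # NCO phase ramp
s_if = i * np.cos(nco) - q * np.sin(nco)    # real-valued digital IF, fed to the DAC

spec = 20 * np.log10(np.abs(np.fft.rfft(s_if * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)
print(f"Peak output at {freqs[np.argmax(spec)] / 1e6:.2f} MHz (expected 21 MHz: IF + offset)")
# Because the quadrature processing is digital, the image at 19 MHz and the LO
# term at 20 MHz are absent apart from numerical noise.

This is the near-perfect image rejection and LO suppression referred to above: any residuals are set by arithmetic precision rather than analogue matching.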

Figure 5.13 Transmitter architecture employing a digital IF output.

The output of the digital upconverter feeds an IF output DAC which, if oversampled, may be operating at a rate of many hundreds of megahertz. The output of this DAC contains the wanted band, plus a range of harmonic and alias products. These are typically removed using a bandpass filter (e.g., a SAW filter); however, some simple analogue lowpass filtering to eliminate some of the higher harmonics may also be beneficial (the higher-frequency stopband attenuation of some filters can be poor). Once the wanted IF band has been selected by this first IF filter, one or more stages of conventional frequency upconversion may then be employed to translate the signal to its final RF allocation.

This architecture has the advantage that only a single DAC is needed, although the requirement for one or more IF filters (with typically a tight specification) will frequently more than offset the cost saving of a second DAC. In addition, the performance of most DACs with an IF output will be poorer than with a baseband output, thus making a given specification more difficult to achieve. Comparing the performance of a typical DAC when utilising a 5-MHz baseband signal and a 20-MHz IF signal indicates that a 5- to 10-dB reduction in spurious-free dynamic range can result. This assumes that the DAC is designed to operate at a suitable sample rate, such that the 20-MHz IF can be accommodated with a reasonable, but not an excessive, margin.

5.3.5 Multi-Carrier Upconversion

Figure 5.14 shows a logical extension of the digital IF transmitter architecture detailed above. In this case, multiple carriers (three are shown) are separately upconverted, each by its own NCO. These are then summed digitally, prior to digital-to-analogue conversion. Since this is now a multi-carrier signal, the peak-to-mean ratio of the signal is likely to have increased, unless steps are taken to counteract this effect (e.g., prefilter and/or postfilter clipping, carrier phasing, and crest-factor reduction). The DAC must possess a sufficient dynamic range to cope with this and hence this architecture is generally the most demanding in terms of DAC performance.

The circuitry following the DAC is similar to that described earlier for a digital IF based transmitter. The main differences in this case are:

1. The filtering must now be sufficiently wide to cope with a number of not necessarily adjacent carriers, while still having a similar roll-off (typically) to that of the single-carrier system. This therefore places greater demands upon the filter design.

2. The dynamic range of the active circuitry (mixers, amplifier, and so forth) must be greater in order to cope with the greater peak-to-mean ratio of the multi-carrier signal.

The NCOs may be used for frequency hopping, as outlined earlier for the digital IF architecture. In this case, however, there is no disadvantage in terms of the analogue IF filter, as this must already cover the whole band of interest and hence the DAC spurs must be low enough to meet the system requirement (or this architecture cannot be used).
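The peak-to-mean growth referred to above can be illustrated numerically. The Python sketch below sums three digitally upconverted carriers carrying simple QPSK-like modulation (all parameters assumed) and compares the resulting peak-to-mean ratio with that of a single carrier.

import numpy as np

# Sketch of the peak-to-mean (PAPR) growth when several digitally upconverted
# carriers are summed before the IF DAC (carrier count and spacing assumed).
fs = 100e6
n = 1 << 16
t = np.arange(n) / fs
rng = np.random.default_rng(2)

def papr_db(x):
    """Peak-to-mean power ratio of a real sampled waveform, in decibels."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

carriers = [5e6, 10e6, 15e6]                 # NCO frequencies for each carrier (assumed)
total = np.zeros(n)
for idx, f in enumerate(carriers):
    # Stand-in for a modulated carrier: random QPSK symbols with rectangular pulses.
    i = rng.choice([-1.0, 1.0], n // 64).repeat(64)
    q = rng.choice([-1.0, 1.0], n // 64).repeat(64)
    single = i * np.cos(2 * np.pi * f * t) - q * np.sin(2 * np.pi * f * t)
    if idx == 0:
        print(f"PAPR of one carrier:              {papr_db(single):.1f} dB")
    total += single

print(f"PAPR of {len(carriers)} summed carriers:         {papr_db(total):.1f} dB")
# The DAC and the following analogue stages must accommodate the larger peaks
# unless crest-factor reduction is applied.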

Figure 5.14 Multi-carrier transmitter architecture employing a digital IF output (three-carrier version shown).

Standard integrated circuits are available which implement the digital upconversion and NCO functionality of the above system, typically for four carriers. Alternatively, the whole of the digital system, including modulation generation, coding, and framing, can be implemented in a single application-specific signal processor (ASSP).

5.3.6 Weaver Upconversion

Weaver upconversion is implicitly incorporated in the above quadrature-based transmitter architectures. It was originally envisaged as a method of SSB generation and was first proposed in 1956 [11] as a method of generating SSB without the requirement for a narrowband crystal filter. It is an extension of the phasing method of SSB generation, which suffers from the fact that the remaining unsuppressed image band appears adjacent to the wanted frequency band. Although, in theory, this image should not exist, imperfect system components mean that it cannot entirely be eliminated. This is a particular problem in a mobile radio environment, as the adjacent channel performance must be very good; hence the need for highly linear amplification and an alternative method of SSB generation. The principal advantage of the Weaver method is that the image channel falls within the band of the wanted channel and hence the suppression specification is greatly relaxed.

A Weaver method SSB generator is shown in Figure 5.15. It is a direct-conversion architecture, although it can be (and frequently is) used to convert directly to an appropriate IF, prior to conventional analogue upconversion to the required frequency band. A significant advantage of the technique, when used in a software defined radio application, is that it allows many aspects of the system to be implemented in a DSP device, and in particular, those areas which would otherwise be difficult to realize in analogue hardware (e.g., the generation of the quadrature signal components, S_I1 and S_Q1, in Figure 5.15).

Figure 5.15 Weaver method SSB generator (quadrature paths S_I1, S_I2, S_I3 and S_Q1, S_Q2, S_Q3; first quadrature LO cos(ω_0 t)/sin(ω_0 t), second quadrature LO cos(ω_C t)/sin(ω_C t), with the two paths summed to form S_out).

The operation of a Weaver generator may be described, with reference to Figure 5.15, as follows. The baseband input signal is restricted to a bandwidth, B, with a band centre frequency, f_0, and a lower limit, f_L, as shown in Figure 5.16. The input band may be considered as a summation of sinusoids:

s_in(t) = Σ (n=1 to N) E_n cos(ω_n t + φ_n)   (5.1)

The baseband input signal is mixed with a quadrature oscillator operating at half of the required modulation bandwidth. Two baseband-frequency quadrature paths are thus formed, where the baseband spectrum in each has been folded on top of itself, occupying half of the original bandwidth. In an SDR implementation, particularly of a digital modulation scheme, these two signals may well result directly from the modulation process, thereby removing the need for the left-hand half of Figure 5.15. In either case, the resulting signals are:

$s_{I1}(t) = \sum_{n=1}^{N} E_n \cos[(\omega_n - \omega_0)t + \phi_n] + \sum_{n=1}^{N} E_n \cos[(\omega_n + \omega_0)t + \phi_n]$  (5.2)

and

$s_{Q1}(t) = -\sum_{n=1}^{N} E_n \sin[(\omega_n - \omega_0)t + \phi_n] + \sum_{n=1}^{N} E_n \sin[(\omega_n + \omega_0)t + \phi_n]$  (5.3)

The resulting spectrum appears as shown in Figure 5.17; note the gap between the top of the required baseband spectrum and the bottom of the mixer products band. This provides a convenient region for the lowpass filter roll-off and will be 600 Hz wide for a 300-Hz–3.4-kHz audio input spectrum. The resulting filtered signals will be:

$s_{I2}(t) = \sum_{n=1}^{N} E_n \cos[(\omega_n - \omega_0)t + \phi_n]$  (5.4)

and

$s_{Q2}(t) = -\sum_{n=1}^{N} E_n \sin[(\omega_n - \omega_0)t + \phi_n]$  (5.5)

Figure 5.16 Baseband input signal spectrum.

Figure 5.17 Signal spectrum at the output of the first balanced modulators.

Each path is then upconverted to the final channel frequency by a quadrature local oscillator operating at the centre of the channel. This is not what would be described as the carrier frequency in a conventional filter-based SSB system; however, in most modern mobile communications systems, the carrier frequency defines the centre of the wanted channel and not necessarily the frequency of the upconverting local oscillator(s) (although, given the widespread use of quadrature upconversion techniques, the term carrier frequency is now almost synonymous with the centre of the wanted channel). The resulting RF output signals are therefore:

$s_{I3}(t) = \sum_{n=1}^{N} \frac{E_n}{2} \cos[(\omega_c + \omega_n - \omega_0)t + \phi_n] + \sum_{n=1}^{N} \frac{E_n}{2} \cos[(\omega_c - \omega_n + \omega_0)t - \phi_n]$  (5.6)

and

$s_{Q3}(t) = \sum_{n=1}^{N} \frac{E_n}{2} \cos[(\omega_c + \omega_n - \omega_0)t + \phi_n] - \sum_{n=1}^{N} \frac{E_n}{2} \cos[(\omega_c - \omega_n + \omega_0)t - \phi_n]$  (5.7)

The two paths are then summed to produce an SSB channel in which the image from the final upconversion process appears in-band and the suppression of which is mainly governed by the quadrature accuracy of the oscillator and the leakages involved in the RF summing junction:

$s_{out} = s_{I3} + s_{Q3}$  (5.8)

Hence:

$s_{out} = \sum_{n=1}^{N} E_n \cos[(\omega_c + \omega_n - \omega_0)t + \phi_n]$  (5.9)
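The full Weaver chain of (5.1) through (5.9) is easy to verify numerically. The sketch below (not from the text; the tone frequencies, sample rate, and filter order are arbitrary illustrative choices) builds a two-tone input, performs the first quadrature mix at f0, lowpass filters to obtain sI2 and sQ2, remixes at the channel centre fc, and sums the paths; with ideal arithmetic the in-band image is suppressed down to the numerical and window-leakage floor, confirming (5.9):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000                 # simulation sample rate (Hz), illustrative
t = np.arange(0, 0.05, 1/fs)

# Baseband input per (5.1): two tones inside a 300 Hz - 3.4 kHz band
tones = [(1.0, 800.0, 0.3), (0.7, 2500.0, 1.1)]      # (E_n, f_n, phi_n)
s_in = sum(E*np.cos(2*np.pi*f*t + p) for E, f, p in tones)

f0 = (300 + 3400)/2          # first quadrature LO at the band centre (Hz)
fc = 20_000.0                # channel-centre LO, kept low for simulation

# First quadrature mix, as in (5.2)/(5.3) (the 1/2 factors are retained here)
sI1 = s_in*np.cos(2*np.pi*f0*t)
sQ1 = s_in*np.sin(2*np.pi*f0*t)

# Lowpass filters retain only the difference terms, (5.4)/(5.5)
b, a = butter(6, 1.2*(3400 - 300)/2, fs=fs)
sI2, sQ2 = filtfilt(b, a, sI1), filtfilt(b, a, sQ1)

# Second quadrature mix and summation, (5.6)-(5.9)
s_out = sI2*np.cos(2*np.pi*fc*t) + sQ2*np.sin(2*np.pi*fc*t)

# Each tone should appear at fc + (f_n - f0); its image would sit at fc - (f_n - f0)
spec = np.abs(np.fft.rfft(s_out*np.hanning(len(t))))**2
freqs = np.fft.rfftfreq(len(t), 1/fs)
power_near = lambda f, bw=60: spec[np.abs(freqs - f) < bw].sum()

for _, f_n, _ in tones:
    wanted = power_near(fc + (f_n - f0))
    image = power_near(fc - (f_n - f0))
    print(f"{f_n:6.0f} Hz tone: in-band image suppression "
          f"{10*np.log10(wanted/max(image, 1e-300)):6.1f} dB")
```

In a hardware implementation the reported suppression would instead be limited by the quadrature accuracy and summing-junction leakage discussed above.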


The two quadrature baseband frequency paths, created by use of the Weaver method, lend themselves rather neatly to application in a Cartesian loop transmitter [12] or a digital predistortion transmitter, or indeed any quadrature-based transmitter architecture (e.g., some forms of the LINC technique, certain sigma-delta techniques, and so forth). All of these options are discussed in Chapter 6.

5.3.7 Non-Ideal Performance of High-Speed DACs

Many of the imperfections encountered in a high-speed DAC are analogous to those already discussed in Chapter 3, relating to high-speed ADCs. These include INL and DNL errors, finite spurious-free dynamic range, SINAD and signal-to-noise ratios, and bad or missing codes. This section will briefly discuss the additional issues relating to high-speed DACs and their impact upon transmitter performance; it will not duplicate the discussion of Chapter 3 and the reader is referred to that chapter for details on some of the other high-speed data converter issues.

5.3.7.1 Distortion Mechanisms Which Depend upon Output Signal Frequency

Many DAC non-linearities are not dependent upon the frequency of the DAC output signal. These include INL, DNL, and bad or missing codes. These errors are typically small in a modern converter and should not impact significantly upon the converter's spectral properties. There are three main distortion mechanisms, however, which do depend upon the output frequency required from the DAC:

1. Amplifier slew rate: caused by the finite bandwidth/slew rate of the output amplifier(s);
2. Non-linear capacitance: occurring within the DAC IC itself;
3. Open-loop non-linearities: these begin to manifest themselves as loop gain rolls off at higher frequencies and hence the effect of feedback reduces.

All three of these mechanisms (along with the other effects discussed next) contribute to the reduction in performance experienced with high-speed converters, as their frequency limit of operation is reached.

5.3.7.2 Step Response of a High-Frequency DAC

Figure 5.18 shows the impact of a step change in the required DAC output voltage upon the actual voltage appearing at the DAC output. The dashed line shows the ideal output change which the DAC should produce as a result of a step change in the input code (e.g., from half to full scale), while the solid line shows the form of the actual output which will be seen in practice.

Figure 5.18 Response of a practical high-speed DAC to a step change in the required output voltage.

Examining the practical DAC response in detail highlights a number of imperfections. These break down into static errors and dynamic errors. The main static error shown in Figure 5.18 is the differential non-linearity (DNL) error; this is the difference between the desired steady-state output voltage, for a given input code, and the actual voltage generated by the DAC. There are also a number of dynamic errors, usually resulting from large step changes and/or short update periods:

1. Glitch impulses. These occur at the start of the transition from the previous code's output voltage to that of the new code. This typically results in a small reduction in the DAC's output voltage at the start of the transition from a lower to a higher output voltage (and conversely for a higher-to-lower voltage transition).
2. Non-linear slewing. The transition from the previous output voltage level to the new output voltage level will not occur instantaneously, nor will it (necessarily) occur linearly. Clearly the impact of this effect will increase with increasing sample rate and it is one of the reasons behind the lower performance seen with most high-speed converters, as their sample rate limit is approached and reached.
3. Overshoot and ringing. As in any system operating at high frequencies with (unavoidable) reactive elements, an amount of overshoot and/or ringing is inevitable. Again this will become more pronounced, in terms of its overall effect on the output spectrum, as the converter's sample rate limit is approached and reached.
4. Clock or data feedthrough. This effect is typically most evident after the output voltage has begun to settle and can be seen as a small glitch in the steady-state response of the DAC. It can result from many causes, both internal to the DAC and also in the external circuitry. Clearly, care should be exercised in the design of the latter, to ensure that such glitches are minimised or eliminated.

5.3.8 Linear Transmitter Utilising an RF DAC

The concept of using a DAC that operates at the required final carrier frequency is clearly attractive for both handset and base-station applications. In the former case, power consumption is likely to be a major issue for some time to come;


however, the latter area is likely to be an early adopter of this type of system, as and when it becomes commercially viable. RF DACs are now beginning to be discussed, with low-resolution, high-speed conventional converters appearing in some direct-to-carrier (RF) applications and higher speed converters appearing in the literature (e.g., [13]). This section discusses a promising alternative to the conventional type of high-speed DAC for RF applications. It is not yet a commercial product (as of the time of this writing), but does offer a promising route to achieving carrier-frequency synthesis of an RF waveform.

5.3.8.1 Drawbacks of Existing DACs

The performance of existing high-speed DACs is limited by distortions present at the data switching transitions (Figure 5.19) and these distortions impact upon the frequency-domain performance of the device. The three main causes of this are [14]:

1. Intersymbol interference (ISI);
2. Imperfect timing synchronization;
3. Clock jitter.

It is possible to solve the first problem (ISI) by using a return-to-zero (RZ) DAC architecture. This type of DAC effectively removes the sample-to-sample memory of the converter, thereby ensuring that the switching data transients are more closely related to the input data stream. It does, however, require the DAC to produce larger steps for the same output energy and this increases its sensitivity to clock jitter [15, 16].

5.3.8.2 Structure and Operation of an RF DAC

The concept of utilising a sinusoidally shaped pulse as the output for each DAC code has been proposed, in order to solve the ISI problem mentioned earlier, as well as to alleviate the jitter problem [17]. This mechanism solves the former problem, since the waveform chosen is an RZ pulse, with the sinusoidal output and the DAC sampling clock being aligned such that the DAC is switched in the regions of the sinusoidal pulse where it falls to zero. If the sinewave used to create the pulse is perfectly locked to the data clock, the sensitivity of the resulting system to clock jitter is substantially removed. This occurs since the sinewave pulse has zero value and zero gradient at the switching points.

Figure 5.19 Response of a conventional DAC to an impulse, showing switching-edge distortion.

Figure 5.20 Basic structure of an RF DAC [14].

The RF DAC concept, shown in Figure 5.20, builds on this idea by utilising multiple oscillatory periods (or pulses) within each DAC output code. This idea retains the above properties with respect to ISI and jitter immunity, but adds the advantage that upconversion is effectively performed at the same time, thereby creating an output at a desired RF carrier frequency. The RF DAC can therefore be thought of as equivalent to a conventional DAC followed by a mixer and LO-based upconverter.

An alternative approach, utilising a conventional DAC, is to employ one of the higher Nyquist zones at the output of the DAC, in place of the first Nyquist zone. This can be achieved simply by placing a bandpass filter, designed for the required centre frequency, after the DAC, thereby isolating the desired Nyquist zone contained in the DAC output. This approach does, however, have the significant disadvantage that the sin(x)/x response of the DAC means that the output level of this higher Nyquist zone signal will be significantly below that of the main lobe and also significantly lower than an equivalent output from the RF DAC discussed here.

The RF DAC employs a harmonic of the DAC clock as the oscillatory waveform used to generate the DAC output pulses (fOSC = nfS). Examples of the resulting form of the DAC output pulse are shown in Figure 5.21, for n = 2 and n = 3. It can be seen from this figure that DAC switching can still occur in regions of the waveform where it drops to zero and has a gradient of zero. It therefore has the same ISI and jitter advantages described above for a single sinusoidal pulse. Comparison of Figures 5.21(a) and 5.21(b) shows that it is possible to centre the DAC's response around any desired frequency, simply by changing the oscillator frequency, fOSC. It must, of course, be ensured that the resulting sample rate is an appropriate compromise between sampling the desired signal bandwidth (as a minimum) and not over-designing the system with regard to sample rate, as this is expensive in both cost and power consumed.

The key advantages of the RF DAC over the more conventional DAC-plus-mixer approach are:

1. The non-return-to-zero DACs used in a conventional DAC-plus-mixer architecture are prone to the ISI and clock-jitter problems discussed earlier.
2. The conventional architecture is also prone to phase noise on the upconversion local oscillator. Although a similar oscillatory waveform is required for the RF DAC, the resultant system is still capable of better overall performance [14, 18].
3. An RF DAC offers power consumption, hardware complexity, and noise budget savings, as it does not require local oscillator, mixer, or additional filter components, nor the current-to-voltage transformations required to drive such devices.
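As a rough worked illustration of the sin(x)/x penalty noted above for the higher-Nyquist-zone alternative (the example frequencies are illustrative assumptions, not values from the text): the DAC's zero-order-hold response is

$|H(f)| = \left|\dfrac{\sin(\pi f/f_S)}{\pi f/f_S}\right|$

so a wanted signal placed at $0.2 f_S$ emerges from the first Nyquist zone only $20\log_{10}[\mathrm{sinc}(0.2)] \approx -0.6$ dB down, whereas its second-Nyquist-zone image at $0.8 f_S$ emerges at $20\log_{10}[\mathrm{sinc}(0.8)] \approx -12.6$ dB, roughly 12 dB below the level that would be available from the first zone (or from an RF DAC output centred on the same frequency).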

Figure 5.21 Example of using multiple oscillatory pulses for each DAC output code in order to create an RF DAC: (a) fOSC = 2fS, (b) fOSC = 3fS, and (c) fOSC = 2fS.
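A minimal numerical sketch of the pulse construction described above (not from the text; the code rate, harmonic number, and test tone are illustrative assumptions). Each input code scales a burst of n complete cycles of the oscillator at fOSC = n·fS, so the code stream is converted and simultaneously translated to a carrier near n·fS:

```python
import numpy as np

fs_data = 1.0e6          # DAC update (code) rate, illustrative
n = 3                    # harmonic used for the pulse, fOSC = n*fs_data
osr = 64                 # simulation points per DAC code period

# A slowly varying test code stream (a low-frequency baseband tone)
codes = np.cos(2*np.pi*25e3*np.arange(2000)/fs_data)

# Each code produces n full sinusoid cycles; the pulse is zero at the code
# boundaries, so the output is insensitive to exactly when the code switches.
pulse = np.sin(2*np.pi*n*np.arange(osr)/osr)
rf_out = (codes[:, None]*pulse[None, :]).ravel()

# Spectrum: the baseband tone appears translated to n*fs_data +/- 25 kHz
f_sim = fs_data*osr
spec = np.abs(np.fft.rfft(rf_out*np.hanning(rf_out.size)))
freqs = np.fft.rfftfreq(rf_out.size, 1/f_sim)
print(f"dominant output component at {freqs[np.argmax(spec)]/1e6:.3f} MHz "
      f"(expected at {n*fs_data/1e6:.3f} MHz +/- 0.025 MHz)")
```

This is, of course, only a behavioural illustration of the upconversion property; it does not model the current-steering circuitry or the phase locking between the oscillator and the data clock discussed below.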

The main drawback is that the DAC needs to switch synchronously with the minima in the oscillatory pulse waveform; some form of phase-locked loop is therefore required. If this is not provided or performs poorly, the RF DAC will still operate and create the desired RF output signal. It will, however, suffer from both clock jitter and ISI sensitivity, as would a conventional DAC.

5.3.8.3 Transmitter Architecture Using an RF DAC

The form of transmitter architecture required to exploit the benefits of an RF DAC is very simple, as shown in Figure 5.22. Digital quadrature upconversion is performed to generate a real output signal (to eliminate the requirement to use two RF DACs), with this signal being provided at a low digital IF. The RF DAC then performs both the digital-to-analogue conversion and upconversion functions of a conventional transmitter and all that remains in the analogue domain is a bandpass filter and a PA, both operating at the required output frequency.

Figure 5.22 Linear transmitter architecture employing an RF DAC.

If it is desired to keep DAC sample rates to an absolute minimum, which may be advantageous in narrowband and/or single-carrier applications, the dual-DAC architecture of Figure 5.23 may be used. In this case, the RF DACs form an integral part of the quadrature upconversion process and therefore operate with digital baseband input signals. Their required sample rate is then, at worst, half that of the DAC required in Figure 5.22, and typically much lower. This saves both cost and power in the DAC, although as two are now required, whether an overall saving would result is not necessarily clear.

Figure 5.23 Linear transmitter architecture employing dual RF DACs.

5.3.9 Use of Frequency Multiplication in a Linear Upconverter

Frequency multiplication is a very low-cost upconversion technique and is in widespread use in very low-cost consumer handheld transmitters. It is traditionally used only with constant-envelope modulation formats (typically analogue FM) and is indeed, by its very nature, a highly non-linear technique. It has been suggested in [19], however, that the technique is capable of linearisation, to a limited degree, by means of digital predistortion—it is therefore potentially appropriate for use in linear SDR applications, although at present the cost of the digital processing required for the predistorter will far outweigh the lower cost of the upconverter. In the future, as the cost of the required digital processing reduces, it may become a worthwhile technique in low-cost applications. It may also be an appropriate way of realising a quasi-linear transmitter in very high frequency (millimeter-wave) applications, where frequency multiplication is the most sensible, or only, option for achieving upconversion.


The basic format of an odd-order frequency multiplier is shown in Figure 5.24. The modulated frequency source, Vs (at a frequency f0), is fed to a pair of anti-parallel diodes (in this case, as a series element) via a matching network. The appropriate nth harmonic is selected by the output matching network, which is resonant at nf0. The resulting harmonic is then fed to the load (e.g., an antenna). Detailed design information on frequency multipliers may be found in [20].

As an example, the schematic for a Schottky diode based 820-MHz input, 2.46-GHz output frequency tripler is shown in Figure 5.25. It utilises the Agilent HSMS-2852 diodes and these appear as shunt elements, matched by two discrete matching networks. The reported results for this network [19], when using predistortion, indicate that an improvement from 4 dBc to 34 dBc is possible for a single-channel IS-95 CDMA signal, with an EVM improvement from close to 100% to around 7.3%. While these are clearly not spectacular results from an absolute performance perspective, they may well be adequate in many fixed or quasi-fixed radio applications (e.g., satellite systems). This is therefore potentially an interesting technique in some application areas.

Figure 5.24 Format of a basic odd-order frequency multiplier.

Figure 5.25 Circuit diagram of a Schottky diode based frequency tripler.
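To illustrate why a frequency multiplier is inherently non-linear for anything other than a constant-envelope signal, and hence why predistortion is required for the results quoted above, the following sketch (not from the text; the ideal cubic non-linearity and scaled-down frequencies are illustrative assumptions) passes a signal with both phase and envelope variation through an odd-order non-linearity and extracts the third-harmonic zone. The phase deviation is multiplied by three and the envelope is cubed:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 50e6
t = np.arange(0, 2e-3, 1/fs)
f0 = 0.82e6                                  # scaled-down input carrier (illustrative)
phi = 0.5*np.sin(2*np.pi*5e3*t)              # phase modulation (rad)
env = 1.0 + 0.3*np.sin(2*np.pi*3e3*t)        # envelope variation
v_in = env*np.cos(2*np.pi*f0*t + phi)

# Idealised odd-order non-linearity standing in for the anti-parallel diode pair
v_nl = v_in**3

# Select the third-harmonic zone, as the resonant output network would
sos = butter(4, [2.5*f0, 3.5*f0], btype="band", fs=fs, output="sos")
v_x3 = sosfiltfilt(sos, v_nl)

# cos^3(x) = (3 cos x + cos 3x)/4, so the 3*f0 component is
# (env^3/4)*cos(3*(w0*t + phi)): phase deviation x3 and envelope cubed, which
# is why predistortion is needed before a multiplier can carry a
# non-constant-envelope format.
ref = (env**3/4)*np.cos(3*(2*np.pi*f0*t + phi))
mid = slice(2000, -2000)                     # ignore filter edge effects
err = np.linalg.norm(v_x3[mid] - ref[mid])/np.linalg.norm(ref[mid])
print(f"relative rms error vs. analytic third-harmonic term: {err:.1e}")
```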

5.4 Constant-Envelope Upconversion Architectures

The previous section focused on generic upconverter/transmitter architectures, suitable for any modulation format (within a given bandwidth capability). This section will concentrate on transmitter architectures which are suitable only for constant-envelope, phase (or frequency) modulated transmitters, or, perhaps more importantly, for use as part of an envelope elimination and restoration (EE&R) transmitter (discussed in Chapter 6). In the latter case, the transmitter architectures described here would form the phase-modulation part of the EE&R transmitter, with a separate amplitude modulation system modulating the PA drain supply (e.g., using a pulse-width modulator).

5.4.1 PLL-Based Reference or Divider Modulated Transmitter

The first architecture, shown in Figure 5.26, is that of a standard, modulated-PLL transmitter. This form of transmitter utilises a conventional PLL-based synthesiser and modulates this, either by directly modulating the frequency reference (if this is formed from a VC-TCXO, for example) or by dithering the divider ratio within the frequency divider.

5.4.2 PLL-Based Directly-Modulated VCO Transmitter

A modified form of the technique illustrated in Figure 5.26 is shown in Figure 5.27, where the modulation process in this case utilises a direct imposition of the input data upon the VCO within the PLL [21]. Its method of operation is as follows. Initially, the PLL is operated with the data modulation switch closed, in order to allow the loop to lock and the VCO to be pulled on to the correct carrier frequency. The data switch is then opened and the transmit data applied directly to the VCO. This data then directly modulates the control voltage to the VCO, thereby modulating its frequency.

The primary advantage of this technique is that it is simple, since the VCO performs both modulation and upconversion functions. It can therefore be fabricated to have a very low power consumption, in an integrated circuit implementation. It does, however, have the significant disadvantage that the VCO is operating open loop, when being modulated by the data signal, and hence is liable to drift off the desired carrier frequency. It will also suffer from injection locking, whereby the high-power output of the PA pulls (and hence modulates) the VCO. Excellent screening is required between the PA stage(s) and the VCO in order to overcome this problem.

Figure 5.26 A PLL-based reference or divider-modulated transmitter.

Figure 5.27 A PLL-based directly modulated VCO transmitter.

5.4.3 PLL-Based Input Reference Modulated Transmitter

The PLL frequency reference can be generated from a local oscillator, upon which the desired transmit data has been imposed by a DSP and quadrature upconverter [22]. This type of architecture is shown in Figure 5.28. There are a number of possible methods of implementing the baseband/DSP part of the system, and hence only a generic DSP block is shown in Figure 5.28. This block is therefore assumed to incorporate a Hilbert transform filter, DACs, and reconstruction filtering (if required, in each case, depending upon the implementation chosen). A DSP device is ideally suited to the implementation of a Hilbert transform filter and hence is very appropriate for use in this application.

The operation of this transmitter is similar to that of Figure 5.26, with the exception that, in this case, the reference for the PLL is generated by upconverting I/Q data signals (generated by the DSP), using a local oscillator. The other key difference is that the frequency divider in Figure 5.26 has been replaced by a mixer and a second local oscillator. This relaxes the small synthesis step size requirement which would otherwise be necessary in the first local oscillator, as this oscillator can provide some or all of the required tuning. This architecture is, again, simple and amenable to integration in a low-power device. It does, however, require two local oscillators, in addition to the VCO, and pulling of the second LO and/or the VCO is still an issue (again, requiring good PA to LO/VCO isolation).

Figure 5.28 A PLL-based input reference modulated transmitter.

5.4.4 Use of a Direct-Digital Synthesizer to Modulate a PLL-Based Transmitter

An alternative modification to the architecture of Figure 5.26 is shown in Figure 5.29; in this case, a direct-digital synthesiser (DDS) is used to form the reference signal and this is fed by a combination of the channel frequency information, for the given set of channels on which the transmitter can operate, and the desired transmit data. In this way, the reference frequency to the PLL is supplied as an already-modulated signal and the PLL effectively upconverts this signal to the desired carrier frequency. The fundamental reference for this system is now the clock oscillator (not shown in Figure 5.29), which clocks the DDS (and typically also the DAC). This overcomes the frequency drift problems associated with the architecture of Figure 5.27, although whether it has any advantage, in terms of output noise, over the architectures of Figures 5.26 and 5.28 will depend upon the particular implementation in each case. It also has significant disadvantages, at present, due to the fine frequency resolution which would be required in most applications and the trade-off between switching time and spurious response for the DDS device. The former issue is similar to that discussed in Section 5.4.3 and could be overcome by the same mechanism (i.e., use of a mixer and second LO in place of the frequency divider).

Figure 5.29 A PLL-based transmitter in which the input reference is generated by a DDS.
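As background to the DDS-based reference generation described above, a minimal phase-accumulator sketch follows (illustrative only; the accumulator width, LUT size, clock rate, and tuning values are assumptions rather than values from the text). The output frequency is fclk·M/2^W, so the frequency resolution is fclk/2^W and the tuning word M can be updated on the fly to impose the transmit data on the reference fed to the PLL:

```python
import numpy as np

ACC_BITS = 32                       # phase accumulator width (assumed)
LUT_BITS = 12                       # sine lookup table address width (assumed)
f_clk = 50e6                        # DDS/DAC clock (assumed)

sine_lut = np.round(2047*np.sin(2*np.pi*np.arange(2**LUT_BITS)/2**LUT_BITS))

def dds(tuning_words):
    """Clock the DDS once per entry in tuning_words; return DAC codes."""
    acc = 0
    out = np.empty(len(tuning_words))
    for i, m in enumerate(tuning_words):
        acc = (acc + int(m)) & (2**ACC_BITS - 1)          # phase accumulator
        out[i] = sine_lut[acc >> (ACC_BITS - LUT_BITS)]   # truncate to LUT address
    return out

# Frequency resolution and an example tuning word for a 10.7-MHz reference
print("resolution:", f_clk/2**ACC_BITS, "Hz")
m0 = round(10.7e6*2**ACC_BITS/f_clk)
print(f"nominal reference: {f_clk*m0/2**ACC_BITS/1e6:.4f} MHz")

# Simple 2-level FSK-style modulation of the reference: deviate by +/- 5 kHz
dev = round(5e3*2**ACC_BITS/f_clk)
bits = np.repeat(np.random.randint(0, 2, 50), 1000)       # 1000 clocks per bit
words = m0 + np.where(bits, dev, -dev)
ref = dds(words)                                           # modulated PLL reference
```

The very fine resolution available from a wide accumulator is what makes the DDS attractive here, while the phase-truncation spurs implicit in the LUT addressing are one source of the spurious-response trade-off mentioned above.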

5.4.5 A PLL-Based Transmitter Utilising Modulated Fractional-N Synthesis

If a fractional-N synthesizer is used in place of a conventional synthesizer, a transmitter of the form shown in Figure 5.30 can be created [23]. In this transmitter, the divide ratio is modulated by the desired transmit data and frequency modulation is thereby achieved.

Figure 5.30 A constant-envelope transmitter employing a fractional-N-based upconversion system.

This stems from the fact that the output frequency for a PLL synthesiser is given by:

$F_{out} = N F_{ref}$  (5.10)

If the divide ratio, N, is now modulated, (5.10) becomes:

$F_{out}(t) = N\{D(t)\} F_{ref}$  (5.11)

where D(t) is the data modulation (following filtering and so forth). The frequency of the output signal has therefore been modulated by the data waveform.

The operation of the transmitter is as follows. The transmit data is first filtered, typically by a Gaussian FIR digital filter, before summation with the division ratio value required to select the desired channel (i.e., the carrier frequency offset value). The resulting signal forms the input to the delta-sigma modulator, the output of which is used to control the divide ratio of the PLL. It is this variation of the division ratio which modulates the carrier and thereby provides the desired, modulated output signal. The use of a delta-sigma modulator allows the quantisation noise, generated in the modulation process, to be moved to a frequency range much higher than that of the baseband data spectrum. This noise can therefore be filtered by the naturally lowpass characteristic of the PLL.

A key advantage of this architecture, over those already achieved using the input-reference modulation techniques discussed earlier, is that higher data rates can be transmitted without requiring the reference frequency to be lowered. The use of a delta-sigma modulator, and the lack of any requirement for mixers, allows this architecture to be both simple and capable of implementation in a low-power integrated device.
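A minimal first-order delta-sigma sketch of the divider-control path described above (illustrative only; a practical design would normally use a higher-order or MASH modulator, and the Gaussian filter and deviation values here are assumptions rather than values from the text). It shows Gaussian-filtered data being added to the fractional channel word and converted into a 0/+1 dithering of the N/(N+1) divide ratio, whose average reproduces the desired instantaneous fractional frequency:

```python
import numpy as np

N_int = 64                 # integer part of the divide ratio (illustrative)
frac_channel = 0.25        # fractional part selecting the channel (illustrative)

# Gaussian-filtered binary data (crude Gaussian FIR; BT and span assumed)
bits = np.repeat(np.random.randint(0, 2, 200)*2 - 1, 16).astype(float)
g = np.exp(-0.5*np.linspace(-2, 2, 33)**2)
data = np.convolve(bits, g/g.sum(), mode="same")*0.05   # small frequency deviation

# First-order delta-sigma modulator: quantise the fractional word to 0/1
frac_in = frac_channel + data            # instantaneous fractional divide value
acc = 0.0
carry = np.empty_like(frac_in, dtype=int)
for i, x in enumerate(frac_in):
    acc += x
    carry[i] = acc >= 1.0                # 1 -> divide by N+1 on this cycle
    acc -= carry[i]

divide_ratio = N_int + carry             # N/(N+1) dithering, as in Figure 5.30

# Sanity check: the average divide ratio tracks N_int + frac_in
print("mean requested:", N_int + frac_in.mean())
print("mean delivered:", divide_ratio.mean())
```

The quantisation error of the 0/1 dithering is first-difference shaped towards high offset frequencies, which is the noise that the lowpass characteristic of the PLL then removes.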


One disadvantage of this architecture lies in its requirement for the PLL loop bandwidth to be greater than the modulation bandwidth. This drawback may, however, be overcome by utilising the method proposed in [21].

5.5 Broadband Quadrature Techniques

A broadband quadrature network is an important enabling technology in a number of areas within a software defined radio system. There are a range of applications for quadrature networks within a software defined radio, each with different characteristic requirements. These include:

1. Quadrature upconverter. In this application, it is the local oscillator which must be generated in quadrature and the broadband requirement comes from the coverage range desired from the system; the instantaneous bandwidth required of the quadrature generation system is negligible. In this case, any of the techniques described below may be applied, including the more traditional doubler/divider (digital) techniques for LO quadrature generation. Quadrature errors manifest themselves as an imperfect image suppression, with this occurring in-band in the case of Weaver [11] upconversion (see Section 5.3.6). Note that in the case of a multi-carrier transmitter (e.g., in a base station), in-band refers to the instantaneous transmit bandwidth of the system. This can cause unwanted products to appear in adjacent channels, as was shown in Figure 5.4. In addition, these errors are likely to result in an increased error vector magnitude from the system, unless this parameter is already dominated by local oscillator phase noise or PA non-linearity. In the case of Figure 5.4, channels 1 and 4 would suffer most severely from this problem, since these have in-band image signals falling directly on top of them. Channel 2 should exhibit a much better EVM performance, taken in isolation.

2. Quadrature downconverter. Similarly, in this application, it is the local oscillator which must be generated in quadrature and the broadband requirement again comes from the coverage range desired from the system; the instantaneous bandwidth required being negligible. Any of the following techniques may be applied, including the more traditional doubler/divider (digital) techniques for LO quadrature generation. Quadrature errors can manifest themselves, here, as an imperfect image suppression (as above) when the downconverter is used in the feedback loop of a linearised transmitter (e.g., Cartesian loop). They can also result in an imperfect receiver image suppression, when applied to a receive quadrature downconverter. In either case, this is likely to result in a degradation of the error vector magnitude performance of the system, as was the case with the upconverter discussed earlier.

3. Image-reject mixer. The use of image-reject mixing is described in Chapter 4 and hence the treatment here will be brief. Broadband quadrature is required in two parts of an image-reject mixer: in the LO path and at RF. In the former case, the same issues and solutions apply as described above for quadrature upconverters and downconverters. In the latter case, a true broadband quadrature is required (i.e., one with a relatively large instantaneous bandwidth). This reduces the number of techniques available for its realization, largely to those based on all-pass (or polyphase) filtering and coupler-based techniques (e.g., a Lange coupler). Quadrature errors will manifest themselves as a reduction in the available image rejection; the sensitivity of image rejection to gain and phase errors is illustrated in the sketch following this list.

Introduction to Quadrature Techniques

There exists a wide range of narrowband methods for generating quadrature signals. Such techniques include various forms of quadrature hybrid (lumped-element, branch-line, ring, and transformer), together with narrowband filter-based techniques. These techniques are typically capable of acceptable quadrature behaviour (for example,