High-Level Modeling and Synthesis of Analog Integrated Systems (Analog Circuits and Signal Processing) [1 ed.] 9781402068010, 1402068018



English, 297 pages [287], 2008



High-Level Modeling and Synthesis of Analog Integrated Systems

ANALOG CIRCUITS AND SIGNAL PROCESSING SERIES
Consulting Editor: Mohammed Ismail, Ohio State University

Titles in Series:
THE GM/ID DESIGN METHODOLOGY FOR CMOS ANALOG LOW POWER INTEGRATED CIRCUITS. Jespers, Paul G.A. ISBN-10: 0-387-47100-6
CIRCUIT AND INTERCONNECT DESIGN FOR HIGH BIT-RATE APPLICATIONS. Veenstra, Hugo, Long, John R. ISBN: 978-1-4020-6882-9
HIGH-RESOLUTION IF-TO-BASEBAND SIGMADELTA ADC FOR CAR RADIOS. Silva, Paulo G.R., Huijsing, Johan H. ISBN: 978-1-4020-8163-7
SILICON-BASED RF FRONT-ENDS FOR ULTRA WIDEBAND RADIOS. Safarian, Aminghasem, Heydari, Payam. ISBN: 978-1-4020-6721-1
HIGH-LEVEL MODELING AND SYNTHESIS OF ANALOG INTEGRATED SYSTEMS. Martens, Ewout S.J., Gielen, Georges. ISBN: 978-1-4020-6801-0
MULTI-BAND RF FRONT-ENDS WITH ADAPTIVE IMAGE REJECTION: A DECT/BLUETOOTH CASE STUDY. Vidojkovic, V., van der Tang, J., Leeuwenburgh, A., van Roermund, A.H.M. ISBN: 978-1-4020-6533-0
BASEBAND ANALOG CIRCUITS FOR SOFTWARE DEFINED RADIO. Giannini, Vito, Craninckx, Jan, Baschirotto, Andrea. ISBN: 978-1-4020-6537-8
DESIGN OF HIGH VOLTAGE XDSL LINE DRIVERS IN STANDARD CMOS. Serneels, Bert, Steyaert, Michiel. ISBN: 978-1-4020-6789-1
CMOS MULTI-CHANNEL SINGLE-CHIP RECEIVERS FOR MULTI-GIGABIT OPT... Muller, P., Leblebici, Y. ISBN: 978-1-4020-5911-7
ANALOG-BASEBAND ARCHITECTURES AND CIRCUITS FOR MULTISTANDARD AND LOW-VOLTAGE WIRELESS TRANSCEIVERS. Mak, Pui In, U, Seng-Pan, Martins, Rui Paulo. ISBN: 978-1-4020-6432-6
FULL-CHIP NANOMETER ROUTING TECHNIQUES. Ho, Tsung-Yi, Chang, Yao-Wen, Chen, Sao-Jie. ISBN: 978-1-4020-6194-3
ANALOG CIRCUIT DESIGN TECHNIQUES AT 0.5V. Chatterjee, S., Kinget, P., Tsividis, Y., Pun, K.P. ISBN-10: 0-387-69953-8
LOW-FREQUENCY NOISE IN ADVANCED MOS DEVICES. von Haartman, M., Östling, M. ISBN: 978-1-4020-5909-4
SWITCHED-CAPACITOR TECHNIQUES FOR HIGH-ACCURACY FILTER AND ADC... Quinn, P.J., Roermund, A.H.M.v. ISBN: 978-1-4020-6257-5
ULTRA LOW POWER CAPACITIVE SENSOR INTERFACES. Bracke, W., Puers, R. (et al.) ISBN: 978-1-4020-6231-5
BROADBAND OPTO-ELECTRICAL RECEIVERS IN STANDARD CMOS. Hermans, C., Steyaert, M. ISBN: 978-1-4020-6221-6
CMOS SINGLE CHIP FAST FREQUENCY HOPPING SYNTHESIZERS FOR WIRELESS MULTI-GIGAHERTZ APPLICATIONS. Bourdi, Taoufik, Kale, Izzet. ISBN: 978-1-4020-5927-8
CMOS CURRENT-MODE CIRCUITS FOR DATA COMMUNICATIONS. Yuan, Fei. ISBN: 0-387-29758-8
ADAPTIVE LOW-POWER CIRCUITS FOR WIRELESS COMMUNICATIONS. Tasic, Aleksandar, Serdijn, Wouter A., Long, John R. ISBN: 978-1-4020-5249-1
PRECISION TEMPERATURE SENSORS IN CMOS TECHNOLOGY. Pertijs, Michiel A.P., Huijsing, Johan H. ISBN-10: 1-4020-5257-X

Ewout S.J. Martens



Georges G.E. Gielen

High-Level Modeling and Synthesis of Analog Integrated Systems


Ewout S.J. Martens
KU Leuven, Belgium

Georges G.E. Gielen
KU Leuven, Belgium

ISBN 978-1-4020-6801-0
e-ISBN 978-1-4020-6802-7

Library of Congress Control Number: 2007942726

© 2008 Springer Science + Business Media B.V. All Rights Reserved.
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Printed on acid-free paper.

9 8 7 6 5 4 3 2 1

springer.com

To my mother and father

Preface

As the miniaturization of semiconductor technology continues, electronic systems on chips offer more extensive and more complex functionality with better performance, higher frequencies and lower power consumption. Whereas digital designers can take full advantage of design automation tools to build huge systems, the lack of computer support at the different abstraction levels makes analog design a time-consuming handcraft, which limits the size of the systems that can be implemented. Various approaches for finding optimal values for the parameters of analog cells, such as op amps, have been investigated since the mid-1980s and have made their entrance in commercial applications. However, a larger impact on performance can be expected from tools that operate at a higher abstraction level and consider multiple architectural choices to realize a particular functionality. This book examines the opportunities, conditions, problems, solutions and systematic methodologies for this new generation of analog CAD tools.

The outline of this book is as follows. In the first part, the characteristics of the analog design process are systematically analyzed and several approaches to automated analog synthesis are summarized. Comparing their properties with the requirements for high-level synthesis of analog and mixed-signal systems results in a new design paradigm: the high-level design flow based on generic behavior. This design approach combines a modeling strategy using generic behavioral models with a synthesis strategy that explores a heterogeneous design space containing different architectures.

The modeling strategy is elaborated further in Part II. Generic behavioral models make it possible to represent a wide range of diverse architectures with several non-idealities easily and, at the same time, to exploit specific aspects of a class of systems, leading to efficient evaluation of the performance.
Two novel models are defined in this book. The first adopts a time-domain approach and is suited for classes such as ∆Σ modulators and sampled-data systems. For the second model, a new frequency-domain framework has been developed (the Phase-Frequency Transfer model) which allows the representation of classes of RF systems such as front-ends of wireless receivers.


Finally, in the third part of this book, the synthesis strategy is concretized with a new algorithm for high-level optimization of analog and mixed-signal systems. This top-down heterogeneous approach naturally deals with diverse performance characteristics and optimization targets, allows various sources of design and optimization knowledge to be combined, and explores several architectures. The results of a prototype implementation are discussed. This dedicated C++ program is capable of generating different types of analog-to-digital converters for different sets of specifications.

This book is the result of six years of scientific research within the MICAS research group of the department of electrical engineering (ESAT) at the Katholieke Universiteit Leuven in Belgium. Many people have contributed to the results and presentation of this work, and I would like to thank them. In the first place, the research topic was introduced to me by Professor G. Gielen. Based on his extensive experience in the field of design automation of analog circuits, he was able to provide stimulating daily advice and useful ideas during the research. He edited the first version of this text thoroughly, and his comments have led to various improvements. Furthermore, I had the opportunity to present the results of this work at several international conferences and to meet and have interesting discussions with various people.

My first steps into analog and digital micro-electronics were taken under the supervision of Professor W. Sansen (K.U. Leuven) and Professor H. De Man (K.U. Leuven and Imec), respectively. I am pleased that they were willing to free up some time to read this text carefully and to make various suggestions to improve its quality. Further, Professor W. Dehaene (K.U. Leuven), Professor J. Vandewalle (K.U. Leuven) and Professor P. Rombouts (R.U. Gent) have read and evaluated this text positively.
For the major part of this work, I obtained a grant from the Fund for Scientific Research (F.W.O.) of the Flemish government. I would like to thank them for this financial support, which has made this work possible. Further, my gratitude goes to Martin Vogels and Kenneth Francken for the collaboration on the subject of ∆Σ modulators. Many other people of the MICAS research group helped me to understand various aspects of analog and mixed-signal design and to solve numerous problems. Also, the many discussions – technical and otherwise – and activities have left me with pleasant memories of my stay at MICAS and of the people I met there. Finally, my mother and father supported me throughout my stay at the university, both as a student and as a research assistant. They gave me the chance to start my studies and helped me with all indispensable matters. I would like to dedicate this work to them.

Ewout Martens Leuven, July 2007

Contents

1 Introduction . . . . . 1
  1.1 The world of electronics . . . . . 1
    1.1.1 A historic perspective . . . . . 1
    1.1.2 An economic perspective . . . . . 2
    1.1.3 A technological perspective . . . . . 5
    1.1.4 A design perspective . . . . . 8
  1.2 Analog design automation . . . . . 9
    1.2.1 Research areas . . . . . 9
    1.2.2 High-level synthesis of analog and mixed-signal systems . . . . . 11
  References . . . . . 12

Part I Analog Design Methodologies

2 Foundations of Design Flows for AMS Systems . . . . . 17
  2.1 Introduction . . . . . 17
  2.2 Analog and mixed-signal systems . . . . . 17
  2.3 Description and abstraction levels . . . . . 19
    2.3.1 Definitions . . . . . 19
    2.3.2 Operations in the abstraction–description plane . . . . . 23
  2.4 Parameters and performance . . . . . 25
  2.5 Design flows for analog and mixed-signal systems . . . . . 28
    2.5.1 The generic design flow . . . . . 28
    2.5.2 Simplified generic design flows . . . . . 30
    2.5.3 Specification of the generic design flow . . . . . 32
  2.6 Conclusions . . . . . 32
  References . . . . . 33

3 Analog and Mixed-Signal Design Strategies . . . . . 37
  3.1 Introduction . . . . . 37
  3.2 Classification . . . . . 37
  3.3 Historic overview . . . . . 39
    3.3.1 Selection before or after dimensioning . . . . . 39
    3.3.2 Selection during dimensioning . . . . . 49
    3.3.3 Top-down creation . . . . . 53
    3.3.4 Bottom-up generation . . . . . 57
    3.3.5 Overview . . . . . 61
  3.4 Design strategy based on generic behavior . . . . . 63
    3.4.1 Design flow . . . . . 64
    3.4.2 Generic behavioral models . . . . . 67
  3.5 Conclusions . . . . . 70
  References . . . . . 71

Part II Generic Behavioral Modeling

4 Time-Domain Generic Behavioral Models . . . . . 85
  4.1 Introduction . . . . . 85
  4.2 Time-domain modeling approaches . . . . . 86
    4.2.1 Time-domain simulation . . . . . 86
    4.2.2 Time-marching algorithms . . . . . 87
    4.2.3 Sampled-data methods . . . . . 89
    4.2.4 Waveform relaxation . . . . . 90
    4.2.5 Collocation methods . . . . . 91
    4.2.6 Conclusion . . . . . 92
  4.3 A model for continuous-time ∆Σ modulators . . . . . 92
    4.3.1 The generic behavioral model . . . . . 93
    4.3.2 Templates for the generic functions . . . . . 102
    4.3.3 Implementation of the generic behavioral model . . . . . 122
  4.4 A generic behavioral model for sampled-data systems . . . . . 129
    4.4.1 General sampled-data systems . . . . . 129
    4.4.2 Generic behavioral model . . . . . 130
    4.4.3 Templates for the generic functions . . . . . 133
    4.4.4 Implementation . . . . . 137
  4.5 Conclusions . . . . . 139
  References . . . . . 140

5 Frequency-Domain Generic Behavioral Models . . . . . 145
  5.1 Introduction . . . . . 145
  5.2 Frequency-domain modeling approaches . . . . . 145
    5.2.1 Frequency-domain simulation . . . . . 145
    5.2.2 Harmonic balance algorithms . . . . . 147
    5.2.3 Approaches with time-domain equivalent low-pass models . . . . . 148
    5.2.4 Strictly frequency-domain approaches . . . . . 149
    5.2.5 Overview . . . . . 150
  5.3 A new frequency-domain framework . . . . . 151
    5.3.1 Requirements . . . . . 152
    5.3.2 Signal representation . . . . . 153
    5.3.3 Linear representation of building blocks . . . . . 160
    5.3.4 Noise signals in the Phase-Frequency Transfer model . . . . . 171
    5.3.5 Nonlinear behavior of building blocks . . . . . 177
  5.4 Generic behavioral models for front-ends . . . . . 187
    5.4.1 Fundamental RF receiver front-end operation . . . . . 187
    5.4.2 Generic behavioral model . . . . . 188
    5.4.3 Templates for the generic functions . . . . . 191
    5.4.4 Implementation and experimental results . . . . . 198
  5.5 Conclusions . . . . . 204
  References . . . . . 205

Part III Top-Down Heterogeneous Synthesis

6 Top-Down Heterogeneous Optimization . . . . . 213
  6.1 Introduction . . . . . 213
  6.2 Objectives for synthesis strategy . . . . . 213
    6.2.1 Fundamental requirements . . . . . 214
    6.2.2 An evolutionary approach . . . . . 218
  6.3 Top-down heterogeneous optimization methodology . . . . . 219
    6.3.1 Overview of methodology . . . . . 219
    6.3.2 Design population . . . . . 222
    6.3.3 The embryonic design . . . . . 225
    6.3.4 Performance calculators . . . . . 228
    6.3.5 Evaluators . . . . . 229
    6.3.6 Satisfaction levels . . . . . 231
    6.3.7 Transformations . . . . . 235
    6.3.8 Selection of transformations . . . . . 241
    6.3.9 Conclusion . . . . . 243
  6.4 Application: analog-to-digital conversion . . . . . 244
    6.4.1 Dedicated library for top-down heterogeneous optimization . . . . . 244
    6.4.2 Dedicated program for A/D conversion . . . . . 245
  6.5 Conclusions . . . . . 253
  References . . . . . 254

7 Conclusions . . . . . 257

Symbols and Abbreviations . . . . . 261

Index . . . . . 271

List of Figures

1.1 Worldwide revenues realized by the semiconductor industry with actual data till 2006 and predictions for 2007 [Source: SIA]. . . . . . 3
1.2 Exponential variation of the number of transistors on a microprocessor and their average price [Source: Intel]. . . . . . 4
1.3 Worldwide revenues realized by the EDA industry [Source: EDA Consortium]. . . . . . 5
1.4 Evolution in silicon CMOS technologies. . . . . . 6
2.1 Examples of different electronic systems. . . . . . 18
2.2 Example of a simple integrator represented at different description levels. . . . . . 20
2.3 The abstraction–description plane showing the relation between abstraction and description levels. The darkness of the bars indicates the commonly accepted correlation between the abstraction and description level. . . . . . 22
2.4 Examples of the four basic operations in the abstraction–description plane. . . . . . 23
2.5 The generic design flow within a specific description level for analog and mixed-signal systems using the four basic operations: translation, refinement, transformation and simplification. . . . . . 29
3.1 Schematic representation of the design strategy applying selection of the topology before or after dimensioning. . . . . . 40
3.2 Example of a typical design flow for a basic analog cell consisting of topology selection followed by circuit sizing [46]. . . . . . 40
3.3 Schematic representation of the design strategy performing the selection of the topology and the dimensioning concurrently. . . . . . 50
3.4 Example of a (flat) template with various optional elements indicated by the rectangles, i.e., several optional cascode stages, different compensation schemes and an optional second stage [77]. . . . . . 51
3.5 Schematic representation of the design strategy based on top-down creation of the architecture. . . . . . 54
3.6 Example of a top-down creation process which starts from a textual description of the required functionality and generates a behavioral model of a specific architecture [75]. . . . . . 55
3.7 Schematic representation of the design strategy based on bottom-up generation of the architecture. . . . . . 58
3.8 Examples of the seed and transformations used to generate analog circuits using a bottom-up design strategy [71]. . . . . . 59
3.9 Overview of major analog EDA tools for analog synthesis developed in the last 20 years and published in open literature. . . . . . 62
3.10 High-level design flow for analog and mixed-signal systems based on generic behavior. . . . . . 66
3.11 Schematic representation of the generic behavioral model with alphabets Σ, Q1, ..., Qn and Ψ, generic functions ζ1, ..., ζn and interaction model (3.9). . . . . . 68
4.1 Examples of different architectures for CT ∆Σ A/D converters. . . . . . 93
4.2 Example of a DAC-pulse affected by different types of jitter: sampling (∆tk and ∆tk+1), pulse-width (∆Tpw) and pulse-delay jitter (∆td). . . . . . 95
4.3 General model for the class of CT ∆Σ A/D converters with the elements of the specific modulator of Fig. 4.1b fit into it. . . . . . 96
4.4 Schematic representation of the interaction model of the generic behavioral model for CT ∆Σ modulators. . . . . . 101
4.5 Examples of different implementation styles for integrating building blocks. . . . . . 104
4.6 Macro-model for R–C integrator used in Example 4.1. . . . . . 105
4.7 Example of a weakly nonlinear model of order D for an integrator. . . . . . 112
4.8 Example of the characteristic curves of a single-bit quantizer. . . . . . 119
4.9 Example of the characteristic curves of a single-bit D/A converter. . . . . . 120
4.10 Schematic representation of the SystemC modules used for evaluating the generic behavioral model for CT ∆Σ modulators. . . . . . 123
4.11 Example of refinement: SNDR as a function of the signal input amplitude for an ideal modulator and one with additional non-idealities. . . . . . 125
4.12 Degradation of signal-to-noise ratio due to linear non-idealities of the integrators with the generic behavioral model and the VHDL-AMS model. . . . . . 126
4.13 SNR degradation due to weakly nonlinear distortion calculated with both the VHDL-AMS behavioral model and the generic behavioral model with different levels of accuracy. . . . . . 127
4.14 Degradation of SNDR due to saturation of the integrators of the loop filter. . . . . . 128
4.15 Degradation of signal-to-noise ratio due to sampling jitter with non-return-to-zero (NRTZ) or return-to-zero (RTZ) pulses and with or without pulse-width jitter (PWJ) for different models. . . . . . 129
4.16 Schematic representation of the interaction model of the generic behavioral model for SD systems. . . . . . 134
4.17 Example of a first-order switched-capacitor section. . . . . . 136
4.18 Increase of harmonic distortion for a switched-capacitor filter with and without clock feedthrough as a function of the distortion of the op amp used. . . . . . 139
5.1 Example of polyphase signals with different number of phases N in a simple receiver architecture. . . . . . 154
5.2 Example of a frequency-domain representation of a Polyphase Harmonic Signal with N = 3 phases and A = 4 frequencies. . . . . . 158
5.3 Examples of phase-converters converting the differential mode to the positive sequence and vice versa. . . . . . 162
5.4 Example of a 4-dimensional symmetrical polyphase filter. . . . . . 163
5.5 Example of a simple sampled RC-lowpass filter [45]. . . . . . 164
5.6 Graphical representation of the Polyphase Harmonic Transfer Matrix H̃L(f): the transfers between the phase-frequency planes formally given by the elements HL_i,j(f), and the interpretation as a collection of polyphase filters HL_k,l(f) or as filters of multi-carrier signals H̃L_p,q(f). . . . . . 166
5.7 Three elementary connection types for PHTMs. An RF front-end architecture can be described as a set of elementary connections between the PHTMs of the building blocks. . . . . . 168
5.8 Representation of a PHTM as a matrix box plot. . . . . . 170
5.9 The graphical representation of the PHTM of the 4-to-2 converter of Fig. 5.3a for different sets of polyphase base vectors. . . . . . 171
5.10 Example of a down-conversion structure with 4-phase input and output signals. A polyphase noise signal n(t) is used as input x(t). All operations are defined in differential mode. . . . . . 176
5.11 Graphical representation of the mapping operation of a second-order distortion tensor between phase-frequency planes. . . . . . 179
5.12 The input signal and the matrix box plots of the transfer matrices of degree one, two and three for a third-order nonlinearity with K3 = 1. . . . . . 185
5.13 Example of an RF front-end architecture with fundamental operations of down-conversion and sampling. . . . . . 188
5.14 Schematic representation of the interaction model of the generic behavioral model for RF receiver front-end architectures. . . . . . 192
5.15 Example of a polyphase mixer modeled as a two-stage operation. The first stage converts N = 4 input signals to a polyphase signal with M = 8 phases, which is converted in its turn to K = 4 output phases. . . . . . 194
5.16 Example of a sampling stage and the model as a cascade of a filter and a stage converting a continuous-time (CT) signal to a discrete-time (DT) one. . . . . . 195
5.17 Schematic representation of the RF front-end architecture used in the examples. . . . . . 199
5.18 IRR as a function of the mismatch between the gain of the mixers. For each value of σgain 100 samples are used to calculate the IRR. . . . . . 200
5.19 Schematic and numerical representation of the ideal and non-ideal phase transfers from the desired frequency fd and the image frequency fim to the required output frequency fout. . . . . . 201
5.20 Schematic representation of the non-ideal phase-frequency transfers in the mixer when a distortion tensor is added. . . . . . 203
5.21 Matrix box plot of PHTM from the input of the down-converter to the output of the sampling part. . . . . . 203
6.1 Global configuration of the optimization framework. . . . . . 220
6.2 Diagram of the flow of the heterogeneous optimization algorithm used for top-down optimization of both topology and parameters. . . . . . 220
6.3 Examples of different designs derived from each other and represented as ordered lists of transformations. . . . . . 223
6.4 Example of a tree of designs generated during the optimization process. Each gray area corresponds to a cluster of the population, having the same architecture. . . . . . 224
6.5 Schematic representation of the embryonic design for Example 6.2. . . . . . 227
6.6 Examples of satisfaction functions ES mapping a single performance characteristic onto a satisfaction level for different objectives. . . . . . 233
6.7 Examples of designs obtained by application of rule-based transformations to improve SNR and power. . . . . . 238
6.8 Examples of designs obtained by application of transformations based on mathematical equivalences to improve the IRR and to decrease NF. . . . . . 239
6.9 Variation of cross-over rate for recombination transformations as a function of the generation number G of the cluster. . . . . . 240
6.10 Schematic flow of selecting transformations to modify a design D. . . . . . 242
6.11 Schematic representation of the use of the dedicated library Oedipus as kernel for the Antigone program. . . . . . 245
6.12 Results of eight optimizations with Antigone for different speed and accuracy specifications. . . . . . 247
6.13 Examples of architectures obtained as the result of the top-down heterogeneous optimization procedure. . . . . . 249
6.14 Part of the tree of designs for the example resulting in the converter of Fig. 6.13b. . . . . . 250
6.15 Example of the evolution of the satisfaction level of the best design in the population. . . . . . 252

List of Tables

2.1 Overview of alphabets with elements (E) and interconnections (I) for different description levels. 21
2.2 Comparison between different kinds of performance functions. 26
2.3 Comparison between three principal design flows. 31
3.1 Overview of properties of base categories of optimization approaches. 45
4.1 Overview of properties of four major time-domain simulation approaches: time-marching (TM), sampled-data (SD), waveform relaxation (WR) and collocation methods (COL). 87
4.2 Definitions of the alphabets used in the generic behavioral model for CT ∆Σ modulators corresponding to interfaces, memory and feedback. 98
4.3 Definitions of the mapping operations of the generic functions of the generic behavioral model for CT ∆Σ A/D converters. 100
4.4 Overview of input parameters for simulations of the SystemC module and test bench common to different examples. 124
4.5 Simulation times and accuracy for curves of Fig. 4.13. 127
4.6 Average simulation times for different saturation models in Fig. 4.14. 128
4.7 Definitions of the alphabets used in the generic behavioral model for general SD systems for interfaces and memory. 131
4.8 Definitions of the mapping operations of the generic functions of the generic behavioral model for SD systems. 132
4.9 Overview of input parameters for simulation of the switched-capacitor filter. 138
5.1 Overview of properties of three major approaches for simulation of high-frequency systems: harmonic balance (HB), time-domain low-pass (TD) and strictly frequency-domain (FD). 151
5.2 Special unity symmetrical polyphase signals. 156
5.3 Definitions of the alphabets used in the generic behavioral model for the interfaces and internal signal of the RF receiver front-end architectures. 189
5.4 Definitions of the mapping operations of the generic functions of the generic behavioral model for RF receiver front-end architectures. 190
5.5 Input signals for weakly nonlinear analysis. 202
6.1 Overview of the parameters used in the synthesis processes resulting in the converters of Fig. 6.12. 251

Listings

4.1 Interaction model of the generic behavioral model for CT ∆Σ modulators. 100
4.2 Interaction model of the generic behavioral model for general SD systems. 132
5.1 Interaction model of the generic behavioral model for RF receiver front-end architectures. 191
6.1 Pseudo-code of the algorithm executed by the central control unit of the optimization framework in Fig. 6.1. 221

1 Introduction

1.1 The world of electronics

1.1.1 A historic perspective

In the sixth century B.C., the Greek philosopher Thales of Miletus discovered that rubbing amber leads to the attraction of small objects, a phenomenon now known as static electricity. Today, more than 2,500 years later, electricity plays a vital role in modern society in various domains, like communication, energy, transportation, health services, domestic comfort, industrial processes and entertainment. As a result of this electrical revolution, the world consumed about 16 billion MWh of electricity in 2006, more than twice the consumption of 25 years ago, and the demand keeps increasing.

The first phase in the evolution towards an electrical world is mainly situated in the eighteenth and nineteenth centuries. Scientists like Franklin, de Coulomb, Ampère, Ohm, Faraday, Gauss and Maxwell performed various electrical and magnetic experiments and provided the mathematical framework to describe the fundamental principles of electromagnetism. These physical laws resulted in basic concepts like capacitance, electromagnetic forces, voltaic cells, resistance, inductance and electromagnetic waves.

The new theories were rapidly put into practice by engineers in the nineteenth century. In 1837, the first patent on an electric machine was granted to Davenport for his (DC) electric motor. The development of powerful electromagnets and relays by Henry was the basis for the first commercial telegraph lines, enabling the transmission of messages over long distances. Over a period of merely 25 years, major inventions like the telephone, light bulb, phonograph, induction motor and wireless telegraph made electrical applications commonplace. With the inventions of the diode and triode vacuum tubes at the start of the twentieth century, the flow of electrons could be controlled and electronic circuits were born.
After WWII, the bulky and power-hungry vacuum tubes were replaced by semiconductor devices thanks to the development of the first bipolar transistor in 1947. Another milestone was achieved in 1958 with the creation of the first monolithic Integrated Circuit (IC) by Kilby. Instead of building circuits with discrete components, multiple solid-state devices could now be put onto one chip (e.g., the first successful monolithic op amp, the µA702, in 1964). Working MOS field-effect transistors were first developed in 1960. Three years later, they became the building blocks of CMOS logic circuits [9], still the dominant technology for ICs nowadays.

One of the most important applications of electronics is the ability to build large-scale programmable digital computers. Whereas the first computers (e.g., ENIAC in 1946) consisted of vacuum tubes and later discrete transistors, the introduction of ICs opened new possibilities for speed, size and reliability. The race towards more powerful computers in the second half of the twentieth century took off with the first single-chip microprocessor, the Intel 4004 in 1971, with about 2,300 transistors operating at 108 kHz. The most recent processor contains 1.3 billion transistors running at more than 3 GHz [8].

Designing complex electronic circuits by hand quickly became infeasible. The emergence of Computer-Aided Design (CAD) brought a solution. In 1967, the first real application of CAD was reported: a computer program was used to determine the connections between the transistors of the Micromosaic IC based on the specification. At the beginning of the 1980s, a new industry started with the foundation of companies like Mentor Graphics, specialized in tools supporting the design process. Today, their Electronic Design Automation (EDA) programs have become indispensable aids for the development of semiconductor products.
1.1.2 An economic perspective

The possibilities offered by semiconductor technology have led to a strong increase of the demand for ICs in various application areas, like computing (e.g., PCs, video game consoles), communication (e.g., GSM, DECT phones, networks like 802.11a/b/g or Bluetooth), automotive (e.g., optimized engine control, safety improvements like ABS or ESC), industry (e.g., automation of production processes) and consumer electronics (e.g., MP3-players, home appliances, digital cameras). This evolution is reflected by the rise of the worldwide sales of semiconductor products shown in Fig. 1.1a for the last 25 years. For 2009, growth to 321 billion dollars is expected. The primary growth market is Asia-Pacific (46%), whereas Europe accounts for 15%. Figure 1.1b depicts the variation of the sales of analog semiconductor products over the last decade. Despite the enormous growth of digital applications, the category of analog electronics has experienced an increase similar to the growth of the total market, resulting in an almost constant contribution of about 15% of the entire market. Digital electronics and memories account for about 46% and 22%, respectively. The rest consists mainly of discrete and optoelectronic devices. In some domains, the share of analog products in the IC market may be higher, e.g., up to 40% for automotive ICs.¹

¹ [Source: WSTS, IC Insights].

Fig. 1.1. Worldwide revenues realized by the semiconductor industry, with actual data till 2006 and predictions for 2007: (a) total semiconductor industry; (b) analog semiconductor industry (absolute and relative analog sales) [Source: SIA].

An important driver for the growth of the semiconductor industry is the miniaturization of the devices, resulting in higher performance at lower cost. Smaller sizes make it feasible to put more transistors on a single chip, which allows more complex ICs with extended functionality. An example is shown in Fig. 1.2a for the number of transistors on microprocessors, which doubles almost every two years. This exponential increase of the complexity of ICs is known as Moore's law [7]. This trend is the result of several factors, like smaller transistors, larger dies, increased costs for making masks needed to fabricate the chips at lower cost per function, technological innovations and better design techniques (e.g., the availability of CAD tools) [6]. Overall, the reduction of the average cost per transistor on a chip is exponential, as depicted in Fig. 1.2b. Hence, for the same price, more sophisticated chips can be produced over the years.

Fig. 1.2. Exponential variation of the number of transistors on a microprocessor and their average price: (a) transistors on microprocessors, doubling every 24 months; (b) average price of transistors, halving every 21 months [Source: Intel].

Driven by the demands of the semiconductor industry to develop more complicated systems within a limited design time, systems which are subject to larger risks of design errors, the sales of the EDA industry have doubled in less than 10 years, as shown in Fig. 1.3, to more than 5 billion dollars in 2006.

Fig. 1.3. Worldwide revenues realized by the EDA industry [Source: EDA Consortium].

Whereas the first automation efforts were limited to specific tasks, like design entry tools, CAD programs are now encountered at various stages

throughout the design flow. Some examples of application areas are simulation (e.g., from system-level descriptions written in a Hardware Description Language (HDL) down to transistor schematics), physical design (e.g., placement of devices and routing of chips, and PCB layout tools), analysis (e.g., tools for analysis of timing, power and frequency behavior), design-for-test (e.g., automatic insertion of test structures), verification (e.g., formal checking of properties), and synthesis (e.g., logic synthesis and several optimization tools). The last three categories in particular are commercialized primarily for digital systems. For analog and mixed-signal designs, on the other hand, these EDA tools still reside mostly in academic and industrial research, with only a few commercial tools like simulators and some circuit optimizers (e.g., Cadence Virtuoso NeoCircuit, Synopsys Circuit Explorer, or Mentor Graphics Eldo Optimizer). For example, according to the ITRS of 2006, the degree of automation of analog system- or circuit-level design is merely 15% of that of its digital counterpart, and techniques to increase this level to 50% are not expected to emerge before 2020. As a result, the design time of even a small analog part may become a critical point in the overall time-to-market of mixed-signal designs.

1.1.3 A technological perspective

Semiconductor chips can be fabricated using various materials (e.g., Si, SiGe, GaAs or InP) providing different types of transistors, like bipolar transistors or MOSFETs. Currently, CMOS technologies are mostly adopted, mainly because they inherently consume less static power than the alternative options [5]. However, for specific applications, like high-power or high-speed circuits, other technologies can outperform CMOS. Consequently, different technologies to implement chips still coexist.


Scaling of the CMOS technology not only allows more functionality to be integrated on a single chip, but it also improves important properties. For example, ideal scaling results in a constant power density and a speed increasing linearly with smaller transistor sizes [1]. Actual technologies show a decrease of both the power per circuit and the delay time [2]. As a result, a new MOS generation is introduced about every 2–2.5 years with a minimum size about 1.4 times smaller. Figure 1.4a shows an example for the technologies used to build microprocessors. By 2008, 45 nm processes are the most advanced technologies according to this prediction.

Fig. 1.4. Evolution in silicon CMOS technologies: (a) technologies for commercial processors, halving every 5 years [Source: Intel]; (b) technologies of designs published at ISSCC (wireless receivers versus microprocessors).
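The exponential trends discussed above (transistor counts doubling roughly every 24 months, average transistor price halving about every 21 months, and feature sizes halving about every 5 years) can be checked with a few lines of back-of-the-envelope arithmetic. The sketch below is illustrative only: the starting points (Intel 4004 with 2,300 transistors in 1971, 10 µm technology) come from the text, while the 1970 transistor price of one dollar is an assumed round number, not a figure from this chapter.

```python
def project(start_value, start_year, year, period_months, factor=2.0):
    """Extrapolate an exponential trend that multiplies `start_value`
    by `factor` every `period_months` months."""
    months = (year - start_year) * 12
    return start_value * factor ** (months / period_months)

# Moore's law: transistor count doubles about every 24 months.
# Starting from the Intel 4004 (2,300 transistors, 1971):
count_2006 = project(2300, 1971, 2006, 24)             # a few 1e8 transistors

# Average transistor price: halves about every 21 months
# (the $1 starting price in 1970 is an assumed round number):
price_2006 = project(1.0, 1970, 2006, 21, factor=0.5)  # well below a micro-dollar

# Feature size: 1.4x smaller every 2-2.5 years, i.e. halving
# about every 5 years, starting from 10 um in 1971:
feature_2006 = project(10.0, 1971, 2006, 60, factor=0.5)  # ~0.08 um
```

The projected count of a few hundred million transistors is within a small factor of the 1.3 billion of the 2006 processor cited above, which illustrates how well these simple exponential fits track the industry over 35 years.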


The moments of introduction of new technologies differ for analog and digital electronics. As an illustration of this trend, Fig. 1.4b shows the minimum transistor lengths of two groups of CMOS chips presented at the International Solid-State Circuits Conference (ISSCC) over more than ten years. The group of analog front-ends for wireless receivers (e.g., GSM, WLAN or Bluetooth) usually employs a technology of a previous generation, whereas microprocessors are implemented using the most recent processes. This tendency is a manifestation of the productivity gap between analog and digital design. The advantages of miniaturization are more easily exploited by digital systems, partly due to the availability of EDA tools at different levels of the design process, which simplifies the management of large complex systems.

Besides the main characteristics of digital basic building blocks (i.e., power consumption and switching speed), the scaling of transistors influences other parameters that are important for analog circuits. Consequently, accurate models and appropriate circuit design techniques need to be developed to account for all deep-submicron effects. For example, smaller feature sizes come with lower supply voltages, which leave less room for signal swing and make some techniques infeasible (e.g., stacking multiple transistors as in cascode circuits). Short-channel effects reduce the intrinsic voltage gain of transistors (gm/gout), making the design of amplifiers and other circuits more challenging [2]. Also the matching of minimal-size devices degrades with scaling [3], which may result in the need for transistors with larger areas than the minimum dictated by the technology. On the other hand, higher cut-off frequencies fT of thinner transistors also make CMOS suited for circuits operating at higher frequencies. Furthermore, improvements in noise figure due to thermal noise can be expected as technology scales [4].
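The degradation of matching with scaling noted above [3] is commonly quantified with Pelgrom's area law, which states that the standard deviation of the threshold-voltage mismatch between two identically drawn transistors shrinks with the square root of the gate area: σ(ΔVT) = A_VT/√(WL). A minimal sketch of the resulting area trade-off follows; the value of the technology constant A_VT and the device sizes are illustrative assumptions, not data from this chapter.

```python
import math

# Assumed Pelgrom constant [mV*um], only indicative of a
# deep-submicron CMOS process.
A_VT_MV_UM = 3.5

def sigma_vt_mismatch(w_um, l_um):
    """Standard deviation of the VT mismatch of a device pair [mV]."""
    return A_VT_MV_UM / math.sqrt(w_um * l_um)

def area_for_target(sigma_mv):
    """Gate area W*L [um^2] needed to reach a target mismatch sigma."""
    return (A_VT_MV_UM / sigma_mv) ** 2

# A minimum-size device in an assumed 65 nm process mismatches badly:
sigma_min = sigma_vt_mismatch(0.065, 0.065)   # roughly 54 mV

# A 1 mV offset target forces a gate area far above the minimum size:
area = area_for_target(1.0)                    # 12.25 um^2
```

This simple model shows why analog transistors are routinely drawn much larger than the technology minimum, as the text observes.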
Modern CMOS technologies also allow on-chip passive components, like inductors, resistors and capacitors, to be built with reasonable quality factors at high frequencies [4]. Hence, all elements of an electronic system can be integrated onto a single chip: a SoC. Analog and digital circuits, memories and passive components are all fabricated with the same technology, which decreases the production cost. Furthermore, no structures need to be provided to route intermediate signals on- and off-chip between separate ICs. On the other hand, interference between the different parts (e.g., switching noise transmitted via the common substrate) may deteriorate the performance, especially of the analog section. Such effects further increase the complexity of the design process and hence widen the productivity gap. Accurate modeling and simulation tools and appropriate systematic design methodologies may provide a solution. When different technologies are necessary (e.g., when optical components or MEMS are used), single chips or SoCs can be stacked onto each other as an MCM or SiP.


1.1.4 A design perspective

Digital systems simplify electrical signals to a sequence of a discrete number of values (e.g., 0 and 1), usually at a certain rhythm orchestrated by a clock. Physically, the digital numbers correspond to quantized voltages or currents (e.g., zero and the maximal voltage). Electronic circuits are abstracted to gates, registers, operators or processors. The systems that this abstraction allows one to build process signals with high accuracy, are rather unaffected by natural noise thanks to the noise margins, and are able to represent any kind of information.

Despite all the advantages of digital electronics, the signals of the natural environment are continuous in both time and value. Electromagnetic waves like light and radio signals, mechanical phenomena like pressure and speed, sound and temperature are all examples of analog signals, which have to be detected or produced by analog electronic components before or after digital processing. Hence, analog systems remain indispensable. On the other hand, the performance of analog functions can be improved by digital techniques, so that most analog circuits are in fact mixed-signal. In other words, analog and digital electronics have a mutualistic relationship.

Since analog circuits deal with the real signals, non-ideal effects due to physical imperfections of the semiconductor technology have a large influence on the performance. For example, noise of various origins imposes a limit on the minimal detectable magnitude of the signals. The physics of semiconductor devices can be simplified to linear mathematical equations between currents and voltages for small signals, but complex nonlinear relations are required to describe large-signal behavior. The nonlinear behavior may even be exploited to realize certain functionality (e.g., in translinear circuits).
Furthermore, variation of the process parameters used in such equations and small deviations from the selected values for the transistor sizes are inevitable. Managing all these phenomena is what makes analog design cumbersome and challenging. Typically, a lot of experience is required to quickly identify the most important sources of performance degradation in an analog design. On a typical SoC, the design of the analog part may take as long as that of the digital part, while it only occupies a fraction of the chip area and uses a rather small number of transistors.

To realize high-quality designs, the analog designer often has to deal with circuit descriptions that are closely related to the actual physical realization, like transistor schematics. For high-frequency designs, effects of the actual layout should also be taken into account. When elementary blocks, like op amps, are distinguished, unwanted interaction between them should be accounted for. As a result, building large-scale designs is not as easy as for digital systems, which take advantage of the abstraction of low-level effects.

Several properties are needed to define the behavior of an analog circuit, e.g., power consumption, bandwidth, noise factors, nonlinear characteristics or frequency behavior. Furthermore, they are determined by various parameter values. This multidimensionality of analog circuits makes it infeasible to build a library of standard designs which can easily be re-used. Every situation then requires a new custom design. The building blocks of digital electronics, on the other hand, are more easily reused. This reality is another reason for the productivity gap of analog design compared to its digital counterpart.

Huge advancements in digital design were driven by the availability of EDA tools. For example, algorithms have been developed to realize logic functions written in a hardware description language as optimized structures of interconnected logic gates whose transistors are pre-dimensioned and drawn in a layout. To also speed up the analog design process while delivering designs of high quality, efficient systematic methodologies supported by appropriate analog CAD programs have to be developed. In this way, analog EDA can serve as a bridge between the possibilities of the new semiconductor technologies and the design practice of digital systems on the one hand, and the capabilities of typical low-level analog design on the other hand.

1.2 Analog design automation

1.2.1 Research areas

The first analog CAD tools only provided support for drawing transistor schematics and layouts. Nowadays, the analog EDA industry and researchers develop methods and algorithms to automate various aspects of the design cycle of analog and mixed-signal ICs:

Simulation. In the 1970s, one of the most popular CAD programs ever was released: SPICE. It is specialized in calculating the time-domain and AC behavior of analog circuits. Today, time-efficient computer simulations are an indispensable aid during the design process. Whereas the original simulation methods are still appropriate for small circuits operating at moderate frequencies, the collection of commercially available simulation techniques is now very diverse with, e.g., frequency-domain approaches, discrete-time methods and large-scale algorithms.

Modeling. The results of simulations depend on the quality of the models used to describe the behavior of the system. Therefore, considerable effort is needed to generate accurate descriptions. For example, transistor models originally devised for rather large dimensions and low frequencies have to be extended for deep-submicron technologies or RF applications. Beyond the level of individual transistors, various types of models are available for entire building blocks, with different trade-offs between accuracy and complexity, written in different hardware description languages. Also, algorithms have been invented to automatically generate good models.

Layout. Several tasks regarding the drawing of layouts of analog ICs qualify to a certain extent for automation: e.g., generation of the physical structure of individual devices like transistors, placement of them on the chip and the creation of interconnections between them. Especially for systems showing some repetition, like A/D converters, layout automation can be very effective. Tools for the extraction of layout parasitics and for checking the correctness of layouts are also commonly employed by analog designers nowadays.

Analysis. Traditionally, analog designers use hand calculations with simplified models to analyze the properties of a circuit, which are expressed, for instance, as characteristic numbers or transfer functions. New technologies and applications increase the need for new analysis techniques, e.g., for yield estimation, mismatch influences or noise effects in systems operating at high frequencies. Besides advanced analysis methods, various algorithms have been developed to automatically obtain interesting properties by numerical evaluation of the fundamental equations of the circuit. Furthermore, research over the last 20 years has resulted in methods to generate simplified symbolic expressions that help designers gain insight into the circuit behavior.

Synthesis. Designing an electronic circuit involves selecting the elements, their interconnection and their dimensions. In general, the analog designer chooses the circuit schematic, and the dimensions are obtained by execution of a design plan which translates the results of analysis into a set of equations from which the sizes are calculated. Several CAD tools have been developed to automate the sizing task by application of mathematical optimization techniques, usually for elementary building blocks like op amps or for repetitive structures like filters. More recently, efforts have been made to enhance the degree of automation of the design process towards the level of the actual synthesis of analog and mixed-signal systems built up from elementary blocks, which also includes the selection of the topology.

Verification. Simulation allows one to study the behavior of a system for a particular set of inputs.
Formal verification, on the other hand, tries to show the correctness of the circuit for all kinds of inputs. Formal notations of properties and techniques to prove them have to be developed. For analog electronics, this research area has only gained attention in the last 10 years.

Analog CAD tools for low-level tasks like generation of transistor layouts, extraction of parasitic elements and verification via simulations are now well established. Tools of a second generation provide solutions for analysis via numerical and symbolic derivation of performance characteristics, and for the selection of values for parameters like transistor sizes and capacitor values in basic building blocks. This book, however, focuses on a third generation of CAD tools, which target a higher level of abstraction covering larger analog systems, and on the corresponding modeling, analysis and synthesis techniques. The opportunities, conditions, problems, solutions and systematic methodologies for this new generation of tools are investigated.


1.2.2 High-level synthesis of analog and mixed-signal systems

Whereas synthesis methods for basic building blocks look for optimal values of the parameters, like device sizes and bias values, to realize a certain set of performance specifications, high-level synthesis deals with the generation of the architecture and the selection of its building blocks and their performance values. This design level enhances the possibilities to create more optimal designs. Indeed, instead of trying to find the best solution for a particular block, choosing another architecture may make that block superfluous. Therefore, techniques need to be examined to explore the design space of the different architectures suited for a particular function.

In this book, systems are considered which mainly consist of analog building blocks without large digital processing units. Digital signals occur as inputs or outputs of such systems (e.g., for data converters), or as control signals (e.g., to steer switches or to make a selection between different coefficient values). As a result, these systems are usually a subsystem of a larger electronic system which, besides the actual system, typically also contains sensors, actuators, DSP blocks and digital controllers, interfacing with the real world around which the electronic system is built. Both simple and multitask applications, like analog-to-digital conversion or the reception of wireless signals, are considered in this research.

The results of this research can be summarized around three main issues of analog high-level synthesis, corresponding to the three main parts of this book.

1. Analog design methodologies. First, the fundamental properties of the analog or mixed-signal design process are systematically analyzed. Chapter 2 formally introduces elementary concepts like description and abstraction levels, performance values and basic operations during the design process. They are summarized in a new framework: the generic design flow.
The generic character of this flow corresponds to its ability to fit most specific design flows used for analog systems. It is shown that a design strategy consists of a modeling strategy and a synthesis strategy which are inextricably tied up with each other. In Chap. 3, an extensive overview is presented of the design strategies and corresponding EDA tools developed during the last 25 years. A new classification system is used: based on their ability to deal with different architectures, four main groups are distinguished. In an attempt to combine the best properties, a new design strategy for analog systems is introduced: the high-level design flow based on generic behavior. The corresponding modeling and synthesis strategies are further elaborated in the next two parts.

2. Generic behavioral modeling. A modeling strategy suited for high-level synthesis should be able to represent different architectures easily. This requires models that are generic enough to facilitate the representation of various architectures. Further, time-efficient evaluation of the properties, e.g., via simulation, is necessary for extensive design space exploration. Therefore, a new methodology with generic behavioral models strongly tied to an evaluation approach has been developed. Two main types of such models are distinguished: time-domain and frequency-domain models. Time-domain generic behavioral models are discussed in Chap. 4. As an example, new models for the extended classes of continuous-time ∆Σ modulators and sampled-data systems are presented. For the development of frequency-domain generic behavioral models, a new mathematical framework is proposed in Chap. 5: the Phase-Frequency Transfer model, a generalization of earlier modeling concepts. Analog front-ends of receivers are taken as the explanatory example.

3. Top-down heterogeneous synthesis. The design strategy is completed by the development of a new algorithm for high-level optimization of analog and mixed-signal systems: the top-down heterogeneous optimization methodology. In Chap. 6, it is explained how this technique deals with the numerous heterogeneous aspects of high-level analog design, like different architectures, multifarious performance characteristics, various sources of design knowledge and diverse optimization targets. A prototype program exploring a part of the design space of analog-to-digital converters for different specifications has been implemented using the new method.

Finally, conclusions are presented in Chap. 7. The framework developed in this book for describing the high-level design process of analog and mixed-signal systems helps identify the problems and needs for the development of a new generation of EDA tools. The proposed solutions are suited for integration in CAD programs, which will reduce the productivity gap between analog and digital design practice, allow the possibilities of modern semiconductor technologies to be exploited, and open new opportunities for analog design.

References

[1] D. L. Critchlow. MOSFET Scaling—The Driver of VLSI Technology. Proceedings of the IEEE, 87(4):659–667, Apr. 1999.
[2] A. J. Joseph, D. L. Harame, B. Jagannathan, D. Coolbaugh, D. Ahlgren, J. Magerlein, L. Lanzerotti, N. Feilchenfeld, S. St. Onge, J. Dunn, and E. Nowak. Status and Direction of Communication Technologies—SiGe BiCMOS and RFCMOS. Proceedings of the IEEE, 93(9):1539–1558, Sept. 2005.
[3] P. R. Kinget. Device Mismatch and Tradeoffs in the Design of Analog Circuits. IEEE Journal of Solid-State Circuits, 40(6):1212–1224, June 2005.
[4] T. H. Lee and S. S. Wong. CMOS RF Integrated Circuits at 5 GHz and Beyond. Proceedings of the IEEE, 88(10):1560–1571, Oct. 2000.


[5] A. Masaki. Possibilities of Deep-Submicrometer CMOS for Very-High-Speed Computer Logic. Proceedings of the IEEE, 81(9):1311–1324, Sept. 1993.
[6] E. Mollick. Establishing Moore's Law. IEEE Annals of the History of Computing, 28(3):62–75, July–Sept. 2006.
[7] G. E. Moore. Cramming more components onto integrated circuits. Electronics, 38(8):114–117, Apr. 1965.
[8] S. Rusu, S. Tam, H. Muljono, D. Ayers, and J. Chang. A Dual-Core Multi-Threaded Xeon Processor with 16MB L3 Cache. In IEEE Int. Solid-State Circuits Conf., pages 102–103, San Francisco, Feb. 2006.
[9] F. M. Wanlass and C. T. Sah. Nanowatt Logic Using Field-Effect Metal-Oxide Semiconductor Triodes. In IEEE Int. Solid-State Circuits Conf., pages 32–33, Philadelphia, Feb. 1963.

Part I

Analog Design Methodologies

2 Foundations of Design Flows for Analog and Mixed-Signal Systems

2.1 Introduction

As electronic systems tend to offer more functionality and consume less power, design complexity increases continuously. As a result, design automation is an indispensable aid for delivering first-time-right, optimized designs with a time-to-market as short as possible. Successful development of EDA tools starts with the definition of a systematic methodology. Whereas systematic high-level synthesis is common practice for digital systems, analog synthesis remains more ad hoc, especially at higher abstraction levels. This chapter first describes the kind of analog and mixed-signal systems targeted in this work. Then, the fundamental properties of design flows for these systems are summarized. Finally, a general methodology for high-level analog and mixed-signal synthesis is presented which provides a framework for developing analog EDA tools.

2.2 Analog and mixed-signal systems

Engineers analyze and design various types of systems. In general, a system can be defined as a collection of interconnected components that transforms a set of inputs received from its environment into a set of outputs. In an electronic system, the vast majority of the internal signals used as interconnections are electrical. Inputs and outputs are also provided as electrical quantities, or converted from or to such signals using sensors or actuators. Figure 2.1 depicts four examples of electronic systems of different sizes in terms of basic electrical components like transistors and resistors. In Fig. 2.1a, an A/D converter is interpreted as a 'system' consisting of OTAs, capacitors, a quantizer and a DAC. Most of the components are analog building blocks, except for the blocks performing a data conversion, in which both analog and digital signals occur. Such an A/D converter is only a component of a larger system, like the RF front-end of a receiver shown


Fig. 2.1. Examples of different electronic systems: (a) a third-order ∆Σ analog-to-digital converter; (b) the RF front-end of a low-IF receiver; (c) a PLL with charge pump; (d) a typical embedded system model with sensors, actuators, analog front-ends, DSP and digital control interacting with the physical environment and the user.


in Fig. 2.1b. The clock signal can be generated, for instance, using a PLL as depicted in Fig. 2.1c. These three systems are merely subsystems of the system of Fig. 2.1d. That system is used to control some segment of the natural environment, whose phenomena are detected by sensors. After analog processing, most of the data processing happens using Digital Signal Processing (DSP), controlled by digital control logic that interacts with the user. Finally, the environment is influenced by actuators steered by the outputs of building blocks like power amplifiers. The methodologies developed in this work target analog and mixed-signal systems like those shown in Fig. 2.1a–2.1c. Hence, only the subsystems denoted by 'analog' in Fig. 2.1d are interpreted as 'systems'. All other components are considered part of the environment of the analog or mixed-signal system. More specifically, A/D converters like that shown in Fig. 2.1a are used in Chaps. 4 and 6. RF front-ends similar to that of Fig. 2.1b are frequently encountered in Chap. 5.

2.3 Description and abstraction levels

2.3.1 Definitions

Systematic design methodologies based on different design representations are commonly used when designing both analog and digital electronic systems [34, 4, 23]. Degradation of the main behavior due to physical effects, like parasitic resistances and capacitances, mismatch, noise and nonlinearities, is more important in analog than in digital systems. Indeed, the time-discrete and quantized character of digital signals makes them more immune to such non-idealities than the continuous-time and real-valued analog waveforms. This makes it more difficult to draw a strict distinction between abstraction levels in analog design flows, in contrast to common practice in digital design methodologies [33]. Instead, a discrimination should be made between a description level and an abstraction level [18]. Definitions 2.1 and 2.2 below indicate their meaning in this book.

Definition 2.1 (Description level). A description level, denoted by D = (E, I), is a pair of two alphabets: an alphabet of elementary elements E and an alphabet of interconnection types I. A system at a specific description level D = (E, I) can be represented by a graph S = (N, E) with nodes N = {n | n ∈ E} and edges E = {e | e ∈ I}.

Within the design flow of integrated analog and mixed-signal systems, five description levels are commonly used. An example of each description level for an integrator is depicted in Fig. 2.2:

Physical level. The system is described as a collection of rectangles or polygons at different layers (e.g., n-poly, p-diffusion, metal) corresponding to the physical layout of the chip.

Fig. 2.2. Example of a simple integrator represented at different description levels: (a) physical level; (b) circuit level; (c) macro level; (d) behavioral level; (e) functional level. The behavioral-level description (d) is a VHDL-AMS listing:

architecture behavioral of integrator is
  quantity vin across iin through inp to inn;
  quantity vout across iout through outp to outn;
  quantity vout1 : voltage;
begin
  T*vout1'dot + T*wp1*(vout1 + vout1'dot/wp2 + vout1'dot'dot/(wp1*wp2)) == vin + vin'dot/wz;
  iin == vin / Rin;
  iout == vout / Rout;
  if vout1 > Amax use
    vout == Amax;
  elsif vout1 < -Amax use
    vout == -Amax;
  else
    vout == vout1;
  end use;
end architecture behavioral;


Circuit level. A connection graph between basic elements (e.g., transistors, diodes, resistors, capacitors, inductors) represents the system. These basic elements are translated into the physical description by mapping individual elements, or small collections of them, onto a set of interconnected rectangles. The interconnections are conservative nodes, i.e., nodes at which Kirchhoff's current law applies.

Macro level. Macro-models are used to describe the system. They are built up out of controlled sources, resistances, capacitances, inductances, op amps, switches, etc. No one-to-one relation between the elements of the macro-model and the circuit elements can be identified. For example, a collection of elements of the macro-model may be mapped onto a single circuit element like a transistor, whereas a macro-model of an op amp can be mapped onto multiple transistors.

Behavioral level. The system consists of a collection of building blocks which are described by a set of mathematical relations between the input and output signals found at their ports. Characterization happens in the time or frequency domain, e.g., with differential and algebraic equations or transfer functions.

Functional level. Mathematical equations describe how the input information signals are mapped onto the output information signals. This operation can be represented by a signal flow graph.

Table 2.1 lists the alphabets E and I for the various description levels. These descriptions can be written in different languages, e.g., a textual netlist, a description in a hardware description language like VHDL-AMS [8], or a graphical representation like a schematic. Note that sometimes the same language can be used to represent systems at different description levels.

Table 2.1. Overview of alphabets with elements (E) and interconnections (I) for different description levels.

Physical:    Rectangles at different layers / Position of rectangles
Circuit:     Transistors, resistors, capacitors, inductors, diodes / Conservative nodes
Macro-model: Op amp, controlled sources, resistors, capacitors, inductors / Conservative nodes
Behavioral:  Blocks with mathematical relations between port variables / Conservative channels between ports
Functional:  Mathematical relations between variables / Information signals
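The behavioral-level description of the integrator in Fig. 2.2d can also be evaluated numerically. The sketch below is a minimal Python equivalent, simplified to a single dominant pole (the second pole and the zero of the VHDL-AMS listing are dropped) and using hypothetical parameter values, not values from the book:

```python
import math

# Hypothetical parameters: unity-gain time constant T, dominant pole wp1,
# and output saturation level Amax (all assumed for illustration).
T, wp1, Amax = 1e-6, 2 * math.pi * 100.0, 1.0

def simulate(vin, dt, n_steps):
    """Forward-Euler evaluation of T*vout1' + T*wp1*vout1 = vin(t),
    with output clipping vout = clip(vout1, -Amax, Amax) as in the
    'if/elsif/else use' clause of the behavioral model."""
    vout1, trace = 0.0, []
    for k in range(n_steps):
        dvdt = vin(k * dt) / T - wp1 * vout1
        vout1 += dt * dvdt
        vout = max(-Amax, min(Amax, vout1))   # output limiting
        trace.append(vout)
    return trace

# A small DC input drives the lossy integrator into saturation.
trace = simulate(lambda t: 0.01, dt=1e-8, n_steps=20000)
print(trace[-1])  # -> 1.0 (saturated at Amax)
```

Such a time-marching evaluation is exactly what a VHDL-AMS simulator performs (with a far more sophisticated solver) when it elaborates the behavioral description.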


A description level indicates how the analog or mixed-signal system is represented. An abstraction level, on the other hand, deals with the relation between the model of the system and its real behavior.

Definition 2.2 (Abstraction level). The abstraction level A of a description is the degree to which information about non-ideal effects or structure is neglected compared to the dominant behavior of the entire system.

Abstraction levels obtain their meaning from the comparison with other abstraction levels: by making the description more understandable via simplification, the abstraction level is increased; on the other hand, adding more details lowers the abstraction level. When information about the structure of the system is added to the representation, the hierarchy of descriptions is traversed towards a lower abstraction level. A system may be described at a certain description level at different levels of abstraction. Although it is clear that the functional level belongs to a 'high' abstraction level and the physical level to a 'low' abstraction level, it is not straightforward to compare the abstraction levels of different description levels. Indeed, a behavioral model may contain a detailed description of nonlinear effects whereas the macro-model may be linear. Extraction of a layout results in a circuit description of the system at the same abstraction level as the physical description. Figure 2.3 indicates which areas in the abstraction–description plane are usually covered during an analog design flow. The description levels overlap each other on the abstraction ladder. This property of the abstraction–description plane is consistent with the


Fig. 2.3. The abstraction–description plane showing the relation between abstraction and description levels. The darkness of the bars indicates the commonly accepted correlation between the abstraction and description level.


observation that some design methodologies incorporate only a few description levels. Due to the overlap one can easily jump, for instance, from the behavioral to the circuit level. The two white stars in Fig. 2.3 indicate the start and end points of the design flow. Designing an electronic system amounts to converting the functional specification at the highest abstraction level into a physical realization at the lowest abstraction level via operations in the abstraction–description plane.

2.3.2 Operations in the abstraction–description plane

Analysis and synthesis of analog and mixed-signal systems can formally be represented as moves in the plane of abstraction and description levels. In this book, four fundamental types of such operations are distinguished. They are denoted by refinement, simplification, translation and transformation. This subsection introduces the formal definitions used in this work.

Definition 2.3 (Refinement). A refinement RD maps the system S described with description level D = (E, I) onto a representation S′ at a lower abstraction level with the same description level D.


Systematic analysis implies the application of subsequent refinement operations on the system. Such an operation may introduce, for instance, an extra pole in a behavioral model or limit the gain of an op amp. The performance of the system can then be re-evaluated, possibly with different parameter values for the extra pole or gain. Refinement corresponds to a top-down vertical line in the abstraction–description plane as indicated in Fig. 2.4.
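As a sketch of such a refinement step (a hypothetical Python fragment, not from the book): an ideal behavioral-level integrator is refined by adding a parasitic pole, and a performance value, here the phase at an assumed unity-gain frequency, is re-evaluated.

```python
import cmath, math

def phase_deg(h, w):
    """Phase of transfer function h evaluated at angular frequency w, in degrees."""
    return math.degrees(cmath.phase(h(1j * w)))

wu = 2 * math.pi * 1e6            # unity-gain frequency (assumed value)
ideal = lambda s: wu / s          # behavioral model: ideal integrator
wp = 2 * math.pi * 5e6            # extra pole introduced by the refinement
refined = lambda s: wu / (s * (1 + s / wp))

# Refinement lowers the abstraction level: the phase at wu degrades
# from -90 degrees toward roughly -101 degrees for this pole location.
print(phase_deg(ideal, wu), phase_deg(refined, wu))
```

Re-evaluating the performance after each refinement, possibly with different values for the new pole, is precisely the iteration the text describes.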


Fig. 2.4. Examples of the four basic operations in the abstraction–description plane.


A refinement operation can also be used to introduce a subdivision of the system: several building blocks are identified and a model for each of the blocks is defined. Hence, adding knowledge about the structure of a system corresponds to descending the hierarchy of descriptions. Such an operation also includes the derivation of values for the parameters used in the models of the building blocks.

Definition 2.4 (Simplification). A simplification SD maps the system S described with description level D = (E, I) onto a representation S′ at a higher abstraction level with the same description level D.

Model generation consists of applying multiple simplification operations to a system representation to obtain a model with less accuracy, but one that is easier to interpret or to simulate. Simplification is the reverse operation of refinement: for example, an op amp is assumed to have infinite gain and no poles. Another commonly applied simplification is linearizing the system for small-signal analysis. In Fig. 2.4, the operation is represented by an upwards-oriented trajectory.

Definition 2.5 (Translation). A translation TD,D′ maps the system S described with description level D = (E, I) onto a representation S′ with description level D′ = (E′, I′), preserving the level of abstraction.

A translation usually happens between two adjacent description levels of Fig. 2.4, although a designer can jump over one level, e.g., the behavioral level. Consequently, a translation describes a horizontal trajectory in the abstraction–description plane from left to right or vice versa. Examples are the mapping of a transfer function onto an op amp circuit (from behavioral to macro level), or a layout extraction (from physical to circuit level).

Definition 2.6 (Transformation). A transformation TD maps the system S described with description level D = (E, I) onto a representation S′ at the same abstraction level with the same description level D.
Since a transformation does not increase or decrease the level of abstraction, applying transformations allows the exploration of different interconnection schemes between the elementary elements of E of the description level D = (E, I). Such a scheme is called an architecture or a topology. For example, a different filter type at the behavioral level [1] or another op amp structure at the circuit level may be selected using a transformation. A transformation is always linked with a motivation that gives the reason to select a new topology, e.g., a particular architecture of the system does not allow the specifications to be met, or a simpler architecture is required. Figure 2.4 depicts a transformation as a loop. A transformation always consists of two parts. First, a new architecture is generated by choosing a set of elementary elements out of E and corresponding


interconnections out of I. Then, the values for the parameters of the elementary elements are defined in a constraints transformation step. During analysis and synthesis, these fundamental operations are commonly combined in one step. For example, a translation is usually followed or preceded by a refinement or simplification. Further, simplification is usually applied before a transformation operation to make an abstraction of the non-ideal effects. At a higher abstraction level, it is easier to identify a mathematical equivalence between two topologies. When applying the operations, the designer has to deal with the fact that the behavior of the system is altered if the abstraction level is changed. This can cause a problem of correctness of the design. The behavior may be changed so much that the system no longer functions correctly. For example, due to extra poles the system may become unstable. When the abstraction level is increased with a simplification operation, careful examination of modeling errors leads to conditions which should be fulfilled to validate the simplification. Refinement requires the introduction of constraints to deal with the problem of correctness [5]: when new physical effects are introduced, constraints on the magnitude of these effects should be generated. An analog system cannot be designed at a certain abstraction level without knowledge of other abstraction levels, expressed as either conditions or constraints. These properties are needed to guarantee consistent functionality between systems at different abstraction levels.

2.4 Parameters and performance

At each stage in the design process, a point in the abstraction–description plane can be associated with the interim design. The design in turn corresponds to a set of parameters and a set of performance values. These properties are weakly coupled with the abstraction and description level of the design. The following definitions establish their meaning throughout this book.

Definition 2.7 (Parameter). A parameter p ∈ p(D) of a design at the description level D = (E, I) is a property of an elementary element E ∈ E or interconnection I ∈ I.

For example, at the circuit level, properties like sizes of transistors or capacitance values are parameters. At the behavioral level, poles and zeros or distortion coefficients are usually encountered.

Definition 2.8 (Performance value). A performance value P ∈ P(A) of a design at a certain abstraction level A is a property of the entire diagram representing the design.

At high abstraction levels, aspects like bit-error-rate or signal-to-noise ratio are used. Designs at lower abstraction levels can be characterized by properties


like input-referred noise level, delay or phase margin. Note that a performance value at a certain abstraction level can be a parameter at a higher abstraction level. The same performance value can also be used for different description levels.

Definition 2.9 (Performance function). The performance function ΠD,A of a design described with description level D at abstraction level A transforms parameters into performance values:

    ΠD,A : p(D) → P(A) : p → ΠD,A(p) = P.    (2.1)
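Definition 2.9 can be illustrated with a small Python sketch (hypothetical component values, not from the book): at the circuit level, the parameters of an RC low-pass filter are R and C, and a performance value at a higher abstraction level is its -3 dB bandwidth.

```python
import math

def performance(params):
    """Performance function Pi_{D,A}: circuit-level parameters (R, C)
    are mapped onto a performance value, the -3 dB bandwidth in Hz."""
    R, C = params["R"], params["C"]
    return {"f_3dB": 1.0 / (2 * math.pi * R * C)}

p = {"R": 1e3, "C": 159e-12}        # assumed component values
print(performance(p)["f_3dB"])       # close to 1 MHz
```

The bandwidth computed here could itself appear as a parameter one abstraction level up, e.g., in a behavioral front-end model, exactly as the note above observes.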

Aspects like computational time, accuracy, flexibility and set-up time have led to the development of different kinds of performance functions. Furthermore, different performance functions provide a designer with different levels of insight into the system and its performance. More insight makes it easier to reuse the analog circuit. In analog design, five types of performance functions are frequently encountered, which are listed in Table 2.2:

Table 2.2. Comparison between different kinds of performance functions (each of the five types is rated on evaluation time, set-up time, accuracy, flexibility and insight).

Simulation-based. A set of stimuli is defined and the design is simulated numerically to calculate the output signals from which the performance is determined. A plethora of techniques implemented in simulators exists: e.g., time-marching algorithms [41], harmonic balance in the frequency domain [24], time-discrete approaches [30], shooting methods [31], mixed frequency-time [20] or multi-time techniques [35]. Consequently, both CPU time and accuracy range from high to low levels. This approach is by far the most popular one to calculate the performance function. However, it provides only limited insight into the operation of the system.

Table-based. The performance as a function of the parameters of the system can be saved in some kind of table without any reference to the inner working. The table is obtained by executing many simulations, so that the very fast evaluation of the performance function is paid for by an increase of the initial set-up time. The flexibility is low since only pre-simulated sets of parameters can be used. Different approaches either derive an explicit




table [3] or use an optimization algorithm to derive a Pareto front [12]. The latter method limits the set-up time considerably and chooses Pareto-optimal combinations of parameter values to solve the problem of low flexibility.

User-defined equations. If the designer provides analytical expressions for the performance function, fast evaluation can be guaranteed during the design process [39, 27]. Insight into the system is indispensable to derive such expressions. The disadvantages are obvious: the equations are only valid in a limited area of the parameter space and each new topology requires a large effort to derive equations. Moreover, the derivation of an analytical expression in a simple form usually comes at the expense of accuracy [14]. However, in early design stages the problem of low accuracy may be less important [29]. Therefore, the derivation of user-defined equations is common practice among analog designers.

Performance models.¹ A template is proposed or constructed for the performance function and the coefficients of the template are calculated using a regression method. This method includes the execution of many simulations to sample the parameter space: the number of required simulations increases exponentially with the number of parameters. Several types of templates are employed in analog design, e.g., rational functions [38], spline functions [6], posynomials [9], support vector machines [10] and neural networks [43]. A more general approach [26] uses a grammar instead of a fixed template. A trade-off should be made between the accuracy of the models on the one hand and the complexity of the expressions and ease of interpretation on the other hand. Compared to a table-based approach, the performance model as performance function is not limited to the simulated points (the training set). Nevertheless, only a limited area of the parameter space is addressed for a fixed topology.

Symbolic analysis.
An explicit symbolic expression for the performance function is derived in an automated way, which offers the same benefits as a user-defined equation. Since a major problem is the complexity of the generated formula, several methods have been developed to simplify before, during or after the analysis [19, 42, 17]. A considerable gain in evaluation speed is achieved by saving the expression as a Determinant Decision Diagram (DDD) [37]. The resulting simplified expressions for the performance characteristics help to understand the influence of the parameters on the performance. Even more than performance models, symbolic analysis usually explores only a small area of the parameter space. Although some methods have been developed for nonlinear symbolic analysis [25], the primary applications of this technique are linear or weakly nonlinear performance functions.

¹ Sometimes also called 'macro models', but to avoid confusion with the macro description level, the term 'performance models' is preferred.
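As a sketch of the performance-model approach described above (hypothetical, using a plain quadratic template fitted by least squares rather than any specific method from the literature): the parameter space is sampled with 'simulations', and the template coefficients are obtained by regression.

```python
# Sketch of a performance model (assumed setup): fit the quadratic template
# P(p) ~ c0 + c1*p + c2*p^2 to samples of the true performance function,
# here played by a cheap stand-in for an expensive circuit simulation.
def simulate(p):
    return 1.0 / (1.0 + p)

samples = [i / 10 for i in range(11)]        # training set in [0, 1]
ys = [simulate(p) for p in samples]

def fit_quadratic(xs, ys):
    """Solve the 3x3 least-squares normal equations by Gaussian elimination."""
    A = [[sum(x**(i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                      # forward elimination
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

c = fit_quadratic(samples, ys)
model = lambda p: c[0] + c[1] * p + c[2] * p * p
print(abs(model(0.55) - simulate(0.55)))      # small fit error inside the range
```

Evaluating `model` is far cheaper than re-running the 'simulator', but, as the text notes, the fit is only trustworthy inside the sampled region of the parameter space.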


Sometimes the actual performance function contains too detailed information about the system, for example, during architectural exploration. If it is sufficient to know whether a performance specification can be achieved, the feasibility function can be used. It indicates the feasible performance range of the system.

Definition 2.10 (Feasibility function). The feasibility function ΦD,A corresponding to a performance function ΠD,A maps the performance space onto boolean values:

    ΦD,A : P(A) → B : P → ΦD,A(P) = 1 if ∃p ∈ p(D) : ΠD,A(p) = P, and 0 if ∀p ∈ p(D) : ΠD,A(p) ≠ P.    (2.2)

This function can be calculated from the performance function using definition (2.2). Other approaches derive a performance model directly for the feasibility function. For example, in [22] and [15], support vector machines are proposed as a template directly for the feasibility function.
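A brute-force sketch of Definition 2.10 (hypothetical stand-in performance function and parameter grid; the spec is treated as a lower bound rather than an exact equality, for clarity):

```python
import math

def performance(p):
    """Stand-in performance function: gain-bandwidth (Hz) of a hypothetical
    single-stage amplifier as a function of a bias-current parameter p."""
    return 2e9 * math.sqrt(p)      # assumed, monotone in p for clarity

def feasible(P_spec, p_grid):
    """Feasibility function Phi: 1 if some parameter value reaches the
    specified performance (here: GBW >= P_spec), 0 otherwise."""
    return int(any(performance(p) >= P_spec for p in p_grid))

grid = [i * 1e-4 for i in range(1, 101)]   # bias currents 0.1 mA .. 10 mA
print(feasible(1e8, grid), feasible(1e9, grid))  # -> 1 0
```

In practice the sampling is replaced by the performance-model techniques cited above, e.g., a support-vector-machine template, since exhaustive evaluation becomes intractable for many parameters.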

2.5 Design flows for analog and mixed-signal systems

2.5.1 The generic design flow

A design process formally consists of the application of a chain of operations in the abstraction–description plane from specification to implementation. As indicated in Fig. 2.3, the most extended design flow starts from a description of the functionality, usually written in some hardware description language, and ends with a layout ready to be fabricated. Usually, however, different designers deal with only a part of the total flow, e.g., a system designer from the functional specification to a behavioral model of a specific architecture, or a layout designer from circuit to physical level. The four basic operations can be put together in an iterative design process, called the generic design flow in this work and depicted in Fig. 2.5. The system to be designed is described with a certain description level D at a particular abstraction level A. The results of the performance function (2.1), the specifications and the acceptable tolerances on the specifications are used to perform an evaluation of the interim design. The outcome of this evaluation determines the course of the flow to follow:

• If the design fails to meet the specifications, a transformation of either the architecture or its parameters should be applied. Simplification can remove details to make it easier to choose which transformation should be selected. For example, by looking at the ideal information flow, mathematical equivalences can be exploited to decrease the influence of the parasitic signal flows without affecting the information signals.



Fig. 2.5. The generic design flow within a specific description level for analog and mixed-signal systems using the four basic operations: translation, refinement, transformation and simplification.

• If the specifications are met, but the abstraction level does not correspond to the wishes of the designer, details should be added or removed. In order to enable a refinement operation which will take more physical effects into account, it may be necessary to make a choice about the implementation style (e.g., Gm–C or R–C integrators in a filter design).

• Finally, the design within the description level is finished once all specifications are met and enough details are available. A translation will be required to convert the representation of the system to the next description level, which can lie to the left or the right in the abstraction–description plane.

The generic design flow is a framework into which most tasks supported by EDA tools for analog and mixed-signal systems can be fitted. For example, the transformation loop refers to design exploration, the refinement loop to analysis and the simplification loop to model reduction and generation. For practical design purposes, the generic design flow is usually simplified.
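The decision structure of the generic flow in Fig. 2.5 can be summarized as a skeleton loop. The Python sketch below is hypothetical (the system representation and the predicates are toy stand-ins, not from the book): evaluate, then transform, refine or simplify until the specifications are met at the desired level of detail.

```python
def generic_design_flow(system, specs_met, enough_detail, too_much_detail,
                        transform, refine, simplify, max_iter=100):
    """Skeleton of the generic design flow within one description level."""
    for _ in range(max_iter):
        if not specs_met(system):
            system = transform(system)     # new topology or parameter values
        elif too_much_detail(system):
            system = simplify(system)      # raise the abstraction level
        elif not enough_detail(system):
            system = refine(system)        # add non-ideal effects
        else:
            return system                  # ready for translation
    raise RuntimeError("no feasible design found")

# Toy instantiation: 'system' is (performance, detail); spec: perf >= 10, detail >= 3.
done = generic_design_flow(
    (0, 0),
    specs_met=lambda s: s[0] >= 10,
    enough_detail=lambda s: s[1] >= 3,
    too_much_detail=lambda s: s[1] > 5,
    transform=lambda s: (s[0] + 4, s[1]),
    refine=lambda s: (s[0], s[1] + 1),
    simplify=lambda s: (s[0], s[1] - 1))
print(done)  # -> (12, 3)
```

The exit of the loop corresponds to the translation step of Fig. 2.5; a real flow would re-run the loop at the next description level.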


2.5.2 Simplified generic design flows

Different design flows originate from the selection of both the kind of operations and the order in which they are used. Selection of a design methodology for a particular case depends on several criteria:

• Design complexity: the level of complication of the design, indicated by the number of different fundamental components like transistors and capacitors, the number of interconnections between them and the degree of 'chaos' due to a lack of repetitive structures
• Design time: the time available to deliver a product that fulfills the given specifications well enough
• Design reuse: the degree to which parts of the design can be recycled from or for other projects, e.g., by using libraries or analog IP
• Design insight: the information that is obtained during the design about various, usually parasitic, physical effects or about the functionality of the system
• Redesign: the risk of redesign when it turns out that some physical effects were underestimated
• Design stage: the area of the abstraction–description plane where the design flow is applied: at higher abstraction or description levels, a different design methodology may be more appropriate than at lower ones
• Exploration of design space: the ability of the method to explore a large design space, with different architectures and/or parameter values

The actual tasks to perform during the synthesis process depend on these properties. For example, the inputs and outputs of the process are related to the design stage. The abilities regarding design space exploration determine whether only parameter values or also an architecture should be chosen. During the design of an analog or mixed-signal system, the abstraction level can successively be lowered or raised, or left unchanged. These three ways of coping with the abstraction level correspond to the three major design methodologies that are frequently adopted in analog synthesis.
The major properties of these methods are summarized in Table 2.3:

Flat methodology. In this approach, a translation operation is usually performed directly to the circuit level. Neither refinement nor simplification steps are used. Instead, an optimal set of parameters (like widths and lengths of transistors) is found via a design plan [13, 21] or an optimization strategy [18, 2]. The main disadvantage of this methodology is the limited complexity of the designs it can handle in a reasonable time [32]. On the other hand, all non-ideal effects can directly be taken into account and insight about them can be acquired. As a result, its main application is the design of basic building blocks like VCOs, op amps or amplifiers.

Bottom-up methodology. The design process most often starts with a description of the building blocks at the circuit level. They are synthesized independently of each other, after which the entire system is assembled. Therefore, no refinement operations are needed in the design flow. A great


Table 2.3. Comparison between three principal design flows.

Complexity. Flat: limited to building block level. Bottom-up: from blocks to small systems. Top-down: from blocks to large systems.

Design time. Flat: increases rapidly with the number of components. Bottom-up: limited for experienced designers. Top-down: depends on the method used to transform constraints.

Design reuse. Flat: seldom possible. Bottom-up: largely available if multiple cell designs are used. Top-down: blocks with proper constraints.

Insight. Flat: about all low-level effects. Bottom-up: about the influence of physical effects on non-idealities. Top-down: about the influence of non-idealities on functionality.

Redesign. Flat: all details are already present. Bottom-up: when the safety margins are chosen too small. Top-down: only lower-level blocks in case of wrong constraints.

Design stage. Flat: at low abstraction and circuit levels. Bottom-up: from low to medium levels of abstraction. Top-down: from medium to high levels of abstraction.

Exploration. Flat: only of parameter values. Bottom-up: multiple cell architectures may be available. Top-down: architectures and parameter values.

advantage is the possibility of concurrent design of different blocks. The major challenge lies in ensuring correct behavior of the system after assembly. To this end, the designer traditionally selects safety margins based on his or her experience. To avoid redesign when the margins are chosen too small, multiple designs of the building blocks can be generated in the case of automated design, from which the best solution is selected during assembly. For example, only the Pareto-optimal cell designs can be used for system-level exploration [16].

Top-down methodology. To cope with complex designs starting from functional specifications, the large system is divided into smaller blocks in the top-down design methodology [23]. The design of a complex system at a high abstraction level corresponds to deriving the constraints of the building blocks and determining the influence of their non-idealities on the overall functionality. Applying such constraint transformation steps results in a top-down constraint-driven approach [4]. Simplification operations are not used in this method. However, the main problem is to find an optimal and feasible set of constraints for the building blocks without


knowledge about the details of their implementation. Instead, they are represented, for example, by behavioral models [40, 27], performance and feasibility models [36], Pareto fronts [28] or by sensitivities [7].

The different properties of these three methods shown in Table 2.3 suggest that no single design flow is suitable for all situations. An experienced designer may design a rather small system in a relatively short time using a flat approach. A multi-cell bottom-up approach, on the other hand, may be more appropriate if well-understood, in-house designed cells of basic building blocks are available. With a systematic top-down approach, an optimized large design may be delivered that could be infeasible for other approaches due to its complexity. Combinations of the basic methodologies can also be adopted. For example, performance models can first be generated in a bottom-up way and then be used in a top-down design flow. The design flow for analog systems then resembles the 'meet-in-the-middle' design strategy used to design digital systems [11]: the system design is separated from the design of the basic silicon modules, and both design flows meet in the middle of the abstraction levels.

2.5.3 Specification of the generic design flow

To use the generic design flow or one of its simplified variants in an actual design process, both a modeling and a synthesis strategy must be drawn up:

Modeling strategy. This strategy determines how system S in Fig. 2.5 is represented. The description level used is a first criterion to select a modeling style. Furthermore, a good modeling strategy makes it easy to evaluate the performance function. Finally, to support the selected synthesis strategy, properties like the ease of representing different architectures at different abstraction levels may be important.

Synthesis strategy. To synthesize an analog or mixed-signal system, the basic operations of Fig. 2.5 should be executed in sequence.
The synthesis strategy indicates the kind and order of the operations to be applied during the different design stages. It determines issues like how to select a topology, how to change parameters and which details should be added in a refinement step. Together, the modeling and synthesis strategies form a design strategy. Defining a systematic design methodology results in the identification of design tasks for which CAD tools can be developed.
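As mentioned above, building blocks in a top-down constraint transformation can be represented by Pareto fronts of pre-characterized cell designs. The following minimal sketch (not taken from any of the cited tools; the sizing numbers are invented for illustration) shows the core operation: filtering a set of candidate sizings down to its non-dominated subset.

```python
def pareto_front(points):
    """Keep the non-dominated points, minimizing every objective.
    q dominates p if q is no worse in all objectives and strictly
    better in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical op-amp sizings as (power in mW, input noise), both minimized.
sizings = [(1.0, 9.0), (2.0, 5.0), (4.0, 4.5), (3.0, 8.0), (2.5, 5.0)]
front = pareto_front(sizings)
```

Only the front (here three of the five sizings) needs to be passed upward for system-level exploration; dominated sizings can never be part of an optimal system-level solution.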

2.6 Conclusions

This chapter presents the formal foundations for systematic design methodologies for analog and mixed-signal systems. The tasks performed during different design stages are represented on a plane with abstraction and description levels. The identification of basic operations in this plane then leads to the definition of the generic design flow. Specifying a modeling and synthesis strategy results in a systematic design methodology. Chapter 3 examines different design strategies supported by EDA tools. A new approach based on a modeling strategy with generic behavioral models is proposed and investigated in the remaining chapters.

References

[1] B. A. A. Antao. Architectural Exploration for Analog System Synthesis. In IEEE Custom Integrated Circuits Conf., pages 529–532, Santa Clara, May 1995.
[2] T. Binder, C. Heitzinger, and S. Selberherr. A Study on Global and Local Optimization Techniques for TCAD Analysis Tasks. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 23(6):814–822, June 2004.
[3] R. J. Bishop, J. J. Paulos, M. B. Steer, and S. H. Ardalan. Table-Based Simulation of Delta-Sigma Modulators. IEEE Trans. on Circuits and Systems, 37(3):447–451, Mar. 1990.
[4] H. Chang, E. Charbon, U. Choudhury, A. Demir, E. Felt, E. Liu, E. Malavasi, A. Sangiovanni-Vincentelli, and I. Vassiliou, editors. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits. Kluwer Academic, 1996.
[5] H. Chang, A. Sangiovanni-Vincentelli, F. Balarin, E. Charbon, U. Choudhury, G. Jusuf, E. Liu, E. Malavasi, R. Neff, and P. R. Gray. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits. In IEEE Custom Integrated Circuits Conf., pages 8.4.1–8.4.6, Boston, May 1992.
[6] C.-Y. Chao and L. Milor. Performance Modeling of Analog Circuits Using Additive Regression Splines. In IEEE Custom Integrated Circuits Conf., pages 301–304, San Diego, May 1994.
[7] E. Charbon, E. Malavasi, and A. Sangiovanni-Vincentelli. Generalized Constraint Generation for Analog Circuit Design. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 408–414, Santa Clara, Nov. 1993.
[8] E. Christen and K. Bakalar. VHDL-AMS—A Hardware Description Language for Analog and Mixed-Signal Applications. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 46(10):1263–1272, Oct. 1999.
[9] W. Daems, G. Gielen, and W. Sansen. Simulation-Based Generation of Posynomial Performance Models for the Sizing of Analog Integrated Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(5):517–534, May 2003.


[10] F. De Bernardinis, M. I. Jordan, and A. Sangiovanni-Vincentelli. Support Vector Machines for Analog Circuit Performance Representation. In IEEE/ACM Design Automation Conf., pages 964–969, Anaheim, June 2003.
[11] H. De Man, J. Rabaey, J. Vanhoof, G. Goossens, P. Six, and L. Claesen. CATHEDRAL-II—a computer-aided synthesis system for digital signal processing VLSI systems. IEE Computer-Aided Engineering Journal, 5(2):55–66, Apr. 1988.
[12] B. De Smedt and G. G. E. Gielen. Watson: Design Space Boundary Exploration and Model Generation for Analog and RF IC Design. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(2):213–224, Feb. 2003.
[13] M. G. R. Degrauwe, O. Nys, E. Dijkstra, J. Rijmenants, S. Bitz, B. L. A. G. Goffart, E. A. Vittoz, S. Cserveny, C. Meixenberger, G. van der Stappen, and H. J. Oguey. IDAC: An Interactive Design Tool for Analog CMOS Circuits. IEEE Journal of Solid-State Circuits, 22(6):1106–1116, Dec. 1987.
[14] M. del Mar Hershenson, S. P. Boyd, and T. H. Lee. Optimal Design of a CMOS Op-Amp via Geometric Programming. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 20(1):1–21, Jan. 2001.
[15] M. Ding and R. Vemuri. A Combined Feasibility and Performance Macromodel for Analog Circuits. In IEEE/ACM Design Automation Conf., pages 63–68, Anaheim, June 2005.
[16] T. Eeckelaert, T. McConaghy, and G. Gielen. Efficient Multiobjective Synthesis of Analog Circuits using Hierarchical Pareto-optimal Performance Hypersurfaces. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 1070–1075, Munich, Mar. 2005.
[17] F. Fernández, A. Rodríguez-Vázquez, J. L. Huertas, and G. G. E. Gielen, editors. Symbolic Analysis Techniques: Applications to Analog Design Automation. Wiley-IEEE, New York, 1997.
[18] G. G. E. Gielen and R. A. Rutenbar. Computer-Aided Design of Analog and Mixed-Signal Integrated Circuits. Proceedings of the IEEE, 88(12):1825–1854, Dec. 2000.
[19] G. G. E. Gielen, H. C. C. Walscharts, and W. M. C. Sansen. ISAAC: A Symbolic Simulator for Analog Integrated Circuits. IEEE Journal of Solid-State Circuits, 24(6):1587–1597, Dec. 1989.
[20] R. Griffith and M. S. Nakhla. Mixed Frequency/Time Domain Analysis of Nonlinear Circuits. IEEE Trans. on Computer-Aided Design, 11(8):1032–1043, Aug. 1992.
[21] R. K. Henderson, L. Astier, A. El Khalifa, and M. Degrauwe. A Spreadsheet Interface for Analog Design Knowledge Capture and Re-use. In IEEE Custom Integrated Circuits Conf., pages 13.3.1–13.3.4, San Diego, May 1993.


[22] T. Kiely and G. Gielen. Performance Modeling of Analog Integrated Circuits using Least-Squares Support Vector Machines. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 448–453, Paris, Feb. 2004.
[23] K. Kundert, H. Chang, D. Jefferies, G. Lamant, E. Malavasi, and F. Sendig. Design of Mixed-Signal Systems-on-a-Chip. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 19(12):1561–1571, Dec. 2000.
[24] K. S. Kundert and A. Sangiovanni-Vincentelli. Simulation of Nonlinear Circuits in the Frequency Domain. IEEE Trans. on Computer-Aided Design, 5(4):521–535, Oct. 1986.
[25] A. Manthe, Zhao Li, and C.-J. R. Shi. Symbolic Analysis of Analog Circuits with Hard Nonlinearity. In IEEE/ACM Design Automation Conf., pages 542–545, Anaheim, June 2003.
[26] T. McConaghy, T. Eeckelaert, and G. Gielen. CAFFEINE: Template-Free Symbolic Model Generation of Analog Circuits via Canonical Form Functions and Genetic Programming. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 1082–1087, Munich, Mar. 2005.
[27] F. Medeiro, B. Pérez-Verdú, A. Rodríguez-Vázquez, and J. L. Huertas. A Vertically Integrated Tool for Automated Design of Σ∆ Modulators. IEEE Journal of Solid-State Circuits, 30(7):762–772, July 1995.
[28] D. Mueller, G. Stehr, H. Graeb, and U. Schlichtmann. Deterministic Approaches to Analog Performance Space Exploration (PSE). In IEEE/ACM Design Automation Conf., pages 869–874, Anaheim, June 2005.
[29] H. Onodera, H. Kanbara, and K. Tamaru. Operational-Amplifier Compilation with Performance Optimization. IEEE Journal of Solid-State Circuits, 25(2):466–473, Apr. 1990.
[30] A. Opal. Sampled Data Simulation of Linear and Nonlinear Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 15(3):295–307, Mar. 1996.
[31] J. R. Parkhurst and L. L. Ogborn. Determining the Steady-State Output of Nonlinear Oscillatory Circuits Using Multiple Shooting. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 14(7):882–889, July 1995.
[32] R. Phelps, M. J. Krasnicki, R. A. Rutenbar, L. R. Carley, and J. R. Hellums. A Case Study of Synthesis for Industrial-Scale Analog IP: Redesign of the Equalizer/Filter Frontend for an ADSL CODEC. In IEEE/ACM Design Automation Conf., pages 1–6, Los Angeles, June 2000.
[33] J. M. Rabaey. Digital Integrated Circuits. A Design Perspective. Prentice-Hall, New Jersey, 1996.


[34] T. Riesgo, Y. Torroja, and E. de la Torre. Design Methodologies Based on Hardware Description Languages. IEEE Trans. on Industrial Electronics, 46(1):3–12, Feb. 1999.
[35] J. Roychowdhury. Analyzing Circuits with Widely Separated Time Scales Using Numerical PDE Methods. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 48(5):578–594, May 2001.
[36] J. Shao and R. Harjani. Macromodeling of Analog Circuits for Hierarchical Circuit Design. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 656–663, San Jose, Nov. 1994.
[37] C.-J. R. Shi and X.-D. Tan. Canonical Symbolic Analysis of Large Analog Circuits with Determinant Decision Diagrams. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 19(1):1–18, Jan. 2000.
[38] M. A. Styblinski and S. Aftab. Combination of Interpolation and Self-Organizing Approximation Techniques—A New Approach to Circuit Performance Modeling. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 12(11):1775–1785, Nov. 1993.
[39] K. Swings, G. Gielen, and W. Sansen. An Intelligent Analog IC Design System Based on Manipulation of Design Equations. In IEEE Custom Integrated Circuits Conf., pages 8.6.1–8.6.4, Boston, May 1990.
[40] I. Vassiliou, H. Chang, A. Demir, E. Charbon, P. Miliozzi, and A. Sangiovanni-Vincentelli. A Video Driver System Designed Using a Top-Down, Constraint-Driven Methodology. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 463–468, San Jose, Nov. 1996.
[41] A. Vladimirescu. The SPICE book. Wiley, New York, 1994.
[42] P. Wambacq, F. V. Fernández, G. Gielen, W. Sansen, and A. Rodríguez-Vázquez. Efficient Symbolic Computation of Approximated Small-Signal Characteristics of Analog Integrated Circuits. IEEE Journal of Solid-State Circuits, 30(3):327–330, Mar. 1995.
[43] G. Wolfe and R. Vemuri. Extraction and Use of Neural Network Models in Automated Synthesis of Operational Amplifiers. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(2):198–212, Feb. 2003.

3 Analog and Mixed-Signal Design Strategies

3.1 Introduction

The analog designer at the start of the 21st century has several design strategies at his or her disposal to synthesize a well-functioning electronic system. Which strategy to choose depends on issues like the complexity of the system with respect to size and performance requirements, the available computational power, and the technology used to produce the end product. Furthermore, different modeling and synthesis strategies can be selected during different stages of the design process. This chapter first presents a new classification scheme, along with brief descriptions of design strategies supported by EDA tools developed by researchers and companies, to help the analog designer select the right approach for the right task. At the end of this chapter, a new general design strategy is introduced and defined to overcome limitations of existing approaches for high-level design of analog and mixed-signal systems.

3.2 Classification

The most traditional procedure to design an analog circuit starts with the selection of a topology, usually at circuit level. After analysis with simplified equations, a design plan is formulated to find values for the parameters that yield correct behavior [103]. Modern design strategies suited for integration in EDA tools differ from this traditional approach in several aspects:

Place in abstraction–description plane. Different design strategies may be appropriate at different abstraction and/or description levels. Further, one or more description levels may be included in the synthesis techniques. For example, layout-aware approaches operate on both circuit and physical levels when determining the dimensions of the transistors [117, 98, 44].


Flexibility. Some strategies are only applicable to a specific topology or to a class of systems, whereas more general approaches can deal with multiple architectures or circuit types. Also, the modeling strategy of the design method may limit its application [32].

Exploration of architectures. Several approaches require that the designer selects the entire architecture, or parts of it, from a limited set of available topologies that the design strategy can deal with [50, 63, 115]. On the other hand, systematic exploration of a design space with different architectures can be part of the synthesis strategy [67, 65, 36].

Optimization. To examine the design space, different values for the parameters and/or different architectures must be selected. Whereas a designer can make proposals based on his or her experience or on analysis, most modern methods for designing analog and mixed-signal systems include an optimization algorithm [56, 46, 11].

These four characteristics of design strategies can be used to distinguish different classes of design approaches. For example, a classification method may focus on the place in the abstraction–description plane (e.g., front-end strategies from specifications to circuit-level description versus back-end strategies from circuit to layout [15]) or on the optimization method used (e.g., circuit sizing methods [46]). However, a gain in circuit performance is obtained not only by finding optimal parameters for a particular architecture, but also by selecting a proper architecture. Indeed, the architecture selected for a specific application can have an inherently worse performance than an alternative one. Therefore, the base criterion for the classification system used in this book is the method used to obtain the architecture or topology. Based on this principle, four main classes can be recognized:

1. Selection before or after dimensioning. Determining optimal values for the parameters is separated from the selection of the architecture.
The designer uses his or her experience or a knowledge-assistant tool to choose a topology. Alternatively, multiple architectures can be dimensioned, after which the best solution is selected. Either way, only a limited set of architectures is available in a library. This class is elaborated in Section 3.3.1.

2. Selection during dimensioning. The synthesis strategy allows an architectural selection to be made during the execution of the dimensioning algorithm. Topological options are stored in a library, either as entire architectures or as different choices for subblocks of a generalized architecture. This generalization makes a larger set of architectures available for exploration. Section 3.3.2 summarizes these approaches.

3. Top-down creation. Instead of selecting the architecture from a library, the functionality of the system is described at a higher abstraction level, usually with some kind of hardware description language. Via subsequent translation steps, this description is mapped onto a specific topology.


Dimensioning happens either during or after the mapping operation, for example in a constraint transformation step, as shown in Section 3.3.3.

4. Bottom-up generation. Finally, the architecture can be created starting from a low abstraction level. The design usually starts at the circuit level by connecting individual transistors with each other in a knowledge-based, systematic or stochastic way. It is straightforward to obtain several new circuit topologies with this class of design approaches, which is described in Section 3.3.4.

Notice that besides the architecture or topology of a circuit and its nominal parameter values, other important issues should also be considered in analog design, like layout, (substrate) noise effects and robustness with respect to technological and operational variations. Although some techniques described in this chapter are also applicable to some of these issues, the main emphasis is on methods to obtain the architecture and values for its parameters. The following historic overview illustrates these methods in more detail.

3.3 Historic overview

The new classification system based on the method used for architectural exploration is now used to describe the most important design strategies supported by CAD tools developed over the past two decades.

3.3.1 Selection before or after dimensioning

Traditional design strategies first select the architecture and then alter its parameters to achieve the required performance. If at the end the specifications are not met, costly redesign is needed. The corresponding design flow is depicted in Fig. 3.1; Figure 3.2 shows an example of this design flow for a simple OTA. The proposition of new values by the optimizer corresponds to a transformation applied to the parameters in the generic design flow of Fig. 2.5. Neither simplification nor refinement operations are present in this simplified design flow. The performance is calculated by evaluating one of the five types of performance functions listed in Section 2.4.

To fit common design habits, a first generation of EDA tools focused on the automation of the two tasks of the traditional design approach: the topology selection and the search for the optimal parameter values. Furthermore, automating the process of finding the parameter values creates the possibility of selecting multiple topologies, dimensioning them and making a final selection afterwards. In the case of multi-objective algorithms, multiple combinations of parameter values are returned for a single topology, from which the best one is then selected [29]. Redesign is only necessary when none of the dimensioned topologies meets the specifications. As indicated in Fig. 3.1, such redesign comes down to initially selecting another topology.
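The select-after-dimensioning variant of this flow can be sketched in a few lines. All topology names, sizing numbers and specification thresholds below are invented for illustration; the `size`, `meets_specs` and `cost` callables stand in for a real sizing tool, specification check and cost estimate.

```python
def select_after_dimensioning(topologies, size, meets_specs, cost):
    """Dimension every candidate topology, drop the ones that miss the
    specifications, and keep the cheapest survivor. Returning None
    signals that redesign (another initial selection) is needed."""
    dimensioned = [(name, size(name)) for name in topologies]
    feasible = [(name, p) for name, p in dimensioned if meets_specs(p)]
    if not feasible:
        return None
    return min(feasible, key=lambda item: cost(item[1]))

# Toy stand-ins for the sizing results of three hypothetical op-amp topologies.
sizings = {
    "miller_ota":     {"gain_db": 82, "power_mw": 2.1},
    "telescopic_ota": {"gain_db": 68, "power_mw": 0.9},
    "folded_cascode": {"gain_db": 78, "power_mw": 3.0},
}
chosen = select_after_dimensioning(
    sizings,
    size=lambda name: sizings[name],
    meets_specs=lambda p: p["gain_db"] >= 70,
    cost=lambda p: p["power_mw"],
)
```

Here the lowest-power topology misses the gain specification, so the final selection falls back to the cheapest feasible alternative, exactly the trade-off the final-selection step in Fig. 3.1 resolves.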

Fig. 3.1. Schematic representation of the design strategy applying selection of the topology before or after dimensioning.

Fig. 3.2. Example of a typical design flow for a basic analog cell consisting of topology selection followed by circuit sizing [46].

The design strategy of Fig. 3.1 can be applied at different description levels. For example, for specific building blocks, like op amps, the circuit level is usually selected. Sometimes this is combined with the physical level, e.g., for mixers [117], VCOs [28] or LNAs [87]. On the other hand, the behavioral level is more appropriate for larger systems, like ∆Σ A/D converters [81], PLLs [119] or complete RF systems [22].

Selection mechanisms

Selecting the appropriate topology for a specific application is a challenging task requiring lots of experience. Therefore, in early design tools (e.g., IDAC


[30]) it is up to the user to make the selection. Other approaches can be divided into three groups:

• Rule-based selection: sets of rules are delivered with each architectural option to guide the selection process.
• Use of a feasibility function: comparing the feasibility function of each topology with the specifications limits the number of topologies to dimension.
• Trade-off estimation: the best architecture corresponds to the one with the highest (or lowest) estimated trade-off, expressed as a figure of merit.

Tools have been developed to automate these approaches to a certain extent.

Rules

Each topology available in a library is accompanied by a set of rules indicating when a particular architecture is suited for a specific application. For example, some op amp variant delivers a high gain but has a small bandwidth. Heuristics provided by an experienced designer form a first base to develop qualitative rules. Further, sizing a circuit for different specifications can be used to automatically derive rules and enhance the knowledge of the selection tool. For each combination of specifications, the rules are evaluated for each available option, after which the best circuit is selected. The OPASYN framework [63], for example, stores the knowledge of op amps in a decision tree. At each node a pruning heuristic is added, which provides a standard and a special decision path for a given specification. Finally, one or more leaves remain and are returned as possible topologies. New topologies need to be fitted into the decision tree, which can be a hard task when there are many different performance metrics. A more systematic approach is adopted in FASY [113]: fuzzy logic is used to process and combine the rules for a library of op amps. Membership functions indicating a grade in [0, 1] are defined for variables corresponding to descriptions such as 'small' or 'high'. Mathematical manipulation of these variables then results in a total grade for each topology.
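The fuzzy grading idea can be sketched in a few lines. The membership shape, topology names and threshold values below are invented for illustration and are not FASY's actual rule base; the per-specification grades are combined with a fuzzy AND (minimum).

```python
def grade_high(value, low, high):
    """Fuzzy membership for 'high': 0 below `low`, 1 above `high`,
    linear in between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def total_grades(gain_db, bw_mhz):
    # Hypothetical rules: each topology graded on 'high gain' AND 'high bandwidth'.
    return {
        "telescopic":   min(grade_high(gain_db, 60, 90), grade_high(bw_mhz, 50, 200)),
        "two_stage":    min(grade_high(gain_db, 70, 100), grade_high(bw_mhz, 1, 50)),
        "single_stage": min(grade_high(gain_db, 30, 60), grade_high(bw_mhz, 100, 500)),
    }

grades = total_grades(gain_db=85, bw_mhz=20)
best = max(grades, key=grades.get)
```

For a high-gain, modest-bandwidth specification the two-stage option receives the highest total grade, mirroring how the combined rule grades steer the selection toward the most suitable topology.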
Once a topology has been selected and sized, a rule can be derived that can become part of the FASY knowledge.

Feasibility

The feasibility function (2.2) provides a straightforward mechanism for architectural selection. Comparing the specifications with the feasibility space of each topology reveals which alternatives can realize the required system [110] and which cannot. The latter are then eliminated. Instead of deriving the entire feasibility function at once, its use in a selection mechanism can be split into two steps. A first selection is made by determining the feasible performance intervals without taking into account


interdependencies between the different performance specifications. Then, the smaller set of topologies is pruned by calculating the total feasibility function. Consequently, the second, computationally intensive, step only has to be applied to a limited number of options (e.g., AMGIE [115]). The feasibility space itself is obtained from symbolic declarative models which describe the performance of different op amp topologies. A major disadvantage is the required computational time, which increases rapidly when there are multiple performance characteristics.

Figures of merit

Yet another selection mechanism is based on the derivation of a FoM for each architecture in the library. In general, a FoM is defined as a single characteristic number derived from the implementation properties P_i (e.g., power and area), the required specifications S_j (e.g., gain and bandwidth), and technology characteristics T_k (e.g., minimum length or cut-off frequency):

FoM = C · ∏_{i=1}^{n_p} f_i(P_i; α_i) · ∏_{j=1}^{n_s} g_j(S_j; β_j) · ∏_{k=1}^{n_t} h_k(T_k; γ_k),   (3.1)

with C a constant and f_i(·), g_j(·) and h_k(·) power functions with α_i, β_j and γ_k as base or exponent. A simple example is the conversion energy for A/D converters [45]:

FoM = (1/2) · P · 2^(−ENOB) · BW^(−1),   (3.2)

where no technology parameters are included. Selection of an architecture happens by associating the highest or lowest FoM with the best choice. Designers usually use simplified behavioral models to estimate the FoM beforehand (e.g., SDOPT for ∆Σ modulators [80]). Alternatively, a set of already realized designs can be used to derive the parameters in (3.1) [122]. The usefulness of the FoM as a selection mechanism diminishes if the architectures differ considerably, due to the different dependencies of the implementation properties on the specifications and technology parameters.

Optimization methods

Once one or more architectures have been selected, values for the parameters must be chosen in order to meet the specifications and simultaneously optimize properties like power and area. Several methods have been developed to cope with this optimization problem, which typically has high complexity. Indeed, in analog and mixed-signal systems, many objectives and constraints have to be taken into account. At circuit level, for example, extra constraints on the operating region of the transistors should be included. Interaction between blocks at higher abstraction levels also results in extra constraints. Furthermore, complex device models can make the feasibility space of an


analog system highly irregular. More specifically, the problem is nonlinear and non-convex, and multiple local optima are encountered. Finally, the lack of analytical expressions for the performance function results in time-consuming function evaluations, leading to long optimization runs.

Different methods for optimizing the parameters of analog and mixed-signal systems can be distinguished by several properties:

Single- or multi-objective. Combining all objectives into one global objective function results in a single-objective approach [11]. On the other hand, multi-objective methods return several solutions, from which the best in a certain situation is selected based on the relative importance of the different objectives [127].

Deterministic or stochastic. Most optimization algorithms are iterative: they alter temporary solutions by applying parameter steps. Two ways exist to determine new parameter values during the algorithm. First, a deterministic function can be systematically applied to previous values to calculate a step [41]; each run of the algorithm with the same initial conditions then results in the same solution. Alternatively, the new parameter values can be selected based on stochastic variables, which may result in different solutions after each run [107].

Single-point or population. At each stage of the optimization process, one or more intermediate solutions are stored in a population. Tasks like performance function evaluation for all members of the population can easily be distributed over multiple processors. Further, a new parameter set is derived either by taking a step from one point regardless of other intermediate points, or by combining several points to decide on new parameter values.

With or without derivatives. To obtain new sets of parameter values, the values of the performance function in the current points are typically required.
Some approaches, however, also need the derivatives of this function, which results in more complicated calculations at each stage, especially when no analytical expression is available.

Global or local optimum. An optimization algorithm may guarantee that the result of the optimization process is the global optimum, whereas others can get stuck in a local optimum [95], depending on the initial point. In practice, due to the limited run time, the first category returns with high probability a point in the neighborhood of the global optimum.

Restrictions on modeling strategy. The optimization algorithm may impose restrictions on the models used to represent the system. For example, continuous functions are usually necessary if derivatives are used. To find a global optimum, a convex optimization problem may be required [32, 73].

Use of design knowledge. The optimization of analog and mixed-signal systems can be regarded as a standard mathematical problem without exploiting the knowledge of experienced designers about the system, or it can include heuristics to find optimal parameter values [105]. Further, some

heuristics can be translated into a set of mathematical sizing rules, e.g., to guarantee the functionality or robustness of subblocks [48].

With or without memory. Memory can be added to an optimization algorithm to simplify the calculation of the performance function or to improve the selection of new sets of parameter values; for example, the already evaluated points and their results are remembered. This memory can be present explicitly [116, 53] or implicitly, e.g., in the constitution of the population. Also, expert tools can be built which store knowledge of previously designed systems.

Continuous or discrete optimization. The algorithm selects values for the parameters from either continuous or discrete sets of values. When continuous values are used, they may be discretized once the final solution has been found (e.g., for transistor dimensions), although this operation can deteriorate the performance.

Convergence. The convergence to a final solution may be theoretically guaranteed by an algorithm. On the other hand, the practical convergence rate can be rather slow, or a solution may only be reached with certainty after an infinite number of iterations.

Speed. Finally, the run time of an optimization algorithm depends on most of the previous properties, like the convergence rate, the population size and the complexity of the calculations at each step (e.g., calculating derivatives slows down the process). Further, the execution time can increase drastically with the complexity of the system, e.g., the number of design variables and objectives.

All these properties can be realized differently, which leads to a profusion of optimization approaches. Roughly speaking, six base categories are frequently encountered in the CAD tools for analog synthesis published in the literature:

• Application of knowledge-based design plans, heuristics or rules
• Deterministic unconstrained algorithms used for local optimization
• Constrained optimization methods solving linear and nonlinear programs
• Greedy stochastic optimization techniques
• Simulated annealing approaches
• Evolutionary computation

Table 3.1 lists the properties of the base categories according to their elementary implementation in CAD tools. Practical implementations, however, can be based on a combination of these fundamental methods, e.g., simulated annealing followed by a traditional unconstrained algorithm [113], or a combined genetic algorithm/simulated annealing method [126].

Design knowledge

An analog designer develops a design plan to find the values of the parameters [103]. Such a plan describes how and in which order to calculate all unknown

Table 3.1. Overview of properties of base categories of optimization approaches.

| Category | Single-/multi-objective | Stoch./deterministic | Population | Derivatives | Global optimum | Models | Design knowledge | Memory | Cont./discrete | Convergence | Speed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Design knowledge | Single-objective | Deterministic | Single point | May be included | No guarantee on optimum | Usually analytical models | Heuristics, rules and design plans | In expert systems | Usually continuous | If no iterations are used | Fast design plans, but slow if interactive |
| Local unconstrained | Single- and multi-objective | Deterministic | Single point | May be included | Only local optimum | Continuous models | Only to select initial point | No memory | Usually continuous | For well-behaving objective functions | Fast for small-sized problems |
| Constrained | Single- and multi-objective | Deterministic | Single point | For nonlinear programs | Global for convex programs | Continuous or convex models | For setting up models | No memory | Sometimes discrete mixed with cont. values | Usually convergence | Fast for small-sized problems |
| Greedy stochastic | Usually single-objective | Stochastic | Sometimes with multiple points | Usually no derivatives needed | No guarantee on optimum | No restrictions | To guide random search | To guide random search | Both continuous and discrete | No guarantee on convergence | Slow for large design spaces |
| Annealing | Usually single-objective | Stochastic | Basic method is single-point | Usually no derivatives needed | Close to global after many iterations | No restrictions | To select initial point | Only temperature in memory | Both continuous and discrete | Probably after enough iterations | Slow for large design spaces |
| Evolution | Single- and multi-objective | Stochastic | Population of individuals | No derivatives needed | Likely close to global after many iterations | No restrictions | No specific design knowledge | Implicit via population | Basic GAs use discrete values | Probably after enough iterations | Very slow for large design spaces |

variables defined in the circuit (e.g., transistor widths and lengths) or system (e.g., noise figures and gain factors in an RF transceiver). The design knowledge should be translated for further use in CAD tools. A straightforward approach is to compile complete design plans for each topology (e.g., IDAC [30]). Multiple plans are needed in the library since different plans are required not only for different topologies, but also for different objectives. A more general system is obtained by building an expert system which also contains general rules and is frequently interactive, e.g., PROSAIC [12], OP–1 [105] and ASIMOV [91]. Nevertheless, the number of architectures that can be dimensioned is limited due to restrictions on the rules' application area. For physical description levels, general placement and routing algorithms can be implemented in a tool which automatically generates an interconnected scheme of standard cells for a wide range of circuits (e.g., CALMOS [9]). Note that several EDA tools implicitly contain design knowledge, for example boundaries and typical values for parameters, initial rough sizing based on simplified models (e.g., STAIC [52], FPAD [40]), or module generators or templates for layouts (e.g., CYCLONE [28]). Explicit incorporation of heuristics as mathematical relations is achieved using the sizing rules method [48].

Local unconstrained optimizers

The dimensioning problem is easily translated into a standard mathematical unconstrained optimization problem by defining a scalar function which should be minimized:

$$
G(p) = \underbrace{\sum_{k=1}^{N} w_k \cdot f_k\!\left(P_k(p)\right)}_{\text{implementation properties}} + \underbrace{\sum_{l=1}^{M} v_l \cdot g_l\!\left(P_{l+N}(p), S_l\right)}_{\text{performance specifications}} + \underbrace{\sum_{m=1}^{C} u_m \cdot h_m(p)}_{\text{structural and sizing constraints}} \tag{3.3}
$$
with p, P and S the design parameters, performance values and specifications of the system, respectively, and w_k, v_l and u_m weight factors. f_k(·), g_l(·) and h_m(·) are deterministic evaluation or penalty functions which can take various forms [93, 86, 89, 40] (e.g., different forms for equality and inequality constraints). For implementation properties, like power and area, identity functions can be used, whereas for specifications usually a shift and normalization is applied. At low abstraction levels, models and specifications for manufacturing or operating tolerances can be taken into account (e.g., [5]). Function (3.3) may be referred to as the objective, error, performance or cost function.

Numerous algorithms have been developed to find the minimal value of (3.3) [41]. Several CAD tools contain an implementation of such an algorithm at different levels of complexity: for example, simplex methods (e.g., [86], [56]), gradient-based methods (e.g., OPASYN [63], [86]), several Newton-like approaches (e.g., OAC [93], [11]), and trust region methods (e.g., OPDIC [6]).
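As an illustration, a scalar cost of the form (3.3) can be sketched as follows; the weights, penalty forms and the toy sizing example are invented assumptions, not taken from any of the cited tools:

```python
# Sketch of a scalar cost function in the spirit of Eq. (3.3).
# Names, weights and penalty forms are illustrative assumptions.

def cost(p, properties, specs, constraints):
    """properties: (weight, f) pairs, f(p) is a value to minimize (identity form);
    specs: (weight, P, S) with performance P(p) and target S, penalty zero
           once P(p) >= S (shift plus normalization);
    constraints: (weight, h) requiring h(p) <= 0, quadratic violation penalty."""
    g = sum(w * f(p) for w, f in properties)
    g += sum(v * max(0.0, (S - P(p)) / abs(S)) for v, P, S in specs)
    g += sum(u * max(0.0, h(p)) ** 2 for u, h in constraints)
    return g

# Toy sizing problem: minimize power subject to a gain spec and a width bound.
power = lambda p: p["W"] * p["I"]        # invented implementation property
gain = lambda p: 20.0 * p["W"] / p["I"]  # invented performance model
width = lambda p: p["W"] - 100.0         # constraint: W <= 100

G = lambda p: cost(p,
                   properties=[(1.0, power)],
                   specs=[(10.0, gain, 60.0)],
                   constraints=[(100.0, width)])
```

A derivative-free routine, such as one of the simplex methods mentioned above, would then iterate on G(p).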

A basic condition for correct convergence of these methods is a good starting point. This is derived, for example, by exploiting design knowledge [93]. Another problem is the rapid increase of the execution time with the orders of the parameter and performance spaces, especially when derivatives are needed. Furthermore, analog design problems are usually characterized by various constraints on the parameters. Therefore, constrained optimization methods are encountered more often.

Constrained optimization

Instead of combining all objectives and constraints into one function as in (3.3), the problem of finding dimensions can also be considered as a constrained optimization problem, usually written in a standard form:

$$
\begin{aligned}
\min_{p \in \mathbb{R}^{D}} \quad & f(p) \\
\text{subject to} \quad & g_k(p) \le 1 \quad \text{for } k = 1, \ldots, N \\
& h_l(p) = 1 \quad \text{for } l = 1, \ldots, M \\
& p > 0
\end{aligned} \tag{3.4}
$$

where the parts of (3.3) with specifications and constraints are explicitly written down as constraints. Depending on the character of the functions in (3.4), the problem is known as a linear, quadratic or nonlinear program [41]. In multi-objective formulations, the scalar objective f(p) is replaced by a vector of objective functions [109]. Since linear programs are only encountered in some special cases (e.g., for filter optimization [100] or layout retargeting [10]), the main focus of analog CAD tools is on nonlinear programs. Several algorithms have been successfully applied to analog designs, like gradient-based approaches (e.g., JiffyTune [21]), methods of feasible directions (e.g., DELIGHT.SPICE [89], [106]), and SQP for op amps [76], analog filters [24], LNAs [87], and general analog cells including statistical analysis (e.g., iEDISON2 [33], WiCkeD [4]).

Like the optimizers listed above, most algorithms can get stuck in a local optimum. However, if all functions in (3.4) are convex, a local solution of (3.4) is also the global optimum. This situation is achieved by using a Geometric Program (GP), which can be transformed into a convex program. In a GP, all equality constraints h_l(p) are monomials:

$$
h_l(p) = c_l\, p_1^{\alpha_1} \cdots p_D^{\alpha_D}, \qquad c_l \in \mathbb{R}^{+} \text{ and } \alpha_1, \ldots, \alpha_D \in \mathbb{R}, \tag{3.5}
$$

and all functions f(p) and g_k(p) must be posynomials, i.e., sums of monomials. Geometric programming has been applied to the sizing of several analog systems, like CMOS op amps (GPCAD [32], [73]), pipelined A/D converters ([31]), PLLs ([20]), multi-stage amplifiers ([25]), on-chip inductors ([58]), LNAs (ROAD [124]) and systems with statistical variations (e.g., OPERA [125]). The biggest disadvantage of the method is the limitation on the modeling strategy, especially if a model has to be derived for each topology by hand. Possible solutions consist in fitting the performance function with a
posynomial model [23] or solving a series of GPs [118]. Of course, this slows down the procedure, making the method less attractive.

Greedy stochastic optimization

Greedy algorithms only accept a new set of parameters if there is some improvement. Elementary stochastic optimization methods like random grid search and random step [56] are not feasible for finding optimal parameter values for an analog system due to the rather extended design space. Instead, they are usually encountered in combination with other approaches. A straightforward combination is with design knowledge, like heuristics (e.g., iMAVERICK [59]) or information that is derived during the execution of the optimization algorithm [82]. Also, the gradient used in a deterministic approach can be replaced by an estimation based on random perturbations, resulting in a stochastic approximation approach (e.g., TOY [111] for yield optimization). A globally optimal point is more likely to be found (but not guaranteed) if random pattern searches are combined with a population-based approach (e.g., ANACONDA [95]).

Annealing

Annealing methods mimic aspects of the annealing of materials, and are characterized by a cooling mechanism, represented by a decrease of the temperature T using, for example, a linear or exponential decaying function [60]. Starting from some point p in the parameter space, a new set of parameters p′ is derived by statistically selecting a new point in the neighborhood of the old one [47], or by applying a step of a local optimizer [90]. This new point is accepted if

$$
G(p') < G(p) \;\lor\; \exp\!\left(-\frac{G(p') - G(p)}{kT}\right) > X, \tag{3.6}
$$

with G(p) an objective function similar to (3.3), X a random number uniformly distributed on [0, 1] and k a normalization constant (Boltzmann's constant if G(p) represents energy). As a result, in contrast to the greedy algorithms, up-hill moves have a certain probability of being accepted in annealing approaches, and hence the method can escape from local minima. Consequently, the global optimum is theoretically reached after an infinite number of iterations. In a practical implementation, however, it is likely – although not guaranteed – that after enough iterations the solution is close to the global optimum.

The ability to calculate optimal dimensions without the need for derivatives makes the basic simulated annealing algorithm an attractive choice for optimization in analog CAD tools. Applications include the sizing of general analog cells (e.g., OPTIMAN [47], FRIDGE [79], ASTRX/OBLX [90]), op amps (e.g., FASY [113], GBOPCAD [57]), VCOs (e.g., CYCLONE [28]),

∆Σ modulators (e.g., [102]) and RF receivers (e.g., ORCA [22]), as well as layout generation (e.g., ILAC [101], KOAN/ANAGRAM [19], LAYLA [68], PUPPY-A [72]). In order to speed up the optimization process, several annealing algorithms can be executed concurrently on parallel CPUs, and operations like recombination, commonly found in genetic evolution processes, can be used to find new points. Examples of such approaches have been implemented for amplifiers in ASF [66] and for layouts in [126].

Evolution

Evolutionary algorithms mimic aspects of the natural biological evolution process to find a globally optimal solution. A population of individuals (or phenotypes) is created where the parameters of a creature are collected in its genome (or genotype). The latter is represented, for example, by a binary string (in original genetic algorithms), real-valued vectors (in evolution strategies [8]) or trees (in genetic programming [64]). Each individual is assigned a fitness value corresponding to (3.3) which can be used for ranking and selection. During the optimization process, new generations are built up by application of genetic operations. The selection operator chooses individuals from the population for breeding. Mutation perturbs the genotypes to explore new areas in the design space. Finally, recombination or crossover creates new children by combining the genotypes of the parents. A practical implementation may use only some of these operations or employ variants (e.g., differential evolution [96]). Usually, adaptations are made to the original algorithms to guarantee convergence to the global optimum (e.g., by employing an elitist selection mechanism). However, similar to the annealing algorithms, this global optimum results only after an infinite number of generations. The ability to deal with complex structures, while needing only function values of the performance metrics and cost function and no derivatives, makes evolutionary methods suited for a wide range of applications.
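The genetic operations described above can be sketched for a toy real-valued minimization problem; the population size, operator rates and tournament scheme below are illustrative choices only, not those of any cited tool:

```python
import random

def evolve(fitness, dim, pop_size=20, generations=60, seed=1):
    """Minimal evolutionary loop: tournament selection, uniform
    crossover and Gaussian mutation, with two-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)          # rank by fitness (lower is better)
        nxt = pop[:2]                  # elitism: keep the two best
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [x + rng.gauss(0, 0.2) if rng.random() < 0.3 else x
                     for x in child]   # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy cost with a known optimum at (1, 2, 3)
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, (1.0, 2.0, 3.0)))
best = evolve(f, dim=3)
```

Elitism makes the best fitness non-increasing over generations, which is one way of approximating the convergence guarantee mentioned above.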
Analog CAD tools based on evolutionary algorithms have been developed to find the optimal parameters of various systems, like RF building blocks (e.g., gaRFeeld [116]), voltage references (e.g., [37]), ADCs (e.g., [121]), op amps (e.g., [123], [2]) and power amplifiers (e.g., M-DESIGN [97]). Evolutionary algorithms can also be employed for multi-objective optimization, e.g., WATSON [29] for op amps and [18] for RF systems. The fitness then becomes a function of the Pareto dominance of the individuals.

3.3.2 Selection during dimensioning

Instead of calculating the optimal dimensions of a particular architecture and selecting the best architecture beforehand or afterwards, the selection process and the dimensioning can be interwoven. As a result, the available design space is enlarged, which makes it possible to obtain a better solution compared to the optimization of a fixed topology, albeit at a larger computational complexity.

Fig. 3.3. Schematic representation of the design strategy performing the selection of the topology and the dimensioning concurrently.

Figure 3.3 shows a diagram of the design strategy which combines selection and dimensioning. Similar to the methodology of Fig. 3.1, both approaches are examples of the flat design flows summarized in Table 2.3. But rather than selecting a specific architecture, a template is chosen, explicitly or implicitly. Templates or target architectures are also often used in approaches for architectural synthesis of digital systems, e.g., the CATHEDRAL systems [27, 16]. An example of a template for the design of various op amp architectures is shown in Fig. 3.4. Such a template defines a topology in terms of blocks for which different alternatives exist. As a result, all architectural choices available in a library fit into the template, so the total number of different topologies remains limited. This property ensures that only meaningful architectures are obtained, but at the same time no new topologies can be explored. If several templates can be chosen in a tool, a step similar to the initial selection in Fig. 3.1 should be performed.

During the actual optimization, a set of parameters p′ selected by the optimizer is translated into a particular architecture and its parameters p. Since different architectures are represented during the dimensioning process, the number and meaning of the parameters are variable. Then, the performance function is evaluated. A final evaluation reveals whether the chosen template can deliver a good design. If not, another template has to be chosen and optimized.
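This translate-and-evaluate loop can be sketched as a simple stochastic search over a small hypothetical op amp template with an annealing-style acceptance rule in the spirit of (3.6); the option names, cost model and all settings are invented for illustration:

```python
import math, random

# Hypothetical template: each option toggles an optional subblock.
OPTIONS = ["cascode", "second_stage", "rc_compensation"]

def translate(p_prime):
    """Map the optimizer's raw vector p' onto an architecture and its
    parameters p: one threshold bit per optional subblock plus a bias."""
    arch = {opt: p_prime[i] > 0.5 for i, opt in enumerate(OPTIONS)}
    params = {"ibias": abs(p_prime[len(OPTIONS)])}
    return arch, params

def performance(arch, params):
    """Invented cost: reward gain-boosting blocks, penalize power."""
    gain = 40 + 20 * arch["cascode"] + 30 * arch["second_stage"]
    power = params["ibias"] * (1 + arch["second_stage"])
    return power + max(0.0, 80 - gain) * 10  # want gain >= 80, low power

def optimize(steps=500, temp=5.0, seed=3):
    rng = random.Random(seed)
    p = [rng.random() for _ in range(len(OPTIONS) + 1)]
    best = cur = performance(*translate(p))
    best_p = list(p)
    for _ in range(steps):
        q = [x + rng.gauss(0, 0.2) for x in p]      # perturb p'
        cq = performance(*translate(q))
        # annealing-style acceptance, cf. Eq. (3.6)
        if cq < cur or math.exp(-(cq - cur) / temp) > rng.random():
            p, cur = q, cq
            if cur < best:
                best, best_p = cur, list(p)
        temp *= 0.99                                 # cooling
    return translate(best_p), best
```

The search should discover that both gain-boosting blocks must be enabled to meet the invented gain target at low power.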

Fig. 3.4. Example of a (flat) template with various optional elements indicated by the rectangles, i.e., several optional cascode stages, different compensation schemes and an optional second stage [77].

A key issue in the combination of the selection and dimensioning process is the definition of the template. Analog CAD tools implementing the design strategy of Fig. 3.3 use different description styles for the templates:

• Hierarchical: The template is written as a connection of subblocks for which different types with similar functionality can be chosen during the optimization process.
• Flat: All available topologies are characterized by a single description with integer or binary values to indicate the different choices.

The specific properties of each group of templates result in different methods to explore the design space and to find an optimal architecture.

Hierarchical templates

A design strategy based on hierarchical templates has some similarities with a top-down design flow where at each level a separate selection and optimization process is performed. First, general properties common to the different types of a particular subblock are derived. Depending on their actual values, a type is then selected and a specific architecture is generated. Alternatively, a depth-first search algorithm is applied by subsequently selecting all different types. Afterwards, type-specific parameters of the subblocks are determined. The template, however, imposes several restrictions, like the kinds of subblocks, their interconnection and the limited number of different types for them. Consequently, the designer has only a rather small number of architectures at his or her disposal.

Of the optimization methods summarized in Section 3.3.1, hierarchical templates fit most easily into approaches based on the application of design
knowledge. Indeed, in order to use hierarchical templates, the optimization method must be able to vary the meaning and number of the parameters in order to represent different architectures. Specific heuristics, rules or design plans are associated with subblocks or their types. An advantage of this approach is that the same knowledge can be used for different templates with common subblocks. Sometimes, a template contains conditional building blocks which can be removed (e.g., an optional second stage). Several prototype EDA tools implementing a knowledge-based optimization method with hierarchical templates have been developed. Some examples are CAMP [43], which contains different options for some subblocks in an op amp, BLADES [39] for three-stage op amp structures, and OASYS [50] and ISAID [114] for various op amp architectures. Usually, the end solution of such a tool is further optimized with a local unconstrained optimizer to find the best set of parameters for the architecture obtained by the expert system.

Furthermore, during the dimensioning, other optimization approaches can be adopted to find values for the parameters of a specific architecture. For example, SEAS [88] uses simulated annealing for parameter optimization and an expert system to decide which types of subblocks to replace during the optimization. A complete optimization with evolutionary algorithms of both the parameters and the architecture represented by flexible hierarchical templates requires the definition of appropriate operators (i.e., mutation and crossover). Such templates and operators are defined, for example, in MOJITO [78] for op amp topologies. A major advantage is the possibility to represent a considerably larger number of topologies (about 3,500) than would be possible with flat templates, as discussed in the next subsection.

Another approach to dealing with hierarchical templates consists in creating performance curves for the different types of building blocks.
The parameters of the blocks are their performance values and hence retain their meaning for different types. A parameter set corresponds directly to a dimensioned subblock with possibly a different number and types of internal parameters [38].
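This performance-space parameterization can be sketched as a simple lookup; the small database of "dimensioned" OTA subblocks below, and its performance numbers, are invented:

```python
# Invented database of dimensioned OTA subblocks: each entry stores the
# achieved performance (gain in dB, bandwidth in MHz) and a cost (power in mW).
OTA_DB = [
    {"type": "telescopic", "gain": 70, "bw": 150, "power": 1.0},
    {"type": "folded",     "gain": 60, "bw": 300, "power": 1.5},
    {"type": "two_stage",  "gain": 90, "bw": 100, "power": 2.5},
]

def select_block(gain_req, bw_req):
    """Return the lowest-power stored subblock meeting the requested
    block-level performance, or None if no stored type qualifies."""
    ok = [b for b in OTA_DB if b["gain"] >= gain_req and b["bw"] >= bw_req]
    return min(ok, key=lambda b: b["power"]) if ok else None
```

The optimizer then manipulates only the block-level performance values (gain and bandwidth here), and each parameter set maps directly to a dimensioned subblock of a possibly different type.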

Flat templates

Whereas knowledge-based approaches are suited to work with hierarchical templates, other optimization methods usually struggle with the difficulty of flexible parameters. Therefore, these methods more naturally deal with a flat template where the parameters altered by the optimizer change neither in number nor in meaning. The translation into an architecture maps the parameters onto the properties of possibly different topologies. Since diverse optimization algorithms can be adopted during the design process, flat templates are usually preferred over their hierarchical counterparts.

A major challenge in working with flat templates is to find a suitable system representation. A possible solution consists in defining a topology
which combines all accessible architectures by making some connections or entire subblocks optional. A binary variable indicates whether the optional part should be included. Consequently, the system is described by a set of equations which contain both real-valued variables (the normal parameters) and integers (the binary numbers). These equations can be considered as a Mixed-Integer Nonlinear Program (MINLP) and solved using a constrained optimization method combined with a combinatorial optimizer (e.g., a branch-and-bound algorithm). Analog CAD tools based on such an approach have been developed, for example, for CMOS op amps [77] (shown in Fig. 3.4) and ∆Σ modulators [55].

An alternative approach to represent different topologies is to use integers to indicate entire architectures, together with a fixed number of other parameters that are processed by the optimizer regardless of their concrete meaning, which can differ between the different topologies. This approach is frequently combined with a stochastic optimization method. First applications are repetitive structures, like filters, where the same parameter is easily re-used for multiple stages since the difference between such stages is merely a scaling factor. If represented as a genome in an evolutionary algorithm, only the global parameters and the parameters of each different kind of stage need to be included to make it possible to reconstruct the entire topology. An example of such an approach is DAISY [42] for ∆Σ modulators. Yet another way to exploit the flexibility offered by genome representations is to indicate the type of each subblock by an integer together with a bit string which is translated into the actual parameters of the subblock. As a result, for a system with n building blocks, the operations of the optimizer are always applied to lists with n pairs. Such representations and operations are defined, for example, in DARWIN [67] for three-stage op amps. Infeasible combinations of subblock types are detected with a connection matrix.
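A flat genome of the kind used in these approaches can be sketched as follows; the stage library and field layout are invented for illustration:

```python
# Invented flat genome: one integer selecting the stage type per slot,
# plus a fixed-length list of real parameters whose meaning depends on
# the selected type. The optimizer never sees the decoded architecture.
STAGE_TYPES = ["gm_c", "active_rc", "switched_cap"]

def decode(genome, n_stages):
    """genome = [type_0, ..., type_{n-1}, p_0, ..., p_{n-1}]:
    integers pick a topology per stage, reals are reinterpreted per type."""
    types = genome[:n_stages]
    params = genome[n_stages:]
    return [{"stage": STAGE_TYPES[t], "coeff": p}
            for t, p in zip(types, params)]

arch = decode([0, 2, 1, 0.5, 0.25, 0.125], n_stages=3)
```

Crossover and mutation then operate on the fixed-length genome, while only the decoder knows how each field is interpreted.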
3.3.3 Top-down creation

A major disadvantage of the design strategies based on selection before, during or after dimensioning is the limited number of architectures that are explored. All available topologies are selected from a library, either entirely or as a template with a few options for subblocks or interconnections. In contrast to these design strategies, analog EDA tools which create the topology offer a wider design range and the possibility to find new architectures.

A diagram of the corresponding design strategy is depicted in Fig. 3.5. The input is usually a language-based description of the functionality required from the system, written down using a Hardware Description Language (HDL) like VHDL-AMS, sometimes annotated with information to guide the architectural generation process [35]. First, the input description is converted into some internal representation, e.g., a data flow graph [70, 1], a symbolic signal flow graph [54] or a state-space description [3].

Then, the actual optimization process happens in two steps. First, one or more architectures are created by mapping the internal representation onto

Fig. 3.5. Schematic representation of the design strategy based on top-down creation of the architecture.

a connection of lower-level building blocks. To guide this mapping operation, an estimation of the performance of the architecture can be provided. The second step is the constraints transformation step [17], in which the parameters of the generated architectures are optimized. This step coincides with the application of one of the optimization approaches elaborated in Section 3.3.1. The two steps can also be combined, resulting in a simultaneous architectural and parametric optimization process as shown in Fig. 3.6. If no parameters are found that realize the required performance specifications, a redesign is invoked by initiating a new architectural mapping. The dimensioned architectures are evaluated and the optimization is restarted at a lower abstraction level.

Performance estimation

Estimating the performance of a particular mapping without determining the values of its parameters is usually difficult and hence inaccurate. As a result, the estimation is often limited to some heuristic values like the complexity of the topology, e.g., the number of building blocks or loops. The algorithm selecting the architectural mapping then uses the results of the performance function via the large feedback loop denoted as 'potential redesign' in Fig. 3.5. This performance function is calculated using one of the methods described in Section 2.4.

Fig. 3.6. Example of a top-down creation process which starts from a textual description of the required functionality and generates a behavioral model of a specific architecture [75].

For some well-known types of systems, a fitting method can be adopted. The parameters of an analytical expression of the performance function are estimated by applying a fitting algorithm to a database of completed designs. This approach is used, for example, to estimate the power of analog filters in ACTIF [69]. The method is somewhat similar to the performance models used for calculating performance functions, but an abstraction is made of the actual implementation. Furthermore, systematically sampling the design space is not possible. Another approach consists in designing the system down to levels where the performance is more easily estimated, for example using first-order models to speed up the design [34]. Nevertheless, the advantage of using simplified models vanishes if many lower-level choices must be made and the resulting estimation is not accurate enough.
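The fitting method can be sketched with an ordinary least-squares fit of an assumed linear power model over a small invented database of completed filter designs (the model form and all numbers are illustrative, not from ACTIF):

```python
# Least-squares fit (2x2 normal equations, stdlib only) of an assumed
# linear power model: power ≈ a * bandwidth + b * order.
# The 'completed designs' below are invented numbers.
designs = [  # (bandwidth_MHz, order, power_mW)
    (1.0, 2, 2.5), (2.0, 2, 3.0), (1.0, 4, 4.5), (3.0, 6, 7.5),
]

def fit(data):
    # Solve A^T A x = A^T y for x = (a, b) by hand for the 2x2 case.
    s11 = sum(f * f for f, n, _ in data)
    s12 = sum(f * n for f, n, _ in data)
    s22 = sum(n * n for f, n, _ in data)
    y1 = sum(f * p for f, n, p in data)
    y2 = sum(n * p for f, n, p in data)
    det = s11 * s22 - s12 * s12
    a = (y1 * s22 - y2 * s12) / det
    b = (s11 * y2 - s12 * y1) / det
    return a, b

a, b = fit(designs)
estimate = lambda f, n: a * f + b * n  # power estimate for a new design
```

The fitted coefficients abstract away the implementation, so the estimator is cheap to evaluate during mapping, but it cannot be used to systematically sample the design space.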

Architectural mapping mechanisms

The mapping of the internal representation onto an architecture consists of two steps which are usually separated from each other. First, a set of transformations is applied to the internal description, which is usually the result of direct translations of all constructs of the language-based system description. For example, if a signal flow graph is used, branches and nodes are eliminated or combined to simplify the model [54]. After the transformation of the internal representation, this description should be mapped onto a particular architecture. Two different types of mappings are used in analog EDA tools:

• Straight mapping: A collection of one-to-one relations between the internal representation and the actual topology is available, from which the user can select one.
• Optimized mapping: Multiple architectures correspond to a specific internal description of the system.

In the first category, the designer performs architectural exploration by providing different input descriptions or selecting another mapping. In the second category, an optimization algorithm is used to select the architecture.

Straight mapping

A traditional variant of a design strategy based on top-down creation uses a block diagram as internal representation. The mapping operations come down to mapping each building block onto a lower-level structure [17]. Hence, a collection of one-to-one relations between the internal representation and the actual topology is available, and the designer performs architectural exploration by providing different input descriptions. More flexibility is offered by interactive tools which allow the designer to make some choices during different steps of the straight mapping process, like an implementation style or a transformation.
For example, ARCHGEN [3] maps a state-space representation of filters onto different forms of architectures (e.g., observable or controllable) and subsequently translates these into different topologies for which different implementations can be selected (e.g., Gm–C or op amp–C). Another approach employs interactive selection of transformations on admittance matrices to generate filter structures [49]. The interactive character of straight mapping methods makes them suited to quickly examine different options. An experienced designer can then decide which architecture should be preferred.

Optimized mapping

Optimized mapping approaches usually use a custom-defined graph type as internal representation. Further, a library of building blocks together with their

subgraph representation is defined. Via a pattern recognition algorithm, the graph is then divided into subgraphs corresponding to the blocks in the library. Multiple architectures correspond to the same internal representation. Only the behavior of the blocks at the same abstraction level as the input description should be known for this step. To find a complete cover of the graph, the search for patterns can be executed in decreasing order of block complexity. Once a pattern is found, the corresponding subgraph is eliminated and this process is repeated until a complete architecture is obtained. The complexity is the only criterion used to direct the optimizer. This approach has been implemented, for example, in TAGUS [54] to explore A/D conversion algorithms. An alternative heuristic is used in KANDIS [92] for the high-level synthesis of mixed-signal systems. After an interactive partitioning of the internal graph in digital, discrete- and continuous-time subsystems, the last two parts are realized with as few building blocks as possible. Interaction with the user is needed to guide the process if multiple choices are encountered. The use of performance estimators enables the use of more general optimization algorithms. For example, a branch-and-bound algorithm first generates, based on a set of transformations, all possible filter structures corresponding to a subgraph and then eliminates the topologies for which a large area is estimated [34]. In VASE [36], a tabu-search method is employed to find different architectures for op amp circuits, e.g., to select between blocks operating on voltages or currents. During optimization, the algorithm changes the signal types repetitively to improve the performance and places the performed architectural changes in a tabu list of prohibited options during some iterations. Area is estimated by deriving a first selection of the parameters. In Antigone [75] (depicted in Fig. 
3.6), a straight mapping towards an initial seed is followed by an evolutionary optimization process that generates a complex, more realistic architecture to perform an analog-to-digital conversion. Transformations add or convert architectural elements and parameters or insert implementation details. The required functionality is preserved during the transformations. Hence, this approach combines both types of mapping procedures. This approach is explained in greater detail in Chap. 6.

3.3.4 Bottom-up generation

Traditional analog design methodologies analyze a circuit by recognizing fundamental building blocks like a differential pair or a cascode transistor [103]. Extra elements are added to obtain certain effects, e.g., a resistor to cancel a zero, or a capacitor to increase the phase margin. In this way, circuits are created from bottom to top. Therefore, it is a natural choice to support and automate this design strategy with analog EDA tools. More than top-down design approaches, a design methodology starting from low abstraction levels offers opportunities to invent completely new topologies: to map a system's representation onto lower-level building blocks, some assumptions about these blocks must be made, whereas connecting small elementary blocks with each other gives more freedom.

Fig. 3.7. Schematic representation of the design strategy based on bottom-up generation of the architecture.

Figure 3.7 shows the outline of the design methodology where the architecture is created in a bottom-up design flow. This design strategy is an example of a simplified generic design flow with the properties listed in Table 2.3. The first step implies the selection of an initial architecture or seed which depends on the type and specifications of the system to be designed. This seed should be provided externally by the designer, which implies some design experience. During the actual design process, an optimizer selects a transformation: a basic entity is either added to or removed from the architecture, or a value is assigned to a parameter. These operations can also be combined into one step. Various fundamental entities are possible, like single transistors, elementary building blocks or node connections. An example is given in Fig. 3.8. Once an architecture has been generated, the performance function is evaluated to provide clues for the optimizer to select a new transformation. Population-based optimizers return multiple dimensioned architectures from which the best one is selected. Redesign is only needed if the initial architecture turns out to be ineffective. Since the bottom-up design flow usually starts at circuit level, most information needed for a good calculation of the performance function is available, avoiding redesign.
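The loop of Fig. 3.7 can be mimicked in a few lines; the entity names, the mutation operators and the performance function below are toy stand-ins (hypothetical, not taken from this chapter):

```python
import random

# Hypothetical sketch of the bottom-up generation loop of Fig. 3.7:
# an optimizer repeatedly adds/removes a basic entity or assigns a
# parameter value, keeping a candidate whenever performance improves.
ENTITIES = ["nmos", "pmos", "resistor", "capacitor"]

def performance(arch):
    """Toy performance function: prefer 4 entities and small parameters."""
    return -abs(len(arch["entities"]) - 4) - 0.01 * sum(arch["params"].values())

def mutate(arch, rng):
    arch = {"entities": list(arch["entities"]), "params": dict(arch["params"])}
    op = rng.choice(["add", "remove", "assign"])
    if op == "add":
        arch["entities"].append(rng.choice(ENTITIES))
    elif op == "remove":
        if arch["entities"]:
            arch["entities"].pop(rng.randrange(len(arch["entities"])))
    else:
        if arch["entities"]:  # assign a value to a parameter of one entity
            arch["params"][rng.randrange(len(arch["entities"]))] = rng.uniform(0, 10)
    return arch

rng = random.Random(1)
seed = {"entities": ["nmos"], "params": {}}   # externally provided seed
best = seed
for _ in range(200):
    cand = mutate(best, rng)
    if performance(cand) > performance(best):  # clue for the optimizer
        best = cand
print(len(best["entities"]))
```

A population-based variant would keep a set of such candidates and return the best dimensioned architecture at the end.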

Fig. 3.8. Examples of the seed and transformations used to generate analog circuits using a bottom-up design strategy [71]: (a) seed for generating filters; (b) two operations which add an extra element to the circuit.

Applications of the design strategy of Fig. 3.7 differ in both the representation of the system (modeling strategy) and the selection of transformations (synthesis strategy). Three exploration methods can be distinguished:

• Knowledge-based: A designer uses his or her insight into the working principle of the circuit, determines which elements to add, remove or replace, and selects parameter values using standard CAD tools like SPICE and a circuit schematic editor.
• Exhaustive: All possibilities with a limited complexity are systematically generated, e.g., as graphs, and evaluated for their benefits.
• Stochastic: A stochastic algorithm is adopted to select the operations that will improve the performance values.

For automated synthesis tools, either an exhaustive or a stochastic exploration approach should be adopted. The knowledge-based method, on the other hand, resembles the daily practice of analog designers supported mainly by simulation tools.

Exhaustive exploration

Systematically generating all circuits or systems guarantees that the best architecture is examined. Of course, the computational time increases rapidly with the number of entities in the system. As a result, this method is only practical for rather small structures. To create all possibilities, a formal representation for the basic entities is required. For example, a basic element frequently encountered in macro-models for analog circuits is a VCCS. Its operation can be represented as a linear
tree-graph with four nodes and a current (output) and a voltage (control) branch [61]. All possible graphs with a limited number of VCCSs are easily generated by defining a number of nodes and drawing all branches between them. After analyzing the resulting circuits, e.g., by calculating the two-port parameters, the useful combinations are selected. Finally, each subgraph is mapped onto a circuit element like a resistor or transistor. Application of this approach results, for instance, in all amplifiers [13] or all CMOS transconductances with at most two VCCS functions [62]. The bottom-up system generation method is only applied to create an architecture which is symbolically analyzed. The dimensioning process is comparable to the design strategy based on selection of the topology before dimensioning.

Stochastic exploration

Analog circuits, like op amps, usually contain dozens of transistors [103] despite their relatively simple functionality compared to their digital counterparts. As a result, any random search process in the design space using the circuit description level tends to be computationally inefficient if no guidance is provided during the operation. A solution is to steer the process by incorporating design knowledge about, for example, basic building blocks and their interconnections as shown in Fig. 3.8. This knowledge, however, can also be derived during the optimization itself, e.g., by analyzing and comparing the generated architectures. Parts from different topologies can then be combined to create new ones. This mechanism naturally leads to evolutionary computation methods. A major challenge in evolutionary algorithms is the modeling strategy, which must be suited to apply the basic genetic operations. For example, recombination of standard circuit-level descriptions is not trivial. A more appropriate circuit representation technique consists of collecting all selected transformations of Fig. 3.7 in one genome.
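Such a genome can be sketched as a small program of circuit-constructing instructions; the instruction set, the netlist encoding and the fixed program length below are illustrative assumptions, not the actual representation of [71]:

```python
import random

# Hypothetical sketch of a genome: a fixed-length program whose
# instructions grow a netlist from an embryonic circuit.
def execute(genome):
    """Run the circuit-constructing program on an embryonic circuit."""
    netlist = [("Rs", "in", "n1"), ("Rl", "n1", "out")]  # embryo
    next_node = 2
    for op, value in genome:
        if op == "insert_C":      # shunt capacitor at the newest node
            netlist.append((f"C={value}", f"n{next_node - 1}", "gnd"))
        elif op == "insert_L":    # series inductor creating a new node
            netlist.append((f"L={value}", f"n{next_node - 1}", f"n{next_node}"))
            next_node += 1
        # "nop" leaves the circuit unchanged
    return netlist

def crossover(parent_a, parent_b, rng):
    """Fixed-length programs: join two half programs [71]."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

rng = random.Random(0)
a = [("insert_C", 1e-9), ("insert_L", 1e-6), ("nop", 0), ("nop", 0)]
b = [("nop", 0), ("insert_C", 2e-9), ("insert_L", 3e-6), ("insert_C", 4e-9)]
child = crossover(a, b, rng)
print(len(child), execute(child)[:3])
```

Executing a genome yields a netlist that can be handed to a SPICE-like simulator; crossover on the fixed-length instruction lists keeps offspring structurally valid.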
The result is a program which starts from an embryonic circuit and in which each line inserts a circuit element or changes a connection. Parameters are provided either along with the functions that alter the topology, or with specific commands. To evaluate the performance function, the program is executed and the generated circuit is simulated with a SPICE-like simulator. The fixed-length programs can directly be used in a genetic algorithm [71]. Crossover then comes down to joining two half programs from different parents. Alternatively, the programs can be represented as trees for use with genetic programming techniques [65, 108]. During recombination, subtrees corresponding to subcircuits are exchanged. Note that for these operations, the transformations should be general enough to remain meaningful since they are swapped between programs. Also, the circuit simulator must be able to deal with all generated circuits, which may differ considerably from commonly used analog circuits. For example, simulation becomes more difficult if there are
many (feedback) connections between the nodes. Randomly generated circuits used as the initial population are especially likely to cause problems.

3.3.5 Overview

A historic overview of most analog EDA tools and methods elaborated above is shown in Fig. 3.9. The vertical axis gives an indication of the abstraction level at which the tool operates. In agreement with Fig. 2.3, low levels correspond to physical (layout) and circuit levels whereas at the top the functional and behavioral levels dominate. These description levels are also mentioned in Fig. 3.9. Early efforts to develop CAD tools were mainly focused on the low levels of abstraction. Whereas tools for layout generation (e.g., CALMOS [9]) can deal with systems containing hundreds or thousands of cells, tools used to determine the sizes of components like transistors are usually limited to rather small systems. Large systems are then split into smaller systems by an experienced designer. More recent tools also address more complex systems which are dealt with at high abstraction levels. The classes corresponding to the applied method to find the topology are denoted by different icons in Fig. 3.9. The following conclusions can be drawn for the four main classes:

Selection before or after dimensioning. Optimization of the parameters using one of the methods of Table 3.1 was the first objective of the first-generation design automation methods. The selection of the architecture was the task of the designer. Today, several approaches are well-developed and various analog synthesis tools are commercially available to optimize the parameters of a selected (circuit) topology for analog cells of limited size. Nevertheless, parameter optimization remains a target of research in analog EDA methodologies.
Indeed, new technologies, new applications and higher sensitivity to non-ideal phenomena (e.g., deep-submicron effects and substrate noise at continuously higher frequencies) are drivers for research towards tools at all abstraction levels for both small and large systems.

Selection during dimensioning. Early methods using templates to select parts of the architecture during the sizing process introduced a rather limited number of architectural options. More recent approaches, however, have shown that a large design space can be explored during the optimization (e.g., more than 3,500 op amp topologies in [78] compared to merely 64 structures in [77]). Hence, at low abstraction levels, these approaches form an alternative to bottom-up generation approaches: various architectures are possible, but the use of a template avoids unknown – and possibly unreliable – architectural results.

Fig. 3.9. Overview of major analog EDA tools for analog synthesis developed in the last 20 years and published in open literature.

Top-down creation. Techniques for creating the topology in a top-down flow usually stay at the higher abstraction levels. At lower levels, several
effects, like the coupling between the building blocks, make it difficult to follow a strict top-down design path. Hence, it may be more appropriate to combine a top-down creation method at behavioral or functional level with the selection of the (circuit) topology before, during or after the sizing process. In the domain of digital synthesis, such an approach is known as a 'meet-in-the-middle' strategy [27].

Bottom-up generation. Generating the architecture in a bottom-up approach is only achievable with circuit-level descriptions. Two major problems limit the applicability of these methods: the rapid 'explosion' of the design space and corresponding evaluation time, and the possibility to come up with systems which differ too much from well-known analog circuits to be accepted by an analog designer. As a result, most research nowadays focuses on methods of the other three classes.

Approaches to integrate the topology synthesis with the optimization of the parameters, either by selection during dimensioning or via top-down or bottom-up creation, have not made their entrance into commercial analog EDA tools yet. Tools offering automated parameter optimization methods, on the other hand, can be used to design state-of-the-art analog cells or parts of hierarchically decomposed systems. Examples are NeoCircuit of Cadence [14], Circuit Explorer of Synopsys [112], Eldo Optimizer of Mentor Graphics [83], WiCkeD (DesignMD) of MunEDA (ChipMD) [84], and Arsyn of Orora Design Technologies [94]. They all require the selection of the (circuit) topology before optimization of its parameters using one of the methods of Table 3.1. SPICE-like simulations (e.g., with HSpice, Spectre or Eldo) are adopted to evaluate the performance function. On the other hand, modern VLSI technology makes it possible to integrate complete electronic systems on a single chip (SoC – System-on-Chip) or in a single package (SiP – System-in-Package) [85].
The increased degree of integration results in systems with more functionality, more flexibility, e.g., to cope with different standards, and better performance characteristics, like lower power consumption and higher bandwidths. The consequent increase in complexity leads to growing attention for the design of analog and mixed-signal systems in regions of the abstraction–description plane beyond the area of the circuit and physical description levels.

3.4 High-level design strategy based on generic behavior

The design strategy developed in this work is based on a projection of the requirements of high-level analog design onto the fundamental properties of design flows elaborated in Chap. 2. The result is a high-level design strategy characterized by the exploitation of generic behavior of analog systems. A formal definition of this strategy is presented in this section.


3.4.1 Design flow

Efficient high-level analysis and synthesis pose some requirements on the design strategy:

Large systems. As stated in Section 2.2 of Chap. 2, a high-level design strategy in the context of this book is targeted at large complex systems containing analog and mixed-signal electronic circuits. Although the main focus of the developed design strategy is on the analog parts, most systems contain different implementation styles, e.g., in data converters [99] or to improve performance with digital calibration techniques [120]. The modeling and synthesis strategy should deal with these large mixed-signal systems.

Different abstraction levels. The performance characteristics of analog systems deteriorate due to physical effects. To analyze their influence during the design, representations of the system at lower abstraction levels are required, even during high-level design. The modeling strategy should support easy addition of extra sources of non-ideal behavior. During synthesis, subsequent refinement operations need to be applied to take these effects into account.

Different topologies. Architectural exploration is a key issue in a high-level design methodology. Better results can be expected if more topological alternatives are available to the synthesis strategy as architectural transformations. Consequently, models for a plethora of different architectures should be easily created. A large design space can be achieved if the modeling strategy allows straightforward representation of newly generated architectures.

Problem spotting. Whereas a large design space is beneficial to include the best solution among the available architectures, guidelines are needed to explore it. Apart from the required functionality, the reasons for performance degradation may serve as signposts. To this end, a separation between ideal dominant and non-ideal parasitic signals should be possible with the selected system representation.
This segregation allows identification of the problems in the architecture.

Evaluation. During synthesis, several architectures with different topological characteristics, parameters and incorporated non-idealities need to be evaluated. A modeling strategy linked to a time-efficient evaluation of the performance function leads to the ability to explore a vast number of options in a reasonable time. Since at high abstraction levels several non-ideal effects are not accounted for, a high-level synthesis strategy should deal with the possibility that the best architecture may be different before and after refinement operations.

Flexible optimization. To obtain the best results, the high-level design strategy should be accompanied by a dedicated optimization approach. This algorithm should deal with different architectures with different numbers of parameters and even different performance characteristics (e.g.,
stability issues are more relevant in feedback than in feedforward structures). Furthermore, an experienced designer will want to give input to guide the optimization process. A flexible optimization strategy takes all these aspects into account.

These requirements can be compared with the general concepts about design flows for analog and mixed-signal systems of Chap. 2 and with the historic overview of the previous section. The following conclusions about an efficient high-level design strategy can be drawn:

Top-down flow. To deal with larger systems, the top-down design flow is preferred to the flat and bottom-up methodologies. Furthermore, this approach has the best capabilities for an exploration of system architectures and parameters. However, the top-down paradigm can be combined with bottom-up generated performance models for the building blocks. This results in an approach comparable to the 'meet-in-the-middle' strategy for the design of digital systems [27].

Behavioral models. To further simplify the generic top-down design flow, the high-level design is restricted to one description level so that translation operations can be left out. The behavioral description level covers a large part of the abstraction-level axis, which makes it straightforward to represent systems at different levels of abstraction. Moreover, ideal and non-ideal signals can be separated more easily in a behavioral model than in a macro-model or circuit description. The actual language used to describe the models depends on the concrete system that has to be designed.

Generic models. To easily represent different types of architectures at different abstraction levels, a flexible modeling style is adopted. Instead of a specific behavioral model for a particular system, a generic model is proposed which can be used for a large number of topologies. Via a specialization step, the generic model is then translated into a description for a specific architecture.

Simulation-based evaluation.
Simulations offer the greatest flexibility for various types of performance values with a limited set-up time for evaluation of the performance function. Therefore, they are the first choice to analyze the behavior of the (non-ideal) system. Computational time is reduced by using simulation algorithms that exploit the specific properties of the generic model. However, for low-level building blocks, the characteristics can be evaluated with performance models [23, 26, 123].

Evolutionary optimization process. The high complexity of the optimization problem favors the use of a custom evolutionary method over standard algorithms. Genetic operations can alter both the parameters and the topology, which makes it a heterogeneous optimization process. Furthermore, with a population-based approach, worse solutions are still investigated so that, after the introduction of more physical effects, such a solution might turn out to be good. Besides, several solutions can be evaluated in parallel.


Fig. 3.10. High-level design flow for analog and mixed-signal systems based on generic behavior.

A schematic representation of the design flow developed in this work according to these concepts is shown in Fig. 3.10 [74]. In this design flow, the modeling strategy is based on the representation of the system with a generic behavioral model. This general description is first derived from the functional specifications. Then, it is specialized to represent a specific architecture at a specific abstraction level. By simulation of this specific behavioral model, the performance values of the system are determined. Based on a comparison of these performance values with specifications and tolerances, the synthesis strategy determines which operations will be applied: a refinement or a transformation of the parameters or the architecture after an optional simplification step. These operations are considered as generalized genetic mutation or recombination steps. Architectural exploration is achieved via subsequent transformations whereas the systematic analysis loop corresponds to the application of refinement operations. This synthesis strategy is a heterogeneous genetic optimization algorithm which will be further elaborated in Chap. 6.
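The control flow of Fig. 3.10 can be sketched as a loop; the generic model, its specialization, the "simulation" and the refinement step below are hypothetical toy stand-ins that only illustrate the specialize/simulate/select-operation cycle:

```python
import random

# Hypothetical sketch of the loop in Fig. 3.10: simulate the specialized
# model, then either refine (specs met, details missing), finish (specs
# met, enough details), or transform the architecture/parameters.
def simulate(specialized):
    """Toy performance: total gain of a cascade of gain stages."""
    gain = 1.0
    for stage_gain in specialized:
        gain *= stage_gain
    return gain

def synthesize(spec_gain, rng, iterations=500):
    model = [2.0]                       # specialized generic model: one stage
    detail = 0                          # refinement level reached so far
    for _ in range(iterations):
        perf = simulate(model)
        if perf >= spec_gain:           # specs met ...
            if detail >= 2:             # ... and enough details: done
                return model
            detail += 1                 # refine: non-idealities cost gain
            model = [g * 0.9 for g in model]
        else:                           # specs not met: genetic operation
            if rng.choice(["add_stage", "retune"]) == "add_stage":
                model = model + [2.0]   # architectural transformation
            else:
                i = rng.randrange(len(model))
                model[i] *= rng.uniform(1.0, 1.2)  # parameter change
    return model

rng = random.Random(7)
final = synthesize(spec_gain=16.0, rng=rng)
print(len(final), round(simulate(final), 2))
```

The real flow additionally applies simplification steps and treats these operations as generalized genetic mutations within a population, as elaborated in Chap. 6.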


3.4.2 Generic behavioral models

A specific behavioral model describes the behavior of a particular architecture by a set of mathematical relations between the input and output signals and the states of the system. This concept is generalized by the generic behavioral model defined in this work by the following definition [74].

Definition 3.1 (Generic behavioral model). A generic behavioral model of a system describes the functionality in an indirect way as a mathematical interaction between generic functions which represent some relation between signals and states of specific types found in the system.

To define a generic behavioral model, three kinds of ingredients are needed: alphabets, generic functions and an interaction model.

Definition 3.2 (Alphabet). An alphabet is the smallest set to which a signal or state must belong.

Defining the alphabets of a system corresponds to specifying the types of all signals and states of the system. Some frequently encountered alphabets in mixed-signal design are:

• C0[R] = {u | u : R → R : t → u(t)} ➥ the set of continuous-time signals
• FC0[R] = {U | U : R → C : ω → U(ω)} ➥ the set of frequency-domain signals
• R^k = {(x1, . . . , xk) | xi ∈ R} ➥ the set of k-dimensional real-valued state vectors
• B^a = {(b1, . . . , ba) | bi ∈ {0, 1}} ➥ the set of words represented by a binary signals

The formal definition of the input and output alphabets ensures that the model can easily be fitted into a model of a larger system where all building blocks are represented by generic behavioral models. The only condition for connecting an output to an input is a match between the corresponding alphabets.

Definition 3.3 (Generic function). A generic function is a mapping of input elements of particular alphabets onto output elements of particular alphabets described by mathematical parameterized equations. Specialization of the generic functions means assigning specific values to the general parameters of the equations.
For example, the generic function χ specifies a down-conversion operation:

    χ : C0[R] → C0[R] : u(t) → ℜ{(u(t) + jH{u(t)}) · e^(−j2πfsh·t)},   (3.7)

where ℜ{a + jb} = a and H{·} denotes the Hilbert transform. Specialization implies specifying a value for the frequency shift fsh. Another familiar example is a first-order transfer function:

    h1 : FC0[R] → FC0[R] : U(ω) → K · ((jω + z)/(jω + p)) · U(ω),   (3.8)

where the gain K, pole p and zero z should be specified during specialization. Furthermore, a generic function can contain a variable number of parameters. For example, a filter function may be specified where the number of poles is variable.

Definition 3.4 (Interaction model). An interaction model is an ordered set of instructions that calculate all signals and states of the system via mathematical equations containing generic functions applied to signals and states obtained from either the input or the execution of preceding instructions.

The interaction model expresses the dynamic behavior of the system in terms of the generic functions. The most straightforward interaction model is just a sequential application of generic functions ζi to the input signal x and the states qi belonging to some alphabet:

    input x
    q1 ← ζ1(x)
    q2 ← ζ2(q1)
    ...                                           (3.9)
    qn ← ζn(qn−1)
    output v = qn

Figure 3.11 depicts a diagram representing a generic behavioral model with this simple interaction model. More complicated interaction models are little programs with loops and conditional branches, for example indicating when signals and states should be calculated at each time point. The time points are then parameters of the interaction model itself. Conditions in the interaction model may result in the elimination of some generic functions from all instructions that are actually executed. The different elements of the generic behavioral model can be lumped together in a formal machine M which can be represented as an n-tuple:

    M = (A1, A2, . . . , f1, f2, . . . , p1, p2, . . . ),         (3.10)

with the Ai the alphabets, the fi the generic functions, and the pi the parameters of the interaction model.

Fig. 3.11. Schematic representation of the generic behavioral model with alphabets Σ, Q1, . . . , Qn and Ψ, generic functions ζ1, . . . , ζn and interaction model (3.9).
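As an illustration of these definitions (a hypothetical Python sketch, not the formal machine itself), generic functions can be modeled as parameterized closures, specialization as fixing their parameters, and the chain interaction model (3.9) as a loop:

```python
# Hypothetical sketch of a generic behavioral model with the chain
# interaction model (3.9): generic functions are parameterized closures,
# specialization assigns their parameters, and the interaction model
# applies them instruction by instruction.
def first_order_tf(K, z, p):
    """Generic function h1 of (3.8), specialized with gain K, zero z, pole p."""
    def h1(U, w):
        return K * (1j * w + z) / (1j * w + p) * U
    return h1

def chain_interaction_model(zetas):
    """Interaction model (3.9): q1 <- zeta1(x), ..., output v = qn."""
    def run(x, w):
        q = x
        for zeta in zetas:          # one instruction per generic function
            q = zeta(q, w)
        return q
    return run

# Specialization step: assign specific values to the general parameters.
machine = chain_interaction_model([
    first_order_tf(K=10.0, z=1e3, p=1e5),
    first_order_tf(K=1.0, z=1e4, p=1e6),
])

# Evaluate the magnitude response at a few angular frequencies.
for w in (1e1, 1e4, 1e7):
    print(w, abs(machine(1.0 + 0j, w)))
```

Swapping one closure for another (e.g., an ideal versus a non-ideal filter) corresponds to choosing a different specialization while the interaction model stays fixed.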


This formal machine describes the generic behavioral model as a model of computation, comparable to commonly used models like an FSM or a Turing machine [104]. Execution of the machine's algorithm starts with specialization of the generic functions, followed by the application of the interaction model. The representation of a generic behavioral model as a formal machine may explicitly be referred to by the term formal model, which should be distinguished from models used for formal verification [51]. The concept of the generic behavioral model is a compromise between two traditional approaches: dedicated tools using specific behavioral models (e.g., DAISY [42] for ∆Σ modulators), and general-purpose simulators like MATLAB/Simulink or simulators of hardware description languages (e.g., a VHDL-AMS simulator [7]). A trade-off is made between the simulation time and the flexibility to represent a wide range of architectures at different abstraction levels, as explained in the next paragraphs.

Simulation

Specific behavioral models can be simulated with dedicated algorithms which exploit the specific characteristics of the system. Since only a limited set of architectures and details should be represented, simplifications and speed-up techniques for the simulation can be precompiled in a dedicated tool. As a result, the evaluation of specific behavioral models needs a rather short computational time. For general-purpose simulators, no system-specific tricks can be applied to make the simulations faster. Compilation of the models written in a general-purpose language is usually offered, but the time needed for this extra step becomes an important delay when several simulations of different architectures with different parameter values are needed in a design flow. A tool using a generic behavioral model implements the dedicated interaction model. This precompiled simulation algorithm focuses on the properties of the class of architectures captured by the generic representation.
Different simulation methods can be studied and implemented for different classes. Therefore, such a tool requires less computational time than general-purpose simulators. For example, simulations of the generic behavioral model for ∆Σ modulators developed in this work and presented in Chap. 4 require 6× to 10× less computational time than (time-marching) simulations of a model written in VHDL-AMS. On the other hand, the wider application range comes at the cost of an extra step for the specialization of the generic functions and results in longer simulation times than with specific dedicated tools.

Flexibility

Dedicated simulation tools with specific behavioral models allow a user to set some properties like gain, bandwidth or distortion factor. However, these characteristics correspond to a particular architecture represented at some abstraction level. The only way to change the abstraction level is to either
select another specific model or to give special values to the parameters to exclude the effect of a non-ideality, like giving the value zero to a distortion factor. With a general-purpose language, all kinds of architectures can be described at any level of detail, which offers much more flexibility than usually needed in a design flow. The great freedom for the user, however, has the disadvantage that no inherent structure is present in the models. This lack of structure makes it difficult for a design strategy to systematically manipulate the architecture or its parameters.

Generic behavioral models exhibit a large flexibility within a certain class of systems. Several levels of detail can be introduced by providing the appropriate specialization functions. A generic filter function, for example, may be replaced by a model with only dominant poles or by one with parasitic poles as well. Although the architecture must fit within the generic template, this template is the result of a translation of the functional specifications, so that the set of architectures is limited to the collection of useful systems. The available search space during synthesis can be limited by providing an alternative generic behavioral model. A generic model for A/D converters, for example, can be limited to a specific subtype like ∆Σ converters. Another benefit of working with a generic template is that there is a link between a part of the model and its contribution to the behavior of the system. For example, a generic function can indicate a frequency conversion, a filtering or a quantization function. Furthermore, ideal and non-ideal behavior can easily be separated by providing corresponding ideal and non-ideal generic functions.

To conclude, EDA tools based on generic behavioral models stand midway between dedicated tools with specific behavioral models and simulators of general-purpose languages.
They are therefore well suited for a high-level design strategy for analog and mixed-signal systems, and they will be used in our design flow.
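The specialization mechanism described above can be illustrated with a small sketch. The code below is purely hypothetical (all names such as `dominant_pole`, `with_parasitic` and `simulate` are ours, not taken from any of the tools discussed): a generic filter slot in a behavioral model is refined by plugging in either a dominant-pole-only specialization or one that also includes a parasitic pole, while the surrounding interaction scheme stays unchanged.

```python
# Illustrative sketch of specialization functions; all names are invented here.

def dominant_pole(tau):
    """Specialization at a high abstraction level: dominant pole only."""
    def step(y, x, dt):
        # forward-Euler update of tau * dy/dt = x - y
        return y + dt / tau * (x - y)
    return step

def with_parasitic(tau, tau_p):
    """Refined specialization: dominant pole followed by a parasitic pole."""
    dom, par = dominant_pole(tau), dominant_pole(tau_p)
    state = {"mid": 0.0}          # internal node of the two-pole cascade
    def step(y, x, dt):
        state["mid"] = dom(state["mid"], x, dt)
        return par(y, state["mid"], dt)
    return step

def simulate(filter_step, n=1000, dt=1e-6):
    """Generic interaction scheme: apply a unit step to whichever
    specialization was selected and return the final output value."""
    y = 0.0
    for _ in range(n):
        y = filter_step(y, 1.0, dt)
    return y
```

Exchanging `dominant_pole` for `with_parasitic` changes the abstraction level of the model without touching the interaction scheme, which is the property a design flow can exploit systematically.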

3.5 Conclusions

Several techniques for designing analog and mixed-signal systems have been developed at the research stage over the last twenty years. The different approaches can be classified based on the treatment of the architecture: it can be selected or created. Selection can occur before, during or after the determination of the values of the parameters. Creation of an architecture happens either via a top-down or a bottom-up design flow. As a result, four major classes of design strategies have been distinguished. Several optimization algorithms exist to select the values of the parameters once the architecture has been chosen. With the aid of a template, several architectural alternatives can be introduced into the optimization. Top-down creation methods perform subsequent


mapping operations to translate a description of the functionality into a low-level topology. Finally, bottom-up approaches connect basic entities to ‘invent’ the architecture. Based on the concepts of the previous chapter, a top-down high-level design strategy has been introduced based on the representation of the system as generic behavioral models (the modeling strategy) and on an evolutionary optimization process (the synthesis strategy). This modeling style makes it easy to represent a wide range of architectures at different levels of abstraction. The models have a strict structure which makes it straightforward to retrieve the functionality of parts of the model and to identify non-ideal signal flows. Furthermore, dedicated simulation algorithms can be used for time-efficient evaluations. The operations in the abstraction–description plane occurring in the design flow (transformation, refinement and simplification) are translated into the selection of appropriate specialization functions. The heterogeneous optimization algorithm used as synthesis strategy is described in Chap. 6. The next two chapters of this work elaborate on the development of generic behavioral models in both the time and frequency domains.

References

[1] A. Achyuthan and M. I. Elmasry. Mixed Analog/Digital Hardware Synthesis of Artificial Neural Networks. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13(9):1073–1087, Sept. 1994. [2] G. Alpaydın, S. Balkır, and G. Dündar. An Evolutionary Approach to Automatic Synthesis of High-Performance Analog Integrated Circuits. IEEE Trans. on Evolutionary Computation, 7(3):240–252, June 2003. [3] B. A. A. Antao and A. J. Brodersen. ARCHGEN: Automated Synthesis of Analog Systems. IEEE Trans. on Very Large Scale Integration (VLSI) Systems, 3(2):231–244, June 1995. [4] K. Antreich, J. Eckmueller, H. Graeb, M. Pronath, F. Schenkel, R. Schwencker, and S. Zizala. WiCkeD: Analog Circuit Synthesis Incorporating Mismatch. In IEEE Custom Integrated Circuits Conf., pages 511–514, Orlando, May 2000. [5] K. J. Antreich, H. E. Graeb, and C. U. Wieser. Circuit Analysis and Optimization Driven by Worst-Case Distances. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13(1):57–71, Jan. 1994. [6] K. J. Antreich, P. Leibner, and F. Pörnbacher. Nominal Design of Integrated Circuits on Circuit Level by an Interactive Improvement Method. IEEE Trans. on Circuits and Systems, 35(12):1501–1511, Dec. 1988. [7] P. J. Ashenden, G. D. Peterson, and D. A. Teegarden. The System Designer’s Guide to VHDL-AMS. Morgan Kaufmann Publishers, San Francisco, 2003.


[8] T. Bäck, U. Hammel, and H.-P. Schwefel. Evolutionary Computation: Comments on the History and Current State. IEEE Trans. on Evolutionary Computation, 1(1):3–17, Apr. 1997. [9] H. Beke, W. M. C. Sansen, and R. Van Overstraeten. CALMOS: A Computer-Aided Layout Program for MOS/LSI. IEEE Journal of Solid-State Circuits, 12(3):281–282, June 1977. [10] S. Bhattacharya, N. Jangkrajarng, R. Hartono, and C.-J. R. Shi. Correct-by-Construction Layout-Centric Retargeting of Large Analog Designs. In IEEE/ACM Design Automation Conf., pages 139–144, San Diego, June 2004. [11] T. Binder, C. Heitzinger, and S. Selberherr. A Study on Global and Local Optimization Techniques for TCAD Analysis Tasks. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 23(6):814–822, June 2004. [12] R. J. Bowman and D. J. Lane. A Knowledge-Based System for Analog Integrated Circuit Design. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 210–212, Santa Clara, Nov. 1985. [13] F. Bruccoleri, E. A. M. Klumperink, and B. Nauta. Generating All Two-MOS Transistor Amplifiers Leads to New Wide-Band LNAs. IEEE Journal of Solid-State Circuits, 36(7):1032–1040, July 2001. [14] Cadence Design Systems, Inc. Virtuoso NeoCircuit Circuit Sizing and Optimization. 2007. http://www.cadence.com/datasheets/virneocircuit_ds.pdf. [15] L. R. Carley, G. G. E. Gielen, R. A. Rutenbar, and W. M. C. Sansen. Synthesis Tools for Mixed-Signal ICs: Progress on Frontend and Backend Strategies. In IEEE/ACM Design Automation Conf., pages 298–303, Las Vegas, June 1996. [16] F. Catthoor, J. Rabaey, and H. De Man. Target Architectures in the CATHEDRAL Synthesis Systems: Objectives and Impact. In IEEE Int. Symp. on Circuits and Systems, pages 1907–1910, Portland, May 1989. [17] H. Chang, E. Charbon, U. Choudhury, A. Demir, E. Felt, E. Liu, E. Malavasi, A. Sangiovanni-Vincentelli, and I. Vassiliou, editors. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits. Kluwer Academic, 1996.
[18] M. Chu and D. J. Allstot. Elitist Nondominated Sorting Genetic Algorithm Based RF IC Optimizer. IEEE Trans. on Circuits and Systems—I: Regular Papers, 52(3):535–545, Mar. 2005. [19] J. M. Cohn, D. J. Garrod, R. A. Rutenbar, and L. R. Carley. KOAN/ANAGRAM II: New Tools for Device-Level Analog Placement and Routing. IEEE Journal of Solid-State Circuits, 26(3):330–342, Mar. 1991. [20] D. M. Colleran, C. Portmann, A. Hassibi, C. Crusius, S. S. Mohan, S. Boyd, T. H. Lee, and M. del Mar Hershenson. Optimization of Phase-Locked Loop Circuits via Geometric Programming. In IEEE Custom Integrated Circuits Conf., pages 377–380, San Jose, Sept. 2003.


[21] A. R. Conn, P. K. Coulman, R. A. Haring, G. L. Morrill, C. Visweswariah, and Chai Wah Wu. JiffyTune: Circuit Optimization Using Time-Domain Sensitivities. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 17(12):1292–1309, Dec. 1998. [22] J. Crols, S. Donnay, M. Steyaert, and G. Gielen. A High-Level Design and Optimization Tool for Analog RF Receiver Front-Ends. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 550–553, San Jose, Nov. 1995. [23] W. Daems, G. Gielen, and W. Sansen. Simulation-Based Generation of Posynomial Performance Models for the Sizing of Analog Integrated Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(5):517–534, May 2003. [24] N. Damera-Venkata and B. L. Evans. An Automated Framework for Multicriteria Optimization of Analog Filter Designs. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 46(8):981–990, Aug. 1999. [25] J. L. Dawson, S. P. Boyd, M. del Mar Hershenson, and T. H. Lee. Optimal Allocation of Local Feedback in Multistage Amplifiers via Geometric Programming. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 48(1):1–11, Jan. 2001. [26] F. De Bernardinis, M. I. Jordan, and A. Sangiovanni-Vincentelli. Support Vector Machines for Analog Circuit Performance Representation. In IEEE/ACM Design Automation Conf., pages 964–969, Anaheim, June 2003. [27] H. De Man, J. Rabaey, J. Vanhoof, G. Goossens, P. Six, and L. Claesen. CATHEDRAL-II — a computer-aided synthesis system for digital signal processing VLSI systems. IEE Computer-Aided Engineering Journal, 5(2):55–66, Apr. 1988. [28] C. R. C. De Ranter, G. Van der Plas, M. S. J. Steyaert, G. G. E. Gielen, and W. M. C. Sansen. CYCLONE: Automated Design and Layout of RF LC-Oscillators. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 21(10):1161–1170, Oct. 2002. [29] B. De Smedt and G. G. E. Gielen. 
Watson: Design Space Boundary Exploration and Model Generation for Analog and RF IC Design. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(2):213–224, Feb. 2003. [30] M. G. R. Degrauwe, O. Nys, E. Dijkstra, J. Rijmenants, S. Bitz, B. L. A. G. Goffart, E. A. Vittoz, S. Cserveny, C. Meixenberger, G. van der Stappen, and H. J. Oguey. IDAC: An Interactive Design Tool for Analog CMOS Circuits. IEEE Journal of Solid-State Circuits, 22(6):1106–1116, Dec. 1987. [31] M. del Mar Hershenson. Design of pipeline analog-to-digital converters via geometric programming. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 317–324, San Jose, Nov. 2002.


[32] M. del Mar Hershenson, S. P. Boyd, and T. H. Lee. Optimal Design of a CMOS Op-Amp via Geometric Programming. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 20(1):1–21, Jan. 2001. [33] A. Dharchoudhury and S. M. Kang. An Integrated Approach to Realistic Worst-Case Design Optimization of MOS Analog Circuits. In IEEE/ACM Design Automation Conf., pages 704–709, Anaheim, June 1992. [34] A. Doboli, N. Dhanwada, A. Nunez-Aldana, and R. Vemuri. A TwoLayer Library-Based Approach to Synthesis of Analog Systems from VHDL-AMS Specifications. ACM Trans. on Design Automation of Electronic Systems, 9(2):238–271, Apr. 2004. [35] A. Doboli and R. Vemuri. Behavioral Modeling for High-Level Synthesis of Analog and Mixed-Signal Systems From VHDL-AMS. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(11):1504–1520, Nov. 2003. [36] A. Doboli and R. Vemuri. Exploration-Based High-Level Synthesis of Linear Analog Systems Operating at Low/Medium Frequencies. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(11):1556–1568, Nov. 2003. [37] Dongkyung Nam, Yun Deuk Seo, Lae-Jeong Park, Cheol Hoon Park, and Bumsup Kim. Parameter Optimization of an On-Chip Voltage Reference Circuit Using Evolutionary Programming. IEEE Trans. on Evolutionary Computation, 5(4):414–421, Aug. 2001. [38] T. Eeckelaert, R. Schoofs, G. Gielen, M. Steyaert, and W. Sansen. Hierarchical Bottom–up Analog Optimization Methodology Validated by a Delta–Sigma A/D Converter Design for the 802.11a/b/g Standard. In IEEE/ACM Design Automation Conf., pages 25–30, San Francisco, July 2006. [39] F. El-Turky and E. E. Perry. BLADES: An Artificial Intelligence Approach to Analog Circuit Design. IEEE Trans. on Computer-Aided Design, 8(6):680–692, June 1989. [40] M. Fares and B. Kaminska. FPAD: A Fuzzy Nonlinear Programming Approach to Analog Circuit Design. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 14(7):785–793, July 1995. 
[41] R. Fletcher. Practical Methods of Optimization. Wiley, Chichester, 1987. [42] K. Francken and G. G. E. Gielen. A High-Level Simulation and Synthesis Environment for ∆Σ Modulators. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(8):1049–1061, Aug. 2003. [43] A. H. Fung, B. W. Lee, and B. J. Sheu. Self-Reconstructing Technique for Expert System-Based Analog IC Designs. IEEE Trans. on Circuits and Systems, 36(2):318–321, Feb. 1989. [44] Gang Zhang, A. Dengi, R. A. Rohrer, R. A. Rutenbar, and L. R. Carley. A Synthesis Flow Toward Fast Parasitic Closure For Radio-Frequency

Integrated Circuits. In IEEE/ACM Design Automation Conf., pages 155–158, San Diego, June 2004.
[45] G. Geelen. A 6b 1.1GSample/s CMOS A/D Converter. In IEEE Int. Solid-State Circuits Conf., pages 128–129, San Francisco, Feb. 2001.
[46] G. G. E. Gielen and R. A. Rutenbar. Computer-Aided Design of Analog and Mixed-Signal Integrated Circuits. Proceedings of the IEEE, 88(12):1825–1854, Dec. 2000.
[47] G. G. E. Gielen, H. C. C. Walscharts, and W. M. C. Sansen. Analog Circuit Design Optimization Based on Symbolic Simulation and Simulated Annealing. IEEE Journal of Solid-State Circuits, 25(3):707–713, June 1990.
[48] H. Graeb, S. Zizala, J. Eckmueller, and K. Antreich. The Sizing Rules Method for Analog Integrated Circuit Design. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 343–349, San Jose, Nov. 2001.
[49] D. G. Haigh, F. Q. Tan, and C. Papavassiliou. Systematic Synthesis of Active-RC Circuit Building-Blocks. Analog Integrated Circuits and Signal Processing, 43(3):297–315, June 2005.
[50] R. Harjani, R. A. Rutenbar, and L. R. Carley. OASYS: A Framework for Analog Circuit Synthesis. IEEE Trans. on Computer-Aided Design, 8(12):1247–1266, Dec. 1989.
[51] W. Hartong, L. Hedrich, and E. Barke. Model Checking Algorithms for Analog Verification. In IEEE/ACM Design Automation Conf., pages 542–547, New Orleans, June 2002.
[52] J. P. Harvey, M. I. Elmasry, and B. Leung. STAIC: An Interactive Framework for Synthesizing CMOS and BiCMOS Analog Circuits. IEEE Trans. on Computer-Aided Design, 11(11):1402–1417, Nov. 1992.
[53] Hongzhou Liu, A. Singhee, R. A. Rutenbar, and L. R. Carley. Remembrance of Circuit Past: Macromodeling by Data Mining in Large Analog Design Spaces. In IEEE/ACM Design Automation Conf., pages 437–442, New Orleans, June 2002.
[54] N. C. Horta and J. E. Franca. Algorithm-Driven Synthesis of Data Conversion Architectures. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 16(10):1116–1135, Oct. 1997.
[55] Hua Tang and A. Doboli. High-Level Synthesis of ∆Σ Modulator Topologies Optimized for Complexity, Sensitivity, and Power Consumption. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 25(3):597–607, Mar. 2006.
[56] L. P. Huelsman. Optimization—A Powerful Tool for Analysis and Design. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 40(7):431–439, July 1993.
[57] Jie Yuan, N. Farhat, and J. Van der Spiegel. GBOPCAD: A Synthesis Tool for High-Performance Gain-Boosted Opamp Design. IEEE Trans. on Circuits and Systems—I: Regular Papers, 52(8):1535–1544, Aug. 2005.


[58] Jintae Kim, Jaeseo Lee, L. Vandenberghe, and Chih-Kong Ken Yang. Techniques for Improving the Accuracy of Geometric-Programming Based Analog Circuit Design Optimization. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 863–870, San Jose, Nov. 2004. [59] Y.-C. Ju, V. B. Rao, and R. A. Saleh. Consistency Checking and Optimization of Macromodels. IEEE Trans. on Computer-Aided Design, 10(8):957–967, Aug. 1991. [60] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220(4598):671–680, May 1983. [61] E. A. M. Klumperink, F. Bruccoleri, and B. Nauta. Finding All Elementary Circuits Exploiting Transconductance. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 48(11):1039–1053, Nov. 2001. [62] E. A. M. Klumperink and B. Nauta. Systematic Comparison of HF CMOS Transconductors. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 50(10):728–741, Oct. 2003. [63] H. Y. Koh, C. H. Séquin, and P. R. Gray. OPASYN: A Compiler for CMOS Operational Amplifiers. IEEE Trans. on Computer-Aided Design, 9(2):113–125, Feb. 1990. [64] J. R. Koza. Genetic Programming. On the Programming of Computers by Means of Natural Selection. Bradford Book, Cambridge, 1992. [65] J. R. Koza, F. H. Bennett, III, D. Andre, M. A. Keane, and F. Dunlap. Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming. IEEE Trans. on Evolutionary Computation, 1(2):109–128, July 1997. [66] M. J. Krasnicki, R. Phelps, J. R. Hellums, M. McClung, R. A. Rutenbar, and L. R. Carley. ASF: A Practical Simulation-Based Methodology for the Synthesis of Custom Analog Circuits. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 350–357, San Jose, Nov. 2001. [67] W. Kruiskamp and D. Leenaerts. DARWIN: CMOS opamp Synthesis by means of a Genetic Algorithm. In IEEE/ACM Design Automation Conf., pages 433–438, San Francisco, June 1995. [68] K. Lampaert, G. Gielen, and W. M. Sansen.
A Performance-Driven Placement Tool for Analog Integrated Circuits. IEEE Journal of Solid-State Circuits, 30(7):773–780, July 1995. [69] E. Lauwers and G. Gielen. Power Estimation Methods for Analog Circuits for Architectural Exploration of Integrated Systems. IEEE Trans. on Very Large Scale Integration (VLSI) Systems, 10(2):155–162, Apr. 2002. [70] E. A. Lee and T. M. Parks. Dataflow Process Networks. Proceedings of the IEEE, 83(5):773–799, May 1995. [71] J. D. Lohn and S. P. Colombano. A Circuit Representation Technique for Automated Circuit Design. IEEE Trans. on Evolutionary Computation, 3(3):205–219, Sept. 1999.


[72] E. Malavasi, E. Charbon, E. Felt, and A. Sangiovanni-Vincentelli. Automation of IC Layout with Analog Constraints. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 15(8):923–942, Aug. 1996. [73] P. Mandal and V. Visvanathan. CMOS Op-Amp Sizing Using a Geometric Programming Formulation. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 20(1):22–38, Jan. 2001. [74] E. Martens and G. Gielen. Generic Behavioral Modeling of Analog and Mixed-Signal Systems for Efficient Architectural-Level Exploration. In ECSI Forum on Specifications and Design Languages, pages 15–22, Darmstadt, Sept. 2006. [75] E. Martens and G. Gielen. Top-Down Heterogeneous Synthesis of Analog and Mixed-Signal Systems. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 275–280, Munich, Mar. 2006. [76] P. C. Maulik, L. R. Carley, and D. J. Allstot. Sizing of Cell-Level Analog Circuits Using Constrained Optimization Techniques. IEEE Journal of Solid-State Circuits, 28(3):233–241, Mar. 1993. [77] P. C. Maulik, L. R. Carley, and R. A. Rutenbar. Integer Programming Based Topology Selection of Cell-Level Analog Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 14(4):401–412, Apr. 1995. [78] T. McConaghy, P. Palmers, G. Gielen, and M. Steyaert. Simultaneous Multi-Topology Multi-Objective Circuit Sizing Across Thousands of Analog Circuit Topologies. In IEEE/ACM Design Automation Conf., pages 944–947, San Diego, June 2007. [79] F. Medeiro, F. V. Fernández, R. Domínguez-Castro, and A. Rodríguez-Vázquez. A Statistical Optimization-Based Approach for Automated Sizing of Analog Cells. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 594–597, San Jose, Nov. 1994. [80] F. Medeiro, B. Pérez-Verdú, J. M. de la Rosa, and A. Rodríguez-Vázquez.
Using CAD Tools for Shortening the Design Cycle of High-Performance Sigma–Delta Modulators: A 16.4 bit, 9.6 kHz, 1.71 mW Σ∆M in CMOS 0.7 µm Technology. Int. Journal of Circuit Theory and Applications, 25(5):319–334, Sept. 1997. [81] F. Medeiro, B. Pérez-Verdú, A. Rodríguez-Vázquez, and J. L. Huertas. A Vertically Integrated Tool for Automated Design of Σ∆ Modulators. IEEE Journal of Solid-State Circuits, 30(7):762–772, July 1995. [82] S. Mehrotra, P. Franzon, and Wentai Liu. Stochastic Optimization Approach to Transistor Sizing for CMOS VLSI Circuits. In IEEE/ACM Design Automation Conf., pages 36–40, San Diego, June 1994. [83] Mentor Graphics. Eldo Datasheet. 2007. http://www.mentor.com/products/ic_nanometer_design/custom_design_simulation/eldo/upload/eldods.pdf. [84] MunEDA. WiCkeD – Product Overview. 2007. http://www.muneda.com/pdf/WiCkeD%20Product%20Overview.pdf.


[85] B. Murari. Bridging the Gap Between the Digital and Real Worlds: the Expanding Role of Analog Interface Technologies. In IEEE Int. Solid-State Circuits Conf., pages 30–35, San Francisco, Feb. 2003. [86] N. S. Nagaraj. A New Optimizer for Performance Optimization of Analog Integrated Circuits. In IEEE/ACM Design Automation Conf., pages 148–153, Dallas, June 1993. [87] A. Nieuwoudt, T. Ragheb, and Y. Massoud. SOC-NLNA: Synthesis and Optimization for Fully Integrated Narrow-Band CMOS Low Noise Amplifiers. In IEEE/ACM Design Automation Conf., pages 879–884, San Francisco, July 2006. [88] Z.-Q. Ning, T. Mouthaan, and H. Wallinga. SEAS: A Simulated Evolution Approach for Analog Circuit Synthesis. In IEEE Custom Integrated Circuits Conf., pages 5.2.1–5.2.4, San Diego, May 1991. [89] W. Nye, D. C. Riley, A. Sangiovanni-Vincentelli, and A. L. Tits. DELIGHT.SPICE: An Optimization-Based System for the Design of Integrated Circuits. IEEE Trans. on Computer-Aided Design, 7(4):501–519, Apr. 1988. [90] E. S. Ochotta, R. A. Rutenbar, and L. R. Carley. Synthesis of High-Performance Analog Circuits in ASTRX/OBLX. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 15(3):273–294, Mar. 1996. [91] I. O’Connor and A. Kaiser. Automated Design of Switched-Current Cells. In IEEE Custom Integrated Circuits Conf., pages 477–480, Santa Clara, May 1998. [92] P. Oehler, C. Grimm, and K. Waldschmidt. A Methodology for System-Level Synthesis of Mixed-Signal Applications. IEEE Trans. on Very Large Scale Integration (VLSI) Systems, 10(6):935–942, Dec. 2002. [93] H. Onodera, H. Kanbara, and K. Tamaru. Operational-Amplifier Compilation with Performance Optimization. IEEE Journal of Solid-State Circuits, 25(2):466–473, Apr. 1990. [94] Orora Design Technologies, Inc. Arsyn Circuit Synthesis Platform for Automated Analog, RF and Digital Transistor Circuit Design. 2007. http://www.orora.com/Products-arsyn/. [95] R. Phelps, M. Krasnicki, R. A. Rutenbar, L. R. Carley, and J. R.
Hellums. Anaconda: Simulation-Based Synthesis of Analog Circuits Via Stochastic Pattern Search. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 19(6):703–717, June 2000. [96] K. V. Price, R. M. Storn, and J. A. Lampinen. Differential Evolution. A Practical Approach to Global Optimization. Springer, Berlin, 2005. [97] J. Ramos, K. Francken, G. G. E. Gielen, and M. S. J. Steyaert. An Efficient, Fully Parasitic-Aware Power Amplifier Design Optimization Tool. IEEE Trans. on Circuits and Systems—I: Regular Papers, 52(8):1526– 1534, Aug. 2005. [98] M. Ranjan, W. Verhaegen, A. Agarwal, H. Sampath, R. Vemuri, and G. Gielen. Fast, Layout-Inclusive Analog Circuit Synthesis using

Pre-Compiled Parasitic-Aware Symbolic Performance Models. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 604–609, Paris, Feb. 2004.
[99] B. Razavi. Principles of Data Conversion System Design. Wiley, New York, 1995.
[100] J. Ren and M. Greenstreet. A Unified Optimization Framework for Equalization Filter Synthesis. In IEEE/ACM Design Automation Conf., pages 638–643, Anaheim, June 2005.
[101] J. Rijmenants, J. B. Litsios, T. R. Schwarz, and M. G. R. Degrauwe. ILAC: An Automated Layout Tool for Analog CMOS Circuits. IEEE Journal of Solid-State Circuits, 24(2):417–425, Apr. 1989.
[102] J. Ruiz-Amaya, J. de la Rosa, F. V. Fernández, F. Medeiro, R. del Río, B. Pérez-Verdú, and A. Rodríguez-Vázquez. High-Level Synthesis of Switched-Capacitor, Switched-Current and Continuous-Time Σ∆ Modulators Using SIMULINK-Based Time-Domain Behavioral Models. IEEE Trans. on Circuits and Systems—I: Regular Papers, 52(9):1795–1810, Sept. 2005.
[103] W. M. C. Sansen. Analog Design Essentials. Springer, Dordrecht, 2006.
[104] J. E. Savage. Models of Computation. Exploring the Power of Computing. Addison-Wesley, Reading, 1998.
[105] B. J. Sheu, A. H. Fung, and Y.-N. Lai. A Knowledge-Based Approach to Analog IC Design. IEEE Trans. on Circuits and Systems, 35(2):256–258, Feb. 1988.
[106] J.-M. Shyu, A. Sangiovanni-Vincentelli, J. P. Fishburn, and A. E. Dunlop. Optimization-Based Transistor Sizing. IEEE Journal of Solid-State Circuits, 23(2):400–409, Apr. 1988.
[107] J. C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control. Wiley, Hoboken, 2003.
[108] T. Sripramong and C. Toumazou. The Invention of CMOS Amplifiers Using Genetic Programming and Current-Flow Analysis. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 21(11):1237–1252, Nov. 2002.
[109] G. Stehr, H. Graeb, and K. Antreich. Performance Trade-off Analysis of Analog Circuits By Normal-Boundary Intersection. In IEEE/ACM Design Automation Conf., pages 958–963, Anaheim, CA, June 2003.
[110] G. Stehr, H. Graeb, and K. Antreich. Analog Performance Space Exploration by Fourier-Motzkin Elimination with Application to Hierarchical Sizing. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 847–854, San Jose, CA, Nov. 2004.
[111] M. A. Styblinski and L. J. Opalski. Algorithms and Software Tools for IC Yield Optimization Based on Fundamental Fabrication Parameters. IEEE Trans. on Computer-Aided Design, 5(1):79–89, Jan. 1986.
[112] Synopsys. Circuit Explorer Analysis, Optimization and Trade-offs. 2007. http://www.synopsys.com/products/mixedsignal/hspice/circuit_explorer_ds.pdf.


[113] A. Torralba, J. Chávez, and L. G. Franquelo. FASY: A Fuzzy-Logic Based Tool for Analog Synthesis. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 15(7):705–715, July 1996. [114] C. Toumazou and C. A. Makris. Analog IC Design Automation: Part I—Automated Circuit Generation: New Concepts and Methods. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 14(2):218–238, Feb. 1995. [115] G. Van der Plas, G. Debyser, F. Leyn, K. Lampaert, J. Vandenbussche, G. G. E. Gielen, W. Sansen, P. Veselinovic, and D. Leenaerts. AMGIE—A Synthesis Environment for CMOS Analog Integrated Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 20(9):1037–1058, Sept. 2001. [116] P. Vancorenland, C. De Ranter, M. Steyaert, and G. Gielen. Optimal RF design using Smart Evolutionary Algorithms. In IEEE/ACM Design Automation Conf., pages 7–10, Los Angeles, June 2000. [117] P. Vancorenland, G. Van der Plas, M. Steyaert, G. Gielen, and W. Sansen. A Layout-aware Synthesis Methodology for RF Circuits. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 358–362, San Jose, Nov. 2001. [118] J. P. Vanderhaegen and R. W. Brodersen. Automated Design of Operational Transconductance Amplifiers using Reversed Geometric Programming. In IEEE/ACM Design Automation Conf., pages 133–138, San Diego, June 2004. [119] I. Vassiliou, H. Chang, A. Demir, E. Charbon, P. Miliozzi, and A. Sangiovanni-Vincentelli. A Video Driver System Designed Using a Top-Down, Constraint-Driven Methodology. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 463–468, San Jose, Nov. 1996. [120] I. Vassiliou, K. Vavelidis, T. Georgantas, S. Plevridis, N. Haralabidis, G. Kamoulakos, C. Kapnistis, S. Kavadias, Y. Kokolakis, P. Merakos, J. C. Rudell, A. Yamanaka, S. Bouras, and I. Bouras. A Single-Chip Digitally Calibrated 5.15–5.825-GHz 0.18-µm CMOS Transceiver for 802.11a Wireless LAN. IEEE Journal of Solid-State Circuits, 38(12):2221–2231, Dec. 2003.
[121] B. Vaz, N. Paulino, J. Goes, R. Costa, R. Tavares, and A. Steiger-Garção. Design of Low-Voltage CMOS Pipelined ADC’s using 1 picoJoule of Energy per Conversion. In IEEE Int. Symp. on Circuits and Systems, volume 1, pages 921–924, Scottsdale, May 2002. [122] M. Vogels and G. Gielen. Architectural Selection of A/D Converters. In IEEE/ACM Design Automation Conf., pages 974–977, Anaheim, June 2003. [123] G. Wolfe and R. Vemuri. Extraction and Use of Neural Network Models in Automated Synthesis of Operational Amplifiers. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(2):198–212, Feb. 2003.


[124] Xin Li, P. Gopalakrishnan, Yang Xu, and L. T. Pileggi. Robust Analog/RF Circuit Design with Projection-Based Posynomial Modeling. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 855– 862, San Jose, Nov. 2004. [125] Y. Xu, K.-L. Hsiung, X. Li, I. Nausieda, S. Boyd, and L. Pileggi. OPERA: OPtimization with Ellipsoidal uncertainty for Robust Analog IC design. In IEEE/ACM Design Automation Conf., pages 632–637, Anaheim, June 2005. [126] L. Zhang, R. Raut, Y. Jiang, and U. Kleine. Placement Algorithm in Analog-Layout Designs. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 25(10):1889–1903, Oct. 2006. [127] E. Zitzler and L. Thiele. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Trans. on Evolutionary Computation, 3(4):257–271, Nov. 1999.

Part II

Generic Behavioral Modeling

4 Time-Domain Generic Behavioral Models

4.1 Introduction

Generic behavioral models describe the behavior of an entire class of analog or mixed-signal systems instead of directly representing a particular architecture with specific non-idealities. This widens the design space of architectures covered by the model, while the common characteristics of the systems in the class can be exploited to yield time-efficient performance evaluation methods. To offer these properties, systems are described in an indirect way via generic functions and an interaction scheme. These elements are closely related to the evaluation method of the model via simulation: the interaction scheme expresses the dynamic relations between the generic functions. Time- and frequency-domain approaches are commonly used in analog design. Both can be adopted as the intrinsic simulation scheme of a generic behavioral model. This chapter focuses on the time-domain techniques developed in this work, whereas frequency-domain models are discussed in the next chapter. The chapter is organized as follows. First, commonly employed time-domain simulation approaches are summarized, all of which can be used to derive interaction schemes for generic behavioral models. As an example of a class of systems typically evaluated with time-domain simulations, continuous-time ∆Σ modulators are chosen, for which a complete generic behavioral model is presented. Solutions are proposed to represent all major non-idealities within the context of the modeling approach. Then, it is shown how most of the basic principles used in the model for ∆Σ modulators can also be employed for other types of systems. More specifically, a generic behavioral model is presented which is suited to represent general sampled-data systems of limited size.
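As a toy illustration of the "generic functions plus interaction scheme" idea, consider a class of first-order ∆Σ modulators in which the integrator and the quantizer are the generic functions and the feedback loop is the interaction scheme. The code and all names below are our own simplification for illustration only, not the model developed later in this chapter.

```python
# Illustrative sketch: two generic functions and one interaction scheme.

def ideal_integrator(state, x):
    """Generic function 1: ideal discrete-time integrator."""
    return state + x

def ideal_quantizer(v):
    """Generic function 2: ideal single-bit quantizer."""
    return 1.0 if v >= 0.0 else -1.0

def delta_sigma(u, integrator=ideal_integrator, quantizer=ideal_quantizer):
    """Interaction scheme (feedback loop):
    v[n] = integrate(v[n-1], u[n] - y[n-1]);  y[n] = Q(v[n])."""
    v, y = 0.0, 0.0
    out = []
    for x in u:
        v = integrator(v, x - y)
        y = quantizer(v)
        out.append(y)
    return out

bits = delta_sigma([0.5] * 4096)
dc = sum(bits) / len(bits)   # the bit-stream average tracks the DC input
```

Specializing `ideal_integrator` to, say, a leaky or slew-limited variant refines the same interaction scheme to a lower abstraction level without rewriting the model.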


4.2 Time-domain modeling approaches

4.2.1 Time-domain simulation

Time-domain simulation methods are quite popular for systems containing signals at moderate frequencies. Since the result of the simulation is a sampled or analytical representation of all signals in the system, the designer has a direct representation of the waveforms without the need for an extra conversion step, which could introduce additional numerical errors. Instead, the numerical error of the transient signal can be controlled directly. Further, nonlinear behavior is easily added to time-domain models via nonlinear expressions in function of the signals, or via a limitation of the function values (e.g., saturation) or of the derivatives (e.g., slew rate). For mixed-signal systems with a strong interaction between analog and digital parts, simulations in the time domain have an extra benefit. Time-efficient simulation of digital systems is often accomplished via simulations based on events or transactions, like a change of clock phase. For example, the computational model for a digital controller is usually an FSM. This model of computation is inherently a time-domain approach. The time-domain simulation of the analog subsystem fits well into this time-domain simulation framework for the digital parts. As a result, similar to simulations of large digital systems, problems may arise due to a ‘time tyranny’ caused by widely spread time constants of the different parts [2]. However, for the systems used as examples of time-domain generic behavioral models in this chapter, the time constants of the analog and digital signals are of the same magnitude. For a time-domain generic behavioral model, several time-domain models of computation can be selected. First, a brief overview of commonly applied approaches is given.
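The two mechanisms mentioned above for adding nonlinear behavior in the time domain, limiting function values (saturation) and limiting derivatives (slew rate), can be sketched as follows. This is a minimal illustration with made-up component values, not code from any particular simulator.

```python
# Two elementary time-domain nonlinearity models: value clipping (saturation)
# and rate limiting (slew rate). All numbers below are illustrative.

def saturate(v, vmin=-1.0, vmax=1.0):
    """Limit the function value of a signal to [vmin, vmax]."""
    return max(vmin, min(vmax, v))

def slew_limited(target, previous, dt, slew_rate):
    """Move from `previous` toward `target`, no faster than slew_rate V/s."""
    max_step = slew_rate * dt
    delta = target - previous
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return previous + delta

# A large input step is both clipped and rate-limited:
dt, sr = 1e-6, 2e5            # 1 us time step, 0.2 V/us slew rate
y, trace = 0.0, []
for _ in range(20):
    y = slew_limited(saturate(5.0), y, dt, sr)   # 5 V target clips to 1 V
    trace.append(y)
```

The output ramps at the slew-rate limit and then settles at the saturation level, exactly the kind of non-ideal transient behavior a time-domain behavioral model captures directly.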
The main focus in this chapter is on transient simulation of circuits with various types of input signals, which is suited for connection with the computational models for the digital part in mixed-signal systems. Therefore, some time-domain methods are not considered here, like approaches for calculating steady-state responses such as shooting methods (e.g., [46]) or mixed frequency-time methods (e.g., [25]), envelope-following methods which return only the slowly varying component (e.g., [68]), and multiple-time methods which assume multi-periodic signals (e.g., [49]). The general time-domain simulation problem for analog systems addressed in this chapter can be described as finding an N-dimensional signal y(t) on an interval [t0, t0 + T] which is the solution of the following set of equations:

dq(t)/dt = f(q(t), t),    (4.1a)
q(t0) = q0,    (4.1b)
y(t) = g(dq(t)/dt, q(t), t),    (4.1c)


Table 4.1. Overview of properties of four major time-domain simulation approaches: time-marching (TM), sampled-data (SD), waveform relaxation (WR) and collocation methods (COL). Each method is rated qualitatively on interval widths, simulation speed, nonlinearities, discontinuities, jitter and general systems.

with q(t) : R → R^M an M-dimensional signal vector, q0 ∈ R^M a constant M-dimensional vector, and f : R^M × R → R^M and g : R^M × R^M × R → R^N two algebraic vector functions. A circuit description is transformed into the formulation of (4.1) using a standard circuit analysis method like tableau analysis, MNA or state equations [7]. Finding y(t) comes down to finding q(t) and substituting it in (4.1c). There are four major approaches to calculate the solution q(t) of (4.1): TM, SD, WR and COL methods. They split the time interval [t0, t0 + T] into small (TM) or large (SD) time steps, or into time windows (WR), or solve the equations over the entire interval at once (COL). Table 4.1 compares some properties of the different computational models:

• Interval widths: the widths of the subintervals on [t0, t0 + T] (larger is indicated as more positive)
• Simulation speed: a qualitative indication of the computational time
• Nonlinearities: the ease of dealing with models in which strong and weak nonlinearities are included
• Discontinuities: problems raised by discontinuous signals
• Jitter: the extent to which (clock) jitter can be taken into account
• General systems: the ability to apply the method to general systems as opposed to special systems with properties like linearity, clocked signals or loosely coupled blocks

The different approaches are now discussed in more detail.

4.2.2 Time-marching algorithms

In a time-marching (TM) algorithm, the solution at a point ti ∈ [t0, t0 + T] is calculated based on the solution in previous points ti−1, ti−2, ..., where the time step ti − ti−1 is small compared to the periods of the fundamental frequencies in the system. As a result, a major disadvantage is the rapidly increasing number of


calculations for fast-varying signals and hence a long CPU time. On the other hand, the method is generally applicable and a high simulation accuracy can be achieved. The first step in a TM algorithm is to convert the set of differential equations (4.1a) written in ti to a set of algebraic equations between the values of q(t) in ti, ti−1, .... Usually, a polynomial expression is used to substitute q(ti) with:

Σ_{k=1}^{p} ak(hi, ..., hi−p) · q(ti−k) + h̄ · Σ_{l=0}^{p} bl(hi, ..., hi−p) · f(q(ti−l), ti−l),    (4.2)

with hi−j = ti−j − ti−j−1 and h̄ the mean time step. The coefficients ak and bl depend on the chosen integration rule, like forward/backward Euler, trapezoidal, Gear or Runge–Kutta [8]. For example, the second-order Gear–Shichman formula becomes [37]:

q(ti) = ((hi + hi−1)² / (hi−1 · ĥ)) · q(ti−1) − (hi² / (hi−1 · ĥ)) · q(ti−2) + (hi · (hi + hi−1) / ĥ) · f(q(ti), ti),    (4.3)

with ĥ = 2hi + hi−1.

The result of the substitution is a set of M algebraic equations which are nonlinear in general:

F(q(ti), q(ti−1), ..., q(ti−p), ti, ti−1, ..., ti−p) = 0.    (4.4)

These equations can be solved for q(ti) by iteratively linearizing the system around an interim solution, for example with a Newton–Raphson scheme. This requires the calculation of an M × M Jacobian matrix. The linearized equations can then be solved directly, e.g., via an LU-decomposition, with a complexity varying from super-linear for a sparse system to O(M³) for a dense matrix, with M the number of equations proportional to the size of the circuit [5]. For large systems with sparse matrices, iterative schemes like Gauss–Seidel [3] or relaxation methods [20] can be adopted. TM algorithms are commonly used by standard simulators like SPICE [40, 61]. They are quite generally applicable, for both linear and strongly nonlinear circuits. Several mechanisms have been developed to control the local truncation errors introduced at each new point ti and the global error over several time points. The errors, however, can only be bounded by decreasing the time step, resulting in a long simulation time. Problems also arise when some signals or their derivatives are discontinuous, like the ones occurring in circuits with (ideal) switches, which are used frequently in mixed-signal systems. Boolean-controlled elements can be inserted in the circuit equations, which require, for example, interpolation techniques to detect the moment of occurrence of a discontinuity [1].
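The TM loop just described — discretize (4.1a) with an implicit rule and solve the resulting algebraic system (4.4) with Newton–Raphson — can be sketched as follows. This is a minimal illustration under our own assumptions (constant step, backward Euler, finite-difference Jacobian); the function names are ours, not from the book:

```python
import numpy as np

def backward_euler_step(f, q_prev, t_next, h, tol=1e-12, max_iter=50):
    """One TM step: solve q = q_prev + h*f(q, t_next) (backward Euler)
    with a Newton-Raphson iteration and a finite-difference Jacobian."""
    q = q_prev.copy()                        # initial guess: previous point
    M = len(q)
    for _ in range(max_iter):
        F = q - q_prev - h * f(q, t_next)    # residual of the algebraic system, cf. (4.4)
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((M, M))                 # M x M Jacobian by finite differences
        eps = 1e-8
        for j in range(M):
            dq = np.zeros(M)
            dq[j] = eps
            F_pert = (q + dq) - q_prev - h * f(q + dq, t_next)
            J[:, j] = (F_pert - F) / eps
        q = q - np.linalg.solve(J, F)        # direct solve of the linearized system
    return q

def simulate_tm(f, q0, t0, T, n_steps):
    """March over [t0, t0 + T] with a constant step h = T / n_steps."""
    h = T / n_steps
    q, t = np.asarray(q0, dtype=float), t0
    for _ in range(n_steps):
        t += h
        q = backward_euler_step(f, q, t, h)
    return q
```

For dq/dt = −q with q(0) = 1 and 1000 steps on [0, 1], the result is within about 2·10⁻⁴ of e⁻¹, reflecting the first-order accuracy of backward Euler; halving the step roughly halves the error, which is exactly the time-step/accuracy trade-off discussed above.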


4.2.3 Sampled-data methods

Sampled-data (SD) approaches divide the time interval [t0, t0 + T] into larger parts than time-marching (TM) algorithms to reduce the simulation time. The actual interval width is determined by either the nonlinearity level of the circuit or by the clock signal, e.g., in switched-capacitor filters or data converters. By selecting an interval width equal to the clock phase, discontinuities at the switching moments are more easily dealt with than with TM approaches. In an SD method the value of q(t) at a time point ti is described directly as an algebraic function of the value at the previous time point:

q(ti) = φ(q(ti−1), ti−1, ti),    (4.5)

with φ : R^M × R × R → R^M the transition function. This function may be provided directly as a behavioral model, for example with a Z-domain description or by transforming the MNA equations to difference equations [11]. It can also be defined implicitly via a set of algebraic equations between the components of q(ti) and q(ti−1). For a constant time step, simulations occur in discrete time. Several dedicated tools have been developed based on this method, e.g., TOSCA [28] and DAISY [16] for DT ∆Σ converters, DIANA-SC [10], SWITCAP2 [56] and SDSIM [44] for switched-capacitor filters, SIMPLL for PLLs [22] and SPLICE [9] for switched-capacitor filters and data converters. In some special cases, the transition function may be derived from the circuit equations. In particular, circuits that are linear between the time points ti ∈ [t0, t0 + T] fit quite well for SD simulation [14]. The circuit equations can be written as follows:

dq(t)/dt = A · q(t) + B · u(t),    (4.6)

with A and B constant matrices and u(t) a set of input signals. The transition function (4.5) can be written using the exponential of the system matrix A:

φ(q(ti−1), ti−1, ti) = e^{A(ti − ti−1)} · q(ti−1) + e^{A·ti} · ∫_{ti−1}^{ti} e^{−Aτ} · B · u(τ) dτ,    (4.7)

where the last integral can be expressed in closed form in some special cases, like exponential or sinusoidal input signals u(t) [44]. Numerical Laplace inversion is another method to obtain the transition function:

φ(q(ti−1), ti−1, ti) = −(1/(ti − ti−1)) · Σ_{l=1}^{M} Kl · Q(zl / (ti − ti−1)),    (4.8a)
Q(s) = (sI − A)^{−1} · [B · U(s) + q(ti−1)],    (4.8b)


with Kl and zl the residues and poles of a Padé rational function of order [N, M], N < M, and I the identity matrix [55, 45]. SD simulation methods are naturally suited for SD systems like DT filters. When the linear approximation is accurate and the time step is constant, very fast simulations are obtained, since the exponential of the system matrix only has to be calculated once. Discontinuities at the switching times can also easily be dealt with. However, several issues limit the efficiency:

• The integrals in (4.7) must be calculated numerically for general input signals without a closed expression for the values of the integrals. In particular, only exponential, sinusoidal and linear signals lead to simple symbolic expressions.
• Clock jitter can only be taken into account via approximations [14, 43] unless the exponential e^{A∆t} or the matrix inversion in (4.8) is recalculated at each time step.
• For weakly nonlinear circuits where the nonlinearities can be described by polynomial expressions, approaches based on Volterra series [62] can be applied [60, 67]. The input of an nth-order Volterra circuit, which is a linear circuit, depends on the linear and lower-order responses, which should already be available in several time points. As a result, several transition functions over the entire interval [t0, t0 + T] must be applied one after another.

4.2.4 Waveform relaxation

Instead of calculating the system response in subsequent time points of the time interval [t0, t0 + T], the waveforms on large time windows can be approximated at once. In the waveform relaxation (WR) methods, the approximated waveforms for some components of q(t) of (4.1) on the window [t′0, t′0 + T′] are used to improve the results for the other components. The system is split up in S subsystems, each characterized by a subset of the equations (4.1) [27, 41]:

dq^[k](t)/dt = f^[k](q^[k](t), u^[k](t), t),    (4.9)

with the total state vector and the input of the kth subsystem given by:

q(t) = [ q^[1](t)^T ... q^[S](t)^T ]^T,    (4.10a)
u^[k](t) = [ q^[1](t)^T ... q^[k−1](t)^T q^[k+1](t)^T ... q^[S](t)^T ]^T.    (4.10b)

Starting with an initial guess, e.g., q (t) = q 0 , the equations for the subsystems are subsequently solved until convergence is reached. The input of each subsystem is given by the solutions of the previous iterations or subsystems according to a relaxation scheme, like Gauss–Seidel or Gauss–Jacobi [20].


This approach is mainly appropriate for systems that can easily be divided into blocks, like a loosely coupled cascade of subsystems. For such systems, like MOS digital circuits, each equation of (4.1) contains only a few variables, leading to sparse matrices in time-marching (TM) algorithms. For large systems, a multilevel division can be used and WR can be applied in a hierarchical way [53]. To calculate the waveforms for each subsystem, the equations can be linearized using a Newton–Raphson iteration at each iteration i + 1:

dq^[k](i+1)(t)/dt = f^[k](q^[k](i)(t), u^[k](i)(t), t) + (∂f^[k]/∂q^[k])(q^[k](i)(t), u^[k](i)(t), t) · (q^[k](i+1)(t) − q^[k](i)(t)).    (4.11)

In this WRN scheme, only a linear approximation of the waveforms of each subsystem is calculated in each step [51, 64]. The NWR scheme, on the other hand, applies the Newton–Raphson iteration (4.11) to the entire system at each iteration and solves the resulting large linear system with a relaxation method [13]. Since the WR methods calculate the entire waveform over the entire time window at each iteration, time-efficient simulations only result when the waveforms of the previous iteration are a good approximation over the entire window. Solving a simplified system with a TM algorithm may result in a bad approximation near the end of the window. Therefore, the width of the time windows is limited. The NWR scheme solves this problem at the cost of solving very large linear systems.

4.2.5 Collocation methods

Collocation or pseudo-spectral methods solve the system's equations over the entire interval [t0, t0 + T] at once. An approximation for the solution q(t) is written as an element of the P-dimensional space described by a linear combination of the base functions {ψ1(t), ..., ψP(t)}:

q(t) ≈ Σ_{k=1}^{P} q_k · ψk(t),    (4.12)

with unknown coefficients q_k ∈ R^M (M denotes the number of equations used to describe the system). Usually each base function ψk(t) depends on a set of parameters (αk,1, ..., αk,nk). For frequency-domain simulations, the set of base functions corresponds to the trigonometric functions at multiples of the fundamental frequency [24]. Commonly applied base functions for time-domain approaches, on the other hand, are wavelets and spline functions [12], polynomials (e.g., Chebyshev


polynomials [65, 35]), and complex damped exponentials [57]. The choice depends on the signal characteristics. Collocation methods are most effective if there is a good idea about the shape of the signals, so that only a few base functions are needed to describe them. Examples are modulated signals and slowly varying signals, which correspond to polynomials. To find the unknown coefficients q_k, expression (4.12) can be substituted into the differential equations and coefficients of the base functions can be equated. In general, however, these coefficients cannot be easily derived. An alternative is the collocation method [21]. C collocation points are chosen within the interval [t0, t0 + T] in which the equations are written down:

Σ_{k=1}^{P} q_k · (dψk(t)/dt)|_{t=ti} = f(Σ_{k=1}^{P} q_k · ψk(ti), ti),    (4.13)

for i = 1, ..., C. This leads to a set of M · C (nonlinear) algebraic equations from which the unknown coefficients can be solved. Also unknown parameters from the base functions are found. As a result, collocation methods increase the width of the time interval at the expense of larger systems of nonlinear equations, since C must be chosen large enough to obtain the right number of conditions. The success of a collocation method depends greatly on how well the set of base functions describes the exact solution. Usually the approximation is only valid over a limited time interval, which practically limits the interval width. Therefore, the large interval can be split up into smaller pieces with another approximation for each subinterval. A set of boundary conditions should then be introduced, which further increases the complexity of the system of equations to solve [12].

4.2.6 Conclusion

It is clear from Table 4.1 on p. 87 that a trade-off should be made between properties regarding simulation speed, different aspects of accuracy, and flexibility. Consequently, no method is perfect for all situations and hence a combination of several types of algorithms may give the best results. This approach has been adopted in this work. Methods have been developed that try to combine the best of the different techniques for the classes of CT ∆Σ modulators and sampled-data (SD) systems. They are explained in the next sections.
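As a concrete miniature of the collocation scheme (4.12)–(4.13) of Section 4.2.5, the following sketch solves dq/dt = −q, q(0) = 1 with a monomial basis ψk(t) = t^k and equispaced collocation points. This is our own illustration (names, basis and point choice are assumptions, not from the book); for this linear problem the collocation conditions reduce to one linear solve:

```python
import numpy as np

def collocation_solve(degree=6, T=1.0):
    """Collocation sketch for dq/dt = -q, q(0) = 1, basis psi_k(t) = t**k."""
    P = degree + 1                              # number of unknown coefficients c_k
    tc = np.linspace(0.0, T, degree + 1)[1:]    # 'degree' collocation points in (0, T]
    A = np.zeros((P, P))
    rhs = np.zeros(P)
    A[0, 0] = 1.0                               # initial condition: q(0) = c_0 = 1
    rhs[0] = 1.0
    for i, t in enumerate(tc, start=1):         # enforce q'(t_i) + q(t_i) = 0, cf. (4.13)
        for k in range(P):
            dpsi = k * t ** (k - 1) if k > 0 else 0.0
            A[i, k] = dpsi + t ** k
    c = np.linalg.solve(A, rhs)                 # one linear system over the whole interval
    q = lambda t: sum(ck * t ** k for k, ck in enumerate(c))
    return c, q
```

A degree-6 polynomial already matches e^{−t} on [0, 1] to better than 10⁻³, illustrating why few base functions suffice when their shape fits the signal; for oscillatory or fast-switching signals this basis would fail, which is the limitation noted above.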

4.3 A model for continuous-time ∆Σ modulators

Interfaces between the real, analog world and the digital circuitry on chip form a major challenge [39]. Due to the relaxed requirements they impose on the analog building blocks, ∆Σ A/D converters are a popular way to perform the analog-to-digital

Fig. 4.1. Examples of different architectures for CT ∆Σ A/D converters: (a) single-loop second-order ∆Σ low-pass A/D converter; (b) cascaded 2-1 ∆Σ low-pass A/D converter.

conversion [4, 42]. Although discrete-time (DT) architectures are most commonly used, continuous-time (CT) variants stand out compared to their DT counterparts for applications demanding high speed and/or low power [6, 66]. A plethora of architectures for ∆Σ modulators has been developed, e.g., single-loop and cascaded structures (see Fig. 4.1), single- and multi-bit quantization, low- and bandpass converters [47, 18]. Therefore, the class of CT ∆Σ modulators corresponds to a space of architectures that is large enough to show the usefulness of the approach based on generic behavioral models.

4.3.1 The generic behavioral model

The definition of a generic behavioral model for continuous-time (CT) ∆Σ modulators requires the definition of alphabets, generic functions and an interaction scheme. These properties depend on the selected simulation method. The chaotic nature of the signals in a ∆Σ modulator favors the use of time-domain approaches over frequency-domain simulations. CT ∆Σ modulators are therefore a typical example of the application of time-domain computational models.


From the four major approaches summarized in Section 4.2, the most frequently applied methods for ∆Σ modulators are time-marching (TM) algorithms of models written in general-purpose languages like MATLAB/Simulink (e.g., in [50]) and sampled-data (SD) simulations implemented in dedicated tools [44, 54]. From Table 4.1, it can be concluded that the first type of approach makes it easy to add nonlinearities to the model, whereas the second one has a clear advantage with respect to simulation speed, but only for almost linear systems.

Requirements

The selection of the model of computation in this work is guided by the requirements of the model. Since simulations are used in a design flow during architectural exploration, fast SD simulations are preferred. Besides, the sampling operation of the ∆Σ modulators makes the use of an event-driven computational model straightforward. On the other hand, important non-idealities of CT ∆Σ modulators should be taken into account:

Linear non-idealities. The first type of non-idealities to study during an analog design are linear effects of the integrators or resonators, like finite gain and parasitic poles. For architectural exploration, several integrator characteristics at different abstraction levels will be examined. Also the variation of the coefficients of the modulator can be considered as linear non-idealities.

Nonlinear distortion. Weakly nonlinear distortion of the integrators gives rise to harmonic signals and intermodulation products which lower the Signal-to-Noise-and-Distortion Ratio (SNDR) of the modulator. Strong nonlinearities may also be important effects. Whereas the slew rate limitations are more definite in discrete-time (DT) converters, saturation of the outputs of the integrators should be examined closely in the CT variants. Another source of nonlinear distortion in a CT ∆Σ modulator is the nonlinearity of the DAC pulse.
To study these effects, it is necessary that the computational model can easily deal with different types of feedback pulses, like return-to-zero and non-return-to-zero pulses, and asymmetric pulses. Furthermore, the model should handle several types of feedback pulses so that non-rectangular pulses can also be examined during architectural exploration.

Clock jitter and excess pulse delay. The exact period between the sampling moments is variable due to the phase noise of the clock signal. This sampling jitter results in time intervals of different widths between the events of the SD simulation. Furthermore, within a time interval, the DAC pulse is affected by both pulse-delay and pulse-width jitter, as shown in Fig. 4.2. Whereas for DT converters most charge is transferred at the start of the pulse, the current for CT converters is almost constant if


Fig. 4.2. Example of a DAC pulse affected by different types of jitter: sampling jitter (∆tk and ∆tk+1), pulse-width jitter (∆Tpw) and pulse-delay jitter (∆td).

rectangular pulses are used by the DAC during a period, resulting in a charge error proportional to the jitter. Other waveform shapes may reduce this error. The delay of the pulse can be so large that the pulse falls partially in the next interval. Also, the delay may depend on the signal characteristics, which can be regarded as signal-dependent jitter. To study the effects of different types of jitter, the simulation approach should be able to deal not only with different types of feedback pulses, but also with pulse shapes varying from interval to interval.

Thermal noise. Sampled systems like DT ∆Σ modulators suffer from sampling noise expressed by the kT/C noise. Since the sampling operation of a CT ∆Σ modulator happens near the end of the loop, it is suppressed by the gain of the loop. Therefore, the most important noise effects originate from the thermal noise of the first integrator. It should be possible in the computational model to take these effects into account during the time-domain simulations.

Quantizer non-idealities. The quantizer or multi-bit comparator suffers from effects like finite gain, offset, metastability and electronic hysteresis. These effects are described by the characteristic curves of the quantizer.

Including all these effects in the generic behavioral model means that no standard SD algorithm can be adopted. Instead, a dedicated approach has been developed which exploits the specific characteristics of CT ∆Σ modulators. It combines coverage of all the above non-idealities with a high computational efficiency. On the other hand, by opting for a dedicated method, the simulation approach is only applicable to systems that fit into the definition of the class of CT ∆Σ modulators. Figure 4.1 shows two specific members of this class. The architectures in this class are characterized by three parts, as shown in Fig. 4.3 [29]:

• Loop filter: a chain of integrating or resonating building blocks, indicated by the gray zone labeled 'δ' in Fig. 4.1 and 4.3
• Digital-to-analog converters: DACs creating the feedback pulses, split in all parts labeled 'φ'

Fig. 4.3. General model for the class of CT ∆Σ A/D converters, with the elements of the specific modulator of Fig. 4.1b fitted into it.

• Quantizers: comparators calculating the output of the modulator, consisting of all gray zones labeled 'λ'

The ∆Σ modulator is followed by a digital filter that combines the output signals y1, y2, ..., filters out the quantization noise and reduces the sampling frequency. The inherent anti-aliasing property of a CT ∆Σ modulator makes an extra input filter superfluous. Notice that the loop filter in Fig. 4.3 is regarded as one component of the system. The feedback loop around this analog block consists of the quantizers and the DACs. Hence, this class contains no CT feedback loops around the loop filter. Such loops may, however, be part of the internal structure of the loop filter. Since the dedicated approach developed in this work for the class of CT ∆Σ modulators is limited to systems that fit into the generic scheme of Fig. 4.3, systems with CT feedback loops can only be mapped onto the template of the class by considering the loops as parts of the loop filter. For systems with many such loops, the dedicated model for CT ∆Σ modulators becomes less efficient and, consequently, either a dedicated model for a new class or a general-purpose approach should be selected.

Alphabets

Based on the class specifications, the modeling requirements regarding non-idealities, and the event-driven simulation model, the alphabets of the generic behavioral model can be derived:

Input. The definition of the alphabet Σ for the input signals is guided by several considerations. The model of the modulator should be easily connected to the preceding analog building blocks, which can produce different types of waveforms. Further, the most accurate simulation results are in general obtained if the signal is known over the entire interval instead of only in some sampling points. Therefore, the input signal u is a general continuous-time (CT) signal:

u ∈ Σ = C0[R],    (4.14)


which should be provided to the generic behavioral model as an analytical expression.

Output. The output alphabet Ψ is determined by the interface of the modulator to the digital filter. The output bits of all quantizers are gathered in a vector y which is a digital m-bit word:

y ∈ Ψ = B^m,    (4.15)

with B the set of binary values. Due to the hysteresis of the quantizer, the output of a quantizer is determined by the input and the previous output. This is expressed by the state of the quantizer. The alphabet Qout for the vector q out with the states of all quantizers is defined as follows:

q out ∈ Qout = B^a,    (4.16)

with a the total number of binary states.

Noise. Signals originating from the thermal noise are referred to the input of the loop filter so that they can be treated in the same way as the normal input signals. A vector n of CT stochastic signals is the result:

n ∈ Υ = C0[R]^k,    (4.17)

where the definition of the noise alphabet Υ indicates that there are k equivalent input noise signals.

Clock. The choice for a type of sampled-data (SD) computational model implies that one of the signals in the generic behavioral model should carry information about the sampling times. Since sampling jitter leads to different interval widths, all sample moments should be provided. Consequently, the model contains the signal

σ = (t1, t2, ..., ts) ∈ R^s,    (4.18)

where s sampling points are selected on [t0, t0 + T] and R^s is the formal alphabet associated with the signal.

Feedback. The internal feedback signals are delivered by the DACs. To model the different non-idealities of these blocks and different possible pulse shapes, the corresponding alphabet Ω specifies that there are r general CT signals:

v ∈ Ω = C0[R]^r.    (4.19)

Like the input signals, the feedback signals are also provided in an analytical form. The feedback pulses depend not only on the output of the modulator during that interval but also on the previous ones. For example, a non-return-to-zero pulse from 1 to 1 is just a constant signal whereas such a pulse from 0 to 1 will have a finite rise time. Besides, a modulator with


large pulse delay can cause the pulses to be spread out over two adjacent intervals. By adding memory signals q fb to the generic behavioral model, these effects can be analyzed. The alphabet for the feedback memory signals is

q fb ∈ Qfb = B^b,    (4.20)

where b denotes the required number of memory elements (which may differ from the number of output signals m).

Loop filter. Each integrating or resonating block contains ideal and parasitic internal state signals. These states can be lumped together in one state vector s c for the entire loop filter:

s c ∈ Qc = C0[R]^n,    (4.21)

with n the total number of states or order of the loop filter. Due to the sampling operation in the modulator, an additional alphabet should be provided for the discrete-time (DT) states:

s ∈ Q = R^n.    (4.22)

Table 4.2 lists the different alphabets for the interfaces to the rest of the system, the internal memory elements and the internal feedback signals.

Table 4.2. Definitions of the alphabets used in the generic behavioral model for CT ∆Σ modulators corresponding to interfaces, memory and feedback.

Signal type          Alphabet        Definition                Application
Input signal         u ∈ Σ           Σ = C0[R]                 Input test signal
Output signal        y ∈ Ψ           Ψ = B^m                   Output bit stream
Noise signal         n ∈ Υ           Υ = C0[R]^k               Thermal noise
Sample moments       σ ∈ R^s         σ = (t1, t2, ..., ts)     Sampler
States loop filter   s c ∈ Qc        Qc = C0[R]^n              Parasitic poles
Discrete states      s ∈ Q           Q = R^n                   Sampled states
Output state         q out ∈ Qout    Qout = B^a                Quantizer hysteresis
Feedback state       q fb ∈ Qfb      Qfb = B^b                 Excess delay
Feedback signal      v ∈ Ω           Ω = C0[R]^r               DAC pulse

Generic functions

Defining a generic function starts by describing the input and output alphabets of the mapping operation. Then, the parameterized equations between


elements of these alphabets are described. Only the alphabets are required to describe the interaction model of the generic behavioral model. Therefore, this subsection only defines the global structure of the mappings. Section 4.3.2 elaborates on the parametric templates. It is straightforward to associate a generic function with each of the parts that characterize the class of CT ∆Σ A/D converters:

Output function. The quantizers calculate the output of the modulator depending on their inputs and internal states. Since these inputs correspond to states of the loop filter (or a combination of them), the generic output function λ is given by

λ : Q × Qout → Ψ × Qout.    (4.23)

Feedback function. The feedback signals are generated by the digital-to-analog converters in a similar way as the output signals by the quantizers. Hence, the generic feedback function φ is also similar to the output function, but with the appropriate alphabets:

φ : Ψ × Qfb → Ω × Qfb.    (4.24)

Next-state functions. Transition function (4.5) of the standard sampled-data (SD) simulation approach corresponds to the calculation of the state vector s at time tk based on its values at tk−1. This can be regarded as a next-state function:

δ : Σ × Ω × Q × P(σ) → Q,    (4.25)

which has been written as an explicit function of the input, output and feedback signals, and where P(σ) is defined as follows:

P(σ) = {(tk, tk+1) ⊂ σ | tk ∈ σ}.    (4.26)

However, since the loop filter is supposed to have a dominant linear behavior, the next-state function δ can be split up into a linear (δL) and a nonlinear (δNL) next-state function. The nonlinear deviation from the dominant response can then be calculated based on the signals for the states of the loop filter over the time interval obtained from the linear approximation δL:

δL : Σ × Υ × Ω × Q × P(σ) → Qc,    (4.27)
δNL : Σ × Υ × Ω × Q × P(σ) × Qc → Q.    (4.28)

An overview of the mapping characteristics for the generic functions is presented in Table 4.3. The functions are also indicated in the topologies of Fig. 4.1.


Table 4.3. Definitions of the mapping operations of the generic functions of the generic behavioral model for CT ∆Σ A/D converters.

Function               Definition                              Modulator part
Output                 λ : Q × Qout → Ψ × Qout                 Quantizers
Feedback               φ : Ψ × Qfb → Ω × Qfb                   DACs
Linear next-state      δL : Σ × Υ × Ω × Q × P(σ) → Qc          Loop filter
Nonlinear next-state   δNL : Σ × Υ × Ω × Q × P(σ) × Qc → Q     Loop filter

1   input u, σ
2   s ← s0
3   q fb ← q fb,0
4   (y(1), q out) ← λ(s0, q out,0)
5   for k = 1, ..., |σ| − 1
6       update n
7       (v, q fb) ← φ(y(k), q fb)
8       s̄c ← δL(u, n, v, s, (tk, tk+1))
9       s ← δNL(u, n, v, s, (tk, tk+1), s̄c)
10      (y(k + 1), q out) ← λ(s, q out)
11  end for
12  output y

Listing 4.1. Interaction model of the generic behavioral model for CT ∆Σ modulators.

Interaction model

The dynamic behavior of the modulator is captured in the interaction model of the generic behavioral model. The event-driven simulation approach applied in the model for continuous-time (CT) ∆Σ modulators is written down using the generic functions. This results in the interaction model of Listing 4.1. First, all memory present in the model and the initial output value are initialized (lines 2–4). Then, events are generated by the sampling clock, which provides a new sample moment out of σ. Such an event fires the machine to execute a five-step algorithm (lines 6–10):

1. Select a value for the noise signals n.
2. Calculate the feedback signal v with feedback function φ.
3. Calculate the linear next state s̄c on the time interval [tk−1, tk].
4. Calculate the nonlinear next state based on the already calculated linear next state.
5. Update the output vector y.
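The event-driven loop above can be sketched as an executable toy model. This is our own illustration of the interaction scheme for a hypothetical idealized first-order, single-bit CT ∆Σ modulator — λ is a sign comparator, φ an ideal NRZ DAC, δL an ideal integrator, δNL the identity, and noise is omitted; none of these simplifications are the book's full templates:

```python
import numpy as np

def simulate_ct_dsm1(u, ts, s0=0.0):
    """Event-driven loop of Listing 4.1 for an idealized first-order,
    single-bit CT Delta-Sigma modulator (hypothetical example system).
    u  : input signal, a callable u(t)
    ts : array of sample moments (the clock signal sigma)."""
    lam = lambda s: 1 if s >= 0 else 0      # output function lambda: 1-bit quantizer
    phi = lambda y: 1.0 if y else -1.0      # feedback function phi: ideal NRZ DAC
    s = s0                                  # loop-filter state (line 2)
    y = [lam(s)]                            # initial output (line 4)
    for k in range(len(ts) - 1):            # event loop (lines 5-11)
        v = phi(y[-1])                      # DAC level for this interval (line 7)
        # linear next state delta_L: ideal integrator ds/dt = u(t) - v,
        # integrated with the trapezoidal rule; delta_NL is the identity here
        tt = np.linspace(ts[k], ts[k + 1], 17)
        g = np.array([u(t) - v for t in tt])
        s = s + float(np.sum((g[:-1] + g[1:]) / 2.0) * (tt[1] - tt[0]))
        y.append(lam(s))                    # quantize the new state (line 10)
    return np.array(y)
```

Because the feedback loop is opened between sampling moments, each interval needs only one evaluation of each generic function; for a dc input of 0.3, the average of the ±1-mapped output bits converges to 0.3, as expected for a first-order ∆Σ loop.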

4.3 A model for continuous-time ∆Σ modulators


Fig. 4.4. Schematic representation of the interaction model of the generic behavioral model for CT ∆Σ modulators.

Figure 4.4 depicts the diagram of the interaction model. The three main parts of the class of CT ∆Σ modulators indicated in Fig. 4.1 are easily recognized. The figure also mentions the different signals and their alphabets, and the generic functions of the model. The events are generated by the sampler (symbolized by the broken arrow). The curved arrows assign initial values to the memory elements.


Notice that the characteristics of the class of CT ∆Σ modulators make it possible to open the feedback loop between two sampling moments. The link between two consecutive sample intervals is made via the sampled feedback state qfb. This dedicated approach results in an efficient simulation method at the cost of less flexibility, expressed by the template of the class shown in Fig. 4.3 on p. 96. However, when the generic behavioral model is used in the design methodology discussed in Section 3.4 (Fig. 3.10 on p. 66), several simulations of different members of the class of CT ∆Σ modulators are required. Consequently, the generic behavioral model allows the representation of all systems that can be selected during synthesis, while simultaneously providing a dedicated time-efficient simulation approach.

Similar to the general expression (3.10) on p. 68, the generic behavioral model for the class of CT ∆Σ modulators can be written as a machine M:

    M = (Σ, Ψ, Υ, Ω, Rs, Qc, Q, Qout, Qfb, λ, φ, δL, δNL, s0, qout,0, qfb,0).   (4.29)

The first nine elements are the alphabets, λ, φ, δL and δNL the generic functions, and s0, qout,0 and qfb,0 the parameters of the interaction model.

This machine is the formal representation of the generic behavioral model. It contains all the necessary information to implement the model. Whereas in an FSM only digital states are present [52], the model of computation for CT ∆Σ modulators contains both real-valued and digital states [26]. Therefore, it can be considered as a Mixed-Signal Finite State Machine.

4.3.2 Templates for the generic functions

To represent a particular architecture with certain non-idealities, the generic functions of the generic behavioral model should be specified. This section introduces the templates for the next-state, feedback and output functions and their relationship with the characteristics of the ∆Σ modulators. Further, the choices of sampler and noise signals are explained.

Linear next-state function (δL)

The linear next-state function is based on a new expression derived in this work (Theorem 4.1 on p. 106) for the solution of the linear differential equations describing the loop filter. Before this theorem is formulated and proved, the derivation of the actual loop filter equations based on the templates for its building blocks is discussed.

Building block template

To derive a template for the linear next-state function, it is convenient to start with templates for the integrating or resonating building blocks of the loop filter. This procedure makes it easy to analyze and design the blocks


separately. Besides, linear non-idealities are usually properties of the building blocks, like a pole or gain mismatch. In a continuous-time (CT) ∆Σ modulator, a chain of B integrating or resonating building blocks can be identified. If the modulator contains different loops – like in the cascaded topology of Fig. 4.1b – the chain is simply the concatenation of the different subchains. In general, the outputs of all blocks in the chain can be inputs for any block to realize feedback and feedforward loops in the loop filter. Consequently, the nth block of the chain can be characterized by the following generic set of differential equations:

    dqn(t)/dt = An qn(t) + Σ_{i=0, i≠n}^{B} E_{n,i} xi(t) + Fn v(t),   (4.30a)

    xn(t) = Cn qn(t) + Σ_{i=0, i≠n}^{B} g_{n,i} xi(t) + Hn v(t),   (4.30b)

where xn(t) is the output of the nth block and x0(t) coincides with the input signal u(t) of the modulator. qn(t) is a vector with internal variables or states for the nth building block. v(t) denotes the feedback signal (see Table 4.2). The template parameters are An, E_{n,i}, Fn, Cn, g_{n,i} and Hn. It is straightforward to extend the template with extra input signals, like the noise signals n.

There are several ways to use this template in the design flow based on generic behavioral models:

High-level exploration. Straightforward representations for the building blocks are transfer functions with identification of gain, poles and zeros. Such a model can easily be translated into a state-space model that fits into the template [36]. This way, a high-level implementation-independent exploration of the design space can be performed.

Implementation choice. A specific parameterized implementation style for the building blocks can be mapped onto the template with an analysis technique like a state-space description [23]. Figure 4.5 shows two examples of such styles for integrating building blocks. The template parameters are a symbolic function of the implementation-specific non-idealities (e.g., finite gain or gain-bandwidth) [19]. With a library of implementation styles, different options can be explored during architectural exploration and specifications can be determined.

Bottom-up models. Yet another approach is the use of already designed building blocks. Automated bottom-up generation can be employed in a similar way as the sampling of the design space during the bottom-up derivation of performance models. The circuit equations, like tableau, MNA or state-space equations, can directly be used as a specialization for the template of the building block [8]. Several model order reduction techniques can be applied to limit the computational complexity.

Fig. 4.5. Examples of different implementation styles for integrating building blocks: (a) R–C implementation, (b) Gm–C implementation.

Example 4.1. For a low-pass single-loop ∆Σ modulator, a high-level exploration approach can describe the non-ideal integrating building blocks with the transfer function:

    I(s) = 1 / [(s + ωl)(1 + s/ωh)],   (4.31)

with a low-frequency pole ωl, which causes a finite DC gain, and a parasitic high-frequency pole ωh. To fit this function to the generic template, a conversion to a state-space description is performed:

    d/dt [qn,1; qn,2] = [−ωl − ωh, −ωl ωh; 1, 0] [qn,1; qn,2] + [An; 0] x0(t) + [Fn; 0] v(t),   (4.32a)

    xn(t) = [0  ωh] qn(t),   (4.32b)

with An and Fn the feedforward and feedback coefficients. It is assumed that there are no extra loops in the filter. This description of the integrator can be used to explore values for the poles independently of the actual implementation.

On the other hand, a specific implementation can be selected for the integrator. For example, for the R–C integrator of Fig. 4.5a, the macro-model of Fig. 4.6 can be used. The op amp has a finite gain A0 and a finite GBW. The equations corresponding to the generic template of building blocks are:

    d/dt [qn,1; qn,2] = [−1/(R Ci), −1/(R Ci); −GBW, −GBW′] [qn,1; qn,2] + [1/(R1 Ci); 0] x0(t) + [1/(R2 Ci); 0] v(t),   (4.33a)

    xn(t) = [0  1] qn(t),   (4.33b)

with R = R1 ∥ R2 and GBW′ = (1 + 1/A0) GBW. This kind of representation is used to explore and determine values for the gain and gain-bandwidth of the op amp.
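The equivalence of (4.31) and the state-space form (4.32) can be spot-checked numerically by evaluating C (sI − A)⁻¹ B at a test frequency. The pole values and the coefficient An below are arbitrary illustrative choices, not values from the book.

```python
import numpy as np

wl, wh, An = 1.0e2, 1.0e7, 1.0          # illustrative pole values and coefficient
A = np.array([[-(wl + wh), -wl * wh],   # companion-form system matrix of (4.32a)
              [1.0, 0.0]])
B = np.array([[An], [0.0]])
C = np.array([[0.0, wh]])

def tf_state_space(s):
    """Evaluate C (sI - A)^{-1} B for the state-space model (4.32)."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

def I(s):                               # transfer function (4.31)
    return 1.0 / ((s + wl) * (1.0 + s / wh))

s = 1j * 2.0 * np.pi * 1.0e4
print(np.isclose(tf_state_space(s), An * I(s)))  # True
```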


Fig. 4.6. Macro-model for the R–C integrator used in Example 4.1 (the op amp is modeled by the transfer function A0/(1 + s A0/GBW)).

Loop filter description

All vectors qn(t) can be lumped together to obtain the CT state vector sc(t):

    sc(t) = [q1(t); . . . ; qB(t)].   (4.34)

Similarly, the outputs of all integrators are gathered into the vector x(t). The next lemma describes how the equations for these vectors can be obtained if the models of the building blocks are cast into template (4.30).

Lemma 4.1 (Loop filter equation). The differential and algebraic equations characterizing the loop filter of a CT ∆Σ modulator can be written as

    dsc(t)/dt = M sc(t) + B u(t),   (4.35a)
    x(t) = N sc(t) + D u(t),   (4.35b)

where the dependency of the matrices M, N, B and D on the parameters of the templates of the building blocks (4.30) is described as follows:

    M = A + E (IM − G)^{−1} C,   (4.36a)
    B = F + E (IM − G)^{−1} H,   (4.36b)
    N = (IM − G)^{−1} C,   (4.36c)
    D = (IM − G)^{−1} H.   (4.36d)

The matrices A and C are block-diagonal matrices with the system matrices An and the output matrices Cn on the main diagonal respectively. The factors E_{m,n} and g_{m,n} are gathered in E and G respectively, with E_{n,n} = 0 and g_{n,n} = 0. All input signals (x0(t) and v(t)) are represented by u(t). Their gain factors are collected in F = [E_{n,0} Fn]_{n=1...B} and H = [g_{n,0} Hn]_{n=1...B}. Finally, IM denotes the identity matrix of order M, which coincides with the total order of the filter.
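The assembly of the global matrices in (4.36) is mechanical enough to sketch numerically. The block sizes, coupling matrices and random values below are illustrative stand-ins, not taken from the book; the check simply confirms that eliminating x(t) from randomly generated template equations reproduces the assembled matrices.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)

# Two illustrative building blocks, two states and one output each.
A = block_diag(rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))  # block-diag system matrices
C = block_diag(rng.normal(size=(1, 2)), rng.normal(size=(1, 2)))  # block-diag output matrices
E = 0.1 * rng.normal(size=(4, 2))          # output-to-state couplings E_{n,i}
G = np.array([[0.0, 0.0],                  # output-to-output couplings, g_nn = 0
              [0.2, 0.0]])
F = rng.normal(size=(4, 1))                # input gains acting on the states
H = rng.normal(size=(2, 1))                # input gains acting on the outputs

IG_inv = np.linalg.inv(np.eye(2) - G)
M = A + E @ IG_inv @ C                     # (4.36a)
B = F + E @ IG_inv @ H                     # (4.36b)
N = IG_inv @ C                             # (4.36c)
D = IG_inv @ H                             # (4.36d)

# cross-check by eliminating x directly for a random state and input
s, u = rng.normal(size=(4, 1)), rng.normal(size=(1, 1))
x = np.linalg.solve(np.eye(2) - G, C @ s + H @ u)   # x = C s + G x + H u
print(np.allclose(A @ s + E @ x + F @ u, M @ s + B @ u),
      np.allclose(x, N @ s + D @ u))
```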


Proof. To derive (4.35), the template equations are written down for all blocks n = 1, . . . , B. Then, elimination of the output vector x(t) leads directly to the expressions (4.36) for the global matrices. Note that for practical ∆Σ modulators, G contains most non-zero elements in the upper triangle and, consequently, IM − G is usually regular. ⊓⊔

The interconnection between the building blocks of the modulator is reflected by the matrices E and G. No special structure is required for the loop filter: during architectural exploration, alternative topologies with several loops may be examined. Nevertheless, a frequently encountered special case is the feedforward structure, e.g., the basic single-loop and cascaded topologies of Fig. 4.1. Since both E and G are then lower-triangular block matrices, the calculation of the global matrices M, N, B and D is simplified because the inverse is easy to compute. The global system matrix M will also have a lower-triangular block structure, which also results in a simplified expression for the linear next-state function template [29].

Linear next-state template

The general expression for the linear next-state function is found by solving the global set of differential equations (4.35a) between two sampling points tk−1 and tk. Note that expression (4.35b) for the output vector x(t) is part of the template of the generic output function φ. A first choice for solving the system of linear differential equations is to apply the expressions for the transition functions used by the sampled-data (SD) methods. However, as indicated in Section 4.2.3, the calculations at each event become cumbersome for general time intervals with variable widths (caused by clock jitter) and for general signals (e.g., polynomial or damped sinusoidal signals).
Since this increases the computational time, another solution method is proposed which allows exploiting the properties of the class of CT ∆Σ modulators to obtain dedicated time-efficient simulations [29].

Theorem 4.1 (Linear response). For the linear system (4.35a), there exists a set of constant matrices Qk,m, Rk,m, P^r_{l,p}, P^i_{l,p}, T^r_{l,p} and T^i_{l,p}, independent of the time points or input signals, so that the solution for sc(t) on [tk−1, tk] with s = sc(tk−1) is

    sc(t) = κ(u(t), s, tk−1, tk)
          = Σ_{k=1}^{n1} Σ_{m=0}^{ck} Qk,m (∆t)^m e^{λk ∆t} sc(tk−1)
          + Σ_{l=1}^{n2} Σ_{p=0}^{dl} [P^r_{l,p} cos(βl ∆t) − P^i_{l,p} sin(βl ∆t)] (∆t)^p e^{αl ∆t} sc(tk−1)
          + Σ_{k=1}^{n1} Σ_{m=0}^{ck} Rk,m E^{tk−1;u}_{λk,m}(t)
          + Σ_{l=1}^{n2} Σ_{p=0}^{dl} [T^r_{l,p} C^{tk−1;u}_{αl,βl,p}(t) − T^i_{l,p} S^{tk−1;u}_{αl,βl,p}(t)],   (4.37)


with

    n1 = the number of real eigenvalues λk of M,
    n2 = the number of complex eigenvalues αl + jβl of M,
    ck = the minimal value for which dim null(M − λk IM)^{ck+1} equals the algebraic multiplicity of the real eigenvalue λk,
    dl = the minimal value for which dim null(M − (αl + jβl) IM)^{dl+1} equals the algebraic multiplicity of the complex eigenvalue αl + jβl,

and with the E-, C- and S-functions acting as filters defined as follows:

    E^{tk−1;u}_{λk,m}(t) = ∫_{tk−1}^{t} ((t − τ)^m / m!) e^{λk(t−τ)} u(τ) dτ,   (4.38a)
    C^{tk−1;u}_{αl,βl,p}(t) = ∫_{tk−1}^{t} ((t − τ)^p / p!) e^{αl(t−τ)} cos[βl(t − τ)] u(τ) dτ,   (4.38b)
    S^{tk−1;u}_{αl,βl,p}(t) = ∫_{tk−1}^{t} ((t − τ)^p / p!) e^{αl(t−τ)} sin[βl(t − τ)] u(τ) dτ.   (4.38c)

Proof. Since the expression for s̄c(t) should be valid for all time values t, it can be substituted into (4.35a) and the coefficients of corresponding functions can be equalized. This results in a set of equations for the coefficient matrices. For example, the conditions for the matrices Qk,m are:

    (m + 1) Qk,m+1 + λk Qk,m = M Qk,m,   m = 0, . . . , ck − 1,   (4.39a)
    λk Qk,m = M Qk,m,   m = ck.   (4.39b)

These conditions imply that the columns of Qk,m are generalized eigenvectors of order ck − m + 1 of M with eigenvalue λk. Since the generalized eigenspace of eigenvalue λk is a space with the same dimension as the algebraic multiplicity of λk, the coefficient matrix can always be written as a linear combination of the vectors forming a basis for the generalized eigenspace of eigenvalue λk:

    Qk,m = Ek · Γk,m,   (4.40)

where the columns of Ek are the basis vectors and Γk,m contains unknown coefficients. These coefficients can be found recursively from (4.39) once one matrix Γk,m is known. The basis for the generalized eigenspace of eigenvalue λk corresponds to a basis for the null space of (M − λk IM)^{ck}. Several techniques exist to calculate such a basis, for example, using the singular value decomposition of (M − λk IM)^{ck}. Similarly, one can derive the following expressions:

    P^r_{l,p} = F^r_l · ∆^r_{l,p} − F^i_l · ∆^i_{l,p},   (4.41a)
    P^i_{l,p} = F^r_l · ∆^i_{l,p} + F^i_l · ∆^r_{l,p},   (4.41b)

where the columns of F^r_l + jF^i_l form a basis of the generalized eigenspace of eigenvalue αl + jβl, and ∆^r_{l,p} and ∆^i_{l,p} are the matrices with unknown coefficients.


The initial condition of the differential equations at tk−1,

    Σ_{k=1}^{n1} Qk,0 + Σ_{l=1}^{n2} P^r_{l,0} = IM,   (4.42)

gives an extra condition from which the unknown coefficient matrices Γk,0, ∆^r_{l,0} and ∆^i_{l,0} can be determined. A similar reasoning can be followed to show that the coefficients of the matrices Rk,m, T^r_{l,p} and T^i_{l,p} can easily be obtained via the generalized eigenstructure of M. This set of coefficients, however, also depends on the global input matrix B, as can be derived from the condition

    Σ_{k=1}^{n1} Rk,0 + Σ_{l=1}^{n2} T^r_{l,0} = B,   (4.43)

which follows from the condition that the expression for sc(t) should be valid for any input signal. ⊓⊔

The auxiliary function κ defined in Theorem 4.1 can be used to formulate the template of the linear next-state function of the generic behavioral model for CT ∆Σ modulators:

    s̄c = δL(u, n, v, s, (tk−1, tk)) = κ([u; n; v], s, tk−1, tk).   (4.44)

Note that this template is only used indirectly: the templates for the building blocks are specialized, after which the specialization of the template for the linear next-state function is calculated. The application of Theorem 4.1 to determine the linear response of the loop filter of the modulator offers several opportunities to decrease the computational time compared to the traditional SD approaches of Section 4.2.3:

Constant coefficients. The coefficients in the matrices Qk,m, Rk,m, P^r_{l,p}, P^i_{l,p}, T^r_{l,p} and T^i_{l,p} depend only on the system's configuration. Consequently, they need to be calculated only during the initialization phase of the interaction model, so that the overall simulation time does not increase proportionally to the time needed to obtain the coefficients. The time-hungry operations are the calculation of the eigenvalues and eigenvectors and the solution of the conditions (4.42) and (4.43). At each new event, on the other hand, the computational effort consists of evaluating the filter functions and a straightforward evaluation of expression (4.37), which requires at most O(2M³) operations, with M the number of state variables. To compare: if the matrix exponential of transition function (4.5) is obtained via Padé approximation, then O(αM³) flops with α > 4 are needed [20].
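The constant-coefficient idea can be illustrated for the simplest case: a diagonalizable M with distinct real eigenvalues, so that all ck = 0 and only the matrices Qk,0 (the spectral projectors) are needed. They are computed once during initialization; each event then costs only a few scalar exponentials instead of a fresh matrix exponential. The matrix and time step below are illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative diagonalizable system matrix with distinct real eigenvalues.
M = np.array([[-1.0, 0.2, 0.0],
              [0.0, -2.0, 0.3],
              [0.0, 0.0, -3.0]])
lam, V = np.linalg.eig(M)
W = np.linalg.inv(V)

# Initialization phase: spectral projectors Q_k = v_k w_k^T, computed once.
Q = [np.outer(V[:, k], W[k, :]) for k in range(3)]

# Per-event work: Phi = sum_k Q_k * exp(lambda_k * dt), the homogeneous
# (state-dependent) part of (4.37) for this simple case.
dt = 0.37
Phi = sum(np.exp(lam[k] * dt) * Q[k] for k in range(3))
print(np.allclose(Phi, expm(M * dt)))  # True
```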


Structure of the modulator. The typical structure of the loop filter in practical ∆Σ modulators is reflected in the global matrices M and B and, consequently, also in the coefficient matrices. More specifically, several coefficients will be zero, which limits the number of operations required at each event. For example, for a feedforward system all matrices Qk,m and Rk,m are lower triangular, which reduces the number of flops required for the first part by a factor of 2.

Analytical integration. The integrals of the E-, C- and S-functions (4.38) can be calculated analytically for several types of input signals [19]. This is more time-efficient than numerical integration. For example, the implemented analytical integration requires more than 10 times less CPU time than the standard QAG integration algorithm of the QUADPACK library [48]. Symbolic expressions are readily derived for signals of the type

    f(t) = e^{µt} sin(ωt + φ) (c0 + c1 t + . . . + cn t^n),   (4.45)

and for various special cases of this template (e.g., pure sinusoidal or constant signals). This general template allows not only different kinds of input signals, but also a wide exploration of different types of DAC pulses. Most symbolic expressions for (4.45) are recursive in the order of the E-function. As a result, the calculation of an E-function of order k for a certain signal f(t) immediately provides all lower-order functions, which are needed anyway in (4.37). Further, the C- and S-functions are also calculated together for a particular signal. Finally, at each event, only one new time point has to be taken into account, since the start point of an interval was the end point during the calculations for the previous event. Analytical integration also simplifies the processing of discontinuous signals or Dirac pulses. These signals occur in a modulator to model ideal DAC pulses or sampling effects [14].

Constant time step. Although clock jitter is an important effect, it is sometimes neglected during high-level architectural exploration. In this case, expression (4.37) can be simplified considerably. The part dependent on the state vector at time tk−1 corresponds to the matrix exponential of transition function (4.7). Furthermore, the symbolic expressions for the E-, C- and S-functions also reduce to simpler forms.

Summarizing, the template for the linear next-state function of the generic behavioral model makes it possible to reuse the parts of the evaluation algorithm that are constant during the simulation over multiple sampling points. The simulation time is decreased at the expense of the extra memory needed to store coefficients and symbolic expressions.
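The recursive analytical integration can be sketched for the simplest case of (4.45): a constant unit input with λ ≠ 0, where integration by parts gives the closed-form recursion I_m = (dt^m e^{λ dt}/m! − I_{m−1})/λ. The particular values of λ and dt below are arbitrary illustrative choices; the result is checked against adaptive numerical quadrature.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def E_constant_input(lam, m_max, dt):
    """E_{lam,m}-filters of (4.38a) over an interval of width dt for a
    constant unit input, via the closed-form recursion
    I_m = (dt**m * exp(lam*dt)/m! - I_{m-1}) / lam   (lam != 0).
    The recursion yields every lower order along the way, as noted above."""
    vals = [(np.exp(lam * dt) - 1.0) / lam]          # order 0
    for m in range(1, m_max + 1):
        vals.append((dt**m * np.exp(lam * dt) / factorial(m) - vals[-1]) / lam)
    return vals

lam, dt = -1.5, 0.8
analytic = E_constant_input(lam, 3, dt)
numeric = [quad(lambda tau, m=m: (dt - tau)**m / factorial(m)
                * np.exp(lam * (dt - tau)), 0.0, dt)[0] for m in range(4)]
print(np.allclose(analytic, numeric))  # True
```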


Weakly nonlinear next-state function (δNL)

The nonlinear next-state function of the generic behavioral model takes into account all nonlinear distortion effects of the ∆Σ modulator. A distinction is made between weak and strong nonlinearities; therefore, two templates for the nonlinear next-state function are derived. This subsection presents the template for the weak nonlinearities, whereas the next subsection deals with strongly nonlinear behavior. Both functions should be applied subsequently to form the total next-state function.

The template for the weakly nonlinear next-state function is derived by solving a system of differential equations. This set of equations is obtained by first defining a template for the building blocks and then combining them to find the description for the entire loop filter. The solution of these equations is calculated by approximating the responses due to the small weakly nonlinear perturbations with a finite series of orthogonal polynomials [33]. The resulting expressions correspond to the required weakly nonlinear next-state function.

Building block template

Analog integrated circuits exhibiting weakly nonlinear behavior can be modeled using basic nonlinear elements like a nonlinear conductance, capacitance or transconductance, which are described by polynomials [63]. For example, a weakly nonlinear transconductance of order D is characterized as follows:

    i(t) = gm · [v(t) + α2 v²(t) + . . . + αD v^D(t)],   (4.46)

where α2, . . . , αD are distortion coefficients and v(t) is the control variable of the nonlinearity. In general, the characteristic equation is a multivariate polynomial in multiple control variables.

Assume the linearized model for a building block is given by (4.30). In general, the control variables can be described as a linear combination of the variables qn(t) and the input signals xi(t) and v(t). Therefore, the vector with control variables θn(t) for the nth building block can be defined via a transformation matrix Tn:

    θn(t) = Tn · [qn(t); x(t); v(t)],   (4.47)

with x(t) = [u(t) x1(t) . . . xB(t)]^T. To incorporate the weak nonlinearities, the linear template for building blocks (4.30) is extended with terms that depend on powers of the control variables. The resulting weakly nonlinear template can be written as follows:


    dqn(t)/dt + Σ_{k=2}^{D} P_n^{[k]} dθn^{(k)}(t)/dt = An qn(t) + Σ_{i=0, i≠n}^{B} E_{n,i} xi(t) + Fn v(t) + Σ_{k=2}^{D} R_n^{[k]} θn^{(k)}(t),   (4.48a)

    xn(t) = Cn qn(t) + Σ_{i=0, i≠n}^{B} g_{n,i} xi(t) + Hn v(t) + Σ_{k=2}^{D} S_n^{[k]} θn^{(k)}(t),   (4.48b)

with B the number of building blocks in the loop filter and D the maximum distortion order of the weakly nonlinear system. P_n^{[k]}, R_n^{[k]} and S_n^{[k]} contain the parameters that characterize the kth-order nonlinearities. The other matrices have the same meaning as in the linear template (4.30). The superscript (k) of θn^{(k)}(t) indicates that each element of θn(t) is raised to the power k.

Template (4.48) only includes powers of the control variables and no products, which are encountered in multivariate characteristic polynomials. However, by introducing extra control variables, such products can also be fit into the template. For example,

    θ1(t) · θ2(t) = −½ θ1²(t) − ½ θ2²(t) + ½ θ3²(t),   (4.49)

with θ3(t) = θ1(t) + θ2(t). This linear combination can easily be added to the transformation matrix Tn.

To derive a model for a building block that maps onto the template, several techniques can be employed. For example, an analysis similar to MNA [8] directly results in a set of equations similar to the template. The characteristic matrices can be created automatically by filling them with stamps for each element. The user then only has to give the circuit description and the definitions of the nonlinearities as input. It may be beneficial to select another set of variables than for the linear template. If the control variable of a nonlinearity is a current, for instance, then currents may be more appropriate as variables than voltages.

Example 4.2. Weak nonlinearities are usually included only when the abstraction level has already been lowered to a level where the implementation style of the integrator has been decided. Figure 4.7 depicts an example of a nonlinear macro-model for the R–C integrator of Fig. 4.5a. There are three control voltages (vC(t), vR(t) and w(t)), but only two variables for the linear description, which are chosen to be the state variables vC(t) and w(t). The transformation matrix Tn is then defined as follows:

    θn(t) = [w(t); vC(t); vR(t)] = [1, 0; 0, 1; 1, −1] · [w(t); vC(t)] = Tn qn(t).   (4.50)

    Ci(vC) = Ci0 (1 + α2 vC + · · · + αD vC^{D−1}),
    gm(w) = gm0 (1 + β2 w + · · · + βD w^{D−1}),
    Gout(vR) = Gout0 (1 + γ2 vR + · · · + γD vR^{D−1}).

Fig. 4.7. Example of a weakly nonlinear model of order D for an integrator.

Straightforward application of Kirchhoff's laws is employed to derive the characteristic equations,

    dqn(t)/dt + Σ_{k=2}^{D} [0, 0, 0; 0, αk/k, 0] dθn^{(k)}(t)/dt =
        [−(G + gm0 + Gout0)/Cin, Gout0/Cin; (gm0 + Gout0)/Ci0, −Gout0/Ci0] qn(t)
        + [1/(R1 Cin); 0] u(t) + [1/(R2 Cin); 0] v(t)
        + Σ_{k=2}^{D} [−gm0 βk/Cin, 0, −Gout0 γk/Cin; gm0 βk/Ci0, 0, Gout0 γk/Ci0] θn^{(k)}(t),   (4.51a)

    xn(t) = [1  −1] qn(t),   (4.51b)

with G = 1/R1 + 1/R2, which directly fit into template (4.48).
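As a side illustration (not part of the book's derivation) of how such polynomial nonlinearities generate distortion, the harmonic amplitudes of a weakly nonlinear transconductance of the form (4.46) can be predicted directly from its coefficients: for v(t) = A cos ωt, the second and third harmonics have amplitudes gm α2 A²/2 and gm α3 A³/4. All numbers below are arbitrary illustrative values.

```python
import numpy as np

gm, a2, a3, A = 1.0, 0.05, 0.01, 0.5       # illustrative coefficients and amplitude
n, cycles = 4096, 8
t = np.arange(n) / n
v = A * np.cos(2 * np.pi * cycles * t)      # exactly 8 cycles in the window
i = gm * (v + a2 * v**2 + a3 * v**3)        # weakly nonlinear transconductance (4.46)

spec = np.abs(np.fft.rfft(i)) / (n / 2)     # single-sided amplitude spectrum
print(spec[cycles], spec[2 * cycles], spec[3 * cycles])
# fundamental = gm*(A + 3*a3*A**3/4); HD2 = gm*a2*A**2/2; HD3 = gm*a3*A**3/4
```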



Loop filter description

Similar to the linear description of the loop filter, all templates of the building blocks can be combined to obtain a set of differential equations for the entire loop filter.

Lemma 4.2 (Nonlinear loop filter equation). The set of weakly nonlinear characteristic equations for the loop filter of a CT ∆Σ modulator is given by

    dsc(t)/dt + Σ_{k=2}^{D} P^{[k]} dθ^{(k)}/dt = M sc(t) + B u(t) + Σ_{k=2}^{D} U^{[k]} θ^{(k)},   (4.52a)

    x(t) = N sc(t) + D u(t) + Σ_{k=2}^{D} V^{[k]} θ^{(k)},   (4.52b)

with

    U^{[k]} = R^{[k]} + E (IM − G)^{−1} S^{[k]},   (4.53a)
    V^{[k]} = (IM − G)^{−1} S^{[k]},   (4.53b)
    θ(t) = [Ts  Ti] · [sc(t); x(t); v(t)],   (4.53c)

and with the other parameter matrices the same as in Lemma 4.1. The matrices P^{[k]}, R^{[k]} and S^{[k]} are block-diagonal matrices with the matrices P_n^{[k]}, R_n^{[k]} and S_n^{[k]} on the main diagonal respectively. The block elements on the main diagonal of Ts contain the parts of Tn corresponding to qn(t). The other parts of Tn are placed in Ti.

Proof. Analogous to Lemma 4.1, a big system with all template equations for the building blocks is created, from which the output vector x(t) is eliminated to obtain (4.52) and (4.53). ⊓⊔

Nonlinear next-state template

Since the loop filter is a weakly nonlinear system, the nonlinear response will be a small deviation from the linear response. Therefore, a core system can be identified which is deteriorated by a perturbation, similar to the approach of [58]. The effect of the perturbation is then calculated directly. To obtain equations for these perturbations, the total response of (4.52) is divided into a dominant linear part (indicated with a bar) and a small perturbation (indicated with a tilde):

    sc(t) = s̄c(t) + s̃c(t),   (4.54a)
    x(t) = x̄(t) + x̃(t).   (4.54b)

If the input signal is the result of a previous block in the system that also has a dominant linear behavior, a similar distinction between an ideal and a perturbation signal can be made for the input:

    u(t) = ū(t) + ũ(t).   (4.54c)

The weakly nonlinear next-state function δNL returns only the perturbation part, since the linear function δL is used to find the dominant signal. Note that by providing the ideal and non-ideal signals separately, insight is gained into the importance of the perturbation. This information may be used in a design flow to select an appropriate transformation of parameters or architecture to improve the performance. Substituting expression (4.54) into the nonlinear loop filter equations and simplifying by collecting corresponding terms results in a set of equations for only the perturbation part:

    ds̃c(t)/dt = f(s̃c(t), t) = [IM + K(t) Ts]^{−1} [M s̃c(t) + B ũ(t) + e(t)],   (4.55)


with

    e(t) = Σ_{k=2}^{D} U^{[k]} θ(t)^{(k)} − K(t) Ts ds̄c(t)/dt − K(t) Ti d/dt [x(t); u(t)],   (4.56a)

    K(t) = Σ_{k=2}^{D} k P^{[k]} (diag θ(t))^{k−1},   (4.56b)

where 'diag θ(t)' converts the vector θ(t) into a diagonal matrix. An advantage of calculating the perturbation directly instead of the entire response is that small perturbations can be expected to be obtained with larger accuracy: a small relative error on the total signal can still result in a large error on the perturbation signal.

To solve (4.55), one of the methods reviewed in Section 4.2 can be employed. As can be learned from Table 4.1 on p. 87, applying time-marching (TM) algorithms would negate the advantages of a sampled-data (SD) approach regarding computational time. Instead, it is more time-efficient to solve the equations directly over the entire time interval. Therefore, waveform relaxation (WR) and collocation methods are more appropriate. In this work, a combination of these methods is used to solve the perturbation equations (4.55). The resulting gain in simulation speed is up to a factor of 10 for the collocation method compared to a standard TM algorithm [33].

A typical loop filter in a CT ∆Σ modulator contains one major chain of integrating or resonating building blocks with some additional feedforward and feedback paths. This structure makes it advantageous to calculate the perturbation signals for each block separately and combine them afterwards. Assume the total response of the nth building block is given by an auxiliary function κn similar to the one of Theorem 4.1. Like the other variables and signals in the system, it can be split into a linear part and a perturbation:

    κn(u(t), s, tk−1, tk) = κ̄n(u(t), s, tk−1, tk) + κ̃n(u(t), s, tk−1, tk),   (4.57)

where the linear part κ̄n(u(t), s, tk−1, tk) coincides with a subvector of (4.37). Combining this equation with expressions (4.54) leads to:

    κn(u(t), s, tk−1, tk) ≈ κ̄n(ū(t), s̄, tk−1, tk) + κ̃n(ū(t), s̄, tk−1, tk)
                          + κ̄n(ū(t), s̃, tk−1, tk) + κ̄n(ũ(t), s̄, tk−1, tk),   (4.58)

where the perturbation responses to the perturbation signals are neglected. Equation (4.58) is the basis for a relaxation scheme. After the application of the linear next-state function δL, the perturbation responses for each block are calculated (κ̃n(ū(t), s̄, tk−1, tk)). These are used as perturbation inputs for the other building blocks, using the linear function again. For a filter without extra loops, this approximation usually suffices. Otherwise, multiple iterations over all the building blocks should be performed. Two closely connected blocks can better be considered as one block in the iteration scheme due to their large


mutual influence. Consequently, the relaxation scheme is only optional: the perturbation can also be calculated from the entire system at the cost of solving larger sets of algebraic equations.

Regardless of whether the perturbation signals should be found for the entire loop filter or for only a part of it, the perturbation equations are always of the form (4.55). Further, the time interval [tk−1, tk] is normalized to a standard interval (chosen to be [−1, 1]) using the transformation

    ξ = αt + β,   (4.59)

which results in the normalized perturbation equations over the standard interval:

    dŝc(ξ)/dξ = (1/α) f(ŝc(ξ), (ξ − β)/α),   with ŝc(ξ) = s̃c((ξ − β)/α).   (4.60)

This normalization results in the same solution independently of the actual time interval. Since generally no analytical solution exists for this set of equations, an approximation similar to (4.12) over the entire time interval is used:

    ŝc(ξ) ≈ Σ_{i=0}^{P} Ŝi ψi(ξ),   (4.61)



1 ξ−β ξ−β dˆ s c (ξ) = f sˆ c (ξ) , , with sˆ c (ξ) = s˜ c . (4.60) dξ α α α This normalization results in the same solution independently of the actual time interval. Since generally an analytical solution does not exist for this set of equations, an approximation similar to (4.12) over the entire time interval is used: P  ˆ i ψi (ξ) , (4.61) S sˆ c (ξ) ≈ i=0

where a set of (P + 1) base functions ψi is used and the Ŝi are the coefficients to be calculated. The selection of the base functions is guided by some considerations:

• The signals that should be approximated are perturbation signals originating from nonlinearities described by polynomials. These perturbations are zero at the start of the time interval and will probably grow towards the end. As a result, it is quite reasonable to assume that they vary in some polynomial way.
• Since the nonlinear response should be calculated at each event of the sampler, the evaluation of the base functions at some time point should also be time-efficient, which can be obtained with polynomials.
• An approximation should be provided over the entire finite interval. Taylor series, for instance, approximate the signal well only at the beginning of the interval. The main point of interest, however, is the end point of the interval.
• In a practical application, the number of base functions (P + 1) should be determined during the initialization phase. To this end, the base functions should form a series with rapid convergence.

To fulfill these conditions, orthogonal polynomials are chosen as base functions. More specifically, Chebyshev polynomials [35] are selected.

To find the coefficients Ŝi of the polynomial series (4.61), a collocation method is adopted. Substituting the approximation (4.61) in (4.60) and equalizing corresponding coefficients would introduce large errors on the derivatives.
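A toy version of this collocation idea can be sketched for a scalar linear test equation dy/dξ = 1 − y with y(−1) = 0 (whose exact solution is y = 1 − e^{−(ξ+1)}), standing in for the perturbation system. The node choice and matrix assembly below only mimic the spirit of the method described here; they are not the exact construction of Theorem 4.2.

```python
import numpy as np
from numpy.polynomial import Chebyshev

P = 8                                       # polynomial order: P+1 basis functions
basis = [Chebyshev.basis(i) for i in range(P + 1)]
dbasis = [b.deriv() for b in basis]
nodes = np.cos(np.pi * np.arange(P) / (P - 1))   # Chebyshev-Lobatto-type points

rows = [[db(x) + b(x) for b, db in zip(basis, dbasis)] for x in nodes]  # y' + y = 1
rhs = [1.0] * P
rows.append([b(-1.0) for b in basis])       # initial condition y(-1) = 0
rhs.append(0.0)

coef = np.linalg.solve(np.array(rows), np.array(rhs))
y = Chebyshev(coef)
print(y(1.0), 1.0 - np.exp(-2.0))           # collocation result vs exact solution
```

With only nine coefficients the end-point value already agrees with the exact solution to many digits, which illustrates the rapid convergence that motivates the Chebyshev choice.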

116

4 Time-Domain Generic Behavioral Models

Furthermore, all input signals would have to be written in terms of the base functions. Instead, the differential equations are written down in P collocation points. Together with the initial conditions, a nonlinear system is obtained from which the unknown coefficients can be obtained. This is described by Theorem 4.2.

Theorem 4.2 (Solution of the perturbation equations). The coefficients \hat{S}_i of the Chebyshev approximation of order P for the solution \hat{s}_c(\xi) of (4.60) can be approximated to first order by

\hat{S} = \begin{bmatrix} \hat{S}_0 \\ \vdots \\ \hat{S}_P \end{bmatrix} \approx -\left(\frac{dv(0)}{d\hat{S}}\right)^{-1} \cdot v(0), \tag{4.62}

with v : \mathbb{R}^{(P+1)M} \to \mathbb{R}^{(P+1)M} defined by

v(\hat{S}) = \begin{bmatrix} T_0'(\xi_1)\, I_M & \cdots & T_P'(\xi_1)\, I_M \\ \vdots & \ddots & \vdots \\ T_0'(\xi_P)\, I_M & \cdots & T_P'(\xi_P)\, I_M \\ T_0(-1)\, I_M & \cdots & T_P(-1)\, I_M \end{bmatrix} \hat{S} \;-\; \frac{1}{\alpha} \begin{bmatrix} f\!\left(\hat{s}(\xi_1), \frac{\xi_1-\beta}{\alpha}\right) \\ \vdots \\ f\!\left(\hat{s}(\xi_P), \frac{\xi_P-\beta}{\alpha}\right) \\ 0 \end{bmatrix}, \tag{4.63}

with

\hat{s}(\xi_j) = \begin{bmatrix} T_0(\xi_j)\, I_M & \cdots & T_P(\xi_j)\, I_M \end{bmatrix} \hat{S}, \tag{4.64}

\xi_j = \cos\frac{(j-1)\pi}{P-1}, \tag{4.65}

T_i(\xi_j) = \cos\frac{i\,(j-1)\pi}{P-1}, \tag{4.66}

T_i'(\xi_j) = \begin{cases} i + 2i \sum_{l=1}^{(i-1)/2} \cos\frac{2l(j-1)\pi}{P-1}, & i \text{ odd}, \\[4pt] 2i \sum_{l=1}^{i/2} \cos\frac{(2l-1)(j-1)\pi}{P-1}, & i \text{ even}. \end{cases} \tag{4.67}

Proof. The P collocation points on the standard interval [-1, 1] are chosen according to the Gauss–Lobatto nodes, which for Chebyshev polynomials leads to the values (4.65). For the special case P = 1, the end point (1) is selected as collocation point. Since the Chebyshev polynomials are given by

T_i(\xi) = \cos(i \arccos \xi), \tag{4.68}

it is straightforward to show that the function values and derivatives in the collocation points are given by (4.66) and (4.67) respectively. Then, for each collocation point \xi_j, the approximation can be substituted into the system (4.60):

4.3 A model for continuous-time ∆Σ modulators

117

\frac{d\hat{s}(\xi_j)}{d\xi} = \sum_{i=0}^{P} \hat{S}_i\, T_i'(\xi_j) = \frac{1}{\alpha}\, f\!\left(\hat{s}_c(\xi_j),\, \frac{\xi_j-\beta}{\alpha}\right). \tag{4.69}

Since at the start point of the interval (-1) no perturbation occurs, the initial condition becomes

\hat{s}(-1) = \sum_{i=0}^{P} \hat{S}_i\, T_i(-1) = 0. \tag{4.70}

Combining (4.69) and (4.70) corresponds to the equation v(\hat{S}) = 0. The coefficients \hat{S} are the solution of this set of nonlinear algebraic equations. Starting with an initial guess of zero perturbation and applying a Newton–Raphson iteration results in the first-order approximation (4.62). To obtain more accurate results, more Newton–Raphson iterations can be performed. However, since the system is weakly nonlinear, a first-order approximation may actually be sufficient. The coefficients are then found as the solution of one system of (P + 1)M linear equations. ⊓⊔

Application of Theorem 4.2 leads to the definition of the template corresponding to the weakly nonlinear next-state function δNL:

s = \delta_{NL}(u, n, v, s, (t_{k-1}, t_k), \bar{s}_c(t)) = \bar{s}_c(t_k) + \sum_{i=0}^{P} \hat{S}_i\, T_i(1), \tag{4.71}

with \hat{S}_i given by (4.62) and T_i(1) = 1 for all Chebyshev polynomials.

The complexity of the weakly nonlinear next-state function depends on how efficiently the Jacobian of (4.62) can be calculated. Symbolic expressions can be derived for the Jacobians of (4.55). They can then be combined to obtain dv(0)/d\hat{S}. Several parts are common to the Jacobians and the function values. A further speed-up is obtained since some parts only have to be calculated once at the beginning of the simulation. After all, the characteristic equations of the system and the normalized time interval do not change during the execution of the interaction scheme of the generic behavioral model.

Strongly nonlinear next-state function (δNL)

The second part of the nonlinear next-state function δNL is used to take into account the effects of strong nonlinearities. In this section, a template to incorporate voltage saturation of the outputs of the building blocks is elaborated. Slew rate limitations are less important in continuous-time (CT) ∆Σ modulators, but they can be treated in a similar way. To characterize a strong nonlinearity of a signal s(t), a piecewise function is used. For example, for saturation, this results in:

s(t) = \begin{cases} \operatorname{sgn}(f(t))\, A, & |f(t)| \ge A, \\ f(t), & |f(t)| < A. \end{cases} \tag{4.72}
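Returning briefly to the weakly nonlinear part, the collocation scheme of Theorem 4.2 can be made concrete with a small numerical sketch in Python. The scalar test equation, its coefficients, the initial value and the order P are illustrative stand-ins for the perturbation equations (4.60), and for clarity the sketch iterates Newton–Raphson to convergence instead of taking only the single first-order step (4.62).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative weakly nonlinear test equation on the standard interval [-1, 1]:
#   dy/dxi = g(y) = -0.8*y + 0.1*y**3,   y(-1) = 1
def g(y):
    return -0.8 * y + 0.1 * y ** 3

y0, P = 1.0, 6

# P Gauss-Lobatto collocation points xi_j = cos((j-1)*pi/(P-1)), cf. (4.65)
xi = np.cos(np.arange(P) * np.pi / (P - 1))

def residual(c):
    # Differential equation enforced at the collocation points, cf. (4.69),
    # plus the initial condition at xi = -1, cf. (4.70)
    r_ode = C.chebval(xi, C.chebder(c)) - g(C.chebval(xi, c))
    return np.append(r_ode, C.chebval(-1.0, c) - y0)

# Newton-Raphson on the P+1 Chebyshev coefficients, starting from zero;
# a single step from zero corresponds to the first-order formula (4.62)
c = np.zeros(P + 1)
for _ in range(15):
    r = residual(c)
    J = np.empty((P + 1, P + 1))
    for i in range(P + 1):
        d = np.zeros(P + 1)
        d[i] = 1e-7
        J[:, i] = (residual(c + d) - r) / 1e-7   # finite-difference Jacobian
    c = c - np.linalg.solve(J, r)

# Reference: fine fixed-step RK4 integration of the same equation
y, h = y0, 2.0 / 20000
for _ in range(20000):
    k1 = g(y); k2 = g(y + h * k1 / 2); k3 = g(y + h * k2 / 2); k4 = g(y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

end_error = abs(C.chebval(1.0, c) - y)   # end-point value is what (4.71) needs
```

Even for this modest order, the value at the end of the interval, which is exactly what the next-state function needs, agrees with the time-marching reference.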


As a result, it is straightforward to model them by splitting the system accordingly. Instead of one model for the system, several models are used, each of them enabling another case of the piecewise strongly nonlinear functions. For the example of (4.72), this means that in such a model the output of a building block becomes the saturation value instead of the unsaturated value. For each of these models, the linear and weakly nonlinear next-state functions can be employed. For a general system with B building blocks and saturation with a positive and a negative value defined by (4.72), there are theoretically 3^B different systems. For practical purposes, this number can be reduced by using models for the systems with a different number of inputs (e.g., an extra input containing a constant signal of either A or -A). Furthermore, several combinations are quite unlikely, so the number is in practice limited to a multiple of B.

To find the point t_sw at which to switch between the models, the outputs of the building blocks are approximated by a polynomial expression p(t) of low degree. For the perturbation signals, the CT signals are already available as Chebyshev series. The linear part can be interpolated by calculating the values at some time points on [t_{k-1}, t_k] using the analytical expression (4.37). Furthermore, the derivatives can also easily be calculated using the differential equation of the loop filter. Hence, Hermite interpolation can be applied. In general, a second-order polynomial provides a good estimate of the output values compared to the saturation values. The switching point t_sw is the solution of an equation like

p(t_{sw}) = \pm A, \tag{4.73}

which can easily be solved for polynomials of limited degree. Once the switching point is found, the correct system model should be selected and the corresponding next-state functions should be applied.
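This root-finding step can be sketched as follows; the quadratic coefficients, the interval and the saturation level are illustrative values.

```python
import numpy as np

def switching_point(c0, c1, c2, A, t0, t1):
    """Earliest t in (t0, t1] where the interpolant p(t) = c0 + c1*t + c2*t**2
    reaches one of the saturation levels +A or -A; None if it never does."""
    candidates = []
    for level in (A, -A):
        a, b, cc = c2, c1, c0 - level          # p(t) - level = 0
        disc = b * b - 4 * a * cc
        if a != 0 and disc >= 0:
            r = np.sqrt(disc)
            candidates += [(-b + r) / (2 * a), (-b - r) / (2 * a)]
    inside = [t for t in candidates if t0 < t <= t1]
    return min(inside) if inside else None

# Example: p(t) = 0.2 + 3t - t^2 on [0, 1] with saturation at A = 1
tsw = switching_point(0.2, 3.0, -1.0, 1.0, 0.0, 1.0)
```

If `switching_point` returns None, no output saturates during the interval and the current system model can be kept.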
By consequence, the corresponding template for the strongly nonlinear next-state function can be written as follows:

s = \delta_{NL}(u, n, v, s, (t_{k-1}, t_k), \bar{s}_c(t)) = \delta_{NL}^{[n_2]}\!\left(u, n, v, s_{sw}, (t_{sw}, t_k), \delta_{L}^{[n_2]}(u, n, v, s_{sw}, (t_{sw}, t_k))\right), \tag{4.74a}

with the state vector at the switching point s_{sw} given by

s_{sw} = \delta_{NL}^{[n_1]}\!\left(u, n, v, s, (t_{k-1}, t_{sw}), \delta_{L}^{[n_1]}(u, n, v, s, (t_{k-1}, t_{sw}))\right). \tag{4.74b}

The models of the system before and after t_sw are indicated by the numbers n_1 and n_2 in the superscripts of the next-state functions δNL respectively.

Output function (λ)

The output function of the generic behavioral model of Fig. 4.4 consists of two parts. The first part corresponds to the output function of the loop filter, given

Fig. 4.8. Example of the characteristic curves ζ0(x) and ζ1(x) of a single-bit quantizer: (a) qout = 0; (b) qout = 1.

by (4.35b) for the linear approximation and by (4.52b) if weak nonlinearities should also be taken into account. The second part corresponds to the characteristics of the quantizers. A simple ideal comparator can easily be represented by a sign function without the need for extra states. In general, however, for all 2^a discrete values possible for the output state q_out, an input-output curve \zeta_{q_{out}}(x) is defined together with an output next-state function \delta_{out,q_{out}}(x). Metastability can be introduced by defining stochastic functions as quantizer characteristics. With these characteristics, the template for the output function can be defined as follows:

(y, q_{out}) = \lambda(s, q_{out}) = \left(\zeta_{q_{out}}(x(s)),\ \delta_{out,q_{out}}(x(s))\right), \tag{4.75}

with x(s) = x(t_k) given by an output function of the loop filter.

Example 4.3. Figure 4.8 depicts an example of a 1-bit quantizer model with two characteristic curves ζ0(x) and ζ1(x) with finite gain and offset. The use of two different curves allows the modeling of (electronic) hysteresis. The output next-state function \delta_{out,q_{out}}(x), where q_out is either zero or one, is given by \zeta_{q_{out}}(x), shown in Fig. 4.8.
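A minimal sketch of such a two-curve quantizer, with hypothetical gain, offset and hysteresis values:

```python
def quantizer(x, q_out, gain=20.0, offset=0.01, hyst=0.02):
    """One evaluation of the output function: returns (y, next q_out).
    The curve zeta_{q_out} has finite gain and offset; the switching threshold
    depends on the current state q_out, which models hysteresis."""
    y = min(max(gain * (x - offset), 0.0), 1.0)           # zeta_{q_out}(x)
    threshold = offset + (hyst if q_out == 0 else -hyst)  # state-dependent
    return y, (1 if x > threshold else 0)                 # delta_{out,q_out}

# Rising and falling inputs switch at different levels:
q = 0
_, q = quantizer(0.02, q)   # below the rising threshold 0.03: q stays 0
_, q = quantizer(0.05, q)   # above 0.03: q becomes 1
_, q = quantizer(0.00, q)   # above the falling threshold -0.01: q stays 1
```

Note that the same input x = 0.02 yields a different next state depending on the current state, which is exactly the hysteresis of Fig. 4.8.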

Feedback function (φ)

Similar to the output function, a set of characteristic curves should be defined to describe the shape of the pulse of the D/A converters. More specifically, a feedback signal \eta_r and the feedback next-state function \delta_{fb,r} should correspond to each achievable value of the combination r = (y, q_fb). For a single-bit D/A converter, for instance, four different signals should be defined. They correspond to the different transitions of the DAC input (0–0, 0–1, 1–0 or 1–1). Clearly, an unchanged input value may be represented by a constant output signal in the case of pulses that last the entire sampling period.


Fig. 4.9. Example of the characteristic curves h_(y,q_fb)(t) of a single-bit D/A converter: (a) y = 0, q_fb = 0; (b) y = 0, q_fb = 1; (c) y = 1, q_fb = 0; (d) y = 1, q_fb = 1. The pulses take the values ±A_ref around the sampling moment t_k.

Pulse-width jitter and pulse-delay jitter can easily be added to the model by describing the characteristic curves with stochastic variables with a Gaussian distribution. This results in an adaptation of the pulse shape at each new event. The use of a feedback function in the model makes it easy to explore different waveforms. A typical transformation during architectural exploration would be the selection of different pulse shapes [43]. The template for the generic feedback function becomes:

(v, q_{fb}) = \phi(y, q_{fb}) = \left(\eta_{y,q_{fb}},\ \delta_{fb,(y,q_{fb})}\right). \tag{4.76}
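As a sketch, the four characteristic curves of a delayed single-bit non-return-to-zero DAC can be folded into one parameterized function; the delay and reference level below are illustrative values.

```python
def dac_pulse(y, q_fb, t, t_d=0.1, A_ref=1.0):
    """Feedback signal eta_{(y, q_fb)}(t) on the current sampling interval:
    the DAC holds the previous level (state q_fb) until the delay t_d has
    passed, then switches to the level corresponding to the new input y."""
    held = A_ref if q_fb == 1 else -A_ref
    new = A_ref if y == 1 else -A_ref
    return held if t < t_d else new

def dac_next_state(y, q_fb):
    """delta_{fb,(y,q_fb)} = y: remember the input for the next interval."""
    return y

# A 0 -> 1 transition: the old level persists during the delay
v_early = dac_pulse(1, 0, 0.05)   # still -A_ref
v_late = dac_pulse(1, 0, 0.50)    # +A_ref after the delay
```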

Example 4.4. Figure 4.9 shows an example of a non-return-to-zero pulse for a single-bit DAC. Due to the delay, the pulse partially shifts into the next sampling interval. This can easily be modeled using four different characteristic curves. The function calculating the new state of the feedback part is then defined by \delta_{fb,(y,q_{fb})} = y. As a result, at each sampling moment, the feedback state q_fb corresponds to the previous input value.

Sampler (σ)

The sampling moments provided to the machine M as the set σ should be generated so that sampling jitter is included. Two approaches are commonly used. The first one chooses the time points as follows:

\sigma = (kT_s + \Delta T_k)_{k=1,\dots,s}, \tag{4.77}

with T_s the average time step and \Delta T_k a Gaussian-distributed stochastic variable with zero mean and variance \sigma_J^2. The other model uses a Markov process to describe the generation of the sampling points. Each new time point depends on the value of the previous one:

t_k = t_{k-1} + T_s + \Delta T_k. \tag{4.78}

The second model agrees better with the sample moments generated by a real clock affected by jitter [6] due to the introduction of extra phase noise.
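The difference between the two models is easy to see in a short Monte Carlo sketch (all numerical values are illustrative): with (4.77) the deviation from the ideal grid stays bounded at σJ, whereas with (4.78) it accumulates like a random walk and grows as σJ·sqrt(k), which is the extra phase noise mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
Ts, sigma_j, N, runs = 1.0, 0.01, 1000, 2000
k = np.arange(1, N + 1)

# Model (4.77): every sampling moment deviates independently from k*Ts
t_indep = k * Ts + rng.normal(0.0, sigma_j, (runs, N))

# Model (4.78): each moment builds on the previous one (Markov process)
t_markov = np.cumsum(Ts + rng.normal(0.0, sigma_j, (runs, N)), axis=1)

dev_indep = (t_indep - k * Ts)[:, -1].std()    # stays ~ sigma_j
dev_markov = (t_markov - k * Ts)[:, -1].std()  # grows ~ sigma_j * sqrt(N)
```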


Thermal noise (Υ)

Noise could be included in the model by applying a set of input signals n ∈ Υ corresponding to the equivalent noise signals of the blocks. This approach, however, has some disadvantages:

• A large set of signals, most probably sinusoidal signals with random phase and amplitude, should be used to obtain realistic noise signals. Adding all these signals as extra input signals to the model of the modulator increases the computation time considerably.
• Noise signals like white noise or 1/f noise are usually defined in the frequency domain. For general spectra, it is not straightforward to find a set of transient stochastic signals with the same power spectral density.

Because of these drawbacks, another approach is followed in the novel generic behavioral model. The noise signals are transformed to stochastic variables whose values can be added to the state vector s of the loop filter. Thermal noise is usually studied under the assumption of a linear system. For such systems and for white noise sources, the characteristics of these variables are given by Theorem 4.3.

Theorem 4.3 (Equivalent noise variables). The perturbation of the nth state variable s_n ∈ s of an M-dimensional linear system caused by a white Gaussian random process n(t) with spectral density S_n(f) = N_0, defined as the equivalent input noise source of the system, is equivalent to the accumulated effect of M stochastic variables n with a joint multivariate Gaussian distribution with zero mean and with an M × M covariance matrix K_n. For each couple of eigenvalues (\lambda_k, \lambda_l) with corresponding orders (m_k, m_l), ranging from 0 to c_k or c_l for m_k and m_l respectively, the entry of the covariance matrix is calculated by evaluating the following expression:

K_{((\lambda_k,m_k),(\lambda_l,m_l))} = R_{k,m_k}(n,1)\, R_{l,m_l}(n,1)\, \frac{N_0}{m_k!\, m_l!} \int_0^{T_s} t^{\,m_k+m_l}\, e^{(\lambda_k+\lambda_l)t}\, dt, \tag{4.79}

with T_s the sampling period and c_k and R_{k,m} defined by (4.37).
For complex eigenvalues, two or four entries are needed, corresponding to the C- and S-functions.

Proof. Like the normal input signal, the noise signal is filtered by M filters represented by the E-, C- and S-functions. Then, all filtered signals are sampled and summed to obtain the total state signal. The multivariate stochastic variables n contain the samples. Once the samples are obtained, the perturbation is given by their sum.

For each set (\lambda_k, m_k), the noise signal n(t) is filtered to \tilde{n}_{\lambda_k,m_k}(t) by a filter of the form

g_{\lambda_k,m_k}(t) = \begin{cases} R_{k,m_k}(n,1)\, \dfrac{t^{m_k} e^{\lambda_k t}}{m_k!}, & 0 < t < T_s, \\ 0, & \text{otherwise}. \end{cases} \tag{4.80}


The cross spectral density between two filtered noise processes \tilde{n}_{\lambda_k,m_k}(t) and \tilde{n}_{\lambda_l,m_l}(t) can be calculated to be

S_{\tilde{n}_{\lambda_k,m_k},\tilde{n}_{\lambda_l,m_l}}(f) = G_{\lambda_k,m_k}(f)\, S_n(f)\, \overline{G_{\lambda_l,m_l}(f)}, \tag{4.81}

where \overline{c} denotes the complex conjugate of c and G_{\lambda_k,m_k}(f) is the Fourier transform of (4.80). Since the noise process is a Gaussian random process and n contains samples of the linearly filtered version of n(t), n has a multivariate Gaussian distribution with zero mean and with a covariance matrix whose values are obtained by evaluating the cross-correlation (4.81) at zero [38]. Using the Wiener–Khinchin relation, Parseval's theorem and the assumption of white noise, these elements can be calculated as follows:

K_{((\lambda_k,m_k),(\lambda_l,m_l))} = \int_{-\infty}^{\infty} G_{\lambda_k,m_k}(f)\, S_n(f)\, \overline{G_{\lambda_l,m_l}(f)}\, df \tag{4.82a}

= N_0 \int_0^{T_s} g_{\lambda_k,m_k}(t)\, g_{\lambda_l,m_l}(t)\, dt, \tag{4.82b}

which equals (4.79). ⊓⊔
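The equality (4.82b), and hence the entries (4.79), can be checked numerically by sampling two filtered white-noise processes; the eigenvalues and orders below are illustrative and the factors R_{k,m}(n, 1) are set to one.

```python
import numpy as np

rng = np.random.default_rng(1)
N0, Ts, M, trials = 1.0, 1.0, 400, 4000
dt = Ts / M
t = (np.arange(M) + 0.5) * dt                 # midpoint grid on (0, Ts)

# Two filters of the form (4.80), with R_{k,m}(n, 1) = 1:
g1 = np.exp(-1.0 * t)                         # lambda_k = -1, m_k = 0
g2 = t * np.exp(-2.0 * t)                     # lambda_l = -2, m_l = 1 (1! = 1)

# Discrete white Gaussian noise with spectral density N0
n = rng.normal(0.0, np.sqrt(N0 / dt), (trials, M))

# Both filtered processes sampled at t = Ts (convolution evaluated at Ts)
x1 = (n * g1[::-1]).sum(axis=1) * dt
x2 = (n * g2[::-1]).sum(axis=1) * dt

K_mc = np.mean(x1 * x2)                       # Monte Carlo covariance entry
K_th = N0 * np.sum(g1 * g2) * dt              # N0 * int_0^Ts g1*g2 dt, (4.82b)
```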

Symbolic expressions can be derived for the integrals (4.79) needed to evaluate the entries of the covariance matrix for both real and complex eigenvalues. When the noise source does not have a flat characteristic, like a 1/f noise characteristic, the elements of the covariance matrix can be calculated numerically. Since they only have to be calculated once at the start of the simulation, this operation does not hurt overall simulation time. Note that phase noise of the sampling clock is modeled as jitter using sampling points that are not equidistant.

4.3.3 Implementation of the generic behavioral model

A custom SystemC module

The generic behavioral model for continuous-time (CT) ∆Σ modulators is not linked to a specific description language like MATLAB scripts, VHDL-AMS, Verilog-AMS or SystemC. On the other hand, the characteristics of the model guide the language choice:

• Since it is an event-driven model, a language with built-in support for this type of computational model simplifies the implementation.
• If the model for the modulator is part of a larger system, both analog and digital parts should be modeled to represent the entire system.
• The simulator should dynamically specialize the templates of the generic functions, whereas the implementation model can be compiled beforehand.
• Some flexibility is required regarding the signal types, since some alphabets of Table 4.2 are not available in a standard language.


Fig. 4.10. Schematic representation of the SystemC modules used for evaluating the generic behavioral model for CT ∆Σ modulators: a test bench (sc_testbench) with input sources (sc_source) and a sampling clock (sc_sampler) around the converter module sc_delta_sigma, which contains the next-state function (sc_nextstate) together with submodules for the quantizers (sc_quantizer) and the D/A converters (sc_dac).

Based on these considerations, a new dedicated simulator has been developed in this work which uses the SystemC library to implement the event-driven model of computation. This library can easily be linked with custom C++ classes to realize special signal types, specialization of templates and post-processing tasks like the calculation of the signal-to-noise ratio. Since SystemC has been developed to model systems with both hardware and software functions, the model can be fit into a larger system. A novel SystemC module, sc_delta_sigma, has been written which uses the methods proposed in this work to take into account the effects of various non-idealities. Figure 4.10 shows the global configuration of the module. sc_delta_sigma represents the CT ∆Σ A/D converter and is built up out of submodules corresponding to the different parts of the generic behavioral model. Since the module is dedicated to the class of CT ∆Σ modulators, with its characteristics and limitations defined by the template of Fig. 4.3 (p. 96), its configuration is similar to the interaction model of Fig. 4.4 (p. 101). Regarding the alphabets listed in Table 4.2 (p. 98), the data types defined in the SystemC language have been extended with a type for CT signals. This type represents a sum of signal parts which are defined as analytical signals over a finite or infinite time interval. For all signals fitting in template (4.45) on p. 109, analytical calculations have been developed and are available. Further, a test bench has been written to translate, at the start of a transient simulation, ASCII input files with specializations of the generic behavioral model to a specific architecture with non-idealities. These specializations are formulated in a kind of netlist with definitions for the different blocks, e.g., in the loop filter. Finally, FFT post-processing is applied to the output bit


stream to obtain the typical spectrum of a ∆Σ output signal, from which characteristics like the Signal-to-Noise Ratio (SNR) can be derived.

Examples

To illustrate and test the developed implementation, a case study has been performed with a third-order single-loop single-bit CT ∆Σ A/D converter. Several experiments have been performed to study different effects (published in [30, 31, 34]). Parameters of the specialization of the modulator that are common to all experiments are listed in Table 4.4. Each simulation of the test bench takes a few seconds, depending on the number of non-idealities included in the model. A full transistor-level simulation, on the other hand, takes hours or days [6] but takes into account all non-ideal effects. Hence, the approach used in this work is appropriate when multiple performance evaluations for several architectures are required, as in the design strategy of Fig. 3.10 on p. 66. The full transistor-level simulation is suited to verify the final design.

To evaluate both the simulation speed and the size of the errors introduced by the various approximations made to obtain the templates of the generic functions, a specific behavioral model has been implemented in VHDL-AMS for comparison purposes. This model for a third-order CT ∆Σ modulator contains the differential equations of the loop filter and processes triggered by the sampling clock for the quantizer and a D/A converter. A time-marching (TM) simulation algorithm has been employed using the simulator ADVance MS of Mentor Graphics.

Table 4.4. Overview of input parameters for simulations of the SystemC module and test bench common to the different examples.

Parameter                        Value
Modulator order B                3
Oversampling ratio OSR           32
Input amplitude Ain              0.5 V
Input frequency fin              500 kHz
Sampling frequency fs            128 MHz
Poles integrators p1, p2, p3     2fs · 10^-4, 8fs · 10^-2, 2fs
Zeros integrators z1             6fs · 10^-2
DAC pulse height Vref            1 V
DAC pulse shape                  Non-return-to-zero
FFT width s                      16,384
Window type                      Blackman–Harris
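The FFT post-processing step mentioned above can be sketched as follows. The bin bookkeeping is a simplified illustration rather than the exact procedure used in this work, and numpy's Blackman window stands in for the Blackman–Harris window of Table 4.4.

```python
import numpy as np

def sndr_db(x, fs, fin, osr, nfft=16384):
    """Estimate the SNDR of an output stream from a windowed FFT."""
    w = np.blackman(nfft)
    X = np.abs(np.fft.rfft(np.asarray(x[:nfft], float) * w)) ** 2
    sig_bin = int(round(fin / fs * nfft))
    sig = X[sig_bin - 3:sig_bin + 4].sum()       # signal power incl. leakage
    band = nfft // (2 * osr)                     # edge of the signal band
    inband = X[1:band].copy()                    # in-band power, skip DC
    inband[sig_bin - 4:sig_bin + 3] = 0.0        # remove the signal bins
    return 10 * np.log10(sig / inband.sum())

# Quick check with a synthetic 500 kHz tone at fs = 128 MHz, OSR = 32
rng = np.random.default_rng(2)
tone = 0.5 * np.sin(2 * np.pi * 500e3 / 128e6 * np.arange(16384))
clean = sndr_db(tone + 1e-3 * rng.normal(size=16384), 128e6, 500e3, 32)
noisy = sndr_db(tone + 1e-2 * rng.normal(size=16384), 128e6, 500e3, 32)
```

With ten times more noise, the estimate drops by roughly 20 dB, as expected for white noise added to the band.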


The generic behavioral model is intended to be used in a high-level design flow as depicted in Fig. 3.10 on p. 66. Hence, the outputs of the synthesis procedure (to be discussed in Chap. 6) are both the architecture and its parameters. They serve as inputs for lower-level synthesis methods, typically at circuit level. As a result, the accuracy of the high-level models used to describe the behavior of building blocks (e.g., the state-space description for an integrator) is similar to that of specific behavioral models. Whereas such a specific behavioral model represents a specific architecture at a particular abstraction level, the generic behavioral model makes it easy to represent different architectures at various abstraction levels by providing different inputs during the specialization phase of the design flow in Fig. 3.10.

Signal-to-noise ratio as a function of the input amplitude

The ideal maximum SNR can be determined by examining the variation of the SNR as a function of the amplitude of the input signal. This results in the curves shown in Fig. 4.11. The top curve (dark solid line) corresponds to a modulator with ideal integrators. Starting from this ideal model, subsequent refinement operations are executed until the abstraction level is low enough to perform a translation to another description level and start, for instance, with the design of the building blocks. First, linear non-idealities are added to the integrating blocks: the poles and zeros mentioned in Table 4.4. The resulting degradation in SNR can be observed in Fig. 4.11 (light solid line). Then, saturation of the outputs of the integrators and clock jitter is included in the specialization. The dashed curves in Fig. 4.11 show the effect on

Fig. 4.11. Example of refinement: SNDR as a function of the signal input amplitude (Ain w.r.t. Vref) for an ideal modulator and for modulators with additional non-idealities (poles/zeros of the loop filter, saturation of the integrators, clock jitter).


the modulator’s performance. Since the saturation limits the maximal signal in the loop, larger input signals can be used without resulting in instability of the modulator. Based on Fig. 4.11 an input amplitude Ain of 0.5 V has been chosen for subsequent experiments. This value gives a high SNR without driving the modulator into the instability region. Degradation due to linear non-idealities To obtain the specifications of the building blocks, the degradation of the SNDR as a function of a parameter can be studied. In Fig. 4.12 an example is shown of a degradation due to the addition of an extra pole p3 in the transfer function of the integrators. The integrators have fixed poles p1 = 2fs · 10−4 and p2 = 8fs · 10−2 , and a zero z1 = 6fs · 10−2 . The average time for one simulation is 3.5 seconds with the custom SystemC module whereas the VHDLAMS simulation takes more than 10 times as long. No significant differences in SNR values (besides numerical imperfections) are noticed between both types of simulations since the SystemC module employs exact analytical calculations for linear non-idealities. Note that this experiment cannot be performed by the tool DAISY-CT presented in [17] due to its inherent assumption of second-order transfer functions for the integrators. The generic behavioral model proposed in this work does not make such assumptions, making it more suited for modeling at different levels of abstraction. Also, band-pass CT modulators contain resonating building blocks where a parasitic pole result in a third-order model. 0 −2

Fig. 4.12. Degradation of signal-to-noise ratio (∆SNR) as a function of p3 (w.r.t. fs) due to linear non-idealities of the integrators, simulated with the generic behavioral model and with the VHDL-AMS model.


Degradation due to weakly nonlinear non-idealities

Figure 4.13 depicts the drop in SNR performance for simulations with the method presented in Section 4.3.2 using a first-, second- and third-order Chebyshev polynomial series. The nonlinear integrator of Fig. 4.7 with a distortion coefficient α3 has been selected. The curves show a good agreement. Table 4.5 gives an overview of the simulation times together with the RMS value of the error between the results with the Chebyshev polynomial series and those of the VHDL-AMS simulation. The approximations result in a 6× to 10× increase in simulation efficiency. This shows that the proposed methodology offers a good trade-off between simulation time and accuracy.

Degradation due to saturation

The degradation in SNDR due to saturation is plotted in Fig. 4.14. Again, the results of four different models are compared with each other: the results of the TM simulation of the specific VHDL-AMS behavioral model and the

Fig. 4.13. SNR degradation as a function of the distortion coefficient α3, calculated with both the VHDL-AMS behavioral model and the generic behavioral model with different levels of accuracy (first-, second- and third-order approximations).

Table 4.5. Simulation times and accuracy for the curves of Fig. 4.13.

Model        Time [sec]   RMS error [dB]
VHDL-AMS     70           –
1st order    6            1.5
2nd order    8            1.25
3rd order    11           0.75


Fig. 4.14. Degradation of SNDR as a function of the saturation level Vsat (w.r.t. Vref) due to saturation of the integrators of the loop filter, for the VHDL-AMS model and the first-, second- and third-order approximations.

Table 4.6. Average simulation times for the different saturation models in Fig. 4.14.

Model        Time [sec]   RMS error [dB]
VHDL-AMS     98.13        –
1st order    4.25         0.92
2nd order    4.96         0.61
3rd order    5.86         0.59

values obtained via the generic behavioral models as described in Section 4.3.2, where the polynomial approximation of the response is of first, second and third order. A higher-order polynomial results in a more accurate model, at the expense of an increase in simulation time. The average simulation times with estimates of the errors compared to the results of the VHDL-AMS model are shown in Table 4.6. One can conclude that a first-order model is accurate enough in this case to predict the minimum allowable saturation level. From this analysis, the designer can derive the specification for the saturation level. If necessary, he can, for example, decide to scale the coefficients to lower the signal levels. Such an operation is a parameter transformation in the generic design flow.

Degradation due to jitter

Figure 4.15 shows the influence of jitter on the SNR. The degradation due to jitter depends on the kind of pulse selected for the D/A converter. For a NRTZ pulse, only sampling jitter affects the performance. This corresponds


Fig. 4.15. Degradation of signal-to-noise ratio as a function of the jitter standard deviation σJ (w.r.t. Ts) due to sampling jitter with non-return-to-zero (NRTZ) or return-to-zero (RTZ) pulses and with or without pulse-width jitter (PWJ) for different models.

to the curves with squares in Fig. 4.15. A RTZ pulse, on the other hand, is less sensitive to sampling jitter but is also affected by PWJ. For a pulse from 0.15Ts to 0.8Ts, the resulting degradation is shown in Fig. 4.15 by the curves with circles and triangles respectively. For low jitter values, the curves almost coincide. With a RTZ pulse, the amount of charge injected at each sampling moment remains almost independent of the actual sampling moment and the corresponding start point of the jitter. When PWJ is taken into account, the SNDR degradation due to jitter is clearly increased. Note that jitter does not increase the simulation time. Indeed, all generic functions and their specializations are functions of the start and end points of the sampling interval, without assuming that these will be constant throughout the simulation. As a result, simulation times are similar to those of the other experiments. These curves help during the design flow to select a pulse shape via architectural transformation steps. Also, the maximal allowable jitter before going to a lower level of abstraction through refinement can be derived.

4.4 A generic behavioral model for sampled-data systems

4.4.1 General sampled-data systems

The techniques developed in the previous section were applied in a generic behavioral model for the class of continuous-time (CT) ∆Σ modulators. The basic principles and theorems can also be employed for systems of other classes. To illustrate this widened application area, a generic behavioral model has been


developed for the general class of sampled-data (SD) systems. Such systems, like switched-capacitor filters, are frequently encountered in modern ICs [14].

Definition 4.1 (Sampled-data system). A general F-phase sampled-data system is a time-variant system in which the topology of the system alternates regularly between a set of F architectures. The time points at which the topology is changed are the switching or sampling moments.

A regular switching sequence means that the switching frequency should be high compared to the bandwidths of the information signals. Switched-capacitor filters are usually 2-phase SD systems, but multiple phases can also be used [15]. An A/D converter can be considered as a 1-phase system.

4.4.2 Generic behavioral model

Requirements

Many non-idealities encountered in continuous-time (CT) ∆Σ modulators are also found in general sampled-data (SD) systems. More specifically, linear non-idealities of the filter, weakly nonlinear elements and thermal noise have similar effects in both classes of systems. Clock jitter is also a major source of performance degradation. Regarding strong nonlinearities, slew rate limitations are more pronounced in SD systems than in CT ∆Σ converters. When deriving the generic behavioral model, special attention has to be given to extra non-idealities introduced by the switching operation in SD systems:

Clock feedthrough. The parasitic capacitances between the control signal node (usually the gate of the switching transistor, driven by the clock) and the signal nodes (drain and source) are charged or discharged when the voltage of the control signal changes. As a result, the voltage at the signal nodes is influenced.

Charge injection. The charge built up in the channel of the switching transistor flows to the signal nodes, leading to a voltage variation at these nodes.

Alphabets

For all variables encountered in a SD system, an alphabet should be defined.
For the noise signals n and the switching moments σ, the same explanation holds as for the generic behavioral model for the CT ∆Σ modulator. Alphabets typical for the SD class can be summarized as follows:

Input. The inputs of the SD system can be continuous or discrete in time. Since, however, in the latter case Dirac pulses can be used as inputs, a general description is obtained by assuming the same kind of input signals as for the CT ∆Σ modulator. Therefore, the alphabet is given by (4.14).


Filter architecture. Each of the F architectures for the system is characterized by a set of state signals. This means that the system is characterized by a set of state vectors:

  Ls_c = (s_c,1, . . . , s_c,F) ∈ Λ_Qc = (C0[R]^n1, . . . , C0[R]^nF),   (4.83)

with n1, . . . , nF the orders of all the F topologies. Different orders for different architectures are possible. For example, a capacitor may be disconnected during one phase. The sampling operation in the system leads to an additional alphabet for the discrete-time (DT) states:

  Ls = (s_1, . . . , s_F) ∈ Λ_Q = (R^n1, . . . , R^nF).   (4.84)

Output. The output of the SD system is an m-dimensional DT signal:

  y ∈ Ψ = R^m,   (4.85)

which may be further processed, e.g., filtered or quantized, if the SD system is part of a larger system.

Phase. A special memory element should be added to the generic behavioral model to track the architecture corresponding to the current phase. Its alphabet is given by:

  qp ∈ Qp = {1, 2, . . . , F}.   (4.86)

An overview of all alphabets is given in Table 4.7.

Table 4.7. Definitions of the alphabets used in the generic behavioral model for general SD systems for interfaces and memory.

Signal type     | Alphabet     | Definition                           | Application
Input signal    | u ∈ Σ        | Σ = C0[R]                            | Test signal
Output signal   | y ∈ Ψ        | Ψ = R^m                              | Outputs
Noise signal    | n ∈ Υ        | Υ = C0[R]                            | White noise
Switching times | σ ∈ R^s      | σ = (t1, t2, . . . , ts)             | Clock
States filter   | Ls_c ∈ Λ_Qc  | Λ_Qc = (C0[R]^n1, . . . , C0[R]^nF)  | Poles
Discrete states | Ls ∈ Λ_Q     | Λ_Q = (R^n1, . . . , R^nF)           | Samples
Phase           | qp ∈ Qp      | Qp = {1, 2, . . . , F}               | Architecture

Generic functions

Table 4.8 lists the definitions for the generic functions used to model different parts of the SD system. The output function is easily identified in the
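In code, the signals and memory of Table 4.7 map naturally onto a small container type; a sketch in which Python types stand in for the mathematical alphabets (all names are illustrative, not from the book's SystemC implementation):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence


@dataclass
class SDSignals:
    """Signals and memory elements of the generic SD model (cf. Table 4.7)."""
    u: Callable[[float], float]     # input signal, u in Sigma = C0[R]
    sigma: Sequence[float]          # switching times (t1, ..., ts)
    s: List[Sequence[float]]        # DT state vectors, one per phase (Lambda_Q)
    qp: int = 1                     # current phase, in Qp = {1, ..., F}
    y: List[float] = field(default_factory=list)   # DT output samples (m = 1 here)


m = SDSignals(u=lambda t: 0.0, sigma=(0.0, 1e-7, 2e-7), s=[[0.0], [0.0, 0.0]])
```

Note that the per-phase state vectors may have different lengths, mirroring the possibly different orders n1, …, nF of the F topologies.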


Table 4.8. Definitions of the mapping operations of the generic functions of the generic behavioral model for SD systems.

Function             | Definition                                    | Part
Output               | λ : Λ_Q × Qp → Ψ                              | Outputs
Linear next-state    | δL : Σ × Υ × Λ_Q × P(σ) × Qp → Λ_Qc           | Filter phase
Nonlinear next-state | δNL : Σ × Υ × Λ_Q × P(σ) × Λ_Qc × Qp → Λ_Q    | Filter phase
Switching            | ǫ : Λ_Q × Qp → Λ_Q × Qp                       | Switches

 1  input u, σ
 2  Ls ← Ls0
 3  qp ← qp0
 4  y(1) ← λ(Ls0, qp0)
 5  for k = 1, . . . , |σ| − 1
 6      update n
 7      L̄s_c ← δL(u, n, Ls, (tk, tk+1), qp)
 8      L̃s ← δNL(u, n, Ls, (tk, tk+1), L̄s_c, qp)
 9      (Ls, qp) ← ǫ(L̃s, qp)
10      y(k + 1) ← λ(Ls, qp)
11  end for
12  output y

Listing 4.2. Interaction model of the generic behavioral model for general SD systems.

SD system. Both the linear and nonlinear next-state functions deal with the transition of the state variables within a particular phase. Finally, the switching function provides a template which links the values of the state vectors of the different architectures before and after the switching event with each other. This function not only converts between the different number and/or meaning of the states of two topologies, but it also enables the modeling of the switching events, i.e., clock feedthrough and charge injection. Further, it returns the new phase that the system has entered. The use of different functions for the discrete and CT parts of the response of a SD system is encountered frequently in analysis techniques for switched-capacitor networks [59].

Interaction model

The dynamic behavior of the SD system as an interaction between the generic functions is shown in Listing 4.2. It starts with assigning initial values to all memory elements and the output (lines 2–4). The events arise from the sampler: each switching event is represented by selecting the next value from σ. Each event results in the execution of the following 5-step algorithm (lines 6–11):

1. Determine values for the noise signals n.
2. Calculate the linear next state L̄s_c for the system qp on [tk, tk+1].
3. Find the corresponding nonlinear correction.
4. Calculate the effects of switching the system from one architecture to another via new values for the state vectors and the phase variable.
5. Calculate the new value of the output.
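The loop of Listing 4.2 translates almost line for line into executable form. A sketch in plain Python (not the SystemC implementation discussed later); all generic functions are passed in as callables, and the simple decaying 2-phase example below is invented purely for illustration:

```python
def run_sd_model(u, sigma, s0, qp0, delta_L, delta_NL, eps, lam, draw_noise):
    """Interaction model of Listing 4.2 for an F-phase sampled-data system."""
    s, qp = s0, qp0
    y = [lam(s, qp)]                                      # y(1) <- lambda(Ls0, qp0)
    for k in range(len(sigma) - 1):
        n = draw_noise()                                  # update n
        interval = (sigma[k], sigma[k + 1])
        s_bar = delta_L(u, n, s, interval, qp)            # linear next state
        s_tilde = delta_NL(u, n, s, interval, s_bar, qp)  # nonlinear correction
        s, qp = eps(s_tilde, qp)                          # switching event
        y.append(lam(s, qp))                              # new output value
    return y


# Toy 2-phase system: the state is halved in every phase, phases alternate.
y = run_sd_model(
    u=lambda t: 0.0, sigma=(0.0, 1e-7, 2e-7, 3e-7), s0=1.0, qp0=1,
    delta_L=lambda u, n, s, iv, qp: 0.5 * s,
    delta_NL=lambda u, n, s, iv, s_bar, qp: s_bar,        # no nonlinear correction
    eps=lambda s, qp: (s, 1 + qp % 2),                    # cycle through the phases
    lam=lambda s, qp: s,
    draw_noise=lambda: 0.0)
print(y)   # [1.0, 0.5, 0.25, 0.125]
```

The one-to-one mapping between loop body and generic functions is the point: swapping in a different architecture only changes the callables, not the loop.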

The schematic diagram of this interaction model is depicted in Fig. 4.16. The main loop contains the output, next-state and switching functions and is activated by the events generated by the sampler. The compact representation of the generic behavioral model for SD systems as a formal machine becomes:

  M = (Σ, Ψ, Υ, R^s, Λ_Qc, Λ_Q, Qp, λ, δL, δNL, ǫ, Ls0, qp0),   (4.87)

grouping the alphabets, the generic functions and the interaction model parameters,

which is a variant of the Mixed-Signal Finite State Machine of (4.29).

4.4.3 Templates for the generic functions

The use of equivalent random variables to take into account the effects caused by the noise signals n, and the selection of the switching moments σ, is similar to the choices made in the model for continuous-time (CT) ∆Σ modulators. Therefore, this section focuses only on the next-state, output and switching functions.

Next-state functions

The linear and nonlinear models for the system within a particular phase are the same as for the loop filter of the ∆Σ A/D converter. For the linear case, an auxiliary function κf is defined for each architecture f, given by (4.37). The template for the linear next-state function for general sampled-data (SD) systems is then defined as follows:

  L̄s_c = (s̄_c,1, . . . , s̄_c,qp, . . . , s̄_c,F)   (4.88a)
       = δL(u, n, Ls, (tk−1, tk), qp)   (4.88b)
       = (s̄_c,1, . . . , κ_qp((u, n), s_qp, tk−1, tk), . . . , s̄_c,F).   (4.88c)

The templates for the nonlinear next-state functions for both strong and weak nonlinearities are defined in a similar way: the state vector of the activated phase f = qp in the set L̃s is updated by the next-state function of the phase qp given by (4.71) or (4.74).
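For a phase whose linearized dynamics are ds/dt = A s + B u, with the input held constant over [tk−1, tk], the auxiliary function κf amounts to the standard matrix-exponential state update. A numerical sketch (the book derives κf analytically; the truncated series below is an assumption that is adequate when ‖A·h‖ is modest):

```python
import numpy as np

def kappa(A, B, s_k, u_k, h):
    """State of ds/dt = A s + B u after a time step h, with u frozen at u_k.
    Uses the augmented-matrix identity
        expm([[A, B], [0, 0]] * h) = [[e^{Ah}, G], [0, I]],
    where the block G maps the held input onto the state."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A * h
    M[:n, n:] = B * h
    E, term = np.eye(n + m), np.eye(n + m)
    for i in range(1, 25):          # truncated series for the matrix exponential
        term = term @ M / i
        E = E + term
    return E[:n, :n] @ s_k + E[:n, n:] @ u_k

# RC-like decay: ds/dt = -s + u, s(0) = 0, u = 1, h = 1  ->  s(h) = 1 - e^{-1}
s_next = kappa(np.array([[-1.0]]), np.array([[1.0]]),
               np.array([0.0]), np.array([1.0]), 1.0)
print(float(s_next[0]))             # about 0.6321
```

In a production implementation the exponential would of course be evaluated robustly (scaling-and-squaring or an analytical closed form per architecture); the point here is only the shape of κf.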


Fig. 4.16. Schematic representation of the interaction model of the generic behavioral model for SD systems.


Output function

The output of the sampled-data (SD) system is given by the expression for x^(f)(t) for topology f: either (4.35b) for linear approximations or (4.48b) for nonlinear systems. Consequently, the template for the output function can be written as follows:

  y = λ(Ls, qp) = x^(qp)(s_qp).   (4.89)

Switching function

When the system switches between two architectures at the switching moment tk, the state vector of the first architecture should be transformed into the state vector of the second one. In the absence of non-ideal switching effects, this operation can be described by a linear transformation:

  (Ls, qp) = ǫ(Ls, qp) = ǫ((s_1, . . . , s_qp′, . . . , s_F), qp)   (4.90a)
           = ((s_1, . . . , T_qp,qp′ · s_qp, . . . , s_F), qp′),   (4.90b)

with qp′ = 1 + (qp mod F) for a collection of architectures referred to by numbers from 1 to F. If non-ideal switching events like clock feedthrough or charge injection have to be taken into account, a more complicated template should be adopted, as follows. During the switching event from qp to qp′, the large system containing both architectures and the switching devices can be described by a set of differential equations:

  ds_qp,qp′(t)/dt = M_qp,qp′ s_qp,qp′(t) + B_qp,qp′ u(t) + B′_qp,qp′ du(t)/dt,   (4.91)

where the control signal of the switches is added as an extra input signal to the vector u(t). This template for the joint system can be derived in the same way as the one for the normal architectures. The extra term with the derivative of the input signals is added to account for signals that are discontinuous at the switching moments (like the clock signal). The joint state vector s_qp,qp′(t) can be transformed to the state vectors of the different architectures:

  s_qp′ = U_qp,qp′ s_qp,qp′,   (4.92a)
  s_qp,qp′ = Σ_{f=1}^{F} V_qp,qp′;f (s_f; u),   (4.92b)

Finally, the template for the switching function can easily be derived by integrating (4.91) between the moments tk− and tk+ , respectively right before


and after the switching moment tk. The resulting equation can be written down as follows:

  s_qp′(tk+) = U_qp,qp′ · Σ_{f=1}^{F} V_qp,qp′;f (s_f(tk−); u(tk−))
               − U_qp,qp′ · B_qp,qp′ ∫_{tk−}^{tk+} u(τ) dτ
               − U_qp,qp′ · B′_qp,qp′ [u(tk+) − u(tk−)].   (4.93)

The integrals are only non-zero when Dirac pulses occur at the switching moments, which are used to model charge injection. This template can directly be used as part of the switching function. It is straightforward to extend the template so that nonlinear terms similar to the ones in (4.71) are also incorporated. However, instead of an explicit expression, an implicit function is then obtained for s_qp′(tk+).

Example 4.5. Figure 4.17 shows a first-order switched-capacitor filter which is a typical 2-phase sampled-data (SD) system. The switches have a finite (linear) on-resistance R and the finite gain of the op amp is A. The model for the clock feedthrough on node n1 for a transition from phase 2 to phase 1 is derived. Since the input of the op amp has infinite impedance, the state vector of the first phase contains only the voltage over the sampling capacitor Ca. For the second phase, on the other hand, three state variables are needed:

  s_1 = (va),   s_2 = (va, vb, vc)^T.   (4.94)

During the switching event, the overlap capacitances of the switches φ2 on node n1 make the system of order 4:

Fig. 4.17. Example of a first-order switched-capacitor section.


  ds_2,1(t)/dt = M_2,1 s_2,1(t) + B_2,1 (u(t), φ(t))^T + B′_2,1 d(u(t), φ(t))^T/dt,   (4.95)

with s_2,1(t) = (va(t), vb(t), vc(t), v2(t))^T, τa = RCa, τb = RCb, φ(t) the clock signal, and

  M_2,1 = ⎡ −2/τa  0  1/(τa(1+A))   1/τa     ⎤
          ⎢  0     0  0             1/τb     ⎥
          ⎢  0     0  0            −1/τb     ⎥
          ⎣  0     0  0            −2/(3τb)  ⎦

  B_2,1 = ⎡ 0        0   ⎤      B′_2,1 = ⎡ 0   Cov(1+A)/(Cov + Cc(1+A))    ⎤
          ⎢ 0        0   ⎥               ⎢ 0   0                           ⎥
          ⎢ 0        0   ⎥               ⎢ 0   0                           ⎥
          ⎣ 2/(3τb)  1/3 ⎦               ⎣ 0   (1/3) Cov/(Cov + Cc(1+A))   ⎦

The transformation of the joint state variables to the state vectors of the two topologies is defined via the matrices

  U_2,1 = (1  0  0  0);   V_2,1;1 = 0;   V_2,1;2 = ⎡ 1  0    0  0    0 ⎤
                                                   ⎢ 0  1    0  0    0 ⎥ .   (4.96)
                                                   ⎢ 0  0    1  0    0 ⎥
                                                   ⎣ 1  1/2  0  1/2  0 ⎦

Finally, application of (4.93) leads to the switching function with clock feedthrough:

  s_1(tk+) = va(tk−) + [Cov(1+A) / (Cov + Cc(1+A))] · VDD,   (4.97)

where the control signal of the switches descends from VDD to zero.
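Evaluating (4.97) numerically gives a feel for the size of the feedthrough step; the values below are taken from Table 4.9 and Example 4.6, and their use here is purely illustrative:

```python
# Clock-feedthrough step of (4.97): delta_va = Cov*(1+A)/(Cov + Cc*(1+A)) * VDD
Cov = 100e-15    # overlap capacitance (value from Example 4.6)
Cc  = 2e-12      # capacitor Cc (Table 4.9)
A   = 50.0       # op-amp gain (A1 of Table 4.9)
VDD = 2.5        # supply voltage (Table 4.9)

step = Cov * (1 + A) / (Cov + Cc * (1 + A)) * VDD
print(f"{step * 1e3:.1f} mV")   # 124.9 mV
```

A step of this size on the sampling capacitor is far from negligible against a 0.25 V input amplitude, which is why the switching template has to model it explicitly.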

4.4.4 Implementation

The large number of similarities between the generic behavioral model for continuous-time (CT) ∆Σ modulators and that for general sampled-data (SD) systems makes it straightforward to use a similar implementation technique. Consequently, a SystemC module has been developed, built up of submodules corresponding to the different parts of the model shown in Fig. 4.16. Only a new submodule for the switching function ǫ has been added, since the modules for the next-state and output functions, inputs and sampler can be re-used. Note that discrete-time (DT) ∆Σ modulators can be simulated using the newly developed module for SD systems by connecting it with appropriate output and feedback functions.

Example 4.6. The system shown in Fig. 4.17 has been used as a case study for the implemented model [32]. The effect of weakly nonlinear behavior on the


output distortion is analyzed for models with and without clock feedthrough. The output of the op amp is modeled using a third-order polynomial:

  y(t) = −A1 v1(t) − α v1(t)^3,   (4.98)

with α the distortion coefficient. Only the architecture in the second clock phase is affected by this distortion. Taking this weakly nonlinear behavior into account, the equation corresponding to the switching function can be derived similarly to the linear case of Example 4.5:

  α Cc v1(tk+)^3 + [Cc(1+A1) + Cov] v1(tk+) = −[Cov (VDD − v1(tk−)) − Cc vc(tk−)],   (4.99)

where v̄c(t), defined by v̄c(t)(1+A1) = v1(t), is chosen as state variable and

  vc(t) = v̄c(t) + [α/(1+A1)] v̄c(t)^3.   (4.100)
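Because of the cubic term, (4.99) is implicit in v1(tk+); starting from the linear (α = 0) solution, a few Newton iterations solve it. A sketch (function and variable names are illustrative):

```python
def solve_switching(alpha, Cc, Cov, A1, VDD, v1_minus, vc_minus, iters=25):
    """Solve alpha*Cc*x**3 + (Cc*(1+A1)+Cov)*x = -(Cov*(VDD-v1_minus) - Cc*vc_minus)
    for x = v1(tk+), cf. (4.99), by Newton's method."""
    rhs = -(Cov * (VDD - v1_minus) - Cc * vc_minus)
    lin = Cc * (1 + A1) + Cov
    x = rhs / lin                       # alpha = 0 solution as starting point
    for _ in range(iters):
        f = alpha * Cc * x**3 + lin * x - rhs
        x -= f / (3 * alpha * Cc * x**2 + lin)
    return x

v1_plus = solve_switching(alpha=1e-2, Cc=2e-12, Cov=100e-15,
                          A1=50.0, VDD=2.5, v1_minus=0.1, vc_minus=0.2)
```

For the weak nonlinearities considered here, the cubic term is a small perturbation of the linear one, so one or two iterations already converge.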

Table 4.9 lists all input values for the simulation. The variation of the output distortion as a function of the distortion factor α is shown in Fig. 4.18. When an overlap capacitance Cov of 100 fF is present, a slower degradation of the harmonic distortion is observed. Since the SystemC module for CT ∆Σ modulators has largely been re-used, simulation speed and error are comparable to the results shown in Fig. 4.13.

Table 4.9. Overview of input parameters for simulation of the switched-capacitor filter.

Parameter             | Value
Voltage supply VDD    | 2.5 V
Linear gain A1        | 50
Switch resistance R   | 100 Ω
Capacitors Ca and Cb  | 10 pF
Capacitor Cc          | 2 pF
Sampling frequency fs | 10 MHz
Input amplitude Ain   | 0.25 V
Input frequency fin   | 600 kHz
FFT width s           | 4,096
Window type           | Blackman–Harris
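The HD3 numbers behind Fig. 4.18 come from an FFT of the windowed output. A minimal sketch of such a measurement, using the Table 4.9 settings on a synthetic signal (the 4-term Blackman–Harris window is built from its standard coefficients; the test signal is invented, not the filter output):

```python
import numpy as np

def blackman_harris(n_len):
    # 4-term Blackman-Harris window (standard coefficients)
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    n = np.arange(n_len)
    return sum(((-1) ** i) * a[i] * np.cos(2 * np.pi * i * n / n_len) for i in range(4))

def hd3_db(y):
    """HD3 in dB: third-harmonic peak relative to the fundamental peak."""
    Y = np.abs(np.fft.rfft(y * blackman_harris(len(y))))
    k1 = int(np.argmax(Y[1:])) + 1            # fundamental bin (skip DC)
    h3 = Y[3 * k1 - 2:3 * k1 + 3].max()       # allow a small leakage offset
    return 20 * np.log10(h3 / Y[k1])

fs, fin, s = 10e6, 600e3, 4096                # Table 4.9 values
t = np.arange(s) / fs
y = np.sin(2 * np.pi * fin * t) + 1e-3 * np.sin(2 * np.pi * 3 * fin * t)
print(f"HD3 = {hd3_db(y):.1f} dB")            # about -60 dB for this synthetic signal
```

The windowing keeps spectral leakage from the (non-coherently sampled) fundamental from burying the small third harmonic.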


[Plot: ∆HD3 [dB] (−25 to 10 dB) versus the distortion coefficient α (10^−1 to 10^4, log scale), with curves for the filter without and with overlap capacitance.]

Fig. 4.18. Increase of harmonic distortion for a switched-capacitor filter with and without clock feedthrough as a function of the distortion of the op amp used.

4.5 Conclusions

Time-domain simulations are often the preferred method to calculate performance characteristics for analog and mixed-signal circuits with signals at moderate frequencies. Complex nonlinearities like those encountered in ∆Σ modulators are easily represented in the time domain. Furthermore, integration with complicated digital blocks, typically simulated in an event-driven way, is simplified. Several approaches have been developed to find results that achieve enough accuracy within a limited simulation time. These can be divided into four main categories. Time-marching algorithms offer the largest control over the numerical and approximation errors, but sampled-data, waveform relaxation and collocation methods are more time-efficient. Based on these observations, a methodology has been developed to describe a generic behavioral model of analog and mixed-signal systems in the time domain with an acceptable accuracy–speed trade-off. Exact analytical expressions can be derived for linear systems, both in continuous and discrete time. For nonlinear systems, errors are inevitably introduced: approximations using perturbation methods are made when weak and strong nonlinearities are included. The approach based on generic behavioral models separates the generic behavior (a property of an entire class of systems) from the specific behavior (for a particular architecture at a certain abstraction level). As a result, the computational time can be limited by exploiting the characteristics of the class and building dedicated simulation tools for classes of systems, like continuous-time ∆Σ modulators and general


sampled-data systems. This has been illustrated with dedicated implementations in SystemC for both types of systems, showing small computation times at very low error. Time-domain modeling and simulation, however, is only appropriate for a part of the systems found on analog and mixed-signal chips. For other systems like RF front-ends, approaches operating in the frequency domain are more suited. These methods are the focus of the next chapter.

References [1] G. Arnout and H. J. De Man. The Use of Threshold Functions and Boolean-Controlled Network Elements for Macromodeling of LSI Circuits. IEEE Journal of Solid-State Circuits, 13(3):326–332, June 1978. [2] I. Bolsens, H. J. De Man, B. Lin, K. Van Rompaey, S. Vercauteren, and D. Verkest. Hardware/Software Co-Design of Digital Telecommunication Systems. Proceedings of the IEEE, 85(3):391–418, Mar. 1997. [3] R. Burch, P. Yang, P. Cox, and K. Mayaram. A New Matrix Solution Technique for General Circuit Simulation. IEEE Trans. on ComputerAided Design of Integrated Circuits and Systems, 12(2):225–241, Feb. 1993. [4] J. C. Candy and G. C. Temes. Oversampling Delta-Sigma Converters: Theory, Design and Simulation. IEEE, 1992. [5] T.-H. Chen, J.-L. Tsai, C. C.-P. Chen, and T. Karnik. HiSIM: Hierarchical Interconnect-Centric Circuit Simulator. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 489–496, San Jose, Nov. 2004. [6] J. A. Cherry and W. M. Snelgrove. Continuous-time Delta-Sigma Data Modulators for High-Speed A/D Conversion. Theory, Practice and Fundamental Performance Limits. Kluwer Academic, 2000. [7] L. O. Chua, C. A. Desoer, and E. S. Kuh. Linear and Nonlinear Circuits. McGraw-Hill, New York, 1987. [8] L. O. Chua and P.-M. Lin. Computer-Aided Analysis of Electronic Circuits. Prentice-Hall, Englewood Cliffs, 1975. [9] M. A. Copeland, G. P. Bell, and T. A. Kwasniewski. A Mixed-Mode Sampled-Data Simulation Program. IEEE Journal of Solid-State Circuits, 22(6):1098–1105, Dec. 1987. [10] H. De Man, J. Rabaey, L. Claesen, and J. Vandewalle. DIANA-SC: A complete CAD-system for switched capacitor filters. In European SolidState Circuits Conf., pages 130–133, Freiburg, Sept. 1981. [11] H. J. De Man, J. Rabaey, G. Arnout, and J. Vandewalle. Practical Implementation of a General Computer Aided Design Technique for Switched Capacitor Circuits. IEEE Journal of Solid-State Circuits, 15(2):190–200, Apr. 1980.


[12] Dian Zhou and Wei Cai. A Fast Wavelet Collocation Method for HighSpeed Circuit Simulation. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 46(8):920–930, Aug. 1999. [13] D. J. Erdman and D. J. Rose. Newton Waveform Relaxation Techniques for Tightly Coupled Systems. IEEE Trans. on Computer-Aided Design, 11(5):598–606, May 1992. [14] Fei Yuan and A. Opal. Computer Methods for Switched Circuits. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 50(8):1013–1024, Aug. 2003. [15] A. Fettweis, D. Herbst, B. Hoefflinger, J. Pandel, and R. Schweer. MOS Switched-Capacitor Filters Using Voltage Inverter Switches. IEEE Trans. on Circuits and Systems, 27(6):527–538, June 1980. [16] K. Francken and G. G. E. Gielen. A High-Level Simulation and Synthesis Environment for ∆Σ Modulators. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(8):1049–1061, Aug. 2003. [17] K. Francken, M. Vogels, E. Martens, and G. Gielen. A Behavioral Simulation Tool for Continuous–Time ∆Σ Modulators. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 234–239, San Jose, Nov. 2002. [18] Y. Geerts, M. Steyaert, and W. Sansen. Design of Multi-Bit Delta-Sigma A/D Converters. Kluwer Academic, 2002. [19] G. G. E. Gielen, K. Francken, E. Martens, and M. Vogels. An Analytical Integration Method for the Simulation of Continuous-Time ∆Σ Modulators. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 23(3):389–399, Mar. 2004. [20] G. H. Golub and C. F. V. Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 1989. [21] D. Gottlieb and S. A. Orszag. Numerical Analysis of Spectral Methods: Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia, 1977. [22] C. D. Hedayat, A. Hachem, Y. Leduc, and G. Benbassat. Modeling and Characterization of the 3rd Order Charge-Pump PLL: a Fully Event-driven Approach. 
Analog Integrated Circuits and Signal Processing, 19(1):25–45, Apr. 1999. [23] L. P. Huelsman, editor. Linear Circuit Analysis. In W.-K. Chen, editor, The Circuits and Filters Handbook, Section IV. CRC, Salem, 1995. [24] K. S. Kundert and A. Sangiovanni-Vincentelli. Simulation of Nonlinear Circuits in the Frequency Domain. IEEE Trans. on Computer-Aided Design, 5(4):521–535, Oct. 1986. [25] K. S. Kundert, J. White, and A. Sangiovanni-Vincentelli. A Mixed Frequency–Time Approach for Distortion Analysis of Switching Filter Circuits. IEEE Journal of Solid-State Circuits, 24(2):443–451, Apr. 1989. [26] E. A. Lee and A. Sangiovanni-Vincentelli. A Framework for Comparing Models of Computation. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 17(12):1217–1229, Dec. 1998.


[27] E. Lelarasmee, A. E. Ruehli, and A. L. Sangiovanni-Vincentelli. The Waveform Relaxation Method for Time-Domain Analysis of Large Scale Integrated Circuits. IEEE Trans. on Computer-Aided Design, 1(3):131– 145, July 1982. [28] V. Liberali, V. F. Dias, M. Ciapponi, and F. Maloberti. TOSCA: A Simulator for Switched-Capacitor Noise-Shaping A/D Converters. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 12(9):1376–1386, Sept. 1993. [29] E. Martens and G. Gielen. A Model of Computation for Continuous– Time ∆Σ Modulators. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 162–167, Munich, Mar. 2003. [30] E. Martens and G. Gielen. Formal modeling of ∆Σ Modulators. In Program for Research on Integrated Systems and Circuits, pages 233– 239, Veldhoven, Nov. 2003. [31] E. Martens and G. Gielen. High–Level Modeling of Continuous–Time ∆Σ A/D-Converters. In IEEE Asia South Pacific Design Automation Conference, pages 51–56, Yokohama, Jan. 2004. [32] E. Martens and G. Gielen. Behavioral modeling and simulation of weakly nonlinear sampled-data systems. In IEEE Int. Symp. on Circuits and Systems, volume III, pages 2247–2250, Kobe, May 2005. [33] E. Martens and G. Gielen. Time-Domain Simulation of Sampled Weakly Nonlinear Systems Using Analytical Integration and Orthogonal Polynomial Series. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 120–125, Munich, Mar. 2005. [34] E. S. J. Martens and G. G. E. Gielen. Analyzing Continuous–Time ∆Σ Modulators With Generic Behavioral Models. IEEE Trans. on ComputerAided Design of Integrated Circuits and Systems, 25(5):924–932, May 2006. [35] J. C. Mason and D. C. Handscomb. Chebyshev Polynomials. Chapman & Hall/CRC, Boca Raton, 2003. [36] MathWorks. Signal Processing Toolbox. For Use with MATLAB. 2007. http://www.mathworks.com/access/helpdesk/help/pdf_doc/ signal/signal_tb.pdf. [37] W. J. McCalla. Fundamentals of Computer-Aided Circuit Simulation. 
Kluwer Academic Publishers, Boston, MA, 1988. [38] D. Middleton. An Introduction to Statistical Communication Theory. Peninsula, Los Altos, 1987. [39] B. Murari. Bridging the Gap Between the Digital and Real Worlds: the Expanding Role of Analog Interface Technologies. In IEEE Int. SolidState Circuits Conf., pages 30–35, San Francisco, Feb. 2003. [40] L. W. Nagel and D. O. Pederson. SPICE-Simulation Program with Integrated Circuit Emphasis. Technical Report ERL-M382, Univ. California, Berkeley, Electronics Research Laboratory, Apr. 1973.


[41] A. R. Newton and A. L. Sangiovanni-Vincentelli. Relaxation-Based Electrical Simulation. IEEE Trans. on Computer-Aided Design, 3(4):308–331, Oct. 1984. [42] S. R. Norsworthy, R. Schreier, and G. C. Temes. Delta-Sigma Data Converters. Theory, Design and Simulation. IEEE, 1997. [43] O. Oliaei. State-Space Analysis of Clock Jitter in Continuous-Time Oversampling Data Converters. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 50(1):31–37, Jan. 2003. [44] A. Opal. Sampled Data Simulation of Linear and Nonlinear Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 15(3):295–307, Mar. 1996. [45] A. Opal and J. Vlach. Consistent Initial Conditions of Linear Switched Networks. IEEE Trans. on Circuits and Systems, 37(3):364–372, Mar. 1990. [46] J. R. Parkhurst and L. L. Ogborn. Determining the Steady-State Output of Nonlinear Oscillatory Circuits Using Multiple Shooting. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 14(7):882–889, July 1995. [47] V. Peluso, M. Steyaert, and W. M. C. Sansen. Design of Low-Voltage Low-Power CMOS Delta-Sigma A/D Converters. Kluwer Academic, 1999. [48] R. Piessens, E. De Doncker-Kapenga, C. W. Überhuber, and D. K. Kahaner. Quadpack: A Subroutine Package for Automatic Integration. Springer, Berlin, 1983. [49] J. Roychowdhury. Analyzing Circuits with Widely Separated Time Scales Using Numerical PDE Methods. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 48(5):578–594, May 2001. [50] J. Ruiz-Amaya, J. de la Rosa, F. V. Fernández, F. Medeiro, R. del Río, B. Pérez-Verdú, and A. Rodríguez-Vázquez. High-Level Synthesis of Switched-Capacitor, Switched-Current and Continuous-Time Σ∆ Modulators Using SIMULINK-Based Time-Domain Behavioral Models. IEEE Trans. on Circuits and Systems—I: Regular Papers, 52(9):1795–1810, Sept. 2005. [51] R. A. Saleh and J. K. White.
Accelerating Relaxation Algorithms for Circuit Simulation Using Waveform-Newton and Step-Size Refinement. IEEE Trans. on Computer-Aided Design, 9(9):951–958, Sept. 1990. [52] J. E. Savage. Models of Computation. Exploring the Power of Computing. Addison-Wesley, Reading, 1998. [53] P. Saviz and O. Wing. Circuit Simulation by Hierarchical Waveform Relaxation. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 12(6):845–860, June 1993. [54] R. Schreier and B. Zhang. Delta-Sigma Modulators Employing Continuous-Time Circuitry. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 43(4):324–332, Apr. 1996.


[55] K. Singhal and J. Vlach. Computation of time domain response by numerical inversion of the Laplace transform. J. Franklin Inst., 299(2):109–126, Feb. 1975. [56] K. Suyama, S.-C. Fang, and Y. P. Tsividis. Simulation of Mixed SwitchedCapacitor/Digital Networks with Signal-Driven Switches. IEEE Journal of Solid-State Circuits, 25(6):1403–1413, Dec. 1990. [57] P. Vanassche, G. Gielen, and W. Sansen. Efficient Time-Domain Simulation of Telecom Frontends Using a Complex Damped Exponential Signal Model. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 169–175, Munich, Mar. 2001. [58] P. Vanassche, G. Gielen, and W. Sansen. Efficient Analysis of SlowVarying Oscillator Dynamics. IEEE Trans. on Circuits and Systems—I: Regular Papers, 51(8):1457–1467, Aug. 2004. [59] J. Vandewalle, H. J. De Man, and J. Rabaey. Time, Frequency, and zDomain Modified Nodal Analysis of Switched-Capacitor Networks. IEEE Trans. on Circuits and Systems, 28(3):186–195, Mar. 1981. [60] J. Vandewalle, J. Rabaey, W. Vercruysse, and H. J. De Man. ComputerAided Distortion Analysis of Switched Capacitor Filters in the Frequency Domain. IEEE Journal of Solid-State Circuits, 18(3):324–333, June 1983. [61] A. Vladimirescu. The SPICE book. Wiley, New York, 1994. [62] V. Volterra. Theory of Functionals and of Integral and IntegroDifferential Equations. Dover, New York, 1959. [63] P. Wambacq and W. Sansen. Distortion Analysis of Analog Integrated Circuits. Kluwer Academic Publishers, Boston, 1998. [64] E. Z. Xia and R. A. Saleh. Parallel Waveform-Newton Algorithms for Circuit Simulation. IEEE Trans. on Computer-Aided Design, 11(4):432– 442, Apr. 1992. [65] B. Yang and J. Phillips. A multi-interval Chebyshev collocation method for efficient high-accuracy RF circuit simulation. In IEEE/ACM Design Automation Conf., pages 178–183, Los Angeles, June 2000. [66] L. Yao, M. Steyaert, and W. Sansen. Low-Power Low-Voltage SigmaDelta Modulators in Nanometer CMOS. Kluwer Academic, 2006. 
[67] F. Yuan and A. Opal. An Efficient Transient Analysis Algorithm for Mildly Nonlinear Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 21(6):662–673, June 2002. [68] T. Zhang and D. Feng. An Efficient and Accurate Algorithm for Autonomous Envelope Following with Applications. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 614–617, San Jose, Nov. 2005.

5 Frequency-Domain Generic Behavioral Models

5.1 Introduction

The analog designer frequently characterizes the behavior of a system using frequency-domain properties. Therefore, representation and simulation of analog and mixed-signal systems directly in the frequency domain can naturally serve as base elements for building generic behavioral models. For efficient use in a design flow, such models should be suited both for time-efficient evaluation via simulation and for analysis that identifies both intended and parasitic signal flows in an architecture. To be able to write frequency-domain generic functions according to these requirements, a mathematical framework is presented in this chapter. Together with a specific interaction scheme for a wide class of front-end architectures, frequency-domain generic behavioral models for efficient top-down design are obtained. First, the different types of frequency-domain modeling approaches that are often used to simulate and analyze analog high-frequency systems are discussed. Based on the properties of the techniques in this brief overview, the novel mathematical framework is then introduced, which fulfills the specific requirements of generic behavioral models, such as the flexibility to represent many different architectures easily, combined with efficient methods to evaluate the performance characteristics. Techniques are presented to include all major non-ideal effects. In the last part of this chapter, generic behavioral models for front-end architectures are derived which use the new framework to describe frequency-domain behaviors.

5.2 Frequency-domain modeling approaches

5.2.1 Frequency-domain simulation

Whereas the behavior of analog systems operating at rather low frequencies can easily be obtained with time-domain simulation methods, these methods quickly become too time-consuming for systems with signals at high


frequencies like RF front-ends. Indeed, according to Nyquist's sampling theorem, the maximal time step between the samples to be able to reconstruct the original signal is inversely proportional to the bandwidth of the signals [40]. As a result, in approaches like time-marching algorithms, many sampling points are needed to obtain the steady-state response for systems operating at RF frequencies. On the other hand, instead of a set of samples, frequency-domain approaches describe the signals directly as sums of (modulated) sine waves, which explicitly refers to the frequency content. The behavior of all building blocks is described as a function of the frequencies present in the signals, for example using transfer functions. As a result, no direct representation of the signal in time is available, making it more challenging to take into account effects that are typically described in the time domain, like nonlinear distortion. Strictly speaking, a frequency-domain simulation method calculates the response of a system over a frequency interval [fmin, fmax] without looking at the corresponding signals in the time domain. The aim of this chapter, however, is to develop generic behavioral models for transceiver architectures based on the frequency-domain characterization of signals and system. Consequently, to select a computational model as base for efficient generic behavioral models, several approaches with a direct reference to the frequency domain for simulating front-ends containing signals with high-frequency components are reviewed. Compared to time-domain approaches, the frequency interval is not split up to calculate the response for subsequent frequency steps, since the response at a certain frequency is the result of the input values at all frequencies. Instead, the signals are always considered over the entire frequency interval.
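To put numbers on this: with Nyquist-rate sampling, the sample count of a time-marching run scales with the carrier frequency, while a description of only the modulation scales with the bandwidth (the carrier and bandwidth below are invented for illustration):

```python
f_carrier = 2.4e9     # RF carrier (illustrative)
f_bw      = 1e6       # information bandwidth (illustrative)
t_sim     = 1e-3      # simulated time span

# Nyquist: at least 2 samples per period of the highest frequency present
n_carrier  = int(2 * f_carrier * t_sim)   # samples needed to track the carrier
n_envelope = int(2 * f_bw * t_sim)        # samples needed to track the modulation

print(n_carrier, n_envelope, n_carrier // n_envelope)   # 4800000 2000 2400
```

A factor of thousands in sample count is what makes direct time-domain simulation of RF front-ends impractical and motivates the carrier-plus-modulation signal descriptions below.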
Different approaches result from different ways to deal with the following properties:

• Signal carriers: the set of carrier signals used to describe the signal, which can be a fixed or variable set throughout the system, with a constant or variable distance between them
• Modulation model: the way the modulation of the carrier frequencies is represented, e.g., with complex time samples, with a discrete or continuous spectrum, or without any modulation
• Computational flow: the method used to obtain the signals of a large system, e.g., by solving one big set of equations for the entire system at once, or via a signal-flow approach
• Nonlinearities: the approach used to calculate the nonlinear response, e.g., with a frequency-domain model or via conversion to the corresponding time-domain signals

These basic properties can be used to compare simulation methods for RF architectures with each other, as shown in Table 5.1. Based on the use of the time domain for both the modulation model and the handling of nonlinearities, three major groups are distinguished in this work: models which operate

5.2 Frequency-domain modeling approaches

147

strictly in the frequency domain [5, 67], equivalent low-pass or base-band models in the time domain [23, 58, 65] and harmonic balance approaches which evaluates nonlinear models in the time domain and linear approximations in the frequency domain [26]. These methods are now discussed in detail. 5.2.2 Harmonic balance algorithms The harmonic balance method shows great similarities with the time-domain collocation methods described in Section 4.2.5. The set of circuit equations (4.1a), obtained for instance with the MNA method, can be written with explicit indication of the input signal u(t): dq (t) = f (q (t) , u(t) , t) , dt

(5.1)

with q(t) the M-dimensional signal vector of the system containing voltages, currents and/or states depending on the adopted analysis method. In the basic harmonic balance algorithm, the response q(t) of the system to an input signal u(t) with period T0 is assumed to be also periodic with the same period [25]. Consequently, all signals of the system as well as f(·) of (5.1) can be written as Fourier series with fundamental frequency f0 = 1/T0, e.g.,

  q(t) = Σ_{k=−∞}^{+∞} q_k · e^{j2πkf0t},   (5.2)

with q_k ∈ C^M, q_{−k} = q̄_k, the bar denoting complex conjugation: (a + jb)‾ = a − jb. Substituting these expressions in (5.1) converts the set of differential equations into a set of algebraic equations:

  j2πkf0 · q_k = F_k(q_k, u_k),   k ∈ Z.   (5.3)

For a general nonlinear function f(·), the Fourier coefficients F_k are found by first converting the signals q(t) and u(t) to the time domain, applying the nonlinear function, and finally transforming the result back to the frequency domain [31]. This operation makes the method a mixed-domain procedure. The Fourier series are truncated after the maximal frequency of interest F·f0, which results in a system of M(2F + 1) nonlinear equations with unknown complex coefficients q_0, ..., q_F. These equations can be solved, for instance, with a Newton–Raphson scheme comparable to the time-marching algorithms. The resulting large set of linearized equations is usually solved iteratively [38], e.g., with relaxation methods [26] or algorithms based on Krylov subspaces [54, 14, 20]. Preconditioners are usually applied to obtain a system with the same solution that is easier to solve [61, 42]. The basic harmonic balance scheme can be extended to systems where the input signal is a multi-tone signal, e.g., using generalized Fourier series [55], multi-dimensional Fourier transforms [24] or false frequencies [26].
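As an illustration of this mixed-domain procedure, the following sketch applies the basic single-tone harmonic balance scheme (5.1)-(5.3) to a simple scalar system; the ODE, the truncation order F and all numerical values are hypothetical choices for illustration, not taken from the text:

```python
import numpy as np
from scipy.optimize import fsolve

# Harmonic balance sketch for the scalar test system (hypothetical example)
#   dq/dt = -q - q**3 + cos(2*pi*f0*t)
# Unknowns: complex Fourier coefficients q_0 .. q_F, with q_{-k} = conj(q_k).
f0 = 1.0                  # fundamental frequency, T0 = 1/f0
F = 7                     # truncation order of the Fourier series
Npts = 64                 # time samples for FFT-based evaluation of f(.)

t = np.arange(Npts) / (Npts * f0)
u = np.cos(2 * np.pi * f0 * t)
k = np.arange(F + 1)

def residual(x):
    # unpack real/imaginary parts; q_0 is real for a real signal
    qk = x[:F + 1] + 1j * np.concatenate(([0.0], x[F + 1:]))
    Q = np.zeros(Npts, dtype=complex)        # two-sided spectrum
    Q[:F + 1] = qk
    Q[-F:] = np.conj(qk[1:])[::-1]           # q_{-k} = conj(q_k)
    q = np.fft.ifft(Q).real * Npts           # signal converted to time domain
    fq = -q - q**3 + u                       # nonlinearity applied in time
    Fk = np.fft.fft(fq)[:F + 1] / Npts       # ... and back to frequency domain
    r = 1j * 2 * np.pi * k * f0 * qk - Fk    # balance equations, cf. (5.3)
    return np.concatenate((r.real, r.imag[1:]))

x = fsolve(residual, np.zeros(2 * F + 1))    # Newton-type solution
```

The nonlinearity is evaluated on time samples obtained via an inverse FFT and transformed back, which is exactly the conversion loop described above; a circuit-level implementation would replace the scalar ODE with the M-dimensional system (5.1).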


However, the number of tones F used in the Fourier series rapidly increases when the input signal contains several closely spaced frequencies or when the system exhibits strongly nonlinear behavior, resulting in very large systems of equations to solve [60]. Furthermore, several input signals of interest, like digitally modulated signals, cannot be represented by a small number of sinusoidal tones [53]. The application of harmonic balance algorithms is limited by the number of signals in the system (M), the distance between the tones (f0) and the highest frequency (F·f0). For example, long computational times are required to calculate the steady-state response of entire front-end architectures over a wide frequency range of interest while taking intermodulation distortion into account [13]. As a result, the simulation of individual building blocks or of systems at high abstraction levels is the main target for this approach.

5.2.3 Approaches with time-domain equivalent low-pass models

The signals occurring in RF transceivers can be represented with a limited number of time-domain samples if their bandpass character is taken into account. The equivalent low-pass transformation is used to convert between the normal signal x(t) and the low-pass equivalent x^L(t) [4]:

  x^L(t) = [x(t) + j·H{x(t)}] · e^{−j2πf0t},   (5.4a)
  x(t) = ℜ{x^L(t) · e^{j2πf0t}},   (5.4b)

where H{·} denotes the Hilbert transform [3] and ℜ{·} selects the real part. If the spectrum of the real signal is X(f), then the Fourier transform of x^L(t) is given by

  F{x^L(t)} = 2 · X(f + f0) · u(f + f0),   (5.5)

with u(f) the unit step function. Consequently, if the normal signal x(t) contains only frequency components within a band 2·BW around f0, the complex equivalent signal x^L(t) is a low-pass signal with bandwidth BW. Therefore, it can be represented by complex time samples at a sampling frequency larger than BW. By converting each building block to an operation between equivalent low-pass signals, a simulation model for the entire system is obtained [23, 19]. For example, a filter with linear transfer function H(f) is converted to the filter H(f + f0)·u(f + f0). Each signal is represented by a complex time signal, where different sampling rates can be used at different nodes in the system. Further, transfer functions are translated into digital filters. This approach results in a time-domain data-flow simulation and has been adopted, e.g., by the Communications Toolbox in MATLAB/Simulink [37]. The classical equivalent low-pass model around a main carrier frequency can be extended to multiple carriers [65, 9]. A generalized representation of (5.4b) is postulated as signal representation:

  x(t) = Σ_{i=1}^{N} ℜ{x_i^L(t) · e^{j2πf_i·t}},   (5.6)

where x_i^L(t) is the low-pass equivalent of the frequency components of the signal in a band around carrier frequency f_i. For each carrier frequency, a different bandwidth can be used and hence a different sampling rate for each complex equivalent signal: the Multi-Rate Multi-Carrier model. This extended low-pass model is the basic signal representation in simulators like FAST [58] and DISHARMONY [11]. Since modulation techniques used in modern communication standards, like OFDM and MC-CDMA, apply a modulation on multiple tones, a multi-carrier signal representation is more appropriate for modern transceivers [44]. Another advantage of the extended low-pass model is the ability to take out-of-band distortion into account, caused, for example, by intermodulation products. The classical model looks only at the in-band AM-to-AM and AM-to-PM distortion [23]. For weakly nonlinear distortion, a combination of the components at different carriers can be made [58, 9]. A strongly nonlinear response at each time point can be approximated by assuming an instantaneous signal consisting of unmodulated sinusoids at the carrier frequencies. An FFT and IFFT are then used to calculate the nonlinearity for this signal in the time domain and to obtain again the multi-carrier representation. However, whereas a main advantage of time-domain simulation is the ability to easily deal with strongly nonlinear behavior, this property is lost in approaches with multi-carrier time-domain equivalent low-pass models. Instead, cumbersome computations are needed at each time point.

5.2.4 Strictly frequency-domain approaches

In a strictly frequency-domain approach, both the signals and the behavior of the building blocks are expressed in the frequency domain without conversion to the time domain. For the signals, a multi-tone model comparable to the one used by harmonic balance methods (5.2) can be employed [67].
Further, continuous-time spectra are obtained by converting time-domain representations like (5.4b) or (5.6) to the frequency domain, so that the signal is written as a sum of signal components around the carrier frequencies, e.g.,

  X(f) = (1/2) Σ_{i=1}^{N} [X_i^L(f − f_i) + X̄_i^L(−f − f_i)],   (5.7)

with X_i^L(f) = F{x_i^L(t)} and the bar denoting complex conjugation. These components can be sampled [67] or an analytical expression can be chosen. For example, the ORCA tool [5] uses band-limited rational functions to represent the power spectral density of the signals. Analytical expressions are then provided for the operation of all building blocks.
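The low-pass equivalents x_i^L(t) that underlie (5.6) and (5.7) are easy to compute numerically. A minimal sketch using scipy.signal.hilbert, which returns the analytic signal x + j·H{x} of (5.4a); the carrier, modulation frequencies and amplitudes are arbitrary choices:

```python
import numpy as np
from scipy.signal import hilbert

# Numerical check of the equivalent low-pass transformation (5.4).
fs, f0 = 10_000.0, 1_000.0          # sampling rate and carrier (arbitrary)
t = np.arange(0, 0.1, 1 / fs)

# Narrow-band test signal: AM and PM on the carrier (hypothetical values)
A = 1.0 + 0.3 * np.cos(2 * np.pi * 20 * t)
phi = 0.5 * np.sin(2 * np.pi * 10 * t)
x = A * np.cos(2 * np.pi * f0 * t + phi)

# (5.4a): down-shift the analytic signal to obtain the low-pass equivalent
xL = hilbert(x) * np.exp(-1j * 2 * np.pi * f0 * t)

# (5.4b): the real bandpass signal is recovered from the low-pass equivalent
x_rec = np.real(xL * np.exp(1j * 2 * np.pi * f0 * t))
assert np.allclose(x, x_rec)

# The low-pass equivalent carries the modulation: |xL| = A(t), arg(xL) = phi(t)
assert np.allclose(np.abs(xL), A, atol=1e-6)
```

Because hilbert() computes the analytic signal with an FFT, the identities hold to machine precision here only because the test signal is periodic over the simulated window; for aperiodic data, edge effects appear.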


The architecture of a transceiver is directly translated into a frequency-domain data-flow simulation by modeling the behavior of all building blocks in the frequency domain. Feedback paths in the system can be dealt with by applying frequency relaxation [67]. The simple linear behavior of the blocks is easily represented by commonly used transfer functions [22]. Linear periodically time-variant behavior is represented using time-varying transfer functions [66] or harmonic transfer matrices [56], which define transfer operations between different components at different frequencies.

Weakly nonlinear distortion of the form x(t)^k is translated in the frequency domain into convolution operations between k signal components [12, 67], with coefficients determined via combinatorics. Other approaches are based on Volterra series [62] to obtain models with Volterra kernel transfer functions [63, 66]. Time-variant Volterra kernels [10, 41] model nonlinearity in periodically time-varying systems.

Simulations in the frequency domain are closely related to the way a designer thinks about an RF transceiver: filtering operations, harmonics and intermodulation products have a direct representation. High-frequency components are easily taken into account without the need to drastically increase the number of samples. Noise transfer functions can also easily be computed to determine noise figures [5].

The exact calculation of all nonlinear distortion components, however, requires the computation of a large number of combinations of the input components, which increases rapidly with the distortion order k, similar to the time-domain multi-carrier low-pass models. Therefore, only the largest components should be taken into account. Another drawback of fully frequency-domain methods is the difficulty of dealing with strongly nonlinear behavior, which is only easily modeled by conversion to the time domain.
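As a small sketch of this combinatorial convolution approach for a nonlinearity x(t)^k with k = 3 (the two tone frequencies and amplitudes below are hypothetical):

```python
from collections import defaultdict
from itertools import product

def convolve_spectra(X, Y):
    """Convolution of two line spectra given as {frequency: amplitude}."""
    Z = defaultdict(float)
    for (fx, ax), (fy, ay) in product(X.items(), Y.items()):
        Z[fx + fy] += ax * ay
    return dict(Z)

# Two unit-amplitude tones at 900 and 1000 (arbitrary units); a real cosine
# of amplitude a contributes two-sided lines of weight a/2 at +f and -f.
X = {900.0: 0.5, -900.0: 0.5, 1000.0: 0.5, -1000.0: 0.5}

X3 = convolve_spectra(convolve_spectra(X, X), X)   # line spectrum of x(t)**3

# Third-order intermodulation products appear at 2*f1 - f2 and 2*f2 - f1;
# each two-sided line has weight 3/8, i.e. a real amplitude of 3/4.
im3 = (X3[800.0], X3[1100.0])
```

The number of terms grows combinatorially with the distortion order k and the number of spectral lines, which is exactly the scaling limitation discussed above.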
These non-idealities are therefore often either neglected or approximated by linear or weakly nonlinear behavior.

5.2.5 Overview

Table 5.1 shows a comparison between the properties of the three groups of simulation approaches with frequency-domain emphasis. In addition to the properties listed in Section 5.2.1, the dependency of the computational complexity on the signal representation in the different methods is also indicated.

Table 5.1. Overview of properties of three major approaches for simulation of high-frequency systems: harmonic balance (HB), time-domain low-pass (TD) and strictly frequency-domain (FD).

                         HB                     TD                            FD
  Signal carriers        Equidistant tones      Single or multiple carriers   Multiple carriers
  Modulation model       Unmodulated            Time-domain complex           Unmodulated sinusoidal tones
                         sinusoidal tones       signal                        or continuous spectra
  Computational flow     One global set of      Data-flow model               Data-flow model
                         equations
  Nonlinearities         Via conversion to      AM/AM+PM, via combinatorics   Via combinatorics or with
                         time domain            or via FFT/IFFT               Volterra kernels
  Computational          Tone spacing and       Nonlinearity order, number    Nonlinearity order and
  complexity             maximal frequency      of carriers and bandwidths    number of carriers

The ability to easily deal with most non-idealities, like strong nonlinearities, is the strongest attraction of the harmonic balance methods. They can also be applied to general systems, whereas the other approaches are intended for mainly feedforward systems. On the other hand, both the time-domain and strictly frequency-domain approaches can easily deal with modulated sinusoidal signals and with different carriers, and their computational complexity is not determined by the distance between the spectra. Frequency-domain approaches show great similarities with commonly applied analysis methods, whereas time-domain simulation provides a direct representation of the transient signals. To conclude, the best approach depends on the circumstances in which the simulation model is used.

5.3 A new framework for developing frequency-domain generic behavioral models

A generic behavioral model serves as the representation of a system during the architectural exploration phase of a design flow, depicted in Fig. 3.10 on p. 66. One of its properties is the ability to represent different architectures by different specializations of generic functions. The dynamic behavior between these generic functions is expressed by an interaction scheme which is invariant during the exploration. Whereas this scheme can be based on one of the standard frequency-domain simulation methods summarized in the previous section, efficient application of the model during high-level design puts specific requirements on the representation. Therefore, a novel mathematical framework has been developed in this work that simplifies the derivation of generic behavioral models for RF transceivers [35].


5.3.1 Requirements

In the past years, the world of telecommunications has seen a strong growth in communication standards, e.g., GSM, EDGE, Bluetooth and IEEE 802.11x [15]. Consequently, a wide variety of architectures for analog and mixed-signal front-ends have been developed [47]. Additionally, these architectures become continuously more complex since they frequently have to handle multiple standards (e.g. [43]). Furthermore, mobile communication puts strong requirements on the power consumption of the transceivers, resulting in several techniques to improve the performance of existing architectures. The generic behavioral models for RF transceivers should be able to deal with this increasing complexity in order to systematically study the performance limitations of front-end architectures. As a result, the mathematical framework used to develop generic behavioral models should fulfill some requirements:

Generic signals. Signals in RF transceivers can consist of pure tones or of AM/FM/PM-modulated sinusoidal signals on one or multiple high carrier frequencies. Further, the information signal may be split over multiple paths, like IQ-paths. Since these signal characteristics can differ throughout the architecture, a generic signal model should be used to easily represent signals with different properties.

Linear behavior. The behavior of the building blocks at high abstraction levels is often approximated by a linear model. The generic behavioral model should be able to include deviations from the ideal response caused by linear non-idealities like parasitic poles, finite gain, or mismatches in multi-path topologies. Since frequency conversion is a commonly used operation in transceivers, both time-variant and time-invariant systems should be represented by generic structures in the framework. The non-idealities can then be added to these generic representations by subsequent refinement steps.

Nonlinear distortion.
Nonlinear distortion results in the generation of harmonic signals and intermodulation products, deteriorating the SNDR. To analyze these effects, distortion-related parameters of the building blocks, like IP3 [50], should be incorporated in the generic behavioral model. They are converted into parasitic signal flows. The addition of nonlinear behavior ensures that the design methodology with generic behavioral models can be used in a large area of the abstraction-description plane, from high to low abstraction levels.

Parasitic signals. If an architecture fails to achieve some specifications, a clear identification of the unwanted signal flows can help to propose transformations for improvement. For example, characterizing the transfers of parasitic signals throughout the architecture can be used to add extra filters. The relative importance of different non-ideal signals can be examined. Therefore, the model should separate ideal from linear and nonlinear non-ideal flows, instead of only calculating the total response as, for example, in the harmonic balance approaches.


Noise. An important property of analog front-ends is the noise behavior, expressed by the noise factor [29]. To estimate this figure, the transfer from a noise signal towards the output should be computed. Therefore, the mathematical framework should support the calculation of transfer functions and the propagation of stochastic signals. Since noise is usually defined by a power spectral density, frequency-domain characterization of these properties is straightforward.

Signal processing algorithm. Efficient architectural exploration implies the exploitation of mathematical relations. These operations are supported by a direct link between the structure and the processing algorithm. The generic behavioral model should make a distinction between the real signals in the architecture (e.g., all the different signals in multiple paths) and the information signals of the algorithm, and provide a formal translation between them.

The generic behavioral models for RF front-ends presented in this work try to combine the advantages of simulation-oriented and information-oriented models. The first group of models implements one of the simulation methods of Section 5.2. Representation at low abstraction levels with many non-idealities is the major benefit of these approaches. Models that focus on the information flow throughout the architecture typically operate at high abstraction levels with almost ideal blocks. Their main focus is the verification of signal processing algorithms. The mathematical framework that has been developed makes it possible to write generic behavioral models that can emphasize both the information flow and the architectural structure, at high and low levels of abstraction.

5.3.2 Signal representation

The base data structure of the proposed mathematical framework is the generic representation of signals as Polyphase Harmonic Signals [32]. This signal model consists of a set of polyphase bandpass signals at different carrier frequencies.
A formal link between the information signals and the real signals in the system is established via linear base transformations of the polyphase signals. More accurate representations are achieved by increasing the number of frequencies, or by changing the modulation of the signal components. In this section, a formal definition of the PHS model is given. This concept is based on a harmonic extension of polyphase signals. Therefore, polyphase signals and related base transformations are first elaborated. Then, the actual signal model is defined, which is graphically illustrated by Fig. 5.2.

Polyphase signals and base transformations

In general, analog and mixed-signal front-ends contain building blocks performing operations on sets of signals rather than on a single signal. Figure 5.1 shows an example of operations on sets of one (single-ended), two (common and differential mode) and four real signals (common and differential in IQ-paths).

[Fig. 5.1. Example of polyphase signals with different number of phases N in a simple receiver architecture.]

All signals of the set are represented independently of each other in standard simulation-oriented models. There is, however, only one information signal, which is directly characterized in information-oriented transceiver models. For a set of two signals, the information is usually carried in the differential-mode signal. When there is an I- and a Q-path, complex signal processing is used to represent the operations on the information signal [8, 36]: if the differential-mode signals of each path are x(t) and y(t) respectively, then the useful signal is the complex signal x(t) + jy(t), with possibly different behavior for positive and negative frequencies. Although this complex signal model makes it straightforward to analyze the dominant operating principle of an architecture, it also has some limitations:

• The concept is limited to sets of four signals and cannot easily be extended to a larger number of signals, which can occur in frequency converters (e.g. [48], [27]) or demodulators (e.g. [49]).
• Since a complex signal representation is used to model different paths, the model is incompatible with the equivalent low-pass transformation (5.4). As a result, representing modulated signals becomes cumbersome, requiring time-consuming simulation techniques.
• The direct response of the building blocks to the real signals is not directly represented. Instead, only a combination of the physical signals as a complex signal is present. Since non-idealities of blocks like mixers and filters are usually expressed in terms of the real signals, it is not straightforward to add them to the complex signals. No common-mode signals can be included in the model, which makes it impossible to represent some non-ideal effects, like parasitic transfers from common to differential modes corresponding, for instance, with the CMRR.
In order to circumvent these limitations, the signal model of the new framework we developed is based on an explicit representation of signals as general N-dimensional polyphase signals [18].


Definition 5.1 (Polyphase signal). An N-dimensional polyphase signal s(t) is an ordered set of N modulated sinusoidal signals

  A_p(t) cos(2πf0t + φ_p(t)),   p = 1, ..., N,   (5.8)

with the same fundamental frequency f0, but with different amplitudes and/or phases.

Application of the equivalent low-pass transformation (5.4) on the components of s(t) results in the complex time-domain representation for the polyphase signal:

  s^L(t) = [A_1(t)·e^{jφ_1(t)}  ...  A_N(t)·e^{jφ_N(t)}]^T.   (5.9)

The corresponding frequency-domain signal is usually more appropriate when analyzing front-end architectures:

  S^L(f) = [F{A_1(t)·e^{jφ_1(t)}}  ...  F{A_N(t)·e^{jφ_N(t)}}]^T.   (5.10)

The polyphase signal can also be interpreted within the concepts of general linear theory. The elements of the vector (5.9) can be considered as time-varying coordinates with respect to the base E = {e_1^L, ..., e_N^L}. Each base vector contains a signal in one phase and no signals in the other phases:

  e_p^L(t) = [0  ...  0  1  0  ...  0]^T,   (5.11)

with the single 1 at position p.

This set of base vectors is termed the single-phases base. Other bases can be defined which are of special interest when modeling front-end architectures:

Common-/differential-mode base. Each base vector corresponds to a common- or differential-mode signal for a subset of two signals, usually corresponding with a signal path:

  e_{2p−1,2p}^L(t) = [0  ...  0  1  ±1  0  ...  0]^T,   (5.12)

with the entries 1 and ±1 at positions 2p−1 and 2p, for p = 1, ..., N/2. This base corresponds to the way designers usually reason about circuits, which simplifies the representation of the building blocks in the architecture.

Symmetrical components base. The base vectors form unity symmetrical polyphase signals, i.e., the amplitudes are one and there is a constant phase difference ∆φ between two successive signals of the set:

  e_p^L(t) = [α^0  α^{p−1}  α^{2(p−1)}  ...  α^{(N−1)(p−1)}]^T,   (5.13)

with α = e^{−j2π/N} and for p = 1, ..., N. There are some special choices for p, which are listed in Table 5.2.

Table 5.2. Special unity symmetrical polyphase signals.

  p          ∆φ          Name                      Example for N = 4
  1          0°          Identical sequence        [1   1   1   1]
  2          360°/N      Positive sequence         [1  −j  −1   j]
  N/2 + 1    180°        Semi-identical sequence   [1  −1   1  −1]
  N          −360°/N     Negative sequence         [1   j  −1  −j]

The symmetrical components base emphasizes the information signal. For four phases, the connection with the representation of the model for complex signal processing is obvious: the coordinates of the polyphase signal for the positive and negative sequences correspond to the signal components at positive and negative frequencies. The 2-dimensional symmetrical components base coincides with the common-/differential-mode base.

Conversion from one base to another is a basic linear operation and is characterized by an N × N base transformation matrix B_N:

  [e'_1^L(t)  ...  e'_N^L(t)] = [e_1^L(t)  ...  e_N^L(t)] · B_N.   (5.14)

Example 5.1. The base transformation matrix for conversion from a single-phases base to the common-/differential-mode base is given by

  B_N^{sp→cd} = I_{N/2} ⊗ [1  1; 1  −1],   (5.15)

where I_{N/2} is the identity matrix of order N/2 and '⊗' denotes the direct matrix or Kronecker product. For a conversion from a single-phases base to one with symmetrical components, B_N becomes

  B_N^{sp→sc} = [ α^0      α^0      ...   α^0
                  α^0      α^1      ...   α^{N−1}
                  ...      ...      ...   ...
                  α^0      α^{N−1}  ...   α^{(N−1)(N−1)} ].   (5.16)

Conversion from the common-/differential-mode vectors to the symmetrical components is then characterized by

  B_N^{cd→sc} = (I_{N/2} ⊗ [1  1; 1  −1])^{−1} · B_N^{sp→sc} = ½ (I_{N/2} ⊗ [1  1; 1  −1]) · B_N^{sp→sc},   (5.17)

which combines (5.15) and (5.16).
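Example 5.1 can be verified numerically for N = 4 phases; a minimal sketch building (5.15)-(5.17) with Kronecker products (the consistency check at the end restates (5.17)):

```python
import numpy as np

# Base transformation matrices of Example 5.1 for N = 4 phases.
N = 4
alpha = np.exp(-2j * np.pi / N)

# (5.15): single-phases -> common-/differential-mode base
B_sp_cd = np.kron(np.eye(N // 2), np.array([[1.0, 1.0], [1.0, -1.0]]))

# (5.16): single-phases -> symmetrical components base (DFT-like matrix)
B_sp_sc = np.array([[alpha ** (m * p) for p in range(N)] for m in range(N)])

# (5.17): common-/differential-mode -> symmetrical components;
# the inverse of I (x) [[1,1],[1,-1]] is half the same matrix
B_cd_sc = 0.5 * np.kron(np.eye(N // 2),
                        np.array([[1.0, 1.0], [1.0, -1.0]])) @ B_sp_sc

# Consistency: going sp -> cd -> sc must equal going sp -> sc directly
assert np.allclose(B_sp_cd @ B_cd_sc, B_sp_sc)

# The positive-sequence column matches Table 5.2: [1, -j, -1, j]
assert np.allclose(B_sp_sc[:, 1], [1, -1j, -1, 1j])
```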



General linear theory can be applied to find the coordinates of the polyphase signal with respect to the new base [30]:

  s^{L'}(t) = B_N^{−1} · s^L(t).   (5.18)

For example, if x_1(t) and x_2(t) are two single-phases signals, then the common-/differential-mode signals become x_1(t) + x_2(t) and x_1(t) − x_2(t), using base transformation matrix (5.15). Application of the inverse equivalent low-pass transformation (5.4b) results in the base transformation formula for real polyphase signals:

  s'(t) = ℜ{B_N^{−1}} · s(t) − ℑ{B_N^{−1}} · H{s(t)},   (5.19)

where ℜ{·} and ℑ{·} select the real and imaginary parts respectively and H{·} denotes the Hilbert transform [3]. These base transformations provide a formal way to represent either the architecture (real signals with a single-phases or common-/differential-mode base) or the algorithm (information signals with the base of symmetrical components), and to convert between them. The generic concept gives the possibility to easily explore different options for the number of phases, corresponding to different architectures.

The representation as polyphase signals can also be used for the multi-phase sampled-data systems defined in Section 4.4 of Chap. 4. Each real signal S(f) is split into a set of F signals S_r(f) corresponding to the number of (clock) phases F of the system [59]:

  S(f) = Σ_{r=1}^{F} S_r(f).   (5.20)

This expression is then used to find the total information signal. Note that the total number of phases in an architecture with N' parallel paths becomes N = F · N'. The base transformation matrices are adjusted accordingly to realize base transformations (e.g., those of Example 5.1) in each phase of the multi-phase sampled-data system.

Polyphase Harmonic Signals

Typical signals in front-end architectures consist of a collection of signal components around different carrier frequencies. This harmonic character of the signals is combined with the concept of polyphase signals elaborated above in the basic signal representation of our framework: the Polyphase Harmonic Signal (PHS). Basically, it contains a component for every phase-frequency combination, described by an equivalent low-pass description.

Definition 5.2 (PHS—Polyphase Harmonic Signal). The Polyphase Harmonic Signal (PHS) s̃(t) with the N-dimensional base E and the A-dimensional set of fundamental frequencies F is an ordered set of A·N modulated sinusoidal signals s_i(t) given by

  s_i(t) = A_p(t) cos(2πf_k·t + φ_p(t)),   (5.21a)


corresponding to the signal of phase p at fundamental frequency k, with

  p = i − ⌊(i−1)/N⌋·N,   k = ⌊(i−1)/N⌋ + 1,   (5.21b)

for i = 1, ..., A·N.

The equivalent low-pass transformation (5.4) is used for each signal s_i(t) in the set, with the fundamental frequency f_k defined by (5.21b), to transform the signals to a complex time-domain representation. The PHS can be written as a set of polyphase signals with different fundamental frequencies:

  s̃^L(t) = [s_1^L(t)^T  ...  s_A^L(t)^T]^T.   (5.22)

Alternatively, it can be considered as a polyphase signal with multi-carrier signals s̃_p^L(t) in each phase:

  s̃^L(t) = P_{A,N} · [s̃_1^L(t)^T  ...  s̃_N^L(t)^T]^T,   (5.23)

with permutation matrix P_{A,N} given by

  P_{A,N} = [I_A ⊗ I_N(:, 1)  ...  I_A ⊗ I_N(:, N)],   (5.24)

where I_N(:, p) selects the pth column of I_N. This dual character of a PHS is schematically represented in Fig. 5.2 for a 3-dimensional PHS defined on F = {f_1, f_2, f_3, f_4}. The signal components are represented in the phase-frequency plane by the spectrum of each signal S_i(f) = F{s_i(t)}, concentrated around carrier frequency f_k. An unmodulated sinusoid is indicated by an arrow. The total frequency-domain PHS is represented with a low-pass equivalent model:

  S̃^L(f) = [S_1^L(f)^T  ...  S_A^L(f)^T]^T   (5.25a)
          = P_{A,N} · [S̃_1^L(f)^T  ...  S̃_N^L(f)^T]^T,   (5.25b)

corresponding to a set of polyphase or multi-carrier signals respectively.

Fig. 5.2. Example of a frequency-domain representation of a Polyphase Harmonic Signal with N = 3 phases and A = 4 frequencies.
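The permutation matrix (5.24) can be written down directly; a minimal sketch (0-based indexing and the dimensions A = 4, N = 3 are implementation choices for illustration) showing that it maps the phase-major stacking of (5.23) onto the frequency-major ordering defined by (5.21b):

```python
import numpy as np

# Sketch of the permutation matrix P_{A,N} of (5.24).
def perm_matrix(A, N):
    I_A, I_N = np.eye(A), np.eye(N)
    # column blocks I_A (x) I_N(:, p) for p = 1..N (0-based here)
    return np.hstack([np.kron(I_A, I_N[:, [p]]) for p in range(N)])

A, N = 4, 3
P = perm_matrix(A, N)

# Label each component with its (frequency k, phase p) pair and stack
# phase-major: all A frequencies of phase 0, then phase 1, ...
phase_major = np.array([(k, p) for p in range(N) for k in range(A)])
reordered = (P @ phase_major).astype(int)

# Frequency-major stacking: all N phases of frequency 0, then frequency 1, ...
expected = np.array([(k, p) for k in range(A) for p in range(N)])
assert np.array_equal(reordered, expected)

# As a permutation matrix, P is orthogonal
assert np.allclose(P @ P.T, np.eye(A * N))
```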


The signal representation depicted in Fig. 5.2 has some specific properties in which it differs from the signal models used in the pure simulation-oriented approaches described in Section 5.2 above: • In contrast to the harmonic balance approaches, the frequencies present in F should not be equidistant or contain the DC component. Like the multicarrier model (5.6), a random set of frequencies can be selected. As a result, the model of the signal can be limited to the most important or interesting frequencies at each node in the architecture. These frequencies are usually determined by the communication standards (e.g., by definition of the frequency bands), the operations of the building blocks (e.g., sampling or down-conversion operations) and by the number of non-ideal components like harmonics that one wants to take into account. • Only positive frequencies are present in F . The typical asymmetric representation of polyphase filters with different behavior for positive and negative frequencies is present in the presented framework as a different behavior for the positive and negative sequences. In other words, the negative frequencies correspond to another phase in the PHS representation. Since each component of the PHS represents a real modulated sinusoidal signal, the spectrum at negative frequencies in a particular phase is the complex conjugate of that at positive frequencies and hence it contains no extra information. If the wanted information signal is present twice in a signal, the information flow throughout the architecture becomes less clear, making the link between algorithm and architecture more confused. Moreover, all data structures used to model the building blocks would contain more elements. • All signal components are represented with low-pass equivalent spectra either as samples or via an analytical expression, like a rational function. 
Conversion to time-domain is easily achieved via an inverse Fourier transform, but the main characteristic behavior of the building blocks can be expressed as operations on these equivalent baseband components in frequency domain. • The generic character of the signal model makes it easy to represent both modulated sinusoids and pure tones. This principle simplifies the applications of refinement operations on the signal components during a highlevel design flow. A first analysis may contain simple signal representations (e.g., S˜ L (f ) independent of f ) whereas subsequent analyses may use more complicated models. The system designer gets the possibility to gradually increase the level of detail when analyzing the front-end architecture. • Actual implementations of the interaction scheme of a generic behavioral model written with the presented framework can easily reduce the complexity by reducing the number of components of the Polyphase Harmonic Signals. For example, if a designer working at high abstraction level is not interested in all common-mode signals, the number of components is easily reduced by a factor of two. However, the proposed methodology


provides the flexibility to include these signals to verify the behavior of the architecture, if desired.

When calculating properties like the SNR, the power present in the Polyphase Harmonic Signal (PHS) is needed. The power signal can be described with a multi-carrier low-pass equivalent representation similar to the PHS. The different components can be represented directly, or they can be obtained from representation (5.25). If the bandwidths of the baseband equivalent signals are small enough, then the components are proportional to

\left| \tilde{S}^L(f) \right|^2 = \left[\, |S_1^L(f)|^2 \;\; \ldots \;\; |S_{AN}^L(f)|^2 \,\right]^T. \qquad (5.26)
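As a minimal numerical sketch (not from the book; the frequency grid and component shapes are invented for illustration), the stacked power components of (5.26) can be computed directly from sampled low-pass equivalents:

```python
import numpy as np

# Two baseband-equivalent components of a toy PHS, sampled on a frequency grid.
f = np.linspace(-0.5, 0.5, 201)
S1 = np.sinc(4 * f)                       # narrow-band information component
S2 = 0.1 * np.exp(2j * np.pi * 3 * f)     # small flat-magnitude component

# Eq. (5.26): for narrow-band components, the power representation of the PHS
# is simply the stacked squared magnitudes of the low-pass equivalents.
P = np.vstack([np.abs(S1) ** 2, np.abs(S2) ** 2])
assert P.shape == (2, f.size) and np.all(P >= 0)
```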

The signal components should be re-arranged if the bandwidths are larger than the distance between the fundamental frequencies. To perform this rearrangement, a set of block filters W_k(f) is defined, centered around the fundamental frequencies f_k and with a bandwidth determined by the positions of the neighboring frequencies f_{k-1} and f_{k+1}. The operation is then mathematically described as follows:

\tilde{S}^{L\prime}(f) = \sum_{k=1}^{A}
\begin{bmatrix}
W_1(f+f_1)\, u(f+f_1) \left[ S_k^L(f-f_k+f_1) + S_k^L(-f-f_k-f_1) \right] \\
\vdots \\
W_A(f+f_A)\, u(f+f_A) \left[ S_k^L(f-f_k+f_A) + S_k^L(-f-f_k-f_A) \right]
\end{bmatrix}, \qquad (5.27)

with S_k^L(f) the low-pass equivalent polyphase signal at f_k, defined by (5.25a). The right-hand terms in the sums usually vanish. Finally, a transformation of the polyphase base E of a PHS S̃^L(f) can easily be expressed using the base transformation matrix B_N defined in (5.14):



\tilde{S}^{L\prime}(f) = \left[ I_A \otimes B_N^{-1} \right] \cdot \tilde{S}^L(f). \qquad (5.28)
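A base transformation of this form is easy to sketch numerically. The following Python fragment (not from the book; the toy sizes and the unitary DFT matrix standing in for the book's B_N of (5.14) are assumptions) applies and inverts the transformation with a Kronecker product:

```python
import numpy as np

A, N = 3, 4                     # toy sizes: A fundamental frequencies, N phases
rng = np.random.default_rng(1)

# A stacked polyphase harmonic signal sample with A*N complex components.
S = rng.standard_normal(A * N) + 1j * rng.standard_normal(A * N)

# Any invertible base transformation matrix may be used; a unitary DFT matrix
# stands in here for the book's B_N.
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Eq. (5.28): the base change acts per frequency, hence the Kronecker product.
S_new = np.kron(np.eye(A), np.linalg.inv(B)) @ S

# Transforming back with I_A (x) B_N recovers the original components.
assert np.allclose(np.kron(np.eye(A), B) @ S_new, S)
```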

5.3.3 Linear representation of building blocks

The basic building blocks in analog front-end architectures are amplifiers, filters, phase converters, mixers, A/D converters, etc. To study the behavior of these blocks, a linear approximation is made first, since the dominant behavior of these blocks is linear. For this purpose, this subsection presents the Polyphase Harmonic Transfer Matrix (PHTM) as the element of the developed mathematical framework that represents linear – possibly time-variant – behavior. Such a matrix contains elements which describe the linear transfers from the signal components in different phases and at different fundamental frequencies of the input signal towards components of the output signal. Therefore, the new framework can be referred to as the Phase-Frequency


Transfer model (PFT model). Each element of the PHTM behaves similarly to a familiar transfer function, as depicted in Fig. 5.6. Again, the new concept is considered as an extension of polyphase behavior. After introducing the formal definition of Polyphase Harmonic Transfer Matrices (PHTMs), properties for the manipulation of PHTMs are derived in this section. Finally, a graphical representation of PHTMs as matrix box plots (Fig. 5.8) is introduced to view the main transfers at once during analysis.

Polyphase filters

A polyphase filter describes a mapping operation of an input signal x(t) onto the output signal y(t). Whereas linear single-phase filters are described by transfer functions, the operation of a linear polyphase filter is described with a polyphase transfer matrix H(f).

Definition 5.3 (Polyphase filter). A linear (N, M)-dimensional polyphase filter is a building block that converts an N-phase polyphase signal x(t) into an M-phase polyphase signal y(t), characterized by N · M transfer functions H_{q,p}(f) lumped together in the M × N matrix H(f):

Y(f) = H(f) \cdot X(f). \qquad (5.29)

If base transformations are applied to the input and output signals, with base transformation matrices B_{N_{in}} and B_{N_{out}} respectively, then the polyphase transfer matrix with respect to the new bases is easily calculated from general linear theory [30]:

H'(f) = B_{N_{out}}^{-1} \cdot H(f) \cdot B_{N_{in}}. \qquad (5.30)

This property can be used to examine the effect of a filter on the information signals. First, the transfer matrix is described with respect to a single-phases or common-/differential-mode base, emphasizing the structure of the architecture. Then, by application of (5.30), the transfers of the information signals described with respect to the symmetrical components base are obtained. As a result, there is always a formal connection between the physical components and the effects on the signals of the data processing algorithm. The following example illustrates this property.

Example 5.2. Figure 5.1 shows two filters with transfer functions H_1(f) and H_2(f) for the differential signals in a typical receiver architecture with an I- and Q-path. Assume the common-mode transfer functions for the two filters are G_1(f) and G_2(f) respectively and that the transfers from common to differential mode are small (i.e., the CMRR is large). Application of (5.30) with base transformation matrix (5.17) results in the polyphase transfer matrix referred to the symmetrical components:


H(f) = \frac{1}{2}
\begin{bmatrix}
G_1(f)+G_2(f) & 0 & G_1(f)-G_2(f) & 0 \\
0 & H_1(f)+H_2(f) & 0 & H_1(f)-H_2(f) \\
G_1(f)-G_2(f) & 0 & G_1(f)+G_2(f) & 0 \\
0 & H_1(f)-H_2(f) & 0 & H_1(f)+H_2(f)
\end{bmatrix}. \qquad (5.31)

The odd and even symmetrical components remain strictly separated if the rejection ratios are large: for example, the second component results only in signals in the second and fourth phases whereas the transfer towards phases 1 and 3 is zero, as indicated by the elements in the second column. Furthermore, if H_1(f) and H_2(f) are exactly the same, the positive and negative sequences are also separated and hence the only non-zero elements are found on the main diagonal, which is the desired signal operation. In other words, a mismatch between the two filter transfer functions, expressed using the familiar common-/differential-mode base, is translated into a leakage from the positive towards the negative sequence or vice versa. Hence, the information signal, which is present in only one of these sequences, is deteriorated. 

In addition to polyphase filters consisting of a combination of classical filters like that in Example 5.2, there are some special types of polyphase filters frequently encountered in front-end architectures:

Phase-converter. An increase or decrease of the number of phases of the polyphase signal is easily represented by a polyphase filter. For example, a commonly used conversion is from the differential mode of a 2-phase signal to the positive or negative sequence of a 4-phase signal or vice versa. Figure 5.3 depicts two filters that implement such a phase conversion within a certain bandwidth. With equation (5.30) the operation of these structures on the information signal can be derived.
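The separation properties of matrix (5.31) in Example 5.2 can also be verified numerically. The sketch below (with arbitrarily chosen gain values, not from the book) checks that matched filters leave only the main diagonal, while a mismatch between H_1 and H_2 leaks the positive sequence into the negative one:

```python
import numpy as np

def H_sym(G1, G2, H1, H2):
    # Eq. (5.31): polyphase transfer matrix with respect to the symmetrical
    # components, evaluated at one frequency point.
    return 0.5 * np.array([
        [G1 + G2, 0.0,     G1 - G2, 0.0    ],
        [0.0,     H1 + H2, 0.0,     H1 - H2],
        [G1 - G2, 0.0,     G1 + G2, 0.0    ],
        [0.0,     H1 - H2, 0.0,     H1 + H2],
    ])

# Perfectly matched filters: only the main diagonal survives, so the
# positive and negative sequences stay strictly separated.
Hm = H_sym(0.1, 0.1, 1.0, 1.0)
assert np.allclose(Hm, np.diag(np.diag(Hm)))

# A 1 % mismatch between H1 and H2 leaks the positive sequence (column 2)
# into the negative sequence (row 4): the (4,2) element becomes (H1-H2)/2.
Hx = H_sym(0.1, 0.1, 1.0, 0.99)
assert np.isclose(Hx[3, 1], 0.005)
```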

Fig. 5.3. Examples of phase-converters converting the differential mode to the positive sequence and vice versa: (a) a 4-to-2 phase-converter; (b) a 2-to-4 phase-converter.

Fig. 5.4. Example of a 4-dimensional symmetrical polyphase filter.

Symmetrical polyphase filter. In a symmetrical polyphase filter¹ a symmetrical polyphase signal characterized by a constant phase difference Δφ is mapped onto a symmetrical polyphase signal with the same phase difference Δφ. As a result, the polyphase transfer matrix described with the symmetrical components base is a diagonal matrix. Such a filter is easily described using a single-phases base. Indeed, application of (5.30) shows that the polyphase transfer matrix with respect to a single-phases base should be a circulant matrix, which results in a symmetrical structure as shown in Fig. 5.4. Furthermore, it can be shown that the transfer functions for the symmetrical polyphase signals obey the following rule:

H_{N-k+2}(j2\pi f) = H_k(-j2\pi f), \qquad (5.32)

where Δφ equals \frac{2\pi}{N}(k-1) for H_k(f) with k = 1, \ldots, N. Consequently, a symmetrical polyphase filter is characterized by ⌈N/2⌉ independent filter functions for the symmetrical components. H_1(j2\pi f) and H_{N/2+1}(j2\pi f) are real functions in j2\pi f whereas the others may be complex. The conversion to single-phases bases offers a possibility to find a realization for a symmetrical polyphase filter with specific characteristics for symmetrical polyphase signals.

Multi-phase sampled-data filters can also be represented using the formalism developed in this work. The next example shows that such a system can be considered as a polyphase filter where each real output signal is divided into different logical phases corresponding to the phases of the clock signal.

Example 5.3. In Fig. 5.5, a small example is given of a sampled-data filter with two clock phases (φ and φ̄, both with duty cycle 1/2). The input is a normal 2-phase signal

¹ These filters are usually termed just 'polyphase filter' or 'complex filter'.

Fig. 5.5. Example of a simple sampled RC-lowpass filter [45].

whereas the output signals are each split into two logical phases according to (5.20) (p. 157). Such a system can be described using the frequency analysis techniques of [59, 45]. The polyphase transfer matrix is then easily derived:

H(f) = \begin{bmatrix} H_1(f) & 0 \\ H_2(f) & 0 \\ 0 & H_1(f) \\ 0 & H_2(f) \end{bmatrix}, \qquad (5.33a)

with

H_1(f) = \frac{1}{2}\,\frac{\sin\nu}{\nu}\, e^{j\nu}\, \frac{C_1}{C_1 + C_2 - C_2\, e^{-j4\nu}} + \frac{1}{2}\,\frac{C_1}{C_1 + C_2} \left( 1 - \frac{\sin\nu}{\nu}\, e^{j\nu} \right), \qquad (5.33b)

H_2(f) = \frac{1}{2}\,\frac{\sin\nu}{\nu}\, e^{-j\nu}\, \frac{C_1}{C_1 + C_2 - C_2\, e^{-j4\nu}}, \qquad (5.33c)

where ν = πf T/2 and T denotes the time period. It is assumed that this sampling period is chosen according to Nyquist's sampling theorem [40]; otherwise, extra terms arise to express the aliasing effects. An example of these effects is discussed later in Section 5.4.3. Note that the total output signals in the two physical phases of the system are calculated as y_1 + y_2 and y_3 + y_4. 

The transfer functions of the polyphase transfer matrices are also converted to low-pass equivalents [23]:

H^L_{q,p}(f) = H_{q,p}(f + f_0) \cdot u(f + f_0), \qquad (5.34)

which differs from the equivalent low-pass transformation (5.5) for signals by a factor of two. As a result, the characteristic equation for polyphase filters can be written similarly to (5.29):

Y^L(f) = H^L(f) \cdot X^L(f). \qquad (5.35)
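A small numeric sketch (toy spectra and carrier, not from the book) confirms that the convention (5.34) for transfer functions and the factor-two convention (5.5) for signals combine consistently in (5.35):

```python
import numpy as np

f0 = 1.0                                  # assumed carrier frequency
f = np.linspace(-0.4, 0.4, 81)            # baseband frequency grid
u = lambda x: (x >= 0).astype(float)      # unit step

H = lambda fr: 1.0 / (1.0 + 1j * fr)      # toy bandpass transfer function
X = lambda fr: np.exp(-fr ** 2) * u(fr)   # toy one-sided input spectrum

# Low-pass equivalents: signals carry a factor of two (5.5), transfers do
# not (5.34), so that Y^L = H^L X^L matches 2 Y(f + f0) u(f + f0).
XL = 2.0 * X(f + f0) * u(f + f0)
HL = H(f + f0) * u(f + f0)
YL = HL * XL                              # Eq. (5.35), per frequency point

assert np.allclose(YL, 2.0 * H(f + f0) * X(f + f0) * u(f + f0))
```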


Polyphase harmonic filters

By taking into account the multi-carrier composition of the signals in a front-end architecture, the polyphase filter can be considered as a mapping operation between phase-frequency planes. This is a generalization of the polyphase filters described in the previous section. The Polyphase Harmonic Signal (PHS) x̃(t), described in the phase-frequency plane at the input as shown in Fig. 5.2, is converted to the signal ỹ(t) represented in the second plane at the output. Linear behavior of such building blocks is represented in the new framework by Polyphase Harmonic Transfer Matrices (PHTMs).

Definition 5.4 (PHTM—Polyphase Harmonic Transfer Matrix). The Polyphase Harmonic Transfer Matrix (PHTM) H̃^L(f) with polyphase bases (E_in, E_out) of orders (N_in, N_out) and frequency sets (F_in, F_out) of dimensions (A_in, A_out) is an A_out N_out × A_in N_in matrix where the matrix element

H^L_{i,j}(f) = H^{0,L}_{i,j}(f)\, C_0 + H^{1,L}_{i,j}(f)\, C_1 \qquad (5.36)

represents the total phase-frequency transfer from the signal component in phase p at frequency k to phase q at frequency l given by (5.21b), and with operator C_c:

C_c\{S(f)\} = \begin{cases} S(f), & c = 0, \\ S(-f), & c = 1. \end{cases} \qquad (5.37)

The operator C_c takes into account transfers from negative fundamental frequencies. Since in real signals the information at negative frequencies is the same as that at the positive frequencies, the operation on these components is made a part of the description of the building block instead of extending the signal model. Such an extension would result in a model that contains the same information twice and would require matrices with four times the number of elements of PHTMs, containing twice as many transfer functions. This modeling principle simplifies the analysis of the flow of the information signal throughout the architecture. From Definition 5.4, it follows that the linearized input–output relation of a polyphase harmonic filter is described by a matrix multiplication:

\tilde{Y}^L(f) = \tilde{H}^L(f) \cdot \tilde{X}^L(f). \qquad (5.38)

The ambiguous character of polyphase harmonic signals given by (5.25) allows a PHTM to be interpreted either as a set of polyphase filters between polyphase signals at different fundamental frequencies, or as transfers of multi-carrier signals between different phases. A schematic representation of this duality property of PHTMs is shown in Fig. 5.6. One of the two views is typically used to derive the PHTM for a building block. For example, the multi-carrier character is emphasized in frequency-conversion blocks like mixers, whereas the polyphase behavior is the major view for time-invariant filters.

Fig. 5.6. Graphical representation of the Polyphase Harmonic Transfer Matrix H̃^L(f): the transfers between the phase-frequency planes formally given by the elements H^L_{i,j}(f), and the interpretation as a collection of polyphase filters H^L_{k,l}(f) or as filters of multi-carrier signals H̃^L_{p,q}(f).

Example 5.4. Analyzing an N_out × N_in LTI polyphase filter in a front-end architecture starts by deriving its characteristic polyphase transfer matrix H(f), elaborated in the previous subsection. The selection of a set of fundamental frequencies F = {f_1, ..., f_A} is guided by the properties of the input signal. The output frequencies are usually the same; otherwise the Polyphase Harmonic Transfer Matrix (PHTM) is followed by a re-arrangement of the signal components over the new frequencies described by (5.27) (p. 160). The PHTM characterizing the polyphase filter is then easily found:

\tilde{H}^L(f) = \mathrm{diag}\bigl( H(f+f_1)\, u(f+f_1)\, C_0, \;\ldots,\; H(f+f_A)\, u(f+f_A)\, C_0 \bigr), \qquad (5.39)

where 'diag' generates a block diagonal matrix. The sparse structure of the PHTM can be exploited for efficient simulation of the generic behavioral model, while the general matrix notation is used for analysis purposes since it allows the phase-frequency transfers to be derived easily. 

Example 5.5. A linear periodic time-variant system, like a mixer, with period 1/f_0 may be represented by an HTM [57]. All elements of the HTM H̆(f) are given by

\breve{H}_{n,m}(f) = H_{n-m}(f + m f_0), \qquad (5.40)

with H_k(f) = F\{h_k(\tau)\} and where the functions h_k(\tau) are the Fourier coefficients of the characteristic kernel h(t, \tau) of the system, which is periodic in t. Both the input and output axes contain frequencies that are a multiple of the fundamental frequency f_0. Suppose the frequency sets are F_in = {k f_0 | 0 ≤ k ≤ A} and F_out = {k f_0 | 0 ≤ k ≤ B}. The PHTM of the single-phase signal corresponding to the HTM (5.40) is then found to be equal to

\tilde{H}^L(f) = \begin{bmatrix} \breve{H}_{0,0}(f)\, C_0 & u(f)\, \breve{H}_{0,1:A}(f)\, C_0 + u(f)\, \breve{H}_{0,-1:-A}(f)\, C_1 \\ \breve{H}_{1:B,0}(f)\, [C_0 + C_1] & \breve{H}_{1:B,1:A}(f)\, C_0 + \breve{H}_{1:B,-1:-A}(f)\, C_1 \end{bmatrix}, \qquad (5.41)

where \breve{H}_{a:b,c:d}(f) denotes the submatrix of the HTM \breve{H}(f) with all elements \breve{H}_{m,n}(f) where a ≤ n ≤ b and c ≤ m ≤ d. Note that the submatrix \breve{H}_{-B:-1,-A:A}(f) of the HTM does not appear in the PHTM.
The factors u(f) are the result of the conversion of the linear periodically modulated signals used in [57] to the polyphase harmonic signals used in this book. 

The conversion to new polyphase bases for a PHTM is mathematically translated into the following transformation of the PHTM:

\tilde{H}'(f) = \left[ I_{A_{out}} \otimes B_{N_{out}}^{-1} \right] \cdot \tilde{H}(f) \cdot \left[ I_{A_{in}} \otimes B_{N_{in}} \right], \qquad (5.42)

which is easily derived from (5.28) and (5.30).
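The base-conversion rule (5.42) can be checked against the signal transformation (5.28) with random matrices. The sketch below (toy dimensions, unitary DFT matrices standing in for the book's base matrices) verifies that filtering the transformed input with the converted PHTM equals transforming the output of the original PHTM:

```python
import numpy as np

A, Nin, Nout = 2, 4, 2                    # toy dimensions
rng = np.random.default_rng(5)

def dft(N):                               # unitary DFT as stand-in base matrix
    n = np.arange(N)
    return np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

H = rng.standard_normal((A * Nout, A * Nin)) \
    + 1j * rng.standard_normal((A * Nout, A * Nin))
X = rng.standard_normal(A * Nin) + 1j * rng.standard_normal(A * Nin)
Bin, Bout = dft(Nin), dft(Nout)

# Eq. (5.42): the PHTM with respect to the new input and output bases.
Hp = np.kron(np.eye(A), np.linalg.inv(Bout)) @ H @ np.kron(np.eye(A), Bin)

# Consistency with (5.28): transform the input, filter with Hp, and compare
# with the transformed output of the original PHTM.
Xp = np.kron(np.eye(A), np.linalg.inv(Bin)) @ X
assert np.allclose(Hp @ Xp, np.kron(np.eye(A), np.linalg.inv(Bout)) @ (H @ X))
```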


A conversion of the frequency axis cannot be represented by a simple matrix multiplication, as can be derived from (5.27). Instead, a re-arrangement of the signal components over the new frequency axis (described by (5.27) on p. 160) should be applied in the signal flow before the building block represented by the PHTM. The frequency axis at the output of a building block is determined by the fundamental frequencies at the input and the operating principle of the block, so the axes cannot be chosen randomly. For example, a down-conversion operation implies a shift of the frequency axis whereas a sampler truncates the output frequency axis. For power signals given by (5.26), the components of the output signal can be calculated based on the power densities of the components of the input signal and their correlation. For noise power input signals, the input–output relation for systems represented by a PHTM is elaborated in Section 5.3.4 later on.

Manipulation of Polyphase Harmonic Transfer Matrices

To analyze the linear behavior of a front-end architecture, each building block is represented by a Polyphase Harmonic Transfer Matrix (PHTM). The operation of the entire architecture is then obtained by combining the individual PHTMs into one global matrix. Both numerical and symbolical expressions can be derived [33]. Three elementary configurations encountered in architectures are shown in Fig. 5.7. Parallel and series (cascade) connections correspond to sum and multiplication of PHTMs respectively, similar to normal matrix operations. The sum of elements of the form (5.36), corresponding to Fig. 5.7a, is defined as follows:

H^L_{i,j}(f) + G^L_{i,j}(f) = \left[ H^{0,L}_{i,j}(f) + G^{0,L}_{i,j}(f) \right] C_0 + \left[ H^{1,L}_{i,j}(f) + G^{1,L}_{i,j}(f) \right] C_1. \qquad (5.43)

Fig. 5.7. Three elementary connection types for PHTMs: (a) parallel; (b) series; (c) feedback. An RF front-end architecture can be described as a set of elementary connections between the PHTMs of the building blocks.


Their product, used in the configuration of Fig. 5.7b, is calculated according to the following expression:

H^L_{i,j}(f)\, G^L_{i,j}(f) = \left[ H^{0,L}_{i,j}(f)\, G^{0,L}_{i,j}(f) + H^{1,L}_{i,j}(f)\, G^{1,L}_{i,j}(-f) \right] C_0 + \left[ H^{0,L}_{i,j}(f)\, G^{1,L}_{i,j}(f) + H^{1,L}_{i,j}(f)\, G^{0,L}_{i,j}(-f) \right] C_1. \qquad (5.44)
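The sum rule (5.43) and product rule (5.44) can be verified numerically by representing each PHTM element H(f) = H0(f) C0 + H1(f) C1 as a pair of callables. The following sketch (illustrative toy functions, not from the book) checks that the product rule matches direct composition:

```python
import numpy as np

# A PHTM element acts on a spectrum S as H0(f) S(f) + H1(f) S(-f).
def apply(elem, S, f):
    H0, H1 = elem
    return H0(f) * S(f) + H1(f) * S(-f)

def add(a, b):                            # Eq. (5.43)
    return (lambda f: a[0](f) + b[0](f),
            lambda f: a[1](f) + b[1](f))

def mul(h, g):                            # Eq. (5.44): h applied after g
    return (lambda f: h[0](f) * g[0](f) + h[1](f) * g[1](-f),
            lambda f: h[0](f) * g[1](f) + h[1](f) * g[0](-f))

# Toy elements and a toy (asymmetric) spectrum.
h = (lambda f: 1.0 / (1 + 1j * f), lambda f: 0.1 * np.exp(1j * f))
g = (lambda f: np.exp(-1j * f),    lambda f: 0.2 / (1 + f ** 2))
S = lambda f: np.exp(-(f - 0.3) ** 2)

# The composition rule reproduces applying g first and h second.
for fpt in (-1.0, 0.0, 0.7):
    direct = apply(h, lambda x: apply(g, S, x), fpt)
    assert np.isclose(apply(mul(h, g), S, fpt), direct)
```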

To calculate the global PHTM for a feedback connection, shown in Fig. 5.7c, the general equation for a feedback configuration should be evaluated [51]:

\left[ I_{AN} + \tilde{G}(f) \right]^{-1} \tilde{H}(f). \qquad (5.45)
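Both the direct evaluation of (5.45) and the frequency-relaxation idea can be sketched with ordinary matrices (random values stand in for PHTMs at one frequency point; the weak feedback gain is an assumption needed for convergence):

```python
import numpy as np

n = 6
rng = np.random.default_rng(2)
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = 0.05 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
x = rng.standard_normal(n)

# Direct evaluation of (5.45): y = (I + G)^-1 H x.
y_direct = np.linalg.solve(np.eye(n) + G, H @ x)

# Relaxation: start from the dominant feedforward response and repeatedly
# update the feedback contribution; a weak (parasitic) path converges fast.
y = H @ x
for _ in range(50):
    y = H @ x - G @ y
assert np.allclose(y, y_direct)
```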

This expression can be evaluated numerically, at the expense of considerable computation time. Since front-end architectures are usually characterized by a dominant feedforward path, a more efficient approach is the application of frequency relaxation techniques to calculate the total response [67]. A signal with a unity component is applied at the input and the dominant output signal is derived. Then the feedback signal is determined, after which the output signal is updated. For a parasitic feedback path, convergence should be achieved. From the output signals corresponding to the different input signals, the total PHTM can easily be derived. To find a symbolic expression for the global PHTM of a feedback connection, classical symbolic circuit analysis can be applied to obtain the matrix inverse [16]. Alternatively, PHTMs with a dominant block diagonal, corresponding to mainly LTI behavior, can be approximated by a series development similar to the technique elaborated in [56] for harmonic transfer matrices.

Graphical representation of Polyphase Harmonic Transfer Matrices

To make the model for the linear behavior of a building block easier to interpret, the corresponding Polyphase Harmonic Transfer Matrix (PHTM) can be translated into a graphical representation. The major phase-frequency transfers can be derived directly from a matrix box plot [34]. Figure 5.8 depicts an example matrix box plot with an indication of the different properties. The elements of the frequency sets F_in and F_out of the PHTM are put on the horizontal and vertical axes respectively. The different boxes and their level of darkness in a matrix box plot indicate several properties of the PHTM:

Frequency transfers. The area of the big block at a point (f_k, f_l) of the graph indicates the strength of the total transfer from f_l to f_k over all phases. The strength of a transfer is calculated as the mean power gain associated with the transfer function in the 3 dB-bandwidth.

Fig. 5.8. Representation of a PHTM as a matrix box plot.

Frequency contributions. The width of the block at a point (f_k, f_l) is a measure for the relative contribution of the transfer from f_l to f_k in the total transfer from all input frequencies to f_k. As a result, the largest contribution at a particular frequency f_k originates from the block with the largest width.

Phase transfers. The transfers between phases for a transfer between two particular frequencies are indicated by the subdivisions within the large block: the darker the subblock, the larger the transfer between the corresponding phases. The input phases are numbered from left to right and the output phases from bottom to top, as indicated in Fig. 5.8.

The box plot only represents the PHTM, regardless of the actual input signal of the filter. Hence, to obtain a representation of the different components of the output signal, all elements of the PHTM should be multiplied with the corresponding component of the input signal. A box plot is then drawn of the newly created matrix.

Example 5.6. An example of a matrix box plot is shown in Fig. 5.9 for the filter depicted in Fig. 5.3a, which transforms a 4-phase signal into a 2-phase signal. Consequently, each large box is divided into 2 × 4 subblocks. The plot in Fig. 5.9a corresponds to the PHTM derived with respect to a single-phases polyphase base (5.11), emphasizing the structure of the circuit. The graphical representation of the PHTM with respect to the base of symmetrical components (5.13) is shown in Fig. 5.9b and makes it possible to identify the main information flows throughout the filter. The positions of the boxes give information about the type of filter: from the plot in Fig. 5.9a, it can be derived that the building block is time-invariant since

Fig. 5.9. The graphical representation of the PHTM of the 4-to-2 converter of Fig. 5.3a for different sets of polyphase base vectors: (a) single-phases base; (b) symmetrical components base.

there are only blocks on the diagonal and the input and output frequency axes are equal. Further, the transfers decrease with increasing frequency. By converting the PHTM to a polyphase base with symmetrical components (Fig. 5.9b), the operation of the filter at the central frequency can be found: there is a transfer from the positive sequence (input phase 2) to the differential mode (output phase 2) whereas the negative sequence (input phase 4) is filtered out. At other frequencies, however, there is a leakage, indicated by the light gray color of the small boxes corresponding to the transfers from phase 4 to phase 2, like the one from f_1 to f_1 indicated in Fig. 5.9b. Hence, the desired transfer from the positive sequence to the differential output mode is only perfect at f_3 whereas, at other frequencies, the linear distortion of the filters results in a mismatch and consequent leakage. 

5.3.4 Noise signals in the Phase-Frequency Transfer model

Noise in a front-end architecture is one of the major limitations on the performance. To analyze its effects, the transfers of a set of noise signals towards the output of a system should be calculated. In the Phase-Frequency Transfer model, the (linearized) system is represented by a Polyphase Harmonic Transfer Matrix where each element describes a transfer of a component of the input signal, in a particular phase and placed around a fundamental frequency, towards a component of the output signal, possibly in another phase or centered around another frequency. A compatible representation of the frequency-domain properties of the noise signals is the Polyphase Harmonic Power Spectral Density Matrix. First, the mathematical foundations for this representation developed in this work are introduced. The new data structure is then used to derive the characteristics of the output signal of a linear system with stochastic inputs like noise.


Noise representation

The noise signals occurring in an analog system can be represented by a set of N_in equivalent input noise signals n_i(t). These signals are gathered in the polyphase noise signal n(t). For wide-sense stationary noise processes with zero mean, the stochastic characteristics of n(t) are captured in the correlation matrix R_n(τ) containing the auto- and cross-correlation functions:

R_{n_p n_q}(\tau) = E\{ n_p(t+\tau)\, n_q(t) \}, \qquad (5.46)

where E{·} denotes the expectation operator [39]. Usually the properties are specified in the frequency domain with the Power Spectral Density Matrix (PSDM) S_n(f) = F\{R_n(\tau)\}, e.g., to easily represent white or 1/f-noise. The stochastic properties of a polyphase harmonic signal can be represented with similar matrices.

Definition 5.5 (PHCM—Polyphase Harmonic Correlation Matrix). A PHCM R̃^L_n(t+τ, t) of an N-phase signal n(t) with a frequency set F = {f_1, ..., f_A} is an AN × AN matrix where each element is a correlation function between two elements of the Polyphase Harmonic Signal Ñ^L(t) associated with n(t):

\left[ \tilde{R}^L_n(t+\tau, t) \right]_{i,j} = R_{n^L_i n^L_j}(t+\tau, t) = E\{ n^L_i(t+\tau)\, \overline{n^L_j(t)} \}, \qquad (5.47)

with n^L_i(t) and n^L_j(t) the components of Ñ^L(t) corresponding to phases p and q and frequencies k and l respectively, as defined by (5.21b).

Definition 5.6 (PHPSDM—Polyphase Harmonic Power Spectral Density Matrix). A PHPSDM S̃^L_n(f, t) of a polyphase signal is the Fourier transform of the PHCM with respect to τ:

\tilde{S}^L_n(f, t) = F_\tau\{ \tilde{R}^L_n(t+\tau, t) \}. \qquad (5.48)

Both definitions are applicable to stationary as well as non-stationary stochastic processes. It is important to stress that non-stationary cross-correlations between different components at different frequencies may result from a stationary PSDM. A conversion from the PSDM to the representation in the Phase-Frequency Transfer model (PFT model) is easily performed by defining block filters W_k(f) around the fundamental frequencies gathered in the frequency set F = {f_1, ..., f_A}, similar to those used in (5.27). The Polyphase Harmonic Power Spectral Density Matrix is then calculated as follows:

\tilde{S}^L_n(f, t) = \mathrm{diag}\left( |W'_1(f+f_1)|^2\, S_n(f+f_1), \;\ldots,\; |W'_A(f+f_A)|^2\, S_n(f+f_A) \right), \qquad (5.49)


with W'_k(f) = 2 W_k(f) u(f). The factor of two takes into account that only positive frequencies are present in F. The non-overlapping block filters remove the cross-correlation between components at different fundamental frequencies, which results in a stationary PHPSDM (independent of the time variable t). Note that these filters are not unique, but they can easily be defined based on the ideal signal flow [34].

The inverse conversion, from a general PHPSDM S̃^L_n(f, t) in the PFT model to a PSDM S_n(f, t), comes down to an evaluation of the matrix elements

S_{n_p n_q}(f, t) = \frac{1}{4} \sum_{k,l=1}^{A} \left[ S^L_{n^L_i n^L_j}(f - f_k, t)\, e^{j2\pi(f_k - f_l)t} + S^L_{n^L_j n^L_i}(-f - f_k, t)\, e^{-j2\pi(f_k - f_l)t} \right], \qquad (5.50)

with i = p + (k-1)N and j = q + (l-1)N, for all phases p and q. In the case of uncorrelated components at different frequencies (as in (5.49)), the sum should only be taken over all frequencies f_k = f_l. As a result, a stationary PHPSDM is converted into a stationary PSDM. In general, however, the exponential functions in (5.50) translate a stationary PHPSDM into non-stationary power spectral density functions. A transformation of the polyphase base E may be performed on the noise signal n(t), for instance, to analyze the effect of the noise on the symmetrical components carrying the information signal. The new representation is found by application of (5.19) or (5.28). The PHPSDM of the converted signal is also calculated using the base transformation matrix B_N:

\tilde{S}^{L\prime}_n(f) = \left[ I_A \otimes B_N^{-1} \right] \cdot \tilde{S}^L_n(f) \cdot \left[ I_A \otimes B_N^{-1} \right]^\dagger, \qquad (5.51)

where the superscript † denotes the conjugate transpose. The following example illustrates this property.
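Equation (5.51) can be exercised numerically. The sketch below (toy sizes, with a unitary DFT matrix standing in for the book's B_N of (5.14)) checks that the transformed matrix remains a valid Hermitian PHPSDM and that a unitary base change preserves the total noise power:

```python
import numpy as np

A, N = 2, 4
rng = np.random.default_rng(3)

# A stationary block-diagonal PHPSDM at one frequency point: each block is
# Hermitian positive semi-definite (built as M M^dagger).
S = np.zeros((A * N, A * N), dtype=complex)
for k in range(A):
    M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    S[k * N:(k + 1) * N, k * N:(k + 1) * N] = M @ M.conj().T

# Unitary base transformation (a stand-in for the book's B_N).
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
T = np.kron(np.eye(A), np.linalg.inv(B))

S_new = T @ S @ T.conj().T                # Eq. (5.51)

# Still Hermitian, and the total noise power (trace) is unchanged because
# the chosen base matrix is unitary.
assert np.allclose(S_new, S_new.conj().T)
assert np.isclose(np.trace(S_new).real, np.trace(S).real)
```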

Example 5.7. A four-phase polyphase noise signal may be characterized by two uncorrelated, differential-mode white noise processes. The Power Spectral Density Matrix (PSDM) S_n(f) contains all frequency-domain information of the noise signals and is easily defined with respect to the polyphase base with common-/differential-mode vectors (5.12). S_n(f) then corresponds to stationary noise processes where all elements are zero except for

S_{n_2 n_2}(f) = N_1 \quad \text{and} \quad S_{n_4 n_4}(f) = N_2, \qquad (5.52)

which indicate the white noise levels of the two noise signals. The Polyphase Harmonic Power Spectral Density Matrix (PHPSDM) representation in the PFT model that is associated with S_n(f) becomes a block-diagonal matrix:


\tilde{S}^L_n(f) = \mathrm{diag}_k \left( \begin{bmatrix} N_1 & 0 \\ 0 & N_2 \end{bmatrix} \otimes \begin{bmatrix} 0 & 0 \\ 0 & |W'_k(f+f_k)|^2 \end{bmatrix} \right), \qquad (5.53)

with diag_k(a_k) = diag(a_1, ..., a_{k_{max}}). The influence on the information signals is obtained by converting the polyphase base towards symmetrical components by application of (5.51) with base transformation matrix (5.17):

\tilde{S}^{L\prime}_n(f) = \mathrm{diag}_k \left( \frac{1}{4} \begin{bmatrix} N_1 + N_2 & N_1 - N_2 \\ N_1 - N_2 & N_1 + N_2 \end{bmatrix} \otimes \begin{bmatrix} 0 & 0 \\ 0 & |W'_k(f+f_k)|^2 \end{bmatrix} \right). \qquad (5.54)

As a result, the correlation between the noise on the positive and negative sequences (with one of them carrying the information signal besides the noise) becomes zero if the two white noise processes have the same stochastic characteristics. 

Filtering noise signals

Analyzing the noise in an architecture first requires the calculation of equivalent input noise signals, which can be translated into a Polyphase Harmonic Power Spectral Density Matrix. Then, a linear approximation is made of the (sub)system, expressed as a Polyphase Harmonic Transfer Matrix. The manipulations of Section 5.3.3 can be used to calculate the Polyphase Harmonic Transfer Matrix (PHTM). Finally, the characteristics of the noise signals at the output can be derived by application of Theorem 5.1.

Theorem 5.1 (Polyphase Harmonic Transfer Matrix noise filtering). A system represented by the Polyphase Harmonic Transfer Matrix

\tilde{H}^L(f) = \tilde{H}^{0,L}(f)\, C_0 + \tilde{H}^{1,L}(f)\, C_1 \qquad (5.55)

maps a real wide-sense stationary noise signal n(t), represented by the stationary Polyphase Harmonic Power Spectral Density Matrix (PHPSDM) S̃^L_n(f), onto a signal y(t) characterized by the stationary PHPSDM S̃^L_y(f) given by

\tilde{S}^L_y(f) = \tilde{H}^{0,L}(f)\, \tilde{S}^L_n(f)\, \tilde{H}^{0,L}(f)^\dagger + \tilde{H}^{1,L}(f)\, \tilde{S}^L_n(-f)\, \tilde{H}^{1,L}(f)^\dagger. \qquad (5.56)

Proof. The calculation of each component of the output signal of the linear system in the time domain can be described by the equation

y^L_i(t) = \sum_{k=1}^{A_{in} N_{in}} \left[ h^{0,L}_{i,k}(t) * n^L_k(t) + h^{1,L}_{i,k}(t) * \overline{n^L_k(t)} \right], \qquad (5.57)

where ‘∗’ denotes the convolution operation. This equation must be substituted into (5.47) to obtain the cross-correlation between components of the output signal. Plugging in the definition of the convolution, taking into account the stationary character of the input noise process, and simplifying, results in an expression for the cross-correlation function:

$$\begin{aligned} R_{y_i^L y_j^L}(t+\tau, t) = \sum_{k,l=1}^{A_{in}N_{in}} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \Bigl[\; & h_{i,k}^{0,L}(t+\tau-u)\, R_{n_k^L n_l^L}(u-v)\, \overline{h_{j,l}^{0,L}(t-v)} \\ {}+{} & h_{i,k}^{0,L}(t+\tau-u)\, R_{n_k^L \overline{n_l^L}}(u-v)\, \overline{h_{j,l}^{1,L}(t-v)} \\ {}+{} & h_{i,k}^{1,L}(t+\tau-u)\, R_{\overline{n_k^L}\, n_l^L}(u-v)\, \overline{h_{j,l}^{0,L}(t-v)} \\ {}+{} & h_{i,k}^{1,L}(t+\tau-u)\, R_{\overline{n_k^L}\,\overline{n_l^L}}(u-v)\, \overline{h_{j,l}^{1,L}(t-v)} \;\Bigr]\, du\, dv. \end{aligned} \tag{5.58}$$

It can be shown that all correlation functions in the second and third terms are zero due to the definition of the equivalent low-pass transformation (5.5), which filters out all negative frequency components. Evaluation of the remaining integrals results in a stationary cross-correlation function:

$$R_{y_i^L y_j^L}(\tau) = \sum_{k,l=1}^{A_{in}N_{in}} \Bigl( h_{i,k}^{0,L}(\tau) * R_{n_k^L n_l^L}(\tau) * \overline{h_{j,l}^{0,L}(-\tau)} + h_{i,k}^{1,L}(\tau) * R_{\overline{n_k^L}\,\overline{n_l^L}}(\tau) * \overline{h_{j,l}^{1,L}(-\tau)} \Bigr). \tag{5.59}$$

Finally, application of the definition of the PHPSDM implies the use of the Fourier transform and results in the expression for the frequency-domain input–output relationship (5.56). ⊓⊔

Theorem 5.1 is only applicable to stationary input noise signals characterized by a block-diagonal PHPSDM; otherwise, extra terms arise in (5.58). A more general expression involves the calculation of a second PHPSDM indicating the correlation between a normal signal component and a complex conjugate. The corresponding terms in (5.58) do not vanish, which results in a more complicated expression than the simplified one (5.56). In the following example, the theorem is applied to calculate the noise characteristics at the output of a down-conversion architecture.

Example 5.8. The input polyphase noise signal of the down-conversion operation depicted in Fig. 5.10 is given by the Polyphase Harmonic Power Spectral Density Matrix (PHPSDM) (5.53) derived in Example 5.7. The choice of the frequency set depends on the use of the building block. For example, if the down-conversion architecture is used for the conversion of a signal at 5f_d to 2f_d for some frequency f_d, using an oscillator signal with amplitude V and frequency f_osc = 3f_d, then the set F = {k f_d | k = 0, …, A} can be used, with A determined by the maximal frequency in the system. Block filters around the fundamental frequencies f_k ∈ F with bandwidth f_d are used. The Polyphase Harmonic Transfer Matrix (PHTM) of the mixer set is readily found with respect to the base of common-/differential-mode signals via equation (5.41): it is a block matrix with polyphase filters defined according to

Fig. 5.10. Example of a down-conversion structure with 4-phase input and output signals. A polyphase noise signal n (t) is used as input x (t). All operations are defined in differential mode.

$$\tilde{M}^L_{k,l}(f) = \begin{cases} \frac{V}{2}\, u(f)\, \operatorname{diag}\bigl(0,\, [C_0+C_1],\, 0,\, j[C_0-C_1]\bigr), & (k,l) = (0,3),\\ \frac{V}{2}\, \operatorname{diag}\bigl(0,\, [C_0+C_1],\, 0,\, -j[C_0+C_1]\bigr), & (k,l) = (3,0),\\ \frac{V}{2}\, \operatorname{diag}\bigl(0,\, C_0,\, 0,\, \mp j\, C_0\bigr), & k = l \pm 3,\\ \frac{V}{2}\, \operatorname{diag}\bigl(0,\, C_1,\, 0,\, -j\, C_1\bigr), & k = l+1 < 3,\\ 0 \cdot I_4, & \text{otherwise}, \end{cases} \tag{5.60}$$

for k, l = 0, …, A. The total input–output PHTM is the series connection of three PHTMs calculated using (5.44):

$$\tilde{H}^L_{tot}(f) = \tilde{H}^L_2(f) \cdot \tilde{M}^L(f) \cdot \tilde{H}^L_1(f), \tag{5.61}$$

with $\tilde{H}^L_1(f)$ and $\tilde{H}^L_2(f)$ defined by (5.39). This PHTM is used in expression (5.56) to calculate the output PHPSDM $\tilde{S}^L(f)$ for an input given by (5.53). The final result contains the stationary components as block matrices on the main diagonal:

$$\tilde{S}^L_{k,k}(f) = V^2 \Bigl( |H_1(f+f_k-f_3)|^2 + |H_1(f+f_k+f_3)|^2 \Bigr)\, |H_2(f+f_k)|^2 \cdot \operatorname{diag}(0, N_1, 0, N_2) \cdot W_k(f+f_k)\, u(f+f_k), \tag{5.62}$$

for f_k ∈ F. The two terms correspond to up- and down-converted noise components, respectively. Due to the time-variant nature of the down-conversion operation, there are also components with a cyclostationary stochastic behavior with period 1/(2f_osc). This property is reflected in the PHPSDM by non-zero off-diagonal block matrices $\tilde{S}^L_{k,l}(f)$ for |f_k − f_l| = 2f_osc, which are calculated to be:

$$\tilde{S}^L_{k,l}(f) = V^2 \Bigl| H_1\Bigl(f + \frac{f_k+f_l}{2}\Bigr) \Bigr|^2 H_2(f+f_k)\, H_2(f+f_l) \cdot \operatorname{diag}(0, N_1, 0, -N_2) \cdot W_k(f+f_k)\, u(f+f_k). \tag{5.63}$$

These cyclostationary components should be taken into account if the system is followed by another LTI system (e.g., a second down-conversion stage) to find the correct noise values in the different bands. 
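The filtering rule (5.56) is, at each frequency point, a plain matrix congruence that can be evaluated numerically. The sketch below applies it to hypothetical 2×2 toy matrices (the values are not from the text); for a purely linear block (H¹ = 0) the output PSD is simply shaped by |gain|² on the diagonal.

```python
import numpy as np

def filter_phpsdm(H0, H1, Sn, Sn_neg):
    """Theorem 5.1 noise-filtering rule at one frequency point:
    Sy(f) = H0(f) Sn(f) H0(f)^dagger + H1(f) Sn(-f) H1(f)^dagger."""
    return H0 @ Sn @ H0.conj().T + H1 @ Sn_neg @ H1.conj().T

# Toy 2x2 example: a purely linear block (H1 = 0) shaping unit white noise.
H0 = np.array([[0.5, 0.0], [0.0, 2.0]], dtype=complex)
H1 = np.zeros((2, 2), dtype=complex)
Sn = np.eye(2, dtype=complex)          # white-noise PHPSDM at +f and -f
Sy = filter_phpsdm(H0, H1, Sn, Sn)
# Each diagonal PSD entry is scaled by the squared magnitude of the gain,
# and the result stays Hermitian, as a valid PSD matrix must.
```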

5.3.5 Nonlinear behavior of building blocks

A more realistic model of a building block or an entire system is obtained if both linear and nonlinear non-ideal behavior are taken into account. Since in practice most blocks are designed to perform a linear operation, the main emphasis in this subsection is on the addition of weak nonlinearities to the linear Phase-Frequency Transfer model (PFT model). To this end, Polyphase Distortion Tensors (PDTs) are added in parallel with the linear Polyphase Harmonic Transfer Matrices (PHTMs). Similar to a PHTM, a PDT describes the operation of the building block as a set of transfers of components of the input signal in a particular phase and at some frequency towards components of the output signal at different phases or frequencies. After introducing a formal definition of this new data structure, the input–output relation using this PDT is derived and the link with polynomial nonlinear models is made. Finally, techniques are presented to model other types of nonlinearities besides polynomial ones, like strong nonlinearities.

Distortion tensors

When a building block exhibits weakly nonlinear behavior, the different components of the input polyphase harmonic signal x(t) are combined with each other and the result is mapped onto the phase-frequency plane at the output. A commonly used model for weak nonlinearity, corresponding to the case of a memoryless nonlinearity, is a polynomial expression [64] for the output signal y(t):

$$y(t) = \sum_{m=1}^{\infty} K_m\, x(t)^{(m)}, \tag{5.64}$$

where $v^{(m)}$ denotes the mth Hadamard power of v [21]. This expression can be related to commonly used figures of merit for the nonlinear behavior of RF building blocks. For example, the third-order intercept point for single-phase signals is calculated from the first- and third-order distortion coefficients [50]:

$$IP_3 = \sqrt{2\,\Bigl|\frac{K_1}{K_3}\Bigr|}. \tag{5.65}$$

All information about the mapping operation of the combination of components of the input signal onto the output is gathered in a polyphase distortion tensor.
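The memoryless polynomial model (5.64) and the intercept-point relation are straightforward to sketch numerically. The example below uses the form of (5.65) as reconstructed here; note that other texts use $\sqrt{4/3\,|K_1/K_3|}$ depending on the two-tone convention, so the constant should be treated as an assumption. The coefficient values are made up for illustration.

```python
import numpy as np

def poly_nonlinearity(x, K):
    """Memoryless polynomial model y(t) = sum_m K[m-1] * x(t)**m,
    the elementwise (Hadamard-power) form of (5.64) for single-phase signals."""
    return sum(k * x**(m + 1) for m, k in enumerate(K))

def ip3_amplitude(K1, K3):
    """Third-order intercept amplitude IP3 = sqrt(2 |K1/K3|), following the
    reconstructed (5.65); the prefactor is convention-dependent."""
    return np.sqrt(2.0 * abs(K1 / K3))

t = np.linspace(0.0, 1.0, 1000)
x = 0.1 * np.cos(2 * np.pi * 5 * t)
y = poly_nonlinearity(x, [1.0, 0.0, -0.01])   # K1 = 1, K2 = 0, K3 = -0.01
```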


Definition 5.7 (PDT—Polyphase Distortion Tensor). A Polyphase Distortion Tensor (PDT) of order m, $\tilde{D}_m^L$, with polyphase bases (E_in, E_out) of orders (N_in, N_out) and frequency sets (F_in, F_out) of dimensions (A_in, A_out), is a tensor with (m+1) dimensions A_out N_out × A_in N_in × A_in N_in × ··· × A_in N_in which is contravariant in the first variable and covariant in the other ones, and where the tensor element

$$\bigl[\tilde{D}_m^L\bigr]^{i}_{\,j_1,\ldots,j_m} = \sum_{c_1,\ldots,c_m=0}^{1} \tilde{H}^{c_1,\ldots,c_m,L}_{i,\,j_1,\ldots,j_m}\; C_{c_1,\ldots,c_m} \tag{5.66}$$

represents the combined transfer of the signal components in phases p_1, …, p_m at frequencies k_1, …, k_m to phase q at frequency l given by (5.21b), and with operator $C_{c_1,\ldots,c_m}$ a generalization of the operator $C_c$ defined by (5.37):

$$C_{c_1,c_2,\ldots,c_m}\{S_1(f) * S_2(f)\} = C_{c_1}\{S_1(f)\} * C_{c_2,\ldots,c_m}\{S_2(f)\}, \tag{5.67}$$

where the convolution operation is denoted by '∗'.

Figure 5.11 shows the meaning of some elements of a second-order distortion tensor. Each element corresponds to a mapping of a combination of two input components onto the phase-frequency plane at the output. Since several combinations of input components occur multiple times in the distortion tensor, the number of non-zero elements can be reduced drastically. For a polynomial nonlinearity of order m, characterized by a polyphase filter matrix K_m in (5.64), the next theorem indicates which values to choose for the elements of the distortion tensor to obtain a maximum number of zeros. As a result of this selection, the calculation of the input–output relation of the system is simplified.

Theorem 5.2 (Polynomial nonlinearity in PFT model). The frequency-domain signal components of the output Polyphase Harmonic Signal $\tilde{Y}^L(f)$, defined on A_out fundamental frequencies, of an mth order weakly nonlinear system characterized by the equation

$$y(t) = K_m\, x(t)^{(m)}, \tag{5.68}$$

and with an input frequency set with A_in elements, are given by

$$Y_k^L(f) = \sum_{l_1 \le \cdots \le l_m = 1}^{A_{in}} K_m \bigl[d_m^L\bigr]^{k}_{\,l_1,\ldots,l_m} \Bigl( X_{l_1}^L(f) \circledast \cdots \circledast X_{l_m}^L(f) \Bigr), \tag{5.69}$$

for k = 1, …, A_out. '⊛' denotes the Hadamard product with the convolution as operator instead of the normal multiplication [21], and the single-phase tensor elements are:

$$\bigl[d_m^L\bigr]^{k}_{\,l_1,\ldots,l_m} = \sum_{(c_1,\ldots,c_m) \in C(k;\,l_1,\ldots,l_m)} \frac{2^{1-m}\, m!\; u(f+f_k)}{\varphi\bigl[(-1)^{c_1}(f_{l_1}+f_\varepsilon),\ldots,(-1)^{c_m}(f_{l_m}+f_\varepsilon)\bigr]}\; C_{c_1,\ldots,c_m}, \tag{5.70}$$

[Figure 5.11 appears here: the mapping of combinations of input components X̃ᴸ(f) through the second-order distortion tensor D̃₂ᴸ onto the output phase-frequency plane Ỹᴸ(f).]

Fig. 5.11. Graphical representation of the mapping operation of a second-order distortion tensor between phase-frequency planes.


for some frequency f_ε > 0 Hz, with the sum taken over the elements of the set C of operator modifiers:

$$C(k;\, l_1,\ldots,l_m) = \bigl\{ (c_1,\ldots,c_m) \in \mathbb{B}^m \;\big|\; f_k = (-1)^{c_1} f_{l_1} + \cdots + (-1)^{c_m} f_{l_m} \;\wedge\; \forall i \in \{1,2,\ldots,m-1\}: l_i = l_{i+1} \Rightarrow c_i \le c_{i+1} \bigr\}, \tag{5.71}$$

and with the occurrence function φ:

$$\varphi[a_1,\ldots,a_m] = (n_1!)^{1/n_1} \cdots (n_m!)^{1/n_m}, \tag{5.72a}$$
$$n_i = \bigl|\{ j \in \{1,2,\ldots,m\} \mid a_i = a_j \}\bigr|. \tag{5.72b}$$

Proof. By substituting the definition of the equivalent low-pass transformation (5.4) into (5.68) and expanding, the components of the output signal are found:

$$Y_k^L(f) = \frac{1}{2^m} \sum_{l_1,\ldots,l_m=1}^{A_{in}} \sum_{c_1,\ldots,c_m=0}^{1} K_m\, W_k'(f+f_k) \cdot C_{c_1}\bigl\{X_{l_1}^L(f+f_k-f_{l,c})\bigr\} \circledast \cdots \circledast C_{c_m}\bigl\{X_{l_m}^L(f+f_k-f_{l,c})\bigr\}, \tag{5.73}$$

with $f_{l,c} = (-1)^{c_1} f_{l_1} + \cdots + (-1)^{c_m} f_{l_m}$ and $W_k'(f) = 2W_k(f)\, u(f)$. The window filters out all components with f_k ≠ f_{l,c}. The sum in (5.73) contains the same combination of transformed input components several times: both the components $X_{l_i}^L(f)$ and the operator modifiers c_j for the same component can be permuted. The first number of permutations is equal to

$$\bigl|S_n(\{f_{l_1},\ldots,f_{l_m}\})\bigr| = \frac{m!}{\varphi[f_{l_1},\ldots,f_{l_m}]}, \tag{5.74}$$

whereas for a frequency $f_{l_i}$ that occurs $n_i$ times in $(f_{l_1},\ldots,f_{l_m})$, the number of combinations for a set of modifiers with $n_i^0$ zeros is given by a binomial coefficient. The total number of permutations for a particular combination of frequencies and corresponding modifiers then becomes

$$\frac{m!}{\varphi[f_{l_1},\ldots,f_{l_m}]} \prod_{i=1}^{m} \binom{n_i}{n_i^0}^{1/n_i} = \frac{m!}{\varphi\bigl[(-1)^{c_1} f_{l_1},\ldots,(-1)^{c_m} f_{l_m}\bigr]}. \tag{5.75}$$

Since for $f_{l_i} = 0$ Hz, $(-1)^0 f_{l_i} = (-1)^1 f_{l_i}$, a strictly positive frequency f_ε is added to avoid problems with zero frequencies. As a matter of choice, the combination with $f_{l_1} \le \cdots \le f_{l_m}$ and with the lowest binary number $c = (c_1,\ldots,c_m)$ is selected and multiplied with the number of permutations. This operation results in the single-phase tensor element $[d_m^L]^{k}_{l_1,\ldots,l_m}$ defined in (5.70). ⊓⊔
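The two combinatorial ingredients of Theorem 5.2, the occurrence function φ of (5.72) and the modifier set C(k; l₁, …, lₘ) of (5.71), can be sketched directly in a few lines. The function and variable names below are illustrative, not from the text.

```python
from math import factorial
from itertools import product

def phi(freqs):
    """Occurrence function (5.72): phi[a1..am] = prod_i (n_i!)^(1/n_i),
    where n_i counts how often a_i occurs in the argument list."""
    result = 1.0
    for a in freqs:
        n = freqs.count(a)
        result *= factorial(n) ** (1.0 / n)
    return result

def modifier_set(fk, fls, tol=1e-12):
    """Set C(k; l1..lm) of (5.71): sign modifiers (c1..cm) in {0,1}^m with
    fk = sum_i (-1)^ci * f_li, keeping only the canonical ordering for equal
    frequencies (l_i = l_{i+1} implies c_i <= c_{i+1})."""
    out = []
    for cs in product((0, 1), repeat=len(fls)):
        if abs(sum((-1) ** c * f for c, f in zip(cs, fls)) - fk) > tol:
            continue
        if all(cs[i] <= cs[i + 1] for i in range(len(fls) - 1)
               if fls[i] == fls[i + 1]):
            out.append(cs)
    return out
```

For instance, `phi([1.0, 1.0, 2.0])` evaluates to (2!)^(1/2) · (2!)^(1/2) · (1!)^1 = 2, and for two equal tones only the canonical modifier pattern survives the ordering constraint.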

In practical analog circuits, the distortion components decrease rather rapidly with increasing order; consequently, the second- and third-order distortion components account for the effects of distortion of even and odd orders, respectively. The next two examples apply Theorem 5.2 to derive expressions for second- and third-order distortion models.


Example 5.9 (Second-order distortion). The elements of the single-phase tensor for a second-order nonlinear distortion are found by writing down expression (5.70) for m = 2. Exhaustive reasoning over the elements of C(k; l₁, l₂) results in elements that depend on the mutual relation of the output and input frequencies²:

$$\bigl[d_2^L\bigr]^{k}_{\,l_1,l_2} = \begin{cases} \dfrac{1}{\varphi[f_{l_1},f_{l_2}]}\, C_{{0 \atop 1},0}, & 0\ \text{Hz} < f_{l_1} \le f_{l_2} \,\wedge\, f_k = \pm f_{l_1} + f_{l_2},\\[4pt] C_{0,0} + C_{1,0}, & 0\ \text{Hz} = f_{l_1} < f_{l_2} \,\wedge\, f_k = f_{l_1} + f_{l_2},\\[2pt] 0, & \text{otherwise}, \end{cases} \tag{5.76a}$$

for f_k ≠ 0 Hz, and

$$\bigl[d_2^L\bigr]^{k}_{\,l_1,l_2} = \begin{cases} u(f)\, C_{0,1}, & 0\ \text{Hz} < f_{l_1} = f_{l_2} \,\wedge\, f_k = f_{l_1} - f_{l_2},\\[2pt] \tfrac{1}{2}\, u(f)\, [C_{0,0} + 2C_{0,1} + C_{1,1}], & 0\ \text{Hz} = f_{l_1} = f_{l_2} \,\wedge\, f_k = f_{l_1} + f_{l_2},\\[2pt] 0, & \text{otherwise}, \end{cases} \tag{5.76b}$$

if f_k = 0 Hz. In this case, the factor u(f) introduced by the equivalent low-pass transformation does not vanish. □

Example 5.10 (Third-order distortion). Similar to the case of m = 2, the elements of the third-order single-phase distortion tensor are found by application of (5.70) for m = 3. The result for f_k ≠ 0 Hz becomes:

$$\bigl[d_3^L\bigr]^{k}_{\,l_1,l_2,l_3} = \begin{cases} \dfrac{3}{2\varphi[f_{l_1},f_{l_2},f_{l_3}]}\, C_{0,0,0}, & 0\ \text{Hz} < f_{l_1} \le f_{l_2} \le f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} + f_{l_3},\\[4pt] \dfrac{3}{2\varphi[f_{l_1},f_{l_2}]}\, C_{0,0,1}, & 0\ \text{Hz} < f_{l_1} \le f_{l_2} \le f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} - f_{l_3},\\[4pt] \dfrac{3}{2\varphi[f_{l_2},f_{l_3}]}\, C_{1,0,0}, & 0\ \text{Hz} < f_{l_1} < f_{l_2} \le f_{l_3} \wedge f_k = f_{l_2} - f_{l_1} + f_{l_3},\\[4pt] \dfrac{3\varphi[f_k,f_{l_3}]}{2\varphi[f_{l_1},f_{l_2}]}\, C_{{0 \atop 1},1,0}, & 0\ \text{Hz} < f_{l_1} \le f_{l_2} < f_{l_3} \wedge f_k = f_{l_3} \pm f_{l_1} - f_{l_2},\\[4pt] \dfrac{3}{2\varphi[f_{l_2},f_{l_3}]}\, \bigl(C_{0,{0 \atop 1},0} + C_{1,{0 \atop 1},0}\bigr), & 0\ \text{Hz} = f_{l_1} < f_{l_2} \le f_{l_3} \wedge f_k = f_{l_1} \pm f_{l_2} + f_{l_3},\\[4pt] \tfrac{3}{4}\, [C_{0,0,0} + 2\,C_{0,1,0} + C_{1,1,0}], & 0\ \text{Hz} = f_{l_1} = f_{l_2} < f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} + f_{l_3},\\[2pt] 0, & \text{otherwise}, \end{cases} \tag{5.77a}$$

² All upper and lower symbols of ${0 \atop 1}$ and ± belong to each other.


and for f_k = 0 Hz:

$$\bigl[d_3^L\bigr]^{k}_{\,l_1,l_2,l_3} = \begin{cases} \dfrac{3}{2\varphi[f_{l_1},f_{l_2}]}\, u(f)\, [C_{0,0,1} + C_{1,1,0}], & 0\ \text{Hz} < f_{l_1} \le f_{l_2} < f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} - f_{l_3},\\[4pt] \tfrac{3}{2}\, u(f)\, [C_{0,0,1} + C_{1,0,1}], & 0\ \text{Hz} = f_{l_1} < f_{l_2} = f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} - f_{l_3},\\[4pt] \tfrac{1}{4}\, u(f)\, [C_{0,0,0} + 3\,C_{0,0,1} + 3\,C_{0,1,1} + C_{1,1,1}], & 0\ \text{Hz} = f_{l_1} = f_{l_2} = f_{l_3} \wedge f_k = f_{l_1} + f_{l_2} + f_{l_3},\\[2pt] 0, & \text{otherwise}. \end{cases} \tag{5.77b}$$
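The case analysis of Examples 5.9 and 5.10 boils down to enumerating which frequency combinations and sign choices land on a given output frequency. A minimal sketch (enumeration only, with made-up two-tone frequencies; it does not compute the tensor coefficients themselves):

```python
from itertools import combinations_with_replacement, product

def third_order_products(freqs, fk, tol=1e-9):
    """Enumerate ordered triples f_l1 <= f_l2 <= f_l3 and sign patterns
    (c1, c2, c3) with fk = sum_i (-1)^ci * f_li -- the combinations that the
    case analysis of Example 5.10 classifies."""
    hits = []
    for triple in combinations_with_replacement(sorted(freqs), 3):
        for cs in product((0, 1), repeat=3):
            if abs(sum((-1) ** c * f for c, f in zip(cs, triple)) - fk) < tol:
                hits.append((triple, cs))
    return hits

# Two-tone test at 9 and 10 MHz: the third-order intermodulation product at
# 8 MHz (= 9 + 9 - 10 MHz) is produced by exactly one triple/sign combination.
products = third_order_products([9e6, 10e6], 8e6)
```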

The different terms arise from the up- or down-conversion of the components of the real signals at the positive or negative frequencies to the output frequency f_k. □

Polyphase Distortion Tensors (PDTs) of the same order m of two blocks in a parallel connection (Fig. 5.7a) can be combined into one distortion tensor of order m by adding the corresponding elements. For series and feedback connections, however, the combined distortion tensor will be of a different order than the original ones. As a result, symbolic expressions tend to be complicated. Therefore, numerical evaluation is in general more appropriate to analyze nonlinear effects.

Weakly nonlinear mapping

The mapping operation of a nonlinear building block can be represented using the inner product between a Polyphase Distortion Tensor (PDT) and a Polyphase Harmonic Signal (PHS). This operation is an extension of the normal matrix multiplication [2].

Definition 5.8 (Inner product). The inner product along the kth dimension of an (m+1)-dimensional tensor $\tilde{T}^L(f)$ with dimensions $P_1 \times \cdots \times P_{m+1}$ with a Polyphase Harmonic Signal $\tilde{S}^L(f)$ with $P_k$ components is an m-dimensional tensor

$$\tilde{T}^{L'}(f) = \bigl\langle \tilde{T}^L(f),\, \tilde{S}^L(f) \bigr\rangle_k, \tag{5.78a}$$

with

$$\bigl[\tilde{T}^{L'}(f)\bigr]^{j}_{\,i_2,\ldots,i_{k-1},i_{k+1},\ldots,i_{m+1}} = \sum_{q=1}^{P_k} \bigl[\tilde{T}^L(f)\bigr]^{j}_{\,i_2,\ldots,q,\ldots,i_{m+1}} \cdot S_q^L(f), \tag{5.78b}$$

for j = 1, …, P_1; i_r = 1, …, P_r and r = 2, …, m+1.

Since the elements of the tensor and vector are functions, a variant of the inner product (5.78) can be defined where the standard multiplication in (5.78b) is


replaced by the convolution. Consequently, (5.78a) changes slightly for this alternative inner product:

$$\tilde{T}^{L'}(f) = \bigl\langle \tilde{T}^L(f),\, \tilde{S}^L(f) \bigr\rangle^{*}_{k}. \tag{5.79}$$

For a general weak nonlinearity of order m, characterized by (5.68), the components of the output distortion signal $\tilde{Y}^L(f)$ are given by (5.69). By proper arrangement, this expression can be written as the repeated calculation of m inner products between the PDT $\tilde{D}_m^L(f)$ of the building block and the harmonic input signal $\tilde{X}^L(f)$:

$$\tilde{Y}^L(f) = \Bigl\langle \cdots \bigl\langle \tilde{D}_m^L(f),\, \tilde{X}^L(f) \bigr\rangle^{*}_{2} \cdots,\, \tilde{X}^L(f) \Bigr\rangle^{*}_{2}. \tag{5.80}$$

A simplified relation is derived for use during a first analysis of the architecture by assuming that the signal components have a small bandwidth compared to the resolution of the frequency axis. Subsequent refinements of the model will add more details to the signal model. In the simplified case, Dirac pulses approximate the signal components. The inner products are adjusted accordingly:

$$\bigl\langle \tilde{D}_m^L(f),\, \tilde{X}^L(f) \bigr\rangle^{*}_{k} \approx \bigl\langle \tilde{D}_m^L(f),\, \tilde{X}^L(f) \bigr\rangle_{k} \approx \bigl\langle \tilde{D}_m^L(f),\, \tilde{X}^L(0) \bigr\rangle_{k}. \tag{5.81}$$

With these approximations, the concepts of PDT and inner product facilitate the characterization of the total transfer from the input components towards the output components. From an input frequency f_k with component $x_i^L(t)$, a weak nonlinearity of order m generates transfers of several signals $x_i^L(t)$, $x_i^L(t)^2$, …, $x_i^L(t)^m$ towards various output frequencies, corresponding to transfers of degree 1 to m. These transfers are realized by multiplication with other signal components. The sizes of all transfers of degree d can be collected into a transfer matrix $\tilde{H}^L_{m,d}(f)$, which is obtained via calculation of the inner products between the distortion tensor and the input polyphase harmonic signal evaluated at the origin. From these transfers, the relative importance of a frequency component regarding the nonlinearity can be determined. This knowledge can be used to simplify the model by neglecting some contributions.

Example 5.11. For a third-order (m = 3) nonlinearity, the transfer matrix of third degree (d = 3) is found by taking the elements of the Polyphase Distortion Tensor (PDT) with equal covariant indices:

$$\tilde{H}^L_{3,3}(f) = \Bigl[ \bigl[\tilde{D}_3^L\bigr]^{j}_{\,i,i,i} \Bigr]_{j,i}. \tag{5.82}$$

Each element of $\tilde{H}^L_{3,3}(f)$ indicates the transfer from the signal $x_i^L(t)^3$ at frequency f_i to frequency f_j. For example, Fig. 5.12a shows the input signal of


a single-phase system modeled by equation (5.68) with K₃ = 1. The transfer matrix of third degree can be represented by the matrix box plot depicted in Fig. 5.12d. There is no third-degree transfer between different frequencies.

The transfer matrix of the first degree (d = 1) for a third-order nonlinearity (m = 3) is found by evaluating six inner products:

$$\tilde{H}^L_{3,1}(f) = \Bigl\langle \bigl\langle \tilde{D}_3^{L,[1]}(f),\, \tilde{X}^L(0) \bigr\rangle_3,\, \tilde{X}^L(0) \Bigr\rangle_3 + \Bigl\langle \bigl\langle \tilde{D}_3^{L,[2]}(f),\, \tilde{X}^L(0) \bigr\rangle_2,\, \tilde{X}^L(0) \Bigr\rangle_3 + \Bigl\langle \bigl\langle \tilde{D}_3^{L,[3]}(f),\, \tilde{X}^L(0) \bigr\rangle_2,\, \tilde{X}^L(0) \Bigr\rangle_2, \tag{5.83}$$

where the superscript [p] indicates that the transfer is calculated from the component corresponding to the pth covariant index. As a result, the elements of the PDT with the same component in another covariant index should be set to zero to obtain only the transfers of the first degree. The graphical representation of the first-degree transfer matrix is shown in Fig. 5.12b. If the elements of the PDT in (5.83) are chosen according to Example 5.10, then the three terms correspond to different kinds of transfers. The first one accounts for transfers of a frequency f_k towards f_l via components at frequencies higher than f_k. The last one corresponds to transfers via components at lower frequencies. The other signal transports are incorporated in the middle term.

Finally, the second-degree transfer matrix is calculated via the derivation of three-dimensional tensors by equating two covariant indices instead of three as in (5.82). Calculation of the inner products of these tensors with the input signal results in the transfer matrix represented in Fig. 5.12c. It can be concluded that at the center frequency f₆ = 100 MHz most nonlinear distortion is due to the second-degree transfers. Odd-degree transfers give rise to in-band distortion. □
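The inner product of Definition 5.8 is exactly a single-axis tensor contraction, so a numeric experiment with the repeated contractions of (5.80)/(5.81) and the equal-index selection of (5.82) fits in a few lines of NumPy. The tensor and signal below are random stand-ins, not data from the text.

```python
import numpy as np

A = 4                                    # number of frequency components (single phase)
rng = np.random.default_rng(0)
D3 = rng.standard_normal((A, A, A, A))   # toy 4-dim distortion tensor D3[j, i1, i2, i3]
X0 = rng.standard_normal(A)              # input components evaluated at the origin

def inner(T, s, axis):
    """Inner product along one covariant dimension (Definition 5.8)."""
    return np.tensordot(T, s, axes=([axis], [0]))

# Repeated contraction as in (5.80)/(5.81): contract the last covariant index
# three times to obtain the full third-order response <<<D3, X>, X>, X>.
y = inner(inner(inner(D3, X0, 3), X0, 2), X0, 1)

# Third-degree transfer matrix (5.82): elements with equal covariant indices.
H33 = np.array([[D3[j, i, i, i] for i in range(A)] for j in range(A)])
```

The chained `tensordot` calls reproduce the same contraction as a single `einsum`, which is a convenient cross-check for larger tensors.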

The input–output relationship for nonlinear behavior in terms of a PDT given by (5.80) corresponds to polynomial nonlinearities defined by (5.68). PDTs, however, can also be used for more general types of nonlinearities. For example, a general mth order weakly nonlinear input–output relation may be defined by

$$y(t) = H_y(t) * \bigl\{ [H_{x_1}(t) * x(t)] \circ \cdots \circ [H_{x_m}(t) * x(t)] \bigr\}, \tag{5.84}$$

where $H_y(t)$ and $H_{x_i}(t)$ are general polyphase filters and the symbol '◦' denotes the Hadamard product. For single-phase signals, (5.84) corresponds to a Volterra kernel of the form [64]:

$$H_m(j\omega_1,\ldots,j\omega_m) = H_{m,0}(j\omega_1 + \cdots + j\omega_m)\, H_{m,1}(j\omega_1) \cdots H_{m,m}(j\omega_m). \tag{5.85}$$

Note that the nonlinearity represented by this model (5.84) can also possess memory, in contrast to the standard polynomial model (5.68). The input–output characteristic can be expressed using inner products and matrix multiplication:


[Figure 5.12 appears here with four panels: (a) the input signal P_in [dBm] on eleven frequency components f₁ = 75 MHz to f₁₁ = 125 MHz; (b) the matrix box plot of H̃ᴸ₃,₁(f); (c) H̃ᴸ₃,₂(f); (d) H̃ᴸ₃,₃(f).]

Fig. 5.12. The input signal and the matrix box plots of the transfer matrices of degree one, two and three for a third-order nonlinearity with K₃ = 1.


$$\tilde{Y}^L(f) = H^L_y(f) \cdot \Bigl\langle \cdots \bigl\langle \tilde{D}_m^L,\, H^L_{x_1}(f) \cdot \tilde{X}^L(f) \bigr\rangle^{*}_{2} \cdots,\, H^L_{x_m}(f) \cdot \tilde{X}^L(f) \Bigr\rangle^{*}_{2}, \tag{5.86}$$

using the standard mth order distortion tensor and the PHTMs corresponding to the general polyphase filters. A large variety of weak nonlinearities encountered in analog circuitry can be taken into account with this generalized weakly nonlinear model.

Strongly nonlinear behavior

Different mechanisms exist to take strongly nonlinear effects into account, like saturation, quantization or the operation of a limiter [4]. Three methods can easily be incorporated into the Phase-Frequency Transfer model:

Time-domain conversion. The Polyphase Harmonic Signal at the input of the nonlinear block is converted to the real polyphase signals in the time domain using (5.10) and (5.4). Then, the nonlinear input–output relationship is applied and the signal is converted back to a polyphase harmonic signal. This method is comparable to that used by time-domain bandpass simulation approaches [58].

Weak approximation. During analysis, the designer is usually interested in the harmonic distortion figures HD_m. Therefore, the output of a strongly nonlinear block can be rewritten as:

$$y(t) = F(x(t)) \approx \sum_{m=1}^{M} K_m\, x(t)^{(m)} + e(t), \tag{5.87}$$

where e(t) is a random noise signal. Consequently, the weakly nonlinear mapping (5.80) can be used to approximate the output. The noise source e(t) accounts for the strongly nonlinear effects, which can be filtered through the linearized system using (5.56), similar to the model for quantization noise generally used to analyze ∆Σ modulators.

Quasi-linear approximation. A quasi-linear model for a building block describes its operation with a Polyphase Harmonic Transfer Matrix (PHTM) where the elements may depend on both the frequencies and the magnitudes of the input components. This PHTM is derived from the describing function of the building block, where the input is approximated by a sum of pure sine waves [17].

Each approach is a trade-off between accuracy and simulation efficiency: conversion to the time domain returns the exact output signal but is time-consuming, whereas a quasi-linear approximation requires only one matrix multiplication once the approximation has been derived. Approximation by a weak nonlinearity, on the other hand, is more easily derived than the quasi-linear estimation.
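The weak-approximation step of (5.87) can be sketched as a least-squares fit: fit polynomial coefficients K₁…K_M to a strongly nonlinear characteristic and treat the residual as the additive noise source e(t). The limiter characteristic (tanh), the fit order and the sample count below are arbitrary illustration choices.

```python
import numpy as np

# Weak approximation (5.87): fit K1..KM to a strongly nonlinear block (here a
# tanh limiter) and keep the fit residual as the "noise" term e(t), analogous
# to the linear quantization-noise model used for Delta-Sigma modulators.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 2000)
y = np.tanh(2.0 * x)                     # strongly nonlinear block F(x)

M = 5
V = np.vander(x, M + 1, increasing=True)[:, 1:]   # columns x, x^2, ..., x^M
K, *_ = np.linalg.lstsq(V, y, rcond=None)         # fitted K1..K5
e = y - V @ K                            # residual modeled as random noise e(t)
residual_power = np.mean(e ** 2)
```

The residual power quantifies how much of the block's behavior the weakly nonlinear model cannot capture; it is what would be injected as e(t) and filtered through the linearized system via (5.56).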


5.4 Generic behavioral models for front-end architectures

The framework of the Phase-Frequency Transfer model (PFT model) described in the previous sections provides data structures in which several architectures can be represented at different abstraction levels. Furthermore, the presence of a frequency-domain simulation method makes it well suited to represent and analyze systems operating at high frequencies. As a result, this model naturally serves as the underlying mathematical theory to develop generic behavioral models for RF systems. In this section, first the class of architectures is defined for which a generic behavioral model is derived. A basic RF receiver is chosen as explanatory example, but the same principles are easily applied to other systems like transmitters. Then, the different elements of the actual model are presented: the alphabets, the generic functions and their templates, and the interaction scheme. Finally, some simulation results of an implementation in MATLAB are summarized.

5.4.1 Fundamental RF receiver front-end operation

The basic operation of an RF receiver front-end architecture is to convert a high-frequency input signal down to a low frequency, amplify it and filter out noise and spurs. Then, it is usually sampled and converted to a digital signal for further processing [6, 29, 46, 1]. Therefore, the fundamental RF front-end operation used in this work to derive a generic behavioral model is defined by the following input–output equation:

$$y(t) = \Re\Bigl\{ \bigl( u(t) + j\,\mathcal{H}\{u(t)\} \bigr)\, e^{-j2\pi f_{sh} t} \Bigr\} \cdot \Bigl[ p(t) * \sum_{k=-\infty}^{\infty} \delta\Bigl(t - \frac{k}{f_s}\Bigr) \Bigr], \tag{5.88}$$

with f_sh the frequency shift, f_s the sampling frequency and p(t) a sampling pulse. Generating digital words only requires the conversion to the appropriate output alphabet and the addition of (quantization) noise. Figure 5.13 depicts an example of an implementation of (5.88) [7]. In general, four different parts can be distinguished in the architecture:

• High-frequency amplifier or filter: e.g., a low-noise amplifier, indicated by the gray zone labeled 'ζ' in Fig. 5.13
• Down-conversion stage: a set of mixers, phase-converters and (polyphase) filters, corresponding to the area labeled 'µ'
• Low-frequency filter: filters at the base-band or low-IF frequency, represented by the dark rectangle with label 'ξ'
• Sampling stage: the analog-to-digital conversion marked 'λ', sometimes preceded by a discrete-time filter

The digital processing is considered to be part of the environment in which the generic behavioral model for the RF receiver operates.
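The fundamental operation (5.88) can be sketched numerically: form the analytic signal u + jH{u} (here via the one-sided-spectrum FFT method), shift it down by f_sh, take the real part, and sample. The sketch assumes an ideal sampling pulse p(t) = δ(t), and all frequency values are made up for illustration.

```python
import numpy as np

def analytic_signal(u):
    """Analytic signal u + j*Hilbert{u}, computed by keeping only the
    one-sided spectrum of u (FFT method)."""
    U = np.fft.fft(u)
    n = len(u)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.fft.ifft(U * w)

# Sketch of (5.88): a 50 Hz tone shifted down by f_sh = 45 Hz lands at 5 Hz;
# decimation by 10 models ideal sampling at f_s = 100 Hz.
fc, f_sh, fs_fine = 50.0, 45.0, 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs_fine)
u = np.cos(2 * np.pi * fc * t)
y_ct = np.real(analytic_signal(u) * np.exp(-2j * np.pi * f_sh * t))
y = y_ct[::10]
```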

[Figure 5.13 appears here: a four-phase receiver chain with an input amplifier (ζ), quadrature mixers driven by V cos(ω_o t) and V sin(ω_o t) (µ), low-frequency filters H₁(f), H₂(f) (ξ), and A/D converters clocked at f_s (λ).]

Fig. 5.13. Example of an RF front-end architecture with fundamental operations of down-conversion and sampling.

5.4.2 Generic behavioral model

Defining a generic behavioral model for RF front-ends comes down to specifying alphabets, generic functions and an interaction scheme. The alphabets formally define the character of the input, output and intermediate signals. The main functionality of the different parts is determined by the templates for the generic functions. They contain various parameters which are specified during a specialization phase to represent a particular architecture at a specific abstraction level. Evaluation of the generic behavioral model implies the execution of the interaction scheme, which describes the dynamic behavior indirectly, in terms of the generic functions. All these elements are derived based on the use of the Phase-Frequency Transfer model (PFT model) as computational model for frequency-domain simulations.

Alphabets

The following types of alphabets are defined for the signals that are used in the frequency-domain generic behavioral model:

Input. The input signal of the receiver is the frequency-domain representation of the antenna signal:

$$U(f) \in \Sigma = C^0[\mathbb{C}], \tag{5.89}$$

where only single-phase input signals are assumed. Extension to multiple inputs is straightforward. Further, in case of stochastic signals, like noise signals, the power spectral density should be specified:

$$N^{in}(f) \in \Upsilon_{in} = C^0[\mathbb{R}]^k. \tag{5.90}$$

Internal signals. All signals within the architecture are represented using the concept of Polyphase Harmonic Signal (PHS) of the PFT model. This


representation allows different numbers of phases and/or frequency components. In first analysis stages, designers typically use only a limited number of sine waves as test signals. Then, signal refinement includes adding more parasitic signal components and more complicated frequency contents. The alphabet for internal signals is then given by

$$\tilde{S}^L(f) \in \Theta_{N,A} = C^0[\mathbb{C}]^{N \cdot A}, \tag{5.91}$$

for an internal signal with N phases and A frequency components. This representation is dual: both information and structure can be represented via a base transformation (5.18). Internal stochastic signals are represented as Polyphase Harmonic Power Spectral Density Matrices (PHPSDMs) as defined in Section 5.3.4. Note that in a frequency-domain simulation approach, no explicit notion of memory is needed, since the representation can be converted to the time-domain signal over the entire time interval.

Output. Finally, the output interface of the generic behavioral model is an m-phase signal portrayed in the frequency domain:

$$Y(f) \in \Psi = C^0[\mathbb{C}]^m. \tag{5.92}$$

For the output noise signals, the power spectral density is returned. Table 5.3 summarizes the different alphabets of the model.

Table 5.3. Definitions of the alphabets used in the generic behavioral model for the interfaces and internal signals of the RF receiver front-end architectures.

Signal type      | Alphabet          | Definition                  | Application
-----------------|-------------------|-----------------------------|--------------------
Input signal     | U ∈ Σ             | Σ = C⁰[C]                   | Test signal
Output signal    | Y ∈ Ψ             | Ψ = C⁰[C]^m                 | Outputs
Noise signal     | N_in ∈ Υ_in       | Υ_in = C⁰[R]^k              | White noise
Output noise     | N_out ∈ Υ_out     | Υ_out = C⁰[R]^l             | SNR_out
Internal signal  | S̃ᴸ ∈ Θ_{N,A}      | Θ_{N,A} = C⁰[C]^{N·A}       | Intermediate signal
Internal noise   | S̃ₙᴸ ∈ Ξ_{N,A}     | Ξ_{N,A} = C⁰[C]^{NA×NA}     | Intermediate noise

Generic functions

Two groups of generic functions are defined:

Operations. A generic function is associated with each of the four parts indicated in the receiver architecture of Fig. 5.13. Such a generic function consists of a linearized part and a nonlinear correction. The linearized

Table 5.4. Definitions of the mapping operations of the generic functions of the generic behavioral model for RF receiver front-end architectures.

Function          | Definition                                          | Application
------------------|-----------------------------------------------------|---------------
Input conversion  | κ_in : Σ × Υ_in → Θ_{N,A} × Ξ_{N,A}                 | Input
High-frequency    | ζ : Θ_{N,A} × Ξ_{N,A} × Υ_in → Θ_{N,A} × Ξ_{N,A}    | LNA
Down-conversion   | µ : Θ_{N,A} × Ξ_{N,A} × Υ_in → Θ_{N,A} × Ξ_{N,A}    | Mixers
Low-frequency     | ξ : Θ_{N,A} × Ξ_{N,A} × Υ_in → Θ_{N,A} × Ξ_{N,A}    | LPF
Sampling          | λ : Θ_{N,A} × Ξ_{N,A} × Υ_in → Θ_{N,A} × Ξ_{N,A}    | ADC
Output conversion | κ_out : Θ_{N,A} × Ξ_{N,A} → Ψ × Υ_out               | Output
Re-arrangement    | ρ : Θ_{N,A} × Ξ_{N,A} → Θ_{N,A} × Ξ_{N,A}           | Simplification

mapping of the input onto the output signal corresponds to a multiplication with a Polyphase Harmonic Transfer Matrix (PHTM) in the PFT model. The nonlinear behavior is reflected by input–output relationships of type (5.80) or (5.86). The results of both responses are summed to obtain the total output signal. For noise signals, only the linearized part is used, as these are normally small signals. Since noise signals can occur anywhere in the architecture, a noise vector is added as an input alphabet of all generic functions.

Conversion. Both the input and output signals should be converted to and from the structures of the PFT model. Therefore, input and output conversion generic functions are specified. Parameters of these functions are the definition of the frequency set and the window functions needed to divide the signal over the different components according to (5.27). Further, internal conversion is present in the model as the re-arrangement generic function ρ. This function redistributes the signal components over a frequency axis, leaving out small contributions.

An overview of all generic functions used in the generic behavioral model is presented in Table 5.4. Templates needed to specialize the operation functions during architectural exploration are introduced in the next section. The templates for the conversion functions κ_in and κ_out correspond to the definition of the Polyphase Harmonic Signal (PHS) of the PFT model.

Interaction model

The dynamic behavior of a system represented by a generic behavioral model is expressed indirectly as relations between the generic functions. Listing 5.1 shows this interaction model for the fundamental RF receiver front-end defined in Section 5.4.1. The corresponding schematic representation is depicted in Fig. 5.14.

5.4 Generic behavioral models for front-ends

1  input U, N_in
2  (S̃L_in, S̃L_n,in) ← κin(U, N_in)
3  (S̃L_1, S̃L_n,1) ← ζ(S̃L_in, S̃L_n,in, N_in)    and    (S̃L_hf, S̃L_n,hf) ← ρ(S̃L_1, S̃L_n,1)
4  (S̃L_2, S̃L_n,2) ← µ(S̃L_hf, S̃L_n,hf, N_in)    and    (S̃L_dc, S̃L_n,dc) ← ρ(S̃L_2, S̃L_n,2)
5  (S̃L_3, S̃L_n,3) ← ξ(S̃L_dc, S̃L_n,dc, N_in)    and    (S̃L_lf, S̃L_n,lf) ← ρ(S̃L_3, S̃L_n,3)
6  (S̃L_4, S̃L_n,4) ← λ(S̃L_lf, S̃L_n,lf, N_in)    and    (S̃L_out, S̃L_n,out) ← ρ(S̃L_4, S̃L_n,4)
7  (Y, N_out) ← κout(S̃L_out, S̃L_n,out)
8  output Y, N_out

Listing 5.1. Interaction model of the generic behavioral model for RF receiver front-end architectures.
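The data flow of Listing 5.1 can be mimicked with a small Python sketch. The functions below are hypothetical placeholders for the PFT-model operations, not the book's implementation; they merely reproduce the traversal order of the interaction model.

```python
# Sketch of the interaction model of Listing 5.1 with placeholder functions.
# kin/kout, zeta, mu, xi, lam and rho are invented stand-ins that pass a
# (signal, noise) pair along the chain; they do not perform PFT operations.

def kin(u, n_in):               # input conversion to PHS form (line 2)
    return {"sig": u, "noise": n_in}

def stage(name):
    def f(s, n_in):             # zeta, mu, xi, lam all share this signature
        return {"sig": s["sig"] + [name], "noise": s["noise"]}
    return f

def rho(s):                     # re-arrangement: drop small contributions (no-op here)
    return s

def kout(s):                    # output conversion back to ordinary spectra (line 7)
    return s["sig"], s["noise"]

zeta, mu, xi, lam = stage("zeta"), stage("mu"), stage("xi"), stage("lam")

def front_end(u, n_in):
    s = kin(u, n_in)            # line 2
    s = rho(zeta(s, n_in))      # line 3: high-frequency filter + re-arrangement
    s = rho(mu(s, n_in))        # line 4: down-conversion
    s = rho(xi(s, n_in))        # line 5: low-frequency filter
    s = rho(lam(s, n_in))       # line 6: sampling
    return kout(s)

y, n_out = front_end([], 0.0)   # y records the traversal order of the stages
```

Extending the model with an extra stage or a feedback path then amounts to inserting another operation/re-arrangement pair into the chain.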

First, the input signal is converted from its normal frequency-domain representation to a polyphase harmonic signal using the input conversion function (line 2). Then, the different stages of the fundamental RF receiver are traversed by applying the corresponding generic functions (lines 3–6). The linear and nonlinear parts are executed in parallel, as indicated in Fig. 5.14. After each function, the output signal can be simplified using the re-arrangement function, which remains the same throughout the interaction model. Finally, the appropriate frequency-domain signals are formed and returned as output (line 7). This straightforward interaction model can easily be extended, for example, with extra stages or with a feedback path. In the latter case, frequency relaxation is applied to converge the signals via subsequent calculations throughout the signal loop.

All information about the generic behavioral model for RF receiver front-end architectures is represented compactly as a formal machine,

M = (Σ, Ψ, Υin, Υout, ΘN,A, ΞN,A, κin, ζ, µ, ξ, λ, κout, ρ),    (5.93)

where the first six elements are the alphabets and the remaining ones the generic functions; no specific interaction model parameters are required for the fundamental RF receiver front-end. The absence of memory elements and the use of the signal-flow computational model make it an example of a Memoryless Data-flow Computer [28, 52].

5.4.3 Templates for the generic functions

A template for a generic function indicates which parameters should be provided during a specialization phase in order to represent an actual system during architectural exploration. Starting from templates for the building blocks present in a part of the architecture as indicated in Fig. 5.13, the generic function is derived for further use in the model.

5 Frequency-Domain Generic Behavioral Models

[Figure: input conversion κin, followed by the four operation stages ζ, µ, ξ and λ, each with a linear and a nonlinear path and a subsequent re-arrangement ρ, and finally the output conversion κout.]

Fig. 5.14. Schematic representation of the interaction model of the generic behavioral model for RF receiver front-end architectures.

High- and low-frequency filters (ζ and ξ)

The generic functions corresponding to the two time-invariant parts of the architecture are characterized by the same template. These parts can be considered as a cascade of S different stages. For example, the high-frequency part ζ in Fig. 5.13 consists of two stages: the amplifier G and the phase-converter with 90° shift. Further, each stage s consists of Bs building blocks,


e.g., B1 = 2 for the only stage in the low-frequency filter ξ of Fig. 5.13. Each of these blocks is an LTI filter represented by one or more transfer functions which are gathered in a polyphase filter matrix:

Y_s,b(f) = H_s,b(f) · X_s,b(f).    (5.94)

The polyphase base used to describe the filter usually contains the common-/differential-mode or single-phases base vectors as discussed in Section 5.3.3. To derive the template for the generic function, the polyphase filters are first converted to the representation with respect to the single-phases base using (5.30). The polyphase filter of a single stage is then easily derived:

H_s(f) = P_{s,out}^{-1} · diag( B_{N_{s,1,out}}^{-1} · H_{s,1}(f) · B_{N_{s,1,in}}, …, B_{N_{s,Bs,out}}^{-1} · H_{s,Bs}(f) · B_{N_{s,Bs,in}} ) · P_{s,in},    (5.95)

where identity matrices are used if a part of the signal is not filtered, such as the upper signals in the second stage of the high-frequency part ζ in Fig. 5.13. The matrices P_s stand for a permutation of the phases to convert to the standard single-phases base (5.11). Finally, the polyphase harmonic transfer matrix of the filtering part is obtained by converting the LTI polyphase filters to polyphase harmonic transfer matrices with (5.39) and cascading the PHTMs of the different stages using (5.44).

The linear part of the generic functions ζ and ξ applies the input–output relationship (5.38). For nonlinear distortion, a combination scheme similar to (5.95) is used for the Polyphase Distortion Tensors (PDTs). Instead of combining them into one tensor, however, it is easier to calculate the output signals of the different stages successively.

Down-conversion function (µ)

The down-conversion operation in a front-end architecture is, in general, performed by one or more polyphase mixers. Figure 5.15 shows a more detailed representation of the polyphase mixer of Fig. 5.13 with N = 4 input phases and K = 4 output phases. The polyphase mixing operation can be split into two stages. The first stage contains the set of mixers, each of which multiplies a part of the input signal with a part of the oscillator signal. The result of the first stage is a polyphase signal with, in general, a different number of phases than the output signal. In the second stage this polyphase signal is converted to the desired number of phases using a phase converter. For each stage a polyphase harmonic transfer matrix can be derived. The Polyphase Harmonic Transfer Matrix (PHTM) of the entire polyphase mixer is then found as the cascade connection of the two PHTMs [32].
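The block-diagonal combination rule of (5.95) can be sketched numerically. The 2×2 filter blocks and the (identity) permutations below are illustrative values, not taken from the book; `block_diag` is a small helper rather than a toolbox function.

```python
import numpy as np

# Sketch of (5.95): per-block polyphase filter matrices are placed on a
# block diagonal and wrapped in phase permutations; cascading stages is
# then an ordinary matrix product. All numeric values are illustrative.

def block_diag(*blocks):
    """Place square complex blocks on a block diagonal."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n), dtype=complex)
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

f = 1e6                                    # evaluation frequency (assumed)
H1 = (1 / (1 + 1j * f / 1e7)) * np.eye(2)  # a first-order low-pass block
H2 = np.eye(2)                             # identity: this part is not filtered

P_in = np.eye(4)                           # phase permutations to the standard
P_out = np.eye(4)                          # single-phases base (identity here)

Hs = np.linalg.inv(P_out) @ block_diag(H1, H2) @ P_in  # one stage, cf. (5.95)
H_total = Hs @ Hs                          # cascade of two identical stages
```

With identity permutations the cascade reduces to the block-wise product, which is a quick sanity check on the combination scheme.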



Fig. 5.15. Example of a polyphase mixer modeled as a two-stage operation. The first stage converts the N = 4 input signals to a polyphase signal with M = 8 phases, which is in turn converted to K = 4 output phases.

The function of the mixing stage can be described by the following steps:

1. Split input signal. The N-phase input signal X(f) is split up into Q parts X_q(f) (Q = 2 in Fig. 5.15):

X(f) = P_in^{-1} · [ X_1(f)^T … X_Q(f)^T ]^T.    (5.96)

The operation of a mixer is usually defined for common-/differential-mode base vectors. Consequently, the permutation matrix P_in is given by

P_in = [ I_{N/2} ⊗ (1, 0)^T    I_{N/2} ⊗ (0, 1)^T ].    (5.97)

Each part X_q(f) is represented as a Polyphase Harmonic Signal (PHS) X̃L_q(f).

2. Mixer template. Each mixer in the polyphase mixer is assigned a unique pair: mixer (q, p) multiplies the qth part of the N-phase input signal with the pth part of the oscillator signal. The number of mixers for each part of the input signal is denoted by P (P = 2 in Fig. 5.15). The general template used to model the operation of a mixer is a Harmonic Transfer Matrix (HTM) [57]. This HTM is translated to the PFT model using (5.41). The output signal of mixer (q, p) is then the PHS Z̃L_q,p(f):

Z̃L_q,p(f) = H̃L_q,p(f) · X̃L_q(f).    (5.98)

3. Output signal. The signals Z̃L_q,p are lumped together into the M-phase output PHS Z̃L using a permutation matrix P_out similar to the one for the input signals.

The PHTM of the mixing stage is derived by evaluating these three steps with the concepts of the PFT model:


H̃L_mix(f) = [ I_{A_out} ⊗ P_out^{-1} ] · diag( [ H̃L_{1,1}; …; H̃L_{1,P} ], …, [ H̃L_{N,1}; …; H̃L_{N,P} ] ) · [ I_{A_in} ⊗ P_in ].    (5.99)

The second stage of the polyphase mixer consists of the conversion of the M-phase output signal of the mixing stage to the K-phase output signal. This stage is characterized by a PHTM H̃L_pc(f), like a normal phase converter. Evaluation of the generic down-conversion function after specialization comes down to applying the input–output relation in terms of the PFT model for a cascade connection of PHTMs built up out of pairs of PHTMs for the mixing and phase-conversion stages. As for the high- and low-frequency generic functions, the nonlinear distortion components are found by evaluating all intermediate signals instead of first deriving the total Polyphase Distortion Tensor (PDT).
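The two-stage decomposition of the polyphase mixer can be sketched with small illustrative matrices instead of full PHTMs. The sizes (N = 2 input phases, M = 4 internal phases, K = 2 output phases) and matrix entries below are assumptions chosen for compactness.

```python
import numpy as np

# Two-stage polyphase mixer model: a mixing stage followed by a phase
# converter, each represented by one matrix and cascaded by multiplication.
# All matrix entries (cosine/sine mixing, simple recombination) are
# illustrative, not taken from the book.

g = 1.0  # mixer conversion gain, assumed ideally matched

# Mixing stage: each of the 2 input phases feeds a cos- and a sin-mixer.
H_mix = np.array([
    [g,      0.0],
    [1j * g, 0.0],
    [0.0,    g],
    [0.0,    1j * g],
])                                  # shape (4, 2): N=2 -> M=4

# Phase converter: recombine the four internal phases into I and Q.
H_pc = 0.5 * np.array([
    [1.0, 0.0, 1.0,  0.0],
    [0.0, 1.0, 0.0, -1.0],
])                                  # shape (2, 4): M=4 -> K=2

H_mixer = H_pc @ H_mix              # PHTM of the whole polyphase mixer

x = np.array([1.0, 1.0j])           # a 2-phase input component
y = H_mixer @ x
```

The point of the construction is that the complete mixer is again a single matrix, so it composes with the filter stages by plain matrix products.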

Sampling function (λ)

Basic sampling

The basic sampling operation of the fundamental RF receiver front-end is shown in Fig. 5.16a. The operation is modeled as an operation in stages as shown in Fig. 5.16b [34]. First, all non-ideal (linear and nonlinear) effects are taken into account by a stage consisting of filters. Then, the continuous-time signal is converted to discrete time via an ideal sampling operation. Again,


Fig. 5.16. Example of a sampling stage and its model as a cascade of a filter stage and a stage converting the continuous-time (CT) signal to a discrete-time (DT) one.


each stage corresponds to a Polyphase Harmonic Transfer Matrix (PHTM). The template of the filter stage is comparable to that of the high- and low-frequency parts discussed in Section 5.4.3.

Two types of sampling operations can be distinguished: a sample-and-hold, where each sample is multiplied by a pulse p(t) shifted to the sampling point; and a sampling method corresponding to the operation of an on–off switch, in which the continuous-time signal is multiplied by the shifted pulse. The effects in the frequency domain have been studied frequently [4]. If the spectra of the input signal and pulse are X(f) and P(f) respectively, then the output spectrum for a sample-and-hold operation becomes

Y(f) = fs · P(f) · Σ_{m=−∞}^{∞} X(f − mfs),    (5.100)

and for an on–off switch

Y(f) = fs · Σ_{m=−∞}^{∞} P(mfs) X(f − mfs),    (5.101)

with fs the sampling frequency. These two equations are easily translated to PHTMs in the Phase-Frequency Transfer model (PFT model). The transfer from phase q at frequency fl to phase p at frequency fk can be written as follows for a sample-and-hold operation:

H̃L_{i,j}(f) =
  fs P(f + fk) u(f + fk) C0,          |fk − fl| = nfs ∧ fk + fl ≠ mfs,
  fs P(f + fk) u(f + fk) C1,          |fk − fl| ≠ nfs ∧ fk + fl = mfs,
  fs P(f + fk) u(f + fk) [C0 + C1],   |fk − fl| = nfs ∧ fk + fl = mfs,
  fs P(f + fk) u(f + fk) C0,          fk = fl,
  0,                                  otherwise,    (5.102)

with i, j defined by (5.21b) and n, m ∈ Z+0. For an on–off switch, the total transfer becomes

H̃L_{i,j}(f) =
  fs P(fk − fl) C0,                           |fk − fl| = nfs ∧ fk + fl ≠ mfs,
  fs P(fk + fl) C1,                           |fk − fl| ≠ nfs ∧ fk + fl = mfs,
  fs [P(fk − fl) C0 + P(fk + fl) C1],         |fk − fl| = nfs ∧ fk + fl = mfs,
  fs u(f) [P(−fl) C0 + P(fl) C1],             fl − fk = nfs ∧ fk + fl = nfs,
  fs P(0) C0,                                 fk = fl,
  0,                                          otherwise.    (5.103)

To ensure correct behavior, the output frequency axis is first chosen based on the input axis, the sampling frequency and the maximum number of shifted

To ensure correct behavior, the output frequency axis is first chosen based on the input axis, the sampling frequency and the maximum number of shifted

5.4 Generic behavioral models for front-ends

197

components to be taken into account. Then, the re-arrangement function ρ is used to 'cast' the frequency response into the required output frequency axis.

The generic sampling function λ applies the input–output relation of the two cascaded stages shown in Fig. 5.16b. For linear operation, the total PHTM can be calculated, whereas for nonlinear distortion, the internal signals will always be considered.

Multi-phase sampling

In a multi-phase sampling operation, the actual sampling operation is accompanied by a discrete-time filter, which is usually a switched-capacitor filter. As indicated by (5.20), an output signal Yr(f) can be associated with each phase r of the clock signal. The output signal of such a sampled-data system is in the frequency domain generally calculated using transmission functions for the different phases of the clock signal [59]:

Yr(f) = Σ_{m=−∞}^{∞} Tr(f; m) X(f − mfs),    (5.104)

with fs the clock frequency. When the sampling period is chosen small enough, only the components at m = 0 are important and the system becomes a time-invariant polyphase filter, as shown in Example 5.3. With the techniques presented in [45], the transmission functions Tr(f; m) can easily be derived for a general sampled-data system. For example, for the filter of Fig. 5.5, these functions are as follows:

T1(f; m) = (sin(mπ/2)/(mπ)) e^{jmπ/2} + (sin ν/(2ν)) · ((−1)^m e^{jν})/(1 + γ − γe^{−j4ν}) − (sin ν/ν) · e^{jν}/(2(1 + γ)),    (5.105a)

T2(f; m) = (sin ν/(2ν)) · ((−1)^m e^{−jν})/(1 + γ − γe^{−j4ν}),    (5.105b)

with ν = πf/(2fs) and γ = C2/C1.

Similar to the basic sampling operation discussed above, the equations of the multi-phase sampling scheme can now be translated into a PHTM:

H̃L_{i,j}(f) =
  Tr(f + fk; (fk − fl)/fs) C0,                                  |fk − fl| = nfs ∧ fk + fl ≠ mfs,
  Tr(f + fk; (fk + fl)/fs) C1,                                  |fk − fl| ≠ nfs ∧ fk + fl = mfs,
  Tr(f + fk; (fk − fl)/fs) C0 + Tr(f + fk; (fk + fl)/fs) C1,    |fk − fl| = nfs ∧ fk + fl = mfs,
  u(f) [Tr(f; −n) C0 + Tr(f; n) C1],                            fl − fk = nfs ∧ fk + fl = nfs,
  Tr(f + fk; 0) C0,                                             fk = fl,
  0,                                                            otherwise,    (5.106)


with r the phase of the clock signal. To analyze the total output signal, the sum of all corresponding parts over the different clock phases should be calculated.

Expression (5.106) can be used as a template for the generic sampling function λ. It makes it possible to take into account details of the actual implementation of a filtering operation, for instance, as a switched-capacitor filter.

5.4.4 Implementation and experimental results of the generic behavioral model

A custom MATLAB toolbox

To implement the generic behavioral model described above, a custom toolbox has been developed in MATLAB in this work to manipulate the different elements of the Phase-Frequency Transfer model (PFT model) presented in Section 5.3. This implementation choice is guided by several considerations:
• Many operations of the PFT model are defined as operations on matrices, which are well supported within MATLAB. As a result, efficient execution of these operations is achieved in a limited development time.
• On the other hand, some flexibility is required to develop custom objects. To this end, structures are used for the fundamental data structures of the model, like polyphase bases, Polyphase Harmonic Signals (PHSs) and Polyphase Harmonic Transfer Matrices (PHTMs). Tree-like structures are used to implement operations on these objects.
• The relatively limited complexity of the generic behavioral model and its use for high-level architectural simulations make the simulation time less critical. Therefore, although a completely custom implementation (e.g., in C/C++) would result in more time-efficient simulations, the advantage of such an approach is only marginal.
Once the library has been written, the generic behavioral model is easily expressed as a MATLAB script. Parameters needed for the specialization phase of the analysis methodology are provided as inputs.
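Although the toolbox itself is written in MATLAB, the kind of objects it manipulates can be sketched in a few lines. The Python classes and field names below are invented for illustration; the actual toolbox uses MATLAB structures.

```python
import numpy as np

# Illustrative sketch of the fundamental data structures: a Polyphase
# Harmonic Signal (components indexed over phases and frequencies, stored
# flattened) and a PHTM acting on it as a matrix-vector product.

class PHS:
    def __init__(self, phases, freqs, comps):
        self.phases = phases                            # number of phases
        self.freqs = list(freqs)                        # frequency set (Hz)
        self.comps = np.asarray(comps, dtype=complex)   # flattened components

class PHTM:
    def __init__(self, matrix):
        self.matrix = np.asarray(matrix, dtype=complex)

    def apply(self, phs):
        """Linear input-output relation: a matrix-vector product."""
        return PHS(phs.phases, phs.freqs, self.matrix @ phs.comps)

    def cascade(self, other):
        """PHTM of 'other' followed by this PHTM."""
        return PHTM(self.matrix @ other.matrix)

s = PHS(2, [900e6, 950e6], [1.0, 0.1, 0.0, 0.0])
h = PHTM(np.diag([1.0, 0.01, 1.0, 0.01]))   # attenuate the image components
out = h.apply(s)
```

Cascading building blocks then composes naturally, mirroring how the toolbox chains PHTMs along an architecture.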
All intermediate signals and the output signals are available for further processing, for instance to generate graphical representations like the one shown in Fig. 5.6.

Examples

Some properties of the Phase-Frequency Transfer model (PFT model) are now demonstrated using an architecture similar to the one shown in Fig. 5.17. In the examples, the different parts are chosen as follows:
• High-frequency filter ζ: a low-noise amplifier with differential output, followed by a 2-to-4 phase-converter
• Down-conversion µ: a polyphase mixer consisting of four mixers operating on differential-mode signals, cascaded with an 8-to-4 phase converter


Fig. 5.17. Schematic representation of the RF front-end architecture used in the examples.

• Low-frequency filter ξ: two low-pass filters, one in each signal path
• Sampling λ: a basic sampling operation of both the I- and Q-signals
The information signal is a GMSK signal at 900 MHz. Usually, an architecture is analyzed in two steps. First, the frequencies are chosen over a wide range with a small resolution. The signals are almost ideal and only a few non-idealities of the building blocks are taken into account. This simplified model can be used to determine the important frequencies. Then, the system and the signals are refined and only a limited number of frequencies is selected. If necessary, the frequency axis can be extended. A similar procedure can be followed to determine the important phases.

IRR degradation

A major cause of degradation of the IRR in polyphase mixers is the mismatch between the mixers. Figure 5.18 shows the results of simulations using the implementation of the framework. The mean value of the mixers' gain is 5 dB. This type of analysis can be used, for example, to derive a specification for the maximum gain mismatch that can be allowed.

Within the newly developed PFT model, the linearized behavior of the different building blocks is represented with Polyphase Harmonic Transfer Matrices (PHTMs). The elements of such a PHTM act as transfer functions from components of the input signal to components of the output signal. These components occur in different phases and around different frequencies. A non-ideality like mixer mismatch is easily represented using the polyphase base with common- and differential-mode signals. The information signal, however, appears in a symmetrical component (e.g., the positive sequence) whereas the image signals are present in other symmetrical components. Hence, the formal base transformation of the PFT model makes it easy to find the effects of the mismatch on the information signals. Using the manipulation properties of PHTMs discussed in Section 5.3.3, and symbolic

[Plot: IRR [dB] versus σgain (% of mean value).]

Fig. 5.18. IRR as a function of the mismatch between the gain of the mixers. For each value of σgain 100 samples are used to calculate the IRR.

calculations, a symbolic expression for the image rejection ratio is derived with the structures and operations defined in the custom MATLAB toolbox:

IRR = | G1,1/(1 + j2πντ1,1) + G1,2/(1 + j2πντ1,2) + G2,1/(1 + j2πντ2,1) + G2,2/(1 + j2πντ2,2) | / | G1,1/(1 + j2πντ1,1) − G1,2/(1 + j2πντ1,2) + G2,1/(1 + j2πντ2,1) − G2,2/(1 + j2πντ2,2) |,    (5.107)

with Gq,p and τq,p the (mean) gain and time constant of mixer (q, p), respectively. This expression also clearly shows the influence of mixer mismatch on the IRR: in the ideal case, the different contributions in the denominator cancel each other.

Of course, figures like the one shown in Fig. 5.18 can also easily be derived using one of the methods mentioned in Section 5.2. The PFT model, on the other hand, additionally explains the origin of the IRR as a result of transfers between phases and frequencies. Figure 5.19 depicts schematic representations of the individually calculated transfers in the down-conversion part before and after the mixing stage and phase-converter of Fig. 5.15. They correspond to the elements of the PHTMs (the transfer at the fundamental frequency is taken as the value). Two frequency transfers are of importance: from the central frequency fd (900 MHz) to fout (50 MHz) and from the image frequency fim (950 MHz) to fout. The first one is indicated by the paths with the thick dark lines in Figs. 5.19a and 5.19c. In this ideal case with ideally matched mixers, the signal is split up over two phases by the mixing stage, but they are recombined by the phase-converter stage. The signal component at fim is also split up, but ideal recombination leaves no component after the second stage in phase 4. This has been indicated by the dashed light lines in Fig. 5.19a.
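The trend of Fig. 5.18 can be reproduced in spirit with a small Monte-Carlo evaluation of (5.107). The normalized frequency, sample count and random seed below are illustrative assumptions; only the 5 dB mean gain and the 100 samples per point come from the text.

```python
import numpy as np

# Monte-Carlo sketch of expression (5.107): gains G[q, p] are drawn with a
# relative mismatch sigma around a 5 dB mean, while the time constants are
# kept equal so that only gain mismatch degrades the IRR.

G0 = 10 ** (5 / 20)        # 5 dB mean conversion gain
NU_TAU = 0.01              # normalized frequency * time constant (assumed)

def irr(G, nu_tau=NU_TAU):
    """Evaluate (5.107) for a 2x2 array of mixer gains, equal time constants."""
    t = G / (1 + 2j * np.pi * nu_tau)
    num = t[0, 0] + t[0, 1] + t[1, 0] + t[1, 1]
    den = t[0, 0] - t[0, 1] + t[1, 0] - t[1, 1]
    return np.abs(num) / np.abs(den)

def mean_irr_db(sigma_rel, n=100, seed=1):
    """Average IRR in dB over n mismatch samples (cf. Fig. 5.18)."""
    rng = np.random.default_rng(seed)
    vals = [20 * np.log10(irr(G0 * (1 + sigma_rel * rng.standard_normal((2, 2)))))
            for _ in range(n)]
    return float(np.mean(vals))
```

As the expression predicts, more gain mismatch leaves a larger uncancelled residue in the denominator and hence a lower image rejection ratio.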

[Figure 5.19 contains three phase-frequency transfer diagrams for the mixing stage and the 8-to-4 converter: (a) with ideal mixers; (b) with mixers affected by mismatch; (c) low-frequency numerical values for the transfers in the non-ideal case.]

Fig. 5.19. Schematic and numerical representation of the ideal and non-ideal phase transfers from the desired frequency fd and the image frequency fim to the required output frequency fout.

However, when there is a mismatch between the mixers of the polyphase mixing stage, the signal components leak away to other phases, resulting in parasitic components after the phase converter in phase 4. The main parasitic signal is due to leakage of the signal at the image frequency fim. This transfer has been indicated by the light lines in Figs. 5.19b and 5.19c.


To increase the IRR, the signal in phase 2 needs to be filtered out before the mixing stage (i.e., by selecting another phase converter in the high-frequency part). The PFT model, however, also suggests another solution. By examining Fig. 5.19c, one can see that the main parasitic signal flow is via phase 4 whereas the main information signal flow is via phase 8. Therefore, an improvement of the IRR may also be achieved by using another 8-to-4 converter which attenuates the fourth phase.

Weak nonlinearities

When extending the model to weak nonlinearities, a Polyphase Distortion Tensor (PDT) is added in parallel to each Polyphase Harmonic Transfer Matrix (PHTM) that models the linear behavior of a mixer. The elements of this PDT indicate the transfers of products of components of different phases and frequencies in the input signal towards components of the output signal. Hence, they represent undesired extra transfers in the phase-frequency space.

The main nonlinear distortion in a mixer is represented by the IP3 figure of merit. To examine the effect of a finite IP3 of the mixers, a third-order PDT has been calculated according to expressions (5.64), (5.65) and (5.77) of Example 5.10. The input signals are listed in Table 5.5 and the IP3 of the mixers equals 3 dBm with a conversion gain of 5 dB. A scheme of the phase-frequency transfers can be derived similar to the ones shown in Fig. 5.19c. These non-ideal transfers are shown in Fig. 5.20. One finds that a main distortion component of −109 dBm is situated after the mixing stage in phase 4. The component in phase 8, on the other hand, is 7.5 dB lower. This result suggests that the phase converter should mainly convert the signal component in phase 8 to phase 4 at the output of the complete system. Again, this is achieved by selecting the appropriate filter after the polyphase mixing stage.

Sampling operation

For the sampler present in the A/D converters of Fig.
5.17, a sample-and-hold operation has been selected which converts the continuous-time signals to discrete time. Inside the PFT model, this operation is represented by a Polyphase

Table 5.5. Input signals for weakly nonlinear analysis.

Input frequency      Type         Power
f1 = 900 MHz         GMSK         −94 dBm
f2 = 900.8 MHz       Sinusoidal   −43 dBm
f3 = 901.6 MHz       Sinusoidal   −43 dBm
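The two-tone choice in Table 5.5 places a third-order intermodulation product exactly on the information signal, since 2f2 − f3 = 900 MHz. The sketch below lists the in-band third-order products and applies the textbook input-referred rule P_IM3 = 3·P_in − 2·IP3 (in dBm); using this generic rule here, without the conversion gain and the architecture details, is an assumption for illustration only.

```python
# Third-order intermodulation products for the two-tone test of Table 5.5.
# The power estimate uses the standard input-referred IM3 rule, which is a
# simplification of the book's tensor-based distortion analysis.

f1, f2, f3 = 900.0e6, 900.8e6, 901.6e6     # input frequencies (Table 5.5)
p_in_dbm = -43.0                           # power of each sinusoidal tone
ip3_dbm = 3.0                              # IP3 of the mixers (from the text)

# Close-in third-order products of the two sinusoidal tones:
im3_products = sorted({2 * f2 - f3, 2 * f3 - f2})

# Input-referred IM3 power estimate (excludes conversion gain):
p_im3_dbm = 3 * p_in_dbm - 2 * ip3_dbm
```

The lower product lands exactly on the 900 MHz GMSK signal, which is why this tone spacing is a meaningful stress test for the mixer linearity.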


Fig. 5.20. Schematic representation of the non-ideal phase-frequency transfers in the mixer when a distortion tensor is added.


Fig. 5.21. Matrix box plot of PHTM from the input of the down-converter to the output of the sampling part.

Harmonic Transfer Matrix (PHTM), which has been defined in Section 5.4.3. A graphical representation for such a PHTM is the matrix box plot introduced in Section 5.3.3. First, the PHTM is transformed so that it describes the transfers for signals defined with respect to a polyphase base with symmetrical components. Then, the matrix box plot makes it easy to identify the dominant information flows throughout the architecture: the size and darkness of the boxes in the matrix box plot indicate the magnitude of the elements of the PHTM. An example is shown in Fig. 5.21 for the PHTM describing the transfer from the high-frequency part to the output of the system. As expected, there is signal folding from the high frequency f5 to the low frequency f1. This non-ideal transfer indicates that either the selected sampling frequency is too low or that the low-pass filter has too large a cut-off frequency. Selection of another filter characteristic is therefore expected to be a remedy to improve the performance.
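The folding visible in the matrix box plot can be illustrated numerically: for an ideal sampler, input components whose frequencies differ by a multiple of fs map onto the same output frequency, following the shifted-spectrum sums (5.100)–(5.101). The frequency grid and component values below are illustrative, not taken from the example architecture.

```python
import numpy as np

# Magnitude matrix behind a matrix box plot for an idealized sampler:
# entry (k, l) is the transfer from input frequency f_l to output
# frequency f_k; it is 1 whenever the frequencies differ by a multiple
# of the sampling frequency. All numeric values are illustrative.

fs = 100e6                                                  # sampling frequency
freqs = np.array([10e6, 30e6, 50e6, 90e6, 110e6, 130e6])    # f1..f6
x = np.array([1.0, 0.2, 0.1, 0.05, 0.3, 0.02])              # input components

H = np.zeros((len(freqs), len(freqs)))
for k, fk in enumerate(freqs):
    for l, fl in enumerate(freqs):
        r = abs(fk - fl) % fs
        if r < 1.0 or r > fs - 1.0:    # difference is (close to) a multiple of fs
            H[k, l] = 1.0

y = H @ x
# Folding: f5 = 110 MHz lands on f1 = 10 MHz (their difference is exactly fs),
# so the output at f1 contains both the wanted and the folded component.
```

Inspecting |H| this way is exactly what the matrix box plot visualizes: a dark off-diagonal box at (f1, f5) flags the parasitic folding path.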


5.5 Conclusions

Designing front-end architectures requires an efficient systematic analysis approach to evaluate the performance of the system and to identify possible failures. By developing a generic behavioral model, a framework is provided for the exploration of different architectures that perform and analyze operations at different abstraction levels. The fundamental RF receiver front-end is an example of a class of architectures in which most of the operations encountered in front-ends are present.

To fulfill the specific requirements of generic behavioral models for use in the top-down high-level design flow defined in Chap. 3, a new modeling framework has been developed: the Phase-Frequency Transfer model (PFT model). The data structures and operations of this model make it possible to represent the ideal and non-ideal behavior of the major building blocks of a wide range of front-end architectures. These data structures and operations are:

Signals. Signals are represented as Polyphase Harmonic Signals (PHSs), so that a uniform model is used for all signals in the architecture, which can occur in one or more parallel signal paths or phases. This representation includes a duality: it reflects either the actual, real signals or the information carried by the polyphase signal. This property allows one to first build a model by specifying the effects on the physical signals, and then analyze the effects on the information flow.

Linear operation. The linearized behavior of the building blocks is modeled via Polyphase Harmonic Transfer Matrices (PHTMs). The input–output relationship comes down to the calculation of a matrix-vector product. Also, this representation indicates the transfers of either real or information signals between different frequencies and/or phases.

Nonlinear operation. Weak nonlinearities are translated in the new framework to Polyphase Distortion Tensors (PDTs).
This complete frequency-domain description of nonlinearities is used to represent parasitic transfers like harmonic distortion and intermodulation components. The characteristic equation of a building block is mathematically defined as the calculation of inner products between the tensor and the input vector.

Noise signals. To represent the characteristics of noise signals, Polyphase Harmonic Power Spectral Density Matrices (PHPSDMs) have been introduced. Using the linearized approximation of a system, a method has been derived to obtain the characteristics of the noise signals at the output and hence the degradation of the signal-to-noise ratio.

The templates of the generic functions in the presented generic behavioral model translate properties of building blocks like polyphase mixers, filters or samplers to the concepts of the Phase-Frequency Transfer model. As a result, the signals throughout the architecture can be determined and, via the dual character of both the signal and building block representations, the effects on the degradation are represented as parasitic transfers between phase-frequency


planes. This information can be used to modify the architecture so that the dominant (wanted) flows are only slightly affected whereas the parasitic flows are attenuated or eliminated, improving the performance.

The generic behavioral modeling strategy can be used to represent a large class of architectures at different levels of detail. Information about performance degradation and failure is provided. The next part shows how these aspects are incorporated into the design strategy of Fig. 3.10 on p. 66 to systematically design analog systems.

References

[1] A. A. Abidi. RF CMOS Comes of Age. IEEE Journal of Solid-State Circuits, 39(4):549–561, Apr. 2004.
[2] G. Arfken. Mathematical Methods for Physicists. Academic Press, San Diego, 1985.
[3] R. N. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill Book Company, New York, 1978.
[4] L. W. Couch II. Digital and Analog Communication Systems. Prentice-Hall, New Jersey, 1997.
[5] J. Crols, S. Donnay, M. Steyaert, and G. Gielen. A High-Level Design and Optimization Tool for Analog RF Receiver Front-Ends. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 550–553, San Jose, Nov. 1995.
[6] J. Crols and M. Steyaert. CMOS Wireless Transceiver Design. Springer, Dordrecht, 1997.
[7] J. Crols and M. S. J. Steyaert. A Single-Chip 900 MHz CMOS Receiver Front-End with a High Performance Low-IF Topology. IEEE Journal of Solid-State Circuits, 30(12):1483–1492, Dec. 1995.
[8] J. Crols and M. S. J. Steyaert. Low-IF Topologies for High-Performance Analog Front Ends of Fully Integrated Receivers. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 45(3):269–282, Mar. 1998.
[9] P. Dobrovolný, G. Vandersteen, P. Wambacq, and S. Donnay. Analysis and Compact Behavioral Modeling of Nonlinear Distortion in Analog Communication Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(9):1215–1227, Sept. 2003.
[10] P. Dobrovolný, G. Vandersteen, P. Wambacq, and S. Donnay. Analysis and White-Box Modeling of Weakly Nonlinear Time-Varying Circuits. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 624–629, Munich, Mar. 2003.
[11] P. Dobrovolný, P. Wambacq, G. Vandersteen, S. Donnay, M. Engels, and I. Bolsens. Generation of Multicarrier Complex Lowpass Models of RF ICs. In IEEE MTT-S Int. Microwave Symp., volume 1, pages 419–422, Phoenix, May 2001.


[12] S. Donnay and G. Gielen. System-Level Analysis of RF Transceiver Integrated Circuits. In IEEE Southwest Symp. on Mixed-Signal Design, pages 37–42, Tucson, Apr. 1999.
[13] A. Dunlop, A. Demir, P. Feldmann, S. Kapur, D. Long, R. Melville, and J. Roychowdhury. Tools and Methodology for RF IC Design. In IEEE/ACM Design Automation Conf., pages 414–420, San Francisco, June 1998.
[14] P. Feldmann and J. Roychowdhury. Computation of circuit waveform envelopes using an efficient, matrix-decomposed harmonic balance algorithm. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 295–300, San Jose, Nov. 1996.
[15] J. Fenk. RF-Trends in Mobile Communication. In IEEE European Solid-State Circuits Conf., pages 21–27, Estoril, Sept. 2003.
[16] F. Fernández, A. Rodríguez-Vázquez, J. L. Huertas, and G. G. E. Gielen, editors. Symbolic Analysis Techniques: Applications to Analog Design Automation. Wiley, New York, 1997.
[17] A. Gelb and W. E. V. Velde. Multiple-Input Describing Functions and Nonlinear System Design. McGraw-Hill, New York, 1968.
[18] M. J. Gingell. Single Sideband Modulation using Sequence Asymmetric Polyphase Networks. Electrical Communication Magazine, 48(1–2):21–25, 1973.
[19] I. A. Glover and P. M. Grant. Digital Communications. Prentice-Hall, Essex, 2004.
[20] M. Gourary, S. Ulyanov, M. Zharov, and S. Rusakov. New Methods for Speeding up Computation of Newton Updates in Harmonic Balance. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 61–64, San Jose, Nov. 1999.
[21] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.
[22] L. P. Huelsman, editor. Linear Circuit Analysis. In W.-K. Chen, editor, The Circuits and Filters Handbook, Section IV. CRC Press, Salem, 1995.
[23] M. C. Jeruchim, P. Balaban, and K. S. Shanmugan. Simulation of Communication Systems. Plenum, New York, 1992.
[24] K. S. Kundert. Introduction to RF Simulation and Its Application. IEEE Journal of Solid-State Circuits, 34(9):1298–1319, Sept. 1999.
[25] K. S. Kundert and A. Sangiovanni-Vincentelli. Simulation of Nonlinear Circuits in the Frequency Domain. IEEE Trans. on Computer-Aided Design, 5(4):521–535, Oct. 1986.
[26] K. S. Kundert, J. K. White, and A. Sangiovanni-Vincentelli. Steady-State Methods for Simulating Analog and Microwave Circuits. Kluwer Academic, Norwell, 1990.
[27] Kyeongho Lee, Joonbae Park, Jeong-Woo Lee, Seung-Wook Lee, Hyung Ki Huh, Deog-Kyoon Jeong, and Wonchan Kim. A Single-Chip 2.4GHz Direct-Conversion CMOS Receiver for Wireless Local Loop using

References

[28] [29] [30] [31]

[32]

[33]

[34]

[35]

[36]

[37] [38]

[39] [40]

[41]

207

Multiphase Reduced Frequency Conversion Technique. IEEE Journal of Solid-State Circuits, 36(5):800–809, May 2001. E. A. Lee and T. M. Parks. Dataflow Process Networks. Proceedings of the IEEE, 83(5):773–799, May 1995. T. H. Lee. The Design of CMOS Radio-Frequency Integrated Circuits. Cambridge University Press, Cambridge, 1998. S. Lipschutz. Schaum’s Outline of Theory and Problems of Linear Algebra. McGraw-Hill, New York, 1968. D. Long, R. Melville, K. Ashby, and B. Horton. Full-Chip Harmonic Balance. In IEEE Custom Integrated Circuits Conf., pages 379–382, Santa Clara, CA, May 1997. E. Martens and G. Gielen. A Phase–Frequency Transfer Description of Analog and Mixed–Signal Front–End Architectures for System–Level Design. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 436–441, Paris, Feb. 2004. E. Martens and G. Gielen. Symbolic Analysis of Front-End Architectures Using Polyphase Harmonic Transfer Matrices. In IEEE Int. Workshop on Symbolic Analysis and Applications in Circuit Design, pages 12–15, Wroclaw, Poland, Sept. 2004. E. Martens and G. Gielen. A Behavioral Model of Sampled-Data Systems in the Phase-Frequency Transfer Domain for Architectural Exploration of Transceivers. In IEEE Int. Symp. on Circuits and Systems, pages 987–990, Island of Kos, Greece, May 2006. E. S. J. Martens and G. G. E. Gielen. Phase-Frequency Transfer Model of Analogue and Mixed-Signal Front-End Architectures for System-Level Design. IEE Proceedings Computers and Digital Techniques, 152(1): 45–52, Jan. 2005. K. W. Martin. Complex Signal Processing is Not Complex. IEEE Trans. on Circuits and Systems—I: Regular Papers, 51(9):1823–1836, Sept. 2004. MathWorks. Communications Toolbox User’s Guide. 2007. http://www. mathworks.com/access/helpdesk/help/pdf_doc/comm/comm.pdf. R. C. Melville, P. Feldmann, and J. Roychowdhury. Efficient Multi-tone Distortion Analysis of Analog Integrated Circuits. 
In IEEE Custom Integrated Circuits Conf., pages 241–244, Santa Clara, CA, May 1995. D. Middleton. An Introduction to Statistical Communication Theory. Peninsula Publishing, Los Altos, CA, 1987. H. Nyquist. Certain Topics in Telegraph Transmission Theory. In Trans. of the American Institute of Electrical Engineers, pages 617–644, New York, Feb. 1928. Peng Li and L. T. Pileggi. Efficient Per-Nonlinearity Distortion Analysis for Analog and RF Circuits. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 22(10):1297–1309, Oct. 2003.

208

5 Frequency-Domain Generic Behavioral Models

[42] Peng Li and L. T. Pileggi. Efficient Harmonic Balance Simulation Using Multi-Level Frequency Decomposition. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 677–682, San Jose, Nov. 2004. [43] Pengfei Zhang, L. Der, Dawei Guo, I. Sever, T. Bourdi, C. Lam, A. Zolfaghari, J. Chen, D. Gambetta, Baohong Cheng, S. Gowder, S. Hart, L. Huynh, T. Nguyen, and B. Razavi. A Single-Chip DualBand Direct-Conversion IEEE 802.11a/b/g WLAN Transceiver in 0.18µm CMOS. IEEE Journal of Solid-State Circuits, 40(9):1932–1939, Sept. 2005. [44] J. G. Proakis. Digital Communications. McGraw-Hill, New York, 2001. [45] J. Rabaey. A Unified Computer Aided Design Technique for Switched Capacitor Systems in the Time and the Frequency Domain. Ph.D. thesis, Katholieke Universiteit Leuven, Leuven, 1983. [46] B. Razavi. RF IC Design Challenges. In IEEE/ACM Design Automation Conf., pages 408–413, San Francisco, June 1998. [47] B. Razavi. RF CMOS Transceivers for Cellular Telephony. IEEE Communications Magazine, 41(8):144–149, Aug. 2003. [48] J. C. Rudell, J.-J. Ou, T. B. Cho, G. Chien, F. Brianti, J. A. Weldon, and P. R. Gray. A 1.9-GHz Wide-Band IF Double Conversion CMOS Receiver for Cordless Telephone Applications. IEEE Journal of SolidState Circuits, 32(12):2071–2088, Dec. 1997. [49] S. Samadian, R. Hayashi, and A. A. Abidi. Demodulators for a ZeroIF Bluetooth Receiver. IEEE Journal of Solid-State Circuits, 38(8): 1393–1396, Aug. 2003. [50] W. Sansen. Distortion in Elementary Transistor Circuits. IEEE Trans. on Circuits and Systems—II: Analog and Digital Signal Processing, 46(3):315–325, Mar. 1999. [51] W. M. C. Sansen. Analog Design Essentials. Springer, Dordrecht, 2006. [52] J. E. Savage. Models of Computation. Exploring the Power of Computing. Addison-Wesley, Reading, 1998. [53] H. Taub and D. L. Schilling. Principles of Communication Systems. McGraw-Hill, New York, 1987. [54] R. Telichevesky, K. Kundert, I. Elfadel, and J. White. Fast Simulation Algorithms for RF Circuits. 
In IEEE Custom Integrated Circuits Conf., pages 437–444, San Diego, May 1996. [55] A. Ushida and L. O. Chua. Frequency-Domain Analysis of Nonlinear Circuits Driven by Multi-Tone Signals. IEEE Trans. on Circuits and Systems, 31(9):766–779, Sept. 1984. [56] P. Vanassche, G. Gielen, and W. Sansen. Symbolic Modeling of Periodically Time-Varying Systems Using Harmonic Transfer Matrices. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 21(9):1011–1024, Sept. 2002. [57] P. Vanassche, G. Gielen, and W. Sansen. Systematic Modeling and Analysis of Telecom Frontends and their Building Blocks. Springer, Dordrecht, 2005.

References

209

[58] G. Vandersteen, P. Wambacq, Y. Rolain, P. Dobrovoln´ y, S. Donnay, M. Engels, and I. Bolsens. A methodology for efficient high-level dataflow simulation of mixed-signal front-ends of digital telecom transceivers. In IEEE/ACM Design Automation Conf., pages 440–445, Los Angeles, June 2000. [59] J. Vandewalle, H. J. De Man, and J. Rabaey. Time, Frequency, and z-Domain Modified Nodal Analysis of Switched-Capacitor Networks. IEEE Trans. on Circuits and Systems, 28(3):186–195, Mar. 1981. [60] I. Vassiliou and A. Sangiovanni-Vincentelli. A Frequency-Domain, Volterra Series-Based Behavioral Simulation Tool for RF Systems. In IEEE Custom Integrated Circuits Conf., pages 21–24, San Diego, May 1999. [61] F. Veers´e. Efficient iterative time preconditioners for harmonic balance RF circuit simulation. In IEEE/ACM Int. Conf. on Computer-Aided Design, pages 251–254, San Jose, Nov. 2003. [62] V. Volterra. Theory of Functionals and of Integral and IntegroDifferential Equations. Dover, New York, 1959. [63] P. Wambacq, P. Dobrovoln´ y, S. Donnay, M. Engels, and I. Bolsens. Compact modeling of nonlinear distortion in analog communication circuits. In IEEE/ACM Design, Automation and Test in Europe Conf. and Exhibition, pages 350–354, Paris, Mar. 2000. [64] P. Wambacq and W. Sansen. Distortion Analysis of Analog Integrated Circuits. Kluwer Academic, Boston, 1998. [65] P. Wambacq, G. Vandersteen, Y. Rolain, P. Dobrovoln´ y, M. Goffioul, and S. Donnay. Dataflow Simulation of Mixed-Signal Communication Circuits Using a Local Multirate, Multicarrier Signal Representation. IEEE Trans. on Circuits and Systems—I: Fundamental Theory and Applications, 49(11):1554–1562, Nov. 2002. [66] Xin Li, Peng Li, Yang Wu, and L. T. Pileggi. Analog and RF Circuit Macromodels for System-Level Analysis. In IEEE/ACM Design Automation Conf., pages 478–483, Anaheim, June 2003. [67] Xin Li, Yang Xu, Peng Li, P. Gopalakrishnan, and L. T. Pileggi. 
A Frequency Relaxation Approach for Analog/RF System-Level Simulation. In IEEE/ACM Design Automation Conf., pages 842–847, San Diego, June 2004.

Part III

Top-Down Heterogeneous Synthesis

6 Top-Down Heterogeneous Optimization

6.1 Introduction

The modeling strategy based on the development of generic behavioral models, as described in the previous part, makes it possible to represent systems of several classes of architectures at different levels of abstraction. Interaction schemes derived from simulation approaches in both the time and the frequency domain can be adopted, as explained in the previous chapters. An efficient way to represent designs and to calculate their performance values, however, is only one part of a design strategy. Indeed, the question remains which architecture and parameter values to select during the specialization of the generic behavioral model. Therefore, the synthesis strategy presented in this chapter completes the design strategy based on generic behavior introduced in Chap. 3. First, the fundamental properties of a synthesis strategy suited for the design flow based on generic behavior are summarized. Based on these requirements, a new top-down heterogeneous optimization algorithm is presented in this book. Basically, it is an evolutionary method suited specifically for top-down design, which develops a simple high-level solution into a complex system at a low abstraction level. It provides a systematic methodology to choose transformations, refinement and translation operations. To verify the method, a custom C++ library has been written and used in a prototype EDA tool for some classes of A/D converters.

6.2 Objectives for synthesis strategy

Different synthesis strategies are appropriate at different phases of the analog design process. For example, designing a basic op amp topology requires a different approach than constructing an entire mixed-signal system with several basic building blocks like A/D converters, filters and mixers. Therefore, the objectives and basic properties of the synthesis strategy developed in this work are defined first.


6.2.1 Fundamental requirements

As stated in Chap. 3, the design strategy based on generic behavior operates mainly on the behavioral description level, but at different levels of abstraction. The complexity of the systems that are dealt with varies from basic blocks like ∆Σ modulators to entire RF front-end architectures, corresponding to the class represented by the generic behavioral model. This modeling strategy poses a limitation on the synthesis approach: the algorithm can only select fundamental operations that move the design point around in those areas of the abstraction–description plane of Fig. 2.3 that are reachable by the class represented by the generic behavioral model. The smaller the available space, the fewer operations are available, which makes the choice easier. The main goal of the synthesis strategy presented in this work is to convert a functional description of an analog or mixed-signal system into a behavioral description of a specific topology at a lower abstraction level. In the design flow of Fig. 3.10 on p. 66, this corresponds to determining the proper specialization of the generic behavioral model. More specifically, seven objectives form the targets of the new synthesis approach.

1. The optimization strategy adopts a top-down design methodology starting from a functional description level to cope with systems with a higher complexity than basic blocks [2].
2. New topologies can be created as the result of the optimization process.
3. The algorithm performs an optimization of both topology and parameter values concurrently.
4. The process deals with different types of performance characteristics generated by various types of performance functions.
5. Multiple optimization objectives with different importance are taken into account and used to drive the optimization process towards an amelioration of the most important violated objective.
6. There are different driving forces to come up with transformations that alter one or multiple designs to improve the performance.
7. The strategy defines several tasks which can be executed in parallel.

These objectives are the foundations of the top-down heterogeneous genetic optimization algorithm presented as synthesis strategy in this work.

Top-down design

The design of any system always starts by defining the required functionality. For a single building block this is usually straightforward: e.g., an OTA operates like a voltage-controlled current source, a mixer should multiply two signals and a filter has a specific transfer function. A larger system, however, combines several fundamental operations. Hence, an explicit declaration of the functionality is more commonly found when designing complex systems, whereas for small systems like single blocks the functionality is implicitly
assumed. For such small systems, the designer selects a topology (or architectural template) with known functionality that fits the purpose. For larger systems, explicitly exploiting the functionality is a more efficient way to deal with the increased complexity. As a result, a top-down approach is the first choice. Indeed, the overall functionality is split up into basic operations corresponding to smaller entities or building blocks. Their operation is then gradually made less ideal by adding more non-ideal effects to the models in various refinement steps. A complete bottom-up approach would directly select basic entities and combine them to achieve the required functionality. Consequently, the fundamental operation is more difficult to derive and the causes of failures are harder to locate. The synthesis strategy developed in this work starts by translating the functional description to the behavioral description level by selecting the appropriate generic behavioral model and specification. A strict top-down mechanism is followed so that intermediate designs can only become more complex, i.e., further divided into smaller pieces or modeled with more non-idealities. Consequently, the lack of simplification operations in this methodology avoids looping around in the abstraction–description plane.

Topology generation

During the design process, analog designers usually consult some library of basic topologies, whether explicit or implicitly stored in their memory. When the functionality of the system to design consists of several elementary operations, however, the blocks in such a database can only be used to represent parts of the entire system. Hence, a synthesis strategy suited for high-level design should either select a template (e.g., of a front-end architecture) or create the architecture itself. Building blocks from the library are then used as basic entities. Hence, providing such a library corresponds to determining the kind of outputs of the high-level design process.

Several reasons favor the generation of topologies over selecting them. Generating the architecture explores the largest design space, possibly including more optimal solutions. Furthermore, only standard functionalities are covered by templates, whereas new applications may require combinations or a slightly adjusted behavior. Finally, creating the architecture adds the possibility to examine new approaches for existing problems. For example, even for a basic operation like filtering or A/D conversion, the synthesis approach can be used to investigate whether architectures other than the commonly adopted ones are suited for specific applications. Note that this may also depend on the technology.

Parametric and architectural optimization

Changing the architecture can have a larger influence on the performance of a system than only selecting new values for the parameters. Therefore, during
the synthesis process, transformation operations are applied to intermediate designs which alter the topology or its parameter values (or both). As a result, a flexible optimization approach has to be employed to deal with both types of transformations. Therefore, the actual operation of the transformation is hidden from the optimizer: it only knows that something in the intermediate design is changed, resulting in a different performance. So, during optimization, no difference regarding the effect on performance should be made between parametric and architectural transformations: the optimization process should be heterogeneous. On the other hand, during a normal design flow, selecting new parameters is expected to happen more often than tampering with the topology. Therefore, after an architectural modification, first the parameters should be moved towards their optimal values before any additional changes are made to the architecture. To achieve this, the probability of selecting a certain operation varies throughout the optimization process.

Heterogeneous performance characteristics

During the design of an analog system, various kinds of performance characteristics and figures of merit are encountered. Whereas most optimization methods assume real-valued numbers (e.g., power, area or noise figure), designers are also interested in issues like stability, the transfer function or settling behavior. Symbolic expressions, too, are sometimes used as performance metrics. Furthermore, different characteristics for the performance can be obtained from the different types of performance functions listed in Table 2.2. The ability to deal with heterogeneous performance characteristics also requires some flexibility of the optimization approach. During the execution of the algorithm, the different types of performance values should be handled in a similar way. Hence, the main procedure can only process them as abstract entities, and the actual performance functions are encapsulated [18]. This property marks another aspect of the heterogeneous character of the algorithm.

Heterogeneous objectives

The main targets in the low-level design of basic digital building blocks are threefold: minimal power, minimal delay and implementation of the actual function. For high-level digital design, other objectives are also important, like the number of data processing blocks, the size and kind of the memory elements, the representation of the binary words and the level of parallelism. The design of analog integrated circuits is always characterized by many objectives, e.g., regarding power, area, noise, distortion and functionality. Furthermore, the goal of an objective is not always to minimize or maximize a certain value. For example, values may only need to be high or low enough, lie within certain bounds, or have a particular value. So, both optimization targets formulated as minimization problems and objectives formulated as fuzzy rules
[4] are available in the synthesis approach. Again, this can be interpreted as a manifestation of the heterogeneous character of the method developed in this work. Nevertheless, the multifarious objectives are translated into a unified figure which serves several purposes. Objectives with different importance values (expressed by a weight factor) can be compared with each other to calculate a trade-off. The unified figure then acts as a generalized cost function. The selection of operations to alter the topology or to change parameter values can be driven by the most important violated optimization target. This concept resembles the way an analog designer often works: first trying to realize some functional characteristics and optimizing the remaining aspects afterwards. Other applications of the unified figure include detection of convergence and pruning of the design space.

Heterogeneous transformations

A final element of the heterogeneity of the synthesis approach is related to the origin of the operations that transform intermediate designs. The optimization methods of Table 3.1 include several ways to change the parameters of an intermediate design during an iterative procedure. For example, values can be obtained from an equation derived by an experienced designer or via symbolic analysis, from the deterministic calculation of an increment or decrement of parameter values, from a random perturbation, or by combining values of different designs of a population. Note that in order to include the last kind of transformation, operations must be performed on multiple designs at once instead of changing only a single design. Transformations that change the topology are mainly the result of the application of certain rules. Mathematical equivalences and design knowledge are exploited. Also, the combination of different topologies is defined as an architectural transformation. To allow input from these different sources, an open optimization framework is required. Consequently, it has some resemblance to design expert systems, since design knowledge can be formally defined, saved as transformations and used during the optimization process.

Concurrent tasks

Modern computer technology makes it possible to distribute tasks over a computer network and execute them in parallel so that the overall computational time is reduced [20]. Therefore, the design strategy defines several tasks which can be done concurrently. Since a major part of the CPU time is spent on calculating the performance metrics, different performance functions for one or several intermediate designs are evaluated concurrently. Furthermore, the comparison of performance values with objectives and the selection of transformations to improve the designs are, to a certain extent, performed simultaneously for multiple intermediate solutions.


6.2.2 An evolutionary approach

The requirements of a top-down design flow and the ability to create topologies make the synthesis approach developed in this work a member of the class of top-down creation methodologies elaborated on in Chap. 3. However, it also shares some characteristics with other classes. For example, transformations to change parameter values are based on the optimization methods of Table 3.1. The concept of the generic behavioral model can be considered a kind of high-level template to represent multiple architectures during the design process. Finally, the stochastic bottom-up approach based on evolutionary algorithms is adopted [11], but modified for a top-down design flow. The input of the methodology developed in this work is a set of equations describing the functionality at a particular abstraction level. For example, for an A/D converter, a high-level description only specifies the actual domain conversion, whereas lower-level descriptions include details about the actual conversion algorithm [9]. The output is an architecture with building blocks described by behavioral models at a sufficiently low abstraction level. Typically, the design flow continues by translating these models into another description level (e.g., a circuit topology). The synthesis process corresponds to the evolution of a simple creature into a complex one via transformations. The complexity of a design is indicated by its abstraction level: adding more details to the representation of the system increases the complexity. The functionality, on the other hand, is preserved during the process, ensuring that intermediate designs are mathematically correct, although the specifications might not be met. This property is a major difference between the presented top-down approach and bottom-up design methodologies, where adding new elements can always change the basic operation of the system, leading to invalid solutions.

Similar to most evolutionary approaches, multiple intermediate designs are processed and stored in a population during the optimization. The population introduces extra possibilities for parallel computing, enables concurrent exploration of different design areas and allows transformations to exchange information between temporary solutions. Furthermore, multiple transformations can be applied to a particular design, resulting in several new designs. The consequent growth of the population is balanced by a reduction resulting from an evaluation of the designs. Designs with a low performance compared to the other individuals of the population are removed once they have lost their ability to be transformed into better solutions.

The top-down design paradigm is easily combined with this population-based approach. From a certain design, multiple lower-level specializations can be investigated by creating several new individuals in the population. The original design, however, usually survives a few generations. As a result, bad specializations will die out while other parts of the population explore other possibilities without the need for simplification operations. This feature
reduces the number of transformations that must be available and hence the start-up time of the design process. As indicated in Chap. 3, evolutionary approaches offer great flexibility and can deal with various types of system representations, performance metrics, optimization targets and genetic operations. Consequently, the optimization algorithm based on an evolutionary process is suited to handle the different forms of heterogeneity required for an efficient high-level design methodology. Heterogeneous performance characteristics and objectives are immediately plugged in via different kinds of performance functions and multifarious evaluations. Heterogeneous transformations and the optimization of both architecture and parameters are realized by selecting the proper genetic operations. To conclude, a top-down evolutionary process possesses multiple properties which fit well with the requirements of a high-level design methodology for analog and mixed-signal systems.

6.3 Top-down heterogeneous optimization methodology

The core of the synthesis strategy tackles the complexity of designing analog and mixed-signal systems by employing a kind of genetic evolutionary algorithm. Furthermore, several types of components are defined within the optimization framework to fulfill specific tasks. Together, they perform the top-down heterogeneous optimization.

6.3.1 Overview of methodology

Figure 6.1 gives an overview of the configuration of the synthesis framework used to apply the top-down heterogeneous optimization approach. Five main components can be distinguished:

• Population: the set of intermediate design solutions
• Calculators: the manifestation in the optimization framework of performance functions
• Evaluators: components used to compare performance values with specifications and to make a first selection of transformations
• Transformation collection: the set of all possible operations on the intermediate solutions to improve the designs
• Central control unit: the central part of the framework which executes the main algorithm

A schematic representation of the basic program executed in the central control unit is depicted in Fig. 6.2. Listing 6.1 shows the corresponding pseudo-code of the optimization algorithm.

Fig. 6.1. Global configuration of the optimization framework.

Fig. 6.2. Diagram of the flow of the heterogeneous optimization algorithm used for top-down optimization of both topology and parameters.

 1  input y = f(x)
 2  D_embryonic ← map y = f(x)
 3  population D ← {D_embryonic}
 4  for generation ← 1, . . . , generation_max
 5      for each design D ∈ D
 6          P_D ← ∅; S_D ← ∅; T_D ← ∅
 7          for each calculator C ∈ C
 8              performance values P_D ← P_D ∪ C(D)
 9          for each evaluator E ∈ E
10              w_E ← weight of E
11              satisfaction level S_D^E ← E_S(D, P_D)
12              S_D ← S_D ∪ {(S_D^E, w_E)}
13              transformations T_D^E ← E_T(D, P_D, S_D^E)
14              T_D ← T_D ∪ {(T_D^E, w_E)}
15          satisfaction level S̄_D ← combine S_D
16          transformations T̄_D ← combine T_D
17      satisfaction level S̄ ← combine {S̄_D | D ∈ D}
18      find best design D_best with S_max = S̄_D_best
19      if stop criterion reached
20          then output D_best
21      simplify population D
22      T_selected ← ∅
23      for each design D ∈ D
24          add M transformations T_i with (T_i, p_i) ∈ T̄_D to T_selected
25      for each T ∈ T_selected
26          apply T and add new designs to D

Listing 6.1. Pseudo-code of the algorithm executed by the central control unit of the optimization framework in Fig. 6.1.

The input of the algorithm is the set of equations (y = f(x)) that describe the general functionality of the system. Via a direct mapping procedure (described in Section 6.3.3), this mathematical description is directly translated into a first, functionally correct design: the embryonic design. The initial population contains only this embryonic design (lines 1–3 in Listing 6.1), and will later grow to its maximal size, which is an input parameter of the algorithm. A new generation starts with initializing the sets of performance values P_D, satisfaction levels S_D and transformations T_D associated with the designs D (line 6). Then, the algorithm continues with the calculation of all performance metrics of all designs in the population by the performance calculators (lines 7–8). These performance values serve as inputs for the evaluators. They have two tasks. First, they evaluate the designs and assign to each of them a satisfaction level. This figure indicates how well a certain objective is achieved.


If the satisfaction is low, the evaluators should come up with proposals for transformations to improve the designs (lines 9–14). For each design, an overall satisfaction level is calculated by combining the satisfaction levels assigned to the design by the different evaluators (line 15). In turn, the satisfaction levels of all designs in the population are combined with each other, which leads to a global satisfaction level suited for a global evaluation (line 17). If this evaluation is positive, the design with the highest global satisfaction level is chosen as the best one. This best design is returned and the algorithm ends (lines 18–20). During the optimization process, the best design at a certain abstraction level can only be improved upon by a new design with a higher satisfaction level. Since the maximal value for this figure is one, convergence of the satisfaction levels at some abstraction level is always achieved. In case of a negative report, the program continues with a simplification or reduction of the population to avoid its unlimited growth (line 21). The transformations proposed by all evaluators for each design are combined with each other, taking into account the weights of the evaluators (line 16). Once all transformations of a design are put into a set, a limited number of them is randomly selected according to a specific probability density function, which associates with each transformation the probability that it will result in a large performance improvement. These selected transformations are then applied to the design, resulting in new designs which are added to the population (lines 22–26). The next subsections elaborate on the different aspects and components of the optimization framework.

6.3.2 Design population

The basic optimization algorithm depicted in Listing 6.1 makes no assumption about the actual representation or meaning of the designs.
As a result, no particular design model needs to be employed, which extends the application area of the generic approach to domains other than high-level analog synthesis (e.g., architectural synthesis of digital systems). On the other hand, a fundamental property of the designs in the population is the ability to generate new specimens by application of transformations. This characteristic is employed to define the generic structure used to identify the elements of the population.

Definition 6.1 (Design structure). A design structure D is an ordered list of transformations (T1, . . . , Tn) which corresponds to the design obtained by application of the composed transformation Tn ◦ · · · ◦ T1 on the embryonic design.

With these design structures, applying a transformation to a design comes down to extending the list with an extra operation. Consequently, the lengths of the lists increase with each new generation of the population. Basic transformations, especially those that alter the architecture, are usually straightforward translations of design knowledge, as shown in the next example.
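Definition 6.1 can be sketched directly in Python; the class and method names below are illustrative, and a transformation is modeled as any callable acting on a design.

```python
class DesignStructure:
    """A design as an ordered list of transformations (Definition 6.1)."""

    def __init__(self, transformations=()):
        self.transformations = tuple(transformations)   # (T1, ..., Tn)

    def extend(self, transformation):
        """Applying a transformation appends one entry to the list."""
        return DesignStructure(self.transformations + (transformation,))

    def realize(self, embryonic):
        """Execute the composed transformation Tn ∘ ... ∘ T1 on the embryonic design."""
        design = embryonic
        for transformation in self.transformations:
            design = transformation(design)
        return design
```

Note that `realize` re-executes the whole list from the embryonic design, which mirrors the "design structure as a small program" view developed below.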


Example 6.1. Figure 6.3 shows small examples of lists of transformations. The embryonic design is the result of the direct mapping of an A/D conversion operation. Then, four transformations are defined: the addition of a first-order continuous-time ∆Σ-loop (T1 ), the extension to a second-order ∆Σ-loop (T2 ), the change of the feedback factor α to α′ (T3 ) and finally the selection of an implementation style for the integrators (T4 ). Together, they represent five designs with different architectures and/or abstraction levels but with the same functionality. 

[Figure 6.3 shows schematics of the five designs: (a) Dembryonic = (), (b) D1 = (T1), (c) D2 = (T1, T2), (d) D3 = (T1, T2, T3), (e) D4 = (T1, T2, T3, T4).]
Fig. 6.3. Examples of different designs derived from each other and represented as ordered lists of transformations.


Each design structure can also be considered as a small program where each program line is given by a transformation of a design of the previous generation. Starting from the embryonic design, execution of the program results in the actual design represented by the design structure. Hence, for each design, the different steps in the generation process are available. Instead of just returning a final solution, this approach also indicates how it is obtained, which gives the designer more insight into the solution. If needed, extra intelligence can be added to the framework (e.g., as a transformation or evaluator) to improve the results of the synthesis process.

During each iteration of the main optimization loop in Listing 6.1, multiple transformations are selected for each design in the population, as explained later in Section 6.3.8. So, with each new generation, several new designs arise from a single design. As a result, all designs in a population can be gathered in a tree as shown in Fig. 6.4. Each node in the design tree is a design that is refined as the tree is descended. Simplification of the population cuts some branches of the tree to limit its size. The tree is divided into clusters, indicated by the dark areas in Fig. 6.4 and by designs with different shading in Fig. 6.1. A cluster is a subtree with nodes connected to each other via transformations that only change the parameters. Consequently, within a cluster, all designs have the same architecture but different parameter values. They form a subpopulation which corresponds to the set of individuals in many standard optimization approaches, like a GA. Both the design structure as a program and the representation as a tree are concepts that are also encountered in genetic programming [10]. Their use in this work differs from their application to bottom-up generation of analog circuits, as summarized in Section 3.3.4, in several aspects:
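The partition of the design tree into clusters can be sketched as follows, assuming (as an illustration, not the book's data model) that each design is a list of (kind, name) pairs where the kind tags a transformation as architectural or parameter-changing.

```python
from collections import defaultdict

def clusters(population):
    """Group designs that share the same architecture, i.e. the same
    subsequence of non-parameter transformations (illustrative sketch)."""
    groups = defaultdict(list)
    for design in population:
        # Parameter-only transformations do not change the architecture,
        # so the cluster key ignores them.
        architecture = tuple(name for kind, name in design if kind != "param")
        groups[architecture].append(design)
    return dict(groups)
```

Each resulting group corresponds to one gray area of Fig. 6.4: the same architecture explored with different parameter values.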

[Figure 6.4 shows a tree rooted at Dembryonic: transformations T1–T20 generate designs D1–D20 over four generations, with gray areas marking the clusters.]
Fig. 6.4. Example of a tree of designs generated during the optimization process. Each gray area corresponds to a cluster of the population, having the same architecture.


Top-down flow. The top of the tree is at the highest abstraction level and each new program line can add more details. In the programs of the bottom-up generation class, on the other hand, each line adds an extra entity at low abstraction levels.

Program length. The length of the program lists corresponds to the generation of the population, which increases throughout the optimization process. Hence, no fixed-length representation is used, although at a certain generation, all design structures in the population are of the same size.

Functional correctness. All designs in the design tree have the same functional behavior, which is defined by the input equations. To preserve this property, transformations cannot simply be put in an arbitrary order: some operations only make sense if they are preceded by others. Also, the effect of a transformation can differ depending on its place in the design structure. As a result, two programs are not allowed to randomly exchange some lines, as would be necessary in a standard recombination operation. Hence, special operations are needed.

To summarize, the definition of design structures as lists of transformations translates the synthesis problem of an analog or mixed-signal system into a search process for the optimal sequence, number and kind of transformations.

6.3.3 The embryonic design

The starting point for the optimization procedure, and hence the root of the tree of designs in Fig. 6.4, is the embryonic design. This first design proposal is derived from the mathematical equations y = f (x) which define the functionality of the analog or mixed-signal system to be synthesized. Such a derivation corresponds to a translation operation between the functional and behavioral description levels in the abstraction–description plane of Fig. 2.3.

Definition 6.2 (Embryonic design). The embryonic design is a behavioral model with idealized blocks obtained by a direct translation of the input equations y = f (x) describing the functional specifications of the system to be designed.

Note that only the functionality is represented by the embryonic design. All design objectives and optimization targets are represented within the framework of Fig. 6.1 via the evaluators. In order to perform the direct translation of the equations, a grammar should be defined which contains keywords for the main operations encountered in the systems of interest. To this end, a subset of an existing general-purpose language like MATLAB, VHDL-AMS, Verilog-AMS or SystemC-AMS can be specified. Alternatively, a custom language can be created. Since a far more limited set of operations is then defined, such an approach simplifies


the processing of the equations and it easily ensures that only fundamental operations are used when defining the functionality. A general-purpose language, on the other hand, would have to be restricted so that only constructs recognized by the synthesis process are available. Therefore, in this work, a new custom grammar has been defined, built from the six following element classes:

Signal properties. For the input signal, the type should be defined: either a continuous-time, discrete-time or digital signal. Further, extra characteristics are added like number of bits, sampling frequency and intervals for signal amplitude or frequencies. The characteristics of the output signals result from the operations which generate them from the input signals.

Basic mathematics. This class consists of elementary mathematical operations like sum, product and powers of a signal. When multiple operands are required, the corresponding signals should have a common signal type.

Linear transfer. In analog design, it is often required that a system behaves linearly. Therefore, a set of operations has been defined to express the functionality of basic time-invariant linear subsystems, e.g., integrating and differentiating operations, and transfer functions.

Specialized operations. In addition to linear time-invariant behavior, typical analog operations are represented by basic grammatical elements. Examples are up- and down-conversion, modulation and demodulation of signals or translinear operations.

Conversion. Mixed-signal systems contain different signal types. Mapping of signals between two domains is performed by conversion operations which add the signal properties characteristic of the domain of the output signal of the operation.

Parameters. Most operations contain parameters to completely define the functionality. Keywords are provided in the grammar to indicate that these parameters have fixed values, or that their values should also be determined by the optimization process.
In the latter case, usually limit values are provided. With this grammar, the basic functionality of most analog and mixed-signal systems is easily defined as a set of simultaneous equations y = f (x).

Example 6.2. To describe the functionality of the analog front-end of a basic receiver in terms of the custom grammar, the following equations are written down¹:

u = input3(cont,fc,BW);                          (6.1a)
x = down4(prod2(u,par(A)),par(G),fdiff,BW);      (6.1b)
x2 = transfer4(x,par(K),vect(z1),vect(p1,p2));   (6.1c)
y = didig(sample2(x2,par(fs)));                  (6.1d)

¹ The number (if any) after a keyword (e.g., down4) indicates the number of parameters.
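The keyword convention of the footnote is easy to exploit mechanically. A minimal sketch, assuming only that a keyword is letters followed by an optional arity suffix as in the examples above:

```python
import re

def keyword_arity(keyword):
    """Split a grammar keyword like 'down4' into its base name and the
    number of parameters encoded by the trailing digits (0 if absent)."""
    match = re.fullmatch(r"([a-zA-Z_]+?)(\d*)", keyword)
    name, digits = match.group(1), match.group(2)
    return name, int(digits) if digits else 0
```

A parser for the custom grammar could use this to check that each operation in the input equations carries the expected number of arguments.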

[Figure 6.5 shows the block diagram of the embryonic design: input u (fc, BW) → gain A → down-conversion (G, fdiff, BW) → x → transfer H (K, z1, p1, p2) → x2 → sampling (fs) and quantization → y.]
Fig. 6.5. Schematic representation of the embryonic design for Example 6.2.

First, the input signal u is defined as a continuous-time signal with a certain carrier frequency fc and bandwidth BW. The second equation indicates that the input u is first amplified and then down-converted over a (fixed) frequency distance fdiff. Afterwards, the signal is filtered using the mentioned gain K, zeros and poles of the transfer function. Finally, via a sampling and quantization operation, the digital output signal y is obtained. 

The embryonic design is obtained by mapping each keyword of the functional grammar onto a node of a graph, with each node corresponding to an idealized block. For Example 6.2, the result of this mapping operation is shown in Fig. 6.5. Hence, this mapping should be defined along with the grammar. Then, the behavioral model of the first design proposal is created by proper specialization of a generic behavioral model suited for the system to design. The resulting specific behavioral model is used to calculate the properties and to evaluate the design. The original input equations now define conditions between signals in the behavioral model of the design which should remain invariant throughout the optimization process. The realization of the functionality in the embryonic design reflects only the function of the system and is as simple as possible, regardless of whether the specifications are met or not. For example, the conversion operation of an A/D converter is the functionality represented by the embryonic design, but the required accuracy is usually not achieved. Note that only the specifications regarding the input signal are provided with the description. Other specifications are incorporated into the evaluators of the framework. A more realistic realization than the simple embryonic design is obtained by subsequently adding more details to the design via appropriate transformations. However, at this lower abstraction level, the functionality is still preserved.
This is exactly the goal of a real top-down design flow [8]. Of course, after the low-level design (e.g., at circuit level), a bottom-up verification is required. The concept of starting with an embryonic design and refining it is in agreement with a basic heuristic rule in engineering: first, the simplest solution is tried; only if it fails are more complex solutions examined. According to the fundamental requirements for the synthesis strategy elaborated in Section 6.2.1, the mappings performed by the selected transformations should fit in a strict top-down design flow. So, they can merge building blocks in the design proposal, provide them with more details or change their parameters. On the other hand, no operations or blocks can be deleted. Since the set of functional equations is straightforwardly mapped onto the embryonic


design, the designer limits the design space that will be explored during the optimization by defining more restrictive input equations.

Example 6.3. The functional specification for an A/D converter can be limited to a single functional equation (the definition of the input signal is not included):

y = didig(sample2(u,1.0e6));                     (6.2)

which specifies that the input signal u is sampled at 1 MHz and converted to a digital signal y. This equation is mapped onto the embryonic design of Fig. 6.3a. However, another starting point may reveal more details about the implementation. For example, a more detailed input description could indicate that there should be a ∆Σ loop around a simple quantizer:

y = didig(sample2(x2,par(fs)));                  (6.3a)
x = int3(u-v,par2(a1,0.5),par(fs));              (6.3b)
v = dac(y);                                      (6.3c)

This time, the root of the design tree corresponds to Fig. 6.3b. By making the initial equations more detailed, the search space of possible architectures is limited by the designer. 

Once the embryonic design has been generated, it is put in the population to form the first generation (line 3 in Listing 6.1). Then, the actual optimization process is invoked.

6.3.4 Performance calculators

The main loop of the optimization algorithm starts by determining the performance of all designs in the population. To this end, the set of calculators C shown in Fig. 6.1 is activated.

Definition 6.3 (Performance calculator). A performance calculator C ∈ C maps a design proposal D ∈ D onto an n-dimensional vector of performance values P:

C : D → P(A)^n : D → C(D) = P.                   (6.4)
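In code, a performance calculator can simply be a callable from a design to a (partial) performance vector. The sketch below is illustrative only: the design is a plain dict and both metrics are toy analytical equations, not formulas from the book.

```python
def gain_calculator(design):
    """Toy analytical performance function: gain from the number of stages."""
    return {"gain_dB": 20 * design["stages"]}

def power_calculator(design):
    """Toy power estimator (illustrative coefficients)."""
    return {"power_mW": 1.5 * design["stages"] + 0.2}

def performance(design, calculators):
    """Assemble the full performance vector; the calculators are
    independent of each other, so this loop could run in parallel."""
    vector = {}
    for calc in calculators:
        vector.update(calc(design))
    return vector
```

Because each calculator only contributes its own entries, swapping a toy equation for a behavioral simulator or a circuit-level performance model changes nothing in the surrounding framework.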

As stated in Definition 2.8, a performance value of a design should be interpreted in a wide sense: it corresponds to any property of the design, which can be a specification, an objective, or a heuristic metric like complexity. Multifarious performance characteristics result in different representations, like numbers (e.g., Noise Figure (NF), Signal-to-Noise Ratio (SNR), Integral Non-Linearity (INL)), intervals (e.g., an estimation of power together with a confidence interval), symbolic results (e.g., a symbolic transfer function) or curves (e.g., a step response). The core algorithm, however, is immune to the actual representation of the performance characteristics.


The framework does not specify how the performance should be calculated, making it straightforward to plug in any type of performance function listed in Section 2.4, like a behavioral simulator (e.g., a simulator for generic behavioral models), analytical equations (e.g., obtained via symbolic analysis), performance models (e.g., derived from circuit-level building block models) or estimators (e.g., for power). Each performance calculator then contains one or more performance functions. Since the calculations are separated from the actual evaluations, a calculated performance value can easily be used in multiple evaluations without the need to repeat the calculations. Furthermore, different performance calculators operate independently of each other and can perform the calculations for different designs simultaneously. Hence, the computations of the performance calculators are the first task to distribute over a network of multiple computers to exploit parallelism and reduce the overall optimization time.

6.3.5 Evaluators

Once all performance characteristics have been calculated for each design in the population, the performance values are interpreted. This task is assigned to the set of evaluators E in the optimization framework of Fig. 6.1.

Definition 6.4 (Evaluator). An evaluator E ∈ E maps a design D together with its performance vector P onto a satisfaction level S_D^E ∈ S and a set with m transformations T_D,i^E ∈ T associated with probabilities p_i^E:

E : (D, P(A)^n) → (S, 2^(T×[0,1])) : (D, P) → (S_D^E, {(T_D,i^E, p_i^E) | i = 1, . . . , m}),   (6.5)

with p_1^E + · · · + p_m^E = 1. The number of transformations m may be different for each design D. First, the performance values returned by the performance calculators are compared to the system specifications or optimization objectives, optionally taking into account some tolerances. The result of this evaluation is the satisfaction level indicating how well the goal of the evaluator is achieved.
The part of the evaluator that calculates the satisfaction level is the satisfaction function. Examples of such satisfaction functions are provided in Section 6.3.6. If the evaluator is completely satisfied, no further steps need to be taken. In the opposite case, a set of transformations is provided by the evaluator. Application of such a transformation to the design proposal should result in an improvement of the satisfaction level of the evaluator. When multiple operations are available, the probability corresponding to each of them indicates the preference of the evaluator for a particular transformation. These numbers give the evaluator the chance to propose multiple solutions (e.g., from different transformation libraries), but with different expected effects on the performance.
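An evaluator in the sense of Definition 6.4 might be sketched as below. The SNR target, the satisfaction function shape and the two proposed transformations are all illustrative assumptions, not the book's library.

```python
import random

def snr_evaluator(design, perf, target_snr=60.0):
    """Map (design, performance) onto a satisfaction level plus
    transformation proposals whose probabilities sum to one."""
    satisfaction = min(1.0, max(0.0, perf["snr_dB"] / target_snr))
    if satisfaction >= 1.0:
        return satisfaction, []        # completely satisfied: nothing to propose
    proposals = [
        ("raise_loop_order", 0.7),         # expected to improve SNR most
        ("increase_sampling_rate", 0.3),   # alternative with smaller effect
    ]
    return satisfaction, proposals

def pick_transformation(proposals, rng=random):
    """Select one proposal according to the evaluator's probabilities."""
    names = [name for name, _ in proposals]
    weights = [p for _, p in proposals]
    return rng.choices(names, weights=weights, k=1)[0]
```

The probabilities express the evaluator's preference; the actual stochastic selection happens later in the algorithm (lines 23–24 of Listing 6.1).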


Several evaluators are present in the optimization framework. They differ from each other in various characteristics:

Evaluator's goal. Different evaluators contain different satisfaction functions to calculate the satisfaction level. Some will focus on specifications, others on optimization targets. Hence, one evaluator can monitor a specific objective whereas this objective is not taken into account by other evaluators. Combination of the results of the evaluators happens at a later stage of the optimization algorithm.

Transformations. Even when two evaluators have the same goal, they may be distinct from each other in the kind of transformations that are proposed. For example, one evaluator can try to improve an objective by proposing knowledge-based transformations while another one prefers stochastic operations. This property facilitates the extension of the framework with new evaluators when extra knowledge or new techniques are added in the form of transformations.

Weight factor. The relative importance of a certain evaluator E is expressed by its weight factor w_E ∈ [0, 1]. This number enables the user of the optimization framework to indicate their main interests. However, when an evaluator is completely satisfied, its weight factor is neglected. Furthermore, the importance is not constant throughout the simulation. Indeed, factors that influence the weight of a particular evaluator are, for instance, the generation number of the population or cluster and the satisfaction level of other specific evaluators. Some evaluators may even be deactivated completely during a part of the optimization process by setting their weight factor to zero. As a result, the optimization may first focus on a subset of the specifications and objectives and then widen the optimization targets. Such an approach is similar to common practice in interactive optimization, like the heuristic-based synthesis of an analog system by an experienced designer.
For example, first the specifications are met, then the power is minimized and finally the area covered by the system is made as small as possible. This results in a greedy optimization approach.

Concurrency. Similar to the performance calculators, the evaluators also operate independently of each other and of different design proposals. As a consequence, the tasks of the evaluators can also be executed in parallel by distributing them over a network of computers.

Evaluators can be combined if their goals and weight factors are the same. The probabilities of the transformations are then adjusted accordingly. On the other hand, it may be useful to have two evaluators with the same goal but with different types of transformations: the weight factor is then used to favor one type of transformations at different phases of the optimization process (e.g., at young or old population generations). The use of a set of multiple independent evaluators is a major difference between our optimization framework and other approaches. The main advantage of our method is the presence of a direct link between a target that is not


achieved (expressed by a low satisfaction level of an evaluator) and a way to improve it (via a particular transformation proposed by the same evaluator). This extra information provides additional insight into the optimization process and makes the final result more transparent for the designer. This method closely resembles common practice in chip design and makes the method open to input from experienced designers besides pure mathematical optimization techniques. The tasks of the evaluators are similar to the jobs of designers collaborating in a team to design a complex system. Each of them contributes to the overall optimization with his or her own specific skills and experience. The human interaction is formally represented by the satisfaction levels and transformations with their probabilities. The influence of each designer or evaluator corresponds to the weight factor in the optimization framework. Finally, the central control unit of Fig. 6.1 acts as the manager of the design team.

Example 6.4. A simple evaluator monitors only one performance characteristic. For example, it can focus on the stability of the system and become completely satisfied if the system is stable. The satisfaction of a simple evaluator for power improves with decreasing power values and never becomes completely satisfied. Concrete satisfaction functions for this behavior are shown in Fig. 6.6c and 6.6a, respectively. Further, a complex evaluator may focus on a combination of performance characteristics. The most straightforward way to implement this is to work with a cost function (3.3). More complex schemes can be used to express the dependency of one performance metric (e.g., noise figure) on another (e.g., gain or place in the overall system). Evaluators are a general concept that can also be used to obtain a design with sufficient details: if an optimal design is found, a dedicated evaluator may remain dissatisfied, proposing transformations to lower the abstraction level.
This ensures that the optimization continues and has the opportunity to produce the desired results. Hence, special control during the optimization process is easily represented by special evaluators. 

6.3.6 Satisfaction levels

The presence of heterogeneous performance metrics and objectives in the optimization framework makes it necessary to introduce unified numbers. To this end, satisfaction levels are defined. They are used throughout the optimization process to formally represent the quality of intermediate designs.

Definition 6.5 (Satisfaction level). A satisfaction level S ∈ S is a normalized real value on the interval [0, 1] used as a comparison system for the results of evaluations.


Satisfaction levels of zero and one are referred to as total dissatisfaction and total satisfaction, respectively. Special actions are associated with these extreme values depending on the use of the satisfaction levels. Several applications of them are encountered in the optimization framework:

Evaluators' results. As indicated in the previous section, the first application of satisfaction levels is the representation of the evaluation results of the evaluators. Totally satisfied evaluators are not treated further, whereas total dissatisfaction leads to the certainty that one of the proposed transformations will be selected and applied.

Population simplification. Line 21 in Listing 6.1 initiates a simplification of the population so that its maximum size, provided as input of the algorithm, is not exceeded. The satisfaction levels provide a straightforward method to perform the necessary size reduction. The designs with the lowest overall satisfaction levels are, for instance, selected for elimination. Alternatively, the unified numbers are used to derive a probability distribution. The designs to remove are then chosen stochastically. This approach allows unsuitable designs to artificially survive for a while. However, if the population size is chosen large enough, these designs will have had all chances to exploit their opportunities for improvement.

Stop criterion. Each iterative optimization algorithm has to cope with the problem of how to end the process. With the satisfaction levels, methods similar to those used in GAs can be employed. For example, the satisfaction level of the population or of the best design should reach a minimal value, or their absolute or relative changes during some generations should be small. Also, the difference between the satisfaction levels of the designs should be small in the major part of the population, with designs in the neighborhood of the optimal solution. A practical implementation usually adopts a combination of these criteria.
Evaluators' weight factors. The global satisfaction level of the population can be used to make the weight factors of the evaluators variable. Especially the special evaluators (e.g., to control the abstraction levels as indicated in Example 6.4) depend on the satisfaction of the entire population. They are invoked once a high satisfaction is obtained. Since new evaluations are then taken into account, a drop in global satisfaction level may occur. As a result, the global satisfaction level does not monotonically increase with the generation number.

A major advantage of the use of satisfaction levels is the ability to deal with fuzzy requirements [24]. Instead of writing down a single constraint like

P ≥ P_target,                                    (6.6)

corresponding with either total satisfaction or dissatisfaction, fuzzy constraints explore the whole range of available values in S. To this end, the satisfaction function of the corresponding evaluator should be defined.


Definition 6.6 (Satisfaction function). A satisfaction function ES maps a design D and its n-dimensional vector of performance values P onto a satisfaction level:

ES : D × P(A)^n → S : (D, P) → S_D^E = ES(D, P).   (6.7)
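The qualitative shapes discussed in the surrounding text (minimization, specific target, minimal target) might be sketched as follows; the exact curve shapes (exponential decay, piecewise-linear ramps) are illustrative choices, not the book's formulas.

```python
import math

def minimize_sat(p, p_max):
    """Fig. 6.6a style: lower is better; values beyond p_max get very low
    satisfaction but are never totally vetoed (illustrative shape)."""
    return min(1.0, math.exp(-p / p_max))

def target_sat(p, p_min, p_target, p_max):
    """Fig. 6.6b style: aim at p_target, tolerate [p_min, p_max],
    veto (satisfaction 0) outside that range."""
    if p < p_min or p > p_max:
        return 0.0
    span = (p_target - p_min) if p <= p_target else (p_max - p_target)
    return 1.0 - abs(p - p_target) / span

def at_least_sat(p, p_min, p_target):
    """Fig. 6.6c style: veto below p_min, total satisfaction (and hence
    ignorance of the evaluator) above p_target."""
    if p < p_min:
        return 0.0
    if p >= p_target:
        return 1.0
    return (p - p_min) / (p_target - p_min)
```

Any monotone curve with the same veto and saturation regions would serve equally well; the framework only consumes the resulting level in [0, 1].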

This satisfaction function allows a designer to add more details about a performance characteristic, e.g., by defining minimal, maximal, typical or impossible values. More design knowledge can be included, leading to more guidance for the optimization process and more intuitive specifications.

Example 6.5. Figure 6.6 depicts four typical examples of satisfaction functions for one-dimensional performance vectors. Figure 6.6a corresponds to the minimization of an objective, like power or area. Defining a point Pmax allows

0.9

Satisfaction level S

Satisfaction level S

1

unacceptable values

0.9 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0

0.8 0.7 0.6 0.5 0.4 0.3 0.2

acceptable values

0.1

0

0

Pmax

1

1 0.9

0.8

0.8

ignorance

0.5 0.4 0.3

veto

0.2 0.1 0

Satisfaction level S

Satisfaction level S

(b) Specific target value

0.9

0.6

Pmax

Performance P

(a) Minimization

0.7

Ptarget

Pmin

Performance P

0.7 0.6 0.5 0.4 0.3

avoidance

veto

0.2 0.1

Pmin

Ptarget

Performance P

(c) Minimal target value

0

Pmin

Ptarget

Performance P

(d) Minimal and specific target value

Fig. 6.6. Examples of satisfaction functions ES mapping a single performance characteristic onto a satisfaction level for different objectives.


to start acceptance at typical values by assigning very low satisfaction levels to higher values of the objective. Defining such typical values comes down to determining an order of magnitude. This task is usually straightforward for an experienced designer, or the evaluation of a figure of merit (3.1) can give a basic idea of such values. The function of Fig. 6.6b is used to achieve a specific value Ptarget for a performance characteristic, like an impedance. Due to the shape of the satisfaction function, the optimization will tend to move the performance value towards the target, but it also allows a margin. The minimal and maximal values Pmin and Pmax define the range of acceptable values and hence the tolerance of the specification. Constraint (6.6) (e.g., for a GBW) is translated into the function of Fig. 6.6c. Lower values result in total dissatisfaction and a corresponding 'veto' of the evaluator, leading to the selection of a transformation of that evaluator. However, for values larger than Ptarget, the actual value does not matter any more and the evaluator can be ignored. Again, a certain tolerance is introduced on the performance value. A similar constraint is shown in Fig. 6.6d, but in this case, values higher than Ptarget are by preference avoided and are therefore assigned lower satisfaction levels. This function may be used, for example, when higher values of gain factors are known to result in instability or in an overconsumption of power. 

Combination

For each design in the population, an overall satisfaction level is calculated by combining the appreciation values of all evaluators. First, the weight factors w_E of the evaluators E ∈ E are updated since the importance of already achieved objectives may be decreased in favor of the other optimization goals. Then, a weighted sum of the individual satisfaction levels is calculated:

S̄_D = ( Σ_{E∈E} w_E S_D^E ) / ( Σ_{E∈E} w_E ).    (6.8)

The global satisfaction level S̄_D′ is found as the average of all values S̄_D over all designs D in the subset D′ of the population (e.g., in a cluster):

S̄_D′ = (1 / |D′|) Σ_{D∈D′⊂D} S̄_D.                 (6.9)
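Equations (6.8) and (6.9) translate directly into code; the function names below are illustrative.

```python
def overall_satisfaction(levels, weights):
    """(6.8): weighted combination of the per-evaluator satisfaction
    levels of one design, normalized by the total weight."""
    total_weight = sum(weights)
    return sum(w * s for w, s in zip(weights, levels)) / total_weight

def global_satisfaction(per_design_levels):
    """(6.9): average overall satisfaction over a subset of the
    population, e.g. a cluster."""
    return sum(per_design_levels) / len(per_design_levels)
```

Because (6.8) divides by the sum of the weights, rescaling all weights by a common factor leaves the overall satisfaction unchanged; only their ratios matter.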

Note that S̄_D in (6.8) is similar to the fitness value in GAs and that (6.9) can be considered as an implicit (reversed) cost function. However, the concept of a satisfaction level is more general than the terms in an explicit cost function (3.3) since it may contain various complex functions of one or more performance characteristics and of any other parameter available during the optimization


(e.g., the population size or generation number). Furthermore, via the evaluators a link is established between the desirability of the performance value and specific solutions to improve that specific performance characteristic. Finally, the adaptation of the weight factors leads to the ability to shift the focus of the optimization during the execution of the optimization algorithm. As a result, satisfaction levels offer more opportunities to steer the optimization process and to deal with the complex demands of analog and mixed-signal synthesis problems.

6.3.7 Transformations

The collection of transformations available for the evaluators contains all design knowledge and techniques to increase the satisfaction levels of the intermediate designs and hence ameliorate their performance values. A transformation T ∈ T in the optimization framework described in this chapter is a general concept including both the transformation and refinement operations in the abstraction–description plane described by Defs. 2.6 and 2.3, respectively.

Definition 6.7 (Transformation). A transformation T ∈ T maps an m-dimensional vector of designs D onto an m-dimensional vector of new designs D′ by performing operations on their architecture, parameters or abstraction level:

T : D^m → D^m : D → D′ = T(D).                   (6.10)

The library of transformations determines how and which part of the design space can be explored during the optimization process. Hence, to optimize in a subspace (e.g., consisting of only a specific type of systems), only the corresponding transformations should be included. This library is part of the dedicated program implementing the optimization strategy, which may allow the user of the program to select which transformations should be taken into account. Knowing which operations are available to alter the characteristics of a design is just a part of the design knowledge. Therefore, in the transformation collection of Fig.
6.1, each transformation is stored as a transformation context. Definition 6.8 (Transformation context). A transformation context τ = (R, T, O) consists of a context requirement R, a transformation T and an expected outcome O where R maps a subset of designs D and the k invariant conditions i defined by the input equations onto a boolean value to indicate whether the transformation is applicable or not on D: R : Dm × I k → B : (D, i ) → true|false,

(6.11)

T is defined by (6.10), and O maps the subset of designs and performance values onto an estimation of the change in performance values ∆P i :

236

6 Top-Down Heterogeneous Optimization n×m

n×m

O : Dm × P(A) → P(A) 

  : D, P 1 . . . P m → ∆P 1 . . . ∆P m .

(6.12)
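Definitions 6.7 and 6.8 can be mirrored directly in code. The following is a minimal sketch in Python rather than the C++ of the actual implementation; all class, function and parameter names are illustrative, not taken from the book's tool, and the invariant conditions i of (6.11) are omitted for brevity:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

Design = dict                     # placeholder: architecture + parameter values
Performance = Dict[str, float]    # placeholder: named performance values

# Def. 6.7: a transformation maps a tuple of m designs onto m new designs.
Transformation = Callable[[Sequence[Design]], Sequence[Design]]

@dataclass
class TransformationContext:
    """Def. 6.8: tau = (R, T, O) = requirement, transformation, expected outcome."""
    requirement: Callable[[Sequence[Design]], bool]      # R of (6.11)
    transform: Transformation                            # T of (6.10)
    outcome: Callable[[Sequence[Design], Sequence[Performance]],
                      Sequence[Performance]]             # O of (6.12): predicted dP

# Hypothetical example: a parametric transformation doubling the sampling rate 'fs'.
def double_fs(designs):
    return [{**d, "fs": 2 * d["fs"]} for d in designs]

ctx = TransformationContext(
    requirement=lambda ds: all("fs" in d for d in ds),
    transform=double_fs,
    # crude placeholder outcome estimate, not a formula from the text
    outcome=lambda ds, ps: [{"SNR": 3.0} for _ in ds],
)

designs = [{"fs": 1e6}]
new_designs = ctx.transform(designs) if ctx.requirement(designs) else designs
```

An evaluator would call `ctx.requirement` first, then `ctx.outcome` to predict the performance change, and only shortlist `ctx.transform` if the prediction improves its satisfaction level.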

The evaluator uses the context requirement function R to determine whether the pattern required for the transformation is present in the designs. Then, in case of an affirmative result, the expected outcome function O is called to obtain a prediction of the new performance values. If these changes have a positive influence on the satisfaction level of the evaluator, the transformation is put on the shortlist, so that it can be chosen by the selection mechanism corresponding to lines 23–24 of the optimization algorithm shown in Listing 6.1. Several techniques can be adopted to estimate the performance of the transformed systems, like heuristic equations, symbolic analysis, sensitivities or assignment of probabilities. Exact predictions can be obtained via (behavioral) simulations at the cost of long evaluation times. The next example includes a prediction based upon a commonly applied design equation. Example 6.6. Transformation T2 of Example 6.1 shown in Fig. 6.3c introduces a second-order continuous-time ∆Σ-loop. Hence the context requirement function R returns true if the design has a similar pattern as the design of Fig. 6.3b with the correct types for the input and output signals. The expected outcome function O predicts an increase in SNR, estimated by [15]:  

2

 OSR 3 OSR 3π 2  B A 2 −1 5 −3 , (6.13) ∆SNR = 2 π π with A the input amplitude, B the number of bits of the quantizer, and OSR the over-sampling ratio of the discrete output signal.  The open character of the optimization framework allows to build a collection of transformations consisting of several subcollections. This aspect allows to easily extend the library with problem-specific subcollections or new design information. In general, the heterogeneous transformations are characterized by various fundamental properties: Objective- or model-driven. Most evaluators select transformations that change the performance values in order to achieve their objectives. As explained by Example 6.4, however, special evaluators are included in the framework to lower the abstraction level. Therefore, in order to change the satisfaction level of such evaluators, model-driven transformations should be selected. Inter- or intra-cluster operation. Transformations that operate within a cluster, only change the parameters of the designs in the cluster. On the other hand, when the architecture is modified, the newly created designs are put into another cluster. Hence, such transformations either create clusters or move designs between them.


Number of designs. Transformations can be applied to a single design or to multiple designs at once. Intra-cluster transformations usually convert one design into another, whereas within a cluster parameter values can also be exchanged. In the latter case, the transformation is added to the transformation list T^E_D (line 13 in Listing 6.1) of all designs involved. A cluster- or population-based transformation operates on an entire cluster or population at once. For example, changes in abstraction level usually happen to the entire population. Note that in case of a transformation on multiple designs (m > 1 in (6.10)), the list used to represent a design structure according to Definition 6.1 contains transformations which can be considered dependent on other designs. This property again emphasizes the importance of the order of the transformations in the list of a design structure to guarantee functional correctness.

Deterministic or stochastic. A deterministic transformation always has the same effect when applied to a particular design. On the other hand, different intermediate designs may be created by several calls to a stochastic transformation. Consequently, such transformations can be chosen multiple times by the selection mechanism.

Source of information. Designers use several sources to synthesize analog and mixed-signal systems, like simulations in the time and frequency domain, numerical and symbolic analyses, years of experience or basic mathematical techniques. All these approaches can serve as inspiration for the definition of transformations.

Despite the different characteristics of the transformations, the formal definition allows evaluators to abstract away the actual operation. Their only concern is to find operations that improve the satisfaction level. Within the optimization framework, the task of the designer is to define the different transformations. The following examples illustrate several kinds of such transformations belonging to the objective-driven group.

Example 6.7 (Rule-based transformations). Several techniques have been developed by analog designers to perform an A/D conversion. Example 6.1, for instance, introduced transformations to develop ∆Σ-modulators. In Fig. 6.7, basic transformations for evolving flash architectures are illustrated. The first one (Fig. 6.7a to 6.7b) decreases the degradation of the SNR due to quantization noise. The estimated performance improvement becomes [22]:

∆SNR = 9 A^2 2^(2B−1) OSR, (6.14)

with A the input amplitude and B the number of original bits (B = 1 in Fig. 6.7). The second transformation (Fig. 6.7b to 6.7c) uses an interpolation method to reduce the power consumption. This rule is an application of the design knowledge that the slope of the step of non-ideal quantizers is always limited. 
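The expected outcome function O of such a rule reduces to a one-line estimator built on (6.14). A sketch (the function name is illustrative, and the returned SNR improvement is a linear factor, not dB):

```python
def delta_snr_extra_bits(A, B, OSR):
    """Estimated SNR improvement (linear factor) per (6.14) for the
    multiple-quantizer transformation: input amplitude A, number of
    original bits B, over-sampling ratio OSR."""
    return 9 * A**2 * 2**(2 * B - 1) * OSR

# For A = 1, B = 1 and OSR = 1 the estimate is 9 * 2 = 18,
# i.e. roughly 10*log10(18) = 12.6 dB.
improvement = delta_snr_extra_bits(1.0, 1, 1)
```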

[Figure 6.7 shows three flash designs: (a) an initial design with a single quantizer, (b) a structure with multiple quantizers at thresholds c, 0 and −c, and (c) an interpolation structure with a summation node.]
Fig. 6.7. Examples of designs obtained by application of rule-based transformations to improve SNR and power.

Example 6.8 (Mathematical equivalences). Exploiting a basic mathematical equivalence in topology exploration comes down to replacing an architecture by one with the same input–output characteristic for ideal signals. Performance improvement is achieved by altering the non-ideal signal paths. Figure 6.8 depicts five stages in the development of an architecture for a down-conversion operation, associated with four topological transformations. Note that the number of phases of the output signal varies in the different architectures. However, the information signal corresponds to only one phase, which is the useful output signal that is also described in the functional specifications. The number of phases is just a topological parameter.

Figure 6.8a is the initial embryonic design. First, the Image Rejection Ratio (IRR) is increased by replacing the multiplication with a real oscillator signal cos(2πf0t) with mixers in an I- and Q-path, resulting in the architecture of Fig. 6.8b. The filters H(s) have a pass-band around fi − f0 and a stop-band around fi + f0, with fi the frequency of the input information signal. The improvement in performance value is estimated by [13]:

∆IRR ≈ 4 / ((∆φ)^2 + ε^2), (6.15)

with ε the gain error between the mixers and ∆φ the phase difference between the oscillator signals. Note that typical values should be provided for these numbers to calculate the estimation. In Fig. 6.8c, the IF-filters are replaced by a polyphase filter H(s) which has a pass-band only for the phase where the information signal resides. Further down-conversion to base-band is then less influenced by the parasitic signals on the same frequencies in other phases, and hence the overall IRR is further improved. The number of phases of the input signal has been modified by the transformation resulting in the design of Fig. 6.8d. Again, the expected outcome is an increase in IRR, which can be estimated as the difference between (5.107) and (6.15):

[Figure 6.8 shows five down-conversion architectures: (a) initial design, (b) complex oscillator signal with I- and Q-paths, (c) complex IF filtering with a polyphase filter, (d) double complex mixing, (e) low-noise amplifier added in front.]
Fig. 6.8. Examples of designs obtained by application of transformations based on mathematical equivalences to improve the IRR and to decrease NF.

∆IRR ≈ (G^2 / (G + σ_G)^2) · (4 / ((∆φ)^2 + ε^2)), (6.16)

with G and σG the mean and standard deviation of the gain factors of the mixers, respectively. The parameters defining the transfer function of the polyphase filter are changed by subsequent transformations operating within the cluster.
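The expected outcome functions used in this example reduce to simple numerical estimators. A hedged sketch of (6.16) and of the Friis-based estimate (6.17) given below (function names are illustrative; noise figures and gains are linear noise factors and power gains, not dB):

```python
def delta_irr(G, sigma_G, dphi, eps):
    """Estimated IRR improvement per (6.16): mean mixer gain G, gain
    spread sigma_G, oscillator phase error dphi (rad), gain error eps."""
    return (G**2 / (G + sigma_G)**2) * (4 / (dphi**2 + eps**2))

def delta_nf(nf_lna, g_lna, nf_old):
    """Change in noise factor per (6.17) after adding an LNA (Friis)."""
    return nf_lna + (nf_old * (1 - g_lna) - 1) / g_lna
```

For nf_lna = 2, g_lna = 10 and nf_old = 10, delta_nf returns 2 + (10·(1 − 10) − 1)/10 = −7.1, i.e. the overall noise factor drops from 10 to 2.9, consistent with Friis's formula NF_new = NF_LNA + (NF_old − 1)/G_LNA.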


Finally, the input noise figure of the architecture is reduced by adding an LNA in front of the down-conversion structure, as shown in Fig. 6.8e. An estimation is easily found by application of Friis's noise formula [3]:

∆NF = NF_LNA + (NF_old (1 − G_LNA) − 1) / G_LNA, (6.17)

with NF_LNA and G_LNA the noise figure and gain factor of the LNA, respectively, and NF_old the noise figure of the system depicted in Fig. 6.8d. 

Example 6.9 (Stochastic parametric transformations). Mutation and recombination operations are adopted from standard Genetic Algorithms (GAs), but in our case they operate within a cluster instead of on the entire population. They are usually selected by evaluators that have stochastic parametric transformations at their disposal. Recombination is always executed on a set of multiple designs. Random mutation of parameters acts on a single design, but mutations can also be defined which perform a step in a particular direction depending on properties of a set of designs (e.g., following the direction with the largest gradient of the evaluator's satisfaction level). Consequently, the entire set should then be given as input parameter of the transformation. However, compared to traditional GAs, the number of designs in the clusters is variable. Each cluster starts as a single design resulting from an architectural transformation (e.g., only the embryonic design at the start of the optimization process). Hence, at the start of a new cluster, mutations are used for exploration of the available subspace. During this growth phase of the cluster, the cross-over rate CR should be small compared to the mutation rate MR. Once the cluster contains a large number N of designs, the rates are similar to those defined in GAs. So, CR varies with the generation number of the cluster and is not the same for all clusters in the population. In Fig. 6.9, CR increases linearly

[Figure 6.9 shows the cross-over rate CR rising linearly from CRmin at generation 1 to CRN at generation GN during cluster growth, then remaining nearly constant (bounded by CRmax) in the standard region.]
Fig. 6.9. Variation of cross-over rate for recombination transformations as a function of the generation number G of the cluster.


during the growth of the cluster, after which it is almost constant, similar to a standard GA. MR is then easily derived from CR for each generation:

MR = 1 − CR − κ, (6.18)

with κ ∈ [0, 1] the percentage of the designs that should remain unaltered, which is typically quite low since otherwise the transformation is pointless. Estimations of the performance change for stochastic transformations cannot be calculated beforehand. Instead, the expected outcome function returns random variations or combinations of the performance values for mutations and recombinations. 

To conclude, the transformations used in the optimization framework are a powerful concept that allows a large design space to be explored. Heuristic rules and mathematical equations can be combined with the statistical operations of commonly employed GAs. As a result, this approach combines both architectural and parametric modifications of the designs.

6.3.8 Selection of transformations

Once the evaluators have built for each design a list of transformations that improve their satisfaction levels, a final selection should be made taking into account the relative importance of the different evaluators. These transformations are then applied to the intermediate designs, resulting in the tree of designs shown in Fig. 6.4. The method used to make this choice in the optimization framework is an objective-driven selection mechanism: from all the transformations available in the collection, those with the highest probability of improving the overall satisfaction level are most likely to be selected. The practical implementation of this approach involves four steps which are schematically depicted in Fig. 6.10:

1. Evaluator-based probabilities. When an evaluator E selects a transformation T^E_{D,i} for a design D, it determines the probability p^E_{D,i} of a successful increase of its satisfaction level by applying that particular transformation on the design.
This value is based on the results of the expected outcome functions O given by (6.12):

p^E_{D,i} = (Es(D, P + ∆P|T^E_{D,i}) − S_D)^α / Σ_{T^E_{D,j} ∈ T^E_D} (Es(D, P + ∆P|T^E_{D,j}) − S_D)^α, (6.19)

where α (usually 1 or 2) determines the relative importance between high and low expected improvements.
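Equation (6.19) is essentially a normalized, α-sharpened ranking of the expected satisfaction gains. A minimal sketch (names illustrative):

```python
def evaluator_probabilities(gains, alpha=2):
    """p^E_{D,i} per (6.19): gains[i] is the expected increase in the
    evaluator's satisfaction level, Es(D, P + dP|T_i) - S_D, for each
    shortlisted transformation T_i."""
    powered = [g**alpha for g in gains]
    total = sum(powered)
    return [p / total for p in powered]

# Two candidate transformations with expected gains 0.1 and 0.3:
# with alpha = 2 the better one receives 0.09/0.10 = 90% of the mass.
probs = evaluator_probabilities([0.1, 0.3], alpha=2)
```

A larger α concentrates the probability mass on the transformations with the highest expected improvement; α = 1 yields a plain proportional split.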

[Figure 6.10 shows the four-step selection flow: transformations T1–T6 from the collection receive per-evaluator probabilities p^E_{D,i} from evaluators E1–E3; these are combined with weights w1, w2, w3 into p_{D,i}; the resulting distribution is adapted (using cluster information and the selections of other designs) into p′_{D,i}; finally M transformations T_selected,1, …, T_selected,M are drawn by random selection.]
Fig. 6.10. Schematic flow of selecting transformations to modify a design D.

2. Transformation-based probabilities. Each member of the transformation collection is associated with a set of |E| × |D| probabilities. This set is reduced to one probability per design in the population by evaluating

p_{D,i} = Σ_{E∈E} w_E n^E_D p^E_{D,i} / Σ_{E∈E} w_E n^E_D, (6.20)

with w_E the weight factors of the evaluators and n^E_D = |T^E_D|. Hence, the more important an evaluator is considered to be, the higher the probabilities of the transformations that will improve its satisfaction level. From (6.19), it follows that the larger the size of the set T^E_D, the smaller the probabilities of the individual transformations become. Therefore, n^E_D is included in (6.20) to ensure that evaluators which find many transformations are not discriminated against.

3. Probability distribution. The values calculated via (6.20) are further modified before the actual probability distribution is obtained. This adaptation is driven by properties of the cluster to which the design belongs, like its size or generation number. A first application is the ability to explore the values of the parameters of a design before trying new topological modifications. By introducing a probability adaptation step, architectural transformations can be discouraged for small or new clusters.
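Steps 2 and 4 amount to a weighted mixture of the per-evaluator distributions followed by weighted sampling. A compact sketch (hypothetical names; the probability-adaptation step and the single-use restriction on deterministic transformations are omitted):

```python
import random

def combine_probabilities(per_evaluator, weights, n_shortlisted):
    """p_{D,i} per (6.20): per_evaluator[E][i] = p^E_{D,i},
    weights[E] = w_E, n_shortlisted[E] = |T^E_D|."""
    num_t = len(next(iter(per_evaluator.values())))
    combined = []
    for i in range(num_t):
        num = sum(weights[E] * n_shortlisted[E] * per_evaluator[E][i]
                  for E in per_evaluator)
        den = sum(weights[E] * n_shortlisted[E] for E in per_evaluator)
        combined.append(num / den)
    return combined

def select_transformations(transformations, probs, M, rng=random):
    """Step 4: draw M transformations according to the distribution."""
    return rng.choices(transformations, weights=probs, k=M)

p = combine_probabilities({"E1": [0.2, 0.8]}, {"E1": 1.0}, {"E1": 2})
chosen = select_transformations(["T1", "T2"], p, M=3)
```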


4. Random selection. Finally, M transformations are chosen by random selection using the derived probability distribution function for the transformations in the collection. Stochastic transformations can be chosen multiple times, whereas deterministic variants are only available once. In case of a transformation operating on a set of designs rather than on a single one, this transformation is chosen for all design proposals once it has been selected for one of them. Hence, the selector needs to be aware of the selections made for all other designs so far. In practice, transformations on sets are, of course, first selected for those designs for which no other choice is provided.

The actual number of choices M is determined by various parameters. For example, at the start of a new cluster, a larger number of transformations is required, whereas at later stages the selections are limited to a few to avoid overly large modifications of the population. As a result, M has the inverse behavior of the CR curve depicted in Fig. 6.9, and is qualitatively comparable to the variation of MR. In the selection procedure, transformations that decrease the satisfaction level of all evaluators are only chosen in case of mutations of the parameters. This is similar to other optimization methods trying to find a global optimum. On the other hand, architectural transformations without any positive effect are avoided to limit the number of clusters in the population and hence the overall computational time.

6.3.9 Conclusion

The top-down heterogeneous optimization strategy that has been developed in this work is a generic algorithm in which several components are specified before execution. This approach offers the flexibility to work with different design representations, performance functions (e.g., simulation of generic behavioral models), and optimization objectives. Various parametric and architectural modifications of intermediate solutions are defined to explore the design space.
Defining more transformations results in a larger design space, at the cost of a longer computational time required by the algorithm. Another important factor for the execution time is the time-efficiency of the performance evaluation function for each design. Therefore, distributing these calculations over a network of computers is appropriate. By limiting the number of available transformations, the behavior of other deterministic or stochastic optimization approaches is obtained (e.g., a GA or a simulated annealing method). The framework, however, makes it possible to combine these traditional approaches with each other and with knowledge-based design rules. This way, both mathematical techniques and designers' experience can be exploited.


6.4 Application: analog-to-digital conversion

As an application of the optimization framework, a custom tool has been developed to explore a part of the design space of analog-to-digital converters [16]. The implementation consists of two parts: a library with the basic elements of the framework and the actual program for A/D converters. The following subsections elaborate on this application, showing how the developed framework is used for the design of mixed-signal systems.

6.4.1 Dedicated library for top-down heterogeneous optimization

A specialized C++ library, called Oedipus (OptimizEr in a DesIgn sPace of strUctureS), has been written (about 5,000 lines of C++ code) with all application-independent structures of the optimization framework. More specifically, three kinds of objects are defined:

Abstract structures. The formal definitions of all concepts introduced in Section 6.3 have been translated into custom classes using the ability of the C++ language to represent abstract data classes. The basic properties of design proposals, calculators with their performance characteristics, evaluators with satisfaction levels, and transformations are established. The actual optimization algorithm deals with these abstract structures.

Basic specializations. For a particular application, the abstract data classes need to be refined using application-specific information. Several frequently occurring specializations are predefined, like several types of performance metrics, various satisfaction functions, and basic transformations of parameters (e.g., mutations and recombinations). Also, calculators defined as external programs, like the simulator for generic behavioral models for ∆Σ modulators of Chap. 4, are easily plugged in.

Algorithm. The kernel of the optimization framework is Listing 6.1 (p. 221), executed by the main optimizer object in the library, which corresponds to the control unit of Fig. 6.1.
The generic character of the algorithm makes it straightforward to develop an implementation working only with the abstract structures. For example, designs are gathered into clusters and a population without knowledge of their actual meaning. Aspects like transformation selection and stop criteria are all included in the dedicated optimizer object. Furthermore, the ability has been added to distribute tasks like performance calculations over a network of computers acting as a Parallel Virtual Machine [7]. With the elements of the dedicated library, the development of optimization programs specialized for a certain type of system is simplified. The only requirement is to write the appropriate specializations, as shown in the next subsection for analog-to-digital converters. This division into a generic algorithm and a specialization phase is comparable to the use of a generic behavioral model. This similarity binds the synthesis and modeling approaches developed in this work together into a consistent generic design methodology.
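The split between abstract structures and basic specializations can be pictured as follows. This is a Python stand-in for the actual C++ class hierarchy; all names are illustrative, not the real Oedipus API, and the linear satisfaction shape is an assumption:

```python
from abc import ABC, abstractmethod

class Calculator(ABC):
    """Abstract structure: computes one performance characteristic of a design."""
    @abstractmethod
    def compute(self, design): ...

class Evaluator(ABC):
    """Abstract structure: maps performance values onto a satisfaction level in [0, 1]."""
    @abstractmethod
    def satisfaction(self, performance): ...

# Basic specialization: a minimum-specification evaluator (cf. Fig. 6.6b);
# the linear ramp below the spec is an assumed shape for illustration.
class MinSpecEvaluator(Evaluator):
    def __init__(self, key, minimum):
        self.key, self.minimum = key, minimum

    def satisfaction(self, performance):
        # 1 when the specification is met, degrading linearly below it
        return min(1.0, max(0.0, performance[self.key] / self.minimum))

snr_eval = MinSpecEvaluator("SNR", 60.0)
```

The generic algorithm only ever calls `satisfaction` and `compute`; which concrete subclass sits behind them is the application-specific part.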


6.4.2 Dedicated program for A/D conversion

Internal organization

The actual dedicated program for the optimization of architectures that perform an A/D conversion (Antigone — ANalog-To-dIGital cONverter Evolution) is built around the Oedipus library, as shown in Fig. 6.11. Hence, Antigone consists of a collection of dedicated classes which interact with the Oedipus library that executes the main algorithm. The specialization of the generic structures of the dedicated library involves defining a specific input language, a design representation used for internal interactions, and dedicated calculators, evaluators and transformations. These specialized structures inherit the properties of the classes defined in the Oedipus library. A major part of the development of the program consists of defining the design representation. Within Antigone, all intermediate designs created during the optimization process are represented by a custom-defined graph, implemented as ADProposal objects as mentioned in Fig. 6.11. Nodes correspond to subsystems or operations, and edges indicate the signal flow. Various dedicated methods have been defined, e.g., to apply topological transformations, change the values of parameters, or map the graph onto a behavioral model suited for further processing. To this end, the graph collects all necessary information about the system, like the impedances and the kinds of signals.
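The graph representation described above can be pictured as a small adjacency structure: nodes for subsystems, edges for the signal flow. The real ADProposal interface is not shown in the text, so the following Python stand-in uses hypothetical names:

```python
class DesignGraph:
    """Directed graph: nodes = subsystems/operations, edges = signal flow."""

    def __init__(self):
        self.nodes = {}   # name -> {"type": ..., "params": {...}}
        self.edges = []   # (source, target, signal_kind)

    def add_node(self, name, type_, **params):
        self.nodes[name] = {"type": type_, "params": params}

    def connect(self, src, dst, signal_kind="continuous"):
        self.edges.append((src, dst, signal_kind))

    def fan_out(self, name):
        return [e for e in self.edges if e[0] == name]

# Embryonic A/D converter: a sampler feeding a quantizer.
g = DesignGraph()
g.add_node("sampler", "sample", fs=1e6)
g.add_node("quantizer", "didig", bits=8)
g.connect("sampler", "quantizer", signal_kind="discrete")
```

Topological transformations then reduce to graph edits (inserting nodes, rerouting edges), while parametric transformations only touch the `params` dictionaries.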

[Figure 6.11 shows the input equations (e.g., y = didig2(x2,par(N)); x = sample2(u,par(fs))) mapped onto an embryonic ADProposal, which the Oedipus kernel processes using calculators (accuracy, transfer, complexity, power), evaluators (specification, objective and abstraction evaluators) and a transformation collection (Tsample, Tbits, Tfeedback, Tfilter, Tmutation, Tcross-over, Tnon-idealities), yielding an optimal ADProposal.]
Fig. 6.11. Schematic representation of the use of the dedicated library Oedipus as kernel for the Antigone program.


Interface and possibilities

The user interacts with the Antigone program via ASCII text files. As inputs, the set of equations y = f(x) which describe the functionality should be provided, along with parameters for the dedicated calculators, evaluators and transformations. System specifications (e.g., accuracy and speed) are part of the parameter set of the evaluators. Finally, parameters of the main algorithm are passed to the Oedipus library. The next paragraphs elaborate on these aspects.

Input equations

To define the kind of system to synthesize, the custom language introduced in Section 6.3.3 is used. Since in Antigone an A/D conversion should always be realized, the input and output signals are automatically limited to continuous-time and digital signals, respectively. Example 6.3 (p. 228) demonstrates how to specify subspaces. The most straightforward use of the dedicated synthesis program is to start with a functional description like that given in (6.2) on p. 228:

y = didig(sample2(u,par(fs))); (6.21)

This equation has been used as input to synthesize different kinds of A/D converters for different specifications. The results are shown in Fig. 6.12 and are discussed in the next subsection.

Calculators

Several types of calculators have been included in Antigone. The user can choose whether or not to activate them (indicated by the check boxes in Fig. 6.11). For example, a time-domain simulation is performed using a generic behavioral model similar to the one introduced in Chap. 4. As parameters, the user has to specify the characteristics of the input signal used as test signal, like its bandwidth and maximal value. The behavior in the frequency domain is obtained by symbolically calculating transfer functions. Various numbers indicating the complexity of the system are derived, like the number of blocks, signals, or feedback loops. Similar power estimators as proposed in [12] are available to give expected values for the power needed.
These estimations are only suited for determining relative values for comparison purposes. Hence, they just give an indication of the expected power instead of providing an accurate estimation. Using the ability of the Oedipus library to realize a parallel virtual machine, a collection of computers can be defined to distribute the calculations. For the examples of Fig. 6.12, a set of 10 computers has been specified, all of them with a Pentium 4 processor with a 2.8–3.4 GHz clock.
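The distribution of performance calculations over such a pool of machines can be imitated on a single host with a process pool. This is only an analogy to the Parallel Virtual Machine mechanism, not the Oedipus implementation; the cost model inside `evaluate_design` is a toy stand-in for a behavioral simulation:

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_design(design):
    """Stand-in for an expensive behavioral simulation of one design.
    The ideal-quantizer SNR estimate 6.02*bits + 1.76 dB is standard;
    the power estimate is a made-up relative figure of merit."""
    fs, bits = design["fs"], design["bits"]
    return {"snr_est": 6.02 * bits + 1.76,   # dB
            "power_est": fs * bits * 1e-9}   # relative units only

def evaluate_population(designs, workers=4):
    """Farm the per-design evaluations out to a pool of workers."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_design, designs))
```

Since the per-design evaluations are independent, the speed-up is close to linear in the number of workers until the slowest single evaluation dominates.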

[Figure 6.12 plots accuracy (2–16 bit) versus signal bandwidth (10^4–10^9 Hz): the generated flash converters and ∆Σ modulators (including the designs of Figs. 6.13a and 6.13b) together with published flash and ∆Σ ADC references [1, 5, 6, 14, 19, 21, 23, 26, 27, 29].]
Fig. 6.12. Results of eight optimizations with Antigone for different speed and accuracy specifications.

Evaluators

Similar to the calculators, the user of the Antigone program can choose which evaluators should be included in the optimization process. Further, the parameters of the evaluators and their satisfaction functions have to be specified. For a practical execution of the algorithm, the following data about the evaluators should be provided:

• As described in Section 6.3.5, the specifications for the system to design (e.g., SNR_min, INL_max, input–output characteristic or maximal complexity) are mapped onto appropriate evaluators by defining the parameters of the appropriate satisfaction function. Depending on the type of the constraint or requirement, the user selects one of the satisfaction functions of Example 6.5 to obtain the satisfaction levels. For the examples of Fig. 6.12, the signal bandwidth and the required accuracy (ENOB) have to be determined with the satisfaction function depicted in Fig. 6.6b on p. 233.
• The optimization targets, like the minimization of the power, are also converted into evaluators. The corresponding satisfaction function for such a target is shown in Fig. 6.6a on p. 233.
• Further, specifications like maximal values for the sampling frequency or the number of bits should be provided. Their satisfaction function returns


simply one or zero, indicating whether the value is acceptable or not. Topological restrictions are specified with similar satisfaction functions, which typically map a complexity number calculated by a calculator onto a satisfaction level.
• Finally, one evaluator monitors the abstraction level of the intermediate designs. In the examples, its parameters have been set so that some details about the implementation of the converters are included. Two examples of this final abstraction level are shown in Fig. 6.13.

Evaluators with a satisfaction level lower than one propose improvements by selecting transformations from the available transformation collection.

Transformations

Various transformations have been defined in Antigone to let the embryonic design evolve into a specific A/D converter architecture considered optimal by the evaluators present. Examples of architectural transformations are given in Figs. 6.7 (p. 238) and 6.8 (p. 239). Other examples are the addition of extra feedback or feedforward paths, the relocation of the sampling operation, or a variation of properties like the sampling frequency and the number of bits. For parametric transformations, the basic specializations of the Oedipus library are used to enable mutation and recombination operations. Finally, dedicated transformations take care of the addition of appropriate non-idealities. The user can indicate which transformations (and their parameters) should be included and hence which part of the design space should be explored. In order to demonstrate the capabilities of the framework, the prototype implementation of the optimization framework has been developed for the subspace of flash and ∆Σ modulators. Consequently, only architectural transformations that result in these types of converters are available for selection by the user of Antigone.
Extension to other types of converters, like pipeline or successive-approximation converters [22], can be achieved by implementing new topological modifications via an extra transformation collection. In each of the examples depicted in Fig. 6.12, a combination of statistical and heuristic transformations has been selected from the available transformation collection to find an architecture and values for its parameters. Some examples of these transformations are:

• The sampling frequency for oversampled converters is increased in discrete steps. This transformation is selected by an evaluator targeted at achieving the required accuracy.
• Mutation and recombination of parameters are used to determine values for, e.g., filter coefficients Ai or transconductances G (defined in Fig. 6.13) based on simulations of (specialized) generic behavioral models [17, 25].
• All architectural transformations for A/D converters discussed in this chapter are included: e.g., addition of feedback loops, filters or feedforward paths, extra coefficients, comparators and quantizers. As an example, Fig. 6.14 shows a part of the design tree (similar to Fig. 6.4 on p. 224)


[Figure 6.13 shows the two synthesized architectures with their parameter values.]
Fig. 6.13. Examples of architectures obtained as the result of the top-down heterogeneous optimization procedure: (a) flash converter (Vref = 2.5 V, Rtot ≈ 500 Ω, Zout ≈ 6 kΩ, G ≈ 2.5 mS); (b) ∆Σ converter (A1 ≈ 5 µS, A2 ≈ 100 µS, A3 ≈ 100 µS, k1 ≈ 5 µS, k2 ≈ 20 µS, k3 ≈ 40 µS, G ≈ 4.8 mS, Cint = 5 pF, Zout ≈ 10 kΩ).

6 Top-Down Heterogeneous Optimization

[Figure: tree of designs. Starting from an embryonic design Dembryonic, transformations such as Tsample, Tbits, Tfeedback, Tfilter, Tnon-idealities, Tmutation and Tcross-over create successive clusters of architectures, while the abstraction level decreases towards the bottom of the tree.]

Fig. 6.14. Part of the tree of designs for the example resulting in the converter of Fig. 6.13b.
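The tree of Fig. 6.14 can be thought of as a data structure in which every node is a design and every edge an applied transformation. The sketch below (Python; class and field names are illustrative assumptions, not identifiers from the Antigone implementation) shows how such a tree could be recorded so that the path to the final design can be traced back:

```python
class DesignNode:
    """One design in the tree; children are created by transformations."""
    def __init__(self, description, abstraction_level,
                 parent=None, transformation=None):
        self.description = description          # behavioral description
        self.abstraction_level = abstraction_level
        self.parent = parent                    # None for the embryonic design
        self.transformation = transformation    # e.g. "Tfeedback", "Tsample"
        self.children = []

    def apply(self, transformation, new_description, lowers_abstraction=False):
        """Top-down step: transformations only add detail, never simplify."""
        level = self.abstraction_level - (1 if lowers_abstraction else 0)
        child = DesignNode(new_description, level,
                           parent=self, transformation=transformation)
        self.children.append(child)
        return child

    def path_from_root(self):
        """Trace the path (the gray arrow of Fig. 6.14) back to the root."""
        node, path = self, []
        while node is not None:
            path.append(node)
            node = node.parent
        return list(reversed(path))

# Hypothetical fragment of a path leading towards a Delta-Sigma modulator.
root = DesignNode("embryonic A/D converter", abstraction_level=3)
d1 = root.apply("Tsample", "sampled loop")
d2 = d1.apply("Tfeedback", "first-order loop")
d3 = d2.apply("Tnon-idealities", "first-order loop with Zout and Cint",
              lowers_abstraction=True)
```

Because every node keeps a reference to its parent and to the transformation that created it, both the full tree and the path to the final result remain available for inspection, matching the log-file information mentioned below.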

This tree was obtained from the synthesis process that results in the architecture of Fig. 6.13b.
• For some other parameters, like Cint, typical values are selected by the transformations, or simplified equations are used (e.g., for Rref [28]). When a new parameter is introduced by an architectural transformation, a heuristic typical value is assigned to it. These values are linked in the Antigone program with the corresponding transformations.

Algorithm

Several parameters of the optimization algorithm have to be set, such as the maximum number of clusters, designs and generations. The user has to select one


of the stop criteria described in section 6.3.6. Various aspects of the selection mechanism of Fig. 6.10 on p. 242 can also be specified, such as the number of selections M for young and old clusters. The first value corresponds to the exploration of the parameters of the new topology: the values of the inherited parameters are kept fixed during a limited number of transformations, and only when the generation number of the cluster is large enough do all parameters become available for modification by transformations. This procedure allows information to be passed on from an old cluster to a newly created one.

Experimental results

Each optimization process results in a behavioral description of the final solution. Information about intermediate designs can also be obtained via log files. As an experiment, eight synthesis runs with different speed and accuracy requirements were executed, with the power of the converters to be minimized. The inputs selected by the user in these runs are discussed above, and the algorithm parameters are listed in Table 6.1. Although the set of transformations defined and implemented in the prototype program limits the number of different architectures that can be obtained, it is nevertheless useful for demonstrating some properties of the proposed approach.

Speed–accuracy trade-off

Figure 6.12 shows the results of the eight optimization runs plotted in the speed–accuracy space. The large circles and crosses mark generated flash and ∆Σ converters, respectively [16]. To give an idea of the speed–accuracy trade-off in real implementations, small dots and plus signs corresponding to A/D converters published in the open literature have been added. The areas in the graph covered by both types of converters (flash and ∆Σ) are comparable between the optimized and the published systems. Hence, despite the limited

Table 6.1. Overview of the parameters used in the synthesis processes resulting in the converters of Fig. 6.12.
Parameter                        Value
Optimization target              minimal power
Specifications                   BW [Hz], ENOB [bit]
Computer configuration           10 workstations
Maximal population size |D|max   10 clusters with 500 designs
Maximal generation number G      1,000
Number of selections M           5 (normal) – 40 (exploration)
Input signal                     0.5 V @ 0.9BW Hz
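The two values of M in Table 6.1 reflect the selection mechanism described above: a newly created cluster is selected more often, so that the parameters introduced by its latest transformation are explored first, while the inherited parameters stay frozen until the cluster has matured. A rough sketch of that rule (Python; the age threshold and the function names are assumptions for illustration, not values from the book):

```python
def selections_for(cluster_age, young_threshold=10,
                   m_exploration=40, m_normal=5):
    """Number of selections M for a cluster, as in Table 6.1:
    young clusters are sampled heavily to explore their new parameters."""
    return m_exploration if cluster_age < young_threshold else m_normal

def mutable_parameters(cluster_age, new_params, old_params,
                       young_threshold=10):
    """While a cluster is young, only the parameters introduced by the
    transformation that created it may be modified; the inherited (old)
    parameters are kept fixed until the cluster has matured."""
    if cluster_age < young_threshold:
        return list(new_params)
    return list(new_params) + list(old_params)
```

For example, a cluster created two generations ago would receive 40 selections per generation, all of them restricted to its newly introduced parameters.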


complexity of both the transformation collection and the behavioral models, fairly realistic results can be obtained.

Architectures

The available transformations, together with the presence of an evaluator that limits the complexity of the generated designs, result in topologies comparable to some well-known architectures. Examples of such architectures are shown in Fig. 6.13: Fig. 6.13a is a flash converter with pre-amplifiers and Fig. 6.13b is a continuous-time ∆Σ modulator. They are also indicated next to the large circle and cross marks in Fig. 6.12, respectively.

Design tree

The Antigone program is capable of translating the same functional description into different kinds of A/D converters depending on the input specifications. The tool does not merely select a type of data converter: it also determines values for parameters such as the various coefficients. Moreover, it creates the architectures themselves: only the transformations are defined, not the actual architectures. Figure 6.14 shows a part of the design tree that is built during the optimization process to obtain the final result shown in Fig. 6.13b. For each cluster, only one member is depicted. The top-down procedure is clearly visible: the transformations only merge blocks, add new blocks or decrease the abstraction level; no simplifications are performed. The path leading to the final result is indicated by the gray arrow.

Satisfaction level

In Fig. 6.15, the evolution of the satisfaction level of the best design in the population is shown throughout the optimization process. As expected, at a particular abstraction level, it steadily increases towards total satisfaction of

[Figure: plot of the satisfaction level S (from 0 to 1) of the best design versus the generation G (1 to 1000); each decrease of the abstraction level causes a temporary drop in S.]

Fig. 6.15. Example of the evolution of the satisfaction level of the best design in the population.


all the evaluators (i.e., a satisfaction level of one). A decrease of the abstraction level, however, can result in a drop of the best satisfaction level. Since these modifications of the abstraction level can occur only a discrete number of times, convergence towards the best value is still achieved.

Complexity

The final design Doptimal of Fig. 6.11 is mapped onto a specific behavioral model, resulting in architectures similar to those depicted in Fig. 6.13. Lower-level models are obtained at the expense of longer optimization times. Typically, a couple of hours (